qid
int64
46k
74.7M
question
stringlengths
54
37.8k
date
stringlengths
10
10
metadata
sequencelengths
3
3
response_j
stringlengths
17
26k
response_k
stringlengths
26
26k
24,044,734
I'm looking for a way to use pandas and Python to combine several columns in an Excel sheet with known column names into a new, single one, keeping all the important information, as in the example below: input: ``` ID,tp_c,tp_b,tp_p 0,transportation - cars,transportation - boats,transportation - planes 1,checked,-,- 2,-,checked,- 3,checked,checked,- 4,-,checked,checked 5,checked,checked,checked ``` desired output: ``` ID,tp_all 0,transportation 1,cars 2,boats 3,cars+boats 4,boats+planes 5,cars+boats+planes ``` The row with ID of 0 contains a description of the contents of the column. Ideally the code would parse the description in the second row, look after the '-' and concatenate those values in the new "tp\_all" column.
2014/06/04
[ "https://Stackoverflow.com/questions/24044734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3700450/" ]
OK, a more dynamic method: ``` In [63]: # get a list of the columns col_list = list(df.columns) # remove 'ID' column col_list.remove('ID') # create a dict as a lookup col_dict = dict(zip(col_list, [df.iloc[0][col].split(' - ')[1] for col in col_list])) col_dict Out[63]: {'tp_b': 'boats', 'tp_c': 'cars', 'tp_p': 'planes'} In [64]: # define a func that tests the value and uses the dict to create our string def func(x): temp = '' for col in col_list: if x[col] == 'checked': if len(temp) == 0: temp = col_dict[col] else: temp = temp + '+' + col_dict[col] return temp df['combined'] = df[1:].apply(func, axis=1) df Out[64]: ID tp_c tp_b tp_p \ 0 0 transportation - cars transportation - boats transportation - planes 1 1 checked NaN NaN 2 2 NaN checked NaN 3 3 checked checked NaN 4 4 NaN checked checked 5 5 checked checked checked combined 0 NaN 1 cars 2 boats 3 cars+boats 4 boats+planes 5 cars+boats+planes [6 rows x 5 columns] In [65]: df = df.loc[1:, ['ID', 'combined']] df Out[65]: ID combined 1 1 cars 2 2 boats 3 3 cars+boats 4 4 boats+planes 5 5 cars+boats+planes [5 rows x 2 columns] ```
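The lookup-dict approach above can be condensed with a dict comprehension and a `'+'.join`; a self-contained sketch, with the sample data from the question reconstructed inline:

```python
import pandas as pd

df = pd.DataFrame({
    'ID': [0, 1, 2, 3, 4, 5],
    'tp_c': ['transportation - cars', 'checked', '-', 'checked', '-', 'checked'],
    'tp_b': ['transportation - boats', '-', 'checked', 'checked', 'checked', 'checked'],
    'tp_p': ['transportation - planes', '-', '-', '-', 'checked', 'checked'],
})

cols = [c for c in df.columns if c != 'ID']
# Row 0 holds "category - label"; keep only the part after ' - '.
labels = {c: df.loc[0, c].split(' - ')[1] for c in cols}

def combine(row):
    # Join the labels of every column marked 'checked' in this row.
    return '+'.join(labels[c] for c in cols if row[c] == 'checked')

# Drop the description row and keep only ID plus the combined column.
out = df.loc[1:, ['ID']].assign(tp_all=df.loc[1:].apply(combine, axis=1))
print(out)
```

The `assign` aligns on the index, so the description row (index 0) is excluded from both columns.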
Here is one way: ``` newCol = pandas.Series('',index=d.index) for col in d.iloc[:, 1:]: name = '+' + col.split('-')[1].strip() newCol[d[col]=='checked'] += name newCol = newCol.str.strip('+') ``` Then: ``` >>> newCol 0 cars 1 boats 2 cars+boats 3 boats+planes 4 cars+boats+planes dtype: object ``` You can create a new DataFrame with this column or do what you like with it. Edit: I see that you have edited your question so that the names of the modes of transportation are now in row 0 instead of in the column headers. It is easier if they're in the column headers (as my answer assumes), and your new column headers don't seem to contain any additional useful information, so you should probably start by just setting the column names to the info from row 0, and deleting row 0.
24,044,734
I'm looking for a way to use pandas and Python to combine several columns in an Excel sheet with known column names into a new, single one, keeping all the important information, as in the example below: input: ``` ID,tp_c,tp_b,tp_p 0,transportation - cars,transportation - boats,transportation - planes 1,checked,-,- 2,-,checked,- 3,checked,checked,- 4,-,checked,checked 5,checked,checked,checked ``` desired output: ``` ID,tp_all 0,transportation 1,cars 2,boats 3,cars+boats 4,boats+planes 5,cars+boats+planes ``` The row with ID of 0 contains a description of the contents of the column. Ideally the code would parse the description in the second row, look after the '-' and concatenate those values in the new "tp\_all" column.
2014/06/04
[ "https://Stackoverflow.com/questions/24044734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3700450/" ]
This is quite interesting as it's a reverse `get_dummies`... I think I would manually munge the column names so that you have a boolean DataFrame: ``` In [11]: df1 # df == 'checked' Out[11]: cars boats planes 0 1 True False False 2 False True False 3 True True False 4 False True True 5 True True True ``` Now you can use an apply with zip: ``` In [12]: df1.apply(lambda row: '+'.join([col for col, b in zip(df1.columns, row) if b]), axis=1) Out[12]: 0 1 cars 2 boats 3 cars+boats 4 boats+planes 5 cars+boats+planes dtype: object ``` Now you just have to tweak the headers, to get the desired csv. *Would be nice if there were a less manual / faster way to do a reverse `get_dummies`...*
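The apply-with-zip step above can also be written with `DataFrame.dot`, a commonly used idiom for a reverse `get_dummies`: each `True` multiplies in its column name, each `False` contributes an empty string, and the dot product concatenates them. A sketch, with the boolean frame from the answer reconstructed inline:

```python
import pandas as pd

# Boolean frame equivalent to df == 'checked', with renamed columns.
flags = pd.DataFrame({
    'cars':   [True, False, True, False, True],
    'boats':  [False, True, True, True, True],
    'planes': [False, False, False, True, True],
}, index=[1, 2, 3, 4, 5])

# bool * str behaves like 0/1 string repetition, so the dot product
# sums up the names of the True columns; strip the trailing separator.
tp_all = flags.dot(flags.columns + '+').str.rstrip('+')
print(tp_all)
```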
Here is one way: ``` newCol = pandas.Series('',index=d.index) for col in d.iloc[:, 1:]: name = '+' + col.split('-')[1].strip() newCol[d[col]=='checked'] += name newCol = newCol.str.strip('+') ``` Then: ``` >>> newCol 0 cars 1 boats 2 cars+boats 3 boats+planes 4 cars+boats+planes dtype: object ``` You can create a new DataFrame with this column or do what you like with it. Edit: I see that you have edited your question so that the names of the modes of transportation are now in row 0 instead of in the column headers. It is easier if they're in the column headers (as my answer assumes), and your new column headers don't seem to contain any additional useful information, so you should probably start by just setting the column names to the info from row 0, and deleting row 0.
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
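The error in the transcript ("no output will be generated because there is no Main module") can be worked around with GHC's `-main-is` flag, which names the module whose `main` should become the program entry point. A sketch, assuming the `Golf2.hs` shown above:

```shell
# Tell GHC that Golf2.main is the entry point, then run the binary.
ghc -main-is Golf2 Golf2.hs -o Golf2 && ./Golf2   # prints "Hello"
```

Alternatively, `ghci Golf2.hs` loads the module for interactive testing, and `:reload` (`:r`) re-reads it after edits, which takes some of the sting out of the reloading complaint.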
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
First, you are using **imgpt1** in every condition, which should not be the case; that is why every click shows the same message. Use the clicked view instead: ``` v.getTag().equals("xxx") ``` After resolving that, follow the usual Android (Java) practice for comparing strings: first check for null or empty strings using ``` String string1 = "abc", string2 = "Abc"; TextUtils.isEmpty(string1); // returns true if the string is empty or null ``` then check for equality with ``` string1.equals(string2) // checks with case sensitivity string1.equalsIgnoreCase(string2) // checks without case sensitivity; here this returns true ```
You should check the tag of the `view` that was clicked, not a fixed widget, or the conditions will not depend on the click at all: look at your first condition. `imgpt1`'s tag is "`frontbumpers`", so that condition is always true, hence it shows the same message every time. ``` @Override public void onClick(View v) { String message=""; // v is the view that was clicked (imgpt1, imgpt2, imgpt3, ... or anything with this OnClickListener assigned), so test its tag, not imgpt1's. if(v.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(v.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(v.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(v.getTag().equals("frontgrilles")) { message="This is grilles"; } } ``` > > \*\* Use `equals()` rather than `==` for `String` comparison, i.e. `v.getTag().equals("someValueYouWantToCheck")`. > > >
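The `equals()` recommendation matters because `==` on objects compares references, not contents, and `View.getTag()` returns an `Object`; a tag string that is not the same interned object as the literal will fail an `==` check even when the characters match. A minimal standalone illustration (the class name is made up for the demo):

```java
// TagCompare.java: why == is unreliable for comparing tag strings.
public class TagCompare {
    public static void main(String[] args) {
        // Like View.getTag(): a plain Object holding a String,
        // deliberately a distinct object from the literal.
        Object tag = new String("frontbumpers");
        System.out.println(tag == "frontbumpers");      // false: different references
        System.out.println("frontbumpers".equals(tag)); // true: same characters
    }
}
```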
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
Try this. If you have to check Strings, use **.equals()**; for primitives such as int, use **==**: ``` if(imgpt1.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(imgpt1.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(imgpt1.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(imgpt1.getTag().equals("frontgrilles")) { message="This is grilles"; } ```
Use `v.getTag().equals("string")` instead of `imgpt1.getTag()=="string"`: `equals()` compares string contents, and `v` is the view that was actually clicked. **Try this:** ``` @Override public void onClick(View v) { String message=""; if(v.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(v.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(v.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(v.getTag().equals("frontgrilles")) { message="This is grilles"; } AlertDialog.Builder builder=new AlertDialog.Builder(MainActivity.this); builder.setTitle("Car Parts"); builder.setMessage(message); builder.setNeutralButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }).create().show(); } ```
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
First, you are using **imgpt1** in every condition, which should not be the case; that is why every click shows the same message. Use the clicked view instead: ``` v.getTag().equals("xxx") ``` After resolving that, follow the usual Android (Java) practice for comparing strings: first check for null or empty strings using ``` String string1 = "abc", string2 = "Abc"; TextUtils.isEmpty(string1); // returns true if the string is empty or null ``` then check for equality with ``` string1.equals(string2) // checks with case sensitivity string1.equalsIgnoreCase(string2) // checks without case sensitivity; here this returns true ```
Use `v.getTag().equals("string")` instead of `imgpt1.getTag()=="string"`: `equals()` compares string contents, and `v` is the view that was actually clicked. **Try this:** ``` @Override public void onClick(View v) { String message=""; if(v.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(v.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(v.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(v.getTag().equals("frontgrilles")) { message="This is grilles"; } AlertDialog.Builder builder=new AlertDialog.Builder(MainActivity.this); builder.setTitle("Car Parts"); builder.setMessage(message); builder.setNeutralButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }).create().show(); } ```
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
In `onClick` use `imgpt1.getTag().equals("frontgrilles")` **instead of** ``` imgpt1.getTag()=="frontgrilles" ```
Use `v.getTag().equals("string")` instead of `imgpt1.getTag()=="string"`: `equals()` compares string contents, and `v` is the view that was actually clicked. **Try this:** ``` @Override public void onClick(View v) { String message=""; if(v.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(v.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(v.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(v.getTag().equals("frontgrilles")) { message="This is grilles"; } AlertDialog.Builder builder=new AlertDialog.Builder(MainActivity.this); builder.setTitle("Car Parts"); builder.setMessage(message); builder.setNeutralButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }).create().show(); } ```
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
In `onClick` use `imgpt1.getTag().equals("frontgrilles")` **instead of** ``` imgpt1.getTag()=="frontgrilles" ```
You should check the tag of the `view` that was clicked, not a fixed widget, or the conditions will not depend on the click at all: look at your first condition. `imgpt1`'s tag is "`frontbumpers`", so that condition is always true, hence it shows the same message every time. ``` @Override public void onClick(View v) { String message=""; // v is the view that was clicked (imgpt1, imgpt2, imgpt3, ... or anything with this OnClickListener assigned), so test its tag, not imgpt1's. if(v.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(v.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(v.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(v.getTag().equals("frontgrilles")) { message="This is grilles"; } } ``` > > \*\* Use `equals()` rather than `==` for `String` comparison, i.e. `v.getTag().equals("someValueYouWantToCheck")`. > > >
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
First, you are using **imgpt1** in every condition, which should not be the case; that is why every click shows the same message. Use the clicked view instead: ``` v.getTag().equals("xxx") ``` After resolving that, follow the usual Android (Java) practice for comparing strings: first check for null or empty strings using ``` String string1 = "abc", string2 = "Abc"; TextUtils.isEmpty(string1); // returns true if the string is empty or null ``` then check for equality with ``` string1.equals(string2) // checks with case sensitivity string1.equalsIgnoreCase(string2) // checks without case sensitivity; here this returns true ```
In `onClick` use `imgpt1.getTag().equals("frontgrilles")` **instead of** ``` imgpt1.getTag()=="frontgrilles" ```
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
First, you are using **imgpt1** in every condition, which should not be the case; that is why every click shows the same message. Use the clicked view instead: ``` v.getTag().equals("xxx") ``` After resolving that, follow the usual Android (Java) practice for comparing strings: first check for null or empty strings using ``` String string1 = "abc", string2 = "Abc"; TextUtils.isEmpty(string1); // returns true if the string is empty or null ``` then check for equality with ``` string1.equals(string2) // checks with case sensitivity string1.equalsIgnoreCase(string2) // checks without case sensitivity; here this returns true ```
Try this. If you have to check Strings, use **.equals()**; for primitives such as int, use **==**: ``` if(imgpt1.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(imgpt1.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(imgpt1.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(imgpt1.getTag().equals("frontgrilles")) { message="This is grilles"; } ```
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
You should check the tag of the `view` that was clicked, not a fixed widget, or the conditions will not depend on the click at all: look at your first condition. `imgpt1`'s tag is "`frontbumpers`", so that condition is always true, hence it shows the same message every time. ``` @Override public void onClick(View v) { String message=""; // v is the view that was clicked (imgpt1, imgpt2, imgpt3, ... or anything with this OnClickListener assigned), so test its tag, not imgpt1's. if(v.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(v.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(v.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(v.getTag().equals("frontgrilles")) { message="This is grilles"; } } ``` > > \*\* Use `equals()` rather than `==` for `String` comparison, i.e. `v.getTag().equals("someValueYouWantToCheck")`. > > >
Use `v.getTag().equals("string")` instead of `imgpt1.getTag()=="string"`: `equals()` compares string contents, and `v` is the view that was actually clicked. **Try this:** ``` @Override public void onClick(View v) { String message=""; if(v.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(v.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(v.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(v.getTag().equals("frontgrilles")) { message="This is grilles"; } AlertDialog.Builder builder=new AlertDialog.Builder(MainActivity.this); builder.setTitle("Car Parts"); builder.setMessage(message); builder.setNeutralButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }).create().show(); } ```
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
In the onClick method you used the first image for all of the if/else conditions; that is why, no matter which image you click, it shows the first image's message. ``` String message=""; if(imgpt1.getTag()=="frontbumpers") { // imgpt1: always the first image, not the clicked view v message="This is Bumper"; } else if(imgpt1.getTag()=="frontfenders") { // same problem here message="This is Fenders"; } else if(imgpt1.getTag()=="frontheadlight") { // and here message="This is headlight"; } else if(imgpt1.getTag()=="frontgrilles") { // and here message="This is grilles"; } ```
Use `imgpt1.getTag().equals("string")` instead of `imgpt1.getTag()=="string"` **Try this:** ``` @Override public void onClick(View v) { String message=""; if(imgpt1.getTag().equals("frontbumpers")) { message="This is Bumper"; } else if(imgpt1.getTag().equals("frontfenders")) { message="This is Fenders"; } else if(imgpt1.getTag().equals("frontheadlight")) { message="This is headlight"; } else if(imgpt1.getTag().equals("frontgrilles")) { message="This is grilles"; } AlertDialog.Builder builder=new AlertDialog.Builder(MainActivity.this); builder.setTitle("Car Parts"); builder.setMessage(message); builder.setNeutralButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }).create().show(); } ```
43,470,010
I am getting acquainted with Haskell, currently writing my third "homework" for a course I found on the web. The homework assignment needs to be presented\* in a file named `Golf.hs`, starting with `module Golf where`. All well and good, this seems to be idiomatic in the language. However, I am used to python modules ending in `if __name__ == "__main__":`, where one can put tests over the module, including during module development. `ghc` doesn't seem happy with such an approach: ``` $ ghc Golf.hs -o Golf && ./Golf <no location info>: error: output was redirected with -o, but no output will be generated ``` Even though using `cabal` seems to be the norm, I would like to also understand the raw command-line invocations that make programs work. `ghci` seems to be another approach to testing newly written code, yet reloading modules is a pain. What is the easiest way to write some invocations of my functions with predefined test data and observe the result on `stdout`? \* - for students who actually attend the course; I just follow the lecture notes and strive to complete the homework Golf2.hs: ``` {-# OPTIONS_GHC -Wall #-} module Golf2 where foo :: Int -> Int foo n = 42 main = putStr "Hello" ``` The output: ``` $ ghc Golf2.hs -o Golf2 [1 of 1] Compiling Golf ( Golf2.hs, Golf2.o ) Golf2.hs:6:5: warning: [-Wunused-matches] Defined but not used: ‘n’ Golf2.hs:8:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () <no location info>: error: output was redirected with -o, but no output will be generated because there is no Main module. ```
2017/04/18
[ "https://Stackoverflow.com/questions/43470010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145760/" ]
Try this. If you have to compare Strings, use **`.equals()`**; `==` is only for primitives such as `int`.

```
if(imgpt1.getTag().equals("frontbumpers")) {
    message="This is Bumper";
} else if(imgpt1.getTag().equals("frontfenders")) {
    message="This is Fenders";
} else if(imgpt1.getTag().equals("frontheadlight")) {
    message="This is headlight";
} else if(imgpt1.getTag().equals("frontgrilles")) {
    message="This is grilles";
}
```
In the onClick method, you used the first image (`imgpt1`) in all of the if/else conditions; that is why, no matter which image you click, it shows the first image's message. (String tags should also be compared with `.equals()` rather than `==`.)

```
String message="";
if(imgpt1.getTag()=="frontbumpers") {          // imgpt1 every time --
    message="This is Bumper";
} else if(imgpt1.getTag()=="frontfenders") {   // these should reference
    message="This is Fenders";
} else if(imgpt1.getTag()=="frontheadlight") { // the view that was
    message="This is headlight";
} else if(imgpt1.getTag()=="frontgrilles") {   // actually clicked
    message="This is grilles";
}
```
17,659,010
I'm trying to use the ctypes module to call, from within a python program, a (fortran) library of linear algebra routines that I have written. I have successfully imported the library and can call my *subroutines* and functions that return a single value. My problem is calling functions that return an array of doubles. I can't figure out how to specify the return type. As a result, I get segfaults whenever I call a function like that. Here's a minimum working example, a routine to take the cross product between two 3-vectors: ``` !**************************************************************************************** ! Given vectors a and b, c = a x b function cross_product(a,b) real(dp) a(3), b(3), cross_product(3) cross_product = (/a(2)*b(3) - a(3)*b(2), & a(3)*b(1) - a(1)*b(3), & a(1)*b(2) - a(2)*b(1)/) end function cross_product ``` Here's my python script: ``` #!/usr/bin/python from ctypes import byref, cdll, c_double testlib = cdll.LoadLibrary('/Users/hart/codes/celib/trunk/libutils.so') cross = testlib.vector_matrix_utilities_mp_cross_product_ a = (c_double * 3)() b = (c_double * 3)() a[0] = c_double(0.0) a[1] = c_double(1.0) a[2] = c_double(2.0) b[0] = c_double(1.0) b[1] = c_double(3.0) b[2] = c_double(2.0) print a,b cross.restype = c_double * 3 print cross.restype print cross(byref(a),byref(b)) ``` And here's the output: ``` goku:~/python/ctypes> ./test_example.py <__main__.c_double_Array_3 object at 0x10399b710> <__main__.c_double_Array_3 object at 0x10399b7a0> <class '__main__.c_double_Array_3'> Segmentation fault: 11 goku:~/python/ctypes> ``` I've tried different permutations for the line "cross.restype = ..." but I can't figure out what should actually go there. Thanks for reading this question. --Gus
2013/07/15
[ "https://Stackoverflow.com/questions/17659010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1733205/" ]
The compiler may return a pointer to the array, or the array descriptor... So, when mixing languages, you should always use `bind(C)`, except when the wrapper specifically supports Fortran. And (not surprisingly) `bind(C)` functions cannot return arrays. You could theoretically allocate the array and return a `type(c_ptr)` to it, but how would you deallocate it after use? So my suggestion is to use a subroutine.
With gfortran the function call has a hidden argument:

```
>>> from ctypes import *
>>> testlib = CDLL('./libutils.so')
>>> cross = testlib.cross_product_
>>> a = (c_double * 3)(*[0.0, 1.0, 2.0])
>>> b = (c_double * 3)(*[1.0, 3.0, 2.0])
>>> c = (c_double * 3)()
>>> pc = pointer(c)
>>> cross(byref(pc), a, b)
3
>>> c[:]
[-4.0, 2.0, -1.0]
```

But [Vladimir's suggestion](https://stackoverflow.com/a/17664115/205580) to use `bind(C)` and a subroutine is the better way to go. FYI, arrays become pointers in C function calls, so using `byref` is redundant. I needed `byref` and `pointer` in order to create a `double **` for the hidden argument.
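Pulling both answers together, here is a hedged Python-side sketch of the output-argument approach. The `cross_product_c` name and the `bind(C)` interface sketched in the comment are assumptions for illustration, not part of the original library; the pure-Python reference is only there to sanity-check the expected result without a compiled library.

```python
from ctypes import CDLL, c_double, sizeof

# Assumed Fortran side (hypothetical, written for this sketch):
#   subroutine cross_product_c(a, b, c) bind(C, name="cross_product_c")
#     use iso_c_binding, only: c_double
#     real(c_double), intent(in)  :: a(3), b(3)
#     real(c_double), intent(out) :: c(3)
#   end subroutine
Vec3 = c_double * 3  # matches real(c_double) :: x(3)

def cross_fortran(a, b, lib_path="libutils.so"):
    """Call the assumed bind(C) subroutine; ctypes arrays pass by reference."""
    lib = CDLL(lib_path)
    lib.cross_product_c.argtypes = [Vec3, Vec3, Vec3]
    lib.cross_product_c.restype = None  # subroutines return nothing
    c = Vec3()
    lib.cross_product_c(Vec3(*a), Vec3(*b), c)
    return list(c)

def cross_py(a, b):
    """Pure-Python reference used to check the expected output."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
```

With the vectors from the question, `cross_py([0, 1, 2], [1, 3, 2])` gives `[-4, 2, -1]`, matching the output shown above; `cross_fortran` would only run once the shared library is actually compiled with such a `bind(C)` wrapper.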
5,762,766
I've created a little helper application using Python and GTK. I've never used GTK before. As per the comment on <http://www.pygtk.org/> I used the PyGObject interface. Now I would like to add spell checking to my Gtk.TextBuffer. I found a library called GtkSpell and an associated python-gtkspell in the package manager, but when I try to import it it fails with "ImportError: cannot import name TextView from gtk", I presume this means it is using PyGtk instead of PyGObject. Is there someway to get this working with PyGObject? Or some other premade GTK spellcheck system I can use instead?
2011/04/23
[ "https://Stackoverflow.com/questions/5762766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10471/" ]
I wrote one yesterday because I had the same problem, so it's a bit alpha, but it works fine. You can get the source from <https://github.com/koehlma/pygtkspellcheck>. It requires [pyenchant](http://packages.python.org/pyenchant/) and I have only tested it with Python 3 on Arch Linux. If something doesn't work, feel free to file a bug report on GitHub. You have to install it with `python3 setup.py install`. It consists of two packages: `gtkspellcheck`, which does the spellchecking, and `pylocale`, which provides human-readable, internationalized names for language codes like `de_DE` or `en_US`. Because there is no documentation yet, an example:

```python
# -*- coding:utf-8 -*-
import locale

from gtkspellcheck import SpellChecker, languages, language_exists
from gi.repository import Gtk as gtk

for code, name in languages:
    print('code: %5s, language: %s' % (code, name))

window = gtk.Window.new(gtk.WindowType(0))
view = gtk.TextView.new()

if language_exists(locale.getdefaultlocale()[0]):
    spellchecker = SpellChecker(view, locale.getdefaultlocale()[0])
else:
    spellchecker = SpellChecker(view)

window.set_default_size(600, 400)
window.add(view)
window.show_all()
window.connect('delete-event', lambda widget, event: gtk.main_quit)
gtk.main()
```
I'm afraid that the PyGObject interface is new enough that GtkSpell hasn't been updated to use it yet. As far as I know there is no other premade GTK spell checker.
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
Maybe your code is outdated. For anyone who aims to use `fetch_mldata` in a handwritten-digit project, you should use `fetch_openml` instead ([link](https://stackoverflow.com/questions/47324921/cant-load-mnist-original-dataset-using-sklearn/52297457)).

In old versions of sklearn:

```
from sklearn.externals import joblib
from sklearn.datasets import fetch_mldata

mnist = fetch_mldata('MNIST original')
```

In **sklearn 0.23** (stable release):

```
import numpy as np
import joblib
from sklearn import datasets

dataset = datasets.fetch_openml("mnist_784")
features = np.array(dataset.data, 'int16')
labels = np.array(dataset.target, 'int')
```

For more info about the deprecation of `fetch_mldata`, see the scikit-learn [doc](https://scikit-learn.org/0.20/modules/generated/sklearn.datasets.fetch_mldata.html)
When you get an error from **`from sklearn.externals import joblib`**, that import path is deprecated and has been removed in newer versions. For a new version, do the following:

1. `conda install -c anaconda scikit-learn` (install using the Anaconda Prompt)
2. `import joblib` (in a Jupyter Notebook)
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
It looks like your existing pickle save file (`model_d2v_version_002`) encodes a reference to a module in a non-standard location – a `joblib` that's in `sklearn.externals.joblib` rather than at top level. The current `scikit-learn` documentation only talks about a top-level `joblib` – e.g. in [3.4.1 Persistence example](https://scikit-learn.org/stable/modules/model_persistence.html) – but I do see a [reference in someone else's old issue to a DeprecationWarning](https://github.com/EpistasisLab/tpot/issues/869) in `scikit-learn` version 0.21 about an older `sklearn.externals.joblib` variant going away:

> Python37\lib\site-packages\sklearn\externals\joblib\\_\_init\_\_.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.

'Deprecation' means marking something as inadvisable to rely upon, as it is likely to be discontinued in a future release (often, but not always, with a recommended newer way to do the same thing).

I suspect your `model_d2v_version_002` file was saved from an older version of `scikit-learn`, and you're now using `scikit-learn` (aka `sklearn`) version 0.23+, which has totally removed the `sklearn.externals.joblib` variation. Thus your file can't be directly or easily loaded into your current environment.

But, per the `DeprecationWarning`, you can probably temporarily use an older `scikit-learn` version to load the file the old way once, then re-save it in the now-preferred way. Given the warning info, this would probably require `scikit-learn` version 0.21.x or 0.22.x, but if you know exactly which version your `model_d2v_version_002` file was saved from, I'd try to use that.
The steps would roughly be:

* create a temporary working environment (or roll back your current working environment) with the older `sklearn`
* do imports something like:

```
import sklearn.externals.joblib as extjoblib
import joblib
```

* `extjoblib.load()` your old file as you'd planned, but then immediately re-`joblib.dump()` the file using the top-level `joblib`. (You likely want to use a distinct name, to keep the older file around, just in case.)
* move/update to your real, modern environment, and only `import joblib` (top level) to use `joblib.load()` - no longer having any references to `sklearn.externals.joblib` in either your code or your stored pickle files.
For this error, I had to directly use the following, and it worked like a charm:

```
import joblib
```

Simple.
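If rolling back `scikit-learn` isn't convenient, the stale module path can instead be remapped at load time with a custom `Unpickler`. A minimal sketch; to keep it runnable without sklearn installed, it renames a made-up `old_pkg.fractions` path back to the stdlib `fractions`, but the same mapping idea applies to `{"sklearn.externals.joblib": "joblib"}`:

```python
import io
import pickle
from fractions import Fraction

class RenameUnpickler(pickle.Unpickler):
    """Unpickler that rewrites old module paths to their new locations."""
    # For the question's case this would be {"sklearn.externals.joblib": "joblib"}.
    RENAMES = {"old_pkg.fractions": "fractions"}

    def find_class(self, module, name):
        return super().find_class(self.RENAMES.get(module, module), name)

# Fabricate a pickle that references the old (now nonexistent) module path.
# Protocol 0 stores module names as plain newline-terminated text, so this
# byte replacement is safe for the demo.
data = pickle.dumps(Fraction(1, 2), protocol=0)
data = data.replace(b"cfractions\n", b"cold_pkg.fractions\n")

# The remapping unpickler resolves the old path and loads the object.
obj = RenameUnpickler(io.BytesIO(data)).load()
```

A plain `pickle.loads(data)` on the rewritten bytes raises `ModuleNotFoundError: No module named 'old_pkg'`, the same failure mode as the traceback in the question, while the renaming unpickler loads it fine.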
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
You should directly use ``` import joblib ``` instead of ``` from sklearn.externals import joblib ```
I had the same problem. What I did not realize was that joblib *was already installed!* So what you have to do is replace

```
from sklearn.externals import joblib
```

with

```
import joblib
```

and that is it.
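The one-line replacement above can also be written defensively so the same script runs on both old and new environments. A small sketch using a generic fallback-import helper; the joblib line in the comment is the intended use, while the demo at the bottom uses only stdlib modules so it runs anywhere:

```python
import importlib

def import_with_fallback(*candidates):
    """Return the first module from `candidates` that imports successfully."""
    last_error = None
    for dotted_name in candidates:
        try:
            return importlib.import_module(dotted_name)
        except ImportError as exc:
            last_error = exc
    raise last_error

# Intended use (assumes one of the two import paths is available):
#   joblib = import_with_fallback("joblib", "sklearn.externals.joblib")

# Stdlib-only demo: the first name does not exist, so the helper falls back.
mod = import_with_fallback("definitely_not_a_real_module", "json")
```

Here `mod.__name__` is `'json'`; with joblib installed, `import_with_fallback("joblib", "sklearn.externals.joblib")` returns the standalone package on any modern environment and only falls back to the vendored copy on very old ones.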
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
It looks like your existing pickle save file (`model_d2v_version_002`) encodes a reference to a module in a non-standard location – a `joblib` that's in `sklearn.externals.joblib` rather than at top level. The current `scikit-learn` documentation only talks about a top-level `joblib` – e.g. in [3.4.1 Persistence example](https://scikit-learn.org/stable/modules/model_persistence.html) – but I do see a [reference in someone else's old issue to a DeprecationWarning](https://github.com/EpistasisLab/tpot/issues/869) in `scikit-learn` version 0.21 about an older `sklearn.externals.joblib` variant going away:

> Python37\lib\site-packages\sklearn\externals\joblib\\_\_init\_\_.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.

'Deprecation' means marking something as inadvisable to rely upon, as it is likely to be discontinued in a future release (often, but not always, with a recommended newer way to do the same thing).

I suspect your `model_d2v_version_002` file was saved from an older version of `scikit-learn`, and you're now using `scikit-learn` (aka `sklearn`) version 0.23+, which has totally removed the `sklearn.externals.joblib` variation. Thus your file can't be directly or easily loaded into your current environment.

But, per the `DeprecationWarning`, you can probably temporarily use an older `scikit-learn` version to load the file the old way once, then re-save it in the now-preferred way. Given the warning info, this would probably require `scikit-learn` version 0.21.x or 0.22.x, but if you know exactly which version your `model_d2v_version_002` file was saved from, I'd try to use that.
The steps would roughly be:

* create a temporary working environment (or roll back your current working environment) with the older `sklearn`
* do imports something like:

```
import sklearn.externals.joblib as extjoblib
import joblib
```

* `extjoblib.load()` your old file as you'd planned, but then immediately re-`joblib.dump()` the file using the top-level `joblib`. (You likely want to use a distinct name, to keep the older file around, just in case.)
* move/update to your real, modern environment, and only `import joblib` (top level) to use `joblib.load()` - no longer having any references to `sklearn.externals.joblib` in either your code or your stored pickle files.
Maybe your code is outdated. For anyone who aims to use `fetch_mldata` in a handwritten-digit project, you should use `fetch_openml` instead ([link](https://stackoverflow.com/questions/47324921/cant-load-mnist-original-dataset-using-sklearn/52297457)).

In old versions of sklearn:

```
from sklearn.externals import joblib
from sklearn.datasets import fetch_mldata

mnist = fetch_mldata('MNIST original')
```

In **sklearn 0.23** (stable release):

```
import numpy as np
import joblib
from sklearn import datasets

dataset = datasets.fetch_openml("mnist_784")
features = np.array(dataset.data, 'int16')
labels = np.array(dataset.target, 'int')
```

For more info about the deprecation of `fetch_mldata`, see the scikit-learn [doc](https://scikit-learn.org/0.20/modules/generated/sklearn.datasets.fetch_mldata.html)
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
When you get an error from **`from sklearn.externals import joblib`**, that import path is deprecated and has been removed in newer versions. For a new version, do the following:

1. `conda install -c anaconda scikit-learn` (install using the Anaconda Prompt)
2. `import joblib` (in a Jupyter Notebook)
After a long investigation, given my computer setup, I've found that it was because an SSL certificate was required to download the dataset.
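That SSL failure is common on Python installs without a configured certificate bundle. A hedged sketch of the two usual workarounds; the `certifi` route (commented, since it assumes the `certifi` package is installed) is the safer one, while the unverified context is a last resort for a one-off dataset download:

```python
import ssl

# Safer option (assumes the `certifi` package is installed):
#   import certifi
#   ctx = ssl.create_default_context(cafile=certifi.where())

# Last-resort option: an SSL context that skips certificate verification.
# Not recommended in general -- it disables all certificate checking.
unverified = ssl._create_unverified_context()

# To make stdlib urllib (which the dataset downloader uses) pick this up
# process-wide, one can reassign the default context factory:
#   ssl._create_default_https_context = ssl._create_unverified_context
```

After either fix, re-running the download should succeed without the certificate error.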
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
None of the other answers worked for me; with a few small changes, this modification was OK for me:

```
import sklearn.externals as extjoblib
import joblib
```
After a long investigation, given my computer setup, I've found that it was because an SSL certificate was required to download the dataset.
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
You should directly use ``` import joblib ``` instead of ``` from sklearn.externals import joblib ```
None of the other answers worked for me; with a few small changes, this modification was OK for me:

```
import sklearn.externals as extjoblib
import joblib
```
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
You can use `joblib` directly by installing it as its own dependency and importing it with `import joblib`; see the [documentation](https://joblib.readthedocs.io/en/latest/).
I had the same problem. What I did not realize was that joblib *was already installed!* So what you have to do is replace

```
from sklearn.externals import joblib
```

with

```
import joblib
```

and that is it.
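When editing the import isn't possible (for example, when the old dotted path is baked into a pickle produced elsewhere), a shim can alias the removed path in `sys.modules` before loading. A sketch; `json` stands in for `joblib` so it runs without sklearn, and the `legacy_pkg` names are made up for the demo:

```python
import importlib
import sys
import types

def alias_module(old_path, new_module):
    """Point an old dotted module path (creating stub parents) at `new_module`."""
    parts = old_path.split(".")
    for i in range(1, len(parts)):
        prefix = ".".join(parts[:i])
        # setdefault leaves genuinely installed parent packages untouched
        sys.modules.setdefault(prefix, types.ModuleType(prefix))
    sys.modules[old_path] = new_module

# Intended use before joblib.load() on an old pickle:
#   import joblib
#   alias_module("sklearn.externals.joblib", joblib)

# Stdlib-only demo:
import json
alias_module("legacy_pkg.externals.json", json)
resolved = importlib.import_module("legacy_pkg.externals.json")
```

Because the full dotted name is registered in `sys.modules`, both `importlib.import_module` and pickle's internal `find_class` resolve it to the relocated module; `resolved` above is the `json` module itself.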
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
It looks like your existing pickle save file (`model_d2v_version_002`) encodes a reference to a module in a non-standard location – a `joblib` that's in `sklearn.externals.joblib` rather than at top level. The current `scikit-learn` documentation only talks about a top-level `joblib` – eg in [3.4.1 Persistence example](https://scikit-learn.org/stable/modules/model_persistence.html) – but I do see a [reference in someone else's old issue to a DeprecationWarning](https://github.com/EpistasisLab/tpot/issues/869) in `scikit-learn` version 0.21 about an older `sklearn.externals.joblib` variant going away: > > Python37\lib\site-packages\sklearn\externals\joblib\_init\_.py:15: > DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and > will be removed in 0.23. Please import this functionality directly > from joblib, which can be installed with: pip install joblib. If this > warning is raised when loading pickled models, you may need to > re-serialize those models with scikit-learn 0.21+. > > > 'Deprecation' means marking something as inadvisable to rely upon, as it is likely to be discontinued in a future release (often, but not always, with a recommended newer way to do the same thing). I suspect your `model_d2v_version_002` file was saved from an older version of `scikit-learn`, and you're now using `scikit-learn` (aka `sklearn`) version 0.23+, which has totally removed the `sklearn.externals.joblib` variant. Thus your file can't be directly or easily loaded in your current environment. But, per the `DeprecationWarning`, you can probably temporarily use an older `scikit-learn` version to load the file the old way once, then re-save it the now-preferred way. Given the warning info, this would probably require `scikit-learn` version 0.21.x or 0.22.x, but if you know exactly which version your `model_d2v_version_002` file was saved from, I'd try to use that.
The steps would roughly be: * create a temporary working environment (or roll back your current working environment) with the older `sklearn` * do imports something like: ``` import sklearn.externals.joblib as extjoblib import joblib ``` * `extjoblib.load()` your old file as you'd planned, but then immediately re-`joblib.dump()` the file using the top-level `joblib`. (You likely want to use a distinct name, to keep the older file around, just in case.) * move/update to your real, modern environment, and only `import joblib` (top level) to use `joblib.load()` - no longer having any references to `sklearn.externals.joblib` in either your code, or your stored pickle files.
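A related stopgap sometimes used for this family of errors (a module moved, but old pickles still reference its previous path) is to alias the surviving module under the old name in `sys.modules` before unpickling. The sketch below demonstrates the mechanism with a throwaway stand-in module rather than joblib itself, so every name in it is purely illustrative:

```python
import pickle
import sys
import types

# Simulate a class that was pickled while it lived in a module called
# "legacy_utils" (a stand-in for sklearn.externals.joblib).
legacy = types.ModuleType("legacy_utils")

class Thing:
    pass

Thing.__module__ = "legacy_utils"
legacy.Thing = Thing
sys.modules["legacy_utils"] = legacy
payload = pickle.dumps(Thing())

# The module is later removed -- just like sklearn.externals.joblib was.
del sys.modules["legacy_utils"]
try:
    pickle.loads(payload)
except ModuleNotFoundError:
    print("load fails once the old module path is gone")

# Workaround: register a module that still provides the class under the
# old name *before* unpickling. For the joblib case the alias would be
# roughly: import joblib; sys.modules["sklearn.externals.joblib"] = joblib
replacement = types.ModuleType("legacy_utils")
replacement.Thing = Thing
sys.modules["legacy_utils"] = replacement
obj = pickle.loads(payload)
print(type(obj).__name__)
```

Re-saving the file once it loads, as described above, is still the cleaner long-term fix; the alias just gets you through the one load.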
You can import `joblib` directly by installing it as a dependency and using `import joblib`, [Documentation](https://joblib.readthedocs.io/en/latest/).
61,893,719
I am trying to load my saved model from `s3` using `joblib` ``` import pandas as pd import numpy as np import json import subprocess import sqlalchemy from sklearn.externals import joblib ENV = 'dev' model_d2v = load_d2v('model_d2v_version_002', ENV) def load_d2v(fname, env): model_name = fname if env == 'dev': try: model=joblib.load(model_name) except: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) else: s3_base_path='s3://sd-flikku/datalake/doc2vec_model' path = s3_base_path+'/'+model_name command = "aws s3 cp {} {}".format(path,model_name).split() print('loading...'+model_name) subprocess.call(command) model=joblib.load(model_name) return model ``` But I get this error: ``` from sklearn.externals import joblib ImportError: cannot import name 'joblib' from 'sklearn.externals' (C:\Users\prane\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\externals\__init__.py) ``` Then I tried installing `joblib` directly by doing ``` import joblib ``` but it gave me this error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 8, in load_d2v_from_s3 File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 585, in load obj = _unpickle(fobj, filename, mmap_mode) File "/home/ec2-user/.local/lib/python3.7/site-packages/joblib/numpy_pickle.py", line 504, in _unpickle obj = unpickler.load() File "/usr/lib64/python3.7/pickle.py", line 1088, in load dispatch[key[0]](self) File "/usr/lib64/python3.7/pickle.py", line 1376, in load_global klass = self.find_class(module, name) File "/usr/lib64/python3.7/pickle.py", line 1426, in find_class __import__(module, level=0) ModuleNotFoundError: No module named 'sklearn.externals.joblib' ``` Can you tell me how to solve this?
2020/05/19
[ "https://Stackoverflow.com/questions/61893719", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13114945/" ]
You can import `joblib` directly by installing it as a dependency and using `import joblib`, [Documentation](https://joblib.readthedocs.io/en/latest/).
None of the answers below worked for me; with a few small changes this modification was OK for me: ``` import sklearn.externals as extjoblib import joblib ```
65,590,149
I am trying to make a python script that will make payment automatically on [this](https://www.audiobooks.com/signup) site. I am able to get credit-card-number input but I can't access the expiry month or CVV. **Code I tried** I used this to get the credit card number field below ``` WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[@id='braintree-hosted-field-number']"))) WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='number' and @id='credit-card-number']"))).send_keys("0000000000000000") ``` I used the same thing to get the expiry month field, like this, ``` WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '//iframe[@id="braintree-hosted-field-expirationMonth"]'))) WebDriverWait(browser, 60).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='expirationMonth' and @id='expiration-month']"))).send_keys("12/2024") ``` But this code doesn't work. So what I want is to detect the Expiration field and also the CVV field; the method I used can't detect them.
2021/01/06
[ "https://Stackoverflow.com/questions/65590149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10830982/" ]
If you switch to one iframe, you have to switch back to the default content before you can interact with another iframe outside the one the code is currently focused on. Use: ``` WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[@id='braintree-hosted-field-number']"))) WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='number' and @id='credit-card-number']"))).send_keys("0000000000000000") browser.switch_to.default_content() WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '//iframe[@id="braintree-hosted-field-expirationMonth"]'))) WebDriverWait(browser, 60).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='expirationMonth' and @id='expiration-month']"))).send_keys("12/2024") ```
[![Try switch first, then catch the xpath](https://i.stack.imgur.com/Pp7gY.png)](https://i.stack.imgur.com/Pp7gY.png) Try to switch to the iframe first; then you can identify the field with XPath.
65,590,149
I am trying to make a python script that will make payment automatically on [this](https://www.audiobooks.com/signup) site. I am able to get credit-card-number input but I can't access the expiry month or CVV. **Code I tried** I used this to get the credit card number field below ``` WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[@id='braintree-hosted-field-number']"))) WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='number' and @id='credit-card-number']"))).send_keys("0000000000000000") ``` I used the same thing to get the expiry month field, like this, ``` WebDriverWait(browser, 20).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '//iframe[@id="braintree-hosted-field-expirationMonth"]'))) WebDriverWait(browser, 60).until(EC.element_to_be_clickable((By.XPATH, "//input[@class='expirationMonth' and @id='expiration-month']"))).send_keys("12/2024") ``` But this code doesn't work. So what I want is to detect the Expiration field and also the CVV field; the method I used can't detect them.
2021/01/06
[ "https://Stackoverflow.com/questions/65590149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10830982/" ]
The frame id was off, and the XPath was also off because there is no `expirationMonth` field (the site uses a combined `expirationDate`). Also, switch back to the default content between frames. ``` browser.get("https://www.audiobooks.com/signup") wait = WebDriverWait(browser, 10) wait.until(EC.frame_to_be_available_and_switch_to_it((By.ID,"braintree-hosted-field-number"))) wait.until(EC.element_to_be_clickable((By.XPATH, "//input[@class='number' and @id='credit-card-number']"))).send_keys("0000000000000000") browser.switch_to.default_content() wait.until(EC.frame_to_be_available_and_switch_to_it((By.ID, "braintree-hosted-field-expirationDate"))) wait.until(EC.element_to_be_clickable((By.XPATH, "//input[@class='expirationDate' and @id='expiration']"))).send_keys("12/2024") ```
[![Try switch first, then catch the xpath](https://i.stack.imgur.com/Pp7gY.png)](https://i.stack.imgur.com/Pp7gY.png) Try to switch to the iframe first; then you can identify the field with XPath.
54,058,184
I'm new to GCS and Cloud Functions and would like to understand how I can do a lightweight ETL using these two technologies combined with Python (3.7). I have a GCS bucket called 'Test\_1233' containing 3 files (all structurally identical). When a new file is added to this gcs bucket, I would like the following python code to run and produce an 'output.csv' file and save it in the same bucket. The code I'm trying to run is below: ``` import pandas as pd import glob import os import re import numpy as np path = os.getcwd() files = os.listdir(path) ## Originally this was intended for finding files in the local directory - I now need this adapted for finding files within gcs(!) ### Loading Files by Variable ### df = pd.DataFrame() data = pd.DataFrame() for files in glob.glob('gs://test_1233/Test *.xlsx'): ## attempts to find all relevant files within the gcs bucket data = pd.read_excel(files,'Sheet1',skiprows=1).fillna(method='ffill') date = re.compile(r'([\.\d]+ - [\.\d]+)').search(files).groups()[0] data['Date'] = date data['Start_Date'], data['End_Date'] = data['Date'].str.split(' - ', 1).str data['End_Date'] = data['End_Date'].str[:10] data['Start_Date'] = data['Start_Date'].str[:10] data['Start_Date'] =pd.to_datetime(data['Start_Date'],format ='%d.%m.%Y',errors='coerce') data['End_Date']= pd.to_datetime(data['End_Date'],format ='%d.%m.%Y',errors='coerce') df = df.append(data) df df['Product'] = np.where(df['Product'] =='BR: Tpaste Adv Wht 2x120g','ToothpasteWht2x120g',df['Product']) ##Stores cleaned data back into same gcs bucket as 'csv' file df.to_csv('Test_Output.csv') ``` As I'm totally new to this, I'm not sure how I create the correct path to read all the files within the cloud environment (I used to read files from my local directory!). Any help would be most appreciated.
2019/01/06
[ "https://Stackoverflow.com/questions/54058184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7638546/" ]
``` document.getElementById("loginField").getAttribute("name") ```
You can easily get it by attr method: ``` var name = $("#id").attr("name"); ```
23,653,147
I need to run a command as a different user in the %post section of an RPM. At the moment I am using a bit of a hack via python but it can't be the best way (it does feel a little dirty) ... ``` %post -p /usr/bin/python import os, pwd, subprocess os.setuid(pwd.getpwnam('apache')[2]) subprocess.call(['/usr/bin/something', 'an arg']) ``` Is there a proper way to do this?
2014/05/14
[ "https://Stackoverflow.com/questions/23653147", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2245703/" ]
If `/usr/bin/something` is something you are installing as part of the package, install it with something like this in the `%files` section: ``` %attr(4755, apache, apache) /usr/bin/something ``` When installed like this, `/usr/bin/something` will *always* run as user `apache` (the leading `4` is the setuid bit), regardless of what user actually runs it.
The accepted answer here is wrong IMO. It is not often at all that you want to set attributes to allow *anyone* to execute something as the owner. If you want to run something as a specific user, and that user doesn't have a shell set, you can use `su -s` to set the shell to use. For example: `su -s /bin/bash apache -c "/usr/bin/something an arg"`
7,988,772
I have already created a 64-bit program for Windows using cx\_freeze on a 64-bit machine. I am using Windows 7 64-bit Home Premium. py2exe is not working because, as I understand, it does not work with Python 3.2.2 yet. Is there an option I have to specify in cx\_freeze to compile in 32-bit instead of 64-bit? Thanks!
2011/11/02
[ "https://Stackoverflow.com/questions/7988772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1026738/" ]
To produce 32 bit executables you need to install 32-bit versions of Python and cx\_freeze.
All the "produce an executable from Python code" methods I know of basically create a file that bundles up the Python interpreter with the Python code you want to execute inside a single file. It is nothing at all like compiling C code to an executable; Python is just about impossible to compile to machine code in any significantly more useful way than just gluing the Python bytecode to the machine code for a Python interpreter. So that's almost certainly why you can't produce a 32 bit exe from a 64 bit installation of Python; there isn't a 32 bit interpreter to embed in the output file.
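To quickly confirm which flavor of interpreter you are about to freeze with, you can ask Python itself; a small sketch:

```python
import struct
import sys

# Pointer size in bits: 32 on a 32-bit interpreter, 64 on a 64-bit one.
bits = struct.calcsize("P") * 8
print(f"{bits}-bit Python {sys.version.split()[0]}")

# An equivalent check often seen in the wild:
is_64bit = sys.maxsize > 2**32
```

Run this under each installed interpreter; the one reporting 32-bit is the one cx\_freeze must be installed into to produce 32-bit executables.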
7,988,772
I have already created a 64-bit program for Windows using cx\_freeze on a 64-bit machine. I am using Windows 7 64-bit Home Premium. py2exe is not working because, as I understand, it does not work with Python 3.2.2 yet. Is there an option I have to specify in cx\_freeze to compile in 32-bit instead of 64-bit? Thanks!
2011/11/02
[ "https://Stackoverflow.com/questions/7988772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1026738/" ]
To produce 32 bit executables you need to install 32-bit versions of Python and cx\_freeze.
In addition to the answers already given: 1. To compile/freeze python code for different architectures (x86/x64), **install** both, **x86 and x64 versions of python**, to your system and corresponding variations of **all required modules and libraries** to your python installations so both installations have the same (required) set of packages installed. 2. The next step is to check that your global **OS environment** is **configured correctly**. The following Windows environment variables need to point to the appropriate installation of Python you want to freeze with; you should know which locations they need to point to: * **%PATH%** * **%PYTHONHOME%** * **%PYTHONPATH%** 3. Once you've set them up properly, **re-open any terminals to make sure you've got the new environment loaded** (re-login to your Windows session if necessary to properly refresh your environment) and you are ready to **run your cx\_freeze** and any other python-related build ops to get your final builds for that architecture. 4. Once done with those builds, re-run the process from step 2 to change your Windows environment to the next python installation and build. To speed up the environment-change process I either script those steps or use a VM. Hope this helps.
7,988,772
I have already created a 64-bit program for Windows using cx\_freeze on a 64-bit machine. I am using Windows 7 64-bit Home Premium. py2exe is not working because, as I understand, it does not work with Python 3.2.2 yet. Is there an option I have to specify in cx\_freeze to compile in 32-bit instead of 64-bit? Thanks!
2011/11/02
[ "https://Stackoverflow.com/questions/7988772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1026738/" ]
In addition to the answers already given: 1. To compile/freeze python code for different architectures (x86/x64), **install** both, **x86 and x64 versions of python**, to your system and corresponding variations of **all required modules and libraries** to your python installations so both installations have the same (required) set of packages installed. 2. The next step is to check that your global **OS environment** is **configured correctly**. The following Windows environment variables need to point to the appropriate installation of Python you want to freeze with; you should know which locations they need to point to: * **%PATH%** * **%PYTHONHOME%** * **%PYTHONPATH%** 3. Once you've set them up properly, **re-open any terminals to make sure you've got the new environment loaded** (re-login to your Windows session if necessary to properly refresh your environment) and you are ready to **run your cx\_freeze** and any other python-related build ops to get your final builds for that architecture. 4. Once done with those builds, re-run the process from step 2 to change your Windows environment to the next python installation and build. To speed up the environment-change process I either script those steps or use a VM. Hope this helps.
All the "produce an executable from Python code" methods I know of basically create a file that bundles up the Python interpreter with the Python code you want to execute inside a single file. It is nothing at all like compiling C code to an executable; Python is just about impossible to compile to machine code in any significantly more useful way than just gluing the Python bytecode to the machine code for a Python interpreter. So that's almost certainly why you can't produce a 32 bit exe from a 64 bit installation of Python; there isn't a 32 bit interpreter to embed in the output file.
41,448,447
I am trying to run a **list of tasks** (*here running airflow but it could be anything really*) that require to be executed in a existing Conda environment. I would like to do these tasks: ``` - name: activate conda environment # does not work, just for the sake of understanding command: source activate my_conda_env - name: initialize the database command: airflow initdb - name: start the web server command: 'airflow webserver -p {{ airflow_webserver_port }}' - name: start the scheduler command: airflow scheduler ``` Of course, this does not work as each task is independent and the `conda environment` activation in the first task is ignored by the following tasks. I guess the issue would be the same if using a `python virtualenv` instead of `conda`. How can I achieve each task being run in the Conda environment?
2017/01/03
[ "https://Stackoverflow.com/questions/41448447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7370442/" ]
Each of your commands will be executed in a different process. The `source` command, on the other hand, reads environment variables into the current process only (and its children), so it will apply only to the `activate conda environment` task. What you can try to do is: ``` - name: initialize the database shell: source /full/path/to/conda/activate my_conda_env && airflow initdb args: executable: /bin/bash - name: start the web server shell: 'source /full/path/to/conda/activate my_conda_env && airflow webserver -p {{ airflow_webserver_port }}' args: executable: /bin/bash - name: start the scheduler shell: source /full/path/to/conda/activate my_conda_env && airflow scheduler args: executable: /bin/bash ``` Beforehand, check the full path to `activate` on the target machine with `which activate` (you need to do it before any environment is sourced). If Conda was installed in a user's space, you should use the same user for the Ansible connection.
I was looking for something similar and found a neater solution than having multiple tasks: ``` - name: Run commands in conda environment shell: source activate my_conda_env && airflow {{ item }} with_items: - initdb - webserver -p {{ airflow_webserver_port }} - scheduler ```
51,273,827
I thought I read somewhere that python (3.x at least) is smart enough to handle this: ``` x = 1.01 if 1 < x < 0: print('out of range!') ``` However it is not working for me. I know I can use this instead: ``` if ((x > 1) | (x < 0)): print('out of range!') ``` ... but is it possible to fix the version above?
2018/07/10
[ "https://Stackoverflow.com/questions/51273827", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3126298/" ]
Chained comparison works well; it is your expression that is always False (no number is both greater than 1 and less than 0). This one, for example, does trigger: ``` x = .99 if 1 > x > 0: print('out of range!') ```
You can do it in one *compound* expression, as you've already noted, and others have commented. You cannot do it in an expression with an implied conjunction (and / or), as you're trying to do with `1 < x < 0`. Your expression requires an `or` conjunction, but Python's implied operation in this case is `and`. Therefore, to get what you want, you have to reverse your conditional branches and apply deMorgan's laws: ``` if not(0 <= x <= 1): print('out of range!') ``` Now you have the implied `and` operation, and you get the control flow you wanted.
51,273,827
I thought I read somewhere that python (3.x at least) is smart enough to handle this: ``` x = 1.01 if 1 < x < 0: print('out of range!') ``` However it is not working for me. I know I can use this instead: ``` if ((x > 1) | (x < 0)): print('out of range!') ``` ... but is it possible to fix the version above?
2018/07/10
[ "https://Stackoverflow.com/questions/51273827", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3126298/" ]
Python chained comparisons work like mathematical notation. In math, "0 < x < 1" means that x is greater than 0 **and** less than one, and "1 < x < 0" means that x is greater than 1 **and** less than 0. **And.** Not or. Both conditions need to hold. If you want an "or" , you can write one yourself. It's `or` in Python, not `|`; `|` is bitwise OR. ``` if x > 1 or x < 0: whatever() ``` Alternatively, you can write your expression in terms of "and": ``` if not (0 <= x <= 1): whatever() ```
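The implied `and` of chaining versus the explicit `or` can be checked directly, using the question's own value:

```python
x = 1.01

# Chaining means "1 < x AND x < 0" -- impossible, so always False.
chained = 1 < x < 0
# What the question actually wants is an OR of two conditions...
explicit_or = x > 1 or x < 0
# ...or, equivalently by De Morgan's laws, a negated chained check.
negated = not (0 <= x <= 1)

print(chained, explicit_or, negated)  # False True True
```

For an in-range value such as `x = 0.5`, both `explicit_or` and `negated` come out False, which is the behavior the question is after.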
It works well, it is your expression that is always False; try this one instead: ``` x = .99 if 1 > x > 0: print('out of range!') ```
51,273,827
I thought I read somewhere that python (3.x at least) is smart enough to handle this: ``` x = 1.01 if 1 < x < 0: print('out of range!') ``` However it is not working for me. I know I can use this instead: ``` if ((x > 1) | (x < 0)): print('out of range!') ``` ... but is it possible to fix the version above?
2018/07/10
[ "https://Stackoverflow.com/questions/51273827", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3126298/" ]
Python chained comparisons work like mathematical notation. In math, "0 < x < 1" means that x is greater than 0 **and** less than one, and "1 < x < 0" means that x is greater than 1 **and** less than 0. **And.** Not or. Both conditions need to hold. If you want an "or" , you can write one yourself. It's `or` in Python, not `|`; `|` is bitwise OR. ``` if x > 1 or x < 0: whatever() ``` Alternatively, you can write your expression in terms of "and": ``` if not (0 <= x <= 1): whatever() ```
You can do it in one *compound* expression, as you've already noted, and others have commented. You cannot do it in an expression with an implied conjunction (and / or), as you're trying to do with `1 < x < 0`. Your expression requires an `or` conjunction, but Python's implied operation in this case is `and`. Therefore, to get what you want, you have to reverse your conditional branches and apply deMorgan's laws: ``` if not(0 <= x <= 1): print('out of range!') ``` Now you have the implied `and` operation, and you get the control flow you wanted.
63,739,587
I've been following along to [Corey Schafer's awesome youtube tutorial](https://www.youtube.com/watch?v=MwZwr5Tvyxo&list=PL-osiE80TeTs4UjLw5MM6OjgkjFeUxCYH) on the basic flaskblog. In addition to Corey's code, I'd like to add logic where users have to verify their email-address before being able to login. I figured I'd do this with the URLSafeTimedSerializer from itsdangerous, as suggested by [prettyprinted here](https://www.youtube.com/watch?v=vF9n248M1yk). The whole token creation and verification process seems to work. Unfortunately, due to my very fresh Python knowledge, I can't figure out a clean way on my own to get that saved into the sqlite3 db. In my models I've created a Boolean Column email\_confirmed with default=False which I am intending to change to True after the verification process. My question is: how do I best identify the user (for whom to alter the email\_confirmed Column) when he clicks on his custom url? Would it be a good practice to also save the token inside a db Column and then filter by that token to identify the user? 
Here is some of the relevant code: **User Class in my modely.py** ``` class User(db.Model, UserMixin): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(20), unique=True, nullable=False) email = db.Column(db.String(120), unique=True, nullable=False) image_file = db.Column(db.String(20), nullable=False, default='default_profile.jpg') password = db.Column(db.String(60), nullable=False) date_registered = db.Column(db.DateTime, nullable=False, default=datetime.utcnow) email_confirmed = db.Column(db.Boolean(), nullable=False, default=False) email_confirm_date = db.Column(db.DateTime) projects = db.relationship('Project', backref='author', lazy=True) def get_mail_confirm_token(self, expires_sec=1800): s = URLSafeTimedSerializer(current_app.config['SECRET_KEY'], expires_sec) return s.dumps(self.email, salt='email-confirm') @staticmethod def verify_mail_confirm_token(token): s = URLSafeTimedSerializer(current_app.config['SECRET_KEY']) try: return s.loads(token, salt='email-confirm', max_age=60) except SignatureExpired: return "PROBLEM" ``` **Registration Logic in my routes (using a users blueprint):** ``` @users.route('/register', methods=['GET', 'POST']) def register(): if current_user.is_authenticated: return redirect(url_for('dash.dashboard')) form = RegistrationForm() if form.validate_on_submit(): hashed_password = bcrypt.generate_password_hash(form.password.data).decode('utf-8') user = User(username=form.username.data, email=form.email.data, password=hashed_password) db.session.add(user) db.session.commit() send_mail_confirmation(user) return redirect(url_for('users.welcome')) return render_template('register.html', form=form) @users.route('/welcome') def welcome(): return render_template('welcome.html') @users.route('/confirm_email/<token>') def confirm_email(token): user = User.verify_mail_confirm_token(token) current_user.email_confirmed = True current_user.email_confirm_date = datetime.utcnow return user ``` The last parts 
`current_user.email_confirmed = True` and `current_user.email_confirm_date = datetime.utcnow` are probably the lines in question. As stated above, the desired entries aren't made because the user is not logged in at this stage yet. I'm grateful for any help on this! Thanks a lot in advance!
2020/09/04
[ "https://Stackoverflow.com/questions/63739587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13828684/" ]
The key to your question is this: > > My question is: how do I best identify the user (for whom to alter the email\_confirmed Column) when he clicks on his custom url? > > > The answer can be seen [in the example on URL safe serialisation using itsdangerous](https://itsdangerous.palletsprojects.com/en/1.1.x/url_safe/). The token itself *contains* the e-mail address, because that's what you are using inside your `get_mail_confirm_token()` function. You can then use the serialiser to retrieve the e-mail address from that token. You can do that inside your `verify_mail_confirm_token()` function, but, because it's a static method, you still need a session. You can pass this in as a separate argument though without problem. You should also handle the `BadSignature` exception from `itsdangerous`. It would then become: ``` @staticmethod def verify_mail_confirm_token(session, token): s = URLSafeTimedSerializer(current_app.config['SECRET_KEY']) try: email = s.loads(token, salt='email-confirm', max_age=60) except (BadSignature, SignatureExpired): return "PROBLEM" user = session.query(User).filter(User.email == email).one_or_none() return user ``` > > Would it be a good practice to also save the token inside a db Column and then filter by that token to identify the user? > > > No. The token should be short-lived and should not be kept around. Finally, in your `get_mail_confirm_token` implementation you are not using the `URLSafeTimedSerializer` class correctly. You pass in a second argument called `expires_sec`, but if you [look at the docs](https://itsdangerous.palletsprojects.com/en/1.1.x/url_safe/#itsdangerous.url_safe.URLSafeTimedSerializer) you will see that the second argument is the salt, which might lead to unintended problems.
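For readers who want to see the email-in-token mechanism itself, here is a stdlib-only sketch of the same idea (an HMAC-signed payload carrying the email and a timestamp). It deliberately does not use itsdangerous, and every name in it is illustrative rather than part of any library's API:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"illustrative-secret-key"  # stands in for the app's SECRET_KEY

def make_token(email: str) -> str:
    # Payload carries the email plus a timestamp, like URLSafeTimedSerializer.
    body = base64.urlsafe_b64encode(
        json.dumps({"email": email, "ts": time.time()}).encode()
    )
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str, max_age: float = 3600.0):
    body, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered -- the BadSignature case
    payload = json.loads(base64.urlsafe_b64decode(body))
    if time.time() - payload["ts"] > max_age:
        return None  # too old -- the SignatureExpired case
    return payload["email"]

token = make_token("user@example.com")
print(verify_token(token))  # the email comes back out of the token
```

In the real code, `URLSafeTimedSerializer.dumps`/`loads` play the roles of `make_token`/`verify_token`, raising `BadSignature`/`SignatureExpired` instead of returning `None`; the point is that no token needs to be stored, because the email is recoverable from the token itself.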
Thanks to @exhuma. Here is how I eventually got it to work - also in addition I'm posting the previously missing part of email-sending. **User Class in my models.py** ``` class User(db.Model, UserMixin): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(20), unique=True, nullable=False) email = db.Column(db.String(120), unique=True, nullable=False) image_file = db.Column(db.String(20), nullable=False, default="default_profile.jpg") password = db.Column(db.String(60), nullable=False) date_registered = db.Column(db.DateTime, nullable=False, default=datetime.utcnow) email_confirmed = db.Column(db.Boolean(), nullable=False, default=False) email_confirm_date = db.Column(db.DateTime) projects = db.relationship("Project", backref="author", lazy=True) def get_mail_confirm_token(self): s = URLSafeTimedSerializer( current_app.config["SECRET_KEY"], salt="email-comfirm" ) return s.dumps(self.email, salt="email-confirm") @staticmethod def verify_mail_confirm_token(token): try: s = URLSafeTimedSerializer( current_app.config["SECRET_KEY"], salt="email-confirm" ) email = s.loads(token, salt="email-confirm", max_age=3600) return email except (SignatureExpired, BadSignature): return None ``` **Send Mail function in my utils.py** ``` def send_mail_confirmation(user): token = user.get_mail_confirm_token() msg = Message( "Please Confirm Your Email", sender="noreply@demo.com", recipients=[user.email], ) msg.html = render_template("mail_welcome_confirm.html", token=token) mail.send(msg) ``` **Registration Logic in my routes.py (using a users blueprint):** ``` @users.route("/register", methods=["GET", "POST"]) def register(): if current_user.is_authenticated: return redirect(url_for("dash.dashboard")) form = RegistrationForm() if form.validate_on_submit(): hashed_password = bcrypt.generate_password_hash(form.password.data).decode( "utf-8" ) user = User( username=form.username.data, email=form.email.data, password=hashed_password ) db.session.add(user) 
db.session.commit() send_mail_confirmation(user) return redirect(url_for("users.welcome")) return render_template("register.html", form=form) @users.route("/welcome") def welcome(): return render_template("welcome.html") @users.route("/confirm_email/<token>") def confirm_email(token): email = User.verify_mail_confirm_token(token) if email: user = db.session.query(User).filter(User.email == email).one_or_none() user.email_confirmed = True user.email_confirm_date = datetime.utcnow() db.session.add(user) db.session.commit() flash( f"Your email has been verified and you can now login to your account", "success", ) return redirect(url_for("users.login")) else: return render_template("errors/token_invalid.html") ``` (Note the `flash()` call has to come *before* the `return redirect(...)`, otherwise it is never executed.) **The only thing missing** from my point of view is simple conditional logic to check if email\_confirmed = True before logging in, as well as the same check inside the confirm\_email(token) function so the process isn't repeatable in case the user clicks on the confirmation link several times. Thanks again! Hope this is of some help to anyone else!
17,457,608
I'm trying to time several things in python, including upload time to Amazon's S3 Cloud Storage, and am having a little trouble. I can time my hash, and a few other things, but not the upload. I thought [this](https://stackoverflow.com/questions/7523767/how-to-use-python-timeit-when-passing-variables-to-functions) post would finally get me there, but I can't seem to find salvation. Any help would be appreciated. Very new to Python, thanks! ``` import timeit import boto from boto.s3.key import Key accKey = r"xxxxxxxxxxx"; secKey = r"yyyyyyyyyyyyyyyyyyyyyyyyy"; bucket_name = 'sweet_data' c = boto.connect_s3(accKey, secKey) b = c.get_bucket(bucket_name); k = Key(b); p = '/my/aws.path' f = 'C:\\my.file' def upload_data(p, f): k.key = p k.set_contents_from_filename(f) return t = timeit.Timer(lambda: upload_data(p, f), "from aws_lib import upload_data; p=%r; f = %r" % (p,f)) # Just calling the function works fine #upload_data(p, f) ```
2013/07/03
[ "https://Stackoverflow.com/questions/17457608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407064/" ]
I know this is heresy in the Python community, but I actually recommend *not* using `timeit`, especially for something like this. For your purposes, I believe it will be good enough (and possibly even better than `timeit`!) if you simply use `time.time()` to time things. In other words, do something like ``` from time import time t0 = time() myfunc() t1 = time() print t1 - t0 ``` Note that depending on your platform, you might want to try `time.clock()` instead (see Stack Overflow questions such as [this](https://stackoverflow.com/questions/85451/python-time-clock-vs-time-time-accuracy) and [this](https://stackoverflow.com/questions/1938048/high-precision-clock-in-python)), and if you're on Python 3.3, then you have [better options](http://docs.python.org/3/library/time.html), due to [PEP 418](http://www.python.org/dev/peps/pep-0418/).
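That before/after bookkeeping is easy to wrap in a small context manager so it doesn't get repeated around every call; a sketch using `time.time()` as above (the `sum(...)` call is a stand-in for the poster's `upload_data(p, f)`):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Record wall-clock time around whatever runs inside the with-block
    t0 = time.time()
    yield
    print("%s: %.3f s" % (label, time.time() - t0))

with timed("upload"):
    sum(range(100000))  # stand-in for upload_data(p, f)
```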
You can use the command line interface to `timeit`. Just save your code as a module without the timing stuff. For example: ``` # file: test.py data = range(5) def foo(l): return sum(l) ``` Then you can run the timing code from the command line, like this: ``` $ python -mtimeit -s 'import test;' 'test.foo(test.data)' ``` See also: * <http://docs.python.org/2/library/timeit.html#command-line-interface> * <http://docs.python.org/2/library/timeit.html#examples>
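For completeness, `timeit` can also be driven from code without the setup-string gymnastics, by passing it a callable directly (the function and data here are illustrative):

```python
import timeit

def foo(l):
    return sum(l)

data = range(5)

# number=1000 runs the callable 1000 times; the result is total elapsed seconds
elapsed = timeit.timeit(lambda: foo(data), number=1000)
print(elapsed)
```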
48,344,035
**Scenario:** I am trying to work out a way to send a quick test message in Skype with Python code. From the documentation (<https://pypi.python.org/pypi/SkPy/0.1>) I got a snippet that should allow me to do that. **Problem:** I filled in the information as expected, but I am getting an error when trying to create the connection to Skype in: ``` sk = Skype(username, password) ``` I get: > > SkypeAuthException: ("Couldn't retrieve t field from login response", > ) > > > I have no idea what this error means. **Question:** Any idea on how to solve this? **Code:** This is basically what I am using, plus my username and password: ``` from skpy import Skype sk = Skype(username, password) # connect to Skype sk.user # you sk.contacts # your contacts sk.chats # your conversations ch = sk.contacts["joe.4"].chat # 1-to-1 conversation ch.sendMsg(content) # plain-text message ``` **Question 2:** Is there any way to do this, in which the password and username should not be in the code? For example, would it be possible to use the Skype instance that is already open on that computer?
2018/01/19
[ "https://Stackoverflow.com/questions/48344035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7321700/" ]
It may be that your server's IP has been temporarily blocked, for example if you logged in from elsewhere recently. This works for me. ``` from skpy import Skype loggedInUser = Skype("userName", "password") print(loggedInUser.user) # logged-in user info print(loggedInUser.contacts) # logged-in user contacts ``` PS: skpy version: 0.8.1
try this: ``` from skpy import Skype, SkypeAuthException def connect_skype(user, pwd, token): s = Skype(connect=False) s.conn.setTokenFile(token) try: s.conn.readToken() except SkypeAuthException: s.conn.setUserPwd(user, pwd) s.conn.getSkypeToken() s.conn.writeToken() finally: sk = Skype(user, pwd, tokenFile=token) return sk ``` The token parameter can point to an empty file, but you need to create the file before calling this function; the function will write the client token into it. If the problem persists, try signing in to Skype on the web (sometimes you need to update some account information) and then try again.
34,004,510
I'm a beginner in the Python language. Is there a "try and except" function in python to check if the input is a LETTER or multiple LETTERS? If it isn't, can it ask for an input again? (I made one in which you have to enter an integer number) ``` def validation(i): try: result = int(i) return(result) except ValueError: print("Please enter a number") def start(): x = input("Enter Number: ") z = validation(x) if z != None: #Rest of function code print("Success") else: start() start() ``` When the above code is executed, and an integer number is entered, you get this: ``` Enter Number: 1 Success ``` If an invalid value, however, such as a letter or floating-point number is entered, you get this: ``` Enter Number: Hello Please enter a number Enter Number: 4.6 Please enter a number Enter Number: ``` As you can see it will keep looping until a valid **NUMBER** value is entered. So is it possible to use the "try and except" function to keep looping until a **letter** is entered? To make it clearer, I'll explain in vague structured English, not pseudo code, but just to help make it clearer: ``` print ("Hello this will calculate your lucky number") # Note this isn't the whole program, it's just the validation section. input (lucky number) # English on what I want the code to do: x = input (luckynumber) ``` So what I want is that if the variable "x" IS NOT a letter, or multiple letters, it should repeat this input (x) until the user enters a valid **letter** or multiple **letters**. In other words, if a letter(s) isn't entered, the program will not continue until the input is a letter(s). I hope this makes it clearer.
2015/11/30
[ "https://Stackoverflow.com/questions/34004510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5622261/" ]
You can just call the same function again, in the try/except clause - to do that, you'll have to adjust your logic a bit: ``` def validate_integer(): x = input('Please enter a number: ') try: int(x) except ValueError: print('Sorry, {} is not a valid number'.format(x)) return validate_integer() return x def start(): x = validate_integer() if x: print('Success!') ```
Don't use recursion in Python when simple iteration will do. ``` def validate(i): try: result = int(i) return result except ValueError: pass def start(): z = None while z is None: x = input("Please enter a number: ") z = validate(x) print("Success") start() ```
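The same loop structure also covers the letters case the question actually asks about; no try/except is needed there, since `str.isalpha()` already reports whether a string consists only of letters. A sketch, with `input()` replaced by a fixed list of attempts so it is self-contained:

```python
def validate_letters(text):
    # Return the text if it is one or more letters, else None
    if text.isalpha():
        return text
    return None

attempts = iter(["4.6", "Hello"])  # stand-in for repeated input() calls
z = None
while z is None:
    z = validate_letters(next(attempts))
print(z)  # -> Hello
```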
4,393,830
In the process of trying to write a Python script that uses PIL today, I discovered I don't seem to have it on my local machine (OS X 10.5.8, default 2.5 Python install). So I run: ``` easy_install --prefix=/usr/local/python/ pil ``` and it complains a little about /usr/local/python/lib/python2.5/site-packages not yet existing, so I create it, and try again, and get this: > > TEST FAILED: > /usr/local/python//lib/python2.5/site-packages > does NOT support .pth files error: bad > install directory or PYTHONPATH > > > You are attempting to install a > package to a directory that is not on > PYTHONPATH and which Python does not > read ".pth" files from. The > installation directory you specified > (via --install-dir, --prefix, or the > distutils default setting) was: > > > > ``` > /usr/local/python//lib/python2.5/site-packages > > ``` > > and your PYTHONPATH environment > variable currently contains: > > > > ``` > '' > > ``` > > OK, fair enough -- I hadn't done anything to set the path. So I add a quick line to ~/.bash_profile: > > PYTHONPATH="$PYTHONPATH:/usr/local/python/lib/python2.5" > > > and `source` it, and try again. Same error message. This is kind of curious, given that PYTHONPATH is clearly set; I can `echo $PYTHONPATH` and get back `:/usr/local/python/lib/python2.5`. 
I decided to check out what the include path looked like from inside: ``` import sys print "\n".join(sys.path) ``` which yields: > > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python25.zip > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5 > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat-darwin > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat-mac > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/plat-mac/lib-scriptpackages > /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-tk > /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-dynload > /Library/Python/2.5/site-packages > /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/PyObjC > > > from which `/usr/local/python/yadda/yadda` is notably missing. Not sure what I'm supposed to do here. How do I get python to recognize this location as an include path? **UPDATE** As Sven Marnach suggested, I was neglecting to export PYTHONPATH. I've corrected that problem, and now see it show up when I print out `sys.path` from within Python. However, I still got the `TEST FAILED` error message I mentioned above, just with my new PYTHONPATH environment variable. So, I tried changing it from `/usr/local/python/lib/python2.5` to `/usr/local/python/lib/python2.5/site-packages`, exporting, and running the same `easy_install` command again. 
This leads to an all new result that at first *looked* like success (but isn't): ``` Creating /usr/local/python/lib/python2.5/site-packages/site.py Searching for pil Reading http://pypi.python.org/simple/pil/ Reading http://www.pythonware.com/products/pil Reading http://effbot.org/zone/pil-changes-115.htm Reading http://effbot.org/downloads/#Imaging Best match: PIL 1.1.7 Downloading http://effbot.org/media/downloads/PIL-1.1.7.tar.gz Processing PIL-1.1.7.tar.gz Running PIL-1.1.7/setup.py -q bdist_egg --dist-dir /var/folders/XW/XWpClVq7EpSB37BV3zTo+++++TI/-Tmp-/easy_install-krj9oR/PIL-1.1.7/egg-dist-tmp--Pyauy --- using frameworks at /System/Library/Frameworks [snipped: compiler warnings] -------------------------------------------------------------------- PIL 1.1.7 SETUP SUMMARY -------------------------------------------------------------------- version 1.1.7 platform darwin 2.5.1 (r251:54863, Sep 1 2010, 22:03:14) [GCC 4.0.1 (Apple Inc. build 5465)] -------------------------------------------------------------------- --- TKINTER support available --- JPEG support available --- ZLIB (PNG/ZIP) support available *** FREETYPE2 support not available *** LITTLECMS support not available -------------------------------------------------------------------- To add a missing option, make sure you have the required library, and set the corresponding ROOT variable in the setup.py script. To check the build, run the selftest.py script. zip_safe flag not set; analyzing archive contents... Image: module references __file__ No eggs found in /var/folders/XW/XWpClVq7EpSB37BV3zTo+++++TI/-Tmp-/easy_install-krj9oR/PIL-1.1.7/egg-dist-tmp--Pyauy (setup script problem?) 
``` Again, this looks good, but when I go to run my script: > > Traceback (most recent call last): > > File "checkerboard.py", line 1, in > > import Image, ImageDraw ImportError: No module named Image > > > When I check what's now under `/usr/local/python/` using `find .`, I get: > > ./lib ./lib/python2.5 > ./lib/python2.5/site-packages > ./lib/python2.5/site-packages/site.py > ./lib/python2.5/site-packages/site.pyc > > > So... nothing module-looking (I'm assuming site.py and site.pyc are metadata or helper scripts). Where did the install go? I note this: > > To check the build, run the > selftest.py script. > > > But don't really know what that is. And I also noticed the "No eggs found" message. Are either of these hints?
2010/12/09
[ "https://Stackoverflow.com/questions/4393830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/87170/" ]
You are using the Apple-supplied Python 2.5 in OS X; it's a framework build and, by default, uses `/Library/Python/2.5/site-packages` as the location for installed packages, not `/usr/local`. Normally you shouldn't need to specify `--prefix` with an OS X framework build. Also beware that the `setuptools` (`easy_install`) supplied by Apple with OS X 10.5 is also rather old as is the version of Python itself. That said, installing `PIL` completely and correctly on OS X especially OS X 10.5 is not particularly simple. Search the archives or elsewhere for tips and/or binary packages. Particularly if you are planning to use other modules like MySQL or Django, my recommendation is to install everything (Python and PIL) using a package manager like [MacPorts](http://www.macports.org/).
Why did you specify `--prefix` in your `easy_install` invocation? Did you try just: ``` sudo easy_install pil ``` If you're only trying to install PIL to the default location, I would think `easy_install` could work out the correct path. (Clearly, `/usr/local/python` isn't it...) **EDIT**: Someone down-voted this answer, maybe because it was too terse . That's what I get for trying to post an answer from my cell phone, I guess. But the gist of it is perfectly valid, IMHO: if you're using `--prefix` to specify a custom install location with `easy_install`, you're kind of 'doing it wrong'. It might be *possible* to make this work, but the `easy_install` documentation has a section on [custom installation locations](http://peak.telecommunity.com/DevCenter/EasyInstall#custom-installation-locations) that doesn't even mention this as a possibility, except as a small tweak to the [virtual python](http://peak.telecommunity.com/DevCenter/EasyInstall#creating-a-virtual-python) option. I'd suggest following the [OS X instructions](http://peak.telecommunity.com/DevCenter/EasyInstall#mac-os-x-user-installation) if you want to install to a custom location on a Mac, `--prefix` just does not seem like the right tool for the job.
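Whichever install route is taken, it is worth confirming that the exported `PYTHONPATH` entries actually reached the interpreter before blaming the installer; a small self-contained check (on the poster's setup the expected entry would be the /usr/local/python path):

```python
import os
import sys

# Every PYTHONPATH entry should normally show up in sys.path once exported
entries = [p for p in os.environ.get("PYTHONPATH", "").split(os.pathsep) if p]
missing = [p for p in entries if p not in sys.path]
print("PYTHONPATH entries:", entries)
print("missing from sys.path:", missing)
```

If `missing` is non-empty, the variable was set but not exported to the environment the interpreter was started from.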
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other? Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
The error means that you're navigating to a view whose model is declared as typeof `Foo` (by using `@model Foo`), but you actually passed it a model which is typeof `Bar` (note the term *dictionary* is used because a model is passed to the view via a `ViewDataDictionary`). The error can be caused by **Passing the wrong model from a controller method to a view (or partial view)** Common examples include using a query that creates an anonymous object (or collection of anonymous objects) and passing it to the view ```cs var model = db.Foos.Select(x => new { ID = x.ID, Name = x.Name }); return View(model); // passes an anonymous object to a view declared with @model Foo ``` or passing a collection of objects to a view that expects a single object ```cs var model = db.Foos.Where(x => x.ID == id); return View(model); // passes IEnumerable<Foo> to a view declared with @model Foo ``` The error can be easily identified at compile time by explicitly declaring the model type in the controller to match the model in the view rather than using `var`. **Passing the wrong model from a view to a partial view** Given the following model ```cs public class Foo { public Bar MyBar { get; set; } } ``` and a main view declared with `@model Foo` and a partial view declared with `@model Bar`, then ```cs Foo model = db.Foos.Where(x => x.ID == id).Include(x => x.Bar).FirstOrDefault(); return View(model); ``` will return the correct model to the main view. However, the exception will be thrown if the view includes ```cs @Html.Partial("_Bar") // or @{ Html.RenderPartial("_Bar"); } ``` By default, the model passed to the partial view is the model declared in the main view, and you need to use ```cs @Html.Partial("_Bar", Model.MyBar) // or @{ Html.RenderPartial("_Bar", Model.MyBar); } ``` to pass the instance of `Bar` to the partial view. 
Note also that if the value of `MyBar` is `null` (has not been initialized), then by default `Foo` will be passed to the partial, in which case it needs to be ```cs @Html.Partial("_Bar", new Bar()) ``` **Declaring a model in a layout** If a layout file includes a model declaration, then all views that use that layout must declare the same model, or a model that derives from that model. If you want to include the HTML for a separate model in a Layout, then in the Layout, use `@Html.Action(...)` to call a `[ChildActionOnly]` method that initializes that model and returns a partial view for it.
**Passing the model value that is populated from a controller method to a view** ``` public async Task<IActionResult> Index() { //Getting Data from Database var model = await _context.GetData(); //Selecting Populated Data from the Model and passing to view return View(model.Value); } ```
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other? Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
The error means that you're navigating to a view whose model is declared as typeof `Foo` (by using `@model Foo`), but you actually passed it a model which is typeof `Bar` (note the term *dictionary* is used because a model is passed to the view via a `ViewDataDictionary`). The error can be caused by **Passing the wrong model from a controller method to a view (or partial view)** Common examples include using a query that creates an anonymous object (or collection of anonymous objects) and passing it to the view ```cs var model = db.Foos.Select(x => new { ID = x.ID, Name = x.Name }); return View(model); // passes an anonymous object to a view declared with @model Foo ``` or passing a collection of objects to a view that expects a single object ```cs var model = db.Foos.Where(x => x.ID == id); return View(model); // passes IEnumerable<Foo> to a view declared with @model Foo ``` The error can be easily identified at compile time by explicitly declaring the model type in the controller to match the model in the view rather than using `var`. **Passing the wrong model from a view to a partial view** Given the following model ```cs public class Foo { public Bar MyBar { get; set; } } ``` and a main view declared with `@model Foo` and a partial view declared with `@model Bar`, then ```cs Foo model = db.Foos.Where(x => x.ID == id).Include(x => x.Bar).FirstOrDefault(); return View(model); ``` will return the correct model to the main view. However, the exception will be thrown if the view includes ```cs @Html.Partial("_Bar") // or @{ Html.RenderPartial("_Bar"); } ``` By default, the model passed to the partial view is the model declared in the main view, and you need to use ```cs @Html.Partial("_Bar", Model.MyBar) // or @{ Html.RenderPartial("_Bar", Model.MyBar); } ``` to pass the instance of `Bar` to the partial view. 
Note also that if the value of `MyBar` is `null` (has not been initialized), then by default `Foo` will be passed to the partial, in which case it needs to be ```cs @Html.Partial("_Bar", new Bar()) ``` **Declaring a model in a layout** If a layout file includes a model declaration, then all views that use that layout must declare the same model, or a model that derives from that model. If you want to include the HTML for a separate model in a Layout, then in the Layout, use `@Html.Action(...)` to call a `[ChildActionOnly]` method that initializes that model and returns a partial view for it.
One more thing: if your view is a partial/sub-page and the model for that partial view is null for some reason (e.g. no data), you will get this error. You just need to handle the null partial-view model.
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other? Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
This question already has a great answer, but I ran into the same error, in a different scenario: displaying a **`List`** in an *EditorTemplate*. I have a model like this: ``` public class Foo { public string FooName { get; set; } public List<Bar> Bars { get; set; } } public class Bar { public string BarName { get; set; } } ``` And this is my *main view*: ``` @model Foo @Html.TextBoxFor(m => m.Name, new { @class = "form-control" }) @Html.EditorFor(m => m.Bars) ``` And this is my Bar ***EditorTemplate*** (*Bar.cshtml*) ``` @model List<Bar> <div class="some-style"> @foreach (var item in Model) { <label>@item.BarName</label> } </div> ``` And I got this error: > > The model item passed into the dictionary is of type 'Bar', but this > dictionary requires a model item of type > 'System.Collections.Generic.List`1[Bar] > > > --- The reason for this error is that `EditorFor` already iterates the `List` for you, so if you pass a collection to it, it would display the editor template once for each item in the collection. This is how I fixed this problem: Brought the styles outside of the editor template, and into the *main view*: ``` @model Foo @Html.TextBoxFor(m => m.Name, new { @class = "form-control" }) <div class="some-style"> @Html.EditorFor(m => m.Bars) </div> ``` And changed the ***EditorTemplate*** (*Bar.cshtml*) to this: ``` @model Bar <label>@Model.BarName</label> ```
**Passing the model value that is populated from a controller method to a view** ``` public async Task<IActionResult> Index() { //Getting Data from Database var model = await _context.GetData(); //Selecting Populated Data from the Model and passing to view return View(model.Value); } ```
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other? Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
This question already has a great answer, but I ran into the same error, in a different scenario: displaying a **`List`** in an *EditorTemplate*. I have a model like this: ``` public class Foo { public string FooName { get; set; } public List<Bar> Bars { get; set; } } public class Bar { public string BarName { get; set; } } ``` And this is my *main view*: ``` @model Foo @Html.TextBoxFor(m => m.Name, new { @class = "form-control" }) @Html.EditorFor(m => m.Bars) ``` And this is my Bar ***EditorTemplate*** (*Bar.cshtml*) ``` @model List<Bar> <div class="some-style"> @foreach (var item in Model) { <label>@item.BarName</label> } </div> ``` And I got this error: > > The model item passed into the dictionary is of type 'Bar', but this > dictionary requires a model item of type > 'System.Collections.Generic.List`1[Bar] > > > --- The reason for this error is that `EditorFor` already iterates the `List` for you, so if you pass a collection to it, it would display the editor template once for each item in the collection. This is how I fixed this problem: Brought the styles outside of the editor template, and into the *main view*: ``` @model Foo @Html.TextBoxFor(m => m.Name, new { @class = "form-control" }) <div class="some-style"> @Html.EditorFor(m => m.Bars) </div> ``` And changed the ***EditorTemplate*** (*Bar.cshtml*) to this: ``` @model Bar <label>@Model.BarName</label> ```
Consider the partial `map.cshtml` at `Partials/Map.cshtml`. This can be called from the Page where the partial is to be rendered, simply by using the `<partial>` tag: `<partial name="Partials/Map" model="new Pages.Partials.MapModel()" />` This is one of the easiest methods I encountered (although I am using Razor Pages, I am sure the same works for MVC too)
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other? Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
This question already has a great answer, but I ran into the same error, in a different scenario: displaying a **`List`** in an *EditorTemplate*. I have a model like this: ``` public class Foo { public string FooName { get; set; } public List<Bar> Bars { get; set; } } public class Bar { public string BarName { get; set; } } ``` And this is my *main view*: ``` @model Foo @Html.TextBoxFor(m => m.Name, new { @class = "form-control" }) @Html.EditorFor(m => m.Bars) ``` And this is my Bar ***EditorTemplate*** (*Bar.cshtml*) ``` @model List<Bar> <div class="some-style"> @foreach (var item in Model) { <label>@item.BarName</label> } </div> ``` And I got this error: > > The model item passed into the dictionary is of type 'Bar', but this > dictionary requires a model item of type > 'System.Collections.Generic.List`1[Bar] > > > --- The reason for this error is that `EditorFor` already iterates the `List` for you, so if you pass a collection to it, it would display the editor template once for each item in the collection. This is how I fixed this problem: Brought the styles outside of the editor template, and into the *main view*: ``` @model Foo @Html.TextBoxFor(m => m.Name, new { @class = "form-control" }) <div class="some-style"> @Html.EditorFor(m => m.Bars) </div> ``` And changed the ***EditorTemplate*** (*Bar.cshtml*) to this: ``` @model Bar <label>@Model.BarName</label> ```
First you need to return an IEnumerable version of your model to the list view. ``` @model IEnumerable<IdentityManager.Models.MerchantDetail> ``` Second, you need to return a list from the database. I am doing it via SQL Server, so this is code I got working. ``` public IActionResult Merchant_Boarding_List() { List<MerchantDetail> merchList = new List<MerchantDetail>(); try { using (var con = new SqlConnection(Common.DB_CONNECTION_STRING_BOARDING)) { con.Open(); using (var command = new SqlCommand("select * from MerchantDetail md where md.UserGUID = '" + UserGUID + "'", con)) { using (SqlDataReader reader = command.ExecuteReader()) { while (reader.Read()) { // Create a new instance per row so each list entry is distinct var model = new MerchantDetail(); model.biz_dbaBusinessName = reader["biz_dbaBusinessName"].ToString(); merchList.Add(model); } } } } } catch (Exception ex) { } return View(merchList); } ```
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other? Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
Check that the view declares the model it actually requires: **View** ``` @model IEnumerable<WFAccess.Models.ViewModels.SiteViewModel> <div class="row"> <table class="table table-striped table-hover table-width-custom"> <thead> <tr> .... ``` **Controller** ``` [HttpGet] public ActionResult ListItems() { SiteStore site = new SiteStore(); site.GetSites(); IEnumerable<SiteViewModel> sites = site.SitesList.Select(s => new SiteViewModel { Id = s.Id, Type = s.Type }); return PartialView("_ListItems", sites); } ``` In my case I use a partial view, but the same applies to normal views
**Passing the model value that is populated from a controller method to a view** ``` public async Task<IActionResult> Index() { //Getting Data from Database var model = await _context.GetData(); //Selecting Populated Data from the Model and passing to view return View(model.Value); } ```
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other? Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
The error means that you're navigating to a view whose model is declared as typeof `Foo` (by using `@model Foo`), but you actually passed it a model which is typeof `Bar` (note the term *dictionary* is used because a model is passed to the view via a `ViewDataDictionary`). The error can be caused by **Passing the wrong model from a controller method to a view (or partial view)** Common examples include using a query that creates an anonymous object (or collection of anonymous objects) and passing it to the view ```cs var model = db.Foos.Select(x => new { ID = x.ID, Name = x.Name }); return View(model); // passes an anonymous object to a view declared with @model Foo ``` or passing a collection of objects to a view that expects a single object ```cs var model = db.Foos.Where(x => x.ID == id); return View(model); // passes IEnumerable<Foo> to a view declared with @model Foo ``` The error can be easily identified at compile time by explicitly declaring the model type in the controller to match the model in the view rather than using `var`. **Passing the wrong model from a view to a partial view** Given the following model ```cs public class Foo { public Bar MyBar { get; set; } } ``` and a main view declared with `@model Foo` and a partial view declared with `@model Bar`, then ```cs Foo model = db.Foos.Where(x => x.ID == id).Include(x => x.Bar).FirstOrDefault(); return View(model); ``` will return the correct model to the main view. However, the exception will be thrown if the view includes ```cs @Html.Partial("_Bar") // or @{ Html.RenderPartial("_Bar"); } ``` By default, the model passed to the partial view is the model declared in the main view, and you need to use ```cs @Html.Partial("_Bar", Model.MyBar) // or @{ Html.RenderPartial("_Bar", Model.MyBar); } ``` to pass the instance of `Bar` to the partial view. 
Note also that if the value of `MyBar` is `null` (has not been initialized), then by default `Foo` will be passed to the partial, in which case it needs to be ```cs @Html.Partial("_Bar", new Bar()) ``` **Declaring a model in a layout** If a layout file includes a model declaration, then all views that use that layout must declare the same model, or a model that derives from that model. If you want to include the HTML for a separate model in a Layout, then in the Layout, use `@Html.Action(...)` to call a `[ChildActionOnly]` method that initializes that model and returns a partial view for it.
This question already has a great answer, but I ran into the same error in a different scenario: displaying a **`List`** in an *EditorTemplate*. I have a model like this:

```
public class Foo
{
    public string FooName { get; set; }
    public List<Bar> Bars { get; set; }
}

public class Bar
{
    public string BarName { get; set; }
}
```

And this is my *main view*:

```
@model Foo
@Html.TextBoxFor(m => m.FooName, new { @class = "form-control" })
@Html.EditorFor(m => m.Bars)
```

And this is my Bar ***EditorTemplate*** (*Bar.cshtml*)

```
@model List<Bar>
<div class="some-style">
    @foreach (var item in Model)
    {
        <label>@item.BarName</label>
    }
</div>
```

And I got this error: > > The model item passed into the dictionary is of type 'Bar', but this > dictionary requires a model item of type > 'System.Collections.Generic.List`1[Bar] > > > --- The reason for this error is that `EditorFor` already iterates the `List` for you, so if you pass a collection to it, it would display the editor template once for each item in the collection. This is how I fixed this problem: I brought the styles outside of the editor template, and into the *main view*:

```
@model Foo
@Html.TextBoxFor(m => m.FooName, new { @class = "form-control" })
<div class="some-style">
    @Html.EditorFor(m => m.Bars)
</div>
```

And changed the ***EditorTemplate*** (*Bar.cshtml*) to this:

```
@model Bar
<label>@Model.BarName</label>
```
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other. Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
The error means that you're navigating to a view whose model is declared as typeof `Foo` (by using `@model Foo`), but you actually passed it a model which is typeof `Bar` (note the term *dictionary* is used because a model is passed to the view via a `ViewDataDictionary`). The error can be caused by **Passing the wrong model from a controller method to a view (or partial view)** Common examples include using a query that creates an anonymous object (or collection of anonymous objects) and passing it to the view

```cs
var model = db.Foos.Select(x => new { ID = x.ID, Name = x.Name });
return View(model); // passes an anonymous object to a view declared with @model Foo
```

or passing a collection of objects to a view that expects a single object

```cs
var model = db.Foos.Where(x => x.ID == id);
return View(model); // passes IEnumerable<Foo> to a view declared with @model Foo
```

The error can be easily identified at compile time by explicitly declaring the model type in the controller to match the model in the view rather than using `var`. **Passing the wrong model from a view to a partial view** Given the following model

```cs
public class Foo
{
    public Bar MyBar { get; set; }
}
```

and a main view declared with `@model Foo` and a partial view declared with `@model Bar`, then

```cs
Foo model = db.Foos.Where(x => x.ID == id).Include(x => x.MyBar).FirstOrDefault();
return View(model);
```

will return the correct model to the main view. However the exception will be thrown if the view includes

```cs
@Html.Partial("_Bar") // or @{ Html.RenderPartial("_Bar"); }
```

By default, the model passed to the partial view is the model declared in the main view and you need to use

```cs
@Html.Partial("_Bar", Model.MyBar) // or @{ Html.RenderPartial("_Bar", Model.MyBar); }
```

to pass the instance of `Bar` to the partial view. Note also that if the value of `MyBar` is `null` (has not been initialized), then by default `Foo` will be passed to the partial, in which case it needs to be

```cs
@Html.Partial("_Bar", new Bar())
```

**Declaring a model in a layout** If a layout file includes a model declaration, then all views that use that layout must declare the same model, or a model that derives from that model. If you want to include the HTML for a separate model in a Layout, then in the Layout, use `@Html.Action(...)` to call a `[ChildActionOnly]` method that initializes that model and returns a partial view for it.
Check that the view declares the model it actually receives: **View**

```
@model IEnumerable<WFAccess.Models.ViewModels.SiteViewModel>
<div class="row">
    <table class="table table-striped table-hover table-width-custom">
        <thead>
            <tr>
....
```

**Controller**

```
[HttpGet]
public ActionResult ListItems()
{
    SiteStore site = new SiteStore();
    site.GetSites();

    IEnumerable<SiteViewModel> sites =
        site.SitesList.Select(s => new SiteViewModel
        {
            Id = s.Id,
            Type = s.Type
        });

    return PartialView("_ListItems", sites);
}
```

In my case I used a partial view, but the same check applies to normal views.
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other. Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
Consider the partial `Map.cshtml` at `Partials/Map.cshtml`. This can be called from the page where the partial is to be rendered, simply by using the `<partial>` tag: `<partial name="Partials/Map" model="new Pages.Partials.MapModel()" />` This is one of the easiest methods I encountered (I am using Razor Pages, but the same applies to MVC).
**Passing the model value that is populated from a controller method to a view**

```
public async Task<IActionResult> Index()
{
    // Getting data from the database
    var model = await _context.GetData();

    // Selecting the populated data from the model and passing it to the view
    return View(model.Value);
}
```
40,373,609
I am actually reading the [Oracle-cx\_Oracle](http://www.oracle.com/technetwork/articles/dsl/python-091105.html) tutorial. There I came across non-pooled connections and DRCP. Basically I am not a DBA, so I searched with Google but couldn't find anything. So could somebody help me understand what they are and how they differ from each other. Thank you.
2016/11/02
[ "https://Stackoverflow.com/questions/40373609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1293013/" ]
Consider the partial `Map.cshtml` at `Partials/Map.cshtml`. This can be called from the page where the partial is to be rendered, simply by using the `<partial>` tag: `<partial name="Partials/Map" model="new Pages.Partials.MapModel()" />` This is one of the easiest methods I encountered (I am using Razor Pages, but the same applies to MVC).
One more thing: if your view is a partial/sub-page and the model for that partial view is `null` for some reason (e.g. no data), you will get this error. You just need to handle the `null` partial-view model.
54,706,513
According to the xgboost documentation (<https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.training>) the xgboost returns feature importances: > > **feature\_importances\_** > > > Feature importances property > > > **Note** > > > Feature importance is defined only for tree boosters. Feature importance is only defined when the decision tree model is chosen as base learner (booster=gbtree). It is not defined for other base learner types, such as linear learners (booster=gblinear). > > > **Returns:** feature\_importances\_ > > > **Return type:** array of shape [n\_features] > > > However, this does not seem to be the case, as the following toy example shows:

```
import seaborn as sns
import xgboost as xgb

mpg = sns.load_dataset('mpg')
toy = mpg[['mpg', 'cylinders', 'displacement', 'horsepower', 'weight', 'acceleration']]
toy = toy.sample(frac=1)

N = toy.shape[0]
N1 = int(N/2)

toy_train = toy.iloc[:N1, :]
toy_test = toy.iloc[N1:, :]

toy_train_x = toy_train.iloc[:, 1:]
toy_train_y = toy_train.iloc[:, 1]

toy_test_x = toy_test.iloc[:, 1:]
toy_test_y = toy_test.iloc[:, 1]

max_depth = 6
eta = 0.3
subsample = 0.8
colsample_bytree = 0.7
alpha = 0.1

params = {"booster" : 'gbtree' , 'objective' : 'reg:linear' , 'max_depth' : max_depth, 'eta' : eta,\
          'subsample' : subsample, 'colsample_bytree' : colsample_bytree, 'alpha' : alpha}

dtrain_toy = xgb.DMatrix(data = toy_train_x , label = toy_train_y)
dtest_toy = xgb.DMatrix(data = toy_test_x, label = toy_test_y)
watchlist = [(dtest_toy, 'eval'), (dtrain_toy, 'train')]

xg_reg_toy = xgb.train(params = params, dtrain = dtrain_toy, num_boost_round = 1000, evals = watchlist, \
                       early_stopping_rounds = 20)

xg_reg_toy.feature_importances_
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-378-248f7887e307> in <module>()
----> 1 xg_reg_toy.feature_importances_

AttributeError: 'Booster' object has no attribute 'feature_importances_'
```
2019/02/15
[ "https://Stackoverflow.com/questions/54706513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8270077/" ]
If you set:

```
"moment": "^2.22.2"
```

the user will download the newest version compatible with `v2.22.2` (any `2.x.x` >= `2.22.2`). In this case you will download the `v2.24.0`. If you set:

```
"moment": "2.22.2"
```

the user will download exactly that version. If you set:

```
"moment": "~2.22.1"
```

the user will download the newest patch release of `v2.22` (any `2.22.x` >= `2.22.1`). In this case you will download the `v2.22.2`. You can use the functions in `v2.9.9` if and only if the module respects the [semver](https://semver.org/) standard. That is true 99.999% of the time.
> > can we use any of version 2.x.x functionality( i.e. we can use the new functions provided by 2.9.9 in our app, though we installed 2.22.2 on our computer) > > > Just to avoid confusion. You will not install version 2.22.2 on your computer. By saying ^2.22.2, npm will look up the highest available version of 2.x.x and install that version. You *will never* install version 2.22.2. You *will* install version 2.24.0, and when moment updates its packages to 2.25.0, you will install that version. So you will always have the latest version 2.x.x installed, and therefore you will get the functions of 2.9.9. > > are we saying that anyone else who uses our code of app can use any 2.x.x version of "moment" package ? > > > Yes, you can verify this by checking out package-lock.json, which is created by NPM and describes the exact dependency tree. <https://docs.npmjs.com/files/package-lock.json> If your package.json is version 1.0.0 and you have a 2.22.2 dependency on moment, and do npm install, you will see in package-lock:

```
{
  "name": "mypackage",
  "version": "1.0.0",
  "lockfileVersion": 1,
  "requires": true,
  "dependencies": {
    "moment": {
      "version": "2.24.0",
      "resolved": "https://registry.npmjs.org/moment/-/moment-2.24.0.tgz",
    }
  }
}
```

So everybody that installs version 1.0.0 of your package will get moment version 2.24.0 > > why do I need to install "moment.js" again (i.e. update it) once its installed on my computer – > > > You don't have to. But the common rule is to leave node\_modules out of repositories and only have package.json, so that when you publish your website to, for example, AWS, Azure or DigitalOcean, they will do npm install and therefore install everything, every time you publish your website. **To clarify how the flow of packages usually is** 1. You create a package/module with a specific version 2. I decide to use your package 3. So I will do npm install (to use your package) 4. NPM will go through the dependency tree and install versions accordingly. 5. My website works and I am happy 6. In the meanwhile you are changing your code, and updating your package. 7. A few months pass and I decide to change my website. So now when I do npm install (because I updated my code), I will get your updates as well.
50,750,688
In python I can do: ``` >>> 5 in [2,4,6] False >>> 5 in [4,5,6] True ``` to determine if the give value `5` exists in the list. I want to do the same concept in `jq`. But, there is no `in`. Here is an example with a more realistic data set, and how I can check for 2 values. In my real need I have to check for a few hundred and don't want to have all those `or`ed together. ``` jq '.[] | select(.PrivateIpAddress == "172.31.6.209" or .PrivateIpAddress == "172.31.6.229") | .PrivateDnsName' <<EOF [ { "PrivateDnsName": "ip-172-31-6-209.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.209" }, { "PrivateDnsName": "ip-172-31-6-219.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.219" }, { "PrivateDnsName": "ip-172-31-6-229.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.229" }, { "PrivateDnsName": "ip-172-31-6-239.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.239" } ] EOF ```
2018/06/07
[ "https://Stackoverflow.com/questions/50750688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/117471/" ]
using `,` --------- I don't know where in <https://stedolan.github.io/jq/manual/v1.5/> this is documented. But the answer is in that `jq` does implicit one-to-many and many-to-one munging. ``` jq '.[] | select(.PrivateIpAddress == ("172.31.6.209", "172.31.6.229")) | .PrivateDnsName' <<EOF [ { "PrivateDnsName": "ip-172-31-6-209.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.209" }, { "PrivateDnsName": "ip-172-31-6-219.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.219" }, { "PrivateDnsName": "ip-172-31-6-229.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.229" }, { "PrivateDnsName": "ip-172-31-6-239.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.239" } ] EOF ``` (the formatting/indenting of code was made to match that of the OP to simplify visual comparison) The output is: ``` "ip-172-31-6-209.us-west-2.compute.internal" "ip-172-31-6-229.us-west-2.compute.internal" ``` "Seems like voodoo to me." using `| IN("a","b","c")` ------------------------- **Update:** It's been 16 months, and I've finally learned how to use the `IN` function. Here is a demo that will produce the same results as above. ``` cat > filter.jq <<EOF # Either of these work in jq < v1.5, but I've commented them out since I'm using v1.6 # def IN(s): first( if (s == .) then true else empty end ) // false; # def IN(s): first(select(s == .)) // false; .[] | select(.PrivateIpAddress | IN("172.31.6.209","172.31.6.229")) | .PrivateDnsName EOF jq -f filter.jq <<EOF [ { "PrivateDnsName": "ip-172-31-6-209.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.209" }, { "PrivateDnsName": "ip-172-31-6-219.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.219" }, { "PrivateDnsName": "ip-172-31-6-229.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.229" }, { "PrivateDnsName": "ip-172-31-6-239.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.239" } ] EOF ```
> > But, there is no `in`. > > > You could use `index/1`, as documented in the manual. Even better would be to use `IN`, which however was only introduced after the release of jq 1.5. If your jq does not have it, you can use this definition for `IN/1`: ``` # return true or false as . is in the stream s def IN(s): first( if (s == .) then true else empty end ) // false; ``` If you want to check membership in an array, say $a, simply use `IN( $a[] )`.
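For comparison, here is a sketch of the Python-style membership test the question starts from, applied to the same document (standard-library `json` only; the variable names are illustrative, not from the answers above):

```python
import json

# The sample document from the question.
doc = json.loads("""[
  {"PrivateDnsName": "ip-172-31-6-209.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.209"},
  {"PrivateDnsName": "ip-172-31-6-219.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.219"},
  {"PrivateDnsName": "ip-172-31-6-229.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.229"},
  {"PrivateDnsName": "ip-172-31-6-239.us-west-2.compute.internal", "PrivateIpAddress": "172.31.6.239"}
]""")

# A set gives O(1) membership tests and plays the role of jq's IN(...).
wanted = {"172.31.6.209", "172.31.6.229"}
names = [o["PrivateDnsName"] for o in doc if o["PrivateIpAddress"] in wanted]
print(names)
```

For a few hundred addresses, `wanted` would simply be built from a file, which mirrors how the `IN` filter scales on the jq side.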
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
Take a look here: [Asynchronous Programming in Python](http://xph.us/2009/12/10/asynchronous-programming-in-python.html) [An Introduction to Asynchronous Programming and Twisted](http://krondo.com/blog/?p=1247) Worth checking out: [asyncio (previously Tulip) has been checked into the Python default branch](https://plus.google.com/103282573189025907018/posts/6gLX8Nhk5WM) ### Edited on 14-Mar-2018 Today Python has [asyncIO — Asynchronous I/O, event loop, coroutines and tasks](https://docs.python.org/3/library/asyncio.html) built in. Description taken from the link above: > > The **asyncIO** module provides infrastructure for writing single-threaded > concurrent code using coroutines, multiplexing I/O access over sockets > and other resources, running network clients and servers, and other > related primitives. Here is a more detailed list of the package > contents: > > > 1. a pluggable event loop with various system-specific implementations; > 2. transport and protocol abstractions (similar to those in Twisted); > 3. concrete support for TCP, UDP, SSL, subprocess pipes, delayed calls, > and others (some may be system-dependent); > 4. a Future class that mimics the one in the concurrent.futures module, but adapted for use with the event loop; > 5. coroutines and tasks based on yield from (PEP 380), to > help write concurrent code in a sequential fashion; > 6. cancellation support for Futures and coroutines; > 7. synchronization primitives for use > between coroutines in a single thread, mimicking those in the > threading module; > 8. an interface for passing work off to a threadpool, > for times when you absolutely, positively have to use a library that > makes blocking I/O calls. > > > Asynchronous programming is more complex > than classical “sequential” programming: see the [Develop with asyncio > page](https://docs.python.org/3/library/asyncio-dev.html#asyncio-dev) which lists common traps and explains how to avoid them. Enable > the debug mode during development to detect common issues. > > > Also worth checking out: [A guide to asynchronous programming in Python with asyncIO](https://medium.freecodecamp.org/a-guide-to-asynchronous-programming-in-python-with-asyncio-232e2afa44f6)
The other respondents are pointing you to Twisted, which is a great and very comprehensive framework but in my opinion it has a very un-pythonic design. Also, AFAICT, you have to use the Twisted main loop, which may be a problem for you if you're already using something else that provides its own loop. Here is a contrived example that would demonstrate using the `threading` module: ``` from threading import Thread def background_stuff(): while True: print "I am doing some stuff" t = Thread(target=background_stuff) t.start() # Continue doing some other stuff now ``` However, in pretty much every useful case, you will want to communicate between threads. You should look into [synchronization primitives](http://en.wikipedia.org/wiki/Synchronization_primitive), and become familiar with the concept of [concurrency](http://en.wikipedia.org/wiki/Concurrency_%28computer_science%29) and the related issues. The `threading` module provides many such primitives for you to use, if you know how to use them.
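To make the "communicate between threads" point above concrete, here is a minimal sketch using the standard-library `queue.Queue` as the synchronization primitive (the function names are illustrative, not part of any API):

```python
import queue
import threading

def worker(tasks, results):
    # Consume numbers from `tasks` and push their squares to `results`.
    while True:
        n = tasks.get()
        if n is None:      # sentinel value: no more work
            break
        results.put(n * n)

def run_squares(numbers):
    tasks, results = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(tasks, results))
    t.start()
    for n in numbers:
        tasks.put(n)
    tasks.put(None)        # ask the worker to stop
    t.join()               # wait for the background thread to finish
    return [results.get() for _ in range(results.qsize())]

print(run_squares([1, 2, 3]))  # → [1, 4, 9]
```

`Queue` handles the locking internally, so the main thread and the worker never touch shared mutable state directly.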
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
What you describe (the main program flow resuming immediately while another function executes) is not what's normally called "asynchronous" (AKA "event-driven") programming, but rather "multitasking" (AKA "multithreading" or "multiprocessing"). You can get what you described with the standard library modules `threading` and `multiprocessing` (the latter allows actual concurrent execution on multi-core machines). Asynchronous (event-driven) programming is supported in the standard Python library in the `asyncore` and `asynchat` modules, which are very oriented to networking tasks (indeed they internally use the `select` module, which, on Windows, only supports sockets -- though on Unixy OSs it can also support any file descriptor). For a more general (though also mostly networking oriented, but not *limited* to that) support for asynchronous (event-driven) programming, check out the [twisted](http://twistedmatrix.com/trac/) third-party package.
The other respondents are pointing you to Twisted, which is a great and very comprehensive framework but in my opinion it has a very un-pythonic design. Also, AFAICT, you have to use the Twisted main loop, which may be a problem for you if you're already using something else that provides its own loop. Here is a contrived example that would demonstrate using the `threading` module: ``` from threading import Thread def background_stuff(): while True: print "I am doing some stuff" t = Thread(target=background_stuff) t.start() # Continue doing some other stuff now ``` However, in pretty much every useful case, you will want to communicate between threads. You should look into [synchronization primitives](http://en.wikipedia.org/wiki/Synchronization_primitive), and become familiar with the concept of [concurrency](http://en.wikipedia.org/wiki/Concurrency_%28computer_science%29) and the related issues. The `threading` module provides many such primitives for you to use, if you know how to use them.
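The exact flow the question asks for — hand a function off, attach a callback, and return to the main flow immediately — can be sketched with the standard-library `concurrent.futures` module (available since Python 3.2; the names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def slow_add(a, b):
    # stands in for a long-running operation
    return a + b

def on_done(future):
    # the callback: runs when slow_add completes, not where it was submitted
    results.append(future.result())

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(slow_add, 2, 3)  # returns a Future immediately
    fut.add_done_callback(on_done)
    # ... the main flow continues here while slow_add runs ...
# leaving the `with` block waits for outstanding work, so on_done has run

print(results)  # → [5]
```

Note that `add_done_callback` invokes the callback in the worker thread (or immediately, if the future has already finished), so the callback should be thread-safe.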
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
Take a look here: [Asynchronous Programming in Python](http://xph.us/2009/12/10/asynchronous-programming-in-python.html) [An Introduction to Asynchronous Programming and Twisted](http://krondo.com/blog/?p=1247) Worth checking out: [asyncio (previously Tulip) has been checked into the Python default branch](https://plus.google.com/103282573189025907018/posts/6gLX8Nhk5WM) ### Edited on 14-Mar-2018 Today Python has [asyncIO — Asynchronous I/O, event loop, coroutines and tasks](https://docs.python.org/3/library/asyncio.html) built in. Description taken from the link above: > > The **asyncIO** module provides infrastructure for writing single-threaded > concurrent code using coroutines, multiplexing I/O access over sockets > and other resources, running network clients and servers, and other > related primitives. Here is a more detailed list of the package > contents: > > > 1. a pluggable event loop with various system-specific implementations; > 2. transport and protocol abstractions (similar to those in Twisted); > 3. concrete support for TCP, UDP, SSL, subprocess pipes, delayed calls, > and others (some may be system-dependent); > 4. a Future class that mimics the one in the concurrent.futures module, but adapted for use with the event loop; > 5. coroutines and tasks based on yield from (PEP 380), to > help write concurrent code in a sequential fashion; > 6. cancellation support for Futures and coroutines; > 7. synchronization primitives for use > between coroutines in a single thread, mimicking those in the > threading module; > 8. an interface for passing work off to a threadpool, > for times when you absolutely, positively have to use a library that > makes blocking I/O calls. > > > Asynchronous programming is more complex > than classical “sequential” programming: see the [Develop with asyncio > page](https://docs.python.org/3/library/asyncio-dev.html#asyncio-dev) which lists common traps and explains how to avoid them. Enable > the debug mode during development to detect common issues. > > > Also worth checking out: [A guide to asynchronous programming in Python with asyncIO](https://medium.freecodecamp.org/a-guide-to-asynchronous-programming-in-python-with-asyncio-232e2afa44f6)
You may see my Python Asynchronous Programming tool: <http://www.ideawu.com/blog/2010/08/delegate-in-pythonpython-asynchronous-programming.html> ``` import time, random, sys from delegate import * def proc(a): time.sleep(random.random()) return str(a) def proc_callback(handle, args=None): ret = d.end(handle) d = Delegate() d.init(2) # number of workers handle = d.begin(proc, '12345', proc_callback, 'test') sys.stdin.readline() d.free() ```
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
You may well want to check out the Twisted library for Python. They provide many useful tools. 1. [A little primer](http://jessenoller.com/2009/02/11/twisted-hello-asynchronous-programming/) 2. [Defer and Related stuff](http://twistedmatrix.com/documents/current/core/howto/defer.html)
You may see my Python Asynchronous Programming tool: <http://www.ideawu.com/blog/2010/08/delegate-in-pythonpython-asynchronous-programming.html> ``` import time, random, sys from delegate import * def proc(a): time.sleep(random.random()) return str(a) def proc_callback(handle, args=None): ret = d.end(handle) d = Delegate() d.init(2) # number of workers handle = d.begin(proc, '12345', proc_callback, 'test') sys.stdin.readline() d.free() ```
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
The other respondents are pointing you to Twisted, which is a great and very comprehensive framework but in my opinion it has a very un-pythonic design. Also, AFAICT, you have to use the Twisted main loop, which may be a problem for you if you're already using something else that provides its own loop. Here is a contrived example that would demonstrate using the `threading` module: ``` from threading import Thread def background_stuff(): while True: print "I am doing some stuff" t = Thread(target=background_stuff) t.start() # Continue doing some other stuff now ``` However, in pretty much every useful case, you will want to communicate between threads. You should look into [synchronization primitives](http://en.wikipedia.org/wiki/Synchronization_primitive), and become familiar with the concept of [concurrency](http://en.wikipedia.org/wiki/Concurrency_%28computer_science%29) and the related issues. The `threading` module provides many such primitives for you to use, if you know how to use them.
You may see my Python Asynchronous Programming tool: <http://www.ideawu.com/blog/2010/08/delegate-in-pythonpython-asynchronous-programming.html> ``` import time, random, sys from delegate import * def proc(a): time.sleep(random.random()) return str(a) def proc_callback(handle, args=None): ret = d.end(handle) d = Delegate() d.init(2) # number of workers handle = d.begin(proc, '12345', proc_callback, 'test') sys.stdin.readline() d.free() ```
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
What you describe (the main program flow resuming immediately while another function executes) is not what's normally called "asynchronous" (AKA "event-driven") programming, but rather "multitasking" (AKA "multithreading" or "multiprocessing"). You can get what you described with the standard library modules `threading` and `multiprocessing` (the latter allows actual concurrent execution on multi-core machines). Asynchronous (event-driven) programming is supported in the standard Python library in the `asyncore` and `asynchat` modules, which are very oriented to networking tasks (indeed they internally use the `select` module, which, on Windows, only supports sockets -- though on Unixy OSs it can also support any file descriptor). For a more general (though also mostly networking oriented, but not *limited* to that) support for asynchronous (event-driven) programming, check out the [twisted](http://twistedmatrix.com/trac/) third-party package.
You may well want to check out the Twisted library for Python. They provide many useful tools. 1. [A little primer](http://jessenoller.com/2009/02/11/twisted-hello-asynchronous-programming/) 2. [Defer and Related stuff](http://twistedmatrix.com/documents/current/core/howto/defer.html)
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
Take a look here: [Asynchronous Programming in Python](http://xph.us/2009/12/10/asynchronous-programming-in-python.html) [An Introduction to Asynchronous Programming and Twisted](http://krondo.com/blog/?p=1247) Worth checking out: [asyncio (previously Tulip) has been checked into the Python default branch](https://plus.google.com/103282573189025907018/posts/6gLX8Nhk5WM) ### Edited on 14-Mar-2018 Today Python has [asyncIO — Asynchronous I/O, event loop, coroutines and tasks](https://docs.python.org/3/library/asyncio.html) built in. Description taken from the link above: > > The **asyncIO** module provides infrastructure for writing single-threaded > concurrent code using coroutines, multiplexing I/O access over sockets > and other resources, running network clients and servers, and other > related primitives. Here is a more detailed list of the package > contents: > > > 1. a pluggable event loop with various system-specific implementations; > 2. transport and protocol abstractions (similar to those in Twisted); > 3. concrete support for TCP, UDP, SSL, subprocess pipes, delayed calls, > and others (some may be system-dependent); > 4. a Future class that mimics the one in the concurrent.futures module, but adapted for use with the event loop; > 5. coroutines and tasks based on yield from (PEP 380), to > help write concurrent code in a sequential fashion; > 6. cancellation support for Futures and coroutines; > 7. synchronization primitives for use > between coroutines in a single thread, mimicking those in the > threading module; > 8. an interface for passing work off to a threadpool, > for times when you absolutely, positively have to use a library that > makes blocking I/O calls. > > > Asynchronous programming is more complex > than classical “sequential” programming: see the [Develop with asyncio > page](https://docs.python.org/3/library/asyncio-dev.html#asyncio-dev) which lists common traps and explains how to avoid them. Enable > the debug mode during development to detect common issues. > > > Also worth checking out: [A guide to asynchronous programming in Python with asyncIO](https://medium.freecodecamp.org/a-guide-to-asynchronous-programming-in-python-with-asyncio-232e2afa44f6)
Good news everyone! **Python 3.4 will include a brand new, ambitious asynchronous programming [implementation](http://www.slideshare.net/megafeihong/tulip-24190096)!** It is currently called [tulip](https://code.google.com/p/tulip/source/list) and already has an [active following](https://groups.google.com/forum/?fromgroups#!forum/python-tulip). As described in [PEP 3153: Asynchronous IO support](http://www.python.org/dev/peps/pep-3153/) and [PEP 3156: Asynchronous IO Support Rebooted](http://www.python.org/dev/peps/pep-3156/): > > People who want to write asynchronous code in Python right now have a few options: > > > * asyncore and asynchat; > * something bespoke, most likely based on the select module; > * using a third party library, such as [Twisted](http://www.twistedmatrix.com/) or [gevent](http://www.gevent.org/). > > > Unfortunately, each of these options has its downsides, which this PEP tries to address. > > > Despite having been part of the Python standard library for a long time, the asyncore module suffers from fundamental flaws following from an inflexible API that does not stand up to the expectations of a modern asynchronous networking module. > > > Moreover, its approach is too simplistic to provide developers with all the tools they need in order to fully exploit the potential of asynchronous networking. > > > The most popular solution right now used in production involves the use of third party libraries. These often provide satisfactory solutions, but there is a lack of compatibility between these libraries, which tends to make codebases very tightly coupled to the library they use. > > > This current lack of portability between different asynchronous IO libraries causes a lot of duplicated effort for third party library developers. A sufficiently powerful abstraction could mean that asynchronous code gets written once, but used everywhere. > > > Here is the [brief overview](http://www.slideshare.net/megafeihong/tulip-24190096) of its abilities.
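For readers landing here later: what grew out of tulip shipped as `asyncio`, and with the modern (Python 3.7+) API the single-threaded, coroutine-based style looks roughly like this sketch (the coroutine names are illustrative):

```python
import asyncio

async def fetch(name, delay):
    # stands in for a non-blocking I/O operation
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # run both coroutines concurrently on a single thread
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))

out = asyncio.run(main())
print(out)  # → ['a done', 'b done']
```

`asyncio.gather` preserves the order of its arguments, so the results come back in a deterministic order even though the coroutines ran concurrently.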
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
The other respondents are pointing you to Twisted, which is a great and very comprehensive framework but in my opinion it has a very un-pythonic design. Also, AFAICT, you have to use the Twisted main loop, which may be a problem for you if you're already using something else that provides its own loop. Here is a contrived example that would demonstrate using the `threading` module: ``` from threading import Thread def background_stuff(): while True: print "I am doing some stuff" t = Thread(target=background_stuff) t.start() # Continue doing some other stuff now ``` However, in pretty much every useful case, you will want to communicate between threads. You should look into [synchronization primitives](http://en.wikipedia.org/wiki/Synchronization_primitive), and become familiar with the concept of [concurrency](http://en.wikipedia.org/wiki/Concurrency_%28computer_science%29) and the related issues. The `threading` module provides many such primitives for you to use, if you know how to use them.
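Building on the answer above, inter-thread communication is usually done with the standard library's `queue` module rather than raw synchronization primitives. A minimal sketch (the names and the doubling "work" here are illustrative, not from the original answer):

```python
import threading
import queue

def background_stuff(tasks, results):
    """Consume items from the tasks queue and push processed results."""
    while True:
        item = tasks.get()
        if item is None:        # sentinel value: shut the worker down
            break
        results.put(item * 2)   # stand-in for real work

tasks, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=background_stuff, args=(tasks, results))
t.start()

# The main flow continues immediately; hand work to the background thread.
for n in (1, 2, 3):
    tasks.put(n)
tasks.put(None)                 # ask the worker to exit
t.join()

collected = sorted(results.get() for _ in range(3))
print(collected)  # [2, 4, 6]
```

`queue.Queue` handles the locking internally, which avoids most of the concurrency pitfalls the answer warns about.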
You may well want to check out the Twisted library for Python. They provide many useful tools. 1. [A little primer](http://jessenoller.com/2009/02/11/twisted-hello-asynchronous-programming/) 2. [Defer and Related stuff](http://twistedmatrix.com/documents/current/core/howto/defer.html)
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
Good news everyone! **Python 3.4 will include a brand-new, ambitious asynchronous programming [implementation](http://www.slideshare.net/megafeihong/tulip-24190096)!** It is currently called [tulip](https://code.google.com/p/tulip/source/list) and already has an [active following](https://groups.google.com/forum/?fromgroups#!forum/python-tulip). As described in [PEP 3153: Asynchronous IO support](http://www.python.org/dev/peps/pep-3153/) and [PEP 3156: Asynchronous IO Support Rebooted](http://www.python.org/dev/peps/pep-3156/): > > People who want to write asynchronous code in Python right now have a few options: > > > * asyncore and asynchat; > * something bespoke, most likely based on the select module; > * using a third party library, such as [Twisted](http://www.twistedmatrix.com/) or [gevent](http://www.gevent.org/). > > > Unfortunately, each of these options has its downsides, which this PEP tries to address. > > > Despite having been part of the Python standard library for a long time, the asyncore module suffers from fundamental flaws following from an inflexible API that does not stand up to the expectations of a modern asynchronous networking module. > > > Moreover, its approach is too simplistic to provide developers with all the tools they need in order to fully exploit the potential of asynchronous networking. > > > The most popular solution right now used in production involves the use of third party libraries. These often provide satisfactory solutions, but there is a lack of compatibility between these libraries, which tends to make codebases very tightly coupled to the library they use. > > > This current lack of portability between different asynchronous IO libraries causes a lot of duplicated effort for third party library developers. A sufficiently powerful abstraction could mean that asynchronous code gets written once, but used everywhere. > > > Here is a [brief overview](http://www.slideshare.net/megafeihong/tulip-24190096) of its abilities.
You may see my Python Asynchronous Programming tool: <http://www.ideawu.com/blog/2010/08/delegate-in-pythonpython-asynchronous-programming.html> ``` import time, random, sys from delegate import * def proc(a): time.sleep(random.random()) return str(a) def proc_callback(handle, args=None): ret = d.end(handle) d = Delegate() d.init(2) # number of workers handle = d.begin(proc, '12345', proc_callback, 'test') sys.stdin.readline() d.free() ```
3,221,314
Is there a generic notion of asynchronous programming in python? Could I assign a callback to a function, execute it and return to the main program flow immediately, no matter how long the execution of that function would take?
2010/07/11
[ "https://Stackoverflow.com/questions/3221314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/282307/" ]
What you describe (the main program flow resuming immediately while another function executes) is not what's normally called "asynchronous" (AKA "event-driven") programming, but rather "multitasking" (AKA "multithreading" or "multiprocessing"). You can get what you described with the standard library modules `threading` and `multiprocessing` (the latter allows actual concurrent execution on multi-core machines). Asynchronous (event-driven) programming is supported in the standard Python library in the `asyncore` and `asynchat` modules, which are very oriented to networking tasks (indeed they internally use the `select` module, which, on Windows, only supports sockets -- though on Unixy OSs it can also support any file descriptor). For a more general (though also mostly networking oriented, but not *limited* to that) support for asynchronous (event-driven) programming, check out the [twisted](http://twistedmatrix.com/trac/) third-party package.
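As a concrete illustration of the multitasking option this answer describes, here is a hedged sketch (the helper name and signature are mine, not from the answer) of what the asker literally requested: run a function, return to the main flow immediately, and fire a callback when it finishes, using only the standard `threading` module:

```python
import threading

def run_async(func, args, callback):
    """Run func(*args) in a background thread, then pass its result to callback."""
    def worker():
        callback(func(*args))
    t = threading.Thread(target=worker)
    t.start()
    return t  # caller may join() later if it needs to wait

results = []
handle = run_async(sum, ([1, 2, 3],), results.append)
# ... the main program flow continues here without blocking ...
handle.join()
print(results)  # [6]
```

Note that the callback runs on the worker thread, not the main thread, so anything it touches must be safe to share; that is exactly the kind of concern the `multiprocessing` and event-driven approaches sidestep in different ways.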
Good news everyone! **Python 3.4 will include a brand-new, ambitious asynchronous programming [implementation](http://www.slideshare.net/megafeihong/tulip-24190096)!** It is currently called [tulip](https://code.google.com/p/tulip/source/list) and already has an [active following](https://groups.google.com/forum/?fromgroups#!forum/python-tulip). As described in [PEP 3153: Asynchronous IO support](http://www.python.org/dev/peps/pep-3153/) and [PEP 3156: Asynchronous IO Support Rebooted](http://www.python.org/dev/peps/pep-3156/): > > People who want to write asynchronous code in Python right now have a few options: > > > * asyncore and asynchat; > * something bespoke, most likely based on the select module; > * using a third party library, such as [Twisted](http://www.twistedmatrix.com/) or [gevent](http://www.gevent.org/). > > > Unfortunately, each of these options has its downsides, which this PEP tries to address. > > > Despite having been part of the Python standard library for a long time, the asyncore module suffers from fundamental flaws following from an inflexible API that does not stand up to the expectations of a modern asynchronous networking module. > > > Moreover, its approach is too simplistic to provide developers with all the tools they need in order to fully exploit the potential of asynchronous networking. > > > The most popular solution right now used in production involves the use of third party libraries. These often provide satisfactory solutions, but there is a lack of compatibility between these libraries, which tends to make codebases very tightly coupled to the library they use. > > > This current lack of portability between different asynchronous IO libraries causes a lot of duplicated effort for third party library developers. A sufficiently powerful abstraction could mean that asynchronous code gets written once, but used everywhere. > > > Here is a [brief overview](http://www.slideshare.net/megafeihong/tulip-24190096) of its abilities.
59,860,579
I used postman to get urls from an api so I can look at certain titles. The response was saved as a .json file. A snippet of my response.json file looks like this: ``` { "apiUrl":"https://api.ft.com/example/83example74-3c9b-11ea-a01a-example547046735", "title": { "title": "Example title example title example title" }, "lifecycle": { "initialPublishDateTime":"2020-01-21T22:54:57Z", "lastPublishDateTime":"2020-01-21T23:38:19Z" }, "location":{ "uri":"https://www.ft.com/exampleurl/83example74-3c9b-11ea-a01a-example547046735" }, "summary": "...", # ............(this continues for all different titles I found) } ``` Since I want to look at the articles I want to generate a list of all urls. I am not interested in the apiUrl but only in the uri. My current python file looks like this ``` with open ("My path to file/response.json") as file: for line in file: urls = re.findall('https://(?:[-\www.]|(?:%[\da-fA-F]{2}))+', line) print(urls) ``` This gives me the following output: `['https://api.ft.com', 'https://www.ft.com', 'https://api.ft.com', 'https://www.ft.com',........` However, I want to be able to see the entire url for www.ft.com ( so not the api.ft.com url's since I'm not interested in those). For example I want my program to extract something like: <https://www.ft.com/thisisanexampleurl/83example74-3c9b-11ea-a01a-example547046735> I want the program to do this for the entire response file Does anyone know a way to do this? All help would be appreciated. Raymond
2020/01/22
[ "https://Stackoverflow.com/questions/59860579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11197012/" ]
If you are using the Materialize CSS framework, make sure you initialize the select again after appending new options. This worked for me: ``` $.each(jsonArray , (key , value)=>{ var option = new Option(value.name , value.id) $('#subcategory').append(option) }) $('select').formSelect(); ```
Try This : ``` function PopulateDropDown(jsonArray) { if (jsonArray != null && jsonArray.length > 0) { $("#subcategory").removeAttr("disabled"); $.each(jsonArray, function () { $("#subcategory").append($("<option></option>").val(this['id']).html(this['name'])); }); } } ```
49,091,870
I want a model with 5 choices, but I cannot enforce them and display the display value in template. I am using CharField(choice=..) instead of ChoiceField or TypeChoiceField as in the [docs](https://docs.djangoproject.com/en/dev/ref/models/instances/#django.db.models.Model.get_FOO_display). I tried the solutions [here](https://stackoverflow.com/questions/1105638/django-templates-verbose-version-of-a-choice) but they don't work for me (see below). model.py: ``` class Language(models.Model): language = models.CharField(max_length=20,blank=False) ILR_scale = ( (5, 'Native'), (4, 'Full professional proficiency'), (3, 'Professional working proficiency'), (2, 'Limited professional proficiency'), (1, 'Elementary professional proficiency') ) level = models.CharField(help_text='Choice between 1 and 5', default=5, max_length=25, choices=ILR_scale) def level_verbose(self): return dict(Language.ILR_scale)[self.level] class Meta: ordering = ['level','id'] def __unicode__(self): return ''.join([self.language, '-', self.level]) ``` view.py ``` .. def index(request): language = Language.objects.all() .. ``` mytemplate.html ``` <div class="subheading strong-underlined mb-3 my-3"> Languages </div> {% regroup language|dictsortreversed:"level" by level as level_list %} <ul> {% for lan_list in level_list %} <li> {% for lan in lan_list.list %} <strong>{{ lan.language }}</strong>: {{ lan.level_verbose }}{%if not forloop.last%},{%endif%} {% endfor %} </li> {% endfor %} </ul> ``` From shell: ``` python3 manage.py shell from resume.models import Language l1=Language.objects.create(language='English',level=4) l1.save() l1.get_level_display() #This is good Out[20]: 'Full professional proficiency' ``` As soon as I create a Language instance from shell I cannot load the site. It fails at line 0 of the template with Exception Type: KeyError, Exception Value: '4', Exception Location: /models.py in level\_verbose, line 175 (which is the return line of the level\_verbose method). 
Also, I was expecting a validation error here from shell: ``` l1.level='asdasd' l1.save() #Why can I save this instance with this level? ``` And I can also save a shown above when using ChoiceField, meaning that I do not understand what that field is used for. How to force instances to take field values within choices, and display the display value in templates?
2018/03/04
[ "https://Stackoverflow.com/questions/49091870", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3592827/" ]
Well, this is a common issue; I hit it too when I started with Django. First let's look at the feature Django gives you (note: with choices defined this way the choice values should be stored as integers, so you should use `models.IntegerField` instead of `models.CharField`): * [get\_FOO\_display()](https://docs.djangoproject.com/en/2.0/ref/models/instances/#django.db.models.Model.get_FOO_display): you are very close with this one. As the documentation says, `FOO` is the field name of your model; in your case it is `level`, so when you want to access the corresponding choice label in the shell or in a view you call the method on a model instance, as you have already done: ``` l1.get_level_display() ``` but when you want to access it in a template you need to write it like below: ``` {{ l1.get_level_display }} ``` * Now let's look at your method `level_verbose()`. Your model is a class and `level_verbose()` is a method you created on it, so you can access `self.ILR_scale` directly, just as you already access `self.level`. The main catch is that the dictionary you build from `ILR_scale` has integer keys `(i.e. 1, 2, 3, 4, 5)`, but you used a `CharField()` to store the level, which hands you back string values `(i.e. '1', '2', '3', '4' or '5')`, and in a Python dictionary the keys `1` and `'1'` are different: one is an integer, the other a string. So you can either change your model field to `models.IntegerField()`, or convert when you access the keys, like ``` dict(self.ILR_scale)[int(self.level)] ```
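The integer-vs-string key mismatch described above is easy to reproduce in plain Python, independent of Django (a small illustrative sketch using the question's own choices tuple):

```python
ILR_scale = (
    (5, 'Native'),
    (4, 'Full professional proficiency'),
    (3, 'Professional working proficiency'),
    (2, 'Limited professional proficiency'),
    (1, 'Elementary professional proficiency'),
)

lookup = dict(ILR_scale)      # keys are the integers 1..5
level = '4'                   # a CharField hands back a string

# lookup[level] would raise KeyError, because '4' != 4.
label = lookup[int(level)]    # converting first fixes it
print(label)  # Full professional proficiency
```

This is exactly the `KeyError: '4'` the asker saw in `level_verbose()`.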
You can also use `models.CharField` but you have to set field option `choices` to your tuples. For exapmle: ``` FRESHMAN = 'FR' SOPHOMORE = 'SO' JUNIOR = 'JR' SENIOR = 'SR' LEVELS = ( (FRESHMAN, 'Freshman'), (SOPHOMORE, 'Sophomore'), (JUNIOR, 'Junior'), (SENIOR, 'Senior'), ) level = models.CharField( max_length=2, choices=LEVELS, default=FRESHMAN, ) ``` Then in your template you can use [get\_FOO\_display()](https://docs.djangoproject.com/en/2.0/ref/models/instances/#django.db.models.Model.get_FOO_display) for example: `{{l1.get_level_display}}` See more in [docs](https://docs.djangoproject.com/en/2.0/ref/models/fields/)
53,520,300
I am using the Python bindings for libVLC in a urwid music player I am building. libVLC keeps outputting some errors about converting time and such when pausing and resuming an mp3 file. As far as I can gather from various posts on the vlc mailing list and forums, these errors appear with mp3 files all the time, and as long as the file is playing like it should, one should not worry about them. That would be the end of it, but the errors keep getting written on top of the urwid interface, and that is a problem. How can I either stop libVLC from outputting these non-essential errors, or perhaps simply prevent them from showing on top of the urwid interface?
2018/11/28
[ "https://Stackoverflow.com/questions/53520300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/150033/" ]
`Privileged_Data` is just a macro that expands to nothing, so the compiler will not even see it after the preprocessor pass. It's probably a readability or company-standards decision to tag some variables like this.
A preprocessor macro can be defined without an associated value. When that is the case, the macro is substituted with nothing after preprocessing. So given this: ``` #define Privileged_Data ``` Then this: ``` Privileged_Data static int dVariable ``` Becomes this after preprocessing: ``` static int dVariable ``` So this particular macro has no effect on the program, and was probably put in place for documentation purposes.
43,714,967
I found (lambda \*\*x: x) is very useful for defining a dict in a succinct way, e.g. ``` xxx = (lambda **x: x)(a=1, b=2, c=3) ``` Is there any pre-defined python function does that?
2017/05/01
[ "https://Stackoverflow.com/questions/43714967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4927088/" ]
The `dict` function/constructor can be used in the same manner. ``` >>> (lambda **x: x)(a=1, b=2, c=3) == dict(a=1, b=2, c=3) True ``` See `help(dict)` for more ways to instantiate `dict`s. You are not limited to just defining them with `{'a': 1, 'b': 2, 'c': 3}`.
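To make the equivalence concrete: the keyword form of the `dict` constructor, the literal form, and the question's lambda all produce equal dictionaries (a small sketch):

```python
make = lambda **x: x  # the question's trick

a = make(a=1, b=2, c=3)
b = dict(a=1, b=2, c=3)            # same keyword syntax, no lambda needed
c = {'a': 1, 'b': 2, 'c': 3}       # literal form
d = dict(zip('abc', (1, 2, 3)))    # yet another constructor form

print(a == b == c == d)  # True
```

The keyword forms share the lambda's restriction that keys must be valid identifiers; the literal and `zip` forms accept arbitrary hashable keys.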
Try the `{}` literal dictionary syntax. It is quite succinct. See [5.5. *Dictionaries*](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) in the **Data Structures tutorial**. ``` >>> xxx = {'a': 1, 'b': 2, 'c': 3} >>> xxx {'a': 1, 'b': 2, 'c': 3} ```
48,103,343
I was a little surprised to find that: ``` # fast_ops_c.pyx cimport cython cimport numpy as np @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.nonecheck(False) def c_iseq_f1(np.ndarray[np.double_t, ndim=1, cast=False] x, double val): # Test (x==val) except gives NaN where x is NaN cdef np.ndarray[np.double_t, ndim=1] result = np.empty_like(x) cdef size_t i = 0 cdef double _x = 0 for i in range(len(x)): _x = x[i] result[i] = (_x-_x) + (_x==val) return result ``` is orders of magnitude faster than: ``` @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function @cython.nonecheck(False) def c_iseq_f2(np.ndarray[np.double_t, ndim=1, cast=False] x, double val): cdef np.ndarray[np.double_t, ndim=1] result = np.empty_like(x) cdef size_t i = 0 cdef double _x = 0 for _x in x: # Iterate over elements result[i] = (_x-_x) + (_x==val) return result ``` (for large arrays). I'm using the following to test the performance: ``` # fast_ops.py try: import pyximport pyximport.install(setup_args={"include_dirs": np.get_include()}, reload_support=True) except Exception: pass from fast_ops_c import * import math import numpy as np NAN = float("nan") import unittest class FastOpsTest(unittest.TestCase): def test_eq_speed(self): from timeit import timeit a = np.random.random(500000) a[1] = 2. a[2] = NAN a2 = c_iseq_f(a, 2.) def f1(): c_iseq_f2(a, 2.) def f2(): c_iseq_f1(a, 2.) # warm up [f1() for x in range(20)] [f2() for x in range(20)] n=1000 dur = timeit(f1, number=n) print dur, "DUR1 s/iter", dur/n dur = timeit(f2, number=n) print dur, "DUR2 s/iter", dur/n dur = timeit(f1, number=n) print dur, "DUR1 s/iter", dur/n assert dur/n <= 0.005 dur = timeit(f2, number=n) print dur, "DUR2 s/iter", dur/n print a2[:10] assert a2[0] == 0. assert a2[1] == 1.
assert math.isnan(a2[2]) ``` I'm guessing that `for _x in x` is interpreted as executing the Python iterator for `x`, and that `for i in range(n):` is interpreted as a C for loop, with `x[i]` interpreted as C's `x[i]` array indexing. However, I'm kinda guessing and trying to follow by example. In its [working with numpy](http://docs.cython.org/en/latest/src/tutorial/numpy.html) docs (and [here](http://docs.cython.org/en/latest/src/userguide/numpy_tutorial.html)), Cython is a little quiet on what's optimized with respect to numpy and what's not. Is there a guide to what *is* optimized? --- Similarly, the following, which assumes contiguous array memory, is considerably faster than either of the above. ``` @cython.boundscheck(False) # turn off bounds-checking for entire function @cython.wraparound(False) # turn off negative index wrapping for entire function def c_iseq_f(np.ndarray[np.double_t, ndim=1, cast=False, mode="c"] x not None, double val): cdef np.ndarray[np.double_t, ndim=1] result = np.empty_like(x) cdef size_t i = 0 cdef double* _xp = &x[0] cdef double* _resultp = &result[0] for i in range(len(x)): _x = _xp[i] _resultp[i] = (_x-_x) + (_x==val) return result ```
2018/01/04
[ "https://Stackoverflow.com/questions/48103343", "https://Stackoverflow.com", "https://Stackoverflow.com/users/48956/" ]
Current versions of Cython (at least >=0.29.20) produce similarly performant C code for both variants. The answer below holds for older Cython versions. --- The reason for this surprise is that `x[i]` is more subtle than it looks. Let's take a look at the following cython function: ``` %%cython def cy_sum(x): cdef double res=0.0 cdef int i for i in range(len(x)): res+=x[i] return res ``` And measure its performance: ``` import numpy as np a=np.random.random((2000,)) %timeit cy_sum(a) >>>1000 loops, best of 3: 542 µs per loop ``` This is pretty slow! If you look into the produced C code, you will see that `x[i]` uses the `__getitem__()` functionality, which takes a `C-double`, creates a Python float object, casts it back to a `C-double` and destroys the temporary Python float. That is quite a lot of overhead for a single `double` addition! Let's make it clear to cython that `x` is a typed memory view: ``` %%cython def cy_sum_memview(double[::1] x): cdef double res=0.0 cdef int i for i in range(len(x)): res+=x[i] return res ``` with much better performance: ``` %timeit cy_sum_memview(a) >>> 100000 loops, best of 3: 4.21 µs per loop ``` So what happened? Because cython knows that `x` is a [typed memory view](http://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html) (I would rather use a typed memory view than a numpy array in the signature of a cython function), it no longer has to use the Python functionality `__getitem__`, but can access the `C-double` value directly, without the need to create an intermediate Python float. But back to numpy arrays. Numpy arrays can be interpreted by cython as typed memory views, and thus `x[i]` can be translated into direct/fast access to the underlying memory. So what about the for-each loop? ``` %%cython cimport array def cy_sum_memview_for(double[::1] x): cdef double res=0.0 cdef double x_ for x_ in x: res+=x_ return res %timeit cy_sum_memview_for(a) >>> 1000 loops, best of 3: 736 µs per loop ``` It is slow again.
So cython seems not to be clever enough to replace the for-each loop with direct/fast access, and once again uses Python functionality, with the resulting overhead. I must confess I'm as surprised as you are, because at first sight there is no good reason why cython should not be able to use fast access for the for-each loop as well. But this is how it is... --- I'm not sure that this is the reason, but the situation is not that simple with two-dimensional arrays. Consider the following code: ``` import numpy as np a=np.zeros((5,1), dtype=int) for d in a: print(int(d)+1) ``` This code works, because `d` is a 1-length array and thus can be converted to a Python scalar via `int(d)`. However, ``` for d in a.T: print(int(d)+1) ``` throws, because now `d`'s length is `5` and thus it cannot be converted to a Python scalar. Because we want this code to have the same behavior as pure Python when cythonized, and it can be determined only at runtime whether the conversion to int is OK or not, we have to use a Python object for `d` first, and only then can we access the content of this array.
Cython can translate `range(len(x))` loops into nearly pure C code: ``` for i in range(len(x)): ``` Generated code: ``` __pyx_t_6 = PyObject_Length(((PyObject *)__pyx_v_x)); if (unlikely(__pyx_t_6 == -1)) __PYX_ERR(0, 17, __pyx_L1_error) for (__pyx_t_7 = 0; __pyx_t_7 < __pyx_t_6; __pyx_t_7+=1) { __pyx_v_i = __pyx_t_7; ``` But this remains Python: ``` for _x in x: # Iterate over elements ``` Generated code: ``` if (likely(PyList_CheckExact(((PyObject *)__pyx_v_x))) || PyTuple_CheckExact(((PyObject *)__pyx_v_x))) { __pyx_t_1 = ((PyObject *)__pyx_v_x); __Pyx_INCREF(__pyx_t_1); __pyx_t_6 = 0; __pyx_t_7 = NULL; } else { __pyx_t_6 = -1; __pyx_t_1 = PyObject_GetIter(((PyObject *)__pyx_v_x)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 12, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_1); __pyx_t_7 = Py_TYPE(__pyx_t_1)->tp_iternext; if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 12, __pyx_L1_error) } for (;;) { if (likely(!__pyx_t_7)) { if (likely(PyList_CheckExact(__pyx_t_1))) { if (__pyx_t_6 >= PyList_GET_SIZE(__pyx_t_1)) break; #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS __pyx_t_3 = PyList_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_3); __pyx_t_6++; if (unlikely(0 < 0)) __PYX_ERR(0, 12, __pyx_L1_error) #else __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 12, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); #endif } else { if (__pyx_t_6 >= PyTuple_GET_SIZE(__pyx_t_1)) break; #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_6); __Pyx_INCREF(__pyx_t_3); __pyx_t_6++; if (unlikely(0 < 0)) __PYX_ERR(0, 12, __pyx_L1_error) #else __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_6); __pyx_t_6++; if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 12, __pyx_L1_error) __Pyx_GOTREF(__pyx_t_3); #endif } } else { __pyx_t_3 = __pyx_t_7(__pyx_t_1); if (unlikely(!__pyx_t_3)) { PyObject* exc_type = PyErr_Occurred(); if (exc_type) { if (likely(exc_type == PyExc_StopIteration ||
PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); else __PYX_ERR(0, 12, __pyx_L1_error) } break; } __Pyx_GOTREF(__pyx_t_3); } __pyx_t_8 = __pyx_PyFloat_AsDouble(__pyx_t_3); if (unlikely((__pyx_t_8 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 12, __pyx_L1_error) __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; __pyx_v__x = __pyx_t_8; /* … */ } __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; ``` Generating this output is typically the best way to find out.
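For reference, the `(x - x) + (x == val)` trick from the question can be checked in pure Python: NaN propagates through the subtraction, so NaN inputs stay NaN while every other entry becomes 0.0 or 1.0. This is a hedged pure-Python sketch of the semantics only, not the Cython code and certainly not fast:

```python
def iseq_f(xs, val):
    """Elementwise xs == val, except positions holding NaN stay NaN."""
    # (x - x) is 0.0 for finite x but NaN for NaN; adding the boolean
    # (coerced to 0.0 or 1.0) then leaves NaN positions as NaN.
    return [(x - x) + (x == val) for x in xs]

out = iseq_f([1.0, 2.0, float('nan'), 2.0], 2.0)
print(out[:2])  # [0.0, 1.0]
```

A NaN result can be detected without `math.isnan` via the `x != x` property of NaN.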
51,584,994
In python if my list is ``` TheTextImage = [["111000"],["222999"]] ``` How would one loop through this list creating a new one of ``` NewTextImage = [["000111"],["999222"]] ``` Can use `[:]` but not `[::-1]`, and cannot use `reverse()`
2018/07/29
[ "https://Stackoverflow.com/questions/51584994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10092065/" ]
You may not use `[::-1]`, but you can multiply each range index by -1. ``` t = [["111000"],["222999"]] def rev(x): return "".join(x[(i+1)*-1] for i in range(len(x))) >>> [[rev(x) for x in z] for z in t] [['000111'], ['999222']] ``` --- If you may use the `step` arg in `range`, you can do AChampion's suggestion: ``` def rev(x): return ''.join(x[i-1] for i in range(0, -len(x), -1)) ```
If you can't use any standard functionality such as `reversed` or `[::-1]`, you can use `collections.deque` and `deque.appendleft` in a loop. Then use a list comprehension to apply the logic to multiple items. ``` from collections import deque L = [["111000"], ["222999"]] def reverser(x): out = deque() for i in x: out.appendleft(i) return ''.join(out) res = [[reverser(x[0])] for x in L] print(res) [['000111'], ['999222']] ``` Note you *could* use a list, but appending to the beginning of a list is inefficient.
51,584,994
In python if my list is ``` TheTextImage = [["111000"],["222999"]] ``` How would one loop through this list creating a new one of ``` NewTextImage = [["000111"],["999222"]] ``` Can use `[:]` but not `[::-1]`, and cannot use `reverse()`
2018/07/29
[ "https://Stackoverflow.com/questions/51584994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10092065/" ]
You know how to copy a sequence to another sequence one by one, right? ``` new_string = '' for ch in old_string: new_string = new_string + ch ``` If you want to copy the sequence in reverse, just add the new values onto the left instead of onto the right: ``` new_string = '' for ch in old_string: new_string = ch + new_string ``` That's really the only trick you need. --- Now, this isn't super-efficient, because string concatenation takes quadratic time. You could solve this by using a `collections.deque` (which you can append to the left of in constant time) and then calling `''.join` at the end. But I doubt your teacher is expecting that from you. Just do it the simple way. --- Of course you have to loop over `TextImage` applying this to every string in every sublist in the list. That's probably what they're expecting you to use `[:]` for. But that's easy; it's just looping over lists.
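Putting the answer's prepend trick together with the looping over nested lists, a complete sketch for the question's data (still using no slicing steps and no `reverse()`):

```python
TheTextImage = [["111000"], ["222999"]]

def rev(s):
    """Reverse a string by prepending each character to the result."""
    out = ''
    for ch in s:
        out = ch + out
    return out

NewTextImage = [[rev(s) for s in row] for row in TheTextImage]
print(NewTextImage)  # [['000111'], ['999222']]
```

For short strings like these the quadratic cost of repeated concatenation is irrelevant, as the answer notes.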
If you can't use any standard functionality such as `reversed` or `[::-1]`, you can use `collections.deque` and `deque.appendleft` in a loop. Then use a list comprehension to apply the logic to multiple items. ``` from collections import deque L = [["111000"], ["222999"]] def reverser(x): out = deque() for i in x: out.appendleft(i) return ''.join(out) res = [[reverser(x[0])] for x in L] print(res) [['000111'], ['999222']] ``` Note you *could* use a list, but appending to the beginning of a list is inefficient.
51,584,994
In python if my list is ``` TheTextImage = [["111000"],["222999"]] ``` How would one loop through this list creating a new one of ``` NewTextImage = [["000111"],["999222"]] ``` Can use `[:]` but not `[::-1]`, and cannot use `reverse()`
2018/07/29
[ "https://Stackoverflow.com/questions/51584994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10092065/" ]
You may not use `[::-1]`, but you can multiply each range index by -1. ``` t = [["111000"],["222999"]] def rev(x): return "".join(x[(i+1)*-1] for i in range(len(x))) >>> [[rev(x) for x in z] for z in t] [['000111'], ['999222']] ``` --- If you may use the `step` arg in `range`, you can do AChampion's suggestion: ``` def rev(x): return ''.join(x[i-1] for i in range(0, -len(x), -1)) ```
You can use `reduce(lambda x,y: y+x, string)` to reverse a string ``` >>> from functools import reduce >>> TheTextImage = [["111000"],["222999"]] >>> [[reduce(lambda x,y: y+x, b) for b in a] for a in TheTextImage] [['000111'], ['999222']] ```
51,584,994
In python if my list is ``` TheTextImage = [["111000"],["222999"]] ``` How would one loop through this list creating a new one of ``` NewTextImage = [["000111"],["999222"]] ``` Can use `[:]` but not `[::-1]`, and cannot use `reverse()`
2018/07/29
[ "https://Stackoverflow.com/questions/51584994", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10092065/" ]
You know how to copy a sequence to another sequence one by one, right? ``` new_string = '' for ch in old_string: new_string = new_string + ch ``` If you want to copy the sequence in reverse, just add the new values onto the left instead of onto the right: ``` new_string = '' for ch in old_string: new_string = ch + new_string ``` That's really the only trick you need. --- Now, this isn't super-efficient, because string concatenation takes quadratic time. You could solve this by using a `collections.deque` (which you can append to the left of in constant time) and then calling `''.join` at the end. But I doubt your teacher is expecting that from you. Just do it the simple way. --- Of course you have to loop over `TextImage` applying this to every string in every sublist in the list. That's probably what they're expecting you to use `[:]` for. But that's easy; it's just looping over lists.
You can use `reduce(lambda x,y: y+x, string)` to reverse a string ``` >>> from functools import reduce >>> TheTextImage = [["111000"],["222999"]] >>> [[reduce(lambda x,y: y+x, b) for b in a] for a in TheTextImage] [['000111'], ['999222']] ```
8,377,157
I want to find the fastest way to do the job of `switch` in C. I'm writing some Python code to replace C code, and it's all working fine except for a bottleneck. This code is used in a tight loop, so it really is quite crucial that I get the best performance. **Optimisation Attempt 1:** First attempt, as previous questions such as [this](https://stackoverflow.com/questions/1429505/python-does-python-have-an-equivalent-to-switch) suggest, was using hash tables for lookups. This ended up being incredibly slow. **Optimisation Attempt 2** Another optimisation I have made is to create a run of `if ... return` statements, which gives me a 13% speed boost. It's still disappointingly slow. **Optimisation Attempt 3** I created an `array.array` of all possible input values, and did an index lookup. This results in an overall speed-up of 43%, which is respectable. I'm running over an `array.array` using `map` and passing a transform function to it. This function is doing the lookup. My switch is working on short integers (it's a typed array). If this were GCC C, the compiler would create a jump table. It's frustrating to know that Python is either hashing my value to look up a table entry or, in the case of `if`, performing lots of comparisons. I know from profiling it that the slow functions are precisely the ones that are doing the look-up. What is the absolute fastest way of mapping one integer to another, mapped over an `array.array` if relevant? Anything faster than the above? EDIT ---- Although it makes me look like an idiot for only just realising, I will say it anyway! Remember that running your code in a profiler slows your code down a *lot*. In my case, 19 times slower. Suddenly my bottleneck isn't so bad! Thanks very much everyone for all your answers. The question is still valid. I'll leave the question open for a bit because there may be some interesting answers.
With profiler, for my test set of data: ``` real 0m37.309s user 0m33.263s sys 0m4.002s ``` without: ``` real 0m2.595s user 0m2.526s sys 0m0.028s ```
2011/12/04
[ "https://Stackoverflow.com/questions/8377157", "https://Stackoverflow.com", "https://Stackoverflow.com/users/148423/" ]
I think others are right to suggest numpy or pure c; but for pure python, here are some timings, for what they're worth. Based on these, I'm a bit surprised that `array.array` performed so much better than a `dict`. Are you creating these tables on the fly inside the loop? Or have I misunderstood something else about your question? In any case, this suggests that a `list` is actually the best way to go. ``` >>> def make_lookup_func(table): ... def lookup(val, t=table): ... return t[val] ... return lookup ... >>> lookup_tuple = make_lookup_func(tuple(range(10))) >>> lookup_list = make_lookup_func(list(range(10))) >>> lookup_array = make_lookup_func(array.array('i', range(10))) >>> lookup_dict = make_lookup_func(dict(zip(range(10), range(10)))) >>> %timeit lookup_tuple(9) 10000000 loops, best of 3: 177 ns per loop >>> %timeit lookup_list(9) 10000000 loops, best of 3: 158 ns per loop >>> %timeit lookup_array(9) 10000000 loops, best of 3: 181 ns per loop >>> %timeit lookup_dict(9) 10000000 loops, best of 3: 166 ns per loop ``` Scaling behavior: ``` >>> lookup_tuple = make_lookup_func(tuple(range(10000))) >>> lookup_list = make_lookup_func(list(range(10000))) >>> lookup_array = make_lookup_func(array.array('i', range(10000))) >>> lookup_dict = make_lookup_func(dict(zip(range(10000), range(10000)))) >>> %timeit lookup_tuple(9000) 10000000 loops, best of 3: 177 ns per loop >>> %timeit lookup_list(9000) 10000000 loops, best of 3: 158 ns per loop >>> %timeit lookup_array(9000) 10000000 loops, best of 3: 186 ns per loop >>> %timeit lookup_dict(9000) 10000000 loops, best of 3: 195 ns per loop ```
Branch logic in general can be painfully slow in Python when used in this type of application, and you have basically struck on one of the better ways of doing this for a tight inner loop where you are converting between integers. A few more things to experiment with: you might try working with [np.array](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#arrays-indexing) or using [Cython](http://cython.org/) (or just straight C) for the tight loop. These require some additional setup (and possibly writing the inner loop in C), but can also give tremendous speedups for this type of application and can let you take advantage of a good C optimizer. Something that can go either way and is more of a micro-optimization is that you could try using a list comprehension instead of a map, or make sure you aren't using a lambda in your map. Not using a lambda in a `map()` is actually a pretty big one, while the difference between a list comprehension and a map tends to be relatively small otherwise.
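For instance, if the lookup table and the input both live in NumPy arrays, the whole map step collapses into one fancy-indexing operation. This is a sketch with made-up values, not the asker's actual table:

```python
import numpy as np

# Hypothetical lookup table: table[i] is the value integer i maps to.
table = np.array([0, 10, 20, 30, 40], dtype=np.int16)

# Input values to translate, as a typed array of short integers.
data = np.array([1, 3, 3, 0, 4], dtype=np.int16)

# One vectorized indexing step replaces the per-element Python call.
mapped = table[data]
assert mapped.tolist() == [10, 30, 30, 0, 40]
```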
19,174,634
**I found a better error message (see below).** I have a model called App in core/models.py. The error occurs when trying to access a specific app object in django admin. Even on an empty database (after syncdb) with a single app object. Seems core\_app\_history is something django generated. Any help is appreciated. Here is the exception: ``` NoReverseMatch at /admin/core/app/251/ Reverse for 'core_app_history' with arguments '(u'',)' and keyword arguments '{}' not found. Request Method: GET Request URL: http://weblocal:8001/admin/core/app/251/ Django Version: 1.5.4 Exception Type: NoReverseMatch Exception Value: Reverse for 'core_app_history' with arguments '(u'',)' and keyword arguments '{}' not found. Exception Location: /opt/virtenvs/django_slice/local/lib/python2.7/site-packages/django/template/defaulttags.py in render, line 426 Python Executable: /opt/virtenvs/django_slice/bin/python Python Version: 2.7.3 Python Path: ['/opt/src/slicephone/cloud', '/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg', '/opt/virtenvs/django_slice/local/lib/python2.7/site-packages/distribute-0.6.35-py2.7.egg', '/opt/virtenvs/django_slice/lib/python2.7', '/opt/virtenvs/django_slice/lib/python2.7/plat-linux2', '/opt/virtenvs/django_slice/lib/python2.7/lib-tk', '/opt/virtenvs/django_slice/lib/python2.7/lib-old', '/opt/virtenvs/django_slice/lib/python2.7/lib-dynload', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/opt/virtenvs/django_slice/local/lib/python2.7/site-packages'] Server time: Fri, 11 Oct 2013 22:06:43 +0000 ``` And it occurs in /django/contrib/admin/templates/admin/change\_form.html ``` 32 <li><a href="{% url opts|admin_urlname:'history' original.pk|admin_urlquote %}" class="historylink">{% trans "History" %}</a></li> ``` Here is the (possible) relevant urls: ``` /admin/core/app/ HANDLER: changelist_view 
/admin/core/app/add/ HANDLER: add_view /admin/core/app/(.+)/history/ HANDLER: history_view /admin/core/app/(.+)/delete/ HANDLER: delete_view /admin/core/app/(.+)/ HANDLER: change_view ```
2013/10/04
[ "https://Stackoverflow.com/questions/19174634", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252307/" ]
I think this is not valid JSON. JSON should look like: ``` [ { "id": 1, "src": "src1", "name": "name1" }, { "id": 2, "src": "src2", "name": "name2" }, { "id": 3, "src": "src3", "name": "name3" }, { "id": 4, "src": "src4", "name": "name4" } ] ``` Validate your JSON at <http://jsonlint.com/>
Your outer JSON object does not have a key under which the internal list is stored. Also, your strings in JSON should be quoted: `src1` and `name1` are unquoted.
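To illustrate (a sketch using only the standard library): the unquoted form fails to parse, while the quoted form is valid JSON.

```python
import json

# Unquoted values are rejected by the parser.
try:
    json.loads('[{"id": 1, "src": src1, "name": name1}]')
    parsed = True
except json.JSONDecodeError:
    parsed = False
assert parsed is False

# With the values quoted, the document parses cleanly.
items = json.loads('[{"id": 1, "src": "src1", "name": "name1"}]')
assert items[0]["src"] == "src1"
```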
54,761,993
Passing the file as an argument and storing it to an object reference seems very straightforward and easy to understand for the open() function; however, the read() function does not take the file as an argument and uses the format file.read() instead. Why does the read function not take the file as an argument, such as read(in\_file), and why is it not included in the Python Standard Library's list of built-in functions? I've checked the list of built-in functions in the standard library: <https://docs.python.org/3/library/functions.html#open> ``` # calls the open function passing from_file argument and storing to in_file object reference in_file = open(from_file) # why is this not written as read(in_file) instead? in_data = in_file.read() ```
2019/02/19
[ "https://Stackoverflow.com/questions/54761993", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11000101/" ]
It's not included there because it's not a *function*, it's a *method* of the object that's exposing a file-oriented API, which is, in this case, `in_file`.
Because you have a file reference from `in_file = open(from_file)`, calling `in_file.read()` invokes `read` on that reference itself. The file object fills the role of `self`: it is passed implicitly as the first argument of the method.
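A small sketch of this (using `io.StringIO`, which exposes the same file API): `obj.read()` and `type(obj).read(obj)` are two spellings of the same call, which is why no separate `read(in_file)` built-in is needed.

```python
import io

# read is a bound method: the file-like object itself is the implicit self.
f = io.StringIO("hello")
assert f.read() == "hello"

# Calling the method through the class makes self explicit.
g = io.StringIO("world")
assert io.StringIO.read(g) == "world"
```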
49,314,270
I'm stuck creating a dbf file in Python 3 with the dbf lib. I tried this: ``` import dbf Tbl = dbf.Table( 'sample.dbf', 'ID N(6,0); FCODE C(10)') Tbl.open('read-write') Tbl.append() with Tbl.last_record as rec: rec.ID = 5 rec.FCODE = 'GA24850000' ``` and get this error: ``` Traceback (most recent call last): File "c:\Users\operator\Desktop\2.py", line 3, in <module> Tbl.open('read-write') File "C:\Users\operator\AppData\Local\Programs\Python\Python36-32\lib\site-packages\dbf\__init__.py", line 5778, in open raise DbfError("mode for open must be 'read-write' or 'read-only', not %r" % mode) dbf.DbfError: mode for open must be 'read-write' or 'read-only', not 'read-write' ``` If I remove 'read-write', I get this instead: ``` Traceback (most recent call last): File "c:\Users\operator\Desktop\2.py", line 4, in <module> Tbl.append() File "C:\Users\operator\AppData\Local\Programs\Python\Python36-32\lib\site-packages\dbf\__init__.py", line 5492, in append raise DbfError('%s not in read/write mode, unable to append records' % meta.filename) dbf.DbfError: sample.dbf not in read/write mode, unable to append records ``` What am I doing wrong? If I don't try to append, I just get a .dbf with the right columns, so the dbf library works.
2018/03/16
[ "https://Stackoverflow.com/questions/49314270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9394255/" ]
I had the same error. In older versions of the dbf module, I was able to write dbf files by opening them just with `Tbl.open()`. However, with the new version (dbf 0.97), I have to open the files with `Tbl.open(mode=dbf.READ_WRITE)` in order to be able to write to them.
here's an append example: ``` table = dbf.Table('sample.dbf', 'cod N(1,0); name C(30)') table.open(mode=dbf.READ_WRITE) row_tuple = (1, 'Name') table.append(row_tuple) ```
14,633,952
I'm new to Elastic Search and to the non-SQL paradigm. I've been following the ES tutorial, but there is one thing I couldn't put to work. In the following code (I'm using [PyES](http://packages.python.org/pyes/) to interact with ES) I create a single document, with a nested field (subjects), that contains another nested field (concepts). ``` from pyes import * conn = ES('127.0.0.1:9200') # Use HTTP # Delete and Create a new index. conn.indices.delete_index("documents-index") conn.create_index("documents-index") # Create a single document. document = { "docid": 123456789, "title": "This is the doc title.", "description": "This is the doc description.", "datepublished": 2005, "author": ["Joe", "John", "Charles"], "subjects": [{ "subjectname": 'subject1', "subjectid": [210, 311, 1012, 784, 568], "subjectkey": 2, "concepts": [ {"name": "concept1", "score": 75}, {"name": "concept2", "score": 55} ] }, { "subjectname": 'subject2', "subjectid": [111, 300, 141, 457, 748], "subjectkey": 0, "concepts": [ {"name": "concept3", "score": 88}, {"name": "concept4", "score": 55}, {"name": "concept5", "score": 66} ] }], } # Define the nested elements. mapping1 = { 'subjects': { 'type': 'nested' } } mapping2 = { 'concepts': { 'type': 'nested' } } conn.put_mapping("document", {'properties': mapping1}, ["documents-index"]) conn.put_mapping("subjects", {'properties': mapping2}, ["documents-index"]) # Insert document in 'documents-index' index. conn.index(document, "documents-index", "document", 1) # Refresh connection to make queries. conn.refresh() ``` I'm able to query the *subjects* nested field: ``` query1 = { "nested": { "path": "subjects", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"subjects.subjectname": "subject1"} }, { "range": {"subjects.subjectkey": {"gt": 1}} } ] } } } } results = conn.search(query=query1) for r in results: print r # as expected, it returns the entire document. ``` but I can't figure out how to query based on the *concepts* nested field. 
ES [documentation](http://www.elasticsearch.org/guide/reference/query-dsl/nested-query.html) states that > > Multi level nesting is automatically supported, and detected, > resulting in an inner nested query to automatically match the relevant > nesting level (and not root) if it exists within another nested query. > > > So, I tried to build a query with the following format: ``` query2 = { "nested": { "path": "concepts", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"concepts.name": "concept1"} }, { "range": {"concepts.score": {"gt": 0}} } ] } } } } ``` which returned 0 results. I can't figure out what is missing and I haven't found any example with queries based on two levels of nesting.
2013/01/31
[ "https://Stackoverflow.com/questions/14633952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/759733/" ]
OK, after trying a ton of combinations, I finally got it using the following query: ``` query3 = { "nested": { "path": "subjects", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"subjects.concepts.name": "concept1"} } ] } } } } ``` So, the nested **path** attribute (*subjects*) is always the same, no matter the nesting level of the attribute, and in the query definition I used the attribute's full path (*subjects.concepts.name*).
Shot in the dark since I haven't tried this personally, but have you tried the fully qualified path to Concepts? ``` query2 = { "nested": { "path": "subjects.concepts", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"subjects.concepts.name": "concept1"} }, { "range": {"subjects.concepts.score": {"gt": 0}} } ] } } } } ```
14,633,952
I'm new to Elastic Search and to the non-SQL paradigm. I've been following the ES tutorial, but there is one thing I couldn't put to work. In the following code (I'm using [PyES](http://packages.python.org/pyes/) to interact with ES) I create a single document, with a nested field (subjects), that contains another nested field (concepts). ``` from pyes import * conn = ES('127.0.0.1:9200') # Use HTTP # Delete and Create a new index. conn.indices.delete_index("documents-index") conn.create_index("documents-index") # Create a single document. document = { "docid": 123456789, "title": "This is the doc title.", "description": "This is the doc description.", "datepublished": 2005, "author": ["Joe", "John", "Charles"], "subjects": [{ "subjectname": 'subject1', "subjectid": [210, 311, 1012, 784, 568], "subjectkey": 2, "concepts": [ {"name": "concept1", "score": 75}, {"name": "concept2", "score": 55} ] }, { "subjectname": 'subject2', "subjectid": [111, 300, 141, 457, 748], "subjectkey": 0, "concepts": [ {"name": "concept3", "score": 88}, {"name": "concept4", "score": 55}, {"name": "concept5", "score": 66} ] }], } # Define the nested elements. mapping1 = { 'subjects': { 'type': 'nested' } } mapping2 = { 'concepts': { 'type': 'nested' } } conn.put_mapping("document", {'properties': mapping1}, ["documents-index"]) conn.put_mapping("subjects", {'properties': mapping2}, ["documents-index"]) # Insert document in 'documents-index' index. conn.index(document, "documents-index", "document", 1) # Refresh connection to make queries. conn.refresh() ``` I'm able to query the *subjects* nested field: ``` query1 = { "nested": { "path": "subjects", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"subjects.subjectname": "subject1"} }, { "range": {"subjects.subjectkey": {"gt": 1}} } ] } } } } results = conn.search(query=query1) for r in results: print r # as expected, it returns the entire document. ``` but I can't figure out how to query based on the *concepts* nested field. 
ES [documentation](http://www.elasticsearch.org/guide/reference/query-dsl/nested-query.html) states that > > Multi level nesting is automatically supported, and detected, > resulting in an inner nested query to automatically match the relevant > nesting level (and not root) if it exists within another nested query. > > > So, I tried to build a query with the following format: ``` query2 = { "nested": { "path": "concepts", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"concepts.name": "concept1"} }, { "range": {"concepts.score": {"gt": 0}} } ] } } } } ``` which returned 0 results. I can't figure out what is missing and I haven't found any example with queries based on two levels of nesting.
2013/01/31
[ "https://Stackoverflow.com/questions/14633952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/759733/" ]
Shot in the dark since I haven't tried this personally, but have you tried the fully qualified path to Concepts? ``` query2 = { "nested": { "path": "subjects.concepts", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"subjects.concepts.name": "concept1"} }, { "range": {"subjects.concepts.score": {"gt": 0}} } ] } } } } ```
I have a question about JCJS's answer: shouldn't the mapping be like this? ``` mapping = { "subjects": { "type": "nested", "properties": { "concepts": { "type": "nested" } } } } ``` Defining two separate type mappings may not work and leaves the data flattened; I think `concepts` should be nested inside the `subjects` properties. Finally, if we use this mapping, the nested query should look like this: ``` { "query": { "nested": { "path": "subjects.concepts", "query": { "term": { "name": { "value": "concept1" } } } } } } ``` It's vital to use the full path for the `path` attribute, but the term key can be either a full or a relative path.
14,633,952
I'm new to Elastic Search and to the non-SQL paradigm. I've been following the ES tutorial, but there is one thing I couldn't put to work. In the following code (I'm using [PyES](http://packages.python.org/pyes/) to interact with ES) I create a single document, with a nested field (subjects), that contains another nested field (concepts). ``` from pyes import * conn = ES('127.0.0.1:9200') # Use HTTP # Delete and Create a new index. conn.indices.delete_index("documents-index") conn.create_index("documents-index") # Create a single document. document = { "docid": 123456789, "title": "This is the doc title.", "description": "This is the doc description.", "datepublished": 2005, "author": ["Joe", "John", "Charles"], "subjects": [{ "subjectname": 'subject1', "subjectid": [210, 311, 1012, 784, 568], "subjectkey": 2, "concepts": [ {"name": "concept1", "score": 75}, {"name": "concept2", "score": 55} ] }, { "subjectname": 'subject2', "subjectid": [111, 300, 141, 457, 748], "subjectkey": 0, "concepts": [ {"name": "concept3", "score": 88}, {"name": "concept4", "score": 55}, {"name": "concept5", "score": 66} ] }], } # Define the nested elements. mapping1 = { 'subjects': { 'type': 'nested' } } mapping2 = { 'concepts': { 'type': 'nested' } } conn.put_mapping("document", {'properties': mapping1}, ["documents-index"]) conn.put_mapping("subjects", {'properties': mapping2}, ["documents-index"]) # Insert document in 'documents-index' index. conn.index(document, "documents-index", "document", 1) # Refresh connection to make queries. conn.refresh() ``` I'm able to query the *subjects* nested field: ``` query1 = { "nested": { "path": "subjects", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"subjects.subjectname": "subject1"} }, { "range": {"subjects.subjectkey": {"gt": 1}} } ] } } } } results = conn.search(query=query1) for r in results: print r # as expected, it returns the entire document. ``` but I can't figure out how to query based on the *concepts* nested field. 
ES [documentation](http://www.elasticsearch.org/guide/reference/query-dsl/nested-query.html) states that > > Multi level nesting is automatically supported, and detected, > resulting in an inner nested query to automatically match the relevant > nesting level (and not root) if it exists within another nested query. > > > So, I tried to build a query with the following format: ``` query2 = { "nested": { "path": "concepts", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"concepts.name": "concept1"} }, { "range": {"concepts.score": {"gt": 0}} } ] } } } } ``` which returned 0 results. I can't figure out what is missing and I haven't found any example with queries based on two levels of nesting.
2013/01/31
[ "https://Stackoverflow.com/questions/14633952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/759733/" ]
OK, after trying a ton of combinations, I finally got it using the following query: ``` query3 = { "nested": { "path": "subjects", "score_mode": "avg", "query": { "bool": { "must": [ { "text": {"subjects.concepts.name": "concept1"} } ] } } } } ``` So, the nested **path** attribute (*subjects*) is always the same, no matter the nesting level of the attribute, and in the query definition I used the attribute's full path (*subjects.concepts.name*).
I have a question about JCJS's answer: shouldn't the mapping be like this? ``` mapping = { "subjects": { "type": "nested", "properties": { "concepts": { "type": "nested" } } } } ``` Defining two separate type mappings may not work and leaves the data flattened; I think `concepts` should be nested inside the `subjects` properties. Finally, if we use this mapping, the nested query should look like this: ``` { "query": { "nested": { "path": "subjects.concepts", "query": { "term": { "name": { "value": "concept1" } } } } } } ``` It's vital to use the full path for the `path` attribute, but the term key can be either a full or a relative path.
22,444,378
I am looking for a simple solution to display thumbnails using wxPython. This is not about creating the thumbnails. I have a directory of thumbnails and want to display them on the screen. I am purposely not using terms like (Panel, Frame, Window, ScrolledWindow) because I am open to various solutions. Also note I have found multiple examples for displaying a single image, so referencing any such solution will not help me. The solution must be for displaying multiple images at the same time in wx. It seems that what I want to do is being done in ThumbnailCtrl, but Andrea's code is complex and I cannot find the portion that does the display to screen. I did find a simple solution in Mark Lutz's Programming Python book, but while his viewer\_thumbs.py example definitely has the simplicity that I am looking for, it was done using Tkinter. So please any wx solution will be greatly appreciated. EDIT: I am adding a link to one place where Mark Lutz's working Tkinter code can be found. Can anyone think of a wx equivalent? <http://codeidol.com/community/python/viewing-and-processing-images-with-pil/17565/#part-33>
2014/03/16
[ "https://Stackoverflow.com/questions/22444378", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3381864/" ]
I would recommend using the ThumbNailCtrl widget: <http://wxpython.org/Phoenix/docs/html/lib.agw.thumbnailctrl.html>. There is a good example in the wxPython demo. Or you could use this one from the documentation. Note that the ThumbNailCtrl requires the Python Imaging Library to be installed. ``` import os import wx import wx.lib.agw.thumbnailctrl as TC class MyFrame(wx.Frame): def __init__(self, parent): wx.Frame.__init__(self, parent, -1, "ThumbnailCtrl Demo") panel = wx.Panel(self) sizer = wx.BoxSizer(wx.VERTICAL) thumbnail = TC.ThumbnailCtrl(panel, imagehandler=TC.NativeImageHandler) sizer.Add(thumbnail, 1, wx.EXPAND | wx.ALL, 10) thumbnail.ShowDir(os.getcwd()) panel.SetSizer(sizer) # our normal wxApp-derived class, as usual app = wx.App(0) frame = MyFrame(None) app.SetTopWindow(frame) frame.Show() app.MainLoop() ``` Just change the line **thumbnail.ShowDir(os.getcwd())** so that it points at the right folder on your machine. I also wrote up an article for viewing photos here: <http://www.blog.pythonlibrary.org/2010/03/26/creating-a-simple-photo-viewer-with-wxpython/> It doesn't use thumbnails though.
I would just display them as wx.Image inside a frame. <http://www.wxpython.org/docs/api/wx.Image-class.html> From the class: "A platform-independent image class. An image can be created from data, or using wx.Bitmap.ConvertToImage, or loaded from a file in a variety of formats. Functions are available to set and get image bits, so it can be used for basic image manipulation." Seems it should be able to do what you want, unless I'm missing something.
22,444,378
I am looking for a simple solution to display thumbnails using wxPython. This is not about creating the thumbnails. I have a directory of thumbnails and want to display them on the screen. I am purposely not using terms like (Panel, Frame, Window, ScrolledWindow) because I am open to various solutions. Also note I have found multiple examples for displaying a single image, so referencing any such solution will not help me. The solution must be for displaying multiple images at the same time in wx. It seems that what I want to do is being done in ThumbnailCtrl, but Andrea's code is complex and I cannot find the portion that does the display to screen. I did find a simple solution in Mark Lutz's Programming Python book, but while his viewer\_thumbs.py example definitely has the simplicity that I am looking for, it was done using Tkinter. So please any wx solution will be greatly appreciated. EDIT: I am adding a link to one place where Mark Lutz's working Tkinter code can be found. Can anyone think of a wx equivalent? <http://codeidol.com/community/python/viewing-and-processing-images-with-pil/17565/#part-33>
2014/03/16
[ "https://Stackoverflow.com/questions/22444378", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3381864/" ]
Not sure if I am supposed to answer my own question but I did find a solution to my problem and I wanted to share. I was using wx version 2.8. I found that in 2.9 and 3.0 there was a widget added called WrapSizer. Once I updated my version of wx to 3.0 that made the solution beyond simple. Here are the code snippets that matter. ``` self.PhotoMaxWidth = 100 self.PhotoMaxHeight = 100 self.GroupOfThumbnailsSizer = wx.WrapSizer() self.CreateThumbNails(len(ListOfPhotots),ListOfPhotots) self.GroupOfThumbnailsSizer.SetSizeHints(self.whateverPanel) self.whateverPanel.SetSizer(self.GroupOfThumbnailsSizer) self.whateverPanel.Layout() def CreateThumbNails(self, n, ListOfFiles): thumbnails = [] backgroundcolor = "white" for i in range(n): ThumbnailSizer = wx.BoxSizer(wx.VERTICAL) self.GroupOfThumbnailsSizer.Add(ThumbnailSizer, 0, 0, 0) thumbnails.append(ThumbnailSizer) for thumbnailcounter, thumbsizer in enumerate(thumbnails): image = Image.open(ListOfFiles[thumbnailcounter]) image = self.ResizeAndCenterImage(image, self.PhotoMaxWidth, self.PhotoMaxHeight, backgroundcolor) img = self.pil_to_image(image) thumb= wx.StaticBitmap(self.timelinePanel, wx.ID_ANY, wx.BitmapFromImage(img)) thumbsizer.Add(thumb, 0, wx.ALL, 5) return def pil_to_image(self, pil, alpha=True): """ Method will convert PIL Image to wx.Image """ if alpha: image = apply( wx.EmptyImage, pil.size ) image.SetData( pil.convert( "RGB").tostring() ) image.SetAlphaData(pil.convert("RGBA").tostring()[3::4]) else: image = wx.EmptyImage(pil.size[0], pil.size[1]) new_image = pil.convert('RGB') data = new_image.tostring() image.SetData(data) return image def ResizeAndCenterImage(self, image, NewWidth, NewHeight, backgroundcolor): width_ratio = NewWidth / float(image.size[0]) temp_height = int(image.size[1] * width_ratio) if temp_height < NewHeight: img2 = image.resize((NewWidth, temp_height), Image.ANTIALIAS) else: height_ratio = NewHeight / float(image.size[1]) temp_width = int(image.size[0] * height_ratio) img2 = 
image.resize((temp_width, NewHeight), Image.ANTIALIAS) background = Image.new("RGB", (NewWidth, NewHeight), backgroundcolor) masterwidth = background.size[0] masterheight = background.size[1] subwidth = img2.size[0] subheight = img2.size[1] mastercenterwidth = masterwidth // 2 mastercenterheight = masterheight // 2 subcenterwidth = subwidth // 2 subcenterheight = subheight // 2 insertpointwidth = mastercenterwidth - subcenterwidth insertpointheight = mastercenterheight - subcenterheight background.paste(img2, (insertpointwidth, insertpointheight)) return background ``` I got the pil\_to\_image portion from another Stack Overflow post, and I wrote the ResizeAndCenterImage portion to make all of my thumbnails the same size while keeping the aspect ratio intact without doing any cropping. The resize-and-center call can be skipped altogether if you like.
I would just display them as wx.Image inside a frame. <http://www.wxpython.org/docs/api/wx.Image-class.html> From the class: "A platform-independent image class. An image can be created from data, or using wx.Bitmap.ConvertToImage, or loaded from a file in a variety of formats. Functions are available to set and get image bits, so it can be used for basic image manipulation." Seems it should be able to do what you want, unless I'm missing something.
22,444,378
I am looking for a simple solution to display thumbnails using wxPython. This is not about creating the thumbnails. I have a directory of thumbnails and want to display them on the screen. I am purposely not using terms like (Panel, Frame, Window, ScrolledWindow) because I am open to various solutions. Also note I have found multiple examples for displaying a single image, so referencing any such solution will not help me. The solution must be for displaying multiple images at the same time in wx. It seems that what I want to do is being done in ThumbnailCtrl, but Andrea's code is complex and I cannot find the portion that does the display to screen. I did find a simple solution in Mark Lutz's Programming Python book, but while his viewer\_thumbs.py example definitely has the simplicity that I am looking for, it was done using Tkinter. So please any wx solution will be greatly appreciated. EDIT: I am adding a link to one place where Mark Lutz's working Tkinter code can be found. Can anyone think of a wx equivalent? <http://codeidol.com/community/python/viewing-and-processing-images-with-pil/17565/#part-33>
2014/03/16
[ "https://Stackoverflow.com/questions/22444378", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3381864/" ]
Not sure if I am supposed to answer my own question but I did find a solution to my problem and I wanted to share. I was using wx version 2.8. I found that in 2.9 and 3.0 there was a widget added called WrapSizer. Once I updated my version of wx to 3.0 that made the solution beyond simple. Here are the code snippets that matter. ``` self.PhotoMaxWidth = 100 self.PhotoMaxHeight = 100 self.GroupOfThumbnailsSizer = wx.WrapSizer() self.CreateThumbNails(len(ListOfPhotots),ListOfPhotots) self.GroupOfThumbnailsSizer.SetSizeHints(self.whateverPanel) self.whateverPanel.SetSizer(self.GroupOfThumbnailsSizer) self.whateverPanel.Layout() def CreateThumbNails(self, n, ListOfFiles): thumbnails = [] backgroundcolor = "white" for i in range(n): ThumbnailSizer = wx.BoxSizer(wx.VERTICAL) self.GroupOfThumbnailsSizer.Add(ThumbnailSizer, 0, 0, 0) thumbnails.append(ThumbnailSizer) for thumbnailcounter, thumbsizer in enumerate(thumbnails): image = Image.open(ListOfFiles[thumbnailcounter]) image = self.ResizeAndCenterImage(image, self.PhotoMaxWidth, self.PhotoMaxHeight, backgroundcolor) img = self.pil_to_image(image) thumb= wx.StaticBitmap(self.timelinePanel, wx.ID_ANY, wx.BitmapFromImage(img)) thumbsizer.Add(thumb, 0, wx.ALL, 5) return def pil_to_image(self, pil, alpha=True): """ Method will convert PIL Image to wx.Image """ if alpha: image = apply( wx.EmptyImage, pil.size ) image.SetData( pil.convert( "RGB").tostring() ) image.SetAlphaData(pil.convert("RGBA").tostring()[3::4]) else: image = wx.EmptyImage(pil.size[0], pil.size[1]) new_image = pil.convert('RGB') data = new_image.tostring() image.SetData(data) return image def ResizeAndCenterImage(self, image, NewWidth, NewHeight, backgroundcolor): width_ratio = NewWidth / float(image.size[0]) temp_height = int(image.size[1] * width_ratio) if temp_height < NewHeight: img2 = image.resize((NewWidth, temp_height), Image.ANTIALIAS) else: height_ratio = NewHeight / float(image.size[1]) temp_width = int(image.size[0] * height_ratio) img2 = 
image.resize((temp_width, NewHeight), Image.ANTIALIAS) background = Image.new("RGB", (NewWidth, NewHeight), backgroundcolor) masterwidth = background.size[0] masterheight = background.size[1] subwidth = img2.size[0] subheight = img2.size[1] mastercenterwidth = masterwidth // 2 mastercenterheight = masterheight // 2 subcenterwidth = subwidth // 2 subcenterheight = subheight // 2 insertpointwidth = mastercenterwidth - subcenterwidth insertpointheight = mastercenterheight - subcenterheight background.paste(img2, (insertpointwidth, insertpointheight)) return background ``` I got the pil\_to\_image portion from another Stack Overflow post, and I wrote the ResizeAndCenterImage portion to make all of my thumbnails the same size while keeping the aspect ratio intact without doing any cropping. The resize-and-center call can be skipped altogether if you like.
I would recommend using the ThumbNailCtrl widget: <http://wxpython.org/Phoenix/docs/html/lib.agw.thumbnailctrl.html>. There is a good example in the wxPython demo. Or you could use this one from the documentation. Note that the ThumbNailCtrl requires the Python Imaging Library to be installed. ``` import os import wx import wx.lib.agw.thumbnailctrl as TC class MyFrame(wx.Frame): def __init__(self, parent): wx.Frame.__init__(self, parent, -1, "ThumbnailCtrl Demo") panel = wx.Panel(self) sizer = wx.BoxSizer(wx.VERTICAL) thumbnail = TC.ThumbnailCtrl(panel, imagehandler=TC.NativeImageHandler) sizer.Add(thumbnail, 1, wx.EXPAND | wx.ALL, 10) thumbnail.ShowDir(os.getcwd()) panel.SetSizer(sizer) # our normal wxApp-derived class, as usual app = wx.App(0) frame = MyFrame(None) app.SetTopWindow(frame) frame.Show() app.MainLoop() ``` Just change the line **thumbnail.ShowDir(os.getcwd())** so that it points at the right folder on your machine. I also wrote up an article for viewing photos here: <http://www.blog.pythonlibrary.org/2010/03/26/creating-a-simple-photo-viewer-with-wxpython/> It doesn't use thumbnails though.
39,053,393
I'm using the formula "product of two number is equal to the product of their GCD and LCM". Here's my code : ``` # Uses python3 import sys def hcf(x, y): while(y): x, y = y, x % y return x a,b = map(int,sys.stdin.readline().split()) res=int(((a*b)/hcf(a,b))) print(res) ``` It works great for small numbers. But when i give input as : > > Input: > 226553150 1023473145 > > > My output: > 46374212988031352 > > > Correct output: > 46374212988031350 > > > Can anyone please tell me where am I going wrong ?
2016/08/20
[ "https://Stackoverflow.com/questions/39053393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6032875/" ]
Elaborating on the comments. In Python 3, true division, `/`, converts its arguments to floats. In your example, the true answer of `lcm(226553150, 1023473145)` is `46374212988031350`. By looking at `bin(46374212988031350)` you can verify that this is a 56 bit number. When you compute `226553150*1023473145/5` (5 is the gcd) you get `4.637421298803135e+16`. Documentation suggests that such floats only have 53 bits of precision. Since 53 < 56, you have lost information. Using `//` avoids this. Somewhat counterintuitively, in cases like this it is "true" division which is actually false. By the way, a useful module when dealing with exact calculations involving large integers is [fractions](https://docs.python.org/3/library/fractions.html) (\*): ``` from fractions import gcd def lcm(a,b): return a*b // gcd(a,b) >>> lcm(226553150,1023473145) 46374212988031350 ``` (\*) I just noticed that the documentation on `fractions` says this about its `gcd`: "Deprecated since version 3.5: Use math.gcd() instead", but I decided to keep the reference to `fractions` since it is still good to know about it and you might be using a version prior to 3.5.
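To make the 53-bit point concrete, here is a small self-contained check (my addition, using the numbers from the question):

```python
import math

a, b = 226553150, 1023473145
g = math.gcd(a, b)          # 5 for these inputs

# Floor division stays in exact integer arithmetic.
exact = a * b // g

# True division goes through a float (53-bit mantissa); the true quotient
# needs 56 bits, so the low bits are rounded away.
rounded = int(a * b / g)

print(exact)    # 46374212988031350
print(rounded)  # 46374212988031352
```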
You should use a different method to find the **GCD**; that may be the issue. Use: ``` def hcfnaive(a, b): if(b == 0): return abs(a) else: return hcfnaive(b, a % b) ``` You can try one more method: ``` import math a = 13 b = 5 print((a*b)//math.gcd(a,b)) ```
46,996,102
python is new to me and I'm facing this little, probably for most of you really easy to solve, problem. I am trying for the first time to use a class so I dont have to make so many functions and just pick one out of the class!! so here is what I have writen so far: ``` from tkinter import * import webbrowser class web_open3: A = "webbrowser.open(www.google.de") def open(self): self.A = webbrowser.open("www.google.de") test = web_open3.open() root = Tk() b1 = Button(root, text="button", command=test) b1.pack() root.mainloop() ``` The Error I get : > > Traceback (most recent call last): > line 11, in > test = web\_open3.open() > TypeError: open() missing 1 required positional argument: 'self' > > > greetings Slake
2017/10/29
[ "https://Stackoverflow.com/questions/46996102", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8839994/" ]
You need to instantiate the class first: `variable = web_open3()`. The `__init__` is a magic method that is run when you create an instance of the class. This is to show how to begin writing a class in Python. ``` from tkinter import * import webbrowser class web_open3: def __init__(self): self.A = "http://www.google.de" def open(self): webbrowser.open_new(self.A) test = web_open3() root = Tk() b1 = Button(root, text="button", command=test.open) b1.pack() root.mainloop() ```
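A minimal illustration of the same idea, independent of tkinter (the class and attribute names here are made up for the example):

```python
class Greeter:
    def __init__(self):
        # __init__ runs automatically when Greeter() is called
        self.url = "http://www.google.de"

    def open(self):
        return "opening " + self.url

g = Greeter()       # instantiate first; __init__ has set self.url
print(g.open())     # opening http://www.google.de

# Button(..., command=...) wants a reference to the bound method,
# not the result of calling it -- so no parentheses:
callback = g.open
print(callback())   # opening http://www.google.de
```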
In programming, a class is an object. What is an object? It's an instance. In order to use your object, you first have to create it. You do that by instantiating it, `web = web_open3()`. Then, you can use the `open()` function. Now, objects may also be static. A static object, is an object that you don't instantiate. Any class, independent of being instantiated or not, may have static variables and functions. Let's take a look at your code: ``` # Classes should be named with CamelCase convention: 'WebOpen3' class web_open3: # This is a static variable. Variables should be named with lowercase letters A = "webbrowser.open(www.google.de" # This is an instance method def open(self): # You are accessing a static variable as an instance variable self.A = webbrowser.open("www.google.de") # Here, you try to use an instance method without first initializing your object. That raises an error, the one you gave in the description. test = web_open3.open() ``` Let's now look at a static example: ``` class WebOpen3: a = "webbrowser.open(www.google.de" @staticmethod def open(): WebOpen3.a = webbrowser.open("www.google.de") test = WebOpen3.open() ``` and an instance example: ``` class WebOpen3: def __init__(self): self.a = "webbrowser.open(www.google.de" def open(self): self.a = webbrowser.open("www.google.de") web = WebOpen3() test = web.open() ``` There is still one problem left. When saying: `test = web.open()`, or `test = WebOpen3.open()`, you're trying to bind the returning value from `open()` to `test`, however that function doesn't return anything. So, you need to add a return statement to it. Let's use the instance method/function as an example: ``` def open(self): self.a = webbrowser.open("www.google.de") return self.a ``` or, instead of returning a value, just call the function straight-forward: ``` WebOpen3.open() ``` or ``` web.open() ``` > > **Note**: functions belonging to instances, are also called methods. > > > **Note**: `self` refers to an instance of that class. 
> > > **Note**: `def __init__(self)`, is an instance´s initializer. For your case, you call it by using `WebOpen3()`. You will later find more special functions defined as `def __func_name__()`. > > > **Note**: For more on variables in a class, you should read this: [Static class variables in Python](https://stackoverflow.com/questions/68645/static-class-variables-in-python) > > > As for the case of your Tkinter window, to get a button in your view: you can use this code: ``` from tkinter import * app = Tk() button = Button(app, text='Open in browser') button.bind("<Button-1>", web.open) # Using 'web.open', or 'WebOpen3.open', both without parenthesis, will send a reference to your function. button.pack() app.mainloop() ```
57,270,642
I have a program that uploads videos to via the vimeo api. But everytime I click run, the program that runs is not the current one, its an old program, which I have now deleted and even deleted from recycle bin, yet everytime I run my vimeo code it runs a completely different program that shouldnt even exist its driving me crazy! I've tried to adjust my setting file which currently looks like below. ``` "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "${file}", "console": "internalConsole" } ] } ```
2019/07/30
[ "https://Stackoverflow.com/questions/57270642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8066094/" ]
I suspect you have a script cached somewhere. To troubleshoot please do the following: * Restart VScode * Restart PC (if on Windows 10 use `shutdown /r /f /t 000` in cmd to force a full restart and avoid Windows fast-boot saving anything.) * check what happens if you run the script manually via `python *your script*`. Comment if this doesn't help and add more info such as your OS and how you are running your script.
If you are importing any module like "import some\_module" you could change it to "from some\_module import \*", or the specific function you want.
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
You should use a [`ObservableCollection<SomeType>`](http://msdn.microsoft.com/en-us/library/ms668604.aspx) for this instead. `ObservableCollection<T>` provides the `CollectionChanged` event which you can subscribe to - the [`CollectionChanged`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) event fires when an item is added, removed, changed, moved, or the entire list is refreshed.
`List` does not expose any events for that. You should consider using [`ObservableCollection`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) instead. It has `CollectionChanged` event which occurs when an item is added, removed, changed, moved, or the entire list is refreshed.
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
You should use a [`ObservableCollection<SomeType>`](http://msdn.microsoft.com/en-us/library/ms668604.aspx) for this instead. `ObservableCollection<T>` provides the `CollectionChanged` event which you can subscribe to - the [`CollectionChanged`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) event fires when an item is added, removed, changed, moved, or the entire list is refreshed.
Maybe you should be using `ObservableCollection<T>`. It fires events when items are added or removed, or several other events. Here is the doc: <http://msdn.microsoft.com/en-us/library/ms668604.aspx>
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
You should use a [`ObservableCollection<SomeType>`](http://msdn.microsoft.com/en-us/library/ms668604.aspx) for this instead. `ObservableCollection<T>` provides the `CollectionChanged` event which you can subscribe to - the [`CollectionChanged`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) event fires when an item is added, removed, changed, moved, or the entire list is refreshed.
you can do something like ``` private List<SomeType> _list; public void AddToList(SomeType item) { _list.Add(item); SomeOtherMethod(); } public ReadOnlyCollection<SomeType> MyList { get { return _list.AsReadOnly(); } } ``` but ObservableCollection would be best.
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
You should use a [`ObservableCollection<SomeType>`](http://msdn.microsoft.com/en-us/library/ms668604.aspx) for this instead. `ObservableCollection<T>` provides the `CollectionChanged` event which you can subscribe to - the [`CollectionChanged`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) event fires when an item is added, removed, changed, moved, or the entire list is refreshed.
If you create your own implementation of IList you can call methods when an item is added to the list (or do anything else you want). Create a class that inherits from IList and have as a private member a list of type T. Implement each of the Interface methods using your private member and modify the Add(T item) call to whatever you need it to do Code: ``` public class MyList<T> : IList<T> { private List<T> _myList = new List<T>(); public IEnumerator<T> GetEnumerator() { return _myList.GetEnumerator(); } public void Clear() { _myList.Clear(); } public bool Contains(T item) { return _myList.Contains(item); } public void Add(T item) { _myList.Add(item); // Call your methods here } // ...implement the rest of the IList<T> interface using _myList } ```
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
`List` does not expose any events for that. You should consider using [`ObservableCollection`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) instead. It has `CollectionChanged` event which occurs when an item is added, removed, changed, moved, or the entire list is refreshed.
you can do something like ``` private List<SomeType> _list; public void AddToList(SomeType item) { _list.Add(item); SomeOtherMethod(); } public ReadOnlyCollection<SomeType> MyList { get { return _list.AsReadOnly(); } } ``` but ObservableCollection would be best.
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
`List` does not expose any events for that. You should consider using [`ObservableCollection`](http://msdn.microsoft.com/en-us/library/ms653375.aspx) instead. It has `CollectionChanged` event which occurs when an item is added, removed, changed, moved, or the entire list is refreshed.
If you create your own implementation of IList you can call methods when an item is added to the list (or do anything else you want). Create a class that inherits from IList and have as a private member a list of type T. Implement each of the Interface methods using your private member and modify the Add(T item) call to whatever you need it to do Code: ``` public class MyList<T> : IList<T> { private List<T> _myList = new List<T>(); public IEnumerator<T> GetEnumerator() { return _myList.GetEnumerator(); } public void Clear() { _myList.Clear(); } public bool Contains(T item) { return _myList.Contains(item); } public void Add(T item) { _myList.Add(item); // Call your methods here } // ...implement the rest of the IList<T> interface using _myList } ```
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
Maybe you should be using `ObservableCollection<T>`. It fires events when items are added or removed, or several other events. Here is the doc: <http://msdn.microsoft.com/en-us/library/ms668604.aspx>
you can do something like ``` private List<SomeType> _list; public void AddToList(SomeType item) { _list.Add(item); SomeOtherMethod(); } public ReadOnlyCollection<SomeType> MyList { get { return _list.AsReadOnly(); } } ``` but ObservableCollection would be best.
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
Maybe you should be using `ObservableCollection<T>`. It fires events when items are added or removed, or several other events. Here is the doc: <http://msdn.microsoft.com/en-us/library/ms668604.aspx>
If you create your own implementation of IList you can call methods when an item is added to the list (or do anything else you want). Create a class that inherits from IList and have as a private member a list of type T. Implement each of the Interface methods using your private member and modify the Add(T item) call to whatever you need it to do Code: ``` public class MyList<T> : IList<T> { private List<T> _myList = new List<T>(); public IEnumerator<T> GetEnumerator() { return _myList.GetEnumerator(); } public void Clear() { _myList.Clear(); } public bool Contains(T item) { return _myList.Contains(item); } public void Add(T item) { _myList.Add(item); // Call your methods here } // ...implement the rest of the IList<T> interface using _myList } ```
10,076,075
I have a data structure like this: ``` { 'key1':[ [1,1,'Some text'], [2,0,''], ... ], ... 'key99':[ [1,1,'Some text'], [2,1,'More text'], ... ], } ``` The size of this will be only like 100 keys and 100 lists in each key. I like to store it and retrieve it (the entire list) based on the key. This is for a use in a web-server with not very high traffic. However, the back end must handle concurrent reads and writes. How to do this in a safe way and without writing too much code? I suppose storing the [pickled](http://docs.python.org/library/pickle) object in SQLite is a possible solution. Are there better ways?
2012/04/09
[ "https://Stackoverflow.com/questions/10076075", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126463/" ]
you can do something like ``` private List<SomeType> _list; public void AddToList(SomeType item) { _list.Add(item); SomeOtherMethod(); } public ReadOnlyCollection<SomeType> MyList { get { return _list.AsReadOnly(); } } ``` but ObservableCollection would be best.
If you create your own implementation of IList you can call methods when an item is added to the list (or do anything else you want). Create a class that inherits from IList and have as a private member a list of type T. Implement each of the Interface methods using your private member and modify the Add(T item) call to whatever you need it to do Code: ``` public class MyList<T> : IList<T> { private List<T> _myList = new List<T>(); public IEnumerator<T> GetEnumerator() { return _myList.GetEnumerator(); } public void Clear() { _myList.Clear(); } public bool Contains(T item) { return _myList.Contains(item); } public void Add(T item) { _myList.Add(item); // Call your methods here } // ...implement the rest of the IList<T> interface using _myList } ```
70,234,520
I am new at python, im trying to write a code to print several lines after an if statement. for example, I have a file "test.txt" with this style: ``` Hello how are you? fine thanks how old are you? 24 good how old are you? i am 26 ok bye. Hello how are you? fine how old are you? 13 good how old are you? i am 34 ok bye. Hello how are you? good how old are you? 17 good how old are you? i am 19 ok bye. Hello how are you? perfect how old are you? 26 good how old are you? i am 21 ok bye. ``` so I want to print one line after each "how old are you" my code is like this: ``` fhandle=open('test.txt') for line in fhandle: if line.startswith('how old are you?') print(line) /*** THIS IS THE PROBLEM ``` I want to print next line after how old are you ( maybe print two lines after "how old are you" )
2021/12/05
[ "https://Stackoverflow.com/questions/70234520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17594775/" ]
You can use the `readlines()` function, which returns the lines of a file as a list, and the `enumerate()` function to loop through the list elements: ``` lines = open('test.txt').readlines() for i, line in enumerate(lines): if line.startswith('how old are you?'): print(lines[i+1], lines[i+2]) ```
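One caveat with index-based lookahead (my addition, not part of the original answer): if a match sits on the last line, `lines[i+1]` raises an `IndexError`, so a bounds check is worth adding. A self-contained sketch, with `io.StringIO` standing in for the real file:

```python
import io

# io.StringIO stands in for open('test.txt'); the content is illustrative.
text = "how old are you?\n24\nsomething else\nhow old are you?\n"
lines = io.StringIO(text).readlines()

for i, line in enumerate(lines):
    if line.startswith('how old are you?'):
        if i + 1 < len(lines):           # guard against a match on the last line
            print(lines[i + 1].strip())  # prints: 24
```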
You could convert the file to a list and use a variable which increases by 1 for each line: ```py fhandle = list(open('test.txt')) i = 1 for line in fhandle: if line.startswith('how old are you?'): print(fhandle[i]) i += 1 ```
70,234,520
I am new at python, im trying to write a code to print several lines after an if statement. for example, I have a file "test.txt" with this style: ``` Hello how are you? fine thanks how old are you? 24 good how old are you? i am 26 ok bye. Hello how are you? fine how old are you? 13 good how old are you? i am 34 ok bye. Hello how are you? good how old are you? 17 good how old are you? i am 19 ok bye. Hello how are you? perfect how old are you? 26 good how old are you? i am 21 ok bye. ``` so I want to print one line after each "how old are you" my code is like this: ``` fhandle=open('test.txt') for line in fhandle: if line.startswith('how old are you?') print(line) /*** THIS IS THE PROBLEM ``` I want to print next line after how old are you ( maybe print two lines after "how old are you" )
2021/12/05
[ "https://Stackoverflow.com/questions/70234520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17594775/" ]
You can use the `readlines()` function, which returns the lines of a file as a list, and the `enumerate()` function to loop through the list elements: ``` lines = open('test.txt').readlines() for i, line in enumerate(lines): if line.startswith('how old are you?'): print(lines[i+1], lines[i+2]) ```
Assuming there are always two more lines after each "how old are you", you could just pull the following lines straight from the iterator by using `next(fhandle)` like this: ``` fhandle = open('test.txt') for line in fhandle: if line.startswith('how old are you?'): print(line) print(next(fhandle)) print(next(fhandle)) ``` Remember that the for loop just uses `fhandle` as an iterator! :)
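A self-contained check of the iterator approach (my addition; `io.StringIO` stands in for the real file, and the sample text mirrors the question's format):

```python
import io

fhandle = io.StringIO(
    "Hello how are you?\n"
    "fine thanks\n"
    "how old are you?\n"
    "24\n"
    "good\n"
)

# The for loop and next() share the same iterator, so next() consumes
# the following lines and the loop resumes after them.
for line in fhandle:
    if line.startswith('how old are you?'):
        print(line.strip())           # how old are you?
        print(next(fhandle).strip())  # 24
        print(next(fhandle).strip())  # good
```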
70,234,520
I am new at python, im trying to write a code to print several lines after an if statement. for example, I have a file "test.txt" with this style: ``` Hello how are you? fine thanks how old are you? 24 good how old are you? i am 26 ok bye. Hello how are you? fine how old are you? 13 good how old are you? i am 34 ok bye. Hello how are you? good how old are you? 17 good how old are you? i am 19 ok bye. Hello how are you? perfect how old are you? 26 good how old are you? i am 21 ok bye. ``` so I want to print one line after each "how old are you" my code is like this: ``` fhandle=open('test.txt') for line in fhandle: if line.startswith('how old are you?') print(line) /*** THIS IS THE PROBLEM ``` I want to print next line after how old are you ( maybe print two lines after "how old are you" )
2021/12/05
[ "https://Stackoverflow.com/questions/70234520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17594775/" ]
You can use the `readlines()` function, which returns the lines of a file as a list, and the `enumerate()` function to loop through the list elements: ``` lines = open('test.txt').readlines() for i, line in enumerate(lines): if line.startswith('how old are you?'): print(lines[i+1], lines[i+2]) ```
Since you want to print the next line (or the next two lines), I suggest using list indexing. To do that, use `readlines()` to convert the file to a `list`: ```py fhandle = open('test.txt').readlines() ``` After that, you can use `len` to get the list length, index it in a for loop, and print the next two lines. ```py for line_index in range(len(fhandle)): if fhandle[line_index].startswith('how old are you?'): print(fhandle[line_index + 1], fhandle[line_index + 2]) ```
2,009,379
``` import re from decimal import * import numpy from scipy.signal import cspline1d, cspline1d_eval import scipy.interpolate import scipy import math import numpy from scipy import interpolate Y1 =[0.48960000000000004, 0.52736099999999997, 0.56413900000000006, 0.60200199999999993, 0.64071400000000001, 0.67668399999999995, 0.71315899999999999, 0.75050499999999998, 0.61494199999999999, 0.66246900000000009] X1 =[0.024, 0.026000000000000002, 0.028000000000000004, 0.029999999999999999, 0.032000000000000001, 0.034000000000000002, 0.035999999999999997, 0.038000000000000006, 0.029999999999999999, 0.032500000000000001] rep = scipy.interpolate.splrep(X1,Y1) ``` IN the above code i am getting and error of ``` Traceback (most recent call last): File "/home/vibhor/Desktop/timing_tool/timing/interpolation_cap.py", line 64, in <module> rep = scipy.interpolate.splrep(X1,Y1) File "/usr/lib/python2.6/site-packages/scipy/interpolate/fitpack.py", line 418, in splrep raise _iermess[ier][1],_iermess[ier][0] ValueError: Error on input data ``` Don't know what is happening
2010/01/05
[ "https://Stackoverflow.com/questions/2009379", "https://Stackoverflow.com", "https://Stackoverflow.com/users/240524/" ]
I believe it's due to the X1 values not being ordered from smallest to largest; you also have one duplicate x point. That is, you need to sort the values of X1 and Y1 (keeping the pairs together) and remove duplicates before you can use `splrep`. According to the docs, `splrep` is low-level access to the FITPACK library, which expects a sorted list without duplicates; that's why it returns an error. `interpolate.interp1d` might seem to work, but have you actually tried to use it to find a new point? I think you'll get an error when you call it, i.e. `rep(2)`.
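A sketch of that preprocessing in plain Python (my addition; keeping the first y for a duplicate x is just one way to resolve the conflict, and the scipy call is left commented out since the point here is the data preparation):

```python
X1 = [0.024, 0.026, 0.028, 0.030, 0.032, 0.034, 0.036, 0.038, 0.030, 0.0325]
Y1 = [0.4896, 0.527361, 0.564139, 0.602002, 0.640714, 0.676684,
      0.713159, 0.750505, 0.614942, 0.662469]

# Sort the (x, y) pairs together by x, then keep one y per distinct x.
pairs = sorted(zip(X1, Y1))
seen = set()
xs, ys = [], []
for x, y in pairs:
    if x not in seen:       # skips the second x = 0.030 point
        seen.add(x)
        xs.append(x)
        ys.append(y)

print(len(xs))  # 9 -- strictly increasing, no duplicates
# rep = scipy.interpolate.splrep(xs, ys)  # now accepts the cleaned data
```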
The X value 0.029999999999999999 occurs twice, with two different Y coordinates. It wouldn't surprise me if that caused a problem trying to fit a polynomial spline segment....
70,639,556
Recently I have started to use [hydra](https://hydra.cc/docs/intro/) to manage the configs in my application. I use [Structured Configs](https://hydra.cc/docs/tutorials/structured_config/intro/) to create schema for .yaml config files. Structured Configs in Hyda uses [dataclasses](https://docs.python.org/3/library/dataclasses.html) for type checking. However, I also want to use some kind of validators for some of the parameter I specify in my Structured Configs (something like [this](https://pydantic-docs.helpmanual.io/usage/validators/)). Do you know if it is somehow possible to use Pydantic for this purpose? When I try to use Pydantic, OmegaConf complains about it: ```sh omegaconf.errors.ValidationError: Input class 'SomeClass' is not a structured config. did you forget to decorate it as a dataclass? ```
2022/01/09
[ "https://Stackoverflow.com/questions/70639556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10943470/" ]
For those of you wondering how this works exactly, here is an example of it: ```py import hydra from hydra.core.config_store import ConfigStore from omegaconf import OmegaConf from pydantic.dataclasses import dataclass from pydantic import validator @dataclass class MyConfigSchema: some_var: float @validator("some_var") def validate_some_var(cls, some_var: float) -> float: if some_var < 0: raise ValueError(f"'some_var' can't be less than 0, got: {some_var}") return some_var cs = ConfigStore.instance() cs.store(name="config_schema", node=MyConfigSchema) @hydra.main(config_path="/path/to/configs", config_name="config") def my_app(config: MyConfigSchema) -> None: # The 'validator' methods will be called when you run the line below OmegaConf.to_object(config) if __name__ == "__main__": my_app() ``` And `config.yaml` : ```yaml defaults: - config_schema some_var: -1 # this will raise a ValueError ```
See [pydantic.dataclasses.dataclass](https://pydantic-docs.helpmanual.io/usage/dataclasses/), which are a drop-in replacement for the standard-library dataclasses with some extra type-checking.
31,460,152
I am writing a python code that will work as a dameon in a Raspberry pi. However, the person I am writing this for want to see the raw output it gets while it is running, not just my log files. My first idea to do this was to use a bash script using the Screen program, but that has some features in it that I CANNOT have. Mainly the ability to kill the program through the Screen program. Is there a way I can write a program (preferably python) or bash script, that is able to read the output of another program running, but doesn't send anything to it? Thanks.
2015/07/16
[ "https://Stackoverflow.com/questions/31460152", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3346931/" ]
In the latest seaborn, you can use the `countplot` function: ``` seaborn.countplot(x='reputation', data=df) ``` To do it with `barplot` you'd need something like this: ``` seaborn.barplot(x=df.reputation.value_counts().index, y=df.reputation.value_counts()) ``` You can't pass `'reputation'` as a column name to `x` while also passing the counts in `y`. Passing 'reputation' for `x` will use the *values* of `df.reputation` (all of them, not just the unique ones) as the `x` values, and seaborn has no way to align these with the counts. So you need to pass the unique values as `x` and the counts as `y`. But you need to call `value_counts` twice (or do some other sorting on both the unique values and the counts) to ensure they match up right.
Using just `countplot` you can get the bars in the same order as `.value_counts()` output too: ``` seaborn.countplot(data=df, x='reputation', order=df.reputation.value_counts().index) ```
35,230,093
When a terminal is opened, the environmental shell is set. If I then type "csh" it starts running a c shell as a program within the bash terminal. My question is, from a python script, how can I check to determine if csh has been executed prior to starting the python script. THanks
2016/02/05
[ "https://Stackoverflow.com/questions/35230093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5889345/" ]
You can check the shell environment by using ``` import os shell = os.environ['SHELL'] ``` Then you can make sure `shell` is set to `/bin/csh`
You can use `os.getppid()` to find the [parent PID](https://unix.stackexchange.com/q/18166/3330), and `ps` to find the name of the command: ``` import subprocess import os ppid = os.getppid() out = subprocess.check_output(['ps', '--format', '%c', '--pid', str(ppid)]) print(out.splitlines()[-1]) ``` --- ``` % csh % script.py csh % bash (dev)13:53:04 unutbu@buster:~% script.py bash ``` Note that the parent process may not be a shell. If I run the code from an IPython session launched inside emacs, then the parent is emacs: ``` In [170]: ppid = os.getppid() out = subprocess.check_output(['ps', '--format', '%c', '--pid', str(ppid)]) print(out.splitlines()[-1]) In [172]: emacs ```