Dataset columns and value/length ranges:

| column | dtype | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | sequence (length) | 3 | 3 |
| response_j | string (length) | 17 | 26k |
| response_k | string (length) | 26 | 26k |
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever research has to offer. My question to Python users is: how common is auto-completion versus manual typing (using `M-/` where possible)? I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I find that [PyDev](http://pydev.org/) + Eclipse can meet most of my needs. There is also [PyCharm](http://www.jetbrains.com/pycharm/) from the Intellij team. PyCharm has the added advantage of smooth integration with git.
Take a look at [Spyderlib](http://code.google.com/p/spyderlib/); it supports most of the features you want, including code completion.
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever research has to offer. My question to Python users is: how common is auto-completion versus manual typing (using `M-/` where possible)? I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I found this post > > [My Emacs Python environment](http://www.saltycrane.com/blog/2010/05/my-emacs-python-environment/) > > > to be the most useful and comprehensive list of instructions and references on how to setup a decent Python development environment in Emacs regardless of OS platform. It is still a bit of work to setup but at least it covers the popular packages and components generally recommended for Python in Emacs that provide auto-completion functionality. I loosely used this post as a guide to do the setup on my Windows machine with Emacs 23.2.1 and Python 2.6.5. Although, I also have Cygwin installed in some cases instead of running the \*nix shell commands mentioned in the post, I just download the packages via a web browser, unzip them with 7zip, and copy them to my Emacs' plugin directory. Also, to install Pymacs, Rope, and Ropemacs, I used Python's [EasyInstall](http://en.wikipedia.org/wiki/EasyInstall) package manager. To use it, I downloaded and installed [the `setuptools` package using the Windows install version](http://pypi.python.org/pypi/setuptools#windows). Once installed, at the command line, cd to their respective download locations and run the command `easy_install .` instead of the shell commands shown in the post. Generally, I saved any `*.el` files in my `~\.emacs.d\plugins` (e.g. in `%USERPROFILE%\Application Data\.emacs.d\`) and then updated my `.emacs` file to reference them as documented in the post. Despite all this, on occasion, I've used [DreamPie](http://dreampie.sourceforge.net/) since it does have overall better auto-completion out of the box than my Emacs setup.
Take a look at [Spyderlib](http://code.google.com/p/spyderlib/); it supports most of the features you want, including code completion.
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever research has to offer. My question to Python users is: how common is auto-completion versus manual typing (using `M-/` where possible)? I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I'm spoiled by Intellisense too. The PyDev extensions for Eclipse offer a pretty good auto-complete substitute.
I've been using [PyScripter](http://code.google.com/p/pyscripter), an IDE for Windows, for a while now, and have found it very good. It has autocompletion among many other features. It's written in Delphi -- not that there's anything wrong with that -- it just bothers me a bit, though...
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever research has to offer. My question to Python users is: how common is auto-completion versus manual typing (using `M-/` where possible)? I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I found this post > > [My Emacs Python environment](http://www.saltycrane.com/blog/2010/05/my-emacs-python-environment/) > > > to be the most useful and comprehensive list of instructions and references on how to setup a decent Python development environment in Emacs regardless of OS platform. It is still a bit of work to setup but at least it covers the popular packages and components generally recommended for Python in Emacs that provide auto-completion functionality. I loosely used this post as a guide to do the setup on my Windows machine with Emacs 23.2.1 and Python 2.6.5. Although, I also have Cygwin installed in some cases instead of running the \*nix shell commands mentioned in the post, I just download the packages via a web browser, unzip them with 7zip, and copy them to my Emacs' plugin directory. Also, to install Pymacs, Rope, and Ropemacs, I used Python's [EasyInstall](http://en.wikipedia.org/wiki/EasyInstall) package manager. To use it, I downloaded and installed [the `setuptools` package using the Windows install version](http://pypi.python.org/pypi/setuptools#windows). Once installed, at the command line, cd to their respective download locations and run the command `easy_install .` instead of the shell commands shown in the post. Generally, I saved any `*.el` files in my `~\.emacs.d\plugins` (e.g. in `%USERPROFILE%\Application Data\.emacs.d\`) and then updated my `.emacs` file to reference them as documented in the post. Despite all this, on occasion, I've used [DreamPie](http://dreampie.sourceforge.net/) since it does have overall better auto-completion out of the box than my Emacs setup.
I'm spoiled by Intellisense too. The PyDev extensions for Eclipse offer a pretty good auto-complete substitute.
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever research has to offer. My question to Python users is: how common is auto-completion versus manual typing (using `M-/` where possible)? I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I found this post > > [My Emacs Python environment](http://www.saltycrane.com/blog/2010/05/my-emacs-python-environment/) > > > to be the most useful and comprehensive list of instructions and references on how to setup a decent Python development environment in Emacs regardless of OS platform. It is still a bit of work to setup but at least it covers the popular packages and components generally recommended for Python in Emacs that provide auto-completion functionality. I loosely used this post as a guide to do the setup on my Windows machine with Emacs 23.2.1 and Python 2.6.5. Although, I also have Cygwin installed in some cases instead of running the \*nix shell commands mentioned in the post, I just download the packages via a web browser, unzip them with 7zip, and copy them to my Emacs' plugin directory. Also, to install Pymacs, Rope, and Ropemacs, I used Python's [EasyInstall](http://en.wikipedia.org/wiki/EasyInstall) package manager. To use it, I downloaded and installed [the `setuptools` package using the Windows install version](http://pypi.python.org/pypi/setuptools#windows). Once installed, at the command line, cd to their respective download locations and run the command `easy_install .` instead of the shell commands shown in the post. Generally, I saved any `*.el` files in my `~\.emacs.d\plugins` (e.g. in `%USERPROFILE%\Application Data\.emacs.d\`) and then updated my `.emacs` file to reference them as documented in the post. Despite all this, on occasion, I've used [DreamPie](http://dreampie.sourceforge.net/) since it does have overall better auto-completion out of the box than my Emacs setup.
IMO, by far the easiest way to take advantage of the Python tools available for Emacs is to use the defaults that are all set up at <https://github.com/gabrielelanaro/emacs-for-python>. I actually took the time to get pymacs, ropemacs and python-mode all working independently before finding that little gem, and now I rely on it entirely for all my Python-based customizations. If you are new, I would definitely start there.
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever research has to offer. My question to Python users is: how common is auto-completion versus manual typing (using `M-/` where possible)? I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I find that [PyDev](http://pydev.org/) + Eclipse can meet most of my needs. There is also [PyCharm](http://www.jetbrains.com/pycharm/) from the Intellij team. PyCharm has the added advantage of smooth integration with git.
I've been using [PyScripter](http://code.google.com/p/pyscripter), an IDE for Windows, for a while now, and have found it very good. It has autocompletion among many other features. It's written in Delphi -- not that there's anything wrong with that -- it just bothers me a bit, though...
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever research has to offer. My question to Python users is: how common is auto-completion versus manual typing (using `M-/` where possible)? I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I'm spoiled by Intellisense too. The PyDev extensions for Eclipse offer a pretty good auto-complete substitute.
IMO, by far the easiest way to take advantage of the Python tools available for Emacs is to use the defaults that are all set up at <https://github.com/gabrielelanaro/emacs-for-python>. I actually took the time to get pymacs, ropemacs and python-mode all working independently before finding that little gem, and now I rely on it entirely for all my Python-based customizations. If you are new, I would definitely start there.
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed on a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever research has to offer. My question to Python users is: how common is auto-completion versus manual typing (using `M-/` where possible)? I am thinking about just memorizing Python built-in functions like len, append, extend, etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I found this post > > [My Emacs Python environment](http://www.saltycrane.com/blog/2010/05/my-emacs-python-environment/) > > > to be the most useful and comprehensive list of instructions and references on how to setup a decent Python development environment in Emacs regardless of OS platform. It is still a bit of work to setup but at least it covers the popular packages and components generally recommended for Python in Emacs that provide auto-completion functionality. I loosely used this post as a guide to do the setup on my Windows machine with Emacs 23.2.1 and Python 2.6.5. Although, I also have Cygwin installed in some cases instead of running the \*nix shell commands mentioned in the post, I just download the packages via a web browser, unzip them with 7zip, and copy them to my Emacs' plugin directory. Also, to install Pymacs, Rope, and Ropemacs, I used Python's [EasyInstall](http://en.wikipedia.org/wiki/EasyInstall) package manager. To use it, I downloaded and installed [the `setuptools` package using the Windows install version](http://pypi.python.org/pypi/setuptools#windows). Once installed, at the command line, cd to their respective download locations and run the command `easy_install .` instead of the shell commands shown in the post. Generally, I saved any `*.el` files in my `~\.emacs.d\plugins` (e.g. in `%USERPROFILE%\Application Data\.emacs.d\`) and then updated my `.emacs` file to reference them as documented in the post. Despite all this, on occasion, I've used [DreamPie](http://dreampie.sourceforge.net/) since it does have overall better auto-completion out of the box than my Emacs setup.
I've been using [PyScripter](http://code.google.com/p/pyscripter), an IDE for Windows, for a while now, and have found it very good. It has autocompletion among many other features. It's written in Delphi -- not that there's anything wrong with that -- it just bothers me a bit, though...
55,062,944
I have seen multiple posts on passing a string, but I am not able to find a good solution for reading a string passed to a Python script from a batch file. Here is my problem. I am calling the Python script from a batch file and passing an argument. ``` string_var = "123_Asdf" bat 'testscript.py %string_var%' ``` I have the following in my Python code. ``` import sys passed_var = sys.argv[1] ``` When I run the above code I always see the error below. ``` passed_var = sys.argv[1] IndexError: list index out of range ``` Has anyone seen this issue before? I am only passing a string and expect it to be read as the first argument passed to the script.
2019/03/08
[ "https://Stackoverflow.com/questions/55062944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7442477/" ]
You can order by a field created with annotate: ``` from django.db.models import IntegerField, Value as V from django.db.models.functions import Cast, StrIndex, Substr last = ( Machine.objects.annotate( part=Cast(Substr("deviceSerialNo", StrIndex("deviceSerialNo", V("-"))), IntegerField()) ) .order_by("part") .first() .deviceSerialNo ) ``` Just like you had, we start by getting the index of the `-` character: ``` StrIndex('deviceSerialNo', V('-')) ``` We then use [`Substr`](https://docs.djangoproject.com/en/dev/ref/models/database-functions/#django.db.models.functions.Substr) to get the second part, including the `-` character: ``` Substr("deviceSerialNo", StrIndex("deviceSerialNo", V("-"))) ``` Then we cast it to an IntegerField, sort, and take the first object. Note: we can take the first object because the integer cast of `"-12344"` is a negative number.
If the number can have multiple `-` characters and you want to extract the trailing number, reverse the string first. For AB-12-12344 the output is 12344: ``` qs.annotate( r_part=Reverse('number') ).annotate( part=Cast( Reverse(Substr("r_part", 1, StrIndex("r_part", V("-")) - 1)), IntegerField(), ) ) ```
55,062,944
I have seen multiple posts on passing a string, but I am not able to find a good solution for reading a string passed to a Python script from a batch file. Here is my problem. I am calling the Python script from a batch file and passing an argument. ``` string_var = "123_Asdf" bat 'testscript.py %string_var%' ``` I have the following in my Python code. ``` import sys passed_var = sys.argv[1] ``` When I run the above code I always see the error below. ``` passed_var = sys.argv[1] IndexError: list index out of range ``` Has anyone seen this issue before? I am only passing a string and expect it to be read as the first argument passed to the script.
2019/03/08
[ "https://Stackoverflow.com/questions/55062944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7442477/" ]
You can order by a field created with annotate: ``` from django.db.models import IntegerField, Value as V from django.db.models.functions import Cast, StrIndex, Substr last = ( Machine.objects.annotate( part=Cast(Substr("deviceSerialNo", StrIndex("deviceSerialNo", V("-"))), IntegerField()) ) .order_by("part") .first() .deviceSerialNo ) ``` Just like you had, we start by getting the index of the `-` character: ``` StrIndex('deviceSerialNo', V('-')) ``` We then use [`Substr`](https://docs.djangoproject.com/en/dev/ref/models/database-functions/#django.db.models.functions.Substr) to get the second part, including the `-` character: ``` Substr("deviceSerialNo", StrIndex("deviceSerialNo", V("-"))) ``` Then we cast it to an IntegerField, sort, and take the first object. Note: we can take the first object because the integer cast of `"-12344"` is a negative number.
You can sort the objects using the extra() function: ``` your_objects = Machine.objects.all() your_objects = your_objects.extra(select={'str_deviceSerialNo':'SUBSTRING("deviceSerialNo",initial_char,last_char)'}).order_by('str_deviceSerialNo') ``` It worked for me.
18,401,287
I am trying to build documentation for my flask project, and I am experiencing issues with the path My project structure is like: ``` myproject config all.py __init__.py logger.py logger.conf myproject models.py __init__.py en (english language docs folder) conf.py ``` logger.py includes a line ``` with open('logger.conf') as f: CONFIG = ast.literal_eval(f.read()) ``` which reads the configuration from logger.conf While "make html" I receive many errors according to models: ``` /home/username/projects/fb/myproject/en/models/index.rst:7: WARNING: autodoc: failed to import class u'User' from module u'myproject.models'; the following exception was raised: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 326, in import_object __import__(self.modname) File "/home/username/projects/fb/myproject/myproject/__init__.py", line 14, in <module> from logger import flask_debug File "/home/username/projects/fb/myproject/logger.py", line 5, in <module> with open('logger.conf') as f: CONFIG = ast.literal_eval(f.read()) IOError: [Errno 2] No such file or directory: 'logger.conf' ``` which is strange because conf.py includes the path: sys.path.insert(0, '/home/username/projects/fb/myproject/') and when I print sys.path it shows that the path is there. When I paste FULL PATH to the file logger.conf in logger.py it goes to another line simmilar to that and throws the same error for a different file. Why Sphinx does not check the path files relatively to the sys.path? Because it does not work for "./file" or "file". It started working only for "../file" - when I changed all the paths, but "destroyed" python working, as for python the path is broken.
2013/08/23
[ "https://Stackoverflow.com/questions/18401287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2080641/" ]
It is the behaviour of `open()` that is the problem. Calls like `open()` and `chdir()` work relative to the directory you're currently in, which is probably the directory where the makefile is. To test it, add a `print(os.listdir('.'))` above your call to `open('logger.conf')`; that'll show you the problem. The solution? Use an absolute path. So, a little bit verbose, something like this: ``` import os this_directory = os.path.dirname(__file__) # __file__ is the path to the current python file. open(os.path.join(this_directory, 'logger.conf')) ``` Bonus points if you've turned it into a Python package (i.e. it has a setup.py); in that case you can do: ``` import pkg_resources open(pkg_resources.resource_filename('myproject.config', 'logger.conf')) ```
I had a similar problem when generating sphinx documentation for some python code that was not written to be run in my computer, but in an embedded system instead. In that case, the existing code attempted to open a file that did not exist in my computer, and that made sphinx fail. In this case, I decided to change the code to verify the file existence first, and that allowed sphinx to pass over this logic without a problem. ``` if os.path.isfile(filename): # open file here else: # handle error in a way that doesn't make sphinx crash print "ERROR: No such file: '%s'" % filename ``` For a moment, I tried [mocking open()](https://stackoverflow.com/questions/5237693/mocking-openfile-name-in-unit-tests), but it turns out that sphinx does require open() to do its job.
54,681,449
I upgraded from pandas 0.20.3 to pandas 0.24.1. While running the command `ts.sort_index(inplace=True)`, I am getting a `FutureWarning` in my test output, which is shown below. Can I change the method call to suppress the following warning? I am happy to keep the old behavior. ``` /lib/python3.6/site-packages/pandas/core/sorting.py:257: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'. To accept the future behavior, pass 'dtype=object'. To keep the old behavior, pass 'dtype="datetime64[ns]"'. items = np.asanyarray(items) ``` My index looks like the following prior to running the sort\_index: ``` ts.index DatetimeIndex(['2017-07-05 07:00:00+00:00', '2017-07-05 07:15:00+00:00', '2017-07-05 07:30:00+00:00', '2017-07-05 07:45:00+00:00', ... '2017-07-05 08:00:00+00:00'], dtype='datetime64[ns, UTC]', name='start', freq=None) ```
2019/02/14
[ "https://Stackoverflow.com/questions/54681449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4808588/" ]
I rewrote your question [here](https://stackoverflow.com/questions/54854900/workaround-for-pandas-futurewarning-when-sorting-a-datetimeindex), to include an MCVE. After it went a while with no responses, I posted an issue against Pandas. Here's my workaround: ``` with warnings.catch_warnings(): # Bug in Pandas emits useless warning when sorting tz-aware index warnings.simplefilter("ignore") ds = df.sort_index() ```
If I were you, I would downgrade using pip and pin the previous version. It's the lazier answer. But if you really want to keep the upgrade, you can control how deprecation and future warnings are reported and suppress this one; check the pandas documentation and Python's `warnings` module for details. Have a nice night.
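For illustration, a minimal sketch (my suggestion, not a documented pandas option) that silences only this warning with the standard `warnings` module; `ts` is the series from the question:

```python
import warnings

# Ignore only this FutureWarning, matched by the start of its message;
# other warnings remain visible.
warnings.filterwarnings(
    "ignore",
    message="Converting timezone-aware DatetimeArray to timezone-naive",
    category=FutureWarning,
)

ts.sort_index(inplace=True)
```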
29,585,296
I have the following code (test.cgi): ``` #!/usr/bin/env python # -*- coding: UTF-8 -*- # enable debugging import cgitb cgitb.enable() print "Content-Type: text/plain;charset=utf-8" print print "Hello World!" ``` The file is CHMOD 777 and so is the directory it is in. I am getting the following error log ``` [Sun Apr 12 02:24:46.395628 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: env: python\r: : /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.396715 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: No such file or directory: /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.397453 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] End of script output before headers: test.cgi ``` I am getting a 500 Internal Service Error when I try to run in a browser. I can run when SSHing into the server by the command line. I have tried on Namecheap servers and am now trying on NearlyFreeSpeech.net to the same results.
2015/04/12
[ "https://Stackoverflow.com/questions/29585296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103669/" ]
Flatten the list by using `itertools.chain`, then find the minimum as you would otherwise: ``` from itertools import chain listA = [[10,20,30],[40,50,60],[70,80,90]] min(chain.from_iterable(listA)) # 10 ```
Set `result` to `float("inf")`. Iterate over every number in every list and call each number `i`. If `i` is less than `result`, `result = i`. Once you're done, `result` will contain the lowest value.
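A minimal sketch of the loop described above, using the nested list from the other answers:

```python
listA = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]

result = float("inf")      # start above any possible value
for sublist in listA:
    for i in sublist:
        if i < result:     # keep the smallest value seen so far
            result = i

print(result)  # 10
```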
29,585,296
I have the following code (test.cgi): ``` #!/usr/bin/env python # -*- coding: UTF-8 -*- # enable debugging import cgitb cgitb.enable() print "Content-Type: text/plain;charset=utf-8" print print "Hello World!" ``` The file is CHMOD 777 and so is the directory it is in. I am getting the following error log ``` [Sun Apr 12 02:24:46.395628 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: env: python\r: : /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.396715 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: No such file or directory: /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.397453 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] End of script output before headers: test.cgi ``` I am getting a 500 Internal Service Error when I try to run in a browser. I can run when SSHing into the server by the command line. I have tried on Namecheap servers and am now trying on NearlyFreeSpeech.net to the same results.
2015/04/12
[ "https://Stackoverflow.com/questions/29585296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103669/" ]
``` >>> listA = [[10,20,30],[40,50,60],[70,80,90]] >>> min(y for x in listA for y in x) 10 ```
Set `result` to `float("inf")`. Iterate over every number in every list and call each number `i`. If `i` is less than `result`, `result = i`. Once you're done, `result` will contain the lowest value.
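For illustration, the same idea written out as a short sketch:

```python
listA = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]

result = float("inf")      # start above any possible value
for sublist in listA:
    for i in sublist:
        if i < result:     # keep the smallest value seen so far
            result = i

print(result)  # 10
```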
29,585,296
I have the following code (test.cgi): ``` #!/usr/bin/env python # -*- coding: UTF-8 -*- # enable debugging import cgitb cgitb.enable() print "Content-Type: text/plain;charset=utf-8" print print "Hello World!" ``` The file is CHMOD 777 and so is the directory it is in. I am getting the following error log ``` [Sun Apr 12 02:24:46.395628 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: env: python\r: : /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.396715 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: No such file or directory: /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.397453 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] End of script output before headers: test.cgi ``` I am getting a 500 Internal Service Error when I try to run in a browser. I can run when SSHing into the server by the command line. I have tried on Namecheap servers and am now trying on NearlyFreeSpeech.net to the same results.
2015/04/12
[ "https://Stackoverflow.com/questions/29585296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103669/" ]
Flatten the list by using `itertools.chain`, then find the minimum as you would otherwise: ``` from itertools import chain listA = [[10,20,30],[40,50,60],[70,80,90]] min(chain.from_iterable(listA)) # 10 ```
``` >>> listA = [[10,20,30],[40,50,60],[70,80,90]] >>> min(y for x in listA for y in x) 10 ```
29,585,296
I have the following code (test.cgi): ``` #!/usr/bin/env python # -*- coding: UTF-8 -*- # enable debugging import cgitb cgitb.enable() print "Content-Type: text/plain;charset=utf-8" print print "Hello World!" ``` The file is CHMOD 777 and so is the directory it is in. I am getting the following error log ``` [Sun Apr 12 02:24:46.395628 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: env: python\r: : /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.396715 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: No such file or directory: /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.397453 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] End of script output before headers: test.cgi ``` I am getting a 500 Internal Service Error when I try to run in a browser. I can run when SSHing into the server by the command line. I have tried on Namecheap servers and am now trying on NearlyFreeSpeech.net to the same results.
2015/04/12
[ "https://Stackoverflow.com/questions/29585296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103669/" ]
Flatten the list by using `itertools.chain`, then find the minimum as you would otherwise: ``` from itertools import chain listA = [[10,20,30],[40,50,60],[70,80,90]] min(chain.from_iterable(listA)) # 10 ```
This is possible using a list comprehension to flatten the nested list first: ``` >>> listA = [[10,20,30],[40,50,60],[70,80,90]] >>> output = [y for x in listA for y in x] >>> min(output) 10 ```
29,585,296
I have the following code (test.cgi): ``` #!/usr/bin/env python # -*- coding: UTF-8 -*- # enable debugging import cgitb cgitb.enable() print "Content-Type: text/plain;charset=utf-8" print print "Hello World!" ``` The file is CHMOD 777 and so is the directory it is in. I am getting the following error log ``` [Sun Apr 12 02:24:46.395628 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: env: python\r: : /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.396715 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: No such file or directory: /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.397453 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] End of script output before headers: test.cgi ``` I am getting a 500 Internal Service Error when I try to run in a browser. I can run when SSHing into the server by the command line. I have tried on Namecheap servers and am now trying on NearlyFreeSpeech.net to the same results.
2015/04/12
[ "https://Stackoverflow.com/questions/29585296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103669/" ]
``` >>> listA = [[10,20,30],[40,50,60],[70,80,90]] >>> min(y for x in listA for y in x) 10 ```
This is possible using a list comprehension to flatten the nested list first: ``` >>> listA = [[10,20,30],[40,50,60],[70,80,90]] >>> output = [y for x in listA for y in x] >>> min(output) 10 ```
16,505,259
I am new to Django and Python. I am using Windows 7 and the Eclipse IDE. I have installed Python 2.7, Django and pip. I have created a system variable called PYTHONPATH with the value `C:\Python27;C:\Python27\Scripts`. I am unable to set the path for django and pip. When I type django-admin.py or pip in PowerShell, it shows a `CommandNotFoundException`. I have attached screenshots of my Django files and pip files.![enter image description here](https://i.stack.imgur.com/XKC5A.jpg)![enter image description here](https://i.stack.imgur.com/FYsOt.jpg) Please help me.
2013/05/12
[ "https://Stackoverflow.com/questions/16505259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1770199/" ]
You need to set the Powershell specific path variable. It does not know to look for an environment variable called `pythonpath`. That is helpful only for python aware applications (such as installers for Python modules). You need to add the Python directories to the `$env:Path` environment variable in Powershell. See [Setting Windows PowerShell path variable](https://stackoverflow.com/questions/714877/setting-windows-powershell-path-variable) > > $env:Path += ";C:\Python27;C:\Python27\Scripts" > > >
Also, if you're just getting started with Django and Eclipse, make sure you configure your PyDev interpreter settings to include the site-packages directory. This will ensure Eclipse can find your Django packages. You can find more details about setting up your PYTHONPATH inside Eclipse here: [PyDev Interpreter Configuration](http://pydev.org/manual_101_interpreter.html)
2,767,013
I think in the past Python scripts would run off CGI, which would create a new thread for each process. I am a newbie so I'm not really sure; what options do we have? Is the web server pipeline that Python works under any more or less efficient than, say, PHP's?
2010/05/04
[ "https://Stackoverflow.com/questions/2767013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
You can still use CGI if you want, but the normal approach these days is using WSGI on the Python side, e.g. through `mod_wsgi` on Apache or via bridges to `FastCGI` on other web servers. At least with `mod_wsgi`, I know of no inefficiencies with this approach. BTW, your description of CGI ("create a new thread for each process") is inaccurate: what it does is create a new process for each query's service (and that process typically needs to open a database connection, import all needed modules, etc etc, which is what may make it slow even on platforms where forking a process, per se, is pretty fast, such as all Unix variants).
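To make the WSGI side concrete, here is a minimal sketch of a WSGI application (illustrative only; under mod_wsgi the module is imported and its `application` callable is invoked for each request):

```python
# Minimal WSGI application: a callable that receives the request environ
# and a start_response callback, and returns an iterable of byte strings.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```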
I suggest cherrypy (<http://www.cherrypy.org/>). It is very convenient to use, has everything you need for making web services, but still quite simple (no mega-framework). The most efficient way to use it is to run it as self-contained server on localhost and put it behind Apache via a Proxy statement, and make apache itself serve the static files. This generally has better performance than solutions such as CGI and mod-python, as the Python process running the web service runs separate from the main web server, so it can cache stuff and easily re-use resources (like DB handles). Also, you can then tweak the number of worker threads for Apache and your web application separately, resulting in better scalability.
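A minimal sketch of the kind of self-contained CherryPy server described above (the port and the Apache proxy setup are assumptions for illustration):

```python
import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return "Hello from CherryPy"

# Listen only on localhost; Apache proxies public traffic to this port
# and serves the static files itself.
cherrypy.config.update({
    "server.socket_host": "127.0.0.1",
    "server.socket_port": 8080,
})
cherrypy.quickstart(Root())
```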
2,767,013
I think in the past Python scripts would run off CGI, which would create a new thread for each process. I am a newbie so I'm not really sure; what options do we have? Is the web server pipeline that Python works under any more or less efficient than, say, PHP's?
2010/05/04
[ "https://Stackoverflow.com/questions/2767013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
You can still use CGI if you want, but the normal approach these days is using WSGI on the Python side, e.g. through `mod_wsgi` on Apache or via bridges to `FastCGI` on other web servers. At least with `mod_wsgi`, I know of no inefficiencies with this approach. BTW, your description of CGI ("create a new thread for each process") is inaccurate: what it does is create a new process for each query's service (and that process typically needs to open a database connection, import all needed modules, etc etc, which is what may make it slow even on platforms where forking a process, per se, is pretty fast, such as all Unix variants).
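As a rough sketch of what a WSGI application looks like (the `wsgiref` server from the standard library is used here only for local testing; a production setup would sit behind mod_wsgi or FastCGI as described above):

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # environ holds CGI-style request variables; start_response sets the
    # status line and the response headers.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from WSGI\n"]

if __name__ == "__main__":
    make_server("127.0.0.1", 8000, application).serve_forever()
```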
I would suggest Django <http://www.djangoproject.com>. It is very convenient to use, has everything you need for making web services. The most efficient way to use it is to run it as via Apache's mod\_wsgi, and make Apache itself serve the static files. This generally has better performance than solutions such as CGI and mod-python, as the Python process running the web service runs separate from the main web server, so it can cache stuff and easily re-use resources (like DB handles). Also, you can then tweak the number of worker threads for Apache and your web application separately, resulting in better scalability.
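For reference, a minimal sketch of the WSGI entry point such a mod_wsgi deployment points at (this uses `get_wsgi_application` from recent Django versions; the settings module name `mysite.settings` is illustrative):

```python
# wsgi.py -- the file mod_wsgi's WSGIScriptAlias directive points to
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
application = get_wsgi_application()
```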
2,767,013
I think in the past Python scripts would run off CGI, which would create a new thread for each process. I am a newbie so I'm not really sure; what options do we have? Is the web server pipeline that Python works under any more or less efficient than, say, PHP's?
2010/05/04
[ "https://Stackoverflow.com/questions/2767013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
I would suggest Django <http://www.djangoproject.com>. It is very convenient to use, has everything you need for making web services. The most efficient way to use it is to run it as via Apache's mod\_wsgi, and make Apache itself serve the static files. This generally has better performance than solutions such as CGI and mod-python, as the Python process running the web service runs separate from the main web server, so it can cache stuff and easily re-use resources (like DB handles). Also, you can then tweak the number of worker threads for Apache and your web application separately, resulting in better scalability.
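As a sketch, the WSGI entry point used by such a deployment (assuming a recent Django; `mysite.settings` is a placeholder for your project's settings module):

```python
# wsgi.py -- imported by mod_wsgi; it calls `application` for each request.
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
application = get_wsgi_application()
```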
I suggest cherrypy (<http://www.cherrypy.org/>). It is very convenient to use, has everything you need for making web services, but still quite simple (no mega-framework). The most efficient way to use it is to run it as self-contained server on localhost and put it behind Apache via a Proxy statement, and make apache itself serve the static files. This generally has better performance than solutions such as CGI and mod-python, as the Python process running the web service runs separate from the main web server, so it can cache stuff and easily re-use resources (like DB handles). Also, you can then tweak the number of worker threads for Apache and your web application separately, resulting in better scalability.
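For illustration, a minimal CherryPy application of the kind described (names and port are hypothetical; Apache would proxy requests to it and serve the static files directly):

```python
import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return "Hello from CherryPy"

# Bind to localhost only; the public-facing Apache proxies to this port.
cherrypy.config.update({
    "server.socket_host": "127.0.0.1",
    "server.socket_port": 8080,
})
cherrypy.quickstart(Root())
```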
21,102,790
I am using RHEL 6.3 and have 2.6.6. I need to use the Python 2.7.6. I compiled python from source, installed pip and virtual env. Now I am trying in different ways: ``` virtualenv-2.7 testvirtualenv virtualenv --python=/usr/local/bin/python2.7 myenv ``` However I am getting AssertionError. Full trace: ``` New python executable in testvirtualenv/bin/python2.7 Also creating executable in testvirtualenv/bin/python Installing setuptools, pip... Complete output from command /tmp/testvirtualenv/bin/python2.7 -c "import sys, pip; pip...ll\"] + sys.argv[1:])" setuptools pip: Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/__init__.py", line 9, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/log.py", line 8, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 2696, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 429, in __init__ File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 443, in add_entry File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1722, in find_in_zip File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1298, in has_metadata File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1614, in _has File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1488, in _zipinfo_name AssertionError: /usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/EGG-INFO/PKG-INFO is not a subpath of /usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/ ---------------------------------------- ...Installing setuptools, pip...done. Traceback (most recent call last): File "/usr/local/bin/virtualenv-2.7", line 9, in <module> load_entry_point('virtualenv==1.11', 'console_scripts', 'virtualenv-2.7')() File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 820, in main symlink=options.symlink) File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 988, in create_environment install_wheel(to_install, py_executable, search_dirs) File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 956, in install_wheel 'PIP_NO_INDEX': '1' File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 898, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /tmp/testvirtualenv/bin/python2.7 -c "import sys, pip; pip...ll\"] + sys.argv[1:])" setuptools pip failed with error code 1 ```
2014/01/13
[ "https://Stackoverflow.com/questions/21102790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2195440/" ]
You need type composition: ``` trait Composition[F[_], G[_]] { type T[A] = F[G[A]] } class Later extends Do[Composition[Future, Seq]#T] { def `do`[A](f: Int => A): Future[Seq[A]] = ??? } ``` Or if you just need it in this one place: ``` class Later extends Do[({ type T[A] = Future[Seq[A]] })#T] { def `do`[A](f: Int => A): Future[Seq[A]] = ??? } ``` See [scalaz](https://github.com/scalaz/scalaz/blob/2406a0f039e4e515478536fb58974b97c04de3b8/core/src/main/scala/scalaz/Composition.scala) (I could have sworn it included general type composition, but apparently not.)
**I** believe you want this: ``` import scala.language.higherKinds import scala.concurrent.Future object Main { type Id[A] = A trait Do[F[_]] { // Notice the return type now contains `Seq`. def `do`[A](f: Int => A): F[Seq[A]] } class Now extends Do[Id] { override def `do`[A](f: Int => A): Seq[A] = ??? } class Later extends Do[Future] { override def `do`[A](f: Int => A): Future[Seq[A]] = ??? } } ``` But if you want something more general, where the abstract method is fully generic in its return type, then the type composition answer of @AlexeyRomanov is the one you're looking for.
21,102,790
I am using RHEL 6.3 and have 2.6.6. I need to use the Python 2.7.6. I compiled python from source, installed pip and virtual env. Now I am trying in different ways: ``` virtualenv-2.7 testvirtualenv virtualenv --python=/usr/local/bin/python2.7 myenv ``` However I am getting AssertionError. Full trace: ``` New python executable in testvirtualenv/bin/python2.7 Also creating executable in testvirtualenv/bin/python Installing setuptools, pip... Complete output from command /tmp/testvirtualenv/bin/python2.7 -c "import sys, pip; pip...ll\"] + sys.argv[1:])" setuptools pip: Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/__init__.py", line 9, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/log.py", line 8, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 2696, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 429, in __init__ File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 443, in add_entry File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1722, in find_in_zip File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1298, in has_metadata File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1614, in _has File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1488, in _zipinfo_name AssertionError: /usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/EGG-INFO/PKG-INFO is not a subpath of /usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/ ---------------------------------------- ...Installing setuptools, pip...done. Traceback (most recent call last): File "/usr/local/bin/virtualenv-2.7", line 9, in <module> load_entry_point('virtualenv==1.11', 'console_scripts', 'virtualenv-2.7')() File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 820, in main symlink=options.symlink) File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 988, in create_environment install_wheel(to_install, py_executable, search_dirs) File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 956, in install_wheel 'PIP_NO_INDEX': '1' File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 898, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /tmp/testvirtualenv/bin/python2.7 -c "import sys, pip; pip...ll\"] + sys.argv[1:])" setuptools pip failed with error code 1 ```
2014/01/13
[ "https://Stackoverflow.com/questions/21102790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2195440/" ]
You need type composition: ``` trait Composition[F[_], G[_]] { type T[A] = F[G[A]] } class Later extends Do[Composition[Future, Seq]#T] { def `do`[A](f: Int => A): Future[Seq[A]] = ??? } ``` Or if you just need it in this one place: ``` class Later extends Do[({ type T[A] = Future[Seq[A]] })#T] { def `do`[A](f: Int => A): Future[Seq[A]] = ??? } ``` See [scalaz](https://github.com/scalaz/scalaz/blob/2406a0f039e4e515478536fb58974b97c04de3b8/core/src/main/scala/scalaz/Composition.scala) (I could have sworn it included general type composition, but apparently not.)
Alexey's solution is very clever and answers the question you asked. However, I think you asked the wrong question. You started with these two interfaces: ``` class Now { def `do`[A](f: Int => A): Seq[A] } class Later { def `do`[A](f: Int => A): Future[Seq[A]] } ``` and want to modify `Later` so it implements this: ``` trait Do[F[_]] { def `do`[A](f: Int => A): F[A] } ``` However, you are losing an opportunity here to abstract away whether something is now or later. You should instead change `Do` to be like this: ``` trait Do[F[_]] { def `do`[A](f: Int => A): F[Seq[A]] } ``` And change `Now` to this: ``` class Now { def `do`[A](f: Int => A): Need[Seq[A]] } ``` Here, [Need](http://docs.typelevel.org/api/scalaz/stable/7.0.4/doc/index.html#scalaz.Need%24) is a Scalaz monad that basically acts as a lazy identity for the object it contains. There are other alternatives in the same vein, but the point is that the only thing you need to know about `Future` and `Need` is that they are monads. You treat them the same, and you make the decision to use one or the other elsewhere.
20,986,255
In our application we allow users to write specific conditions and we allow them express the conditions using such notation: ``` (1 and 2 and 3 or 4) ``` Where each numeric number correspond to one specific rule/condition. Now the problem is, how should I convert it, such that the end result is something like this: ``` { "$or": [ "$and": [1, 2, 3], 4 ] } ``` One more example: ``` (1 or 2 or 3 and 4) ``` To: ``` { "$or": [ 1, 2, "$and": [3, 4] ] } ``` --- I have written 50 over lines of tokenizer that successfully tokenized the statement into tokens and validated using stack/peek algorithm, and the tokens looks like this: ``` ["(", "1", "and", "2", "and", "3", "or", "4", ")"] ``` And now how should I convert this kind of "infix notation" into "prefix notation" with the rule that `and` takes precedence over `or`? Some **pointers or keywords** are greatly appreciated! What I have now doesn't really lead me to what I needed at the moment. Some researches so far: * [Smart design of a math parser?](https://stackoverflow.com/questions/114586/smart-design-of-a-math-parser) * [Add missing left parentheses into equation](https://stackoverflow.com/questions/19062718/add-missing-left-parentheses-into-equation) * [Equation (expression) parser with precedence?](https://stackoverflow.com/questions/28256/equation-expression-parser-with-precedence?rq=1) * [Infix to postfix notation](http://scriptasylum.com/tutorials/infix_postfix/algorithms/infix-postfix/index.htm) * [Dijkstra's Shunting-yard Algorithm](http://en.wikipedia.org/wiki/Shunting-yard_algorithm) * [Infix and postfix algorithm](http://interactivepython.org/runestone/static/pythonds/BasicDS/stacks.html#infix-prefix-and-postfix-expressions) **EDIT** Also, user has the ability to specify any number of parentheses if they insist, such as like: ``` ((1 or 3) and (2 or 4) or 5) ``` So it get translates to: ``` { "$or": [{ $and": [ "$or": [1, 3], "$or": [2, 4] }, 5 ] } ``` --- **EDIT 2** I figured out the algorithm. [Posted as an answer below](https://stackoverflow.com/a/21024204/534862). Thanks for helping!
2014/01/08
[ "https://Stackoverflow.com/questions/20986255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/534862/" ]
This is most easily done using a two-step process. 1) Convert the input to a syntax tree. 2) Convert the syntax tree to prefix notation. A syntax tree is basically the same as your prefix notation, just built using the data structures of your programming language. The standard method to create a syntax tree is to use a LALR parser generator, which is available for most languages. LALR parsers are fast, powerful, and expressive. A LALR parser generator takes a .y file as input, and outputs a source code file for a parser in the programming language of your choice. So you run the LALR parser generator once to generate your parser. (All programmers should learn to use parser generators :). It is also smart to use a standard tokenizer, though I am guessing you have written your own :).) The following is a .y-file to generate a LALR-parser for your mini-language. Running this .y file through a LALR parser generator will output the source for a LALR parser, which takes tokens as input and outputs a parse-tree (in the variable $root\_tree). You need to have defined the parsetree\_binaryop datastructure manually elsewhere. ``` %left OR. %left AND. start ::= expr(e). { $root_tree = e; } expr(r) ::= expr(e1) AND expr(e2). { r = new parsetree_binaryop(e1, OP_AND, e2); } expr(r) ::= expr(e1) OR expr(e2). { r = new parsetree_binaryop(e1, OP_OR, e2); } expr(r) ::= LPAR expr(e) RPAR. { r = e; } ``` The "%left AND" means that AND is left-associative (we could have chosen right too, doesn't matter for AND and OR). Because "%left OR" is declared before "%left AND" (later declarations have higher precedence), AND binds tighter than OR, and the generated parser will therefore do the right thing. When you have the syntax tree the parser gives you, generating the text representation is easy. Edit: this seems to be a LALR parser generator which outputs a parser in JavaScript: <http://sourceforge.net/projects/jscc/>
First define the semantics. In your first example you gave the `(1 and 2 and 3) or 4` interpretation, but it could also be read as `1 and 2 and (3 or 4)`, giving: ``` { "$and": [ {"$or": [3,4] }, [1,2] ] } ``` Let's assume that `and` has higher priority. Then just walk the list and group all terms connected by `and`. Next, join those groups with `or`, as sketched below.
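A minimal sketch of that grouping idea in Python (it assumes parentheses have already been stripped from the token list, and the function name is made up for illustration):

```python
def to_prefix(tokens):
    # Split the flat token list on "or"; every group between "or"s is an and-chain.
    groups, current = [], []
    for tok in tokens:
        if tok == "or":
            groups.append(current)
            current = []
        elif tok != "and":
            current.append(int(tok))
    groups.append(current)

    # Wrap multi-term groups in $and, then the whole thing in $or if needed.
    terms = [g[0] if len(g) == 1 else {"$and": g} for g in groups]
    return terms[0] if len(terms) == 1 else {"$or": terms}

print(to_prefix(["1", "and", "2", "and", "3", "or", "4"]))
# {'$or': [{'$and': [1, 2, 3]}, 4]}
```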
20,986,255
In our application we allow users to write specific conditions and we allow them express the conditions using such notation: ``` (1 and 2 and 3 or 4) ``` Where each numeric number correspond to one specific rule/condition. Now the problem is, how should I convert it, such that the end result is something like this: ``` { "$or": [ "$and": [1, 2, 3], 4 ] } ``` One more example: ``` (1 or 2 or 3 and 4) ``` To: ``` { "$or": [ 1, 2, "$and": [3, 4] ] } ``` --- I have written 50 over lines of tokenizer that successfully tokenized the statement into tokens and validated using stack/peek algorithm, and the tokens looks like this: ``` ["(", "1", "and", "2", "and", "3", "or", "4", ")"] ``` And now how should I convert this kind of "infix notation" into "prefix notation" with the rule that `and` takes precedence over `or`? Some **pointers or keywords** are greatly appreciated! What I have now doesn't really lead me to what I needed at the moment. Some researches so far: * [Smart design of a math parser?](https://stackoverflow.com/questions/114586/smart-design-of-a-math-parser) * [Add missing left parentheses into equation](https://stackoverflow.com/questions/19062718/add-missing-left-parentheses-into-equation) * [Equation (expression) parser with precedence?](https://stackoverflow.com/questions/28256/equation-expression-parser-with-precedence?rq=1) * [Infix to postfix notation](http://scriptasylum.com/tutorials/infix_postfix/algorithms/infix-postfix/index.htm) * [Dijkstra's Shunting-yard Algorithm](http://en.wikipedia.org/wiki/Shunting-yard_algorithm) * [Infix and postfix algorithm](http://interactivepython.org/runestone/static/pythonds/BasicDS/stacks.html#infix-prefix-and-postfix-expressions) **EDIT** Also, user has the ability to specify any number of parentheses if they insist, such as like: ``` ((1 or 3) and (2 or 4) or 5) ``` So it get translates to: ``` { "$or": [{ $and": [ "$or": [1, 3], "$or": [2, 4] }, 5 ] } ``` --- **EDIT 2** I figured out the algorithm. [Posted as an answer below](https://stackoverflow.com/a/21024204/534862). Thanks for helping!
2014/01/08
[ "https://Stackoverflow.com/questions/20986255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/534862/" ]
Thanks for the guides guys, at least I came out with my own solution. Since this is my first time doing mathematical equation parsing, pardon me if I did it wrongly or inefficient, or help me spot the error: Basically, here are the steps I made it happen: 1. Before parsing, always validate the pattern. Throw error when something is wrong. 2. Once validated, we do a infix notation to prefix notation conversion. This step requires "and" takes precedence over "or". 1. Reverse the given pattern 2. Do infix to postfix notation conversion. [I dumb, I learn from this](http://www.cs.arizona.edu/classes/cs227/spring12/infix.pdf) 3. Do the reverse again 4. The infix to prefix should be done at this stage 3. Build a tree from the prefix notation such that 1. A node always have, and maximum, two branch 2. Traverse down until it reach full leaves 4. Optimize the tree such that it merges similar operators together (such as multiple `$and` operators with child `$and` can be merged and form a shorter tree) 5. Mix with the given criteria set, and all done!! Working example can be found here: <http://jsfiddle.net/chaoszcat/uGKYj/3/> Working code as below: ``` (function() { /** * This is a source example of my original question on * http://stackoverflow.com/questions/20986255/converting-conditional-equation-from-infix-to-prefix-notation * * This is my solution and use it at your own risk * @author Lionel Chan <chaoszcat[at]gmail.com> */ /** * isNumeric, from jQuery. Duplicated here to make this js code pure * @param {mix} n Test subject * @returns {boolean} true if it's numeric */ function isNumeric(n) { return !isNaN(parseFloat(n))&&isFinite(n); } /** * Node class - represent a operator or numeric node * @param {string} token The token string, operator "and", "or", or numeric value */ function Node(token) { this.parent = null; this.children = []; //one node has two children at most this.token = token; this.is_operator = token === 'and' || token === 'or'; this.is_numeric = !this.is_operator; this.destroyed = false; } Node.prototype = { isOperator: function() { return this.is_operator;}, isNumeric: function() { return this.is_numeric;}, //While building tree, a node is full if there are two children isFull: function() { return this.children.length >= 2; }, addChild: function(node) { node.parent = this; this.children.push(node); }, hasParent: function() { return this.parent !== null; }, indexOfChild: function(node) { for (var i = 0 ; i < this.children.length ; ++i) { if (this.children[i] === node) { return i; } } return -1; }, removeChild: function(node) { var idx = this.indexOfChild(node); if (idx >= 0) { this.children[idx].parent = null; //remove parent relationship this.children.splice(idx, 1); //splice it out } }, /** * Pass my children to the target node, and destroy myself * * @param {Node} node A target node */ passChildrenTo: function(node) { for (var i = 0 ; i < this.children.length ; ++i) { node.addChild(this.children[i]); } this.destroy(); }, //Destroy this node destroy: function() { this.parent.removeChild(this); this.children = null; this.destroyed = true; } }; /** * Tree class - node manipulation * @param {array} prefixTokens The converted, prefix-notated tokens */ function Tree(prefixTokens) { this.buildTree(prefixTokens); //Optimize tree - so that the tree will merge multiple similar operators together this.optimize(this.root); } Tree.prototype = { root: null, //Reference to the deepest operator node in the tree for next attachment point deepestNode: null, /** * Render this tree with given 
criteria array * @param {array} crits * @returns {object} The built criteria */ render: function(crits) { //After optimization, we build the criteria and that's all! return this.buildCriteria(this.root, crits); }, /** * Build criteria from root node. Recursive * * @param {Node} node * @param {array} crits * @returns {object} of criteria */ buildCriteria: function(node, crits) { var output = {}, label = '$'+node.token; output[label] = []; //cpnditions array for (var i = 0 ; i < node.children.length ; ++i) { if (node.children[i].isOperator()) { output[label].push(this.buildCriteria(node.children[i], crits)); }else{ output[label].push(crits[node.children[i].token-1]); } } return output; }, /** * Optimize the tree, we can simplify nodes with same operator. Recursive * * @param {Node} node * @void */ optimize: function(node) { //note that node.children.length will keep changing since the swapping children will occur midway. Rescan is required for (var i = 0 ; i < node.children.length ; ++i) { if (node.children[i].isOperator()) { this.optimize(node.children[i]); if (node.children[i].token === node.token) { node.children[i].passChildrenTo(node); i = 0; //rescan this level whenever a swap occured } } } }, /** * Build tree from raw tokens * @param {array} tokens */ buildTree: function(tokens) { for (var i = 0 ; i < tokens.length ; ++i) { this.addNode(new Node(tokens[i])); } }, /** * Add node into tree * * @param {Node} node */ addNode: function(node) { //If no root? The first node is root if (this.root === null) { this.root = node; this.deepestNode = node; return; } //if deepestNode is full, traverse up until we find a node with capacity while(this.deepestNode && this.deepestNode.isFull()) { this.deepestNode = this.deepestNode.parent; } if (this.deepestNode) { this.deepestNode.addChild(node); } //If the current node is an operator, we move the deepestNode cursor to it if (node.isOperator()) { this.deepestNode = node; } } }; /** * Main criteria parser */ var CriteriaParser = { /** * Convert raw string of pattern (1 and 2 or 3) into the object of criteria pattern * * @param {string} str The raw pattern * @param {array} crits The raw list of criteria * @returns {String|Boolean} */ parse: function(str, crits) { var tokens = this.tokenize(str), validationResult = this.validate(tokens, crits), prefixNotation = ''; //Once succeded, we proceed to convert it to prefix notation if (validationResult === true) { prefixNotation = this.infixToPrefix(tokens); return (new Tree(prefixNotation)).render(crits); }else{ return validationResult; } }, /** * Convert the infix notation of the pattern (1 and 2 or 3) into prefix notation "or and 1 2 3" * * Note: * - and has higher precedence than or * * Steps: * 1. Reverse the tokens array * 2. Do infix -> postfix conversion (http://www.cs.arizona.edu/classes/cs227/spring12/infix.pdf, http://scriptasylum.com/tutorials/infix_postfix/algorithms/infix-postfix/index.htm) * 3. Reverse the result * * @param {array} tokens The tokenized tokens * @returns {array} prefix notation of pattern */ infixToPrefix: function(tokens) { var reversedTokens = tokens.slice(0).reverse(), //slice to clone, so not to disturb the original array stack = [], output = []; //And since it's reversed, please regard "(" as closing bracket, and ")" as opening bracket do { var stackTop = stack.length > 0 ? stack[stack.length-1] : null, token = reversedTokens.shift(); if (token === 'and') { while(stackTop === 'and') { output.push(stack.pop()); stackTop = stack.length > 0 ? 
stack[stack.length-1] : null; } stack.push(token); stackTop = token; }else if (token === 'or') { while(stackTop === 'and' || stackTop === 'or') { //and has higher precedence, so it will be popped out output.push(stack.pop()); stackTop = stack.length > 0 ? stack[stack.length-1] : null; } stack.push(token); stackTop = token; }else if (token === '(') { //'(' is closing bracket in reversed tokens while(stackTop !== ')' && stackTop !== undefined) { //keep looping until found a "open - )" bracket output.push(stack.pop()); stackTop = stack.length > 0 ? stack[stack.length-1] : null; } stack.pop(); //remove the open ")" bracket stackTop = stack.length > 0 ? stack[stack.length-1] : null; }else if (token === ')') { //')' is opening bracket in reversed tokens stack.push(token); }else if (isNumeric(token)) { output.push(token); }else if (token === undefined) { // no more tokens. Just shift everything out from stack while(stack.length) { stackTop = stack.pop(); if (stackTop !== undefined && stackTop !== ')') { output.push(stackTop); } } } }while(stack.length || reversedTokens.length); //Reverse output and we are done return output.reverse(); }, /** * Tokenized the provided pattern * @param {string} str The raw pattern from user * @returns {array} A tokenized array */ tokenize: function(str) { var pattern = str.replace(/\s/g, ''), //remove all the spaces :) not needed tokens = pattern.split(''), tokenized = []; //Tokenize it and verify var token = null, next = null; //attempts to concatenate the "and" and "or" and numerics while (tokens.length > 0) { token = tokens.shift(); next = tokens.length > 0 ? tokens[0] : null; if (token === '(' || token === ')') { tokenized.push(token); }else if (token === 'a' && tokens.length >= 2 && tokens[0] === 'n' && tokens[1] === 'd') { //and tokenized.push(token + tokens.shift() + tokens.shift()); }else if (token === 'o' && tokens.length >= 1 && next === 'r') { //or tokenized.push(token + tokens.shift()); }else if (isNumeric(token)) { while(isNumeric(next)) { token += next; tokens.shift(); //exhaust it next = tokens.length > 0 ? tokens[0] : null; } tokenized.push(token); }else{ tokenized.push(token); } } return tokenized; }, /** * Attempt to validate tokenized tokens * * @param {array} tokens The tokenized tokens * @param {array} crits The user provided criteria set * @returns {Boolean|String} Returns boolean true if succeeded, string if error occured */ validate: function(tokens, crits) { var valid = true, token = null, stack = [], nextToken = null, criteria_count = crits.length; for (var i = 0 ; i < tokens.length ; ++i) { token = tokens[i]; nextToken = i < tokens.length - 1 ? tokens[i+1] : null; if (token === '(') { stack.push('('); if (!isNumeric(nextToken) && nextToken !== '(' && nextToken !== ')') { throw 'Unexpected token "'+nextToken+'"'; } }else if (token === ')') { if (stack.length > 0) { stack.pop(); }else{ throw 'Unexpected closing bracket'; } if (nextToken !== ')' && nextToken !== 'and' && nextToken !== 'or' && nextToken !== null) { throw 'Unexpected token "'+nextToken+'"'; } }else if (token === 'and' || token === 'or') { if (!isNumeric(nextToken) && nextToken !== '(') { throw 'Unexpected token "'+nextToken+'"'; } }else if (isNumeric(token) && token <= criteria_count) { if (nextToken !== ')' && nextToken !== 'and' && nextToken !== 'or') { throw 'Unexpected token "'+nextToken+'"'; } }else{ //anything not recognized, die. 
throw 'Unexpected token "'+token+'"'; } } //Last step - check if we have all brackets closed if (valid && stack.length > 0) { throw 'Missing '+stack.length+' closing bracket'; } return valid; } }; //This is an example pattern and criteria set. Note that pattern numbers must match criteria numbers. var pattern = '((1 or 3) and (2 or 4) or 5)', crits = [ 1, 2, 3, 4, 5 ]; //lazy on the document on load. Just delay setTimeout(function() { var result; try { result = JSON.stringify(CriteriaParser.parse(pattern, crits), undefined, 4); }catch(e) { result = e; } var pre = document.createElement('pre'); pre.innerHTML = result; document.body.appendChild(pre); }, 10); })(); ```
This is most easily done using a two-step process: 1) convert to a syntax tree; 2) convert the syntax tree to prefix notation. A syntax tree is basically the same as your prefix notation, just built using the data structures of your programming language. The standard method to create a syntax tree is to use a LALR parser generator, which is available for most languages. LALR parsers are fast, powerful, and expressive. A LALR parser generator takes a .y file as input, and outputs a source code file for a parser in the programming language of your choice. So you run the LALR parser generator once to generate your parser. (All programmers should learn to use parser generators :). It is also smart to use a standard tokenizer, though I am guessing you have written your own :).) The following is a .y-file to generate a LALR-parser for your mini-language. Running this .y file through a LALR parser generator will output the source for a LALR parser, which takes tokens as input and outputs a parse-tree (in the variable $root\_tree). You need to have defined the parsetree\_binaryop datastructure manually elsewhere. ``` %left AND. %left OR. start ::= expr(e). { $root_tree = e; } expr(r) ::= expr(e1) AND expr(e2). { r = new parsetree_binaryop(e1, OP_AND, e2); } expr(r) ::= expr(e1) OR expr(e2). { r = new parsetree_binaryop(e1, OP_OR, e2); } expr(r) ::= LPAR expr(e) RPAR. { r = e; } ``` The "%left AND" means that AND is left-associative (we could have chosen right too, doesn't matter for AND and OR). That "%left AND" is mentioned before "%left OR" means that AND binds tighter than OR, and the generated parser will therefore do the right thing. When you have the syntax tree the parser gives you, generating the text representation is easy. Edit: this seems to be a LALR parser generator which outputs a parser in JavaScript: <http://sourceforge.net/projects/jscc/>
20,986,255
In our application we allow users to write specific conditions and we allow them express the conditions using such notation: ``` (1 and 2 and 3 or 4) ``` Where each numeric number correspond to one specific rule/condition. Now the problem is, how should I convert it, such that the end result is something like this: ``` { "$or": [ "$and": [1, 2, 3], 4 ] } ``` One more example: ``` (1 or 2 or 3 and 4) ``` To: ``` { "$or": [ 1, 2, "$and": [3, 4] ] } ``` --- I have written 50 over lines of tokenizer that successfully tokenized the statement into tokens and validated using stack/peek algorithm, and the tokens looks like this: ``` ["(", "1", "and", "2", "and", "3", "or", "4", ")"] ``` And now how should I convert this kind of "infix notation" into "prefix notation" with the rule that `and` takes precedence over `or`? Some **pointers or keywords** are greatly appreciated! What I have now doesn't really lead me to what I needed at the moment. Some researches so far: * [Smart design of a math parser?](https://stackoverflow.com/questions/114586/smart-design-of-a-math-parser) * [Add missing left parentheses into equation](https://stackoverflow.com/questions/19062718/add-missing-left-parentheses-into-equation) * [Equation (expression) parser with precedence?](https://stackoverflow.com/questions/28256/equation-expression-parser-with-precedence?rq=1) * [Infix to postfix notation](http://scriptasylum.com/tutorials/infix_postfix/algorithms/infix-postfix/index.htm) * [Dijkstra's Shunting-yard Algorithm](http://en.wikipedia.org/wiki/Shunting-yard_algorithm) * [Infix and postfix algorithm](http://interactivepython.org/runestone/static/pythonds/BasicDS/stacks.html#infix-prefix-and-postfix-expressions) **EDIT** Also, user has the ability to specify any number of parentheses if they insist, such as like: ``` ((1 or 3) and (2 or 4) or 5) ``` So it get translates to: ``` { "$or": [{ $and": [ "$or": [1, 3], "$or": [2, 4] }, 5 ] } ``` --- **EDIT 2** I figured out the algorithm. [Posted as an answer below](https://stackoverflow.com/a/21024204/534862). Thanks for helping!
2014/01/08
[ "https://Stackoverflow.com/questions/20986255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/534862/" ]
Thanks for the guides guys, at least I came out with my own solution. Since this is my first time doing mathematical equation parsing, pardon me if I did it wrongly or inefficient, or help me spot the error: Basically, here are the steps I made it happen: 1. Before parsing, always validate the pattern. Throw error when something is wrong. 2. Once validated, we do a infix notation to prefix notation conversion. This step requires "and" takes precedence over "or". 1. Reverse the given pattern 2. Do infix to postfix notation conversion. [I dumb, I learn from this](http://www.cs.arizona.edu/classes/cs227/spring12/infix.pdf) 3. Do the reverse again 4. The infix to prefix should be done at this stage 3. Build a tree from the prefix notation such that 1. A node always have, and maximum, two branch 2. Traverse down until it reach full leaves 4. Optimize the tree such that it merges similar operators together (such as multiple `$and` operators with child `$and` can be merged and form a shorter tree) 5. Mix with the given criteria set, and all done!! Working example can be found here: <http://jsfiddle.net/chaoszcat/uGKYj/3/> Working code as below: ``` (function() { /** * This is a source example of my original question on * http://stackoverflow.com/questions/20986255/converting-conditional-equation-from-infix-to-prefix-notation * * This is my solution and use it at your own risk * @author Lionel Chan <chaoszcat[at]gmail.com> */ /** * isNumeric, from jQuery. Duplicated here to make this js code pure * @param {mix} n Test subject * @returns {boolean} true if it's numeric */ function isNumeric(n) { return !isNaN(parseFloat(n))&&isFinite(n); } /** * Node class - represent a operator or numeric node * @param {string} token The token string, operator "and", "or", or numeric value */ function Node(token) { this.parent = null; this.children = []; //one node has two children at most this.token = token; this.is_operator = token === 'and' || token === 'or'; this.is_numeric = !this.is_operator; this.destroyed = false; } Node.prototype = { isOperator: function() { return this.is_operator;}, isNumeric: function() { return this.is_numeric;}, //While building tree, a node is full if there are two children isFull: function() { return this.children.length >= 2; }, addChild: function(node) { node.parent = this; this.children.push(node); }, hasParent: function() { return this.parent !== null; }, indexOfChild: function(node) { for (var i = 0 ; i < this.children.length ; ++i) { if (this.children[i] === node) { return i; } } return -1; }, removeChild: function(node) { var idx = this.indexOfChild(node); if (idx >= 0) { this.children[idx].parent = null; //remove parent relationship this.children.splice(idx, 1); //splice it out } }, /** * Pass my children to the target node, and destroy myself * * @param {Node} node A target node */ passChildrenTo: function(node) { for (var i = 0 ; i < this.children.length ; ++i) { node.addChild(this.children[i]); } this.destroy(); }, //Destroy this node destroy: function() { this.parent.removeChild(this); this.children = null; this.destroyed = true; } }; /** * Tree class - node manipulation * @param {array} prefixTokens The converted, prefix-notated tokens */ function Tree(prefixTokens) { this.buildTree(prefixTokens); //Optimize tree - so that the tree will merge multiple similar operators together this.optimize(this.root); } Tree.prototype = { root: null, //Reference to the deepest operator node in the tree for next attachment point deepestNode: null, /** * Render this tree with given 
criteria array * @param {array} crits * @returns {object} The built criteria */ render: function(crits) { //After optimization, we build the criteria and that's all! return this.buildCriteria(this.root, crits); }, /** * Build criteria from root node. Recursive * * @param {Node} node * @param {array} crits * @returns {object} of criteria */ buildCriteria: function(node, crits) { var output = {}, label = '$'+node.token; output[label] = []; //cpnditions array for (var i = 0 ; i < node.children.length ; ++i) { if (node.children[i].isOperator()) { output[label].push(this.buildCriteria(node.children[i], crits)); }else{ output[label].push(crits[node.children[i].token-1]); } } return output; }, /** * Optimize the tree, we can simplify nodes with same operator. Recursive * * @param {Node} node * @void */ optimize: function(node) { //note that node.children.length will keep changing since the swapping children will occur midway. Rescan is required for (var i = 0 ; i < node.children.length ; ++i) { if (node.children[i].isOperator()) { this.optimize(node.children[i]); if (node.children[i].token === node.token) { node.children[i].passChildrenTo(node); i = 0; //rescan this level whenever a swap occured } } } }, /** * Build tree from raw tokens * @param {array} tokens */ buildTree: function(tokens) { for (var i = 0 ; i < tokens.length ; ++i) { this.addNode(new Node(tokens[i])); } }, /** * Add node into tree * * @param {Node} node */ addNode: function(node) { //If no root? The first node is root if (this.root === null) { this.root = node; this.deepestNode = node; return; } //if deepestNode is full, traverse up until we find a node with capacity while(this.deepestNode && this.deepestNode.isFull()) { this.deepestNode = this.deepestNode.parent; } if (this.deepestNode) { this.deepestNode.addChild(node); } //If the current node is an operator, we move the deepestNode cursor to it if (node.isOperator()) { this.deepestNode = node; } } }; /** * Main criteria parser */ var CriteriaParser = { /** * Convert raw string of pattern (1 and 2 or 3) into the object of criteria pattern * * @param {string} str The raw pattern * @param {array} crits The raw list of criteria * @returns {String|Boolean} */ parse: function(str, crits) { var tokens = this.tokenize(str), validationResult = this.validate(tokens, crits), prefixNotation = ''; //Once succeded, we proceed to convert it to prefix notation if (validationResult === true) { prefixNotation = this.infixToPrefix(tokens); return (new Tree(prefixNotation)).render(crits); }else{ return validationResult; } }, /** * Convert the infix notation of the pattern (1 and 2 or 3) into prefix notation "or and 1 2 3" * * Note: * - and has higher precedence than or * * Steps: * 1. Reverse the tokens array * 2. Do infix -> postfix conversion (http://www.cs.arizona.edu/classes/cs227/spring12/infix.pdf, http://scriptasylum.com/tutorials/infix_postfix/algorithms/infix-postfix/index.htm) * 3. Reverse the result * * @param {array} tokens The tokenized tokens * @returns {array} prefix notation of pattern */ infixToPrefix: function(tokens) { var reversedTokens = tokens.slice(0).reverse(), //slice to clone, so not to disturb the original array stack = [], output = []; //And since it's reversed, please regard "(" as closing bracket, and ")" as opening bracket do { var stackTop = stack.length > 0 ? stack[stack.length-1] : null, token = reversedTokens.shift(); if (token === 'and') { while(stackTop === 'and') { output.push(stack.pop()); stackTop = stack.length > 0 ? 
stack[stack.length-1] : null; } stack.push(token); stackTop = token; }else if (token === 'or') { while(stackTop === 'and' || stackTop === 'or') { //and has higher precedence, so it will be popped out output.push(stack.pop()); stackTop = stack.length > 0 ? stack[stack.length-1] : null; } stack.push(token); stackTop = token; }else if (token === '(') { //'(' is closing bracket in reversed tokens while(stackTop !== ')' && stackTop !== undefined) { //keep looping until found a "open - )" bracket output.push(stack.pop()); stackTop = stack.length > 0 ? stack[stack.length-1] : null; } stack.pop(); //remove the open ")" bracket stackTop = stack.length > 0 ? stack[stack.length-1] : null; }else if (token === ')') { //')' is opening bracket in reversed tokens stack.push(token); }else if (isNumeric(token)) { output.push(token); }else if (token === undefined) { // no more tokens. Just shift everything out from stack while(stack.length) { stackTop = stack.pop(); if (stackTop !== undefined && stackTop !== ')') { output.push(stackTop); } } } }while(stack.length || reversedTokens.length); //Reverse output and we are done return output.reverse(); }, /** * Tokenized the provided pattern * @param {string} str The raw pattern from user * @returns {array} A tokenized array */ tokenize: function(str) { var pattern = str.replace(/\s/g, ''), //remove all the spaces :) not needed tokens = pattern.split(''), tokenized = []; //Tokenize it and verify var token = null, next = null; //attempts to concatenate the "and" and "or" and numerics while (tokens.length > 0) { token = tokens.shift(); next = tokens.length > 0 ? tokens[0] : null; if (token === '(' || token === ')') { tokenized.push(token); }else if (token === 'a' && tokens.length >= 2 && tokens[0] === 'n' && tokens[1] === 'd') { //and tokenized.push(token + tokens.shift() + tokens.shift()); }else if (token === 'o' && tokens.length >= 1 && next === 'r') { //or tokenized.push(token + tokens.shift()); }else if (isNumeric(token)) { while(isNumeric(next)) { token += next; tokens.shift(); //exhaust it next = tokens.length > 0 ? tokens[0] : null; } tokenized.push(token); }else{ tokenized.push(token); } } return tokenized; }, /** * Attempt to validate tokenized tokens * * @param {array} tokens The tokenized tokens * @param {array} crits The user provided criteria set * @returns {Boolean|String} Returns boolean true if succeeded, string if error occured */ validate: function(tokens, crits) { var valid = true, token = null, stack = [], nextToken = null, criteria_count = crits.length; for (var i = 0 ; i < tokens.length ; ++i) { token = tokens[i]; nextToken = i < tokens.length - 1 ? tokens[i+1] : null; if (token === '(') { stack.push('('); if (!isNumeric(nextToken) && nextToken !== '(' && nextToken !== ')') { throw 'Unexpected token "'+nextToken+'"'; } }else if (token === ')') { if (stack.length > 0) { stack.pop(); }else{ throw 'Unexpected closing bracket'; } if (nextToken !== ')' && nextToken !== 'and' && nextToken !== 'or' && nextToken !== null) { throw 'Unexpected token "'+nextToken+'"'; } }else if (token === 'and' || token === 'or') { if (!isNumeric(nextToken) && nextToken !== '(') { throw 'Unexpected token "'+nextToken+'"'; } }else if (isNumeric(token) && token <= criteria_count) { if (nextToken !== ')' && nextToken !== 'and' && nextToken !== 'or') { throw 'Unexpected token "'+nextToken+'"'; } }else{ //anything not recognized, die. 
throw 'Unexpected token "'+token+'"'; } } //Last step - check if we have all brackets closed if (valid && stack.length > 0) { throw 'Missing '+stack.length+' closing bracket'; } return valid; } }; //This is an example pattern and criteria set. Note that pattern numbers must match criteria numbers. var pattern = '((1 or 3) and (2 or 4) or 5)', crits = [ 1, 2, 3, 4, 5 ]; //lazy on the document on load. Just delay setTimeout(function() { var result; try { result = JSON.stringify(CriteriaParser.parse(pattern, crits), undefined, 4); }catch(e) { result = e; } var pre = document.createElement('pre'); pre.innerHTML = result; document.body.appendChild(pre); }, 10); })(); ```
First define semantics. In your first example you gave the interpretation `(1 and 2 and 3) or 4`, but it could also be `1 and 2 and (3 or 4)`, so: ``` { "$and": [ {"$or": [3,4] }, [1,2] ] } ``` Let's assume that `and` has higher priority. Then just go through the list and join all terms with `and`. Next, join all the rest with `or`.
43,005,480
Let's say that I have three lists and want to add all elements that are integers to a list named `int_list`: ``` test1 = [1, 2, 3, "b", 6] test2 = [1, "foo", "bar", 7] test3 = ["more stuff", 1, 4, 99] int_list = [] ``` I know that I can do the following code to append all integers to a new list: ``` for elem1, elem2, elem3 in zip(test1, test2, test3): if elem1 is int: int_list.append(elem1) if elem2 is int: int_list.append(elem2) if elem3 is int: int_list.append(elem3) ``` Is there anyway that I can merge the if statements into one conditional statement? Or make it less code? Is there a more pythonic way to do this? I tried doing the following code, but it would include elements that were not integers: ``` for elem1, elem2, elem3 in zip(test1, test2, test3): if (elem1 is int, elem2 is int, elem3 is int): int_list.append(elem1) int_list.append(elem2) int_list.append(elem3) ```
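As a point of comparison, the intent described above can be expressed more compactly by chaining the lists and testing each element's type. This is only an illustrative sketch using the sample data from the question:

```
from itertools import chain

test1 = [1, 2, 3, "b", 6]
test2 = [1, "foo", "bar", 7]
test3 = ["more stuff", 1, 4, 99]

# isinstance() is the usual type check; `elem is int` compares the value
# against the type object itself and is never true for these elements
int_list = [elem for elem in chain(test1, test2, test3)
            if isinstance(elem, int)]
print(int_list)  # [1, 2, 3, 6, 1, 7, 1, 4, 99]
```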
2017/03/24
[ "https://Stackoverflow.com/questions/43005480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6779980/" ]
When you import the data into Excel, tell the **Import Wizard** that the field is *Text*. [![enter image description here](https://i.stack.imgur.com/tqMFm.png)](https://i.stack.imgur.com/tqMFm.png)
My preference is to deal with the inputs, when possible, and in this case if you have control over the python script, it may be preferable to simply modify *that*, so that Excel's default behavior interprets the file in the desired way. Borrowing from [this similar question with a million upvotes](https://stackoverflow.com/questions/165042/stop-excel-from-automatically-converting-certain-text-values-to-dates?rq=1), you can modify your python script to include a non-printing character: ``` output.write('"{0}\t","{1}\t","{2}\t"\n'.format(value1, value2, value3)) ``` This way, you can easily double-click to open the file and the contents will be treated as text, rather than interpreted as a numeric/date value. The benefit of this is that other users won't have to remember to use the wizard, and it may be easier to deal with mixed data as well. Example: ``` def writeit(): csvPath = r'c:\debug\output.csv' a = '4-10' b = '10-0' with open(csvPath, 'w') as f: f.write('"{0}\t","{1}\t"'.format(a,b)) ``` Produces the following file in text editor: [![enter image description here](https://i.stack.imgur.com/iW4a5.png)](https://i.stack.imgur.com/iW4a5.png) And when opened via double-click in Excel: [![enter image description here](https://i.stack.imgur.com/RVxzo.png)](https://i.stack.imgur.com/RVxzo.png)
60,286,051
I have a python script and want to call a subprocess from it. The following example works completely fine: Script1: ``` from subprocess import Popen p = Popen('python Script2.py', shell=True) ``` Script2: ``` def execute(): print('works!') execute() ``` However as soon as I want to pass a variable to the function, I get the following error: ``` def execute(random_variable: str): SyntaxError: invalid syntax ``` Script1: ``` from subprocess import Popen p = Popen('python Script2.py', shell=True) ``` Script2: ``` def execute(random_variable: str): print(random_variable) execute(random_variable='does not work') ``` Does anyone have an idea why that could be the case? Couldn't find anything about it online :(
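One plausible explanation, offered here as an assumption since it depends on the machine's setup, is that the bare `python` command resolves to Python 2, which rejects function annotations such as `random_variable: str` with exactly this `SyntaxError`. A sketch that launches the child script with the same interpreter as the parent avoids relying on what `python` points to:

```
import sys
from subprocess import Popen

# sys.executable is the interpreter running Script1.py, so Script2.py is
# parsed by the same Python version instead of whatever `python` resolves to
p = Popen([sys.executable, 'Script2.py'])
p.wait()
```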
2020/02/18
[ "https://Stackoverflow.com/questions/60286051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12920312/" ]
There is a file called extensions.json in the bin folder. Your startup calls are registered in that file. Whichever function app is deployed latest, that function's startup call will be replaced with the earlier function's startup call. So, you need to make sure that all of the functions' startup calls are registered in this file. [![enter image description here](https://i.stack.imgur.com/SNEUS.jpg)](https://i.stack.imgur.com/SNEUS.jpg) [![enter image description here](https://i.stack.imgur.com/kecTy.jpg)](https://i.stack.imgur.com/kecTy.jpg)
Seems like you haven't [injected](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-3.1) `IAzureTableStorageService` properly in your Startup class hence the DI can't find it. Reference the project where `IAzureTableStorageService` is located, add something like this in your Startup class: ``` services.AddTransient<IAzureTableStorageService, AzureTableStorageService>(); ``` where `AzureTableStorageService` is your class that implements `IAzureTableStorageService`.
1,083,391
Please help me with how to put a double command in the *cmd*, like this in Linux: `apt-get install firefox && cp test.py /home/python/`. How can I do this in Windows, more specifically in Windows CE? It's the same in Windows and in Windows CE, because the *cmd* is the same. Thanks!
2009/07/05
[ "https://Stackoverflow.com/questions/1083391", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126353/" ]
If CE is the same as XP Pro (and I'm not sure you're right about that), you can use the same method: ``` dir && echo hello ``` Here it is running on my Windows VM (XP SP3): ``` C:\Documents and Settings\Pax>dir && echo hello Volume in drive C is Primary Volume Serial Number is 04F7-0E7B Directory of C:\Documents and Settings\Pax 29/06/2009 05:00 PM <DIR> . 29/06/2009 05:00 PM <DIR> .. 17/01/2009 12:38 PM <DIR> Desktop : : : 29/06/2009 05:00 PM 4,487 _viminfo 14 File(s) 51,658 bytes 9 Dir(s) 13,424,406,528 bytes free hello C:\Documents and Settings\Pax> ``` Some of the useful multi-command options are: ``` cmd1 & cmd2 - run cmd1 then run cmd2. cmd1 && cmd2 - run cmd1 then, if cmd1 was successful, run cmd2. cmd1 || cmd2 - run cmd1 then, if cmd1 was not successful, run cmd2. ```
This <http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/ntcmds_shelloverview.mspx?mfr=true> might be of some help. cheers
17,620,875
I'm a Python developer and most frequently I use [buildout](http://www.buildout.org/en/latest/) for managing my projects. In that case I don't ever need to run any command to activate my dependencies environment. However, sometimes I use virtualenv when buildout is too complicated for the particular case. Recently I started playing with Ruby and noticed a very useful feature: the environment changes automatically when I `cd` into the project folder. It is somehow related to `rvm` and the `.rvmrc` file. I'm just wondering if there are ways to hook a script onto different bash commands, so that I can run `workon environment_name` automatically when I `cd` into the project folder. **So the logic is as simple as:** when you `cd` into the project with `folder_name`, the script should run `workon folder_name`
2013/07/12
[ "https://Stackoverflow.com/questions/17620875", "https://Stackoverflow.com", "https://Stackoverflow.com/users/375373/" ]
One feature of Unix shells is that they let you create *shell functions*, which are much like functions in other languages; they are essentially named groups of commands. For example, you can write a function named `mycd` that first runs `cd`, and then runs other commands: ``` function mycd () { cd "$@" if ... ; then workon environment fi } ``` (The `"$@"` expands to the arguments that you passed to `mycd`; so `mycd /path/to/dir` will call `cd /path/to/dir`.) As a special case, a shell function actually supersedes a like-named builtin command; so if you name your function `cd`, it will be run instead of the `cd` builtin whenever you run `cd`. In that case, in order for the function to call the builtin `cd` to perform the actual directory-change (instead of calling itself, causing infinite recursion), it can use Bash's `builtin` builtin to call a specified builtin command. So: ``` function cd () { builtin cd "$@" # perform the actual cd if ... ; then workon environment fi } ``` (Note: I don't know what your logic is for recognizing a project directory, so I left that as `...` for you to fill in. If you describe your logic in a comment, I'll edit accordingly.)
I think you're looking for one of two things. [`autoenv`](https://github.com/kennethreitz/autoenv) is a relatively simple tool that creates the relevant bash functions for you. It's essentially doing what ruakh suggested, but you can use it without having to know how the shell works. [`virtualenvwrapper`](https://pypi.python.org/pypi/virtualenvwrapper) is full of tools that make it easier to build smarter versions of the bash functions—e.g., switch to the venv even if you `cd` into one of its subdirectories instead of the base, or track venvs stored in `git` or `hg`, or … See the [Tips and Tricks](http://virtualenvwrapper.readthedocs.org/en/latest/tips.html) page. The [Cookbook for `autoenv`](https://github.com/kennethreitz/autoenv/wiki/Cookbook) shows some nifty ways to use the two together.
17,620,875
I'm a Python developer and most frequently I use [buildout](http://www.buildout.org/en/latest/) for managing my projects. In that case I don't ever need to run any command to activate my dependencies environment. However, sometimes I use virtualenv when buildout is too complicated for the particular case. Recently I started playing with Ruby and noticed a very useful feature: the environment changes automatically when I `cd` into the project folder. It is somehow related to `rvm` and the `.rvmrc` file. I'm just wondering if there are ways to hook a script onto different bash commands, so that I can run `workon environment_name` automatically when I `cd` into the project folder. **So the logic is as simple as:** when you `cd` into the project with `folder_name`, the script should run `workon folder_name`
2013/07/12
[ "https://Stackoverflow.com/questions/17620875", "https://Stackoverflow.com", "https://Stackoverflow.com/users/375373/" ]
One feature of Unix shells is that they let you create *shell functions*, which are much like functions in other languages; they are essentially named groups of commands. For example, you can write a function named `mycd` that first runs `cd`, and then runs other commands: ``` function mycd () { cd "$@" if ... ; then workon environment fi } ``` (The `"$@"` expands to the arguments that you passed to `mycd`; so `mycd /path/to/dir` will call `cd /path/to/dir`.) As a special case, a shell function actually supersedes a like-named builtin command; so if you name your function `cd`, it will be run instead of the `cd` builtin whenever you run `cd`. In that case, in order for the function to call the builtin `cd` to perform the actual directory-change (instead of calling itself, causing infinite recursion), it can use Bash's `builtin` builtin to call a specified builtin command. So: ``` function cd () { builtin cd "$@" # perform the actual cd if ... ; then workon environment fi } ``` (Note: I don't know what your logic is for recognizing a project directory, so I left that as `...` for you to fill in. If you describe your logic in a comment, I'll edit accordingly.)
Just found [this topic](http://virtualenvwrapper.readthedocs.org/en/latest/tips.html#changing-the-default-behavior-of-cd) in the description of virtualenvwrapper. It describes exactly what I need.
17,620,875
I'm a Python developer and most frequently I use [buildout](http://www.buildout.org/en/latest/) for managing my projects. In that case I don't ever need to run any command to activate my dependencies environment. However, sometimes I use virtualenv when buildout is too complicated for the particular case. Recently I started playing with Ruby and noticed a very useful feature: the environment changes automatically when I `cd` into the project folder. It is somehow related to `rvm` and the `.rvmrc` file. I'm just wondering if there are ways to hook a script onto different bash commands, so that I can run `workon environment_name` automatically when I `cd` into the project folder. **So the logic is as simple as:** when you `cd` into the project with `folder_name`, the script should run `workon folder_name`
2013/07/12
[ "https://Stackoverflow.com/questions/17620875", "https://Stackoverflow.com", "https://Stackoverflow.com/users/375373/" ]
I think you're looking for one of two things. [`autoenv`](https://github.com/kennethreitz/autoenv) is a relatively simple tool that creates the relevant bash functions for you. It's essentially doing what ruakh suggested, but you can use it without having to know how the shell works. [`virtualenvwrapper`](https://pypi.python.org/pypi/virtualenvwrapper) is full of tools that make it easier to build smarter versions of the bash functions—e.g., switch to the venv even if you `cd` into one of its subdirectories instead of the base, or track venvs stored in `git` or `hg`, or … See the [Tips and Tricks](http://virtualenvwrapper.readthedocs.org/en/latest/tips.html) page. The [Cookbook for `autoenv`](https://github.com/kennethreitz/autoenv/wiki/Cookbook) shows some nifty ways to use the two together.
Just found [this topic](http://virtualenvwrapper.readthedocs.org/en/latest/tips.html#changing-the-default-behavior-of-cd) in the description of virtualenvwrapper. It describes exactly what I need.
62,389,496
I wanted to write a Python Script that lists all files in the current working directory, if the **length** of a file's name is between 3 - 6 characters long. Also, it should only list files with the extension `.py` I was not able to find any specific function that would return the legnth of a files name, only the size of its contet. Here is what my code looks like so far: ``` #!/usr/bin/env python3 import os for file in os.listdir(os.getcwd()): if file.endswith(".py"): print(file) ``` Can anyone tell me what the solution could look like? Do I use a RegEx in the `os.getcwd(RegEx)` function? **edit:** I am sorry for posting this trivial question. I found the solution and it looks as followed: ``` #!/usr/bin/env python3 import os for file in os.listdir(os.getcwd()): if file.endswith(".py"): if ((len(os.path.splitext(file)[0])) > 2 and (len(os.path.splitext(file)[0])) < 7): print(file) ``` This works for my intended purpose. Thanks for the answers, they made me realize that using len(filename) was an option and therefore my question was not very smart.
2020/06/15
[ "https://Stackoverflow.com/questions/62389496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12781947/" ]
Maybe rewording `file` to `filename` might make things clearer... ``` import os for filename in os.listdir(os.getcwd()): if filename.endswith(".py"): print(filename, len(filename)) ``` Now since you know how `if` statements work, you can probably do something with `len(filename)`? :)
Your variable `file` is a string, so you can use the `len()` function to retrieve the length of a string: ``` for file in os.listdir(os.getcwd()): if file.endswith(".py"): print(file, len(file)) ```
62,389,496
I wanted to write a Python Script that lists all files in the current working directory, if the **length** of a file's name is between 3 - 6 characters long. Also, it should only list files with the extension `.py` I was not able to find any specific function that would return the legnth of a files name, only the size of its contet. Here is what my code looks like so far: ``` #!/usr/bin/env python3 import os for file in os.listdir(os.getcwd()): if file.endswith(".py"): print(file) ``` Can anyone tell me what the solution could look like? Do I use a RegEx in the `os.getcwd(RegEx)` function? **edit:** I am sorry for posting this trivial question. I found the solution and it looks as followed: ``` #!/usr/bin/env python3 import os for file in os.listdir(os.getcwd()): if file.endswith(".py"): if ((len(os.path.splitext(file)[0])) > 2 and (len(os.path.splitext(file)[0])) < 7): print(file) ``` This works for my intended purpose. Thanks for the answers, they made me realize that using len(filename) was an option and therefore my question was not very smart.
2020/06/15
[ "https://Stackoverflow.com/questions/62389496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12781947/" ]
You can try ``` import os for file in os.listdir(os.getcwd()): if file.endswith(".py") and 3 <= len(file[:file.find(".")]) <= 6: print(file) ``` This will print only files that have between 3 and 6 chars in the filename without the file extension.
Your variable `file` is a string, so you can use the `len()` function to retrieve the length of a string: ``` for file in os.listdir(os.getcwd()): if file.endswith(".py"): print(file, len(file)) ```
21,957,231
I'm very new to d3 and in order to learn I'm trying to manipulate the [d3.js line example](http://bl.ocks.org/mbostock/3883245), the code is below. I'm trying to modify this to use model data that I already have on hand. This data is passed down as a json object. The problem is that I don't know how to manipulate the data to fit what d3 expects. Most of the d3 examples use key-value arrays. I want to use a key array + a value array. For example my data is structured per the example below: ``` // my data. A name property, with array values and a value property with array values. // data is the json object returned from the server var tl = new Object; tl.date = data[0].fields.date; tl.close = data[0].fields.close; console.log(tl); ``` Here is the structure visually (yes it time format for now): ![My Data](https://i.stack.imgur.com/rGMYR.png) Now this is different from the [data.tsv](http://bl.ocks.org/mbostock/3883245#data.tsv) call which results in key-value pairs in the code below. ![key-value data](https://i.stack.imgur.com/rfgtP.png) *The goal is to use my data as is, without having to iterate over my array to preprocess it.* **Questions:** 1) Are there any built in's to d3 to deal with this situation? For example, if key-values are absolutely necessary in python we could use the `zip` function to quickly generate a key-value list. 2) Can I use my data as is, or does it *have* to be turned into key-value pairs? **Below is the line example code.** ``` // javascript/d3 (LINE EXAMPLE) var margin = {top: 20, right: 20, bottom: 30, left: 50}, width = 640 - margin.left - margin.right, height = 480 - margin.top - margin.bottom; var parseDate = d3.time.format("%d-%b-%y").parse; var x = d3.time.scale() .range([0, width]); var y = d3.scale.linear() .range([height, 0]); var xAxis = d3.svg.axis() .scale(x) .orient("bottom"); var yAxis = d3.svg.axis() .scale(y) .orient("left"); var line = d3.svg.line() .x(function(d) { return x(d.date); }) .y(function(d) { return y(d.close); }); var svg = d3.select("body").append("svg") .attr("width", width + margin.left + margin.right) .attr("height", height + margin.top + margin.bottom) .append("g") .attr("transform", "translate(" + margin.left + "," + margin.top + ")"); d3.tsv("/data.tsv", function(error, data) { data.forEach(function(d) { d.date = parseDate(d.date); d.close = +d.close; }); x.domain(d3.extent(data, function(d) { return d.date; })); y.domain(d3.extent(data, function(d) { return d.close; })); svg.append("g") .attr("class", "x axis") .attr("transform", "translate(0," + height + ")") .call(xAxis); svg.append("g") .attr("class", "y axis") .call(yAxis) .append("text") .attr("transform", "rotate(-90)") .attr("y", 6) .attr("dy", ".71em") .style("text-anchor", "end") .text("Price ($)"); svg.append("path") .datum(data) .attr("class", "line") .attr("d", line); }); ```
2014/02/22
[ "https://Stackoverflow.com/questions/21957231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1870013/" ]
``` for (String hashTagged : hashTags) { if (tweet.equalsIgnoreCase(hashTagged) != true) { hashTags.add(hashTagged); -----------------------------^ } } ``` The issue is that while iterating over the hashTags list you can't update it.
You are getting `java.util.ConcurrentModificationException` because you are modifying the `List` `hashTags` while you are iterating over it. ``` for (String hashTagged : hashTags) { if (tweet.equalsIgnoreCase(hashTagged) != true) { hashTags.add(hashTagged); } } ``` You can create a temporary list of items that must be removed or improve your logic.
65,736,625
In Python, I have a datetime object created with this format: ``` datetime_object = datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S') ``` In other classes, I'm using this object. When I reach this object, I want to extract the time from it and compare it with a time string, like below: ``` if "01:15:13" == time_from_datetime_object ``` How can I do this?
2021/01/15
[ "https://Stackoverflow.com/questions/65736625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14697436/" ]
You need to use the strftime method: ``` from datetime import datetime date_time_str = '2021-01-15 01:15:13' datetime_object = datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S') if "01:15:13" == datetime_object.strftime('%H:%M:%S'): print("match") ```
If you want to compare it as string: ``` if "01:15:13" == datetime_object.strftime('%H:%M:%S'): ```
65,736,625
In Python, I have a datetime object created with this format: ``` datetime_object = datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S') ``` In other classes, I'm using this object. When I reach this object, I want to extract the time from it and compare it with a time string, like below: ``` if "01:15:13" == time_from_datetime_object ``` How can I do this?
2021/01/15
[ "https://Stackoverflow.com/questions/65736625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14697436/" ]
You need to use the strftime method: ``` from datetime import datetime date_time_str = '2021-01-15 01:15:13' datetime_object = datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S') if "01:15:13" == datetime_object.strftime('%H:%M:%S'): print("match") ```
Use its [`time`](https://docs.python.org/3/library/datetime.html?highlight=datetime.datetime.time#datetime.datetime.time) method to return a `time` object, which you can compare to another `time` object. ``` from datetime import time if datetime_object.time() == time(1, 15, 13): ... ``` You may have to be careful with microseconds though, so at some point you might want to do `datetime_object = datetime_object.replace(microsecond=0)`, should your `datetime` objects contain non-zero microseconds.
7,172,585
> > **Possible Duplicate:** > > [Should Python import statements always be at the top of a module?](https://stackoverflow.com/questions/128478/should-python-import-statements-always-be-at-the-top-of-a-module) > > > In a very simple one-file python program like ``` # ------------------------ # place 1 # import something def foo(): # place 2 # import something return something.foo() def bar(f): ... def baz(): f = foo() bar(f) baz() # ---------------- ``` Would you put the "import something" at place 1 or 2?
2011/08/24
[ "https://Stackoverflow.com/questions/7172585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/909210/" ]
[PEP 8](http://www.python.org/dev/peps/pep-0008/) specifies that: * Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. Imports should be grouped in the following order: 1. standard library imports 2. related third party imports 3. local application/library specific imports You should put a blank line between each group of imports. Put any relevant `__all__` specification after the imports.
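For illustration, a module header that follows these groups could look like the sketch below (the module names are placeholders, not a recommendation of specific packages):

```
"""Module docstring."""

# 1. standard library imports
import os
import sys

# 2. related third party imports
import requests

# 3. local application/library specific imports
from myapp import helpers

# any __all__ specification comes after the imports
__all__ = ['main']
```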
I'd principally agree with Robert S. answer, but sometimes it makes sense to put it into a function. Especially if you want to control the importing mechanism. This is useful if you cannot be sure if you actually have access to a specific module. Consider this example: ``` def foo(): try: import somespecialmodule # do something # ... except ImportError: import anothermodule # do something else # ... ``` This might even be the case for standard library modules (I especially have in mind the `optparse` and `argparse` modules).
19,328,381
I am confused about classes in python. I don't want anyone to write down raw code but suggest methods of doing it. Right now I have the following code... ``` def main(): lst = [] filename = 'yob' + input('Enter year: ') + '.txt' for line in open(filename): line = line.strip() lst.append(line.split(',') ``` What this code does is have a input for a file based on a year. The program is placed in a folder with a bunch of text files that have different years to them. Then, I made a class... ``` class Names(): __slots__ = ('Name', 'Gender', 'Occurences') ``` This class just defines what objects I should make. The goal of the project is to build objects and create lists based off these objects. My main function returns a list containing several elements that look like the following: `[[jon, M, 190203], ...]` These elements have a name in `lst[0]`, a gender `M` or `F` in `[1]` and a occurence in `[3]`. I'm trying to find the top 20 Male and Female candidates and print them out. Goal- There should be a function which creates a name entry, i.e. mkEntry. It should be passed the appropriate information, build a new object, populate the fields, and return it.
2013/10/11
[ "https://Stackoverflow.com/questions/19328381", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2835743/" ]
You can use reflection ``` item.GetType().GetProperty(field).GetValue(item).ToString(); ``` (or `GetField()` instead of `GetProperty()` if... that's a field)
This is not trivial like it might be, say, in ecmascript. The simplest option is reflection, for example: ``` data = item.GetType().GetProperty(field).GetValue(item).ToString(); ``` however: depending on the API involved, there may be other options available involving indexers, etc. Note that reflection is slower than regular member access - if you are doing this in very high usage, you might need a more optimized implementation. It (reflection) is usually fast enough for light to moderate usage, though.
53,327,826
Many open-source projects use a "Fork me on Github" banner at the top-right corner of the pages in the documentation. To name just one, let's take the example of Python [requests](http://docs.python-requests.org/en/master/): [![Fork me on Github ribbon](https://i.stack.imgur.com/CHfGm.png)](https://i.stack.imgur.com/CHfGm.png) There is a post on the Github blog about those banners where image code is provided: [GitHub Ribbons](https://blog.github.com/2008-12-19-github-ribbons/) But nothing is explained about **how** to add the link in each of the page generated using Sphinx and then uploaded on ReadTheDocs. Could you please help to generate this automatically? I expected there could be an option in `conf.py` but I found none. My Sphinx configuration is the default one.
2018/11/15
[ "https://Stackoverflow.com/questions/53327826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2291710/" ]
The easiest way is to use an alternative theme like [`alabaster`](https://pypi.org/project/alabaster/) as it comes with preconfigured option like `github_banner` (see [Joran's answer](https://stackoverflow.com/a/53328720/2291710)). For other themes like `sphinx-rtd-theme` which do not provide such setting, the solution is to rely on [Sphinx templating](https://www.sphinx-doc.org/en/master/templating.html). One has to create the file `docs/_templates/layout.html` with the following content: ``` {% extends '!layout.html' %} {% block document %} {{super()}} <a href="https://github.com/you"> <img style="position: absolute; top: 0; right: 0; border: 0;" src="https://s3.amazonaws.com/github/ribbons/forkme_right_darkblue_121621.png" alt="Fork me on GitHub"> </a> {% endblock %} ```
the great thing about python (especially python on github) is that you can simply look at the source I can go to <https://github.com/requests/requests/blob/master/docs/conf.py> and look at their conf.py where we can see this entry ``` # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. html_theme_options = { 'show_powered_by': False, 'github_user': 'requests', 'github_repo': 'requests', 'github_banner': True, 'show_related': False, 'note_bg': '#FFF59C' } ``` we also can notice they are using the theme alabaster with a quick google we find that alabaster has some docs <https://github.com/mitya57/alabaster-1> ``` github_banner: true or false (default: false) - whether to apply a 'Fork me on Github' banner in the top right corner of the page. If true, requires that you set github_user and github_repo. May also submit a string file path (as with logo, relative to $PROJECT/_static/) to be used as the banner image instead of the default. ``` so the answer is to use alabaster theme and set those options :)
44,922,108
The objective is to parse the output of an ill-behaving program which concatenates a list of numbers, e.g., 3, 4, 5, into a string "345", without any non-number separating the numbers. I also know that the list is sorted in ascending order. I came up with the following solution which reconstructs the list from a string: ``` a = '3456781015203040' numlist = [] numlist.append(int(a[0])) i = 1 while True: j = 1 while True: if int(a[i:i+j]) <= numlist[-1]: j = j + 1 else: numlist.append(int(a[i:i+j])) i = i + j break if i >= len(a): break ``` This works, but I have a feeling that the solution reflects too much the fact that I have been trained in Pascal, decades ago. Is there a better or more pythonic way to do it? I am aware that the problem is ill-posed, i.e., I could start with '34' as the initial element and get a different solution (or possibly end up with remaining trailing numeral characters which don't form the next element of the list).
2017/07/05
[ "https://Stackoverflow.com/questions/44922108", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2618889/" ]
This finds solutions for all possible initial number lengths: ``` a = '3456781015203040' def numbers(a,n): current_num, i = 0, 0 while True: while i+n <= len(a) and int(a[i:i+n]) <= current_num: n += 1 if i+n <= len(a): current_num = int(a[i:i+n]) yield current_num i += n else: return for n in range(1,len(a)): l = list(numbers(a,n)) # print only solutions that use up all digits of a if ''.join(map(str,l)) == a: print(l) ``` > > > ``` > [3, 4, 5, 6, 7, 8, 10, 15, 20, 30, 40] > [34, 56, 78, 101, 520, 3040] > [34567, 81015, 203040] > > ``` > >
A little modification which allows parsing data like "7000000000001" and gives the best output (max list size). Note that `a` has to be a string for `len()` and slicing to work:

```
a = '30000001'

def numbers(a, n):
    current_num, i = 0, 0
    while True:
        # grow the window until the candidate exceeds the previous number
        while i + n <= len(a) and int(a[i:i+n]) <= current_num:
            n += 1
        if i + 2*n > len(a):
            # not enough digits left for another full number: emit the rest
            current_num = int(a[i:])
            yield current_num
            return
        elif i + n <= len(a):
            current_num = int(a[i:i+n])
            yield current_num
            i += n
        else:
            return
        # print(current_num)  # debug output, if needed

for n in range(1, len(a)):
    l = list(numbers(a, n))
    if ''.join(map(str, l)) == a:
        print(l)
```
64,765,086
im trying to run a server on my laptop, when in the console i type 'python manage.py runserver' i recieve some errors. could it be i need to import some modules i tried 'pip install python-cron' but that didnt work. the error says: ``` [2020-11-10 09:04:47,241] autoreload: INFO - Watching for file changes with StatReloader Exception in thread django-main-thread: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 932, in _bootstrap_inner self.run() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper fn(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run autoreload.raise_last_exception() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/utils/autoreload.py", line 76, in raise_last_exception raise _exception[1] File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/core/management/__init__.py", line 357, in execute autoreload.check_errors(django.setup)() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/utils/autoreload.py", line 53, in wrapper fn(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/__init__.py", line 24, in setup apps.populate(settings.INSTALLED_APPS) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate app_config = AppConfig.create(entry) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/django/apps/config.py", line 90, in create module = import_module(entry) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'django_cron' ``` the cron.py file i have is: ``` from django.contrib.auth.models import User import os import datetime from crontab import CronTab #from django_common.helper import send_mail from django_cron import CronJobBase, Schedule from .models import Photo from PIL import Image class PhotoDeleteCronJob(CronJobBase): RUN_EVERY_MINS = 1 schedule = Schedule(run_every_mins=RUN_EVERY_MINS) code = 'cron.PhotoDeleteCronJob' def do(self): delet = Photo.objects.all() delet.delete() ``` thanks in advance if you need to see any other files just ask.
2020/11/10
[ "https://Stackoverflow.com/questions/64765086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13740000/" ]
I have tried to re-create the same design with some minor changes in Flutter. I have to enable flutter web support by following the instructions here: [Flutter Web](https://flutter.dev/docs/get-started/web) [![Flutter Web sample](https://i.stack.imgur.com/oxX5K.gif)](https://i.stack.imgur.com/oxX5K.gif) Regarding the left menu, I have used `AnimatedSize` widget to give the sliding drawer feel & placed it inside `Row`. Please find the code below: ``` import 'package:flutter/material.dart'; final Color darkBlue = Color.fromARGB(255, 18, 32, 47); void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( theme: ThemeData.dark().copyWith(scaffoldBackgroundColor: darkBlue), debugShowCheckedModeBanner: false, home: Scaffold( body: Center( child: MyWidget(), ), ), ); } } class MyWidget extends StatefulWidget { @override _MyWidgetState createState() => _MyWidgetState(); } class _MyWidgetState extends State<MyWidget> with SingleTickerProviderStateMixin { final colors = <Color>[Colors.indigo, Colors.blue, Colors.orange, Colors.red]; double _size = 250.0; bool _large = true; void _updateSize() { setState(() { _size = _large ? 250.0 : 0.0; _large = !_large; }); } @override Widget build(BuildContext context) { return Scaffold( body: Row( children: [ AnimatedSize( curve: Curves.easeIn, vsync: this, duration: Duration(seconds: 1), child: LeftDrawer(size: _size)), Expanded( flex: 4, child: Container( child: Column( children: [ Container( color: Colors.white, padding: const EdgeInsets.all(8), child: Row( children: [ IconButton( icon: Icon(Icons.menu, color: Colors.black87), onPressed: () { _updateSize(); }, ), FlatButton( child: Text( 'Dashboard', style: const TextStyle(color: Colors.black87), ), onPressed: () {}, ), FlatButton( child: Text( 'User', style: const TextStyle(color: Colors.black87), ), onPressed: () {}, ), FlatButton( child: Text( 'Settings', style: const TextStyle(color: Colors.black87), ), onPressed: () {}, ), const Spacer(), IconButton( icon: Icon(Icons.brightness_3, color: Colors.black87), onPressed: () {}, ), IconButton( icon: Icon(Icons.notification_important, color: Colors.black87), onPressed: () {}, ), CircleAvatar(), ], ), ), Container( height: 1, color: Colors.black12, ), Card( margin: EdgeInsets.zero, shape: RoundedRectangleBorder( borderRadius: BorderRadius.circular(0), ), child: Container( color: Colors.white, padding: const EdgeInsets.all(20), child: Row( children: [ Text( 'Home / Admin / Dashboard', style: const TextStyle(color: Colors.black), ), ], ), ), ), Expanded( child: ListView( children: [ Row( children: [ _container(0), _container(1), _container(2), _container(3), ], ), Container( height: 400, color: Color(0xFFE7E7E7), padding: const EdgeInsets.all(16), child: Card( color: Colors.white, child: Container( padding: const EdgeInsets.all(16), child: Text( 'Traffic', style: const TextStyle(color: Colors.black87), ), ), ), ), ], ), ), ], ), ), ), ], ), ); } Widget _container(int index) { return Expanded( child: Container( padding: const EdgeInsets.all(20), color: Color(0xFFE7E7E7), child: Card( color: Color(0xFFE7E7E7), child: Container( color: colors[index], width: 250, height: 140, padding: const EdgeInsets.all(20), child: Column( crossAxisAlignment: CrossAxisAlignment.start, children: [ Row( children: [ Expanded( child: Text( '9.823', style: TextStyle(fontSize: 24), )), Icon(Icons.more_vert), ], ), Text('Members online') ], ), ), ), ), ); } } class LeftDrawer extends StatelessWidget { const 
LeftDrawer({ Key key, this.size, }) : super(key: key); final double size; @override Widget build(BuildContext context) { return Expanded( flex: 1, child: Container( width: size, color: const Color(0xFF2C3C56), child: ListView( children: [ Container( alignment: Alignment.center, padding: const EdgeInsets.all(16), color: Color(0xFF223047), child: Text('CORE UI'), ), _tile('Dashboard'), Container( padding: const EdgeInsets.only(left: 10), margin: const EdgeInsets.only(top: 30), child: Text('THEME', style: TextStyle( color: Colors.white54, ))), _tile('Colors'), _tile('Typography'), _tile('Base'), _tile('Buttons'), ], ), ), ); } Widget _tile(String label) { return ListTile( title: Text(label), onTap: () {}, ); } } ```
You can use the `Drawer` widget inside a `Scaffold`. If you want the navigation drawer to be able to resize according to the browser height and width you can use the [responsive\_scaffold](https://pub.dev/packages/responsive_scaffold) package.
22,073,028
I just started python three days ago and I am already facing a problem. I couldn't get any information in the www. It looks like a bug - but I think I did s.th. wrong. However I can't find the problem. Here we go: I have 1 List called "inputData". So all I do is, take out the first 10 entries in each array, fit it with polyfit, save the fit parameters in the variable "linFit" and afterwards substract the fit from my "inputData" and save it in a new list called "correctData". The print line is only to show you the "bug". If you run the code below and you compare the "inputData" print before and after the procedure, it is different. I have no idea, why... :( However, if you remove one of the two arrays in "inputData", it works fine. Anyone any idea? Thx! ``` import matplotlib.pyplot as plt import pylab as np inputData = [np.array([[ 1.06999998e+01, 1.71811953e-01], [ 2.94000015e+01, 2.08369687e-01], [ 3.48000002e+01, 3.70725733e-01], [ 4.28000021e+01, 4.96874842e-01], [ 5.16000004e+01, 5.20280702e-01], [ 6.34000015e+01, 6.79658073e-01], [ 7.72000008e+01, 7.15826614e-01], [ 8.08000031e+01, 8.38463318e-01], [ 9.27000008e+01, 9.07969677e-01], [ 10.65000000e+01, 10.76921320e-01], [ 11.65000000e+01, 11.76921320e-01]]), np.array([[ 0.25999999e+00, 1.21419430e-01], [ 1.84000009e-01, 2.26843166e-01], [ 2.41999998e+01, 3.69826150e-01], [ 3.90000000e+01, 4.12130547e-01], [ 4.20999985e+01, 5.92435598e-01], [ 5.22999992e+01, 6.44819438e-01], [ 6.62999992e+01, 7.23920727e-01], [ 7.65000000e+01, 8.45791912e-01], [ 8.22000008e+01, 9.97368264e-01], [ 9.55000000e+01, 10.48223877e-01]])] linFit = [['', '']]*15 linFitData = [['', '']]*15 correctData = np.copy(inputData) print(inputData) for i, entry in enumerate(inputData): CUT = np.split(entry, [10], axis=0) linFitData[i] = CUT[0] linFit[i] = np.polyfit(linFitData[i][:,0], linFitData[i][:,1], 1) for j, subentry in enumerate(entry): correctData[i][j][1] = subentry[1]-subentry[0]*(linFit[i][0]+linFit[i][1]) #print (inputData[0][0][1]) print('----------') print(inputData) for i, entry in enumerate(inputData): plt.plot(entry[:,0], entry[:,1], '.') plt.plot(linFitData[i][:,0], (linFitData[i][:,0])*(linFit[i][0])+(linFit[i][1])) #plt.plot(correctData[i][:,0], correctData[i][:,1], '.') ```
2014/02/27
[ "https://Stackoverflow.com/questions/22073028", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361064/" ]
Your `inputData` isn't a numpy array, it's a list of arrays. Those two lists don't have the same length: ``` >>> [len(sl) for sl in inputData] [11, 10] ``` numpy arrays can't handle varying lengths. If you try to make an array out of it, instead of having a 2-D array of float dtype, you get a 1-D array of object dtype, the members of which are lists: ``` >>> a = np.array(inputData) >>> a.shape, a.dtype ((2,), dtype('O')) ``` and so your "copy" is actually only a shallow copy; the lists inside are the same objects as in `inputData`: ``` >>> correctData = np.copy(inputData) >>> inputData[0] is correctData[0] True >>> inputData[1] is correctData[1] True ``` --- BTW, you can't multiply lists like this `linFit = [['', '']]*15`; that doesn't make a copy either (see [here](https://stackoverflow.com/questions/17702937/generating-sublists-using-multiplication-unexpected-behavior)). `linFit[0] is linFit[1]` -- try changing one of the sublists to see this.
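One way to get genuinely independent copies here (a minimal sketch, assuming per-array copies of the list are all that's needed) is to copy each array explicitly instead of calling `np.copy` on the whole list:

```python
import numpy as np

# ragged list of arrays, as in the question
inputData = [np.array([[1.0, 2.0], [3.0, 4.0]]),
             np.array([[5.0, 6.0], [7.0, 8.0], [9.0, 10.0]])]

# copy every array individually; np.copy on the list only gives an
# object array of references, as described above
correctData = [arr.copy() for arr in inputData]

correctData[0][0, 1] = 99.0
print(inputData[0][0, 1])              # 2.0 -- the original is untouched
print(correctData[0] is inputData[0])  # False
```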
Your code as you posted it is not runnable at all, as a bunch of definitions are missing or wrong. After fixing this and some code cleanup, I get the following, which basically shows, everything is working as intended: ``` import numpy as np from copy import deepcopy dataList = [np.array([[ 1.06999998e+01, 1.71811953e-01], [ -3.94000015e+01, -7.08369687e-02], [ 1.48000002e+01, 1.70725733e-02], [ 6.28000021e+00, 1.96874842e-01], [ 2.16000004e+01, -1.20280702e-02], [ 4.34000015e+01, -3.79658073e-01], [ 3.72000008e+01, -1.15826614e-01], [ 8.08000031e+01, 6.38463318e-01], [ 5.27000008e+01, 5.07969677e-01], [ 6.65000000e+01, -4.76921320e-01]], dtype=np.float32), np.array([[ -3.25999999e+00, 1.21419430e-01], [ 2.84000009e-01, -4.26843166e-02], [ -1.41999998e+01, -1.69826150e-01], [ 1.90000000e+01, 2.12130547e-01], [ 3.20999985e+01, -5.92435598e-02], [ 3.22999992e+01, 1.44819438e-01], [ 3.62999992e+01, -3.23920727e-01], [ 4.65000000e+01, 2.45791912e-01], [ 6.22000008e+01, 1.97368264e-02], [ 6.55000000e+01, -1.48223877e-01]], dtype=np.float32)] correctData = deepcopy(dataList) for i, entry in enumerate(dataList): CUT = np.split(entry, 5, axis=0)[0] linFit = np.polyfit(CUT[:,0], CUT[:,1], 1) for j, subentry in enumerate(entry): correctData[i][j][1] = subentry[1] - subentry[0] * linFit[0] + linFit[1] print dataList[1][0][1] print('----------') ``` Outputs: ``` 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 ---------- 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 0.121419 ---------- ``` The actual problem in your code above is, that inputData is of type list. When you create the correctData, if it would be an array, it would be a nice copy. But as it is a list, the copy creates an array of objects, which holds only references to the original arrays. So in fact, you're directly writing to inputData, not to copies. See that: ``` correctData.dtype >>> dtype('O') ``` So either you create a list of copies, or you switch to a 3D-arrays, to fix the problem. To create a list with copies of all contained items, use this: ``` from copy import deepcopy correctData = deepcopy(inputData) ```
73,749,184
I'm following this [TensorFlow guide](https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10) for object detection models and I've gotten to part 6, which is training your program. I've input this line of code, ``` python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config ``` But it keeps resulting in the syntax error here. ``` (tensorflow1) C:\tensorflow1\models\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config 2022-09-16 14:38:10.767310: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found 2022-09-16 14:38:10.767443: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "train.py", line 53, in <module> from object_detection.builders import model_builder File "C:\tensorflow1\models\research\object_detection\builders\model_builder.py", line 34, in <module> from object_detection.core import target_assigner File "C:\tensorflow1\models\research\object_detection\core\target_assigner.py", line 1051 raise ValueError(f'Unknown heatmap type - {self._box_heatmap_type}') ^ SyntaxError: invalid syntax ``` It's happened before but I managed to fix it by going back to the file and editting the changes it asks for. But this time, if I take away that quote it's on, the whole line has a red squiggly. I've never used Python or Anaconda before, and this is my first time touching it. Any help would be appreciated. I've read online this is due to my Python being an older version, and that line of code doesn't work with the old version. I think I'm using 3.5, but I'm not sure if updating the version will break everything, because I think TensorFlow only works with 3.5
2022/09/16
[ "https://Stackoverflow.com/questions/73749184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19228228/" ]
I wold try something like this ``` var paymentStatus = JObject.Parse(response.Content)["PaymentStatus"][0]; string statusDescription = paymentStatus["StatusDescription"].ToString(); string merchantTxnRefNo = paymentStatus["MerchantTxnRefNo"].ToString(); ``` or maybe you need c# classes ``` List<PaymentStatus> paymentStatuses = JObject.Parse(json)["PaymentStatus"] .ToObject<List<PaymentStatus>>(); string statusDescription = paymentStatuses[0].StatusDescription; long merchantTxnRefNo = paymentStatuses[0].MerchantTxnRefNo; public class PaymentStatus { public long MerchantTxnRefNo { get; set; } public long PaymentId { get; set; } public DateTime ProcessDate { get; set; } public string StatusDescription { get; set; } public long TrackId { get; set; } public long BankRefNo { get; set; } public string PaymentType { get; set; } public int ErrorCode { get; set; } public string ProductType { get; set; } public string finalStatus { get; set; } } ```
Since `PaymentStatus` resolves to an array, use the indexer to get the object, as below: `var merchantTxnRefNo = (string)jObject["PaymentStatus"][0]["MerchantTxnRefNo"];`
52,747,655
I am trying to use the TensorFlow CLI debugger in order to identify the operation which is causing a NaN during training of a network, but when I try to run the code I get an error: `_curses.error: cbreak() returned ERR` I'm running the code on an Ubuntu server, which I'm connecting to via SSH, and have tried to follow [this tutorial](https://www.tensorflow.org/guide/debugger). I have tried using `tf.add_check_numerics_ops()`, but the layers in the network include while loops so are not compatible. This is the section of code where the error is being raised: ``` import tensorflow as tf from tensorflow.python import debug as tf_debug ... #Prepare data train_data, val_data, test_data = dataset.prepare_datasets(model_config) sess = tf.Session() sess = tf_debug.LocalCLIDebugWrapperSession(sess) # Create iterators handle = tf.placeholder(tf.string, shape=[]) iterator = tf.data.Iterator.from_string_handle(handle, train_data.output_types, train_data.output_shapes) mixed_spec, voice_spec, mixed_audio, voice_audio = iterator.get_next() training_iterator = train_data.make_initializable_iterator() validation_iterator = val_data.make_initializable_iterator() testing_iterator = test_data.make_initializable_iterator() training_handle = sess.run(training_iterator.string_handle()) ... ``` and the full error is: ``` Traceback (most recent call last): File "main.py", line 64, in <module> @ex.automain File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/experiment.py", line 137, in automain self.run_commandline() File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/experiment.py", line 260, in run_commandline return self.run(cmd_name, config_updates, named_configs, {}, args) File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/experiment.py", line 209, in run run() File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/run.py", line 221, in __call__ self.result = self.main_function(*args) File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/sacred/config/captured_function.py", line 46, in captured_function result = wrapped(*args, **kwargs) File "main.py", line 95, in do_experiment training_handle = sess.run(training_iterator.string_handle()) File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/wrappers/framework.py", line 455, in run is_callable_runner=bool(callable_runner))) File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/wrappers/local_cli_wrapper.py", line 255, in on_run_start self._run_start_response = self._launch_cli() File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/wrappers/local_cli_wrapper.py", line 431, in _launch_cli title_color=self._title_color) File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/cli/curses_ui.py", line 492, in run_ui self._screen_launch(enable_mouse_on_start=enable_mouse_on_start) File "/home/enterprise.internal.city.ac.uk/acvn728/.local/lib/python3.5/site-packages/tensorflow/python/debug/cli/curses_ui.py", line 445, in _screen_launch curses.cbreak() _curses.error: cbreak() returned ERR ``` I'm pretty new to using Ubuntu (and TensorFlow), but as far as I can tell the server does have ncurses installed, which should allow the required curses based 
interface: ``` acvn728@america:~/MScFinalProject$ dpkg -l '*ncurses*' | grep '^ii' ii libncurses5:amd64 6.0+20160213-1ubuntu1 amd64 shared libraries for terminal handling ii libncursesw5:amd64 6.0+20160213-1ubuntu1 amd64 shared libraries for terminal handling (wide character support) ii ncurses-base 6.0+20160213-1ubuntu1 all basic terminal type definitions ii ncurses-bin 6.0+20160213-1ubuntu1 amd64 terminal-related programs and man pages ii ncurses-term 6.0+20160213-1ubuntu1 all additional terminal type definitions ```
2018/10/10
[ "https://Stackoverflow.com/questions/52747655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9067015/" ]
Problem solved! The solution was to change

```
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
```

to

```
sess = tf_debug.LocalCLIDebugWrapperSession(sess, ui_type="readline")
```

This is similar to the solution to [this question](https://stackoverflow.com/questions/47833697/how-to-use-tensorflow-debugging-tool-tfdbg-on-tf-estimator-in-tensorflow), but I think it is important to note that they are different because a) it refers to a different function and a different API and b) I wasn't trying to run from an IDE, as mentioned in that solution.
`cbreak` would return **`ERR`** if you run a curses application that is not on a *real terminal* (i.e., something that works with [POSIX termios calls](http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap11.html#tag_11)). From the description, > > but the layers in the network include while loops so are not compatible > > > it does not seem you are running in a terminal.
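As a quick sanity check (a minimal sketch, unrelated to tfdbg's own API), you can verify whether the Python process is actually attached to a terminal before expecting the curses UI to work:

```python
import sys

# curses needs a real terminal; under an IDE runner, a piped/redirected
# session, or some job schedulers these typically print False
print(sys.stdin.isatty(), sys.stdout.isatty())
```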
12,397,182
I am trying to remove all the html surrounding the data that I seek from a webpage so that all that is left is the raw data that I will then be able to input into a database. so if I have something like: ``` <p class="location"> Atlanta, GA </p> ``` The following code would return ``` Atlanta, GA </p> ``` But what I expect is not what is returned. This is a more specific solution to the basic problem I found [here](https://stackoverflow.com/questions/2582138/finding-and-replacing-elements-in-a-list-python). Any help would be appreciated, thanks! Code is found below. ``` def delHTML(self, html): """ html is a list made up of items with data surrounded by html this function should get rid of the html and return the data as a list """ for n,i in enumerate(html): if i==re.match('<p class="location">',str(html[n])): html[n]=re.sub('<p class="location">', '', str(html[n])) return html ```
2012/09/12
[ "https://Stackoverflow.com/questions/12397182", "https://Stackoverflow.com", "https://Stackoverflow.com/users/845888/" ]
As rightfully pointed out in the comments, you should be using a specific library to parse HTML and extract text. Here are some examples:

* [html2text](http://www.aaronsw.com/2002/html2text/): Limited functionality, but exactly what you need.
* [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/): More complex, more powerful.
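For instance, a minimal sketch with BeautifulSoup (assuming the goal is just the text inside each `<p class="location">` tag; the sample markup here is made up) might look like this:

```python
from bs4 import BeautifulSoup

html = '<p class="location"> Atlanta, GA </p><p class="other">ignored</p>'
soup = BeautifulSoup(html, 'html.parser')

# collect the stripped text of every <p class="location"> element
locations = [p.get_text(strip=True) for p in soup.find_all('p', class_='location')]
print(locations)  # ['Atlanta, GA']
```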
Assuming all you want is to extract the data contained in `<p class="location">` tags, you could use a quick & dirty (but correct) approach with the Python `HTMLParser` module (a simple HTML SAX parser), like this: ``` from HTMLParser import HTMLParser class MyHTMLParser(HTMLParser): PLocationID=0 PCount=0 buf="" out=[] def handle_starttag(self, tag, attrs): if tag=="p": self.PCount+=1 if ("class", "location") in attrs and self.PLocationID==0: self.PLocationID=self.PCount def handle_endtag(self, tag): if tag=="p": if self.PLocationID==self.PCount: self.out.append(self.buf) self.buf="" self.PLocationID=0 self.PCount-=1 def handle_data(self, data): if self.PLocationID: self.buf+=data # instantiate the parser and fed it some HTML parser = MyHTMLParser() parser.feed(""" <html> <body> <p>This won't appear!</p> <p class="location">This <b>will</b></p> <div> <p class="location">This <span class="someclass">too</span></p> <p>Even if <p class="location">nested Ps <p class="location"><b>shouldn't</b> <p>be allowed</p></p> <p>this will work</p></p> (this last text is out!)</p> </div> </body> </html> """) print parser.out ``` Output: ``` ['This will', 'This too', "nested Ps shouldn't be allowed this will work"] ``` This will extract all the text contained inside any `<p class="location">` tag, stripping all the tags inside it. Separate tags (if not nested - which shouldn't be allowed anyhow for paragraphs) will have a separate entry in the `out` list. Notice that for more complex requirements this can easily get out of hand; in those cases a DOM parser is way more appropriate.
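For readers on Python 3, a minimal sketch of the same idea (the module moved to `html.parser`, and the counters live on the instance rather than the class) might look like this:

```python
from html.parser import HTMLParser

class LocationParser(HTMLParser):
    """Collect the text inside <p class="location"> ... </p>."""
    def __init__(self):
        super().__init__()
        self.depth = 0          # how many <p> tags we are currently inside
        self.target_depth = 0   # depth of the <p class="location"> being captured
        self.buf = ""
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.depth += 1
            if ("class", "location") in attrs and self.target_depth == 0:
                self.target_depth = self.depth

    def handle_endtag(self, tag):
        if tag == "p":
            if self.target_depth == self.depth:
                self.out.append(self.buf)
                self.buf = ""
                self.target_depth = 0
            self.depth -= 1

    def handle_data(self, data):
        if self.target_depth:
            self.buf += data

parser = LocationParser()
parser.feed('<p>skip</p><p class="location">Atlanta, <b>GA</b></p>')
print(parser.out)  # ['Atlanta, GA']
```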
64,777,843
Today I come with a two in one set of issues that's on the verge of making me smash my computer to pieces! So please I would greatly appreciate any help as I've been stuck on it for two days now. I have a project where osmnx is required, so I follow the install instructions [provided](https://github.com/gboeing/osmnx#installation). Which means that I created a dedicated (clean) environment for it. Within this project there is a notebook that I should run, which leads me to `ImportError: No module named dotmap`. So I say okay, and install dotmap in the environment through conda install. Now heres the situation, even though I installed it sucessfully in the environment, I keep getting the same error when I run the notebook! So I think maybe I should reinstall dotmap using pip through conda prompt. But when I use pip install in the dedicated ox environment, I get failed to create process. So I say okay, lets install it again through conda install but it still doesn't work. I see that the dotmap is being called from a main.py document. So I decide to check the main.py document by installing spyder through anaconda navigtor. Once it's installed I click launch and spyder never launches! I try to launch from the conda prompt and I just get `Unable to create process using 'C:\Users\THESIS\.conda\envs\ox2\python.exe C:\Users\THESIS\.conda\envs\ox2\Scripts\spyder-script.py`. At the moment I ran out of ideas of what to do as I tried to work around it numerous times. Before questions get asked (and I hope someone asks something), I'd like to say that I already: * Reinstalled anaconda navigator * Tried created numerous environments in various different ways of installing osmnx and dotmap * Tried reinstalling and launching spyder in numerous different ways (Same thing is happening with Jupyterlab) * And probably some other things that I can not remember now off the top of my head I know I sound fed up and angry at the moment but I just can not understand how I did not find a solution to this (what should be) simple issue. All in all, any help is greatly appreciated!!
2020/11/10
[ "https://Stackoverflow.com/questions/64777843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9935756/" ]
Following droptop's recommendation, I actually just did a full clean (another one) reinstall of Anaconda, where I deleted almost everything that I could. I installed it again and it's working now! Thanks for the help anyway!!
Your fresh anaconda install should have `jupyter`, `jupyterlab` and `spyder` in the `base` environment. Starting the anaconda prompt and typing in `jupyter notebook` should launch jupyter. Try activating your `ox2` environment with another prompt, and follow through from step 3 of this post <https://medium.com/@nrk25693/how-to-add-your-conda-environment-to-your-jupyter-notebook-in-just-4-steps-abeab8b8d084>
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
Based on your comment, it looks like the conda environment is not being activated by VSCode. Selecting the Python interpreter points VSCode to the Python executable (python.exe), but sometimes environment variables that are set by Conda are used to tell packages with large backends where to look for the compiled binaries. Conda does this to save on space. If you already have the DLLs in one environment, it will sometimes link to them when creating a new environment rather than installing them again. So the goal is to get VSCode to use Conda in the same way you would use it through the Start Menu: firing up the Anaconda Command prompt before starting Python. In VSCode, open your `settings.json` file for editing using the following operations:

```
(type) CTRL + SHIFT + P
(search for:) open settings
(click:) Preferences: Open Settings (JSON)
```

We are going to add 3 lines to the JSON file. The first tells VSCode to use a Windows integrated shell. The second adds additional arguments that run each time the Windows shell is fired up; this is where we will activate the base Conda environment. (This is just copy/pasted from the Anaconda Command Prompt shortcut properties.) The third line tells VSCode where your Conda executable is so it can properly change environments. My Anaconda base environment is located at `C:\Anaconda3\`. You will need to modify the paths to your installation.

```
settings.json
```

```json
{
    ... # any other settings you have already added (remove this line)
    "terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\cmd.exe",
    "terminal.integrated.shellArgs.windows": ["/K", "C:\\Anaconda3\\Scripts\\activate.bat C:\\Anaconda3"],
    "python.condaPath": "C:\\Anaconda3\\Scripts\\conda.exe"
}
```

Save the file, change your interpreter to the `base` conda environment, restart VSCode, change your interpreter again to `vsc`.
If you deactivate the Code Runner extension and make sure you select the appropriate conda environment using the [Python extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python) you will get a green play button instead of a white one. That green play button will use the environment you selected and thus should have numpy installed.
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
Based on your comment, it looks like the conda environment is not being activated by VSCode. Selecting the Python interpreter points VSCode to the Python executable (python.exe), but sometimes environment variables that are set by Conda are used to tell packages with large backends where to look for the compiled binaries. Conda does this to save on space. If you already have the DLLs in one environment, it will sometimes link to them when creating a new environment rather than installing them again. So the goal is to get VSCode to use Conda in the same way you would use it through the Start Menu: firing up the Anaconda Command prompt before starting Python. In VSCode, open your `settings.json` file for editing using the following operations:

```
(type) CTRL + SHIFT + P
(search for:) open settings
(click:) Preferences: Open Settings (JSON)
```

We are going to add 3 lines to the JSON file. The first tells VSCode to use a Windows integrated shell. The second adds additional arguments that run each time the Windows shell is fired up; this is where we will activate the base Conda environment. (This is just copy/pasted from the Anaconda Command Prompt shortcut properties.) The third line tells VSCode where your Conda executable is so it can properly change environments. My Anaconda base environment is located at `C:\Anaconda3\`. You will need to modify the paths to your installation.

```
settings.json
```

```json
{
    ... # any other settings you have already added (remove this line)
    "terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\cmd.exe",
    "terminal.integrated.shellArgs.windows": ["/K", "C:\\Anaconda3\\Scripts\\activate.bat C:\\Anaconda3"],
    "python.condaPath": "C:\\Anaconda3\\Scripts\\conda.exe"
}
```

Save the file, change your interpreter to the `base` conda environment, restart VSCode, change your interpreter again to `vsc`.
I had the same problem. In my case Anaconda3 and VS Code were installed separately. Here are the steps that worked for me to fix the problem: * Completely uninstall VS Code including any user settings/cache and start menu entries * Re-install VS Code from Anaconda Navigator * Launch Anaconda Prompt and activate whichever environment I want to use through `conda activate myenv` * Launch VS Code from Anaconda Prompt by typing `code` It will find the `numpy` package path in your environment now.
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
Based on your comment, it looks like the conda environment is not being activated by VSCode. Selecting the Python interpreter points VSCode to the Python executable (python.exe), but sometimes environment variables that are set by Conda are used to tell packages with large backends where to look for the compiled binaries. Conda does this to save on space. If you already have the DLLs in one environment, it will sometimes link to them when creating a new environment rather than installing them again. So the goal is to get VSCode to use Conda in the same way you would use it through the Start Menu: firing up the Anaconda Command prompt before starting Python. In VSCode, open your `settings.json` file for editing using the following operations:

```
(type) CTRL + SHIFT + P
(search for:) open settings
(click:) Preferences: Open Settings (JSON)
```

We are going to add 3 lines to the JSON file. The first tells VSCode to use a Windows integrated shell. The second adds additional arguments that run each time the Windows shell is fired up; this is where we will activate the base Conda environment. (This is just copy/pasted from the Anaconda Command Prompt shortcut properties.) The third line tells VSCode where your Conda executable is so it can properly change environments. My Anaconda base environment is located at `C:\Anaconda3\`. You will need to modify the paths to your installation.

```
settings.json
```

```json
{
    ... # any other settings you have already added (remove this line)
    "terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\cmd.exe",
    "terminal.integrated.shellArgs.windows": ["/K", "C:\\Anaconda3\\Scripts\\activate.bat C:\\Anaconda3"],
    "python.condaPath": "C:\\Anaconda3\\Scripts\\conda.exe"
}
```

Save the file, change your interpreter to the `base` conda environment, restart VSCode, change your interpreter again to `vsc`.
I had the same issue; I fixed it by adding the `python.condaPath` setting. Press `Ctrl + Shift + P` and select Terminal Configuration. Search for `python.conda` and paste your conda path, for example `C:\ProgramData\Anaconda3\Scripts\conda.exe`. This will fix your issue. [![enter image description here](https://i.stack.imgur.com/DngEf.png)](https://i.stack.imgur.com/DngEf.png)
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
Based on your comment, it looks like the conda environment is not being activated by VSCode. Selecting the Python interpreter points VSCode to the Python executable (python.exe), but sometimes environment variables that are set by Conda are used to tell packages with large backends where to look for the compiled binaries. Conda does this to save on space. If you already have the DLLs in one environment, it will sometimes link to them when creating a new environment rather than installing them again. So the goal is to get VSCode to use Conda in the same way you would use it through the Start Menu: firing up the Anaconda Command prompt before starting Python. In VSCode, open your `settings.json` file for editing using the following operations:

```
(type) CTRL + SHIFT + P
(search for:) open settings
(click:) Preferences: Open Settings (JSON)
```

We are going to add 3 lines to the JSON file. The first tells VSCode to use a Windows integrated shell. The second adds additional arguments that run each time the Windows shell is fired up; this is where we will activate the base Conda environment. (This is just copy/pasted from the Anaconda Command Prompt shortcut properties.) The third line tells VSCode where your Conda executable is so it can properly change environments. My Anaconda base environment is located at `C:\Anaconda3\`. You will need to modify the paths to your installation.

```
settings.json
```

```json
{
    ... # any other settings you have already added (remove this line)
    "terminal.integrated.shell.windows": "C:\\WINDOWS\\System32\\cmd.exe",
    "terminal.integrated.shellArgs.windows": ["/K", "C:\\Anaconda3\\Scripts\\activate.bat C:\\Anaconda3"],
    "python.condaPath": "C:\\Anaconda3\\Scripts\\conda.exe"
}
```

Save the file, change your interpreter to the `base` conda environment, restart VSCode, change your interpreter again to `vsc`.
The answers above are being deprecated by VS Code. The more modern solution is to force VS Code to activate Anaconda when running Code standalone by adding:

```
"python.condaPath": "C:\\ProgramData\\Anaconda3\\Scripts\\conda.exe",
"python.terminal.activateEnvironment": true,
"terminal.integrated.defaultProfile.windows": "Command Prompt"
```

This has worked for me so far. Note this doesn't work with:

```
"python.terminal.launchArgs": ["-m", "IPython"],
```

So make sure that is not hidden in your settings.json.
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
I had the same problem. In my case Anaconda3 and VS Code were installed separately. Here are the steps that worked for me to fix the problem: * Completely uninstall VS Code including any user settings/cache and start menu entries * Re-install VS Code from Anaconda Navigator * Launch Anaconda Prompt and activate whichever environment I want to use through `conda activate myenv` * Launch VS Code from Anaconda Prompt by typing `code` It will find the `numpy` package path in your environment now.
If you deactivate the Code Runner extension and make sure you select the appropriate conda environment using the [Python extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python) you will get a green play button instead of a white one. That green play button will use the environment you selected and thus should have numpy installed.
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
I had the same issue; I fixed it by adding the `python.condaPath` setting. Press `Ctrl + Shift + P` and select Terminal Configuration. Search for `python.conda` and paste your conda path, for example `C:\ProgramData\Anaconda3\Scripts\conda.exe`. This will fix your issue. [![enter image description here](https://i.stack.imgur.com/DngEf.png)](https://i.stack.imgur.com/DngEf.png)
If you deactivate the Code Runner extension and make sure you select the appropriate conda environment using the [Python extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python) you will get a green play button instead of a white one. That green play button will use the environment you selected and thus should have numpy installed.
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
The answers above are being deprecated by VS Code. The more modern solution is to force VS Code to activate Anaconda when running Code standalone by adding:

```
"python.condaPath": "C:\\ProgramData\\Anaconda3\\Scripts\\conda.exe",
"python.terminal.activateEnvironment": true,
"terminal.integrated.defaultProfile.windows": "Command Prompt"
```

This has worked for me so far. Note this doesn't work with:

```
"python.terminal.launchArgs": ["-m", "IPython"],
```

So make sure that is not hidden in your settings.json.
If you deactivate the Code Runner extension and make sure you select the appropriate conda environment using the [Python extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python) you will get a green play button instead of a white one. That green play button will use the environment you selected and thus should have numpy installed.
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
I had the same problem. In my case Anaconda3 and VS Code were installed separately. Here are the steps that worked for me to fix the problem: * Completely uninstall VS Code including any user settings/cache and start menu entries * Re-install VS Code from Anaconda Navigator * Launch Anaconda Prompt and activate whichever environment I want to use through `conda activate myenv` * Launch VS Code from Anaconda Prompt by typing `code` It will find the `numpy` package path in your environment now.
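For reference, the final steps condensed into an Anaconda Prompt session (a sketch that assumes the `vsc` environment from the question; the middle line is just a quick check that numpy resolves before launching the editor):

```
conda activate vsc
python -c "import numpy; print(numpy.__version__)"
code
```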
I had the same issue and fixed it by adding `python.condaPath` in the settings.

Press `Ctrl + Shift + P` and select Terminal Configuration. Search for `python.conda` and paste your conda path, for example:

C:\ProgramData\Anaconda3\Scripts\conda.exe

This should fix your issue.

[![enter image description here](https://i.stack.imgur.com/DngEf.png)](https://i.stack.imgur.com/DngEf.png)
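For reference, the resulting entry in `settings.json` looks like this (the path below is the default Anaconda location shown above; substitute your own install path):

```
"python.condaPath": "C:\\ProgramData\\Anaconda3\\Scripts\\conda.exe"
```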
59,519,338
Error occurs upon `import numpy as np`; command works fine when typed directly in terminal, but fails when ran via [Code Runner](https://marketplace.visualstudio.com/items?itemName=formulahendry.code-runner). My steps to reproduce below. Output of `import sys; print(sys.version)` is `3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)]`. VSCode shows it's running the expected Python interpreter: `Python 3.7.5 64-bit ('vsc': conda)` at bottom-left pane (see clip). -- Brief [video demo](https://www.dropbox.com/s/cu4vzyp8ybdq6qo/np_demo.mp4?dl=0). For a complete list of enabled extensions and contents of `settings.json`, see [relevant Git](https://github.com/numpy/numpy/issues/15183). What is the problem, and how to fix? --- **Env info**: Windows 10 x64, Anaconda 10/19 (virtual env), VSCode 1.41.1 --- **Steps to reproduce:** ``` conda create --name vsc conda activate vsc conda install python==3.7.5 conda install numpy # in VSCode: import numpy as np, etc ``` --- **Full traceback**: ```py Traceback (most recent call last): File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 17, in <module> from . import multiarray File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\multiarray.py", line 14, in <module> from . import overrides File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\overrides.py", line 7, in <module> from numpy.core._multiarray_umath import ( ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\Desktop\School\Python\vscode\HelloWorld\app.py", line 1, in <module> import numpy as np File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\__init__.py", line 142, in <module> from . import core File "D:\Anaconda\envs\vsc\lib\site-packages\numpy\core\__init__.py", line 47, in <module> raise ImportError(msg) ImportError: ``` --- **EDIT**: added the following to `settings.json` per James's suggestion: ``` "terminal.integrated.shell.windows": "C:\\Windows\\System32\\cmd.exe", "terminal.integrated.shellArgs.windows": ["/K", "D:\\Anaconda\\Scripts\\activate.bat D:\\Anaconda"], "python.condaPath": "D:\\Anaconda\\Scripts\\conda.exe" ```
2019/12/29
[ "https://Stackoverflow.com/questions/59519338", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10133797/" ]
The answers above are being deprecated by VS Code. The more modern solution is to force VS Code to activate Anaconda when running Code standalone by adding:

```
"python.condaPath": "C:\\ProgramData\\Anaconda3\\Scripts\\conda.exe",
"python.terminal.activateEnvironment": true,
"terminal.integrated.defaultProfile.windows": "Command Prompt"
```

This has worked for me so far.

Note this doesn't work with:

```
"python.terminal.launchArgs": ["-m", "IPython"],
```

So make sure that is not hidden in your settings.json.
I had the same issue and fixed it by adding `python.condaPath` in the settings.

Press `Ctrl + Shift + P` and select Terminal Configuration. Search for `python.conda` and paste your conda path, for example:

C:\ProgramData\Anaconda3\Scripts\conda.exe

This should fix your issue.

[![enter image description here](https://i.stack.imgur.com/DngEf.png)](https://i.stack.imgur.com/DngEf.png)
21,870,728
Hi I am trying to run the multiprocessing example in the docs: <http://docs.python.org/3.4/library/concurrent.futures.html>, the one using prime numbers but with a small difference. I want to be able to call a function with multiple arguments. What I am doing is matching small pieces of text (in a list around 30k long) to a much larger piece of text and return where in the larger string the smaller strings start. I can do this serially like this: ``` matchList = [] for pattern in patterns: # Approximate pattern matching patternStartingPositions = processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray) # Now add each starting position found onto our master list. for startPos in patternStartingPositions: matchList.append(startPos) ``` But I want to do this to speed things up: ``` matchList = [] with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray), patterns)): print('%d is starts at: %s' % (pattern, res)) ``` At this stage I've just got the print call there because I can't get the line above, the invocation of the processes to work. The only real difference between what I want to do and the example code is that my function takes 7 arguments and I have no idea how to do it, spent half the day on it. The call above generates this error: > > UnboundLocalError: local variable 'pattern' referenced before assignment. > > > Which makes sense. But then if I leave out that first argument, which is the one that changes with each call, and leave out the first parameter to the `processPattern` function: ``` matchList = [] with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern(numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray), patterns)): print('%d is starts at: %s' % (pattern, res)) ``` Then I get this error: > > TypeError: processPattern() missing 1 required positional argument: 'suffixArray'. > > > I don't know how to get the `pattern` argument in the call!
2014/02/19
[ "https://Stackoverflow.com/questions/21870728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3015449/" ]
To get the data into the right shape, simply use a generator expression (no need for `zip` at all) and use `submit` rather than `map`:

```
((pattern, executor.submit(processPattern, pattern, ...)) for pattern in patterns)
```

To ensure that everything gets executed on the pool (instead of immediately), do not invoke the `processPattern` function as you are doing in your example, but instead pass it in as the first argument to `.submit`. The fixed version of your code would be:

```
with concurrent.futures.ProcessPoolExecutor() as executor:
    for pattern, res in ((pattern, executor.submit(processPattern, pattern,
                                                   numMismatchesAllowed, transformedText,
                                                   charToIndex, countMatrix,
                                                   firstOccurrence, suffixArray))
                         for pattern in patterns):
        print('%d is starts at: %s' % (pattern, res.result()))
```
A Python *for*-loop has functional behavior, and it is not possible to change the value that is being iterated.

```
with concurrent.futures.ProcessPoolExecutor() as executor:
    def work(pattern):
        return processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray)

    results = executor.map(work, patterns)
    for pattern, res in zip(patterns, results):
        print('%d is starts at: %s' % (pattern, res))
```

In fact, a `for` loop that uses no *continue* or *break* statements works just like a map function. That is:

```
for i in something:
    work(i)
```

is equivalent to

```
map(work, something)
```
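An equivalent way to build the single-argument callable is `functools.partial`. This is only a sketch: it assumes `processPattern` is an ordinary module-level function whose parameters can be bound by the keyword names used in the question, and that the bound objects are picklable (they must be for a `ProcessPoolExecutor`):

```
from functools import partial

# Bind everything except `pattern` by keyword; the parameter names are assumed
# to match the question's signature exactly.
work = partial(processPattern,
               numMismatchesAllowed=numMismatchesAllowed,
               transformedText=transformedText,
               charToIndex=charToIndex,
               countMatrix=countMatrix,
               firstOccurrence=firstOccurrence,
               suffixArray=suffixArray)

results = executor.map(work, patterns)
```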
21,870,728
Hi I am trying to run the multiprocessing example in the docs: <http://docs.python.org/3.4/library/concurrent.futures.html>, the one using prime numbers but with a small difference. I want to be able to call a function with multiple arguments. What I am doing is matching small pieces of text (in a list around 30k long) to a much larger piece of text and return where in the larger string the smaller strings start. I can do this serially like this: ``` matchList = [] for pattern in patterns: # Approximate pattern matching patternStartingPositions = processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray) # Now add each starting position found onto our master list. for startPos in patternStartingPositions: matchList.append(startPos) ``` But I want to do this to speed things up: ``` matchList = [] with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray), patterns)): print('%d is starts at: %s' % (pattern, res)) ``` At this stage I've just got the print call there because I can't get the line above, the invocation of the processes to work. The only real difference between what I want to do and the example code is that my function takes 7 arguments and I have no idea how to do it, spent half the day on it. The call above generates this error: > > UnboundLocalError: local variable 'pattern' referenced before assignment. > > > Which makes sense. But then if I leave out that first argument, which is the one that changes with each call, and leave out the first parameter to the `processPattern` function: ``` matchList = [] with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern(numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray), patterns)): print('%d is starts at: %s' % (pattern, res)) ``` Then I get this error: > > TypeError: processPattern() missing 1 required positional argument: 'suffixArray'. > > > I don't know how to get the `pattern` argument in the call!
2014/02/19
[ "https://Stackoverflow.com/questions/21870728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3015449/" ]
Other posters have covered possible solutions, but to explain your error, you should be passing the function and parameters as separate objects to `executor.map`. Here is the example from the docs ``` with concurrent.futures.ProcessPoolExecutor() as executor: # is_prime is the function, PRIMES are the arguments for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)): print('%d is prime: %s' % (number, prime)) ``` Your code is evaluating the `processPattern` function and passing in the result to `executor.map` ``` with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern(numMis... # <- BAD print('%d is starts at: %s' % (pattern, res)) ``` Instead it should be ``` with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern, <stuff>)): print('%d is starts at: %s' % (pattern, res)) ``` Where `<stuff>` is an iterable of the arguments to pass to `processPattern` on each subsequent call. Or, seeing as the other args stay fixed, create a function that just takes the one parameter you are iterating over and pass in `patterns` as the iterable (as @uhbif19 suggests) **EDIT:** To expand on how to create the `<stuff>` iterable, you need an iterable for each argument required by your function (`processPattern` in this case). You already have `patterns` as the iterable for argument one, if the others are constant then `itertools.repeat` may be helpful: ``` from itertools import repeat args = (patterns, repeat(numMismatchesAllowed, len(PATTERNS)), repeat(transformedText, len(PATTERNS)), repeat(charToIndex, len(PATTERNS)), <etc...> ) ``` Then ``` for pattern, res in zip(PATTERNS, executor.map(process, *args)): ``` I include this for the sake of understanding, but you can see how messy this is. The other answers offer better solutions. **EDIT 2:** Here's an example that better illustrates the use of submit vs map ``` import concurrent.futures def process(a, b): return a.upper() + b with concurrent.futures.ProcessPoolExecutor() as executor: for c, fut in [(c, executor.submit(process, c, 'b')) for c in 'testing']: print(c, fut.result()) with concurrent.futures.ProcessPoolExecutor() as executor: for c, res in zip('testing', executor.map(process, 'testing', 'bbbbbbb')): print(c, str(res)) ```
A Python *for*-loop has functional behavior, and it is not possible to change the value that is being iterated.

```
with concurrent.futures.ProcessPoolExecutor() as executor:
    def work(pattern):
        return processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray)

    results = executor.map(work, patterns)
    for pattern, res in zip(patterns, results):
        print('%d is starts at: %s' % (pattern, res))
```

In fact, a `for` loop that uses no *continue* or *break* statements works just like a map function. That is:

```
for i in something:
    work(i)
```

is equivalent to

```
map(work, something)
```
21,870,728
Hi I am trying to run the multiprocessing example in the docs: <http://docs.python.org/3.4/library/concurrent.futures.html>, the one using prime numbers but with a small difference. I want to be able to call a function with multiple arguments. What I am doing is matching small pieces of text (in a list around 30k long) to a much larger piece of text and return where in the larger string the smaller strings start. I can do this serially like this: ``` matchList = [] for pattern in patterns: # Approximate pattern matching patternStartingPositions = processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray) # Now add each starting position found onto our master list. for startPos in patternStartingPositions: matchList.append(startPos) ``` But I want to do this to speed things up: ``` matchList = [] with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern(pattern, numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray), patterns)): print('%d is starts at: %s' % (pattern, res)) ``` At this stage I've just got the print call there because I can't get the line above, the invocation of the processes to work. The only real difference between what I want to do and the example code is that my function takes 7 arguments and I have no idea how to do it, spent half the day on it. The call above generates this error: > > UnboundLocalError: local variable 'pattern' referenced before assignment. > > > Which makes sense. But then if I leave out that first argument, which is the one that changes with each call, and leave out the first parameter to the `processPattern` function: ``` matchList = [] with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern(numMismatchesAllowed, transformedText, charToIndex, countMatrix, firstOccurrence, suffixArray), patterns)): print('%d is starts at: %s' % (pattern, res)) ``` Then I get this error: > > TypeError: processPattern() missing 1 required positional argument: 'suffixArray'. > > > I don't know how to get the `pattern` argument in the call!
2014/02/19
[ "https://Stackoverflow.com/questions/21870728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3015449/" ]
To get the data into the right shape, simply use a generator expression (no need for `zip` at all) and use `submit` rather than `map`:

```
((pattern, executor.submit(processPattern, pattern, ...)) for pattern in patterns)
```

To ensure that everything gets executed on the pool (instead of immediately), do not invoke the `processPattern` function as you are doing in your example, but instead pass it in as the first argument to `.submit`. The fixed version of your code would be:

```
with concurrent.futures.ProcessPoolExecutor() as executor:
    for pattern, res in ((pattern, executor.submit(processPattern, pattern,
                                                   numMismatchesAllowed, transformedText,
                                                   charToIndex, countMatrix,
                                                   firstOccurrence, suffixArray))
                         for pattern in patterns):
        print('%d is starts at: %s' % (pattern, res.result()))
```
Other posters have covered possible solutions, but to explain your error, you should be passing the function and parameters as separate objects to `executor.map`. Here is the example from the docs ``` with concurrent.futures.ProcessPoolExecutor() as executor: # is_prime is the function, PRIMES are the arguments for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)): print('%d is prime: %s' % (number, prime)) ``` Your code is evaluating the `processPattern` function and passing in the result to `executor.map` ``` with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern(numMis... # <- BAD print('%d is starts at: %s' % (pattern, res)) ``` Instead it should be ``` with concurrent.futures.ProcessPoolExecutor() as executor: for pattern, res in zip(patterns, executor.map(processPattern, <stuff>)): print('%d is starts at: %s' % (pattern, res)) ``` Where `<stuff>` is an iterable of the arguments to pass to `processPattern` on each subsequent call. Or, seeing as the other args stay fixed, create a function that just takes the one parameter you are iterating over and pass in `patterns` as the iterable (as @uhbif19 suggests) **EDIT:** To expand on how to create the `<stuff>` iterable, you need an iterable for each argument required by your function (`processPattern` in this case). You already have `patterns` as the iterable for argument one, if the others are constant then `itertools.repeat` may be helpful: ``` from itertools import repeat args = (patterns, repeat(numMismatchesAllowed, len(PATTERNS)), repeat(transformedText, len(PATTERNS)), repeat(charToIndex, len(PATTERNS)), <etc...> ) ``` Then ``` for pattern, res in zip(PATTERNS, executor.map(process, *args)): ``` I include this for the sake of understanding, but you can see how messy this is. The other answers offer better solutions. **EDIT 2:** Here's an example that better illustrates the use of submit vs map ``` import concurrent.futures def process(a, b): return a.upper() + b with concurrent.futures.ProcessPoolExecutor() as executor: for c, fut in [(c, executor.submit(process, c, 'b')) for c in 'testing']: print(c, fut.result()) with concurrent.futures.ProcessPoolExecutor() as executor: for c, res in zip('testing', executor.map(process, 'testing', 'bbbbbbb')): print(c, str(res)) ```
57,354,747
I am trying to add a package to PyPi so I can install it with Pip. I am trying to add it using `twine upload dist/*`. This causes me to get multiple SSL errors such as `raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='upload.pypi.org', port=443): Max retries exceeded with url: /legacy/ (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))`. I am using a school laptop and I presume that this is something my administrator has done; however, I can install stuff with pip by using `pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org`. I was wondering if there was another way to add my package to PyPI?
2019/08/05
[ "https://Stackoverflow.com/questions/57354747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9868018/" ]
My guess is that your school has something in place that replaces the original certificate with their own. You could maybe get around it by using `--cert` and referencing the path to your school's cert, but I think an easier workaround is to copy the files to a non-school computer and upload from there.
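A minimal sketch of the `--cert` route, assuming you can export the school's root certificate to a PEM bundle (the path below is a placeholder, not a real file):

```
twine upload --cert /path/to/school-root-ca.pem dist/*
```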
This could be a number of things, including an out-of-date version of `twine`, or (more likely) an out-of-date version of OpenSSL. Some possible solutions are listed here: <https://github.com/pypa/twine/issues/273>
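If it is simply outdated client tooling, upgrading it is a quick first check. This assumes pip can still reach PyPI from the school network, possibly with the same `--trusted-host` flags mentioned in the question:

```
pip3 install --upgrade pip setuptools twine
```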
48,313,388
I am trying to get selenium working on my headless raspberry pi with firefox. I have it working fine on Windows with chrome. Here are my versions: ``` uname -a Linux megabyte.thompco.com 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux which firefox /usr/bin/firefox firefox --version Mozilla Firefox 52.5.2 ./geckodriver_32 --version geckodriver 0.19.1 The source code of this program is available from testing/geckodriver in https://hg.mozilla.org/mozilla-central. This program is subject to the terms of the Mozilla Public License 2.0. You can obtain a copy of the license at https://mozilla.org/MPL/2.0/. ``` I think I have compatible versions of the driver and firefox (this seems to work): ``` ./geckodriver_32 -b /usr/bin/firefox 1516245181824 geckodriver INFO geckodriver 0.19.1 1516245181881 geckodriver INFO Listening on 127.0.0.1:4444 ``` When I run the following code: ``` def __init__(self, tag, user_name, password, driver_location, headless): logger = logging_utils.get_logger() logging_utils.start_function(logger, user_name=user_name) self.tag = tag self.user_name = user_name self.password = password self.cards = [] driver_options = Options() driver = None try: if "chrome" in driver_location.lower(): if headless: driver_options.add_argument("--headless") driver = webdriver.Chrome(executable_path=os.path.abspath("chromedriver.exe"), chrome_options=driver_options) elif "gecko" in driver_location.lower(): binary = FirefoxBinary("/usr/bin/firefox") driver_options.binary = binary profile = webdriver.FirefoxProfile() driver_options.profile = profile driver_options.set_headless(headless) driver = webdriver.Firefox(firefox_binary=binary, firefox_profile=profile, executable_path=os.path.abspath(driver_location), firefox_options=driver_options) ``` I get this error: ``` Traceback (most recent call last): File "/mnt/usbdrive/python/AmexOfferChecker/amexParser.py", line 105, in __init__ firefox_options=driver_options) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/firefox/webdriver.py", line 158, in __init__ keep_alive=True) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 154, in __init__ self.start_session(desired_capabilities, browser_profile) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 243, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute self.error_handler.check_response(response) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 237, in check_response raise exception_class(message, screen, stacktrace) SessionNotCreatedException: Message: Unable to find a matching set of capabilities ``` Any suggestions would be most welcome! I have modified my "gecko" section to look like this: ``` options = Options() options.add_argument('-headless') print driver_location print os.path.abspath(driver_location) driver = Firefox(executable_path=os.path.abspath(driver_location), firefox_options=options) print "Driver has been loaded!" 
``` Now I get this error: ``` geckodriver_32 /mnt/usbdrive/python/AmexOfferChecker/geckodriver_32 Traceback (most recent call last): File "/mnt/usbdrive/python/AmexOfferChecker/amexParser.py", line 106, in __init__ firefox_options=options) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/firefox/webdriver.py", line 158, in __init__ keep_alive=True) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 154, in __init__ self.start_session(desired_capabilities, browser_profile) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 243, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute self.error_handler.check_response(response) File "/home/jordan/.local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 237, in check_response raise exception_class(message, screen, stacktrace) WebDriverException: Message: Process unexpectedly closed with status: 1 ``` I did see this bug: On Linux, certain libraries are currently required on your system even though headless mode doesn't use them — because Firefox links against them. See [bug 1372998](https://bugzilla.mozilla.org/show_bug.cgi?id=1372998) for more details and progress towards a fix. I added the suggested libraries: ``` sudo apt-get install libgtk-3-0 libdbus-glib-1-2 xvfb ``` but am still getting the same exception. I have tried this also with the same error (**NOTE** that the debug file is created, but empty): ``` options = Options() options.add_argument('--headless') profile = webdriver.FirefoxProfile() profile.set_preference("logs/webdriver.log", "/tmp/firefox_console") binary = FirefoxBinary(firefox_path="/usr/bin/firefox", log_file=open("/tmp/firefox_output", "wb")) driver = webdriver.Firefox(firefox_profile=profile, firefox_binary=binary, options=options, executable_path=os.path.abspath(driver_location)) ``` Finally got logging turned on for seleniuim (does this mean anything to anyone?): ``` 2018-01-24 22:51:00,078 - selenium.webdriver.remote.remote_connection 480 -DEBUG - POST http://127.0.0.1:45413/session {"capabilities": {"alwaysMatch": {"acceptInsecureCerts": true, "browserName": "firefox", "moz:firefoxOptions": {"args": ["headless"]}}, "firstMatch": [{}]}, "desiredCapabilities": {"acceptInsecureCerts": true, "browserName": "firefox", "moz:firefoxOptions": {"args": ["headless"]}}} 2018-01-24 22:51:00,944 - selenium.webdriver.remote.remote_connection 567 -DEBUG - Finished Request {"value":{"error":"unknown error","message":"Process unexpectedly closed with status: 1","stacktrace":"stack backtrace:\n 0: 0x55d797 - backtrace::backtrace::trace::hc4bd56a2f176de7e\n 1: 0x55d8ff - backtrace::capture::Backtrace::new::he3b2a15d39027c46\n 2: 0x4b7f4b - webdriver::error::WebDriverError::new::ha0fbd6d1a1131b43\n 3: 0x4bcb57 - geckodriver::marionette::MarionetteHandler::create_connection::hf0532ddb9e159684\n 4: 0x4a14cb - <webdriver::server::Dispatcher<T, U>>::run::h2119c674d7b88193\n 5: 0x47fcbf - std::sys_common::backtrace::__rust_begin_short_backtrace::h21d98a9ff86d4c25\n 6: 0x4871cf - std::panicking::try::do_call::h5cff0c9b18cfdbba\n 7: 0x606237 - panic_unwind::__rust_maybe_catch_panic\n at /checkout/src/libpanic_unwind/lib.rs:99\n 8: 0x4999e7 - <F as alloc::boxed::FnBox<A>>::call_box::h413eb1d9d9f1c473\n 9: 0x6000d3 - alloc::boxed::{{impl}}::call_once<(),()>\n at 
/checkout/src/liballoc/boxed.rs:692\n - std::sys_common::thread::start_thread\n at /checkout/src/libstd/sys_common/thread.rs:21\n - std::sys::imp::thread::{{impl}}::new::thread_start\n at /checkout/src/libstd/sys/unix/thread.rs:84"}} 2018-01-24 22:51:00,947 - main.main 38 -WARNING - Problem (Message: Process unexpectedly closed with status: 1 ``` Here is the crux of the error (interesting that it is reported as a DEBUG). does anyone have any suggestions: ``` 2018-01-24 22:51:02,863 - selenium.webdriver.remote.remote_connection 567 -DEBUG - Finished Request {"value": {"error":"unknown error","message":"Process unexpectedly closed with status: 1","stacktrace":"stack backtrace: 0: 0x576797 - backtrace::backtrace::trace::hc4bd56a2f176de7e 1: 0x5768ff - backtrace::capture::Backtrace::new::he3b2a15d39027c46 2: 0x4d0f4b - webdriver::error::WebDriverError::new::ha0fbd6d1a1131b43 3: 0x4d5b57 - geckodriver::marionette::MarionetteHandler::create_connection::hf0532ddb9e159684 4: 0x4ba4cb - <webdriver::server::Dispatcher<T, U>>::run::h2119c674d7b88193 5: 0x498cbf - std::sys_common::backtrace::__rust_begin_short_backtrace::h21d98a9ff86d4c25 6: 0x4a01cf - std::panicking::try::do_call::h5cff0c9b18cfdbba 7: 0x61f237 - panic_unwind::__rust_maybe_catch_panic at /checkout/src/libpanic_unwind/lib.rs:99 8: 0x4b29e7 - <F as alloc::boxed::FnBox<A>>::call_box::h413eb1d9d9f1c473 9: 0x6190d3 - alloc::boxed::{{impl}}::call_once<(),()> at /checkout/src/liballoc/boxed.rs:692 - std::sys_common::thread::start_thread at /checkout/src/libstd/sys_common/thread.rs:21 - std::sys::imp::thread::{{impl}}::new::thread_start at /checkout/src/libstd/sys/unix/thread.rs:84" } } ```
2018/01/18
[ "https://Stackoverflow.com/questions/48313388", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1039860/" ]
Finally got this to work, but with chromedriver and chrome. You will have to install chrome first:

```
sudo apt-get install chromium-browser
```

Next, download the Debian package from here: <https://packages.debian.org/stretch/armhf/chromium-driver/download>

Unpack the file "chromedriver":

```
mkdir tmp
dpkg-deb -R chromium-driver_63.0.3239.84-1_deb9u1_armhf.deb tmp
cp tmp/usr/bin/chromedriver .   # path inside the unpacked .deb may differ; adjust if needed
mv chromedriver chromedriver_arm_64
```

The rest of the code is unchanged. Note that

```
driver_options.add_argument("headless")
```

is fine ("--headless" may work as well - I haven't tried it).

I sure hope that someone finds this before spending as much time on this as I have!
You can also try declaring the DISPLAY variable; this helps especially for remote connections.

Run this command on the terminal:

```
export DISPLAY=:0.0
```
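If you would rather keep it inside the script, a small sketch that sets the variable before the driver is created (this assumes the Pi's X display really is `:0.0`; adjust to your setup):

```
import os

# Assumed display name; only set it if the environment does not define one already.
os.environ.setdefault("DISPLAY", ":0.0")
```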
36,426,547
I am using Ubuntu 14.04 I wanted to install package "requests" to use in python 3.5, so I installed it using pip3. I could see it in /usr/lib/python3.4, but while trying to actually execute scripts with Python 3.5 I always got "ImportError: No module named 'requests'" OK, so I figured, perhaps that's because the package is not in python3.5 but in python3.4. Therefore, I tried to uninstall and install i again, but it just kept popping up where I didn't want it (not to mention, when I run apt-get remove pip3-requests, it actually removed pip3 for me as well lol). Therefore, I tried physically removing python3.4 from usr/lib and usr/local/lib in order to try and see if maybe pip3 was confused and installed packages in wrong directories. I'm afraid it was not a good idea... when I now run e.g. `sudo pip3 install reqests` I get the following error: `Could not find platform independent libraries <prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Fatal Python error: Py_Initialize: Unable to get the locale encoding ImportError: No module named 'encodings'` Is there any way to fix this now? And to actually use requests package? When I use ``` sudo apt-get install python3-pip ``` It works and starts unpacking etc. but then I get a long error that starts with: ``` Setting up python3.4 (3.4.3-1ubuntu1~14.04.3) Could not find platform independent libraries <prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Fatal Python error: Py_Initialize: Unable to get the locale encoding ImportError: No module named 'encodings' Aborted dpkg: error processing package python3.4 (--configure): subprocess installed post-installation script returned error exit status 134 dpkg: dependency problems prevent configuration of python3: ``` (...) and ends with ``` python3 depends on python3.4 (>= 3.4.0-0~); however: Package python3.4 is not configured yet. dpkg: error processing package python3-wheel (--configure): dependency problems - leaving unconfigured E: Sub-process /usr/bin/dpkg returned an error code (1) ```
2016/04/05
[ "https://Stackoverflow.com/questions/36426547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4680896/" ]
**First of all, it is a very bad idea to remove your *system* Python 3 in Ubuntu (which is 3.4 in recent subrevisions of Trusty LTS)**. That is because it is a **vital part of the system**. If you run the command `apt-cache rdepends python3`, you'd see that packages such as `ubuntu-minimal`, `ubuntu-release-upgrader-core`, `lsb-release`, `lsb-core`, `ubuntu-core-libs` and so on, all depend on Ubuntu's version of Python 3 being installed (and this is the **python3.4** in Ubuntu 14.04.4).

If you force-remove python 3.4 by hand, you've ruined your system. It might very well be that you now have to reinstall the whole operating system, unless you manage to reinstall all the system `.deb` packages that put data in `/usr/lib/python3.4`. And especially so if you do it with force. It can make your system even unbootable, so do not reboot that computer before you've successfully reinstalled Python 3... actually I am not sure how to do it safely, since it seems you've forcefully removed all system dependencies from /usr/lib.

---

You should try to reinstall python3.4:

```
sudo apt-get install --reinstall python3.4
```

But now the bigger problem is that you're still missing all sorts of dependencies for your system programs.

Do note that `pip` also should be available as a *module*. Thus, to ensure that you install for Python 3.5, you can do

```
sudo python3.5 -mpip install requests
```

`pip3` is a wrapper for a `pip` that installs to the *system* Python 3 version (3.4 in your case).
Ubuntu 14.04LTS uses the [*trusty* package list](http://packages.ubuntu.com/trusty/). That repository comes with [Python 3.4.0-0ubuntu2](http://packages.ubuntu.com/trusty/python3). So the `pip` contained in `python3-pip` belongs to *that* version: 3.4. As such, when using Python 3.5, packages installed using Python 3.4 and that version’s `pip` will not be available.

I don’t know how you installed Python 3.5 on your system, but you should install `pip` for that version in the same way. If you compiled it from source yourself, you should see the [install instructions for pip](https://pip.pypa.io/en/stable/installing/) on how to get it installed for Python 3.5.
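If Python 3.5 was compiled from source, one common route from those instructions is the `get-pip.py` bootstrap script. This is only a sketch, and an interpreter this old may need a version-specific copy of the script today:

```
wget https://bootstrap.pypa.io/get-pip.py
sudo python3.5 get-pip.py
sudo python3.5 -m pip install requests
```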
36,426,547
I am using Ubuntu 14.04 I wanted to install package "requests" to use in python 3.5, so I installed it using pip3. I could see it in /usr/lib/python3.4, but while trying to actually execute scripts with Python 3.5 I always got "ImportError: No module named 'requests'" OK, so I figured, perhaps that's because the package is not in python3.5 but in python3.4. Therefore, I tried to uninstall and install i again, but it just kept popping up where I didn't want it (not to mention, when I run apt-get remove pip3-requests, it actually removed pip3 for me as well lol). Therefore, I tried physically removing python3.4 from usr/lib and usr/local/lib in order to try and see if maybe pip3 was confused and installed packages in wrong directories. I'm afraid it was not a good idea... when I now run e.g. `sudo pip3 install reqests` I get the following error: `Could not find platform independent libraries <prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Fatal Python error: Py_Initialize: Unable to get the locale encoding ImportError: No module named 'encodings'` Is there any way to fix this now? And to actually use requests package? When I use ``` sudo apt-get install python3-pip ``` It works and starts unpacking etc. but then I get a long error that starts with: ``` Setting up python3.4 (3.4.3-1ubuntu1~14.04.3) Could not find platform independent libraries <prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Fatal Python error: Py_Initialize: Unable to get the locale encoding ImportError: No module named 'encodings' Aborted dpkg: error processing package python3.4 (--configure): subprocess installed post-installation script returned error exit status 134 dpkg: dependency problems prevent configuration of python3: ``` (...) and ends with ``` python3 depends on python3.4 (>= 3.4.0-0~); however: Package python3.4 is not configured yet. dpkg: error processing package python3-wheel (--configure): dependency problems - leaving unconfigured E: Sub-process /usr/bin/dpkg returned an error code (1) ```
2016/04/05
[ "https://Stackoverflow.com/questions/36426547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4680896/" ]
**First of all, it is a very bad idea to remove your *system* Python 3 in Ubuntu (which 3.4 is in recent subrevisions of Trusty LTS)**. That is because it is a **vital part of the system**. If you run the command `apt-cache rdepends python3`, you'd see that packages such as `ubuntu-minimal`, `ubuntu-release-upgrader-core`, `lsb-release`, `lsb-core`, `ubuntu-core-libs` and so on, all depend on Ubuntu's version of Python 3 being installed (and this is the **python3.4** in Ubuntu 14.04.4). If you force-remove python 3.4 by hand, you've ruined your system. It might very well be that you now have to reinstall the whole operating system, unless you manage to reinstall all the system `.deb` packages that put data in `/usr/lib/python3.4`. And especially so if you do it with force. It can make your system even unbootable, so do not reboot that computer before you've successfully reinstalled Python 3... actually I am not sure how to do it safely since it seems you've forcefully removed all system dependencies from the /usr/lib) --- You should try to reinstall python3.4 ``` sudo apt-get install --reinstall python3.4 ``` But now the bigger problem is that you've still missing all sorts of dependencies for your system programs. Do note that `pip` also should be available as a *module*. Thus to ensure that you install for Python 3.5, you can do ``` sudo python3.5 -mpip install requests ``` The `pip3` is a wrapper for a `pip` that installs to the *system* Python 3 version (3.4 in your case).
1. Open a text-only virtual console by using the keyboard shortcut `Ctrl` + `Alt` + `F3`. 2. At the `login:` prompt type your username and press `Enter`. 3. At the `Password:` prompt type your user password and press `Enter`. 4. Reinstall the default Python 3 version by running the following command: ``` sudo apt install python3-all ``` 5. Switch out of the virtual console and return to your desktop environment by pressing the keyboard shortcut `Ctrl`+`Alt`+`F7`. In Ubuntu 17.10 and later press the keyboard shortcut `Ctrl`+`Alt`+`F2` to exit from the virtual console. --- After you have installed the default Python 3 version, you need to get back your default Ubuntu desktop system. In order to avoid messing something up, do it in the following order: 1. First install the terminal from the console using the command: `sudo apt install gnome-terminal`. If you can't install gnome-terminal at all, skip this step and go to step 2. 2. Return to your desktop and open the terminal using the keyboard shortcut `Ctrl` + `Alt` + `T`. From the terminal install the Ubuntu Software Center using the command: ``` sudo apt install software-center ``` In Ubuntu 16.04 and later run this command instead to reinstall the default Software application: ``` sudo apt install gnome-software ``` If you still can't open the terminal, run the same command from the console instead. If you can't install the default software application at all, skip this step and go to step 3. 3. Open the terminal and try to open the Ubuntu Software Center from the terminal by running the appropriate command, either: `software-center` or `gnome-software`. From the Ubuntu Software Center install the Ubuntu desktop system or else open the terminal (or the console) and install the Ubuntu desktop system by running the command: `sudo apt install ubuntu-desktop`.
6,397,344
it is a python code..whether implementing using linked list .... is efficient in this way........... ``` data = [] # data storage for stacks represented as linked lists stack = [-1, -1, -1] # pointers to each of three stacks (-1 is the "null" pointer) free = -1 # pointer to list of free stack nodes to be reused def allocate(value): ''' allocate a new node and return a pointer to it ''' global free global data if free == -1: # free list is empty, need to expand data list data += [value,-1] return len(data)-2 else: # pop a node off the free list and reuse it temp = free free = data[temp+1] data[temp] = value data[temp+1] = -1 return temp def release(ptr): ''' put node on the free list ''' global free temp = free free = ptr data[free+1] = temp def push(n, value): ''' push value onto stack n ''' global free global data temp = stack[n] stack[n] = allocate(value) data[stack[n]+1] = temp def pop(n): ''' pop a value off of stack n ''' value = data[stack[n]] temp = stack[n] stack[n] = data[stack[n]+1] release(temp) return value def list(ptr): ''' list contents of a stack ''' while ptr != -1: print data[ptr], ptr = data[ptr+1] print def list_all(): ''' list contents of all the stacks and the free list ''' print stack,free,data for i in range(3): print i,":", list(stack[i]) print "free:", list(free) push(0,"hello") push(1,"foo") push(0,"goodbye") push(1,"bar") list_all() pop(0) pop(0) push(2,"abc") list_all() pop(1) pop(2) pop(1) list_all() ``` r there is any way to do this efficiently other than this??implement in this way in c /c++ would be eficient???
2011/06/18
[ "https://Stackoverflow.com/questions/6397344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/804287/" ]
In python, a list is a stack: ``` >>> l = [1, 2, 3, 4, 5] >>> l.pop() 5 >>> l.pop() 4 >>> l.append(9) >>> l [1, 2, 3, 9] >>> l.pop() 9 >>> l.pop() 3 >>> l.append(12) >>> l [1, 2, 12] ``` Although it may be an... entertaining exercise to implement a c-style linked list in python, it is unnecessary, and likely to be very slow. Just use a list instead.
A far better solution could be using list instead of stack to implement linked list. The code given is stack implementation of linked list, which I believe is a norm in python but in C/C++ you can use list for efficient implementation. A sample code in C would be as follows :- ``` #include <stdio.h> #include <stdlib.h> struct node{ int data; struct node *next; }; struct node* add(struct node *head, int data){ struct node *tmp; if(head == NULL){ head=(struct node *)malloc(sizeof(struct node)); if(head == NULL){ printf("Error! memory is not available\n"); exit(0); } head-> data = data; head-> next = head; }else{ tmp = head; while (tmp-> next != head) tmp = tmp-> next; tmp-> next = (struct node *)malloc(sizeof(struct node)); if(tmp -> next == NULL) { printf("Error! memory is not available\n"); exit(0); } tmp = tmp-> next; tmp-> data = data; tmp-> next = head; } return head; } void printlist(struct node *head) { struct node *current; current = head; if(current!= NULL) { do { printf("%d\t",current->data); current = current->next; } while (current!= head); printf("\n"); } else printf("The list is empty\n"); } void destroy(struct node *head) { struct node *current, *tmp; current = head->next; head->next = NULL; while(current != NULL) { tmp = current->next; free(current); current = tmp; } } void main() { struct node *head = NULL; head = add(head,1); /* 1 */ printlist(head); head = add(head,20);/* 20 */ printlist(head); head = add(head,10);/* 1 20 10 */ printlist(head); head = add(head,5); /* 1 20 10 5*/ printlist(head); destroy(head); getchar(); } ``` In the above example if you create an array of pointers with size 3, each of the pointer pointing to head, you can create three linked lists. This would handle the space with maximum efficiency and there is no need to check for free nodes too.
6,397,344
it is a python code..whether implementing using linked list .... is efficient in this way........... ``` data = [] # data storage for stacks represented as linked lists stack = [-1, -1, -1] # pointers to each of three stacks (-1 is the "null" pointer) free = -1 # pointer to list of free stack nodes to be reused def allocate(value): ''' allocate a new node and return a pointer to it ''' global free global data if free == -1: # free list is empty, need to expand data list data += [value,-1] return len(data)-2 else: # pop a node off the free list and reuse it temp = free free = data[temp+1] data[temp] = value data[temp+1] = -1 return temp def release(ptr): ''' put node on the free list ''' global free temp = free free = ptr data[free+1] = temp def push(n, value): ''' push value onto stack n ''' global free global data temp = stack[n] stack[n] = allocate(value) data[stack[n]+1] = temp def pop(n): ''' pop a value off of stack n ''' value = data[stack[n]] temp = stack[n] stack[n] = data[stack[n]+1] release(temp) return value def list(ptr): ''' list contents of a stack ''' while ptr != -1: print data[ptr], ptr = data[ptr+1] print def list_all(): ''' list contents of all the stacks and the free list ''' print stack,free,data for i in range(3): print i,":", list(stack[i]) print "free:", list(free) push(0,"hello") push(1,"foo") push(0,"goodbye") push(1,"bar") list_all() pop(0) pop(0) push(2,"abc") list_all() pop(1) pop(2) pop(1) list_all() ``` r there is any way to do this efficiently other than this??implement in this way in c /c++ would be eficient???
2011/06/18
[ "https://Stackoverflow.com/questions/6397344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/804287/" ]
In python, a list is a stack: ``` >>> l = [1, 2, 3, 4, 5] >>> l.pop() 5 >>> l.pop() 4 >>> l.append(9) >>> l [1, 2, 3, 9] >>> l.pop() 9 >>> l.pop() 3 >>> l.append(12) >>> l [1, 2, 12] ``` Although it may be an... entertaining exercise to implement a c-style linked list in python, it is unnecessary, and likely to be very slow. Just use a list instead.
``` def finding_element(a,k): print a i = 0 while k < a[i]: i = i-1 print k,a[i] if k > a[i]: i = i+1 print k,a[i] if k == a[i]: print k,a[i] else: print "not found" a = [ 1,3,5,7,8,9] k = 5 finding_element(a,k) ```
6,397,344
it is a python code..whether implementing using linked list .... is efficient in this way........... ``` data = [] # data storage for stacks represented as linked lists stack = [-1, -1, -1] # pointers to each of three stacks (-1 is the "null" pointer) free = -1 # pointer to list of free stack nodes to be reused def allocate(value): ''' allocate a new node and return a pointer to it ''' global free global data if free == -1: # free list is empty, need to expand data list data += [value,-1] return len(data)-2 else: # pop a node off the free list and reuse it temp = free free = data[temp+1] data[temp] = value data[temp+1] = -1 return temp def release(ptr): ''' put node on the free list ''' global free temp = free free = ptr data[free+1] = temp def push(n, value): ''' push value onto stack n ''' global free global data temp = stack[n] stack[n] = allocate(value) data[stack[n]+1] = temp def pop(n): ''' pop a value off of stack n ''' value = data[stack[n]] temp = stack[n] stack[n] = data[stack[n]+1] release(temp) return value def list(ptr): ''' list contents of a stack ''' while ptr != -1: print data[ptr], ptr = data[ptr+1] print def list_all(): ''' list contents of all the stacks and the free list ''' print stack,free,data for i in range(3): print i,":", list(stack[i]) print "free:", list(free) push(0,"hello") push(1,"foo") push(0,"goodbye") push(1,"bar") list_all() pop(0) pop(0) push(2,"abc") list_all() pop(1) pop(2) pop(1) list_all() ``` r there is any way to do this efficiently other than this??implement in this way in c /c++ would be eficient???
2011/06/18
[ "https://Stackoverflow.com/questions/6397344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/804287/" ]
In python, a list is a stack: ``` >>> l = [1, 2, 3, 4, 5] >>> l.pop() 5 >>> l.pop() 4 >>> l.append(9) >>> l [1, 2, 3, 9] >>> l.pop() 9 >>> l.pop() 3 >>> l.append(12) >>> l [1, 2, 12] ``` Although it may be an... entertaining exercise to implement a c-style linked list in python, it is unnecessary, and likely to be very slow. Just use a list instead.
You really don't have to go to all that trouble when Python does all of that out of the box. Sure you can wrap it in functions if you have some complex object to manipulate but don't overthink it and let Python worry about memory allocation (nobody does that manually anymore). Here is the equivalent of all your function calls in very basic Python: ``` stacks = [ [] for _ in range(3) ] stacks[0].append("hello") # push(0,"hello") stacks[1].append("foo") # push(1,"foo") stacks[0].append("goodbye") # push(0,"goodbye") stacks[1].append("bar") # push(1,"bar") print(stacks) # list_all() stacks[0].pop() # pop(0) stacks[0].pop() # pop(0) stacks[2].append("abc") # push(2,"abc") print(stacks) # list_all() stacks[1].pop() # pop(1) stacks[2].pop() # pop(2) stacks[1].pop() # pop(1) print(stacks) # list_all() ```
6,397,344
it is a python code..whether implementing using linked list .... is efficient in this way........... ``` data = [] # data storage for stacks represented as linked lists stack = [-1, -1, -1] # pointers to each of three stacks (-1 is the "null" pointer) free = -1 # pointer to list of free stack nodes to be reused def allocate(value): ''' allocate a new node and return a pointer to it ''' global free global data if free == -1: # free list is empty, need to expand data list data += [value,-1] return len(data)-2 else: # pop a node off the free list and reuse it temp = free free = data[temp+1] data[temp] = value data[temp+1] = -1 return temp def release(ptr): ''' put node on the free list ''' global free temp = free free = ptr data[free+1] = temp def push(n, value): ''' push value onto stack n ''' global free global data temp = stack[n] stack[n] = allocate(value) data[stack[n]+1] = temp def pop(n): ''' pop a value off of stack n ''' value = data[stack[n]] temp = stack[n] stack[n] = data[stack[n]+1] release(temp) return value def list(ptr): ''' list contents of a stack ''' while ptr != -1: print data[ptr], ptr = data[ptr+1] print def list_all(): ''' list contents of all the stacks and the free list ''' print stack,free,data for i in range(3): print i,":", list(stack[i]) print "free:", list(free) push(0,"hello") push(1,"foo") push(0,"goodbye") push(1,"bar") list_all() pop(0) pop(0) push(2,"abc") list_all() pop(1) pop(2) pop(1) list_all() ``` r there is any way to do this efficiently other than this??implement in this way in c /c++ would be eficient???
2011/06/18
[ "https://Stackoverflow.com/questions/6397344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/804287/" ]
A far better solution could be using list instead of stack to implement linked list. The code given is stack implementation of linked list, which I believe is a norm in python but in C/C++ you can use list for efficient implementation. A sample code in C would be as follows :- ``` #include <stdio.h> #include <stdlib.h> struct node{ int data; struct node *next; }; struct node* add(struct node *head, int data){ struct node *tmp; if(head == NULL){ head=(struct node *)malloc(sizeof(struct node)); if(head == NULL){ printf("Error! memory is not available\n"); exit(0); } head-> data = data; head-> next = head; }else{ tmp = head; while (tmp-> next != head) tmp = tmp-> next; tmp-> next = (struct node *)malloc(sizeof(struct node)); if(tmp -> next == NULL) { printf("Error! memory is not available\n"); exit(0); } tmp = tmp-> next; tmp-> data = data; tmp-> next = head; } return head; } void printlist(struct node *head) { struct node *current; current = head; if(current!= NULL) { do { printf("%d\t",current->data); current = current->next; } while (current!= head); printf("\n"); } else printf("The list is empty\n"); } void destroy(struct node *head) { struct node *current, *tmp; current = head->next; head->next = NULL; while(current != NULL) { tmp = current->next; free(current); current = tmp; } } void main() { struct node *head = NULL; head = add(head,1); /* 1 */ printlist(head); head = add(head,20);/* 20 */ printlist(head); head = add(head,10);/* 1 20 10 */ printlist(head); head = add(head,5); /* 1 20 10 5*/ printlist(head); destroy(head); getchar(); } ``` In the above example if you create an array of pointers with size 3, each of the pointer pointing to head, you can create three linked lists. This would handle the space with maximum efficiency and there is no need to check for free nodes too.
``` def finding_element(a,k): print a i = 0 while k < a[i]: i = i-1 print k,a[i] if k > a[i]: i = i+1 print k,a[i] if k == a[i]: print k,a[i] else: print "not found" a = [ 1,3,5,7,8,9] k = 5 finding_element(a,k) ```
6,397,344
it is a python code..whether implementing using linked list .... is efficient in this way........... ``` data = [] # data storage for stacks represented as linked lists stack = [-1, -1, -1] # pointers to each of three stacks (-1 is the "null" pointer) free = -1 # pointer to list of free stack nodes to be reused def allocate(value): ''' allocate a new node and return a pointer to it ''' global free global data if free == -1: # free list is empty, need to expand data list data += [value,-1] return len(data)-2 else: # pop a node off the free list and reuse it temp = free free = data[temp+1] data[temp] = value data[temp+1] = -1 return temp def release(ptr): ''' put node on the free list ''' global free temp = free free = ptr data[free+1] = temp def push(n, value): ''' push value onto stack n ''' global free global data temp = stack[n] stack[n] = allocate(value) data[stack[n]+1] = temp def pop(n): ''' pop a value off of stack n ''' value = data[stack[n]] temp = stack[n] stack[n] = data[stack[n]+1] release(temp) return value def list(ptr): ''' list contents of a stack ''' while ptr != -1: print data[ptr], ptr = data[ptr+1] print def list_all(): ''' list contents of all the stacks and the free list ''' print stack,free,data for i in range(3): print i,":", list(stack[i]) print "free:", list(free) push(0,"hello") push(1,"foo") push(0,"goodbye") push(1,"bar") list_all() pop(0) pop(0) push(2,"abc") list_all() pop(1) pop(2) pop(1) list_all() ``` r there is any way to do this efficiently other than this??implement in this way in c /c++ would be eficient???
2011/06/18
[ "https://Stackoverflow.com/questions/6397344", "https://Stackoverflow.com", "https://Stackoverflow.com/users/804287/" ]
A far better solution could be using list instead of stack to implement linked list. The code given is stack implementation of linked list, which I believe is a norm in python but in C/C++ you can use list for efficient implementation. A sample code in C would be as follows :- ``` #include <stdio.h> #include <stdlib.h> struct node{ int data; struct node *next; }; struct node* add(struct node *head, int data){ struct node *tmp; if(head == NULL){ head=(struct node *)malloc(sizeof(struct node)); if(head == NULL){ printf("Error! memory is not available\n"); exit(0); } head-> data = data; head-> next = head; }else{ tmp = head; while (tmp-> next != head) tmp = tmp-> next; tmp-> next = (struct node *)malloc(sizeof(struct node)); if(tmp -> next == NULL) { printf("Error! memory is not available\n"); exit(0); } tmp = tmp-> next; tmp-> data = data; tmp-> next = head; } return head; } void printlist(struct node *head) { struct node *current; current = head; if(current!= NULL) { do { printf("%d\t",current->data); current = current->next; } while (current!= head); printf("\n"); } else printf("The list is empty\n"); } void destroy(struct node *head) { struct node *current, *tmp; current = head->next; head->next = NULL; while(current != NULL) { tmp = current->next; free(current); current = tmp; } } void main() { struct node *head = NULL; head = add(head,1); /* 1 */ printlist(head); head = add(head,20);/* 20 */ printlist(head); head = add(head,10);/* 1 20 10 */ printlist(head); head = add(head,5); /* 1 20 10 5*/ printlist(head); destroy(head); getchar(); } ``` In the above example if you create an array of pointers with size 3, each of the pointer pointing to head, you can create three linked lists. This would handle the space with maximum efficiency and there is no need to check for free nodes too.
You really don't have to go to all that trouble when Python does all of that out of the box. Sure you can wrap it in functions if you have some complex object to manipulate but don't overthink it and let Python worry about memory allocation (nobody does that manually anymore). Here is the equivalent of all your function calls in very basic Python: ``` stacks = [ [] for _ in range(3) ] stacks[0].append("hello") # push(0,"hello") stacks[1].append("foo") # push(1,"foo") stacks[0].append("goodbye") # push(0,"goodbye") stacks[1].append("bar") # push(1,"bar") print(stacks) # list_all() stacks[0].pop() # pop(0) stacks[0].pop() # pop(0) stacks[2].append("abc") # push(2,"abc") print(stacks) # list_all() stacks[1].pop() # pop(1) stacks[2].pop() # pop(2) stacks[1].pop() # pop(1) print(stacks) # list_all() ```
28,223,747
I'm new to python and want help cleaning up my code. I had to make a definition that takes a string and returns the first half lowercase and second part uppercase. This is my code - but I can't help think there's a cleaner way to write this. ``` def sillycase(string): x = len(string)/2 y = round(x) print (string[:y].lower() + string[y:].upper()) ```
2015/01/29
[ "https://Stackoverflow.com/questions/28223747", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4459432/" ]
You can use `find -exec` for this: ``` find /FolderA -type f -exec sed -i 's/wordA/wordB/g' {} + ```
`find` would be the way to go. A bash alternative: ``` shopt -s globstar files=() for file in FolderA/**; do # double asterisk is not a typo [[ -f "$file" ]] && files+=("$file") done sed -i 's/wordA/wordB/g' "${files[@]}" ```
55,454,514
I'm using a Kubernetes inventory builder script found here: <https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py> On `line 36`, the ruamel YML library is imported using the code `from ruamel.yaml import YAML`. This library can be found here: <https://pypi.org/project/ruamel.yaml/> On my OSX device (`Mojave 10.14.3`), if I run `pip list`, I can clearly see the most up to date version of `ruamel.yaml`: [![pip list](https://i.stack.imgur.com/iuu1g.png)](https://i.stack.imgur.com/iuu1g.png) If I run `pip show ruamel.yaml`, I get the following output: [![pip show ruamel.yaml](https://i.stack.imgur.com/WRdHY.png)](https://i.stack.imgur.com/WRdHY.png) I'm running the script with this command: `CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py 10.0.0.1 10.0.0.2 10.0.0.4 10.0.0.5` Bizarrely, it returns the following error: ``` Traceback (most recent call last): File "contrib/inventory_builder/inventory.py", line 36, in <module> from ruamel.yaml import YAML ModuleNotFoundError: No module named 'ruamel' ``` I have very little experience with Python, so don't understand how this could be failing. Have I installed the library incorrectly or something? From the documentation on the `ruamel.yml` project page, it looks like the script is calling the library as it should be. Thanks in advance
2019/04/01
[ "https://Stackoverflow.com/questions/55454514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/864245/" ]
`pip` is set to point to the Python 2 installation. To install the library under Python 3, do `pip3 install ruamel.yaml`.
You're using Python 3, but the package was installed for Python 2. Go to the directory where your Python 3 installation is, navigate to Scripts, and use the pip in there to install the needed library.
55,454,514
I'm using a Kubernetes inventory builder script found here: <https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py> On `line 36`, the ruamel YML library is imported using the code `from ruamel.yaml import YAML`. This library can be found here: <https://pypi.org/project/ruamel.yaml/> On my OSX device (`Mojave 10.14.3`), if I run `pip list`, I can clearly see the most up to date version of `ruamel.yaml`: [![pip list](https://i.stack.imgur.com/iuu1g.png)](https://i.stack.imgur.com/iuu1g.png) If I run `pip show ruamel.yaml`, I get the following output: [![pip show ruamel.yaml](https://i.stack.imgur.com/WRdHY.png)](https://i.stack.imgur.com/WRdHY.png) I'm running the script with this command: `CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py 10.0.0.1 10.0.0.2 10.0.0.4 10.0.0.5` Bizarrely, it returns the following error: ``` Traceback (most recent call last): File "contrib/inventory_builder/inventory.py", line 36, in <module> from ruamel.yaml import YAML ModuleNotFoundError: No module named 'ruamel' ``` I have very little experience with Python, so don't understand how this could be failing. Have I installed the library incorrectly or something? From the documentation on the `ruamel.yml` project page, it looks like the script is calling the library as it should be. Thanks in advance
2019/04/01
[ "https://Stackoverflow.com/questions/55454514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/864245/" ]
In my case, I was installing this with `pip3 install ruamel.yaml`, and it was putting the package in `/usr/local/lib/python3.9/site-packages/`, but the `python3` binary on the machine was pinned to Python 3.7, so trying to import that module was raising the `ModuleNotFoundError`. What helped to fix this was to install the module with `python3 -m pip install ruamel.yaml`: running pip via the python3 binary makes sure it runs on the same version, in this case 3.7, and the package gets installed into that version's site-packages.
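To see the mismatch for yourself, a small diagnostic sketch (plain standard-library Python, nothing specific to ruamel.yaml) shows which interpreter and site-packages directories a given `python3` actually uses:

```python
import sys
import site

print(sys.executable)            # path of the python3 binary actually running
print(sys.version.split()[0])    # its version, e.g. 3.7.x vs 3.9.x
print(site.getsitepackages())    # the site-packages directories it searches
```

If the printed version or paths differ from where `pip3` installed the package, installing via `python3 -m pip` as described above resolves it.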
You're using Python 3, but the package was installed for Python 2. Go to the directory where your Python 3 installation is, navigate to Scripts, and use the pip in there to install the needed library.
55,454,514
I'm using a Kubernetes inventory builder script found here: <https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py> On `line 36`, the ruamel YML library is imported using the code `from ruamel.yaml import YAML`. This library can be found here: <https://pypi.org/project/ruamel.yaml/> On my OSX device (`Mojave 10.14.3`), if I run `pip list`, I can clearly see the most up to date version of `ruamel.yaml`: [![pip list](https://i.stack.imgur.com/iuu1g.png)](https://i.stack.imgur.com/iuu1g.png) If I run `pip show ruamel.yaml`, I get the following output: [![pip show ruamel.yaml](https://i.stack.imgur.com/WRdHY.png)](https://i.stack.imgur.com/WRdHY.png) I'm running the script with this command: `CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py 10.0.0.1 10.0.0.2 10.0.0.4 10.0.0.5` Bizarrely, it returns the following error: ``` Traceback (most recent call last): File "contrib/inventory_builder/inventory.py", line 36, in <module> from ruamel.yaml import YAML ModuleNotFoundError: No module named 'ruamel' ``` I have very little experience with Python, so don't understand how this could be failing. Have I installed the library incorrectly or something? From the documentation on the `ruamel.yml` project page, it looks like the script is calling the library as it should be. Thanks in advance
2019/04/01
[ "https://Stackoverflow.com/questions/55454514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/864245/" ]
You're using Python 3, but the package was installed for Python 2. Go to the directory where your Python 3 installation is, navigate to Scripts, and use the pip in there to install the needed library.
This helped me (adding version number to python): ``` CONFIG_FILE=inventory/mycluster/hosts.yaml python3.6 contrib/inventory_builder/inventory.py ${IPS[@]} ```
55,454,514
I'm using a Kubernetes inventory builder script found here: <https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py> On `line 36`, the ruamel YML library is imported using the code `from ruamel.yaml import YAML`. This library can be found here: <https://pypi.org/project/ruamel.yaml/> On my OSX device (`Mojave 10.14.3`), if I run `pip list`, I can clearly see the most up to date version of `ruamel.yaml`: [![pip list](https://i.stack.imgur.com/iuu1g.png)](https://i.stack.imgur.com/iuu1g.png) If I run `pip show ruamel.yaml`, I get the following output: [![pip show ruamel.yaml](https://i.stack.imgur.com/WRdHY.png)](https://i.stack.imgur.com/WRdHY.png) I'm running the script with this command: `CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py 10.0.0.1 10.0.0.2 10.0.0.4 10.0.0.5` Bizarrely, it returns the following error: ``` Traceback (most recent call last): File "contrib/inventory_builder/inventory.py", line 36, in <module> from ruamel.yaml import YAML ModuleNotFoundError: No module named 'ruamel' ``` I have very little experience with Python, so don't understand how this could be failing. Have I installed the library incorrectly or something? From the documentation on the `ruamel.yml` project page, it looks like the script is calling the library as it should be. Thanks in advance
2019/04/01
[ "https://Stackoverflow.com/questions/55454514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/864245/" ]
`pip` is set to point to the Python 2 installation. To install the library under Python 3, do `pip3 install ruamel.yaml`.
In my case, I was installing this with `pip3 install ruamel.yaml`, and it was putting the package in `/usr/local/lib/python3.9/site-packages/`, but the `python3` binary on the machine was pinned to Python 3.7, so trying to import that module was raising the `ModuleNotFoundError`. What helped to fix this was to install the module with `python3 -m pip install ruamel.yaml`: running pip via the python3 binary makes sure it runs on the same version, in this case 3.7, and the package gets installed into that version's site-packages.
55,454,514
I'm using a Kubernetes inventory builder script found here: <https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py> On `line 36`, the ruamel YML library is imported using the code `from ruamel.yaml import YAML`. This library can be found here: <https://pypi.org/project/ruamel.yaml/> On my OSX device (`Mojave 10.14.3`), if I run `pip list`, I can clearly see the most up to date version of `ruamel.yaml`: [![pip list](https://i.stack.imgur.com/iuu1g.png)](https://i.stack.imgur.com/iuu1g.png) If I run `pip show ruamel.yaml`, I get the following output: [![pip show ruamel.yaml](https://i.stack.imgur.com/WRdHY.png)](https://i.stack.imgur.com/WRdHY.png) I'm running the script with this command: `CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py 10.0.0.1 10.0.0.2 10.0.0.4 10.0.0.5` Bizarrely, it returns the following error: ``` Traceback (most recent call last): File "contrib/inventory_builder/inventory.py", line 36, in <module> from ruamel.yaml import YAML ModuleNotFoundError: No module named 'ruamel' ``` I have very little experience with Python, so don't understand how this could be failing. Have I installed the library incorrectly or something? From the documentation on the `ruamel.yml` project page, it looks like the script is calling the library as it should be. Thanks in advance
2019/04/01
[ "https://Stackoverflow.com/questions/55454514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/864245/" ]
`pip` is set to point to the Python 2 installation. To install the library under Python 3, do `pip3 install ruamel.yaml`.
This helped me (adding version number to python): ``` CONFIG_FILE=inventory/mycluster/hosts.yaml python3.6 contrib/inventory_builder/inventory.py ${IPS[@]} ```
55,454,514
I'm using a Kubernetes inventory builder script found here: <https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/inventory_builder/inventory.py> On `line 36`, the ruamel YML library is imported using the code `from ruamel.yaml import YAML`. This library can be found here: <https://pypi.org/project/ruamel.yaml/> On my OSX device (`Mojave 10.14.3`), if I run `pip list`, I can clearly see the most up to date version of `ruamel.yaml`: [![pip list](https://i.stack.imgur.com/iuu1g.png)](https://i.stack.imgur.com/iuu1g.png) If I run `pip show ruamel.yaml`, I get the following output: [![pip show ruamel.yaml](https://i.stack.imgur.com/WRdHY.png)](https://i.stack.imgur.com/WRdHY.png) I'm running the script with this command: `CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py 10.0.0.1 10.0.0.2 10.0.0.4 10.0.0.5` Bizarrely, it returns the following error: ``` Traceback (most recent call last): File "contrib/inventory_builder/inventory.py", line 36, in <module> from ruamel.yaml import YAML ModuleNotFoundError: No module named 'ruamel' ``` I have very little experience with Python, so don't understand how this could be failing. Have I installed the library incorrectly or something? From the documentation on the `ruamel.yml` project page, it looks like the script is calling the library as it should be. Thanks in advance
2019/04/01
[ "https://Stackoverflow.com/questions/55454514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/864245/" ]
In my case, I was installing this with `pip3 install ruamel.yaml`, and it was putting the package in `/usr/local/lib/python3.9/site-packages/`, but the `python3` binary on the machine was pinned to Python 3.7, so trying to import that module was raising the `ModuleNotFoundError`. What helped to fix this was to install the module with `python3 -m pip install ruamel.yaml`: running pip via the python3 binary makes sure it runs on the same version, in this case 3.7, and the package gets installed into that version's site-packages.
This helped me (adding version number to python): ``` CONFIG_FILE=inventory/mycluster/hosts.yaml python3.6 contrib/inventory_builder/inventory.py ${IPS[@]} ```
466,321
How can I convert from a unix timestamp (say 1232559922) to a fractional julian date (2454853.03150). I found a website ( <http://aa.usno.navy.mil/data/docs/JulianDate.php> ) that performs a similar calculation but I need to do it programatically. Solutions can be in C/C++, python, perl, bash, etc...
2009/01/21
[ "https://Stackoverflow.com/questions/466321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/280/" ]
The Unix epoch (zero-point) is January 1, 1970 GMT. That corresponds to the Julian day of 2440587.5 So, in pseudo-code: ``` function float getJulianFromUnix( int unixSecs ) { return ( unixSecs / 86400.0 ) + 2440587.5; } ```
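For a concrete version of that pseudo-code, here is a minimal Python sketch (the function name is mine; it assumes the timestamp is UTC seconds and, like the formula above, ignores leap seconds):

```python
def julian_from_unix(unix_secs):
    """Convert Unix seconds (UTC) to a fractional Julian Date."""
    return unix_secs / 86400.0 + 2440587.5

print(julian_from_unix(1232559922))  # fractional JD for the example timestamp
```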
I know that this is an old post, but I'll just say ... The answer given by Jason Cohen is a good approximation of the conversion. There is a problem though that relates to the number of seconds in one day. A day is not -exactly- 86400 seconds long, and periodically seconds are added to days in order to keep time synchronized with the various observable standards. These are called Leap Seconds (<https://en.wikipedia.org/wiki/Leap_second>). Leap seconds are added to UTC in order to keep it within 1 second of UT1. It stands to reason that as more and more time has elapsed since Jan 1, 1970, the simple conversion above will accrue more and more error from "actual observable time." Between 1972 and 2013 there were added 25 leap seconds. Part of the beauty and the simplicity of Julian Day numbers is that they don't represent date strings at all. They are just a count of elapsed time since the start of the Julian Epoch, much like POSIX time is a continuous count of milliseconds since the POSIX Epoch. The only problem that exists, then, is when you try to map a Julian Day number to a localized date string. If you need a date string that is accurate to within a minute (in 2013), then you'll need an algorithm that can account for leap seconds.
466,321
How can I convert from a unix timestamp (say 1232559922) to a fractional julian date (2454853.03150). I found a website ( <http://aa.usno.navy.mil/data/docs/JulianDate.php> ) that performs a similar calculation but I need to do it programatically. Solutions can be in C/C++, python, perl, bash, etc...
2009/01/21
[ "https://Stackoverflow.com/questions/466321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/280/" ]
The Unix epoch (zero-point) is January 1, 1970 GMT. That corresponds to the Julian day of 2440587.5 So, in pseudo-code: ``` function float getJulianFromUnix( int unixSecs ) { return ( unixSecs / 86400.0 ) + 2440587.5; } ```
Here is my JavaScript code to convert *Unix* timestamp to *Julian*. Originally is showing the current date and time, but with a little mod is answer to your question: ```js function computeJulianDate(DD,MM,YY,HR,MN,SC) { with (Math) { HR = HR + (MN / 60) + (SC/3600); GGG = 1; if (YY <= 1585) GGG = 0; JD = -1 * floor(7 * (floor((MM + 9) / 12) + YY) / 4); S = 1; if ((MM - 9)<0) S=-1; A = abs(MM - 9); J1 = floor(YY + S * floor(A / 7)); J1 = -1 * floor((floor(J1 / 100) + 1) * 3 / 4); JD = JD + floor(275 * MM / 9) + DD + (GGG * J1); JD = JD + 1721027 + 2 * GGG + 367 * YY - 0.5; JD = JD + (HR / 24); } return JD; } function getUTCDateTimeOrJD(now,jd=0) { var hours = now.getUTCHours(); var minutes = now.getUTCMinutes(); var seconds = now.getUTCSeconds() var month = now.getUTCMonth() + 1; var day = now.getUTCDate(); var year = now.getUTCFullYear(); if (jd==1) return computeJulianDate(month, day, year, hours, minutes, seconds); else return day+". "+month+". "+year+". "+hours+":"+minutes+":"+seconds; } var unixTime = 1473294606; getUTCDateTimeOrJD(new Date(unixTime*1000)); getUTCDateTimeOrJD(new Date(unixTime*1000),1); ``` [Working JSFiddle example here](https://jsfiddle.net/eapo/vacduhr5/1/)
466,321
How can I convert from a unix timestamp (say 1232559922) to a fractional julian date (2454853.03150). I found a website ( <http://aa.usno.navy.mil/data/docs/JulianDate.php> ) that performs a similar calculation but I need to do it programatically. Solutions can be in C/C++, python, perl, bash, etc...
2009/01/21
[ "https://Stackoverflow.com/questions/466321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/280/" ]
The Unix epoch (zero-point) is January 1, 1970 GMT. That corresponds to the Julian day of 2440587.5 So, in pseudo-code: ``` function float getJulianFromUnix( int unixSecs ) { return ( unixSecs / 86400.0 ) + 2440587.5; } ```
This question was asked over 13 years ago as of writing. That's pretty wild. Thanks to eapo's JS formula I converted it to PineScript v5 and in testing its at least really close. I don't think perfect accuracy would even be relevant for most (if any) TradingView applications. So, I didn't go as far as to ensure perfect accuracy. But it works. Thanks eapo, you saved me a bunch of time. EDIT: TradingView displays times in the stock/currency exchange time zone. So, it became necessary to create additional arguments to provide for the UTC offsets that exchanges utilize. [IMPORTANT: Keep in mind that exchanges that utilize daylight savings time will shift from `UTC - n` to `UTC - n - 1` depending on the DST state. You must update your UTC offset argument accordingly.] ``` // Julian Date & Partial Day in CST computeJulianDate(dd, mm, yy, hr, mn, sc, offset=0, live=false) => HR = hr HR := hr + (mn / 60) + (sc / 3600) GGG = 1 if year <= 1585 GGG := 0 float JD = -1 * math.floor(7 * (math.floor((mm + 9) / 12) + yy) / 4) S = 1 if ((mm - 9)<0) S :=-1 A = math.abs(mm - 9) J1 = math.floor(yy + S * math.floor(A / 7)) J1 := -1 * math.floor((math.floor(J1 / 100) + 1) * 3 / 4) JD := JD + math.floor(275 * mm / 9) + dd + (GGG * J1) JD := JD + 1721027 + 2 * GGG + 367 * yy JD := JD + (HR / 24) barsInSession = timeframe.isintraday ? ((24 * 60) / timeframe.multiplier) : timeframe.multiplier barsInSession := math.floor(barsInSession) == barsInSession and timeframe.isintraday ? barsInSession - 1 : math.floor(barsInSession) offsetInc = 1 / barsInSession offsetCt = (offset * ((barsInSession / 24) * offsetInc)) JD := live ? JD + offsetCt : math.floor(JD - offsetCt) - 0.5 JD ```
466,321
How can I convert from a unix timestamp (say 1232559922) to a fractional julian date (2454853.03150). I found a website ( <http://aa.usno.navy.mil/data/docs/JulianDate.php> ) that performs a similar calculation but I need to do it programatically. Solutions can be in C/C++, python, perl, bash, etc...
2009/01/21
[ "https://Stackoverflow.com/questions/466321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/280/" ]
I know that this is an old post, but I'll just say ... The answer given by Jason Cohen is a good approximation of the conversion. There is a problem though that relates to the number of seconds in one day. A day is not -exactly- 86400 seconds long, and periodically seconds are added to days in order to keep time synchronized with the various observable standards. These are called Leap Seconds (<https://en.wikipedia.org/wiki/Leap_second>). Leap seconds are added to UTC in order to keep it within 1 second of UT1. It stands to reason that as more and more time has elapsed since Jan 1, 1970, the simple conversion above will accrue more and more error from "actual observable time." Between 1972 and 2013 there were added 25 leap seconds. Part of the beauty and the simplicity of Julian Day numbers is that they don't represent date strings at all. They are just a count of elapsed time since the start of the Julian Epoch, much like POSIX time is a continuous count of milliseconds since the POSIX Epoch. The only problem that exists, then, is when you try to map a Julian Day number to a localized date string. If you need a date string that is accurate to within a minute (in 2013), then you'll need an algorithm that can account for leap seconds.
Here is my JavaScript code to convert *Unix* timestamp to *Julian*. Originally is showing the current date and time, but with a little mod is answer to your question: ```js function computeJulianDate(DD,MM,YY,HR,MN,SC) { with (Math) { HR = HR + (MN / 60) + (SC/3600); GGG = 1; if (YY <= 1585) GGG = 0; JD = -1 * floor(7 * (floor((MM + 9) / 12) + YY) / 4); S = 1; if ((MM - 9)<0) S=-1; A = abs(MM - 9); J1 = floor(YY + S * floor(A / 7)); J1 = -1 * floor((floor(J1 / 100) + 1) * 3 / 4); JD = JD + floor(275 * MM / 9) + DD + (GGG * J1); JD = JD + 1721027 + 2 * GGG + 367 * YY - 0.5; JD = JD + (HR / 24); } return JD; } function getUTCDateTimeOrJD(now,jd=0) { var hours = now.getUTCHours(); var minutes = now.getUTCMinutes(); var seconds = now.getUTCSeconds() var month = now.getUTCMonth() + 1; var day = now.getUTCDate(); var year = now.getUTCFullYear(); if (jd==1) return computeJulianDate(month, day, year, hours, minutes, seconds); else return day+". "+month+". "+year+". "+hours+":"+minutes+":"+seconds; } var unixTime = 1473294606; getUTCDateTimeOrJD(new Date(unixTime*1000)); getUTCDateTimeOrJD(new Date(unixTime*1000),1); ``` [Working JSFiddle example here](https://jsfiddle.net/eapo/vacduhr5/1/)
466,321
How can I convert from a unix timestamp (say 1232559922) to a fractional julian date (2454853.03150). I found a website ( <http://aa.usno.navy.mil/data/docs/JulianDate.php> ) that performs a similar calculation but I need to do it programatically. Solutions can be in C/C++, python, perl, bash, etc...
2009/01/21
[ "https://Stackoverflow.com/questions/466321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/280/" ]
I know that this is an old post, but I'll just say ... The answer given by Jason Cohen is a good approximation of the conversion. There is a problem though that relates to the number of seconds in one day. A day is not -exactly- 86400 seconds long, and periodically seconds are added to days in order to keep time synchronized with the various observable standards. These are called Leap Seconds (<https://en.wikipedia.org/wiki/Leap_second>). Leap seconds are added to UTC in order to keep it within 1 second of UT1. It stands to reason that as more and more time has elapsed since Jan 1, 1970, the simple conversion above will accrue more and more error from "actual observable time." Between 1972 and 2013 there were added 25 leap seconds. Part of the beauty and the simplicity of Julian Day numbers is that they don't represent date strings at all. They are just a count of elapsed time since the start of the Julian Epoch, much like POSIX time is a continuous count of milliseconds since the POSIX Epoch. The only problem that exists, then, is when you try to map a Julian Day number to a localized date string. If you need a date string that is accurate to within a minute (in 2013), then you'll need an algorithm that can account for leap seconds.
This question was asked over 13 years ago as of writing. That's pretty wild. Thanks to eapo's JS formula I converted it to PineScript v5 and in testing its at least really close. I don't think perfect accuracy would even be relevant for most (if any) TradingView applications. So, I didn't go as far as to ensure perfect accuracy. But it works. Thanks eapo, you saved me a bunch of time. EDIT: TradingView displays times in the stock/currency exchange time zone. So, it became necessary to create additional arguments to provide for the UTC offsets that exchanges utilize. [IMPORTANT: Keep in mind that exchanges that utilize daylight savings time will shift from `UTC - n` to `UTC - n - 1` depending on the DST state. You must update your UTC offset argument accordingly.] ``` // Julian Date & Partial Day in CST computeJulianDate(dd, mm, yy, hr, mn, sc, offset=0, live=false) => HR = hr HR := hr + (mn / 60) + (sc / 3600) GGG = 1 if year <= 1585 GGG := 0 float JD = -1 * math.floor(7 * (math.floor((mm + 9) / 12) + yy) / 4) S = 1 if ((mm - 9)<0) S :=-1 A = math.abs(mm - 9) J1 = math.floor(yy + S * math.floor(A / 7)) J1 := -1 * math.floor((math.floor(J1 / 100) + 1) * 3 / 4) JD := JD + math.floor(275 * mm / 9) + dd + (GGG * J1) JD := JD + 1721027 + 2 * GGG + 367 * yy JD := JD + (HR / 24) barsInSession = timeframe.isintraday ? ((24 * 60) / timeframe.multiplier) : timeframe.multiplier barsInSession := math.floor(barsInSession) == barsInSession and timeframe.isintraday ? barsInSession - 1 : math.floor(barsInSession) offsetInc = 1 / barsInSession offsetCt = (offset * ((barsInSession / 24) * offsetInc)) JD := live ? JD + offsetCt : math.floor(JD - offsetCt) - 0.5 JD ```
24,235,241
I recently installed sublime text 2 to try it out before I decide to get sublime text 3 but I can't properly run any code from it. I've hit Ctrl + B and I get an output like this. ``` [Error 2] The system cannot find the file specified [cmd: [u'python', u'-u', u'C:\\Users\\Jeff\\Desktop\\Personal codes\\print.py']] [dir: C:\Users\Jeff\Desktop\Personal codes] [path: C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Program Files(x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Windows Live\Shared] [Finished] ``` I've looked in my roaming folder and found sublime text 2 because another post mentioned editing a file in the python folder there but no such folder exists in Roaming\Sublime Text 2 all I have is Installed Packages, Packages, Pristine Packages and Settings. Am I missing something or is it something obvious that I should know?
2014/06/16
[ "https://Stackoverflow.com/questions/24235241", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3697905/" ]
Instead of adding python to the path, I prefer simply specifying the full path to python in the sublime build. Python.exe is probably installed in one of these (or something similar) ``` C:/Python C:/Program Files/Python C:/Program Files (x86)/Python etc... ``` Once you found it (lets say its in C:\Program Files (x86)\Python27) edit the sublime\_build for python. Here is the build I use: ``` { "cmd": ["C:\\Program Files (x86)\\Python27\\python.exe","-u","$file"], "selector": "source.python" } ``` for me, this file is in ``` Sublime Text\Data\Packages\Python\Python.sublime-build ```
Windows is unable to find your python installation. When you run a command like: ``` python <your_file.py> ``` the first `python` tells your system to find wherever your python binary is and try to run some command by that name. By looking over the path that was echoed, it doesn't look like you actually have your python binary on your system path. If you're uncertain as to how to add python to your `path`, check out this superuser question: <https://superuser.com/questions/143119/how-to-add-python-to-the-windows-path>
35,931,198
I searched the forum and all answers are Python or C++ related; this is for Ruby. I'm trying to figure out how to make the program below prompt the user for an item in the array by typing a number 1-4 (so the position wouldn't start from 0 in the user's eyes). It's probably a simple fix, but I am new to this. I appreciate any time and help.

```
array = []
puts "please add to the array 4 times"

4.times do
  array << gets.chomp
end

puts "#{array}"

puts "Select a position in the array by typing a singular number from 1-4"
```
2016/03/11
[ "https://Stackoverflow.com/questions/35931198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5281054/" ]
You can try something like this: ``` array = [] puts "please add to the array 4 times" 4.times do array << gets.chomp end puts "#{array}" loop do puts "Select a position in the array by typing a singular number from 1-4" ans = gets.chomp.to_i if ans > 0 && ans <= array.length puts "The element at position #{ans} is " + array[ans-1] break else puts "You have to pick a number between 1 & 4, try again." end end ```
You can get the index by combining `gets.chomp` (reads a line of user input and removes the trailing newline character) and `to_i` (convert to integer). Combine this with the ability to access an array's element at a specific index using the `array[index_integer]` method. To piece it together: ``` array = ["first_item", "second_item", "third_item"] puts "enter the array index: " index = gets.chomp.to_i adjusted_index = index - 1 value_at_index = array[adjusted_index] puts "The element at that index is #{value_at_index}" ``` However be forewarned that the index will 'loop around' to -1 if a value of 0 is given. For example, if the user enters 0, then `adjusted_index` will be -1 and the last element of the array will be displayed.
17,806,673
Is there a canonical location where to put self-written packages? My own search only yielded a blog post about [where to put version-independent pure Python packages](http://pythonsimple.noucleus.net/python-install/python-site-packages-what-they-are-and-where-to-put-them) and a [SO question for the canonical location under Linux](https://stackoverflow.com/questions/16196268/where-should-i-put-my-own-python-module-so-that-it-can-be-imported), while I am working on Windows. My use case is that I would like to be able to import my own packages during a IPython session just like any site-package, no matter in which working directory I started the session. In Matlab, the corresponding folder for example is simply `C:/Users/ojdo/Documents/MATLAB`. ``` import mypackage as mp mp.awesomefunction() ... ```
2013/07/23
[ "https://Stackoverflow.com/questions/17806673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2375855/" ]
Thanks to the [two](http://docs.python.org/2/install/#how-installation-works) [additional](http://docs.python.org/2/install/#alternate-installation-the-home-scheme) links, I found not only the intended answer to my question, but also a solution that I like even more and that - ironically - was also explained in my first search result, but obfuscated by all the version-(in)dependent site-package lingo. Answer to original question: default folder ------------------------------------------- I wanted to know if there was a canonical (as in "default") location for my self-written packages. And that exists: ``` >>> import site >>> site.USER_SITE 'C:\\Users\\ojdo\\AppData\\Roaming\\Python\\Python27\\site-packages' ``` And for a Linux and Python 3 example: ``` ojdo@ubuntu:~$ python3 >>> import site >>> site.USER_SITE '/home/ojdo/.local/lib/python3.6/site-packages' ``` The docs on [user scheme package installation](http://docs.python.org/2/install/#alternate-installation-the-user-scheme) state that folder `USER_SITE` - if it exists - will be automatically added to your Python's `sys.path` upon interpreter startup, no manual steps needed. --- Bonus: custom directory for own packages ---------------------------------------- 1. Create a directory anywhere, e.g. `C:\Users\ojdo\Documents\Python\Libs`. 2. Add the file `sitecustomize.py` to the site-packages folder of the Python installation, i.e. in `C:\Python27\Lib\site-packages` (for all users) or `site.USER_SITE` (for a single user). 3. This file then is filled with the following code: ``` import site site.addsitedir(r'C:\Users\ojdo\Documents\Python\Libs') ``` 4. Voilà, the new directory now is automatically added to `sys.path` in every (I)Python session. How it works: Package [site](http://docs.python.org/2/library/site.html), that is automatically imported during every start of Python, also tries to import the package `sitecustomize` for custom package path modifications. In this case, this dummy package consists of a script that adds the personal package folder to the Python path.
I'd use the home scheme for this: <http://docs.python.org/2/install/#alternate-installation-the-home-scheme>
17,806,673
Is there a canonical location where to put self-written packages? My own search only yielded a blog post about [where to put version-independent pure Python packages](http://pythonsimple.noucleus.net/python-install/python-site-packages-what-they-are-and-where-to-put-them) and a [SO question for the canonical location under Linux](https://stackoverflow.com/questions/16196268/where-should-i-put-my-own-python-module-so-that-it-can-be-imported), while I am working on Windows. My use case is that I would like to be able to import my own packages during a IPython session just like any site-package, no matter in which working directory I started the session. In Matlab, the corresponding folder for example is simply `C:/Users/ojdo/Documents/MATLAB`. ``` import mypackage as mp mp.awesomefunction() ... ```
2013/07/23
[ "https://Stackoverflow.com/questions/17806673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2375855/" ]
Place the source of your package wherever you'd like, but at least give your package a minimal `setup.py` file, immediately outside the package: ``` import setuptools setuptools.setup(name='mypackage') ``` Then fake-install your package into your python install's `site-packages` by running: ``` python setup.py develop ``` This is a lot like running `python setup.py install`, except the `egg` just points to your source tree, so you don't have to `install` after every source code change. Finally, you should be able to import your package: ``` python -c "import mypackage as mp; print mp.awesomefunction()" ```
I'd use the home scheme for this: <http://docs.python.org/2/install/#alternate-installation-the-home-scheme>
17,806,673
Is there a canonical location where to put self-written packages? My own search only yielded a blog post about [where to put version-independent pure Python packages](http://pythonsimple.noucleus.net/python-install/python-site-packages-what-they-are-and-where-to-put-them) and a [SO question for the canonical location under Linux](https://stackoverflow.com/questions/16196268/where-should-i-put-my-own-python-module-so-that-it-can-be-imported), while I am working on Windows. My use case is that I would like to be able to import my own packages during a IPython session just like any site-package, no matter in which working directory I started the session. In Matlab, the corresponding folder for example is simply `C:/Users/ojdo/Documents/MATLAB`. ``` import mypackage as mp mp.awesomefunction() ... ```
2013/07/23
[ "https://Stackoverflow.com/questions/17806673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2375855/" ]
Thanks to the [two](http://docs.python.org/2/install/#how-installation-works) [additional](http://docs.python.org/2/install/#alternate-installation-the-home-scheme) links, I found not only the intended answer to my question, but also a solution that I like even more and that - ironically - was also explained in my first search result, but obfuscated by all the version-(in)dependent site-package lingo. Answer to original question: default folder ------------------------------------------- I wanted to know if there was a canonical (as in "default") location for my self-written packages. And that exists: ``` >>> import site >>> site.USER_SITE 'C:\\Users\\ojdo\\AppData\\Roaming\\Python\\Python27\\site-packages' ``` And for a Linux and Python 3 example: ``` ojdo@ubuntu:~$ python3 >>> import site >>> site.USER_SITE '/home/ojdo/.local/lib/python3.6/site-packages' ``` The docs on [user scheme package installation](http://docs.python.org/2/install/#alternate-installation-the-user-scheme) state that folder `USER_SITE` - if it exists - will be automatically added to your Python's `sys.path` upon interpreter startup, no manual steps needed. --- Bonus: custom directory for own packages ---------------------------------------- 1. Create a directory anywhere, e.g. `C:\Users\ojdo\Documents\Python\Libs`. 2. Add the file `sitecustomize.py` to the site-packages folder of the Python installation, i.e. in `C:\Python27\Lib\site-packages` (for all users) or `site.USER_SITE` (for a single user). 3. This file then is filled with the following code: ``` import site site.addsitedir(r'C:\Users\ojdo\Documents\Python\Libs') ``` 4. Voilà, the new directory now is automatically added to `sys.path` in every (I)Python session. How it works: Package [site](http://docs.python.org/2/library/site.html), that is automatically imported during every start of Python, also tries to import the package `sitecustomize` for custom package path modifications. In this case, this dummy package consists of a script that adds the personal package folder to the Python path.
Place the source of your package wherever you'd like, but at least give your package a minimal `setup.py` file, immediately outside the package: ``` import setuptools setuptools.setup(name='mypackage') ``` Then fake-install your package into your python install's `site-packages` by running: ``` python setup.py develop ``` This is a lot like running `python setup.py install`, except the `egg` just points to your source tree, so you don't have to `install` after every source code change. Finally, you should be able to import your package: ``` python -c "import mypackage as mp; print mp.awesomefunction()" ```
17,806,673
Is there a canonical location where to put self-written packages? My own search only yielded a blog post about [where to put version-independent pure Python packages](http://pythonsimple.noucleus.net/python-install/python-site-packages-what-they-are-and-where-to-put-them) and a [SO question for the canonical location under Linux](https://stackoverflow.com/questions/16196268/where-should-i-put-my-own-python-module-so-that-it-can-be-imported), while I am working on Windows. My use case is that I would like to be able to import my own packages during a IPython session just like any site-package, no matter in which working directory I started the session. In Matlab, the corresponding folder for example is simply `C:/Users/ojdo/Documents/MATLAB`. ``` import mypackage as mp mp.awesomefunction() ... ```
2013/07/23
[ "https://Stackoverflow.com/questions/17806673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2375855/" ]
Thanks to the [two](http://docs.python.org/2/install/#how-installation-works) [additional](http://docs.python.org/2/install/#alternate-installation-the-home-scheme) links, I found not only the intended answer to my question, but also a solution that I like even more and that - ironically - was also explained in my first search result, but obfuscated by all the version-(in)dependent site-package lingo. Answer to original question: default folder ------------------------------------------- I wanted to know if there was a canonical (as in "default") location for my self-written packages. And that exists: ``` >>> import site >>> site.USER_SITE 'C:\\Users\\ojdo\\AppData\\Roaming\\Python\\Python27\\site-packages' ``` And for a Linux and Python 3 example: ``` ojdo@ubuntu:~$ python3 >>> import site >>> site.USER_SITE '/home/ojdo/.local/lib/python3.6/site-packages' ``` The docs on [user scheme package installation](http://docs.python.org/2/install/#alternate-installation-the-user-scheme) state that folder `USER_SITE` - if it exists - will be automatically added to your Python's `sys.path` upon interpreter startup, no manual steps needed. --- Bonus: custom directory for own packages ---------------------------------------- 1. Create a directory anywhere, e.g. `C:\Users\ojdo\Documents\Python\Libs`. 2. Add the file `sitecustomize.py` to the site-packages folder of the Python installation, i.e. in `C:\Python27\Lib\site-packages` (for all users) or `site.USER_SITE` (for a single user). 3. This file then is filled with the following code: ``` import site site.addsitedir(r'C:\Users\ojdo\Documents\Python\Libs') ``` 4. Voilà, the new directory now is automatically added to `sys.path` in every (I)Python session. How it works: Package [site](http://docs.python.org/2/library/site.html), that is automatically imported during every start of Python, also tries to import the package `sitecustomize` for custom package path modifications. In this case, this dummy package consists of a script that adds the personal package folder to the Python path.
I had the same question, and your answer is very helpful. To add a little bit, I came across this example that's helpful to me: <http://python-packaging.readthedocs.io/en/latest/minimal.html> It is a minimal example of how to package your own code, and properly install it locally (I imagine this is what you actually want), or distribute on PyPI. Doing things the python way.
17,806,673
Is there a canonical location where to put self-written packages? My own search only yielded a blog post about [where to put version-independent pure Python packages](http://pythonsimple.noucleus.net/python-install/python-site-packages-what-they-are-and-where-to-put-them) and a [SO question for the canonical location under Linux](https://stackoverflow.com/questions/16196268/where-should-i-put-my-own-python-module-so-that-it-can-be-imported), while I am working on Windows. My use case is that I would like to be able to import my own packages during a IPython session just like any site-package, no matter in which working directory I started the session. In Matlab, the corresponding folder for example is simply `C:/Users/ojdo/Documents/MATLAB`. ``` import mypackage as mp mp.awesomefunction() ... ```
2013/07/23
[ "https://Stackoverflow.com/questions/17806673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2375855/" ]
Place the source of your package wherever you'd like, but at least give your package a minimal `setup.py` file, immediately outside the package: ``` import setuptools setuptools.setup(name='mypackage') ``` Then fake-install your package into your python install's `site-packages` by running: ``` python setup.py develop ``` This is a lot like running `python setup.py install`, except the `egg` just points to your source tree, so you don't have to `install` after every source code change. Finally, you should be able to import your package: ``` python -c "import mypackage as mp; print mp.awesomefunction()" ```
I had the same question, and your answer is very helpful. To add a little bit, I came across this example that's helpful to me: <http://python-packaging.readthedocs.io/en/latest/minimal.html> It is a minimal example of how to package your own code, and properly install it locally (I imagine this is what you actually want), or distribute on PyPI. Doing things the python way.
25,403,110
I am getting started with Django through [this](http://www.youtube.com/watch?v=3DccH9AMwFQ) beautiful video tutorial.On Tutorial 15 of the video series, there is database migration using **south**. But when I do `python manage.py migrate signups`, I got a whole lot of errors. The first error was: ``` File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 164, i n _run_migration for name, db in south.db.dbs.iteritems(): AttributeError: 'dict' object has no attribute 'iteritems' ``` I changed `iteritems()` to `items()` to fix that but there is a whole lot of other errors popping up. My guess is that it has to do with the versions in action- `South==1.0 Django == 1.6.5 and Python 3.4.1` Here is the content of my *models.py* and `for_you, timestamp, updated` are the attributes added after migration. The commented out attributes were there originally. ``` `from django.db import models class SignUp(models.Model): for_you = models.BooleanField(default = True) first_name = models.CharField(max_length = 120, null=True, blank=True) last_name = models.CharField(max_length = 120, null=True, blank=True) email = models.EmailField() timestamp = models.DateTimeField(auto_now_add = True, auto_now = False) updated = models.DateTimeField(auto_now_add = False, auto_now = True, default=True) #timestamp = models.DateTimeField(auto_now_add = False, auto_now = True) #timestamp = models.DateTimeField(auto_now_add = True, auto_now = False) def __str__(self): return self.email` ``` The autogenerated **migrations/0002\_auto\_\_add\_field\_signup\_for\_you\_\_add\_field\_signup\_updated.py** looks like ``` # -*- coding: utf-8 -*- from south.utils import datetime_utils as datetime from south.db import db from south.v2 import SchemaMigration from django.db import models class Migration(SchemaMigration): def forwards(self, orm): # Adding field 'SignUp.for_you' db.add_column('signups_signup', 'for_you', self.gf('django.db.models.fields.BooleanField')(default=True), keep_default=False) # Adding field 'SignUp.updated' db.add_column('signups_signup', 'updated', self.gf('django.db.models.fields.DateTimeField')(blank=True, default=True, auto_now=True), keep_default=False) def backwards(self, orm): # Deleting field 'SignUp.for_you' db.delete_column('signups_signup', 'for_you') # Deleting field 'SignUp.updated' db.delete_column('signups_signup', 'updated') models = { 'signups.signup': { 'Meta': {'object_name': 'SignUp'}, 'email': ('django.db.models.fields.EmailField', [], {'max_length': '75'}), 'first_name': ('django.db.models.fields.CharField', [], {'blank': 'True', 'null': 'True', 'max_length': '120'}), 'for_you': ('django.db.models.fields.BooleanField', [], {'default': 'True'}), 'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'last_name': ('django.db.models.fields.CharField', [], {'blank': 'True', 'null': 'True', 'max_length': '120'}), 'timestamp': ('django.db.models.fields.DateTimeField', [], {'blank': 'True', 'auto_now_add': 'True'}), 'updated': ('django.db.models.fields.DateTimeField', [], {'blank': 'True', 'default': 'True', 'auto_now': 'True'}) } } complete_apps = ['signups'] ``` And here is the complete error log: ``` Running migrations for signups: - Migrating forwards to 0002_auto__add_field_signup_for_you__add_field_signup_u pdated. 
> signups:0002_auto__add_field_signup_for_you__add_field_signup_updated Traceback (most recent call last): File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 175, i n _run_migration migration_function() File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 60, in <lambda> return (lambda: direction(orm)) File "D:\Projects\skillshare\src\signups\migrations\0002_auto__add_ field_signup_for_you__add_field_signup_updated.py", line 19, in forwards keep_default=False) File "C:\Python34\lib\site-packages\south\db\sqlite3.py", line 35, in add_colu mn field_default = "'%s'" % field.get_db_prep_save(default, connection=self._ge t_connection()) File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 350, in get_db_prep_save prepared=False) File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 911, in get_db_prep_value value = self.get_prep_value(value) File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 895, in get_prep_value value = self.to_python(value) File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 854, in to_python parsed = parse_datetime(value) File "C:\Python34\lib\site-packages\django\utils\dateparse.py", line 67, in pa rse_datetime match = datetime_re.match(value) TypeError: expected string or buffer During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line utility.execute() File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Python34\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv self.execute(*args, **options.__dict__) File "C:\Python34\lib\site-packages\django\core\management\base.py", line 285, in execute output = self.handle(*args, **options) File "C:\Python34\lib\site-packages\south\management\commands\migrate.py", lin e 111, in handle ignore_ghosts = ignore_ghosts, File "C:\Python34\lib\site-packages\south\migration\__init__.py", line 220, in migrate_app success = migrator.migrate_many(target, workplan, database) File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 256, i n migrate_many result = migrator.__class__.migrate_many(migrator, target, migrations, datab ase) File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 331, i n migrate_many result = self.migrate(migration, database) File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 133, i n migrate result = self.run(migration, database) File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 113, i n run dry_run.run_migration(migration, database) File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 192, i n run_migration self._run_migration(migration) File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 178, i n _run_migration raise exceptions.FailedDryRun(migration, sys.exc_info()) south.exceptions.FailedDryRun: ! Error found during dry run of '0002_auto__add_ field_signup_for_you__add_field_signup_updated'! Aborting. 
Traceback (most recent call last): File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 175, i n _run_migration migration_function() File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 60, in <lambda> return (lambda: direction(orm)) File "D:\Projects\skillshare\src\signups\migrations\0002_auto__add_ field_signup_for_you__add_field_signup_updated.py", line 19, in forwards keep_default=False) File "C:\Python34\lib\site-packages\south\db\sqlite3.py", line 35, in add_colu mn field_default = "'%s'" % field.get_db_prep_save(default, connection=self._ge t_connection()) File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 350, in get_db_prep_save prepared=False) File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 911, in get_db_prep_value value = self.get_prep_value(value) File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 895, in get_prep_value value = self.to_python(value) File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line 854, in to_python parsed = parse_datetime(value) File "C:\Python34\lib\site-packages\django\utils\dateparse.py", line 67, in pa rse_datetime match = datetime_re.match(value) TypeError: expected string or buffer ```
2014/08/20
[ "https://Stackoverflow.com/questions/25403110", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2673433/" ]
The problem is that you use a boolean as the default value for a `DateTime` column (see `default=True` on line 19 in your migration). That won't work. Just remove that `default=True` from your model and regenerate your migration. You will probably need `null=True` on that column, or some time-based default value.
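As an illustrative sketch only (the field name is taken from the model in the question; whether you want `null=True` or a real time-based default is your call), the cleaned-up field could look like this:

```python
from django.db import models

class SignUp(models.Model):
    # auto_now=True fills the column on every save, so no explicit default
    # is needed; null=True lets the migration add the column to existing
    # rows without inventing a value for them.
    updated = models.DateTimeField(auto_now=True, null=True, blank=True)
```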
In your migration the default for `fields.DateTimeField` cannot be a Boolean value (`default=True`). You can edit your migration and set a datetime value instead:

```
import datetime
...
default = datetime.datetime(2016, 2, 25, 16, 35, 0, 658000)
...
```

The default for a `models.DateTimeField` should be `None` or a `datetime` object.
28,779,395
Here is a very simple piece of code that illustrates the problem. Main:

```python
import pdb
#pdb.set_trace()
import sys
import csv

sys.version_info
if sys.version_info[0] < 3:
    from Tkinter import *
else:
    from tkinter import *
from Untitled import *

main_window =Tk()
main_window.title("Welcome")

label = Label(main_window, text="Enter your current weight")
label.pack()

Current_Weight=StringVar()
Current_Weight.set("0.0")
entree1 = Entry(main_window,textvariable=Current_Weight,width=30)
entree1.pack()

bouton1 = Button(main_window, text="Enter",
                 command= lambda evt,Current_Weight,entree1: get(evt,Current_Weight,entree1))
bouton1.pack()
```

and in another file, Untitled, I have the `get` function:

```python
def get (event,loot, entree):
    loot=float(entree.get())
    print(loot)
```

When I run the main I receive the following error:

```
Exception in Tkinter callback
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/idlelib/run.py", line 121, in main
    seq, request = rpc.request_queue.get(block=True, timeout=0.05)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/queue.py", line 175, in get
    raise Empty
queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/tkinter/__init__.py", line 1533, in __call__
    return self.func(*args)
TypeError: <lambda>() missing 3 required positional arguments: 'evt', 'Current_Weight', and 'entree1'
```

How can I solve that? I thought the lambda function allowed us to use some args in an event-dependent function.
2015/02/28
[ "https://Stackoverflow.com/questions/28779395", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4502651/" ]
The `command` lambda does not take any arguments at all; furthermore there is no `evt` that you can catch. A lambda can refer to variables outside it; this is called a closure. Thus your button code should be: ``` bouton1 = Button(main_window, text="Enter", command = lambda: get(Current_Weight, entree1)) ``` And your `get` should say: ``` def get(loot, entree): loot = float(entree.get()) print(loot) ```
Actually, you just need the Entry object entree1 as the lamda pass-in argument. Either statement below would work. ``` bouton1 = Button(main_window, text="Enter", command=lambda x = entree1: get(x)) bouton1 = Button(main_window, text="Enter", command=lambda : get(entree1)) ``` with the function get defined as ``` def get(entree): print(float(entree.get())) ```
17,349,928
I understand that an RGB to HSV conversion should take RGB values 0-255 and convert to HSV values [0-360, 0-1, 0-1]. For example see this [converter in java](http://www.javascripter.net/faq/rgb2hsv.htm): When I run matplotlib.colors.rbg\_to\_hsv on an image, it seems to output values [0-1, 0-1, 0-360] instead. However, I have used this function on an [image like this](http://www.tadpolewebworks.com/web/atomic/images/H02-HSV.jpg), and it seems to be working in the right order [H,S,V], just the V is too large. Example: ``` In [1]: import matplotlib.pyplot as plt In [2]: import matplotlib.colors as colors In [3]: image = plt.imread("/path/to/rgb/jpg/image") In [4]: print image [[[126 91 111] [123 85 106] [123 85 106] ..., In [5]: print colors.rgb_to_hsv(image) [[[ 0 0 126] [ 0 0 123] [ 0 0 123] ..., ``` Those are not 0s, they're some number between 0 and 1. Here is the definition from matplotlib.colors.rgb\_to\_hsv ``` def rgb_to_hsv(arr): """ convert rgb values in a numpy array to hsv values input and output arrays should have shape (M,N,3) """ out = np.zeros(arr.shape, dtype=np.float) arr_max = arr.max(-1) ipos = arr_max > 0 delta = arr.ptp(-1) s = np.zeros_like(delta) s[ipos] = delta[ipos] / arr_max[ipos] ipos = delta > 0 # red is max idx = (arr[:, :, 0] == arr_max) & ipos out[idx, 0] = (arr[idx, 1] - arr[idx, 2]) / delta[idx] # green is max idx = (arr[:, :, 1] == arr_max) & ipos out[idx, 0] = 2. + (arr[idx, 2] - arr[idx, 0]) / delta[idx] # blue is max idx = (arr[:, :, 2] == arr_max) & ipos out[idx, 0] = 4. + (arr[idx, 0] - arr[idx, 1]) / delta[idx] out[:, :, 0] = (out[:, :, 0] / 6.0) % 1.0 out[:, :, 1] = s out[:, :, 2] = arr_max return out ``` I would use one of the other rgb\_to\_hsv conversions like colorsys, but this is the only vectorized python one I have found. Can we figure this out? Do we need to report it on github? Matplotlib 1.2.0 , numpy 1.6.1 , Python 2.7 , Mac OS X 10.8
2013/06/27
[ "https://Stackoverflow.com/questions/17349928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1276299/" ]
It works beautifully if, instead of unsigned int RGB values from 0 to 255, you feed it float RGB values from 0 to 1. It would be nice if the documentation specified this, or if the function tried to catch what seems to be a very likely human error. But you can get what you want simply by calling:

```
print colors.rgb_to_hsv(image / 255.0)
```

Note the `255.0`: dividing the uint8 array by the integer `255` under Python 2 would truncate almost every value to 0.
Take care, the source comment states input/output should be of dimension M,N,3, and the function fails for RGBA (M,N,4) images, e.g. imported png files.
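A quick sketch of the workaround for that case, reusing the `image` array and `colors` import from the question (it assumes the fourth channel is alpha and can simply be dropped; PNGs read by `plt.imread` already come back as floats in [0, 1], so no rescaling is needed):

```python
rgb = image[..., :3]          # keep only the R, G, B channels of an (M, N, 4) array
hsv = colors.rgb_to_hsv(rgb)  # now the shape matches what rgb_to_hsv expects
```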
65,408,099
While practising lists and if-statements in Python I got stuck on a problem.

```
friends=["a","b","c"]
print("do you want to add or remove?")  # prompt originally written in Turkish
ans=(input())
if ans == 'add':
    add=input("adding who ?")
    friends.append(add)
if ans=='remove':
    remove = input("removing who ?")
    friends.remove(remove)
    print(remove)
```

The code above works fine, but when I want to improve it to handle already existing friends (and removing a friend who isn't there), I get stuck with this error:

```
if add in list: TypeError: argument of type 'type' is not iterable
```

The same goes for trying to remove a friend who isn't in the list.

```
friends=["a","b","c"]
print("add or remove ? ?")
ans=(input())
if ans == 'add':
    add=input("adding who ? ?")
    if add in friends:
        print("you already added this person")
    else :
        friends.append(add)
if ans=='remove':
    remove = input("removing who ?")
    friends.remove(remove)

print(friends)
```
2020/12/22
[ "https://Stackoverflow.com/questions/65408099", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14871427/" ]
We had the same issue, and managed to fix it by searching for the exact minified function in the minified code (in this case `(0,o.useState`), then searching around that code for some string or identifier that wasn't minified (we found a prop name that was a string) that we could use to find the place in the source code. There we saw that VS Code had auto-imported useState from the wrong place (`import { useEffect } from 'react/cjs/react.development'`). So we just removed that and imported useState from "react" instead. (We also had to clear all the React Native caches to make this work.)
I had a similar problem. I realized that my Expo SDK version was an older one; after I upgraded the Expo SDK and re-deployed my app, the problem did not occur again.
57,774,652
This function: ```js function print(){ console.log('num 1') setTimeout(() => { global.name = 'max' console.log('num 2') },9000); console.log('num 3'); } print(); console.log(global.name) ``` is printing this: ``` num 1 num 3 undefined num 2 ``` And I need to: 1. print `num 1` 2. wait until the 9 seconds have passed 3. set the `global.name` = `max` 4. print `num 2` 5. print `num 3` 6. `console.log(global.name)` 7. print `max` and not `undefined` I wrote this code in Python and it executes line by line because there is nothing like sync and async there. I need this code executed like in Python (line by line)
2019/09/03
[ "https://Stackoverflow.com/questions/57774652", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10489311/" ]
The error is caused by this line: ``` options['partitionKey'] = '/Structures' ``` You need to specify the specific value of the partition key here, not the column name. For example, my partition key is '/name', and the specific value in this document is 'A'. [![enter image description here](https://i.stack.imgur.com/qvm0f.png)](https://i.stack.imgur.com/qvm0f.png) Then your code looks like: ``` from azure.cosmos import cosmos_client client = cosmos_client.CosmosClient("https://***.documents.azure.com:443/", {'masterKey': '***'}) options = {} options['enableCrossPartitionQuery'] = True options['maxItemCount'] = 5 options['partitionKey'] = 'A' client.DeleteItem("dbs/db/colls/coll/docs/2", options) ```
``` from azure.cosmos import CosmosClient URL = 'https://resouceName.documents.azure.com:443/' KEY = 'YourKey' DATABASE_NAME = 'resourceName' CONTAINER_NAME = 'ContainerName' client = CosmosClient(URL, credential=KEY) database = client.get_database_client(DATABASE_NAME) container = database.get_container_client(CONTAINER_NAME) items = container.query_items( query=f'SELECT * FROM {CONTAINER_NAME} c', enable_cross_partition_query=True) for i in items: container.delete_item(i["id"], i["partitionKey"]) ``` The parameters passed to delete_item are the id and partition key of each document; the loop deletes every record returned by the query.
39,875,273
I have attempted to create an insertion sort in python, however the list returned is not sorted. What is the problem with my code? Argument given: [3, 2, 1, 4, 5, 8, 7, 9, 6] Result: 2 1 3 6 4 7 5 8 9 Python code: ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: posfound = 0 #defaults to 0 for j in range(len(sorted_list)): if sorted_list[j] > i: sorted_list.insert(j-1, i) #put the number in before element 'j' posfound = 1 #if you found the correct position in the list set to 1 break if posfound == 0: #if you can't find a place in the list sorted_list.insert(len(sorted_list), i) #put number at the end of the list return sorted_list ```
2016/10/05
[ "https://Stackoverflow.com/questions/39875273", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4363434/" ]
You need to change `sorted_list.insert(j-1, i)` to be `sorted_list.insert(j, i)` to insert before position `j`. `insert(j-1, ..)` will insert before the *previous* element, and in the case where `j=0` it'll wrap around and insert before the last element. The [Python data structures tutorial](https://docs.python.org/3/tutorial/datastructures.html#data-structures) may be useful.
As is often the case, it was an off-by-one error; the code below is fixed. I also made some parts a bit prettier. ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: for index, j in enumerate(sorted_list): if j > i: sorted_list.insert(index, i) #put the number in before element 'j' break else: sorted_list.append(i) #put number at the end of the list return sorted_list ```
39,875,273
I have attempted to create an insertion sort in python, however the list returned is not sorted. What is the problem with my code? Argument given: [3, 2, 1, 4, 5, 8, 7, 9, 6] Result: 2 1 3 6 4 7 5 8 9 Python code: ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: posfound = 0 #defaults to 0 for j in range(len(sorted_list)): if sorted_list[j] > i: sorted_list.insert(j-1, i) #put the number in before element 'j' posfound = 1 #if you found the correct position in the list set to 1 break if posfound == 0: #if you can't find a place in the list sorted_list.insert(len(sorted_list), i) #put number at the end of the list return sorted_list ```
2016/10/05
[ "https://Stackoverflow.com/questions/39875273", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4363434/" ]
You need to change `sorted_list.insert(j-1, i)` to be `sorted_list.insert(j, i)` to insert before position `j`. `insert(j-1, ..)` will insert before the *previous* element, and in the case where `j=0` it'll wrap around and insert before the last element. The [Python data structures tutorial](https://docs.python.org/3/tutorial/datastructures.html#data-structures) may be useful.
As Efferalgan & tzaman have mentioned your core problem is due to an off-by-one error. To catch these sorts of errors it's useful to print `i`, `j` and `sorted_list` on each loop iteration to make sure they contain what you think they contain. Here are a few versions of your algorithm. First, a repaired version of your code that fixes the off-by-one error; it also implements Efferalgan's suggestion of using `.append` if an insertion position isn't found. ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: posfound = 0 #defaults to 0 for j in range(len(sorted_list)): if sorted_list[j] > i: sorted_list.insert(j, i) #put the number in before element 'j' posfound = 1 #if you found the correct position in the list set to 1 break if posfound == 0: #if you can't find a place in the list sorted_list.append(i) #put number at the end of the list return sorted_list ``` Here's a slightly improved version that uses an `else` clause on the loop instead of the `posfound` flag; it also uses slice assignment to do the insertion. ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: for j in range(len(sorted_list)): if sorted_list[j] > i: sorted_list[j:j] = [i] break else: #if you can't find a place in the list sorted_list.append(i) #put number at the end of the list return sorted_list ``` Finally, a version that uses `enumerate` to get the indices and items in `sorted_list` rather than a simple `range` loop. ``` def insertion_sort(mylist): sorted_list = [] for u in mylist: for j, v in enumerate(sorted_list): if v > u: sorted_list[j:j] = [u] break else: #if you can't find a place in the list sorted_list.append(u) #put number at the end of the list return sorted_list ```
39,875,273
I have attempted to create an insertion sort in python, however the list returned is not sorted. What is the problem with my code? Argument given: [3, 2, 1, 4, 5, 8, 7, 9, 6] Result: 2 1 3 6 4 7 5 8 9 Python code: ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: posfound = 0 #defaults to 0 for j in range(len(sorted_list)): if sorted_list[j] > i: sorted_list.insert(j-1, i) #put the number in before element 'j' posfound = 1 #if you found the correct position in the list set to 1 break if posfound == 0: #if you can't find a place in the list sorted_list.insert(len(sorted_list), i) #put number at the end of the list return sorted_list ```
2016/10/05
[ "https://Stackoverflow.com/questions/39875273", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4363434/" ]
As Efferalgan & tzaman have mentioned your core problem is due to an off-by-one error. To catch these sorts of errors it's useful to print `i`, `j` and `sorted_list` on each loop iteration to make sure they contain what you think they contain. Here are a few versions of your algorithm. First, a repaired version of your code that fixes the off-by-one error; it also implements Efferalgan's suggestion of using `.append` if an insertion position isn't found. ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: posfound = 0 #defaults to 0 for j in range(len(sorted_list)): if sorted_list[j] > i: sorted_list.insert(j, i) #put the number in before element 'j' posfound = 1 #if you found the correct position in the list set to 1 break if posfound == 0: #if you can't find a place in the list sorted_list.append(i) #put number at the end of the list return sorted_list ``` Here's a slightly improved version that uses an `else` clause on the loop instead of the `posfound` flag; it also uses slice assignment to do the insertion. ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: for j in range(len(sorted_list)): if sorted_list[j] > i: sorted_list[j:j] = [i] break else: #if you can't find a place in the list sorted_list.append(i) #put number at the end of the list return sorted_list ``` Finally, a version that uses `enumerate` to get the indices and items in `sorted_list` rather than a simple `range` loop. ``` def insertion_sort(mylist): sorted_list = [] for u in mylist: for j, v in enumerate(sorted_list): if v > u: sorted_list[j:j] = [u] break else: #if you can't find a place in the list sorted_list.append(u) #put number at the end of the list return sorted_list ```
As is often the case, it was an off-by-one error; the code below is fixed. I also made some parts a bit prettier. ``` def insertion_sort(mylist): sorted_list = [] for i in mylist: for index, j in enumerate(sorted_list): if j > i: sorted_list.insert(index, i) #put the number in before element 'j' break else: sorted_list.append(i) #put number at the end of the list return sorted_list ```
7,615,511
I am writing a python script and I just need the second line of a series of very small text files. I would like to extract this without saving the file to my harddrive as I currently do. I have found a few threads that reference the TempFile and StringIO modules but I was unable to make much sense of them. Currently I download all of the files and name them sequentially like 1.txt, 2.txt, etc, then go through all of them and extract the second line. I would like to open the file grab the line then move on to finding and opening and reading the next file. Here is what I do currently with writing it to my HDD: ``` while (count4 <= num_files): file_p = [directory,str(count4),'.txt'] file_path = ''.join(file_p) cand_summary = string.strip(linecache.getline(file_path, 2)) linkFile = open('Summary.txt', 'a') linkFile.write(cand_summary) linkFile.write("\n") count4 = count4 + 1 linkFile.close() ```
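As a hedged sketch of the in-memory approach being asked about: `fetch_text(n)` is a hypothetical stand-in for however the files are currently being downloaded, and the point is that the text never needs to be written to disk before the second line is pulled out.

```python
def fetch_text(n):
    # Hypothetical placeholder: replace with the existing download code
    # (HTTP request, API call, ...) that currently produces n.txt.
    raise NotImplementedError

with open('Summary.txt', 'a') as link_file:
    for n in range(1, num_files + 1):                 # num_files as in the question
        text = fetch_text(n)                          # whole file held in memory as a string
        second_line = text.splitlines()[1].strip()    # index 1 == the second line
        link_file.write(second_line + "\n")
```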
2011/09/30
[ "https://Stackoverflow.com/questions/7615511", "https://Stackoverflow.com", "https://Stackoverflow.com/users/935684/" ]
There's a glitch in iText and iTextSharp but you can fix it pretty easily if you don't mind downloading the source and recompiling it. You need to make a change to two files. Any changes I've made are commented inline in the code. Line numbers are based on the 5.1.2.0 code rev 240 The first is in `iTextSharp.text.html.HtmlUtilities.cs`. Look for the function `EliminateWhiteSpace` at line 249 and change it to: ``` public static String EliminateWhiteSpace(String content) { // multiple spaces are reduced to one, // newlines are treated as spaces, // tabs, carriage returns are ignored. StringBuilder buf = new StringBuilder(); int len = content.Length; char character; bool newline = false; bool space = false;//Detect whether we have written at least one space already for (int i = 0; i < len; i++) { switch (character = content[i]) { case ' ': if (!newline && !space) {//If we are not at a new line AND ALSO did not just append a space buf.Append(character); space = true; //flag that we just wrote a space } break; case '\n': if (i > 0) { newline = true; buf.Append(' '); } break; case '\r': break; case '\t': break; default: newline = false; space = false; //reset flag buf.Append(character); break; } } return buf.ToString(); } ``` The second change is in `iTextSharp.text.xml.simpleparser.SimpleXMLParser.cs`. In the function `Go` at line 185 change line 248 to: ``` if (html /*&& nowhite*/) {//removed the nowhite check from here because that should be handled by the HTML parser later, not the XML parser ```
I would recommend using [wkhtmltopdf](http://code.google.com/p/wkhtmltopdf/) instead of iText. wkhtmltopdf will output the html exactly as rendered by webkit (Google Chrome, Safari) instead of iText's conversion. It is just a binary that you can call. That being said, I might check the html to ensure that there are paragraphs and/or line breaks in the user input. They might be stripped out before the conversion.
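If the conversion is driven from a script, a minimal sketch of calling that binary (assuming wkhtmltopdf is installed and on the PATH; the file names are placeholders):

```python
import subprocess

# Render input.html to output.pdf with the wkhtmltopdf binary
subprocess.check_call(["wkhtmltopdf", "input.html", "output.pdf"])
```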
7,615,511
I am writing a python script and I just need the second line of a series of very small text files. I would like to extract this without saving the file to my harddrive as I currently do. I have found a few threads that reference the TempFile and StringIO modules but I was unable to make much sense of them. Currently I download all of the files and name them sequentially like 1.txt, 2.txt, etc, then go through all of them and extract the second line. I would like to open the file grab the line then move on to finding and opening and reading the next file. Here is what I do currently with writing it to my HDD: ``` while (count4 <= num_files): file_p = [directory,str(count4),'.txt'] file_path = ''.join(file_p) cand_summary = string.strip(linecache.getline(file_path, 2)) linkFile = open('Summary.txt', 'a') linkFile.write(cand_summary) linkFile.write("\n") count4 = count4 + 1 linkFile.close() ```
2011/09/30
[ "https://Stackoverflow.com/questions/7615511", "https://Stackoverflow.com", "https://Stackoverflow.com/users/935684/" ]
Thanks for the help everyone. I was able to find a small workaround by doing the following: ``` vsHTML.Replace(" ", "&nbsp;&nbsp;").Replace(Chr(9), "&nbsp;&nbsp;&nbsp;&nbsp;").Replace(Chr(160), "&nbsp;").Replace(vbCrLf, "<br />") ``` The actual code does not display properly, but the first replace is replacing white spaces with `&nbsp;`, `Chr(9)` with several `&nbsp;`, and `Chr(160)` with `&nbsp;`.
I would recommend using [wkhtmltopdf](http://code.google.com/p/wkhtmltopdf/) instead of iText. wkhtmltopdf will output the html exactly as rendered by webkit (Google Chrome, Safari) instead of iText's conversion. It is just a binary that you can call. That being said, I might check the html to ensure that there are paragraphs and/or line breaks in the user input. They might be stripped out before the conversion.
7,615,511
I am writing a python script and I just need the second line of a series of very small text files. I would like to extract this without saving the file to my harddrive as I currently do. I have found a few threads that reference the TempFile and StringIO modules but I was unable to make much sense of them. Currently I download all of the files and name them sequentially like 1.txt, 2.txt, etc, then go through all of them and extract the second line. I would like to open the file grab the line then move on to finding and opening and reading the next file. Here is what I do currently with writing it to my HDD: ``` while (count4 <= num_files): file_p = [directory,str(count4),'.txt'] file_path = ''.join(file_p) cand_summary = string.strip(linecache.getline(file_path, 2)) linkFile = open('Summary.txt', 'a') linkFile.write(cand_summary) linkFile.write("\n") count4 = count4 + 1 linkFile.close() ```
2011/09/30
[ "https://Stackoverflow.com/questions/7615511", "https://Stackoverflow.com", "https://Stackoverflow.com/users/935684/" ]
Thanks for the help everyone. I was able to find a small workaround by doing the following: ``` vsHTML.Replace(" ", "&nbsp;&nbsp;").Replace(Chr(9), "&nbsp;&nbsp;&nbsp;&nbsp;").Replace(Chr(160), "&nbsp;").Replace(vbCrLf, "<br />") ``` The actual code does not display properly, but the first replace is replacing white spaces with `&nbsp;`, `Chr(9)` with several `&nbsp;`, and `Chr(160)` with `&nbsp;`.
There's a glitch in iText and iTextSharp but you can fix it pretty easily if you don't mind downloading the source and recompiling it. You need to make a change to two files. Any changes I've made are commented inline in the code. Line numbers are based on the 5.1.2.0 code rev 240 The first is in `iTextSharp.text.html.HtmlUtilities.cs`. Look for the function `EliminateWhiteSpace` at line 249 and change it to: ``` public static String EliminateWhiteSpace(String content) { // multiple spaces are reduced to one, // newlines are treated as spaces, // tabs, carriage returns are ignored. StringBuilder buf = new StringBuilder(); int len = content.Length; char character; bool newline = false; bool space = false;//Detect whether we have written at least one space already for (int i = 0; i < len; i++) { switch (character = content[i]) { case ' ': if (!newline && !space) {//If we are not at a new line AND ALSO did not just append a space buf.Append(character); space = true; //flag that we just wrote a space } break; case '\n': if (i > 0) { newline = true; buf.Append(' '); } break; case '\r': break; case '\t': break; default: newline = false; space = false; //reset flag buf.Append(character); break; } } return buf.ToString(); } ``` The second change is in `iTextSharp.text.xml.simpleparser.SimpleXMLParser.cs`. In the function `Go` at line 185 change line 248 to: ``` if (html /*&& nowhite*/) {//removed the nowhite check from here because that should be handled by the HTML parser later, not the XML parser ```
65,470,264
I am pretty new to python. Just been working through some online tutorials on udemy. I seem to have an issue with pip installing modules. * I've tried reinstalling them. * Upgrading my python version. * In VS I always just get `module not found`. If I do it in the cmd prompt this is what I get below. [![error](https://i.stack.imgur.com/joizG.png)](https://i.stack.imgur.com/joizG.png)
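A common cause is that pip installs into a different interpreter than the one the editor runs. As a hedged sketch of how to check that (the package name `requests` is only an example):

```python
import sys

# Print the interpreter that actually runs this script / the editor's run button.
print(sys.executable)

# Then, in a terminal, install the package against that exact interpreter, e.g.:
#   <path printed above> -m pip install requests
# Running "python -m pip install ..." ties the install to a specific interpreter,
# which avoids the "module not found" mismatch.
```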
2020/12/27
[ "https://Stackoverflow.com/questions/65470264", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14898017/" ]
> > there is no action called until my final write to parquet. > > > and > > Spark during that final write to parquet call will be able to see that this dataframe is being used in f1 and f2 and will cache the dataframe itself. > > > are correct. If you do `output_df.explain()`, you will see the query plan, which will show that what you said is correct. Thus, there is no need to do `special_rows.cache()`. Generally, `cache` is only necessary if you intend to reuse the dataframe **after** forcing Spark to calculate something, e.g. after `write` or `show`. If you see yourself intentionally calling `count()`, you're probably doing something wrong.
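A small PySpark sketch of the pattern under discussion (the paths are placeholders and the two withColumn transformations merely stand in for whatever f1 and f2 do); only the final write triggers execution, and explain() shows the plan over special_rows beforehand:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/path/to/input")          # placeholder input path
special_rows = df.filter(col("special") > 0)       # lazy: nothing has run yet

new_df_1 = special_rows.withColumn("flag", col("special") * 2)   # stand-in for f1
new_df_2 = special_rows.withColumn("flag", col("special") * 3)   # stand-in for f2

output_df = new_df_1.union(new_df_2)
output_df.explain()                                # inspect the plan before any action
output_df.write.mode("overwrite").parquet("/path/to/output")     # the only action
```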
You might want to repartition after running `special_rows = df.filter(col('special') > 0)`. There can be a large number of empty partitions after running a filtering operation, [as explained here](https://mungingdata.com/apache-spark/filter-where/). The cached `special_rows` built for `new_df_1` will be reused by `new_df_2` in `new_df_1.union(new_df_2)`. That's not necessarily a performance optimization. Caching is expensive. I've seen caching slow down a lot of computations, even when it's being used in a textbook manner (i.e. caching a DataFrame that gets reused several times downstream). Counting does not necessarily make sure the data is cached. Counts avoid scanning rows whenever possible. They'll use the Parquet metadata when they can, which means they don't cache all the data like you might expect. You can also "cache" data by writing it to disk. Something like this: ```py df.filter(col('special') > 0).repartition(500).write.parquet("some_path") special_rows = spark.read.parquet("some_path") ``` To summarize, yes, the DataFrame will be cached in this example, but it's not necessarily going to make your computation run any faster. It might be better to have no cache or to "cache" by writing data to disk.
74,081,960
I got my program running fine as explained at: [How can you make a micropython program on a raspberry pi pico autorun?](https://stackoverflow.com/questions/66183596/how-can-you-make-a-micropython-program-on-a-raspberry-pi-pico-autorun/74078142#74078142) I'm installing a `main.py` that does: ``` import machine import time led = machine.Pin('LED', machine.Pin.OUT) # For Rpi Pico (non-W) it was like this instead apparently. # led = Pin(25, Pin.OUT) i = 0 while (True): led.toggle() print(i) time.sleep(.5) i += 1 ``` When I power the device on by plugging the USB into my laptop, it seems to run fine, with the LED blinking. Then, if I connect from my laptop to the UART with: ``` screen /dev/ttyACM0 115200 ``` I can see the numbers coming out on my host terminal correctly, and the LED still blinks, all as expected. However, when I disconnect from screen with Ctrl-A K, after a few seconds, the LED stops blinking! It takes something around 15 seconds for it to stop, but it does so every time I tested. If I reconnect the UART again with: ``` screen /dev/ttyACM0 115200 ``` it starts blinking again. I also noticed that after I reconnect the UART and execution resumes, the count has increased much less than the actual time passed, so one possibility is that the Pico is going into some slow low-power mode? If I remove the `print()` from the program, it does not freeze anymore after disconnecting the UART (which of course shows no data in this case). `screen -fn`, `screen -f` and `screen -fa` made no difference. Micropython firmware: rp2-pico-w-20221014-unstable-v1.19.1-544-g89b320737.uf2, Ubuntu 22.04 host. Some variants follow. `picocom /dev/ttyACM0` instead of screen and disconnecting with Ctrl-A Ctrl-Q: still freezes like with `screen`. If I exit from `picocom` with Ctrl-A Ctrl-X instead however, then it works. The difference between both seems to be that Ctrl-Q logs: ``` Skipping tty reset... ``` while Ctrl-X doesn't, making this a good possible workaround. The following C analog of the MicroPython program, hacked from: * <https://github.com/raspberrypi/pico-examples/blob/a7ad17156bf60842ee55c8f86cd39e9cd7427c1d/pico_w/blink> * <https://github.com/raspberrypi/pico-examples/blob/a7ad17156bf60842ee55c8f86cd39e9cd7427c1d/hello_world/usb> did not show the same problem, tested on <https://github.com/raspberrypi/pico-sdk/tree/2e6142b15b8a75c1227dd3edbe839193b2bf9041> ``` #include <stdio.h> #include "pico/stdlib.h" #include "pico/cyw43_arch.h" int main() { stdio_init_all(); if (cyw43_arch_init()) { printf("WiFi init failed"); return -1; } int i = 0; while (true) { printf("%i\n", i); cyw43_arch_gpio_put(CYW43_WL_GPIO_LED_PIN, i % 2); i++; sleep_ms(500); } return 0; } ``` Reproduction speed can be greatly increased from a few seconds to almost instant by printing more and faster, as in: ``` import machine import time led = machine.Pin('LED', machine.Pin.OUT) i = 0 while (True): led.toggle() print('asdf ' * 10 + str(i)) time.sleep(.1) i += 1 ``` This corroborates people's theories that the problem is linked to flow control: the sender appears to stop sending if the consumer stops being able to receive fast enough. Also asked at: * <https://github.com/orgs/micropython/discussions/9633> Possibly related: * <https://forums.raspberrypi.com/viewtopic.php?p=1833725&hilit=uart+freezes#p1833725>
2022/10/15
[ "https://Stackoverflow.com/questions/74081960", "https://Stackoverflow.com", "https://Stackoverflow.com/users/895245/" ]
What appears to be happening here is that exiting `screen` (or exiting `picocom` without the tty reset) leaves the [`DTR`](https://en.wikipedia.org/wiki/Data_Terminal_Ready) line on the serial port high. We can verify this by writing some simple code to control the DTR line, like this: ``` #include <unistd.h> #include <string.h> #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <termios.h> #include <sys/types.h> #include <sys/time.h> #include <sys/stat.h> #include <fcntl.h> #include <sys/ioctl.h> #include <signal.h> int main(int argc, char **argv) { int fd; int dtrEnable; int flags; if (argc < 3) { fprintf(stderr, "Usage: setdtr <device> <1 or 0 (DTR high or low)>\n"); exit(1); } if ((fd = open(argv[1], O_RDWR | O_NDELAY)) < 0) { perror("open:"); exit(1); } sscanf(argv[2], "%d", &dtrEnable); ioctl(fd, TIOCMGET, &flags); if(dtrEnable!=0) { flags |= TIOCM_DTR; } else { flags &= ~TIOCM_DTR; } ioctl(fd, TIOCMSET, &flags); close(fd); } ``` Compile this into a tool called `setdtr`: ``` gcc -o setdtr setdtr.c ``` Connect to your Pico using `screen`, start your code, and then disconnect. Wait for the LED to stop blinking. Now run: ``` ./setdtr /dev/ttyACM0 0 ``` You will find that your code starts running again. If you run: ``` ./setdtr /dev/ttyACM0 1 ``` You will find that your code gets stuck again. --- The serial chip on the RP2040 interprets a high DTR line to mean that a device is still connected. If nothing is reading from the serial port, it eventually blocks. Setting the DTR pin to 0 -- either using this `setdtr` tool or by explicitly resetting the serial port state on close -- avoids this problem.
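For what it's worth, the same DTR toggle can also be done from Python with pyserial instead of compiling a C tool (a sketch assuming pyserial is installed and the Pico enumerates as /dev/ttyACM0):

```python
import serial

# Open the Pico's USB-serial port and drop DTR so the firmware no longer
# believes a host terminal is still attached, then close the port.
port = serial.Serial("/dev/ttyACM0")
port.dtr = False
port.close()
```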
I don't know why it works, but based on advice from larsks: ``` sudo apt install picocom picocom /dev/ttyACM0 ``` and then quitting with Ctrl-A Ctrl-X (not Ctrl-A Ctrl-Q) does do what I want. I'm not sure what `screen` is doing differently exactly. When quitting, Ctrl-Q shows on the terminal: ``` Skipping tty reset... ``` and Ctrl-X does not, which may be a major clue.