qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | Use `ipython` rather than `python` for your REPL. It has better code completion and introspection, and when you paste indented code it can automatically "de-indent" the pasted code.
Thus you can put your experimental code in a test function and then paste parts of it without having to worry about de-indenting the code.
If you are pasting large blocks that would otherwise be treated as several individual blocks, then you will need to use the `%paste` or `%cpaste` magics.
e.g.
```
for i in range(3):
    i *= 2
    # with the following blank line this is a complete block

print(i)
```
With a normal paste:
```
In [1]: for i in range(3):
   ...:     i *= 2
   ...:
In [2]: print(i)
4
```
Using `%paste`
```
In [3]: %paste
for i in range(3):
    i *= 2
    print(i)
## -- End pasted text --
0
2
4
In [4]:
```
### PySpark and IPython
>
> It is also possible to launch PySpark in IPython, the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:[1](https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
>
>
>
> ```
> $ IPYTHON=1 ./bin/pyspark
>
> ```
>
> | I think the standard ('Pythonic') way to deal with this is to do it like so:
```
class MyClass(object):
...
def my_function():
...
if __name__ == '__main__':
# testing code here
```
**Edit after your comment**
I don't think what you want is possible using a plain Python interpreter. You could have a look at the IEP Python editor ([website](http://www.iep-project.org/), [bitbucket](https://bitbucket.org/iep-project/iep)): it supports something like Matlab's cell mode, where a cell can be defined with a double comment character (`##`):
```
## main code
class MyClass(object):
...
def my_function():
...
## testing code
do_some_testing_please()
```
All code from a `##`-beginning line until either the next such line or end-of-file constitutes a single cell.
Whenever the cursor is within a particular cell and you strike some hotkey (default Ctrl+Enter), the code within that cell is executed in the currently running interpreter. An additional feature of IEP is that selected code can be executed with F9; a pretty standard feature but the nice thing here is that IEP will smartly deal with whitespace, so just selecting and pasting stuff from inside a method will automatically work. |
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | Use `ipython` rather than `python` for your REPL. It has better code completion and introspection, and when you paste indented code it can automatically "de-indent" the pasted code.
Thus you can put your experimental code in a test function and then paste parts of it without having to worry about de-indenting the code.
If you are pasting large blocks that would otherwise be treated as several individual blocks, then you will need to use the `%paste` or `%cpaste` magics.
e.g.
```
for i in range(3):
    i *= 2
    # with the following blank line this is a complete block

print(i)
```
With a normal paste:
```
In [1]: for i in range(3):
   ...:     i *= 2
   ...:
In [2]: print(i)
4
```
Using `%paste`
```
In [3]: %paste
for i in range(3):
    i *= 2
    print(i)
## -- End pasted text --
0
2
4
In [4]:
```
### PySpark and IPython
>
> It is also possible to launch PySpark in IPython, the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:[1](https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
>
>
>
> ```
> $ IPYTHON=1 ./bin/pyspark
>
> ```
>
> | Follow something like option 2.
I usually put experimental code in a main method.
```
def main():
    # experimental code goes here
```
Then, if you want to execute the experimental code, just call `main()`:
```
main()
``` |
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | I think the standard ('Pythonic') way to deal with this is to do it like so:
```
class MyClass(object):
...
def my_function():
...
if __name__ == '__main__':
# testing code here
```
**Edit after your comment**
I don't think what you want is possible using a plain Python interpreter. You could have a look at the IEP Python editor ([website](http://www.iep-project.org/), [bitbucket](https://bitbucket.org/iep-project/iep)): it supports something like Matlab's cell mode, where a cell can be defined with a double comment character (`##`):
```
## main code
class MyClass(object):
...
def my_function():
...
## testing code
do_some_testing_please()
```
All code from a `##`-beginning line until either the next such line or end-of-file constitutes a single cell.
Whenever the cursor is within a particular cell and you strike some hotkey (default Ctrl+Enter), the code within that cell is executed in the currently running interpreter. An additional feature of IEP is that selected code can be executed with F9; a pretty standard feature but the nice thing here is that IEP will smartly deal with whitespace, so just selecting and pasting stuff from inside a method will automatically work. | I suggest you use a proper version control system to keep the "real" and the "experimental" parts separated.
For example, using Git, you could only include the real code without the experimental parts in your commits (using [`add -p`](https://git-scm.com/book/en/v2/Git-Tools-Interactive-Staging#Staging-Patches)), and then temporarily [`stash`](https://git-scm.com/book/en/v1/Git-Tools-Stashing) the experimental parts for running your various tools.
You could also keep the experimental parts in their own branch which you then [`rebase`](https://git-scm.com/book/en/v2/Git-Branching-Rebasing) on top of the non-experimental parts when you need them. |
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | Follow something like option 2.
I usually put experimental code in a main method.
```
def main():
    # experimental code goes here
```
Then, if you want to execute the experimental code, just call `main()`:
```
main()
``` | Another possibility is to put tests as [*doctests*](https://docs.python.org/2/library/doctest.html) into the docstrings of your code, which admittedly is only practical for simpler cases.
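For illustration, a minimal sketch of what this can look like (the function below is made up):
```
def double(n):
    """Return n multiplied by two.

    >>> double(2)
    4
    >>> double(-3)
    -6
    """
    return n * 2

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the examples above; a plain import ignores them
```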
This way, they are only treated as executable code by the `doctest` module, but as comments otherwise. |
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | With python-mode.el you can mark arbitrary chunks as a section - for example via `py-sectionize-region`.
Then call `py-execute-section`.
Updated after comment:
python-mode.el is delivered by melpa.
M-x list-packages RET
Look for python-mode - the built-in python.el provides 'python, while python-mode.el provides 'python-mode.
Development just moved here: <https://gitlab.com/python-mode-devs/python-mode> | I suggest you use a proper version control system to keep the "real" and the "experimental" parts separated.
For example, using Git, you could only include the real code without the experimental parts in your commits (using [`add -p`](https://git-scm.com/book/en/v2/Git-Tools-Interactive-Staging#Staging-Patches)), and then temporarily [`stash`](https://git-scm.com/book/en/v1/Git-Tools-Stashing) the experimental parts for running your various tools.
You could also keep the experimental parts in their own branch which you then [`rebase`](https://git-scm.com/book/en/v2/Git-Branching-Rebasing) on top of the non-experimental parts when you need them. |
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | Use `ipython` rather than `python` for your REPL. It has better code completion and introspection, and when you paste indented code it can automatically "de-indent" the pasted code.
Thus you can put your experimental code in a test function and then paste parts of it without having to worry about de-indenting the code.
If you are pasting large blocks that would otherwise be treated as several individual blocks, then you will need to use the `%paste` or `%cpaste` magics.
e.g.
```
for i in range(3):
    i *= 2
    # with the following blank line this is a complete block

print(i)
```
With a normal paste:
```
In [1]: for i in range(3):
   ...:     i *= 2
   ...:
In [2]: print(i)
4
```
Using `%paste`
```
In [3]: %paste
for i in range(3):
    i *= 2
    print(i)
## -- End pasted text --
0
2
4
In [4]:
```
### PySpark and IPython
>
> It is also possible to launch PySpark in IPython, the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:[1](https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
>
>
>
> ```
> $ IPYTHON=1 ./bin/pyspark
>
> ```
>
> | Unfortunately, there is no widely (or any) standard describing what you are talking about, so getting a bunch of python specific things to work like this will be difficult.
However, you could wrap these commands in such a way that they only read until a signifier. For example (assuming you are on a unix system):
```
cat $file | sed '/exit(0)/q' | sed '/exit(0)/d'
```
The command will read until 'exit(0)' is found. You could pipe this into your checkers, or create a temp file that your checkers read. You could create wrapper executable files on your path that *may* work with your editors.
Windows may be able to use a similar technique.
I might advise a different approach. Separate files might be best. You might explore IPython notebooks as a possible solution, but I'm not sure exactly what your use case is. |
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | Use `ipython` rather than `python` for your REPL. It has better code completion and introspection, and when you paste indented code it can automatically "de-indent" the pasted code.
Thus you can put your experimental code in a test function and then paste parts of it without having to worry about de-indenting the code.
If you are pasting large blocks that would otherwise be treated as several individual blocks, then you will need to use the `%paste` or `%cpaste` magics.
e.g.
```
for i in range(3):
    i *= 2
    # with the following blank line this is a complete block

print(i)
```
With a normal paste:
```
In [1]: for i in range(3):
   ...:     i *= 2
   ...:
In [2]: print(i)
4
```
Using `%paste`
```
In [3]: %paste
for i in range(3):
    i *= 2
    print(i)
## -- End pasted text --
0
2
4
In [4]:
```
### PySpark and IPython
>
> It is also possible to launch PySpark in IPython, the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:[1](https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
>
>
>
> ```
> $ IPYTHON=1 ./bin/pyspark
>
> ```
>
> | Another possibility is to put tests as [*doctests*](https://docs.python.org/2/library/doctest.html) into the docstrings of your code, which admittedly is only practical for simpler cases.
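For illustration, a minimal sketch of what this can look like (the function below is made up):
```
def double(n):
    """Return n multiplied by two.

    >>> double(2)
    4
    >>> double(-3)
    -6
    """
    return n * 2

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the examples above; a plain import ignores them
```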
This way, they are only treated as executable code by the `doctest` module, but as comments otherwise. |
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | Unfortunately, there is no widely (or any) standard describing what you are talking about, so getting a bunch of python specific things to work like this will be difficult.
However, you could wrap these commands in such a way that they only read until a signifier. For example (assuming you are on a unix system):
```
cat $file | sed '/exit(0)/q' | sed '/exit(0)/d'
```
The command will read until 'exit(0)' is found. You could pipe this into your checkers, or create a temp file that your checkers read. You could create wrapper executable files on your path that *may* work with your editors.
Windows may be able to use a similar technique.
I might advise a different approach. Separate files might be best. You might explore IPython notebooks as a possible solution, but I'm not sure exactly what your use case is. | I suggest you use a proper version control system to keep the "real" and the "experimental" parts separated.
For example, using Git, you could only include the real code without the experimental parts in your commits (using [`add -p`](https://git-scm.com/book/en/v2/Git-Tools-Interactive-Staging#Staging-Patches)), and then temporarily [`stash`](https://git-scm.com/book/en/v1/Git-Tools-Stashing) the experimental parts for running your various tools.
You could also keep the experimental parts in their own branch which you then [`rebase`](https://git-scm.com/book/en/v2/Git-Branching-Rebasing) on top of the non-experimental parts when you need them. |
31,969,540 | My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want `python`, `py_compile`, `pylint` &c to completely ignore the experimental stuff at the end.
I am looking for something like `#if 0` for `cpp`.
**How can this be done?**
Here are some ideas and the reasons they are bad:
1. `sys.exit(0)`: works for `python` but not `py_compile` and `pylint`
2. put all experimental code under `def test():`: I can no longer copy/paste the code into a `python` REPL because it has non-trivial indent
3. put all experimental code between lines with `"""`: emacs no longer indents and fontifies the code properly
4. comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
5. put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is `pyspark`. | 2015/08/12 | [
"https://Stackoverflow.com/questions/31969540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/850781/"
] | Unfortunately, there is no widely (or any) standard describing what you are talking about, so getting a bunch of python specific things to work like this will be difficult.
However, you could wrap these commands in such a way that they only read until a signifier. For example (assuming you are on a unix system):
```
cat $file | sed '/exit(0)/q' | sed '/exit(0)/d'
```
The command will read until 'exit(0)' is found. You could pipe this into your checkers, or create a temp file that your checkers read. You could create wrapper executable files on your path that *may* work with your editors.
Windows may be able to use a similar technique.
I might advise a different approach. Separate files might be best. You might explore IPython notebooks as a possible solution, but I'm not sure exactly what your use case is. | With python-mode.el you can mark arbitrary chunks as a section - for example via `py-sectionize-region`.
Then call `py-execute-section`.
Updated after comment:
python-mode.el is delivered by melpa.
M-x list-packages RET
Look for python-mode - the built-in python.el provides 'python, while python-mode.el provides 'python-mode.
Development just moved here: <https://gitlab.com/python-mode-devs/python-mode> |
19,167,550 | My code goes through a number of files reading them into lists with the command:
```
data = np.loadtxt(myfile, unpack=True)
```
Some of these files are empty (I can't control that) and when that happens I get this warning printed on screen:
```
/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py:795: UserWarning: loadtxt: Empty input file: "/path_to_file/file.dat"
warnings.warn('loadtxt: Empty input file: "%s"' % fname)
```
How can I prevent this warning from showing? | 2013/10/03 | [
"https://Stackoverflow.com/questions/19167550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1391441/"
] | You will have to wrap the line with `catch_warnings`, then call the `simplefilter` method to suppress those warnings. For example:
```
import warnings
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    data = np.loadtxt(myfile, unpack=True)
```
Should do it. | One obvious possibility is to pre-check the files:
```
import os

if os.fstat(myfile.fileno()).st_size:
    data = np.loadtxt(myfile, unpack=True)
else:
    pass  # whatever you want to do for empty files
``` |
22,345,798 | I currently have working Python code that runs on the command line. How can I convert this into a GUI program? I know how to design a GUI (make buttons, callback functions, create text fields, label widgets...). My question is how the GUI should be connected to the existing program: *should I make a Python file called gui.py and import it in the main program, or should it be the other way around?*
eg:
```
n = int(raw_input('enter an integer: '))
def fx(n):
    result = ''
    for i in xrange(1,11):
        result += "{} x {} = {}\n".format(i,n,i*n)
    return result
print fx(n)
```
The above program will print the multiplication table of an integer. How should the GUI program be structured (with an entry box, a button widget, and a text widget where the output will be printed)? Should this program call the GUI code, or should I include this code (the **fx()** function) in the **GUI class**? | 2014/03/12 | [
"https://Stackoverflow.com/questions/22345798",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2332665/"
] | As the GUI is the user front-end, and because your function already exists, the easiest approach is to have the GUI class import the function. On an event, the GUI calls the function and handles displaying the result to the user, as in the sketch below.
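For illustration only, here is a rough tkinter sketch of that idea (written for Python 3, where the module is `tkinter` rather than `Tkinter`; it assumes your existing `fx()` function lives in a module named `tables.py`, which is a made-up name):
```
import tkinter as tk
from tables import fx  # hypothetical module holding the existing fx() function

def on_click():
    n = int(entry.get())           # read the integer from the entry box
    output.delete("1.0", tk.END)   # clear the text widget
    output.insert(tk.END, fx(n))   # display the multiplication table

root = tk.Tk()
entry = tk.Entry(root)
entry.pack()
tk.Button(root, text="Show table", command=on_click).pack()
output = tk.Text(root, height=12, width=30)
output.pack()
root.mainloop()
```
The point is simply that the GUI layer imports and calls `fx()`; the function itself does not change.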
In fact, it's exactly what you have done with a Command-Line Interface (CLI) in your example code :) | I would say the answer strongly depends on your choice of GUI framework. For a small piece of code like the one you posted, you may want to rely on the "batteries included" tkinter. In this case I agree with shaktimaan's comment to simply include the tkinter commands in your existing code. But you have many choices like PyQt, PySide, kivy... All these frameworks have ways to separate program logic from GUI view code, but they achieve this differently.
So read about these frameworks if you're not satisfied with tkinter and make a choice; then you can ask again how to do this separation if you're not sure. |
63,580,623 | Right now I'm sitting on a blank file which consists only of the following:
```
import os
import sys
import shlex
import subprocess
import signal
from time import monotonic as timer
```
I get this error when I try to run my file: ImportError: Cannot import name monotonic
If it matters, I am on linux and my python ver is 2.7.16 - I can't really change any of this because I'm working from my school server... What exactly is causing the error? | 2020/08/25 | [
"https://Stackoverflow.com/questions/63580623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10847907/"
] | You'll need to use a regular `Producer` and execute the serialization functions yourself:
```
from confluent_kafka import avro, Producer
from confluent_kafka.avro import CachedSchemaRegistryClient
from confluent_kafka.avro.serializer.message_serializer import MessageSerializer as AvroSerializer

# assumes schema_registry is a CachedSchemaRegistryClient built elsewhere
avro_serializer = AvroSerializer(schema_registry)
serialize_avro = avro_serializer.encode_record_with_schema  # extract function definition

value_schema = avro.load('avro_schemas/value.avsc')  # TODO: Create avro_schemas folder

p = Producer({'bootstrap.servers': bootstrap_servers})
value_payload = serialize_avro(topic, value_schema, value, is_key=False)
p.produce(topic, key=key, value=value_payload, callback=delivery_report)
``` | `AvroProducer` assumes that both keys and values are encoded with the schema registry, prepending a magic byte and the schema id to the payload of both the key and the value.
If you want to use a custom serialization for the key, you could use a `Producer` instead of an `AvroProducer`. But it will be your responsibility to serialize the key (using whatever format you want) and the values (which means encoding the value and prepending the magic byte and the schema id). To find out how this is done you can look at the `AvroProducer` code.
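As a rough sketch only (it reuses the serializer shown in the other response and assumes the same pre-existing `schema_registry`, `bootstrap_servers`, `topic`, `key` and `value` variables, plus a hypothetical `key.avsc` schema file), serializing the key yourself might look like this:
```
from confluent_kafka import avro, Producer
from confluent_kafka.avro.serializer.message_serializer import MessageSerializer as AvroSerializer

avro_serializer = AvroSerializer(schema_registry)
key_schema = avro.load('avro_schemas/key.avsc')      # hypothetical key schema file
value_schema = avro.load('avro_schemas/value.avsc')

p = Producer({'bootstrap.servers': bootstrap_servers})
p.produce(
    topic,
    key=avro_serializer.encode_record_with_schema(topic, key_schema, key, is_key=True),
    value=avro_serializer.encode_record_with_schema(topic, value_schema, value, is_key=False),
)
```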
But it also means you'll have to write your own `AvroConsumer` and won't be able to use the `kafka-avro-console-consumer`. |
69,833,702 | I keep running into this use and I haven't found a good solution. I am asking for a solution in python, but a solution in R would also be helpful.
I've been getting data that looks something like this:
```
import pandas as pd
data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}
df = pd.DataFrame(data)
Col1
0 Bob
1 101
2 First Street
3
4 Sue
5 102
6 Second Street
7
8 Alex
9 200
10 Third Street
11
```
The pattern in my real data does repeat like this. Sometimes there is a blank row (or more than 1), and sometimes there are not any blank rows. The important part here is that I need to convert this column into a row.
I want the data to look like this.
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
I have tried playing around with this, but nothing has worked. My thought was to iterate through a few rows at a time, assign the values to the appropriate column, and just build a data frame row by row.
```
x = len(df['Col1'])
holder = pd.DataFrame()
new_df = pd.DataFrame()
while x < 4:
    temp = df.iloc[:5]
    holder['Name'] = temp['Col1'].iloc[0]
    holder['Address'] = temp['Col1'].iloc[1]
    holder['Street'] = temp['Col1'].iloc[2]
    new_df = pd.concat([new_df, holder])
    df = temp[5:]
    df.reset_index()
    holder = pd.DataFrame()
    x = len(df['Col1'])
new_df.head(10)
``` | 2021/11/04 | [
"https://Stackoverflow.com/questions/69833702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14167846/"
] | In `R`,
```
data <- data.frame(
Col1 = c('Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '')
)
k<-which(grepl("Street", data$Col1) == TRUE)
j <- k-1
i <- k-2
data.frame(
Name = data[i,],
Adress = data[j,],
Street = data[k,]
)
Name Adress Street
1 Bob 101 First Street
2 Sue 102 Second Street
3 Alex 200 Third Street
```
Or, if the `Street` values do not end with `Street` but the `Adress` values are always numbers, you can also try
```
j <- which(apply(data, 1, function(x) !is.na(as.numeric(x)) ))
i <- j-1
k <- j+1
``` | ### Python3
In Python 3, you can convert your DataFrame into an array and then reshape it.
```py
n = df.shape[0]
df2 = pd.DataFrame(
data=df.to_numpy().reshape((n//4, 4), order='C'),
columns=['Name', 'Address', 'Street', 'Empty'])
```
This produces for your sample data this:
```
Name Address Street Empty
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
If you like you can remove the last column:
```py
df2 = df2.drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
### One-liner code
```
df2 = pd.DataFrame(data=df.to_numpy().reshape((df.shape[0]//4, 4), order='C' ), columns=['Name', 'Address', 'Street', 'Empty']).drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
``` |
69,833,702 | I keep running into this use and I haven't found a good solution. I am asking for a solution in python, but a solution in R would also be helpful.
I've been getting data that looks something like this:
```
import pandas as pd
data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}
df = pd.DataFrame(data)
Col1
0 Bob
1 101
2 First Street
3
4 Sue
5 102
6 Second Street
7
8 Alex
9 200
10 Third Street
11
```
The pattern in my real data does repeat like this. Sometimes there is a blank row (or more than 1), and sometimes there are not any blank rows. The important part here is that I need to convert this column into a row.
I want the data to look like this.
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
I have tried playing around with this, but nothing has worked. My thought was to iterate through a few rows at a time, assign the values to the appropriate column, and just build a data frame row by row.
```
x = len(df['Col1'])
holder = pd.DataFrame()
new_df = pd.DataFrame()
while x < 4:
    temp = df.iloc[:5]
    holder['Name'] = temp['Col1'].iloc[0]
    holder['Address'] = temp['Col1'].iloc[1]
    holder['Street'] = temp['Col1'].iloc[2]
    new_df = pd.concat([new_df, holder])
    df = temp[5:]
    df.reset_index()
    holder = pd.DataFrame()
    x = len(df['Col1'])
new_df.head(10)
``` | 2021/11/04 | [
"https://Stackoverflow.com/questions/69833702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14167846/"
] | In `R`,
```
data <- data.frame(
Col1 = c('Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '')
)
k<-which(grepl("Street", data$Col1) == TRUE)
j <- k-1
i <- k-2
data.frame(
Name = data[i,],
Adress = data[j,],
Street = data[k,]
)
Name Adress Street
1 Bob 101 First Street
2 Sue 102 Second Street
3 Alex 200 Third Street
```
Or, if the `Street` values do not end with `Street` but the `Adress` values are always numbers, you can also try
```
j <- which(apply(data, 1, function(x) !is.na(as.numeric(x)) ))
i <- j-1
k <- j+1
``` | In Python, I believe this may help you.
```
import pandas as pd

data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}

var = list(data.values())[0]
var2 = []
for aux in range(int(len(var)/4)):
    var2.append(var[aux*4: aux*4+3])
data = pd.DataFrame(var2, columns=['Name', 'Address', 'Street'])
print(data)
``` |
69,833,702 | I keep running into this use and I haven't found a good solution. I am asking for a solution in python, but a solution in R would also be helpful.
I've been getting data that looks something like this:
```
import pandas as pd
data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}
df = pd.DataFrame(data)
Col1
0 Bob
1 101
2 First Street
3
4 Sue
5 102
6 Second Street
7
8 Alex
9 200
10 Third Street
11
```
The pattern in my real data does repeat like this. Sometimes there is a blank row (or more than 1), and sometimes there are not any blank rows. The important part here is that I need to convert this column into a row.
I want the data to look like this.
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
I have tried playing around with this, but nothing has worked. My thought was to iterate through a few rows at a time, assign the values to the appropriate column, and just build a data frame row by row.
```
x = len(df['Col1'])
holder = pd.DataFrame()
new_df = pd.DataFrame()
while x < 4:
    temp = df.iloc[:5]
    holder['Name'] = temp['Col1'].iloc[0]
    holder['Address'] = temp['Col1'].iloc[1]
    holder['Street'] = temp['Col1'].iloc[2]
    new_df = pd.concat([new_df, holder])
    df = temp[5:]
    df.reset_index()
    holder = pd.DataFrame()
    x = len(df['Col1'])
new_df.head(10)
``` | 2021/11/04 | [
"https://Stackoverflow.com/questions/69833702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14167846/"
] | In `R`,
```
data <- data.frame(
Col1 = c('Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '')
)
k<-which(grepl("Street", data$Col1) == TRUE)
j <- k-1
i <- k-2
data.frame(
Name = data[i,],
Adress = data[j,],
Street = data[k,]
)
Name Adress Street
1 Bob 101 First Street
2 Sue 102 Second Street
3 Alex 200 Third Street
```
Or, if the `Street` values do not end with `Street` but the `Adress` values are always numbers, you can also try
```
j <- which(apply(data, 1, function(x) !is.na(as.numeric(x)) ))
i <- j-1
k <- j+1
``` | Another R solution. This solution is based on the `tidyverse` package. The example data frame `data` is from Park's post (<https://stackoverflow.com/a/69833814/7669809>).
```
library(tidyverse)
data2 <- data %>%
mutate(ID = cumsum(Col1 %in% "")) %>%
filter(!Col1 %in% "") %>%
group_by(ID) %>%
mutate(Type = case_when(
row_number() == 1L ~"Name",
row_number() == 2L ~"Address",
row_number() == 3L ~"Street",
TRUE ~NA_character_
)) %>%
pivot_wider(names_from = "Type", values_from = "Col1") %>%
ungroup()
data2
# # A tibble: 3 x 4
# ID Name Address Street
# <int> <chr> <chr> <chr>
# 1 0 Bob 101 First Street
# 2 1 Sue 102 Second Street
# 3 2 Alex 200 Third Street
``` |
69,833,702 | I keep running into this use and I haven't found a good solution. I am asking for a solution in python, but a solution in R would also be helpful.
I've been getting data that looks something like this:
```
import pandas as pd
data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}
df = pd.DataFrame(data)
Col1
0 Bob
1 101
2 First Street
3
4 Sue
5 102
6 Second Street
7
8 Alex
9 200
10 Third Street
11
```
The pattern in my real data does repeat like this. Sometimes there is a blank row (or more than 1), and sometimes there are not any blank rows. The important part here is that I need to convert this column into a row.
I want the data to look like this.
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
I have tried playing around with this, but nothing has worked. My thought was to iterate through a few rows at a time, assign the values to the appropriate column, and just build a data frame row by row.
```
x = len(df['Col1'])
holder = pd.DataFrame()
new_df = pd.DataFrame()
while x < 4:
    temp = df.iloc[:5]
    holder['Name'] = temp['Col1'].iloc[0]
    holder['Address'] = temp['Col1'].iloc[1]
    holder['Street'] = temp['Col1'].iloc[2]
    new_df = pd.concat([new_df, holder])
    df = temp[5:]
    df.reset_index()
    holder = pd.DataFrame()
    x = len(df['Col1'])
new_df.head(10)
``` | 2021/11/04 | [
"https://Stackoverflow.com/questions/69833702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14167846/"
] | In `R`,
```
data <- data.frame(
Col1 = c('Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '')
)
k<-which(grepl("Street", data$Col1) == TRUE)
j <- k-1
i <- k-2
data.frame(
Name = data[i,],
Adress = data[j,],
Street = data[k,]
)
Name Adress Street
1 Bob 101 First Street
2 Sue 102 Second Street
3 Alex 200 Third Street
```
Or, if the `Street` values do not end with `Street` but the `Adress` values are always numbers, you can also try
```
j <- which(apply(data, 1, function(x) !is.na(as.numeric(x)) ))
i <- j-1
k <- j+1
``` | The values of the DataFrame are reshaped into an array with 4 columns (and as many rows as needed), the first 3 columns of that array are taken out by slicing and converted into a DataFrame, and finally the DataFrame's columns are renamed with `set_axis`:
```
result = pd.DataFrame(df.values.reshape(-1, 4)[:, :-1])\
.set_axis(['Name', 'Address', 'Street'], axis=1)
result
>>>
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
``` |
69,833,702 | I keep running into this use and I haven't found a good solution. I am asking for a solution in python, but a solution in R would also be helpful.
I've been getting data that looks something like this:
```
import pandas as pd
data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}
df = pd.DataFrame(data)
Col1
0 Bob
1 101
2 First Street
3
4 Sue
5 102
6 Second Street
7
8 Alex
9 200
10 Third Street
11
```
The pattern in my real data does repeat like this. Sometimes there is a blank row (or more than 1), and sometimes there are not any blank rows. The important part here is that I need to convert this column into a row.
I want the data to look like this.
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
I have tried playing around with this, but nothing has worked. My thought was to iterate through a few rows at a time, assign the values to the appropriate column, and just build a data frame row by row.
```
x = len(df['Col1'])
holder = pd.DataFrame()
new_df = pd.DataFrame()
while x < 4:
    temp = df.iloc[:5]
    holder['Name'] = temp['Col1'].iloc[0]
    holder['Address'] = temp['Col1'].iloc[1]
    holder['Street'] = temp['Col1'].iloc[2]
    new_df = pd.concat([new_df, holder])
    df = temp[5:]
    df.reset_index()
    holder = pd.DataFrame()
    x = len(df['Col1'])
new_df.head(10)
``` | 2021/11/04 | [
"https://Stackoverflow.com/questions/69833702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14167846/"
] | ### Python3
In Python 3, you can convert your DataFrame into an array and then reshape it.
```py
n = df.shape[0]
df2 = pd.DataFrame(
data=df.to_numpy().reshape((n//4, 4), order='C'),
columns=['Name', 'Address', 'Street', 'Empty'])
```
This produces for your sample data this:
```
Name Address Street Empty
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
If you like you can remove the last column:
```py
df2 = df2.drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
### One-liner code
```
df2 = pd.DataFrame(data=df.to_numpy().reshape((df.shape[0]//4, 4), order='C' ), columns=['Name', 'Address', 'Street', 'Empty']).drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
``` | In Python, I believe this may help you.
```
import pandas as pd

data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}

var = list(data.values())[0]
var2 = []
for aux in range(int(len(var)/4)):
    var2.append(var[aux*4: aux*4+3])
data = pd.DataFrame(var2, columns=['Name', 'Address', 'Street'])
print(data)
``` |
69,833,702 | I keep running into this use and I haven't found a good solution. I am asking for a solution in python, but a solution in R would also be helpful.
I've been getting data that looks something like this:
```
import pandas as pd
data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}
df = pd.DataFrame(data)
Col1
0 Bob
1 101
2 First Street
3
4 Sue
5 102
6 Second Street
7
8 Alex
9 200
10 Third Street
11
```
The pattern in my real data does repeat like this. Sometimes there is a blank row (or more than 1), and sometimes there are not any blank rows. The important part here is that I need to convert this column into a row.
I want the data to look like this.
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
I have tried playing around with this, but nothing has worked. My thought was to iterate through a few rows at a time, assign the values to the appropriate column, and just build a data frame row by row.
```
x = len(df['Col1'])
holder = pd.DataFrame()
new_df = pd.DataFrame()
while x < 4:
    temp = df.iloc[:5]
    holder['Name'] = temp['Col1'].iloc[0]
    holder['Address'] = temp['Col1'].iloc[1]
    holder['Street'] = temp['Col1'].iloc[2]
    new_df = pd.concat([new_df, holder])
    df = temp[5:]
    df.reset_index()
    holder = pd.DataFrame()
    x = len(df['Col1'])
new_df.head(10)
``` | 2021/11/04 | [
"https://Stackoverflow.com/questions/69833702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14167846/"
] | ### Python3
In Python 3, you can convert your DataFrame into an array and then reshape it.
```py
n = df.shape[0]
df2 = pd.DataFrame(
data=df.to_numpy().reshape((n//4, 4), order='C'),
columns=['Name', 'Address', 'Street', 'Empty'])
```
This produces for your sample data this:
```
Name Address Street Empty
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
If you like you can remove the last column:
```py
df2 = df2.drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
### One-liner code
```
df2 = pd.DataFrame(data=df.to_numpy().reshape((df.shape[0]//4, 4), order='C' ), columns=['Name', 'Address', 'Street', 'Empty']).drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
``` | Another R solution. This solution is based on the `tidyverse` package. The example data frame `data` is from Park's post (<https://stackoverflow.com/a/69833814/7669809>).
```
library(tidyverse)
data2 <- data %>%
mutate(ID = cumsum(Col1 %in% "")) %>%
filter(!Col1 %in% "") %>%
group_by(ID) %>%
mutate(Type = case_when(
row_number() == 1L ~"Name",
row_number() == 2L ~"Address",
row_number() == 3L ~"Street",
TRUE ~NA_character_
)) %>%
pivot_wider(names_from = "Type", values_from = "Col1") %>%
ungroup()
data2
# # A tibble: 3 x 4
# ID Name Address Street
# <int> <chr> <chr> <chr>
# 1 0 Bob 101 First Street
# 2 1 Sue 102 Second Street
# 3 2 Alex 200 Third Street
``` |
69,833,702 | I keep running into this use and I haven't found a good solution. I am asking for a solution in python, but a solution in R would also be helpful.
I've been getting data that looks something like this:
```
import pandas as pd
data = {'Col1': ['Bob', '101', 'First Street', '', 'Sue', '102', 'Second Street', '', 'Alex' , '200', 'Third Street', '']}
df = pd.DataFrame(data)
Col1
0 Bob
1 101
2 First Street
3
4 Sue
5 102
6 Second Street
7
8 Alex
9 200
10 Third Street
11
```
The pattern in my real data does repeat like this. Sometimes there is a blank row (or more than 1), and sometimes there are not any blank rows. The important part here is that I need to convert this column into a row.
I want the data to look like this.
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
I have tried playing around with this, but nothing has worked. My thought was to iterate through a few rows at a time, assign the values to the appropriate column, and just build a data frame row by row.
```
x = len(df['Col1'])
holder = pd.DataFrame()
new_df = pd.DataFrame()
while x < 4:
    temp = df.iloc[:5]
    holder['Name'] = temp['Col1'].iloc[0]
    holder['Address'] = temp['Col1'].iloc[1]
    holder['Street'] = temp['Col1'].iloc[2]
    new_df = pd.concat([new_df, holder])
    df = temp[5:]
    df.reset_index()
    holder = pd.DataFrame()
    x = len(df['Col1'])
new_df.head(10)
``` | 2021/11/04 | [
"https://Stackoverflow.com/questions/69833702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14167846/"
] | ### Python3
In Python 3, you can convert your DataFrame into an array and then reshape it.
```py
n = df.shape[0]
df2 = pd.DataFrame(
data=df.to_numpy().reshape((n//4, 4), order='C'),
columns=['Name', 'Address', 'Street', 'Empty'])
```
This produces for your sample data this:
```
Name Address Street Empty
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
If you like you can remove the last column:
```py
df2 = df2.drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
```
### One-liner code
```
df2 = pd.DataFrame(data=df.to_numpy().reshape((df.shape[0]//4, 4), order='C' ), columns=['Name', 'Address', 'Street', 'Empty']).drop(['Empty'], axis=1)
```
```
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
``` | The values of the DataFrame are reshaped into an array with 4 columns (and as many rows as needed), the first 3 columns of that array are taken out by slicing and converted into a DataFrame, and finally the DataFrame's columns are renamed with `set_axis`:
```
result = pd.DataFrame(df.values.reshape(-1, 4)[:, :-1])\
.set_axis(['Name', 'Address', 'Street'], axis=1)
result
>>>
Name Address Street
0 Bob 101 First Street
1 Sue 102 Second Street
2 Alex 200 Third Street
``` |
56,746,773 | I had a college exercise containing a question that asked me to write a function that returns how many times a particular key repeats in an object in Python. After researching dictionaries, I know that Python automatically ignores duplicate keys, keeping only the last one. I tried to loop over each key the conventional way:
```
dictt = {'a' : 22, 'a' : 33, 'c' : 34, 'd' : 456}
lookFor = 'a'
times = 0
for k, v in dictt.items():
    if k == lookFor:
        times = times + 1
```
This would return 1. Even if I check the length of the dictionary it shows 3, meaning only one of the 'a' keys was counted. | 2019/06/25 | [
"https://Stackoverflow.com/questions/56746773",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9096030/"
] | Just to mention other options note that you can use the `filter` function here:
```
julia> filter(row -> row.a == 2, df)
1×2 DataFrame
│ Row │ a │ b │
│ │ Int64 │ String │
├─────┼───────┼────────┤
│ 1 │ 2 │ y │
```
or
```
julia> df[filter(==(2), df.a), :]
1×2 DataFrame
│ Row │ a │ b │
│ │ Int64 │ String │
├─────┼───────┼────────┤
│ 1 │ 2 │ y │
``` | Fortunately, you only need to add one character: `.`. The `.` character enables broadcasting on any Julia function, even ones like `==`. Therefore, your code would be as follows:
```
df = DataFrame(a=[1,2,3], b=["x", "y", "z"])
df2 = df[df.a .== 2, :]
```
Without the broadcast, the clause `df.a == 2` returns `false` because it's literally comparing the Array [1,2,3], as a whole unit, to the scalar value of 2. An Array of shape (3,) will never be equal to a scalar value of 2, without broadcasting, because the sizes are different. Therefore, that clause just returns a single `false`.
The error you're getting tells you that you're trying to access the DataFrame at index `false`, which is not a valid index for a DataFrame with 3 rows. By broadcasting with `.`, you're now creating a Bool Array of shape (3,), which is a valid way to index a DataFrame with 3 rows.
For more on broadcasting, see the official Julia documentation [here](https://docs.julialang.org/en/v1/manual/functions/#man-vectorized-1). |
38,212,340 | I am trying to extract all those tags whose class name fits the regex pattern frag-0-0, frag-1-0, etc. from [this link](http://de.vroniplag.wikia.com/wiki/Aak/002)
I am trying to retrieve it using the following code
```
driver = webdriver.Chrome(chromedriver)
for frg in frgs:
    driver.get(URL + frg[1:])
    frags = driver.find_elements_by_id(re.compile('frag-[0-9]-0'))
    for frag in frags:
        for tag in frag.find_elements_by_css_selector('[class^=fragmark]'):
            lst.append([tag.get_attribute('class'), tag.text])
driver.quit()
return lst
```
But I get an error. What is the right way of doing this?
The error is as follows:
```
Traceback (most recent call last):
File "vroni.py", line 119, in <module>
op('Aaf')
File "vroni.py", line 104, in op
plags=getplags(cd)
File "vroni.py", line 95, in getplags
frags=driver.find_elements_by_id(re.compile('frag-[0-9]-0'))
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 281, in find_elements_by_id
return self.find_elements(by=By.ID, value=id_)
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 778, in find_elements
'value': value})['value']
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 234, in execute
response = self.command_executor.execute(driver_command, params)
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/remote_connection.py", line 398, in execute
data = utils.dump_json(params)
File "/home/eadaradhiraj/Documents/webscrape/venv/local/lib/python2.7/site-packages/selenium/webdriver/remote/utils.py", line 34, in dump_json
return json.dumps(json_struct)
File "/usr/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <_sre.SRE_Pattern object at 0xb668b1b0> is not JSON serializable
``` | 2016/07/05 | [
"https://Stackoverflow.com/questions/38212340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6213939/"
] | Try to remove DownloadCachePluginBootstrap.cs and FilePluginBootstrap.cs just leave manual setup inside InitializeLastChance(). It seems that there is a problem with loading order. | As @Piotr mentioned:
>
> Try to remove DownloadCachePluginBootstrap.cs and FilePluginBootstrap.cs just
> leave manual setup inside InitializeLastChance(). It seems that there is a
> problem with loading order.
>
>
>
That fixed the issue for me as well.
I just want to share my code in the Setup.cs of the iOS project because I think that's a better implementation. I didn't use **InitializeLastChance()**. Instead, I used **AddPluginsLoaders** and **LoadPlugins**.
```
protected override void AddPluginsLoaders(MvxLoaderPluginRegistry registry)
{
registry.Register<MvvmCross.Plugins.File.PluginLoader, MvvmCross.Plugins.File.iOS.Plugin>();
registry.Register<MvvmCross.Plugins.DownloadCache.PluginLoader, MvvmCross.Plugins.DownloadCache.iOS.Plugin>();
base.AddPluginsLoaders(registry);
}
public override void LoadPlugins(IMvxPluginManager pluginManager)
{
pluginManager.EnsurePluginLoaded<MvvmCross.Plugins.File.PluginLoader>();
pluginManager.EnsurePluginLoaded<MvvmCross.Plugins.DownloadCache.PluginLoader>();
base.LoadPlugins(pluginManager);
}
``` |
44,206,346 | How can I stop the pgAdmin 4 process?
I ran pgAdmin 4 the following way:
`python3 /usr/local/pgAdmin4.py`
My idea is to use Ctrl-C. | 2017/05/26 | [
"https://Stackoverflow.com/questions/44206346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8071434/"
] | If you are using pgAdmin 4 on macOS or Ubuntu, you can use the system toolbar icon (at the top of the screen) for this. After you start the pgAdmin server, an icon with an elephant head should appear. If you click it, you will have a `Shut down server` option. | You can shut down the server from the top menu as shown in the screenshot (<https://i.stack.imgur.com/qzpud.png>).
Just click "Shut down server" and it will work. |
74,495,864 | I have a huge list of sublists, each sublist consisting of a tuple and an int. Example:
```
[[(1, 1), 46], [(1, 2), 25.0], [(1, 1), 25.0], [(1, 3), 19.5], [(1, 2), 19.5], [(1, 4), 4.5], [(1, 3), 4.5], [(1, 5), 17.5], [(1, 4), 17.5], [(1, 6), 9.5], [(1, 5), 9.5]]
```
I want to create a unique list of those tuples corresponding to the sum of all those integer values using python. For the example above, my desired output looks like this:
```
[[(1, 1), 71], [(1, 2), 44.5], [(1, 3), 24], [(1, 4), 22], [(1, 5), 27], [(1, 6), 9.5]]
```
Could I get some help on how to do this?
I have tried to use dictionaries to solve this problem, but I keep running into errors, as I am not too familiar with how to use them. | 2022/11/18 | [
"https://Stackoverflow.com/questions/74495864",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20543467/"
] | From the helpfile you can read:
>
> If there is a header and the first row contains one fewer field than the number of columns, the first column in the input is used for the row names. Otherwise if **row.names is missing, the rows are numbered**.
>
>
>
That explains the same behavior when you set row.names=NULL or when you use its default value.
You can set row.names as in this example:
```
df <- read.table(text="V1 V2
ENSG00000000003.15 2
ENSG00000000005.6 0
ENSG00000000419.14 21
ENSG00000000457.14 0
ENSG00000000460.17 2
ENSG00000000938.13 0", header=TRUE, row.names=letters[1:6])
```
which displays:
```
V1 V2
a ENSG00000000003.15 2
b ENSG00000000005.6 0
c ENSG00000000419.14 21
d ENSG00000000457.14 0
e ENSG00000000460.17 2
f ENSG00000000938.13 0
``` | The first two executions are functionally the same, when you don't use row.names parameter of read.table, it's assumed that its value is NULL.
The third one fails because `1` is interpreted as being a vector with length equal to the number of rows filled with the value 1. Hence the error affirming you can't have two rows with the same name.
What you're doing with `row.names=1` is equivalent trying to do:
```
test <- read.table(text="X Y
1 2
3 4", header=TRUE)
row.names(test) = c(1,1)
```
It gives the same Error.
If you want to name your rows `R1:RX` why not try something like this:
```
ak1a = read.table("/Users/abhaykanodia/Desktop/smallRNA/AK1a_counts.txt")
row.names(ak1a) = paste("R",1:dim(ak1a)[1],sep="")
``` |
10,104,805 | I have installed python 32 package to the
>
> C:\python32
>
>
>
I have also set the paths:
>
> PYTHONPATH | C:\Python32\Lib;C:\Python32\DLLs;C:\Python32\Lib\lib-tk;
>
>
> PATH ;C:\Python32;
>
>
>
I would like to use the "2to3" tool, but CMD does not recognize it.
```
CMD: c:\test\python> 2to3 test.py
```
Should I add an extra path for "2to3" or something?
Thanks | 2012/04/11 | [
"https://Stackoverflow.com/questions/10104805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318239/"
] | 2to3 is actually a Python script found in the Tools/scripts folder of your Python install.
So you should run it like this:
```
python.exe C:\Python32\Tools\scripts\2to3.py your-script-here.py
```
See this for more details: <http://docs.python.org/library/2to3.html> | You can set up 2to3.py to run as a command when you type 2to3 by creating a batch file in the same directory as your python.exe file (assuming that directory is already on your windows path - it doesn't have to be this directory it just is a convenient, relatively logical spot).
Let's assume you have Python installed in `C:\Python33`. If you aren't sure where your Python installation is, you can find out where Windows thinks it is by typing `where python` from the command line.
You should have `python.exe` in `C:\Python33` and `2to3.py` in `C:\Python33\Tools\Scripts`.
Create a batch file called `2to3.bat` in `C:\Python33\Scripts` and put this line in the batch file
```
@python "%~dp0\..\Tools\Scripts\2to3.py" %*
```
The `%~dp0` is the location of the batch file, in this case `c:\Python33\Scripts` and the `%*` passes all arguments from the command line to the `2to3.py` script. After you've saved the .bat file, you should be able to type `2to3` from the command line and see
```
At least one file or directory argument required.
Use --help to show usage.
```
I have found this technique useful when installing from setup.py, because sometimes the setup script expects 2to3 to be available as a command. |
10,104,805 | I have installed python 32 package to the
>
> C:\python32
>
>
>
I have also set the paths:
>
> PYTHONPATH | C:\Python32\Lib;C:\Python32\DLLs;C:\Python32\Lib\lib-tk;
>
>
> PATH ;C:\Python32;
>
>
>
I would like to use the "2to3" tool, but CMD does not recognize it.
```
CMD: c:\test\python> 2to3 test.py
```
Should I add an extra path for "2to3" or something?
Thanks | 2012/04/11 | [
"https://Stackoverflow.com/questions/10104805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318239/"
] | 2to3 is actually a Python script found in the Tools/scripts folder of your Python install.
So you should run it like this:
```
python.exe C:\Python32\Tools\scripts\2to3.py your-script-here.py
```
See this for more details: <http://docs.python.org/library/2to3.html> | Make a batch file then rename it to `2to3.bat` and paste this code in it:
```
@python "%~dp0\Tools\Scripts\2to3.py" %*
```
Copy that file beside your python.exe file, for me that folder is:
`C:\Users\Admin\AppData\Local\Programs\Python\Python38`
Usage:
```
2to3 mycode.py
``` |
10,104,805 | I have installed python 32 package to the
>
> C:\python32
>
>
>
I have also set the paths:
>
> PYTHONPATH | C:\Python32\Lib;C:\Python32\DLLs;C:\Python32\Lib\lib-tk;
>
>
> PATH ;C:\Python32;
>
>
>
I would like to use the "2to3" tool, but CMD does not recognize it.
```
CMD: c:\test\python> 2to3 test.py
```
Should I add an extra path for "2to3" or something?
Thanks | 2012/04/11 | [
"https://Stackoverflow.com/questions/10104805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1318239/"
] | You can set up 2to3.py to run as a command when you type 2to3 by creating a batch file in the same directory as your python.exe file (assuming that directory is already on your windows path - it doesn't have to be this directory it just is a convenient, relatively logical spot).
Let's assume you have Python installed in `C:\Python33`. If you aren't sure where your Python installation is, you can find out where Windows thinks it is by typing `where python` from the command line.
You should have `python.exe` in `C:\Python33` and `2to3.py` in `C:\Python33\Tools\Scripts`.
Create a batch file called `2to3.bat` in `C:\Python33\Scripts` and put this line in the batch file
```
@python "%~dp0\..\Tools\Scripts\2to3.py" %*
```
The `%~dp0` is the location of the batch file, in this case `c:\Python33\Scripts` and the `%*` passes all arguments from the command line to the `2to3.py` script. After you've saved the .bat file, you should be able to type `2to3` from the command line and see
```
At least one file or directory argument required.
Use --help to show usage.
```
I have found this technique useful when installing from setup.py, because sometimes the setup script expects 2to3 to be available as a command. | Make a batch file then rename it to `2to3.bat` and paste this code in it:
```
@python "%~dp0\Tools\Scripts\2to3.py" %*
```
Copy that file beside your python.exe file, for me that folder is:
`C:\Users\Admin\AppData\Local\Programs\Python\Python38`
Usage:
```
2to3 mycode.py
``` |
8,576,104 | Just for fun, I've been using `python` and `gstreamer` to create simple Linux audio players. The first one was a command-line procedural script that used gst-launch-0.10 playbin to play a webstream. The second version was again procedural but had a GUI and used playbin2 to create the gstreamer pipeline. Now I'm trying to create a fully OOP version.
My first step was to put the gstreamer code in a module of its own and save it as 'player.py':
```
#!/usr/bin/env python
# coding=utf-8
"""player.py"""
import glib, pygst
pygst.require("0.10")
import gst
class Player():
def __init__(self):
self.pipeline = gst.Pipeline("myPipeline")
self.player = gst.element_factory_make("playbin2", "theplayer")
self.pipeline.add(self.player)
self.audiosink = gst.element_factory_make("autoaudiosink", 'audiosink')
self.audiosink.set_property('async-handling', True)
self.player.set_property("uri", "http://sc.grupodial.net:8086")
self.pipeline.set_state(gst.STATE_PLAYING)
if __name__ == "__main__":
Player()
glib.MainLoop().run()
```
(Please note that this is a very simple experimental script that automatically loads and plays a stream. In the final application there will be specific methods of Player to take care of URI/file selection and play/pause/stop reproduction.)
The file was marked as executable and the following command made it run fine, the webstream being loaded and played:
```
$ python player.py
```
However, trying to run it directly (using the shebang directive) returned
```
$ ./player.py
: No such file or directory
```
Anyway, having made it work as a standalone script I wrote the following "main" application code to import the player module and create an instance of Player:
```
#!/usr/bin/env python
# coding=utf-8
"""jukebox3.py"""
import glib
import player
def main():
myplayer = player.Player()
# remove these later:
print myplayer.pipeline
print myplayer.player
print myplayer.audiosink
print myplayer.player.get_property("uri")
print myplayer.pipeline.get_state()
if __name__ == "__main__":
main()
glib.MainLoop().run()
```
Running this main script either through the interpreter or directly produces **no sound at all** though I believe the instance is created because the printing statements output information consistent with playbin2 behavior:
```
/GstPipeline:myPipeline (gst.Pipeline)
/GstPipeline:myPipeline/GstPlayBin2:theplayer (__main__.GstPlayBin2)
/GstAutoAudioSink:audiosink (__main__.GstAutoAudioSink)
http://sc.grupodial.net:8086
(<enum GST_STATE_CHANGE_SUCCESS of type GstStateChangeReturn>, <enum GST_STATE_PLAYING of type GstState>, <enum GST_STATE_VOID_PENDING of type GstState>)
```
BTW, the result is the same using either `glib.MainLoop` or `gtk.main` to create the main loop.
Any suggestions what am I missing? Or, is this scheme possible at all? | 2011/12/20 | [
"https://Stackoverflow.com/questions/8576104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1106979/"
] | If you use this, you will pass the element value as param.
```
javascript:checkStatus('{$k->bus_company_name}','{$k->bus_id}','{$k->bus_time}',document.getElementById('dt').value)
```
But you can also get it inside the checkStatus function. | Since you're looping through a list of items, I would recommend using the current index at each iteration to create a unique date ID. You can then pass this to your script and get the element's value by ID there:
```
{foreach name = feach key = i item = k from = $allBuses}
{$k->bus_company_name}<br />
A/C {$k->bus_is_ac}<br />
Date : <input type="text" name="date" id="dt_{$i}" />yyyy/mm/dd
<a href="javascript:checkStatus('{$k->bus_company_name}','{$k->bus_id}','{$k->bus_time}','dt_{$i}')">Status</a>
{/foreach}
<script>
function checkStatus(name, id, time, date_id){
var date = document.getElementById(date_id);
if(date){
alert(date.value);
// Do something fancy with the date
}
}
</script>
``` |
29,476,054 | I have a list of things I want to filter out of a csv, and I'm trying to figure out a pythonic way to do it. EG, this is what I'm doing:
```
with open('output.csv', 'wb') as outf:
with open('input.csv', 'rbU') as inf:
read = csv.reader(inf)
outwriter = csv.writer(outf)
notstrings = ['and', 'or', '&', 'is', 'a', 'the']
for row in read:
(if none of notstrings in row[3])
outwriter(row)
```
I don't know what to put in the parentheses (or if there's a better overall way to go about this). | 2015/04/06 | [
"https://Stackoverflow.com/questions/29476054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2898989/"
] | You can use the [`any()` function](https://docs.python.org/2/library/functions.html#any) to test each of the words in your list against a column:
```
if not any(w in row[3] for w in notstrings):
# none of the strings are found, write the row
```
This will be true if *none* of those strings appear in `row[3]`. It'll match *substrings*, however; for example, `'a' in 'false-positive'` is true, so a row containing `false-positive` would be filtered out.
Put into context:
```
with open('output.csv', 'wb') as outf:
with open('input.csv', 'rbU') as inf:
read = csv.reader(inf)
outwriter = csv.writer(outf)
notstrings = ['and', 'or', '&', 'is', 'a', 'the']
for row in read:
if not any(w in row[3] for w in notstrings):
outwriter(row)
```
If you need to honour word boundaries then a regular expression is going to be a better idea here:
```
notstrings = re.compile(r'(?:\b(?:and|or|is|a|the)\b)|(?:\B&\B)')
if not notstrings.search(row[3]):
# none of the words are found, write the row
```
I created a [Regex101 demo](https://regex101.com/r/oK1hD2/2) for the expression to demonstrate how it works. It has two branches:
* `\b(?:and|or|is|a|the)\b` - matches any of the words in the list provided they are at the start, end, or between non-word characters (punctuation, whitespace, etc.)
* `\B&\B` - matches the `&` character if at the start, end, or between non-word characters. You can't use `\b` here as `&` is itself not a word character. | You can use sets. In this code, I transform your list into a set. I transform your `row[3]` into a set of words and I check the intersection between the two sets. If there is not intersection, that means none of the words in notstrings are in `row[3]`.
Using sets, you make sure that you match only words and not parts of words.
```
with open('output.csv', 'wb') as outf:
with open('input.csv', 'rbU') as inf:
read = csv.reader(inf)
outwriter = csv.writer(outf)
notstrings = set(['and', 'or', '&', 'is', 'a', 'the'])
for row in read:
if not notstrings.intersection(set(row[3].split(' '))):
outwriter(row)
``` |
62,514,068 | I am trying to develop an AWS Lambda to do a `rollout restart deployment` using the python client. I cannot find any implementation in the github repo or references. Using -v with `kubectl rollout restart` is not giving me enough hints to continue with the development.
Anyways, it is more related to the python client:
<https://github.com/kubernetes-client/python>
Any ideas? perhaps I could be missing something | 2020/06/22 | [
"https://Stackoverflow.com/questions/62514068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13791762/"
] | The python client interacts directly with the Kubernetes API. Similar to what `kubectl` does. However, `kubectl` added some utility commands which contain logic that is not contained in the Kubernetes API. Rollout is one of those utilities.
In this case that means you have two approaches. You could reverse engineer the API calls the [kubectl rollout restart](https://github.com/kubernetes/kubectl/blob/master/pkg/cmd/rollout/rollout_restart.go) makes. Pro tip: With go, you can actually import internal Kubectl behaviour and libraries, making this quite easy. So consider writing your lambda in golang.
Alternatively, you can have your Lambda call the Kubectl binary (using the process exec libraries in python). However, this does mean you need to include the binary in your lambda in some way (either by uploading it with your lambda or by building a lambda layer containing `kubectl`). | @Andre Pires, it can be done like this way :
```
data := fmt.Sprintf(`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":"%s","maxSurge": "%s"}}}`, time.Now().String(), "25%", "25%")
newDeployment, err := clientImpl.ClientSet.AppsV1().Deployments(item.Pod.Namespace).Patch(context.Background(), deployment.Name, types.StrategicMergePatchType, []byte(data), metav1.PatchOptions{FieldManager: "kubectl-rollout"})
``` |
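For completeness, here is a rough sketch of the same idea with the official Python client, since the question asks about it (this is an addition, not part of either answer; the deployment name and namespace are placeholders). Like the Go snippet above, it simply patches the pod template with the `kubectl.kubernetes.io/restartedAt` annotation, which is what `kubectl rollout restart` does under the hood:

```
from datetime import datetime, timezone
from kubernetes import client, config

# Load credentials; in a Lambda you would typically build the configuration
# yourself (e.g. from EKS/IAM). A local kubeconfig is used here for illustration.
config.load_kube_config()
apps = client.AppsV1Api()

body = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/restartedAt": datetime.now(timezone.utc).isoformat()
                }
            }
        }
    }
}

# Placeholder deployment name and namespace.
apps.patch_namespaced_deployment(name="my-deployment", namespace="default", body=body)
```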
51,314,875 | Seems fairly straightforward, but whenever I try to merely import the module I get this:
```
from pptx.util import Inches
from pptx import Presentation
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\parts\image.py in <module>()
12 try:
---> 13 from PIL import Image as PIL_Image
14 except ImportError:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\PIL\Image.py in <module>()
59 # and should be considered private and subject to change.
---> 60 from . import _imaging as core
61 if PILLOW_VERSION != getattr(core, 'PILLOW_VERSION', None):
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-82a968e5e132> in <module>()
----> 1 from pptx.util import Inches
2 from pptx import Presentation
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\__init__.py in <module>()
11 del sys
12
---> 13 from pptx.api import Presentation # noqa
14
15 from pptx.opc.constants import CONTENT_TYPE as CT # noqa: E402
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\api.py in <module>()
15
16 from .opc.constants import CONTENT_TYPE as CT
---> 17 from .package import Package
18
19
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\package.py in <module>()
14 from .opc.packuri import PackURI
15 from .parts.coreprops import CorePropertiesPart
---> 16 from .parts.image import Image, ImagePart
17 from .parts.media import MediaPart
18 from .util import lazyproperty
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pptx\parts\image.py in <module>()
13 from PIL import Image as PIL_Image
14 except ImportError:
---> 15 import Image as PIL_Image
16
17 from ..compat import BytesIO, is_string
ModuleNotFoundError: No module named 'Image'
```
Can anyone help me to overcome this error, or possibly show me a better library to accomplish this? I'm more than happy to provide any info that would help someone to help me debug this.
I know very little about the modules. Aside from using the Anaconda prompt, I know nothing. | 2018/07/12 | [
"https://Stackoverflow.com/questions/51314875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9459261/"
] | I've finally figured it out by creating a small app and trying to reproduce it. As Dmitry and Paulo have pointed out, it should work. However, it should work for any new project and in my case the project is 10 years old and has lots of legacy configurations.
**TL;DR:** The `async`/`await` keywords do not work very well (the `HttpContext.Current` will be null after calling `await`) if this setting is **not** present in the web.config:
```
<httpRuntime targetFramework="4.6.1" />
```
That is a shortcut for a bunch of settings, including this one (which is the one I care here):
```
<configuration>
<appSettings>
<add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
</appSettings>
</configuration>
```
Everything is explained in detail here: <https://blogs.msdn.microsoft.com/webdev/2012/11/19/all-about-httpruntime-targetframework/>
For reference, it says:
>
> **<add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />**
>
>
> Enables the new await-friendly asynchronous pipeline that was
> introduced in 4.5. Many of our synchronization primitives in earlier
> versions of ASP.NET had bad behaviors, such as taking locks on public
> objects or violating API contracts. In fact, ASP.NET 4’s
> implementation of SynchronizationContext.Post is a blocking
> synchronous call! The new asynchronous pipeline strives to be more
> efficient while also following the expected contracts for its APIs.
> The new pipeline also performs a small amount of error checking on
> behalf of the developer, such as detecting unanticipated calls to
> async void methods.
>
>
> Certain features like WebSockets require that this switch be set.
> Importantly, the behavior of async / await is undefined in ASP.NET
> unless this switch has been set. (Remember: setting `<httpRuntime
> targetFramework="4.5" />` is also sufficient.)
>
>
>
If that settings is not present at all, then version 4.0 is assumed and it works in 'quirks'-mode:
>
> If there is no <httpRuntime targetFramework> attribute present in Web.config, we assume that the application wanted 4.0 quirks behavior.
>
>
> | For retrieving files in `ASP.NET Core` try using [`IFileProvider`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.fileproviders.ifileprovider) instead of `HttpContext` - see [File Providers in ASP.NET Core](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/file-providers) documentation for more details about configuring and injecting it via `DI`.
If that is the `POST` controller action to upload multiple files and receive other data - you can do it this way. **Below for demo purposes I use `View` but data can just go from anywhere as API POST request**.
**View**
```
@model MyNamespace.Models.UploadModel
<form asp-controller="MyController" asp-action="Upload" enctype="multipart/form-data" method="post">
<input asp-for="OtherProperty">
<input name="Files" multiple type="file">
<button type="submit" class="btn btn-success">Upload</button>
</form>
```
**Model** - note that files are passed as [`IFormFile`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.iformfile) objects
```
public class UploadModel
{
public List<IFormFile> Files { get; set; }
public string OtherProperty { get; set; }
}
```
**Controller**
```
[HttpGet]
public IActionResult Upload()
{
return View(new UploadModel());
}
[HttpPost]
public async Task<IActionResult> Index(UploadModel model)
{
var otherProperty = model.OtherProperty;
var files = new Dictionary<string, string>();
foreach (IFormFile file in model.Files)
{
using (var reader = new StreamReader(file.OpenReadStream()))
{
string content = await reader.ReadToEndAsync();
files.Add(file.Name, content);
// Available file properties:
// file.FileName
// file.ContentDisposition
// file.ContentType
// file.Headers
// file.Length
// file.Name
// You can copy file to other stream if needed:
// file.CopyTo(new MemoryStream()...);
}
}
}
``` |
35,796,968 | I have a python GUI application. And now I need to know what all libraries the application links to. So that I can check the license compatibility of all the libraries.
I have tried using strace, but strace seems to report all the packages even if they are not used by the application.
And, I tried python ModuleFinder but it just returns the modules that are inside python2.7 and not system level packages that are linked.
So is there any way I can get all the libraries that are linked from my application? | 2016/03/04 | [
"https://Stackoverflow.com/questions/35796968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2109788/"
] | You can give a try to the library
<https://github.com/bndr/pipreqs>
found following the guide
<https://www.fullstackpython.com/application-dependencies.html>
---
The library `pipreqs` is pip installable and automatically generates the file `requirements.txt`.
It contains all the imported libraries, with their versions, that you are using in the virtualenv or in your system Python installation.
Just type:
```
pip install pipreqs
pipreqs /home/project/location
```
It will print:
```
INFO: Successfully saved requirements file in /home/project/location/requirements.txt
```
In addition it is compatible with the *pip install -r* command: if you need to create a venv of your project, or update your current python version with compatible libraries, you just need to type:
```
pip install -r requirements.txt
```
I had the same problem and this library solved it for me. Not sure if it works for multiple layers of dependencies i.e. in case you have nested level of dependent libraries.
-- Edit 1:
If you are looking for a more sophisticated **version manager**, please also consider pyenv <https://github.com/pyenv/pyenv>. It wraps `virtualenv`, producing some improvements over the version specification that is created by `pipreqs`.
-- Edit 2:
If, after creating the file with the dependency libraries of your module with `pipreqs`, you want to pin the whole dependency tree, take a look at `pip-compile`. It figures out the dependencies of your top-level libraries and pins them in a new requirements file, indicating the dependency tree.
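A minimal usage sketch of `pip-compile` (it is shipped by the `pip-tools` package; the `requirements.in` file name is the tool's convention, not something from the original question):
```
pip install pip-tools
# put only your top-level dependencies in requirements.in, then:
pip-compile requirements.in
# this writes a fully pinned requirements.txt annotated with the dependency tree
pip install -r requirements.txt
```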
-- Edit 3:
If you want to split your dependency tree into different files (e.g. base, test, dev, docs) and have a way of managing the dependency tree, please take a look at `pip-compile-multi`. | Install yolk for python2 with:
```
pip install yolk
```
Or install yolk for python3 with:
```
pip install yolk3k
```
Call the following to get the list of eggs in your environment:
```
yolk -l
```
Alternatively, you can use [snakefood](http://furius.ca/snakefood/) for graphing your dependencies, as answered in [this question](https://stackoverflow.com/questions/508277/is-there-a-good-dependency-analysis-tool-for-python).
You could try going into the site-packages folder where the unpacked eggs are stored, and running this:
```
ls -l */LICENSE*
```
That will give you a list of the licence files for each project (if they're stored in the root of the egg, which they usually are). |
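Since the original goal is checking license compatibility, one more option worth mentioning (an addition, not from the answers above): on Python 3.8+ the standard library can list installed distributions together with the license declared in their metadata:

```
from importlib import metadata

# Print each installed distribution with the license string from its metadata,
# falling back to UNKNOWN when the field is missing.
for dist in metadata.distributions():
    name = dist.metadata["Name"]
    license_info = dist.metadata.get("License", "UNKNOWN")
    print(f"{name}: {license_info}")
```

Note that this lists everything installed in the environment, not only what your application actually imports, so it pairs well with a tool like `pipreqs` from the other answer.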
35,796,968 | I have a python GUI application. And now I need to know what all libraries the application links to. So that I can check the license compatibility of all the libraries.
I have tried using strace, but strace seems to report all the packages even if they are not used by the application.
And, I tried python ModuleFinder but it just returns the modules that are inside python2.7 and not system level packages that are linked.
So is there any way I can get all the libraries that are linked from my application? | 2016/03/04 | [
"https://Stackoverflow.com/questions/35796968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2109788/"
] | Install yolk for python2 with:
```
pip install yolk
```
Or install yolk for python3 with:
```
pip install yolk3k
```
Call the following to get the list of eggs in your environment:
```
yolk -l
```
Alternatively, you can use [snakefood](http://furius.ca/snakefood/) for graphing your dependencies, as answered in [this question](https://stackoverflow.com/questions/508277/is-there-a-good-dependency-analysis-tool-for-python).
You could try going into the site-packages folder where the unpacked eggs are stored, and running this:
```
ls -l */LICENSE*
```
That will give you a list of the licence files for each project (if they're stored in the root of the egg, which they usually are). | To get all the installed packages or modules, a very easy way is to go to your virtual environment directory in the terminal (the one where (venv) appears in front of your prompt) and run one of these commands:
`pip freeze > requirements.txt`
If you are using python3
`pip3 freeze > requirements.txt`
This would get all your installed packages from your virtual environment's library of packages used during the project, and store them in the 'requirements.txt' file that would automatically be created upon running the command. |
35,796,968 | I have a python GUI application. And now I need to know what all libraries the application links to. So that I can check the license compatibility of all the libraries.
I have tried using strace, but strace seems to report all the packages even if they are not used by the application.
And, I tried python ModuleFinder but it just returns the modules that are inside python2.7 and not system level packages that are linked.
So is there any way I can get all the libraries that are linked from my application? | 2016/03/04 | [
"https://Stackoverflow.com/questions/35796968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2109788/"
] | You can give a try to the library
<https://github.com/bndr/pipreqs>
found following the guide
<https://www.fullstackpython.com/application-dependencies.html>
---
The library `pipreqs` is pip installable and automatically generates the file `requirements.txt`.
It contains all the imported libraries, with their versions, that you are using in the virtualenv or in your system Python installation.
Just type:
```
pip install pipreqs
pipreqs /home/project/location
```
It will print:
```
INFO: Successfully saved requirements file in /home/project/location/requirements.txt
```
In addition it is compatible with the *pip install -r* command: if you need to create a venv of your project, or update your current python version with compatible libraries, you just need to type:
```
pip install -r requirements.txt
```
I had the same problem and this library solved it for me. Not sure if it works for multiple layers of dependencies i.e. in case you have nested level of dependent libraries.
-- Edit 1:
If you are looking for a more sophisticated **version manager**, please also consider pyenv <https://github.com/pyenv/pyenv>. It wraps `virtualenv`, producing some improvements over the version specification that is created by `pipreqs`.
-- Edit 2:
If, after creating the file with the dependency libraries of your module with `pipreqs`, you want to pin the whole dependency tree, take a look at `pip-compile`. It figures out the dependencies of your top-level libraries and pins them in a new requirements file, indicating the dependency tree.
-- Edit 3:
If you want to split your dependency tree into different files (e.g. base, test, dev, docs) and have a way of managing the dependency tree, please take a look at `pip-compile-multi`. | To get all the installed packages or modules, a very easy way is to go to your virtual environment directory in the terminal (the one where (venv) appears in front of your prompt) and run one of these commands:
`pip freeze > requirements.txt`
If you are using python3
`pip3 freeze > requirements.txt`
This would get all your installed packages from your virtual environment's library of packages used during the project, and store them in the 'requirements.txt' file that would automatically be created upon running the command. |
36,075,407 | I'm developing a Python Flask app.
I have a problem with MySQLdb.
If I type 'import MySQLdb' in the Python console,
it shows "ImportError: No module named 'MySQLdb'".
On my computer MySQL-python is installed and the app is running on <http://127.0.0.1:5000/>
How can I solve this problem? | 2016/03/18 | [
"https://Stackoverflow.com/questions/36075407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5736099/"
] | If you are using Python **2.x**, one of the following command will install `mysqldb` on your machine:
```
pip install mysql-python
```
or
```
easy_install mysql-python
``` | **for python 3.x install**
pip install mysqlclient |
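Once one of these packages is installed, the import from the question should work. A quick sanity check, with placeholder credentials (an addition, not part of either answer):
```
import MySQLdb  # provided by mysql-python on Python 2 and by mysqlclient on Python 3

# Placeholder connection settings - replace with your own.
conn = MySQLdb.connect(host="127.0.0.1", user="root", passwd="secret", db="test")
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()
```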
36,075,407 | I'm developing a Python Flask app.
I have a problem with MySQLdb.
If I type 'import MySQLdb' in the Python console,
it shows "ImportError: No module named 'MySQLdb'".
On my computer MySQL-python is installed and the app is running on <http://127.0.0.1:5000/>
How can I solve this problem? | 2016/03/18 | [
"https://Stackoverflow.com/questions/36075407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5736099/"
] | Please follow these steps to get mysql support in your Flask app.
* Install the dev package for mysql depending on your Linux distro
* Make sure you have virtualenv installed and activated for your Flask app
* Install the mysqlclient package by using `pip install mysqlclient`
All of the above steps are independent of Python2 or Python3. | **for python 3.x install**
pip install mysqlclient |
37,691,320 | I'm very new to `c` and am trying to make a `while` loop that checks whether the parameter is less than or equal to a certain number but also greater than or equal to a different number. I usually code in `python`, and this is an example of what I'm looking to do in `c`:
`while(8 <= x <= 600)` | 2016/06/08 | [
"https://Stackoverflow.com/questions/37691320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5355216/"
] | ```
while (x >= 8 && x <= 600){
}
``` | The relational and equality operators (`<`, `<=`, `>`, `>=`, `==`, and `!=`) don't work like that in C. The expression `a <= b` will evaluate to 1 if the condition is true, 0 otherwise. The operator is *left-associative*, so `8 <= x <= 600` will be evaluated as `(8 <= x) <= 600`. `8 <= x` will evaluate to 0 or 1, both of which are less than 600, so the result of the expression is always 1 (true).
To check if `x` falls within a range of values, you have to do two separate comparisons: `8 <= x && x <= 600` (or `8 > x || x > 600`) |
37,691,320 | I'm very new to `c` and am trying to make a `while` loop that checks whether the parameter is less than or equal to a certain number but also greater than or equal to a different number. I usually code in `python`, and this is an example of what I'm looking to do in `c`:
`while(8 <= x <= 600)` | 2016/06/08 | [
"https://Stackoverflow.com/questions/37691320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5355216/"
] | ```
while (x >= 8 && x <= 600){
}
``` | This means: if x >= 8, then `8 <= x` evaluates to 1 and the whole expression becomes `1 <= 600` (always true);
if not, it evaluates to 0 and the expression becomes `0 <= 600`, which is also true, so the condition is always true. |
37,691,320 | I'm very new to `c` and am trying to make a `while` loop that checks whether the parameter is less than or equal to a certain number but also greater than or equal to a different number. I usually code in `python`, and this is an example of what I'm looking to do in `c`:
`while(8 <= x <= 600)` | 2016/06/08 | [
"https://Stackoverflow.com/questions/37691320",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5355216/"
] | The relational and equality operators (`<`, `<=`, `>`, `>=`, `==`, and `!=`) don't work like that in C. The expression `a <= b` will evaluate to 1 if the condition is true, 0 otherwise. The operator is *left-associative*, so `8 <= x <= 600` will be evaluated as `(8 <= x) <= 600`. `8 <= x` will evaluate to 0 or 1, both of which are less than 600, so the result of the expression is always 1 (true).
To check if `x` falls within a range of values, you have to do two separate comparisons: `8 <= x && x <= 600` (or `8 > x || x > 600`) | This means: if x >= 8, then `8 <= x` evaluates to 1 and the whole expression becomes `1 <= 600` (always true);
if not, it evaluates to 0 and the expression becomes `0 <= 600`, which is also true, so the condition is always true. |
69,090,032 | Using Python.
I have two data frames
df1:
```
email timezone country_app_web
0 nhvfstdfg@vxc.com Europe/Paris NaN
1 taifoor096@gmail.com NaN FR
2 nivo1996@gmail.com US/Eastern NaN
3 jorgehersan90@gmail.com NaN UK
4 syeager2@cox.net NaN NaN
```
df2:
```
email country
0 008023@abpat.qld.edu.au AU
1 0081634947@fanaticsgsiorder.com AU
2 008farhan05@gmail.com ID
3 00bronzy@gmail.com AU
4 00monstar@gmail.com AU
```
I want to check this using Python and add a country column to df1.
Problem 1: if an email in df1 is present in df2, return the value of the "country" column in df2 for that matched email into df1.
Problem 2: for the remaining unmatched emails, check if country\_app\_web in df1 has a value for that email; if yes, put the country\_app\_web value into the country column of df1.
Problem 3: similarly, for the emails still unmatched after problem 2, check if the timezone in df1 has a value for that email; if yes, put the timezone value into the country column of df1. | 2021/09/07 | [
"https://Stackoverflow.com/questions/69090032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16677735/"
] | if you want to remove all object in `products`
use this
```
db.collection.update({},
{
$set: {
products: {}
}
})
```
<https://mongoplayground.net/p/aBSnpRhblxt>
if you want to delete specific key (gCx5qSTLvdWeel8E2Yo7m) from product use this
```
db.collection.update({},
{
$unset: {
"products.gCx5qSTLvdWeel8E2Yo7m": undefined
}
})
```
<https://mongoplayground.net/p/z6xRyh3oJrs> | Thank you for your answer Mohammad but I think this works for MongoDB, but in mongoose, we need to set the value as 1 to remove the item with unset.
Here is my working example
```js
const { ids } = req.body;
try {
const order = await Order.findById(req.params.id).populate('user', 'name').exec();
if (!order) {
return res.status(404).json({ errors: [{ msg: 'Vous ne pouvez pas fermer une commande déjà fermée' }] });
}
console.log(ids);
const un: {
[key:string]: number,
} = {};
if (ids) {
for (let i = 0; i < ids.length; i += 1) {
const e = ids[i];
un[`products.${e}`] = 1;
}
}
console.log(un);
const changedOrder = await Order.updateOne({ id: req.params.id }, {
$unset: un,
}, { new: true }).populate('user', 'name');
console.log(changedOrder);
res.json(changedOrder);
} catch (err) {
console.log(err);
res.status(500).json({ errors: [{ msg: 'Server Error' }] });
}
``` |
61,746,984 | I have a script which has been simplified to provide me with a sequence of numbers.
I have run this under windows 10, using both Python3.6 and Python3.8
If the script is run with the line the line : pal\_gen.send(10 \*\* (digits)) commented out, I get what I expected. But I want to change the sequence when num % 10 = 0.
The script:
```
def infinite_pal():
num = 0
while True:
#print(f"num= {str(num)}")
if num % 10 ==0:
#if num==20: print(f"Have num of {str(num)}")
i = (yield num)
#if num==20: print(i)
if i is not None:
num = i
#print(f"i = {str(i)} num= {str(num)}")
num += 1
if num==112: break
pal_gen = infinite_pal()
for i in pal_gen:
print(i)
digits = len(str(i))
#print(f"result = {str(10 ** (digits))}")
pal_gen.send(10 ** (digits))
```
gives 0, 30
I would have expected: 0, 10, 20, 20, 20 etc.
When num has the value of 20, the yield expression appears to be called, but the value 20 is never sent to the calling for i in pal\_gen loop. The num value does get upto 30 and is yielded. 30 should not appear.
Have I totally misunderstood the effect of the .send
Many thanks. I can do this another way but I am puzzled why the above does not work.
From an earlier question, [python generator yield statement not yield](https://stackoverflow.com/questions/59327603/python-generator-yield-statement-not-yield), I tried - but it still does not give what I would expect:
```
def infinite_pal():
num = 0
while True:
if num % 10 ==0:
#if num==20: print(f"Have num of {str(num)}")
i = (yield num)
#if num==20: print(i)
if i is not None:
num = i
#print(f"i = {str(i)} num= {str(num)}")
num += 1
pal_gen = infinite_pal()
i = pal_gen.send(None)
while True:
print(i)
digits = len(str(i))
#print(f"result = {str(10 ** (digits))}")
i=pal_gen.send(10 ** (digits))
if i>200: break
``` | 2020/05/12 | [
"https://Stackoverflow.com/questions/61746984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5467308/"
] | I don't know why you expect result `0, 10, 20, 20, 20` if you send `10`, `100`, `1000`, `10000`
In second version you have to send
```
i = pal_gen.send(10*(digits-1))
```
but it will gives endless `20` so if you expect other values then it will need totally different code.
---
```
def infinite_pal():
num = 0
while True:
if num % 10 ==0:
i = yield num
if i is not None:
num = i
num += 1
pal_gen = infinite_pal()
i = pal_gen.send(None)
while True:
print(i)
digits = len(str(i))
i = pal_gen.send(10*(digits-1))
## `i` never will be bigger then `20` so next lines are useless
#if i > 200:
# break
``` | Many thanks for the above comments. In case anyone else is new to generators in Python, I make the following comments. The first example came from a web site (2 sites in fact) that supposedly explained Python generators. I appreciate there was an error in the .send parameter, but my real concern was why the first approach did not work. I made the comment:
"When num has the value of 20 in the generator, the yield expression appears to be called, but the value 20 is never sent to the calling for i in pal\_gen loop", ie print(i) never displayed 20.
I know that the generator yielded 20 because when I uncommented the line in the generator:
```
#if num==20: print(f"Have num of {str(num)}")
```
20 was displayed.
At the time I did not realise that .send also gets the yielded values, so the variable i in print(i) in the for loop only received every second yielded value.
The second example solved this problem although the calculation for .send parameter was incorrect. |
45,851,791 | I am running the docker image for snappydata v0.9. From inside that image, I can run queries against the database. However, I cannot do so from a second server on my machine.
I copied the python files from snappydata to the installed pyspark (editing snappysession to SnappySession in the imports) and (based on the answer to [Unable to connect to snappydata store with spark-shell command](https://stackoverflow.com/questions/38921733/unable-to-connect-to-snappydata-store-with-spark-shell-command/38926794#38926794)), I wrote the following script (it is a bit of cargo-cult programming as I was copying from the python code in the docker image -- suggestions to improve it are welcome):
```
import pyspark
from pyspark.context import SparkContext
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.snappy import SnappyContext
from pyspark.storagelevel import StorageLevel
SparkContext._ensure_initialized()
spark = SparkSession.builder.appName("test") \
.master("local[*]") \
.config("snappydata.store.locators", "localhost:10034") \
.getOrCreate()
spark.sql("SELECT col1, min(col2) from TABLE1")
```
However, I get a traceback with:
```
pyspark.sql.utils.AnalysisException: u'Table or view not found: TABLE1
```
I have verified with wireshark that my program is communicating with the docker image (TCP follow stream shows the traceback message and a scala traceback). My assumption is that the permissions in the snappydata cluster is set wrong, but grepping through the logs and configuration did not show anything obvious.
How can I proceed?
-------- Edit 1 ------------
The new code that I am running (still getting the same error), incorporating the suggestions for the change in the config and ensuring that I get a SnappySession is:
```
from pyspark.sql.snappy import SnappySession
snappy = SnappySession.builder.appName("test") \
.master("local[*]") \
.config("spark.snappydata.connection", "localhost:1527") \
.getOrCreate()
snappy.sql("SELECT col1, min(col2) from TABLE1")
``` | 2017/08/24 | [
"https://Stackoverflow.com/questions/45851791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/767565/"
] | Try this:
```
from random import randint
print( "You rolled " + ",".join(str(randint(1,6)) for j in range(6)) )
``` | If you're using python 3, which it appears you are, you could very simply print like that printing "you rolled" and then the numbers one at a time with the print argument 'end' set to a blank string
```
print("You rolled ", end='')
for i in range(6):
print(str(random.randint(1,6)), end='')
if i < 5:
print(", ", end='')
``` |
45,851,791 | I am running the docker image for snappydata v0.9. From inside that image, I can run queries against the database. However, I cannot do so from a second server on my machine.
I copied the python files from snappydata to the installed pyspark (editing snappysession to SnappySession in the imports) and (based on the answer to [Unable to connect to snappydata store with spark-shell command](https://stackoverflow.com/questions/38921733/unable-to-connect-to-snappydata-store-with-spark-shell-command/38926794#38926794)), I wrote the following script (it is a bit of cargo-cult programming as I was copying from the python code in the docker image -- suggestions to improve it are welcome):
```
import pyspark
from pyspark.context import SparkContext
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.snappy import SnappyContext
from pyspark.storagelevel import StorageLevel
SparkContext._ensure_initialized()
spark = SparkSession.builder.appName("test") \
.master("local[*]") \
.config("snappydata.store.locators", "localhost:10034") \
.getOrCreate()
spark.sql("SELECT col1, min(col2) from TABLE1")
```
However, I get a traceback with:
```
pyspark.sql.utils.AnalysisException: u'Table or view not found: TABLE1
```
I have verified with wireshark that my program is communicating with the docker image (TCP follow stream shows the traceback message and a scala traceback). My assumption is that the permissions in the snappydata cluster is set wrong, but grepping through the logs and configuration did not show anything obvious.
How can I proceed?
-------- Edit 1 ------------
The new code that I am running (still getting the same error), incorporating the suggestions for the change in the config and ensuring that I get a SnappySession is:
```
from pyspark.sql.snappy import SnappySession
snappy = SnappySession.builder.appName("test") \
.master("local[*]") \
.config("spark.snappydata.connection", "localhost:1527") \
.getOrCreate()
snappy.sql("SELECT col1, min(col2) from TABLE1")
``` | 2017/08/24 | [
"https://Stackoverflow.com/questions/45851791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/767565/"
] | Try this:
```
from random import randint
print( "You rolled " + ",".join(str(randint(1,6)) for j in range(6)) )
``` | You want a comma-separated list of numbers, but you are only generating one number at a time.
You say:
```
for i in range(6):
roll1 =int(random.randint(1,6))
print ("You rolled",roll1)
```
>
> I need it to print out like
>
>
>
```
you rolled 3,4,5,6,2
```
First, let's try working with what you already have. Python's built-in [`print()`](https://docs.python.org/3/library/functions.html#print) function can do a lot of cool things - please do read the documentation.
First, you can use a "keyword argument" to override what `print()` outputs when it finishes printing. Normally, at the end `print()` emits a newline, but you can suppress that, or replace it:
```
print("you rolled ", end='') # Print empty string (nothing) at end of line
```
Or, since you want a space between the words and the numbers, you could use the space as the end-of-line marker:
```
print("you rolled", end=' ') # Note: no space at end of "rolled"
```
In either of the above cases, there **will not** be a newline printed, so whatever you put in your *next* call to `print()` will pick up at the end of the text.
Now, in your loop you generate a random number and print it. You can do that just fine. It's worth pointing out that `randint` returns an integer. It actually has the word "int" as part of the name. So you really don't need to call `int(randint`.
```
print("you rolled ", end='')
for i in range(6):
roll1 = random.randint(1,6)
print(roll1, end='')
```
Now, that's not going to do what you want, because it will jam all the numbers together like "you rolled 123456". So there's a tricky problem of how to separate the numbers with commas:
```
for i in range(6):
roll1 = random.randint(1,6)
print(",", roll1, end='')
```
That won't quite work, because `print()` puts a space between every two things it prints. You can override that with the `sep=` named argument.
```
print(",", roll1, sep='', end='')
```
Even this won't quite work, since it *always* puts a comma in front of the number. That is fine, except for the very first number, but you wind up with something like "you rolled ,1,2,3,4,5,6".
Instead, you need some logic to suppress the comma.
```
for i in range(6):
roll1 = random.randint(1,6)
if i == 0:
print(roll1, end='')
else:
print(",", roll1, sep='', end='')
print()
```
Or you need to handle one case outside the loop:
```
roll1 = random.randint(1, 6)
print("you rolled ", roll1, end='')
for i in range(5): # note: 5, not 6
roll1 = random.randint(1, 6)
print(",", roll1, sep='', end='')
print()
```
In general, suppressing the comma with an `if` statement is "cleaner" but pulling that outside the loop will be faster, since the `if` statement gets executed every time. This won't matter for 6 items, but it does matter if you have, say, 6 billion items.
Finally, it's worth pointing out that in this scenario, you are *forgetting* the numbers rolled as soon as you roll them. Frequently, that won't work. You might consider just "unrolling the loop" and storing 6 values in 6 variables. (I don't know if this is useful, since I don't know what problem you are trying to solve.)
In role-playing games, the dice used are frequently described as letter 'd' plus a number indicating the number of sides. A die capable of returning a number from 1-6 is a six-sided die (#sides ranges from 4 to 100). Thus, you are rolling a "d6", pronounced "dee six". Your for loop causes six such rolls, which is expressed as 6d6, "six dee six". Let's write a little function to avoid having to type so much, called `d6` because that meets the requirements for Python identifiers:
```
def d6():
'''Return the result of rolling 1d6'''
return random.randint(1, 6)
```
With that written, you can just say:
```
roll1 = d6()
roll2 = d6()
roll3 = d6()
roll4 = d6()
roll5 = d6()
roll6 = d6()
print("you rolled ", end='')
print(roll1, roll2, roll3, roll4, roll5, roll6, sep=',')
``` |
10,656,147 | I figured out how to run my Django application via `sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80`. However, after I quit terminal, the server stops running.
I tried to run this process in the background, but the server just shuts down quickly after I execute `sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80 &`.
How do I keep my Django application running even after I quit my ssh session in terminal?
PS - Sorry if this question strikes you as elementary. Such silliness ensues when a front-end javascript programmer must turn into a server administrator at break-neck speed. | 2012/05/18 | [
"https://Stackoverflow.com/questions/10656147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/795319/"
] | Meet [screen](http://www.gnu.org/software/screen/).
Connect through ssh, start screen. This opens a virtual console emulator on top of the one provided by ssh. Start your server there.
Then press Ctrl-a, then d. This detaches the screen session, keeping it running in the background.
To [R]e-attach to it, use screen -r.
If screen is not installed and you can't install it, you can also start an application in the background by adding a & to the command, as you tried. But you should not close the terminal window then ; just disconnect, with the bash command exit, or Ctrl-d.
The advantage of screen is that you can still read the output from the server, in case there is an error or anything.
Screen is a really powerful tool, with many more commands. You can add a new virtual window with Ctrl-a, then c (for Create) ; switch through windows with Ctrl-a, then n (next) or p (previous), ...
But you need it to be installed to use it. Since you seem to have root access, this shouldn't be a problem.
EDIT: [tmux](https://tmux.github.io/) is another great solution for the same use-case. | Use `screen` to create a new virtual window, and run the server there.
```
$ screen
$ python manage.py runserver
```
You will see that Django server has started running.
Now press `Ctrl+A` and then press the `D` key to detach from that screen. It will say:
```
$ [detached from ###.pts-0.hostname]
```
You can now safely logout from your terminal, log back in to your terminal, do other bits of coding in other directories, go for a vacation, do whatever you want.
---
To return to the screen that you have detached from,
```
$ screen -r
```
To kill the django server now, simply press `Ctrl+C` like you would've done normally.
---
To `terminate` this current screen instead of `detaching` from this screen, use `Ctrl+D`. It will say:
```
$ [screen is terminating]
$
``` |
10,656,147 | I figured out how to run my Django application via `sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80`. However, after I quit terminal, the server stops running.
I tried to run this process in the background, but the server just shuts down quickly after I execute `sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80 &`.
How do I keep my Django application running even after I quit my ssh session in terminal?
PS - Sorry if this question strikes you as elementary. Such silliness ensues when a front-end javascript programmer must turn into a server administrator at break-neck speed. | 2012/05/18 | [
"https://Stackoverflow.com/questions/10656147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/795319/"
] | Meet [screen](http://www.gnu.org/software/screen/).
Connect through ssh, start screen. This opens a virtual console emulator on top of the one provided by ssh. Start your server there.
Then press Ctrl-a, then d. This detaches the screen session, keeping it running in the background.
To [R]e-attach to it, use screen -r.
If screen is not installed and you can't install it, you can also start an application in the background by adding a & to the command, as you tried. But you should not close the terminal window then ; just disconnect, with the bash command exit, or Ctrl-d.
The advantage of screen is that you can still read the output from the server, in case there is an error or anything.
Screen is a really powerful tool, with many more commands. You can add a new virtual window with Ctrl-a, then c (for Create) ; switch through windows with Ctrl-a, then n (next) or p (previous), ...
But you need it to be installed to use it. Since you seem to have root access, this shouldn't be a problem.
EDIT: [tmux](https://tmux.github.io/) is another great solution for the same use-case. | Use nohup. Change your command as follows:
```
nohup sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80 &
``` |
10,656,147 | I figured out how to run my Django application via `sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80`. However, after I quit terminal, the server stops running.
I tried to run this process in the background, but the server just shuts down quickly after I execute `sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80 &`.
How do I keep my Django application running even after I quit my ssh session in terminal?
PS - Sorry if this question strikes you as elementary. Such silliness ensues when a front-end javascript programmer must turn into a server administrator at break-neck speed. | 2012/05/18 | [
"https://Stackoverflow.com/questions/10656147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/795319/"
] | Use `screen` to create a new virtual window, and run the server there.
```
$ screen
$ python manage.py runserver
```
You will see that Django server has started running.
Now press `Ctrl+A` and then press the `D` key to detach from that screen. It will say:
```
$ [detached from ###.pts-0.hostname]
```
You can now safely logout from your terminal, log back in to your terminal, do other bits of coding in other directories, go for a vacation, do whatever you want.
---
To return to the screen that you have detached from,
```
$ screen -r
```
To kill the django server now, simply press `Ctrl+C` like you would've done normally.
---
To `terminate` this current screen instead of `detaching` from this screen, use `Ctrl+D`. It will say:
```
$ [screen is terminating]
$
``` | Use nohup. Change your command as follows:
```
nohup sudo python /home/david/myproject/manage.py runserver 68.164.125.221:80 &
``` |
34,086,062 | Today I updated Elasticsearch from 1.6 to 2.1, because 1.6 is a vulnerable version. After this update my website is not working and gives this error:
```
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from app import app, db
File "/opt/project/app/__init__.py", line 30, in <module>
es.create_index(app.config['ELASTICSEARCH_INDEX'])
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 93, in decorate
return func(*args, query_params=query_params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 1033, in create_index
query_params=query_params)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 285, in send_request
self._raise_exception(status, error_message)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 299, in _raise_exception
raise error_class(status, error_message)
pyelasticsearch.exceptions.ElasticHttpError: (400, u'index_already_exists_exception')
make: *** [run] Error 1
```
the code is this :
```
redis = Redis()
es = ElasticSearch(app.config['ELASTICSEARCH_URI'])
try:
es.create_index(app.config['ELASTICSEARCH_INDEX'])
except IndexAlreadyExistsError, e:
pass
```
Where am I going wrong? What is new in this version? | 2015/12/04 | [
"https://Stackoverflow.com/questions/34086062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5544303/"
] | `jeuResultats.next();` moves your result to the next row. You start with 0th row, i.e. when you call `.next()` it reads the first row, then when you call it again, it tries to read the 2nd row, which does not exist.
*Some additional hints, not directly related to the question:*
1. Java Docs are a good place to start, e.g. [Java 8 ResultSet](http://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html); perhaps the `ResultSet.first()` method is better suited for your use.
2. Since you are working with resources, take a look at try-with-resources syntax. [Official tutorials](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) are a good starting point for that.
3. Also take a look at prepared statement vs Statement. Again, the [official guide](https://docs.oracle.com/javase/tutorial/jdbc/basics/prepared.html) is a good place to start. | Make the below changes in your code. Currently next() is called twice, so the second call tries to read a second row that does not exist:
```
boolean result = false;
try{
result = jeuResultats.next();
} catch (SQLException e) {
e.printStackTrace();
}
if (!result) {
loadJSP("/index.jsp", request, reponse);
}else {
loadJSP("/views/menu.jsp", request, reponse);
}
``` |
34,086,062 | Today I updated Elasticsearch from 1.6 to 2.1, because 1.6 is a vulnerable version. After this update my website is not working and gives this error:
```
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from app import app, db
File "/opt/project/app/__init__.py", line 30, in <module>
es.create_index(app.config['ELASTICSEARCH_INDEX'])
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 93, in decorate
return func(*args, query_params=query_params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 1033, in create_index
query_params=query_params)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 285, in send_request
self._raise_exception(status, error_message)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 299, in _raise_exception
raise error_class(status, error_message)
pyelasticsearch.exceptions.ElasticHttpError: (400, u'index_already_exists_exception')
make: *** [run] Error 1
```
the code is this :
```
redis = Redis()
es = ElasticSearch(app.config['ELASTICSEARCH_URI'])
try:
es.create_index(app.config['ELASTICSEARCH_INDEX'])
except IndexAlreadyExistsError, e:
pass
```
Where am I going wrong? What is new in this version? | 2015/12/04 | [
"https://Stackoverflow.com/questions/34086062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5544303/"
] | `jeuResultats.next();` moves your result to the next row. You start with 0th row, i.e. when you call `.next()` it reads the first row, then when you call it again, it tries to read the 2nd row, which does not exist.
*Some additional hints, not directly related to the question:*
1. Java Docs are a good place to start [Java 8 ResultSet](http://docs.oracle.com/javase/8/docs/api/java/sql/ResultSet.html); for example, the `ResultSet.first()` method may be better suited to your use.
2. Since you are working with resources, take a look at try-with-resources syntax. [Official tutorials](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) are a good starting point for that.
3. Also take a look at PreparedStatement vs Statement. Again, the [official guide](https://docs.oracle.com/javase/tutorial/jdbc/basics/prepared.html) is a good place to start. | Replace your code with the code below:
```
requete = "SELECT Login, Password, DroitModifAnnuaire, DroitRecepteurDem, DroitResponsableDem, PiloteIso, Administrateur, DroitNews, DroitTenues, DroitEssai, Nom, Prenom FROM Annuaire WHERE Login = '"
        + (request.getParameter("login") + "'");
instruction = connexion.createStatement();
jeuResultats = instruction.executeQuery(requete);
try {
    if (jeuResultats.next()) {
        loadJSP("/index.jsp", request, reponse);
    } else {
        loadJSP("/views/menu.jsp", request, reponse);
    }
} catch (SQLException e) {
    e.printStackTrace();
}
``` |
34,086,062 | Today I updated Elasticsearch from 1.6 to 2.1, because 1.6 is a vulnerable version. After this update my website is not working and gives this error:
```
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from app import app, db
File "/opt/project/app/__init__.py", line 30, in <module>
es.create_index(app.config['ELASTICSEARCH_INDEX'])
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 93, in decorate
return func(*args, query_params=query_params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 1033, in create_index
query_params=query_params)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 285, in send_request
self._raise_exception(status, error_message)
File "/usr/local/lib/python2.7/dist-packages/pyelasticsearch/client.py", line 299, in _raise_exception
raise error_class(status, error_message)
pyelasticsearch.exceptions.ElasticHttpError: (400, u'index_already_exists_exception')
make: *** [run] Error 1
```
the code is this :
```
redis = Redis()
es = ElasticSearch(app.config['ELASTICSEARCH_URI'])
try:
es.create_index(app.config['ELASTICSEARCH_INDEX'])
except IndexAlreadyExistsError, e:
pass
```
Where is this going wrong? What is new in this version? | 2015/12/04 | [
"https://Stackoverflow.com/questions/34086062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5544303/"
] | Make the changes below in your code. Currently the next() method shifts the cursor to fetch the data at the 1st index, whereas the data is at the 0th index:
```
boolean result = false;
try {
    result = jeuResultats.next();
} catch (SQLException e) {
    e.printStackTrace();
}
if (!result) {
    loadJSP("/index.jsp", request, reponse);
} else {
    loadJSP("/views/menu.jsp", request, reponse);
}
``` | Replace your code with the code below:
```
requete = "SELECT Login, Password, DroitModifAnnuaire, DroitRecepteurDem, DroitResponsableDem, PiloteIso, Administrateur, DroitNews, DroitTenues, DroitEssai, Nom, Prenom FROM Annuaire WHERE Login = '"
        + (request.getParameter("login") + "'");
instruction = connexion.createStatement();
jeuResultats = instruction.executeQuery(requete);
try {
    if (jeuResultats.next()) {
        loadJSP("/index.jsp", request, reponse);
    } else {
        loadJSP("/views/menu.jsp", request, reponse);
    }
} catch (SQLException e) {
    e.printStackTrace();
}
``` |
48,074,568 | As part of Unity's ML Agents, images fed to a reinforcement learning agent can be converted to greyscale like so:
```
def _process_pixels(image_bytes=None, bw=False):
    s = bytearray(image_bytes)
    image = Image.open(io.BytesIO(s))
    s = np.array(image) / 255.0
    if bw:
        s = np.mean(s, axis=2)
        s = np.reshape(s, [s.shape[0], s.shape[1], 1])
    return s
```
As I'm not familiar enough with Python and especially numpy, how can I get the dimensions right for plotting the reshaped numpy array? To my understanding, the shape is based on the image's width, height and number of channels. So after reshaping there is only one channel to determine the greyscale value. I just haven't found a way to plot it yet.
Here is a link to the mentioned code of the [Unity ML Agents repository](https://github.com/Unity-Technologies/ml-agents/blob/master/python/unityagents/environment.py#L176).
That's how I wanted to plot it:
```
plt.imshow(s)
plt.show()
``` | 2018/01/03 | [
"https://Stackoverflow.com/questions/48074568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3515869/"
] | Won't just doing this work?
```
plt.imshow(s[..., 0])
plt.show()
```
Explanation
`plt.imshow` expects either a 2-D array with shape `(x, y)`, and treats it like grayscale, or dimensions `(x, y, 3)` (treated like RGB) or `(x, y, 4)` (treated as RGBA). The array you had was `(x, y, 1)`. To get rid of the last dimension we can do Numpy indexing to remove the last dimension. `s[..., 0]` says, "take all other dimensions as-is, but along the last dimension, get the slice at index 0". | It looks like the grayscale version has an extra single dimension at the end. To plot, you just need to collapse it, e.g. with [`np.squeeze`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.squeeze.html):
```
plt.imshow(np.squeeze(s))
``` |
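Both suggestions can be sanity-checked on a dummy array; a minimal sketch (the `(84, 84, 1)` shape is an assumed example, not taken from the question):
```
import numpy as np

# Hypothetical greyscale frame with a trailing single channel
s = np.random.rand(84, 84, 1)

# Indexing the last axis and squeezing both yield a plain 2-D array
assert s[..., 0].shape == (84, 84)
assert np.squeeze(s).shape == (84, 84)
assert np.array_equal(s[..., 0], np.squeeze(s))
```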
38,510,140 | What is the difference between a list & a stack in python?
I have read the explanation in the Python documentation, but both of them seem to be the same?
```
>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]
``` | 2016/07/21 | [
"https://Stackoverflow.com/questions/38510140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6621144/"
] | A stack is a *data structure concept*. The documentation uses a Python `list` object to implement one. That's why that section of the tutorial is named *Using Lists as Stacks*.
Stacks are just things you add stuff to, and when you take stuff away from a stack again, you do so in reverse order, first in, last out style. Like a stack of books or hats or... *beer crates*:
[](https://www.youtube.com/watch?v=9SReWtHt68A)
See the [Wikipedia explanation](https://en.wikipedia.org/wiki/Stack_(abstract_data_type)).
Lists on the other hand are far more versatile, you can add and remove elements anywhere in the list. You wouldn't try that with a stack of beer crates with someone on top!
You could implement a stack with a custom class:
```
from collections import namedtuple

class _Entry(namedtuple('_Entry', 'value next')):
    def _repr_assist(self, postfix):
        r = repr(self.value) + postfix
        if self.next is not None:
            return self.next._repr_assist(', ' + r)
        return r

class Stack(object):
    def __init__(self):
        self.top = None

    def push(self, value):
        self.top = _Entry(value, self.top)

    def pop(self):
        if self.top is None:
            raise ValueError("Can't pop from an empty stack")
        res, self.top = self.top.value, self.top.next
        return res

    def __repr__(self):
        if self.top is None: return '[]'
        return '[' + self.top._repr_assist(']')
```
Hardly a list in sight (somewhat artificially), but it is definitely a stack:
```
>>> stack = Stack()
>>> stack.push(3)
>>> stack.push(4)
>>> stack.push(5)
>>> stack
[3, 4, 5]
>>> stack.pop()
5
>>> stack.push(6)
>>> stack
[3, 4, 6]
>>> stack.pop()
6
>>> stack.pop()
4
>>> stack.pop()
3
>>> stack
[]
```
The Python standard library doesn't come with a specific stack datatype; a `list` object does just fine. Just limit any use to `list.append()` and `list.pop()` (the latter with no arguments) to treat a list *as* a stack.
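A minimal sketch of that restricted usage, pushing with `append()` and popping with `pop()` only:
```
stack = []

for item in ("first", "second", "third"):
    stack.append(item)      # push

while stack:
    print(stack.pop())      # prints "third", then "second", then "first"
```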
You could also use the [`collections.deque()` type](https://docs.python.org/3/library/collections.html#collections.deque); it is usually slightly faster than a list for the typical patterns seen when using either as a stack. However, like lists, a deque can be used for other purposes too. | A "stack" is a specific application of `list`, with operations limited to appending (pushing) to and popping (pulling) from the end. |
38,510,140 | What is the difference between a list & a stack in python?
I have read the explanation in the Python documentation, but both of them seem to be the same?
```
>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]
``` | 2016/07/21 | [
"https://Stackoverflow.com/questions/38510140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6621144/"
] | A stack is a *data structure concept*. The documentation uses a Python `list` object to implement one. That's why that section of the tutorial is named *Using Lists as Stacks*.
Stacks are just things you add stuff to, and when you take stuff away from a stack again, you do so in reverse order, first in, last out style. Like a stack of books or hats or... *beer crates*:
[](https://www.youtube.com/watch?v=9SReWtHt68A)
See the [Wikipedia explanation](https://en.wikipedia.org/wiki/Stack_(abstract_data_type)).
Lists on the other hand are far more versatile, you can add and remove elements anywhere in the list. You wouldn't try that with a stack of beer crates with someone on top!
You could implement a stack with a custom class:
```
from collections import namedtuple

class _Entry(namedtuple('_Entry', 'value next')):
    def _repr_assist(self, postfix):
        r = repr(self.value) + postfix
        if self.next is not None:
            return self.next._repr_assist(', ' + r)
        return r

class Stack(object):
    def __init__(self):
        self.top = None

    def push(self, value):
        self.top = _Entry(value, self.top)

    def pop(self):
        if self.top is None:
            raise ValueError("Can't pop from an empty stack")
        res, self.top = self.top.value, self.top.next
        return res

    def __repr__(self):
        if self.top is None: return '[]'
        return '[' + self.top._repr_assist(']')
```
Hardly a list in sight (somewhat artificially), but it is definitely a stack:
```
>>> stack = Stack()
>>> stack.push(3)
>>> stack.push(4)
>>> stack.push(5)
>>> stack
[3, 4, 5]
>>> stack.pop()
5
>>> stack.push(6)
>>> stack
[3, 4, 6]
>>> stack.pop()
6
>>> stack.pop()
4
>>> stack.pop()
3
>>> stack
[]
```
The Python standard library doesn't come with a specific stack datatype; a `list` object does just fine. Just limit any use to `list.append()` and `list.pop()` (the latter with no arguments) to treat a list *as* a stack.
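As a preview of the `collections.deque` alternative mentioned in the next paragraph, a minimal sketch of using it the same way:
```
from collections import deque

stack = deque()
stack.append(3)
stack.append(4)
stack.append(5)

print(stack.pop())  # 5
print(stack.pop())  # 4
print(stack.pop())  # 3
```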
You could also use the [`collections.deque()` type](https://docs.python.org/3/library/collections.html#collections.deque); it is usually slightly faster than a list for the typical patterns seen when using either as a stack. However, like lists, a deque can be used for other purposes too. | In python lists can also be used as stacks. Think of a list like a combination between your normal lists and a stack.
This is also described [here](https://docs.python.org/3/tutorial/datastructures.html)
>
> The list methods make it very easy to use a list as a stack, where the
> last element added is the first element retrieved (“last-in,
> first-out”). To add an item to the top of the stack, use append(). To
> retrieve an item from the top of the stack, use pop() without an
> explicit index
>
>
>
In fact you are using their exact example. Are you confused by the fact that it's a "combined data structure" ?
EDIT: as another user mentioned, it is a concept that is implemented using lists. |
38,510,140 | What is the difference between a list & a stack in python?
I have read the explanation in the Python documentation, but both of them seem to be the same?
```
>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]
``` | 2016/07/21 | [
"https://Stackoverflow.com/questions/38510140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6621144/"
] | A stack is a *data structure concept*. The documentation uses a Python `list` object to implement one. That's why that section of the tutorial is named *Using Lists as Stacks*.
Stacks are just things you add stuff to, and when you take stuff away from a stack again, you do so in reverse order, first in, last out style. Like a stack of books or hats or... *beer crates*:
[](https://www.youtube.com/watch?v=9SReWtHt68A)
See the [Wikipedia explanation](https://en.wikipedia.org/wiki/Stack_(abstract_data_type)).
Lists on the other hand are far more versatile, you can add and remove elements anywhere in the list. You wouldn't try that with a stack of beer crates with someone on top!
You could implement a stack with a custom class:
```
from collections import namedtuple

class _Entry(namedtuple('_Entry', 'value next')):
    def _repr_assist(self, postfix):
        r = repr(self.value) + postfix
        if self.next is not None:
            return self.next._repr_assist(', ' + r)
        return r

class Stack(object):
    def __init__(self):
        self.top = None

    def push(self, value):
        self.top = _Entry(value, self.top)

    def pop(self):
        if self.top is None:
            raise ValueError("Can't pop from an empty stack")
        res, self.top = self.top.value, self.top.next
        return res

    def __repr__(self):
        if self.top is None: return '[]'
        return '[' + self.top._repr_assist(']')
```
Hardly a list in sight (somewhat artificially), but it is definitely a stack:
```
>>> stack = Stack()
>>> stack.push(3)
>>> stack.push(4)
>>> stack.push(5)
>>> stack
[3, 4, 5]
>>> stack.pop()
5
>>> stack.push(6)
>>> stack
[3, 4, 6]
>>> stack.pop()
6
>>> stack.pop()
4
>>> stack.pop()
3
>>> stack
[]
```
The Python standard library doesn't come with a specific stack datatype; a `list` object does just fine. Just limit any use to `list.append()` and `list.pop()` (the latter with no arguments) to treat a list *as* a stack.
You could also use the [`collections.deque()` type](https://docs.python.org/3/library/collections.html#collections.deque); it is usually slightly faster than a list for the typical patterns seen when using either as a stack. However, like lists, a deque can be used for other purposes too. | A stack works on the Last-In, First-Out (LIFO) principle.
We can perform push and pop operations on a stack.
Compared to a stack, a list makes it easy to do all operations such as add, insert, delete, concat, etc.
A stack is an application of a list, and it is one of the data structures we use most. |
38,510,140 | What is the difference between a list & a stack in python?
I have read the explanation in the Python documentation, but both of them seem to be the same?
```
>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]
``` | 2016/07/21 | [
"https://Stackoverflow.com/questions/38510140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6621144/"
] | A "stack" is a specific application of `list`, with operations limited to appending (pushing) to and popping (pulling) from the end. | A stack works on the Last-In, First-Out (LIFO) principle.
We can perform push and pop operations on a stack.
Compared to a stack, a list makes it easy to do all operations such as add, insert, delete, concat, etc.
A stack is an application of a list, and it is one of the data structures we use most. |
38,510,140 | What is the difference between a list & a stack in python?
I have read the explanation in the Python documentation, but both of them seem to be the same?
```
>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]
``` | 2016/07/21 | [
"https://Stackoverflow.com/questions/38510140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6621144/"
] | In python lists can also be used as stacks. Think of a list like a combination between your normal lists and a stack.
This is also described [here](https://docs.python.org/3/tutorial/datastructures.html)
>
> The list methods make it very easy to use a list as a stack, where the
> last element added is the first element retrieved (“last-in,
> first-out”). To add an item to the top of the stack, use append(). To
> retrieve an item from the top of the stack, use pop() without an
> explicit index
>
>
>
In fact you are using their exact example. Are you confused by the fact that it's a "combined data structure" ?
EDIT: as another user mentioned, it is a concept that is implemented using lists. | A stack works on the Last-In, First-Out (LIFO) principle.
We can perform push and pop operations on a stack.
Compared to a stack, a list makes it easy to do all operations such as add, insert, delete, concat, etc.
A stack is an application of a list, and it is one of the data structures we use most. |
18,971,162 | I am trying to create a simple python calculator for an assignment. The basic idea of it is simple and documented all over online, but I am trying to create one where the user actually inputs the operators. So instead of printing 1: addition, 2: subtraction, etc, the user would select + for addition, - for subtraction, etc. I am also trying to make Q or q quit the program.
Any ideas for how to allow the user to type operators to represent the operation?
Note: I know I still need to define my remainder operation.
```
import math
loop = 1
choice = 0
while loop == 1:
    print("your options are:")
    print("+ Addition")
    print("- Subtraction")
    print("* Multiplication")
    print("/ Division")
    print("% Remainder")
    print("Q Quit")
    print("***************************")
    choice = str(input("Choose your option: "))
    if choice == +:
        ad1 = float(input("Add this: "))
        ad2 = float(input("to this: "))
        print(ad1, "+", ad2, "=", ad1 + ad2)
    elif choice == -:
        su2 = float(input("Subtract this: "))
        su1 = float(input("from this: "))
        print(su1, "-", su2, "=", su1 - su2)
    elif choice == *:
        mu1 = float(input("Multiply this: "))
        mu2 = float(input("with this: "))
        print(mu1, "*", mu2, "=", mu1 * mu2)
    elif choice == /:
        di1 = float(input("Divide this: "))
        di2 = float(input("by this: "))
        print(di1, "/", di2, "=", di1 / di2)
    elif choice == Q:
        loop = 0
        print("Thank-you for using calculator")
``` | 2013/09/24 | [
"https://Stackoverflow.com/questions/18971162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2809161/"
] | First off, you don't need to assign `choice` to zero
Second, you have your code right, but you need to put quotes around the operators in your if statements like this
```
if choice == '+':
```
to show that you are checking for a string
make your loop like this:
```
while 1: #or while True:
#do stuff
elif choice == 'Q': #quotes around Q
break #use the `break` keyword to end the while loop
```
then, you don't need to assign `loop` at the top of your program | You should try replacing `if choice == +` by `if choice == "+"`.
What you're getting from the input is actually a string, which means it can contain any character, even one that represents an operator. |
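A minimal sketch of that idea: compare the input as a string, then dispatch on it (names like `ops` are illustrative, not from the question):
```
import operator

ops = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

while True:
    choice = input("Choose your option: ")
    if choice in ('Q', 'q'):
        print("Thank-you for using calculator")
        break
    if choice in ops:
        a = float(input("First number: "))
        b = float(input("Second number: "))
        print(a, choice, b, "=", ops[choice](a, b))
```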
57,624,355 | I deploy a Python app to Google Cloud Functions and got this very vague error message:
```
$ gcloud functions deploy parking_photo --runtime python37 --trigger-http
Deploying function (may take a while - up to 2 minutes)...failed.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: 'main'
```
I don't know what is wrong. Searching around gives no results. Can anyone help?
I believe my code layout is correct:
```
$ tree
.
├── main.py
├── poetry.lock
├── pyproject.toml
├── README.rst
├── requirements.txt
└── tests
├── __init__.py
└── test_burrowingowl.py
```
My `main.py` file has a function that matches the function name:
```py
import operator
from datetime import datetime
import logbook
from flask import Request, abort, redirect
from pydantic import ValidationError
from pydantic.dataclasses import dataclass
from google.cloud import storage
from pytz import timezone
logger = logbook.Logger(__name__)
storage_client = storage.Client()
@dataclass
class Form:
    bucket: str = ...
    parkinglot: str = ...
    space_id: int = ...
    tz: str = ...

def parking_photo(request: Request):
    # Some code
    return
```
### Update
Thank you for the answers. This topic slipped out of my sight when I didn't receive notifications from Stack Overflow for a while.
Last year, I fixed it by just dropping the use of `dataclass`. At that time, Google claimed to support Python 3.7 but actually did not, which is why `dataclass` didn't work.
When you tried to reproduce this issue, maybe Google had already fixed the Python 3.7 compatibility. | 2019/08/23 | [
"https://Stackoverflow.com/questions/57624355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/502780/"
] | Most likely your function is raising a `FileNotFound` error, and Cloud Functions interprets this as `main.py` not existing. A minimal example that will cause the same error:
```
$ cat main.py
with open('missing.file'):
    pass

def test(request):
    return 'Hello World!'
```
You should check to make sure that any files you're trying to open are included with your function. You can `try`/`except` for this error and log a message to figure it out as well. | I’ve tried to reproduce the error that you are describing by deploying a new Cloud Function without any function with the name of the CF and I got the following error:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: File main.py is expected to contain a function named function-test
I think it's a similar one, since the error code is the same. My function was deployed with an error; later I edited the name in the Cloud Console, and then the deployment succeeded.
I would suggest validating in the Cloud Console that the function to be called is set correctly.
Another approach would be to use the [--entry-point](https://cloud.google.com/sdk/gcloud/reference/functions/deploy#--entry-point) parameter to indicate the name of the function to be executed. |
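For the question's own deploy command, that would look something like this (a sketch reusing the function name and flags from the question; `--entry-point` is the flag the linked reference documents):
```
$ gcloud functions deploy parking_photo --entry-point parking_photo --runtime python37 --trigger-http
```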
57,624,355 | I deploy a Python app to Google Cloud Functions and got this very vague error message:
```
$ gcloud functions deploy parking_photo --runtime python37 --trigger-http
Deploying function (may take a while - up to 2 minutes)...failed.
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: 'main'
```
I don't know what is wrong. Searching around gives no results. Can anyone help?
I believe my code layout is correct:
```
$ tree
.
├── main.py
├── poetry.lock
├── pyproject.toml
├── README.rst
├── requirements.txt
└── tests
├── __init__.py
└── test_burrowingowl.py
```
My `main.py` file has a function that matches the function name:
```py
import operator
from datetime import datetime
import logbook
from flask import Request, abort, redirect
from pydantic import ValidationError
from pydantic.dataclasses import dataclass
from google.cloud import storage
from pytz import timezone
logger = logbook.Logger(__name__)
storage_client = storage.Client()
@dataclass
class Form:
    bucket: str = ...
    parkinglot: str = ...
    space_id: int = ...
    tz: str = ...

def parking_photo(request: Request):
    # Some code
    return
```
### Update
Thank you for the answers. This topic slipped out of my sight when I didn't receive notifications from Stack Overflow for a while.
Last year, I fixed it by just dropping the use of `dataclass`. At that time, Google claimed to support Python 3.7 but actually did not, which is why `dataclass` didn't work.
When you tried to reproduce this issue, maybe Google had already fixed the Python 3.7 compatibility. | 2019/08/23 | [
"https://Stackoverflow.com/questions/57624355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/502780/"
] | Most likely your function is raising a `FileNotFound` error, and Cloud Functions interprets this as `main.py` not existing. A minimal example that will cause the same error:
```
$ cat main.py
with open('missing.file'):
    pass

def test(request):
    return 'Hello World!'
```
You should check to make sure that any files you're trying to open are included with your function. You can `try`/`except` for this error and log a message to figure it out as well. | I want to expand on Dustin's answer:
A similar error occurs with any error that happens when initializing the function.
```
OperationError: code=3, message=Function failed on loading user code. Error message: File main.py is expected to contain a function named function-test
```
The following snippet can reproduce this error. It turned out there was an exception in some additional initialization code. Therefore the function that should be deployed is never reached.
```
def init_something():
    raise NameError('Ooops, something went wrong')

init_something()

def function_to_deploy():
    return 'Hello World', 200
```
Make sure to handle exceptions in the init code. |
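A minimal sketch of that advice, reusing the hypothetical `init_something()` from the snippet above:
```
try:
    init_something()
except Exception as exc:
    # Log and continue so the module still imports and the deploy can succeed
    print(f'Initialization failed, continuing without it: {exc}')
```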
62,579,243 | I know my question has a lot of answers on the internet, but it seems I can't find a good answer for it, so I will try to explain what I have and hope for the best.
What I'm trying to do is read a big JSON file that might have a more complex structure ("nested objects with big arrays") than this, but here is a simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
This file might be 200M or more, and I'm using `file_get_contents()` and `json_decode()` to read the data from the file.
Then I put the result in a variable and loop over the time array, taking the time value at the current index to get the corresponding value by index from the values array, then save the time and the value in the database. But this takes so much CPU and memory. Is there a better way to do this:
better functions to use, a better JSON structure, or maybe a better data format than JSON?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach(data["time"] as $timeIndex => timeValue) {
saveInDataBase(timeValue, data["values"][timeIndex])
}
```
Thanks in advance for any help.
**Update 06/29/2020:**
I have another, more complex JSON structure example:
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
The file size might be around 500MB or more, and the arrays inside this JSON file might have around 100MB of data or more.
My question is: how can I get any piece of this data and navigate between its nodes in the most efficient way, without taking much RAM and CPU? I can't read the file line by line, because I need to be able to get any piece of the data when I have to.
Is Python, for example, more suitable for handling this big data more efficiently than PHP?
Please, if you can provide a detailed answer, I think it will be a great help for everyone looking to do this big-data work with PHP. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | >
> and my question how can i get any peace and navigate between nodes of this data with the most efficient way that will not take much RAM and CPU, i can't read the file line by line because i need to get any peace of data when i have to,
>
>
>
It's plain text JSON and you have no indexes, so it's impossible to parse your data without iterating it line-by-line. The solution is to serialize your data once and for all and store it in a database (I'm thinking SQLite for fast setup).
If you absolutely can't store your data in a database, or can't convert it to SQLite format, you have no other choice but to create a [queue job](https://laravel.com/docs/7.x/queues) which will parse it in time. | **Try Reducing Your Bulk Data Complexity for Faster File I/O**
JSON is a great format to store data in, but it comes at the cost of needing to read the entire file to parse it.
Making your data structure simpler but more spread out across several files can allow you to read a file line by line, which is much faster than all at once. This also comes with the benefit of not needing to store the entire file in RAM all at once, so it is friendlier to resource-limited environments.
**This might look something like this:**
objects.json
```
{
"data": {
"times_file": "/some/path/objects/object-123/object-123-times.csv",
"values_file": "/some/path/objects/object-123/object-123-times.csv"
}
}
```
object-123-times.csv
```
1
2
3
4
...
```
This would allow you to store your bulk data in a simpler but easier to access format. You could then use something like [`fgetcsv()`](https://www.php.net/manual/en/function.fgetcsv.php) to parse each line. |
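If the processing side ends up in Python instead (something the question explicitly asks about), reading such a per-object CSV line by line is just as cheap; a minimal sketch using the assumed file path from the example layout above:
```
import csv

with open('/some/path/objects/object-123/object-123-times.csv', newline='') as f:
    for row in csv.reader(f):
        time_value = float(row[0])
        # ... save time_value to the database here
```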
62,579,243 | I know my question has a lot of answers on the internet but it's seems i can't find a good answer for it, so i will try to explain what i have and hope for the best,
so what i'm trying to do is reading a big json file that might be has more complex structure "nested objects with big arrays" than this but for simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and i'm using `file_get_contents()` and `json_decode()` to read the data from the file,
Then I put the result in a variable and loop over the time array, taking the time value at the current index to get the corresponding value by index from the values array, then save the time and the value in the database. But this takes so much CPU and memory. Is there a better way to do this:
better functions to use, a better JSON structure, or maybe a better data format than JSON?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach(data["time"] as $timeIndex => timeValue) {
saveInDataBase(timeValue, data["values"][timeIndex])
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or More and the arrays inside this json file might have around 100MB of data or more.
My question is: how can I get any piece of this data and navigate between its nodes in the most efficient way, without taking much RAM and CPU? I can't read the file line by line, because I need to be able to get any piece of the data when I have to.
Is Python, for example, more suitable for handling this big data more efficiently than PHP?
please if you can provide a detailed answer i think it will be much help for every one that looking to do this big data stuff with php. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | >
> and my question how can i get any peace and navigate between nodes of this data with the most efficient way that will not take much RAM and CPU, i can't read the file line by line because i need to get any peace of data when i have to,
>
>
>
It's plain text JSON and you have no indexes, so it's impossible to parse your data without iterating it line-by-line. The solution is to serialize your data once and for all and store it in a database (I'm thinking SQLite for fast setup).
If you absolutely can't store your data in a database, or can't convert it to SQLite format, you have no other choice but to create a [queue job](https://laravel.com/docs/7.x/queues) which will parse it in time. | You can split your arrays into chunks using the
`array_chunk()` function.
>
> The `array_chunk()` function is an inbuilt function in PHP which is
> used to split an array into parts or chunks of given size depending
> upon the parameters passed to the function. The last chunk may contain
> fewer elements than the desired size of the chunk.
>
>
>
Check the examples in this [link](https://www.geeksforgeeks.org/php-array_chunk-function/) |
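The same chunking idea in Python (relevant because the question asks whether Python would handle this better); a minimal sketch, not tied to any particular library:
```
def chunked(values, size):
    """Yield consecutive chunks of at most `size` items, like PHP's array_chunk()."""
    for start in range(0, len(values), size):
        yield values[start:start + size]

for chunk in chunked(list(range(10)), 4):
    print(chunk)  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
```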
62,579,243 | I know my question has a lot of answers on the internet but it's seems i can't find a good answer for it, so i will try to explain what i have and hope for the best,
so what i'm trying to do is reading a big json file that might be has more complex structure "nested objects with big arrays" than this but for simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and i'm using `file_get_contents()` and `json_decode()` to read the data from the file,
Then I put the result in a variable and loop over the time array, taking the time value at the current index to get the corresponding value by index from the values array, then save the time and the value in the database. But this takes so much CPU and memory. Is there a better way to do this:
better functions to use, a better JSON structure, or maybe a better data format than JSON?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach(data["time"] as $timeIndex => timeValue) {
saveInDataBase(timeValue, data["values"][timeIndex])
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or More and the arrays inside this json file might have around 100MB of data or more.
My question is: how can I get any piece of this data and navigate between its nodes in the most efficient way, without taking much RAM and CPU? I can't read the file line by line, because I need to be able to get any piece of the data when I have to.
Is Python, for example, more suitable for handling this big data more efficiently than PHP?
please if you can provide a detailed answer i think it will be much help for every one that looking to do this big data stuff with php. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | My approach will be reading the `JSON FILE` in chunks.
>
> If these json objects have a consistent structure, you can easily detect when a json object in a file starts, and ends.
>
>
> Once you collect a whole object, you insert it into a db, then go on
> to the next one.
>
>
> There isn't much more to it. the algorithm to detect the beginning and
> end of a json object may get complicating depending on your data
> source, but I hvae done something like this before with a far more
> complex structure (xml) and it worked fine.
>
>
>
Above answer is taken from => [Parse large JSON file](https://stackoverflow.com/questions/15373529/parse-large-json-file)
Please see the below references, it can be helpful for your case
=> <https://laracasts.com/discuss/channels/general-discussion/how-to-open-a-28-gb-json-file-in-php> | As you correctly say, you won't get around reading line by line. Using SQL as suggested just moves the problem to another environment. I would personally do it this way:
1. When a new JSON file comes in, put it in a storage, easiest would be S3 with `Storage::disk('s3')->put(...);` (<https://laravel.com/docs/7.x/filesystem>) and put it in a queue. You could use Laravel queue or what I prefer, RabbitMQ. Add to the queue a new entry, like `{'job': 'parseMyJSON', 'path': 'https://path-on.s3'}`
2. Create a new server instance that can access the queue
3. Write a worker instance of your app, that can take a job from the queue. Run it on the new server from 2. Whenever you put the job into the queue, it will get the JSON file from S3 and do the necessary job. Then it will take the next job from the queue, one by one.
Whether this worker instance is written in Python or PHP, you have to test which will work faster. The advantage of this is that you can scale the workers to however many you need, and it won't affect the performance of your web app. I hope this helps you. |
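Since the question also asks whether Python would handle this more efficiently, here is what the chunked/streaming idea from the answers above looks like there; a sketch assuming the third-party `ijson` package (not something the answers themselves use):
```
import ijson  # streaming JSON parser, installed separately

with open('ts_big_data.json', 'rb') as f:
    # Iterate the "data.time" array one element at a time without
    # loading the whole document into memory.
    for time_value in ijson.items(f, 'data.time.item'):
        pass  # pair it with the matching value and save it to the database here
```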
62,579,243 | I know my question has a lot of answers on the internet but it's seems i can't find a good answer for it, so i will try to explain what i have and hope for the best,
so what i'm trying to do is reading a big json file that might be has more complex structure "nested objects with big arrays" than this but for simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and i'm using `file_get_contents()` and `json_decode()` to read the data from the file,
Then I put the result in a variable and loop over the time array, taking the time value at the current index to get the corresponding value by index from the values array, then save the time and the value in the database. But this takes so much CPU and memory. Is there a better way to do this:
better functions to use, a better JSON structure, or maybe a better data format than JSON?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach(data["time"] as $timeIndex => timeValue) {
saveInDataBase(timeValue, data["values"][timeIndex])
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or More and the arrays inside this json file might have around 100MB of data or more.
My question is: how can I get any piece of this data and navigate between its nodes in the most efficient way, without taking much RAM and CPU? I can't read the file line by line, because I need to be able to get any piece of the data when I have to.
Is Python, for example, more suitable for handling this big data more efficiently than PHP?
please if you can provide a detailed answer i think it will be much help for every one that looking to do this big data stuff with php. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | JSON is a great format and a much better alternative to XML.
In the end, JSON is almost one-to-one convertible to XML and back.
Big files can get bigger, so we don't want to read all the content into memory and we don't want to parse the whole file. I had the same issue with XXL-size JSON files.
I think the issue lies not in a specific programming language, but in the realisation and specifics of the formats.
I have 3 solutions for you:
1. **Native PHP implementation** (*preferred*)
Almost as fast as streamed XMLReader, there is a library <https://github.com/pcrov/JsonReader>. Example:
```
use pcrov\JsonReader\JsonReader;
$reader = new JsonReader();
$reader->open("data.json");
while ($reader->read("type")) {
    echo $reader->value(), "\n";
}
$reader->close();
```
This library will not read the whole file into memory or parse all the lines. It steps through the tree of the JSON object on command, piece by piece.
2. **Let go of the format** (*cons: multiple conversions*)
Preprocess the file into a different format like XML or CSV.
There are very lightweight Node.js libraries like <https://www.npmjs.com/package/json2csv> for converting JSON to CSV.
3. **Use some NoSQL DB** (*cons: additional complex software to install and maintain*)
For example Redis or CouchDB([import json file to couch db-](https://stackoverflow.com/questions/790757/import-json-file-to-couch-db)) | You can split your arrays into chunks using the
`array_chunk()` function.
>
> The `array_chunk()` function is an inbuilt function in PHP which is
> used to split an array into parts or chunks of given size depending
> upon the parameters passed to the function. The last chunk may contain
> fewer elements than the desired size of the chunk.
>
>
>
Check the examples in this [link](https://www.geeksforgeeks.org/php-array_chunk-function/) |
62,579,243 | I know my question has a lot of answers on the internet but it's seems i can't find a good answer for it, so i will try to explain what i have and hope for the best,
so what i'm trying to do is reading a big json file that might be has more complex structure "nested objects with big arrays" than this but for simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and i'm using `file_get_contents()` and `json_decode()` to read the data from the file,
Then I put the result in a variable and loop over the time array, taking the time value at the current index to get the corresponding value by index from the values array, then save the time and the value in the database. But this takes so much CPU and memory. Is there a better way to do this:
better functions to use, a better JSON structure, or maybe a better data format than JSON?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach(data["time"] as $timeIndex => timeValue) {
saveInDataBase(timeValue, data["values"][timeIndex])
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or More and the arrays inside this json file might have around 100MB of data or more.
My question is: how can I get any piece of this data and navigate between its nodes in the most efficient way, without taking much RAM and CPU? I can't read the file line by line, because I need to be able to get any piece of the data when I have to.
Is Python, for example, more suitable for handling this big data more efficiently than PHP?
please if you can provide a detailed answer i think it will be much help for every one that looking to do this big data stuff with php. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | Your problem is basically related to the memory management performed by each specific programming language that you might use in order to access the data from a huge (storage purpose) file.
For example, when you amass the operations by using the code that you just mentioned (as below)
`$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);`
what happens is that the memory used by runtime Zend engine increases too much, because it has to allocate certain memory units to store references about each ongoing file handling involved in your code statement - like keeping also in memory a pointer, not only the real file opened - unless this file gets finally overwritten and the memory buffer released (freed) again. It's no wonder that if you force the execution of both **file\_get\_contents()** function that reads the file into a string and also the **json\_decode()** function, you force the interpreter to keep in memory all 3 "things": the file itself, the reference created (the string), and also the structure (the json file).
On the contrary if you break the statement in several ones, the memory stack hold by the first data structure (the file) will be unloaded when the operation of "getting its content" then writing it into another variable (or file) is fully performed. As time as you don't define a variable where to save the data, it will still stay in the memory (as a blob - with *no name*, *no storage address*, *just content*). For this reason, it is much more CPU and RAM effective - when working with big data - to break everything in smaller steps.
So, first, start by simply rewriting your code as follows:
```
$somefile = file_get_contents(storage_path("test/ts/ts_big_data.json"));
$data = json_decode($somefile, true);
```
When the first line gets executed, the memory held by **ts\_big\_data.json** gets released (think of it as being purged and made available again to other processes).
When second line gets executed, also **$somefile**'s memory buffer gets released, too. The take away point from this is that instead of always having 3 memory buffers used just to store the data structures, you'll only have 2 at each time, if of course ignoring the other memory used to actually construct the file. Not to say that when working with arrays (and JSON files just exactly arrays they are), that dynamically allocated memory increases dramatically and not linear as we might tend to think. Bottom line is that instead of a 50% loss in performance just on storage allocation for the files (3 big files taking 50% more space than just 2 of them), we better manage to handle in smaller steps the execution of the functions 'touching' these huge files.
In order to understand this, imagine that you access only what is needed at a certain moment in time (this is also a principle called YAGNI -You Aren't Gonna Need It - or similar in the context of Extreme Programming Practices - see reference here <https://wiki.c2.com/?YouArentGonnaNeedIt> something inherited since the C or Cobol old times.
The next approach to follow is to break the file in more pieces, but in a structured one (relational dependent data structure) as is in a database table / tables.
Obviously, you have to save the data pieces again **as blobs**, in the database. The advantage is that the retrieval of data in a DB is much more faster than in a file (due to the allocation of indexes by the SQL when generating and updating the tables). A table having 1 or two indexes can be accessed in a lightning fast manner by a structured query. Again, the indexes are pointers to the main storage of the data.
One important topic however is that if you still want to work with the json (content and type of data storage - instead of tables in a DB) is that you cannot update it locally without changing it globally. I am not sure what you meant by reading the time related function values in the json file. Do you mean that your json file is continuously changing? Better break it in several tables so each separate one can change without affecting all the mega structure of the data. Easier to manage, easier to maintain, easier to locate the changes.
***My understanding is that best solution would be to split the same file in several json files where you strip down the not needed values. BY THE WAY, DO YOU ACTUALLY NEED ALL THE STORED DATA ??***
I wouldn't come now with a code unless you explain me the above issues (so we can have a conversation) and thereafter I will accordingly edit my answer. I wrote yesterday a question related to handling of blobs - and storing in the server - in order to accelerate the execution of a data update in a server using a cron process. My data was about 25MB+ not 500+ as in your case however I must understand the use case for your situation.
One more thing, how was created that file that you must process ? Why do you manage only the final form of it instead of intervening in further feeding it with data ? My opinion is that you might stop storing data into it as previously done (and thus stop adding to your pain) and instead transform its today purpose only into historic data storage from now on then go toward storing the future data in something more elastic (as MongoDB or NoSQL databases).
Probably you don't need so much a code as a solid and useful strategy and way of working with your data first.
***Programming comes last, after you decided all the detailed architecture of your web project.*** | As you say correctly you won't get around with reading line per line. Using SQL as suggested just moves the problem to another environment. I would personally do it this way:
1. When a new JSON file comes in, put it in a storage, easiest would be S3 with `Storage::disk('s3')->put(...);` (<https://laravel.com/docs/7.x/filesystem>) and put it in a queue. You could use Laravel queue or what I prefer, RabbitMQ. Add to the queue a new entry, like `{'job': 'parseMyJSON', 'path': 'https://path-on.s3'}`
2. Create a new server instance that can access the queue
3. Write a worker instance of your app, that can take a job from the queue. Run it on the new server from 2. Whenever you put the job into the queue, it will get the JSON file from S3 and do the necessary job. Then it will take the next job from the queue, one by one.
If this worker instance is written in Python or PHP you have to test what will work faster. The advantage of this is, that you can scale the workers as how much you need them. And it won't affect the performance of your webapp. I hope this helps you. |
62,579,243 | I know my question has a lot of answers on the internet but it's seems i can't find a good answer for it, so i will try to explain what i have and hope for the best,
so what i'm trying to do is reading a big json file that might be has more complex structure "nested objects with big arrays" than this but for simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and i'm using `file_get_contents()` and `json_decode()` to read the data from the file,
Then I put the result in a variable and loop over the time array, taking the time value at the current index to get the corresponding value by index from the values array, then save the time and the value in the database. But this takes so much CPU and memory. Is there a better way to do this:
better functions to use, a better JSON structure, or maybe a better data format than JSON?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach(data["time"] as $timeIndex => timeValue) {
saveInDataBase(timeValue, data["values"][timeIndex])
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or More and the arrays inside this json file might have around 100MB of data or more.
My question is: how can I get any piece of this data and navigate between its nodes in the most efficient way, without taking much RAM and CPU? I can't read the file line by line, because I need to be able to get any piece of the data when I have to.
Is Python, for example, more suitable for handling this big data more efficiently than PHP?
please if you can provide a detailed answer i think it will be much help for every one that looking to do this big data stuff with php. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | Your problem is basically related to the memory management performed by each specific programming language that you might use in order to access the data from a huge (storage purpose) file.
For example, when you amass the operations by using the code that you just mentioned (as below)
`$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);`
what happens is that the memory used by runtime Zend engine increases too much, because it has to allocate certain memory units to store references about each ongoing file handling involved in your code statement - like keeping also in memory a pointer, not only the real file opened - unless this file gets finally overwritten and the memory buffer released (freed) again. It's no wonder that if you force the execution of both **file\_get\_contents()** function that reads the file into a string and also the **json\_decode()** function, you force the interpreter to keep in memory all 3 "things": the file itself, the reference created (the string), and also the structure (the json file).
On the contrary if you break the statement in several ones, the memory stack hold by the first data structure (the file) will be unloaded when the operation of "getting its content" then writing it into another variable (or file) is fully performed. As time as you don't define a variable where to save the data, it will still stay in the memory (as a blob - with *no name*, *no storage address*, *just content*). For this reason, it is much more CPU and RAM effective - when working with big data - to break everything in smaller steps.
So, first, start by simply rewriting your code as follows:
```
$somefile = file_get_contents(storage_path("test/ts/ts_big_data.json"));
$data = json_decode($somefile, true);
```
When the first line gets executed, the memory held by **ts\_big\_data.json** gets released (think of it as being purged and made available again to other processes).
When second line gets executed, also **$somefile**'s memory buffer gets released, too. The take away point from this is that instead of always having 3 memory buffers used just to store the data structures, you'll only have 2 at each time, if of course ignoring the other memory used to actually construct the file. Not to say that when working with arrays (and JSON files just exactly arrays they are), that dynamically allocated memory increases dramatically and not linear as we might tend to think. Bottom line is that instead of a 50% loss in performance just on storage allocation for the files (3 big files taking 50% more space than just 2 of them), we better manage to handle in smaller steps the execution of the functions 'touching' these huge files.
In order to understand this, imagine that you access only what is needed at a certain moment in time (this is also a principle called YAGNI -You Aren't Gonna Need It - or similar in the context of Extreme Programming Practices - see reference here <https://wiki.c2.com/?YouArentGonnaNeedIt> something inherited since the C or Cobol old times.
The next approach to follow is to break the file in more pieces, but in a structured one (relational dependent data structure) as is in a database table / tables.
Obviously, you have to save the data pieces again **as blobs**, in the database. The advantage is that the retrieval of data from a DB is much faster than from a file (due to the indexes allocated by the SQL engine when generating and updating the tables). A table having one or two indexes can be accessed in a lightning-fast manner by a structured query. Again, the indexes are pointers to the main storage of the data.
One important topic, however: if you still want to work with the JSON (content and type of data storage - instead of tables in a DB), you cannot update it locally without changing it globally. I am not sure what you meant by reading the time-related function values in the JSON file. Do you mean that your JSON file is continuously changing? Better break it into several tables so each separate one can change without affecting the whole mega-structure of the data. Easier to manage, easier to maintain, easier to locate the changes.
***My understanding is that best solution would be to split the same file in several json files where you strip down the not needed values. BY THE WAY, DO YOU ACTUALLY NEED ALL THE STORED DATA ??***
I wouldn't come up with code now unless you explain the above issues to me (so we can have a conversation), and thereafter I will edit my answer accordingly. Yesterday I wrote a question related to the handling of blobs - and storing them on the server - in order to accelerate the execution of a data update on a server using a cron process. My data was about 25MB+, not 500+ as in your case, however I must understand the use case for your situation.
One more thing: how was that file that you must process created? Why do you manage only its final form instead of intervening while it is still being fed with data? My opinion is that you might stop storing data into it as previously done (and thus stop adding to your pain), instead turn its purpose from today on into historic data storage only, and go toward storing future data in something more elastic (such as MongoDB or other NoSQL databases).
Probably you don't need code so much as a solid and useful strategy and way of working with your data first.
***Programming comes last, after you decided all the detailed architecture of your web project.*** | >
> and my question is how can I get any piece of this data and navigate between its nodes in the most efficient way that will not take much RAM and CPU; I can't read the file line by line because I need to get any piece of data when I have to,
>
>
>
It's plain text JSON and you have no indexes, so it's impossible to parse your data without iterating it line-by-line. The solution is to serialize your data once and for all and store it in a database (I'm thinking SQLite for fast setup).
If you absolutely can't store your data in a database, or can't get it into SQLite format, you have no other choice but to create a [queue job](https://laravel.com/docs/7.x/queues) which will parse it over time. |
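As a rough illustration of the SQLite suggestion above, a minimal sketch (assuming the simple time/values layout from the question; the database file, table and column names are made up for the example) could decode the JSON once and persist the pairs into an indexed table, so later lookups never touch the JSON file again:
```
$pdo = new PDO('sqlite:' . __DIR__ . '/ts_data.sqlite');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE IF NOT EXISTS readings (time INTEGER PRIMARY KEY, value REAL)');
$json = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
$pdo->beginTransaction(); // a single transaction keeps the bulk insert fast
$stmt = $pdo->prepare('INSERT OR REPLACE INTO readings (time, value) VALUES (?, ?)');
foreach ($json['data']['time'] as $i => $time) {
    $stmt->execute([$time, $json['data']['values'][$i]]);
}
$pdo->commit();
unset($json); // free the decoded structure once it has been persisted
// From now on any single reading is an indexed lookup, not a 200MB+ parse:
$value = $pdo->query('SELECT value FROM readings WHERE time = 3')->fetchColumn();
```
The one-time import still pays the full decode cost, but every later access becomes a cheap indexed query.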
62,579,243 | I know my question has a lot of answers on the internet but it seems I can't find a good answer for it, so I will try to explain what I have and hope for the best,
so what I'm trying to do is reading a big JSON file that might have a more complex structure "nested objects with big arrays" than this, but for a simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and I'm using `file_get_contents()` and `json_decode()` to read the data from the file,
then I put the result in a variable and loop over the time and take the time value with the current index to get the corresponding value by index from the values array, then save the time and the value in the database, but this is taking so much CPU and Memory. Is there a better way to do this,
better functions to use, a better JSON structure to use, or maybe a better data format than JSON to do this?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach ($data["time"] as $timeIndex => $timeValue) {
    saveInDataBase($timeValue, $data["values"][$timeIndex]);
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or more and the arrays inside this JSON file might have around 100MB of data or more.
and my question is how can I get any piece of this data and navigate between its nodes in the most efficient way that will not take much RAM and CPU; I can't read the file line by line because I need to get any piece of data when I have to,
is Python, for example, more suitable for handling this big data more efficiently than PHP?
please, if you can provide a detailed answer, I think it will be of much help for everyone looking to do this big data stuff with PHP. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | Your problem is basically related to the memory management performed by each specific programming language that you might use in order to access the data from a huge file that serves as storage.
For example, when you pile up the operations by using the code that you just mentioned (as below)
`$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);`
what happens is that the memory used by the runtime Zend engine increases too much, because it has to allocate memory units to store references about each ongoing file handling involved in your code statement - keeping in memory a pointer as well, not only the real file that was opened - until this file finally gets overwritten and the memory buffer is released (freed) again. It's no wonder that if you force the execution of both the **file\_get\_contents()** function that reads the file into a string and the **json\_decode()** function, you force the interpreter to keep all 3 "things" in memory: the file itself, the reference created (the string), and also the structure (the decoded JSON).
On the contrary, if you break the statement into several ones, the memory held by the first data structure (the file) will be unloaded once the operation of "getting its content" and then writing it into another variable (or file) is fully performed. As long as you don't define a variable in which to save the data, it will still stay in memory (as a blob - with *no name*, *no storage address*, *just content*). For this reason, it is much more CPU- and RAM-effective - when working with big data - to break everything into smaller steps.
So you have first to start by simply rewriting your code as follows:
```
$somefile = file_get_contents(storage_path("test/ts/ts_big_data.json"));
$data = json_decode($somefile, true);
```
When the first line gets executed, the memory held by **ts\_big\_data.json** gets released (think of it as being purged and made available again to other processes).
When the second line gets executed, **$somefile**'s memory buffer gets released, too. The takeaway point from this is that instead of always having 3 memory buffers used just to store the data structures, you'll only have 2 at any time, ignoring of course the other memory actually used to build the structures. Not to mention that when working with arrays (and JSON files are exactly that - arrays), dynamically allocated memory increases dramatically, not linearly as we might tend to think. The bottom line is that instead of a 50% loss in performance just on storage allocation for the files (3 big buffers taking 50% more space than just 2 of them), we do better by handling the execution of the functions 'touching' these huge files in smaller steps.
In order to understand this, imagine that you access only what is needed at a certain moment in time (this is also a principle called YAGNI - You Aren't Gonna Need It - from the context of Extreme Programming practices - see the reference here <https://wiki.c2.com/?YouArentGonnaNeedIt>), something inherited from the old C or COBOL times.
The next approach to follow is to break the file in more pieces, but in a structured one (relational dependent data structure) as is in a database table / tables.
Obviously, you have to save the data pieces again **as blobs**, in the database. The advantage is that the retrieval of data from a DB is much faster than from a file (due to the indexes allocated by the SQL engine when generating and updating the tables). A table having one or two indexes can be accessed in a lightning-fast manner by a structured query. Again, the indexes are pointers to the main storage of the data.
One important topic, however: if you still want to work with the JSON (content and type of data storage - instead of tables in a DB), you cannot update it locally without changing it globally. I am not sure what you meant by reading the time-related function values in the JSON file. Do you mean that your JSON file is continuously changing? Better break it into several tables so each separate one can change without affecting the whole mega-structure of the data. Easier to manage, easier to maintain, easier to locate the changes.
***My understanding is that best solution would be to split the same file in several json files where you strip down the not needed values. BY THE WAY, DO YOU ACTUALLY NEED ALL THE STORED DATA ??***
I wouldn't come up with code now unless you explain the above issues to me (so we can have a conversation), and thereafter I will edit my answer accordingly. Yesterday I wrote a question related to the handling of blobs - and storing them on the server - in order to accelerate the execution of a data update on a server using a cron process. My data was about 25MB+, not 500+ as in your case, however I must understand the use case for your situation.
One more thing: how was that file that you must process created? Why do you manage only its final form instead of intervening while it is still being fed with data? My opinion is that you might stop storing data into it as previously done (and thus stop adding to your pain), instead turn its purpose from today on into historic data storage only, and go toward storing future data in something more elastic (such as MongoDB or other NoSQL databases).
Probably you don't need code so much as a solid and useful strategy and way of working with your data first.
***Programming comes last, after you decided all the detailed architecture of your web project.*** | My approach will be reading the `JSON FILE` in chunks.
>
> If these json objects have a consistent structure, you can easily detect when a json object in a file starts, and ends.
>
>
> Once you collect a whole object, you insert it into a db, then go on
> to the next one.
>
>
> There isn't much more to it. The algorithm to detect the beginning and
> end of a JSON object may get complicated depending on your data
> source, but I have done something like this before with a far more
> complex structure (XML) and it worked fine.
>
>
>
Above answer is taken from => [Parse large JSON file](https://stackoverflow.com/questions/15373529/parse-large-json-file)
Please see the below references, it can be helpful for your case
=> <https://laracasts.com/discuss/channels/general-discussion/how-to-open-a-28-gb-json-file-in-php> |
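As a very rough sketch of that chunked idea (not the linked answer's exact code), assuming the bulk data is a stream of top-level `{...}` objects, one could track brace depth while reading fixed-size chunks and decode each object as soon as it is complete; `handleObject()` is an assumed callback, and the brace counting is deliberately naive (it ignores braces inside string values):
```
function handleObject(array $obj): void {
    // e.g. insert the decoded object into the database here
}
$handle = fopen(storage_path("test/ts/ts_big_data.json"), 'rb');
$buffer = '';
$depth = 0;
while (!feof($handle)) {
    $chunk = fread($handle, 8192); // read 8KB at a time instead of the whole file
    foreach (str_split($chunk) as $char) {
        if ($char === '{') {
            $depth++;
        }
        if ($depth > 0) {
            $buffer .= $char;
        }
        if ($char === '}') {
            $depth--;
            if ($depth === 0 && $buffer !== '') {
                handleObject(json_decode($buffer, true)); // one complete object
                $buffer = '';
            }
        }
    }
}
fclose($handle);
```
Only the current chunk and the object being assembled are held in memory, which is the whole point of the chunked approach.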
62,579,243 | I know my question has a lot of answers on the internet but it seems I can't find a good answer for it, so I will try to explain what I have and hope for the best,
so what I'm trying to do is reading a big JSON file that might have a more complex structure "nested objects with big arrays" than this, but for a simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and I'm using `file_get_contents()` and `json_decode()` to read the data from the file,
then I put the result in a variable and loop over the time and take the time value with the current index to get the corresponding value by index from the values array, then save the time and the value in the database, but this is taking so much CPU and Memory. Is there a better way to do this,
better functions to use, a better JSON structure to use, or maybe a better data format than JSON to do this?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach ($data["time"] as $timeIndex => $timeValue) {
    saveInDataBase($timeValue, $data["values"][$timeIndex]);
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or more and the arrays inside this JSON file might have around 100MB of data or more.
and my question is how can I get any piece of this data and navigate between its nodes in the most efficient way that will not take much RAM and CPU; I can't read the file line by line because I need to get any piece of data when I have to,
is Python, for example, more suitable for handling this big data more efficiently than PHP?
please, if you can provide a detailed answer, I think it will be of much help for everyone looking to do this big data stuff with PHP. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | JSON is a great format and a way better alternative to XML.
In the end, JSON is almost one-to-one convertible to XML and back.
Big files can get bigger, so we don't want to read all the stuff into memory and we don't want to parse the whole file. I had the same issue with XXL-size JSON files.
I think the issue lies not in a specific programming language, but in the implementation and the specifics of the formats.
I have 3 solutions for you:
1. **Native PHP implementation** (*preferred*)
Almost as fast as streamed XMLReader, there is a library <https://github.com/pcrov/JsonReader>. Example:
```
use pcrov\JsonReader\JsonReader;
// Pull parsing: each read("type") moves to the next element named "type"
// without ever loading the whole document into memory.
$reader = new JsonReader();
$reader->open("data.json");
while ($reader->read("type")) {
    echo $reader->value(), "\n";
}
$reader->close();
```
This library will not read the whole file into memory or parse all the lines. It traverses the tree of the JSON object step by step, on command.
2. **Let go formats** (*cons: multiple conversions*)
Preprocess the file into a different format like XML or CSV.
There are very lightweight Node.js libs like <https://www.npmjs.com/package/json2csv> to convert JSON to CSV.
3. **Use some NoSQL DB** (*cons: additional complex software to install and maintain*)
For example Redis or CouchDB([import json file to couch db-](https://stackoverflow.com/questions/790757/import-json-file-to-couch-db)) | My approach will be reading the `JSON FILE` in chunks.
>
> If these json objects have a consistent structure, you can easily detect when a json object in a file starts, and ends.
>
>
> Once you collect a whole object, you insert it into a db, then go on
> to the next one.
>
>
> There isn't much more to it. The algorithm to detect the beginning and
> end of a JSON object may get complicated depending on your data
> source, but I have done something like this before with a far more
> complex structure (XML) and it worked fine.
>
>
>
Above answer is taken from => [Parse large JSON file](https://stackoverflow.com/questions/15373529/parse-large-json-file)
Please see the below references, it can be helpful for your case
=> <https://laracasts.com/discuss/channels/general-discussion/how-to-open-a-28-gb-json-file-in-php> |
62,579,243 | I know my question has a lot of answers on the internet but it seems I can't find a good answer for it, so I will try to explain what I have and hope for the best,
so what I'm trying to do is reading a big JSON file that might have a more complex structure "nested objects with big arrays" than this, but for a simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and I'm using `file_get_contents()` and `json_decode()` to read the data from the file,
then I put the result in a variable and loop over the time and take the time value with the current index to get the corresponding value by index from the values array, then save the time and the value in the database, but this is taking so much CPU and Memory. Is there a better way to do this,
better functions to use, a better JSON structure to use, or maybe a better data format than JSON to do this?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach ($data["time"] as $timeIndex => $timeValue) {
    saveInDataBase($timeValue, $data["values"][$timeIndex]);
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or more and the arrays inside this JSON file might have around 100MB of data or more.
and my question is how can I get any piece of this data and navigate between its nodes in the most efficient way that will not take much RAM and CPU; I can't read the file line by line because I need to get any piece of data when I have to,
is Python, for example, more suitable for handling this big data more efficiently than PHP?
please, if you can provide a detailed answer, I think it will be of much help for everyone looking to do this big data stuff with PHP. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | Your problem is basically related to the memory management performed by each specific programming language that you might use in order to access the data from a huge file that serves as storage.
For example, when you pile up the operations by using the code that you just mentioned (as below)
`$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);`
what happens is that the memory used by the runtime Zend engine increases too much, because it has to allocate memory units to store references about each ongoing file handling involved in your code statement - keeping in memory a pointer as well, not only the real file that was opened - until this file finally gets overwritten and the memory buffer is released (freed) again. It's no wonder that if you force the execution of both the **file\_get\_contents()** function that reads the file into a string and the **json\_decode()** function, you force the interpreter to keep all 3 "things" in memory: the file itself, the reference created (the string), and also the structure (the decoded JSON).
On the contrary, if you break the statement into several ones, the memory held by the first data structure (the file) will be unloaded once the operation of "getting its content" and then writing it into another variable (or file) is fully performed. As long as you don't define a variable in which to save the data, it will still stay in memory (as a blob - with *no name*, *no storage address*, *just content*). For this reason, it is much more CPU- and RAM-effective - when working with big data - to break everything into smaller steps.
So you have first to start by simply rewriting your code as follows:
```
$somefile = file_get_contents(storage_path("test/ts/ts_big_data.json"));
$data = json_decode($somefile, true);
```
When the first line gets executed, the memory held by **ts\_big\_data.json** gets released (think of it as being purged and made available again to other processes).
When the second line gets executed, **$somefile**'s memory buffer gets released, too. The takeaway point from this is that instead of always having 3 memory buffers used just to store the data structures, you'll only have 2 at any time, ignoring of course the other memory actually used to build the structures. Not to mention that when working with arrays (and JSON files are exactly that - arrays), dynamically allocated memory increases dramatically, not linearly as we might tend to think. The bottom line is that instead of a 50% loss in performance just on storage allocation for the files (3 big buffers taking 50% more space than just 2 of them), we do better by handling the execution of the functions 'touching' these huge files in smaller steps.
In order to understand this, imagine that you access only what is needed at a certain moment in time (this is also a principle called YAGNI - You Aren't Gonna Need It - from the context of Extreme Programming practices - see the reference here <https://wiki.c2.com/?YouArentGonnaNeedIt>), something inherited from the old C or COBOL times.
The next approach to follow is to break the file in more pieces, but in a structured one (relational dependent data structure) as is in a database table / tables.
Obviously, you have to save the data pieces again **as blobs**, in the database. The advantage is that the retrieval of data from a DB is much faster than from a file (due to the indexes allocated by the SQL engine when generating and updating the tables). A table having one or two indexes can be accessed in a lightning-fast manner by a structured query. Again, the indexes are pointers to the main storage of the data.
One important topic, however: if you still want to work with the JSON (content and type of data storage - instead of tables in a DB), you cannot update it locally without changing it globally. I am not sure what you meant by reading the time-related function values in the JSON file. Do you mean that your JSON file is continuously changing? Better break it into several tables so each separate one can change without affecting the whole mega-structure of the data. Easier to manage, easier to maintain, easier to locate the changes.
***My understanding is that best solution would be to split the same file in several json files where you strip down the not needed values. BY THE WAY, DO YOU ACTUALLY NEED ALL THE STORED DATA ??***
I wouldn't come up with code now unless you explain the above issues to me (so we can have a conversation), and thereafter I will edit my answer accordingly. Yesterday I wrote a question related to the handling of blobs - and storing them on the server - in order to accelerate the execution of a data update on a server using a cron process. My data was about 25MB+, not 500+ as in your case, however I must understand the use case for your situation.
One more thing: how was that file that you must process created? Why do you manage only its final form instead of intervening while it is still being fed with data? My opinion is that you might stop storing data into it as previously done (and thus stop adding to your pain), instead turn its purpose from today on into historic data storage only, and go toward storing future data in something more elastic (such as MongoDB or other NoSQL databases).
Probably you don't need code so much as a solid and useful strategy and way of working with your data first.
***Programming comes last, after you decided all the detailed architecture of your web project.*** | You may split your arrays into chunks using
`array_chunk()` Function
>
> The `array_chunk()` function is an inbuilt function in PHP which is
> used to split an array into parts or chunks of given size depending
> upon the parameters passed to the function. The last chunk may contain
> fewer elements than the desired size of the chunk.
>
>
>
Check the examples in this [link](https://www.geeksforgeeks.org/php-array_chunk-function/) |
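As a small, hedged illustration of how `array_chunk()` could be applied to the structure from the question (the chunk size and the `saveBatchInDataBase()` helper are assumptions), the decoded arrays can be processed in bounded slices instead of row by row:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
$timeChunks = array_chunk($data['data']['time'], 1000);
$valueChunks = array_chunk($data['data']['values'], 1000);
foreach ($timeChunks as $i => $times) {
    // one batched insert per 1000 pairs instead of 1000 single inserts
    $pairs = array_combine($times, $valueChunks[$i]); // time => value
    saveBatchInDataBase($pairs);
}
```
This does not reduce the cost of the initial `json_decode()`, but it keeps the per-iteration work (and the number of database round trips) bounded.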
62,579,243 | I know my question has a lot of answers on the internet but it seems I can't find a good answer for it, so I will try to explain what I have and hope for the best,
so what I'm trying to do is reading a big JSON file that might have a more complex structure "nested objects with big arrays" than this, but for a simple example:
```
{
"data": {
"time": [
1,
2,
3,
4,
5,
...
],
"values": [
1,
2,
3,
4,
6,
...
]
}
}
```
this file might be 200M or more, and I'm using `file_get_contents()` and `json_decode()` to read the data from the file,
then I put the result in a variable and loop over the time and take the time value with the current index to get the corresponding value by index from the values array, then save the time and the value in the database, but this is taking so much CPU and Memory. Is there a better way to do this,
better functions to use, a better JSON structure to use, or maybe a better data format than JSON to do this?
my code:
```
$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);
foreach ($data["time"] as $timeIndex => $timeValue) {
    saveInDataBase($timeValue, $data["values"][$timeIndex]);
}
```
thanks in advance for any help
**Update 06/29/2020:**
i have another more complex json structure example
```
{
"data": {
"set_1": {
"sub_set_1": {
"info_1": {
"details_1": {
"data_1": [1,2,3,4,5,...],
"data_2": [1,2,3,4,5,...],
"data_3": [1,2,3,4,5,...],
"data_4": [1,2,3,4,5,...],
"data_5": 10254552
},
"details_2": [
[1,2,3,4,5,...],
[1,2,3,4,5,...],
[1,2,3,4,5,...],
]
},
"info_2": {
"details_1": {
"data_1": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_2": {
"arr_1": [1,2,3,4,5,...],
"arr_2": [1,2,3,4,5,...]
},
"data_5": {
"text": "some text"
}
},
"details_2": [1,2,3,4,5,...]
}
}, ...
}, ...
}
}
```
the file size might be around 500MB or more and the arrays inside this JSON file might have around 100MB of data or more.
and my question is how can I get any piece of this data and navigate between its nodes in the most efficient way that will not take much RAM and CPU; I can't read the file line by line because I need to get any piece of data when I have to,
is Python, for example, more suitable for handling this big data more efficiently than PHP?
please, if you can provide a detailed answer, I think it will be of much help for everyone looking to do this big data stuff with PHP. | 2020/06/25 | [
"https://Stackoverflow.com/questions/62579243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2440284/"
] | Your problem is basically related to the memory management performed by each specific programming language that you might use in order to access the data from a huge file that serves as storage.
For example, when you pile up the operations by using the code that you just mentioned (as below)
`$data = json_decode(file_get_contents(storage_path("test/ts/ts_big_data.json")), true);`
what happens is that the memory used by the runtime Zend engine increases too much, because it has to allocate memory units to store references about each ongoing file handling involved in your code statement - keeping in memory a pointer as well, not only the real file that was opened - until this file finally gets overwritten and the memory buffer is released (freed) again. It's no wonder that if you force the execution of both the **file\_get\_contents()** function that reads the file into a string and the **json\_decode()** function, you force the interpreter to keep all 3 "things" in memory: the file itself, the reference created (the string), and also the structure (the decoded JSON).
On the contrary, if you break the statement into several ones, the memory held by the first data structure (the file) will be unloaded once the operation of "getting its content" and then writing it into another variable (or file) is fully performed. As long as you don't define a variable in which to save the data, it will still stay in memory (as a blob - with *no name*, *no storage address*, *just content*). For this reason, it is much more CPU- and RAM-effective - when working with big data - to break everything into smaller steps.
So you have first to start by simply rewriting your code as follows:
```
$somefile = file_get_contents(storage_path("test/ts/ts_big_data.json"));
$data = json_decode($somefile, true);
```
When the first line gets executed, the memory held by **ts\_big\_data.json** gets released (think of it as being purged and made available again to other processes).
When the second line gets executed, **$somefile**'s memory buffer gets released, too. The takeaway point from this is that instead of always having 3 memory buffers used just to store the data structures, you'll only have 2 at any time, ignoring of course the other memory actually used to build the structures. Not to mention that when working with arrays (and JSON files are exactly that - arrays), dynamically allocated memory increases dramatically, not linearly as we might tend to think. The bottom line is that instead of a 50% loss in performance just on storage allocation for the files (3 big buffers taking 50% more space than just 2 of them), we do better by handling the execution of the functions 'touching' these huge files in smaller steps.
In order to understand this, imagine that you access only what is needed at a certain moment in time (this is also a principle called YAGNI - You Aren't Gonna Need It - from the context of Extreme Programming practices - see the reference here <https://wiki.c2.com/?YouArentGonnaNeedIt>), something inherited from the old C or COBOL times.
The next approach to follow is to break the file in more pieces, but in a structured one (relational dependent data structure) as is in a database table / tables.
Obviously, you have to save the data pieces again **as blobs**, in the database. The advantage is that the retrieval of data from a DB is much faster than from a file (due to the indexes allocated by the SQL engine when generating and updating the tables). A table having one or two indexes can be accessed in a lightning-fast manner by a structured query. Again, the indexes are pointers to the main storage of the data.
One important topic, however: if you still want to work with the JSON (content and type of data storage - instead of tables in a DB), you cannot update it locally without changing it globally. I am not sure what you meant by reading the time-related function values in the JSON file. Do you mean that your JSON file is continuously changing? Better break it into several tables so each separate one can change without affecting the whole mega-structure of the data. Easier to manage, easier to maintain, easier to locate the changes.
***My understanding is that best solution would be to split the same file in several json files where you strip down the not needed values. BY THE WAY, DO YOU ACTUALLY NEED ALL THE STORED DATA ??***
I wouldn't come up with code now unless you explain the above issues to me (so we can have a conversation), and thereafter I will edit my answer accordingly. Yesterday I wrote a question related to the handling of blobs - and storing them on the server - in order to accelerate the execution of a data update on a server using a cron process. My data was about 25MB+, not 500+ as in your case, however I must understand the use case for your situation.
One more thing: how was that file that you must process created? Why do you manage only its final form instead of intervening while it is still being fed with data? My opinion is that you might stop storing data into it as previously done (and thus stop adding to your pain), instead turn its purpose from today on into historic data storage only, and go toward storing future data in something more elastic (such as MongoDB or other NoSQL databases).
Probably you don't need code so much as a solid and useful strategy and way of working with your data first.
***Programming comes last, after you decided all the detailed architecture of your web project.*** | **Try Reducing Your Bulk Data Complexity For Faster File I/O**
JSON is a great format to store data in, but it comes at the cost of needing to read the entire file to parse it.
Making your data structure simpler but more spread out across several files can allow you to read a file line by line, which is much faster than all at once. This also comes with the benefit of not needing to store the entire file in RAM all at once, so it is more friendly to resource-limited environments.
**This might look something like this:**
objects.json
```
{
"data": {
"times_file": "/some/path/objects/object-123/object-123-times.csv",
"values_file": "/some/path/objects/object-123/object-123-times.csv"
}
}
```
object-123-times.csv
```
1
2
3
4
...
```
This would allow you to store your bulk data in a simpler but easier to access format. You could then use something like [`fgetcsv()`](https://www.php.net/manual/en/function.fgetcsv.php) to parse each line. |
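A minimal sketch of how reading those CSV files could look (the paths mirror the example layout above, and `saveInDataBase()` is the helper assumed in the question); only one row per file is held in memory at a time:
```
$index = json_decode(file_get_contents('objects.json'), true);
$times = fopen($index['data']['times_file'], 'rb');
$values = fopen($index['data']['values_file'], 'rb');
// advance both files one row at a time; rows are matched by position
while (($t = fgetcsv($times)) !== false && ($v = fgetcsv($values)) !== false) {
    saveInDataBase($t[0], $v[0]);
}
fclose($times);
fclose($values);
```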
39,942,061 | I'm having a weird problem with a piece of python code.
The idea how it should work:
1. a barcode is entered (hardcoded for the moment);
2. barcode is looked up in local mysqldb, if not found, the barcode is looked up via api from datakick, if it's not found there either, step 3
3. i want to add the barcode to my local mysqldatabase and request some input.
Now the problem: it works! as long as you fill in numbers for the `naamProduct`. If you use letters (e.g. I filled in Bla as the product name), I get a weird SQL error `(_mysql_exceptions.OperationalError: (1054, "Unknown column 'Bla' in 'field list'")`
I have checked the tables in MySQL and the types are all OK. The column where the name should end up is of type text. I have also tried a hardcoded string, which works fine. Using the SQL query from the MySQL console also works perfectly. My guess is something is going wrong with the input part, but I can't figure out what.
(code is still not really tidy with the exceptions, I know ;) Working on it step by step)
```
def barcodeFunctie(sql):
con = mdb.connect ('localhost', 'python', 'python', 'stock')
cur = con.cursor()
cur.execute(sql)
ver = cur.fetchone();
con.commit()
con.close()
return ver
#barcode = '8710624957278'
#barcode = '2147483647'
barcode = '123'
#zoeken op barcode. Barcode is ook de sleutel in de tabel.
sql = "select * from Voorraad where Id=%s" % barcode
if barcodeFunctie(sql) == "None":
print "geen output"
else:
try:
url='https://www.datakick.org/api/items/'+barcode
data = json.load(urllib2.urlopen(url))
print data['brand_name'], data['name']
except:
#barcode komt niet voor in eigen db en niet in db van datakick, in beide toevoegen
print barcode, " barcode als input"
naamProduct = str(raw_input("Wat is de naam van het product? "))
hoeveelheidProduct = raw_input("Hoeveel inhoud heeft het product? ")
sql = "insert into Voorraad (Id, NaamProduct,HoeveelHeidProduct) values (%s,%s,%s)" % (barcode, naamProduct, hoeveelheidProduct)
barcodeFunctie(sql)
print "meuktoegevoegd! :D"
```
| 2016/10/09 | [
"https://Stackoverflow.com/questions/39942061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6944323/"
] | `UNORDERED` essentially means that the collector is both associative (required by the spec) and commutative (not required).
Associativity allows splitting the computation into subparts and then combining them into the full result, but requires the combining step to be strictly ordered. Examine this snippet from the [docs](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collector.html):
```
A a2 = supplier.get();
accumulator.accept(a2, t1);
A a3 = supplier.get();
accumulator.accept(a3, t2);
R r2 = finisher.apply(combiner.apply(a2, a3)); // result with splitting
```
In the last step, `combiner.apply(a2, a3)`, the arguments must appear in exactly this order, which means that the entire computation pipeline must track the order and respect it in the end.
Another way of saying this is that the tree we get from recursive splitting must be ordered.
On the other hand, if the combining operation is commutative, we can combine any subpart with any other, in no particular order, and always obtain the same result. Clearly this leads to many optimization opportunities in both space and time dimensions.
It should be noted that there are `UNORDERED` collectors in the JDK which don't guarantee commutativity. The main category is the "higher-order" collectors which are composed with other downstream collectors, but they don't enforce the `UNORDERED` property on them. | The inner `Collector.Characteristics` class itself is fairly terse in its description, but if you spend a few seconds exploring the context you will notice that the containing [Collector](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collector.html) interface provides additional information
>
> For collectors that do not have the UNORDERED characteristic, two accumulated results a1 and a2 are equivalent if finisher.apply(a1).equals(finisher.apply(a2)). For unordered collectors, equivalence is relaxed to allow for non-equality related to differences in order. (For example, an unordered collector that accumulated elements to a List would consider two lists equivalent if they contained the same elements, ignoring order.)
>
>
>
---
>
> In OpenJDK looks like reducing operations (min, sum, avg) have empty characteristics, I expected to find there at least CONCURRENT and UNORDERED.
>
>
>
At least doubles summation and averages are definitely ordered and not concurrent, because the summation logic uses subresult merging, not a thread-safe accumulator. |
39,942,061 | I'm having a weird problem with a piece of python code.
The idea how it should work:
1. a barcode is entered (hardcoded for the moment);
2. barcode is looked up in local mysqldb, if not found, the barcode is looked up via api from datakick, if it's not found there either, step 3
3. i want to add the barcode to my local mysqldatabase and request some input.
Now the problem: it works! as long as you fill in numbers for the `naamProduct`. If you use letters (e.g. I filled in Bla as the product name), I get a weird SQL error `(_mysql_exceptions.OperationalError: (1054, "Unknown column 'Bla' in 'field list'")`
I have checked the tables in MySQL and the types are all OK. The column where the name should end up is of type text. I have also tried a hardcoded string, which works fine. Using the SQL query from the MySQL console also works perfectly. My guess is something is going wrong with the input part, but I can't figure out what.
(code is still not really tidy with the exceptions, I know ;) Working on it step by step)
```
def barcodeFunctie(sql):
con = mdb.connect ('localhost', 'python', 'python', 'stock')
cur = con.cursor()
cur.execute(sql)
ver = cur.fetchone();
con.commit()
con.close()
return ver
#barcode = '8710624957278'
#barcode = '2147483647'
barcode = '123'
#zoeken op barcode. Barcode is ook de sleutel in de tabel.
sql = "select * from Voorraad where Id=%s" % barcode
if barcodeFunctie(sql) == "None":
print "geen output"
else:
try:
url='https://www.datakick.org/api/items/'+barcode
data = json.load(urllib2.urlopen(url))
print data['brand_name'], data['name']
except:
#barcode komt niet voor in eigen db en niet in db van datakick, in beide toevoegen
print barcode, " barcode als input"
naamProduct = str(raw_input("Wat is de naam van het product? "))
hoeveelheidProduct = raw_input("Hoeveel inhoud heeft het product? ")
sql = "insert into Voorraad (Id, NaamProduct,HoeveelHeidProduct) values (%s,%s,%s)" % (barcode, naamProduct, hoeveelheidProduct)
barcodeFunctie(sql)
print "meuktoegevoegd! :D"
```
| 2016/10/09 | [
"https://Stackoverflow.com/questions/39942061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6944323/"
] | In the absence of special pleading, stream operations must behave as if the elements are processed in the encounter order of the source. For some operations -- such as reduction with an associative operation -- one can obey this constraint and still get efficient parallel execution. For others, though, this constraint is very limiting. And, for some problems, this constraint isn't meaningful to the user. Consider the following stream pipeline:
```
people.stream()
.collect(groupingBy(Person::getLastName,
mapping(Person::getFirstName));
```
Is it important that the list of first names associated with "Smith" appear in the map in the order they appeared in the initial stream? For some problems, yes, for some no -- we don't want the stream library guessing for us. An unordered collector says that it's OK to insert the first names into the list in an order inconsistent with the order in which Smith-surnamed people appear in the input source. By relaxing this constraint, sometimes (not always), the stream library can give a more efficient execution.
For example, if you didn't care about this order preservation, you could execute it as:
```
people.parallelStream()
.collect(groupingByConcurrent(Person::getLastName,
mapping(Person::getFirstName));
```
The concurrent collector is unordered, which permits the optimization of sharing an underlying `ConcurrentMap`, rather than having `O(log n)` map-merge steps. Relaxing the ordering constraint enables a real algorithmic advantage -- but we can't assume the constraint doesn't matter, we need for the user to tell us this. Using an `UNORDERED` collector is one way to tell the stream library that these optimizations are fair game. | The inner `Collector.Characteristics` class itself is fairly terse in its description, but if you spend a few seconds exploring the context you will notice that the containing [Collector](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collector.html) interface provides additional information
>
> For collectors that do not have the UNORDERED characteristic, two accumulated results a1 and a2 are equivalent if finisher.apply(a1).equals(finisher.apply(a2)). For unordered collectors, equivalence is relaxed to allow for non-equality related to differences in order. (For example, an unordered collector that accumulated elements to a List would consider two lists equivalent if they contained the same elements, ignoring order.)
>
>
>
---
>
> In OpenJDK looks like reducing operations (min, sum, avg) have empty characteristics, I expected to find there at least CONCURRENT and UNORDERED.
>
>
>
At least doubles summation and averages are definitely ordered and not concurrent, because the summation logic uses subresult merging, not a thread-safe accumulator. |
15,526,996 | After installing the latest [Mac OSX 64-bit Anaconda Python distribution](http://continuum.io/downloads.html), I keep getting a ValueError when trying to start the IPython Notebook.
Starting ipython works fine:
```
3-millerc-~:ipython
Python 2.7.3 |Anaconda 1.4.0 (x86_64)| (default, Feb 25 2013, 18:45:56)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
```
But starting ipython notebook:
```
4-millerc-~:ipython notebook
```
Results in the ValueError (with traceback):
```
Traceback (most recent call last):
File "/Users/millerc/anaconda/bin/ipython", line 7, in <module>
launch_new_instance()
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 388, in launch_new_instance
app.initialize()
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 313, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/core/application.py", line 325, in initialize
self.parse_command_line(argv)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 308, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 420, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 352, in initialize_subcommand
subapp = import_item(subapp)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/utils/importstring.py", line 40, in import_item
module = __import__(package,fromlist=[obj])
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/notebookapp.py", line 46, in <module>
from .handlers import (LoginHandler, LogoutHandler,
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/handlers.py", line 36, in <module>
from docutils.core import publish_string
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
from docutils.io import FileOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 503, in getdefaultlocale
return _parse_localename(localename)
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 435, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
```
Running the `locale` command from the terminal:
```
5-millerc-~:locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
``` | 2013/03/20 | [
"https://Stackoverflow.com/questions/15526996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655733/"
] | I summarize here the solution to be found on: <http://blog.lobraun.de/2009/04/11/mercurial-on-mac-os-x-valueerror-unknown-locale-utf-8/>
I added these lines to my `.bash_profile`:
```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
I reloaded the profile:
```
source ~/.bash_profile
```
I then ran `ipython` again:
```
ipython notebook
```
Changing locales
----------------
The above will work for the English language in a US locale. One may want different settings.
At the risk of stating the obvious, to discover the current settings for your system, use:
```
$ locale
```
And to retrieve a list of all valid settings on your system:
```
$ locale -a
```
Then choose your preferred locale. For example, for a Swiss French locale, the solution would look like this:
```
export LC_ALL=fr_CH.UTF-8
export LANG=fr_CH.UTF-8
``` | As your `LC_CTYPE` is wrong, you should find out where that wrong value is set and change it to something like `en_US.UTF-8`. |
15,526,996 | After installing the latest [Mac OSX 64-bit Anaconda Python distribution](http://continuum.io/downloads.html), I keep getting a ValueError when trying to start the IPython Notebook.
Starting ipython works fine:
```
3-millerc-~:ipython
Python 2.7.3 |Anaconda 1.4.0 (x86_64)| (default, Feb 25 2013, 18:45:56)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
```
But starting ipython notebook:
```
4-millerc-~:ipython notebook
```
Results in the ValueError (with traceback):
```
Traceback (most recent call last):
File "/Users/millerc/anaconda/bin/ipython", line 7, in <module>
launch_new_instance()
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 388, in launch_new_instance
app.initialize()
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 313, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/core/application.py", line 325, in initialize
self.parse_command_line(argv)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 308, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 420, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 352, in initialize_subcommand
subapp = import_item(subapp)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/utils/importstring.py", line 40, in import_item
module = __import__(package,fromlist=[obj])
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/notebookapp.py", line 46, in <module>
from .handlers import (LoginHandler, LogoutHandler,
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/handlers.py", line 36, in <module>
from docutils.core import publish_string
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
from docutils.io import FileOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 503, in getdefaultlocale
return _parse_localename(localename)
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 435, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
```
Running the `locale` command from the terminal:
```
5-millerc-~:locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
``` | 2013/03/20 | [
"https://Stackoverflow.com/questions/15526996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655733/"
] | I summarize here the solution to be found on: <http://blog.lobraun.de/2009/04/11/mercurial-on-mac-os-x-valueerror-unknown-locale-utf-8/>
I added these lines to my `.bash_profile`:
```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
I reloaded the profile:
```
source ~/.bash_profile
```
I then ran `ipython` again:
```
ipython notebook
```
Changing locales
----------------
The above will work for the English language in a US locale. One may want different settings.
At the risk of stating the obvious, to discover the current settings for your system, use:
```
$ locale
```
And to retrieve a list of all valid settings on your system:
```
$ locale -a
```
Then choose your preferred locale. For example, for a Swiss French locale, the solution would look like this:
```
export LC_ALL=fr_CH.UTF-8
export LANG=fr_CH.UTF-8
``` | This is a bug in the OS X Terminal app that only shows up in certain locales (country/language combinations). Open Terminal in /Applications/Utilities and uncheck the box “Set locale environment variables on startup”.
[](https://i.stack.imgur.com/EwOj7.jpg)
This will set your `LANG` environment variable to be empty. This may cause Terminal to use incorrect settings for your locale. The `locale` command in the Terminal will tell you what settings are used. To use the correct language, add a line to your bash profile (typically `~/.profile`)
```
export LANG=your-lang
```
Replace `your-lang` with the correct locale specifier for your language. The command `locale -a` will show you all the specifiers. For example, the language code for US English is `en_US.UTF-8`. The locale affects what translations are used when they are available, and also how dates, currencies, and decimals are formatted.
Note, this image and content were taken from <http://conda.pydata.org/docs/troubleshooting.html#unknown-locale> (I'm also the original author of that page). |
15,526,996 | After installing the latest [Mac OSX 64-bit Anaconda Python distribution](http://continuum.io/downloads.html), I keep getting a ValueError when trying to start the IPython Notebook.
Starting ipython works fine:
```
3-millerc-~:ipython
Python 2.7.3 |Anaconda 1.4.0 (x86_64)| (default, Feb 25 2013, 18:45:56)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
```
But starting ipython notebook:
```
4-millerc-~:ipython notebook
```
Results in the ValueError (with traceback):
```
Traceback (most recent call last):
File "/Users/millerc/anaconda/bin/ipython", line 7, in <module>
launch_new_instance()
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 388, in launch_new_instance
app.initialize()
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 313, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/core/application.py", line 325, in initialize
self.parse_command_line(argv)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 308, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 420, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 352, in initialize_subcommand
subapp = import_item(subapp)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/utils/importstring.py", line 40, in import_item
module = __import__(package,fromlist=[obj])
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/notebookapp.py", line 46, in <module>
from .handlers import (LoginHandler, LogoutHandler,
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/handlers.py", line 36, in <module>
from docutils.core import publish_string
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
from docutils.io import FileOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 503, in getdefaultlocale
return _parse_localename(localename)
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 435, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
```
Running the `locale` command from the terminal:
```
5-millerc-~:locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
``` | 2013/03/20 | [
"https://Stackoverflow.com/questions/15526996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655733/"
] | I summarize here the solution to be found on: <http://blog.lobraun.de/2009/04/11/mercurial-on-mac-os-x-valueerror-unknown-locale-utf-8/>
I added these lines to my `.bash_profile`:
```
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
```
I reloaded the profile:
```
source ~/.bash_profile
```
I then ran `ipython` again:
```
ipython notebook
```
Changing locales
----------------
The above will work for the English language in a US locale. One may want different settings.
At the risk of stating the obvious, to discover the current settings for your system, use:
```
$ locale
```
And to retrieve a list of all valid settings on your system:
```
$ locale -a
```
Then choose your preferred locale. For example, for a Swiss French locale, the solution would look like this:
```
export LC_ALL=fr_CH.UTF-8
export LANG=fr_CH.UTF-8
``` | in iTerm going to the menu
```
Preferences -> Profiles -> Terminal -> (Environment)
```
and then unchecking
```
"Set locale variables automatically"
```
made this error go away. |
15,526,996 | After installing the latest [Mac OSX 64-bit Anaconda Python distribution](http://continuum.io/downloads.html), I keep getting a ValueError when trying to start the IPython Notebook.
Starting ipython works fine:
```
3-millerc-~:ipython
Python 2.7.3 |Anaconda 1.4.0 (x86_64)| (default, Feb 25 2013, 18:45:56)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
```
But starting ipython notebook:
```
4-millerc-~:ipython notebook
```
Results in the ValueError (with traceback):
```
Traceback (most recent call last):
File "/Users/millerc/anaconda/bin/ipython", line 7, in <module>
launch_new_instance()
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 388, in launch_new_instance
app.initialize()
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 313, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/core/application.py", line 325, in initialize
self.parse_command_line(argv)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 308, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 420, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 352, in initialize_subcommand
subapp = import_item(subapp)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/utils/importstring.py", line 40, in import_item
module = __import__(package,fromlist=[obj])
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/notebookapp.py", line 46, in <module>
from .handlers import (LoginHandler, LogoutHandler,
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/handlers.py", line 36, in <module>
from docutils.core import publish_string
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
from docutils.io import FileOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 503, in getdefaultlocale
return _parse_localename(localename)
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 435, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
```
Running the `locale` command from the terminal:
```
5-millerc-~:locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
``` | 2013/03/20 | [
"https://Stackoverflow.com/questions/15526996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655733/"
] | This is a bug in the OS X Terminal app that only shows up in certain locales (country/language combinations). Open Terminal in /Applications/Utilities and uncheck the box “Set locale environment variables on startup”.
[](https://i.stack.imgur.com/EwOj7.jpg)
This will set your `LANG` environment variable to be empty. This may cause the terminal to use incorrect settings for your locale. The locale command in the Terminal will tell you what settings are used. To use the correct language, add a line to your bash profile (typically `~/.profile`)
```
export LANG=your-lang
```
Replace `your-lang` with the correct locale specifier for your language. The command `locale -a` will show you all the specifiers. For example, the language code for US English is `en_US.UTF-8`. The locale affects what translations are used when they are available, and also how dates, currencies, and decimals are formatted.
Note, this image and content were taken from <http://conda.pydata.org/docs/troubleshooting.html#unknown-locale> (I'm also the original author of that page). | As your `LC_CTYPE` is wrong, you should find out where that wrong value is set and change it to something like `en_US.UTF-8`. |
15,526,996 | After installing the latest [Mac OSX 64-bit Anaconda Python distribution](http://continuum.io/downloads.html), I keep getting a ValueError when trying to start the IPython Notebook.
Starting ipython works fine:
```
3-millerc-~:ipython
Python 2.7.3 |Anaconda 1.4.0 (x86_64)| (default, Feb 25 2013, 18:45:56)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
```
But starting ipython notebook:
```
4-millerc-~:ipython notebook
```
Results in the ValueError (with traceback):
```
Traceback (most recent call last):
File "/Users/millerc/anaconda/bin/ipython", line 7, in <module>
launch_new_instance()
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 388, in launch_new_instance
app.initialize()
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 313, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/core/application.py", line 325, in initialize
self.parse_command_line(argv)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 308, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 420, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 352, in initialize_subcommand
subapp = import_item(subapp)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/utils/importstring.py", line 40, in import_item
module = __import__(package,fromlist=[obj])
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/notebookapp.py", line 46, in <module>
from .handlers import (LoginHandler, LogoutHandler,
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/handlers.py", line 36, in <module>
from docutils.core import publish_string
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
from docutils.io import FileOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 503, in getdefaultlocale
return _parse_localename(localename)
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 435, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
```
Running the `locale` command from the terminal:
```
5-millerc-~:locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
``` | 2013/03/20 | [
"https://Stackoverflow.com/questions/15526996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655733/"
] | in iTerm going to the menu
```
Preferences -> Profiles -> Terminal -> (Environment)
```
and then unchecking
```
"Set locale variables automatically"
```
made this error go away. | As your `LC_CTYPE` is wrong, you should find out where that wrong value is set and change it to something like `en_US.UTF-8`. |
15,526,996 | After installing the latest [Mac OSX 64-bit Anaconda Python distribution](http://continuum.io/downloads.html), I keep getting a ValueError when trying to start the IPython Notebook.
Starting ipython works fine:
```
3-millerc-~:ipython
Python 2.7.3 |Anaconda 1.4.0 (x86_64)| (default, Feb 25 2013, 18:45:56)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
```
But starting ipython notebook:
```
4-millerc-~:ipython notebook
```
Results in the ValueError (with traceback):
```
Traceback (most recent call last):
File "/Users/millerc/anaconda/bin/ipython", line 7, in <module>
launch_new_instance()
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 388, in launch_new_instance
app.initialize()
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 313, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/core/application.py", line 325, in initialize
self.parse_command_line(argv)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/terminal/ipapp.py", line 308, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 420, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 352, in initialize_subcommand
subapp = import_item(subapp)
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/utils/importstring.py", line 40, in import_item
module = __import__(package,fromlist=[obj])
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/notebookapp.py", line 46, in <module>
from .handlers import (LoginHandler, LogoutHandler,
File "/Users/millerc/anaconda/lib/python2.7/site-packages/IPython/frontend/html/notebook/handlers.py", line 36, in <module>
from docutils.core import publish_string
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/core.py", line 20, in <module>
from docutils import frontend, io, utils, readers, writers
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/frontend.py", line 41, in <module>
import docutils.utils
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in <module>
from docutils.io import FileOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/io.py", line 18, in <module>
from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Users/millerc/anaconda/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 503, in getdefaultlocale
return _parse_localename(localename)
File "/Users/millerc/anaconda/lib/python2.7/locale.py", line 435, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
```
Running the `locale` command from the terminal:
```
5-millerc-~:locale
LANG=
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
``` | 2013/03/20 | [
"https://Stackoverflow.com/questions/15526996",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/655733/"
] | This is a bug in the OS X Terminal app that only shows up in certain locales (country/language combinations). Open Terminal in /Applications/Utilities and uncheck the box “Set locale environment variables on startup”.
[](https://i.stack.imgur.com/EwOj7.jpg)
This will set your `LANG` environment variable to be empty. This may cause the terminal to use incorrect settings for your locale. The locale command in the Terminal will tell you what settings are used. To use the correct language, add a line to your bash profile (typically `~/.profile`)
```
export LANG=your-lang
```
Replace `your-lang` with the correct locale specifier for your language. The command `locale -a` will show you all the specifiers. For example, the language code for US English is `en_US.UTF-8`. The locale affects what translations are used when they are available, and also how dates, currencies, and decimals are formatted.
Note, this image and content were taken from <http://conda.pydata.org/docs/troubleshooting.html#unknown-locale> (I'm also the original author of that page). | in iTerm going to the menu
```
Preferences -> Profiles -> Terminal -> (Environment)
```
and then unchecking
```
"Set locale variables automatically"
```
made this error go away. |
26,005,454 | I am creating a fast method of generating a list of primes in the range(0, limit+1). In the function I end up removing all integers in the list named removable from the list named primes. I am looking for a fast and pythonic way of removing the integers, knowing that both lists are always sorted.
I might be wrong, but I believe list.remove(n) iterates over the list comparing each element with n, meaning that the following code runs in O(n^2) time.
```
# removable and primes are both sorted lists of integers
for composite in removable:
primes.remove(composite)
```
Based on my assumption (which could be wrong and please confirm whether or not this is correct) and the fact that both lists are always sorted, I would think that the following code runs faster, since it only loops over the list once, in O(n) time. However, it is not at all pythonic or clean.
```
i = 0
j = 0
while i < len(primes) and j < len(removable):
if primes[i] == removable[j]:
primes = primes[:i] + primes[i+1:]
j += 1
else:
i += 1
```
Is there perhaps a built in function or simpler way of doing this? And what is the fastest way?
Side notes: I have not actually timed the functions or code above. Also, it doesn't matter if the list removable is changed/destroyed in the process.
For anyone interested, the full function is below:
```
import math
# returns a list of primes in range(0, limit+1)
def fastPrimeList(limit):
if limit < 2:
return list()
sqrtLimit = int(math.ceil(math.sqrt(limit)))
primes = [2] + range(3, limit+1, 2)
index = 1
while primes[index] <= sqrtLimit:
removable = list()
index2 = index
while primes[index] * primes[index2] <= limit:
composite = primes[index] * primes[index2]
removable.append(composite)
index2 += 1
for composite in removable:
primes.remove(composite)
index += 1
return primes
``` | 2014/09/23 | [
"https://Stackoverflow.com/questions/26005454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3987360/"
] | This is quite fast and clean, it does `O(n)` set membership checks, and in amortized time it runs in `O(n)` (first line is `O(n)` amortized, second line is `O(n * 1)` amortized, because a membership check is `O(1)` amortized):
```
removable_set = set(removable)
primes = [p for p in primes if p not in removable_set]
```
Here is the modification of your 2nd solution. It does `O(n)` basic operations (worst case):
```
tmp = []
i = j = 0
while i < len(primes) and j < len(removable):
if primes[i] < removable[j]:
tmp.append(primes[i])
i += 1
elif primes[i] == removable[j]:
i += 1
else:
j += 1
primes[:i] = tmp
del tmp
```
Please note that constants also matter. The Python interpreter is quite slow (i.e. with a large constant) to execute Python code. The 2nd solution has lots of Python code, and it can indeed be slower for small practical values of n than the solution with `set`s, because the `set` operations are implemented in C, thus they are fast (i.e. with a small constant).
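For instance, a rough way to measure this with `timeit` (the input sizes below are arbitrary, just for illustration; repeat the call with the merge-based statement to compare):
```
import timeit

setup = """
import random
random.seed(0)
primes = sorted(random.sample(range(10**6), 50000))
removable = sorted(random.sample(primes, 10000))
"""

set_version = """
removable_set = set(removable)
result = [p for p in primes if p not in removable_set]
"""

# time the set-based version; swap in the other solution's code to compare
print(timeit.timeit(set_version, setup=setup, number=20))
```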
If you have multiple working solutions, run them on typical input sizes, and measure the time. You may get surprised about their relative speed, often it is not what you would predict. | The most important thing here is to remove the quadratic behavior. You have this for two reasons.
First, calling `remove` searches the entire list for values to remove. Doing this takes linear time, and you're doing it once for each element in `removable`, so your total time is `O(NM)` (where `N` is the length of `primes` and `M` is the length of `removable`).
Second, removing elements from the middle of a list forces you to shift the whole rest of the list up one slot. So, each one takes linear time, and again you're doing it `M` times, so again it's `O(NM)`.
---
How can you avoid these?
For the first, you either need to take advantage of the sorting, or just use something that allows you to do constant-time lookups instead of linear-time, like a `set`.
For the second, you either need to create a list of indices to delete and then do a second pass to move each element up the appropriate number of indices all at once, or just build a new list instead of trying to mutate the original in-place.
So, there are a variety of options here. Which one is best? It almost certainly doesn't matter; changing your `O(NM)` time to just `O(N+M)` will probably be more than enough of an optimization that you're happy with the results. But if you need to squeeze out more performance, then you'll have to implement all of them and test them on realistic data.
The only one of these that I think isn't obvious is how to "use the sorting". The idea is to use the same kind of staggered-zip iteration that you'd use in a merge sort, like this:
```
def sorted_subtract(seq1, seq2):
    """Yield the items of sorted seq1 that do not appear in sorted seq2."""
    i1, i2 = 0, 0
    while i1 < len(seq1):
        # skip seq2 values that are smaller than the current seq1 value
        while i2 < len(seq2) and seq2[i2] < seq1[i1]:
            i2 += 1
        if i2 == len(seq2):
            # seq2 is exhausted; keep everything that is left in seq1
            yield from seq1[i1:]
            return
        if seq1[i1] == seq2[i2]:
            i1 += 1           # present in seq2, so drop it
        else:
            yield seq1[i1]    # not in seq2, so keep it
            i1 += 1
``` |
59,160,291 | Is there a way to simplify these static methods in python? I'm looking to reduce typing of the arguments every time I need to use a function.
```
class Ibeam:
def __init__ (self, b1, tf1, tw, h, b2, tf2, rt, rb):
self.b1 = b1
self.tf1 = tf1
self.tw = tw
self.h = h
self.b2 = b2
self.tf2 = tf2
self.rt = rt
self.rb = rb
def area (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
area = b1*tf1+tw*dw+b2*tf2+2*circularspandrel.area(rt)+2*circularspandrel.area(rb)
return area
def distToOriginZ (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
Dist = collections.namedtuple('Dist', 'ytf1 yw ytf2')
dist = Dist(ytf1 = h - rectangle.centroid(b1,tf1).ez, yw = rectangle.centroid(tw,dw).ez + tf2, ytf2 = rectangle.centroid(b2,tf2))
return dist
def areaMoment (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
sum = (rectangle.area(b1, tf1)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).ytf1) + (rectangle.area(tw, dw)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).yw) + (rectangle.area(b2,tf2)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).ytf2)
return sum
def centroidZ (b1, tf1, tw, h, b2, tf2, rt, rb):
ez = Ibeam.areaMoment (b1, tf1, tw, h, b2, tf2, rt, rb)/Ibeam.area(b1, tf1, tw, h, b2, tf2, rt, rb)
return ez
``` | 2019/12/03 | [
"https://Stackoverflow.com/questions/59160291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448859/"
] | You could use good default values if such exist.
```py
def area(b1=None, tf1=None, tw=None, h=None, b2=None, tf2=None, rt=None, rb=None):
....
```
An even better solution would be to design your class in a way that it does not require so many parameters. | When having functions with many arguments it might be useful to think about "related" arguments and group them together. For example, consider a function that calculates the distance between two points. You could write a function like the following:
```
def distance(x1, y1, x2, y2):
...
return distance
print(distance(1, 2, 3, 4))
```
In that case, the values `x1, y1` and `x2, y2` are both very closely related. You could group these together. Python gives you many options, some more expressive, some less expressive.
Your code examples look very similar, and I believe you could benefit from grouping them together.
The advantages of grouping related variables together are mainly that you reduce the number of required arguments (what you ask for), but *most importantly it gives you a chance to document these variables by giving them better names*.
```
"""
Simple Tuples
"""
def distance(point_a, point_b):
x1, y1 = point_a
x2, y2 = point_b
...
return distance
print(distance((1, 2), (3, 4)))
```
This is a quick-win, but it is not very expressive. So you could spice this up with named-tuples (typed or untyped) or even a full-blown object. For example:
```
"""
Simple named tuples (very similar to tuples, but better error-messages/repr)
"""
from collections import namedtuple
Point = namedtuple('Point', 'x, y')
def distance(point_a, point_b):
x1, y1 = point_a
x2, y2 = point_b
...
return distance
print(distance(Point(1, 2), Point(3, 4)))
```
```
"""
Typed named tuples (in case you want to use typing)
"""
from typing import NamedTuple
Point = NamedTuple('Point', [
('x', float),
('y', float),
])
def distance(point_a: Point, point_b: Point) -> float:
x1, y1 = point_a
x2, y2 = point_b
...
return distance
print(distance(Point(1, 2), Point(3, 4)))
```
```
"""
Custom Object
"""
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def distance(point_a: Point, point_b: Point) -> float:
x1, y1 = point_a.x, point_a.y
x2, y2 = point_b.x, point_b.y
...
return distance
print(distance(Point(1, 2), Point(3, 4)))
``` |
59,160,291 | Is there a way to simplify these static methods in python? I'm looking to reduce typing of the arguments every time I need to use a function.
```
class Ibeam:
def __init__ (self, b1, tf1, tw, h, b2, tf2, rt, rb):
self.b1 = b1
self.tf1 = tf1
self.tw = tw
self.h = h
self.b2 = b2
self.tf2 = tf2
self.rt = rt
self.rb = rb
def area (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
area = b1*tf1+tw*dw+b2*tf2+2*circularspandrel.area(rt)+2*circularspandrel.area(rb)
return area
def distToOriginZ (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
Dist = collections.namedtuple('Dist', 'ytf1 yw ytf2')
dist = Dist(ytf1 = h - rectangle.centroid(b1,tf1).ez, yw = rectangle.centroid(tw,dw).ez + tf2, ytf2 = rectangle.centroid(b2,tf2))
return dist
def areaMoment (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
sum = (rectangle.area(b1, tf1)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).ytf1) + (rectangle.area(tw, dw)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).yw) + (rectangle.area(b2,tf2)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).ytf2)
return sum
def centroidZ (b1, tf1, tw, h, b2, tf2, rt, rb):
ez = Ibeam.areaMoment (b1, tf1, tw, h, b2, tf2, rt, rb)/Ibeam.area(b1, tf1, tw, h, b2, tf2, rt, rb)
return ez
``` | 2019/12/03 | [
"https://Stackoverflow.com/questions/59160291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448859/"
] | I'm not sure if it's what you are looking for:
But for me it looks like you want to have a class and use the functions in it.
```
class Ibeam:
def __init__ (self, b1, tf1, tw, h, b2, tf2, rt, rb):
self.b1 = b1
self.tf1 = tf1
self.tw = tw
self.h = h
self.b2 = b2
self.tf2 = tf2
self.rt = rt
self.rb = rb
def area (self):
dw = self.h - self.tf1 - self.tf2
area = self.b1*self.tf1+self.tw*dw+self.b2*self.tf2+2*circularspandrel.area(self.rt)+2*circularspandrel.area(self.rb)
return area
def distToOriginZ (self):
dw = self.h - self.tf1 - self.tf2
Dist = collections.namedtuple('Dist', 'ytf1 yw ytf2')
        dist = Dist(ytf1 = self.h - rectangle.centroid(self.b1,self.tf1).ez, yw = rectangle.centroid(self.tw,dw).ez + self.tf2, ytf2 = rectangle.centroid(self.b2,self.tf2))
return dist
def areaMoment (self):
dw = self.h - self.tf1 - self.tf2
        sum = (rectangle.area(self.b1, self.tf1)*self.distToOriginZ().ytf1) + (rectangle.area(self.tw, dw)*self.distToOriginZ().yw) + (rectangle.area(self.b2,self.tf2)*self.distToOriginZ().ytf2)
return sum
def centroidZ (self):
ez = self.areaMoment ()/self.area()
return ez
```
Now you can do the following:
```
beam = Ibeam(1,1,1,1,1,1,1,1)
print(beam.area())
print(beam.distToOriginZ())
print(beam.areaMoment())
print(beam.centroidZ())
```
With this you don't have to write that many parameters and you get proper encapsulation.
With this approach you create a Class Ibeam with properties.
And in this approach you are even using this properties. Before you didn't use them at all. The disadvantage is you have to create a class before, if that is not what you want use the approach with default variable and declare it static. | You could use good default values if such exist.
```py
def area(b1=None, tf1=None, tw=None, h=None, b2=None, tf2=None, rt=None, rb=None):
....
```
An even better solution would be to design your class in a way that it does not require so many parameters. |
59,160,291 | Is there a way to simplify these static methods in python? I'm looking to reduce typing of the arguments every time I need to use a function.
```
class Ibeam:
def __init__ (self, b1, tf1, tw, h, b2, tf2, rt, rb):
self.b1 = b1
self.tf1 = tf1
self.tw = tw
self.h = h
self.b2 = b2
self.tf2 = tf2
self.rt = rt
self.rb = rb
def area (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
area = b1*tf1+tw*dw+b2*tf2+2*circularspandrel.area(rt)+2*circularspandrel.area(rb)
return area
def distToOriginZ (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
Dist = collections.namedtuple('Dist', 'ytf1 yw ytf2')
dist = Dist(ytf1 = h - rectangle.centroid(b1,tf1).ez, yw = rectangle.centroid(tw,dw).ez + tf2, ytf2 = rectangle.centroid(b2,tf2))
return dist
def areaMoment (b1, tf1, tw, h, b2, tf2, rt, rb):
dw = h - tf1 - tf2
sum = (rectangle.area(b1, tf1)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).ytf1) + (rectangle.area(tw, dw)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).yw) + (rectangle.area(b2,tf2)*Ibeam.distToOriginZ(b1, tf1, tw, h, b2, tf2, rt, rb).ytf2)
return sum
def centroidZ (b1, tf1, tw, h, b2, tf2, rt, rb):
ez = Ibeam.areaMoment (b1, tf1, tw, h, b2, tf2, rt, rb)/Ibeam.area(b1, tf1, tw, h, b2, tf2, rt, rb)
return ez
``` | 2019/12/03 | [
"https://Stackoverflow.com/questions/59160291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448859/"
] | I'm not sure if it's what you are looking for:
But for me it looks like you want to have a class and use the functions in it.
```
class Ibeam:
def __init__ (self, b1, tf1, tw, h, b2, tf2, rt, rb):
self.b1 = b1
self.tf1 = tf1
self.tw = tw
self.h = h
self.b2 = b2
self.tf2 = tf2
self.rt = rt
self.rb = rb
def area (self):
dw = self.h - self.tf1 - self.tf2
area = self.b1*self.tf1+self.tw*dw+self.b2*self.tf2+2*circularspandrel.area(self.rt)+2*circularspandrel.area(self.rb)
return area
def distToOriginZ (self):
dw = self.h - self.tf1 - self.tf2
Dist = collections.namedtuple('Dist', 'ytf1 yw ytf2')
        dist = Dist(ytf1 = self.h - rectangle.centroid(self.b1,self.tf1).ez, yw = rectangle.centroid(self.tw,dw).ez + self.tf2, ytf2 = rectangle.centroid(self.b2,self.tf2))
return dist
def areaMoment (self):
dw = self.h - self.tf1 - self.tf2
        sum = (rectangle.area(self.b1, self.tf1)*self.distToOriginZ().ytf1) + (rectangle.area(self.tw, dw)*self.distToOriginZ().yw) + (rectangle.area(self.b2,self.tf2)*self.distToOriginZ().ytf2)
return sum
def centroidZ (self):
ez = self.areaMoment ()/self.area()
return ez
```
Now you can do the following:
```
beam = Ibeam(1,1,1,1,1,1,1,1)
print(beam.area())
print(beam.distToOriginZ())
print(beam.areaMoment())
print(beam.centroidZ())
```
With this you don't have to write that many parameters and you get proper encapsulation.
With this approach you create a Class Ibeam with properties.
And in this approach you are even using this properties. Before you didn't use them at all. The disadvantage is you have to create a class before, if that is not what you want use the approach with default variable and declare it static. | When having functions with many arguments it might be useful to think about "related" arguments and group them together. For example, consider a function that calculates the distance between two points. You could write a function like the following:
```
def distance(x1, y1, x2, y2):
...
return distance
print(distance(1, 2, 3, 4))
```
In that case, the values `x1, y1` and `x2, y2` are both very closely related together. you could group these together. Python gives you many options, some more expressive, some less expressive.
Your code examples look very similar, and I believe you could benefit from grouping them together.
The advantages of grouping related variables together are mainly that you reduce the number of required arguments (what you ask for), but *most importantly it gives you a chance to document these variables by giving them better names*.
```
"""
Simple Tuples
"""
def distance(point_a, point_b):
x1, y1 = point_a
x2, y2 = point_b
...
return distance
print(distance((1, 2), (3, 4)))
```
This is a quick-win, but it is not very expressive. So you could spice this up with named-tuples (typed or untyped) or even a full-blown object. For example:
```
"""
Simple named tuples (very similar to tuples, but better error-messages/repr)
"""
from collections import namedtuple
Point = namedtuple('Point', 'x, y')
def distance(point_a, point_b):
x1, y1 = point_a
x2, y2 = point_b
...
return distance
print(distance(Point(1, 2), Point(3, 4)))
```
```
"""
Typed named tuples (in case you want to use typing)
"""
from typing import NamedTuple
Point = NamedTuple('Point', [
('x', float),
('y', float),
])
def distance(point_a: Point, point_b: Point) -> float:
x1, y1 = point_a
x2, y2 = point_b
...
return distance
print(distance(Point(1, 2), Point(3, 4)))
```
```
"""
Custom Object
"""
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
def distance(point_a: Point, point_b: Point) -> float:
x1, y1 = point_a.x, point_a.y
x2, y2 = point_b.x, point_b.y
...
return distance
print(distance(Point(1, 2), Point(3, 4)))
``` |
59,493,383 | I'm currently working on a project and I am having a hard time understanding how the Pandas UDF in PySpark works.
I have a Spark Cluster with one Master node with 8 cores and 64GB, along with two workers of 16 cores each and 112GB. My dataset is quite large and divided into seven principal partitions consisting each of ~78M lines. The dataset consists of 70 columns.
I defined a Pandas UDF in to do some operations on the dataset, that can only be done using Python, on a Pandas dataframe.
The pandas UDF is defined this way :
```
@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def operation(pdf):
#Some operations
return pdf
spark.table("my_dataset").groupBy(partition_cols).apply(operation)
```
There is absolutely no way to get the Pandas UDF to work as it crashes before even doing the operations. I suspect there is an OOM error somewhere. The code above runs for a few minutes before crashing with an error code stating that the connection has reset.
However, if I call the .toPandas() function after filtering on one partition and then display it, it runs fine, with no error. The error seems to happen only when using a PandasUDF.
I fail to understand how it works. Does Spark try to convert one whole partition at once (78M lines) ? If so, what memory does it use ? The driver memory ? The executor's ? If it's on the driver's, is all Python code executed on it ?
The cluster is configured with the following :
* SPARK\_WORKER\_CORES=2
* SPARK\_WORKER\_MEMORY=64g
* spark.executor.cores 2
* spark.executor.memory 30g (to allow memory for the python instance)
* spark.driver.memory 43g
Am I missing something or is there just no way to run 78M lines through a PandasUDF ? | 2019/12/26 | [
"https://Stackoverflow.com/questions/59493383",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5932364/"
] | >
> Does Spark try to convert one whole partition at once (78M lines) ?
>
>
>
That's exactly what happens. Spark 3.0 adds support for chunked UDFs, which operate on iterators of Pandas `DataFrames` or `Series`, but if *operations on the dataset, that can only be done using Python, on a Pandas dataframe*, these might not be the right choice for you.
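For reference, a minimal sketch of the chunked (iterator) style added in Spark 3.0 — shown here with the ungrouped `mapInPandas` variant on a toy DataFrame rather than the question's table; the doubling logic is just a placeholder for the pandas-only work:
```
from typing import Iterator

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10).toDF("id")   # stand-in for the real dataset

def operation_in_chunks(batches: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]:
    # each pdf is a single Arrow batch, never a whole 78M-row partition at once
    for pdf in batches:
        pdf["id"] = pdf["id"] * 2  # placeholder for the pandas-only logic
        yield pdf

df.mapInPandas(operation_in_chunks, schema="id long").show()
```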
>
> If so, what memory does it use ? The driver memory? The executor's?
>
>
>
Each partition is processed locally, on the respective executor, and data is passed to and from the Python worker using Arrow streaming.
>
> Am I missing something or is there just no way to run 78M lines through a PandasUDF?
>
>
>
As long as you have enough memory to handle Arrow input, output (especially if data is copied), auxiliary data structures, as well as JVM overhead, it should handle large datasets just fine.
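One way to sanity-check that before running the UDF is to look at how large the individual groups actually are (a rough sketch reusing `spark` and `partition_cols` from the question):
```
from pyspark.sql import functions as F

group_sizes = (spark.table("my_dataset")
               .groupBy(*partition_cols)              # as in the question
               .agg(F.count("*").alias("rows")))

# the largest groups are the ones that must fit through Arrow on one executor
group_sizes.orderBy(F.desc("rows")).show(10)
```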
But on such tiny cluster, you'll be better with partitioning the output and reading data directly with Pandas, without using Spark at all. This way you'll be able to use all the available resources (i.e. > 100GB / interpreter) for data processing instead of wasting these on secondary tasks (having 16GB - overhead / interpreter). | To answer the general question about using a Pandas UDF on a large pyspark dataframe:
If you're getting out-of-memory errors such as
`java.lang.OutOfMemoryError : GC overhead limit exceeded` or `java.lang.OutOfMemoryError: Java heap space` and increasing memory limits hasn't worked, ensure that pyarrow is enabled. It is disabled by default.
In pyspark, you can enable it using:
`spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")`
More info [here](https://spark.apache.org/docs/3.0.1/sql-pyspark-pandas-with-arrow.html). |
20,317,792 | I want my interactive bash to run a program that will ultimately do things like:
echo Error: foobar >/dev/tty
and, in another (python) component, tries to prompt for and read a password from /dev/tty.
I want such reads and writes to fail, but not block.
Is there some way to close /dev/tty in the parent script and then run the program?
I tried
foo >&/tmp/outfile
which does not work.
What does sort of work is the 'at' command:
at now
at> foobar >&/tmp/outfile | 2013/12/01 | [
"https://Stackoverflow.com/questions/20317792",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/727810/"
] | You are doing a [UNION ALL](http://dev.mysql.com/doc/refman/5.0/en/union.html)
`at_tot` results are being appended to `a_tot`.
`at_prix` results are being appended to `a_tva`.
`at_pax` results are being appended to `v_tot`.
`at_vente` results are being appended to `v_tva`.
The [SQL UNION ALL](http://www.w3schools.com/sql/sql_union.asp) query allows you to combine the result sets of 2 or more SELECT statements. It returns *all rows* from the query (even if the row exists in more than one of the SELECT statements). So rows are appended, NOT columns.
EDIT:
Now based on your comments, it's simply that you are writing your code as though 8 columns are going to be returned, but you are only getting 4 columns, with 2 rows.
This would work, though returning different data for each row is not recommended.
```
var i = 0;
while (reader.Read())
{
if(i == 0){
MyArray[0] = reader["a_tot"].ToString();
MyArray[1] = reader["a_tva"].ToString();
MyArray[2] = reader["v_tot"].ToString();
MyArray[3] = reader["v_tva"].ToString();
i++;
}
else{
MyArray[0] = reader["at_tot"].ToString();
MyArray[1] = reader["at_prix"].ToString();
MyArray[2] = reader["at_pax"].ToString();
MyArray[3] = reader["at_vente"].ToString();
}
}
``` | When you use UNION , the alias that end up in the result are the one from the first select in the union. So `at_tot` (from second select of union) is replaced with a\_tot.
What you do is the same as doing:
```sql
SELECT SUM(IF(status=0,montant,0)) AS a_tot,
SUM(IF(status=0, montant * (tvaval/100),0)) AS a_tva,
SUM(IF(status= 1, montant,0)) AS v_tot,
SUM(IF(status=1, montant * (tvaval/100),0)) AS v_tva
FROM StockData
UNION ALL
SELECT
SUM(at.prix*at.pax),
SUM(at.prix),
SUM(at.pax),
SUM(at.vente)
FROM Atelier AS at
```
You have to put the alias you want in the output in the first select, as you will end up with only 4 columns, not 8 like you are trying to get in your picture. |
49,757,771 | So I wrote a python file creating the single topology (just to check if custom topology works) without using any controller at first. The code goes:
```
#!/usr/bin/python
from mininet.node import CPULimitedHost, Host, Node
from mininet.node import OVSSwitch
from mininet.topo import Topo
class Single1(Topo):
"Single Topology"
def __init__(self):
"Create Fat tree Topology"
Topo.__init__(self)
#Add hosts
h1 = self.addHost('h1', cls=Host, ip='10.0.0.1', defaultRoute=None)
h2 = self.addHost('h2', cls=Host, ip='10.0.0.2', defaultRoute=None)
h3 = self.addHost('h3', cls=Host, ip='10.0.0.3', defaultRoute=None)
#Add switches
s1 = self.addSwitch('s1', cls=OVSSwitch)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': (lambda: Single1() ) }
```
Pingall doesn't work when I run :
```
sudo mn --custom single.py --topo mytopo
```
Although it does work for predefined 'single' topology. Could someone help me with the problem? | 2018/04/10 | [
"https://Stackoverflow.com/questions/49757771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7463091/"
] | This is an older question and probably no longer of interest to the original poster, but I landed here from a mininet related search so I thought I'd provide a working example in case other folks find their way here in the future.
First, there are a number of indentation problems with the posted code, but those are simple to correct.
Next, the logic has been implemented in `Single1.__init__`, but at least according to [the documentation](http://mininet.org/walkthrough/#custom-topologies) this should be in the `build` method.
Correcting both of those issues and removing the unnecessary use of
`cls=Host` and `defaultRoute=None` in the `addHost` calls gives us:
```
#!/usr/bin/python
from mininet.node import OVSSwitch
from mininet.topo import Topo
class Single1(Topo):
"Single Topology"
def build(self):
"Create Fat tree Topology"
#Add hosts
h1 = self.addHost('h1', ip='10.0.0.1')
h2 = self.addHost('h2', ip='10.0.0.2')
h3 = self.addHost('h3', ip='10.0.0.3')
#Add switches
s1 = self.addSwitch('s1', cls=OVSSwitch)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': Single1 }
```
The above code will run without errors and build the topology, but will probably still present the original problem: using `cls=OVSSwitch` when creating the switch means that Mininet expects there to exist an OpenFlow controller to manage the switch, which in general won't exist by default.
The simplest solution is to change:
```
s1 = self.addSwitch('s1', cls=OVSSwitch)
```
To:
```
s1 = self.addSwitch('s1', cls=OVSBridge)
```
With this change, Mininet will configure a "standalone" switch that doesn't require an explicit controller, and we will have the expected connectivity. The final version of the code looks like:
```
#!/usr/bin/python
from mininet.topo import Topo
from mininet.node import OVSBridge
class Single1(Topo):
"Single Topology"
def build(self):
"Create Fat tree Topology"
#Add hosts
h1 = self.addHost('h1', ip='10.0.0.1')
h2 = self.addHost('h2', ip='10.0.0.2')
h3 = self.addHost('h3', ip='10.0.0.3')
#Add switches
s1 = self.addSwitch('s1', cls=OVSBridge)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': Single1 }
```
And running it looks like:
```
[root@servera ~]# mn --custom example.py --topo mytopo
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller
c0
*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet> h1 ping -c2 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.051 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.051/0.185/0.320/0.134 ms
mininet>
``` | The hosts must be in same subnet in order to avoid routing protocols. Otherwise you need static routes |
49,757,771 | So I wrote a python file creating the single topology (just to check if custom topology works) without using any controller at first. The code goes:
```
#!/usr/bin/python
from mininet.node import CPULimitedHost, Host, Node
from mininet.node import OVSSwitch
from mininet.topo import Topo
class Single1(Topo):
"Single Topology"
def __init__(self):
"Create Fat tree Topology"
Topo.__init__(self)
#Add hosts
h1 = self.addHost('h1', cls=Host, ip='10.0.0.1', defaultRoute=None)
h2 = self.addHost('h2', cls=Host, ip='10.0.0.2', defaultRoute=None)
h3 = self.addHost('h3', cls=Host, ip='10.0.0.3', defaultRoute=None)
#Add switches
s1 = self.addSwitch('s1', cls=OVSSwitch)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': (lambda: Single1() ) }
```
Pingall doesn't work when I run :
```
sudo mn --custom single.py --topo mytopo
```
Although it does work for predefined 'single' topology. Could someone help me with the problem? | 2018/04/10 | [
"https://Stackoverflow.com/questions/49757771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7463091/"
] | This is an older question and probably no longer of interest to the original poster, but I landed here from a mininet related search so I thought I'd provide a working example in case other folks find their way here in the future.
First, there are a number of indentation problems with the posted code, but those are simple to correct.
Next, the logic has been implemented in `Single1.__init__`, but at least according to [the documentation](http://mininet.org/walkthrough/#custom-topologies) this should be in the `build` method.
Correcting both of those issues and removing the unnecessary use of
`cls=Host` and `defaultRoute=None` in the `addHost` calls gives us:
```
#!/usr/bin/python
from mininet.node import OVSSwitch
from mininet.topo import Topo
class Single1(Topo):
"Single Topology"
def build(self):
"Create Fat tree Topology"
#Add hosts
h1 = self.addHost('h1', ip='10.0.0.1')
h2 = self.addHost('h2', ip='10.0.0.2')
h3 = self.addHost('h3', ip='10.0.0.3')
#Add switches
s1 = self.addSwitch('s1', cls=OVSSwitch)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': Single1 }
```
The above code will run without errors and build the topology, but will probably still present the original problem: using `cls=OVSSwitch` when creating the switch means that Mininet expects there to exist an OpenFlow controller to manage the switch, which in general won't exist by default.
The simplest solution is to change:
```
s1 = self.addSwitch('s1', cls=OVSSwitch)
```
To:
```
s1 = self.addSwitch('s1', cls=OVSBridge)
```
With this change, Mininet will configure a "standalone" switch that doesn't require an explicit controller, and we will have the expected connectivity. The final version of the code looks like:
```
#!/usr/bin/python
from mininet.topo import Topo
from mininet.node import OVSBridge
class Single1(Topo):
"Single Topology"
def build(self):
"Create Fat tree Topology"
#Add hosts
h1 = self.addHost('h1', ip='10.0.0.1')
h2 = self.addHost('h2', ip='10.0.0.2')
h3 = self.addHost('h3', ip='10.0.0.3')
#Add switches
s1 = self.addSwitch('s1', cls=OVSBridge)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': Single1 }
```
And running it looks like:
```
[root@servera ~]# mn --custom example.py --topo mytopo
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller
c0
*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet> h1 ping -c2 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.051 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.051/0.185/0.320/0.134 ms
mininet>
``` | It's strange but now I can ping all of a sudden..I don't know why or how..I didn't change anything. |
49,757,771 | So I wrote a python file creating the single topology (just to check if custom topology works) without using any controller at first. The code goes:
```
#!/usr/bin/python
from mininet.node import CPULimitedHost, Host, Node
from mininet.node import OVSSwitch
from mininet.topo import Topo
class Single1(Topo):
"Single Topology"
def __init__(self):
"Create Fat tree Topology"
Topo.__init__(self)
#Add hosts
h1 = self.addHost('h1', cls=Host, ip='10.0.0.1', defaultRoute=None)
h2 = self.addHost('h2', cls=Host, ip='10.0.0.2', defaultRoute=None)
h3 = self.addHost('h3', cls=Host, ip='10.0.0.3', defaultRoute=None)
#Add switches
s1 = self.addSwitch('s1', cls=OVSSwitch)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': (lambda: Single1() ) }
```
Pingall doesn't work when I run :
```
sudo mn --custom single.py --topo mytopo
```
Although it does work for predefined 'single' topology. Could someone help me with the problem? | 2018/04/10 | [
"https://Stackoverflow.com/questions/49757771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7463091/"
] | This is an older question and probably no longer of interest to the original poster, but I landed here from a mininet related search so I thought I'd provide a working example in case other folks find their way here in the future.
First, there are a number of indentation problems with the posted code, but those are simple to correct.
Next, the logic has been implemented in `Single1.__init__`, but at least according to [the documentation](http://mininet.org/walkthrough/#custom-topologies) this should be in the `build` method.
Correcting both of those issues and removing the unnecessary use of
`cls=Host` and `defaultRoute=None` in the `addHost` calls gives us:
```
#!/usr/bin/python
from mininet.node import OVSSwitch
from mininet.topo import Topo
class Single1(Topo):
"Single Topology"
def build(self):
"Create Fat tree Topology"
#Add hosts
h1 = self.addHost('h1', ip='10.0.0.1')
h2 = self.addHost('h2', ip='10.0.0.2')
h3 = self.addHost('h3', ip='10.0.0.3')
#Add switches
s1 = self.addSwitch('s1', cls=OVSSwitch)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': Single1 }
```
The above code will run without errors and build the topology, but will probably still present the original problem: using `cls=OVSSwitch` when creating the switch means that Mininet expects there to exist an OpenFlow controller to manage the switch, which in general won't exist by default.
The simplest solution is to change:
```
s1 = self.addSwitch('s1', cls=OVSSwitch)
```
To:
```
s1 = self.addSwitch('s1', cls=OVSBridge)
```
With this change, Mininet will configure a "standalone" switch that doesn't require an explicit controller, and we will have the expected connectivity. The final version of the code looks like:
```
#!/usr/bin/python
from mininet.topo import Topo
from mininet.node import OVSBridge
class Single1(Topo):
"Single Topology"
def build(self):
"Create Fat tree Topology"
#Add hosts
h1 = self.addHost('h1', ip='10.0.0.1')
h2 = self.addHost('h2', ip='10.0.0.2')
h3 = self.addHost('h3', ip='10.0.0.3')
#Add switches
s1 = self.addSwitch('s1', cls=OVSBridge)
#Add links
self.addLink(h1,s1)
self.addLink(h2,s1)
self.addLink(h3,s1)
topos = { 'mytopo': Single1 }
```
And running it looks like:
```
[root@servera ~]# mn --custom example.py --topo mytopo
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller
c0
*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet> h1 ping -c2 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.051 ms
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.051/0.185/0.320/0.134 ms
mininet>
``` | By default, Mininet emulates the switches by Open VSwitch.
And if not connected to a controller, OVS will act like a normal L2 switch with its default rules.
That's the reason you can do pingall().
However, I also ran into problems where Mininet hosts couldn't ping each other even though they were actually connected. After a few days I found that it takes time for Mininet links to get ready. So if you wait about 30 seconds and then call pingall(), it should act normally, I think.
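A rough sketch of that idea with the Mininet Python API (a plain single-switch topology here just for illustration; the 30-second value is only the rough delay mentioned above):
```
#!/usr/bin/python
from time import sleep

from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(3))  # three hosts on one switch
net.start()
sleep(30)        # give the links some time to come up before testing
net.pingAll()
net.stop()
```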
2,332,164 | I use python debugger pdb. I use emacs for python programming. I use python-mode.el. My idea is to make emacs intuitive. So I need the following help for python programs (.py)
1. Whenever I press 'F9' key, the emacs should put "import pdb; pdb.set\_trace();" statements in the current line and move the current line to one line below.
Sentence to be in same line. smart indentation may help very much.
2. Wherever "import pdb; pdb.set\_trace();" statement presents in the python code, emacs should display left indicator and highlight that line.
3. When I press 'Alt-F9' keys at the current line and emacs found the "import pdb; pdb.set\_trace();" statement then, emacs should remove the "import pdb; pdb.set\_trace();" line and move the current line to one up.
4. Whenever I press "F8" key, emacs to jump to "import pdb; pdb.set\_trace();" in the same buffer.
I am trying to learn elisp and catch up lisp soon to customize emacs myself. I will appreciate your answers.
The answer shall be great enough for me and others who find this solution is very useful. | 2010/02/25 | [
"https://Stackoverflow.com/questions/2332164",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | to do 1)
```
(defun add-py-debug ()
"add debug code and move line down"
(interactive)
(move-beginning-of-line 1)
(insert "import pdb; pdb.set_trace();\n"))
(local-set-key (kbd "<f9>") 'add-py-debug)
```
to do 2) you probably have to change the syntax highlighting of the python mode, or write your own minor mode. You'd have to look into font-lock to learn more. Sorry.
to do 3) though I've set this to be C-c F9 instead of Alt-F9
```
(defun remove-py-debug ()
"remove py debug code, if found"
(interactive)
(let ((x (line-number-at-pos))
(cur (point)))
(search-forward-regexp "^[ ]*import pdb; pdb.set_trace();")
(if (= x (line-number-at-pos))
(let ()
(move-beginning-of-line 1)
(kill-line 1)
(move-beginning-of-line 1))
(goto-char cur))))
(local-set-key (kbd "C c <f9>") 'remove-py-debug)
```
and to do 4)
```
(local-set-key (kbd "<f3>") '(lambda ()
(interactive)
(search-forward-regexp "^[ ]*import pdb; pdb.set_trace();")
(move-beginning-of-line 1)))
```
Note: this is not the best Elisp code in the world, but I've tried to make it clear what's going on rather than make it totally idiomatic. The GNU Elisp book is a great place to start if you want to do more with Elisp.
HTH | I've found that [Xah's Elisp Tutorial](http://xahlee.info/emacs/emacs/elisp.html) is an excellent starting point in figuring out the basics of Emacs Lisp programming. [There](https://sites.google.com/site/steveyegge2/effective-emacs) [are](https://steve-yegge.blogspot.com/2008/01/emergency-elisp.html) [also](https://steve-yegge.blogspot.com/2006/06/shiny-and-new-emacs-22.html) some SteveY articles from a while ago that go through techniques you might find useful for learning the basics.
If you're serious about making an amended Python mode, you'll do well to take a look at [Writing GNU Emacs Extensions](https://www.google.ca/search?hl=en&q=Writing%20GNU%20Emacs%20Extensions&gws_rd=ssl), which is available as a PDF.
Finally, the most useful resource for me is actually Emacs itself. I make frequent use of `M-x apropos` and `M-x describe-key` to figure out how built-in functions work, and whether there's something already in place to do what I want.
The specific things you want to look like they can be done through some simple use of `insert`, and a few search/replace functions, so that'll be a good starting point. |
41,936,098 | I am trying to install the `zipline` module using `"pip install zipline"` but I get this exception:
```
IOError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/editor.pyc'` - any help would be greatly appreciated
Failed building wheel for numexpr
Running setup.py clean for numexpr
Failed to build numexpr
Installing collected packages: python-editor, Mako, sqlalchemy, alembic, sortedcontainers, intervaltree, python-dateutil, numpy, numexpr, toolz, bottleneck, scipy, pytz, pandas, empyrical, requests, requests-file, requests-ftp, pandas-datareader, decorator, networkx, patsy, statsmodels, click, Logbook, multipledispatch, bcolz, Cython, contextlib2, cyordereddict, cachetools, zipline
Exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 342, in run
prefix=options.prefix_path,
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 784, in install
**kwargs
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 851, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 1064, in move_wheel_files
isolated=self.isolated,
File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 345, in move_wheel_files
clobber(source, lib_dir, True)
File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 323, in clobber
shutil.copyfile(srcfile, destfile)
File "/usr/lib/python2.7/shutil.py", line 83, in copyfile
with open(dst, 'wb') as fdst:
IOError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/editor.pyc'
``` | 2017/01/30 | [
"https://Stackoverflow.com/questions/41936098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7283601/"
] | As you are not root, you can use sudo to obtain superuser permissions:
```
sudo pip install zipline
```
Or else
**For GNU/Linux:**
On Debian-derived Linux distributions, you can acquire all the necessary binary dependencies from apt by running:
```
$ sudo apt-get install libatlas-base-dev python-dev gfortran pkg-config libfreetype6-dev
```
On recent RHEL-derived Linux distributions (e.g. Fedora), the following should be sufficient to acquire the necessary additional dependencies:
```
$ sudo dnf install atlas-devel gcc-c++ gcc-gfortran libgfortran python-devel redhat-rpm-config
```
On Arch Linux, you can acquire the additional dependencies via pacman:
```
$ pacman -S lapack gcc gcc-fortran pkg-config
```
There are also AUR packages available for installing Python 3.4 (Arch’s default python is now 3.5, but Zipline only currently supports 3.4), and ta-lib, an optional Zipline dependency. Python 2 is also installable via:
```
$ pacman -S python2
``` | Avoid using `sudo` to install packages with `pip`. Use the `--user` option instead or, even better, use virtual environments.
See [this SO answer](https://stackoverflow.com/a/42021993/3577054). I think this question is a duplicate of that one. |
60,917,385 | My aim:
To count the frequency of a user entered word in a text file.(in python)
I tried this, but it gives the frequency of all the words in the file. How can I modify it to give the frequency of a word entered by the user?
```
from collections import Counter
word=input("Enter a word:")
def word_count(test6):
with open('test6.txt') as f:
return Counter(f.read().split())
print("Number of input words in the file :",word_count(word))
```
This may be a naive question, but I am just beginning to code, so please try to answer.
Thanks in advance. | 2020/03/29 | [
"https://Stackoverflow.com/questions/60917385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13140422/"
] | Hi, I just solved the problem.
After you run
`docker build .`
run `docker-compose build` instead of `docker-compose up`.
And then finally run `docker-compose up` | instead of
```
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
```
you may use:
```
RUN pip install pipenv
COPY Pipfile* /tmp
RUN cd /tmp && pipenv lock --requirements > requirements.txt
RUN pip install -r /tmp/requirements.txt
```
this is a snippet from [here](https://pythonspeed.com/articles/pipenv-docker/) |
42,216,370 | Installation of python-devel fails with attached message
Configuration is as follows:
- CentOS 7.2
- Python 2.7 Installed
1. I re-ran with yum load as suggested in output and it failed with same message.
2. yum info python ==> Installed package python 2.7.5 34.el7
3. yum info python-devel ==> NOT installed. Available 2.7.5 48.el7
4. yum deplist python-devel ==> dependency on python2.7.5-48.el7
5. Tried to install Python 2.7.5-48.el7 with "yum update python" and it fails with the same error message as the python-devel install.
Sudhir
```
yum install -y python-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.sonic.net
* epel: ftp.linux.ncsu.edu
* extras: mirror.cogentco.com
* updates: www.gtlib.gatech.edu
Resolving Dependencies
--> Running transaction check
---> Package python-devel.x86_64 0:2.7.5-48.el7 will be installed
--> Processing Dependency: python(x86-64) = 2.7.5-48.el7 for package: python-devel-2.7.5-48.el7.x86_64
--> Running transaction check
---> Package python.x86_64 0:2.7.5-34.el7 will be updated
---> Package python.x86_64 0:2.7.5-48.el7 will be an update
--> Processing Dependency: python-libs(x86-64) = 2.7.5-48.el7 for package: python-2.7.5-48.el7.x86_64
--> Running transaction check
---> Package python-libs.x86_64 0:2.7.5-34.el7 will be updated
---> Package python-libs.x86_64 0:2.7.5-48.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
```
================================================================================ Package Arch Version Repository Size
======================================================================================================================
```
Installing:
python-devel x86_64 2.7.5-48.el7 base 393 k
Updating for dependencies:
python x86_64 2.7.5-48.el7 base 90 k
python-libs x86_64 2.7.5-48.el7 base 5.6 M
Transaction Summary
==============================================================================================================================================
Install 1 Package
Upgrade ( 2 Dependent packages)
Total size: 6.1 M
Downloading packages:
Running transaction check
ERROR with transaction check vs depsolve:
python(abi) = 2.6 is needed by (installed) python-argparse-1.2.1-2.1.el6.noarch
python(abi) = 2.6 is needed by (installed) redhat-upgrade-tool-1:0.7.22-3.el6.centos.noarch
** Found 5 pre-existing rpmdb problem(s), 'yum check' output follows:
epel-release-7-6.noarch is a duplicate with epel-release-7-5.noarch
grep-2.20-3.el6_7.1.x86_64 has missing requires of libpcre.so.0()(64bit)
python-argparse-1.2.1-2.1.el6.noarch has missing requires of python(abi) = ('0', '2.6', None)
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch has missing requires of preupgrade-assistant >= ('0', '1.0.2', '4')
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch has missing requires of python(abi) = ('0', '2.6', None)
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2017-02-13.16-01.jUFBE4.yumtx
``` | 2017/02/14 | [
"https://Stackoverflow.com/questions/42216370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5070752/"
] | From the yum documentation, here's the safest way to handle each of your 5 errors:
First remove duplicates and resolve any errors after running this:
```
package-cleanup --cleandupes
```
If the above comes with a missing package-cleanup error, then run this first:
```
yum install yum-utils
```
Then address the other 4 errors with:
```
yum reinstall grep-*
```
where grep-\* is the package name as shown in the error message. I abbreviated the rest of the grep version name with \* in the command above.
Repeat the above command for the 3 other packages that were indicated as missing. If yum command gives you errors, then try this for just that one package:
```
rpm -ivh --force grep-*
```
Then finally re-run the yum command from the original error message.
At any point you want to clean up leftover mess, run this command:
```
yum clean all
package-cleanup --problems
```
And follow directions. For further reference, look up documentation with
```
man yum.conf
``` | Removed packages python-argparse and redhat-upgrade-tool.
Then I did a yum install python-devel and it succeeded this time. I think those 2 packages have a hard dependency on the older Python 2.6.
Sudhir Nallagangu |
42,216,370 | Installation of python-devel fails with attached message
Configuration is as follows:
- CentOS 7.2
- Python 2.7 Installed
1. I re-ran with yum load as suggested in output and it failed with same message.
2. yum info python ==> Installed package python 2.7.5 34.el7
3. yum info python-devel ==> NOT installed. Available 2.7.5 48.el7
4. yum deplist python-devel ==> dependency on python2.7.5-48.el7
5. Tried to install Python 2.7.5-48.el7 with "yum update python" and it fails with the same error message as the python-devel install.
Sudhir
```
yum install -y python-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.sonic.net
* epel: ftp.linux.ncsu.edu
* extras: mirror.cogentco.com
* updates: www.gtlib.gatech.edu
Resolving Dependencies
--> Running transaction check
---> Package python-devel.x86_64 0:2.7.5-48.el7 will be installed
--> Processing Dependency: python(x86-64) = 2.7.5-48.el7 for package: python-devel-2.7.5-48.el7.x86_64
--> Running transaction check
---> Package python.x86_64 0:2.7.5-34.el7 will be updated
---> Package python.x86_64 0:2.7.5-48.el7 will be an update
--> Processing Dependency: python-libs(x86-64) = 2.7.5-48.el7 for package: python-2.7.5-48.el7.x86_64
--> Running transaction check
---> Package python-libs.x86_64 0:2.7.5-34.el7 will be updated
---> Package python-libs.x86_64 0:2.7.5-48.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
```
================================================================================ Package Arch Version Repository Size
======================================================================================================================
```
Installing:
python-devel x86_64 2.7.5-48.el7 base 393 k
Updating for dependencies:
python x86_64 2.7.5-48.el7 base 90 k
python-libs x86_64 2.7.5-48.el7 base 5.6 M
Transaction Summary
==============================================================================================================================================
Install 1 Package
Upgrade ( 2 Dependent packages)
Total size: 6.1 M
Downloading packages:
Running transaction check
ERROR with transaction check vs depsolve:
python(abi) = 2.6 is needed by (installed) python-argparse-1.2.1-2.1.el6.noarch
python(abi) = 2.6 is needed by (installed) redhat-upgrade-tool-1:0.7.22-3.el6.centos.noarch
** Found 5 pre-existing rpmdb problem(s), 'yum check' output follows:
epel-release-7-6.noarch is a duplicate with epel-release-7-5.noarch
grep-2.20-3.el6_7.1.x86_64 has missing requires of libpcre.so.0()(64bit)
python-argparse-1.2.1-2.1.el6.noarch has missing requires of python(abi) = ('0', '2.6', None)
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch has missing requires of preupgrade-assistant >= ('0', '1.0.2', '4')
1:redhat-upgrade-tool-0.7.22-3.el6.centos.noarch has missing requires of python(abi) = ('0', '2.6', None)
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2017-02-13.16-01.jUFBE4.yumtx
``` | 2017/02/14 | [
"https://Stackoverflow.com/questions/42216370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5070752/"
] | From the yum documentation, here's the safest way to handle each of your 5 errors:
First remove duplicates and resolve any errors after running this:
```
package-cleanup --cleandupes
```
If the above comes with a missing package-cleanup error, then run this first:
```
yum install yum-utils
```
Then address the other 4 errors with:
```
yum reinstall grep-*
```
where grep-\* is the package name as shown in the error message. I abbreviated the rest of the grep version name with \* in the command above.
Repeat the above command for the 3 other packages that were indicated as missing. If yum command gives you errors, then try this for just that one package:
```
rpm -ivh --force grep-*
```
Then finally re-run the yum command from the original error message.
At any point you want to clean up leftover mess, run this command:
```
yum clean all
package-cleanup --problems
```
And follow directions. For further reference, look up documentation with
```
man yum.conf
``` | The problem is that you are on CentOS 7, but have CentOS 6 packages installed.
* python-argparse-1.2.1-2.1.el6.noarch
* redhat-upgrade-tool-1:0.7.22-3.el6.centos.noarch
Get a list of all installed el6 packages (`rpm -qa | grep el6`) and remove them or update them to their el7 equivalents. You should be able to remove argparse, since it's in the 2.7 standard library. |
46,480,621 | I upgraded my ansible to 2.4 and now I cannot manage my CentOS 5 hosts which are running python 2.4. How do I fix it?
<http://docs.ansible.com/ansible/2.4/porting_guide_2.4.html> says ansible 2.4 will not support any versions of python lower than 2.6 | 2017/09/29 | [
"https://Stackoverflow.com/questions/46480621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4055115/"
] | After I upgraded to Ansible 2.4 I was not able to manage hosts that were not running Python 2.6 or newer. These were CentOS 5 hosts, and this is how I fixed the problem.
First, I installed `python26` from the EPEL repo. After enabling the EPEL repo, run `yum install python26`.
Then in my hosts file, for the CentOS 5 hosts, I added `ansible_python_interpreter=/usr/bin/python26` as the python interpreter.
To specify the python interpreter in the hosts file individually, it will be something like
`centos5-database ansible_python_interpreter=/usr/bin/python26`
And for a group of hosts, it will be something like
`[centos5-www:vars]
ansible_python_interpreter=/usr/bin/python26` | And what about the python26-yum package? It is required to use the yum module to install packages using Ansible. |
46,480,621 | I upgraded my ansible to 2.4 and now I cannot manage my CentOS 5 hosts which are running python 2.4. How do I fix it?
<http://docs.ansible.com/ansible/2.4/porting_guide_2.4.html> says ansible 2.4 will not support any versions of python lower than 2.6 | 2017/09/29 | [
"https://Stackoverflow.com/questions/46480621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4055115/"
] | After I upgraded to Ansible 2.4 I was not able to manage hosts that were not running Python 2.6 or newer. These were CentOS 5 hosts, and this is how I fixed the problem.
First, I installed `python26` from the EPEL repo. After enabling the EPEL repo, run `yum install python26`.
Then in my hosts file, for the CentOS 5 hosts, I added `ansible_python_interpreter=/usr/bin/python26` as the python interpreter.
To specify the python interpreter in the hosts file individually, it will be something like
`centos5-database ansible_python_interpreter=/usr/bin/python26`
And for a group of hosts, it will be something like
`[centos5-www:vars]
ansible_python_interpreter=/usr/bin/python26` | My experience so far has been that Ansible works (gathering facts succeeds), but some modules (in particular yum / package) do not, because yum uses Python 2.4.
I ended up using yum via command and shell modules (not pretty but works).
1) Before you can install python26 you need to fix the repos as CentOS5 is end of life:
( [YumRepo Error: All mirror URLs are not using ftp, http[s] or file](https://stackoverflow.com/questions/21396508/yumrepo-error-all-mirror-urls-are-not-using-ftp-https-or-file) )
2) then you can install EPEL 5 and python26
( <https://www.ansible.com/blog/using-ansible-to-manage-rhel-5-yesterday-today-and-tomorrow> )
3) then you can use the command module to use yum:
( [CentOS 5. ansible\_python\_interpreter=/usr/bin/python26. Still cannot use yum: module](https://stackoverflow.com/questions/48429110/centos-5-ansible-python-interpreter-usr-bin-python26-still-cannot-use-yum-mo) )
many newer ansible modules don't work either due to missing python dependencies.
My intent is just to use Ansible in CentOS5 (or RH 5) to facilitate the upgrade to something newer and supported. ;) |
46,480,621 | I upgraded my ansible to 2.4 and now I cannot manage my CentOS 5 hosts which are running python 2.4. How do I fix it?
<http://docs.ansible.com/ansible/2.4/porting_guide_2.4.html> says ansible 2.4 will not support any versions of python lower than 2.6 | 2017/09/29 | [
"https://Stackoverflow.com/questions/46480621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4055115/"
] | And what about the python26-yum package? It is required to use the yum module to install packages using Ansible. | My experience so far has been that Ansible works (gathering facts succeeds), but some modules (in particular yum / package) do not, because yum uses Python 2.4.
I ended up using yum via command and shell modules (not pretty but works).
1) Before you can install python26 you need to fix the repos as CentOS5 is end of life:
( [YumRepo Error: All mirror URLs are not using ftp, http[s] or file](https://stackoverflow.com/questions/21396508/yumrepo-error-all-mirror-urls-are-not-using-ftp-https-or-file) )
2) then you can install EPEL 5 and python26
( <https://www.ansible.com/blog/using-ansible-to-manage-rhel-5-yesterday-today-and-tomorrow> )
3) then you can use the command module to use yum:
( [CentOS 5. ansible\_python\_interpreter=/usr/bin/python26. Still cannot use yum: module](https://stackoverflow.com/questions/48429110/centos-5-ansible-python-interpreter-usr-bin-python26-still-cannot-use-yum-mo) )
many newer ansible modules don't work either due to missing python dependencies.
My intent is just to use Ansible in CentOS5 (or RH 5) to facilitate the upgrade to something newer and supported. ;) |
57,588,744 | How do you quit or halt a python program without the error messages showing?
I have tried quit(), exit(), systemexit(), raise SystemExit, and others but they all seem to raise an error message saying the program has been halted. How do I get rid of this? | 2019/08/21 | [
"https://Stackoverflow.com/questions/57588744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11939397/"
] | you can structure your program within a function then `return` when you wish to halt/end the program
ie
```
def foo():
# your program here
if we_want_to_halt:
return
if __name__ == "__main__":
foo()
``` | You would need to handle the exit in your python program.
For example:
```
def main():
x = raw_input("Enter a value: ")
if x == "a value":
print("its alright")
else:
print("exit")
exit(0)
```
Note: This works in python 2 because raw\_input is included by default there but the concept is the same for both versions.
Output:
```
Enter a value: a
exit
```
Just out of curiosity: why do you want to prevent the message? I prefer to see that my program has been closed because the user forced a system exit. |
57,588,744 | How do you quit or halt a python program without the error messages showing?
I have tried quit(), exit(), systemexit(), raise SystemExit, and others but they all seem to raise an error message saying the program has been halted. How do I get rid of this? | 2019/08/21 | [
"https://Stackoverflow.com/questions/57588744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11939397/"
] | You are trying too hard. Write your program using the regular boilerplate:
```
def main():
# your real code goes here
return
if __name__ == "__main__":
main()
```
and just return from function `main`. That will get you back to the `if`-clause, and execution will fall out the bottom of the program.
You can have as many `return` statements in `main()` as you like. | You would need to handle the exit in your python program.
For example:
```
def main():
x = raw_input("Enter a value: ")
if x == "a value":
print("its alright")
else:
print("exit")
exit(0)
```
Note: This works in python 2 because raw\_input is included by default there but the concept is the same for both versions.
Output:
```
Enter a value: a
exit
```
Just out of curiosity: why do you want to prevent the message? I prefer to see that my program has been closed because the user forced a system exit. |
57,588,744 | How do you quit or halt a python program without the error messages showing?
I have tried quit(), exit(), systemexit(), raise SystemExit, and others but they all seem to raise an error message saying the program has been halted. How do I get rid of this? | 2019/08/21 | [
"https://Stackoverflow.com/questions/57588744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11939397/"
] | you can structure your program within a function then `return` when you wish to halt/end the program
ie
```
def foo():
# your program here
if we_want_to_halt:
return
if __name__ == "__main__":
foo()
``` | you can try the following code to terminate the program.
```
import sys
sys.exit()
``` |
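As a rough sketch of how this behaves inside a script (the surrounding structure and the condition are assumptions added for illustration): `sys.exit()` raises `SystemExit`, which the interpreter handles quietly at the top level of a plain script, so no traceback is printed; some IDEs or interactive shells may still report that the program exited.
```
import sys

def run():
    answer = input("Continue? (y/n) ")
    if answer != "y":
        sys.exit(0)   # raises SystemExit; a plain script exits quietly with status 0
    print("continuing...")

if __name__ == "__main__":
    run()
```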
57,588,744 | How do you quit or halt a python program without the error messages showing?
I have tried quit(), exit(), systemexit(), raise SystemExit, and others but they all seem to raise an error message saying the program has been halted. How do I get rid of this? | 2019/08/21 | [
"https://Stackoverflow.com/questions/57588744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11939397/"
] | You are trying too hard. Write your program using the regular boilerplate:
```
def main():
# your real code goes here
return
if __name__ == "__main__":
main()
```
and just return from function `main`. That will get you back to the `if`-clause, and execution will fall out the bottom of the program.
You can have as many `return` statements in `main()` as you like. | you can try the following code to terminate the program.
```
import sys
sys.exit()
``` |
57,588,744 | How do you quit or halt a python program without the error messages showing?
I have tried quit(), exit(), systemexit(), raise SystemExit, and others but they all seem to raise an error message saying the program has been halted. How do I get rid of this? | 2019/08/21 | [
"https://Stackoverflow.com/questions/57588744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11939397/"
] | You are trying too hard. Write your program using the regular boilerplate:
```
def main():
# your real code goes here
return
if __name__ == "__main__":
main()
```
and just return from function `main`. That will get you back to the `if`-clause, and execution will fall out the bottom of the program.
You can have as many `return` statements in `main()` as you like. | you can structure your program within a function then `return` when you wish to halt/end the program
ie
```
def foo():
# your program here
if we_want_to_halt:
return
if __name__ == "__main__":
foo()
``` |
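To make that skeleton concrete, here is a small runnable sketch (the input prompt and the stop condition are made-up examples, not from the original answer):
```
def main():
    # your program here
    text = input("Type something (or 'q' to stop): ")
    if text == "q":
        return          # halts the program cleanly, with no error message
    print("you typed:", text)

if __name__ == "__main__":
    main()
```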
60,327,453 | I am new to tensorflow and Convolutional Neural Networks, and I would like to build an AI that learns to find the mode of floating point numbers. But whenever I try to run the code, I run into some errors.
Here is my code so far:
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
train_data = [
[0.5, 0.2, 0.2],
[0.3, 0.3, 0.4],
[0.4, 0.4, 0.5],
[0.8, 0.8, 0.1]
]
train_labels = [
2.0,
3.0,
4.0,
8.0
]
test_data = [
[0.2, 0.5, 0.2],
[0.7, 0.1, 0.7],
[0.6, 0.8, 0.8]
]
test_labels = [
2,
7,
8
]
model = keras.Sequential()
model.add(Dense(4, activation=tf.nn.relu, input_shape=(1,)))
model.add(Dense(2, activation=tf.nn.relu))
model.add(Dense(1, activation=tf.nn.softmax))
model.compile(loss='sparse_categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.summary()
EPOCHS = 2
BATCH_SIZE=1
model.fit(train_data, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE)
```
However, when I try and run the code I get the following errors:
```
Traceback (most recent call last):
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 511, in _apply_op_helper
preferred_dtype=default_dtype)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1175, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 977, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("metrics/acc/Cast_6:0", shape=(?, 1), dtype=float32)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "testNeural.py", line 38, in <module>
metrics=['accuracy'])
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\training\checkpointable\base.py", line 442, in _method_wrapper
method(self, *args, **kwargs)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 499, in compile
sample_weights=self.sample_weights)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1844, in _handle_metrics
return_stateful_result=return_stateful_result))
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1801, in _handle_per_output_metrics
metric_result = _call_stateless_fn(metric_fn)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1777, in _call_stateless_fn
return weighted_metric_fn(y_true, y_pred, weights=weights, mask=mask)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_utils.py", line 647, in weighted
score_array = fn(y_true, y_pred)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\keras\metrics.py", line 1533, in binary_accuracy
return K.mean(math_ops.equal(y_true, y_pred), axis=-1)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 3093, in equal
"Equal", x=x, y=y, name=name)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 547, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Equal' Op has type float32 that does not match type int32 of argument 'x'.
```
Does anyone know how to fix this? | 2020/02/20 | [
"https://Stackoverflow.com/questions/60327453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Finally **SOLVED**:
**HTML:**
```
<mat-form-field>
<mat-label>Course</mat-label>
<mat-select
[formControl]="subjectControl"
[attr.data-tag]="this.subjectControl.value"
required
>
<mat-option>-- None --</mat-option>
<mat-optgroup *ngFor="let course of subjects" [label]="course.semester" [disabled]="course.disabled">
<mat-option *ngFor="let subject of course.courses" [value]="subject.subjectName"><!--here I have to set [value] to `.subjectName` or `.subjectSemester` to show it into the `data-tag`-->
{{ subject.subjectName }}
</mat-option>
</mat-optgroup>
</mat-select>
</mat-form-field>
```
As written in the comments of the code, I have to put the `[attr.data-tag]` into the `mat-select` equal to `this.subjectControl.value`, and set `[value]` of `mat-option` equal to the value to store into `[attr.data-tag]`. | Your code looks correct to me. I tried adding it to an existing stackblitz example, and it showed up in the HTML. Maybe it will help to figure it out:
<https://stackblitz.com/edit/angular-material-select-compare-with?embed=1&file=app/app.html>
```
<mat-option class="mat-option ng-star-inserted" data-tag="Three" role="option" ng-reflect-value="[object Object]" tabindex="0" id="mat-option-29" aria-selected="false" aria-disabled="false">
``` |
51,878,354 | Is there a built-in function that works like zip(), but fills the results so that the length of the resulting list is the length of the longest input and fills the list **from the left** with e.g. `None`?
There is already an [answer](https://stackoverflow.com/a/1277311/2648551) using [zip\_longest](https://docs.python.org/3/library/itertools.html#itertools.zip_longest) from `itertools` module and the corresponding [question](https://stackoverflow.com/q/1277278/2648551) is very similar to this. But with `zip_longest` it seems that you can only fill missing data from the right.
Here might be a use case for that, assuming we have names stored only like this (it's just an example):
```
header = ["title", "firstname", "lastname"]
person_1 = ["Dr.", "Joe", "Doe"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]
```
There is no other permutation like (`["Poppins", "Mary"]`, `["Poppins", "Dr", "Mary"]`) and so on.
How can I get results like this using built-in functions?
```
>>> dict(magic_zip(header, person_1))
{'title': 'Dr.', 'lastname': 'Doe', 'firstname': 'Joe'}
>>> dict(magic_zip(header, person_2))
{'title': None, 'lastname': 'Poppins', 'firstname': 'Mary'}
>>> dict(magic_zip(header, person_3))
{'title': None, 'lastname': 'Smith', 'firstname': None}
``` | 2018/08/16 | [
"https://Stackoverflow.com/questions/51878354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2648551/"
] | Use **`zip_longest`** but reverse lists.
**Example**:
```
from itertools import zip_longest
header = ["title", "firstname", "lastname"]
person_1 = ["Dr.", "Joe", "Doe"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]
print(dict(zip_longest(reversed(header), reversed(person_2))))
# {'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
```
On your use cases:
```
>>> dict(zip_longest(reversed(header), reversed(person_1)))
{'title': 'Dr.', 'lastname': 'Doe', 'firstname': 'Joe'}
>>> dict(zip_longest(reversed(header), reversed(person_2)))
{'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
>>> dict(zip_longest(reversed(header), reversed(person_3)))
{'lastname': 'Smith', 'firstname': None, 'title': None}
``` | Simply use `zip_longest` and read the arguments in the reverse direction:
```
In [20]: dict(zip_longest(header[::-1], person_1[::-1]))
Out[20]: {'lastname': 'Doe', 'firstname': 'Joe', 'title': 'Dr.'}
In [21]: dict(zip_longest(header[::-1], person_2[::-1]))
Out[21]: {'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
In [22]: dict(zip_longest(header[::-1], person_3[::-1]))
Out[22]: {'lastname': 'Smith', 'firstname': None, 'title': None}
```
Since the zip\* functions need to be able to work on general iterables, they don't support filling "from the left", because you'd need to exhaust the iterable first. Here we can just flip things ourselves. |
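To illustrate that point, here is a rough sketch of a left-padding variant that accepts arbitrary iterables by materializing them first (the name magic_zip follows the question; the implementation itself is just an assumption, not a built-in):
```
def magic_zip(*iterables):
    # Materialize every iterable so its length is known, then left-pad the
    # shorter ones with None before zipping.
    pools = [list(it) for it in iterables]
    width = max(len(p) for p in pools)
    padded = ([None] * (width - len(p)) + p for p in pools)
    return zip(*padded)

# dict(magic_zip(header, person_2)) -> {'title': None, 'firstname': 'Mary', 'lastname': 'Poppins'}
```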
51,878,354 | Is there a built-in function that works like zip(), but fills the results so that the length of the resulting list is the length of the longest input and fills the list **from the left** with e.g. `None`?
There is already an [answer](https://stackoverflow.com/a/1277311/2648551) using [zip\_longest](https://docs.python.org/3/library/itertools.html#itertools.zip_longest) from `itertools` module and the corresponding [question](https://stackoverflow.com/q/1277278/2648551) is very similar to this. But with `zip_longest` it seems that you can only fill missing data from the right.
Here might be a use case for that, assuming we have names stored only like this (it's just an example):
```
header = ["title", "firstname", "lastname"]
person_1 = ["Dr.", "Joe", "Doe"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]
```
There is no other permutation like (`["Poppins", "Mary"]`, `["Poppins", "Dr", "Mary"]`) and so on.
How can I get results like this using built-in functions?
```
>>> dict(magic_zip(header, person_1))
{'title': 'Dr.', 'lastname': 'Doe', 'firstname': 'Joe'}
>>> dict(magic_zip(header, person_2))
{'title': None, 'lastname': 'Poppins', 'firstname': 'Mary'}
>>> dict(magic_zip(header, person_3))
{'title': None, 'lastname': 'Smith', 'firstname': None}
``` | 2018/08/16 | [
"https://Stackoverflow.com/questions/51878354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2648551/"
] | Use **`zip_longest`** but reverse lists.
**Example**:
```
from itertools import zip_longest
header = ["title", "firstname", "lastname"]
person_1 = ["Dr.", "Joe", "Doe"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]
print(dict(zip_longest(reversed(header), reversed(person_2))))
# {'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
```
On your use cases:
```
>>> dict(zip_longest(reversed(header), reversed(person_1)))
{'title': 'Dr.', 'lastname': 'Doe', 'firstname': 'Joe'}
>>> dict(zip_longest(reversed(header), reversed(person_2)))
{'lastname': 'Poppins', 'firstname': 'Mary', 'title': None}
>>> dict(zip_longest(reversed(header), reversed(person_3)))
{'lastname': 'Smith', 'firstname': None, 'title': None}
``` | ```
def magic_zip(*lists):
max_len = max(map(len, lists))
return zip(*([None] * (max_len - len(l)) + l for l in lists))
``` |
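A quick usage check of this function against the examples from the question (the dictionaries in the comments are the outputs I would expect, not captured from a run):
```
header = ["title", "firstname", "lastname"]
person_2 = ["Mary", "Poppins"]
person_3 = ["Smith"]

print(dict(magic_zip(header, person_2)))
# {'title': None, 'firstname': 'Mary', 'lastname': 'Poppins'}
print(dict(magic_zip(header, person_3)))
# {'title': None, 'firstname': None, 'lastname': 'Smith'}
```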