qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
44,453,416 | I'm packing a Python application into Docker with nix's `dockerTools`, and all is good except for the image size. Python itself is about 40 MB, and if you add `numpy` and `pandas` it becomes a few hundred megabytes, while the application code is only ~100 KB.
The only solution I see is to pack the dependencies into a separate image and then inherit the main one from it. It won't fix the size, but at least I won't need to transfer huge images on every commit. I also don't know how to do this: should I use some image with nix, or build an environment with `pythonPackages.buildEnv` and then attach my app to it?
It would be great to have a generic solution, but a Python-specific one would be good too. Even if you have an imperfect solution, please share.
OK, with the `fromImage` attr for `buildImage` I split one huge layer into a huge dependency layer and a small app-code layer.
I wonder if there is any way to move this fat dependency layer into a separate image, so I could share it among my other projects? | 2017/06/09 | [
"https://Stackoverflow.com/questions/44453416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1307593/"
] | After googling a bit and reading the `dockerTools` code I ended up with this solution:
```
let
  deps = pkgs.dockerTools.buildImage {
    name = "deps";
    contents = [ list of all deps here ];
  };
in pkgs.dockerTools.buildImage {
  name = "app";
  fromImage = deps;
}
```
This will build a two-layer Docker image: one layer holds the dependencies, the other the app. It also seems that the value for `fromImage` could be the result of `pullImage`, which should give you the same result (if I understood the code correctly), but I wasn't able to check it. | There is no need to package your dependencies in a separate image and inherit it, although that does no harm.
All you need to do is make sure that you add your application code in one of the last steps of the Dockerfile. Each command gets its own layer, so if you only change your application code, all layers built before that change can be reused from the cache.
Example from the [Docker Images and Layers](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#images-and-layers) documentation:
The Dockerfile
```
FROM ubuntu:15.10
COPY . /app
RUN make /app
CMD python /app/app.py
```
contains four distinct layers. If you only modify the last line, only that layer (and any layers after it) has to be rebuilt and transferred. When pushing or pulling you will see `4b0ba2c4050a: Already exists` next to the layers served from cache.
Following this approach you don't end up with a smaller image, but, as you say, you don't have to transfer large images on every change. |
60,397,004 | Hi, I'm new to Python and programming in general.
I'm trying to find an element in an array based on user input.
Here's what I've done:
```
a = [31,41,59,26,41,58]
input = input("Enter number : ")
for i in range(1,len(a),1) :
    if input == a[i] :
        print(i)
```
The problem is that it doesn't print anything.
What am I doing wrong here? | 2020/02/25 | [
"https://Stackoverflow.com/questions/60397004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12932148/"
] | `input` returns a string. To compare against the integers in `a`, wrap the input in `int`:
```
inp = int(input('enter :'))
for i in range(len(a)):
    if inp == a[i]:
        print(i)
```
Indices in a `list` run from *0* to *len(list)-1*.
Instead of indexing with `range`, it's preferred to use `enumerate`:
```
for idx, val in enumerate(a):
    if inp == val:
        print(idx)
```
---
To check whether `inp` is in `a`, you can do this:
```
>>> inp in a
True  # if it exists, else False
```
---
You can also use `try`/`except`:
```
try:
    print(a.index(inp))
except ValueError:
    print('Element not Found')
``` | `input` returns a string; `a` contains integers.
Your loop starts at 1, so it will never test against `a[0]` (in this case, 31).
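Putting those fixes together, here is a minimal sketch (the `find_indices` helper name is my own, not from the question):

```python
a = [31, 41, 59, 26, 41, 58]

def find_indices(values, target):
    # enumerate starts at index 0, so a[0] is checked too.
    return [i for i, val in enumerate(values) if val == target]

# Convert the typed text to int before comparing, e.g.:
# number = int(input("Enter number : "))
print(find_indices(a, 41))  # prints [1, 4]
```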
And you shouldn't re-define the name `input`. |
60,397,004 | Hi, I'm new to Python and programming in general.
I'm trying to find an element in an array based on user input.
Here's what I've done:
```
a = [31,41,59,26,41,58]
input = input("Enter number : ")
for i in range(1,len(a),1) :
    if input == a[i] :
        print(i)
```
The problem is that it doesn't print anything.
What am I doing wrong here? | 2020/02/25 | [
"https://Stackoverflow.com/questions/60397004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12932148/"
] | `input` returns a string. To compare against the integers in `a`, wrap the input in `int`:
```
inp = int(input('enter :'))
for i in range(len(a)):
    if inp == a[i]:
        print(i)
```
Indices in a `list` run from *0* to *len(list)-1*.
Instead of indexing with `range`, it's preferred to use `enumerate`:
```
for idx, val in enumerate(a):
    if inp == val:
        print(idx)
```
---
To check whether `inp` is in `a`, you can do this:
```
>>> inp in a
True  # if it exists, else False
```
---
You can also use `try`/`except`:
```
try:
    print(a.index(inp))
except ValueError:
    print('Element not Found')
``` | Please don't name a variable ***input***; shadowing the built-in is not good practice. Also, ***whitespace*** is very important in Python.
```
a = [31,41,59,26,41,58]
b = input("Enter number : ")
for i in range(len(a)):
    if int(b) == a[i]:
        print(i)
```
I think you want to check a value from your list, so your input needs to be an ***int***. But `input` takes it as a string; that's why you need to convert it into an int. |
60,397,004 | Hi, I'm new to Python and programming in general.
I'm trying to find an element in an array based on user input.
Here's what I've done:
```
a = [31,41,59,26,41,58]
input = input("Enter number : ")
for i in range(1,len(a),1) :
    if input == a[i] :
        print(i)
```
The problem is that it doesn't print anything.
What am I doing wrong here? | 2020/02/25 | [
"https://Stackoverflow.com/questions/60397004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12932148/"
] | `input` returns a string. To compare against the integers in `a`, wrap the input in `int`:
```
inp = int(input('enter :'))
for i in range(len(a)):
    if inp == a[i]:
        print(i)
```
Indices in a `list` run from *0* to *len(list)-1*.
Instead of indexing with `range`, it's preferred to use `enumerate`:
```
for idx, val in enumerate(a):
    if inp == val:
        print(idx)
```
---
To check whether `inp` is in `a`, you can do this:
```
>>> inp in a
True  # if it exists, else False
```
---
You can also use `try`/`except`:
```
try:
    print(a.index(inp))
except ValueError:
    print('Element not Found')
``` | `input` provides a `str`, but you are comparing it against a list of `int`s. Also, your loop starts at 1 while list indices start at 0. |
26,532,216 | I am trying to install some additional packages that do not come with Anaconda. All of these packages can be installed using `pip install PackageName`. However, when I type this command at the Anaconda Command Prompt, I get the following error:
```
Fatal error in launcher: Unable to create process using '"C:\Python27\python.exe
" "C:\python27\scripts\pip.exe" install MechanicalSoup'
```
I also tried to run the command from the python interpreter after `import pip` but that also did not work (I got a `SyntaxError: invalid syntax`).
I am a noob and understand that this might be a very basic question so thanks for your help in advance!
PS: I am using Windows 7, 64 bit, conda version: 3.7.1 and python version: 2.7.6. | 2014/10/23 | [
"https://Stackoverflow.com/questions/26532216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4174494/"
] | When installing Anaconda, you are asked whether you want to add the installed Python to your system PATH variable. Make sure it is on your PATH. If everything is set up correctly, you can run pip from your regular command prompt as well. | There is a way around the use of pip.
From the anaconda terminal window you can run:
```
conda install PackageName
```
Because MechanicalSoup isn't in one of Anaconda's package channels, you will have to do a bit of editing.
See instructions near the bottom [on their blog](http://www.continuum.io/blog/conda) |
26,532,216 | I am trying to install some additional packages that do not come with Anaconda. All of these packages can be installed using `pip install PackageName`. However, when I type this command at the Anaconda Command Prompt, I get the following error:
```
Fatal error in launcher: Unable to create process using '"C:\Python27\python.exe
" "C:\python27\scripts\pip.exe" install MechanicalSoup'
```
I also tried to run the command from the python interpreter after `import pip` but that also did not work (I got a `SyntaxError: invalid syntax`).
I am a noob and understand that this might be a very basic question so thanks for your help in advance!
PS: I am using Windows 7, 64 bit, conda version: 3.7.1 and python version: 2.7.6. | 2014/10/23 | [
"https://Stackoverflow.com/questions/26532216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4174494/"
] | Using @heinzchr's and @mmann's suggestions I was able to piece together the problem. I already had a version of Python 2.7 installed at `C:\Python27`, and I had to remove this from the Path (`My Computer's properties > Advanced system settings > System variables > Path`). I can now use `pip install` from the command line. | There is a way around the use of pip.
From the anaconda terminal window you can run:
```
conda install PackageName
```
Because MechanicalSoup isn't in one of Anaconda's package channels, you will have to do a bit of editing.
See instructions near the bottom [on their blog](http://www.continuum.io/blog/conda) |
26,532,216 | I am trying to install some additional packages that do not come with Anaconda. All of these packages can be installed using `pip install PackageName`. However, when I type this command at the Anaconda Command Prompt, I get the following error:
```
Fatal error in launcher: Unable to create process using '"C:\Python27\python.exe
" "C:\python27\scripts\pip.exe" install MechanicalSoup'
```
I also tried to run the command from the python interpreter after `import pip` but that also did not work (I got a `SyntaxError: invalid syntax`).
I am a noob and understand that this might be a very basic question so thanks for your help in advance!
PS: I am using Windows 7, 64 bit, conda version: 3.7.1 and python version: 2.7.6. | 2014/10/23 | [
"https://Stackoverflow.com/questions/26532216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4174494/"
] | Using @heinzchr's and @mmann's suggestions I was able to piece together the problem. I already had a version of Python 2.7 installed at `C:\Python27`, and I had to remove this from the Path (`My Computer's properties > Advanced system settings > System variables > Path`). I can now use `pip install` from the command line. | When installing Anaconda, you are asked whether you want to add the installed Python to your system PATH variable. Make sure it is on your PATH. If everything is set up correctly, you can run pip from your regular command prompt as well. |
26,532,216 | I am trying to install some additional packages that do not come with Anaconda. All of these packages can be installed using `pip install PackageName`. However, when I type this command at the Anaconda Command Prompt, I get the following error:
```
Fatal error in launcher: Unable to create process using '"C:\Python27\python.exe
" "C:\python27\scripts\pip.exe" install MechanicalSoup'
```
I also tried to run the command from the python interpreter after `import pip` but that also did not work (I got a `SyntaxError: invalid syntax`).
I am a noob and understand that this might be a very basic question so thanks for your help in advance!
PS: I am using Windows 7, 64 bit, conda version: 3.7.1 and python version: 2.7.6. | 2014/10/23 | [
"https://Stackoverflow.com/questions/26532216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4174494/"
] | When installing Anaconda, you are asked whether you want to add the installed Python to your system PATH variable. Make sure it is on your PATH. If everything is set up correctly, you can run pip from your regular command prompt as well. | For those looking for Python packages not added to current channels in Anaconda, try <https://conda-forge.org/>. For example, if you want to install MechanicalSoup, you'll find it at <https://anaconda.org/conda-forge/mechanicalsoup> and can use the `-c` option to tell conda which channel to use:
```
conda install -c conda-forge mechanicalsoup
``` |
26,532,216 | I am trying to install some additional packages that do not come with Anaconda. All of these packages can be installed using `pip install PackageName`. However, when I type this command at the Anaconda Command Prompt, I get the following error:
```
Fatal error in launcher: Unable to create process using '"C:\Python27\python.exe
" "C:\python27\scripts\pip.exe" install MechanicalSoup'
```
I also tried to run the command from the python interpreter after `import pip` but that also did not work (I got a `SyntaxError: invalid syntax`).
I am a noob and understand that this might be a very basic question so thanks for your help in advance!
PS: I am using Windows 7, 64 bit, conda version: 3.7.1 and python version: 2.7.6. | 2014/10/23 | [
"https://Stackoverflow.com/questions/26532216",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4174494/"
] | Using @heinzchr's and @mmann's suggestions I was able to piece together the problem. I already had a version of Python 2.7 installed at `C:\Python27`, and I had to remove this from the Path (`My Computer's properties > Advanced system settings > System variables > Path`). I can now use `pip install` from the command line. | For those looking for Python packages not added to current channels in Anaconda, try <https://conda-forge.org/>. For example, if you want to install MechanicalSoup, you'll find it at <https://anaconda.org/conda-forge/mechanicalsoup> and can use the `-c` option to tell conda which channel to use:
```
conda install -c conda-forge mechanicalsoup
``` |
61,302,203 | ```
File "<ipython-input-6-b985bbbd8c62>", line 21
cv2.rectangle(img,(ix,iy),(x,y),(255,0,0),-1)
^
IndentationError: expected an indented block
```
my code
```
import cv2
import numpy as np
#variables
#True while mouse button down, False while mouse button up
drawing = False
ix,iy = -1
#Function
def draw_rectangle(event,x,y,param,flags):
global ix,iy,drawing
if event == cv2.EVENT_LBUTTONDOWN:
drawing = True
ix,iy = x,y
elif event == cv2.EVENT_MOUSEMOVE:
if drawing == True:
cv2.rectangle(img,(ix,iy),(x,y),(255,0,0),-1)
elif event == cv2.EVENT_LBUTTONUP:
drawing = False
cv2.rectangle(img,(ix,iy),(x,y),(255,0,0),-1)
#Showing images with opencv
#black
img = np.zeros((612,612,3))
cv2.namedwindow(winname='draw_painting')
cv2.setMouseCallback('draw_painting',draw_rectangle)
while True:
cv2.imshow('draw_painting',img)
cv2.waitkey(20) & 0xFF = 27:
break
cv2.destryAllWindows()
``` | 2020/04/19 | [
"https://Stackoverflow.com/questions/61302203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12148825/"
] | Launch `npm install`,
and in your body add the class `<body class="mat-app-background">`; or, if you want, you can try adding `import { MatSidenavModule } from "@angular/material/sidenav";` to your app.module.ts
and put your HTML code inside `<mat-sidenav-container>`. | Try adding `@import '@angular/material/prebuilt-themes/pink-bluegrey.css';` to your `styles.css` file. |
38,274,695 | Can anybody help me with this? I'm a beginner in Python and programming. Thanks very much.
I got this `TypeError: 'dict' object is not callable` when I execute this function.
```
def goodVsEvil(good, evil):
    GoodTeam = {'Hobbits':1, 'Men':2, 'Elves':3, 'Dwarves':3, 'Eagles':4, 'Wizards':10}
    EvilTeam = {'Orcs':1, 'Men':2, 'Wargs':2, 'Goblins':2, 'Uruk Hai':3, 'Trolls':5, 'Wizards':10}
    Gworth = 0
    Eworth = 0
    for k, val in GoodTeam():
        Input = raw_input ('How many of {0} : ')
        Gworth = Gworth + int(Input) * val
    for k, val in EvilTeam():
        inp = raw_input ('How many of {0} : ')
        Eworth = Eworth + int(inp) * val
    if Gworth > Eworth:
        return 'Battle Result: Good triumphs over Evil'
    if Eworth > Gworth:
        return 'Battle Result: Evil eradicates all trace of Good'
    if Eworth == Gworth:
        return 'Battle Result: No victor on this battle field'
``` | 2016/07/08 | [
"https://Stackoverflow.com/questions/38274695",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6566925/"
] | Those parentheses are unnecessary. You intend to use `.items()`, which lets you iterate over the keys and values of your dictionary:
```
for k, val in GoodTeam.items():
    # your code
```
You should replicate this change for `EvilTeam` also. | Like the error says, `GoodTeam` is a dict, but you're trying to call it. I think you mean to call its `items` method:
```
for k, val in GoodTeam.items():
```
The same is true for `EvilTeam`.
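Putting it together, a runnable sketch with `.items()` and without the interactive `raw_input` calls (the hard-coded counts and the `good_vs_evil` signature here are my own illustration, not the original code):

```python
def good_vs_evil(good_counts, evil_counts):
    good_team = {'Hobbits': 1, 'Men': 2, 'Elves': 3, 'Dwarves': 3,
                 'Eagles': 4, 'Wizards': 10}
    evil_team = {'Orcs': 1, 'Men': 2, 'Wargs': 2, 'Goblins': 2,
                 'Uruk Hai': 3, 'Trolls': 5, 'Wizards': 10}
    # .items() yields (key, value) pairs; calling the dict itself raises
    # "TypeError: 'dict' object is not callable".
    g_worth = sum(good_counts.get(k, 0) * val for k, val in good_team.items())
    e_worth = sum(evil_counts.get(k, 0) * val for k, val in evil_team.items())
    if g_worth > e_worth:
        return 'Battle Result: Good triumphs over Evil'
    if e_worth > g_worth:
        return 'Battle Result: Evil eradicates all trace of Good'
    return 'Battle Result: No victor on this battle field'

print(good_vs_evil({'Hobbits': 5}, {'Orcs': 2}))  # worth 5 vs 2
```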
Note you have other errors; you're using the string format method but haven't given it anything to actually format. |
34,090,999 | With Python's [`logging`](https://docs.python.org/2/library/logging.html) module, is there a way to **collect multiple events into one log entry**? An ideal solution would be an extension of Python's `logging` module or a **custom formatter/filter** for it, so that collecting logging events of the same kind happens in the background and **nothing needs to be added in the code body** (e.g. at every call of a logging function).
Here is an **example** that generates a **large number of the same or very similar logging** events:
```
import logging
for i in range(99999):
    try:
        asdf[i] # not defined!
    except NameError:
        logging.exception('foo') # generates large number of logging events
    else: pass
# ... more code with more logging ...
for i in range(88888): logging.info('more of the same %d' % i)
# ... and so on ...
```
So we have the same exception **99999** times and log it each time. It would be nice if the log just said something like:
```
ERROR:root:foo (occurred 99999 times)
Traceback (most recent call last):
  File "./exceptionlogging.py", line 10, in <module>
    asdf[i] # not defined!
NameError: name 'asdf' is not defined
INFO:root:foo more of the same (occurred 88888 times with various values)
``` | 2015/12/04 | [
"https://Stackoverflow.com/questions/34090999",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/789308/"
] | You should probably be writing a message aggregate/statistics class rather than trying to hook onto the logging system's [singletons](https://stackoverflow.com/questions/31875/is-there-a-simple-elegant-way-to-define-singletons-in-python), but I guess you may have an existing code base that uses logging.
I'd also suggest that you instantiate your own loggers rather than always using the default root logger. The [Python Logging Cookbook](https://docs.python.org/2/howto/logging-cookbook.html) has extensive explanations and examples.
The following class should do what you are asking.
```
import logging
import atexit
import pprint

class Aggregator(object):
    logs = {}

    @classmethod
    def _aggregate(cls, record):
        id = '{0[levelname]}:{0[name]}:{0[msg]}'.format(record.__dict__)
        if id not in cls.logs: # first occurrence
            cls.logs[id] = [1, record]
        else: # subsequent occurrence
            cls.logs[id][0] += 1

    @classmethod
    def _output(cls):
        for count, record in cls.logs.values():
            record.__dict__['msg'] += ' (occured {} times)'.format(count)
            logging.getLogger(record.__dict__['name']).handle(record)

    @staticmethod
    def filter(record):
        # pprint.pprint(record)
        Aggregator._aggregate(record)
        return False

    @staticmethod
    def exit():
        Aggregator._output()

logging.getLogger().addFilter(Aggregator)
atexit.register(Aggregator.exit)

for i in range(99999):
    try:
        asdf[i] # not defined!
    except NameError:
        logging.exception('foo') # generates large number of logging events
    else: pass
# ... more code with more logging ...
for i in range(88888): logging.error('more of the same')
# ... and so on ...
```
Note that you don't get any logs until the program exits.
The result of running this is:
```
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
  File "C:\work\VEMS\python\logcount.py", line 38, in
    asdf[i] # not defined!
NameError: name 'asdf' is not defined
ERROR:root:more of the same (occured 88888 times)
``` | Create a counter and only log on the first occurrence (`count == 1`), then just increment thereafter, and write the total out in a `finally` block (to ensure it gets logged no matter how badly the application crashes and burns). This could of course pose an issue if you have the same exception for different reasons, but you could always check the line number to verify it's the same issue, or something similar. A minimal example:
```
import logging

name_error_exception_count = 0
try:
    for i in range(99999):
        try:
            asdf[i] # not defined!
        except NameError:
            name_error_exception_count += 1
            if name_error_exception_count == 1:
                logging.exception('foo')
        else: pass
except Exception:
    pass # this is just to get the finally block, handle exceptions here too, maybe
finally:
    if name_error_exception_count > 0:
        logging.exception('NameError exception occurred {} times.'.format(name_error_exception_count))
``` |
34,090,999 | With Python's [`logging`](https://docs.python.org/2/library/logging.html) module, is there a way to **collect multiple events into one log entry**? An ideal solution would be an extension of Python's `logging` module or a **custom formatter/filter** for it, so that collecting logging events of the same kind happens in the background and **nothing needs to be added in the code body** (e.g. at every call of a logging function).
Here is an **example** that generates a **large number of the same or very similar logging** events:
```
import logging
for i in range(99999):
    try:
        asdf[i] # not defined!
    except NameError:
        logging.exception('foo') # generates large number of logging events
    else: pass
# ... more code with more logging ...
for i in range(88888): logging.info('more of the same %d' % i)
# ... and so on ...
```
So we have the same exception **99999** times and log it each time. It would be nice if the log just said something like:
```
ERROR:root:foo (occurred 99999 times)
Traceback (most recent call last):
  File "./exceptionlogging.py", line 10, in <module>
    asdf[i] # not defined!
NameError: name 'asdf' is not defined
INFO:root:foo more of the same (occurred 88888 times with various values)
``` | 2015/12/04 | [
"https://Stackoverflow.com/questions/34090999",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/789308/"
] | You can subclass the logger class and override the exception method to put your error types in a cache until they reach a certain counter before they are emitted to the log.
```
import logging
from collections import defaultdict

MAX_COUNT = 99999

class MyLogger(logging.getLoggerClass()):
    def __init__(self, name):
        super(MyLogger, self).__init__(name)
        self.cache = defaultdict(int)

    def exception(self, msg, *args, **kwargs):
        err = msg.__class__.__name__
        self.cache[err] += 1
        if self.cache[err] > MAX_COUNT:
            new_msg = "{err} occurred {count} times.\n{msg}"
            new_msg = new_msg.format(err=err, count=MAX_COUNT, msg=msg)
            self.log(logging.ERROR, new_msg, *args, **kwargs)
            self.cache[err] = None

log = MyLogger('main')
try:
    raise TypeError("Useful error message")
except TypeError as err:
    log.exception(err)
```
Please note this isn't copy-paste code; you need to add your handlers (I recommend a formatter, too) yourself.
<https://docs.python.org/2/howto/logging.html#handlers>
Have fun. | Create a counter and only log on the first occurrence (`count == 1`), then just increment thereafter, and write the total out in a `finally` block (to ensure it gets logged no matter how badly the application crashes and burns). This could of course pose an issue if you have the same exception for different reasons, but you could always check the line number to verify it's the same issue, or something similar. A minimal example:
```
import logging

name_error_exception_count = 0
try:
    for i in range(99999):
        try:
            asdf[i] # not defined!
        except NameError:
            name_error_exception_count += 1
            if name_error_exception_count == 1:
                logging.exception('foo')
        else: pass
except Exception:
    pass # this is just to get the finally block, handle exceptions here too, maybe
finally:
    if name_error_exception_count > 0:
        logging.exception('NameError exception occurred {} times.'.format(name_error_exception_count))
``` |
34,090,999 | With Python's [`logging`](https://docs.python.org/2/library/logging.html) module, is there a way to **collect multiple events into one log entry**? An ideal solution would be an extension of Python's `logging` module or a **custom formatter/filter** for it, so that collecting logging events of the same kind happens in the background and **nothing needs to be added in the code body** (e.g. at every call of a logging function).
Here is an **example** that generates a **large number of the same or very similar logging** events:
```
import logging
for i in range(99999):
    try:
        asdf[i] # not defined!
    except NameError:
        logging.exception('foo') # generates large number of logging events
    else: pass
# ... more code with more logging ...
for i in range(88888): logging.info('more of the same %d' % i)
# ... and so on ...
```
So we have the same exception **99999** times and log it each time. It would be nice if the log just said something like:
```
ERROR:root:foo (occurred 99999 times)
Traceback (most recent call last):
  File "./exceptionlogging.py", line 10, in <module>
    asdf[i] # not defined!
NameError: name 'asdf' is not defined
INFO:root:foo more of the same (occurred 88888 times with various values)
``` | 2015/12/04 | [
"https://Stackoverflow.com/questions/34090999",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/789308/"
] | Your question hides a subliminal assumption of how "very similar" is defined.
Log records can either be const-only (whose instances are strictly identical), or a mix of consts and variables (no consts at all is also considered a mix).
An aggregator for const-only log records is a piece of cake. You just need to decide whether process/thread will fork your aggregation or not.
For log records which include both consts and variables you'll need to decide whether to split your aggregation based on the variables you have in your record.
A dictionary-style counter (from collections import Counter) can serve as a cache, which will count your instances in O(1), but you may need some higher-level structure in order to write the variables down if you wish. Additionally, you'll have to manually handle the writing of the cache into a file - every X seconds (binning) or once the program has exited (risky - you may lose all in-memory data if something gets stuck).
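That counter idea can be sketched in a few lines (a standalone illustration with made-up names, separate from the fuller framework that follows):

```python
from collections import Counter

message_counts = Counter()

def count_log(msg):
    # O(1) per event: just bump the counter for this message.
    message_counts[msg] += 1

def flush():
    # Turn the cache into aggregated lines, e.g. on a timer or at exit.
    return ['{} (occurred {} times)'.format(msg, n)
            for msg, n in message_counts.items()]

for _ in range(99999):
    count_log('foo')
count_log('bar')
print(flush())  # ['foo (occurred 99999 times)', 'bar (occurred 1 times)']
```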
A framework for aggregation would look something like this (tested on Python v3.4):
```
import logging
from logging import Handler
from threading import RLock, Timer
from collections import defaultdict

class LogAggregatorHandler(Handler):

    _default_flush_timer = 300  # Number of seconds between flushes
    _default_separator = "\t"  # Separator char between metadata strings
    _default_metadata = ["filename", "name", "funcName", "lineno", "levelname"]  # metadata defining unique log records

    class LogAggregatorCache(object):
        """ Keeps whatever is interesting in log records aggregation. """
        def __init__(self, record=None):
            self.message = None
            self.counter = 0
            self.timestamp = list()
            self.args = list()
            if record is not None:
                self.cache(record)

        def cache(self, record):
            if self.message is None:  # Only the first message is kept
                self.message = record.msg
            assert self.message == record.msg, "Non-matching log record"  # note: will not work with string formatting for log records; e.g. "blah {}".format(i)
            self.timestamp.append(record.created)
            self.args.append(record.args)
            self.counter += 1

        def __str__(self):
            """ The string of this object is used as the default output of log records aggregation. For example: record message with occurrences. """
            return self.message + "\t (occurred {} times)".format(self.counter)

    def __init__(self, flush_timer=None, separator=None, add_process_thread=False):
        """
        Log record metadata will be concatenated to a unique string, separated by self._separator.
        Process and thread IDs will be added to the metadata if set to True; otherwise log records across processes/threads will be aggregated together.
        :param separator: str
        :param add_process_thread: bool
        """
        super().__init__()
        self._flush_timer = flush_timer or self._default_flush_timer
        self._cache = self.cache_factory()
        self._separator = separator or self._default_separator
        self._metadata = self._default_metadata
        if add_process_thread is True:
            self._metadata += ["process", "thread"]
        self._aggregation_lock = RLock()
        self._store_aggregation_timer = self.flush_timer_factory()
        self._store_aggregation_timer.start()
        # Demo logger which outputs aggregations through a StreamHandler:
        self.agg_log = logging.getLogger("aggregation_logger")
        self.agg_log.addHandler(logging.StreamHandler())
        self.agg_log.setLevel(logging.DEBUG)
        self.agg_log.propagate = False

    def cache_factory(self):
        """ Returns an instance of a new caching object. """
        return defaultdict(self.LogAggregatorCache)

    def flush_timer_factory(self):
        """ Returns a threading.Timer daemon object which flushes the Handler aggregations. """
        timer = Timer(self._flush_timer, self.flush)
        timer.daemon = True
        return timer

    def find_unique(self, record):
        """ Extracts a unique metadata string from log records. """
        metadata = ""
        for single_metadata in self._metadata:
            value = getattr(record, single_metadata, "missing " + str(single_metadata))
            metadata += str(value) + self._separator
        return metadata[:-len(self._separator)]

    def emit(self, record):
        try:
            with self._aggregation_lock:
                metadata = self.find_unique(record)
                self._cache[metadata].cache(record)
        except Exception:
            self.handleError(record)

    def flush(self):
        self.store_aggregation()

    def store_aggregation(self):
        """ Write the aggregation data to file. """
        self._store_aggregation_timer.cancel()
        del self._store_aggregation_timer
        with self._aggregation_lock:
            temp_aggregation = self._cache
            self._cache = self.cache_factory()
        # ---> handle temp_aggregation and write to file <--- #
        for key, value in sorted(temp_aggregation.items()):
            self.agg_log.info("{}\t{}".format(key, value))
        # ---> re-create the store_aggregation Timer object <--- #
        self._store_aggregation_timer = self.flush_timer_factory()
        self._store_aggregation_timer.start()
```
Testing this Handler class with random log severity in a for-loop:
```
if __name__ == "__main__":
    import random
    import logging

    logger = logging.getLogger()
    handler = LogAggregatorHandler()
    logger.addHandler(handler)
    logger.addHandler(logging.StreamHandler())
    logger.setLevel(logging.DEBUG)

    logger.info("entering logging loop")
    for i in range(25):
        # Randomly choose log severity:
        severity = random.choice([logging.DEBUG, logging.INFO, logging.WARN, logging.ERROR, logging.CRITICAL])
        logger.log(severity, "test message number %s", i)
    logger.info("end of test code")
```
If you want to add more stuff, this is what a Python log record looks like:
```
{'args': ['()'],
'created': ['1413747902.18'],
'exc_info': ['None'],
'exc_text': ['None'],
'filename': ['push_socket_log.py'],
'funcName': ['<module>'],
'levelname': ['DEBUG'],
'levelno': ['10'],
'lineno': ['17'],
'module': ['push_socket_log'],
'msecs': ['181.387901306'],
'msg': ['Test message.'],
'name': ['__main__'],
'pathname': ['./push_socket_log.py'],
'process': ['65486'],
'processName': ['MainProcess'],
'relativeCreated': ['12.6709938049'],
'thread': ['140735262810896'],
'threadName': ['MainThread']}
```
One more thing to think about:
Most features you run depend on a flow of several consecutive commands (which will ideally report log records accordingly); e.g. a client-server communication will typically depend on receiving a request, processing it, reading some data from the DB (which requires a connection and some read commands), some kind of parsing/processing, constructing the response packet and reporting the response code.
This highlights one of the main disadvantages of using an aggregation approach: by aggregating log records you lose track of the time and order of the actions that took place. It will be extremely difficult to figure out what request was incorrectly structured if you only have the aggregation at hand.
My advice in this case is that you keep both the raw data and the aggregation (using two file handlers or something similar), so that you can investigate a macro-level (aggregation) and a micro-level (normal logging).
However, you are still left with the responsibility of finding out that things have gone wrong, and then manually investigating what caused it. When developing on your PC this is an easy enough task; but deploying your code on several production servers makes these tasks cumbersome, wasting a lot of your time.
Accordingly, there are several companies developing products specifically for log management. Most aggregate similar log records together, but others incorporate machine learning algorithms for automatic aggregation and learning your software's behavior. Outsourcing your log handling can then enable you to focus on your product, instead of on your bugs.
Disclaimer: I work for [Coralogix](http://www.coralogix.com), one such solution. | Create a counter and only log it for `count=1`, then increment thereafter and write out in a finally block (to ensure it gets logged no matter how bad the application crashes and burns). This could of course pose an issue if you have the same exception for different reasons, but you could always search for the line number to verify it's the same issue or something similar. A minimal example:
```
name_error_exception_count = 0
try:
for i in range(99999):
try:
asdf[i] # not defined!
except NameError:
name_error_exception_count += 1
if name_error_exception_count == 1:
logging.exception('foo')
else: pass
except Exception:
pass # this is just to get the finally block, handle exceptions here too, maybe
finally:
if name_error_exception_count > 0:
logging.exception('NameError exception occurred {} times.'.format(name_error_exception_count))
``` |
34,090,999 | With Python's [`logging`](https://docs.python.org/2/library/logging.html) module, is there a way to **collect multiple events into one log entry**? An ideal solution would be an extension of Python's `logging` module or a **custom formatter/filter** for it, so that collecting logging events of the same kind happens in the background and **nothing needs to be added in the code body** (e.g. at every call of a logging function).
Here is an **example** that generates a **large number of the same or very similar logging** events:
```
import logging
for i in range(99999):
try:
asdf[i] # not defined!
except NameError:
logging.exception('foo') # generates large number of logging events
else: pass
# ... more code with more logging ...
for i in range(88888): logging.info('more of the same %d' % i)
# ... and so on ...
```
So we have the same exception **99999** times and log it. It would be nice if the log just said something like:
```
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
File "./exceptionlogging.py", line 10, in <module>
asdf[i] # not defined!
NameError: name 'asdf' is not defined
INFO:root:foo more of the same (occured 88888 times with various values)
``` | 2015/12/04 | [
"https://Stackoverflow.com/questions/34090999",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/789308/"
] | You should probably be writing a message aggregate/statistics class rather than trying to hook onto the logging system's [singletons](https://stackoverflow.com/questions/31875/is-there-a-simple-elegant-way-to-define-singletons-in-python) but I guess you may have an existing code base that uses logging.
I'd also suggest that you should instantiate your loggers rather than always using the default root. The [Python Logging Cookbook](https://docs.python.org/2/howto/logging-cookbook.html) has extensive explanation and examples.
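As a minimal, hedged sketch of that advice (the logger names here are invented for illustration), per-module loggers are usually created once at import time and form a dot-separated hierarchy:

```python
import logging

# A per-module logger instead of the anonymous root logger.
# In a real module you would write: logger = logging.getLogger(__name__)
logger = logging.getLogger("myapp.worker")
logger.addHandler(logging.NullHandler())  # library code: let the application configure handlers

# Named loggers form a dot-separated hierarchy; "myapp" is the parent of "myapp.worker".
parent = logging.getLogger("myapp")
```

Records emitted on `myapp.worker` then propagate up to handlers attached to `myapp` or the root, which is what makes per-logger configuration possible.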
The following class should do what you are asking.
```
import logging
import atexit
import pprint
class Aggregator(object):
logs = {}
@classmethod
def _aggregate(cls, record):
id = '{0[levelname]}:{0[name]}:{0[msg]}'.format(record.__dict__)
if id not in cls.logs: # first occurrence
cls.logs[id] = [1, record]
else: # subsequent occurrence
cls.logs[id][0] += 1
@classmethod
def _output(cls):
for count, record in cls.logs.values():
record.__dict__['msg'] += ' (occured {} times)'.format(count)
logging.getLogger(record.__dict__['name']).handle(record)
@staticmethod
def filter(record):
# pprint.pprint(record)
Aggregator._aggregate(record)
return False
@staticmethod
def exit():
Aggregator._output()
logging.getLogger().addFilter(Aggregator)
atexit.register(Aggregator.exit)
for i in range(99999):
try:
asdf[i] # not defined!
except NameError:
logging.exception('foo') # generates large number of logging events
else: pass
# ... more code with more logging ...
for i in range(88888): logging.error('more of the same')
# ... and so on ...
```
Note that you don't get any logs until the program exits.
The result of running it is:
```
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
File "C:\work\VEMS\python\logcount.py", line 38, in
asdf[i] # not defined!
NameError: name 'asdf' is not defined
ERROR:root:more of the same (occured 88888 times)
``` | You can subclass the logger class and override the exception method to put your error types in a cache until they reach a certain counter before they are emitted to the log.
```
import logging
from collections import defaultdict
MAX_COUNT = 99999
class MyLogger(logging.getLoggerClass()):
def __init__(self, name):
super(MyLogger, self).__init__(name)
self.cache = defaultdict(int)
def exception(self, msg, *args, **kwargs):
err = msg.__class__.__name__
self.cache[err] += 1
if self.cache[err] > MAX_COUNT:
new_msg = "{err} occurred {count} times.\n{msg}"
new_msg = new_msg.format(err=err, count=MAX_COUNT, msg=msg)
self.log(logging.ERROR, new_msg, *args, **kwargs)
            self.cache[err] = 0  # reset the counter (None would break the next += 1)
log = MyLogger('main')
try:
raise TypeError("Useful error message")
except TypeError as err:
log.exception(err)
```
Please note this isn't copy-paste code.
You need to add your handlers (and I recommend a formatter, too) yourself.
<https://docs.python.org/2/howto/logging.html#handlers>
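A quick sketch of that handler/formatter wiring (the format string and logger name are just illustrative; an in-memory stream stands in for stderr or a file so the output is easy to inspect):

```python
import io
import logging

stream = io.StringIO()  # stand-in for sys.stderr or a log file
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

log = logging.getLogger("main_demo")  # in the answer above this would be the MyLogger instance
log.addHandler(handler)
log.setLevel(logging.DEBUG)
log.propagate = False  # keep records from also reaching the root logger

log.error("Useful error message")
print(stream.getvalue().strip())  # -> ERROR main_demo: Useful error message
```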
Have fun. |
34,090,999 | With Python's [`logging`](https://docs.python.org/2/library/logging.html) module, is there a way to **collect multiple events into one log entry**? An ideal solution would be an extension of Python's `logging` module or a **custom formatter/filter** for it, so that collecting logging events of the same kind happens in the background and **nothing needs to be added in the code body** (e.g. at every call of a logging function).
Here is an **example** that generates a **large number of the same or very similar logging** events:
```
import logging
for i in range(99999):
try:
asdf[i] # not defined!
except NameError:
logging.exception('foo') # generates large number of logging events
else: pass
# ... more code with more logging ...
for i in range(88888): logging.info('more of the same %d' % i)
# ... and so on ...
```
So we have the same exception **99999** times and log it. It would be nice if the log just said something like:
```
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
File "./exceptionlogging.py", line 10, in <module>
asdf[i] # not defined!
NameError: name 'asdf' is not defined
INFO:root:foo more of the same (occured 88888 times with various values)
``` | 2015/12/04 | [
"https://Stackoverflow.com/questions/34090999",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/789308/"
] | You should probably be writing a message aggregate/statistics class rather than trying to hook onto the logging system's [singletons](https://stackoverflow.com/questions/31875/is-there-a-simple-elegant-way-to-define-singletons-in-python) but I guess you may have an existing code base that uses logging.
I'd also suggest that you should instantiate your loggers rather than always using the default root. The [Python Logging Cookbook](https://docs.python.org/2/howto/logging-cookbook.html) has extensive explanation and examples.
The following class should do what you are asking.
```
import logging
import atexit
import pprint
class Aggregator(object):
logs = {}
@classmethod
def _aggregate(cls, record):
id = '{0[levelname]}:{0[name]}:{0[msg]}'.format(record.__dict__)
if id not in cls.logs: # first occurrence
cls.logs[id] = [1, record]
else: # subsequent occurrence
cls.logs[id][0] += 1
@classmethod
def _output(cls):
for count, record in cls.logs.values():
record.__dict__['msg'] += ' (occured {} times)'.format(count)
logging.getLogger(record.__dict__['name']).handle(record)
@staticmethod
def filter(record):
# pprint.pprint(record)
Aggregator._aggregate(record)
return False
@staticmethod
def exit():
Aggregator._output()
logging.getLogger().addFilter(Aggregator)
atexit.register(Aggregator.exit)
for i in range(99999):
try:
asdf[i] # not defined!
except NameError:
logging.exception('foo') # generates large number of logging events
else: pass
# ... more code with more logging ...
for i in range(88888): logging.error('more of the same')
# ... and so on ...
```
Note that you don't get any logs until the program exits.
The result of running it is:
```
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
File "C:\work\VEMS\python\logcount.py", line 38, in
asdf[i] # not defined!
NameError: name 'asdf' is not defined
ERROR:root:more of the same (occured 88888 times)
``` | Your question hides a subliminal assumption of how "very similar" is defined.
Log records can either be const-only (whose instances are strictly identical), or a mix of consts and variables (no consts at all is also considered a mix).
An aggregator for const-only log records is a piece of cake. You just need to decide whether process/thread will fork your aggregation or not.
For log records which include both consts and variables you'll need to decide whether to split your aggregation based on the variables you have in your record.
A dictionary-style counter (from collections import Counter) can serve as a cache, which will count your instances in O(1), but you may need some higher-level structure in order to write the variables down if you wish. Additionally, you'll have to manually handle the writing of the cache into a file - every X seconds (binning) or once the program has exited (risky - you may lose all in-memory data if something gets stuck).
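A bare-bones sketch of such a `Counter` cache (the log messages are made up):

```python
from collections import Counter

counts = Counter()  # message -> number of occurrences, O(1) per update

def record(msg):
    counts[msg] += 1

for _ in range(5):
    record("NameError: name 'asdf' is not defined")
record("connection timed out")

for msg, n in counts.most_common():
    print("%s (occurred %d times)" % (msg, n))
```

In a real handler you would key the counter on whatever metadata defines "the same" record for you (message, level, module, line number), which is exactly the splitting decision described above.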
A framework for aggregation would look something like this (tested on Python v3.4):
```
from logging import Handler
from threading import RLock, Timer
from collections import defaultdict
class LogAggregatorHandler(Handler):
_default_flush_timer = 300 # Number of seconds between flushes
_default_separator = "\t" # Seperator char between metadata strings
_default_metadata = ["filename", "name", "funcName", "lineno", "levelname"] # metadata defining unique log records
class LogAggregatorCache(object):
""" Keeps whatever is interesting in log records aggregation. """
def __init__(self, record=None):
self.message = None
self.counter = 0
self.timestamp = list()
self.args = list()
if record is not None:
self.cache(record)
def cache(self, record):
if self.message is None: # Only the first message is kept
self.message = record.msg
assert self.message == record.msg, "Non-matching log record" # note: will not work with string formatting for log records; e.g. "blah {}".format(i)
self.timestamp.append(record.created)
self.args.append(record.args)
self.counter += 1
def __str__(self):
""" The string of this object is used as the default output of log records aggregation. For example: record message with occurrences. """
return self.message + "\t (occurred {} times)".format(self.counter)
def __init__(self, flush_timer=None, separator=None, add_process_thread=False):
"""
Log record metadata will be concatenated to a unique string, separated by self._separator.
Process and thread IDs will be added to the metadata if set to True; otherwise log records across processes/threads will be aggregated together.
:param separator: str
:param add_process_thread: bool
"""
super().__init__()
self._flush_timer = flush_timer or self._default_flush_timer
self._cache = self.cache_factory()
self._separator = separator or self._default_separator
self._metadata = self._default_metadata
if add_process_thread is True:
self._metadata += ["process", "thread"]
self._aggregation_lock = RLock()
self._store_aggregation_timer = self.flush_timer_factory()
self._store_aggregation_timer.start()
# Demo logger which outputs aggregations through a StreamHandler:
self.agg_log = logging.getLogger("aggregation_logger")
self.agg_log.addHandler(logging.StreamHandler())
self.agg_log.setLevel(logging.DEBUG)
self.agg_log.propagate = False
def cache_factory(self):
""" Returns an instance of a new caching object. """
return defaultdict(self.LogAggregatorCache)
def flush_timer_factory(self):
""" Returns a threading.Timer daemon object which flushes the Handler aggregations. """
timer = Timer(self._flush_timer, self.flush)
timer.daemon = True
return timer
def find_unique(self, record):
""" Extracts a unique metadata string from log records. """
metadata = ""
for single_metadata in self._metadata:
value = getattr(record, single_metadata, "missing " + str(single_metadata))
metadata += str(value) + self._separator
return metadata[:-len(self._separator)]
def emit(self, record):
try:
with self._aggregation_lock:
metadata = self.find_unique(record)
self._cache[metadata].cache(record)
except Exception:
self.handleError(record)
def flush(self):
self.store_aggregation()
def store_aggregation(self):
""" Write the aggregation data to file. """
self._store_aggregation_timer.cancel()
del self._store_aggregation_timer
with self._aggregation_lock:
temp_aggregation = self._cache
self._cache = self.cache_factory()
# ---> handle temp_aggregation and write to file <--- #
for key, value in sorted(temp_aggregation.items()):
self.agg_log.info("{}\t{}".format(key, value))
# ---> re-create the store_aggregation Timer object <--- #
self._store_aggregation_timer = self.flush_timer_factory()
self._store_aggregation_timer.start()
```
Testing this Handler class with random log severity in a for-loop:
```
if __name__ == "__main__":
import random
import logging
logger = logging.getLogger()
handler = LogAggregatorHandler()
logger.addHandler(handler)
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
logger.info("entering logging loop")
for i in range(25):
# Randomly choose log severity:
severity = random.choice([logging.DEBUG, logging.INFO, logging.WARN, logging.ERROR, logging.CRITICAL])
logger.log(severity, "test message number %s", i)
logger.info("end of test code")
```
If you want to add more stuff, this is what a Python log record looks like:
```
{'args': ['()'],
'created': ['1413747902.18'],
'exc_info': ['None'],
'exc_text': ['None'],
'filename': ['push_socket_log.py'],
'funcName': ['<module>'],
'levelname': ['DEBUG'],
'levelno': ['10'],
'lineno': ['17'],
'module': ['push_socket_log'],
'msecs': ['181.387901306'],
'msg': ['Test message.'],
'name': ['__main__'],
'pathname': ['./push_socket_log.py'],
'process': ['65486'],
'processName': ['MainProcess'],
'relativeCreated': ['12.6709938049'],
'thread': ['140735262810896'],
'threadName': ['MainThread']}
```
One more thing to think about:
Most features you run depend on a flow of several consecutive commands (which will ideally report log records accordingly); e.g. a client-server communication will typically depend on receiving a request, processing it, reading some data from the DB (which requires a connection and some read commands), some kind of parsing/processing, constructing the response packet and reporting the response code.
This highlights one of the main disadvantages of using an aggregation approach: by aggregating log records you lose track of the time and order of the actions that took place. It will be extremely difficult to figure out what request was incorrectly structured if you only have the aggregation at hand.
My advice in this case is that you keep both the raw data and the aggregation (using two file handlers or something similar), so that you can investigate a macro-level (aggregation) and a micro-level (normal logging).
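One way that dual setup could be sketched with the stdlib alone (in-memory streams replace the two file handlers here, purely so the example is self-contained):

```python
import io
import logging
from collections import Counter

counts = Counter()

class CountAndDrop(logging.Filter):
    """Aggregate records on this handler instead of emitting them."""
    def filter(self, record):
        counts[record.getMessage()] += 1
        return False  # suppress output on the aggregating handler only

raw_stream = io.StringIO()  # in real code: logging.FileHandler("raw.log")
log = logging.getLogger("dual_demo")
log.setLevel(logging.DEBUG)
log.propagate = False

log.addHandler(logging.StreamHandler(raw_stream))  # micro level: every record, in order
agg_handler = logging.StreamHandler()              # macro level: aggregation only
agg_handler.addFilter(CountAndDrop())
log.addHandler(agg_handler)

for _ in range(3):
    log.error("disk full")

# counts now holds the macro view; dump it at exit or on a flush timer.
```

The raw handler preserves time and order for micro-level debugging, while the filter accumulates the macro-level counts.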
However, you are still left with the responsibility of finding out that things have gone wrong, and then manually investigating what caused it. When developing on your PC this is an easy enough task; but deploying your code on several production servers makes these tasks cumbersome, wasting a lot of your time.
Accordingly, there are several companies developing products specifically for log management. Most aggregate similar log records together, but others incorporate machine learning algorithms for automatic aggregation and learning your software's behavior. Outsourcing your log handling can then enable you to focus on your product, instead of on your bugs.
Disclaimer: I work for [Coralogix](http://www.coralogix.com), one such solution. |
57,907,518 | So I'm trying to log in to a web-client wifi login page with Python. The web client keeps generating a special octal character sequence for every login session. So what I'm trying to do is:
requests.get(web-client).text -> get the octal code by looping the text index -> combine with the password
the problem is:
-if i write
```
password="password"
special="\340" + password + "\043\242\062\374\062\365\062\266\201\323\145\251\200\303\025\315"
print(special)
```
it returns =
```
àpassword#¢2ü2õ2¶Óe©ÃÍ #this is what i want, python translate it to char
```
-but if i index the webpage
```
import requests
webtext= requests.get(web-client url).text
password= "password"
special1= ""
special2= ""
for i in range(3163, 3167): #range of the first octal
special1 = special1+webtext[i]
for i in range(3204, 3268): #range of the second octal
special2 = special2+webtext[i]
special=special1+password+special2
print(special)
```
it returns =
```
\340password\043\242\062\374\062\365\062\266\201\323\145\251\200\303\025\315
```
As you can see, it's not decoded to characters; Python treats it as a plain string. So what should I do to get the same result?
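For what it's worth, the difference can be demonstrated with `unicode_escape` decoding, which is one common way to interpret such backslash escapes at runtime (whether it fits the real page depends on that page's encoding):

```python
literal = "\340"      # the Python parser turns this escape into one char: 'à'
from_page = "\\340"   # four characters read from a file: '\', '3', '4', '0'

# Re-interpret the textual escape sequence at runtime:
decoded = from_page.encode("latin-1").decode("unicode_escape")

print(len(literal), len(from_page), decoded == literal)  # -> 1 4 True
```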
btw i'm simulating the requests by opening the saved text file of the web-page html | 2019/09/12 | [
"https://Stackoverflow.com/questions/57907518",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11561888/"
] | You need to order the comments.
In the micropost view:
```
<% @post.comments.order(created_at: :asc).each do |comment| %>
..
<% end %>
``` | Inside your show method:
```
@post = Micropost.find(params[:id])
@comments = @post.comments.order(created_at: :desc)
```
Then you can iterate @comments in your HTML files. |
73,770,461 | I am using the Twitter API StreamingClient using the python module Tweepy. I am currently doing a short stream where I am collecting tweets and saving the entire ID and text from the tweet inside of a json object and writing it to a file.
My goal is to be able to collect the Twitter handle from each specific tweet and save it to a json file (preferably print it in the output terminal as well).
This is what the current code looks like:
```py
KEY_FILE = './keys/bearer_token'
DURATION = 10
def on_data(json_data):
json_obj = json.loads(json_data.decode())
#print('Received tweet:', json_obj)
print(f'Tweet Screen Name: {json_obj.user.screen_name}')
with open('./collected_tweets/tweets.json', 'a') as out:
json.dump(json_obj, out)
bearer_token = open(KEY_FILE).read().strip()
streaming_client = tweepy.StreamingClient(bearer_token)
streaming_client.on_data = on_data
streaming_client.sample(threaded=True)
time.sleep(DURATION)
streaming_client.disconnect()
```
And I have no idea how to do this, the only thing I found is that someone did this:
```
json_obj.user.screen_name
```
However, this did not work at all, and I am completely stuck. | 2022/09/19 | [
"https://Stackoverflow.com/questions/73770461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11977945/"
] | So a couple of things
Firstly, I'd recommend using `on_response` rather than `on_data`, because `StreamingClient` already defines an `on_data` function to parse the JSON. (It will then fire `on_tweet`, `on_response`, `on_error`, etc.)
Secondly, `json_obj.user.screen_name` is part of API v1 I believe, which is why it doesn't work.
---
To get extra data using Twitter Apiv2, you'll want to use Expansions and Fields ([Tweepy Documentation](https://docs.tweepy.org/en/stable/expansions_and_fields.html), [Twitter Documentation](https://developer.twitter.com/en/docs/twitter-api/expansions))
For your case, you'll probably want to use `"username"` which is under the `user_fields`.
```py
def on_response(response:tweepy.StreamResponse):
tweet:tweepy.Tweet = response.data
users:list = response.includes.get("users")
# response.includes is a dictionary representing all the fields (user_fields, media_fields, etc)
# response.includes["users"] is a list of `tweepy.User`
# the first user in the list is the author (at least from what I've tested)
# the rest of the users in that list are anyone who is mentioned in the tweet
author_username = users and users[0].username
print(tweet.text, author_username)
streaming_client = tweepy.StreamingClient(bearer_token)
streaming_client.on_response = on_response
streaming_client.sample(threaded=True, user_fields = ["id", "name", "username"]) # using user fields
time.sleep(DURATION)
streaming_client.disconnect()
```
Hope this helped.
*also tweepy documentation definitely needs more examples for api v2* | ```py
KEY_FILE = './keys/bearer_token'
DURATION = 10
def on_data(json_data):
json_obj = json.loads(json_data.decode())
print('Received tweet:', json_obj)
with open('./collected_tweets/tweets.json', 'a') as out:
json.dump(json_obj, out)
bearer_token = open(KEY_FILE).read().strip()
streaming_client = tweepy.StreamingClient(bearer_token)
streaming_client.on_data = on_data
streaming_client.on_closed = on_finish
streaming_client.sample(threaded=True, expansions="author_id", user_fields="username", tweet_fields="created_at")
time.sleep(DURATION)
streaming_client.disconnect()
``` |
4,618,373 | How do I tell Selenium to use HTMLUnit?
I'm running selenium-server-standalone-2.0b1.jar as a Selenium server in the background, and the latest Python bindings installed with "pip install -U selenium".
Everything works fine with Firefox. But I'd like to use HTMLUnit, as it is lighter weight and doesn't need X. This is my attempt to do so:
```
>>> import selenium
>>> s = selenium.selenium("localhost", 4444, "*htmlunit", "http://localhost/")
>>> s.start()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 189, in start
result = self.get_string("getNewBrowserSession", start_args)
File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 223, in get_string
result = self.do_command(verb, args)
File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 217, in do_command
raise Exception, data
Exception: Failed to start new browser session: Browser not supported: *htmlunit
Supported browsers include:
*firefox
*mock
*firefoxproxy
*pifirefox
*chrome
*iexploreproxy
*iexplore
*firefox3
*safariproxy
*googlechrome
*konqueror
*firefox2
*safari
*piiexplore
*firefoxchrome
*opera
*iehta
*custom
```
So the question is, what is the HTMLUnit driver called? How do I enable it?
The code for HTMLUnit seems to be in the source for Selenium 2, so I expected it to be available by default like the other browsers. I can't find any instructions on how to enable it. | 2011/01/06 | [
"https://Stackoverflow.com/questions/4618373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/284340/"
] | As of the 2.0b3 release of the python client you can create an HTMLUnit webdriver via a remote connection like so:
```
from selenium import webdriver
driver = webdriver.Remote(
desired_capabilities=webdriver.DesiredCapabilities.HTMLUNIT)
driver.get('http://www.google.com')
```
You can also use the `HTMLUNITWITHJS` capability item for a browser with Javascript support.
Note that you need to run the Selenium Java server for this to work, since HTMLUnit is implemented on the Java side. | I use it like this:
```
from selenium.remote import connect
b = connect('htmlunit')
b.get('http://google.com')
q = b.find_element_by_name('q')
q.send_keys('selenium')
q.submit()
for l in b.find_elements_by_xpath('//h3/a'):
print('%s\n\t%s\n' % (l.get_text(), l.get_attribute('href')))
``` |
4,618,373 | How do I tell Selenium to use HTMLUnit?
I'm running selenium-server-standalone-2.0b1.jar as a Selenium server in the background, and the latest Python bindings installed with "pip install -U selenium".
Everything works fine with Firefox. But I'd like to use HTMLUnit, as it is lighter weight and doesn't need X. This is my attempt to do so:
```
>>> import selenium
>>> s = selenium.selenium("localhost", 4444, "*htmlunit", "http://localhost/")
>>> s.start()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 189, in start
result = self.get_string("getNewBrowserSession", start_args)
File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 223, in get_string
result = self.do_command(verb, args)
File "/usr/local/lib/python2.6/dist-packages/selenium/selenium/selenium.py", line 217, in do_command
raise Exception, data
Exception: Failed to start new browser session: Browser not supported: *htmlunit
Supported browsers include:
*firefox
*mock
*firefoxproxy
*pifirefox
*chrome
*iexploreproxy
*iexplore
*firefox3
*safariproxy
*googlechrome
*konqueror
*firefox2
*safari
*piiexplore
*firefoxchrome
*opera
*iehta
*custom
```
So the question is, what is the HTMLUnit driver called? How do I enable it?
The code for HTMLUnit seems to be in the source for Selenium 2, so I expected it to be available by default like the other browsers. I can't find any instructions on how to enable it. | 2011/01/06 | [
"https://Stackoverflow.com/questions/4618373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/284340/"
] | using the selenium 2.20.0.jar server and matching python version, I am able to use HtmlUnitDriver by specifying the browser as \*mock
```
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
server_url = "http://%s:%s/wd/hub" % (test_host, test_port)
dc = DesiredCapabilities.HTMLUNIT
wd = webdriver.Remote(server_url, dc)
wd.get('http://www.google.com')
``` | I use it like this:
```
from selenium.remote import connect
b = connect('htmlunit')
b.get('http://google.com')
q = b.find_element_by_name('q')
q.send_keys('selenium')
q.submit()
for l in b.find_elements_by_xpath('//h3/a'):
print('%s\n\t%s\n' % (l.get_text(), l.get_attribute('href')))
``` |
41,599,600 | (The question was edited based on feedback received. I will continue to edit it based on input received until the issue is resolved)
I am learning Python, and Beautiful Soup in particular, and I am doing the Google exercise on regex using the set of HTML files that contains popular baby names for different years (e.g. baby1990.html etc). You can find this dataset here if you are interested: <https://developers.google.com/edu/python/exercises/baby-names>
Each html file contains a table with baby names data that looks like this:
[](https://i.stack.imgur.com/r4GpY.png)
Before the table with the baby names there is another table. The html code in the Tags of the two tables is respectively the following
```
<table width="100%" border="0" cellspacing="0" cellpadding="4"> # Unwanted table
<table width="100%" border="0" cellspacing="0" cellpadding="4" summary="formatting"> # targeted table
```
You may observe that the targeted table differs from the unwanted one by the attribute: summary="formatting"
The first table -- the one we must skip -- has the following html code:
```
<table width="100%" border="0" cellspacing="0" cellpadding="4">
<tbody>
<tr><td class="sstop" valign="bottom" align="left" width="25%">
Social Security Online
</td><td valign="bottom" class="titletext">
<!-- sitetitle -->Popular Baby Names
</td>
</tr>
<tr bgcolor="#333366"><td colspan="2" height="2"></td></tr>
<tr><td class="graystars" width="25%" valign="top">
<a href="../OACT/babynames/">Popular Baby Names</a></td><td valign="top">
<a href="http://www.ssa.gov/"><img src="/templateimages/tinylogo.gif"
width="52" height="47" align="left"
alt="SSA logo: link to Social Security home page" border="0"></a><a name="content"></a>
<h1>Popular Names by Birth Year</h1>September 12, 2007</td>
</tr>
<tr bgcolor="#333366"><td colspan="2" height="1"></td></tr>
</tbody></table>
```
Within the targeted table the code is the following:
```
<table width="100%" border="0" cellspacing="0" cellpadding="4" summary="formatting">
<tr valign="top"><td width="25%" class="greycell">
<a href="../OACT/babynames/background.html">Background information</a>
<p><br />
Select another <label for="yob">year of birth</label>?<br />
<form method="post" action="/cgi-bin/popularnames.cgi">
<input type="text" name="year" id="yob" size="4" value="1990">
<input type="hidden" name="top" value="1000">
<input type="hidden" name="number" value="">
<input type="submit" value=" Go "></form>
</td><td>
<h3 align="center">Popularity in 1990</h3>
<p align="center">
<table width="48%" border="1" bordercolor="#aaabbb"
cellpadding="2" cellspacing="0" summary="Popularity for top 1000">
<tr align="center" valign="bottom">
<th scope="col" width="12%" bgcolor="#efefef">Rank</th>
<th scope="col" width="41%" bgcolor="#99ccff">Male name</th>
<th scope="col" bgcolor="pink" width="41%">Female name</th></tr>
<tr align="right"><td>1</td><td>Michael</td><td>Jessica</td> # Targeted row
<tr align="right"><td>2</td><td>Christopher</td><td>Ashley</td> # Targeted row
etc...
```
You can see that the distinctive attribute of the targeted rows is: align = "right".
Now the code to extract the content of the targeted cells is the following:
```
with open("C:/Users/ALEX/MyFiles/JUPYTER NOTEBOOKS/google-python-exercises/babynames/baby1990.html","r") \
as f: soup = bs(f.read(), 'html.parser')
print soup.tr
print "number of elemenents in the soup:" , len(soup)
right_table = soup.find("table", summary = "formatting")
print(right_table.prettify())
print "right_table" , len(right_table)
print(right_table[0].prettify())
for row in right_table[1].find_all("tr", allign = "right"):
cells = row.find_all("td")
try:
print "cells[0]: " , cells[0]
except:
print "cells[0] : NaN"
try:
print "cells[1]: " , cells[1]
except:
print "cells[1] : NaN"
try:
print "cells[2]: " , cells[2]
except:
print "cells[2] : NaN"
```
The output is an error message:
```
<tr><td align="left" class="sstop" valign="bottom" width="25%">
Social Security Online
</td><td class="titletext" valign="bottom">
<!-- sitetitle -->Popular Baby Names
</td>
</tr>
number of elemenents in the soup: 4
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-116-3ec77a65b5ad> in <module>()
6 right_table = soup.find("table", summary = "formatting")
7
----> 8 print(right_table.prettify())
9
10 print "right_table" , len(right_table)
C:\users\alex\Anaconda2\lib\site-packages\bs4\element.pyc in prettify(self, encoding, formatter)
1198 def prettify(self, encoding=None, formatter="minimal"):
1199 if encoding is None:
-> 1200 return self.decode(True, formatter=formatter)
1201 else:
1202 return self.encode(encoding, True, formatter=formatter)
C:\users\alex\Anaconda2\lib\site-packages\bs4\element.pyc in decode(self, indent_level, eventual_encoding, formatter)
1164 indent_contents = None
1165 contents = self.decode_contents(
-> 1166 indent_contents, eventual_encoding, formatter)
1167
1168 if self.hidden:
C:\users\alex\Anaconda2\lib\site-packages\bs4\element.pyc in decode_contents(self, indent_level, eventual_encoding, formatter)
1233 elif isinstance(c, Tag):
1234 s.append(c.decode(indent_level, eventual_encoding,
-> 1235 formatter))
1236 if text and indent_level and not self.name == 'pre':
1237 text = text.strip()
... last 2 frames repeated, from the frame below ...
C:\users\alex\Anaconda2\lib\site-packages\bs4\element.pyc in decode(self, indent_level, eventual_encoding, formatter)
1164 indent_contents = None
1165 contents = self.decode_contents(
-> 1166 indent_contents, eventual_encoding, formatter)
1167
1168 if self.hidden:
RuntimeError: maximum recursion depth exceeded while calling a Python object
```
The questions are the following:
1. Why does the code return the first table -- the unwanted one -- given that we have passed the argument summary = "formatting"?
2. What does the error message imply? Why is it raised?
3. What other errors can you observe in the code -- if any?
Your advice will be appreciated. | 2017/01/11 | [
"https://Stackoverflow.com/questions/41599600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7128498/"
] | Your test is failing because those strings are not in ascending order. It fails at `word-e` of the first string and `wordc` of the second string, where `c` sorts before `e` and the hyphen is ignored by default. If you want to include the hyphen in the ordering, use `StringComparer.Ordinal`:
```
Assert.That(anotherList, Is.Ordered.Ascending.Using((IComparer)StringComparer.Ordinal));
```
Now the test will succeed. | Thanks, abdul
In some cases, if your collection has an upper-case item, you should use `StringComparer.OrdinalIgnoreCase` instead of `StringComparer.Ordinal`. |
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | I got this same problem with my raspi.
```
host_name = socket.gethostname()
host_addr = socket.gethostbyname(host_name)
```
and now if I print host\_addr, it will print 127.0.1.1.
So I found this: <https://www.raspberrypi.org/forums/viewtopic.php?t=188615#p1187999>
```
host_addr = socket.gethostbyname(host_name + ".local")
```
and it worked. | I ran into the same problem you are facing, but I found a solution with an idea of my own, and don't worry, it is simple to use.
If you are familiar with Linux you have probably heard of the `ifconfig` command, which returns information about the network interfaces, and of the `grep` command, which filters for lines containing specified words.
now just open the terminal and type
```
ifconfig | grep 255.255.255.0
```
and hit `enter`; now you will get the wlan inet address line alone, like below,
```
inet 192.168.43.248 netmask 255.255.255.0 broadcast 192.168.43.255
```
in your terminal.
In your Python script, just insert:
```
#!/usr/bin/env python
import subprocess
cmd = "ifconfig | grep 255.255.255.0"
inet = subprocess.check_output(cmd, shell = True)
inet = inet.decode("utf-8")
inet = inet.split()
inet_addr = inet[inet.index("inet")+1]
print(inet_addr)
```
This script returns your local IP address; it works for me and I hope it will work on your Linux machine as well.
all the best |
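Instead of shelling out to `ifconfig`, the Python standard library can ask the resolver directly for every address mapped to the hostname. A minimal sketch (the helper name `local_ipv4_addresses` is a made-up name; with a misconfigured `/etc/hosts` it can still report 127.x addresses, so it diagnoses the problem rather than curing it):

```python
import socket

def local_ipv4_addresses():
    # Ask the resolver for every IPv4 address it maps to this hostname.
    # With a bad /etc/hosts entry this may still include 127.0.1.1.
    host = socket.gethostname()
    try:
        infos = socket.getaddrinfo(host, None, socket.AF_INET)
    except socket.gaierror:
        return []  # hostname not resolvable at all
    return sorted({info[4][0] for info in infos})

print(local_ipv4_addresses())
```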
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | As per the above '/etc/hosts' file content, you have an IP address mapping of '127.0.1.1' to your hostname. This is causing the name resolution to return 127.0.1.1. You can try removing/commenting out this line and rerunning. | This also worked for me:
```
gethostbyname(gethostname()+'.')
``` |
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | [How can I get the IP address of eth0 in Python?](https://stackoverflow.com/questions/24196932/how-can-i-get-the-ip-address-of-eth0-in-python)
```
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print(s.getsockname()[0])
``` | This solution works for me on Windows. If you're using Linux you could try this line of code instead:
```
IPAddr = socket.gethostbyname(socket.getfqdn())
``` |
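The UDP trick quoted above can be wrapped in a small Python 3 helper. No packets are actually sent: `connect` on a UDP socket only makes the OS pick the outgoing interface, whose address is then read back. The function name `get_lan_ip` and the loopback fallback are illustrative assumptions, not part of the original answer:

```python
import socket

def get_lan_ip(probe=("8.8.8.8", 80)):
    # connect() on a UDP socket sends nothing; it just selects the
    # outgoing interface so getsockname() reveals its address.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        try:
            s.connect(probe)
            return s.getsockname()[0]
        except OSError:
            return "127.0.0.1"  # no route at all; fall back to loopback

print(get_lan_ip())
```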
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | As per the above '/etc/hosts' file content, you have an IP address mapping of '127.0.1.1' to your hostname. This is causing the name resolution to return 127.0.1.1. You can try removing/commenting out this line and rerunning. | I got this same problem with my raspi.
```
host_name = socket.gethostname()
host_addr = socket.gethostbyname(host_name)
```
and now if I print host\_addr, it will print 127.0.1.1.
So I found this: <https://www.raspberrypi.org/forums/viewtopic.php?t=188615#p1187999>
```
host_addr = socket.gethostbyname(host_name + ".local")
```
and it worked. |
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | I got this same problem with my raspi.
```
host_name = socket.gethostname()
host_addr = socket.gethostbyname(host_name)
```
and now if I print host\_addr, it will print 127.0.1.1.
So I found this: <https://www.raspberrypi.org/forums/viewtopic.php?t=188615#p1187999>
```
host_addr = socket.gethostbyname(host_name + ".local")
```
and it worked. | This solution works for me on Windows. If you're using Linux you could try this line of code instead:
```
IPAddr = socket.gethostbyname(socket.getfqdn())
``` |
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | I got this same problem with my raspi.
```
host_name = socket.gethostname()
host_addr = socket.gethostbyname(host_name)
```
and now if I print host\_addr, it will print 127.0.1.1.
So I found this: <https://www.raspberrypi.org/forums/viewtopic.php?t=188615#p1187999>
```
host_addr = socket.gethostbyname(host_name + ".local")
```
and it worked. | [How can I get the IP address of eth0 in Python?](https://stackoverflow.com/questions/24196932/how-can-i-get-the-ip-address-of-eth0-in-python)
```
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print(s.getsockname()[0])
``` |
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | I got this same problem with my raspi.
```
host_name = socket.gethostname()
host_addr = socket.gethostbyname(host_name)
```
and now if I print host\_addr, it will print 127.0.1.1.
So I found this: <https://www.raspberrypi.org/forums/viewtopic.php?t=188615#p1187999>
```
host_addr = socket.gethostbyname(host_name + ".local")
```
and it worked. | This also worked for me:
```
gethostbyname(gethostname()+'.')
``` |
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | [How can I get the IP address of eth0 in Python?](https://stackoverflow.com/questions/24196932/how-can-i-get-the-ip-address-of-eth0-in-python)
```
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print(s.getsockname()[0])
``` | I ran into the same problem you are facing, but I found a solution with an idea of my own, and don't worry, it is simple to use.
If you are familiar with Linux you have probably heard of the `ifconfig` command, which returns information about the network interfaces, and of the `grep` command, which filters for lines containing specified words.
now just open the terminal and type
```
ifconfig | grep 255.255.255.0
```
and hit `enter`; now you will get the wlan inet address line alone, like below,
```
inet 192.168.43.248 netmask 255.255.255.0 broadcast 192.168.43.255
```
in your terminal.
In your Python script, just insert:
```
#!/usr/bin/env python
import subprocess
cmd = "ifconfig | grep 255.255.255.0"
inet = subprocess.check_output(cmd, shell = True)
inet = inet.decode("utf-8")
inet = inet.split()
inet_addr = inet[inet.index("inet")+1]
print(inet_addr)
```
This script returns your local IP address; it works for me and I hope it will work on your Linux machine as well.
all the best |
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | I ran into the same problem you are facing, but I found a solution with an idea of my own, and don't worry, it is simple to use.
If you are familiar with Linux you have probably heard of the `ifconfig` command, which returns information about the network interfaces, and of the `grep` command, which filters for lines containing specified words.
now just open the terminal and type
```
ifconfig | grep 255.255.255.0
```
and hit `enter`; now you will get the wlan inet address line alone, like below,
```
inet 192.168.43.248 netmask 255.255.255.0 broadcast 192.168.43.255
```
in your terminal.
In your Python script, just insert:
```
#!/usr/bin/env python
import subprocess
cmd = "ifconfig | grep 255.255.255.0"
inet = subprocess.check_output(cmd, shell = True)
inet = inet.decode("utf-8")
inet = inet.split()
inet_addr = inet[inet.index("inet")+1]
print(inet_addr)
```
This script returns your local IP address; it works for me and I hope it will work on your Linux machine as well.
all the best | This solution works for me on Windows. If you're using Linux you could try this line of code instead:
```
IPAddr = socket.gethostbyname(socket.getfqdn())
``` |
55,296,584 | I am new to Python. I want to get the IP address of the system. I am connected over LAN. When I use the code below to get the IP, it shows 127.0.1.1 instead of 192.168.1.32. Why is it not showing the LAN IP, and how can I get my LAN IP? Every tutorial shows only this way. I also checked by connecting to a mobile hotspot; even then, it shows the same.
```
import socket
hostname = socket.gethostname()
IPAddr = socket.gethostbyname(hostname)
print("Your Computer Name is:" + hostname)
print("Your Computer IP Address is:" + IPAddr)
```
Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:127.0.1.1
```
Required Output:
```
Your Computer Name is:smackcoders
Your Computer IP Address is:192.168.1.32
``` | 2019/03/22 | [
"https://Stackoverflow.com/questions/55296584",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11090395/"
] | As per the above '/etc/hosts' file content, you have an IP address mapping of '127.0.1.1' to your hostname. This is causing the name resolution to return 127.0.1.1. You can try removing/commenting out this line and rerunning. | I ran into the same problem you are facing, but I found a solution with an idea of my own, and don't worry, it is simple to use.
If you are familiar with Linux you have probably heard of the `ifconfig` command, which returns information about the network interfaces, and of the `grep` command, which filters for lines containing specified words.
now just open the terminal and type
```
ifconfig | grep 255.255.255.0
```
and hit `enter`; now you will get the wlan inet address line alone, like below,
```
inet 192.168.43.248 netmask 255.255.255.0 broadcast 192.168.43.255
```
in your terminal.
In your Python script, just insert:
```
#!/usr/bin/env python
import subprocess
cmd = "ifconfig | grep 255.255.255.0"
inet = subprocess.check_output(cmd, shell = True)
inet = inet.decode("utf-8")
inet = inet.split()
inet_addr = inet[inet.index("inet")+1]
print(inet_addr)
```
This script returns your local IP address; it works for me and I hope it will work on your Linux machine as well.
all the best |
46,004,408 | I bought a Raspberry Pi yesterday and I am facing quite a large problem: I can't `sudo apt-get update`. I think this error comes from my DNS, because I am connected via Ethernet (physically). This is the message it prints when I execute the command:
```
pi@raspberrypi:~ $ sudo apt-get update
Err:1 http://goddess-gate.com/archive.raspbian.org/raspbian jessie InRelease
Temporary failure resolving 'goddess-gate.com'
Err:2 http://archive.raspberrypi.org/debian stretch InRelease
Temporary failure resolving 'archive.raspberrypi.org'
Reading package lists... Done
W: Failed to fetch http://goddess-gate.com/archive.raspbian.org/raspbian/dists/jessie/InRelease Temporary failure resolving 'goddess-gate.com'
W: Failed to fetch http://archive.raspberrypi.org/debian/dists/stretch/InRelease Temporary failure resolving 'archive.raspberrypi.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
```
So to resolve this problem I have tried a few things:
```
- Changing the etc/apt/sources.list to a valid mirror of my country (france)
- Reinstalling Raspbian (1st try was with NOOBS) and now I installed Raspbian with the .img file
- Changing my /ect/resolv.conf and /etc/network/interfaces nameservers to these ip 8.8.8.8 8.8.4.4
```
Nothing worked... I am really stuck. There is something else: I can't browse any website with Chromium, but I do have an internet connection, because I can pip install Python modules. Here is the Chromium message:
'This site can't be reached' ERR\_NAME\_RESOLUTION\_FAILED
One other thing: my inet IP is not valid; usually it should start with 192.168, but here it is 169.254.241.6 ... here is my ifconfig:
```
pi@raspberrypi:~ $ ifconfig
enxb827ebaf69fc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.241.6 netmask 255.255.0.0 broadcast 169.254.255.255
inet6 fe80::5d8b:1a8c:c520:c339 prefixlen 64 scopeid 0x20<link>
ether b8:27:eb:af:69:fc txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 995 bytes 61042 (59.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 806 bytes 77318 (75.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 806 bytes 77318 (75.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether b8:27:eb:fa:3c:a9 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
``` | 2017/09/01 | [
"https://Stackoverflow.com/questions/46004408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7767248/"
] | Type the following at the command line in order to edit `resolv.conf`, which is the Linux configuration file where the **domain-name to IP mapping** is stored for the purpose of **DNS resolution**.
```sh
sudo nano /etc/resolv.conf
```
then add these 2 lines:
```
nameserver 8.8.8.8
nameserver 8.8.4.4
```
hope it will help ... | The IP address range 169.254.0.0 to 169.254.255.255 is used by zeroconf.
Probably there is no active DHCP server in the LAN. Usually the router is also the DHCP server.
You also have no public IPv6 address, but this could also come from an IPv4-only internet connection.
Try to configure the interface completely manually with a corrected IP address. If there should be an active DHCP server, try to fix it. Sometimes a reboot helps.
You can show your gateway with "ip r". It should be the address of the router.
It is important that the IP address of the Pi is in the same subnet as the gateway. |
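Whether the cause is a missing DHCP lease (the 169.254.x zeroconf address) or broken name servers, it helps to separate the two failure modes. A quick sketch like the following (the function name and the probe host are arbitrary assumptions) reports whether DNS resolution itself is failing:

```python
import socket

def dns_works(name="archive.raspberrypi.org"):
    # True if the configured nameservers can resolve `name`; False here,
    # combined with working pings, would point at /etc/resolv.conf.
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

print(dns_works())
```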
46,004,408 | I bought a Raspberry Pi yesterday and I am facing quite a large problem: I can't `sudo apt-get update`. I think this error comes from my DNS, because I am connected via Ethernet (physically). This is the message it prints when I execute the command:
```
pi@raspberrypi:~ $ sudo apt-get update
Err:1 http://goddess-gate.com/archive.raspbian.org/raspbian jessie InRelease
Temporary failure resolving 'goddess-gate.com'
Err:2 http://archive.raspberrypi.org/debian stretch InRelease
Temporary failure resolving 'archive.raspberrypi.org'
Reading package lists... Done
W: Failed to fetch http://goddess-gate.com/archive.raspbian.org/raspbian/dists/jessie/InRelease Temporary failure resolving 'goddess-gate.com'
W: Failed to fetch http://archive.raspberrypi.org/debian/dists/stretch/InRelease Temporary failure resolving 'archive.raspberrypi.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
```
So to resolve this problem I have tried a few things:
```
- Changing the etc/apt/sources.list to a valid mirror of my country (france)
- Reinstalling Raspbian (1st try was with NOOBS) and now I installed Raspbian with the .img file
- Changing my /ect/resolv.conf and /etc/network/interfaces nameservers to these ip 8.8.8.8 8.8.4.4
```
Nothing worked... I am really stuck. There is something else: I can't browse any website with Chromium, but I do have an internet connection, because I can pip install Python modules. Here is the Chromium message:
'This site can't be reached' ERR\_NAME\_RESOLUTION\_FAILED
One other thing: my inet IP is not valid; usually it should start with 192.168, but here it is 169.254.241.6 ... here is my ifconfig:
```
pi@raspberrypi:~ $ ifconfig
enxb827ebaf69fc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.241.6 netmask 255.255.0.0 broadcast 169.254.255.255
inet6 fe80::5d8b:1a8c:c520:c339 prefixlen 64 scopeid 0x20<link>
ether b8:27:eb:af:69:fc txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 995 bytes 61042 (59.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 806 bytes 77318 (75.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 806 bytes 77318 (75.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether b8:27:eb:fa:3c:a9 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
``` | 2017/09/01 | [
"https://Stackoverflow.com/questions/46004408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7767248/"
] | The IP address range 169.254.0.0 to 169.254.255.255 is used by zeroconf.
Probably there is no active DHCP server in the LAN. Usually the router is also the DHCP server.
You also have no public IPv6 address, but this could also come from an IPv4-only internet connection.
Try to configure the interface completely manually with a corrected IP address. If there should be an active DHCP server, try to fix it. Sometimes a reboot helps.
You can show your gateway with "ip r". It should be the address of the router.
It is important that the IP address of the Pi is in the same subnet as the gateway. | ```
sudo nano /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```
I connected the Raspberry Pi directly with an Ethernet cable.
It works. |
46,004,408 | I bought a Raspberry Pi yesterday and I am facing quite a large problem. I can't sudo apt-get update. I think this error comes from my dns because I am connected via ethernet (Physically). so the message it prints when I execute the command is that:
```
pi@raspberrypi:~ $ sudo apt-get update
Err:1 http://goddess-gate.com/archive.raspbian.org/raspbian jessie InRelease
Temporary failure resolving 'goddess-gate.com'
Err:2 http://archive.raspberrypi.org/debian stretch InRelease
Temporary failure resolving 'archive.raspberrypi.org'
Reading package lists... Done
W: Failed to fetch http://goddess-gate.com/archive.raspbian.org/raspbian/dists/jessie/InRelease Temporary failure resolving 'goddess-gate.com'
W: Failed to fetch http://archive.raspberrypi.org/debian/dists/stretch/InRelease Temporary failure resolving 'archive.raspberrypi.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
```
So to resolve this problem I have tried a few things:
```
- Changing the etc/apt/sources.list to a valid mirror of my country (france)
- Reinstalling Raspbian (1st try was with NOOBS) and now I installed Raspbian with the .img file
- Changing my /ect/resolv.conf and /etc/network/interfaces nameservers to these ip 8.8.8.8 8.8.4.4
```
Nothing worked... I am really stucked, there is something elese, I can't browse any website with Chromium but I have internet connexion because I can pip install python modules... here is the Chromium message:
'This site can't be reached' ERR\_NAME\_RESOLUTION\_FAILED
Other things, my inet ip is not valid, usally it should start with 192.168 but here it is 168.254.241.6 ... here is my if config:
```
pi@raspberrypi:~ $ ifconfig
enxb827ebaf69fc: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.241.6 netmask 255.255.0.0 broadcast 169.254.255.255
inet6 fe80::5d8b:1a8c:c520:c339 prefixlen 64 scopeid 0x20<link>
ether b8:27:eb:af:69:fc txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 995 bytes 61042 (59.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 806 bytes 77318 (75.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 806 bytes 77318 (75.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether b8:27:eb:fa:3c:a9 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
``` | 2017/09/01 | [
"https://Stackoverflow.com/questions/46004408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7767248/"
] | Type following at the command line in order to edit `resolv.conf` which is the linux configuration file where **domain-name to IP mapping** is stored for the purpose of **DNS resolution**.
```sh
sudo nano /etc/resolv.conf
```
then add these 2 lines:
```
nameserver 8.8.8.8
nameserver 8.8.4.4
```
hope it will help ... | ```
sudo nano /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```
I connected Raspberry Pi Directly with an Ethernet Cable.
it work. |
62,585,234 | It seems that the output of [`zlib.compress`](https://docs.python.org/3/library/zlib.html#zlib.compress) uses all possible byte values. Is it possible to use only 255 of the 256 byte values (for example, to avoid using `\n`)?
Note that I just use the Python manual as a reference, but the question is not specific to Python (i.e. it applies to any other language that has a `zlib` library). | 2020/06/25 | [
"https://Stackoverflow.com/questions/62585234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1424739/"
] | No, this is not possible. Apart from the compressed data itself, there are standardized control structures which contain integers. Those integers may accidentally lead to any 8-bit character ending up in the byte stream.
Your only chance would be to encode the zlib bytestream into another format, e.g. base64. | As [@ypnos says](https://stackoverflow.com/a/62585291/3798897), this isn't possible within zlib itself. You mentioned that base64 encoding is too inefficient, but it's pretty easy to use an escape character to encode a character you want to avoid (like newlines).
This isn't the most efficient code in the world (and you might want to do something like finding the least used bytes to save a tiny bit more space), but it's readable enough and demonstrates the idea. You can losslessly encode/decode, and the encoded stream won't have any newlines.
```
def encode(data):
# order matters
return data.replace(b'a', b'aa').replace(b'\n', b'ab')
def decode(data):
def _foo():
pair = False
for b in data:
if pair:
# yield b'a' if b==b'a' else b'\n'
yield 97 if b==97 else 10
pair = False
elif b==97: # b'a'
pair = True
else:
yield b
return bytes(_foo())
```
As some measure of confidence you can check this exhaustively on small bytestrings:
```
from itertools import *
all(
bytes(p) == decode(encode(bytes(p)))
for c in combinations_with_replacement(b'ab\nc', r=6)
for p in permutations(c)
)
``` |
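To get a feel for the cost of this escape scheme, its expansion can be measured on compressed (hence random-looking) input. A sketch, assuming the same two replacements as `encode()` above (the helper name `escape_overhead` is made up; on random data roughly 2 of every 256 bytes get escaped, i.e. well under 1% growth):

```python
import os
import zlib

def escape_overhead(n=100_000):
    # Compress incompressible random bytes, escape-encode the result
    # exactly as the encode() above, and report the relative growth.
    data = zlib.compress(os.urandom(n))
    encoded = data.replace(b'a', b'aa').replace(b'\n', b'ab')
    assert b'\n' not in encoded  # the whole point: no newlines remain
    return (len(encoded) - len(data)) / len(data)

print(f"expansion: {escape_overhead():.3%}")
```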
62,585,234 | It seems that the output of [`zlib.compress`](https://docs.python.org/3/library/zlib.html#zlib.compress) uses all possible byte values. Is it possible to use only 255 of the 256 byte values (for example, to avoid using `\n`)?
Note that I just use the Python manual as a reference, but the question is not specific to Python (i.e. it applies to any other language that has a `zlib` library). | 2020/06/25 | [
"https://Stackoverflow.com/questions/62585234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1424739/"
] | The whole point of compression is to reduce the size as much as possible. If zlib or any compressor only used 255 of the 256 byte values, the size of the output would be increased by at least 0.07%.
That may be perfectly fine for you, so you can simply post-process the compressed output, or any data at all, to remove one particular byte value at the expense of some expansion. The simplest approach would be to replace that byte when it occurs with a two-byte escape sequence. You would also then need to replace the escape prefix with a different two-byte escape sequence. That would expand the data on average by 0.8%. That is exactly what Hans provided in another answer here.
If that cost is too high, you can do something more sophisticated, which is to *decode* a fixed Huffman code that encodes 255 symbols of equal probability. To decode you then *encode* that Huffman code. The input is a sequence of bits, not bytes, and most of the time you will need to pad the input with some zero bits to encode the last symbol. The Huffman code turns one symbol into seven bits and the other 254 symbols into eight bits. So going the other way, it will expand the input by a little less than 0.1%. For short messages it will be a little more, since often less than seven bits at the very end will be encoded into a symbol.
Implementation in C:
```
// Placed in the public domain by Mark Adler, 26 June 2020.
// Encode an arbitrary stream of bytes into a stream of symbols limited to 255
// values. In particular, avoid the \n (10) byte value. With -d, decode back to
// the original byte stream. Take input from stdin, and write output to stdout.
#include <stdio.h>
#include <string.h>
// Encode arbitrary bytes to a sequence of 255 symbols, which are written out
// as bytes that exclude the value '\n' (10). This encoding is actually a
// decoding of a fixed Huffman code of 255 symbols of equal probability. The
// output will be on average a little less than 0.1% larger than the input,
// plus one byte, assuming random input. This is intended to be used on
// compressed data, which will appear random. An input of all zero bits will
// have the maximum possible expansion, which is 14.3%, plus one byte.
int nolf_encode(FILE *in, FILE *out) {
unsigned buf = 0;
int bits = 0, ch;
do {
if (bits < 8) {
ch = getc(in);
if (ch != EOF) {
buf |= (unsigned)ch << bits;
bits += 8;
}
else if (bits == 0)
break;
}
if ((buf & 0x7f) == 0) {
buf >>= 7;
bits -= 7;
putc(0, out);
continue;
}
int sym = buf & 0xff;
buf >>= 8;
bits -= 8;
if (sym >= '\n' && sym < 128)
sym++;
putc(sym, out);
} while (ch != EOF);
return 0;
}
// Decode a sequence of symbols from a set of 255 that was encoded by
// nolf_encode(). The input is read as bytes that exclude the value '\n' (10).
// Any such values in the input are ignored and flagged in an error message.
// The sequence is decoded to the original sequence of arbitrary bytes. The
// decoding is actually an encoding of a fixed Huffman code of 255 symbols of
// equal probability.
int nolf_decode(FILE *in, FILE *out) {
unsigned long lfs = 0;
unsigned buf = 0;
int bits = 0, ch;
while ((ch = getc(in)) != EOF) {
if (ch == '\n') {
lfs++;
continue;
}
if (ch == 0) {
if (bits == 0) {
bits = 7;
continue;
}
bits--;
}
else {
if (ch > '\n' && ch <= 128)
ch--;
buf |= (unsigned)ch << bits;
}
putc(buf, out);
buf >>= 8;
}
if (lfs)
fprintf(stderr, "nolf: %lu unexpected line feeds ignored\n", lfs);
return lfs != 0;
}
// Encode (no arguments) or decode (-d) from stdin to stdout.
int main(int argc, char **argv) {
if (argc == 1)
return nolf_encode(stdin, stdout);
else if (argc == 2 && strcmp(argv[1], "-d") == 0)
return nolf_decode(stdin, stdout);
fputs("nolf: unknown options (use -d to decode)\n", stderr);
return 1;
}
``` | As [@ypnos says](https://stackoverflow.com/a/62585291/3798897), this isn't possible within zlib itself. You mentioned that base64 encoding is too inefficient, but it's pretty easy to use an escape character to encode a character you want to avoid (like newlines).
This isn't the most efficient code in the world (and you might want to do something like finding the least used bytes to save a tiny bit more space), but it's readable enough and demonstrates the idea. You can losslessly encode/decode, and the encoded stream won't have any newlines.
```
def encode(data):
# order matters
return data.replace(b'a', b'aa').replace(b'\n', b'ab')
def decode(data):
def _foo():
pair = False
for b in data:
if pair:
# yield b'a' if b==b'a' else b'\n'
yield 97 if b==97 else 10
pair = False
elif b==97: # b'a'
pair = True
else:
yield b
return bytes(_foo())
```
As some measure of confidence you can check this exhaustively on small bytestrings:
```
from itertools import *
all(
bytes(p) == decode(encode(bytes(p)))
for c in combinations_with_replacement(b'ab\nc', r=6)
for p in permutations(c)
)
``` |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | Three solutions:
1. Pass in a dict type (second argument to the constructor) which returns the keys in your preferred sort order.
2. Extend the class and overload `write()` (just copy this method from the original source and modify it).
3. Copy the file ConfigParser.py and add the sorting to the method `write()`.
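A minimal sketch of option 2, assuming Python 3's `configparser` (on 2.6 the module is spelled `ConfigParser`); it reaches into the private `_sections` attribute, so treat it as illustrative rather than a drop-in implementation:

```python
import configparser
import io

class SortedWriteParser(configparser.RawConfigParser):
    """Write sections and options alphabetically, regardless of insertion order."""
    def write(self, fp, space_around_delimiters=True):
        delim = " = " if space_around_delimiters else "="
        # _sections is an implementation detail: section name -> option dict
        for section in sorted(self._sections):
            fp.write("[%s]\n" % section)
            for key in sorted(self._sections[section]):
                fp.write("%s%s%s\n" % (key, delim, self._sections[section][key]))
            fp.write("\n")

cfg = SortedWriteParser()
cfg.add_section("zeta")
cfg.set("zeta", "b", "2")
cfg.set("zeta", "a", "1")
cfg.add_section("alpha")
cfg.set("alpha", "x", "8")
out = io.StringIO()
cfg.write(out)
print(out.getvalue())
```

Sections and keys then come out alphabetically no matter what order they were added or read in.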
See [this article](http://www.voidspace.org.uk/python/odict.html) for an ordered dict, or maybe use [this implementation](http://code.activestate.com/recipes/496761/) which preserves the original adding order. | The first method looked like the easiest and safest way.
But after looking at the source code of ConfigParser, I found that it creates an empty built-in dict and then copies all the values from the "second parameter" one by one. That means it won't use the OrderedDict type. An easy workaround is to overload the RawConfigParser class.
```
class OrderedRawConfigParser(ConfigParser.RawConfigParser):
def __init__(self, defaults=None):
self._defaults = type(defaults)() ## will be correct with all type of dict.
self._sections = type(defaults)()
if defaults:
for key, value in defaults.items():
self._defaults[self.optionxform(key)] = value
```
It leaves only one flaw open... namely in ConfigParser.items(). odict doesn't support `update` and `comparison` with normal dicts.
Workaround (overload this function too):
```
def items(self, section):
try:
d2 = self._sections[section]
except KeyError:
if section != DEFAULTSECT:
raise NoSectionError(section)
d2 = type(self._sections)() ## Originally: d2 = {}
d = self._defaults.copy()
d.update(d2) ## No more unsupported dict-odict incompatibility here.
if "__name__" in d:
del d["__name__"]
return d.items()
```
Another solution to the items issue is to modify the `odict.OrderedDict.update` function - maybe that is easier than this one, but I leave it to you.
PS: I implemented this solution, but it doesn't work; ConfigParser is still mixing up the order of the entries. If I figure out why, I will report it.
PS2: Solved. The reader function of ConfigParser is quite braindead. Anyway, only one line had to be changed - plus some others for overloading from an external file:
```
def _read(self, fp, fpname):
cursect = None
optname = None
lineno = 0
e = None
while True:
line = fp.readline()
if not line:
break
lineno = lineno + 1
if line.strip() == '' or line[0] in '#;':
continue
if line.split(None, 1)[0].lower() == 'rem' and line[0] in "rR":
continue
if line[0].isspace() and cursect is not None and optname:
value = line.strip()
if value:
cursect[optname] = "%s\n%s" % (cursect[optname], value)
else:
mo = self.SECTCRE.match(line)
if mo:
sectname = mo.group('header')
if sectname in self._sections:
cursect = self._sections[sectname]
## Add ConfigParser for external overloading
elif sectname == ConfigParser.DEFAULTSECT:
cursect = self._defaults
else:
## The tiny single modification needed
cursect = type(self._sections)() ## cursect = {'__name__':sectname}
cursect['__name__'] = sectname
self._sections[sectname] = cursect
optname = None
elif cursect is None:
raise ConfigParser.MissingSectionHeaderError(fpname, lineno, line)
## Add ConfigParser for external overloading.
else:
mo = self.OPTCRE.match(line)
if mo:
optname, vi, optval = mo.group('option', 'vi', 'value')
if vi in ('=', ':') and ';' in optval:
pos = optval.find(';')
if pos != -1 and optval[pos-1].isspace():
optval = optval[:pos]
optval = optval.strip()
if optval == '""':
optval = ''
optname = self.optionxform(optname.rstrip())
cursect[optname] = optval
else:
if not e:
e = ConfigParser.ParsingError(fpname)
## Add ConfigParser for external overloading
e.append(lineno, repr(line))
if e:
raise e
```
Trust me, I didn't write this thing myself; I copy-pasted it entirely from ConfigParser.py.
So overall what to do?
1. Download odict.py from one of the links previously suggested
2. Import it.
3. Copy-paste these codes in your favorite utils.py (which will create the `OrderedRawConfigParser` class for you)
4. `cfg = utils.OrderedRawConfigParser(odict.OrderedDict())`
5. use cfg as always. it will stay ordered.
6. Sit back, smoke a Havana, relax.
PS3: The problem I solved here exists only in Python 2.5. In 2.6 there is already a solution for it: they added a second parameter to the `__init__` function, a custom dict\_type.
So this workaround is needed only for 2.5 |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | Three solutions:
1. Pass in a dict type (second argument to the constructor) which returns the keys in your preferred sort order.
2. Extend the class and overload `write()` (just copy this method from the original source and modify it).
3. Copy the file ConfigParser.py and add the sorting to the method `write()`.
See [this article](http://www.voidspace.org.uk/python/odict.html) for an ordered dict, or maybe use [this implementation](http://code.activestate.com/recipes/496761/) which preserves the original adding order. | This is my solution for writing the config file in alphabetical order:
```
class OrderedRawConfigParser( ConfigParser.RawConfigParser ):
"""
Overload standard class ConfigParser.RawConfigParser
"""
def __init__( self, defaults = None, dict_type = dict ):
ConfigParser.RawConfigParser.__init__( self, defaults = defaults, dict_type = dict_type ) ## forward the constructor arguments
def write(self, fp):
"""Write an .ini-format representation of the configuration state."""
if self._defaults:
fp.write("[%s]\n" % ConfigParser.DEFAULTSECT)
for key in sorted( self._defaults ):
fp.write( "%s = %s\n" % (key, str( self._defaults[ key ] ).replace('\n', '\n\t')) )
fp.write("\n")
for section in self._sections:
fp.write("[%s]\n" % section)
for key in sorted( self._sections[section] ):
if key != "__name__":
fp.write("%s = %s\n" %
(key, str( self._sections[section][ key ] ).replace('\n', '\n\t')))
fp.write("\n")
``` |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | Three solutions:
1. Pass in a dict type (second argument to the constructor) which returns the keys in your preferred sort order.
2. Extend the class and overload `write()` (just copy this method from the original source and modify it).
3. Copy the file ConfigParser.py and add the sorting to the method `write()`.
See [this article](http://www.voidspace.org.uk/python/odict.html) for an ordered dict, or maybe use [this implementation](http://code.activestate.com/recipes/496761/) which preserves the original adding order. | I was looking into this while merging a .gitmodules file in a subtree merge with a supermodule -- it was super confusing to start with, and having the submodules in different orders made it even worse.
Using GitPython helped a lot:
```
from collections import OrderedDict
import git
filePath = '/tmp/git.config'
# Could use SubmoduleConfigParser to get fancier
c = git.GitConfigParser(filePath, False)
c.sections()
# http://stackoverflow.com/questions/8031418/how-to-sort-ordereddict-in-ordereddict-python
c._sections = OrderedDict(sorted(c._sections.iteritems(), key=lambda x: x[0]))
c.write()
del c
``` |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | Three solutions:
1. Pass in a dict type (second argument to the constructor) which returns the keys in your preferred sort order.
2. Extend the class and overload `write()` (just copy this method from the original source and modify it).
3. Copy the file ConfigParser.py and add the sorting to the method `write()`.
See [this article](http://www.voidspace.org.uk/python/odict.html) for an ordered dict, or maybe use [this implementation](http://code.activestate.com/recipes/496761/) which preserves the original adding order. | I was able to solve this issue by sorting the sections in the ConfigParser from the outside, like so:
```
config = ConfigParser.ConfigParser({}, collections.OrderedDict)
config.read('testfile.ini')
# Order the content of each section alphabetically
for section in config._sections:
config._sections[section] = collections.OrderedDict(sorted(config._sections[section].items(), key=lambda t: t[0]))
# Order all sections alphabetically
config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
# Write ini file to standard output
config.write(sys.stdout)
``` |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | This is my solution for writing the config file in alphabetical order:
```
class OrderedRawConfigParser( ConfigParser.RawConfigParser ):
"""
Overload standard class ConfigParser.RawConfigParser
"""
def __init__( self, defaults = None, dict_type = dict ):
ConfigParser.RawConfigParser.__init__( self, defaults = defaults, dict_type = dict_type ) ## forward the constructor arguments
def write(self, fp):
"""Write an .ini-format representation of the configuration state."""
if self._defaults:
fp.write("[%s]\n" % ConfigParser.DEFAULTSECT)
for key in sorted( self._defaults ):
fp.write( "%s = %s\n" % (key, str( self._defaults[ key ] ).replace('\n', '\n\t')) )
fp.write("\n")
for section in self._sections:
fp.write("[%s]\n" % section)
for key in sorted( self._sections[section] ):
if key != "__name__":
fp.write("%s = %s\n" %
(key, str( self._sections[section][ key ] ).replace('\n', '\n\t')))
fp.write("\n")
``` | The first method looked like the easiest and safest way.
But after looking at the source code of ConfigParser, I found that it creates an empty built-in dict and then copies all the values from the "second parameter" one by one. That means it won't use the OrderedDict type. An easy workaround is to overload the RawConfigParser class.
```
class OrderedRawConfigParser(ConfigParser.RawConfigParser):
def __init__(self, defaults=None):
self._defaults = type(defaults)() ## will be correct with all type of dict.
self._sections = type(defaults)()
if defaults:
for key, value in defaults.items():
self._defaults[self.optionxform(key)] = value
```
It leaves only one flaw open... namely in ConfigParser.items(). odict doesn't support `update` and `comparison` with normal dicts.
Workaround (overload this function too):
```
def items(self, section):
try:
d2 = self._sections[section]
except KeyError:
if section != DEFAULTSECT:
raise NoSectionError(section)
d2 = type(self._sections)() ## Originally: d2 = {}
d = self._defaults.copy()
d.update(d2) ## No more unsupported dict-odict incompatibility here.
if "__name__" in d:
del d["__name__"]
return d.items()
```
Another solution to the items issue is to modify the `odict.OrderedDict.update` function - maybe that is easier than this one, but I leave it to you.
PS: I implemented this solution, but it doesn't work; ConfigParser is still mixing up the order of the entries. If I figure out why, I will report it.
PS2: Solved. The reader function of ConfigParser is quite braindead. Anyway, only one line had to be changed - plus some others for overloading from an external file:
```
def _read(self, fp, fpname):
cursect = None
optname = None
lineno = 0
e = None
while True:
line = fp.readline()
if not line:
break
lineno = lineno + 1
if line.strip() == '' or line[0] in '#;':
continue
if line.split(None, 1)[0].lower() == 'rem' and line[0] in "rR":
continue
if line[0].isspace() and cursect is not None and optname:
value = line.strip()
if value:
cursect[optname] = "%s\n%s" % (cursect[optname], value)
else:
mo = self.SECTCRE.match(line)
if mo:
sectname = mo.group('header')
if sectname in self._sections:
cursect = self._sections[sectname]
## Add ConfigParser for external overloading
elif sectname == ConfigParser.DEFAULTSECT:
cursect = self._defaults
else:
## The tiny single modification needed
cursect = type(self._sections)() ## cursect = {'__name__':sectname}
cursect['__name__'] = sectname
self._sections[sectname] = cursect
optname = None
elif cursect is None:
raise ConfigParser.MissingSectionHeaderError(fpname, lineno, line)
## Add ConfigParser for external overloading.
else:
mo = self.OPTCRE.match(line)
if mo:
optname, vi, optval = mo.group('option', 'vi', 'value')
if vi in ('=', ':') and ';' in optval:
pos = optval.find(';')
if pos != -1 and optval[pos-1].isspace():
optval = optval[:pos]
optval = optval.strip()
if optval == '""':
optval = ''
optname = self.optionxform(optname.rstrip())
cursect[optname] = optval
else:
if not e:
e = ConfigParser.ParsingError(fpname)
## Add ConfigParser for external overloading
e.append(lineno, repr(line))
if e:
raise e
```
Trust me, I didn't write this thing myself; I copy-pasted it entirely from ConfigParser.py.
So overall what to do?
1. Download odict.py from one of the links previously suggested
2. Import it.
3. Copy-paste these codes in your favorite utils.py (which will create the `OrderedRawConfigParser` class for you)
4. `cfg = utils.OrderedRawConfigParser(odict.OrderedDict())`
5. use cfg as always. it will stay ordered.
6. Sit back, smoke a Havana, relax.
PS3: The problem I solved here exists only in Python 2.5. In 2.6 there is already a solution for it: they added a second parameter to the `__init__` function, a custom dict\_type.
So this workaround is needed only for 2.5 |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | The first method looked like the easiest and safest way.
But after looking at the source code of ConfigParser, I found that it creates an empty built-in dict and then copies all the values from the "second parameter" one by one. That means it won't use the OrderedDict type. An easy workaround is to overload the RawConfigParser class.
```
class OrderedRawConfigParser(ConfigParser.RawConfigParser):
def __init__(self, defaults=None):
self._defaults = type(defaults)() ## will be correct with all type of dict.
self._sections = type(defaults)()
if defaults:
for key, value in defaults.items():
self._defaults[self.optionxform(key)] = value
```
It leaves only one flaw open... namely in ConfigParser.items(). odict doesn't support `update` and `comparison` with normal dicts.
Workaround (overload this function too):
```
def items(self, section):
try:
d2 = self._sections[section]
except KeyError:
if section != DEFAULTSECT:
raise NoSectionError(section)
d2 = type(self._sections)() ## Originally: d2 = {}
d = self._defaults.copy()
d.update(d2) ## No more unsupported dict-odict incompatibility here.
if "__name__" in d:
del d["__name__"]
return d.items()
```
Another solution to the items issue is to modify the `odict.OrderedDict.update` function - maybe that is easier than this one, but I leave it to you.
PS: I implemented this solution, but it doesn't work; ConfigParser is still mixing up the order of the entries. If I figure out why, I will report it.
PS2: Solved. The reader function of ConfigParser is quite braindead. Anyway, only one line had to be changed - plus some others for overloading from an external file:
```
def _read(self, fp, fpname):
cursect = None
optname = None
lineno = 0
e = None
while True:
line = fp.readline()
if not line:
break
lineno = lineno + 1
if line.strip() == '' or line[0] in '#;':
continue
if line.split(None, 1)[0].lower() == 'rem' and line[0] in "rR":
continue
if line[0].isspace() and cursect is not None and optname:
value = line.strip()
if value:
cursect[optname] = "%s\n%s" % (cursect[optname], value)
else:
mo = self.SECTCRE.match(line)
if mo:
sectname = mo.group('header')
if sectname in self._sections:
cursect = self._sections[sectname]
## Add ConfigParser for external overloading
elif sectname == ConfigParser.DEFAULTSECT:
cursect = self._defaults
else:
## The tiny single modification needed
cursect = type(self._sections)() ## cursect = {'__name__':sectname}
cursect['__name__'] = sectname
self._sections[sectname] = cursect
optname = None
elif cursect is None:
raise ConfigParser.MissingSectionHeaderError(fpname, lineno, line)
## Add ConfigParser for external overloading.
else:
mo = self.OPTCRE.match(line)
if mo:
optname, vi, optval = mo.group('option', 'vi', 'value')
if vi in ('=', ':') and ';' in optval:
pos = optval.find(';')
if pos != -1 and optval[pos-1].isspace():
optval = optval[:pos]
optval = optval.strip()
if optval == '""':
optval = ''
optname = self.optionxform(optname.rstrip())
cursect[optname] = optval
else:
if not e:
e = ConfigParser.ParsingError(fpname)
## Add ConfigParser for external overloading
e.append(lineno, repr(line))
if e:
raise e
```
Trust me, I didn't write this thing myself; I copy-pasted it entirely from ConfigParser.py.
So overall what to do?
1. Download odict.py from one of the links previously suggested
2. Import it.
3. Copy-paste these codes in your favorite utils.py (which will create the `OrderedRawConfigParser` class for you)
4. `cfg = utils.OrderedRawConfigParser(odict.OrderedDict())`
5. use cfg as always. it will stay ordered.
6. Sit back, smoke a Havana, relax.
PS3: The problem I solved here exists only in Python 2.5. In 2.6 there is already a solution for it: they added a second parameter to the `__init__` function, a custom dict\_type.
So this workaround is needed only for 2.5 | I was looking into this while merging a .gitmodules file in a subtree merge with a supermodule -- it was super confusing to start with, and having the submodules in different orders made it even worse.
Using GitPython helped a lot:
```
from collections import OrderedDict
import git
filePath = '/tmp/git.config'
# Could use SubmoduleConfigParser to get fancier
c = git.GitConfigParser(filePath, False)
c.sections()
# http://stackoverflow.com/questions/8031418/how-to-sort-ordereddict-in-ordereddict-python
c._sections = OrderedDict(sorted(c._sections.iteritems(), key=lambda x: x[0]))
c.write()
del c
``` |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | I was able to solve this issue by sorting the sections in the ConfigParser from the outside like so:
```
config = ConfigParser.ConfigParser({}, collections.OrderedDict)
config.read('testfile.ini')
# Order the content of each section alphabetically
for section in config._sections:
config._sections[section] = collections.OrderedDict(sorted(config._sections[section].items(), key=lambda t: t[0]))
# Order all sections alphabetically
config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
# Write ini file to standard output
config.write(sys.stdout)
``` | The first method looked like the easiest and safest way.
But after looking at the source code of ConfigParser, I found that it creates an empty built-in dict and then copies all the values from the "second parameter" one by one. That means it won't use the OrderedDict type. An easy workaround is to overload the RawConfigParser class.
```
class OrderedRawConfigParser(ConfigParser.RawConfigParser):
def __init__(self, defaults=None):
self._defaults = type(defaults)() ## will be correct with all type of dict.
self._sections = type(defaults)()
if defaults:
for key, value in defaults.items():
self._defaults[self.optionxform(key)] = value
```
It leaves only one flaw open... namely in ConfigParser.items(). odict doesn't support `update` and `comparison` with normal dicts.
Workaround (overload this function too):
```
def items(self, section):
try:
d2 = self._sections[section]
except KeyError:
if section != DEFAULTSECT:
raise NoSectionError(section)
d2 = type(self._sections)() ## Originally: d2 = {}
d = self._defaults.copy()
d.update(d2) ## No more unsupported dict-odict incompatibility here.
if "__name__" in d:
del d["__name__"]
return d.items()
```
Another solution to the items issue is to modify the `odict.OrderedDict.update` function - maybe that is easier than this one, but I leave it to you.
PS: I implemented this solution, but it doesn't work; ConfigParser is still mixing up the order of the entries. If I figure out why, I will report it.
PS2: Solved. The reader function of ConfigParser is quite braindead. Anyway, only one line had to be changed - plus some others for overloading from an external file:
```
def _read(self, fp, fpname):
cursect = None
optname = None
lineno = 0
e = None
while True:
line = fp.readline()
if not line:
break
lineno = lineno + 1
if line.strip() == '' or line[0] in '#;':
continue
if line.split(None, 1)[0].lower() == 'rem' and line[0] in "rR":
continue
if line[0].isspace() and cursect is not None and optname:
value = line.strip()
if value:
cursect[optname] = "%s\n%s" % (cursect[optname], value)
else:
mo = self.SECTCRE.match(line)
if mo:
sectname = mo.group('header')
if sectname in self._sections:
cursect = self._sections[sectname]
## Add ConfigParser for external overloading
elif sectname == ConfigParser.DEFAULTSECT:
cursect = self._defaults
else:
## The tiny single modification needed
cursect = type(self._sections)() ## cursect = {'__name__':sectname}
cursect['__name__'] = sectname
self._sections[sectname] = cursect
optname = None
elif cursect is None:
raise ConfigParser.MissingSectionHeaderError(fpname, lineno, line)
## Add ConfigParser for external overloading.
else:
mo = self.OPTCRE.match(line)
if mo:
optname, vi, optval = mo.group('option', 'vi', 'value')
if vi in ('=', ':') and ';' in optval:
pos = optval.find(';')
if pos != -1 and optval[pos-1].isspace():
optval = optval[:pos]
optval = optval.strip()
if optval == '""':
optval = ''
optname = self.optionxform(optname.rstrip())
cursect[optname] = optval
else:
if not e:
e = ConfigParser.ParsingError(fpname)
## Add ConfigParser for external overloading
e.append(lineno, repr(line))
if e:
raise e
```
Trust me, I didn't write this thing myself; I copy-pasted it entirely from ConfigParser.py.
So overall what to do?
1. Download odict.py from one of the links previously suggested
2. Import it.
3. Copy-paste these codes in your favorite utils.py (which will create the `OrderedRawConfigParser` class for you)
4. `cfg = utils.OrderedRawConfigParser(odict.OrderedDict())`
5. use cfg as always. it will stay ordered.
6. Sit back, smoke a Havana, relax.
PS3: The problem I solved here exists only in Python 2.5. In 2.6 there is already a solution for it: they added a second parameter to the `__init__` function, a custom dict\_type.
So this workaround is needed only for 2.5 |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | This is my solution for writing the config file in alphabetical order:
```
class OrderedRawConfigParser( ConfigParser.RawConfigParser ):
"""
Overload standard class ConfigParser.RawConfigParser
"""
def __init__( self, defaults = None, dict_type = dict ):
ConfigParser.RawConfigParser.__init__( self, defaults = defaults, dict_type = dict_type ) ## forward the constructor arguments
def write(self, fp):
"""Write an .ini-format representation of the configuration state."""
if self._defaults:
fp.write("[%s]\n" % ConfigParser.DEFAULTSECT)
for key in sorted( self._defaults ):
fp.write( "%s = %s\n" % (key, str( self._defaults[ key ] ).replace('\n', '\n\t')) )
fp.write("\n")
for section in self._sections:
fp.write("[%s]\n" % section)
for key in sorted( self._sections[section] ):
if key != "__name__":
fp.write("%s = %s\n" %
(key, str( self._sections[section][ key ] ).replace('\n', '\n\t')))
fp.write("\n")
``` | I was looking into this while merging a .gitmodules file in a subtree merge with a supermodule -- it was super confusing to start with, and having the submodules in different orders made it even worse.
Using GitPython helped a lot:
```
from collections import OrderedDict
import git
filePath = '/tmp/git.config'
# Could use SubmoduleConfigParser to get fancier
c = git.GitConfigParser(filePath, False)
c.sections()
# http://stackoverflow.com/questions/8031418/how-to-sort-ordereddict-in-ordereddict-python
c._sections = OrderedDict(sorted(c._sections.iteritems(), key=lambda x: x[0]))
c.write()
del c
``` |
1,984,759 | Is there any solution to force the RawConfigParser.write() method to export the config file with an alphabetical sort?
Even if the original/loaded config file is sorted, the module mixes the sections and the options within the sections arbitrarily, and it is really annoying to manually edit a huge unsorted config file.
PS: I'm using Python 2.6 | 2009/12/31 | [
"https://Stackoverflow.com/questions/1984759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235709/"
] | I was able to solve this issue by sorting the sections in the ConfigParser from the outside like so:
```
config = ConfigParser.ConfigParser({}, collections.OrderedDict)
config.read('testfile.ini')
# Order the content of each section alphabetically
for section in config._sections:
config._sections[section] = collections.OrderedDict(sorted(config._sections[section].items(), key=lambda t: t[0]))
# Order all sections alphabetically
config._sections = collections.OrderedDict(sorted(config._sections.items(), key=lambda t: t[0] ))
# Write ini file to standard output
config.write(sys.stdout)
``` | I was looking into this while merging a .gitmodules file in a subtree merge with a supermodule -- it was super confusing to start with, and having the submodules in different orders made it even worse.
Using GitPython helped a lot:
```
from collections import OrderedDict
import git
filePath = '/tmp/git.config'
# Could use SubmoduleConfigParser to get fancier
c = git.GitConfigParser(filePath, False)
c.sections()
# http://stackoverflow.com/questions/8031418/how-to-sort-ordereddict-in-ordereddict-python
c._sections = OrderedDict(sorted(c._sections.iteritems(), key=lambda x: x[0]))
c.write()
del c
``` |
36,958,167 | I need to update a document in an array inside another document in Mongo DB.
```
{
"_id" : ObjectId("51cff693d342704b5047e6d8"),
"author" : "test",
"body" : "sdfkj dsfhk asdfjad ",
"comments" : [
{
"author" : "test",
"body" : "sdfkjdj\r\nasdjgkfdfj",
"email" : "test@tes.com"
},
{
"author" : "hola",
"body" : "sdfl\r\nhola \r\nwork here"
}
],
"date" : ISODate("2013-06-30T09:12:51.629Z"),
"permalink" : "mxwnnnqafl",
"tags" : [
"ab"
],
"title" : "cd"
}
```
If I try to update first document in comments array by below command, it works.
```
db.posts.update({'permalink':"cxzdzjkztkqraoqlgcru"},{'$inc': {"comments.0.num_likes": 1}})
```
But if I put the same in python code like below, I get a write error saying that it can't traverse the element. I don't understand what is missing!
Can anyone help me out please.
```
post = self.posts.find_one({'permalink': permalink})
response = self.posts.update({'permalink': permalink},
{'$inc': {"comments.comment_ordinal.num_likes": 1}})
WriteError: cannot use the part (comments of comments.comment_ordinal.num_likes) to traverse the element
``` | 2016/04/30 | [
"https://Stackoverflow.com/questions/36958167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3123752/"
] | The moment you give your HTTP/HTTPS endpoint and create subscription from aws console, what happens is , the Amazon sends a subscription msg to that endpoint. Now this is a rest call, and your app must have a handler for this endpoint, otherwise you miss catching this subscription message. The httpRequest object that your handler is passed, needs to access it's SNSMsgTypeHdr header field. This value will be "SubscriptionConfirmation". You need to catch this particular message first and then get the subscription url. You can handle it in your app itself or maybe print it out, and then manually visit that url to make the subscription. I would ideally suggest to make a subscription to the same topic at the same with your mail id, so that everytime your app gets a messages pushed , your mail id also gets the message(albeit the tokens will be different) but at least you will be sure that the message was pushed to your endpoint. All you need to do is keep working your app to handle the messages at that endpoint as per your requirements then. | There are 3 types of messages with SNS. Subscribe, Unsubscribe, and Notification. You will not get any Notification messages until you have correctly handled the subscribe message. Which involves making an API request to AWS when you receive the Subscribe request.
The call in this case is ConfirmSubscription: <http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SNS.html#confirmSubscription-property>
Once you do that, then you will start receiving notification messages and you can handle those as your code allows. |
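A minimal Python sketch of the dispatch logic both answers describe: inspect the `x-amz-sns-message-type` header, confirm a `SubscriptionConfirmation` by visiting its `SubscribeURL`, and only then handle requests as notifications. The function below is a hypothetical helper that only classifies the message; the actual HTTP GET to the confirmation URL is left to the caller:

```python
import json

def classify_sns_message(headers, body):
    """Return ('confirm', subscribe_url) or ('notify', message) for an SNS POST."""
    msg_type = headers.get('x-amz-sns-message-type')
    payload = json.loads(body)
    if msg_type == 'SubscriptionConfirmation':
        # Caller should GET this URL (or visit it manually) to confirm.
        return 'confirm', payload['SubscribeURL']
    if msg_type == 'Notification':
        return 'notify', payload['Message']
    raise ValueError('unexpected SNS message type: %r' % msg_type)

kind, url = classify_sns_message(
    {'x-amz-sns-message-type': 'SubscriptionConfirmation'},
    json.dumps({'SubscribeURL': 'https://sns.example/confirm'}))
print(kind, url)
```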
36,958,167 | I need to update a document in an array inside another document in Mongo DB.
```
{
"_id" : ObjectId("51cff693d342704b5047e6d8"),
"author" : "test",
"body" : "sdfkj dsfhk asdfjad ",
"comments" : [
{
"author" : "test",
"body" : "sdfkjdj\r\nasdjgkfdfj",
"email" : "test@tes.com"
},
{
"author" : "hola",
"body" : "sdfl\r\nhola \r\nwork here"
}
],
"date" : ISODate("2013-06-30T09:12:51.629Z"),
"permalink" : "mxwnnnqafl",
"tags" : [
"ab"
],
"title" : "cd"
}
```
If I try to update first document in comments array by below command, it works.
```
db.posts.update({'permalink':"cxzdzjkztkqraoqlgcru"},{'$inc': {"comments.0.num_likes": 1}})
```
But if I put the same in python code like below, I get a write error saying that it can't traverse the element. I don't understand what is missing!
Can anyone help me out please.
```
post = self.posts.find_one({'permalink': permalink})
response = self.posts.update({'permalink': permalink},
{'$inc': {"comments.comment_ordinal.num_likes": 1}})
WriteError: cannot use the part (comments of comments.comment_ordinal.num_likes) to traverse the element
``` | 2016/04/30 | [
"https://Stackoverflow.com/questions/36958167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3123752/"
] | Try this:
```
const express = require('express');
const router = express.Router();
const request = require('request');
var bodyParser = require('body-parser')
router.post('/',bodyParser.text(),handleSNSMessage);
module.exports = router;
var handleSubscriptionResponse = function (error, response) {
if (!error && response.statusCode == 200) {
console.log('Yess! We have accepted the confirmation from AWS');
}
else {
throw new Error(`Unable to subscribe to given URL`);
//console.error(error)
}
}
async function handleSNSMessage(req, resp, next) {
try {
let payloadStr = req.body
payload = JSON.parse(payloadStr)
console.log(JSON.stringify(payload))
if (req.header('x-amz-sns-message-type') === 'SubscriptionConfirmation') {
const url = payload.SubscribeURL;
await request(url, handleSubscriptionResponse)
} else if (req.header('x-amz-sns-message-type') === 'Notification') {
console.log(payload)
//process data here
} else {
throw new Error(`Invalid message type ${payload.Type}`);
}
} catch (err) {
console.error(err)
resp.status(500).send('Oops')
}
resp.send('Ok')
}
```
**Note:** I didn't use `app.use` as that will impact all my other endpoints. | There are 3 types of messages with SNS. Subscribe, Unsubscribe, and Notification. You will not get any Notification messages until you have correctly handled the subscribe message. Which involves making an API request to AWS when you receive the Subscribe request.
The call in this case is ConfirmSubscription: <http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SNS.html#confirmSubscription-property>
Once you do that, then you will start receiving notification messages and you can handle those as your code allows. |
36,958,167 | I need to update a document in an array inside another document in Mongo DB.
```
{
"_id" : ObjectId("51cff693d342704b5047e6d8"),
"author" : "test",
"body" : "sdfkj dsfhk asdfjad ",
"comments" : [
{
"author" : "test",
"body" : "sdfkjdj\r\nasdjgkfdfj",
"email" : "test@tes.com"
},
{
"author" : "hola",
"body" : "sdfl\r\nhola \r\nwork here"
}
],
"date" : ISODate("2013-06-30T09:12:51.629Z"),
"permalink" : "mxwnnnqafl",
"tags" : [
"ab"
],
"title" : "cd"
}
```
If I try to update first document in comments array by below command, it works.
```
db.posts.update({'permalink':"cxzdzjkztkqraoqlgcru"},{'$inc': {"comments.0.num_likes": 1}})
```
But if I put the same in python code like below, I get a write error saying that it can't traverse the element. I don't understand what is missing!
Can anyone help me out please.
```
post = self.posts.find_one({'permalink': permalink})
response = self.posts.update({'permalink': permalink},
{'$inc': {"comments.comment_ordinal.num_likes": 1}})
WriteError: cannot use the part (comments of comments.comment_ordinal.num_likes) to traverse the element
``` | 2016/04/30 | [
"https://Stackoverflow.com/questions/36958167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3123752/"
] | The moment you give your HTTP/HTTPS endpoint and create subscription from aws console, what happens is , the Amazon sends a subscription msg to that endpoint. Now this is a rest call, and your app must have a handler for this endpoint, otherwise you miss catching this subscription message. The httpRequest object that your handler is passed, needs to access it's SNSMsgTypeHdr header field. This value will be "SubscriptionConfirmation". You need to catch this particular message first and then get the subscription url. You can handle it in your app itself or maybe print it out, and then manually visit that url to make the subscription. I would ideally suggest to make a subscription to the same topic at the same with your mail id, so that everytime your app gets a messages pushed , your mail id also gets the message(albeit the tokens will be different) but at least you will be sure that the message was pushed to your endpoint. All you need to do is keep working your app to handle the messages at that endpoint as per your requirements then. | After you subscribe your endpoint, Amazon SNS will send a subscription confirmation message to the endpoint.
You should have code at the endpoint that retrieves the **SubscribeURL** value from the subscription confirmation message and either visit the location specified by **SubscribeURL** itself or make it available to you so that you can manually visit the **SubscribeURL**, for example, using a web browser.
Amazon SNS will not send messages to the endpoint until the subscription has been confirmed.
You can use the Amazon SNS console to verify that the subscription is confirmed: The Subscription ID will display the ARN for the subscription instead of the **PendingConfirmation** value that you saw when you first added the subscription. |
36,958,167 | I need to update a document in an array inside another document in Mongo DB.
```
{
"_id" : ObjectId("51cff693d342704b5047e6d8"),
"author" : "test",
"body" : "sdfkj dsfhk asdfjad ",
"comments" : [
{
"author" : "test",
"body" : "sdfkjdj\r\nasdjgkfdfj",
"email" : "test@tes.com"
},
{
"author" : "hola",
"body" : "sdfl\r\nhola \r\nwork here"
}
],
"date" : ISODate("2013-06-30T09:12:51.629Z"),
"permalink" : "mxwnnnqafl",
"tags" : [
"ab"
],
"title" : "cd"
}
```
If I try to update first document in comments array by below command, it works.
```
db.posts.update({'permalink':"cxzdzjkztkqraoqlgcru"},{'$inc': {"comments.0.num_likes": 1}})
```
But if I put the same in python code like below, I get a write error saying that it can't traverse the element. I don't understand what is missing!
Can anyone help me out please.
```
post = self.posts.find_one({'permalink': permalink})
response = self.posts.update({'permalink': permalink},
{'$inc': {"comments.comment_ordinal.num_likes": 1}})
WriteError: cannot use the part (comments of comments.comment_ordinal.num_likes) to traverse the element
``` | 2016/04/30 | [
"https://Stackoverflow.com/questions/36958167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3123752/"
] | Try this:
```
const express = require('express');
const router = express.Router();
const request = require('request');
var bodyParser = require('body-parser')
router.post('/',bodyParser.text(),handleSNSMessage);
module.exports = router;
var handleSubscriptionResponse = function (error, response) {
if (!error && response.statusCode == 200) {
console.log('Yess! We have accepted the confirmation from AWS');
}
else {
throw new Error(`Unable to subscribe to given URL`);
//console.error(error)
}
}
async function handleSNSMessage(req, resp, next) {
try {
let payloadStr = req.body
payload = JSON.parse(payloadStr)
console.log(JSON.stringify(payload))
if (req.header('x-amz-sns-message-type') === 'SubscriptionConfirmation') {
const url = payload.SubscribeURL;
await request(url, handleSubscriptionResponse)
} else if (req.header('x-amz-sns-message-type') === 'Notification') {
console.log(payload)
//process data here
} else {
throw new Error(`Invalid message type ${payload.Type}`);
}
} catch (err) {
console.error(err)
resp.status(500).send('Oops')
}
resp.send('Ok')
}
```
**Note:** I didn't use `app.use` as that will impact all my other endpoints. | After you subscribe your endpoint, Amazon SNS will send a subscription confirmation message to the endpoint.
You should have code at the endpoint that retrieves the **SubscribeURL** value from the subscription confirmation message and either visit the location specified by **SubscribeURL** itself or make it available to you so that you can manually visit the **SubscribeURL**, for example, using a web browser.
Amazon SNS will not send messages to the endpoint until the subscription has been confirmed.
You can use the Amazon SNS console to verify that the subscription is confirmed: The Subscription ID will display the ARN for the subscription instead of the **PendingConfirmation** value that you saw when you first added the subscription. |
36,958,167 | I need to update a document in an array inside another document in Mongo DB.
```
{
"_id" : ObjectId("51cff693d342704b5047e6d8"),
"author" : "test",
"body" : "sdfkj dsfhk asdfjad ",
"comments" : [
{
"author" : "test",
"body" : "sdfkjdj\r\nasdjgkfdfj",
"email" : "test@tes.com"
},
{
"author" : "hola",
"body" : "sdfl\r\nhola \r\nwork here"
}
],
"date" : ISODate("2013-06-30T09:12:51.629Z"),
"permalink" : "mxwnnnqafl",
"tags" : [
"ab"
],
"title" : "cd"
}
```
If I try to update first document in comments array by below command, it works.
```
db.posts.update({'permalink':"cxzdzjkztkqraoqlgcru"},{'$inc': {"comments.0.num_likes": 1}})
```
But if I put the same in python code like below, I get a write error saying that it can't traverse the element. I don't understand what is missing!
Can anyone help me out please.
```
post = self.posts.find_one({'permalink': permalink})
response = self.posts.update({'permalink': permalink},
{'$inc': {"comments.comment_ordinal.num_likes": 1}})
WriteError: cannot use the part (comments of comments.comment_ordinal.num_likes) to traverse the element
``` | 2016/04/30 | [
"https://Stackoverflow.com/questions/36958167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3123752/"
] | Try this:
```
const express = require('express');
const router = express.Router();
const request = require('request');
var bodyParser = require('body-parser')
router.post('/',bodyParser.text(),handleSNSMessage);
module.exports = router;
var handleSubscriptionResponse = function (error, response) {
if (!error && response.statusCode == 200) {
console.log('Yess! We have accepted the confirmation from AWS');
}
else {
throw new Error(`Unable to subscribe to given URL`);
//console.error(error)
}
}
async function handleSNSMessage(req, resp, next) {
try {
let payloadStr = req.body
payload = JSON.parse(payloadStr)
console.log(JSON.stringify(payload))
if (req.header('x-amz-sns-message-type') === 'SubscriptionConfirmation') {
const url = payload.SubscribeURL;
await request(url, handleSubscriptionResponse)
} else if (req.header('x-amz-sns-message-type') === 'Notification') {
console.log(payload)
//process data here
} else {
throw new Error(`Invalid message type ${payload.Type}`);
}
} catch (err) {
console.error(err)
resp.status(500).send('Oops')
}
resp.send('Ok')
}
```
**Note:** I didn't use `app.use` as that will impact all my other endpoints. | The moment you give your HTTP/HTTPS endpoint and create subscription from aws console, what happens is , the Amazon sends a subscription msg to that endpoint. Now this is a rest call, and your app must have a handler for this endpoint, otherwise you miss catching this subscription message. The httpRequest object that your handler is passed, needs to access it's SNSMsgTypeHdr header field. This value will be "SubscriptionConfirmation". You need to catch this particular message first and then get the subscription url. You can handle it in your app itself or maybe print it out, and then manually visit that url to make the subscription. I would ideally suggest to make a subscription to the same topic at the same with your mail id, so that everytime your app gets a messages pushed , your mail id also gets the message(albeit the tokens will be different) but at least you will be sure that the message was pushed to your endpoint. All you need to do is keep working your app to handle the messages at that endpoint as per your requirements then. |
67,347,499 | I've error in Python Selenium. I'm trying to download all songs with Selenium, but there is some error. Here is code:
```
from selenium import webdriver
import time
driver = webdriver.Chrome('/home/tigran/Documents/chromedriver/chromedriver')
url = 'https://sefon.pro/genres/shanson/top/'
driver.get(url)
songs = driver.find_elements_by_xpath('/html/body/div[2]/div[2]/div[1]/div[3]/div/div[3]/div[2]/a')
for song in songs:
song.click()
time.sleep(5)
driver.find_element_by_xpath('/html/body/div[2]/div[2]/div[1]/div[1]/div[2]/div/div[3]/div[1]/a[2]').click()
time.sleep(8)
driver.get(url)
time.sleep(5)
```
And here is error:
```
Traceback (most recent call last):
File "test.py", line 13, in <module>
song.click()
File "/home/tigran/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 80, in click
self._execute(Command.CLICK_ELEMENT)
File "/home/tigran/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "/home/tigran/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/home/tigran/.local/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: chrome=90.0.4430.72)
```
Any ideas why error comes? | 2021/05/01 | [
"https://Stackoverflow.com/questions/67347499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14646178/"
] | You can try to use Image.fromarray:
```
Image.fromarray(matrice, mode=couleur)
``` | Sorry for my longtime answer. The problem is in the type being used and converting it to the image. If you use a single-byte type for an image, then the matrix type must also be single-byte. Example:
```
from PIL import Image
import numpy as np
size_x = 50
size_y = 8
m = "L"
matrix = np.array([[255] * 50 for _ in range(size_y)], dtype="uint8")
im = Image.fromarray(matrix, mode=m)
im.save('Degrade.jpg')
im.show()
``` |
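The dtype mismatch the answer points at can be checked without PIL: mode `"L"` is one byte per pixel, so the backing matrix must be a single-byte type too. A small numpy-only sketch (the `Image.fromarray` call is left as a comment so this runs without Pillow installed):

```python
import numpy as np

size_x, size_y = 50, 8
default = np.array([[255] * size_x for _ in range(size_y)])                 # platform int, >1 byte per item
single = np.array([[255] * size_x for _ in range(size_y)], dtype="uint8")   # 1 byte per item, matches mode "L"
print(single.itemsize)  # 1
# Image.fromarray(single, mode="L") interprets this buffer correctly;
# handing it `default` gives PIL several bytes per 1-byte pixel.
```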
53,719,606 | Without changing any code, the plotted graph comes out different: correct on the first run in a fresh bash session, disordered on subsequent runs (maybe it can cycle back to the correct order).
To be specific:
Environment: MacOS Mojave 10.14.2, python3.7.1 installed through homebrew.
To do: Plot `scatter` for two or three sets of data on the same `axes`, each with a different `markertype` and different `colors`. Plot customised legend showing which data set each `markertype` represents.
I am sorry I don't have enough time to prepare testable code (for now), but this part seems to be the problem:
```
markerTypes = cycle(['o', 's', '^', 'd', 'p', 'P', '*'])
strainLegends = []
strains = list(set([idx.split('_')[0] for idx in pca2Plot.index]))
for strain in strains:
# markerType is fixed here, and shouldn't be passed on to the next python run anyway.
markerType = next(markerTypes)
# strainSamples connects directly to strain variable, then data is generated from getting strainSamples:
strainSamples = [sample for sample in samples if
sample.split('_')[0] == strain]
xData = pca2Plot.loc[strainSamples, 'PC1']
yData = pca2Plot.loc[strainSamples, 'PC2']
# See pictures below, data is correctly identified from source
# both scatter and legend instance use the same fixed markerType
ax.scatter(xData, yData, c=drawColors[strainSamples],
s=40, marker=markerType, zorder=3)
strainLegends.append(Line2D([0], [0], marker=markerType, color='k',
markersize=10,
linewidth=0, label=strain))
# print([i for i in ax.get_children() if isinstance(i, PathCollection)])
ax.legend(handles=strainLegends)
```
As you can see, the `markerType` and the `strain` data are correlated with the data.
For the first run with `python3 my_code.py` in bash, it creates a correct picture: see the circle represents A, square represents B
[](https://i.stack.imgur.com/tAEN7.png)
A = circle, B = square. See the square around `(-3, -3.8)`, this data point is from dataset B.
While if I run the code again within the same terminal `python3 my_code.py`
[](https://i.stack.imgur.com/TudM1.png)
Note A and B are completely messed up, un-correlated.
Now as the legend: A = square, B = circle. Again see the data point `(-3, -3.8)` which comes from dataset B, now annotated as A.
If I run the code again, it might produce another result.
Here is the code I used to generate annotation:
```
dictColor = {ax: pd.Series(index=pca2Plot.index), }
HoverClick = interactionHoverClick(
dictColor, fig, ax)
fig.canvas.mpl_connect("motion_notify_event", HoverClick.hover)
fig.canvas.mpl_connect("button_press_event", HoverClick.click)
```
In class `HoverClick`, I have
```
def hover(self, event):
if event.inaxes != None:
ax = event.inaxes
annot = self.annotAxs[ax]
# class matplotlib.collections.PathCollection; here it refers to the scatter plotting event (correct?)
drawingNum = sum(isinstance(i, PathCollection)
for i in ax.get_children())
# print([i for i in ax.get_children() if isinstance(i, PathCollection)])
plotSeq = 0
jump = []
indInd = []
indIndInstances = []
for i in range(drawingNum):
sc = ax.get_children()[i]
cont, ind = sc.contains(event)
jump.append(len(sc.get_facecolor()))
indIndInstances.append(ind['ind'])
if cont:
plotSeq = i
indInd.extend(ind['ind'])
# here plotSeq is the index of the last PathCollection instance that the program finds the mouse hovering over a datapoint of.
sc = ax.get_children()[plotSeq]
cont, ind = sc.contains(event)
if cont:
try:
exist = (indInd[0] in self.hovered)
except:
exist = False
if not exist:
hovered = indInd[0]
pos = sc.get_offsets()[indInd[0]]
textList = []
for num in range(plotSeq + 1):
singleJump = sum(jump[:num])
textList.extend([self.colorDict[ax].index[i + singleJump]
for i in indIndInstances[num]])
text = '\n'.join(textList)
annot.xy = pos
annot.set_text(text)
annot.set_visible(True)
self.fig.canvas.draw_idle()
else:
if annot.get_visible():
annot.set_visible(False)
self.fig.canvas.draw_idle()
# hover
```
Note that I commented out the code that prints each instance. I tested this because I thought the order of the instances might have been changed by some other part of the code. But the results showed that in both the correct and the wrong cases, the order was not changed.
Does anyone know what happened?
Has anyone experienced this before?
If I need to clean up memory at the end of the code, what should I do? | 2018/12/11 | [
"https://Stackoverflow.com/questions/53719606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6823079/"
] | Since your code is incomplete it is difficult to say for sure, but it seems that the order of markers is being messed up by the `cycle` iterator. Why don't you just try:
```
markerTypes = ['o', 's', '^']
strainLegends = []
for strain, markerType in zip(strains, markerTypes):
strainSamples = [sample for sample in samples if sample.split('_')[0] == strain]
xData = pca2Plot.loc[strainSamples, 'PC1']
yData = pca2Plot.loc[strainSamples, 'PC2']
ax.scatter(xData, yData, c=drawColors[strainSamples], s=40, marker=markerType, zorder=3)
strainLegends.append(Line2D([0], [0], marker=markerType, color='k',
markersize=10,
linewidth=0, label=strain))
ax.legend(handles=strainLegends)
```
This of course assumes that `strains` and `markerTypes` are of the same length and the markers are in the same position in the list as the strain value you want to assign them. | I found this issue caused by a de-replication process I made in `strains`.
```
# wrong code:
strains = list(set([idx.split('_')[0] for idx in pca2Plot.index]))
# correct code:
strains = list(OrderedDict.fromkeys([idx.split('_')[0] for idx in pca2Plot.index]))
```
Thus the question I asked was not a valid question. Thanks and sorry to everyone who looked into this. |
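The difference between the two de-duplication approaches can be seen in isolation: `set()` makes no ordering promise across runs (hash randomization), while `OrderedDict.fromkeys` keeps first-seen order deterministically. A small sketch with made-up index labels:

```python
from collections import OrderedDict

index = ['B_1', 'A_1', 'B_2', 'A_2']            # hypothetical sample names
prefixes = [idx.split('_')[0] for idx in index]

# list(set(prefixes)) may come out in any order from run to run;
# OrderedDict.fromkeys always preserves first-seen order.
stable = list(OrderedDict.fromkeys(prefixes))
print(stable)  # ['B', 'A']
```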
56,937,573 | How do they run these python commands in python console within their django project. Here is [example](https://docs.djangoproject.com/en/2.2/intro/overview/#enjoy-the-free-api).
I'm using Windows 10, PyCharm and python 3.7. I know how to run the project. But when I run the project, - console opens, which gives regular input/output for the project running.
When I open python console - I can run commands, so that they execute immidiately, but how do I run python console, so that I can type some commands and they would execute immediately, but that would happen within some project?
Example from [here](https://docs.djangoproject.com/en/2.2/intro/overview/#enjoy-the-free-api):
```
# Import the models we created from our "news" app
>>> from news.models import Article, Reporter
# No reporters are in the system yet.
>>> Reporter.objects.all()
<QuerySet []>
# Create a new Reporter.
>>> r = Reporter(full_name='John Smith')
# Save the object into the database. You have to call save() explicitly.
>>> r.save()
# Now it has an ID.
>>> r.id
1
``` | 2019/07/08 | [
"https://Stackoverflow.com/questions/56937573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7402089/"
] | When you run the project you're using a management command: `python manage.py runserver`. To enter a console that has access to all your Django apps, the ORM, etc., use another management command: `python manage.py shell`. That will allow you to import models as shown in your example.
As an additional tip, consider installing the [Django extensions](https://github.com/django-extensions/django-extensions) package, which includes a management command `shell_plus`. It's helpful, especially (but not only) in development, as it imports all your models, along with some other handy tools. | Django has a [Shell](https://docs.djangoproject.com/en/2.2/ref/django-admin/#shell) management command that allows you to open a Python shell with all the Django stuff bootstrapped and ready to be executed.
So by using `./manage.py shell` you will get an interactive python shell where you can write code. |
59,530,439 | I am trying to [`save`](https://code.kx.com/q/ref/save/) a [matrix](https://code.kx.com/q4m3/3_Lists/#3112-formal-definition-of-matrices) to file in binary format in KDB as per below:
```
matrix: (til 10)*/:til 10;
save matrix;
```
However, I get the error `'type`.
I guess `save` only works with tables? In which case does anyone know of a workaround?
Finally, I would like to read the matrix from the binary file into Python with [NumPy](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html), which I presume is just:
```
import numpy as np
matrix = np.fromfile('C:/q/w32/matrix', dtype='f')
```
Is that right?
*Note: I'm aware of [KDB-Python libraries](http://www.timestored.com/kdb-guides/python-api), but have been unable to install them thus far.* | 2019/12/30 | [
"https://Stackoverflow.com/questions/59530439",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1681681/"
] | `save` does work, you just have to reference it by name.
```
save`matrix
```
You can also save using
```
`:matrix set matrix;
`:matrix 1: matrix;
```
But I don't think you'll be able to read this into python directly using numpy as it is stored in kdb format. It could be read into python using one of the python-kdb interfaces (e.g. PyQ) or by storing it in a common format such as csv. | Another option is to save in KDB+ IPC format and then read it into Python with [qPython](https://github.com/exxeleron/qPython) as a Pandas DataFrame.
On the KDB+ side you can save it with
```
matrix:(til 10)*/:til 10;
`:matrix.ipc 1: -8!matrix;
```
On the Python side you do
```
from pandas import DataFrame
from qpython.qreader import QReader
with open('matrix.ipc',"rb") as f:
matrix = DataFrame(QReader(f).read().data)
print(matrix)
``` |
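Note that `np.fromfile` only round-trips raw, headerless bytes; files written with kdb's `set`/`1:` carry kdb's own format, so either export raw floats from q or use an IPC reader like qPython as shown above. A sketch of the raw-bytes route, done entirely on the Python side with the same values as `(til 10)*/:til 10`:

```python
import os
import tempfile

import numpy as np

matrix = np.outer(np.arange(10.0), np.arange(10.0))   # values of (til 10)*/:til 10
path = os.path.join(tempfile.mkdtemp(), 'matrix.raw')
matrix.tofile(path)                                   # raw float64 bytes, no header
back = np.fromfile(path, dtype=np.float64).reshape(10, 10)
print(back[3][4])  # 12.0
```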
3,526,748 | Sometimes, when fetching data from the database either through the python shell or through a python script, the python process dies, and one single word is printed to the terminal: `Killed`
That's literally all it says. It only happens with certain scripts, but it always happens for those scripts. It consistently happens with this one single query that takes a while to run, and also with a south migration that adds a bunch of rows one-by-one to the database.
My initial hunch was that a single transaction was taking too long, so I turned on autocommit for Postgres. Didn't solve the problem.
I checked the Postgres logs, and this is the only thing in there:
`2010-08-19 22:06:34 UTC LOG: could not receive data from client: Connection reset by peer`
`2010-08-19 22:06:34 UTC LOG: unexpected EOF on client connection`
I've tried googling, but as you might expect, a one-word error message is tough to google for.
I'm using Django 1.2 with Postgres 8.4 on a single Ubuntu 10.4 rackspace cloud VPS, stock config for everything. | 2010/08/19 | [
"https://Stackoverflow.com/questions/3526748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/836/"
] | The only thing I can think of that will automatically kill a process on Linux is the OOM killer. What's in the system logs? | If psycopg is being used, the issue is probably that the db connection isn't being closed.
As per the psycopg [docs](http://initd.org/psycopg/docs/usage.html) example:
```
# Connect to an existing database
>>> conn = psycopg2.connect("dbname=test user=postgres")
# Open a cursor to perform database operations
>>> cur = conn.cursor()
# Close communication with the database
>>> cur.close()
>>> conn.close()
```
Note that if you do delete the connection (using `dbcon.close()` or by deleting the connection object) you probably need to issue a commit or rollback, depending on what transaction type your connection is working under.
See [the close connection docs](http://initd.org/psycopg/docs/connection.html#connection.close) for more details. |
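psycopg2 follows the standard DB-API pattern the answer quotes; a runnable illustration of the same open/commit/close discipline using the stdlib's sqlite3 driver (so it runs without a Postgres server):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
try:
    cur = conn.cursor()
    cur.execute('CREATE TABLE t (x INTEGER)')
    cur.execute('INSERT INTO t VALUES (1)')
    conn.commit()               # commit (or rollback) before closing
    cur.execute('SELECT x FROM t')
    row = cur.fetchone()
    cur.close()
finally:
    conn.close()                # always release the connection
print(row[0])  # 1
```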
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and is all OOP, so would learning Python be a wise choice for learning OOP?
The thing is I lean more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and I'm not sure how it works. Is there something like "xampp" for python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | As long as you stay within their quota Google Apps Engine provides free hosting for Python.
Django is a great framework when you want to do webdevelopment with Python. Django also has great documention with <http://www.djangobook.com/> and the official Django website. | You could learn using books, but nothing beats practical hands-on approach - so make sure you have Python installed in a computer to help you learn. If you decide to buy a Python book, I strongly suggest you **DO NOT** buy a copy of Vernon Ceder's [Python Book](http://valashiya.wordpress.com/2010/04/22/the-quick-python-book/), it has very bad reviews. I bought a copy and was also disappointed.
If you'd like to join a mailing list, we have a good community at [Python Tutor](http://mail.python.org/mailman/listinfo/tutor). Sign up and post your questions there as well.
Good luck |
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and is all OOP, so would learning Python be a wise choice for learning OOP?
The thing is I lean more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and I'm not sure how it works. Is there something like "xampp" for python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | Work through the examples on [www.pythonchallenge.com](http://www.pythonchallenge.com/). Refer to the [language documentation](http://www.python.org/doc/) when you get stuck. | I recently learnt Python and had very little programming experience before. I found that doing a little bit of Python first then diving into Django worked for me. USing Django, looking through its reference material and Googling individual problems when I needed the help was really good.
Django has a built-in development server for you to use, a bit like xampp. However, to make things like installing Django, installing Python, and installing plugins a lot easier, use a unix-based OS. I am developing on Mac OS and I have had no problems. Most Linux distributions will be the same. I wouldn't want to try Django development on Windows; there are just too many hacks you need to do to get it working, and it is more difficult when you then publish the site (on a unix server).
Learn some Python; there are some good books suggested here, but don't get too deeply stuck into it if your focus will be Django. Go and do the official Django tutorial and then Google around for one or two more.
I use a book called '[The Definitive Guide to Django](https://rads.stackoverflow.com/amzn/click/com/143021936X)'. It is great for learning Django in the first place, but after the first few chapters, I stopped following it and started my own projects instead. Now it is a really good reference book to have.
It takes a while, but it's worth it. I started working at a company as a Django developer recently and it is great.
Good Luck! |
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and fully object-oriented, so would learning Python be a wise choice for learning OOP?
The thing is I am more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and am not sure how it works. Is there something like a "xampp" for Python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | As long as you stay within their quota Google Apps Engine provides free hosting for Python.
Django is a great framework when you want to do web development with Python. Django also has great documentation with <http://www.djangobook.com/> and the official Django website. | 1. If it is your basics in OOP that you wish to strengthen, Java is a good option (provided you know C++ or any other non-web-based language which supports OOP). However, if you are looking towards web development, Python should be your best option.
2. Yes, Python is a good option
3. Yes, Django is a very good web application framework(and they have awesome documentation and tutorials put up at their site)
4. To learn Python I definitely recommend reading "The Python Cookbook" cover-to-cover. It's fun, and covers some very important concepts. However, there really is no substitute for the standard Python documentation. It's well written, but it might take a while to get through a major portion of it. Using it as just reference material is also a fine idea.
5. Well, I have seen domains which allow Django to be hosted; also, you should try out GAE (Google App Engine) once you are comfortable with Django. It's a great place to host your apps. |
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and fully object-oriented, so would learning Python be a wise choice for learning OOP?
The thing is I am more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and am not sure how it works. Is there something like a "xampp" for Python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | I learned Python reading the book [Learning Python](https://rads.stackoverflow.com/amzn/click/com/0596158068). I read almost the whole thing on a plane trip, and when I got home I was able to start building applications immediately. There are newer versions out since I read it (and it's longer), but I found it very easy to follow.
As mentioned by others, Django is definitely the place to start for Web development. | 1. If it is your basics in OOPS that you wish to strengthen, Java is a good option(provided you know c++ or any other non-web-based language which supports OOPS). However, if you are looking towards web-development, Python should be your best option.
2. Yes, Python is a good option
3. Yes, Django is a very good web application framework(and they have awesome documentation and tutorials put up at their site)
4. To learn Python I definitely recommend reading "The Python Cookbook" cover-to-cover. Its fun, and covers some very important concepts. However, there really is no substitute for the standard python documentation. Its well written, but it might take a while through a major portion of it. Using it as just reference material is also a fine idea.
5. Well I have seen domains which allow Django to be hosted; also you should try out the GAE(google app engine) once you are comfortable with django. Its a great place to host your apps. |
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and fully object-oriented, so would learning Python be a wise choice for learning OOP?
The thing is I am more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and am not sure how it works. Is there something like a "xampp" for Python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | Here are some answers to your questions:
Python is an excellent language for beginners looking to learn OO design/programming.
As far as books and websites go, the best Python book I've read is available free online:
Mark Pilgrim's [Dive into Python](http://www.diveintopython.net).
For web programming there are many, many options. You mention Django, which is the most popular, although I like TurboGears, CherryPy and web.py. All of these have their own webserver built in (based on Paste or CherryPy).
For hosting, it's usually based on fastcgi or Apache's mod\_python.
I've heard really good reports of WebFaction for Python-based hosting.
Hope this helps, but if you are learning PHP, why not go for Apress's PHP Objects, Patterns, and Practice? That's a good book. | 1. If it is your basics in OOP that you wish to strengthen, Java is a good option (provided you know C++ or any other non-web-based language which supports OOP). However, if you are looking towards web development, Python should be your best option.
2. Yes, Python is a good option
3. Yes, Django is a very good web application framework(and they have awesome documentation and tutorials put up at their site)
4. To learn Python I definitely recommend reading "The Python Cookbook" cover-to-cover. It's fun, and covers some very important concepts. However, there really is no substitute for the standard Python documentation. It's well written, but it might take a while to get through a major portion of it. Using it as just reference material is also a fine idea.
5. Well, I have seen domains which allow Django to be hosted; also, you should try out GAE (Google App Engine) once you are comfortable with Django. It's a great place to host your apps. |
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and fully object-oriented, so would learning Python be a wise choice for learning OOP?
The thing is I am more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and am not sure how it works. Is there something like a "xampp" for Python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | If you want to learn about Object Oriented Programming in general, you may want to look at the answers to [this question](https://stackoverflow.com/questions/574001/what-books-do-you-suggest-for-understanding-object-oriented-programming-design-de), although many of the books are higher level (and some are aimed at Java/C#-like languages instead of Python-like languages). | Get [ipython](http://ipython.scipy.org/moin/). Use it as your shell. This means move, copy, view, change, and edit files from ipython. Day-to-day shell stuff anywhere has enough little problems that one ordinarily solves by piping, but they are just as easily solvable by Python. The real bonus is that your eye for syntax and simple solutions will develop quickly.
Need to find files? Use [os.walk](http://docs.python.org/library/os.html).
Running grep? Try to '[open](http://docs.python.org/library/functions.html#open)' the file instead, and try some [regex](http://docs.python.org/library/re.html?highlight=regular%20expressions) while you are there. Those uses of the language will serve you in any type of Python programming.
(Good news: PHP and Python use the same underlying regex lib, PCRE, so although there are some additions, it'll be familiar to you.)
The nice thing about having this in the language, which is not really the case in PHP or Perl, is that you can just mess around with functions, not full programs.
Why ipython and not the standard REPL or bpython? Easier to use as a shell out of the box. That's all. |
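To make the `os.walk`-plus-`re` suggestion above concrete, here is a minimal sketch (the helper name and the "TODO" pattern are just illustrative, not part of the original answer) of a grep-like function you could paste into an ipython session:

```python
import os
import re

def grep_tree(root, pattern):
    """Walk a directory tree and yield (path, line) pairs whose line matches a regex."""
    regex = re.compile(pattern)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path) as handle:  # assumes text files; binary files would need handling
                for line in handle:
                    if regex.search(line):
                        yield path, line.rstrip()

# Example usage from an ipython prompt:
# for path, line in grep_tree(".", r"TODO"):
#     print(path, line, sep=": ")
```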
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and fully object-oriented, so would learning Python be a wise choice for learning OOP?
The thing is I am more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and am not sure how it works. Is there something like a "xampp" for Python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | I learned Python reading the book [Learning Python](https://rads.stackoverflow.com/amzn/click/com/0596158068). I read almost the whole thing on a plane trip, and when I got home I was able to start building applications immediately. There are newer versions out since I read it (and it's longer), but I found it very easy to follow.
As mentioned by others, Django is definitely the place to start for Web development. | You could learn using books, but nothing beats practical hands-on approach - so make sure you have Python installed in a computer to help you learn. If you decide to buy a Python book, I strongly suggest you **DO NOT** buy a copy of Vernon Ceder's [Python Book](http://valashiya.wordpress.com/2010/04/22/the-quick-python-book/), it has very bad reviews. I bought a copy and was also disappointed.
If you'd like to join a mailing list, we have a good community at [Python Tutor](http://mail.python.org/mailman/listinfo/tutor). Sign up and post your questions there as well.
Good luck |
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and fully object-oriented, so would learning Python be a wise choice for learning OOP?
The thing is I am more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and am not sure how it works. Is there something like a "xampp" for Python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | As long as you stay within their quota Google Apps Engine provides free hosting for Python.
Django is a great framework when you want to do web development with Python. Django also has great documentation with <http://www.djangobook.com/> and the official Django website. | Get [ipython](http://ipython.scipy.org/moin/). Use it as your shell. This means move, copy, view, change, and edit files from ipython. Day-to-day shell stuff anywhere has enough little problems that one ordinarily solves by piping, but they are just as easily solvable by Python. The real bonus is that your eye for syntax and simple solutions will develop quickly.
Need to find files? Use [os.walk](http://docs.python.org/library/os.html).
Running grep? Try to '[open](http://docs.python.org/library/functions.html#open)' the file instead, and try some [regex](http://docs.python.org/library/re.html?highlight=regular%20expressions) while you are there. Those uses of the language will serve you in any type of Python programming.
(Good news: PHP and Python use the same underlying regex lib, PCRE, so although there are some additions, it'll be familiar to you.)
The nice thing about having this in the language, which is not really the case in PHP or Perl, is that you can just mess around with functions, not full programs.
Why ipython and not the standard REPL or bpython? Easier to use as a shell out of the box. That's all. |
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and fully object-oriented, so would learning Python be a wise choice for learning OOP?
The thing is I am more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and am not sure how it works. Is there something like a "xampp" for Python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | I would pick up a good O'Reilly book on Python and build a strong understanding of the fundamentals before delving into more web-specific ventures. Once you've got the essentials, I'd branch out to things like Django.
Here's a good starting page:
[O'Reilly - Python](http://oreilly.com/pub/topic/python)
And here's a good tutorial if you'd rather do your research on the web:
[Python Tutorial](http://docs.python.org/tutorial/) | 1. If it is your basics in OOP that you wish to strengthen, Java is a good option (provided you know C++ or any other non-web-based language which supports OOP). However, if you are looking towards web development, Python should be your best option.
2. Yes, Python is a good option
3. Yes, Django is a very good web application framework(and they have awesome documentation and tutorials put up at their site)
4. To learn Python I definitely recommend reading "The Python Cookbook" cover-to-cover. It's fun, and covers some very important concepts. However, there really is no substitute for the standard Python documentation. It's well written, but it might take a while to get through a major portion of it. Using it as just reference material is also a fine idea.
5. Well, I have seen domains which allow Django to be hosted; also, you should try out GAE (Google App Engine) once you are comfortable with Django. It's a great place to host your apps. |
2,876,337 | I am currently learning PHP and want to learn about OOP.
1. I know Python is well-organized and fully object-oriented, so would learning Python be a wise choice for learning OOP?
The thing is I am more towards web development than just general programming, and I know Python is just a general-purpose language, but there is Django.
2. So how should I go about learning Python if I am leaning towards web development?
Are there any good books/websites that help me learn Python for web development?
3. Are there any free webhosting companies that allow Python? I have never used Python before, only PHP, and am not sure how it works. Is there something like a "xampp" for Python? | 2010/05/20 | [
"https://Stackoverflow.com/questions/2876337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/345690/"
] | 1. If it is your basics in OOP that you wish to strengthen, Java is a good option (provided you know C++ or any other non-web-based language which supports OOP). However, if you are looking towards web-development, Python should be your best option.
2. Yes, Python is a good option
3. Yes, Django is a very good web application framework(and they have awesome documentation and tutorials put up at their site)
4. To learn Python I definitely recommend reading "The Python Cookbook" cover-to-cover. It's fun, and covers some very important concepts. However, there really is no substitute for the standard Python documentation. It's well written, but it might take a while to get through a major portion of it. Using it as just reference material is also a fine idea.
5. Well, I have seen domains which allow Django to be hosted; also, you should try out GAE (Google App Engine) once you are comfortable with Django. It's a great place to host your apps. | Get [ipython](http://ipython.scipy.org/moin/). Use it as your shell. This means move, copy, view, change, and edit files from ipython. Day-to-day shell stuff anywhere has enough little problems that one ordinarily solves by piping, but they are just as easily solvable by Python. The real bonus is that your eye for syntax and simple solutions will develop quickly.
Need to find files? Use [os.walk](http://docs.python.org/library/os.html).
Running grep? Try to '[open](http://docs.python.org/library/functions.html#open)' the file instead, and try some [regex](http://docs.python.org/library/re.html?highlight=regular%20expressions) while you are there. Those uses of the language will serve you in any type of Python programming.
(Good news: PHP and Python use the same underlying regex lib, PCRE, so although there are some additions, it'll be familiar to you.)
The nice thing about having this in the language, which is not really the case in PHP or Perl, is that you can just mess around with functions, not full programs.
Why ipython and not the standard REPL or bpython? Easier to use as a shell out of the box. That's all. |
55,031,604 | So I haven't been doing Python for a while and haven't needed to deal with this before, so if I'm making some stupid mistake, don't go crazy.
I have a list that is pulled from an SQLite database with `.fetchall()` on the end and it returns a list of one tuple and inside that tuple are all the results:
```
[('Bob', 'Science Homework Test', 'Science homework is a test about Crude Oil development', 'Science-Chemistry', '2019-03-06', '2019-02-27', None, 0)]
```
I want to get inside this tuple to get the items but if I loop the list it doesn't seem to do anything.
I want to do this to `pop()` an item in the list, which is `Science-Chemistry`, for an HTML select option.
I have had a look before, but no one seems to have this same problem with only one tuple inside the list. | 2019/03/06 | [
"https://Stackoverflow.com/questions/55031604",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6635590/"
] | If there is always going to be exactly one tuple in the returned list, you can unpack it into meaningfully named variables, the number of which should match the number of output columns in your query:
```
(name, test, description, subject, updated, created, flags, score), = cursor.fetchall()
```
Note the comma after the parentheses that makes it unpack as the first tuple of a sequence. | I suggest going from the outermost element to the innermost one. At the beginning you have a list with one tuple.
```
>>> result = [('Bob', 'Science Homework Test', 'Science homework is a test about Crude Oil development', 'Science-Chemistry', '2019-03-06', '2019-02-27', None, 0)]
```
To get the tuple, just get the first item of the list:
```
>>> tuple_ = result[0]
>>> tuple_
('Bob', 'Science Homework Test', 'Science homework is a test about Crude Oil development', 'Science-Chemistry', '2019-03-06', '2019-02-27', None, 0)
```
Then you can loop over it or access it like an array, to get the items:
```
for item in tuple_:
# do stuff with the item
print(item)
```
or
```
item = tuple_[0]
``` |
55,031,604 | So I haven't been doing Python for a while and haven't needed to deal with this before, so if I'm making some stupid mistake, don't go crazy.
I have a list that is pulled from an SQLite database with `.fetchall()` on the end and it returns a list of one tuple and inside that tuple are all the results:
```
[('Bob', 'Science Homework Test', 'Science homework is a test about Crude Oil development', 'Science-Chemistry', '2019-03-06', '2019-02-27', None, 0)]
```
I want to get inside this tuple to get the items but if I loop the list it doesn't seem to do anything.
I want to do this to `pop()` an item in the list, which is `Science-Chemistry`, for an HTML select option.
I have had a look before, but no one seems to have this same problem with only one tuple inside the list. | 2019/03/06 | [
"https://Stackoverflow.com/questions/55031604",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6635590/"
] | If there is always going to be exactly one tuple in the returned list, you can unpack it into meaningfully named variables, the number of which should match the number of output columns in your query:
```
(name, test, description, subject, updated, created, flags, score), = cursor.fetchall()
```
Note the comma after the parentheses that makes it unpack as the first tuple of a sequence. | You can think of this as nested index of list and tuple, i.e first index will give you an element of the list which is a tuple, and second index will give you an element of that tuple.
Let's say the above list is assigned to variable a.
`a = [('Bob', 'Science Homework Test', 'Science homework is a test about Crude Oil development', 'Science-Chemistry', '2019-03-06', '2019-02-27', None, 0)]`
```
a[0]     # 1st element of the list (a tuple)
a[0][3]  # 4th element of that tuple
``` |
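For completeness, here is a self-contained sketch (the table and column names are made up for illustration, not taken from the question) showing why `fetchall()` returns a list containing one tuple, and how the indexing above applies:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE homework (student TEXT, subject TEXT)")
conn.execute("INSERT INTO homework VALUES ('Bob', 'Science-Chemistry')")

rows = conn.execute("SELECT student, subject FROM homework").fetchall()
# fetchall() returns a list of row tuples even when there is only one row:
# [('Bob', 'Science-Chemistry')]
row = rows[0]        # the single tuple
subject = row[1]     # 'Science-Chemistry'
conn.close()
```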
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | If you want to stay in `decimal` numbers, safest is to convert everything:
```
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d - decimal.Decimal('1')
Decimal('22.456')
>>> d - decimal.Decimal('1.0')
Decimal('22.456')
```
In Python 2.7, there's an implicit conversion for integers, but not floats.
```
>>> d - 1
Decimal('22.456')
>>> d - 1.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'Decimal' and 'float'
``` | Use the built-in float function:
```
>>> d = float('23.456')
>>> d
23.456
>>> d - 1
22.456
```
See the docs here: <http://docs.python.org/library/functions.html#float> |
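A small sketch tying the two answers together (the `1.1` operand is just an illustrative float): if you do want `Decimal`, convert floats through `str()` before mixing them into the arithmetic, so you keep the decimal digits you see rather than the float's exact binary value:

```python
from decimal import Decimal

d = Decimal('23.456')

# Converting the float through str() keeps the decimal value you see:
exact = d - Decimal(str(1.1))   # Decimal('22.356')

# Passing the float directly (allowed since Python 2.7/3.2) uses its exact
# binary expansion instead, dragging in a long tail of unwanted digits:
binary = d - Decimal(1.1)       # not equal to Decimal('22.356')
```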
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | If you want to stay in `decimal` numbers, safest is to convert everything:
```
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d - decimal.Decimal('1')
Decimal('22.456')
>>> d - decimal.Decimal('1.0')
Decimal('22.456')
```
In Python 2.7, there's an implicit conversion for integers, but not floats.
```
>>> d - 1
Decimal('22.456')
>>> d - 1.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'Decimal' and 'float'
``` | If using float, when the number gets too large -- **x = 29345678.91 for example** -- you get results that you might not expect. In this case, `float(x)` becomes **2.934567891E7**, which seems undesirable, especially if working with financial numbers. |
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | Is the `Decimal` required for your computations? The [Decimal fixed point and floating point arithmetic](http://docs.python.org/library/decimal.html) doc outlines their differences. If not, you could just do
```
d = float('23.456')
d
23.456
d - 1
22.456
```
Oddly enough re `Decimal`, I get this interactively
```
d = decimal.Decimal('23.456')
d
Decimal('23.456')
d - 1
Decimal('22.456')
```
But when I print it, I get the values
```
print d
23.456
print d-1
22.456
``` | Use the built-in float function:
```
>>> d = float('23.456')
>>> d
23.456
>>> d - 1
22.456
```
See the docs here: <http://docs.python.org/library/functions.html#float> |
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | Is the `Decimal` required for your computations? The [Decimal fixed point and floating point arithmetic](http://docs.python.org/library/decimal.html) doc outlines their differences. If not, you could just do
```
d = float('23.456')
d
23.456
d - 1
22.456
```
Oddly enough re `Decimal`, I get this interactively
```
d = decimal.Decimal('23.456')
d
Decimal('23.456')
d - 1
Decimal('22.456')
```
But when I print it, I get the values
```
print d
23.456
print d-1
22.456
``` | If using float, when the number gets too large -- **x = 29345678.91 for example** -- you get results that you might not expect. In this case, `float(x)` becomes **2.934567891E7**, which seems undesirable, especially if working with financial numbers. |
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | If you want to stay in `decimal` numbers, safest is to convert everything:
```
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d - decimal.Decimal('1')
Decimal('22.456')
>>> d - decimal.Decimal('1.0')
Decimal('22.456')
```
In Python 2.7, there's an implicit conversion for integers, but not floats.
```
>>> d - 1
Decimal('22.456')
>>> d - 1.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'Decimal' and 'float'
``` | My Python seems to do it differently:
```
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456')
>>> d-1
Decimal('22.456')
```
What version/OS are you using? |
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | Is the `Decimal` required for your computations? The [Decimal fixed point and floating point arithmetic](http://docs.python.org/library/decimal.html) doc outlines their differences. If not, you could just do
```
d = float('23.456')
d
23.456
d - 1
22.456
```
Oddly enough re `Decimal`, I get this interactively
```
d = decimal.Decimal('23.456')
d
Decimal('23.456')
d - 1
Decimal('22.456')
```
But when I print it, I get the values
```
print d
23.456
print d-1
22.456
``` | My Python seems to do it differently:
```
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456')
>>> d-1
Decimal('22.456')
```
What version/OS are you using? |
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | If you want to stay in `decimal` numbers, safest is to convert everything:
```
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d - decimal.Decimal('1')
Decimal('22.456')
>>> d - decimal.Decimal('1.0')
Decimal('22.456')
```
In Python 2.7, there's an implicit conversion for integers, but not floats.
```
>>> d - 1
Decimal('22.456')
>>> d - 1.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'Decimal' and 'float'
``` | Are you specifically TRYING to use the Decimal arbitrary precision library or are you just struggling to convert a string to a Python float?
If you are TRYING to use Decimal:
```
>>> import decimal
>>> s1='23.456'
>>> s2='1.0'
>>> decimal.Decimal(s1) - decimal.Decimal(s2)
Decimal('22.456')
>>> s1='23.456'
>>> s2='1'
>>> decimal.Decimal(s1) - decimal.Decimal(s2)
Decimal('22.456')
```
Or, what I think is more likely, you are trying to just convert a string to a Python floating point value:
```
>>> s1='23.456'
>>> s2='1'
>>> float(s1)-float(s2)
22.456
>>> float(s1)-1
22.456
>>> float(s1)-1.0
22.456
``` |
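To consolidate the `Decimal` behavior discussed in the answers above, here is a short runnable sketch (Python 3 semantics assumed; under Python 3, mixing `Decimal` with a raw `float` raises `TypeError`, matching the 2.7 behavior shown):

```python
from decimal import Decimal

d = Decimal('23.456')

# Integers mix with Decimal implicitly.
print(d - 1)                   # 22.456

# Floats do not; convert through str() to keep the literal value.
print(d - Decimal(str(1.5)))   # 21.956

# Decimal.from_float captures the float's exact binary value instead,
# which is usually not what you want for user-facing arithmetic.
print(Decimal.from_float(0.1))
```

The `str()` round-trip is the usual idiom because `Decimal.from_float(0.1)` preserves the binary approximation of `0.1` rather than the literal `0.1`.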
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | If you want to stay in `decimal` numbers, safest is to convert everything:
```
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d - decimal.Decimal('1')
Decimal('22.456')
>>> d - decimal.Decimal('1.0')
Decimal('22.456')
```
In Python 2.7, there's an implicit conversion for integers, but not floats.
```
>>> d - 1
Decimal('22.456')
>>> d - 1.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'Decimal' and 'float'
``` | In Python, to convert a string to a float, just do:
```
num = "29.0"
print (float(num))
```
To convert a string to a `Decimal`:
```
from decimal import Decimal
num = "29.0"
print (Decimal(num))
``` |
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | Is the `Decimal` required for your computations? The [Decimal fixed point and floating point arithmetic](http://docs.python.org/library/decimal.html) doc outlines their differences. If not, you could just do
```
d = float('23.456')
d
23.456
d - 1
22.456
```
Oddly enough re `Decimal`, I get this interactively
```
d = decimal.Decimal('23.456')
d
Decimal('23.456')
d - 1
Decimal('22.456')
```
But when I print it, I get the values
```
print d
23.456
print d-1
22.456
``` | Are you specifically TRYING to use the Decimal arbitrary precision library or are you just struggling to convert a string to a Python float?
If you are TRYING to use Decimal:
```
>>> import decimal
>>> s1='23.456'
>>> s2='1.0'
>>> decimal.Decimal(s1) - decimal.Decimal(s2)
Decimal('22.456')
>>> s1='23.456'
>>> s2='1'
>>> decimal.Decimal(s1) - decimal.Decimal(s2)
Decimal('22.456')
```
Or, what I think is more likely, you are trying to just convert a string to a Python floating point value:
```
>>> s1='23.456'
>>> s2='1'
>>> float(s1)-float(s2)
22.456
>>> float(s1)-1
22.456
>>> float(s1)-1.0
22.456
``` |
10,971,468 | Similar posts such as the following do not answer my question.
[Convert a string to integer with decimal in Python](https://stackoverflow.com/questions/1094717/convert-a-string-to-integer-with-decimal-in-python)
Consider the following Python code.
```
>>> import decimal
>>> s = '23.456'
>>> d = decimal.Decimal(s)
>>> d
Decimal('23.456') # How do I represent this as simply 23.456?
>>> d - 1
22 # How do I obtain the output to be 22.456?
```
How do I convert a string to a decimal number, so I am able to perform arithmetic functions on it and obtain an output with the correct precision? | 2012/06/10 | [
"https://Stackoverflow.com/questions/10971468",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322885/"
] | Is the `Decimal` required for your computations? The [Decimal fixed point and floating point arithmetic](http://docs.python.org/library/decimal.html) doc outlines their differences. If not, you could just do
```
d = float('23.456')
d
23.456
d - 1
22.456
```
Oddly enough re `Decimal`, I get this interactively
```
d = decimal.Decimal('23.456')
d
Decimal('23.456')
d - 1
Decimal('22.456')
```
But when I print it, I get the values
```
print d
23.456
print d-1
22.456
``` | In Python, to convert a string to a float, just do:
```
num = "29.0"
print (float(num))
```
To convert a string to a `Decimal`:
```
from decimal import Decimal
num = "29.0"
print (Decimal(num))
``` |
49,519,789 | I want to have a black box in python where
* The input is a list A.
* There is a random number C for the black box, selected at random the first time the black box is called, which stays the same on subsequent calls.
* Based on list A and number C, the output is a list B.
I was thinking of defining this black box as a function but the issue is that a function cannot keep the selected number C for next calls. Note that the input and output of the black box are as described above and we cannot have C also as output and use it for next calls. Any suggestion? | 2018/03/27 | [
"https://Stackoverflow.com/questions/49519789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9559925/"
] | Make it a Class so C will persist.
```
class BlackBox():
def __init__(self):
        self.C = random.randint(1, 100)
etc...
```
*As a side note, using some pretty cool Python functionality...*
You can make objects of this class callable by implementing `__call__()` for your new class.
```
#inside the BlackBox class
def __call__(self, A):
B = []
#do something to B with A and self.C
return B
```
You can then use this in your main code.
```
bb = BlackBox()
A = [1, 2, 3]
B = bb(A)
``` | >
> the issue is that a function cannot keep the selected number C for next calls.
>
>
>
This may be true in other languages, but not so in Python. Functions in Python are objects like any other, so you can store things on them. Here's a minimal example of doing so.
```
import random
def this_function_stores_a_value():
me = this_function_stores_a_value
if 'value' not in me.__dict__:
me.value = random.random()
return me.value
```
This doesn't directly solve your list problem, but it should point you in the right direction.
---
*Side note:* You can also store persistent data in optional arguments, like
```
def this_function_also_stores_a_value(value = random.random()):
...
```
I don't, however, recommend this approach because users can tamper with your values by passing an argument explicitly. |
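To make the side note above concrete, here is a small sketch (the function name is illustrative) showing that a default argument is evaluated once at definition time, so it persists across calls, and that any caller can override it explicitly:

```python
import random

def stores_a_value(value=random.random()):
    # 'value' was computed once, when the def statement ran.
    return value

first = stores_a_value()
second = stores_a_value()
assert first == second           # the cached default persists across calls

# ...but a caller can bypass it, which is why this pattern is fragile:
assert stores_a_value(value=0.5) == 0.5
```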
49,519,789 | I want to have a black box in python where
* The input is a list A.
* There is a random number C for the black box, selected at random the first time the black box is called, which stays the same on subsequent calls.
* Based on list A and number C, the output is a list B.
I was thinking of defining this black box as a function but the issue is that a function cannot keep the selected number C for next calls. Note that the input and output of the black box are as described above and we cannot have C also as output and use it for next calls. Any suggestion? | 2018/03/27 | [
"https://Stackoverflow.com/questions/49519789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9559925/"
] | Make it a Class so C will persist.
```
class BlackBox():
def __init__(self):
        self.C = random.randint(1, 100)
etc...
```
*As a side note, using some pretty cool Python functionality...*
You can make objects of this class callable by implementing `__call__()` for your new class.
```
#inside the BlackBox class
def __call__(self, A):
B = []
#do something to B with A and self.C
return B
```
You can then use this in your main code.
```
bb = BlackBox()
A = [1, 2, 3]
B = bb(A)
``` | Since you are asking in the comments.
This is probably not recommended way but it's easy and works so I'll add it here.
You can use global variable to achieve your goal.
```
import random
persistant_var = 0
def your_func():
global persistant_var
if persistant_var:
print('variable already set {}'.format(persistant_var))
else:
print('setting variable')
persistant_var = random.randint(1,10)
your_func()
your_func()
```
Output:
```
setting variable
variable already set 7
```
Hope this is clear.
[](https://i.stack.imgur.com/LkqtF.jpg) |
49,519,789 | I want to have a black box in python where
* The input is a list A.
* There is a random number C for the black box, selected at random the first time the black box is called, which stays the same on subsequent calls.
* Based on list A and number C, the output is a list B.
I was thinking of defining this black box as a function but the issue is that a function cannot keep the selected number C for next calls. Note that the input and output of the black box are as described above and we cannot have C also as output and use it for next calls. Any suggestion? | 2018/03/27 | [
"https://Stackoverflow.com/questions/49519789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9559925/"
] | Make it a Class so C will persist.
```
class BlackBox():
def __init__(self):
        self.C = random.randint(1, 100)
etc...
```
*As a side note, using some pretty cool Python functionality...*
You can make objects of this class callable by implementing `__call__()` for your new class.
```
#inside the BlackBox class
def __call__(self, A):
B = []
#do something to B with A and self.C
return B
```
You can then use this in your main code.
```
bb = BlackBox()
A = [1, 2, 3]
B = bb(A)
``` | There are many ways to store persistent data for a function. They all have their uses, but in general, the ones that come first are useful more often than the ones that come later. (To keep things shorter, I'm solving a slightly simpler problem than the one you asked about, but it should be obvious how to adapt it.)
Instance attribute
------------------
```
class BlackBox:
def __init__(self):
        self.C = random.randint(1, 100)
def check(self, guess):
return (guess - self.C) / abs(guess - self.C)
```
Now you can create one or more `BlackBox()` instances, and each one has its own random number.
Closure variable
----------------
```
def blackbox():
    C = random.random()
def check(guess):
return (guess - C) / abs(guess - C)
return check
```
Now, you can create one or more `check` functions, and each one has its own random number. (This is dual to the instance variable—that is, it has the same capabilities—but usually one or the other is more readable.)
Global variable
---------------
```
def makeblackbox():
global C
    C = random.randint(1, 100)
def check(guess):
return (guess - C) / abs(guess - C)
```
This way, there's only a single blackbox for the entire program. That's usually not as good a design, which is one of the reasons that "globals are bad". Plus, it's polluting the global namespace with a `C` variable that means nothing to anyone but the `check` function, which is another one of the reasons that "globals are bad".
Function attribute
------------------
```
def makeblackbox():
    check.C = random.randint(1, 100)
def check(guess):
return (guess - check.C) / abs(guess - check.C)
```
This is equivalent to a global in that you can only ever have one black box, but at least the variable is hidden away on the `check` function instead of polluting the global namespace.
Class attribute
---------------
```
class BlackBox:
    C = random.randint(1, 100)
@staticmethod
def check(guess):
return (guess - BlackBox.C) / abs(guess - BlackBox.C)
```
This is again equivalent to a global variable without polluting the global namespace. But it has a downside compared with the function attribute: creating a class that is never meant to have instances is often misleading.
Class attribute 2
-----------------
```
class BlackBox:
    C = random.randint(1, 100)
@classmethod
def check(cls, guess):
return (guess - cls.C) / abs(guess - cls.C)
```
This is different from the last three in that you can create new blackboxes by creating subclasses of `BlackBox`. But this is very rarely what you actually want to do. If you want multiple persistent values, you probably want instances. |
49,519,789 | I want to have a black box in python where
* The input is a list A.
* There is a random number C for the black box, selected at random the first time the black box is called, which stays the same on subsequent calls.
* Based on list A and number C, the output is a list B.
I was thinking of defining this black box as a function but the issue is that a function cannot keep the selected number C for next calls. Note that the input and output of the black box are as described above and we cannot have C also as output and use it for next calls. Any suggestion? | 2018/03/27 | [
"https://Stackoverflow.com/questions/49519789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9559925/"
] | >
> the issue is that a function cannot keep the selected number C for next calls.
>
>
>
This may be true in other languages, but not so in Python. Functions in Python are objects like any other, so you can store things on them. Here's a minimal example of doing so.
```
import random
def this_function_stores_a_value():
me = this_function_stores_a_value
if 'value' not in me.__dict__:
me.value = random.random()
return me.value
```
This doesn't directly solve your list problem, but it should point you in the right direction.
---
*Side note:* You can also store persistent data in optional arguments, like
```
def this_function_also_stores_a_value(value = random.random()):
...
```
I don't, however, recommend this approach because users can tamper with your values by passing an argument explicitly. | Since you are asking in the comments.
This is probably not the recommended way, but it's easy and it works, so I'll add it here.
You can use a global variable to achieve your goal.
```
import random
persistant_var = 0
def your_func():
global persistant_var
if persistant_var:
print('variable already set {}'.format(persistant_var))
else:
print('setting variable')
persistant_var = random.randint(1,10)
your_func()
your_func()
```
Output:
```
setting variable
variable already set 7
```
Hope this is clear.
[](https://i.stack.imgur.com/LkqtF.jpg) |
49,519,789 | I want to have a black box in python where
* The input is a list A.
* There is a random number C for the black box, selected at random the first time the black box is called, which stays the same on subsequent calls.
* Based on list A and number C, the output is a list B.
I was thinking of defining this black box as a function but the issue is that a function cannot keep the selected number C for next calls. Note that the input and output of the black box are as described above and we cannot have C also as output and use it for next calls. Any suggestion? | 2018/03/27 | [
"https://Stackoverflow.com/questions/49519789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9559925/"
] | There are many ways to store persistent data for a function. They all have their uses, but in general, the ones that come first are useful more often than the ones that come later. (To keep things shorter, I'm solving a slightly simpler problem than the one you asked about, but it should be obvious how to adapt it.)
Instance attribute
------------------
```
class BlackBox:
def __init__(self):
        self.C = random.randint(1, 100)
def check(self, guess):
return (guess - self.C) / abs(guess - self.C)
```
Now you can create one or more `BlackBox()` instances, and each one has its own random number.
Closure variable
----------------
```
def blackbox():
    C = random.random()
def check(guess):
return (guess - C) / abs(guess - C)
return check
```
Now, you can create one or more `check` functions, and each one has its own random number. (This is dual to the instance variable—that is, it has the same capabilities—but usually one or the other is more readable.)
Global variable
---------------
```
def makeblackbox():
global C
    C = random.randint(1, 100)
def check(guess):
return (guess - C) / abs(guess - C)
```
This way, there's only a single blackbox for the entire program. That's usually not as good a design, which is one of the reasons that "globals are bad". Plus, it's polluting the global namespace with a `C` variable that means nothing to anyone but the `check` function, which is another one of the reasons that "globals are bad".
Function attribute
------------------
```
def makeblackbox():
    check.C = random.randint(1, 100)
def check(guess):
return (guess - check.C) / abs(guess - check.C)
```
This is equivalent to a global in that you can only ever have one black box, but at least the variable is hidden away on the `check` function instead of polluting the global namespace.
Class attribute
---------------
```
class BlackBox:
    C = random.randint(1, 100)
@staticmethod
def check(guess):
return (guess - BlackBox.C) / abs(guess - BlackBox.C)
```
This is again equivalent to a global variable without polluting the global namespace. But it has a downside compared with the function attribute: creating a class that is never meant to have instances is often misleading.
Class attribute 2
-----------------
```
class BlackBox:
    C = random.randint(1, 100)
@classmethod
def check(cls, guess):
return (guess - cls.C) / abs(guess - cls.C)
```
This is different from the last three in that you can create new blackboxes by creating subclasses of `BlackBox`. But this is very rarely what you actually want to do. If you want multiple persistent values, you probably want instances. | Since you are asking in the comments.
This is probably not the recommended way, but it's easy and it works, so I'll add it here.
You can use a global variable to achieve your goal.
```
import random
persistant_var = 0
def your_func():
global persistant_var
if persistant_var:
print('variable already set {}'.format(persistant_var))
else:
print('setting variable')
persistant_var = random.randint(1,10)
your_func()
your_func()
```
Output:
```
setting variable
variable already set 7
```
Hope this is clear.
[](https://i.stack.imgur.com/LkqtF.jpg) |
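Putting the instance-based approach recommended above to work on the original question, here is a minimal sketch; the A-to-B transform is an assumption (shown as adding C to every element), since the question leaves it unspecified. C is drawn once per black box and persists across calls:

```python
import random

class BlackBox:
    def __init__(self):
        # C is chosen once, when the box is created, and then persists.
        self.C = random.randint(1, 100)

    def __call__(self, A):
        # Illustrative transform: B depends on both A and C.
        return [a + self.C for a in A]

bb = BlackBox()
B1 = bb([1, 2, 3])
B2 = bb([1, 2, 3])
assert B1 == B2          # same C on every call to this box
```

A second `BlackBox()` instance would draw its own independent C, which is exactly the behavior the question asks for.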
66,030,433 | I am having trouble setting up a GStreamer pipeline to forward a video stream over UDP via OpenCV. I have a laptop, and an AGX Xavier connected to the same network. The idea is to forward the webcam video feed to AGX which will do some OpenCV optical flow estimation on the GPU (in Python), draw flow vectors on the original image and send it back to my laptop. Up until now, I can configure two pipelines. As a minimum example, I have made two bash scripts and a Python script that ideally would function as pass-through over OpenCV's VideoCapture and VideoWriter objects.
servevideo.bash:
```
#!/bin/bash
gst-launch-1.0 v4l2src device=[device-fd] \
! video/x-raw, width=800, height=600, framerate=24/1 \
! jpegenc ! rtpjpegpay ! rtpstreampay \
! udpsink host=[destination-ip] port=12345
```
receivevideo.bash:
```
#!/bin/bash
gst-launch-1.0 -e udpsrc port=12344 \
! application/x-rtp-stream,encoding-name=JPEG \
! rtpstreamdepay ! rtpjpegdepay ! jpegdec \
! autovideosink
```
If I run these two scripts on either the same computer or on two different computers on the network, it works fine. When I throw my Python script (listed below) in the mix, I start to experience issues. Ideally, I would run the bash scripts on my laptop with the intended setup in mind while running the Python script on my Jetson. I would then expect to see the webcam video feed at my laptop after taking a detour around the Jetson.
webcam\_passthrough.py:
#!/usr/bin/python3.6
```
import cv2
video_in = cv2.VideoCapture("udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink", cv2.CAP_GSTREAMER)
video_out = cv2.VideoWriter("appsrc ! videoconvert ! jpegenc ! rtpjpegpay ! rtpstreampay ! udpsink host=[destination-ip] port=12344", cv2.CAP_GSTREAMER, 0, 24, (800, 600), True)
while True:
ret, frame = video_in.read()
if not ret: break
video_out.write(frame)
cv2.imshow('Original', frame)
key = cv2.waitKey(1) & 0xff
if key == 27: break
cv2.destroyAllWindows()
video_out.release()
video_in.release()
```
With the following Python script, I can visualise the frames via `cv2.imshow` received from the pipeline set up by the `servevideo.bash` script. So I think my problem is connected to how I am setting up the VideoWriter `video_out` in OpenCV. I have verified my two bash scripts are working when I am relaying the webcam video feed between those two pipelines created, and I have verified that the `cv2.VideoCapture` receives the frames. I am no expert here, and my GStreamer knowledge is almost non-existent, so there might be several misunderstandings in my minimum example. It would be greatly appreciated if some of you could point out what I am missing here.
I will also happily provide more information if something is unclear or missing.
**EDIT:**
So it seems the intention of my minimum example was not clearly communicated.
The three scripts provided as a minimum example serve to relay my webcam video feed from my laptop to the Jetson AGX Xavier who then relays the video-feed back to the laptop. The `servevideo.bash` creates a GStreamer pipeline on the laptop that uses v4l2 to grab frames from the camera and relay it on to a UDP socket. The `webcam_passthrough.py` runs on the Jetson where it "connects" to the UDP socket created by the pipeline running on the laptop. The Python script serves a passthrough which ideally will open a new UDP socket on another port and relay the frames back to the laptop. The `receivevideo.bash` creates yet another pipeline on the laptop for receiving the frames that were passed through the Python script at the Jetson. The second pipeline on the laptop is only utilised for visualisation purpose. Ideally, this minimum example shows the "raw" video feed from the camera connected to the laptop.
The two bash scripts are working in isolation, both running locally on the laptop and running `receivevideo.bash` remotely on another computer.
The `cv2.VideoCapture` configuration in the Python script also seems to work as I can visualise the frames (with `cv2.imshow`) received over the UDP socket provided by the `servevideo.bash` script. This is working locally and remotely as well. The part that is causing me some headache (I believe) is the configuration of `cv2.VideoWriter`; ideally, that should open a UDP socket which I can "connect" to via my `receivevideo.bash` script. I have tested this locally and remotely but to no avail.
When I run `receivevideo.bash` to connect to the UDP socket provided by the Python script I get the following output:
```
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
```
This does not seem wrong to me. I have tried to run the different scripts with GST\_DEBUG=3, which gave some warnings, but as the pipeline configurations are basically the same in the bash scripts and for the cv2 `VideoCapture` and `VideoWriter`, I do not attach much weight to those warnings. As an example I have included one such warning below:
```
0:00:06.595120595 8962 0x25b8cf0 WARN rtpjpegpay gstrtpjpegpay.c:596:gst_rtp_jpeg_pay_read_sof:<rtpjpegpay0> warning: Invalid component
```
This warning is printed continuously running the Python script with `GST_DEBUG=3`. Running the `receivevideo.bash` with the same debug level gave:
```
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
0:00:00.013911480 9078 0x55be0899de80 FIXME videodecoder gstvideodecoder.c:933:gst_video_decoder_drain_out:<jpegdec0> Sub-class should implement drain()
Setting pipeline to PLAYING ...
New clock: GstSystemClock
```
I hope my intention is clearer now, and as I already pointed out I believe something is wrong with my `cv2.VideoWriter` in the Python script, but I am no expert and GStreamer is far from something that I use every day. Thus, I may have misunderstood something.
**EDIT 2:**
So now I have tried to split the two pipelines into two separate processes as suggested by @abysslover. I still see the same result, and I still have no clue why that is. My current implementation of the Python script is listed below.
webcam\_passthrough.py:
```
#!/usr/bin/python3.6
import signal, cv2
from multiprocessing import Process, Pipe
is_running = True
def signal_handler(sig, frame):
global is_running
print("Program was interrupted - terminating ...")
is_running = False
def produce(pipe):
global is_running
video_in = cv2.VideoCapture("udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink", cv2.CAP_GSTREAMER)
while is_running:
ret, frame = video_in.read()
if not ret: break
print("Receiving frame ...")
pipe.send(frame)
video_in.release()
if __name__ == "__main__":
consumer_pipe, producer_pipe = Pipe()
signal.signal(signal.SIGINT, signal_handler)
producer = Process(target=produce, args=(producer_pipe,))
video_out = cv2.VideoWriter("appsrc ! videoconvert ! jpegenc ! rtpjpegpay ! rtpstreampay ! udpsink host=[destination-ip] port=12344", cv2.CAP_GSTREAMER, 0, 24, (800, 600), True)
producer.start()
while is_running:
frame = consumer_pipe.recv()
video_out.write(frame)
print("Sending frame ...")
video_out.release()
producer.join()
```
The pipe that I have created between the two processes is providing a new frame as expected. When I try to listen to UDP port 12344 with `netcat`, I do not receive anything, the same as before. I also have a hard time understanding how splitting the pipelines apart would change much, as I would expect them to already run in different contexts. Still, I could be wrong concerning this assumption. | 2021/02/03 | [
"https://Stackoverflow.com/questions/66030433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15008550/"
] | You were very close to the solution. The problem lies in the warning you yourself noticed, `warning: Invalid component`: the RTP JPEG payloader gets stuck because it does not support the video format it is getting. Check [this](http://gstreamer-devel.966125.n4.nabble.com/ximagesrc-to-jpegenc-td4669619.html)
However I was blind and missed what you wrote and went full debug mode into the problem.
So lets just keep the debug how-to for others or for similar problems:
1, First debugging step - check with Wireshark whether the receiving machine is getting UDP packets on port 12344. Nope, it does not.
2, Would this work without the OpenCV stuff? Let's check by replacing the OpenCV logic with some random processing - say, rotating the video. Also eliminate appsrc/appsink to simplify.
Then I used this:
`GST_DEBUG=3 gst-launch-1.0 udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! rotate angle=0.45 ! videoconvert ! jpegenc ! rtpjpegpay ! rtpstreampay ! queue ! udpsink host=[my ip] port=12344`
Hm now I get weird warnings like:
```
0:00:00.174424533 90722 0x55cb38841060 WARN rtpjpegpay gstrtpjpegpay.c:596:gst_rtp_jpeg_pay_read_sof:<rtpjpegpay0> warning: Invalid component
WARNING: from element /GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0: Invalid component
```
3, A quick search yielded the above-mentioned GStreamer forum page.
4, When I added `video/x-raw,format=I420` after videoconvert it started working and my second machine started getting the udp packets.
5, So the solution to your problem is just limit the jpegenc to specific video format that the subsequent rtp payloader can handle:
```
#!/usr/bin/python3
import signal, cv2
from multiprocessing import Process, Pipe
is_running = True
def signal_handler(sig, frame):
global is_running
print("Program was interrupted - terminating ...")
is_running = False
def produce(pipe):
global is_running
video_in = cv2.VideoCapture("udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink", cv2.CAP_GSTREAMER)
while is_running:
ret, frame = video_in.read()
if not ret: break
print("Receiving frame ...")
pipe.send(frame)
video_in.release()
if __name__ == "__main__":
consumer_pipe, producer_pipe = Pipe()
signal.signal(signal.SIGINT, signal_handler)
producer = Process(target=produce, args=(producer_pipe,))
# the only edit is here, added video/x-raw capsfilter: <-------
video_out = cv2.VideoWriter("appsrc ! videoconvert ! video/x-raw,format=I420 ! jpegenc ! rtpjpegpay ! rtpstreampay ! udpsink host=[receiver ip] port=12344", cv2.CAP_GSTREAMER, 0, 24, (800, 600), True)
producer.start()
while is_running:
frame = consumer_pipe.recv()
rr = video_out.write(frame)
print("Sending frame ...")
print(rr)
video_out.release()
producer.join()
``` | Note: I cannot write a comment due to the low reputation.
According to your problem description, it is difficult to understand what your problem is.
Simply, you will run two bash scripts (`servevideo.bash` and `receivevideo.bash`) on your laptop, which may receive and send web-cam frames from the laptop (?), while a Python script(`webcam_passthrough.py`) runs on a Jetson AGX Xavier.
Your bash scripts work, so I guess you have some problems in the Python script. According to your explanation, you've already got the frames from the gst-launch in the bash scripts and visualized the frames.
Thus, what is your real problem? What are you trying to solve using the Python script?
The following statement is unclear to me.
>
> When I throw my Python script (listed below) in the mix, I start to experience issues.
>
>
>
How about the following configuration?
servevideo.bash:
```
#!/bin/bash
gst-launch-1.0 v4l2src device=[device-fd] \
! video/x-raw, width=800, height=600, framerate=20/1 \
! videoscale \
! videoconvert \
! x264enc tune=zerolatency bitrate=500 speed-preset=superfast \
! rtph264pay \
! udpsink host=[destination-ip] port=12345
```
receivevideo.bash
```
#!/bin/bash
gst-launch-1.0 -v udpsrc port=12345 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" \
! rtph264depay \
! decodebin \
! videoconvert \
! autovideosink
```
Python script:
```
import numpy as np
import cv2
from multiprocessing import Process
def send_process():
    video_in = cv2.VideoCapture("videotestsrc ! video/x-raw,framerate=20/1 ! videoscale ! videoconvert ! appsink", cv2.CAP_GSTREAMER)
    video_out = cv2.VideoWriter("appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=[destination_ip] port=12345", cv2.CAP_GSTREAMER, 0, 24, (800,600), True)
    if not video_in.isOpened() or not video_out.isOpened():
        print("VideoCapture or VideoWriter not opened")
        exit(0)
    while True:
        ret, frame = video_in.read()
        if not ret:
            break
        video_out.write(frame)
        cv2.imshow("send_process", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    video_in.release()
    video_out.release()

def receive_process():
    cap_receive = cv2.VideoCapture('udpsrc port=12345 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! appsink', cv2.CAP_GSTREAMER)
    if not cap_receive.isOpened():
        print("VideoCapture not opened")
        exit(0)
    while True:
        ret, frame = cap_receive.read()
        if not ret:
            break
        cv2.imshow('receive_process', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap_receive.release()

if __name__ == '__main__':
    s = Process(target=send_process)
    r = Process(target=receive_process)
    s.start()
    r.start()
    s.join()
    r.join()
    cv2.destroyAllWindows()
```
I cannot test this code since I do not have your configuration. I think the receiver and sender need to be forked into two separate processes using multiprocessing.Process in Python. You may need to adjust some detailed parameters to make these scripts work in your configuration.
Good luck to you. |
53,055,563 | The python `collections.Counter` object keeps track of the counts of objects.
```
>> from collections import Counter
>> myC = Counter()
>> myC.update("cat")
>> myC.update("cat")
>> myC["dogs"] = 8
>> myC["lizards"] = 0
>> print(myC)
{"cat": 2, "dogs": 8, "lizards": 0}
```
Is there an analogous C++ object where I can easily keep track of the occurrence counts of a type? Maybe a `map` to `string`? Keep in mind that the above is just an example, and in C++ this would generalize to other types to count. | 2018/10/30 | [
"https://Stackoverflow.com/questions/53055563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5843327/"
] | You could use an `std::map` like:
```
#include <iostream>
#include <map>
int main()
{
    std::map<std::string,int> counter;
    counter["dog"] = 8;
    counter["cat"]++;
    counter["cat"]++;
    counter["1"] = 0;
    for (auto pair : counter) {
        std::cout << pair.first << ":" << pair.second << std::endl;
    }
}
```
Output:
```
1:0
cat:2
dog:8
``` | You can use [std::unordered\_map](https://en.cppreference.com/w/cpp/container/unordered_map) if you want constant average lookup complexity (as you get with collections.Counter). [std::map](https://en.cppreference.com/w/cpp/container/map) is "usually implemented as a red-black tree", so lookup is logarithmic in the size of the container. Python's built-in library has no red-black-tree implementation.
```
std::unordered_map<std::string,int> counter;
counter["dog"] = 8;
counter["cat"]++;
counter["cat"]++;
counter["1"] = 0;
for (auto pair : counter) {
    std::cout << pair.first << ":" << pair.second << std::endl;
}
``` |
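One caveat about the question's own snippet, independent of the C++ answers above: `Counter.update` iterates its argument, so updating with a bare string counts characters, not words. A small sketch:

```python
from collections import Counter

c = Counter()
c.update("cat")              # iterates the string: adds 'c', 'a', 't'
assert c == Counter({'c': 1, 'a': 1, 't': 1})

c = Counter()
c.update(["cat"])            # counts the whole word instead
c.update(["cat"])
c["dogs"] = 8
c["lizards"] = 0
print(dict(c))
```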
53,055,563 | The python `collections.Counter` object keeps track of the counts of objects.
```
>> from collections import Counter
>> myC = Counter()
>> myC.update("cat")
>> myC.update("cat")
>> myC["dogs"] = 8
>> myC["lizards"] = 0
>> print(myC)
{"cat": 2, "dogs": 8, "lizards": 0}
```
Is there an analogous C++ object where I can easily keep track of the occurrence counts of a type? Maybe a `map` to `string`? Keep in mind that the above is just an example, and in C++ this would generalize to other types to count. | 2018/10/30 | [
"https://Stackoverflow.com/questions/53055563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5843327/"
] | You could use an `std::map` like:
```
#include <iostream>
#include <map>
int main()
{
    std::map<std::string,int> counter;
    counter["dog"] = 8;
    counter["cat"]++;
    counter["cat"]++;
    counter["1"] = 0;
    for (auto pair : counter) {
        std::cout << pair.first << ":" << pair.second << std::endl;
    }
}
```
Output:
```
1:0
cat:2
dog:8
``` | Python3 code:
```
import collections
stringlist = ["Cat","Cat","Cat","Dog","Dog","Lizard"]
counterinstance = collections.Counter(stringlist)
for key,value in counterinstance.items():
    print(key,":",value)
```
C++ code:
```
#include <iostream>
#include <unordered_map>
#include <vector>
using namespace std;
int main()
{
    unordered_map <string,int> counter;
    vector<string> stringVector {"Cat","Cat","Cat","Dog","Dog","Lizard"};
    for (auto stringval: stringVector)
    {
        if (counter.find(stringval) == counter.end()) // if key is NOT present already
        {
            counter[stringval] = 1; // initialize the key with value 1
        }
        else
        {
            counter[stringval]++; // key is already present, increment the value by 1
        }
    }
    for (auto keyvaluepair : counter)
    {
        // .first to access key, .second to access value
        cout << keyvaluepair.first << ":" << keyvaluepair.second << endl;
    }
    return 0;
}
```
Output:
```
Lizard:1
Cat:3
Dog:2
``` |
53,055,563 | The python `collections.Counter` object keeps track of the counts of objects.
```
>> from collections import Counter
>> myC = Counter()
>> myC.update("cat")
>> myC.update("cat")
>> myC["dogs"] = 8
>> myC["lizards"] = 0
>> print(myC)
{"cat": 2, "dogs": 8, "lizards": 0}
```
Is there an analogous C++ object where I can easily keep track of the occurrence counts of a type? Maybe a `map` to `string`? Keep in mind that the above is just an example, and in C++ this would generalize to other types to count. | 2018/10/30 | [
"https://Stackoverflow.com/questions/53055563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5843327/"
] | You could use an `std::map` like:
```
#include <iostream>
#include <map>
int main()
{
    std::map<std::string,int> counter;
    counter["dog"] = 8;
    counter["cat"]++;
    counter["cat"]++;
    counter["1"] = 0;
    for (auto pair : counter) {
        std::cout << pair.first << ":" << pair.second << std::endl;
    }
}
```
Output:
```
1:0
cat:2
dog:8
``` | You can use CppCounter:
```
#include <iostream>
#include "counter.hpp"
int main() {
    collection::Counter<std::string> counter;
    ++counter["cat"];
    ++counter["cat"];
    counter["dogs"] = 8;
    counter["lizards"] = 0;
    std::cout << "{ ";
    for (const auto& it: counter) {
        std::cout << "\"" << it.first << "\":" << it.second.value() << " ";
    }
    std::cout << "}" << std::endl;
}
```
CppCounter is a C++ implementation of collections.Counter: <https://gitlab.com/miyamoto128/cppcounter>.
It is written on top of unordered\_map and easy to use.
May I add I am the author ;) |
53,055,563 | The python `collections.Counter` object keeps track of the counts of objects.
```
>> from collections import Counter
>> myC = Counter()
>> myC.update("cat")
>> myC.update("cat")
>> myC["dogs"] = 8
>> myC["lizards"] = 0
>> print(myC)
{"cat": 2, "dogs": 8, "lizards": 0}
```
Is there an analogous C++ object where I can easily keep track of the occurrence counts of a type? Maybe a `map` to `string`? Keep in mind that the above is just an example, and in C++ this would generalize to other types to count. | 2018/10/30 | [
"https://Stackoverflow.com/questions/53055563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5843327/"
] | You can use [std::unordered\_map](https://en.cppreference.com/w/cpp/container/unordered_map) if you want constant average lookup complexity (as you get with collections.Counter). [std::map](https://en.cppreference.com/w/cpp/container/map) is "usually implemented as a red-black tree", so lookup is logarithmic in the size of the container. Python's built-in library has no red-black-tree implementation.
```
std::unordered_map<std::string,int> counter;
counter["dog"] = 8;
counter["cat"]++;
counter["cat"]++;
counter["1"] = 0;
for (auto pair : counter) {
    std::cout << pair.first << ":" << pair.second << std::endl;
}
``` | Python3 code:
```
import collections
stringlist = ["Cat","Cat","Cat","Dog","Dog","Lizard"]
counterinstance = collections.Counter(stringlist)
for key,value in counterinstance.items():
    print(key,":",value)
```
C++ code:
```
#include <iostream>
#include <unordered_map>
#include <vector>
using namespace std;
int main()
{
    unordered_map <string,int> counter;
    vector<string> stringVector {"Cat","Cat","Cat","Dog","Dog","Lizard"};
    for (auto stringval: stringVector)
    {
        if (counter.find(stringval) == counter.end()) // if key is NOT present already
        {
            counter[stringval] = 1; // initialize the key with value 1
        }
        else
        {
            counter[stringval]++; // key is already present, increment the value by 1
        }
    }
    for (auto keyvaluepair : counter)
    {
        // .first to access key, .second to access value
        cout << keyvaluepair.first << ":" << keyvaluepair.second << endl;
    }
    return 0;
}
```
Output:
```
Lizard:1
Cat:3
Dog:2
``` |
53,055,563 | The python `collections.Counter` object keeps track of the counts of objects.
```
>> from collections import Counter
>> myC = Counter()
>> myC.update("cat")
>> myC.update("cat")
>> myC["dogs"] = 8
>> myC["lizards"] = 0
>> print(myC)
{"cat": 2, "dogs": 8, "lizards": 0}
```
Is there an analogous C++ object where I can easily keep track of the occurrence counts of a type? Maybe a `map` to `string`? Keep in mind that the above is just an example, and in C++ this would generalize to other types to count. | 2018/10/30 | [
"https://Stackoverflow.com/questions/53055563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5843327/"
] | You can use [std::unordered\_map](https://en.cppreference.com/w/cpp/container/unordered_map) if you want constant average lookup complexity (as you get with collections.Counter). [std::map](https://en.cppreference.com/w/cpp/container/map) is "usually implemented as a red-black tree", so lookup is logarithmic in the size of the container. Python's built-in library has no red-black-tree implementation.
```
std::unordered_map<std::string,int> counter;
counter["dog"] = 8;
counter["cat"]++;
counter["cat"]++;
counter["1"] = 0;
for (auto pair : counter) {
    std::cout << pair.first << ":" << pair.second << std::endl;
}
``` | You can use CppCounter:
```
#include <iostream>
#include "counter.hpp"
int main() {
    collection::Counter<std::string> counter;
    ++counter["cat"];
    ++counter["cat"];
    counter["dogs"] = 8;
    counter["lizards"] = 0;
    std::cout << "{ ";
    for (const auto& it: counter) {
        std::cout << "\"" << it.first << "\":" << it.second.value() << " ";
    }
    std::cout << "}" << std::endl;
}
```
CppCounter is a C++ implementation of collections.Counter: <https://gitlab.com/miyamoto128/cppcounter>.
It is written on top of unordered\_map and easy to use.
May I add I am the author ;) |
53,055,563 | The python `collections.Counter` object keeps track of the counts of objects.
```
>> from collections import Counter
>> myC = Counter()
>> myC.update("cat")
>> myC.update("cat")
>> myC["dogs"] = 8
>> myC["lizards"] = 0
>> print(myC)
{"cat": 2, "dogs": 8, "lizards": 0}
```
Is there an analogous C++ object where I can easily keep track of the occurrence counts of a type? Maybe a `map` to `string`? Keep in mind that the above is just an example, and in C++ this would generalize to other types to count. | 2018/10/30 | [
"https://Stackoverflow.com/questions/53055563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5843327/"
] | You can use CppCounter:
```
#include <iostream>
#include "counter.hpp"
int main() {
    collection::Counter<std::string> counter;
    ++counter["cat"];
    ++counter["cat"];
    counter["dogs"] = 8;
    counter["lizards"] = 0;
    std::cout << "{ ";
    for (const auto& it: counter) {
        std::cout << "\"" << it.first << "\":" << it.second.value() << " ";
    }
    std::cout << "}" << std::endl;
}
```
CppCounter is a C++ implementation of collections.Counter: <https://gitlab.com/miyamoto128/cppcounter>.
It is written on top of unordered\_map and easy to use.
May I add I am the author ;) | Python3 code:
```
import collections
stringlist = ["Cat","Cat","Cat","Dog","Dog","Lizard"]
counterinstance = collections.Counter(stringlist)
for key,value in counterinstance.items():
    print(key,":",value)
```
C++ code:
```
#include <iostream>
#include <unordered_map>
#include <vector>
using namespace std;
int main()
{
    unordered_map <string,int> counter;
    vector<string> stringVector {"Cat","Cat","Cat","Dog","Dog","Lizard"};
    for (auto stringval: stringVector)
    {
        if (counter.find(stringval) == counter.end()) // if key is NOT present already
        {
            counter[stringval] = 1; // initialize the key with value 1
        }
        else
        {
            counter[stringval]++; // key is already present, increment the value by 1
        }
    }
    for (auto keyvaluepair : counter)
    {
        // .first to access key, .second to access value
        cout << keyvaluepair.first << ":" << keyvaluepair.second << endl;
    }
    return 0;
}
```
Output:
```
Lizard:1
Cat:3
Dog:2
``` |
54,726,459 | I'm working through an Exploit Development course on Pluralsight and in the lab I'm currently on we are doing a basic function pointer overwrite. The python script for the lab essentially runs the target executable with a 24 byte string input ending with the memory address of the "jackpot" function. Here's the code:
```
#!/usr/bin/python
import sys
import subprocess
import struct
# 20+4+8+4=36 would overwrite 'r', but we only want to hit the func ptr
jackpot = 0x401591
# we only take 3 of the 4 bytes because strings cannot have a null,
# but will be null terminated to complete the dword address
jackpot_packed = struct.pack('L', jackpot)[0:3]
arg = "A" * 20
arg += jackpot_packed
# or
# arg += "\x91\x15\x40"
subprocess.call(['functionoverwrite.exe', arg])
```
The script runs without error and works as expected using python 2.7.8, but with 3.7.2 I get this error:
>
> Traceback (most recent call last):
> File "c:/Users/rossk/Desktop/Pluralsight/Exploit Development/03/demos/lab2/solution/solution.py", line 14, in
> arg += jackpot\_packed
> TypeError: can only concatenate str (not "bytes") to str
>
>
>
So I've tried commenting out the "arg += jackpot\_packed" expression and using the "arg += "\x91\x15\x40" one instead, but apparently that doesn't result in the same string because when I run the script the target executable crashes without calling the jackpot function.
I'm looking for a way to fix this program for python 3. How can this code be rewritten so that it works for 3.x? | 2019/02/16 | [
"https://Stackoverflow.com/questions/54726459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9919300/"
] | The way this works is by processing the string from the end: each time you look at a character, you check its position in the array (I use a flipped array, as it's more efficient than calling `array_search()` each time). If the character is at the end of the array, set it to the 0th element of the alphabet and increment the next digit to the left. If there is another letter in the alphabet to increment the current value to, just replace it and stop the loop.
The last bit is that if you have processed every character and the loop was still going, then there is a carry - so add the 0th digit to the start.
```
$characters = ['a', 'b', 'c'];
$string = 'cccc';
$index = array_flip($characters);
$alphabetCount = count($index)-1;
for ( $i = strlen($string)-1; $i >= 0; $i--) {
    $current = $index[$string[$i]]+1;
    // Carry
    if ( $current > $alphabetCount ) {
        $string[$i] = $characters[0];
    }
    else {
        // update and exit
        $string[$i] = $characters[$current];
        break;
    }
}
// As reached end of loop - carry
if ( $i == -1 ) {
    $string = $characters[0].$string;
}
echo $string;
```
gives
```
aaaaa
```
with
```
$characters = ['f', 'h', 'z', '@', 's'];
$string = 'ffff@zz';
```
you get
```
ffff@z@
``` | I ended up with something like this:
```php
$string = 'ccc';
$alphabet = ['a', 'b', 'c'];
$numbers = array_keys($alphabet);
$numeric = str_replace($alphabet, $numbers, $string);
$base = count($alphabet) + 1;
$decimal = base_convert($numeric, $base, 10);
$string = base_convert(++$decimal, 10, $base);
strlen($decimal) !== strlen($string)
    and $string = str_replace('0', '1', $string);
echo str_replace($numbers, $alphabet, $string);
```
This one has the advantage of supporting multi-byte characters |
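Both answers above drifted to PHP; for the Python 3 question itself, a hedged sketch of the usual fix is to build the whole payload as `bytes`, since `struct.pack` returns `bytes` in Python 3. The `0x401591` address and the 20-byte padding are taken from the question:

```python
import struct

jackpot = 0x401591
# '<L' = little-endian unsigned 32-bit; drop the trailing NUL byte, as in
# the original exploit, so the C string's terminator completes the dword.
jackpot_packed = struct.pack('<L', jackpot)[0:3]
arg = b"A" * 20 + jackpot_packed
print(arg)
```

Passing a `bytes` argv element to `subprocess.call` works on POSIX; on Windows you may need to decode it first, e.g. `arg.decode('latin-1')`, which preserves the byte values.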
46,716,912 | I am new to Scala. As the title says, I would like to create a mutable map `Map[Int,(Int, Int)]` with a default value of the tuple (0,0) when a key does not exist. In Python, the "defaultdict" makes this easy. What is the elegant way to do it in Scala? | 2017/10/12 | [
"https://Stackoverflow.com/questions/46716912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1269298/"
] | Use `withDefaultValue` after creating the map:
```
import scala.collection.mutable
val map = mutable.Map[Int,(Int, Int)]().withDefaultValue((0, 0))
``` | You are probably looking for `.getOrElseUpdate`, which takes the key and, if it is not present, updates the map with the given value.
```
scala> val googleMap = Map[Int, (Int, Int)]().empty
googleMap: scala.collection.mutable.Map[Int,(Int, Int)] = Map()
scala> googleMap.getOrElseUpdate(100, (0, 0))
res3: (Int, Int) = (0,0)
scala> googleMap
res4: scala.collection.mutable.Map[Int,(Int, Int)] = Map(100 -> (0,0))
```
You can also pass the `orElse` part implicitly,
```
scala> implicit val defaultValue = (0, 0)
defaultValue: (Int, Int) = (0,0)
scala> googleMap.getOrElseUpdate(100, implicitly)
res8: (Int, Int) = (0,0)
scala> googleMap
res9: scala.collection.mutable.Map[Int,(Int, Int)] = Map(100 -> (0,0))
``` |
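For comparison with the `defaultdict` the question mentions, note the semantic difference: Python's `defaultdict` (like `getOrElseUpdate`) stores the default on first access, while Scala's `withDefaultValue` only changes what lookups return and does not mutate the map. A sketch:

```python
from collections import defaultdict

d = defaultdict(lambda: (0, 0))   # the factory returns a fresh (0, 0)
x, y = d[42]                      # missing key: yields the default AND stores it
d[7] = (1, 2)
print(dict(d))
```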
46,716,912 | I am new to Scala. As the title says, I would like to create a mutable map `Map[Int,(Int, Int)]` with a default value of the tuple (0,0) when a key does not exist. In Python, the "defaultdict" makes this easy. What is the elegant way to do it in Scala? | 2017/10/12 | [
"https://Stackoverflow.com/questions/46716912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1269298/"
] | Use `withDefaultValue` after creating the map:
```
import scala.collection.mutable
val map = mutable.Map[Int,(Int, Int)]().withDefaultValue((0, 0))
``` | withDefaultValue is much simpler than getOrElseUpdate.
```
import scala.collection.mutable
var kv1 = mutable.Map[Int, Int]().withDefaultValue(0)
var kv2 = mutable.Map[Int, Int]()
kv1(1) += 5 // use default value when key does not exist
kv1(2) = 3
kv2(2) = 3 // both can assign value to a new key.
println(f"kv1(1) ${kv1(1)}, kv1(2) ${kv1(2)} " )
println(f"kv1 ${kv1}")
kv2.getOrElseUpdate(1, 18) // set a default if key does not exist
println(f"kv2(1) ${kv2(1)}, kv2(2) ${kv2(2)}")
println(f"kv2 ${kv2}")
```
Output:
kv1(1) 5, kv1(2) 3
kv1 Map(2 -> 3, 1 -> 5)
kv2(1) 18, kv2(2) 3
kv2 Map(2 -> 3, 1 -> 18) |
46,716,912 | I am new to Scala. As the title says, I would like to create a mutable map `Map[Int,(Int, Int)]` with a default value of the tuple (0,0) when a key does not exist. In Python, the "defaultdict" makes this easy. What is the elegant way to do it in Scala? | 2017/10/12 | [
"https://Stackoverflow.com/questions/46716912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1269298/"
] | You are probably looking for `.getOrElseUpdate`, which takes the key and, if it is not present, updates the map with the given value.
```
scala> val googleMap = Map[Int, (Int, Int)]().empty
googleMap: scala.collection.mutable.Map[Int,(Int, Int)] = Map()
scala> googleMap.getOrElseUpdate(100, (0, 0))
res3: (Int, Int) = (0,0)
scala> googleMap
res4: scala.collection.mutable.Map[Int,(Int, Int)] = Map(100 -> (0,0))
```
You can also pass the `orElse` part implicitly,
```
scala> implicit val defaultValue = (0, 0)
defaultValue: (Int, Int) = (0,0)
scala> googleMap.getOrElseUpdate(100, implicitly)
res8: (Int, Int) = (0,0)
scala> googleMap
res9: scala.collection.mutable.Map[Int,(Int, Int)] = Map(100 -> (0,0))
``` | withDefaultValue is much simpler than getOrElseUpdate.
```
import scala.collection.mutable
var kv1 = mutable.Map[Int, Int]().withDefaultValue(0)
var kv2 = mutable.Map[Int, Int]()
kv1(1) += 5 // use default value when key does not exist
kv1(2) = 3
kv2(2) = 3 // both can assign value to a new key.
println(f"kv1(1) ${kv1(1)}, kv1(2) ${kv1(2)} " )
println(f"kv1 ${kv1}")
kv2.getOrElseUpdate(1, 18) // set a default if key does not exist
println(f"kv2(1) ${kv2(1)}, kv2(2) ${kv2(2)}")
println(f"kv2 ${kv2}")
```
Output:
kv1(1) 5, kv1(2) 3
kv1 Map(2 -> 3, 1 -> 5)
kv2(1) 18, kv2(2) 3
kv2 Map(2 -> 3, 1 -> 18) |
31,110,801 | I am learning recursion in python. I wrote a program but it is not working correctly. I am a beginner in python. I have two functions **scrabbleScore()** and **letterScore()**. **scrabbleScore()** calls **letterscore()** and itself also. Here is my code:
```
def letterScore( let ):
    if let in ['a','e','i','l','n','o','r','s','t','u']:
        return 1
    elif let in ['d','g']:
        return 2
    elif let in ['b','c','m','p']:
        return 3
    elif let in ['f','h','v','w','y']:
        return 4
    elif let=='k':
        return 5
    elif let in ['j','x']:
        return 8
    elif let in ['q','z']:
        return 10
    else:
        return 0

p = 0

def scrabbleScore( S ):
    if S == "":
        return 0
    else:
        global p
        p=p+letterScore(S[0])
        scrabbleScore(S[1:])
    return p
print "scrabbleScore('quetzal'): 25 ==", scrabbleScore('quetzal')
print "scrabbleScore('jonquil'): 23 ==", scrabbleScore('jonquil')
print "scrabbleScore('syzygy'): 25 ==", scrabbleScore('syzygy')
print "scrabbleScore('abcdefghijklmnopqrstuvwxyz'): 87 ==", scrabbleScore('abcdefghijklmnopqrstuvwxyz')
print "scrabbleScore('?!@#$%^&*()'): 0 ==", scrabbleScore('?!@#$%^&*()')
print "scrabbleScore(''): 0 ==", scrabbleScore('')
``` | 2015/06/29 | [
"https://Stackoverflow.com/questions/31110801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3222665/"
] | I would rewrite your scrabbleScore to something like this
```
def scrabbleScore(S):
    def helper(S, p):
        if S == "":
            return p
        else:
            p += letterScore(S[0])
            return helper(S[1:], p)
    return helper(S, 0)
```
This is how you could write it. `p` is called an accumulator and should not be global (in general, global variables should be avoided). I didn't figure out why your program does not work, but mine will print the correct result. | Your variable `p` should be initialized to zero again after each call to `scrabbleScore(S)`. This will solve your problem. Example:
```
print "scrabbleScore('quetzal'): 25 ==", scrabbleScore('quetzal')
p=0
print "scrabbleScore('jonquil'): 23 ==", scrabbleScore('jonquil')
``` |
31,110,801 | I am learning recursion in python. I wrote a program but it is not working correctly. I am a beginner in python. I have two functions **scrabbleScore()** and **letterScore()**. **scrabbleScore()** calls **letterscore()** and itself also. Here is my code:
```
def letterScore( let ):
    if let in ['a','e','i','l','n','o','r','s','t','u']:
        return 1
    elif let in ['d','g']:
        return 2
    elif let in ['b','c','m','p']:
        return 3
    elif let in ['f','h','v','w','y']:
        return 4
    elif let=='k':
        return 5
    elif let in ['j','x']:
        return 8
    elif let in ['q','z']:
        return 10
    else:
        return 0

p = 0

def scrabbleScore( S ):
    if S == "":
        return 0
    else:
        global p
        p=p+letterScore(S[0])
        scrabbleScore(S[1:])
    return p
print "scrabbleScore('quetzal'): 25 ==", scrabbleScore('quetzal')
print "scrabbleScore('jonquil'): 23 ==", scrabbleScore('jonquil')
print "scrabbleScore('syzygy'): 25 ==", scrabbleScore('syzygy')
print "scrabbleScore('abcdefghijklmnopqrstuvwxyz'): 87 ==", scrabbleScore('abcdefghijklmnopqrstuvwxyz')
print "scrabbleScore('?!@#$%^&*()'): 0 ==", scrabbleScore('?!@#$%^&*()')
print "scrabbleScore(''): 0 ==", scrabbleScore('')
``` | 2015/06/29 | [
"https://Stackoverflow.com/questions/31110801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3222665/"
] | It is not working correctly because you are using a global variable to store intermediate results. After you call `scrabbleScore` for the first time, `p` becomes 25, and you would need to reset it.
The solution here would be to avoid using a global variable at all:
```
def scrabbleScore( S ):
    p = 0
    if S == "":
        return 0
    else:
        p = p + letterScore(S[0])
        p = p + scrabbleScore(S[1:])
    return p
``` | Your variable `p` should be initialized to zero again after each call to `scrabbleScore(S)`. This will solve your problem. Example:
```
print "scrabbleScore('quetzal'): 25 ==", scrabbleScore('quetzal')
p=0
print "scrabbleScore('jonquil'): 23 ==", scrabbleScore('jonquil')
``` |
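The two answers above can be combined into one self-contained, global-free version; the condensed score table below carries the same values as the question's `letterScore`, and the expected results come from the question's own test prints:

```python
# Letter values condensed into one dict (same values as letterScore above).
SCORES = {**dict.fromkeys('aeilnorstu', 1), **dict.fromkeys('dg', 2),
          **dict.fromkeys('bcmp', 3), **dict.fromkeys('fhvwy', 4),
          'k': 5, 'j': 8, 'x': 8, 'q': 10, 'z': 10}

def letter_score(let):
    return SCORES.get(let, 0)

def scrabble_score(s):
    # Score of a word = score of its first letter + score of the rest.
    if s == "":
        return 0
    return letter_score(s[0]) + scrabble_score(s[1:])

print(scrabble_score('quetzal'))
```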
1,826,824 | On my ubuntu server I run the following command:
```
python -c 'import os; os.kill(5555, 0)'
```
This is done so that I can see if pid 5555 is running. From my understanding this should raise an OSError if the pid is not running. This is not raising an OSError for me which means it should be a running process. However when I run:
```
ps aux | grep 5555
```
I see no process running with that pid. This also happens on several other pids in that general range, but it does not happen with say 555 or 55555.
Does anyone have any insight as to why os.kill would not raise an OSError like it is expected to?
Note: this is running under python 2.5.1. | 2009/12/01 | [
"https://Stackoverflow.com/questions/1826824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/205114/"
] | Try installing htop (sudo apt-get install htop); it sometimes displays processes that ps doesn't. | I don't know why that OSError is not raised in some cases, but it's important to note that there is a maximum pid value on Linux and Unix-based OSes:
```
$> cat /proc/sys/kernel/pid_max
32768
``` |
1,826,824 | On my ubuntu server I run the following command:
```
python -c 'import os; os.kill(5555, 0)'
```
This is done so that I can see if pid 5555 is running. From my understanding this should raise an OSError if the pid is not running. This is not raising an OSError for me which means it should be a running process. However when I run:
```
ps aux | grep 5555
```
I see no process running with that pid. This also happens on several other pids in that general range, but it does not happen with say 555 or 55555.
Does anyone have any insight as to why os.kill would not raise an OSError like it is expected to?
Note: this is running under python 2.5.1. | 2009/12/01 | [
"https://Stackoverflow.com/questions/1826824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/205114/"
] | Under Linux, each process **and** each thread has a different pid. `os.kill` doesn't care whether you have a thread pid or a task pid; however, `ps` doesn't normally show the thread pids.
For instance on my machine the process with PID 8502 is running threads which you can see like this
```
$ ls /proc/8502/task/
8502 8503 8504 8505 8506 8507 8511 8512 8514 8659
```
Note that 8503 doesn't appear in the process list
```
$ ps aux | grep [8]503
$
```
However using some more `ps` arguments you can see it
```
$ ps -eLf | grep [8]503
ncw 8502 1 8503 0 10 10:00 ? 00:00:00 /usr/lib/virtualbox/VBoxSVC --automate
```
(Grepping for `[8]503` means that the `grep` won't show up - it's an old unix trick!)
Now let's see if it is alive or not
```
$ python
Python 2.6.4 (r264:75706, Nov 2 2009, 14:44:17)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Loaded customisations from '/home/ncw/.pystartup'
>>> import os
>>> os.kill(8503, 0)
>>>
```
This duplicates your problem.
I think if you do
```
ls /proc/*/task/5555
```
or
```
ps -eLf | grep [5]555
```
You will see the culprit thread. | Try installing htop (sudo apt-get install htop); it sometimes displays processes that ps doesn't. |
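The signal-0 probe from the question can be wrapped in a small helper; the exception names below are the Python 3 spellings (under 2.x you would catch `OSError` and inspect its `errno`):

```python
import os

def pid_exists(pid):
    """Probe a pid with signal 0: nothing is delivered, only checks run."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:   # ESRCH: no such process (or thread) id
        return False
    except PermissionError:      # EPERM: it exists but belongs to another user
        return True
    return True

print(pid_exists(os.getpid()))
```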
1,826,824 | On my ubuntu server I run the following command:
```
python -c 'import os; os.kill(5555, 0)'
```
This is done so that I can see if pid 5555 is running. From my understanding this should raise an OSError if the pid is not running. This is not raising an OSError for me which means it should be a running process. However when I run:
```
ps aux | grep 5555
```
I see no process running with that pid. This also happens on several other pids in that general range, but it does not happen with say 555 or 55555.
Does anyone have any insight as to why os.kill would not raise an OSError like it is expected to?
Note: this is running under python 2.5.1. | 2009/12/01 | [
"https://Stackoverflow.com/questions/1826824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/205114/"
] | Maybe it's a bug in 2.5? On 2.6.4 I get:
```
gruszczy@gruszczy-laptop:~$ python -c 'import os; os.kill(5555, 0)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
OSError: [Errno 3] No such process
```
I believe there is a bug report for this:
<http://mail.python.org/pipermail/new-bugs-announce/2009-February/004222.html> | I don't know why that OSError is not raised in some cases, but it's important to note that there is a maximum pid value on Linux and Unix-based OSes:
```
$> cat /proc/sys/kernel/pid_max
32768
``` |
1,826,824 | On my ubuntu server I run the following command:
```
python -c 'import os; os.kill(5555, 0)'
```
This is done so that I can see if pid 5555 is running. From my understanding this should raise an OSError if the pid is not running. This is not raising an OSError for me which means it should be a running process. However when I run:
```
ps aux | grep 5555
```
I see no process running with that pid. This also happens on several other pids in that general range, but it does not happen with say 555 or 55555.
Does anyone have any insight as to why os.kill would not raise an OSError like it is expected to?
Note: this is running under python 2.5.1. | 2009/12/01 | [
"https://Stackoverflow.com/questions/1826824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/205114/"
] | Under Linux, each process **and** each thread has a different pid. `os.kill` doesn't care whether you have a thread pid or a task pid; however, `ps` doesn't normally show the thread pids.
For instance on my machine the process with PID 8502 is running threads which you can see like this
```
$ ls /proc/8502/task/
8502 8503 8504 8505 8506 8507 8511 8512 8514 8659
```
Note that 8503 doesn't appear in the process list
```
$ ps aux | grep [8]503
$
```
However using some more `ps` arguments you can see it
```
$ ps -eLf | grep [8]503
ncw 8502 1 8503 0 10 10:00 ? 00:00:00 /usr/lib/virtualbox/VBoxSVC --automate
```
(Grepping for `[8]503` means that the `grep` won't show up - it's an old unix trick!)
Now let's see if it is alive or not
```
$ python
Python 2.6.4 (r264:75706, Nov 2 2009, 14:44:17)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Loaded customisations from '/home/ncw/.pystartup'
>>> import os
>>> os.kill(8503, 0)
>>>
```
This duplicates your problem.
I think if you do
```
ls /proc/*/task/5555
```
or
```
ps -eLf | grep [5]555
```
You will see the culprit thread. | Maybe it's a bug in 2.5? On 2.6.4 I get:
```
gruszczy@gruszczy-laptop:~$ python -c 'import os; os.kill(5555, 0)'
Traceback (most recent call last):
File "<string>", line 1, in <module>
OSError: [Errno 3] No such process
```
I believe there is a bug report for this:
<http://mail.python.org/pipermail/new-bugs-announce/2009-February/004222.html> |
1,826,824 | On my ubuntu server I run the following command:
```
python -c 'import os; os.kill(5555, 0)'
```
This is done so that I can see if pid 5555 is running. From my understanding this should raise an OSError if the pid is not running. This is not raising an OSError for me which means it should be a running process. However when I run:
```
ps aux | grep 5555
```
I see no process running with that pid. This also happens on several other pids in that general range, but it does not happen with say 555 or 55555.
Does anyone have any insight as to why os.kill would not raise an OSError like it is expected to?
Note: this is running under python 2.5.1. | 2009/12/01 | [
"https://Stackoverflow.com/questions/1826824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/205114/"
] | Under Linux, each process **and** each thread has a different pid. `os.kill` doesn't care whether you have a thread pid or a task pid; however, `ps` doesn't normally show the thread pids.
For instance on my machine the process with PID 8502 is running threads which you can see like this
```
$ ls /proc/8502/task/
8502 8503 8504 8505 8506 8507 8511 8512 8514 8659
```
Note that 8503 doesn't appear in the process list
```
$ ps aux | grep [8]503
$
```
However using some more `ps` arguments you can see it
```
$ ps -eLf | grep [8]503
ncw 8502 1 8503 0 10 10:00 ? 00:00:00 /usr/lib/virtualbox/VBoxSVC --automate
```
(Grepping for `[8]503` means that the `grep` won't show up - it's an old unix trick!)
Now let's see if it is alive or not
```
$ python
Python 2.6.4 (r264:75706, Nov 2 2009, 14:44:17)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Loaded customisations from '/home/ncw/.pystartup'
>>> import os
>>> os.kill(8503, 0)
>>>
```
This duplicates your problem.
I think if you do
```
ls /proc/*/task/5555
```
or
```
ps -eLf | grep [5]555
```
You will see the culprit thread. | I don't know why that OSError is not raised in some cases, but it's important to note that there is a maximum pid value on Linux and Unix-based OSes:
```
$> cat /proc/sys/kernel/pid_max
32768
``` |
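A hedged sketch of reading that ceiling programmatically; the `/proc` path is Linux-only, so the snippet falls back to the value shown above on other systems:

```python
from pathlib import Path

# /proc/sys/kernel/pid_max holds the kernel's pid ceiling on Linux.
path = Path("/proc/sys/kernel/pid_max")
pid_max = int(path.read_text()) if path.exists() else 32768
print(pid_max)
```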