qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
33,617,221 | I am trying to speed up some heavy simulations by using python's multiprocessing module on a machine with 24 cores that runs Suse Linux. From reading through the documentation, I understand that this only makes sense if the individual calculations take much longer than the overhead for creating the pool etc.
What confuses me is that the execution time of some of the individual processes is much longer with multiprocessing than when I just run a single process. In my actual simulations the time increases from around 300s to up to 1500s. Interestingly, this gets worse when I use more processes.
The following example illustrates the problem with a slightly shorter dummy loop:
```py
from time import clock, time
import multiprocessing
import os

def simulate(params):
    t1 = clock()
    result = 0
    for i in range(10000):
        for j in range(10000):
            result += i*j
    pid = os.getpid()
    print 'pid: ', pid, ' sim time: ', clock() - t1, 'seconds'
    return result

if __name__ == '__main__':
    for n_procs in [1, 5, 10, 20]:
        print n_procs, ' processes:'
        t1 = time()
        result = multiprocessing.Pool(processes=n_procs).map(simulate, range(20))
        print 'total: ', time() - t1
```
This produces the following output:
```
1 processes:
pid: 1872 sim time: 8.1 seconds
pid: 1872 sim time: 7.92 seconds
pid: 1872 sim time: 7.93 seconds
pid: 1872 sim time: 7.89 seconds
pid: 1872 sim time: 7.87 seconds
pid: 1872 sim time: 7.74 seconds
pid: 1872 sim time: 7.83 seconds
pid: 1872 sim time: 7.84 seconds
pid: 1872 sim time: 7.88 seconds
pid: 1872 sim time: 7.82 seconds
pid: 1872 sim time: 8.83 seconds
pid: 1872 sim time: 7.91 seconds
pid: 1872 sim time: 7.97 seconds
pid: 1872 sim time: 7.84 seconds
pid: 1872 sim time: 7.87 seconds
pid: 1872 sim time: 7.91 seconds
pid: 1872 sim time: 7.86 seconds
pid: 1872 sim time: 7.9 seconds
pid: 1872 sim time: 7.96 seconds
pid: 1872 sim time: 7.97 seconds
total: 159.337743998
5 processes:
pid: 1906 sim time: 8.66 seconds
pid: 1907 sim time: 8.74 seconds
pid: 1908 sim time: 8.75 seconds
pid: 1905 sim time: 8.79 seconds
pid: 1909 sim time: 9.52 seconds
pid: 1906 sim time: 7.72 seconds
pid: 1908 sim time: 7.74 seconds
pid: 1907 sim time: 8.26 seconds
pid: 1905 sim time: 8.45 seconds
pid: 1909 sim time: 9.25 seconds
pid: 1908 sim time: 7.48 seconds
pid: 1906 sim time: 8.4 seconds
pid: 1907 sim time: 8.23 seconds
pid: 1905 sim time: 8.33 seconds
pid: 1909 sim time: 8.15 seconds
pid: 1908 sim time: 7.47 seconds
pid: 1906 sim time: 8.19 seconds
pid: 1907 sim time: 8.21 seconds
pid: 1905 sim time: 8.27 seconds
pid: 1909 sim time: 8.1 seconds
total: 35.1368539333
10 processes:
pid: 1918 sim time: 8.79 seconds
pid: 1920 sim time: 8.81 seconds
pid: 1915 sim time: 14.78 seconds
pid: 1916 sim time: 14.78 seconds
pid: 1914 sim time: 14.81 seconds
pid: 1922 sim time: 14.81 seconds
pid: 1913 sim time: 14.98 seconds
pid: 1921 sim time: 14.97 seconds
pid: 1917 sim time: 15.13 seconds
pid: 1919 sim time: 15.13 seconds
pid: 1920 sim time: 8.26 seconds
pid: 1918 sim time: 8.34 seconds
pid: 1915 sim time: 9.03 seconds
pid: 1921 sim time: 9.03 seconds
pid: 1916 sim time: 9.39 seconds
pid: 1913 sim time: 9.27 seconds
pid: 1914 sim time: 12.12 seconds
pid: 1922 sim time: 12.17 seconds
pid: 1917 sim time: 12.15 seconds
pid: 1919 sim time: 12.17 seconds
total: 27.4067809582
20 processes:
pid: 1941 sim time: 8.63 seconds
pid: 1939 sim time: 10.32 seconds
pid: 1931 sim time: 12.35 seconds
pid: 1936 sim time: 12.23 seconds
pid: 1937 sim time: 12.82 seconds
pid: 1942 sim time: 12.73 seconds
pid: 1932 sim time: 13.01 seconds
pid: 1946 sim time: 13.0 seconds
pid: 1945 sim time: 13.74 seconds
pid: 1944 sim time: 14.03 seconds
pid: 1929 sim time: 14.44 seconds
pid: 1943 sim time: 14.75 seconds
pid: 1935 sim time: 14.8 seconds
pid: 1930 sim time: 14.79 seconds
pid: 1927 sim time: 14.85 seconds
pid: 1934 sim time: 14.8 seconds
pid: 1928 sim time: 14.83 seconds
pid: 1940 sim time: 14.88 seconds
pid: 1933 sim time: 15.05 seconds
pid: 1938 sim time: 15.06 seconds
total: 15.1311581135
```
What I do not understand is that *some* of the processes become much slower above a certain number of CPUs. I should add that nothing else is running on this machine. Is this expected? Am I doing something wrong? | 2015/11/09 | [
"https://Stackoverflow.com/questions/33617221",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5543796/"
] | Cores are a shared resource, like anything else on a computer.
The OS will usually balance the load, meaning it spreads threads across as many cores as possible.`*` The guiding metric is core load.
So if there are fewer threads than cores, some cores will sit idle (a single thread cannot be split across multiple cores).
If there are more threads than cores, the OS will assign several threads to a single core and multitask between them. Switching from one thread to another on a single core has some cost associated with it.
Shifting a task from one core to another has an even greater cost (quite significant in terms of both cores' resources), so the OS generally avoids such moves.
**So, getting back to your story.**
**Performance rose with thread count up to the core count** because idle cores were picking up new work. The last few cores were busy with OS work anyway, so they added very little actual performance.
**Overall performance still improved** after the thread count passed the core count, simply because the OS can switch away from a thread that is stuck on a long-running task (like I/O) and let another thread use the CPU time.
**Performance would decrease** if the thread count significantly exceeded the core count, as too many threads would fight over the same resource (CPU time) and the switching costs would add up to a substantial portion of the CPU cycles. Judging from your listing, though, that has not happened yet.
**As for the seemingly long execution times?** They really were long! The threads just did not spend all of that time working. The OS switched them off and on to maximize CPU usage whenever one of them got stuck on external work (I/O), and then did some more switching to spread CPU time evenly across the threads assigned to each core.
`*` The OS may also optimize for lowest power usage, maximized I/O throughput, etc. Linux in particular is very flexible here, but that's out of scope ;) Read up on the various Linux schedulers if interested. | The answer to this question kind of makes the question redundant.
It turns out that the machine has only 12 physical cores that accept two threads each.
The output of `multiprocessing.cpu_count()` is 24. However, `lscpu` shows that there are only two sockets with six cores each.
This explains why, above ten processes, the individual runs become slower. |
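(As an aside, here is a quick way to compare logical and physical core counts from Python. This is a minimal sketch that assumes the third-party `psutil` package is installed; `multiprocessing.cpu_count()` only reports logical CPUs.)
```py
import multiprocessing
import psutil  # assumed installed: pip install psutil

# Logical CPUs count hyper-threaded siblings; physical cores do not.
print("logical CPUs:  ", multiprocessing.cpu_count())      # e.g. 24 on such a machine
print("physical cores:", psutil.cpu_count(logical=False))  # e.g. 12
```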
65,157,911 | I"m struggling on how to count the letter, number, and special character in a string. I'm a beginner and exploring python. Thank you in advance guys!
```
string=input("Enter string: Abc123--- ")
count1=0
count2=0
count3=0
count4=0
for i in string:
    if(i.isletter()):
        count1=count1+1
        count2=count2+1
        count3=count3+1
        count4=count2+1
print("Letter count:")
print(count1)
print("Number count:")
print(count2)
print("Special Characters count:")
print(count3)
print("Total characters count:")
print(count4)
``` | 2020/12/05 | [
"https://Stackoverflow.com/questions/65157911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8055559/"
] | * You should import your classes directly:
```
from Item import Item # Assuming the file name is Item.py
from Inventory import Inventory # Assuming the file name is Inventory.py
```
and then you can do:
```
item1 = Item(0,"Hat", 14, 10.00)
```
* Then you use `Item` inside the `Inventory` class, but you didn't **import** `Item` there. Be careful and import it the same way with `from x import y`.
* Also you have an error in the line:
```
addingItem = Item(pn,id,name,amt,cost)
```
what is `pn`? Remove it and it should work:
```
addingItem = Item(id,name,amt,cost)
```
* Also avoid reusing built-in names like `list`, `id`, etc.; you might end up with problems. | The error TypeError: 'module' object is not callable is raised because of confusion between the class name and the module name. The problem is in the import line: you are importing a module, not a class. This happens because the module name and the class name are the same.
If you have a class MyClass in a file called MyClass.py, then you should write:
```
from MyClass import MyClass
``` |
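For reference, the counting loop in the question above is straightforward to fix: `str` has no `isletter()` method (the real methods are `isalpha()` and `isdigit()`), and each character should increment only the matching counter. A minimal sketch:
```py
text = input("Enter string: ")
letters = digits = specials = 0
for ch in text:
    if ch.isalpha():    # letters
        letters += 1
    elif ch.isdigit():  # numbers
        digits += 1
    else:               # everything else
        specials += 1
print("Letter count:", letters)
print("Number count:", digits)
print("Special Characters count:", specials)
print("Total characters count:", len(text))
```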
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | Often in a PDF, the image is simply stored as-is. For example, a PDF with a jpg inserted will have a range of bytes somewhere in the middle that when extracted is a valid jpg file. You can use this to very simply extract byte ranges from the PDF. I wrote about this some time ago, with sample code: [Extracting JPGs from PDFs](http://nedbatchelder.com/blog/200712/extracting_jpgs_from_pdfs.html). | I added all of those together in PyPDFTK [here](https://github.com/ronanpaixao/PyPDFTK/blob/master/pdf_images.py).
My own contribution is handling of `/Indexed` files as such:
```
for obj in xObject:
    if xObject[obj]['/Subtype'] == '/Image':
        size = (xObject[obj]['/Width'], xObject[obj]['/Height'])
        color_space = xObject[obj]['/ColorSpace']
        if isinstance(color_space, pdf.generic.ArrayObject) and color_space[0] == '/Indexed':
            color_space, base, hival, lookup = [v.getObject() for v in color_space]  # pg 262
        mode = img_modes[color_space]
        if xObject[obj]['/Filter'] == '/FlateDecode':
            data = xObject[obj].getData()
            img = Image.frombytes(mode, size, data)
            if color_space == '/Indexed':
                img.putpalette(lookup.getData())
                img = img.convert('RGB')
            img.save("{}{:04}.png".format(filename_prefix, i))
```
Note that when `/Indexed` files are found, you can't just compare `/ColorSpace` to a string, because it comes as an `ArrayObject`. So, we have to check the array and retrieve the indexed palette (`lookup` in the code) and set it in the PIL Image object, otherwise it stays uninitialized (zero) and the whole image shows as black.
My first instinct was to save them as GIFs (which is an indexed format), but my tests turned out that PNGs were smaller and looked the same way.
I found those types of images when printing to PDF with Foxit Reader PDF Printer. |
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | Libpoppler comes with a tool called "pdfimages" that does exactly this.
(On ubuntu systems it's in the poppler-utils package)
<http://poppler.freedesktop.org/>
<http://en.wikipedia.org/wiki/Pdfimages>
Windows binaries: <http://blog.alivate.com.au/poppler-windows/> | Here is my version from 2019 that recursively gets all images from PDF and reads them with PIL.
Compatible with Python 2/3. I also found that sometimes an image in a PDF may be zlib-compressed, so my code supports decompression.
```
#!/usr/bin/env python3
try:
    from StringIO import StringIO
except ImportError:
    from io import BytesIO as StringIO

from PIL import Image
from PyPDF2 import PdfFileReader, generic
import zlib

def get_color_mode(obj):
    try:
        cspace = obj['/ColorSpace']
    except KeyError:
        return None
    if cspace == '/DeviceRGB':
        return "RGB"
    elif cspace == '/DeviceCMYK':
        return "CMYK"
    elif cspace == '/DeviceGray':
        return "P"

    if isinstance(cspace, generic.ArrayObject) and cspace[0] == '/ICCBased':
        color_map = obj['/ColorSpace'][1].getObject()['/N']
        if color_map == 1:
            return "P"
        elif color_map == 3:
            return "RGB"
        elif color_map == 4:
            return "CMYK"

def get_object_images(x_obj):
    images = []
    for obj_name in x_obj:
        sub_obj = x_obj[obj_name]
        if '/Resources' in sub_obj and '/XObject' in sub_obj['/Resources']:
            images += get_object_images(sub_obj['/Resources']['/XObject'].getObject())
        elif sub_obj['/Subtype'] == '/Image':
            zlib_compressed = '/FlateDecode' in sub_obj.get('/Filter', '')
            if zlib_compressed:
                sub_obj._data = zlib.decompress(sub_obj._data)
            images.append((
                get_color_mode(sub_obj),
                (sub_obj['/Width'], sub_obj['/Height']),
                sub_obj._data
            ))
    return images

def get_pdf_images(pdf_fp):
    images = []
    try:
        pdf_in = PdfFileReader(open(pdf_fp, "rb"))
    except:
        return images
    for p_n in range(pdf_in.numPages):
        page = pdf_in.getPage(p_n)
        try:
            page_x_obj = page['/Resources']['/XObject'].getObject()
        except KeyError:
            continue
        images += get_object_images(page_x_obj)
    return images

if __name__ == "__main__":
    pdf_fp = "test.pdf"
    for image in get_pdf_images(pdf_fp):
        (mode, size, data) = image
        try:
            img = Image.open(StringIO(data))
        except Exception as e:
            print("Failed to read image with PIL: {}".format(e))
            continue
        # Do whatever you want with the image
``` |
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | You can use the module PyMuPDF. This outputs all images as .png files, but worked out of the box and is fast.
```
import fitz

doc = fitz.open("file.pdf")
for i in range(len(doc)):
    for img in doc.getPageImageList(i):
        xref = img[0]
        pix = fitz.Pixmap(doc, xref)
        if pix.n < 5:  # this is GRAY or RGB
            pix.writePNG("p%s-%s.png" % (i, xref))
        else:          # CMYK: convert to RGB first
            pix1 = fitz.Pixmap(fitz.csRGB, pix)
            pix1.writePNG("p%s-%s.png" % (i, xref))
            pix1 = None
        pix = None
```
[see here for more resources](https://github.com/rk700/PyMuPDF/wiki/How-to-Extract-Images-from-a-PDF-(v1.9.2))
Here is a modified version for fitz 1.19.6:
```
import os

import fitz  # pip install --upgrade pip; pip install --upgrade pymupdf
from tqdm import tqdm  # pip install tqdm

workdir = "your_folder"

for each_path in os.listdir(workdir):
    if ".pdf" in each_path:
        doc = fitz.Document(os.path.join(workdir, each_path))
        for i in tqdm(range(len(doc)), desc="pages"):
            for img in tqdm(doc.get_page_images(i), desc="page_images"):
                xref = img[0]
                image = doc.extract_image(xref)
                pix = fitz.Pixmap(doc, xref)
                pix.save(os.path.join(workdir, "%s_p%s-%s.png" % (each_path[:-4], i, xref)))

print("Done!")
``` | I did this for my own program, and found that the best library to use was PyMuPDF. It lets you find out the "xref" numbers of each image on each page, and use them to extract the raw image data from the PDF.
```
import fitz
from PIL import Image
import io
filePath = "path/to/file.pdf"
#opens doc using PyMuPDF
doc = fitz.Document(filePath)
#loads the first page
page = doc.loadPage(0)
#[First image on page described thru a list][First attribute on image list: xref n], check PyMuPDF docs under getImageList()
xref = page.getImageList()[0][0]
#gets the image as a dict, check docs under extractImage
baseImage = doc.extractImage(xref)
#gets the raw string image data from the dictionary and wraps it in a BytesIO object before using PIL to open it
image = Image.open(io.BytesIO(baseImage['image']))
#Displays image for good measure
image.show()
```
Definitely check out the docs, though. |
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | Libpoppler comes with a tool called "pdfimages" that does exactly this.
(On ubuntu systems it's in the poppler-utils package)
<http://poppler.freedesktop.org/>
<http://en.wikipedia.org/wiki/Pdfimages>
Windows binaries: <http://blog.alivate.com.au/poppler-windows/> | I installed [ImageMagick](http://www.imagemagick.org) on my server and then ran command-line calls through `Popen`:
```
#!/usr/bin/python

import sys
import os
import subprocess
import settings

IMAGE_PATH = os.path.join(settings.MEDIA_ROOT, 'pdf_input')

def extract_images(pdf):
    output = 'temp.png'
    cmd = 'convert ' + os.path.join(IMAGE_PATH, pdf) + ' ' + os.path.join(IMAGE_PATH, output)
    subprocess.Popen(cmd.split(), stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
```
This will create an image for every page and store them as temp-0.png, temp-1.png, and so on.
This is only 'extraction' if you have a pdf containing only images and no text. |
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | I prefer minecart as it is extremely easy to use. The snippet below shows how to extract images from a pdf:
```
# pip install minecart
import minecart

pdffile = open('Invoices.pdf', 'rb')
doc = minecart.Document(pdffile)
page = doc.get_page(0)  # getting a single page

# iterating through all pages
for page in doc.iter_pages():
    im = page.images[0].as_pil()  # requires pillow
    display(im)
``` | After some searching I found the following script, which works really well with my PDFs. It only handles JPGs, but it worked perfectly with my unprotected files. It also does not require any outside libraries.
Not to take any credit: the script originates from Ned Batchelder, not me.
Python3 code: extract jpgs from pdfs. Quick and dirty.
```
import sys

with open(sys.argv[1], "rb") as file:
    file.seek(0)
    pdf = file.read()

startmark = b"\xff\xd8"
startfix = 0
endmark = b"\xff\xd9"
endfix = 2
i = 0

njpg = 0
while True:
    istream = pdf.find(b"stream", i)
    if istream < 0:
        break
    istart = pdf.find(startmark, istream, istream + 20)
    if istart < 0:
        i = istream + 20
        continue
    iend = pdf.find(b"endstream", istart)
    if iend < 0:
        raise Exception("Didn't find end of stream!")
    iend = pdf.find(endmark, iend - 20)
    if iend < 0:
        raise Exception("Didn't find end of JPG!")
    istart += startfix
    iend += endfix
    print("JPG %d from %d to %d" % (njpg, istart, iend))
    jpg = pdf[istart:iend]
    with open("jpg%d.jpg" % njpg, "wb") as jpgfile:
        jpgfile.write(jpg)
    njpg += 1
    i = iend
``` |
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | Much easier solution:
Use the poppler-utils package. To install it use homebrew (homebrew is MacOS specific, but you can find the poppler-utils package for Windows or Linux here: <https://poppler.freedesktop.org/>). The first line of code below installs poppler-utils using homebrew. After installation, the second line (run from the command line) extracts images from a PDF file and names them "image\*". To run this program from within Python, use the os or subprocess module. The third line is code using the os module; beneath that is an example with subprocess (python 3.5 or later for the run() function). More info here: <https://www.cyberciti.biz/faq/easily-extract-images-from-pdf-file/>
`brew install poppler`
`pdfimages file.pdf image`
```
import os
os.system('pdfimages file.pdf image')
```
or
```
import subprocess
subprocess.run('pdfimages file.pdf image', shell=True)
``` | After some searching I found the following script, which works really well with my PDFs. It only handles JPGs, but it worked perfectly with my unprotected files. It also does not require any outside libraries.
Not to take any credit: the script originates from Ned Batchelder, not me.
Python3 code: extract jpgs from pdfs. Quick and dirty.
```
import sys

with open(sys.argv[1], "rb") as file:
    file.seek(0)
    pdf = file.read()

startmark = b"\xff\xd8"
startfix = 0
endmark = b"\xff\xd9"
endfix = 2
i = 0

njpg = 0
while True:
    istream = pdf.find(b"stream", i)
    if istream < 0:
        break
    istart = pdf.find(startmark, istream, istream + 20)
    if istart < 0:
        i = istream + 20
        continue
    iend = pdf.find(b"endstream", istart)
    if iend < 0:
        raise Exception("Didn't find end of stream!")
    iend = pdf.find(endmark, iend - 20)
    if iend < 0:
        raise Exception("Didn't find end of JPG!")
    istart += startfix
    iend += endfix
    print("JPG %d from %d to %d" % (njpg, istart, iend))
    jpg = pdf[istart:iend]
    with open("jpg%d.jpg" % njpg, "wb") as jpgfile:
        jpgfile.write(jpg)
    njpg += 1
    i = iend
``` |
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | Here is my version from 2019 that recursively gets all images from PDF and reads them with PIL.
Compatible with Python 2/3. I also found that sometimes an image in a PDF may be zlib-compressed, so my code supports decompression.
```
#!/usr/bin/env python3
try:
    from StringIO import StringIO
except ImportError:
    from io import BytesIO as StringIO

from PIL import Image
from PyPDF2 import PdfFileReader, generic
import zlib

def get_color_mode(obj):
    try:
        cspace = obj['/ColorSpace']
    except KeyError:
        return None
    if cspace == '/DeviceRGB':
        return "RGB"
    elif cspace == '/DeviceCMYK':
        return "CMYK"
    elif cspace == '/DeviceGray':
        return "P"

    if isinstance(cspace, generic.ArrayObject) and cspace[0] == '/ICCBased':
        color_map = obj['/ColorSpace'][1].getObject()['/N']
        if color_map == 1:
            return "P"
        elif color_map == 3:
            return "RGB"
        elif color_map == 4:
            return "CMYK"

def get_object_images(x_obj):
    images = []
    for obj_name in x_obj:
        sub_obj = x_obj[obj_name]
        if '/Resources' in sub_obj and '/XObject' in sub_obj['/Resources']:
            images += get_object_images(sub_obj['/Resources']['/XObject'].getObject())
        elif sub_obj['/Subtype'] == '/Image':
            zlib_compressed = '/FlateDecode' in sub_obj.get('/Filter', '')
            if zlib_compressed:
                sub_obj._data = zlib.decompress(sub_obj._data)
            images.append((
                get_color_mode(sub_obj),
                (sub_obj['/Width'], sub_obj['/Height']),
                sub_obj._data
            ))
    return images

def get_pdf_images(pdf_fp):
    images = []
    try:
        pdf_in = PdfFileReader(open(pdf_fp, "rb"))
    except:
        return images
    for p_n in range(pdf_in.numPages):
        page = pdf_in.getPage(p_n)
        try:
            page_x_obj = page['/Resources']['/XObject'].getObject()
        except KeyError:
            continue
        images += get_object_images(page_x_obj)
    return images

if __name__ == "__main__":
    pdf_fp = "test.pdf"
    for image in get_pdf_images(pdf_fp):
        (mode, size, data) = image
        try:
            img = Image.open(StringIO(data))
        except Exception as e:
            print("Failed to read image with PIL: {}".format(e))
            continue
        # Do whatever you want with the image
``` | You could use the `pdfimages` command on Ubuntu as well.
Install the poppler library using the commands below.
```
sudo apt install poppler-utils
sudo apt-get install python-poppler
pdfimages file.pdf image
```
The files created are (e.g., if there are two images in the pdf):
```
image-000.png
image-001.png
```
It works! Now you can use `subprocess.run` to run this from python. |
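A minimal sketch of that last step, assuming `pdfimages` is installed and on the PATH:
```py
import subprocess

# Writes image-000.png, image-001.png, ... for every image in file.pdf
# (-png forces PNG output; omit it for the default PPM/PBM files).
subprocess.run(["pdfimages", "-png", "file.pdf", "image"], check=True)
```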
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | I prefer minecart as it is extremely easy to use. The snippet below shows how to extract images from a pdf:
```
# pip install minecart
import minecart

pdffile = open('Invoices.pdf', 'rb')
doc = minecart.Document(pdffile)
page = doc.get_page(0)  # getting a single page

# iterating through all pages
for page in doc.iter_pages():
    im = page.images[0].as_pil()  # requires pillow
    display(im)
``` | As of February 2019, the solution given by @sylvain (at least on my setup) does not work without a small modification: `xObject[obj]['/Filter']` is not a value but a list, so in order to make the script work I had to modify the format checking as follows:
```
import PyPDF2, traceback
from PIL import Image

input1 = PyPDF2.PdfFileReader(open(src, "rb"))
nPages = input1.getNumPages()
print nPages
for i in range(nPages):
    print i
    page0 = input1.getPage(i)
    try:
        xObject = page0['/Resources']['/XObject'].getObject()
    except:
        xObject = []
    for obj in xObject:
        if xObject[obj]['/Subtype'] == '/Image':
            size = (xObject[obj]['/Width'], xObject[obj]['/Height'])
            data = xObject[obj].getData()
            try:
                if xObject[obj]['/ColorSpace'] == '/DeviceRGB':
                    mode = "RGB"
                elif xObject[obj]['/ColorSpace'] == '/DeviceCMYK':
                    mode = "CMYK"
                    # will cause errors when saving
                else:
                    mode = "P"
                fn = 'p%03d-%s' % (i + 1, obj[1:])
                print '\t', fn
                if '/FlateDecode' in xObject[obj]['/Filter']:
                    img = Image.frombytes(mode, size, data)
                    img.save(fn + ".png")
                elif '/DCTDecode' in xObject[obj]['/Filter']:
                    img = open(fn + ".jpg", "wb")
                    img.write(data)
                    img.close()
                elif '/JPXDecode' in xObject[obj]['/Filter']:
                    img = open(fn + ".jp2", "wb")
                    img.write(data)
                    img.close()
                elif '/LZWDecode' in xObject[obj]['/Filter']:
                    img = open(fn + ".tif", "wb")
                    img.write(data)
                    img.close()
                else:
                    print 'Unknown format:', xObject[obj]['/Filter']
            except:
                traceback.print_exc()
``` |
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | Here is my version from 2019 that recursively gets all images from PDF and reads them with PIL.
Compatible with Python 2/3. I also found that sometimes an image in a PDF may be zlib-compressed, so my code supports decompression.
```
#!/usr/bin/env python3
try:
    from StringIO import StringIO
except ImportError:
    from io import BytesIO as StringIO

from PIL import Image
from PyPDF2 import PdfFileReader, generic
import zlib

def get_color_mode(obj):
    try:
        cspace = obj['/ColorSpace']
    except KeyError:
        return None
    if cspace == '/DeviceRGB':
        return "RGB"
    elif cspace == '/DeviceCMYK':
        return "CMYK"
    elif cspace == '/DeviceGray':
        return "P"

    if isinstance(cspace, generic.ArrayObject) and cspace[0] == '/ICCBased':
        color_map = obj['/ColorSpace'][1].getObject()['/N']
        if color_map == 1:
            return "P"
        elif color_map == 3:
            return "RGB"
        elif color_map == 4:
            return "CMYK"

def get_object_images(x_obj):
    images = []
    for obj_name in x_obj:
        sub_obj = x_obj[obj_name]
        if '/Resources' in sub_obj and '/XObject' in sub_obj['/Resources']:
            images += get_object_images(sub_obj['/Resources']['/XObject'].getObject())
        elif sub_obj['/Subtype'] == '/Image':
            zlib_compressed = '/FlateDecode' in sub_obj.get('/Filter', '')
            if zlib_compressed:
                sub_obj._data = zlib.decompress(sub_obj._data)
            images.append((
                get_color_mode(sub_obj),
                (sub_obj['/Width'], sub_obj['/Height']),
                sub_obj._data
            ))
    return images

def get_pdf_images(pdf_fp):
    images = []
    try:
        pdf_in = PdfFileReader(open(pdf_fp, "rb"))
    except:
        return images
    for p_n in range(pdf_in.numPages):
        page = pdf_in.getPage(p_n)
        try:
            page_x_obj = page['/Resources']['/XObject'].getObject()
        except KeyError:
            continue
        images += get_object_images(page_x_obj)
    return images

if __name__ == "__main__":
    pdf_fp = "test.pdf"
    for image in get_pdf_images(pdf_fp):
        (mode, size, data) = image
        try:
            img = Image.open(StringIO(data))
        except Exception as e:
            print("Failed to read image with PIL: {}".format(e))
            continue
        # Do whatever you want with the image
``` | I rewrote the solutions as a single Python class.
It should be easy to work with. If you come across a new "/Filter" or "/ColorSpace", just add it to the internal dictionaries.
<https://github.com/survtur/extract_images_from_pdf>
Requirements:
* Python3.6+
* PyPDF2
* PIL |
2,693,820 | How might one extract all images from a pdf document, at native resolution and format? (Meaning extract tiff as tiff, jpeg as jpeg, etc. and without resampling). Layout is unimportant, I don't care where the source image is located on the page.
I'm using python 2.7 but can use 3.x if required. | 2010/04/22 | [
"https://Stackoverflow.com/questions/2693820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14420/"
] | After reading the posts using **pyPDF2**: the error `NotImplementedError: unsupported filter /DCTDecode` while using @sylvain's code must come from the method `.getData()`; it is solved by using `._data` instead, as @Alex Paramonov does.
So far I have only encountered "DCTDecode" cases, but I am sharing the adapted code, which includes remarks from the different posts: the `zlib` handling from @Alex Paramonov, and `sub_obj['/Filter']` being a list, from @mxl.
Hope it can help pyPDF2 users. Here is the code:
```py
import sys
import PyPDF2, traceback
import zlib

try:
    from PIL import Image
except ImportError:
    import Image

pdf_path = 'path_to_your_pdf_file.pdf'
input1 = PyPDF2.PdfFileReader(open(pdf_path, "rb"))
nPages = input1.getNumPages()

for i in range(nPages):
    page0 = input1.getPage(i)
    if '/XObject' in page0['/Resources']:
        try:
            xObject = page0['/Resources']['/XObject'].getObject()
        except:
            xObject = []

        for obj_name in xObject:
            sub_obj = xObject[obj_name]
            if sub_obj['/Subtype'] == '/Image':
                zlib_compressed = '/FlateDecode' in sub_obj.get('/Filter', '')
                if zlib_compressed:
                    sub_obj._data = zlib.decompress(sub_obj._data)

                size = (sub_obj['/Width'], sub_obj['/Height'])
                data = sub_obj._data  # sub_obj.getData()
                try:
                    if sub_obj['/ColorSpace'] == '/DeviceRGB':
                        mode = "RGB"
                    elif sub_obj['/ColorSpace'] == '/DeviceCMYK':
                        mode = "CMYK"
                        # will cause errors when saving (might need convert to RGB first)
                    else:
                        mode = "P"

                    fn = 'p%03d-%s' % (i + 1, obj_name[1:])
                    if '/Filter' in sub_obj:
                        if '/FlateDecode' in sub_obj['/Filter']:
                            img = Image.frombytes(mode, size, data)
                            img.save(fn + ".png")
                        elif '/DCTDecode' in sub_obj['/Filter']:
                            img = open(fn + ".jpg", "wb")
                            img.write(data)
                            img.close()
                        elif '/JPXDecode' in sub_obj['/Filter']:
                            img = open(fn + ".jp2", "wb")
                            img.write(data)
                            img.close()
                        elif '/CCITTFaxDecode' in sub_obj['/Filter']:
                            img = open(fn + ".tiff", "wb")
                            img.write(data)
                            img.close()
                        elif '/LZWDecode' in sub_obj['/Filter']:
                            img = open(fn + ".tif", "wb")
                            img.write(data)
                            img.close()
                        else:
                            print('Unknown format:', sub_obj['/Filter'])
                    else:
                        img = Image.frombytes(mode, size, data)
                        img.save(fn + ".png")
                except:
                    traceback.print_exc()
    else:
        print("No image found for page %d" % (i + 1))
``` | **Try the code below. It will extract all images from the pdf.**
```
import sys
import PyPDF2
from PIL import Image

pdf = sys.argv[1]
print(pdf)
input1 = PyPDF2.PdfFileReader(open(pdf, "rb"))
for x in range(0, input1.numPages):
    xObject = input1.getPage(x)
    xObject = xObject['/Resources']['/XObject'].getObject()
    for obj in xObject:
        if xObject[obj]['/Subtype'] == '/Image':
            size = (xObject[obj]['/Width'], xObject[obj]['/Height'])
            print(size)
            data = xObject[obj]._data
            # print(data)
            print(xObject[obj]['/Filter'])
            if xObject[obj]['/Filter'][0] == '/DCTDecode':
                img_name = str(x) + ".jpg"
                print(img_name)
                img = open(img_name, "wb")
                img.write(data)
                img.close()
            print(str(x) + " is done")
``` |
94,334 | What is the best python framework to create distributed applications? For example to build a P2P app. | 2008/09/18 | [
"https://Stackoverflow.com/questions/94334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You could check out [pyprocessing](http://pyprocessing.berlios.de/), which will be included in the standard library as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading. | You could download the source of BitTorrent for starters and see how they did it.
<http://download.bittorrent.com/dl/> |
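(pyprocessing was later folded into the standard library as the `multiprocessing` module; a minimal sketch of the threading-like API mentioned above, using the modern name:)
```py
from multiprocessing import Pool

def work(n):
    return n * n

if __name__ == "__main__":
    # Four worker processes map the function over the inputs in parallel.
    with Pool(processes=4) as pool:
        print(pool.map(work, range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```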
94,334 | What is the best python framework to create distributed applications? For example to build a P2P app. | 2008/09/18 | [
"https://Stackoverflow.com/questions/94334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | I think you mean "Networked Apps"? Distributed means an app that can split its workload among multiple worker clients over the network.
You probably want:
[Twisted](http://twistedmatrix.com/trac/) | You could download the source of BitTorrent for starters and see how they did it.
<http://download.bittorrent.com/dl/> |
94,334 | What is the best python framework to create distributed applications? For example to build a P2P app. | 2008/09/18 | [
"https://Stackoverflow.com/questions/94334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You probably want [Twisted](http://twistedmatrix.com/trac/). There is a P2P framework for Twisted called "[Vertex](http://divmod.org/trac/wiki/DivmodVertex)". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained. | You could download the source of BitTorrent for starters and see how they did it.
<http://download.bittorrent.com/dl/> |
94,334 | What is the best python framework to create distributed applications? For example to build a P2P app. | 2008/09/18 | [
"https://Stackoverflow.com/questions/94334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You could check out [pyprocessing](http://pyprocessing.berlios.de/), which will be included in the standard library as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading. | If it's something where you're going to need tons of threads and need better concurrent performance, check out [Stackless Python](http://www.stackless.com/). Otherwise you could just use the [SOAP](http://en.wikipedia.org/wiki/SOAP) or [XML-RPC](http://www.xmlrpc.com/) protocols. In response to Ben's post, if you don't want to look over the BitTorrent source, you could just look at the article on [the BitTorrent protocol](http://en.wikipedia.org/wiki/BitTorrent_(protocol)).
94,334 | What is the best python framework to create distributed applications? For example to build a P2P app. | 2008/09/18 | [
"https://Stackoverflow.com/questions/94334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | I think you mean "Networked Apps"? Distributed means an app that can split its workload among multiple worker clients over the network.
You probably want:
[Twisted](http://twistedmatrix.com/trac/) | If it's something where you're going to need tons of threads and need better concurrent performance, check out [Stackless Python](http://www.stackless.com/). Otherwise you could just use the [SOAP](http://en.wikipedia.org/wiki/SOAP) or [XML-RPC](http://www.xmlrpc.com/) protocols. In response to Ben's post, if you don't want to look over the BitTorrent source, you could just look at the article on [the BitTorrent protocol](http://en.wikipedia.org/wiki/BitTorrent_(protocol)). |
94,334 | What is the best python framework to create distributed applications? For example to build a P2P app. | 2008/09/18 | [
"https://Stackoverflow.com/questions/94334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You probably want [Twisted](http://twistedmatrix.com/trac/). There is a P2P framework for Twisted called "[Vertex](http://divmod.org/trac/wiki/DivmodVertex)". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained. | If it's something where you're going to need tons of threads and need better concurrent performance, check out [Stackless Python](http://www.stackless.com/). Otherwise you could just use the [SOAP](http://en.wikipedia.org/wiki/SOAP) or [XML-RPC](http://www.xmlrpc.com/) protocols. In response to Ben's post, if you don't want to look over the BitTorrent source, you could just look at the article on [the BitTorrent protocol](http://en.wikipedia.org/wiki/BitTorrent_(protocol)). |
94,334 | What is the best python framework to create distributed applications? For example to build a P2P app. | 2008/09/18 | [
"https://Stackoverflow.com/questions/94334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | You could check out [pyprocessing](http://pyprocessing.berlios.de/), which will be included in the standard library as of 2.6. It allows you to run tasks on multiple processes using an API similar to threading. | You probably want [Twisted](http://twistedmatrix.com/trac/). There is a P2P framework for Twisted called "[Vertex](http://divmod.org/trac/wiki/DivmodVertex)". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained. |
94,334 | What is the best python framework to create distributed applications? For example to build a P2P app. | 2008/09/18 | [
"https://Stackoverflow.com/questions/94334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | I think you mean "Networked Apps"? Distributed means an app that can split its workload among multiple worker clients over the network.
You probably want:
[Twisted](http://twistedmatrix.com/trac/) | You probably want [Twisted](http://twistedmatrix.com/trac/). There is a P2P framework for Twisted called "[Vertex](http://divmod.org/trac/wiki/DivmodVertex)". While not actively maintained, it does allow you to tunnel through NATs and make connections directly between users in a very abstract way; if there were more interest in this sort of thing I'm sure it would be more actively maintained. |
51,817,237 | I am working on a Flask project and I am using marshmallow to validate user input.
Below is a code snippet:
```
def create_user():
    in_data = request.get_json()
    data, errors = Userschema.load(in_data)
    if errors:
        return (errors), 400
    fname = data.get('fname')
    lname = data.get('lname')
    email = data.get('email')
    password = data.get('password')
    cpass = data.get('cpass')
```
When I eliminate the `errors` part, the code works perfectly. When I run it as it is, I get the following error:
> builtins.ValueError
>
> ValueError: too many values to unpack (expected 2)
>
> Traceback (most recent call last)
>
> File "/home/..project-details.../venv3/lib/python3.6/site-packages/flask/app.py", line 2000, in **call**
>
> error = None
>
> ctx.auto\_pop(error)
```
def __call__(self, environ, start_response):
    """Shortcut for :attr:`wsgi_app`."""
    return self.wsgi_app(environ, start_response)

def __repr__(self):
    return '<%s %r>' % (
        self.__class__.__name__,
        self.name,
```
Note: The var `in_data` is a dict.
Any ideas?? | 2018/08/13 | [
"https://Stackoverflow.com/questions/51817237",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10217900/"
] | I recommend you check your dependency versions.
Per the [Marshmallow API reference](http://marshmallow.readthedocs.io/en/latest/api_reference.html#schema), schema.load returns:
>
> Changed in version 3.0.0b7: This method returns the deserialized data rather than a (data, errors) duple. A ValidationError is raised if invalid data are passed.
>
>
>
I suspect Python is trying to unpack the returned dict itself into the two variables. Unpacking a dict iterates over its keys, so a dict with more than two keys raises exactly this error. The below reproduces it:
```
d = {'a': 1, 'b': 2, 'c': 3}
a, b = d  # ValueError: too many values to unpack (expected 2)
print("%s : %s" % (a, b))
``` | According to the documentation, in its most recent version (3.17.1) the way of handling validation errors is as follows:
```
from marshmallow import ValidationError

try:
    result = UserSchema().load({"name": "John", "email": "foo"})
except ValidationError as err:
    print(err.messages)    # => {"email": ['"foo" is not a valid email address.']}
    print(err.valid_data)  # => {"name": "John"}
``` |
48,072,131 | I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly.
I am new to python programming. I have this simple code to generate Fibonacci series.
```
1: def fibo(n):
2:     a = 0
3:     b = 1
4:     for x in range(n):
5:         print (a, end=' ')
6:         #a, b = b, a+b
7:         a = b
8:         b = a+b
9:     print()
10: num = int(input("enter n value: "))
11: print(fibo(num))
```
If I execute the above code as-is the result I get is as follows
```
enter n value: 10
0 1 2 4 8 16 32 64 128 256
```
If I uncomment #6 and comment out lines #7 and #8, the result I get is the actual fibo series.
```
enter n value: 10
0 1 1 2 3 5 8 13 21 34
```
I would like to know what is the difference between
```
a, b = b, a + b
```
and
```
a = b
b = a + b
```
Programming IDE used: PyCharm Community 2017.3 | 2018/01/03 | [
"https://Stackoverflow.com/questions/48072131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755089/"
] | ```
a = b
b = a + b
```
is actually:
```
a = b
b = b + b
```
what you want is:
```
a = b
b = old_value_of_a + b
```
When you do
`a, b = b, a + b`
it really is doing:
```
tmp_a = b
tmp_b = a + b
a = tmp_a
b = tmp_b
```
which is what you want | In line 7, you've already assigned the value in `b` to `a`, so in line 8 the new value of `b` is actually double the old value of `b`.
In line 6, by contrast, the values on the right side of `=` still use the old values, which is why you get the Fibonacci series. |
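A snippet you can paste into a REPL to see the difference (starting from the same `a = 0`, `b = 1` as in the question):
```py
a, b = 0, 1
a, b = b, a + b   # the tuple (1, 0 + 1) is built before anything is assigned
print(a, b)       # 1 1

a, b = 0, 1
a = b             # a is now 1; the old value 0 is lost
b = a + b         # uses the NEW a, so b = 1 + 1 = 2
print(a, b)       # 1 2
```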
48,072,131 | I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly.
I am new to python programming. I have this simple code to generate Fibonacci series.
```
1: def fibo(n):
2:     a = 0
3:     b = 1
4:     for x in range(n):
5:         print (a, end=' ')
6:         #a, b = b, a+b
7:         a = b
8:         b = a+b
9:     print()
10: num = int(input("enter n value: "))
11: print(fibo(num))
```
If I execute the above code as-is the result I get is as follows
```
enter n value: 10
0 1 2 4 8 16 32 64 128 256
```
If I uncomment #6 and comment out lines #7 and #8, the result I get is the actual fibo series.
```
enter n value: 10
0 1 1 2 3 5 8 13 21 34
```
I would like to know what is the difference between
```
a, b = b, a + b
```
and
```
a = b
b = a + b
```
Programming IDE used: PyCharm Community 2017.3 | 2018/01/03 | [
"https://Stackoverflow.com/questions/48072131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755089/"
] | In line 7, you've already assigned the value in `b` to `a`, so in line 8 the new value of `b` is actually double the old value of `b`.
In line 6, by contrast, the values on the right side of `=` still use the old values, which is why you get the Fibonacci series. | I thought I'd make this simple so anyone can understand it
if you use this kind of syntax
```
a = 10
b = 20
a = b
b = a+b
print (a)
print (b)
```
after initially assigning a = 10, it will then assign a = 20; since Python is a dynamically typed language, it will change the value of variable a from 10 to 20,
so the result will be:
```
a=20
b=40
```
but if we use
```
a = 10
b = 20
a,b = b,a+b
print (a)
print (b)
```
this will assign the values in a single line, so the values of a and b used on the right-hand side are exactly the ones initialised above, and the result,
which is the correct solution, will be:
```
a=20
b=30
``` |
48,072,131 | I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly.
I am new to python programming. I have this simple code to generate Fibonacci series.
```
1: def fibo(n):
2:     a = 0
3:     b = 1
4:     for x in range(n):
5:         print (a, end=' ')
6:         #a, b = b, a+b
7:         a = b
8:         b = a+b
9:     print()
10: num = int(input("enter n value: "))
11: print(fibo(num))
```
If I execute the above code as-is the result I get is as follows
```
enter n value: 10
0 1 2 4 8 16 32 64 128 256
```
If I uncomment #6 and comment out lines #7 and #8, the result I get is the actual fibo series.
```
enter n value: 10
0 1 1 2 3 5 8 13 21 34
```
I would like to know what is the difference between
```
a, b = b, a + b
```
and
```
a = b
b = a + b
```
Programming IDE used: PyCharm Community 2017.3 | 2018/01/03 | [
"https://Stackoverflow.com/questions/48072131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755089/"
] | In line 7, you've already assigned the value in `b` to `a`, so in line 8 the new value of `b` is actually double the old value of `b`.
In line 6, by contrast, the values on the right side of `=` still use the old values, which is why you get the Fibonacci series. | I think the commented-out `#` line is the Pythonic solution. But if you find it confusing, you can use a temporary variable: assign the old value to `temp` first, then change the values. |
48,072,131 | I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly.
I am new to python programming. I have this simple code to generate Fibonacci series.
```
1: def fibo(n):
2:     a = 0
3:     b = 1
4:     for x in range(n):
5:         print (a, end=' ')
6:         #a, b = b, a+b
7:         a = b
8:         b = a+b
9:     print()
10: num = int(input("enter n value: "))
11: print(fibo(num))
```
If I execute the above code as-is the result I get is as follows
```
enter n value: 10
0 1 2 4 8 16 32 64 128 256
```
If I uncomment #6 and comment out lines #7 and #8, the result I get is the actual fibo series.
```
enter n value: 10
0 1 1 2 3 5 8 13 21 34
```
I would like to know what is the difference between
```
a, b = b, a + b
```
and
```
a = b
b = a + b
```
Programming IDE used: PyCharm Community 2017.3 | 2018/01/03 | [
"https://Stackoverflow.com/questions/48072131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755089/"
] | ```
a = b
b = a + b
```
is actually:
```
a = b
b = b + b
```
what you want is:
```
a = b
b = old_value_of_a + b
```
When you do
`a, b = b, a + b`
it really is doing:
```
tmp_a = b
tmp_b = a + b
a = tmp_a
b = tmp_b
```
which is what you want | [Assignment Statements](https://docs.python.org/3/reference/simple_stmts.html#assignment-statements) assign the reference of the source variable to the target variable. Let's walk through an example to understand more.
```
>>> a = 5
>>> b = 6
>>> a = b
```
In this example `b` is the source variable and `a` is the target variable. Now the memory addresses of both variables are the same. We can confirm this as well:
```
>>> hex(id(a)), hex(id(b))
>>> ('0x1002739e0', '0x1002739e0')
```
Another test to confirm this is to use [is](https://stackoverflow.com/questions/13650293/understanding-pythons-is-operator) operator
```
>>> a is b
>>> True
```
Now coming back to your example. First statement
```
>>> a, b = b, a + b
```
Assigns `b` to `a` and `(a+b)` to `b`. This happens as a single operation, so afterwards the two variables are different. We can apply the above tests to confirm this:
```
>>> a is b
>>> False
>>> hex(id(a)), hex(id(b))
>>> ('0x1002739e0', '0x2008739t0')
```
The second statement
```
>>> a = b
>>> b = a + b
```
Assigns `b` to `a` and then `(a+b)` to `b`. These are two separate statements, so after the first step `a` and `b` are already identical. Thus the second statement is equivalent to `b = b + b`. |
48,072,131 | I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly.
I am new to python programming. I have this simple code to generate Fibonacci series.
```
1: def fibo(n):
2:     a = 0
3:     b = 1
4:     for x in range(n):
5:         print (a, end=' ')
6:         #a, b = b, a+b
7:         a = b
8:         b = a+b
9:     print()
10: num = int(input("enter n value: "))
11: print(fibo(num))
```
If I execute the above code as-is the result I get is as follows
```
enter n value: 10
0 1 2 4 8 16 32 64 128 256
```
If I uncomment #6 and comment out lines #7 and #8, the result I get is the actual fibo series.
```
enter n value: 10
0 1 1 2 3 5 8 13 21 34
```
I would like to know what is the difference between
```
a, b = b, a + b
```
and
```
a = b
b = a + b
```
Programming IDE used: PyCharm Community 2017.3 | 2018/01/03 | [
"https://Stackoverflow.com/questions/48072131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755089/"
] | ```
a = b
b = a + b
```
is actually:
```
a = b
b = b + b
```
what you want is:
```
a = b
b = old_value_of_a + b
```
When you do
`a, b = b, a + b`
it really is doing:
```
tmp_a = b
tmp_b = a + b
a = tmp_a
b = tmp_b
```
which is what you want | I thought I'd make this simple so anyone can understand it
if you use this kind of syntax
```
a = 10
b = 20
a = b
b = a+b
print (a)
print (b)
```
after initially assigning a = 10, it will then assign a = 20; since Python is a dynamically typed language, it will change the value of variable a from 10 to 20,
so the result will be:
```
a=20
b=40
```
but if we use
```
a = 10
b = 20
a,b = b,a+b
print (a)
print (b)
```
this will assign the values in a single line, so the values of a and b used on the right-hand side are exactly the ones initialised above, and the result,
which is the correct solution, will be:
```
a=20
b=30
``` |
48,072,131 | I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly.
I am new to python programming. I have this simple code to generate Fibonacci series.
```
1: def fibo(n):
2:     a = 0
3:     b = 1
4:     for x in range(n):
5:         print (a, end=' ')
6:         #a, b = b, a+b
7:         a = b
8:         b = a+b
9:     print()
10: num = int(input("enter n value: "))
11: print(fibo(num))
```
If I execute the above code as-is the result I get is as follows
```
enter n value: 10
0 1 2 4 8 16 32 64 128 256
```
If I uncomment #6 and comment out lines #7 and #8, the result I get is the actual fibo series.
```
enter n value: 10
0 1 1 2 3 5 8 13 21 34
```
I would like to know what is the difference between
```
a, b = b, a + b
```
and
```
a = b
b = a + b
```
Programming IDE used: PyCharm Community 2017.3 | 2018/01/03 | [
"https://Stackoverflow.com/questions/48072131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755089/"
] | ```
a = b
b = a + b
```
is actually:
```
a = b
b = b + b
```
what you want is:
```
a = b
b = old_value_of_a + b
```
When you do
`a, b = b, a + b`
it really is doing:
```
tmp_a = b
tmp_b = a + b
a = tmp_a
b = tmp_b
```
which is what you want | I think the commented-out `#` line is the Pythonic solution. But if you find it confusing, you can use a temporary variable: assign the old value to `temp` first, then change the values. |
48,072,131 | I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly.
I am new to python programming. I have this simple code to generate Fibonacci series.
```
1: def fibo(n):
2:     a = 0
3:     b = 1
4:     for x in range(n):
5:         print (a, end=' ')
6:         #a, b = b, a+b
7:         a = b
8:         b = a+b
9:     print()
10: num = int(input("enter n value: "))
11: print(fibo(num))
```
If I execute the above code as-is the result I get is as follows
```
enter n value: 10
0 1 2 4 8 16 32 64 128 256
```
If I uncomment #6 and comment out lines #7 and #8, the result I get is the actual fibo series.
```
enter n value: 10
0 1 1 2 3 5 8 13 21 34
```
I would like to know what is the difference between
```
a, b = b, a + b
```
and
```
a = b
b = a + b
```
Programming IDE used: PyCharm Community 2017.3 | 2018/01/03 | [
"https://Stackoverflow.com/questions/48072131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755089/"
] | [Assignment Statements](https://docs.python.org/3/reference/simple_stmts.html#assignment-statements) assign the reference of the source variable to the target variable. Let's walk through an example to understand more.
```
>>> a = 5
>>> b = 6
>>> a = b
```
In this example `b` is the source variable and `a` is the target variable. Now the memory addresses of both variables are the same. We can confirm this as well:
```
>>> hex(id(a)), hex(id(b))
>>> ('0x1002739e0', '0x1002739e0')
```
Another test to confirm this is to use [is](https://stackoverflow.com/questions/13650293/understanding-pythons-is-operator) operator
```
>>> a is b
>>> True
```
Now coming back to your example. First statement
```
>>> a, b = b, a + b
```
Assigns `b` to `a` and `(a+b)` to `b`. This happens as a single operation, so afterwards the two variables are different. We can apply the above tests to confirm this:
```
>>> a is b
>>> False
>>> hex(id(a)), hex(id(b))
>>> ('0x1002739e0', '0x2008739t0')
```
The second statement
```
>>> a = b
>>> b = a + b
```
Assigns `b` to `a` and then `(a+b)` to `b`. These are two separate statements, so after the first step `a` and `b` are already identical. Thus the second statement is equivalent to `b = b + b`. | I thought I'd make this simple so anyone can understand it
if you use this kind of syntax
```
a = 10
b = 20
a = b
b = a+b
print (a)
print (b)
```
after initially assigning a = 10, it will then assign a = 20; since Python is a dynamically typed language, it will change the value of variable a from 10 to 20,
so the result will be:
```
a=20
b=40
```
but if we use
```
a = 10
b = 20
a,b = b,a+b
print (a)
print (b)
```
this will assign the values in a single line, so the values of a and b used on the right-hand side are exactly the ones initialised above, and the result,
which is the correct solution, will be:
```
a=20
b=30
``` |
48,072,131 | I am not sure what would be an appropriate heading for this question and this can be a repeated question as well. So please guide accordingly.
I am new to python programming. I have this simple code to generate Fibonacci series.
```
1: def fibo(n):
2:     a = 0
3:     b = 1
4:     for x in range(n):
5:         print (a, end=' ')
6:         #a, b = b, a+b
7:         a = b
8:         b = a+b
9:     print()
10: num = int(input("enter n value: "))
11: print(fibo(num))
```
If I execute the above code as-is the result I get is as follows
```
enter n value: 10
0 1 2 4 8 16 32 64 128 256
```
If I uncomment #6 and comment out lines #7 and #8, the result I get is the actual fibo series.
```
enter n value: 10
0 1 1 2 3 5 8 13 21 34
```
I would like to know what is the difference between
```
a, b = b, a + b
```
and
```
a = b
b = a + b
```
Programming IDE used: PyCharm Community 2017.3 | 2018/01/03 | [
"https://Stackoverflow.com/questions/48072131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1755089/"
] | [Assignment Statements](https://docs.python.org/3/reference/simple_stmts.html#assignment-statements) assign the reference of the source variable to the target variable. Let's walk through an example to understand more.
```
>>> a = 5
>>> b = 6
>>> a = b
```
In this example `b` is the source variable and `a` is the target variable. Now the memory addresses of both variables are the same. We can confirm this as well:
```
>>> hex(id(a)), hex(id(b))
>>> ('0x1002739e0', '0x1002739e0')
```
Another test to confirm this is to use [is](https://stackoverflow.com/questions/13650293/understanding-pythons-is-operator) operator
```
>>> a is b
>>> True
```
Now coming back to your example. First statement
```
>>> a, b = b, a + b
```
Assigns `b` to `a` and `(a+b)` to `b`. This happens as a single operation, so afterwards the two variables are different. We can apply the above tests to confirm this:
```
>>> a is b
>>> False
>>> hex(id(a)), hex(id(b))
>>> ('0x1002739e0', '0x2008739t0')
```
The second statement
```
>>> a = b
>>> b = a + b
```
Assigns `b` to `a` and then `(a+b)` to `b`. These are two separate statements, so after the first step `a` and `b` are already identical. Thus the second statement is equivalent to `b = b + b`. | I think the commented-out `#` line is the Pythonic solution. But if you find it confusing, you can use a temporary variable: assign the old value to `temp` first, then change the values. |
43,566,044 | Python does a lot with magic methods and most of these are part of some protocol. I am familiar with the "iterator protocol" and the "number protocol" but recently stumbled over the term ["sequence protocol"](https://docs.python.org/c-api/sequence.html#sequence-protocol). But even after some research I'm not exactly sure what the "sequence protocol" is.
For example the C API function [`PySequence_Check`](https://docs.python.org/c-api/sequence.html#c.PySequence_Check) checks (according to the documentation) if some object implements the "sequence protocol". The [source code](https://github.com/python/cpython/blob/813f943c592cf225871b99cffc99304c8cbbee40/Objects/abstract.c#L1471) indicates that this is a class that's not a dict but implements a `__getitem__` method which is *roughly* identical to what the documentation on [`iter`](https://docs.python.org/library/functions.html#iter) also states:
>
> [...]must support the sequence protocol (the `__getitem__()` method with integer arguments starting at 0).[...]
>
>
>
But the requirement to start with `0` isn't something that's "implemented" in `PySequence_Check`.
Then there is also the [`collections.abc.Sequence`](https://docs.python.org/library/collections.abc.html#collections-abstract-base-classes) type, which basically says the instance has to implement `__reversed__`, `__contains__`, `__iter__` and `__len__`.
But by that definition a class implementing the "sequence protocol" isn't necessarily a Sequence, for example the ["data model"](https://docs.python.org/reference/datamodel.html#the-standard-type-hierarchy) and the abstract class guarantee that a sequence has a length. But a class just implementing `__getitem__` (passing the `PySequence_Check`) throws an exception when using `len(an_instance_of_that_class)`.
Could someone please clarify for me the difference between a sequence and the sequence protocol (if there's a definition for the protocol besides reading the source code) and when to use which definition? | 2017/04/23 | [
"https://Stackoverflow.com/questions/43566044",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5393381/"
] | It's not really consistent.
Here's [`PySequence_Check`](https://github.com/python/cpython/blob/3.6/Objects/abstract.c#L1460):
```
int
PySequence_Check(PyObject *s)
{
    if (PyDict_Check(s))
        return 0;
    return s != NULL && s->ob_type->tp_as_sequence &&
           s->ob_type->tp_as_sequence->sq_item != NULL;
}
```
`PySequence_Check` checks if an object provides the C sequence protocol, implemented through a `tp_as_sequence` member in the `PyTypeObject` representing the object's type. This `tp_as_sequence` member is a pointer to a struct containing a bunch of functions for sequence behavior, such as `sq_item` for item retrieval by numeric index and `sq_ass_item` for item assignment.
Specifically, `PySequence_Check` requires that its argument is not a dict, and that it provides `sq_item`.
Types with a `__getitem__` written in Python will provide `sq_item` regardless of whether they're conceptually sequences or mappings, so a mapping written in Python that doesn't inherit from `dict` will pass `PySequence_Check`.
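For instance, a minimal pure-Python sketch of that pitfall: `iter()` relies on the same fallback protocol, so a class defining only `__getitem__` is treated as a sequence even though it has no length:
```
class GetitemOnly:
    def __getitem__(self, index):
        if index >= 3:
            raise IndexError(index)  # the fallback protocol stops on IndexError
        return index * 10

g = GetitemOnly()
print(list(g))  # [0, 10, 20], built via __getitem__(0), __getitem__(1), ...
# len(g) would raise TypeError: object of type 'GetitemOnly' has no len()
```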
---
On the other hand, `collections.abc.Sequence` only checks whether an object concretely inherits from `collections.abc.Sequence` or whether its class (or a superclass) is explicitly `register`ed with `collections.abc.Sequence`. If you just implement a sequence yourself without doing either of those things, it won't pass `isinstance(your_sequence, Sequence)`. Also, most classes registered with `collections.abc.Sequence` don't support all of `collections.abc.Sequence`'s methods. Overall, `collections.abc.Sequence` is a lot less reliable than people commonly expect it to be.
---
As for what counts as a sequence in practice, it's usually anything that supports `__len__` and `__getitem__` with integer indexes starting at 0 and isn't a mapping. If the docs for a function say it takes any sequence, that's almost always all it needs. Unfortunately, "isn't a mapping" is hard to test for, for reasons similar to how "is a sequence" is hard to pin down. | For a type to be in accordance with the sequence protocol, these 4 conditions must be met:
* Retrieve elements by index
`item = seq[index]`
* Find items by value
`index = seq.index(item)`
* Count items
`num = seq.count(item)`
* Produce a reversed sequence
`r = reversed(seq)`
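A minimal sketch of a class meeting all four conditions by subclassing `collections.abc.Sequence`, which derives `index`, `count`, iteration, and reversal from `__getitem__` and `__len__`:
```
from collections.abc import Sequence

class Tens(Sequence):
    def __len__(self):
        return 5

    def __getitem__(self, index):
        if not 0 <= index < 5:
            raise IndexError(index)
        return index * 10

t = Tens()
print(t[3])               # 30 -- retrieve by index
print(t.index(30))        # 3  -- find by value, supplied by the ABC
print(t.count(30))        # 1  -- count, supplied by the ABC
print(list(reversed(t)))  # [40, 30, 20, 10, 0]
``` |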
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | In OS X, it seems like `/usr/local/include`, where the leveldb headers (db.h) live, is not visible to gcc.
You need to install the Apple command line tools:
```
xcode-select --install
```
plyvel will compile after that.
[Link to GH issue](https://github.com/wbolster/plyvel/issues/34). Seems to be an OS X problem. | I'm not familiar with leveldb but most direct binary installations require you to run `./configure` then `make` then `make install` before the binary is actually installed. You should try that.
Also, according to this github page you should be able to install it with `gem`: <https://github.com/DAddYE/leveldb> |
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | The easiest way to install leveldb on Mac OS X would be to use [homebrew](http://brew.sh/).
[With homebrew](https://brewinstall.org/Install-leveldb-on-Mac-with-Brew/) you only need to run:
```
brew install leveldb
``` | I'm not familiar with leveldb but most direct binary installations require you to run `./configure` then `make` then `make install` before the binary is actually installed. You should try that.
Also, according to this github page you should be able to install it with `gem`: <https://github.com/DAddYE/leveldb> |
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | As mentioned by jAlpedrinha
The easiest way to install leveldb on Mac OS X would be to use homebrew.
With homebrew you only need to run:
```
brew install leveldb
```
You also need to have gcc or clang installed.
If there is a problem installing python bindings as mentioned here, <https://github.com/wbolster/plyvel/issues/95>
change your plyvel setup.py and add extra compile args like this
```
if platform.system() == 'Darwin':
    extra_compile_args += ['-mmacosx-version-min=10.7', '-stdlib=libc++', '-Wall', '-g', '-x', 'c++', '-std=c++11']
```
Now run `pip install sphinx`.
You will probably face this error now:
`gcc: error: plyvel/_plyvel.cpp: No such file or directory`
Reason and solution:
```
Plyvel uses cython to bind the leveldb C++ implementation to python.
So when you're using the setup script, you have to make sure that plyvel's
cython modules have been compiled to .cpp files.
To do that, just run make; it will handle this operation for you.
Just make sure you have sphinx installed: pip install sphinx
Once make is run, you can safely run python setup.py install.
``` | I'm not familiar with leveldb but most direct binary installations require you to run `./configure` then `make` then `make install` before the binary is actually installed. You should try that.
Also, according to this github page you should be able to install it with `gem`: <https://github.com/DAddYE/leveldb> |
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | If you are a MacPorts user, you can `sudo port install leveldb` to install the shared libraries.
Depending on how you've installed pip/python, you may also need to tell pip where to find the necessary files. Per <https://stackoverflow.com/a/22942120/5568265> you will want to do something like this:
```
pip install --global-option=build_ext --global-option='-L/opt/local/lib' plyvel
``` | I'm not familiar with leveldb but most direct binary installations require you to run `./configure` then `make` then `make install` before the binary is actually installed. You should try that.
Also, according to this github page you should be able to install it with `gem`: <https://github.com/DAddYE/leveldb> |
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | The easiest way to install leveldb on Mac OS X would be to use [homebrew](http://brew.sh/).
[With homebrew](https://brewinstall.org/Install-leveldb-on-Mac-with-Brew/) you only need to run:
```
brew install leveldb
``` | In OS X, it seems like `/usr/local/include`, where the leveldb headers (db.h) live, is not visible to gcc.
You need to install the Apple command line tools:
```
xcode-select --install
```
plyvel will compile after that.
[Link to GH issue](https://github.com/wbolster/plyvel/issues/34). Seems to be an OS X problem. |
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | In OS X, it seems like `/usr/local/include`, where the leveldb headers (db.h) live, is not visible to gcc.
You need to install the Apple command line tools:
```
xcode-select --install
```
plyvel will compile after that.
[Link to GH issue](https://github.com/wbolster/plyvel/issues/34). Seems to be an OS X problem. | As mentioned by jAlpedrinha
The easiest way to install leveldb on Mac OS X would be to use homebrew.
With homebrew you only need to run:
```
brew install leveldb
```
You also need to have gcc or clang installed.
If there is a problem installing python bindings as mentioned here, <https://github.com/wbolster/plyvel/issues/95>
change your plyvel setup.py and add extra compile args like this
```
if platform.system() == 'Darwin':
    extra_compile_args += ['-mmacosx-version-min=10.7', '-stdlib=libc++', '-Wall', '-g', '-x', 'c++', '-std=c++11']
```
Now run `pip install sphinx`.
You will probably face this error now:
`gcc: error: plyvel/_plyvel.cpp: No such file or directory`
Reason and solution:
```
Plyvel uses cython to bind the leveldb C++ implementation to python.
So when you're using the setup script, you have to make sure that plyvel's
cython modules have been compiled to .cpp files.
To do that, just run make; it will handle this operation for you.
Just make sure you have sphinx installed: pip install sphinx
Once make is run, you can safely run python setup.py install.
``` |
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | The easiest way to install leveldb on Mac OS X would be to use [homebrew](http://brew.sh/).
[With homebrew](https://brewinstall.org/Install-leveldb-on-Mac-with-Brew/) you only need to run:
```
brew install leveldb
``` | As mentioned by jAlpedrinha
The easiest way to install leveldb on Mac OS X would be to use homebrew.
With homebrew you only need to run:
```
brew install leveldb
```
You also need to have gcc or clang installed.
If there is a problem installing python bindings as mentioned here, <https://github.com/wbolster/plyvel/issues/95>
change your plyvel setup.py and add extra compile args like this
```
if platform.system() == 'Darwin':
    extra_compile_args += ['-mmacosx-version-min=10.7', '-stdlib=libc++', '-Wall', '-g', '-x', 'c++', '-std=c++11']
```
Now run `pip install sphinx`.
You will probably face this error now:
`gcc: error: plyvel/_plyvel.cpp: No such file or directory`
Reason and solution:
```
Plyvel uses cython to bind the leveldb C++ implementation to python.
So when you're using the setup script, you have to make sure that plyvel's
cython modules have been compiled to .cpp files.
To do that, just run make; it will handle this operation for you.
Just make sure you have sphinx installed: pip install sphinx
Once make is run, you can safely run python setup.py install.
``` |
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | The easiest way to install leveldb on Mac OS X would be to use [homebrew](http://brew.sh/).
[With homebrew](https://brewinstall.org/Install-leveldb-on-Mac-with-Brew/) you only need to run:
```
brew install leveldb
``` | If you are a MacPorts user, you can `sudo port install leveldb` to install the shared libraries.
Depending on how you've installed pip/python, you may also need to tell pip where to find the necessary files. Per <https://stackoverflow.com/a/22942120/5568265> you will want to do something like this:
```
pip install --global-option=build_ext --global-option='-L/opt/local/lib' plyvel
``` |
26,199,376 | I am trying to use leveldb in my Python project. I zeroed in on the Python binding Plyvel <http://plyvel.readthedocs.org/en/latest/installation.html>, which seems to be the best maintained and documented Python binding.
However installation fails for plyvel
>
> plyvel/\_plyvel.cpp:359:10: fatal error: 'leveldb/db.h' file not found
>
>
> #include "leveldb/db.h"
>
>
>
So I believe I have to install leveldb on my machine. I did not find an installation guide for leveldb for Mac OS X. I have downloaded the tarball for leveldb, <https://code.google.com/p/leveldb/downloads/list>. The Makefile compiles the code, but plyvel still fails. How should I compile leveldb such that its binaries are made available to plyvel? | 2014/10/05 | [
"https://Stackoverflow.com/questions/26199376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2089768/"
] | If you are a MacPorts user, you can `sudo port install leveldb` to install the shared libraries.
Depending on how you've installed pip/python, you may also need to tell pip where to find the necessary files. Per <https://stackoverflow.com/a/22942120/5568265> you will want to do something like this:
```
pip install --global-option=build_ext --global-option='-L/opt/local/lib' plyvel
``` | As mentioned by jAlpedrinha
The easiest way to install leveldb on Mac OS X would be to use homebrew.
With homebrew you only need to run:
```
brew install leveldb
```
You also need to have gcc or clang installed.
If there is a problem installing python bindings as mentioned here, <https://github.com/wbolster/plyvel/issues/95>
change your plyvel setup.py and add extra compile args like this
```
if platform.system() == 'Darwin':
    extra_compile_args += ['-mmacosx-version-min=10.7', '-stdlib=libc++', '-Wall', '-g', '-x', 'c++', '-std=c++11']
```
Now run `pip install sphinx`.
You will probably face this error now:
`gcc: error: plyvel/_plyvel.cpp: No such file or directory`
Reason and solution:
```
Plyvel uses cython to bind the leveldb C++ implementation to python.
So when you're using the setup script, you have to make sure that plyvel's
cython modules have been compiled to .cpp files.
To do that, just run make; it will handle this operation for you.
Just make sure you have sphinx installed: pip install sphinx
Once make is run, you can safely run python setup.py install.
``` |
47,944,927 | I am trying to make a GET request to a Shopify store, packershoes, as follows:
```
endpoint = "http://www.packershoes.com"
print session.get(endpoint, headers=headers)
```
When I run a GET request to the site I get the following error:
```
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 467, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 385, in send
raise SSLError(e)
requests.exceptions.SSLError: hostname 'packershoes.com' doesn't match either of '*.myshopify.com', 'myshopify.com'
```
When I request different sites, it works fine. | 2017/12/22 | [
"https://Stackoverflow.com/questions/47944927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8528309/"
] | This looks like more of an SSL problem than a Python problem. You haven't shown us your code, so I'm making some guesses here, but it looks as if the site to which you are connecting is presenting an SSL certificate that doesn't match the hostname you're using. The resolution here is typically:
* See if there is an alternative hostname you should/could be using that would match the hostname in the certificate,
* Report the problem to the site you're trying to access and see if there is a configuration error on their end, or
* Disable certificate validation in your code. You don't really want to do that because it would open your code up to man-in-the-middle attacks.
This is discussed [in the requests documentation](http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification).
**Update**
Taking a closer look at your problem, I notice that I cannot reproduce this problem myself. The certificates presented by `www.packershoes.com` are clearly for `*.myshopify.com`, but I don't get any certificate errors presumably because that address is actually an alias for `packer-shoes.myshopify.com`
```
$ host www.packershoes.com
www.packershoes.com is an alias for packer-shoes.myshopify.com.
packer-shoes.myshopify.com is an alias for shops.myshopify.com.
shops.myshopify.com has address 23.227.38.64
```
I wonder if your issue isn't simply related to either the version of `requests` that you're using or something in your local DNS configuration. If you replace `www.packershoes.com` in your request with `packer-shoes.myshopify.com`, does it work correctly? | Requests verifies SSL certificates for HTTPS requests, just like a web browser. By default, SSL verification is enabled, and Requests will throw an `SSLError` if it's unable to verify the certificate. To skip verification, set `verify` to `False`:
```
session.get("http://www.packershoes.com", headers=headers, verify=False)
``` |
22,042,673 | I've set up code in Python to search for tweets using only the oauth2 and urllib2 libraries. (I'm not using any particular Twitter library.)
I'm able to search for tweets based on keywords. However, I'm getting zero tweets when I search for this particular keyword - "Jurgen%20Mayer-Hermann". (This is a challenge because my ultimate goal is to search for this keyword only.)
On the other hand, when I search for the same thing online (the Twitter web interface), I get plenty of tweets - <https://twitter.com/search?q=Jurgen%20Mayer-Hermann&src=typd>
Can someone please see if we can identify the issue?
The code is as follows:
```
def getfeed(mystr, tweetcount):
    url = "https://api.twitter.com/1.1/search/tweets.json?q=" + mystr + "&count=" + tweetcount
    parameters = []
    response = twitterreq(url, "GET", parameters)
    res = json.load(response)
    return res
search_str = "Jurgen Mayer-Hermann"
search_str = '%22'+search_str+'%22'
search = search_str.replace(" ","%20")
search = search.replace("#","%23")
tweetcount = str(50)
res = getfeed(search, tweetcount)
```
When I print the constructed url, I get
```
https://api.twitter.com/1.1/search/tweets.json?q=%22Jurgen%20Mayer-Hermann%22&count=50
``` | 2014/02/26 | [
"https://Stackoverflow.com/questions/22042673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2935885/"
] | Change `$(document).load(` to [`$(document).ready(`](http://learn.jquery.com/using-jquery-core/document-ready/)
```
$(document).ready(function() {
    var vague = $('.zero').Vague({
        intensity: 3,
        forceSVGUrl: false
    });
    vague.blur();
});
```
or use
```
$(window).load(function(){
```
or use Shorthand for `$( document ).ready()`
```
$(function(){
``` | Try to use:
```
$(window).load(function() {
```
or:
```
$(document).ready(function() {
```
instead of:
```
$(document).load(function() {
``` |
22,042,673 | I've set up code in Python to search for tweets using only the oauth2 and urllib2 libraries. (I'm not using any particular Twitter library.)
I'm able to search for tweets based on keywords. However, I'm getting zero tweets when I search for this particular keyword - "Jurgen%20Mayer-Hermann". (This is a challenge because my ultimate goal is to search for this keyword only.)
On the other hand, when I search for the same thing online (the Twitter web interface), I get plenty of tweets - <https://twitter.com/search?q=Jurgen%20Mayer-Hermann&src=typd>
Can someone please see if we can identify the issue?
The code is as follows:
```
def getfeed(mystr, tweetcount):
    url = "https://api.twitter.com/1.1/search/tweets.json?q=" + mystr + "&count=" + tweetcount
    parameters = []
    response = twitterreq(url, "GET", parameters)
    res = json.load(response)
    return res
search_str = "Jurgen Mayer-Hermann"
search_str = '%22'+search_str+'%22'
search = search_str.replace(" ","%20")
search = search.replace("#","%23")
tweetcount = str(50)
res = getfeed(search, tweetcount)
```
When I print the constructed url, I get
```
https://api.twitter.com/1.1/search/tweets.json?q=%22Jurgen%20Mayer-Hermann%22&count=50
``` | 2014/02/26 | [
"https://Stackoverflow.com/questions/22042673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2935885/"
] | Change `$(document).load(` to [`$(document).ready(`](http://learn.jquery.com/using-jquery-core/document-ready/)
```
$(document).ready(function() {
    var vague = $('.zero').Vague({
        intensity: 3,
        forceSVGUrl: false
    });
    vague.blur();
});
```
or use
```
$(window).load(function(){
```
or use Shorthand for `$( document ).ready()`
```
$(function(){
``` | Make sure the permissions on the Vague.js are set to 755. Assuming a LAMP stack of course. |
22,042,673 | I've set up code in Python to search for tweets using only the oauth2 and urllib2 libraries. (I'm not using any particular Twitter library.)
I'm able to search for tweets based on keywords. However, I'm getting zero tweets when I search for this particular keyword - "Jurgen%20Mayer-Hermann". (This is a challenge because my ultimate goal is to search for this keyword only.)
On the other hand, when I search for the same thing online (the Twitter web interface), I get plenty of tweets - <https://twitter.com/search?q=Jurgen%20Mayer-Hermann&src=typd>
Can someone please see if we can identify the issue?
The code is as follows:
```
def getfeed(mystr, tweetcount):
    url = "https://api.twitter.com/1.1/search/tweets.json?q=" + mystr + "&count=" + tweetcount
    parameters = []
    response = twitterreq(url, "GET", parameters)
    res = json.load(response)
    return res
search_str = "Jurgen Mayer-Hermann"
search_str = '%22'+search_str+'%22'
search = search_str.replace(" ","%20")
search = search.replace("#","%23")
tweetcount = str(50)
res = getfeed(search, tweetcount)
```
When I print the constructed url, I get
```
https://api.twitter.com/1.1/search/tweets.json?q=%22Jurgen%20Mayer-Hermann%22&count=50
``` | 2014/02/26 | [
"https://Stackoverflow.com/questions/22042673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2935885/"
] | Try to use:
```
$(window).load(function() {
```
or:
```
$(document).ready(function() {
```
instead of:
```
$(document).load(function() {
``` | Make sure the permissions on the Vague.js are set to 755. Assuming a LAMP stack of course. |
56,112,849 | I have a div containing 4 images of the same size, placed in a row. I want them to occupy all the space available in the div while staying in the same row, with the first image at the far left and the fourth image at the far right; they also have to be equally spaced. I can accomplish this by modifying the padding of each image, so I'm asking if there is a method to do it automatically.
```
<div>
<img src="imgsrc\html5.svg" id="htmLogo" class="iconPgr">
<img src="imgsrc\java.svg" id="javaLogo" class="iconPgr">
<img src="imgsrc\python.svg" id="pythonLogo" class="iconPgr">
<img src="imgsrc\C++ icon.png" id="cLogo" class="iconPgr">
</div>
```
```
#htmLogo {
    padding-left: 35px;
    padding-right: 0px;
}
/* I repeat the same for every ID with different padding values so the imgs result equally spaced with htmLogo in the far right and cLogo in the far left */
``` | 2019/05/13 | [
"https://Stackoverflow.com/questions/56112849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11426745/"
] | You can use flexbox. Read more about it here: <https://css-tricks.com/snippets/css/a-guide-to-flexbox/>
```css
#my-container {
    display: flex;
    justify-content: space-between;
}
```
```html
<div id="my-container">
    <img src="https://placekitten.com/50/50" />
    <img src="https://placekitten.com/50/50" />
    <img src="https://placekitten.com/50/50" />
    <img src="https://placekitten.com/50/50" />
</div>
``` | If Bootstrap is available and you can change the HTML, you can wrap each image in a div and use Bootstrap's grid system ([here is a demo](https://codepen.io/carnnia/pen/ZNprZa)).
```css
#container {
    width: 100%;
    border: 1px solid red;
}
.row {
    text-align: center;
}
```
```html
<div class="container" id="container">
    <div class="row">
        <div class="col-lg-3"><img src="https://image.flaticon.com/icons/svg/23/23735.svg" width=50 height=50 /></div>
        <div class="col-lg-3"><img src="http://freevector.co/wp-content/uploads/2012/07/14675-telegram-logo1.png" width=50 height=50/></div>
        <div class="col-lg-3"><img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTgWDI1YcQTcTa2IoQn_yuEXtWwuLy7KbkFZ5H-2F3554d_j29nAQ" width=50 height=50/></div>
        <div class="col-lg-3"><img src="https://image.flaticon.com/icons/png/512/130/130484.png" width=50 height=50/></div>
    </div><!-- row END -->
</div> <!-- container END -->
``` |
8,051,506 | Am I going about this in the correct way? I've never done anything like this before, so I'm not 100% sure what I am doing. The code so far gets HTML and CSS files and that works fine, but images won't load. Will I have to create a new "if" for every different file type, or am I doing this a silly way? Here is what I have:
```
import string,cgi,time
from os import curdir, sep
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import os
import mimetypes
#import pri

port = 888
host = "0.0.0.0"

class MyHandler(BaseHTTPRequestHandler):

    def do_GET(self):
        try:
            #RequestedURL = self.path
            mimeType = mimetypes.guess_type(self.path)[0]
            fileType = mimetypes.guess_extension(mimeType)
            infoList = [mimeType, fileType]
            if infoList[1] != ".py":
                self.send_response(200)
                self.send_header('Content-type', mimeType)
                self.end_headers()
                f = open(curdir + sep + self.path, "rb")
                self.wfile.write(f.read())
                f.close()
                return
            if fileType == ".py":
                pythonFilename = self.path.lstrip("/")
                self.send_response(200)
                self.send_header('Content-type', 'text/html')
                self.end_headers()
                pyname = pythonFilename.replace("/", ".")[:-3]
                print pythonFilename
                print pyname
                temp1 = pyname.split(".")
                temp2 = temp1[-1]
                print temp2
                module = __import__(root.index)
                self.wfile.write(module.root.index.do_work())
                #module = __import__("test.index")
                #self.wfile.write( module.index.do_work())
                return
            return
        except IOError:
            self.send_error(404,'File Not Found: %s' % self.path)

    def do_POST(self):
        global rootnode
        try:
            ctype, pdict = cgi.parse_header(self.headers.getheader('content-type'))
            if ctype == 'multipart/form-data':
                query = cgi.parse_multipart(self.rfile, pdict)
            self.send_response(301)
            self.end_headers()
            upfilecontent = query.get('upfile')
            print "filecontent", upfilecontent[0]
            self.wfile.write("<HTML>POST OK.<BR><BR>");
            self.wfile.write(upfilecontent[0]);
        except :
            pass

def main():
    try:
        server = HTTPServer((host, port), MyHandler)
        print 'started httpserver:'
        print ("Host: " + (host))
        print ("Port: " + str(port))
        server.serve_forever()
    except KeyboardInterrupt:
        print '^C received, shutting down server'
        server.socket.close()

if __name__ == '__main__':
    main()
```
HTML and CSS work, but PNG images do not load. | 2011/11/08 | [
"https://Stackoverflow.com/questions/8051506",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/787367/"
] | You are on the right track with it, though your ifs are very redundant. I suggest you refactor the code to check the type using a dict:
```
mime = {"html": "text/html", "css": "text/css", "png": "image/png"}
RequestedFileType = self.path.rsplit(".", 1)[-1]  # the file extension, e.g. "png"
if RequestedFileType in mime:
    self.send_response(200)
    self.send_header('Content-type', mime[RequestedFileType])
    self.end_headers()
    print RequestedFileType
    f = open(curdir + sep + self.path, "rb")  # binary mode so images are not corrupted
    self.wfile.write(f.read())
    f.close()
    return
```
Also, you are sending binary files as text. Instead of open(curdir + sep + self.path) use open(curdir + sep + self.path, "rb") (the mode needs "r" as well; "b" alone is not a valid mode).
Gergely from toptal.com | As to a panoply of `if` statements, the usual approach is to have a file that handles the mapping between extensions and mime types (look here: [List of ALL MimeTypes on the Planet, mapped to File Extensions?](https://stackoverflow.com/questions/1735659/list-of-all-mimetypes-on-the-planet-mapped-to-file-extensions)). Read that into an appropriate data structure.
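A minimal sketch of that idea using the stdlib `mimetypes` module, which already ships with such an extension-to-type table (the fallback type here is my own choice):
```
import mimetypes

def guess_content_type(path):
    # mimetypes maps file extensions to MIME types from a built-in table
    mime_type, _encoding = mimetypes.guess_type(path)
    return mime_type or "application/octet-stream"  # fallback is an assumption

print(guess_content_type("logo.png"))   # image/png
print(guess_content_type("style.css"))  # text/css
```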
You should probably be opening all files as binary unless they are a text/\* mime type; for those you should ensure that your line endings are as specified in the appropriate RFC (if any - it's been years since I have needed to consult those standards, on account of not writing a web server for deployment in anger).
And a syntactical point:
```
>>> ('foo')
'foo'
>>> ('foo',)
('foo',)
>>>
```
Your brackets are redundant. You can index on the value you are extracting. |
74,061,083 | My python GUI has been working fine from VSCode for months now, but today (with no changes in the code that I can find) it has been throwing me an error in the form of:
Exception has occurred: ModuleNotFoundError
No module named '\_tkinter'
This error occurs for any import that is not commented out. The GUI works as intended when run from the terminal using "python3 filename.py", but the run/debug function in VSCode keeps throwing that same error. I'm relatively new here, so I have no clue what the problem could be; any insight or things to try would be appreciated. | 2022/10/13 | [
"https://Stackoverflow.com/questions/74061083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20234707/"
] | Press Ctrl+Shift+P, type "select interpreter", press Enter, and select the Python interpreter path that you want to use by default in the current project.
If the currently selected interpreter does not have some libraries installed, you may see errors from Pylance. | The cause of this problem may be that there are multiple Python versions on your machine, and the interpreter environment you are currently using is not the same environment where you installed the third-party library.
**Solution:**
1. Use the following code to get the current interpreter path
```
import sys
print(sys.executable)
```
[](https://i.stack.imgur.com/cvBJZ.png)
2. Copy the resulting path, and then use the following commands to install third-party libraries for the current environment (using `numpy` as an example)
```
C:\Users\Admin\AppData\Local\Programs\Python\Python36\python.exe -m pip install numpy
```
[](https://i.stack.imgur.com/h9bgp.png) |
35,127,452 | Could someone please help me create a field in my model that generates a unique 8 character alphanumeric string (i.e. A#######) ID every time a user makes a form submission?
My **models.py** file is currently as follows:
```
from django.db import models
from django.contrib.auth.models import User
class Transfer(models.Model):
    id = models.AutoField(primary_key=True)
    user = models.ForeignKey(User)
    timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
```
I have looked at Python's UUID feature, but these identifiers are quite long and messy compared with what I am looking to generate.
Any help would be hugely appreciated! | 2016/02/01 | [
"https://Stackoverflow.com/questions/35127452",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798841/"
] | Something like this:
```
import random
import string

''.join(random.choice(string.ascii_uppercase) for _ in range(8))
``` | In order for the ID to be truly unique, you have to keep track of previously generated IDs. This can be done with a simple SQLite DB.
In order to generate a simple unique ID, use the following line:
```
import random
import string
u_id = ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(8))
```
More on strings can be found in the [docs](https://docs.python.org/2/library/string.html#string-constants)
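A minimal sketch of the keep-track-and-retry idea (an in-memory set stands in for the DB here; a column with a unique index plays the same role in SQLite):
```
import random
import string

seen = set()  # stand-in for a DB table of already-issued IDs

def unique_id(length=8):
    chars = string.ascii_letters + string.digits
    while True:
        candidate = ''.join(random.choice(chars) for _ in range(length))
        if candidate not in seen:  # retry on collision
            seen.add(candidate)
            return candidate

print(unique_id())  # e.g. 'aZ3kP9qW'
``` |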
35,127,452 | Could someone please help me create a field in my model that generates a unique 8 character alphanumeric string (i.e. A#######) ID every time a user makes a form submission?
My **models.py** file is currently as follows:
```
from django.db import models
from django.contrib.auth.models import User
class Transfer(models.Model):
    id = models.AutoField(primary_key=True)
    user = models.ForeignKey(User)
    timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
```
I have looked at Python's UUID feature, but these identifiers are quite long and messy compared with what I am looking to generate.
Any help would be hugely appreciated! | 2016/02/01 | [
"https://Stackoverflow.com/questions/35127452",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798841/"
] | Something like this:
```
import random
import string

''.join(random.choice(string.ascii_uppercase) for _ in range(8))
``` | I suggest making a token generator class like this:
```
import string, random
class RandomTokenGenerator(object):
    def __init__(self, chars=None, random_generator=None):
        self.chars = chars or string.ascii_uppercase + string.ascii_lowercase + string.digits
        self.random_generator = random_generator or random.SystemRandom()

    def make_token(self, n=20):
        return ''.join(self.random_generator.choice(self.chars) for _ in range(n))

token_generator = RandomTokenGenerator()
```
Then your model should look like this:
```
from django.db import models
from django.contrib.auth.models import User
from .token_utils import token_generator
class Transfer(models.Model):
    id = models.AutoField(primary_key=True)
    user = models.ForeignKey(User)
    timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
    token = models.CharField(_('UUID'), max_length=8, null=False)

    def save(self, *args, **kwargs):
        ## Check whether this creates a new instance or updates an existing one
        if self.pk is None:
            self.token = token_generator.make_token(8)
        super(Transfer, self).save(*args, **kwargs)
``` |
35,127,452 | Could someone please help me create a field in my model that generates a unique 8 character alphanumeric string (i.e. A#######) ID every time a user makes a form submission?
My **models.py** file is currently as follows:
```
from django.db import models
from django.contrib.auth.models import User
class Transfer(models.Model):
    id = models.AutoField(primary_key=True)
    user = models.ForeignKey(User)
    timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
```
I have looked at Python's UUID feature, but these identifiers are quite long and messy compared with what I am looking to generate.
Any help would be hugely appreciated! | 2016/02/01 | [
"https://Stackoverflow.com/questions/35127452",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798841/"
] | Something like this:
```
import random
import string

''.join(random.choice(string.ascii_uppercase) for _ in range(8))
``` | You could use the DB id and convert that to a string using something like this:
```
ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'  # 62 characters

def int_to_key(num):
    if num == 0:
        return ""
    return "{0}{1}".format(
        int_to_key(num // len(ALPHABET)),
        ALPHABET[num % len(ALPHABET)]
    )
```
You can adjust the alphabet to use fewer or more characters as you need. |
13,529,852 | I have a Python GUI and I want to run a shell command which you cannot do using the Windows cmd.
I have installed Cygwin and I was wondering how I would go about running Cygwin instead of the Windows cmd. I want to use subprocess and get the results of the .sh file,
but my code
```
subprocess.check_output("./listChains.sh < 2p31protein.pdb")
```
This will run it in cmd, and as Windows will not recognize the command, it will not work. So how can I get it to run in Cygwin instead of cmd? | 2012/11/23 | [
"https://Stackoverflow.com/questions/13529852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1810400/"
] | Execute a cygwin shell (e.g. `bash`) and have it run your script, instead of running your script directly:
```
subprocess.check_output('C:/cygwin/bin/bash.exe -c "./listChains.sh < 2p31protein.pdb"')  # -c lets bash itself handle the < redirection
```
Alternatively, associate the `.sh` filetype extension to open with `bash.exe`. | Using Python subprocess to run a Cygwin executable requires that Cygwin's bin directory, which contains `cygwin1.dll`, be on the Windows PATH. `cygwin1.dll` exposes Cygwin executables to Windows, allowing them to run in the Windows command line and be called by Python subprocess.
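A minimal sketch of that PATH adjustment from Python (the `C:\cygwin\bin` location is an assumption; adjust it to your install):
```
import os
import subprocess

# Prepend Cygwin's bin directory (home of cygwin1.dll) for this process and its children;
# C:\cygwin\bin is assumed -- change it to match your installation.
os.environ["PATH"] = r"C:\cygwin\bin" + os.pathsep + os.environ.get("PATH", "")

output = subprocess.check_output(
    ['C:/cygwin/bin/bash.exe', '-c', './listChains.sh < 2p31protein.pdb']
)
``` |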
74,581,136 | I would like to do the same in python pandas as shown on the picture.
[pandas image](https://i.stack.imgur.com/ZsHLT.png)
This is a sum function where the first cell is fixed and the formula calculates a "**continuous sum**" (a running total).
I tried to create a pandas DataFrame; however, I did not manage to do this exactly. | 2022/11/26 | [
"https://Stackoverflow.com/questions/74581136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20557036/"
] | The phrasing, and the unqualified template, aren't super-helpful in figuring out the difference, but: `transform()` does "re-boxing" into an optional, but `and_then()` does not, expecting the function to returned a boxed value on its own. So,
* `transform()` is for when you want to use a function like `T2 foo(T1 x)`.
* `and_then()` is for when you want to use a function like `optional<T2> bar(T1 x)`.
Both `my_optional.transform(foo)` and `my_optional.and_then(bar)` return a value of type `optional<T2>`.
See also [this question](https://stackoverflow.com/questions/70606173/what-are-monadic-bind-and-monadic-return-for-c23-optional). | `and_then` is monadic `bind` aka `flatmap` aka `>>=` and `transform` is functorial `map`.
One can express `map` in terms of `bind` generically, but not the other way around, because a functor is not necessarily a monad. Of course the particular monad of `std::optional` can be opened at any time, so both functions are expressible in terms of ordinary pre-C++23 `std::optional` API. Thus the question why the C++ standard defines both functions is no better than the question why it defines any of the two. Perhaps the Standard wishes to give the programmer a standard functorial interface and a standard monadic interface independently. Either interface is useful and important on its own right. |
74,581,136 | I would like to do the same in python pandas as shown on the picture.
[pandas image](https://i.stack.imgur.com/ZsHLT.png)
This is a sum function where the first cell is fixed and the formula calculates a "**continuous sum**" (a running total).
I tried to create a pandas DataFrame; however, I did not manage to do this exactly. | 2022/11/26 | [
"https://Stackoverflow.com/questions/74581136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20557036/"
] | The phrasing, and the unqualified template, aren't super-helpful in figuring out the difference, but: `transform()` does "re-boxing" into an optional, but `and_then()` does not, expecting the function to returned a boxed value on its own. So,
* `transform()` is for when you want to use a function like `T2 foo(T1 x)`.
* `and_then()` is for when you want to use a function like `optional<T2> bar(T1 x)`.
Both `my_optional.transform(foo)` and `my_optional.and_then(bar)` return a value of type `optional<T2>`.
See also [this question](https://stackoverflow.com/questions/70606173/what-are-monadic-bind-and-monadic-return-for-c23-optional). | `and_then` only takes functions of type `T -> std::optional<U>` (whereas `transform` is free to take functions returning any type).
If you just `transform` with such a function you will get a `std::optional<std::optional<U>>`.
`and_then` just then flattens the `std::optional<std::optional<U>>` into an `std::optional<U>`.
That's all monads are: `transform` composed with a type level `flatten`. Think `range<range<U>>` and `future<future<U>>`. |
74,581,136 | I would like to do the same in python pandas as shown on the picture.
[pandas image](https://i.stack.imgur.com/ZsHLT.png)
This is a sum function where the first cell is fixed and the formula calculates a "**continuous sum**" (a running total).
I tried to create a pandas DataFrame; however, I did not manage to do this exactly. | 2022/11/26 | [
"https://Stackoverflow.com/questions/74581136",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20557036/"
] | `and_then` is monadic `bind` aka `flatmap` aka `>>=` and `transform` is functorial `map`.
One can express `map` in terms of `bind` generically, but not the other way around, because a functor is not necessarily a monad. Of course the particular monad of `std::optional` can be opened at any time, so both functions are expressible in terms of ordinary pre-C++23 `std::optional` API. Thus the question why the C++ standard defines both functions is no better than the question why it defines any of the two. Perhaps the Standard wishes to give the programmer a standard functorial interface and a standard monadic interface independently. Either interface is useful and important on its own right. | `and_then` only takes functions of type `T -> std::optional<U>` (whereas `transform` is free to take functions returning any type).
If you just `transform` with such a function you will get a `std::optional<std::optional<U>>`.
`and_then` just then flattens the `std::optional<std::optional<U>>` into an `std::optional<U>`.
That's all monads are: `transform` composed with a type level `flatten`. Think `range<range<U>>` and `future<future<U>>`. |
60,155,460 | I was planning to automate the manual steps to run the ssh commands using Python. I developed code that automatically executes the below command and logs me in to the VM. The SSH command works fine whenever I run the code in Spyder or the conda prompt. The command also works whenever I open cmd and try it directly where the key is, but it fails and gives an error whenever I run the Python script from the cmd prompt.
```
os.system('cmd /k "ssh -i <path to private key> <user>@<remotehost>"')
```
error:
```
'ssh' is not recognized as an internal or external command,
operable program or batch file.
```
How to solve this error to run the script on cmd?
Note: The ssh command works fine in cmd but not inside the script when run from cmd | 2020/02/10 | [
"https://Stackoverflow.com/questions/60155460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12873907/"
] | Just resolved the issue here.
I updated to the last version of all libs
```
"@react-navigation/bottom-tabs": "^5.0.1",
"@react-navigation/core": "^5.1.0",
"@react-navigation/material-top-tabs": "^5.0.1",
"@react-navigation/native": "^5.0.1",
"@react-navigation/stack": "^5.0.1",
```
and then I deleted my package-lock.json.
Then, in your terminal, go to the android folder and type ./gradlew clean.
After that you should run npx react-native run-android,
close your default Metro terminal,
and then run npx react-native start --reset-cache.
It worked well after doing this. | Make sure you have installed the latest versions of `@react-navigation/native` and `@react-navigation/bottom-tabs`:
```js
npm install @react-navigation/native @react-navigation/bottom-tabs
```
Then clear the cache:
```sh
npx react-native start --reset-cache
```
Or if using Expo:
```js
expo start -c
``` |
54,337,433 | I have a list of tuples and need to delete tuples if their 1st item matches the 1st item of other tuples in the list. The 3rd item may or may not be the same, so I cannot use a set (I have seen this question - [Grab unique tuples in python list, irrespective of order](https://stackoverflow.com/questions/35975441/grab-unique-tuples-in-python-list-irrespective-of-order) and it is not the same as my issue)
For eg if I got `a` as:
```
[(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'),
(0, 13, 'order1'), (28, 41, 'order3')]
```
I want the output as:
```
[(14, 27, 'order2'), (0, 13, 'order1'), (28, 41, 'order3')]
```
I am getting the desired output using below code.
```
for e, i in enumerate(a):
    r = [True if i[0] == k[0] and e != j else False for j, k in enumerate(a)]
    if any(r):
        a.pop(e)
pprint(a)
```
Is there a better or more pythonic way to achieve the same? | 2019/01/23 | [
"https://Stackoverflow.com/questions/54337433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2565385/"
] | The usual way is keying a dict off whatever you want to dedupe by, for example:
```
>>> a = [(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'), (0, 13, 'order1'), (28, 41, 'order3')]
>>> print(*{tup[:2]: tup for tup in a}.values())
(0, 13, 'order1') (14, 27, 'order2.1') (28, 41, 'order3')
```
This is *O(n)* time complexity, superior to *O(n log n)* groupby based approaches. | You can get the first element of each group in a grouped, sorted list:
```
from itertools import groupby
from operator import itemgetter
a = [(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'), (0, 13, 'order1'), (28, 41, 'order3')]
result = [list(g)[0] for k, g in groupby(sorted(a), key=itemgetter(0))]
print(result)
``` |
54,337,433 | I have a list of tuples and need to delete tuples if their 1st item matches the 1st item of other tuples in the list. The 3rd item may or may not be the same, so I cannot use a set (I have seen this question - [Grab unique tuples in python list, irrespective of order](https://stackoverflow.com/questions/35975441/grab-unique-tuples-in-python-list-irrespective-of-order) and it is not the same as my issue)
For eg if I got `a` as:
```
[(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'),
(0, 13, 'order1'), (28, 41, 'order3')]
```
I want the output as:
```
[(14, 27, 'order2'), (0, 13, 'order1'), (28, 41, 'order3')]
```
I am getting the desired output using below code.
```
for e, i in enumerate(a):
    r = [True if i[0] == k[0] and e != j else False for j, k in enumerate(a)]
    if any(r):
        a.pop(e)
pprint(a)
```
Is there a better or more pythonic way to achieve the same? | 2019/01/23 | [
"https://Stackoverflow.com/questions/54337433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2565385/"
] | You should avoid modifying your list in place while iterating over it. Instead, you can use the popular [`itertools` `unique_everseen` recipe](https://docs.python.org/3/library/itertools.html#itertools-recipes), also available in 3rd party [`more_itertools`](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.unique_everseen). Just use [`operator.itemgetter`](https://docs.python.org/3/library/operator.html#operator.itemgetter) in the `key` argument:
```
from more_itertools import unique_everseen
from operator import itemgetter
res = list(unique_everseen(a, key=itemgetter(0, 1)))  # `a` is the list from the question
```
This solution takes O(*n*) time, but is generally *less* efficient than a dictionary-based solution, although it is arguably clearer in its intention. | You can get the first element of each group in a grouped, sorted list:
```
from itertools import groupby
from operator import itemgetter
a = [(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'), (0, 13, 'order1'), (28, 41, 'order3')]
result = [list(g)[0] for k, g in groupby(sorted(a), key=itemgetter(0))]
print(result)
``` |
54,337,433 | I have a list of tuples and need to delete tuples if their 1st item matches the 1st item of other tuples in the list. The 3rd item may or may not be the same, so I cannot use a set (I have seen this question - [Grab unique tuples in python list, irrespective of order](https://stackoverflow.com/questions/35975441/grab-unique-tuples-in-python-list-irrespective-of-order) and it is not the same as my issue)
For eg if I got `a` as:
```
[(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'),
(0, 13, 'order1'), (28, 41, 'order3')]
```
I want the output as:
```
[(14, 27, 'order2'), (0, 13, 'order1'), (28, 41, 'order3')]
```
I am getting the desired output using below code.
```
for e, i in enumerate(a):
    r = [True if i[0] == k[0] and e != j else False for j, k in enumerate(a)]
    if any(r):
        a.pop(e)
pprint(a)
```
Is there a better or more pythonic way to achieve the same? | 2019/01/23 | [
"https://Stackoverflow.com/questions/54337433",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2565385/"
] | The usual way is keying a dict off whatever you want to dedupe by, for example:
```
>>> a = [(0, 13, 'order1'), (14, 27, 'order2'), (14, 27, 'order2.1'), (0, 13, 'order1'), (28, 41, 'order3')]
>>> print(*{tup[:2]: tup for tup in a}.values())
(0, 13, 'order1') (14, 27, 'order2.1') (28, 41, 'order3')
```
This is *O(n)* time complexity, superior to *O(n log n)* groupby based approaches. | You should avoid modifying your list in place while iterating over it. Instead, you can use the popular [`itertools` `unique_everseen` recipe](https://docs.python.org/3/library/itertools.html#itertools-recipes), also available in 3rd party [`more_itertools`](https://more-itertools.readthedocs.io/en/stable/api.html#more_itertools.unique_everseen). Just use [`operator.itemgetter`](https://docs.python.org/3/library/operator.html#operator.itemgetter) in the `key` argument:
```
from more_itertools import unique_everseen
from operator import itemgetter
res = list(unique_everseen(a, key=itemgetter(0, 1)))  # `a` is the list from the question
```
This solution takes O(*n*) time, but is generally *less* efficient than a dictionary-based solution, although it is arguably clearer in its intention. |
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | We can use a dict comprehension
```
{a.get('compania'): {k: v for k, v in a.items() if k != 'compania'} for a in c}
{'Fiat': {'modelo': '2014', 'precio': '1000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'},
'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'}}
```
where `c` is your original data | ```
result = {}
for d in l:
    # Store the value of the key 'compania' before popping it from the small dictionary d
    compania = d['compania']
    d.pop('compania')
    # Construct new dictionary with key of the compania and value of the small dictionary without the compania key/value pair
    result[compania] = d
print(result)
```
Output:
```
{'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Fiat': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'}}
``` |
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | We can use a dict comprehension:
```
{a.get('compania'): {k: v for k, v in a.items() if k != 'compania'} for a in c}
{'Fiat': {'modelo': '2014', 'precio': '1000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'},
'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'}}
```
where `c` is your original data | With a mapper function, to return a new customized list of dicts:
```
a=[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
def mapper(temp): # define a mapper
new_temp={}
new_temp[temp['compania']]={k:v for k,v in temp.items() if k!='compania'}
return new_temp
result = list(map(mapper, a))  # map() is lazy in Python 3, so wrap it in list() to materialize the result
``` |
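Since each call to `mapper` yields a one-entry dict, producing the single merged dictionary shown in the question still takes a combining step. A hedged sketch reusing the `mapper` above:
```
merged = {}
for piece in map(mapper, a):
    merged.update(piece)  # fold each one-entry dict into the result
print(merged)
```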
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | You can create a dictionary by iterating over the elements of your original list. Assuming your list is called `car_list`:
```
d = { x["compania"]: {"modelo": x["modelo"], "precio": x["precio"] } for x in car_list }
``` | Assuming your list is called `l`, you could accomplish this using simple iteration and building a new dictionary `d`:
```
d = {}
for sub in l:
d[sub.pop('compania')] = sub
```
This produces in the dictionary `d`:
```
{'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Fiat': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'}}
```
Explanation: for each dictionary in that list (`sub`), `sub.pop('compania')` deletes the 'compania' entry from `sub` and returns its associated value. We then add an entry to `d` whose key is that returned value and whose value is the remainder of `sub` after the deletion. |
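One caveat: `pop` mutates the dictionaries inside the original list `l` itself. If you need `l` intact afterwards, a minimal non-mutating sketch:
```
d = {sub['compania']: {k: v for k, v in sub.items() if k != 'compania'}
     for sub in l}
```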
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | You can create a dictionary by iterating over the elements of your original list. Assuming your list is called `car_list`:
```
d = { x["compania"]: {"modelo": x["modelo"], "precio": x["precio"] } for x in car_list }
``` | I would share a simple solution (here `dd` stands for your original list; note that `pop` mutates its dictionaries):
```
>>> {d.pop("compania"):d for d in dd}
``` |
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | Assuming your list is called `l`, you could accomplish this using simple iteration and building a new dictionary `d`:
```
d = {}
for sub in l:
d[sub.pop('compania')] = sub
```
This produces in the dictionary `d`:
```
{'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Fiat': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'}}
```
Explanation: for each dictionary in that list (`sub`), `sub.pop('compania')` deletes the 'compania' entry from `sub` and returns its associated value. We then add an entry to `d` whose key is that returned value and whose value is the remainder of `sub` after the deletion. | I would share a simple solution (here `dd` stands for your original list; note that `pop` mutates its dictionaries):
```
>>> {d.pop("compania"):d for d in dd}
``` |
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | You can create a dictionary by iterating over the elements of your original list. Assuming your list is called `car_list`:
```
d = { x["compania"]: {"modelo": x["modelo"], "precio": x["precio"] } for x in car_list }
``` | With a mapper function, to return a new customized list of dicts:
```
a=[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
def mapper(temp): # define a mapper
new_temp={}
new_temp[temp['compania']]={k:v for k,v in temp.items() if k!='compania'}
return new_temp
result = list(map(mapper, a))  # map() is lazy in Python 3, so wrap it in list() to materialize the result
``` |
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | We can use a dict comprehension:
```
{a.get('compania'): {k: v for k, v in a.items() if k != 'compania'} for a in c}
{'Fiat': {'modelo': '2014', 'precio': '1000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'},
'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'}}
```
where `c` is your original data | I would share a simple solution (here `dd` stands for your original list; note that `pop` mutates its dictionaries):
```
>>> {d.pop("compania"):d for d in dd}
``` |
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | Assuming your list is called `l`, you could accomplish this using simple iteration and building a new dictionary `d`:
```
d = {}
for sub in l:
d[sub.pop('compania')] = sub
```
This produces in the dictionary `d`:
```
{'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Fiat': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'}}
```
Explanation: for each dictionary in that list (`sub`), `sub.pop('compania')` deletes the 'compania' entry from `sub` and returns its associated value. We then add an entry to `d` whose key is that returned value and whose value is the remainder of `sub` after the deletion. | With a mapper function, to return a new customized list of dicts:
```
a=[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
def mapper(temp): # define a mapper
new_temp={}
new_temp[temp['compania']]={k:v for k,v in temp.items() if k!='compania'}
return new_temp
result = list(map(mapper, a))  # map() is lazy in Python 3, so wrap it in list() to materialize the result
``` |
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | You can create a dictionary by iterating over the elements of your original list. Assuming your list is called `car_list`:
```
d = { x["compania"]: {"modelo": x["modelo"], "precio": x["precio"] } for x in car_list }
``` | You can simply use dictionary [update](https://docs.python.org/3/library/stdtypes.html#dict.update) to build a new dictionary of your preferred shape:
```
from pprint import PrettyPrinter as pp
d={}
for i in l:  # 'l' represents your list of dictionaries
    d.update({i['compania']: {"modelo": i['modelo'], "precio": i['precio']}})
pp(indent=4).pprint(d)  # pprint() prints directly and returns None, so no print() wrapper is needed
```
Output:
```
{ 'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Fiat': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'}}
``` |
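Worth noting: since each `update` call above adds exactly one key, plain assignment would do the same job, e.g. `d[i['compania']] = {"modelo": i['modelo'], "precio": i['precio']}`.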
54,028,502 | I have this kind of list of dictionaries in Python
```
[
{
"compania": "Fiat",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Renault",
"modelo": "2014",
"precio": "2000"
},
{
"compania": "Volkwagen",
"modelo": "2014",
"precio": "3000"
},
{
"compania": "Chevrolet",
"modelo": "2014",
"precio": "1000"
},
{
"compania": "Peugeot",
"modelo": "2014",
"precio": "2000"
}
]
```
That I'd like to transform into this kind of dictionary of dictionaries
```
{
"Fiat": {
"modelo": "2014",
"precio": "1000"
},
"Renault": {
"modelo": "2014",
"precio": "2000"
},
"Volkwagen": {
"modelo": "2014",
"precio": "3000"
},
"Chevrolet": {
"modelo": "2014",
"precio": "1000"
},
"Peugeot": {
"modelo": "2014",
"precio": "2000"
}
}
``` | 2019/01/03 | [
"https://Stackoverflow.com/questions/54028502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10864244/"
] | Assuming your list is called `l`, you could accomplish this using simple iteration and building a new dictionary `d`:
```
d = {}
for sub in l:
d[sub.pop('compania')] = sub
```
This produces in the dictionary `d`:
```
{'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Fiat': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'}}
```
Explanation: for each dictionary in that list (`sub`), `sub.pop('compania')` deletes the 'compania' entry from `sub` and returns its associated value. We then add an entry to `d` whose key is that returned value and whose value is the remainder of `sub` after the deletion. | You can simply use dictionary [update](https://docs.python.org/3/library/stdtypes.html#dict.update) to build a new dictionary of your preferred shape:
```
from pprint import PrettyPrinter as pp
d={}
for i in l:  # 'l' represents your list of dictionaries
    d.update({i['compania']: {"modelo": i['modelo'], "precio": i['precio']}})
pp(indent=4).pprint(d)  # pprint() prints directly and returns None, so no print() wrapper is needed
```
Output:
```
{ 'Chevrolet': {'modelo': '2014', 'precio': '1000'},
'Fiat': {'modelo': '2014', 'precio': '1000'},
'Peugeot': {'modelo': '2014', 'precio': '2000'},
'Renault': {'modelo': '2014', 'precio': '2000'},
'Volkwagen': {'modelo': '2014', 'precio': '3000'}}
``` |
55,655,666 | Hello, I'm new at Django. I installed all modules from Anaconda, then created a web application with
```
django-admin startproject
```
My project was created successfully, no problem.
Then I tried to run that project at localhost to see whether everything was okay, and I ran this command in the command line:
```
python manage.py runserver
```
And I get this error:
```
Unhandled exception in thread started by <function check_errors.
<locals>.wrapper at 0x00000221B6D45A60>
Traceback (most recent call last):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\commands\runserver.py", line 109, in
inner_run
autoreload.raise_last_exception()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 248, in raise_last_exception
raise _exception[1]
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\__init__.py", line 337, in execute
autoreload.check_errors(django.setup)()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\__init__.py",
line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\apps\registry.py", line 112, in populate
app_config.import_models()
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\apps\config.py",
line 198, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _
find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in
_call_with_frames_removed
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\models.py", line 2, in <module>
from django.contrib.auth.base_user import AbstractBaseUser,
BaseUserManager
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\base_user.py", line 47, in <module>
class AbstractBaseUser(models.Model):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 101, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 305, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\options.py", line 203, in contribute_to_class
self.db_table = truncate_name(self.db_table,
connection.ops.max_name_length())
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\__init__.py",
line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 202, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 110, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\backends\sqlite3\base.py", line 10, in <module>
from sqlite3 import dbapi2 as Database
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\__init__.py", line 23, in
<module>
from sqlite3.dbapi2 import *
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\dbapi2.py", line 27, in
<module>
from _sqlite3 import *
ImportError: DLL load failed: The specified module could not be found.
```
Can someone tell me where I made a mistake and how I can fix this problem? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55655666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4511476/"
] | I had this problem. I solved it by running it in the Anaconda shell.
1. Open **Anaconda Shell/terminal** by pressing your Windows key and searching Anaconda
2. Go to the directory you have your django project in
3. `python manage.py runserver` | It sounds like you need to install SQLite:
<https://www.sqlite.org/download.html>
Or you could change the database settings in your settings file to use some other database. |
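For instance, pointing Django at PostgreSQL instead of SQLite would look roughly like this in `settings.py` (a sketch: every connection value below is a placeholder, and the `psycopg2` driver must be installed separately):
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',        # placeholder database name
        'USER': 'myuser',      # placeholder user
        'PASSWORD': 'secret',  # placeholder password
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```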
55,655,666 | Hello, I'm new at Django. I installed all modules from Anaconda, then created a web application with
```
django-admin startproject
```
My project was created successfully, no problem.
Then I tried to run that project at localhost to see whether everything was okay, and I ran this command in the command line:
```
python manage.py runserver
```
And I get this error:
```
Unhandled exception in thread started by <function check_errors.
<locals>.wrapper at 0x00000221B6D45A60>
Traceback (most recent call last):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\commands\runserver.py", line 109, in
inner_run
autoreload.raise_last_exception()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 248, in raise_last_exception
raise _exception[1]
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\__init__.py", line 337, in execute
autoreload.check_errors(django.setup)()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\__init__.py",
line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\apps\registry.py", line 112, in populate
app_config.import_models()
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\apps\config.py",
line 198, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _
find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in
_call_with_frames_removed
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\models.py", line 2, in <module>
from django.contrib.auth.base_user import AbstractBaseUser,
BaseUserManager
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\base_user.py", line 47, in <module>
class AbstractBaseUser(models.Model):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 101, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 305, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\options.py", line 203, in contribute_to_class
self.db_table = truncate_name(self.db_table,
connection.ops.max_name_length())
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\__init__.py",
line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 202, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 110, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\backends\sqlite3\base.py", line 10, in <module>
from sqlite3 import dbapi2 as Database
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\__init__.py", line 23, in
<module>
from sqlite3.dbapi2 import *
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\dbapi2.py", line 27, in
<module>
from _sqlite3 import *
ImportError: DLL load failed: The specified module could not be found.
```
Can someone tell me where I made a mistake and how I can fix this problem? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55655666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4511476/"
] | I had this problem. I solved it by running it in the Anaconda shell.
1. Open **Anaconda Shell/terminal** by pressing your Windows key and searching Anaconda
2. Go to the directory you have your django project in
3. `python manage.py runserver` | Remove Anaconda, then:
1. Download Python from python.org and install it in `c:\python37`; here it will be easy to set the variables. Don't forget to select pip while installing Python.
2. Set up the Python path variables: `c:\python37`, `c:\python37\Scripts`.
3. If you want to install Django in a virtual environment, install `virtualenvwrapper-win`.
Voila! It works for me. |
55,655,666 | Hello, I'm new at Django. I installed all modules from Anaconda, then created a web application with
```
django-admin startproject
```
My project was created successfully, no problem.
Then I tried to run that project at localhost to see whether everything was okay, and I ran this command in the command line:
```
python manage.py runserver
```
And I get this error:
```
Unhandled exception in thread started by <function check_errors.
<locals>.wrapper at 0x00000221B6D45A60>
Traceback (most recent call last):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\commands\runserver.py", line 109, in
inner_run
autoreload.raise_last_exception()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 248, in raise_last_exception
raise _exception[1]
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\__init__.py", line 337, in execute
autoreload.check_errors(django.setup)()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\__init__.py",
line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\apps\registry.py", line 112, in populate
app_config.import_models()
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\apps\config.py",
line 198, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _
find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in
_call_with_frames_removed
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\models.py", line 2, in <module>
from django.contrib.auth.base_user import AbstractBaseUser,
BaseUserManager
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\base_user.py", line 47, in <module>
class AbstractBaseUser(models.Model):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 101, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 305, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\options.py", line 203, in contribute_to_class
self.db_table = truncate_name(self.db_table,
connection.ops.max_name_length())
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\__init__.py",
line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 202, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 110, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\backends\sqlite3\base.py", line 10, in <module>
from sqlite3 import dbapi2 as Database
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\__init__.py", line 23, in
<module>
from sqlite3.dbapi2 import *
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\dbapi2.py", line 27, in
<module>
from _sqlite3 import *
ImportError: DLL load failed: The specified module could not be found.
```
Can someone tell me where I made a mistake and how I can fix this problem? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55655666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4511476/"
] | I had this problem. I solved it by running it in the Anaconda shell.
1. Open **Anaconda Shell/terminal** by pressing your Windows key and searching Anaconda
2. Go to the directory you have your django project in
3. `python manage.py runserver` | If you want to use Anaconda, then follow the steps below:
> `conda create --name MyDjangoEnv django` (MyDjangoEnv is the virtual environment)
>
> Press `y` to install. Before pressing `y`, please make sure the correct versions of the software are selected.
>
> `activate MyDjangoEnv`
>
> `conda info --envs`
>
> `conda install django`
>
> `conda install sqlparse`
>
> `django-admin startproject first_project` |
55,655,666 | Hello, I'm new at Django. I installed all modules from Anaconda, then created a web application with
```
django-admin startproject
```
My project was created successfully, no problem.
Then I tried to run that project at localhost to see whether everything was okay, and I ran this command in the command line:
```
python manage.py runserver
```
And I get this error:
```
Unhandled exception in thread started by <function check_errors.
<locals>.wrapper at 0x00000221B6D45A60>
Traceback (most recent call last):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\commands\runserver.py", line 109, in
inner_run
autoreload.raise_last_exception()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 248, in raise_last_exception
raise _exception[1]
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\__init__.py", line 337, in execute
autoreload.check_errors(django.setup)()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\__init__.py",
line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\apps\registry.py", line 112, in populate
app_config.import_models()
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\apps\config.py",
line 198, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _
find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in
_call_with_frames_removed
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\models.py", line 2, in <module>
from django.contrib.auth.base_user import AbstractBaseUser,
BaseUserManager
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\base_user.py", line 47, in <module>
class AbstractBaseUser(models.Model):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 101, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 305, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\options.py", line 203, in contribute_to_class
self.db_table = truncate_name(self.db_table,
connection.ops.max_name_length())
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\__init__.py",
line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 202, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 110, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\backends\sqlite3\base.py", line 10, in <module>
from sqlite3 import dbapi2 as Database
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\__init__.py", line 23, in
<module>
from sqlite3.dbapi2 import *
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\dbapi2.py", line 27, in
<module>
from _sqlite3 import *
ImportError: DLL load failed: The specified module could not be found.
```
Can someone tell me where I made a mistake and how I can fix this problem? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55655666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4511476/"
] | I had this problem. I solved it by running it in the Anaconda shell.
1. Open **Anaconda Shell/terminal** by pressing your Windows key and searching Anaconda
2. Go to the directory you have your django project in
3. `python manage.py runserver` | I found a solution on this site: <http://felipegalvao.com.br/blog/2017/01/03/como-criar-ambientes-e-instalar-o-django-com-distribuicao-anaconda/>
Basically, you need to activate an environment in your Anaconda prompt.
Step 1: `conda info --envs`
Step 2: `conda create --name env_name python=3`, then activate it with `conda activate env_name`
Step 3: `pip install django`
Step 4: Assuming that you have already created a project with `startproject`, run `python manage.py runserver`
Thanks! |
55,655,666 | Hello, I'm new at Django. I installed all modules from Anaconda, then created a web application with
```
django-admin startproject
```
My project was created successfully, no problem.
Then I tried to run that project at localhost to see whether everything was okay, and I ran this command in the command line:
```
python manage.py runserver
```
And I get this error:
```
Unhandled exception in thread started by <function check_errors.
<locals>.wrapper at 0x00000221B6D45A60>
Traceback (most recent call last):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\commands\runserver.py", line 109, in
inner_run
autoreload.raise_last_exception()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 248, in raise_last_exception
raise _exception[1]
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\core\management\__init__.py", line 337, in execute
autoreload.check_errors(django.setup)()
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\__init__.py",
line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\apps\registry.py", line 112, in populate
app_config.import_models()
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\apps\config.py",
line 198, in import_models
self.models_module = import_module(models_module_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _
find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in
_call_with_frames_removed
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\models.py", line 2, in <module>
from django.contrib.auth.base_user import AbstractBaseUser,
BaseUserManager
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\contrib\auth\base_user.py", line 47, in <module>
class AbstractBaseUser(models.Model):
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 101, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\base.py", line 305, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\models\options.py", line 203, in contribute_to_class
self.db_table = truncate_name(self.db_table,
connection.ops.max_name_length())
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\__init__.py",
line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 202, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Users\Sercan\Anaconda3\lib\site-packages\django\db\utils.py",
line 110, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Users\Sercan\Anaconda3\lib\importlib\__init__.py", line 127, in
import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Users\Sercan\Anaconda3\lib\site-
packages\django\db\backends\sqlite3\base.py", line 10, in <module>
from sqlite3 import dbapi2 as Database
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\__init__.py", line 23, in
<module>
from sqlite3.dbapi2 import *
File "C:\Users\Sercan\Anaconda3\lib\sqlite3\dbapi2.py", line 27, in
<module>
from _sqlite3 import *
ImportError: DLL load failed: The specified module could not be found.
```
Can someone tell me where I made a mistake and how I can fix this problem? | 2019/04/12 | [
"https://Stackoverflow.com/questions/55655666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4511476/"
] | I had this problem. I solved it by running it in the Anaconda shell.
1. Open **Anaconda Shell/terminal** by pressing your Windows key and searching Anaconda
2. Go to the directory you have your django project in
3. `python manage.py runserver` | I was facing the same problem. It simply means that the DLL module is not installed in that path. While creating a project, don't go with the first option, i.e. venv (virtual environment); this will not let modules be imported into your project. Go with the second option for the interpreter and select your respective python.exe.
Then run `django-admin` in the terminal; if it shows an error, run:
```
pip install django
```
Thanks. |
55,779,936 | I used pip to install keras and tensorflow, yet when I import subpackages from keras, my shell fails a check for `PyBfloat16_Type.tp_base`.
I tried uninstalling and reinstalling tensorflow, but I don't know for certain what is causing this error.
```
from keras.models import Sequential
from keras.layers import Dense
```
```
3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)]
Python Type "help", "copyright", "credits" or "license" for more information.
>>>[evaluate machineLearning.py]
Using TensorFlow backend. 2019-04-21 00:31:22.995541: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr
aborted (disconnected)
>>>
```
Can someone help me solve this issue? | 2019/04/21 | [
"https://Stackoverflow.com/questions/55779936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11262404/"
] | You may try to downgrade Python to 3.6 (I know some people have trouble with tensorflow and keras when using Python 3.7). One simple way is to download Anaconda, create a new environment with Python 3.6, then install tensorflow and keras.
`conda create -n myenv python=3.6`
`conda activate myenv`
`pip3 install tensorflow`
`pip3 install keras` | You have a few options to try:
First, try to uninstall and reinstall TensorFlow and see whether the problem is resolved (replace `tensorflow` with `tensorflow-gpu` in the following commands if you have installed the GPU version):
```
pip uninstall tensorflow
pip install --no-cache-dir tensorflow
```
If the problem is not resolved, try to do the same thing with `numpy`:
```
pip uninstall numpy
pip install --no-cache-dir numpy
```
Hopefully, one of these two would resolve the problem. |
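As a quick way to confirm the reinstall worked and to see which builds ended up paired together, a short check (a sketch, nothing project-specific):
```
import sys
import numpy
import tensorflow as tf

# If the bfloat16 check no longer aborts, these imports succeed;
# the printed versions show which builds are actually installed together.
print(sys.version)
print("numpy:", numpy.__version__)
print("tensorflow:", tf.__version__)
```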
28,023,697 | I want to set up cron jobs on various servers at the same time for data mining. I was already following the steps in [Ansible and crontabs](https://stackoverflow.com/questions/21787755/ansible-and-crontabs), but so far nothing has worked.
Whatever I do, I get the error message:
```
ERROR: cron is not a legal parameter at this level in an Ansible Playbook
```
I have: Ansible 1.8.1
And for some unknown reason, my modules are located in:
`/usr/lib/python2.6/site-packages/ansible/modules/`
I would like to know which precise steps I have to follow to have Ansible install a new cron job in the crontab file.
1. What precisely must a playbook look like to install a cron job?
2. What is the command line to start this playbook?
I'm asking this odd question because the documentation of cron is insufficient and the examples are not working. Maybe my installation is wrong too, which I want to test out with a working example of cron. | 2015/01/19 | [
"https://Stackoverflow.com/questions/28023697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4469762/"
] | I've got (something very much like) this in a ./roles/cron/tasks/main.yml file:
```
- name: Creates weekly backup cronjob
cron: minute="20" hour="5" weekday="sun"
name="Backup mysql tables (weekly schedule)"
cron_file="mysqlbackup-WeeklyBackups"
user="root"
job="/usr/local/bin/mysqlbackup.WeeklyBackups.sh"
tags:
- mysql
- cronjobs
```
The shell script listed in the 'job' was created a little earlier in the main.yml file.
This task will create a file in /etc/cron.d/mysqlbackup-WeeklyBackups:
```
#Ansible: Backup mysql tables (weekly schedule)
20 5 * * sun root /usr/local/bin/mysqlbackup.WeeklyBackups.sh
``` | If you're setting it up to run on the Crontab of the user:
```
- name: Install Batchjobs on crontab
cron:
name: "Manage Disk Space"
minute: "30"
hour: "02"
weekday: "0-6"
job: "home/export/manageDiskSpace.sh > home/export/manageDiskSpace.sh.log 2>&1"
#user: "admin"
disabled: "no"
become_user: "{{ admin_user }}"
tags:
- cronjobs
```
Reference [1]: <https://docs.ansible.com/ansible/latest/collections/ansible/builtin/cron_module.html#examples> |
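As for the second part of the question: a playbook containing a task like this is started from the command line with `ansible-playbook`, e.g. `ansible-playbook -i inventory cron.yml` (both file names here are placeholders for your own inventory and playbook files).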
28,023,697 | I want to set up cron jobs on various servers at the same time for data mining. I was already following the steps in [Ansible and crontabs](https://stackoverflow.com/questions/21787755/ansible-and-crontabs), but so far nothing has worked.
Whatever I do, I get the error message:
```
ERROR: cron is not a legal parameter at this level in an Ansible Playbook
```
I have: Ansible 1.8.1
And for some unknown reason, my modules are located in:
`/usr/lib/python2.6/site-packages/ansible/modules/`
I would like to know which precise steps I have to follow to have Ansible install a new cron job in the crontab file.
1. What precisely must a playbook look like to install a cron job?
2. What is the command line to start this playbook?
I'm asking this odd question because the documentation of cron is insufficient and the examples are not working. Maybe my installation is wrong too, which I want to test out with a working example of cron. | 2015/01/19 | [
"https://Stackoverflow.com/questions/28023697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4469762/"
] | I've got (something very much like) this in a ./roles/cron/tasks/main.yml file:
```
- name: Creates weekly backup cronjob
cron: minute="20" hour="5" weekday="sun"
name="Backup mysql tables (weekly schedule)"
cron_file="mysqlbackup-WeeklyBackups"
user="root"
job="/usr/local/bin/mysqlbackup.WeeklyBackups.sh"
tags:
- mysql
- cronjobs
```
The shell script listed in the 'job' was created a little earlier in the main.yml file.
This task will create a file in /etc/cron.d/mysqlbackup-WeeklyBackups:
```
#Ansible: Backup mysql tables (weekly schedule)
20 5 * * sun root /usr/local/bin/mysqlbackup.WeeklyBackups.sh
``` | ```
---
- hosts: servers
tasks:
- name: "Cronjob Entry"
cron:
name: "### recording mixing/compressing/ftping scripts"
minute: 0,3,6,9,12,15,18,21,24,27,30,33,36,39,42,45,48,51,54,57
hour: "*"
day: "*"
month: "*"
weekday: "*"
job: /usr/share/astguiclient/AST_CRON_audio_1_move_mix.pl
```
I also get the output below:
```
#Ansible: ### recording mixing/compressing/ftping scripts
0,3,6,9,12,15,18,21,24,27,30,33,36,39,42,45,48,51,54,57 * * * * /usr/share/astguiclient/AST_CRON_audio_1_move_mix.pl
``` |
28,023,697 | I want to set up cron jobs on various servers at the same time for data mining. I was already following the steps in [Ansible and crontabs](https://stackoverflow.com/questions/21787755/ansible-and-crontabs), but so far nothing has worked.
Whatever I do, I get the error message:
```
ERROR: cron is not a legal parameter at this level in an Ansible Playbook
```
I have: Ansible 1.8.1
And for some unknown reason, my modules are located in:
`/usr/lib/python2.6/site-packages/ansible/modules/`
I would like to know which precise steps I have to follow to have Ansible install a new cron job in the crontab file.
1. What precisely must a playbook look like to install a cron job?
2. What is the command line to start this playbook?
I'm asking this odd question because the documentation of cron is insufficient and the examples are not working. Maybe my installation is wrong too, which I want to test out with a working example of cron. | 2015/01/19 | [
"https://Stackoverflow.com/questions/28023697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4469762/"
] | ```
---
- hosts: servers
tasks:
- name: "Cronjob Entry"
cron:
name: "### recording mixing/compressing/ftping scripts"
minute: 0,3,6,9,12,15,18,21,24,27,30,33,36,39,42,45,48,51,54,57
hour: "*"
day: "*"
month: "*"
weekday: "*"
job: /usr/share/astguiclient/AST_CRON_audio_1_move_mix.pl
```
I also get the output below:
```
#Ansible: ### recording mixing/compressing/ftping scripts
0,3,6,9,12,15,18,21,24,27,30,33,36,39,42,45,48,51,54,57 * * * * /usr/share/astguiclient/AST_CRON_audio_1_move_mix.pl
``` | If you're setting it up to run on the Crontab of the user:
```
- name: Install Batchjobs on crontab
cron:
name: "Manage Disk Space"
minute: "30"
hour: "02"
weekday: "0-6"
job: "home/export/manageDiskSpace.sh > home/export/manageDiskSpace.sh.log 2>&1"
#user: "admin"
disabled: "no"
become_user: "{{ admin_user }}"
tags:
- cronjobs
```
Reference [1]: <https://docs.ansible.com/ansible/latest/collections/ansible/builtin/cron_module.html#examples> |
61,262,487 | Having an issue with Django Allauth. When I log out of one user, and log back in with another, I get this issue, both locally and in production.
I'm using the latest version of Allauth, Django 3.0.5, and Python 3.7.4.
It seems like this is an Allauth issue, but I haven't seen it reported online anywhere else, so I'm just wondering what I can do next to troubleshoot. Login works fine, unless I have just logged out of another user.
```
'NoneType' object has no attribute 'append'
Request Method: POST
Request URL: http://127.0.0.1:8000/account/login/
Django Version: 3.0.5
Exception Type: AttributeError
Exception Value:
'NoneType' object has no attribute 'append'
Exception Location: /Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/adapter.py in authentication_failed, line 507
Python Executable: /Users/[USERDIR]/Sites/frontline/venv/bin/python
Python Version: 3.7.4
Python Path:
['/Users/[USERDIR]/Sites/frontline',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python37.zip',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/lib-dynload',
'/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf',
'/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/odf']
Server time: Thu, 16 Apr 2020 17:53:52 -0700
Environment:
Request Method: POST
Request URL: http://127.0.0.1:8000/account/login/
Django Version: 3.0.5
Python Version: 3.7.4
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'django.contrib.sites',
'django.contrib.sitemaps',
'django.contrib.postgres',
'common',
'bootstrap4',
's3direct',
'bootstrap_datepicker_plus',
'import_export',
'tinymce',
'allauth',
'allauth.account',
'allauth.socialaccount',
'debug_toolbar',
'dashboard',
'marketing']
Installed Middleware:
('debug_toolbar.middleware.DebugToolbarMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware')
Traceback (most recent call last):
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/utils/decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/views.py", line 138, in dispatch
return super(LoginView, self).dispatch(request, *args, **kwargs)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/views.py", line 81, in dispatch
**kwargs)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/views/generic/base.py", line 97, in dispatch
return handler(request, *args, **kwargs)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/views.py", line 103, in post
if form.is_valid():
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/forms/forms.py", line 180, in is_valid
return self.is_bound and not self.errors
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/forms/forms.py", line 175, in errors
self.full_clean()
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/forms/forms.py", line 377, in full_clean
self._clean_form()
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/django/forms/forms.py", line 404, in _clean_form
cleaned_data = self.clean()
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/forms.py", line 179, in clean
**credentials)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/adapter.py", line 497, in authenticate
self.authentication_failed(request, **credentials)
File "/Users/[USERDIR]/Sites/frontline/venv/lib/python3.7/site-packages/allauth/account/adapter.py", line 507, in authentication_failed
data.append(time.mktime(dt.timetuple()))
Exception Type: AttributeError at /account/login/
Exception Value: 'NoneType' object has no attribute 'append'
``` | 2020/04/17 | [
"https://Stackoverflow.com/questions/61262487",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/636064/"
] | You need to have the segue originate from the LiveController, not from the Navigation Controller. | This could be a few things, so try these fixes:
1. Clean and build your project. Then, run again.
2. Quit Xcode, open up project and run.
3. In the `Attribute Inspector`, remove `openWelcomePage` and leave it blank.
Hope that one of these suggestions helps. |
8,651,095 | How do you control the order in which PyYAML outputs key/value pairs when serializing a Python dictionary?
I'm using Yaml as a simple serialization format in a Python script. My Yaml serialized objects represent a sort of "document", so for maximum user-friendliness, I'd like my object's "name" field to appear first in the file. Of course, since the value returned by my object's `__getstate__` is a dictionary, and Python dictionaries are unordered, the "name" field will be serialized to a random location in the output.
e.g.
```
>>> import yaml
>>> class Document(object):
... def __init__(self, name):
... self.name = name
... self.otherstuff = 'blah'
... def __getstate__(self):
... return self.__dict__.copy()
...
>>> doc = Document('obj-20111227')
>>> print yaml.dump(doc, indent=4)
!!python/object:__main__.Document
otherstuff: blah
name: obj-20111227
``` | 2011/12/28 | [
"https://Stackoverflow.com/questions/8651095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247542/"
] | Took me a few hours of digging through PyYAML docs and tickets, but I eventually discovered [this comment](https://web.archive.org/web/20170308231702/http://pyyaml.org/ticket/29) that lays out some proof-of-concept code for serializing an OrderedDict as a normal YAML map (but maintaining the order).
e.g. applied to my original code, the solution looks something like:
```
>>> import yaml
>>> from collections import OrderedDict
>>> def dump_anydict_as_map(anydict):
... yaml.add_representer(anydict, _represent_dictorder)
...
>>> def _represent_dictorder( self, data):
... if isinstance(data, Document):
... return self.represent_mapping('tag:yaml.org,2002:map', data.__getstate__().items())
... else:
... return self.represent_mapping('tag:yaml.org,2002:map', data.items())
...
>>> class Document(object):
... def __init__(self, name):
... self.name = name
... self.otherstuff = 'blah'
... def __getstate__(self):
... d = OrderedDict()
... d['name'] = self.name
... d['otherstuff'] = self.otherstuff
... return d
...
>>> dump_anydict_as_map(Document)
>>> doc = Document('obj-20111227')
>>> print yaml.dump(doc, indent=4)
!!python/object:__main__.Document
name: obj-20111227
otherstuff: blah
``` | The last time I checked, Python's dictionaries weren't ordered. If you really want them to be, I strongly recommend using a list of key/value pairs.
```
[
('key', 'value'),
('key2', 'value2')
]
```
Alternatively, define a list with the keys and put them in the right order.
```
keys = ['key1', 'name', 'price', 'key2']
for key in keys:
print obj[key]
``` |
8,651,095 | How do you control the order in which PyYAML outputs key/value pairs when serializing a Python dictionary?
I'm using Yaml as a simple serialization format in a Python script. My Yaml serialized objects represent a sort of "document", so for maximum user-friendliness, I'd like my object's "name" field to appear first in the file. Of course, since the value returned by my object's `__getstate__` is a dictionary, and Python dictionaries are unordered, the "name" field will be serialized to a random location in the output.
e.g.
```
>>> import yaml
>>> class Document(object):
... def __init__(self, name):
... self.name = name
... self.otherstuff = 'blah'
... def __getstate__(self):
... return self.__dict__.copy()
...
>>> doc = Document('obj-20111227')
>>> print yaml.dump(doc, indent=4)
!!python/object:__main__.Document
otherstuff: blah
name: obj-20111227
``` | 2011/12/28 | [
"https://Stackoverflow.com/questions/8651095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247542/"
] | I think the problem is when you dump the data.
I looked into the code of PyYAML and there is an optional argument called `sort_keys`; setting that value to `False` seems to do the trick. | The last time I checked, Python's dictionaries weren't ordered. If you really want them to be, I strongly recommend using a list of key/value pairs.
```
[
('key', 'value'),
('key2', 'value2')
]
```
Alternatively, define a list with the keys and put them in the right order.
```
keys = ['key1', 'name', 'price', 'key2']
for key in keys:
print obj[key]
``` |
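For reference, a minimal sketch of the `sort_keys` approach from the answer above (this assumes PyYAML 5.1 or later, where the argument exists; plain dicts preserve insertion order on Python 3.7+):
```
import yaml

data = {'name': 'obj-20111227', 'otherstuff': 'blah'}

# sort_keys=False keeps the dict's insertion order instead of sorting keys,
# so 'name' is emitted first.
print(yaml.dump(data, sort_keys=False))
```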
8,651,095 | How do you control the order in which PyYAML outputs key/value pairs when serializing a Python dictionary?
I'm using Yaml as a simple serialization format in a Python script. My Yaml serialized objects represent a sort of "document", so for maximum user-friendliness, I'd like my object's "name" field to appear first in the file. Of course, since the value returned by my object's `__getstate__` is a dictionary, and Python dictionaries are unordered, the "name" field will be serialized to a random location in the output.
e.g.
```
>>> import yaml
>>> class Document(object):
... def __init__(self, name):
... self.name = name
... self.otherstuff = 'blah'
... def __getstate__(self):
... return self.__dict__.copy()
...
>>> doc = Document('obj-20111227')
>>> print yaml.dump(doc, indent=4)
!!python/object:__main__.Document
otherstuff: blah
name: obj-20111227
``` | 2011/12/28 | [
"https://Stackoverflow.com/questions/8651095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247542/"
] | **New Solution** (as of **2020** and PyYAML 5.1)
You can dump a dictionary in its current order by simply using
```
yaml.dump(data, default_flow_style=False, sort_keys=False)
``` | The last time I checked, Python's dictionaries weren't ordered. If you really want them to be, I strongly recommend using a list of key/value pairs.
```
[
('key', 'value'),
('key2', 'value2')
]
```
Alternatively, define a list with the keys and put them in the right order.
```
keys = ['key1', 'name', 'price', 'key2']
for key in keys:
print obj[key]
``` |
8,651,095 | How do you control the order in which PyYAML outputs key/value pairs when serializing a Python dictionary?
I'm using Yaml as a simple serialization format in a Python script. My Yaml serialized objects represent a sort of "document", so for maximum user-friendliness, I'd like my object's "name" field to appear first in the file. Of course, since the value returned by my object's `__getstate__` is a dictionary, and Python dictionaries are unordered, the "name" field will be serialized to a random location in the output.
e.g.
```
>>> import yaml
>>> class Document(object):
... def __init__(self, name):
... self.name = name
... self.otherstuff = 'blah'
... def __getstate__(self):
... return self.__dict__.copy()
...
>>> doc = Document('obj-20111227')
>>> print yaml.dump(doc, indent=4)
!!python/object:__main__.Document
otherstuff: blah
name: obj-20111227
``` | 2011/12/28 | [
"https://Stackoverflow.com/questions/8651095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247542/"
] | Took me a few hours of digging through PyYAML docs and tickets, but I eventually discovered [this comment](https://web.archive.org/web/20170308231702/http://pyyaml.org/ticket/29) that lays out some proof-of-concept code for serializing an OrderedDict as a normal YAML map (but maintaining the order).
e.g. applied to my original code, the solution looks something like:
```
>>> import yaml
>>> from collections import OrderedDict
>>> def dump_anydict_as_map(anydict):
... yaml.add_representer(anydict, _represent_dictorder)
...
>>> def _represent_dictorder( self, data):
... if isinstance(data, Document):
... return self.represent_mapping('tag:yaml.org,2002:map', data.__getstate__().items())
... else:
... return self.represent_mapping('tag:yaml.org,2002:map', data.items())
...
>>> class Document(object):
... def __init__(self, name):
... self.name = name
... self.otherstuff = 'blah'
... def __getstate__(self):
... d = OrderedDict()
... d['name'] = self.name
... d['otherstuff'] = self.otherstuff
... return d
...
>>> dump_anydict_as_map(Document)
>>> doc = Document('obj-20111227')
>>> print yaml.dump(doc, indent=4)
!!python/object:__main__.Document
name: obj-20111227
otherstuff: blah
``` | I think the problem is when you dump the data.
I looked into the code of PyYAML and there is an optional argument called `sort_keys`; setting that value to `False` seems to do the trick. |
8,651,095 | How do you control the order in which PyYAML outputs key/value pairs when serializing a Python dictionary?
I'm using Yaml as a simple serialization format in a Python script. My Yaml serialized objects represent a sort of "document", so for maximum user-friendliness, I'd like my object's "name" field to appear first in the file. Of course, since the value returned by my object's `__getstate__` is a dictionary, and Python dictionaries are unordered, the "name" field will be serialized to a random location in the output.
e.g.
```
>>> import yaml
>>> class Document(object):
... def __init__(self, name):
... self.name = name
... self.otherstuff = 'blah'
... def __getstate__(self):
... return self.__dict__.copy()
...
>>> doc = Document('obj-20111227')
>>> print yaml.dump(doc, indent=4)
!!python/object:__main__.Document
otherstuff: blah
name: obj-20111227
``` | 2011/12/28 | [
"https://Stackoverflow.com/questions/8651095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247542/"
] | Took me a few hours of digging through PyYAML docs and tickets, but I eventually discovered [this comment](https://web.archive.org/web/20170308231702/http://pyyaml.org/ticket/29) that lays out some proof-of-concept code for serializing an OrderedDict as a normal YAML map (but maintaining the order).
e.g. applied to my original code, the solution looks something like:
```
>>> import yaml
>>> from collections import OrderedDict
>>> def dump_anydict_as_map(anydict):
... yaml.add_representer(anydict, _represent_dictorder)
...
>>> def _represent_dictorder( self, data):
... if isinstance(data, Document):
... return self.represent_mapping('tag:yaml.org,2002:map', data.__getstate__().items())
... else:
... return self.represent_mapping('tag:yaml.org,2002:map', data.items())
...
>>> class Document(object):
... def __init__(self, name):
... self.name = name
... self.otherstuff = 'blah'
... def __getstate__(self):
... d = OrderedDict()
... d['name'] = self.name
... d['otherstuff'] = self.otherstuff
... return d
...
>>> dump_anydict_as_map(Document)
>>> doc = Document('obj-20111227')
>>> print yaml.dump(doc, indent=4)
!!python/object:__main__.Document
name: obj-20111227
otherstuff: blah
``` | **New Solution** (as of **2020** and PyYAML 5.1)
You can dump a dictionary in its current order by simply using
```
yaml.dump(data, default_flow_style=False, sort_keys=False)
``` |
52,528,911 | In the [docs](https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-differences.html) under **Updating a Vertex Property**, it is mentioned that one can *"update a property value without adding an additional value to the set of values"*
by doing
`g.V('exampleid01').property(single, 'age', 25)`
In **gremlin\_python**, I am unable to run a query like the above.
I get the error:
```
update_prop_overwrite = g.V().hasLabel('placeholder-vertex').property(single,'maker','unknown').next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'single' is not defined
```
How can I resolve this so that I can replace a Vertex property value in Neptune?
Without `single` the query will append the new property value to the property key if a value exists already. | 2018/09/27 | [
"https://Stackoverflow.com/questions/52528911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4007615/"
] | You need to be sure to import `single` which is seen [here in the code](https://github.com/apache/tinkerpop/blob/d1a3fa147d1f009ae57274827c9b59426dfc6e58/gremlin-python/src/main/jython/gremlin_python/process/traversal.py#L127) and can be imported with:
```
from gremlin_python.process.traversal import Cardinality
```
however, the TinkerPop documentation recommends importing all such classes with:
```
from gremlin_python import statics
statics.load_statics(globals())
```
You can read more about that [here](http://tinkerpop.apache.org/docs/current/reference/#_static_enums_and_methods). | ```
from gremlin_python.process.traversal import Cardinality
g.V().hasLabel('placeholder-vertex').property(Cardinality.single,'maker','unknown').next()
```
This should also work. |
52,528,911 | In the [docs](https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-differences.html) under **Updating a Vertex Property**, it is mentioned that one can *"update a property value without adding an additional value to the set of values"*
by doing
`g.V('exampleid01').property(single, 'age', 25)`
In **gremlin\_python**, I am unable to run a query like the above.
I get the error:
```
update_prop_overwrite = g.V().hasLabel('placeholder-vertex').property(single,'maker','unknown').next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'single' is not defined
```
How can I resolve this so that I can replace a Vertex property value in Neptune?
Without `single` the query will append the new property value to the property key if a value exists already. | 2018/09/27 | [
"https://Stackoverflow.com/questions/52528911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4007615/"
] | You need to be sure to import `single` which is seen [here in the code](https://github.com/apache/tinkerpop/blob/d1a3fa147d1f009ae57274827c9b59426dfc6e58/gremlin-python/src/main/jython/gremlin_python/process/traversal.py#L127) and can be imported with:
```
from gremlin_python.process.traversal import Cardinality
```
however, the TinkerPop documentation recommends importing all such classes with:
```
from gremlin_python import statics
statics.load_statics(globals())
```
You can read more about that [here](http://tinkerpop.apache.org/docs/current/reference/#_static_enums_and_methods). | Import `statics` from `gremlin_python`
```
from gremlin_python import statics
statics.load_statics(globals())
update_prop_overwrite = g.V().hasLabel('placeholder-vertex').property(single,'maker','unknown').next()
``` |
52,528,911 | In the [docs](https://docs.aws.amazon.com/neptune/latest/userguide/access-graph-gremlin-differences.html) under **Updating a Vertex Property**, it is mentioned that one can *"update a property value without adding an additional value to the set of values"*
by doing
`g.V('exampleid01').property(single, 'age', 25)`
In **gremlin\_python**, I am unable to run a query like the above.
I get the error:
```
update_prop_overwrite = g.V().hasLabel('placeholder-vertex').property(single,'maker','unknown').next()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'single' is not defined
```
How can I resolve this so that I can replace a Vertex property value in Neptune?
Without `single` the query will append the new property value to the property key if a value exists already. | 2018/09/27 | [
"https://Stackoverflow.com/questions/52528911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4007615/"
] | ```
from gremlin_python.process.traversal import Cardinality
g.V().hasLabel('placeholder-vertex').property(Cardinality.single,'maker','unknown').next()
```
This should also work. | Import `statics` from `gremlin_python`
```
from gremlin_python import statics
statics.load_statics(globals())
update_prop_overwrite = g.V().hasLabel('placeholder-vertex').property(single,'maker','unknown').next()
``` |
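Putting the pieces from the answers above together, here is a minimal end-to-end sketch (the endpoint URL is hypothetical; adjust it to your own Neptune or Gremlin Server instance):
```
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.traversal import Cardinality
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Hypothetical endpoint -- replace with your own.
conn = DriverRemoteConnection('ws://your-neptune-endpoint:8182/gremlin', 'g')
g = traversal().withRemote(conn)

# Cardinality.single overwrites the existing property value instead of
# appending another value to the property's set of values.
g.V().hasLabel('placeholder-vertex').property(Cardinality.single, 'maker', 'unknown').next()

conn.close()
```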
15,713,427 | I want to remove rows from several data frames so that they are all length n. When I tried to use a -for- loop, the changes would not persist through the rest of the script.
```
n = 50
groups = [df1, df2, df3]
for dataset in groups:
dataset = dataset[:n]
```
Redefining names individually (e.g., `df1 = df1[:n]`) works, but what are some alternate ways? (Either through Python or pandas.)
More importantly, why does the -for- loop not work here?
pandas == 0.10.1
python == 2.7.3 | 2013/03/30 | [
"https://Stackoverflow.com/questions/15713427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1560238/"
] | This is a slight Python misunderstanding, rather than a pandas-specific one. :)
You're re-assigning the variable used in the iteration and not changing it in the list:
```
In [1]: L = [1, 2, 3]
In [2]: for i in L:
i = i + 1
In [3]: L
Out[3]: [1, 2, 3]
```
You want to actually change the list:
```
In [4]: for i in range(len(L)):
L[i] = L[i] + 1
In [5]: L
Out[5]: [2, 3, 4]
```
Or, for a nicer syntax, use `enumerate`:
```
In [6]: for i, x in enumerate(L):
L[i] = x + 1
In [7]: L
Out[7]: [3, 4, 5]
```
That is:
```
for i, dataset in enumerate(groups):
groups[i] = dataset[:n]
``` | Your code creates (and discards) a new variable `dataset` in the for-loop.
Try this:
```
n = 50
groups = [df1, df2, df3]
for dataset in groups:
dataset[:] = dataset[:n]
``` |
15,713,427 | I want to remove rows from several data frames so that they are all length n. When I tried to use a -for- loop, the changes would not persist through the rest of the script.
```
n = 50
groups = [df1, df2, df3]
for dataset in groups:
dataset = dataset[:n]
```
Redefining names individually (e.g., `df1 = df1[:n]`) works, but what are some alternate ways? (Either through Python or pandas.)
More importantly, why does the -for- loop not work here?
pandas == 0.10.1
python == 2.7.3 | 2013/03/30 | [
"https://Stackoverflow.com/questions/15713427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1560238/"
] | This is a slight Python misunderstanding, rather than a pandas-specific one. :)
You're re-assigning the variable used in the iteration and not changing it in the list:
```
In [1]: L = [1, 2, 3]
In [2]: for i in L:
i = i + 1
In [3]: L
Out[3]: [1, 2, 3]
```
You want to actually change the list:
```
In [4]: for i in range(len(L)):
L[i] = L[i] + 1
In [5]: L
Out[5]: [2, 3, 4]
```
Or, for a nicer syntax, use `enumerate`:
```
In [6]: for i, x in enumerate(L):
L[i] = x + 1
In [7]: L
Out[7]: [3, 4, 5]
```
That is:
```
for i, dataset in enumerate(groups):
groups[i] = dataset[:n]
``` | ```
n = 50
groups = [df1, df2, df3]
groups = [df.head(n) for df in groups]
```
---
In Python, you can think of variable names as pointing to objects.
The statement
```
groups = [df1, df2, df3]
```
makes the variable name, `groups`, point to a list object, which contains 3 other objects.
The `for-loop`:
```
for dataset in groups:
```
creates a new variable name, `dataset`, which *also* points to the objects inside `groups`, one at a time as Python iterates through the loop.
The assignment
```
dataset = dataset[:n]
```
**redirects** the variable name `dataset` to point to a new object, `dataset[:n]`. But this assignment does not affect `groups`. That variable name is still pointing to the `list` object, which contains the original DataFrames.
To affect `groups`, you need either to assign a new value to `groups` with an assignment statement, such as
```
groups = ...
```
or assign a new value for one of the elements in the list:
```
groups[i] = ...
``` |
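To see the rebinding behavior described above in isolation, here is a minimal sketch that needs no pandas at all:
```
L = [[1, 2, 3], [4, 5, 6]]

for item in L:
    item = item[:1]           # rebinds the loop variable only; L is unchanged

print(L)                      # [[1, 2, 3], [4, 5, 6]]

L = [item[:1] for item in L]  # rebuild the list to keep the truncated objects
print(L)                      # [[1], [4]]
```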
15,713,427 | I want to remove rows from several data frames so that they are all length n. When I tried to use a -for- loop, the changes would not persist through the rest of the script.
```
n = 50
groups = [df1, df2, df3]
for dataset in groups:
dataset = dataset[:n]
```
Redefining names individually (e.g., `df1 = df1[:n]`) works, but what are some alternate ways? (Either through Python or pandas.)
More importantly, why does the -for- loop not work here?
pandas == 0.10.1
python == 2.7.3 | 2013/03/30 | [
"https://Stackoverflow.com/questions/15713427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1560238/"
] | ```
n = 50
groups = [df1, df2, df3]
groups = [df.head(n) for df in groups]
```
---
In Python, you can think of variable names as pointing to objects.
The statement
```
groups = [df1, df2, df3]
```
makes the variable name, `groups`, point to a list object, which contains 3 other objects.
The `for-loop`:
```
for dataset in groups:
```
creates a new variable name, `dataset`, which *also* points to the objects inside `groups`, one at a time as Python iterates through the loop.
The assignment
```
dataset = dataset[:n]
```
**redirects** the variable name `dataset` to point to a new object, `dataset[:n]`. But this assignment does not affect `groups`. That variable name is still pointing to the `list` object, which contains the original DataFrames.
To affect `groups`, you need either to assign a new value to `groups` with an assignment statement, such as
```
groups = ...
```
or assign a new value for one of the elements in the list:
```
groups[i] = ...
``` | Your code creates (and discards) a new variable `dataset` in the for-loop.
Try this:
```
n = 50
groups = [df1, df2, df3]
for dataset in groups:
dataset[:] = dataset[:n]
``` |
52,465,856 | ```
def frame_processing(frame):
out_frame = np.zeros((frame.shape[0],frame.shape[1],4),dtype = np.uint8)
b,g,r = cv2.split(frame)
alpha = np.zeros_like(b , dtype=np.uint8)
print(out_frame.shape)
print(b.shape);print(g.shape);print(r.shape);print(alpha.shape)
for i in range(frame.shape[0]):
for j in range(frame.shape[1]):
a = (frame[i,j,0],frame[i,j,1],frame[i,j,2])
b = (225,225,225)
if all(i > j for i, j in zip(a,b)): #all(a>b) :
alpha[i,j] = 0
else:
alpha[i,j] = 255
out_frame[:,:,0] = b
out_frame[:,:,1] = g
out_frame[:,:,2] = r
out_frame[:,:,3] = alpha
#out_frame = cv2.merge((b,g,r,alpha))
return out_frame
```
I wanted to add an alpha channel; I tried `cv2.merge()` and manually stacking the channels, but both failed.
When using `cv2.merge()`:
```
error: OpenCV(3.4.2) C:\projects\opencv-
python\opencv\modules\core\src\merge.cpp:458: error: (-215:Assertion failed)
mv[i].size == mv[0].size && mv[i].depth() == depth in function 'cv::merge'
```
When manually adding channels:
```
ValueError: could not broadcast input array from shape (3) into shape
(225,225)
``` | 2018/09/23 | [
"https://Stackoverflow.com/questions/52465856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9811461/"
] | Use `cv2.inRange` to find the mask, then merge them with `np.dstack`:
```
#!/use/bin/python3
# 2018/09/24 11:51:31 (CST)
import cv2
import numpy as np
#frame = ...
mask = cv2.inRange(frame, (225,225,225), (255,255,255))
#dst = np.dstack((frame, 255-mask))
dst = np.dstack((frame, mask))
cv2.imwrite("dst.png", dst)
```
To find the specific color, you may be interested in this question:
[Choosing the correct upper and lower HSV boundaries for color detection with `cv::inRange` (OpenCV)](https://stackoverflow.com/questions/10948589/choosing-the-correct-upper-and-lower-hsv-boundaries-for-color-detection-withcv/48367205#48367205) | It's a simple typo. You are reassigning the variable `b` inside the for loop, and it conflicts with the blue-channel variable. Changing `b = (225,225,225)` to `threshold = (225, 225, 225)` and `zip(a,b)` to `zip(a, threshold)` should fix the problem.
By the way, you can use this to create your alpha channel:
```
alpha = np.zeros(b.shape, dtype=b.dtype)
```
You can also fill your alpha channel like this if you need more speed (you can measure the time difference):
```
alpha[~((b[:,:]>threshold[0]) & (g[:,:]>threshold[1]) & (r[:,:]>threshold[2]))] = 255
```
So your function becomes:
```
def frame_processing(frame):
# split channels
b,g,r = cv2.split(frame)
# initialize alpha to zeros
alpha = np.zeros(b.shape, dtype=b.dtype)
# fill alpha values
threshold = (225, 225, 225)
alpha[~((b[:,:]>threshold[0]) & (g[:,:]>threshold[1]) & (r[:,:]>threshold[2]))] = 255
# merge all channels back
out_frame = cv2.merge((b, g, r, alpha))
return out_frame
``` |
60,823,720 | I have a really long ordered dict that looks similar to this:
```
OrderedDict([('JIRAUSER16100', {'name': 'john.smith', 'fullname': 'John Smith', 'email': 'John.Smith@domain.test', 'active': True}), ('JIRAUSER16300', {'name': 'susan.jones', 'fullname': 'Susan Jones', 'email': 'Susan.Jones@domain.test', 'active': True})])
```
How can I search through this list to find a key value based on a key value match? For example, for Susan Jones, I'd like to find her email based on the name value? Is there a pythonic way to find that without just looping through the entire dictionary?
Currently I'm just doing this below, but it seems inefficient when I have to go through the list a thousand times. I'm curious if there is a "find" method of some sort?
```
searchname = "susan.jones"
for user in my_ordered_dict.items():
if user[1]["name"] == searchname:
print(user[1]["email"])
``` | 2020/03/24 | [
"https://Stackoverflow.com/questions/60823720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11483315/"
] | Two ways you could potentially improve on this. You say your `OrderedDict` is really long, so I'd recommend the first option, since it quickly becomes faster than the second as the size of your data grows.
1) **use [Pandas](https://pandas.pydata.org/)**:
```
In [1]: from collections import OrderedDict
In [2]: import pandas as pd
In [3]: d = OrderedDict([
...: ('JIRAUSER16100', {'name': 'john.smith',
...: 'fullname': 'John Smith',
...: 'email': 'John.Smith@domain.test',
...: 'active': True}),
...: ('JIRAUSER16300', {'name': 'susan.jones',
...: 'fullname': 'Susan Jones',
...: 'email': 'Susan.Jones@domain.test',
...: 'active': True})
...: ])
In [4]: df = pd.DataFrame(d).T
In [5]: df
Out[5]:
name fullname email active
JIRAUSER16100 john.smith John Smith John.Smith@domain.test True
JIRAUSER16300 susan.jones Susan Jones Susan.Jones@domain.test True
In [6]: df.loc[df['name'] == 'susan.jones', 'email'][0]
Out[6]: 'Susan.Jones@domain.test'
```
On the scale of easy-to-learn-but-weak to hard-to-learn-but-powerful, `pandas` is fairly far toward the latter extreme. There's a decent amount to unpack here if you aren't familiar with `pandas`, so for the sake of brevity I won't go into it. But feel free to comment with any questions if more explanation would help.
2) **Use the built-in [`next`](https://docs.python.org/3/library/functions.html#next) function**
This will allow you to avoid looping through the full dictionary. To make a long story really short, you can pass `next` a generator expression with a filter condition, and it will essentially run through the iterable until it finds the *first* item that satisfies the condition. So in your case,
```
In [7]: next(entry['email'] for entry in d.values() if entry['name'] == 'susan.jones')
Out[7]: 'Susan.Jones@domain.test'
```
would work. It will save you time versus looping through the entire dict, but unlike option 1, its speed will depend on where in your `OrderedDict` the entry you're trying to find is located. Unless you need to stick exclusively to the standard library for some reason, Pandas will be much faster on any reasonably sized dataset.
Hope this helps! | If you are looking for a specific match, you have to iterate through your structure only until you find it, so you don't necessarily go through the entire dictionary.
Something like:
```
In [19]: d = OrderedDict([('JIRAUSER16100', {'name': 'john.smith', 'fullname': 'John Smith', 'email': 'John.Smith@domain.test',
...: 'active': True}), ('JIRAUSER16300', {'name': 'susan.jones', 'fullname': 'Susan Jones', 'email': 'Susan.Jones@domain.tes
...: t', 'active': True})])
...:
In [20]: def find_entry_by_subkey(sub_key, sub_key_value, data):
...: for entry in data.values():
...: if entry[sub_key] == sub_key_value:
...: return entry
...:
In [21]: find_entry_by_subkey('email', 'Susan.Jones@domain.test', d)
Out[21]:
{'name': 'susan.jones',
'fullname': 'Susan Jones',
'email': 'Susan.Jones@domain.test',
'active': True}
``` |
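One further option not shown in the answers above: if you search the same dictionary many times, you can build a one-time index keyed by name, trading a little memory for O(1) lookups. A minimal sketch (field names follow the question's data):
```
users = {
    'JIRAUSER16100': {'name': 'john.smith', 'email': 'John.Smith@domain.test'},
    'JIRAUSER16300': {'name': 'susan.jones', 'email': 'Susan.Jones@domain.test'},
}

# Build the index once (O(n)); every subsequent lookup is O(1).
by_name = {rec['name']: rec for rec in users.values()}

print(by_name['susan.jones']['email'])  # Susan.Jones@domain.test
```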
58,983,828 | I am using docplex in Google Colab with Python.
For the following LP, some of the decision variables are predetermined, and the LP needs to be solved given those values. It's a sequencing problem, and the sequence is a set of given values. The other decision variables will be optimized based on this.
```
#Define the decision variables
x = cost.continuous_var_dict(P, name='x') # The landing time of plane i
alpha = cost.continuous_var_dict(P, name='alpha') # How much of deviation of landing before target landing time for plane i
beta = cost.continuous_var_dict(P, name='beta') # How much of deviation of landing after target landing time for plane i
delta = cost.binary_var_dict(plane_matrix,name="delta") # 1 if plane i lands before plane j; 0 o/w
z = cost.binary_var_dict(plane_matrix, name="z") # 1 if plane i and j land on same runway; 0 o/w
y = cost.binary_var_dict(plane_runway, name="y") # 1 if plane j lands on runway r; 0 o/w
```
So the given values are for delta; there is a constraint to satisfy this, which is as follows:
```
# Constraint 2: either plane i lands before j or j lands before i
cost.add_constraints(delta[i,j] + delta[j,i] == 1 for i in P for j in P if j!=i)
```
However, I get an error as follows:
```
DOcplexException Traceback (most recent call last)
<ipython-input-23-441ca8cbb9d0> in <module>()
3
4 # #Constraint 2: either i lands before j or j lands before i
----> 5 cost.add_constraints(delta[i,j] + delta[j,i] == 1 for i in P for j in P if j!=i)
6
7 # #Constraint 3: Each plane can land on only one runway
4 frames
/usr/local/lib/python3.6/dist-packages/docplex/mp/model.py in add_constraints(self, cts, names)
3514 return self._lfactory._new_constraint_block2(cts, names)
3515 else:
-> 3516 return self._lfactory._new_constraint_block1(cts)
3517
3518
/usr/local/lib/python3.6/dist-packages/docplex/mp/mfactory.py in _new_constraint_block1(self, cts)
891 posted_cts.append(ct)
892 else:
--> 893 checker.typecheck_constraint_seq(ctseq, check_linear=True, accept_range=True)
894 for ct in ctseq:
895 if filterfn(ct, ctname=None, check_for_trivial_ct=check_trivial, arg_checker=checker):
/usr/local/lib/python3.6/dist-packages/docplex/mp/tck.py in typecheck_constraint_seq(self, cts, check_linear, accept_range)
354 for i, ct in enumerate(checked_cts_list):
355 if not isinstance(ct, AbstractConstraint):
--> 356 self.fatal("Expecting sequence of constraints, got: {0!r} at position {1}", ct, i)
357 if check_linear:
358 if not ct.is_linear():
/usr/local/lib/python3.6/dist-packages/docplex/mp/tck.py in fatal(self, msg, *args)
229
230 def fatal(self, msg, *args):
--> 231 self._logger.fatal(msg, args)
232
233 def error(self, msg, *args): # pragma: no cover
/usr/local/lib/python3.6/dist-packages/docplex/mp/error_handler.py in fatal(self, msg, args)
208 resolved_message = resolve_pattern(msg, args)
209 docplex_error_stop_here()
--> 210 raise DOcplexException(resolved_message)
211
212 def fatal_limits_exceeded(self):
DOcplexException: Expecting sequence of constraints, got: True at position 0
```
Please help. I really can't figure out why this is an issue. Thank you. | 2019/11/21 | [
"https://Stackoverflow.com/questions/58983828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3916398/"
] | `join()` doesn't do anything to the child thread -- all it does is block until the child thread has exited. It only has an effect on the calling thread (i.e. by blocking its progress). The child thread can keep running for as long as it wants (although typically you'd prefer it to exit quickly, so that the thread calling `join()` doesn't get blocked for a long time -- but that's up to you to implement) | >
> And to my surprise, joining these alive threads does not remove them from list of threads that top is giving. Is this expected behaviour?
>
>
>
That suggests the thread(s) are still running. Calling `join()` on a thread doesn't have any impact on that running thread; the calling thread simply waits for the called-on thread to exit.
>
> found out the loop inside Threadpool destructor never moved further than first join
>
>
>
That means the first thread hasn't completed yet. So none of the other threads haven't been joined yet either (even if they have exited).
However, if the thread function is implemented correctly, the first thread (and all the other threads in the pool) should eventually complete and the `join()` calls should return (assuming the threads in the pool are supposed to exit; this doesn't need to be true in general. Depending on the application, you could simply make the threads run forever too).
So it appears there's some sort of deadlock, or a wait on some resource, that's holding up one or more threads, and you need to investigate with a debugger.
[Helgrind](http://valgrind.org/docs/manual/hg-manual.html) would be very useful.
You could also try reducing the number of threads (say, to 2) to see if the problem becomes reproducible/obvious, and then increase the thread count again. |
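To illustrate the `join()` semantics described above, here is a minimal Python sketch (the question involves a C++-style thread pool, but `join` behaves the same way): `join()` only blocks the caller and never stops or alters the joined thread.
```
import threading
import time

def worker():
    time.sleep(1.0)  # simulate work
    print("worker done")

t = threading.Thread(target=worker)
t.start()

t.join()  # blocks *this* thread until worker() returns; worker is unaffected
print("main continues only after the worker has exited")
```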
54,440,762 | I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the `model_config` and trigger the server to reload it.
This functionality appears to exist (based on <https://github.com/tensorflow/serving/pull/885> and <https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22>), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself). | 2019/01/30 | [
"https://Stackoverflow.com/questions/54440762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/141789/"
] | So it took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question as me, here is an example of how to do this. (You'll need the `tensorflow_serving` package for this; `pip install tensorflow-serving-api`.)
Based on this pull request (which at the time of writing hadn't been accepted and was closed since it needed review): <https://github.com/tensorflow/serving/pull/1065>
```
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
import grpc
def add_model_config(host, name, base_path, model_platform):
channel = grpc.insecure_channel(host)
stub = model_service_pb2_grpc.ModelServiceStub(channel)
request = model_management_pb2.ReloadConfigRequest()
model_server_config = model_server_config_pb2.ModelServerConfig()
#Create a config to add to the list of served models
config_list = model_server_config_pb2.ModelConfigList()
one_config = config_list.config.add()
one_config.name= name
one_config.base_path=base_path
one_config.model_platform=model_platform
model_server_config.model_config_list.CopyFrom(config_list)
request.config.CopyFrom(model_server_config)
print(request.IsInitialized())
print(request.ListFields())
response = stub.HandleReloadConfigRequest(request,10)
if response.status.error_code == 0:
print("Reload sucessfully")
else:
print("Reload failed!")
print(response.status.error_code)
print(response.status.error_message)
add_model_config(host="localhost:8500",
name="my_model",
base_path="/models/my_model",
model_platform="tensorflow")
``` | **Add a model** to TF Serving server and to the existing config file `conf_filepath`: Use arguments `name`, `base_path`, `model_platform` for the new model. Keeps the original models intact.
Notice a small difference from @Karl's answer: using `MergeFrom` instead of `CopyFrom`.
>
> pip install tensorflow-serving-api
>
>
>
```
import grpc
from google.protobuf import text_format
from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
def add_model_config(conf_filepath, host, name, base_path, model_platform):
with open(conf_filepath, 'r+') as f:
config_ini = f.read()
channel = grpc.insecure_channel(host)
stub = model_service_pb2_grpc.ModelServiceStub(channel)
request = model_management_pb2.ReloadConfigRequest()
model_server_config = model_server_config_pb2.ModelServerConfig()
config_list = model_server_config_pb2.ModelConfigList()
model_server_config = text_format.Parse(text=config_ini, message=model_server_config)
# Create a config to add to the list of served models
one_config = config_list.config.add()
one_config.name = name
one_config.base_path = base_path
one_config.model_platform = model_platform
model_server_config.model_config_list.MergeFrom(config_list)
request.config.CopyFrom(model_server_config)
response = stub.HandleReloadConfigRequest(request, 10)
if response.status.error_code == 0:
with open(conf_filepath, 'w+') as f:
f.write(request.config.__str__())
print("Updated TF Serving conf file")
else:
print("Failed to update model_config_list!")
print(response.status.error_code)
print(response.status.error_message)
``` |
54,440,762 | I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the `model_config` and trigger the server to reload it.
This functionality appears to exist (based on <https://github.com/tensorflow/serving/pull/885> and <https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22>), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself). | 2019/01/30 | [
"https://Stackoverflow.com/questions/54440762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/141789/"
] | So it took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question as me, here is an example of how to do this. (You'll need the `tensorflow_serving` package for this; `pip install tensorflow-serving-api`.)
Based on this pull request (which at the time of writing hadn't been accepted and was closed since it needed review): <https://github.com/tensorflow/serving/pull/1065>
```
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
import grpc
def add_model_config(host, name, base_path, model_platform):
channel = grpc.insecure_channel(host)
stub = model_service_pb2_grpc.ModelServiceStub(channel)
request = model_management_pb2.ReloadConfigRequest()
model_server_config = model_server_config_pb2.ModelServerConfig()
#Create a config to add to the list of served models
config_list = model_server_config_pb2.ModelConfigList()
one_config = config_list.config.add()
one_config.name= name
one_config.base_path=base_path
one_config.model_platform=model_platform
model_server_config.model_config_list.CopyFrom(config_list)
request.config.CopyFrom(model_server_config)
print(request.IsInitialized())
print(request.ListFields())
response = stub.HandleReloadConfigRequest(request,10)
if response.status.error_code == 0:
print("Reload sucessfully")
else:
print("Reload failed!")
print(response.status.error_code)
print(response.status.error_message)
add_model_config(host="localhost:8500",
name="my_model",
base_path="/models/my_model",
model_platform="tensorflow")
``` | If you're using the method described in [this answer](https://stackoverflow.com/a/65519903/10999642), please note that you're actually launching multiple tensorflow model server instances instead of a single model server, effectively making the servers compete for resources instead of working together to optimize tail latency. |
54,440,762 | I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the `model_config` and trigger the server to reload it.
This functionality appears to exist (based on <https://github.com/tensorflow/serving/pull/885> and <https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22>), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself). | 2019/01/30 | [
"https://Stackoverflow.com/questions/54440762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/141789/"
] | So it took me ages of trawling through pull requests to finally find a code example for this. For the next person who has the same question as me, here is an example of how to do this. (You'll need the `tensorflow_serving` package for this; `pip install tensorflow-serving-api`.)
Based on this pull request (which at the time of writing hadn't been accepted and was closed since it needed review): <https://github.com/tensorflow/serving/pull/1065>
```
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
import grpc
def add_model_config(host, name, base_path, model_platform):
channel = grpc.insecure_channel(host)
stub = model_service_pb2_grpc.ModelServiceStub(channel)
request = model_management_pb2.ReloadConfigRequest()
model_server_config = model_server_config_pb2.ModelServerConfig()
#Create a config to add to the list of served models
config_list = model_server_config_pb2.ModelConfigList()
one_config = config_list.config.add()
one_config.name= name
one_config.base_path=base_path
one_config.model_platform=model_platform
model_server_config.model_config_list.CopyFrom(config_list)
request.config.CopyFrom(model_server_config)
print(request.IsInitialized())
print(request.ListFields())
response = stub.HandleReloadConfigRequest(request,10)
if response.status.error_code == 0:
print("Reload sucessfully")
else:
print("Reload failed!")
print(response.status.error_code)
print(response.status.error_message)
add_model_config(host="localhost:8500",
name="my_model",
base_path="/models/my_model",
model_platform="tensorflow")
``` | While the solutions mentioned here work fine, there is one more method you can use to hot-reload your models: the `--model_config_file_poll_wait_seconds` flag.
As mentioned here in the [documentation](https://www.tensorflow.org/tfx/serving/serving_config#reloading_model_server_configuration) -
>
> By setting the --model\_config\_file\_poll\_wait\_seconds flag to instruct the server to periodically check for a new config file at --model\_config\_file filepath.
>
>
>
So, you just have to update the config file at `model_config_path` and tf-serving will load any **[new](https://github.com/tensorflow/serving/blob/ee665381efc6450bd8dc1499d5f6c939263228fe/tensorflow_serving/model_servers/server_core.cc#L367)** models and unload any models removed from the config file.
---
Edit 1: I looked at the source code, and the flag seems to have been present since very early versions of tf-serving, but there have been instances where some users were not able to use it (see [this](https://stackoverflow.com/questions/57891282/invalid-argument-model-config-file-poll-wait-seconds)). So, try to use the latest version if possible. |
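As an illustration of the polling approach above, a minimal launch sketch (the paths are hypothetical, and it assumes the `tensorflow_model_server` binary is on your PATH):
```
import subprocess

MODEL_CONFIG = "/models/models.config"  # hypothetical path

# Start the server so it re-reads the config file every 60 seconds;
# models added to (or removed from) the file are then hot-(un)loaded.
server = subprocess.Popen([
    "tensorflow_model_server",
    "--port=8500",
    "--model_config_file=" + MODEL_CONFIG,
    "--model_config_file_poll_wait_seconds=60",
])
```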
54,440,762 | I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the `model_config` and trigger the server to reload it.
This functionality appears to exist (based on <https://github.com/tensorflow/serving/pull/885> and <https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22>), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself). | 2019/01/30 | [
"https://Stackoverflow.com/questions/54440762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/141789/"
] | **Add a model** to TF Serving server and to the existing config file `conf_filepath`: Use arguments `name`, `base_path`, `model_platform` for the new model. Keeps the original models intact.
Notice a small difference from @Karl's answer: using `MergeFrom` instead of `CopyFrom`.
>
> pip install tensorflow-serving-api
>
>
>
```
import grpc
from google.protobuf import text_format
from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
def add_model_config(conf_filepath, host, name, base_path, model_platform):
with open(conf_filepath, 'r+') as f:
config_ini = f.read()
channel = grpc.insecure_channel(host)
stub = model_service_pb2_grpc.ModelServiceStub(channel)
request = model_management_pb2.ReloadConfigRequest()
model_server_config = model_server_config_pb2.ModelServerConfig()
config_list = model_server_config_pb2.ModelConfigList()
model_server_config = text_format.Parse(text=config_ini, message=model_server_config)
# Create a config to add to the list of served models
one_config = config_list.config.add()
one_config.name = name
one_config.base_path = base_path
one_config.model_platform = model_platform
model_server_config.model_config_list.MergeFrom(config_list)
request.config.CopyFrom(model_server_config)
response = stub.HandleReloadConfigRequest(request, 10)
if response.status.error_code == 0:
with open(conf_filepath, 'w+') as f:
f.write(request.config.__str__())
print("Updated TF Serving conf file")
else:
print("Failed to update model_config_list!")
print(response.status.error_code)
print(response.status.error_message)
``` | If you're using the method described in [this answer](https://stackoverflow.com/a/65519903/10999642), please note that you're actually launching multiple tensorflow model server instances instead of a single model server, effectively making the servers compete for resources instead of working together to optimize tail latency. |
54,440,762 | I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the `model_config` and trigger the server to reload it.
This functionality appears to exist (based on <https://github.com/tensorflow/serving/pull/885> and <https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22>), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself). | 2019/01/30 | [
"https://Stackoverflow.com/questions/54440762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/141789/"
] | **Add a model** to TF Serving server and to the existing config file `conf_filepath`: Use arguments `name`, `base_path`, `model_platform` for the new model. Keeps the original models intact.
Notice a small difference from @Karl's answer: using `MergeFrom` instead of `CopyFrom`.
>
> pip install tensorflow-serving-api
>
>
>
```
import grpc
from google.protobuf import text_format
from tensorflow_serving.apis import model_service_pb2_grpc, model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
def add_model_config(conf_filepath, host, name, base_path, model_platform):
with open(conf_filepath, 'r+') as f:
config_ini = f.read()
channel = grpc.insecure_channel(host)
stub = model_service_pb2_grpc.ModelServiceStub(channel)
request = model_management_pb2.ReloadConfigRequest()
model_server_config = model_server_config_pb2.ModelServerConfig()
config_list = model_server_config_pb2.ModelConfigList()
model_server_config = text_format.Parse(text=config_ini, message=model_server_config)
# Create a config to add to the list of served models
one_config = config_list.config.add()
one_config.name = name
one_config.base_path = base_path
one_config.model_platform = model_platform
model_server_config.model_config_list.MergeFrom(config_list)
request.config.CopyFrom(model_server_config)
response = stub.HandleReloadConfigRequest(request, 10)
if response.status.error_code == 0:
with open(conf_filepath, 'w+') as f:
f.write(request.config.__str__())
print("Updated TF Serving conf file")
else:
print("Failed to update model_config_list!")
print(response.status.error_code)
print(response.status.error_message)
``` | While the solutions mentioned here work fine, there is one more method you can use to hot-reload your models: the `--model_config_file_poll_wait_seconds` flag.
As mentioned here in the [documentation](https://www.tensorflow.org/tfx/serving/serving_config#reloading_model_server_configuration) -
>
> By setting the --model\_config\_file\_poll\_wait\_seconds flag to instruct the server to periodically check for a new config file at --model\_config\_file filepath.
>
>
>
So, you just have to update the config file at `model_config_path` and tf-serving will load any **[new](https://github.com/tensorflow/serving/blob/ee665381efc6450bd8dc1499d5f6c939263228fe/tensorflow_serving/model_servers/server_core.cc#L367)** models and unload any models removed from the config file.
---
Edit 1: I looked at the source code, and the flag seems to have been present since very early versions of tf-serving, but there have been instances where some users were not able to use it (see [this](https://stackoverflow.com/questions/57891282/invalid-argument-model-config-file-poll-wait-seconds)). So, try to use the latest version if possible. |
54,440,762 | I'm busy configuring a TensorFlow Serving client that asks a TensorFlow Serving server to produce predictions on a given input image, for a given model.
If the model being requested has not yet been served, it is downloaded from a remote URL to a folder where the server's models are located. (The client does this). At this point I need to update the `model_config` and trigger the server to reload it.
This functionality appears to exist (based on <https://github.com/tensorflow/serving/pull/885> and <https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/model_service.proto#L22>), but I can't find any documentation on how to actually use it.
I am essentially looking for a python script with which I can trigger the reload from client side (or otherwise to configure the server to listen for changes and trigger the reload itself). | 2019/01/30 | [
"https://Stackoverflow.com/questions/54440762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/141789/"
] | While the solutions mentioned here work fine, there is one more method you can use to hot-reload your models: the `--model_config_file_poll_wait_seconds` flag.
As mentioned here in the [documentation](https://www.tensorflow.org/tfx/serving/serving_config#reloading_model_server_configuration) -
>
> By setting the --model\_config\_file\_poll\_wait\_seconds flag to instruct the server to periodically check for a new config file at --model\_config\_file filepath.
>
>
>
So, you just have to update the config file at `model_config_path` and tf-serving will load any **[new](https://github.com/tensorflow/serving/blob/ee665381efc6450bd8dc1499d5f6c939263228fe/tensorflow_serving/model_servers/server_core.cc#L367)** models and unload any models removed from the config file.
---
Edit 1: I looked at the source code, and the flag seems to have been present since very early versions of tf-serving, but there have been instances where some users were not able to use it (see [this](https://stackoverflow.com/questions/57891282/invalid-argument-model-config-file-poll-wait-seconds)). So, try to use the latest version if possible. | If you're using the method described in [this answer](https://stackoverflow.com/a/65519903/10999642), please note that you're actually launching multiple tensorflow model server instances instead of a single model server, effectively making the servers compete for resources instead of working together to optimize tail latency. |
10,496,815 | I have written a job server that runs 1 or more jobs concurrently (or simultaneously depending on the number of CPUs on the system). A lot of the jobs created connect to a SQL Server database, perform a query, fetch the results and write the results to a CSV file. For these types of jobs I use `pyodbc` and Microsoft SQL Server ODBC Driver 1.0 for Linux to connect, run the query, then disconnect.
Each job runs as a separate process using the python multiprocessing module. The job server itself is kicked off as a double forked background process.
This all ran fine until I noticed today that the first SQL Server job ran fine but the second seemed to hang (i.e. look as though it was running forever).
On further investigation I noticed the process for this second job had become zombified so I ran a manual test as follows:
```
[root@myserver jobserver]# python
Python 2.6.6 (r266:84292, Dec 7 2011, 20:48:22)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import pyodbc
conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD')
c = conn.cursor()
c.execute('select * from my_table')
<pyodbc.Cursor object at 0x1d373f0>
r = c.fetchall()
len(r)
19012
c.close()
conn.close()
conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD')
Segmentation fault
```
So as you can see the first connection to the database works fine but any subsequent attempts to connect fail with a segmentation fault.
I cannot for the life of me figure out why this has started happening or the solution, all worked fine before today and no code has been changed.
Any help on this issue would be much appreciated. | 2012/05/08 | [
"https://Stackoverflow.com/questions/10496815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1328695/"
] | I had a very similar problem and in my case the solution was to upgrade the ODBC driver on the machine I was trying to make the connection from. I'm afraid I don't know much about why that fixed the problem. I suspect something was changed or upgraded on the database server I was trying to connect to.
This answer might be too late for the OP, but I wanted to share it anyway, since I found this question while I was troubleshooting the problem and was a little discouraged when I didn't see any answers. | I also encountered this problem recently. My config includes unixODBC-2.3.0 plus MS ODBC Driver 1.0 for Linux. After some experiments, we speculate that the problem may arise from a database upgrade (to SQL Server 2008 SP1 in our case) that triggers some bugs in the MS ODBC driver. The problem also occurs in this thread:
<http://social.technet.microsoft.com/Forums/sqlserver/en-US/23fafa84-d333-45ac-8bd0-4b76151e8bcc/sql-server-driver-for-linux-causes-segmentation-fault?forum=sqldataaccess>
I also tried upgrading my driver manager to unixODBC-2.3.2, but with no luck. My final solution was to use FreeTDS 0.82.6+ with unixODBC-2.3.2. This version of the FreeTDS driver works badly with unixODBC-2.3.0: the driver manager keeps complaining that the driver does not support certain functions. Everything goes smoothly once unixODBC is upgraded. |
10,496,815 | I have written a job server that runs 1 or more jobs concurrently (or simultaneously depending on the number of CPUs on the system). A lot of the jobs created connect to a SQL Server database, perform a query, fetch the results and write the results to a CSV file. For these types of jobs I use `pyodbc` and Microsoft SQL Server ODBC Driver 1.0 for Linux to connect, run the query, then disconnect.
Each job runs as a separate process using the python multiprocessing module. The job server itself is kicked off as a double forked background process.
This all ran fine until I noticed today that the first SQL Server job ran fine but the second seemed to hang (i.e. look as though it was running forever).
On further investigation I noticed the process for this second job had become zombified so I ran a manual test as follows:
```
[root@myserver jobserver]# python
Python 2.6.6 (r266:84292, Dec 7 2011, 20:48:22)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import pyodbc
conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD')
c = conn.cursor()
c.execute('select * from my_table')
<pyodbc.Cursor object at 0x1d373f0>
r = c.fetchall()
len(r)
19012
c.close()
conn.close()
conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD')
Segmentation fault
```
So as you can see the first connection to the database works fine but any subsequent attempts to connect fail with a segmentation fault.
I cannot for the life of me figure out why this has started happening or the solution, all worked fine before today and no code has been changed.
Any help on this issue would be much appreciated. | 2012/05/08 | [
"https://Stackoverflow.com/questions/10496815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1328695/"
] | I cannot detail the specifics of the underlying mechanics behind this problem. I can, however, say that the problem was being caused by the use of the Queue class from Python's multiprocessing module. Whether I was implementing this Queue correctly remains unanswered, but it appears the queue was not terminating the subprocess (and the underlying database connection) properly after each job completed, which led to the segmentation faults.
To solve this, I implemented my own queuing system, which was basically a list of Process objects executed in the order they were put into the list. A loop then made periodic checks on the status of those processes until all had completed. The next batch of jobs would then be retrieved and executed. | I also encountered this problem recently. My config includes unixODBC-2.3.0 plus MS ODBC Driver 1.0 for Linux. After some experiments, we speculate that the problem may arise from a database upgrade (to SQL Server 2008 SP1 in our case) that triggers some bugs in the MS ODBC driver. The problem also occurs in this thread:
<http://social.technet.microsoft.com/Forums/sqlserver/en-US/23fafa84-d333-45ac-8bd0-4b76151e8bcc/sql-server-driver-for-linux-causes-segmentation-fault?forum=sqldataaccess>
I also tried upgrading my driver manager to unixODBC-2.3.2, but with no luck. My final solution was to use FreeTDS 0.82.6+ with unixODBC-2.3.2. This version of the FreeTDS driver works badly with unixODBC-2.3.0: the driver manager keeps complaining that the driver does not support certain functions. Everything goes smoothly once unixODBC is upgraded. |
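A minimal sketch of the hand-rolled queuing approach the accepted answer describes (the job function, worker count, and polling interval here are illustrative, not the OP's actual code):
```
import multiprocessing
import time

def run_job(job_id):
    # Each job runs in its own process, so its database connection
    # (e.g. a pyodbc connection) lives and dies inside that process.
    print("job", job_id, "running")

def run_batch(job_ids, max_workers=4):
    pending = [multiprocessing.Process(target=run_job, args=(j,)) for j in job_ids]
    running = []
    while pending or running:
        # Top up the set of running processes.
        while pending and len(running) < max_workers:
            p = pending.pop(0)
            p.start()
            running.append(p)
        # Periodically reap processes that have finished.
        time.sleep(0.5)
        for p in running[:]:
            if not p.is_alive():
                p.join()
                running.remove(p)

if __name__ == '__main__':
    run_batch(range(8))
```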
57,465,747 | I do the following operations:
1. Convert string datetime in pandas dataframe to python datetime via `apply(strptime)`
2. Convert `datetime` to posix timestamp via `.timestamp()` method
3. If I revert posix back to `datetime` with `.fromtimestamp()` I obtain different datetime
It differs by 3 hours, which is my timezone offset (I'm at UTC+3 now), so I suppose it is some kind of timezone issue. I also understand that inside `apply` it implicitly converts to `pandas.Timestamp`, but I don't understand the difference this makes in this case.
What is the reason for this strange behavior, and what should I do to avoid it? In my project I actually need to compare these pandas timestamps with correct POSIX timestamps, and right now the comparison works incorrectly.
Below is dummy reproducible example:
```
df = pd.DataFrame(['2018-03-03 14:30:00'], columns=['c'])
df['c'] = df['c'].apply(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S'))
dt = df['c'].iloc[0]
dt
>> Timestamp('2018-03-03 14:30:00')
datetime.datetime.fromtimestamp(dt.timestamp())
>> datetime.datetime(2018, 3, 3, 17, 30)
``` | 2019/08/12 | [
"https://Stackoverflow.com/questions/57465747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5331908/"
] | First, I suggest using the `np.timedelta64` dtype when working with `pandas`. In this case it makes the reciprocity simple.
```
pd.to_datetime('2018-03-03 14:30:00').value
#1520087400000000000
pd.to_datetime(pd.to_datetime('2018-03-03 14:30:00').value)
#Timestamp('2018-03-03 14:30:00')
```
The issue with the other methods is that a POSIX timestamp has UTC as its origin, but `fromtimestamp` returns local time, so if your system's timezone isn't UTC the round trip shifts the result. The following methods will remedy this:
```
from datetime import datetime
import pytz
dt
#Timestamp('2018-03-03 14:30:00')
# Seemingly problematic:
datetime.fromtimestamp(dt.timestamp())
#datetime.datetime(2018, 3, 3, 9, 30)
```
---
```
datetime.fromtimestamp(dt.timestamp(), tz=pytz.utc)
#datetime.datetime(2018, 3, 3, 14, 30, tzinfo=<UTC>)
datetime.combine(dt.date(), dt.timetz())
#datetime.datetime(2018, 3, 3, 14, 30)
mytz = pytz.timezone('US/Eastern') # Use your own local timezone
datetime.fromtimestamp(mytz.localize(dt).timestamp())
#datetime.datetime(2018, 3, 3, 14, 30)
``` | An answer with the [`to_datetime`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html) function:
```py
df = pd.DataFrame(['2018-03-03 14:30:00'], columns=['c'])
df['c'] = pd.to_datetime(df['c'].values, dayfirst=False).tz_localize('Your/Timezone')
```
When working with dates, you should always attach a timezone; it makes them easier to work with afterwards.
This does not explain the difference between the pandas `Timestamp` and the standalone `datetime`, though. |
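For completeness, a stdlib-only variant of the timezone-aware round trip shown in the accepted answer (a minimal sketch using `datetime.timezone` instead of `pytz`):
```
from datetime import datetime, timezone

dt = datetime(2018, 3, 3, 14, 30, tzinfo=timezone.utc)
ts = dt.timestamp()                                 # POSIX seconds, UTC origin
back = datetime.fromtimestamp(ts, tz=timezone.utc)  # no local-time shift
print(back)                                         # 2018-03-03 14:30:00+00:00
```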