JavaScript HTML element scraping using Scrapy on Python 2.7.11: I get the following
Question:
[root@Imx8 craigslist_sample]# scrapy crawl spider
/root/Python-2.7.11/craigslist_sample/craigslist_sample/spiders/test.py:1: ScrapyDeprecationWarning: Module `scrapy.spider` is deprecated, use `scrapy.spiders` instead
from scrapy.spider import BaseSpider
/root/Python-2.7.11/craigslist_sample/craigslist_sample/spiders/test.py:6: ScrapyDeprecationWarning: craigslist_sample.spiders.test.MySpider inherits from deprecated class scrapy.spiders.BaseSpider, please inherit from scrapy.spiders.Spider. (warning only on first subclass, there may be others)
class MySpider(BaseSpider):
2016-10-18 18:23:30 [scrapy] INFO: Scrapy 1.2.0 started (bot: craigslist_sample)
2016-10-18 18:23:30 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'craigslist_sample.spiders', 'SPIDER_MODULES': ['craigslist_sample.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'craigslist_sample'}
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 162, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 190, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 194, in _create_crawler
spidercls = self.spider_loader.load(spidercls)
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 43, in load
raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: spider'
Answer: You should set `name = 'spider'` in
craigslist_sample/craigslist_sample/spiders/test.py
class MySpider(Spider):
name = 'spider'
def parse(self,response):
#....
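For context, a complete minimal module along those lines might look like this (a sketch; the start URL and log line are illustrative, not from the question):

    # craigslist_sample/spiders/test.py -- minimal sketch
    from scrapy.spiders import Spider

    class MySpider(Spider):
        name = 'spider'  # must match the name passed to `scrapy crawl spider`
        start_urls = ['http://example.com']  # placeholder URL

        def parse(self, response):
            # extract whatever you need from the response here
            self.logger.info('visited %s', response.url)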
|
Django AppRegistryNotReady: Apps aren't loaded yet
Question: I'm trying to make a python script to put some things in my database;
from django.conf import settings
settings.configure()
import django.db
from models import Hero #Does not work..?
heroes = [name for name in open('hero_names.txt').readlines()]
names_in_db = [hero.hero_name for hero in Hero.objects.all()] #ALready existing heroes
for heroname in heroes:
if heroname not in names_in_db:
h = Hero(hero_name=heroname, portraid_link='/static/heroes/'+heroname)
h.save()
The import throws the following
Traceback (most recent call last):
File "heroes_to_db.py", line 4, in <module>
from models import Hero
File "C:\Users\toft_\Desktop\d2-patchnotes-master\dota2notes\patch\models.py", line 5, in <module>
class Hero(models.Model):
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 105, in __new__
app_config = apps.get_containing_app_config(module)
File "C:\Python27\lib\site-packages\django\apps\registry.py", line 237, in get_containing_app_config
self.check_apps_ready()
File "C:\Python27\lib\site-packages\django\apps\registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
I know I can do `python manage.py shell` and write the code by hand but, to be
honest, I don't want to. What am I missing?
Answer: Django must configure all installed applications before you can use any
models. To do this you must call `django.setup()`
import django
django.setup()
[From the
documentation:](https://docs.djangoproject.com/en/1.10/ref/applications/#how-
applications-are-loaded)
> This function is called automatically:
>
> * When running an HTTP server via Django’s WSGI support.
> * When invoking a management command.
>
>
> It must be called explicitly in other cases, for instance in plain Python
> scripts.
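Putting it together, a standalone script should load the app registry before importing any models. A sketch, assuming the project's settings module is importable (the module paths below are inferred from the traceback, so adjust them to your project):

    import os
    import django

    # point Django at your settings module, then load the app registry
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'dota2notes.settings')
    django.setup()

    # import models only after setup() has run
    from patch.models import Hero

If you stay with `settings.configure()` instead, call `django.setup()` immediately after it, before the model import.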
|
large data transformation in python
Question: I have a large data set (ten 12 GB CSV files) that has 25 columns, and I want
to transform it into a dataset with 6 columns. The first 3 columns remain the
same, the 4th holds the variable names, and the rest contain the data. Below is
my input:
#RIC Date[L] Time[L] Type L1-BidPrice L1-BidSize L1-AskPrice L1-AskSize L2-BidPrice L2-BidSize L2-AskPrice L2-AskSize L3-BidPrice L3-BidSize L3-AskPrice L3-AskSize L4-BidPrice L4-BidSize L4-AskPrice L4-AskSize L5-BidPrice L5-BidSize L5-AskPrice L5-AskSize
HOU.ALP 20150901 30:10.8 Market Depth 5.29 50000 5.3 32000 5.28 50000 5.31 50000 5.27 50000 5.32 50000 5.26 50000 5.33 50000 5.34 50000
HOU.ALP 20150901 30:10.8 Market Depth 5.29 50000 5.3 44000 5.28 50000 5.31 50000 5.27 50000 5.32 50000 5.26 50000 5.33 50000 5.34 50000
HOU.ALP 20150901 30:12.1 Market Depth 5.29 50000 5.3 32000 5.28 50000 5.31 50000 5.27 50000 5.32 50000 5.26 50000 5.33 50000 5.34 50000
HOU.ALP 20150901 30:12.1 Market Depth 5.29 50000 5.3 38000 5.28 50000 5.31 50000 5.27 50000 5.32 50000 5.26 50000 5.33 50000 5.34 50000
and I would like to transform it to:
#RIC Date[L] Time[L] level Bid_price bid_volume Ask_price Ask_volume
HOU.ALP 20150901 30:10.8 L1 5.29 50000 5.3 50000
HOU.ALP 20150901 30:10.8 L2 5.28 50000 5.31 50000
HOU.ALP 20150901 30:12.1 L3 5.27 50000 5.32 50000
HOU.ALP 20150901 30:12.1 L4 5.26 50000 5.33 50000
HOU.ALP 20150901 30:12.1 L5
HOU.ALP 20150901 30:12.1 L1 5.29 50000 5.3 50000
HOU.ALP 20150901 30:12.1 L2 5.28 44000 5.31 50000
HOU.ALP 20150901 30:12.1 L3 5.27 48000 5.32 50000
HOU.ALP 20150901 30:12.1 L4 5.26 50000 5.33 50000
Here is my attempt at the code. I think I would have to use a dictionary to
write to a CSV file:
    import csv
    from itertools import izip_longest  # zip_longest on Python 3

    def depth_data_transformation(input_file_list, output_file):
for file in input_file_list:
file_to_open = '%s.csv' %file
with open(file_to_open) as f, open(output_file, "w") as out:
next(f) # skip header
cols = ["#RIC", "Date[L]", "Time[L]", "level", "Bid_price", "bid_volume", "Ask_price", "Ask_volume"]
wr = csv.writer(out)
wr.writerow(cols)
for row in csv.reader(f):
# get all but first three cols
it = row[4:]
# zip_longest(*[iter(it)] * 4, fillvalue="") -> group into 4's, add empty string for missing values
for ind, t in enumerate(izip_longest(*[iter(it)] * 4, fillvalue=""), 1):
# first 3 cols, level and group all in one row/list.
wr.writerow(row[:3]+ ["l{}".format(ind)] + list(t))
Answer: You need to group the levels, i.e. `L1-BidPrice L1-BidSize L1-AskPrice
L1-AskSize`, and write each group to a new row:
import csv
from itertools import zip_longest # izip_longest python2
with open("infile.csv") as f, open("out.csv", "w") as out:
next(f) # skip header
cols = ["#RIC", "Date[L]", "Time[L]", "level", "Bid_price", "bid_volume", "Ask_price", "Ask_volume"]
wr = csv.writer(out)
wr.writerow(cols)
for row in csv.reader(f):
# get all but first three cols.
it = row[4:]
# zip_longest(*[iter(it)] * 4, fillvalue="") -> group into 4's, add empty string for missing values
for ind, t in enumerate(zip_longest(*[iter(it)] * 4, fillvalue=""), 1):
# first 3 cols, level and group all in one row/list.
wr.writerow(row[:3]+ ["l{}".format(ind)] + list(t))
Which would give you:
#RIC,Date[L],Time[L],level,Bid_price,bid_volume,Ask_price,Ask_volume
HOU.ALP,20150901,30:10.8,l1,5.29,50000,5.3,32000
HOU.ALP,20150901,30:10.8,l2,5.28,50000,5.31,50000
HOU.ALP,20150901,30:10.8,l3,5.27,50000,5.32,50000
HOU.ALP,20150901,30:10.8,l4,5.26,50000,5.33,50000
HOU.ALP,20150901,30:10.8,l5,5.34,50000,,
HOU.ALP,20150901,30:10.8,l1,5.29,50000,5.3,44000
HOU.ALP,20150901,30:10.8,l2,5.28,50000,5.31,50000
HOU.ALP,20150901,30:10.8,l3,5.27,50000,5.32,50000
HOU.ALP,20150901,30:10.8,l4,5.26,50000,5.33,50000
HOU.ALP,20150901,30:10.8,l5,5.34,50000,,
HOU.ALP,20150901,30:12.1,l1,5.29,50000,5.3,32000
HOU.ALP,20150901,30:12.1,l2,5.28,50000,5.31,50000
HOU.ALP,20150901,30:12.1,l3,5.27,50000,5.32,50000
HOU.ALP,20150901,30:12.1,l4,5.26,50000,5.33,50000
HOU.ALP,20150901,30:12.1,l5,5.34,50000,,
HOU.ALP,20150901,30:12.1,l1,5.29,50000,5.3,38000
HOU.ALP,20150901,30:12.1,l2,5.28,50000,5.31,50000
HOU.ALP,20150901,30:12.1,l3,5.27,50000,5.32,50000
HOU.ALP,20150901,30:12.1,l4,5.26,50000,5.33,50000
HOU.ALP,20150901,30:12.1,l5,5.34,50000,,
In `for ind, t in enumerate(zip_longest(*[iter(it)] * 4, fillvalue=""), 1)`,
_`enumerate`_ with a start index of 1 is keeping track of which _group/level_
we are at.
_`zip_longest(*[iter(it)] * 4, fillvalue="")`_ groups the cols into sections
i.e `L1-BidPrice,L1-BidSize,L1-AskPrice,L1-AskSize`,
`L2-BidPrice,L2-BidSize,L2-AskPrice,L2-AskSize` etc.. all the way to `Ln-..`
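To see the grouping trick in isolation, a toy example:

    from itertools import izip_longest  # zip_longest on Python 3

    it = ['b1', 's1', 'a1', 'z1', 'b2', 's2', 'a2']
    # the same iterator repeated 4 times -> consecutive items form groups of 4
    print(list(izip_longest(*[iter(it)] * 4, fillvalue="")))
    # [('b1', 's1', 'a1', 'z1'), ('b2', 's2', 'a2', '')]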
You have `HOU.ALP 20150901 30:10.8 L1 5.29 50000 5.3 50000` in your expected
output, but 32000 is the value in your input for `L1-AskSize`; each row has 5
levels and you also have 8 columns, so I presume your expected output is wrong.
|
MAGICS - undefined symbol: _ZTIN5eckit9ExceptionE
Question: I'm stuck on a runtime error "undefined symbol: _ZTIN5eckit9ExceptionE" like
this
Start 2: basic_python
2: Test command: /usr/local/Python/2.7.10/bin/python "coast.py"
2: Environment variables:
2: PYTHONPATH=/opt/src/ecmwf/Magics-2.29.4-Source/build/python
2: LD_LIBRARY_PATH=/opt/src/ecmwf/Magics-2.29.4-Source/build/lib
2: MAGPLUS_HOME=/opt/src/ecmwf/Magics-2.29.4-Source/test/..
2: OMP_NUM_THREADS=1
2: Test timeout computed to be: 1500
2: Traceback (most recent call last):
2: File "coast.py", line 11, in <module>
2: from Magics.macro import *
2: File "/opt/src/ecmwf/Magics-2.29.4-Source/build/python/Magics/__init__.py", line 32, in <module>
2: _Magics = swig_import_helper()
2: File "/opt/src/ecmwf/Magics-2.29.4-Source/build/python/Magics/__init__.py", line 28, in swig_import_helper
2: _mod = imp.load_module('_Magics', fp, pathname, description)
2: ImportError: /usr/local/Magics/2.29.4/gnu/4.4.7/lib/libMagPlus.so: undefined symbol: _ZTIN5eckit9ExceptionE
There was no error while building the shared library libMagPlus.so. The error
is only raised at runtime, when the Python module loads it.
Checking with nm, the undefined symbol '_ZTIN5eckit9ExceptionE' comes from a
static library, libOdb.a:
nm libOdb.a | grep _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
0000000000000000 V DW.ref._ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
U _ZTIN5eckit9ExceptionE
But there was no complaint about the undefined symbol '_ZTIN5eckit9ExceptionE'
for the executables linked directly against the static library libOdb.a, at
either compile time or runtime. All C and Fortran code also worked well with
the static library libOdb.a; only the shared library libMagPlus.so has the
problem.
The library libMagPlus.so was linked like this:
/usr/bin/g++ -fPIC -pipe -O2 -g \
-Wl,--disable-new-dtags -shared \
-Wl,-soname,libMagPlus.so -o ../lib/libMagPlus.so \
... ... \
-Wl,-Bstatic -L$ODB_API/lib -lOdb \
... ...
The library libOdb.a was built like this
/usr/bin/ar qc ../../lib/libOdb.a ... ...
/usr/bin/ranlib ../../lib/libOdb.a
I searched the FAQ and Googled, with little help for my problem. I know little
about C++ and have no idea how to get this fixed.
[ In response to Jorge's inputs, updated these ]
Exceptions.h
#ifndef eckit_Exceptions_h
#define eckit_Exceptions_h
#include <errno.h>
#include "eckit/eckit.h"
#include "eckit/eckit_version.h"
#include "eckit/log/CodeLocation.h"
#include "eckit/log/Log.h"
#include "eckit/log/SavedStatus.h"
#include "eckit/compat/StrStream.h"
namespace eckit {
//-----------------------------------------------------------------------------
void handle_panic(const char*);
void handle_panic(const char*, const CodeLocation&);
/// @brief General purpose exception
/// Derive other exceptions from this class and implement then in the class that throws them.
class Exception : public std::exception {
public: // methods
/// Constructor with message
Exception(const std::string& what, const CodeLocation& location = CodeLocation() );
/// Destructor
/// @throws nothing
~Exception() throw();
virtual const char *what() const throw() { return what_.c_str(); }
virtual bool retryOnServer() const { return false; }
virtual bool retryOnClient() const { return false; }
virtual bool terminateApplication() const { return false; }
static bool throwing();
static void exceptionStack(std::ostream&,bool callStack = false);
const std::string& callStack() const { return callStack_; }
protected: // methods
void reason(const std::string&);
Exception();
virtual void print(std::ostream&) const;
private: // members
std::string what_; ///< description
std::string callStack_; ///< call stack
SavedStatus save_; ///< saved monitor status to recover after destruction
Exception* next_;
CodeLocation location_; ///< where exception was first thrown
friend std::ostream& operator<<(std::ostream& s,const Exception& p)
{
p.print(s);
return s;
}
};
nm -Cl $ODB_API/lib/libOdb.a | grep -i "eckit::Exception"
U eckit::Exception::Exception(std::string const&, eckit::CodeLocation const&) /opt/src/OdbAPI-0.10.2-Source/eckit/src/eckit/exception/Exceptions.h:84
U eckit::Exception::~Exception() /opt/src/OdbAPI-0.10.2-Source/eckit/src/eckit/exception/Exceptions.h:108
0000000000000000 W eckit::Exception::retryOnClient() const /opt/src/OdbAPI-0.10.2-Source/eckit/src/eckit/exception/Exceptions.h:48
0000000000000000 W eckit::Exception::retryOnServer() const /opt/src/OdbAPI-0.10.2-Source/eckit/src/eckit/exception/Exceptions.h:47
0000000000000000 W eckit::Exception::terminateApplication() const /opt/src/OdbAPI-0.10.2-Source/eckit/src/eckit/exception/Exceptions.h:49
0000000000000000 W eckit::Exception::what() const /opt/src/OdbAPI-0.10.2-Source/eckit/src/eckit/exception/Exceptions.h:46
U eckit::Exception::print(std::ostream&) const
U typeinfo for eckit::Exception
I also tried to unpack all object files from libOdb.a and relink libMagPlus.so
with all of them with option '-fvisibility=default -rdynamic', like this
ar x libOdb.a ( ./Odb )
/usr/bin/g++ -fvisibility=default -rdynamic -fPIC -pipe -O2 -g \
-Wl,--disable-new-dtags -shared \
-Wl,-soname,libMagPlus.so -o ../lib/libMagPlus.so \
... ... \
./Odb/*.o \
... ...
But still got these undefined symbols
U eckit::Exception::~Exception() /opt/src/OdbAPI-0.10.2-Source/eckit/src/eckit/exception/Exceptions.h:108
U eckit::Exception::print(std::ostream&) const
U typeinfo for eckit::Exception
**Do I need to modify Exceptions.h, and if so, how?**
Can anybody help?
I appreciate your time.
Regards
Answer: Take a look at `Exceptions.h` file. Note how all your undefined symbols belong
to functions that are **declared but not defined**.
~Exception() throw();
// [...]
virtual void print(std::ostream&) const;
`eckit::Exception::~Exception()` is your destructor (declared in
`Exceptions.h:108` but not defined). The same applies to
`eckit::Exception::print(std::ostream&) const`.
In the case of `typeinfo for eckit::Exception`, the problem here is that you
have virtual functions that are not declared as _pure virtual_ (as in abstract
classes) but are not defined either, so the type is not complete.
If I'm not wrong, since the `eckit::Exception` class is meant to be a superclass
for other derived classes, its destructor should be declared `virtual` too.
Check where those missing functions are defined. They should be either in an
object file skipped/missed by the archiver (if the missing functions are
defined in a `.cpp` file) or in a header file that you didn't include (if they
are defined in a `.hpp` file).
See also: [g++ undefined reference to
typeinfo](http://stackoverflow.com/questions/307352/g-undefined-reference-to-
typeinfo)
|
How to use a dict to subset a DataFrame?
Question: Say, I have given a DataFrame with most of the columns being categorical data.
> data.head()
age risk sex smoking
0 28 no male no
1 58 no female no
2 27 no male yes
3 26 no male no
4 29 yes female yes
And I would like to subset this data by a dict of key-value pairs for those
categorical variables.
tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
Hence, I would like to have the following subset.
data[ (data.risk == 'no') & (data.smoking == 'yes') & (data.sex == 'female')]
What I want to do is:
data[tmp]
What is the most python / pandas way of doing this?
* * *
Minimal example:
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
x = Series(np.random.randint(0,2,50), dtype='category')
x.cat.categories = ['no', 'yes']
y = Series(np.random.randint(0,2,50), dtype='category')
y.cat.categories = ['no', 'yes']
z = Series(np.random.randint(0,2,50), dtype='category')
z.cat.categories = ['male', 'female']
a = Series(np.random.randint(20,60,50), dtype='category')
data = DataFrame({'risk':x, 'smoking':y, 'sex':z, 'age':a})
tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
Answer: You can create a lookup data frame from the dictionary and then do an inner
join with `data`, which will have the same effect as a query:
from pandas import merge, DataFrame
merge(DataFrame(tmp, index =[0]), data)
[![merged result](https://i.stack.imgur.com/xW4kf.png)](https://i.stack.imgur.com/xW4kf.png)
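As an alternative sketch that avoids the merge, you can build a boolean mask directly from the dict (assuming every key is a column name):

    import numpy as np

    # one equality test per key/value pair, combined with logical AND
    mask = np.logical_and.reduce([data[k] == v for k, v in tmp.items()])
    subset = data[mask]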
|
Is there a way to use itertools in python to clean up nested iterations?
Question: Let's say I have the following code:
a = [1,2,3]
b = [2,4,6]
c = [3,5,7]
for i in a:
for j in b:
for k in c:
print i * j * k
Is there a way I can consolidate this into one line instead of nesting the
loops?
Answer: Use `itertools.product` within a list comprehension:
In [1]: from itertools import product
In [5]: [i*j*k for i, j, k in product(a, b, c)]
Out[5]:
[6,
10,
14,
12,
20,
28,
18,
30,
42,
12,
20,
28,
24,
40,
56,
36,
60,
84,
18,
30,
42,
36,
60,
84,
54,
90,
126]
|
imgurpython.helpers.error.ImgurClientRateLimitError: Rate-limit exceeded
Question: I have the following error:
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Traceback (most recent call last):
File "download.py", line 22, in <module>
search = imgur_client.gallery_search('cat', window='all', sort='time', page=p)
File "/usr/local/lib/python2.7/dist-packages/imgurpython/client.py", line 531, in gallery_search
response = self.make_request('GET', 'gallery/search/%s/%s/%s' % (sort, window, page), data)
File "/usr/local/lib/python2.7/dist-packages/imgurpython/client.py", line 153, in make_request
raise ImgurClientRateLimitError()
imgurpython.helpers.error.ImgurClientRateLimitError: Rate-limit exceeded!
for this code:
    from imgurpython import ImgurClient
    import inspect
    import random
    import socket  # needed for the socket.timeout/socket.error handlers below
    import urllib2
    import requests
    from imgurpython.helpers.error import ImgurClientError

    client_id = "ABC"
    client_secret = "ABC"
    access_token = "ABC"
    refresh_token = "ABC"

    image_type = ['jpg', 'jpeg']

    imgur_client = ImgurClient(client_id, client_secret, access_token, refresh_token)

    item_count = 0
    for p in range(1, 10000):
        try:
            search = imgur_client.gallery_search('cat', window='all', sort='time', page=p)
            for i in range(0, len(search)):
                item_count += 1
                print(search[i].comment_count)
                if search[i].comment_count > 10 and not search[i].is_album:
                    print(search[i].type)
                    if search[i].type[6:] in image_type:
                        count = 0
                        try:
                            image_file = urllib2.urlopen(search[i].link, timeout=5)
                            image_file_name = 'images/' + search[i].id + '.' + search[i].type[6:]
                            output_image = open(image_file_name, 'wb')
                            output_image.write(image_file.read())
                            for post in imgur_client.gallery_item_comments(search[i].id, sort='best'):
                                if count <= 10:
                                    count += 1
                            output_image.close()
                        except urllib2.URLError as e:
                            print(e)
                            continue
                        except socket.timeout as e:
                            print(e)
                            continue
                        except socket.error as e:
                            print(e)
                            continue
        except ImgurClientError as e:
            print(e)
            continue

    print item_count
Also, I see this warning very often:
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
How can I fix the error? Is there any workaround for the rate-limit error in
Imgur? I am creating this app for academic research, not commercial use, and
according to <https://api.imgur.com/#limits> it should be free, though I had
to register my app to get the client_id and related credentials. How can I mark
my application as non-commercial so that I don't get this rate-limit error, or,
if all kinds of applications get this error, how should I handle it? How should
I set up my code so that it makes only 1250 requests per hour?
Also here's my credit info:
User Limit: 500
User Remaining: 500
User Reset: 2016-10-18 14:32:41
User Client Limit: 12500
User Client Remaining: 9570
UPDATE: With sleep(8) as suggested in the answer, I end up with the output below
repeating continuously. For different search queries this happens at different
pages. How can I fix the code so that it stops executing when this happens?
Here's the code related to the update:
<https://gist.github.com/monajalal/e02792e9a5cbced301a8691b7a62836f>
page number is: 157
0
image/jpeg
page number is: 157
0
page number is: 157
0
page number is: 157
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Answer: The rate limit refers to how frequently you're hitting the API, not how many
calls you're allowed in total. To prevent hammering, most APIs have a rate limit
(e.g. 30 requests per minute, one every 2 seconds). Your script is making
requests as quickly as possible, hundreds or even thousands of times faster than the limit.
To prevent your script from hammering, the simplest solution is to introduce a
[`sleep`](https://docs.python.org/2/library/time.html#time.sleep) to your for
loop.
from time import sleep
for i in range(10000):
print i
sleep(2) # seconds
Adjust the sleep time to be at least one second greater than what the API
defines as its rate limit.
<https://api.imgur.com/#limits>
> The Imgur API uses a credit allocation system to ensure fair distribution of
> capacity. Each application can allow **approximately 1,250 uploads per day
> or approximately 12,500 requests per day**. If the daily limit is hit five
> times in a month, then the app will be blocked for the rest of the month.
> The remaining credit limit will be shown with each requests response in the
> `X-RateLimit-ClientRemaining` HTTP header.
So 12500 requests / 24 hours is about 520 requests per hour, or one request
roughly every 7 seconds. That means your sleep should be about 8 seconds long
to stay safely under the limit.
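Regarding the update: to stop hammering once the limit is actually hit, you can catch the rate-limit exception and back off before retrying. A sketch, reusing the client from the question:

    from time import sleep
    from imgurpython.helpers.error import ImgurClientError, ImgurClientRateLimitError

    for p in range(1, 10000):
        try:
            search = imgur_client.gallery_search('cat', window='all', sort='time', page=p)
        except ImgurClientRateLimitError:
            print('rate limit hit, backing off for an hour')
            sleep(3600)  # wait for credits to reset before retrying
            continue
        except ImgurClientError as e:
            print(e)
            continue
        # ... process `search` as before ...
        sleep(8)  # stay under the per-minute rate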
|
Minimize memory overhead in sparse matrix inverse
Question: By way of background, I am continuing development in Python 2.7 from a prior question:
[Determining a sparse matrix
quotient](http://stackoverflow.com/questions/40050947/determining-a-sparse-
matrix-quotient)
## My existing code:
import scipy.sparse as sp
import scipy.sparse.linalg  # makes sp.linalg available below
k = sp.csr_matrix(([], ([],[])),shape=[R,R])
denom = sp.csc_matrix(denominator)
halfeq = sp.linalg.inv(denom)
k = numerator.dot(halfeq)
I was successful in calculating the base `k` and `denom`. Python then continued
attempting the calculation of `halfeq`; the process sat in limbo for
approximately 2 hours before returning an error.
## Error Statement:
Not enough memory to perform factorization.
Traceback (most recent call last):
File "<myfilename.py>", line 111, in <module>
halfeq = sp.linalg.inv(denom)
File "/opt/anaconda/lib/python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 61, in inv Ainv = spsolve(A, I)
File "/opt/anaconda/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 151, in spsolve Afactsolve = factorized(A)
File "/opt/anaconda/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 366, in factorized return splu(A).solve
File "/opt/anaconda/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 242, in splu ilu=False, options=_options)
MemoryError
From the [scipy/smemory.c
sourcecode](https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/dsolve/SuperLU/SRC/smemory.c),
the initial statement from the error is found on line 256. I am unable to
analyze the memory routines further to determine how best to reallocate memory
so that execution can succeed.
For reference,
`numerator` has `shape: (552297, 552297)` with `stored elements: 301067607`
calculated as `sp.csr_matrix(A.T.dot(Ap))`
`denominator` has `shape: (552297, 552297)` with `stored elements: 170837213`
calculated as `sp.csr_matrix(A.T.dot(A))`
**EDIT** : I've found [a related question on
Reddit](https://www.reddit.com/r/Python/comments/3c0m7b/what_is_the_most_precise_way_to_invert_large/),
but cannot determine how I would change my equation from `numerator *
inv(denominator) = k`
Answer: No need to 'preallocate' `k`; this isn't a compiled language. Not that this is
costing anything.
k = sp.csr_matrix(([], ([],[])),shape=[R,R])
I need to double check this, but I think the `dot/inv` can be replaced by one
call to `spsolve`. Remember in the other question I noted that `inv` is
`spsolve(A, I)`;
denom = sp.csc_matrix(denominator)
#halfeq = sp.linalg.inv(denom)
#k = numerator.dot(halfeq)
k = sp.linalg.spsolve(denom, numerator)
That said, it looks like the problem is in the `inv` part, the
`factorized(denom)`. While your arrays are sparse, (denom density is 0.00056),
they still have a large number of values.
Maybe it would help to step back and look at:
num = A.T.dot(Ap)
den = A.T.dot(A)
k = solve(den, num)
In other words, review the matrix algebra.
(A'*Ap)/(A'*A)
I'm a little rusty on this. Can we reduce this? Can we partition?
Just throwing great big arrays together, even if they are sparse, isn't
working.
How about providing small `A` and `Ap` arrays that we can use for testing? I'm
not interested in testing memory limits, but I'd like to experiment with
different calculation methods.
The sparse linalg module has a number of iterative solvers. I have no idea
whether their memory use is greater or less.
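As one hedged sketch of the iterative route (untested at your sizes, and only sensible if you need a limited number of columns of `k`): since `A'A` is symmetric positive semi-definite, conjugate gradients is a reasonable fit, solving one column of the right-hand side at a time to cap memory:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    # solve den * k = num column by column instead of forming inv(den)
    cols = []
    for j in range(num.shape[1]):
        b = np.asarray(num[:, j].todense()).ravel()  # one dense column at a time
        x, info = cg(den, b)
        if info != 0:
            raise RuntimeError('cg did not converge on column %d' % j)
        cols.append(sp.csc_matrix(x).T)
    k = sp.hstack(cols)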
|
Python: Pandas, dealing with spaced column names
Question: I have multiple text files to parse that look like the one below, but they
can vary in terms of column names and the length of the hashtag block at the top:
![txt.file](https://s4.postimg.org/8p69ptj9p/feafdfdfdfdf.png)
How would I go about turning this into a pandas dataframe? I've tried using
`pd.read_table('file.txt', delim_whitespace = True, skiprows = 14)`, but it
has all sorts of problems. My issues are...
All the text, asterisks, and pound signs at the top need to be ignored, but I
can't just use skiprows because the amount of junk at the top can vary in
length from file to file.
The columns "stat (+/-)" and "syst (+/-)" are seen as 4 columns because of the
whitespace.
The one pound sign is included in the column names, and I don't want that. I
can't just assign the column names manually because they vary from text file
to text file.
Any help is much obliged; I'm just not really sure where to go after reading
the file with pandas.
Answer: Consider reading in the raw file and cleaning it line by line while writing to
a new file with the `csv` module. A regex identifies the column-header row,
using the _i_ column as the match criterion. The code below assumes two or more
spaces separate columns:
import os
import csv, re
import pandas as pd
rawfile = "path/To/RawText.txt"
tempfile = "path/To/TempText.txt"
with open(tempfile, 'w', newline='') as output_file:
writer = csv.writer(output_file)
with open(rawfile, 'r') as data_file:
for line in data_file:
if re.match('^.*i', line): # KEEP COLUMN HEADER ROW
line = line.replace('\n', '')
row = line.split(" ")
writer.writerow(row)
elif line.startswith('#') == False: # REMOVE HASHTAG LINES
line = line.replace('\n', '')
row = line.split(" ")
writer.writerow(row)
df = pd.read_csv(tempfile) # IMPORT TEMP FILE
df.columns = [c.replace('# ', '') for c in df.columns] # REMOVE '#' IN COL NAMES
os.remove(tempfile) # DELETE TEMP FILE
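As an alternative sketch, pandas can often do the skipping and splitting in one call, assuming the column header is the last '#'-prefixed line and columns are separated by two or more spaces:

    import pandas as pd

    path = 'path/To/RawText.txt'
    # count the leading '#' lines so skiprows adapts to any junk-block length,
    # keeping the last '#' line (which holds the column names) as the header
    with open(path) as f:
        n_junk = 0
        for line in f:
            if not line.startswith('#'):
                break
            n_junk += 1

    # the regex separator treats runs of 2+ spaces as one delimiter,
    # so "stat (+/-)" stays a single column
    df = pd.read_csv(path, skiprows=n_junk - 1, sep=r'\s{2,}', engine='python')
    df.columns = [c.replace('# ', '') for c in df.columns]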
|
Why additional memory allocation makes multithread Python application work few times faster?
Question: I'm writing a Python module, one function of which checks multiple IP
addresses to see if they're active and writes this information to a database.
As those are I/O-bound operations I decided to work with multiple threads:
* 20 threads for pinging host and checking if it's active (function `check_if_active`)
* 5 threads for updating data in database (function `check_if_active_callback`)
Program works as followed:
1. Main thread takes IPs from database and puts them to queue `pending_ip`
2. One of 20 threads takes record from `pending_ip` queue, pings host and puts answer to `done_ip` queue
3. One of 5 threads takes record from `done_ip` queue and does update in database if needed
What I've observed (during timing tests to determine how many threads would
suit my situation best) is that the program works approximately 7-8 times faster
if, in 5 loops, I first declare and start the 20+5 threads and delete those
objects, and then run the program in the 6th loop, than if I run the program
without those additional 5 loops.
I suppose this could somehow be related to memory management, though I'm not
really sure whether deleting objects makes any sense in Python. My questions are:
  * why is that happening?
  * how can I achieve this time boost without the additional code (and additional memory allocation)?
Code:
import time, os
from threading import Thread
from Queue import Queue
from readconfig import read_db_config
from mysql.connector import Error, MySQLConnection
pending_ip = Queue()
done_ip = Queue()
class Database:
connection = MySQLConnection()
def connect(self):
db_config = read_db_config("mysql")
try:
self.connection = MySQLConnection(**db_config)
except Error as e:
print(e)
def close_connection(self):
if self.connection.is_connected() is True:
self.connection.close()
def query(self, sqlquery):
if self.connection.is_connected() is False:
self.connect()
try:
cursor = self.connection.cursor()
cursor.execute(sqlquery)
rows = cursor.fetchall()
except Error as e:
print(e)
finally:
cursor.close()
return rows
def update(self,sqlquery, var):
if self.connection.is_connected() is False:
self.connect()
try:
cursor = self.connection.cursor()
cursor.execute(sqlquery, var)
self.connection.commit()
except Error as e:
self.connection.rollback()
print(e)
finally:
cursor.close()
db=Database()
def check_if_active(q):
while True:
host = q.get()
response = os.system("ping -c 1 -W 2 %s > /dev/null 2>&1" % (host))
if response == 0:
ret = 1
else:
ret = 0
done_ip.put((host, ret))
q.task_done()
def check_if_active_callback(q, db2):
while True:
record = q.get()
sql = "select active from table where address_ip='%s'" % record[0]
rowIP = db2.query(sql)
if(rowIP[0][0] != record[1]):
sqlupdq = "update table set active=%s where address_ip=%s"
updv = (record[1], record[0])
db2.update(sqlupdq, updv)
q.task_done()
def calculator():
#some irrelevant code
rows = db.query("select ip_address from table limit 1000")
for row in rows:
pending_ip.put(row[0])
#some irrelevant code
if __name__ == '__main__':
num_threads_pinger = 20
num_threads_pinger_callback = 5
db = Database()
for i in range(6):
db_pinger_callback =[]
worker_p = []
worker_cb = []
#additional memory allocation here in 5 loops for 20 threads
for z in range(num_threads_pinger):
worker = Thread(target=check_if_active, args=(pending_ip,))  # note the trailing comma: args must be a tuple
worker.setDaemon(True)
worker.start()
worker_p.append(worker)
#additional memory allocation here in 5 loops for 5 threads
for z in range(num_threads_pinger_callback):
db_pinger_callback.append(Database())
worker = Thread(target=check_if_active_callback, args=(done_ip, db_pinger_callback[z]))
worker.setDaemon(True)
worker.start()
worker_cb.append(worker)
if i == 5:
start_time = time.time()
calculator()
pending_ip.join()
done_ip.join()
print "%s sec" % (time.time() - start_time)
#freeing (?) that additional memory
for z in range(num_threads_pinger - 1, 0, -1):
del worker_p[z]
#freeing (?) that additional memory
for z in range(num_threads_pinger_callback - 1, 0, -1):
db_pinger_callback[z].close_connection()
del db_pinger_callback[z]
del worker_cb[z]
db.close_connection()
Answer: In order to give you an exact explanation it would help to know which version
of Python you're using. For instance, if you're using PyPy then what you've
observed is the JIT kicking in after you call your loop 5 times, so it just
returns a pre-calculated answer.
If you're using a standard version of Python then this speed-up is due to the
interpreter using the compiled byte code from the .pyc files. How it works is
basically that Python first creates an in-memory representation of your code
and runs it from there. During repeated calls the interpreter converts some of
the more often used code into byte code and stores it on disk in .pyc files
(this is Python byte code, similar to Java byte code, not to be confused with
native machine code). Every time you call the same function the interpreter
goes to your .pyc files and executes the corresponding byte code. This makes
execution much faster, as the code you're running is precompiled, compared to
when you call the function once and Python has to parse and interpret your code.
|
Python SQLite TypeError
Question:
from sqlite3 import *
def insert_record(Who, Invented):
connection = connect(database = "activity.db")
internet = connection.cursor()
list = "INSERT INTO Information VALUES((Alexander_Graham, Phone))"
internet.execute(list)
rows_inserted = internet.rowcount
connection.commit()
internet.close()
connection.close()
#GUI interface
from Tkinter import *
window = Tk()
the_button3 = Button(window, text='Record', command = insert_record).grid(row=1, sticky=W, padx=100,pady=5)
window.mainloop()
Alright, so what I'm trying to do is: when I press the Record button, a record
with Alexander_Graham and Phone is added to the Information table (it has 2
fields, called Who and Invented, respectively) in activity.db.
*activity.db is already premade in the same folder as the code
But instead I get this error: TypeError: insert_record() takes exactly 2
arguments (0 given)
How do I fix it?
Answer: `insert_record` is a function that takes two required arguments, which you
didn't pass values for in
command = insert_record
A good example of this in action is the following test sequence:
In [1]: def func(one,two):
...: return one+two
...:
In [2]: func()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-08a2da4138f6> in <module>()
----> 1 func()
TypeError: func() takes exactly 2 arguments (0 given)
In [3]: func(1,3)
Out[3]: 4
In this example, the call fails in the same way as in your Tk app. You need to
provide a function that has default values, or handle it a different way.
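To actually pass the arguments from the button, wrap the call in a lambda (a sketch using the values from the question):

    the_button3 = Button(window, text='Record',
                         command=lambda: insert_record('Alexander_Graham', 'Phone'))
    the_button3.grid(row=1, sticky=W, padx=100, pady=5)

Note that `grid()` is called on a separate line: chaining it as in the original leaves `the_button3` holding `None`, since `grid()` returns nothing.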
|
Getting HTTP POST Error : {"reason":null,"error":"Request JSON object for insert cannot be null."}
Question: I am getting an HTTP POST error when trying to connect to a ServiceNow
instance for Change Request automation using Python. Here is the script I am
using with Python 3.4.4:
# SNOW CR AUTOMATION SCRIPT
import requests
import json
# put the ip address or dns of your SNOW API in this url
url = 'http://<>/change_request.do?JSONv2&sysparm_action=insert'
data= {
'short_description': '<value>',
'priority': '<value>',
'reason': '<value>',
'u_reason_for_change': '<value>',
'u_business_driver': '<value>',
'u_plan_of_record_id': '<value>'
}
print ("Data Inserted :")
print (data)
#Content type must be included in the header
header = {"Authorization":"Basic V1NfRVRPX1ROOkBiY2RlNTQzMjE=","Content-Type":"application/json"}
#Performs a POST on the specified url.
response = requests.request('POST', url, auth=("<value>","<value>"), json=data, headers=header)
print ( " Header is : ")
print (response.headers)
print (" ")
print ( "HTTP Response is :" )
print (response)
print (" ")
print ("***********************")
print (" Output : ")
print ( response.text)
I am getting an error as below while running the above script.
Output :
{"reason":null,"error":"Request JSON object for insert cannot be null."}
I am not sure why this error is thrown. Can anybody please help with this?
Answer: This is a working example I tested on my instance. I am using the REST Table
API to insert a change request. It's not true that it cannot be HTTP; it's
whatever protocol your instance allows you to connect with, say from a browser.
#Need to install requests package for python
#easy_install requests
import requests
# Set the request parameters
url = '<yourinstance base url>/api/now/table/change_request'
user = <username>
pwd = <password>
# Set proper headers
headers = {"Content-Type":"application/json","Accept":"application/json"}
# Do the HTTP request
response = requests.post(url, auth=(user, pwd), headers=headers ,data="{\"short_description\":\"test in python\"}")
# Check for HTTP codes other than 201
if response.status_code != 201:
print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:',response.json())
exit()
# Decode the JSON response into a dictionary and use the data
data = response.json()
print(data)
|
Error in inserting python variable in mysql table
Question: I am working on a Raspberry Pi project, in which I'm fetching data from a PLC
and storing it in a MySQL database.
Here is my code:
import minimalmodbus
import serial
import mysql.connector
instrument = minimalmodbus.Instrument('/dev/ttyAMA0',3,mode='rtu')
instrument.serial.baudrate=115200
instrument.serial.parity = serial.PARITY_NONE
instrument.serial.bytesize = 8
instrument.serial.stopbits = 1
instrument.serial.timeout = 0.05
con = mysql.connector.connect(user='root',password='raspberry',host='localhost',
database='Fujiplc')
cursor = con.cursor()
try:
reg_value=instrument.read_register(102)
print reg_value
cursor.execute("insert into Register_Values values(%s)",(reg_value))
print ('One row inserted successfully.')
except IOError:
print("Failed to read from PLC.")
print (cursor.rowcount)
con.commit()
cursor.close()
con.close()
After running this code, I get the following error:
Traceback (most recent call last):
File "/home/pi/rpi_to_plc_read.py", line 22, in <module>
cursor.execute("insert into Register_Values values(%d)",(reg_value))
File "/usr/local/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 477, in execute
stmt = operation % self._process_params(params)
File "/usr/local/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 355, in _process_params
"Failed processing format-parameters; %s" % err)
ProgrammingError: Failed processing format-parameters; argument 2 to map() must support iteration
I have gone through so many solutions but couldn't solve the problem. Please
help me.
Answer: I think it should be:
cursor.execute("insert into Register_Values values(%s)", (reg_value,))
con.commit()
Note the trailing comma in `(reg_value,)`: the second argument to `execute`
must be a sequence, and `(reg_value)` is just a parenthesized value, which is
what triggers the "argument 2 to map() must support iteration" error.
|
understanding marshmallow nested schema with list data
Question: I'm new to Python and am using
[marshmallow](https://marshmallow.readthedocs.io/en/latest/) serialization.
I'm unable to get the nested schema to work. My code:
from sqlalchemy import Column, Float, Integer, String, Text, text,ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
Base = declarative_base()
metadata = Base.metadata
class CompanyDemo(Base):
__tablename__ = 'company_demo'
company_id = Column(Integer, primary_key=True,
server_default=text("nextval('company_demo_company_id_seq'::regclass)"))
name = Column(Text, nullable=False)
address = Column(String(50))
location = Column(String(50))
class UsersDemo(Base):
__tablename__ = 'users_demo'
id = Column(Integer, primary_key=True,
server_default=text("nextval('users_demo_id_seq'::regclass)"))
company_id = Column(Integer,ForeignKey('company_demo.company_id'), nullable=False)
email = Column(String)
company = relationship('CompanyDemo')
schema
from marshmallow import Schema, fields, pprint
class CompanySchema(Schema):
company_id = fields.Int(dump_only=True)
name = fields.Str()
address = fields.Str()
location = fields.Str()
class UserSchema(Schema):
email = fields.Str()
company = fields.Nested(CompanySchema)
user = UserSchema()
user = UserSchema(many=True)
company = CompanySchema()
company = CompanySchema(many=True)
and my flask app
from flask import Flask, jsonify, url_for, render_template
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from flask_sqlalchemy import SQLAlchemy
from model import CompanyDemo, UsersDemo
from schemas.userschema import user, company
app = Flask(__name__)
app.secret_key = "shiva"
def db_connect():
engine = create_engine('postgresql://ss@127.0.0.1:5432/test')
Session = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# create a Session
session = Session()
session._model_changes = {}
return session
@app.route('/company', methods=["GET", "POST"])
def get_all_company():
db = db_connect()
allcompany = db.query(CompanyDemo).join(UsersDemo).all()
return jsonify(company.dump(allcompany, many=True).data) # company is marshmallow schema
if __name__ == '__main__':
app.run(host='0.0.0.0', port=15418, debug=True)
Is anything wrong in my code? I am facing a problem with the nested schema and
am unable to get the nested data in the output.
The output is below:
> [ { "address": "qqq ", "company_id": 1, "location": "www ", "name": "eee" },
> { "address": "www ", "company_id": 2, "location": "qqq ", "name": "aaa" } ]
Answer: Self contained example using in-memory SQLite:
from flask import Flask, jsonify
from flask.ext.sqlalchemy import SQLAlchemy
from marshmallow import Schema, fields, pprint
app = Flask(__name__)
app.config['DEBUG'] = True
app.config['SECRET_KEY'] = 'super-secret'
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
app.config['SQLALCHEMY_ECHO'] = True
db = SQLAlchemy(app)
class CompanyDemo(db.Model):
__tablename__ = 'company_demo'
company_id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.Text, nullable=False)
address = db.Column(db.String(50))
location = db.Column(db.String(50))
def __unicode__(self):
return u"{name} ({address})".format(name=self.name, address=self.address)
class UsersDemo(db.Model):
__tablename__ = 'users_demo'
id = db.Column(db.Integer, primary_key=True,)
company_id = db.Column(db.Integer, db.ForeignKey('company_demo.company_id'), nullable=False)
company = db.relationship('CompanyDemo')
email = db.Column(db.String)
def __unicode__(self):
return u"{email}".format(email=self.email)
class CompanySchema(Schema):
company_id = fields.Int(dump_only=True)
name = fields.Str()
address = fields.Str()
location = fields.Str()
class UserSchema(Schema):
email = fields.Str()
company = fields.Nested(CompanySchema)
user_schema = UserSchema()
company_schema = CompanySchema()
@app.route('/')
def index():
return "<a href='/dump_company'>Dump Company</a><br><a href='/dump_user'>Dump User</a>"
@app.route('/dump_user')
def dump_user():
user = UsersDemo.query.first()
return jsonify(user_schema.dump(user).data)
@app.route('/dump_company')
def dump_company():
company = CompanyDemo.query.first()
return jsonify(company_schema.dump(company).data)
def build_db():
db.drop_all()
db.create_all()
company = CompanyDemo(name='Test 1', address='10 Downing Street', location='wherever')
db.session.add(company)
user = UsersDemo(email='fred@example.com', company=company)
db.session.add(user)
db.session.commit()
@app.before_first_request
def first_request():
build_db()
if __name__ == '__main__':
app.run(debug=True, port=7777)
|
NetworkX: how to properly create a dictionary of edge lengths?
Question: Say I have a regular grid network made of `10x10` nodes which I create like
this:
import networkx as nx
from pylab import *
import matplotlib.pyplot as plt
%pylab inline
ncols=10
N=10 #Nodes per side
G=nx.grid_2d_graph(N,N)
labels = dict( ((i,j), i + (N-1-j) * N ) for i, j in G.nodes() )
nx.relabel_nodes(G,labels,False)
inds=labels.keys()
vals=labels.values()
inds=[(N-j-1,N-i-1) for i,j in inds]
posk=dict(zip(vals,inds))
nx.draw_networkx(G, pos=posk, with_labels=True, node_size = 150, node_color='blue',font_size=10)
plt.axis('off')
plt.title('Grid')
plt.show()
Now say I want to create a dictionary which stores, for each edge, its length.
This is the intended outcome:
d={(0,1): 3.4, (0,2): 1.7, ...}
And this is how I try to get to that point:
from math import sqrt
lengths={G.edges(): math.sqrt((x-a)**2 + (y-b)**2) for (x,y),(a,b) in G.edges()}
But there clearly is something wrong as I get the following error message:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-c73c212f0d7f> in <module>()
2 from math import sqrt
3
----> 4 lengths={G.edges(): math.sqrt((x-a)**2 + (y-b)**2) for (x,y),(a,b) in G.edges()}
5
6
<ipython-input-7-c73c212f0d7f> in <dictcomp>(***failed resolving arguments***)
2 from math import sqrt
3
----> 4 lengths={G.edges(): math.sqrt((x-a)**2 + (y-b)**2) for (x,y),(a,b) in G.edges()}
5
6
TypeError: 'int' object is not iterable
**What am I missing?**
Answer: There is a lot going wrong in the last line, first and foremost that G.edges()
is an iterator and not a valid dictionary key, and secondly, that G.edges()
really just yields the edges, not the positions of the nodes.
This is what you want instead:
lengths = dict()
for source, target in G.edges():
x1, y1 = posk[source]
x2, y2 = posk[target]
lengths[(source, target)] = math.sqrt((x2-x1)**2 + (y2-y1)**2)
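If you prefer the one-liner you were aiming for, the same computation as a dict comprehension:

    lengths = {(source, target): math.sqrt((posk[target][0] - posk[source][0])**2
                                           + (posk[target][1] - posk[source][1])**2)
               for source, target in G.edges()}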
|
Why df[[2,3,4]][2:4] works and df[[2:4]][2:4] does not in Python
Question: suppose we have a dataframe
import pandas as pd
df = pd.read_csv('...')
df
0 1 2 3 4
0 1 2 3 4 5
1 1 2 3 4 5
2 1 2 3 4 5
3 1 2 3 4 5
4 1 2 3 4 5
Why does one approach work while the other returns a syntax error?
Answer: I think you need [`ix`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.DataFrame.ix.html):
print (df.ix[2:4,2:4])
2 3
2 3 4
3 3 4
4 3 4
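Note: `ix` mixes label- and position-based slicing, which is part of what makes this confusing. A purely positional alternative is `iloc`; it is end-exclusive, so the bounds shift to reproduce the output above:

    # rows 2-4 and columns 2-3, selected strictly by position
    print(df.iloc[2:5, 2:4])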
|
Running the sample code in pytesseract
Question: I am running Python 2.6.6 and want to install the
[pytesseract](https://pypi.python.org/pypi/pytesseract) package. After
extraction and installation, I can call pytesseract from the command line.
However, I want to run tesseract from within Python. I have the following code
(ocr.py):
try:
import Image
except ImportError:
from PIL import Image
import pytesseract
print(pytesseract.image_to_string(Image.open('test.png')))
print(pytesseract.image_to_string(Image.open('test-european.jpg'),lang='fra'))
When I run the code with `python ocr.py`, I get the following output:
Traceback (most recent call last):
File "ocr.py", line 6, in <module>
print(pytesseract.image_to_string(Image.open('test.png')))
File "/pytesseract-0.1.6/build/lib/pytesseract/pytesseract.py", line 164, in image_to_string
raise TesseractError(status, errors)
pytesseract.TesseractError: (2, 'Usage: python tesseract.py [-l language] input_file')
test.png and test-european.jpg are in the working directory. Can someone help
me run this code? I have tried the following:
1. Adjusted the tesseract_cmd to 'pytesseract'
2. Installed tesseract-ocr
Any help is appreciated, as I have been trying to solve this problem for hours now.
Answer: `tesseract_cmd` should point to the command line program
[`tesseract`](https://github.com/tesseract-ocr/tesseract), not `pytesseract`.
For instance on Ubuntu you can install the program using:
sudo apt install tesseract-ocr
And then set the variable to just `tesseract` or `/usr/bin/tesseract`.
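If tesseract is installed but not on your PATH, you can point pytesseract at the binary explicitly (a sketch; adjust the path to your system):

    import pytesseract

    # tell pytesseract where the tesseract binary lives
    pytesseract.pytesseract.tesseract_cmd = '/usr/bin/tesseract'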
|
Python: cProfile a function
Question: How do I profile one function with cProfile?
label = process_one(signature)
becomes
import cProfile
label = cProfile.run(process_one(signature))
but it didn't work :/
Answer: According to the documentation (<https://docs.python.org/2/library/profile.html>)
it should be `cProfile.run('process_one(signature)')`; note that the statement
to profile is passed as a string, not called directly.
Also, look at this answer: <http://stackoverflow.com/a/17259420/1966790>
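If you also need the function's return value (`cProfile.run` returns None), `Profile.runcall` profiles the call and passes the result through:

    import cProfile

    profiler = cProfile.Profile()
    label = profiler.runcall(process_one, signature)  # profiled, result returned
    profiler.print_stats(sort='cumulative')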
|
Reading files with hdfs3 fails
Question: I am trying to read a file on HDFS with Python using the hdfs3 module.
import hdfs3
hdfs = hdfs3.HDFileSystem(host='xxx.xxx.com', port=12345)
hdfs.ls('/projects/samplecsv/part-r-00000')
This produces
[{'block_size': 134345348,
'group': 'supergroup',
'kind': 'file',
'last_access': 1473453452,
'last_mod': 1473454723,
'name': '/projects/samplecsv/part-r-00000/',
'owner': 'dr',
'permissions': 420,
'replication': 3,
'size': 98765631}]
So it seems to be able to access the HDFS and read the directory structure.
However, reading the file fails.
with hdfs.open('/projects/samplecsv/part-r-00000', 'rb') as f:
print(f.read(100))
gives
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
.
.<snipped>
.
OSError: [Errno Read file /projects/samplecsv/part-r-00000 Failed:] 1
What could be the issue? I am using Python3.5.
Answer: If you want to perform any operation on a file, you have to pass the full file path.
import hdfs3
hdfs = hdfs3.HDFileSystem(host='xxx.xxx.com', port=12345)
hdfs.ls('/projects/samplecsv/part-r-00000')
#you have to add file to location
hdfs.put('local-file.txt', '/projects/samplecsv/part-r-00000')
with hdfs.open('projects/samplecsv/part-r-00000/local-file.txt', 'rb') as f:
print(f.read(100))
|
find repeated element in list of list python
Question: I have been struggling with this problem for two days and I need help with it.
I need to find repeated element in a list of lists `list_of_list = [(a1, b1,
c1), (a2, b2, c2), ..., (an, bn, cn)]` where "a" and "b" elements are integers
and "c" elements are floats.
So if, for example, `a1 == a2` or `a1 == bn`, I need to add the entire lists to
a new list, and I need to do this for all the lists (a, b, c) in the list of
lists. To put it another way, I need all lists that have elements present in
more than one list. I need to compare only the "a" and "b" elements but keep
the associated "c" value in the final list.
For example:
list_of_list = [(1, 2, 4.99), (3, 6, 5.99), (1, 4, 3.00), (5, 1, 1.12), (7, 8, 1.99) ]
desired_result=[(1, 2, 4.99), (1, 4, 3.00), (5, 1, 1.12)]
I tried many ideas... but nothing nice came up:
MI_network = [] #repeated elements list
genesis = list(complete_net) #clon to work on
genesis_next = list(genesis) #clon to remove elements in iterations
genesis_next.remove(genesis_next[0])
while genesis_next != []:
for x in genesis:
if x[0] in genesis_next and x[1] not in genesis_next:
MI_network.append(x)
if x[0] not in genesis_next and x[1] in genesis_next:
MI_network.append(x)
genesis_next.remove(genesis_next[0])
Answer: You can count occurrences of the specific list elements and take the lists with
counts > 1. Something like this, using `collections.defaultdict()`:
>>> from collections import defaultdict
>>> count = defaultdict(int)
>>> for lst in list_of_list:
... count[lst[0]] += 1
... count[lst[1]] += 1
...
>>> [lst for lst in list_of_list if count[lst[0]] > 1 or count[lst[1]] > 1]
[(1, 2, 4.99), (1, 4, 3.0), (5, 1, 1.12)]
|
Python/Flask: UnicodeDecodeError/ UnicodeEncodeError: 'ascii' codec can't decode/encode
Question: Sorry for the millionth question about this, but I've read so much about the
topic and still can't get this error fixed (I'm new to all of this). I'm trying
to display the content of a Postgres table on a website with Flask (using
Ubuntu 16.04/Python 2.7.12). There are non-ASCII characters in the table ('ü'
in this case) and the result is a UnicodeDecodeError: 'ascii' codec can't
decode byte 0xc3 in position 2: ordinal not in range(128).
This is what my `__init__.py` looks like:
#-*- coding: utf-8 -*-
from flask import Blueprint, render_template
import psycopg2
from .forms import Form
from datetime import datetime
from .table import Item, ItemTable
test = Blueprint('test', __name__)
def init_test(app):
app.register_blueprint(test)
def createTable(cur):
cmd = "select * from table1 order by start desc;"
cur.execute(cmd)
queryResult = cur.fetchall()
items = []
table = 'table could not be read'
if queryResult is not None:
for row in range(0, len(queryResult)):
items.append(Item(queryResult[row][0], queryResult[row][1].strftime("%d.%m.%Y"), queryResult[row][2].strftime("%d.%m.%Y"), \
queryResult[row][1].strftime("%H:%M"), queryResult[row][2].strftime("%H:%M"), \
queryResult[row][3], queryResult[row][4], queryResult[row][5], queryResult[row][6]))
table = ItemTable(items)
return table
@test.route('/test')
def index():
dbcon = psycopg2.connect("dbname=testdb user=postgres host=localhost")
cur = dbcon.cursor()
table = createTable(cur)
cur.close()
return render_template('test_index.html', table=table)
And part of the html-file:
{% extends "layout.html" %}
{% block head %}Title{% endblock %}
{% block body %}
<script type="text/javascript" src="{{ url_for('static', filename='js/bootstrap.js') }}"></script>
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='css/custom.css') }}">
<div class="row" id="testid">
{{table}}
</div>
{% endblock %}{#
Local Variables:
coding: utf-8
End: #}
The problem is in queryResult[row][6], which is the only column in the table
with strings; the rest are integers. The encoding of the Postgres database is
UTF-8. The type of queryResult[row][6] returns type 'str'. What I read
[here](http://initd.org/psycopg/docs/usage.html#unicode-handling) is that the
string should be encoded in UTF-8, as that is the encoding of the database
client. Well, that doesn't seem to work!? Then I added the line
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
to force the result to be unicode (type of queryResult[row][6] returned type
'unicode'), because as was recommended
[here](http://stackoverflow.com/questions/5120302/avoiding-python-
unicodedecodeerror-in-jinjas-nl2br-filter), I tried to stick to unicode
everywhere. Well that resulted in a UnicodeEncodeError: 'ascii' codec can't
encode character u'\xfc' in position 2: ordinal not in range(128). Then I
thought, maybe something went wrong with converting to string (bytes) before
and I tried to do it myself then with writing
queryResult[row][6].encode('utf-8', 'replace')
which led to a UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in
position 2: ordinal not in range(128). It didn't even work with 'ignore'
instead of 'replace'. What is going on here? I checked if the render_template() has a
problem with unicode by creating and passing a variable v=u'ü', but that was
no problem and was displayed correctly. Yeah, I read the usual recommended
stuff like nedbatchelder.com/text/unipain.html and Unicode Demystified, but
that didn't help me solve my problem here, I'm obviously missing something.
Here is a traceback of the first UnicodeDecodeError:
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 2000, in __call__
return self.wsgi_app(environ, start_response)
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1991, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1567, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/name/Desktop/testFlask/app/test/__init__.py", line 95, in index
return render_template('test_index.html', table=table) #, var=var
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/templating.py", line 134, in render_template
context, ctx.app)
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask/templating.py", line 116, in _render
rv = template.render(context)
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/jinja2/environment.py", line 989, in render
return self.environment.handle_exception(exc_info, True)
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/jinja2/environment.py", line 754, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/name/Desktop/testFlask/app/templates/test_index.html", line 1, in top-level template code
{% extends "layout.html" %}
File "/home/name/Desktop/testFlask/app/templates/layout.html", line 40, in top-level template code
{% block body %}{% endblock %}
File "/home/name/Desktop/testFlask/app/templates/test_index.html", line 7, in block "body"
{{table}}
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 86, in __html__
tbody = self.tbody()
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 103, in tbody
out = [self.tr(item) for item in self.items]
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 120, in tr
''.join(c.td(item, attr) for attr, c in self._cols.items()
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/table.py", line 121, in <genexpr>
if c.show))
File "/home/name/Desktop/testFlask/app/test/table.py", line 7, in td
self.td_contents(item, self.get_attr_list(attr)))
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/columns.py", line 99, in td_contents
return self.td_format(self.from_attr_list(item, attr_list))
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/flask_table/columns.py", line 114, in td_format
return Markup.escape(content)
File "/home/name/Desktop/testFlask/venv/lib/python2.7/site-packages/markupsafe/__init__.py", line 165, in escape
rv = escape(s)
Any help is greatly appreciated...
Answer: Since Python 2 does not enforce the distinction between byte strings
(str) and text (unicode), it is easy to get confused by them. Encoding goes
from unicode to bytes, and decoding goes the other way. So if your result set
already contains UTF-8 byte strings, there is no need to encode them again; in
fact, calling .encode('utf-8') on a str makes Python 2 first decode it
implicitly with the ASCII codec, which is exactly the UnicodeDecodeError you
are seeing. If you get wrong representations for special characters like "§",
I would try decoding instead:

    queryResult[row][6].decode('utf-8')

Does that work?
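If you keep psycopg2.extensions.register_type(psycopg2.extensions.UNICODE) so
that the driver hands you unicode objects directly, also check whether anything
in your own table.py converts cell values with str(): turning unicode back into
bytes goes through the ASCII codec as well, and that is a plausible source of
the UnicodeEncodeError from your second attempt.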
|
Identify drive letter of USB composite device using Python
Question: I have a USB composite device that has an SD card. Using Python, I need a way
to find the drive letter of the SD card when the device is connected. Does
anyone have experience with this? Initially it needs to work in Windows, but
I'll eventually need to port it to Mac and Linux.
Answer: I don't have an SD card attached to a USB port. To get you started, you could
_try_ this on Windows. Install [Tim Golden's WMI
module](http://timgolden.me.uk/python/wmi/index.html). I found that the Windows
.zip wouldn't install, but the pip version works fine, or at least it does on
Win7. Then you can list logical disks with code like this.
    >>> import wmi
    >>> c = wmi.WMI()
    >>> for disk in c.Win32_LogicalDisk():
    ...     print(disk)
This code provided a listing that included mention of a NAS which is why I
have hopes for your SD card. Various refinements are possible.
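For example, removable drives report a DriveType of 2 in Win32_LogicalDisk, so
a sketch of narrowing the listing down (assuming the SD card enumerates as a
removable logical disk) could look like this:

    import wmi

    c = wmi.WMI()
    # DriveType 2 means "removable disk" in the Win32_LogicalDisk class
    for disk in c.Win32_LogicalDisk(DriveType=2):
        print(disk.DeviceID, disk.VolumeName)  # drive letter (e.g. 'E:') and volume label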
|
Convert .fbx to .obj with Python FBX SDK
Question: I have a ten frame .fbx file of an animal walking. This file includes a rigged
model with textures, but I am only interested in the mesh of the model at each
frame.
How can I use Python FBX SDK or Python Blender SDK to export each frame of the
fbx file into an obj file?
Am I approaching this the wrong way? Should I try to find a way to do this
manually in Maya/Blender first?
Answer: Here is an example of converting .fbx to .obj with the Python FBX SDK:

    import fbx

    # Create an SDK manager
    manager = fbx.FbxManager.Create()
    # Create a scene
    scene = fbx.FbxScene.Create(manager, "")
    # Create an importer object
    importer = fbx.FbxImporter.Create(manager, "")
    # Path to the .fbx file
    milfalcon = "samples/millenium-falcon/millenium-falcon.fbx"
    # Specify the path and name of the file to be imported
    importstat = importer.Initialize(milfalcon, -1)
    importstat = importer.Import(scene)
    # Create an exporter object
    exporter = fbx.FbxExporter.Create(manager, "")
    save_path = "samples/millenium-falcon/millenium-falcon.obj"
    # Specify the path and name of the file to be exported
    exportstat = exporter.Initialize(save_path, -1)
    exportstat = exporter.Export(scene)
|
How to tell Python to save files in this folder?
Question: I am new to Python and have been assigned the task of cleaning up the files in
Slack. I have to back up the files and save them to the designated folder
Z:\Slack_Files, and I am using the open syntax below, but it produces a
permission denied error. This script was prepared by my senior to finish up
this job.
    from slacker import *
    import sys
    import time
    import os
    from datetime import timedelta, datetime

    root = 'Z:\Slack_Files'

    def main(token, weeks=4):
        slack = Slacker(token)
        total = slack.files.list(count=1).body['paging']['total']
        num_pages = int(total/1000.00 + 1)
        print("{} files to be processed, across {} pages".format(total, num_pages))
        files_to_delete = []
        ids = []
        count = 1
        for page in range(num_pages):
            print("Pulling page number {}".format(page + 1))
            files = slack.files.list(count=1000, page=page+1).body['files']
            for file in files:
                print("Checking file number {}".format(count))
                if file['id'] not in ids:
                    ids.append(file['id'])
                    if datetime.fromtimestamp(file['timestamp']) < datetime.now() - timedelta(weeks=weeks):
                        files_to_delete.append(file)
                        print("File No. {} will be deleted".format(count))
                    else:
                        print("File No. {} will not be deleted".format(count))
                count += 1
        print("All files checked\nProceeding to delete files")
        print("{} files will be deleted!".format(len(files_to_delete)))
        count = 1
        for file in files_to_delete:
            # print open('Z:\Slack_Files')
            print("Deleting file {} of {} - {}".format(count, len(files_to_delete), file["name"]))
            print(file["name"])
            count += 1
        return count-1

    for fn in os.listdir(r'Z:\Slack_Files'):
        if os.path.isfile(fn):
            open(fn, 'r')

    if __name__ == "__main__":
        try:
            token = '****'
        except IndexError:
            print("Usage: python file_deleter.py api_token\nPlease provide a value for the API Token")
            sys.exit(2)
        main(token)
The error it displays is:
Traceback (most recent call last):
File "C:\Users\Slacker.py", line 55, in <module>
main(token)
File "C:\Users\Slacker.py", line 39, in main
print open('Z:\Slack_Files')
IOError: [Errno 13] Permission denied: 'Z:\\Slack_Files'
Answer: The error occurs because open('Z:\Slack_Files') is called on a directory,
which Windows reports as "Permission denied"; open() only works on files. To
iterate over the files in a particular folder, we can simply use os.listdir()
to traverse a single tree, joining each name with the folder before opening it:

    import os

    folder = r'Z:\Slack_Files'
    for fn in os.listdir(folder):
        path = os.path.join(folder, fn)  # listdir() yields bare names, not full paths
        if os.path.isfile(path):
            open(path, 'r')  # mode 'r' means read mode
|
Ipython cv2.imwrite() not saving image
Question: I have written some code with Python OpenCV. I am trying to write the processed
image back to disk, but the image is not getting saved and there is no error
(runtime or compile-time). The code is:
"""
Created on Wed Oct 19 18:07:34 2016
@author: Niladri
"""
import numpy as np
import cv2
if __name__ == '__main__':
import sys
img = cv2.imread('C:\Users\Niladri\Desktop\TexturesCom_LandscapeTropical0080_2_S.jpg')
if img is None:
print 'Failed to load image file:'
sys.exit(1)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
h, w = img.shape[:2]
eigen = cv2.cornerEigenValsAndVecs(gray, 15, 3)
eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2]
#flow = eigen[:,:,2]
iter_n = 10
sigma = 5
str_sigma = 3*sigma
blend = 0.5
img2 = img
for i in xrange(iter_n):
print i,
gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
eigen = cv2.cornerEigenValsAndVecs(gray, str_sigma, 3)
eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2]
x, y = eigen[:,:,1,0], eigen[:,:,1,1]
print eigen
gxx = cv2.Sobel(gray, cv2.CV_32F, 2, 0, ksize=sigma)
gxy = cv2.Sobel(gray, cv2.CV_32F, 1, 1, ksize=sigma)
gyy = cv2.Sobel(gray, cv2.CV_32F, 0, 2, ksize=sigma)
gvv = x*x*gxx + 2*x*y*gxy + y*y*gyy
m = gvv < 0
ero = cv2.erode(img, None)
dil = cv2.dilate(img, None)
img1 = ero
img1[m] = dil[m]
img2 = np.uint8(img2*(1.0 - blend) + img1*blend)
#print 'done'
cv2.imshow('dst_rt', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
#cv2.imwrite('C:\Users\Niladri\Desktop\leaf_image_shock_filtered.jpg', img2)
for i in xrange(iter_n):
print i,
gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
eigen = cv2.cornerEigenValsAndVecs(gray, str_sigma, 3)
eigen = eigen.reshape(h, w, 3, 2) # [[e1, e2], v1, v2]
x, y = eigen[:,:,1,0], eigen[:,:,1,1]
print eigen
gxx = cv2.Sobel(gray, cv2.CV_32F, 2, 0, ksize=sigma)
gxy = cv2.Sobel(gray, cv2.CV_32F, 1, 1, ksize=sigma)
gyy = cv2.Sobel(gray, cv2.CV_32F, 0, 2, ksize=sigma)
gvv = x*x*gxx + 2*x*y*gxy + y*y*gyy
m = gvv < 0
ero = cv2.erode(img, None)
dil = cv2.dilate(img, None)
img1 = dil
img1[m] = ero[m]
img2 = np.uint8(img2*(1.0 - blend) + img1*blend)
print 'done'
#cv2.imwrite('D:\IP\tropical_image_sig5.bmp', img2)
cv2.imshow('dst_rt', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
#cv2.imshow('dst_rt', img2)
cv2.imwrite('C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2)
Can anyone please tell me why it is not working? cv2.imshow is working
properly (it shows the correct image). Thanks and regards, Niladri
Answer: As a general and absolute rule, you _have_ to protect your Windows path
strings (containing backslashes) with the `r` prefix, or some character
sequences get interpreted as escapes (e.g. `\n`, `\b`, `\v`, `\x` and, in your case, `\t`):
so when doing this:
cv2.imwrite('C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2)
you're trying to save to `C:\Users\Niladri\Desktop<TAB>ropical_image_sig5.bmp`,
which is not a valid path; cv2.imwrite typically just returns False on failure
instead of raising an error, which is why nothing seems to happen.
Do this:
cv2.imwrite(r'C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2)
Note: the read works fine because "escaped" uppercase letters have no
particular meaning in python 2 (`\U` has a meaning in python 3)
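As a side note, forward slashes also work in Windows paths and sidestep the
escaping problem entirely:

    cv2.imwrite('C:/Users/Niladri/Desktop/tropical_image_sig5.bmp', img2)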
|
how do i find my ipv4 using python?
Question: Here is my server, copy it if you want! :) How do I find my IPv4 address using
Python? Can you try to keep it really short?
    import socket

    def Main():
        host = '127.0.0.1'
        port = 5000

        s = socket.socket()
        s.bind((host, port))

        s.listen(1)
        c1, addr1 = s.accept()
        sending = "Connection:" + str(addr1)
        connection = (sending)
        print(connection)

        s.listen(1)
        c2, addr2 = s.accept()
        sending = "Connection:" + str(addr2)
        connection = (sending)
        print(connection)

        while True:
            data1 = c1.recv(1024).decode('utf-8')
            data2 = c2.recv(1024).decode('utf-8')
            if not data1:
                break
            if not data2:
                break
            if data2:
                c1.send(data2.encode('utf-8'))
            if data1:
                c2.send(data1.encode('utf-8'))
        s.close()

    if __name__ == '__main__':
        Main()
Thanks for the help, I appreciate it!
Answer: That's all you need for the local address (returns a string):
socket.gethostbyname(socket.gethostname())
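Note that this can return 127.0.0.1 if your hostname resolves to the loopback
entry (common on Linux). A usual workaround, sketched here, is to open a UDP
socket toward any external address and read back the local endpoint; no
packets are actually sent:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(('8.8.8.8', 80))  # connect() on a UDP socket only sets the default peer
    print(s.getsockname()[0])   # the local IPv4 address that would be used to reach it
    s.close()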
|
Drawing on python and pycharm
Question: I am a beginner in Python. I drew a square with this code:
    import turtle

    square = turtle.Turtle()
    print(square)
    for i in range(4):
        square.fd(100)
        square.lt(90)
    turtle.mainloop()
However, the book draws a square with the code below. I tried to copy the
exact same thing, but it didn't work out. Can someone help me figure out the
problem?
    def drawSquare(t, sz):
        """Make turtle t draw a square of sz."""
        for i in range(4):
            t.forward(sz)
            t.left(90)

    turtle.mainloop()
Answer: You need to call the function so it will start:
    import turtle

    def drawSquare(t, size):
        for i in range(4):
            t.forward(size)
            t.left(90)

    drawSquare(turtle.Turtle(), 100)
    turtle.mainloop()
|
Work with a row in a pandas dataframe without incurring chain indexing (not copying, just indexing)
Question: My data is organized in a dataframe:
import pandas as pd
import numpy as np
data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']}
df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4'])
Which looks like this (only much bigger):
Col1 Col2 Col3 Col4
R1 4 10 100 AAA
R2 5 20 50 BBB
R3 6 30 -30 AAA
R4 7 40 -50 CCC
My algorithm loops through the table's rows and performs a set of operations.
For the sake of cleanness/laziness, I would like to work on a single row at
each iteration without typing `df.loc['row index', 'column name']` to get each
cell value.
I have tried to follow the [right style](http://pandas.pydata.org/pandas-
docs/stable/indexing.html#indexing-view-versus-copy) using for example:
row_of_interest = df.loc['R2', :]
However, I still get the warning when I do:
row_of_interest['Col2'] = row_of_interest['Col2'] + 1000
SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
And it is not working as I intended: it is making a copy.
print df
Col1 Col2 Col3 Col4
R1 4 10 100 AAA
R2 5 20 50 BBB
R3 6 30 -30 AAA
R4 7 40 -50 CCC
Any advice on the proper way to do it? Or should I just stick to working with
the data frame directly?
Edit 1:
Using the replies provided the warning is removed from the code but the
original dataframe is not modified: The "row of interest" `Series` is a copy
not part of the original dataframe. For example:
import pandas as pd
import numpy as np
data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']}
df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4'])
row_of_interest = df.loc['R2']
row_of_interest.is_copy = False
new_cell_value = row_of_interest['Col2'] + 1000
row_of_interest['Col2'] = new_cell_value
print row_of_interest
Col1 5
Col2 1020
Col3 50
Col4 BBB
Name: R2, dtype: object
print df
Col1 Col2 Col3 Col4
R1 4 10 100 AAA
R2 5 20 50 BBB
R3 6 30 -30 AAA
R4 7 40 -50 CCC
Edit 2:
This is an example of the functionality I would like to replicate. In Python,
a list of lists looks like:
a = [[1,2,3],[4,5,6]]
Now I can create a "label"
b = a[0]
And if I change an entry in b:
b[0] = 7
Both a and b change.
print a, b
[[7,2,3],[4,5,6]], [7,2,3]
Can this behavior be replicated with a pandas dataframe, by labeling one of
its rows as a pandas Series?
Answer: This should work:
row_of_interest = df.loc['R2', :]
row_of_interest.is_copy = False
row_of_interest['Col2'] = row_of_interest['Col2'] + 1000
Setting `.is_copy = False` is the trick: it silences the warning. Note that the
row is still a copy, though, so the change only lands in the original dataframe
once you assign the row back (see Edit 2 below).
Edit 2:
import pandas as pd
import numpy as np
data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']}
df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4'])
row_of_interest = df.loc['R2']
row_of_interest.is_copy = False
new_cell_value = row_of_interest['Col2'] + 1000
row_of_interest['Col2'] = new_cell_value
print row_of_interest
df.loc['R2'] = row_of_interest
print df
df:
Col1 Col2 Col3 Col4
R1 4 10 100 AAA
R2 5 1020 50 BBB
R3 6 30 -30 AAA
R4 7 40 -50 CCC
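Note that a row pulled out of a DataFrame is generally a copy, not an alias the
way `b = a[0]` aliases a Python list of lists, which is why the write-back step
above is needed. If the goal is just to update the frame, indexing it directly
avoids the copy problem altogether:

    df.loc['R2', 'Col2'] += 1000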
|
'DataFrame' object is not callable
Question: I'm trying to create a heatmap using Python on Pycharms. I've this code:
    import numpy as np
    import pandas as pd
    import matplotlib
    matplotlib.use('agg')
    import matplotlib.pyplot as plt

    data1 = pd.read_csv("FILE")

    freqMap = {}
    for line in data1:
        for item in line:
            if not item in freqMap:
                freqMap[item] = {}
            for other_item in line:
                if not other_item in freqMap:
                    freqMap[other_item] = {}
                freqMap[item][other_item] = freqMap[item].get(other_item, 0) + 1
                freqMap[other_item][item] = freqMap[other_item].get(item, 0) + 1

    df = data1[freqMap].T.fillna(0)
    print(df)
print(df)
My data is stored in a CSV file. Each row represents a sequence of products
that are associated by a consumer transaction: the typical Market Basket
Analysis.
99 32 35 45 56 58 7 72
99 45 51 56 58 62 72 17
55 56 58 62 21 99 35
21 99 44 56 58 7 72
72 17 99 35 45 56 7
56 62 72 21 91 99 35
99 35 55 56 58 62 72
99 35 51 55 58 7 21
99 56 58 62 72 21
55 56 58 21 99 35
99 35 62 7 17 21
62 72 21 99 35 58
56 62 72 99 32 35
72 17 99 55 56 58
When I execute the code, I'm getting the following error:
Traceback (most recent call last):
File "C:/Users/tst/PycharmProjects/untitled1/tes.py", line 22, in <module>
df = data1[freqMap].T.fillna(0)
File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 1997, in __getitem__
return self._getitem_column(key)
File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2004, in _getitem_column
return self._get_item_cache(key)
File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\generic.py", line 1348, in _get_item_cache
res = cache.get(item)
TypeError: unhashable type: 'dict'
How can I solve this problem?
Many thanks!
Answer: You are reading a CSV file, but it has no header, the delimiter is a space
rather than a comma, and the rows have a variable number of columns. So that
is three problems in your read_csv line alone.
Furthermore, data1 is a DataFrame and freqMap is a dictionary; the two are
completely unrelated. It makes no sense to do data1[freqMap]: DataFrame
indexing expects column labels, and a dict is not even hashable, which is
exactly the TypeError you are seeing.
I suggest you step through this line by line in Jupyter or a Python
interpreter. Then you can see what each line actually does and experiment.
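If it helps, here is a minimal sketch of the same idea done directly on the raw
file, building the nested dict first and only then handing it to pandas
("FILE" stands in for your actual path, as in the question):

    import pandas as pd

    # build the nested co-occurrence map from the raw whitespace-delimited file
    freqMap = {}
    with open("FILE") as f:
        for line in f:
            items = line.split()  # ragged rows are fine here
            for item in items:
                freqMap.setdefault(item, {})
                for other_item in items:
                    freqMap[item][other_item] = freqMap[item].get(other_item, 0) + 1

    # a nested dict converts straight into a DataFrame; fill missing pairs with 0
    df = pd.DataFrame(freqMap).fillna(0)
    print(df)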
|
Reformat JSON file?
Question: I have two JSON files.
File A:
"features": [
{
"attributes": {
"NAME": "R T CO",
"LTYPE": 64,
"QUAD15M": "279933",
"OBJECTID": 225,
"SHAPE.LEN": 828.21510830520401
},
"geometry": {
"paths": [
[
[
-99.818614674337155,
27.782542677671653
],
[
-99.816056346719051,
27.782590806976135
]
]
]
}
}
File B:
"features": [
{
"geometry": {
"type": "MultiLineString",
"coordinates": [
[
[
-99.773315512624,
27.808875128096
],
[
-99.771397939251,
27.809512259374
]
]
]
},
"type": "Feature",
"properties": {
"LTYPE": 64,
"SHAPE.LEN": 662.3800009247,
"NAME": "1586",
"OBJECTID": 204,
"QUAD15M": "279933"
}
},
I would like File B to be reformatted to look like File A. Change "properties"
to "attributes", "coordinates" to "paths", and remove both "type":
"MultiLineString" and "type": "Feature". What is the best way to do this via
python?
Is there a way to also reorder the "attributes" key-value pairs to look like
File A?
It's a rather large dataset and I would like to iterate through the entire
file.
Answer: Manipulating JSON in Python is a good candidate for the [input-process-output
model](https://en.wikipedia.org/wiki/IPO_model) of programming.
For input, you convert the external JSON file into a Python data structure,
using [`json.load()`](https://docs.python.org/2/library/json.html#json.load).
For output, you convert the Python data structure into an external JSON file
using [`json.dump()`](https://docs.python.org/2/library/json.html#json.dump).
For the processing or conversion step, do whatever it is that you need to do,
using ordinary Python `dict` and `list` methods.
This program might do what you want:
    import json

    with open("b.json") as b:
        b = json.load(b)

    for feature in b["features"]:
        feature["attributes"] = feature["properties"]
        del feature["properties"]
        feature["geometry"]["paths"] = feature["geometry"]["coordinates"]
        del feature["geometry"]["coordinates"]
        del feature["geometry"]["type"]
        del feature["type"]

    with open("new-b.json", "w") as new_b:
        json.dump(b, new_b, indent=1, separators=(',', ': '))
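To answer the follow-up about key order: JSON objects are unordered by
specification, and plain Python 2 dicts do not preserve insertion order, so you
need collections.OrderedDict on both ends. As a sketch (key order copied from
File A), pass object_pairs_hook=OrderedDict to json.load() and rebuild each
attributes dict inside the loop above:

    from collections import OrderedDict

    key_order = ["NAME", "LTYPE", "QUAD15M", "OBJECTID", "SHAPE.LEN"]
    feature["attributes"] = OrderedDict(
        (k, feature["attributes"][k]) for k in key_order)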
|
How to execute multiline python code from a bash script?
Question: I need to extend a shell script (bash). As I am much more familiar with Python,
I want to do this by writing a few lines of Python code which depend on
variables from the shell script. Adding an extra Python file is not an option.
result=`python -c "import stuff; print('all $code in one very long line')"`
is not very readable.
I would prefer to specify my Python code as a multiline string and then
execute it.
Answer: Use a here-doc:
result=$(python <<EOF
import stuff
print('all $code in one very long line')
EOF
)
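Note that the unquoted `EOF` delimiter lets the shell substitute `$code` into
the Python source before Python runs, which is what you want here; write
`<<'EOF'` instead if the code block should reach Python verbatim, with no
shell expansion.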
|