Dataset schema (column: dtype, observed range):
Question: stringlengths (25 to 7.47k)
Q_Score: int64 (0 to 1.24k)
Users Score: int64 (-10 to 494)
Score: float64 (-1 to 1.2)
Data Science and Machine Learning: int64 (0 to 1)
is_accepted: bool (2 classes)
A_Id: int64 (39.3k to 72.5M)
Web Development: int64 (0 to 1)
ViewCount: int64 (15 to 1.37M)
Available Count: int64 (1 to 9)
System Administration and DevOps: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
Q_Id: int64 (39.1k to 48M)
Answer: stringlengths (16 to 5.07k)
Database and SQL: int64 (1 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
Title: stringlengths (15 to 148)
AnswerCount: int64 (1 to 32)
Tags: stringlengths (6 to 90)
CreationDate: stringlengths (23 to 23)
Other: int64 (0 to 1)
PHP or Python. Must use and connect to our existing Postgres databases; open source or very low license fees; common CMS features, with admin tools to help manage / moderate the community. We have a large member base on a very basic site where members provide us contact info and info about their professional characteristics. We are about to expand and build a new community site (to migrate our member base to) where users will be able to message each other, post to forums, blog, and share private group discussions, and members will be sent invitations to earn compensation for their expertise. Profile pages, job postings, and video chat would be a plus. We already have a team of admins savvy with web apps to help manage it, but our developer resources are limited (3-4 programmers) and we are looking to save time in development as opposed to building our new site from scratch.
5
1
0.099668
0
false
13,003,890
1
5,703
1
0
0
13,000,007
Have you tried Drupal? It supports PostgreSQL, is written in PHP, and is open source.
1
0
0
What is a good cms that is postgres compatible, open source and either php or python based?
2
postgresql,content-management-system,python-2.7
0
2012-10-21T16:56:00.000
I cannot get a connection to a MySQL database if my password contains punctuation characters, in particular $ or @. I have tried to escape the characters, by doubling the $$ etc., but no joy. I have tried the pymysql library and the _mssql library. The code: self.dbConn = _mysql.connect(host=self.dbDetails['site'], port=self.dbDetails['port'], user=self.dbDetails['user'], passwd=self.dbDetails['passwd'], db=self.dbDetails['db']) where self.dbDetails['passwd'] = "$abcdef". I have tried '$$abcdef', re.escape(self.dbDetails['passwd']), and '\$abcdef', but nothing works until I change the user's password to remove the "$". Then it connects just fine. The only error I am getting is a failure to connect. I guess I will have to figure out how to print the actual exception message.
2
0
0
0
false
16,186,975
0
432
1
0
0
13,004,789
Try the MySQLdb package; it can handle punctuation in the password when connecting to the database.
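A minimal sketch of that suggestion, with made-up host and credentials: the password string is passed to MySQLdb.connect() as-is, with no escaping, and the exception is printed so the real failure reason is visible.

```python
import MySQLdb

db_details = {'site': 'localhost', 'port': 3306,        # placeholder host/port
              'user': 'appuser', 'passwd': '$abcdef', 'db': 'appdb'}

try:
    conn = MySQLdb.connect(host=db_details['site'],
                           port=db_details['port'],
                           user=db_details['user'],
                           passwd=db_details['passwd'],  # punctuation is fine as-is
                           db=db_details['db'])
except MySQLdb.Error as exc:
    # Printing the real exception usually shows whether the failure is
    # authentication, host access, or something else entirely.
    print("Connection failed: %s" % exc)
```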
1
0
0
How to connect with passwords that contain characters like "$" or "@"?
1
python,mysql,passwords
0
2012-10-22T03:56:00.000
I am using the Django non-rel version with MongoDB backends. I am interested in tracking the changes that occur on model instances, e.g. if someone creates, edits or deletes a model instance. The backend db is Mongo, hence models have an associated "_id" field in the respective collections/dbs. Now I want to extract this "_id" field for the object on which the modification took place. The idea is to write this "_id" field to another db so someone can pick it up from there and know what object was updated. I thought about overriding the save() method from Django's "models.Model", since all my models are derived from that. However, the Mongo "_id" field is obviously not present there, since the Mongo insert has not taken place yet. Is there any possibility of a pseudo post-save() method that can be called after the save operation has taken place in Mongo? Can django/django-toolbox/pymongo provide such a combination?
0
0
1.2
0
true
13,031,452
1
132
1
0
0
13,024,361
After some deep digging into the Django models I was able to solve the problem. The save() method in turn calls the save_base() method. This method saves the returned results, ids in the case of Mongo, into self.id. This _id field can then be picked up by overriding the save() method for the model.
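A sketch of that approach (not the poster's exact code). It assumes, as the answer states, that the MongoDB backend fills in self.pk / self.id during the base save; the Article and ChangeLog models and their fields are invented for illustration.

```python
from django.db import models

class ChangeLog(models.Model):
    # Hypothetical model used to record which object was touched.
    model_name = models.CharField(max_length=100)
    object_id = models.CharField(max_length=64)
    action = models.CharField(max_length=10)

class Article(models.Model):
    title = models.CharField(max_length=200)

    def save(self, *args, **kwargs):
        created = self.pk is None
        super(Article, self).save(*args, **kwargs)  # save_base() fills self.pk / self.id
        ChangeLog.objects.create(
            model_name='Article',
            object_id=str(self.pk),
            action='create' if created else 'edit',
        )
```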
1
0
0
Django-Nonrel(mongo-backend):Model instance modification tracking
1
python,django,mongodb,django-models,django-nonrel
0
2012-10-23T06:01:00.000
Which of these two languages interfaces better and delivers a better performance/toolset for working with sqlite database? I am familiar with both languages but need to choose one for a project I'm developing and so I thought I would ask here. I don't believe this to be opinionated as performance of a language is pretty objective.
0
5
1.2
0
true
13,059,204
0
150
1
0
0
13,059,142
There is no good reason to choose one over the other as far as sqlite performance or usability. Both languages have perfectly usable (and pythonic/rubyriffic) sqlite3 bindings. In both languages, unless you do something stupid, the performance is bounded by the sqlite3 performance, not by the bindings. Neither language's bindings are missing any uncommon but sometimes performance-critical functions (like an "exec many", manual transaction management, etc.). There may be language-specific frameworks that are better or worse in how well they integrate with sqlite3, but at that point you're choosing between frameworks, not languages.
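For illustration, a minimal sketch of the Python side using the standard-library sqlite3 bindings mentioned above, showing executemany() and manual transaction control; the file and table names are made up.

```python
import sqlite3

conn = sqlite3.connect('example.db')        # hypothetical file name
conn.isolation_level = None                 # take manual control of transactions
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS points (x REAL, y REAL)')

rows = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]
cur.execute('BEGIN')                        # explicit transaction
cur.executemany('INSERT INTO points (x, y) VALUES (?, ?)', rows)
cur.execute('COMMIT')

for x, y in cur.execute('SELECT x, y FROM points ORDER BY x'):
    print(x, y)
conn.close()
```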
1
0
0
ruby or python for use with sqlite database?
1
python,ruby,sqlite
1
2012-10-24T23:00:00.000
I have huge tables of data that I need to manipulate (sort, calculate new quantities, select specific rows according to some conditions and so on...). So far I have been using a spreadsheet software to do the job but this is really time consuming and I am trying to find a more efficient way to do the job. I use python but I could not figure out how to use it for such things. I am wondering if anybody can suggest something to use. SQL?!
0
1
0.099668
0
false
13,060,535
0
97
1
0
0
13,060,427
This is a very general question, but there are multiple things you can do to possibly make your life easier. 1. CSV: very useful if you are storing data that is ordered in columns, and if you are looking for easy-to-read text files. 2. Sqlite3: a database system that does not require a server (it uses a file instead) and is interacted with just like any other database system; however, it is not recommended for very large scale projects handling massive amounts of data. 3. MySQL: a database system that requires a server to interact with, but can be tweaked for very large scale projects as well as small scale projects. There are many other types of systems, so I suggest you search around and find the perfect fit. However, if you want to mess around with Sqlite3 or CSV, both the sqlite3 and csv modules are supplied in the standard library with Python 2.7 and 3.x.
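A rough sketch of the csv + sqlite3 combination suggested above; the file name and column layout are invented, and on Python 3 the file would be opened with mode 'r' and newline='' instead of 'rb'.

```python
import csv
import sqlite3

conn = sqlite3.connect(':memory:')          # or a file path for persistence
cur = conn.cursor()
cur.execute('CREATE TABLE measurements (sample TEXT, value REAL)')

with open('data.csv', 'rb') as fh:          # hypothetical two-column file
    reader = csv.reader(fh)
    next(reader)                            # skip the header row
    cur.executemany('INSERT INTO measurements VALUES (?, ?)', reader)
conn.commit()

# Sorting and conditional selection become plain SQL instead of spreadsheet work.
for row in cur.execute('SELECT sample, value FROM measurements '
                       'WHERE value > ? ORDER BY value DESC', (10.0,)):
    print(row)
```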
1
0
0
sorting and selecting data
2
python,sql,sorting,select
0
2012-10-25T01:40:00.000
I have come across a problem and am not sure which would be the most suitable technology to implement it. I would be obliged if you can suggest some options based on your experience. I want to load data from 10-15 CSV files, each of them fairly large, 5-10 GB. By load data I mean convert the CSV file to XML and then populate around 6-7 staging tables in Oracle using this XML. The data needs to be populated such that the elements of the XML, and eventually the rows of the table, come from multiple CSV files. So for example an element A would have sub-elements coming from CSV file 1, file 2 and file 3, etc. I have a framework built on top of Apache Camel, JBoss on Linux. Oracle 10g is the database server. Options I am considering: Smooks - however, the problem is that Smooks serializes one CSV at a time and I can't afford to hold on to the half-baked Java beans till the other CSV files are read, since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated and written to disk as XML. SQLLoader - I could skip the XML creation altogether and load the CSV directly to the staging tables using SQLLoader, but I am not sure if I can a. load multiple CSV files in SQL Loader to the same tables, updating the records after the first file, and b. apply some translation rules while loading the staging tables. Python script to convert the CSV to XML. SQLLoader to load a different set of staging tables corresponding to the CSV data, and then writing a stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid, given the amount of changes to my existing framework it would need). Thanks in advance. If someone can point me in the right direction or give me some insights from his/her personal experience it will help me make an informed decision. regards, -v- PS: The CSV files are fairly simple, with around 40 columns each. The depth of objects or relationships between the files would be around 2 to 3.
3
1
0.066568
1
false
14,449,025
0
2,011
2
0
0
13,061,800
Create a process / script that will call a procedure to load the csv files into an external Oracle table, and another script to load them into the destination table. You can also add cron jobs to call these scripts, which will keep track of incoming csv files in the directory, process them and move each csv file to an output/processed folder. Exceptions can also be handled accordingly, by logging them or sending out an email. Good luck.
1
0
0
Choice of technology for loading large CSV files to Oracle tables
3
python,csv,etl,sql-loader,smooks
0
2012-10-25T04:54:00.000
I have come across a problem and am not sure which would be the most suitable technology to implement it. I would be obliged if you can suggest some options based on your experience. I want to load data from 10-15 CSV files, each of them fairly large, 5-10 GB. By load data I mean convert the CSV file to XML and then populate around 6-7 staging tables in Oracle using this XML. The data needs to be populated such that the elements of the XML, and eventually the rows of the table, come from multiple CSV files. So for example an element A would have sub-elements coming from CSV file 1, file 2 and file 3, etc. I have a framework built on top of Apache Camel, JBoss on Linux. Oracle 10g is the database server. Options I am considering: Smooks - however, the problem is that Smooks serializes one CSV at a time and I can't afford to hold on to the half-baked Java beans till the other CSV files are read, since I run the risk of running out of memory given the sheer number of beans I would need to create and hold on to before they are fully populated and written to disk as XML. SQLLoader - I could skip the XML creation altogether and load the CSV directly to the staging tables using SQLLoader, but I am not sure if I can a. load multiple CSV files in SQL Loader to the same tables, updating the records after the first file, and b. apply some translation rules while loading the staging tables. Python script to convert the CSV to XML. SQLLoader to load a different set of staging tables corresponding to the CSV data, and then writing a stored procedure to load the actual staging tables from this new set of staging tables (a path which I want to avoid, given the amount of changes to my existing framework it would need). Thanks in advance. If someone can point me in the right direction or give me some insights from his/her personal experience it will help me make an informed decision. regards, -v- PS: The CSV files are fairly simple, with around 40 columns each. The depth of objects or relationships between the files would be around 2 to 3.
3
2
1.2
1
true
13,062,737
0
2,011
2
0
0
13,061,800
Unless you can use some full-blown ETL tool (e.g. Informatica PowerCenter, Pentaho Data Integration), I suggest the 4th solution - it is straightforward and the performance should be good, since Oracle will handle the most complicated part of the task.
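If the Python-script route is chosen, the CSV-to-XML step can be sketched with nothing but the standard library, as below. The element and file names are invented, the CSV headers are assumed to be valid XML tag names, and a real 5-10 GB file should be streamed in chunks rather than built into one in-memory tree.

```python
import csv
import xml.etree.cElementTree as ET

def csv_to_xml(csv_path, xml_path, row_element='record'):
    # Assumes the CSV header names are usable as XML tag names.
    root = ET.Element('records')
    with open(csv_path, 'rb') as fh:        # 'r' with newline='' on Python 3
        reader = csv.DictReader(fh)         # the header row supplies field names
        for row in reader:
            rec = ET.SubElement(root, row_element)
            for field, value in row.items():
                ET.SubElement(rec, field).text = value
    ET.ElementTree(root).write(xml_path)

csv_to_xml('file1.csv', 'file1.xml')        # placeholder file names
```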
1
0
0
Choice of technology for loading large CSV files to Oracle tables
3
python,csv,etl,sql-loader,smooks
0
2012-10-25T04:54:00.000
I'm using Python's MySQLdb to fetch rows from a MySQL 5.6.7 db, which supports microsecond-precision datetime columns. When I read a row with MySQLdb I get "None" for the time field. Is there a way to read such time fields with Python?
2
1
1.2
0
true
13,299,592
0
310
1
0
0
13,068,227
MySQLdb 1.2.4 (to be released within the next week) and the current release candidate have support for MySQL 5.5 and newer, and should solve your problem. Please try 1.2.4c1 from PyPI (pip install MySQL-python).
1
0
0
How to read microsecond-precision mysql datetime fields with python
2
python,mysql,mysql-python
0
2012-10-25T12:08:00.000
I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something. What I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run against this check, since I didn't have the null/default error. Is it normal? Why is it so? And why is South checking this null/default thing anyway?
0
1
1.2
0
true
13,085,822
1
107
2
0
0
13,085,658
If you add a column to a table, which already has some rows populated, then either: the column is nullable, and the existing rows simply get a null value for the column the column is not nullable but has a default value, and the existing rows are updated to have that default value for the column To produce a non-nullable column without a default, you need to add the column in multiple steps. Either: add the column as nullable, populate the defaults manually, and then mark the column as not-nullable add the column with a default value, and then remove the default value These are effectively the same, they both will go through updating each row. I don't know South, but from what you're describing, it is aiming to produce a single DDL statement to add the column, and doesn't have the capability to add it in multiple steps like this. Maybe you can override that behaviour, or maybe you can use two migrations? By contrast, when you are creating a table, there clearly is no existing data, so you can create non-nullable columns without defaults freely.
1
0
0
South initial migrations are not forced to have a default value?
2
python,django,postgresql,django-south
0
2012-10-26T11:00:00.000
I see that when you add a column and want to create a schemamigration, the field has to have either null=True or default=something. What I don't get is that many of the fields that I've written in my models initially (say, before initial schemamigration --init or from a converted_to_south app, I did both) were not run against this check, since I didn't have the null/default error. Is it normal? Why is it so? And why is South checking this null/default thing anyway?
0
0
0
0
false
13,085,826
1
107
2
0
0
13,085,658
When you have existing records in your database and you add a column to one of your tables, you will have to tell the database what to put in there; South can't read your mind :-) So unless you mark the new field null=True or opt in to a default value, it will raise an error. If you had an empty database, there are no values to be set, but a model field would still require basic properties. If you look deeper at the field class you're using, you will see Django sets some default values, like max_length and null (depending on the field).
1
0
0
South initial migrations are not forced to have a default value?
2
python,django,postgresql,django-south
0
2012-10-26T11:00:00.000
I'm using Python and Excel with Office 2010 and have no problems there. I used Python's makepy module in order to bind to the Excel COM objects. However, on a different computer I've installed Office 2013, and when I launched makepy no Excel option was listed (as opposed to Office 2010, where 'Microsoft Excel 14.0 Object Library' is listed by makepy). I've searched for 'Microsoft Excel 15.0 Object Library' in the registry and it is there. I tried to use: makepy -d 'Microsoft Excel 15.0 Object Library', but that didn't work. Help will be much appreciated. Thanks.
4
0
0
0
false
42,290,194
0
2,318
1
0
0
13,121,529
wilywampa's answer corrects the problem. However, the combrowse.py at win32com\client\combrowse.py can also be used to get the IID (Interface Identifier) from the registered type libraries folder and subsequently integrate it with code as suggested by @cool_n_curious. But as stated before, wilywampa's answer does correct the problem and you can just use the makepy.py utility as usual.
1
0
0
Python Makepy with Office 2013 (office 15)
3
python,excel,win32com,office-2013
0
2012-10-29T12:20:00.000
I have a class that can interface with either Oracle or MySQL. The class is initialized with a keyword of either "Oracle" or "MySQL" and a few other parameters that are standard for both database types (what to print, whether or not to stop on an exception, etc.). It was easy enough to add if Oracle do A, elif MySQL do B as necessary when I began, but as I add more specialized code that only applies to one database type, this is becoming ugly. I've split the class into two, one for Oracle and one for MySQL, with some shared functions to avoid duplicate code. What is the most Pythonic way to handle calling these new classes? Do I create a wrapper function/class that uses this same keyword and returns the correct class? Do I change all of my code that calls the old generic class to call the correct DB-specific class? I'll gladly mock up some example code if needed, but I didn't think it was necessary. Thanks in advance for any help!
5
3
0.291313
0
false
13,125,435
0
126
1
0
0
13,125,271
Create a factory class which returns an implementation based on the parameter. You can then have a common base class for both DB types, one implementation for each, and let the factory create, configure and return the correct implementation to the user based on a parameter. This works well when the two classes behave very similarly; but as soon as you want to use DB-specific features, it gets ugly because you need methods like isFeatureXSupported() (a good approach) or isOracle() (simpler but bad, since it moves knowledge of which DB has which feature from the helper class into the app code). Alternatively, you can implement all features for both and throw an exception when one isn't supported. In your code, you can then look for the exception to check this. This makes the code cleaner, but now you can't really check whether a feature is available without actually using it. That can cause problems in the app code (when you want to disable menus, for example, or when the app could do it some other way).
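A minimal sketch of the factory idea, with placeholder class and method names rather than a finished DB layer:

```python
class BaseDB(object):
    def __init__(self, verbose=False, stop_on_error=True):
        self.verbose = verbose
        self.stop_on_error = stop_on_error

    def bulk_load(self, rows):
        raise NotImplementedError('not supported by this backend')

class OracleDB(BaseDB):
    def bulk_load(self, rows):
        print('Oracle-specific bulk load of %d rows' % len(rows))

class MySQLDB(BaseDB):
    pass  # inherits the "unsupported" behaviour for bulk_load

def make_db(kind, **kwargs):
    """Factory: return the implementation matching the old keyword."""
    backends = {'Oracle': OracleDB, 'MySQL': MySQLDB}
    try:
        return backends[kind](**kwargs)
    except KeyError:
        raise ValueError('unknown database type: %r' % kind)

db = make_db('Oracle', verbose=True)
db.bulk_load([('a', 1), ('b', 2)])
```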
1
0
1
Most Pythonic way to handle splitting a class into multiple classes
2
python
0
2012-10-29T16:03:00.000
I am using the modules xlrd, xlwt and xlutils to do some Excel manipulations in Python. I am not able to figure out how to copy the value of cell (X,Y) to cell (A,B) in the same sheet of an Excel file in Python. Could someone let me know how to do that?
1
0
0
0
false
13,998,563
0
472
1
0
0
13,156,730
Working on 2 cells among tens of thousands is quite meager. Normally, one would present an iteration over rows x columns.
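For what it's worth, one common pattern with the libraries named in the question (a sketch assuming an .xls workbook and example coordinates, not the only way) is to read the value with xlrd, clone the workbook with xlutils.copy, and write the copy with xlwt:

```python
from xlrd import open_workbook
from xlutils.copy import copy

SRC_ROW, SRC_COL = 4, 2      # cell (X, Y): example coordinates
DST_ROW, DST_COL = 0, 0      # cell (A, B)

rb = open_workbook('book.xls', formatting_info=True)  # hypothetical file
value = rb.sheet_by_index(0).cell_value(SRC_ROW, SRC_COL)

wb = copy(rb)                         # writable xlwt copy of the workbook
wb.get_sheet(0).write(DST_ROW, DST_COL, value)
wb.save('book_copy.xls')              # write to a new file rather than in place
```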
1
0
0
Copying value of cell (X,Y) to cell (A,B) in same sheet of an Excel file using Python
2
python,excel,xlrd,xlwt
0
2012-10-31T11:21:00.000
I have a few things to ask about custom queries in Django. Do I need to use the DB table name in the query, or just the model name, if I need to join the various tables in raw SQL? And do I need to use the db field name or the model field name, like Person.objects.raw('SELECT id, first_name, last_name, birth_date FROM Person A inner join Address B on A.address = B.id') or B.id = A.address_id?
0
3
0.53705
0
false
13,172,382
1
408
1
0
0
13,172,331
You need to use the database's table and field names in the raw query--the string you provide will be passed to the database, not interpreted by the Django ORM.
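A sketch of that point: the raw() string is sent to the database, so it uses the generated table names (Django's default is <app>_<model>) and the real foreign-key column with its _id suffix. The app label "myapp" is hypothetical, and the models are assumed to be the ones from the question.

```python
from myapp.models import Person   # hypothetical app; models from the question

persons = Person.objects.raw(
    'SELECT A.id, A.first_name, A.last_name, A.birth_date '
    'FROM myapp_person A '
    'INNER JOIN myapp_address B ON A.address_id = B.id'
)
for p in persons:
    print(p.first_name, p.last_name)
```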
1
0
0
Using raw sql in django python
1
python,django
0
2012-11-01T06:54:00.000
What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: a (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the rest of the data is sent via a POST. This leaves a good chance of having orphans, as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data; however, the issue would still be there on a smaller scale. Possible, yet inefficient solutions: whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted. My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient. Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions. Thanks in advance!
2
1
0.049958
0
false
13,187,373
1
1,014
3
0
0
13,186,494
You can create an entity that links blobs to users. When a user uploads a blob, you immediately create a new record with the blob id, user id (or post id), and time created. When a user submits a post, you add a flag to this entity, indicating that the blob is used. Now your cron job needs to fetch all entities of this kind where the flag is not equal to "true" and the time created is more than one hour ago. Moreover, you can fetch keys only, which is a more efficient operation than fetching full entities.
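A sketch of that entity with the old google.appengine.ext.db API; all names are invented. Storing the blob key as the entity's key_name lets the cleanup job run a keys-only query, as suggested.

```python
import datetime
from google.appengine.ext import db, blobstore

class BlobUsage(db.Model):
    used = db.BooleanProperty(default=False)
    user_id = db.StringProperty()
    created = db.DateTimeProperty(auto_now_add=True)

def record_upload(blob_key, user_id):
    BlobUsage(key_name=str(blob_key), user_id=user_id).put()

def mark_used(blob_key):
    usage = BlobUsage.get_by_key_name(str(blob_key))
    if usage:
        usage.used = True
        usage.put()

def delete_stale_blobs():
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=1)
    keys = (BlobUsage.all(keys_only=True)
            .filter('used =', False)
            .filter('created <', cutoff)
            .fetch(500))
    blobstore.delete([k.name() for k in keys])  # key_name holds the blob key
    db.delete(keys)
```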
1
0
0
Deleting Blobstore orphans
4
google-app-engine,python-2.7,google-cloud-datastore,blobstore
0
2012-11-01T22:29:00.000
What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: a (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the rest of the data is sent via a POST. This leaves a good chance of having orphans, as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data; however, the issue would still be there on a smaller scale. Possible, yet inefficient solutions: whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted. My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient. Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions. Thanks in advance!
2
3
1.2
0
true
13,247,039
1
1,014
3
0
0
13,186,494
Thanks for the comments. However, although I understood those solutions well, I find them too inefficient. Querying thousands of entries for those that are flagged as "unused" is not ideal. I believe I have come up with a better way and would like to hear your thoughts on it: when a blob is saved, a deferred task is immediately created to delete that same blob in an hour's time. If the post is created and saved, the deferred task is deleted, and thus the blob will not be deleted in an hour's time. I believe this saves you from having to query thousands of entries every single hour. What are your thoughts on this solution?
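A sketch of that idea using the deferred library; the function names are invented, and it assumes the blob key string is acceptable as a task name so the pending cleanup can be cancelled later.

```python
from google.appengine.ext import deferred, blobstore
from google.appengine.api import taskqueue

def delete_if_orphaned(blob_key):
    # Runs an hour after upload; deleting an already-deleted blob is harmless.
    blobstore.delete(blob_key)

def on_blob_uploaded(blob_key):
    deferred.defer(delete_if_orphaned, str(blob_key),
                   _countdown=3600,                     # one hour from now
                   _name='cleanup-%s' % blob_key)       # named so it can be found later

def on_post_saved(blob_key):
    # Cancel the pending cleanup so the now-used blob is kept.
    taskqueue.Queue('default').delete_tasks(
        taskqueue.Task(name='cleanup-%s' % blob_key))
```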
1
0
0
Deleting Blobstore orphans
4
google-app-engine,python-2.7,google-cloud-datastore,blobstore
0
2012-11-01T22:29:00.000
What is the most efficient way to delete orphan blobs from a Blobstore? App functionality & scope: a (logged-in) user wants to create a post containing some normal datastore fields (e.g. name, surname, comments) and blobs (images). In addition, the blobs are uploaded asynchronously before the rest of the data is sent via a POST. This leaves a good chance of having orphans, as, for example, a user may upload images but not complete the form for one reason or another. This issue would be minimized by not using an asynchronous upload of the blobs before sending the rest of the data; however, the issue would still be there on a smaller scale. Possible, yet inefficient solutions: whenever a post is completed (i.e. the rest of the data is sent), you add the blob keys to a table of "used blobs". Then, you can run a cron every so often and compare all of the blobs with the table of "used blobs". Those that have been uploaded over an hour ago yet are still "not used" are deleted. My understanding is that running through a list of potentially hundreds of thousands of blob keys and comparing it with another table of hundreds of thousands of "used blob keys" is very inefficient. Is there any better way of doing this? I've searched for similar posts yet I couldn't find any mentioning efficient solutions. Thanks in advance!
2
0
0
0
false
16,378,785
1
1,014
3
0
0
13,186,494
Use drafts! Save as a draft after each upload, and then don't do the cleaning; let the users choose for themselves what to wipe out. If you're planning on Facebook-style posts, either use drafts or make the post private. Why bother deleting users' data?
1
0
0
Deleting Blobstore orphans
4
google-app-engine,python-2.7,google-cloud-datastore,blobstore
0
2012-11-01T22:29:00.000
This issue has been occurring on and off for a few weeks now, and it's unlike any that has come up with my project. Two of the models that are used have a timestamp field, which is by default set to timezone.now(). This is the sequence that raises error flags: model one is created at time 7:30 PM; model two is created at time 10:00 PM, but in the MySQL database it's stored as 7:30 PM! Every model that is created has its timestamp saved under 7:30 PM, not the actual time, until a certain duration passes. Then a new time is set and all the following models have that new time... Bizarre. Some extra details which may help in discovering the issue: I have a bunch of methods that I use to strip my timezones of their tzinfo and replace them with UTC. This is because I'm doing a timezone.now() - creationTime calculation to create a "model was posted this long ago" feature in the project. However, this really should not be the cause of the problem. I don't think using datetime.datetime.now() will make any difference either. Anyway, thanks for the help!
28
66
1.2
0
true
13,226,368
1
23,029
1
0
0
13,225,890
Just ran into this last week for a field that had default=date.today(). If you remove the parentheses (in this case, try default=timezone.now) then you're passing a callable to the model and it will be called each time a new instance is saved. With the parentheses, it's only being called once when models.py loads.
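The difference can be illustrated with a hypothetical model: pass the callable itself, not its result.

```python
from django.db import models
from django.utils import timezone

class Tweet(models.Model):                            # hypothetical model
    # Wrong: timezone.now() runs once, when models.py is imported,
    # so every row gets that stale timestamp:
    #   created_at = models.DateTimeField(default=timezone.now())
    # Right: the callable is evaluated each time a new instance is saved:
    created_at = models.DateTimeField(default=timezone.now)
```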
1
0
0
Django default=timezone.now() saves records using "old" time
2
python,django,django-timezone
0
2012-11-05T04:23:00.000
I'm currently building a web service using Python / Flask and would like to build my data layer on top of Neo4j, since my core data structure is inherently a graph. I'm a bit confused by the different technologies offered by Neo4j for that case. In particular: I originally planned on using the REST API through py2neo, but the lack of transactions is a bit of a problem. The "embedded database" Neo4j doesn't seem to suit my case very well. I guess it's useful when you're working with batch and one-time analytics, and don't need to store the database on a different server from the web server. I've stumbled upon the neo4django project, but I'm not sure this one offers transaction support (since there are no native clients to Neo4j for Python), and whether it would be a problem to use it outside Django itself. In fact, after having looked at the project's documentation, I feel like it has exactly the same limitations, aka no transactions (but then, how can you build a real-world service when you can corrupt your model upon a single connection timeout?). I don't even understand what the use for that project is. Could anyone recommend anything? I feel completely stuck. Thanks
2
5
1.2
0
true
13,234,558
1
1,075
1
0
0
13,233,107
None of the REST API clients will be able to explicitly support (proper) transactions since that functionality is not available through the Neo4j REST API interface. There are a few alternatives such as Cypher queries and batched execution which all operate within a single atomic transaction on the server side; however, my general approach for client applications is to try to build code which can gracefully handle partially complete data, removing the need for explicit transaction control. Often, this approach will make heavy use of unique indexing and this is one reason that I have provided a large number of "get_or_create" type methods within py2neo. Cypher itself is incredibly powerful and also provides uniqueness capabilities, in particular through the CREATE UNIQUE clause. Using these, you can make your writes idempotent and you can err on the side of "doing it more than once" safe in the knowledge that you won't end up with duplicate data. Agreed, this approach doesn't give you transactions per se but in most cases it can give you an equivalent end result. It's certainly worth challenging yourself as to where in your application transactions are truly necessary. Hope this helps Nigel
1
0
0
using neo4J (server) from python with transaction
2
python,flask,neo4j,py2neo
0
2012-11-05T13:27:00.000
Newbie here trying to use Python to do some database analysis. I keep getting the error "error: cannot locate an Oracle software installation" when installing cx_Oracle (via easy_install). The problem is I do not have Oracle on my local machine; I'm trying to use Python to connect to the main Oracle server. I have set up another program to do this (visualdb), and I had a .jar file I used as the driver, but I'm not sure how to use it in this case. Any suggestions?
12
1
0.033321
0
false
58,120,873
0
25,818
3
0
0
13,234,196
Tip for Ubuntu users: after configuring the .bashrc environment variables, as explained in other answers, don't forget to reload your terminal window, e.g. by typing $SHELL.
1
0
0
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
6
python,oracle,cx-oracle
0
2012-11-05T14:32:00.000
Newbie here trying to use Python to do some database analysis. I keep getting the error "error: cannot locate an Oracle software installation" when installing cx_Oracle (via easy_install). The problem is I do not have Oracle on my local machine; I'm trying to use Python to connect to the main Oracle server. I have set up another program to do this (visualdb), and I had a .jar file I used as the driver, but I'm not sure how to use it in this case. Any suggestions?
12
2
0.066568
0
false
28,741,244
0
25,818
3
0
0
13,234,196
I got this message when I was trying to install the 32 bit version while having the 64bit Oracle client installed. What worked for me: reinstalled python with 64 bit (had 32 for some reason), installed cx_Oracle (64bit version) with the Windows installer and it worked perfectly.
1
0
0
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
6
python,oracle,cx-oracle
0
2012-11-05T14:32:00.000
Newbie here trying to use Python to do some database analysis. I keep getting the error "error: cannot locate an Oracle software installation" when installing cx_Oracle (via easy_install). The problem is I do not have Oracle on my local machine; I'm trying to use Python to connect to the main Oracle server. I have set up another program to do this (visualdb), and I had a .jar file I used as the driver, but I'm not sure how to use it in this case. Any suggestions?
12
2
0.066568
0
false
13,234,377
0
25,818
3
0
0
13,234,196
I installed cx_Oracle, but I also had to install an Oracle client to use it (the cx_Oracle module is just a common and pythonic way to interface with the Oracle client in Python). So you have to set the variable ORACLE_HOME to your Oracle client folder (on Unix: via a shell, for instance; on Windows: create a new variable if it does not exist in the Environment variables of the Configuration Panel). Your folder $ORACLE_HOME/network/admin (%ORACLE_HOME%\network\admin on Windows) is the place where you would place your tnsnames.ora file.
1
0
0
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
6
python,oracle,cx-oracle
0
2012-11-05T14:32:00.000
I've looked at a number of questions on this site and cannot find an answer to the question: how to create multiple NEW tables in a database (in my case I am using PostgreSQL) from multiple CSV source files, where the new database table columns accurately reflect the data within the CSV columns? I can write the CREATE TABLE syntax just fine, and I can read the rows/values of a CSV file(s), but does a method already exist to inspect the CSV file(s) and accurately determine the column type? Before I build my own, I wanted to check if this already existed. If it doesn't exist already, my idea would be to use Python, the csv module, and the psycopg2 module to build a Python script that would: read the CSV file(s); based upon a subset of records (10-100 rows?), iteratively inspect each column of each row to automatically determine the right column type of the data in the CSV. Therefore, if row 1, column A had a value of 12345 (int), but row 2 of column A had a value of ABC (varchar), the system would automatically determine it should be a format varchar(5) based upon the combination of the data it found in the first two passes. This process could go on as many times as the user felt necessary to determine the likely type and size of the column. Then: build the CREATE TABLE query as defined by the column inspection of the CSV; execute the create table query; load the data into the new table. Does a tool like this already exist within either SQL, PostgreSQL, Python, or is there another application I should be using to accomplish this (similar to pgAdmin3)?
11
0
0
0
false
52,581,750
0
7,124
2
0
0
13,239,004
Although this is quite an old question, it doesn't seem to have a satisfying answer, and I was struggling with the exact same issue. With the arrival of the SQL Server Management Studio 2018 edition, and probably somewhat before that, a pretty good solution was offered by Microsoft. In SSMS, on a database node in the object explorer, right-click, select 'Tasks' and choose 'Import data'. Choose 'Flat file' as source and, in the General section, browse to your .csv file. An important note here: make sure there's no table in your target SQL server matching the file's name. In the Advanced section, click on 'Suggest types' and, in the next dialog, enter preferably the total number of rows in your file or, if that's too much, a large enough number to cover all possible values (this takes a while). Click next, and in the subsequent step, connect to your SQL server. Now, every brand has its own flavour of data types, but you should get a nice set of relevant pointers. I've tested this using the SQL Server Native Client 11.0; please leave your comments for other providers as a reply to this solution. Here it comes: click 'Edit Mappings', then click 'Edit SQL' et voila, a nice SQL statement with all the discovered data types. Click through to the end, selecting 'Run immediately', to see all of your .csv columns created with appropriate types in your SQL server. Extra: if you run the above steps twice, exactly the same way with the same file, the first loop will use the 'CREATE TABLE...' statement, but the second go will skip table creation. If you save the second run as an SSIS (Integration Services) file, you can later re-run the entire setup without scanning the .csv file.
1
0
0
Create SQL table with correct column types from CSV
3
python,sql,postgresql,pgadmin
0
2012-11-05T19:30:00.000
I've looked at a number of questions on this site and cannot find an answer to the question: how to create multiple NEW tables in a database (in my case I am using PostgreSQL) from multiple CSV source files, where the new database table columns accurately reflect the data within the CSV columns? I can write the CREATE TABLE syntax just fine, and I can read the rows/values of a CSV file(s), but does a method already exist to inspect the CSV file(s) and accurately determine the column type? Before I build my own, I wanted to check if this already existed. If it doesn't exist already, my idea would be to use Python, the csv module, and the psycopg2 module to build a Python script that would: read the CSV file(s); based upon a subset of records (10-100 rows?), iteratively inspect each column of each row to automatically determine the right column type of the data in the CSV. Therefore, if row 1, column A had a value of 12345 (int), but row 2 of column A had a value of ABC (varchar), the system would automatically determine it should be a format varchar(5) based upon the combination of the data it found in the first two passes. This process could go on as many times as the user felt necessary to determine the likely type and size of the column. Then: build the CREATE TABLE query as defined by the column inspection of the CSV; execute the create table query; load the data into the new table. Does a tool like this already exist within either SQL, PostgreSQL, Python, or is there another application I should be using to accomplish this (similar to pgAdmin3)?
11
7
1
0
false
21,917,162
0
7,124
2
0
0
13,239,004
I have been dealing with something similar, and ended up writing my own module to sniff datatypes by inspecting the source file. There is some wisdom among all the naysayers, but there can also be reasons this is worth doing, particularly when we don't have any control of the input data format (e.g. working with government open data), so here are some things I learned in the process: Even though it's very time consuming, it's worth running through the entire file rather than a small sample of rows. More time is wasted by having a column flagged as numeric that turns out to have text every few thousand rows and therefore fails to import. If in doubt, fail over to a text type, because it's easier to cast those to numeric or date/times later than to try and infer the data that was lost in a bad import. Check for leading zeroes in what appear otherwise to be integer columns, and import them as text if there are any - this is a common issue with ID / account numbers. Give yourself some way of manually overriding the automatically detected types for some columns, so that you can blend some semantic awareness with the benefits of automatically typing most of them. Date/time fields are a nightmare, and in my experience generally require manual processing. If you ever add data to this table later, don't attempt to repeat the type detection - get the types from the database to ensure consistency. If you can avoid having to do automatic type detection it's worth avoiding it, but that's not always practical so I hope these tips are of some help.
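A bare-bones sketch of such a sniffer, reflecting the points above (full scan, leading-zero check, fall back to text). Real code would also handle dates and manual overrides; the type names here are generic rather than PostgreSQL-specific, and the file path is a placeholder.

```python
import csv

ORDER = ['integer', 'numeric', 'text']      # only ever demote, never promote

def sniff_column_types(path):
    types, widths = {}, {}
    with open(path, 'rb') as fh:            # 'r' with newline='' on Python 3
        for row in csv.DictReader(fh):
            for col, raw in row.items():
                raw = (raw or '').strip()
                widths[col] = max(widths.get(col, 0), len(raw))
                if not raw:
                    continue                # blank cells tell us nothing
                if raw.isdigit() and raw.startswith('0') and len(raw) > 1:
                    guess = 'text'          # keep leading zeroes (IDs, account numbers)
                else:
                    try:
                        int(raw)
                        guess = 'integer'
                    except ValueError:
                        try:
                            float(raw)
                            guess = 'numeric'
                        except ValueError:
                            guess = 'text'
                types[col] = max(types.get(col, 'integer'), guess, key=ORDER.index)
    return dict((col, 'varchar(%d)' % widths[col] if t == 'text' else t)
                for col, t in types.items())
```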
1
0
0
Create SQL table with correct column types from CSV
3
python,sql,postgresql,pgadmin
0
2012-11-05T19:30:00.000
I am trying to build an online Python Shell. I execute commands by creating an instance of InteractiveInterpreter and use the command runcode. For that I need to store the interpreter state in the database so that variables, functions, definitions and other values in the global and local namespaces can be used across commands. Is there a way to store the current state of the object InteractiveInterpreter that could be retrieved later and passed as an argument local to InteractiveInterpreter constructor or If I can't do this, what alternatives do I have to achieve the mentioned functionality? Below is the pseudo code of what I am trying to achieve def fun(code, sessionID): session = Session() # get the latest state of the interpreter object corresponding to SessionID vars = session.getvars(sessionID) it = InteractiveInterpreter(vars) it.runcode(code) #save back the new state of the interpreter object session.setvars(it.getState(),sessionID) Here, session is an instance of table containing all the necessary information.
0
0
0
0
false
13,254,202
0
136
1
0
0
13,254,044
I believe the pickle package should work for you. You can use pickle.dump or pickle.dumps to save the state of most objects. (then pickle.load or pickle.loads to get it back)
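A sketch of that idea applied to the question's pseudo code: pickle the interpreter's locals dict between calls. Not everything is picklable (sockets, file handles, functions defined in the session), so a real service needs a fallback; the helper below simply drops unpicklable names.

```python
import pickle
from code import InteractiveInterpreter

def run(code_text, saved_state=None):
    namespace = pickle.loads(saved_state) if saved_state else {}
    interp = InteractiveInterpreter(locals=namespace)
    interp.runcode(code_text)               # runcode exec()s the string in namespace
    clean = {}
    for name, value in namespace.items():
        try:
            pickle.dumps(value)             # keep only what survives pickling
            clean[name] = value
        except Exception:
            pass
    return pickle.dumps(clean)              # store this blob in the session table

state = run("x = 41")
state = run("x = x + 1\nprint(x)", state)   # prints 42
```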
1
0
1
How to store the current state of InteractiveInterpreter Object in a database?
2
python,interactive-shell,python-interactive
0
2012-11-06T15:19:00.000
I have started a retrieval job for an archive stored in one of my vaults on Glacier AWS. It turns out that I do not need to resurrect and download that archive any more. Is there a way to stop and/or delete my Glacier job? I am using boto and I cannot seem to find a suitable function. Thanks
7
9
1.2
0
true
13,275,014
1
1,164
1
0
0
13,274,197
The AWS Glacier service does not provide a way to delete a job. You can: initiate a job, describe a job, get the output of a job, and list all of your jobs. The Glacier service manages the jobs associated with a vault.
1
0
0
AWS glacier delete job
1
python,amazon-web-services,boto,amazon-glacier
0
2012-11-07T16:42:00.000
I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run "import MySQLdb" in the Python prompt from outside of the virtualenv, it works; inside, it says "ImportError: No module named MySQLdb". I'm trying to learn Python and Linux web development. I know that it's easiest to use SQLite, but I want to learn how to develop larger-scale applications comparable to what I can do in .NET. I've read every blog post on Google and every post here on StackOverflow and they all suggest that I run "sudo pip install mysql-python", but it just says "Requirement already satisfied: mysql-python in /usr/lib/pymodules/python2.7". Any help would be appreciated! I'm stuck over here and don't want to throw in the towel and just go back to doing this on Microsoft technologies because I can't even get a basic dev environment up and running.
9
1
0.066568
0
false
43,866,023
0
6,817
2
0
0
13,288,013
source $ENV_PATH/bin/activate, then pip uninstall MySQL-python, then pip install MySQL-python. This worked for me.
1
0
1
Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?
3
python,virtualenv,mysql-python
0
2012-11-08T11:17:00.000
I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run "import MySQLdb" in the Python prompt from outside of the virtualenv, it works; inside, it says "ImportError: No module named MySQLdb". I'm trying to learn Python and Linux web development. I know that it's easiest to use SQLite, but I want to learn how to develop larger-scale applications comparable to what I can do in .NET. I've read every blog post on Google and every post here on StackOverflow and they all suggest that I run "sudo pip install mysql-python", but it just says "Requirement already satisfied: mysql-python in /usr/lib/pymodules/python2.7". Any help would be appreciated! I'm stuck over here and don't want to throw in the towel and just go back to doing this on Microsoft technologies because I can't even get a basic dev environment up and running.
9
14
1.2
0
true
13,288,095
0
6,817
2
0
0
13,288,013
If you have created the virtualenv with the --no-site-packages switch (the default), then system-wide installed additions such as MySQLdb are not included in the virtual environment packages. You need to install MySQLdb with the pip command installed with the virtualenv. Either activate the virtualenv with the bin/activate script, or use bin/pip from within the virtualenv to install the MySQLdb library locally as well. Alternatively, create a new virtualenv with system site packages included by using the --system-site-packages switch.
1
0
1
Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?
3
python,virtualenv,mysql-python
0
2012-11-08T11:17:00.000
We are developing an application for which we are going to use a NoSQL database. We have evaluated CouchDB and MongoDB. Our application is in Python, read speed is most critical for our application, and the application is reading a large number of documents. I want to ask: is reading a large number of documents faster in BSON than JSON? Which is better when we want to read, say, 100 documents, parse them and print the result: python+mongodb+pymongo or python+couchdb+couchdbkit (the database is going to be on EC2 and accessible over the internet)?
0
-1
-0.197375
0
false
13,641,512
0
468
1
0
0
13,298,480
bson Try LogoDb from 1985 logo programming language for trs-80
1
0
1
CouchDB vs mongodb
1
python-2.7,pymongo,couchdbkit
0
2012-11-08T21:54:00.000
I'm filtering the Twitter streaming API by tracking several keywords. If, for example, I only want to query and return from my database tweet information that was filtered by tracking the keyword 'BBC', how could this be done? Does the tweet information collected have a key:value pair relating to the keyword by which it was filtered? I'm using Python, tweepy and MongoDB. Would an option be to search for the keyword in the returned JSON 'text' field? That is, generate a query where it searches for the keyword 'BBC' in the text field of the returned JSON data?
1
0
0
0
false
22,388,827
0
374
1
0
0
13,352,796
Unfortunately, the Twitter API doesn't provide a way to do this. You can try searching through received tweets for the keywords you specified, but it might not match exactly.
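A sketch of that approach with the older (pre-4.0) tweepy streaming API and pymongo: tag each stored tweet with whichever tracked keywords appear in its text, then query on that field. The credentials, database and collection names are placeholders.

```python
import tweepy
from pymongo import MongoClient

TRACK = ['BBC', 'CNN']
collection = MongoClient()['twitter']['tweets']

class KeywordListener(tweepy.StreamListener):
    def on_status(self, status):
        doc = {'text': status.text, 'created_at': status.created_at}
        # Record which tracked keywords actually occur in the text.
        doc['matched'] = [k for k in TRACK if k.lower() in status.text.lower()]
        collection.insert_one(doc)
        return True

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')   # placeholders
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
stream = tweepy.Stream(auth=auth, listener=KeywordListener())
stream.filter(track=TRACK)

# Later: only the tweets that matched "BBC".
bbc_tweets = collection.find({'matched': 'BBC'})
```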
1
0
0
Querying twitter streaming api keywords from a database
1
python,mongodb,twitter,tweepy
0
2012-11-12T22:42:00.000
so I discovered Sets in Python a few days ago and am surprised that they never crossed my mind before even though they make a lot of things really simple. I give an example later. Some things are still unclear to me. The docs say that Sets can be created from iterables and that the operators always return new Sets but do they always copy all data from one set to another and from the iterable? I work with a lot of data and would love to have Sets and set operators that behave much like itertools. So Sets([iterable]) would be more like a wrapper and the operators union, intersection and so on would return "iSets" and would not copy any data. They all would evaluate once I iter my final Set. In the end I really much would like to have "iSet" operators. Purpose: I work with MongoDB using mongoengine. I have articles saved. Some are associated with a user, some are marked as read others were shown to the user and so on. Wrapping them in Sets that do not load all data would be a great way to combine, intersect etc. them. Obviously I could make special queries but not always since MongoDB does not support joins. So I end up doing joins in Python. I know I could use a relational database then, however, I don't need joins that often and the advantages of MongoDB outweigh them in my case. So what do you think? Is there already a third party module? Would a few lines combining itertools and Sets do? EDIT: I accepted the answer by Martijn Pieters because it is obviously right. I ended up loading only IDs into sets to work with them. Also, the sets in Python have a pretty good running time.
2
4
1.2
0
true
13,358,975
0
707
1
0
0
13,358,955
Sets are just like dict and list; on creation they copy the references from the seeding iterable. Iterators cannot be sets, because you cannot enforce the uniqueness requirement of a set. You cannot know if a future value yielded by an iterator has already been seen before. Moreover, in order for you to determine what the intersection is between two iterables, you have to load all data from at least one of these iterables to see if there are any matches. For each item in the second iterable, you need to test if that item has been seen in the first iterable. To do so efficiently, you need to have loaded all the items from the first iterable into a set. The alternative would be to loop through the first iterable from start to finish for each item from the second iterable, leading to exponential performance degradation.
1
0
1
Python: Combining itertools and sets to save memory
1
python,memory-management,set,itertools
0
2012-11-13T10:20:00.000
I'm scraping tweets and inserting them into a mongo database for analysis work in python. I want to check the size of my database so that I won't incur additional charges if I run this on amazon. How can I tell how big my current mongo database is on osx? And will a free tier cover me?
7
1
0.039979
0
false
13,369,827
0
17,118
2
0
0
13,369,795
Databases are, by default, stored in /data/db (some environments override this and use /var/lib/mongodb, however). You can see the total db size by looking at db.stats() (specifically fileSize) in the MongoDB shell.
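The same numbers are also reachable from Python through pymongo's dbstats command; a small sketch (the database name is a placeholder):

```python
from pymongo import MongoClient

client = MongoClient()                      # local mongod on the default port
stats = client['tweets_db'].command('dbstats')
print(stats['dataSize'], stats['storageSize'], stats.get('fileSize'))
```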
1
0
0
where is mongo db database stored on local hard drive?
5
python,macos,mongodb,amazon-ec2
0
2012-11-13T22:13:00.000
I'm scraping tweets and inserting them into a mongo database for analysis work in python. I want to check the size of my database so that I won't incur additional charges if I run this on amazon. How can I tell how big my current mongo database is on osx? And will a free tier cover me?
7
4
1.2
0
true
13,369,857
0
17,118
2
0
0
13,369,795
I believe on OSX the default location would be /data/db. But you can check your config file for the dbpath value to verify.
1
0
0
where is mongo db database stored on local hard drive?
5
python,macos,mongodb,amazon-ec2
0
2012-11-13T22:13:00.000
I am interested in learning more about node.js and utilizing it in a new project. The problem I am having is envisioning where I could enhance my web stack with it and what role it would play. All I have really done with it is followed a tutorial or two where you make something like a todo app in all JS. That is all fine and dandy, but where do I leverage this in a more complex web architecture? So here is how I plan on setting up my application. Web server for serving views: Python (flask/werkzeug), Jinja, nginx, html/css/js. API server: Python (flask/werkzeug), SQLAlchemy (ORM), nginx, supervisor + gunicorn. DB server: Postgres. So is there any part of this stack that could be replaced or enhanced by introducing node.js? I would assume it would be best used on the API server, but I'm not exactly sure how.
0
0
1.2
0
true
13,384,050
1
228
1
0
0
13,382,262
It would replace Python (flask/werkzeug) in both your view server and your API server.
1
0
0
Where does node.js fit in a stack or enhance it
1
python,node.js,web-applications
0
2012-11-14T15:51:00.000
I'm new to web development and I'm trying to get my Mac set up for doing Django tutorials and helping some developers with a project that uses Postgres. I will try to specify my questions as much as possible. However, it seems that there are lots of floating parts to this question and I'm not quite understanding some parts of the connection between an SQL shell, virtual environments, paths, databases, and terminals (which seem to be necessary to get running on this web development project). I will detail what I did and the error messages that appear. If you could help me with the error messages, or simply post links to tutorials that help me better understand how these floating parts work together, I would very much appreciate it. I installed Postgres and pgAdmin III and set it up on the default port. I created a test database. Now when I try to open it on the local server, I get an error message: 'ERROR: column "datconfig" does not exist LINE1:...b.dattablespace AS spcoid, spcname, datallowconn, dataconfig,...'. Here is what I did before I closed pgAdmin and then reopened it: Installation: the setup told me that an existing data directory was found at /Library/PostgreSQL/9.2/data set to use port 5433. I loaded an .sql file that I wanted to test (I saved it on my desktop and loaded it into the database from there). I'm not sure whether this is related to the problem or not, but I also have virtual environments in a folder ~/Sites/django_test (i.e. when I tell the bash terminal to "activate" this folder, it puts me in an (env)). I read in a forum that I need to do the Django tutorials by running "python manage.py runserver" at the bash terminal command line. When I do this, I get an error message saying "can't open file 'manage.py': [Errno 2] No such file or directory". Even when I run the command in the (env), I get the error message: /Library/Frameworks/Python.framework/Versions/3.2/Resources/Python.app/Contents/MacOS/Python: can't open file 'manage.py': [Errno 2] No such file or directory (which I presume is telling me that the path is still set on an incorrect version of Python (3.2), even though I want to use version 2.7 and trashed the 3.2 version from my system). I think that there are a few gaps in my understanding here: I don't understand the difference between typing commands into my bash terminal versus my SQL shell. Is running "python manage.py runserver" the same as running Python programs with an IDE like IDLE? How and where do I adjust my $PATH environment variable so that the correct Python occurs first on the path? I think that I installed the correct Python version into the virtual environment using pip install; why am I still receiving a "No such file or directory" error? And why does Python version 3.2 still appear in the path indicated by my error message if I trashed it? If you could help me with these questions, or simply list links with any tutorials that explain this, that would be much appreciated. And again, sorry for not being more specific, but I thought that it would be more helpful to list the problems that I have with these different pieces rather than just one, since it's their interrelatedness that seems to be causing the error messages. Thanks!
1
1
1.2
0
true
13,495,557
1
291
1
0
0
13,495,135
Er, not sure how we can help you with that. One is for bash, one is for SQL. No, that's for running the development webserver, as the tutorial explains. There's no need to do that, that's what the virtualenv is for. This has nothing to do with Python versions, you simply don't seem to be in the right directory. Note that, again as the tutorial explains, manage.py isn't created until you've run django-admin.py startproject myprojectname. Have you done that? You presumably created the virtualenv using 3.2. Delete it and recreate it with 2.7. You shouldn't be "reading in a forum" about how to do the Django tutorial. You should just be following the tutorial.
1
0
0
postgres installation error on Mac 10.6.8
1
python,django,postgresql
0
2012-11-21T14:12:00.000
Is there a library or open source utility available to search all the tables and columns of an Sqlite database? The only input would be the name of the sqlite DB file. I am trying to write a forensics tool and want to search sqlite files for a specific string.
13
5
0.244919
0
false
65,373,519
0
15,981
2
0
0
13,514,509
Just dump the db and search it. % sqlite3 file_name .dump | grep 'my_search_string' You could instead pipe through less, and then use / to search: % sqlite3 file_name .dump | less
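For a forensics script, a rough pure-Python take on the same idea is to walk sqlite_master for table names and then scan every column of every row; the file name and search string below are placeholders.

```python
import sqlite3

def search_sqlite(db_path, needle):
    conn = sqlite3.connect(db_path)
    conn.text_factory = str                     # tolerate odd encodings
    cur = conn.cursor()
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        columns = [c[1] for c in cur.execute('PRAGMA table_info("%s")' % table)]
        for row in cur.execute('SELECT * FROM "%s"' % table):
            for col, value in zip(columns, row):
                if value is not None and needle in str(value):
                    print('%s.%s: %r' % (table, col, value))
    conn.close()

search_sqlite('evidence.db', 'my_search_string')   # placeholder names
```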
1
0
0
Search Sqlite Database - All Tables and Columns
4
python,sqlite,search
0
2012-11-22T14:11:00.000
Is there a library or open source utility available to search all the tables and columns of an Sqlite database? The only input would be the name of the sqlite DB file. I am trying to write a forensics tool and want to search sqlite files for a specific string.
13
4
0.197375
0
false
59,407,127
0
15,981
2
0
0
13,514,509
@MrWorf's answer didn't work for my sqlite file (an .exb file from Evernote), but this similar method worked: open the file with DB Browser for SQLite (sqlitebrowser mynotes.exb), then File / Export to SQL file (this will create mynotes.exb.sql), then grep 'STRING I WANT' mynotes.exb.sql.
1
0
0
Search Sqlite Database - All Tables and Columns
4
python,sqlite,search
0
2012-11-22T14:11:00.000
I have created an app using web2py and have declared certain new tables in it using the syntax db.define_table(), but the tables created are not visible when I run the app in Google App Engine, even on my local server. The tables that web2py creates by itself, like auth_user and the others in auth, are available. What am I missing here? I have declared the new tables in db.py in my application. Thanks in advance
1
0
0
0
false
13,551,914
1
100
1
1
0
13,548,590
App Engine datastore doesn't really have tables. That said, if web2py is able to make use of the datastore (I'm not familiar with it), then Kinds (a bit like tables) will only show up in the admin-console (/_ah/admin locally) once an entity has been created (i.e. tables only show up once one row has been inserted, you'll never see empty tables).
1
0
0
New tables created in web2py not seen when running in Google app Engine
1
python,google-app-engine,web2py
0
2012-11-25T05:29:00.000
I have a shared hosting environment on Bluehost. I am running a custom installation of Python (+ Django) with a few installed modules. All had been working until yesterday, when a change was made on the server (I assume) which gave me this django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line 35, in import_module __import__(name) File "/****/****/.local/lib/python/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory Of course, Bluehost support is not too helpful. They advised that 1) I use the default python install, because that has MySQLdb installed already. Or that 2) I somehow import the MySQLdb package installed on the default python, from my python (don't know if this can even be done). I am concerned that if I use the default install I won't have permission to install my other packages. Does anybody have any ideas how to get back to a working state, with as little infrastructure change as possible?
2
2
1.2
0
true
13,573,647
1
2,446
2
0
0
13,573,359
I think you upgraded your OS installation, which in turn upgraded libmysqlclient and broke the native extension. What you can do is reinstall libmysqlclient16 (how to do it depends on your particular OS) and that should fix your issue. The other approach would be to uninstall the MySQLdb module and reinstall it, forcing Python to compile it against the newer library.
1
0
0
Python module issue
2
python,linux,mysql-python,bluehost
0
2012-11-26T21:20:00.000
I have a shared hosting environment on Bluehost. I am running a custom installation of Python (+ Django) with a few installed modules. All had been working until yesterday, when a change was made on the server (I assume) which gave me this django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line 35, in import_module __import__(name) File "/****/****/.local/lib/python/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory Of course, Bluehost support is not too helpful. They advised that 1) I use the default python install, because that has MySQLdb installed already. Or that 2) I somehow import the MySQLdb package installed on the default python, from my python (don't know if this can even be done). I am concerned that if I use the default install I won't have permission to install my other packages. Does anybody have any ideas how to get back to a working state, with as little infrastructure change as possible?
2
0
0
0
false
13,591,200
1
2,446
2
0
0
13,573,359
You were right. Bluehost upgraded MySQL. Here is what I did: 1) remove the "build" directory in the "MySQL-python-1.2.3" directory 2) remove the egg 3) build the module again "python setup.py build" 4) install the module again "python setup.py install --prefix=$HOME/.local" Moral of the story for me: remove the old build artifacts when reinstalling a module.
1
0
0
Python module issue
2
python,linux,mysql-python,bluehost
0
2012-11-26T21:20:00.000
I'm starting a Django project and need to shard multiple tables that are likely to all be of too many rows. I've looked through threads here and elsewhere, and followed the Django multi-db documentation, but am still not sure how that all stitches together. My models have relationships that would be broken by sharding, so it seems like the options are to either drop the foreign keys or forgo sharding the respective models. For argument's sake, consider the classic Author, Publisher and Book scenario, but throw in book copies and users that can own them. Say books and users had to be sharded. How would you approach that? A user may own a copy of a book that's not in the same database. In general, what are the best practices you have used for routing and the sharding itself? Did you use Django database routers, manually select a database inside commands based on your sharding logic, or override some parts of the ORM to achieve that? I'm using PostgreSQL on Ubuntu, if it matters. Many thanks.
3
1
0.099668
0
false
13,639,532
1
1,300
1
0
0
13,620,867
I agree with @DanielRoseman. Also, how many is too many rows? If you are careful with indexing, you can handle a lot of rows with no performance problems. Keep your indexed values small (ints). I've got tables in excess of 400 million rows that produce sub-second responses even when joining with other many-million-row tables. It might make more sense to break the user up into multiple tables so that the user object has a core of commonly used things and then the "profile" info lives elsewhere (the standard Django setup); a sketch of that split follows this record. Copies would be a small table referencing books, which has the bulk of the data. Considering how much RAM you can put into a DB server these days, sharding before you have to seems wrong.
1
0
0
Sharding a Django Project
2
python,django,postgresql,sharding
0
2012-11-29T07:32:00.000
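Following up on the answer above about keeping a lean user core plus a separate profile table, here is a minimal Django sketch of that layout. The model and field names (Profile, BookCopy, bio) are hypothetical and only illustrate the "small hot table referencing a bulky table" idea, not a definitive schema:

    from django.db import models
    from django.contrib.auth.models import User

    class Profile(models.Model):
        # rarely-queried "profile" data kept off the hot user table
        user = models.OneToOneField(User, on_delete=models.CASCADE)
        bio = models.TextField(blank=True)

    class Book(models.Model):
        # bulky descriptive data lives here
        title = models.CharField(max_length=200)

    class BookCopy(models.Model):
        # small, heavily-indexed table; joins stay narrow and fast
        book = models.ForeignKey(Book, on_delete=models.CASCADE)
        owner = models.ForeignKey(User, on_delete=models.CASCADE)

The point of the split is that the frequently joined tables (BookCopy and the user core) stay narrow and index-friendly, while the rarely needed columns live in Profile and Book.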
I am connecting my Python software to a remote MySQL server. I have had to add an access host on cPanel just for my computer, but the problem is that the access host, which is my IP, is dynamic. How can I connect to the remote server without having to change the access host every time? Thanks guys, networking is my weakness.
0
0
1.2
0
true
13,657,435
0
1,475
1
0
0
13,657,404
Your best option is probably to find a dynamic DNS provider. The idea is to have a client running on your machine which updates a DNS entry on a remote server. Then you can use the hostname provided instead of your IP address in cPanel.
1
0
0
Configuring Remote MYSQL with a Dynamic IP
2
python,networking,cpanel
0
2012-12-01T07:31:00.000
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit. We're working on a project website which will handle a lot of http requests and stream videos (mostly from a provider like YouTube or Vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to Linux. The reason that we want to use OpenBSD is that it's well known for its security. The reason we chose Python is that it's fast. The reason we want to use Nginx is that it's known to be able to handle more http requests when compared to Apache. The reason we want to use NoSQL is that MySQL is known to have problems in scalability when the database grows. We want the web pages to load as fast as possible (caching and CDNs will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD, Nginx, Python, NoSQL) instead of the traditional LAMP (Linux, Apache, MySQL, PHP). We're not a very big company, so we're using open source technologies. Any suggestion on how to use these pieces of software as a platform is appreciated, and hardware suggestions are also welcome. Any criticism is also welcomed.
1
4
0.379949
0
false
13,675,611
0
1,447
2
0
0
13,675,440
My advice: if you don't know how to use these technologies, don't do it. A few extra servers will cost you less than the time spent mastering technologies you don't know. If you want to try them out, do it - one by one, not everything at once. There is no magic solution for how to use them.
1
0
0
How to utilize OpenBSD, Nginx, Python and NoSQL
2
python,nginx,nosql,openbsd
1
2012-12-03T00:05:00.000
I'm familiar with LAMP systems and have been programming mostly in PHP for the past 4 years. I'm learning Python and playing around with Nginx a little bit. We're working on a project website which will handle a lot of http requests and stream videos (mostly from a provider like YouTube or Vimeo). My colleague has experience with OpenBSD and has insisted that we use it as an alternative to Linux. The reason that we want to use OpenBSD is that it's well known for its security. The reason we chose Python is that it's fast. The reason we want to use Nginx is that it's known to be able to handle more http requests when compared to Apache. The reason we want to use NoSQL is that MySQL is known to have problems in scalability when the database grows. We want the web pages to load as fast as possible (caching and CDNs will be used) using the minimum amount of hardware possible. That's why we want to use ONPN (OpenBSD, Nginx, Python, NoSQL) instead of the traditional LAMP (Linux, Apache, MySQL, PHP). We're not a very big company, so we're using open source technologies. Any suggestion on how to use these pieces of software as a platform is appreciated, and hardware suggestions are also welcome. Any criticism is also welcomed.
1
1
0.099668
0
false
13,676,002
0
1,447
2
0
0
13,675,440
I agree with wdev; the time it takes to learn this is not worth the money you will save. First of all, MySQL databases are not hard to scale. WordPress uses MySQL, and some of the world's largest websites run on it (google for a list). I can say the same of Linux and PHP. If you design your site using best practices (CSS sprites, caching, gzip, etc.) and a CDN, Apache versus Nginx will not make a considerable difference in load times. I strongly urge you to reconsider your decisions. They seem very ill-advised.
1
0
0
How to utilize OpenBSD, Nginx, Python and NoSQL
2
python,nginx,nosql,openbsd
1
2012-12-03T00:05:00.000
I have a list of times in h:m format in an Excel spreadsheet, and I'm trying to do some manipulation with DataNitro but it doesn't seem to like the way Excel formats times. For example, in Excel the time 8:32 is actually just the decimal number .355556 formatted to appear as 8:32. When I access that time with DataNitro it sees it as the decimal, not the string 8:32. If I change the format in Excel from Time to General or Number, it converts it to the decimal (which I don't want). The only thing I've found that works is manually going through each cell and placing ' in front of each one, then going through and changing the format type to General. Is there any way to convert these times in Excel into strings so I can extract the info with DataNitro (which is only viewing it as a decimal)?
1
3
1.2
0
true
13,725,706
0
488
1
0
0
13,725,567
If .355556 (represented as 8:32) is in A1 then =HOUR(A1)&":"&MINUTE(A1) and Copy/Paste Special Values should get you to a string.
1
0
1
Converting time with Python and DataNitro in Excel
2
python,excel,time,number-formatting,datanitro
0
2012-12-05T14:35:00.000
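The accepted Excel-formula answer above solves this on the spreadsheet side. If you would rather handle it on the Python/DataNitro side, here is a minimal sketch that turns the raw Excel time fraction into an "h:mm" string; this is my own illustration, not part of the DataNitro API:

    def excel_time_to_hm(fraction):
        """Convert an Excel time fraction (e.g. 0.355556) to an 'h:mm' string."""
        total_minutes = int(round(fraction * 24 * 60))
        hours, minutes = divmod(total_minutes, 60)
        return "%d:%02d" % (hours, minutes)

    print(excel_time_to_hm(0.355556))  # -> 8:32

For 0.355556 this prints 8:32; unlike the bare MINUTE() formula it also zero-pads the minutes (e.g. 8:05 rather than 8:5).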
I've implemented a breadth first search with a PyMongo social network. It's breadth first to reduce the number of connections. Now I get queries like coll.find({"_id":{"$in":["id1", "id2", ...]}} with a huge number of ids. PyMongo does not process some of these big queries due to their size. Is there a technical solution around it? Or do you suggest another approach to such kind of queries where I need to select all docs with one of a huge set of ids?
0
0
0
0
false
13,729,295
0
116
1
0
0
13,728,955
If this is an inescapable problem, you could split the array of ids across multiple queries and then merge the results client-side.
1
0
0
Large size query with PyMongo?
2
python,mongodb
0
2012-12-05T17:27:00.000
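A minimal sketch of the split-and-merge approach suggested in the answer above, assuming a reasonably recent PyMongo with MongoClient; the database/collection names and the chunk size are arbitrary placeholders:

    from pymongo import MongoClient

    def find_by_ids(coll, ids, chunk_size=10000):
        """Run several smaller $in queries and merge the results client-side."""
        docs = []
        for i in range(0, len(ids), chunk_size):
            docs.extend(coll.find({"_id": {"$in": ids[i:i + chunk_size]}}))
        return docs

    coll = MongoClient()["testdb"]["people"]   # hypothetical database/collection
    friends = find_by_ids(coll, ["id1", "id2", "id3"])

Duplicates are impossible here because the chunks are disjoint slices of the original id list, so a plain list-extend is enough to merge.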
So I am trying to create a realtime plot of data that is being recorded to a SQL server. The format is as follows: Database: testDB Table: sensors Each record contains 3 fields. The first column is an auto-incremented ID starting at 1. The second column is the time in epoch format. The third column is my sensor data. It is in the following format: 23432.32 112343.3 53454.322 34563.32 76653.44 000.000 333.2123 I am completely lost on how to complete this project. I have read many pages showing examples but I don't really understand them. They provide source code, but I am not sure where that code goes. I installed httpd on my server and that is where I stand. Does anyone know of a good how-to from beginning to end that I could follow? Or could someone post a good step-by-step for me to follow? Thanks for your help
0
0
0
1
false
13,774,224
0
428
1
0
0
13,772,857
Install an httpd server. Install PHP. Write a PHP script to fetch the data from the database and render it as a webpage. This is a fairly elaborate request with relatively little detail given; more information will allow us to give better answers.
1
0
0
Plotting data using Flot and MySQL
1
python,mysql,flot
0
2012-12-07T23:52:00.000
I need to obtain the path from a FileField, in order to check it against a given file system path, to know if the file I am inserting into the mongo database is already present. Is it possible? All I get is a GridFSProxy, but I am unable to understand how to handle it.
0
1
1.2
0
true
13,962,502
0
430
1
0
0
13,791,542
You can't, since it stores the data in the database. If you need to store the original path, then you can create an EmbeddedDocument which contains a FileField and a StringField with the path string; a sketch follows this record. But remember that the stored file and the file you might find on that path are not the same.
1
0
0
How to get filesystem path from mongoengine FileField
1
python,mongodb,path,mongoengine,filefield
0
2012-12-09T20:34:00.000
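A minimal mongoengine sketch of what the answer above describes, pairing the FileField with the original path so a later insert can check whether that path was already stored. The class and field names (Attachment, Record, source_path) are hypothetical:

    import mongoengine as me

    me.connect("testdb")               # hypothetical database name

    class Attachment(me.EmbeddedDocument):
        data = me.FileField()          # the GridFS-backed content
        source_path = me.StringField() # the original filesystem path

    class Record(me.Document):
        attachment = me.EmbeddedDocumentField(Attachment)

    # before inserting, check whether that path was already stored
    already_there = Record.objects(attachment__source_path="/tmp/report.pdf").first()

As the answer warns, source_path only records where the file came from at upload time; the file at that path may later change or disappear while the GridFS copy stays the same.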
I am using psycopg2 in Python, but my question is DBMS agnostic (as long as the DBMS supports transactions): I am writing a Python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my program was too slow. Hence, I altered my code to run a commit every 5000 records and the difference in speed was tremendous. My problem is that at some point an exception occurs when inserting records (some integrity check fails) and I wish to commit my changes up to that point, except of course for the last command that caused the exception to happen, and continue with the rest of my insert statements. I haven't found a way to achieve this; the only thing I've achieved was to capture the exception, roll back my transaction and carry on from that point, where I lose my pending insert statements. Moreover, I tried (deep)copying the cursor object and the connection object without any luck, either. Is there a way to achieve this functionality, either directly or indirectly, without having to roll back and recreate/re-run my statements? Thank you all in advance, George.
1
2
0.197375
0
false
13,849,917
0
1,119
2
0
0
13,838,231
If you are committing your transactions at every 5000-record interval, it seems like you could do a little bit of preprocessing of your input data and actually break it out into a list of 5000-record chunks, i.e. [[[row1_data],[row2_data]...[row4999_data]],[[row5000_data],[row5001_data],...],[[....[row1000000_data]]] Then run your inserts, and keep track of which chunk you are processing as well as which record you are currently inserting. When you get the error, you rerun the chunk, but skip the offending record.
1
0
0
How can I commit all pending queries until an exception occurs in a python connection object
2
python,postgresql,transactions,commit,psycopg2
0
2012-12-12T10:58:00.000
I am using psycopg2 in Python, but my question is DBMS agnostic (as long as the DBMS supports transactions): I am writing a Python program that inserts records into a database table. The number of records to be inserted is more than a million. When I wrote my code so that it ran a commit on each insert statement, my program was too slow. Hence, I altered my code to run a commit every 5000 records and the difference in speed was tremendous. My problem is that at some point an exception occurs when inserting records (some integrity check fails) and I wish to commit my changes up to that point, except of course for the last command that caused the exception to happen, and continue with the rest of my insert statements. I haven't found a way to achieve this; the only thing I've achieved was to capture the exception, roll back my transaction and carry on from that point, where I lose my pending insert statements. Moreover, I tried (deep)copying the cursor object and the connection object without any luck, either. Is there a way to achieve this functionality, either directly or indirectly, without having to roll back and recreate/re-run my statements? Thank you all in advance, George.
1
3
1.2
0
true
13,838,751
0
1,119
2
0
0
13,838,231
I doubt you'll find a fast cross-database way to do this. You just have to optimize the balance between the speed gains from batch size and the speed costs of repeating work when an entry causes a batch to fail. Some DBs can continue with a transaction after an error, but PostgreSQL can't. However, it does allow you to create subtransactions with the SAVEPOINT command. These are far from free, but they're lower cost than a full transaction. So what you can do is every (say) 100 rows, issue a SAVEPOINT and then release the prior savepoint. If you hit an error, ROLLBACK TO SAVEPOINT, commit, then pick up where you left off.
1
0
0
How can I commit all pending queries until an exception occurs in a python connection object
2
python,postgresql,transactions,commit,psycopg2
0
2012-12-12T10:58:00.000
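Following the SAVEPOINT suggestion in the accepted answer above, here is a simplified psycopg2 sketch. It wraps each insert in its own savepoint rather than batching savepoints every 100 rows as the answer suggests, which is slower but avoids having to replay a partial batch; the connection string, table and column names are made up for illustration:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")      # hypothetical connection string
    cur = conn.cursor()

    rows = [(1, "a"), (2, "b")]                 # stand-in for your real data source

    inserted = 0
    for row in rows:
        cur.execute("SAVEPOINT row_sp")
        try:
            cur.execute("INSERT INTO target (col1, col2) VALUES (%s, %s)", row)
            cur.execute("RELEASE SAVEPOINT row_sp")
            inserted += 1
        except psycopg2.IntegrityError:
            # undo only the failed insert; everything before it is kept
            cur.execute("ROLLBACK TO SAVEPOINT row_sp")
        if inserted and inserted % 5000 == 0:
            conn.commit()                       # keep the batched-commit speedup
    conn.commit()

Savepoints are not free, so if the failure rate is low the answer's batch-of-100 variant (with replay on failure) will be noticeably faster.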
I'm building a file hosting app that will store all client files within a folder on an S3 bucket. I then want to track the amount of usage on S3 recursively per top folder to charge back the cost of storage and bandwidth to each corresponding client. Front-end is django but the solution can be python for obvious reasons. Is it better to create a bucket per client programmatically? If I do go with the approach of creating a bucket per client, is it then possible to get the cost of cloudfront exposure of the bucket if enabled?
0
0
0
0
false
13,892,252
1
818
1
0
1
13,873,119
No, it's not possible to create a bucket for each user, as Amazon allows only 100 buckets per account. So unless you are sure you will never have more than 100 users, it is a very bad idea. The ideal solution is to track each user's storage in your Django app's own database. I guess you would be using the boto S3 library for storing the files; it returns the byte size after each upload, and you can store that. There is also another way out: you could create many folders inside a bucket, with each folder specific to a user (a sketch of summing a folder's size this way follows this record). But still, the best way is to track the storage usage in your app.
1
0
0
How can I track s3 bucket folder usage with python?
2
python,django,amazon-s3
0
2012-12-14T05:20:00.000
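As mentioned in the answer above, a per-client prefix ("folder") inside one bucket can be measured by listing the keys under that prefix and summing their sizes. A minimal boto (v2) sketch, with a hypothetical bucket name and prefix:

    import boto

    def folder_size_bytes(bucket_name, client_prefix):
        """Sum the size of every key under a client's prefix ('folder')."""
        conn = boto.connect_s3()                 # credentials come from the environment/boto config
        bucket = conn.get_bucket(bucket_name)
        return sum(key.size for key in bucket.list(prefix=client_prefix))

    print(folder_size_bytes("my-hosting-bucket", "client-42/"))

Listing large prefixes costs LIST requests and time, so for billing it is still cheaper to accumulate sizes in your own database as files are uploaded, as the answer recommends.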
How would I extend the sqlite3 module so if I import Database I can do Database.connect() as an alias to sqlite3.connect(), but define extra non standard methods?
1
4
0.379949
0
false
13,881,814
0
206
1
0
0
13,881,533
You can create a class (or module) that wraps sqlite3: expose its .connect() method and whatever else you need to the outside, and then add your own methods on top; a sketch follows this record. Another option would be subclassing, if that works.
1
0
0
How do I extend a python module to include extra functionality? (sqlite3)
2
python,sqlite
0
2012-12-14T15:24:00.000
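A minimal sketch of the wrapping approach from the answer above. Saved as, say, database.py (the module name and the extra tables() helper are my own invention), it lets you write Database.connect() and still reach every standard sqlite3 connection method:

    # database.py -- thin wrapper module around sqlite3 (hypothetical name)
    import sqlite3

    class Connection(object):
        def __init__(self, *args, **kwargs):
            self._conn = sqlite3.connect(*args, **kwargs)

        def __getattr__(self, name):
            # fall through to the wrapped sqlite3 connection for standard methods
            return getattr(self._conn, name)

        def tables(self):
            # extra, non-standard convenience method
            cur = self._conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table'")
            return [row[0] for row in cur.fetchall()]

    def connect(*args, **kwargs):
        return Connection(*args, **kwargs)

Usage would then look like: import database as Database; db = Database.connect('test.db'); print(db.tables()).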
I'm trying to write a loader to sqlite that will load simple rows into the DB as fast as possible. The input data looks like rows retrieved from a Postgres DB. The approximate amount of rows that will go into sqlite: from 20 million to 100 million. I cannot use any other DB except sqlite due to project restrictions. My question is: what is the proper logic to write such a loader? At first try I wrote a set of encapsulated generators that take one row from Postgres, slightly amend it and put it into sqlite. I ended up with the fact that for each row I create a separate sqlite connection and cursor, and that looks awful. At second try, I moved the sqlite connection and cursor out of the generator to the body of the script, and it became clear that I do not commit data to sqlite until I fetch and process all 20 million records. This could possibly crash all my hardware. At third try I started to consider keeping the sqlite connection away from the loops, but creating/closing the cursor each time I process and push one row to sqlite. This is better but I think it also has some overhead. I also considered playing with transactions: one connection, one cursor, one transaction and commit called in the generator each time a row is being pushed to sqlite. Is this the right way to go? Is there some widely-used pattern to write such a component in Python? Because I feel as if I am reinventing the wheel.
0
1
0.066568
0
false
13,919,496
0
293
2
0
0
13,919,448
SQLite can handle huge transactions with ease, so why not commit at the end? Have you tried this at all? If you do feel one transaction is a problem, why not commit every n insertions? Process rows one by one, insert as needed, but every n executed insertions add a connection.commit() to spread the load; a sketch follows this record.
1
0
0
How to write proper big data loader to sqlite
3
python,sqlite,python-2.7
0
2012-12-17T17:56:00.000
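A minimal sketch of the single-connection, periodic-commit loader suggested above, batching rows with executemany. The table/column names and the batch size are arbitrary placeholders:

    import sqlite3

    def load(rows, batch_size=10000):
        """Insert rows (an iterable of tuples, e.g. a Postgres generator) in batches."""
        conn = sqlite3.connect("target.db")   # one connection, one cursor for the whole load
        cur = conn.cursor()
        batch = []
        for row in rows:
            batch.append(row)
            if len(batch) >= batch_size:
                cur.executemany("INSERT INTO items (a, b) VALUES (?, ?)", batch)
                conn.commit()
                batch = []
        if batch:
            cur.executemany("INSERT INTO items (a, b) VALUES (?, ?)", batch)
        conn.commit()
        conn.close()

Keeping both the connection and the cursor outside the per-row loop, and committing only every batch, is the main thing that makes this kind of load fast.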
I'm trying to write a loader to sqlite that will load simple rows into the DB as fast as possible. The input data looks like rows retrieved from a Postgres DB. The approximate amount of rows that will go into sqlite: from 20 million to 100 million. I cannot use any other DB except sqlite due to project restrictions. My question is: what is the proper logic to write such a loader? At first try I wrote a set of encapsulated generators that take one row from Postgres, slightly amend it and put it into sqlite. I ended up with the fact that for each row I create a separate sqlite connection and cursor, and that looks awful. At second try, I moved the sqlite connection and cursor out of the generator to the body of the script, and it became clear that I do not commit data to sqlite until I fetch and process all 20 million records. This could possibly crash all my hardware. At third try I started to consider keeping the sqlite connection away from the loops, but creating/closing the cursor each time I process and push one row to sqlite. This is better but I think it also has some overhead. I also considered playing with transactions: one connection, one cursor, one transaction and commit called in the generator each time a row is being pushed to sqlite. Is this the right way to go? Is there some widely-used pattern to write such a component in Python? Because I feel as if I am reinventing the wheel.
0
0
1.2
0
true
13,976,529
0
293
2
0
0
13,919,448
Finally I managed to resolve my problem. The main issue was the excessive number of insertions into sqlite. After I started to load all data from Postgres into memory and aggregate it properly to reduce the number of rows, I was able to decrease the processing time from 60 hrs to 16 hrs.
1
0
0
How to write proper big data loader to sqlite
3
python,sqlite,python-2.7
0
2012-12-17T17:56:00.000
I'm attempting to install MySQL-python on a machine running CentOS 5.5 and python 2.7. This machine isn't running a mysql server, the mysql instance this box will be using is hosted on a separate server. I do have a working mysql client. On attempting sudo pip install MySQL-python, I get an error of EnvironmentError: mysql_config not found, which as far as I can tell is a command that just references /etc/my.cnf, which also isn't present. Before I go on some wild goose chase creating spurious my.cnf files, is there an easy way to get MySQL-python installed?
13
21
1
0
false
13,932,070
0
27,898
1
1
0
13,922,955
So it transpires that mysql_config is part of mysql-devel. mysql-devel is for compiling the mysql client, not the server. Installing mysql-devel allows the installation of MySQL-python.
1
0
0
Installing MySQL-python without mysql-server on CentOS
3
centos,mysql-python
0
2012-12-17T22:01:00.000
trying to figure out whether this is a bug or by design. when no query_string is specified for a query, the SearchResults object is NOT sorted by the requested column. for example, here is some logging to show the problem: Results are returned unsorted on return index.search(query): query_string = '' sort_options string: search.SortOptions(expressions=[search.SortExpression(expression=u'firstname', direction='ASCENDING', default_value=u'')], limit=36) Results are returned sorted on return index.search(query): query_string = 'test' sort_options string: search.SortOptions(expressions=[search.SortExpression(expression=u'firstname', direction='ASCENDING', default_value=u'')], limit=36) This is how I'm constructing my query for both cases (options has limit, offset and sort_options parameters): query = search.Query(query_string=query_string, options=options)
8
-2
-0.197375
0
false
13,954,922
1
103
1
0
0
13,953,039
Could be a bug in the way you build your query, since it's not shown. Could be that you don't have an index for the case that isn't working.
1
0
0
sort_options only applied when query_string is not empty?
2
python,google-app-engine,gae-search
0
2012-12-19T13:02:00.000
I'm getting this error when trying to run python/django after installing psycopg2: Error: dlopen(/Users/macbook/Envs/medint/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _PQbackendPID Referenced from: /Users/macbook/Envs/medint/lib/python2.7/site-packages/psycopg2/_psycopg.so Expected in: dynamic lookup Anyone?
1
6
1
0
false
59,063,813
0
3,182
1
0
0
14,001,116
On macOS Mojave, I solved it by running the steps below: pip uninstall psycopg2 pip install psycopg2-binary
1
0
0
Psycopg2 Symbol not found: _PQbackendPID Expected in: dynamic lookup
2
python,django,postgresql,heroku,psycopg2
0
2012-12-22T08:02:00.000
I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that). Note: the processing is sort of your basic ETL type of stuff - nothing terribly fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts on top of it. But, again, from my initial testing, this would be way too slow. Note: A brand new PostgreSQL 9.1 database will be its final destination. So, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But, I have never done something like this before so I've been looking around for ideas, etc. Again, I'm a python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but have no experience with Fabric. I know from reading/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire it done, and the timeline doesn't allow for a learning curve or hiring someone. I should also note that it's kind of a one-time deal. So, I don't need to build a really elegant solution. I just need for it to work and be able to get through all of the data by the end of the year. Also, I know this is not a simple stackoverflow-kind of question (something like "how can I reverse a list in python"). But, what I'm hoping for is someone to read this and say, "I do something similar and use XYZ... it's great!" I guess what I'm asking is does anybody know of anything out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java - and have a tight timeline that prevents me from learning a new technology like Hadoop or learning a new language). Thanks for reading. I look forward to any suggestions.
5
2
1.2
0
true
14,012,685
0
1,819
4
0
0
14,006,363
I often use a combination of SQS/S3/EC2 for this type of batch work. Queue up messages in SQS for all of the work that needs to be performed (chunked into some reasonably small chunks). Spin up N EC2 instances that are configured to start reading messages from SQS, performing the work and putting results into S3, and then, and only then, delete the message from SQS. You can scale this to crazy levels and it has always worked really well for me. In your case, I don't know if you would store results in S3 or go right to PostgreSQL.
1
0
0
Processing a large amount of data in parallel
5
python,fabric,boto,data-processing
0
2012-12-22T20:30:00.000
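The SQS/S3/EC2 pattern in the accepted answer above boils down to a small worker loop on each EC2 instance. A hedged boto (v2) sketch of such a worker; the queue name, region and process() step are placeholders for your own per-chunk ETL handler:

    import time
    import boto.sqs

    def process(body):
        """Placeholder for the per-chunk ETL step (hypothetical)."""
        pass

    def work_loop(queue_name, region="us-east-1"):
        conn = boto.sqs.connect_to_region(region)   # credentials from env/boto config
        queue = conn.get_queue(queue_name)
        while True:
            messages = queue.get_messages(num_messages=1)
            if not messages:
                time.sleep(5)
                continue
            msg = messages[0]
            process(msg.get_body())     # e.g. "process s3://bucket/file_0042.csv"
            queue.delete_message(msg)   # delete only after the work succeeded

    work_loop("etl-work")

Because a message is deleted only after the work succeeds, a crashed worker simply lets the message reappear for another instance to pick up, which is what makes this pattern easy to scale out.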
I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that). Note: the processing is sort of your basic ETL type of stuff - nothing terribly fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts on top of it. But, again, from my initial testing, this would be way too slow. Note: A brand new PostgreSQL 9.1 database will be its final destination. So, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But, I have never done something like this before so I've been looking around for ideas, etc. Again, I'm a python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but have no experience with Fabric. I know from reading/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire it done, and the timeline doesn't allow for a learning curve or hiring someone. I should also note that it's kind of a one-time deal. So, I don't need to build a really elegant solution. I just need for it to work and be able to get through all of the data by the end of the year. Also, I know this is not a simple stackoverflow-kind of question (something like "how can I reverse a list in python"). But, what I'm hoping for is someone to read this and say, "I do something similar and use XYZ... it's great!" I guess what I'm asking is does anybody know of anything out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java - and have a tight timeline that prevents me from learning a new technology like Hadoop or learning a new language). Thanks for reading. I look forward to any suggestions.
5
1
0.039979
0
false
14,009,860
0
1,819
4
0
0
14,006,363
You might benefit from Hadoop in the form of Amazon Elastic MapReduce. Without getting too deep, it can be seen as a way to apply some logic to massive data volumes in parallel (the Map stage). There is also a Hadoop technology called Hadoop Streaming, which enables the use of scripts/executables in any language (like Python). Another Hadoop technology you may find useful is Sqoop, which moves data between HDFS and an RDBMS.
1
0
0
Processing a large amount of data in parallel
5
python,fabric,boto,data-processing
0
2012-12-22T20:30:00.000
I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that). Note: the processing is sort of your basic ETL type of stuff - nothing terribly fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts on top of it. But, again, from my initial testing, this would be way too slow. Note: A brand new PostgreSQL 9.1 database will be its final destination. So, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But, I have never done something like this before so I've been looking around for ideas, etc. Again, I'm a python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but have no experience with Fabric. I know from reading/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire it done, and the timeline doesn't allow for a learning curve or hiring someone. I should also note that it's kind of a one-time deal. So, I don't need to build a really elegant solution. I just need for it to work and be able to get through all of the data by the end of the year. Also, I know this is not a simple stackoverflow-kind of question (something like "how can I reverse a list in python"). But, what I'm hoping for is someone to read this and say, "I do something similar and use XYZ... it's great!" I guess what I'm asking is does anybody know of anything out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java - and have a tight timeline that prevents me from learning a new technology like Hadoop or learning a new language). Thanks for reading. I look forward to any suggestions.
5
3
0.119427
0
false
14,006,535
0
1,819
4
0
0
14,006,363
Did you do some performance measurements? Where are the bottlenecks? Is it CPU bound, IO bound, or DB bound? When it is CPU bound, you can try a Python JIT like PyPy. When it is IO bound, you need more HDs (and put some striping md on them). When it is DB bound, you can try to drop all the indexes and keys first. Last week I imported the OpenStreetMap DB into a postgres instance on my server. The input data were about 450G. The preprocessing (which was done in Java here) just created the raw data files, which could be imported with the postgres 'copy' command. After importing, the keys and indices were generated. Importing all the raw data took about one day, and then it took several days to build the keys and indices.
1
0
0
Processing a large amount of data in parallel
5
python,fabric,boto,data-processing
0
2012-12-22T20:30:00.000
I'm a python developer with pretty good RDBMS experience. I need to process a fairly large amount of data (approx 500GB). The data is sitting in approximately 1200 csv files in s3 buckets. I have written a script in Python and can run it on a server. However, it is way too slow. Based on the current speed and the amount of data it will take approximately 50 days to get through all of the files (and of course, the deadline is WELL before that). Note: the processing is sort of your basic ETL type of stuff - nothing terribly fancy. I could easily just pump it into a temp schema in PostgreSQL, and then run scripts on top of it. But, again, from my initial testing, this would be way too slow. Note: A brand new PostgreSQL 9.1 database will be its final destination. So, I was thinking about trying to spin up a bunch of EC2 instances to try and run them in batches (in parallel). But, I have never done something like this before so I've been looking around for ideas, etc. Again, I'm a python developer, so it seems like Fabric + boto might be promising. I have used boto from time to time, but have no experience with Fabric. I know from reading/research this is probably a great job for Hadoop, but I don't know it and can't afford to hire it done, and the timeline doesn't allow for a learning curve or hiring someone. I should also note that it's kind of a one-time deal. So, I don't need to build a really elegant solution. I just need for it to work and be able to get through all of the data by the end of the year. Also, I know this is not a simple stackoverflow-kind of question (something like "how can I reverse a list in python"). But, what I'm hoping for is someone to read this and say, "I do something similar and use XYZ... it's great!" I guess what I'm asking is does anybody know of anything out there that I could use to accomplish this task (given that I'm a Python developer and I don't know Hadoop or Java - and have a tight timeline that prevents me from learning a new technology like Hadoop or learning a new language). Thanks for reading. I look forward to any suggestions.
5
2
0.07983
0
false
14,006,466
0
1,819
4
0
0
14,006,363
I did something like this some time ago, and my setup was one multicore instance (x-large or more) that converts raw source files (xml/csv) into an intermediate format. You can run (num-of-cores) copies of the converter script on it in parallel. Since my target was mongo, I used json as the intermediate format; in your case it will be sql. This instance has N volumes attached to it. Once a volume becomes full, it gets detached and attached to the second instance (via boto). The second instance runs a DBMS server and a script which imports the prepared (sql) data into the db. I don't know anything about postgres, but I guess it does have a tool like mysql or mongoimport. If yes, use that to make bulk inserts instead of making queries via a python script.
1
0
0
Processing a large amount of data in parallel
5
python,fabric,boto,data-processing
0
2012-12-22T20:30:00.000
So, a friend and I are currently writing a panel (in python/django) for managing gameservers. Each client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'. The passwords would be generated randomly and presented to the user in the panel; however, we obviously don't want them to be stored in plaintext or reversible encryption, so we are unsure what to do if a client forgets their password. Resetting the password is something we would try to avoid, as some clients may reset the password while the gameserver is still trying to use it, which could cause corruption and crashes. What would be a secure (but without sacrificing ease of use for the clients) way to go about this?
2
1
0.099668
0
false
14,008,320
1
408
2
0
0
14,008,232
Your question embodies a contradiction in terms. Either you don't want reversibility or you do. You will have to choose. The usual technique is to hash the passwords and to provide a way for the user to reset his own password on sufficient alternative proof of identity. You should never display a password to anybody, for legal non-repudiability reasons. If you don't know what that means, ask a lawyer.
1
0
0
Storing MySQL Passwords
2
python,mysql,django,security,encryption
0
2012-12-23T02:46:00.000
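The answer above recommends hashing plus a reset flow rather than ever showing a password. For the hashing half, a minimal sketch with the standard library's PBKDF2 helper (available in Python 2.7.8+/3.4+); the iteration count is an arbitrary placeholder:

    import os
    import hashlib

    def hash_password(password, salt=None, iterations=100000):
        """Return (salt, digest); store both, never the plaintext password."""
        if salt is None:
            salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
        return salt, digest

    def check_password(password, salt, digest, iterations=100000):
        return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations) == digest

With this scheme a forgotten password can only be reset, never recovered, which is exactly the tradeoff the answer describes.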
So, a friend and I are currently writing a panel (in python/django) for managing gameservers. Each client also gets a MySQL server with their game server. What we are stuck on at the moment is how clients will find out their MySQL password and how it will be 'stored'. The passwords would be generated randomly and presented to the user in the panel; however, we obviously don't want them to be stored in plaintext or reversible encryption, so we are unsure what to do if a client forgets their password. Resetting the password is something we would try to avoid, as some clients may reset the password while the gameserver is still trying to use it, which could cause corruption and crashes. What would be a secure (but without sacrificing ease of use for the clients) way to go about this?
2
4
1.2
0
true
14,008,264
1
408
2
0
0
14,008,232
Though this is not the answer you were looking for, you only have three possibilities: (1) store the passwords in plaintext (ugh!); (2) store them with a reversible encryption, e.g. RSA (http://stackoverflow.com/questions/4484246/encrypt-and-decrypt-text-with-rsa-in-php); (3) do not store them; clients can only reset the password, not view it. The second choice is a secure way, as RSA is also used for TLS encryption within the HTTPS protocol used by your bank of choice ;)
1
0
0
Storing MySQL Passwords
2
python,mysql,django,security,encryption
0
2012-12-23T02:46:00.000
I am writing myself a blog in Python, and am about to put it up on GitHub. One of the files in this project will be a script that creates the required tables in the DB at the very beginning. Since I am going to put this file in a public repository, I expose the whole DB structure. Is it dangerous if I do so? If yes, I am thinking of an alternative: put the column names in a separate config file and not upload the column names of my blog. What are other ways of avoiding exposing schemas?
2
3
0.197375
0
false
14,039,904
0
599
3
0
0
14,039,877
It's not dangerous if you secure access to the database; you are exposing only your know-how. Once somebody gains access to the database, it's easy to list the database structure anyway.
1
0
0
Is it dangerous if I expose my database schema in an open source project?
3
python,database,open-source,schema,database-schema
0
2012-12-26T11:20:00.000
I am writing myself a blog in Python, and am about to put it up on GitHub. One of the files in this project will be a script that creates the required tables in the DB at the very beginning. Since I am going to put this file in a public repository, I expose the whole DB structure. Is it dangerous if I do so? If yes, I am thinking of an alternative: put the column names in a separate config file and not upload the column names of my blog. What are other ways of avoiding exposing schemas?
2
0
0
0
false
14,039,945
0
599
3
0
0
14,039,877
There is a difference between sharing a database and sharing its schema. You can comment out the values of the database machine/username/password in your code and publish the code on GitHub. As a proof of concept, you can host your application in the cloud (without disclosing its database credentials) and add its link to your GitHub readme file.
1
0
0
Is it dangerous if I expose my database schema in an open source project?
3
python,database,open-source,schema,database-schema
0
2012-12-26T11:20:00.000
I am writing myself a blog in Python, and am about to put it up on GitHub. One of the files in this project will be a script that creates the required tables in the DB at the very beginning. Since I am going to put this file in a public repository, I expose the whole DB structure. Is it dangerous if I do so? If yes, I am thinking of an alternative: put the column names in a separate config file and not upload the column names of my blog. What are other ways of avoiding exposing schemas?
2
0
0
0
false
21,087,156
0
599
3
0
0
14,039,877
I think it is dangerous: if a SQL injection vulnerability exists in your website, the schema will help the attacker retrieve all the important data more easily.
1
0
0
Is it dangerous if I expose my database schema in an open source project?
3
python,database,open-source,schema,database-schema
0
2012-12-26T11:20:00.000
Need a way to improve performance on my website's SQL-based activity feed. We are using Django on Heroku. Right now we are using actstream, which is a Django app that implements an activity feed using Generic Foreign Keys in the Django ORM. Basically, every action has generic foreign keys to its actor and to any objects that it might be acting on, like this: Action: (Clay - actor) wrote a (comment - action object) on (Andrew's review of Starbucks - target) As we've scaled, it's become way too slow, which is understandable because it relies on big, expensive SQL joins. I see at least two options: Put a Redis layer on top of the SQL database and get activity feeds from there. Try to circumvent the Django ORM and do all the queries in raw SQL, which I understand can improve performance. If anyone has thoughts on either of these two, or other ideas, I'd love to hear them.
4
1
0.066568
0
false
14,074,169
1
499
2
0
0
14,073,030
You said Redis? Everything is better with Redis. Caching is one of the best ideas in software development; no matter whether you use materialized views, you should also consider trying to cache those. Believe me, your users will notice the difference.
1
0
0
Good way to make a SQL based activity feed faster
3
python,sql,django,redis,feed
0
2012-12-28T17:04:00.000
Need a way to improve performance on my website's SQL-based activity feed. We are using Django on Heroku. Right now we are using actstream, which is a Django app that implements an activity feed using Generic Foreign Keys in the Django ORM. Basically, every action has generic foreign keys to its actor and to any objects that it might be acting on, like this: Action: (Clay - actor) wrote a (comment - action object) on (Andrew's review of Starbucks - target) As we've scaled, it's become way too slow, which is understandable because it relies on big, expensive SQL joins. I see at least two options: Put a Redis layer on top of the SQL database and get activity feeds from there. Try to circumvent the Django ORM and do all the queries in raw SQL, which I understand can improve performance. If anyone has thoughts on either of these two, or other ideas, I'd love to hear them.
4
1
1.2
0
true
14,201,647
1
499
2
0
0
14,073,030
Went with an approach that sort of combined the two suggestions. We created a master list of every action in the database, which included all the information we needed about the actions, and stuck it in Redis. Given an action ID, we can now do a Redis look up on it and get a dictionary object that is ready to be returned to the front end. We also created action id lists that correspond to all the different types of activity streams that are available to a user. So given a user id, we have his friends' activity, his own activity, favorite places activity, etc, available for look up. (These I guess correspond somewhat to materialized views, although they are in Redis, not in PSQL.) So we get a user's feed as a list of action ids. Then we get the details of those actions by look ups on the ids in the master action list. Then we return the feed to the front end. Thanks for the suggestions, guys.
1
0
0
Good way to make a SQL based activity feed faster
3
python,sql,django,redis,feed
0
2012-12-28T17:04:00.000
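The accepted answer above describes a Redis layout: one master hash of actions keyed by id, plus per-feed lists of action ids. A minimal redis-py sketch of that shape; the key names and the JSON encoding are my own choices, not taken from the answer:

    import json
    import redis

    r = redis.StrictRedis(decode_responses=True)

    def record_action(action_id, action_dict, feed_keys):
        # master list of every action, keyed by id
        r.hset("actions", action_id, json.dumps(action_dict))
        # per-feed lists of action ids (friends feed, own feed, favourites, ...)
        for key in feed_keys:
            r.lpush(key, action_id)

    def get_feed(feed_key, limit=20):
        ids = r.lrange(feed_key, 0, limit - 1)
        if not ids:
            return []
        return [json.loads(raw) for raw in r.hmget("actions", ids) if raw]

    record_action("42", {"actor": "Clay", "verb": "wrote a comment"}, ["feed:user:7"])
    print(get_feed("feed:user:7"))

Fetching a feed is then two cheap Redis calls (LRANGE plus HMGET) instead of a multi-way SQL join.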
I would like to know where the value for a one2many table is initially stored in OpenERP 6.1. That is, if we create a record for a one2many table, this record will actually be saved to the database table only after saving the record of the main table associated with it, even though we can create many records (rows) for the one2many table. Where are these rows stored? Are they stored in some OpenERP memory variable? If so, which is that variable or function with which we can access them? Please help me out on this. Thanks in Advance!!!
2
2
0.132549
0
false
14,119,351
1
1,493
2
0
0
14,119,208
When saving a new record in OpenERP, a dictionary is generated with all the fields that have data as keys and their data as values. If the field is a one2many and has many lines, then a list of dictionaries will be the value for the one2many field. You can modify it by overriding the create and write functions in OpenERP.
1
0
0
Where is the value stored for a one2many table initially in OpenERP6.1
3
python,openerp
0
2013-01-02T08:57:00.000
I would like to know where the value for a one2many table is initially stored in OpenERP 6.1. That is, if we create a record for a one2many table, this record will actually be saved to the database table only after saving the record of the main table associated with it, even though we can create many records (rows) for the one2many table. Where are these rows stored? Are they stored in some OpenERP memory variable? If so, which is that variable or function with which we can access them? Please help me out on this. Thanks in Advance!!!
2
0
0
0
false
14,120,545
1
1,493
2
0
0
14,119,208
A One2many field is a child-parent relation in OpenERP. One2many is just a logical field; there is no effect in the database for it by itself. If you are creating a Sale Order, then the Sale Order Line is a One2many in the Sale Order model. But if you do not put a Many2one in the Sale Order Line, then the One2many in the Sale Order will not work. The Many2one field puts a foreign key to the related model in the current table.
1
0
0
Where is the value stored for a one2many table initially in OpenERP6.1
3
python,openerp
0
2013-01-02T08:57:00.000
I want to compare the value of a given column at each row against another value, and if the values are equal, I want to copy the whole row to another spreadsheet. How can I do this using Python? THANKS!
3
0
0
0
false
30,048,138
0
14,135
1
0
0
14,188,923
For "xls" files it's possible to use the xlutils package. It's currently not possible to copy objects between workbooks in openpyxl due to the structure of the Excel format: there are lots of dependencies all over the place that need to be managed. It is, therefore, the responsibility of client code to copy everything required manually. If time permits we might try and port some of the xlutils functionality to openpyxl.
1
0
0
How to copy a row of Excel sheet to another sheet using Python
2
python,excel,xlrd,xlwt,openpyxl
0
2013-01-07T02:04:00.000
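The answer above points at xlutils for .xls files. A minimal xlrd + xlutils.copy sketch of the row-copying idea from the question; the file names, the column index and the value to match are all hypothetical:

    import xlrd
    from xlutils.copy import copy

    TARGET_VALUE = "match-me"   # hypothetical value to compare against
    COLUMN = 2                  # hypothetical column index to check

    rb = xlrd.open_workbook("source.xls")
    rs = rb.sheet_by_index(0)

    wb = copy(rb)               # a writable (xlwt) copy of the workbook
    ws = wb.add_sheet("matches")

    out_row = 0
    for r in range(rs.nrows):
        if rs.cell_value(r, COLUMN) == TARGET_VALUE:
            for c in range(rs.ncols):
                ws.write(out_row, c, rs.cell_value(r, c))
            out_row += 1

    wb.save("output.xls")

This writes the matching rows into a new sheet of a copy of the workbook; saving under a different file name keeps the original untouched. As the answer notes, xlutils only handles the old .xls format, not .xlsx.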
I had GAE 1.4 installed on my local Ubuntu system and everything was working fine. The only warning I was getting at that time was something like "You are using old GAE SDK 1.4." So, to get rid of that I did the following: I removed the old version of GAE and installed GAE 1.7. Along with that I also replaced my djangoappengine folder with the latest version. I copied the new version of GAE to /usr/local, since the PATH variable in my ~/.bashrc file points to GAE in this directory. Now, I am getting the error django.core.exceptions.ImproperlyConfigured: 'djangoappengine.db' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: No module named utils I don't think there is any problem with the directory structure, since earlier it was running fine. Does anyone have any idea? Your help will be highly appreciated. -Sunil
0
1
0.099668
0
false
14,368,275
1
191
2
1
0
14,307,581
Did you update djangoappengine without updating django-nonrel and djangotoolbox? While I haven't upgraded to GAE 1.7.4 yet, I'm running 1.7.2 with no problems. I suspect your problem is not related to the GAE SDK but rather your django-nonrel installation has mismatching pieces.
1
0
0
Django-nonrel broke after installing new version of Google App Engine SDK
2
python,google-app-engine,django-nonrel
0
2013-01-13T20:03:00.000
I had GAE 1.4 installed on my local Ubuntu system and everything was working fine. The only warning I was getting at that time was something like "You are using old GAE SDK 1.4." So, to get rid of that I did the following: I removed the old version of GAE and installed GAE 1.7. Along with that I also replaced my djangoappengine folder with the latest version. I copied the new version of GAE to /usr/local, since the PATH variable in my ~/.bashrc file points to GAE in this directory. Now, I am getting the error django.core.exceptions.ImproperlyConfigured: 'djangoappengine.db' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: No module named utils I don't think there is any problem with the directory structure, since earlier it was running fine. Does anyone have any idea? Your help will be highly appreciated. -Sunil
0
0
1.2
0
true
14,382,654
1
191
2
1
0
14,307,581
Actually, I changed the Google App Engine path in the ~/.bashrc file and restarted the system, and that solved the issue. I think that since I was not restarting the system after the .bashrc changes, it was causing the problem.
1
0
0
Django-nonrel broke after installing new version of Google App Engine SDK
2
python,google-app-engine,django-nonrel
0
2013-01-13T20:03:00.000
I'm working on an NDB-based Google App Engine application that needs to keep track of the day/night cycle of a large number (~2000) of fixed locations. Because the latitude and longitude don't ever change, I can precompute the sunrise/sunset times ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strategies are: (1) precompute a year's worth of sunrises into datetime objects, put them into a list, pickle the list and put it into a PickleProperty; (2) the same, but put the list into a JsonProperty; (3) go with DateTimeProperty and set repeated=True. Now, I'd like the very next sunrise/sunset property to be indexed, but that can be popped from the list and placed into its own DateTimeProperty, so that I can periodically use a query to determine which locations have changed to a different part of the cycle. The whole list does not need to be indexed. Does anyone know the relative effort - in terms of indexing and CPU load - for these three approaches? Does repeated=True have an effect on the indexing? Thanks, Dave
1
0
0
0
false
14,365,980
1
537
2
1
0
14,343,871
I would say precompute those structures and output them into hardcoded python structures that you save in a generated python file. Just read those structures into memory as part of your instance startup. From your description, there's no reason to compute these values at runtime, and there's no reason to store it in the datastore since that has a cost associated with it, as well as some latency for the RPC.
1
0
0
Best strategy for storing precomputed sunrise/sunset data?
3
python,google-app-engine,python-2.7
0
2013-01-15T17:59:00.000
I'm working on an NDB-based Google App Engine application that needs to keep track of the day/night cycle of a large number (~2000) of fixed locations. Because the latitude and longitude don't ever change, I can precompute the sunrise/sunset times ahead of time using something like PyEphem. I'm using NDB. As I see it, the possible strategies are: (1) precompute a year's worth of sunrises into datetime objects, put them into a list, pickle the list and put it into a PickleProperty; (2) the same, but put the list into a JsonProperty; (3) go with DateTimeProperty and set repeated=True. Now, I'd like the very next sunrise/sunset property to be indexed, but that can be popped from the list and placed into its own DateTimeProperty, so that I can periodically use a query to determine which locations have changed to a different part of the cycle. The whole list does not need to be indexed. Does anyone know the relative effort - in terms of indexing and CPU load - for these three approaches? Does repeated=True have an effect on the indexing? Thanks, Dave
1
1
0.066568
0
false
14,345,283
1
537
2
1
0
14,343,871
For 2000 immutable data points - just calculate them when the instance starts or on first use, then keep them in memory. This will be the cheapest and fastest.
1
0
0
Best strategy for storing precomputed sunrise/sunset data?
3
python,google-app-engine,python-2.7
0
2013-01-15T17:59:00.000
I have an income table which contains a recurrence field. Now if the user selects recurrence_type as "Monthly" or "Daily", then I have to add a row into the income table monthly or daily. Is there any way in MySQL to add data periodically to a table? I am using the Django framework for developing the web application.
1
0
0
0
false
27,122,957
1
214
2
0
0
14,344,473
I used the django-celery package and created a job in it to update the data periodically.
1
0
0
add data to table periodically in mysql
2
python,mysql,django
0
2013-01-15T18:33:00.000
I have an income table which contains a recurrence field. Now if the user selects recurrence_type as "Monthly" or "Daily", then I have to add a row into the income table monthly or daily. Is there any way in MySQL to add data periodically to a table? I am using the Django framework for developing the web application.
1
1
1.2
0
true
14,344,610
1
214
2
0
0
14,344,473
As far as I know, there is no such function in MySQL. Even if MySQL could do it, this should not be its job; such functions should be part of the business logic in your application. The normal way is to set up a cron job on the server. The cron job will wake up at the time you set and then call your Python script or SQL to do the data-adding work. And scripts are much better than direct SQL; a sketch follows this record.
1
0
0
add data to table periodically in mysql
2
python,mysql,django
0
2013-01-15T18:33:00.000
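One way to package the "cron calls a Python script" idea from the accepted answer in a Django project is a custom management command that cron invokes via manage.py. Everything below (app name, model fields, the duplication logic) is hypothetical and only illustrates the shape:

    # myapp/management/commands/add_recurring_income.py  (standard Django command layout)
    from django.core.management.base import BaseCommand
    from myapp.models import Income        # hypothetical app and model

    class Command(BaseCommand):
        help = "Insert the recurring income rows that are due."

        def handle(self, *args, **options):
            for template in Income.objects.filter(recurrence_type="Daily"):
                # copy the recurring row into a fresh, non-recurring entry
                Income.objects.create(amount=template.amount, recurrence_type="")
            self.stdout.write("Done.")

    # crontab entry (daily at midnight):
    # 0 0 * * *  /usr/bin/python /path/to/manage.py add_recurring_income

Keeping the logic in a management command means the same code can also be triggered from django-celery later, as the other answer suggests, without changing the command itself.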
I have written a simple blog using Python in google app engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for no of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine. How can I make the voting secure? I was thinking of sending a POST/GET when someone clicks on the vote button which my python script will then read and update the database accordingly. But then I realized that this was insecure. All suggestions are welcome.
4
1
0.099668
0
false
14,347,324
1
1,105
2
0
0
14,347,244
If voting is only for subscribed users, then enable voting after members log in to your site. If not, then you can track users' IP addresses so one IP address can vote once for a single article in a day. By the way, what kind of security do you need?
1
0
0
How to implement a 'Vote up' System for posts in my blog?
2
python,mysql,google-app-engine,jinja2
0
2013-01-15T21:27:00.000
I have written a simple blog using Python on Google App Engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for the number of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine. How can I make the voting secure? I was thinking of sending a POST/GET when someone clicks on the vote button, which my Python script will then read and update the database accordingly. But then I realized that this was insecure. All suggestions are welcome.
4
4
1.2
0
true
14,349,144
1
1,105
2
0
0
14,347,244
First, keep in mind that there is no such thing as "secure", just "secure enough for X". There's always a tradeoff—more secure means more annoying for your legitimate users and more expensive for you. Getting past these generalities, think about your specific case. There is nothing that has a 1-to-1 relationship with users. IP addresses or computers are often shared by multiple people, and at the same time, people often have multiple addresses or computers. Sometimes, something like this is "good enough", but from your question, it doesn't sound like it would be. However, with user accounts, the only false negatives come from people intentionally creating multiple accounts or hacking others' accounts, and there are no false positives. And there's a pretty linear curve in the annoyance/cost vs. security tradeoff, all the way from ""Please don't create sock puppets" to CAPTCHA to credit card checks to web of trust/reputation score to asking for real-life info and hiring an investigator to check it out. In real life, there's often a tradeoff between more than just these two things. For example, if you're willing to accept more cheating if it directly means more money for you, you can just charge people real money to vote (as with those 1-900 lines that many TV shows use). How do Reddit and Digg check multiple voting from a single registered user? I don't know exactly how Reddit or Digg does things, but the general idea is simple: Keep track of individual votes. Normally, you've got your users stored in a SQL RDBMS of some kind. So, you just add a Votes table with columns for user ID, question ID, and answer. (If you're using some kind of NoSQL solution, it should be easy to translate appropriately. For example, maybe there's a document for each question, and the document is a dictionary mapping user IDs to answers.) When a user votes, just INSERT a row into the database. When putting together the voting interface, whether via server-side template or client-side AJAX, call a function that checks for an existing vote. If there is one, instead of showing the vote controls, show some representation of "You already voted Yes." You also want to check again at vote-recording time, to make sure someone doesn't hack the system by opening 200 copies of the page, all of which allow voting (because the user hasn't voted yet), and then submitting 200 Yes votes, but with a SQL database, this is as simple as making Question, User into a multi-column unique key. If you want to allow vote changing or undoing, just add more controls to the interface, and handle them with UPDATE and DELETE calls. If you want to get really fancy—like this site, which allows undoing if you have enough rep and if either your original vote was in the past 5 minutes or the answer has been edited since your vote (or something like that)—you may have to keep some extra info, like record a row for each voting action, with a timestamp, instead of just a single answer for each user. This design also means that, instead of keeping a count somewhere, you generate the vote tally on the fly by, e.g., SELECT COUNT(*) FROM Votes WHERE Question=? GROUP BY Answer. But, as usual, if this is too slow, you can always optimize-by-denormalizing and keep the totals along with the actual votes. Similarly, if your user base is huge, you may want to archive votes on old questions and get them out of the operational database. And so on.
1
0
0
How to implement a 'Vote up' System for posts in my blog?
2
python,mysql,google-app-engine,jinja2
0
2013-01-15T21:27:00.000
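To make the "keep track of individual votes" part of the answer above concrete, here is a small runnable sketch. It uses sqlite3 only so the example is self-contained; the schema and queries carry over to MySQL/Cloud SQL, and all table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE votes (
        user_id INTEGER NOT NULL,
        post_id INTEGER NOT NULL,
        value   INTEGER NOT NULL CHECK (value IN (-1, 1)),
        UNIQUE (user_id, post_id)      -- one vote per user per post
    )""")

def vote(user_id, post_id, value):
    try:
        conn.execute("INSERT INTO votes (user_id, post_id, value) VALUES (?, ?, ?)",
                     (user_id, post_id, value))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False                   # this user already voted on this post

def tally(post_id):
    row = conn.execute("SELECT COALESCE(SUM(value), 0) FROM votes WHERE post_id = ?",
                       (post_id,)).fetchone()
    return row[0]

vote(1, 42, 1); vote(2, 42, 1); vote(1, 42, 1)   # the third call is rejected
print(tally(42))                                  # -> 2
```

The UNIQUE constraint is what enforces "one vote per user per post" even if someone submits the form many times in parallel.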
My question is a bit complex and I am new to OpenERP. I have an external database and an OpenERP database; the external one isn't PostgreSQL. My job is to synchronize the partners between the two databases, the external one being the more important: if the external one's data changes, so does OpenERP's, but if OpenERP's data changes, nothing changes on the external one. I can access the external database, and using XML-RPC I have access to OpenERP's as well. I can import data from the external database simply with XML-RPC, but the problem is the sync. I can't just INSERT the modified partner and delete the old one, because I have no way to identify the old one. I need to UPDATE it. But then I need an ID that says which is which - an external ID. To my knowledge OpenERP can handle external IDs. How does this work, and how can I add an external ID to my res.partner using this? I was told that I can't create a new module for this alone; I need to make it work with the internal IDs.
6
0
0
0
false
14,356,856
1
4,853
1
0
0
14,356,218
Add an integer field to the res.partner table for storing the external id in both databases. When data is retrieved from the external server and added to your OpenERP database, store the external id in the res.partner record on the local server, and also save the id of the newly created partner record in the external server's partner record. The next time the external partner record is updated, you can search for the external id on your local server and update that record (a rough XML-RPC sketch follows below). Please check the OpenERP module base_synchronization and read its code, which will be helpful for you.
1
0
0
Adding external Ids to Partners in OpenERP withouth a new module
3
python,xml-rpc,openerp
0
2013-01-16T10:27:00.000
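A hedged sketch of that sync loop over XML-RPC: the custom integer field is called x_external_id here purely as an example (any stored field added to res.partner would do), and the host, database and credentials are placeholders.

```python
import xmlrpclib

URL, DB, USER, PWD = "http://localhost:8069", "mydb", "admin", "admin"

common = xmlrpclib.ServerProxy("%s/xmlrpc/common" % URL)
uid = common.login(DB, USER, PWD)
models = xmlrpclib.ServerProxy("%s/xmlrpc/object" % URL)

def sync_partner(external_id, values):
    """values is a dict of res.partner fields coming from the external DB."""
    ids = models.execute(DB, uid, PWD, "res.partner", "search",
                         [("x_external_id", "=", external_id)])
    if ids:
        # Partner already known locally: update it in place.
        models.execute(DB, uid, PWD, "res.partner", "write", ids, values)
        return ids[0]
    # Unknown partner: create it and remember its external id.
    values = dict(values, x_external_id=external_id)
    return models.execute(DB, uid, PWD, "res.partner", "create", values)

sync_partner(1001, {"name": "Example Partner"})
```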
I'm working on a web-app that's very heavily database driven. I'm nearing the initial release and so I've locked down the features for this version, but there are going to be lots of other features implemented after release. These features will inevitably require some modification to the database models, so I'm concerned about the complexity of migrating the database on each release. What I'd like to know is how much should I concern myself with locking down a solid database design now so that I can release quickly, against trying to anticipate certain features now so that I can build it into the database before release? I'm also anticipating finding flaws with my current model and would probably then want to make changes to it, but if I release the app and then data starts coming in, migrating the data would be a difficult task I imagine. Are there conventional methods to tackle this type of problem? A point in the right direction would be very useful. For a bit of background I'm developing an asset management system for a CG production pipeline. So lots of pieces of data with lots of connections between them. It's web-based, written entirely in Python and it uses SQLAlchemy with a SQLite engine.
1
2
1.2
0
true
14,364,804
1
120
1
0
0
14,364,214
Some thoughts for managing databases for a production application: Make backups nightly. This is crucial because if you try to do an update (to the data or the schema), and you mess up, you'll need to be able to revert to something more stable. Create environments. You should have something like a local copy of the database for development, a staging database for other people to see and test before going live and of course a production database that your live system points to. Make sure all three environments are in sync before you start development locally. This way you can track changes over time. Start writing scripts and version them for releases. Make sure you store these in a source control system (SVN, Git, etc.) You just want a historical record of what has changed and also a small set of scripts that need to be run with a given release. Just helps you stay organized. Do your changes to your local database and test it. Make sure you have scripts that do two things, 1) Scripts that modify the data, or the schema, 2) Scripts that undo what you've done in case things go wrong. Test these over and over locally. Run the scripts, test and then rollback. Are things still ok? Run the scripts on staging and see if everything is still ok. Just another chance to prove your work is good and that if needed you can undo your changes. Once staging is good and you feel confident, run your scripts on the production database. Remember you have scripts to change data (update, delete statements) and scripts to change schema (add fields, rename fields, add tables). In general take your time and be very deliberate in your actions. The more disciplined you are the more confident you'll be. Updating the database can be scary, so don't rush things, write out your plan of action, and test, test, test!
1
0
0
How to approach updating an database-driven application after release?
2
python,database,migration,sqlalchemy
0
2013-01-16T17:29:00.000
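For the "scripts that do and scripts that undo" point in the answer above, here is one very small way to keep paired upgrade/downgrade steps under version control, sketched with SQLAlchemy and SQLite since that is the asker's stack; real projects usually reach for a migration tool (alembic, sqlalchemy-migrate) instead, and the version bookkeeping is omitted here.

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///assets.db")

# (version, upgrade SQL, downgrade SQL) -- check this list into source control.
MIGRATIONS = [
    (1,
     "CREATE TABLE shot_note (id INTEGER PRIMARY KEY, shot_id INTEGER, body TEXT)",
     "DROP TABLE shot_note"),
]

def upgrade(to_version):
    with engine.begin() as conn:
        for version, up_sql, _ in MIGRATIONS:
            if version <= to_version:
                conn.execute(text(up_sql))

def downgrade(down_to_version):
    with engine.begin() as conn:
        for version, _, down_sql in reversed(MIGRATIONS):
            if version > down_to_version:
                conn.execute(text(down_sql))
```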
I have been trying to get my head around Django over the last week or two. It's slowly starting to make some sense and I am really liking it. My goal is to replace a fairly messy Excel spreadsheet with a database and frontend for my users. This would involve pulling the data out of a table, presenting it in a web tabular format, and allowing changes to be made through text fields and drop-down menus, with a simple update button that will update all changes to the DB. My question is, will the built-in Django Forms functionality be the best solution? Or would I create some sort of for loop for my objects and wrap them around HTML form syntax in my template? I'm just not too sure how to approach the solution. Apologies if this seems like a simple question; I just feel like there are maybe a few ways to do it, but maybe there is one perfect way. Thanks
3
1
0.099668
0
false
14,371,043
1
2,045
1
0
0
14,370,576
Exporting the Excel sheet into Django and having it rendered as text fields is not a simple two-step process; you need to know how Django works. First, you need to import the data into a MySQL database using some language or a ready-made tool. Then you need to make a Model for that table, and then you can use the Django admin to edit the records (a sketch of such a model and admin registration follows below).
1
0
0
Custom Django Database Frontend
2
python,database,django,frontend
0
2013-01-17T01:00:00.000
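Building on the answer above, here is a sketch of what the Model plus admin registration could look like; the field names are invented and would mirror the spreadsheet's columns. The admin change list with list_editable gives roughly the "table with text fields, drop-downs and an update button" the question describes, without writing a custom frontend.

```python
# myapp/models.py
from django.db import models

class AssetRow(models.Model):
    name = models.CharField(max_length=200)
    category = models.CharField(max_length=100)
    quantity = models.IntegerField(default=0)
    notes = models.TextField(blank=True)

    def __unicode__(self):            # Django 1.x on Python 2
        return self.name

# myapp/admin.py
from django.contrib import admin
# from myapp.models import AssetRow  (needed when admin.py lives in its own file)

class AssetRowAdmin(admin.ModelAdmin):
    list_display = ("name", "category", "quantity")
    list_editable = ("category", "quantity")   # editable right in the change list

admin.site.register(AssetRow, AssetRowAdmin)
```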
I'm writing a webapp in bottle. I have a small interface that lets users run SQL statements. Sometimes it takes about 5 seconds until the user gets a result because the DB is quite big and old. What I want to do is the following: 1. Start the query in a thread 2. Give the user a response right away and have AJAX poll for the result There is one thing that I'm not sure of... Where do I store the result of the query? Should I store it in a DB? Should I store it in a variable inside my webapp? What do you guys think would be best?
1
0
1.2
0
true
14,377,893
1
105
1
0
0
14,377,250
This would be a good use for something like memcached.
1
0
0
Python 3 - SQL Result - where to store it
1
python,database,multithreading
0
2013-01-17T10:43:00.000
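Since the answer above just says "memcached", here is a hedged sketch of how the pieces could fit together with bottle and the python-memcached client: the query runs in a background thread, the JSON result is stored under a generated job id, and the AJAX poller fetches it. The route names, the 10-minute expiry and the execute_sql stub are all assumptions.

```python
import json
import threading
import uuid

import memcache                      # python-memcached client
from bottle import get, post, request

mc = memcache.Client(["127.0.0.1:11211"])

def execute_sql(sql):
    """Stand-in for the real (slow) database call."""
    raise NotImplementedError

def run_query(job_id, sql):
    rows = execute_sql(sql)
    mc.set("job:" + job_id, json.dumps(rows), time=600)   # keep result 10 minutes

@post("/query")
def start_query():
    job_id = uuid.uuid4().hex
    threading.Thread(target=run_query, args=(job_id, request.forms.get("sql"))).start()
    return {"job": job_id}

@get("/result/<job_id>")
def poll_result(job_id):
    data = mc.get("job:" + job_id)
    return {"done": data is not None, "rows": json.loads(data) if data else None}
```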
When I try installing mysql-python using below command, macbook-user$ sudo pip install MYSQL-python I get these messages: /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h:891:1: warning: this is the location of the previous definition /usr/bin/lipo: /tmp/_mysql-LtlmLe.o and /tmp/_mysql-thwkfu.o have the same architectures (i386) and can't be in the same fat output file clang: error: lipo command failed with exit code 1 (use -v to see invocation) error: command 'clang' failed with exit status 1 Does anyone know how to solve this problem? Help me please!
1
0
0
0
false
14,399,388
0
506
1
1
0
14,399,223
At first glance it looks like a damaged pip package. Have you tried easy_install instead with the same package?
1
0
0
clang error when installing MySQL-python on Mountain Lion (Mac OS X 10.8)
1
python,mysql,django,pip,mysql-python
0
2013-01-18T12:41:00.000
When I fired redis-py's bgsave() command, the return value was False, but I'm pretty sure the execution was successful because I've checked with lastsave(). However, if I use save() the return value would be True after successful execution. Could anyone please explain what False indicates for bgsave()? Not sure if it has anything to do with bgsave() being executed in the background.
1
2
0.379949
0
false
14,418,853
0
778
1
0
0
14,417,846
Thanks to Pavel Anossov, after reading the code of client.py, I found out that responses from 2 commands (BGSAVE and BGREWRITEAOF) were not converted from bytes to str, and this caused the problem in Python 3. To fix this issue, just change lambda r: r == to lambda r: nativestr(r) == for these two commands in RESPONSE_CALLBACKS.
1
0
0
Why does redis-py's bgsave() command return False after successful execution?
1
python,redis
0
2013-01-19T19:10:00.000
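If you would rather not patch client.py, the same fix can be applied per client instance with set_response_callback; the exact reply strings below are my reading of RESPONSE_CALLBACKS and may differ between redis-py versions, so treat this as a sketch.

```python
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)

def status_equals(expected):
    # Decode bytes replies under Python 3 before comparing, as the answer suggests.
    return lambda reply, **kw: (reply.decode() if isinstance(reply, bytes) else reply) == expected

r.set_response_callback("BGSAVE", status_equals("Background saving started"))
r.set_response_callback("BGREWRITEAOF", status_equals("Background rewriting of AOF file started"))

print(r.bgsave())          # True once the background save has been started
```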
I am writing a chat bot that uses past conversations to generate its responses. Currently I use text files to store all the data but I want to use a database instead so that multiple instances of the bot can use it at the same time. How should I structure this database? My first idea was to keep a main table like create table Sessions (startTime INT,ip INT, botVersion REAL, length INT, tableName TEXT). Then for each conversation I create table <generated name>(timestamp INT, message TEXT) with all the messages that were sent or received during that conversation. When the conversation is over, I insert the name of the new table into Sessions(tableName). Is it ok to programmatically create tables in this manner? I am asking because most SQL tutorials seem to suggest that tables are created when the program is initialized. Another way to do this is to have a huge create table Messages(id INT, message TEXT) table that stores every message that was sent or received. When a conversation is over, I can add a new entry to Sessions that includes the id used during that conversation so that I can look up all the messages sent during a certain conversation. I guess one advantage of this is that I don't need to have hundreds or thousands of tables. I am planning on using SQLite despite its low concurrency since each instance of the bot may make thousands of reads before generating a response (which will result in one write). Still, if another relational database is better suited for this task, please comment. Note: There are other questions on SO about storing chat logs in databases but I am specifically looking for how it should be structured and feedback on the above ideas.
1
1
0.197375
0
false
14,430,911
0
1,198
1
0
0
14,430,856
Don't use a different table for each conversation. Instead add a "conversation" column to your single table.
1
0
0
Storing chat logs in relational database
1
python,sql,database,sqlite,database-design
0
2013-01-21T00:13:00.000
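A sketch of that single-table layout with sqlite3 (which the question plans to use); the column names and the shape of the sessions table are just illustrative.

```python
import sqlite3
import time

conn = sqlite3.connect("chatbot.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS sessions (
        id INTEGER PRIMARY KEY,
        start_time INTEGER, ip TEXT, bot_version REAL
    );
    CREATE TABLE IF NOT EXISTS messages (
        id INTEGER PRIMARY KEY,
        session_id INTEGER REFERENCES sessions(id),
        timestamp INTEGER,
        direction TEXT,            -- 'in' or 'out'
        message TEXT
    );
    CREATE INDEX IF NOT EXISTS idx_messages_session ON messages(session_id);
""")

def log_message(session_id, direction, text):
    conn.execute("INSERT INTO messages (session_id, timestamp, direction, message) "
                 "VALUES (?, ?, ?, ?)", (session_id, int(time.time()), direction, text))
    conn.commit()

def conversation(session_id):
    return conn.execute("SELECT timestamp, direction, message FROM messages "
                        "WHERE session_id = ? ORDER BY timestamp", (session_id,)).fetchall()
```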
This is my program: import MySQLdb as mdb from MySQLdb import IntegrityError conn = mdb.connect("localhost", "asdf", "asdf", "asdf") When the connect function is called, Python prints some text ("h" in the shell). This happens only if I execute the script file from a particular folder. If I copy the same script file to some other folder, "h" is not printed. I actually had this line previously in the same script for testing: print "h" but now I have removed the line from the script. Still it is printed. What happened to my folder?
0
1
1.2
0
true
14,434,772
0
41
1
0
0
14,434,712
Try deleting the *.pyc files. Secondly, run the script with the -v option so that you can see where the module is being imported from.
1
0
0
python mysqldb printing text even if no print statement in the code
1
python,mysql
0
2013-01-21T08:18:00.000
I have populated a combobox with an QSqlQueryModel. It's all working fine as it is, but I would like to add an extra item to the combobox that could say "ALL_RECORDS". This way I could use the combobox as a filtering device. I obviously don't want to add this extra item in the database, how can I add it to the combobox after it's been populated by a model?
1
1
0.099668
0
false
14,540,595
0
243
1
0
0
14,455,871
You could use a proxy model that gets its data from two models, one for your default values and the other for your database, and use it to populate your QComboBox.
1
1
0
Adding an item to an already populated combobox
2
python,qt,pyqt,pyqt4,pyside
0
2013-01-22T10:02:00.000
I'm building a finance application in Python to do time series analysis on security prices (among other things). The heavy lifting will be done in Python mainly using Numpy, SciPy, and pandas (pandas has an interface for SQLite and MySQL). With a web interface to present results. There will be a few hundred GB of data. I'm curious what is the better option for database in terms of performance, ease of accessing the data (queries), and interface with Python. I've seen the posts about the general pros and cons of SQLite v. MySQL but I'm looking for feedback that's more specific to a Python application.
0
0
0
0
false
14,509,945
0
1,382
2
0
0
14,509,517
SQLite is great for embedded databases, but it's not really great for anything that requires access by more than one process at a time. For this reason it cannot be taken seriously for your application. MySQL is a much better alternative. I'm also in agreement that Postgres would be an even better option.
1
0
0
MySQL v. SQLite for Python based financial web app
3
python,mysql,sqlite,pandas
0
2013-01-24T19:49:00.000
I'm building a finance application in Python to do time series analysis on security prices (among other things). The heavy lifting will be done in Python mainly using Numpy, SciPy, and pandas (pandas has an interface for SQLite and MySQL). With a web interface to present results. There will be a few hundred GB of data. I'm curious what is the better option for database in terms of performance, ease of accessing the data (queries), and interface with Python. I've seen the posts about the general pros and cons of SQLite v. MySQL but I'm looking for feedback that's more specific to a Python application.
0
0
0
0
false
14,514,661
0
1,382
2
0
0
14,509,517
For many 'research' oriented time series database loads, it is far faster to do as much analysis in the database than to copy the data to a client and analyze it using a regular programming language. Copying 10G across the network is far slower than reading it from disk. Relational databases do not natively support time series operations, so generating something as simple as security returns from security prices is either impossible or very difficult in both MySQL and SQLite. Postgres has windowing operations, as do several other relational-like databases; the trade-off is that that they don't do as many transactions per second. Many others use K or Q. The financial services web apps that I've seen used multiple databases; the raw data was stored in 'research' databases that were multiply indexed and designed for flexibility, while the web-apps interacted directly with in-memory caches and higher-speed RDBs; the tradeoff was that data had to be copied from the 'research' databases to the 'production' databases.
1
0
0
MySQL v. SQLite for Python based financial web app
3
python,mysql,sqlite,pandas
0
2013-01-24T19:49:00.000
I am using Celery standalone (not within Django). I am planning to have one worker task type running on multiple physical machines. The task does the following Accept an XML document. Transform it. Make multiple database reads and writes. I'm using PostgreSQL, but this would apply equally to other store types that use connections. In the past, I've used a database connection pool to avoid creating a new database connection on every request or avoid keeping the connection open too long. However, since each Celery worker runs in a separate process, I'm not sure how they would actually be able to share the pool. Am I missing something? I know that Celery allows you to persist a result returned from a Celery worker, but that is not what I'm trying to do here. Each task can do several different updates or inserts depending on the data processed. What is the right way to access a database from within a Celery worker? Is it possible to share a pool across multiple workers/tasks or is there some other way to do this?
47
2
0.066568
0
false
14,526,700
0
24,474
2
1
0
14,526,249
You can override the default behavior to have threaded workers instead of a worker per process in your celery config: CELERYD_POOL = "celery.concurrency.threads.TaskPool" Then you can store the shared pool instance on your task instance and reference it from each threaded task invocation.
1
0
0
Celery Worker Database Connection Pooling
6
python,postgresql,connection-pooling,celery
0
2013-01-25T16:38:00.000
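A sketch of how that could be wired up, assuming Celery 3.x and psycopg2: the worker is switched to the threaded pool, and a single ThreadedConnectionPool created at module import is then shared by every task thread in that worker process. The broker URL, connection settings, pool sizes and table are placeholders.

```python
from celery import Celery
from psycopg2.pool import ThreadedConnectionPool

app = Celery("worker", broker="amqp://guest@localhost//")
app.conf.CELERYD_POOL = "celery.concurrency.threads.TaskPool"   # threaded workers

# Created once per worker process, shared by all of its task threads.
db_pool = ThreadedConnectionPool(2, 10, host="localhost", dbname="appdb",
                                 user="app", password="secret")

@app.task
def process_document(xml_blob):
    conn = db_pool.getconn()
    try:
        cur = conn.cursor()
        cur.execute("INSERT INTO documents (payload) VALUES (%s)", (xml_blob,))
        conn.commit()
    finally:
        db_pool.putconn(conn)
```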
I am using Celery standalone (not within Django). I am planning to have one worker task type running on multiple physical machines. The task does the following Accept an XML document. Transform it. Make multiple database reads and writes. I'm using PostgreSQL, but this would apply equally to other store types that use connections. In the past, I've used a database connection pool to avoid creating a new database connection on every request or avoid keeping the connection open too long. However, since each Celery worker runs in a separate process, I'm not sure how they would actually be able to share the pool. Am I missing something? I know that Celery allows you to persist a result returned from a Celery worker, but that is not what I'm trying to do here. Each task can do several different updates or inserts depending on the data processed. What is the right way to access a database from within a Celery worker? Is it possible to share a pool across multiple workers/tasks or is there some other way to do this?
47
3
0.099668
0
false
14,549,811
0
24,474
2
1
0
14,526,249
Have one DB connection per worker process. Since Celery itself maintains a pool of worker processes, your DB connections will always be equal to the number of Celery workers. The flip side, sort of, is that it ties DB connection pooling to Celery's worker process management. But that should be fine, given that the GIL allows only one thread at a time in a process.
1
0
0
Celery Worker Database Connection Pooling
6
python,postgresql,connection-pooling,celery
0
2013-01-25T16:38:00.000
There is a sqlite3 library that comes with python 2.7.3, but it is hardly the latest version. I would like to upgrade it within a virtualenv environment. In other words, the upgrade only applies to the version of python installed within this virtualenv. What is the correct way to do so?
4
1
0.066568
0
false
17,417,792
0
9,594
2
0
0
14,541,869
I was stuck on the same problem once. This solved it for me: (1) Download and untar the required Python version; (2) mkdir local; (3) untar sqlite after downloading its package; (4) ./configure --prefix=/home/aanuj/local; (5) make; (6) make install; (7) ./configure --prefix=/home/anauj/local LDFLAGS='-L/home/aaanuj/local/lib' CPPFLAGS='-I/home/aanuj/local/include'; (8) make; (9) find the sqlite3.so and copy it to the desired location; (10) extract beaver; (11) set up the virtualenv with the Python version needed; (12) activate the env; (13) unalias python; (14) export PYTHONPATH=/home/aanuj (location of _sqlite3.so); (15) enjoy.
1
0
1
How to upgrade sqlite3 in python 2.7.3 inside a virtualenv?
3
python,sqlite,virtualenv
0
2013-01-26T21:42:00.000
There is a sqlite3 library that comes with python 2.7.3, but it is hardly the latest version. I would like to upgrade it within a virtualenv environment. In other words, the upgrade only applies to the version of python installed within this virtualenv. What is the correct way to do so?
4
4
1.2
0
true
14,550,136
0
9,594
2
0
0
14,541,869
The below works for me, but please comment if there is any room for improvement: (1) activate the virtualenv into which you are going to install the latest sqlite3; (2) get the latest source of the pysqlite package from Google Code: wget http://pysqlite.googlecode.com/files/pysqlite-2.6.3.tar.gz; (3) compile pysqlite from source together with the latest sqlite database: python setup.py build_static; (4) install it to the site-packages directory of the virtualenv: python setup.py install. The above will actually install pysqlite into path-to-virtualenv/lib/python2.7/site-packages, which is where all other pip-installed libraries are. Now I have the latest version of sqlite (compiled into pysqlite) installed within a virtualenv, so I can do: from pysqlite2 import dbapi2 as sqlite
1
0
1
How to upgrade sqlite3 in python 2.7.3 inside a virtualenv?
3
python,sqlite,virtualenv
0
2013-01-26T21:42:00.000
I have a couple of OpenERP modules implemented for OpenERP version 6.1. When I installed OpenERP 7.0, I copied these modules into the addons folder for OpenERP 7. After that, I tried to update the modules list through the web interface, but nothing changed. I also started the server again with the options --database=mydb --update=all, but the modules list didn't change. Did I miss something? Is it possible to use modules from version 6.1 in OpenERP version 7? Thanks for the advice. UPDATE: I have already exported my database from version 6.1 into a *.sql file. Will OpenERP 7 work if I just import this data into a new database created with OpenERP 7?
2
6
1.2
0
true
14,564,692
1
3,217
1
0
0
14,563,801
OpenERP 6.1 modules cannot be used directly in OpenERP 7. You have to make some basic changes to the 6.1 modules: for example, the tree and form tags require a string attribute, and version="7" should be included in the form tag. If you have inherited some base modules like sale or purchase, you also have to change the inherit xpath expressions, etc. Some objects such as res.partner.address were removed, so you have to take care of this and replace them with res.partner. Thanks
1
0
0
OpenERP 7 with modules from OpenERP 6.1
1
python,openerp,erp
0
2013-01-28T14:06:00.000
I am trying to analyse the SQL performance of our Django (1.3) web application. I have added a custom log handler which attaches to django.db.backends and set DEBUG = True, this allows me to see all the database queries that are being executed. However the SQL is not valid SQL! The actual query is select * from app_model where name = %s with some parameters passed in (e.g. "admin"), however the logging message doesn't quote the params, so the sql is select * from app_model where name = admin, which is wrong. This also happens using django.db.connection.queries. AFAIK the django debug toolbar has a complex custom cursor to handle this. Update For those suggesting the Django debug toolbar: I am aware of that tool, it is great. However it does not do what I need. I want to run a sample interaction of our application, and aggregate the SQL that's used. DjDT is great for showing and shallow learning. But not great for aggregating and summarazing the interaction of dozens of pages. Is there any easy way to get the real, legit, SQL that is run?
0
0
0
0
false
14,567,526
1
189
1
0
0
14,567,172
select * from app_model where name = %s is a prepared statement. I would recommend logging the statement and the parameters separately. In order to get a well-formed query you need to do something like "select * from app_model where name = %s" % quote_string("user"), or more generally query % map(quote_string, params). Please note that quote_string is DB-specific and the DB-API 2.0 does not define a quote_string method, so you need to write one yourself. For logging purposes I'd recommend keeping the queries and parameters separate, as it allows for far better profiling: you can easily group the queries without taking the actual values into account.
1
0
0
How to retrieve the real SQL from the Django logger?
4
python,sql,django,django-database
0
2013-01-28T17:00:00.000
I am trying to use a python set as a filter for ids from a mysql table. The python set stores all the ids to filter (about 30 000 right now) this number will grow slowly over time and I am concerned about the maximum capacity of a python set. Is there a limit to the number of elements it can contain?
2
0
0
0
false
14,577,827
0
2,460
1
0
0
14,577,790
I don't know if there is an arbitrary limit for the number of items in a set. More than likely the limit is tied to the available memory.
1
0
1
Is there a limit to the number of values that a python set can contain?
2
python,set
0
2013-01-29T07:31:00.000
No code examples here. I'm just running into an issue with Microsoft Excel 2010 where I have a Python script on Linux that pulls data from CSV files, pushes it into Excel, and emails that file to a certain email address as an attachment. My problem is that I'm using formulas in my Excel file, and when it first opens up it goes into "Protected View". My formulas don't load until after I click "Enable Editing". Is there any way to get my numbers to show up even if Protected View is on?
0
0
0
0
false
14,592,481
0
853
1
0
0
14,592,328
Figured this out. Just used the for loop to keep a running total. Sorry for the wasted question.
1
0
0
Protected View in Microsoft Excel 2010 and Python
1
python,linux,excel,view,protected
0
2013-01-29T21:08:00.000
For 100k+ entities in the Google datastore, ndb.query().count() is going to be cancelled by the deadline, even with an index. I've tried with the produce_cursors option, but only iter() or fetch_page() return a cursor; count() doesn't. How can I count large numbers of entities?
4
2
0.132549
0
false
14,713,169
1
2,669
1
1
0
14,673,642
This is indeed a frustrating issue. I've been doing some work in this area lately to get some general count stats - basically, the number of entities that satisfy some query. count() is a great idea, but it is hobbled by the datastore RPC timeout. It would be nice if count() supported cursors somehow so that you could cursor across the result set and simply add up the resulting integers rather than returning a large list of keys only to throw them away. With cursors, you could continue across all 1-minute / 10-minute boundaries, using the "pass the baton" deferred approach. With count() (as opposed to fetch(keys_only=True)) you can greatly reduce the waste and hopefully increase the speed of the RPC calls, e.g., it takes a shocking amount of time to count to 1,000,000 using the fetch(keys_only=True) approach - an expensive proposition on backends. Sharded counters are a lot of overhead if you only need/want periodic count statistics (e.g., a daily count of all my accounts in the system by, e.g., country).
1
0
0
ndb.query.count() failed with 60s query deadline on large entities
3
python,google-app-engine,app-engine-ndb,bigtable
0
2013-02-03T14:41:00.000
I need some help with d3 and MySQL. Below is my question: I have data stored in MySQL (eg: keywords with their frequencies). I now want to visualize it using d3. As far as my knowledge of d3 goes, it requires json file as input. My question is: How do I access this MySQL database from d3 script? One way which i could think of is: Using Python, connect with database and convert the data in json format. Save this in some .json file. In d3, read this json file as input and use it in visualization. Is there any other way to convert the data in MySQL into .json format directly using d3? Can we connect to MySQL from d3 and read the data? Thanks a lot!
4
1
0.066568
0
false
14,679,748
0
8,185
1
0
0
14,679,610
d3 is a JavaScript library that runs on the client side, while the MySQL database runs on the server side. d3 can't connect to a MySQL database, let alone convert the data to JSON format. The way you thought it was possible (steps 1 and 2) is what you should do.
1
0
0
Accessing MySQL database in d3 visualization
3
javascript,python,mysql,d3.js,data-visualization
0
2013-02-04T02:22:00.000