| Column | Dtype | Min | Max |
|---|---|---|---|
| Question | stringlengths | 25 | 7.47k |
| Q_Score | int64 | 0 | 1.24k |
| Users Score | int64 | -10 | 494 |
| Score | float64 | -1 | 1.2 |
| Data Science and Machine Learning | int64 | 0 | 1 |
| is_accepted | bool | 2 classes | |
| A_Id | int64 | 39.3k | 72.5M |
| Web Development | int64 | 0 | 1 |
| ViewCount | int64 | 15 | 1.37M |
| Available Count | int64 | 1 | 9 |
| System Administration and DevOps | int64 | 0 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| Q_Id | int64 | 39.1k | 48M |
| Answer | stringlengths | 16 | 5.07k |
| Database and SQL | int64 | 1 | 1 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| Title | stringlengths | 15 | 148 |
| AnswerCount | int64 | 1 | 32 |
| Tags | stringlengths | 6 | 90 |
| Other | int64 | 0 | 1 |
| CreationDate | stringlengths | 23 | 23 |
I've written a Python package that includes a bsddb database of pre-computed values for one of the more time-consuming computations. For simplicity, my setup script installs the database file in the same directory as the code which accesses the database (on Unix, something like /usr/lib/python2.5/site-packages/mypackage/). How do I store the final location of the database file so my code can access it? Right now, I'm using a hack based on the __file__ variable in the module which accesses the database: dbname = os.path.join(os.path.dirname(__file__), "database.dat") It works, but it seems... hackish. Is there a better way to do this? I'd like to have the setup script just grab the final installation location from the distutils module and stuff it into a "dbconfig.py" file that gets installed alongside the code that accesses the database.
32
19
1
0
false
9,918,496
0
28,993
2
0
0
39,104
Use pkgutil.get_data. It’s the cousin of pkg_resources.resource_stream, but in the standard library, and should work with flat filesystem installs as well as zipped packages and other importers.
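A minimal sketch of that suggestion, using the package and file names from the question (mypackage, database.dat); since bsddb wants a filename rather than bytes, the data is spilled to a temporary file in case the package happens to be zipped:

```python
import pkgutil
import tempfile

# pkgutil.get_data returns the resource as bytes (or None if not found),
# regardless of whether the package is a plain directory or a zip archive.
data = pkgutil.get_data("mypackage", "database.dat")

# Write the bytes out so code that needs a real filename (like bsddb) can use it.
with tempfile.NamedTemporaryFile(delete=False, suffix=".dat") as tmp:
    tmp.write(data)
    dbname = tmp.name
```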
1
0
1
Finding a file in a Python module distribution
4
python,distutils
0
2008-09-02T09:40:00.000
I've written a Python package that includes a bsddb database of pre-computed values for one of the more time-consuming computations. For simplicity, my setup script installs the database file in the same directory as the code which accesses the database (on Unix, something like /usr/lib/python2.5/site-packages/mypackage/). How do I store the final location of the database file so my code can access it? Right now, I'm using a hack based on the __file__ variable in the module which accesses the database: dbname = os.path.join(os.path.dirname(__file__), "database.dat") It works, but it seems... hackish. Is there a better way to do this? I'd like to have the setup script just grab the final installation location from the distutils module and stuff it into a "dbconfig.py" file that gets installed alongside the code that accesses the database.
32
3
0.148885
0
false
39,295
0
28,993
2
0
0
39,104
That's probably the way to do it, without resorting to something more advanced like using setuptools to install the files where they belong. Note that there's a problem with that approach: on OSes with a real security framework (UNIXes, etc.), the user running your script might not have the rights to access the DB in the system directory where it gets installed.
1
0
1
Finding a file in a Python module distribution
4
python,distutils
0
2008-09-02T09:40:00.000
All the docs for SQLAlchemy give INSERT and UPDATE examples using the local table instance (e.g. tablename.update()...). Doing this seems difficult with the declarative syntax; I need to reference Base.metadata.tables["tablename"] to get the table reference. Am I supposed to do this another way? Is there a different syntax for INSERT and UPDATE recommended when using the declarative syntax? Should I just switch to the old way?
8
4
0.26052
0
false
77,962
0
2,919
2
0
0
75,829
via the __table__ attribute on your declarative class
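With a recent SQLAlchemy, that looks roughly like this (the User model here is invented for illustration):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

# __table__ is the plain Table object behind the declarative class, so the
# usual insert()/update() constructs work without touching Base.metadata.tables.
users = User.__table__

with engine.begin() as conn:
    conn.execute(users.insert().values(name="alice"))
    conn.execute(users.update().where(users.c.name == "alice").values(name="bob"))
```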
1
0
0
Best way to access table instances when using SQLAlchemy's declarative syntax
3
python,sql,sqlalchemy
0
2008-09-16T19:08:00.000
All the docs for SQLAlchemy give INSERT and UPDATE examples using the local table instance (e.g. tablename.update()...). Doing this seems difficult with the declarative syntax; I need to reference Base.metadata.tables["tablename"] to get the table reference. Am I supposed to do this another way? Is there a different syntax for INSERT and UPDATE recommended when using the declarative syntax? Should I just switch to the old way?
8
0
0
0
false
315,406
0
2,919
2
0
0
75,829
There may be some confusion between the table (the object) and the table name (a string). Using the __table__ class attribute works fine for me.
1
0
0
Best way to access table instances when using SQLAlchemy's declarative syntax
3
python,sql,sqlalchemy
0
2008-09-16T19:08:00.000
I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server. Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime. What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal. The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together. I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable. Edit: Expanded description to clear up some misconceptions.
5
1
0.022219
0
false
141,872
0
2,773
6
0
0
140,026
"implement a Domain Specific Language" "nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime" I want a DSL but I don't want Python to be that DSL. Okay. How will you execute this DSL? What runtime is acceptable if not Python? What if I have a C program that happens to embed the Python interpreter? Is that acceptable? And -- if Python is not an acceptable runtime -- why does this have a Python tag?
1
0
0
Writing a Domain Specific Language for selecting rows from a table
9
python,database,algorithm,dsl
0
2008-09-26T14:56:00.000
I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server. Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime. What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal. The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together. I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable. Edit: Expanded description to clear up some misconceptions.
5
0
0
0
false
140,066
0
2,773
6
0
0
140,026
Why not create a language that, when it "compiles", generates SQL or whatever query language your datastore requires? You would basically be creating an abstraction over your persistence layer.
1
0
0
Writing a Domain Specific Language for selecting rows from a table
9
python,database,algorithm,dsl
0
2008-09-26T14:56:00.000
I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server. Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime. What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal. The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together. I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable. Edit: Expanded description to clear up some misconceptions.
5
0
0
0
false
140,304
0
2,773
6
0
0
140,026
It really sounds like SQL, but perhaps it's worth trying SQLite if you want to keep it simple?
1
0
0
Writing a Domain Specific Language for selecting rows from a table
9
python,database,algorithm,dsl
0
2008-09-26T14:56:00.000
I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server. Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime. What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal. The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together. I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable. Edit: Expanded description to clear up some misconceptions.
5
0
0
0
false
140,091
0
2,773
6
0
0
140,026
You mentioned Python. Why not use Python? If someone can "type in" an expression in your DSL, they can type in Python. You'll need some rules on the structure of the expression, but that's a lot easier than implementing something new.
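If you do go the "Python with rules on its structure" route, one way to enforce those rules is to parse the expression with the ast module and whitelist node types before evaluating it. This is only a sketch of the idea, not a complete sandbox:

```python
import ast

ALLOWED_NODES = (
    ast.Expression, ast.BoolOp, ast.BinOp, ast.UnaryOp, ast.Compare,
    ast.Name, ast.Load, ast.Constant,
    ast.And, ast.Or, ast.Not,
    ast.Add, ast.Sub, ast.Mult, ast.Div,
    ast.Eq, ast.NotEq, ast.Lt, ast.LtE, ast.Gt, ast.GtE,
)

def safe_eval(expr, names):
    """Evaluate a simple boolean/arithmetic expression over a dict of row values."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            raise ValueError("disallowed syntax: %r" % node)
    # No builtins are exposed; only the whitelisted syntax and the row's names.
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}}, names)

row = {"score": 42, "age_days": 3}
print(safe_eval("score > 10 and age_days < 7", row))  # -> True
```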
1
0
0
Writing a Domain Specific Language for selecting rows from a table
9
python,database,algorithm,dsl
0
2008-09-26T14:56:00.000
I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server. Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime. What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal. The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together. I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable. Edit: Expanded description to clear up some misconceptions.
5
0
0
0
false
140,228
0
2,773
6
0
0
140,026
You said nobody is going to want to install a server that downloads and executes arbitrary code at runtime. However, that is exactly what your DSL will do (eventually), so there probably isn't that much of a difference. Unless you're doing something very specific with the data, I don't think a DSL will buy you that much, and it will frustrate the users who are already versed in SQL. Don't underestimate the size of the task you'll be taking on. To answer your question, however: you will need to come up with a grammar for your language, plus something to parse the text and walk the tree, emitting code or calling an API that you've written (hence my comment that you're still going to have to ship some code). There are plenty of educational texts on grammars for mathematical expressions you can refer to on the net; that's fairly straightforward. You may have a parser generator tool like ANTLR or Yacc you can use to help you generate the parser (or use a language like Lisp/Scheme and marry the two up). Coming up with a reasonable SQL grammar won't be easy. But google 'BNF SQL' and see what you come up with. Best of luck.
1
0
0
Writing a Domain Specific Language for selecting rows from a table
9
python,database,algorithm,dsl
0
2008-09-26T14:56:00.000
I'm writing a server that I expect to be run by many different people, not all of whom I will have direct contact with. The servers will communicate with each other in a cluster. Part of the server's functionality involves selecting a small subset of rows from a potentially very large table. The exact choice of what rows are selected will need some tuning, and it's important that it's possible for the person running the cluster (eg, myself) to update the selection criteria without getting each and every server administrator to deploy a new version of the server. Simply writing the function in Python isn't really an option, since nobody is going to want to install a server that downloads and executes arbitrary Python code at runtime. What I need are suggestions on the simplest way to implement a Domain Specific Language to achieve this goal. The language needs to be capable of simple expression evaluation, as well as querying table indexes and iterating through the returned rows. Ease of writing and reading the language is secondary to ease of implementing it. I'd also prefer not to have to write an entire query optimiser, so something that explicitly specifies what indexes to query would be ideal. The interface that this will have to compile against will be similar in capabilities to what the App Engine datastore exports: You can query for sequential ranges on any index on the table (eg, less-than, greater-than, range and equality queries), then filter the returned row by any boolean expression. You can also concatenate multiple independent result sets together. I realise this question sounds a lot like I'm asking for SQL. However, I don't want to require that the datastore backing this data be a relational database, and I don't want the overhead of trying to reimplement SQL myself. I'm also dealing with only a single table with a known schema. Finally, no joins will be required. Something much simpler would be far preferable. Edit: Expanded description to clear up some misconceptions.
5
1
0.022219
0
false
140,275
0
2,773
6
0
0
140,026
I think we're going to need a bit more information here. Let me know if any of the following is based on incorrect assumptions. First of all, as you pointed out yourself, there already exists a DSL for selecting rows from arbitrary tables-- it is called "SQL". Since you don't want to reinvent SQL, I'm assuming that you only need to query from a single table with a fixed format. If this is the case, you probably don't need to implement a DSL (although that's certainly one way to go); it may be easier, if you are used to Object Orientation, to create a Filter object. More specifically, a "Filter" collection that would hold one or more SelectionCriterion objects. You can implement these to inherit from one or more base classes representing types of selections (Range, LessThan, ExactMatch, Like, etc.) Once these base classes are in place, you can create column-specific inherited versions which are appropriate to that column. Finally, depending on the complexity of the queries you want to support, you'll want to implement some kind of connective glue to handle AND and OR and NOT linkages between the various criteria. If you feel like it, you can create a simple GUI to load up the collection; I'd look at the filtering in Excel as a model, if you don't have anything else in mind. Finally, it should be trivial to convert the contents of this Collection to the corresponding SQL, and pass that to the database. However: if what you are after is simplicity, and your users understand SQL, you could simply ask them to type in the contents of a WHERE clause, and programmatically build up the rest of the query. From a security perspective, if your code has control over the columns selected and the FROM clause, and your database permissions are set properly, and you do some sanity checking on the string coming in from the users, this would be a relatively safe option.
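A rough sketch of the Filter/criterion idea described above (all class, column and value names are invented for illustration); each criterion renders itself as a parameterized WHERE fragment:

```python
class Criterion(object):
    def __init__(self, column, value):
        self.column, self.value = column, value

class LessThan(Criterion):
    def to_sql(self):
        return "%s < ?" % self.column, [self.value]

class ExactMatch(Criterion):
    def to_sql(self):
        return "%s = ?" % self.column, [self.value]

class Filter(object):
    """ANDs a list of criteria together into a WHERE clause."""
    def __init__(self, criteria):
        self.criteria = criteria

    def to_sql(self):
        fragments, params = [], []
        for criterion in self.criteria:
            sql, values = criterion.to_sql()
            fragments.append(sql)
            params.extend(values)
        return " AND ".join(fragments), params

f = Filter([LessThan("price", 100), ExactMatch("category", "book")])
where, params = f.to_sql()
# -> "price < ? AND category = ?", [100, 'book']
```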
1
0
0
Writing a Domain Specific Language for selecting rows from a table
9
python,database,algorithm,dsl
0
2008-09-26T14:56:00.000
I've seen a number of postgresql modules for python like pygresql, pypgsql, psyco. Most of them are Python DB API 2.0 compliant, some are not being actively developed anymore. Which module do you recommend? Why?
28
0
0
0
false
1,579,851
0
15,582
2
0
0
144,448
I use only psycopg2 and have had no problems with it.
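For reference, basic psycopg2 usage looks like this (the connection parameters, table and column names are placeholders):

```python
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb",
                        user="me", password="secret")
cur = conn.cursor()
# Parameterized query: psycopg2 handles quoting/escaping of the value.
cur.execute("SELECT id, name FROM items WHERE name = %s", ("widget",))
for row in cur.fetchall():
    print(row)
conn.commit()
cur.close()
conn.close()
```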
1
0
0
Python PostgreSQL modules. Which is best?
6
python,postgresql,module
0
2008-09-27T20:55:00.000
I've seen a number of postgresql modules for python like pygresql, pypgsql, psyco. Most of them are Python DB API 2.0 compliant, some are not being actively developed anymore. Which module do you recommend? Why?
28
0
0
0
false
145,801
0
15,582
2
0
0
144,448
Psycopg1 is known for better performance in heavily threaded environments (like web applications) than Psycopg2, although it is no longer maintained. Both are well written and rock solid; I'd choose one of these two depending on the use case.
1
0
0
Python PostgreSQL modules. Which is best?
6
python,postgresql,module
0
2008-09-27T20:55:00.000
I'm trying to develop an app using TurboGears and SQLAlchemy. There is already an existing app using kinterbasdb directly under mod_wsgi on the same server. When both apps are used, neither seems to recognize that kinterbasdb is already initialized. Is there something non-obvious I am missing about using SQLAlchemy and kinterbasdb in separate apps? Does anyone have suggestions for making sure only one instance of kinterbasdb gets initialized and both apps use that instance?
1
2
0.379949
0
false
175,634
0
270
1
0
0
155,029
I thought I posted my solution already... Modifying both apps to run under WSGIApplicationGroup ${GLOBAL} in their httpd conf file and patching sqlalchemy.databases.firebird.py to check whether self.dbapi.initialized is True before calling self.dbapi.init(... was the only way I could manage to get this scenario up and running. The SQLAlchemy 0.4.7 patch:

```diff
diff -Naur SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py
--- SQLAlchemy-0.4.7/lib/sqlalchemy/databases/firebird.py      2008-07-26 12:43:52.000000000 -0400
+++ SQLAlchemy-0.4.7.new/lib/sqlalchemy/databases/firebird.py  2008-10-01 10:51:22.000000000 -0400
@@ -291,7 +291,8 @@
         global _initialized_kb
         if not _initialized_kb and self.dbapi is not None:
             _initialized_kb = True
-            self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
+            if not self.dbapi.initialized:
+                self.dbapi.init(type_conv=type_conv, concurrency_level=concurrency_level)
         return ([], opts)

     def create_execution_context(self, *args, **kwargs):
```
1
1
0
SQLAlchemy and kinterbasdb in separate apps under mod_wsgi
1
python,sqlalchemy,kinterbasdb
0
2008-09-30T20:47:00.000
I have a table that looks something like this:

| word | big | expensive | smart | fast |
|---|---|---|---|---|
| dog | 9 | -10 | -20 | 4 |
| professor | 2 | 4 | 40 | -7 |
| ferrari | 7 | 50 | 0 | 48 |
| alaska | 10 | 0 | 1 | 0 |
| gnat | -3 | 0 | 0 | 0 |

The + and - values are associated with the word, so professor is smart and dog is not smart. Alaska is big, as a proportion of the total value associated with its entries, and the opposite is true of gnat. Is there a good way to get the absolute value of the number farthest from zero, and some token whether absolute value =/= value? Relatedly, how might I calculate whether the results for a given value are proportionately large with respect to the other values? I would write something to format the output to the effect of: "dog: not smart, probably not expensive; professor: smart; ferrari: fast, expensive; alaska: big; gnat: probably small." (The formatting is not a question, just an illustration, I am stuck on the underlying queries.) Also, the rest of the program is python, so if there is any python solution with normal dbapi modules or a more abstract module, any help appreciated.
1
0
0
0
false
177,302
0
4,588
1
0
0
177,284
Can you use the built-in database aggregate functions like MAX(column)?
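For example, with SQLite (using the column names from the question's table), the scalar MAX/MIN and ABS functions can pick out the per-row value farthest from zero, and comparing it with the most negative value tells you whether that extreme was negative; other engines have similar functions (e.g. GREATEST). This is only a sketch of the idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT, big INT, expensive INT, smart INT, fast INT)")
conn.executemany("INSERT INTO words VALUES (?,?,?,?,?)", [
    ("dog", 9, -10, -20, 4),
    ("professor", 2, 4, 40, -7),
])

rows = conn.execute("""
    SELECT word,
           MAX(ABS(big), ABS(expensive), ABS(smart), ABS(fast)) AS extreme_abs,
           MIN(big, expensive, smart, fast) AS most_negative
    FROM words
""").fetchall()

for word, extreme_abs, most_negative in rows:
    # The extreme is negative when the most negative entry matches the largest magnitude.
    negative = most_negative < 0 and abs(most_negative) == extreme_abs
    print(word, extreme_abs, "negative" if negative else "positive")
```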
1
0
0
SQL Absolute value across columns
5
python,mysql,sql,oracle,postgresql
0
2008-10-07T05:06:00.000
Sometimes in our production environment the connection between a service (a Python program that uses MySQLdb) and the MySQL server gets flaky: some packets are lost, some black magic happens, and .execute() on a MySQLdb.Cursor object never ends (or takes a very long time to end). This is very bad because it wastes service worker threads. Sometimes it exhausts the worker pool and the service stops responding altogether. So the question is: is there a way to interrupt a MySQLdb.Connection.execute operation after a given amount of time?
2
2
0.197375
0
false
196,308
0
2,995
2
0
0
196,217
If the communication is such a problem, consider writing a 'proxy' that receives your SQL commands over the flaky connection and relays them to the MySQL server on a reliable channel (maybe running on the same box as the MySQL server). This way you have total control over failure detection and retrying.
1
0
0
MySQLdb execute timeout
2
python,mysql,timeout
0
2008-10-12T22:27:00.000
Sometimes in our production environment the connection between a service (a Python program that uses MySQLdb) and the MySQL server gets flaky: some packets are lost, some black magic happens, and .execute() on a MySQLdb.Cursor object never ends (or takes a very long time to end). This is very bad because it wastes service worker threads. Sometimes it exhausts the worker pool and the service stops responding altogether. So the question is: is there a way to interrupt a MySQLdb.Connection.execute operation after a given amount of time?
2
1
0.099668
0
false
196,891
0
2,995
2
0
0
196,217
You need to analyse exactly what the problem is. MySQL connections should eventually timeout if the server is gone; TCP keepalives are generally enabled. You may be able to tune the OS-level TCP timeouts. If the database is "flaky", then you definitely need to investigate how. It seems unlikely that the database really is the problem, more likely that networking in between is. If you are using (some) stateful firewalls of any kind, it's possible that they're losing some of the state, thus causing otherwise good long-lived connections to go dead. You might want to consider changing the idle timeout parameter in MySQL; otherwise, a long-lived, unused connection may go "stale", where the server and client both think it's still alive, but some stateful network element in between has "forgotten" about the TCP connection. An application trying to use such a "stale" connection will have a long wait before receiving an error (but it should eventually).
1
0
0
MySQLdb execute timeout
2
python,mysql,timeout
0
2008-10-12T22:27:00.000
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob? (I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
40
2
0.028564
0
false
198,763
0
31,019
6
0
0
198,692
Since Pickle can dump your object graph to a string, it should be possible. Be aware, though, that TEXT fields in SQLite use the database encoding, so you might need to convert it to a simple string before you un-pickle.
1
0
1
Can I pickle a python dictionary into a sqlite3 text field?
14
python,sqlite,pickle
0
2008-10-13T19:11:00.000
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob? (I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
40
5
0.071307
0
false
198,767
0
31,019
6
0
0
198,692
Pickle has both text and binary output formats. If you use the text-based format you can store it in a TEXT field, but it'll have to be a BLOB if you use the (more efficient) binary format.
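A small illustration of both options with sqlite3 (the table and key names are invented):

```python
import pickle
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (key TEXT, as_text TEXT, as_blob BLOB)")

data = {"a": 1, "b": [2, 3]}

# Protocol 0 is the ASCII-safe, text-based format; higher protocols are binary.
text_pickle = pickle.dumps(data, protocol=0).decode("ascii")
blob_pickle = sqlite3.Binary(pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL))

conn.execute("INSERT INTO cache VALUES (?, ?, ?)", ("demo", text_pickle, blob_pickle))

row = conn.execute("SELECT as_text, as_blob FROM cache WHERE key = ?", ("demo",)).fetchone()
print(pickle.loads(row[0].encode("ascii")))  # round-trip through the TEXT column
print(pickle.loads(row[1]))                  # round-trip through the BLOB column
```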
1
0
1
Can I pickle a python dictionary into a sqlite3 text field?
14
python,sqlite,pickle
0
2008-10-13T19:11:00.000
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob? (I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
40
2
0.028564
0
false
198,770
0
31,019
6
0
0
198,692
If a dictionary can be pickled, it can be stored in a text/blob field as well. Just be aware of dictionaries that can't be pickled (i.e. that contain unpicklable objects).
1
0
1
Can I pickle a python dictionary into a sqlite3 text field?
14
python,sqlite,pickle
0
2008-10-13T19:11:00.000
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob? (I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
40
2
0.028564
0
false
198,829
0
31,019
6
0
0
198,692
Yes, you can store a pickled object in a TEXT or BLOB field in an SQLite3 database, as others have explained. Just be aware that some objects cannot be pickled. The built-in container types can (dict, set, list, tuple, etc.), but some objects, such as file handles, refer to state that is external to their own data structures, and other extension types have similar problems. Since a dictionary can contain arbitrary nested data structures, it might not be pickle-able.
1
0
1
Can I pickle a python dictionary into a sqlite3 text field?
14
python,sqlite,pickle
0
2008-10-13T19:11:00.000
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob? (I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
40
1
0.014285
0
false
199,190
0
31,019
6
0
0
198,692
SpoonMeiser is correct: you need to have a strong reason to pickle into a database. It's not difficult to write Python objects that implement persistence with SQLite, and then you can use the SQLite CLI to fiddle with the data as well. In my experience that is worth the extra bit of work, since many debug and admin functions can simply be performed from the CLI rather than writing specific Python code. In the early stages of a project, I did what you propose and ended up re-writing with a Python class for each business object (note: I didn't say for each table!). This way the body of the application can focus on "what" needs to be done rather than "how" it is done.
1
0
1
Can I pickle a python dictionary into a sqlite3 text field?
14
python,sqlite,pickle
0
2008-10-13T19:11:00.000
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob? (I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
40
23
1.2
0
true
198,748
0
31,019
6
0
0
198,692
If you want to store a pickled object, you'll need to use a blob, since it is binary data. However, you can, say, base64-encode the pickled object to get a string that can be stored in a text field. Generally, though, doing this sort of thing is indicative of bad design: since you're storing opaque data, you lose the ability to use SQL to do any useful manipulation on that data. Although without knowing what you're actually doing, I can't really make a moral call on it.
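The base64 route mentioned above looks roughly like this (again with an invented table):

```python
import base64
import pickle
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (key TEXT, value TEXT)")

data = {"answer": 42}
# Binary pickle -> base64 text, safe to store in a TEXT column.
encoded = base64.b64encode(pickle.dumps(data)).decode("ascii")
conn.execute("INSERT INTO cache VALUES (?, ?)", ("demo", encoded))

stored = conn.execute("SELECT value FROM cache WHERE key = ?", ("demo",)).fetchone()[0]
print(pickle.loads(base64.b64decode(stored)))
```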
1
0
1
Can I pickle a python dictionary into a sqlite3 text field?
14
python,sqlite,pickle
0
2008-10-13T19:11:00.000
For a website like Reddit, with lots of up/down votes and lots of comments per topic, what should I go with: Lighttpd/PHP or Lighttpd/CherryPy/Genshi/SQLAlchemy? And for the database, which would scale better / be fastest: MySQL (4.1 or 5?) or PostgreSQL?
7
2
0.07983
0
false
244,836
1
1,670
4
0
0
204,802
I would go with nginx + php + xcache + postgresql
1
0
0
What would you recommend for a high traffic ajax intensive website?
5
php,python,lighttpd,cherrypy,high-load
0
2008-10-15T13:57:00.000
For a website like Reddit, with lots of up/down votes and lots of comments per topic, what should I go with: Lighttpd/PHP or Lighttpd/CherryPy/Genshi/SQLAlchemy? And for the database, which would scale better / be fastest: MySQL (4.1 or 5?) or PostgreSQL?
7
2
0.07983
0
false
204,854
1
1,670
4
0
0
204,802
We're going to need more data. Jeff had a few articles on the same problems, and the answer was to wait until you hit a performance issue. To start with: who is hosting, and what do they have available? What are your in-house skill sets? Are you going to be hiring an outside firm? What do they recommend? Is this a brand-new project with a team willing to learn a new framework? The second thing is to do some mockups: how is the interface going to work, and what data does it need to load and persist? The idea is to keep the traffic between the web and DB sides down, e.g. no chatty pages with lots of queries. Once you have a better idea of the data requirements and flow, work on the database design. There are plenty of rules to follow, but one of the better ones is to follow normalization rules (yes, I'm a DB guy, why?). Now that you have a couple of pages built, run your tests. Are you having a problem? If yes, look at what it is: page serving or DB pulls? Measure, then pick a course of action.
1
0
0
What would you recommend for a high traffic ajax intensive website?
5
php,python,lighttpd,cherrypy,high-load
0
2008-10-15T13:57:00.000
For a website like Reddit, with lots of up/down votes and lots of comments per topic, what should I go with: Lighttpd/PHP or Lighttpd/CherryPy/Genshi/SQLAlchemy? And for the database, which would scale better / be fastest: MySQL (4.1 or 5?) or PostgreSQL?
7
8
1.2
0
true
204,853
1
1,670
4
0
0
204,802
I can't speak to the MySQL/PostgreSQL question as I have limited experience with Postgres, but my Masters research project was about high-performance websites with CherryPy, and I don't think you'll be disappointed if you use CherryPy for your site. It can easily scale to thousands of simultaneous users on commodity hardware. Of course, the same could be said for PHP, and I don't know of any reasonable benchmarks comparing PHP and CherryPy performance. But if you were wondering whether CherryPy can handle a high-traffic site with a huge number of requests per second, the answer is definitely yes.
1
0
0
What would you recommend for a high traffic ajax intensive website?
5
php,python,lighttpd,cherrypy,high-load
0
2008-10-15T13:57:00.000
For a website like Reddit, with lots of up/down votes and lots of comments per topic, what should I go with: Lighttpd/PHP or Lighttpd/CherryPy/Genshi/SQLAlchemy? And for the database, which would scale better / be fastest: MySQL (4.1 or 5?) or PostgreSQL?
7
3
0.119427
0
false
205,425
1
1,670
4
0
0
204,802
On the DB question, I'd say PostgreSQL scales better and has better data integrity than MySQL. For a small site MySQL might be faster, but from what I've heard it slows significantly as the size of the database grows. (Note: I've never used MySQL for a large database, so you should probably get a second opinion about its scalability.) But PostgreSQL definitely scales well, and would be a good choice for a high traffic site.
1
0
0
What would you recommend for a high traffic ajax intensive website?
5
php,python,lighttpd,cherrypy,high-load
0
2008-10-15T13:57:00.000
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production. What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better). I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn). Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
13
0
0
0
false
214,623
0
31,269
5
0
0
211,501
Yes, I was nuking out the problem. All I needed to do was check for the file and catch the IOError if it didn't exist. Thanks for all the other answers. They may come in handy in the future.
1
0
0
Using SQLite in a Python program
8
python,exception,sqlite
0
2008-10-17T09:02:00.000
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production. What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better). I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn). Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
13
3
0.07486
0
false
211,539
0
31,269
5
0
0
211,501
Working with raw SQL is horrible in any language I've picked up. SQLAlchemy has turned out to be the easiest of them to use, because querying and committing with it are so clean and trouble-free. Here are the basic steps for actually using SQLAlchemy in your app (better details can be found in the documentation): provide table definitions and create ORM mappings; load the database; ask it to create tables from the definitions (it won't do so if they already exist); create a session maker (optional); create a session. After creating a session, you can commit and query from the database.
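Roughly, those steps look like this with the declarative extension (the model and database URL are placeholders):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Product(Base):                      # table definition + ORM mapping
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

engine = create_engine("sqlite:///app.db")   # "load" the database
Base.metadata.create_all(engine)             # creates tables only if missing

Session = sessionmaker(bind=engine)          # session maker
session = Session()                          # session

session.add(Product(name="widget"))
session.commit()
print(session.query(Product).count())
```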
1
0
0
Using SQLite in a Python program
8
python,exception,sqlite
0
2008-10-17T09:02:00.000
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production. What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better). I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn). Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
13
7
1
0
false
211,573
0
31,269
5
0
0
211,501
SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS so that the commands only take effect if the table has not already been created. This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you. The main thing I would still be worried about is that executing CREATE TABLE IF NOT EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) variable saying whether it has already created the database, so it runs the CREATE TABLE script once per run. This would still allow you to delete the database and start over during debugging.
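A minimal sketch of that pattern with the sqlite3 module (the table schema here is invented):

```python
import sqlite3

conn = sqlite3.connect("app.db")   # creates the file if it does not exist yet

# Safe to run on every startup: does nothing if the table is already there.
conn.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        id INTEGER PRIMARY KEY,
        value REAL
    )
""")
conn.commit()
```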
1
0
0
Using SQLite in a Python program
8
python,exception,sqlite
0
2008-10-17T09:02:00.000
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production. What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better). I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn). Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
13
29
1
0
false
211,660
0
31,269
5
0
0
211,501
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements; SQLite is just a file you access with SQL, so it's much simpler. Do the following.

Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that:

CREATE TABLE REVISION (RELEASE_NUMBER CHAR(20));

In your application, connect to your database normally and execute a simple query against the revision table. Here's what can happen:

- The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
- The query succeeds but returns no rows, or the release number is lower than expected: your database exists but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.
- The query succeeds and the release number is the expected value: do nothing more, your database is configured correctly.
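A hedged Python sketch of that revision check with sqlite3; the EXPECTED_RELEASE value and the build_schema/migrate helpers are placeholders for your own scripts:

```python
import sqlite3

EXPECTED_RELEASE = "1.2"

def check_schema(conn):
    try:
        row = conn.execute("SELECT RELEASE_NUMBER FROM REVISION").fetchone()
    except sqlite3.OperationalError:
        build_schema(conn)             # database doesn't exist yet: create it
        return
    if row is None or row[0] < EXPECTED_RELEASE:
        migrate(conn, row and row[0])  # out of date: run DROP/CREATE/ALTER scripts
    # otherwise: up to date, nothing to do

def build_schema(conn):
    conn.execute("CREATE TABLE REVISION (RELEASE_NUMBER CHAR(20))")
    conn.execute("INSERT INTO REVISION VALUES (?)", (EXPECTED_RELEASE,))
    conn.commit()

def migrate(conn, old_release):
    pass  # placeholder for your migration scripts
```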
1
0
0
Using SQLite in a Python program
8
python,exception,sqlite
0
2008-10-17T09:02:00.000
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production. What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better). I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn). Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
13
13
1.2
0
true
211,534
0
31,269
5
0
0
211,501
AFAIK an SQLite database is just a file. To check if the database exists, check for the file's existence. When you open an SQLite database, it will automatically create one if the file that backs it up is not in place. If you try to open a file as a sqlite3 database that is NOT a database, you will get this: "sqlite3.DatabaseError: file is encrypted or is not a database". So check whether the file exists, and also make sure to catch the exception in case the file is not a sqlite3 database.
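In code, the existence check plus the exception guard could look like this (file, table and column names are placeholders):

```python
import os
import sqlite3

db_path = "app.db"
already_there = os.path.exists(db_path)

try:
    conn = sqlite3.connect(db_path)
    conn.execute("SELECT name FROM sqlite_master LIMIT 1")  # forces a real read
except sqlite3.DatabaseError:
    raise SystemExit("%s exists but is not a SQLite database" % db_path)

if not already_there:
    # First run: build the schema.
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
    conn.commit()
```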
1
0
0
Using SQLite in a Python program
8
python,exception,sqlite
0
2008-10-17T09:02:00.000
From what I understand, the parent attribute of a db.Model (typically defined/passed in the constructor call) allows you to define hierarchies in your data models. As a result, this increases the size of the entity group. However, it's not very clear to me why we would want to do that. Is this strictly for ACID compliance? I would like to see scenarios where each is best suited or more appropriate.
10
15
1.2
0
true
216,187
1
1,067
1
1
0
215,570
There are several differences:

- All entities with the same ancestor are in the same entity group. Transactions can only affect entities inside a single entity group, and all writes to a single entity group are serialized, so throughput is limited.
- The parent entity is set on creation and is fixed. References can be changed at any time.
- With reference properties, you can only query for direct relationships, but with parent properties you can use the .ancestor() filter to find everything (directly or indirectly) descended from a given ancestor.
- Each entity has only a single parent, but can have multiple reference properties.
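With the old google.appengine.ext.db API this question is about, the two patterns look roughly like this (the model names are invented, and this only runs inside the App Engine SDK):

```python
from google.appengine.ext import db

class Author(db.Model):
    name = db.StringProperty()

class Book(db.Model):
    title = db.StringProperty()
    editor = db.ReferenceProperty(Author)   # reference: changeable, direct relationship only

author = Author(name="Alice")
author.put()

# Parent is fixed at creation time and puts the Book in the author's entity group.
book = Book(parent=author, title="Example", editor=author)
book.put()

# Ancestor query finds everything descended (directly or indirectly) from the author.
books = Book.all().ancestor(author).fetch(10)
```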
1
0
0
What's the difference between a parent and a reference property in Google App Engine?
2
python,api,google-app-engine
0
2008-10-18T21:12:00.000
I need to update data in an MSSQL 2005 database, so I have decided to use adodbapi, which is supposed to come built into the standard installation of Python 2.1.1 and greater. It needs pywin32 to work correctly, and the OpenOffice Python 2.3 installation does not have pywin32 built into it. It also seems like this built-in Python installation does not have adodbapi, as I get an error when I do import adodbapi. Any suggestions on how to get both pywin32 and adodbapi installed into this OpenOffice 2.4 Python installation? Thanks. Oh yeah, I tried those ways; annoyingly, nothing worked. So I have reverted to Jython, so that I can access OpenOffice for its conversion capabilities along with decent database access. Thanks for the help.
0
1
0.066568
0
false
239,487
0
833
1
0
0
239,009
Maybe the best way to install pywin32 is to place it in (openofficedir)\program\python-core-2.3.4\lib\site-packages. It is easy if you have a Python 2.3 installation (with pywin32 installed) under C:\python2.3: move C:\python2.3\Lib\site-packages\ to your (openofficedir)\program\python-core-2.3.4\lib\site-packages.
1
0
0
getting pywin32 to work inside open office 2.4 built in python 2.3 interpreter
3
python,openoffice.org,pywin32,adodbapi
0
2008-10-27T03:32:00.000
With SQLAlchemy, is there a way to know beforehand whether a relation would be lazy-loaded? For example, given a lazy parent->children relation and an instance X of "parent", I'd like to know if "X.children" is already loaded, without triggering the query.
16
5
1.2
0
true
261,191
0
3,701
1
0
0
258,775
I think you could look at the child's __dict__ attribute dictionary to check if the data is already there or not.
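In practice that check is a one-liner: SQLAlchemy only populates the instance's __dict__ once a lazy relation has actually been loaded (the relation name 'children' is taken from the question):

```python
def children_loaded(parent_obj):
    # An unloaded lazy relation is simply absent from the instance dict,
    # so this never triggers the query.
    return "children" in parent_obj.__dict__
```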
1
0
0
How to find out if a lazy relation isn't loaded yet, with SQLAlchemy?
3
python,sqlalchemy
0
2008-11-03T14:28:00.000
I want to make my Python library that works with MySQLdb able to detect deadlocks and try again. I believe I've coded a good solution, and now I want to test it. Any ideas for the simplest queries I could run with MySQLdb to create a deadlock condition? System info: MySQL 5.0.19, client 5.1.11, Windows XP, Python 2.4 / MySQLdb 1.2.1 p2.
10
1
0.039979
0
false
270,449
0
7,080
1
0
0
269,676
You can always run LOCK TABLES tablename WRITE from another session (the mysql CLI, for instance). That might do the trick. The table will remain locked until you release the lock or disconnect the session.
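A small script that holds such a lock from Python with MySQLdb, so that the code under test (run separately) blocks and exercises its timeout/retry path. Note this produces a lock wait rather than a true InnoDB deadlock, which the answer suggests may be enough for testing; the connection parameters and table name are placeholders:

```python
import time
import MySQLdb

# Hold a write lock on the table so that statements issued by the code under
# test (running in another process) block until we let go.
blocker = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")
cur = blocker.cursor()
cur.execute("LOCK TABLES orders WRITE")
print("Lock held on 'orders'; run your timeout/retry test now.")
time.sleep(60)                 # keep the lock for a minute
cur.execute("UNLOCK TABLES")
blocker.close()
```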
1
0
0
How can I Cause a Deadlock in MySQL for Testing Purposes
5
python,mysql,database,deadlock
0
2008-11-06T18:06:00.000
I'm starting a web project that likely should be fine with SQLite. I have SQLObject on top of it, but I'm thinking long term here: if this project should require a more robust database (e.g. one able to handle high traffic), I will need to have a transition plan ready. My questions: How easy is it to transition from one DB (SQLite) to another (MySQL, Firebird, or PostgreSQL) under SQLObject? Does SQLObject provide any tools to make such a transition easier? Is it simply a matter of taking the objects I've defined and calling createTable? What about having multiple SQLite databases instead, e.g. one per visitor group? Does SQLObject provide a mechanism for handling this scenario and, if so, what is the mechanism to use? Thanks, Sean
1
2
0.132549
0
false
275,676
0
876
1
0
0
275,572
Your success with createTable() will depend on your existing underlying table schema / data types. In other words, how well SQLite maps to the database you choose and how SQLObject decides to use your data types. The safest option may be to create the new database by hand. Then you'll have to deal with data migration, which may be as easy as instantiating two SQLObject database connections over the same table definitions. Why not just start with the more full-featured database?
1
0
0
Database change underneath SQLObject
3
python,mysql,database,sqlite,sqlobject
0
2008-11-09T03:46:00.000
I have a Postgres database in production (which contains a lot of data). Now I need to modify the model of the TurboGears app to add a couple of new tables to the database. How do I do this? I am using SQLAlchemy.
1
1
1.2
0
true
301,708
1
889
1
0
0
301,566
This always works and requires little thinking -- only patience.

1. Make a backup. Actually make a backup. Everyone skips step 1 thinking that they have a backup, but they can never find it or work with it. Don't trust any backup that you can't recover from.
2. Create a new database schema. Define your new structure from the ground up in the new schema. Ideally, you'll run a DDL script that builds the new schema. Don't have a script to build the schema? Create one and put it under version control. With SA, you can define your tables and it can build your schema for you. This is ideal, since you have your schema under version control in Python.
3. Move data.
   a. For tables which did not change structure, move data from old schema to new schema using simple INSERT/SELECT statements.
   b. For tables which did change structure, develop INSERT/SELECT scripts to move the data from old to new. Often, this can be a single SQL statement per new table. In some cases, it has to be a Python loop with two open connections.
   c. For new tables, load the data.
4. Stop using the old schema. Start using the new schema. Find every program that used the old schema and fix the configuration. Don't have a list of applications? Make one. Seriously -- it's important. Applications have hard-coded DB configurations? Fix that, too, while you're at it. Either create a common config file, or use some common environment variable or something to (a) assure consistency and (b) centralize the notion of "production".

You can do this kind of procedure any time you do major surgery. It never touches the old database except to extract the data.
1
0
0
How to update turbogears application production database
4
python,database,postgresql,data-migration,turbogears
0
2008-11-19T11:00:00.000
I'm using Django and Python 2.6, and I want to grow my application using a MySQL backend. The problem is that there isn't a win32 package for MySQLdb for Python 2.6. Now I'm no hacker, but I thought I might compile it myself using MSVC++ 9 Express. But I quickly run into a problem: the compiler can't find config_win.h, which I assume is a header file for MySQL so that the MySQLdb package can know what calls it can make into MySQL. Am I right? And if so, where do I get the header files for MySQL?
9
2
1.2
0
true
317,716
0
3,446
1
0
0
316,484
I think that the header files are shipped with MySQL; just make sure you check the appropriate options when installing (I think that sources and headers are under "developer components" in the installation dialog).
1
0
0
Problem compiling MySQLdb for Python 2.6 on Win32
4
python,mysql,winapi
0
2008-11-25T06:14:00.000
What is the sqlalchemy equivalent column type for 'money' and 'OID' column types in Postgres?
8
3
1.2
0
true
405,923
0
10,565
1
0
0
359,409
We've never had an "OID" type specifically, though we've supported the concept of an implicit "OID" column on every table through the 0.4 series, primarily for the benefit of Postgres. However, since user-table-defined OID columns are deprecated in Postgres, and we in fact never really used the OID feature that was present, we've removed this feature from the library. If a particular type is not supplied in SQLA, as an alternative to specifying a custom type, you can always use the NullType, which just means SQLA doesn't know anything in particular about that type. If psycopg2 sends/receives a useful Python type for the column already, there's not really any need for an SQLA type object, save for issuing CREATE TABLE statements.
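For example, a table definition for an already-existing table could leave the money column as NullType; the one thing you can't do with it, as noted above, is emit CREATE TABLE for that column (table and column names here are invented):

```python
from sqlalchemy import Table, Column, Integer, MetaData
from sqlalchemy.types import NullType

metadata = MetaData()

accounts = Table(
    "accounts", metadata,
    Column("id", Integer, primary_key=True),
    # SQLAlchemy won't try to interpret this column's type; whatever Python
    # value psycopg2 already adapts for the Postgres "money" column passes through.
    Column("balance", NullType()),
)
```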
1
0
0
What is the sqlalchemy equivalent column type for 'money' and 'OID' in Postgres?
3
python,postgresql,sqlalchemy
0
2008-12-11T13:52:00.000
How do I connect to a MySQL database using a python program?
1,242
1
0.008
0
false
64,762,149
0
1,369,727
1
0
0
372,885
First step, get the library: open a terminal and execute pip install mysql-connector-python (the package that provides mysql.connector). After the installation, go to the second step.
Second step, import the library: open your Python file and write the following code: import mysql.connector
Third step, connect to the server: write the following code: conn = mysql.connector.connect(host=your host name, like localhost or 127.0.0.1, username=your username, like root, password=your password)
Fourth step, make the cursor: making a cursor makes it easy for us to run queries. To make the cursor, use the following code: cursor = conn.cursor()
Executing queries: to execute a query you can do the following: cursor.execute(query). If the query changes anything in the table, you need to add the following code after the execution of the query: conn.commit()
Getting values from a query: if you want to get values from a query, you can do the following: cursor.execute('SELECT * FROM table_name') and then either for i in cursor: print(i) or for i in cursor.fetchall(): print(i). The fetchall() method returns a list with many tuples that contain the values that you requested, row after row.
Closing the connection: to close the connection you should use the following code: conn.close()
Handling exceptions: to handle exceptions you can wrap your logic in try: ... except mysql.connector.errors.Error: ... and deal with the error there.
Using a database: for example, if you have an account-creating system where you are storing the data in a database named blabla, you can just add a database parameter to the connect() method, like mysql.connector.connect(database = database name); don't remove the other information like host, username and password.
1
0
0
How do I connect to a MySQL Database in Python?
25
python,mysql
0
2008-12-16T21:49:00.000
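For readers following the step-by-step answer above, here is a minimal consolidated sketch. It assumes the mysql-connector-python package and made-up credentials and table names; substitute your own.

import mysql.connector  # installed with: pip install mysql-connector-python

# Placeholder connection details; replace host, user, password and database with your own.
conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="shop")
cursor = conn.cursor()

# Parameterized query; the driver fills in the %s placeholders safely.
cursor.execute("SELECT id, name FROM products WHERE price > %s", (10,))
for row in cursor.fetchall():
    print(row)

# Statements that change data need a commit.
cursor.execute("INSERT INTO products (name, price) VALUES (%s, %s)", ("widget", 42))
conn.commit()

conn.close()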
So, looking for a mysql-db-lib that is compatible with py3k/py3.0/py3000, any ideas? Google turned up nothing.
36
0
0
0
false
385,225
0
43,916
1
0
0
384,471
You're probably better off using Python 2.x at the moment. It's going to be a while before all Python packages are ported to 3.x, and I expect writing a library or application with 3.x at the moment would be quite frustrating.
1
0
0
MySQL-db lib for Python 3.x?
9
python,mysql,python-3.x
0
2008-12-21T13:37:00.000
I need to design a program using Python that will ask the user for a barcode. Then, using this barcode, it will search a MySQL database to find the corresponding product. I am a bit stuck on how to get started. Does anyone have any tips for me?
1
0
0
0
false
387,800
0
4,847
3
0
0
387,606
To start with, treat the barcode input as plain old text. It has been quite a while since I worked with barcode scanners, but I doubt they have changed that much; the older ones used to just piggyback on the keyboard input, so from a programming perspective the net result was a stream of characters in the keyboard buffer, and whether they were typed or scanned made no difference. If the device you are targeting differs from that, you will need to write something to deal with that before you get to the database query. If you have one of the devices to play with, plug it in, start Notepad, start scanning some barcodes and see what happens.
1
0
0
Using user input to find information in a Mysql database
4
python,sql,user-input
0
2008-12-22T22:37:00.000
I need to design a program using Python that will ask the user for a barcode. Then, using this barcode, it will search a MySQL database to find the corresponding product. I am a bit stuck on how to get started. Does anyone have any tips for me?
1
1
0.049958
0
false
387,622
0
4,847
3
0
0
387,606
A barcode is simply a graphical representation of a series of (alphanumeric) characters. So if you have a method for users to enter this code (a barcode scanner), then it's just an issue of querying the MySQL database for the character string.
1
0
0
Using user input to find information in a Mysql database
4
python,sql,user-input
0
2008-12-22T22:37:00.000
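As a rough illustration of the advice in the answers above (the scanner output arrives as plain keyboard text, so it can be read like any other input and used in a parameterized query), here is a small sketch. The driver choice and the products/barcode table and column names are assumptions, not part of the original question.

import mysql.connector  # assumed driver; MySQLdb would work the same way

conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="inventory")
cursor = conn.cursor()

# A scanner "types" the code followed by Enter, so a plain input() captures it.
code = input("Scan or type a barcode: ").strip()

# Parameterized query, so whatever the scanner sends cannot inject SQL.
cursor.execute("SELECT name, price FROM products WHERE barcode = %s", (code,))
row = cursor.fetchone()
print(row if row else "No product found for that barcode.")

conn.close()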
I need to design a program using Python that will ask the user for a barcode. Then, using this barcode, it will search a MySQL database to find the corresponding product. I am a bit stuck on how to get started. Does anyone have any tips for me?
1
0
0
0
false
387,694
0
4,847
3
0
0
387,606
That is a very ambiguous question. What you want can be done in many ways depending on what you actually want to do. How are your users going to enter the bar code? Are they going to use a bar code scanner? Are they entering the bar code numbers manually? Is this going to run on a desktop/laptop computer or is it going to run on a handheld device? Is the bar code scanner storing the bar codes for later retrieval or is it sending them directly to the computer. Will it send them through a USB cable or wireless?
1
0
0
Using user input to find information in a Mysql database
4
python,sql,user-input
0
2008-12-22T22:37:00.000
I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those "write something you need" to learn to program exercises. In case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions. Query Some WMI data Update that to the database Query Other WMI data Update that to the database Is it better to open one mysql connection at the beginning and leave it open or close the connection after each update? It seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.
2
7
1.2
0
true
387,932
0
1,201
3
0
0
387,619
"However, opening and closing the connection with each update seems more 'neat'. " It's also a huge amount of overhead -- and there's no actual benefit. Creating and disposing of connections is relatively expensive. More importantly, what's the actual reason? How does it improve, simplify, clarify? Generally, most applications have one connection that they use from when they start to when they stop.
1
0
0
Mysql Connection, one or many?
4
python,mysql
0
2008-12-22T22:40:00.000
I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those "write something you need" to learn to program exercises. In case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions. Query Some WMI data Update that to the database Query Other WMI data Update that to the database Is it better to open one mysql connection at the beginning and leave it open or close the connection after each update? It seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.
2
2
0.099668
0
false
387,735
0
1,201
3
0
0
387,619
I don't think that there is a "better" solution. It's too early to think about resources. And since WMI is quite slow (in comparison to the SQL connection), the db is not an issue. Just make it work, and then make it better. The good thing about working with an open connection here is that the "natural" solution is to use objects and not just functions, so it will be a learning experience (in case you are learning Python and not MySQL).
1
0
0
Mysql Connection, one or many?
4
python,mysql
0
2008-12-22T22:40:00.000
I'm writing a script in python which basically queries WMI and updates the information in a mysql database. One of those "write something you need" to learn to program exercises. In case something breaks in the middle of the script, for example, the remote computer turns off, it's separated out into functions. Query Some WMI data Update that to the database Query Other WMI data Update that to the database Is it better to open one mysql connection at the beginning and leave it open or close the connection after each update? It seems as though one connection would use less resources. (Although I'm just learning, so this is a complete guess.) However, opening and closing the connection with each update seems more 'neat'. Functions would be more stand alone, rather than depend on code outside that function.
2
1
0.049958
0
false
389,364
0
1,201
3
0
0
387,619
There are useful clues in S.Lott's and Igal Serban's answers. I think you should first find out your actual requirements and code accordingly. Just to mention a different strategy: some applications keep a pool of database (or whatever) connections and, when a transaction is needed, just pull one from that pool. It seems rather obvious you just need one connection for this kind of application. But you can still keep a pool of one connection and apply the following: whenever a database transaction is needed the connection is pulled from the pool and returned at the end. (optional) The connection is expired (and replaced by a new one) after a certain amount of time. (optional) The connection is expired after a certain amount of usage. (optional) The pool can check (by sending an inexpensive query) if the connection is alive before handing it over to the program. This is somewhat in between the single-connection and connection-per-transaction strategies.
1
0
0
Mysql Connection, one or many?
4
python,mysql
0
2008-12-22T22:40:00.000
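A very rough sketch of the "pool of one connection" idea described in the last answer above. It assumes the MySQLdb driver and an invented samples table; the liveness check and reconnect logic are deliberately simplified.

import MySQLdb

class OneConnectionPool:
    """Hands out a single shared connection, reconnecting if it has gone away."""

    def __init__(self, **connect_kwargs):
        self._kwargs = connect_kwargs
        self._conn = None

    def get(self):
        if self._conn is not None:
            try:
                self._conn.ping()  # inexpensive check that the server is still reachable
            except MySQLdb.OperationalError:
                self._conn = None  # stale connection; throw it away
        if self._conn is None:
            self._conn = MySQLdb.connect(**self._kwargs)
        return self._conn

pool = OneConnectionPool(host="localhost", user="wmi", passwd="secret", db="inventory")

def record_sample(name, value):
    conn = pool.get()
    cur = conn.cursor()
    cur.execute("UPDATE samples SET value = %s WHERE name = %s", (value, name))
    conn.commit()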
I am using Python to read a currency value from Excel. The value returned from the range.Value method is a tuple that I don't know how to parse. For example, the cell appears as $548,982, but in Python the value is returned as (1, 1194857614). How can I get the numerical amount from Excel, or how can I convert this tuple into the numerical value? Thanks!
1
0
0
0
false
390,304
0
586
1
0
0
390,263
I tried this with Excel 2007 and VBA, and it gives the correct value. 1) Try pasting this value into a new Excel workbook. 2) Press Alt + F11 to get to the VBA editor. 3) Press Ctrl + G to get to the Immediate window. 4) In the Immediate window, type ?cells("a1").Value, where "a1" is the cell into which you pasted the value. I suspect the cell contains some value or character that causes it to be interpreted this way. Post your observations here.
1
0
1
Interpreting Excel Currency Values
2
python,excel,pywin32
0
2008-12-23T22:37:00.000
I am currently analyzing a Wikipedia dump file; I am extracting a bunch of data from it using Python and persisting it into a PostgreSQL db. I am always trying to make things go faster, because this file is huge (18GB). In order to interface with PostgreSQL, I am using psycopg2, but this module seems to mimic many other such DBAPIs. Anyway, I have a question concerning cursor.executemany(command, values); it seems to me like executing an executemany once every 1000 values or so is better than calling cursor.execute(command % value) for each of these 5 million values (please confirm or correct me!). But, you see, I am using an executemany to INSERT 1000 rows into a table which has a UNIQUE integrity constraint; this constraint is not verified in Python beforehand, for this would either require me to SELECT all the time (which seems counterproductive) or require me to get more than 3 GB of RAM. All this to say that I count on Postgres to warn me when my script tries to INSERT an already existing row, by catching the psycopg2.DatabaseError. When my script detects such a non-UNIQUE INSERT, it calls connection.rollback() (which throws away up to 1000 rows every time, and kind of makes the executemany worthless) and then INSERTs all values one by one. Since psycopg2 is so poorly documented (as are so many great modules...), I cannot find an efficient and effective workaround. I have reduced the number of values INSERTed per executemany from 1000 to 100 in order to reduce the likelihood of a non-UNIQUE INSERT per executemany, but I am pretty certain there is a way to just tell psycopg2 to ignore these exceptions or to tell the cursor to continue the executemany. Basically, this seems like the kind of problem which has a solution so easy and popular that all I can do is ask in order to learn about it. Thanks again!
6
-1
-0.049958
0
false
675,865
0
7,742
2
0
0
396,455
Using a MERGE statement instead of an INSERT would solve your problem.
1
0
0
Python-PostgreSQL psycopg2 interface --> executemany
4
python,postgresql,database,psycopg
0
2008-12-28T17:51:00.000
I am currently analyzing a Wikipedia dump file; I am extracting a bunch of data from it using Python and persisting it into a PostgreSQL db. I am always trying to make things go faster, because this file is huge (18GB). In order to interface with PostgreSQL, I am using psycopg2, but this module seems to mimic many other such DBAPIs. Anyway, I have a question concerning cursor.executemany(command, values); it seems to me like executing an executemany once every 1000 values or so is better than calling cursor.execute(command % value) for each of these 5 million values (please confirm or correct me!). But, you see, I am using an executemany to INSERT 1000 rows into a table which has a UNIQUE integrity constraint; this constraint is not verified in Python beforehand, for this would either require me to SELECT all the time (which seems counterproductive) or require me to get more than 3 GB of RAM. All this to say that I count on Postgres to warn me when my script tries to INSERT an already existing row, by catching the psycopg2.DatabaseError. When my script detects such a non-UNIQUE INSERT, it calls connection.rollback() (which throws away up to 1000 rows every time, and kind of makes the executemany worthless) and then INSERTs all values one by one. Since psycopg2 is so poorly documented (as are so many great modules...), I cannot find an efficient and effective workaround. I have reduced the number of values INSERTed per executemany from 1000 to 100 in order to reduce the likelihood of a non-UNIQUE INSERT per executemany, but I am pretty certain there is a way to just tell psycopg2 to ignore these exceptions or to tell the cursor to continue the executemany. Basically, this seems like the kind of problem which has a solution so easy and popular that all I can do is ask in order to learn about it. Thanks again!
6
0
0
0
false
396,824
0
7,742
2
0
0
396,455
"When my script detects such a non-UNIQUE INSERT, it connection.rollback() (which makes ups to 1000 rows everytime, and kind of makes the executemany worthless) and then INSERTs all values one by one." The question doesn't really make a lot of sense. Does EVERY block of 1,000 rows fail due to non-unique rows? Does 1 block of 1,000 rows fail (out 5,000 such blocks)? If so, then the execute many helps for 4,999 out of 5,000 and is far from "worthless". Are you worried about this non-Unique insert? Or do you have actual statistics on the number of times this happens? If you've switched from 1,000 row blocks to 100 row blocks, you can -- obviously -- determine if there's a performance advantage for 1,000 row blocks, 100 row blocks and 1 row blocks. Please actually run the actual program with actual database and different size blocks and post the numbers.
1
0
0
Python-PostgreSQL psycopg2 interface --> executemany
4
python,postgresql,database,psycopg
0
2008-12-28T17:51:00.000
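A sketch of the fallback pattern the question describes: try the batch with executemany, and only when a UNIQUE violation aborts it, roll back and redo that batch row by row, skipping the duplicates. Table and column names are invented; on PostgreSQL 9.5 or later, INSERT ... ON CONFLICT DO NOTHING would avoid the retry entirely.

import psycopg2

conn = psycopg2.connect("dbname=wiki user=loader")  # placeholder connection string
INSERT = "INSERT INTO links (source, target) VALUES (%s, %s)"

def insert_batch(rows):
    cur = conn.cursor()
    try:
        cur.executemany(INSERT, rows)
        conn.commit()
    except psycopg2.IntegrityError:
        conn.rollback()  # the whole batch was undone; redo it one row at a time
        for row in rows:
            try:
                cur.execute(INSERT, row)
                conn.commit()
            except psycopg2.IntegrityError:
                conn.rollback()  # skip just the duplicate row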
I'm wondering, is it possible to make an SQL query that does the same thing as 'select products where barcode in table1 = barcode in table2'? I am writing this function in a Python program. Once that function is called, will the tables be joined permanently or just while that function is running? Thanks.
0
0
0
0
false
403,848
0
680
2
0
0
403,527
Here is an example of inner joining two tables based on a common field in both tables. SELECT table1.Products FROM table1 INNER JOIN table2 ON table1.barcode = table2.barcode WHERE table1.Products IS NOT NULL
1
0
0
Making a SQL Query in two tables
6
python,sql
0
2008-12-31T17:20:00.000
I'm wondering, is it possible to make an SQL query that does the same thing as 'select products where barcode in table1 = barcode in table2'? I am writing this function in a Python program. Once that function is called, will the tables be joined permanently or just while that function is running? Thanks.
0
0
0
0
false
403,904
0
680
2
0
0
403,527
Here's a way to talk yourself through table design in these cases, based on Object Role Modeling. (Yes, I realize this is only indirectly related to the question.) You have products and barcodes. Products are uniquely identified by Product Code (e.g. 'A2111'); barcodes are uniquely identified by Value (e.g. 1002155061). A Product has a Barcode. Questions: Can a product have no barcode? Can the same product have multiple barcodes? Can multiple products have the same barcode? (If you have any experience with UPC labels, you know the answer to all these is TRUE.) So you can make some assertions: A Product (code) has zero or more Barcode (value). A Barcode (value) has one or more Product (code) (assumption: barcodes don't have an independent existence if they aren't/haven't been/won't be related to products). Which leads directly (via your ORM model) to a schema with two tables: Product (ProductCode (PK), Description, etc.) and ProductBarcode (ProductCode (FK), BarcodeValue), with a two-part natural primary key, ProductCode + BarcodeValue, and you tie them together as described in the other answers. Similar assertions can be used to determine which fields go into the various tables in your design.
1
0
0
Making a SQL Query in two tables
6
python,sql
0
2008-12-31T17:20:00.000
What is the difference between these two apis? Which one faster, reliable using Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? pygresql?
13
5
1.2
0
true
413,259
1
15,364
4
0
0
413,228
For what it's worth, Django uses psycopg2.
1
0
0
PyGreSQL vs psycopg2
5
python,postgresql
0
2009-01-05T14:21:00.000
What is the difference between these two apis? Which one faster, reliable using Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? pygresql?
13
0
0
0
false
413,508
1
15,364
4
0
0
413,228
psycopg2 is partly written in C, so you can expect a performance gain, but on the other hand it is a bit harder to install. PyGreSQL is written in Python only, easy to deploy but slower.
1
0
0
PyGreSQL vs psycopg2
5
python,postgresql
0
2009-01-05T14:21:00.000
What is the difference between these two apis? Which one faster, reliable using Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? pygresql?
13
4
0.158649
0
false
592,846
1
15,364
4
0
0
413,228
"PyGreSQL is written in Python only, easy to deployed but slower." PyGreSQL contains a C-coded module, too. I haven't done speed tests, but they're not likely to be much different, as the real work will happen inside the database server.
1
0
0
PyGreSQL vs psycopg2
5
python,postgresql
0
2009-01-05T14:21:00.000
What is the difference between these two apis? Which one faster, reliable using Python DB API? Upd: I see two psql drivers for Django. The first one is psycopg2. What is the second one? pygresql?
13
2
0.07983
0
false
413,537
1
15,364
4
0
0
413,228
Licensing may be an issue for you. PyGreSQL is MIT-licensed. Psycopg2 is GPL-licensed. (As long as you are accessing psycopg2 in normal ways from Python, with no internal API and no direct C calls, this shouldn't cause you any headaches, and you can release your code under whatever license you like - but I am not a lawyer.)
1
0
0
PyGreSQL vs psycopg2
5
python,postgresql
0
2009-01-05T14:21:00.000
The beauty of ORM lulled me into a soporific sleep. I've got an existing Django app with a lack of database indexes. Is there a way to automatically generate a list of columns that need indexing? I was thinking maybe some middleware that logs which columns are involved in WHERE clauses? but is there anything built into MySQL that might help?
5
4
0.379949
0
false
438,700
1
620
1
0
0
438,559
No. Adding indexes willy-nilly to all "slow" queries will also slow down inserts, updates and deletes. Indexes are a balancing act between fast queries and fast changes. There is no general or "right" answer. There's certainly nothing that can automate this. You have to measure the improvement across your whole application as you add and change indexes.
1
0
0
Is there a way to automatically generate a list of columns that need indexing?
2
python,mysql,database,django,django-models
0
2009-01-13T10:36:00.000
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
2
0
0
0
false
494,119
0
2,512
6
0
0
439,759
Just to throw it out there... there are PHP frameworks utilizing MVC. Codeigniter does simple and yet powerful things. You can definitely separate the template layer from the logic layer.
1
0
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
7
php,python,postgresql
0
2009-01-13T16:47:00.000
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
2
0
0
0
false
439,793
0
2,512
6
0
0
439,759
I personally agree with the second and the third points in your post. Speaking about PHP, in my opinion you can use Python also for presentation, there are many solutions (Zope, Plone ...) based on Python.
1
0
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
7
php,python,postgresql
0
2009-01-13T16:47:00.000
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
2
0
0
0
false
439,818
0
2,512
6
0
0
439,759
Just skip PHP and use Python (with Django, as already noticed while I typed). Django already separates the layers as you mentioned. I have never used PgSQL myself, but I think it's mostly a matter of taste whether you prefer it over MySQL. It used to support more enterprise features than MySQL but I'm not sure if that's still true with MySQL 5.0 and 5.1. Transactions are supported in MySQL, anyway (you have to use the InnoDB table engine, however).
1
0
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
7
php,python,postgresql
0
2009-01-13T16:47:00.000
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
2
1
0.028564
0
false
440,496
0
2,512
6
0
0
439,759
I can only repeat what other peoples here already said : if you choose Python for the domain layer, you won't gain anything (quite on the contrary) using PHP for the presentation layer. Others already advised Django, and that might be a pretty good choice, but there's no shortage of good Python web frameworks.
1
0
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
7
php,python,postgresql
0
2009-01-13T16:47:00.000
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
2
1
0.028564
0
false
440,118
0
2,512
6
0
0
439,759
I'm going to assume that by "business application" you mean a web application hosted in an intranet environment as opposed to some sort of SaaS application on the internet. While you're in the process of architecting your application you need to consider the existing infrastructure and infrastructure support people of your employer/customer. Also, if the company is large enough to have things such as "approved software/hardware lists," you should be aware of those. Keep in mind that some elements of the list may be downright retarded. Don't let past mistakes dictate the architecture of your app, but in cases where they are reasonably sensible I would pick my battles and stick with your enterprise standard. This can be a real pain when you pick a development stack that really works best on Unix/Linux, and then someone tries to force onto a Windows server admined by someone who's never touched anything but ASP.NET applications. Unless there is a particular PHP module that you intend to use that has no Python equivalent, I would drop PHP and use Django. If there is a compelling reason to use PHP, then I'd drop Python. I'm having difficulty imagining a scenario where you would want to use both at the same time. As for PG versus MySQL, either works. Look at what you customer already has deployed, and if they have a bunch of one and little of another, pick that. If they have existing Oracle infrastructure you should consider using it. If they are an SQL Server shop...reconsider your stack and remember to pick your battles.
1
0
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
7
php,python,postgresql
0
2009-01-13T16:47:00.000
I'm looking for some quick thoughts about a business application I am looking to build. I'd like to separate the three layers of presentation, domain logic, and data using PHP, Python, and PostgreSQL, respectively. I would like to hear, possibly from other folks who have gone down this path before, if there are problems with this approach, if I am targeting the wrong tools, etc. I'm looking at PHP because it is widely used, fairly mature, and I can find ample people with skills in PHP interface design. I'm looking at Python because of the benefits of readable code, because I hear can find more Python programmers that also have subject-matter skills (in this case, finance), and it's an open source language. Plus, it seems easier to code with. I'm looking at PostgreSQL for the transaction-level features. MySQL is also an option here, but I don't need to debate this aspect. This is not a web application, although I would like to utilize a browser for the user interface. This is more of an Enterprise Application, but for a small business with moderate numbers of users (maybe 5-10) and a modest number of daily transactions. What is important is that we are able to upgrade the database or domain logic or interface separate from the other layers in the future. I'm NOT looking for a buy vs. build debate, as that's a different discussion. Thanks for any insight
2
0
0
0
false
440,098
0
2,512
6
0
0
439,759
Just to address the MySQL vs PgSQL issues - it shouldn't matter. They're both more than capable of the task, and any reasonable framework should isolate you from the differences relatively well. I think it's down to what you use already, what people have most experience in, and if there's a feature in one or the other you think you'd benefit from. If you have no preference, you might want to go with MySQL purely because it's more popular for web work. This translates to more examples, easier to find help, etc. I actually prefer the philosophy of PgSQL, but this isn't a good enough reason to blow against the wind.
1
0
0
Is a PHP, Python, PostgreSQL design suitable for a business application?
7
php,python,postgresql
0
2009-01-13T16:47:00.000
Example Problem: Entities: User contains name and a list of friends (User references) Blog Post contains title, content, date and Writer (User) Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries. SQL Solution: So in sql land it would be something like: select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date GAE solutions i can think of are: Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts. I don't believe either of these solutions will scale. Im sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing?
13
13
1
0
false
446,471
1
2,112
2
1
0
445,827
If you look at how the SQL solution you provided will be executed, it will go basically like this: Fetch a list of friends for the current user For each user in the list, start an index scan over recent posts Merge-join all the scans from step 2, stopping when you've retrieved enough entries You can carry out exactly the same procedure yourself in App Engine, by using the Query instances as iterators and doing a merge join over them. You're right that this will not scale well to large numbers of friends, but it suffers from exactly the same issues the SQL implementation has, it just doesn't disguise them as well: Fetching the latest 20 (for example) entries costs roughly O(n log n) work, where n is the number of friends.
1
0
0
GAE - How to live with no joins?
4
python,google-app-engine,join,google-cloud-datastore
0
2009-01-15T06:07:00.000
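To make the merge-join idea in the first answer concrete, here is a language-level sketch that merges one newest-first iterator per friend. The query objects are stand-ins for whatever per-friend datastore queries you build; heapq.merge with key/reverse needs Python 3.5 or later.

import heapq
from itertools import islice

def latest_posts(per_friend_queries, limit=10):
    # Each element of per_friend_queries is an iterator of posts for one friend,
    # already sorted newest-first (for example, a datastore query per friend).
    merged = heapq.merge(*per_friend_queries,
                         key=lambda post: post.date, reverse=True)
    return list(islice(merged, limit))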
Example Problem: Entities: User contains name and a list of friends (User references) Blog Post contains title, content, date and Writer (User) Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries. SQL Solution: So in sql land it would be something like: select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date GAE solutions i can think of are: Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts. I don't believe either of these solutions will scale. Im sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing?
13
1
0.049958
0
false
446,477
1
2,112
2
1
0
445,827
"Load user, loop through the list of friends and load their latest blog posts." That's all a join is -- nested loops. Some kinds of joins are loops with lookups. Most lookups are just loops; some are hashes. "Finally merge all the blog posts to find the latest 10 blog entries" That's a ORDER BY with a LIMIT. That's what the database is doing for you. I'm not sure what's not scalable about this; it's what a database does anyway.
1
0
0
GAE - How to live with no joins?
4
python,google-app-engine,join,google-cloud-datastore
0
2009-01-15T06:07:00.000
I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data. I am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? What if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?
8
13
1
0
false
2,664,479
0
2,618
3
0
0
451,952
It should be safe to do a repozo backup of the Data.fs followed by an rsync of the blobstorage directory, as long as the database doesn't get packed while those two operations are happening. This is because, at least when using blobs with FileStorage, modifications to a blob always results in the creation of a new file named based on the object id and transaction id. So if new or updated blobs are written after the Data.fs is backed up, it shouldn't be a problem, as the files that are referenced by the Data.fs should still be around. Deletion of a blob doesn't result in the file being removed until the database is packed, so that should be okay too. Performing a backup in a different order, or with packing during the backup, may result in a backup Data.fs that references blobs that are not included in the backup.
1
0
0
What is the correct way to backup ZODB blobs?
4
python,plone,zope,zodb,blobstorage
0
2009-01-16T20:51:00.000
I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data. I am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? What if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?
8
3
1.2
0
true
453,942
0
2,618
3
0
0
451,952
Backing up "blobstorage" will do it. No need for a special order or anything else, it's very simple. All operations in Plone are fully transactional, so hitting the backup in the middle of a transaction should work just fine. This is why you can do live backups of the ZODB. Without knowing what file system you're on, I'd guess that it should work as intended.
1
0
0
What is the correct way to backup ZODB blobs?
4
python,plone,zope,zodb,blobstorage
0
2009-01-16T20:51:00.000
I am using plone.app.blob to store large ZODB objects in a blobstorage directory. This reduces size pressure on Data.fs but I have not been able to find any advice on backing up this data. I am already backing up Data.fs by pointing a network backup tool at a directory of repozo backups. Should I simply point that tool at the blobstorage directory to backup my blobs? What if the database is being repacked or blobs are being added and deleted while the copy is taking place? Are there files in the blobstorage directory that must be copied over in a certain order?
8
1
0.049958
0
false
676,364
0
2,618
3
0
0
451,952
Your backup strategy for the FileStorage is fine. However, making a backup of any database that stores data in multiple files is never easy, as your copy has to happen with no writes to the various files. For the FileStorage a blind, stupid copy is fine, as it's just a single file. (Using repozo is even better.) In this case (with BlobStorage combined with FileStorage) I have to point to the regular backup advice: take the db offline while making a file-system copy; use snapshot tools like LVM to freeze the disk at a given point; or do a transactional export (not feasible in practice).
1
0
0
What is the correct way to backup ZODB blobs?
4
python,plone,zope,zodb,blobstorage
0
2009-01-16T20:51:00.000
I am using Python version 2.5.4 and installed MySQL version 5.0 and Django. Django is working fine with Python, but not with MySQL. I am using it on Windows Vista.
493
5
0.03124
0
false
28,278,997
1
804,257
5
0
0
454,854
Go to your project directory with cd. Activate your virtualenv if you haven't already (source bin/activate). Then run the command easy_install MySQL-python
1
0
0
No module named MySQLdb
32
python,django,python-2.x
0
2009-01-18T09:13:00.000
I am using Python version 2.5.4 and installed MySQL version 5.0 and Django. Django is working fine with Python, but not with MySQL. I am using it on Windows Vista.
493
6
1
0
false
58,246,337
1
804,257
5
0
0
454,854
I personally recommend using pymysql instead of the genuine MySQL connector; it provides you with a platform-independent interface and can be installed through pip. You can then edit the SQLAlchemy URL scheme like this: mysql+pymysql://username:passwd@host/database
1
0
0
No module named MySQLdb
32
python,django,python-2.x
0
2009-01-18T09:13:00.000
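If you take the pymysql route suggested in one of the answers above, a common trick for code or frameworks that import MySQLdb (older Django setups, for example) is to let pymysql impersonate it. install_as_MySQLdb is real pymysql API; where you call it (manage.py, a settings module, a package __init__.py) is up to you.

# Run this before anything tries to "import MySQLdb".
import pymysql
pymysql.install_as_MySQLdb()  # after this call, "import MySQLdb" resolves to pymysql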
I am using Python version 2.5.4 and installed MySQL version 5.0 and Django. Django is working fine with Python, but not with MySQL. I am using it on Windows Vista.
493
93
1
0
false
38,310,817
1
804,257
5
0
0
454,854
If your Python version is 3.5, do a pip install mysqlclient; other things didn't work for me.
1
0
0
No module named MySQLdb
32
python,django,python-2.x
0
2009-01-18T09:13:00.000
I am using Python version 2.5.4 and installed MySQL version 5.0 and Django. Django is working fine with Python, but not with MySQL. I am using it on Windows Vista.
493
2
0.012499
0
false
58,825,148
1
804,257
5
0
0
454,854
None of the above worked for me on a fresh Ubuntu 18.04 install via a Docker image. The following solved it for me: apt-get install holland python3-mysqldb
1
0
0
No module named MySQLdb
32
python,django,python-2.x
0
2009-01-18T09:13:00.000
I am using Python version 2.5.4 and installed MySQL version 5.0 and Django. Django is working fine with Python, but not with MySQL. I am using it on Windows Vista.
493
0
0
0
false
72,496,371
1
804,257
5
0
0
454,854
For CentOS 8 and Python 3: $ sudo dnf install python3-mysqlclient -y
1
0
0
No module named MySQLdb
32
python,django,python-2.x
0
2009-01-18T09:13:00.000
I think I am being a bonehead, maybe not importing the right package, but when I do... from pysqlite2 import dbapi2 as sqlite import types import re import sys ... def create_asgn(self): stmt = "CREATE TABLE ? (login CHAR(8) PRIMARY KEY NOT NULL, grade INTEGER NOT NULL)" stmt2 = "insert into asgn values ('?', ?)" self.cursor.execute(stmt, (sys.argv[2],)) self.cursor.execute(stmt2, [sys.argv[2], sys.argv[3]]) ... I get the error pysqlite2.dbapi2.OperationalError: near "?": syntax error This makes very little sense to me, as the docs show that pysqlite is qmark parametrized. I am new to python and db-api though, help me out! THANKS
1
7
1.2
0
true
474,296
0
1,629
1
0
0
474,261
That's because parameters can only be passed as values; the table name can't be parameterized. Also, you have quotes around a parameterized argument in the second query. Remove the quotes; escaping is handled by the underlying library automatically for you.
1
0
0
Python pysqlite not accepting my qmark parameterization
3
python,sqlite,pysqlite,python-db-api
0
2009-01-23T19:55:00.000
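A small sketch of the two fixes the answer above points out: the table name has to be substituted into the SQL text (after being validated, since it cannot be a parameter), while the values go through qmark parameters without quotes around the placeholders. The validation regex and the calling convention are my own additions.

import re
from pysqlite2 import dbapi2 as sqlite  # plain "import sqlite3" on modern Pythons

def create_asgn(cursor, table, login, grade):
    # Table names cannot be parameterized, so whitelist the characters first.
    if not re.match(r"^[A-Za-z_][A-Za-z0-9_]*$", table):
        raise ValueError("bad table name: %r" % table)
    cursor.execute("CREATE TABLE %s (login CHAR(8) PRIMARY KEY NOT NULL, "
                   "grade INTEGER NOT NULL)" % table)
    # Values use ? placeholders, with no quotes around them.
    cursor.execute("INSERT INTO %s VALUES (?, ?)" % table, (login, grade))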
I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually. I would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager. I now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead. Now, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?
3
1
0.066568
0
false
476,089
0
2,801
2
0
0
475,302
I was in the exact same situation as you and went with PL/Python after giving up on PL/SQL after a while. It was a good decision, looking back. Some things that bit me were unicode issues (client encoding, byte sequence) and specific Postgres data types (bytea).
1
0
0
PostgreSQL procedural languages: to choose?
3
python,postgresql
0
2009-01-24T01:38:00.000
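For orientation, a server-side PL/Python function looks roughly like the sketch below; the body is ordinary Python using the plpy module. The links table and the function itself are invented, and the plpythonu language has to be installed in the database first.

-- run once as a superuser: CREATE LANGUAGE plpythonu;
CREATE OR REPLACE FUNCTION link_count(page_title text) RETURNS integer AS $$
    # This body is Python, executed inside the server by PL/Python.
    plan = plpy.prepare("SELECT count(*) AS n FROM links WHERE source = $1", ["text"])
    rows = plpy.execute(plan, [page_title])
    return rows[0]["n"]
$$ LANGUAGE plpythonu;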
I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually. I would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager. I now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead. Now, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?
3
2
0.132549
0
false
475,939
0
2,801
2
0
0
475,302
Why can't you run your Python on the database server? That has the fewest complexities -- you can run the program you already have.
1
0
0
PostgreSQL procedural languages: to choose?
3
python,postgresql
0
2009-01-24T01:38:00.000
Is there a good ORM (object relational manager) solution that can use the same database from C++, C#, Python? It could also be multiple solutions, e.g. one per language, as long as they can can access the same database and use the same schema. Multi platform support is also needed. Clarification: The idea is to have one database and access this from software written in several different programming languages. Ideally this would be provided by one ORM having APIs (or bindings) in all of these languages. One other solution is to have a different ORM in each language, that use compatible schemas. However I believe that schema migration will be very hard in this setting.
7
0
0
0
false
496,166
0
1,697
2
0
0
482,612
We have an O/RM that has C++ and C# (actually COM) bindings (in FOST.3) and we're putting together the Python bindings which are new in version 4 together with Linux and Mac support.
1
0
0
ORM (object relational manager) solution with multiple programming language support
3
c#,c++,python,orm
0
2009-01-27T08:10:00.000
Is there a good ORM (object relational manager) solution that can use the same database from C++, C#, Python? It could also be multiple solutions, e.g. one per language, as long as they can can access the same database and use the same schema. Multi platform support is also needed. Clarification: The idea is to have one database and access this from software written in several different programming languages. Ideally this would be provided by one ORM having APIs (or bindings) in all of these languages. One other solution is to have a different ORM in each language, that use compatible schemas. However I believe that schema migration will be very hard in this setting.
7
1
0.066568
0
false
482,653
0
1,697
2
0
0
482,612
With SQLAlchemy, you can use reflection to get the schema, so it should work with any of the supported engines. I've used this to migrate data from an old SQLite to Postgres.
1
0
0
ORM (object relational manager) solution with multiple programming language support
3
c#,c++,python,orm
0
2009-01-27T08:10:00.000
Here is the situation: I have a parent model say BlogPost. It has many Comments. What I want is the list of BlogPosts ordered by the creation date of its' Comments. I.e. the blog post which has the most newest comment should be on top of the list. Is this possible with SQLAlchemy?
3
1
0.099668
0
false
1,227,979
1
595
1
0
0
492,223
I had the same question as the parent when using the ORM, and GHZ's link contained the answer on how it's possible. In sqlalchemy, assuming BlogPost.comments is a mapped relation to the Comments table, you can't do: session.query(BlogPost).order_by(BlogPost.comments.creationDate.desc()), but you can do: session.query(BlogPost).join(Comments).order_by(Comments.creationDate.desc())
1
0
0
How can I order objects according to some attribute of the child in sqlalchemy?
2
python,sqlalchemy
0
2009-01-29T16:01:00.000
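Fleshing out the answer above into a self-contained sketch: the BlogPost and Comments model names and the creationDate column come from the question, but the exact columns and the engine URL here are assumptions.

from sqlalchemy import Column, DateTime, ForeignKey, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class BlogPost(Base):
    __tablename__ = "blog_post"
    id = Column(Integer, primary_key=True)
    comments = relationship("Comments", backref="post")

class Comments(Base):
    __tablename__ = "comments"
    id = Column(Integer, primary_key=True)
    post_id = Column(Integer, ForeignKey("blog_post.id"))
    creationDate = Column(DateTime)

engine = create_engine("sqlite:///:memory:")  # placeholder engine
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Blog posts ordered so the one with the newest comment comes first.
posts = (session.query(BlogPost)
         .join(Comments)
         .order_by(Comments.creationDate.desc())
         .all())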
I have a small project I am doing in Python using web.py. It's a name generator, using 4 "parts" of a name (firstname, middlename, anothername, surname). Each part of the name is a collection of entites in a MySQL databse (name_part (id, part, type_id), and name_part_type (id, description)). Basic stuff, I guess. My generator picks a random entry of each "type", and assembles a comical name. Right now, I am using select * from name_part where type_id=[something] order by rand() limit 1 to select a random entry of each type (so I also have 4 queries that run per pageview, I figured this was better than one fat query returning potentially hundreds of rows; if you have a suggestion for how to pull this off in one query w/o a sproc I'll listen). Obviously I want to make this more random. Actually, I want to give it better coverage, not necessarily randomness. I want to make sure it's using as many possibilities as possible. That's what I am asking in this question, what sorts of strategies can I use to give coverage over a large random sample? My idea, is to implement a counter column on each name_part, and increment it each time I use it. I would need some logic to then say like: "get a name_part that is less than the highest "counter" for this "name_part_type", unless there are none then pick a random one". I am not very good at SQL, is this kind of logic even possible? The only way I can think to do this would require up to 3 or 4 queries for each part of the name (so up to 12 queries per pageview). Can I get some input on my logic here? Am I overthinking it? This actually sounds ideal for a stored procedure... but can you guys at least help me solve how to do it without a sproc? (I don't know if I can even use a sproc with the built-in database stuff of web.py). I hope this isn't terribly dumb but thanks ahead of time. edit: Aside from my specific problem I am still curious if there are any alternate strategies I can use that may be better.
4
1
0.099668
0
false
514,643
0
1,758
1
0
0
514,617
I agree with your intuition that using a stored procedure is the right way to go, but then, I almost always try to implement database stuff in the database. In your proc, I would introduce some kind of logic like say, there's only a 30% chance that returning the result will actually increment the counter. Just to increase the variability.
1
0
0
Random name generator strategy - help me improve it
2
python,mysql,random,web.py
0
2009-02-05T04:51:00.000
I've got a legacy application which is implemented in a number of Excel workbooks. It's not something that I have the authority to re-implement; however, another application that I do maintain does need to be able to call functions in the Excel workbook. It's been given a Python interface using the Win32Com library. Other processes can call functions in my Python package, which in turn invokes the functions I need via Win32Com. Unfortunately COM does not allow me to specify a particular COM process, so at the moment, no matter how powerful my server, I can only control one instance of Excel at a time on the computer. If I were to try to run more than one instance of Excel there would be no way of ensuring that the Python layer is bound to a specific Excel instance. I'd like to be able to run more than 1 of my Excel applications on my Windows server concurrently. Is there a way to do this? For example, could I compartmentalize my environment so that I could run as many Excel + Python combinations as my application will support?
6
0
0
0
false
516,983
0
4,391
1
0
0
516,946
If your application uses a single Excel file which contains macros which you call, I fear the answer is probably no, since aside from COM, Excel does not allow the same file to be opened twice under the same name (even from different directories). You may be able to get around this by dynamically copying the file to another name before opening it. My Python knowledge isn't huge, but in most languages there is a way of specifying, when you create a COM object, whether you wish it to be a new object or to connect to a preexisting instance by default. Check the Python docs for something along these lines. Can you list the kinds of specific problems you are having and exactly what you are hoping to do?
1
0
1
Control 2 separate Excel instances by COM independently... can it be done?
3
python,windows,excel,com
0
2009-02-05T17:32:00.000
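One approach often used for this situation (offered here as a sketch, not as the answer's own method) is win32com's DispatchEx, which asks COM for a brand-new Excel process instead of attaching to a running one. The workbook paths below are placeholders.

import win32com.client

# DispatchEx starts a new EXCEL.EXE rather than reusing an existing instance,
# so each object below controls its own independent Excel process.
excel_a = win32com.client.DispatchEx("Excel.Application")
excel_b = win32com.client.DispatchEx("Excel.Application")

wb_a = excel_a.Workbooks.Open(r"C:\apps\copy_a\legacy_a.xls")
wb_b = excel_b.Workbooks.Open(r"C:\apps\copy_b\legacy_b.xls")

# Each instance's macros can now be driven separately, e.g. excel_a.Run("SomeMacro").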
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server. EDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like: psycopg2.connect(connectionString) Thanks Final Edit: It was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..
22
3
0.197375
0
false
15,046,529
0
27,905
3
0
0
519,296
Make sure your db connection command isn't in any kind of loop. I was getting the same error from my script until I moved my db.database() call out of my program's repeating execution loop.
1
0
0
Getting OperationalError: FATAL: sorry, too many clients already using psycopg2
3
python,postgresql,psycopg2
0
2009-02-06T06:15:00.000
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server. EDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like: psycopg2.connect(connectionString) Thanks Final Edit: It was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..
22
15
1.2
0
true
519,304
0
27,905
3
0
0
519,296
This error means what it says: there are too many clients connected to PostgreSQL. Questions you should ask yourself: Are you the only one connected to this database? Are you running a graphical IDE? What method are you using to connect? Are you testing queries at the same time that you are running the code? Any of these things could be the problem. If you are the admin, you can up the number of clients, but if a program is holding connections open, that won't help for long. There are many reasons why you could have too many clients running at the same time.
1
0
0
Getting OperationalError: FATAL: sorry, too many clients already using psycopg2
3
python,postgresql,psycopg2
0
2009-02-06T06:15:00.000
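A small pattern that avoids the leak described in the answers and in the question's final edit: every call opens a connection and is guaranteed to close it, so a loop or an accidental recursion cannot pile up server-side clients. The connection string and query are placeholders.

import contextlib
import psycopg2

def count_events():
    # contextlib.closing() guarantees conn.close() runs even if the query raises.
    with contextlib.closing(psycopg2.connect("dbname=app user=app")) as conn:
        cur = conn.cursor()
        cur.execute("SELECT count(*) FROM events")
        return cur.fetchone()[0]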
I am getting the error OperationalError: FATAL: sorry, too many clients already when using psycopg2. I am calling the close method on my connection instance after I am done with it. I am not sure what could be causing this, it is my first experience with python and postgresql, but I have a few years experience with php, asp.net, mysql, and sql server. EDIT: I am running this locally, if the connections are closing like they should be then I only have 1 connection open at a time. I did have a GUI open to the database but even closed I am getting this error. It is happening very shortly after I run my program. I have a function I call that returns a connection that is opened like: psycopg2.connect(connectionString) Thanks Final Edit: It was my mistake, I was recursively calling the same method on mistake that was opening the same method over and over. It has been a long day..
22
1
0.066568
0
false
64,746,356
0
27,905
3
0
0
519,296
It simply means many clients are making transactions to PostgreSQL at the same time. I was running a PostGIS container and Django in a different Docker container. Hence, in my case, restarting both the db container and the application container solved the problem.
1
0
0
Getting OperationalError: FATAL: sorry, too many clients already using psycopg2
3
python,postgresql,psycopg2
0
2009-02-06T06:15:00.000
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I'd do a connection per thread to the database will I also be able to create/use an in memory database?
9
0
0
0
false
524,955
0
13,542
4
0
0
524,797
Depending on the data rate, SQLite could be exactly the correct way to do this. The entire database is locked for each write, so you aren't going to scale to 1000s of simultaneous writes per second. But if you only have a few, it is the safest way of ensuring writers don't overwrite each other.
1
0
0
Python, SQLite and threading
6
python,multithreading,sqlite
0
2009-02-07T23:18:00.000
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I use a connection per thread to the database, will I also be able to create/use an in-memory database?
9
0
0
0
false
524,937
0
13,542
4
0
0
524,797
Depending on the application the DB could be a real overhead. If we are talking about volatile data, maybe you could skip the communication via DB completely and share the data between the data gathering process and the data serving process(es) via IPC. This is not an option if the data has to be persisted, of course.
1
0
0
Python, SQLite and threading
6
python,multithreading,sqlite
0
2009-02-07T23:18:00.000
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I use a connection per thread to the database, will I also be able to create/use an in-memory database?
9
8
1.2
0
true
524,806
0
13,542
4
0
0
524,797
Short answer: Don't use Sqlite3 in a threaded application. Sqlite3 databases scale well for size, but rather terribly for concurrency. You will be plagued with "Database is locked" errors. If you do, you will need a connection per thread, and you have to ensure that these connections clean up after themselves. This is traditionally handled using thread-local sessions, and is performed rather well (for example) using SQLAlchemy's ScopedSession. I would use this if I were you, even if you aren't using the SQLAlchemy ORM features.
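A minimal sketch of the thread-local session idea, assuming SQLAlchemy is available; the SQLite URL and the cache table are placeholders, not part of the original answer:

    from sqlalchemy import create_engine, text
    from sqlalchemy.orm import scoped_session, sessionmaker

    engine = create_engine("sqlite:///cache.db")
    Session = scoped_session(sessionmaker(bind=engine))   # one session per thread

    def gather():
        session = Session()   # returns this thread's own session
        try:
            session.execute(text("INSERT INTO cache (key, value) VALUES ('k', 'v')"))
            session.commit()
        finally:
            Session.remove()  # dispose of the thread-local session when done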
1
0
0
Python, SQLite and threading
6
python,multithreading,sqlite
0
2009-02-07T23:18:00.000
I'm working on an application that will gather data through HTTP from several places, cache the data locally and then serve it through HTTP. So I was looking at the following. My application will first create several threads that will gather data at a specified interval and cache that data locally into a SQLite database. Then in the main thread start a CherryPy application that will query that SQLite database and serve the data. My problem is: how do I handle connections to the SQLite database from my threads and from the CherryPy application? If I use a connection per thread to the database, will I also be able to create/use an in-memory database?
9
1
0.033321
0
false
524,901
0
13,542
4
0
0
524,797
"...create several threads that will gather data at a specified interval and cache that data locally into a sqlite database. Then in the main thread start a CherryPy app that will query that sqlite db and serve the data." Don't waste a lot of time on threads. The things you're describing are simply OS processes. Just start ordinary processes to do gathering and run Cherry Py. You have no real use for concurrent threads in a single process for this. Gathering data at a specified interval -- when done with simple OS processes -- can be scheduled by the OS very simply. Cron, for example, does a great job of this. A CherryPy App, also, is an OS process, not a single thread of some larger process. Just use processes -- threads won't help you.
1
0
0
Python, SQLite and threading
6
python,multithreading,sqlite
0
2009-02-07T23:18:00.000
I get "database table is locked" error in my sqlite3 db. My script is single threaded, no other app is using the program (i did have it open once in "SQLite Database Browser.exe"). I copied the file, del the original (success) and renamed the copy so i know no process is locking it yet when i run my script everything in table B cannot be written to and it looks like table A is fine. Whats happening? -edit- I fixed it but unsure how. I notice the code not doing the correct things (i copied the wrong field) and after fixing it up and cleaning it, it magically started working again. -edit2- Someone else posted so i might as well update. I think the problem was i was trying to do a statement with a command/cursor in use.
3
0
0
0
false
6,345,495
0
5,791
1
0
0
531,711
I've also seen this error when the db file is on an NFS mounted file system.
1
0
0
python, sqlite error? db is locked? but it isnt?
4
python,sqlite,locking
0
2009-02-10T09:50:00.000
All I want to do is serialize and unserialize tuples of strings or ints. I looked at pickle.dumps() but the byte overhead is significant. Basically it looks like it takes up about 4x as much space as it needs to. Besides, all I need is basic types and have no need to serialize objects. marshal is a little better in terms of space but the result is full of nasty \x00 bytes. Ideally I would like the result to be human readable. I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()? This is getting stored in a db, not a file. Byte overhead matters because it could make the difference between requiring a TEXT column versus a varchar, and generally data compactness affects all areas of db performance.
7
0
0
0
false
532,989
0
2,674
1
0
0
532,934
"the byte overhead is significant" Why does this matter? It does the job. If you're running low on disk space, I'd be glad to sell you a 1Tb for $500. Have you run it? Is performance a problem? Can you demonstrate that the performance of serialization is the problem? "I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()?" Nothing simpler than repr and eval. What's wrong with eval? Is is the "someone could insert malicious code into the file where I serialized my lists" issue? Who -- specifically -- is going to find and edit this file to put in malicious code? Anything you do to secure this (i.e., encryption) removes "simple" from it.
1
0
1
Lightweight pickle for basic types in python?
7
python,serialization,pickle
0
2009-02-10T16:03:00.000
I want to fetch data from a MySQL database using SQLAlchemy and use the data in a different class. Basically I fetch a row at a time, use the data, fetch another row, use the data, and so on. I am running into some problems doing this. Basically, how do I output data a row at a time from the MySQL database? I have looked at the tutorials but they are not helping much.
0
1
0.099668
0
false
536,269
0
563
1
0
0
536,051
Exactly what problems are you running into? You can simply iterate over the ResultProxy object: for row in conn_or_sess_or_engine.execute(selectable_obj_or_SQLstring): do_something_with(row)
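A slightly fuller sketch of that loop, assuming an older SQLAlchemy API where engine.execute() accepts a plain SQL string (newer releases expect sqlalchemy.text() and an explicit connection); the URL, table and do_something_with are placeholders:

    from sqlalchemy import create_engine

    engine = create_engine("mysql://user:password@localhost/mydb")
    result = engine.execute("SELECT id, name FROM fruit")
    for row in result:            # rows are fetched as you iterate, not all at once
        do_something_with(row)    # row supports indexing and column-name access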
1
0
0
Outputting data a row at a time from mysql using sqlalchemy
2
python,mysql,sqlalchemy
0
2009-02-11T09:19:00.000
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
0
1
0.024995
0
false
558,822
0
330
5
0
0
557,199
Here are a couple of points for you to consider. If your data is large, reading it all into memory may be wasteful. If you need random access and not just sequential access to your data, then you'll either have to scan at most the entire file each time or read that table into an indexed in-memory structure like a dictionary. A list will still require some kind of scan (straight iteration, or binary search if sorted). With that said, if you don't require some of the features of a DB then don't use one, but if you just think MySQL is too heavy, then +1 on the SQLite suggestion from earlier. It gives you most of the features you'd want while using a database, without the concurrency overhead.
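A sketch of the "indexed memory structure" idea, assuming the exported table is a simple tab-separated file of id and name (the file format is an assumption, not from the question):

    fruit_by_id = {}
    f = open("fruit.txt")
    for line in f:
        fid, name = line.rstrip("\n").split("\t")
        fruit_by_id[int(fid)] = name   # O(1) random access by id afterwards
    f.close()

    print fruit_by_id.get(42)          # no file scan needed per lookup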
1
0
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
8
python,object
0
2009-02-17T14:56:00.000
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
0
5
1.2
0
true
557,473
0
330
5
0
0
557,199
If the data is a natural fit for database tables ("rectangular data"), why not convert it to sqlite? It's portable -- just one file to move the db around, and sqlite is available anywhere you have python (2.5 and above anyway).
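A minimal sketch of that conversion using only the standard library; the table layout and file names are assumptions based on the fruit example in the question:

    import sqlite3

    conn = sqlite3.connect("mydata.db")      # one portable file, no server needed
    conn.execute("CREATE TABLE fruit (id INTEGER PRIMARY KEY, name TEXT)")
    rows = [(1, "apple"), (2, "pear")]        # rows previously exported from MySQL
    conn.executemany("INSERT INTO fruit (id, name) VALUES (?, ?)", rows)
    conn.commit()
    conn.close()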
1
0
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
8
python,object
0
2009-02-17T14:56:00.000
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
0
1
0.024995
0
false
557,279
0
330
5
0
0
557,199
You could have a Fruit class with id and name instance variables, a function to read/write the information from a file, and maybe a class variable to keep track of the number of fruits (objects) created.
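A minimal sketch of such a class, assuming a simple comma-separated file format (the format is illustrative, not from the question):

    class Fruit(object):
        count = 0                          # class variable: Fruit objects created so far

        def __init__(self, id, name):
            self.id = id
            self.name = name
            Fruit.count += 1

        @classmethod
        def load_all(cls, path):
            fruits = []
            for line in open(path):
                fid, name = line.strip().split(",")
                fruits.append(cls(int(fid), name))
            return fruits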
1
0
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
8
python,object
0
2009-02-17T14:56:00.000
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
0
1
0.024995
0
false
557,241
0
330
5
0
0
557,199
There's no "one size fits all" answer for this -- it'll depend a lot on the data and how it's used in the application. If the data and usage are simple enough you might want to store your fruit in a dict with id as key and the rest of the data as tuples. Or not. It totally depends. If there's a guiding principle out there then it's to extract the underlying requirements of the app and then write code against those requirements.
1
0
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
8
python,object
0
2009-02-17T14:56:00.000
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
0
2
0.049958
0
false
557,291
0
330
5
0
0
557,199
Generally you want your Objects to absolutely match your "real world entities". Since you're starting from a database, it's not always the case that the database has any real-world fidelity, either. Some database designs are simply awful. If your database has reasonable models for Fruit, that's where you start. Get that right first. A "collection" may -- or may not -- be an artificial construct that's part of the solution algorithm, not really a proper part of the problem. Usually collections are part of the problem, and you should design those classes, also. Other times, however, the collection is an artifact of having used a database, and a simple Python list is all you need. Still other times, the collection is actually a proper mapping from some unique key value to an entity, in which case, it's a Python dictionary. And sometimes, the collection is a proper mapping from some non-unique key value to some collection of entities, in which case it's a Python collections.defaultdict(list). Start with the fundamental, real-world-like entities. Those get class definitions. Collections may use built-in Python collections or may require their own classes.
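A short sketch of the collection shapes mentioned above; the fruits sequence and the color attribute are made up for illustration:

    from collections import defaultdict

    # unique key -> entity: a plain dict
    fruit_by_id = dict((f.id, f) for f in fruits)

    # non-unique key -> collection of entities: defaultdict(list)
    fruit_by_color = defaultdict(list)
    for f in fruits:
        fruit_by_color[f.color].append(f)

    # purely sequential collection: an ordinary list
    all_fruit = list(fruits)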
1
0
0
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
8
python,object
0
2009-02-17T14:56:00.000
I want to experiment/play around with non-relational databases, and it'd be best if the solution was: portable, meaning it doesn't require an installation (ideally just copy-pasting the directory to someplace would make it work; I don't mind if it requires editing some configuration files or running a configuration tool for first-time usage); accessible from Python; and works on both Windows and Linux. What can you recommend for me? Essentially, I would like to be able to install this system on a shared Linux server where I have limited user privileges.
2
4
0.088656
0
false
575,197
0
3,510
1
0
0
575,172
If you're used to thinking a relational database has to be huge and heavy like PostgreSQL or MySQL, then you'll be pleasantly surprised by SQLite. It is relational, very small, uses a single file, has Python bindings, requires no extra privileges, and works on Linux, Windows, and many other platforms.
1
0
0
portable non-relational database
9
python,non-relational-database,portable-database
0
2009-02-22T16:31:00.000
I don't expect to need much more than basic CRUD-type functionality. I know that SQLAlchemy is more flexible, but the syntax etc. of SQLObject just seems to be a bit easier to get up and running with.
14
9
1
0
false
592,348
0
3,351
1
0
0
592,332
I think SQLObject is more pythonic/simpler, so if it works for you, then stick with it. SQLAlchemy takes a little more to learn, but can do more advanced things if you need that.
1
0
0
Any reasons not to use SQLObject over SQLAlchemy?
3
python,orm,sqlalchemy,sqlobject
0
2009-02-26T20:37:00.000
I am working on an AJAX game. The abstract: 2+ gamers (browsers) change a variable which is saved to the DB through JSON. All gamers are synchronized by a JavaScript timer + JSON, periodically reading that variable from the DB. In general, all changes are stored in the DB as history, but I want the most recent change duplicated in memory. So the problem is: I want one variable to be stored in memory instead of the DB.
0
0
0
0
false
603,637
1
127
1
0
0
602,030
You'd either have to use a cache, or fetch the most recent change on each request (since you can't persist objects between requests in-memory). From what you describe, it sounds as if it's being hit fairly frequently, so the cache is probably the way to go.
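A hedged sketch of the cache route using Django's cache framework; the key name, timeout and value variable are arbitrary choices, not from the question:

    from django.core.cache import cache

    cache.set("latest_change", value, 30)   # keep the most recent change for 30 seconds
    latest = cache.get("latest_change")     # returns None if it expired or was never set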
1
0
1
Store last created model's row in memory
4
python,django
0
2009-03-02T11:46:00.000
I am writing a Python (2.5) GUI Application that does the following: Imports from Access to an Sqlite database Saves ui form settings to an Sqlite database Currently I am using pywin32 to read Access, and pysqlite2/dbapi2 to read/write Sqlite. However, certain Qt objects don't automatically cast to Python or Sqlite equivalents when updating the Sqlite database. For example, a QDate, QDateTime, QString and others raise an error. Currently I am maintaining conversion functions. I investigated using QSql, which appears to overcome the casting problem. In addition, it is able to connect to both Access and Sqlite. These two benefits would appear to allow me to refactor my code to use less modules and not maintain my own conversion functions. What I am looking for is a list of important side-effects, performance gains/losses, functionality gains/losses that any of the SO community has experienced as a result from the switch to QSql. One functionality loss I have experienced thus far is the inability to use Access functions using the QODBC driver (e.g., 'SELECT LCASE(fieldname) from tablename' fails, as does 'SELECT FORMAT(fieldname, "General Number") from tablename')
1
0
0
0
false
608,262
0
241
1
0
0
608,098
When dealing with databases and PyQt UIs, I'll use something similar to the model-view-controller pattern to help organize and simplify the code. The view module uses/holds any QObjects that are necessary for the UI, and contains simple functions/methods for updating your Qt GUI objects as well as extracting input from them. The controller module performs all DB interactions; the more complex code lives here. By using MVC, you will not need to rely on the Qt library as much, and you will run into fewer problems linking Qt with Python. So I guess my suggestion is to continue using pysqlite (since that's what you are used to), but refactor your design a little so the only thing dealing with the Qt libraries is the UI. From the description of your GUI, it should be fairly straightforward.
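A sketch of the kind of conversion helper the question mentions, assuming PyQt4; toPyDate() and toPyDateTime() are PyQt convenience methods, and this is only one possible shape for such a helper:

    from PyQt4 import QtCore

    def qt_to_python(value):
        # Cast common Qt value types to plain Python equivalents before writing to SQLite.
        if isinstance(value, QtCore.QDateTime):
            return value.toPyDateTime()
        if isinstance(value, QtCore.QDate):
            return value.toPyDate()
        if isinstance(value, QtCore.QString):
            return unicode(value)
        return value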
1
0
0
What will I lose or gain from switching database APIs? (from pywin32 and pysqlite to QSql)
1
python,qt,sqlite,pyqt4,pywin32
0
2009-03-03T20:45:00.000