Question (string, 25–7.47k chars) | Q_Score (int64, 0–1.24k) | Users Score (int64, -10–494) | Score (float64, -1–1.2) | Data Science and Machine Learning (int64, 0–1) | is_accepted (bool, 2 classes) | A_Id (int64, 39.3k–72.5M) | Web Development (int64, 0–1) | ViewCount (int64, 15–1.37M) | Available Count (int64, 1–9) | System Administration and DevOps (int64, 0–1) | Networking and APIs (int64, 0–1) | Q_Id (int64, 39.1k–48M) | Answer (string, 16–5.07k chars) | Database and SQL (int64, 1–1) | GUI and Desktop Applications (int64, 0–1) | Python Basics and Environment (int64, 0–1) | Title (string, 15–148 chars) | AnswerCount (int64, 1–32) | Tags (string, 6–90 chars) | Other (int64, 0–1) | CreationDate (string, 23 chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I'm wondering if there is any built-in way in Python to test a MySQL server connection. I know I can use PyMySQL, MySQLdb, and a few others, but if the user does not already have these dependencies installed, my script will not work. How can I write a Python script to test a MySQL connection without requiring external dependencies? | 0 | 2 | 1.2 | 0 | true | 31,524,504 | 0 | 244 | 1 | 0 | 0 | 31,524,210 | Python distributions do not include support for MySQL, which is only available by installing a third-party module such as PyMySQL or MySQLdb. The only relational support included in Python is for the SQLite database (in the shape of the sqlite3 module).
There is, however, nothing to stop you distributing a third-party module as a part of your application, thereby including the support your project requires. PyMySQL would probably be the best choice because, being pure Python, it will run on any platform and give you best portability. | 1 | 0 | 0 | Test MySQL Connection with default Python | 1 | python,mysql | 0 | 2015-07-20T18:54:00.000 |
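Along those lines, a minimal sketch of testing a connection with a bundled PyMySQL module; the host, credentials and exact keyword names are assumptions and may vary slightly between PyMySQL versions.

import pymysql  # pure-Python driver that can be vendored with your script

def can_connect(host, user, password, db):
    # Return True if a connection and a trivial query succeed.
    try:
        conn = pymysql.connect(host=host, user=user, password=password,
                               database=db, connect_timeout=5)
    except pymysql.MySQLError:
        return False
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
        return True
    finally:
        conn.close()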
I've been trying to do this and I really have no clue. I've searched a lot and I know that I can merge the files easily with VBA or other languages, but I really want to do it with Python.
Can anyone get me on track? | 2 | 1 | 1.2 | 0 | true | 31,531,821 | 0 | 262 | 1 | 0 | 0 | 31,530,728 | I wish there was straightforward support in openpyxl/xlsxwriter to copy sheets across different workbooks.
However, I see you would have to mash up a recipe using a couple of libraries:
One for reading the worksheet data and,
Another for writing data to a unified xlsx
For both of the above there are a lot of options in terms of Python packages. | 1 | 0 | 1 | Merge(Combine) several .xlsx with one worksheet into just one workbook (Python) | 1 | python,excel | 0 | 2015-07-21T05:01:00.000 |
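One possible recipe, sketched here with openpyxl handling both the reading and the writing (file list, output name and the use of newer openpyxl features such as values_only are assumptions):

from openpyxl import Workbook, load_workbook

def merge_workbooks(paths, out_path="merged.xlsx"):
    out = Workbook()
    out.remove(out.active)                     # drop the default empty sheet
    for path in paths:
        src = load_workbook(path, read_only=True)
        ws_in = src.active                     # each input has a single worksheet
        ws_out = out.create_sheet(title=ws_in.title[:31])
        for row in ws_in.iter_rows(values_only=True):
            ws_out.append(row)
    out.save(out_path)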
I have an excel file with 234 rows and 5 columns. I want to create an array for each column so that I can read each column separately in xlrd. Can anyone help, please? | 1 | 0 | 0 | 1 | false | 31,540,754 | 0 | 898 | 1 | 0 | 0 | 31,540,437 | I might, but as a former user of the excellent xlrd package, I really would recommend switching to openpyxl. Quite apart from other benefits, each worksheet has a columns attribute that is a list of columns, each column being a list of cells. (There is also a rows attribute.)
Converting your code would be relatively painless as long as there isn't too much and it's reasonably well-written. I believe I've never had to do anything other than pip install openpyxl to add it to a virtual environment.
I observe that there's no code in your question, and it's harder (and more time-consuming) to write examples than point out required changes in code, so since you are an xlrd user I'm going to assume that you can take it from here. If you need help, edit the question and add your problem code. If you get through to what you want, submit it as an answer and mark it correct.
Suffice to say I recently wrote some code to extract Pandas datasets from UK government health statistics, and openpyxl was amazingly helpful in my investigations and easy to use.
Since it appears from the comments this is not a new question I'll leave it at that. | 1 | 0 | 0 | Creating arrays in Python by using excel data sheet | 3 | python | 0 | 2015-07-21T13:27:00.000 |
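A short sketch of what the columns attribute gives you (the file name and the use of the active sheet are assumptions):

from openpyxl import load_workbook

wb = load_workbook("data.xlsx")
ws = wb.active
# One list of cell values per column, e.g. columns[0] is column A
columns = [[cell.value for cell in col] for col in ws.columns]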
I'm using Python 2.7 with MongoDB as my database. (Actually, it doesn't matter which database I use.)
In my database I have millions of documents, and from time to time I need to iterate over all of them.
It's not realistic to pull all the documents in one query because that would kill the memory. Instead, on each iteration I pull 1000 documents and iterate over them; when I finish, I pull another 1000, and so on.
I was wondering if there is any formula to calculate the best number of documents to pull from the database on each iteration.
I couldn't find over the internet something that answer my issue.
Basically my question is what is the best way of finding the best number to pull from the database in each iteration. | 0 | 0 | 0 | 0 | false | 31,543,147 | 0 | 48 | 1 | 0 | 0 | 31,542,307 | The only chance you have is to take some sample documents to calculate their average size. The more difficult part is to know what the available memory is, keeping in mind that there are other processes that consume ram in parallel!
So even when you take this road, you need to keep an amount of ram free. I doubt that the effort is worth it. | 1 | 0 | 1 | Decide how many documents to pull from Database for memory utilization | 1 | python,sql,mongodb,memory-management,mongoengine | 0 | 2015-07-21T14:44:00.000 |
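If you do want to try it, a rough sketch of the sampling idea with pymongo's bson package follows; the memory budget, sample size and collection handle are all assumptions.

import bson  # ships with pymongo

def estimate_batch_size(collection, memory_budget_bytes=50 * 1024 * 1024, sample=1000):
    # Average the encoded BSON size of a sample of documents,
    # then see how many of them fit in the chosen memory budget.
    sizes = [len(bson.BSON.encode(doc)) for doc in collection.find().limit(sample)]
    if not sizes:
        return 1000
    avg = sum(sizes) / float(len(sizes))
    return max(1, int(memory_budget_bytes / avg))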
I used to use manage.py sqlall app to dump the database to sql statements. While, after upgrading to 1.8, it doesn't work any more.
It says:
CommandError: App 'app' has migrations. Only the sqlmigrate and
sqlflush commands can be used when an app has migrations.
It seems there is not a way to solve this.
I need to dump the database to an SQL file so I can use it to clone the whole database elsewhere. How can I accomplish this? | 1 | 1 | 1.2 | 0 | true | 31,545,842 | 0 | 309 | 1 | 0 | 0 | 31,545,025 | You can dump the db directly with mysqldump as allcaps suggested, or run manage.py migrate first and then it should work. It's telling you there are migrations that you have yet to apply to the DB.
I've been struggling with "sqlite3.OperationalError database is locked" all day....
Searching around for answers to what seems to be a well-known problem, I've found that it is explained most of the time by the fact that sqlite does not work very nicely in multithreading, where a thread could potentially time out waiting for more than 5 (default timeout) seconds to write into the db because another thread has the db lock.
So, having more threads that play with the db, one of them using transactions and frequently writing, I began measuring the time it takes for transactions to complete. I've found that no transaction takes more than 300 ms, thus rendering the above explanation implausible. Unless the thread that uses transactions makes ~17 (5000 ms / 300 ms) consecutive transactions while any other thread desiring to write gets ignored all this time.
So what other hypothesis could potentially explain this behavior? | 3 | 9 | 1 | 0 | false | 31,547,325 | 1 | 4,653 | 1 | 0 | 0 | 31,547,234 | I have had a lot of these problems with Sqlite before. Basically, don't have multiple threads that could, potentially, write to the db. If this is not acceptable, you should switch to Postgres or something else that is better at concurrency.
Sqlite has a very simple implementation that relies on the file system for locking. Most file systems are not built for low-latency operations like this. This is especially true for network-mounted filesystems and the virtual filesystems used by some VPS solutions (that last one got me BTW).
Additionally, you also have the Django layer on top of all this, adding complexity. You don't know when Django releases connections (although I am pretty sure someone here can give that answer in detail :) ). But again, if you have multiple concurrent writers, you need a database layer that can do concurrency. Period.
I solved this issue by switching to postgres. Django makes this very simple for you, even migrating the data is a no-brainer with very little downtime. | 1 | 0 | 0 | Django sqlite database is locked | 2 | python,django,sqlite | 0 | 2015-07-21T18:47:00.000 |
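For reference, a sketch of the Django DATABASES setting for Postgres; database name, credentials and the exact engine path (which depends on your Django version) are assumptions.

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # 'django.db.backends.postgresql' on newer Django
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}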
I'm working with XLSX files with pivot tables and writing an automated script to parse and extract the data. I have multiple pivot tables per spreadsheet with cost categories, their totals, and their values for each month etc. Any ideas on how to use openpyxl to parse each pivot table? | 0 | 0 | 0 | 0 | false | 31,556,316 | 0 | 1,122 | 1 | 0 | 0 | 31,551,135 | This is currently not possible with openpyxl. | 1 | 0 | 0 | Extracting data from excel pivot tables using openpyxl | 1 | python,excel,pivot-table,xlsx,openpyxl | 0 | 2015-07-21T23:05:00.000 |
I have an excel file composed of several sheets. I need to load them as separate dataframes individually. What would be a similar function as pd.read_csv("") for this kind of task?
P.S. due to the size I cannot copy and paste individual sheets in excel | 4 | 0 | 0 | 1 | false | 31,582,822 | 0 | 15,976 | 1 | 0 | 0 | 31,582,821 | exFile = ExcelFile(f) #load file f
data = ExcelFile.parse(exFile) #this creates a dataframe out of the first sheet in file | 1 | 0 | 0 | How to open an excel file with multiple sheets in pandas? | 3 | python,excel,import | 0 | 2015-07-23T09:04:00.000 |
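A short sketch of loading every sheet at once into a dict of DataFrames; the file name is an assumption, and the keyword is sheet_name in recent pandas releases but sheetname in older ones.

import pandas as pd

# One DataFrame per sheet, keyed by sheet name
sheets = pd.read_excel("workbook.xlsx", sheet_name=None)
first = sheets[list(sheets)[0]]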
I have a Django application that runs on apache server and uses Sqlite3 db. I want to access this database remotely using a python script that first ssh to the machine and then access the database.
After a lot of search I understand that we cannot access sqlite db remotely. I don't want to download the db folder using ftp and perform the function, instead I want to access it remotely.
What could be the other possible ways to do this? I don't want to change the database, but am looking for alternate ways to achieve the connection. | 1 | 0 | 0 | 0 | false | 31,583,131 | 0 | 1,226 | 2 | 0 | 0 | 31,582,861 | Sqlite needs to access the provided file. So this is more of a filesystem question rather than a python one. You have to find a way for sqlite and python to access the remote directory, be it sftp, sshfs, ftp or whatever. It entirely depends on your remote and local OS. Preferably mount the remote subdirectory on your local filesystem.
You would not need to make a copy of it although if the file is large you might want to consider that option too. | 1 | 0 | 0 | Remotely accessing sqlite3 in Django using a python script | 2 | python,django,sqlite | 0 | 2015-07-23T09:07:00.000 |
I have a Django application that runs on apache server and uses Sqlite3 db. I want to access this database remotely using a python script that first ssh to the machine and then access the database.
After a lot of search I understand that we cannot access sqlite db remotely. I don't want to download the db folder using ftp and perform the function, instead I want to access it remotely.
What could be the other possible ways to do this? I don't want to change the database, but am looking for alternate ways to achieve the connection. | 1 | 3 | 1.2 | 0 | true | 31,583,957 | 0 | 1,226 | 2 | 0 | 0 | 31,582,861 | Leaving aside the question of whether it is sensible to run a production Django installation against sqlite (it really isn't), you seem to have forgotten that, well, you are actually running Django. That means that Django can be the main interface to your data; and therefore you should write code in Django that enables this.
Luckily, there exists the Django REST Framework that allows you to simply expose your data via HTTP interfaces like GET and POST. That would be a much better solution than accessing it via ssh. | 1 | 0 | 0 | Remotely accessing sqlite3 in Django using a python script | 2 | python,django,sqlite | 0 | 2015-07-23T09:07:00.000 |
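A rough sketch of exposing a model over HTTP with Django REST Framework; the model name Reading, the app layout and the URL prefix are all hypothetical placeholders.

# serializers.py
from rest_framework import serializers
from .models import Reading            # hypothetical model

class ReadingSerializer(serializers.ModelSerializer):
    class Meta:
        model = Reading
        fields = '__all__'

# views.py
from rest_framework import viewsets
from .models import Reading
from .serializers import ReadingSerializer

class ReadingViewSet(viewsets.ModelViewSet):
    queryset = Reading.objects.all()
    serializer_class = ReadingSerializer

# urls.py
from rest_framework import routers
from .views import ReadingViewSet

router = routers.DefaultRouter()
router.register(r'readings', ReadingViewSet)
urlpatterns = router.urls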
I am trying to add users to my Google Analytics account through the API but the code yields this error:
googleapiclient.errors.HttpError: https://www.googleapis.com/analytics/v3/management/accounts/**accountID**/entityUserLinks?alt=json returned "Insufficient Permission">
I have Admin rights to this account - MANAGE USERS. I can add or delete users through the Google Analytics Interface but not through the API. I have also added the service account email to GA as a user. Scope is set to analytics.manage.users
This is the code snippet I am using in my add_user function which has the same code as that provided in the API documentation.
def add_user(service):
try:
service.management().accountUserLinks().insert(
accountId='XXXXX',
body={
'permissions': {
'local': [
'EDIT',
]
},
'userRef': {
'email': 'ABC.DEF@gmail.com'
}
}
).execute()
except TypeError, error:
# Handle errors in constructing a query.
print 'There was an error in constructing your query : %s' % error
return None
Any help will be appreciated. Thank you!! | 2 | 0 | 0 | 0 | false | 31,866,981 | 1 | 480 | 1 | 1 | 0 | 31,621,373 | The problem was I using a service account when I should have been using an installed application. I did not need a service account since I had access using my own credentials.That did the trick for me! | 1 | 0 | 0 | Google Analytics Management API - Insert method - Insufficient permissions HTTP 403 | 2 | api,python-2.7,google-analytics,insert,http-error | 0 | 2015-07-24T23:46:00.000 |
I am trying to access the remote database from one Linux server to another which is connected via LAN.
but it is not working. After some time it will generate an error:
`_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")'
This error is random; it can be raised at any time.
I create a new db object in each method every time,
and close the connection as well, so why is this error raised?
Can anyone please help me sort out this problem? | 2 | 0 | 1.2 | 0 | true | 31,717,545 | 0 | 1,039 | 1 | 0 | 0 | 31,645,016 | This issue is due to too many pending requests on the remote database.
So in this situation MySQL closes the connection to the running script.
To overcome this situation, put
time.sleep(seconds) # sleep the script for the given number of seconds
in the script. It will solve this issue without transferring the database to a local server or any other administrative task on MySQL. | 1 | 0 | 0 | python mysql database connection error | 2 | python,mysql,database-connection,mysql-python,remote-server | 0 | 2015-07-27T04:30:00.000 |
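A sketch of wrapping that pause in a retry loop; MySQLdb naming is assumed because the traceback shows _mysql_exceptions, and the retry count and delay are arbitrary.

import time
import MySQLdb

def connect_with_retry(retries=5, delay=3, **kwargs):
    for attempt in range(retries):
        try:
            return MySQLdb.connect(**kwargs)
        except MySQLdb.OperationalError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)   # give the server time to recover before retrying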
I am trying to access the remote database from one Linux server to another which is connected via LAN.
but it is not working. After some time it will generate an error:
`_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on '192.168.0.101' (99)")'
This error is random; it can be raised at any time.
I create a new db object in each method every time,
and close the connection as well, so why is this error raised?
Can anyone please help me sort out this problem? | 2 | -3 | -0.291313 | 0 | false | 41,724,945 | 0 | 1,039 | 1 | 0 | 0 | 31,645,016 | My solution was to collect more queries into one commit statement if those were insert queries. | 1 | 0 | 0 | python mysql database connection error | 2 | python,mysql,database-connection,mysql-python,remote-server | 0 | 2015-07-27T04:30:00.000 |
I'm trying to embed a bunch of URLs into an Excel file using Python with XLSXWriter's function write_url(), but it gives me the warning of it exceeding the 255 character limit. I think this is happening because it may be using the built-in HYPERLINK Excel function.
However, I found that Apache POI from Java doesn't seem to have that issue. Is it because they directly write it into the cell itself or is there a different reason? Also, is there a workaround in Python that can solve this issue? | 1 | 0 | 0 | 0 | false | 31,662,734 | 1 | 599 | 2 | 0 | 0 | 31,661,485 | 255 characters in a URL is an Excel 2007+ limitation. Try it in Excel.
I think the XLS format allowed longer URLs (so perhaps that is the difference).
Also XlsxWriter doesn't use the HYPERLINK() function internally (although it is available to the user via the standard interface). | 1 | 0 | 0 | Why is Apache POI able to write a hyperlink more than 255 characters but not XLSXWriter? | 2 | java,python,excel,apache-poi,xlsxwriter | 0 | 2015-07-27T19:19:00.000 |
I'm trying to embed a bunch of URLs into an Excel file using Python with XLSXWriter's function write_url(), but it gives me the warning of it exceeding the 255 character limit. I think this is happening because it may be using the built-in HYPERLINK Excel function.
However, I found that Apache POI from Java doesn't seem to have that issue. Is it because they directly write it into the cell itself or is there a different reason? Also, is there a workaround in Python that can solve this issue? | 1 | 1 | 0.099668 | 0 | false | 36,582,681 | 1 | 599 | 2 | 0 | 0 | 31,661,485 | Obviously the length limitation of a hyperlink address in .xlsx (using Excel 2013) is 2084 characters. Generating a file with a longer address using POI, repairing it with Excel and saving it will yield an address with a length of 2084 characters.
The Excel UI and .xls files seem to have a limit of 255 characters, as already mentioned by other commenters. | 1 | 0 | 0 | Why is Apache POI able to write a hyperlink more than 255 characters but not XLSXWriter? | 2 | java,python,excel,apache-poi,xlsxwriter | 0 | 2015-07-27T19:19:00.000 |
In SQLAlchemy, when I try to query for user by
request.db.query(models.User.password).filter(models.User.email == email).first()
Of course it works with different DB (SQLite3).
The source of the problem is that the password column is
sqlalchemy.Column(sqlalchemy_utils.types.password.PasswordType(schemes=['pbkdf2_sha512']), nullable=False)
I really don't know how to solve it
I'm using psycopg2 | 0 | 0 | 0 | 0 | false | 31,781,831 | 1 | 84 | 1 | 0 | 0 | 31,733,583 | Actually, it was a problem with the Alembic migration: in the migration, the column must also be created with PasswordType, not String or any other type | 1 | 0 | 0 | PasswordType not supported in Postgres | 1 | python,postgresql,sqlalchemy | 0 | 2015-07-30T20:38:00.000 |
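A sketch of what that fix might look like inside the Alembic migration; the table and column names are assumptions, and passlib must be installed for PasswordType to work.

import sqlalchemy as sa
import sqlalchemy_utils
from alembic import op

def upgrade():
    op.add_column(
        'users',
        sa.Column(
            'password',
            sqlalchemy_utils.types.password.PasswordType(schemes=['pbkdf2_sha512']),
            nullable=False,
        ),
    )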
I need to fetch the data using REST Endpoints(returns JSON file) and load the data(JSON) into Cassandra cluster which is sitting on AWS.
This is a migration effort, which involves millions of records. No access to source DB. Only access to REST End points.
What are the options I have?
Which programming language should I use? (I am thinking of Python or another scripting language.)
Since I will have to migrate millions of records, I would like to process the jobs concurrently.
What are the challenges?
Thanks for the time and help.
--GK. | 1 | 1 | 0.197375 | 0 | false | 31,738,908 | 0 | 129 | 1 | 0 | 1 | 31,737,396 | Cassandra 2.2.0 give feature to insert and get data as JSON .So you can use that .
Like for insert json data .
CREATE TABLE test.example (
id int PRIMARY KEY,
id2 int,
id3 int
) ;
cqlsh > INSERT INTO example JSON '{"id":10,"id2":10,"id3":10}' ;
To select data as JSON:
cqlsh > SELECT json * FROM example;
[json]
{"id": 10, "id2": 10, "id3": 10} | 1 | 0 | 0 | Data Migration to Cassandra using REST End points | 1 | python,json,rest,cassandra,data-migration | 0 | 2015-07-31T02:59:00.000 |
I'm using a session with autocommit=True and expire_on_commit=False. I use the session to get an object A with a foreign key that points to an object B. I then call session.expunge(a.b); session.expunge(a).
Later, when trying to read the value of b.some_datetime, SQLAlchemy raises a DetachedInstanceError. No attribute has been configured for lazy-loading. The error happens randomly.
How is this possible? I assumed that all scalar attributes would be eagerly loaded and available after the object is expunged.
For what it's worth, the objects get expunged so they can be used in another thread, after all interactions with the database are over. | 1 | 1 | 1.2 | 0 | true | 31,967,070 | 0 | 573 | 1 | 0 | 0 | 31,746,829 | One of the mapped class's fields had an onupdate attribute, which caused it to expire whenever the object is changed.
The solution is to call session.refresh(myobj) between the flush and the call to session.expunge(). | 1 | 0 | 0 | DetachedInstanceError: SQLAlchemy wants to refresh the DateTime attribute of an expunged instance | 1 | python,sqlalchemy | 0 | 2015-07-31T12:59:00.000 |
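A minimal sketch of that ordering, using the a and a.b objects from the question:

session.flush()          # make sure pending changes (and onupdate columns) hit the DB
session.refresh(a)       # reload column attributes so nothing is left expired
session.refresh(a.b)
session.expunge(a.b)     # now both objects are safe to use in another thread
session.expunge(a)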
I have a question about SQL, especially SQLite3. I have two tables, let's name them main_table and temp_table. These tables are based on the same relational schema so they have the same columns but different rows (values).
Now what I want to do:
For each row of the main_table I want to replace it if there is a row in a temp_table with the same ID. Otherwise I want to keep the old row in the table.
I was thinking about using some joins, but that does not provide what I want.
Could you give me some advice?
EDIT: ADDITIONAL INFO:
I would like to avoid writing out all columns because those tables contain tens of attributes, and since I have to update all columns it shouldn't be necessary to write out all of them. | 1 | 0 | 0 | 0 | false | 31,748,808 | 0 | 334 | 1 | 0 | 0 | 31,748,654 | You have 2 approaches:
Update current rows inside main_table with data from temp_table. The relation will be based on ID.
Add a column to temp_table to mark all rows that have to be transferred to main_table, or add an additional table to store IDs that have to be transferred. Then delete all rows that have to be transferred from main_table and insert the corresponding rows from temp_table using the column with marks or the new table.
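A sketch of the first approach using SQLite's INSERT OR REPLACE, which keys the replacement on the primary key and so avoids listing every column; it assumes both tables share the same column order, that "id" is the primary key, and note that REPLACE internally deletes and re-inserts the row.

import sqlite3

conn = sqlite3.connect("mydb.sqlite")
conn.execute("""
    INSERT OR REPLACE INTO main_table
    SELECT * FROM temp_table
    WHERE id IN (SELECT id FROM main_table)
""")
conn.commit()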
I'm writing an application for topology optimization within the ABAQUS PDE. As I have quite some iterations, in each of which FEM is performed, a lot of data is written to the system -- and thus a lot of time is lost on I/O.
Is it possible to limit the amount of information that gets written into the ODB file? | 0 | 1 | 1.2 | 0 | true | 31,808,849 | 0 | 571 | 1 | 0 | 0 | 31,755,078 | Indeed it's possible. You should check the frequency of your output in the field output section inside the step module. You can configure it in terms of step intervals of time, number of increments, exact amount of outputs, etc.
If you're running your analysis from an inp file, you can add FREQ = X after the *STEP command. This way Abaqus will write to the ODB file every X increments. | 1 | 0 | 0 | Limited ODB output in ABAQUS | 1 | python,abaqus,odb | 0 | 2015-07-31T20:57:00.000 |
I have a web application that uses flask and mongodb. I recently downloaded a clone of it from github onto a new Linux machine, then proceeded to run it. It starts and runs without any errors, but when I use a function that needs access to the database, I get this error:
File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 533, in __ getitem__
raise IndexError("no such item for Cursor instance")
IndexError: no such item for Cursor instance
This isn't happening on any of the other computers running this same application. Does anybody know what's going on? | 0 | 0 | 0 | 0 | false | 31,865,825 | 1 | 1,948 | 1 | 0 | 0 | 31,755,276 | Well, it ended up being an issue with the String specifying the working directory. Once it was resolved I was able to connect to the database. | 1 | 0 | 0 | Cursor Instance Error when connecting to mongo db? | 1 | python,mongodb,flask,pymongo | 0 | 2015-07-31T21:14:00.000 |
I have a string of categories stored in a table. The categories are separated by a ',', so that I can turn the string into a list of strings as
category_string.split(',')
I now want to select all elements of an SQL table which have one of the following categories: [category1, category2].
I have many such comparisons and the list of categories to compare with is not necessarily 2 elements long, so I would need a comparison of elements of two lists. I know that list comparisons are done as
Table.categories.in_(category_list)
in SQLAlchemy, but I also need to convert a table string element into a list and do the comparison of list elements.
any ideas?
thanks
carl | 0 | 1 | 0.197375 | 0 | false | 31,759,707 | 1 | 1,109 | 1 | 0 | 0 | 31,759,266 | I figured it out. Basically one needs to use the like() operator combined with or_().
carl | 1 | 0 | 0 | sql alchemy filter: string split and comparison of list elements | 1 | python,sqlalchemy | 0 | 2015-08-01T07:05:00.000 |
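A sketch of that combination, reusing the Table.categories attribute from the question; note that a naive substring LIKE can over-match similar category names.

from sqlalchemy import or_

category_list = category_string.split(',')
query = session.query(Table).filter(
    or_(*[Table.categories.like('%{}%'.format(c.strip())) for c in category_list])
)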
Usually the workflow I have is as follows:
Perform SQL query on database,
Load it into memory
Transform data based on logic foo()
Insert the transformed data to a table in a database.
How should a unit test be written for this kind of workflow? I'm really new to testing.
Anyway, I'm using Python 3.4. | 1 | 0 | 0 | 0 | false | 31,769,998 | 0 | 538 | 1 | 0 | 0 | 31,769,814 | One way to test this kind of workflow is by using a special database just for testing. The test database mirrors the structure of your production database, but is otherwise completely empty (i.e. no data is in the tables). The routine is then as follows
Connect to the test database (and and maybe reload its structure)
For every testcase, do the following:
Load the minimal set of data into the database necessary to test your routine
Run your function to test and grab its output (if any)
Perform some tests to see that your function did what you expected it to do.
Drop all data from the database before the next test case runs
After all your tests are done, disconnect from the database | 1 | 0 | 0 | How should unit test be written for data transformation? | 2 | python,unit-testing,tdd,integration-testing | 1 | 2015-08-02T08:04:00.000 |
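A skeleton of that routine with the standard unittest module and an in-memory SQLite test database; foo() and the table layout are placeholders standing in for the question's real transformation.

import sqlite3
import unittest

def foo(rows):
    # placeholder for the transformation under test
    return [(a, b * 2) for a, b in rows]

class TransformTests(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE source (a TEXT, b INTEGER)")
        self.conn.execute("CREATE TABLE target (a TEXT, b INTEGER)")

    def tearDown(self):
        self.conn.close()

    def test_foo_doubles_b(self):
        self.conn.executemany("INSERT INTO source VALUES (?, ?)", [("x", 1), ("y", 2)])
        rows = self.conn.execute("SELECT a, b FROM source").fetchall()
        result = foo(rows)
        self.conn.executemany("INSERT INTO target VALUES (?, ?)", result)
        stored = self.conn.execute("SELECT a, b FROM target ORDER BY a").fetchall()
        self.assertEqual(stored, [("x", 2), ("y", 4)])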
Say you have a column that contains the values for the year, month and date. Is it possible to get just the year? In particular I have
ALTER TABLE pmk_pp_disturbances.disturbances_natural ADD COLUMN sdate timestamp without time zone;
and want just the 2004 from 2004-08-10 05:00:00. Can this be done with Postgres or must a script parse the string? By the way, any rules as to when to "let the database do the work" vs. let the script running on the local computer do the work? I once heard querying databases is slower than the rest of the program written in C/C++, generally speaking. | 0 | -3 | -0.197375 | 0 | false | 31,820,790 | 0 | 38 | 1 | 0 | 0 | 31,820,655 | I think no. You're forced to read the entire value of a column. You can divide the date into a few columns, one for the year, another for the month, etc., or store the date in an integer format if you want an aggressive space optimization. But it will make the database worse in terms of scalability and modifications.
Databases are slow, you have to accept that, but they offer things that are hard to do with C/C++.
If you are thinking of making a game and saving your 'save game' in SQL, forget it. Use it if you're doing a back-end server or a management application, tool, etc. | 1 | 0 | 0 | Can Postgres be used to take only the portion of a date from a field? | 3 | python,postgresql | 0 | 2015-08-04T22:46:00.000 |
I have an SQLAlchemy DB column which is of type datetime:
type(<my_object>) --> sqlalchemy.orm.attributes.InstrumentedAttribute
How do I reach the actual date in order to filter the DB by weekday() ? | 5 | 6 | 1 | 0 | false | 31,857,049 | 0 | 3,875 | 1 | 0 | 0 | 31,841,054 | I got it:
from sqlalchemy import func
(func.extract(<my_object>, 'dow') == some_day)
dow stands for 'day of week'
The extract is an SQLAlchemy function allowing the extraction of any field from the column object. | 1 | 0 | 0 | Extract a weekday() from an SQLAlchemy InstrumentedAttribute (Column type is datetime) | 1 | python,sqlalchemy,flask-sqlalchemy | 0 | 2015-08-05T19:19:00.000 |
I have several thousand excel documents. All of these documents are 95% the same in terms of column headings. However, since they are not 100% identical, I cannot simply merge them together and upload it into a database without messing up the data.
Would anyone happen to have a library or an example that they've run into that would help? | 0 | 0 | 0 | 0 | false | 31,842,901 | 0 | 177 | 1 | 0 | 0 | 31,842,810 | If a large proportion of them are similar, and this is a one-off operation, it may be worth your while coding the solution for the majority and handling the other documents (or groups of them if they are similar) separately. If using Python to do this, you could simply build a dynamic query where the columns that are present in a given Excel sheet are built into the INSERT statements. Of course, this assumes that your database table allows for NULLs or that a default value is present on the columns that aren't in a given document.
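A rough sketch of that dynamic-query idea: read each workbook's header row, then build an INSERT naming only the columns that are present. The table name, the %s placeholder style (which depends on your DB driver) and the assumption that the header names are trusted column names are all assumptions.

from openpyxl import load_workbook

def insert_workbook(conn, path, table="records"):
    ws = load_workbook(path, read_only=True).active
    rows = ws.iter_rows(values_only=True)
    headers = [h for h in next(rows) if h is not None]
    placeholders = ", ".join(["%s"] * len(headers))
    sql = "INSERT INTO {} ({}) VALUES ({})".format(table, ", ".join(headers), placeholders)
    cur = conn.cursor()
    for row in rows:
        cur.execute(sql, row[:len(headers)])
    conn.commit()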
This will be a bit of a lengthy post, sorry in advance. I have a bit of experience using MongoDB (been awhile) and I'm so-so with python, but I have a big project and I would like some feedback before spending lots of time coding.
The project involves creating a gallery where individual presentation slides (from apple keynote '09) can be selected and parsed together into a presentation. This way a user with a few thousand slides can use this program to create a new presentation by mixing and matching old slides, rather than having to open up each presentation and copy-paste all of the desired slides into a new presentation manually.
Within the program there is a master gallery that contains all the slides. Each slide may be selected and assigned searchable tags. New "groups" of slides may be formed, where all slides with a specific set of tags are added to the group automatically. In addition, individual slides can be dragged from the master gallery and dropped into a user-created group.
There is a folder with preview images for each slide, here is why I believe I need MongoDB: By having a database where each slide is a document that contains the filename of the slide, the filename of the preview thumbnail of the slide, and an array containing searchable tag words, one will be able to query specific sets of slides very quickly. The query will return an array of matching slides which can than be looped through to add each slide thumbnail to the GUI gallery. The user-created "groups" can be individual collections, where a collection is created when a group is created, slides are added/removed from the collection as needed, and the collection can be destroyed when the group is deleted. This also will allow permanent storage as the database and its collections will persist between opening and closing the program.
My question is, will I be able to use MongoDB (through pyMongo) to do the following with decent performance:
-Create and delete collections as needed
-Copy and delete specific documents from a master collection into newly created collections
-Store an array of searchable tags in string format in a dynamic array associated with each document
-Query slides within a collection based on a single tag word stored in an array within each document
-Maintain the database between system shutdowns and opening / closing the program.
Thanks! | 0 | 0 | 1.2 | 0 | true | 31,867,932 | 0 | 70 | 1 | 0 | 0 | 31,862,957 | Your post's points and questions are quoted below. My comments follow each quote.
By having a database where each slide is a document that contains the filename of the slide, the filename of the preview thumbnail of the
slide, and an array containing searchable tag words, one will be able
to query specific sets of slides very quickly.
Sounds good. I'm assuming that the slides are in one collection.
The user-created "groups" can be individual collections, where a
collection is created when a group is created, slides are
added/removed from the collection as needed, and the collection can be
destroyed when the group is deleted.
I suggest that you do not create a collection for each group. You may end up with hundreds or thousands of collections to represent "groups". This could bite you down the road if you decide you want to introduce Sharding (which may only be done on a collection). Consider creating one collection just for "groups" that will contain the unique user id to which the group is associated.
[Will I be able to use MongoDB, with decent performance, to] create
and delete collections as needed?
Constantly creating and deleting collections is not good design. You probably do not need to do this. If you have any familiarity with RDBMS then a collection is analogous to a table. Would you keep creating dozens/hundreds/thousands of tables on the fly? Probably not.
[Will I be able to use MongoDB, with decent performance, to] copy and
delete specific documents from a master collection into newly created
collections?
Yes, it is possible to take data from one collection and save it to another collection. Deleting documents is also quite easy and performant. (Indexing is your friend.)
[Will I be able to use MongoDB, with decent performance, to] store an
array of searchable tags in string format in a dynamic array
associated with each document?
Yes. MongoDB is very good at this.
[Will I be able to use MongoDB, with decent performance, to] query
slides within a collection based on a single tag word stored in an
array within each document?
Yes. Decent performance will depend on the size of your collection and, more importantly, the existence of relevant indexes.
[Will I be able to use MongoDB, with decent performance, to] maintain
the database between system shutdowns and opening / closing the
program?
I'm assuming that you are referring to replication and failover. If so, yes, MongoDB supports this quite well. | 1 | 0 | 0 | Will I be able to implement this design using MongoDB (via pyMongo) | 1 | python,mongodb,pymongo,database | 0 | 2015-08-06T18:14:00.000 |
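A small pymongo sketch of the tag model discussed above; the database, collection and field names are assumptions.

from pymongo import MongoClient

db = MongoClient()["slide_gallery"]
db.slides.create_index("tags")               # index matches on single tags inside the array

slide = {"slide_file": "deck1_03.key", "thumb_file": "deck1_03.png",
         "tags": ["sales", "q3", "charts"]}
db.slides.insert_one(slide)

matching = list(db.slides.find({"tags": "sales"}))   # any slide whose tags array contains "sales"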
How can I store python 'list' values into MySQL and access it later from the same database like a normal list?
I tried storing the list as a varchar type and it did store it. However, while accessing the data from MySQL I couldn't access the same stored value as a list; instead it acts as a string. So, accessing the list with an index was no longer possible. Is it perhaps easier to store some data in the form of the set datatype? I see the MySQL datatype 'set' but I'm unable to use it from Python. When I try to store a set from Python into MySQL, it throws the following error: 'MySQLConverter' object has no attribute '_set_to_mysql'. Any help is appreciated
P.S. I have to store co-ordinate of an image within the list along with the image number. So, it is going to be in the form [1,157,421] | 3 | 2 | 0.197375 | 0 | false | 31,879,445 | 0 | 5,462 | 1 | 0 | 0 | 31,879,337 | Are you using an ORM like SQLAlchemy?
Anyway, to answer your question directly, you can use json or pickle to convert your list to a string and store that. Then to get it back, you can parse it (as JSON or a pickle) and get the list back.
However, if your list is always a 3 point coordinate, I'd recommend making separate x, y, and z columns in your table. You could easily write functions to store a list in the correct columns and convert the columns to a list, if you need that. | 1 | 0 | 0 | storing python list into mysql and accessing it | 2 | python,mysql | 0 | 2015-08-07T13:47:00.000 |
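A sketch of the serialize-and-parse route with the standard json module; the connector call is written for mysql.connector (the error message in the question comes from it), and the credentials, table and column names are assumptions.

import json
import mysql.connector

conn = mysql.connector.connect(user="me", password="secret", database="mydb")
cur = conn.cursor()

coords = [1, 157, 421]
cur.execute("INSERT INTO image_coords (data) VALUES (%s)", (json.dumps(coords),))
conn.commit()

cur.execute("SELECT data FROM image_coords")
restored = json.loads(cur.fetchone()[0])    # back to a real Python list: [1, 157, 421]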
Let's say I have the following Microsoft Access Database: random.mdb.
The main thing I'm trying to achieve is to use read_sql() from pandas so that I can work with the data I have using python. How would I approach this? Is there a way to convert the Microsoft Access database to a SQL database... to eventually pass in to pandas (all in python)? | 0 | 1 | 0.066568 | 0 | false | 31,949,357 | 0 | 2,680 | 1 | 0 | 0 | 31,949,312 | You can use SQL Server's import/export wizard to convert it, but you will need the table structure ready in SQL Server first; there are also many other utilities that can do this. | 1 | 0 | 0 | How to convert a Microsoft Access Database to a SQL database (and then open it with pandas)? | 3 | python,sql,sql-server,ms-access,pandas | 0 | 2015-08-11T18:26:00.000 |
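An alternative sketch that skips the SQL Server conversion entirely: read the .mdb directly with pyodbc (on Windows with the Access ODBC driver installed) and hand the connection to pandas.read_sql. The driver name, path and table name are assumptions.

import pandas as pd
import pyodbc

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\random.mdb;"
)
df = pd.read_sql("SELECT * FROM my_table", conn)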
So I've been trying to solve this for a while now and can't seem to find a way to speed up performance of inserts with Django despite the many suggestions and tips found on StackOverflow and many Google searches.
So basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each record entry being a whopping 180KB. I've scaled my testing down to 2,000 inserts yet still cant get the running time down to a reasonable amount. 2,000 inserts currently takes approximately 120 seconds.
So I've tried ALL of the following (and many combinations of each) to no avail:
"Classic" Django ORM create model and .save()
Single transaction (transaction.atomic())
Bulk_create
Raw SQL INSERT in for loop
Raw SQL "executemany" (multiple value inserts in one query)
Setting SQL attributes like "SET FOREIGN_KEY_CHECKS=0"
SQL BEGIN ... COMMIT
Dividing the mass insert into smaller batches
Apologizes if I forgot to list something, but I've just tried so many different things at this point, I can't even keep track ahah.
Would greatly appreciate a little help here in speeding up performance from someone who maybe had to perform a similar task with Django database insertions.
Please let me know if I've left out any necessary information! | 2 | 1 | 0.066568 | 0 | false | 31,977,326 | 1 | 2,130 | 3 | 0 | 0 | 31,977,138 | This is out of django's scope really. Django just translates your python into on INSERT INTO statement. For most performance on the django layer skipping it entirely (by doing sql raw) might be best, even though python processing is pretty fast compared to IO of a sql-database.
You should rather focus on the database. I'm a postgres person, so I don't know what config options mysql has, but there is probably some fine tuning available.
If you have done that and there is still no increase you should consider using SSDs, SSDs in a RAID 0, or even a db in memory, to skip IO times.
Sharding may be a solution too - splitting the tasks and executing them in parallel.
If the inserts however are not time critical, i.e. can be done whenever, but shouldn't block the page from loading, I recommend celery.
There you can queue a task to be executed whenever there is time - asynchronously. | 1 | 0 | 0 | Increasing INSERT Performance in Django For Many Records of HUGE Data | 3 | python,mysql,django,database,insert | 0 | 2015-08-12T23:33:00.000 |
So I've been trying to solve this for a while now and can't seem to find a way to speed up performance of inserts with Django despite the many suggestions and tips found on StackOverflow and many Google searches.
So basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each record entry being a whopping 180KB. I've scaled my testing down to 2,000 inserts yet still cant get the running time down to a reasonable amount. 2,000 inserts currently takes approximately 120 seconds.
So I've tried ALL of the following (and many combinations of each) to no avail:
"Classic" Django ORM create model and .save()
Single transaction (transaction.atomic())
Bulk_create
Raw SQL INSERT in for loop
Raw SQL "executemany" (multiple value inserts in one query)
Setting SQL attributes like "SET FOREIGN_KEY_CHECKS=0"
SQL BEGIN ... COMMIT
Dividing the mass insert into smaller batches
Apologizes if I forgot to list something, but I've just tried so many different things at this point, I can't even keep track ahah.
Would greatly appreciate a little help here in speeding up performance from someone who maybe had to perform a similar task with Django database insertions.
Please let me know if I've left out any necessary information! | 2 | 0 | 0 | 0 | false | 31,977,679 | 1 | 2,130 | 3 | 0 | 0 | 31,977,138 | You can also try to delete any index on the tables (and any other constraint), the recreate the indexes and constraints after the insert.
Updating indexes and checking constraints can slow down every insert. | 1 | 0 | 0 | Increasing INSERT Performance in Django For Many Records of HUGE Data | 3 | python,mysql,django,database,insert | 0 | 2015-08-12T23:33:00.000 |
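A small sketch of dropping and recreating a MySQL index around the bulk insert from inside Django; the index, table and column names are hypothetical.

from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("DROP INDEX idx_payload ON myapp_record")
    # ... perform the bulk insert here ...
    cursor.execute("CREATE INDEX idx_payload ON myapp_record (payload_hash)")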
So I've been trying to solve this for a while now and can't seem to find a way to speed up performance of inserts with Django despite the many suggestions and tips found on StackOverflow and many Google searches.
So basically I need to insert a LOT of data records (~2 million) through Django into my MySQL DB, each record entry being a whopping 180KB. I've scaled my testing down to 2,000 inserts yet still cant get the running time down to a reasonable amount. 2,000 inserts currently takes approximately 120 seconds.
So I've tried ALL of the following (and many combinations of each) to no avail:
"Classic" Django ORM create model and .save()
Single transaction (transaction.atomic())
Bulk_create
Raw SQL INSERT in for loop
Raw SQL "executemany" (multiple value inserts in one query)
Setting SQL attributes like "SET FOREIGN_KEY_CHECKS=0"
SQL BEGIN ... COMMIT
Dividing the mass insert into smaller batches
Apologizes if I forgot to list something, but I've just tried so many different things at this point, I can't even keep track ahah.
Would greatly appreciate a little help here in speeding up performance from someone who maybe had to perform a similar task with Django database insertions.
Please let me know if I've left out any necessary information! | 2 | 0 | 1.2 | 0 | true | 32,000,816 | 1 | 2,130 | 3 | 0 | 0 | 31,977,138 | So I found that editing the mysql /etc/mysql/my.cnf file and configuring some of the InnoDB settings significantly increased performance.
I set:
innodb_buffer_pool_size = 9000M  # roughly 75% of your system RAM
innodb_log_file_size = 2000M     # 20%-30% of the above value
restarted the mysql server and this cut down 50 inserts from ~3 seconds to ~0.8 seconds. Not too bad!
Now I'm noticing the inserts are gradually taking longer and longer for big data amounts. 50 inserts starts at about 0.8 seconds but after 100 or so batches the average is up to 1.4 seconds and continues increasing.
Will report back if solved. | 1 | 0 | 0 | Increasing INSERT Performance in Django For Many Records of HUGE Data | 3 | python,mysql,django,database,insert | 0 | 2015-08-12T23:33:00.000 |
What is the difference between executing raw SQL on the SQLAlchemy engine and the session? Specifically against a MSSQL database.
engine.execute('DELETE FROM MyTable WHERE MyId IN(1, 2, 3)')
versus
session.execute('DELETE FROM MyTable WHERE MyId IN(1, 2, 3)')
I've noticed that executing the SQL on the session causes MSSQL to 'hang'.
Perhaps someone has an idea on how these two executions are different, or perhaps someone can point me where to further investigate. | 1 | 1 | 0.197375 | 0 | false | 32,419,439 | 0 | 1,516 | 1 | 0 | 0 | 32,020,502 | The reason why MSSQL Server was hanging, was not because of the difference between calling execute on the engine or the session, but because a delete was being called on the table, without a commit, and then a subsequent read. | 1 | 0 | 0 | Executing raw SQL on the SQLAlchemy engine versus a session | 1 | python,sql-server,sqlalchemy | 0 | 2015-08-15T01:02:00.000 |
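In other words, committing (or rolling back) after the raw DELETE releases the locks so a later read doesn't block; a minimal sketch:

from sqlalchemy import text

session.execute(text("DELETE FROM MyTable WHERE MyId IN (1, 2, 3)"))
session.commit()          # releases the locks held by the DELETE
rows = session.execute(text("SELECT COUNT(*) FROM MyTable")).fetchall()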
I am new to Dynamodb and have a requirement of grouping the documents on the basis of a certain condition before performing other operations.
From what I could read on the internet, I figured out that there is no direct way to group DynamoDB documents.
Can anyone confirm if that's true, or help out with a solution if that is not the case? | 0 | 1 | 0.197375 | 0 | false | 32,045,125 | 1 | 582 | 1 | 0 | 0 | 32,044,338 | Amazon DynamoDB is a NoSQL database, so you won't find standard SQL capabilities like group by and average().
There is, however, the ability to filter results, so you will only receive results that match your criteria. It is then the responsibility of the calling app to perform grouping and aggregations.
It's really a trade-off between the flexibility of SQL and the sheer speed of NoSQL. Plus, in the case of DynamoDB, the benefit of data being stored in three facilities to improve durability and availability. | 1 | 0 | 0 | Does Dynamodb support Groupby/ Aggregations directly? | 1 | python,amazon-web-services,amazon-dynamodb | 0 | 2015-08-17T06:52:00.000 |
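A sketch of that split: filter server-side with a scan, then group and aggregate client-side. The table name, attributes and filter condition are assumptions, and a real script would also follow scan pagination.

from collections import defaultdict
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("orders")
items = table.scan(FilterExpression=Attr("status").eq("shipped"))["Items"]

totals = defaultdict(list)
for item in items:
    totals[item["customer_id"]].append(item["amount"])
averages = {cust: sum(vals) / len(vals) for cust, vals in totals.items()}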
I am writing a script to compare the records in a database in my DynamoDB with records in another database on EC2.
I will appreciate any help with iterating through the table in Python. | 1 | 0 | 0 | 0 | false | 48,822,032 | 0 | 1,953 | 1 | 0 | 0 | 32,073,427 | I came across the same need when querying DynamoDB from an AWS Lambda in Python and expecting the dataset to be over the 128 MB memory limit. If I can't iterate through, I'll have to pay extra bucks to AWS. Unfortunately, it seems there is no way to do so except converting the query response to an iterator (which wouldn't save memory at all). | 1 | 0 | 0 | How to iterate through a table in AWS DynamoDB? | 1 | python,amazon-web-services,amazon-dynamodb | 0 | 2015-08-18T13:10:00.000 |
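For the iteration itself, the usual page-by-page scan pattern keeps only one page in memory at a time; the table name and the process() comparison routine below are hypothetical.

import boto3

table = boto3.resource("dynamodb").Table("my_table")

kwargs = {}
while True:
    page = table.scan(**kwargs)
    for item in page["Items"]:
        process(item)                      # hypothetical comparison routine
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]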
I created an Excel spreadsheet using Pandas and xlsxwriter, which has all the data in the right rows and columns. However, the formatting in xlsxwriter is pretty basic, so I want to solve this problem by writing my Pandas spreadsheet on top of a template spreadsheet with Pyxl.
First, however, I need to get Pyxl to only import data up to the first blank row, and to get rid of the column headings. This way I could write my Excel data from the xlsxwriter output to the template.
I have no clue how to go about this and can't find it here or in the docs. Any ideas?
How about if I want to read data from the first column after the first blank column? (I can think of a workaround for this, but it would help if I knew how) | 1 | 1 | 1.2 | 1 | true | 32,119,557 | 0 | 256 | 1 | 0 | 0 | 32,077,627 | To be honest I'd be tempted to suggest you use openpyxl all the way if there is something that xlsxwriter doesn't do, though I think that it's formatting options are pretty extensive. The most recent version of openpyxl is as fast as xlsxwriter if lxml is installed.
However, it's worth noting that Pandas has tended to ship with an older version of openpyxl because we changed the style API.
Otherwise you can use max_row to get the highest row but this won't check for an empty row. | 1 | 0 | 0 | Use Pyxl to read Excel spreadsheet data up to a certain row | 1 | python,excel,openpyxl | 0 | 2015-08-18T16:17:00.000 |
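A sketch of reading rows until the first completely empty one, skipping the heading row; the file name and use of the active sheet are assumptions.

from openpyxl import load_workbook

ws = load_workbook("template.xlsx").active
data = []
for i, row in enumerate(ws.rows):
    if i == 0:
        continue                              # skip the column headings
    values = [cell.value for cell in row]
    if all(v is None for v in values):
        break                                 # first blank row: stop reading
    data.append(values)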
I'm working on a distributed system where one process is controlling a hardware piece and I want it to be running as a service. My app is Django + Twisted based, so Twisted maintains the main loop and I access the database (SQLite) through Django, the entry point being a Django Management Command.
On the other hand, for user interface, I am writing a web application on the same Django project on the same database (also using Crossbar as websockets and WAMP server). This is a second Django process accessing the same database.
I'm looking for some validation here. Is anything fundamentally wrong with this approach? I'm particularly scared of issues with the database (two different processes accessing it via the Django ORM). | 0 | 1 | 0.099668 | 0 | false | 32,235,411 | 1 | 143 | 1 | 1 | 0 | 32,213,796 | No, there is nothing inherently wrong with that approach. We currently use a similar approach for a lot of our work. | 1 | 0 | 0 | Twisted + Django as a daemon process plus Django + Apache | 2 | python,django,sqlite,twisted,daemon | 0 | 2015-08-25T20:49:00.000 |
I made a sheet with a graph using python and openpyxl. Later on in the code I add some extra cells that I would also like to see in the graph. Is there a way that I can change the range of cell that the graph is using, or maybe there is another library that lets me do this?
Example:
my graph initially uses columns A1:B10, then I want to update it to use A1:D10
Currently I am deleting the sheet, recreating it, writing back the values and making the graph again. The problem is that this is a big process that takes days, and there will be a point where rewriting the sheet will take some time.
At the moment it is not possible to preserve charts in existing files. With the rewrite in version 2.3 of openpyxl the groundwork has been laid that will make this possible. When it happens will depend on the resources available to do the work. Pull requests gladly accepted.
In the meantime you might be able to find a workaround by writing macros to create the charts for you because macros are preserved. A bit clumsy but should work. | 1 | 0 | 0 | Python Excel, Is it possible to update values of a created graph? | 1 | python,graph,openpyxl | 0 | 2015-08-27T16:20:00.000 |
Does anybody know of any currently maintained projects that wire up MongoDB to the most recent version of Django? mongoengine's Django module on GitHub hasn't been updated in 2 years (and I don't know if I can use its regular module with Django) and django-nonrel uses Django 1.6. Has anybody tried using django-nonrel with Django 1.8? | 0 | 0 | 0 | 0 | false | 32,260,215 | 1 | 210 | 1 | 0 | 0 | 32,260,031 | If you are using mongoengine, there is no need for django-nonrel. You can directly use the latest Django versions. | 1 | 0 | 0 | Django: MongoDB engine for Django 1.8 | 1 | python,django,mongodb | 0 | 2015-08-27T21:50:00.000 |
My original purpose was to bulk insert into my db
I tried pyodbc, sqlalchemy and ceodbc to do this with the executemany function, but my DBA checked and they execute each row individually.
His solution was to run a procedure that receives a table (user-defined data type) as a parameter and loads it into the real table.
The problem is that no library seems to support that (or at least there is no example on the internet).
So my question is: has anyone tried bulk insert before, or does anyone know how to pass a user-defined data type to a stored procedure?
EDIT
I also tried the BULK INSERT query, but the problem is that it requires a local path or share, and that will not happen because of organization limits | 2 | 0 | 0 | 0 | false | 32,305,711 | 0 | 826 | 1 | 0 | 0 | 32,297,244 | SQLAlchemy bulk operations don't really insert in bulk. It's written in the docs.
And we've checked it with our DBA.
Thank you, we'll try the XML. | 1 | 0 | 0 | Python mssql passing procedure user defined data type as parameter | 1 | sql-server,python-2.7,sqlalchemy,pyodbc,pymssql | 0 | 2015-08-30T13:51:00.000 |
I would like to read a column of date from SQL database. However, the format of the date in the database is something like 27-Jan-13 which is day-month-year. When I read this column using peewee DateField it is read in a format which cannot be compared later using datetime.date.
Can anyone help me solve the issue? | 0 | 0 | 0 | 0 | false | 32,429,590 | 0 | 366 | 1 | 0 | 0 | 32,303,115 | You need to store the data in the database using the format %Y-%m-%d. When you extract the data you can present it in any format you like, but to ensure the data is sorted correctly (and recognized by SQLite as a date) you must use the %Y-%m-%d format (or unix timestamps if you prefer that way). | 1 | 0 | 0 | Reading DateField with specific format from SQL database using peewee | 1 | python,peewee,datefield | 0 | 2015-08-31T02:14:00.000 |
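A small sketch of converting the existing 27-Jan-13 strings into the %Y-%m-%d form the answer recommends:

from datetime import datetime

raw = "27-Jan-13"
normalized = datetime.strptime(raw, "%d-%b-%y").strftime("%Y-%m-%d")
# normalized == "2013-01-27"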
I know there are various ETL tools available to export data from Oracle to MongoDB, but I wish to use Python as an intermediate to perform this. Please can anyone guide me on how to proceed with this?
Requirement:
Initially I want to add all the records from Oracle to MongoDB, and after that I want to insert only newly inserted records from Oracle into MongoDB.
Appreciate any kind of help. | 1 | 0 | 0 | 0 | false | 32,305,243 | 0 | 889 | 1 | 0 | 0 | 32,305,131 | To answer your question directly:
1. Connect to Oracle
2. Fetch all the delta data by timestamp or id (first time is all records)
3. Transform the data to json
4. Write the json to mongo with pymongo
5. Save the maximum timestamp / id for next iteration
Keep in mind that you should think about the data model considerations; usually a relational DB (like Oracle) and a document DB (like Mongo) will have different data models. | 1 | 0 | 0 | Export from Oracle to MongoDB using python | 1 | python,oracle,mongodb | 0 | 2015-08-31T06:29:00.000 |
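A rough sketch of steps 1-4 with cx_Oracle and pymongo; the connection strings, table name, tracking column and batch size are all assumptions.

import datetime
import cx_Oracle
from pymongo import MongoClient

ora = cx_Oracle.connect("user/password@host:1521/service")
mongo = MongoClient()["target_db"]["records"]

last_sync = datetime.datetime(2015, 1, 1)   # high-water mark saved from the previous run (assumption)

cur = ora.cursor()
cur.execute("SELECT * FROM records WHERE updated_at > :since", since=last_sync)
columns = [d[0].lower() for d in cur.description]

batch = [dict(zip(columns, row)) for row in cur.fetchmany(1000)]
while batch:
    mongo.insert_many(batch)                # step 4: write the JSON-like dicts to Mongo
    batch = [dict(zip(columns, row)) for row in cur.fetchmany(1000)]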
We are migrating some data from our production database and would like to archive most of this data in the Cloud Datastore.
Eventually we would move all our data there, however initially focusing on the archived data as a test.
Our language of choice is Python, and have been able to transfer data from mysql to the datastore row by row.
We have approximately 120 million rows to transfer and at a one row at a time method will take a very long time.
Has anyone found some documentation or examples on how to bulk insert data into cloud datastore using python?
Any comments, suggestions is appreciated thank you in advanced. | 6 | 7 | 1.2 | 0 | true | 33,367,328 | 1 | 3,726 | 1 | 1 | 0 | 32,316,088 | There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything.
You could always write a quick script using a local queue that parallelizes the work.
The basic gist would be (a rough sketch follows this list):
Queuing script pulls data out of your MySQL instance and puts it on a queue.
(Many) Workers pull from this queue, and try to write the item to Datastore.
On failure, push the item back on the queue.
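A minimal standard-library sketch of that producer/consumer gist; write_entity and fetch_rows_from_mysql are placeholders for whatever Datastore and MySQL calls you actually use, and the worker count and queue size are arbitrary.

import queue
import threading

work = queue.Queue(maxsize=10000)

def worker():
    while True:
        row = work.get()
        try:
            write_entity(row)              # placeholder: your Datastore write call
        except Exception:
            work.put(row)                  # on failure, push the item back on the queue
        finally:
            work.task_done()

for _ in range(50):                        # many workers to hide network latency
    threading.Thread(target=worker, daemon=True).start()

for row in fetch_rows_from_mysql():        # placeholder: generator over your MySQL rows
    work.put(row)
work.join()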
Datastore is massively parallelizable, so if you can write a script that will send off thousands of writes per second, it should work just fine. Further, your big bottleneck here will be network IO (after you send a request, you have to wait a bit to get a response), so lots of threads should get a pretty good overall write rate. However, it'll be up to you to make sure you split the work up appropriately among those threads.
Now, that said, you should investigate whether Cloud Datastore is the right fit for your data and durability/availability needs. If you're taking 120m rows and loading it into Cloud Datastore for key-value style querying (aka, you have a key and an unindexed value property which is just JSON data), then this might make sense, but loading your data will cost you ~$70 in this case (120m * $0.06/100k).
If you have properties (which will be indexed by default), this cost goes up substantially.
The cost of operations is $0.06 per 100k, but a single "write" may contain several "operations". For example, let's assume you have 120m rows in a table that has 5 columns (which equates to one Kind with 5 properties).
A single "new entity write" is equivalent to:
+ 2 (1 x 2 write ops fixed cost per new entity)
+ 10 (5 x 2 write ops per indexed property)
= 12 "operations" per entity.
So your actual cost to load this data is:
120m entities * 12 ops/entity * ($0.06/100k ops) = $864.00 | 1 | 0 | 0 | Is it possible to Bulk Insert using Google Cloud Datastore | 3 | python,mysql,google-cloud-datastore | 0 | 2015-08-31T16:47:00.000 |
I am doing a mini-project on Web-Crawler+Search-Engine. I already know how to scrape data using Scrapy framework. Now I want to do indexing. For that I figured out Python dictionary is the best option for me. I want mapping to be like name/title of an object (a string) -> the object itself (a Python object).
Now the problem is that I don't know how to store dynamic dict in MySQL database and I definitely want to store this dict as it is!
Some commands on how to go about doing that would be very much appreciated! | 1 | 1 | 0.099668 | 0 | false | 32,334,348 | 0 | 339 | 1 | 0 | 0 | 32,325,390 | If you want to store dynamic data in a database, here are a few options. It really depends on what you need out of this.
First, you could go with a NoSQL solution, like MongoDB. NoSQL allows you to store unstructured data in a database without an explicit data schema. It's a pretty big topic, with far better guides/information than I could provide you. NoSQL may not be suited to the rest of your project, though.
Second, if possible, you could switch to PostgreSQL, and use it's HSTORE column (unavailable in MySQL). The HSTORE column is designed to store a bunch of Key/Value pairs. This column types supports BTREE, GIST, GIN, and HASH indexing. You're going to need to ensure you're familiar with PostgreSQL, and how it differs from MySQL. Some of your other SQL may no longer work as you'd expect.
Third, you can serialize the data, then store the serialized entity. Both json and pickle come to mind. The viability and reliability of this will of course depend on how complicated your dictionaries are. Serializing data, especially with pickle can be dangerous, so ensure you're familiar with how it works from a security perspective.
Fourth, use an "Entity-Attribute-Value" table. This mimics a dictionaries "Key/Value" pairing. You, essentially, create a new table with three columns of "Related_Object_ID", "Attribute", "Value". You lose a lot of object metadata you'd normally get in a table, and SQL queries can become much more complicated.
Any of these options can be a double edged sword. Make sure you've read up on the downfalls of whatever option you want to go with, or, in looking into the options more, perhaps you'll find something that better suits you and your project. | 1 | 0 | 1 | How to store a dynamic python dictionary in MySQL database? | 2 | python,mysql,dictionary,scrapy | 0 | 2015-09-01T06:58:00.000 |
I'm working on AWS S3 multipart upload, and I am facing the following issue.
Basically I am uploading a file chunk by chunk to S3, and if any write happens to the file locally during that time, I would like to reflect that change in the S3 object that is currently being uploaded.
Here is the procedure that I am following:
Initiate the multipart upload operation.
Upload the parts one by one [5 MB chunk size; do not complete that operation yet].
If a write goes to that file during this time [assuming I have the details for the write: offset, no_bytes_written]:
I will calculate the part number for the write that happened locally, and read that chunk from the uploaded S3 object.
Read the same chunk from the local file and write it into the part read from S3.
Upload the same part to the S3 object.
This will be an async operation. I will complete the multipart operation at the end.
I am facing an issue in reading an uploaded part while the multipart upload is still in progress. Is there any API available for this?
Any help would be greatly appreciated. | 1 | 3 | 1.2 | 0 | true | 32,352,584 | 1 | 1,069 | 1 | 0 | 1 | 32,348,812 | There is no API in S3 to retrieve a part of a multi-part upload. You can list the parts but I don't believe there is any way to retrieve an individual part once it has been uploaded.
You can re-upload a part. S3 will just throw away the previous part and use the new one in its place. So, if you had the old and new versions of the file locally and were keeping track of the parts yourself, I suppose you could, in theory, replace individual parts that had been modified after the multipart upload was initiated. However, it seems to me that this would be a very complicated and error-prone process. What if the change made to a file was to add several MBs of data to it? Wouldn't that change your boundaries? Would that potentially affect other parts, as well?
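If you do go down that road, replacing a modified part is just another upload_part call with the same part number. A rough sketch with boto3 follows; the bucket, key, upload id and part number are placeholders, and you must track the returned ETags yourself for the final complete call.
import boto3

s3 = boto3.client("s3")

# Placeholders -- in practice these come from create_multipart_upload() and from
# the offset/part-number bookkeeping described in the question.
bucket, key = "my-bucket", "my-large-file"
upload_id = "EXAMPLE-UPLOAD-ID"
part_no = 3

with open("localfile", "rb") as f:
    f.seek((part_no - 1) * 5 * 1024 * 1024)  # re-read the 5 MB chunk for that part
    new_chunk = f.read(5 * 1024 * 1024)

resp = s3.upload_part(Bucket=bucket, Key=key, PartNumber=part_no,
                      UploadId=upload_id, Body=new_chunk)
etag_for_part = resp["ETag"]  # needed later for complete_multipart_upload()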
I'm not saying it can't be done but I am saying it seems complicated and would require you to do a lot of bookkeeping on the client side. | 1 | 0 | 0 | How to read a part of amazon s3 key, assuming that "multipart upload complete" is yet to happen for that key? | 1 | python,amazon-web-services,file-upload,amazon-s3,boto | 0 | 2015-09-02T08:59:00.000 |
I am trying to migrate Django models from SQLite to Postgres. I tested it locally and now I am trying to do the same thing with a remote database. I dumped the data first, then started the application, which created the tables in the remote database.
Finally I am trying to run loaddata, but it appears to hang with no errors.
Is there a way to get verbose output? Otherwise I am not sure how to diagnose this issue. It is just a 199MB file, and when I test locally loaddata works in a few minutes. | 2 | 1 | 0.197375 | 0 | false | 32,385,965 | 1 | 353 | 1 | 0 | 0 | 32,362,384 | I had no solution, so I ran loaddata locally and used pg_dump and ran the dump with psql -f and restored the data. | 1 | 0 | 0 | manage.py loaddata hangs when loading to remote postgres | 1 | python,django,django-south | 0 | 2015-09-02T20:19:00.000 |
Is it bad practice to have Django perform migrations on a predominantly Rails web app?
We have a RoR app and have moved a few of the requirements out to Python. One of the devs here has suggested creating some of the latest database migrations using Django, and my gut says this is a bad idea.
I haven't found any solid statements one way or the other after scouring the web and am hoping someone can provide some facts about why this is crazy (or why I should keep calm).
database: Postgres
hosting: heroku
skills level: junior | 0 | 1 | 0.197375 | 0 | false | 32,450,497 | 1 | 254 | 1 | 0 | 0 | 32,450,413 | I think you need to maintain migrations in one system (in this case, Rails), because it will be difficult to check migrations between two different apps. What will you do if you don't have access to the other app?
But you can keep something like db/schema.rb for Django tracked in git.
Recently, I used Python and Scrapy to crawl article information like 'title' from a blog. Without using a database, the results are fine / as expected. However, when I use SQLAlchemy, I receive the following error:
InterfaceError:(sqlite3.InterfaceError)Error binding parameter 0
-probably unsupported type.[SQL:u'INSERT INTO myblog(title) VALUES (?)'] [PARAMETERS:([u'\r\n Accelerated c++\u5b66\u4e60
chapter3 -----\u4f7f\u7528\u6279\u636e \r\n '],)]
My xpath expression is:
item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()
Which gives me the following value for item['title']:
[u'\r\n Accelerated c++ \u5b66 \u4e60 chapter3 -----\u4f7f\u7528\u6279\u636e \r\n ']
It's unicode, why doesn't sqlite3 support it? This blog's title information contains some Chinese. I am tired of SQLAlchemy. I've referred to its documentation, but found nothing, and I'm out of ideas. | 2 | 0 | 1.2 | 0 | true | 32,471,731 | 1 | 454 | 1 | 0 | 0 | 32,460,120 | The problem that you're experiencing is that SQLite3 wants a datatype of "String", and you're passing in a list with a unicode string in it.
change:
item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()
to
item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()[0].
You'll be left with a string to be inserted, and your SQLite3 errors should go away. Warning, though: if you ever want to deal with more than just one title, this will limit you to the first. You can use whatever method you want to persuade those into being a string, though. | 1 | 0 | 0 | InterfaceError:(sqlte3.InterfaceError)Error binding parameter 0 | 1 | python,xpath,sqlite,scrapy | 0 | 2015-09-08T14:13:00.000 |
I have to write a Python script which will copy a file from S3 to my EBS directory; the problem is that I'm running this Python script from my local machine. Is there any boto function with which I can copy from S3 to EBS without storing it locally? | 0 | 3 | 0.53705 | 0 | false | 32,478,697 | 0 | 830 | 1 | 0 | 1 | 32,478,432 | No. EBS volumes are accessible only on the EC2 instance they're mounted on. If you want to download a file directly from S3 to an EBS volume, you need to run your script on the EC2 instance. | 1 | 0 | 0 | Copy file from S3 to EBS | 1 | python,amazon-web-services,amazon-s3,boto,boto3 | 0 | 2015-09-09T11:33:00.000 |
My organisation uses Business Objects as a layer over its Oracle database so that people like me (i.e. not in the IT dept) can access the data without the risk of breaking something.
I have a PythonAnywhere account where I have a few dashboards built using Flask.
Each morning, BO sends me an email with the csv files of the data that I want. I then upload these to a MySQL server, and go from there. There is also an option to send it to an FTP recipient...but that's pretty much it.
Is it possible to set up an FTP server on my (paid for) PythonAnywhere account? If I could have those files go to a dir like /data, I could then have a scheduled job to insert them into my DB.
The data is already in the public domain and not sensitive.
Or is there intact a better way? | 1 | 1 | 1.2 | 0 | true | 32,530,008 | 0 | 875 | 1 | 0 | 0 | 32,509,768 | PythonAnywhere dev here: we don't support regular FTP, unfortunately. If there was a way to tell BO to send the data via an HTTP POST to a website, then you could set up a simple Flask app to handle that -- but I'm guessing from what you say that it doesn't :-( | 1 | 0 | 0 | Sending csv file via FTP to PythonAnywhere | 1 | ftp,pythonanywhere | 0 | 2015-09-10T19:01:00.000 |
I am trying to read data from a CSV file into a Postgres table. I have two columns in the table, but there are four fields in the CSV data file. I want to read only two specific columns from the CSV into the table. | 0 | 0 | 0 | 0 | false | 32,553,942 | 0 | 509 | 1 | 0 | 0 | 32,553,773 | Would you know how to do it if there were only those two columns in the CSV file?
If yes, then the simplest solution is to transform the CSV prior to importing into Postgres. | 1 | 0 | 0 | how to copy specific columns from CSV file to postgres table using psycopg2? | 1 | python,sql,postgresql,csv,psycopg2 | 0 | 2015-09-13T19:36:00.000 |
It's been two days that I have been trying to work with cx_Oracle. I want to connect to Oracle from Python, but I am getting an "ImportError: DLL load failed: The specified procedure could not be found." error. I have already gone through many posts and tried the things suggested in them, but nothing helped me.
I checked the versions of Windows, Python, and the Oracle client as suggested in many posts, but all of them look good to me.
Python version 2.7: 64 bit
Python 2.7.8 (default, Jun 30 2014, 16:08:48) [MSC v.1500 64 bit (AMD64)] on win32
Windows 7: 64 bit
Oracle client is 11.2.0: 64 bit
I ran SQL*Plus and checked Task Manager to confirm that. I have both the 32 and 64 bit clients installed on my system, but the 64 bit one is set in the PATH variable.
Please help me to sort out this problem. Do let me know if any other information is needed. | 3 | 1 | 0.197375 | 0 | false | 32,578,626 | 0 | 1,699 | 1 | 0 | 0 | 32,561,547 | I was able to sort it out. I had installed the incorrect version of cx_Oracle previously. It was for 12c oracle client. I installed 11g version later and it started working for me.
Note: There is no need to set ORACLE_HOME environment variable.
Oracle client, Python, and Windows OS must all be of the same architecture: either 32 or 64 bit. | 1 | 0 | 0 | Not able to import cx_Oracle in python "ImportError: DLL load failed: The specified procedure could not be found." | 1 | oracle,python-2.7,cx-oracle | 0 | 2015-09-14T09:37:00.000 |
Has anyone installed psycopg2 for python 3 on Centos 7? I'm sure it's possible, but when I run:
pip install psycopg2
I get:
Could not find a version that satisfies the requirement pyscopg2 (from versions: )
No matching distribution found for pyscopg2 | 0 | 2 | 0.379949 | 0 | false | 32,576,369 | 0 | 1,254 | 1 | 0 | 0 | 32,576,326 | You have misspelled the name of the library. The correct name is psycopg2 | 1 | 0 | 0 | psycopg2 for python3 on Centos 7 | 1 | python,postgresql,python-3.x,centos,centos7 | 0 | 2015-09-15T01:35:00.000 |
I found this line to help configure PostgreSQL in web2py but I can't seem to find a good place to put it:
db = DAL("postgres://myuser:mypassword@localhost:5432/mydb")
Do I really have to write it in all db.py ? | 0 | 2 | 1.2 | 0 | true | 32,618,356 | 1 | 939 | 1 | 0 | 0 | 32,616,625 | Files in the /models folder are executed in alphabetical order, so just put the DAL definition at the top of the first model file that needs to use it (it will then be available globally in all subsequent model files as well as all controllers and views). | 1 | 0 | 0 | web2py database configuration | 2 | python,database,web2py | 0 | 2015-09-16T19:00:00.000 |
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django?
EDIT:
I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4? | 6 | 1 | 1.2 | 0 | true | 32,637,043 | 1 | 10,126 | 4 | 0 | 0 | 32,620,930 | Well, I found the issue. I have auditlog installed as one of my apps. I removed it and migrate works fine. | 1 | 0 | 0 | Django 1.8 migrate: django_content_type does not exist | 6 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 0 | 2015-09-17T00:45:00.000 |
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django?
EDIT:
I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4? | 6 | 0 | 0 | 0 | false | 67,501,508 | 1 | 10,126 | 4 | 0 | 0 | 32,620,930 | I had to drop the database and rebuild it; then, in the PyCharm terminal, py manage.py makemigrations and py manage.py migrate fixed this problem. I think the reason is that django_content_type is Django's own table; if it is missing, you cannot migrate, so you have to drop the database and rebuild. | 1 | 0 | 0 | Django 1.8 migrate: django_content_type does not exist | 6 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 0 | 2015-09-17T00:45:00.000 |
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django?
EDIT:
I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4? | 6 | 6 | 1 | 0 | false | 32,623,157 | 1 | 10,126 | 4 | 0 | 0 | 32,620,930 | Delete all the migration folders from your app and delete the database, then migrate your database.
If this does not work, delete the django_migrations table from the database and add the "name" column to the django_content_type table: ALTER TABLE django_content_type ADD COLUMN name character varying(50) NOT NULL DEFAULT 'anyName'; and then run $ python manage.py migrate --fake-initial | 1 | 0 | 0 | Django 1.8 migrate: django_content_type does not exist | 6 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 0 | 2015-09-17T00:45:00.000 |
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error:
django.db.utils.ProgrammingError: relation "django_content_type" does not exist
I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django?
EDIT:
I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4? | 6 | 2 | 0.066568 | 0 | false | 37,074,120 | 1 | 10,126 | 4 | 0 | 0 | 32,620,930 | Here's what I found/did. I am using django 1.8.13 and python 2.7. The problem did not occur for Sqlite. It did occur for PostgreSQL.
I have an app that uses a GenericForeignKey (which relies on Contenttypes). I have another app that has a model that is linked to the first app via the GenericForeignKey. If I run makemigrations for both these apps, then migrate works. | 1 | 0 | 0 | Django 1.8 migrate: django_content_type does not exist | 6 | django,python-2.7,django-1.7,django-migrations,django-1.8 | 0 | 2015-09-17T00:45:00.000 |
I have read-only access to a database server dbserver1. I need to store the result set generated from my query running on dbserver1 into another server of mine, dbserver2. How should I go about doing that?
Also, can I set up a trigger which will automatically copy new entries that come into dbserver1 over to dbserver2? Source and destination are both using Microsoft SQL Server.
Following on that, I need to call a Python script on a database trigger event. Any ideas on how that could be accomplished? | 0 | 1 | 1.2 | 0 | true | 32,659,567 | 0 | 449 | 1 | 0 | 0 | 32,659,008 | lad2015 answered the first part. The second part can be infinitely more dangerous as it involves calling outside the Sql Server process.
In the bad old days one would use the xp_cmdshell. These days it may be more worthwhile to create an Unsafe CLR stored procedure that'll call the python script.
But it is very dangerous and I cannot stress just how much you shouldn't do it unless you really have no other choices.
I'd prefer to see a polling python script that runs 100% external to Sql that connects to a Status table populated by the trigger and performs work accordingly. | 1 | 0 | 0 | Copy database from one server to another server + trigger python script on a database event | 1 | python,sql-server,database-design,triggers,database-cloning | 0 | 2015-09-18T18:45:00.000 |
In the environment, we have an Excel file, which includes raw data in one sheet and a pivot table and charts in another sheet.
I need to append rows to the raw data every day automatically using a Python job.
I am not sure, but there may be some VB Script running on the front end which will refresh the pivot tables.
I used openpyxl and, by following its online documentation, I was able to append rows and save the workbook. I used keep_vba=true while loading the workbook to keep the VBA modules inside to enable pivoting. But after saving the workbook, the xlsx cannot be opened anymore using MS Office, which says the format or the extension is not valid. I can see the data using Python, but with Office it's not working anymore. If I don't use keep_vba=true, then pivoting is not working, only the previous values are present (of course, as I understood, the VBA script is needed for pivoting).
Could you explain to me what's happening? I am new to Python and don't know its concepts much.
How can I fix this in openpyxl, or is there a better alternative to openpyxl? Data connections in MS Office are not an option for me.
As I understood, xlsx may need special modules to save the VB script in the same way as it would be saved using MS Office. If so, then what is the purpose of keep_vba=true?
I would be grateful if you could explain in more detail. I would love to know.
As I have very short time to complete this task, I am looking for a quick answer here, instead of going through all the concepts.
Thank you! | 1 | 3 | 0.53705 | 0 | false | 32,684,112 | 0 | 3,002 | 1 | 0 | 0 | 32,682,336 | You have to save the files with the extension ".xlsm" rather than ".xlsx". The .xlsx format exists specifically to provide the user with assurance that there is no VBA code within the file. This is an Excel standard and not a problem with openpyxl. With that said, I haven't worked with openpyxl, so I'm not sure what you need to do to be sure your files are properly converted to .xlsm.
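For illustration, a minimal openpyxl sketch of keeping the VBA project and saving with the .xlsm extension (file and sheet names are placeholders):
from openpyxl import load_workbook

wb = load_workbook("invoices.xlsm", keep_vba=True)  # keep the VBA project intact
ws = wb["rawdata"]                                  # placeholder: the sheet your daily rows go into
ws.append(["2015-09-20", "client-123", 42.0])       # example row
wb.save("invoices.xlsm")                            # save as .xlsm, not .xlsx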
Edit: Sorry, misread your question first time around. Easiest step would be to set keep_vba=False. That might resolve your issue right there, since you're telling openpyxl to look for VBA code that can't possibly exist in an xlsx file. Hard to say more than that until you post the relevant section of your code. | 1 | 0 | 0 | xlsx file extension not valid after saving with openpyxl and keep_vba=true. Which is the best way? | 1 | python,xlsx,openpyxl | 0 | 2015-09-20T17:37:00.000 |
I'm using Python 2.7.5 in Spyder along with Sqlite3 3.6.21. I have noticed that the execute method is pretty slow, pretty much regardless of the size of the database I'm creating. After doing some research, no solution really works for me:
Python 3 is not supported by Spyder yet
updating the Sqlite3 version does not work (replacing the dll file causes problems)
Is there a way around this? If any more details are needed, I'm glad to elaborate further. | 0 | 0 | 0 | 0 | false | 32,700,885 | 0 | 157 | 1 | 0 | 0 | 32,682,674 | Indeed, inspired by Colonel Thirty Two's comment above, I just realized that I need to wrap all my operations into one transaction. This was trivial to implement and improved overall efficiency drastically. Thanks once again! | 1 | 0 | 0 | python sqlite3 execute method slow | 1 | python,sqlite | 0 | 2015-09-20T18:12:00.000 |
I have a large Excel file (450mb+). I need to replace (,) -> (; or .) for one of my fastload scripts to work. I am not able to open the file at all. Any script would actually involve opening the file, performing the operation, saving and closing the file, in that order.
Will a VB script like that work here for the 450mb+ file, given that the file won't even open?
Is there any VB script, shell script, Python, Java, etc. I can write to actually perform the replacement operation without opening the Excel file?
Or alternatively, is there any way of opening an Excel file that big and performing that operation? | 1 | 0 | 0 | 0 | false | 32,691,381 | 0 | 273 | 1 | 0 | 0 | 32,690,561 | If you have access to a Linux environment (which you might since you mention shell script as one of your options) then just use sed in a terminal or Putty:
sed -i.bak 's/,/;/g' yourfile.excel
Sed streams the text without loading the entire file at once.
-i will make changes to your original file but providing .bak will create a copy named yourfile.excel.bak first | 1 | 0 | 0 | Replace characters without opening Excel file | 1 | python,excel,shell,vbscript | 0 | 2015-09-21T08:27:00.000 |
I had posted about this error some time back but need some more clarification on this.
I'm currently building out a Django Web Application using Visual Studio 2013 on a Windows 10 machine (Running Python3.4). While starting out I was constantly dealing with the MySQL connectivity issue, for which I did a mysqlclient pip-install. I had created two projects that use MySQL as a backend and after installing the mysqlclient I was able to connect to the database through the current project I was working on. When I opened the second project and tried to connect to the database, I got the same 'No Module called MySqlDB' error.
Now, the difference between both projects was that the first one was NOT created within a Virtual Environment whereas the second was.
So I have come to deduce that projects created within the Python virtual environment are not able to connect to the database. Can someone here please help me in getting this problem solved. I need to know how the mysqlclient module can be loaded onto a virtual environment so that a project can use it.
Thanks | 0 | 1 | 0.066568 | 0 | false | 32,723,517 | 1 | 660 | 1 | 0 | 0 | 32,715,175 | This approach worked ! I was able to install the mysqlclient inside the virtual environment through the following command:-
python -m pip install mysqlclient
Thanks Much..!!!!! | 1 | 0 | 0 | No Module Named MySqlDb in Python Virtual Enviroment | 3 | python,mysql,django | 0 | 2015-09-22T11:02:00.000 |
Because I need to parse and then use the actual data in cells, I open an xlsm in openpyxl with data_only = True.
This has proved very useful. Now though, having the same need for an xlsm that contains formulae in cells, when I then save my changes, the formulae are missing from the saved version.
Are data_only = True and formulae mutually exclusive? If not, how can I access the actual value in cells without losing the formulae when I save?
When I say I lose the formulae, it seems that the results of the formulae (sums, concatenations etc.) get preserved. But the actual formulae themselves are no longer displayed when a cell is clicked.
UPDATE:
To confirm whether or not the formulae were being preserved, I've re-opened the saved xlsm, this time with data_only left as False. I've checked the value of a cell that had been constructed using a formula. Had the formulae been preserved, opening the xlsm with data_only set to False should have returned the formula. But it returns the actual text value (which is not what I want). | 6 | 2 | 0.197375 | 0 | false | 32,776,318 | 0 | 6,128 | 1 | 0 | 0 | 32,772,954 | If you want to preserve the integrity of the workbook, i.e. retain the formulae, then you cannot use data_only=True. The documentation makes this very clear. | 1 | 0 | 0 | How to save in openpyxl without losing formulae? | 2 | python,python-2.7,openpyxl | 0 | 2015-09-25T00:21:00.000 |
Arrghh... I am trying to use mySQL with Python. I have installed all the libraries for using mySQL, but keep getting the: "ImportError: No module named mysql.connector" for "import mysql.connector", "mysql", etc..
Here is my config:
I have a RHEL server:
Red Hat Enterprise Linux Server release 6.7 (Santiago)
with Python 2.7.9
Python 2.7.9 (default, Dec 16 2014, 10:42:10)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
with mySQL 5.1
mysql Ver 14.14 Distrib 5.1.73, for redhat-linux-gnu (x86_64) using readline 5.1
I have all the appropriate libraries/modules installed, I think!
yum install MySQL-python
Package MySQL-python-1.2.3-0.3.c1.1.el6.x86_64 already installed and latest version
yum install mysql-connector-python.noarch
Installed: mysql-connector-python.noarch 0:1.1.6-1.el6
yum install MySQL-python.x86_64
Package MySQL-python-1.2.3-0.3.c1.1.el6.x86_64 already installed and latest version
yum install mysql-connector-python.noarch
Package mysql-connector-python-1.1.6-1.el6.noarch already installed and latest version
What am I doing wrong? HELP!? | 0 | -1 | -0.099668 | 0 | false | 32,786,846 | 0 | 544 | 2 | 0 | 0 | 32,786,620 | Nevermind!!!
Apparently I am installing Python and libraries in the right directories and such (I have always used YUM), but apparently there are other versions of Python installed.. need to clean that up.
Running: /usr/bin/python
All the modules worked!
Running: python (Linux finding python in the path somewhere)
Modules are not working!
Grrr...
Maybe I will check out the Python "Virtual Environments"!
Thanks for the suggestions! | 1 | 0 | 0 | mySQL within Python 2.7.9 | 2 | python,mysql,linux,python-2.7 | 0 | 2015-09-25T16:24:00.000 |
Arrghh... I am trying to use mySQL with Python. I have installed all the libraries for using mySQL, but keep getting the: "ImportError: No module named mysql.connector" for "import mysql.connector", "mysql", etc..
Here is my config:
I have a RHEL server:
Red Hat Enterprise Linux Server release 6.7 (Santiago)
with Python 2.7.9
Python 2.7.9 (default, Dec 16 2014, 10:42:10)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
with mySQL 5.1
mysql Ver 14.14 Distrib 5.1.73, for redhat-linux-gnu (x86_64) using readline 5.1
I have all the appropriate libraries/modules installed, I think!
yum install MySQL-python
Package MySQL-python-1.2.3-0.3.c1.1.el6.x86_64 already installed and latest version
yum install mysql-connector-python.noarch
Installed: mysql-connector-python.noarch 0:1.1.6-1.el6
yum install MySQL-python.x86_64
Package MySQL-python-1.2.3-0.3.c1.1.el6.x86_64 already installed and latest version
yum install mysql-connector-python.noarch
Package mysql-connector-python-1.1.6-1.el6.noarch already installed and latest version
What am I doing wrong? HELP!? | 0 | 2 | 1.2 | 0 | true | 32,786,680 | 0 | 544 | 2 | 0 | 0 | 32,786,620 | You should use virtualenv in order to isolate the environment. That way your project libs won't clash with other projects libs. Also, you probably should install the Mysql driver/connector from pip.
Virtualenv is a CLI tool for managing your environment. It is really easy to use and helps a lot. What it does is create all the folders Python needs in a custom location (usually your specific project folder) and it also sets all the shell variables so that Python can find the folders. Your system (/usr and so on) folders are not removed from the shell; rather, they just get a low priority. That is done by correctly setting your PATH variable, and virtualenv does that when you load a given environment.
It is common practice to use an environment for each project you work on. That way, Python and pip won't install libs on the global folders. Instead, pip installs the libs on the current environment you are using. That avoids version conflicts and even Python version conflicts. | 1 | 0 | 0 | mySQL within Python 2.7.9 | 2 | python,mysql,linux,python-2.7 | 0 | 2015-09-25T16:24:00.000 |
I wrote a Python program which produces invoices in a specific form as .xlsx files using openpyxl. I have the general invoice form as an Excel workbook and my program copies this form and fills in the details about the specific client (e.g. client reference number, price, etc.) which are read from another .txt file.
The program works perfectly. The only problem is that the form contains a cell which has multiple styles: half of the letters are red and the rest black, and there is also a size difference. This cell is not edited in my program (it is the same in all the invoices); however, after the rest of the worksheet is edited by my program the cell keeps only the first style (the red letters).
Why does openpyxl change this cell since I don't edit it? Does openpyxl support multiple styles, or do I have to split the letters with different styles into separate cells? | 3 | 0 | 0 | 0 | false | 32,790,713 | 0 | 1,165 | 1 | 0 | 0 | 32,788,716 | openpyxl does not support multiple styles within an individual cell. | 1 | 0 | 0 | Multiple styles in one cell in openpyxl | 1 | python,excel,openpyxl | 0 | 2015-09-25T18:45:00.000 |
I want to install py-MySQLdb but I always get the same lib error..
Any suggestions?
Thanks in advance
ImportError: /usr/lib/libz.so.6: unsupported file layout
*** [do-configure] Error code 1
Stop in /usr/ports/databases/py-MySQLdb.
*** [install] Error code 1
Stop in /usr/ports/databases/py-MySQLdb. | 1 | 0 | 0 | 0 | false | 32,839,165 | 0 | 226 | 1 | 0 | 0 | 32,826,804 | It seems like you mixed 32 and 64bit libraries.
I suggest cleaning up the wrong libraries (at least libz seems to be affected) and restoring them from backup/installation medium | 1 | 0 | 0 | FreeBSD 9.3 ImportError while installing py-MySQLdb | 1 | python,freebsd | 0 | 2015-09-28T15:38:00.000 |
I am writing a Python script to fetch and update some data on a remote Oracle database from a Linux server. I would like to know how I can connect to the remote Oracle database from the server.
Do I necessarily need to have an Oracle client installed on my server, or can any connector be used for the same?
And also if I use cx_Oracle module in Python, is there any dependency that has to be fulfilled for making it work? | 1 | 0 | 0 | 0 | false | 32,836,606 | 0 | 6,425 | 1 | 0 | 0 | 32,836,510 | Yes, you definitely need to install an Oracle Client, it even says so in cx_oracle readme.txt.
Another recommendation you can find there is installing an oracle instant client, which is the minimal installation needed to communicate with Oracle, and is the simplest to use.
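For reference, a minimal cx_Oracle connection sketch; the credentials and EZConnect string are placeholders, and the Oracle client mentioned above must be installed on the Linux server:
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "dbhost.example.com:1521/orclservice")
cur = conn.cursor()
cur.execute("SELECT sysdate FROM dual")
print(cur.fetchone())
cur.close()
conn.close()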
Other dependencies can usually be found in the readme.txt file, and should be the first place to look for these details. | 1 | 0 | 0 | Python connection to Oracle database | 2 | python,oracle,database-connection,cx-oracle | 0 | 2015-09-29T05:47:00.000 |
I inherited a project and am having what seems to be a permissions issue when trying to interact with the database. Basically we have a two step process of detach and then delete.
Does anyone know where the user would come from if the connection string only has driver, server, and database name?
EDIT
I am on Windows Server 2008 standard
EDIT
"DRIVER={%s};SERVER=%s;DATABASE=%s;" Where driver is "SQL Server" | 3 | 2 | 0.197375 | 0 | false | 32,915,064 | 0 | 989 | 2 | 0 | 0 | 32,914,037 | Since you're on Windows, a few things you should know:
Using the Driver={SQL Server} only enables features and data types supported by SQL Server 2000. For features up through 2005, use {SQL Native Client} and for features up through 2008 use {SQL Server Native Client 10.0}.
To view your ODBC connections, go to Start and search for "ODBC" and bring up Data Sources (ODBC). This will list User, System, and File DSNs in a GUI. You should find the DSN with username and password filled in there. | 1 | 0 | 0 | Where does pyodbc get its user and pwd from when none are provided in the connection string | 2 | python,database,permissions,pyodbc | 0 | 2015-10-02T18:57:00.000 |
I inherited a project and am having what seems to be a permissions issue when trying to interact with the database. Basically we have a two step process of detach and then delete.
Does anyone know where the user would come from if the connection string only has driver, server, and database name.
EDIT
I am on Windows Server 2008 standard
EDIT
"DRIVER={%s};SERVER=%s;DATABASE=%s;" Where driver is "SQL Server" | 3 | 3 | 1.2 | 0 | true | 32,951,286 | 0 | 989 | 2 | 0 | 0 | 32,914,037 | I just did a few tests and the {SQL Server} ODBC driver apparently defaults to using Windows Authentication if the Trusted_connection and UID options are both omitted from the connection string. So, your Python script must be connecting to the SQL Server instance using the Windows credentials of the user running the script.
(On the other hand, the {SQL Server Native Client 10.0} driver seems to default to SQL Authentication unless Trusted_connection=yes is included in the connection string.) | 1 | 0 | 0 | Where does pyodbc get its user and pwd from when none are provided in the connection string | 2 | python,database,permissions,pyodbc | 0 | 2015-10-02T18:57:00.000 |
In my Google App Engine App, I have a large number of entities representing people. At certain times, I want to process these entities, and it is really important that I have the most up to date data. There are far too many to put them in the same entity group or do a cross-group transaction.
As a solution, I am considering storing a list of keys in Google Cloud Storage. I actually use the person's email address as the key name so I can store a list of email addresses in a text file.
When I want to process all of the entities, I can do the following:
Read the file from Google Cloud Storage
Iterate over the file in batches (say 100)
Use ndb.get_multi() to get the entities (this will always give the most recent data)
Process the entities
Repeat with next batch until done
Are there any problems with this process or is there a better way to do it? | 0 | 1 | 1.2 | 0 | true | 32,941,257 | 1 | 60 | 1 | 1 | 0 | 32,915,462 | If, like you say in the comments, your lists change rarely and you can't use ancestors (I assume because of write frequency in the rest of your system), your proposed solution would work fine. You can do as many get_multi() calls, as frequently as you wish; Datastore can handle it.
Since you mentioned you can handle having that keys list updated as needed, that would be a good way to do it.
You can stream-read a big file (say from cloud storage with one row per line) and use datastore async reads to finish very quickly or use google cloud dataflow to do the reading and processing/consolidating.
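For example, a minimal ndb sketch of that batch procedure (the Person kind and the handle() processing step are placeholders):
from google.appengine.ext import ndb

def process_key_file(lines):
    # lines: the streamed rows of the keys file in Cloud Storage, one email address per line
    batch = []
    for line in lines:
        batch.append(ndb.Key("Person", line.strip()))
        if len(batch) == 100:
            handle(ndb.get_multi(batch))  # handle() is whatever processing you do per batch
            batch = []
    if batch:
        handle(ndb.get_multi(batch))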
dataflow can also be used to instantly generate that keys list file in cloud storage. | 1 | 0 | 0 | GAE/P: Storing list of keys to guarantee getting up to date data | 2 | python,google-app-engine,google-cloud-datastore,eventual-consistency | 0 | 2015-10-02T20:35:00.000 |
I'm starting a project on my own and I'm having some trouble with importing data from IMDb. I have already downloaded everything that's necessary, but I'm kind of a newbie at this Python and command-line stuff, and it's pissing me off because I'm doing my homework (trying to learn how to do these things) but I can't get it to work :(
So, is there anybody who could create a step-by-step guide on how to do it? I mean, something like:
You'll need to download this and run these commands on 'x'.
Create a database and then run 'x'.
It would be amazing for me and other people who don't know how to do this as well, and I would truly appreciate it A LOT, really! Oh, I'm using Windows. | 0 | 3 | 1.2 | 0 | true | 32,972,532 | 0 | 1,057 | 1 | 0 | 0 | 32,953,669 | Problem solved!
For those who are having the same problem, here it goes:
Download the Java Movie Database. It works with Postgres or MySQL. You'll have to download the Java runtime. After that, open the readme in the directory where you installed the Java Movie Database; all the instructions are there, but I'll help you.
Follow the link to the *.list archives and download them. Move them into a new folder. After that, open JMDB (Java Movie Database) and select the right folders for moviecollection, sound etc. (they are in C:/programs...). In IMDb-import select the folder you created which contains the *.list files. Well, this is the end. Run JMDB and you'll have your DB populated. | 1 | 0 | 0 | How to run imdbpy2sql.py and import data from IMDb to postgres | 1 | python,postgresql,imdb,imdbpy | 0 | 2015-10-05T16:44:00.000 |
I need to change data in a large Excel file (more than 240,000 rows on a sheet). It's possible through win32com.client, but I need to use Linux OS ...
Please, could you advise something suitable! | 4 | -1 | -0.099668 | 0 | false | 32,997,001 | 0 | 414 | 1 | 0 | 0 | 32,994,822 | If it's raw data, I always export it to a .csv file and work on it directly. CSV is a simple format with one row per line and all the elements on the row separated with commas. Depending on what you want to do, it's not hard to write a python script to edit that. | 1 | 0 | 0 | Change data in large excel file(more than 240 000 rows on sheet) | 2 | python,python-2.7 | 0 | 2015-10-07T14:22:00.000 |
I tried to open and read the contents of the cookie.sqlite file in Firefox.
cookie.sqlite is the database file where all cookies of webpages opened in Firefox are stored. When I try to access it with a Python program it does not allow reading, as cookie.sqlite is locked. How can I open and read the contents? | 0 | 0 | 1.2 | 0 | true | 33,078,468 | 0 | 858 | 1 | 0 | 0 | 33,078,383 | That's not a programming question as is -- sqlite sets a flag in the file so that when trying to open the file a second time, the other sqlite instance knows that the file is "dirty", because the first Sqlite is actively modifying it. There's no way around this -- data integrity is the core functionality of databases, and hence, you really shouldn't try to do that.
I'd generally recommend just writing a Firefox extension, which then can access the cookie database as accessible from within firefox, and export whatever you need to your external program. | 1 | 0 | 0 | how to read cookie.sqlite file in firefox through python program | 1 | python,sqlite,cookies | 0 | 2015-10-12T10:02:00.000 |
I'm not overly familiar with Linux and am trying to run a Python script that is dependent upon Python 3.4 as well as pymssql. Both Python 2.7 and 3.4 are installed (usr/local/lib/[PYTHON_VERSION_HERE]). pymssql is also installed, except it's installed in the Python 2.7 directory, not the 3.4 directory. When I run my Python script (python3 myscript.py), I get the following error:
File "myscript.py", line 2, in
import pymssql
ImportError: No module named 'pymssql'
My belief is that I need to install pymssql to the Python 3.4 folder, but that's my uneducated opinion. So my question is this:
How can I get my script to run using Python 3.4 as well as use the pymssql package (sorry, probably wrong term there)?
I've tried many different approaches, broken my Ubuntu install (and subsequently reimaged), and at this point don't know what to do. I am a relative novice, so some of the replies I've seen on the web say to use ENV and separate the versions are really far beyond the scope of my understanding. If I have to go that route, then I will, but if there is another (i.e. easier) way to go here, I'd really appreciate it, as this was supposed to just be a tiny thing I need to take care of but it's tied up 12 hours of my life thus far! Thank you in advance. | 1 | 2 | 0.197375 | 0 | false | 35,328,465 | 0 | 5,321 | 1 | 0 | 0 | 33,114,337 | It is better if when you run python3.4 you can have modules for that version.
Another way to get the desired modules running is to install pip for Python 3.4:
sudo apt-get install python3-pip
Then install the module you want
python3.4 -m pip install pymssql | 1 | 0 | 0 | How to install pymssql to Python 3.4 rather than 2.7 on Ubuntu Linux? | 2 | python,linux,pymssql | 0 | 2015-10-13T23:38:00.000 |
I'd like to use the Dropbox API (with access only to my own account) to generate a link to SomeFile.xlsx that I can put in an email to multiple Dropbox account holders, all of whom are presumed to have access to the file. I'd like for the same link, when clicked on, to talk to Dropbox to figure out where SomeFile.xlsx is on their local filesystem and open that up directly.
In other words, I do NOT want to link to the cloud copy of the file. I want to link to the clicker's locally-synced version of the file.
Does Dropbox have that service and does the API let me consume it? I haven't been able to discover the answer from the documentation yet. | 0 | 1 | 1.2 | 0 | true | 33,136,183 | 0 | 82 | 1 | 0 | 0 | 33,134,816 | No, Dropbox doesn't have an API like this. | 1 | 0 | 0 | Relative link to local copy of Dropbox file | 1 | python,dropbox,dropbox-api | 0 | 2015-10-14T20:18:00.000 |
I am doing an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any possibility to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | 1 | 0 | 1.2 | 0 | true | 33,176,797 | 0 | 963 | 3 | 0 | 0 | 33,172,842 | A small workaround could be to package your application with cx_freeze or pyinstaller. Then it can run on a machine without installing Python. The downside is of course that the program tends to be a bit bulky in size. | 1 | 0 | 1 | create an application in excel using python for a user without python | 4 | python,excel,xlwings | 0 | 2015-10-16T14:24:00.000 |
I am doing an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any possibility to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | 1 | 0 | 0 | 0 | false | 33,176,474 | 0 | 963 | 3 | 0 | 0 | 33,172,842 | It is possible using xlloop. This is a customized client-server approach, where the client is an Excel .xll which must be installed on the client's machine.
The server can be written in many languages, including python, and of course it must be launched on a server that has python installed. Currently the .xll is available only for 32 bits. | 1 | 0 | 1 | create an application in excel using python for a user without python | 4 | python,excel,xlwings | 0 | 2015-10-16T14:24:00.000 |
I am doing an application in Excel and I'd like to use the Python language. I've seen a pretty cool library called xlwings, but to run it a user needs to have Python installed.
Is there any possibility to prepare this kind of application so that it can be launched from a PC without Python?
Any suggestions are welcome! | 1 | 0 | 0 | 0 | false | 33,172,988 | 0 | 963 | 3 | 0 | 0 | 33,172,842 | No, you need to install Python for interpreting the Python functions, etc. | 1 | 0 | 1 | create an application in excel using python for a user without python | 4 | python,excel,xlwings | 0 | 2015-10-16T14:24:00.000 |
I'm new to using databases in Python and I'm playing around with MySQLdb. I have several methods that will issue database calls. Do I need to go through the database connection steps every time I want to make a call or is the instance of the database persistent? | 1 | 1 | 1.2 | 0 | true | 33,243,613 | 0 | 32 | 1 | 0 | 0 | 33,243,575 | The connection instance is persistent; you can connect once and work with the connection as long as you need. | 1 | 0 | 0 | Do I need to call MySQLdb.connect() in every method where I execute a database operation? | 1 | python,mysql,mysql-python | 0 | 2015-10-20T17:57:00.000 |
I have the following table
create table players (name varchar(30), playerid serial primary key);
And I am working with the script:
def registerPlayer(name):
    """Registers new player."""
    db = psycopg2.connect("dbname=tournament")
    c = db.cursor()
    player = "insert into players values (%s);"
    scores = "insert into scores (wins, matches) values (0, 0);"
    c.execute(player, (name,))
    c.execute(scores)
    db.commit()
    db.close()
But when I try and register a player with the argument in quotes as so:
registerPlayer("Any Name")
It doesn't work... Now, if I directly enter the query into psql, it works if I only use single quotes as so
INSERT INTO players VALUES ('Any Name');
But not if I use "Any Name". If I use the "", it tells me:
ERROR: column "Any Name" does not exist Now, this is a problem if I want to enter a name in such as Bob O'Neal, because it will close off that entry after the O.
The quotes were working fine the other day, and I went to format so that all the SQL queries were capitalized, and everything stopped working. I returned to the code that was working fine, and now nothing is working! | 0 | 2 | 0.379949 | 0 | false | 33,248,538 | 0 | 74 | 1 | 0 | 0 | 33,248,482 | Double-quotes in SQL are not strings - they escape table, index, and other object names (ex. "John Smith" refers to a table named John Smith). Only single quoted strings are actually strings.
In any case, if you are using query parameters properly (which, in your example code, you seem to be), you should not have to worry about escaping your data. Simply pass the raw values to execute (ex. c.execute(player, ("Bob O'Niel",))) | 1 | 0 | 0 | Quotations not working in PostgreSQL Queries | 1 | python,sql,database,postgresql | 0 | 2015-10-20T23:24:00.000 |
I filled an Excel sheet with correct float numbers based on the German decimal point format. So, the number 3.142 is correctly written 3,142, and if it is written 3.142 (or '3.142 by declaring it as a text entry in order to avoid English interpretation as 3142), then I want to report an error to the author of the Excel file.
So, I want to see a 3,142 in the first case when reading this file with openpyxl, and in the second case a 3.142 - just as written by hand in Excel.
However, I see 3.142 in both cases. What can I do? | 2 | 0 | 0 | 0 | false | 33,275,544 | 0 | 549 | 1 | 0 | 0 | 33,263,378 | When it comes to value of numbers openpyxl doesn't care about their formatting so it will report 3142 in both cases. I don't think coercing this to a string makes any sense at all. | 1 | 0 | 0 | How to read a cell of a sheet filled with floats containing German decimal point | 2 | python,python-3.x,openpyxl | 0 | 2015-10-21T15:31:00.000 |
I have an EC2 instance and an S3 bucket in different regions. The bucket contains some files that are used regularly by my EC2 instance.
I want to programmatically download the files onto my EC2 instance (using Python).
Is there a way to do that? | 5 | 0 | 0 | 0 | false | 33,375,622 | 1 | 6,069 | 1 | 0 | 1 | 33,298,821 | As mentioned above, you can do this with Boto. To make it more secure and not worry about the user credentials, you could use IAM to grant the EC2 machine access to the specific bucket only. Hope that helps. | 1 | 0 | 0 | Access to Amazon S3 Bucket from EC2 instance | 5 | python,amazon-web-services,amazon-s3,amazon-ec2,amazon-iam | 0 | 2015-10-23T09:17:00.000 |
I create a database connection in the __init__ method of a Python class and want to make sure that the connection is closed on object destruction.
It looks like I can do this in __del__() or make the class a context manager and close the connection in __exit__(). I wonder which one is more Pythonic. | 1 | 1 | 1.2 | 0 | true | 33,306,596 | 0 | 523 | 1 | 0 | 0 | 33,306,517 | It looks like I can do this in __del__() or make the class a context manager and close the connection in __exit__(). I wonder which one is more Pythonic.
I won't comment on what's more "pythonic", since that is a highly subjective question.
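For illustration, a minimal sketch of the context-manager route (the DSN and query are placeholders):
import psycopg2

class Database(object):
    def __init__(self, dsn):
        self.conn = psycopg2.connect(dsn)

    def __enter__(self):
        return self.conn

    def __exit__(self, exc_type, exc_value, traceback):
        self.conn.close()  # always runs, even if the block raised

# The connection is closed deterministically when the with-block exits.
with Database("dbname=test") as conn:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    print(cur.fetchone())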
However, Python doesn't make very strict guarantees on when a destructor is called, making the context/__exit__ approach the right one here. | 1 | 0 | 0 | What's the preferred way to close a psycopg2 connection used by Python object? | 1 | python,database-connection,psycopg2 | 0 | 2015-10-23T15:46:00.000 |
In my django application (django 1.8) I'm using two databases, one 'default' which is MySQL, and another one which is a schemaless, read-only database.
I've two models which are accessing this database, and I'd like to exclude these two models permanently from data and schema migrations:
makemigrations should never detect any changes, and create migrations for them
migrate should never complain about missing migrations for that app
So far, I've tried different things, all without any success:
used the managed=False Meta option on both Models
added an allow_migrate method to my router which returns False for both models
Does anyone have an example of how this scenario can be achieved?
Thanks for your help! | 17 | 2 | 0.132549 | 0 | false | 44,014,653 | 1 | 7,061 | 2 | 0 | 0 | 33,385,618 | So far, I've tried different things, all without any success:
used the managed=False Meta option on both Models
That option (the managed = False attribute on the model's meta options) seems to meet the requirements.
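For reference, a minimal model sketch of that option (the model and table names are placeholders):
from django.db import models

class ExternalRecord(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        managed = False                 # makemigrations/migrate will not create or alter this table
        db_table = "external_record"    # map onto the existing read-only table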
If not, you'll need to expand the question to say exactly what is special about your model that managed = False doesn't do the job. | 1 | 0 | 0 | django: exclude models from migrations | 3 | python,django,django-models,django-migrations | 0 | 2015-10-28T07:53:00.000 |
In my django application (django 1.8) I'm using two databases, one 'default' which is MySQL, and another one which is a schemaless, read-only database.
I've two models which are accessing this database, and I'd like to exclude these two models permanently from data and schema migrations:
makemigrations should never detect any changes, and create migrations for them
migrate should never complain about missing migrations for that app
So far, I've tried different things, all without any success:
used the managed=False Meta option on both Models
added an allow_migrate method to my router which returns False for both models
Does anyone have an example of how this scenario can be achieved?
Thanks for your help! | 17 | 1 | 0.066568 | 0 | false | 68,460,381 | 1 | 7,061 | 2 | 0 | 0 | 33,385,618 | You have the correct solution:
used the managed=False Meta option on both Models
It may appear that it is not working but it is likely that you are incorrectly preempting the final result when you see - Create model xxx for models with managed = False when running makemigrations.
How have you been checking/confirming that migrations are being made?
makemigrations will still print to terminal - Create model xxx and create code in the migration file but those migrations will not actually result in any SQL code or appear in Running migrations: when you run migrate. | 1 | 0 | 0 | django: exclude models from migrations | 3 | python,django,django-models,django-migrations | 0 | 2015-10-28T07:53:00.000 |
Well, I tried to understand Open Database Connectivity and Python DB-API, but I can't.
ODBC is some kind of standard and Python DB-API is another standard, but why not use just one standard? Or maybe I got these terms wrong.
Can someone please explain these terms and the difference between them as some of the explanations I read were too technical?
Thank you | 1 | 1 | 0.197375 | 0 | false | 33,404,879 | 0 | 342 | 1 | 0 | 0 | 33,404,837 | There are other programming languages besides python -- java, javascript, ruby, perl, cobol, lisp, smalltalk, go, r, and many, many others. None of them can use the python db-api, but all of them could, potentially, use ODBC. Python offers ODBC for people who come from other languages and already know ODBC, and its own db-api for people who only know python and who aren't interested in learning the standard. Also, python db-api isn't really a standard, because it hasn't been accepted by any standards body (afaik) | 1 | 0 | 0 | Difference between ODBC and Python DB-API? | 1 | database,odbc,python-db-api | 0 | 2015-10-29T02:15:00.000 |
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 59,368,220 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | If you are using the Spyder IDE, just try restarting the console or restarting the IDE; it works.
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 50,157,898 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | I had this same problem just now, and found the reason was my editor (Visual Studio Code) was running against the wrong instance of python; I had it set to run against the python bundled with tensorflow, and I changed it to my Anaconda python and it worked.
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 49,817,699 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | sudo apt-get install python3-pymysql
This command also worked for me to install the package required for a Flask app to run on Ubuntu 16.x with the WSGI module on an Apache2 server.
By default, WSGI uses the Python 3 installation of Ubuntu.
Anaconda custom installation won't work. | 1 | 0 | 1 | No module named 'pymysql' | 19 | python,python-3.x,ubuntu,pymysql | 0 | 2015-10-30T23:34:00.000 |
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 57,734,684 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | Just a note:
for Anaconda, the command to install packages is:
python setup.py install | 1 | 0 | 1 | No module named 'pymysql' | 19 | python,python-3.x,ubuntu,pymysql | 0 | 2015-10-30T23:34:00.000 |
I'm trying to use PyMySQL on Ubuntu.
I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql'
I'm using Ubuntu 15.10 64-bit and Python 3.5.
The same .py works on Windows with Python 3.5, but not on Ubuntu. | 69 | 0 | 0 | 0 | false | 63,201,272 | 0 | 196,169 | 5 | 0 | 0 | 33,446,347 | I also got this error recently when using Anaconda on a Mac machine.
Here is what I found:
After running python3 -m pip install PyMySql, pymysql module is under /Library/Python/3.7/site-packages
Anaconda wants this module to be under /opt/anaconda3/lib/python3.8/site-packages
Therefore, after copying pymysql module to the designated path, it runs correctly. | 1 | 0 | 1 | No module named 'pymysql' | 19 | python,python-3.x,ubuntu,pymysql | 0 | 2015-10-30T23:34:00.000 |
I am trying to read data from a text file (which is the output given by Tesseract OCR) and save the same in an Excel file. The problem I am facing here is that the text files are in space-separated format, and there are multiple files. Now I need to read all the files and save the same in an Excel sheet.
I am using MATLAB to import and export data. I even thought of using Python to convert the files into CSV format so that I can easily import the same in MATLAB and simply excelwrite the same. But I have found no good solution.
Any guidance would be of great help.
thank you | 0 | 0 | 1.2 | 0 | true | 33,481,202 | 0 | 212 | 1 | 0 | 0 | 33,479,646 | To read a text file in MATLAB you can use fscanf or textscan; then, to export to Excel, you can use xlswrite, which writes directly to the Excel file. | 1 | 0 | 0 | Importing data from text file and saving the same in excel | 1 | matlab,python-2.7,csv,export-to-csv | 0 | 2015-11-02T14:14:00.000 |
Is it possible to insert many rows into a table using one query in pyhdb? Because when I have millions of records to insert, inserting each record in a loop is not very efficient. | 0 | 0 | 0 | 0 | false | 72,168,012 | 0 | 1,564 | 1 | 0 | 0 | 33,514,183 | pyhdb executemany() is faster than simply execute()
But for larger record sets, even if you divide them into chunks and use executemany(), it still takes significant time.
For better and faster performance, use parameter placeholders like values (?, ?, ?, ...) instead of values('%s', '%s', '%s', ...).
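For illustration, a minimal sketch combining chunking with executemany() and ? placeholders (connection details and the table name are placeholders):
import pyhdb

connection = pyhdb.connect(host="hana-host", port=30015, user="USER", password="SECRET")
cursor = connection.cursor()

rows = [(1, "a"), (2, "b"), (3, "c")]  # in practice, one chunk of your millions of records
cursor.executemany("INSERT INTO MYTABLE VALUES (?, ?)", rows)
connection.commit()
connection.close()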
This saves a lot of the time that heavy type conversion takes on the server side, gets a faster response, and hence faster execution. | 1 | 0 | 0 | How to insert many rows into a table using pyhdb? | 3 | python,sap | 0 | 2015-11-04T05:15:00.000 |
All of the MySql modules I've found are compatible with Python 2.7 or 3.4, but none with 3.5. Any way I can use a MySql module with the newest Python version?
ANSWER:
The regular Python versions of mysql-connector-python would not work, but the rf version did.
python -m pip install mysql-connector-python-rf | 3 | 3 | 1.2 | 0 | true | 33,524,987 | 0 | 1,498 | 1 | 0 | 0 | 33,524,731 | Python tries hard to be forward compatible. A pure-python module written for 3.4 should work with 3.5; a binary package may work, you just have to try it and see. | 1 | 0 | 0 | Will a MySql module for Python 3.4 work with 3.5? | 1 | mysql,python-3.x | 0 | 2015-11-04T14:45:00.000 |
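For completeness, a minimal connection sketch using the connector installed above (host, credentials, and database name are placeholders):
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="me", password="secret", database="test")
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()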
I'm using openpyxl to apply data validation to all rows that have "Default" in them. But to do that, I need to know how many rows there are.
I know there is a way to do that when using the iterable workbook mode, but I also add a new sheet to the workbook, and that is not possible in iterable mode. | 38 | 68 | 1.2 | 0 | true | 33,543,305 | 0 | 117,258 | 1 | 0 | 0 | 33,541,692 | ws.max_row will give you the number of rows in a worksheet.
Since openpyxl version 2.4 you can also access individual rows and columns and use their length to answer the question.
len(ws['A'])
Though it's worth noting that for data validation on an entire single column, Excel uses the range 1:1048576. | 1 | 0 | 0 | How to find the last row in a column using openpyxl normal workbook? | 3 | python,excel,openpyxl | 0 | 2015-11-05T10:06:00.000
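A short sketch combining both suggestions above (the workbook file name is an assumption):
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")    # hypothetical workbook
ws = wb.active
print(ws.max_row)                    # index of the last row that contains data
print(len(ws["A"]))                  # length of column A (openpyxl >= 2.4)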
I am using sqlAlchemy to interact with a postgres database. It is all set to work with inserting string data. The data I receive is normally utf-8 and the setup works very well. As a edge case, recently, data came up in the format somedata\xtrailingdata.
SQLAlchemy is attempting to make this entry with somedata completely stripping out everything after \x.
Can you please tell me if there's a way to instruct SQLAlchemy to just attempt inserting the whole thing instead of removing the unicode part.
I have attempted
create_engine(dbUri, convert_unicode=True, client_encoding='utf8')
create_engine(dbUri, convert_unicode=False, client_encoding='utf8')
create_engine(dbUri, convert_unicode=False)
None worked out so far. I really would appreciate your input in inserting this data into string column.
PS:Can't modify the column type of DB. This is a very edge case, not the norm. | 0 | 0 | 0 | 0 | false | 33,685,308 | 0 | 442 | 2 | 0 | 0 | 33,665,544 | There's nothing Unicode about it. \x is a byte literal prefix and requires a hex value to follow. PostgreSQL also supports the \x syntax, so it may be PostgreSQL that's dropping it.
Consider escaping all backslashes or doing a find-and-replace on \x before handing the data to SQLAlchemy | 1 | 0 | 0 | Inserting unicode into string with sqlAlchemy | 2 | python,postgresql,utf-8,sqlalchemy,unicode-string | 0 | 2015-11-12T06:27:00.000
I am using sqlAlchemy to interact with a postgres database. It is all set to work with inserting string data. The data I receive is normally utf-8 and the setup works very well. As a edge case, recently, data came up in the format somedata\xtrailingdata.
SQLAlchemy is attempting to make this entry with somedata completely stripping out everything after \x.
Can you please tell me if there's a way to instruct SQLAlchemy to just attempt inserting the whole thing instead of removing the unicode part.
I have attempted
create_engine(dbUri, convert_unicode=True, client_encoding='utf8')
create_engine(dbUri, convert_unicode=False, client_encoding='utf8')
create_engine(dbUri, convert_unicode=False)
None worked out so far. I really would appreciate your input in inserting this data into string column.
PS:Can't modify the column type of DB. This is a very edge case, not the norm. | 0 | 0 | 0 | 0 | false | 34,233,985 | 0 | 442 | 2 | 0 | 0 | 33,665,544 | The problem turned out to be \x00. When a value containing \x00 is passed to SQLAlchemy, it truncates it to the data preceding \x00. We traced the problem to the C library underneath SQLAlchemy. | 1 | 0 | 0 | Inserting unicode into string with sqlAlchemy | 2 | python,postgresql,utf-8,sqlalchemy,unicode-string | 0 | 2015-11-12T06:27:00.000
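Given that diagnosis, a common workaround (a sketch, not part of the original answer) is to strip NUL bytes from string values before handing them to SQLAlchemy:
rows = [{"name": "somedata\x00trailingdata"}]    # illustrative value with an embedded NUL byte
clean_rows = [
    {k: v.replace("\x00", "") if isinstance(v, str) else v for k, v in row.items()}
    for row in rows
]
print(clean_rows)    # [{'name': 'somedatatrailingdata'}]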
On Ubuntu, both python2 and python3 can import sqlite3, but I cannot type sqlite3 at the command prompt to open it; it says sqlite3 is not installed. If I want to use it outside of Python, should I install sqlite3 separately using apt-get, or can I find it in some Python directory, add it to PATH, and use it directly from the command line?
I also installed Python 3.5 on a Mac (the Mac shipped with Python 2), and I can use sqlite3 on the command line by typing sqlite3; it is version 3.8.10.2 and seems to have been installed by Python 2, but Python 3.5 installed a different version of sqlite3. Where can I find it? | 1 | 0 | 0 | 0 | false | 48,045,704 | 0 | 16,159 | 1 | 0 | 0 | 33,691,635 | Any Cygwin users who find themselves here: run the Cygwin installation .exe...
Choose "Categories", "Database", and then one item says "sqlite3 client to access sqlite3 databases", or words to that effect. | 1 | 0 | 0 | How to open sqlite3 installed by Python 3? | 3 | python,python-3.x,sqlite | 0 | 2015-11-13T11:23:00.000 |