| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 469 | 74.7M |
| question | string (lengths) | 36 | 37.8k |
| date | string (lengths) | 10 | 10 |
| metadata | sequence | | |
| response_j | string (lengths) | 5 | 31.5k |
| response_k | string (lengths) | 10 | 31.6k |
33,309,121
I am using docker-compose to deploy a multi-container Python Flask web application. I'm having difficulty understanding how to create tables in the PostgreSQL database during the build so I don't have to add them manually with psql. My docker-compose.yml file is: ``` web: restart: always build: ./web expose: - "8000" links: - postgres:postgres volumes: - /usr/src/flask-app/static env_file: .env command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app nginx: restart: always build: ./nginx/ ports: - "80:80" volumes: - /www/static volumes_from: - web links: - web:web data: restart: always image: postgres:latest volumes: - /var/lib/postgresql command: "true" postgres: restart: always image: postgres:latest volumes_from: - data ports: - "5432:5432" ``` I don't want to have to enter psql in order to type in: ``` CREATE DATABASE my_database; CREATE USER this_user WITH PASSWORD 'password'; GRANT ALL PRIVILEGES ON DATABASE "my_database" to this_user; \i create_tables.sql ``` I would appreciate guidance on how to create the tables.
2015/10/23
[ "https://Stackoverflow.com/questions/33309121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2569531/" ]
It didn't work for me with the `COPY` approach in `Dockerfile`. But I managed to run my `init.sql` file by adding the following to `docker-compose.yml`: ``` volumes: - ./init.sql:/docker-entrypoint-initdb.d/init.sql ``` `init.sql` was in the same directory as my `docker-compose.yml`. I picked the solution from [this](https://gist.github.com/vrulevskyi/307b08abddc9568cf8f9c1b429c1ab56) gist. Check [this](https://onexlab-io.medium.com/docker-compose-postgres-initdb-ba0021deef76) article for more information.
I would create the tables as part of the build process. Create a new `Dockerfile` in a new directory `./database/`: ``` FROM postgres:latest COPY . /fixtures WORKDIR /fixtures RUN /fixtures/setup.sh ``` `./database/setup.sh` would look something like this: ``` #!/bin/bash set -e /etc/init.d/postgresql start psql -f create_fixtures.sql /etc/init.d/postgresql stop ``` Put your create-user, create-database and create-table SQL (and any other fixture data) into a `create_fixtures.sql` file in the `./database/` directory. And finally, your `postgres` service will change to use `build`: ``` postgres: build: ./database/ ... ``` Note: sometimes you'll need a `sleep 5` (or, even better, a script to poll and wait for PostgreSQL to start, as sketched below) after the `/etc/init.d/postgresql start` line. In my experience either the init script or the psql client handles this for you, but I know that's not the case with MySQL, so I thought I'd call it out.
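For the polling option, here is a minimal sketch in Python using only the standard library (the host and port are assumptions; adjust them for your setup, and note that an open port means the server is accepting connections, not necessarily that its init is finished):

```py
import socket
import time

def wait_for_postgres(host="localhost", port=5432, timeout=30):
    """Poll until something accepts TCP connections on host:port."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return  # the port is open, so the server is up
        except OSError:
            time.sleep(0.5)  # not up yet, retry shortly
    raise TimeoutError("postgres not reachable on %s:%d after %ds" % (host, port, timeout))

wait_for_postgres()
```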
33,309,121
I am using docker-compose to deploy a multi-container Python Flask web application. I'm having difficulty understanding how to create tables in the PostgreSQL database during the build so I don't have to add them manually with psql. My docker-compose.yml file is: ``` web: restart: always build: ./web expose: - "8000" links: - postgres:postgres volumes: - /usr/src/flask-app/static env_file: .env command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app nginx: restart: always build: ./nginx/ ports: - "80:80" volumes: - /www/static volumes_from: - web links: - web:web data: restart: always image: postgres:latest volumes: - /var/lib/postgresql command: "true" postgres: restart: always image: postgres:latest volumes_from: - data ports: - "5432:5432" ``` I don't want to have to enter psql in order to type in: ``` CREATE DATABASE my_database; CREATE USER this_user WITH PASSWORD 'password'; GRANT ALL PRIVILEGES ON DATABASE "my_database" to this_user; \i create_tables.sql ``` I would appreciate guidance on how to create the tables.
2015/10/23
[ "https://Stackoverflow.com/questions/33309121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2569531/" ]
> > I don't want to have to enter psql in order to type in > > > You can simply use the container's built-in init mechanism: `COPY init.sql /docker-entrypoint-initdb.d/10-init.sql` This makes sure that your SQL is executed after the DB server is properly booted up. Take a look at their entrypoint [script](https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh). It does some preparation to start psql correctly and looks in the `/docker-entrypoint-initdb.d/` directory for files ending in `.sh`, `.sql` and `.sql.gz`. The `10-` in the filename is there because files are processed in ASCII order. You can name your other init files `20-create-tables.sql` and `30-seed-tables.sql.gz`, for example, and be sure that they are processed in the order you need. Also note that the invoking command [does not](https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh#L116) specify the database. Keep that in mind if you are, say, migrating to docker-compose and your existing `.sql` files don't specify a DB either. Your files will be processed at the container's first start instead of at the `build` stage, though. Since Docker Compose stops containers and then resumes them, there's almost no difference, but if it's crucial for you to init the DB at the `build` stage, I suggest still using the built-in init method by calling `/docker-entrypoint.sh` from your Dockerfile and then cleaning up the `/docker-entrypoint-initdb.d/` directory.
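One subtlety with those numeric prefixes: "ASCII order" is not numeric order, so zero-padding matters once you pass single digits. A quick Python illustration:

```py
# Files in /docker-entrypoint-initdb.d/ are processed in ASCII sort order.
files = ["9-seed.sql", "10-init.sql", "20-create-tables.sql"]
print(sorted(files))
# ['10-init.sql', '20-create-tables.sql', '9-seed.sql']
# '9-seed.sql' runs LAST, because the character '9' sorts after '1' and '2'.
# Name it '09-seed.sql' if it should run first.
```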
It didn't work for me with the `COPY` approach in `Dockerfile`. But I managed to run my `init.sql` file by adding the following to `docker-compose.yml`: ``` volumes: - ./init.sql:/docker-entrypoint-initdb.d/init.sql ``` `init.sql` was in the same directory as my `docker-compose.yml`. I picked the solution from [this](https://gist.github.com/vrulevskyi/307b08abddc9568cf8f9c1b429c1ab56) gist. Check [this](https://onexlab-io.medium.com/docker-compose-postgres-initdb-ba0021deef76) article for more information.
11,720,155
I am just learning python. I read about Unicode characters and strings and was trying to print out a recently introduced symbol as mentioned here - <http://www.fileformat.info/info/unicode/char/20b9/index.htm> However my attempt in the IDLE prompt: ``` u"\u20b9" ``` just echoes the same expression instead of producing the character. I am not sure if this is a problem with fonts or Unicode versions or if I am doing something wrong. Please help!
2012/07/30
[ "https://Stackoverflow.com/questions/11720155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1562715/" ]
Windows has an [`RTL_BITMAP`](http://msdn.microsoft.com/en-us/library/windows/hardware/ff563614.aspx) data structure one can use along with its APIs. But I needed the code for this some time ago, and so I wrote it here (warning, it's a little ugly): <https://gist.github.com/3206128> I have **only partially** tested it, so it might still have bugs (especially on `reverse`). But a recent version (only slightly different from this one) seemed to be usable for me, so it's worth a try. The fundamental operation for the entire thing is being able to -- quickly -- find the length of a run of bits: ``` long long GetRunLength( const void *const pBitmap, unsigned long long nBitmapBits, long long startInclusive, long long endExclusive, const bool reverse, /*out*/ bool *pBit); ``` Everything else should be easy to build upon this, given its versatility. I tried to include some SSE code, but it didn't noticeably improve the performance. However, in general, the code is many times faster than doing bit-by-bit analysis, so I think it might be useful. It should be easy to test if you can get a hold of `vector<bool>`'s buffer somehow -- and if you're on Visual C++, then there's a function I included which does that for you. If you find bugs, feel free to let me know.
I can't figure out how to do this well directly on memory words, so I've put together a quick solution that works on bytes; for convenience, let's sketch the algorithm for counting contiguous ones: Construct two tables of size 256 where you record, for each number between 0 and 255, the number of consecutive 1's at the beginning and at the end of the byte. For example, for the number 167 (10100111 in binary), put 1 in the first table and 3 in the second table. Let's call the first table BBeg and the second table BEnd. Then, for each byte b, there are two cases: if it is 255, add 8 to your current sum for the current contiguous run of ones, and you are still in a region of ones. Otherwise, you end a region with BBeg[b] bits and begin a new one with BEnd[b] bits. Depending on what information you want, you can adapt this algorithm (this is one reason why I don't put any code here; I don't know what output you want). A flaw is that it does not count (small) contiguous sets of ones inside one byte ... Besides this algorithm, a friend tells me that if it is for disk compression, just look for bytes different from 0 (empty disk area) and 255 (full disk area). It is a quick heuristic to build a map of which blocks you have to compress. Maybe it is beyond the scope of this topic ...
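To make the table idea concrete, here is a rough Python sketch (an illustration only, not the poster's code: it assumes MSB-first bit order within each byte, totals only the runs that touch byte boundaries, and skips the in-byte runs noted as a flaw above):

```py
def ones_at_start(b):
    # consecutive 1-bits at the most significant end of the byte
    n = 0
    for i in range(7, -1, -1):
        if not b & (1 << i):
            break
        n += 1
    return n

def ones_at_end(b):
    # consecutive 1-bits at the least significant end of the byte
    n = 0
    for i in range(8):
        if not b & (1 << i):
            break
        n += 1
    return n

BBEG = [ones_at_start(b) for b in range(256)]  # BBEG[0b10100111] == 1
BEND = [ones_at_end(b) for b in range(256)]    # BEND[0b10100111] == 3

def run_lengths(data):
    """Yield lengths of runs of 1-bits that span byte boundaries."""
    current = 0
    for b in data:
        if b == 0xFF:
            current += 8              # still inside a region of ones
        elif current + BBEG[b]:
            yield current + BBEG[b]   # close the region entered from the left
            current = BEND[b]         # open a new region from the right
        else:
            current = BEND[b]
    if current:
        yield current

print(list(run_lengths(bytes([0b10100111, 0xFF, 0b11100000]))))  # [1, 14]
```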
11,720,155
I am just learning python. I read about Unicode characters and strings and was trying to print out a recently introduced symbol as mentioned here - <http://www.fileformat.info/info/unicode/char/20b9/index.htm> However my attempt in the IDLE prompt: ``` u"\u20b9" ``` just echoes the same expression instead of producing the character. I am not sure if this is a problem with fonts or Unicode versions or if I am doing something wrong. Please help!
2012/07/30
[ "https://Stackoverflow.com/questions/11720155", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1562715/" ]
Windows has an [`RTL_BITMAP`](http://msdn.microsoft.com/en-us/library/windows/hardware/ff563614.aspx) data structure one can use along with its APIs. But I needed the code for this some time ago, and so I wrote it here (warning, it's a little ugly): <https://gist.github.com/3206128> I have **only partially** tested it, so it might still have bugs (especially on `reverse`). But a recent version (only slightly different from this one) seemed to be usable for me, so it's worth a try. The fundamental operation for the entire thing is being able to -- quickly -- find the length of a run of bits: ``` long long GetRunLength( const void *const pBitmap, unsigned long long nBitmapBits, long long startInclusive, long long endExclusive, const bool reverse, /*out*/ bool *pBit); ``` Everything else should be easy to build upon this, given its versatility. I tried to include some SSE code, but it didn't noticeably improve the performance. However, in general, the code is many times faster than doing bit-by-bit analysis, so I think it might be useful. It should be easy to test if you can get a hold of `vector<bool>`'s buffer somehow -- and if you're on Visual C++, then there's a function I included which does that for you. If you find bugs, feel free to let me know.
Sounds like this might be useful: <http://www.aggregate.org/MAGIC/#Population%20Count%20%28Ones%20Count%29> and <http://www.aggregate.org/MAGIC/#Leading%20Zero%20Count> You don't say if you want to do some sort of RLE or to simply count the zero and one bits inside each byte (like 0b1001 should return 1x1 2x0 1x1). A lookup table plus a SWAR algorithm for a fast check might give you that information easily. A bit like this: ``` byte lut[0x10000] = { /* see below */ }; for (uint *word = words; word < words + bitmapSize; word++) { if (*word == 0 || *word == (uint)-1) // Fast bailout { // Do what you want if all 0 or all 1 continue; } byte hiVal = lut[*word >> 16], loVal = lut[*word & 0xFFFF]; // Do what you want with hiVal and loVal } ``` The LUT will have to be constructed depending on your intended algorithm. If you want to count the number of contiguous 0 and 1 in the word, you'll build it like this: ``` for (int i = 0; i < sizeof(lut); i++) lut[i] = countContiguousZero(i); // Or countContiguousOne(i) // The implementation of countContiguousZero can be slow, you don't care // The function should return the largest number of contiguous zeros (0 to 15, using the 4 low bits of the entry; it might return the position of the run in the 4 high bits) // Since you've already dismissed word == 0, you don't need the 16-contiguous-zeros case. ```
57,540,412
I am trying to copy files (scripts and rpms) stored locally to a set of servers. I can copy the files when the names are hard coded, but not when I am using a variable. ansible-lint comes back with no errors. When I use variable replacement I get the error: ``` TASK [Copy cpu_gov.sh] *************************************************************************************************************************************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute 'endswith' fatal: [ceph3]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""} ``` In debug mode I can see that it is a Python error on a trailing "/". All other uses of the variable work fine; only when it is in the `src:` field does it fail. ``` The full traceback is: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 145, in run res = self._execute() File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 650, in _execute result = self._handler.run(task_vars=variables) File "/usr/lib/python2.7/site-packages/ansible/plugins/action/copy.py", line 461, in run trailing_slash = source.endswith(os.path.sep) AttributeError: 'dict' object has no attribute 'endswith' fatal: [ceph3]: FAILED! => { "msg": "Unexpected failure during module execution.", "stdout": "" } ``` ``` --- ### Test # - hosts: all vars: #isdct_rpm: foobar.txt isdct_rpm: isdct-3.0.16-1.x86_64.rpm cpu_gov: cpu_gov.sh irq_bal: irq_balance.sh root_dir: /root bin_dir: /root/bin files_dir: /root/projects/ansible/bootstrap/files remote_user: root tasks: ``` These work just fine - ``` - name: ISDCT rpm exists? stat: path: "{{ root_dir }}/{{ isdct_rpm }}" register: isdct_rpm tags: - tools - name: cpu_gov exists? stat: path: "{{ bin_dir }}/{{ cpu_gov }}" register: cpu_gov tags: - tools - name: irq_balance exists? stat: path: "{{ bin_dir }}/{{ irq_bal }}" register: irq_bal tags: - tools ``` The first task is the failing one: ``` - name: Copy ISDCT rpm copy: remote_src: no src: "{{ isdct_rpm }}" dest: "{{ root_dir }}" when: not isdct_rpm.stat.exists ``` These work fine: ``` - name: Copy rpm copy: remote_src: no src: isdct-3.0.16-1.x86_64.rpm dest: /root when: not isdct_rpm.stat.exists - name: Copy cpu_gov.sh copy: remote_src: no src: cpu_gov.sh # - fails - src: "{{ cpu_gov }}" dest: "{{ bin_dir }}" when: not cpu_gov.stat.exists - name: Copy irq_balance.sh copy: remote_src: no src: irq_balance.sh dest: /root when: not irq_bal.stat.exists ```
2019/08/17
[ "https://Stackoverflow.com/questions/57540412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10039264/" ]
I do not understand why the other answers are in jQuery when you clearly asked for an answer in JavaScript. There are several errors in your attempt: **1-** as indicated by others, you have confused the use of a class (represented by a `.`) with the use of an ID (represented by a `#`), but anyway you do not need to use it. **2-** repeatedly adding a single class is useless: the rotation values will not add up. **3-** you have placed the button "ROTATE" in the background of the image => I added a z-index of 100 so that it returns to the foreground. ```js const Root = document.documentElement , gRoot = getComputedStyle(Root) var RotateDeg = parseInt(gRoot.getPropertyValue('--turn')) function rotate90() { RotateDeg = (RotateDeg+90) % 360 Root.style.setProperty('--turn', RotateDeg + "deg") } ``` ```css :root { --turn : 0deg; } .btn-floating-container { top:50px; left:50px; position: fixed; z-index: 100; } .btn-floating { width: 150px; height: 50px; border-radius: 50%; text-align: center; padding: 0px; font-size: 24px; } #theImage { -webkit-transform:rotate( var(--turn) ); -moz-transform: rotate( var(--turn) ); -ms-transform: rotate( var(--turn) ); -o-transform: rotate( var(--turn) ); transform: rotate( var(--turn) ); } ``` ```html <div class="btn-floating-container"> <button class="btn-floating btn btn-primary btn-medium" onclick="rotate90()">ROTATE</button> </div> <img id="theImage" src="https://images.unsplash.com/photo-1533467915241-eac02e856653?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=1350&q=80" /> ```
You are selecting by class (`.theImage`), but that element is assigned an id, so you should select like this `$('#theImage')`.
57,540,412
I am trying to copy files (scripts and rpms) stored locally to a set of servers. I can copy the files when the names are hard coded, but not when I am using a variable. ansible-lint comes back with no errors. When I use variable replacement I get the error: ``` TASK [Copy cpu_gov.sh] *************************************************************************************************************************************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'dict' object has no attribute 'endswith' fatal: [ceph3]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""} ``` In debug mode I can see that it is a Python error on a trailing "/". All other uses of the variable work fine; only when it is in the `src:` field does it fail. ``` The full traceback is: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 145, in run res = self._execute() File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 650, in _execute result = self._handler.run(task_vars=variables) File "/usr/lib/python2.7/site-packages/ansible/plugins/action/copy.py", line 461, in run trailing_slash = source.endswith(os.path.sep) AttributeError: 'dict' object has no attribute 'endswith' fatal: [ceph3]: FAILED! => { "msg": "Unexpected failure during module execution.", "stdout": "" } ``` ``` --- ### Test # - hosts: all vars: #isdct_rpm: foobar.txt isdct_rpm: isdct-3.0.16-1.x86_64.rpm cpu_gov: cpu_gov.sh irq_bal: irq_balance.sh root_dir: /root bin_dir: /root/bin files_dir: /root/projects/ansible/bootstrap/files remote_user: root tasks: ``` These work just fine - ``` - name: ISDCT rpm exists? stat: path: "{{ root_dir }}/{{ isdct_rpm }}" register: isdct_rpm tags: - tools - name: cpu_gov exists? stat: path: "{{ bin_dir }}/{{ cpu_gov }}" register: cpu_gov tags: - tools - name: irq_balance exists? stat: path: "{{ bin_dir }}/{{ irq_bal }}" register: irq_bal tags: - tools ``` The first task is the failing one: ``` - name: Copy ISDCT rpm copy: remote_src: no src: "{{ isdct_rpm }}" dest: "{{ root_dir }}" when: not isdct_rpm.stat.exists ``` These work fine: ``` - name: Copy rpm copy: remote_src: no src: isdct-3.0.16-1.x86_64.rpm dest: /root when: not isdct_rpm.stat.exists - name: Copy cpu_gov.sh copy: remote_src: no src: cpu_gov.sh # - fails - src: "{{ cpu_gov }}" dest: "{{ bin_dir }}" when: not cpu_gov.stat.exists - name: Copy irq_balance.sh copy: remote_src: no src: irq_balance.sh dest: /root when: not irq_bal.stat.exists ```
2019/08/17
[ "https://Stackoverflow.com/questions/57540412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10039264/" ]
I do not understand why the other answers are in jQuery when you clearly asked for an answer in JavaScript. There are several errors in your attempt: **1-** as indicated by others, you have confused the use of a class (represented by a `.`) with the use of an ID (represented by a `#`), but anyway you do not need to use it. **2-** repeatedly adding a single class is useless: the rotation values will not add up. **3-** you have placed the button "ROTATE" in the background of the image => I added a z-index of 100 so that it returns to the foreground. ```js const Root = document.documentElement , gRoot = getComputedStyle(Root) var RotateDeg = parseInt(gRoot.getPropertyValue('--turn')) function rotate90() { RotateDeg = (RotateDeg+90) % 360 Root.style.setProperty('--turn', RotateDeg + "deg") } ``` ```css :root { --turn : 0deg; } .btn-floating-container { top:50px; left:50px; position: fixed; z-index: 100; } .btn-floating { width: 150px; height: 50px; border-radius: 50%; text-align: center; padding: 0px; font-size: 24px; } #theImage { -webkit-transform:rotate( var(--turn) ); -moz-transform: rotate( var(--turn) ); -ms-transform: rotate( var(--turn) ); -o-transform: rotate( var(--turn) ); transform: rotate( var(--turn) ); } ``` ```html <div class="btn-floating-container"> <button class="btn-floating btn btn-primary btn-medium" onclick="rotate90()">ROTATE</button> </div> <img id="theImage" src="https://images.unsplash.com/photo-1533467915241-eac02e856653?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=1350&q=80" /> ```
Firstly, you will have to use `$('#theImage')` since you are referencing by id. Try the below code. ```js let angle = [0, 90, 180, 270]; let current = 0; function rotate90() { current++; if (current == 4) current = 0; $('#theImage').css('transform', 'rotate(' + angle[current] + 'deg)'); } ``` ```css .btn-floating-container { top: 50px; left: 50px; position: fixed; z-index: 1; } .btn-floating { width: 150px; height: 50px; border-radius: 50%; text-align: center; padding: 0px; font-size: 24px; } .rotateimg90 { -webkit-transform: rotate(90deg); -moz-transform: rotate(90deg); -ms-transform: rotate(90deg); -o-transform: rotate(90deg); transform: rotate(90deg); } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="btn-floating-container"> <button class="btn-floating btn btn-primary btn-medium" onclick="rotate90()">ROTATE</button> </div> <img id="theImage" src="https://images.unsplash.com/photo-1533467915241-eac02e856653?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=1350&q=80" /> ```
31,918,472
I am trying to do some analysis using Python 2.7 of how the content of a Wikipedia article changes over time. I am only interested in the content of the page and found the easiest way to do this is with the [Wikipedia package](https://pypi.python.org/pypi/wikipedia/). My code successfully loads an article and then I can use the article.content attribute to do my analysis. ``` import wikipedia # pull in wikipedia article name = 'George W. Bush' article = wikipedia.page(name) content = article.content # Do analysis here ``` The following code (I got from another Stack Exchange question) finds all the revision history that I want. From this I can extract all the revision IDs I need. ``` site = wiki.Wiki("http://en.wikipedia.org/w/api.php") names = ["Sherrod Brown","Maria Cantwell"] allMembers = [] for name in names: params = {'action':'query', 'titles': name, 'prop':'revisions', 'rvprop':'ids|flags|timestamp|userid|user|size|comment|tags|minor', 'rvlimit':'10' } req = api.APIRequest(site, params) res = req.query(querycontinue=False) allMembers.append(res) # Write to a file and load it into a dictionary with open('wiki-leg.json', 'w') as outfile: json.dump(allMembers, outfile, indent=2) with open('wiki-leg.json') as data_file: data = json.load(data_file) ``` From here I am not sure how to get the same article.content for a revision. I cannot seem to do it with the Wikipedia package, and using urllib2 hasn't yielded easy results for text parsing. Is there a way to get the content of old versions (revisions) of a Wikipedia article?
2015/08/10
[ "https://Stackoverflow.com/questions/31918472", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4921853/" ]
I'm not very familiar with the wikipedia library, but what you want to do seems easy with [pywikibot](https://www.mediawiki.org/wiki/Manual:Pywikibot). So you need the text from different revisions of the same page, right? Try this: ```py import pywikibot site = pywikibot.Site(u"en", fam=u"wikipedia") wpage = pywikibot.Page(site, u"Portugal") wpHist = wpage.fullVersionHistory(total=5) for i in wpHist: print i[3] ``` In each iteration you'll get information about that revision.
To get the **page content for a single revision ID**, i.e. when you already have the revision ID and want just that page, not the full history, you can use `getOldVersion`: ``` import pywikibot site = pywikibot.Site("en", "wikipedia") page = pywikibot.Page(site, "George W. Bush") revs = page.revisions(content=False, total = 10) revs = list(revs) # example revs[9]['revid'] text = page.getOldVersion(oldid = revs[9]['revid']) print(text) ``` Some context: getting the full history for a page with thousands of edits can take some time, and for what I was trying to do I only needed a small subset of revisions. So it was quicker to query the revision history *without* content, identify the revision IDs I need, and then download the content for only those.
6,318,126
I'm completely new to [Python's sqlite3 module](https://docs.python.org/3.6/library/sqlite3.html) (and SQL in general for that matter), and this just completely stumps me. The conspicuous lack of documentation for [`cursor` objects](https://docs.python.org/3.6/library/sqlite3.html#cursor-objects) (or rather, of why they're necessary) also seems odd. This snippet of code is the preferred way of doing things: ``` import sqlite3 conn = sqlite3.connect("db.sqlite") c = conn.cursor() c.execute('''insert into users values ('Jack Bauer', '555-555-5555')''') conn.commit() c.close() ``` This one isn't, even though it works just as well and without the (seemingly pointless) `cursor`: ``` import sqlite3 conn = sqlite3.connect("db.sqlite") conn.execute('''insert into users values ('Jack Bauer', '555-555-5555')''') conn.commit() ``` Can anyone tell me why I need a `cursor`? It just seems like pointless overhead. For every method in my script that accesses a database, I'm supposed to create and destroy a `cursor`? Why not just use the `connection` object?
2011/06/11
[ "https://Stackoverflow.com/questions/6318126", "https://Stackoverflow.com", "https://Stackoverflow.com/users/794241/" ]
You need a cursor object to fetch results. Your example works because it's an `INSERT` and thus you aren't trying to get any rows back from it, but if you look at the [`sqlite3` docs](http://docs.python.org/library/sqlite3.html), you'll notice that there aren't any `.fetchXXXX` methods on connection objects, so if you tried to do a `SELECT` without a cursor, you'd have no way to get the resulting data. Cursor objects allow you to keep track of which result set is which, since it's possible to run multiple queries before you're done fetching the results of the first.
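To make that concrete, here is a minimal sketch (the table and data are invented for the demo; an in-memory database keeps it self-contained):

```py
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway demo database
conn.execute("create table users (name text, phone text)")
conn.execute("insert into users values ('Jack Bauer', '555-555-5555')")

c = conn.cursor()
c.execute("select name, phone from users")
print(c.fetchone())  # ('Jack Bauer', '555-555-5555') -- fetched via the cursor
c.close()
conn.close()
```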
It gives us the ability to have multiple separate working environments through the same connection to the database.
6,318,126
I'm completely new to [Python's sqlite3 module](https://docs.python.org/3.6/library/sqlite3.html) (and SQL in general for that matter), and this just completely stumps me. The conspicuous lack of documentation for [`cursor` objects](https://docs.python.org/3.6/library/sqlite3.html#cursor-objects) (or rather, of why they're necessary) also seems odd. This snippet of code is the preferred way of doing things: ``` import sqlite3 conn = sqlite3.connect("db.sqlite") c = conn.cursor() c.execute('''insert into users values ('Jack Bauer', '555-555-5555')''') conn.commit() c.close() ``` This one isn't, even though it works just as well and without the (seemingly pointless) `cursor`: ``` import sqlite3 conn = sqlite3.connect("db.sqlite") conn.execute('''insert into users values ('Jack Bauer', '555-555-5555')''') conn.commit() ``` Can anyone tell me why I need a `cursor`? It just seems like pointless overhead. For every method in my script that accesses a database, I'm supposed to create and destroy a `cursor`? Why not just use the `connection` object?
2011/06/11
[ "https://Stackoverflow.com/questions/6318126", "https://Stackoverflow.com", "https://Stackoverflow.com/users/794241/" ]
According to the official [docs](https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.execute) `connection.execute()` is a **nonstandard shortcut** that creates an intermediate cursor object: > > Connection.execute > > This is a nonstandard shortcut that creates a cursor object by calling the cursor() method, calls the cursor’s execute() method with the parameters given, and returns the cursor. > > >
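So the shortcut still hands you a cursor; a small sketch (schema invented for the demo):

```py
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table users (name text, phone text)")
conn.execute("insert into users values ('Jack Bauer', '555-555-5555')")

# execute() returns the implicitly created cursor, so a SELECT
# can be iterated directly off the connection call:
for row in conn.execute("select * from users"):
    print(row)  # ('Jack Bauer', '555-555-5555')
```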
It gives us the ability to have multiple separate working environments through the same connection to the database.
6,318,126
I'm completely new to [Python's sqlite3 module](https://docs.python.org/3.6/library/sqlite3.html) (and SQL in general for that matter), and this just completely stumps me. The conspicuous lack of documentation for [`cursor` objects](https://docs.python.org/3.6/library/sqlite3.html#cursor-objects) (or rather, of why they're necessary) also seems odd. This snippet of code is the preferred way of doing things: ``` import sqlite3 conn = sqlite3.connect("db.sqlite") c = conn.cursor() c.execute('''insert into users values ('Jack Bauer', '555-555-5555')''') conn.commit() c.close() ``` This one isn't, even though it works just as well and without the (seemingly pointless) `cursor`: ``` import sqlite3 conn = sqlite3.connect("db.sqlite") conn.execute('''insert into users values ('Jack Bauer', '555-555-5555')''') conn.commit() ``` Can anyone tell me why I need a `cursor`? It just seems like pointless overhead. For every method in my script that accesses a database, I'm supposed to create and destroy a `cursor`? Why not just use the `connection` object?
2011/06/11
[ "https://Stackoverflow.com/questions/6318126", "https://Stackoverflow.com", "https://Stackoverflow.com/users/794241/" ]
Just a misapplied abstraction, it seems to me. A db cursor is an abstraction, meant for data set traversal. From the [Wikipedia article on the subject](http://en.wikipedia.org/wiki/Cursor_%28databases%29): > > In computer science and technology, a database cursor is a control > structure that enables traversal over the records in a database. > Cursors facilitate subsequent processing in conjunction with the > traversal, such as retrieval, addition and removal of database > records. The database cursor characteristic of traversal makes cursors > akin to the programming language concept of iterator. > > > And: > > Cursors can not only be used to fetch data from the DBMS into an > application but also to identify a row in a table to be updated or > deleted. The SQL:2003 standard defines positioned update and > positioned delete SQL statements for that purpose. Such statements do > not use a regular WHERE clause with predicates. Instead, a cursor > identifies the row. The cursor must be opened and already positioned > on a row by means of FETCH statement. > > > If you check the [docs on the Python sqlite module](http://docs.python.org/2/library/sqlite3.html), you can see that the Python module's `cursor` is needed even for a `CREATE TABLE` statement, so it's used for cases where a mere `connection` object should suffice - as correctly pointed out by the OP. Such an abstraction is different from what people understand a db cursor to be and hence the confusion/frustration on the part of users. Regardless of efficiency, it's just conceptual overhead. It would be nice if it was pointed out in the docs that the Python module's `cursor` is a bit different from what a cursor is in SQL and databases.
It gives us the ability to have multiple separate working environments through the same connection to the database.
6,318,126
I'm completely new to [Python's sqlite3 module](https://docs.python.org/3.6/library/sqlite3.html) (and SQL in general for that matter), and this just completely stumps me. The conspicuous lack of documentation for [`cursor` objects](https://docs.python.org/3.6/library/sqlite3.html#cursor-objects) (or rather, of why they're necessary) also seems odd. This snippet of code is the preferred way of doing things: ``` import sqlite3 conn = sqlite3.connect("db.sqlite") c = conn.cursor() c.execute('''insert into users values ('Jack Bauer', '555-555-5555')''') conn.commit() c.close() ``` This one isn't, even though it works just as well and without the (seemingly pointless) `cursor`: ``` import sqlite3 conn = sqlite3.connect("db.sqlite") conn.execute('''insert into users values ('Jack Bauer', '555-555-5555')''') conn.commit() ``` Can anyone tell me why I need a `cursor`? It just seems like pointless overhead. For every method in my script that accesses a database, I'm supposed to create and destroy a `cursor`? Why not just use the `connection` object?
2011/06/11
[ "https://Stackoverflow.com/questions/6318126", "https://Stackoverflow.com", "https://Stackoverflow.com/users/794241/" ]
> > 12.6.8. Using sqlite3 **efficient**ly > > > 12.6.8.1. Using **shortcut** methods > > > Using the **nonstandard** `execute()`, `executemany()` and `executescript()` methods of the Connection object, your code can be written **more concise**ly because you don’t have to create the (often **superfluous**) Cursor objects explicitly. Instead, the Cursor objects are created implicitly and these shortcut methods return the cursor objects. This way, you can execute a SELECT statement and iterate over it directly using only a single call on the Connection object. > > > ([sqlite3 documentation](https://docs.python.org/3/library/sqlite3.html#using-sqlite3-efficiently); emphasis mine.) > > Why not just use the connection object? > > > Because those methods of the connection object are [nonstandard](https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.execute), i.e. they are not part of [Python Database API Specification v2.0](https://www.python.org/dev/peps/pep-0249/) (PEP 249). As long as you use the standard methods of the Cursor object, you can be sure that if you switch to another database implementation that follows the above specification, your code will be fully portable. Perhaps you will only need to change the `import` line. But if you use the `connection.execute` there is a chance that switching won't be that straightforward. That's the main reason you might want to use `cursor.execute` instead. However if you are certain that you're not going to switch, I'd say it's completely OK to take the `connection.execute` shortcut and be "efficient".
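For comparison, here is the portable DB-API 2.0 pattern the answer refers to, as a minimal sketch (schema invented; with a conforming driver for another database, ideally only the import and connect lines would change):

```py
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()  # standard DB-API 2.0: cursor(), execute(), fetch*()
cur.execute("create table users (name text, phone text)")
cur.execute("insert into users values (?, ?)", ("Jack Bauer", "555-555-5555"))
conn.commit()
cur.execute("select * from users")
print(cur.fetchall())  # [('Jack Bauer', '555-555-5555')]
cur.close()
conn.close()
```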
It gives us the ability to have multiple separate working environments through the same connection to the database.
18,809,575
I'm trying to merge RSS feeds using python and then play them back out to a website. Having researched the recommended methods I've opted for the following code which is basically a straight copy of what is recommended: ``` hit_list = ['http://www.bbc.co.uk/sport/football/teams/cardiff-city/rss.xml','http://www1.skysports.com/feeds/11704/news.xml','http://www.cardiffcity-mad.co.uk/rssfeeds/rssfull.asp'] # pull down all feeds future_calls = [Future(feedparser.parse,rss_url) for rss_url in hit_list] # block until they are all in feeds = [future_obj() for future_obj in future_calls] #Now that you have your feeds, extract all the entries entries = [] for feed in feeds: entries.extend(feed["items"]) values['feeds'] = sorted(entries, key=lambda entry: entry["updated_parsed"]) ``` Later, I call the web using: ``` template = jinja_environment.get_template('TeamView.html') self.response.out.write(template.render(values)) ``` Finally, within my html page I have: ``` {% for r in feeds.entries %} <a href={{r.link}} target=_blank>{{r.title}}</a>: {{r.description}} <br/> {% endfor %} ``` When I use feedparser on the feeds individually I can pass the information but when I try merging the feeds nothing shows. I have imported feedparser and Future.
2013/09/15
[ "https://Stackoverflow.com/questions/18809575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2740177/" ]
``` background-image: url(../images/bgr_top.jpg), url(../images/bgr_bottom.jpg); ``` will only work in browsers which support css3 & hence only IE 9+ Best way to support all browsers would be to use separate entities for each background.
Are they working in a different browser? I would first check that the URLs are correct and that the images actually exist at them; open the developer tools and see whether the browser downloads them. Also try using only one image and see if you can see that. Note that IE 8 and below do not support CSS3 multiple backgrounds, which can cause the images not to show.
18,809,575
I'm trying to merge RSS feeds using python and then play them back out to a website. Having researched the recommended methods I've opted for the following code which is basically a straight copy of what is recommended: ``` hit_list = ['http://www.bbc.co.uk/sport/football/teams/cardiff-city/rss.xml','http://www1.skysports.com/feeds/11704/news.xml','http://www.cardiffcity-mad.co.uk/rssfeeds/rssfull.asp'] # pull down all feeds future_calls = [Future(feedparser.parse,rss_url) for rss_url in hit_list] # block until they are all in feeds = [future_obj() for future_obj in future_calls] #Now that you have your feeds, extract all the entries entries = [] for feed in feeds: entries.extend(feed["items"]) values['feeds'] = sorted(entries, key=lambda entry: entry["updated_parsed"]) ``` Later, I call the web using: ``` template = jinja_environment.get_template('TeamView.html') self.response.out.write(template.render(values)) ``` Finally, within my html page I have: ``` {% for r in feeds.entries %} <a href={{r.link}} target=_blank>{{r.title}}</a>: {{r.description}} <br/> {% endfor %} ``` When I use feedparser on the feeds individually I can pass the information but when I try merging the feeds nothing shows. I have imported feedparser and Future.
2013/09/15
[ "https://Stackoverflow.com/questions/18809575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2740177/" ]
``` background-image: url(../images/bgr_top.jpg), url(../images/bgr_bottom.jpg); ``` will only work in browsers which support css3 & hence only IE 9+ Best way to support all browsers would be to use separate entities for each background.
Just wanted to let you know how I fixed my issue. I added an additional div which holds the whole website, and then I added the top image to the `html` element and the bottom image to the `body` element in the CSS. ``` html { background: url("../images/bgr_top.jpg");background-repeat: repeat-x; background-position:center top; height: 100%; } body{ background: url("../images/bgr_bottom.jpg"); background-repeat: repeat-x; background-position:center bottom; margin: 0; padding: 0; margin: 0 auto; font-family:"Trebuchet MS"; font-size:14px; line-height: 190%; text-align:left; text-decoration: italic; } .main { margin: 0; padding: 0; width: 1008px; margin: 0 auto; font-family:"Trebuchet MS"; font-size:14px; line-height: 190%; text-align:left; text-decoration: italic; } ``` **The Result** I now have two backgrounds in all browsers, including IE7, IE8, etc. Hope this helps someone.
18,809,575
I'm trying to merge RSS feeds using python and then play them back out to a website. Having researched the recommended methods I've opted for the following code which is basically a straight copy of what is recommended: ``` hit_list = ['http://www.bbc.co.uk/sport/football/teams/cardiff-city/rss.xml','http://www1.skysports.com/feeds/11704/news.xml','http://www.cardiffcity-mad.co.uk/rssfeeds/rssfull.asp'] # pull down all feeds future_calls = [Future(feedparser.parse,rss_url) for rss_url in hit_list] # block until they are all in feeds = [future_obj() for future_obj in future_calls] #Now that you have your feeds, extract all the entries entries = [] for feed in feeds: entries.extend(feed["items"]) values['feeds'] = sorted(entries, key=lambda entry: entry["updated_parsed"]) ``` Later, I call the web using: ``` template = jinja_environment.get_template('TeamView.html') self.response.out.write(template.render(values)) ``` Finally, within my html page I have: ``` {% for r in feeds.entries %} <a href={{r.link}} target=_blank>{{r.title}}</a>: {{r.description}} <br/> {% endfor %} ``` When I use feedparser on the feeds individually I can pass the information but when I try merging the feeds nothing shows. I have imported feedparser and Future.
2013/09/15
[ "https://Stackoverflow.com/questions/18809575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2740177/" ]
Just wanted to let you know how I fixed my issue. I added an additional div which holds the whole website, and then I added the top image to the `html` element and the bottom image to the `body` element in the CSS. ``` html { background: url("../images/bgr_top.jpg");background-repeat: repeat-x; background-position:center top; height: 100%; } body{ background: url("../images/bgr_bottom.jpg"); background-repeat: repeat-x; background-position:center bottom; margin: 0; padding: 0; margin: 0 auto; font-family:"Trebuchet MS"; font-size:14px; line-height: 190%; text-align:left; text-decoration: italic; } .main { margin: 0; padding: 0; width: 1008px; margin: 0 auto; font-family:"Trebuchet MS"; font-size:14px; line-height: 190%; text-align:left; text-decoration: italic; } ``` **The Result** I now have two backgrounds in all browsers, including IE7, IE8, etc. Hope this helps someone.
Are they working in a different browser? I would first check that the URLs are correct and that the images actually exist at them; open the developer tools and see whether the browser downloads them. Also try using only one image and see if you can see that. Note that IE 8 and below do not support CSS3 multiple backgrounds, which can cause the images not to show.
52,286,104
I am trying to go through a list that has been imported from a csv file and find the number of times a candidate has been voted for. I am writing this in python and I am not sure if I should create a dictionary and do an index search or create a loop that will give me a count of the name? ``` Sample Data: Voter ID,County,Candidate 12864552,Marsh,Khan 17444633,Marsh,Correy 19330107,Marsh,Khan 19865775,Queen,Khan 11927875,Marsh,Khan 19014606,Marsh,Li 17775191,Queen,Correy 14003692,Marsh,Khan ```
2018/09/12
[ "https://Stackoverflow.com/questions/52286104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7994471/" ]
If you don't want to use pandas, you may also use a [`Counter`, from the collections module](https://docs.python.org/3/library/collections.html#collections.Counter). A sample of using this class is below. If you would like something specific to your problem, edit your question to post what you've tried and I'll edit this response to help you. ``` from collections import Counter c = Counter('abcaba') # starts with counts a: 3, b: 2, c: 1 c['a'] += 1 # increment an existing value c.clear() # clear the counter -- all values are 0 and you can start again c['hd1'] # 0 -- missing keys default to 0 c['hd1'] = c['hd1'] + 1 c['hd1'] # now 1 ```
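Applied to the question's data, a hedged sketch (the file name is an assumption):

```py
import csv
from collections import Counter

votes = Counter()
with open("votes.csv", newline="") as f:
    for row in csv.DictReader(f):  # the header row supplies the keys
        votes[row["Candidate"]] += 1

print(votes)                 # Counter({'Khan': 5, 'Correy': 2, 'Li': 1})
print(votes.most_common(1))  # [('Khan', 5)]
```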
First install python-pandas with ``` pip install pandas ``` Then you can use the following code for getting the candidates count county-wise. ``` import pandas as pd df = pd.read_csv('<path_to_csv.file>') df.groupby(['Candidate', 'County']).count() ```
52,286,104
I am trying to go through a list that has been imported from a csv file and find the number of times a candidate has been voted for. I am writing this in python and I am not sure if I should create a dictionary and do an index search or create a loop that will give me a count of the name? ``` Sample Data: Voter ID,County,Candidate 12864552,Marsh,Khan 17444633,Marsh,Correy 19330107,Marsh,Khan 19865775,Queen,Khan 11927875,Marsh,Khan 19014606,Marsh,Li 17775191,Queen,Correy 14003692,Marsh,Khan ```
2018/09/12
[ "https://Stackoverflow.com/questions/52286104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7994471/" ]
Alternatively, you can use `pandas` with `pandas.DataFrame.groupby`, passing `as_index=False`, and then `count()` to do the counting: ``` import pandas as pd df=pd.read_csv(filename) print(df.groupby(['Candidate','County'],as_index=False).count()) ```
First install python-pandas with ``` pip install pandas ``` Then you can use the following code for getting the candidates count county-wise. ``` import pandas as pd df = pd.read_csv('<path_to_csv.file>') df.groupby(['Candidate', 'County']).count() ```
52,286,104
I am trying to go through a list that has been imported from a csv file and find the number of times a candidate has been voted for. I am writing this in python and I am not sure if I should create a dictionary and do an index search or create a loop that will give me a count of the name? ``` Sample Data: Voter ID,County,Candidate 12864552,Marsh,Khan 17444633,Marsh,Correy 19330107,Marsh,Khan 19865775,Queen,Khan 11927875,Marsh,Khan 19014606,Marsh,Li 17775191,Queen,Correy 14003692,Marsh,Khan ```
2018/09/12
[ "https://Stackoverflow.com/questions/52286104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7994471/" ]
If you don't want to use pandas, you may also use a [`Counter`, from the collections module](https://docs.python.org/3/library/collections.html#collections.Counter). A sample of using this class is below. If you would like something specific to your problem, edit your question to post what you've tried and I'll edit this response to help you. ``` from collections import Counter c = Counter('abcaba') # starts with counts a: 3, b: 2, c: 1 c['a'] += 1 # increment an existing value c.clear() # clear the counter -- all values are 0 and you can start again c['hd1'] # 0 -- missing keys default to 0 c['hd1'] = c['hd1'] + 1 c['hd1'] # now 1 ```
From the standard library: **Given** A sample file `test.txt`: ``` Voter ID,County,Candidate 12864552,Marsh,Khan 17444633,Marsh,Correy 19330107,Marsh,Khan 19865775,Queen,Khan 11927875,Marsh,Khan 19014606,Marsh,Li 17775191,Queen,Correy 14003692,Marsh,Khan 12864552,Marsh,Khan 17444633,Marsh,Correy 19330107,Marsh,Khan 19865775,Queen,Khan 11927875,Marsh,Khan 19014606,Marsh,Li 17775191,Queen,Correy 14003692,Marsh,Khan ``` **Code** ``` import collections as ct filepath = "test.txt" with open(filepath) as f: votes = ct.Counter() header = next(f) for line in f: candidate = line.split(",")[-1].strip() votes[candidate] += 1 ``` Alternatively ``` import csv import collections as ct filepath = "test.txt" with open(filepath) as f: votes = ct.Counter() reader = csv.reader(f) next(reader) for line in reader: candidate = line[-1] votes[candidate] += 1 ``` **Demo** ``` votes # Counter({'Khan': 10, 'Correy': 4, 'Li': 2}) votes.most_common(1) # [('Khan', 10)] ``` See also [docs](https://docs.python.org/3/library/collections.html#collections.Counter) on `collections.Counter` and [`csv`](https://docs.python.org/3/library/csv.html) modules.
52,286,104
I am trying to go through a list that has been imported from a csv file and find the number of times a candidate has been voted for. I am writing this in python and I am not sure if I should create a dictionary and do an index search or create a loop that will give me a count of the name? ``` Sample Data: Voter ID,County,Candidate 12864552,Marsh,Khan 17444633,Marsh,Correy 19330107,Marsh,Khan 19865775,Queen,Khan 11927875,Marsh,Khan 19014606,Marsh,Li 17775191,Queen,Correy 14003692,Marsh,Khan ```
2018/09/12
[ "https://Stackoverflow.com/questions/52286104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7994471/" ]
Alternatively, you can use `pandas` with `pandas.DataFrame.groupby`, passing `as_index=False`, and then `count()` to do the counting: ``` import pandas as pd df=pd.read_csv(filename) print(df.groupby(['Candidate','County'],as_index=False).count()) ```
From the standard library: **Given** A sample file `test.txt`: ``` Voter ID,County,Candidate 12864552,Marsh,Khan 17444633,Marsh,Correy 19330107,Marsh,Khan 19865775,Queen,Khan 11927875,Marsh,Khan 19014606,Marsh,Li 17775191,Queen,Correy 14003692,Marsh,Khan 12864552,Marsh,Khan 17444633,Marsh,Correy 19330107,Marsh,Khan 19865775,Queen,Khan 11927875,Marsh,Khan 19014606,Marsh,Li 17775191,Queen,Correy 14003692,Marsh,Khan ``` **Code** ``` import collections as ct filepath = "test.txt" with open(filepath) as f: votes = ct.Counter() header = next(f) for line in f: candidate = line.split(",")[-1].strip() votes[candidate] += 1 ``` Alternatively ``` import csv import collections as ct filepath = "test.txt" with open(filepath) as f: votes = ct.Counter() reader = csv.reader(f) next(reader) for line in reader: candidate = line[-1] votes[candidate] += 1 ``` **Demo** ``` votes # Counter({'Khan': 10, 'Correy': 4, 'Li': 2}) votes.most_common(1) # [('Khan', 10)] ``` See also [docs](https://docs.python.org/3/library/collections.html#collections.Counter) on `collections.Counter` and [`csv`](https://docs.python.org/3/library/csv.html) modules.
71,407,082
when I write the directory of the file, Sublime Text shows this annoying red light before some backslashes. How can I get this to go away? note: The code runs without any errors. [![ST python screenshot](https://i.stack.imgur.com/sMrHE.jpg)](https://i.stack.imgur.com/sMrHE.jpg)
2022/03/09
[ "https://Stackoverflow.com/questions/71407082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17959613/" ]
Sublime Text highlights raw strings starting with a lowercase `r` as Regular Expressions, which is why `\c` and `\M` are highlighted as an illegal escape character in regex. You'll also notice the different highlighting on `\U`, `\h` and `\D` in your screenshot. To prevent Sublime Text from treating your raw string as a regular expression, simply use an uppercase `R` for the raw string, like: [![ST highlighting python code with raw strings and Windows paths](https://i.stack.imgur.com/VKZGO.png)](https://i.stack.imgur.com/VKZGO.png)
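The two prefixes are equivalent to Python itself; only the editor's highlighting differs. A quick check:

```py
# Lowercase r and uppercase R produce identical raw strings;
# only Sublime Text's syntax highlighting treats them differently.
print(r"C:\Users\me" == R"C:\Users\me")  # True
```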
Use forward slashes "/" instead of backslashes "\", or use double backslashes "\\". In Python, a backslash starts an escape sequence.
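A quick sketch of the options (the path itself is made up):

```py
# Three ways to write the same Windows path in Python:
p1 = "C:/Users/me/data/file.txt"      # forward slashes; Windows APIs accept these
p2 = "C:\\Users\\me\\data\\file.txt"  # escaped backslashes
p3 = r"C:\Users\me\data\file.txt"     # raw string, no escaping needed
print(p2 == p3)  # True
```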
61,184,220
I'm learning network automation using Python. I'm unable to install the napalm-fortios module in PyCharm because of the error below. Try to run this command from the system terminal. Make sure that you use the correct version of 'pip' installed for your Python interpreter located at 'C:\Users\user\PycharmProjects\DUKE\venv\Scripts\python.exe'. Collecting napalm-fortios Using cached napalm-fortios-0.4.1.tar.gz (7.0 kB) ``` ERROR: Command errored out with exit status 1: command: 'C:\Users\user\PycharmProjects\DUKE\venv\Scripts\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\user\\AppData\\Local\\Temp\\pycharm-packaging\\napalm-fortios\\setup.py'"'"'; __file__='"'"'C:\\Users\\user\\AppData\\Local\\Temp\\pycharm-packaging\\napalm-fortios\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\user\AppData\Local\Temp\pycharm-packaging\napalm-fortios\pip-egg-info' cwd: C:\Users\user\AppData\Local\Temp\pycharm-packaging\napalm-fortios\ Complete output (5 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\user\AppData\Local\Temp\pycharm-packaging\napalm-fortios\setup.py", line 6, in <module> from pip.req import parse_requirements ModuleNotFoundError: No module named 'pip.req' ---------------------------------------- ``` ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
2020/04/13
[ "https://Stackoverflow.com/questions/61184220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13299856/" ]
First, let's make this example a little easier and see what the problem is then. We could build the `html` function like this: ``` fun html(init: (HTML) -> Unit): HTML { val html = HTML() init(html) return html } ``` This would be easier to grasp (at first), because we are just passing a usual one-parameter lambda to the `html` function. But now the call-site is not builder like: ``` html { it: HTML -> // just for clarity it.body() // not very nice } ``` Wouldn't it be nice if we could invoke `body()` inside `html` without `it`? That's possible! All we need is a lambda with receiver. ``` fun html(init: HTML.() -> Unit): HTML { // <-- only this changed val html = HTML() init(html) return html } ``` See how `html` is passed as an argument to `init` like before? Of course, we can invoke it like this too: `html.init()` as shown in the example. The instance of `HTML` becomes `this` inside the block of the lambda. Now, we can do this: ``` html { this.body() } ``` Since `this` can be omitted, we arrive here: ``` html { body() } ``` So, in the end lambdas with receivers make the code more concise and allow us to use a nice builder syntax.
**Here is a step-by-step explanation:** *1. Creating the function with a receiver-type lambda.* ```kotlin fun html(init: HTML.() -> Unit): HTML { ``` Here the function html accepts a parameter `init` of type `HTML.() -> Unit`, i.e. a lambda with receiver of HTML, which can only be called with the help of a real HTML object. And `: HTML` indicates that the function obviously returns an HTML object. *2. Calling init on html* ```kotlin html.init() ``` Here the init() function is called as a receiver of HTML by a real HTML object. --- **Alright, enough formal talk. Here is what a receiver is:** If you remember, an extension function is defined as `fun A.myFun(...): ReturnType {}`; inside it you get a variable `this` which acts as the instance of type A it was called on. Similarly, a receiver lambda gives you a `this` variable inside it. In a concrete example: ```kotlin class A { fun thisCanBeCalledByAInstance() { println("I've got called") } } fun main() { val receiver: A.() -> Unit = { // this: A thisCanBeCalledByAInstance() // prints: I've got called // or traditional way this.thisCanBeCalledByAInstance() } val a: A = A() a.receiver() } ``` Here you were able to call the method (function) from the instance of `A` even though it was a lambda, because it was a receiver. **PS: In simple language, you can think of html.init() as init(html), except html is not a parameter but instead acts as the `this` variable inside the lambda.** This is why you were able to call `body()` in that lambda: implicitly you were calling `this.body()`, and `this` came from `html.init()`'s html object.
9,387,119
I'm currently creating a code named "SET". The code's name is an acronym, which has been defined for many (non-programming) reasons, and therefore cannot be changed. **Problem:** The easiest, and I believe the least painful, way for the end user to use my code would be naming the package "set". But of course this is a problem since it conflicts with the built-in ***set*** type. **Question:** What are the possible solutions? Some may be (there are probably more): 1. change the package name (eg. setb). ``` import setb ``` I would really, really prefer not to, because then it will be different from the real name 2. make the package's name upper-case (SET) ``` import SET ``` It would be a straightforward solution, but I'm wondering: is this proper Pythonic naming for a package? Also, I find this a bit painful since all modules defined in the code will have something like "import SET.x.y..." (ie. upper-case, written a lot of times). But this is not a really big deal if this is the Pythonic way. 3. keep the name "set" ``` import set ``` Well, this is obviously not fine. But it would be a problem only if the user used "import set", wouldn't it? This should not happen under "normal usage conditions", since the code provides some scripts to use it, rather than being used as a standard Python module. But we never know, and it could be imported as such, and there may even be some problems I'm not seeing (with the built-in set). I'm considering solution 2., but I'm really not sure. Maybe this is not proper, or maybe you guys have a better solution. PS: I've found some similar topics on the web and on stackoverflow, but they usually deal with names inside a script or module inside a package. The problem here is really about the code's name (which is meaningful only written this way), and therefore about the proper choice of the package's name. --- EDIT **Selected solution:** I've chosen to use "SET" as the package name. Although many good suggestions have been proposed here: pyset, semt, setool... or the more explicit "starexoplanettool" (spelling out the acronym). Thanks to you all. EDIT #2 I like the "funny" solution of having a package named S, and a subpackage E... to finally get: ``` import S.E.T ``` Thanks Don Question.
2012/02/21
[ "https://Stackoverflow.com/questions/9387119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1198648/" ]
> > Is this a pythonic proper naming for a package? > > > Under the circumstances, it doesn't matter. Leading upper case is rare (ConfigParser, HTMLParser). Camel-case is rare (cStringIO). But that doesn't make all upper-case wrong. It just means you should pick a better acronym next time. "set" is the English word with the largest number of definitions. It is the **single** poorest choice of acronym possible. The point is not to conform to a community "standard" of more-or-less acceptable behavior. The point is to write something that works. > > (ie. upper-case, written a lot of times). > > > Hardly a concern. People actually run software more often than they write it. People read and tweak more often than writing, also. If you're worried about misspelling, there's copy and paste. Also, even a cheap IDE like Komodo Edit can figure out how to code-complete an installed module with a long name.
Why not spell out the meaning behind the acronym, and if the user is desperate for a shorter name, they can do `import someetymologyterm as SET` or whatever they prefer. Python gives them the choice, so it's not the end of the world either way.
9,387,119
I'm currently creating a code named "SET". The code's name is an acronym, which has been defined for many (non-programming) reasons, and therefore cannot be changed. **Problem:** The easiest, and I believe the least painful, way for the end user to use my code would be naming the package "set". But of course this is a problem, since it conflicts with the built-in ***set*** function. **Question:** What are the possible solutions? Some may be (there are probably more): 1. change the package name (e.g. setb). ``` import setb ``` I would really, really prefer not to, because then it will be different from the real name 2. make the package's name upper-case (SET) ``` import SET ``` It would be a straightforward solution, but I'm wondering: is this proper Pythonic naming for a package? Also, I find this a bit painful, since all modules defined in the code will have something like "import SET.x.y..." (i.e. upper-case, written a lot of times). But this is not a really big deal if this is a Pythonic way. 3. keep the name "set" ``` import set ``` Well, this is obviously not fine. But it would be a problem only if the user uses "import set", wouldn't it? This should not happen in "normal usage conditions", since the code provides some scripts to use it, rather than being used as a standard Python module. But we never know: it could be imported like that, and there may even be some problems I'm not seeing (with the built-in set). I'm considering solution 2., but I'm really not sure. Maybe this is not proper, or maybe you guys have a better solution. PS: I've found some similar topics on the web and on Stack Overflow, but they usually deal with names inside a script or module inside a package. The problem here is really related to the code's name (which is meaningful only written this way), and therefore to the proper naming of the package. --- EDIT **Selected solution:** I've chosen to use "SET" as the package name, although many good suggestions have been proposed here: pyset, semt, setool... or the more explicit "starexoplanettool" (spelling out the acronym). Thanks to you all. EDIT #2 I like the "funny" solution of having a package named S, and a subpackage E... to finally get: ``` import S.E.T ``` Thanks Don Question.
2012/02/21
[ "https://Stackoverflow.com/questions/9387119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1198648/" ]
> > Is this a pythonic proper naming for a package? > > > Under the circumstances, it doesn't matter. Leading upper case is rare (ConfigParser, HTMLParser). Camel-case is rare (cStringIO). But that doesn't make all upper-case wrong. It just means you should pick a better acronym next time. "set" is the English word with the largest number of definitions. It is the **single** poorest choice of acronym possible. The point is not to conform to a community "standard" of more-or-less acceptable behavior. The point is to write something that works. > > (ie. upper-case, written a lot of times). > > > Hardly a concern. People actually run software more often than they write it. People read and tweak more often than writing, also. If you're worried about misspelling, there's copy and paste. Also, even a cheap IDE like Komodo Edit can figure out how to code-complete an installed module with a long name.
If you have to choose such an unfortunate name, you could at least turn the pain in the ass into an ironic statement! Just create a main package "S" with a subpackage "E" and therein a subpackage "T". Then you could do:

```
import S.E.T
```

or

```
from S.E.T import xyz
```

or

```
import S.E.T as set_
```

You could even do some voodoo in the `__init__.py` file of the main package ;-)

Just my 2 cents!
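To make the `__init__.py` "voodoo" concrete, here is a minimal sketch, assuming the S/E/T layout above (everything else about the API is hypothetical):

```python
# S/__init__.py
# Re-export the innermost subpackage so users can write `from S import T`
# in addition to the full `import S.E.T`.
from .E import T
```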
9,387,119
I'm currently creating a code named "SET". The code's name is an acronym, which has been defined for many (non-programming) reasons, and therefore cannot be changed. **Problem:** The easiest, and I believe the least painful, way for the end user to use my code would be naming the package "set". But of course this is a problem, since it conflicts with the built-in ***set*** function. **Question:** What are the possible solutions? Some may be (there are probably more): 1. change the package name (e.g. setb). ``` import setb ``` I would really, really prefer not to, because then it will be different from the real name 2. make the package's name upper-case (SET) ``` import SET ``` It would be a straightforward solution, but I'm wondering: is this proper Pythonic naming for a package? Also, I find this a bit painful, since all modules defined in the code will have something like "import SET.x.y..." (i.e. upper-case, written a lot of times). But this is not a really big deal if this is a Pythonic way. 3. keep the name "set" ``` import set ``` Well, this is obviously not fine. But it would be a problem only if the user uses "import set", wouldn't it? This should not happen in "normal usage conditions", since the code provides some scripts to use it, rather than being used as a standard Python module. But we never know: it could be imported like that, and there may even be some problems I'm not seeing (with the built-in set). I'm considering solution 2., but I'm really not sure. Maybe this is not proper, or maybe you guys have a better solution. PS: I've found some similar topics on the web and on Stack Overflow, but they usually deal with names inside a script or module inside a package. The problem here is really related to the code's name (which is meaningful only written this way), and therefore to the proper naming of the package. --- EDIT **Selected solution:** I've chosen to use "SET" as the package name, although many good suggestions have been proposed here: pyset, semt, setool... or the more explicit "starexoplanettool" (spelling out the acronym). Thanks to you all. EDIT #2 I like the "funny" solution of having a package named S, and a subpackage E... to finally get: ``` import S.E.T ``` Thanks Don Question.
2012/02/21
[ "https://Stackoverflow.com/questions/9387119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1198648/" ]
> > Is this a pythonic proper naming for a package? > > > Under the circumstances, it doesn't matter. Leading upper case is rare (ConfigParser, HTMLParser). Camel-case is rare (cStringIO). But that doesn't make all upper-case wrong. It just means you should pick a better acronym next time. "set" is the English word with the largest number of definitions. It is the **single** poorest choice of acronym possible. The point is not to conform to a community "standard" of more-or-less acceptable behavior. The point is to write something that works. > > (ie. upper-case, written a lot of times). > > > Hardly a concern. People actually run software more often than they write it. People read and tweak more often than writing, also. If you're worried about misspelling, there's copy and paste. Also, even a cheap IDE like Komodo Edit can figure out how to code-complete an installed module with a long name.
Maybe I'm not getting it, but this looks simply like a namespace problem. Here is one way out: prepend the path to set.py onto sys.path before the import takes place. You can wrap this in a function which backs out the change after the import. I'm on a train so I can't test this, but that is what I would try.
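A rough sketch of that idea (untested; the directory path is hypothetical):

```python
import importlib
import sys

def import_set(package_dir):
    # Temporarily prepend package_dir so the set.py there wins the module
    # lookup, then back the change out once the import is done.
    sys.path.insert(0, package_dir)
    try:
        return importlib.import_module('set')
    finally:
        sys.path.remove(package_dir)

set_module = import_set('/path/to/SET')  # hypothetical location of set.py
```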
9,387,119
I'm currently creating a code named "SET". The code's name is an acronym, which has been defined for many (non-programming) reasons, and therefore cannot be changed. **Problem:** The easiest, and I believe the least painful, way for the end user to use my code would be naming the package "set". But of course this is a problem, since it conflicts with the built-in ***set*** function. **Question:** What are the possible solutions? Some may be (there are probably more): 1. change the package name (e.g. setb). ``` import setb ``` I would really, really prefer not to, because then it will be different from the real name 2. make the package's name upper-case (SET) ``` import SET ``` It would be a straightforward solution, but I'm wondering: is this proper Pythonic naming for a package? Also, I find this a bit painful, since all modules defined in the code will have something like "import SET.x.y..." (i.e. upper-case, written a lot of times). But this is not a really big deal if this is a Pythonic way. 3. keep the name "set" ``` import set ``` Well, this is obviously not fine. But it would be a problem only if the user uses "import set", wouldn't it? This should not happen in "normal usage conditions", since the code provides some scripts to use it, rather than being used as a standard Python module. But we never know: it could be imported like that, and there may even be some problems I'm not seeing (with the built-in set). I'm considering solution 2., but I'm really not sure. Maybe this is not proper, or maybe you guys have a better solution. PS: I've found some similar topics on the web and on Stack Overflow, but they usually deal with names inside a script or module inside a package. The problem here is really related to the code's name (which is meaningful only written this way), and therefore to the proper naming of the package. --- EDIT **Selected solution:** I've chosen to use "SET" as the package name, although many good suggestions have been proposed here: pyset, semt, setool... or the more explicit "starexoplanettool" (spelling out the acronym). Thanks to you all. EDIT #2 I like the "funny" solution of having a package named S, and a subpackage E... to finally get: ``` import S.E.T ``` Thanks Don Question.
2012/02/21
[ "https://Stackoverflow.com/questions/9387119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1198648/" ]
Why not spell out the meaning behind the acronym, and if the user is desperate for a shorter name, they can do `import someetymologyterm as SET` or whatever they prefer. Python gives them the choice, so it's not the end of the world either way.
If you have to choose such an unfortunate name, you could at least turn the pain in the ass into an ironic statement! Just create a main package "S" with a subpackage "E" and therein a subpackage "T". Then you could do:

```
import S.E.T
```

or

```
from S.E.T import xyz
```

or

```
import S.E.T as set_
```

You could even do some voodoo in the `__init__.py` file of the main package ;-)

Just my 2 cents!
9,387,119
I'm currently creating a code named "SET". The code's name is an acronym, which has been defined for many (non-programming) reasons, and therefore cannot be changed. **Problem:** The easiest, and I believe the least painful, way for the end user to use my code would be naming the package "set". But of course this is a problem, since it conflicts with the built-in ***set*** function. **Question:** What are the possible solutions? Some may be (there are probably more): 1. change the package name (e.g. setb). ``` import setb ``` I would really, really prefer not to, because then it will be different from the real name 2. make the package's name upper-case (SET) ``` import SET ``` It would be a straightforward solution, but I'm wondering: is this proper Pythonic naming for a package? Also, I find this a bit painful, since all modules defined in the code will have something like "import SET.x.y..." (i.e. upper-case, written a lot of times). But this is not a really big deal if this is a Pythonic way. 3. keep the name "set" ``` import set ``` Well, this is obviously not fine. But it would be a problem only if the user uses "import set", wouldn't it? This should not happen in "normal usage conditions", since the code provides some scripts to use it, rather than being used as a standard Python module. But we never know: it could be imported like that, and there may even be some problems I'm not seeing (with the built-in set). I'm considering solution 2., but I'm really not sure. Maybe this is not proper, or maybe you guys have a better solution. PS: I've found some similar topics on the web and on Stack Overflow, but they usually deal with names inside a script or module inside a package. The problem here is really related to the code's name (which is meaningful only written this way), and therefore to the proper naming of the package. --- EDIT **Selected solution:** I've chosen to use "SET" as the package name, although many good suggestions have been proposed here: pyset, semt, setool... or the more explicit "starexoplanettool" (spelling out the acronym). Thanks to you all. EDIT #2 I like the "funny" solution of having a package named S, and a subpackage E... to finally get: ``` import S.E.T ``` Thanks Don Question.
2012/02/21
[ "https://Stackoverflow.com/questions/9387119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1198648/" ]
Why not spell out the meaning behind the acronym, and if the user is desperate for a shorter name, they can do `import someetymologyterm as SET` or whatever they prefer. Python gives them the choice, so it's not the end of the world either way.
Maybe I'm not getting it, but this looks simply like a namespace problem. Here is one way out: prepend the path to set.py onto sys.path before the import takes place. You can wrap this in a function which backs out the change after the import. I'm on a train so I can't test this, but that is what I would try.
9,387,119
I'm currently creating a code named "SET". The code's name is an acronym, which has been defined for many (non-programming) reasons, and therefore cannot be changed. **Problem:** The easiest, and I believe the least painful, way for the end user to use my code would be naming the package "set". But of course this is a problem, since it conflicts with the built-in ***set*** function. **Question:** What are the possible solutions? Some may be (there are probably more): 1. change the package name (e.g. setb). ``` import setb ``` I would really, really prefer not to, because then it will be different from the real name 2. make the package's name upper-case (SET) ``` import SET ``` It would be a straightforward solution, but I'm wondering: is this proper Pythonic naming for a package? Also, I find this a bit painful, since all modules defined in the code will have something like "import SET.x.y..." (i.e. upper-case, written a lot of times). But this is not a really big deal if this is a Pythonic way. 3. keep the name "set" ``` import set ``` Well, this is obviously not fine. But it would be a problem only if the user uses "import set", wouldn't it? This should not happen in "normal usage conditions", since the code provides some scripts to use it, rather than being used as a standard Python module. But we never know: it could be imported like that, and there may even be some problems I'm not seeing (with the built-in set). I'm considering solution 2., but I'm really not sure. Maybe this is not proper, or maybe you guys have a better solution. PS: I've found some similar topics on the web and on Stack Overflow, but they usually deal with names inside a script or module inside a package. The problem here is really related to the code's name (which is meaningful only written this way), and therefore to the proper naming of the package. --- EDIT **Selected solution:** I've chosen to use "SET" as the package name, although many good suggestions have been proposed here: pyset, semt, setool... or the more explicit "starexoplanettool" (spelling out the acronym). Thanks to you all. EDIT #2 I like the "funny" solution of having a package named S, and a subpackage E... to finally get: ``` import S.E.T ``` Thanks Don Question.
2012/02/21
[ "https://Stackoverflow.com/questions/9387119", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1198648/" ]
If you have to choose such an unfortunate name, you could at least turn the pain in the ass into an ironic statement! Just create a main package "S" with a subpackage "E" and therein a subpackage "T". Then you could do:

```
import S.E.T
```

or

```
from S.E.T import xyz
```

or

```
import S.E.T as set_
```

You could even do some voodoo in the `__init__.py` file of the main package ;-)

Just my 2 cents!
Maybe I'm not getting it, but this looks simply like a namespace problem. Here is one way out: prepend the path to set.py onto sys.path before the import takes place. You can wrap this in a function which backs out the change after the import. I'm on a train so I can't test this, but that is what I would try.
18,259,403
I just installed Ubuntu 12.04 and wish to use Python 3.3 rather than the 2.7 which is currently installed. I downloaded Python from python's FTP site and installed it. I followed the steps given in: <https://askubuntu.com/questions/244544/how-to-install-python-3-3> Now I have Python 3.3 installed in /opt/python3.3 and created symlinks for it as described in that URL. Still, the python --version command tells me that my current version is 2.7.3. What's wrong?
2013/08/15
[ "https://Stackoverflow.com/questions/18259403", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1043400/" ]
Instead of using `python` in terminal use `python3.3`
Python 3.x and 2.x are incompatible languages. [PEP 394 — The "python" Command on Unix-Like Systems](http://www.python.org/dev/peps/pep-0394/) contains recommendations for how to handle this fact. The short version is this: * `python` should refer to the latest Python 2.x. * `python3` should always refer to the latest Python 3.x. * `python2` should refer to the latest Python 2.x (and *must* if `python` refers to 3.x), but often doesn't. * `pythonX.Y` should always refer to Python X.Y. So, running `python3` or `python3.3` should run your custom 3.3 installation, but `python` should run your platform's standard 2.7. The Rationale and Migration Notes sections explain why the first rule is as it is, but the short version is this: There are millions of scripts out there—some integral to the functional of various OS distros or third-party packages—that assume `/usr/bin/env python` runs Python 2.x, and no scripts that assume it runs Python 3.x. So, your best bet is to leave things the way your distro wanted, and explicitly use `python3` to refer to your 3.3 installation. (And, likewise, `pip3`, `ipython3`, `idle3`, etc.) --- The specific instructions you followed also create a link named `py`. This is definitely non-standard, but I can see how it would be convenient. If you want to do this, go for it—then, instead of running `python`, just run `py`. (However, you still want to make sure you get links named `python3.3` and `python3` onto the path, so you have a PEP-394-compliant installation. If those instructions don't do that for you, do it manually after you're done.)
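If you want your own scripts to fail loudly when they are accidentally run with the wrong interpreter, a minimal guard looks like this (just a sketch; the version floor is whatever your code actually needs):

```python
#!/usr/bin/env python3
import sys

# Refuse to run under Python 2.x, which a bare `python` may still invoke.
if sys.version_info < (3, 3):
    sys.exit("This script requires Python 3.3+; run it with `python3`.")

print("Running under", sys.version)
```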
43,540,593
I have the following model ``` @python_2_unicode_compatible class Booking(models.Model): session = models.ForeignKey(verbose_name=_('Session'), to=Session, default=None, null=False, blank=False) quantity = models.PositiveIntegerField(verbose_name=_('Quantity'), default=1, null=False, blank=False) price = models.DecimalField(verbose_name=_('Price'), max_digits=10, decimal_places=2, default=None, null=False, blank=False) name = models.CharField(verbose_name=_('Name'), max_length=100, default=None, null=False, blank=False) email = models.EmailField(verbose_name=_('Email'), default=None, null=True, blank=True) phone_number = models.CharField(verbose_name=_('Phone Number'), max_length=30, default=None, null=True, blank=True) ``` Say I need to change my email and phone\_number fields. I want them to have `null=False` and `blank=False`. Do these alterations require a new migration?
2017/04/21
[ "https://Stackoverflow.com/questions/43540593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7337701/" ]
Yes they do. `null=False` requires a change to the database schema itself; `blank=False` does not, but Django needs a migration anyway so that the migration runner's internal graph of the model state is up to date.
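For illustration, after changing the two fields and running `makemigrations`, the generated migration would look roughly like this (app label and migration names are hypothetical, and Django may additionally prompt you about existing NULL rows):

```python
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('bookings', '0001_initial'),  # hypothetical app/migration names
    ]

    operations = [
        migrations.AlterField(
            model_name='booking',
            name='email',
            field=models.EmailField(default=None, max_length=254, verbose_name='Email'),
        ),
        migrations.AlterField(
            model_name='booking',
            name='phone_number',
            field=models.CharField(default=None, max_length=30, verbose_name='Phone Number'),
        ),
    ]
```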
Sure they do. Every change you make to your model fields (from simply altering the `help_text` to completely renaming a model field) requires you to run `makemigrations` and `migrate`, so that Django can reconstruct your model state in the future.
43,540,593
I have the following model ``` @python_2_unicode_compatible class Booking(models.Model): session = models.ForeignKey(verbose_name=_('Session'), to=Session, default=None, null=False, blank=False) quantity = models.PositiveIntegerField(verbose_name=_('Quantity'), default=1, null=False, blank=False) price = models.DecimalField(verbose_name=_('Price'), max_digits=10, decimal_places=2, default=None, null=False, blank=False) name = models.CharField(verbose_name=_('Name'), max_length=100, default=None, null=False, blank=False) email = models.EmailField(verbose_name=_('Email'), default=None, null=True, blank=True) phone_number = models.CharField(verbose_name=_('Phone Number'), max_length=30, default=None, null=True, blank=True) ``` Say I need to change my email and phone\_number fields. I want them to have `null=False` and `blank=False`. Do these alterations require a new migration?
2017/04/21
[ "https://Stackoverflow.com/questions/43540593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7337701/" ]
<https://docs.djangoproject.com/en/1.11/topics/db/models/#field-options>

The Django documentation says:

> null
>
> If True, Django will store empty values as NULL in the database. Default is False.
>
> blank
>
> If True, the field is allowed to be blank. Default is False.
>
> Note that this is different than null. null is purely database-related, whereas blank is validation-related. If a field has blank=True, form validation will allow entry of an empty value. If a field has blank=False, the field will be required.

For a change in `null` you need to migrate.

For a change in `blank` you don't need to migrate, because it is form-validation related.
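As a quick illustration of that distinction (field names are only examples):

```python
from django.db import models

class Contact(models.Model):
    # blank=False is enforced only by form validation; no schema change
    nickname = models.CharField(max_length=30, blank=False)
    # null=False is enforced by the database itself (a NOT NULL column)
    email = models.EmailField(null=False, blank=False)
```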
Sure they do. Every change you make to your model fields (from simply altering the `help_text` to completely renaming a model field) requires you to run `makemigrations` and `migrate`, so that Django can reconstruct your model state in the future.
43,540,593
I have the following model ``` @python_2_unicode_compatible class Booking(models.Model): session = models.ForeignKey(verbose_name=_('Session'), to=Session, default=None, null=False, blank=False) quantity = models.PositiveIntegerField(verbose_name=_('Quantity'), default=1, null=False, blank=False) price = models.DecimalField(verbose_name=_('Price'), max_digits=10, decimal_places=2, default=None, null=False, blank=False) name = models.CharField(verbose_name=_('Name'), max_length=100, default=None, null=False, blank=False) email = models.EmailField(verbose_name=_('Email'), default=None, null=True, blank=True) phone_number = models.CharField(verbose_name=_('Phone Number'), max_length=30, default=None, null=True, blank=True) ``` Say I need to change my email and phone\_number fields. I want them to have `null=False` and `blank=False`. Do these alterations require a new migration?
2017/04/21
[ "https://Stackoverflow.com/questions/43540593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7337701/" ]
Sure. To check, you can run `python manage.py makemigrations --dry-run` (--dry-run doesn't save a new migration file, but shows whether one is necessary).
Sure they do. Every change you make to your model fields (from simply altering the `help_text` to completely renaming a model field) requires you to run `makemigrations` and `migrate`, so that Django can reconstruct your model state in the future.
43,540,593
I have the following model ``` @python_2_unicode_compatible class Booking(models.Model): session = models.ForeignKey(verbose_name=_('Session'), to=Session, default=None, null=False, blank=False) quantity = models.PositiveIntegerField(verbose_name=_('Quantity'), default=1, null=False, blank=False) price = models.DecimalField(verbose_name=_('Price'), max_digits=10, decimal_places=2, default=None, null=False, blank=False) name = models.CharField(verbose_name=_('Name'), max_length=100, default=None, null=False, blank=False) email = models.EmailField(verbose_name=_('Email'), default=None, null=True, blank=True) phone_number = models.CharField(verbose_name=_('Phone Number'), max_length=30, default=None, null=True, blank=True) ``` Say I need to change my email and phone\_number fields. I want them to have `null=False` and `blank=False`. Do these alterations require a new migration?
2017/04/21
[ "https://Stackoverflow.com/questions/43540593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7337701/" ]
Yes they do. `null=False` requires a change to the database schema itself; `blank=False` does not, but Django needs a migration anyway so that the migration runner's internal graph of the model state is up to date.
<https://docs.djangoproject.com/en/1.11/topics/db/models/#field-options>

The Django documentation says:

> null
>
> If True, Django will store empty values as NULL in the database. Default is False.
>
> blank
>
> If True, the field is allowed to be blank. Default is False.
>
> Note that this is different than null. null is purely database-related, whereas blank is validation-related. If a field has blank=True, form validation will allow entry of an empty value. If a field has blank=False, the field will be required.

For a change in `null` you need to migrate.

For a change in `blank` you don't need to migrate, because it is form-validation related.
43,540,593
I have the following model ``` @python_2_unicode_compatible class Booking(models.Model): session = models.ForeignKey(verbose_name=_('Session'), to=Session, default=None, null=False, blank=False) quantity = models.PositiveIntegerField(verbose_name=_('Quantity'), default=1, null=False, blank=False) price = models.DecimalField(verbose_name=_('Price'), max_digits=10, decimal_places=2, default=None, null=False, blank=False) name = models.CharField(verbose_name=_('Name'), max_length=100, default=None, null=False, blank=False) email = models.EmailField(verbose_name=_('Email'), default=None, null=True, blank=True) phone_number = models.CharField(verbose_name=_('Phone Number'), max_length=30, default=None, null=True, blank=True) ``` Say I need to change my email and phone\_number fields. I want them to have `null=False` and `blank=False`. Do these alterations require a new migration?
2017/04/21
[ "https://Stackoverflow.com/questions/43540593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7337701/" ]
Yes they do. `null=False` requires a change to the database schema itself; `blank=False` does not, but Django needs a migration anyway so that the migration runner's internal graph of the model state is up to date.
Sure. To check, you can run `python manage.py makemigrations --dry-run` (--dry-run doesn't save a new migration file, but shows whether one is necessary).
43,540,593
I have the following model ``` @python_2_unicode_compatible class Booking(models.Model): session = models.ForeignKey(verbose_name=_('Session'), to=Session, default=None, null=False, blank=False) quantity = models.PositiveIntegerField(verbose_name=_('Quantity'), default=1, null=False, blank=False) price = models.DecimalField(verbose_name=_('Price'), max_digits=10, decimal_places=2, default=None, null=False, blank=False) name = models.CharField(verbose_name=_('Name'), max_length=100, default=None, null=False, blank=False) email = models.EmailField(verbose_name=_('Email'), default=None, null=True, blank=True) phone_number = models.CharField(verbose_name=_('Phone Number'), max_length=30, default=None, null=True, blank=True) ``` Say I need to change my email and phone\_number fields. I want them to have `null=False` and `blank=False`. Do these alterations require a new migration?
2017/04/21
[ "https://Stackoverflow.com/questions/43540593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7337701/" ]
Sure. To check, you can run `python manage.py makemigrations --dry-run` (--dry-run doesn't save a new migration file, but shows whether one is necessary).
<https://docs.djangoproject.com/en/1.11/topics/db/models/#field-options>

The Django documentation says:

> null
>
> If True, Django will store empty values as NULL in the database. Default is False.
>
> blank
>
> If True, the field is allowed to be blank. Default is False.
>
> Note that this is different than null. null is purely database-related, whereas blank is validation-related. If a field has blank=True, form validation will allow entry of an empty value. If a field has blank=False, the field will be required.

For a change in `null` you need to migrate.

For a change in `blank` you don't need to migrate, because it is form-validation related.
54,676,297
I am learning Python by reading the 'Python Crash Course' book, so I'm still a beginner. What I want to know is the difference between these two pieces of code, as they both seem the same to me but give different results.

```
names={
    'first' : ['john','mike'],
    }
for name in names.keys():
    print(name[1])
```

This gives the result 'i'.

```
names={
    'first' : ['john','mike'],
    }
for name in names.keys():
    print(names[name][1])
```

This gives my expected result: 'mike'.

Any help would be appreciated.
2019/02/13
[ "https://Stackoverflow.com/questions/54676297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10791329/" ]
You can do this with a simple `ifelse` statement; nothing else is required.

```
a <- 10
ifelse(a %in% k2, length(k2), 0)
[1] 0
```

You could wrap it in a function and feed the different sets in:

```
my_func <- function(x){
  ifelse(a %in% x, length(x), 0)
}

my_func(k2)
[1] 6
```
If you have more K(i) lists (100, for example) and you need to iterate over all of them, you can use a loop and store the results in a summary table. I have never used map\_lgl, but we can do it with plain base R, like:

```
k1 <- c(1,2,3,5,6)
k2 <- c(10,12,13,15,16,18)

results <- data.frame()

for(i in 1:2){
  analysis <- get(paste("k",i,sep=""))
  if(10 %in% analysis){
    results[nrow(results)+1, 1] <- paste("k",i,sep="")
    results[nrow(results), 2] <- length(analysis)
  } else{
    results[nrow(results)+1, 1] <- paste("k",i,sep="")
    results[nrow(results), 2] <- 0
  }
}
```

Then we get:

[![enter image description here](https://i.stack.imgur.com/l8GAE.png)](https://i.stack.imgur.com/l8GAE.png)
24,589,581
I'm trying to set the default value for a datetime field to the max date, `timezone.datetime.max`:

```
from django.db import models
from django.utils import timezone

class Item(models.Model):
    id = models.AutoField(primary_key=True)
    deleted = models.DateTimeField(default=timezone.datetime.max)
```

And I'm getting this error whenever I try to save a new object to the DB:

```
/usr/local/lib/python2.7/dist-packages/django/db/models/fields/__init__.py:903: RuntimeWarning: DateTimeField Item.deleted received a naive datetime (9999-12-31 23:59:59.999999) while time zone support is active.
  RuntimeWarning)
```

Reading the [docs](https://docs.djangoproject.com/en/dev/topics/i18n/timezones/) on this issue, I've set `USE_TZ=True` and I've got `pytz` installed.
2014/07/05
[ "https://Stackoverflow.com/questions/24589581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2000875/" ]
`timezone.datetime` is just the regular Python `datetime` class, imported in `django.utils.timezone`, so `timezone.datetime.max` returns a regular naive datetime. You need to make it aware:

```
timezone.make_aware(timezone.datetime.max, timezone.get_default_timezone())
```
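Putting that together, a sketch of the model could use a callable default. (Pinning to UTC here: localizing `datetime.max` into a zone behind UTC can overflow when Django converts the value to UTC for storage, so UTC is the safe choice.)

```python
from django.db import models
from django.utils import timezone

def aware_datetime_max():
    # Attach UTC directly instead of converting, to avoid an OverflowError.
    return timezone.datetime.max.replace(tzinfo=timezone.utc)

class Item(models.Model):
    deleted = models.DateTimeField(default=aware_datetime_max)  # callable: no ()
```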
I had the same issue; this was my fix:

```
timezone.datetime.max.replace(tzinfo=timezone.get_default_timezone())
```

With "datetime.min" I had another issue (the year became -1), which I resolved with this:

```
timezone.datetime.min.replace(tzinfo=timezone.utc)
```
60,231,537
I'm trying to convert a simple python code (any sample code) into an executable file but on opening the application all I can see is a blank black screen. I am using: * Python 3.7 * PyInstaller 3.6 * One file output I am able to convert the file to .exe using auto-py-to-exe but I don't see any output when I try to run the application.
2020/02/14
[ "https://Stackoverflow.com/questions/60231537", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12894687/" ]
```
 -> Index Scan using .. s (cost=0.42..8.44 rows=1 width=69) (actual time=0.082..17.418 rows=21899 loops=1)
       Index Cond: (s.solution = 12642)
```

The planner thinks it will find 1 row, and instead finds 21899. That error can pretty clearly lead to bad plans. And a single equality condition should be estimated quite accurately, so I'd say the statistics on your table are way off. It could be that the autovac launcher is tuned poorly so it doesn't run often enough, or it could be that select parts of your data change very rapidly (did you just insert 21899 rows with s.solution = 12642 immediately before running the query?) and so the stats can't be kept accurate enough.

```
-> Nested Loop ...
     Join Filter: (s.activity_index = sa.activity_index)
     Rows Removed by Join Filter: 705476285
     -> ...
     -> Index Scan using scheduled_activities_idx on public.scheduledactivities sa (cost=0.42..4.60 rows=1 width=16) (actual time=0.006..4.612 rows=32216 loops=21899)
          Output: sa.usedresourceset, sa.activity_index, sa.solution_id
          Index Cond: (sa.solution_id = 12642)
```

If you can't get it to use the Hash Join, you can at least reduce the harm of the Nested Loop by building an index on `scheduledactivities (solution_id, activity_index)`. That way the `activity_index` criterion could be part of the Index Condition, rather than being a Join Filter. You could probably then drop the index exclusively on `solution_id`, as there is little point in maintaining both indexes.
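If you want to apply both suggestions (adding the composite index and refreshing the planner statistics) from Python, a sketch with psycopg2 could look like this (the connection string is a placeholder, and `CREATE INDEX IF NOT EXISTS` needs PostgreSQL 9.5+):

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder connection string
with conn, conn.cursor() as cur:
    cur.execute(
        "CREATE INDEX IF NOT EXISTS scheduledactivities_solution_activity_idx "
        "ON scheduledactivities (solution_id, activity_index)"
    )
    cur.execute("ANALYZE scheduledactivities")  # refresh the row estimates
conn.close()
```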
The SQL statement of the fast plan uses WHERE r.schedule\_id = 10 and returns about 22000 rows (with an estimate of 105).

The SQL statement of the slow plan uses WHERE r.schedule\_id = 12642 and returns about 21000 rows (with an estimate of only 1).

The slow plan uses nested loops instead of hash joins, maybe because of a bad row estimate for the joins: the estimated row count is 1, but the actual count is 21899. For example in this step:

```
Nested Loop  (cost=0.42..9.74 rows=1 width=110) (actual time=0.091..227.705 rows=21899 loops=1)
```

If the data does not change, there may be a statistics issue (skewed data) for some columns.
53,132,434
I use a registry search and the Win32\_Product class to get the list of the programs installed on the computer, but they don't return all the programs. I've seen programs in C++ that give the same results as the Programs and Features list in the Control Panel. Is there any API for Python that can give me the same result?

Here is the code for C++: <https://www.codeproject.com/Articles/6791/How-to-get-a-list-of-installed-applications>

This is what I use:

```
import win32com.client

strComputer = "."
objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator")
objSWbemServices = objWMIService.ConnectServer(strComputer, "root\cimv2")
colItems = objSWbemServices.ExecQuery("Select * from Win32_Product")
for objItem in colItems:
    print("Name: ", objItem.Name)
```

And this with the registry:

```
aReg = ConnectRegistry(None, HKEY_LOCAL_MACHINE)
aKey = OpenKey(aReg, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall")
for i in range(1024):
    try:
        asubkey_name = EnumKey(aKey, i)
        asubkey = OpenKey(aKey, asubkey_name)
        val = str(QueryValueEx(asubkey, "DisplayName"))
        b = "!@#$,01'"
        for char in b:
            val = val.replace(char, "")
        r = len(val)
        val = str(val[1:r - 2])
        val2 = str(QueryValueEx(asubkey, "DisplayIcon"))
        if s.lower() in val.lower():
            r = len(val2)
            val2 = str(val2[2:r - 5])
            # print(val2)
            subprocess.Popen(val2)
            break
        # print(val, val2)
    except EnvironmentError:
        continue
```
2018/11/03
[ "https://Stackoverflow.com/questions/53132434", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10351974/" ]
Slightly improved version that works without win32con import and retrieves software version and publisher. Thanks Barmak Shemirani for his initial answer :) [EDIT] Disclaimer: The code in this post is outdated. **I have published that code as a python package.** Install with `pip install windows_tools.installed_software` Usage: ``` from windows_tools.installed_software import get_installed_software for software in get_installed_software(): print(software['name'], software['version'], software['publisher']) ``` [/EDIT] ``` import winreg def foo(hive, flag): aReg = winreg.ConnectRegistry(None, hive) aKey = winreg.OpenKey(aReg, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall", 0, winreg.KEY_READ | flag) count_subkey = winreg.QueryInfoKey(aKey)[0] software_list = [] for i in range(count_subkey): software = {} try: asubkey_name = winreg.EnumKey(aKey, i) asubkey = winreg.OpenKey(aKey, asubkey_name) software['name'] = winreg.QueryValueEx(asubkey, "DisplayName")[0] try: software['version'] = winreg.QueryValueEx(asubkey, "DisplayVersion")[0] except EnvironmentError: software['version'] = 'undefined' try: software['publisher'] = winreg.QueryValueEx(asubkey, "Publisher")[0] except EnvironmentError: software['publisher'] = 'undefined' software_list.append(software) except EnvironmentError: continue return software_list software_list = foo(winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_32KEY) + foo(winreg.HKEY_LOCAL_MACHINE, winreg.KEY_WOW64_64KEY) + foo(winreg.HKEY_CURRENT_USER, 0) for software in software_list: print('Name=%s, Version=%s, Publisher=%s' % (software['name'], software['version'], software['publisher'])) print('Number of installed apps: %s' % len(software_list)) ```
Check both the 32-bit and 64-bit registry views using `KEY_WOW64_32KEY` and `KEY_WOW64_64KEY`. In addition, some installers will use `HKEY_CURRENT_USER`, although the latter is rarely used.

Note that `QueryValueEx` returns a tuple; the first element of that tuple contains the required string. `QueryInfoKey` returns a tuple whose first element is the total number of subkeys.

```
def foo(hive, flag):
    aReg = winreg.ConnectRegistry(None, hive)
    aKey = winreg.OpenKey(aReg, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
                          0, win32con.KEY_READ | flag)

    count_subkey = winreg.QueryInfoKey(aKey)[0]

    for i in range(count_subkey):
        try:
            asubkey_name = winreg.EnumKey(aKey, i)
            asubkey = winreg.OpenKey(aKey, asubkey_name)
            val = winreg.QueryValueEx(asubkey, "DisplayName")[0]
            print(val)
        except EnvironmentError:
            continue

foo(win32con.HKEY_LOCAL_MACHINE, win32con.KEY_WOW64_32KEY)
foo(win32con.HKEY_LOCAL_MACHINE, win32con.KEY_WOW64_64KEY)
foo(win32con.HKEY_CURRENT_USER, 0)
```
57,469,145
The following is how I get historical data; however, the date format does not allow me to specify the minute or second of the data, only the day, which is not precise enough.

```
start_date_str=start_date.strftime("%d %b, %Y")
data1=client.get_historical_klines(pair, Client.KLINE_INTERVAL_1MINUTE, start_date_str)
```

I'm looking to get price data for a particular hour, minute and second using the python-binance API provided by Sam McHardy (link below).

<https://github.com/binance-exchange/python-binance>

If anyone knows whether this is possible, or if there are any alternatives, I would really appreciate it.
2019/08/12
[ "https://Stackoverflow.com/questions/57469145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10005867/" ]
KLINE\_INTERVAL\_1MINUTE is the smallest time resolution for get\_historical\_klines provided by Binance. If you need smaller time intervals, you can use **get\_aggregate\_trades**, which accepts startTime and endTime in milliseconds. This request returns the list of trades, so one has to calculate the average price and total volume for the desired time period.
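A sketch of that approach (untested; assumes `client` is an authenticated python-binance `Client`, and note that the aggregate-trades endpoint limits the startTime/endTime range to under an hour):

```python
from datetime import datetime, timezone

# Target a specific hour, minute and second (UTC), in milliseconds.
start_ms = int(datetime(2019, 8, 12, 14, 30, 5, tzinfo=timezone.utc).timestamp() * 1000)
end_ms = start_ms + 1000  # a one-second window

trades = client.get_aggregate_trades(symbol='BTCUSDT', startTime=start_ms, endTime=end_ms)

total_qty = sum(float(t['q']) for t in trades)  # 'q' = quantity
if total_qty:
    # volume-weighted average price over the window; 'p' = price
    vwap = sum(float(t['p']) * float(t['q']) for t in trades) / total_qty
    print('VWAP: %s, volume: %s' % (vwap, total_qty))
```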
The current syntax is: (symbol)@kline\_(interval), for example:

```
BTCUSDT@kline_1m (one minute)
```

For autobahn, the complete JSON subscribe message is:

```
def onOpen(self):
    subscribe_message = {
        "method": "SUBSCRIBE",
        "params": [
            "btcusdt@miniTicker",
            "btcusdt@depth"
        ],
        "id": 1
    }
    self.sendMessage(json.dumps(subscribe_message).encode('utf8'))
```
10,264,739
I grabbed the KDD track1 dataset from Kaggle and decided to load a ~2.5GB 3-column CSV file into memory on my 16GB high-memory EC2 instance:

```
data = np.loadtxt('rec_log_train.txt')
```

The Python session ate up all my memory (100%) and then got killed. I then read the same file using R (via read.table) and it used less than 5GB of RAM, which collapsed to less than 2GB after I called the garbage collector.

My question is: why did this fail under numpy, and what's the proper way of reading such a file into memory? Yes, I can use generators and avoid the problem, but that's not the goal.
2012/04/22
[ "https://Stackoverflow.com/questions/10264739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/190894/" ]
```
import pandas, re, numpy as np

def load_file(filename, num_cols, delimiter='\t'):
    data = None
    try:
        data = np.load(filename + '.npy')
    except:
        splitter = re.compile(delimiter)

        def items(infile):
            for line in infile:
                for item in splitter.split(line):
                    yield item

        with open(filename, 'r') as infile:
            data = np.fromiter(items(infile), np.float64, -1)
            data = data.reshape((-1, num_cols))
            np.save(filename, data)

    return pandas.DataFrame(data)
```

This reads in the 2.5GB file and serializes the output matrix. The input file is read in "lazily", so no intermediate data structures are built and minimal memory is used. The initial load takes a long time, but each subsequent load (of the serialized file) is fast. Please let me know if you have tips!
You can try `numpy.fromfile` <http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html>
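For a whitespace-delimited 3-column file, usage would be roughly like this (a sketch, not tested on the actual dataset):

```python
import numpy as np

# sep=' ' matches any run of whitespace (tabs and newlines included), so the
# flat result just needs to be reshaped into the known 3-column layout.
data = np.fromfile('rec_log_train.txt', dtype=np.float64, sep=' ')
data = data.reshape(-1, 3)
```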
10,264,739
I grabbed the KDD track1 dataset from Kaggle and decided to load a ~2.5GB 3-column CSV file into memory on my 16GB high-memory EC2 instance:

```
data = np.loadtxt('rec_log_train.txt')
```

The Python session ate up all my memory (100%) and then got killed. I then read the same file using R (via read.table) and it used less than 5GB of RAM, which collapsed to less than 2GB after I called the garbage collector.

My question is: why did this fail under numpy, and what's the proper way of reading such a file into memory? Yes, I can use generators and avoid the problem, but that's not the goal.
2012/04/22
[ "https://Stackoverflow.com/questions/10264739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/190894/" ]
Try out recfile for now: <http://code.google.com/p/recfile/> . There are a couple of efforts I know of to make a fast C/C++ file reader for NumPy; it's on my short todo list for pandas because it causes problems like these. Warren Weckesser also has a project here: <https://github.com/WarrenWeckesser/textreader> . I don't know which one is better, try them both?
You can try `numpy.fromfile` <http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html>
10,264,739
I grabbed the KDD track1 dataset from Kaggle and decided to load a ~2.5GB 3-column CSV file into memory on my 16GB high-memory EC2 instance:

```
data = np.loadtxt('rec_log_train.txt')
```

The Python session ate up all my memory (100%) and then got killed. I then read the same file using R (via read.table) and it used less than 5GB of RAM, which collapsed to less than 2GB after I called the garbage collector.

My question is: why did this fail under numpy, and what's the proper way of reading such a file into memory? Yes, I can use generators and avoid the problem, but that's not the goal.
2012/04/22
[ "https://Stackoverflow.com/questions/10264739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/190894/" ]
```
import pandas, re, numpy as np

def load_file(filename, num_cols, delimiter='\t'):
    data = None
    try:
        data = np.load(filename + '.npy')
    except:
        splitter = re.compile(delimiter)

        def items(infile):
            for line in infile:
                for item in splitter.split(line):
                    yield item

        with open(filename, 'r') as infile:
            data = np.fromiter(items(infile), np.float64, -1)
            data = data.reshape((-1, num_cols))
            np.save(filename, data)

    return pandas.DataFrame(data)
```

This reads in the 2.5GB file and serializes the output matrix. The input file is read in "lazily", so no intermediate data structures are built and minimal memory is used. The initial load takes a long time, but each subsequent load (of the serialized file) is fast. Please let me know if you have tips!
Try out recfile for now: <http://code.google.com/p/recfile/> . There are a couple of efforts I know of to make a fast C/C++ file reader for NumPy; it's on my short todo list for pandas because it causes problems like these. Warren Weckesser also has a project here: <https://github.com/WarrenWeckesser/textreader> . I don't know which one is better, try them both?
40,715,198
I have Django 1.9.2 with Python 3.4.2 in a virtual environment. I have many applications, and the two related ones are common and shop. common/models.py contains:

```
from django.apps import apps

class Document(CLDate):
    user = models.ForeignKey(User)
    assessmentorder = models.ForeignKey(apps.get_model('shop', 'AssessmentOrder'), blank=True, null=True)
```

shop/models.py contains:

```
from common.models import ServiceModel

class AssessmentOrder(CLDate):
    """AssessmentOrder model"""
    order = models.ForeignKey(Order)
    comment = models.TextField()
```

This is a circular import, and I have read about many strategies to resolve it (including apps.get\_model), but none of them seems to work for me. I also tried

```
apps.get_model('shop.AssessmentOrder')
```

, but got the same error. The complete error message is the following:

```
File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/utils/autoreload.py", line 226, in wrapper
    fn(*args, **kwargs)
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
    autoreload.raise_last_exception()
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/utils/autoreload.py", line 249, in raise_last_exception
    six.reraise(*_exception)
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
    raise value.with_traceback(tb)
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/utils/autoreload.py", line 226, in wrapper
    fn(*args, **kwargs)
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/__init__.py", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/apps/registry.py", line 108, in populate
    app_config.import_models(all_models)
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/apps/config.py", line 202, in import_models
    self.models_module = import_module(models_module_name)
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/importlib/__init__.py", line 109, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
  File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1129, in _exec
  File "<frozen importlib._bootstrap>", line 1471, in exec_module
  File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
  File "/home/hidden/git/sccdb/sccdb/common/models.py", line 25, in <module>
    class Document(CLDate):
  File "/home/hidden/git/sccdb/sccdb/common/models.py", line 28, in Document
    assessmentorder = models.ForeignKey(apps.get_model('shop.AssessmentOrder'), blank=True, null=True)
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/apps/registry.py", line 194, in get_model
    self.check_models_ready()
  File "/home/hidden/.virtualenvs/sccdb34/lib/python3.4/site-packages/django/apps/registry.py", line 131, in check_models_ready
    raise AppRegistryNotReady("Models aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.
```

Is this somehow related to my Django version or Python 3, or what am I doing wrong?
2016/11/21
[ "https://Stackoverflow.com/questions/40715198", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2194805/" ]
Instead of using the `get_model` function in the foreign key declaration, you can simply put the model name as a string, and it'll still work:

```
assessmentorder = models.ForeignKey('shop.AssessmentOrder', blank=True, null=True)
```

This should resolve the issue.
Don't import the models; just put a path to them, as described in the [docs](https://docs.djangoproject.com/en/1.9/ref/models/fields/#foreignkey):

```
from django.conf import settings

class Document(CLDate):
    user = models.ForeignKey(settings.AUTH_USER_MODEL)
    assessmentorder = models.ForeignKey('shop.AssessmentOrder', blank=True, null=True)
```

And

```
class AssessmentOrder(CLDate):
    """AssessmentOrder model"""
    order = models.ForeignKey('yourapp.Order')
    comment = models.TextField()
```
55,952,337
I have a problem with installing `psycopg2` on my mac. I tried several things such as installing it with `pip` and also homebrew. I also downloaded all dependencies but it still fails to install it. I get the following error message, which looks horrifying: ``` Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/1c/nhs40gy513143dfqyblmt3r80000gn/T/pip-install-rxlPem/psycopg2/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /private/var/folders/1c/nhs40gy513143dfqyblmt3r80000gn/T/pip-record-93LksX/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/1c/nhs40gy513143dfqyblmt3r80000gn/T/pip-install-rxlPem/psycopg2/ ``` What could be the issue? **EDIT:** Full error can be found [here](https://pastebin.com/yL3BzJeG), and this appears to be the most relevant part: ``` clang: warning: no such sysroot directory: '/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.sdk' [-Wmissing-sysroot] In file included from psycopg/psycopgmodule.c:27: In file included from ./psycopg/psycopg.h:34: /Library/Frameworks/Python.framework/Versions/3.7/include/python3.7m/Python.h:25:10: fatal error: 'stdio.h' file not found #include <stdio.h> ^~~~~~~~~ 1 error generated. ```
2019/05/02
[ "https://Stackoverflow.com/questions/55952337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9423974/" ]
`psycopg2` [has some install dependencies that might be giving you trouble](http://initd.org/psycopg/docs/install.html#install-from-source). In this case it looks like the install process can't find [`stdio.h`](https://en.wikipedia.org/wiki/C_file_input/output), part of the C standard library. [Installing the Xcode Command Line Tools](http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/) might help. But if you don't want to bother with this, try installing [`psycopg2-binary`](https://pypi.org/project/psycopg2-binary/) instead: > > You can also obtain a stand-alone package, not requiring a compiler or external libraries, by installing the [psycopg2-binary](https://pypi.org/project/psycopg2-binary/) package from PyPI: > > > > ``` > $ pip install psycopg2-binary > > ``` > > The binary package is a practical choice for development and testing but in production it is advised to use the package built from sources. > > >
Anyone having this issue on a Debian distribution can use this solution. Install the development headers for PostgreSQL:

```
sudo apt install python3-dev libpq-dev
```

then try:

```
pip3 install psycopg2
```

I hope this solves your issue.
55,952,337
I have a problem with installing `psycopg2` on my mac. I tried several things such as installing it with `pip` and also homebrew. I also downloaded all dependencies but it still fails to install it. I get the following error message, which looks horrifying: ``` Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/1c/nhs40gy513143dfqyblmt3r80000gn/T/pip-install-rxlPem/psycopg2/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /private/var/folders/1c/nhs40gy513143dfqyblmt3r80000gn/T/pip-record-93LksX/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/1c/nhs40gy513143dfqyblmt3r80000gn/T/pip-install-rxlPem/psycopg2/ ``` What could be the issue? **EDIT:** Full error can be found [here](https://pastebin.com/yL3BzJeG), and this appears to be the most relevant part: ``` clang: warning: no such sysroot directory: '/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.sdk' [-Wmissing-sysroot] In file included from psycopg/psycopgmodule.c:27: In file included from ./psycopg/psycopg.h:34: /Library/Frameworks/Python.framework/Versions/3.7/include/python3.7m/Python.h:25:10: fatal error: 'stdio.h' file not found #include <stdio.h> ^~~~~~~~~ 1 error generated. ```
2019/05/02
[ "https://Stackoverflow.com/questions/55952337", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9423974/" ]
For anybody looking for a solution, or for future reference: I faced this issue while installing psycopg2 on Mojave. Following on, I found this psycopg2 GitHub [issue](https://github.com/psycopg/psycopg2/issues/903), whose solution is to install the missing library headers using:

```
sudo installer -pkg /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg -target /
```

This is probably an issue with Mojave itself; see [this related Pillow issue](https://github.com/python-pillow/Pillow/issues/3438#issuecomment-435169249).
Anyone having this issue on a Debian distribution can use this solution. Install the development headers for PostgreSQL:

```
sudo apt install python3-dev libpq-dev
```

then try:

```
pip3 install psycopg2
```

I hope this solves your issue.
49,051,111
I have a question regarding the send\_keys function. How can I make the test wait for the entire content of send\_keys to be entered? I cannot use time.sleep, so I tried:

```
WebDriverWait(self.browser, 5).until(
    expected_conditions.presence_of_element_located((By.ID, "name")))
query = driver.find_element_by_id('name')
query.send_keys('python')
driver.find_element_by_id("button").click()
```

The app clicks the button before the send\_keys action completes. Thank you for an answer.
2018/03/01
[ "https://Stackoverflow.com/questions/49051111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9429143/" ]
You could try to use the following code: ``` query = WebDriverWait(self.browser, 5).until( expected_conditions.presence_of_element_located((By.ID, "name"))) query.send_keys('python') WebDriverWait(self.browser, 5).until(lambda browser: query.get_attribute('value') == 'python') self.browser.find_element_by_id("button").click() ``` This code should allow you to wait until a full string is entered in the field.
If I am interpreting your question correctly, you have a web control that provides a "search" field which will progressively filter a list based on the content of the field. So, as you type "python", your list will get reduced to just the items that match "python". In this case you'll want to use your code, but add an additional wait for the item in the list that matches. Something like this:

```
WebDriverWait(self.browser, 5).until(
    expected_conditions.presence_of_element_located((By.ID, "name")))
query = driver.find_element_by_id('name')
query.send_keys('python')
options_list = some_code_to_find_your_options_list
target_option = WebDriverWait(options_list, 5).until(
    expected_conditions.presence_of_element_located((By.XPATH, ".//*[text()[contains(., 'python')]]")))
driver.find_element_by_id("button").click()
```

This all assumes that the button selects the chosen item.
49,051,111
I have a question regarding the send\_keys function. How can I make the test wait for the entire content of send\_keys to be entered? I cannot use time.sleep, so I tried:

```
WebDriverWait(self.browser, 5).until(
    expected_conditions.presence_of_element_located((By.ID, "name")))
query = driver.find_element_by_id('name')
query.send_keys('python')
driver.find_element_by_id("button").click()
```

The app clicks the button before send\_keys completes. Thank you for an answer.
2018/03/01
[ "https://Stackoverflow.com/questions/49051111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9429143/" ]
You could try to use the following code:

```
query = WebDriverWait(self.browser, 5).until(
    expected_conditions.presence_of_element_located((By.ID, "name")))
query.send_keys('python')
WebDriverWait(self.browser, 5).until(
    lambda browser: query.get_attribute('value') == 'python')
self.browser.find_element_by_id("button").click()
```

This code should allow you to wait until a full string is entered in the field.
```
import time

# the value to send; put a URL (or any other value) inside the quotes
url = ''

# initialize the input field as the variable 'textField' (fill in your own locator)
textField = driver.find_element_by........("")

# seconds to wait
n = 10

# equivalent of a do-while loop in Python
while True:
    print("in while loop")
    # clear the input field, then type the value
    textField.clear()
    textField.send_keys(url)
    # give the page time to register the full input
    time.sleep(n)
    # read the text back from the input field after send_keys
    typed = textField.get_attribute("value")
    # if the send_keys value and the text in the input field match, quit the loop
    if typed == url:
        print(n)
        break
    # if not, continue the loop with an increased waiting time
    n = n + 5
```
49,051,111
I have a question regarding the send\_keys function. How can I make the test wait for the entire content of send\_keys to be entered? I cannot use time.sleep, so I tried:

```
WebDriverWait(self.browser, 5).until(
    expected_conditions.presence_of_element_located((By.ID, "name")))
query = driver.find_element_by_id('name')
query.send_keys('python')
driver.find_element_by_id("button").click()
```

The app clicks the button before send\_keys completes. Thank you for an answer.
2018/03/01
[ "https://Stackoverflow.com/questions/49051111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9429143/" ]
```
import time

# the value to send; put a URL (or any other value) inside the quotes
url = ''

# initialize the input field as the variable 'textField' (fill in your own locator)
textField = driver.find_element_by........("")

# seconds to wait
n = 10

# equivalent of a do-while loop in Python
while True:
    print("in while loop")
    # clear the input field, then type the value
    textField.clear()
    textField.send_keys(url)
    # give the page time to register the full input
    time.sleep(n)
    # read the text back from the input field after send_keys
    typed = textField.get_attribute("value")
    # if the send_keys value and the text in the input field match, quit the loop
    if typed == url:
        print(n)
        break
    # if not, continue the loop with an increased waiting time
    n = n + 5
```
If I am interpreting your question correctly, you have a web control that provides a "search" field which will progressively filter a list based on the content of the field. So, as you type "python", your list will get reduced to just the items that match "python". In this case you'll want to use your code, but add an additional wait for the item in the list that matches. Something like this:

```
WebDriverWait(self.browser, 5).until(
    expected_conditions.presence_of_element_located((By.ID, "name")))
query = driver.find_element_by_id('name')
query.send_keys('python')
options_list = some_code_to_find_your_options_list
target_option = WebDriverWait(options_list, 5).until(
    expected_conditions.presence_of_element_located((By.XPATH, ".//*[text()[contains(.,'python')]]")))
driver.find_element_by_id("button").click()
```

This all assumes that the button selects the chosen item.
69,879,490
I am using the BigQuery connector for AWS Glue in my Glue job. It was working fine a few days ago, but now it is suddenly giving me the error below:

```
LAUNCH ERROR | Glue ETL Marketplace - failed to download connector.Please refer logs for details.
```

Below is the full error that I am getting in CloudWatch:

```
2021-11-08T11:33:02.045+05:00 Traceback (most recent call last):
File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
2021-11-08T11:33:02.070+05:00 "__main__", mod_spec)
File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 361, in <module>
2021-11-08T11:33:02.070+05:00 main()
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 351, in main
2021-11-08T11:33:02.070+05:00 res += download_jars_per_connection(conn, region, endpoint, proxy)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 304, in download_jars_per_connection
2021-11-08T11:33:02.070+05:00 download_and_unpack_docker_layer(ecr_url, layer["digest"], dir_prefix, http_header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 168, in download_and_unpack_docker_layer
2021-11-08T11:33:02.070+05:00 layer = send_get_request(layer_url, header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 80, in send_get_request
2021-11-08T11:33:02.070+05:00
2021-11-08T11:33:02.070+05:00 response.raise_for_status()
File "/home/spark/.local/lib/python3.7/site-packages/requests/models.py", line 765, in raise_for_status
2021-11-08T11:33:02.071+05:00 raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request
2021-11-08T11:33:02.119+05:00 Glue ETL Marketplace - failed to download connector, activation script exited with code
```
2021/11/08
[ "https://Stackoverflow.com/questions/69879490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7773898/" ]
When a Glue job uses a connector, it has to download the connector in the form of a container image. The container images for connectors are available in Amazon's public ECR repository. To pull an image from the AWS public repo, you have to add the "AmazonEC2ContainerRegistryFullAccess" policy to your IAM role. You can limit the access to read-only as well.
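For reference, a minimal sketch of attaching that managed policy with boto3 (the role name `my-glue-job-role` is hypothetical; substitute your job's actual role):

```python
import boto3

iam = boto3.client("iam")
# Attach the AWS-managed ECR policy so the Glue job can pull the connector image
iam.attach_role_policy(
    RoleName="my-glue-job-role",  # hypothetical role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
)
```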
This could very likely be a permissions issue. I was running into it and temporarily gave looser permissions, which seemed to resolve it.
69,879,490
I am using the BigQuery connector for AWS Glue in my Glue job. It was working fine a few days ago, but now it is suddenly giving me the error below:

```
LAUNCH ERROR | Glue ETL Marketplace - failed to download connector.Please refer logs for details.
```

Below is the full error that I am getting in CloudWatch:

```
2021-11-08T11:33:02.045+05:00 Traceback (most recent call last):
File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
2021-11-08T11:33:02.070+05:00 "__main__", mod_spec)
File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 361, in <module>
2021-11-08T11:33:02.070+05:00 main()
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 351, in main
2021-11-08T11:33:02.070+05:00 res += download_jars_per_connection(conn, region, endpoint, proxy)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 304, in download_jars_per_connection
2021-11-08T11:33:02.070+05:00 download_and_unpack_docker_layer(ecr_url, layer["digest"], dir_prefix, http_header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 168, in download_and_unpack_docker_layer
2021-11-08T11:33:02.070+05:00 layer = send_get_request(layer_url, header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 80, in send_get_request
2021-11-08T11:33:02.070+05:00
2021-11-08T11:33:02.070+05:00 response.raise_for_status()
File "/home/spark/.local/lib/python3.7/site-packages/requests/models.py", line 765, in raise_for_status
2021-11-08T11:33:02.071+05:00 raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request
2021-11-08T11:33:02.119+05:00 Glue ETL Marketplace - failed to download connector, activation script exited with code
```
2021/11/08
[ "https://Stackoverflow.com/questions/69879490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7773898/" ]
This could very likely be a permissions issue. I was running into it and temporarily gave looser permissions, which seemed to resolve it.
I have this issue with the GCP BigQuery connector. Some jobs are able to run the connector, some aren't. All have the same permissions and settings. There seems to be an issue after requesting the ECR Authorization Token where the request times out.
69,879,490
I am using the BigQuery connector for AWS Glue in my Glue job. It was working fine a few days ago, but now it is suddenly giving me the error below:

```
LAUNCH ERROR | Glue ETL Marketplace - failed to download connector.Please refer logs for details.
```

Below is the full error that I am getting in CloudWatch:

```
2021-11-08T11:33:02.045+05:00 Traceback (most recent call last):
File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
2021-11-08T11:33:02.070+05:00 "__main__", mod_spec)
File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 361, in <module>
2021-11-08T11:33:02.070+05:00 main()
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 351, in main
2021-11-08T11:33:02.070+05:00 res += download_jars_per_connection(conn, region, endpoint, proxy)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 304, in download_jars_per_connection
2021-11-08T11:33:02.070+05:00 download_and_unpack_docker_layer(ecr_url, layer["digest"], dir_prefix, http_header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 168, in download_and_unpack_docker_layer
2021-11-08T11:33:02.070+05:00 layer = send_get_request(layer_url, header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 80, in send_get_request
2021-11-08T11:33:02.070+05:00
2021-11-08T11:33:02.070+05:00 response.raise_for_status()
File "/home/spark/.local/lib/python3.7/site-packages/requests/models.py", line 765, in raise_for_status
2021-11-08T11:33:02.071+05:00 raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request
2021-11-08T11:33:02.119+05:00 Glue ETL Marketplace - failed to download connector, activation script exited with code
```
2021/11/08
[ "https://Stackoverflow.com/questions/69879490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7773898/" ]
When a Glue job uses a connector, it has to download the connector in the form of a container image. The container images for connectors are available in Amazon's public ECR repository. To pull an image from the AWS public repo, you have to add the "AmazonEC2ContainerRegistryFullAccess" policy to your IAM role. You can limit the access to read-only as well.
I have this issue with the GCP BigQuery connector. Some jobs are able to run the connector, some aren't. All have the same permissions and settings. There seems to be an issue after requesting the ECR Authorization Token where the request times out.
69,879,490
I am using the BigQuery connector for AWS Glue in my Glue job. It was working fine a few days ago, but now it is suddenly giving me the error below:

```
LAUNCH ERROR | Glue ETL Marketplace - failed to download connector.Please refer logs for details.
```

Below is the full error that I am getting in CloudWatch:

```
2021-11-08T11:33:02.045+05:00 Traceback (most recent call last):
File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
2021-11-08T11:33:02.070+05:00 "__main__", mod_spec)
File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 361, in <module>
2021-11-08T11:33:02.070+05:00 main()
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 351, in main
2021-11-08T11:33:02.070+05:00 res += download_jars_per_connection(conn, region, endpoint, proxy)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 304, in download_jars_per_connection
2021-11-08T11:33:02.070+05:00 download_and_unpack_docker_layer(ecr_url, layer["digest"], dir_prefix, http_header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 168, in download_and_unpack_docker_layer
2021-11-08T11:33:02.070+05:00 layer = send_get_request(layer_url, header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 80, in send_get_request
2021-11-08T11:33:02.070+05:00
2021-11-08T11:33:02.070+05:00 response.raise_for_status()
File "/home/spark/.local/lib/python3.7/site-packages/requests/models.py", line 765, in raise_for_status
2021-11-08T11:33:02.071+05:00 raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request
2021-11-08T11:33:02.119+05:00 Glue ETL Marketplace - failed to download connector, activation script exited with code
```
2021/11/08
[ "https://Stackoverflow.com/questions/69879490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7773898/" ]
When a Glue job uses a connector, it has to download the connector in the form of a container image. The container images for connectors are available in Amazon's public ECR repository. To pull an image from the AWS public repo, you have to add the "AmazonEC2ContainerRegistryFullAccess" policy to your IAM role. You can limit the access to read-only as well.
I've come across this in an organizational setting while trying to use the BigQuery Marketplace connector. I was explicitly denied `GetAuthorizationToken` on non-EU regions. Hence, the Glue job will fail in a similar way as described by the OP, because it tries to download the Docker image at run time from here: `https://709825985650.dkr.ecr.us-east-1.amazonaws.com/amazon-web-services/glue/bigquery:0.22.0-glue3.0-2` A possible workaround is to push a copy of the image to your private ECR. Then, when creating the Glue connection, set `CONNECTOR_URL` in its `connection_properties` to your private ECR URL. This will solve similar issues. Also, this seems more reasonable than adding wide-reaching policies like `AmazonEC2ContainerRegistryFullAccess` (as suggested by [Sparkian](https://stackoverflow.com/a/71209447/5251061)) to a role. You'll be able to grant granular access permissions on this specific ECR repo instead.
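To make the workaround concrete, here is a hedged sketch of creating such a connection with boto3; the connection name and the private registry URL are illustrative placeholders, and the exact set of `ConnectionProperties` keys your connector needs may differ:

```python
import boto3

glue = boto3.client("glue")
glue.create_connection(
    ConnectionInput={
        "Name": "bigquery-private-ecr",  # hypothetical connection name
        "ConnectionType": "MARKETPLACE",
        "ConnectionProperties": {
            # Point Glue at the copy of the connector image in your own ECR
            "CONNECTOR_URL": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/glue/bigquery:0.22.0-glue3.0-2",
        },
    }
)
```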
69,879,490
I am using the BigQuery connector for AWS Glue in my Glue job. It was working fine a few days ago, but now it is suddenly giving me the error below:

```
LAUNCH ERROR | Glue ETL Marketplace - failed to download connector.Please refer logs for details.
```

Below is the full error that I am getting in CloudWatch:

```
2021-11-08T11:33:02.045+05:00 Traceback (most recent call last):
File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
2021-11-08T11:33:02.070+05:00 "__main__", mod_spec)
File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 361, in <module>
2021-11-08T11:33:02.070+05:00 main()
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 351, in main
2021-11-08T11:33:02.070+05:00 res += download_jars_per_connection(conn, region, endpoint, proxy)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 304, in download_jars_per_connection
2021-11-08T11:33:02.070+05:00 download_and_unpack_docker_layer(ecr_url, layer["digest"], dir_prefix, http_header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 168, in download_and_unpack_docker_layer
2021-11-08T11:33:02.070+05:00 layer = send_get_request(layer_url, header)
File "/tmp/aws_glue_custom_connector_python/docker/unpack_docker_image.py", line 80, in send_get_request
2021-11-08T11:33:02.070+05:00
2021-11-08T11:33:02.070+05:00 response.raise_for_status()
File "/home/spark/.local/lib/python3.7/site-packages/requests/models.py", line 765, in raise_for_status
2021-11-08T11:33:02.071+05:00 raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request
2021-11-08T11:33:02.119+05:00 Glue ETL Marketplace - failed to download connector, activation script exited with code
```
2021/11/08
[ "https://Stackoverflow.com/questions/69879490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7773898/" ]
I've come across this in an organizational setting while trying to use the BigQuery Marketplace connector. I was explicitly denied `GetAuthorizationToken` on non-EU regions. Hence, the Glue job will fail in a similar way as described by the OP, because it tries to download the Docker image at run time from here: `https://709825985650.dkr.ecr.us-east-1.amazonaws.com/amazon-web-services/glue/bigquery:0.22.0-glue3.0-2` A possible workaround is to push a copy of the image to your private ECR. Then, when creating the Glue connection, set `CONNECTOR_URL` in its `connection_properties` to your private ECR URL. This will solve similar issues. Also, this seems more reasonable than adding wide-reaching policies like `AmazonEC2ContainerRegistryFullAccess` (as suggested by [Sparkian](https://stackoverflow.com/a/71209447/5251061)) to a role. You'll be able to grant granular access permissions on this specific ECR repo instead.
I have this issue with the GCP BigQuery connector. Some jobs are able to run the connector, some aren't. All have the same permissions and settings. There seems to be an issue after requesting the ECR Authorization Token where the request times out.
69,791,617
No matter what I try, Kivy always fails to install. I can install most other packages, but not Kivy. I tried it through PyCharm, reinstalling Python, and trying every command in the cmd prompt; it just won't install. These are the errors I get:

```
C:\Program Files\Python 3.10.0>python -m pip install kivy==2.0.0
Collecting kivy==2.0.0
  Using cached Kivy-2.0.0.tar.gz (23.7 MB)
  Installing build dependencies ... error
  ERROR: Command errored out with exit status 1:
   command: 'C:\Program Files\Python 3.10.0\python.exe' 'C:\Users\ranta\AppData\Local\Temp\pip-standalone-pip-12uaq80_\__env_pip__.zip\pip' install --ignore-installed --no-user --prefix 'C:\Users\ranta\AppData\Local\Temp\pip-build-env-pi2ci1b2\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'cython>=0.24,<=0.29.21,!=0.27,!=0.27.2' 'kivy_deps.gstreamer_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew_dev~=0.3.0; sys_platform == "win32"' 'kivy_deps.gstreamer~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew~=0.3.0; sys_platform == "win32"'
       cwd: None
  Complete output (10 lines):
  Collecting setuptools
    Using cached setuptools-58.3.0-py3-none-any.whl (946 kB)
  Collecting wheel
    Using cached wheel-0.37.0-py2.py3-none-any.whl (35 kB)
  Collecting cython!=0.27,!=0.27.2,<=0.29.21,>=0.24
    Using cached Cython-0.29.21-py2.py3-none-any.whl (974 kB)
  Collecting kivy_deps.gstreamer_dev~=0.3.1
    Using cached kivy_deps.gstreamer_dev-0.3.2-cp310-cp310-win_amd64.whl (3.9 MB)
  ERROR: Could not find a version that satisfies the requirement kivy_deps.sdl2_dev~=0.3.1 (from versions: 0.4.2, 0.4.3)
  ERROR: No matching distribution found for kivy_deps.sdl2_dev~=0.3.1
  ----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/12/96/091ddacafb84dd18555a32d860dbfaf9c806147aa30c6f3c8b93cb7bab97/Kivy-2.0.0.tar.gz#sha256=d25e44eb44e43762b2fd0c5874e51954e0f1181fd9800d8a6756be6d084812d8 (from https://pypi.org/simple/kivy/). Command errored out with exit status 1: 'C:\Program Files\Python 3.10.0\python.exe' 'C:\Users\ranta\AppData\Local\Temp\pip-standalone-pip-12uaq80_\__env_pip__.zip\pip' install --ignore-installed --no-user --prefix 'C:\Users\ranta\AppData\Local\Temp\pip-build-env-pi2ci1b2\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'cython>=0.24,<=0.29.21,!=0.27,!=0.27.2' 'kivy_deps.gstreamer_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew_dev~=0.3.0; sys_platform == "win32"' 'kivy_deps.gstreamer~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew~=0.3.0; sys_platform == "win32"' Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement kivy==2.0.0 (from versions: 1.0.1a0, 1.0.2a0, 1.0.3a0, 1.0.4b0, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.0.9, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1, 2.0.0rc1, 2.0.0rc2, 2.0.0rc3, 2.0.0rc4, 2.0.0)
ERROR: No matching distribution found for kivy==2.0.0
```

END OF ERROR

Any help would be greatly appreciated!
2021/11/01
[ "https://Stackoverflow.com/questions/69791617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14114212/" ]
EDIT: Kivy now supports 3.10: <https://kivy.org/doc/master/gettingstarted/installation.html#install-kivy> Did some research at [their PyPI page](https://pypi.org/project/Kivy/#files) and [their GitHub](https://github.com/kivy/kivy/releases) It looks like Kivy doesn't support Python 3.10 **yet**. ([GitHub issue](https://github.com/kivy/kivy/issues/7654#issuecomment-949975555)) Kivy currently supports Python 3.6 - 3.9
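If you want to guard against this up front, here is a small sketch that checks the interpreter version before attempting the install (the 3.6-3.9 bounds reflect the Kivy 2.0.0 support window mentioned above):

```python
import sys

# Kivy 2.0.0 ships wheels only for CPython 3.6-3.9, so fail early on 3.10
if not ((3, 6) <= sys.version_info[:2] <= (3, 9)):
    raise SystemExit("Kivy 2.0.0 requires Python 3.6-3.9, found %d.%d"
                     % sys.version_info[:2])
```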
I faced the same problem and solved it by uninstalling Python 3.10; it works with Python 3.9. After uninstalling Python 3.10 I followed the steps on [kivy's installation page](https://kivy.org/doc/stable/gettingstarted/installation.html) and it worked. While I followed these steps, my virtual environment was created with Python 3.8.8 (Python 3.8.8 64-bit ('kivy\_venv':venv)). I don't know why, but at least it works.
69,791,617
No matter what I try, Kivy always fails to install. I can install most other packages, but not Kivy. I tried it through PyCharm, reinstalling Python, and trying every command in the cmd prompt; it just won't install. These are the errors I get:

```
C:\Program Files\Python 3.10.0>python -m pip install kivy==2.0.0
Collecting kivy==2.0.0
  Using cached Kivy-2.0.0.tar.gz (23.7 MB)
  Installing build dependencies ... error
  ERROR: Command errored out with exit status 1:
   command: 'C:\Program Files\Python 3.10.0\python.exe' 'C:\Users\ranta\AppData\Local\Temp\pip-standalone-pip-12uaq80_\__env_pip__.zip\pip' install --ignore-installed --no-user --prefix 'C:\Users\ranta\AppData\Local\Temp\pip-build-env-pi2ci1b2\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'cython>=0.24,<=0.29.21,!=0.27,!=0.27.2' 'kivy_deps.gstreamer_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew_dev~=0.3.0; sys_platform == "win32"' 'kivy_deps.gstreamer~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew~=0.3.0; sys_platform == "win32"'
       cwd: None
  Complete output (10 lines):
  Collecting setuptools
    Using cached setuptools-58.3.0-py3-none-any.whl (946 kB)
  Collecting wheel
    Using cached wheel-0.37.0-py2.py3-none-any.whl (35 kB)
  Collecting cython!=0.27,!=0.27.2,<=0.29.21,>=0.24
    Using cached Cython-0.29.21-py2.py3-none-any.whl (974 kB)
  Collecting kivy_deps.gstreamer_dev~=0.3.1
    Using cached kivy_deps.gstreamer_dev-0.3.2-cp310-cp310-win_amd64.whl (3.9 MB)
  ERROR: Could not find a version that satisfies the requirement kivy_deps.sdl2_dev~=0.3.1 (from versions: 0.4.2, 0.4.3)
  ERROR: No matching distribution found for kivy_deps.sdl2_dev~=0.3.1
  ----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/12/96/091ddacafb84dd18555a32d860dbfaf9c806147aa30c6f3c8b93cb7bab97/Kivy-2.0.0.tar.gz#sha256=d25e44eb44e43762b2fd0c5874e51954e0f1181fd9800d8a6756be6d084812d8 (from https://pypi.org/simple/kivy/). Command errored out with exit status 1: 'C:\Program Files\Python 3.10.0\python.exe' 'C:\Users\ranta\AppData\Local\Temp\pip-standalone-pip-12uaq80_\__env_pip__.zip\pip' install --ignore-installed --no-user --prefix 'C:\Users\ranta\AppData\Local\Temp\pip-build-env-pi2ci1b2\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'cython>=0.24,<=0.29.21,!=0.27,!=0.27.2' 'kivy_deps.gstreamer_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew_dev~=0.3.0; sys_platform == "win32"' 'kivy_deps.gstreamer~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew~=0.3.0; sys_platform == "win32"' Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement kivy==2.0.0 (from versions: 1.0.1a0, 1.0.2a0, 1.0.3a0, 1.0.4b0, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.0.9, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1, 2.0.0rc1, 2.0.0rc2, 2.0.0rc3, 2.0.0rc4, 2.0.0)
ERROR: No matching distribution found for kivy==2.0.0
```

END OF ERROR

Any help would be greatly appreciated!
2021/11/01
[ "https://Stackoverflow.com/questions/69791617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14114212/" ]
EDIT: Kivy now supports 3.10: <https://kivy.org/doc/master/gettingstarted/installation.html#install-kivy> Did some research at [their PyPI page](https://pypi.org/project/Kivy/#files) and [their GitHub](https://github.com/kivy/kivy/releases) It looks like Kivy doesn't support Python 3.10 **yet**. ([GitHub issue](https://github.com/kivy/kivy/issues/7654#issuecomment-949975555)) Kivy currently supports Python 3.6 - 3.9
I faced the same issue and resolved it by uninstalling 3.10 and installing 3.9.10.
69,791,617
No matter what I try, Kivy always fails to install. I can install most other packages, but not Kivy. I tried it through PyCharm, reinstalling Python, and trying every command in the cmd prompt; it just won't install. These are the errors I get:

```
C:\Program Files\Python 3.10.0>python -m pip install kivy==2.0.0
Collecting kivy==2.0.0
  Using cached Kivy-2.0.0.tar.gz (23.7 MB)
  Installing build dependencies ... error
  ERROR: Command errored out with exit status 1:
   command: 'C:\Program Files\Python 3.10.0\python.exe' 'C:\Users\ranta\AppData\Local\Temp\pip-standalone-pip-12uaq80_\__env_pip__.zip\pip' install --ignore-installed --no-user --prefix 'C:\Users\ranta\AppData\Local\Temp\pip-build-env-pi2ci1b2\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'cython>=0.24,<=0.29.21,!=0.27,!=0.27.2' 'kivy_deps.gstreamer_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew_dev~=0.3.0; sys_platform == "win32"' 'kivy_deps.gstreamer~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew~=0.3.0; sys_platform == "win32"'
       cwd: None
  Complete output (10 lines):
  Collecting setuptools
    Using cached setuptools-58.3.0-py3-none-any.whl (946 kB)
  Collecting wheel
    Using cached wheel-0.37.0-py2.py3-none-any.whl (35 kB)
  Collecting cython!=0.27,!=0.27.2,<=0.29.21,>=0.24
    Using cached Cython-0.29.21-py2.py3-none-any.whl (974 kB)
  Collecting kivy_deps.gstreamer_dev~=0.3.1
    Using cached kivy_deps.gstreamer_dev-0.3.2-cp310-cp310-win_amd64.whl (3.9 MB)
  ERROR: Could not find a version that satisfies the requirement kivy_deps.sdl2_dev~=0.3.1 (from versions: 0.4.2, 0.4.3)
  ERROR: No matching distribution found for kivy_deps.sdl2_dev~=0.3.1
  ----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/12/96/091ddacafb84dd18555a32d860dbfaf9c806147aa30c6f3c8b93cb7bab97/Kivy-2.0.0.tar.gz#sha256=d25e44eb44e43762b2fd0c5874e51954e0f1181fd9800d8a6756be6d084812d8 (from https://pypi.org/simple/kivy/). Command errored out with exit status 1: 'C:\Program Files\Python 3.10.0\python.exe' 'C:\Users\ranta\AppData\Local\Temp\pip-standalone-pip-12uaq80_\__env_pip__.zip\pip' install --ignore-installed --no-user --prefix 'C:\Users\ranta\AppData\Local\Temp\pip-build-env-pi2ci1b2\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel 'cython>=0.24,<=0.29.21,!=0.27,!=0.27.2' 'kivy_deps.gstreamer_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2_dev~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew_dev~=0.3.0; sys_platform == "win32"' 'kivy_deps.gstreamer~=0.3.1; sys_platform == "win32"' 'kivy_deps.sdl2~=0.3.1; sys_platform == "win32"' 'kivy_deps.glew~=0.3.0; sys_platform == "win32"' Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement kivy==2.0.0 (from versions: 1.0.1a0, 1.0.2a0, 1.0.3a0, 1.0.4b0, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.0.9, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1, 2.0.0rc1, 2.0.0rc2, 2.0.0rc3, 2.0.0rc4, 2.0.0)
ERROR: No matching distribution found for kivy==2.0.0
```

END OF ERROR

Any help would be greatly appreciated!
2021/11/01
[ "https://Stackoverflow.com/questions/69791617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14114212/" ]
I faced the same problem and solved it by uninstalling Python 3.10; it works with Python 3.9. After uninstalling Python 3.10 I followed the steps on [kivy's installation page](https://kivy.org/doc/stable/gettingstarted/installation.html) and it worked. While I followed these steps, my virtual environment was created with Python 3.8.8 (Python 3.8.8 64-bit ('kivy\_venv':venv)). I don't know why, but at least it works.
I faced the same issue and resolved it by uninstalling 3.10 and installing 3.9.10.
48,198,030
I'm a total newbie to Docker, and am having trouble figuring out how to approach this problem. Consider this super simplified CLI tool that produces a log when run with `docker run`.

```
import click
import logging

logging.basicConfig(filename='log.log')
logger = logging.getLogger(__name__)

@click.group()
@click.version_option('1.0')
def cli():
    '''docker_cli with docker test'''

@cli.command('run')
@click.argument('name', default='my name')
def run(name):
    logger.info("running 'run' within docker")
    print('Hello {}'.format(name))
```

And my Dockerfile is as follows:

```
FROM python:3.5-slim

LABEL maintainer="Boudewijn Aasman"
LABEL version="1.0"

ENV config production

RUN mkdir /docker_cli
COPY docker_cli ./docker_cli
COPY setup.py .
RUN python setup.py install

CMD ["cli", "run"]
```

If I execute the container using:

```
docker run cli_test cli run world
```

how do I retrieve the log file that gets created during the process? The container exits immediately after the command prints out 'Hello world'. My assumption is to use a volume, but I'm not sure how to make it work.
2018/01/11
[ "https://Stackoverflow.com/questions/48198030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3935333/" ]
Just pass a list of column names to index `df`:

```
df[['nnn', 'mmm', 'yyy']]

   nnn  mmm  yyy
0    5    5   10
1    3    4    9
2    7    0    8
```

---

If you need to handle non-existent column names in your list, try filtering with `df.columns.isin` -

```
df.loc[:, df.columns.isin(['nnn', 'mmm', 'yyy', 'zzzzzz'])]

   yyy  nnn  mmm
0   10    5    5
1    9    3    4
2    8    7    0
```
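Another option when some requested names may be missing is `reindex`, which keeps every requested column and fills absent ones with NaN instead of dropping them; a minimal sketch (the DataFrame literal and the nonexistent `'zzz'` column are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'nnn': [5, 3, 7], 'mmm': [5, 4, 0], 'yyy': [10, 9, 8]})
# 'zzz' does not exist, so it comes back as a column of NaN instead of raising
print(df.reindex(columns=['nnn', 'yyy', 'zzz']))
```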
You can just put `mylist` inside `[]` and pandas will select the columns for you.

```
mydata_new = mydata[mylist]
```

Not sure whether your `yyy` is a typo. The reason your code is wrong is that you are reassigning `mydata_new` to a new Series on every iteration of the loop:

```
for item in mylist:
    mydata_new = mydata[item]  # <-
```

Thus, it will create a Series rather than the whole df you want.

---

If some names in the list are not in your data frame, you can always check it with

```
len(set(mylist) - set(mydata.columns)) > 0
```

and print them out:

```
print(set(mylist) - set(mydata.columns))
```

Then see if there are typos or other unintended behaviors.
48,198,030
I'm a total newbie to Docker, and am having trouble figuring out how to approach this problem. Consider this super simplified CLI tool that produces a log when run with `docker run`.

```
import click
import logging

logging.basicConfig(filename='log.log')
logger = logging.getLogger(__name__)

@click.group()
@click.version_option('1.0')
def cli():
    '''docker_cli with docker test'''

@cli.command('run')
@click.argument('name', default='my name')
def run(name):
    logger.info("running 'run' within docker")
    print('Hello {}'.format(name))
```

And my Dockerfile is as follows:

```
FROM python:3.5-slim

LABEL maintainer="Boudewijn Aasman"
LABEL version="1.0"

ENV config production

RUN mkdir /docker_cli
COPY docker_cli ./docker_cli
COPY setup.py .
RUN python setup.py install

CMD ["cli", "run"]
```

If I execute the container using:

```
docker run cli_test cli run world
```

how do I retrieve the log file that gets created during the process? The container exits immediately after the command prints out 'Hello world'. My assumption is to use a volume, but I'm not sure how to make it work.
2018/01/11
[ "https://Stackoverflow.com/questions/48198030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3935333/" ]
Just pass a list of column names to index `df`:

```
df[['nnn', 'mmm', 'yyy']]

   nnn  mmm  yyy
0    5    5   10
1    3    4    9
2    7    0    8
```

---

If you need to handle non-existent column names in your list, try filtering with `df.columns.isin` -

```
df.loc[:, df.columns.isin(['nnn', 'mmm', 'yyy', 'zzzzzz'])]

   yyy  nnn  mmm
0   10    5    5
1    9    3    4
2    8    7    0
```
If `mylist` contains some column names which are not in `mydata.columns`, you will get an error like ```python KeyError: "['fff'] not in index" ``` In this case, you can use the [`df.filter`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.filter.html) function: ```python mydata.filter(['nnn', 'mmm', 'yyy', 'fff']) ```
48,198,030
I'm a total newbie to Docker, and am having trouble figuring out how to approach this problem. Consider this super simplified CLI tool that produces a log when run with `docker run`.

```
import click
import logging

logging.basicConfig(filename='log.log')
logger = logging.getLogger(__name__)

@click.group()
@click.version_option('1.0')
def cli():
    '''docker_cli with docker test'''

@cli.command('run')
@click.argument('name', default='my name')
def run(name):
    logger.info("running 'run' within docker")
    print('Hello {}'.format(name))
```

And my Dockerfile is as follows:

```
FROM python:3.5-slim

LABEL maintainer="Boudewijn Aasman"
LABEL version="1.0"

ENV config production

RUN mkdir /docker_cli
COPY docker_cli ./docker_cli
COPY setup.py .
RUN python setup.py install

CMD ["cli", "run"]
```

If I execute the container using:

```
docker run cli_test cli run world
```

how do I retrieve the log file that gets created during the process? The container exits immediately after the command prints out 'Hello world'. My assumption is to use a volume, but I'm not sure how to make it work.
2018/01/11
[ "https://Stackoverflow.com/questions/48198030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3935333/" ]
You can just put `mylist` inside `[]` and pandas will select the columns for you.

```
mydata_new = mydata[mylist]
```

Not sure whether your `yyy` is a typo. The reason your code is wrong is that you are reassigning `mydata_new` to a new Series on every iteration of the loop:

```
for item in mylist:
    mydata_new = mydata[item]  # <-
```

Thus, it will create a Series rather than the whole df you want.

---

If some names in the list are not in your data frame, you can always check it with

```
len(set(mylist) - set(mydata.columns)) > 0
```

and print them out:

```
print(set(mylist) - set(mydata.columns))
```

Then see if there are typos or other unintended behaviors.
If `mylist` contains some column names which are not in `mydata.columns`, you will get an error like ```python KeyError: "['fff'] not in index" ``` In this case, you can use the [`df.filter`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.filter.html) function: ```python mydata.filter(['nnn', 'mmm', 'yyy', 'fff']) ```
34,654,173
I am trying to install [triangle](http://dzhelil.info/triangle/installing.html) for Python on Kubuntu 14.04. Whether I try to install it via easy\_install or from source, I receive the following error:

```
sudo easy_install triangle
Processing triangle
Writing /home/magnus/triangle/setup.cfg
Running setup.py -q bdist_egg --dist-dir /home/magnus/triangle/egg-dist-tmp-sJ6dHl
Searching for setuptools>=18.0
Best match: setuptools 19.2
Processing setuptools-19.2-py2.7.egg
Using /home/magnus/triangle/setuptools-19.2-py2.7.egg
/home/magnus/triangle/core.pyx:5:35: Expected an identifier or literal
error: Setup script exited with error: unknown file type '.pxd' (from 'triangle/c_triangle.pxd')
```

It seems to be a similar problem as the one mentioned in [this question](https://stackoverflow.com/questions/6846084/unknown-file-type-error-with-pyx-file), but I can't figure out how to fix it. My Cython version is:

```
cython --version
Cython version 0.20.1post0
```

Does someone know how to fix this? Any help is highly appreciated.
2016/01/07
[ "https://Stackoverflow.com/questions/34654173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1831827/" ]
I have used this code snippet in our application. It changes the Windows registry for the current user to enable the "folder options" check boxes; the change is actually made in the Windows registry.

```
private static void checkBoxeEnable()
{
    var processInfo = new ProcessStartInfo("cmd.exe",
        "/c" + "\"" + System.AppDomain.CurrentDomain.BaseDirectory + "Turn_On_Check_Boxes_to_Select_Items.bat\"");
    processInfo.CreateNoWindow = true;
    processInfo.UseShellExecute = false;
    processInfo.RedirectStandardError = true;
    processInfo.RedirectStandardOutput = true;

    // Process.Start already launches the process; just wait for it to finish
    var process = Process.Start(processInfo);
    process.WaitForExit();
}
```

This is the file containing the registry-enabling code, "Turn\_On\_Check\_Boxes\_to\_Select\_Items.bat". Its contents:

```
REG ADD "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /V AutoCheckSelect /T REG_DWORD /D 1 /F
```
You need to use a [`CheckedListBox`](https://msdn.microsoft.com/en-us/library/system.windows.forms.checkedlistbox(v=vs.110).aspx) instead of a normal file dialog. You will have to do some work on it to enable navigating to parent and child folders.
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
If you are using `anaconda` (or `miniconda`) you can use the following:

* `conda install -c anaconda py-xgboost` **UPDATED 2019-09-20**
* [Docs](https://anaconda.org/anaconda/py-xgboost)
* [Other channels available](https://anaconda.org/search?q=py-xgboost), such as `rapidsai/py-xgboost` **UPDATED 2021-12-07**

Check install by:

* Activating the environment (see below)
* Running `conda list`

---

[To activate an environment](https://conda.io/docs/user-guide/tasks/manage-environments.html#activating-an-environment):

On Windows, in your Anaconda Prompt, run (assumes your environment is named `myenv`):

* `activate myenv`

On macOS and Linux, in your Terminal Window, run (assumes your environment is named `myenv`):

* `source activate myenv`

Conda prepends the path name myenv onto your system command.
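After installing, a quick import check confirms the package is visible in the active environment (a minimal sketch; the printed version will vary):

```python
import xgboost as xgb

print(xgb.__version__)  # if this prints a version string, the install worked
```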
The following command should work, but if you have a problem with it:

> conda install -c conda-forge xgboost

first activate your environment. Assuming your environment is named `<MY_ENV>`, simply write in the conda terminal:

```
activate <MY_ENV>
```

and then

```
pip install xgboost
```
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
If you are using `anaconda` (or `miniconda`) you can use the following:

* `conda install -c anaconda py-xgboost` **UPDATED 2019-09-20**
* [Docs](https://anaconda.org/anaconda/py-xgboost)
* [Other channels available](https://anaconda.org/search?q=py-xgboost), such as `rapidsai/py-xgboost` **UPDATED 2021-12-07**

Check install by:

* Activating the environment (see below)
* Running `conda list`

---

[To activate an environment](https://conda.io/docs/user-guide/tasks/manage-environments.html#activating-an-environment):

On Windows, in your Anaconda Prompt, run (assumes your environment is named `myenv`):

* `activate myenv`

On macOS and Linux, in your Terminal Window, run (assumes your environment is named `myenv`):

* `source activate myenv`

Conda prepends the path name myenv onto your system command.
`pip install xgboost` works also with **python 3.8**, whereas other options mentioned above did not work for me
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
If you are using `anaconda` (or `miniconda`) you can use the following:

* `conda install -c anaconda py-xgboost` **UPDATED 2019-09-20**
* [Docs](https://anaconda.org/anaconda/py-xgboost)
* [Other channels available](https://anaconda.org/search?q=py-xgboost), such as `rapidsai/py-xgboost` **UPDATED 2021-12-07**

Check install by:

* Activating the environment (see below)
* Running `conda list`

---

[To activate an environment](https://conda.io/docs/user-guide/tasks/manage-environments.html#activating-an-environment):

On Windows, in your Anaconda Prompt, run (assumes your environment is named `myenv`):

* `activate myenv`

On macOS and Linux, in your Terminal Window, run (assumes your environment is named `myenv`):

* `source activate myenv`

Conda prepends the path name myenv onto your system command.
You can pip install catboost. It is a recently open-sourced gradient boosting library, which is in most cases more accurate and faster than XGBoost, and it has categorical features support. Here is the site of the library: <https://catboost.ai>
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
You can pip install catboost. It is a recently open-sourced gradient boosting library, which is in most cases more accurate and faster than XGBoost, and it has categorical features support. Here is the site of the library: <https://catboost.ai>
On macOS the following command worked:

conda install -c conda-forge xgboost

But before doing this I had read some other article and hence installed gcc using brew.
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
You can pip install catboost. It is a recently open-sourced gradient boosting library, which is in most cases more accurate and faster than XGBoost, and it has categorical features support. Here is the site of the library: <https://catboost.ai>
XGBoost is used in applied machine learning and is known for its gradient boosting algorithm; it is available as a library in Python but has to be compiled using **cmake**. Alternatively, you can download the pre-compiled C library from this [link](https://www.lfd.uci.edu/~gohlke/pythonlibs/#xgboost) and install it using the **pip install <FILE-NAME.whl>** command. Ensure you have downloaded the library that is compatible with your Python version. I experienced this problem while using it in Anaconda (Spyder). Just restart the kernel and your error will go away.
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
Build it from here:

* download xgboost whl file from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#xgboost) (make sure to match your python version and system architecture, e.g. "xgboost-0.6-cp35-cp35m-win\_amd64.whl" for python 3.5 on 64-bit machine)
* open command prompt
* cd to your Downloads folder (or wherever you saved the whl file)
* pip install xgboost-0.6-cp35-cp35m-win\_amd64.whl (or whatever your whl file is named)
`pip install xgboost` works also with **python 3.8**, whereas other options mentioned above did not work for me
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
Build it from here:

* download xgboost whl file from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#xgboost) (make sure to match your python version and system architecture, e.g. "xgboost-0.6-cp35-cp35m-win\_amd64.whl" for python 3.5 on 64-bit machine)
* open command prompt
* cd to your Downloads folder (or wherever you saved the whl file)
* pip install xgboost-0.6-cp35-cp35m-win\_amd64.whl (or whatever your whl file is named)
You first need to build the library through "make", then you can install using anaconda prompt (if you want it on anaconda) or git bash (if you use it in Python only).

First [follow the official guide](http://xgboost.readthedocs.org/en/latest/build.html#building-on-windows) with the following procedure (in Git Bash on Windows):

```
git clone --recursive https://github.com/dmlc/xgboost
git submodule init
git submodule update
```

then [install TDM-GCC here](http://tdm-gcc.tdragon.net/download) and do the following in Git Bash:

```
alias make='mingw32-make'
cp make/mingw64.mk config.mk; make -j4
```

Last, do the following using anaconda prompt or Git Bash:

```
cd xgboost\python-package
python setup.py install
```

Also refer to these great resources:

[Official Guide](http://xgboost.readthedocs.org/en/latest/build.html)

[Installing Xgboost on Windows](https://dnc1994.com/2016/03/installing-xgboost-on-windows/)

[Installing XGBoost For Anaconda on Windows](https://www.ibm.com/developerworks/community/blogs/jfp/entry/Installing_XGBoost_For_Anaconda_on_Windows?lang=en)
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
Build it from here:

* download xgboost whl file from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#xgboost) (make sure to match your python version and system architecture, e.g. "xgboost-0.6-cp35-cp35m-win\_amd64.whl" for python 3.5 on 64-bit machine)
* open command prompt
* cd to your Downloads folder (or wherever you saved the whl file)
* pip install xgboost-0.6-cp35-cp35m-win\_amd64.whl (or whatever your whl file is named)
Besides what's already on the developers' GitHub, which is building from source (creating a C++ environment, etc.), I have found an easier way to do it, which I explained [here](https://www.building-skynet.com/2018/08/03/xgboost-part-un-installation/) in detail. Basically, you have to go to a website by UC Irvine and download a .whl file, then cd to the folder and install xgboost with pip.
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
`pip install xgboost` works also with **python 3.8**, whereas other options mentioned above did not work for me
The following command should work, but if you have a problem with it:

> conda install -c conda-forge xgboost

first activate your environment. Assuming your environment is named `<MY_ENV>`, simply write in the conda terminal:

```
activate <MY_ENV>
```

and then

```
pip install xgboost
```
35,510,582
I tried to install the XGBoost package in Python. I am using Windows OS, 64 bits. I have gone through the following. The package directory states that xgboost is unstable for Windows and is disabled: "pip installation on windows is currently disabled for further investigation, please install from github." <https://pypi.python.org/pypi/xgboost/> I am not well versed in Visual Studio and am facing problems building XGBoost. I am missing opportunities to utilize the xgboost package in data science. Please guide me so that I can import the XGBoost package in Python. Thanks
2016/02/19
[ "https://Stackoverflow.com/questions/35510582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4903479/" ]
You can pip install catboost. It is a recently open-sourced gradient boosting library, which is in most cases more accurate and faster than XGBoost, and it has categorical features support. Here is the site of the library: <https://catboost.ai>
The following command should work, but if you have a problem with it:

> conda install -c conda-forge xgboost

first activate your environment. Assuming your environment is named `<MY_ENV>`, simply write in the conda terminal:

```
activate <MY_ENV>
```

and then

```
pip install xgboost
```
34,492,422
I have a file which contains 1000 words and a folder (the name of this folder is "train") that contains 7 other subfolders, and each subfolder contains 8 files. I have to check if these 1000 words are in the train folder. If a word exists, it appends +1 to the list; otherwise it should append -1 to the list. In the end it should produce a vector that contains 1000 values of +1 or -1.

The code I wrote produces more than 1000 elements and printing doesn't stop. This is because I have to find the unique words first and check these 1000 words against them, but I don't know how to insert the code for finding unique words into the code I have already written.

Hint: I extracted these 1000 words from the train folder. They are the 1000 most common words of this folder. (The language is Persian.)

```
import os

def c (folder_path):
    t=[]
    k=[]
    uniqueWords = []
    for folder in sorted(os.listdir(folder_path)):
        folder_path1 = os.path.join(folder_path, folder)
        for folder1 in sorted(os.listdir(folder_path1)):
            file=os.path.join(folder_path1, folder1)
            text = open(file,encoding="utf-8")
            text=text.read().split()
            t=text+t
    with open ("c:/python34/1000CommonWords.txt", encoding="utf-8") as f1:
        f1=f1.readlines()
        for j in f1:
            if j in uniqueWords:
                k.append (+1)
            else:
                k.append (-1)
    print (k)
```

The code for finding unique words:

```
for i in t:
    if not i in uniqueWords:
        uniqueWords.append(i)
```
2015/12/28
[ "https://Stackoverflow.com/questions/34492422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5585501/" ]
First of all, your question title and the "real question" that pops up in the comments are quite different. To find all unique words in all files in subdirectories of a given directory, you should use the handy [os.walk method](https://docs.python.org/2/library/os.html#os.walk "os.walk method"), then open all files and collect the words in a [set](https://docs.python.org/2/library/sets.html#set-objects).

```
import os

allwords = set()
for path, dirs, filenames in os.walk(folder_path):
    print("In path {}: subdirs: {}, files: {}".format(path, dirs, filenames))
    for filename in filenames:
        with open(os.path.join(path, filename)) as fileobj:
            for line in fileobj:
                print(line)
                allwords.update(line.split())

print(allwords)
```
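From there, producing the ±1 vector the question asks for is just a membership test per common word; a small sketch, assuming the `allwords` set from above and the common-words file path given in the question:

```python
with open("c:/python34/1000CommonWords.txt", encoding="utf-8") as f:
    common_words = f.read().split()

# Exactly one +1/-1 entry per common word, so the vector has length 1000
vector = [1 if word in allwords else -1 for word in common_words]
print(len(vector))
```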
Use a function:

```
def unique_words(filename):
    # most of the code in your first example block
    return uniqueWords

uniques = unique_words('...CommonWords.txt')

for i in t:
    if i not in uniques:
        ...
```
34,492,422
I have a file which contains 1000 words and a folder (the name of this folder is "train") that contains 7 other subfolders, and each subfolder contains 8 files. I have to check if these 1000 words are in the train folder. If a word exists, it appends +1 to the list; otherwise it should append -1 to the list. In the end it should produce a vector that contains 1000 values of +1 or -1.

The code I wrote produces more than 1000 elements and printing doesn't stop. This is because I have to find the unique words first and check these 1000 words against them, but I don't know how to insert the code for finding unique words into the code I have already written.

Hint: I extracted these 1000 words from the train folder. They are the 1000 most common words of this folder. (The language is Persian.)

```
import os

def c (folder_path):
    t=[]
    k=[]
    uniqueWords = []
    for folder in sorted(os.listdir(folder_path)):
        folder_path1 = os.path.join(folder_path, folder)
        for folder1 in sorted(os.listdir(folder_path1)):
            file=os.path.join(folder_path1, folder1)
            text = open(file,encoding="utf-8")
            text=text.read().split()
            t=text+t
    with open ("c:/python34/1000CommonWords.txt", encoding="utf-8") as f1:
        f1=f1.readlines()
        for j in f1:
            if j in uniqueWords:
                k.append (+1)
            else:
                k.append (-1)
    print (k)
```

The code for finding unique words:

```
for i in t:
    if not i in uniqueWords:
        uniqueWords.append(i)
```
2015/12/28
[ "https://Stackoverflow.com/questions/34492422", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5585501/" ]
First of all, your question title and the "real question" that pops up in the comments are quite different. To find all unique words in all files in subdirectories of a given directory, you should use the handy [os.walk method](https://docs.python.org/2/library/os.html#os.walk "os.walk method"), then open all files and collect the words in a [set](https://docs.python.org/2/library/sets.html#set-objects). ```
import os

allwords = set()
for path, dirs, filenames in os.walk(folder_path):
    print("In path {}: subdirs: {}, files: {}".format(path, dirs, filenames))
    for filename in filenames:
        # specify the encoding explicitly, since the files contain Persian text
        with open(os.path.join(path, filename), encoding="utf-8") as fileobj:
            for line in fileobj:
                print(line)
                allwords.update(line.split())
print(allwords)
```
I found the answer. ```
import os

def classification(folder_path):
    t = []  # collects all words from the train folder
    k = []
    uniqueWords = []
    for folder in sorted(os.listdir(folder_path)):
        folder_path1 = os.path.join(folder_path, folder)
        for folder1 in sorted(os.listdir(folder_path1)):
            file = os.path.join(folder_path1, folder1)
            text = open(file, encoding="utf-8")
            text = text.read().split()
            t = text + t
    for i in t:
        if i not in uniqueWords:
            uniqueWords.append(i)
    # return uniqueWords
    with open("c:/python34/1000CommonWords.txt", encoding="utf-8") as f1:
        f1 = f1.read().split()
    # extracts the +1/-1 vector
    for j in f1:
        if j in uniqueWords:
            k.append(+1)
        else:
            k.append(-1)
    print(k)
```
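A small performance note on the code above (my addition, not part of the original answer): building `uniqueWords` as a list makes every `in` test a linear scan. A set produces the same vector with constant-time membership tests, a sketch:

```
# variant of the accepted code: deduplicate with a set instead of a list
uniqueWords = set(t)

with open("c:/python34/1000CommonWords.txt", encoding="utf-8") as f1:
    common = f1.read().split()

k = [+1 if j in uniqueWords else -1 for j in common]
print(k)
```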
64,048,758
I'm trying to use Celery with SQS as broker. In order to use the SQS from my container I need to assume a role and for that I'm using STS. My code looks like this: ```py role_info = { 'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution', 'RoleSessionName': 'roleExecution' } sts_client = boto3.client('sts', region_name='eu-central-1') credentials = sts_client.assume_role(**role_info) aws_access_key_id = credentials["Credentials"]['AccessKeyId'] aws_secret_access_key = credentials["Credentials"]['SecretAccessKey'] aws_session_token = credentials["Credentials"]["SessionToken"] os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1' os.environ["AWS_SESSION_TOKEN"] = aws_session_token broker = "sqs://" backend = 'redis://redis-service:6379/0' celery = Celery('tasks', broker=broker, backend=backend) celery.conf["task_default_queue"] = 'my-queue' celery.conf["broker_transport_options"] = { 'region': 'eu-central-1', 'predefined_queues': { 'my-queue': { 'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue' } } } ``` In the same file I have the following task: ```py @celery.task(name='my-queue.my_task') def my_task(content) -> int: print("hello") return 0 ``` When I execute the following code I get an error: ```py [2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',) Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel return self._avail_channels.pop() IndexError: pop from empty list During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start self.blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start return self.obj.start() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start c.connection = c.connect() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect conn = self.connection_for_read(heartbeat=self.amqheartbeat) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read self.app.connection_for_read(heartbeat=heartbeat)) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected callback=maybe_shutdown, File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection callback, timeout=timeout) File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time return fun(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect return self.connection File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection self._connection = self._establish_connection() File 
"/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection conn = self.transport.establish_connection() File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection self._avail_channels.append(self.create_channel(self)) File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel channel = self.Channel(connection) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__ self._update_queue_cache(self.queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied. ``` If I use `boto3` **directly** without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu try to list queues when I specify the `predefined_queues` configuration, tha is used to avoid these behavior (from docs): > > If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined\_queue\_urls setting > > > [Source here](https://docs.celeryproject.org/en/stable/getting-started/brokers/sqs.html#predefined-queues) Anyone know what happens? How I should modify my code in order to make it work?. Seems that Celery is not using the credentials at all. The versions I'm using: ```py celery==4.4.7 boto3==1.14.54 kombu==4.5.0 ``` Thanks! PS: I created and [issue](https://github.com/celery/kombu/issues/1255) in Github to track if this can be a library error or not...
2020/09/24
[ "https://Stackoverflow.com/questions/64048758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6090859/" ]
Ubuntu 20.04, Ruby 2.7.1. Switching from `rvm` to `rbenv` fixed this error.
I had the same problem. In this environment: * ubuntu 20.04 LTS * ruby 2.7.0 * gem 3.1.2 * bundler 2.1.4 the steps described here <https://mrtan.me/post/34.html/> worked for me: ```
sudo apt-get install build-essential patch ruby-dev zlib1g-dev liblzma-dev
gem install nokogiri
bundle
```
64,048,758
I'm trying to use Celery with SQS as broker. In order to use the SQS from my container I need to assume a role and for that I'm using STS. My code looks like this: ```py role_info = { 'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution', 'RoleSessionName': 'roleExecution' } sts_client = boto3.client('sts', region_name='eu-central-1') credentials = sts_client.assume_role(**role_info) aws_access_key_id = credentials["Credentials"]['AccessKeyId'] aws_secret_access_key = credentials["Credentials"]['SecretAccessKey'] aws_session_token = credentials["Credentials"]["SessionToken"] os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1' os.environ["AWS_SESSION_TOKEN"] = aws_session_token broker = "sqs://" backend = 'redis://redis-service:6379/0' celery = Celery('tasks', broker=broker, backend=backend) celery.conf["task_default_queue"] = 'my-queue' celery.conf["broker_transport_options"] = { 'region': 'eu-central-1', 'predefined_queues': { 'my-queue': { 'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue' } } } ``` In the same file I have the following task: ```py @celery.task(name='my-queue.my_task') def my_task(content) -> int: print("hello") return 0 ``` When I execute the following code I get an error: ```py [2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',) Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel return self._avail_channels.pop() IndexError: pop from empty list During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start self.blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start return self.obj.start() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start c.connection = c.connect() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect conn = self.connection_for_read(heartbeat=self.amqheartbeat) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read self.app.connection_for_read(heartbeat=heartbeat)) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected callback=maybe_shutdown, File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection callback, timeout=timeout) File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time return fun(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect return self.connection File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection self._connection = self._establish_connection() File 
"/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection conn = self.transport.establish_connection() File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection self._avail_channels.append(self.create_channel(self)) File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel channel = self.Channel(connection) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__ self._update_queue_cache(self.queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied. ``` If I use `boto3` **directly** without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu try to list queues when I specify the `predefined_queues` configuration, tha is used to avoid these behavior (from docs): > > If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined\_queue\_urls setting > > > [Source here](https://docs.celeryproject.org/en/stable/getting-started/brokers/sqs.html#predefined-queues) Anyone know what happens? How I should modify my code in order to make it work?. Seems that Celery is not using the credentials at all. The versions I'm using: ```py celery==4.4.7 boto3==1.14.54 kombu==4.5.0 ``` Thanks! PS: I created and [issue](https://github.com/celery/kombu/issues/1255) in Github to track if this can be a library error or not...
2020/09/24
[ "https://Stackoverflow.com/questions/64048758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6090859/" ]
I reinstalled Ubuntu 20.04 from scratch, along with Ruby and everything else, and it worked. I don't know what went wrong, and I don't know whether any of the solutions below would have worked.
I had the same problem. In this environment: * ubuntu 20.04 LTS * ruby 2.7.0 * gem 3.1.2 * bundler 2.1.4 the steps described here <https://mrtan.me/post/34.html/> worked for me: ```
sudo apt-get install build-essential patch ruby-dev zlib1g-dev liblzma-dev
gem install nokogiri
bundle
```
64,048,758
I'm trying to use Celery with SQS as broker. In order to use the SQS from my container I need to assume a role and for that I'm using STS. My code looks like this: ```py role_info = { 'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution', 'RoleSessionName': 'roleExecution' } sts_client = boto3.client('sts', region_name='eu-central-1') credentials = sts_client.assume_role(**role_info) aws_access_key_id = credentials["Credentials"]['AccessKeyId'] aws_secret_access_key = credentials["Credentials"]['SecretAccessKey'] aws_session_token = credentials["Credentials"]["SessionToken"] os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1' os.environ["AWS_SESSION_TOKEN"] = aws_session_token broker = "sqs://" backend = 'redis://redis-service:6379/0' celery = Celery('tasks', broker=broker, backend=backend) celery.conf["task_default_queue"] = 'my-queue' celery.conf["broker_transport_options"] = { 'region': 'eu-central-1', 'predefined_queues': { 'my-queue': { 'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue' } } } ``` In the same file I have the following task: ```py @celery.task(name='my-queue.my_task') def my_task(content) -> int: print("hello") return 0 ``` When I execute the following code I get an error: ```py [2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',) Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel return self._avail_channels.pop() IndexError: pop from empty list During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start self.blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start return self.obj.start() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start c.connection = c.connect() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect conn = self.connection_for_read(heartbeat=self.amqheartbeat) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read self.app.connection_for_read(heartbeat=heartbeat)) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected callback=maybe_shutdown, File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection callback, timeout=timeout) File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time return fun(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect return self.connection File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection self._connection = self._establish_connection() File 
"/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection conn = self.transport.establish_connection() File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection self._avail_channels.append(self.create_channel(self)) File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel channel = self.Channel(connection) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__ self._update_queue_cache(self.queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied. ``` If I use `boto3` **directly** without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu try to list queues when I specify the `predefined_queues` configuration, tha is used to avoid these behavior (from docs): > > If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined\_queue\_urls setting > > > [Source here](https://docs.celeryproject.org/en/stable/getting-started/brokers/sqs.html#predefined-queues) Anyone know what happens? How I should modify my code in order to make it work?. Seems that Celery is not using the credentials at all. The versions I'm using: ```py celery==4.4.7 boto3==1.14.54 kombu==4.5.0 ``` Thanks! PS: I created and [issue](https://github.com/celery/kombu/issues/1255) in Github to track if this can be a library error or not...
2020/09/24
[ "https://Stackoverflow.com/questions/64048758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6090859/" ]
You have to do a full compile of Ruby 2.7.x in order to get nokogiri 1.10.10 to install on Ubuntu 20.04. For a clean install run: ``` rvm install 2.7 --disable-binary ``` To reinstall your existing ruby version use: ``` rvm reinstall 2.7 --disable-binary ```
I had the same problem. In this environment: * ubuntu 20.04 LTS * ruby 2.7.0 * gem 3.1.2 * bundler 2.1.4 the steps described here <https://mrtan.me/post/34.html/> worked for me: ```
sudo apt-get install build-essential patch ruby-dev zlib1g-dev liblzma-dev
gem install nokogiri
bundle
```
64,048,758
I'm trying to use Celery with SQS as broker. In order to use the SQS from my container I need to assume a role and for that I'm using STS. My code looks like this: ```py role_info = { 'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution', 'RoleSessionName': 'roleExecution' } sts_client = boto3.client('sts', region_name='eu-central-1') credentials = sts_client.assume_role(**role_info) aws_access_key_id = credentials["Credentials"]['AccessKeyId'] aws_secret_access_key = credentials["Credentials"]['SecretAccessKey'] aws_session_token = credentials["Credentials"]["SessionToken"] os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1' os.environ["AWS_SESSION_TOKEN"] = aws_session_token broker = "sqs://" backend = 'redis://redis-service:6379/0' celery = Celery('tasks', broker=broker, backend=backend) celery.conf["task_default_queue"] = 'my-queue' celery.conf["broker_transport_options"] = { 'region': 'eu-central-1', 'predefined_queues': { 'my-queue': { 'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue' } } } ``` In the same file I have the following task: ```py @celery.task(name='my-queue.my_task') def my_task(content) -> int: print("hello") return 0 ``` When I execute the following code I get an error: ```py [2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',) Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel return self._avail_channels.pop() IndexError: pop from empty list During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start self.blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start return self.obj.start() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start c.connection = c.connect() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect conn = self.connection_for_read(heartbeat=self.amqheartbeat) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read self.app.connection_for_read(heartbeat=heartbeat)) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected callback=maybe_shutdown, File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection callback, timeout=timeout) File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time return fun(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect return self.connection File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection self._connection = self._establish_connection() File 
"/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection conn = self.transport.establish_connection() File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection self._avail_channels.append(self.create_channel(self)) File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel channel = self.Channel(connection) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__ self._update_queue_cache(self.queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied. ``` If I use `boto3` **directly** without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu try to list queues when I specify the `predefined_queues` configuration, tha is used to avoid these behavior (from docs): > > If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined\_queue\_urls setting > > > [Source here](https://docs.celeryproject.org/en/stable/getting-started/brokers/sqs.html#predefined-queues) Anyone know what happens? How I should modify my code in order to make it work?. Seems that Celery is not using the credentials at all. The versions I'm using: ```py celery==4.4.7 boto3==1.14.54 kombu==4.5.0 ``` Thanks! PS: I created and [issue](https://github.com/celery/kombu/issues/1255) in Github to track if this can be a library error or not...
2020/09/24
[ "https://Stackoverflow.com/questions/64048758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6090859/" ]
Ubuntu 20.04, Ruby 2.7.1. Switching from `rvm` to `rbenv` fixed this error.
Can confirm. Going to `rbenv` worked for me as well.
64,048,758
I'm trying to use Celery with SQS as broker. In order to use the SQS from my container I need to assume a role and for that I'm using STS. My code looks like this: ```py role_info = { 'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution', 'RoleSessionName': 'roleExecution' } sts_client = boto3.client('sts', region_name='eu-central-1') credentials = sts_client.assume_role(**role_info) aws_access_key_id = credentials["Credentials"]['AccessKeyId'] aws_secret_access_key = credentials["Credentials"]['SecretAccessKey'] aws_session_token = credentials["Credentials"]["SessionToken"] os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1' os.environ["AWS_SESSION_TOKEN"] = aws_session_token broker = "sqs://" backend = 'redis://redis-service:6379/0' celery = Celery('tasks', broker=broker, backend=backend) celery.conf["task_default_queue"] = 'my-queue' celery.conf["broker_transport_options"] = { 'region': 'eu-central-1', 'predefined_queues': { 'my-queue': { 'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue' } } } ``` In the same file I have the following task: ```py @celery.task(name='my-queue.my_task') def my_task(content) -> int: print("hello") return 0 ``` When I execute the following code I get an error: ```py [2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',) Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel return self._avail_channels.pop() IndexError: pop from empty list During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start self.blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start return self.obj.start() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start c.connection = c.connect() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect conn = self.connection_for_read(heartbeat=self.amqheartbeat) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read self.app.connection_for_read(heartbeat=heartbeat)) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected callback=maybe_shutdown, File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection callback, timeout=timeout) File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time return fun(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect return self.connection File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection self._connection = self._establish_connection() File 
"/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection conn = self.transport.establish_connection() File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection self._avail_channels.append(self.create_channel(self)) File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel channel = self.Channel(connection) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__ self._update_queue_cache(self.queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied. ``` If I use `boto3` **directly** without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu try to list queues when I specify the `predefined_queues` configuration, tha is used to avoid these behavior (from docs): > > If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined\_queue\_urls setting > > > [Source here](https://docs.celeryproject.org/en/stable/getting-started/brokers/sqs.html#predefined-queues) Anyone know what happens? How I should modify my code in order to make it work?. Seems that Celery is not using the credentials at all. The versions I'm using: ```py celery==4.4.7 boto3==1.14.54 kombu==4.5.0 ``` Thanks! PS: I created and [issue](https://github.com/celery/kombu/issues/1255) in Github to track if this can be a library error or not...
2020/09/24
[ "https://Stackoverflow.com/questions/64048758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6090859/" ]
I reinstalled Ubuntu 20.04 from scratch, along with Ruby and everything else, and it worked. I don't know what went wrong, and I don't know whether any of the solutions below would have worked.
Can confirm. Going to `rbenv` worked for me as well.
64,048,758
I'm trying to use Celery with SQS as broker. In order to use the SQS from my container I need to assume a role and for that I'm using STS. My code looks like this: ```py role_info = { 'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution', 'RoleSessionName': 'roleExecution' } sts_client = boto3.client('sts', region_name='eu-central-1') credentials = sts_client.assume_role(**role_info) aws_access_key_id = credentials["Credentials"]['AccessKeyId'] aws_secret_access_key = credentials["Credentials"]['SecretAccessKey'] aws_session_token = credentials["Credentials"]["SessionToken"] os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1' os.environ["AWS_SESSION_TOKEN"] = aws_session_token broker = "sqs://" backend = 'redis://redis-service:6379/0' celery = Celery('tasks', broker=broker, backend=backend) celery.conf["task_default_queue"] = 'my-queue' celery.conf["broker_transport_options"] = { 'region': 'eu-central-1', 'predefined_queues': { 'my-queue': { 'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue' } } } ``` In the same file I have the following task: ```py @celery.task(name='my-queue.my_task') def my_task(content) -> int: print("hello") return 0 ``` When I execute the following code I get an error: ```py [2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',) Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel return self._avail_channels.pop() IndexError: pop from empty list During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start self.blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start return self.obj.start() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start blueprint.start(self) File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start step.start(parent) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start c.connection = c.connect() File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect conn = self.connection_for_read(heartbeat=self.amqheartbeat) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read self.app.connection_for_read(heartbeat=heartbeat)) File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected callback=maybe_shutdown, File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection callback, timeout=timeout) File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time return fun(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect return self.connection File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection self._connection = self._establish_connection() File 
"/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection conn = self.transport.establish_connection() File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection self._avail_channels.append(self.create_channel(self)) File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel channel = self.Channel(connection) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__ self._update_queue_cache(self.queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call return self._make_api_call(operation_name, kwargs) File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied. ``` If I use `boto3` **directly** without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu try to list queues when I specify the `predefined_queues` configuration, tha is used to avoid these behavior (from docs): > > If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined\_queue\_urls setting > > > [Source here](https://docs.celeryproject.org/en/stable/getting-started/brokers/sqs.html#predefined-queues) Anyone know what happens? How I should modify my code in order to make it work?. Seems that Celery is not using the credentials at all. The versions I'm using: ```py celery==4.4.7 boto3==1.14.54 kombu==4.5.0 ``` Thanks! PS: I created and [issue](https://github.com/celery/kombu/issues/1255) in Github to track if this can be a library error or not...
2020/09/24
[ "https://Stackoverflow.com/questions/64048758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6090859/" ]
You have to do a full compile of Ruby 2.7.x in order to get nokogiri 1.10.10 to install on Ubuntu 20.04. For a clean install run: ``` rvm install 2.7 --disable-binary ``` To reinstall your existing ruby version use: ``` rvm reinstall 2.7 --disable-binary ```
Can confirm. Going to `rbenv` worked for me as well.
19,750,294
I've been learning Python recently, and I ran into a problem when writing some simple code in the Python shell (started from the terminal in Linux) versus in a file. In the Python shell: ```
>>> def firstn(n):
...     num, nums = 0, []
...     while num < n:
...         nums.append(nums)
...         num += 1
...     return nums
... sum_of_first_n = sum(firstn(1000000))
  File "<stdin>", line 7
    sum_of_first_n = sum(firstn(1000000))
    ^
SyntaxError: invalid syntax
``` If I use `print(sum(firstn(1000000)))` instead, the `print` line raises a SyntaxError too. But when I put the code into a file and execute it, it is totally OK, no SyntaxError, and I don't know why. Can anyone explain this? PS: The code is from <https://wiki.python.org/moin/Generators>
2013/11/03
[ "https://Stackoverflow.com/questions/19750294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1528712/" ]
In interactive mode, put a blank line after the function body to end the block. ```
>>> def firstn(n):
...     num, nums = 0, []
...     while num < n:
...         nums.append(nums)
...         num += 1
...     return nums
...
>>> sum_of_first_n = sum(firstn(1000000))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'list'
``` BTW, the code makes a cyclic reference (it appends the list to itself) in the following line: ```
nums.append(nums)
``` --- ```
>>> def firstn(n):
...     num, nums = 0, []
...     while num < n:
...         nums.append(num)  # <--
...         num += 1
...     return nums
...
>>> sum_of_first_n = sum(firstn(1000000))
>>> sum_of_first_n
499999500000L
```
It worked fine for me when I copied the code directly from the Python docs. When I tried adding a space before the last line (`sum_of_first_n`) I received that same syntax error message. Most likely it is a copy-paste error. Try copying the code into a text editor to check for stray spaces, then paste it into the terminal.
70,202,022
I have the user select a set of weekday strings, which then get stored in a list. My goal is to replace each user-selected string with its corresponding [weekday number](https://docs.python.org/3/library/datetime.html#datetime.date.weekday). My code works, but it seems unnecessarily bulky. Is there a more elegant way? ```py
selected_weekdays = ["MON", "WED", "THU", "SAT"]

for i in range(len(selected_weekdays)):
    if selected_weekdays[i] == "MON":
        selected_weekdays[i] = 0
    elif selected_weekdays[i] == "TUE":
        selected_weekdays[i] = 1
    elif selected_weekdays[i] == "WED":
        selected_weekdays[i] = 2
    elif selected_weekdays[i] == "THU":
        selected_weekdays[i] = 3
    elif selected_weekdays[i] == "FRI":
        selected_weekdays[i] = 4
    elif selected_weekdays[i] == "SAT":
        selected_weekdays[i] = 5
    elif selected_weekdays[i] == "SUN":
        selected_weekdays[i] = 6
    else:
        pass

print(selected_weekdays)
``` Correct output: ```py
[0, 2, 3, 5]
```
2021/12/02
[ "https://Stackoverflow.com/questions/70202022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14819955/" ]
Using a dictionary with a list comprehension: ```py
days = {'MON': 0, 'TUE': 1, 'WED': 2, 'THU': 3, 'FRI': 4, 'SAT': 5, 'SUN': 6}
selected_weekdays = ["MON", "WED", "THU", "SAT"]

output = [days[x] for x in selected_weekdays]
print(output)  # [0, 2, 3, 5]
```
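If you'd rather not hard-code the table, it can be generated from the standard library (a sketch, not part of the original answer; `calendar.day_name` starts on Monday, matching `date.weekday()`, but it is locale-aware, so this assumes an English locale):

```py
import calendar

# build {'MON': 0, 'TUE': 1, ..., 'SUN': 6} from the stdlib day names
days = {name[:3].upper(): i for i, name in enumerate(calendar.day_name)}
```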
You could use a dictionary: ```py
day_number = {"MON": 0, "TUE": 1, "WED": 2, "THU": 3, "FRI": 4, "SAT": 5, "SUN": 6}
selected_weekdays = ["MON", "WED", "THU", "SAT"]

for i in range(len(selected_weekdays)):
    selected_weekdays[i] = day_number[selected_weekdays[i]]

print(selected_weekdays)
``` Instead of the for-loop, you could use ```py
selected_weekdays = [day_number[d] for d in selected_weekdays]
```
70,202,022
I have the user select a set of weekday strings, which then get stored in a list. My goal is to replace each user-selected string with its corresponding [weekday number](https://docs.python.org/3/library/datetime.html#datetime.date.weekday). My code works, but it seems unnecessarily bulky. Is there a more elegant way? ```py
selected_weekdays = ["MON", "WED", "THU", "SAT"]

for i in range(len(selected_weekdays)):
    if selected_weekdays[i] == "MON":
        selected_weekdays[i] = 0
    elif selected_weekdays[i] == "TUE":
        selected_weekdays[i] = 1
    elif selected_weekdays[i] == "WED":
        selected_weekdays[i] = 2
    elif selected_weekdays[i] == "THU":
        selected_weekdays[i] = 3
    elif selected_weekdays[i] == "FRI":
        selected_weekdays[i] = 4
    elif selected_weekdays[i] == "SAT":
        selected_weekdays[i] = 5
    elif selected_weekdays[i] == "SUN":
        selected_weekdays[i] = 6
    else:
        pass

print(selected_weekdays)
``` Correct output: ```py
[0, 2, 3, 5]
```
2021/12/02
[ "https://Stackoverflow.com/questions/70202022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14819955/" ]
You can avoid some extra work by generating your dictionary mapping. It's also better to avoid modifying a structure you're iterating over, for a variety of reasons; see [Modifying list while iterating](https://stackoverflow.com/questions/1637807/modifying-list-while-iterating).

The dictionary `.get()` method lets you choose a default for values that aren't in the mapping; you may want that default to be one of:

* the unmapped weekday string (this is what your code currently does, so I kept the results the same)
* an exception (a `KeyError` from directly indexing with `[]`; you may want this to highlight errors)
* `None` (essentially throws out bad values and is easy to `filter` in a next step)

```py
# dictionary comprehension
weekday_mapping = {day: index for index, day in enumerate((
    "MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"))}

selected_weekdays = ["MON", "WED", "THU", "SAT"]

results = []
for weekday in selected_weekdays:
    results.append(weekday_mapping.get(weekday, weekday))
```

```py
>>> results
[0, 2, 3, 5]
>>> selected_weekdays = ["MON", "WED", "THU", "FakeDay", "SAT"]
...
>>> results
[0, 2, 3, 'FakeDay', 5]
```
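For the `None` option above, a short sketch (my addition, not part of the original answer) of dropping bad values afterwards:

```py
# map unknown day strings to None (the implicit .get() default), then filter
results = [weekday_mapping.get(w) for w in ["MON", "FakeDay", "SAT"]]
clean = [r for r in results if r is not None]  # [0, 5]
```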
You could use a dictionary: ```py
day_number = {"MON": 0, "TUE": 1, "WED": 2, "THU": 3, "FRI": 4, "SAT": 5, "SUN": 6}
selected_weekdays = ["MON", "WED", "THU", "SAT"]

for i in range(len(selected_weekdays)):
    selected_weekdays[i] = day_number[selected_weekdays[i]]

print(selected_weekdays)
``` Instead of the for-loop, you could use ```py
selected_weekdays = [day_number[d] for d in selected_weekdays]
```
70,202,022
I have the user select a set of weekday strings, which then get stored in a list. My goal is to replace each user-selected string with its corresponding [weekday number](https://docs.python.org/3/library/datetime.html#datetime.date.weekday). My code works, but it seems unnecessarily bulky. Is there a more elegant way? ```py
selected_weekdays = ["MON", "WED", "THU", "SAT"]

for i in range(len(selected_weekdays)):
    if selected_weekdays[i] == "MON":
        selected_weekdays[i] = 0
    elif selected_weekdays[i] == "TUE":
        selected_weekdays[i] = 1
    elif selected_weekdays[i] == "WED":
        selected_weekdays[i] = 2
    elif selected_weekdays[i] == "THU":
        selected_weekdays[i] = 3
    elif selected_weekdays[i] == "FRI":
        selected_weekdays[i] = 4
    elif selected_weekdays[i] == "SAT":
        selected_weekdays[i] = 5
    elif selected_weekdays[i] == "SUN":
        selected_weekdays[i] = 6
    else:
        pass

print(selected_weekdays)
``` Correct output: ```py
[0, 2, 3, 5]
```
2021/12/02
[ "https://Stackoverflow.com/questions/70202022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14819955/" ]
Using a dictionary with a list comprehension: ```py
days = {'MON': 0, 'TUE': 1, 'WED': 2, 'THU': 3, 'FRI': 4, 'SAT': 5, 'SUN': 6}
selected_weekdays = ["MON", "WED", "THU", "SAT"]

output = [days[x] for x in selected_weekdays]
print(output)  # [0, 2, 3, 5]
```
One method is to use a list, and have the values be the indices of the list: ```
weekdays_list = ["MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"]
selected_weekdays = ["MON", "WED", "THU", "SAT"]

selected_weekdays_numbered = [weekdays_list.index(i) for i in selected_weekdays]
print(selected_weekdays_numbered)
``` Output: ```
[0, 2, 3, 5]
```
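A caveat worth noting (my addition, not from the original answer): `list.index` rescans the list on every call and raises a `ValueError` for strings that aren't in it, so a dictionary is usually safer for repeated lookups or untrusted input. A guarded sketch:

```
# skips unknown strings instead of raising ValueError
selected_weekdays_numbered = [weekdays_list.index(i)
                              for i in selected_weekdays
                              if i in weekdays_list]
```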
70,202,022
I have the user select a set of weekday strings, which then get stored in a list. My goal is to replace each user-selected string with its corresponding [weekday number](https://docs.python.org/3/library/datetime.html#datetime.date.weekday). My code works, but it seems unnecessarily bulky. Is there a more elegant way? ```py
selected_weekdays = ["MON", "WED", "THU", "SAT"]

for i in range(len(selected_weekdays)):
    if selected_weekdays[i] == "MON":
        selected_weekdays[i] = 0
    elif selected_weekdays[i] == "TUE":
        selected_weekdays[i] = 1
    elif selected_weekdays[i] == "WED":
        selected_weekdays[i] = 2
    elif selected_weekdays[i] == "THU":
        selected_weekdays[i] = 3
    elif selected_weekdays[i] == "FRI":
        selected_weekdays[i] = 4
    elif selected_weekdays[i] == "SAT":
        selected_weekdays[i] = 5
    elif selected_weekdays[i] == "SUN":
        selected_weekdays[i] = 6
    else:
        pass

print(selected_weekdays)
``` Correct output: ```py
[0, 2, 3, 5]
```
2021/12/02
[ "https://Stackoverflow.com/questions/70202022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14819955/" ]
You can avoid some extra work by generating your dictionary mapping. It's also better to avoid modifying a structure you're iterating over, for a variety of reasons; see [Modifying list while iterating](https://stackoverflow.com/questions/1637807/modifying-list-while-iterating).

The dictionary `.get()` method lets you choose a default for values that aren't in the mapping; you may want that default to be one of:

* the unmapped weekday string (this is what your code currently does, so I kept the results the same)
* an exception (a `KeyError` from directly indexing with `[]`; you may want this to highlight errors)
* `None` (essentially throws out bad values and is easy to `filter` in a next step)

```py
# dictionary comprehension
weekday_mapping = {day: index for index, day in enumerate((
    "MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"))}

selected_weekdays = ["MON", "WED", "THU", "SAT"]

results = []
for weekday in selected_weekdays:
    results.append(weekday_mapping.get(weekday, weekday))
```

```py
>>> results
[0, 2, 3, 5]
>>> selected_weekdays = ["MON", "WED", "THU", "FakeDay", "SAT"]
...
>>> results
[0, 2, 3, 'FakeDay', 5]
```
Using a dictionary with a list comprehension: ```py
days = {'MON': 0, 'TUE': 1, 'WED': 2, 'THU': 3, 'FRI': 4, 'SAT': 5, 'SUN': 6}
selected_weekdays = ["MON", "WED", "THU", "SAT"]

output = [days[x] for x in selected_weekdays]
print(output)  # [0, 2, 3, 5]
```
70,202,022
I have the user select a set of weekday strings, which then get stored in a list. My goal is to replace each user-selected string with its corresponding [weekday number](https://docs.python.org/3/library/datetime.html#datetime.date.weekday). My code works, but it seems unnecessarily bulky. Is there a more elegant way? ```py
selected_weekdays = ["MON", "WED", "THU", "SAT"]

for i in range(len(selected_weekdays)):
    if selected_weekdays[i] == "MON":
        selected_weekdays[i] = 0
    elif selected_weekdays[i] == "TUE":
        selected_weekdays[i] = 1
    elif selected_weekdays[i] == "WED":
        selected_weekdays[i] = 2
    elif selected_weekdays[i] == "THU":
        selected_weekdays[i] = 3
    elif selected_weekdays[i] == "FRI":
        selected_weekdays[i] = 4
    elif selected_weekdays[i] == "SAT":
        selected_weekdays[i] = 5
    elif selected_weekdays[i] == "SUN":
        selected_weekdays[i] = 6
    else:
        pass

print(selected_weekdays)
``` Correct output: ```py
[0, 2, 3, 5]
```
2021/12/02
[ "https://Stackoverflow.com/questions/70202022", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14819955/" ]
You can avoid some extra work by generating your dictionary mapping. It's also better to avoid modifying a structure you're iterating over, for a variety of reasons; see [Modifying list while iterating](https://stackoverflow.com/questions/1637807/modifying-list-while-iterating).

The dictionary `.get()` method lets you choose a default for values that aren't in the mapping; you may want that default to be one of:

* the unmapped weekday string (this is what your code currently does, so I kept the results the same)
* an exception (a `KeyError` from directly indexing with `[]`; you may want this to highlight errors)
* `None` (essentially throws out bad values and is easy to `filter` in a next step)

```py
# dictionary comprehension
weekday_mapping = {day: index for index, day in enumerate((
    "MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"))}

selected_weekdays = ["MON", "WED", "THU", "SAT"]

results = []
for weekday in selected_weekdays:
    results.append(weekday_mapping.get(weekday, weekday))
```

```py
>>> results
[0, 2, 3, 5]
>>> selected_weekdays = ["MON", "WED", "THU", "FakeDay", "SAT"]
...
>>> results
[0, 2, 3, 'FakeDay', 5]
```
One method is to use a list, and have the values be the indices of the list: ```
weekdays_list = ["MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"]
selected_weekdays = ["MON", "WED", "THU", "SAT"]

selected_weekdays_numbered = [weekdays_list.index(i) for i in selected_weekdays]
print(selected_weekdays_numbered)
``` Output: ```
[0, 2, 3, 5]
```
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008` According to me, the answer should be `d@uct.ac.za`, because with a lazy (non-greedy) quantifier the number of non-whitespace characters matched should be as few as possible. But Python says: `stephen.marquard@uct.ac.za` ```
import re

hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
``` and textfile.txt is: ```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
When you say:

> Number of non-whitespace characters should be lazy greedy (as few as possible)

in your case it implies there should be either **one or zero** of them. Hence, making the non-whitespace character optional with the `?` quantifier will do the job. Try `\S?@\S+`.

**[Regex101 Demo](https://regex101.com/r/XxNNYV/1/)**
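A quick sanity check of that pattern against the sample line from the question (my addition, not part of the original answer):

```
import re

line = "From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008"
print(re.findall(r'\S?@\S+', line))  # ['d@uct.ac.za']
```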
If you just want one character before `@`, you could use: ```
'\S@\S+'
``` Non-greedy doesn't mean that the regex will try to find the shortest substring by varying the start index. It just means that if there's a substring which starts at index 0 and matches the regex, the engine will stop as soon as possible.
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008` According to me, the answer should be `d@uct.ac.za`, because with a lazy (non-greedy) quantifier the number of non-whitespace characters matched should be as few as possible. But Python says: `stephen.marquard@uct.ac.za` ```
import re

hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
``` and textfile.txt is: ```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
If you just want one character before `@`, you could use: ```
'\S@\S+'
``` Non-greedy doesn't mean that the regex will try to find the shortest substring by varying the start index. It just means that if there's a substring which starts at index 0 and matches the regex, the engine will stop as soon as possible.
I had the same question, and it hasn't really been answered. The non-greedy `?` works after the `@` but not before it; I have no explanation for that. You can try `[^ ]@[^ ]+` to get the required/expected answer.
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008` According to me, the answer should be `d@uct.ac.za`, because with a lazy (non-greedy) quantifier the number of non-whitespace characters matched should be as few as possible. But Python says: `stephen.marquard@uct.ac.za` ```
import re

hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
``` and textfile.txt is: ```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
If you just want one character before `@`, you could use: ```
'\S@\S+'
``` Non-greedy doesn't mean that the regex will try to find the shortest substring by varying the start index. It just means that if there's a substring which starts at index 0 and matches the regex, the engine will stop as soon as possible.
I think I have the answer to this one. A bit late to the party, but here it goes. Let's reference Eric's statement first: non-greedy just means that if there's a substring which starts at index 0 and matches the regex, the engine will stop as soon as possible.

So the engine starts at the earliest possible index and reads left to right. While it consumes `stephen.marquard` it hasn't reached an `@` yet, so it keeps going; nothing so far makes the match fail, so it keeps what it has read in memory. Then it reaches the `@` sign. Great. The `\S+?` has no visible effect before the `@`, but it does after it, because the engine reads from left to right: once it reaches the `u` in `@uct`, the pattern is fully satisfied, so it stops there. While reading through `stephen.marquard@` the pattern had not yet been satisfied, which is why laziness couldn't cut that part short.
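To make the walkthrough concrete, a quick demonstration (my addition, not from the original answer) that the lazy quantifier only shortens the part after the `@`:

```
import re

line = "From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008"
print(re.findall(r'\S+?@\S+', line))   # ['stephen.marquard@uct.ac.za']
print(re.findall(r'\S+?@\S+?', line))  # ['stephen.marquard@u']
```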
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008` According to me, the answer should be `d@uct.ac.za`, because with a lazy (non-greedy) quantifier the number of non-whitespace characters matched should be as few as possible. But Python says: `stephen.marquard@uct.ac.za` ```
import re

hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
``` and textfile.txt is: ```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
If you just want one character before `@`, you could use: ```
'\S@\S+'
``` Non-greedy doesn't mean that the regex will try to find the shortest substring by varying the start index. It just means that if there's a substring which starts at index 0 and matches the regex, the engine will stop as soon as possible.
It will extract `stephen.marquard@uct.ac.za`, because the `?` sign only has an effect after the `@`, not before it.
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008` According to me, the answer should be `d@uct.ac.za`, because with a lazy (non-greedy) quantifier the number of non-whitespace characters matched should be as few as possible. But Python says: `stephen.marquard@uct.ac.za` ```
import re

hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
``` and textfile.txt is: ```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
When you say:

> Number of non-whitespace characters should be lazy/non-greedy (as few as possible).

in your case it implies there should be either **one or zero** of them. Hence, making the non-whitespace character optional using the `?` quantifier will do the job. Try `\S?@\S+`. **[Regex101 Demo](https://regex101.com/r/XxNNYV/1/)**
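A minimal check of the proposed pattern, assuming the sample line from the question:

```
import re

line = "From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008"
# \S? allows at most one non-whitespace character before the @,
# giving the d@uct.ac.za match the question expected.
print(re.findall(r'\S?@\S+', line))  # ['d@uct.ac.za']
```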
I have the same question and it's not yet answered. The non-greedy `?` works after the `@` but not before it. I have no explanation for it. You can try `[^ ]@[^ ]+` for the required/expected answer.
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008`. According to me, the answer should be `d@uct.ac.za`, because the number of non-whitespace characters should be lazy/non-greedy (as few as possible). But Python says: `stephen.marquard@uct.ac.za`

```
import re
hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
```

and textfile.txt is:

```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
When you say:

> Number of non-whitespace characters should be lazy/non-greedy (as few as possible).

in your case it implies there should be either **one or zero** of them. Hence, making the non-whitespace character optional using the `?` quantifier will do the job. Try `\S?@\S+`. **[Regex101 Demo](https://regex101.com/r/XxNNYV/1/)**
I think I have the answer to this one. Bit late to the trend, but here it goes. Let's reference Eric's statement first:

> Non-greedy ... just means that if there's a substring that starts at the earliest possible index and matches the regex, the engine will stop extending it as soon as possible.

Okay fellas. So the engine starts scanning at index 0. It reads `stephen.marquard`; it still hasn't reached `@`, so it goes on, but nothing yet deems its selection wrong, so it keeps it in memory. It reaches the `@` sign. Great. The `\S+?` has no effect before the `@`, but does after it, because the engine reads from left to right. When it reaches the `u` in `@uct`, it has satisfied the requirements, so it stops there; while reading through `stephen.marquard@`, it still hadn't.
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008`. According to me, the answer should be `d@uct.ac.za`, because the number of non-whitespace characters should be lazy/non-greedy (as few as possible). But Python says: `stephen.marquard@uct.ac.za`

```
import re
hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
```

and textfile.txt is:

```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
When you say:

> Number of non-whitespace characters should be lazy/non-greedy (as few as possible).

in your case it implies there should be either **one or zero** of them. Hence, making the non-whitespace character optional using the `?` quantifier will do the job. Try `\S?@\S+`. **[Regex101 Demo](https://regex101.com/r/XxNNYV/1/)**
It will extract `stephen.marquard@uct.ac.za`, because the `?` sign works only after the `@`, not before it.
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008`. According to me, the answer should be `d@uct.ac.za`, because the number of non-whitespace characters should be lazy/non-greedy (as few as possible). But Python says: `stephen.marquard@uct.ac.za`

```
import re
hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
```

and textfile.txt is:

```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
I think I have the answer to this one. Bit late to the trend, but here it goes. Let's reference Eric's statement first:

> Non-greedy ... just means that if there's a substring that starts at the earliest possible index and matches the regex, the engine will stop extending it as soon as possible.

Okay fellas. So the engine starts scanning at index 0. It reads `stephen.marquard`; it still hasn't reached `@`, so it goes on, but nothing yet deems its selection wrong, so it keeps it in memory. It reaches the `@` sign. Great. The `\S+?` has no effect before the `@`, but does after it, because the engine reads from left to right. When it reaches the `u` in `@uct`, it has satisfied the requirements, so it stops there; while reading through `stephen.marquard@`, it still hadn't.
I have the same question and it's not yet answered. The non-greedy `?` works after the `@` but not before it. I have no explanation for it. You can try `[^ ]@[^ ]+` for the required/expected answer.
44,086,019
Let us say the given string is: `From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008`. According to me, the answer should be `d@uct.ac.za`, because the number of non-whitespace characters should be lazy/non-greedy (as few as possible). But Python says: `stephen.marquard@uct.ac.za`

```
import re
hand = open('textfile.txt')
for line in hand:
    line = line.rstrip()
    x = re.findall('\S+@\S+', line)
    if x:
        print x
```

and textfile.txt is:

```
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
```
2017/05/20
[ "https://Stackoverflow.com/questions/44086019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4565731/" ]
It will extract `stephen.marquard@uct.ac.za`, because the `?` sign works only after the `@`, not before it.
I have the same question and it's not yet answered. The non-greedy `?` works after the `@` but not before it. I have no explanation for it. You can try `[^ ]@[^ ]+` for the required/expected answer.
36,583,653
I am working on a sensor API and dashboard application using Django 1.9 and Django REST Framework. I have the usual unit tests that should run when I call `python manage.py test`. In addition, I have two different types of tests: a) tests of connectivity to upstream data sources and APIs, and b) data consistency tests. While unit tests should run in any context, a) and b) depend on specific contexts. I would like to invoke them manually, when these contexts are available (or should be tested). I don't want them to run (and fail) whenever I call the management `test` command. Any suggestions on how to create a test runner that excludes certain test folders by default but runs them when explicitly called?
2016/04/12
[ "https://Stackoverflow.com/questions/36583653", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5186864/" ]
So, I figured out my issue: unfortunately I didn't realize a "form" class would mess with my search, but it did. I had the `form_tag` inside of a "form" class; by having a form within a form, it gave me the issues. I updated my views with the below and it worked.

```
.navbar-form.navbar-left
  = form_tag search_path, :method => 'get' do
    = text_field_tag :search, params[:query], class: 'form-control'
    = submit_tag "Search", class: 'btn btn-default'
```
I'm pretty sure you need to name your route if you want to use `form_tag search_path`. You need to do the following in your `routes.rb`: ``` get '/search', to: 'search#search', as: :search ``` This way it should work.
68,193,009
Can you help me? I am trying to install ffmpeg using brew, but I can't. Error message:

```
==> Installing dependencies for ffmpeg: aom, dav1d, libpng, freetype, fontconfig, frei0r, gmp, bdw-gc, libffi, m4, libtool, libunistring, pkg-config, readline, guile, gettext, libidn2, libtasn1, nettle, p11-kit, openssl@1.1, libevent, c-ares, jemalloc, libev, nghttp2, unbound, gnutls, lame, fribidi, pcre, gdbm, mpdecimal, sqlite, xz, python@3.9, glib, libpthread-stubs, xorgproto, libxau, libxdmcp, libxcb, libx11, libxext, libxrender, lzo, pixman, cairo, gobject-introspection, graphite2, icu4c, harfbuzz, libass, libbluray, libsoxr, libvidstab, libogg, libvorbis, libvpx, opencore-amr, jpeg, libtiff, little-cms2, openjpeg, opus, rav1e, flac, libsndfile, libsamplerate, rubberband, sdl2, snappy, speex, srt, giflib, webp, leptonica, tesseract, theora, x264, x265, xvid, libsodium, zeromq and zimg
==> Installing ffmpeg dependency: aom
==> Pouring aom-3.1.1.big_sur.bottle.tar.gz
tar: Error opening archive: Failed to open '/Users/xxx/Library/Caches/Homebrew/downloads/7718423841127802e2aac1c252a9f7480c690e9513b51d509aa81812231487fb--aom-3.1.1.big_sur.bottle.tar.gz'
Error: Failure while executing; `tar --extract --no-same-owner --file /Users/xxx/Library/Caches/Homebrew/downloads/7718423841127802e2aac1c252a9f7480c690e9513b51d509aa81812231487fb--aom-3.1.1.big_sur.bottle.tar.gz --directory /private/tmp/d20210630-27979-or2mst` exited with 1. Here's the output:
tar: Error opening archive: Failed to open '/Users/xxx/Library/Caches/Homebrew/downloads/7718423841127802e2aac1c252a9f7480c690e9513b51d509aa81812231487fb--aom-3.1.1.big_sur.bottle.tar.gz'
```
2021/06/30
[ "https://Stackoverflow.com/questions/68193009", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7603876/" ]
I solved this problem by reinstalling brew.
Regarding the error:

> Error: Failure while executing; `tar --extract --no-same-owner --file ...`

Change the ownership of the cached file with `sudo` and that will do.
58,306,029
Here is my Python loop code with output:

```
peaks = range(2, 10)
for i in range(len(peaks) - 1, -1, -1):
    print(i)  # 7, 6, 5, 4, 3, 2, 1, 0
```

But I do not know how to do this with R; please suggest.
2019/10/09
[ "https://Stackoverflow.com/questions/58306029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11520406/" ]
I agree with Sosel. Nevertheless, here is working code:

```
peaks <- 2:9
for (i in rev(0:(length(peaks) - 1))) print(i)
```
In R:

```
peaks <- c(2:10)
l <- length(peaks)
result <- l - peaks
result[result >= 0]
```

returns

```
7 6 5 4 3 2 1 0
```

If you need a for loop, you can use this:

```
for (p in peaks) { ifelse(l - p >= 0, print(l - p), NA) }
```
36,305,532
I tried installing supervisor on my Linux openSUSE 13.1 with

```
sudo zypper install supervisor
```

I am getting

> Supervisor not found in package names. Trying capabilities.
> No provider of supervisor found.

I also tried installing with `pip3 install supervisor`. It raises an error like `supervisor requires Python 2.4 or later but does not work on any version of Python 3`. I am using Python 3.4. Is there any way to solve this?
2016/03/30
[ "https://Stackoverflow.com/questions/36305532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5681023/" ]
Please don't install Python 2.4 anywhere anymore, unless you absolutely have to. You probably don't. For reasons unknown, openSUSE doesn't include supervisor in the main repositories, though usually Python packages are covered pretty well. Supervisor is a daemon that supervises processes, be they Python programs or any other. It is something you should usually install system-wide (IMO) and not in a virtualenv. As always, your case may be the exception to the rule. Here are the [instructions](https://software.opensuse.org/download.html?project=devel%3Alanguages%3Apython&package=supervisor) for installing supervisor from the development repositories for openSUSE 13.1:

```
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.1/devel:languages:python.repo
zypper refresh
zypper install supervisor
```

The package will install the required dependencies along with it, including the proper Python 2 version. Python 3 support is [coming](https://github.com/Supervisor/supervisor/issues/110) [to](https://github.com/Supervisor/supervisor#supported-platforms) supervisor at some point, but for the time being consider it Python 2 only, unless you feel very, very adventurous. This on the other hand matters very little, as you usually do not need to even think about supervisor being a Python program. You use it like any other system service daemon.
If it is saying that it doesn't work on any version of Python 3, try installing Python 2.4 as well, so you will have both Python 3.4 and Python 2.4 together. This way you will be able to install it (if the problem is only the Python version, of course).
36,305,532
I tried installing supervisor on my Linux openSUSE 13.1 with

```
sudo zypper install supervisor
```

I am getting

> Supervisor not found in package names. Trying capabilities.
> No provider of supervisor found.

I also tried installing with `pip3 install supervisor`. It raises an error like `supervisor requires Python 2.4 or later but does not work on any version of Python 3`. I am using Python 3.4. Is there any way to solve this?
2016/03/30
[ "https://Stackoverflow.com/questions/36305532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5681023/" ]
If the first mentioned repo doesn't work, try the following:

```
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_Tumbleweed/devel:languages:python.repo
zypper refresh
zypper install supervisor
```
If it is saying that it doesn't work on any version of Python 3, try installing Python 2.4 as well, so you will have both Python 3.4 and Python 2.4 together. This way you will be able to install it (if the problem is only the Python version, of course).
36,305,532
I tried installing supervisor on my Linux openSUSE 13.1 with

```
sudo zypper install supervisor
```

I am getting

> Supervisor not found in package names. Trying capabilities.
> No provider of supervisor found.

I also tried installing with `pip3 install supervisor`. It raises an error like `supervisor requires Python 2.4 or later but does not work on any version of Python 3`. I am using Python 3.4. Is there any way to solve this?
2016/03/30
[ "https://Stackoverflow.com/questions/36305532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5681023/" ]
Please don't install Python 2.4 anywhere anymore, unless you absolutely have to. You probably don't. For reasons unknown, openSUSE doesn't include supervisor in the main repositories, though usually Python packages are covered pretty well. Supervisor is a daemon that supervises processes, be they Python programs or any other. It is something you should usually install system-wide (IMO) and not in a virtualenv. As always, your case may be the exception to the rule. Here are the [instructions](https://software.opensuse.org/download.html?project=devel%3Alanguages%3Apython&package=supervisor) for installing supervisor from the development repositories for openSUSE 13.1:

```
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_13.1/devel:languages:python.repo
zypper refresh
zypper install supervisor
```

The package will install the required dependencies along with it, including the proper Python 2 version. Python 3 support is [coming](https://github.com/Supervisor/supervisor/issues/110) [to](https://github.com/Supervisor/supervisor#supported-platforms) supervisor at some point, but for the time being consider it Python 2 only, unless you feel very, very adventurous. This on the other hand matters very little, as you usually do not need to even think about supervisor being a Python program. You use it like any other system service daemon.
If the first mentioned repo doesn't work, try the following:

```
zypper addrepo http://download.opensuse.org/repositories/devel:languages:python/openSUSE_Tumbleweed/devel:languages:python.repo
zypper refresh
zypper install supervisor
```
59,676,082
I have a Python class, and this class has functions as follows:

```
import numpy as np

class output_hidden:
    def feature(self, x1, y1):
        feature = np.array([x1, y1])
        return feature

    def weights(self):
        self.weights = np.random.rand(2, 1)
        return weights

object_1 = output_hidden()
value_of_feature = object_1.feature(0.05, 0.10)
```

I have supplied values to the `feature` function and this is its output:

```
>>> print(value_of_feature)
[0.05 0.1 ]
```

However, when I want to return the values stored in weights I get:

```
>>> print(object_1.weights)
<bound method output_hidden.weights of <__main__.output_hidden object at 0x7ff88b7fb860>>
```

What I want from `print(object_1.weights)` is `[[0.29707055] [0.04773819]]`. What am I missing? Why am I not getting my expected results?
2020/01/10
[ "https://Stackoverflow.com/questions/59676082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11462954/" ]
Just to illustrate the comment from @Michael Butscher and the answer from @Dan A.S., you can use `sys` and `traceback` to catch and display the event in a `wx.MessageDialog`:

```
import wx
import sys, traceback

def my_message(exception_type, exception_value, exception_traceback):
    msg = "Oh no! An error has occurred.\n\n"
    tb = traceback.format_exception(exception_type, exception_value, exception_traceback)
    for i in tb:
        msg += i
    dlg = wx.MessageDialog(None, msg, str(exception_type), wx.OK | wx.ICON_INFORMATION)
    dlg.ShowModal()
    dlg.Destroy()

sys.excepthook = my_message

class MyFrame(wx.Frame):
    def __init__(self, parent, id=wx.ID_ANY, title="", size=(360, 100)):
        super(MyFrame, self).__init__(parent, id, title, size)
        self.panel = wx.Panel(self)
        self.panel.Bind(wx.EVT_KEY_DOWN, self.OnKey)
        self.Show()

    def OnKey(self, event):
        print("alpha" + 1)  # deliberately raises TypeError to trigger the excepthook

if __name__ == "__main__":
    app = wx.App()
    frame = MyFrame(None, title="Press a key")
    app.MainLoop()
```

[![enter image description here](https://i.stack.imgur.com/EG2Rm.png)](https://i.stack.imgur.com/EG2Rm.png)
Have you tried wxPython dialogs? I think this is the easiest way to display an alert message.

```
import wx

app = wx.App()
wx.MessageBox('Your error message', 'Your error title', wx.OK | wx.ICON_ERROR)
```

More info: <https://wxpython.org/Phoenix/docs/html/wx.MessageDialog.html> <https://pythonspot.com/wxpython-dialogs/>
164,137
I have started on a personal python application that runs on the desktop. I am using wxPython as a GUI toolkit. Should there be a demand for this type of application, I would possibly like to commercialize it. I have no knowledge of deploying "real-life" Python applications, though I have used [`py2exe`](http://www.py2exe.org/) in the past with varied success. How would I obfuscate the code? Can I somehow deploy only the bytecode? An ideal solution would not jeopardize my intellectual property (source code), would not require a direct installation of Python (though I'm sure it will need to have some embedded interpreter), and would be cross-platform (Windows, Mac, and Linux). Does anyone know of any tools or resources in this area? Thanks.
2008/10/02
[ "https://Stackoverflow.com/questions/164137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1638/" ]
You can distribute the compiled Python bytecode (.pyc files) instead of the source. You can't prevent decompilation in Python (or any other language, really). You could use an obfuscator like [pyobfuscate](http://www.lysator.liu.se/~astrand/projects/pyobfuscate/) to make it more annoying for competitors to decipher your decompiled source. As Alex Martelli says [in this thread](http://mail.python.org/pipermail/python-list/2006-April/1079623.html), if you want to keep your code a secret, you shouldn't run it on other people's machines. IIRC, the last time I used [cx\_Freeze](http://python.net/crew/atuining/cx_Freeze/) it created a DLL for Windows that removed the necessity for a native Python installation. This is at least worth checking out.
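As a concrete illustration of shipping bytecode, the standard library's `compileall` module can byte-compile a whole source tree; a minimal sketch (the directory name is a hypothetical placeholder, not from the question):

```
# Byte-compile every .py file under a project directory; you would then
# distribute the resulting .pyc files instead of the sources.
# 'my_project' is an assumed path for illustration only.
import compileall

compileall.compile_dir('my_project', force=True)
```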
Wow, there are a lot of questions in there:

* It is possible to run the bytecode (.pyc) file directly from the Python interpreter, but I haven't seen any bytecode obfuscation tools available.
* I'm not aware of any "all-in-one" deployment solution, but:
  + For Windows you could use NSIS (<http://nsis.sourceforge.net/Main_Page>). The problem here is that while OSX/\*nix comes with Python, Windows doesn't. If you're not willing to build a binary with py2exe, I'm not sure what the licensing issues would be surrounding distribution of the Python runtime environment (not to mention the technical ones).
  + You could package up the OS X distribution using the "bundle" format, and \*NIX has its own conventions for installing software, typically a "make install" script.

Hope that was helpful.