Dataset columns:

- qid: int64 (469 – 74.7M)
- question: string (36 – 37.8k chars)
- date: string (10 chars)
- metadata: sequence
- response_j: string (5 – 31.5k chars)
- response_k: string (10 – 31.6k chars)
36,534,313
In Python, I have a dictionary composed of the following:

```
[
 ['FOXP2_MOUSE', 'AminoAcidSequence1'],
 ['FOXP2_RAT', 'AminoAcidSequence2'],
 ['FOXP2_DOG', 'AminoAcidSequence3']
]
```

I'm trying to remove the key `FOXP2_DOG` from the dictionary and place it in a new dictionary alone. I've tried different methods, `del` and `.remove`, to no avail.
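A minimal sketch of one way to do this with `dict.pop`, which removes a key and returns its value in one step, assuming the pairs above are first loaded into an actual `dict` (the sample shown is a list of lists, not a dictionary):

```python
# Build a dict from the list of [key, value] pairs shown above
data = dict([
    ['FOXP2_MOUSE', 'AminoAcidSequence1'],
    ['FOXP2_RAT', 'AminoAcidSequence2'],
    ['FOXP2_DOG', 'AminoAcidSequence3'],
])

# pop() removes the key from data and returns its value
new_dict = {'FOXP2_DOG': data.pop('FOXP2_DOG')}

print(data)      # FOXP2_MOUSE and FOXP2_RAT remain
print(new_dict)  # {'FOXP2_DOG': 'AminoAcidSequence3'}
```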
2016/04/10
[ "https://Stackoverflow.com/questions/36534313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6184948/" ]
It's not because it's at the end of the word, it's because you know what comes *after* the word. In your example:

```
m = re.search(r'(\b\w+\b)\s+\1', 'Cherry tree blooming will begin in in later March')
```

...the first `\b` is necessary to prevent a match starting with the `in` in `begin`. The second one is redundant because you're explicitly matching the non-word characters (`\s+`) that follow the word. Word boundaries are for situations where you don't know what the character on the other side will be, or even if there will be a character there. Where you *should* be using another one is at the end of the regex. For example:

```
m = re.search(r'(\b\w+)\s+\1\b', "Let's go to the theater")
```

Without the second `\b`, you would get a false positive for `the theater`.
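Both behaviours are easy to verify directly (a quick check, not part of the original answer):

```python
import re

# The leading \b prevents a false match on the "in" inside "begin"
m = re.search(r'(\b\w+\b)\s+\1', 'Cherry tree blooming will begin in in later March')
print(m.group())  # 'in in'

# Without a trailing \b, the repeated word "the" also matches the start of "theater"
print(re.search(r'(\b\w+)\s+\1', "Let's go to the theater"))    # a (false) match
print(re.search(r'(\b\w+)\s+\1\b', "Let's go to the theater"))  # None
```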
``` "I understand that \b can represent either the beginning or the end of a word. When would \b be required to represent the end?" ``` `\b` is never *required* to represent the end, or beginning, of a word. To answer your bigger question, it's only useful during development -- when working with natural language, you'll ultimately need to replace `\b` with something else. Why? The \b operator matches a word boundary as you've discovered. But a key concept here is, "What is a *word*?" The answer is the very narrow set `[A-Za-z0-9_]` -- *word* is not a natural language word but a computer language identifier. The \b operator exists for a formal language's parser. This means it doesn't handle common natural language situations like: The word *let's* becomes two words, 'let' & 's' if `\b' represents the boundaries of a word. Also consider titles like *Mr. & Mrs.* lose their period. Similarly, if `\b' represents the start of a word, then the appostrophe in these cases will be lost: *'twas 'bout 'cause* Hyphenated words suffer at the hand of `\b' as well, e.g *mother-in-law* (unless you want her to suffer.) Unfortunately, you can't simply augment `\b` by including it in a character set as it doesn't represent a character. You may be able to combine it with other characters via alternation in a zero-width assertion. When working with natural language, the `\b` operator is great for quickly prototyping an idea, but ultimately, probably not what you want. Ditto `\w`, but, since it represents a character, it's more easily augmented.
4,236,279
After talking with a friend of mine from Google, I'd like to implement some kind of Job/Worker model for updating my dataset. This dataset mirrors a 3rd party service's data, so, to do the update, I need to make several remote calls to their API. I think a lot of time will be spent waiting for responses from this 3rd party service. I'd like to speed things up, and make better use of my compute hours, by parallelizing these requests and keeping many of them open at once, as they wait for their individual responses.

Before I explain my specific dataset and get into the problem, I'd like to clarify what answers I'm looking for:

1. Is this a flow that would be well suited to parallelizing with MapReduce?
2. If **yes**, would this be cost-effective to run on Amazon's MapReduce module, which bills by the hour and rounds hours up when the job is complete? (I'm not sure exactly what counts as a "Job", so I don't know exactly how I'll be billed.)
3. If **no**, is there another system/pattern I should use? **And** is there a library that will help me do this in Python (on AWS, using EC2 + EBS)?
4. Are there any problems you see with the way I've designed this job flow?

OK, now onto the details:

The dataset consists of users who have favorite items and who follow other users. The aim is to be able to update each user's queue -- the list of items the user will see when they load the page, based on the favorite items of the users she follows. But, before I can crunch the data and update a user's queue, I need to make sure I have the most up-to-date data, which is where the API calls come in. There are two calls I can make:

* *Get Followed Users* -- which returns all the users being followed by the requested user, and
* *Get Favorite Items* -- which returns all the favorite items of the requested user.

After I call *get followed users* for the user being updated, I need to update the favorite items for each user being followed. Only when all of the favorites are returned for all the users being followed can I start processing the queue for that original user. This flow looks like:

![Updating UserX's Queue](https://i.stack.imgur.com/9DDQP.png)

Jobs in this flow include:

* *Start Updating Queue* for user -- kicks off the process by fetching the users followed by the user being updated, storing them, and then creating *Get Favorites* jobs for each user.
* *Get Favorites* for user -- requests, and stores, a list of favorites for the specified user, from the 3rd party service.
* *Calculate New Queue* for user -- processes a new queue, now that all the data has been fetched, and then stores the results in a cache which is used by the application layer.

So, again, my questions are:

1. Is this a flow that would be well suited to parallelizing with MapReduce? I don't know if it would let me start the process for UserX, fetch all the related data, and come back to processing UserX's queue only after that's all done.
2. If **yes**, would this be cost-effective to run on Amazon's MapReduce module, which bills by the hour and rounds hours up when the job is complete? Is there a limit on how many "threads" I can have waiting on open API requests if I use their module?
3. If **no**, is there another system/pattern I should use? **And** is there a library that will help me do this in Python (on AWS, using EC2 + EBS)?
4. Are there any problems you see with the way I've designed this job flow?

Thanks for reading, I'm looking forward to some discussion with you all.
**Edit**, in response to JimR: Thanks for a solid reply. In my reading since I wrote the original question, I've leaned away from using MapReduce. I haven't decided for sure yet how I want to build this, but I'm beginning to feel MapReduce is better for distributing/parallelizing computing load when I'm really just looking to parallelize HTTP requests. What would have been my "reduce" task, the part that takes all the fetched data and crunches it into results, isn't that computationally intensive. I'm pretty sure it's going to wind up being one big SQL query that executes for a second or two per user. So, what I'm leaning towards is:

* A non-MapReduce **Job/Worker** model, written in **Python**. A Google friend of mine turned me onto learning Python for this, since it's low overhead and scales well.
* Using Amazon EC2 as a compute layer. I think this means I also need an EBS slice to store my database.
* Possibly using Amazon's Simple Queue Service (SQS). It sounds like this third Amazon service is designed to keep track of job queues, move results from one task into the inputs of another, and gracefully handle failed tasks. It's very cheap, and may be worth using instead of a custom job-queue system.
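For readers following along, here is a minimal sketch of that SQS-backed producer/worker pattern using today's boto3 API (boto3 did not exist when this question was written; the queue name and message format are my assumptions):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="update-queue")["QueueUrl"]

# Producer: enqueue one "get favourites" job per followed user
for user in ["UserA", "UserB", "UserC"]:
    sqs.send_message(QueueUrl=queue_url, MessageBody="get_favourites:" + user)

# Worker loop: receive a job, do the work, and delete the message only after
# success, so a crashed worker's message reappears after the visibility timeout
while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        job, _, user = msg["Body"].partition(":")
        if job == "get_favourites":
            pass  # call the 3rd-party API and store the favourites here
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```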
2010/11/21
[ "https://Stackoverflow.com/questions/4236279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/311901/" ]
The work you describe is probably a good fit for either a queue, or a combination of a queue and job server. It certainly could work as a set of MapReduce steps as well.

For a job server, I recommend looking at Gearman. The documentation isn't awesome, but the presentations do a great job documenting it, and the Python module is fairly self-explanatory too. Basically, you create functions in the job server, and these functions get called by clients via an API. The functions can be called either synchronously or asynchronously. In your example, you probably want to asynchronously add the "Start update" job. That will do whatever preparatory tasks, and then asynchronously call the "Get followed users" job. That job will fetch the users, and then call the "Update followed users" job. That will submit all the "Get Favourites for UserA" and friends jobs together in one go, and synchronously wait for the result of all of them. When they have all returned, it will call the "Calculate new queue" job. This job-server-only approach will initially be a bit less robust, since ensuring that you handle errors and any down servers and persistence properly is going to be fun.

For a queue, SQS is an obvious choice. It is rock-solid, very quick to access from EC2, and cheap. And way easier to set up and maintain than other queues when you're just getting started. Basically, you will put a message onto the queue, much like you would submit a job to the job server above, except you probably won't do anything synchronously. Instead of making the "Get Favourites For UserA" and so forth calls synchronously, you will make them asynchronously, and then have a message that says to check whether all of them are finished. You'll need some sort of persistence (a SQL database you're familiar with, or Amazon's SimpleDB if you want to go fully AWS) to track whether the work is done - you can't check on the progress of a job in SQS (although you can in other queues). The message that checks whether they are all finished will do the check - if they're not all finished, don't do anything, and then the message will be retried in a few minutes (based on the `visibility_timeout`). Otherwise, you can put the next message on the queue. This queue-only approach should be robust, assuming you don't consume queue messages by mistake without doing the work. Making a mistake like that is hard to do with SQS - you really have to try. Don't use auto-consuming queues or protocols - if you error out, you might not be able to ensure that you put a replacement message back on the queue.

A combination of queue and job server may be useful in this case. You can get away with not having a persistence store to check job progress - the job server will allow you to track job progress. Your "get favourites for users" message could place all the "get favourites for UserA/B/C" jobs into the job server. Then, put a "check all favourites fetching done" message on the queue with a list of tasks that need to be complete (and enough information to restart any jobs that mysteriously disappear).

For bonus points: Doing this as a MapReduce should be fairly easy. Your first job's input will be a list of all your users. The map will take each user, get the followed users, and output lines for each user and their followed user:

```
"UserX" "UserA"
"UserX" "UserB"
"UserX" "UserC"
```

An identity reduce step will leave this unchanged. This will form the second job's input.
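As an illustration of that first job (my sketch, not the answer's), a Hadoop Streaming-style mapper could look like this; `get_followed_users` is a hypothetical stand-in for the 3rd-party API call:

```python
#!/usr/bin/env python
# Mapper for the first job: stdin is the list of users, one per line.
import sys

def get_followed_users(user):
    # Hypothetical stand-in for the remote "Get Followed Users" API call
    return ["UserA", "UserB", "UserC"]

for line in sys.stdin:
    user = line.strip()
    if not user:
        continue
    for followed in get_followed_users(user):
        print("%s\t%s" % (user, followed))
```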
The map for the second job will get the favourites for each line (you may want to use memcached to prevent fetching favourites for the UserX/UserA combo and UserY/UserA via the API), and output a line for each favourite:

```
"UserX" "UserA" "Favourite1"
"UserX" "UserA" "Favourite2"
"UserX" "UserA" "Favourite3"
"UserX" "UserB" "Favourite4"
```

The reduce step for this job will convert this to:

```
"UserX" [("UserA", "Favourite1"), ("UserA", "Favourite2"), ("UserA", "Favourite3"), ("UserB", "Favourite4")]
```

At this point, you might have another MapReduce job to update your database for each user with these values, or you might be able to use some of the Hadoop-related tools like Pig, Hive, and HBase to manage your database for you.

I'd recommend using Cloudera's Distribution for Hadoop's EC2 management commands to create and tear down your Hadoop cluster on EC2 (their AMIs have Python set up on them), and use something like Dumbo (on PyPI) to create your MapReduce jobs, since it allows you to test your MapReduce jobs on your local/dev machine without access to Hadoop.

Good luck!
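And, in the same hypothetical Streaming style, a sketch of the second job's reduce step, assuming Hadoop has already sorted the mapper output by key:

```python
#!/usr/bin/env python
# Reducer for the second job: collapses sorted "user<TAB>followed<TAB>favourite"
# lines into one (followed, favourite) list per user, as described above.
import sys
from itertools import groupby

def records(stdin):
    for line in stdin:
        user, followed, favourite = line.rstrip("\n").split("\t")
        yield user, (followed, favourite)

for user, group in groupby(records(sys.stdin), key=lambda rec: rec[0]):
    favourites = [pair for _, pair in group]
    print("%s\t%s" % (user, favourites))
```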
I am working on a similar problem that I need to solve. I was also looking at MapReduce and using the Elastic MapReduce service from Amazon. I'm pretty convinced MapReduce will work for this problem. The implementation is where I'm getting hung up, because I'm not sure my reducer even needs to do anything. I'll answer your questions as I understand your (and my) problem, and hopefully it helps.

1. Yes, I think it's well suited. You could look at leveraging the Elastic MapReduce service's multiple-steps option. You could use one step to fetch the people a user is following, and another step to compile a list of tracks for each of those followers, and the reducer for that second step would probably be the one to build the cache.
2. It depends on how big your dataset is and how often you'll be running it. It's hard to say without knowing how big the dataset is (or is going to get) whether it'll be cost-effective or not. Initially, it'll probably be quite cost-effective, as you won't have to manage your own Hadoop cluster, nor pay for EC2 instances (assuming that's what you use) to be up all the time. Once you reach the point where you're actually crunching this data for a long period of time, it will probably make less and less sense to use Amazon's MapReduce service, because you'll constantly have nodes online.

A job is basically your MapReduce task. It can consist of multiple steps (each MapReduce task is a step). Once your data has been processed and all steps have been completed, your job is done. So you're effectively paying for CPU time for each node in the Hadoop cluster: `T*n`, where T is the time (in hours) it takes to process your data and n is the number of nodes you tell Amazon to spin up (for example, a 3-hour run on 10 nodes is billed as 30 node-hours).

I hope this helps, good luck. I'd like to hear how you end up implementing your Mappers and Reducers, as I'm solving a very similar problem and I'm not sure my approach is really the best.
Seems that we're going with [Node.js](http://nodejs.org) and the [Seq](http://substack.net/posts/e0741f) flow control library. It was very easy to move from my map/flowchart of the process to a stub of the code, and now it's just a matter of filling out the code to hook into the right APIs. Thanks for the answers, they were a lot of help finding the solution I was looking for.
22,450,848
I am trying to install some python requirements from a local package directory containing [wheel](http://pip.readthedocs.org/en/latest/reference/pip_wheel.html) archives. I am installing the requirements inside a Docker container. The steps I'm following are:

```sh
$ pip install wheel
# wheel runs, outputs .whl files to wheelhouse directory
$ pip wheel --wheel-dir wheelhouse -r requirements.txt
```

Then, inside my `Dockerfile`:

```sh
ADD requirements.txt /tmp/requirements.txt
ADD wheelhouse /tmp/wheelhouse
# install requirements. Leave file in /tmp for now - may be useful.
RUN pip install --use-wheel --no-index --find-link /tmp/wheelhouse/ -r /tmp/requirements.txt
```

This works - and all the requirements are installed correctly:

```sh
# 'app' is the name of my built docker image
$ docker run app pip list
...
psycopg2 (2.5.1)
...
```

However, if I actually try running something inside the container that **uses** `psycopg2`, then I get the following:

```sh
Error loading psycopg2 module: /usr/local/lib/python2.7/site-packages/psycopg2/_psycopg.so: undefined symbol: PyUnicodeUCS4_AsUTF8String
```

I presume that this is something to do with the way in which the wheels were built - I ran `pip wheel` on the container host machine (Ubuntu 12.04). How can I fix this - using wheels significantly reduces the time taken to build the container image, so I don't want to revert to installing packages if I can help it?
2014/03/17
[ "https://Stackoverflow.com/questions/22450848", "https://Stackoverflow.com", "https://Stackoverflow.com/users/45698/" ]
I don't know what a wheel or a Docker is, but your error comes from a mismatch between the Python used to build the module and the one that is trying to run it. Specifically, the undefined symbol `PyUnicodeUCS4_AsUTF8String` means the extension was compiled against a Python 2 built with wide (UCS-4) unicode, while the Python importing it was built with narrow (UCS-2) unicode.
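A quick way to confirm this: run the following on both the build host and inside the container; if the two disagree, the wheel's C extensions will not load.

```python
# Python 2 interpreters are compiled with either narrow (UCS-2) or wide
# (UCS-4) unicode; C extensions built under one will not import under the other.
import sys

if sys.maxunicode > 0xFFFF:
    print("UCS-4 (wide) build")
else:
    print("UCS-2 (narrow) build")
```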
In my experience, psycopg2 can be rather finicky when installing/building from source, so I am not surprised that it doesn't package into a wheel. However, could you simply wheel everything apart from psycopg2? That would still save you a heap of time.
60,182,107
Here is the exception that I am getting:

```
Traceback (most recent call last):
  File "/home/navendu/lead-generator/python_scripts/tempCodeRunnerFile.py", line 12, in <module>
    driver.switch_to_frame("http://103.251.43.139/~ksebuser/orumabills/upload/billview/")
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 789, in switch_to_frame
    self._switch_to.frame(frame_reference)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/switch_to.py", line 87, in frame
    raise NoSuchFrameException(frame_reference)
selenium.common.exceptions.NoSuchFrameException: Message: http://103.251.43.139/~ksebuser/orumabills/upload/billview/
```

Here is the python code that I am running:

```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://www.kseb.in/index.php?option=com_wrapper&view=wrapper&Itemid=813&lang=en")
driver.maximize_window()
driver.implicitly_wait(7)

driver.switch_to_frame("http://103.251.43.139/~ksebuser/orumabills/upload/billview/")

ele = driver.find_element_by_id('t_consumer-no_5')
ele.send_keys("some text")
```

Here is the link to the web page. I am trying to automate form filling in that website <http://www.kseb.in/index.php?option=com_wrapper&view=wrapper&Itemid=813&lang=en>
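A note on the error itself: `switch_to_frame` expects a frame's name, id, index, or a `WebElement`, not a URL, which is why Selenium raises `NoSuchFrameException` with the URL as its message. A minimal sketch of switching into the iframe instead (the CSS selector is an assumption about the page's markup):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("http://www.kseb.in/index.php?option=com_wrapper&view=wrapper&Itemid=813&lang=en")

# Wait until the iframe is present, then switch into it; the iframe is
# located here by a substring of its src (taken from the traceback above)
WebDriverWait(driver, 10).until(
    EC.frame_to_be_available_and_switch_to_it(
        (By.CSS_SELECTOR, "iframe[src*='orumabills']")
    )
)

ele = driver.find_element_by_id('t_consumer-no_5')
ele.send_keys("some text")
```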
2020/02/12
[ "https://Stackoverflow.com/questions/60182107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12424846/" ]
This kind of authentication works if you use the GitHub API, but you cannot use the Web UI with basic authentication. Normally, a Web UI uses a login form that sends a POST request when you log in. After that, the session cookie is used in order to stay logged in (for the session). If the login should persist after the session expires, the website could use cookies that persist longer. I think GitHub uses this concept.

I would recommend using the API for automated processes because you can parse the responses more easily. Also, I strongly recommend not using basic authentication with the real password; I would use personal access tokens (PATs) instead. If you want to send authenticated requests to the API you can e.g. execute

```
requests.get('https://api.github.com/repos/geetikatalreja/WebApp_DotNet', auth = ('valid_username', 'Valid_password'))
```

Instead of the password, you can also just use a PAT of your account (which is more secure). You can create a PAT [over there](https://github.com/settings/tokens/new). The GitHub API documentation can be found [here](https://developer.github.com/v3/) and the documentation for accessing repositories [there](https://developer.github.com/v3/repos/).
You can pass in the `application/vnd.github.VERSION.diff` media type to get the diff. So that would make it

```
requests.get('https://api.github.com/geetikatalreja/WebApp_DotNet.git/:owner/:repo/pulls/:number', auth = ('valid_username', 'Valid_password'))
```

The format should be like

```
requests.get('https://api.github.com/repos/:owner/:repo/pulls/:number', auth = ('valid_username', 'Valid_password'), headers=headers)
```

where

```
headers = {
    'Authorization': 'token mygithubtoken',
    'Accept': 'application/vnd.github.VERSION.diff',
}
```

We can't use OAuth tokens to access the website. However, diffs are available through the API:

> <https://developer.github.com/v3/pulls/#get-a-single-pull-request>
Use token-based authentication instead. Here is the link for creating a personal access token on GitHub: <https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line>

Once you have the token, you can:

1. Log in from the command line

```
$ git clone https://github.com/geetikatalreja/WebApp_DotNet.git
Username: your_username
Password: your_token
```

2. Log in using code (Python `urllib2` module)

```
from urllib2 import Request, urlopen

url = "https://api.github.com/geetikatalreja/WebApp_DotNet.git/:owner/:repo/pulls/:number"
token = "your_token"
request = Request(url)
request.add_header('Authorization', 'token %s' % token)
response = urlopen(request)
```

3. Log in using the Python `requests` module (note that GitHub expects the `token` scheme in the header)

```
import requests

url = "https://github.com/geetikatalreja"
response = requests.get(url, headers={'Authorization': 'token your_token'})
```

But if for some reason you have to log in using a username/password, you can use the code below. **Disclaimer**: code not tested, copied from [here](https://stackoverflow.com/a/46046675/1685980). This code is not using the API; instead it parses the login page and submits the form:

```
from lxml import html
import requests

s = requests.Session()
r = s.get('https://www.github.com/login')
tree = html.fromstring(r.content)
data = {i.get('name'):i.get('value') for i in tree.cssselect('input')}
data['login'] = username
data['password'] = password
r = s.post('https://github.com/session', data=data)
```
1,463,153
I'm planning to create a website using Django that will have a common header throughout the entire website. I've read Django's documentation on template inheritance, but I can't seem to find an elegant solution for the "dynamic" elements in my header.

For example, the header in the website will include tabs, say similar to <http://www.google.com/> (where it has "Web", "Images", etc), where the selected tab will describe your current location in the website. Using the Django template inheritance, it would seem like you would create a base template like this:

```
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <link rel="stylesheet" href="style.css" />
  <title>{% block title %}My Amazing Site{% endblock %}</title>
</head>
<body>
  <div id="header">
    {% block header %}
    .... html to create tabs ...
    {% endblock header %}
  </div>
```

and then in all of my other pages, I would do this:

```
{% extends "base.html" %}
{% block header %}
.... html to create tabs with one tab "selected" ...
{% endblock header %}
```

which seems annoying, as every single one of my pages would have to have duplicated HTML with the header information, but slightly different. So when it's time to add a new tab, I have to modify every single HTML file. Upon further reading, it seems like some other possible solutions are:

1 - Create a custom template tag that takes in which tab is currently selected, so that in each HTML page I just call: {% block header %} {% mycustomtag abc %} {% endblock header %}. I don't like this solution because it would require placing HTML into the Python code for creating this custom tag.

2 - Create X number of sub-templates of base.html, all with the appropriate tab selected. Then each page would inherit from the appropriate sub-template based on which tab they want selected. This solution seems fine, except for the fact that it will require X number of almost exactly the same HTML files, and still runs into the issue of having to modify all the files when a tab is added or removed.

3 - Use JavaScript (like jQuery) to modify the header on page load to "select" the correct tab. This solution is fine, but would require one to remember to add this functionality to every page's JavaScript. The good part is that the header HTML would only live in a single HTML file.

Any other suggestions? Thanks!
2009/09/22
[ "https://Stackoverflow.com/questions/1463153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I'm assuming each tab is a list item in your template `base.html`.

```
<ul>
  <li>Tab 1</li>
  <li>Tab 2</li>
  ...
</ul>
```

Add an extra block to each `li`.

```
<ul>
  <li class="{% block class_tab1 %}inactive{% endblock %}">Tab 1</li>
  <li class="{% block class_tab2 %}inactive{% endblock %}">Tab 2</li>
  <li class="{% block class_tab3 %}inactive{% endblock %}">Tab 3</li>
  ...
</ul>
```

Then in your template if tab 1 is to be selected:

```
{% extends "base.html" %}
{% block class_tab1 %}active{% endblock %}
...
```

So the HTML rendered for Tab 1 is:

```
<ul>
  <li class="active">Tab 1</li>
  <li class="inactive">Tab 2</li>
  <li class="inactive">Tab 3</li>
  ...
</ul>
```

and you can write CSS to target `li.active` as you wish.
A version of #1 will do the trick, with a separate template file for the tag. Let's say you have the models "Category" and "Article".

```
class Category(models.Model):
    title = models.CharField(_("Name"), max_length=200)
    introduction = models.TextField(blank=True, null=True)
    slug = models.SlugField(help_text=_("Used for URLs"))
    sort_order = models.IntegerField(_("Sort order"))

class Article(models.Model):
    title = models.CharField(_("Full Name"), max_length=255)
    slug = models.SlugField(_("Slug Name"), unique=True, help_text=_("This is a short, descriptive name of article that will be used in the URL link to this item"))
    text = models.TextField(_("Text of Article"), blank=True, null=True)
    category = models.ForeignKey(Category)
```

In your views you would pass the category you are viewing to the template:

```
@render_to('cat_index.html')
def category_view(request, string):
    cat = Category.objects.get(slug=string)
    articles = Article.objects.filter(category=cat).order_by('date')
    return {
        'articles': articles,
        'category': cat,
    }
```

(Note: using the [annoying](http://bitbucket.org/offline/django-annoying/) `render_to` decorator, same as `render_to_response`.)

Then you define an inclusion tag like this:

```
@register.inclusion_tag('snippets/navigation.html')
def navigation(cat=None):
    return {'cats': Category.objects.order_by('sort_order'),
            'cat': cat}
```

and use it in your base template (often called base.html):

```
{% navigation category %}
```

Now in the inclusion tag's template (`snippets/navigation.html`) you would loop over `cats`, and if one of them equals `cat` you can assign other styles:

```
<ul>
{% for c in cats %}
  <li{% ifequal c cat %} class="active"{% endifequal %}>
    <a href="{{c|url}}">{{ c }}</a>
  </li>
{% endfor %}
</ul>
```
This is a rather common problem, and I've come up with various ways to solve it. Since you're asking for options, here are three other alternative ways to achieve this effect. The options you mentioned and those listed below all have their positives and negatives. It's really up to you to decide which is the best fit.

**Alternate 1 - Use Regular Expressions and a Hash Table**

This could be performed either client-side (less advantageous) or server-side (a better pick). To do this you could have a tag that takes one input: a regular expression. In use it would look like this...

```
// In base.html...
<li class="tab {% is_tab_active r'^/cars/' %}"><a>Cars</a></li>
<li class="tab {% is_tab_active r'^/trucks/' %}"><a>Trucks</a></li>
```

The custom tag applies the regular expression against the current page being viewed. If successful, it adds the CSS class "active", if not "inactive" (or whatever your CSS classes are). I've been pondering this method for a while. I feel as if there should be some good way to tie it into urls.py, but I haven't seen it yet. (A sketch of such a tag appears after this answer.)

**Alternate 2 - Use CSS**

If you were to identify each `<body>` tag, or at least have a common template for the sections of your site, CSS could be used to assign which tab was active. Consider the following:

```
body.cars_section .navigation #cars_tab { color: #00000; }
body.truck_section .navigation #trucks_tab { color: #00000; }
```

For your base template...

```
<body class="{% block category %}{% endblock %}">
...
<ul class="navigation">
  <li id="cars_tab"><a>Cars</a></li>
  <li id="trucks_tab"><a>Trucks</a></li>
```

Then for any page you simply put the category it's a part of (matching the CSS rule)...

```
{% extends "base.html" %}
...
{% block category %}cars_section{% endblock %}
```

**Alternate 3 - Have Some Bloated Middleware**

Django lets you write middleware to affect the behavior of just about whatever you want. This seems like a bloated and complex route with potential negative performance impact, but I figured I'd at least mention it as an option.
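A minimal sketch of the `is_tab_active` tag from Alternate 1 (the implementation is my assumption; only its usage appears above, and the tag takes a plain string pattern rather than the `r'...'` literal shown, since Django's template language has no raw-string syntax):

```python
import re

from django import template

register = template.Library()

@register.simple_tag(takes_context=True)
def is_tab_active(context, pattern):
    """Return "active" if the current request path matches the given regex."""
    request = context.get("request")  # requires the request context processor
    if request and re.match(pattern, request.path):
        return "active"
    return "inactive"
```

Usage in the template would then be `{% is_tab_active "^/cars/" %}`, after `{% load %}`-ing the tag library at the top of the template.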
46,542,165
I've spent the last few weeks learning the Bokeh package (which, for visualizations, is excellent in my opinion). Unfortunately, I have come across a problem that I can't, for the life of me, figure out how to solve. The below two links have been helpful, but I can't seem to replicate them for my problem.

[Using bokeh to plot interactive pie chart in Jupyter/Python](https://stackoverflow.com/questions/43396052/using-bokeh-to-plot-interactive-pie-chart-in-jupyter-python) - refer to answer #3

<https://github.com/bokeh/bokeh/blob/0.12.9/examples/howto/notebook_comms/Jupyter%20Interactors.ipynb>

The below code (in Jupyter) displays the graph correctly and displays the slider correctly, but I'm unsure how to connect the two: when I move the slider, the graph remains static. I am using Python 3.6 and Bokeh 0.12.9.

```
from numpy.random import random
from bokeh.io import curdoc, show
from bokeh.layouts import column, widgetbox
from bokeh.models import ColumnDataSource, CustomJS, Slider
from bokeh.plotting import figure

N = 300
source = ColumnDataSource(data={'x': random(N), 'y': random(N)})

plot = figure(plot_width=950, plot_height=400)
plot.circle(x='x', y='y', source=source)

callback = CustomJS(code="""
    if (IPython.notebook.kernel !== undefined) {
        var kernel = IPython.notebook.kernel;
        cmd = "update_plot(" + cb_obj.value + ")";
        kernel.execute(cmd, {}, {})};
""")

slider = Slider(start=100, end=1000, value=N, step=10, callback=callback)

def callback(attr, old, new):
    N = slider.value
    source.data = {'x': random(N), 'y': random(N)}

slider.on_change('value', callback)

layout = column(slider, plot)
curdoc().add_root(layout)

show(widgetbox(slider, width=300))
show(plot)
```

After reading the Bokeh documentation and a few threads on GitHub, the 'callback' function is a little unclear to me, as I'm not entirely sure what to pass to it (or whether `attr`, `old`, `new` need certain elements passed to them).

Any help would be greatly appreciated. Hopefully, I haven't missed anything glaringly obvious.

Kind Regards, Adrian
2017/10/03
[ "https://Stackoverflow.com/questions/46542165", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7557077/" ]
You are currently mixing different ways of achieving interactivity, but unfortunately you always miss something for each different way.

The slider you use is from Bokeh, but unfortunately it looks like `slider.on_change` only works if you run through the Bokeh server. From the [documentation](https://bokeh.pydata.org/en/latest/docs/user_guide/interaction/widgets.html#adding-widgets):

> 
> Use bokeh serve to start the Bokeh server and set up event handlers with .on\_change (or for some widgets, .on\_click).
> 
> 

I couldn't really find that much on running Jupyter notebook and the Bokeh server together, but [this issue](https://github.com/bokeh/bokeh/issues/3461) seems to discuss that possibility. It also mentions `bokeh.application`, but I've never used that, so no idea how it works.

You also additionally use a custom JS callback, which calls into the Jupyter kernel and tries to execute `update_plot(value)`, but you never defined such a function, so it does nothing.

Then you need a method to push the data to the output. I guess the Bokeh server can somehow do that natively; for Jupyter notebooks without the Bokeh server, `push_notebook` seems to be the solution. Note that you need `show(..., notebook_handle=True)` to be able to push.

Solution 1 use the bokeh server
===============================

Sliders and other widgets automatically sync their state back to Python, so you can use `slider.on_change`. You don't need the CustomJS. Data flow looks as follows:

`python script -> bokeh server -> html -> userinput -> bokeh server -> python callbacks -> bokeh server updates plots`

Solution 2 use bokeh sliders but sync via CustomJS
==================================================

If you don't want to run a separate process, you can use the Jupyter kernel to execute code in your Python notebook. Data flow:

`jupyter notebook -> html -> user input -> customjs -> jupyter kernel -> python callbacks -> push_notebook to update plots`

```
output_notebook()

N = 300
source = ColumnDataSource(data={'x':random(N), 'y':random(N)})
plot = figure(plot_width=950, plot_height=400)
plot.circle(x='x', y='y', source=source)

callback = CustomJS(code="""
if (IPython.notebook.kernel !== undefined) {
    var kernel = IPython.notebook.kernel;
    cmd = "update_plot(" + cb_obj.value + ")";
    kernel.execute(cmd, {}, {})};
""")

slider = Slider(start=100, end=1000, value=N, step=10, callback=callback)

# must have the same name as the function that the CustomJS tries to call
def update_plot(N):
    source.data={'x':random(N), 'y':random(N)}
    # push notebooks to update plots
    push_notebook()

layout = column(slider, plot)

# notebook_handle must be true, otherwise push_notebook will not work
h1 = show(layout, notebook_handle=True)
```

Solution 3 use ipywidgets
=========================

If you are not married to the Bokeh widgets, you can use ipywidgets, which are designed for interactivity in the Jupyter notebook. The data flow is as follows:

`jupyter notebook -> html -> user input -> ipywidgets sync automatically -> python callbacks -> push_notebook`

I use `interact` here, but the other widgets should work as expected.
```
from ipywidgets import interact

output_notebook()

N = 300
source = ColumnDataSource(data={'x':random(N), 'y':random(N)})
plot = figure(plot_width=950, plot_height=400)
plot.circle(x='x', y='y', source=source)

def update_plot(v):
    N = v
    print(N)
    source.data={'x':random(N), 'y':random(N)}
    # push changed plots to the frontend
    push_notebook()

# notebook_handle must be true so that push_notebook works
show(plot, notebook_handle=True)

# create the slider and wire it to the callback; interact is imported above
# but was never actually called in the original snippet
interact(update_plot, v=(100, 1000, 10))
```

Note that you need to install ipywidgets properly, which includes calling `jupyter nbextension enable --py --sys-prefix widgetsnbextension` if you are not using conda. For details [see the documentation](https://github.com/jupyter-widgets/ipywidgets#install)
I suppose your question relates to the server, although you have both a CustomJS and a server callback. I am not familiar with the previous way of doing Bokeh server in the notebook (`push_notebook`). The new way would be like this: you wrap your code in a function taking one parameter (a document), and your call to `add_root` is made on that document. Then you build an app with that function and show it. This gives:

```
from bokeh.models import ColumnDataSource, Slider
from bokeh.layouts import column
from bokeh.plotting import figure, show, output_notebook
from numpy.random import random
from bokeh.application import Application
from bokeh.application.handlers import FunctionHandler

output_notebook()

def modify_doc(doc):
    N = 300
    source = ColumnDataSource(data={'x':random(N), 'y':random(N)})
    plot = figure(plot_width=950, plot_height=400)
    plot.circle(x='x', y='y', source=source)

    slider = Slider(start=100, end=1000, value=N, step=10)

    def callback(attr, old, new):
        N = new  # but slider.value would also work
        source.data={'x': random(N), 'y': random(N)}

    slider.on_change('value', callback)

    layout = column(slider, plot)
    doc.add_root(layout)

app = Application(FunctionHandler(modify_doc))
show(app, notebook_url="localhost:8888")
```
20,353,928
I am developing some automation with Jenkins, and I'm trying to create jobs with a simple script. ``` python-jenkins==0.2.1 ``` The very odd problem I'm facing, is that i get a very generic ``` jenkins.JenkinsException: create[job name] failed ``` and when I check on Jenkins, the job is actually created, the configuration is ok, but the name stops at the very first space. If the job name doesn't contain any space, no error is raised. Any of you has experience with this? Or a suggestion on what it could be? Thanks!
2013/12/03
[ "https://Stackoverflow.com/questions/20353928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1722687/" ]
It seems to be a problem with how the `trace` relates to the `business`. Try working with the relationships between your `trace` and `business` classes. It appears that the `trace` class should have a many-to-one relationship to `business` within your entity models.

Not knowing what your models for these classes look like, I recommend making sure you have the appropriate references within the classes. If they aren't there, add these:

Add this to your `trace` class:

```
public virtual business business { get; set; }
```

Add the following to your `business` class:

```
public business()
{
    this.Traces = new List<trace>();
}

public virtual ICollection<trace> Traces { get; set; }
```

And within your `tracemap` class add the relationship:

```
this.HasOptional(t => t.business)
    .WithMany(t => t.Traces)
    .HasForeignKey(t => t.businessid);
```

Once you have this set up, try setting up your view model like this:

```
public class vwhome
{
    public trace trace { get; set; }
    public business business { get; set; }
}
```

and use the relationships in your LINQ query:

```
var joined = from t in traces
             select new vwhome
             {
                 trace = t,
                 business = t.business
             };
```

Then in your View, you can run through your references:

```
@Html.DisplayFor(modelItem => item.trace.prop_1)
@Html.DisplayFor(modelItem => item.business.name)
```

You could probably omit the `business` class from the vm and just exploit the relationship in your View:

```
@Html.DisplayFor(modelItem => item.trace.business.name)
```

I hope that helps.
Your `traces` and `business` fields need to be strings, not the actual objects; populate them with the trace name and the business name. Then you can pass that along to your view to be properly displayed on the screen.

The LINQ query you have near the top will return those as an `IEnumerable<vwhome>`.
20,353,928
I am developing some automation with Jenkins, and I'm trying to create jobs with a simple script. ``` python-jenkins==0.2.1 ``` The very odd problem I'm facing, is that i get a very generic ``` jenkins.JenkinsException: create[job name] failed ``` and when I check on Jenkins, the job is actually created, the configuration is ok, but the name stops at the very first space. If the job name doesn't contain any space, no error is raised. Any of you has experience with this? Or a suggestion on what it could be? Thanks!
2013/12/03
[ "https://Stackoverflow.com/questions/20353928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1722687/" ]
It seems to be a problem with how the `trace` relates to the `business`. Try working with the relationships between your `trace` and `business` classes. It appears that the `trace` class should have a many-to-one relationship to `business` within your entity models.

Not knowing what your models for these classes look like, I recommend making sure you have the appropriate references within the classes. If they aren't there, add these:

Add this to your `trace` class:

```
public virtual business business { get; set; }
```

Add the following to your `business` class:

```
public business()
{
    this.Traces = new List<trace>();
}

public virtual ICollection<trace> Traces { get; set; }
```

And within your `tracemap` class add the relationship:

```
this.HasOptional(t => t.business)
    .WithMany(t => t.Traces)
    .HasForeignKey(t => t.businessid);
```

Once you have this set up, try setting up your view model like this:

```
public class vwhome
{
    public trace trace { get; set; }
    public business business { get; set; }
}
```

and use the relationships in your LINQ query:

```
var joined = from t in traces
             select new vwhome
             {
                 trace = t,
                 business = t.business
             };
```

Then in your View, you can run through your references:

```
@Html.DisplayFor(modelItem => item.trace.prop_1)
@Html.DisplayFor(modelItem => item.business.name)
```

You could probably omit the `business` class from the vm and just exploit the relationship in your View:

```
@Html.DisplayFor(modelItem => item.trace.business.name)
```

I hope that helps.
If I understand you correctly, you will need this in your `Trace` class:

```
public virtual vwhome vwhome { get; set; }
```

In your `vwhome` class you need:

```
public virtual ICollection<Trace> Traces { get; set; }
```
20,353,928
I am developing some automation with Jenkins, and I'm trying to create jobs with a simple script. ``` python-jenkins==0.2.1 ``` The very odd problem I'm facing, is that i get a very generic ``` jenkins.JenkinsException: create[job name] failed ``` and when I check on Jenkins, the job is actually created, the configuration is ok, but the name stops at the very first space. If the job name doesn't contain any space, no error is raised. Any of you has experience with this? Or a suggestion on what it could be? Thanks!
2013/12/03
[ "https://Stackoverflow.com/questions/20353928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1722687/" ]
It seems to be a problem with how the `trace` relates to the `business`. Try working with the relationships between your `trace` and `business` classes. It appears that the `trace` class should have a many-to-one relationship to `business` within your entity models.

Not knowing what your models for these classes look like, I recommend making sure you have the appropriate references within the classes. If they aren't there, add these:

Add this to your `trace` class:

```
public virtual business business { get; set; }
```

Add the following to your `business` class:

```
public business()
{
    this.Traces = new List<trace>();
}

public virtual ICollection<trace> Traces { get; set; }
```

And within your `tracemap` class add the relationship:

```
this.HasOptional(t => t.business)
    .WithMany(t => t.Traces)
    .HasForeignKey(t => t.businessid);
```

Once you have this set up, try setting up your view model like this:

```
public class vwhome
{
    public trace trace { get; set; }
    public business business { get; set; }
}
```

and use the relationships in your LINQ query:

```
var joined = from t in traces
             select new vwhome
             {
                 trace = t,
                 business = t.business
             };
```

Then in your View, you can run through your references:

```
@Html.DisplayFor(modelItem => item.trace.prop_1)
@Html.DisplayFor(modelItem => item.business.name)
```

You could probably omit the `business` class from the vm and just exploit the relationship in your View:

```
@Html.DisplayFor(modelItem => item.trace.business.name)
```

I hope that helps.
Your view is using a *collection* of `vwhome` objects, each of which contains a `trace` and a `business`, so your view will look something like:

```
<ul>
@foreach (var item in Model)
{
    <li>@(item.trace) @(item.business)</li>
}
</ul>
```
44,052,723
I am trying to change the keyboard language in Python on Windows. I have searched everywhere for a way to do it, and the most common answer is:

```
import win32api

win32api.LoadKeyboardLayout('00000409',1) # to switch to english
win32api.LoadKeyboardLayout('00000401',1) # to switch to arabic
```

But the only thing this code does is add another language to the current list of languages on my PC; it doesn't switch the active keyboard layout (when I try typing something after I run the script, it keeps typing in the same language).

Thank you in advance for any help!
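To make the goal concrete, the direction that is often suggested for this situation goes one step further than `LoadKeyboardLayout` and asks the focused window to switch. The sketch below is an unverified assumption, not a confirmed fix: it posts `WM_INPUTLANGCHANGEREQUEST` (0x0050) to the foreground window with the layout handle, and whether it takes effect depends on the application that has focus.

```python
# Hedged sketch: request a layout switch in the currently focused window.
import win32api
import win32gui

WM_INPUTLANGCHANGEREQUEST = 0x0050  # Win32 message constant

hkl = win32api.LoadKeyboardLayout('00000409', 1)  # handle for the English layout
hwnd = win32gui.GetForegroundWindow()             # window that should switch
win32api.SendMessage(hwnd, WM_INPUTLANGCHANGEREQUEST, 0, hkl)
```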
2017/05/18
[ "https://Stackoverflow.com/questions/44052723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7891336/" ]
Numpy will pickle embedded objects by default (which you could avoid with `allow_pickle=False`, but it sounds like you may need it), and pickling is slow (see <https://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html>). You may want to check Pandas (see <http://matthewrocklin.com/blog/work/2015/03/16/Fast-Serialization>) or try to come up with your own file format that avoids pickling your complex data structures.
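To make the pickling distinction concrete, here is a small sketch; the file names are illustrative. A plain numeric array round-trips as raw binary, while an object-dtype array silently falls back to pickle on save and must be explicitly allowed on load.

```python
import numpy as np

numeric = np.arange(1000000, dtype=np.int32)
np.save('numeric.npy', numeric, allow_pickle=False)  # pure binary, fast

mixed = np.array([{'a': 1}, [1, 2, 3]], dtype=object)
np.save('mixed.npy', mixed)  # falls back to pickle for the embedded objects
# np.save('mixed.npy', mixed, allow_pickle=False)  # would raise ValueError

loaded = np.load('mixed.npy', allow_pickle=True)  # loading pickled data must be opted into
```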
Saving and loading large datasets to/from disk will always be a costly operation. One possible optimization is using memory mapping to disk and working directly on the array (if this is compatible with your application), especially if you're only interested in a small part of the dataset. This is what numpy.memmap does.

For example:

```
import numpy as np

a = np.memmap('largeArray.dat', dtype=np.int32, mode='w+', shape=(100000,))
```

This will create a numpy array 'a' of 100,000 int32 values. It can be handled as any "normal" numpy array. This also creates the corresponding file 'largeArray.dat' on your disk that will contain the data in 'a'. Synchronization between 'a' and 'largeArray.dat' is handled by numpy, and this depends on your RAM size.

More info [here](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.memmap.html)
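As a follow-up to the example above, reopening the file later is just another `np.memmap` call, assuming you pass the same dtype and shape used at creation; only the slices you actually touch are paged in from disk.

```python
import numpy as np

# reopen the file created above, read-only
b = np.memmap('largeArray.dat', dtype=np.int32, mode='r', shape=(100000,))
chunk = b[5000:5100]  # only this window is read from disk
```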
28,800,651
I have a function to run `flake8` every time I save a python file. However I don't want that function to run if the full file name (full path) includes `/.git/`. How can I achieve that? I tried the following without success: ``` au BufWritePost */[^git]/*.py call s:MyFlake8() ```
2015/03/01
[ "https://Stackoverflow.com/questions/28800651", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2149400/" ]
I'd do an early check at the top of `MyFlake8` and exit if the conditions are not met. Note that since the autocmd calls `s:MyFlake8()`, the function itself must be defined as script-local with the `s:` prefix:

```
function! s:MyFlake8()
  if expand("%:p") =~ "/\\.git/"
    return
  endif

  echo "FLAKE8!"
endfunction

au BufWritePost *.py call s:MyFlake8()
```
Negative file globs are difficult to achieve, but you can avoid the problem with a positive glob that sets an "ignore this" marker, and a second autocommand (in that order!) that checks for that marker:

```
au BufWritePost */.git/*.py let b:insideGit = 1
au BufWritePost *.py if ! exists('b:insideGit') | call s:MyFlake8() | endif
```
16,005,686
I have a Python program in which I am trying to use [this rhyming dictionary](http://rhyme.sourceforge.net/) to look up rhymes. Part of the dictionary library setup works by a C program creating three gdbm `.db` files. The code that does this is publicly available [here](http://rhyme.sourceforge.net/) (from 'get the source' section), and the key part of the C gdbm builder `compile.c` looks like this: ``` #include <gdbm.h> ...<snip>... int main(int argc, char *argv[]) { FILE *input; GDBM_FILE output; ...<snip>... output = gdbm_open(argv[2], 0, GDBM_NEWDB, 0644, 0); ...<snip>... gdbm_sync(output); ...<snip>... ``` `words.db` is created, and it can be successfully accessed from the bundled command line interface. According to [the library's example page](http://rhyme.sourceforge.net/how.html), I should be able to use the python gdbm module to access the database. However, I cannot. When I run the following: ``` import gdbm words = gdbm.open('/usr/share/rhyme/words.db') ``` I get the following error: ``` words = gdbm.open('/usr/share/rhyme/words.db') gdbm.error: Bad magic number ``` Is there a file incompatibility problem? Do some gdbm files created in C not open with Python? (This is with Python 2.7 on OSX 10.6.8. Python's gdbm was installed via MacPorts)
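One low-risk diagnostic worth trying first: ask the standard library which dbm flavour actually wrote the file. On Python 2.7 (as here) that is the `whichdb` module; if it reports something other than 'gdbm', the "Bad magic number" would simply mean the file is in a different dbm format than the Python gdbm module expects.

```python
# Python 2.7 syntax, matching the setup in the question
import whichdb

kind = whichdb.whichdb('/usr/share/rhyme/words.db')
print(kind)  # e.g. 'gdbm', 'dbhash', 'dumbdbm', '' (unrecognised) or None (unreadable)
```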
2013/04/14
[ "https://Stackoverflow.com/questions/16005686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/279858/" ]
What you are looking for is [Selenium Grid 2](https://github.com/SeleniumHQ/selenium/wiki/Grid2).

Grid allows you to:

* scale by distributing tests on several machines (parallel execution)
* manage multiple environments from a central point, making it easy to run the tests against a vast range of browser/OS combinations.
* minimize the maintenance time for the grid by allowing you to implement custom hooks to leverage virtual infrastructure, for instance.
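Since the surrounding code in this thread is Python, here is a minimal sketch of pointing a Python test at a Grid hub rather than a local browser. It uses the Selenium 3-style `Remote` API, and the hub URL is illustrative.

```python
from selenium import webdriver

# connect to a running Grid hub instead of starting a local browser
driver = webdriver.Remote(
    command_executor='http://localhost:4444/wd/hub',
    desired_capabilities={'browserName': 'chrome'},
)
driver.get('https://example.com')
print(driver.title)
driver.quit()
```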
I agree. Using Grid in combination with Maven's parallelized test classes, you can run multiple instances on one PC. Jenkins is also possible when you are using Ant for your build; then you can specify which tests can run in parallel. It's quite easy to set up, though ;)
16,005,686
I have a Python program in which I am trying to use [this rhyming dictionary](http://rhyme.sourceforge.net/) to look up rhymes. Part of the dictionary library setup works by a C program creating three gdbm `.db` files. The code that does this is publicly available [here](http://rhyme.sourceforge.net/) (from 'get the source' section), and the key part of the C gdbm builder `compile.c` looks like this: ``` #include <gdbm.h> ...<snip>... int main(int argc, char *argv[]) { FILE *input; GDBM_FILE output; ...<snip>... output = gdbm_open(argv[2], 0, GDBM_NEWDB, 0644, 0); ...<snip>... gdbm_sync(output); ...<snip>... ``` `words.db` is created, and it can be successfully accessed from the bundled command line interface. According to [the library's example page](http://rhyme.sourceforge.net/how.html), I should be able to use the python gdbm module to access the database. However, I cannot. When I run the following: ``` import gdbm words = gdbm.open('/usr/share/rhyme/words.db') ``` I get the following error: ``` words = gdbm.open('/usr/share/rhyme/words.db') gdbm.error: Bad magic number ``` Is there a file incompatibility problem? Do some gdbm files created in C not open with Python? (This is with Python 2.7 on OSX 10.6.8. Python's gdbm was installed via MacPorts)
2013/04/14
[ "https://Stackoverflow.com/questions/16005686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/279858/" ]
What you are looking for is [Selenium Grid 2](https://github.com/SeleniumHQ/selenium/wiki/Grid2).

Grid allows you to:

* scale by distributing tests on several machines (parallel execution)
* manage multiple environments from a central point, making it easy to run the tests against a vast range of browser/OS combinations.
* minimize the maintenance time for the grid by allowing you to implement custom hooks to leverage virtual infrastructure, for instance.
This is an old question, but for anyone still reading along, it is very possible to run multiple Selenium WebDriver instances in parallel **without** using Grid.

I have successfully tested this using Chrome, Firefox, and PhantomJS (up to 5 at once). Each WebDriver instance uses an isolated context, so session conflicts should not be an issue. Be wary of server-side conflicts though, depending on the requirements of your website!

For NUnit users, NUnit 3.2.1 now has a 'TestContext.Current.WorkerId' property that will allow you to isolate one WebDriver instance per NUnit worker.

Running multiple browsers on the same machine will often hinder performance, so be careful not to use too many browser instances, or you may actually increase your testing time!
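In Python terms, the "parallel without Grid" point above boils down to each worker owning its own driver. A minimal sketch, with the URLs and worker count as placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

def fetch_title(url):
    driver = webdriver.Chrome()  # each call gets an isolated browser session
    try:
        driver.get(url)
        return driver.title
    finally:
        driver.quit()

urls = ['https://example.com'] * 4
with ThreadPoolExecutor(max_workers=4) as pool:
    titles = list(pool.map(fetch_title, urls))
print(titles)
```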
16,005,686
I have a Python program in which I am trying to use [this rhyming dictionary](http://rhyme.sourceforge.net/) to look up rhymes. Part of the dictionary library setup works by a C program creating three gdbm `.db` files. The code that does this is publicly available [here](http://rhyme.sourceforge.net/) (from 'get the source' section), and the key part of the C gdbm builder `compile.c` looks like this: ``` #include <gdbm.h> ...<snip>... int main(int argc, char *argv[]) { FILE *input; GDBM_FILE output; ...<snip>... output = gdbm_open(argv[2], 0, GDBM_NEWDB, 0644, 0); ...<snip>... gdbm_sync(output); ...<snip>... ``` `words.db` is created, and it can be successfully accessed from the bundled command line interface. According to [the library's example page](http://rhyme.sourceforge.net/how.html), I should be able to use the python gdbm module to access the database. However, I cannot. When I run the following: ``` import gdbm words = gdbm.open('/usr/share/rhyme/words.db') ``` I get the following error: ``` words = gdbm.open('/usr/share/rhyme/words.db') gdbm.error: Bad magic number ``` Is there a file incompatibility problem? Do some gdbm files created in C not open with Python? (This is with Python 2.7 on OSX 10.6.8. Python's gdbm was installed via MacPorts)
2013/04/14
[ "https://Stackoverflow.com/questions/16005686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/279858/" ]
You can run multiple instances of chromedriver locally quite easily: just instantiate multiple driver objects; chromedriver will keep the profiles separate and find a port to run on all by itself. Here's a link to an example that can run multiple tests using TestNG and Maven:

<https://github.com/Ardesco/Selenium-Maven-Template>

Just clone the above project and run the following in the command line:

```
mvn verify -Pselenium-tests -Dbrowser=chrome -Dthreads=2
```

It takes advantage of TestNG's ability to manage the thread pool and will open up multiple instances if specified. You can do the same thing with jUnit, but you'll need to write a custom test runner to fire the tests off into individual threads.

If you decide to use Gradle, it can deal with managing the thread pools for you with both TestNG and jUnit, and a lot of people prefer it to Maven.
I agree. Using Grid in combination with Maven's parallelized test classes, you can run multiple instances on one PC. Jenkins is also possible when you are using Ant for your build; then you can specify which tests can run in parallel. It's quite easy to set up, though ;)
16,005,686
I have a Python program in which I am trying to use [this rhyming dictionary](http://rhyme.sourceforge.net/) to look up rhymes. Part of the dictionary library setup works by a C program creating three gdbm `.db` files. The code that does this is publicly available [here](http://rhyme.sourceforge.net/) (from 'get the source' section), and the key part of the C gdbm builder `compile.c` looks like this: ``` #include <gdbm.h> ...<snip>... int main(int argc, char *argv[]) { FILE *input; GDBM_FILE output; ...<snip>... output = gdbm_open(argv[2], 0, GDBM_NEWDB, 0644, 0); ...<snip>... gdbm_sync(output); ...<snip>... ``` `words.db` is created, and it can be successfully accessed from the bundled command line interface. According to [the library's example page](http://rhyme.sourceforge.net/how.html), I should be able to use the python gdbm module to access the database. However, I cannot. When I run the following: ``` import gdbm words = gdbm.open('/usr/share/rhyme/words.db') ``` I get the following error: ``` words = gdbm.open('/usr/share/rhyme/words.db') gdbm.error: Bad magic number ``` Is there a file incompatibility problem? Do some gdbm files created in C not open with Python? (This is with Python 2.7 on OSX 10.6.8. Python's gdbm was installed via MacPorts)
2013/04/14
[ "https://Stackoverflow.com/questions/16005686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/279858/" ]
This is an old question, but for anyone still reading along, it is very possible to run multiple Selenium WebDriver instances in parallel **without** using Grid.

I have successfully tested this using Chrome, Firefox, and PhantomJS (up to 5 at once). Each WebDriver instance uses an isolated context, so session conflicts should not be an issue. Be wary of server-side conflicts though, depending on the requirements of your website!

For NUnit users, NUnit 3.2.1 now has a 'TestContext.Current.WorkerId' property that will allow you to isolate one WebDriver instance per NUnit worker.

Running multiple browsers on the same machine will often hinder performance, so be careful not to use too many browser instances, or you may actually increase your testing time!
I agree. Using Grid in combination with Maven's parallelized test classes, you can run multiple instances on one PC. Jenkins is also possible when you are using Ant for your build; then you can specify which tests can run in parallel. It's quite easy to set up, though ;)
16,005,686
I have a Python program in which I am trying to use [this rhyming dictionary](http://rhyme.sourceforge.net/) to look up rhymes. Part of the dictionary library setup works by a C program creating three gdbm `.db` files. The code that does this is publicly available [here](http://rhyme.sourceforge.net/) (from 'get the source' section), and the key part of the C gdbm builder `compile.c` looks like this: ``` #include <gdbm.h> ...<snip>... int main(int argc, char *argv[]) { FILE *input; GDBM_FILE output; ...<snip>... output = gdbm_open(argv[2], 0, GDBM_NEWDB, 0644, 0); ...<snip>... gdbm_sync(output); ...<snip>... ``` `words.db` is created, and it can be successfully accessed from the bundled command line interface. According to [the library's example page](http://rhyme.sourceforge.net/how.html), I should be able to use the python gdbm module to access the database. However, I cannot. When I run the following: ``` import gdbm words = gdbm.open('/usr/share/rhyme/words.db') ``` I get the following error: ``` words = gdbm.open('/usr/share/rhyme/words.db') gdbm.error: Bad magic number ``` Is there a file incompatibility problem? Do some gdbm files created in C not open with Python? (This is with Python 2.7 on OSX 10.6.8. Python's gdbm was installed via MacPorts)
2013/04/14
[ "https://Stackoverflow.com/questions/16005686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/279858/" ]
You can run multiple instances of chromedriver locally quite easily: just instantiate multiple driver objects; chromedriver will keep the profiles separate and find a port to run on all by itself. Here's a link to an example that can run multiple tests using TestNG and Maven:

<https://github.com/Ardesco/Selenium-Maven-Template>

Just clone the above project and run the following in the command line:

```
mvn verify -Pselenium-tests -Dbrowser=chrome -Dthreads=2
```

It takes advantage of TestNG's ability to manage the thread pool and will open up multiple instances if specified. You can do the same thing with jUnit, but you'll need to write a custom test runner to fire the tests off into individual threads.

If you decide to use Gradle, it can deal with managing the thread pools for you with both TestNG and jUnit, and a lot of people prefer it to Maven.
This is an old question, but for anyone still reading along, it is very possible to run multiple Selenium WebDriver instances in parallel **without** using Grid.

I have successfully tested this using Chrome, Firefox, and PhantomJS (up to 5 at once). Each WebDriver instance uses an isolated context, so session conflicts should not be an issue. Be wary of server-side conflicts though, depending on the requirements of your website!

For NUnit users, NUnit 3.2.1 now has a 'TestContext.Current.WorkerId' property that will allow you to isolate one WebDriver instance per NUnit worker.

Running multiple browsers on the same machine will often hinder performance, so be careful not to use too many browser instances, or you may actually increase your testing time!
48,756,915
I have a situation where I have a directory "batches" containing several batch files:

```
one.txt
two.txt
...
seventy.txt
```

Each of these files needs to be processed by a python script as:

```
python processor.py --inputFile=batches/one.txt
```

My current implementation is as such:

```
for f in batches/*.txt
do
    python processor.py --inputFile=$f
done
```

I have hundreds of batches, so running all of them in parallel as

```
python processor.py --inputFile=$f &
```

is not feasible. However, I think that running ~10 at a time shouldn't be a problem. I'm aware that the syntax

```
{
python processor.py --inputFile=batches/batchOne.txt
python processor.py --inputFile=batches/batchTwo.txt
} &
{
python processor.py --inputFile=batches/batchThree.txt
python processor.py --inputFile=batches/batchFour.txt
}
```

should give me a result similar to the one I want. However, are there any better solutions? Basically, given a command template, in my case

```
python processor.py --inputFile=batches/$1
```

and a list of batches, I'd like to control how many get executed at the same time.

I'm working on Ubuntu Linux.
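To make the goal concrete, here is a sketch (untested, standard library only) of the behaviour I'm after, expressed in Python itself rather than shell grouping; the cap of 10 workers mirrors the number above. On the shell side, `xargs -P` can impose the same cap.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def run(batch):
    # launch one processor per batch file, same command line as above
    return subprocess.run(
        ['python', 'processor.py', '--inputFile={}'.format(batch)],
        check=True,
    )

batches = sorted(Path('batches').glob('*.txt'))
with ThreadPoolExecutor(max_workers=10) as pool:  # at most 10 jobs in flight
    list(pool.map(run, batches))
```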
2018/02/12
[ "https://Stackoverflow.com/questions/48756915", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1876047/" ]
`Array#include?` only tests membership of one element in an array. Your `compare_from.include?(compare_to)` tests whether `compare_to` is an element of `compare_from`, and would e.g. return `true` in case `compare_from` is `[1, 2, 3, ["John Doe", "Eric Schulz", "Tom Jerry"], 5]`.

If you want to see if all elements of `compare_to` are in `compare_from`, `compare_to.all? { |element| compare_from.include?(element) }` is idiomatic and legible but slow; tadman's `(compare_from & compare_to).size == compare_to.size` is much more performant.

A third option, when speaking of subsets, and the one I'd likely prefer, is to use sets (note that `Set[compare_to]` would build a set whose single element is the array itself, so convert with `to_set` instead):

```
require 'set'
compare_to.to_set.subset?(compare_from.to_set)
```
This code boils down to: ``` compare_from = emp_data.map(&:employee_name) ``` Where that's calling the `employee_name` method on each of the items in the `emp_data` array and returning a new array with the result. You can easily test overlap on two arrays using `&` to find the intersection: ``` compare_to = ["John Doe", "Eric Schulz", "Tom Jerry"] common = compare_from & compare_to ``` If that array `common` has any entries then you have matches.
67,650,024
I'm very new to Python. I would like to seek your advice. I have this simple dataframe listing the items workers request. I tried this to get unique values:

```
import numpy as np
import pandas as pd

df = pd.read_csv('item.csv')

column_values=df[['Name', 'Item']].values
unique_values = np.unique(column_values)
print(unique_values)

print('Total no. of items: ')
print('No. of hand sanitizer: ')
print('No. of mask: ')
print('No. of wet tissue: ')
print('\n')
```

I'm not sure how to code the counting or which type of loop to use for the names. I tried a few approaches to list the name of the requestor, the total no. of items requested, and the total of each item, but could not get the output below by using a loop.

My desired output would be like this:

```
Eric
Total no. of items: 11
No. of hand sanitizer: 5
No. of mask: 3
No. of wet tissue: 3

Farhana
Total no. of items: 9
No. of hand sanitizer: 5
No. of mask: 2
No. of wet tissue: 2

.... and so on
```

I appreciate your advice and input.
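For reference, the shape of output described here usually falls out of a groupby rather than an explicit loop. The sketch below assumes the two columns are literally named 'Name' and 'Item', as in the code above:

```python
import pandas as pd

df = pd.read_csv('item.csv')
counts = df.groupby('Name')['Item'].value_counts()  # per-person, per-item counts

for name, items in counts.groupby(level=0):
    print(name)
    print('Total no. of items:', items.sum())
    for (_, item), n in items.items():
        print('No. of {}: {}'.format(item, n))
    print()
```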
2021/05/22
[ "https://Stackoverflow.com/questions/67650024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16000467/" ]
The entire point of `sensitive = true` is to prevent the values from being displayed on the console every time you run `terraform apply`. You have to output the sensitive value explicitly, like this: ``` terraform output signalrserviceconnstring ``` I highly suggest reading [the documentation](https://learn.hashicorp.com/tutorials/terraform/outputs).
You could use the function `nonsensitive`, like this:

```
output "mysecret" {
  value = nonsensitive(var.mysecret)
}
```
67,650,024
I'm very new to Python. I would like to seek your advice. I have this simple dataframe listing the items workers request. I tried this to get unique values:

```
import numpy as np
import pandas as pd

df = pd.read_csv('item.csv')

column_values=df[['Name', 'Item']].values
unique_values = np.unique(column_values)
print(unique_values)

print('Total no. of items: ')
print('No. of hand sanitizer: ')
print('No. of mask: ')
print('No. of wet tissue: ')
print('\n')
```

I'm not sure how to code the counting or which type of loop to use for the names. I tried a few approaches to list the name of the requestor, the total no. of items requested, and the total of each item, but could not get the output below by using a loop.

My desired output would be like this:

```
Eric
Total no. of items: 11
No. of hand sanitizer: 5
No. of mask: 3
No. of wet tissue: 3

Farhana
Total no. of items: 9
No. of hand sanitizer: 5
No. of mask: 2
No. of wet tissue: 2

.... and so on
```

I appreciate your advice and input.
2021/05/22
[ "https://Stackoverflow.com/questions/67650024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16000467/" ]
The entire point of `sensitive = true` is to prevent the values from being displayed on the console every time you run `terraform apply`. You have to output the sensitive value explicitly, like this: ``` terraform output signalrserviceconnstring ``` I highly suggest reading [the documentation](https://learn.hashicorp.com/tutorials/terraform/outputs).
You really shouldn't, but if you must: ``` terraform plan -out=tfplan terraform show -json tfplan # then, if you decide that the planned change is acceptable terraform apply tfplan ``` source: <https://discuss.hashicorp.com/t/how-to-show-sensitive-values/24076>
67,650,024
I'm very new to Python. I would like to seek your advice. I have this simple dataframe listing the items workers request. I tried this to get unique values:

```
import numpy as np
import pandas as pd

df = pd.read_csv('item.csv')

column_values=df[['Name', 'Item']].values
unique_values = np.unique(column_values)
print(unique_values)

print('Total no. of items: ')
print('No. of hand sanitizer: ')
print('No. of mask: ')
print('No. of wet tissue: ')
print('\n')
```

I'm not sure how to code the counting or which type of loop to use for the names. I tried a few approaches to list the name of the requestor, the total no. of items requested, and the total of each item, but could not get the output below by using a loop.

My desired output would be like this:

```
Eric
Total no. of items: 11
No. of hand sanitizer: 5
No. of mask: 3
No. of wet tissue: 3

Farhana
Total no. of items: 9
No. of hand sanitizer: 5
No. of mask: 2
No. of wet tissue: 2

.... and so on
```

I appreciate your advice and input.
2021/05/22
[ "https://Stackoverflow.com/questions/67650024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16000467/" ]
You could use the function `nonsensitive`, like this:

```
output "mysecret" {
  value = nonsensitive(var.mysecret)
}
```
You really shouldn't, but if you must: ``` terraform plan -out=tfplan terraform show -json tfplan # then, if you decide that the planned change is acceptable terraform apply tfplan ``` source: <https://discuss.hashicorp.com/t/how-to-show-sensitive-values/24076>
58,185,487
I have been having issues reading a CSV file into Jupyter Notebook. this is the code: ``` import pandas as pd mpg = pd.read_csv('C:/Users/Ajibola/Documents/mpg.csv') mpg.head() ``` And this is the error I got: ``` File "<ipython-input-138-844bace16611>", line 1 mpg = pd.read_csv('C:\Users\Ajibola\Documents\mpg.csv') ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape ``` And after prefixing the PATH with r, I got the error: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-140-a1289650ba91> in <module> ----> 1 mpg = pd.read_csv(r'C:\Users\Ajibola\Documents\mpg.csv') 2 mpg.head() ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision) 700 skip_blank_lines=skip_blank_lines) 701 --> 702 return _read(filepath_or_buffer, kwds) 703 704 parser_f.__name__ = name ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 427 428 # Create the parser. --> 429 parser = TextFileReader(filepath_or_buffer, **kwds) 430 431 if chunksize or iterator: ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds) 893 self.options['has_index_names'] = kwds['has_index_names'] 894 --> 895 self._make_engine(self.engine) 896 897 def close(self): ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine) 1120 def _make_engine(self, engine='c'): 1121 if engine == 'c': -> 1122 self._engine = CParserWrapper(self.f, **self.options) 1123 else: 1124 if engine == 'python': ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds) 1851 kwds['usecols'] = self.usecols 1852 -> 1853 self._reader = parsers.TextReader(src, **kwds) 1854 self.unnamed_cols = self._reader.unnamed_cols 1855 pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header() UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte ``` I've run through the community for related problems and answers but making no headway. An answer would be really appreciated.
2019/10/01
[ "https://Stackoverflow.com/questions/58185487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12148422/" ]
**Create your .csv file in the same folder as your code. This will work:**

```
import pandas as pd

data = pd.read_csv('data.csv')
print(data)
```
Headers refer to the column names of your dataset. For some datasets you might encounter, the headers may be completely missing, partially missing, or present but in need of renaming.

See [this DataCamp tutorial on pandas read\_csv](https://www.datacamp.com/community/tutorials/pandas-read-csv); I hope this article is beneficial for you.
58,185,487
I have been having issues reading a CSV file into Jupyter Notebook. this is the code: ``` import pandas as pd mpg = pd.read_csv('C:/Users/Ajibola/Documents/mpg.csv') mpg.head() ``` And this is the error I got: ``` File "<ipython-input-138-844bace16611>", line 1 mpg = pd.read_csv('C:\Users\Ajibola\Documents\mpg.csv') ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape ``` And after prefixing the PATH with r, I got the error: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-140-a1289650ba91> in <module> ----> 1 mpg = pd.read_csv(r'C:\Users\Ajibola\Documents\mpg.csv') 2 mpg.head() ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision) 700 skip_blank_lines=skip_blank_lines) 701 --> 702 return _read(filepath_or_buffer, kwds) 703 704 parser_f.__name__ = name ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 427 428 # Create the parser. --> 429 parser = TextFileReader(filepath_or_buffer, **kwds) 430 431 if chunksize or iterator: ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds) 893 self.options['has_index_names'] = kwds['has_index_names'] 894 --> 895 self._make_engine(self.engine) 896 897 def close(self): ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine) 1120 def _make_engine(self, engine='c'): 1121 if engine == 'c': -> 1122 self._engine = CParserWrapper(self.f, **self.options) 1123 else: 1124 if engine == 'python': ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds) 1851 kwds['usecols'] = self.usecols 1852 -> 1853 self._reader = parsers.TextReader(src, **kwds) 1854 self.unnamed_cols = self._reader.unnamed_cols 1855 pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header() UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte ``` I've run through the community for related problems and answers but making no headway. An answer would be really appreciated.
2019/10/01
[ "https://Stackoverflow.com/questions/58185487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12148422/" ]
```
import pandas as pd

mpg = pd.read_csv('C://Users//Ajibola//Documents//mpg.csv')
mpg.head()
```

It will work, since the original failure was a Unicode escape error in the path.
Headers refer to the column names of your dataset. For some datasets you might encounter, the headers may be completely missing, partially missing, or present but in need of renaming.

See [this DataCamp tutorial on pandas read\_csv](https://www.datacamp.com/community/tutorials/pandas-read-csv); I hope this article is beneficial for you.
58,185,487
I have been having issues reading a CSV file into Jupyter Notebook. this is the code: ``` import pandas as pd mpg = pd.read_csv('C:/Users/Ajibola/Documents/mpg.csv') mpg.head() ``` And this is the error I got: ``` File "<ipython-input-138-844bace16611>", line 1 mpg = pd.read_csv('C:\Users\Ajibola\Documents\mpg.csv') ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape ``` And after prefixing the PATH with r, I got the error: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-140-a1289650ba91> in <module> ----> 1 mpg = pd.read_csv(r'C:\Users\Ajibola\Documents\mpg.csv') 2 mpg.head() ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision) 700 skip_blank_lines=skip_blank_lines) 701 --> 702 return _read(filepath_or_buffer, kwds) 703 704 parser_f.__name__ = name ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 427 428 # Create the parser. --> 429 parser = TextFileReader(filepath_or_buffer, **kwds) 430 431 if chunksize or iterator: ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds) 893 self.options['has_index_names'] = kwds['has_index_names'] 894 --> 895 self._make_engine(self.engine) 896 897 def close(self): ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine) 1120 def _make_engine(self, engine='c'): 1121 if engine == 'c': -> 1122 self._engine = CParserWrapper(self.f, **self.options) 1123 else: 1124 if engine == 'python': ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds) 1851 kwds['usecols'] = self.usecols 1852 -> 1853 self._reader = parsers.TextReader(src, **kwds) 1854 self.unnamed_cols = self._reader.unnamed_cols 1855 pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header() UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte ``` I've run through the community for related problems and answers but making no headway. An answer would be really appreciated.
2019/10/01
[ "https://Stackoverflow.com/questions/58185487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12148422/" ]
**Create your .csv file in the same folder as your code. This will work:**

```
import pandas as pd

data = pd.read_csv('data.csv')
print(data)
```
The error is complaining that 'utf-8' is not able to decode the data in your file. This is probably due to special characters in the file. Try another encoding (such as 'utf-16' or 'latin-1') as a parameter in your call:

```
import pandas as pd

mpg = pd.read_csv('C:/Users/Ajibola/Documents/mpg.csv', encoding = 'utf-16')
mpg.head()
```

For more info, refer to [pandas read csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) to see how to use the encoding parameter, and the Python [standard encodings](https://docs.python.org/3/library/codecs.html#standard-encodings).
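If guessing encodings gets tedious, a detector can narrow it down first. This is a sketch using the third-party chardet package (`pip install chardet`); the path is the one from the question.

```python
import chardet

with open('C:/Users/Ajibola/Documents/mpg.csv', 'rb') as f:
    guess = chardet.detect(f.read(100000))  # sample the first ~100 KB
print(guess)  # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73, ...}
```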
58,185,487
I have been having issues reading a CSV file into Jupyter Notebook. this is the code: ``` import pandas as pd mpg = pd.read_csv('C:/Users/Ajibola/Documents/mpg.csv') mpg.head() ``` And this is the error I got: ``` File "<ipython-input-138-844bace16611>", line 1 mpg = pd.read_csv('C:\Users\Ajibola\Documents\mpg.csv') ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape ``` And after prefixing the PATH with r, I got the error: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-140-a1289650ba91> in <module> ----> 1 mpg = pd.read_csv(r'C:\Users\Ajibola\Documents\mpg.csv') 2 mpg.head() ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision) 700 skip_blank_lines=skip_blank_lines) 701 --> 702 return _read(filepath_or_buffer, kwds) 703 704 parser_f.__name__ = name ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 427 428 # Create the parser. --> 429 parser = TextFileReader(filepath_or_buffer, **kwds) 430 431 if chunksize or iterator: ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds) 893 self.options['has_index_names'] = kwds['has_index_names'] 894 --> 895 self._make_engine(self.engine) 896 897 def close(self): ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine) 1120 def _make_engine(self, engine='c'): 1121 if engine == 'c': -> 1122 self._engine = CParserWrapper(self.f, **self.options) 1123 else: 1124 if engine == 'python': ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds) 1851 kwds['usecols'] = self.usecols 1852 -> 1853 self._reader = parsers.TextReader(src, **kwds) 1854 self.unnamed_cols = self._reader.unnamed_cols 1855 pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header() UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte ``` I've run through the community for related problems and answers but making no headway. An answer would be really appreciated.
2019/10/01
[ "https://Stackoverflow.com/questions/58185487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12148422/" ]
```
import pandas as pd

mpg = pd.read_csv('C://Users//Ajibola//Documents//mpg.csv')
mpg.head()
```

It will work, since the original failure was a Unicode escape error in the path.
The error is complaining that 'utf-8' is not able to decode the data in your file. This is probably due to special characters in the file. Try another encoding (such as 'utf-16' or 'latin-1') as a parameter in your call:

```
import pandas as pd

mpg = pd.read_csv('C:/Users/Ajibola/Documents/mpg.csv', encoding = 'utf-16')
mpg.head()
```

For more info, refer to [pandas read csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) to see how to use the encoding parameter, and the Python [standard encodings](https://docs.python.org/3/library/codecs.html#standard-encodings).
58,185,487
I have been having issues reading a CSV file into Jupyter Notebook. this is the code: ``` import pandas as pd mpg = pd.read_csv('C:/Users/Ajibola/Documents/mpg.csv') mpg.head() ``` And this is the error I got: ``` File "<ipython-input-138-844bace16611>", line 1 mpg = pd.read_csv('C:\Users\Ajibola\Documents\mpg.csv') ^ SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape ``` And after prefixing the PATH with r, I got the error: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-140-a1289650ba91> in <module> ----> 1 mpg = pd.read_csv(r'C:\Users\Ajibola\Documents\mpg.csv') 2 mpg.head() ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision) 700 skip_blank_lines=skip_blank_lines) 701 --> 702 return _read(filepath_or_buffer, kwds) 703 704 parser_f.__name__ = name ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 427 428 # Create the parser. --> 429 parser = TextFileReader(filepath_or_buffer, **kwds) 430 431 if chunksize or iterator: ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds) 893 self.options['has_index_names'] = kwds['has_index_names'] 894 --> 895 self._make_engine(self.engine) 896 897 def close(self): ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine) 1120 def _make_engine(self, engine='c'): 1121 if engine == 'c': -> 1122 self._engine = CParserWrapper(self.f, **self.options) 1123 else: 1124 if engine == 'python': ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds) 1851 kwds['usecols'] = self.usecols 1852 -> 1853 self._reader = parsers.TextReader(src, **kwds) 1854 self.unnamed_cols = self._reader.unnamed_cols 1855 pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header() UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte ``` I've run through the community for related problems and answers but making no headway. An answer would be really appreciated.
2019/10/01
[ "https://Stackoverflow.com/questions/58185487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12148422/" ]
**Create your .csv file in the same folder as your code. This will work:**

```
import pandas as pd

data = pd.read_csv('data.csv')
print(data)
```
```
import pandas as pd

mpg = pd.read_csv('C://Users//Ajibola//Documents//mpg.csv')
mpg.head()
```

It will work, since the original failure was a Unicode escape error in the path.
72,116,352
```
# Program make a simple calculator

# This function adds two numbers
def add(x, y):
    return x + y

# This function subtracts two numbers
def subtract(x, y):
    return x - y

# This function multiplies two numbers
def multiply(x, y):
    return x * y

# This function divides two numbers
def divide(x, y):
    return x / y


print("Select operation.")
print("1.Add")
print("2.Subtract")
print("3.Multiply")
print("4.Divide")

while True:
    # take input from the user
    choice = input("Enter choice(1/2/3/4): ")

    # check if choice is one of the four options
    if choice in ('1', '2', '3', '4'):
        num1 = float(input("Enter first number: "))
        num2 = float(input("Enter second number: "))

        if choice == '1':
            print(num1, "+", num2, "=", add(num1, num2))

        elif choice == '2':
            print(num1, "-", num2, "=", subtract(num1, num2))

        elif choice == '3':
            print(num1, "*", num2, "=", multiply(num1, num2))

        elif choice == '4':
            print(num1, "/", num2, "=", divide(num1, num2))

        # check if user wants another calculation
        # break the while loop if answer is no
        next_calculation = input("Let's do next calculation? (yes/no): ")
        if next_calculation == "no":
          break

    else:
        print("Invalid Input")
```

The error:

```
File "main.py", line 38
    break
    ^
SyntaxError: 'break' outside loop

** Process exited - Return Code: 1 **
Press Enter to exit terminal
```

I'm trying to make an interactive calculator, and the only issue is that I don't know how to break the loop. I tried indenting it, and it gave this error. Deleting the indentation made Python tell me that I need to add an indent, yet adding an indent gives this error message. I don't know what else to do; I'm new to this.
2022/05/04
[ "https://Stackoverflow.com/questions/72116352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19035212/" ]
Try it with indentation like this: ``` # Program make a simple calculator # This function adds two numbers def add(x, y): return x + y # This function subtracts two numbers def subtract(x, y): return x - y # This function multiplies two numbers def multiply(x, y): return x * y # This function divides two numbers def divide(x, y): return x / y print("Select operation.") print("1.Add") print("2.Subtract") print("3.Multiply") print("4.Divide") while True: # take input from the user choice = input("Enter choice(1/2/3/4): ") # check if choice is one of the four options if choice in ('1', '2', '3', '4'): num1 = float(input("Enter first number: ")) num2 = float(input("Enter second number: ")) if choice == '1': print(num1, "+", num2, "=", add(num1, num2)) elif choice == '2': print(num1, "-", num2, "=", subtract(num1, num2)) elif choice == '3': print(num1, "*", num2, "=", multiply(num1, num2)) elif choice == '4': print(num1, "/", num2, "=", divide(num1, num2)) # check if user wants another calculation # break the while loop if answer is no next_calculation = input("Let's do next calculation? (yes/no): ") if next_calculation == "no": break else: print("Invalid Input") ```
Another option would be to add a state variable like this. And move it to false when you want to exit the loop. ``` # Program make a simple calculator # This function adds two numbers def add(x, y): return x + y # This function subtracts two numbers def subtract(x, y): return x - y # This function multiplies two numbers def multiply(x, y): return x * y # This function divides two numbers def divide(x, y): return x / y print("Select operation.") print("1.Add") print("2.Subtract") print("3.Multiply") print("4.Divide") state = True while state: # take input from the user choice = input("Enter choice(1/2/3/4): ") # check if choice is one of the four options if choice in ('1', '2', '3', '4'): num1 = float(input("Enter first number: ")) num2 = float(input("Enter second number: ")) if choice == '1': print(num1, "+", num2, "=", add(num1, num2)) elif choice == '2': print(num1, "-", num2, "=", subtract(num1, num2)) elif choice == '3': print(num1, "*", num2, "=", multiply(num1, num2)) elif choice == '4': print(num1, "/", num2, "=", divide(num1, num2)) # check if user wants another calculation # break the while loop if answer is no next_calculation = input("Let's do next calculation? (yes/no): ") if next_calculation == "no": state = False else: print("Invalid Input") ```
42,539,670
I am using wagtail as a REST backend for a website. The website is built using React and fetches data via Wagtail's API v2. The SPA website needs to be able to show previews of pages in wagtail. My thought was to override `serve_preview` on the page model and simply serialize the new page as JSON and write it to a cache which could be accessed by my frontend. But I'm having trouble serializing my page to JSON; all my attempts feel very "hackish". I've made several attempts using extensions of Wagtail's built-in serializers, but without success:

**Attempt 1:**

```
def serve_preview(self, request, mode_name):
    from wagtail.api.v2.endpoints import PagesAPIEndpoint
    endpoint = PagesAPIEndpoint()

    setattr(request, 'wagtailapi_router',
            WagtailAPIRouter('wagtailapi_v2'))
    endpoint.request = request
    endpoint.action = None
    endpoint.kwargs = {'slug': self.slug, 'pk': self.pk}
    endpoint.lookup_field = 'pk'

    serializer = endpoint.get_serializer(self)
```

It feels very ugly to use the router here and set a bunch of attrs.

**Attempt 2:**

```
def serve_preview(self, request, mode_name):
    from wagtail.api.v2.endpoints import PagesAPIEndpoint
    fields = PagesAPIEndpoint.get_available_fields(self)
    if hasattr(self, 'api_fields'):
        fields.extend(self.api_fields)
    serializer_class = get_serializer_class(
        type(self),
        fields,
        meta_fields=[PagesAPIEndpoint.meta_fields],
        base=PageSerializer)
    serializer = serializer_class(self)
```

Better, but I get context issues:

```
Traceback (most recent call last):
...
  File "/usr/local/lib/python3.5/site-packages/wagtail/api/v2/serializers.py", line 92, in to_representation
    self.context['view'].seen_types[name] = page.specific_class
KeyError: 'view'
```

Any thoughts?
2017/03/01
[ "https://Stackoverflow.com/questions/42539670", "https://Stackoverflow.com", "https://Stackoverflow.com/users/565319/" ]
Solved it by diving through the source code. First define an empty dummy view:

```
class DummyView(GenericViewSet):
    def __init__(self, *args, **kwargs):
        super(DummyView, self).__init__(*args, **kwargs)

        # seen_types is a mapping of type name strings (format: "app_label.ModelName")
        # to model classes. When an object is serialised in the API, its model
        # is added to this mapping. This is used by the Admin API which appends a
        # summary of the used types to the response.
        self.seen_types = OrderedDict()
```

Then use this view and set the context of your serializer manually. I'm also using the same router as in my API in the context. It has methods which are called by the PageSerializer to resolve some fields. It's kind of strange that it is so tightly coupled with the Wagtail API, but at least this works:

```
def serve_preview(self, request, mode_name):
    import starrepublic.api as StarApi
    fields = StarApi.PagesAPIEndpoint.get_available_fields(self)
    if hasattr(self, 'api_fields'):
        fields.extend(self.api_fields)
    serializer_class = get_serializer_class(
        type(self),
        fields,
        meta_fields=[StarApi.PagesAPIEndpoint.meta_fields],
        base=PageSerializer)
    serializer = serializer_class(
        self,
        context={'request': request, 'view': DummyView(), 'router': StarApi.api_router})
```

Don't forget to import:

```
from wagtail.api.v2.serializers import get_serializer_class
from rest_framework.viewsets import GenericViewSet
from rest_framework import status
from rest_framework.response import Response
from django.http import JsonResponse
from django.http import HttpResponse
```
Possibly a non-answer answer, but I too have had challenges in the area of DRF, Wagtail's layering on top of DRF, and the need to cache json results (DRF has no built-in caching as far as I can tell, so that's an additional challenge). In a recent project, I ended up just building a list of dictionaries in a view and sending them back out with `HttpResponse()`, bypassing DRF and Wagtail API altogether. The code ended up simple, readable, and was easy to cache: ``` import json from django.http import HttpResponse from django.core.cache import cache data = cache.get('mydata') if not data: datalist = [] for foo in bar: somedata = {} # Populate somedata, "serializing" fields manually... datalist.append(somedata) # Cache for a week. data = datalist cache.set('mydata', datalist, 60 * 60 * 24 * 7) return HttpResponse(json.dumps(data), content_type='application/json') ``` Not as elegant as using the pre-built REST framework, but sometimes the simpler approach is just more productive...
58,868,529
I am trying to make a matrix and tensor in python without importing any modules such as numpy. Would there be a way to do this?
2019/11/15
[ "https://Stackoverflow.com/questions/58868529", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11210300/" ]
A matrix is just a list of lists. You can create one as such:

```
x = [[0, 1, 2, 3, 4],
     [1, 2, 3, 4, 5],
     [3, 4, 5, 6, 7]]
```

As for performing operations without numpy, you will have to write the functions yourself, most likely using nested loops, as in the sketch below.
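For example, here is a minimal sketch of matrix multiplication on that representation (the function name `matmul` is my own, not from any library):

```python
# a minimal sketch: textbook triple-loop matrix multiplication on lists of lists
def matmul(a, b):
    rows, shared, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == shared, "inner dimensions must match"
    result = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(shared):
                result[i][j] += a[i][k] * b[k][j]
    return result

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```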
Python reserves the method name `__matmul__` for matrix multiplication

```
class Matrix:
    def __mul__(left, right):
        print("__mul__ was called")
    def __matmul__(left, right):
        print("__MATMUL__ WAS CALLED")
    def __rmatmul__(right, left):
        print(40*"#")
        print("__rmatmul__")
        print("left == ", left)
        print("right == ", right)
        print(40 * "#")
    def __imatmul__(total, step):
        print("__imatmul__")

a = Matrix()
b = Matrix()

a * b  # scalar multiplication __mul__
a @ b  # matrix multiplication __matmul__
3 @ b  # matrix multiplication __rmatmul__
a @= b
```

---

`__imatmul__` is similar to the following:

```
x = 5
x += 2  # __iadd__(x, 2)  `x = x + 2`
x *= 3  # __imul__(x, 3)  `x = x * 3`
```

---

Whenever you write `x * y`, Python attempts to get the definition of multiplication from the left-hand argument first. That is, `x * y` is initially `type(x).__mul__(x, y)` However, sometimes the left-hand thing doesn't know how to multiply itself by the right-hand thing.

```
class K:
    pass

a = K()
result = 55*a
result = type(55).__mul__(55, a)
```

The `int` class does not know how to multiply together `55` and an instance of class `K`. If `type(left).__mul__(left, right)` fails, then the back-up mechanism `type(right).__rmul__(right, left)` is called. If you write your own matrix class, then `__rmatmul__` and `__rmul__` are what will be called when you put a scalar multiple out in front, like 88.

```
m = Matrix()
88 * m  # __rmul__(m, 88)
88 @ m  # __rmatmul__(m, 88)
```

---

One way to implement a matrix is as a list of lists:

```
matrix = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
```

This has a notable disadvantage. While it is easy to select a row of the matrix:

```
matrix[1] == [0, 1, 0]
```

selecting a column would be very inefficient:

```
def get_column(coli: int):
    column = list()
    for rowi in range(len(matrix)):
        column.append(matrix[rowi][coli])
    return column
```

---

One solution would be to have two different lists of lists:

1. one in "row-major" order.
2. the other in "column-major" order.

```
row_then_col = [
    [[("r0", "c0")], [("r0", "c1")], [("r0", "c2")]],
    [[("r1", "c0")], [("r1", "c1")], [("r1", "c2")]],
    [[("r2", "c0")], [("r2", "c1")], [("r2", "c2")]],
]

ncols = 3
nrows = 3
col_then_row = list()
for coli in range(ncols):
    col_then_row.append([None]*nrows)
    for rowi in range(nrows):
        col_then_row[coli][rowi] = row_then_col[rowi][coli]
```

Then `col_then_row[coli]` will return a whole column and `row_then_col[rowi]` will return a whole row. You can fake "pointers" with a list of one element. This will allow a change in `col_then_row` to automatically be visible in `row_then_col`, and vice versa, without updating anything.

```
row_then_col[1][2][0] = "happy"
print(col_then_row[2][1][0])  # "happy"
```

There are many [algorithms for matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm). I recommend implementing [Strassen's algorithm](https://en.wikipedia.org/wiki/Strassen_algorithm). It is not the fastest in the world, but it is easier to understand than the truly fast ones. There are many ways to implement matrices. 
The beginnings of one implementation are shown below:

```
import io
import math
import abc

def flatten(container):
    for elem in container:
        if not hasattr(elem, "__iter__"):
            yield elem
        else:
            it_elem = iter(elem)
            subelem = next(it_elem)
            if subelem != elem:
                yield subelem
                for j in flatten(it_elem):
                    yield j

class MatrixNodeState(abc.ABC):
    """
    Abstract Base Class
    """
    pass

MatrixNodeState.MatrixNodeState = MatrixNodeState

class MatrixNodeStateNullNullNullClass(MatrixNodeState):
    @classmethod
    def ERR(cls):
        with io.StringIO() as string_stream:
            print(
                "YOU ARE OFF THE EDGE OF THE MAP!",
                "STOP ITERATING!",
                file=string_stream
            )
            msg = string_stream.getvalue()
        raise cls.OFF_THE_EDGE_OF_THE_MAP(msg)

    class OFF_THE_EDGE_OF_THE_MAP(Exception):
        pass

    def __getattribute__(self, *args):
        type(self).ERR()

    def __setattr__(self, *args):
        type(self).ERR()

MatrixNodeState.nullnullnull = MatrixNodeStateNullNullNullClass()

class MatrixNodeStateNullNullClass(MatrixNodeState):
    def __setattr__(*args):
        pass
    def __getattribute__(self, *args):
        return type(self).nullnullnull

MatrixNodeState.nullnull = MatrixNodeStateNullNullClass()

class MatrixNodeStateNullClass(MatrixNodeState):
    """
    This class exists because `None.left = n` would produce an error

    `null.left = k` ----- `no operation`.
        Does nothing
        Does not set the `left` attribute of the nullnull node.

    `x = node.left` returns `nullnull`
    """
    def __setattr__(*args):
        pass
    def __getattribute__(self, *args):
        return type(self).nullnull

MatrixNodeState.null = MatrixNodeStateNullClass()

class MatrixNodeStateNonNullNull(MatrixNodeState):
    def __init__(self, data):
        self.data = data
        self.up = type(self).null
        self.right = type(self).null
        self.down = type(self).null
        self.left = type(self).null

    def __setattr__(self, key, value):
        # `nullnull` and `nullnullnull` are instances, so test identity, not isinstance
        if value is type(self).nullnull:
            value = type(self).null
        elif value is type(self).nullnullnull:
            value = type(self).null
        super().__setattr__(key, value)

MatrixNodeState.MatrixNodeStateNonNullNull = MatrixNodeStateNonNullNull

class MatrixNode:
    def __init__(self, data=None):
        MatrixNodeState = type(self).MatrixNodeState
        if data:
            self.state = MatrixNodeState.MatrixNodeStateNonNullNull(data)
        else:
            self.state = MatrixNodeState.nullnull

    def __getattr__(self, attrname):
        # anything that is not a real instance attribute is delegated to the state
        return getattr(self.state, attrname)

    def __setattr__(self, attr_name, attr_value):
        if attr_name == 'state':
            # 'state' is the only attribute stored on the node itself
            object.__setattr__(self, attr_name, attr_value)
        else:
            setattr(self.state, attr_name, attr_value)

MatrixNode.MatrixNodeState = MatrixNodeState

class Matrix:
    """
    """
    MatrixNode = MatrixNode

    def __init__(self, xdims, xelems):
        """
        Example 1:
            m = Matrix([3, 3], [1, 0, 0, 0, 1, 0, 0, 0, 1])
        Example 2
            m = Matrix([3, 3], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
        """
        MatrixNode = type(self).MatrixNode
        idims = tuple(map(int, xdims))
        ielems = iter(flatten(xelems))
        nrows = idims[0]
        ncols = idims[1]
        self.d = dict()
        try:
            elem_count = 0
            left_node = MatrixNode.MatrixNodeState.nullnull
            for rowi in range(nrows):
                for coli in range(ncols):
                    ielem = next(ielems)
                    elem_count += 1
                    # the node directly above, or null on the first row
                    up_node = self.d.get((rowi - 1, coli), MatrixNode.MatrixNodeState.null)
                    node = MatrixNode(ielem)
                    self.d[(rowi, coli)] = node
                    node.left = left_node
                    left_node.right = node
                    node.up = up_node
                    up_node.down = node
                    left_node = node
        except StopIteration:
            with io.StringIO() as string_stream:
                print(
                    "Dimensions", idims, "indicated",
                    "that there should be", math.prod(idims), "elements.",
                    "Instead, only ", elem_count, "elements were found.",
                    file=string_stream
                )
                msg = string_stream.getvalue()
            raise TypeError(msg)

    def __getitem__(self, xkey):
        ikey = tuple(map(int, 
iter(flatten(xkey)))) return self.d[ikey].data def __setitem__(self, xkey, xval): ikey = tuple(map(int, iter(flatten(xkey)))) self.d[ikey].data = xval return def get_column(self, coli): coli = int(str(coli)) def get_row(self, rowi): rowi = int(str(rowi)) def __mul__(left, right): print("__mul__ was called") raise NotImplementedError() def __rmul__(right, left): """ m = Matrix([1, 2, 3]) 88 * m """ print("__rmul__ was called") raise NotImplementedError() def __matmul__(left, right): print("__MATMUL__ WAS CALLED") raise NotImplementedError() def __rmatmul__(right, left): print(40*"#") print("__rmatmul__") print("left == ", left) print("right == ", right) print(40 * "#") raise NotImplementedError() def __imatmul__(total, step): print("__imatmul__") raise NotImplementedError() def __str__(self): raise NotImplementedError() def __repr__(self): return type(self) + str(self) row_then_col = [ [[("r0", "c0")], [("r0", "c1")], [("r0", "c2")]], [[("r1", "c0")], [("r1", "c1")], [("r1", "c2")]], [[("r2", "c0")], [("r2", "c1")], [("r2", "c2")]], ] a = Matrix([3, 3], row_then_col) ```
58,868,529
I am trying to make a matrix and tensor in python without importing any modules such as numpy. Would there be a way to do this?
2019/11/15
[ "https://Stackoverflow.com/questions/58868529", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11210300/" ]
It would be a list of lists, e.g.: ``` matrix = [ [1, 0, 0], [0, 1, 0], [0, 0, 0], ] ``` You would then have to implement all of the mathematical operations (matrix multiplication etc) on top of that data structure.
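For instance, element-wise addition and transpose come out as short nested-loop (or comprehension) functions; the names below are just illustrative:

```python
# minimal sketches of operations on the list-of-lists representation
def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

print(add([[1, 2], [3, 4]], [[10, 20], [30, 40]]))  # [[11, 22], [33, 44]]
print(transpose([[1, 2, 3], [4, 5, 6]]))            # [[1, 4], [2, 5], [3, 6]]
```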
Python reserves the method name `__matmul__` for matrix multiplication

```
class Matrix:
    def __mul__(left, right):
        print("__mul__ was called")
    def __matmul__(left, right):
        print("__MATMUL__ WAS CALLED")
    def __rmatmul__(right, left):
        print(40*"#")
        print("__rmatmul__")
        print("left == ", left)
        print("right == ", right)
        print(40 * "#")
    def __imatmul__(total, step):
        print("__imatmul__")

a = Matrix()
b = Matrix()

a * b  # scalar multiplication __mul__
a @ b  # matrix multiplication __matmul__
3 @ b  # matrix multiplication __rmatmul__
a @= b
```

---

`__imatmul__` is similar to the following:

```
x = 5
x += 2  # __iadd__(x, 2)  `x = x + 2`
x *= 3  # __imul__(x, 3)  `x = x * 3`
```

---

Whenever you write `x * y`, Python attempts to get the definition of multiplication from the left-hand argument first. That is, `x * y` is initially `type(x).__mul__(x, y)` However, sometimes the left-hand thing doesn't know how to multiply itself by the right-hand thing.

```
class K:
    pass

a = K()
result = 55*a
result = type(55).__mul__(55, a)
```

The `int` class does not know how to multiply together `55` and an instance of class `K`. If `type(left).__mul__(left, right)` fails, then the back-up mechanism `type(right).__rmul__(right, left)` is called. If you write your own matrix class, then `__rmatmul__` and `__rmul__` are what will be called when you put a scalar multiple out in front, like 88.

```
m = Matrix()
88 * m  # __rmul__(m, 88)
88 @ m  # __rmatmul__(m, 88)
```

---

One way to implement a matrix is as a list of lists:

```
matrix = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
```

This has a notable disadvantage. While it is easy to select a row of the matrix:

```
matrix[1] == [0, 1, 0]
```

selecting a column would be very inefficient:

```
def get_column(coli: int):
    column = list()
    for rowi in range(len(matrix)):
        column.append(matrix[rowi][coli])
    return column
```

---

One solution would be to have two different lists of lists:

1. one in "row-major" order.
2. the other in "column-major" order.

```
row_then_col = [
    [[("r0", "c0")], [("r0", "c1")], [("r0", "c2")]],
    [[("r1", "c0")], [("r1", "c1")], [("r1", "c2")]],
    [[("r2", "c0")], [("r2", "c1")], [("r2", "c2")]],
]

ncols = 3
nrows = 3
col_then_row = list()
for coli in range(ncols):
    col_then_row.append([None]*nrows)
    for rowi in range(nrows):
        col_then_row[coli][rowi] = row_then_col[rowi][coli]
```

Then `col_then_row[coli]` will return a whole column and `row_then_col[rowi]` will return a whole row. You can fake "pointers" with a list of one element. This will allow a change in `col_then_row` to automatically be visible in `row_then_col`, and vice versa, without updating anything.

```
row_then_col[1][2][0] = "happy"
print(col_then_row[2][1][0])  # "happy"
```

There are many [algorithms for matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm). I recommend implementing [Strassen's algorithm](https://en.wikipedia.org/wiki/Strassen_algorithm). It is not the fastest in the world, but it is easier to understand than the truly fast ones. There are many ways to implement matrices. 
The beginnings of one implementation are shown below:

```
import io
import math
import abc

def flatten(container):
    for elem in container:
        if not hasattr(elem, "__iter__"):
            yield elem
        else:
            it_elem = iter(elem)
            subelem = next(it_elem)
            if subelem != elem:
                yield subelem
                for j in flatten(it_elem):
                    yield j

class MatrixNodeState(abc.ABC):
    """
    Abstract Base Class
    """
    pass

MatrixNodeState.MatrixNodeState = MatrixNodeState

class MatrixNodeStateNullNullNullClass(MatrixNodeState):
    @classmethod
    def ERR(cls):
        with io.StringIO() as string_stream:
            print(
                "YOU ARE OFF THE EDGE OF THE MAP!",
                "STOP ITERATING!",
                file=string_stream
            )
            msg = string_stream.getvalue()
        raise cls.OFF_THE_EDGE_OF_THE_MAP(msg)

    class OFF_THE_EDGE_OF_THE_MAP(Exception):
        pass

    def __getattribute__(self, *args):
        type(self).ERR()

    def __setattr__(self, *args):
        type(self).ERR()

MatrixNodeState.nullnullnull = MatrixNodeStateNullNullNullClass()

class MatrixNodeStateNullNullClass(MatrixNodeState):
    def __setattr__(*args):
        pass
    def __getattribute__(self, *args):
        return type(self).nullnullnull

MatrixNodeState.nullnull = MatrixNodeStateNullNullClass()

class MatrixNodeStateNullClass(MatrixNodeState):
    """
    This class exists because `None.left = n` would produce an error

    `null.left = k` ----- `no operation`.
        Does nothing
        Does not set the `left` attribute of the nullnull node.

    `x = node.left` returns `nullnull`
    """
    def __setattr__(*args):
        pass
    def __getattribute__(self, *args):
        return type(self).nullnull

MatrixNodeState.null = MatrixNodeStateNullClass()

class MatrixNodeStateNonNullNull(MatrixNodeState):
    def __init__(self, data):
        self.data = data
        self.up = type(self).null
        self.right = type(self).null
        self.down = type(self).null
        self.left = type(self).null

    def __setattr__(self, key, value):
        # `nullnull` and `nullnullnull` are instances, so test identity, not isinstance
        if value is type(self).nullnull:
            value = type(self).null
        elif value is type(self).nullnullnull:
            value = type(self).null
        super().__setattr__(key, value)

MatrixNodeState.MatrixNodeStateNonNullNull = MatrixNodeStateNonNullNull

class MatrixNode:
    def __init__(self, data=None):
        MatrixNodeState = type(self).MatrixNodeState
        if data:
            self.state = MatrixNodeState.MatrixNodeStateNonNullNull(data)
        else:
            self.state = MatrixNodeState.nullnull

    def __getattr__(self, attrname):
        # anything that is not a real instance attribute is delegated to the state
        return getattr(self.state, attrname)

    def __setattr__(self, attr_name, attr_value):
        if attr_name == 'state':
            # 'state' is the only attribute stored on the node itself
            object.__setattr__(self, attr_name, attr_value)
        else:
            setattr(self.state, attr_name, attr_value)

MatrixNode.MatrixNodeState = MatrixNodeState

class Matrix:
    """
    """
    MatrixNode = MatrixNode

    def __init__(self, xdims, xelems):
        """
        Example 1:
            m = Matrix([3, 3], [1, 0, 0, 0, 1, 0, 0, 0, 1])
        Example 2
            m = Matrix([3, 3], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
        """
        MatrixNode = type(self).MatrixNode
        idims = tuple(map(int, xdims))
        ielems = iter(flatten(xelems))
        nrows = idims[0]
        ncols = idims[1]
        self.d = dict()
        try:
            elem_count = 0
            left_node = MatrixNode.MatrixNodeState.nullnull
            for rowi in range(nrows):
                for coli in range(ncols):
                    ielem = next(ielems)
                    elem_count += 1
                    # the node directly above, or null on the first row
                    up_node = self.d.get((rowi - 1, coli), MatrixNode.MatrixNodeState.null)
                    node = MatrixNode(ielem)
                    self.d[(rowi, coli)] = node
                    node.left = left_node
                    left_node.right = node
                    node.up = up_node
                    up_node.down = node
                    left_node = node
        except StopIteration:
            with io.StringIO() as string_stream:
                print(
                    "Dimensions", idims, "indicated",
                    "that there should be", math.prod(idims), "elements.",
                    "Instead, only ", elem_count, "elements were found.",
                    file=string_stream
                )
                msg = string_stream.getvalue()
            raise TypeError(msg)

    def __getitem__(self, xkey):
        ikey = tuple(map(int, 
iter(flatten(xkey)))) return self.d[ikey].data def __setitem__(self, xkey, xval): ikey = tuple(map(int, iter(flatten(xkey)))) self.d[ikey].data = xval return def get_column(self, coli): coli = int(str(coli)) def get_row(self, rowi): rowi = int(str(rowi)) def __mul__(left, right): print("__mul__ was called") raise NotImplementedError() def __rmul__(right, left): """ m = Matrix([1, 2, 3]) 88 * m """ print("__rmul__ was called") raise NotImplementedError() def __matmul__(left, right): print("__MATMUL__ WAS CALLED") raise NotImplementedError() def __rmatmul__(right, left): print(40*"#") print("__rmatmul__") print("left == ", left) print("right == ", right) print(40 * "#") raise NotImplementedError() def __imatmul__(total, step): print("__imatmul__") raise NotImplementedError() def __str__(self): raise NotImplementedError() def __repr__(self): return type(self) + str(self) row_then_col = [ [[("r0", "c0")], [("r0", "c1")], [("r0", "c2")]], [[("r1", "c0")], [("r1", "c1")], [("r1", "c2")]], [[("r2", "c0")], [("r2", "c1")], [("r2", "c2")]], ] a = Matrix([3, 3], row_then_col) ```
61,027,648
I'm trying to run automatic test within CANoe from Jenkins. I wrote python scripts using CANoe COM interface. It can work normally with command line in the local computer(configured as the Jenkins slave). But when I tried to call the python scripts from Jenkins, I got the error as below. ``` Traceback (most recent call last): File "automatic_test.py", line 165, in <module> app = CanoeSync() File "automatic_test.py", line 23, in __init__ app = win32com.client.gencache.EnsureDispatch("CANoe.Application") File "C:\Python27\lib\site-packages\win32com\client\gencache.py", line 527, in EnsureDispatch disp = win32com.client.Dispatch(prog_id) File "C:\Python27\lib\site-packages\win32com\client\__init__.py", line 95, in Dispatch dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch,userName,clsctx) File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 114, in _GetGoodDispatchAndUserName return (_GetGoodDispatch(IDispatch, clsctx), userName) File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 91, in _GetGoodDispatch IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.IID_IDispatch) pywintypes.com_error: (-2147024891, 'Access is denied.', None, None) ``` Other information: CANoe version: 11.0SP2; Python version: 2.7;. Jenkins account is also the admin account for the slave computer. The python scripts work normally with the Jenkins account locally. Thanks a lot for any help.
2020/04/04
[ "https://Stackoverflow.com/questions/61027648", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13220015/" ]
Finally, the issue was resolved! It's a permission issue. Please try the solution from [Win32com codes not working on IIS](https://stackoverflow.com/questions/28195793/win32com-codes-not-working-on-iis) and remember to add your Jenkins user name and grant it full permissions.
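If you want to confirm which account the COM call actually runs under before touching DCOM settings, a small probe like this can be run as a Jenkins build step (my own sketch; it uses only standard pywin32 calls and the `CANoe.Application` ProgID from the question):

```python
# a minimal probe: print the effective user, then attempt the COM dispatch
import getpass

import pywintypes
import win32com.client

print("running as:", getpass.getuser())
try:
    app = win32com.client.Dispatch("CANoe.Application")
    print("COM dispatch OK:", app)
except pywintypes.com_error as err:
    # 'Access is denied' here points at DCOM launch/activation permissions
    print("COM dispatch failed:", err)
```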
I am facing the same problem when trying to launch the CANoe application from Jenkins, even after following the steps mentioned in [Win32com codes not working on IIS](https://stackoverflow.com/questions/28195793/win32com-codes-not-working-on-iis). If I execute the script from CMD it works fine, but the problem appears when launching it from Jenkins with my user ID. Are there any specific settings to be done in CANoe as well?

```
  File "C:\Program Files (x86)\Python279\lib\site-packages\win32com\client\dynamic.py", line 91, in _GetGoodDispatch
    IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.IID_IDispatch)
pywintypes.com_error: (-2147467238, 'The server process could not be started because the configured identity is incorrect. Check the username and password.', None, None)
```
45,913,275
I have a class that I call from my project that is executed through the url: data/refresh **urls.py** ``` from django.conf.urls import url, include from . import views from rest_framework import routers router = routers.DefaultRouter() urlpatterns = [ url(r'^refresh/$', views.refresh), url(r'^$', views.index, name='index'), url(r'^', include(router.urls)), url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')), ] ``` and my **views.py** ``` def refresh(request): gatorate = Gatorate() r = gatorate.start() if r["code"] < 0: return {"code":-1,"error":r["error"]} # while spider.has_next_page() == True: data = gatorate.run() # #run last page # spider.run() return JsonResponse({"code":1,"data":data} ``` The class runs and populates my database when I run in Development, and in production it works ONLY if I start python, import my module, and execute it from ssh on the server. However when I open its url, or use a button that executes the URL, I get 502 error, or, I dont get an error, and it just does not populate. I am wondering if there is a permission issue that is not allowing me to run the script, or the fact that it does not enter the virtual environment if that even makes sense. Also, if anyone can suggest a way to automate this to run everyday, I was planning on using CRON, but I like the flexibility to execute it from the url remotely maybe using a lambda to call it. I have everything already on aws EC2 to my RDS both inside a VPC. Most important I just want the thing to work. THANK YOU IN ADVANCED. **spider.spider.py:** ``` import sqlite3 import MySQLdb import time import os import django os.environ["DJANGO_SETTINGS_MODULE"] = 'web.settings' django.setup() from django.utils import timezone from webservice.models import BSR from vardata import ASINS class Gatorate: def __init__(self): self.amazon = None self.product = None self.asins = None self.ASINS = ASINS def start(self): #options = webdriver.ChromeOptions() #options.binary_location = '/usr/bin/chromedriver' #options.add_argument('headless') #self.driver = webdriver.Chrome(chrome_options=options) #self.driver.get(self.url) try: self.amazon = AmazonAPI(AMAZON_ACCESS_KEY, AMAZON_SECRET_KEY, AMAZON_ASSOC_TAG,region="US") # options = webdriver.ChromeOptions() # options.binary_location = '/usr/bin/google-chrome-stable' # # options.add_argument('headless') # self.driver = webdriver.Chrome(chrome_options=options) # #wait = WebDriverWait(self.driver, self.sleep) # self.driver.get(self.url) return {"code":1} except Exception as e : #self.driver.quit() return {"code":-1,"error": str(e)} def get_asins(self): try: asins = self.ASINS return {"code":1,"data":asins} except Exception as e: #print e return {"code":-2,"error":e} def refresh_asins(self,asins): """Extracts cars from page @scope: @param: list @return list""" extract = []#list to hold the extracted data print asins if asins is None: return{"code":-1,"error":"Not defined"} if len(asins) == 0: return {"code":-2,"error":"Empty value"} for asin in set(asins): print 'finding sku: '+ str(asin) try: #by ASIN or BY SKU choose one time.sleep(1) self.product = self.amazon.lookup(ItemId=asin) # SKU # self.product = self.amazon.lookup(ItemId=sku, IdType="SKU",SearchIndex='All') print str(self.product) try: ASIN = asin except Exception as e: ASIN = "N/A" print e try: Brand = self.product.get_attribute("Brand") except Exception as e: Brand = "" print e try: selling_price = self.product.price_and_currency selling_price = selling_price[0] except Exception as e: price = "" print e 
try: currency = self.product.price_and_currency currency = currency[1] except Exception as e: price = "" print e try: availability = self.product.availability except Exception as e: availability = "" print e try: sales_rank = self.product.sales_rank except Exception as e: sales_rank = "" print e try: Binding = self.product.get_attribute("Binding") time.sleep(5) except Exception as e: Binding = "" print e try: Color = self.product.get_attribute("Color") time.sleep(5) except Exception as e: Color = "" print e try: Title = self.product.get_attribute("Title") time.sleep(5) except Exception as e: Title = "" print e try: PackageQuantity = self.product.get_attribute("PackageQuantity") time.sleep(5) except Exception as e: PackageQuantity = "" print e try: ProductGroup = self.product.get_attribute("ProductGroup") time.sleep(5) except Exception as e: ProductGroup = "" print e c = BSR( ASIN=asin, Brand=Brand, Selling_Price=selling_price, Currency=currency, Availability=availability, BSR=sales_rank, Binding=Binding, Color=Color, Product_Description=Title, Package_Quantity=PackageQuantity, Product_Group=ProductGroup ).save() except Exception as e: print e def run(self): """ Run me""" r = self.get_asins() if r["code"] < 0: return {"code":-1,"error":r["error"]} print r["data"] time.sleep(5) r = self.refresh_asins(r["data"]) time.sleep(5) # print "this is what we are saving"+str(r) # r = self.save_skus(r) # if self.has_next_page(): # self.go_to_next_page() return {"code":1,"data":"complete"} ```
2017/08/28
[ "https://Stackoverflow.com/questions/45913275", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Create a `FileExtensionContentTypeProvider` object in the `Configure` method and add or remove a mapping for each MIME type, as follows:

```
public void Configure(IApplicationBuilder app)
{
    // Set up custom content types - associating file extension to MIME type
    var provider = new FileExtensionContentTypeProvider();
    // Add new mappings
    provider.Mappings[".myapp"] = "application/x-msdownload";
    provider.Mappings[".htm3"] = "text/html";
    provider.Mappings[".image"] = "image/png";
    // Replace an existing mapping
    provider.Mappings[".rtf"] = "application/x-msdownload";
    // Remove MP4 videos.
    provider.Mappings.Remove(".mp4");

    app.UseStaticFiles(new StaticFileOptions()
    {
        FileProvider = new PhysicalFileProvider(
            Path.Combine(Directory.GetCurrentDirectory(), @"wwwroot", "images")),
        RequestPath = new PathString("/MyImages"),
        ContentTypeProvider = provider
    });
    .
    .
    .
}
```

Go to this link for more information: [microsoft](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/static-files)
The static files middleware basically has [a very long list of explicit file extension to MIME type mappings](https://github.com/aspnet/AspNetCore/blob/v2.2.1/src/Middleware/StaticFiles/src/FileExtensionContentTypeProvider.cs). So the MIME type detection is solely based on the file extension. There is not really a clear way to hook into the middleware *after* the MIME type has been detected but before the static files middleware actually runs. However, you can use the [`StaticFileOptions.OnPrepareResponse`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.builder.staticfileoptions.onprepareresponse?view=aspnetcore-2.2) callback to hook into it to for example modify headers. Whether that’s enough for you depends on what you are trying to do. If you want to do a more sophisticated handling, possibly replacing the static files middleware, you would need to run your own implementation of the MIME type detection.
38,710,603
I am following the instructions from <http://www.highcharts.com/docs/getting-started/your-first-chart> to create a sample chart. I have saved the main chunk of JavaScript locally, and I add the `<script src="/chart.js"></script>` tag in my HTML to referenceence it. On the server side, I am using Python Flask to render an HTML template containing the script.

```
@app.route('/view', methods=['POST', 'GET'])
def show_graph_view():
    query = request.form['query']
    data = get_current_data(query)
    return render_template('graph.html', data=data)
```

I have a function that prepares some custom, current data that I want to plot instead, and I want the data to be available once the client browser loads the page. How do I add this data into the chart?
2016/08/02
[ "https://Stackoverflow.com/questions/38710603", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5579871/" ]
Assuming a globally accessible JavaScript function, just call it in the template with the data converted to JSON on the server via the `tojson` and `safe` filters.

```
<script type=text/javascript>
    doSomethingWith({{ data|tojson|safe }});
</script>
```

It's a bit hard to follow the logic when you mix together server side templating and client side scripting like this. But sometimes you gotta do it.
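A fuller, self-contained sketch (route and variable names are stand-ins for your own):

```python
# a minimal sketch: ship server-side data into the page as a JS array literal
from flask import Flask, render_template_string

app = Flask(__name__)

TEMPLATE = """
<script src="/chart.js"></script>
<script>
  var seriesData = {{ data|tojson|safe }};  // becomes e.g. [29.9, 71.5, 106.4]
  // hand seriesData to Highcharts here, e.g. chart.addSeries({data: seriesData})
</script>
"""

@app.route('/view')
def show_graph_view():
    data = [29.9, 71.5, 106.4]  # stand-in for get_current_data(query)
    return render_template_string(TEMPLATE, data=data)
```

(Recent Flask versions already mark `tojson` output as safe, so the `|safe` is only needed on older setups.)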
I stumbled on this old answer in 2021 and just wanted to add another option. Injecting Python seems to mess with your JavaScript, but you can actually do this:

```
<script>
    var names = []
    var i = 0

    // {% for i in range(0, names_len) %}
        names[i++] = "{{names[i]}}";
    // {% endfor %}
</script>
```

You can "comment out" the injected Python code, so your JavaScript remains correct :)
15,812,068
I am new to python, so I apologise if the answer is obvious. I am trying to pick 10 random numbers from a range of 128 numbers. And then if a certain number is present in this list, then print the (list) or a (word) or something. I can pick the 10 random numbers and print these random numbers, but I am having trouble printing if a number is present in this random list. I am picking and printing the 10 random numbers in the following way: ``` import random list = random.sample(range(128), 10) print list ``` I am trying to print (something), if a number in this case 5, is present in this random list in the following way: ``` import random list = random.sample(range(128), 10) if list == 5: print list ```
2013/04/04
[ "https://Stackoverflow.com/questions/15812068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2244880/" ]
Use the `in` operator: `if 5 in list`

Also, `list` is the name of the built-in type, so I suggest using another name for the variable.
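For instance, a minimal sketch with the variable renamed:

```python
import random

numbers = random.sample(range(128), 10)  # renamed from `list` to avoid shadowing
if 5 in numbers:
    print(numbers)
```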
You can do it like this:

```
if 5 in list:
    print list
```
3,295,065
I'm trying to write a very simple Python client for gpsd, but I get this error after the script has been running for some time:

```
Traceback (most recent call last):
  File "gps_cap.py", line 13, in <module>
    g.stream()
  File "/usr/lib/python2.6/site-packages/gps/gps.py", line 348, in stream
    gpsjson.stream(self, flags)
  File "/usr/lib/python2.6/site-packages/gps/client.py", line 176, in stream
    return self.send(arg + "}")
  File "/usr/lib/python2.6/site-packages/gps/client.py", line 111, in send
    self.sock.send(commands)
socket.error: [Errno 104] Connection reset by peer
```

and this is my Python code:

```
import os
from gps import *
from time import *

g = gps(mode=WATCH_ENABLE)

while 1:
    os.system('clear')
    g.poll()
    if PACKET_SET:
        g.stream()

    print
    print ' GPS reading'
    print '----------------------------------------'
    print 'latitude ' , g.fix.latitude
    print 'longitude ' , g.fix.longitude
    print 'time utc ' , g.utc,' + ', g.fix.time
    print 'altitude ' , g.fix.altitude
    print 'epc ' , g.fix.epc
    print 'epd ' , g.fix.epd
    print 'eps ' , g.fix.eps
    print 'epx ' , g.fix.epx
    print 'epv ' , g.fix.epv
    print 'ept ' , g.fix.ept
    print 'speed ' , g.fix.speed
    print 'climb ' , g.fix.climb
    print 'track ' , g.fix.track
    print 'mode ' , g.fix.mode
    print
    print 'sats ' , g.satellites

    sleep(1)
```

Can anyone help with this issue? I'm running gpsd 2.95 on an Arch Linux box. Thanks!
2010/07/20
[ "https://Stackoverflow.com/questions/3295065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/73437/" ]
I know this question is pretty old, but I'll still drop my answer here in case someone needs it in the future:

```
#! /usr/bin/python
# Written by Dan Mandle http://dan.mandle.me September 2012
# License: GPL 2.0

import os
from gps import *
from time import *
import time
import threading

gpsd = None #setting the global variable

os.system('clear') #clear the terminal (optional)

class GpsPoller(threading.Thread):
  def __init__(self):
    threading.Thread.__init__(self)
    global gpsd #bring it in scope
    gpsd = gps(mode=WATCH_ENABLE) #starting the stream of info
    self.current_value = None
    self.running = True #setting the thread running to true

  def run(self):
    global gpsd
    while gpsp.running:
      gpsd.next() #this will continue to loop and grab EACH set of gpsd info to clear the buffer

if __name__ == '__main__':
  gpsp = GpsPoller() # create the thread
  try:
    gpsp.start() # start it up
    while True:
      #It may take a second or two to get good data
      #print gpsd.fix.latitude,', ',gpsd.fix.longitude,' Time: ',gpsd.utc

      os.system('clear')

      print
      print ' GPS reading'
      print '----------------------------------------'
      print 'latitude ' , gpsd.fix.latitude
      print 'longitude ' , gpsd.fix.longitude
      print 'time utc ' , gpsd.utc,' + ', gpsd.fix.time
      print 'altitude (m)' , gpsd.fix.altitude
      print 'eps ' , gpsd.fix.eps
      print 'epx ' , gpsd.fix.epx
      print 'epv ' , gpsd.fix.epv
      print 'ept ' , gpsd.fix.ept
      print 'speed (m/s) ' , gpsd.fix.speed
      print 'climb ' , gpsd.fix.climb
      print 'track ' , gpsd.fix.track
      print 'mode ' , gpsd.fix.mode
      print
      print 'sats ' , gpsd.satellites

      time.sleep(5) #set to whatever

  except (KeyboardInterrupt, SystemExit): #when you press ctrl+c
    print "\nKilling Thread..."
    gpsp.running = False
    gpsp.join() # wait for the thread to finish what it's doing
  print "Done.\nExiting."
```

This code works with a thread and prints a nice readout of gpsd data to the screen. It can be terminated with Ctrl+C. All credit goes to <http://www.danmandle.com/blog/getting-gpsd-to-work-with-python/>
I would put some money on this snippet from the gpsd HOWTO page; also, thanks for the bootstrap code. <http://gpsd.berlios.de/client-howto.html>

> If you’re a clever sort, you’re already wondering what the daemon does if the application at the other end of the client socket doesn’t read data out of it as fast as gpsd is shipping it upwards. And the answer is this: eventually the socket buffer fills up, a write from the daemon throws an error, and the daemon shuts down that client socket.
>
> As long as your application checks for and reads socket data no less often than once a second, you won’t — and a second is a lot of time in which to come back around your main loop.
>
>
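A minimal sketch of what that implies for the code in the question (assuming the same old-style `gps` bindings): read a report on every pass so the daemon's buffer never fills and resets the connection.

```python
# drain one report per loop iteration instead of sleeping between polls
from gps import gps, WATCH_ENABLE

session = gps(mode=WATCH_ENABLE)
while True:
    report = session.next()           # blocks until gpsd sends one report
    if report['class'] == 'TPV':      # position reports carry the fix
        print(getattr(report, 'lat', 0.0), getattr(report, 'lon', 0.0))
```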
3,295,065
I'm trying to write a very simple Python client for gpsd, but I get this error after the script has been running for some time:

```
Traceback (most recent call last):
  File "gps_cap.py", line 13, in <module>
    g.stream()
  File "/usr/lib/python2.6/site-packages/gps/gps.py", line 348, in stream
    gpsjson.stream(self, flags)
  File "/usr/lib/python2.6/site-packages/gps/client.py", line 176, in stream
    return self.send(arg + "}")
  File "/usr/lib/python2.6/site-packages/gps/client.py", line 111, in send
    self.sock.send(commands)
socket.error: [Errno 104] Connection reset by peer
```

and this is my Python code:

```
import os
from gps import *
from time import *

g = gps(mode=WATCH_ENABLE)

while 1:
    os.system('clear')
    g.poll()
    if PACKET_SET:
        g.stream()

    print
    print ' GPS reading'
    print '----------------------------------------'
    print 'latitude ' , g.fix.latitude
    print 'longitude ' , g.fix.longitude
    print 'time utc ' , g.utc,' + ', g.fix.time
    print 'altitude ' , g.fix.altitude
    print 'epc ' , g.fix.epc
    print 'epd ' , g.fix.epd
    print 'eps ' , g.fix.eps
    print 'epx ' , g.fix.epx
    print 'epv ' , g.fix.epv
    print 'ept ' , g.fix.ept
    print 'speed ' , g.fix.speed
    print 'climb ' , g.fix.climb
    print 'track ' , g.fix.track
    print 'mode ' , g.fix.mode
    print
    print 'sats ' , g.satellites

    sleep(1)
```

Can anyone help with this issue? I'm running gpsd 2.95 on an Arch Linux box. Thanks!
2010/07/20
[ "https://Stackoverflow.com/questions/3295065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/73437/" ]
I would put some money on this snippet from the gpsd HOWTO page; also, thanks for the bootstrap code. <http://gpsd.berlios.de/client-howto.html>

> If you’re a clever sort, you’re already wondering what the daemon does if the application at the other end of the client socket doesn’t read data out of it as fast as gpsd is shipping it upwards. And the answer is this: eventually the socket buffer fills up, a write from the daemon throws an error, and the daemon shuts down that client socket.
>
> As long as your application checks for and reads socket data no less often than once a second, you won’t — and a second is a lot of time in which to come back around your main loop.
>
>
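If you do want to keep a once-a-second loop, a sketch of the safe pattern (assuming the `waiting()` helper in the same `gps` bindings) is to empty the buffer first, then sleep:

```python
from gps import gps, WATCH_ENABLE
import time

session = gps(mode=WATCH_ENABLE)
while True:
    while session.waiting():   # consume everything gpsd has queued
        session.next()
    print(session.fix.latitude, session.fix.longitude)
    time.sleep(1)              # safe: the buffer was just drained
```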
To keep an old question alive, pasted below is the current state of [GPS3, a Python 2.7-3.5 gpsd client](https://github.com/wadda/gps3) found at <https://github.com/wadda/gps3>.

GPS3 has two components: the *GPSDSocket* class and the *Fix* class.

GPSD delivers JSON data in a number of 'classes', TPV, SKY, etc. After connecting to the GPSD, GPS3 unpacks those JSON objects into dictionaries (`Fix.TPV['lat']`, `Fix.SKY['satellites']`, etc.)

Common use would create an instance, e.g., `fix = gps3.Fix()`, and all available data would be derived from the name of the native JSON object (e.g., `fix.TPV['speed']`, `fix.TPV['alt']`, etc.)

Compare usage with [a demo application gegps3.py](https://github.com/wadda/gps3/blob/master/gegps3.py) which creates a `kml` file (`/tmp/gps3_live.kml`) to be viewed in Google Earth.

```
#!/usr/bin/env python3
# coding=utf-8
"""
GPS3 (gps3.py) is a Python 2.7-3.5 GPSD interface (http://www.catb.org/gpsd)
Defaults host='127.0.0.1', port=2947, gpsd_protocol='json'

GPS3 has two classes.
1) 'GPSDSocket' to create a GPSD socket connection and request/retrieve GPSD output.
2) 'Fix' unpacks the streamed gpsd data into python dictionaries.

These dictionaries are created from the JSON data packets sent from the GPSD.

Import           import gps3
Instantiate      gps_connection = gps3.GPSDSocket(host='192.168.0.4')
                 gps_fix = gps3.Fix()
Iterate          for new_data in gps_connection:
                     if new_data:
                         gps_fix.refresh(new_data)
Use              print('Altitude = ',gps_fix.TPV['alt'])
                 print('Latitude = ',gps_fix.TPV['lat'])

Consult Lines 152-ff for Attribute/Key possibilities.
or http://www.catb.org/gpsd/gpsd_json.html

Run human.py; python[X] human.py [arguments] for a human experience.
"""
from __future__ import print_function

import json
import select
import socket
import sys

__author__ = 'Moe'
__copyright__ = 'Copyright 2015-2016  Moe'
__license__ = 'MIT'
__version__ = '0.2'

HOST = '127.0.0.1'  # gpsd
GPSD_PORT = 2947  # defaults
PROTOCOL = 'json'  # "

class GPSDSocket(object):
    """Establish a socket with gpsd, by which to send commands and receive data."""

    def __init__(self, host=HOST, port=GPSD_PORT, gpsd_protocol=PROTOCOL, devicepath=None):
        self.devicepath_alternate = devicepath
        self.response = None
        self.protocol = gpsd_protocol
        self.streamSock = None

        if host:
            self.connect(host, port)

    def connect(self, host, port):
        """Connect to a host on a given port.
        Arguments:
            port: default port=2947
            host: default host='127.0.0.1'
        """
        for alotta_stuff in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
            family, socktype, proto, _canonname, host_port = alotta_stuff
            try:
                self.streamSock = socket.socket(family, socktype, proto)
                self.streamSock.connect(host_port)
                self.streamSock.setblocking(False)
                self.watch(gpsd_protocol=self.protocol)
            except OSError as error:
                sys.stderr.write('\nGPSDSocket.connect OSError is --> {0}'.format(error))
                sys.stderr.write('\nAttempt to connect to a gpsd at {0} on port \'{1}\' failed:\n'.format(host, port))
                sys.exit(1)  # TODO: gpsd existence check and start

    def watch(self, enable=True, gpsd_protocol='json', devicepath=None):
        """watch gpsd in various gpsd_protocols or devices.
        Arguments:
            self:
            enable: (bool) stream data to socket
            gpsd_protocol: (str) 'json' | 'nmea' | 'rare' | 'raw' | 'scaled' | 'split24' | 'pps'
            devicepath: (str) device path - '/dev/ttyUSBn' for some number n or '/dev/whatever_works'
        Returns:
            command: (str) e.g., '?WATCH={"enable":true,"json":true};'
        """
        # N.B.: 'timing' requires special attention, as it is undocumented and lives with dragons. 
command = '?WATCH={{"enable":true,"{0}":true}}'.format(gpsd_protocol) if gpsd_protocol == 'rare': # 1 for a channel, gpsd reports the unprocessed NMEA or AIVDM data stream command = command.replace('"rare":true', '"raw":1') if gpsd_protocol == 'raw': # 2 channel that processes binary data, received data verbatim without hex-dumping. command = command.replace('"raw":true', '"raw",2') if not enable: command = command.replace('true', 'false') # sets -all- command values false . if devicepath: command = command.replace('}', ',"device":"') + devicepath + '"}' return self.send(command) def send(self, commands): """Ship commands to the daemon Arguments: commands: e.g., '?WATCH={{'enable':true,'json':true}}'|'?VERSION;'|'?DEVICES;'|'?DEVICE;'|'?POLL;' """ # The POLL command requests data from the last-seen fixes on all active GPS devices. # Devices must previously have been activated by ?WATCH to be pollable. if sys.version_info[0] < 3: # Not less than 3, but 'broken hearted' because self.streamSock.send(commands) # 2.7 chokes on 'bytes' and 'encoding=' else: self.streamSock.send(bytes(commands, encoding='utf-8')) # It craps out here when there is no gpsd running # TODO: Add recovery, check gpsd existence, re/start, etc.. def __iter__(self): """banana""" # <------- for scale return self def next(self, timeout=0): """Return empty unless new data is ready for the client. Arguments: timeout: Default timeout=0 range zero to float specifies a time-out as a floating point number in seconds. Will sit and wait for timeout seconds. When the timeout argument is omitted the function blocks until at least one file descriptor is ready. A time-out value of zero specifies a poll and never blocks. """ try: waitin, _waitout, _waiterror = select.select((self.streamSock,), (), (), timeout) if not waitin: return else: gpsd_response = self.streamSock.makefile() # '.makefile(buffering=4096)' In strictly Python3 self.response = gpsd_response.readline() return self.response except OSError as error: sys.stderr.write('The readline OSError in GPSDSocket.next is this: ', error) __next__ = next # Workaround for changes in iterating between Python 2.7 and 3 def close(self): """turn off stream and close socket""" if self.streamSock: self.watch(enable=False) self.streamSock.close() self.streamSock = None class Fix(object): """Retrieve JSON Object(s) from GPSDSocket and unpack it into respective gpsd 'class' dictionaries, TPV, SKY, etc. yielding hours of fun and entertainment. """ def __init__(self): """Potential data packages from gpsd for a generator of class attribute dictionaries""" packages = {'VERSION': {'release', 'proto_major', 'proto_minor', 'remote', 'rev'}, 'TPV': {'alt', 'climb', 'device', 'epc', 'epd', 'eps', 'ept', 'epv', 'epx', 'epy', 'lat', 'lon', 'mode', 'speed', 'tag', 'time', 'track'}, 'SKY': {'satellites', 'gdop', 'hdop', 'pdop', 'tdop', 'vdop', 'xdop', 'ydop'}, # Subset of SKY: 'satellites': {'PRN', 'ss', 'el', 'az', 'used'} # is always present. 
'GST': {'alt', 'device', 'lat', 'lon', 'major', 'minor', 'orient', 'rms', 'time'}, 'ATT': {'acc_len', 'acc_x', 'acc_y', 'acc_z', 'depth', 'device', 'dip', 'gyro_x', 'gyro_y', 'heading', 'mag_len', 'mag_st', 'mag_x', 'mag_y', 'mag_z', 'pitch', 'pitch_st', 'roll', 'roll_st', 'temperature', 'time', 'yaw', 'yaw_st'}, # 'POLL': {'active', 'tpv', 'sky', 'time'}, 'PPS': {'device', 'clock_sec', 'clock_nsec', 'real_sec', 'real_nsec', 'precision'}, 'TOFF': {'device', 'clock_sec', 'clock_nsec','real_sec', 'real_nsec' }, 'DEVICES': {'devices', 'remote'}, 'DEVICE': {'activated', 'bps', 'cycle', 'mincycle', 'driver', 'flags', 'native', 'parity', 'path', 'stopbits', 'subtype'}, # 'AIS': {} # see: http://catb.org/gpsd/AIVDM.html 'ERROR': {'message'}} # TODO: Full suite of possible GPSD output for package_name, dataset in packages.items(): _emptydict = {key: 'n/a' for key in dataset} setattr(self, package_name, _emptydict) self.DEVICES['devices'] = {key: 'n/a' for key in packages['DEVICE']} # How does multiple listed devices work? # self.POLL = {'tpv': self.TPV, 'sky': self.SKY, 'time': 'n/a', 'active': 'n/a'} def refresh(self, gpsd_data_package): """Sets new socket data as Fix attributes in those initialied dictionaries Arguments: self: gpsd_data_package (json object): Provides: self attribute dictionaries, e.g., self.TPV['lat'], self.SKY['gdop'] Raises: AttributeError: 'str' object has no attribute 'keys' when the device falls out of the system ValueError, KeyError: most likely extra, or mangled JSON data, should not happen, but that applies to a lot of things. """ try: fresh_data = json.loads(gpsd_data_package) # The reserved word 'class' is popped from JSON object class package_name = fresh_data.pop('class', 'ERROR') # gpsd data package errors are also 'ERROR'. package = getattr(self, package_name, package_name) # packages are named for JSON object class for key in package.keys(): # TODO: Rollover and retry. It fails here when device disappears package[key] = fresh_data.get(key, 'n/a') # Updates and restores 'n/a' if key is absent in the socket # response, present --> 'key: 'n/a'' instead.' except AttributeError: # 'str' object has no attribute 'keys' print('No Data') return except (ValueError, KeyError) as error: sys.stderr.write(str(error)) # Look for extra data in stream return if __name__ == '__main__': print('\n', __doc__) # # Someday a cleaner Python interface will live here # # End ```
3,295,065
I'm trying to write a very simple Python client for gpsd, but I get this error after the script has been running for some time:

```
Traceback (most recent call last):
  File "gps_cap.py", line 13, in <module>
    g.stream()
  File "/usr/lib/python2.6/site-packages/gps/gps.py", line 348, in stream
    gpsjson.stream(self, flags)
  File "/usr/lib/python2.6/site-packages/gps/client.py", line 176, in stream
    return self.send(arg + "}")
  File "/usr/lib/python2.6/site-packages/gps/client.py", line 111, in send
    self.sock.send(commands)
socket.error: [Errno 104] Connection reset by peer
```

and this is my Python code:

```
import os
from gps import *
from time import *

g = gps(mode=WATCH_ENABLE)

while 1:
    os.system('clear')
    g.poll()
    if PACKET_SET:
        g.stream()

    print
    print ' GPS reading'
    print '----------------------------------------'
    print 'latitude ' , g.fix.latitude
    print 'longitude ' , g.fix.longitude
    print 'time utc ' , g.utc,' + ', g.fix.time
    print 'altitude ' , g.fix.altitude
    print 'epc ' , g.fix.epc
    print 'epd ' , g.fix.epd
    print 'eps ' , g.fix.eps
    print 'epx ' , g.fix.epx
    print 'epv ' , g.fix.epv
    print 'ept ' , g.fix.ept
    print 'speed ' , g.fix.speed
    print 'climb ' , g.fix.climb
    print 'track ' , g.fix.track
    print 'mode ' , g.fix.mode
    print
    print 'sats ' , g.satellites

    sleep(1)
```

Can anyone help with this issue? I'm running gpsd 2.95 on an Arch Linux box. Thanks!
2010/07/20
[ "https://Stackoverflow.com/questions/3295065", "https://Stackoverflow.com", "https://Stackoverflow.com/users/73437/" ]
I know this question is pretty old, but I'll still drop my answer here in case someone needs it in the future:

```
#! /usr/bin/python
# Written by Dan Mandle http://dan.mandle.me September 2012
# License: GPL 2.0

import os
from gps import *
from time import *
import time
import threading

gpsd = None #setting the global variable

os.system('clear') #clear the terminal (optional)

class GpsPoller(threading.Thread):
  def __init__(self):
    threading.Thread.__init__(self)
    global gpsd #bring it in scope
    gpsd = gps(mode=WATCH_ENABLE) #starting the stream of info
    self.current_value = None
    self.running = True #setting the thread running to true

  def run(self):
    global gpsd
    while gpsp.running:
      gpsd.next() #this will continue to loop and grab EACH set of gpsd info to clear the buffer

if __name__ == '__main__':
  gpsp = GpsPoller() # create the thread
  try:
    gpsp.start() # start it up
    while True:
      #It may take a second or two to get good data
      #print gpsd.fix.latitude,', ',gpsd.fix.longitude,' Time: ',gpsd.utc

      os.system('clear')

      print
      print ' GPS reading'
      print '----------------------------------------'
      print 'latitude ' , gpsd.fix.latitude
      print 'longitude ' , gpsd.fix.longitude
      print 'time utc ' , gpsd.utc,' + ', gpsd.fix.time
      print 'altitude (m)' , gpsd.fix.altitude
      print 'eps ' , gpsd.fix.eps
      print 'epx ' , gpsd.fix.epx
      print 'epv ' , gpsd.fix.epv
      print 'ept ' , gpsd.fix.ept
      print 'speed (m/s) ' , gpsd.fix.speed
      print 'climb ' , gpsd.fix.climb
      print 'track ' , gpsd.fix.track
      print 'mode ' , gpsd.fix.mode
      print
      print 'sats ' , gpsd.satellites

      time.sleep(5) #set to whatever

  except (KeyboardInterrupt, SystemExit): #when you press ctrl+c
    print "\nKilling Thread..."
    gpsp.running = False
    gpsp.join() # wait for the thread to finish what it's doing
  print "Done.\nExiting."
```

This code works with a thread and prints a nice readout of gpsd data to the screen. It can be terminated with Ctrl+C. All credit goes to <http://www.danmandle.com/blog/getting-gpsd-to-work-with-python/>
To keep an old question alive, pasted below is the current state of [GPS3, a Python 2.7-3.5 gpsd client](https://github.com/wadda/gps3) found at <https://github.com/wadda/gps3>.

GPS3 has two components: the *GPSDSocket* class and the *Fix* class.

GPSD delivers JSON data in a number of 'classes', TPV, SKY, etc. After connecting to the GPSD, GPS3 unpacks those JSON objects into dictionaries (`Fix.TPV['lat']`, `Fix.SKY['satellites']`, etc.)

Common use would create an instance, e.g., `fix = gps3.Fix()`, and all available data would be derived from the name of the native JSON object (e.g., `fix.TPV['speed']`, `fix.TPV['alt']`, etc.)

Compare usage with [a demo application gegps3.py](https://github.com/wadda/gps3/blob/master/gegps3.py) which creates a `kml` file (`/tmp/gps3_live.kml`) to be viewed in Google Earth.

```
#!/usr/bin/env python3
# coding=utf-8
"""
GPS3 (gps3.py) is a Python 2.7-3.5 GPSD interface (http://www.catb.org/gpsd)
Defaults host='127.0.0.1', port=2947, gpsd_protocol='json'

GPS3 has two classes.
1) 'GPSDSocket' to create a GPSD socket connection and request/retrieve GPSD output.
2) 'Fix' unpacks the streamed gpsd data into python dictionaries.

These dictionaries are created from the JSON data packets sent from the GPSD.

Import           import gps3
Instantiate      gps_connection = gps3.GPSDSocket(host='192.168.0.4')
                 gps_fix = gps3.Fix()
Iterate          for new_data in gps_connection:
                     if new_data:
                         gps_fix.refresh(new_data)
Use              print('Altitude = ',gps_fix.TPV['alt'])
                 print('Latitude = ',gps_fix.TPV['lat'])

Consult Lines 152-ff for Attribute/Key possibilities.
or http://www.catb.org/gpsd/gpsd_json.html

Run human.py; python[X] human.py [arguments] for a human experience.
"""
from __future__ import print_function

import json
import select
import socket
import sys

__author__ = 'Moe'
__copyright__ = 'Copyright 2015-2016  Moe'
__license__ = 'MIT'
__version__ = '0.2'

HOST = '127.0.0.1'  # gpsd
GPSD_PORT = 2947  # defaults
PROTOCOL = 'json'  # "

class GPSDSocket(object):
    """Establish a socket with gpsd, by which to send commands and receive data."""

    def __init__(self, host=HOST, port=GPSD_PORT, gpsd_protocol=PROTOCOL, devicepath=None):
        self.devicepath_alternate = devicepath
        self.response = None
        self.protocol = gpsd_protocol
        self.streamSock = None

        if host:
            self.connect(host, port)

    def connect(self, host, port):
        """Connect to a host on a given port.
        Arguments:
            port: default port=2947
            host: default host='127.0.0.1'
        """
        for alotta_stuff in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
            family, socktype, proto, _canonname, host_port = alotta_stuff
            try:
                self.streamSock = socket.socket(family, socktype, proto)
                self.streamSock.connect(host_port)
                self.streamSock.setblocking(False)
                self.watch(gpsd_protocol=self.protocol)
            except OSError as error:
                sys.stderr.write('\nGPSDSocket.connect OSError is --> {0}'.format(error))
                sys.stderr.write('\nAttempt to connect to a gpsd at {0} on port \'{1}\' failed:\n'.format(host, port))
                sys.exit(1)  # TODO: gpsd existence check and start

    def watch(self, enable=True, gpsd_protocol='json', devicepath=None):
        """watch gpsd in various gpsd_protocols or devices.
        Arguments:
            self:
            enable: (bool) stream data to socket
            gpsd_protocol: (str) 'json' | 'nmea' | 'rare' | 'raw' | 'scaled' | 'split24' | 'pps'
            devicepath: (str) device path - '/dev/ttyUSBn' for some number n or '/dev/whatever_works'
        Returns:
            command: (str) e.g., '?WATCH={"enable":true,"json":true};'
        """
        # N.B.: 'timing' requires special attention, as it is undocumented and lives with dragons. 
command = '?WATCH={{"enable":true,"{0}":true}}'.format(gpsd_protocol) if gpsd_protocol == 'rare': # 1 for a channel, gpsd reports the unprocessed NMEA or AIVDM data stream command = command.replace('"rare":true', '"raw":1') if gpsd_protocol == 'raw': # 2 channel that processes binary data, received data verbatim without hex-dumping. command = command.replace('"raw":true', '"raw",2') if not enable: command = command.replace('true', 'false') # sets -all- command values false . if devicepath: command = command.replace('}', ',"device":"') + devicepath + '"}' return self.send(command) def send(self, commands): """Ship commands to the daemon Arguments: commands: e.g., '?WATCH={{'enable':true,'json':true}}'|'?VERSION;'|'?DEVICES;'|'?DEVICE;'|'?POLL;' """ # The POLL command requests data from the last-seen fixes on all active GPS devices. # Devices must previously have been activated by ?WATCH to be pollable. if sys.version_info[0] < 3: # Not less than 3, but 'broken hearted' because self.streamSock.send(commands) # 2.7 chokes on 'bytes' and 'encoding=' else: self.streamSock.send(bytes(commands, encoding='utf-8')) # It craps out here when there is no gpsd running # TODO: Add recovery, check gpsd existence, re/start, etc.. def __iter__(self): """banana""" # <------- for scale return self def next(self, timeout=0): """Return empty unless new data is ready for the client. Arguments: timeout: Default timeout=0 range zero to float specifies a time-out as a floating point number in seconds. Will sit and wait for timeout seconds. When the timeout argument is omitted the function blocks until at least one file descriptor is ready. A time-out value of zero specifies a poll and never blocks. """ try: waitin, _waitout, _waiterror = select.select((self.streamSock,), (), (), timeout) if not waitin: return else: gpsd_response = self.streamSock.makefile() # '.makefile(buffering=4096)' In strictly Python3 self.response = gpsd_response.readline() return self.response except OSError as error: sys.stderr.write('The readline OSError in GPSDSocket.next is this: ', error) __next__ = next # Workaround for changes in iterating between Python 2.7 and 3 def close(self): """turn off stream and close socket""" if self.streamSock: self.watch(enable=False) self.streamSock.close() self.streamSock = None class Fix(object): """Retrieve JSON Object(s) from GPSDSocket and unpack it into respective gpsd 'class' dictionaries, TPV, SKY, etc. yielding hours of fun and entertainment. """ def __init__(self): """Potential data packages from gpsd for a generator of class attribute dictionaries""" packages = {'VERSION': {'release', 'proto_major', 'proto_minor', 'remote', 'rev'}, 'TPV': {'alt', 'climb', 'device', 'epc', 'epd', 'eps', 'ept', 'epv', 'epx', 'epy', 'lat', 'lon', 'mode', 'speed', 'tag', 'time', 'track'}, 'SKY': {'satellites', 'gdop', 'hdop', 'pdop', 'tdop', 'vdop', 'xdop', 'ydop'}, # Subset of SKY: 'satellites': {'PRN', 'ss', 'el', 'az', 'used'} # is always present. 
'GST': {'alt', 'device', 'lat', 'lon', 'major', 'minor', 'orient', 'rms', 'time'}, 'ATT': {'acc_len', 'acc_x', 'acc_y', 'acc_z', 'depth', 'device', 'dip', 'gyro_x', 'gyro_y', 'heading', 'mag_len', 'mag_st', 'mag_x', 'mag_y', 'mag_z', 'pitch', 'pitch_st', 'roll', 'roll_st', 'temperature', 'time', 'yaw', 'yaw_st'}, # 'POLL': {'active', 'tpv', 'sky', 'time'}, 'PPS': {'device', 'clock_sec', 'clock_nsec', 'real_sec', 'real_nsec', 'precision'}, 'TOFF': {'device', 'clock_sec', 'clock_nsec','real_sec', 'real_nsec' }, 'DEVICES': {'devices', 'remote'}, 'DEVICE': {'activated', 'bps', 'cycle', 'mincycle', 'driver', 'flags', 'native', 'parity', 'path', 'stopbits', 'subtype'}, # 'AIS': {} # see: http://catb.org/gpsd/AIVDM.html 'ERROR': {'message'}} # TODO: Full suite of possible GPSD output for package_name, dataset in packages.items(): _emptydict = {key: 'n/a' for key in dataset} setattr(self, package_name, _emptydict) self.DEVICES['devices'] = {key: 'n/a' for key in packages['DEVICE']} # How does multiple listed devices work? # self.POLL = {'tpv': self.TPV, 'sky': self.SKY, 'time': 'n/a', 'active': 'n/a'} def refresh(self, gpsd_data_package): """Sets new socket data as Fix attributes in those initialied dictionaries Arguments: self: gpsd_data_package (json object): Provides: self attribute dictionaries, e.g., self.TPV['lat'], self.SKY['gdop'] Raises: AttributeError: 'str' object has no attribute 'keys' when the device falls out of the system ValueError, KeyError: most likely extra, or mangled JSON data, should not happen, but that applies to a lot of things. """ try: fresh_data = json.loads(gpsd_data_package) # The reserved word 'class' is popped from JSON object class package_name = fresh_data.pop('class', 'ERROR') # gpsd data package errors are also 'ERROR'. package = getattr(self, package_name, package_name) # packages are named for JSON object class for key in package.keys(): # TODO: Rollover and retry. It fails here when device disappears package[key] = fresh_data.get(key, 'n/a') # Updates and restores 'n/a' if key is absent in the socket # response, present --> 'key: 'n/a'' instead.' except AttributeError: # 'str' object has no attribute 'keys' print('No Data') return except (ValueError, KeyError) as error: sys.stderr.write(str(error)) # Look for extra data in stream return if __name__ == '__main__': print('\n', __doc__) # # Someday a cleaner Python interface will live here # # End ```
37,526,026
`python3` is my local Anaconda version of Python, while `python3.4` is the system one. I can import the `gi` module with `python3.4` (probably because I installed it with `sudo apt-get install python3-gi`) but `python3` doesn't see it:

```
$ python3 -c 'import gi'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'gi'
$ python3.4 -c 'import gi'
$ python3 --version
Python 3.5.1 :: Anaconda 4.0.0 (64-bit)
$ python3.4 --version
Python 3.4.3
$ which python3
/home/kossak/anaconda3/bin/python3
$ which python3.4
/usr/bin/python3.4
$
```

How should I install `gi` for Anaconda Python? Or maybe I can somehow import system-wide modules?

My OS:

```
System:    Kernel: 3.19.0-32-generic x86_64 (64 bit gcc: 4.8.2)
           Desktop: Cinnamon 2.8.8 (Gtk 2.24.23)  dm: mdm  Distro: Linux Mint 17.3 Rosa
```
2016/05/30
[ "https://Stackoverflow.com/questions/37526026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3024945/" ]
This is how you do it: (example for Linux Mint and python3) First install `gi` module using your distro package manager. For Linux Mint it would be: ``` sudo apt-get install python3-gi ``` Then run your distro python to check where the module is located: ``` $ /usr/bin/python3 Python 3.5.2 (default, Sep 10 2016, 08:21:44) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import gi >>> print(gi) <module 'gi' from '/usr/lib/python3/dist-packages/gi/__init__.py'> ``` So in my case the module `gi` was installed to `/usr/lib/python3/dist-packages/gi`. Assuming you have your anaconda installed in `/home/kossak/anaconda3`, create a link to gi module in the proper folder: ``` ln -s /usr/lib/python3/dist-packages/gi /home/kossak/anaconda3/lib/python3.5/site-packages/ ``` If you have conda virtual environment and want `gi` module to be available there, the path should be a bit different. Assuming the virtual env is called TEST: ``` ln -s /usr/lib/python3/dist-packages/gi /home/kossak/anaconda3/envs/TEST/lib/python3.5/site-packages/ ``` and it works: ``` $ python3 Python 3.5.2 |Anaconda custom (64-bit)| (default, Jul 2 2016, 17:53:06) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import gi >>> ```
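If you are unsure which `site-packages` directory a given interpreter actually uses (the path below is just my install; yours will differ), you can ask the interpreter itself before creating the link, and use whichever directory it reports as the link target:

```
$ /home/kossak/anaconda3/bin/python3 -c "import site; print(site.getsitepackages())"
```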
For me,

```
conda install -c pkgw/label/superseded gtk3
```

worked.
37,526,026
`python3` is my local Anaconda version of Python, while `python3.4` is the system one. I can import the `gi` module with `python3.4` (probably because I installed it with `sudo apt-get install python3-gi`) but `python3` doesn't see it:

```
$ python3 -c 'import gi'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'gi'
$ python3.4 -c 'import gi'
$ python3 --version
Python 3.5.1 :: Anaconda 4.0.0 (64-bit)
$ python3.4 --version
Python 3.4.3
$ which python3
/home/kossak/anaconda3/bin/python3
$ which python3.4
/usr/bin/python3.4
$
```

How should I install `gi` for Anaconda Python? Or maybe I can somehow import system-wide modules?

My OS:

```
System:    Kernel: 3.19.0-32-generic x86_64 (64 bit gcc: 4.8.2)
           Desktop: Cinnamon 2.8.8 (Gtk 2.24.23)  dm: mdm  Distro: Linux Mint 17.3 Rosa
```
2016/05/30
[ "https://Stackoverflow.com/questions/37526026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3024945/" ]
This is how you do it: (example for Linux Mint and python3) First install `gi` module using your distro package manager. For Linux Mint it would be: ``` sudo apt-get install python3-gi ``` Then run your distro python to check where the module is located: ``` $ /usr/bin/python3 Python 3.5.2 (default, Sep 10 2016, 08:21:44) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import gi >>> print(gi) <module 'gi' from '/usr/lib/python3/dist-packages/gi/__init__.py'> ``` So in my case the module `gi` was installed to `/usr/lib/python3/dist-packages/gi`. Assuming you have your anaconda installed in `/home/kossak/anaconda3`, create a link to gi module in the proper folder: ``` ln -s /usr/lib/python3/dist-packages/gi /home/kossak/anaconda3/lib/python3.5/site-packages/ ``` If you have conda virtual environment and want `gi` module to be available there, the path should be a bit different. Assuming the virtual env is called TEST: ``` ln -s /usr/lib/python3/dist-packages/gi /home/kossak/anaconda3/envs/TEST/lib/python3.5/site-packages/ ``` and it works: ``` $ python3 Python 3.5.2 |Anaconda custom (64-bit)| (default, Jul 2 2016, 17:53:06) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import gi >>> ```
If you want to perform a proper install (without soft linking) inside a Linux Anaconda environment, follow the steps below. Keep in mind that the errors may vary if you have not installed gcc previously (I assumed it was installed by default when I posted; however, not everyone does so). **Install it if you don't know very well what you're doing, to avoid misunderstandings.**

1. Create or open your conda environment.
2. Attempt to install pygobject (don't worry, it will most likely throw an error):

```
pip install pygobject
```

In Linux, it will print the usual installation progress followed by an error:

> (...) Please, try executing the following in your system:
> sudo apt install libgirepository1.0-dev

Depending on your operating system or installed dependencies, the command name or package name may vary. Just follow the instructions and allow the system to install the packages. This step doesn't change anything; it is just to give you the precise info of the package you need on your system. **If you run this on Windows, it will ask you to install a specific version of Visual Studio. If you are on Windows, download the required Visual Studio from the MS website, install it, reboot your computer and go to step 5 (in my case I never needed step 4 on Windows; however, I'm not an MS expert).**

3. Close your conda environment

```
conda deactivate
```

4. Next you need to install pygobject from the conda-forge repository in your conda environment. You can add the repo to your favourite conda package manager or simply run the following command as root (it is important to be root). I did it outside the project, but you may do it inside:

```
conda install -c conda-forge pygobject
```

In my case conda was not in the path. I had it installed in:

```
/opt/anaconda3/bin/
```

You can run the following command from your normal user to find out where conda is:

```
which conda
```

5. Open the conda environment

```
source activate <your env name>
```

or the corresponding anaconda activate syntax (I never use it so I can't remember precisely)

6. Repeat the first step and now the installation won't fail:

```
pip install pygobject
```

OR if you specifically want to install gi you can run:

```
pip install pgi
```

The correct package is "pgi", NOT "gi".

As gi is a dependency of pygobject, everything will get properly installed. You can check it by running

```
python
>>> import gi
```

You may find the following useful for Windows, although you may need to work it out a bit: [GStreamer python bindings on Windows](https://stackoverflow.com/questions/17278953/gstreamer-python-bindings-for-windows)
37,526,026
`python3` is my local Anaconda version of Python, while `python3.4` is the system one. I can import the `gi` module with `python3.4` (probably because I installed it with `sudo apt-get install python3-gi`) but `python3` doesn't see it:

```
$ python3 -c 'import gi'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'gi'
$ python3.4 -c 'import gi'
$ python3 --version
Python 3.5.1 :: Anaconda 4.0.0 (64-bit)
$ python3.4 --version
Python 3.4.3
$ which python3
/home/kossak/anaconda3/bin/python3
$ which python3.4
/usr/bin/python3.4
$
```

How should I install `gi` for Anaconda Python? Or maybe I can somehow import system-wide modules?

My OS:

```
System:    Kernel: 3.19.0-32-generic x86_64 (64 bit gcc: 4.8.2)
           Desktop: Cinnamon 2.8.8 (Gtk 2.24.23)  dm: mdm  Distro: Linux Mint 17.3 Rosa
```
2016/05/30
[ "https://Stackoverflow.com/questions/37526026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3024945/" ]
If you're using conda virtualenv for python-3, you can use ``` $ conda install -c conda-forge pygobject ``` in your virtualenv You can read more about this on: <https://anaconda.org/conda-forge/pygobject>
For me,

```
conda install -c pkgw/label/superseded gtk3
```

worked.
37,526,026
`python3` is my local Anaconda version of Python, while `python3.4` is the system one. I can import the `gi` module with `python3.4` (probably because I installed it with `sudo apt-get install python3-gi`) but `python3` doesn't see it:

```
$ python3 -c 'import gi'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'gi'
$ python3.4 -c 'import gi'
$ python3 --version
Python 3.5.1 :: Anaconda 4.0.0 (64-bit)
$ python3.4 --version
Python 3.4.3
$ which python3
/home/kossak/anaconda3/bin/python3
$ which python3.4
/usr/bin/python3.4
$
```

How should I install `gi` for Anaconda Python? Or maybe I can somehow import system-wide modules?

My OS:

```
System:    Kernel: 3.19.0-32-generic x86_64 (64 bit gcc: 4.8.2)
           Desktop: Cinnamon 2.8.8 (Gtk 2.24.23)  dm: mdm  Distro: Linux Mint 17.3 Rosa
```
2016/05/30
[ "https://Stackoverflow.com/questions/37526026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3024945/" ]
If you're using conda virtualenv for python-3, you can use ``` $ conda install -c conda-forge pygobject ``` in your virtualenv You can read more about this on: <https://anaconda.org/conda-forge/pygobject>
If you want to perform a proper install (without soft linking) inside a Linux Anaconda environment, follow the steps below. Keep in mind that the errors may vary if you have not installed gcc previously (I assumed it was installed by default when I posted; however, not everyone does so). **Install it if you don't know very well what you're doing, to avoid misunderstandings.**

1. Create or open your conda environment.
2. Attempt to install pygobject (don't worry, it will most likely throw an error):

```
pip install pygobject
```

In Linux, it will print the usual installation progress followed by an error:

> (...) Please, try executing the following in your system:
> sudo apt install libgirepository1.0-dev

Depending on your operating system or installed dependencies, the command name or package name may vary. Just follow the instructions and allow the system to install the packages. This step doesn't change anything; it is just to give you the precise info of the package you need on your system. **If you run this on Windows, it will ask you to install a specific version of Visual Studio. If you are on Windows, download the required Visual Studio from the MS website, install it, reboot your computer and go to step 5 (in my case I never needed step 4 on Windows; however, I'm not an MS expert).**

3. Close your conda environment

```
conda deactivate
```

4. Next you need to install pygobject from the conda-forge repository in your conda environment. You can add the repo to your favourite conda package manager or simply run the following command as root (it is important to be root). I did it outside the project, but you may do it inside:

```
conda install -c conda-forge pygobject
```

In my case conda was not in the path. I had it installed in:

```
/opt/anaconda3/bin/
```

You can run the following command from your normal user to find out where conda is:

```
which conda
```

5. Open the conda environment

```
source activate <your env name>
```

or the corresponding anaconda activate syntax (I never use it so I can't remember precisely)

6. Repeat the first step and now the installation won't fail:

```
pip install pygobject
```

OR if you specifically want to install gi you can run:

```
pip install pgi
```

The correct package is "pgi", NOT "gi".

As gi is a dependency of pygobject, everything will get properly installed. You can check it by running

```
python
>>> import gi
```

You may find the following useful for Windows, although you may need to work it out a bit: [GStreamer python bindings on Windows](https://stackoverflow.com/questions/17278953/gstreamer-python-bindings-for-windows)
37,526,026
`python3` is my local Anaconda version of Python, while `python3.4` is the system one. I can import the `gi` module with `python3.4` (probably because I installed it with `sudo apt-get install python3-gi`) but `python3` doesn't see it:

```
$ python3 -c 'import gi'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'gi'
$ python3.4 -c 'import gi'
$ python3 --version
Python 3.5.1 :: Anaconda 4.0.0 (64-bit)
$ python3.4 --version
Python 3.4.3
$ which python3
/home/kossak/anaconda3/bin/python3
$ which python3.4
/usr/bin/python3.4
$
```

How should I install `gi` for Anaconda Python? Or maybe I can somehow import system-wide modules?

My OS:

```
System:    Kernel: 3.19.0-32-generic x86_64 (64 bit gcc: 4.8.2)
           Desktop: Cinnamon 2.8.8 (Gtk 2.24.23)  dm: mdm  Distro: Linux Mint 17.3 Rosa
```
2016/05/30
[ "https://Stackoverflow.com/questions/37526026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3024945/" ]
If you want to perform a proper install (without soft linking) inside a Linux Anaconda environment, follow the steps below. Keep in mind that the errors may vary if you have not installed gcc previously (I assumed it was installed by default when I posted; however, not everyone does so). **Install it if you don't know very well what you're doing, to avoid misunderstandings.**

1. Create or open your conda environment.
2. Attempt to install pygobject (don't worry, it will most likely throw an error):

```
pip install pygobject
```

In Linux, it will print the usual installation progress followed by an error:

> (...) Please, try executing the following in your system:
> sudo apt install libgirepository1.0-dev

Depending on your operating system or installed dependencies, the command name or package name may vary. Just follow the instructions and allow the system to install the packages. This step doesn't change anything; it is just to give you the precise info of the package you need on your system. **If you run this on Windows, it will ask you to install a specific version of Visual Studio. If you are on Windows, download the required Visual Studio from the MS website, install it, reboot your computer and go to step 5 (in my case I never needed step 4 on Windows; however, I'm not an MS expert).**

3. Close your conda environment

```
conda deactivate
```

4. Next you need to install pygobject from the conda-forge repository in your conda environment. You can add the repo to your favourite conda package manager or simply run the following command as root (it is important to be root). I did it outside the project, but you may do it inside:

```
conda install -c conda-forge pygobject
```

In my case conda was not in the path. I had it installed in:

```
/opt/anaconda3/bin/
```

You can run the following command from your normal user to find out where conda is:

```
which conda
```

5. Open the conda environment

```
source activate <your env name>
```

or the corresponding anaconda activate syntax (I never use it so I can't remember precisely)

6. Repeat the first step and now the installation won't fail:

```
pip install pygobject
```

OR if you specifically want to install gi you can run:

```
pip install pgi
```

The correct package is "pgi", NOT "gi".

As gi is a dependency of pygobject, everything will get properly installed. You can check it by running

```
python
>>> import gi
```

You may find the following useful for Windows, although you may need to work it out a bit: [GStreamer python bindings on Windows](https://stackoverflow.com/questions/17278953/gstreamer-python-bindings-for-windows)
For me,

```
conda install -c pkgw/label/superseded gtk3
```

worked.
52,964,961
I want to write a script that is able to read a structure from an Autodesk Inventor model. Inventor is installed and running, and a connection to it through win32com is established. After loading the model and setting a few variables, I'm stuck. I have this list oStructuredBOMView.BOMRows:

```
>> print(len(oStructuredBOMView.BOMRows))
2
```

I can reference one item:

```
>> print(oStructuredBOMView.BOMRows.Item(1))
<win32com.gen_py.Autodesk Inventor Object Library.BOMRow instance at 0x88831536>
```

And with `dir` I can display all the attributes:

```
>> print(dir(oStructuredBOMView.BOMRows.Item(1)))
['CLSID', 'GetReferenceKey', '_ApplyTypes_', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__gt__', '__hash__', '__init__', '__iter__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_get_good_object_', '_get_good_single_object_', '_oleobj_', '_prop_map_get_', '_prop_map_put_', 'coclass_clsid']
```

This is the point I can't get past. How can I use these functions/methods to extract more information? Example:

```
>> oStructuredBOMView.BOMRows.Item(1).sizeof
Traceback (most recent call last):
  File "<pyshell#22>", line 1, in <module>
    oStructuredBOMView.BOMRows.Item(1).sizeof
  File "C:\python\Python35\lib\site-packages\win32com\client\__init__.py", line 473, in __getattr__
    raise AttributeError("'%s' object has no attribute '%s'" % (repr(self), attr))
AttributeError: '<win32com.gen_py.Autodesk Inventor Object Library.BOMRow instance at 0x88417896>' object has no attribute 'sizeof'
>> sizeof(oStructuredBOMView.BOMRows.Item(1))
Traceback (most recent call last):
  File "<pyshell#23>", line 1, in <module>
    sizeof(oStructuredBOMView.BOMRows.Item(1))
NameError: name 'sizeof' is not defined
```
2018/10/24
[ "https://Stackoverflow.com/questions/52964961", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9307482/" ]
First, `sizeof` is not a valid Python function, so your second try won't work. Second, the attribute of the BOMRows Item object is named `__sizeof__`, so you have to use that and not just 'sizeof':

```
oStructuredBOMView.BOMRows.Item(1).__sizeof__
```

Third, those are just attributes of the object; they may or may not be functions/methods. You can find out which are callable with:

```
isMethod = callable(oStructuredBOMView.BOMRows.Item(1).__sizeof__)
```

See <https://docs.python.org/3/library/functions.html#callable>

Update: You can try to `print()` all the attributes (this is kind of obvious, so maybe you already tried that). As I don't know anything about Autodesk Inventor models, I can't help you with a direct response, but you could try to dig deeper into the object. If the attribute is a 'primitive type' (read: printable object or object that has some string representation) you can use it directly. If your attribute returns another object, then you have to analyze it again with `dir()` or `vars()`. You can ignore most of the double underscore attributes, as they are most likely from Python. Example of trying to print all attributes of an unknown object:

```
for key in dir(my_obj):
    print('{}: {}'.format(key, getattr(my_obj, key)))
```

But I am sure there should be some documentation available on how to use the Autodesk Inventor objects.
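Building on the `callable()` idea, here is a small sketch (it assumes the same `oStructuredBOMView` object from the question) that lists every public attribute and whether it is a method or a plain property, skipping Python's own double-underscore names:

```
item = oStructuredBOMView.BOMRows.Item(1)
for name in dir(item):
    if name.startswith('_'):
        continue  # skip Python/COM plumbing attributes
    attr = getattr(item, name)
    kind = 'method' if callable(attr) else 'property'
    print('{}: {}'.format(name, kind))
```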
The Inventor object browser is your best friend for seeing the properties and methods of each class in Inventor. To open it, go to the VBA editor and press F2.

```
>> print(oStructuredBOMView.BOMRows.Item(1))
<win32com.gen_py.Autodesk Inventor Object Library.BOMRow instance at 0x88831536>
```

This tells you that the class is BOMRow. Type the class name into the search field and go from there.
20,730,487
Appindicator is not available on Ubuntu 13.10?

```
jason@jz:~$ python
Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import appindicator
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named appindicator
>>>
```

Is there an alternative to it? Or can I install it myself? Thanks in advance.
2013/12/22
[ "https://Stackoverflow.com/questions/20730487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1985786/" ]
``` sudo apt-get install python-appindicator ```
Yes, you can install it yourself. I'd recommend looking at [pip](http://www.pip-installer.org/en/latest/) for installing additional python packages.
20,730,487
Appindicator is not available on Ubuntu 13.10?

```
jason@jz:~$ python
Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import appindicator
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named appindicator
>>>
```

Is there an alternative to it? Or can I install it myself? Thanks in advance.
2013/12/22
[ "https://Stackoverflow.com/questions/20730487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1985786/" ]
``` sudo apt-get install python-appindicator ```
Probably should aim for Python 3 instead: `sudo apt-get install gir1.2-appindicator`
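With the GObject-introspection bindings installed, a minimal indicator looks roughly like this (a sketch only; it assumes the AppIndicator3 typelib is available, e.g. from the gir1.2-appindicator3-0.1 package, and the id/icon names are made up):

```
import gi
gi.require_version('Gtk', '3.0')
gi.require_version('AppIndicator3', '0.1')
from gi.repository import Gtk, AppIndicator3

indicator = AppIndicator3.Indicator.new(
    'example-indicator',                  # a unique id for this indicator
    'dialog-information',                 # any icon name from the theme
    AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)

menu = Gtk.Menu()                         # an indicator must have a menu
quit_item = Gtk.MenuItem(label='Quit')
quit_item.connect('activate', Gtk.main_quit)
menu.append(quit_item)
menu.show_all()
indicator.set_menu(menu)

Gtk.main()
```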
20,730,487
Appindicator is not available on Ubuntu 13.10?

```
jason@jz:~$ python
Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import appindicator
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named appindicator
>>>
```

Is there an alternative to it? Or can I install it myself? Thanks in advance.
2013/12/22
[ "https://Stackoverflow.com/questions/20730487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1985786/" ]
Probably should aim for Python 3 instead: `sudo apt-get install gir1.2-appindicator`
Yes, you can install it yourself. I'd recommend looking at [pip](http://www.pip-installer.org/en/latest/) for installing additional python packages.
44,073,785
I'm learning Python from the book 'Think Python.' My computer's OS is Windows 10. I tried using **os.popen** ('14.8 Pipes' in <http://www.greenteapress.com/thinkpython/html/thinkpython015.html>) but the book's example is for Unix. Out of curiosity, I want to run the Unix commands from a Python script. I already installed Git Bash and VirtualBox, but I don't know how to connect them and use Python.
2017/05/19
[ "https://Stackoverflow.com/questions/44073785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6797284/" ]
The example provided by the book is:

> 14.8 Pipes
>
> Most operating systems provide a command-line interface, also known as a shell. Shells usually provide commands to navigate the file system and launch applications. For example, in Unix you can change directories with cd, display the contents of a directory with ls, and launch a web browser by typing (for example) firefox.
>
> Any program that you can launch from the shell can also be launched from Python using a pipe. A pipe is an object that represents a running program.
>
> For example, the Unix command ls -l normally displays the contents of the current directory (in long format). You can launch ls with os.popen:
>
> `>>> cmd = 'ls -l'`
>
> `>>> fp = os.popen(cmd)`

It appears you are getting tripped up converting the `ls` command, which lists directory contents on \*nix, to a Windows command. If you search for "what is the windows version of ls" in a search engine, you will discover that Windows provides similar functionality through `dir`. For more useful conversions, check out the conversion table on [lemonda.net](https://www.lemoda.net/windows/windows2unix/windows2unix.html).

Changing the code to

`>>> cmd = 'dir'`

and calling it via

`>>> fp = os.popen(cmd)`

should enable the example to run on Windows.
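Putting it together, a minimal sketch of the book's pipe example adapted to Windows (assuming you only want to read and print the directory listing):

```
import os

cmd = 'dir'           # Windows counterpart of the book's 'ls -l'
fp = os.popen(cmd)    # open a pipe to the shell command
res = fp.read()       # read everything the command printed
stat = fp.close()     # None means the command exited successfully
print(res)
```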
If you have the latest update of Windows 10, you can use Ubuntu, a version of Linux, from your command prompt by just typing the word 'bash' and waiting for it to download some files. If you see a $ at the end of your command line, you got it. After that, just type 'sudo apt-get install python', enter your Windows password when it asks for your password, and hit the 'Y' key when it asks if you are sure. Then you should be able to go on from the book from there. **EDIT:** I believe you have to run cmd as administrator to install bash. You may also have to enable it by hitting 'Windows key + R' to open the Run window, typing 'appwiz.cpl' to open the uninstall window, clicking the button on the left of the screen that says 'Turn Windows features on or off', waiting for that to load, checking the box by 'Windows Subsystem for Linux (Beta)', and then trying to type 'bash' in an elevated cmd prompt.
14,098,854
I'm looking to implement the pLSI algorithm in R. I found this [python code](http://cs.gmu.edu/~pwang7/code/pLSI.py), but I am stuck in a few places, like line #8 `for termID, value in docTermDic[docID].iteritems():`. In this case I don't understand how to implement both termID and value inside a for loop, and what docTermDic[docID].iteritems() is. Is there any tool available with which I can extract the algorithm or the pseudocode, so that I can code it up myself?
2012/12/31
[ "https://Stackoverflow.com/questions/14098854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1847296/" ]
I assure you that, barring errors (which you're not checking for), a file is created. Xcode has a tendency to use the final build dir as the current working directory when running from the IDE. You can change this by editing the active Scheme:

1. Click on the Project box to the right of the STOP button on the main toolbar.
2. Select Edit Scheme.
3. Select the "Run" sub-scheme in the left pane list.
4. Select the Options tab.
5. Check the "Use Custom Working Directory" checkbox.
6. Set the working directory to some place you *know* (like your project root folder).

Note: This is also where you will set up any command line arguments (those are on the Arguments tab, not the Options tab), should you desire to do so.
First of all, you must check whether the file has been opened/created or not; then you should search for the file. Most probably the file hasn't been created yet. Here is the code:

```
#include <iostream>
#include <fstream>

using namespace std;

int main(void)
{
    ofstream file;
    file.open("tuna.txt");
    if (file.is_open())
    {
        file << "I love tuna and tuna loves me!\n";
        file.close();
    }
    else
        cout << "No file has been created!\n";
    return 0;
}
```

As you haven't given an absolute path to the open function, look in the folder where your code file is; most probably the file will be there.
14,098,854
I'm looking to implement the pLSI algorithm in R. I found this [python code](http://cs.gmu.edu/~pwang7/code/pLSI.py), but I am stuck in a few places, like line #8 `for termID, value in docTermDic[docID].iteritems():`. In this case I don't understand how to implement both termID and value inside a for loop, and what docTermDic[docID].iteritems() is. Is there any tool available with which I can extract the algorithm or the pseudocode, so that I can code it up myself?
2012/12/31
[ "https://Stackoverflow.com/questions/14098854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1847296/" ]
I assure you that, barring errors (which you're not checking for), a file is created. Xcode has a tendency to use the final build dir as the current working directory when running from the IDE. You can change this by editing the active Scheme:

1. Click on the Project box to the right of the STOP button on the main toolbar.
2. Select Edit Scheme.
3. Select the "Run" sub-scheme in the left pane list.
4. Select the Options tab.
5. Check the "Use Custom Working Directory" checkbox.
6. Set the working directory to some place you *know* (like your project root folder).

Note: This is also where you will set up any command line arguments (those are on the Arguments tab, not the Options tab), should you desire to do so.
In the *Products* folder (in the *Project Navigator* of the *Navigator* tab on the left-hand side of the Xcode IDE) you will find the executable. Click on the executable. If not already shown, make sure the *Utilities* tab on the right-hand side of the Xcode IDE is shown and *Show the file inspector* is selected. In the *inspector*, you will see *Full Path* showing the path to the executable, and at the end of it there will be an arrow. Clicking on this arrow will open up the *Finder* window at that location, and this is where you should also see all the text files and other files that have been created from within the program.

PS. The reason that you couldn't find the tuna.txt file when using the search is that it is in a hidden folder along with the executable.
14,098,854
I'm looking to implement the pLSI algorithm in R. I found this [python code](http://cs.gmu.edu/~pwang7/code/pLSI.py), but I am stuck in a few places, like line #8 `for termID, value in docTermDic[docID].iteritems():`. In this case I don't understand how to implement both termID and value inside a for loop, and what docTermDic[docID].iteritems() is. Is there any tool available with which I can extract the algorithm or the pseudocode, so that I can code it up myself?
2012/12/31
[ "https://Stackoverflow.com/questions/14098854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1847296/" ]
In the *Products* folder (in the *Project Navigator* of the *Navigator* tab on the left-hand side of the Xcode IDE) you will find the executable. Click on the executable. If not already shown, make sure the *Utilities* tab on the right-hand side of the Xcode IDE is shown and *Show the file inspector* is selected. In the *inspector*, you will see *Full Path* showing the path to the executable, and at the end of it there will be an arrow. Clicking on this arrow will open up the *Finder* window at that location, and this is where you should also see all the text files and other files that have been created from within the program.

PS. The reason that you couldn't find the tuna.txt file when using the search is that it is in a hidden folder along with the executable.
First of all, you must check whether the file has been opened/created or not; then you should search for the file. Most probably the file hasn't been created yet. Here is the code:

```
#include <iostream>
#include <fstream>

using namespace std;

int main(void)
{
    ofstream file;
    file.open("tuna.txt");
    if (file.is_open())
    {
        file << "I love tuna and tuna loves me!\n";
        file.close();
    }
    else
        cout << "No file has been created!\n";
    return 0;
}
```

As you haven't given an absolute path to the open function, look in the folder where your code file is; most probably the file will be there.
5,569,742
I'm looking for the best way to go about creating a job scheduler for different types of jobs. Jobs are threaded and some jobs need to finish before the next step in the process can run. This is currently all managed through a database table...which I think is fine. But if there's a better way to manage dependencies, I'm all ears. Preferably I'd like to do this in Python. I see there's the parallel python module, which looks great, but I am concerned about this dependency issue between jobs. Can someone recommend anything that does what I need to do, or how to go about doing this? Much thanks! D

UPDATE: This is to be done over a cluster of servers, each with a limited set of available workers...1 per port. Does Celery or SCons support this?
2011/04/06
[ "https://Stackoverflow.com/questions/5569742", "https://Stackoverflow.com", "https://Stackoverflow.com/users/458704/" ]
I've had a lot of success with [Celery](http://celeryproject.org/)
[SCons](http://www.scons.org/) can be helpful for this. It's biased toward software construction (compiling, linking, etc.) but you can easily define new result classes, new commands and new source classes so that it will process your data (and dependencies) properly. Based on the update, you probably need something like [BuiltBot](https://wiki.mozilla.org/Buildbot), also.
5,569,742
I'm looking for the best way to go about creating a job scheduler for different types of jobs. Jobs are threaded and some jobs need to finish before the next step in the process can run. This is currently all managed through a database table...which I think is fine. But if there's a better way to manage dependencies, I'm all ears. Preferably I'd like to do this in Python. I see there's the parallel python module, which looks great, but I am concerned about this dependency issue between jobs. Can someone recommend anything that does what I need to do, or how to go about doing this? Much thanks! D

UPDATE: This is to be done over a cluster of servers, each with a limited set of available workers...1 per port. Does Celery or SCons support this?
2011/04/06
[ "https://Stackoverflow.com/questions/5569742", "https://Stackoverflow.com", "https://Stackoverflow.com/users/458704/" ]
[Luigi](https://github.com/spotify/luigi) looks very interesting. It allows you to create workflows - sets of related jobs whose dependencies are managed by Luigi. It also has a simple web interface providing a dependency graph.
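As a rough illustration of how those dependencies look in code (a sketch only; the task names and target files are made up):

```
import luigi

class Fetch(luigi.Task):
    def output(self):
        return luigi.LocalTarget('fetched.txt')

    def run(self):
        with self.output().open('w') as f:
            f.write('raw data\n')

class Process(luigi.Task):
    def requires(self):
        return Fetch()  # Luigi runs Fetch first, and only if its output is missing

    def output(self):
        return luigi.LocalTarget('processed.txt')

    def run(self):
        with self.input().open() as src, self.output().open('w') as dst:
            dst.write(src.read().upper())

if __name__ == '__main__':
    luigi.build([Process()], local_scheduler=True)
```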
[SCons](http://www.scons.org/) can be helpful for this. It's biased toward software construction (compiling, linking, etc.) but you can easily define new result classes, new commands and new source classes so that it will process your data (and dependencies) properly. Based on the update, you probably need something like [Buildbot](https://wiki.mozilla.org/Buildbot), also.
56,784,138
I am printing out this canvas in Python that I am using for a board. When I populate it and print with

```
for row in canvas:
    print(row)
```

it prints like this:

```
['.', '.', '.', '.']
['.', '.', '.', '.']
['.', '.', '.', '.']
```

I need it to print like this:

```
. . . .
. . . .
. . . .
```

Is there something I can do to strip it, by chance? Thank you
2019/06/27
[ "https://Stackoverflow.com/questions/56784138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You're using the Python default printing method of a `list`. What you want is to construct a string from your list that looks the way you want. This should do the trick:

```py
for row in canvas:
    print(" ".join(row))
```
Take the list and convert it to a string, then print it:

```
for row in canvas:
    print(" ".join(row))
```
56,784,138
I am printing out this canvas in Python that I am using for a board. When I populate it and print with

```
for row in canvas:
    print(row)
```

it prints like this:

```
['.', '.', '.', '.']
['.', '.', '.', '.']
['.', '.', '.', '.']
```

I need it to print like this:

```
. . . .
. . . .
. . . .
```

Is there something I can do to strip it, by chance? Thank you
2019/06/27
[ "https://Stackoverflow.com/questions/56784138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Here you go. Just use [the string join method](https://www.programiz.com/python-programming/methods/string/join):

```
canvas = [['.', '.', '.', '.'], ['.', '.', '.', '.'], ['.', '.', '.', '.']]
for row in canvas:
    print(" ".join(row))
```
Take the list and convert it to a string, then print it:

```
for row in canvas:
    print(" ".join(row))
```
56,784,138
I am printing out this canvas in Python that I am using for a board. When I populate it and print with

```
for row in canvas:
    print(row)
```

it prints like this:

```
['.', '.', '.', '.']
['.', '.', '.', '.']
['.', '.', '.', '.']
```

I need it to print like this:

```
. . . .
. . . .
. . . .
```

Is there something I can do to strip it, by chance? Thank you
2019/06/27
[ "https://Stackoverflow.com/questions/56784138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You're using the Python default printing method of a `list`. What you want is to construct a string from your list that looks the way you want. This should do the trick:

```py
for row in canvas:
    print(" ".join(row))
```
As you are using Python 3, where `print` is a function, you might do:

```
canvas = [['.', '.'], ['.', '.']]
for row in canvas:
    print(*row)
```

Output:

```
. .
. .
```

I used the so-called unpack operator here (`*` before `row`); if you want to know more about that, I suggest [this short article](https://codeyarns.com/2012/04/26/unpack-operator-in-python/)
56,784,138
I am printing out this canvas in Python that I am using for a board. When I populate it and print with

```
for row in canvas:
    print(row)
```

it prints like this:

```
['.', '.', '.', '.']
['.', '.', '.', '.']
['.', '.', '.', '.']
```

I need it to print like this:

```
. . . .
. . . .
. . . .
```

Is there something I can do to strip it, by chance? Thank you
2019/06/27
[ "https://Stackoverflow.com/questions/56784138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Here you go. Just use [the string join method](https://www.programiz.com/python-programming/methods/string/join):

```
canvas = [['.', '.', '.', '.'], ['.', '.', '.', '.'], ['.', '.', '.', '.']]
for row in canvas:
    print(" ".join(row))
```
As you are using Python 3, where `print` is a function, you might do:

```
canvas = [['.', '.'], ['.', '.']]
for row in canvas:
    print(*row)
```

Output:

```
. .
. .
```

I used the so-called unpack operator here (`*` before `row`); if you want to know more about that, I suggest [this short article](https://codeyarns.com/2012/04/26/unpack-operator-in-python/)
25,183,031
I am new at Python and I'm currently exploring some of its core functionality. Could you explain to me why the following example always returns false in the case of a string with special characters:

```
>>> a="x"
>>> b="x"
>>> a is b
True
>>> a="xxx"
>>> b="xxx"
>>> a is b
True
>>> a="xü"
>>> b="xü"
>>> a is b
False
>>> a="ü"
>>> b="ü"
>>> a is b
True
>>> #strange: with one special character it works as expected
```

I understand that the storage positions are different for strings with special characters on each assignment, and I already checked it with the id() function, but for what reason does Python handle strings in this inconsistent way?
2014/08/07
[ "https://Stackoverflow.com/questions/25183031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2833639/" ]
Python (the reference implementation at least) has a cache for small integers and strings. I guess unicode strings outside the ASCII range are bigger than the cache threshold (internally unicode is stored using 16- or 32-bit wide characters, [UCS-2 or UCS-4](http://en.wikipedia.org/wiki/Universal_Character_Set)) and so they are not cached.

[edit] Found a more complete answer at: [About the changing id of a Python immutable string](https://stackoverflow.com/questions/24245324/about-the-changing-id-of-a-python-immutable-string)

See also: <http://www.laurentluce.com/posts/python-string-objects-implementation/>
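A hedged demonstration (CPython 3 shown; interning is an implementation detail, so the exact behaviour can vary between versions): `sys.intern()` forces the caching that the automatic mechanism skips for longer or non-ASCII strings:

```
import sys

base = 'xü'
a = base * 10        # built at run time, so not interned
b = base * 10
print(a is b)        # usually False: two separate objects with equal contents

a = sys.intern(base * 10)
b = sys.intern(base * 10)
print(a is b)        # True: both names refer to the single interned copy
```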
With `is` you're not testing equality between strings, you're testing equality between objects, which is resolved through pointers. So your code:

```
>>> a="x"
>>> b="x"
>>> a is b
True
```

is not asking "are a and b the same character?", it's asking "are a and b the same object?". Since there's a small-object cache (for small integers and one-byte strings, as has been said before), the answer is "yes, both variables refer to the same object in memory, the x character small object". When you work with an object that is not eligible for the cache, as in:

```
>>> a="xü"
>>> b="xü"
>>> a is b
False
```

what is going on is that a and b now refer to different objects in memory, so the `is` operator resolves to false (a and b do not point to the same object!). If the idea is comparing strings, you should use the `==` operator instead of `is`.
40,658,834
I have a dictionary in python 2.7 that has the following structure: ``` x = { '1': ['a', 'b', 'c'], '2': ['d', 'e', 'f'] } ``` The length of the value list is always the same and I would like to basically zip the value lists with corresponding values. So, in this case it will create three new lists as: ``` [['a', 'd'], ['b', 'e'], ['c', 'f']] ``` I know I can write an awful looking loop to do this but I was wondering if there is a more pythonic way to do this. I need to preserve the order.
2016/11/17
[ "https://Stackoverflow.com/questions/40658834", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2713740/" ]
You can do the following: ``` zip(*x.values()) ``` Explanation: * `x.values()` returns `[['a', 'b', 'c'], ['d', 'e', 'f']]` *(order may change so you might need to sort `x` first.)* * `zip([a, b], [c, d])` returns `[[a, c], [b, d]]` * To expand `x.values()` into arguments to `zip`, prepend `*` to it.
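To handle the ordering caveat and produce the lists-of-lists shape asked for, a small sketch:

```
x = {'1': ['a', 'b', 'c'], '2': ['d', 'e', 'f']}

# sort the keys so the row order is deterministic, then zip and listify
result = [list(group) for group in zip(*(x[k] for k in sorted(x)))]
print(result)  # [['a', 'd'], ['b', 'e'], ['c', 'f']]
```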
```
res = list(zip(x['1'], x['2']))
res = list(map(list, res))
```

An explanation:

```
zip(x['1'], x['2'])
```

creates a `zip` object that links up your pairs.

```
res = list(zip(x['1'], x['2']))
```

That `zip` object now becomes a list of tuples.

```
list(map(list, res))
```

For each element in `res` (each tuple), change the data structure from tuple to `list`, as you requested in your desired output above (`map` the `list` data type onto all elements in `res`). Then, convert that `map` object into a list to arrive at the final, desired result.
40,658,834
I have a dictionary in python 2.7 that has the following structure: ``` x = { '1': ['a', 'b', 'c'], '2': ['d', 'e', 'f'] } ``` The length of the value list is always the same and I would like to basically zip the value lists with corresponding values. So, in this case it will create three new lists as: ``` [['a', 'd'], ['b', 'e'], ['c', 'f']] ``` I know I can write an awful looking loop to do this but I was wondering if there is a more pythonic way to do this. I need to preserve the order.
2016/11/17
[ "https://Stackoverflow.com/questions/40658834", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2713740/" ]
You can do the following: ``` zip(*x.values()) ``` Explanation: * `x.values()` returns `[['a', 'b', 'c'], ['d', 'e', 'f']]` *(order may change so you might need to sort `x` first.)* * `zip([a, b], [c, d])` returns `[[a, c], [b, d]]` * To expand `x.values()` into arguments to `zip`, prepend `*` to it.
This single line solves the problem but is likely worse-looking than your loop. It loops over the sorted keys to produce a list to pass to zip, and then maps over the result, converting the tuples into lists.

```
>>> x = {'1': ['a', 'b', 'c'], '2': ['d', 'e', 'f']}
>>> map(list, zip(*[x[k] for k in sorted(x)]))
[['a', 'd'], ['b', 'e'], ['c', 'f']]
```
40,658,834
I have a dictionary in python 2.7 that has the following structure: ``` x = { '1': ['a', 'b', 'c'], '2': ['d', 'e', 'f'] } ``` The length of the value list is always the same and I would like to basically zip the value lists with corresponding values. So, in this case it will create three new lists as: ``` [['a', 'd'], ['b', 'e'], ['c', 'f']] ``` I know I can write an awful looking loop to do this but I was wondering if there is a more pythonic way to do this. I need to preserve the order.
2016/11/17
[ "https://Stackoverflow.com/questions/40658834", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2713740/" ]
This single line solves the problem but is likely worse-looking than your loop. It loops over the sorted keys to produce a list to pass to zip, and then maps over the result, converting the tuples into lists.

```
>>> x = {'1': ['a', 'b', 'c'], '2': ['d', 'e', 'f']}
>>> map(list, zip(*[x[k] for k in sorted(x)]))
[['a', 'd'], ['b', 'e'], ['c', 'f']]
```
```
res = list(zip(x['1'], x['2']))
res = list(map(list, res))
```

An explanation:

```
zip(x['1'], x['2'])
```

creates a `zip` object that links up your pairs.

```
res = list(zip(x['1'], x['2']))
```

That `zip` object now becomes a list of tuples.

```
list(map(list, res))
```

For each element in `res` (each tuple), change the data structure from tuple to `list`, as you requested in your desired output above (`map` the `list` data type onto all elements in `res`). Then, convert that `map` object into a list to arrive at the final, desired result.
43,899,649
Just made a Python program which acts as a server for Unix sockets and, when it receives a connection, reads 4 bytes from it. On the other hand, I made a C program which connects to the server and sends these 4 bytes to it.

file.py

```
packet = connection.recv(4, socket.MSG_WAITALL)

for value in packet:
    if value == '\0':
        print 'A none'
    else:
        print value
```

file.c

```
size_t toSend = 84; // Char representation \0\0\0T

write(fd, (void *)&toSend, sizeof(toSend));
```

This makes the Python program receive a wrong order of bytes. Here is the output:

```
T
A none
A none
A none
```

The first thing that came to my mind was that this was the result of endianness, but then I modified the files, leaving them like this.

file.c

```
char abc[16]; // setting char abc to be 'ABCDEFGHIJKLMNOP' (without ending \0)

write(fd, (void *)&abc[0], 16);
```

file.py

```
packet = connection.recv(16, socket.MSG_WAITALL)
# everything else wasn't modified
```

And this printed:

```
A
B
C
...
P
```

So this time it was received as it was supposed to be. Could someone point out what I'm doing wrong? Unless it has something to do with the `size_t` type, I can't think of anything else. Thanks!
2017/05/10
[ "https://Stackoverflow.com/questions/43899649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3770881/" ]
Do not use "=" for NULL values. Because nothing equals NULL, not even NULL. NULL means "Unknown"... if you can't know what's in it, you can't compare them. Change your query to this: ``` select * from Dropoff where finished IS NULL and DoorNum = 22; ```
I think it "is null" to check null values as below: ``` select * from Dropoff where finished is NULL and DoorNum = 22; ```
43,899,649
Just made a Python program which acts as a server for Unix sockets and, when it receives a connection, reads 4 bytes from it. On the other hand, I made a C program which connects to the server and sends these 4 bytes to it.

file.py

```
packet = connection.recv(4, socket.MSG_WAITALL)

for value in packet:
    if value == '\0':
        print 'A none'
    else:
        print value
```

file.c

```
size_t toSend = 84; // Char representation \0\0\0T

write(fd, (void *)&toSend, sizeof(toSend));
```

This makes the Python program receive a wrong order of bytes. Here is the output:

```
T
A none
A none
A none
```

The first thing that came to my mind was that this was the result of endianness, but then I modified the files, leaving them like this.

file.c

```
char abc[16]; // setting char abc to be 'ABCDEFGHIJKLMNOP' (without ending \0)

write(fd, (void *)&abc[0], 16);
```

file.py

```
packet = connection.recv(16, socket.MSG_WAITALL)
# everything else wasn't modified
```

And this printed:

```
A
B
C
...
P
```

So this time it was received as it was supposed to be. Could someone point out what I'm doing wrong? Unless it has something to do with the `size_t` type, I can't think of anything else. Thanks!
2017/05/10
[ "https://Stackoverflow.com/questions/43899649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3770881/" ]
I think it "is null" to check null values as below: ``` select * from Dropoff where finished is NULL and DoorNum = 22; ```
You should use `Is Null` instead:

```
SELECT * FROM Dropoff
WHERE finished is NULL and DoorNum = 22;
```

Use `Is Null` to determine whether a specified expression/value is NULL.
43,899,649
Just made a Python program which acts as a server for Unix sockets and, when it receives a connection, reads 4 bytes from it. On the other hand, I made a C program which connects to the server and sends these 4 bytes to it.

file.py

```
packet = connection.recv(4, socket.MSG_WAITALL)

for value in packet:
    if value == '\0':
        print 'A none'
    else:
        print value
```

file.c

```
size_t toSend = 84; // Char representation \0\0\0T

write(fd, (void *)&toSend, sizeof(toSend));
```

This makes the Python program receive a wrong order of bytes. Here is the output:

```
T
A none
A none
A none
```

The first thing that came to my mind was that this was the result of endianness, but then I modified the files, leaving them like this.

file.c

```
char abc[16]; // setting char abc to be 'ABCDEFGHIJKLMNOP' (without ending \0)

write(fd, (void *)&abc[0], 16);
```

file.py

```
packet = connection.recv(16, socket.MSG_WAITALL)
# everything else wasn't modified
```

And this printed:

```
A
B
C
...
P
```

So this time it was received as it was supposed to be. Could someone point out what I'm doing wrong? Unless it has something to do with the `size_t` type, I can't think of anything else. Thanks!
2017/05/10
[ "https://Stackoverflow.com/questions/43899649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3770881/" ]
Do not use "=" for NULL values. Because nothing equals NULL, not even NULL. NULL means "Unknown"... if you can't know what's in it, you can't compare them. Change your query to this: ``` select * from Dropoff where finished IS NULL and DoorNum = 22; ```
Use `IS NULL` instead, as follows:

```
select * from Dropoff
where finished is NULL
and DoorNum = 22;
```

Follow these:

[SQL is null and = null](https://stackoverflow.com/questions/9581745/sql-is-null-and-null)

[what is “=null” and “ IS NULL”](https://stackoverflow.com/questions/2749044/what-is-null-and-is-null)

[Is there any difference between IS NULL and =NULL](https://stackoverflow.com/questions/3777230/is-there-any-difference-between-is-null-and-null)
43,899,649
Just made a Python program which acts as a server for Unix sockets and, when it receives a connection, reads 4 bytes from it. On the other hand, I made a C program which connects to the server and sends these 4 bytes to it.

file.py

```
packet = connection.recv(4, socket.MSG_WAITALL)

for value in packet:
    if value == '\0':
        print 'A none'
    else:
        print value
```

file.c

```
size_t toSend = 84; // Char representation \0\0\0T

write(fd, (void *)&toSend, sizeof(toSend));
```

This makes the Python program receive a wrong order of bytes. Here is the output:

```
T
A none
A none
A none
```

The first thing that came to my mind was that this was the result of endianness, but then I modified the files, leaving them like this.

file.c

```
char abc[16]; // setting char abc to be 'ABCDEFGHIJKLMNOP' (without ending \0)

write(fd, (void *)&abc[0], 16);
```

file.py

```
packet = connection.recv(16, socket.MSG_WAITALL)
# everything else wasn't modified
```

And this printed:

```
A
B
C
...
P
```

So this time it was received as it was supposed to be. Could someone point out what I'm doing wrong? Unless it has something to do with the `size_t` type, I can't think of anything else. Thanks!
2017/05/10
[ "https://Stackoverflow.com/questions/43899649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3770881/" ]
Do not use "=" for NULL values. Because nothing equals NULL, not even NULL. NULL means "Unknown"... if you can't know what's in it, you can't compare them. Change your query to this: ``` select * from Dropoff where finished IS NULL and DoorNum = 22; ```
You should use `Is Null` instead:

```
SELECT * FROM Dropoff
WHERE finished is NULL and DoorNum = 22;
```

Use `Is Null` to determine whether a specified expression/value is NULL.
43,899,649
Just made a Python program which acts as a server for Unix sockets and, when it receives a connection, reads 4 bytes from it. On the other hand, I made a C program which connects to the server and sends these 4 bytes to it.

file.py

```
packet = connection.recv(4, socket.MSG_WAITALL)

for value in packet:
    if value == '\0':
        print 'A none'
    else:
        print value
```

file.c

```
size_t toSend = 84; // Char representation \0\0\0T

write(fd, (void *)&toSend, sizeof(toSend));
```

This makes the Python program receive a wrong order of bytes. Here is the output:

```
T
A none
A none
A none
```

The first thing that came to my mind was that this was the result of endianness, but then I modified the files, leaving them like this.

file.c

```
char abc[16]; // setting char abc to be 'ABCDEFGHIJKLMNOP' (without ending \0)

write(fd, (void *)&abc[0], 16);
```

file.py

```
packet = connection.recv(16, socket.MSG_WAITALL)
# everything else wasn't modified
```

And this printed:

```
A
B
C
...
P
```

So this time it was received as it was supposed to be. Could someone point out what I'm doing wrong? Unless it has something to do with the `size_t` type, I can't think of anything else. Thanks!
2017/05/10
[ "https://Stackoverflow.com/questions/43899649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3770881/" ]
Use `IS NULL` instead, as follows:

```
select * from Dropoff
where finished is NULL
and DoorNum = 22;
```

Follow these:

[SQL is null and = null](https://stackoverflow.com/questions/9581745/sql-is-null-and-null)

[what is “=null” and “ IS NULL”](https://stackoverflow.com/questions/2749044/what-is-null-and-is-null)

[Is there any difference between IS NULL and =NULL](https://stackoverflow.com/questions/3777230/is-there-any-difference-between-is-null-and-null)
You should use `Is Null` instead:

```
SELECT * FROM Dropoff
WHERE finished is NULL and DoorNum = 22;
```

Use `Is Null` to determine whether a specified expression/value is NULL.
42,070,398
I have this piece of code written to sort a csv file in Python.

```
import csv
from operator import itemgetter

reader = csv.reader(open("highscores.txt"), delimiter=",")
sortedList = sorted(reader, key=itemgetter(1), reverse=True)
print(sortedList)
```

I am trying to sort the file by the second item in each row; for example, if my file contained:

```
Callum,22
Kim,43
```

it would sort it to:

```
Kim,43
Callum,22
```

However, when I run my program I get the error message:

```
sortedList = sorted(reader, key=itemgetter(1), reverse=True)
IndexError: list index out of range
```

Edit: I have solved this issue; the problem wasn't the sort function itself but a problem with the file, which for some reason only worked in Python 2.7.
2017/02/06
[ "https://Stackoverflow.com/questions/42070398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4525900/" ]
You can use a lambda, which will allow you to do type transforms etc. Similar to your example above, the following lines will do what you want (I have kept the sort defaults):

```
r = csv.reader(open("test.txt"))
sorted(r, key=lambda x: int(x[1]))
```

You can read more details at <https://wiki.python.org/moin/HowTo/Sorting>

These are full details showing version and platform:

```
Python 2.7.10 (default, Jul 30 2016, 18:31:42)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import csv
>>> r = csv.reader(open("test.txt"))
>>> sorted(r, key=lambda x: int(x[1]))
[['Can', ' 2'], ['Try', ' 5'], ['Science', ' 12'], ['Math', ' 22'], ['Life', ' 35']]
```

where test.txt is as below:

```
Science,12
Math,22
Life,35
Can,2
Try,5
```
My assumption is that you are missing some data (a row without a second column). Here is my workaround for that case:

```
import csv

with open("highscores.txt") as f:
    data = [row for row in csv.reader(f)]

# Append a default score of 0 to any row that is missing one
for row in data:
    if len(row) < 2:
        row.append('0')

items = sorted(data, key=lambda x: int(x[1]), reverse=True)
print(items)
```
62,569,485
If we are making a menu in Python and the user selects the option to finish the interaction, is it preferable to use exit(), conditionals, return or break?

Example with break, where we stop the infinite loop with break:

```py
def show_menu():
    print('1. Pet kitten\n'
          '0. Exit')

def start_app():
    while True:
        show_menu()
        user_choice = input('Select an option: ')
        if user_choice == '1':
            pet()
        elif user_choice == '0':
            print('\nBye!')
            break
        else:
            print('\nPlease select a number from the menu.')

start_app()
```

Example with exit(), where we use the built-in function exit() to stop the execution of the script:

```py
def show_menu():
    print('1. Pet kitten\n'
          '0. Exit')

def start_app():
    while True:
        show_menu()
        user_choice = input('Select an option: ')
        if user_choice == '1':
            pet()
        elif user_choice == '0':
            print('\nBye!')
            exit()
        else:
            print('\nPlease select a number from the menu.')

start_app()
```

Example with conditionals, where the while loop stops when the condition changes:

```py
def show_menu():
    print('1. Pet kitten\n'
          '0. Exit')

def start_app():
    continue_ = True
    while continue_:
        show_menu()
        user_choice = input('Select an option: ')
        if user_choice == '1':
            pet()
        elif user_choice == '0':
            print('\nBye!')
            continue_ = False
        else:
            print('\nPlease select a number from the menu.')

start_app()
```

Example with return, where we finish the interaction by returning an arbitrary value:

```py
def show_menu():
    print('1. Pet kitten\n'
          '0. Exit')

def start_app():
    continue_ = True
    while continue_:
        show_menu()
        user_choice = input('Select an option: ')
        if user_choice == '1':
            pet()
        elif user_choice == '0':
            print('\nBye!')
            return None
        else:
            print('\nPlease select a number from the menu.')

start_app()
```
2020/06/25
[ "https://Stackoverflow.com/questions/62569485", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13452761/" ]
It is worth noting that your options fall into two categories, with one important difference between them.

On one hand, you can use `break`, `return` or a condition variable to break out of the loop and, eventually, *return to the caller*. Among these options, I'd say just pick whichever gives the cleanest code.

On the other hand, you can use `exit()`, *which ends the program there and then*. If you ever wish to use this as something other than a top-level menu, for example wrapped in a library to be used as a sub-menu of something else, you do not want the program to suddenly exit.

Generally, `exit()` is a rather big chunk of explosives that should be treated with a bit of respect.
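To make that concrete, here is a minimal sketch (the `main_menu`/`sub_menu` names are hypothetical, not from the question) of why returning composes better than calling `exit()`:

```py
def sub_menu():
    while True:
        choice = input('sub> ')
        if choice == '0':
            return  # hand control back to the caller
        print('sub action:', choice)

def main_menu():
    while True:
        choice = input('main> ')
        if choice == '1':
            sub_menu()  # execution resumes here once the sub-menu returns
        elif choice == '0':
            break  # exit() here would kill the whole program instead
        else:
            print('Unknown option.')

main_menu()
```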
For example 1: first, indent line 2 by 4 spaces so that the print call stays inside the show_menu() function, and second, define the pet() function called on line 11, otherwise you will get a NameError. Then put a break statement after line 11. For example 3: the while loop stops at line 15 when you set continue_ to False. #HappyCoding
11,480,589
In my music directory I have subfolders for each individual artist and album. I would like to make an automation script that goes into each subfolder and cuts and pastes all of the contents into my main music directory. After the script has moved the music, I would like for it to delete the empty subfolders. Something in a python or shell script would be awesome. Any ideas?
2012/07/14
[ "https://Stackoverflow.com/questions/11480589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524979/" ]
You need to define `num1` and `num2` before you can use them:

```
public static void main(String args[]){
    int num1 = 3;
    int num2 = -4;
    displayResult(num1, num2);
}
```
When you call `displayResult` in `main`, you need to actually pass it *values*, rather than repeat the types and names of the arguments. For example: ``` public static void main(String[] args){ displayResult(2, 3); } ``` That should work, but it still won't do anything; `displayResult` returns the result, but it doesn't print it or do anything else with it. You likely want to print it, probably using `System.out.println`.
11,480,589
In my music directory I have subfolders for each individual artist and album. I would like to make an automation script that goes into each subfolder and cuts and pastes all of the contents into my main music directory. After the script has moved the music, I would like for it to delete the empty subfolders. Something in a python or shell script would be awesome. Any ideas?
2012/07/14
[ "https://Stackoverflow.com/questions/11480589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524979/" ]
You need to define `num1` and `num2` before you can use them:

```
public static void main(String args[]){
    int num1 = 3;
    int num2 = -4;
    displayResult(num1, num2);
}
```
You need to pass actual number values to displayResult() in main, so change it to something like displayResult(1, 5);

The compiler is saying it doesn't like the arguments (int num1 and int num2) because it expects an (int, int) to be passed in, like (1, 5).
11,480,589
In my music directory I have subfolders for each individual artist and album. I would like to make an automation script that goes into each subfolder and cuts and pastes all of the contents into my main music directory. After the script has moved the music, I would like for it to delete the empty subfolders. Something in a python or shell script would be awesome. Any ideas?
2012/07/14
[ "https://Stackoverflow.com/questions/11480589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524979/" ]
You need to define `num1` and `num2` before you can use them:

```
public static void main(String args[]){
    int num1 = 3;
    int num2 = -4;
    displayResult(num1, num2);
}
```
What are you trying to accomplish with this? You need to pass integer values to the parameters; the call you have does not do this, and the integers are not initialized. The following examples will compile:

```
public static int displayResult(int num1, int num2){
    return num1 * num2;
}

public static void main(String args[]){
    displayResult(1, 1);
}
```

or...

```
public static int displayResult(int num1, int num2){
    return num1 * num2;
}

public static void main(String args[]){
    int num1 = 1;
    int num2 = 1;
    displayResult(num1, num2);
}
```

etc...
11,480,589
In my music directory I have subfolders for each individual artist and album. I would like to make an automation script that goes into each subfolder and cuts and pastes all of the contents into my main music directory. After the script has moved the music, I would like for it to delete the empty subfolders. Something in a python or shell script would be awesome. Any ideas?
2012/07/14
[ "https://Stackoverflow.com/questions/11480589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524979/" ]
You need to define `num1` and `num2` before you can use them:

```
public static void main(String args[]){
    int num1 = 3;
    int num2 = -4;
    displayResult(num1, num2);
}
```
When you write the following

```
public static int displayResult(int num1, int num2){
    return num1 * num2;
}
```

you're basically defining the method and specifying the argument types. Invoking the method, however, is done as follows:

```
displayResult(4,5);
```

Note that I can pass variables as well. The important point is that we don't give the types of the arguments when invoking the method.
11,480,589
In my music directory I have subfolders for each individual artist and album. I would like to make an automation script that goes into each subfolder and cuts and pastes all of the contents into my main music directory. After the script has moved the music, I would like for it to delete the empty subfolders. Something in a python or shell script would be awesome. Any ideas?
2012/07/14
[ "https://Stackoverflow.com/questions/11480589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524979/" ]
When you call `displayResult` in `main`, you need to actually pass it *values*, rather than repeat the types and names of the arguments. For example: ``` public static void main(String[] args){ displayResult(2, 3); } ``` That should work, but it still won't do anything; `displayResult` returns the result, but it doesn't print it or do anything else with it. You likely want to print it, probably using `System.out.println`.
You need to pass actual number values to displayResult() in main, so change it to something like displayResult(1, 5);

The compiler is saying it doesn't like the arguments (int num1 and int num2) because it expects an (int, int) to be passed in, like (1, 5).
11,480,589
In my music directory I have subfolders for each individual artist and album. I would like to make an automation script that goes into each subfolder and cuts and pastes all of the contents into my main music directory. After the script has moved the music, I would like for it to delete the empty subfolders. Something in a python or shell script would be awesome. Any ideas?
2012/07/14
[ "https://Stackoverflow.com/questions/11480589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524979/" ]
When you call `displayResult` in `main`, you need to actually pass it *values*, rather than repeat the types and names of the arguments. For example: ``` public static void main(String[] args){ displayResult(2, 3); } ``` That should work, but it still won't do anything; `displayResult` returns the result, but it doesn't print it or do anything else with it. You likely want to print it, probably using `System.out.println`.
What are you trying to accomplish with this? You need to pass integer values to the parameters; the call you have does not do this, and the integers are not initialized. The following examples will compile:

```
public static int displayResult(int num1, int num2){
    return num1 * num2;
}

public static void main(String args[]){
    displayResult(1, 1);
}
```

or...

```
public static int displayResult(int num1, int num2){
    return num1 * num2;
}

public static void main(String args[]){
    int num1 = 1;
    int num2 = 1;
    displayResult(num1, num2);
}
```

etc...
11,480,589
In my music directory I have subfolders for each individual artist and album. I would like to make an automation script that goes into each subfolder and cuts and pastes all of the contents into my main music directory. After the script has moved the music, I would like for it to delete the empty subfolders. Something in a python or shell script would be awesome. Any ideas?
2012/07/14
[ "https://Stackoverflow.com/questions/11480589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1524979/" ]
When you call `displayResult` in `main`, you need to actually pass it *values*, rather than repeat the types and names of the arguments. For example: ``` public static void main(String[] args){ displayResult(2, 3); } ``` That should work, but it still won't do anything; `displayResult` returns the result, but it doesn't print it or do anything else with it. You likely want to print it, probably using `System.out.println`.
When you write the following

```
public static int displayResult(int num1, int num2){
    return num1 * num2;
}
```

you're basically defining the method and specifying the argument types. Invoking the method, however, is done as follows:

```
displayResult(4,5);
```

Note that I can pass variables as well. The important point is that we don't give the types of the arguments when invoking the method.
32,811,618
I have the following CODE (snippets)... It might be useful for others... but I have one problem... One of my class variables seems to not work, as if there is a limit to the number of variables you can use?

```
import csv

class MASAT_data:
#
#  ALL CSV files are expected to have the first row as headers
#  ALL CSV files are expected to have the first column as the key attribute
#  All Headers are extracted into its own object variable
#
    def __init__(self, fname, action, intest):
        __data_Dict = {}
        __fname = fname
        __header = []      # <--- THIS WORKS
        __colLEN = 0
        __rowLEN = 0
        __data_rows = []   # <--- This seems to not be available to the class
        #__test = intest

        if action.upper == "NEW":
            pass
        else:
            self.Get_DICT(__fname)

    def Get_DICT(self,filename):
        templist = []
        daDict = {}
        with open(filename,"r") as f:
            csv_f = csv.reader(f)
            x=-1
            for row in csv_f:
                x+=1
                if len(row)!=0:
                    if x == 0:
                        self.__header = row
                        self.__colLEN = len(self.__header)
                    else:
                        templist = row
                        self.__data_rows.append(templist)    # <-- line 38
                        if len(templist) >= 2:
                            daDict[templist[0]] = templist[1:]
            self.__rowLEN = x
            f.close()
        self.__data_Dict = daDict

    def rtnData(self,keyName):
        return self.__data_Dict[keyName]

    def hlen(self):
        return self.__colLEN

    def __len__(self):
        return self.__rowLEN

    def rtnList(self):
        templist = [[]]
        temps = []
        for key, value in self.__data_Dict.items():
            temps.append(key)
            for x in value:
                temps.append(x)
            templist.append(temps)
            temps = []
        templist.pop(0)
        #for x in range(len(templist)):
        #    print(templist[int(x)])
        return templist

    def rtnListDict(self):
        templist = []
        for x in range(len(self.__header)):
            print (self.__header[x])
            #print (self.__data_Dict[self.__header[x]])
```

I get the following ERROR:

```
C:\Python34\python.exe C:/Users/User/PycharmProjects/MASAT-3/MA-SAT_v3_MAIN.py
Traceback (most recent call last):
  File "C:/Users/User/PycharmProjects/MASAT-3/MA-SAT_v3_MAIN.py", line 10, in <module>
    appCONFIGs = MASAT_data("CONFIG.csv","LOAD",t)
  File "C:\Users\User\PycharmProjects\MASAT-3\MASAT_Libs.py", line 23, in __init__
    self.Get_DICT(__fname)
  File "C:\Users\User\PycharmProjects\MASAT-3\MASAT_Libs.py", line 38, in Get_DICT
    self.__data_rows.append(templist)
AttributeError: 'MASAT_data' object has no attribute '_MASAT_data__data_rows'
```

Process finished with exit code 1

The calling program is as follows:

```
__author__ = 'User'

import sys
from MASAT_Libs import *

sys.path.append('C:\\Users\\User\\PycharmProjects\MASAT-3')

t = 0

#define Globals
if __name__ == '__main__':
    appCONFIGs = MASAT_data("CONFIG.csv","LOAD",t)
    if appCONFIGs.rtnData("ApplicationMode")[0] == 'DEBUG':
        t = 1
    else:
        t = 0
    print("t = ",t)

    observations = MASAT_data(appCONFIGs.rtnData("PathName")[0] + "\\Observables.csv", "LOAD",t)
    KMNs = MASAT_data(appCONFIGs.rtnData("PathName")[0] + "\\KMN_List.csv", "LOAD",t)
    geomix = MASAT_data(appCONFIGs.rtnData("PathName")[0] + "\\geomix_perf_list.csv", "LOAD",t)
    print (geomix.rtnList())
    print (len(geomix))
    geomix.rtnListDict()
```

So please forgive me if I put in an extra space, or didn't indent correctly (I did it manually when copying the code here)... My problem is that I can't add another variable to this class (no matter what the name) without it crashing in this manner. Any ideas for a simple fix?
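For reference, a minimal sketch (the `Demo` class is hypothetical, not part of the code above) of the distinction the traceback points at: a name assigned in `__init__` without `self.` is just a local variable, while `self.__name` becomes a name-mangled instance attribute:

```
class Demo:
    def __init__(self):
        __local = []        # a local variable; it disappears when __init__ returns
        self.__attr = []    # an instance attribute, stored as _Demo__attr

d = Demo()
print(hasattr(d, '_Demo__attr'))    # True
print(hasattr(d, '_Demo__local'))   # False -> accessing it raises AttributeError
```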
2015/09/27
[ "https://Stackoverflow.com/questions/32811618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5382316/" ]
The introduction of [Spring](https://github.com/rails/spring) was meant to make loading your Rails app faster, by keeping a copy of the app in memory and only reloading what changes. Unfortunately, sometimes your initializers can stay exactly the same, but they need to be re-run. An example is the "business_time" gem, which generates an initializer that loads a YAML file. You can change the YAML file and restart the console, but Spring sees that your initializer hasn't changed, so it doesn't re-run it. Thus, the changes to your YAML file go unnoticed. This is a very difficult error to troubleshoot. If you're having initializer problems, then exit out of the console and run:

```
bundle exec spring stop
```

before going back into the console. This will force a fresh reload of the initializers. You can also disable Spring entirely by removing the gem from your Gemfile.
Type this command into the Rails console:

```
reload!
```

It reloads your `rails c` session.
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
I could solve it with this command:

```
py -m pip install --user NAME_OF_THE_LIBRARY
```

It worked with NumPy, SciPy, matplotlib, ipython, jupyter, pandas, sympy, nose and scikit-learn.
If you have multiple versions of Python, make sure that you are at the correct site-packages path before installing. That was the case for me!
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
Try using ``` pip3 install scikit-learn ```
I could solve it with this command:

```
py -m pip install --user NAME_OF_THE_LIBRARY
```

It worked with NumPy, SciPy, matplotlib, ipython, jupyter, pandas, sympy, nose and scikit-learn.
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
Try using ``` pip3 install scikit-learn ```
If you have multiple versions of Python, make sure that you are at the correct site-packages path before installing. That was the case for me!
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
First update your pip:

```
pip3 install --upgrade pip
```

Then install scikit-learn with the `--user` flag. Don't use sudo with pip, as it can cause problems:

```
pip install --user scikit-learn
```
I could solve it with this command:

```
py -m pip install --user NAME_OF_THE_LIBRARY
```

It worked with NumPy, SciPy, matplotlib, ipython, jupyter, pandas, sympy, nose and scikit-learn.
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
Try using ``` pip3 install scikit-learn ```
Based on this [answer](https://stackoverflow.com/a/22446215/5781248) for question [Python pip error: “Cannot fetch index base URL https://pypi.python.org/simple/”](https://stackoverflow.com/questions/15501133/python-pip-error-cannot-fetch-index-base-url-https-pypi-python-org-simple) I would try to reinstall (and upgrade) pip with easy\_install ``` easy_install pip==8.1.2 ``` I tried to reproduce your problem, and installing scikit-learn succeeded after `pip install numpy` and `pip install scipy` in a virtual environment created by pyenv-3.4.
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
First update your pip:

```
pip3 install --upgrade pip
```

Then install scikit-learn with the `--user` flag. Don't use sudo with pip, as it can cause problems:

```
pip install --user scikit-learn
```
If you want the convenience of Anaconda packages but the flexibility and minimalism of `pip` package management, I suggest you try [miniconda](http://conda.pydata.org/miniconda.html). Once you install miniconda (remembering to `source ~/.bash_profile` or `source ~/.bashrc`), you can do this to setup your environment: ``` conda create -n myenv scikit-learn pip python=3 ``` This will get you a conda env with sklearn and pip in case you want to install libraries that are not supported as a conda package. The pip runs inside the conda env.
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
Maybe you should consider using [Anaconda](https://www.continuum.io/downloads), which includes both packages by default and makes your life easy with tools to manage [environments](http://conda.pydata.org/docs/using/envs.html) and [packages](http://conda.pydata.org/docs/using/pkgs.html).
If you want the convenience of Anaconda packages but the flexibility and minimalism of `pip` package management, I suggest you try [miniconda](http://conda.pydata.org/miniconda.html). Once you install miniconda (remembering to `source ~/.bash_profile` or `source ~/.bashrc`), you can do this to setup your environment: ``` conda create -n myenv scikit-learn pip python=3 ``` This will get you a conda env with sklearn and pip in case you want to install libraries that are not supported as a conda package. The pip runs inside the conda env.
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
Maybe you should consider using [Anaconda](https://www.continuum.io/downloads), which includes both packages by default and makes your life easy with tools to manage [environments](http://conda.pydata.org/docs/using/envs.html) and [packages](http://conda.pydata.org/docs/using/pkgs.html).
Based on this [answer](https://stackoverflow.com/a/22446215/5781248) for question [Python pip error: “Cannot fetch index base URL https://pypi.python.org/simple/”](https://stackoverflow.com/questions/15501133/python-pip-error-cannot-fetch-index-base-url-https-pypi-python-org-simple) I would try to reinstall (and upgrade) pip with easy\_install ``` easy_install pip==8.1.2 ``` I tried to reproduce your problem, and installing scikit-learn succeeded after `pip install numpy` and `pip install scipy` in a virtual environment created by pyenv-3.4.
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
First update your pip:

```
pip3 install --upgrade pip
```

Then install scikit-learn with the `--user` flag. Don't use sudo with pip, as it can cause problems:

```
pip install --user scikit-learn
```
Based on this [answer](https://stackoverflow.com/a/22446215/5781248) for question [Python pip error: “Cannot fetch index base URL https://pypi.python.org/simple/”](https://stackoverflow.com/questions/15501133/python-pip-error-cannot-fetch-index-base-url-https-pypi-python-org-simple) I would try to reinstall (and upgrade) pip with easy\_install ``` easy_install pip==8.1.2 ``` I tried to reproduce your problem, and installing scikit-learn succeeded after `pip install numpy` and `pip install scipy` in a virtual environment created by pyenv-3.4.
38,770,169
I try to install scikit-learn for Python 3. I do it in the following way:

```
virtualenv model_env
source model_env/bin/activate
pip3 install sklearn
```

As a result I get the following error message:

```
Downloading/unpacking sklearn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement sklearn
Cleaning up...
No distributions at all found for sklearn
```

I had the same problem with the `pandas` package and I resolved it by using the following command:

```
sudo apt-get install python3-pandas
```

Unfortunately, the same approach does not work for `sklearn`:

```
sudo apt-get install python3-sklearn
```

**ADDED**

When I replace `sklearn` by `scikit-learn`, I have the same problem:

```
Downloading/unpacking scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
No distributions at all found for scikit-learn
```

**ADDED 2**

As it has been recommended, I have tried to use pip in combination with `-vvv`. Note that I use `pip3` instead of `pip`. This is what I get as the result:

```
Downloading/unpacking scikit-learn
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Getting page https://pypi.python.org/simple/
Could not fetch URL https://pypi.python.org/simple/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/ when looking for download links for scikit-learn
Cannot fetch index base URL https://pypi.python.org/simple/
URLs to search for versions for scikit-learn:
* https://pypi.python.org/simple/scikit-learn/
Getting page https://pypi.python.org/simple/scikit-learn/
Could not fetch URL https://pypi.python.org/simple/scikit-learn/: connection error: HTTPSConnectionPool(host='pypi.python.org', port=443): Max retries exceeded with url: /simple/scikit-learn/ (Caused by <class 'OSError'>: [Errno 101] Network is unreachable)
Will skip URL https://pypi.python.org/simple/scikit-learn/ when looking for download links for scikit-learn
Could not find any downloads that satisfy the requirement scikit-learn
Cleaning up...
Removing temporary dir /tmp/pip_build_root...
No distributions at all found for scikit-learn
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 278, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1178, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 277, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
pip.exceptions.DistributionNotFound: No distributions at all found for scikit-learn
Storing debug log for failure in /home/rngorb/.pip/pip.log
```
2016/08/04
[ "https://Stackoverflow.com/questions/38770169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/245549/" ]
I could solve it with this command (`py` is the Windows Python launcher):

```
py -m pip install --user NAME_OF_THE_LIBRARY
```

It worked with NumPy, SciPy, matplotlib, ipython, jupyter, pandas, sympy, nose, and scikit-learn.
Based on this [answer](https://stackoverflow.com/a/22446215/5781248) to the question [Python pip error: “Cannot fetch index base URL https://pypi.python.org/simple/”](https://stackoverflow.com/questions/15501133/python-pip-error-cannot-fetch-index-base-url-https-pypi-python-org-simple), I would try to reinstall (and upgrade) pip with easy\_install:

```
easy_install pip==8.1.2
```

I tried to reproduce your problem, and installing scikit-learn succeeded after `pip install numpy` and `pip install scipy` in a virtual environment created by pyenv-3.4.
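A minimal sketch of that full sequence inside an already-activated virtual environment (the order follows the steps described above; the pinned pip version is the one from the linked answer):

```
easy_install pip==8.1.2
pip install numpy
pip install scipy
pip install scikit-learn
```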
69,350,640
Given a matrix S and a binary matrix W, I want to create a submatrix of S corresponding to the non-zero coordinates of W. For example:

```
S = [[1,1],[1,2],[1,3],[1,4],[1,5]]
W = [[1,0,0],[1,1,0],[1,1,1],[0,1,1],[0,0,1]]
```

I want to get the matrices

```
S_1 = [[1,1],[1,2],[1,3]]
S_2 = [[1,2],[1,3],[1,4]]
S_3 = [[1,3],[1,4],[1,5]]
```

I couldn't figure out a slick way to do this in Python. The best I could do for each S\_i is

```
S_1 = S[0,:]
for i in range(np.shape(W)[0]):
    if W[i, 0] == 1:
        S_1 = np.vstack((S_1, S[i, :]))
```

but if I want to change the dimensions of the problem and have, say, 100 S\_i's, writing a for loop for each one seems a bit ugly. (Side note: S\_1 should be initialized to some empty 2D array, but I couldn't get that to work, so I initialized it to S[0,:] as a placeholder).

EDIT: To clarify what I mean: I have a matrix S

```
1 1
1 2
1 3
1 4
1 5
```

and I have a binary matrix

```
1 0 0
1 1 0
1 1 1
0 1 1
0 0 1
```

Given the first column of the binary matrix W

```
1
1
1
0
0
```

the 1's are in the first, second, and third positions. So I want to create a corresponding submatrix of S with just the first, second, and third positions of every column, so S\_1 (corresponding to the 1st column of W) is

```
1 1
1 2
1 3
```

Similarly, if we look at the third column of W

```
0
0
1
1
1
```

the 1's are in the last three coordinates, so I want a submatrix of S with just the last three coordinates of every column, called S\_3

```
1 3
1 4
1 5
```

So given any ith column of the binary matrix, I'm looking to generate a submatrix S\_i where the columns of S\_i contain the columns of S, but only the entries corresponding to the positions of the 1's in the ith column of the binary matrix.
2021/09/27
[ "https://Stackoverflow.com/questions/69350640", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17018123/" ]
It is probably more useful to work with the transpose of W rather than W itself, both for human readability and to make the code easier to write. This means that the entries that affect each S\_i are grouped together in one of the inner parentheses of W, i.e. in a row of W rather than a column as you have it now. Then `S_i = np.array([S[j, :] for j in range(np.shape(S)[0]) if W_T[i, j] == 1])`, where `W_T` is the transpose of W. If you need/want to stick with W as is, you need to reverse the indices i and j.

As for the outer loop, you could try to nest this in another similar comprehension without an if statement -- however, this might be awkward, since you aren't actually building one output *matrix* (the S\_i can easily have different dimensions, unless you're somehow guaranteed to have the same number of 1s in every column of W). This in fact raises the question of what you want -- a list of these arrays S\_i? Otherwise, if they are separate variables as you have it written, there's no good way to refer to them in a generalizable way, as they don't have indices.
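As a concrete sketch of that approach, using the question's example arrays (`S_list` is a hypothetical name for the collected results, chosen here for illustration):

```
import numpy as np

S = np.array([[1, 1], [1, 2], [1, 3], [1, 4], [1, 5]])
W = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 1], [0, 0, 1]])

W_T = W.T  # rows of W_T correspond to columns of the original W

# One submatrix of S per row of W_T; the results may have different
# shapes, so a list is the natural container.
S_list = [np.array([S[j, :] for j in range(S.shape[0]) if W_T[i, j] == 1])
          for i in range(W_T.shape[0])]

for S_i in S_list:
    print(S_i)
```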
NumPy can do this directly with boolean indexing.

```
import numpy as np

S = np.array([[1,1],[1,2],[1,3],[1,4],[1,5]])
W = np.array([[1,0,0],[1,1,0],[1,1,1],[0,1,1],[0,0,1]])

# Each column of W selects the rows of S where that column is 1.
for col in range(W.shape[1]):
    print(S[W[:, col] == 1])
```

Output:

```
[[1 1]
 [1 2]
 [1 3]]
[[1 2]
 [1 3]
 [1 4]]
[[1 3]
 [1 4]
 [1 5]]
```
70,968,594
I'm from a non-computer-science background (biochemist). In Python programming, under "Create a class function", I didn't get the output for the following function:

```
class circle (object):
    def __init__(self, 10, 'red'):
        self.radius=10;
        self.color='red'
```

I got a syntax error like this:

```
File "<ipython-input-1-ab699251caa9>", line 2
    def_init_(self,10,'red'):
    ^
SyntaxError: invalid syntax
```
2022/02/03
[ "https://Stackoverflow.com/questions/70968594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18107977/" ]
In case you are using [device\_info\_plus](https://pub.dev/packages/device_info_plus) version above 4.x.x, the `deviceInfo.androidInfo` function has been removed; refer to the [changelog here](https://pub.dev/packages/device_info_plus/changelog). Downgrade the package to `device_info_plus: 3.2.4` and it should start working fine.
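A minimal sketch of the corresponding `pubspec.yaml` change (assuming a standard Flutter dependencies section; only the version pin matters here):

```
dependencies:
  device_info_plus: 3.2.4
```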
The device\_info package is deprecated ([for read](https://pub.dev/packages/device_info)). Please upgrade the package to [device\_info\_plus](https://pub.dev/packages/device_info_plus).
70,968,594
I'm from a non-computer-science background (biochemist). In Python programming, under "Create a class function", I didn't get the output for the following function:

```
class circle (object):
    def __init__(self, 10, 'red'):
        self.radius=10;
        self.color='red'
```

I got a syntax error like this:

```
File "<ipython-input-1-ab699251caa9>", line 2
    def_init_(self,10,'red'):
    ^
SyntaxError: invalid syntax
```
2022/02/03
[ "https://Stackoverflow.com/questions/70968594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18107977/" ]
In case you are using [device\_info\_plus](https://pub.dev/packages/device_info_plus) version above 4.x.x, the `deviceInfo.androidInfo` function has been removed; refer to the [changelog here](https://pub.dev/packages/device_info_plus/changelog). Downgrade the package to `device_info_plus: 3.2.4` and it should start working fine.
I use the device info and package info packages, and this code works perfectly in my projects. Sample use: `var deviceId = DeviceInfo.getInstance()?.deviceID ?? "";`

```dart
class DeviceInfo {
  static int iosVersionCode = 1;
  static DeviceInfo? instance;

  String deviceID = "";
  String versionCode = "0";
  String version = "0";
  String model = "";
  String packageName = "";
  String osVersion = "";

  static DeviceInfo? getInstance() {
    if (instance == null) {
      instance = new DeviceInfo();
      return instance;
    } else {
      return instance;
    }
  }

  Future<void> deviceInfo() async {
    DeviceInfoPlugin deviceInfo = DeviceInfoPlugin();
    PackageInfo packageInfo = await PackageInfo.fromPlatform();
    if (Platform.isAndroid) {
      AndroidDeviceInfo androidInfo = await deviceInfo.androidInfo;
      packageName = packageInfo.packageName;
      deviceID = androidInfo.androidId!;
      version = packageInfo.version;
      versionCode = packageInfo.buildNumber;
      model = androidInfo.model!;
      osVersion = "${androidInfo.version.sdkInt}";
    } else if (Platform.isIOS) {
      IosDeviceInfo iosInfo = await deviceInfo.iosInfo;
      packageName = packageInfo.packageName;
      deviceID = iosInfo.identifierForVendor!;
      version = packageInfo.version;
      versionCode = version; // version is the unique value on iOS
      model = iosInfo.utsname.machine!;
      osVersion = iosInfo.systemVersion!;
    }

    debugPrint("Start---------------------------");
    debugPrint("packageName :$packageName");
    debugPrint("deviceID :$deviceID");
    debugPrint("version :$version");
    debugPrint("versionCode :$versionCode");
    debugPrint("model :$model");
    debugPrint("osVersion :$osVersion");
    debugPrint("End---------------------------");
  }
}
```
54,465,804
Python throws an error when I try to import epipy, even though I installed the package to the project interpreter (the Python file's interpreter is also set to the same environment).

* Tried using an absolute import: from epipy import \*
* Tried importing a specific function, e.g. from epipy import case\_tree
* Tried uninstalling and reinstalling the package

My code:

```
import pandas as pd
import epipy
```

I expected the installed packages to import with no errors, but received the error below:

```
/Users/Noelle/Python/stats/bin/python /Users/Noelle/Python/stats/basic_analytics.py
Traceback (most recent call last):
File "/Users/Noelle/Python/stats/basic_analytics.py", line 2, in <module>
import epipy
File "/Users/Noelle/Python/stats/lib/python3.6/site-packages/epipy/__init__.py", line 5, in <module>
from .analyses import generation_analysis, reproduction_number, create_2x2
File "/Users/Noelle/Python/stats/lib/python3.6/site-packages/epipy/analyses.py", line 88
print 'Summary of reproduction numbers'
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print('Summary of reproduction numbers')?
```

Process finished with exit code 1
2019/01/31
[ "https://Stackoverflow.com/questions/54465804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8705835/" ]
Did you install from pip or directly from GitHub? According to [this post](https://github.com/cmrivers/epipy/issues/15), the pip version is only compatible with Python 2; for Python 3.x you have to download the GitHub version directly.
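If you need the GitHub version, a minimal sketch of installing from source (the clone URL is inferred from the issue linked above, and this assumes the repository ships a standard setup.py):

```
git clone https://github.com/cmrivers/epipy.git
cd epipy
python3 setup.py install
```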
That error occurs because print is being called as a Python 2 statement:

```
print 'Summary of reproduction numbers'
```

instead of as a function:

```
print('Summary of reproduction numbers')
```

Python 3 print syntax is different from Python 2.

EDIT: It seems this is because the epipy package you installed is for Python 2 and not Python 3.
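If you want to keep the package running under Python 3, one option is converting the offending module with the standard-library `2to3` tool. A sketch, using the file path from your traceback (this only fixes mechanical issues like print statements, so other Python-2-isms may remain):

```
2to3 -w /Users/Noelle/Python/stats/lib/python3.6/site-packages/epipy/analyses.py
```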
15,012,098
I know this is a very dumb question, but I can't install CherryPy. In the documentation is written: > > To install, change to the directory where setup.py is located and type (python-2.3 or later needed): > > > > ``` > python setup.py install > > ``` > > Which is what I do, I type this in Python Shell and it gives me error `Invalid syntax`, but I don't think I have any syntax errors.
2013/02/21
[ "https://Stackoverflow.com/questions/15012098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2025568/" ]
You don't install programs in the Python shell. Navigate to the directory where you downloaded the source and run `python setup.py install` from your system's command line. Alternatively, you can use pip:

```
pip install cherrypy
```
"I type this in Python Shell"... Don't type that in the Python shell. ``` >>> python setup.py install File "<stdin>", line 1 python setup.py install ^ SyntaxError: invalid syntax ``` Type it at the commandline. ``` $ python setup.py install ``` Or if you're on Windows: ``` C:\>python setup.py install ```
15,012,098
I know this is a very dumb question, but I can't install CherryPy. In the documentation is written: > > To install, change to the directory where setup.py is located and type (python-2.3 or later needed): > > > > ``` > python setup.py install > > ``` > > Which is what I do, I type this in Python Shell and it gives me error `Invalid syntax`, but I don't think I have any syntax errors.
2013/02/21
[ "https://Stackoverflow.com/questions/15012098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2025568/" ]
On Ubuntu, the best way to install CherryPy, Mako, and Python dependencies from the terminal is:

```
sudo apt-get install subversion python-mako python-simplejson python-cherrypy3 graphviz
```

Afterwards, run `sudo apt-get update` and then `sudo apt-get upgrade`.
"I type this in Python Shell"... Don't type that in the Python shell. ``` >>> python setup.py install File "<stdin>", line 1 python setup.py install ^ SyntaxError: invalid syntax ``` Type it at the commandline. ``` $ python setup.py install ``` Or if you're on Windows: ``` C:\>python setup.py install ```
15,012,098
I know this is a very dumb question, but I can't install CherryPy. In the documentation is written: > > To install, change to the directory where setup.py is located and type (python-2.3 or later needed): > > > > ``` > python setup.py install > > ``` > > Which is what I do, I type this in Python Shell and it gives me error `Invalid syntax`, but I don't think I have any syntax errors.
2013/02/21
[ "https://Stackoverflow.com/questions/15012098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2025568/" ]
"I type this in Python Shell"... Don't type that in the Python shell. ``` >>> python setup.py install File "<stdin>", line 1 python setup.py install ^ SyntaxError: invalid syntax ``` Type it at the commandline. ``` $ python setup.py install ``` Or if you're on Windows: ``` C:\>python setup.py install ```
If you are installing on a Mac and, say, you are using Python 3, you would want to use:

```
sudo python3.3 setup.py build
```

Then, after the build is finished:

```
sudo python3.3 setup.py install
```
15,012,098
I know this is a very dumb question, but I can't install CherryPy. In the documentation is written: > > To install, change to the directory where setup.py is located and type (python-2.3 or later needed): > > > > ``` > python setup.py install > > ``` > > Which is what I do, I type this in Python Shell and it gives me error `Invalid syntax`, but I don't think I have any syntax errors.
2013/02/21
[ "https://Stackoverflow.com/questions/15012098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2025568/" ]
You don't install programs in the Python shell. Navigate to the directory where you downloaded the source and run `python setup.py install` from your system's command line. Alternatively, you can use pip:

```
pip install cherrypy
```
If you are installing on a Mac and, say, you are using Python 3, you would want to use:

```
sudo python3.3 setup.py build
```

Then, after the build is finished:

```
sudo python3.3 setup.py install
```
15,012,098
I know this is a very dumb question, but I can't install CherryPy. In the documentation is written: > > To install, change to the directory where setup.py is located and type (python-2.3 or later needed): > > > > ``` > python setup.py install > > ``` > > Which is what I do, I type this in Python Shell and it gives me error `Invalid syntax`, but I don't think I have any syntax errors.
2013/02/21
[ "https://Stackoverflow.com/questions/15012098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2025568/" ]
You don't install programs in the Python shell. Navigate to the directory where you downloaded the source and run `python setup.py install` from your system's command line. Alternatively, you can use pip:

```
pip install cherrypy
```
I have found the steps to install via the command prompt; please refer to the attachments below. For me, "python setup.py install" does not work, but "setup.py install" works fine after I pointed it properly to the directory. Hope my experiment helps:

[Step1](https://i.stack.imgur.com/A0LlF.jpg) [Step2](https://i.stack.imgur.com/sFspL.jpg)
15,012,098
I know this is a very dumb question, but I can't install CherryPy. In the documentation is written: > > To install, change to the directory where setup.py is located and type (python-2.3 or later needed): > > > > ``` > python setup.py install > > ``` > > Which is what I do, I type this in Python Shell and it gives me error `Invalid syntax`, but I don't think I have any syntax errors.
2013/02/21
[ "https://Stackoverflow.com/questions/15012098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2025568/" ]
On Ubuntu, the best way to install CherryPy, Mako, and Python dependencies from the terminal is:

```
sudo apt-get install subversion python-mako python-simplejson python-cherrypy3 graphviz
```

Afterwards, run `sudo apt-get update` and then `sudo apt-get upgrade`.
If you are installing on a Mac and, say, you are using Python 3, you would want to use:

```
sudo python3.3 setup.py build
```

Then, after the build is finished:

```
sudo python3.3 setup.py install
```
15,012,098
I know this is a very dumb question, but I can't install CherryPy. In the documentation is written: > > To install, change to the directory where setup.py is located and type (python-2.3 or later needed): > > > > ``` > python setup.py install > > ``` > > Which is what I do, I type this in Python Shell and it gives me error `Invalid syntax`, but I don't think I have any syntax errors.
2013/02/21
[ "https://Stackoverflow.com/questions/15012098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2025568/" ]
On Ubuntu, the best way to install CherryPy, Mako, and Python dependencies from the terminal is:

```
sudo apt-get install subversion python-mako python-simplejson python-cherrypy3 graphviz
```

Afterwards, run `sudo apt-get update` and then `sudo apt-get upgrade`.
I have found the steps to install via the command prompt; please refer to the attachments below. For me, "python setup.py install" does not work, but "setup.py install" works fine after I pointed it properly to the directory. Hope my experiment helps:

[Step1](https://i.stack.imgur.com/A0LlF.jpg) [Step2](https://i.stack.imgur.com/sFspL.jpg)