Q_CreationDate (string, len 23) | Title (string, len 11-149) | Question (string, len 25-6.53k) | Answer (string, len 15-5.1k) | Score (float64, -1 to 1.2) | Is_accepted (bool, 2 classes) | N_answers (int64, 1-17) | Q_Id (int64, 0-6.76k)
---|---|---|---|---|---|---|---|
2019-12-13 09:54:52.117 | Add packages to PyCharm manually in Windows | I'm using PyCharm. I'm trying to install Selenium, but I have a problem with a proxy. I tried to add the packages manually to my project/environment, but I don't know how.
I downloaded the Selenium files. Could you tell me how to add this package to the project without using pip? | Open PyCharm.
Click on Settings (if you use a Mac, click on Preferences).
Click Project.
Then click Project Interpreter.
Click the + button at the bottom of the window; in the new window that appears, search for the Selenium package and install it. | 0 | false | 1 | 6,447 |
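The steps above still install through PyCharm's package manager (i.e., pip). If the proxy blocks that entirely, a minimal pip-free sketch is to put the extracted download on sys.path yourself; the download folder below is hypothetical.

```python
import sys

# Assumption: the downloaded Selenium source archive was extracted here, so
# that the "selenium" package directory sits directly inside this folder.
sys.path.insert(0, r"C:\Users\me\Downloads\selenium-src")

import selenium  # resolvable now, without pip
print(selenium.__version__)
```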
2019-12-13 14:48:25.083 | Implementing trained-model on camera | I just trained my model successfully and I have some checkpoints from the training process. Can you explain to me how to use this data to recognize the objects live with the help of a webcam? | Congratulations :)
First of all, a minor detail: you use the model to recognize the objects; the model learned from the data.
It really depends on what you are aiming for; as the comment suggests, you should probably provide a bit more information.
The simplest setup would probably be to take an image with your webcam, read the file, pass it to the model and get the predictions. If you want to do it live, you will have to read the stream from the webcam and then pass the frames to the model. | 0 | false | 1 | 6,448 |
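A minimal sketch of that live setup, assuming the checkpoints were exported as a Keras model; the file name, input size and labels are placeholders for your own setup.

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")  # placeholder model file
labels = ["cat", "dog"]                            # placeholder class names

cap = cv2.VideoCapture(0)              # open the default webcam
while True:
    ok, frame = cap.read()             # grab one frame from the stream
    if not ok:
        break
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    preds = model.predict(x[np.newaxis])           # batch of one image
    print(labels[int(np.argmax(preds))])
    if cv2.waitKey(1) == 27:           # press Esc to stop
        break
cap.release()
```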
2019-12-15 02:32:28.747 | 8-puzzle game with A*: what structure for the open set? | I'm developing an 8-puzzle solver in Python and I need a bit of help.
So far I have finished coding the A* algorithm, using Manhattan distance as the heuristic function.
The solver runs and finds ~60% of the solutions in less than 2 seconds.
However, for the other ~40%, my solver can take up to 20-30 minutes, as if it were running without a heuristic.
I started troubleshooting, and it seems that the open set I use is causing some problems:
My open set is an array
On each iteration, I loop through the open set to find the lowest f(n) (complexity: O(n)).
I have the feeling that O(n) is far too slow for a decent A* implementation with this much memory in use, so I wanted to know how I should make the open set less of a "time eater".
Thank you for your help! Have a good day.
EDIT: FIXED
I solved my problem, which was in fact a double problem.
I used a dictionary instead of an array, storing the nodes keyed by their f(n) value; that allowed the solver to run through the ~181,000 reachable configurations of the game in a few seconds.
The second problem (which I didn't know about because of the first) is that I didn't know about the solvability of the puzzle: since I randomised the initial node, 50% of the puzzles couldn't be solved. That's why it took so long with the array as the open set. | The open set should be a priority queue. Typically these are implemented using a binary heap, though other implementations exist.
Neither an array-list nor a dictionary would be efficient.
The closed set should be an efficient set, so usually a hash table or binary search tree, depending on what your language's standard library defaults to.
A dictionary (aka "map") would technically work, but it's conceptually the wrong data-structure because you're not mapping to anything. An array-list would not be efficient. | 1.2 | true | 1 | 6,449 |
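A minimal heap-backed sketch of the accepted advice; the state representation, heuristic h and neighbors function are placeholders for your own puzzle code.

```python
import heapq
import itertools

def a_star(start, goal, h, neighbors):
    tie = itertools.count()                        # avoids comparing states on f-ties
    open_heap = [(h(start), next(tie), 0, start)]  # (f, tie, g, state)
    closed = set()                                 # hash-based closed set
    while open_heap:
        f, _, g, state = heapq.heappop(open_heap)  # O(log n) instead of O(n)
        if state == goal:
            return g
        if state in closed:
            continue
        closed.add(state)
        for nxt in neighbors(state):
            if nxt not in closed:
                heapq.heappush(open_heap, (g + 1 + h(nxt), next(tie), g + 1, nxt))
    return None  # unreachable (e.g. an unsolvable random start)
```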
2019-12-15 14:24:50.807 | My python scripts using selenium don't work anymore. Chrome driver version problem | My scripts don't work anymore and I can't figure it out.
Apparently it is a Chrome version problem, but I don't know how to switch to another version (not the latest). Is there another way?
My terminal indicates :
Traceback (most recent call last):
File "/Users/.../Documents/SCRIPTS/PYTHON/Scripts/# -- coding: utf-8 --.py", line 21, in
driver = webdriver.Chrome()
File "/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in init
desired_capabilities=desired_capabilities)
File "/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in init
self.start_session(capabilities, browser_profile)
File "/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.SessionNotCreatedException: Message: session not created: Chrome version must be between 71 and 75
(Driver info: chromedriver=2.46.628411 (3324f4c8be9ff2f70a05a30ebc72ffb013e1a71e),platform=Mac OS X 10.14.5 x86_64)
Any idea? | This likely happens because your Chrome browser (or Chromium) was updated to a newer version automatically, while you are still running your Selenium scripts with the old version of chromedriver.
Check the current version of your Google Chrome or Chromium, then download the chromedriver for that specific version.
Then your scripts should work fine. | 0 | false | 1 | 6,450 |
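A minimal sketch of pinning Selenium 3 (which the traceback shows) to a manually downloaded driver; the path is a placeholder.

```python
from selenium import webdriver

# Download the chromedriver matching chrome://version, then point Selenium at it:
driver = webdriver.Chrome(executable_path="/Users/me/drivers/chromedriver")
driver.get("https://example.com")
```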
2019-12-15 19:29:16.370 | Add new data to model sklearn: SGD | I made models with sklearn, something like this:
clf = SGDClassifier(loss="log")
clf.fit(X, Y)
Now I would like to feed additional data to this model, but with a higher weight. I tried to use partial_fit with a bigger sample_weight, but it is not working. Maybe I am not using fit and partial_fit correctly; sorry, I'm a beginner.
If someone knows how to add new data, I would be happy to hear it. :)
Thanks for the help. | Is there another way to do an initial round of training and then add new, more important data to the model? Keras?
Thanks, guys. | 0 | false | 1 | 6,451 |
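For reference, SGDClassifier does support this pattern directly; a minimal sketch with stand-in data showing partial_fit with sample_weight.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)        # stand-in data
X_new, Y_new = rng.normal(size=(20, 5)), rng.integers(0, 2, 20)  # the "important" batch

clf = SGDClassifier(loss="log")
clf.fit(X, Y)

# Each new sample counts 10x; repeating the call for a few epochs strengthens
# its influence further, since partial_fit does a single pass per call.
for _ in range(5):
    clf.partial_fit(X_new, Y_new, sample_weight=np.full(len(Y_new), 10.0))
```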
2019-12-16 16:28:11.120 | RPA : How to do back-end automation using RPA tools? | I would like to know how back-end automation is possible through RPA.
I'd be interested in solving this scenario for an incident-management application in which authentication is required. The app provides:
An option to download/export the report to a CSV file
Sorting the CSV as per the requirement
Sending an email with the updated CSV to the team
Please let me know how this is possible through RPA, and what tools
are available in RPA to automate this kind of scenario? | There are several ways to do it. This is especially useful when your back ends are 3rd-party applications where you do not have a lot of control. Many RPA products, like Softomotive WinAutomation, Automation Anywhere, UiPath etc., provide file utilities, Excel utilities, DB utilities, the ability to call APIs, OCR capabilities etc., which you can use for back-end automation. | 1.2 | true | 2 | 6,452 |
2019-12-16 16:28:11.120 | RPA : How to do back-end automation using RPA tools? | I would like to know how back-end automation is possible through RPA.
I'd be interested in solving this scenario for an incident-management application in which authentication is required. The app provides:
An option to download/export the report to a CSV file
Sorting the CSV as per the requirement
Sending an email with the updated CSV to the team
Please let me know how this is possible through RPA, and what tools
are available in RPA to automate this kind of scenario? | RPA tools are designed to automate mainly front-end activities by mimicking human actions; that part can be done easily using any RPA tool.
However, if you are interested in back-end automation, the first question is whether the specific application has an option to interact through the back end/API in the way you want.
If yes, in theory you could develop an RPA robot to run a pre-developed back-end script. However, if all you need is to run this script, creating a robot for this case may be redundant. | 0.386912 | false | 2 | 6,452 |
2019-12-16 22:57:12.960 | Google Colab /bin/bash: 'gdrive/My Drive/path/myfile': Permission denied | I'm trying to run a file (an executable) in Google Colab. I mounted the drive and everything is OK; however, whenever I try to run it using:
! 'gdrive/My Drive/path/myfile'
I get this output of the cell:
/bin/bash: 'gdrive/My Drive/path/myfile : Permission denied
any ideas how to overcome the permissions? | You first need to give that file execute permission:
!chmod 755 'gdrive/My Drive/path/myfile'
(The leading ! runs the shell command from a Colab cell, and the quotes are needed because of the space in "My Drive".) | 1.2 | true | 1 | 6,453 |
2019-12-17 00:12:24.427 | Accessing SAS (9.04) from Anaconda | We are doing a POC to see how to access SAS data sets from Anaconda.
All the documentation I find says SASpy only works with SAS 9.4 or higher.
Our SAS version is 9.04.01M3P062415.
Can this be done? If yes, any documentation in this regard will be highly appreciated.
Many thanks in advance! | SAS datasets are ODBC compliant. SASpy is for running SAS code. If the goal is only to read SAS datasets, use ODBC or OLE DB. I do not have Python code, but SAS has a lot of documentation on doing this using C#. Install the free SAS ODBC drivers and read the sas7bdat files; the drivers are on the SAS website.
Writing is different, but reading should be fine. You will lose some aspects of the dataset, but the data will come through. | 0 | false | 1 | 6,454 |
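An alternative the answer does not mention (my suggestion, not the author's): pandas can read sas7bdat files directly, with no SAS installation or ODBC drivers.

```python
import pandas as pd

# The file name is a placeholder; the encoding may need adjusting per dataset.
df = pd.read_sas("mydata.sas7bdat", format="sas7bdat", encoding="utf-8")
print(df.head())
```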
2019-12-17 09:45:19.723 | How do you write to a file without changing its ctime? | I was hoping that just using something like
with open(file_name, "w") as f:
would not change ctime if the file already existed. Unfortunately it does.
Is there a version which will leave the ctime intact?
Motivation:
I have a file that contains a list of events. I would like to know how old the oldest event is. It seems this should be the file's ctime. | Because Python's open (which follows C's fopen) behaves that way when given "w" as the mode. From the manual:
"w" write: Create an empty file for output operations.
If a file with the same name already exists, its contents are discarded and the file is treated as a new empty file.
If you don't want to truncate the file, use "a" or "a+" to append to it instead. This leaves the existing contents intact (and, on Windows, where st_ctime is the creation time, the creation date as well). | 0.101688 | false | 2 | 6,455 |
2019-12-17 09:45:19.723 | How do you write to a file without changing its ctime? | I was hoping that just using something like
with open(file_name, "w") as f:
would not change ctime if the file already existed. Unfortunately it does.
Is there a version which will leave the ctime intact?
Motivation:
I have a file that contains a list of events. I would like to know how old the oldest event is. It seems this should be the file's ctime. | Beware: ctime is not the creation time but the inode change time. It is updated each time you write to the file or change its metadata, for example by renaming it. So we have:
atime: access time, set each time the file is read
mtime: modification time, set each time the file data is changed (the file is written to)
ctime: change time, set each time something about the file is changed, either data or metadata such as the name or (hard) links
I know of no way to reset the ctime field, because even utimes and its variants can only set the atime and mtime (and the birth time, for file systems that support it, like BSD UFS2) - except, of course, changing the system time, with all the caveats involved... | 1.2 | true | 2 | 6,455 |
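A short sketch of inspecting the three timestamps from Python, consistent with the answer above; the file name is a placeholder.

```python
import os

st = os.stat("events.log")
print("atime:", st.st_atime)   # last read
print("mtime:", st.st_mtime)   # last data change
print("ctime:", st.st_ctime)   # inode change time (creation time on Windows)

# os.utime can only reset atime/mtime, never ctime:
os.utime("events.log", (st.st_atime, st.st_mtime))
```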
2019-12-17 10:48:24.980 | Can you change the precision globally of a piece of code in Python, as a way of debugging it? | I am solving a system of non-linear equations using the Newton Raphson Method in Python. This involves using the solve(Ax,b) function (spsolve in my case, which is for sparse matrices) iteratively until the error or update reduces below a certain threshold. My specific problem involves calculating functions such as x/(e^x - 1) , which are badly calculated for small x by Python, even using np.expm1().
Despite these difficulties, it seems like my solution converges, because the error becomes of the order of 10^-16. However, the dependent quantities, do not behave physically, and I suspect this is due to the precision of these calculations. For example, I am trying to calculate the current due to a small potential difference. When this potential difference becomes really small, this current begins to oscillate, which is wrong, because currents must be conserved.
I would like to globally increase the precision of my code, but I'm not sure if that's a useful thing to do since I am not sure whether this increased precision would be reflected in functions such as spsolve. I feel the same about using the Decimal library, which would also be quite cumbersome. Can someone give me some general advice on how to go about this or point me towards a relevant post?
Thank you! | You can try using mpmath, but YMMV: scipy generally works in double precision, so extra precision will not propagate through calls like spsolve. In the vast majority of cases, analyzing the sources of numerical error is more productive than just trying to reimplement everything with higher-width floats. | 1.2 | true | 1 | 6,456 |
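A minimal sketch of both options: keeping double precision but evaluating x/(e^x - 1) stably with expm1, and switching the same formula to 50-digit arithmetic with mpmath.

```python
import numpy as np
from mpmath import mp, mpf, expm1

def f_np(x):
    return x / np.expm1(x)   # stable for small x, avoids cancellation in e^x - 1

mp.dps = 50                  # 50 significant digits
def f_mp(x):
    x = mpf(x)
    return x / expm1(x)

print(f_np(1e-12), f_mp("1e-12"))
```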
2019-12-17 20:15:22.827 | BigQuery - Update Tables With Changed/Deleted Records | Presently, we send entire files to the Cloud (Google Cloud Storage) to be imported into BigQuery and do a simple drop/replace. However, as the file sizes have grown, our network team doesn't particularly like the bandwidth we are taking while other ETLs are also trying to run. As a result, we are looking into sending up changed/deleted rows only.
Trying to find the path/help docs on how to do this. Scope - I will start with a simple example. We have a large table with 300 million records. Rather than sending 300 million records every night, send over X million that have changed/deleted. I then need to incorporate the change/deleted records into the BigQuery tables.
We presently use Node JS to move from Storage to BigQuery and Python via Composer to schedule native table updates in BigQuery.
Hope to get pointed in the right direction for how to start down this path. | Stream the full row to BigQuery on every update.
Let the table accommodate multiple rows for the same primary entity.
Write a view, e.g. table_last, that picks the most recent row.
This way you have all your queries near-realtime on real data.
You can occasionally deduplicate the table by running a query that rewrites the table with only the latest row per entity.
Another approach, if you have one final table and one table that you stream into, is a MERGE statement that runs on a schedule every X minutes to write the updates from the streamed table into the final table. | 0.386912 | false | 1 | 6,457 |
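A minimal sketch of that scheduled MERGE, run from Python; the project, dataset, table and column names are placeholders for your own schema.

```python
from google.cloud import bigquery

client = bigquery.Client()

merge_sql = """
MERGE `proj.ds.final` T
USING `proj.ds.staging` S
ON T.id = S.id
WHEN MATCHED AND S.is_deleted THEN DELETE
WHEN MATCHED THEN UPDATE SET T.value = S.value
WHEN NOT MATCHED THEN INSERT (id, value) VALUES (S.id, S.value)
"""
client.query(merge_sql).result()   # .result() blocks until the job finishes
```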
2019-12-21 00:05:12.757 | In Databricks python notebook, how to import a file1 objects resides in different directory than the file2? | Note: I did research on this over web but all of them are pointing to the solution which works on prem/desktops. This case is on databricks notebook, I referred databricks help guide but could not find the solution.
Dear all,
In my local desktop i used to import the objects from other python files by referring their absolute path such as
"from dir.dira.dir0.file1 import *"
But in a Databricks Python notebook I have been finding it difficult to crack this step for the last two hours. Any help is appreciated.
Below is how my command shows,
from dbfs.Shared.ABC.models.NJ_WrkDir.test_schdl import *
also tried below ways, none of them worked
from dbfs/Shared/ABC/models/NJ_WrkDir/test_schdl import *
from \Shared\ABC\models\NJ_WrkDir\test_schdl import *
from Shared/ABC/models/NJ_WrkDir/test_schdl import *
from Shared.ABC.models.NJ_WrkDir.test_schdl import *
The error messages shows:
ModuleNotFoundError: No module named 'Shared
ModuleNotFoundError: No module named 'dbfs
SyntaxError: unexpected character after line continuation character
File "", line 2
from \Shared\ABC\models\NJ_WrkDir\test_schdl import *
^
Thank you! | The solution is to include this command in the child Databricks Python notebook:
%run /path/parentfile
(where the path points to the notebook you want to import the objects from). | 0 | false | 1 | 6,458 |
2019-12-21 11:10:31.507 | Calculate mean across one specific dimension of a 4D tensor in PyTorch | I have a PyTorch video feature tensor of shape [66,7,7,1024] and I need to convert it to [1024,66,7,7]. How do I rearrange the tensor's shape? Also, how do I take the mean across dimension 1? I.e., after taking the mean over the dimension of size 66, I need the tensor to be [1024,1,7,7].
I tried to calculate the mean of dimension 1, but I failed to replace the dimension with its mean value, and I could not picture a 4D tensor in which one dimension is replaced by its mean.
Edit:
I tried torch.mean(my_tensor, dim=1). But this returns me a tensor of shape [1024,7,7]. The 4D tensor is being converted to 3D. But I want it to remain 4D with shape [1024,1,7,7].
Thank you very much. | The first part of the question has been answered in the comments section: we can use my_tensor.permute(3, 0, 1, 2) to convert the tensor to the shape [1024,66,7,7]. (Note that PyTorch's transpose swaps exactly two dimensions; permute is the method that accepts the full dimension ordering.)
Now the mean over the temporal dimension can be taken with
torch.mean(my_tensor, dim=1)
This gives a 3D tensor of shape [1024,7,7].
To obtain a tensor of shape [1024,1,7,7], I had to unsqueeze at dimension 1:
tensor = tensor.unsqueeze(1) | 1.2 | true | 1 | 6,459 |
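A runnable version of the whole transformation; keepdim=True merges the mean and unsqueeze steps into one call.

```python
import torch

t = torch.randn(66, 7, 7, 1024)                # dummy feature tensor
t = t.permute(3, 0, 1, 2)                      # -> [1024, 66, 7, 7]
mean_t = torch.mean(t, dim=1, keepdim=True)    # -> [1024, 1, 7, 7] in one step
print(mean_t.shape)
```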
2019-12-21 17:16:30.333 | False positive rate in a confusion matrix | I was trying to manually calculate TPR and FPR for the given data, but unfortunately I don't have any false positive cases in my dataset, and not even any true positive cases.
So I am getting a divide-by-zero error in pandas. I have an intuition that fpr = 1 - tpr. Please let me know if my intuition is correct; if not, let me know how to fix this issue.
Thank you | It is possible to have FPR = 1 with TPR = 1 if your prediction is always positive no matter what your inputs are.
TPR = 1 means we predict correctly all the positives. FPR = 1 is equivalent to predicting always positively when the condition is negative.
As a reminder:
FPR = 1 - TNR = [False Positives] / [Negatives]
TPR = 1 - FNR = [True Positives] / [Positives] | 0 | false | 1 | 6,460 |
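A sketch of computing both rates safely for the degenerate case described in the question, with stand-in labels; the result also shows that fpr is not 1 - tpr in general.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 0])   # no actual positives at all
y_pred = np.array([0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
tpr = tp / (tp + fn) if (tp + fn) else float("nan")  # undefined without positives
fpr = fp / (fp + tn) if (fp + tn) else float("nan")
print(tpr, fpr)   # nan 0.25
```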
2019-12-22 01:56:37.603 | Import python modules in a completely different directory | I am writing a script that automates the use of other scripts. I've set it up to automatically import other modules from .py files stored in a directory called dependencies using importlib.import_modules()
Originally, I had dependencies as a subdirectory of the root of my application, and this worked fine. However, it's my goal to have the dependencies folder stored potentially anywhere a user would like. In my personal example, it's located in my dropbox folder while my script is run from a different directory entirely.
I cannot for the life of me seem to get the modules to be detected and imported anymore and I'm out of ideas.
Would someone have a better idea of how to achieve this?
This is an example of the path structure:
E:
|_ Scripts:
| |_ Mokha.py
|
|_ Dropbox:
| |_ Dependencies:
| |_ utils.py
Here's my code for importing: (I'm reading in a JSON file for the dependency names and looping over every item in the list)
def importPythonModules(pythonDependencies):
chdir(baseConfig["dependencies-path"])
for dependency in pythonDependencies:
try:
moduleImport = dependency
module = importlib.import_module(moduleImport)
modules[dependency] = module
print("Loaded module: %s" % (dependency))
except ModuleNotFoundError as e:
print(e)
raise Exception("Error importing python dependecies.")
chdir(application_path)
The error I get is No module named 'utils'
I've tried putting an __init__.py in the dependencies folder, in the root of my Dropbox, and in both at the same time, to no avail.
This has got to be possible, right? | UPDATE: I solved it.
sys.path.append(baseConfig['dependencies-path'])
Not super happy with the solution but it'll work for now. | 0 | false | 1 | 6,461 |
2019-12-22 06:06:18.430 | How to put images in a linked list in python | I have created a class Node having two data members: data and next.
I have created another class, LinkedList, having one data member: head.
Now I want to store an image in a node, but I have no idea how to do it. The syntax for performing this operation would be very helpful. | PIL is the Python Imaging Library, which provides the Python interpreter with image-editing capabilities.
Use from PIL import Image after installing it.
Windows: download the appropriate Pillow package, making sure it matches the Python version you have.
pip install Pillow for Linux users.
Then you can easily add an image to your linked list by assigning it to a node's data member. | 0 | false | 1 | 6,462 |
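A minimal sketch of the classes described in the question holding Pillow images; the file names are placeholders.

```python
from PIL import Image

class Node:
    def __init__(self, data, next=None):
        self.data = data          # any object works, including a PIL image
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None
    def push(self, data):
        self.head = Node(data, self.head)

images = LinkedList()
images.push(Image.open("photo1.png"))
images.push(Image.open("photo2.png"))
print(images.head.data.size)      # the image is a normal attribute of the node
```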
2019-12-22 23:43:11.057 | Why doesn't pip work after reinstalling Python? | I had Python 3.7.4 in D:\python3.7.4, but for some reason I uninstalled it today, then changed the folder name to D:\python3.7.5 and installed Python 3.7.5 in it. Now when I try to use pip in cmd I get a fatal error saying:
Unable to create process using '"d:\python3.7.4\python.exe" "D:\Python3.7.5\Scripts\pip.exe"'
I tried to change everything containing python3.7.4 in the environment variables to python3.7.5, but the same error still exists. Does anyone know how to fix this?
Thanks | Try creating a new folder and running the installation there; this should work, as I did the same myself to install 2 different versions before. (The root cause is that pip.exe is a launcher with the old interpreter path baked into it; running d:\python3.7.5\python.exe -m pip install --force-reinstall pip regenerates the launcher with the correct path.) | 0 | false | 1 | 6,463 |
2019-12-23 08:37:50.350 | How to create a registration form in Django that is divided into two parts, such that one can fill in the second part only after email verification? | I have the logic for email verification, but I am not sure how to make it such that only after clicking the link in the verification email is the user taken to the second page of the form, and only after filling in the second part is the user saved. | I would say that a much better idea is to save the user to the database anyway, but mark them as inactive (a simple boolean field on the model is enough). Upon registration, before the email is confirmed, mark the user as inactive; as soon as they confirm the email and fill in the second part of your registration form, change that boolean value to true. If you don't want to keep inactive users' data in your database, you can set up, for example, a cron job that cleans out users who haven't confirmed their email for a few days. | 1.2 | true | 1 | 6,464 |
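A minimal sketch of that inactive-user approach using Django's built-in User; the token handling is elided (real code would verify a signed token rather than a raw id).

```python
from django.contrib.auth.models import User
from django.shortcuts import redirect

# Registration, part 1: create the account but keep it inactive.
def register_part1(username, email, password):
    user = User.objects.create_user(username, email, password)
    user.is_active = False
    user.save()
    return user

# Email-confirmation view: activate, then send the user to part 2 of the form.
def confirm_email(request, user_id):
    user = User.objects.get(pk=user_id)   # placeholder: use a signed token here
    user.is_active = True
    user.save()
    return redirect("registration_part2")
```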
2019-12-23 10:03:28.647 | Python multiprocessing: reading data from disk | This has confused me for a long time.
My program has two processes, both reading data from disk; the disk's maximum read speed is 10 MB/s.
1. If the two processes each read 10 MB of data, do they together spend the same time as one process reading twice?
2. If the two processes each read 5 MB, they spend 1 s reading, and one process reading twice also spends 1 s. I know multiprocessing can save time on IO, but if the IO time is the same, how does multiprocessing save time? | It's not possible to increase disk read speed by adding more threads. With 2 threads reading you will get at best 1/2 the speed per thread (in practice even less), with 3 threads 1/3 the speed, etc.
With disk I/O it is the difference between sequential and random access speed that is really important. For example, sequential read speed can be 10 MB/s, and random read just 10 KB/s. This is the case even with the latest SSD drives (although the ratio may be less pronounced).
For that reason you should prefer to read from disk sequentially, from only one thread at a time. Reading the file from 2 threads in parallel will not only halve the speed of each read, but will reduce it further because of the non-sequential (interleaved) disk access.
Note however, that 10 MB is really not much; modern OSes will prefetch the entire file into the cache, and any subsequent reads will appear instantaneous. | 0 | false | 1 | 6,465 |
2019-12-23 20:07:26.647 | How to move Python virtualenv to different system (computer) and use packages present in Site-packages | I am making a python 3 application (flask based) and for that I created a virtualenv in my development system, installed all packages via pip and my app worked fine.
But when I moved that virtualenv to a different system (with python3 installed) and ran my application with the absolute path of the virtualenv's Python (c:/......./myenv/Scripts/python.exe main.py), it threw errors saying the packages are not installed.
I activated the virtualenv and used pip freeze, and no packages were listed as installed.
But under the virtualenv there is a site-packages directory (myenv -> lib -> site-packages), and all my installed packages are present there.
My question is how to use the packages that are inside site-packages even after moving the virtualenv to a different system in Python 3. | You must not copy and paste a venv, even on the same system.
If you install a new package in the copied venv, it gets installed into the original venv, because the venv's settings are bound to a specific directory. Instead, recreate the environment on the target machine: run pip freeze > requirements.txt in the original venv, then pip install -r requirements.txt in a fresh one. | 0 | false | 2 | 6,466 |
2019-12-23 20:07:26.647 | How to move Python virtualenv to different system (computer) and use packages present in Site-packages | I am making a python 3 application (flask based) and for that I created a virtualenv in my development system, installed all packages via pip and my app worked fine.
But when I moved that virtualenv to a different system (with python3 installed) and ran my application with the absolute path of the virtualenv's Python (c:/......./myenv/Scripts/python.exe main.py), it threw errors saying the packages are not installed.
I activated the virtualenv and used pip freeze, and no packages were listed as installed.
But under the virtualenv there is a site-packages directory (myenv -> lib -> site-packages), and all my installed packages are present there.
My question is how to use the packages that are inside site-packages even after moving the virtualenv to a different system in Python 3. | Maybe you can consider using pipenv to manage the virtualenvs on the different computers or environments. | 0 | false | 2 | 6,466 |
2019-12-24 00:29:06.477 | How do I allow a file to be accessible from all directories? | I have a python program which is an interpreter, for a language that I have made. It is called cbc.py, and it is in a certain directory. Now, I want to know how I can call it, along with sys.argv arguments (like python3 cbc.py _FILENAME_TO_RUN_) in any directory. I have done research on the .bashrc file and on the PATH variable, but I can't find anything that really helps me with my problem. Could someone please show me how to resolve my problem? | You need to make your script executable first and then add it to your PATH.
If you have your Python script at ~/path/to/your/script/YOUR_SCRIPT_NAME:
add #!/usr/bin/python3 at the top of your script,
give executable permission to your script using sudo chmod a+x YOUR_SCRIPT_NAME,
edit ~/.bashrc to add your script path, e.g. echo 'export PATH="$HOME/path/to/your/script:$PATH"' >> ~/.bashrc (the single quotes keep $PATH from being expanded at echo time),
restart, re-login, or run source ~/.bashrc,
now you can run your script as YOUR_SCRIPT_NAME from anywhere. | 0 | false | 1 | 6,467 |
2019-12-24 14:27:11.827 | Blueprism-like spying and bot development | Blueprism gives the possibility to spy elements (like buttons and textboxes) in both web-browsers and windows applications. How can I spy (windows-based only) applications using Python, R, Java, C++, C# or other, anything but not Blueprism, preferrably opensource.
For web-browsers, I know how to do this, without being an expert. Using Python or R, for example, I can use Selenium or RSelenium, to spy elements of a website using different ways such as CSS selector, xpath, ID, Class Name, Tag, Text etc.
But for Applications, I have no clue. BluePrism has mainly two different App spying modes which are WIN32 and Active Accessibility. How can I do this type of spying and interacting with an application outside of Blueprism, preferrably using an opensource language?
(only interested in windows-based apps for now)
The aim is of course to create robots able to navigate the apps as a human would do. | There is a free version of Blue Prism now :) Also Blue Prism uses win32, active accessibility and UI Automation which is a newer for of the older active accessibility.
To do this yourself without looking into Blue Prism you would need to know how to use UIA with C#/VB.new or C++. There are libraries however given that Blue Prism now has a free version I would recommend using that. Anything specific can be developed withing a code stage within Blue Prism. | 0 | false | 1 | 6,468 |
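For the Python side, one commonly used open-source option is pywinauto, which wraps both the Win32 and UI Automation backends; this library is my suggestion, not something the answer names.

```python
from pywinauto import Application

# backend="uia" is UI Automation; backend="win32" is also available.
app = Application(backend="uia").start("notepad.exe")
dlg = app.window(title_re=".*Notepad")
dlg.type_keys("hello{SPACE}world")   # drive the app the way a human would
```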
2019-12-25 06:36:41.020 | How to use SVM to classify when the features of each sample form a matrix? Is it simply a matter of reshaping the matrix into a long vector? | I have 120 samples, and the features of each sample form a 15*17 matrix. How do I use SVM to classify? Is it simply a matter of reshaping each matrix into a long vector? | Yes, that would be the approach I would recommend. It is essentially the same procedure that is used when utilizing images in image classification tasks, since each image can be seen as a matrix.
So what people do is to write the matrix as a long vector, consisting of every column concatenated to one another.
So you can do the same here. | 0 | false | 1 | 6,469 |
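A minimal sketch of the flattening step with stand-in data: each 15x17 matrix becomes a 255-dimensional vector.

```python
import numpy as np
from sklearn.svm import SVC

X = np.random.rand(120, 15, 17)      # placeholder for your 120 feature matrices
y = np.random.randint(0, 2, 120)     # placeholder labels

X_flat = X.reshape(len(X), -1)       # (120, 255): one long vector per sample
clf = SVC().fit(X_flat, y)
```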
2019-12-25 14:11:55.277 | How can I fetch data from a website into my local Django website? | I am rather new to Django and I need to fetch some data from a website. For example, I want the top ten posts of the day from Reddit. I know of the requests module for this, but I am not sure where and how I should implement it, and whether it is important to store the data in a model or not. | You can create a helper module named, say, network.py and implement the fetching functions there.
If you want to store the data in the database you can create appropriate models; otherwise you can directly import and call the function and use the data returned from network.py in your view. | 0 | false | 1 | 6,470 |
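A minimal sketch of the fetching function itself, using Reddit's public JSON listing for the question's example; the subreddit is a placeholder and Reddit requires a custom User-Agent.

```python
import requests

def top_posts_today(subreddit="python", limit=10):
    url = f"https://www.reddit.com/r/{subreddit}/top.json?t=day&limit={limit}"
    resp = requests.get(url, headers={"User-Agent": "my-django-app 0.1"})
    resp.raise_for_status()
    return [child["data"]["title"] for child in resp.json()["data"]["children"]]

print(top_posts_today())
```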
2019-12-25 14:33:59.377 | How to upload a file to pythonanywhere using react native? | I am trying to build an app through react-native wherein I need to upload a JSON file to my account folder hosted on pythonanywhere.
Can you please tell me how I can upload a JSON file to the PythonAnywhere folder through React Native? | The web framework that you're using will have documentation about how to create a view that can accept file uploads. Then you can use the fetch API in your JavaScript to send the file to it. | 0.673066 | false | 1 | 6,471 |
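The answer does not name a framework; assuming Flask on the PythonAnywhere side, a minimal upload view could look like this (the field name and save path are placeholders). The React Native client would then POST the file with FormData and fetch, as the answer says.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["document"]                  # the multipart field name
    f.save("/home/youruser/uploads/data.json")     # placeholder destination
    return "ok"
```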
2019-12-26 16:59:19.517 | Pycharm can't find python.exe | No Python at 'C:\Users\Mr_Le\AppData\Local\Programs\Python\Python38-32\python.exe'
Any time I try to run my code it keeps prompting me this ^^^ but I had recently deleted Python 3.8 to downgrade to Python 3.6 and just installed Python 3.6 to run pytorch.
Does anyone know how to fix this? | 1. In your Windows search bar, find Python 3.9.8 ([screenshot][1]).
2. Right-click on the app.
3. Click on App Settings; your settings will populate ([screenshot][2]).
4. Scroll down on this page ([screenshot][3]).
5. Hit the Repair box.
6. Try to run your Python script again after restarting all your programs.
[1]: https://i.stack.imgur.com/vNMxT.png
[2]: https://i.stack.imgur.com/E4yM3.png
[3]: https://i.stack.imgur.com/HFc1J.png | 0 | false | 2 | 6,472 |
2019-12-26 16:59:19.517 | Pycharm can't find python.exe | No Python at 'C:\Users\Mr_Le\AppData\Local\Programs\Python\Python38-32\python.exe'
Any time I try to run my code it keeps prompting me this ^^^ but I had recently deleted Python 3.8 to downgrade to Python 3.6 and just installed Python 3.6 to run pytorch.
Does anyone know how to fix this? | For other users: just check the C:\Users\<username>\AppData\Local\Programs\Python folder on your PC and remove any folders belonging to previous installations of Python. Also check that the environment variables are correct. | 0 | false | 2 | 6,472 |
2019-12-26 20:34:44.420 | Access output of intermediate layers in Tensor-flow 2.0 in eager mode | I have CNN that I have built using on Tensor-flow 2.0. I need to access outputs of the intermediate layers. I was going over other stackoverflow questions that were similar but all had solutions involving Keras sequential model.
I have tried using model.layers[index].output but I get
Layer conv2d has no inbound nodes.
I can post my code here (which is super long), but I am sure that even without it someone can point me to how this can be done using just TensorFlow 2.0 in eager mode. | The most straightforward solution is to wrap the layer's output in a new model (a bare layer object has no predict method of its own):
mid_layer = model.get_layer("layer_name")
feature_model = tf.keras.Model(inputs=model.input, outputs=mid_layer.output)
You can now treat feature_model as a model, and for instance:
feature_model.predict(X)
Oh, also, to get the name of a hidden layer, you can use this:
model.summary()
This will give you some insights about the layer inputs/outputs as well. | 0 | false | 1 | 6,473 |
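A runnable sketch of that answer with a toy Sequential model; this works for models built with the Sequential/functional API (subclassed models have no model.input, which is one source of the "no inbound nodes" error).

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 3, name="conv2d", input_shape=(8, 8, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])

feature_model = tf.keras.Model(inputs=model.input,
                               outputs=model.get_layer("conv2d").output)
acts = feature_model(np.zeros((1, 8, 8, 1), dtype="float32"))  # eager call
print(acts.shape)   # (1, 6, 6, 4): the intermediate activations
```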
2019-12-27 02:38:58.220 | Given a midpoint, gradient and length. How do I plot a line segment of specific length? | I am trying to plot the endpoints of the line segment which is a tangent to a circle in Python.
I know the circle has center (A, B) and radius r. The point at which I want to find the tangent is (a, b). I want the tangent to be a segment of length c. How do I write code that allows me to restrict the length of the line?
I have the equation of the tangent as y = ((A - a)/(b - B))(x - a) + b, its slope being the negative reciprocal of the radius slope (b - B)/(a - A). So I know how to plot the two endpoints if the length of the segment did not matter. But how would I determine the x-coordinates of the points? Is there some sort of command that allows me to limit the length of a line?
Thank you!!! | I don't know thonny, and it sounds like your implementation will depend a bit on the context of this computation.
That said, it sounds like what you're looking for is the two points of intersection of your tangent line and a (new, conceptual) circle with a given radius (c/2, for a segment of length c) centered on (a, b). You should be able to put together the algebraic expression for those points and simplify it into something tidy. Watch out for special cases, though, where the slope of the tangent is undefined (or where it's zero). | 0 | false | 1 | 6,474 |
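A worked sketch of those endpoints with example numbers: move a distance c/2 from (a, b) along the unit tangent direction, which is the radius vector rotated 90 degrees.

```python
import numpy as np

A, B, r = 0.0, 0.0, 5.0      # circle centre and radius (example values)
a, b = 3.0, 4.0              # point of tangency on the circle
c = 2.0                      # desired segment length

d = np.array([-(b - B), a - A])       # perpendicular to the radius (a-A, b-B)
d = d / np.linalg.norm(d)             # unit tangent direction
p1 = np.array([a, b]) + (c / 2) * d
p2 = np.array([a, b]) - (c / 2) * d   # endpoints of a length-c segment at (a, b)
print(p1, p2)
```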
2019-12-27 06:26:33.787 | How to match duplicates and if match how to remove second one in list in python? | I have the list of APIs,
Input = [WriteConsoleA, WSAStartup, RegCloseKey, RegCloseKey, RegCloseKey, NtTerminateProces, RegCloseKey]
expected output = [WriteConsoleA, WSAStartup, RegCloseKey, NtTerminateProces, RegCloseKey] | You can use set(Input) to remove all duplicates, but note that a set does not preserve order and would also drop the final RegCloseKey; since your expected output only collapses consecutive duplicates, itertools.groupby is the right tool (see the sketch below). | 0 | false | 1 | 6,475 |
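A sketch of the groupby approach, which reproduces the expected output exactly.

```python
from itertools import groupby

calls = ["WriteConsoleA", "WSAStartup", "RegCloseKey", "RegCloseKey",
         "RegCloseKey", "NtTerminateProces", "RegCloseKey"]

deduped = [key for key, _run in groupby(calls)]   # collapses adjacent repeats only
print(deduped)
# ['WriteConsoleA', 'WSAStartup', 'RegCloseKey', 'NtTerminateProces', 'RegCloseKey']
```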
2019-12-27 09:10:51.553 | TextBlob Naive Bayes classifier for neutral tweets | I am doing a small project on sentiment analysis using TextBlob. I understand there are are 2 ways to check the sentiment of tweet:
Tweet polarity: Using it I can tell whether the tweet is positive, negative or neutral
Training a classifier: I am using this method where I am training a TextBlob Naive Bayes classifier on positive and negative tweets and using the classifier to classify tweet either as 'positive' or 'negative'.
My question is: using the Naive Bayes classifier, can I also classify a tweet as 'neutral'? In other words, can the 'sentiment polarity' defined in option 1 somehow be used in option 2? | If you have only two classes, Positive and Negative, and you want to predict whether a tweet is Neutral, you can do so by predicting class probabilities.
For example, a tweet predicted as 80% Positive remains Positive. However, a tweet predicted as 50% Positive could be treated as Neutral instead. | 0 | false | 1 | 6,476 |
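A minimal TextBlob sketch of that probability idea; the training pairs and the 0.4-0.6 neutrality band are placeholders (the threshold is a design choice you should tune).

```python
from textblob.classifiers import NaiveBayesClassifier

train = [("great service", "pos"), ("loved it", "pos"),
         ("awful food", "neg"), ("terrible place", "neg")]
cl = NaiveBayesClassifier(train)

dist = cl.prob_classify("the food was okay")
p_pos = dist.prob("pos")
label = "neutral" if 0.4 < p_pos < 0.6 else dist.max()
print(p_pos, label)
```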
2019-12-27 12:55:42.747 | Sentiment Classification using Doc2Vec | I am confused as to how I can use Doc2Vec(using Gensim) for IMDB sentiment classification dataset. I have got the Doc2Vec embeddings after training on my corpus and built my Logistic Regression model using it. How do I use it to make predictions for new reviews? sklearn TF-IDF has a transform method that can be used on test data after training on training data, what is its equivalent in Gensim Doc2Vec? | To get a vector for an unseen document, use vector = model.infer_vector(["new", "document"])
Then feed vector into your classifier: preds = clf.predict([vector]). | 0.201295 | false | 1 | 6,477 |
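A small end-to-end sketch with a toy corpus, showing the train / infer_vector / predict pipeline from the answer.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

docs = [TaggedDocument(["good", "fun", "movie"], [0]),
        TaggedDocument(["bad", "boring", "plot"], [1])]       # toy corpus
model = Doc2Vec(docs, vector_size=20, min_count=1, epochs=40)

X = [model.infer_vector(d.words) for d in docs]               # the "transform" step
clf = LogisticRegression().fit(X, [1, 0])                     # 1 = positive

new_vec = model.infer_vector(["great", "fun", "film"])        # unseen review
print(clf.predict([new_vec]))
```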
2019-12-28 06:27:45.017 | How can I embed a python file or code in HTML? | I am working on an assignment and am stuck with the following problem:
I have to connect to an oracle database in Python to get information about a table, and display this information for each row in an .html-file. Hence, I have created a python file with doctype HTML and many many "print" statements, but am unable to embed this to my main html file. In the next step, I have created a jinja2 template, however this passes the html template data (incl. "{{ to be printed }}") to python and not the other way round. I want to have the code, which is executed in python, to be implemented on my main .html file.
I can't display my code here since it is an active assignment. I am just interested in general opinions on how to pass my statements from python (or the python file) into an html file. I can't find any information about this, only how to escape html with jinja.
Any ideas how to achieve this?
Many thanks. | Thanks for the suggestions. What I have right now is a perfectly working Python file containing Jinja2 and the HTML output I want, but as a Python file. When executing the corresponding HTML template, the curly expressions {{name}} are displayed literally, and not as the values produced within the Python file. Hence, I still need a way to have my main HTML file execute this Python script on my webpage, which I haven't managed so far.
Unfortunately, it seems that we are not allowed to use Flask, only Jinja and Django. | 0 | false | 2 | 6,478 |
2019-12-28 06:27:45.017 | How can I embed a python file or code in HTML? | I am working on an assignment and am stuck with the following problem:
I have to connect to an oracle database in Python to get information about a table, and display this information for each row in an .html-file. Hence, I have created a python file with doctype HTML and many many "print" statements, but am unable to embed this to my main html file. In the next step, I have created a jinja2 template, however this passes the html template data (incl. "{{ to be printed }}") to python and not the other way round. I want to have the code, which is executed in python, to be implemented on my main .html file.
I can't display my code here since it is an active assignment. I am just interested in general opinions on how to pass my statements from python (or the python file) into an html file. I can't find any information about this, only how to escape html with jinja.
Any ideas how to achieve this?
Many thanks. | You can't find information because that won't work. Browser cannot run python, meaning that they won't be able to run your code if you embed it into an html file. The setup that you need is a backend server that is running python (flask is a good framework for that) that will do some processing depending on the request that is being sent to it. It will then send some data to a template processor (jinja in this case work well with flask). This will in turn put the data right into the html page you want to generate. Then this html page will be returned to the client making the request, which is something the browser will understand and will show to the user. If you want to do some computation dynamically on the browser you will need to use javascript instead which is something a browser can run (since its in a sandbox mode).
Hope it helps! | 0 | false | 2 | 6,478 |
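Since the asker can use Jinja without Flask, a minimal standalone sketch: render the rows fetched from Oracle into HTML on the Python side and write out a finished file.

```python
from jinja2 import Template

rows = [{"name": "Alice"}, {"name": "Bob"}]   # e.g. fetched from the Oracle table

html = Template("""
<ul>
{% for row in rows %}<li>{{ row.name }}</li>{% endfor %}
</ul>
""").render(rows=rows)

with open("report.html", "w") as f:
    f.write(html)   # the browser only ever sees finished HTML
```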
2019-12-30 03:00:38.240 | How to give an AI controls in a video game? | So I made Pong using PyGame and I want to use genetic algorithms to have an AI learn to play the game. I want it to only know the location of its paddle and the ball and controls. I just don't know how to have the AI move the paddle on its own. I don't want to do like: "If the ball is above you, go up." I want it to just try random stuff until it learns what to do.
So my question is, how do I get the AI to try controls and see what works? | So you'd want, as the AI input, the position of its paddle and the position of the ball. The AI output is two boolean outputs: whether the AI should press the up or the down button on the next simulation step.
I'd also suggest adding another input value: the ball's velocity. Otherwise, you would likely have to add yet another input, the location of the ball in the previous simulation step, plus a much more complicated middle layer for the AI to learn the concept of velocity. | 0 | false | 1 | 6,479 |
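A minimal sketch of one way to wire this up: a tiny linear policy mapping those inputs to a button press, plus a mutation step; a genetic algorithm keeps whichever weight vectors score the most points in play.

```python
import random

def act(weights, paddle_y, ball_x, ball_y):
    # Positive score -> press up, negative -> press down.
    s = weights[0]*paddle_y + weights[1]*ball_x + weights[2]*ball_y + weights[3]
    return "up" if s > 0 else "down"

def mutate(weights, rate=0.1):
    # Random tweaks; evaluate each mutant by letting it play a full game.
    return [w + random.gauss(0, rate) for w in weights]

weights = [random.uniform(-1, 1) for _ in range(4)]
print(act(weights, paddle_y=0.5, ball_x=0.7, ball_y=0.2))
```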
2019-12-30 07:27:06.787 | How to get recent data from bigtable? | I need to get 50 latest data (based on timestamp) from BigTable.
I get the data using read_row with a CellsRowLimitFilter(50) filter, but it doesn't return the latest data. It seems the data isn't sorted by timestamp; how do I get the latest data?
Thank you for your help. | It turns out the problem was with the schema: it wasn't designed for time-series data. I should have created the row key as id#reverse_timestamp; then the data is sorted starting from the latest, and I can use CellsRowLimitFilter(50) to get the 50 most recent records. | 1.2 | true | 1 | 6,480 |
2019-12-30 11:55:40.920 | Will pyqt5 connected with MySQL work on other computers without MySQL? | I am building a GUI software using PyQt5 and want to connect it with MySQL to store the data.
On my computer it works fine, but what if I transfer this software to another computer that doesn't have MySQL? And if it does, it will not have the same password that I put in my code (using mysql-connector) to connect my software to MySQL on my PC.
My question is, how to handle this problem??? | If you want your database to be installed with your application and NOT shared by different users using your application, then using SQLite is a better choice than MySQL. SQLite by default uses a file that you can bundle with your app. That file contains all the database tables including the connection username/password. | 1.2 | true | 1 | 6,481 |
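A minimal sqlite3 sketch of the bundled-file approach the answer recommends; the standard library ships the driver, so nothing needs to exist on the user's machine.

```python
import sqlite3

conn = sqlite3.connect("app_data.db")   # a plain file shipped with the app
conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
conn.commit()
print(conn.execute("SELECT * FROM notes").fetchall())
conn.close()
```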
2020-01-03 03:03:26.107 | How can I run TensorFlow on Windows 10? I have a GeForce GTX 1650 GPU. Can I run TensorFlow on it? If yes, how? | I want to do some ML on my computer with Python. I'm facing problems with the installation of TensorFlow; I found that TensorFlow can work with a GPU, provided it is CUDA enabled. I've got a GeForce GTX 1650 GPU; will TensorFlow work on it?
If yes, how can I do so? | Yes: the GTX 1650 is a CUDA-capable card, so TensorFlow can use it. Here are the steps for installing TensorFlow with GPU support:
Download and install Visual Studio.
Install CUDA 10.1.
Add the lib, include and extras/lib64 directories to your PATH variable.
Install cuDNN.
Install TensorFlow with pip install tensorflow. | 0 | false | 1 | 6,482 |
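After those steps, a quick check (available on TensorFlow 2.1+) that the card is actually visible:

```python
import tensorflow as tf

# If CUDA and cuDNN are set up correctly, the GTX 1650 should be listed here.
print(tf.config.list_physical_devices("GPU"))
```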
2020-01-06 05:04:25.633 | I want my python tool to have a mechanism such that whenever anyone runs the tool, a pop-up comes up saying "New version available, please use the latest" | I have created a Python-based tool for my teammates, in which we group all the similar JIRA tickets so it becomes easier to pick the priority ones first. But the problem is that every time I make some changes, I have to ask people to get the latest version from the Perforce server. So I am looking for a mechanism where, whenever anyone uses the tool, a pop-up comes up saying "New version available, please install."
Can anyone help me achieve that? | On startup, or periodically while running, you could have the tool query your Perforce server and check the latest version. If it doesn't match the version currently running, then you would show the popup, and maybe provide a download link.
I'm not personally familiar with Perforce, but in Git for example you could check the hash of the most recent commit. You could even just include a file with a version number that you manually increment every time you push changes. | 0.265586 | false | 3 | 6,483 |
2020-01-06 05:04:25.633 | I want my python tool to have a mechanism such that whenever anyone runs the tool, a pop-up comes up saying "New version available, please use the latest" | I have created a Python-based tool for my teammates, in which we group all the similar JIRA tickets so it becomes easier to pick the priority ones first. But the problem is that every time I make some changes, I have to ask people to get the latest version from the Perforce server. So I am looking for a mechanism where, whenever anyone uses the tool, a pop-up comes up saying "New version available, please install."
Can anyone help me achieve that? | You could maintain the latest version code on your server and have your tool check it periodically against its own version code. If the version code on the server is higher, your tool needs to be updated, and you can tell the user accordingly or raise an appropriate pop-up recommending an update. | 0.135221 | false | 3 | 6,483 |
2020-01-06 05:04:25.633 | I want my python tool to have a mechanism such that whenever anyone runs the tool, a pop-up comes up saying "New version available, please use the latest" | I have created a Python-based tool for my teammates, in which we group all the similar JIRA tickets so it becomes easier to pick the priority ones first. But the problem is that every time I make some changes, I have to ask people to get the latest version from the Perforce server. So I am looking for a mechanism where, whenever anyone uses the tool, a pop-up comes up saying "New version available, please install."
Can anyone help me achieve that? | I have an idea: you can use the requests module to fetch a page on your website (put the version number in that page) and get the newest version.
Then get the version on the user's computer and compare it to the official version. If it is different from or lower than the official version, pop up a window to remind the user to update. | 0.265586 | false | 3 | 6,483 |
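A minimal sketch of that version-number-in-a-page idea; the URL is a placeholder.

```python
import requests

LOCAL_VERSION = "1.4.0"   # bump this constant with every release
VERSION_URL = "https://example.com/tool/version.txt"   # placeholder URL

def update_available() -> bool:
    remote = requests.get(VERSION_URL, timeout=5).text.strip()
    return remote != LOCAL_VERSION   # show the pop-up when they differ

if update_available():
    print("New version available, please use the latest.")
```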
2020-01-06 13:14:31.660 | How to get the Performance Log of another tab from Selenium using Python? | I'm using Selenium with Python API and Chrome to do the followings:
Collect the Performance Log;
Click some <a, target='_blank'> tags to get into other pages;
For example, I click an href in page 'A', which makes the browser open a new window to load another URL, 'B'.
But when I use driver.get_log('performance') to get the performance log, I can only get the log of Page 'A'. Even though I switch to the window of 'B' as soon as I click the href, some log entries of the page 'B' will be lost.
So how can I get the whole performance log of another page without setting the target of <a> to '_top'? | I had the same problem, and I think it is because the driver does not immediately switch to the new window.
I switched to page "B" and reloaded that page, then used get_log, and it worked. | 0 | false | 1 | 6,484 |
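A sketch of that switch-and-reload sequence (Selenium 3 style, assuming performance logging was enabled through the goog:loggingPrefs capability):

```python
from selenium import webdriver

caps = webdriver.DesiredCapabilities.CHROME.copy()
caps["goog:loggingPrefs"] = {"performance": "ALL"}   # enable the performance log
driver = webdriver.Chrome(desired_capabilities=caps)

driver.get("https://example.com/pageA")
driver.find_element_by_css_selector("a[target=_blank]").click()

driver.switch_to.window(driver.window_handles[-1])   # focus the new window (page B)
driver.refresh()                                     # re-trigger page B's requests
logs_b = driver.get_log("performance")
```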
2020-01-08 06:15:06.680 | What is the difference between iterdescendants() and iterchildren() in lxml? | In the lxml Python library, how do you iterate, and what is the difference between iterdescendants() and iterchildren()? | When you use iterchildren() you iterate over the first-level children only. When you use iterdescendants() you iterate over the children, the children's children, and so on. | 0 | false | 1 | 6,485 |
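A tiny demo of the difference:

```python
from lxml import etree

root = etree.fromstring("<a><b><c/></b><d/></a>")
print([e.tag for e in root.iterchildren()])      # ['b', 'd']   first level only
print([e.tag for e in root.iterdescendants()])   # ['b', 'c', 'd']   all levels
```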
2020-01-08 16:40:23.643 | NS3 - Python.h file cannot be located: compilation error | I included Python.h in my module header file and it built successfully.
Somehow, when I enabled the enable-examples configuration to compile the example.cc file, which includes the module header file, it reported that the Python.h file cannot be found (a fatal error).
I have no clue at the moment what is wrong.
Could anyone give a hint? It is for the NS3 (Network Simulator 3) framework. | Thanks for writing back to me. :)
I solved the issue by adding the pyembed feature to the wscript in the same folder as my .cc file.
Thanks again. :)
J. | 0 | false | 1 | 6,486 |
2020-01-09 15:38:35.540 | Anaconda prompt launches Visual Studio 2017 when run .py Files | Traditionally I've used Notepad ++ along with the Anaconda prompt to write and run scripts locally on my Windows PC.
I had my PC upgraded and thought I'd give Visual Studio Code a chance to see if I liked it.
Now, every time I try to execute a .py file in the Anaconda prompt Visual Studio 2017 launches. I hate this and can't figure out how to stop it.
I've tried the following:
Uninstalling Visual Studio Code.
Changing environments in Anaconda.
Reinstalling Anaconda. I did not check the box for the %PATH option.
Reboots at every step.
On my Windows 10 laptop Visual Studio 2017 doesn't appear in my Apps and Features to uninstall. I've tried Googling and am stuck.
The programs involved are:
Windows 10 Professional
Visual Studio 2017
Anaconda version 2019.10 Build Channel py37_0
Can someone help me figure out how to stop this? | How were you running the scripts before? python script.py or only script.py?
If it is the latter, what probably happened is that Windows has associated .py files with Visual Studio. Right-click on the file, go to Open With, then select Python if you want to run them, or Notepad++ if you want to edit them. | 1.2 | true | 1 | 6,487 |
2020-01-09 23:15:20.873 | AWS Lambda - Run Lambda multiply times with different environment variables | I have an AWS Lambda that uses 2 environment variables. I want to run this lambda up to several hundred times a day, however i need to change the environment variables between runs.
Ideally, I would like something where I could list a set of variable pairs and run the Lambdas on a schedule.
The only way I see of doing this is to have separate Lambdas and set the environment variables for each manually.
Any ideas about how to achieve this? | You could use an SQS queue for this. Instead of your scheduler initiating the Lambda function directly, it could simply send a message with the two data values to an SQS queue, and the SQS queue could be configured to trigger the Lambda. When triggered, the Lambda will receive the data from the message, so the Lambda function does not need to change.
Of course, if you have complete control over the client that generates the two data values then that client could also simply invoke the Lambda function directly, passing the two data values in the payload. | 1.2 | true | 1 | 6,488 |
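A minimal boto3 sketch of both ends of that design; the queue URL and variable names are placeholders.

```python
import json
import boto3

# Producer (the scheduler/client side): one message per desired run.
sqs = boto3.client("sqs")
sqs.send_message(
    QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/lambda-jobs",
    MessageBody=json.dumps({"VAR_A": "foo", "VAR_B": "bar"}),
)

# Consumer (the Lambda handler): read the values from each SQS record.
def handler(event, context):
    for record in event["Records"]:
        params = json.loads(record["body"])
        print(params["VAR_A"], params["VAR_B"])
```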
2020-01-10 07:19:16.663 | Convert float64 to int (Excel to pandas) | I have imported an Excel file into Python with pandas, but when I display customer numbers I get them in float64 format, i.e.
7.500505e+09, 7.503004e+09
How do I convert the column containing these numbers? | int(yourVariable) will cast a single float64 value to an integer; for a whole pandas column, use .astype() (see the sketch below).
Is this what you are looking for? | 0 | false | 1 | 6,489 |
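The column-wise version of that cast; the file and column names are placeholders.

```python
import pandas as pd

df = pd.read_excel("customers.xlsx")
df["customer_no"] = df["customer_no"].astype("int64")   # fails if NaNs are present
# With missing values, use pandas' nullable integer dtype instead:
# df["customer_no"] = df["customer_no"].astype("Int64")
print(df["customer_no"].head())
```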
2020-01-10 09:01:43.203 | Camera Calibration basic doubts | I am starting out with computer vision and opencv. I would like to try camera calibration for the images that I have to see how it works. I have a very basic doubt.
Should I use the same camera from which the distorted images were captured, or can I use any camera to perform my camera calibration? | Camera calibration should be done for the same camera. The purpose of calibrating a camera is to understand how much distortion the images have and to correct it before we use the camera to take actual pictures. Even if you do not have the original camera, having the checkerboard images taken with that camera is sufficient. Otherwise, look for a similar camera with features as close as possible (focal length etc.) to take checkerboard images for calibration; this will somewhat serve your purpose. | 0.386912 | false | 1 | 6,490 |
2020-01-11 08:50:04.607 | NLP AI logic - dialogue sequences with multiple parameters per sequence: architecture | I have a dataset of dialogues with various parameters (like whether it is a question or an action, what emotion it conveys, etc.). I have 4 different pieces of information per sentence.
let's say A replies to B
A has an additional parameter in a separate list for its possible emotions (1.0.0.0) (angry.happy.sad.bored), and another list for its possible actions (1.0.0.0) (question.answer.inpulse.ending)
I know how to build a regular RNN model (from the tutorials and papers I have seen here and there), but I can t seem to find a "parameters" architecture.
Should I train multiple models? (Like sentence A --> emotions, then sentence B --> actions.) Then train the main RNN separately and run the predictions through all the models?
Or is there a way to build one single model with all the information provided right at the beginning?
I apologize for my approximate English, which makes my search for answers even more difficult. | From the way I understand your question, you want to find emotions/actions based on a particular sentence. Sentence A has emotions as labels and sentence B has actions as labels. Each of the labels has 4 different values, for a total of 8 values. And you are confused about how to provide the labels as input.
Now, you can give all these labels their own separate classes. For example, emotions will have labels (1.2.3.4) and actions will have labels (5.6.7.8). Then concatenate both datasets and run classification through the RNN.
If you need to pass emotions/actions as input, then add them to the vectorized matrix. Suppose you have sentence A, "Today's environment is very good", with the emotion happy. Add the emotion to its matrix row, like this:
Today | Environment | very | good | health
1 | 1 | 1 | 1 | 0
Now add emotion such that:
Today | Environment | very | good | health | emotion
1 | 1 | 1 | 1 | 0 | 2(for happy)
I hope this answers your question. | 1.2 | true | 1 | 6,491 |
2020-01-11 20:43:56.263 | How to identify the message in a delivery notification? | In pika, I have called channel.confirm_delivery(on_confirm_delivery) in order to be informed when messages are delivered successfully (or fail to be delivered). Then, I call channel.basic_publish to publish the messages. Everything is performed asynchronously.
How, when the on_confirm_delivery callback is called, do I find the message concerned? In the parameters, the only information that changes in the object passed to the callback is delivery_tag, which seems to be an auto-incremented number. However, basic_publish doesn't return any delivery tag.
In other words, if I call basic_publish twice, how do I know, when I receive an acknowledgement, whether it's the first or the second message that is acknowledged? | From the RabbitMQ documentation:
Delivery tags are monotonically growing positive integers and are presented as such by client libraries.
So you can keep a growing integer in your code, per channel: set it to 0 when the channel is opened and increase it each time you publish a message. This integer will then match the delivery_tag. | 1.2 | true | 1 | 6,492 |
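A minimal sketch of that counter idea; the frame attributes follow pika's asynchronous confirm callback (the channel object comes from your own pika connection setup), and handling of the `multiple` flag is omitted for brevity.

```python
pending = {}          # delivery_tag -> message body
publish_count = 0     # grows exactly like the broker's delivery tags

def publish(channel, body):
    global publish_count
    publish_count += 1
    pending[publish_count] = body
    channel.basic_publish(exchange="", routing_key="work", body=body)

def on_confirm_delivery(frame):
    tag = frame.method.delivery_tag      # matches our own counter
    msg = pending.pop(tag, None)
    print("confirmed:", msg)
```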
2020-01-12 11:18:22.443 | How to set the format for a datetime column in a Jupyter notebook | 11am – 4pm, 7:30pm – 11:30pm (Mon-Sun) - this is the opening and closing time of a restaurant.
I have this kind of format in my TIME column and it does not convert into datetime format, so how do I prepare the data so that I can apply linear regression?
ValueError: ('Unknown string format:', '11am – 4pm, 7:30pm – 11:30pm (Mon-Sun)') | From my understanding, this fails because the string is not a single timestamp: it is two time ranges plus a day range, so it must be split into individual times before parsing. Also, datetime stores times in 24-hour form (e.g. 00:00:00).
So instead of 7:30pm, the parsed value would be 19:30:00. | 0 | false | 1 | 6,493 |
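A sketch of splitting and parsing that exact string with strptime, which handles 12-hour formats fine once each time stands alone:

```python
from datetime import datetime

raw = "11am – 4pm, 7:30pm – 11:30pm (Mon-Sun)"
hours = raw.split("(")[0]                      # drop the "(Mon-Sun)" part

def parse(t: str) -> datetime:
    t = t.strip()
    fmt = "%I:%M%p" if ":" in t else "%I%p"    # handles "11am" and "7:30pm"
    return datetime.strptime(t, fmt)

slots = [tuple(map(parse, span.split("–"))) for span in hours.split(",")]
print(slots)   # opening/closing times as datetime objects
```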
2020-01-12 15:18:09.343 | Which data should I plot to know which model suits the problem best? | I'm sorry, I know this is a very basic question, but since I'm still a beginner in machine learning, determining which model suits my problem best is still confusing to me. Lately I used a linear regression model (and the r2_score was very low), and a user mentioned I could choose a model according to the curve of the plot of my data. Then I saw another coder use a random forest regressor (making the r2_score 30% better than the linear regression model), and I do not know how he/she knew the better model, since he/she didn't mention it. In most sites that I read, they shove the data into whichever models they think suit the problem best (for example, for a regression problem, the models could be linear regression or a random forest regressor), but some sites and some people say we first need to plot the data so we can predict which exact model will suit best. I really don't know which part of the data I should plot. I thought using a seaborn pairplot would give me insight into the shape of the curve, but I doubt that is the right way. What should I actually plot: only the label, only the features, or both? And how can I get insight into the curve to find the best possible model after that? | If you are using off-the-shelf packages like sklearn, then many simple models like SVM, RF, etc., are just one-liners, so in practice we usually try several such models at the same time. | 0 | false | 2 | 6,494 |
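A sketch of that try-several-models workflow with stand-in data; cross-validated r2 makes the comparison fair across model families.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # stand-ins for your own X, y
y = X[:, 0] ** 2 + rng.normal(size=200)       # a mildly non-linear target

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    scores = cross_val_score(model, X, y, scoring="r2", cv=5)
    print(type(model).__name__, round(scores.mean(), 3))
```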
2020-01-12 15:18:09.343 | Which data should I plot to know which model suits the problem best? | I'm sorry, I know this is a very basic question, but since I'm still a beginner in machine learning, determining which model suits my problem best is still confusing to me. Lately I used a linear regression model (and the r2_score was very low), and a user mentioned I could choose a model according to the curve of the plot of my data. Then I saw another coder use a random forest regressor (making the r2_score 30% better than the linear regression model), and I do not know how he/she knew the better model, since he/she didn't mention it. In most sites that I read, they shove the data into whichever models they think suit the problem best (for example, for a regression problem, the models could be linear regression or a random forest regressor), but some sites and some people say we first need to plot the data so we can predict which exact model will suit best. I really don't know which part of the data I should plot. I thought using a seaborn pairplot would give me insight into the shape of the curve, but I doubt that is the right way. What should I actually plot: only the label, only the features, or both? And how can I get insight into the curve to find the best possible model after that? | This question is too general, but I will try to give an overview of how to choose a model. First of all, you should know that there is no general rule for choosing the family of models to use; it is more often chosen by experimenting with different models and seeing which one gives better results. You should also know that in general you have multi-dimensional features, so plotting the data will not give you full insight into the dependence of the target on your features. However, to check whether a linear model is worth fitting, you can start by plotting the target against each dimension of the input and look for some kind of linear relation. I would recommend that you fit a linear model first and check whether it is relevant from a statistical point of view (Student's t-test, Smirnov test, checking the residuals...). Note that in real-life applications it is not likely that linear regression will be the best model unless you do a lot of feature engineering. So I would recommend using more advanced methods (random forests, XGBoost...). | 0.201295 | false | 2 | 6,494 |
2020-01-12 22:30:19.967 | Openshift online - no longer running collectstatic | I've got 2 Python 3.6 pods currently running. They both used to run collectstatic upon redeployment, but then one wasn't working properly, so I deleted it and made a new 3.6 pod. Everything is working perfectly with it, except it no longer runs collectstatic on redeployment (so I'm doing it manually). Any thoughts on how I can get it running again?
I checked the documentation, and for the 3.11 version of OpenShift it still looks like there is a variable to disable collectstatic (which I haven't set), but the 4.* versions don't seem to have it. I don't know if that has anything to do with it.
Edit:
So it turns out that I had also updated the Django version to 2.2.7.
As it happens, the OpenShift infrastructure on OpenShift Online is happy to run collectstatic with version 2.1.15 of Django, but not 2.2.7 (or 2.2.9). I'm not quite sure why that is yet; still looking into it. | Currently OpenShift Online's Python 3.6 module doesn't support Django 2.2.7 or 2.2.9. | 1.2 | true | 1 | 6,495
2020-01-13 13:47:41.013 | How to edit ELF by adding custom sections and symbols | I want to take an ELF file and then, based on the content, add a section with data and add symbols. Using objcopy --add-section I can add a section with the content that I would like, but I cannot figure out how to add a symbol.
Regardless, I would prefer not to run a series of programs in order to do what I want, but rather do it natively in C or Python. With pyelftools I can view an ELF, but I cannot figure out how to edit one.
How can I add custom sections and symbols in Python or C? | ELF itself has nothing to do with the symbols stored in it by programs; it is just a format that encodes everything. Symbols are normally generated by compilers, like the C compiler, a Fortran compiler, or an assembler, while sections are fixed by the programming language (e.g. the C compiler only uses a limited number of sections, depending on the kind of data you use in your programs). Some compilers have extensions to associate a variable with a section, so the linker will consider it special in some way. The compiler/assembler generates a symbol table so the linker can use it to resolve dependencies.
If you want to add symbols to your program, the easiest way is to create an assembler module with the sections and symbols you want to add to the executable, then assemble it and link it into the final executable.
Read about the ld(1) program (the linker) and how it uses linker scripts (special files that direct the linker on how to organize the sections of the different modules at link time) to handle the sections in an object file. ELF is just a format. If you use a linker script and the help of the assembler, you'll be able to add any section you want or modify the normal memory map that programs use. | 0 | false | 1 | 6,496
2020-01-13 17:24:11.667 | Google Earth Engine using Python | How should a beginner start learning Google Earth Engine coding with Python using Colab? I know Python, but how do I learn about image objects and image classification? | I use the geemap package to convert a shapefile to an Earth Engine variable without uploading the file to Assets. | 0 | false | 1 | 6,497
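A hedged sketch of that workflow (the file path is a placeholder; geemap.shp_to_ee is the relevant helper and returns an ee.FeatureCollection):
import ee
import geemap
ee.Initialize()
roi = geemap.shp_to_ee('my_area.shp')   # local shapefile -> Earth Engine object, no Assets upload needed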
2020-01-13 18:30:12.277 | How to predict the player using random forest ML | I have to predict the winner of the Australian Open 2020. My dataset has these features: Location / Tournament / Date / Series / Court / Surface / Round / Winner / Loser etc.
I trained my model using just these features: 'Victory', 'Series', 'Court', 'Surface', 'WinRank', 'LoseRank', 'WPts', 'LPts', 'Wsets', 'Lsets', 'Weather'. I get 0.93 accuracy, but now I have to predict the name of the winner, and I don't have any idea how to do it based on the model that I trained.
Example: if I have Dimitrov G. vs Simion G., then using random forest the model has to give me one of them as the winner of the match.
I transformed the names of the players into dummy variables, but after that I don't know what to do.
Can anyone give me an idea of how I could predict the winner, so I can create a tournament? | To address such a problem, I would suggest the creation of a custom target variable.
Firstly, the transformation of player names into dummy variables seems reasonable (just make sure each unique player is identified by the same first- and last-name combination, thereby avoiding duplicates and ensuring the correct dummy code for each player).
Now, to create the target variable "wins" -
Use the two player names - P1, P2 of the match as input features for your model.
Define the "wins" as 1 if P1 wins and 0 if P2 wins.
Run your model with this set up.
When you want to create a tournament and predict the winner, the inputs will be your 2 players and the other match features. If "wins" is close to 1, it means P1 wins, so output that player's name. | 1.2 | true | 1 | 6,498
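A hedged sketch of that setup (X, y and new_match are illustrative assumptions, not columns from the original data):
from sklearn.ensemble import RandomForestClassifier
# X: dummy-encoded P1/P2 names plus the match features; y: 1 if P1 won, 0 if P2 won
clf = RandomForestClassifier()
clf.fit(X, y)
# for Dimitrov G. vs Simion G., build the same feature row (new_match) and predict
winner = 'Dimitrov G.' if clf.predict(new_match)[0] == 1 else 'Simion G.'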
2020-01-14 14:08:12.387 | Eikon API - ek.get_data for indices | I would like to retrieve the following (historical) information while using the
ek.get_data()
function: ISIN, MSNR, MSNP, MSPI, NR, PI, NT
for some equity indices, take ".STOXX" as an example. How do I do that? I want to specify that I am using the get_data function instead of the timeseries function because I need daily data and would exceed the 3k-row limit in get_timeseries.
In general: how do I get to know the right names for the fields that I have to use inside the
ek.get_data()
function? I tried both the codes that the Excel Eikon program uses and the names used in the Eikon browser, but they differ quite a lot from the examples I saw in some sample code on the web (e.g. TR.TotalReturnYTD vs TR.PCTCHG_YTD). How do I find out the right names for the data types I need? | Considering the codes in your function (ISIN, MSNR, MSNP, MSPI, NR, PI, NT), I'd guess you are interested in the Datastream dataset. You are probably better off using the DataStream Web Services (DSWS) API instead of the Eikon API. This will also relieve you of your 3k-row limit. | 0 | false | 1 | 6,499
2020-01-14 16:05:18.980 | Installing the cutadapt package on Windows | I'm trying to install a package named cutadapt on a Windows server. I'm trying to do it this way:
pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org cutadapt
But every time I try to install it I get this error: Building wheel for cutadapt (PEP 517): finished with status 'error'
Any ideas on how to get past this issue? | It turned out that I had some problems with Python 3.5, so I switched to Python 3.8 and managed to install the package. | 1.2 | true | 1 | 6,500
2020-01-14 17:32:47.693 | Can I use Node.js for the back end and Python for the AI calculations? | I am trying to create a website in Node.js. However, I am taking a course on Artificial Intelligence and would like to implement it in my program, so I was wondering whether it is feasible to connect Python (Spyder) to a Node.js-based web application with relative ease. | Yes, that is possible. There are a few ways you can do this: you can use the child_process library, as mentioned above, or you can have a Python API that takes care of the AI stuff, which your Node app communicates with.
The latter is what I prefer, as most of my projects run in containers as microservices on Kubernetes. | 0.201295 | false | 1 | 6,501
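A minimal sketch of the "Python API for the AI part" option using Flask (the endpoint name and my_model are placeholders, not from the original posts):
from flask import Flask, jsonify, request
app = Flask(__name__)
@app.route('/predict', methods=['POST'])
def predict():
    features = request.get_json()         # JSON sent by the Node backend
    result = my_model.predict(features)   # my_model stands in for your trained model
    return jsonify({'prediction': result})
app.run(port=5000)   # the Node app then POSTs to http://localhost:5000/predict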
2020-01-14 21:20:47.013 | Python 3.X micro ORM compatible with SQL Server | My application is database-heavy (full of very complex queries and stored procedures); it would be too hard and inefficient to write these queries in a lambda way, so I'll have to stick with raw SQL.
So far I found these 2 'micro' ORMs but none are compatible with MSSQL:
PonyORM
Supports: SQLite, PostgreSQL, MySQL and Oracle
Peewee
Supports: SQLite, PostgreSQL, MySQL and CockroachDB
I know SQLAlchemy supports MSSQL; however, it would be too big for what I need. | As of today - Jan 2020 - it seems that using pyodbc is still the way to go for SQL Server + Python if you are not using Django or any other big framework. | 1.2 | true | 1 | 6,502
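A minimal pyodbc sketch for raw SQL against SQL Server (server, database and credentials are placeholders):
import pyodbc
conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret')
cursor = conn.cursor()
cursor.execute('EXEC my_complex_procedure ?', 42)   # raw SQL / stored procedures as-is
rows = cursor.fetchall()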
2020-01-15 06:45:49.180 | catboost classifier for class imbalance? | I am using the CatBoost classifier for my binary classification model, where I have a highly imbalanced dataset of 0 -> 115000 & 1 -> 10000.
Can someone please guide me on how to use the following parameters in CatBoostClassifier:
1. class_weights
2. scale_pos_weight ?
From the documentation, I am under the impression that I can use the
ratio of the sum of the negative class to the sum of the positive class, i.e. 115000/10000 = 11.5, as the input for scale_pos_weight, but I am not sure.
Please let me know what exact values to use for these two parameters and the method to derive those values.
Thanks | For scale_pos_weight you would use negative class // positive class. in your case it would be 11 (I prefer to use whole numbers).
For class weight you would provide a tuple of the class imbalance. in your case it would be: class_weights = (1, 11)
class_weights is more flexible so you could define it for multi-class targets. for example if you have 4 classes you can set it: class_weights = (0.5,1,5,25)
and you need to use only one of the parameters. for a binary classification problem I would stick with scale_pos_weight. | 1.2 | true | 1 | 6,503 |
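Putting those numbers into code (a sketch; set only one of the two parameters, and X_train/y_train are assumed to exist):
from catboost import CatBoostClassifier
model = CatBoostClassifier(scale_pos_weight=11)         # 115000 // 10000
# alternative, not to be combined with the above:
# model = CatBoostClassifier(class_weights=(1, 11))
model.fit(X_train, y_train)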
2020-01-16 01:15:11.980 | How to write \n without making a newline | So I'm trying to print this exact string, but I don't want \n to make a newline; I want to actually print \n on the screen. Any thoughts on how to go about this? (using Python)
Languages:\npython\nc\njava | Escaping the backslash makes Python treat the following backslash character literally: print("\\n"). | 0 | false | 1 | 6,504
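Both approaches applied to the full string from the question (escaped backslashes, or a raw string in which backslashes are taken literally):
print("Languages:\\npython\\nc\\njava")
print(r"Languages:\npython\nc\njava")   # raw string, same output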
2020-01-20 05:37:33.623 | My Dataset is showing a string when it should be a curly bracket set/dictionary | My dataset has a column where upon printing the dataframe each entry in the column is like so:
{"Wireless Internet","Air conditioning",Kitchen}
There are multiple things wrong with this that I would like to correct
Upon printing this in the console, Python prints '{"Wireless Internet","Air conditioning",Kitchen}'. Notice the quotation marks around the curly brackets: Python is printing a string.
Ideally, I would like to find a way to convert this to a list like ["Wireless Internet","Air conditioning","Kitchen"], but I do not know how. Further, notice how some words do not have quotation marks, such as Kitchen. I do not know how to go about correcting this.
Thanks | What you have is a string that merely looks like a set of words. Curly brackets are used for sets and for dictionaries, which link a key to a value, such as {'Alex': 19, 'Mary': 20} (name and age in my example). Rather than that, you can use the tolist() method in pandas; maybe it suits your needs. | 0 | false | 1 | 6,505
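A hedged sketch for turning such a string into a clean list (csv handles the quoted items that contain commas; this assumes the exact format shown in the question):
import csv
s = '{"Wireless Internet","Air conditioning",Kitchen}'
inner = s.strip('{}')                  # drop the curly brackets
items = next(csv.reader([inner]))      # csv strips the quotes and respects them
print(items)                           # ['Wireless Internet', 'Air conditioning', 'Kitchen']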
2020-01-20 11:17:53.697 | Get whole row using database package execute function | I am using the databases package in my FastAPI app. databases has execute and fetch functions. When I try to return column values after inserting or updating using execute, it returns only the first value. How can I get all the values without using fetch?
This is my query
INSERT INTO table (col1, col2, col3, col4)
VALUES ( val1, val2, val3, val4 ) RETURNING col1, col2; | I had trouble with this also, this was my query:
INSERT INTO notes (text, completed) VALUES (:text, :completed) RETURNING notes.id, notes.text, notes.completed
Using database.execute(...) will only return the first column.
But using database.fetch_one(...) inserts the data and returns all the columns.
Hope this helps | 0 | false | 2 | 6,506
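A sketch of that fetch_one approach (table and column names taken from the query above; this must run inside an async function):
query = """INSERT INTO notes (text, completed)
           VALUES (:text, :completed)
           RETURNING id, text, completed"""
row = await database.fetch_one(query=query, values={'text': 'hi', 'completed': False})
# row contains all three returned columns, not just the first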
2020-01-20 11:17:53.697 | Get whole row using database package execute function | I am using the databases package in my FastAPI app. databases has execute and fetch functions. When I try to return column values after inserting or updating using execute, it returns only the first value. How can I get all the values without using fetch?
This is my query
INSERT INTO table (col1, col2, col3, col4)
VALUES ( val1, val2, val3, val4 ) RETURNING col1, col2; | INSERT INTO table (col1, col2, col3, col4) VALUES ( val1, val2, val3, val4 ) RETURNING (col1, col2);
You can use this query to get all the columns. | 1.2 | true | 2 | 6,506
2020-01-20 12:19:47.043 | PyCharm venv issue "pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available" | I hope someone can help me as I would like to use PyCharm to develop in Python.
I have looked around but do not seem to be able to find any solutions to my issue.
I have Python 3 installed using the Windows msi.
I am using Windows 10 and have downloaded PyCharm version 2019.3.1 (Community Edition).
I create a new project using the Pure Python option.
On trying to pip install any package, I get the error
pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available
If I try this in VSCode using the terminal it works fine.
Can anyone tell me how to resolve this issue? It would appear to be a problem with the virtual environment, but I do not know enough to resolve it.
Thanks for your time. | Sorry guys, it appears the base interpreter option was set to Anaconda, which I had installed some time ago and forgotten about, and PyCharm defaulted to it. Changing the base interpreter option to my Python install (python.exe) solved the issue.
Keep on learning | 0.673066 | false | 1 | 6,507 |
2020-01-20 14:24:55.687 | Does anyone know how Tesseract - OCR postprocessing / spellchecking works? | I was using tesseract-ocr (pytesseract) for Spanish, and it achieves very high accuracy when you set the language to Spanish and, of course, the text is in Spanish. If you do not set the language to Spanish it does not perform nearly as well. So I'm assuming that Tesseract is using post-processing models for spell-checking to improve performance, and I was wondering if anybody knows which models (i.e. edit distance, noisy channel modeling) Tesseract is applying.
Thanks in advance! | Your assumption is wrong: if you do not specify a language, Tesseract uses its English model as the default for OCR. That is why you got a poor result for Spanish input text. There is no spell-checking post-processing. | 0 | false | 1 | 6,508
2020-01-21 22:38:55.577 | Erwin API with Python | I am trying to get a clear idea of how to retrieve the Erwin-generated DDL objects with Python. I am aware the Erwin API needs to be used. What I am looking for is which Python module and which API need to be used, and how to use them. I would be thankful for an example! | Here is a start:
import win32com.client
ERwin = win32com.client.Dispatch("erwin9.SCAPI")
I haven't been able to browse the scapi dll so what I know is from trial and error. Erwin publishes VB code that works, but it is not straightforward to convert. | 0.201295 | false | 1 | 6,509 |
2020-01-22 08:29:49.570 | Venv fails in CentOS, ensurepip missing | I'm trying to create a venv in Python 3 (on CentOS). However, I get the following error:
Error: Command '['/home/cleared/Develop/test/venv/bin/python3', '-Im',
'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit
status 1.
I guess there is some problem with my ensurepip...
Running python3 -m ensurepip results in
FileNotFoundError: [Errno 2] No such file or directory:
'/usr/lib64/python3.6/ensurepip/_bundled/pip-9.0.3-py2.py3-none-any.whl'
Looking in the /usr/lib64/python3.6/ensurepip/_bundled/ I find pip-18.1-py2.py3-none-any.whl and setuptools-40.6.2-py2.py3-none-any.whl, however no pip-9.0.3-py2.py3-none-any.whl
Running pip3 --version gives
pip 20.0.1 from /usr/local/lib/python3.6/site-packages/pip (python
3.6)
Why is it looking for pip-9.0.3-py2.py3-none-any.whl when I'm running pip 20.0.1, why do I have pip-18.1-py2.py3-none-any.whl, and how do I fix this? | I would do a clean reinstall of Python (and maybe some of its dependencies as well) with your operating system's package manager (yum?). | 0 | false | 2 | 6,510
2020-01-22 08:29:49.570 | Venv fails in CentOS, ensurepip missing | I'm trying to create a venv in Python 3 (on CentOS). However, I get the following error:
Error: Command '['/home/cleared/Develop/test/venv/bin/python3', '-Im',
'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit
status 1.
I guess there is some problem with my ensurepip...
Running python3 -m ensurepip results in
FileNotFoundError: [Errno 2] No such file or directory:
'/usr/lib64/python3.6/ensurepip/_bundled/pip-9.0.3-py2.py3-none-any.whl'
Looking in the /usr/lib64/python3.6/ensurepip/_bundled/ I find pip-18.1-py2.py3-none-any.whl and setuptools-40.6.2-py2.py3-none-any.whl, however no pip-9.0.3-py2.py3-none-any.whl
Running pip3 --version gives
pip 20.0.1 from /usr/local/lib/python3.6/site-packages/pip (python
3.6)
Why is it looking for pip-9.0.3-py2.py3-none-any.whl when I'm running pip 20.0.1, why do I have pip-18.1-py2.py3-none-any.whl, and how do I fix this? | These versions are hardcoded at the beginning of ./lib/python3.8/ensurepip/__init__.py. You can edit this file with the correct ones.
As for the reason for this corruption, I can only guess; I would bet on a problem during the installation of this interpreter. | 1.2 | true | 2 | 6,510
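For reference, the bundled versions appear near the top of ensurepip/__init__.py as module constants, roughly like this (exact values vary per interpreter build; these match the wheels found above):
_SETUPTOOLS_VERSION = "40.6.2"
_PIP_VERSION = "18.1"
Editing these to match the .whl files actually present in ensurepip/_bundled/ should make ensurepip (and therefore venv) work again.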
2020-01-22 11:33:05.413 | How can I deploy my features in a Machine Learning algorithm? | I'm very new to ML, so I have a really rudimentary question. I would appreciate it if someone clarified it for me.
Suppose I have a set of tweets labeled as negative and positive. I want to perform some sentiment analysis.
I extracted 3 basic features:
Emotion icons
Exclamation marks
Intensity words(very, really etc.).
How should I use these features with SVM or other ML algorithms?
In other words, how should I deploy the extracted features in an SVM algorithm?
I'm working with Python and already know how to run SVM and other algorithms, but I don't have any idea about the relation between the extracted features and their role in each algorithm!
Based on the responses of some experts, I updated my question:
First, I want to thank you for your time and worthy explanations. I think my problem is getting solved… So, in line with what you said, each ML algorithm may need some vectorized features, and I should find a way to represent my features as vectors. I want to explain what I got from your explanation via a rudimentary example.
Say I have emoticon icons (for example 3 icons) as one feature:
1-Hence, I should represent this feature by a vector with 3 values.
2-The vectorized feature can be initialized this way: [0,0,0] (each value represents an icon = :) and :( and :P ).
3-Next I should go through each tweet and check whether the tweet has an icon or not. For example [2,1,0] shows that the tweet has: :) 2 times, and :( 1 time, and :p no time.
4-After I check all the tweets I will have a big vector with the size of n*3 (n is the total number of my tweets).
5-Stages 1-4 should be done for other features.
6-Then I should merge all those features by using m models of SVM (m is the number of my features) and then classify by majority vote or some other method.
Or should create a long vector by concatenating all of the vectors, and feed it to the SVM.
Could you please correct me if there is any misunderstanding? If it is not correct I will delete it; otherwise I'll let it stay, because it can be practical for other beginners like me...
Thanks a bunch… | Basically, to make things very "simple" and "shallow": all algorithms take some sort of numeric vector representing the features.
The real work is to find out how to represent the features as vectors that yield the best result; this depends on the feature itself and on the algorithm being used.
For example, to use SVM, which basically finds a separating plane, you need to project the features onto some set of vectors that yields a good enough separation. So, for instance, you can treat your features like this:
Emotion icons - create a vector which represents all the icons present in that tweet; assign each icon an index from 1 to n, so a tweet represented by [0,0,0,2,1] means the 4th and 5th icons appear in its body 2 times and 1 time, respectively
Exclamation marks - you can simply count the number of occurrences (a better approach would be to represent more information about them, like their place in the sentence and such...)
Intensity words - you can use the same approach as for the emotion icons
basically each feature can be used alone in the SVM model to classify good and bad
you can merge all those features by using 3 models of SVM and then classify by majority vote or some other method
or
you can create a long vector by concatenating all of the vectors, and feed it to the SVM
This is just one approach; you might tweak it or use some other one that fits your data, model and goal better | 0.999329 | false | 1 | 6,511
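A hedged sketch of the concatenated-vector option with scikit-learn (the icon and intensity-word lists are illustrative, and tweets/labels are assumed to exist):
from sklearn.svm import SVC
ICONS = [':)', ':(', ':P']
INTENSITY = ['very', 'really']
def featurize(tweet):
    icons = [tweet.count(icon) for icon in ICONS]                   # per-icon counts
    exclamations = [tweet.count('!')]                               # exclamation count
    intensity = [tweet.lower().split().count(w) for w in INTENSITY] # intensity-word counts
    return icons + exclamations + intensity                         # one long vector per tweet
X = [featurize(t) for t in tweets]
clf = SVC().fit(X, labels)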
2020-01-22 12:40:14.960 | Search SVN for specific files | I am trying to write a Python script to search a (very large) SVN repository for specific files (ending with .mat). Usually I would use os.walk() to walk through a directory and then search for the files with a RegEx. Unfortunately I can't use os.walk() for a repository, since it is not a local directory.
Does anyone know how to do that? The repository is too large to download, so I need to search it "online".
Thanks in advance. | Something like
svn ls -R REPO-ROOT | grep PATTERN
will help | 0.386912 | false | 1 | 6,512 |
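The same idea driven from Python with subprocess (the repository URL is a placeholder; capture_output requires Python 3.7+):
import re
import subprocess
out = subprocess.run(['svn', 'ls', '-R', 'https://svn.example.com/repo'],
                     capture_output=True, text=True).stdout
mat_files = [line for line in out.splitlines() if re.search(r'\.mat$', line)]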
2020-01-23 01:37:55.337 | How do you create a class, or function in python that allows you to make a sequence type with specific characteristics | 1) My goal is to create a sequence that is a list that contains ordered dictionaries. The only problem for me will be described below.
I want the list to represent a bunch of "points", which are for all intents and purposes just ordered dictionaries. However, I notice that when I use the OrderedDict class and print the dictionary, it comes up as OrderedDict([key-value pair 1, key-value pair 2, ... etc.]). I would rather have it behave like an ordered dictionary BUT without those double "messy/ugly" end marks, the "[( )]". I don't mind if the points have ONE, and only one, type of end mark. I would also like it if, when I print this data type, things like OrderedDict() don't show up; however, I don't mind if they show up in return values. You know how when you print a list it doesn't show up as list(index0, index1, ... etc.) but instead as [index0, index1, ... etc.]? That is what I mean. Inside the point, it would look like this:
point = {'height': 1, 'weight': 3, 'age': 5, etc.} <- It could be brackets or braces or parentheses, just some type of end mark, but I would preferably like it to be in {}, with key-value pairs indicated by key: value and separated by commas.
what_i_am_looking_for = [point0, point1, point2, point3, ... etc] | In Python 3.6, the ordinary dict implementation was re-written and maintains key insertion order like OrderedDict, but was considered an implementation detail. Python 3.7 made this feature an official part of the language spec, so if you use Python 3.6+ just use dict instead of OrderedDict if you don't care about backward-compatibility with Python 3.5 or earlier. | 0 | false | 1 | 6,513 |
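So on Python 3.7+ a plain dict gives exactly the ordered, brace-delimited display described above:
point = {'height': 1, 'weight': 3, 'age': 5}
print(point)                       # prints: {'height': 1, 'weight': 3, 'age': 5}
what_i_am_looking_for = [point]    # a list of such points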
2020-01-23 04:51:14.783 | Scrape and compare web page data | I have a web page with data in different tables. I want to extract a particular table, compare it with an Excel sheet, and see whether there are any differences. Note that the web page is on an internal domain. I tried with requests and BeautifulSoup, but I got a 401 error. Could anyone suggest how I can achieve this? | 401 is an Unauthorized error, which suggests your username and password may be getting rejected, or their format not accepted. Review your credentials and the exact format / data names expected by the page to ensure you're correctly trying to connect. | 0 | false | 1 | 6,514
2020-01-25 09:25:18.143 | USB Device/PyUSB on Windows and Linux behaving differently | I have a device with a USB interface which I can connect to both my Ubuntu 18.04 machine and my Windows 10 machine. On Windows 10 I have to install the CP210x driver and manually attach it to the device (otherwise Windows tries to find the device manufacturer's driver - it's a CP210x serial chip), and on Linux write the vendorID and productID to the cp210x driver to allow it to attach to ttyUSB0. This works fine.
The Windows driver is from Silicon Labs - the manufacturer of the UART-USB chip in the device.
So on Windows it is attached to COM5 and on Linux to ttyUSB0 (Ubuntu, Raspbian).
Using Wireshark I can snoop the usb bus successfully on both operating systems.
The USB device sends data regularly over the USB bus and on Windows using Wireshark I can see this communication as "URB_INTERRUPT in" messages with the final few bytes actually containing the data I require.
On Linux it seems that the device connects, but this time using Wireshark I can only see URB_BULK packets. Examining the endpoints using pyusb, I see that there is no URB_Interrupt endpoint, only the URB_Bulk.
Using the pyusb libraries on Linux, it appears that the only endpoints available are URB_BULK.
My main question is: how do I tell Linux to get the device to send via the interrupt transfer mechanism, as Windows seems to do? I don't see a method in pyusb's set_configuration to do this (as no interrupt transfer endpoints appear) and haven't found anything in the manufacturer's specification.
Failing that, of course, I could snoop the configuration messages on Windows, but there has to be something I'm missing here? | Disregard this, the answer was simple in the end: Windows was reassigning the device address on the bus to a different device. | 0 | false | 1 | 6,515 |
2020-01-25 21:41:28.487 | How can I define an absolute path saved in one exe file? | I'm writing software in Python for Windows that should connect to a database. Using py2exe I want to make an executable file so that I don't have to install Python on the machines the software runs on. The problem is that I want the user to define where the database is located the very first time the software starts, but I don't know how to store this information so that the user doesn't have to specify the database location every time. I have no idea how to deal with it (the code cannot be changed because it's just a .exe file). How would you do that? | I can think of some solutions:
You can assume the DB is in a fixed location - bad idea; it might move or change name, and then your program stops working
You can assume the DB is in the same folder as the .exe file and guide the user to run it in the same folder - better, but still not perfect
Ask the user for the DB location and save the path in a configuration file. If the file doesn't exist or the path doesn't lead to the file, the user should tell the program where the DB is; otherwise, read it from the config file - I think this is the best option. | 0 | false | 1 | 6,516
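A minimal sketch of that last option with configparser (the file and key names are arbitrary choices):
import configparser, os
CONFIG = 'settings.ini'
config = configparser.ConfigParser()
config.read(CONFIG)                                # silently skips a missing file
if 'db' in config and 'path' in config['db']:
    db_path = config['db']['path']
else:
    db_path = input('Where is the database? ')     # asked only the first time
    config['db'] = {'path': db_path}
    with open(CONFIG, 'w') as f:
        config.write(f)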
2020-01-25 23:13:58.847 | How to install python module local to a single project | I've been going around but was not able to find a definitive answer...
So here's my question..
I come from a JavaScript background. I'm trying to pick up Python now.
In JavaScript, the basic practice would be to npm install (or use yarn).
This would install the required modules locally in a specific project.
Now, for Python, I've figured out that pip is the package manager.
I can't seem to figure out how to install packages specific to a project (the way JavaScript does it).
Instead, it's all global. I've found the --user flag, but that's not really what I'm looking for.
I've come to the conclusion that this is just a completely different scheme, and I shouldn't try to approach it as I did with JavaScript.
However, I can't really find a good document on why this method was favored.
It may be just my problem, but I can't stop thinking about how I'm consistently bloating my global pip folder with modules that I'm only ever going to use once for a single project.
Thanks. | A.) Anaconda (the simplest) Just download “Anaconda” that contains a lots of python modules pre installed just use them and it also has code editors. You can creat multiple module collections with the GUI.
B.) Venv = virtual environments (if you need something light and specific that contains specific packages for every project
macOS terminal commands:
Install venv
pip install virtualenv
Setup Venve (INSIDE BASE Project folder)
python3 -m venv thenameofyourvirtualenvironment
Start Venve
source thenameofyourvirtualenvironment/bin/activate
Stop Venve
deactivate
while it is activated you can install specific packages ex.:
pip -q install bcrypt
C.) Use “Docker” it is great if you want to go in depth and have a solide experience, but it can get complicated. | 1.2 | true | 1 | 6,517 |
2020-01-27 10:22:09.743 | How to stop Anaconda Navigator and Spyder from dropping libraries into User folder | For reference, I'm trying to re-learn programming and Python basics after years away.
I recently downloaded Anaconda as part of an online Python course. However, every time I open Spyder or the Navigator, they instantly create folders for what I assume are all the relevant libraries in C:\Users\Myself. These include .conda, .anaconda, .ipython, .matplotlib, .config and .spyder-py3.
My goal is to figure out how to change where these files are placed so I can clean things up and have more control. However, I am not entirely sure why this occurs. My assumption is that it's due to that being the default location for the working directory, though the solutions I've seen for that are currently above me. I'm hoping this is a separate issue with a simpler solution, and any light that can be shed on this would be appreciated. | Go to:
~\anaconda3\Lib\site-packages\jupyter_core\paths.py
in def get_home_dir():
You can specify your preferred path directly.
Other Anaconda applications can be modified the same way, but you have to find out in which scripts you can change the home dir, and sometimes it has a different name. | 0 | false | 2 | 6,518
2020-01-27 10:22:09.743 | How to stop Anaconda Navigator and Spyder from dropping libraries into User folder | For reference, I'm trying to re-learn programming and Python basics after years away.
I recently downloaded Anaconda as part of an online Python course. However, every time I open Spyder or the Navigator, they instantly create folders for what I assume are all the relevant libraries in C:\Users\Myself. These include .conda, .anaconda, .ipython, .matplotlib, .config and .spyder-py3.
My goal is to figure out how to change where these files are placed so I can clean things up and have more control. However, I am not entirely sure why this occurs. My assumption is that it's due to that being the default location for the working directory, though the solutions I've seen for that are currently above me. I'm hoping this is a separate issue with a simpler solution, and any light that can be shed on this would be appreciated. | They are automatically created to store configuration changes for those related tools. They are created in %USERPROFILE% under Windows.
The following is NOT recommended:
You can change this either via the setx command or by opening the Start Menu search for variables.
- This opens the System Properties menu on the Advanced tab
- Click on Environmental Variables
- Under the user section, add a new variable called USERPROFILE and set the value to a location of your choice. | 0 | false | 2 | 6,518 |
2020-01-28 07:32:40.877 | Is there an effective way to install 'pip', 'modules' and 'dependencies' in an offline environment? | The computer on which I want to install pip and modules is a secure offline environment.
Only Python 2.7 is installed on these computers (CentOS and Ubuntu).
To run the source code I wrote, I need another module.
But neither pip nor the module is installed.
It looks like I need pip to install all of the dependency files.
But I don't know how to install pip offline,
and I have no idea how to install the module offline without pip.
The only network connection is to the PyPI mirror on my Nexus3 repository.
Is there a good way?
Would it be better to install pip and install modules?
Would it be better to just install the module without installing pip? | With pip it is easier to install packages, as it manages certain things on its own. You can install modules manually by downloading their source code and then building it yourself. The choice is up to you, depending on how you want to do things. | 0 | false | 1 | 6,519
2020-01-29 15:00:21.157 | Kubernetes log not showing output of python print method | I have a Python application in which I'm using the print() method to show text to a user. When I interact with this application manually using the kubectl exec ... command, I can see the output of the prints.
However, when the script is executed automatically on container startup with CMD python3 /src/my_app.py (the last entry in the Dockerfile), the prints are gone (not shown in kubectl logs). Any suggestion on how to fix it? | It turned out to be a problem with the Python environment. Setting these two environment variables, PYTHONUNBUFFERED=1 and PYTHONIOENCODING=UTF-8, fixed the issue. | 0.545705 | false | 1 | 6,520
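A related in-code alternative to those environment variables (standard CPython behavior; running python3 -u has the same effect as PYTHONUNBUFFERED=1):
print("starting up...", flush=True)   # flush=True bypasses stdout buffering for this call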
2020-01-30 15:17:27.757 | Spotfire: Using Multiple Markings in a Data Function Without Needing Something Marked in Each | In Spotfire I have a dashboard that uses both filtering (only one filtering scheme) and multiple markings to show the resulting data in a table.
I have created a data function which takes a column and outputs the data in the column after the active filtering scheme and markings are applied.
However, this output column is only calculated if I have something marked in every marking.
I want the output column to be calculated no matter how many of the markings are being used. Is there a way to do this?
I was thinking I could use an IronPython script to edit the data function parameters for my input column to only check the boxes for markings that are actively being used. However, I can't find how to access those parameters with IronPython.
Thanks! | I think it would be a combination of visuals being set to OR instead of AND for markings (if you have a set of markings that are being set from others).
Also, are all the input parameters set to "required parameter"? Perhaps unchecking that option would still let the script run. In the R script you may want to replace null values as well.
Not too sure without an example. | 1.2 | true | 1 | 6,521
2020-01-30 17:22:41.407 | Program to print all folder and subfolder names in specific folder | I should do it with only import os.
My problem is that I don't know how to make the program, after checking the specific folder for folders, do the same for the folders inside those folders, and so on. | You can use os.walk(directory), which recurses through subfolders for you. | 0 | false | 1 | 6,522
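A minimal sketch that prints every folder and subfolder name under a given path ('target_folder' is a placeholder):
import os
for root, dirs, files in os.walk('target_folder'):
    for d in dirs:
        print(os.path.join(root, d))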
2020-01-30 21:18:10.660 | Cannot import module from linux-ubuntu terminal | I installed the keyboard module for Python with pip3, and after I ran my code the terminal showed me this message: "ImportError: You must be root to use this library on linux." Can anybody tell me how to run it properly? I tried switching to root with "su -" and running it from there as well. | Can you please post your script?
If you are just starting the program without a shebang, it may pick the wrong interpreter and throw an ImportError.
Try adding a shebang (#!) as the first line of your script.
A shebang is used in Unix to select the interpreter that runs your script.
Write this in the first line: #!/usr/bin/env python3
If this doesn't help, try running it from the terminal using a preceding dot, like this:
python3 ./{file's_name}.py | 1.2 | true | 1 | 6,523 |
2020-01-31 02:27:00.850 | Easiest way to put python text game in html? | I am trying to help someone put a Python text game, with its inputs and outputs, on his HTML website. What's the easiest way to do this, given the many outputs and inputs? Would it be to make it a Flask app? I don't really know how else to describe the situation. Answers would be much appreciated. | I am developing a website with Python 3.8 and Sanic. It was pretty easy to use async, await and the walrus operator (:=). | 0 | false | 1 | 6,524
2020-02-01 21:35:48.343 | is django overkill for a contact us website? | I'm a complete beginner, and a relative of mine asked me to build a simple 'contact us' website for them. It should include some information about his company and a form through which people who visit the website can send emails to my relative. I have been playing around with Vue.js to build the frontend. I now want to know how to make the form send emails, and I read it has to be done on the backend, so I thought I could use Django, as I have played with it in the past and I am confident using Python. Is it too much for the work that I have to do? Should I use something simpler? I accept any suggestions. Thanks. | Flask
You can use Flask. It is simpler than Django and easy to learn; you can build a simple website like the one you want in fewer than 50 lines.
WordPress
If you want, you can use WordPress. It's easy to install and many hosting services already support it. WordPress has many plugins and templates to build a contact-us website in 10 minutes.
Wix
Wix is an easy, drag-and-drop website builder with many pre-built templates. Check them out and you will find what you need. | 0 | false | 2 | 6,525
2020-02-01 21:35:48.343 | is django overkill for a contact us website? | I'm a complete beginner, and a relative of mine asked me to build a simple 'contact us' website for them. It should include some information about his company and a form through which people who visit the website can send emails to my relative. I have been playing around with Vue.js to build the frontend. I now want to know how to make the form send emails, and I read it has to be done on the backend, so I thought I could use Django, as I have played with it in the past and I am confident using Python. Is it too much for the work that I have to do? Should I use something simpler? I accept any suggestions. Thanks. | You should probably use something ready-made like Wix or WordPress if you want to do it fast. If you prefer to learn in the process, you can do it with Django and Vue, but that is indeed a little bit overkill | 0 | false | 2 | 6,525
2020-02-02 23:55:02.930 | mitmproxy: shortcut for undoing edit | new user of mitmproxy here. I've figured out how to edit a request and replay it, and I'm wondering how to undo my edit.
More specifically, I go to a request's flow, hit 'e', then '8' to edit the request headers. Then I press 'd' to delete one of the headers. What do I press to undo this change? 'u' doesn't work. | It's possible to revoke changes to a flow, but not while editing. In your case: 'e' -> '8' -> 'd' to delete the header; now press 'q' to go back to the flow, then press 'V' to revoke the changes to the flow. | 0 | false | 1 | 6,526
2020-02-03 11:14:48.003 | Tktable module installation problem. _tkinter.TclError: invalid command name "table" | This problem has been reported earlier, but I couldn't find the exact solution for it. I installed ActiveTcl and downloaded tktable.py by "Guilherme Polo <ggpolo@gmail.com>" to my site-packages; I also added Tktable.dll, pkgIndex.tcl, and tktable.tcl from ActiveTCL\lib\Tktable2.11 to my python38-32\tcl and DLLs folders. I also tried setting the TCL_LIBRARY and TK_LIBRARY environment variables to tcl8.6 and tk8.6 respectively, but I am still getting invalid command name "table".
What is it that I am missing? Those who made Tktable work on Windows 10 and Python 3, how did you do it? I am out of ideas and would be grateful for some tips. | Seems like there was a problem running the Tktable DLLs in the 32-bit Python 3.8 build; it worked in the 64-bit version.
Thanks @Donal Fellows for your input. | 0 | false | 1 | 6,527 |
2020-02-03 12:22:32.540 | Getting a "Future Warning" when importing for Yahoo with Pandas-Datareader | I am currently, successfully, importing stock information from Yahoo using pandas-datareader. However, before the extracted data, I always get the following message:
FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
Would anyone have an idea of what it means and how to fix it? | Cause: The cause of this warning is that, basically, pandas_datareader is importing a module from the pandas library that will be deprecated. Specifically, it is importing pandas.util.testing, whereas the new preferred module is pandas.testing.
Solution: First off this is a warning, and not an outright error, so it won't necessarily break your program. So depending on your exact use case, you may be able to ignore it for now.
That being said, there are a few options you can consider:
Option 1: Change the code yourself -- Go into the pandas_datareader module and modify the line of code in compat/__init__.py that currently says from pandas.util.testing import assert_frame_equal to simply from pandas.testing import assert_frame_equal. This will import the same function from the correct module.
Option 2: Wait for pandas-datareader to update -- You can also wait for the library to be upgraded to import correctly and then run pip3 install --upgrade pandas-datareader. You can go to the GitHub repo for pandas-datareader and raise an issue.
Option 3: Ignore it -- Just ignore the warning for now since it doesn't break your program. | 0 | false | 3 | 6,528 |
2020-02-03 12:22:32.540 | Getting a "Future Warning" when importing for Yahoo with Pandas-Datareader | I am currently, successfully, importing stock information from Yahoo using pandas-datareader. However, before the extracted data, I always get the following message:
FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
Would anyone have an idea of what it means and how to fix it? | For macOS, open /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas_datareader/compat/__init__.py
change: from pandas.util.testing import assert_frame_equal
to: from pandas.testing import assert_frame_equal | -0.135221 | false | 3 | 6,528 |
2020-02-03 12:22:32.540 | Getting a "Future Warning" when importing for Yahoo with Pandas-Datareader | I am currently, successfully, importing stock information from Yahoo using pandas-datareader. However, before the extracted data, I always get the following message:
FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
Would anyone have an idea of what it means and how to fix it? | You may find the 'util.testing' code in pandas_datareader, which is separate from pandas. | -0.067922 | false | 3 | 6,528 |
2020-02-04 16:27:59.627 | Multiple versions of Python in PATH | I've installed Python 3.7 and have since installed Python 3.8.
I've added both their folders and Scripts folders to PATH, and made sure 3.8 is first, as I'd like that to be the default.
I see that the Python Scripts folder has pip, pip3 and pip3.8, and the Python 3.7 folder has the same (but with pip3.7, of course), so in cmd, typing pip or pip3 will default to version 3.8 as I have that first in PATH.
This is great, as I can explicitly decide which pip version to run. However, I don't know how to do the same for Python, i.e. run Python 3.7 from cmd.
And things like Jupyter Notebooks only see a "Python 3" kernel and don't have an option for both.
How can I configure the PATH variables so I can specify which version of python3 to run? | What OS are you running? If you are running Linux and used the system package manager to install Python 3.8, you should be able to invoke Python 3.8 by typing python3.8. Having multiple binaries named python3 in your PATH is problematic, and having python3 in your PATH point to Python 3.8 instead of the system version (which is likely a lower version for your OS) will break your system's package manager. It is advisable to keep python3 in your PATH pointing to whatever the system defaults to, and use python3.8 to invoke Python 3.8.
The Python version that Jupyter sees will be the version from which you installed it. If you want to be able to use Jupyter with multiple Python versions, create a virtual environment with your desired Python version and install Jupyter in that environment. Once you activate that specific virtual env, you can be sure that the jupyter command you invoke will activate the correct Python runtime. | 0.201295 | false | 1 | 6,529
2020-02-04 21:42:07.673 | How does the pymssql library fall back on the named pipe port when port 1433 is closed? | I'm trying to remove pymssql and migrate to pyodbc on a Python 3.6 project that I'm currently on. The network topology involves two machines that are both on the same LAN and the same subnet. The client is an ARM Debian-based machine and the server is a Windows box. Port 1433 is closed on the MSSQL box, but port 32001 is open, and pymssql is still able to remotely connect to the server as it somehow falls back to using the named pipe port (32001).
My question is: how is pymssql able to fall back onto this other port and communicate with the server? pyodbc is unable to do this; if I try using port 1433 it fails and doesn't try to locate the named pipe port. I've tried digging through the pymssql source code to see how it works, but all I see is a call to dbopen, which ends up in freetds library land. Also, just to clarify, tsql -LH returns the named pipe information and open port, which falls in line with what I've seen using netstat and nmap. I'm 100% sure pymssql falls back to using the named pipe port, as the connection to the named pipe port is established after connecting with pymssql.
Any insight or guidance as to how pymssql can do this but pyodbc can't would be greatly appreciated. | Removing the PORT= parameter and using SERVER=ip\instance in the connection string makes the connection use named pipes instead of port 1433. I'm still not sure how the driver itself knows to do this, but it works and resolved my problem. | 0.386912 | false | 1 | 6,530
2020-02-04 22:07:19.503 | PayPal Adaptive Payments ConvertCurrency Request (deprecated API) in Python | I can't find any example of how to make a ConvertCurrency request using the PayPal API in Python. Can you give me some examples of this simple request? | Is this an existing integration for which you have an Adaptive APP ID? If not, the Adaptive Payments APIs are very old and deprecated, so you would not have permission to use this, regardless of whether you can find ready-made code samples for Python. | 0 | false | 1 | 6,531
2020-02-04 22:26:47.110 | Python was not found but can be installed | I have just installed Python 3.8 and the Sublime Text editor. I am attempting to run the Python build in Sublime Text, but I am met with the "Python was not found but can be installed" error.
Both python and sublime are installed on E:\
When opening cmd prompt I can change dir and am able to run py from there without an issue.
I'm assuming that my Sublime is not pointing to the correct directory, but I don't know how to resolve this issue. | I had the same problem, so I went to the Microsoft Store (Windows 10) and simply installed "Python 3.9", and the problem was gone!
Sorry for the bad English, by the way. | -0.386912 | false | 1 | 6,532