Dataset columns:
  content             string (length 86 to 88.9k)
  title               string (length 0 to 150)
  question            string (length 1 to 35.8k)
  answers             sequence
  answers_scores      sequence
  non_answers         sequence
  non_answers_scores  sequence
  tags                sequence
  name                string (length 30 to 130)
Q: What is createSecretKey for in nodejs crypto module? I'm trying to figure out why I should use createSecretKey from the crypto module instead of a string. What is the difference between this: const secret = crypto.createSecretKey('mysupersecret'); // Creates SecretKeyObject hmac = crypto.createHmac('sha256', secret); hash = hmac.update('somemessage').digest('hex'); console.log(hash); and this: const secret = 'mysupersecret'; // just string hmac = crypto.createHmac('sha256', secret); hash = hmac.update('somemessage').digest('hex'); console.log(hash); Both output: 81a86a988a751d4523ebc1ccb3150b094ef7d51a0fbe111600d1832c6de68f9f Does SecretKeyObject provide any benefits? Does using createSecretKey improve the security of my code? A: There are several reasons why you might use the createSecretKey method in the crypto module instead of a string: Security: The createSecretKey method generates a secure, random key that is much less likely to be guessed or hacked than a simple string. This is important for protecting sensitive information, such as passwords or encryption keys. Compatibility: The createSecretKey method generates a key in the correct format for use with other crypto functions, such as encrypt and decrypt. This can save you time and effort when working with cryptographic operations. Ease of use: The createSecretKey method is a simple, one-line command that can be easily integrated into your code. This can make it easier to work with cryptographic operations, especially if you are not an expert in cryptography. Overall, using the createSecretKey method in the crypto module can provide enhanced security, compatibility, and ease of use when working with cryptographic operations.
What is createSecretKey for in nodejs crypto module?
I'm trying to figure out why I should use createSecretKey from the crypto module instead of a string. What is the difference between this: const secret = crypto.createSecretKey('mysupersecret'); // Creates SecretKeyObject hmac = crypto.createHmac('sha256', secret); hash = hmac.update('somemessage').digest('hex'); console.log(hash); and this: const secret = 'mysupersecret'; // just string hmac = crypto.createHmac('sha256', secret); hash = hmac.update('somemessage').digest('hex'); console.log(hash); Both output: 81a86a988a751d4523ebc1ccb3150b094ef7d51a0fbe111600d1832c6de68f9f Does SecretKeyObject provide any benefits? Does using createSecretKey improve the security of my code?
[ "There are several reasons why you might use the createSecretKey method in the crypto module instead of a string:\nSecurity: The createSecretKey method generates a secure, random key that is much less likely to be guessed or hacked than a simple string. This is important for protecting sensitive information, such as passwords or encryption keys.\nCompatibility: The createSecretKey method generates a key in the correct format for use with other crypto functions, such as encrypt and decrypt. This can save you time and effort when working with cryptographic operations.\nEase of use: The createSecretKey method is a simple, one-line command that can be easily integrated into your code. This can make it easier to work with cryptographic operations, especially if you are not an expert in cryptography.\nOverall, using the createSecretKey method in the crypto module can provide enhanced security, compatibility, and ease of use when working with cryptographic operations.\n" ]
[ 0 ]
[]
[]
[ "cryptography", "hmac", "javascript", "node.js" ]
stackoverflow_0074678984_cryptography_hmac_javascript_node.js.txt
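A short note on what createSecretKey() actually does may help here: per the Node.js crypto docs it wraps existing key material in a KeyObject rather than generating a random key, which is why the two snippets in the question produce identical HMACs; the KeyObject mainly gives you an opaque, type-checked handle that other crypto APIs accept. The following is a minimal sketch (assuming Node.js 15 or newer for generateKeySync), not the question's original code:

const crypto = require('crypto');

// createSecretKey() wraps existing key material in a KeyObject;
// it does not generate a new random key.
const keyObject = crypto.createSecretKey(Buffer.from('mysupersecret'));
console.log(keyObject.type);             // 'secret'
console.log(keyObject.symmetricKeySize); // 13 (bytes of key material)

// Both forms produce the same digest, as observed in the question.
const fromKeyObject = crypto.createHmac('sha256', keyObject).update('somemessage').digest('hex');
const fromString = crypto.createHmac('sha256', 'mysupersecret').update('somemessage').digest('hex');
console.log(fromKeyObject === fromString); // true

// To actually generate a fresh random secret, generateKeySync() is the relevant API:
const randomKey = crypto.generateKeySync('hmac', { length: 256 });
console.log(randomKey.export().toString('hex'));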
Q: Exclusive lock using key in .NET I have used the lock statement in C# to exclusively execute a piece of code. Is there a way to do same based on a key. for e.g.: lock(object, key) { //code-here } I have a method which has some piece of code that is not thread-safe, but only if the key (string) happens to be same in two parallel executions. Is there a way to somehow accomplish this in .NET ? If there is a way then we can have parallel executions if the key being used in the parallel executions is different and could improve performance. A: Put lock objects into dictionary indexed by the key - Dictionary<string, object> and grab objects by key to lock. If you need to dynamically add new key/lock object pairs make sure to lock around the dictionary access, otherwise if after construction you only read values from dictionary no additional locking is needed. A: I create a library called AsyncKeyedLock which solves the problem. Internally it uses a ConcurrentDictionary and SemaphoreSlim objects. In case you're using asynchronous code you can use: using (await _locker.LockAsync(myObject.Id)) { ... } or if you're using synchronous code: using (_locker.Lock(myObject.Id)) { ... }
Exclusive lock using key in .NET
I have used the lock statement in C# to exclusively execute a piece of code. Is there a way to do same based on a key. for e.g.: lock(object, key) { //code-here } I have a method which has some piece of code that is not thread-safe, but only if the key (string) happens to be same in two parallel executions. Is there a way to somehow accomplish this in .NET ? If there is a way then we can have parallel executions if the key being used in the parallel executions is different and could improve performance.
[ "Put lock objects into dictionary indexed by the key - Dictionary<string, object> and grab objects by key to lock.\nIf you need to dynamically add new key/lock object pairs make sure to lock around the dictionary access, otherwise if after construction you only read values from dictionary no additional locking is needed.\n", "I create a library called AsyncKeyedLock which solves the problem. Internally it uses a ConcurrentDictionary and SemaphoreSlim objects.\nIn case you're using asynchronous code you can use:\nusing (await _locker.LockAsync(myObject.Id))\n{\n ...\n}\n\nor if you're using synchronous code:\nusing (_locker.Lock(myObject.Id))\n{\n ...\n}\n\n" ]
[ 6, 0 ]
[]
[]
[ ".net", "c#", "locking", "multithreading" ]
stackoverflow_0030882108_.net_c#_locking_multithreading.txt
Q: how to apply condition on fetching data from api? currently i am using fakestoreapi, here i am passing category parameter to fetch data from api. const res = await fetch(`https://fakestoreapi.com/products/${category}`) and displaying products from fetched data {products.map((product, index) => ( <div key={product.id} className="w-full max-w-sm text-black rounded-lg shadow-md bg-white justify-bitween cursor-pointer "> <Link to={`/${product.category}/${product.id}`}> <img className="p-2 rounded-t-lg w-full h-[150px] md:h-[200px] object-contain" src={product.image} alt="productimage" /> </Link> <div className="px-5 pb-2"> <Link to={`/${product.category}/${product.id}`}> <h5 className="text-[15px] font-semibold tracking-tight text-gray-900 hover:text-blue-700">{product.title.slice(0, 30)}...</h5> </Link> <div className="flex items-center justify-between py-4"> <span className="text-2xl sm:text-2xl font-bold text-gray-900 ">${product.price}</span> <button className="hidden sm:block text-white bg-indigo-500 hover:bg-blue-800 focus:ring-4 focus:outline-none focus:ring-blue-300 font-medium rounded-lg text-sm px-5 py-2.5 text-center dark:bg-indigo-500 dark:hover:bg-blue-800 dark:focus:ring-blue-800" onClick={() => addToCart(product)}>Add to cart</button> </div> </div> </div> ))} but now i do not want to display all product of that category how to do that? how to show selective or remove unwanted product details fetched from that id? A: Fakestore API can receive a limit query parameter : fetch('https://fakestoreapi.com/products/category/jewelery?limit=2') .then(res=>res.json()) .then(json=>console.log(json)) You can also use limit(Number) and sort(asc|desc) as a query string to get your ideal results https://fakestoreapi.com/docs
how to apply condition on fetching data from api?
currently i am using fakestoreapi, here i am passing category parameter to fetch data from api. const res = await fetch(`https://fakestoreapi.com/products/${category}`) and displaying products from fetched data {products.map((product, index) => ( <div key={product.id} className="w-full max-w-sm text-black rounded-lg shadow-md bg-white justify-bitween cursor-pointer "> <Link to={`/${product.category}/${product.id}`}> <img className="p-2 rounded-t-lg w-full h-[150px] md:h-[200px] object-contain" src={product.image} alt="productimage" /> </Link> <div className="px-5 pb-2"> <Link to={`/${product.category}/${product.id}`}> <h5 className="text-[15px] font-semibold tracking-tight text-gray-900 hover:text-blue-700">{product.title.slice(0, 30)}...</h5> </Link> <div className="flex items-center justify-between py-4"> <span className="text-2xl sm:text-2xl font-bold text-gray-900 ">${product.price}</span> <button className="hidden sm:block text-white bg-indigo-500 hover:bg-blue-800 focus:ring-4 focus:outline-none focus:ring-blue-300 font-medium rounded-lg text-sm px-5 py-2.5 text-center dark:bg-indigo-500 dark:hover:bg-blue-800 dark:focus:ring-blue-800" onClick={() => addToCart(product)}>Add to cart</button> </div> </div> </div> ))} but now i do not want to display all product of that category how to do that? how to show selective or remove unwanted product details fetched from that id?
[ "Fakestore API can receive a limit query parameter :\nfetch('https://fakestoreapi.com/products/category/jewelery?limit=2')\n .then(res=>res.json())\n .then(json=>console.log(json))\n\n\nYou can also use limit(Number) and sort(asc|desc) as a query string to get your ideal results\n\nhttps://fakestoreapi.com/docs\n" ]
[ 1 ]
[]
[]
[ "api", "javascript", "reactjs" ]
stackoverflow_0074679098_api_javascript_reactjs.txt
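If the goal is to hide only some products from the fetched category (rather than just limiting the count, as in the answer above), one option is to filter the response on the client before rendering. This is a hedged sketch that assumes the documented fakestoreapi product shape (id, title, price, category, description, image, rating) and that category holds a valid category slug; the price threshold and excluded ids are placeholders to adapt:

// Inside your existing async data-loading function or useEffect.
const res = await fetch(`https://fakestoreapi.com/products/category/${category}`);
const products = await res.json();

// Keep only the products you actually want to show; both conditions are examples.
const excludedIds = [5, 9]; // placeholder ids of unwanted products
const visibleProducts = products
  .filter((product) => product.price < 100)
  .filter((product) => !excludedIds.includes(product.id));

// Then map over visibleProducts instead of products when rendering the cards.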
Q: How do I find the row name for columns that have a ratio of above 4? R I've managed to calculate the ratio of my rows with the following: dfratios <- df$row1/df$row2 but I now need to find the corresponding row names for all of these ratios. I've tried using row.names but unfortunately, it's not working. Additionally, I've also tried using (df3 %>% filter(dfratios[dfratios > 3])) but that has just given me the following error: Error in filter(): ! Problem while computing ..1 = df3Ratios[df3Ratios > 2]. ✖ Input ..1 must be of size 789 or 1, not size 71. Run rlang::last_error() to see where the error occurred. Any help would be appreciated :) A: You can use dplyr to create the new variable and then filter after df %>% mutate(ratios = row1/row2) %>% filter(ratios > 3)
How do I find the row name for columns that have a ratio of above 4? R
I've managed to calculate the ratio of my rows with the following: dfratios <- df$row1/df$row2 but I now need to find the corresponding row names for all of these ratios. I've tried using row.names but unfortunately, it's not working. Additionally, I've also tried using (df3 %>% filter(dfratios[dfratios > 3])) but that has just given me the following error: Error in filter(): ! Problem while computing ..1 = df3Ratios[df3Ratios > 2]. ✖ Input ..1 must be of size 789 or 1, not size 71. Run rlang::last_error() to see where the error occurred. Any help would be appreciated :)
[ "You can use dplyr to create the new variable an then filter after\ndf %>% \n mutate(ratios = row1/row2) %>% \n filter(ratios > 3)\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "r" ]
stackoverflow_0074679144_dataframe_r.txt
Q: Anchoring elements to both the top and bottom after they scroll off screen using pure CSS I have a container that scrolls up and down vertically. Within the container there are some absolutely positioned items. Here is an example to illustrate. #body { height: 200px; overflow: scroll; font-family: sans-serif; font-size: 40px; } #container { width: 100%; height: 600px; background-color: rgba(255, 0, 0, 0.1); position: relative; display: flex; align-items: start; justify-content: center; } #item { display: inline; background-color: rgba(0, 0, 255, 0.1); padding: 10px 20px; border-radius: 5px; position: absolute; top: 150px; } <div id="body"> <div id="container"> <div id="item"> Hello </div> </div> </div> I want to make sure a little portion of the item is always visible as the user scrolls up and down, so that the user can always see something there, and doesn't lose track of it. When the user scrolls too far down, it should look like this... Conversely, if the user scrolls too far up, it should look like this... Is this possible with pure CSS? If this is not possible, what's the most efficient approach to achieving the same result with pure CSS & JS (bearing in mind the container might have multiple items in it)? A: Not sure if I understand the conditions correctly, but assuming the goal is to have #item always partially visible while scrolling within the container, perhaps try set position: sticky on #item. This way, the offset range of #item when it sticks to the border can be specified by top and bottom. Example: #body { height: 200px; overflow: scroll; font-family: sans-serif; font-size: 40px; } #container { width: 100%; height: 900px; background-color: rgba(255, 0, 0, 0.1); position: relative; display: flex; align-items: center; justify-content: center; } #item { display: inline; background-color: rgba(0, 0, 255, 0.1); padding: 10px 20px; border-radius: 5px; position: sticky; top: -50px; bottom: -50px; } <div id="body"> <div id="container"> <div id="item"> Hello </div> </div> </div>
Anchoring elements to both the top and bottom after they scroll off screen using pure CSS
I have a container that scrolls up and down vertically. Within the container there are some absolutely positioned items. Here is an example to illustrate. #body { height: 200px; overflow: scroll; font-family: sans-serif; font-size: 40px; } #container { width: 100%; height: 600px; background-color: rgba(255, 0, 0, 0.1); position: relative; display: flex; align-items: start; justify-content: center; } #item { display: inline; background-color: rgba(0, 0, 255, 0.1); padding: 10px 20px; border-radius: 5px; position: absolute; top: 150px; } <div id="body"> <div id="container"> <div id="item"> Hello </div> </div> </div> I want to make sure a little portion of the item is always visible as the user scrolls up and down, so that the user can always see something there, and doesn't lose track of it. When the user scrolls too far down, it should look like this... Conversely, if the user scrolls too far up, it should look like this... Is this possible with pure CSS? If this is not possible, what's the most efficient approach to achieving the same result with pure CSS & JS (bearing in mind the container might have multiple items in it)?
[ "Not sure if I understand the conditions correctly, but assuming the goal is to have #item always partially visible while scrolling within the container, perhaps try set position: sticky on #item.\nThis way, the offset range of #item when it sticks to the border can be specified by top and bottom.\nExample:\n\n\n#body {\n height: 200px;\n overflow: scroll;\n font-family: sans-serif;\n font-size: 40px;\n}\n\n#container {\n width: 100%;\n height: 900px;\n background-color: rgba(255, 0, 0, 0.1);\n position: relative;\n display: flex;\n align-items: center;\n justify-content: center;\n}\n\n#item {\n display: inline;\n background-color: rgba(0, 0, 255, 0.1);\n padding: 10px 20px;\n border-radius: 5px;\n position: sticky;\n top: -50px;\n bottom: -50px;\n}\n<div id=\"body\">\n <div id=\"container\">\n <div id=\"item\">\n Hello\n </div>\n </div>\n</div>\n\n\n\n" ]
[ 2 ]
[]
[]
[ "css", "javascript", "position" ]
stackoverflow_0074678318_css_javascript_position.txt
Q: Expo: eas build blockedPermissions does not work I'm trying to remove a permission that I don't need that is blocking my submission to the App Store: REQUEST_INSTALL_PACKAGES I use Expo 46 and I put "android": { "blockedPermissions": ["android.permission.REQUEST_INSTALL_PACKAGES"], in my app.json file. When I run an expo prebuild I'm able to see the permission removed in the generated AndroidManifest. However, when I run an eas build (locally or not) if I decompile with bundletool the aab generated, the AndroidManifest does not show this line removing the permission. What could be wrong? A: It's possible that the blockedPermissions property is not supported by the eas build command. This property is intended for use with the expo build:android command, which creates a standalone APK file for your app. To remove the REQUEST_INSTALL_PACKAGES permission, you can try adding the following to your app.json file instead: "android": { "permissions": [ "REQUEST_INSTALL_PACKAGES", ], "permissionExceptions": { "REQUEST_INSTALL_PACKAGES": { "description": "This app does not require this permission" } } }, This will explicitly list the REQUEST_INSTALL_PACKAGES permission and provide an explanation for why it's not needed by your app. When you run expo build:android to create a standalone APK, this permission will be removed from the app. A: Instead, you can simply add "expo": { "android": { "package": "com.my.app", "permissions": ["REQUEST_INSTALL_PACKAGES"] } } to your app.json file to remove the REQUEST_INSTALL_PACKAGES permission from your app. A: It looks like there is an issue with how Expo is handling the blockedPermissions configuration in your app.json file. It appears that Expo is not applying this configuration when building the app with the eas build command. Solution One One possible solution is to use the expo build:android command instead of the eas build command. This command will build your app using Expo's build service, which may handle the blockedPermissions configuration correctly. You can then use the generated .aab file to submit your app to the App Store. Solution Two Another solution is to manually remove the REQUEST_INSTALL_PACKAGES permission from the AndroidManifest.xml file before building the app. You can do this by editing the file and removing the following line: <uses-permission android:name="android.permission.REQUEST_INSTALL_PACKAGES" /> Once you have removed this line, you can run the eas build command to build your app, and the generated .aab file should not include the REQUEST_INSTALL_PACKAGES permission.
Expo: eas build blockedPermissions does not work
I'm trying to remove a permission that I don't need that is blocking my submission to the App Store: REQUEST_INSTALL_PACKAGES I use Expo 46 and I put "android": { "blockedPermissions": ["android.permission.REQUEST_INSTALL_PACKAGES"], in my app.json file. When I run an expo prebuild I'm able to see the permission removed in the generated AndroidManifest. However, when I run an eas build (locally or not) if I decompile with bundletool the aab generated, the AndroidManifest does not show this line removing the permission. What could be wrong?
[ "It's possible that the blockedPermissions property is not supported by the eas build command. This property is intended for use with the expo build:android command, which creates a standalone APK file for your app.\nTo remove the REQUEST_INSTALL_PACKAGES permission, you can try adding the following to your app.json file instead:\n\"android\": {\n \"permissions\": [\n \"REQUEST_INSTALL_PACKAGES\",\n ],\n \"permissionExceptions\": {\n \"REQUEST_INSTALL_PACKAGES\": {\n \"description\": \"This app does not require this permission\"\n }\n }\n},\n\nThis will explicitly list the REQUEST_INSTALL_PACKAGES permission and provide an explanation for why it's not needed by your app. When you run expo build:android to create a standalone APK, this permission will be removed from the app.\n", "Instead, you can simply add \"expo\": { \"android\": { \"package\": \"com.my.app\", \"permissions\": [\"REQUEST_INSTALL_PACKAGES\"] } } to your app.json file to remove the REQUEST_INSTALL_PACKAGES permission from your app.\n", "It looks like there is an issue with how Expo is handling the blockedPermissions configuration in your app.json file. It appears that Expo is not applying this configuration when building the app with the eas build command.\nSolution One\nOne possible solution is to use the expo build:android command instead of the eas build command. This command will build your app using Expo's build service, which may handle the blockedPermissions configuration correctly. You can then use the generated .aab file to submit your app to the App Store.\nSolution Two\nAnother solution is to manually remove the REQUEST_INSTALL_PACKAGES permission from the AndroidManifest.xml file before building the app. You can do this by editing the file and removing the following line:\n<uses-permission android:name=\"android.permission.REQUEST_INSTALL_PACKAGES\" />\n\nOnce you have removed this line, you can run the eas build command to build your app, and the generated .aab file should not include the REQUEST_INSTALL_PACKAGES permission.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "android", "android_manifest", "eas", "expo", "expo_build" ]
stackoverflow_0074560936_android_android_manifest_eas_expo_expo_build.txt
Q: How do I add an image to an Allure report with addattachment on JS? I'm trying to add the screenshots to my Allure reports between steps, but in my case the test cases are created manually from the IDE with the Allure TestOps plugin. How do I modify this code to use the existing screenshots (e.g. uploaded to the GH repo) just to display them in the Allure report and not take screenshots? const png = await browser.takeScreenshot() allure.createAttachment('screenshot',new Buffer(png,'base64'),'image/png') I tried to use const img = new image(); img.src = ... but it doesn't work. A: Maybe this can help // Import the fs module to read the file const fs = require("fs"); // Read the file into a Buffer object const screenshotBuffer = fs.readFileSync("path/to/screenshot.png"); // Add the screenshot to the Allure report allure.createAttachment("screenshot", screenshotBuffer, "image/png");
How do I add an image to an Allure report with addattachment on JS?
I'm trying to add the screenshots to my Allure reports between steps, but in my case the test cases are created manually from the IDE with the Allure TestOps plugin. How do I modify this code to use the existing screenshots (e.g. uploaded to the GH repo) just to display them in the Allure report and not take screenshots? const png = await browser.takeScreenshot() allure.createAttachment('screenshot',new Buffer(png,'base64'),'image/png') I tried to use const img = new image(); img.src = ... but it doesn't work.
[ "May be this can help\n// Import the fs module to read the file\nconst fs = require(\"fs\");\n\n// Read the file into a Buffer object\nconst screenshotBuffer = fs.readFileSync(\"path/to/screenshot.png\");\n\n// Add the screenshot to the Allure report\nallure.createAttachment(\"screenshot\", screenshotBuffer, \"image/png\");\n\n" ]
[ 0 ]
[]
[]
[ "allure", "javascript" ]
stackoverflow_0074679092_allure_javascript.txt
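Building on the answer above: if the screenshots already live in the repository, browser.takeScreenshot() can be skipped entirely and the file attached from disk with the same allure.createAttachment call the question already uses. In this sketch, the screenshots/step1.png path is only a placeholder for wherever the images are checked in, and allure is assumed to be the same adapter object available in the question's snippet:

const fs = require("fs");
const path = require("path");

// Placeholder path -- point this at a screenshot committed to the repo.
const screenshotPath = path.join(__dirname, "screenshots", "step1.png");

if (fs.existsSync(screenshotPath)) {
  // Same attachment call as in the question, but fed from an existing file.
  allure.createAttachment("step 1 screenshot", fs.readFileSync(screenshotPath), "image/png");
} else {
  console.warn(`Screenshot not found: ${screenshotPath}`);
}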
Q: how to convert/run typeScript as javaScript? trying to run https://github.com/airgram/airgram From this post, (node:9374) Warning: To load an ES module, set "type": "module" already add {"type": "module"} To package.json Here is my code: root@faka:~/airgram# cat face.js import { Airgram, Auth, prompt, toObject } from 'airgram' const airgram = new Airgram({ apiId: process.env.APP_ID, apiHash: process.env.APP_HASH }) airgram.use(new Auth({ code: () => prompt(`Please enter the secret code:\n`), phoneNumber: () => prompt(`Please enter your phone number:\n`) })) void (async () => { const me = toObject(await airgram.api.getMe()) console.log(`[me]`, me) }) // Getting all updates airgram.use((ctx, next) => { if ('update' in ctx) { console.log(`[all updates][${ctx._}]`, JSON.stringify(ctx.update)) } return next() }) // Getting new messages airgram.on('updateNewMessage', async ({ update }, next) => { const { message } = update console.log('[new message]', message) return next() }) But still got error: root@faka:~/airgram# node face.js internal/process/esm_loader.js:74 internalBinding('errors').triggerUncaughtException( ^ Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'airgram' imported from /root/airgram/face.js at packageResolve (internal/modules/esm/resolve.js:664:9) at moduleResolve (internal/modules/esm/resolve.js:705:18) at Loader.defaultResolve [as _resolve] (internal/modules/esm/resolve.js:798:11) at Loader.resolve (internal/modules/esm/loader.js:100:40) at Loader.getModuleJob (internal/modules/esm/loader.js:246:28) at ModuleWrap.<anonymous> (internal/modules/esm/module_job.js:47:40) at link (internal/modules/esm/module_job.js:46:36) { code: 'ERR_MODULE_NOT_FOUND' } A: Your code have many errors when running it first make sure you are install airgram package: npm install airgram Second error internalBinding('errors').triggerUncaughtException( you can view this answer to solve it.
how to convert/run typeScript as javaScript?
trying to run https://github.com/airgram/airgram From this post, (node:9374) Warning: To load an ES module, set "type": "module" already add {"type": "module"} To package.json Here is my code: root@faka:~/airgram# cat face.js import { Airgram, Auth, prompt, toObject } from 'airgram' const airgram = new Airgram({ apiId: process.env.APP_ID, apiHash: process.env.APP_HASH }) airgram.use(new Auth({ code: () => prompt(`Please enter the secret code:\n`), phoneNumber: () => prompt(`Please enter your phone number:\n`) })) void (async () => { const me = toObject(await airgram.api.getMe()) console.log(`[me]`, me) }) // Getting all updates airgram.use((ctx, next) => { if ('update' in ctx) { console.log(`[all updates][${ctx._}]`, JSON.stringify(ctx.update)) } return next() }) // Getting new messages airgram.on('updateNewMessage', async ({ update }, next) => { const { message } = update console.log('[new message]', message) return next() }) But still got error: root@faka:~/airgram# node face.js internal/process/esm_loader.js:74 internalBinding('errors').triggerUncaughtException( ^ Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'airgram' imported from /root/airgram/face.js at packageResolve (internal/modules/esm/resolve.js:664:9) at moduleResolve (internal/modules/esm/resolve.js:705:18) at Loader.defaultResolve [as _resolve] (internal/modules/esm/resolve.js:798:11) at Loader.resolve (internal/modules/esm/loader.js:100:40) at Loader.getModuleJob (internal/modules/esm/loader.js:246:28) at ModuleWrap.<anonymous> (internal/modules/esm/module_job.js:47:40) at link (internal/modules/esm/module_job.js:46:36) { code: 'ERR_MODULE_NOT_FOUND' }
[ "Your code have many errors when running it first make sure you are install airgram package:\nnpm install airgram\n\nSecond error\n internalBinding('errors').triggerUncaughtException(\n\nyou can view this answer to solve it.\n" ]
[ 1 ]
[]
[]
[ "javascript", "node.js", "node_modules", "typescript" ]
stackoverflow_0074679060_javascript_node.js_node_modules_typescript.txt
Q: Query select columns from row based upon cell value I am looking to pull certain cells from a row based upon the date(todays) which is in cell I1 on Dashboard. I would like to pull the row from Schedule but only return the team name which is in columns AH and AW. I tried this =QUERY(Schedule!A:BU,"select AH, AW Where Schedule!A:A = '"&I2&"'") Its shooting an error of "Unable to parse query string for Function QUERY parameter 2: PARSE_ERROR: Encountered " "Schedule "" at line 1, column 21. Was expecting one of: "(" ... "(" ... " https://docs.google.com/spreadsheets/d/1bWyFiPsOkmskPNNePvPrbaL2oHAht9QS-lFuwAlSS9o/edit A: added a working formula to your sheet =filter({Schedule!AH:AH,Schedule!AW:AW},TO_DATE(INT(left(Schedule!D:D,10)))=I1) A: The values in column Schedule!A2:A are not dates but text strings that look like dates. You can search them in a query() if you convert the search key in cell I1 to a text string with to_text(), like this: =query(Schedule!A1:BU, "select AH, AW where A = '" & to_text(I1) & "' ", 1)
Query select columns from row based upon cell value
I am looking to pull certain cells from a row based upon the date(todays) which is in cell I1 on Dashboard. I would like to pull the row from Schedule but only return the team name which is in columns AH and AW. I tried this =QUERY(Schedule!A:BU,"select AH, AW Where Schedule!A:A = '"&I2&"'") Its shooting an error of "Unable to parse query string for Function QUERY parameter 2: PARSE_ERROR: Encountered " "Schedule "" at line 1, column 21. Was expecting one of: "(" ... "(" ... " https://docs.google.com/spreadsheets/d/1bWyFiPsOkmskPNNePvPrbaL2oHAht9QS-lFuwAlSS9o/edit
[ "added a working formula to your sheet\n=filter({Schedule!AH:AH,Schedule!AW:AW},TO_DATE(INT(left(Schedule!D:D,10)))=I1)\n\n\n", "The values in column Schedule!A2:A are not dates but text strings that look like dates. You can search them in a query() if you convert the search key in cell I1 to a text string with to_text(), like this:\n=query(Schedule!A1:BU, \"select AH, AW where A = '\" & to_text(I1) & \"' \", 1)\n" ]
[ 1, 1 ]
[]
[]
[ "google_sheets", "google_sheets_formula" ]
stackoverflow_0074678551_google_sheets_google_sheets_formula.txt
Q: Outlook Interop Security I am writing a C# Console application which shall be able to search through my emails. I use the Interop Outlook dll to connect and can access the emails fine, but I always get a popup windows which asks me if I want to allow access to Outlook. I understand that this is a secruity dialog and is needed so viruses cannot access my mails. I have already written an Outlook Add In in the past and never got the dialog. I guess this is because the code was executing from inside Outlook. Is there any possibility do store my console app id and always grant access? If there is no way around the dialog, is there any other way to search my local emails withing a C# console application? The Systems specs are: Server 2012 R2 Datacenter, Visual Studio 2013 update 4 and Outlook 2013 A: You see the standard security prompt in Outlook. You can do the following to avoid it: Use the Security Manager component for supressing the security prompts in Outlook. Use a low-level API which doesn't generate such security prompts - Extended MAPI. Or any other third-party wrappers around that API such as Redemption. Deploy Outlook security settings (for administrators). You can read more about all of these ways on the Outlook "Object Model Guard" Security Issues for Developers page. A: We encountered this issue on a recent Windows Server deployment. The server has Office 365 installed and upto date Symantec Antivirus, for this reason, we overrode the default Programmatic Access Security Setting (while we investigate the Antivirus detection issue). In Outlook go to File -> Options -> Trust Center -> Trust Center Settings -> Programmatic Access [UPDATE]: After performing an update on the Antivirus Software and downloading the latest definitions, the status within Outlook changed to "Valid".
Outlook Interop Security
I am writing a C# Console application which shall be able to search through my emails. I use the Interop Outlook dll to connect and can access the emails fine, but I always get a popup windows which asks me if I want to allow access to Outlook. I understand that this is a secruity dialog and is needed so viruses cannot access my mails. I have already written an Outlook Add In in the past and never got the dialog. I guess this is because the code was executing from inside Outlook. Is there any possibility do store my console app id and always grant access? If there is no way around the dialog, is there any other way to search my local emails withing a C# console application? The Systems specs are: Server 2012 R2 Datacenter, Visual Studio 2013 update 4 and Outlook 2013
[ "You see the standard security prompt in Outlook. You can do the following to avoid it:\n\nUse the Security Manager component for supressing the security prompts in Outlook. \nUse a low-level API which doesn't generate such security prompts - Extended MAPI. Or any other third-party wrappers around that API such as Redemption.\nDeploy Outlook security settings (for administrators).\n\nYou can read more about all of these ways on the Outlook \"Object Model Guard\" Security Issues for Developers page.\n", "We encountered this issue on a recent Windows Server deployment. The server has Office 365 installed and upto date Symantec Antivirus, for this reason, we overrode the default Programmatic Access Security Setting (while we investigate the Antivirus detection issue).\n\nIn Outlook go to File -> Options -> Trust Center -> Trust Center Settings -> Programmatic Access\n[UPDATE]: After performing an update on the Antivirus Software and downloading the latest definitions, the status within Outlook changed to \"Valid\".\n\n" ]
[ 0, 0 ]
[]
[]
[ "c#", "console", "office_interop", "outlook" ]
stackoverflow_0031701371_c#_console_office_interop_outlook.txt
Q: How subtotal a dataframe and also grandtotal I'm looking for a way to summarize my dataframe into a summary table that will have the total sum of each observation and the entire sum in percentage and also the total count and total count in percentage and each having a subgroup of total and also Grand total of each observation. here is my sample dataset below. struct<- samples_stack <- dput(samples[1:50,]) structure(list(Merchant = c("Fat", "Fat", "United", "WAVE", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "WAVE", "WAVE", "WAVE", "WAVE", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "WAVE", "WAVE", "WAVE", "WAVE", "WAVE", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat"), Network = c("G", "G", "G", "X9", "G", "G", "M", "M", "M", "M", "M", "M", "M", "M", "G", "G", "G", "G", "A", "A", "A", "A", "G", "G", "G", "G", "A", "A", "A", "A", "A", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G"), Type = c("Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Data", "Airtime", "Data", "Airtime", "Airtime", "Data", "Airtime", "Data", "Data", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Data", "Airtime", "Airtime", "Airtime", "Data", "Airtime", "Airtime", "Data", "Data", "Airtime", "Airtime", "Data", "Data", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime" ), FaceValue = c(200, 200, 2000, 100, 100, 100, 100, 300, 500, 300, 240, 600, 400, 600, 100, 500, 130, 200, 500, 100, 100, 300, 150, 300, 300, 2000, 200, 240, 600, 400, 400, 500, 250, 100, 100, 500, 450, 50, 1300, 1400, 1400, 2000, 100, 130, 130, 100, 100, 200, 200, 600), Date = c("2022-12-03 23:37:23", "2022-12-03 22:45:52", "2022-12-03 06:58:19", "2022-12-03 14:06:28", "2022-12-03 22:19:13", "2022-12-03 22:15:39", "2022-12-03 20:02:12", "2022-12-03 20:01:07", "2022-12-03 19:14:25", "2022-12-03 14:47:35", "2022-12-03 23:12:40", "2022-12-03 23:09:18", "2022-12-03 22:57:57", "2022-12-03 22:51:16", "2022-12-03 21:23:38", "2022-12-03 21:19:43", "2022-12-03 21:03:38", "2022-12-03 20:57:44", "2022-12-03 22:51:07", "2022-12-03 22:26:50", "2022-12-03 21:57:09", "2022-12-03 21:53:54", "2022-12-03 20:20:21", "2022-12-03 20:13:07", "2022-12-03 20:10:30", "2022-12-03 19:50:21", "2022-12-03 01:28:35", "2022-12-03 01:17:59", "2022-12-03 00:35:08", "2022-12-03 00:31:56", "2022-12-03 00:11:25", "2022-12-03 18:36:51", "2022-12-03 17:56:25", "2022-12-03 17:10:15", "2022-12-03 16:49:27", "2022-12-03 16:45:03", "2022-12-03 16:43:26", "2022-12-03 16:37:55", "2022-12-03 16:36:11", "2022-12-03 16:14:40", "2022-12-03 16:03:10", "2022-12-03 16:02:56", "2022-12-03 15:32:37", "2022-12-03 15:30:45", "2022-12-03 15:14:05", "2022-12-03 15:13:24", "2022-12-03 15:09:10", "2022-12-03 12:20:58", "2022-12-03 11:54:15", "2022-12-03 11:36:53" ), Status = c("Processing", "Transaction Declined", "Successful", "Processing", "Processing", "Transaction Declined", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Transaction Declined", "Transaction Declined", "Processing", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", 
"Processing", "Processing", "Transaction Declined", "Processing", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Processing", "Transaction Declined", "Transaction Declined", "Processing", "Processing", "Transaction Declined", "Transaction Declined", "Processing", "Processing", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined" ), Discount... = c("6.00", "6.00", "5.00", "6.00", "6.00", "6.00", "3.70", "3.70", "3.70", "3.70", "3.20", "3.20", "3.20", "3.20", "6.00", "6.00", "6.00", "6.00", "3.50", "3.50", "3.50", "3.50", "6.00", "6.00", "6.00", "6.00", "3.50", "3.50", "3.50", "3.50", "3.50", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00"), Discount.NGN. = c("12.00", "12.00", "100.00", "6.00", "6.00", "6.00", "3.70", "11.10", "18.50", "11.10", "7.68", "19.20", "12.80", "19.20", "6.00", "30.00", "7.80", "12.00", "17.50", "3.50", "3.50", "10.50", "9.00", "18.00", "18.00", "120.00", "7.00", "8.40", "21.00", "14.00", "14.00", "30.00", "15.00", "6.00", "6.00", "30.00", "27.00", "3.00", "78.00", "84.00", "84.00", "120.00", "6.00", "7.80", "7.80", "6.00", "6.00", "12.00", "12.00", "36.00"), Network.reformat = c("G", "G", "G-s", "X9", "G", "G", "NEW", "NEW", "NEW", "NEW", "NEW", "NEW", "NEW", "NEW", "G", "G", "G", "G", "A", "A", "A", "A", "G", "G", "G", "G", "A", "A", "A", "A", "A", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G")), row.names = c(NA, -50L), class = c("tbl_df", "tbl", "data.frame")) I want my table to look like the table below. A: Use grouped summarize() and mutate() to get initial values, then do additional summaries within bind_rows() to add subtotals and totals. Finally, use forcats::fct_relevel() inside dplyr::arrange() to put everything in order. library(dplyr) library(forcats) samples_stack %>% group_by(Network, Merchant) %>% summarize( Count_Merchant = n(), Sum_FaceValue = sum(FaceValue), .groups = "drop" ) %>% mutate( Pct_Merchant = Count_Merchant / sum(Count_Merchant), Pct_FaceValue = Sum_FaceValue / sum(Sum_FaceValue) ) %>% bind_rows( summarize( group_by(., Network), Merchant = "SUBTOTAL", across(Count_Merchant:Pct_FaceValue, sum) ), summarize( ., across(Network:Merchant, ~ "TOTAL"), across(Count_Merchant:Pct_FaceValue, sum) ) ) %>% arrange( fct_relevel(Network, "TOTAL", after = Inf), fct_relevel(Merchant, "SUBTOTAL") ) # A tibble: 12 × 6 Network Merchant Count_Merchant Sum_FaceValue Pct_Merchant Pct_FaceValue <chr> <chr> <int> <dbl> <dbl> <dbl> 1 A SUBTOTAL 9 2840 0.18 0.130 2 A Fat 4 1000 0.08 0.0457 3 A WAVE 5 1840 0.1 0.0841 4 G SUBTOTAL 32 15890 0.64 0.727 5 G Fat 31 13890 0.62 0.635 6 G United 1 2000 0.02 0.0914 7 M SUBTOTAL 8 3040 0.16 0.139 8 M Fat 4 1200 0.08 0.0549 9 M WAVE 4 1840 0.08 0.0841 10 X9 SUBTOTAL 1 100 0.02 0.00457 11 X9 WAVE 1 100 0.02 0.00457 12 TOTAL TOTAL 50 21870 1 1
How subtotal a dataframe and also grandtotal
I'm looking for a way to summarize my dataframe into a summary table that will have the total sum of each observation and the entire sum in percentage and also the total count and total count in percentage and each having a subgroup of total and also Grand total of each observation. here is my sample dataset below. struct<- samples_stack <- dput(samples[1:50,]) structure(list(Merchant = c("Fat", "Fat", "United", "WAVE", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "WAVE", "WAVE", "WAVE", "WAVE", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "WAVE", "WAVE", "WAVE", "WAVE", "WAVE", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat", "Fat"), Network = c("G", "G", "G", "X9", "G", "G", "M", "M", "M", "M", "M", "M", "M", "M", "G", "G", "G", "G", "A", "A", "A", "A", "G", "G", "G", "G", "A", "A", "A", "A", "A", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G"), Type = c("Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Data", "Airtime", "Data", "Airtime", "Airtime", "Data", "Airtime", "Data", "Data", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime", "Data", "Airtime", "Airtime", "Airtime", "Data", "Airtime", "Airtime", "Data", "Data", "Airtime", "Airtime", "Data", "Data", "Airtime", "Airtime", "Airtime", "Airtime", "Airtime" ), FaceValue = c(200, 200, 2000, 100, 100, 100, 100, 300, 500, 300, 240, 600, 400, 600, 100, 500, 130, 200, 500, 100, 100, 300, 150, 300, 300, 2000, 200, 240, 600, 400, 400, 500, 250, 100, 100, 500, 450, 50, 1300, 1400, 1400, 2000, 100, 130, 130, 100, 100, 200, 200, 600), Date = c("2022-12-03 23:37:23", "2022-12-03 22:45:52", "2022-12-03 06:58:19", "2022-12-03 14:06:28", "2022-12-03 22:19:13", "2022-12-03 22:15:39", "2022-12-03 20:02:12", "2022-12-03 20:01:07", "2022-12-03 19:14:25", "2022-12-03 14:47:35", "2022-12-03 23:12:40", "2022-12-03 23:09:18", "2022-12-03 22:57:57", "2022-12-03 22:51:16", "2022-12-03 21:23:38", "2022-12-03 21:19:43", "2022-12-03 21:03:38", "2022-12-03 20:57:44", "2022-12-03 22:51:07", "2022-12-03 22:26:50", "2022-12-03 21:57:09", "2022-12-03 21:53:54", "2022-12-03 20:20:21", "2022-12-03 20:13:07", "2022-12-03 20:10:30", "2022-12-03 19:50:21", "2022-12-03 01:28:35", "2022-12-03 01:17:59", "2022-12-03 00:35:08", "2022-12-03 00:31:56", "2022-12-03 00:11:25", "2022-12-03 18:36:51", "2022-12-03 17:56:25", "2022-12-03 17:10:15", "2022-12-03 16:49:27", "2022-12-03 16:45:03", "2022-12-03 16:43:26", "2022-12-03 16:37:55", "2022-12-03 16:36:11", "2022-12-03 16:14:40", "2022-12-03 16:03:10", "2022-12-03 16:02:56", "2022-12-03 15:32:37", "2022-12-03 15:30:45", "2022-12-03 15:14:05", "2022-12-03 15:13:24", "2022-12-03 15:09:10", "2022-12-03 12:20:58", "2022-12-03 11:54:15", "2022-12-03 11:36:53" ), Status = c("Processing", "Transaction Declined", "Successful", "Processing", "Processing", "Transaction Declined", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Transaction Declined", "Transaction Declined", "Processing", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Processing", "Transaction Declined", 
"Processing", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Processing", "Transaction Declined", "Transaction Declined", "Processing", "Processing", "Transaction Declined", "Transaction Declined", "Processing", "Processing", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined", "Transaction Declined" ), Discount... = c("6.00", "6.00", "5.00", "6.00", "6.00", "6.00", "3.70", "3.70", "3.70", "3.70", "3.20", "3.20", "3.20", "3.20", "6.00", "6.00", "6.00", "6.00", "3.50", "3.50", "3.50", "3.50", "6.00", "6.00", "6.00", "6.00", "3.50", "3.50", "3.50", "3.50", "3.50", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00", "6.00"), Discount.NGN. = c("12.00", "12.00", "100.00", "6.00", "6.00", "6.00", "3.70", "11.10", "18.50", "11.10", "7.68", "19.20", "12.80", "19.20", "6.00", "30.00", "7.80", "12.00", "17.50", "3.50", "3.50", "10.50", "9.00", "18.00", "18.00", "120.00", "7.00", "8.40", "21.00", "14.00", "14.00", "30.00", "15.00", "6.00", "6.00", "30.00", "27.00", "3.00", "78.00", "84.00", "84.00", "120.00", "6.00", "7.80", "7.80", "6.00", "6.00", "12.00", "12.00", "36.00"), Network.reformat = c("G", "G", "G-s", "X9", "G", "G", "NEW", "NEW", "NEW", "NEW", "NEW", "NEW", "NEW", "NEW", "G", "G", "G", "G", "A", "A", "A", "A", "G", "G", "G", "G", "A", "A", "A", "A", "A", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G", "G")), row.names = c(NA, -50L), class = c("tbl_df", "tbl", "data.frame")) I want my table to look like the table below.
[ "Use grouped summarize() and mutate() to get initial values, then do additional summaries within bind_rows() to add subtotals and totals. Finally, use forcats::fct_relevel() inside dplyr::arrange() to put everything in order.\nlibrary(dplyr)\nlibrary(forcats)\n\nsamples_stack %>%\n group_by(Network, Merchant) %>%\n summarize(\n Count_Merchant = n(), \n Sum_FaceValue = sum(FaceValue),\n .groups = \"drop\"\n ) %>%\n mutate(\n Pct_Merchant = Count_Merchant / sum(Count_Merchant),\n Pct_FaceValue = Sum_FaceValue / sum(Sum_FaceValue)\n ) %>%\n bind_rows(\n summarize(\n group_by(., Network),\n Merchant = \"SUBTOTAL\",\n across(Count_Merchant:Pct_FaceValue, sum)\n ),\n summarize(\n .,\n across(Network:Merchant, ~ \"TOTAL\"),\n across(Count_Merchant:Pct_FaceValue, sum)\n )\n ) %>%\n arrange(\n fct_relevel(Network, \"TOTAL\", after = Inf),\n fct_relevel(Merchant, \"SUBTOTAL\")\n )\n\n# A tibble: 12 × 6\n Network Merchant Count_Merchant Sum_FaceValue Pct_Merchant Pct_FaceValue\n <chr> <chr> <int> <dbl> <dbl> <dbl>\n 1 A SUBTOTAL 9 2840 0.18 0.130 \n 2 A Fat 4 1000 0.08 0.0457 \n 3 A WAVE 5 1840 0.1 0.0841 \n 4 G SUBTOTAL 32 15890 0.64 0.727 \n 5 G Fat 31 13890 0.62 0.635 \n 6 G United 1 2000 0.02 0.0914 \n 7 M SUBTOTAL 8 3040 0.16 0.139 \n 8 M Fat 4 1200 0.08 0.0549 \n 9 M WAVE 4 1840 0.08 0.0841 \n10 X9 SUBTOTAL 1 100 0.02 0.00457\n11 X9 WAVE 1 100 0.02 0.00457\n12 TOTAL TOTAL 50 21870 1 1 \n\n" ]
[ 1 ]
[]
[]
[ "dplyr", "gt", "kableextra", "r" ]
stackoverflow_0074678249_dplyr_gt_kableextra_r.txt
Q: Convert all images in a pandas dataframe column to grayscale I have a column of a pandas dataframe with 25 thousand images, and I want to convert the color of all of them to grayscale. What would be the simplest way to do this? I know how to convert the color, which I must use a loop and do the conversion with numpy or opencv, but I don't know how to do this loop with a column of the dataframe. A: One way to convert the color of images in a pandas dataframe is to use the apply method on the column containing the image data. This method allows you to apply a custom function to each element of the column. For example, if your dataframe has a column called 'images' containing the image data, you could convert the color of the images to grayscale using the following code: def grayscale(image): return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) df['images'] = df['images'].apply(grayscale) A: One way to loop through the images in the column and convert them to grayscale would be to use the apply method of the pandas dataframe. Here is an example: import numpy as np import cv2 # Convert an image to grayscale def to_grayscale(image): return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Loop through the images in the column and convert them to grayscale df['grayscale_images'] = df['images'].apply(to_grayscale) This code will apply the to_grayscale function to each image in the images column of the dataframe, and store the resulting grayscale images in a new column called grayscale_images. Alternatively, you could also use a for loop to iterate through the rows of the dataframe and convert the images in the images column to grayscale. Here is an example: import numpy as np import cv2 # Create a new column for the grayscale images df['grayscale_images'] = np.nan # Loop through the rows of the dataframe for i, row in df.iterrows(): # Convert the image to grayscale grayscale_image = cv2.cvtColor(row['images'], cv2.COLOR_BGR2GRAY) # Store the grayscale image in the new column df.at[i, 'grayscale_images'] = grayscale_image Both of these approaches will loop through the images in the images column and convert them to grayscale.
Convert all images in a pandas dataframe column to grayscale
I have a column of a pandas dataframe with 25 thousand images, and I want to convert the color of all of them to grayscale. What would be the simplest way to do this? I know how to convert the color, which I must use a loop and do the conversion with numpy or opencv, but I don't know how to do this loop with a column of the dataframe.
[ "One way to convert the color of images in a pandas dataframe is to use the apply method on the column containing the image data. This method allows you to apply a custom function to each element of the column.\nFor example, if your dataframe has a column called 'images' containing the image data, you could convert the color of the images to grayscale using the following code:\ndef grayscale(image):\n return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\ndf['images'] = df['images'].apply(grayscale)\n\n", "One way to loop through the images in the column and convert them to grayscale would be to use the apply method of the pandas dataframe. Here is an example:\nimport numpy as np\nimport cv2\n\n# Convert an image to grayscale\ndef to_grayscale(image):\n return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n\n# Loop through the images in the column and convert them to grayscale\ndf['grayscale_images'] = df['images'].apply(to_grayscale)\n\nThis code will apply the to_grayscale function to each image in the images column of the dataframe, and store the resulting grayscale images in a new column called grayscale_images.\nAlternatively, you could also use a for loop to iterate through the rows of the dataframe and convert the images in the images column to grayscale. Here is an example:\nimport numpy as np\nimport cv2\n\n# Create a new column for the grayscale images\ndf['grayscale_images'] = np.nan\n\n# Loop through the rows of the dataframe\nfor i, row in df.iterrows():\n # Convert the image to grayscale\n grayscale_image = cv2.cvtColor(row['images'], cv2.COLOR_BGR2GRAY)\n # Store the grayscale image in the new column\n df.at[i, 'grayscale_images'] = grayscale_image\n\nBoth of these approaches will loop through the images in the images column and convert them to grayscale.\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074679121_pandas_python.txt
Q: How to set a low gravity effect for a certain amount of time? I want to make a low gravity effect for a certain amount of time after I interact with an object. This is the code I have for that interaction ` { if (other.gameObject.layer == 7) { Destroy(other.gameObject); gravity= -7; jumpHeight = 6f; } } ` I tried to search for answers but I didn't know well how to implement them A: You should provide more information on: Which physics system are you using, 2D/3D? What objects should be affected by the gravity change? Are you using a physics-based controller? For a vague answer, the default Unity Physics gravity is a Vector 3 of (0,-9.81f,0) which mimicks the real-world gravity of objects falling down. If you're looking to invert gravity, or lessen its pull using code then you'd have to modify the Physics.gravity (or Physics2D.gravity) value to something else: Physics.gravity = new Vector3(0, 1, 0) // Make physics-based objects move upward on the Y-axis To limit this change for a set duration, you could use coroutines as follows: private readonly Vector3 _defaultGravity = new(0, -9.81f, 0); private readonly Vector3 _temporaryGravity = new(0, 1, 0); /// <summary> /// Starts a coroutine that affects the gravity for a given amount of time. /// </summary> /// <param name="seconds">How long in seconds should the temporary gravity be in effect for</param> private void ChangeGravityForSeconds(float seconds) { StartCoroutine(DecreaseGravity(seconds)); } private IEnumerator DecreaseGravity(float time) { var elapsedTime = 0.0f; while (elapsedTime < time) { Physics.gravity = _temporaryGravity; elapsedTime += Time.deltaTime; yield return new WaitForSeconds(1.0f); } // Restore the default gravity. Physics.gravity = _defaultGravity; } Then you could call ChangeGravityForSeconds(seconds) whenever you need to alter the gravity to the pre-defined value. Alternatively, you could achieve this using the MonoBehaviour.Invoke method that takes in a float for the delay before invocation. private void ChangeGravityForSeconds(float seconds) { Physics.gravity = _temporaryGravity; Invoke(nameof(ResetGravity), seconds); } private void ResetGravity() { Physics.gravity = _defaultGravity; }
How to set a low gravity effect for a certain amount of time?
I want to make a low gravity effect for a certain amount of time after I interact with an object. This is the code I have for that interaction ` { if (other.gameObject.layer == 7) { Destroy(other.gameObject); gravity= -7; jumpHeight = 6f; } } ` I tried to search for answers but I didn't know well how to implement them
[ "You should provide more information on:\n\nWhich physics system are you using, 2D/3D?\nWhat objects should be affected by the gravity change?\nAre you using a physics-based controller?\n\nFor a vague answer, the default Unity Physics gravity is a Vector 3 of (0,-9.81f,0) which mimicks the real-world gravity of objects falling down. If you're looking to invert gravity, or lessen its pull using code then you'd have to modify the Physics.gravity (or Physics2D.gravity) value to something else:\nPhysics.gravity = new Vector3(0, 1, 0) // Make physics-based objects move upward on the Y-axis\n\nTo limit this change for a set duration, you could use coroutines as follows:\nprivate readonly Vector3 _defaultGravity = new(0, -9.81f, 0);\nprivate readonly Vector3 _temporaryGravity = new(0, 1, 0);\n\n/// <summary>\n/// Starts a coroutine that affects the gravity for a given amount of time.\n/// </summary>\n/// <param name=\"seconds\">How long in seconds should the temporary gravity be in effect for</param>\nprivate void ChangeGravityForSeconds(float seconds)\n{\n StartCoroutine(DecreaseGravity(seconds));\n}\n\nprivate IEnumerator DecreaseGravity(float time)\n{\n var elapsedTime = 0.0f;\n while (elapsedTime < time)\n {\n Physics.gravity = _temporaryGravity;\n elapsedTime += Time.deltaTime;\n yield return new WaitForSeconds(1.0f);\n }\n\n // Restore the default gravity.\n Physics.gravity = _defaultGravity;\n}\n\nThen you could call ChangeGravityForSeconds(seconds) whenever you need to alter the gravity to the pre-defined value.\nAlternatively, you could achieve this using the MonoBehaviour.Invoke method that takes in a float for the delay before invocation.\nprivate void ChangeGravityForSeconds(float seconds)\n{\n Physics.gravity = _temporaryGravity;\n Invoke(nameof(ResetGravity), seconds);\n}\n\nprivate void ResetGravity()\n{\n Physics.gravity = _defaultGravity;\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#", "unity3d" ]
stackoverflow_0074634035_c#_unity3d.txt
Q: parsing email threads in python tl;dr questions: how to parse MIME content into threads (thus lists of individual replies & forwards) any libraries that do that? Does Mime-Version: 1.0 standardize the way threads are represented? I'm analyzing enron dataset (https://www.cs.cmu.edu/~./enron/, you can also browse the documents here: http://www.enron-mail.com/email/) This dataset is a collection of ~500K emails. Emails are represented as Mime-Version: 1.0 files, there are no attachments. This is a typical file: Message-ID: <4250772.1075857358369.JavaMail.evans@thyme>^M Date: Tue, 12 Dec 2000 09:19:00 -0800 (PST)^M From: david.portz@enron.com^M To: clint.dean@enron.com^M Subject: City of Bryan Dec parking transactions^M Cc: doug.gilbert-smith@enron.com, elizabeth.sager@enron.com, ^M melissa.murphy@enron.com^M Mime-Version: 1.0^M Content-Type: text/plain; charset=us-ascii^M Content-Transfer-Encoding: 7bit^M Bcc: doug.gilbert-smith@enron.com, elizabeth.sager@enron.com, ^M melissa.murphy@enron.com^M X-From: David Portz^M X-To: Clint Dean^M X-cc: Doug Gilbert-Smith, Elizabeth Sager, Melissa Ann Murphy^M X-bcc: ^M X-Folder: \Clint_Dean_Dec2000\Notes Folders\Notes inbox^M X-Origin: Dean-C^M X-FileName: cdean.nsf^M ^M Following discussions with you and Doug, attached is a draft parking transaction agreement for your review and, if acceptable, for circualtion to the counterparty. Please call me with any questions. --David There is a handy, widely adopted python library that makes life easier in parsing those kind of files: import email import email.policy parsed_email = email.message_from_string(open(filename, 'r').read(), policy=email.policy.default) body = parsed_email.get_payload() from_field = parsed_email['From'] ... However, I didn't find a reliable way to further parse email content to threads: sub_email_1 -> sub_email_2 -> ... > sub_email_n, etc. get_payload returns everything, all together. Here is an example of MIME with threads: https://justpaste.it/bf5zr (the file is 233 lines, so pasted separately). There is clearly a thread: Christi L Nicolay sent email on 04/30/2001 02:20 PM later Christi L Nicolay replied to its own email on 05/03/2001 09:23 PM Lloyd Will replied to that thread on 05/03/2001 09:26 PM Christi L Nicolay replied on 05/07/2001 11:47 AM Tom May forwarded the whole thread on Mon, 7 May 2001 06:58:00 -0700 Any library / existing solution that could do that? Looking at glance into the data, I got impression that there are numerous tiny variants how those threads are organized. Sometimes there are nested > > fields accompanying sub-emails, sometimes there is ---Original Message--- message, etc. It seems way less defined than MIME header fields. I can write some regex-backed python script that parses one email or another, but it will not work universally for the whole Enron dataset. Some more examples of threads from the Enron dataset: http://www.enron-mail.com/email/mann-k/discussion_threads/FW_Salmon_Energy_Turbine_Agreement_5.html http://www.enron-mail.com/email/brawner-s/discussion_threads/Fw_Fw_TIGHT_SKIRTS_AND_TEXANS_2.html http://www.enron-mail.com/email/brawner-s/_sent_mail/Fw_Time_Friends_3.html That led me to question #3: whether the mime format standardizes threads at all. 
A: Here is a code sample, hope it will be useful import email email_message: email.message.Message = email.message_from_bytes(raw_email_body) # or as in your example # email.message_from_string(open(filename, 'r').read(), policy=email.policy.default) message_parts = list(email_message.walk()) for part in message_parts: ... # write some logic here A: The MIME (Multipurpose Internet Mail Extensions) standard does not specify how threads should be represented in emails. MIME is a format for encoding various types of data in email messages, such as text, images, and attachments, but does not define the structure or organization of the message itself. Therefore, parsing threads from a MIME-formatted email would require custom logic and may not be straightforward due to the various ways in which threads can be represented in email messages. Some common approaches to parsing threads from emails include using regular expressions to identify common patterns in the email content, or using natural language processing techniques to analyze the content and identify relationships between messages. It's worth noting that some email clients and services, such as Gmail, may add their own custom headers to emails to indicate threading information. In these cases, it may be possible to parse thread information from the email headers rather than the content itself. However, this would depend on the specific headers used by the email client or service in question.
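To make the regex approach above concrete, here is a minimal sketch of the splitting step in Python. It is only a sketch under assumptions: the marker patterns are guesses based on the separators visible in the Enron examples ("---Original Message---", "Forwarded by ...", bare From: headers of quoted sub-emails) and will need tuning for other mailboxes; split_into_segments is a hypothetical helper name, and only the standard library (re, email) is used.

import re

# Assumed marker patterns; extend or adjust per mailbox.
SPLIT_PATTERNS = [
    r"-+\s*Original Message\s*-+",      # -----Original Message-----
    r"-+\s*Forwarded by .*?-+",         # ---------------------- Forwarded by ...
    r"^From:\s.*$",                     # bare From: line starting a quoted sub-email
]

def split_into_segments(body):
    # Split a plain-text email body into rough thread segments.
    combined = re.compile(
        "|".join("(?:%s)" % p for p in SPLIT_PATTERNS),
        re.IGNORECASE | re.MULTILINE,
    )
    starts = [m.start() for m in combined.finditer(body)]
    bounds = [0] + starts + [len(body)]
    segments = []
    for begin, end in zip(bounds, bounds[1:]):
        chunk = body[begin:end].strip()
        if chunk:
            segments.append(chunk)
    return segments

A possible usage, reusing the parsing call from the question (filename as before):

import email
import email.policy

parsed = email.message_from_string(open(filename, 'r').read(), policy=email.policy.default)
for i, segment in enumerate(split_into_segments(parsed.get_payload())):
    print(i, segment.splitlines()[0][:80])   # preview the first line of each segment

Attributing each segment to a sender and date would still need per-format heuristics (quoted "> " blocks, inline headers), which is why no single library covers the whole corpus out of the box.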
parsing email threads in python
tl;dr questions: how to parse MIME content into threads (thus lists of individual replies & forwards) any libraries that do that? Does Mime-Version: 1.0 standardize the way threads are represented? I'm analyzing enron dataset (https://www.cs.cmu.edu/~./enron/, you can also browse the documents here: http://www.enron-mail.com/email/) This dataset is a collection of ~500K emails. Emails are represented as Mime-Version: 1.0 files, there are no attachments. This is a typical file: Message-ID: <4250772.1075857358369.JavaMail.evans@thyme>^M Date: Tue, 12 Dec 2000 09:19:00 -0800 (PST)^M From: david.portz@enron.com^M To: clint.dean@enron.com^M Subject: City of Bryan Dec parking transactions^M Cc: doug.gilbert-smith@enron.com, elizabeth.sager@enron.com, ^M melissa.murphy@enron.com^M Mime-Version: 1.0^M Content-Type: text/plain; charset=us-ascii^M Content-Transfer-Encoding: 7bit^M Bcc: doug.gilbert-smith@enron.com, elizabeth.sager@enron.com, ^M melissa.murphy@enron.com^M X-From: David Portz^M X-To: Clint Dean^M X-cc: Doug Gilbert-Smith, Elizabeth Sager, Melissa Ann Murphy^M X-bcc: ^M X-Folder: \Clint_Dean_Dec2000\Notes Folders\Notes inbox^M X-Origin: Dean-C^M X-FileName: cdean.nsf^M ^M Following discussions with you and Doug, attached is a draft parking transaction agreement for your review and, if acceptable, for circualtion to the counterparty. Please call me with any questions. --David There is a handy, widely adopted python library that makes life easier in parsing those kind of files: import email import email.policy parsed_email = email.message_from_string(open(filename, 'r').read(), policy=email.policy.default) body = parsed_email.get_payload() from_field = parsed_email['From'] ... However, I didn't find a reliable way to further parse email content to threads: sub_email_1 -> sub_email_2 -> ... > sub_email_n, etc. get_payload returns everything, all together. Here is an example of MIME with threads: https://justpaste.it/bf5zr (the file is 233 lines, so pasted separately). There is clearly a thread: Christi L Nicolay sent email on 04/30/2001 02:20 PM later Christi L Nicolay replied to its own email on 05/03/2001 09:23 PM Lloyd Will replied to that thread on 05/03/2001 09:26 PM Christi L Nicolay replied on 05/07/2001 11:47 AM Tom May forwarded the whole thread on Mon, 7 May 2001 06:58:00 -0700 Any library / existing solution that could do that? Looking at glance into the data, I got impression that there are numerous tiny variants how those threads are organized. Sometimes there are nested > > fields accompanying sub-emails, sometimes there is ---Original Message--- message, etc. It seems way less defined than MIME header fields. I can write some regex-backed python script that parses one email or another, but it will not work universally for the whole Enron dataset. Some more examples of threads from the Enron dataset: http://www.enron-mail.com/email/mann-k/discussion_threads/FW_Salmon_Energy_Turbine_Agreement_5.html http://www.enron-mail.com/email/brawner-s/discussion_threads/Fw_Fw_TIGHT_SKIRTS_AND_TEXANS_2.html http://www.enron-mail.com/email/brawner-s/_sent_mail/Fw_Time_Friends_3.html That led me to question #3: whether the mime format standardizes threads at all.
[ "Here are a code sample hope it will be usefull\nimport email\n\nemail_message: email.message.Message = email.message_from_bytes(raw_email_body)\n# or as in your example\n# email.message_from_string(open(filename, 'r').read(), policy=email.policy.default)\n\nmessage_parts = list(message.walk())\nfor part in message_parts:\n ... # write some logic here\n\n", "The MIME (Multipurpose Internet Mail Extensions) standard does not specify how threads should be represented in emails. MIME is a format for encoding various types of data in email messages, such as text, images, and attachments, but does not define the structure or organization of the message itself.\nTherefore, parsing threads from a MIME-formatted email would require custom logic and may not be straightforward due to the various ways in which threads can be represented in email messages. Some common approaches to parsing threads from emails include using regular expressions to identify common patterns in the email content, or using natural language processing techniques to analyze the content and identify relationships between messages.\nIt's worth noting that some email clients and services, such as Gmail, may add their own custom headers to emails to indicate threading information. In these cases, it may be possible to parse thread information from the email headers rather than the content itself. However, this would depend on the specific headers used by the email client or service in question.\n" ]
[ 0, 0 ]
[]
[]
[ "email", "email_parsing", "mime", "parsing", "python" ]
stackoverflow_0074568352_email_email_parsing_mime_parsing_python.txt
Q: Sort 2 objects with the same key Javascript I have 2 objects where I want the second one to be in the same order as the first one. Ex: const obj1 = [ { key: 1, id: 1, name: "John" }, { key: 2, id: 2, name: "Ann" }, { key: 3, id: 3, name: "Kate" } ]; const obj2 = [ { key: 2, id: 2, name: "Ann" }, { key: 1, id: 1, name: "John" }, { key: 3, id: 3, name: "Kate" } ]; The purpose is to have obj2 in same order as obj1, but only sort with keys. I'm trying to make an helper function which will pass 3 argument: function helper(obj1, obj2, key) { // return new object with sorted array do not modify existing } I can sort one of this, but cant combine 2object together A: To sort two objects with the same key in JavaScript, you can use the Array.sort method. The Array.sort method allows you to specify a comparator function that determines the sorting order for each element in the array. The comparator function should take two arguments, and it should return a negative value if the first argument should be sorted before the second argument, a positive value if the first argument should be sorted after the second argument, and 0 if the two arguments are equal. Here is an example of how to use the Array.sort method to sort two objects with the same key in JavaScript: // Define two objects with the same key const obj1 = {key: 1, value: "apple"}; const obj2 = {key: 1, value: "orange"}; // Define the comparator function function comparator(a, b) { // Compare the key values of the objects return a.key - b.key; } // Use the sort method to sort the objects const sortedObjects = [obj1, obj2].sort(comparator); // Print the sorted objects console.log(sortedObjects); In this example, the comparator function is defined to compare the key values of the objects and return the difference as the sorting order. The Array.sort method is then used to sort the objects based on this comparator function. As you can see, the objects are sorted based on the key value, and the objects with the same key are preserved in the original order. You can modify the comparator function to use different values as the sorting key. For example, you can use the value field of the objects as the sorting key instead of the key field. 
Here is how to do this: A: My idea is to use hash map to store key obj2, then loop through key obj1 to sort obj2 const obj1 = [ {key: 1, id: 5, name: 'John'}, {key: 2, id: 5, name: 'John'}, {key: 3, id: 5, name: 'John'} ] const obj2 = [ {key: 2, id: 5, name: 'John'}, {key: 1, id: 5, name: 'John'}, {key: 3, id: 5, name: 'John'} ] function help(obj1, obj2, key) { const hashKeyObj2 = obj2.reduce((val, item) => { val[item[key]] = item; return val; }, {}); return obj1.map((item) => hashKeyObj2[item[key]]); } console.log(help(obj1, obj2, 'key')) A: One-liner let sorted = obj1.map(a => obj2.find(b => a.key === b.key)) A: This snippet will sort obj2 from obj1 order: const obj1 = [ { key: 1, id: 5, name: 'John' }, { key: 3, id: 5, name: 'John' }, { key: 2, id: 5, name: 'John' }, ] const obj2 = [ { key: 2, id: 5, name: 'John' }, { key: 1, id: 5, name: 'Jane' }, { key: 3, id: 5, name: 'Tom' }, ] function helper(obj1, obj2, key) { const findIndex = (refValue) => obj1.findIndex((candidate) => candidate[key] === refValue) const comparator = (item1, item2) => findIndex(item1[key]) - findIndex(item2[key]) return [...obj2].sort(comparator) } const result = helper(obj1, obj2, 'key') console.log('result :', result) A: An Approach using lodash function sortArrayBasedOnKeys( array1: Record<string, number | string>[], array2: Record<string, number | string>[], key: string ) { const order = _.map(array1, key); return _.sortBy(array2, (item) => _.indexOf(order, item[key])); } First we are getting order in which objects in the array1 are placed and then using that order to sort the next array.
Sort 2 objects with the same key Javascript
I have 2 objects where I want the second one to be in the same order as the first one. Ex: const obj1 = [ { key: 1, id: 1, name: "John" }, { key: 2, id: 2, name: "Ann" }, { key: 3, id: 3, name: "Kate" } ]; const obj2 = [ { key: 2, id: 2, name: "Ann" }, { key: 1, id: 1, name: "John" }, { key: 3, id: 3, name: "Kate" } ]; The purpose is to have obj2 in same order as obj1, but only sort with keys. I'm trying to make an helper function which will pass 3 argument: function helper(obj1, obj2, key) { // return new object with sorted array do not modify existing } I can sort one of this, but cant combine 2object together
[ "To sort two objects with the same key in JavaScript, you can use the Array.sort method.\nThe Array.sort method allows you to specify a comparator function that determines the sorting order for each element in the array. The comparator function should take two arguments, and it should return a negative value if the first argument should be sorted before the second argument, a positive value if the first argument should be sorted after the second argument, and 0 if the two arguments are equal.\nHere is an example of how to use the Array.sort method to sort two objects with the same key in JavaScript:\n// Define two objects with the same key\nconst obj1 = {key: 1, value: \"apple\"};\nconst obj2 = {key: 1, value: \"orange\"};\n\n// Define the comparator function\nfunction comparator(a, b) {\n // Compare the key values of the objects\n return a.key - b.key;\n}\n\n// Use the sort method to sort the objects\nconst sortedObjects = [obj1, obj2].sort(comparator);\n\n// Print the sorted objects\nconsole.log(sortedObjects);\n\n\nIn this example, the comparator function is defined to compare the key values of the objects and return the difference as the sorting order. The Array.sort method is then used to sort the objects based on this comparator function.\nAs you can see, the objects are sorted based on the key value, and the objects with the same key are preserved in the original order.\nYou can modify the comparator function to use different values as the sorting key. For example, you can use the value field of the objects as the sorting key instead of the key field. Here is how to do this:\n", "My idea is to use hash map to store key obj2, then loop through key obj1 to sort obj2\n\n\n const obj1 = [\n {key: 1, id: 5, name: 'John'},\n {key: 2, id: 5, name: 'John'},\n {key: 3, id: 5, name: 'John'}\n ]\n\n const obj2 = [\n {key: 2, id: 5, name: 'John'},\n {key: 1, id: 5, name: 'John'},\n {key: 3, id: 5, name: 'John'}\n]\n\nfunction help(obj1, obj2, key) {\n const hashKeyObj2 = obj2.reduce((val, item) => {\n val[item[key]] = item;\n return val;\n }, {});\n return obj1.map((item) => hashKeyObj2[item[key]]);\n}\n\nconsole.log(help(obj1, obj2, 'key'))\n\n\n\n", "One-liner\nlet sorted = obj1.map(a => obj2.find(b => a.key === b.key))\n\n", "This snippet will sort obj2 from obj1 order:\n\n\nconst obj1 = [\n { key: 1, id: 5, name: 'John' },\n { key: 3, id: 5, name: 'John' },\n { key: 2, id: 5, name: 'John' },\n]\n\nconst obj2 = [\n { key: 2, id: 5, name: 'John' },\n { key: 1, id: 5, name: 'Jane' },\n { key: 3, id: 5, name: 'Tom' },\n]\n\nfunction helper(obj1, obj2, key) {\n const findIndex = (refValue) => obj1.findIndex((candidate) => candidate[key] === refValue)\n const comparator = (item1, item2) => findIndex(item1[key]) - findIndex(item2[key])\n return [...obj2].sort(comparator)\n}\n\nconst result = helper(obj1, obj2, 'key')\n\nconsole.log('result :', result)\n\n\n\n", "An Approach using lodash\nfunction sortArrayBasedOnKeys(\n array1: Record<string, number | string>[],\n array2: Record<string, number | string>[],\n key: string\n) {\n const order = _.map(array1, key);\n return _.sortBy(array2, (item) => _.indexOf(order, item[key]));\n}\n\nFirst we are getting order in which objects in the array1 are placed and then using that order to sort the next array.\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "javascript", "typescript" ]
stackoverflow_0074674267_javascript_typescript.txt
Q: Is it possible to make a "centralized" MouseEvent? I have these two MouseEvents, and I want them "centralized" because I have other nodes that will do the same thing. I know that this can be done if you create multiple MouseEvents, but I reckon that there must be a shorter way. public class scMain implements Initializable { @FXML private Button btnView, btnView2; @FXML private HBox hboxView, hboxView2; public void translateTransition(Node node) { TranslateTransition translate = new TranslateTransition(); translate.setNode(node); // ...animation stuff translate.play(); } public void fadeOffTransition(Node node) { FadeTransition fade = new FadeTransition(); fade.setNode(node); // ...animation stuff fade.play(); } public void fadeInTransition(Node node) { FadeTransition fade = new FadeTransition(); fade.setNode(node); // ...animation stuff fade.play(); } @FXML void btnMouseEnter(MouseEvent event) { fadeOffTransition(btnView); hboxView.setVisible(true); translateTransition(hboxView); } @FXML void btnMouseExit(MouseEvent event) { fadeInTransition(btnView); hboxView.setVisible(false); }} What I tried but obviously didn't work : @FXML void btnMouseEnter(MouseEvent event, Node first, Node second) { fadeOffTransition(first); second.setVisible(true); translateTransition(second); } @FXML void btnMouseExit(MouseEvent event, Node first, Node second) { fadeInTransition(first); second.setVisible(false); } @Override public void initialize(URL url, ResourceBundle rb) { btnMouseEnter(event, btnView2, hboxView2); //error cannot find symbol btnMouseExit(event, btnView2, hboxView2); //error cannot find symbol } Any suggestions would be appreciated. A: Register the event handler in the intialize() method, instead of in the FXML. Remove the onMouseEnter and onMouseExit attributes from the FXML file, and modify the controller as follows: void btnMouseEnter(Node first, Node second) { fadeOffTransition(first); second.setVisible(true); translateTransition(second); } void btnMouseExit(Node first, Node second) { fadeInTransition(first); second.setVisible(false); } @Override public void initialize(URL url, ResourceBundle rb) { btnView2.setOnMouseEnter(e -> btnMouseEnter(btnView2, hboxView2)); btnView2.setOnMouseExit(e -> btnMouseExit(btnView2, hboxView2)); } You can also make the obvious reductions in code, e.g.: private void registerMouseHandlers(Button button, Node target) { button.setOnMouseEnter(e -> btnMouseEnter(button, target)); button.setOnMouseExit(e -> btnMouseExit(button, target)); } @Override public void initialize(URL location, ResourceBundle resources) { registerMouseHandlers(btnView, hboxView); registerMouseHandlers(btnView2, hboxView2); } And if you have a fairly large number of these, it might also be more efficient (in terms of lines of code) to do @Override public void initialize(URL location, ResourceBundle resources) { List<Button> buttons = List.of(btnView, btnView2); List<Node> targets = List.of(hboxView, hboxView2); for (int i = 0 ; i < buttons.size() ; i++) { registerMouseHandlers(buttons.get(i), targets.get(i)); } } You could also consider creating a custom component encapsulating your button and hbox, which simply implements the event handlers for a single button-hbox pair, and then including them in the FXML using fx:include.
Is it possible to make a "centralized" MouseEvent?
I have these two MouseEvents, and I want them "centralized" because I have other nodes that will do the same thing. I know that this can be done if you create multiple MouseEvents, but I reckon that there must be a shorter way. public class scMain implements Initializable { @FXML private Button btnView, btnView2; @FXML private HBox hboxView, hboxView2; public void translateTransition(Node node) { TranslateTransition translate = new TranslateTransition(); translate.setNode(node); // ...animation stuff translate.play(); } public void fadeOffTransition(Node node) { FadeTransition fade = new FadeTransition(); fade.setNode(node); // ...animation stuff fade.play(); } public void fadeInTransition(Node node) { FadeTransition fade = new FadeTransition(); fade.setNode(node); // ...animation stuff fade.play(); } @FXML void btnMouseEnter(MouseEvent event) { fadeOffTransition(btnView); hboxView.setVisible(true); translateTransition(hboxView); } @FXML void btnMouseExit(MouseEvent event) { fadeInTransition(btnView); hboxView.setVisible(false); }} What I tried but obviously didn't work : @FXML void btnMouseEnter(MouseEvent event, Node first, Node second) { fadeOffTransition(first); second.setVisible(true); translateTransition(second); } @FXML void btnMouseExit(MouseEvent event, Node first, Node second) { fadeInTransition(first); second.setVisible(false); } @Override public void initialize(URL url, ResourceBundle rb) { btnMouseEnter(event, btnView2, hboxView2); //error cannot find symbol btnMouseExit(event, btnView2, hboxView2); //error cannot find symbol } Any suggestions would be appreciated.
[ "Register the event handler in the intialize() method, instead of in the FXML. Remove the onMouseEnter and onMouseExit attributes from the FXML file, and modify the controller as follows:\nvoid btnMouseEnter(Node first, Node second) {\n fadeOffTransition(first);\n second.setVisible(true);\n translateTransition(second);\n}\n\n\nvoid btnMouseExit(Node first, Node second) {\n fadeInTransition(first);\n second.setVisible(false);\n}\n\n@Override\npublic void initialize(URL url, ResourceBundle rb) {\n btnView2.setOnMouseEnter(e -> btnMouseEnter(btnView2, hboxView2));\n btnView2.setOnMouseExit(e -> btnMouseExit(btnView2, hboxView2));\n}\n\nYou can also make the obvious reductions in code, e.g.:\nprivate void registerMouseHandlers(Button button, Node target) {\n button.setOnMouseEnter(e -> btnMouseEnter(button, target));\n button.setOnMouseExit(e -> btnMouseExit(button, target));\n}\n\n@Override\npublic void initialize(URL location, ResourceBundle resources) {\n registerMouseHandlers(btnView, hboxView);\n registerMouseHandlers(btnView2, hboxView2);\n}\n\nAnd if you have a fairly large number of these, it might also be more efficient (in terms of lines of code) to do\n@Override\npublic void initialize(URL location, ResourceBundle resources) {\n List<Button> buttons = List.of(btnView, btnView2);\n List<Node> targets = List.of(hboxView, hboxView2);\n for (int i = 0 ; i < buttons.size() ; i++) {\n registerMouseHandlers(buttons.get(i), targets.get(i));\n }\n}\n\nYou could also consider creating a custom component encapsulating your button and hbox, which simply implements the event handlers for a single button-hbox pair, and then including them in the FXML using fx:include.\n" ]
[ 0 ]
[]
[]
[ "java", "javafx", "netbeans", "scenebuilder" ]
stackoverflow_0074678968_java_javafx_netbeans_scenebuilder.txt
Q: BFS (Breadth First Search) Algorithm in Java -> Cannot implement bfs by not getting node's siblings I have a problem about getting all siblings from the main node and implementing the process n Breadth First Search algorithm written by Java. How can I implement that? I shared my code snippets shown below. Here is my Node class shown below. public class Node{ Node(int data){ this.data = data; this.left = null; this.right = null; this.visited = false; } int data; Node left; Node right; boolean visited; // getter and setter } Here is the initilaization process shown below. Node node1 = new Node(1); Node node7 = new Node(7); Node node9 = new Node(9); Node node8 = new Node(8); Node node2 = new Node(2); Node node3 = new Node(3); node1.left = node7; node1.right = node9; node7.right = node8; node9.right = node3; node9.left = node2; Here is the method shown below. public static void bfs(Node root){ if (root == null){ return; } Node temp; //a binary tree with a inner generic node class Queue<Node> queue = new LinkedList<>(); //can't instantiate a Queue since abstract, so use LLQueue queue.add(root); root.visited = true; while (!queue.isEmpty()) { temp = queue.poll(); //remove the node from the queue // How can I get all siblings of the node like // for (Node sibling : temp.getSiblingNodes()) // sibling.visited=true; // queue.add(sibling); } // get the result as a list } A: Since Node has a property isVisited, I assume that there could be cycles in the Graph. The algorithm can be described in the following steps: Mark the root Node as visited and put it into the Queue. Then until the Queue is not empty, repeat: Remove the Node (current Node) from the head of the Queue; Check its left and right child-Nodes. If a child-Node exists (i.e. not null) and hasn't been visited yet, then add this node both into the Queue and resulting list of sibling-Nodes and set isVisited property of the child-Node to true. That's how it might be implemented: public static List<Node> bfs(Node root) { if (root == null) return Collections.emptyList(); List<Node> siblings = new ArrayList<>(); Queue<Node> queue = new ArrayDeque<>(); // performs better than LinkedList queue.add(root); // siblings.add(root); // uncomment this line ONLY if you need the root-Node to be present in the result root.visited = true; while (!queue.isEmpty()) { Node current = queue.poll(); tryAdd(siblings, queue, current.left); tryAdd(siblings, queue, current.right); } return siblings; } public static void tryAdd(List<Node> siblings, Queue<Node> queue, Node next) { if (next != null && !next.isVisited()) { queue.add(next); siblings.add(next); next.setVisited(true); } } To avoid repeating the same actions for both left and right child-Nodes, I've created method tryAdd(). We can alter its conditional logic by introducing a Predicate (in this case condition is short and well readable, and this option is shown rather for education purposes): public static final Predicate<Node> IS_NULL_OR_VISITED = Predicate.<Node>isEqual(null).or(Node::isVisited); public static void tryAdd(List<Node> siblings, Queue<Node> queue, Node next) { if (IS_NULL_OR_VISITED.test(next)) return; queue.add(next); siblings.add(next); next.setVisited(true); } A: You should not try to get the sibling of a node. If you push the children of the current node to the queue, you will guarantee that you pull them out of the queue in sibling order. The important thing here is that you visit a node when it is pulled from the queue, not when it is added to the queue. 
So your function could be turned into this: public static List<Node> bfs(Node root){ Queue<Node> queue = new LinkedList<>(); List<Node> result = new ArrayList<>(); if (root == null){ return result; } queue.add(root); // Don't visit this root node yet... while (!queue.isEmpty()) { Node node = queue.poll(); result.add(node); // Here we visit the node // Add the children of the visited node to the queue if (node.left != null) queue.add(node.left); if (node.right != null) queue.add(node.right); } return result; } The caller can do this: for (Node node : bfs(node1)) { System.out.println(node.data); } A: On a more side note, you can find an implementation for the BFS algorithm along with a Ford Fulkerson algorithm on this GitHub repository: Ford Fulkerson with BFS- Max flow problem
BFS (Breadth First Search) Algorithm in Java -> Cannot implement bfs by not getting node's siblings
I have a problem about getting all siblings from the main node and implementing the process n Breadth First Search algorithm written by Java. How can I implement that? I shared my code snippets shown below. Here is my Node class shown below. public class Node{ Node(int data){ this.data = data; this.left = null; this.right = null; this.visited = false; } int data; Node left; Node right; boolean visited; // getter and setter } Here is the initilaization process shown below. Node node1 = new Node(1); Node node7 = new Node(7); Node node9 = new Node(9); Node node8 = new Node(8); Node node2 = new Node(2); Node node3 = new Node(3); node1.left = node7; node1.right = node9; node7.right = node8; node9.right = node3; node9.left = node2; Here is the method shown below. public static void bfs(Node root){ if (root == null){ return; } Node temp; //a binary tree with a inner generic node class Queue<Node> queue = new LinkedList<>(); //can't instantiate a Queue since abstract, so use LLQueue queue.add(root); root.visited = true; while (!queue.isEmpty()) { temp = queue.poll(); //remove the node from the queue // How can I get all siblings of the node like // for (Node sibling : temp.getSiblingNodes()) // sibling.visited=true; // queue.add(sibling); } // get the result as a list }
[ "Since Node has a property isVisited, I assume that there could be cycles in the Graph.\nThe algorithm can be described in the following steps:\n\nMark the root Node as visited and put it into the Queue.\n\nThen until the Queue is not empty, repeat:\n\nRemove the Node (current Node) from the head of the Queue;\nCheck its left and right child-Nodes. If a child-Node exists (i.e. not null) and hasn't been visited yet, then add this node both into the Queue and resulting list of sibling-Nodes and set isVisited property of the child-Node to true.\n\n\n\nThat's how it might be implemented:\npublic static List<Node> bfs(Node root) {\n if (root == null) return Collections.emptyList();\n \n List<Node> siblings = new ArrayList<>();\n Queue<Node> queue = new ArrayDeque<>(); // performs better than LinkedList\n \n queue.add(root);\n // siblings.add(root); // uncomment this line ONLY if you need the root-Node to be present in the result\n root.visited = true;\n \n while (!queue.isEmpty()) {\n Node current = queue.poll();\n\n tryAdd(siblings, queue, current.left);\n tryAdd(siblings, queue, current.right);\n }\n return siblings;\n}\n\npublic static void tryAdd(List<Node> siblings, Queue<Node> queue, Node next) {\n if (next != null && !next.isVisited()) {\n queue.add(next);\n siblings.add(next);\n next.setVisited(true);\n }\n}\n\nTo avoid repeating the same actions for both left and right child-Nodes, I've created method tryAdd().\nWe can alter its conditional logic by introducing a Predicate (in this case condition is short and well readable, and this option is shown rather for education purposes):\npublic static final Predicate<Node> IS_NULL_OR_VISITED =\n Predicate.<Node>isEqual(null).or(Node::isVisited);\n\npublic static void tryAdd(List<Node> siblings, Queue<Node> queue, Node next) {\n if (IS_NULL_OR_VISITED.test(next)) return;\n \n queue.add(next);\n siblings.add(next);\n next.setVisited(true);\n}\n\n", "You should not try to get the sibling of a node. If you push the children of the current node to the queue, you will guarantee that you pull them out of the queue in sibling order. The important thing here is that you visit a node when it is pulled from the queue, not when it is added to the queue.\nSo your function could be turned into this:\n public static List<Node> bfs(Node root){\n Queue<Node> queue = new LinkedList<>();\n List<Node> result = new ArrayList<>();\n if (root == null){\n return result;\n }\n queue.add(root); // Don't visit this root node yet...\n while (!queue.isEmpty())\n {\n Node node = queue.poll();\n result.add(node); // Here we visit the node\n // Add the children of the visited node to the queue\n if (node.left != null) queue.add(node.left);\n if (node.right != null) queue.add(node.right);\n }\n return result;\n }\n\nThe caller can do this:\n for (Node node : bfs(node1)) {\n System.out.println(node.data);\n }\n\n", "On a more side note, you can find an implementation for the BFS algorithm along with a Ford Fulkerson algorithm on this GitHub repository: Ford Fulkerson with BFS- Max flow problem\n" ]
[ 2, 0, 0 ]
[]
[]
[ "algorithm", "breadth_first_search", "graph", "java" ]
stackoverflow_0074677457_algorithm_breadth_first_search_graph_java.txt
Q: Cannot access the user-object in tRPC createContext (express) I'm having an issue where my tRPC-configuration cannot access the express session on the request object. I am using passport.js with google and facebook providers, and on any normal http-route (not on the tRPC router), I get the userinfo when calling req.user. app.ts: import * as trpc from '@trpc/server'; import * as trpcExpress from '@trpc/server/adapters/express'; const appRouter = trpc .router() .mutation('addTodo', { input: z.string(), resolve: ({input, ctx}) => { // Add a todo }, }); const app = express(); app.use( session({ secret: 'use an env-variable here', }), ); app.use(passport.initialize()); app.use(passport.session()); app.use( '/trpc', trpcExpress.createExpressMiddleware({ router: appRouter, createContext: (ctx: trpcExpress.CreateExpressContextOptions) => { // === HERE LIES THE ISSUE === console.log(ctx.req.user); // ^ THIS RETURNS UNDEFINED return ctx; }, }), ); app.get("ping", (req, res) => { console.log(req.user); // ^ THIS RETURNS THE USER res.send("pong"); }) It would be easy to say that tRPC doesn't support give you the user, but there must be some sort of workaround, right? A: im not sure how passport works, as i usually work with express sessions, but maybe the concept will be the same function createContext(opts: trpcExpress.CreateExpressContextOptions) { let user = {}; if (opts.req.session.user) { user = opts.req.session.user; } return { user }; } type Context = inferAsyncReturnType<typeof createContext>; const t = initTRPC.context<Context>().create(); const appRouter = t.router({ hello: t.procedure.query(({ ctx, input }) => { console.log("user", ctx.req?.session?.user); console.log({ input }); return "Hello world"; }), session: t.procedure.query(({ ctx, input }) => { console.log({ input }); ctx.req.session.user = { name: "jane" }; return "session created"; }), }); app.use( "/trpc", trpcExpress.createExpressMiddleware({ router: appRouter, createContext }) ); tldr: check the req.session object for the user and then add it to the object the context returns A: I had a similar issue, albeit with typescript. I noticed the request types were different and had to type them differently to match, so I could have my session available. If you are using typescript, you can type your context like this import { CreateExpressContextOptions } from '@trpc/server/adapters/express' import { Request } from 'express' import session from 'express-session' // create my session type by extending the session type from 'express-session' and adding my userId on it export type Session = session.Session & Partial<session.SessionData> & { userId?: string } // Replace the req object from trpc's CreateExpressContextOptions with the express request and add my session onto it type ExpressRequest = Omit<CreateExpressContextOptions, 'req'> & { req: Request & { session: Session } } export const createContext = ({ req, res, }: ExpressRequest) => { return { req, res, } } export type Context = inferAsyncReturnType<typeof createContext> A: I added this to my client-side fetcher used by trpc { credentials: "include" }
Cannot access the user-object in tRPC createContext (express)
I'm having an issue where my tRPC-configuration cannot access the express session on the request object. I am using passport.js with google and facebook providers, and on any normal http-route (not on the tRPC router), I get the userinfo when calling req.user. app.ts: import * as trpc from '@trpc/server'; import * as trpcExpress from '@trpc/server/adapters/express'; const appRouter = trpc .router() .mutation('addTodo', { input: z.string(), resolve: ({input, ctx}) => { // Add a todo }, }); const app = express(); app.use( session({ secret: 'use an env-variable here', }), ); app.use(passport.initialize()); app.use(passport.session()); app.use( '/trpc', trpcExpress.createExpressMiddleware({ router: appRouter, createContext: (ctx: trpcExpress.CreateExpressContextOptions) => { // === HERE LIES THE ISSUE === console.log(ctx.req.user); // ^ THIS RETURNS UNDEFINED return ctx; }, }), ); app.get("ping", (req, res) => { console.log(req.user); // ^ THIS RETURNS THE USER res.send("pong"); }) It would be easy to say that tRPC doesn't support give you the user, but there must be some sort of workaround, right?
[ "im not sure how passport works, as i usually work with express sessions, but maybe the concept will be the same\nfunction createContext(opts: trpcExpress.CreateExpressContextOptions) {\n let user = {};\n if (opts.req.session.user) {\n user = opts.req.session.user;\n }\n\n return {\n user\n };\n}\n\ntype Context = inferAsyncReturnType<typeof createContext>;\nconst t = initTRPC.context<Context>().create();\nconst appRouter = t.router({\n hello: t.procedure.query(({ ctx, input }) => {\n console.log(\"user\", ctx.req?.session?.user);\n console.log({ input });\n return \"Hello world\";\n }),\n\n session: t.procedure.query(({ ctx, input }) => {\n console.log({ input });\n ctx.req.session.user = { name: \"jane\" };\n return \"session created\";\n }),\n});\n\napp.use(\n \"/trpc\",\n trpcExpress.createExpressMiddleware({ router: appRouter, createContext })\n);\n\ntldr: check the req.session object for the user and then add it to the object the context returns\n", "I had a similar issue, albeit with typescript. I noticed the request types were different and had to type them differently to match, so I could have my session available.\nIf you are using typescript, you can type your context like this\nimport { CreateExpressContextOptions } from '@trpc/server/adapters/express'\nimport { Request } from 'express'\nimport session from 'express-session'\n\n// create my session type by extending the session type from 'express-session' and adding my userId on it\nexport type Session = session.Session &\n Partial<session.SessionData> & { userId?: string }\n\n// Replace the req object from trpc's CreateExpressContextOptions with the express request and add my session onto it\ntype ExpressRequest = Omit<CreateExpressContextOptions, 'req'> & {\n req: Request & { session: Session }\n}\n\nexport const createContext = ({\n req,\n res,\n}: ExpressRequest) => {\n return {\n req,\n res,\n }\n}\n\nexport type Context = inferAsyncReturnType<typeof createContext>\n\n", "I added this to my client-side fetcher used by trpc\n{\n credentials: \"include\"\n}\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "express", "express_session", "passport.js", "trpc.io", "typescript" ]
stackoverflow_0074076025_express_express_session_passport.js_trpc.io_typescript.txt
Q: How to get rid of System.UnauthorizedAccessException in WPF I'm trying to write generated pdf file to the directory on Desktop, but System.UnauthorizedAccessException: "Access to the path is denied." appears every time. I tried to run Visual Studio as admin, but it didn't work. PDF-generator method code: public void GeneratePDF() { Encoding.RegisterProvider(CodePagesEncodingProvider.Instance); PdfDocument document = new PdfDocument(); document.Info.Title = $"Чек №{_orderId}"; PdfPage page = document.AddPage(); XGraphics gfx = XGraphics.FromPdfPage(page); XFont font = new XFont("Arial", 18); gfx.DrawString($"{document.Info.Title}", new XFont("Arial", 40, XFontStyle.Bold), XBrushes.Black, new XPoint(200, 70)); document.Save(@"C:\Users\HP\Desktop\CourseWorkPDFs"); } Exception: A: You have to specify a filename, not only the name of the directory: document.Save(@"C:\Users\HP\Desktop\CourseWorkPDFs\FileName.pdf"); You'll have to modify the code to generate a unique filename from the orderid or the current date and time or using some other mechanism.
How to get rid of System.UnauthorizedAccessException in WPF
I'm trying to write generated pdf file to the directory on Desktop, but System.UnauthorizedAccessException: "Access to the path is denied." appears every time. I tried to run Visual Studio as admin, but it didn't work. PDF-generator method code: public void GeneratePDF() { Encoding.RegisterProvider(CodePagesEncodingProvider.Instance); PdfDocument document = new PdfDocument(); document.Info.Title = $"Чек №{_orderId}"; PdfPage page = document.AddPage(); XGraphics gfx = XGraphics.FromPdfPage(page); XFont font = new XFont("Arial", 18); gfx.DrawString($"{document.Info.Title}", new XFont("Arial", 40, XFontStyle.Bold), XBrushes.Black, new XPoint(200, 70)); document.Save(@"C:\Users\HP\Desktop\CourseWorkPDFs"); } Exception:
[ "You have to specify a filename, not only the name of the directory:\ndocument.Save(@\"C:\\Users\\HP\\Desktop\\CourseWorkPDFs\\FileName.pdf\");\n\nYou'll have to modify the code to generate a unique filename from the orderid or the current date and time or using some other mechanism.\n" ]
[ 0 ]
[]
[]
[ "c#", "directory", "pdf" ]
stackoverflow_0074678384_c#_directory_pdf.txt
Q: _InternalLinkedHashMap' is not a subtype of type 'Map i got an error and this is my API wrapper if (method == "get") { var param = ''; Map<String, dynamic> params = data['params'] != null ? data['params'] : {}; params.forEach((k, v) => param += k + "=" + (v == null ? '' : v) + "&"); try { Dio dio = new Dio(); dio.options.headers = headers; final response = await dio.get(url + "?" + param); responseJson = _response(response); //print('Response ' + response.data.toString()); } on SocketException { throw FetchDataException('Tidak terhubung ke server'); } . . . and this is the repository: Future<LeaveListModel> fetchResponse(query) async { final response = await _wrapper.apiRequest( "get", _wrapper.leaveListGetData, {'params': query}, true); return LeaveListModel.fromJson(response); } this is the service, in case you wanted to know: class Service { GetStorage localData = GetStorage(); final String initial = "Service"; final String baseUrl = ConstantConfig().leaveListEndPoint; final String appCode = ConstantConfig().leaveListAppCode; final String outputType = "json"; final String routeAuthConnect = "auth/connect/"; final String routeAuthGetAccessToken = "auth/getAccessToken/"; final String leaveListGetData = "list/"; Future<dynamic> apiRequest( String method, String route, Map<String, dynamic> data, [bool needToken = false]) async { ApiWrapper _apiWrapper = ApiWrapper(); if (needToken) { int thisTime = (DateTime.now().millisecondsSinceEpoch / 1000).round(); String? savedExpired = localData.read(initial + KeyStorage.accessExpired); int expire = savedExpired == null ? 0 : int.parse(savedExpired); if (expire < thisTime) { dynamic tokenResponse = await _apiWrapper.request(baseUrl, initial, appCode, outputType, 'post', routeAuthGetAccessToken, {}); await localData.write(initial + KeyStorage.accessToken, tokenResponse['response']['access_token']); return await _apiWrapper.request( baseUrl, initial, appCode, outputType, method, route, data); } else { return await _apiWrapper.request( baseUrl, initial, appCode, outputType, method, route, data); } } else { return await _apiWrapper.request( baseUrl, initial, appCode, outputType, method, route, data); } } } i just want to call a json file but it show an error, i don't know what's wrong with my code, in case you know how to fix it please let me know this is the error: I/flutter (20696): Call https:xxxxxxxxxxxx I/flutter (20696): type '_InternalLinkedHashMap<dynamic, dynamic>' is not a subtype of type 'Map<String, dynamic>' another error note: I/flutter ( 7626): {} I/flutter ( 7626): 1 I/flutter ( 7626): type '_InternalLinkedHashMap<dynamic, dynamic>' is not a subtype of type 'Map<String, dynamic>' I/flutter ( 7626): {} I/flutter ( 7626): 1 I/flutter ( 7626): type '_InternalLinkedHashMap<dynamic, dynamic>' is not a subtype of type 'Map<String, dynamic>' actually that shows that the process is stopped on Map<String, dynamic> params = data['params'] != null ? data['params'] : {}; because i try to debug the API wrapper with print('1');, print('2');, print('3'); just to show where the process stop and getting error A: You need decode your response first then use it: LeaveListModel.fromJson(jsonDecode(response)); also I think you are passing query encoded to ApiWrapper, so you need to decode it like this: Map<String, dynamic> params = data['params'] != null ? jsonDecode(data['params']) : {}; A: return LeaveListModel.fromJson(response); It seems to me that you should use response.data here, instead of response. 
return LeaveListModel.fromJson(response.data); A: Try this: Map<String, dynamic>.from(yourData) A: It looks like you're trying to assign a Map<dynamic, dynamic> object to a variable of type Map<String, dynamic>. This is causing the error that you're seeing because the types do not match. In order to fix this error, you can either cast the Map<dynamic, dynamic> object to a Map<String, dynamic> object before assigning it to the variable, or you can change the type of the variable to Map<dynamic, dynamic>. Here is an example of how you could fix the error by casting the object to the correct type: Map<String, dynamic> params = data['params'] != null ? data['params'] as Map<String, dynamic> : {}; Alternatively, you could fix the error by changing the type of the params variable to Map<dynamic, dynamic>: Map<dynamic, dynamic> params = data['params'] != null ? data['params'] : {}; A: For Map<String, dynamic> json, parse like this List<Blog> blogs = (json['blogs'] as List<dynamic>).map((e) => Blog.fromJson(e as Map<String, dynamic>)).toList();
_InternalLinkedHashMap' is not a subtype of type 'Map
i got an error and this is my API wrapper if (method == "get") { var param = ''; Map<String, dynamic> params = data['params'] != null ? data['params'] : {}; params.forEach((k, v) => param += k + "=" + (v == null ? '' : v) + "&"); try { Dio dio = new Dio(); dio.options.headers = headers; final response = await dio.get(url + "?" + param); responseJson = _response(response); //print('Response ' + response.data.toString()); } on SocketException { throw FetchDataException('Tidak terhubung ke server'); } . . . and this is the repository: Future<LeaveListModel> fetchResponse(query) async { final response = await _wrapper.apiRequest( "get", _wrapper.leaveListGetData, {'params': query}, true); return LeaveListModel.fromJson(response); } this is the service, in case you wanted to know: class Service { GetStorage localData = GetStorage(); final String initial = "Service"; final String baseUrl = ConstantConfig().leaveListEndPoint; final String appCode = ConstantConfig().leaveListAppCode; final String outputType = "json"; final String routeAuthConnect = "auth/connect/"; final String routeAuthGetAccessToken = "auth/getAccessToken/"; final String leaveListGetData = "list/"; Future<dynamic> apiRequest( String method, String route, Map<String, dynamic> data, [bool needToken = false]) async { ApiWrapper _apiWrapper = ApiWrapper(); if (needToken) { int thisTime = (DateTime.now().millisecondsSinceEpoch / 1000).round(); String? savedExpired = localData.read(initial + KeyStorage.accessExpired); int expire = savedExpired == null ? 0 : int.parse(savedExpired); if (expire < thisTime) { dynamic tokenResponse = await _apiWrapper.request(baseUrl, initial, appCode, outputType, 'post', routeAuthGetAccessToken, {}); await localData.write(initial + KeyStorage.accessToken, tokenResponse['response']['access_token']); return await _apiWrapper.request( baseUrl, initial, appCode, outputType, method, route, data); } else { return await _apiWrapper.request( baseUrl, initial, appCode, outputType, method, route, data); } } else { return await _apiWrapper.request( baseUrl, initial, appCode, outputType, method, route, data); } } } i just want to call a json file but it show an error, i don't know what's wrong with my code, in case you know how to fix it please let me know this is the error: I/flutter (20696): Call https:xxxxxxxxxxxx I/flutter (20696): type '_InternalLinkedHashMap<dynamic, dynamic>' is not a subtype of type 'Map<String, dynamic>' another error note: I/flutter ( 7626): {} I/flutter ( 7626): 1 I/flutter ( 7626): type '_InternalLinkedHashMap<dynamic, dynamic>' is not a subtype of type 'Map<String, dynamic>' I/flutter ( 7626): {} I/flutter ( 7626): 1 I/flutter ( 7626): type '_InternalLinkedHashMap<dynamic, dynamic>' is not a subtype of type 'Map<String, dynamic>' actually that shows that the process is stopped on Map<String, dynamic> params = data['params'] != null ? data['params'] : {}; because i try to debug the API wrapper with print('1');, print('2');, print('3'); just to show where the process stop and getting error
[ "You need decode your response first then use it:\nLeaveListModel.fromJson(jsonDecode(response));\n\nalso I think you are passing query encoded to ApiWrapper, so you need to decode it like this:\nMap<String, dynamic> params = data['params'] != null ? jsonDecode(data['params']) : {};\n\n", "return LeaveListModel.fromJson(response);\n\nIt seems to me that you should use response.data here, instead of response.\nreturn LeaveListModel.fromJson(response.data);\n\n", "Try this:\nMap<String, dynamic>.from(yourData)\n\n", "It looks like you're trying to assign a Map<dynamic, dynamic> object to a variable of type Map<String, dynamic>. This is causing the error that you're seeing because the types do not match. In order to fix this error, you can either cast the Map<dynamic, dynamic> object to a Map<String, dynamic> object before assigning it to the variable, or you can change the type of the variable to Map<dynamic, dynamic>.\nHere is an example of how you could fix the error by casting the object to the correct type:\nMap<String, dynamic> params = data['params'] != null\n ? data['params'] as Map<String, dynamic>\n : {};\n\nAlternatively, you could fix the error by changing the type of the params variable to Map<dynamic, dynamic>:\nMap<dynamic, dynamic> params = data['params'] != null ? data['params'] : {};\n\n", "For Map<String, dynamic> json, parse like this\nList<Blog> blogs = (json['blogs'] as List<dynamic>).map((e) => Blog.fromJson(e as Map<String, dynamic>)).toList();\n\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "dart", "flutter", "json" ]
stackoverflow_0074559078_dart_flutter_json.txt
Q: Visual Studio 22 Multiple Startup Projects I`m using VS 2022 Professional 2022 (17.2.0). I wanted to create a ASP.Net Core app with Angular in VS and used the Microsoft Tutorial for it. https://learn.microsoft.com/en-us/visualstudio/javascript/tutorial-asp-net-core-with-angular?view=vs-2022 When I want to start Multiple Project as once, only Angular CLI will start the ng start command, but the ASP.NET Core Project will never start. When I disable the Angular Project inside, Multiple Startprojects, the .Net Core Console will start. Creation started... 1>------ Build started: Project: ConverterBackend, Configuration: Debug Any CPU ------ 2>------ Build started: Project: Converter, Configuration: Debug Any CPU ------ 1>Analyzer tools are skipped to speed up the build. You can run the Build or Rebuild command to run the analyzer. 1>ConverterBackend -> C:\Optimization\Converter\ConverterBackend\bin\Debug\net6.0\ConverterBackend.dll 3>------ Deployment started: Project: Converter, Configuration: Debug Any CPU ------ Settings: proxy.conf.js target: "https://localhost:7049", launchSettings.json "applicationUrl": "https://localhost:7049;https://localhost:5001", I hope somebody has a Solution or run into the same Error once, cause I cant find anything about this. Could this be an AV problem? Solution: If u ever run into this Problem Check Script Execution permissions and if the AV blocks something. A: You have to start the ASP.Net Core app. In there you should have a Startup Class. Go to Configure Services and add something like this: services.AddSpaStaticFiles(configuration => { configuration.RootPath = "WebUI"; }); Then the Kestrel Server should start up with the SPA. A: I was also struggling with this and finally found the solution. The problem was the Node.js version: I was using Node v18.12.1 (LTS at the time). I had installed it with nvm for Windows. This way, the frontend blocked the startup of the backend. Then, I used nvm to switch to Node v16.18.1. In both cases for 64-bit architecture. With this change, effectively the frontend and backend run perfect as expected from the tutorial. This solution belongs to this user in Microsoft Q&A, where I sought for help as well.
Visual Studio 22 Multiple Startup Projects
I'm using VS 2022 Professional (17.2.0). I wanted to create an ASP.NET Core app with Angular in VS and used the Microsoft tutorial for it. https://learn.microsoft.com/en-us/visualstudio/javascript/tutorial-asp-net-core-with-angular?view=vs-2022 When I want to start multiple projects at once, only the Angular CLI starts (running the ng start command), but the ASP.NET Core project never starts. When I disable the Angular project inside Multiple Startup Projects, the .NET Core console will start. Creation started... 1>------ Build started: Project: ConverterBackend, Configuration: Debug Any CPU ------ 2>------ Build started: Project: Converter, Configuration: Debug Any CPU ------ 1>Analyzer tools are skipped to speed up the build. You can run the Build or Rebuild command to run the analyzer. 1>ConverterBackend -> C:\Optimization\Converter\ConverterBackend\bin\Debug\net6.0\ConverterBackend.dll 3>------ Deployment started: Project: Converter, Configuration: Debug Any CPU ------ Settings: proxy.conf.js target: "https://localhost:7049", launchSettings.json "applicationUrl": "https://localhost:7049;https://localhost:5001", I hope somebody has a solution or has run into the same error once, because I can't find anything about this. Could this be an AV problem? Solution: If you ever run into this problem, check script execution permissions and whether the AV blocks something.
[ "You have to start the ASP.Net Core app. In there you should have a Startup Class.\nGo to Configure Services and add something like this:\nservices.AddSpaStaticFiles(configuration => { configuration.RootPath = \"WebUI\"; });\nThen the Kestrel Server should start up with the SPA.\n", "I was also struggling with this and finally found the solution. The problem was the Node.js version: I was using Node v18.12.1 (LTS at the time). I had installed it with nvm for Windows. This way, the frontend blocked the startup of the backend.\nThen, I used nvm to switch to Node v16.18.1. In both cases for 64-bit architecture. With this change, effectively the frontend and backend run perfect as expected from the tutorial.\nThis solution belongs to this user in Microsoft Q&A, where I sought for help as well.\n" ]
[ 0, 0 ]
[ "I use two instances of VS open for this; each with a different start up project.\n" ]
[ -1 ]
[ "angular", "asp.net", "c#", "visual_studio_2022" ]
stackoverflow_0073066531_angular_asp.net_c#_visual_studio_2022.txt
Q: HTML img width/heigh vs srcset/sizes Recently, I am fighting with responsive images for my website. Usually, my policy for images is that they must not be larger than 450 pixel wide, but there could be images that are less than this size. Initially, I used img tag with attribute width and height containing the actual image size. However, they don't work fine in a Responsive world where there could be screen less than 450 pixels. So I started to do some tests with this image. I don't want to provide multiple version of this image to the browser (unless some specific exception), because they are not so wide. So I decided to use this code: <img src="assets/img/software-developer.jpeg" alt="Some Stuff" srcset="assets/img/software-developer.jpeg 450w" sizes="(max-width: 449px) 75vw, (min-width: 450px) 450px" This code works quite well because if screen size (tested with Chrome Developer Tools) is less than 450px the image is simply resized (75 vw), otherwise it is used as it is. However, I also use Page Speed to have performance under control. Consider that I am moving my Wordpress website from a Paid host to Jekyll on Gihub pages for two reasons: costs (Paid Web Host vs Free Github Pages) performance (very poor with Wordpress considering that my content is 99% static) Now if I test the page with Page Speed it tells me to specify width and height attributes. I think for Page Speed they are important because the browser calculate quickly the image dimension without download it and render the page without wait the download of all the images. However, width and height seems don't fit well with today Responsive world. So my question is: is it possible have both widht/height and srcset/sizes? In the code I posted above I expected to use 100vw instead of 75 vw but my image were cropped. I think probably this was due to padding/margin? Am I correct? A: The " width & height" image attribute fixed the image's dimensions this thing you can see in my code snippet. So it would help if you used CSS to style your photos to make them responsive (automatically scaleable on big and small screens.). Try following the CSS code: .my_img { width: 420px; max-width: 100%; height: auto; } <img src="https://images.pexels.com/photos/2453551/pexels-photo-2453551.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1" alt="leaf-image" width="200" height="300" /> <img src="https://images.pexels.com/photos/2453551/pexels-photo-2453551.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1" alt="leaf-image" class="my_img" /> Set class for the image. Then set width whatever you want, after that set max-width: 100%; and height: auto; which will help the browser to increase and decrease the height of the image when the image needs to be scaled down on smaller than 420px screens. Width property will set the image size to 420px when the screen size is bigger than 420px and the max-width: 100%; will make the image responsive, It means when the screen size will less than 420px your image will automatically be adjusted according to your screen with. You can avoid scrset and width/height image attribute. Use CSS.
HTML img width/height vs srcset/sizes
Recently, I have been fighting with responsive images for my website. Usually, my policy for images is that they must not be larger than 450 pixels wide, but there could be images that are smaller than this size. Initially, I used the img tag with width and height attributes containing the actual image size. However, they don't work well in a responsive world where screens can be narrower than 450 pixels. So I started to do some tests with this image. I don't want to provide multiple versions of this image to the browser (unless some specific exception), because they are not so wide. So I decided to use this code: <img src="assets/img/software-developer.jpeg" alt="Some Stuff" srcset="assets/img/software-developer.jpeg 450w" sizes="(max-width: 449px) 75vw, (min-width: 450px) 450px" This code works quite well because if the screen size (tested with Chrome Developer Tools) is less than 450px the image is simply resized (75vw), otherwise it is used as it is. However, I also use Page Speed to keep performance under control. Consider that I am moving my Wordpress website from a paid host to Jekyll on Github pages for two reasons: costs (Paid Web Host vs Free Github Pages) performance (very poor with Wordpress considering that my content is 99% static) Now if I test the page with Page Speed it tells me to specify width and height attributes. I think for Page Speed they are important because the browser can quickly calculate the image dimensions without downloading it and render the page without waiting for the download of all the images. However, width and height seem not to fit well with today's responsive world. So my question is: is it possible to have both width/height and srcset/sizes? In the code I posted above I expected to use 100vw instead of 75vw but my image was cropped. I think this was probably due to padding/margin? Am I correct?
[ "The \" width & height\" image attribute fixed the image's dimensions this thing you can see in my code snippet. So it would help if you used CSS to style your photos to make them responsive (automatically scaleable on big and small screens.).\nTry following the CSS code:\n\n\n.my_img {\n width: 420px;\n max-width: 100%;\n height: auto;\n}\n<img src=\"https://images.pexels.com/photos/2453551/pexels-photo-2453551.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1\" alt=\"leaf-image\" width=\"200\" height=\"300\" />\n\n\n<img src=\"https://images.pexels.com/photos/2453551/pexels-photo-2453551.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1\" alt=\"leaf-image\" class=\"my_img\" />\n\n\n\nSet class for the image.\nThen set width whatever you want, after that set max-width: 100%; and height: auto; which will help the browser to increase and decrease the height of the image when the image needs to be scaled down on smaller than 420px screens.\nWidth property will set the image size to 420px when the screen size is bigger than 420px and the max-width: 100%; will make the image responsive, It means when the screen size will less than 420px your image will automatically be adjusted according to your screen with.\nYou can avoid scrset and width/height image attribute. Use CSS.\n" ]
[ 0 ]
[]
[]
[ "css", "html", "jekyll" ]
stackoverflow_0074678487_css_html_jekyll.txt
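A note on the question in the record above (whether width/height can coexist with srcset/sizes): they can, and this also addresses the PageSpeed warning. The sketch below is a hedged illustration, not taken from the post; the 450x300 intrinsic size is an assumption (the real dimensions of software-developer.jpeg are not given), and .responsive-img is a made-up class name.

    <img src="assets/img/software-developer.jpeg"
         srcset="assets/img/software-developer.jpeg 450w"
         sizes="(max-width: 449px) 100vw, 450px"
         width="450" height="300"
         alt="Some Stuff"
         class="responsive-img">

    /* CSS: let the browser reserve space from width/height, then scale down fluidly */
    .responsive-img {
      max-width: 100%;  /* never wider than the container, which avoids the 100vw cropping */
      height: auto;     /* preserve the ratio implied by the width/height attributes */
    }

With this combination the width/height attributes give the browser the aspect ratio before the image downloads (what PageSpeed asks for), while max-width: 100% keeps the rendering responsive; the cropping seen with 100vw was most likely caused by the container's padding, since 100vw ignores it but max-width: 100% does not.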
Q: VB.NET: How to get the IDs of menu items loaded from a resource DLL I have a main menu in my own resource DLL and I attach it to my Windows Form as the main menu using the API methods LoadMenu and SetMenu. How can I get the menu IDs after a click in WndProc? Protected Overrides Sub WndProc(ByRef m As Message) If m.Msg = &H11F And m.LParam <> HMainMenu Then If m.LParam <> 0 Then MenuID = GetMenuItemID(m.LParam, 0) Me.ListBox1.Items.Add("Selected.") Else Me.ListBox1.Items.Add("Clicked. " & MenuID.ToString) End If End If MyBase.WndProc(m) End Sub This statement gives the wrong ID. I found a way to get a different value for each menu item selection from wParam: Protected Overrides Sub WndProc(ByRef m As Message) If m.Msg = &H11F And m.LParam <> HMainMenu Then If m.LParam <> 0 Then MenuID =(m.WParam.ToInt64 And 255) Me.ListBox1.Items.Add("Selected.") Else Me.ListBox1.Items.Add("Clicked. " & MenuID.ToString) End If End If MyBase.WndProc(m) End Sub With this change I can get an ID after a menu item is clicked, but it is not the true ID from the resource DLL. How can I get the true ID of the clicked menu item? A: Use this Code to Get MenuItem ID: MenuID = (m.WParam.ToInt64 And &HFFFF&)
VB.NET: How to get the IDs of menu items loaded from a resource DLL
I have a main menu in my own resource DLL and I attach it to my Windows Form as the main menu using the API methods LoadMenu and SetMenu. How can I get the menu IDs after a click in WndProc? Protected Overrides Sub WndProc(ByRef m As Message) If m.Msg = &H11F And m.LParam <> HMainMenu Then If m.LParam <> 0 Then MenuID = GetMenuItemID(m.LParam, 0) Me.ListBox1.Items.Add("Selected.") Else Me.ListBox1.Items.Add("Clicked. " & MenuID.ToString) End If End If MyBase.WndProc(m) End Sub This statement gives the wrong ID. I found a way to get a different value for each menu item selection from wParam: Protected Overrides Sub WndProc(ByRef m As Message) If m.Msg = &H11F And m.LParam <> HMainMenu Then If m.LParam <> 0 Then MenuID =(m.WParam.ToInt64 And 255) Me.ListBox1.Items.Add("Selected.") Else Me.ListBox1.Items.Add("Clicked. " & MenuID.ToString) End If End If MyBase.WndProc(m) End Sub With this change I can get an ID after a menu item is clicked, but it is not the true ID from the resource DLL. How can I get the true ID of the clicked menu item?
[ "Use this Code to Get MenuItem ID:\nMenuID = (m.WParam.ToInt64 And &HFFFF&)\n\n" ]
[ 0 ]
[]
[]
[ "mainmenu", "menu", "resources", "vb.net", "wndproc" ]
stackoverflow_0074657423_mainmenu_menu_resources_vb.net_wndproc.txt
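For context on the accepted masking trick in the record above: for WM_MENUSELECT (&H11F) the low-order word of wParam carries the menu item ID (or the item index for pop-up sub-menus) and the high-order word carries the MF_* flags. Below is a hedged sketch of how that can sit inside WndProc; ListBox1 is reused from the question's own code and the rest is illustrative, not the asker's exact form.

    Protected Overrides Sub WndProc(ByRef m As Message)
        Const WM_MENUSELECT As Integer = &H11F
        If m.Msg = WM_MENUSELECT Then
            ' Low-order word of wParam = menu item ID (or index for pop-up items)
            Dim itemId As Integer = CInt(m.WParam.ToInt64() And &HFFFF)
            ' High-order word of wParam = MF_* flags describing the selected item
            Dim flags As Integer = CInt((m.WParam.ToInt64() >> 16) And &HFFFF)
            Me.ListBox1.Items.Add("Item ID: " & itemId.ToString() & ", flags: &H" & Hex(flags))
        End If
        MyBase.WndProc(m)
    End Sub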
Q: How to use the @nuxtjs/axios module with Nuxt3? I have this code to get API data from https://fakestoreapi.com/products/ <template> <div> </div> </template> <script> definePageMeta({ layout: "products" }) export default { data () { return { data: '', } }, async fetch() { const res = await this.$axios.get('https://fakestoreapi.com/products/') console.log(res.data) }, } </script> I have installed axios and in nuxt.config.ts I have: // https://nuxt.com/docs/api/configuration/nuxt-config export default defineNuxtConfig({ app: { head: { title: 'Nuxt', meta: [ { name: 'description', content: 'Everything about - Nuxt-3'} ], link: [ {rel: 'stylesheet', href: 'https://fonts.googleapis.com/icon?family=Material+Icons' } ] } }, runtimeConfig: { currencyKey: process.env.CURRENCY_API_KEY }, modules: [ "@nuxtjs/tailwindcss", ], buildModules: [ "@nuxtjs/axios" ], axios: { baseURL: '/', } }) I have the following warning in my console: <Suspense> is an experimental feature and its API will likely change. I am not getting API data in the console. A: As told on this page, we don't use the @nuxtjs/axios module anymore with Nuxt3 but rather ohmyfetch, which is now baked in directly in the core of the framework through $fetch, as written here. Hence, your config file should look like this export default defineNuxtConfig({ app: { head: { title: 'Nuxt Dojo', meta: [ { name: 'description', content: 'Everything about - Nuxt-3' } ], link: [ { rel: 'stylesheet', href: 'https://fonts.googleapis.com/icon?family=Material+Icons' } ] } }, runtimeConfig: { currencyKey: process.env.CURRENCY_API_KEY }, modules: [ "@nuxtjs/tailwindcss" ], }) And the given /pages/products/index.vue can look like this <template> <div> <p v-for="user in users" :key="user.id">ID: {{ user.id }} {{ user.name }}</p> </div> </template> <script> definePageMeta({ layout: "products" }) export default { data () { return { users: '', } }, async mounted() { this.users = await $fetch('https://jsonplaceholder.typicode.com/users') }, } </script> This is how it looks at the end (with a successful HTTP request in the network tab) As a confirmation, we can see that the module is indeed not supported (and will not be) by Nuxt3 on the modules page. The Suspense error is detailed in the official documentation <Suspense> is an experimental feature. It is not guaranteed to reach stable status and the API may change before it does. It may seem scary, but you can totally use the API as is, and since it's a warning and not an error, it's totally fine!
How to use the @nuxtjs/axios module with Nuxt3?
I have this code to get API data from https://fakestoreapi.com/products/ <template> <div> </div> </template> <script> definePageMeta({ layout: "products" }) export default { data () { return { data: '', } }, async fetch() { const res = await this.$axios.get('https://fakestoreapi.com/products/') console.log(res.data) }, } </script> I have installed axios and in nuxt.config.ts I have: // https://nuxt.com/docs/api/configuration/nuxt-config export default defineNuxtConfig({ app: { head: { title: 'Nuxt', meta: [ { name: 'description', content: 'Everything about - Nuxt-3'} ], link: [ {rel: 'stylesheet', href: 'https://fonts.googleapis.com/icon?family=Material+Icons' } ] } }, runtimeConfig: { currencyKey: process.env.CURRENCY_API_KEY }, modules: [ "@nuxtjs/tailwindcss", ], buildModules: [ "@nuxtjs/axios" ], axios: { baseURL: '/', } }) I have the following warning in my console: <Suspense> is an experimental feature and its API will likely change. I am not getting API data in the console.
[ "As told on this page, we don't use the @nuxtjs/axios module anymore with Nuxt3 but rather ohmyfetch, which is now baked-in directly in the core of the framework through $axios as writted here.\nHence, your config file should look like this\nexport default defineNuxtConfig({\n app: {\n head: {\n title: 'Nuxt Dojo',\n meta: [\n { name: 'description', content: 'Everything about - Nuxt-3' }\n ],\n link: [\n { rel: 'stylesheet', href: 'https://fonts.googleapis.com/icon?family=Material+Icons' }\n ]\n }\n },\n runtimeConfig: {\n currencyKey: process.env.CURRENCY_API_KEY\n },\n modules: [\n \"@nuxtjs/tailwindcss\"\n ],\n})\n\nAnd the given /pages/products/index.vue can be like that\n<template>\n <div>\n <p v-for=\"user in users\" :key=\"user.id\">ID: {{ user.id }} {{ user.name }}</p>\n </div>\n</template>\n\n\n<script>\ndefinePageMeta({\n layout: \"products\"\n})\n\nexport default {\n data () {\n return {\n users: '',\n }\n },\n async mounted() {\n this.users = await $fetch('https://jsonplaceholder.typicode.com/users')\n },\n}\n</script>\n\nThis is how it looks at the end (with a successful HTTP request in the network tab)\n\n\nAs a confirmation, we can see that the module is indeed not supported (and will not be) by Nuxt3 on the modules page.\nThe Suspense error is detailed in the official documentation\n\n<Suspense> is an experimental feature. It is not guaranteed to reach stable status and the API may change before it does.\n\nIt may seem scary but you can totally use the API as per se and since it's a warning and not an error, it's totally fine!\n" ]
[ 0 ]
[]
[]
[ "axios", "javascript", "nuxt.js", "nuxtjs3", "vue.js" ]
stackoverflow_0074678449_axios_javascript_nuxt.js_nuxtjs3_vue.js.txt
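As a small addition to the answer in the record above, the same request can be written with Nuxt 3's built-in useFetch composable instead of calling $fetch in mounted(), which also works during server-side rendering. This is a hedged sketch against the same public JSONPlaceholder endpoint used in the answer, not the asker's fakestoreapi code.

    <script setup>
    // pages/products/index.vue (sketch)
    definePageMeta({ layout: "products" })

    // data is a ref; pending and error cover the loading and failure states
    const { data: users, pending, error } = await useFetch(
      'https://jsonplaceholder.typicode.com/users'
    )
    </script>

    <template>
      <div>
        <p v-if="pending">Loading...</p>
        <p v-else-if="error">Request failed</p>
        <p v-for="user in users" :key="user.id">ID: {{ user.id }} {{ user.name }}</p>
      </div>
    </template>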
Q: Nuxt 3 - How to add Meta tags on a Dynamic route at Build The issue I've encountered originates from attempting to apply dynamic OpenGraph meta tags to a dynamically generated route in Nuxt 3 (and by extension, Vue 3). I've tried to set the meta tags dynamically through Javascript - which appears to be the only dynamic option which Nuxt 3 currently supports, to no avail. Obviously, when the Open Graph scraper requests the page, it doesn't run any Javascript, meaning my meta tags do not get applied. I do not want to server-side render these pages; keeping them dynamically generated is an important part of this problem. So far I have attempted using the <Head> tag, with the content property generated dynamically: <Head> <Meta hid="og:url" property="og:url" :content="`https://my-site.com/{$route.path}`" /> </Head> This causes the meta tags to be applied properly, but only after the Javascript has been executed. So, as I mentioned before, the Open Graph web scrapers do not correctly apply it. The solution I was hoping to find was a method that could add the meta tags at build time - is this possible? Or is there a better solution I'm not considering? A: use useHead composable The properties of useHead can be dynamic, accepting ref, computed, and reactive properties. meta parameter can also accept a function returning an object to make the entire object reactive. learn more here: https://nuxt.com/docs/api/composables/use-head
Nuxt 3 - How to add Meta tags on a Dynamic route at Build
The issue I've encountered originates from attempting to apply dynamic OpenGraph meta tags to a dynamically generated route in Nuxt 3 (and by extension, Vue 3). I've tried to set the meta tags dynamically through Javascript - which appears to be the only dynamic option which Nuxt 3 currently supports, to no avail. Obviously, when the Open Graph scraper requests the page, it doesn't run any Javascript, meaning my meta tags do not get applied. I do not want to server-side render these pages; keeping them dynamically generated is an important part of this problem. So far I have attempted using the <Head> tag, with the content property generated dynamically: <Head> <Meta hid="og:url" property="og:url" :content="`https://my-site.com/{$route.path}`" /> </Head> This causes the meta tags to be applied properly, but only after the Javascript has been executed. So, as I mentioned before, the Open Graph web scrapers do not correctly apply it. The solution I was hoping to find was a method that could add the meta tags at build time - is this possible? Or is there a better solution I'm not considering?
[ "use useHead composable\n\nThe properties of useHead can be dynamic, accepting ref, computed, and reactive properties. meta parameter can also accept a function returning an object to make the entire object reactive.\n\nlearn more here: https://nuxt.com/docs/api/composables/use-head\n" ]
[ 0 ]
[]
[]
[ "facebook_opengraph", "nuxt.js", "nuxtjs3", "typescript", "vue.js" ]
stackoverflow_0074666659_facebook_opengraph_nuxt.js_nuxtjs3_typescript_vue.js.txt
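To make the useHead pointer in the record above concrete, here is a hedged sketch of setting the og:url tag from inside a dynamic page. The https://my-site.com base is carried over from the question and the [slug] file name is invented for the example; note that crawlers which do not execute JavaScript will still only see this if the page is server-rendered or pre-rendered at generate time, which is exactly the constraint the question describes.

    <script setup>
    // pages/[slug].vue (sketch): reactive Open Graph tags via useHead
    const route = useRoute()

    useHead({
      meta: [
        {
          property: 'og:url',
          // computed so the tag follows client-side route changes as well
          content: computed(() => `https://my-site.com${route.path}`),
        },
      ],
    })
    </script>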
Q: The play method in JavaScript is not working Nothing happens when I click on the buttons to play the corresponding sounds. var tab = document.querySelectorAll(".drum"); for (let i = 0; tab.length; i++) { tab[i].addEventListener("click", function() { var sound = new Audio("sounds\tom-1.mp3"); sound.play(); }); } A: It looks like there is an issue with the for loop that you are using to add event listeners to each button. The condition in your for loop is checking the length of the tab variable, rather than comparing the value of the i variable against it. As a result, the condition is always truthy whenever any buttons exist, so the loop never terminates (and it never runs at all if the list is empty). To fix this, you can change the condition in your for loop to check the value of the i variable instead. Here is how you can modify your code: var tab = document.querySelectorAll(".drum"); for (let i = 0; i < tab.length; i++) { tab[i].addEventListener("click", function() { var sound = new Audio("sounds\tom-1.mp3"); sound.play(); }); } With this change, the for loop will iterate over each element in the tab variable, and add an event listener to each one. This should allow you to play the corresponding sound when you click on each button.
The play method in JavaScript is not working
Nothing happens when I click on the buttons to play the corresponding sounds. var tab = document.querySelectorAll(".drum"); for (let i = 0; tab.length; i++) { tab[i].addEventListener("click", function() { var sound = new Audio("sounds\tom-1.mp3"); sound.play(); }); }
[ "It looks like there is an issue with the for loop that you are using to add event listeners to each button. The condition in your for loop is checking the length of the tab variable, rather than the value of the i variable. As a result, the for loop will never run because the condition will always be false.\nTo fix this, you can change the condition in your for loop to check the value of the i variable instead. Here is how you can modify your code\nvar tab = document.querySelectorAll(\".drum\");\n\nfor (let i = 0; i < tab.length; i++) {\n tab[i].addEventListener(\"click\", function() {\n var sound = new Audio(\"sounds\\tom-1.mp3\");\n \n sound.play();\n });\n}\n\nWith this change, the for loop will iterate over each element in the tab variable, and add an event listener to each one. This should allow you to play the corresponding sound when you click on each button.\n" ]
[ 1 ]
[]
[]
[ "html", "html5_audio", "javascript" ]
stackoverflow_0074679147_html_html5_audio_javascript.txt
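One more detail worth flagging in the record above, beyond the loop condition: in a JavaScript string literal, "sounds\tom-1.mp3" contains the escape sequence \t (a tab character), so the browser requests a path that does not exist. A hedged sketch with both fixes, assuming the audio files live in a sounds/ folder next to the page:

    var tab = document.querySelectorAll(".drum");

    for (let i = 0; i < tab.length; i++) {
      tab[i].addEventListener("click", function () {
        // A forward slash (or an escaped backslash) keeps "\t" from being
        // interpreted as a tab character inside the path.
        var sound = new Audio("sounds/tom-1.mp3");
        sound.play();
      });
    }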
Q: invalid reference to FROM-clause entry for table "t" I have wrote this join statement in sql and triple checked everything and have tried multiple things but can not seem to shake this bug.-- The Talent Acquisition team is looking to fill some open positions. They want you to get them the territory_description and region_description for territories that do not have any employees, sorted by territory_id. Let's tackle this one piece at a time. In order to achieve this result set, we will need to join the territories and regions table together. So first, select the territory_description column of the territories table, aliased as t, and the region_description of the regions table, aliased as r. Write a FROM statement for the territories table. Alias it as 't' as you do so. Then, write a JOIN statement, joining the regions table, aliasing it as 'r'. This JOIN should find records with matching values ON region_id in the territories and regions tables. If you run the query you've constructed at this point, you should see a result set that contains territory descriptions and corresponding region descriptions. But we're not done! We want only records WHERE the territories do not have any employees. Below the JOIN statement, write a WHERE statement to create a subquery. Find WHERE the territory_id in the territories table is NOT IN the result set from a subquery that selects the territory_id from the employee_territories table. Finally, take the final result set and order by territory_id. SELECT t.territory_description, r.region_description, t.region_id, r.region_id FROM territories t, regions r JOIN regions ON t.region_id = r.region_id ERROR: invalid reference to FROM-clause entry for table "t" LINE 29: ON t.region_id = r.region_id ^ HINT: There is an entry for table "t", but it cannot be referenced from this part of the query. SQL state: 42P01 Character: 1524 I have tried references, removing the region ids from the select statements, and even changing some aliases around or putting different tables in the from statement but nothing seems to work. A: Your SQL query is not structured right - you are trying to do both an implicit JOIN via , regions r and an explicit JOIN via JOIN regions. Here's a corrected version of your query that just uses the explicit JOIN: SELECT t.territory_description, r.region_description, t.region_id, r.region_id FROM territories t JOIN regions r ON t.region_id = r.region_id
invalid reference to FROM-clause entry for table "t"
I have wrote this join statement in sql and triple checked everything and have tried multiple things but can not seem to shake this bug.-- The Talent Acquisition team is looking to fill some open positions. They want you to get them the territory_description and region_description for territories that do not have any employees, sorted by territory_id. Let's tackle this one piece at a time. In order to achieve this result set, we will need to join the territories and regions table together. So first, select the territory_description column of the territories table, aliased as t, and the region_description of the regions table, aliased as r. Write a FROM statement for the territories table. Alias it as 't' as you do so. Then, write a JOIN statement, joining the regions table, aliasing it as 'r'. This JOIN should find records with matching values ON region_id in the territories and regions tables. If you run the query you've constructed at this point, you should see a result set that contains territory descriptions and corresponding region descriptions. But we're not done! We want only records WHERE the territories do not have any employees. Below the JOIN statement, write a WHERE statement to create a subquery. Find WHERE the territory_id in the territories table is NOT IN the result set from a subquery that selects the territory_id from the employee_territories table. Finally, take the final result set and order by territory_id. SELECT t.territory_description, r.region_description, t.region_id, r.region_id FROM territories t, regions r JOIN regions ON t.region_id = r.region_id ERROR: invalid reference to FROM-clause entry for table "t" LINE 29: ON t.region_id = r.region_id ^ HINT: There is an entry for table "t", but it cannot be referenced from this part of the query. SQL state: 42P01 Character: 1524 I have tried references, removing the region ids from the select statements, and even changing some aliases around or putting different tables in the from statement but nothing seems to work.
[ "Your SQL query is not structured right - you are trying to do both an implicit JOIN via , regions r and an explicit JOIN via JOIN regions.\nHere's a corrected version of your query that just uses the explicit JOIN:\nSELECT t.territory_description, r.region_description, t.region_id, r.region_id \nFROM territories t\nJOIN regions r\nON t.region_id = r.region_id\n\n" ]
[ 0 ]
[]
[]
[ "join", "postgresql", "sql" ]
stackoverflow_0074679148_join_postgresql_sql.txt
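The corrected JOIN in the record above compiles, but the exercise quoted in the question also asks to exclude territories that have employees and to sort the result. A hedged sketch of the full statement, assuming the employee_territories table mentioned in the prompt exposes a territory_id column:

    SELECT t.territory_description,
           r.region_description
    FROM territories t
    JOIN regions r
      ON t.region_id = r.region_id
    WHERE t.territory_id NOT IN (
          SELECT et.territory_id
          FROM employee_territories et
    )
    ORDER BY t.territory_id;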
Q: Error. Error: Invalid hook call. Hooks can only be called inside of the body of a function component I am creating a library of components. In this library, I created one component and connected it locally via npm link to my project; everything worked and the component was displayed. But then I decided to use styled-components to build the component. Here is my component. import React, {FC} from 'react' import styled from "styled-components" import './mytbc.css' export interface MyButtonProps{ color:string; big?:boolean; } const MyCom: FC<MyButtonProps> = ({children, color, big, ...props}) => { const MyCommon = styled.button` background:${color}; padding:10px; ` return ( <MyCommon> {children} </MyCommon> ) } export default MyCom Then errors appeared in the console. How do I fix these errors, and what are they related to? A: I also had a similar case when I was working with lerna and yarn workspaces. In my case the problem was using multiple different versions of React; some were hoisted and some were not during the lerna compilation process. According to docs In order for Hooks to work, the react import from your application code needs to resolve to the same module as the react import from inside the react-dom package. Run this command to list installed versions of react. And if you see more than one React, you'll need to figure out why this happens and fix your dependency tree. npm ls react OR yarn list react Read more about the problem and solutions here A: Check that you have styled-components in your dependencies in package.json by: cd project_name/src npm install styled-components
Error. Error: Invalid hook call. Hooks can only be called inside of the body of a function component
I am creating a library of components. In this library, I created one component and connected it locally via npm link to my project; everything worked and the component was displayed. But then I decided to use styled-components to build the component. Here is my component. import React, {FC} from 'react' import styled from "styled-components" import './mytbc.css' export interface MyButtonProps{ color:string; big?:boolean; } const MyCom: FC<MyButtonProps> = ({children, color, big, ...props}) => { const MyCommon = styled.button` background:${color}; padding:10px; ` return ( <MyCommon> {children} </MyCommon> ) } export default MyCom Then errors appeared in the console. How do I fix these errors, and what are they related to?
[ "I also had similar case when I was working with lerna and yarn workspaces. In my case the problem was using multiple and different versions of react, some where hoisted some were not during lerna compilation process.\n\nAccording to docs\n\nIn order for Hooks to work, the react import from your application\ncode needs to resolve to the same module as the react import from\ninside the react-dom package.\n\nRun this command to list installed versions of react. And if you see more than one React, you’ll need to figure out why this happens and fix your dependency tree.\nnpm ls react\n\nOR\nyarn list react\n\nRead more about the problem and solutions here\n", "check that you have styled-components in your dependencies in file package.json\nby:\n cd project_name/src\n npm install styled-components\n\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "reactjs", "styled_components" ]
stackoverflow_0070662010_javascript_reactjs_styled_components.txt
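Besides the duplicate copy of React that npm link often brings in (covered in the record above), the posted component also creates the styled component inside the render function, so it is recreated on every render. Below is a hedged sketch that hoists it to module scope and passes the values through transient props; this is illustrative styled-components usage, not a drop-in for the asker's library, and the explicit children field is only there so the sketch also type-checks with React 18 typings.

    import React from 'react'
    import styled from 'styled-components'

    export interface MyButtonProps {
      color: string;
      big?: boolean;
      children?: React.ReactNode;
    }

    // Defined once at module scope, not inside the component body.
    // Transient props ($color, $big) are consumed by styled-components
    // and not forwarded to the underlying <button> element.
    const MyCommon = styled.button<{ $color: string; $big?: boolean }>`
      background: ${(props) => props.$color};
      padding: ${(props) => (props.$big ? '16px' : '10px')};
    `

    const MyCom = ({ children, color, big }: MyButtonProps) => (
      <MyCommon $color={color} $big={big}>
        {children}
      </MyCommon>
    )

    export default MyCom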
Q: I can't install matplotlib using pip I am totally new to the Python and I wanted to use a matplotlib for my school project. I tried to install it using pip (pip install matplotlib), but I got a really long and bad-looking error and I don't know what to do... I was trying to upgrade pip and setuptools, but i didn't help. I don't understand this issue, because I installed for example numpy without any problem. Can anybody help me? ERROR: Command errored out with exit status 1: command: 'c:\program files\python38\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\marci_000\\AppData\\Local\\Temp\\pip-install-6ze8b_ec\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\marci_000\\AppData\\Local\\Temp\\pip-install-6ze8b_ec\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\pip-egg-info' cwd: C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\ Complete output (228 lines): ================================================================================ Edit setup.cfg to change the build options BUILDING MATPLOTLIB matplotlib: yes [3.1.1] python: yes [3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)]] platform: yes [win32] OPTIONAL SUBPACKAGES sample_data: yes [installing] tests: no [skipping due to configuration] OPTIONAL BACKEND EXTENSIONS agg: yes [installing] tkagg: yes [installing; run-time loading from Python Tcl/Tk] macosx: no [Mac OS-X only] OPTIONAL PACKAGE DATA dlls: no [skipping due to configuration] Could not locate executable g77 Could not locate executable f77 Could not locate executable ifort Could not locate executable ifl Could not locate executable f90 Could not locate executable DF Could not locate executable efl Could not locate executable gfortran Could not locate executable f95 Could not locate executable g95 Could not locate executable efort Could not locate executable efc Could not locate executable flang don't know how to compile Fortran code on platform 'nt' 'svnversion' is not recognized as an internal or external command, operable program or batch file. non-existing path in 'numpy\\distutils': 'site.cfg' Running from numpy source directory. C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py:418: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. self.calc_info() C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. 
Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info() C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): c:\program files\python38\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) Traceback (most recent call last): File "c:\program files\python38\lib\distutils\core.py", line 148, in setup dist.run_commands() File "c:\program files\python38\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "c:\program files\python38\lib\site-packages\setuptools\command\bdist_egg.py", line 163, in run self.run_command("egg_info") File "c:\program files\python38\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\egg_info.py", line 26, in run File "c:\program files\python38\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 142, in run File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 150, in build_sources File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 267, in build_py_modules_sources File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\misc_util.py", line 2270, in generate_config_py File "c:\program files\python38\lib\distutils\dir_util.py", line 70, in mkpath os.mkdir(head, mode) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 310, in wrap path = self._remap_input(name, path, *args, **kw) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 452, in _remap_input self._violation(operation, os.path.realpath(path), *args, **kw) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 407, in _violation raise SandboxViolation(operation, args, kw) setuptools.sandbox.SandboxViolation: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. 
This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules yield saved File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup _execfile(setup_script, ns) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile exec(code, globals, locals) File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 443, in <module> File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 435, in setup_package File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "c:\program files\python38\lib\distutils\core.py", line 163, in setup raise SystemExit("error: " + str(msg)) SystemExit: error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1144, in run_setup run_setup(setup_script, args) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 253, in run_setup raise File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 166, in save_modules saved_exc.resume() File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 141, in resume six.reraise(type, exc, self._tb) File "c:\program files\python38\lib\site-packages\setuptools\_vendor\six.py", line 685, in reraise raise value.with_traceback(tb) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules yield saved File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup _execfile(setup_script, ns) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile exec(code, globals, locals) File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 443, in <module> File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 435, in setup_package File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "c:\program files\python38\lib\distutils\core.py", line 163, in setup raise SystemExit("error: " + str(msg)) SystemExit: error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\setup.py", line 262, in <module> setup( File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 144, in setup _install_setup_requires(attrs) File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 139, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "c:\program files\python38\lib\site-packages\setuptools\dist.py", line 717, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 780, in resolve dist = best[req.key] = env.best_match( File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 1065, in best_match return self.obtain(req, installer) File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 1077, in obtain return installer(requirement) File "c:\program files\python38\lib\site-packages\setuptools\dist.py", line 787, in fetch_build_egg return cmd.easy_install(req) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 679, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 705, in install_item dists = self.install_eggs(spec, download, tmpdir) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 890, in install_eggs return self.build_and_install(setup_script, setup_base) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1158, in build_and_install self.run_setup(setup_script, setup_base, args) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1146, in run_setup raise DistutilsError("Setup script exited with %s" % (v.args[0],)) distutils.errors.DistutilsError: Setup script exited with error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. A: Add your python to path and try running command in command prompt. A: try to run python -m pip install -U pip python -m pip install -U matplotlib while installing Matplotlib !
I can't install matplotlib using pip
I am totally new to the Python and I wanted to use a matplotlib for my school project. I tried to install it using pip (pip install matplotlib), but I got a really long and bad-looking error and I don't know what to do... I was trying to upgrade pip and setuptools, but i didn't help. I don't understand this issue, because I installed for example numpy without any problem. Can anybody help me? ERROR: Command errored out with exit status 1: command: 'c:\program files\python38\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\marci_000\\AppData\\Local\\Temp\\pip-install-6ze8b_ec\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\marci_000\\AppData\\Local\\Temp\\pip-install-6ze8b_ec\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\pip-egg-info' cwd: C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\ Complete output (228 lines): ================================================================================ Edit setup.cfg to change the build options BUILDING MATPLOTLIB matplotlib: yes [3.1.1] python: yes [3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)]] platform: yes [win32] OPTIONAL SUBPACKAGES sample_data: yes [installing] tests: no [skipping due to configuration] OPTIONAL BACKEND EXTENSIONS agg: yes [installing] tkagg: yes [installing; run-time loading from Python Tcl/Tk] macosx: no [Mac OS-X only] OPTIONAL PACKAGE DATA dlls: no [skipping due to configuration] Could not locate executable g77 Could not locate executable f77 Could not locate executable ifort Could not locate executable ifl Could not locate executable f90 Could not locate executable DF Could not locate executable efl Could not locate executable gfortran Could not locate executable f95 Could not locate executable g95 Could not locate executable efort Could not locate executable efc Could not locate executable flang don't know how to compile Fortran code on platform 'nt' 'svnversion' is not recognized as an internal or external command, operable program or batch file. non-existing path in 'numpy\\distutils': 'site.cfg' Running from numpy source directory. C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py:418: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. self.calc_info() C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. self.calc_info() C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:690: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. 
Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. self.calc_info() C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\system_info.py:1712: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. if getattr(self, '_calc_info_{}'.format(lapack))(): c:\program files\python38\lib\distutils\dist.py:274: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) Traceback (most recent call last): File "c:\program files\python38\lib\distutils\core.py", line 148, in setup dist.run_commands() File "c:\program files\python38\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "c:\program files\python38\lib\site-packages\setuptools\command\bdist_egg.py", line 163, in run self.run_command("egg_info") File "c:\program files\python38\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\egg_info.py", line 26, in run File "c:\program files\python38\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "c:\program files\python38\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 142, in run File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 150, in build_sources File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\command\build_src.py", line 267, in build_py_modules_sources File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\misc_util.py", line 2270, in generate_config_py File "c:\program files\python38\lib\distutils\dir_util.py", line 70, in mkpath os.mkdir(head, mode) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 310, in wrap path = self._remap_input(name, path, *args, **kw) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 452, in _remap_input self._violation(operation, os.path.realpath(path), *args, **kw) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 407, in _violation raise SandboxViolation(operation, args, kw) setuptools.sandbox.SandboxViolation: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. 
This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules yield saved File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup _execfile(setup_script, ns) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile exec(code, globals, locals) File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 443, in <module> File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 435, in setup_package File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "c:\program files\python38\lib\distutils\core.py", line 163, in setup raise SystemExit("error: " + str(msg)) SystemExit: error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1144, in run_setup run_setup(setup_script, args) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 253, in run_setup raise File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\program files\python38\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 166, in save_modules saved_exc.resume() File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 141, in resume six.reraise(type, exc, self._tb) File "c:\program files\python38\lib\site-packages\setuptools\_vendor\six.py", line 685, in reraise raise value.with_traceback(tb) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 154, in save_modules yield saved File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 195, in setup_context yield File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 250, in run_setup _execfile(setup_script, ns) File "c:\program files\python38\lib\site-packages\setuptools\sandbox.py", line 45, in _execfile exec(code, globals, locals) File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 443, in <module> File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\setup.py", line 435, in setup_package File "C:\Users\MARCI_~1\AppData\Local\Temp\easy_install-fqlea6jp\numpy-1.17.3\numpy\distutils\core.py", line 171, in setup File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "c:\program files\python38\lib\distutils\core.py", line 163, in setup raise SystemExit("error: " + str(msg)) SystemExit: error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\marci_000\AppData\Local\Temp\pip-install-6ze8b_ec\matplotlib\setup.py", line 262, in <module> setup( File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 144, in setup _install_setup_requires(attrs) File "c:\program files\python38\lib\site-packages\setuptools\__init__.py", line 139, in _install_setup_requires dist.fetch_build_eggs(dist.setup_requires) File "c:\program files\python38\lib\site-packages\setuptools\dist.py", line 717, in fetch_build_eggs resolved_dists = pkg_resources.working_set.resolve( File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 780, in resolve dist = best[req.key] = env.best_match( File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 1065, in best_match return self.obtain(req, installer) File "c:\program files\python38\lib\site-packages\pkg_resources\__init__.py", line 1077, in obtain return installer(requirement) File "c:\program files\python38\lib\site-packages\setuptools\dist.py", line 787, in fetch_build_egg return cmd.easy_install(req) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 679, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 705, in install_item dists = self.install_eggs(spec, download, tmpdir) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 890, in install_eggs return self.build_and_install(setup_script, setup_base) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1158, in build_and_install self.run_setup(setup_script, setup_base, args) File "c:\program files\python38\lib\site-packages\setuptools\command\easy_install.py", line 1146, in run_setup raise DistutilsError("Setup script exited with %s" % (v.args[0],)) distutils.errors.DistutilsError: Setup script exited with error: SandboxViolation: mkdir('C:\\Users\\MARCI_~1\\AppData\\Local\\Temp\\easy_install-fqlea6jp\\numpy-1.17.3\\build', 511) {} The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available. ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
[ "Add your python to path and try running command in command prompt.\n", "try to run\npython -m pip install -U pip\npython -m pip install -U matplotlib\nwhile installing Matplotlib !\n" ]
[ 0, 0 ]
[]
[]
[ "matplotlib", "pip", "python", "python_3.x" ]
stackoverflow_0058582126_matplotlib_pip_python_python_3.x.txt
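For completeness, the commands suggested in the answers of the record above, gathered in one place. The underlying problem in the log is that pip was trying to build matplotlib (and numpy) from source on a then-new Python 3.8; upgrading the packaging tools and letting pip pick a pre-built wheel is usually enough. A sketch for a Windows command prompt with Python on PATH; the --only-binary line is an optional extra that refuses source builds outright.

    REM Upgrade the packaging toolchain first
    python -m pip install --upgrade pip setuptools wheel

    REM Then install (or upgrade) matplotlib
    python -m pip install --upgrade matplotlib

    REM Optional: force pip to use binary wheels only
    python -m pip install --only-binary :all: matplotlib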
Q: Instant download image on button click I'm trying to make simple button which when is clicked will download image. Currently when I click it is open image on blank page where you should click "Save as.." How can I "force" the browser to download it? This is current image and button <img src="{{ $thumb }}" class="img-responsive"> <a type="submit" download="{{ $thumb }}" href="{{ $thumb }}" class="btn btn-primary"> Download Image </a> I've tried: download="{{ $thumb }}" Also download="{{ $thumb }}" target="_blank" Also tried to put the <img...> tag inside <a href..> tag and still doesn't work. A: <a href="/images/img.jpg" download> By adding this you can download image automatically by just one click note: The download attribute is not supported in Edge version 12, IE, Safari 10 (and earlier), or Opera version 12 (and earlier). A: You can try this to force download an image. However you cannot download something that is not on your domain, unless you are using a domain which accepts cross origin requests (Eg:Imgur) You may use the 'download' attribute of HTML5 but still you won't be able to load in cross origin image. Also the below method will support legacy browsers as well function forceDownload(link){ var url = link.getAttribute("data-href"); var fileName = link.getAttribute("download"); link.innerText = "Working..."; var xhr = new XMLHttpRequest(); xhr.open("GET", url, true); xhr.responseType = "blob"; xhr.onload = function(){ var urlCreator = window.URL || window.webkitURL; var imageUrl = urlCreator.createObjectURL(this.response); var tag = document.createElement('a'); tag.href = imageUrl; tag.download = fileName; document.body.appendChild(tag); tag.click(); document.body.removeChild(tag); link.innerText="Download Image"; } xhr.send(); } <a href="#" data-href='https://i.imgur.com/Mc12OXx.png' download="Image.jpg" onclick='forceDownload(this)'>Download Image</a> Note: You cannot force the browser to show a 'Save As' dialog as it is based upon what the user preferences are. A: Try this <img src="{{ $thumb }}" class="img-responsive"> <a href="{{ $thumb }}" class="btn btn-primary" download> Download Image </a> A: Another option is using the fetch methods function downloadFile(elmnt) { const link = elmnt const url = 'https://assets.ctfassets.net/cfexf643femz/8rnHaKLBl6L4t1LmuV0OG/03d7ee1d08119fce2a3f80b93b97ab05/Lagos_de_Torca_en_cifras.pdf' const options = { 'Accept': 'application/json', 'Content-Type': 'application/json' }; fetch(url, options) .then( response => { response.blob().then(blob => { let url = window.URL.createObjectURL(blob); let a = document.createElement('a'); a.href = url; a.download = "file.pdf"; a.click(); }); }); } https://codepen.io/edgarv09/pen/KKabVob example A: Based on the selected answer above by Jones Joseph, I modified his code to not require an anchor tag in the html to link from. Rather I used a button and in AngularJS passed the name of the photo file as a parameter to ng-click. 
The HTML <button ng-click="MainController.newDownload(MainController.oPhoto.filename)">Download Photo</button> The JavaScript: this.newDownload = function(filename){ var url = 'photos/' + filename var xhr = new XMLHttpRequest(); xhr.open("GET", url, true); xhr.responseType = "blob"; xhr.onload = function(){ var urlCreator = window.URL || window.webkitURL; var imageUrl = urlCreator.createObjectURL(this.response); var tag = document.createElement('a'); tag.href = imageUrl; tag.download = filename; document.body.appendChild(tag); tag.click(); document.body.removeChild(tag); link.innerText="Download Image"; } xhr.send(); } Works in Chrome 107.0.5304.122 (Official Build) (64-bit)
Instant download image on button click
I'm trying to make simple button which when is clicked will download image. Currently when I click it is open image on blank page where you should click "Save as.." How can I "force" the browser to download it? This is current image and button <img src="{{ $thumb }}" class="img-responsive"> <a type="submit" download="{{ $thumb }}" href="{{ $thumb }}" class="btn btn-primary"> Download Image </a> I've tried: download="{{ $thumb }}" Also download="{{ $thumb }}" target="_blank" Also tried to put the <img...> tag inside <a href..> tag and still doesn't work.
[ "<a href=\"/images/img.jpg\" download> By adding this you can download image automatically by just one click\n note: The download attribute is not supported in Edge version 12, IE, Safari 10 (and earlier), or Opera version 12 (and earlier).\n", "You can try this to force download an image.\nHowever you cannot download something that is not on your domain, unless you are using a domain which accepts cross origin requests (Eg:Imgur)\nYou may use the 'download' attribute of HTML5 but still you won't be able to load in cross origin image. \nAlso the below method will support legacy browsers as well\n\n\nfunction forceDownload(link){\r\n var url = link.getAttribute(\"data-href\");\r\n var fileName = link.getAttribute(\"download\");\r\n link.innerText = \"Working...\";\r\n var xhr = new XMLHttpRequest();\r\n xhr.open(\"GET\", url, true);\r\n xhr.responseType = \"blob\";\r\n xhr.onload = function(){\r\n var urlCreator = window.URL || window.webkitURL;\r\n var imageUrl = urlCreator.createObjectURL(this.response);\r\n var tag = document.createElement('a');\r\n tag.href = imageUrl;\r\n tag.download = fileName;\r\n document.body.appendChild(tag);\r\n tag.click();\r\n document.body.removeChild(tag);\r\n link.innerText=\"Download Image\";\r\n }\r\n xhr.send();\r\n}\n<a href=\"#\" data-href='https://i.imgur.com/Mc12OXx.png' download=\"Image.jpg\" onclick='forceDownload(this)'>Download Image</a>\n\n\n\nNote: You cannot force the browser to show a 'Save As' dialog as it is based upon what the user preferences are.\n", "Try this\n<img src=\"{{ $thumb }}\" class=\"img-responsive\"> \n<a href=\"{{ $thumb }}\" class=\"btn btn-primary\" download> \n Download Image\n</a>\n\n", "Another option is using the fetch methods\nfunction downloadFile(elmnt) {\n \n const link = elmnt\n const url = 'https://assets.ctfassets.net/cfexf643femz/8rnHaKLBl6L4t1LmuV0OG/03d7ee1d08119fce2a3f80b93b97ab05/Lagos_de_Torca_en_cifras.pdf'\nconst options = {\n 'Accept': 'application/json',\n 'Content-Type': 'application/json'\n};\n \n fetch(url, options)\n .then( response => {\n response.blob().then(blob => {\n let url = window.URL.createObjectURL(blob);\n let a = document.createElement('a');\n a.href = url;\n a.download = \"file.pdf\";\n a.click();\n });\n }); \n}\n\nhttps://codepen.io/edgarv09/pen/KKabVob example\n", "Based on the selected answer above by Jones Joseph, I modified his code to not require an anchor tag in the html to link from. Rather I used a button and in AngularJS passed the name of the photo file as a parameter to ng-click.\nThe HTML\n<button ng-click=\"MainController.newDownload(MainController.oPhoto.filename)\">Download Photo</button>\n\nThe JavaScript:\n this.newDownload = function(filename){\n var url = 'photos/' + filename\n var xhr = new XMLHttpRequest();\n xhr.open(\"GET\", url, true);\n xhr.responseType = \"blob\";\n xhr.onload = function(){\n var urlCreator = window.URL || window.webkitURL;\n var imageUrl = urlCreator.createObjectURL(this.response);\n var tag = document.createElement('a');\n tag.href = imageUrl;\n tag.download = filename;\n document.body.appendChild(tag);\n tag.click();\n document.body.removeChild(tag);\n link.innerText=\"Download Image\";\n }\n xhr.send();\n }\n\nWorks in Chrome 107.0.5304.122 (Official Build) (64-bit)\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "html", "image" ]
stackoverflow_0050042406_html_image.txt
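A minimal sketch of the blob-download approach described in the answers above, using fetch instead of XMLHttpRequest. The URL and file name below are placeholders, and the technique assumes the image is same-origin or served with permissive CORS headers; otherwise the browser will refuse to expose the response body.

async function downloadImage(url, fileName) {
  // Fetch the image and read the response body as a Blob
  const response = await fetch(url);
  const blob = await response.blob();
  // Point a temporary anchor at an object URL and click it to trigger the download
  const objectUrl = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = objectUrl;
  a.download = fileName;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(objectUrl);
}

// Placeholder usage
downloadImage('/images/thumb.png', 'thumb.png');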
Q: Nested queries in SQL from hr.employees I have a task like this: Using the HR.EMPLOYEES table, build a list of employees whose work experience in the company is below the average. I tried to do this, but an error occurs. How can I rewrite the query correctly? select first_name, last_name from hr.empolyees where MONTHS_BETWEEN(sysdate, hire_date) < (select avg(MONTHS_BETWEEN(sysdate, hire_date) from hr.employees) A: As mentioned in the comments, it looks like the AVG function is missing a closing parenthesis. The misspelled table name hr.empolyees is also corrected to hr.employees below. SELECT first_name, last_name FROM hr.employees WHERE MONTHS_BETWEEN(sysdate, hire_date) < (SELECT AVG(MONTHS_BETWEEN(sysdate, hire_date)) FROM hr.employees)
Nested queries in SQL from hr.employees
I have a task like this: Using the HR.EMPLOYEES table, build a list of employees whose work experience in the company is below the average. I tried to do this, but an error occurs. How can I rewrite the query correctly? select first_name, last_name from hr.empolyees where MONTHS_BETWEEN(sysdate, hire_date) < (select avg(MONTHS_BETWEEN(sysdate, hire_date) from hr.employees)
[ "As mentioned in the comments, it looks like the AVG function is missing a closing parenthesis.\nSELECT first_name, \n last_name \nFROM hr.empolyees \nWHERE MONTHS_BETWEEN(sysdate, hire_date) < (SELECT AVG(MONTHS_BETWEEN(sysdate, hire_date)) FROM hr.employees)\n\n" ]
[ 0 ]
[ "To rewrite the query correctly, you can use the AVG function in the SELECT clause to calculate the average work experience, and then use the HAVING clause to filter the result set to include only employees whose work experience is below the average.\nThe modified query would look like this:\nSELECT first_name, last_name\nFROM hr.employees\nGROUP BY first_name, last_name\nHAVING MONTHS_BETWEEN(sysdate, hire_date) < AVG(MONTHS_BETWEEN(sysdate, hire_date))\nIn this query, the AVG function is used in the SELECT clause to calculate the average work experience of all employees in the HR.EMPLOYEES table. The HAVING clause is then used to filter the result set to include only those employees whose work experience is less than the average.\nNote that the GROUP BY clause is used to group the rows by first_name and last_name, so that the AVG function calculates the average work experience for each employee. This is necessary because the HAVING clause operates on the grouped rows, not on the individual rows in the table.\nIf you want to include only employees whose work experience is strictly less than the average (not equal to the average), you can use the < operator in the HAVING clause instead of the <= operator. For example:\nSELECT first_name, last_name\nFROM hr.employees\nGROUP BY first_name, last_name\nHAVING MONTHS_BETWEEN(sysdate, hire_date) < AVG(MONTHS_BETWEEN(sysdate, hire_date))\nThis query will return a list of employees whose work experience in the company is strictly below the average.\nEDIT\nSELECT first_name, last_name FROM hr.employees WHERE MONTHS_BETWEEN(sysdate, hire_date) < (SELECT AVG(MONTHS_BETWEEN(sysdate, hire_date)) FROM hr.employees)\nIn this query, the subquery in the SELECT clause calculates the average work experience for all employees in the HR.EMPLOYEES table. The outer query then uses the result of this subquery to filter the result set to include only those employees whose work experience is less than the average.\nNote that the subquery must be enclosed in parentheses, and it must be followed by the < operator in the WHERE clause. This is necessary to ensure that the subquery is executed first, and the result is used to filter the result set of the outer query.\nIf you want to include only employees whose work experience is strictly less than the average (not equal to the average), you can use the < operator in the WHERE clause instead of the <= operator. For example:\nSELECT first_name, last_name FROM hr.employees WHERE MONTHS_BETWEEN(sysdate, hire_date) < (SELECT AVG(MONTHS_BETWEEN(sysdate, hire_date)) FROM hr.employees)\nThis query will return a list of employees whose work experience in the company is strictly below the average.\n" ]
[ -1 ]
[ "oracle", "sql" ]
stackoverflow_0074678749_oracle_sql.txt
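To make the reasoning above concrete: AVG is an aggregate over the whole table, so it has to be computed in its own subquery and then compared against each employee's individual tenure; the GROUP BY first_name, last_name / HAVING variant in the downvoted answer compares each employee only against their own one-row group, so it cannot answer the question. A hedged sketch against the standard HR sample schema (assuming the table is spelled hr.employees):

-- Average tenure in months is computed once in a subquery, then compared per employee
SELECT first_name,
       last_name,
       MONTHS_BETWEEN(SYSDATE, hire_date) AS months_in_company
FROM   hr.employees
WHERE  MONTHS_BETWEEN(SYSDATE, hire_date) <
       (SELECT AVG(MONTHS_BETWEEN(SYSDATE, hire_date)) FROM hr.employees)
ORDER  BY months_in_company;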
Q: How to emit data from python socket.io server to angular socket.io client I need to get real time data in my angular project from a python server (no chat). I have the angular side setup, but I don't know how to get the python backend working. angular: import { Injectable } from '@angular/core'; import { Observable } from 'rxjs'; import {io} from 'socket.io-client'; @Injectable({ providedIn: 'root' }) export class WebsocketService { constructor() { this.socket = io() } socket: any; readonly uri: string = "ws://localhost:3000" listen(eventName: string) { return new Observable((subscriber) => { this.socket.on(eventName, ((data: unknown) => { subscriber.next(data) })) }) } emit(eventName: string, data:any) { this.socket.emit(eventName,data) } } How to setup python to send some random numbers to the angular client? Best would be with fastapi from fastapi import FastAPI from fastapi_socketio import SocketManager import uvicorn import random app = FastAPI() sio = SocketManager(app=app) ....no clue here async def rndm(): while True: print(random.randint(1, 100)) <- emit some integers to client instead of printing await asyncio.sleep(1) uvicorn.run("fastapitest:app", host='0.0.0.0', port=8000,reload=True) A: You can use the send method provided by SocketManager to send messages to the client. Here is an example of how you can modify your code to send random numbers to the client: from fastapi import FastAPI from fastapi_socketio import SocketManager import uvicorn import random app = FastAPI() sio = SocketManager(app=app) @sio.on('connect') async def connected(sid): while True: random_number = random.randint(1, 100) print(f"Sending number: {random_number}") await sio.send(sid, random_number) await asyncio.sleep(1) uvicorn.run("fastapitest:app", host='0.0.0.0', port=8000,reload=True)
How to emit data from python socket.io server to angular socket.io client
I need to get real time data in my angular project from a python server (no chat). I have the angular side setup, but I don't know how to get the python backend working. angular: import { Injectable } from '@angular/core'; import { Observable } from 'rxjs'; import {io} from 'socket.io-client'; @Injectable({ providedIn: 'root' }) export class WebsocketService { constructor() { this.socket = io() } socket: any; readonly uri: string = "ws://localhost:3000" listen(eventName: string) { return new Observable((subscriber) => { this.socket.on(eventName, ((data: unknown) => { subscriber.next(data) })) }) } emit(eventName: string, data:any) { this.socket.emit(eventName,data) } } How to setup python to send some random numbers to the angular client? Best would be with fastapi from fastapi import FastAPI from fastapi_socketio import SocketManager import uvicorn import random app = FastAPI() sio = SocketManager(app=app) ....no clue here async def rndm(): while True: print(random.randint(1, 100)) <- emit some integers to client instead of printing await asyncio.sleep(1) uvicorn.run("fastapitest:app", host='0.0.0.0', port=8000,reload=True)
[ "You can use the send method provided by SocketManager to send messages to the client.\nHere is an example of how you can modify your code to send random numbers to the client:\nfrom fastapi import FastAPI\nfrom fastapi_socketio import SocketManager\nimport uvicorn\nimport random\n\napp = FastAPI()\nsio = SocketManager(app=app)\n\n@sio.on('connect')\nasync def connected(sid):\n while True:\n random_number = random.randint(1, 100)\n print(f\"Sending number: {random_number}\")\n await sio.send(sid, random_number)\n await asyncio.sleep(1)\n\nuvicorn.run(\"fastapitest:app\", host='0.0.0.0', port=8000,reload=True)\n\n" ]
[ 0 ]
[]
[]
[ "angular", "python", "socket.io" ]
stackoverflow_0074679118_angular_python_socket.io.txt
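A hedged sketch of a working variant of the server side above. Two details of the answer's code are worth flagging: asyncio is used without being imported, and python-socketio's send() takes the payload as its first argument (the target sid goes in to=), so sio.send(sid, random_number) would not deliver the number. The event name 'random_number' and the background-task structure below are illustrative assumptions; the Angular service from the question would subscribe with listen('random_number').

import asyncio
import random

from fastapi import FastAPI
from fastapi_socketio import SocketManager

app = FastAPI()
sio = SocketManager(app=app)

async def push_random_numbers(sid):
    # Emit a 'random_number' event to this client once per second
    while True:
        await sio.emit('random_number', random.randint(1, 100), to=sid)
        await asyncio.sleep(1)

@sio.on('connect')
async def handle_connect(sid, environ):
    # Start the emitter as a background task so the connect handler returns promptly
    asyncio.create_task(push_random_numbers(sid))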
Q: Count the number of occurrences of a character in a string in Javascript I need to count the number of occurrences of a character in a string. For example, suppose my string contains: var mainStr = "str1,str2,str3,str4"; I want to find the count of comma , character, which is 3. And the count of individual strings after the split along comma, which is 4. I also need to validate that each of the strings i.e str1 or str2 or str3 or str4 should not exceed, say, 15 characters. A: I have updated this answer. I like the idea of using a match better, but it is slower: console.log(("str1,str2,str3,str4".match(/,/g) || []).length); //logs 3 console.log(("str1,str2,str3,str4".match(new RegExp("str", "g")) || []).length); //logs 4 Use a regular expression literal if you know what you are searching for beforehand, if not you can use the RegExp constructor, and pass in the g flag as an argument. match returns null with no results thus the || [] The original answer I made in 2009 is below. It creates an array unnecessarily, but using a split is faster (as of September 2014). I'm ambivalent, if I really needed the speed there would be no question that I would use a split, but I would prefer to use match. Old answer (from 2009): If you're looking for the commas: (mainStr.split(",").length - 1) //3 If you're looking for the str (mainStr.split("str").length - 1) //4 Both in @Lo's answer and in my own silly performance test split comes ahead in speed, at least in Chrome, but again creating the extra array just doesn't seem sane. A: There are at least five ways. The best option, which should also be the fastest (owing to the native RegEx engine) is placed at the top. Method 1 ("this is foo bar".match(/o/g)||[]).length; // returns 2 Method 2 "this is foo bar".split("o").length - 1; // returns 2 Split not recommended as it is resource hungry. It allocates new instances of 'Array' for each match. Don't try it for a >100MB file via FileReader. You can observe the exact resource usage using Chrome's profiler option. Method 3 var stringsearch = "o" ,str = "this is foo bar"; for(var count=-1,index=-2; index != -1; count++,index=str.indexOf(stringsearch,index+1) ); // returns 2 Method 4 Searching for a single character var stringsearch = "o" ,str = "this is foo bar"; for(var i=count=0; i<str.length; count+=+(stringsearch===str[i++])); // returns 2 Method 5 Element mapping and filtering. This is not recommended due to its overall resource preallocation rather than using Pythonian 'generators': var str = "this is foo bar" str.split('').map( function(e,i){ if(e === 'o') return i;} ) .filter(Boolean) //>[9, 10] [9, 10].length // returns 2 Share: I made this gist, with currently 8 methods of character-counting, so we can directly pool and share our ideas - just for fun, and perhaps some interesting benchmarks :) A: Add this function to sting prototype : String.prototype.count=function(c) { var result = 0, i = 0; for(i;i<this.length;i++)if(this[i]==c)result++; return result; }; usage: console.log("strings".count("s")); //2 A: Simply, use the split to find out the number of occurrences of a character in a string. 
mainStr.split(',').length // gives 4 which is the number of strings after splitting using delimiter comma mainStr.split(',').length - 1 // gives 3 which is the count of comma A: A quick Google search got this (from http://www.codecodex.com/wiki/index.php?title=Count_the_number_of_occurrences_of_a_specific_character_in_a_string#JavaScript) String.prototype.count=function(s1) { return (this.length - this.replace(new RegExp(s1,"g"), '').length) / s1.length; } Use it like this: test = 'one,two,three,four' commas = test.count(',') // returns 3 A: You can also rest your string and work with it like an array of elements using Array.prototype.filter() const mainStr = 'str1,str2,str3,str4'; const commas = [...mainStr].filter(l => l === ',').length; console.log(commas); Or Array.prototype.reduce() const mainStr = 'str1,str2,str3,str4'; const commas = [...mainStr].reduce((a, c) => c === ',' ? ++a : a, 0); console.log(commas); A: UPDATE: This might be simple, but it is not the fastest. See benchmarks below. It's amazing that in 13 years, this answer hasn't shown up. Intuitively, it seems like it should be fastest: const s = "The quick brown fox jumps over the lazy dog."; const oCount = s.length - s.replaceAll('o', '').length; If there are only two kinds of character in the string, then this is faster still: const s = "001101001"; const oneCount = s.replaceAll('0', '').length; BENCHMARKS const { performance } = require('node:perf_hooks'); const ITERATIONS = 10000000; const TEST_STRING = "The quick brown fox jumps over the lazy dog."; console.log(ITERATIONS, "iterations"); let sum = 0; // make sure compiler doesn't optimize code out let start = performance.now(); for (let i = 0; i < ITERATIONS; ++i) { sum += TEST_STRING.length - TEST_STRING.replaceAll('o', '').length; } let end = performance.now(); console.log(" replaceAll duration", end - start, `(sum ${sum})`); sum = 0; start = performance.now(); for (let i = 0; i < ITERATIONS; ++i) { sum += TEST_STRING.split('o').length - 1 } end = performance.now(); console.log(" split duration", end - start, `(sum ${sum})`); 10000 iterations replaceAll duration 2.6167500019073486 (sum 40000) split duration 2.0777920186519623 (sum 40000) 100000 iterations replaceAll duration 17.563208997249603 (sum 400000) split duration 8.087624996900558 (sum 400000) 1000000 iterations replaceAll duration 128.71587499976158 (sum 4000000) split duration 64.15841698646545 (sum 4000000) 10000000 iterations replaceAll duration 1223.3415840268135 (sum 40000000) split duration 629.1629169881344 (sum 40000000) A: Here is a similar solution, but it uses Array.prototype.reduce function countCharacters(char, string) { return string.split('').reduce((acc, ch) => ch === char ? acc + 1: acc, 0) } As was mentioned, String.prototype.split works much faster than String.prototype.replace. A: If you are using lodash, the _.countBy method will do this: _.countBy("abcda")['a'] //2 This method also work with array: _.countBy(['ab', 'cd', 'ab'])['ab'] //2 A: ok, an other one with regexp - probably not fast, but short and better readable then others, in my case just '_' to count key.replace(/[^_]/g,'').length just remove everything that does not look like your char but it does not look nice with a string as input A: I have found that the best approach to search for a character in a very large string (that is 1 000 000 characters long, for example) is to use the replace() method. 
window.count_replace = function (str, schar) { return str.length - str.replace(RegExp(schar), '').length; }; You can see yet another JSPerf suite to test this method along with other methods of finding a character in a string. A: Performance of Split vs RegExp var i = 0; var split_start = new Date().getTime(); while (i < 30000) { "1234,453,123,324".split(",").length -1; i++; } var split_end = new Date().getTime(); var split_time = split_end - split_start; i= 0; var reg_start = new Date().getTime(); while (i < 30000) { ("1234,453,123,324".match(/,/g) || []).length; i++; } var reg_end = new Date().getTime(); var reg_time = reg_end - reg_start; alert ('Split Execution time: ' + split_time + "\n" + 'RegExp Execution time: ' + reg_time + "\n"); A: I made a slight improvement on the accepted answer, it allows to check with case-sensitive/case-insensitive matching, and is a method attached to the string object: String.prototype.count = function(lit, cis) { var m = this.toString().match(new RegExp(lit, ((cis) ? "gi" : "g"))); return (m != null) ? m.length : 0; } lit is the string to search for ( such as 'ex' ), and cis is case-insensitivity, defaulted to false, it will allow for choice of case insensitive matches. To search the string 'I love StackOverflow.com' for the lower-case letter 'o', you would use: var amount_of_os = 'I love StackOverflow.com'.count('o'); amount_of_os would be equal to 2. If we were to search the same string again using case-insensitive matching, you would use: var amount_of_os = 'I love StackOverflow.com'.count('o', true); This time, amount_of_os would be equal to 3, since the capital O from the string gets included in the search. A: Easiest way i found out... Example- str = 'mississippi'; function find_occurences(str, char_to_count){ return str.split(char_to_count).length - 1; } find_occurences(str, 'i') //outputs 4 A: I just did a very quick and dirty test on repl.it using Node v7.4. For a single character, the standard for loop is quickest: Some code: // winner! function charCount1(s, c) { let count = 0; c = c.charAt(0); // we save some time here for(let i = 0; i < s.length; ++i) { if(c === s.charAt(i)) { ++count; } } return count; } function charCount2(s, c) { return (s.match(new RegExp(c[0], 'g')) || []).length; } function charCount3(s, c) { let count = 0; for(ch of s) { if(c === ch) { ++count; } } return count; } function perfIt() { const s = 'Hello, World!'; const c = 'o'; console.time('charCount1'); for(let i = 0; i < 10000; i++) { charCount1(s, c); } console.timeEnd('charCount1'); console.time('charCount2'); for(let i = 0; i < 10000; i++) { charCount2(s, c); } console.timeEnd('charCount2'); console.time('charCount3'); for(let i = 0; i < 10000; i++) { charCount2(s, c); } console.timeEnd('charCount3'); } Results from a few runs: perfIt() charCount1: 3.301ms charCount2: 11.652ms charCount3: 174.043ms undefined perfIt() charCount1: 2.110ms charCount2: 11.931ms charCount3: 177.743ms undefined perfIt() charCount1: 2.074ms charCount2: 11.738ms charCount3: 152.611ms undefined perfIt() charCount1: 2.076ms charCount2: 11.685ms charCount3: 154.757ms undefined Update 2021-Feb-10: Fixed typo in repl.it demo Update 2020-Oct-24: Still the case with Node.js 12 (play with it yourself here) A: Here is my solution. Lots of solution already posted before me. But I love to share my view here. 
const mainStr = 'str1,str2,str3,str4'; const commaAndStringCounter = (str) => { const commas = [...str].filter(letter => letter === ',').length; const numOfStr = str.split(',').length; return `Commas: ${commas}, String: ${numOfStr}`; } // Run the code console.log(commaAndStringCounter(mainStr)); // Output: Commas: 3, String: 4 Here you find my REPL A: s = 'dir/dir/dir/dir/' for(i=l=0;i<s.length;i++) if(s[i] == '/') l++ A: I was working on a small project that required a sub-string counter. Searching for the wrong phrases provided me with no results, however after writing my own implementation I have stumbled upon this question. Anyway, here is my way, it is probably slower than most here but might be helpful to someone: function count_letters() { var counter = 0; for (var i = 0; i < input.length; i++) { var index_of_sub = input.indexOf(input_letter, i); if (index_of_sub > -1) { counter++; i = index_of_sub; } } http://jsfiddle.net/5ZzHt/1/ Please let me know if you find this implementation to fail or do not follow some standards! :) UPDATE You may want to substitute: for (var i = 0; i < input.length; i++) { With: for (var i = 0, input_length = input.length; i < input_length; i++) { Interesting read discussing the above: http://www.erichynds.com/blog/javascript-length-property-is-a-stored-value A: UPDATE 06/10/2022 So I ran various perf tests and if your use case allows it, it seems that using split is going to perform the best overall. function countChar(char: string, string: string): number { return string.split(char).length - 1 } countChar('x', 'foo x bar x baz x') I know I am late to the party here but I was rather baffled no one answered this with the most basic of approaches. A large portion of the answers provided by the community for this question are iteration based but all are moving over strings on a per-character basis which is not really efficient. When dealing with a large string that contains thousands of characters walking over each character to get the occurance count can become rather extraneous not to mention a code-smell. The below solutions take advantage of slice, indexOf and the trusted traditional while loop. These approaches prevent us having to walk over each character and will greatly speed up the time it takes to count occurances. These follow similar logic to that you'd find in parsers and lexical analyzers that require string walks. Using with Slice In this approach we are leveraging slice and with every indexOf match we will move our way through the string and eliminate the previous searched potions. Each time we call indexOf the size of the string it searches will be smaller. function countChar (char: string, search: string): number { let num: number = 0; let str: string = search; let pos: number = str.indexOf(char); while(pos > -1) { str = str.slice(pos + 1); pos = str.indexOf(char); num++; } return num; } // Call the function countChar('x', 'foo x bar x baz x') // 3 Using with IndexOf from position Similar to the first approach using slice but instead of augmenting the string we are searching it will leverage the from parameter in indexOf method. function countChar (char: string, str: string): number { let num: number = 0; let pos: number = str.indexOf(char); while(pos > -1) { pos = str.indexOf(char, pos + 1); num++; } return num; } // Call the function countChar('x', 'foo x bar x baz x') // 3 Personally, I go for the second approach over the first, but both are fine and performant when dealing with large strings but also smaller sized ones too. 
A: What about string.split(desiredCharecter).length-1 Example: var str = "hellow how is life"; var len = str.split("h").length-1; will give count 2 for character "h" in the above string; A: The fastest method seems to be via the index operator: function charOccurances (str, char) { for (var c = 0, i = 0, len = str.length; i < len; ++i) { if (str[i] == char) { ++c; } } return c; } console.log( charOccurances('example/path/script.js', '/') ); // 2 Or as a prototype function: String.prototype.charOccurances = function (char) { for (var c = 0, i = 0, len = this.length; i < len; ++i) { if (this[i] == char) { ++c; } } return c; } console.log( 'example/path/script.js'.charOccurances('/') ); // 2 A: function len(text,char){ return text.innerText.split(string).length } console.log(len("str1,str2,str3,str4",",")) This is a very short function. A: The following uses a regular expression to test the length. testex ensures you don't have 16 or greater consecutive non-comma characters. If it passes the test, then it proceeds to split the string. counting the commas is as simple as counting the tokens minus one. var mainStr = "str1,str2,str3,str4"; var testregex = /([^,]{16,})/g; if (testregex.test(mainStr)) { alert("values must be separated by commas and each may not exceed 15 characters"); } else { var strs = mainStr.split(','); alert("mainStr contains " + strs.length + " substrings separated by commas."); alert("mainStr contains " + (strs.length-1) + " commas."); } A: I'm using Node.js v.6.0.0 and the fastest is the one with index (the 3rd method in Lo Sauer's answer). The second is: function count(s, c) { var n = 0; for (let x of s) { if (x == c) n++; } return n; } A: Here's one just as fast as the split() and the replace methods, which are a tiny bit faster than the regex method (in Chrome and Firefox both). let num = 0; let str = "str1,str2,str3,str4"; //Note: Pre-calculating `.length` is an optimization; //otherwise, it recalculates it every loop iteration. let len = str.length; //Note: Don't use a `for (... of ...)` loop, it's slow! for (let charIndex = 0; charIndex < len; ++charIndex) { if (str[charIndex] === ',') { ++num; } } A: And there is: function character_count(string, char, ptr = 0, count = 0) { while (ptr = string.indexOf(char, ptr) + 1) {count ++} return count } Works with integers too! A: var mainStr = "str1,str2,str3,str4"; var splitStr = mainStr.split(",").length - 1; // subtracting 1 is important! alert(splitStr); Splitting into an array gives us a number of elements, which will always be 1 more than the number of instances of the character. This may not be the most memory efficient, but if your input is always going to be small, this is a straight-forward and easy to understand way to do it. If you need to parse very large strings (greater than a few hundred characters), or if this is in a core loop that processes large volumes of data, I would recommend a different strategy. A: My solution: function countOcurrences(str, value){ var regExp = new RegExp(value, "gi"); return str.match(regExp) ? str.match(regExp).length : 0; } A: The fifth method in Leo Sauers answer fails, if the character is on the beginning of the string. e.g. var needle ='A', haystack = 'AbcAbcAbc'; haystack.split('').map( function(e,i){ if(e === needle) return i;} ) .filter(Boolean).length; will give 2 instead of 3, because the filter funtion Boolean gives false for 0. 
Other possible filter function: haystack.split('').map(function (e, i) { if (e === needle) return i; }).filter(function (item) { return !isNaN(item); }).length; one more answer: function count(string){ const count={} string.split('').forEach(char=>{ count[char] = count[char] ? (count[char]+1) : 1; }) return count } console.log(count("abfsdfsddsfdfdsfdsfdsfda")) A: I know this might be an old question but I have a simple solution for low-level beginners in JavaScript. As a beginner, I could only understand some of the solutions to this question so I used two nested FOR loops to check each character against every other character in the string, incrementing a count variable for each character found that equals that character. I created a new blank object where each property key is a character and the value is how many times each character appeared in the string(count). Example function:- function countAllCharacters(str) { var obj = {}; if(str.length!==0){ for(i=0;i<str.length;i++){ var count = 0; for(j=0;j<str.length;j++){ if(str[i] === str[j]){ count++; } } if(!obj.hasOwnProperty(str[i])){ obj[str[i]] = count; } } } return obj; } A: I believe you will find the below solution to be very short, very fast, able to work with very long strings, able to support multiple character searches, error proof, and able to handle empty string searches. function substring_count(source_str, search_str, index) { source_str += "", search_str += ""; var count = -1, index_inc = Math.max(search_str.length, 1); index = (+index || 0) - index_inc; do { ++count; index = source_str.indexOf(search_str, index + index_inc); } while (~index); return count; } Example usage: console.log(substring_count("Lorem ipsum dolar un sit amet.", "m ")) function substring_count(source_str, search_str, index) { source_str += "", search_str += ""; var count = -1, index_inc = Math.max(search_str.length, 1); index = (+index || 0) - index_inc; do { ++count; index = source_str.indexOf(search_str, index + index_inc); } while (~index); return count; } The above code fixes the major performance bug in Jakub Wawszczyk's that the code keeps on looks for a match even after indexOf says there is none and his version itself is not working because he forgot to give the function input parameters. A: var a = "acvbasbb"; var b= {}; for (let i=0;i<a.length;i++){ if((a.match(new RegExp(a[i], "g"))).length > 1){ b[a[i]]=(a.match(new RegExp(a[i], "g"))).length; } } console.log(b); In javascript you can use above code to get occurrence of a character in a string. A: My solution with ramda js: const testString = 'somestringtotest' const countLetters = R.compose( R.map(R.length), R.groupBy(R.identity), R.split('') ) countLetters(testString) Link to REPL. A: The function takes string str as parameter and counts occurrence of each unique characters in the string. The result comes in key - value pair for each character. 
var charFoundMap = {};//object defined for (var i = 0; i < str.length; i++) { if(!charFoundMap[ str[i] ]) { charFoundMap[ str[i] ]=1; } else charFoundMap[ str[i] ] +=1; //if object does not contain this } return charFoundMap; } A: let str = "aabgrhaab" let charMap = {} for(let char of text) { if(charMap.hasOwnProperty(char)){ charMap[char]++ } else { charMap[char] = 1 } } console.log(charMap); //{a: 4, b: 2, g: 1, r: 1, h: 1} A: There is a very tricky way, but it is in reverse: const sampleStringText = "/john/dashboard/language"; Assume the above sample, for counting the number of forward-slashs you can do like this: console.log( sampleStringText.split('/') - 1 ); So I recommended to use a function for it (TypeScript): const counter = (sentence: string, char: string): number => sentence.split(char) - 1; A: String.prototype.reduce = Array.prototype.reduce; String.prototype.count = function(c) { return this.reduce(((n, x) => n + (x === c ? 1 : 0)), 0) }; const n = "bugs bunny was here".count("b") console.log(n) Similar to the prototype based above, but does not allocate an array for the string. Allocation is the problem of nearly every version above, except the loop variants. This avoids loop code, reusing the browser implemented Array.reduce function.
Count the number of occurrences of a character in a string in Javascript
I need to count the number of occurrences of a character in a string. For example, suppose my string contains: var mainStr = "str1,str2,str3,str4"; I want to find the count of comma , character, which is 3. And the count of individual strings after the split along comma, which is 4. I also need to validate that each of the strings i.e str1 or str2 or str3 or str4 should not exceed, say, 15 characters.
[ "I have updated this answer. I like the idea of using a match better, but it is slower:\n\n\nconsole.log((\"str1,str2,str3,str4\".match(/,/g) || []).length); //logs 3\n\nconsole.log((\"str1,str2,str3,str4\".match(new RegExp(\"str\", \"g\")) || []).length); //logs 4\n\n\n\nUse a regular expression literal if you know what you are searching for beforehand, if not you can use the RegExp constructor, and pass in the g flag as an argument.\nmatch returns null with no results thus the || []\nThe original answer I made in 2009 is below. It creates an array unnecessarily, but using a split is faster (as of September 2014). I'm ambivalent, if I really needed the speed there would be no question that I would use a split, but I would prefer to use match.\nOld answer (from 2009):\nIf you're looking for the commas:\n(mainStr.split(\",\").length - 1) //3\n\nIf you're looking for the str\n(mainStr.split(\"str\").length - 1) //4\n\nBoth in @Lo's answer and in my own silly performance test split comes ahead in speed, at least in Chrome, but again creating the extra array just doesn't seem sane.\n", "There are at least five ways. The best option, which should also be the fastest (owing to the native RegEx engine) is placed at the top.\nMethod 1\n(\"this is foo bar\".match(/o/g)||[]).length;\n// returns 2\n\nMethod 2\n\"this is foo bar\".split(\"o\").length - 1;\n// returns 2\n\nSplit not recommended as it is resource hungry. It allocates new instances of 'Array' for each match. Don't try it for a >100MB file via FileReader. You can observe the exact resource usage using Chrome's profiler option.\nMethod 3\n var stringsearch = \"o\"\n ,str = \"this is foo bar\";\n for(var count=-1,index=-2; index != -1; count++,index=str.indexOf(stringsearch,index+1) );\n// returns 2\n\nMethod 4\nSearching for a single character\n var stringsearch = \"o\"\n ,str = \"this is foo bar\";\n for(var i=count=0; i<str.length; count+=+(stringsearch===str[i++]));\n // returns 2\n\nMethod 5\nElement mapping and filtering. 
This is not recommended due to its overall resource preallocation rather than using Pythonian 'generators':\n var str = \"this is foo bar\"\n str.split('').map( function(e,i){ if(e === 'o') return i;} )\n .filter(Boolean)\n //>[9, 10]\n [9, 10].length\n // returns 2\n\nShare:\nI made this gist, with currently 8 methods of character-counting, so we can directly pool and share our ideas - just for fun, and perhaps some interesting benchmarks :)\n", "Add this function to sting prototype :\nString.prototype.count=function(c) { \n var result = 0, i = 0;\n for(i;i<this.length;i++)if(this[i]==c)result++;\n return result;\n};\n\nusage:\nconsole.log(\"strings\".count(\"s\")); //2\n\n", "Simply, use the split to find out the number of occurrences of a character in a string.\nmainStr.split(',').length // gives 4 which is the number of strings after splitting using delimiter comma\nmainStr.split(',').length - 1 // gives 3 which is the count of comma\n", "A quick Google search got this (from http://www.codecodex.com/wiki/index.php?title=Count_the_number_of_occurrences_of_a_specific_character_in_a_string#JavaScript)\nString.prototype.count=function(s1) { \n return (this.length - this.replace(new RegExp(s1,\"g\"), '').length) / s1.length;\n}\n\nUse it like this:\ntest = 'one,two,three,four'\ncommas = test.count(',') // returns 3\n\n", "You can also rest your string and work with it like an array of elements using\n\nArray.prototype.filter()\n\n\n\nconst mainStr = 'str1,str2,str3,str4';\r\nconst commas = [...mainStr].filter(l => l === ',').length;\r\n\r\nconsole.log(commas);\n\n\n\nOr \n\nArray.prototype.reduce()\n\n\n\nconst mainStr = 'str1,str2,str3,str4';\r\nconst commas = [...mainStr].reduce((a, c) => c === ',' ? ++a : a, 0);\r\n\r\nconsole.log(commas);\n\n\n\n", "UPDATE: This might be simple, but it is not the fastest. See benchmarks below.\n\nIt's amazing that in 13 years, this answer hasn't shown up. 
Intuitively, it seems like it should be fastest:\nconst s = \"The quick brown fox jumps over the lazy dog.\";\nconst oCount = s.length - s.replaceAll('o', '').length;\n\nIf there are only two kinds of character in the string, then this is faster still:\n\nconst s = \"001101001\";\nconst oneCount = s.replaceAll('0', '').length;\n\n\nBENCHMARKS\nconst { performance } = require('node:perf_hooks');\n\nconst ITERATIONS = 10000000;\nconst TEST_STRING = \"The quick brown fox jumps over the lazy dog.\";\n\nconsole.log(ITERATIONS, \"iterations\");\n\nlet sum = 0; // make sure compiler doesn't optimize code out\nlet start = performance.now();\nfor (let i = 0; i < ITERATIONS; ++i) {\n sum += TEST_STRING.length - TEST_STRING.replaceAll('o', '').length;\n}\nlet end = performance.now();\nconsole.log(\" replaceAll duration\", end - start, `(sum ${sum})`);\n\nsum = 0;\nstart = performance.now();\nfor (let i = 0; i < ITERATIONS; ++i) {\n sum += TEST_STRING.split('o').length - 1\n}\nend = performance.now();\nconsole.log(\" split duration\", end - start, `(sum ${sum})`);\n\n10000 iterations\n replaceAll duration 2.6167500019073486 (sum 40000)\n split duration 2.0777920186519623 (sum 40000)\n100000 iterations\n replaceAll duration 17.563208997249603 (sum 400000)\n split duration 8.087624996900558 (sum 400000)\n1000000 iterations\n replaceAll duration 128.71587499976158 (sum 4000000)\n split duration 64.15841698646545 (sum 4000000)\n10000000 iterations\n replaceAll duration 1223.3415840268135 (sum 40000000)\n split duration 629.1629169881344 (sum 40000000)\n\n", "Here is a similar solution, but it uses Array.prototype.reduce \nfunction countCharacters(char, string) {\n return string.split('').reduce((acc, ch) => ch === char ? acc + 1: acc, 0)\n}\n\nAs was mentioned, String.prototype.split works much faster than String.prototype.replace.\n", "If you are using lodash, the _.countBy method will do this:\n_.countBy(\"abcda\")['a'] //2\n\nThis method also work with array:\n_.countBy(['ab', 'cd', 'ab'])['ab'] //2\n\n", "ok, an other one with regexp - probably not fast, but short and better readable then others, in my case just '_' to count\nkey.replace(/[^_]/g,'').length\n\njust remove everything that does not look like your char \nbut it does not look nice with a string as input\n", "I have found that the best approach to search for a character in a very large string (that is 1 000 000 characters long, for example) is to use the replace() method.\nwindow.count_replace = function (str, schar) {\n return str.length - str.replace(RegExp(schar), '').length;\n};\n\nYou can see yet another JSPerf suite to test this method along with other methods of finding a character in a string.\n", "Performance of Split vs RegExp\n\n\nvar i = 0;\r\n\r\nvar split_start = new Date().getTime();\r\nwhile (i < 30000) {\r\n \"1234,453,123,324\".split(\",\").length -1;\r\n i++;\r\n}\r\nvar split_end = new Date().getTime();\r\nvar split_time = split_end - split_start;\r\n\r\n\r\ni= 0;\r\nvar reg_start = new Date().getTime();\r\nwhile (i < 30000) {\r\n (\"1234,453,123,324\".match(/,/g) || []).length;\r\n i++;\r\n}\r\nvar reg_end = new Date().getTime();\r\nvar reg_time = reg_end - reg_start;\r\n\r\nalert ('Split Execution time: ' + split_time + \"\\n\" + 'RegExp Execution time: ' + reg_time + \"\\n\");\n\n\n\n", "I made a slight improvement on the accepted answer, it allows to check with case-sensitive/case-insensitive matching, and is a method attached to the string object:\nString.prototype.count = function(lit, cis) {\n var m = 
this.toString().match(new RegExp(lit, ((cis) ? \"gi\" : \"g\")));\n return (m != null) ? m.length : 0;\n}\n\nlit is the string to search for ( such as 'ex' ), and cis is case-insensitivity, defaulted to false, it will allow for choice of case insensitive matches.\n\nTo search the string 'I love StackOverflow.com' for the lower-case letter 'o', you would use:\nvar amount_of_os = 'I love StackOverflow.com'.count('o');\n\namount_of_os would be equal to 2.\n\nIf we were to search the same string again using case-insensitive matching, you would use:\nvar amount_of_os = 'I love StackOverflow.com'.count('o', true);\n\nThis time, amount_of_os would be equal to 3, since the capital O from the string gets included in the search.\n", "Easiest way i found out...\nExample-\nstr = 'mississippi';\n\nfunction find_occurences(str, char_to_count){\n return str.split(char_to_count).length - 1;\n}\n\nfind_occurences(str, 'i') //outputs 4\n\n", "I just did a very quick and dirty test on repl.it using Node v7.4. For a single character, the standard for loop is quickest:\nSome code:\n// winner!\nfunction charCount1(s, c) {\n let count = 0;\n c = c.charAt(0); // we save some time here\n for(let i = 0; i < s.length; ++i) {\n if(c === s.charAt(i)) {\n ++count;\n }\n }\n return count;\n}\n\nfunction charCount2(s, c) {\n return (s.match(new RegExp(c[0], 'g')) || []).length;\n}\n\nfunction charCount3(s, c) {\n let count = 0;\n for(ch of s) {\n if(c === ch) {\n ++count;\n }\n }\n return count;\n}\n\nfunction perfIt() {\n const s = 'Hello, World!';\n const c = 'o';\n\n console.time('charCount1');\n for(let i = 0; i < 10000; i++) {\n charCount1(s, c);\n }\n console.timeEnd('charCount1');\n \n console.time('charCount2');\n for(let i = 0; i < 10000; i++) {\n charCount2(s, c);\n }\n console.timeEnd('charCount2');\n \n console.time('charCount3');\n for(let i = 0; i < 10000; i++) {\n charCount2(s, c);\n }\n console.timeEnd('charCount3');\n}\n\nResults from a few runs:\nperfIt()\ncharCount1: 3.301ms\ncharCount2: 11.652ms\ncharCount3: 174.043ms\nundefined\n\nperfIt()\ncharCount1: 2.110ms\ncharCount2: 11.931ms\ncharCount3: 177.743ms\nundefined\n\nperfIt()\ncharCount1: 2.074ms\ncharCount2: 11.738ms\ncharCount3: 152.611ms\nundefined\n\nperfIt()\ncharCount1: 2.076ms\ncharCount2: 11.685ms\ncharCount3: 154.757ms\nundefined\n\nUpdate 2021-Feb-10: Fixed typo in repl.it demo\nUpdate 2020-Oct-24: Still the case with Node.js 12 (play with it yourself here)\n", "Here is my solution. Lots of solution already posted before me. But I love to share my view here.\nconst mainStr = 'str1,str2,str3,str4';\n\nconst commaAndStringCounter = (str) => {\n const commas = [...str].filter(letter => letter === ',').length;\n const numOfStr = str.split(',').length;\n\n return `Commas: ${commas}, String: ${numOfStr}`;\n}\n\n// Run the code\nconsole.log(commaAndStringCounter(mainStr)); // Output: Commas: 3, String: 4\n\nHere you find my REPL\n", "s = 'dir/dir/dir/dir/'\nfor(i=l=0;i<s.length;i++)\nif(s[i] == '/')\nl++\n\n", "I was working on a small project that required a sub-string counter. Searching for the wrong phrases provided me with no results, however after writing my own implementation I have stumbled upon this question. 
Anyway, here is my way, it is probably slower than most here but might be helpful to someone:\nfunction count_letters() {\nvar counter = 0;\n\nfor (var i = 0; i < input.length; i++) {\n var index_of_sub = input.indexOf(input_letter, i);\n\n if (index_of_sub > -1) {\n counter++;\n i = index_of_sub;\n }\n}\n\nhttp://jsfiddle.net/5ZzHt/1/\nPlease let me know if you find this implementation to fail or do not follow some standards! :)\nUPDATE\nYou may want to substitute:\n for (var i = 0; i < input.length; i++) {\n\nWith:\nfor (var i = 0, input_length = input.length; i < input_length; i++) {\n\nInteresting read discussing the above:\nhttp://www.erichynds.com/blog/javascript-length-property-is-a-stored-value\n", "UPDATE 06/10/2022\nSo I ran various perf tests and if your use case allows it, it seems that using split is going to perform the best overall.\n\nfunction countChar(char: string, string: string): number {\n\n return string.split(char).length - 1\n\n}\n\ncountChar('x', 'foo x bar x baz x')\n\n\n\nI know I am late to the party here but I was rather baffled no one answered this with the most basic of approaches. A large portion of the answers provided by the community for this question are iteration based but all are moving over strings on a per-character basis which is not really efficient.\nWhen dealing with a large string that contains thousands of characters walking over each character to get the occurance count can become rather extraneous not to mention a code-smell. The below solutions take advantage of slice, indexOf and the trusted traditional while loop. These approaches prevent us having to walk over each character and will greatly speed up the time it takes to count occurances. These follow similar logic to that you'd find in parsers and lexical analyzers that require string walks.\nUsing with Slice\nIn this approach we are leveraging slice and with every indexOf match we will move our way through the string and eliminate the previous searched potions. 
Each time we call indexOf the size of the string it searches will be smaller.\nfunction countChar (char: string, search: string): number {\n \n let num: number = 0;\n let str: string = search;\n let pos: number = str.indexOf(char);\n \n while(pos > -1) {\n str = str.slice(pos + 1);\n pos = str.indexOf(char);\n num++;\n }\n\n return num;\n\n}\n\n// Call the function\ncountChar('x', 'foo x bar x baz x') // 3\n\n\nUsing with IndexOf from position\nSimilar to the first approach using slice but instead of augmenting the string we are searching it will leverage the from parameter in indexOf method.\nfunction countChar (char: string, str: string): number {\n \n let num: number = 0;\n let pos: number = str.indexOf(char);\n \n while(pos > -1) {\n pos = str.indexOf(char, pos + 1);\n num++;\n }\n\n return num;\n\n}\n\n// Call the function\ncountChar('x', 'foo x bar x baz x') // 3\n\n\nPersonally, I go for the second approach over the first, but both are fine and performant when dealing with large strings but also smaller sized ones too.\n", "What about string.split(desiredCharecter).length-1\nExample:\nvar str = \"hellow how is life\";\nvar len = str.split(\"h\").length-1; will give count 2 for character \"h\" in the above string;\n", "The fastest method seems to be via the index operator:\n\n\nfunction charOccurances (str, char)\r\n{\r\n for (var c = 0, i = 0, len = str.length; i < len; ++i)\r\n {\r\n if (str[i] == char)\r\n {\r\n ++c;\r\n }\r\n }\r\n return c;\r\n}\r\n\r\nconsole.log( charOccurances('example/path/script.js', '/') ); // 2\n\n\n\nOr as a prototype function:\n\n\nString.prototype.charOccurances = function (char)\r\n{\r\n for (var c = 0, i = 0, len = this.length; i < len; ++i)\r\n {\r\n if (this[i] == char)\r\n {\r\n ++c;\r\n }\r\n }\r\n return c;\r\n}\r\n\r\nconsole.log( 'example/path/script.js'.charOccurances('/') ); // 2\n\n\n\n", "function len(text,char){\n\nreturn text.innerText.split(string).length\n}\n\nconsole.log(len(\"str1,str2,str3,str4\",\",\"))\n\nThis is a very short function.\n", "The following uses a regular expression to test the length. testex ensures you don't have 16 or greater consecutive non-comma characters. If it passes the test, then it proceeds to split the string. counting the commas is as simple as counting the tokens minus one.\nvar mainStr = \"str1,str2,str3,str4\";\nvar testregex = /([^,]{16,})/g;\nif (testregex.test(mainStr)) {\n alert(\"values must be separated by commas and each may not exceed 15 characters\");\n} else {\n var strs = mainStr.split(',');\n alert(\"mainStr contains \" + strs.length + \" substrings separated by commas.\");\n alert(\"mainStr contains \" + (strs.length-1) + \" commas.\");\n}\n\n", "I'm using Node.js v.6.0.0 and the fastest is the one with index (the 3rd method in Lo Sauer's answer).\nThe second is:\n\n\nfunction count(s, c) {\r\n var n = 0;\r\n for (let x of s) {\r\n if (x == c)\r\n n++;\r\n }\r\n return n;\r\n}\n\n\n\n", "Here's one just as fast as the split() and the replace methods, which are a tiny bit faster than the regex method (in Chrome and Firefox both).\nlet num = 0;\nlet str = \"str1,str2,str3,str4\";\n//Note: Pre-calculating `.length` is an optimization;\n//otherwise, it recalculates it every loop iteration.\nlet len = str.length;\n//Note: Don't use a `for (... 
of ...)` loop, it's slow!\nfor (let charIndex = 0; charIndex < len; ++charIndex) {\n if (str[charIndex] === ',') {\n ++num;\n }\n}\n\n", "And there is:\nfunction character_count(string, char, ptr = 0, count = 0) {\n while (ptr = string.indexOf(char, ptr) + 1) {count ++}\n return count\n}\n\nWorks with integers too!\n", "\n\nvar mainStr = \"str1,str2,str3,str4\";\nvar splitStr = mainStr.split(\",\").length - 1; // subtracting 1 is important!\nalert(splitStr);\n\n\n\nSplitting into an array gives us a number of elements, which will always be 1 more than the number of instances of the character. This may not be the most memory efficient, but if your input is always going to be small, this is a straight-forward and easy to understand way to do it.\nIf you need to parse very large strings (greater than a few hundred characters), or if this is in a core loop that processes large volumes of data, I would recommend a different strategy.\n", "My solution:\nfunction countOcurrences(str, value){\n var regExp = new RegExp(value, \"gi\");\n return str.match(regExp) ? str.match(regExp).length : 0; \n}\n\n", "The fifth method in Leo Sauers answer fails, if the character is on the beginning of the string.\ne.g.\nvar needle ='A',\n haystack = 'AbcAbcAbc';\n\nhaystack.split('').map( function(e,i){ if(e === needle) return i;} )\n .filter(Boolean).length;\n\nwill give 2 instead of 3, because the filter funtion Boolean gives false for 0.\nOther possible filter function:\nhaystack.split('').map(function (e, i) {\n if (e === needle) return i;\n}).filter(function (item) {\n return !isNaN(item);\n}).length;\n\none more answer:\nfunction count(string){\n const count={}\n \n string.split('').forEach(char=>{\n count[char] = count[char] ? (count[char]+1) : 1;\n })\n \n return count\n}\n\nconsole.log(count(\"abfsdfsddsfdfdsfdsfdsfda\"))\n\n", "I know this might be an old question but I have a simple solution for low-level beginners in JavaScript. 
\nAs a beginner, I could only understand some of the solutions to this question so I used two nested FOR loops to check each character against every other character in the string, incrementing a count variable for each character found that equals that character.\nI created a new blank object where each property key is a character and the value is how many times each character appeared in the string(count).\nExample function:-\nfunction countAllCharacters(str) {\n var obj = {};\n if(str.length!==0){\n for(i=0;i<str.length;i++){\n var count = 0;\n for(j=0;j<str.length;j++){\n if(str[i] === str[j]){\n count++;\n }\n }\n if(!obj.hasOwnProperty(str[i])){\n obj[str[i]] = count;\n }\n }\n }\n return obj;\n}\n\n", "I believe you will find the below solution to be very short, very fast, able to work with very long strings, able to support multiple character searches, error proof, and able to handle empty string searches.\nfunction substring_count(source_str, search_str, index) {\n source_str += \"\", search_str += \"\";\n var count = -1, index_inc = Math.max(search_str.length, 1);\n index = (+index || 0) - index_inc;\n do {\n ++count;\n index = source_str.indexOf(search_str, index + index_inc);\n } while (~index);\n return count;\n}\n\nExample usage:\n\n\nconsole.log(substring_count(\"Lorem ipsum dolar un sit amet.\", \"m \"))\r\n\r\nfunction substring_count(source_str, search_str, index) {\r\n source_str += \"\", search_str += \"\";\r\n var count = -1, index_inc = Math.max(search_str.length, 1);\r\n index = (+index || 0) - index_inc;\r\n do {\r\n ++count;\r\n index = source_str.indexOf(search_str, index + index_inc);\r\n } while (~index);\r\n return count;\r\n}\n\n\n\nThe above code fixes the major performance bug in Jakub Wawszczyk's that the code keeps on looks for a match even after indexOf says there is none and his version itself is not working because he forgot to give the function input parameters.\n", "var a = \"acvbasbb\";\nvar b= {};\nfor (let i=0;i<a.length;i++){\n if((a.match(new RegExp(a[i], \"g\"))).length > 1){\n b[a[i]]=(a.match(new RegExp(a[i], \"g\"))).length;\n }\n}\nconsole.log(b);\n\nIn javascript you can use above code to get occurrence of a character in a string. \n", "My solution with ramda js:\nconst testString = 'somestringtotest'\n\nconst countLetters = R.compose(\n R.map(R.length),\n R.groupBy(R.identity),\n R.split('')\n)\n\ncountLetters(testString)\n\nLink to REPL.\n", "The function takes string str as parameter and counts occurrence of each unique characters in the string. The result comes in key - value pair for each character. 
\nvar charFoundMap = {};//object defined\n for (var i = 0; i < str.length; i++) {\n\n if(!charFoundMap[ str[i] ]) {\n charFoundMap[ str[i] ]=1;\n } \n else\n charFoundMap[ str[i] ] +=1;\n //if object does not contain this \n }\n return charFoundMap;\n\n} \n\n", "let str = \"aabgrhaab\"\nlet charMap = {}\n\nfor(let char of text) {\n if(charMap.hasOwnProperty(char)){\n charMap[char]++\n } else {\n charMap[char] = 1\n }\n}\n\nconsole.log(charMap); //{a: 4, b: 2, g: 1, r: 1, h: 1}\n", "There is a very tricky way, but it is in reverse:\nconst sampleStringText = \"/john/dashboard/language\";\n\nAssume the above sample, for counting the number of forward-slashs you can do like this:\nconsole.log( sampleStringText.split('/') - 1 );\n\nSo I recommended to use a function for it (TypeScript):\nconst counter = (sentence: string, char: string): number => sentence.split(char) - 1;\n\n", "\n\nString.prototype.reduce = Array.prototype.reduce;\n\nString.prototype.count = function(c) {\n return this.reduce(((n, x) => n + (x === c ? 1 : 0)), 0)\n};\n\nconst n = \"bugs bunny was here\".count(\"b\")\nconsole.log(n)\n\n\n\nSimilar to the prototype based above, but does not allocate an array for the string. Allocation is the problem of nearly every version above, except the loop variants. This avoids loop code, reusing the browser implemented Array.reduce function.\n" ]
[ 1015, 284, 25, 25, 14, 12, 12, 10, 8, 7, 6, 5, 4, 4, 4, 4, 3, 3, 3, 2, 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "\n\nvar i = 0;\r\n\r\nvar split_start = new Date().getTime();\r\nwhile (i < 30000) {\r\n \"1234,453,123,324\".split(\",\").length -1;\r\n i++;\r\n}\r\nvar split_end = new Date().getTime();\r\nvar split_time = split_end - split_start;\r\n\r\n\r\ni= 0;\r\nvar reg_start = new Date().getTime();\r\nwhile (i < 30000) {\r\n (\"1234,453,123,324\".match(/,/g) || []).length;\r\n i++;\r\n}\r\nvar reg_end = new Date().getTime();\r\nvar reg_time = reg_end - reg_start;\r\n\r\nalert ('Split Execution time: ' + split_time + \"\\n\" + 'RegExp Execution time: ' + reg_time + \"\\n\");\n\n\n\n", "This below is the simplest logic, which is very easy to understand\n //Demo string with repeat char \n let str = \"Coffee\"\n //Splitted the str into an char array for looping\n let strArr = str.split(\"\")\n //This below is the final object which holds the result\n let obj = {};\n //This loop will count char (You can also use traditional one for loop)\n strArr.forEach((value,index)=>{\n //If the char exists in the object it will simple increase its value\n if(obj[value] != undefined)\n {\n obj[value] = parseInt(obj[value]) + 1;\n }//else it will add the new one with initializing 1\n else{\n obj[value] =1;\n } \n });\n\n console.log(\"Char with Count:\",JSON.stringify(obj)); //Char with Count:{\"C\":1,\"o\":1,\"f\":2,\"e\":2}\n\n" ]
[ -1, -1 ]
[ "javascript", "string" ]
stackoverflow_0000881085_javascript_string.txt
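One pitfall the regex-based answers above share is worth a short sketch: new RegExp(search, 'g') misbehaves when the character being counted is a regex metacharacter such as '.', '+' or '('. A hedged helper that escapes the search string first (the character class below covers the usual metacharacters):

function countOccurrences(str, search) {
  // Escape regex metacharacters so '.', '+', '(' and friends are counted literally
  const escaped = search.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return (str.match(new RegExp(escaped, 'g')) || []).length;
}

console.log(countOccurrences('str1,str2,str3,str4', ',')); // 3
console.log(countOccurrences('1.2.3.4', '.'));             // 3, not 7 as the unescaped regex would report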
Q: Filter QuerySet from a given list of indexes I have a list of indexes I want to extract from another queryset. >>> allLocation = loc.objects.all() >>> allLocation <QuerySet [<loc: loc object (1)>, <loc: loc object (2)>, <loc: loc object (3)>, <loc: loc object (4)>, <loc: loc object (5)>]> >>> UserIndex = [0,3,4] >>> >>> allLocation[UserIndex[1]] <loc: loc object (4)> >>> filteredUser = ? I can query a single item, but I want to filter all of the indexed items from allLocation given the index numbers in the list ( UserIndex ) and store them in filteredUser A: So using ChatGPT I found the answer I was looking for filteredUser = allLocation.filter(id__in=[allLocation[i].id for i in UserIndex])
Filter QuerySet from a given list of indexes
I have a list of indexes I want to extract from another queryset. >>> allLocation = loc.objects.all() >>> allLocation <QuerySet [<loc: loc object (1)>, <loc: loc object (2)>, <loc: loc object (3)>, <loc: loc object (4)>, <loc: loc object (5)>]> >>> UserIndex = [0,3,4] >>> >>> allLocation[UserIndex[1]] <loc: loc object (4)> >>> filteredUser = ? I can query a single item, but I want to filter all of the indexed items from allLocation given the index numbers in the list ( UserIndex ) and store them in filteredUser
[ "So using chatGPT I found the ans I was looking for\nfilteredUser = allLocation.filter(id__in=[allLocation[i].id for i in UserIndex]) \n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_queryset", "python" ]
stackoverflow_0074348329_django_django_models_django_queryset_python.txt
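A hedged note on the self-answer above: each allLocation[i] inside the list comprehension can trigger its own LIMIT/OFFSET query (unless the queryset has already been evaluated and cached) before the final filter runs. A sketch that evaluates the queryset once instead; model and variable names follow the question, and whether the extra round trips matter is an assumption that depends on table size.

all_location = list(loc.objects.all())   # evaluate the queryset once
user_index = [0, 3, 4]

# Plain Python objects at the wanted positions
filtered_user = [all_location[i] for i in user_index]

# Or, if a QuerySet is needed for further chaining, filter on the collected primary keys
wanted_ids = [obj.id for obj in filtered_user]
filtered_user_qs = loc.objects.filter(id__in=wanted_ids)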
Q: How to add several images to ggplot 2 using magick? I am very new to R and am having difficulties adding several images to my plot. Adding a single picture works perfectly fine using the following code: `add_image_centre <- function(plot_path, image_path) { fig <- image_read(plot_path) fig <- image_resize(fig, "1000x1000") img <- image_read(image_path) img <- image_scale(img, "62x85") image_composite(fig, img, offset = "+387+442") } imagepl <- add_image_centre(plot_path = ".png", image_path = ".png") imagepl image_write(imagepl, ".png") ` How can I add several images this way? I have tried copying the code, but then it just changes to the last picture added and removes the first one. A: Briefly You can combine fig and img side-by-side as follows: image_append( c( fig, img )) Reference See the section, Combining, at https://rdrr.io/cran/magick/f/vignettes/intro.Rmd, also shown below: require( magick ) bigdata <- image_read('https://jeroen.github.io/images/bigdata.jpg') frink <- image_read("https://jeroen.github.io/images/frink.png") # Appending means simply putting the frames next to each other: image_append(image_scale(img, "x200")) # Use stack = TRUE to position them on top of each other: image_append(image_scale(img, "100"), stack = TRUE) # Composing allows for combining two images on a specific position: # It also and flattens the image, losing information about which pixel came from # which layer. bigdatafrink <- image_scale( image_rotate( image_background( frink, "none" ), 300 ) , "x200" ) image_composite( image_scale( bigdata, "x400" ) , bigdatafrink , offset = "+180+100" )
How to add several images to ggplot 2 using magick?
I am very new to R and am having difficulties adding several images to my plot. Adding a single picture works perfectly fine using the following code: `add_image_centre <- function(plot_path, image_path) { fig <- image_read(plot_path) fig <- image_resize(fig, "1000x1000") img <- image_read(image_path) img <- image_scale(img, "62x85") image_composite(fig, img, offset = "+387+442") } imagepl <- add_image_centre(plot_path = ".png", image_path = ".png") imagepl image_write(imagepl, ".png") ` How can I add several images this way? I have tried copying the code, but then it just changes to the last picture added and removes the first one.
[ "Briefly\nYou can combine fig and img side-by-side as follows:\nimage_append( c( fig, img ))\n\nReference\nSee the section, Combining, at https://rdrr.io/cran/magick/f/vignettes/intro.Rmd, also shown below:\nrequire( magick )\n\nbigdata <- image_read('https://jeroen.github.io/images/bigdata.jpg')\nfrink <- image_read(\"https://jeroen.github.io/images/frink.png\")\n\n# Appending means simply putting the frames next to each other:\nimage_append(image_scale(img, \"x200\"))\n\n# Use stack = TRUE to position them on top of each other:\nimage_append(image_scale(img, \"100\"), stack = TRUE)\n\n# Composing allows for combining two images on a specific position:\n# It also and flattens the image, losing information about which pixel came from\n# which layer.\n\nbigdatafrink <- image_scale(\n image_rotate(\n image_background( frink, \"none\" ), 300\n )\n , \"x200\"\n)\n\nimage_composite(\n image_scale( bigdata, \"x400\" )\n , bigdatafrink\n , offset = \"+180+100\"\n)\n\n\n" ]
[ 0 ]
[]
[]
[ "ggplot2", "imagemagick", "r" ]
stackoverflow_0074592311_ggplot2_imagemagick_r.txt
Q: Flutter: RangeError (index): Invalid value: Range is empty My app is working fine, no problem with it. I have: var i (as index) = 0 that I assign to the first data item in a list which is empty at the moment so here is why the error appears. I either need somehow to hide the error or a method to fix it. child: FutureBuilder<List?>( future: read( // "SELECT ProductSeriesDescr FROM ScanRest WHERE ProductStation = '${widget.nrStatie}' AND BoxID = '$cutieScan' and ProductSeriesDescr != '0331120' ANd ProductSeriesDescr != '020322'"), "SELECT ProductAdress, replace(ProductName, '\"', '')ProductName, NeedCount, ScanCount, ProductBarCode, ProductSeriesCount, ProductExpirationDate FROM ScanRest WHERE ProductStation = '${widget.nrStatie}' AND BoxID = '$cutieScan' Order By ProductName ASC"), builder: (context, snapshot) { switch (snapshot.connectionState) { case ConnectionState.waiting: return const Text('Loading....'); default: if (snapshot.hasError) { debugPrint( "call error"); //"call error = ${snapshot.error}" return Text('Error: ${snapshot.error}'); } else { debugPrint( "call success"); // "call success = ${snapshot.data}" List data = snapshot.data ?? []; return Column(children: [ Row( children: [ // ----------------------------------- Product Adress Expanded( child: GestureDetector( onTap: () { setState(() { i++; if (i == snapshot.data!.length) { i = 0; } }); }, child: SizedBox( height: 60, child: Center( child: Text( 'i=' +i.toString() + " " + ((data[i] as Map)['ProductAdress'].toString()), style: const TextStyle(fontSize: 30), ), )), )), // ------------------------------------ NEED COUNT Expanded( child: GestureDetector( onTap: () { _nrProdusController.text = (data[i] as Map)['NeedCount'].toString(); }, child: SizedBox( height: 60, child: Center( child: Text( ((data[i] as Map)['NeedCount'] .toString()), style: TextStyle(fontSize: 35, fontWeight: FontWeight.bold, color: Colors.primaries[Random().nextInt(Colors.primaries.length)]), ), ), ), )), ], ), A: Please use snapshot.hasData() to make sure you have data. You can refer this flutter documentation. I can give very basic sample : FutureBuilder(   builder: (ctx, snapshot) {     if (snapshot.connectionState == ConnectionState.done) {       if (snapshot.hasError) { //have an error       } else if (snapshot.hasData) {        //sure there is data       }     }    ),
Flutter: RangeError (index): Invalid value: Range is empty
My app is working fine, no problem with it. I have: var i (as index) = 0 that I assign to the first data item in a list which is empty at the moment so here is why the error appears. I either need somehow to hide the error or a method to fix it. child: FutureBuilder<List?>( future: read( // "SELECT ProductSeriesDescr FROM ScanRest WHERE ProductStation = '${widget.nrStatie}' AND BoxID = '$cutieScan' and ProductSeriesDescr != '0331120' ANd ProductSeriesDescr != '020322'"), "SELECT ProductAdress, replace(ProductName, '\"', '')ProductName, NeedCount, ScanCount, ProductBarCode, ProductSeriesCount, ProductExpirationDate FROM ScanRest WHERE ProductStation = '${widget.nrStatie}' AND BoxID = '$cutieScan' Order By ProductName ASC"), builder: (context, snapshot) { switch (snapshot.connectionState) { case ConnectionState.waiting: return const Text('Loading....'); default: if (snapshot.hasError) { debugPrint( "call error"); //"call error = ${snapshot.error}" return Text('Error: ${snapshot.error}'); } else { debugPrint( "call success"); // "call success = ${snapshot.data}" List data = snapshot.data ?? []; return Column(children: [ Row( children: [ // ----------------------------------- Product Adress Expanded( child: GestureDetector( onTap: () { setState(() { i++; if (i == snapshot.data!.length) { i = 0; } }); }, child: SizedBox( height: 60, child: Center( child: Text( 'i=' +i.toString() + " " + ((data[i] as Map)['ProductAdress'].toString()), style: const TextStyle(fontSize: 30), ), )), )), // ------------------------------------ NEED COUNT Expanded( child: GestureDetector( onTap: () { _nrProdusController.text = (data[i] as Map)['NeedCount'].toString(); }, child: SizedBox( height: 60, child: Center( child: Text( ((data[i] as Map)['NeedCount'] .toString()), style: TextStyle(fontSize: 35, fontWeight: FontWeight.bold, color: Colors.primaries[Random().nextInt(Colors.primaries.length)]), ), ), ), )), ], ),
[ "Please use snapshot.hasData() to make sure you have data. You can refer this flutter documentation. I can give very basic sample :\nFutureBuilder(\n  builder: (ctx, snapshot) {\n    if (snapshot.connectionState == ConnectionState.done) {\n      if (snapshot.hasError) {\n //have an error\n      } else if (snapshot.hasData) {\n       //sure there is data\n      }\n    } \n  ),\n\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter", "sql" ]
stackoverflow_0074678833_dart_flutter_sql.txt
Q: python - Run indented code through keyboard shortcuts in Spyder as in RStudio I would like to be able to run an indented block of code in python in the same way I do in R. In particular, if in RStudio I have the following indented block of code: print(seq(from = 1, to = 10, by = 1)) I can place the cursor everywhere (at the beginning of the code, in the middle, at the end) except in a new line below and simply press Cmd+Enter (or Ctrl+Enter) and I can run such code. However, in Spyder 4.2, a similar code like this one: import pandas as pd cars = {'Brand': ['Honda', 'Ford','Audi'], 'Price': [20000, 30000, 40000]} will not run wherever I place the cursor, and I have to select the two lines to create the dataframe and launch the whole selection with Cmd+Enter (I modified the keyboard shortcuts in the preferences of Spyder to run a selection). Any advice on how to run such code without selecting it first? Thanks! A: (Spyder maintainer here) You said Any advice on how to run such code without selecting it first? Yes, you need to use cells for that. You can create a cell by inserting a comment that starts with # %%, like this import pandas as pd # %% cars = {'Brand': ['Honda', 'Ford','Audi'], 'Price': [20000, 30000, 40000]} That will allow you to run the piece of code enclosed by those comments with the keyboard shortcuts Shift + Enter (run current cell and advance to the next one); or Control + Enter (run current cell and stay on it). If that explanation was not clear enough, you can learn more about cells in our docs. A: This would be absolutely fantastic if it worked, but it doesn't. I'm guessing there must be a setting that needs to be changed first that everyone fails to mention
python - Run indented code through keyboard shortcuts in Spyder as in RStudio
I would like to be able to run an indented block of code in python in the same way I do in R. In particular, if in RStudio I have the following indented block of code: print(seq(from = 1, to = 10, by = 1)) I can place the cursor everywhere (at the beginning of the code, in the middle, at the end) except in a new line below and simply press Cmd+Enter (or Ctrl+Enter) and I can run such code. However, in Spyder 4.2, a similar code like this one: import pandas as pd cars = {'Brand': ['Honda', 'Ford','Audi'], 'Price': [20000, 30000, 40000]} will not run wherever I place the cursor, and I have to select the two lines to create the dataframe and launch the whole selection with Cmd+Enter (I modified the keyboard shortcuts in the preferences of Spyder to run a selection). Any advice on how to run such code without selecting it first? Thanks!
[ "(Spyder maintainer here) You said\n\nAny advice on how to run such code without selecting it first?\n\nYes, you need to use cells for that. You can create a cell by inserting a comment that starts with # %%, like this\nimport pandas as pd\n\n# %%\ncars = {'Brand': ['Honda', 'Ford','Audi'],\n 'Price': [20000, 30000, 40000]}\n\nThat will allow you to run the piece of code enclosed by those comments with the keyboard shortcuts Shift + Enter (run current cell and advance to the next one); or Control + Enter (run current cell and stay on it).\nIf that explanation was not clear enough, you can learn more about cells in our docs.\n", "This would be absolutely fantastic if it worked, but it doesn't. I'm guessing there must be a setting that needs to be changed first that everyone fails to mention\n" ]
[ 1, 0 ]
[]
[]
[ "keyboard_shortcuts", "python", "r", "spyder" ]
stackoverflow_0067314850_keyboard_shortcuts_python_r_spyder.txt
Q: How to extract exactly the same word with regexp_extract_all in pyspark I am having some issues in finding the correct regular expression lets say I have this list of keywords: keywords = [' b.o.o', ' a.b.a', ' titi'] (please keep in mind that there is a blank space before any keyword and this list can contain up to 100keywords so I can't to it without a function) and my dataframe df: enter image description here I use the following code to extract the matching words, it works partially because it extract even the words that are not an exact match : keywords = [' b.o.o', ' a.b.a', ' titi'] pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in keywords]) + ')' df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}' ,1))) the output : enter image description here But here is the expected output : enter image description here As we can see, it does extract words that are not exact match, it does not take into account the dot. For example, this code considers awbwa as a match because if we replace w by a dot it will be a match. I also tried pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in [re.escape(x) for x in keywords]]) + ')' to add a backslash before every dot and before the blank space but it doesnt work. Thank you so much for your help (btw I looked everywhere on stackoverflow and I didnt find an answer to this) A: I think you need to add a backslash before the dot in your regular expression pattern to escape it, so it's treated as a literal dot and not a special character that matches any character. In your code, you can try using the re.escape() method from the re module to escape all special characters in the keywords list before joining them in the pattern. Here's an example: import re keywords = [' b.o.o', ' a.b.a', ' titi'] # Escape special characters in the keywords using re.escape() escaped_keywords = [re.escape(keyword) for keyword in keywords] # Join the escaped keywords with '|' as the separator pattern = '(' + '|'.join(escaped_keywords) + ')' # Use the pattern in your regexp_extract_all() call df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}' ,1)")) This should give you the expected output where only exact matches are extracted. A: You can use the \b word boundary metacharacter to match whole words only, and escape the dots with a backslash \. in your regular expression. Here is an example: import pyspark.sql.functions as F keywords = [' b.o.o', ' a.b.a', ' titi'] # Escape dots and add word boundaries pattern = '(' + '|'.join([fr'\b({k.replace(".", "\\.")})\b' for k in keywords]) + ')' df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}' ,1))) This will match b.o.o, a.b.a, and titi as whole words, and will not match substrings like awbwa. A: I finally figure it out, for some reason re.escape doesnt work, the solution was to add [] in between dots. thanks for answering !
How to extract exactly the same word with regexp_extract_all in pyspark
I am having some issues in finding the correct regular expression lets say I have this list of keywords: keywords = [' b.o.o', ' a.b.a', ' titi'] (please keep in mind that there is a blank space before any keyword and this list can contain up to 100keywords so I can't to it without a function) and my dataframe df: enter image description here I use the following code to extract the matching words, it works partially because it extract even the words that are not an exact match : keywords = [' b.o.o', ' a.b.a', ' titi'] pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in keywords]) + ')' df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}' ,1))) the output : enter image description here But here is the expected output : enter image description here As we can see, it does extract words that are not exact match, it does not take into account the dot. For example, this code considers awbwa as a match because if we replace w by a dot it will be a match. I also tried pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in [re.escape(x) for x in keywords]]) + ')' to add a backslash before every dot and before the blank space but it doesnt work. Thank you so much for your help (btw I looked everywhere on stackoverflow and I didnt find an answer to this)
[ "I think you need to add a backslash before the dot in your regular expression pattern to escape it, so it's treated as a literal dot and not a special character that matches any character.\nIn your code, you can try using the re.escape() method from the re module to escape all special characters in the keywords list before joining them in the pattern. Here's an example:\nimport re\n\nkeywords = [' b.o.o', ' a.b.a', ' titi']\n\n# Escape special characters in the keywords using re.escape()\nescaped_keywords = [re.escape(keyword) for keyword in keywords]\n\n# Join the escaped keywords with '|' as the separator\npattern = '(' + '|'.join(escaped_keywords) + ')'\n\n# Use the pattern in your regexp_extract_all() call\ndf.withColumn('words', F.expr(f\"regexp_extract_all(colB, '{pattern}' ,1)\"))\n\n\nThis should give you the expected output where only exact matches are extracted.\n", "You can use the \\b word boundary metacharacter to match whole words only, and escape the dots with a backslash \\. in your regular expression.\nHere is an example:\nimport pyspark.sql.functions as F\n\nkeywords = [' b.o.o', ' a.b.a', ' titi']\n\n# Escape dots and add word boundaries\npattern = '(' + '|'.join([fr'\\b({k.replace(\".\", \"\\\\.\")})\\b' for k in keywords]) + ')'\n\ndf.withColumn('words', F.expr(f\"regexp_extract_all(colB, '{pattern}' ,1)))\n\nThis will match b.o.o, a.b.a, and titi as whole words, and will not match substrings like awbwa.\n", "I finally figure it out, for some reason re.escape doesnt work, the solution was to add [] in between dots. thanks for answering !\n" ]
[ 0, 0, 0 ]
[]
[]
[ "apache_spark", "extract", "pyspark", "python", "regex" ]
stackoverflow_0074671615_apache_spark_extract_pyspark_python_regex.txt
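The last answer above says the fix was to put square brackets around the dots so they stop matching arbitrary characters. A minimal sketch of that idea, assuming a DataFrame df with a string column colB as in the question, could look like this:

import pyspark.sql.functions as F

keywords = [' b.o.o', ' a.b.a', ' titi']

# wrap each literal dot in a character class so '.' no longer matches any character
escaped = [k.replace('.', '[.]') for k in keywords]

# word boundaries kept from the question's original pattern
pattern = '(' + '|'.join([fr'\\b({k})\\b' for k in escaped]) + ')'

df = df.withColumn('words', F.expr(f"regexp_extract_all(colB, '{pattern}', 1)"))

With the dots escaped this way, ' awbwa' no longer matches ' a.b.a', while exact occurrences of the keywords are still extracted.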
Q: My discord bot is not responding to my commands import discord import os client = discord.Client(intents=discord.Intents.default()) @client.event async def on_ready(): print("We have logged in as {0.user}".format(client)) @client.event async def on_message(message): if message.author == client.user: return if message.content.startswith('$hello'): channel = message.channel await channel.send('Hello!') client.run(os.getenv('TOKEN')) I tried to make a discord bot using discord.py. The bot comes online and everything but does not respond to my messages. Can you tell me what is wrong? A: You seem to have an indentation error: async def on_message(message): if message.author == client.user: return if message.content.startswith('$hello'): channel = message.channel await channel.send('Hello!') The last if-statement is never going to be executed. Instead, move it one indentation back such that: async def on_message(message): if message.author == client.user: return if message.content.startswith('$hello'): channel = message.channel await channel.send('Hello!') A: There are a few potential issues with the code that could be causing the Discord bot not to respond to commands. First, the on_message event handler function is indented incorrectly. The if statement that checks if the message starts with '$hello' should be at the same indentation level as the on_message function, not indented further. This is because the return statement that is indented further will cause the function to return immediately, without checking the if statement or sending a response. Here is an example of how the on_message function should be indented: @client.event async def on_message(message): if message.author == client.user: return if message.content.startswith('$hello'): channel = message.channel await channel.send('Hello!') Another potential issue is that the TOKEN environment variable is not being set. The os.getenv function is used to get the value of the TOKEN environment variable, but if this variable is not set, the os.getenv function will return None, and the client.run function will not be able to authenticate the bot. To fix this issue, you can set the TOKEN environment variable to the bot token that you obtained from the Discord Developer Portal. This can be done in a variety of ways, depending on your operating system and the method that you prefer to use.
My discord bot is not responding to my commands
import discord import os client = discord.Client(intents=discord.Intents.default()) @client.event async def on_ready(): print("We have logged in as {0.user}".format(client)) @client.event async def on_message(message): if message.author == client.user: return if message.content.startswith('$hello'): channel = message.channel await channel.send('Hello!') client.run(os.getenv('TOKEN')) I tried to make a discord bot using discord.py. The bot comes online and everything but does not respond to my messages. Can you tell me what is wrong?
[ "You seem to have an indentation error:\nasync def on_message(message):\n if message.author == client.user:\n return\n\n if message.content.startswith('$hello'):\n channel = message.channel\n await channel.send('Hello!')\n\nThe last if-statement is never going to be executed. Instead, move it one indentation back such that:\nasync def on_message(message):\n if message.author == client.user:\n return\n\n if message.content.startswith('$hello'):\n channel = message.channel\n await channel.send('Hello!')\n\n", "There are a few potential issues with the code that could be causing the Discord bot not to respond to commands.\nFirst, the on_message event handler function is indented incorrectly. The if statement that checks if the message starts with '$hello' should be at the same indentation level as the on_message function, not indented further. This is because the return statement that is indented further will cause the function to return immediately, without checking the if statement or sending a response.\nHere is an example of how the on_message function should be indented:\n@client.event\nasync def on_message(message): \nif message.author == client.user:\n return\n\nif message.content.startswith('$hello'):\n channel = message.channel\n await channel.send('Hello!')\n\nAnother potential issue is that the TOKEN environment variable is not being set. The os.getenv function is used to get the value of the TOKEN environment variable, but if this variable is not set, the os.getenv function will return None, and the client.run function will not be able to authenticate the bot.\nTo fix this issue, you can set the TOKEN environment variable to the bot token that you obtained from the Discord Developer Portal. This can be done in a variety of ways, depending on your operating system and the method that you prefer to use.\n" ]
[ 0, 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0074679033_discord.py_python.txt
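One more cause worth checking for the question above, beyond the indentation points raised in the answers: on discord.py 2.x the message_content intent is disabled by default, so a bot created with discord.Intents.default() never receives message text and on_message handlers appear to do nothing even when the code is correct. Whether that applies here depends on the library version the asker is running, but the usual remedy is the sketch below (the intent also has to be enabled for the bot in the Discord Developer Portal):

import discord
import os

intents = discord.Intents.default()
intents.message_content = True  # required to read message text in discord.py 2.x

client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if message.content.startswith('$hello'):
        await message.channel.send('Hello!')

client.run(os.getenv('TOKEN'))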
Q: TypeError: Invalid value for schema path `type` the error occurs while I try to run the code but for my colleague, it does not throw an error up to this point i tried to make syntax changes but did not workout. var mongoose = require("mongoose"); mongoose.connect("mongodb://localhost:27017/blog_demo_2", {useNewUrlParser:true}); //POST schema var postSchema = new mongoose.Schema({ title: String, content: String, }); var Post = mongoose.model("Post", postSchema); //USER schema var userSchema = new mongoose.Schema({ name: String, email: String, posts: [ { type: mongoose.Schema.Types.ObjectID, ref: "Post" } ] }); var User = mongoose.model("User", userSchema); wasif4000:~/workspace/associations (master) $ node references.js /home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:414 throw new TypeError('Invalid value for schema path ' + prefix + key + ''); ^ TypeError: Invalid value for schema path `type` at Schema.add (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:414:13) at new Schema (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:117:10) at Schema.interpretAsType (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:770:29) at Schema.path (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:596:27) at Schema.add (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:437:12) at new Schema (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:117:10) at Object.<anonymous> (/home/ubuntu/workspace/associations/references.js:16:18) at Module._compile (module.js:570:32) at Object.Module._extensions..js (module.js:579:10) at Module.load (module.js:487:32) at tryModuleLoad (module.js:446:12) at Function.Module._load (module.js:438:3) at Module.runMain (module.js:604:10) at run (bootstrap_node.js:389:7) at startup (bootstrap_node.js:149:9) at bootstrap_node.js:504:3 A: the correction is type: mongoose.Schema.Types.Object.Id, and not the below one where ID is uppercase, it should be lowercase type: mongoose.Schema.Types.Object*ID*, A: type: mongoose.Schema.Types.ObjectID - is not correct it must be type: mongoose.Schema.Types.ObjectId Mongoose schema does not recognize ObjectID
TypeError: Invalid value for schema path `type`
the error occurs while I try to run the code but for my colleague, it does not throw an error up to this point i tried to make syntax changes but did not workout. var mongoose = require("mongoose"); mongoose.connect("mongodb://localhost:27017/blog_demo_2", {useNewUrlParser:true}); //POST schema var postSchema = new mongoose.Schema({ title: String, content: String, }); var Post = mongoose.model("Post", postSchema); //USER schema var userSchema = new mongoose.Schema({ name: String, email: String, posts: [ { type: mongoose.Schema.Types.ObjectID, ref: "Post" } ] }); var User = mongoose.model("User", userSchema); wasif4000:~/workspace/associations (master) $ node references.js /home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:414 throw new TypeError('Invalid value for schema path ' + prefix + key + ''); ^ TypeError: Invalid value for schema path `type` at Schema.add (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:414:13) at new Schema (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:117:10) at Schema.interpretAsType (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:770:29) at Schema.path (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:596:27) at Schema.add (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:437:12) at new Schema (/home/ubuntu/workspace/node_modules/mongoose/lib/schema.js:117:10) at Object.<anonymous> (/home/ubuntu/workspace/associations/references.js:16:18) at Module._compile (module.js:570:32) at Object.Module._extensions..js (module.js:579:10) at Module.load (module.js:487:32) at tryModuleLoad (module.js:446:12) at Function.Module._load (module.js:438:3) at Module.runMain (module.js:604:10) at run (bootstrap_node.js:389:7) at startup (bootstrap_node.js:149:9) at bootstrap_node.js:504:3
[ "the correction is\ntype: mongoose.Schema.Types.Object.Id, \n\nand not the below one where ID is uppercase, it should be lowercase\ntype: mongoose.Schema.Types.Object*ID*,\n\n", "type: mongoose.Schema.Types.ObjectID - is not correct\nit must be\ntype: mongoose.Schema.Types.ObjectId\nMongoose schema does not recognize ObjectID\n" ]
[ 1, 0 ]
[ "You have an extra comma in your code, look:\nvar postSchema = new mongoose.Schema({\n title: String,\n content: String, ----> You need to get rid of it\n});\n\nThis should solve the problem.\n" ]
[ -1 ]
[ "cloud9_ide", "javascript", "mongoose", "node.js" ]
stackoverflow_0055809856_cloud9_ide_javascript_mongoose_node.js.txt
Q: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu When I go to run a python code directly through the terminal it gives me this error, I've already tried to reinstall numpy and it didn't work! And I tried to install mlk service returns the same error. Can someone help me ? UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service from . import _distributor_init Traceback (most recent call last): File "c:\Users\teste.user\Desktop\Project-python\teste.py", line 4, in <module> import pandas as pd File "C:\Users\teste.user\Anaconda3\lib\site-packages\pandas\__init__.py", line 16, in <module> raise ImportError( ImportError: Unable to import required dependencies: numpy: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.9 from "C:\Users\teste.user\Anaconda3\python.exe" * The NumPy version is: "1.21.5" and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found. A: Can be solved by resetting package configuration by force reinstall of numpy. conda install numpy --force-reinstall A: I was able to fix it by running the following commands to uninstall and reinstall the packages pip uninstall matplotlib pip uninstall pillow pip uninstall numpy pip install matplotlib pip install pillow pip install numpy A: Thanks for help! I used this: I was able to fix it by running the following commands to uninstall and reinstall the packages pip uninstall matplotlib pip uninstall pillow pip uninstall numpy pip install matplotlib pip install pillow pip install numpy
mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu
When I go to run a python code directly through the terminal it gives me this error, I've already tried to reinstall numpy and it didn't work! And I tried to install mlk service returns the same error. Can someone help me ? UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service from . import _distributor_init Traceback (most recent call last): File "c:\Users\teste.user\Desktop\Project-python\teste.py", line 4, in <module> import pandas as pd File "C:\Users\teste.user\Anaconda3\lib\site-packages\pandas\__init__.py", line 16, in <module> raise ImportError( ImportError: Unable to import required dependencies: numpy: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.9 from "C:\Users\teste.user\Anaconda3\python.exe" * The NumPy version is: "1.21.5" and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
[ "Can be solved by resetting package configuration by force reinstall of numpy.\nconda install numpy --force-reinstall\n\n", "I was able to fix it by running the following commands to uninstall and reinstall the packages\npip uninstall matplotlib\npip uninstall pillow\npip uninstall numpy\npip install matplotlib\npip install pillow\npip install numpy\n\n", "Thanks for help! I used this:\nI was able to fix it by running the following commands to uninstall and reinstall the packages\npip uninstall matplotlib\npip uninstall pillow\npip uninstall numpy\npip install matplotlib\npip install pillow\npip install numpy\n" ]
[ 4, 3, 0 ]
[]
[]
[ "jupyter_notebook", "python" ]
stackoverflow_0072858984_jupyter_notebook_python.txt
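A quick, generic way to confirm that whichever reinstall command from the answers above actually repaired the broken NumPy C-extensions is to import the stack in a fresh interpreter before rerunning the original script:

import numpy as np
import pandas as pd

print("numpy:", np.__version__)    # should print without the DLL load error
print("pandas:", pd.__version__)
np.show_config()                   # shows which BLAS/MKL build is actually linked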
Q: What does EC2 store and why does it even need a storage solution like EBS or Instance Store? If you use EC2 and launch instances, you can add EBS volumes. So a storage option. However, what I still don't understand exactly is why. Why is there or does EC2 even need a storage option like EBS or Instance Store? What does EC2 store anyway? And why it makes sense that there is EBS? I know that EBS volume is persistent block storage and data is not lost after exit, unlike instance store. I just don't really understand what EBS is useful for. For which cases and applications is EBS used? Or does using EBS have more to do with creating snapshots that you can create to cache data and then save it to S3? I've already read a lot and tried to make it understandable somehow, but somehow I can't get any further here. I would be really happy if someone could shed some light on this for me. Thank you already! A: EC2 instances are just virtual machines in the cloud, and like physical machines, they need storage to store the operating system and application data. By default, EC2 instances come with instance store storage, which is temporary storage that is tied to the life of the instance. This means that if the instance is stopped or terminated, the data in the instance store storage will be lost. EBS volumes, on the other hand, are persistent storage options that are separate from the EC2 instance. This means that the data in an EBS volume will persist even if the EC2 instance is stopped or terminated. EBS volumes can be attached to EC2 instances and used as the primary storage for the operating system and application data. EBS volumes are useful for a variety of use cases, including: Storing data that needs to be retained even if the EC2 instance is stopped or terminated. Storing data that needs to be accessed by multiple EC2 instances. Creating snapshots of the EBS volume for data backup and disaster recovery. Migrating data between EC2 instances or across regions. In summary, EBS volumes provide persistent storage for EC2 instances and are used for a variety of data storage and data management tasks.
What does EC2 store and why does it even need a storage solution like EBS or Instance Store?
If you use EC2 and launch instances, you can add EBS volumes. So a storage option. However, what I still don't understand exactly is why. Why is there or does EC2 even need a storage option like EBS or Instance Store? What does EC2 store anyway? And why it makes sense that there is EBS? I know that EBS volume is persistent block storage and data is not lost after exit, unlike instance store. I just don't really understand what EBS is useful for. For which cases and applications is EBS used? Or does using EBS have more to do with creating snapshots that you can create to cache data and then save it to S3? I've already read a lot and tried to make it understandable somehow, but somehow I can't get any further here. I would be really happy if someone could shed some light on this for me. Thank you already!
[ "EC2 instances are just virtual machines in the cloud, and like physical machines, they need storage to store the operating system and application data. By default, EC2 instances come with instance store storage, which is temporary storage that is tied to the life of the instance. This means that if the instance is stopped or terminated, the data in the instance store storage will be lost.\nEBS volumes, on the other hand, are persistent storage options that are separate from the EC2 instance. This means that the data in an EBS volume will persist even if the EC2 instance is stopped or terminated. EBS volumes can be attached to EC2 instances and used as the primary storage for the operating system and application data.\nEBS volumes are useful for a variety of use cases, including:\n\nStoring data that needs to be retained even if the EC2 instance is\nstopped or terminated.\nStoring data that needs to be accessed by multiple EC2 instances.\nCreating snapshots of the EBS volume for data backup and disaster recovery.\nMigrating data between EC2 instances or across regions.\n\nIn summary, EBS volumes provide persistent storage for EC2 instances and are used for a variety of data storage and data management tasks.\n" ]
[ 2 ]
[]
[]
[ "amazon_ebs", "amazon_ec2", "amazon_web_services" ]
stackoverflow_0074678898_amazon_ebs_amazon_ec2_amazon_web_services.txt
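To make the EBS use cases listed above a little more concrete, here is a minimal boto3 sketch that creates a persistent volume, attaches it to a running instance, and snapshots it. The region, availability zone, instance id and device name are placeholders, not values from the question.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# create a 20 GiB gp3 volume; its data outlives any single instance
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")
volume_id = volume["VolumeId"]

# wait until the volume is ready, then attach it to an instance as a block device
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
ec2.attach_volume(VolumeId=volume_id, InstanceId="i-0123456789abcdef0", Device="/dev/sdf")

# a point-in-time snapshot backs the data up independently of the instance
ec2.create_snapshot(VolumeId=volume_id, Description="example backup")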
Q: Gtts library error. I don't know why this error are happening or how to fix them I am tried to convert pdf to an audio file but when ever I run my code I get a bunch error from the gtts liberary. If there is a better liberary to use that does not sound like a robot please let me know the errors are https://pastebin.com/Uwnq1MgS and my code is #Importing Libraries #Importing Google Text to Speech library from gtts import gTTS #Importing PDF reader PyPDF2 import PyPDF2 #Open file Path pdf_File = open('simple.pdf', 'rb') #Create PDF Reader Object pdf_Reader = PyPDF2.PdfFileReader(pdf_File) count = pdf_Reader.numPages # counts number of pages in pdf textList = [] #Extracting text data from each page of the pdf file for i in range(count): try: page = pdf_Reader.getPage(i) textList.append(page.extractText()) except: pass #Converting multiline text to single line text textString = " ".join(textList) print(textString) #Set language to english (en) language = 'en' #Call GTTS myAudio = gTTS(text=textString, lang=language, slow=False) #Save as mp3 file myAudio.save("Audio.mp3") Can anyone help me? I have tried nothing because I could not find anything on this errors. A: It looks like the issue is with the PyPDF2 library. The getPage() method is not able to extract the text from some pages in the PDF file, resulting in an error. One solution could be to use the PyMuPDF library instead, which is a more powerful PDF manipulation library. You can install it using the following command: pip install PyMuPDF You can then use the text() method from the PyMuPDF library to extract the text from each page in the PDF file. Here is an example of how your code could look like using the PyMuPDF library: # Importing Libraries # Importing Google Text to Speech library from gtts import gTTS # Importing PDF reader PyMuPDF import fitz # Open file Path pdf_File = open('simple.pdf', 'rb') # Create PDF Reader Object pdf_Reader = fitz.open(pdf_File) count = pdf_Reader.page_count # counts number of pages in pdf textList = [] # Extracting text data from each page of the pdf file for i in range(count): page = pdf_Reader[i] textList.append(page.get_text('text')) # Converting multiline text to single line text textString = " ".join(textList) print(textString) # Set language to english (en) language = 'en' # Call GTTS myAudio = gTTS(text=textString, lang=language, slow=False) # Save as mp3 file myAudio.save("Audio.mp3") This should fix the errors you were getting and allow you to extract the text from all pages in the PDF file.
Gtts library error. I don't know why this error are happening or how to fix them
I am tried to convert pdf to an audio file but when ever I run my code I get a bunch error from the gtts liberary. If there is a better liberary to use that does not sound like a robot please let me know the errors are https://pastebin.com/Uwnq1MgS and my code is #Importing Libraries #Importing Google Text to Speech library from gtts import gTTS #Importing PDF reader PyPDF2 import PyPDF2 #Open file Path pdf_File = open('simple.pdf', 'rb') #Create PDF Reader Object pdf_Reader = PyPDF2.PdfFileReader(pdf_File) count = pdf_Reader.numPages # counts number of pages in pdf textList = [] #Extracting text data from each page of the pdf file for i in range(count): try: page = pdf_Reader.getPage(i) textList.append(page.extractText()) except: pass #Converting multiline text to single line text textString = " ".join(textList) print(textString) #Set language to english (en) language = 'en' #Call GTTS myAudio = gTTS(text=textString, lang=language, slow=False) #Save as mp3 file myAudio.save("Audio.mp3") Can anyone help me? I have tried nothing because I could not find anything on this errors.
[ "It looks like the issue is with the PyPDF2 library. The getPage() method is not able to extract the text from some pages in the PDF file, resulting in an error.\nOne solution could be to use the PyMuPDF library instead, which is a more powerful PDF manipulation library. You can install it using the following command:\npip install PyMuPDF\n\nYou can then use the text() method from the PyMuPDF library to extract the text from each page in the PDF file. Here is an example of how your code could look like using the PyMuPDF library:\n# Importing Libraries\n# Importing Google Text to Speech library\nfrom gtts import gTTS\n\n# Importing PDF reader PyMuPDF\nimport fitz\n\n# Open file Path\npdf_File = open('simple.pdf', 'rb')\n\n# Create PDF Reader Object\npdf_Reader = fitz.open(pdf_File)\ncount = pdf_Reader.page_count # counts number of pages in pdf\ntextList = []\n\n# Extracting text data from each page of the pdf file\nfor i in range(count):\n page = pdf_Reader[i]\n textList.append(page.get_text('text'))\n\n# Converting multiline text to single line text\ntextString = \" \".join(textList)\n\nprint(textString)\n\n# Set language to english (en)\nlanguage = 'en'\n\n# Call GTTS\nmyAudio = gTTS(text=textString, lang=language, slow=False)\n\n# Save as mp3 file\nmyAudio.save(\"Audio.mp3\")\n\nThis should fix the errors you were getting and allow you to extract the text from all pages in the PDF file.\n" ]
[ 0 ]
[]
[]
[ "gtts", "python" ]
stackoverflow_0074679139_gtts_python.txt
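A caveat on the PyMuPDF answer above: fitz.open() normally takes a file path (or raw bytes via its stream argument) rather than a Python file object created with open(..., 'rb'), so the snippet as written may fail before it ever reaches gTTS. A version of the same idea that sticks to documented call patterns could look like this; 'simple.pdf' and 'Audio.mp3' are simply the names reused from the question.

import fitz  # PyMuPDF
from gtts import gTTS

doc = fitz.open("simple.pdf")                     # open by path
text = " ".join(page.get_text("text") for page in doc)

if text.strip():                                  # avoid calling gTTS with empty text
    gTTS(text=text, lang="en", slow=False).save("Audio.mp3")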
Q: Why in 2D vector we have to enter 1D vector inside the nested loop if we are using nested loop? In the code below when I was using 2D vector with 1D vector inside the loop it was printing the output row by row nicely but when I declared 1D vector outside the loop, every time while pushing back the values it was also pushing the values of the previous row as well and in some cases the code doesn't even call the function when I declare 1D vector outside the loop, any reasons for this Bellow are the 2 different codes in one 1D vector is declared inside the nested forloop and in one case outside respectively ` #include<iostream> #include<vector> #include<algorithm> using namespace std; void print(vector<vector<int> > &mat) { for (int i = 0; i < mat.size(); ++i) { for (int j = 0; j < mat[i].size(); ++j){ cout<<mat[i][j]<<" "; } cout<<endl; } } int main(){ int arr[3][3]; vector<vector<int>> stuff; for (int i = 0; i < 3; i++) { vector<int> matri; for (int j = 0; j < 3; j++) { cin>>arr[i][j]; matri.push_back(arr[i][j]); } stuff.push_back(mat); } print(stuff); return 0; } ` ` #include<iostream> #include<vector> #include<algorithm> using namespace std; void print(vector<vector<int> > &matrix) { for (int i = 0; i < matrix.size(); ++i) { for (int j = 0; j < matrix[i].size(); ++j){ cout<<matrix[i][j]<<" "; } cout<<endl; } } int main(){ int arr[3][3]; vector<vector<int>> stuff; for (int i = 0; i < 3; i++) { vector<int> mat; for (int j = 0; j < 3; j++) { cin>>arr[i][j]; mat.push_back(arr[i][j]); } stuff.push_back(mat); } print(stuff); return 0; } ` A: In the first code, you have declared a vector named "matri" inside the loop, which is used to store the values of a single row of the matrix. Then, this vector is pushed into the "stuff" vector, which is a 2D vector. In this way, the values of the matrix are stored in the 2D vector. In the second code, you have declared a vector named "mat" outside the loop. Every time the loop iterates, the values of the current row are pushed into this vector. But since this vector is not declared inside the loop, it retains the values of the previous rows as well. So, when you push this vector into the "stuff" vector, it contains the values of all the rows, including the values of the previous rows. Therefore, it is always a good practice to declare a vector inside the loop so that it can store the values of a single row only and not retain the values of the previous rows.
Why in 2D vector we have to enter 1D vector inside the nested loop if we are using nested loop?
In the code below when I was using 2D vector with 1D vector inside the loop it was printing the output row by row nicely but when I declared 1D vector outside the loop, every time while pushing back the values it was also pushing the values of the previous row as well and in some cases the code doesn't even call the function when I declare 1D vector outside the loop, any reasons for this Bellow are the 2 different codes in one 1D vector is declared inside the nested forloop and in one case outside respectively ` #include<iostream> #include<vector> #include<algorithm> using namespace std; void print(vector<vector<int> > &mat) { for (int i = 0; i < mat.size(); ++i) { for (int j = 0; j < mat[i].size(); ++j){ cout<<mat[i][j]<<" "; } cout<<endl; } } int main(){ int arr[3][3]; vector<vector<int>> stuff; for (int i = 0; i < 3; i++) { vector<int> matri; for (int j = 0; j < 3; j++) { cin>>arr[i][j]; matri.push_back(arr[i][j]); } stuff.push_back(mat); } print(stuff); return 0; } ` ` #include<iostream> #include<vector> #include<algorithm> using namespace std; void print(vector<vector<int> > &matrix) { for (int i = 0; i < matrix.size(); ++i) { for (int j = 0; j < matrix[i].size(); ++j){ cout<<matrix[i][j]<<" "; } cout<<endl; } } int main(){ int arr[3][3]; vector<vector<int>> stuff; for (int i = 0; i < 3; i++) { vector<int> mat; for (int j = 0; j < 3; j++) { cin>>arr[i][j]; mat.push_back(arr[i][j]); } stuff.push_back(mat); } print(stuff); return 0; } `
[ "In the first code, you have declared a vector named \"matri\" inside the loop, which is used to store the values of a single row of the matrix. Then, this vector is pushed into the \"stuff\" vector, which is a 2D vector. In this way, the values of the matrix are stored in the 2D vector.\nIn the second code, you have declared a vector named \"mat\" outside the loop. Every time the loop iterates, the values of the current row are pushed into this vector. But since this vector is not declared inside the loop, it retains the values of the previous rows as well. So, when you push this vector into the \"stuff\" vector, it contains the values of all the rows, including the values of the previous rows.\nTherefore, it is always a good practice to declare a vector inside the loop so that it can store the values of a single row only and not retain the values of the previous rows.\n" ]
[ 0 ]
[]
[]
[ "arrays", "c++" ]
stackoverflow_0074679213_arrays_c++.txt
Q: Can I bind a shortcut to quick fix a specific weak warning? I'm trying to bind a shortcut key to a specific weak warning in GoLand. I like declaring implicit struct literals. Although I click on it in the gif there, I now have the "Show quick fixes" option bound to a shortcut, but I still have to navigate to the Problems tool window and select the warning. Is there a way I can do it with one shortcut key? Just for this specific warning? A: I was able to work around it; or rather, what I had been doing was the workaround. What I should be doing is invoking the Show Context Actions option from the editor. Now I just press that keybinding, hit Enter, and the fix is applied. :)
Can I bind a shortcut to quick fix a specific weak warning?
I'm trying to bind a shortcut key to a specific weak warning in GoLand. I like declaring implicit struct literals. Although I click on it in the gif there, I now have the "Show quick fixes" option bound to a shortcut, but I still have to navigate to the Problems tool window and select the warning. Is there a way I can do it with one shortcut key? Just for this specific warning?
[ "i was able to work around it, or i guess what i've been doing was the work around coz what i should be doing is accessing the show context actions options from the editor now i just do that keybind and press on enter and it just goes. :)\n" ]
[ 1 ]
[]
[]
[ "go", "goland", "jetbrains_ide" ]
stackoverflow_0074678682_go_goland_jetbrains_ide.txt
Q: How can accessibility settings be configured from inside an XCTestCase Button Shapes (the accessibility feature) on iOS can be enabled and disabled from the setting app on the simulator. But what if we want to enable and disable it to take snapshots or any other kind of unit test from within XCTestCase? It doesn't seem to be a UITrait and has a buttonShapesEnabled property exposed from UIAccessibility but how do we change this property from inside a unit test? A: I would recommend using a UI Test to do this. Using a UI Test, you could open the settings app and configure this setting by creating an XCUIApplication that targets the Settings app. XCUIApplication(bundleIdentifier: "com.apple.Preferences") Then interact with the buttons in Settings to enable/disable button shapes. You could then launch and interact with your app, using the screenshot() method to take screenshots: https://developer.apple.com/documentation/xctest/xcuiscreenshotproviding/2897250-screenshot
How can accessibility settings be configured from inside an XCTestCase
Button Shapes (the accessibility feature) on iOS can be enabled and disabled from the setting app on the simulator. But what if we want to enable and disable it to take snapshots or any other kind of unit test from within XCTestCase? It doesn't seem to be a UITrait and has a buttonShapesEnabled property exposed from UIAccessibility but how do we change this property from inside a unit test?
[ "I would recommend using a UI Test to do this.\nUsing a UI Test, you could open the settings app and configure this setting by creating an XCUIApplication that targets the Settings app.\nXCUIApplication(bundleIdentifier: \"com.apple.Preferences\")\n\nThen interact with the buttons in Settings to enable/disable button shapes.\nYou could then launch and interact with your app, using the screenshot() method to take screenshots:\nhttps://developer.apple.com/documentation/xctest/xcuiscreenshotproviding/2897250-screenshot\n" ]
[ 0 ]
[]
[]
[ "ios", "swift", "uiaccessibility", "xctest", "xctestcase" ]
stackoverflow_0073746279_ios_swift_uiaccessibility_xctest_xctestcase.txt
Q: Clear all form field in react, using react hook forms I am creating a login form and want to clear the form when it is submitted. I am using React Hook Form; here is my code: const onSubmit = (data) => { const name = data.name; const email = data.email; const pass = data.password; const confirmPass = data.confirmPassword; console.log(data) } <form className="mt-6" id='signupForm' onSubmit={handleSubmit(onSubmit)}> A: React Hook Form provides a function called reset; call it on successful form submission and it will clear all the form values. Example of working.
Clear all form field in react, using react hook forms
I am creating a login form and want to clear the form when it is submitted. I am using React Hook Form; here is my code: const onSubmit = (data) => { const name = data.name; const email = data.email; const pass = data.password; const confirmPass = data.confirmPassword; console.log(data) } <form className="mt-6" id='signupForm' onSubmit={handleSubmit(onSubmit)}>
[ "There is a function in React Hook Form called reset on successful form submission run reset function. It will clear all the form values.\nExample of working.\n" ]
[ 0 ]
[]
[]
[ "forms", "react_forms" ]
stackoverflow_0072146638_forms_react_forms.txt
Q: Why the return response is coming in weird format? I am learning the backend in node. I am trying to execute the following piece of code ` const axios= require('axios') async function getData(){ const resp = await axios.get('https://jsonplaceholder.typicode.com/todos/1') console.log('hello',resp.data) } getData() I am getting output like this: My output The expected output is: expected output I tried running code with node index.js and with extension code runner. The output is the same, even on replit I am getting the same result. Can someone explain, what am i doing wrong? A: You are receiving a Brotli (br) encoding. Just pass the Accept-Encoding header as deflate and the server will return you a JSON output. await axios.get('https://jsonplaceholder.typicode.com/todos/1', { headers: { 'Accept-Encoding': 'deflate' } }) A: You're not doing anything wrong, it's just that axios currently doesn't decompress Brotli-encoded responses, yet signals Brotli as an acceptable encoding anyway, hence why it looks like garbage. Instead of disabling compression completely, just send the encodings axios does support and decompression should be invisible to you const resp = await axios.get('https://jsonplaceholder.typicode.com/todos/1', { headers: { 'accept-encoding': 'gzip, deflate' } }); A: There isn't any problem with your code; it gives me exactly the output you expected when I run the code you posted in the question, so it must be some other log or some other problem. Your code as posted in the question works fine, with the output below.
Why the return response is coming in weird format?
I am learning the backend in node. I am trying to execute the following piece of code ` const axios= require('axios') async function getData(){ const resp = await axios.get('https://jsonplaceholder.typicode.com/todos/1') console.log('hello',resp.data) } getData() I am getting output like this: My output The expected output is: expected output I tried running code with node index.js and with extension code runner. The output is the same, even on replit I am getting the same result. Can someone explain, what am i doing wrong?
[ "You are recieving a Brotli (br) encoding. Just pass Accept-Encoding header to deflate and the server will return you a json output.\nawait axios.get('https://jsonplaceholder.typicode.com/todos/1', {\n headers: {\n 'Accept-Encoding': 'deflate'\n }\n})\n\n", "You're not doing anything wrong, it's just that axios currently doesn't decompress Brotli-encoded resposes, yet signals Brotli as an acceptable encoding anyway, hence why it looks like garbage. Instead of disabling compression completely, just send the supported encodings axios does support and decompression should be invisible to you\nconst resp = await axios.get('https://jsonplaceholder.typicode.com/todos/1', {\n 'accept-encoding': 'gzip, deflate'\n});\n\n", "There is't any problem with your code and also it gives me exactly the output you required with the code you posted in the question so it must be either other log or some other problem but your code posted in question works fine with below output.\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "axios", "javascript", "node.js" ]
stackoverflow_0074673568_axios_javascript_node.js.txt
Q: How can I compare 2 excel file unique rows by price in UIPath? I am scraping online shops, writing down the data in separate excel files for each shop (product name, shop url, product url, and price). I want to check if there are same products in 2 excel files(like IPhone 14 pro max on amazon and IPhone 14 pro max on ebay), compare their price, and write down the lower price product(product name, shop url, product url and price) to another excel file(like lowest price products). How can I do this? A: Read both Excel files Use the Join Data Tables to merge them by the column that contains the product ID (needs to be unique for sure) Join Type Inner should work And now you get the both prices You could also use pure LINQ, example. At the end it depends on the result you need. So either filter columns or use some advanced comparisons to get Amazon or Ebay as cheapest. But what happens when both have same price? So make sure you also test this one.
How can I compare 2 excel file unique rows by price in UIPath?
I am scraping online shops, writing down the data in separate excel files for each shop (product name, shop url, product url, and price). I want to check if there are same products in 2 excel files(like IPhone 14 pro max on amazon and IPhone 14 pro max on ebay), compare their price, and write down the lower price product(product name, shop url, product url and price) to another excel file(like lowest price products). How can I do this?
[ "\nRead both Excel files\nUse the Join Data Tables to merge them by the column that contains the product ID (needs to be unique for sure)\nJoin Type Inner should work\nAnd now you get the both prices\n\nYou could also use pure LINQ, example.\nAt the end it depends on the result you need. So either filter columns or use some advanced comparisons to get Amazon or Ebay as cheapest. But what happens when both have same price? So make sure you also test this one.\n" ]
[ 0 ]
[]
[]
[ "excel", "rpm", "uipath", "uipath_studio" ]
stackoverflow_0074668463_excel_rpm_uipath_uipath_studio.txt
Q: Attach text to center of image only using grid template I have a background image which has on top another div containing some elements. I success to add the white-container on top of the background div using grid-template-area. I try to do the same thing only using grid and not position. But text hello world doesn't go on top of love icon using grid area love-svg. I think perhaps because img tag has no closed tag, but can't find a way to solve this problem. HTML <div class="container"> <img src="/images/site/trust_pilot_bg_mb.jpg" > <div class="white-container"> <div class="love-section"> <img class="love-ico" src="/images/site/love_svg_icon.svg"><h2 class="hello">Hello world</h2> </img> </div> </div> </div> SCSS .container { display: grid; justify-items: center; grid-template-areas: "content-wrapper"; &>* { grid-area: content-wrapper; } img { width: 100%; } .love-section { display: grid; background-color: white; top: 40px; position: relative; } .love-ico { width: 100px; grid-template-areas: "love-svg"; } .hello { color: red; grid-area: love-svg; } } In blue this is what I want to achieve. Please note that I want the text to be on top of the love logo but allow it to be bigger than it. A: if you want to use grid to do that, you'll need to play with the align-self, and justify-self for the grid-item. Now there are several ways to do that with grid, the way you are using it is not the simplest. I'm proposing just one here: html, body { margin: 0; padding: 0; } #grid { display: grid; grid-template-rows: repeat(7, 1fr); grid-template-columns: repeat(7, 1fr); gap: 0; width: 100vw; min-height: 100vh; } #div1 { grid-area: 1 / 1 / 8 / 8; } #div1 img, #div2 img { width: 100%; height: 100% } #div2 { grid-area: 3 / 3 / 6 / 6; } #div3 { grid-area: 4 / 4 / 5 / 5; align-self: center; justify-self: center; } <div id="grid"> <div id="div1"> <img src="https://picsum.photos/300/200"> </div> <div id="div2"> <img class="love-ico" src="https://picsum.photos/300/200"> </div> <div id="div3"> <h2>Hello world</h2> </div> </div> Now I don't know if it's mandatory for you to use grid, otherwise the old technic of relative, absolute works too: html, body { margin: 0; padding: 0; } #container { position: relative; background-image: url("https://picsum.photos/300/200"); width: 100vw; height: 100vh; } #container img { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); } #container h2 { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); } <div id="container"> <img class="love-ico" src="https://picsum.photos/300/200"> <h2>Hello world</h2> </div>
Attach text to center of image only using grid template
I have a background image which has on top another div containing some elements. I success to add the white-container on top of the background div using grid-template-area. I try to do the same thing only using grid and not position. But text hello world doesn't go on top of love icon using grid area love-svg. I think perhaps because img tag has no closed tag, but can't find a way to solve this problem. HTML <div class="container"> <img src="/images/site/trust_pilot_bg_mb.jpg" > <div class="white-container"> <div class="love-section"> <img class="love-ico" src="/images/site/love_svg_icon.svg"><h2 class="hello">Hello world</h2> </img> </div> </div> </div> SCSS .container { display: grid; justify-items: center; grid-template-areas: "content-wrapper"; &>* { grid-area: content-wrapper; } img { width: 100%; } .love-section { display: grid; background-color: white; top: 40px; position: relative; } .love-ico { width: 100px; grid-template-areas: "love-svg"; } .hello { color: red; grid-area: love-svg; } } In blue this is what I want to achieve. Please note that I want the text to be on top of the love logo but allow it to be bigger than it.
[ "if you want to use grid to do that, you'll need to play with the align-self, and justify-self for the grid-item.\nNow there are several ways to do that with grid, the way you are using it is not the simplest.\nI'm proposing just one here:\n\n\nhtml,\nbody {\n margin: 0;\n padding: 0;\n}\n\n#grid {\n display: grid;\n grid-template-rows: repeat(7, 1fr);\n grid-template-columns: repeat(7, 1fr);\n gap: 0;\n width: 100vw;\n min-height: 100vh;\n}\n\n#div1 {\n grid-area: 1 / 1 / 8 / 8;\n}\n\n#div1 img,\n#div2 img {\n width: 100%;\n height: 100%\n}\n\n#div2 {\n grid-area: 3 / 3 / 6 / 6;\n}\n\n#div3 {\n grid-area: 4 / 4 / 5 / 5;\n align-self: center;\n justify-self: center;\n}\n<div id=\"grid\">\n <div id=\"div1\">\n <img src=\"https://picsum.photos/300/200\">\n </div>\n <div id=\"div2\">\n <img class=\"love-ico\" src=\"https://picsum.photos/300/200\">\n </div>\n <div id=\"div3\">\n <h2>Hello world</h2>\n </div>\n</div>\n\n\n\nNow I don't know if it's mandatory for you to use grid, otherwise the old technic of relative, absolute works too:\n\n\nhtml,\nbody {\n margin: 0;\n padding: 0;\n}\n\n#container {\n position: relative;\n background-image: url(\"https://picsum.photos/300/200\");\n width: 100vw;\n height: 100vh;\n}\n\n#container img {\n position: absolute;\n top: 50%;\n left: 50%;\n transform: translate(-50%, -50%);\n}\n\n#container h2 {\n position: absolute;\n top: 50%;\n left: 50%;\n transform: translate(-50%, -50%);\n}\n<div id=\"container\">\n <img class=\"love-ico\" src=\"https://picsum.photos/300/200\">\n <h2>Hello world</h2>\n</div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css", "grid", "html" ]
stackoverflow_0074678895_css_grid_html.txt
Q: NotImplementedError: Conversion 'rpy2py' not defined for objects of type '' only after I run the code twice If I run the following code once it works. import numpy as np import rpy2.robjects as robjects x = np.linspace(0, 1, num = 11, endpoint=True) y = np.array([-1,1,1, -1,1,0, .5,.5,.4, .5, -1]) r_x = robjects.FloatVector(x) r_y = robjects.FloatVector(y) r_smooth_spline = robjects.r['smooth.spline'] #extract R function spline_xy = r_smooth_spline(x=r_x, y=r_y) print('x =', x) print('ysplined =',np.array(robjects.r['predict'](spline_xy,robjects.FloatVector(x)).rx2('y'))) If I run this cell twice in a Jupyter notebook, I obtain the following error message: --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-2-5efeb940cd16> in <module> 6 r_x = robjects.FloatVector(x) 7 r_y = robjects.FloatVector(y) ----> 8 r_smooth_spline = robjects.r['smooth.spline'] #extract R function 9 spline_xy = r_smooth_spline(x=r_x, y=r_y) 10 print('x =', x) 2 frames /usr/local/lib/python3.8/dist-packages/rpy2/robjects/conversion.py in _rpy2py(obj) 250 non-rpy2) objects. 251 """ --> 252 raise NotImplementedError( 253 "Conversion 'rpy2py' not defined for objects of type '%s'" % 254 str(type(obj)) NotImplementedError: Conversion 'rpy2py' not defined for objects of type '<class 'rpy2.rinterface.SexpClosure'>' This code always used to run without problems multiple times. Probably a new version of python or rpy2 is the problem? How can I fix the problem such that I am able to run this code multiple times within one Jupyter notebook. A: The easiest fix is to run once: !pip install -Iv rpy2==3.4.2 at the start of the Jupyter-notebook in order to rollback to version 3.4.2, where this problem did not occur (see Rpy2 Error depends on execution method: NotImplementedError: Conversion "rpy2py" not defined). For more information how to cahge the version of a python package see Rollback to specific version of a python package in Goolge Colab and Installing specific package version with pip) It would still be interesting to understand how one could use the latest version of rpy2 correctly. A: This is cause by an issue in older releases of ipykernel. I'd recommend to upgrade it rather than downgrade rpy2. See https://github.com/rpy2/rpy2/issues/952
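A quick sketch of the upgrade route from the second answer, assuming a pip-managed environment such as Colab; restart the Jupyter kernel afterwards so the new ipykernel is actually loaded:

pip install --upgrade ipykernel
# inside a notebook cell this is usually written as:
# !pip install --upgrade ipykernel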
NotImplementedError: Conversion 'rpy2py' not defined for objects of type '' only after I run the code twice
If I run the following code once it works. import numpy as np import rpy2.robjects as robjects x = np.linspace(0, 1, num = 11, endpoint=True) y = np.array([-1,1,1, -1,1,0, .5,.5,.4, .5, -1]) r_x = robjects.FloatVector(x) r_y = robjects.FloatVector(y) r_smooth_spline = robjects.r['smooth.spline'] #extract R function spline_xy = r_smooth_spline(x=r_x, y=r_y) print('x =', x) print('ysplined =',np.array(robjects.r['predict'](spline_xy,robjects.FloatVector(x)).rx2('y'))) If I run this cell twice in a Jupyter notebook, I obtain the following error message: --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-2-5efeb940cd16> in <module> 6 r_x = robjects.FloatVector(x) 7 r_y = robjects.FloatVector(y) ----> 8 r_smooth_spline = robjects.r['smooth.spline'] #extract R function 9 spline_xy = r_smooth_spline(x=r_x, y=r_y) 10 print('x =', x) 2 frames /usr/local/lib/python3.8/dist-packages/rpy2/robjects/conversion.py in _rpy2py(obj) 250 non-rpy2) objects. 251 """ --> 252 raise NotImplementedError( 253 "Conversion 'rpy2py' not defined for objects of type '%s'" % 254 str(type(obj)) NotImplementedError: Conversion 'rpy2py' not defined for objects of type '<class 'rpy2.rinterface.SexpClosure'>' This code always used to run without problems multiple times. Probably a new version of python or rpy2 is the problem? How can I fix the problem such that I am able to run this code multiple times within one Jupyter notebook.
[ "The easiest fix is to run once:\n!pip install -Iv rpy2==3.4.2\n\nat the start of the Jupyter-notebook in order to rollback to version 3.4.2, where this problem did not occur (see Rpy2 Error depends on execution method: NotImplementedError: Conversion \"rpy2py\" not defined). For more information how to cahge the version of a python package see Rollback to specific version of a python package in Goolge Colab and Installing specific package version with pip)\nIt would still be interesting to understand how one could use the latest version of rpy2 correctly.\n", "This is cause by an issue in older releases of ipykernel. I'd recommend to upgrade it rather than downgrade rpy2.\nSee https://github.com/rpy2/rpy2/issues/952\n" ]
[ 0, 0 ]
[]
[]
[ "jupyter_notebook", "python", "rpy2" ]
stackoverflow_0074678378_jupyter_notebook_python_rpy2.txt
Q: Cannot able to run cqlsh due to python attribute error Cannot able to execute the command cqlsh in mac m1 based system. % bin/cqlsh Traceback (most recent call last): File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/cqlsh.py", line 159, in <module> from cqlshlib import cql3handling, cqlhandling, pylexotron, sslhandling, cqlshhandling File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/cql3handling.py", line 19, in <module> from cqlshlib.cqlhandling import CqlParsingRuleSet, Hint File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/cqlhandling.py", line 23, in <module> from cqlshlib import pylexotron, util File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/pylexotron.py", line 342, in <module> class ParsingRuleSet: File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/pylexotron.py", line 343, in ParsingRuleSet RuleSpecScanner = SaferScanner([ ^^^^^^^^^^^^^^ File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/saferscanner.py", line 91, in __init__ s = re.sre_parse.State() ^^^^^^^^^^^^ AttributeError: module 're' has no attribute 'sre_parse' A: Looks like there may have been a breaking change introduced to Python's synchronized regex engine (SRE) with Python 3.11. I have created a ticket for this on the Cassandra project (CASSANDRA-18088). In the interim, downgrade your local Python to 3.10, and you should be fine.
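One way to follow the answer without replacing the system Python is a per-shell interpreter switch, sketched here with pyenv (assumes pyenv is installed and initialised; the version number is only an example). The cqlsh wrapper script should then pick up the 3.10 interpreter from PATH:

pyenv install 3.10.13
pyenv shell 3.10.13
python --version    # should report 3.10.x
bin/cqlsh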
Cannot able to run cqlsh due to python attribute error
Cannot able to execute the command cqlsh in mac m1 based system. % bin/cqlsh Traceback (most recent call last): File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/cqlsh.py", line 159, in <module> from cqlshlib import cql3handling, cqlhandling, pylexotron, sslhandling, cqlshhandling File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/cql3handling.py", line 19, in <module> from cqlshlib.cqlhandling import CqlParsingRuleSet, Hint File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/cqlhandling.py", line 23, in <module> from cqlshlib import pylexotron, util File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/pylexotron.py", line 342, in <module> class ParsingRuleSet: File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/pylexotron.py", line 343, in ParsingRuleSet RuleSpecScanner = SaferScanner([ ^^^^^^^^^^^^^^ File "/Users/avinashkasukurthi/devtools/apache-cassandra-4.0.7/bin/../pylib/cqlshlib/saferscanner.py", line 91, in __init__ s = re.sre_parse.State() ^^^^^^^^^^^^ AttributeError: module 're' has no attribute 'sre_parse'
[ "Looks like there may have been a breaking change introduced to Python's synchronized regex engine (SRE) with Python 3.11. I have created a ticket for this on the Cassandra project (CASSANDRA-18088).\nIn the interim, downgrade your local Python to 3.10, and you should be fine.\n" ]
[ 0 ]
[]
[]
[ "cassandra", "cassandra_4.0", "cqlsh", "python" ]
stackoverflow_0074673247_cassandra_cassandra_4.0_cqlsh_python.txt
Q: Add a column in my dataframe based on the name of Excel file I want to import many excel files into one single dataframe and I want a column where all the rows are the same as the original excel file name in python this is what i have tried df_final=df_final.assign(Année='2021') df_final=df_final.assign(Mois='Octobre') But I am obliged each time to imort a single excel file add these two columns and then move on to the next one. How can i automate this into one function ? A: in order to add a value to each dataframe based on the filename you need to create a list of values equal to the number of rows. Below is a simple example assuming the dataframes are the same. Each sample file I have created looks like this: file1.xlsx Some Data 0 5 1 3 2 2 3 3 4 6 5 5 file2.xlsx Some Data 0 6 1 8 2 5 3 4 4 5 5 9 Example code: import pandas as pd, os source_folder = r"\PATH\TO\FILES" # Create empty list for the dataframes df_list = [] # Loop though each file in the folder for file in os.listdir(source_folder): # Create full file path file_path = os.path.join(source_folder, file) # Create dataframe x_df = pd.read_excel(file_path) # Create new dataframe column for filename based on the number of rows in dataframe x_df["filename"] = [file for _ in range(x_df.shape[0])] # Add dataframe to the list df_list.append(x_df) # Concatonate the list of dataframes to a single dataframe final_df = pd.concat(df_list) print(final_df) The final result is: Some Data filename 0 5 file1.xlsx 1 3 file1.xlsx 2 2 file1.xlsx 3 3 file1.xlsx 4 6 file1.xlsx 5 5 file1.xlsx 0 6 file2.xlsx 1 8 file2.xlsx 2 5 file2.xlsx 3 4 file2.xlsx 4 5 file2.xlsx 5 9 file2.xlsx
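A more compact variant of the same approach, assuming the files all live in one folder and share a layout (untested sketch; ignore_index=True avoids the repeated 0 to 5 index visible in the answer's output):

import pandas as pd
from pathlib import Path

source_folder = Path(r"\PATH\TO\FILES")   # placeholder path, as in the answer

final_df = pd.concat(
    (pd.read_excel(f).assign(filename=f.name) for f in source_folder.glob("*.xlsx")),
    ignore_index=True,
)
print(final_df)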
Add a column in my dataframe based on the name of Excel file
I want to import many excel files into one single dataframe and I want a column where all the rows are the same as the original excel file name in python this is what i have tried df_final=df_final.assign(Année='2021') df_final=df_final.assign(Mois='Octobre') But I am obliged each time to imort a single excel file add these two columns and then move on to the next one. How can i automate this into one function ?
[ "in order to add a value to each dataframe based on the filename you need to create a list of values equal to the number of rows. Below is a simple example assuming the dataframes are the same.\nEach sample file I have created looks like this:\nfile1.xlsx\n Some Data\n0 5\n1 3\n2 2\n3 3\n4 6\n5 5\n\nfile2.xlsx\n Some Data\n0 6\n1 8\n2 5\n3 4\n4 5\n5 9\n\nExample code:\nimport pandas as pd, os\n\nsource_folder = r\"\\PATH\\TO\\FILES\"\n\n# Create empty list for the dataframes\ndf_list = []\n\n# Loop though each file in the folder\nfor file in os.listdir(source_folder):\n\n # Create full file path\n file_path = os.path.join(source_folder, file)\n\n # Create dataframe\n x_df = pd.read_excel(file_path)\n\n # Create new dataframe column for filename based on the number of rows in dataframe\n x_df[\"filename\"] = [file for _ in range(x_df.shape[0])]\n\n # Add dataframe to the list\n df_list.append(x_df)\n\n# Concatonate the list of dataframes to a single dataframe\nfinal_df = pd.concat(df_list)\n\nprint(final_df)\n\nThe final result is:\n Some Data filename\n0 5 file1.xlsx\n1 3 file1.xlsx\n2 2 file1.xlsx\n3 3 file1.xlsx\n4 6 file1.xlsx\n5 5 file1.xlsx\n0 6 file2.xlsx\n1 8 file2.xlsx\n2 5 file2.xlsx\n3 4 file2.xlsx\n4 5 file2.xlsx\n5 9 file2.xlsx\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074677969_python.txt
Q: django.db.utils.InterfaceError: (0, '') when using django model I have django.db.utils.InterfaceError: (0, '') error on django. I googled around and found this error is related with django mysql connection. What I have done is just like this , from django.core.management.base import BaseCommand from ...models import Issue class Command(BaseCommand): def handle(self, *args, **options): print("dbconnection test:") obj = Issue.objects.get(id=1) print(obj.id) exit() Some articles show the solution with , connection close cursor = connection.cursor() cursor.execute(query) cursor.close() but I don't even have the chance to connection.close() Problem happens here /usr/local/lib/python3.6/site-packages/MySQLdb/connections.py def query(self, query): # Since _mysql releases GIL while querying, we need immutable buffer. if isinstance(query, bytearray): query = bytes(query) _mysql.connection.query(self, query) I really appreciate any help. thank you very much. I added the CONN_MAX_AGE None in db settings but in vain. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', .... 'HOST': env('DATABASE_HOST'), 'PORT': env('DATABASE_PORT'), 'OPTIONS': { 'charset': 'utf8mb4', 'init_command': "SET sql_mode='STRICT_TRANS_TABLES'" }, 'CONN_MAX_AGE' : None ## add here } } These are the stacktrace Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute return self.cursor.execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute return self.cursor.execute(query, args) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 250, in execute self.errorhandler(self, exc, value) File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler raise errorvalue File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 247, in execute res = self._query(query) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 412, in _query rowcount = self._do_query(q) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 375, in _do_query db.query(q) File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 276, in query _mysql.connection.query(self, query) _mysql_exceptions.InterfaceError: (0, '') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 19, in main execute_from_command_line(sys.argv) File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 395, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 328, in run_from_argv self.execute(*args, **cmd_options) File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 369, in execute output = self.handle(*args, **options) File "/code/tweet/management/commands/handle_tweet.py", line 521, in handle twitterApi.search_tweet(keyword) File "/code/tweet/management/commands/handle_tweet.py", line 329, in search_tweet cnt = self.tagByAi() File "/code/tweet/management/commands/handle_tweet.py", line 103, in tagByAi crowded = Issue.objects.get(id=382) File "/usr/local/lib/python3.6/site-packages/django/db/models/manager.py", 
line 82, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 411, in get num = len(clone) File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 258, in __len__ self._fetch_all() File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 1261, in _fetch_all self._result_cache = list(self._iterable_class(self)) File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 57, in __iter__ results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) File "/usr/local/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1137, in execute_sql cursor.execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute return super().execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers return executor(sql, params, many, context) File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute return self.cursor.execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute return self.cursor.execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute return self.cursor.execute(query, args) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 250, in execute self.errorhandler(self, exc, value) File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler raise errorvalue File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 247, in execute res = self._query(query) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 412, in _query rowcount = self._do_query(q) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 375, in _do_query db.query(q) File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 276, in query _mysql.connection.query(self, query) django.db.utils.InterfaceError: (0, '') myenvironment is here absl-py 0.9.0 asgiref 3.2.7 astor 0.8.1 boto3 1.12.28 botocore 1.15.28 cachetools 4.0.0 certifi 2019.11.28 chardet 3.0.4 cycler 0.10.0 Django 3.0.1 django-environ 0.4.5 django-extensions 2.2.6 django-filter 2.2.0 django-mysql 3.3.0 djangorestframework 3.11.0 docutils 0.15.2 gast 0.2.2 gensim 3.8.1 google-api-core 1.16.0 google-auth 1.11.3 google-cloud-core 1.3.0 google-cloud-storage 1.26.0 google-pasta 0.2.0 google-resumable-media 0.5.0 googleapis-common-protos 1.51.0 grpcio 1.27.2 h5py 2.10.0 idna 2.9 jmespath 0.9.5 Keras 2.3.1 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.0 kiwisolver 1.1.0 Markdown 3.2.1 matplotlib 3.0.3 mecab-python3 0.996.3 mysqlclient 1.3.13 neologdn 0.4 numpy 1.16.2 oauthlib 3.1.0 opt-einsum 3.2.0 pandas 0.24.2 pandas-schema 0.3.5 pandocfilters 1.4.2 pip 20.0.2 protobuf 3.11.3 pyasn1 0.4.8 pyasn1-modules 0.2.8 pyparsing 2.4.6 python-dateutil 2.8.1 pytz 2019.3 PyYAML 5.3.1 requests 2.23.0 requests-oauthlib 1.3.0 rsa 4.0 s3transfer 0.3.3 scikit-learn 
0.20.3 scipy 1.4.1 setuptools 45.2.0 six 1.14.0 smart-open 1.10.0 sqlparse 0.3.1 tensorboard 1.15.0 tensorflow 1.15.2 tensorflow-estimator 1.15.1 tensorflow-hub 0.7.0 termcolor 1.1.0 urllib3 1.25.8 uWSGI 2.0.17 Werkzeug 1.0.0 wheel 0.34.2 wrapt 1.12.1 A: My code was very similar to what is in the question: class Command(BaseCommand): help = "Check order" def add_arguments(self, parser): parser.add_argument("--order-no", nargs="?", type=str) def handle(self, *args, **options): order = Orders.objects.get(order_no=options["order_no"]) print(order) I figured out by accident that the connection is randomly closed every now and then and it causes InterfaceError. In my case probably some other section of the code uses models on global level or maybe django.setup() is loading module that creates a connection to database, or perhaps connection is kept in memory and re-used between consecutive django calls. (No idea so far, need to dig more). But solution to it was pretty simple, add transaction context manager. It will look like this: from django.db import transaction class Command(BaseCommand): help = "Check order" def add_arguments(self, parser): parser.add_argument("--order-no", nargs="?", type=str) def handle(self, *args, **options): with transaction.atomic(): order = Orders.objects.get(order_no=options["order_no"]) print(order) A: MySQL is lazily connected to Django. If connection.connection is None means you have not connected to MySQL before. If the connection is closed and you try to access the cursor, you will get an InterfaceError. You can close the database connection so that when you use ORM methods Django will reconnect to open a new connection I have created a helper reconnect() which we are using in production. def reconnect(): from django.db import connections from logging import getLogger for alias in list(connections): conn = connections[alias] if conn.connection and not conn.is_usable(): conn.close() del connections[alias] closed.append(alias) getLogger(__name__).warn('Closing unusable connections: %s', closed) I am further attaching a few helpful examples on maintaining database connections healthy https://www.programcreek.com/python/example/100987/django.db.connection.is_usable
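A corrected sketch of the reconnect() helper quoted in the second answer (the original refers to a closed list it never defines); it relies only on connections, connection, is_usable() and close(), which that answer already uses:

from logging import getLogger
from django.db import connections

def reconnect():
    closed = []
    for alias in list(connections):
        conn = connections[alias]
        if conn.connection and not conn.is_usable():
            conn.close()              # Django opens a fresh connection on the next query
            closed.append(alias)
    if closed:
        getLogger(__name__).warning('Closed unusable connections: %s', closed)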
django.db.utils.InterfaceError: (0, '') when using django model
I have django.db.utils.InterfaceError: (0, '') error on django. I googled around and found this error is related with django mysql connection. What I have done is just like this , from django.core.management.base import BaseCommand from ...models import Issue class Command(BaseCommand): def handle(self, *args, **options): print("dbconnection test:") obj = Issue.objects.get(id=1) print(obj.id) exit() Some articles show the solution with , connection close cursor = connection.cursor() cursor.execute(query) cursor.close() but I don't even have the chance to connection.close() Problem happens here /usr/local/lib/python3.6/site-packages/MySQLdb/connections.py def query(self, query): # Since _mysql releases GIL while querying, we need immutable buffer. if isinstance(query, bytearray): query = bytes(query) _mysql.connection.query(self, query) I really appreciate any help. thank you very much. I added the CONN_MAX_AGE None in db settings but in vain. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', .... 'HOST': env('DATABASE_HOST'), 'PORT': env('DATABASE_PORT'), 'OPTIONS': { 'charset': 'utf8mb4', 'init_command': "SET sql_mode='STRICT_TRANS_TABLES'" }, 'CONN_MAX_AGE' : None ## add here } } These are the stacktrace Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute return self.cursor.execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute return self.cursor.execute(query, args) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 250, in execute self.errorhandler(self, exc, value) File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler raise errorvalue File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 247, in execute res = self._query(query) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 412, in _query rowcount = self._do_query(q) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 375, in _do_query db.query(q) File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 276, in query _mysql.connection.query(self, query) _mysql_exceptions.InterfaceError: (0, '') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 19, in main execute_from_command_line(sys.argv) File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 395, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 328, in run_from_argv self.execute(*args, **cmd_options) File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 369, in execute output = self.handle(*args, **options) File "/code/tweet/management/commands/handle_tweet.py", line 521, in handle twitterApi.search_tweet(keyword) File "/code/tweet/management/commands/handle_tweet.py", line 329, in search_tweet cnt = self.tagByAi() File "/code/tweet/management/commands/handle_tweet.py", line 103, in tagByAi crowded = Issue.objects.get(id=382) File "/usr/local/lib/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method return getattr(self.get_queryset(), 
name)(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 411, in get num = len(clone) File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 258, in __len__ self._fetch_all() File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 1261, in _fetch_all self._result_cache = list(self._iterable_class(self)) File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 57, in __iter__ results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) File "/usr/local/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1137, in execute_sql cursor.execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute return super().execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers return executor(sql, params, many, context) File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute return self.cursor.execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute return self.cursor.execute(sql, params) File "/usr/local/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute return self.cursor.execute(query, args) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 250, in execute self.errorhandler(self, exc, value) File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler raise errorvalue File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 247, in execute res = self._query(query) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 412, in _query rowcount = self._do_query(q) File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 375, in _do_query db.query(q) File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 276, in query _mysql.connection.query(self, query) django.db.utils.InterfaceError: (0, '') myenvironment is here absl-py 0.9.0 asgiref 3.2.7 astor 0.8.1 boto3 1.12.28 botocore 1.15.28 cachetools 4.0.0 certifi 2019.11.28 chardet 3.0.4 cycler 0.10.0 Django 3.0.1 django-environ 0.4.5 django-extensions 2.2.6 django-filter 2.2.0 django-mysql 3.3.0 djangorestframework 3.11.0 docutils 0.15.2 gast 0.2.2 gensim 3.8.1 google-api-core 1.16.0 google-auth 1.11.3 google-cloud-core 1.3.0 google-cloud-storage 1.26.0 google-pasta 0.2.0 google-resumable-media 0.5.0 googleapis-common-protos 1.51.0 grpcio 1.27.2 h5py 2.10.0 idna 2.9 jmespath 0.9.5 Keras 2.3.1 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.0 kiwisolver 1.1.0 Markdown 3.2.1 matplotlib 3.0.3 mecab-python3 0.996.3 mysqlclient 1.3.13 neologdn 0.4 numpy 1.16.2 oauthlib 3.1.0 opt-einsum 3.2.0 pandas 0.24.2 pandas-schema 0.3.5 pandocfilters 1.4.2 pip 20.0.2 protobuf 3.11.3 pyasn1 0.4.8 pyasn1-modules 0.2.8 pyparsing 2.4.6 python-dateutil 2.8.1 pytz 2019.3 PyYAML 5.3.1 requests 2.23.0 requests-oauthlib 1.3.0 rsa 4.0 s3transfer 0.3.3 scikit-learn 0.20.3 scipy 1.4.1 setuptools 45.2.0 six 1.14.0 smart-open 1.10.0 
sqlparse 0.3.1 tensorboard 1.15.0 tensorflow 1.15.2 tensorflow-estimator 1.15.1 tensorflow-hub 0.7.0 termcolor 1.1.0 urllib3 1.25.8 uWSGI 2.0.17 Werkzeug 1.0.0 wheel 0.34.2 wrapt 1.12.1
[ "My code was very similar to what is in the question:\nclass Command(BaseCommand):\n help = \"Check order\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"--order-no\", nargs=\"?\", type=str)\n\n def handle(self, *args, **options):\n order = Orders.objects.get(order_no=options[\"order_no\"])\n print(order)\n\nI figured out by accident that the connection is randomly closed every now and then and it causes InterfaceError. In my case probably some other section of the code uses models on global level or maybe django.setup() is loading module that creates a connection to database, or perhaps connection is kept in memory and re-used between consecutive django calls. (No idea so far, need to dig more).\nBut solution to it was pretty simple, add transaction context manager.\nIt will look like this:\nfrom django.db import transaction \n\n\nclass Command(BaseCommand):\n help = \"Check order\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"--order-no\", nargs=\"?\", type=str)\n\n def handle(self, *args, **options):\n with transaction.atomic():\n order = Orders.objects.get(order_no=options[\"order_no\"])\n print(order)\n\n", "MySQL is lazily connected to Django. If connection.connection is None means you have not connected to MySQL before.\nIf the connection is closed and you try to access the cursor, you will get an InterfaceError. You can close the database connection so that when you use ORM methods Django will reconnect to open a new connection\nI have created a helper reconnect() which we are using in production.\ndef reconnect():\n from django.db import connections\n from logging import getLogger\n \n for alias in list(connections):\n conn = connections[alias]\n if conn.connection and not conn.is_usable():\n conn.close()\n del connections[alias]\n closed.append(alias)\n getLogger(__name__).warn('Closing unusable connections: %s', closed)\n\nI am further attaching a few helpful examples on maintaining database connections healthy\nhttps://www.programcreek.com/python/example/100987/django.db.connection.is_usable\n" ]
[ 1, 0 ]
[]
[]
[ "django", "mysql", "python" ]
stackoverflow_0060852406_django_mysql_python.txt
Q: filtering through M-M relationship in sequelize I have the following table with many to many association between them. content-modal.js const Content = sequelize.define('Content', { title: DataTypes.STRING, name: DataTypes.TEXT, duration: DataTypes.INTEGER, ... }, { timestamps: true, paranoid: true, }); category-modal.js const Category = sequelize.define('Category', { name: DataTypes.STRING, type: DataTypes.STRING, }, { timestamps: true, paranoid: true, }); content-category-modal.js const ContentCategory = sequelize.define('ContentCategory', { id: { type: DataTypes.INTEGER, primaryKey: true, autoIncrement: true }, categoryId: { type: DataTypes.INTEGER, allowNull: false, references: { model: 'Category', key: 'id' }, }, contentId: { type: DataTypes.INTEGER, allowNull: false, references: { model: 'Content', key: 'id' }, }, categoryType: { type: DataTypes.STRING, allowNull: false } }, {}); ContentCategory.associate = function(models) { models.Category.belongsToMany(models.Content, { through: ContentCategory }); models.Content.belongsToMany(models.Category, { through: ContentCategory }); }; Here each content has fixed no of category. So whenever I query through database using JOIN for one of the category I will be getting just the category I have passed on to where clause. For instance say there is the following field in the table: Content Table id title name 2 big_buck_bunny.mp4 1669976024371.mp4 3 newEra.mp4 1669976456758.mp4 Category Table id name type 6 Education topic 7 Animation style 8 Awareness topic 9 Narrative style Content Category Table id contentId categoryId 4 3 6 5 3 7 6 2 8 7 2 7 Here when I filter for all the videos where category is animation using the following sequelize query: //styleId=7, topicId=null const { topicId, styleId } = req.query; return db.Content.findAll({ include: [{ model: db.Category, attributes: ['id', 'name', 'type'], where: { id: 7 } }], }) I get the content with only one of the two categories associated with a video which is as expected from the query: data: [{ "id": 2, "title": "big_buck_bunny.mp4", "name": "1669976024371.mp4", "Categories": [{ "id": 7, "name": "Animation", "type": "style" }], }, { "id": 3, "title": "newEra.mp4", "name": "1669976456758.mp4", "Categories": [{ "id": 7, "name": "Animation", "type": "style" }], }] But I want to get all the category for each video if it matches the queried categoryId. i.e. data: [{ "id": 2, "title": "big_buck_bunny.mp4", "name": "1669976024371.mp4", "Categories": [{ "id": 7, "name": "Animation", "type": "style" },{ "id": 8, "name": "Awareness", "type": "topic" }], }, { "id": 3, "title": "newEra.mp4", "name": "1669976456758.mp4", "Categories": [{ "id": 7, "name": "Animation", "type": "style" },{ "id": 6, "name": "Education", "type": "topic" }], }] If it is possible to do so, please share it in the answer or comment. If any further information is needed for clarification do let me know. I will add them in the question. Note: If nothing is found the last option for me will be to query all the data and then filter it based on the category, but I don't think it is considered a good practice. A: I would advise to add one more m:n association to the same tables with a different alias BUT in this case it would be multiplication of too many records and you might end up with out of memory error. So it would be good to get only ids of Content records that satisfy the condition and afterwards to execute one more similar query with the condition against only Content records using given ids. 
const foundItems = await db.Content.findAll({ attributes: ['id'], include: [{ model: db.Category, attributes: ['id'], where: { id: 7 } }], }) const itemWithAllCategories = db.Content.findAll({ where: { id: foundItems.map(x => x.id) }, include: [{ model: db.Category, attributes: ['id', 'name', 'type'], }], })
filtering through M-M relationship in sequelize
I have the following table with many to many association between them. content-modal.js const Content = sequelize.define('Content', { title: DataTypes.STRING, name: DataTypes.TEXT, duration: DataTypes.INTEGER, ... }, { timestamps: true, paranoid: true, }); category-modal.js const Category = sequelize.define('Category', { name: DataTypes.STRING, type: DataTypes.STRING, }, { timestamps: true, paranoid: true, }); content-category-modal.js const ContentCategory = sequelize.define('ContentCategory', { id: { type: DataTypes.INTEGER, primaryKey: true, autoIncrement: true }, categoryId: { type: DataTypes.INTEGER, allowNull: false, references: { model: 'Category', key: 'id' }, }, contentId: { type: DataTypes.INTEGER, allowNull: false, references: { model: 'Content', key: 'id' }, }, categoryType: { type: DataTypes.STRING, allowNull: false } }, {}); ContentCategory.associate = function(models) { models.Category.belongsToMany(models.Content, { through: ContentCategory }); models.Content.belongsToMany(models.Category, { through: ContentCategory }); }; Here each content has fixed no of category. So whenever I query through database using JOIN for one of the category I will be getting just the category I have passed on to where clause. For instance say there is the following field in the table: Content Table id title name 2 big_buck_bunny.mp4 1669976024371.mp4 3 newEra.mp4 1669976456758.mp4 Category Table id name type 6 Education topic 7 Animation style 8 Awareness topic 9 Narrative style Content Category Table id contentId categoryId 4 3 6 5 3 7 6 2 8 7 2 7 Here when I filter for all the videos where category is animation using the following sequelize query: //styleId=7, topicId=null const { topicId, styleId } = req.query; return db.Content.findAll({ include: [{ model: db.Category, attributes: ['id', 'name', 'type'], where: { id: 7 } }], }) I get the content with only one of the two categories associated with a video which is as expected from the query: data: [{ "id": 2, "title": "big_buck_bunny.mp4", "name": "1669976024371.mp4", "Categories": [{ "id": 7, "name": "Animation", "type": "style" }], }, { "id": 3, "title": "newEra.mp4", "name": "1669976456758.mp4", "Categories": [{ "id": 7, "name": "Animation", "type": "style" }], }] But I want to get all the category for each video if it matches the queried categoryId. i.e. data: [{ "id": 2, "title": "big_buck_bunny.mp4", "name": "1669976024371.mp4", "Categories": [{ "id": 7, "name": "Animation", "type": "style" },{ "id": 8, "name": "Awareness", "type": "topic" }], }, { "id": 3, "title": "newEra.mp4", "name": "1669976456758.mp4", "Categories": [{ "id": 7, "name": "Animation", "type": "style" },{ "id": 6, "name": "Education", "type": "topic" }], }] If it is possible to do so, please share it in the answer or comment. If any further information is needed for clarification do let me know. I will add them in the question. Note: If nothing is found the last option for me will be to query all the data and then filter it based on the category, but I don't think it is considered a good practice.
[ "I would advise to add one more m:n association to the same tables with a different alias BUT in this case it would be multiplication of too many records and you might end up with out of memory error. So it would be good to get only ids of Content records that satisfy the condition and afterwards to execute one more similar query with the condition against only Content records using given ids.\nconst foundItems = await db.Content.findAll({\n attributes: ['id'],\n include: [{ \n model: db.Category,\n attributes: ['id'],\n where: { id: 7 }\n }],\n})\nconst itemWithAllCategories = db.Content.findAll({\n where: {\n id: foundItems.map(x => x.id)\n },\n include: [{ \n model: db.Category,\n attributes: ['id', 'name', 'type'],\n }],\n})\n\n" ]
[ 0 ]
[]
[]
[ "mysql", "sequelize.js" ]
stackoverflow_0074678403_mysql_sequelize.js.txt
Q: flutter how do i set a command that my image will not repeat i have multiple image and it will change on every tap but currently the image will repeat. how do i stop the image to repeat and after the last image is surface, there will have a pop up button. Thanks in advance class _SettingpageState extends State<Settingpage> { List<String> imagelist = [ 'lib/images/image1.png', 'lib/images/image2.png', 'lib/images/image3.png', 'lib/images/image4.png', 'lib/images/image5.png', ]; late String imagePath; @override Widget build(BuildContext context) { imagelist.shuffle(); //shuffle over here var imagePath = imagelist[0]; //store random image over here return Scaffold( backgroundColor: Colors.black, body: Center( child: Container( height: 600, width: 600, color: Colors.black, child: GestureDetector( child: Center(child: Image.asset(imagePath)), onTap: () { setState(() {}); }))), ); } } A: try this: class _SettingpageState extends State<Settingpage> { List<String> imagelist = [ 'lib/images/image1.png', 'lib/images/image2.png', 'lib/images/image3.png', 'lib/images/image4.png', 'lib/images/image5.png', ]; late int imageIndex; late String imagePath; void initRandomImage() { imageIndex = 0; imagelist.shuffle(); imagePath = imagelist[0]; } @override void initState() { initRandomImage(); super.initState(); } void showPopUpButton(BuildContext context) { showDialog( context: context, builder: (context) => AlertDialog( title: Text('shuffle list'), content: TextButton( child: Text('shuffle list'), onPressed: () { initRandomImage(); Navigator.pop(context); setState(() {}); }, ), ), ); } @override Widget build(BuildContext context) { return Scaffold( backgroundColor: Colors.black, body: Center( child: Container( height: 600, width: 600, color: Colors.black, child: GestureDetector( child: Center( child: Center(child: Image.asset(imagePath)), ), onTap: () { if (imageIndex == imagelist.length - 1) { showPopUpButton(context); } else { imageIndex++; setState(() { imagePath = imagelist[imageIndex]; }); } }, ), ), ), ); } }
flutter how do i set a command that my image will not repeat
i have multiple image and it will change on every tap but currently the image will repeat. how do i stop the image to repeat and after the last image is surface, there will have a pop up button. Thanks in advance class _SettingpageState extends State<Settingpage> { List<String> imagelist = [ 'lib/images/image1.png', 'lib/images/image2.png', 'lib/images/image3.png', 'lib/images/image4.png', 'lib/images/image5.png', ]; late String imagePath; @override Widget build(BuildContext context) { imagelist.shuffle(); //shuffle over here var imagePath = imagelist[0]; //store random image over here return Scaffold( backgroundColor: Colors.black, body: Center( child: Container( height: 600, width: 600, color: Colors.black, child: GestureDetector( child: Center(child: Image.asset(imagePath)), onTap: () { setState(() {}); }))), ); } }
[ "try this:\nclass _SettingpageState extends State<Settingpage> {\n List<String> imagelist = [\n 'lib/images/image1.png',\n 'lib/images/image2.png',\n 'lib/images/image3.png',\n 'lib/images/image4.png',\n 'lib/images/image5.png',\n ];\n\n late int imageIndex;\n late String imagePath;\n\n void initRandomImage() {\n imageIndex = 0;\n imagelist.shuffle();\n imagePath = imagelist[0];\n }\n\n @override\n void initState() {\n initRandomImage();\n super.initState();\n }\n\n void showPopUpButton(BuildContext context) {\n showDialog(\n context: context,\n builder: (context) => AlertDialog(\n title: Text('shuffle list'),\n content: TextButton(\n child: Text('shuffle list'),\n onPressed: () {\n initRandomImage();\n Navigator.pop(context);\n setState(() {});\n },\n ),\n ),\n );\n }\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n backgroundColor: Colors.black,\n body: Center(\n child: Container(\n height: 600,\n width: 600,\n color: Colors.black,\n child: GestureDetector(\n child: Center(\n child: Center(child: Image.asset(imagePath)),\n ),\n onTap: () {\n if (imageIndex == imagelist.length - 1) {\n showPopUpButton(context);\n } else {\n imageIndex++;\n setState(() {\n imagePath = imagelist[imageIndex];\n });\n }\n },\n ),\n ),\n ),\n );\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074678642_dart_flutter.txt
Q: Increment a hexadecimal string in C++ I want to increment a hexadecimal string in C++. The hexadecimal string starts "013802bf0000000000000000" and I want to increment it to "013802bf0000000000000001", ... ,"013802bf000000000000000f", ... etc till I hit " 013802bfffffffffffffffff ". #include <iostream> #include <conio.h> #include <cstdlib> #include <string> #include <sstream> using namespace std; using std::string; using std::cout; using std::endl; void main(double argc, char* argv[]) { string input = "013802bf0000000000000000"; for (int i = 0; i < 10; i++) { cout<< input << endl; } _getch(); } I want to increment it to "013802bf0000000000000001", ... ,"013802bf000000000000000f", ... etc till I hit " 013802bfffffffffffffffff ". A: If I counted correctly, you want to print all the variations of 16 hex digits, so you could use a std::uint64_t. I've commented out the line that would loop until the second to last number, and printed the first 16 instead. You'd have to print the last number separately. Otherwise you'd be looping forever since all std::uint64_t are less than or equals static_cast<std::uint64_t>(-1). The fmt library lets you print something with a given format: 013802bf is just a prefix text. {} refers to the second argument passed to fmt::print, in this case, i. :016x is the format specification for i: x for hexadecimal, 16 width, and 0 padding. \n is a newline suffix. [Demo] #include <cstdint> // uint64_t #include <fmt/core.h> #include <string> int main() { //for (std::uint64_t i{0}; i < static_cast<std::uint64_t>(-1); ++i) { for (std::uint64_t i{0}; i < 16; ++i) { fmt::print("013802bf{:016x}\n", i); } fmt::print("...\n"); fmt::print("013802bf{:016x}\n", static_cast<std::uint64_t>(-1)); } // Outputs: // // 013802bf0000000000000000 // 013802bf0000000000000001 // 013802bf0000000000000002 // 013802bf0000000000000003 // 013802bf0000000000000004 // 013802bf0000000000000005 // 013802bf0000000000000006 // 013802bf0000000000000007 // 013802bf0000000000000008 // 013802bf0000000000000009 // 013802bf000000000000000a // 013802bf000000000000000b // 013802bf000000000000000c // 013802bf000000000000000d // 013802bf000000000000000e // 013802bf000000000000000f // ... // 013802bfffffffffffffffff
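If the fmt library is not available, the same output can be produced with the standard headers from the question plus <iomanip> and <cstdint> (sketch, printing the first 16 values):

#include <cstdint>
#include <iomanip>
#include <iostream>
#include <string>

int main() {
    const std::string prefix = "013802bf";
    for (std::uint64_t i = 0; i < 16; ++i) {          // raise the bound to cover more of the range
        std::cout << prefix
                  << std::hex << std::setw(16) << std::setfill('0') << i << '\n';
    }
}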
Increment a hexadecimal string in C++
I want to increment a hexadecimal string in C++. The hexadecimal string starts "013802bf0000000000000000" and I want to increment it to "013802bf0000000000000001", ... ,"013802bf000000000000000f", ... etc till I hit " 013802bfffffffffffffffff ". #include <iostream> #include <conio.h> #include <cstdlib> #include <string> #include <sstream> using namespace std; using std::string; using std::cout; using std::endl; void main(double argc, char* argv[]) { string input = "013802bf0000000000000000"; for (int i = 0; i < 10; i++) { cout<< input << endl; } _getch(); } I want to increment it to "013802bf0000000000000001", ... ,"013802bf000000000000000f", ... etc till I hit " 013802bfffffffffffffffff ".
[ "\nIf I counted correctly, you want to print all the variations of 16 hex digits, so you could use a std::uint64_t.\nI've commented out the line that would loop until the second to last number, and printed the first 16 instead.\nYou'd have to print the last number separately. Otherwise you'd be looping forever since all std::uint64_t are less than or equals static_cast<std::uint64_t>(-1).\n\nThe fmt library lets you print something with a given format:\n\n013802bf is just a prefix text.\n{} refers to the second argument passed to fmt::print, in this case, i.\n:016x is the format specification for i: x for hexadecimal, 16 width, and 0 padding.\n\\n is a newline suffix.\n\n\n\n[Demo]\n#include <cstdint> // uint64_t\n#include <fmt/core.h>\n#include <string>\n\nint main() {\n //for (std::uint64_t i{0}; i < static_cast<std::uint64_t>(-1); ++i) {\n for (std::uint64_t i{0}; i < 16; ++i) {\n fmt::print(\"013802bf{:016x}\\n\", i);\n }\n fmt::print(\"...\\n\");\n fmt::print(\"013802bf{:016x}\\n\", static_cast<std::uint64_t>(-1));\n}\n\n// Outputs:\n//\n// 013802bf0000000000000000\n// 013802bf0000000000000001\n// 013802bf0000000000000002\n// 013802bf0000000000000003\n// 013802bf0000000000000004\n// 013802bf0000000000000005\n// 013802bf0000000000000006\n// 013802bf0000000000000007\n// 013802bf0000000000000008\n// 013802bf0000000000000009\n// 013802bf000000000000000a\n// 013802bf000000000000000b\n// 013802bf000000000000000c\n// 013802bf000000000000000d\n// 013802bf000000000000000e\n// 013802bf000000000000000f\n// ...\n// 013802bfffffffffffffffff\n\n" ]
[ 1 ]
[]
[]
[ "c++", "c++11", "hex", "string" ]
stackoverflow_0074678695_c++_c++11_hex_string.txt
Q: Az login Fails using Personal Microsoft Account - AADSTS500200 For a long time I was using Terraform with Azure and it worked fine. Now, for no apparent reason, the az cli command doesn't work. I'm getting the following error: AADSTS500200: User account 'xxxx ' is a personal Microsoft account. Personal Microsoft accounts are not supported for this application unless explicitly invited to an organization. Try signing out and signing back in with an organizational account. I've already upgraded az cli to version 2.42 but the problem persists. Even in incognito mode I couldn't log in to Azure. Via the browser (instead of az login) I am able to log in to Azure without issues. A: Try the commands below to clear the cache. az account clear az login or az login --tenant [tenant id] A: The problem was related to the az cloud list. The active Azure cloud was "AzureUSGovernment" instead of "AzureCloud". Once "AzureCloud" was set as the active cloud, the issue was fixed. az cloud set --name AzureCloud
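Put together, the two answers boil down to this sequence (the tenant id form is optional):

az cloud list --output table     # check which cloud is currently active
az cloud set --name AzureCloud   # switch back from AzureUSGovernment
az account clear                 # drop any cached credentials
az login                         # or: az login --tenant <tenant id>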
Az login Fails using Personal Microsoft Account - AADSTS500200
For a long time I was using Terraform with Azure and it worked fine. Now, for no apparent reason, the az cli command doesn't work. I'm getting the following error: AADSTS500200: User account 'xxxx ' is a personal Microsoft account. Personal Microsoft accounts are not supported for this application unless explicitly invited to an organization. Try signing out and signing back in with an organizational account. I've already upgraded az cli to version 2.42 but the problem persists. Even in incognito mode I couldn't log in to Azure. Via the browser (instead of az login) I am able to log in to Azure without issues.
[ "Try below commands to clear cache.\naz account clear\naz login\nor\naz login --tenant [tenant id]\n", "The problem was related with Az cloud list. The active azure cloud was \"AzureUSGovernment\" instead of \"AzureCloud\". Once enabled \"AzureCloud\" issue got fixed.\naz cloud set --name AzureCloud\n\n" ]
[ 0, 0 ]
[]
[]
[ "azure" ]
stackoverflow_0074677961_azure.txt
Q: Does the mean() function in python create a list? I saw some code that computes the mean of one column using another column as the grouping key for the groupby() function. I want to know what total_acc_avg[6] means. Is total_acc_avg a list? Is 6 the index of the list? import pandas as pd data = pd.DataFrame({'mort_acc':[6, None, 3, None, 2, None, 9, 8], # Create pandas DataFrame 'x2':range(11, 19), 'total_acc':[1, 6, 2, 3, 3, 3, 6, 1], 'group2':['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']}) print(data) total_acc_avg=data.groupby(by='total_acc').mean().mort_acc print(total_acc_avg[6]) A: total_acc_avg is a pandas Series object (not a list) that contains the average of the mort_acc column for each group of the total_acc column; the distinct total_acc values form its index. Indexing it with 6 is a label-based lookup, not a positional one: total_acc_avg[6] returns the average mort_acc for the group whose total_acc value is 6.
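For the sample frame in the question, the grouped result and the lookup come out as below (output written by hand here, so treat it as illustrative; recent pandas versions may need .mean(numeric_only=True) because of the text column group2):

total_acc_avg = data.groupby(by='total_acc').mean().mort_acc
print(total_acc_avg)
# total_acc
# 1    7.0    <- mean of 6 and 8, the rows where total_acc == 1
# 2    3.0
# 3    2.0    <- the None values are ignored
# 6    9.0
# Name: mort_acc, dtype: float64
print(total_acc_avg[6])       # 9.0, a label lookup: the group whose total_acc is 6
print(total_acc_avg.loc[6])   # the same lookup written explicitly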
Does the mean() function in python create a list?
I saw some code that computes the mean of one column using another column as the grouping key for the groupby() function. I want to know what total_acc_avg[6] means. Is total_acc_avg a list? Is 6 the index of the list? import pandas as pd data = pd.DataFrame({'mort_acc':[6, None, 3, None, 2, None, 9, 8], # Create pandas DataFrame 'x2':range(11, 19), 'total_acc':[1, 6, 2, 3, 3, 3, 6, 1], 'group2':['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']}) print(data) total_acc_avg=data.groupby(by='total_acc').mean().mort_acc print(total_acc_avg[6])
[ "total_acc_avg is a pandas Series object that contains the average of the mort_acc column, grouped by the total_acc column. In this case, the 6th index of the total_acc_avg Series contains the average value of the mort_acc column for the group with total_acc value of 6.\n" ]
[ 1 ]
[]
[]
[ "list", "mean", "pandas", "python" ]
stackoverflow_0074679258_list_mean_pandas_python.txt
Q: Pass array as input parameter in CSharpScript I try to run method by CSharpScript. Method: public class TaskSolution { public int[] Calculate(int[] inputValue) { return inputValue; } } I tried this solution: var script = CSharpScript.Create(solution.Code); var input = new int[3] { 1, 2, 3 }; var call = await script.ContinueWith<int[]>($"new TaskSolution().Calculate({input})").RunAsync(); But it throws Microsoft.CodeAnalysis.Scripting.CompilationErrorException with text "(1,43): error CS0443: Syntax error; value expected" and no more information inside. When I run similar method but with simple input parameter (as int or string) - it runs successfully. But I meet problems with using arrays. A: $"new TaskSolution().Calculate({input})" evaluates to "new TaskSolution().Calculate(System.Int32[])", which is not valid code. input would be treated as string, not passed as the actual array.
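One way around it, sketched below, is to hand the array to the script as a globals object instead of pasting it into the code string; the type and property names here are made up for the example, and the snippet assumes Microsoft.CodeAnalysis.CSharp.Scripting is referenced and the code runs in an async method:

using Microsoft.CodeAnalysis.CSharp.Scripting;

public class ScriptGlobals
{
    public int[] Input { get; set; }
}

// build the script against the globals type, then reference Input inside the script code
var script = CSharpScript.Create(solution.Code, globalsType: typeof(ScriptGlobals));
var call = script.ContinueWith<int[]>("new TaskSolution().Calculate(Input)");
var state = await call.RunAsync(new ScriptGlobals { Input = new[] { 1, 2, 3 } });
var result = state.ReturnValue;   // int[] { 1, 2, 3 }

If the value really has to be inlined, the interpolation must produce valid C# source instead, for example $"new TaskSolution().Calculate(new[] {{ {string.Join(", ", input)} }})".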
Pass array as input parameter in CSharpScript
I try to run method by CSharpScript. Method: public class TaskSolution { public int[] Calculate(int[] inputValue) { return inputValue; } } I tried this solution: var script = CSharpScript.Create(solution.Code); var input = new int[3] { 1, 2, 3 }; var call = await script.ContinueWith<int[]>($"new TaskSolution().Calculate({input})").RunAsync(); But it throws Microsoft.CodeAnalysis.Scripting.CompilationErrorException with text "(1,43): error CS0443: Syntax error; value expected" and no more information inside. When I run similar method but with simple input parameter (as int or string) - it runs successfully. But I meet problems with using arrays.
[ "$\"new TaskSolution().Calculate({input})\" evaluates to \"new TaskSolution().Calculate(System.Int32[])\", which is not valid code. input would be treated as string, not passed as the actual array.\n" ]
[ 0 ]
[]
[]
[ ".net", "c#", "csharpscript", "reflection", "roslyn" ]
stackoverflow_0074678356_.net_c#_csharpscript_reflection_roslyn.txt
Q: Tkinter make a frame fill whole column I'm trying to create a simple GUI using Tkinter module. I have a layout consisting of two columns (with weights 1 and 2). Now, I'd like my two widgets that I add (cfg and cfgx) to fill up the whole column in which they are placed. How could I achieve such a thing with my current setup? Thanks in advance import tkinter as tk from components.ConfigCreator import ConfigCreator WIDTH = 800 HEIGHT = 600 POS_X = 300 POS_Y = 200 class MainApplication(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) # root self.parent = parent self._setup_size_and_positioning() self._setup_layout() self._setup_widgets() def _setup_size_and_positioning(self) -> None: self.winfo_toplevel().title('Test app') self.winfo_toplevel().geometry(f"{WIDTH}x{HEIGHT}+{POS_X}+{POS_Y}") self.config(width=WIDTH, height=HEIGHT) def _setup_layout(self) -> None: self.grid(row=0, column=0) self.columnconfigure(0, weight=1) self.columnconfigure(1, weight=2) def _setup_widgets(self) -> None: cfg = ConfigCreator(self) cfg.grid(row=0, column=0) cfg.config(bg="limegreen") cfgx = ConfigCreator(self) cfgx.grid(row=0, column=1) cfgx.config(bg="skyblue") if __name__ == "__main__": root = tk.Tk() MainApplication(root).pack(side="top", fill="both", expand=True) root.mainloop() @edit #ConfigCreator.py import tkinter as tk class ConfigCreator(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) A: You've configured the weight on the columns, but you haven't given any weight to any rows. Because of that, and because the frames by default are only 1 pixel tall, the columns will be virtually invisible. To fix this you can give non-zero weight to one or more rows. For example, self.rowconfigure(0, weight=1) You also haven't configured the contents in each column to fill the space allocated. You should add sticky="nsew" to get each of the inner frames to expand to fill. For example, cfg.grid(row=0, column=0, sticky="nsew")
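Applied to the class in the question, the answer's two fixes would look roughly like this (only the two changed methods are shown; the self.grid(...) call is dropped because the instance is already packed by the caller, and mixing geometry managers on the same widget invites trouble):

    def _setup_layout(self) -> None:
        self.columnconfigure(0, weight=1)
        self.columnconfigure(1, weight=2)
        self.rowconfigure(0, weight=1)              # let row 0 take all vertical space

    def _setup_widgets(self) -> None:
        cfg = ConfigCreator(self)
        cfg.grid(row=0, column=0, sticky="nsew")    # stretch to fill the whole cell
        cfg.config(bg="limegreen")

        cfgx = ConfigCreator(self)
        cfgx.grid(row=0, column=1, sticky="nsew")
        cfgx.config(bg="skyblue")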
Tkinter make a frame fill whole column
I'm trying to create a simple GUI using Tkinter module. I have a layout consisting of two columns (with weights 1 and 2). Now, I'd like my two widgets that I add (cfg and cfgx) to fill up the whole column in which they are placed. How could I achieve such a thing with my current setup? Thanks in advance import tkinter as tk from components.ConfigCreator import ConfigCreator WIDTH = 800 HEIGHT = 600 POS_X = 300 POS_Y = 200 class MainApplication(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) # root self.parent = parent self._setup_size_and_positioning() self._setup_layout() self._setup_widgets() def _setup_size_and_positioning(self) -> None: self.winfo_toplevel().title('Test app') self.winfo_toplevel().geometry(f"{WIDTH}x{HEIGHT}+{POS_X}+{POS_Y}") self.config(width=WIDTH, height=HEIGHT) def _setup_layout(self) -> None: self.grid(row=0, column=0) self.columnconfigure(0, weight=1) self.columnconfigure(1, weight=2) def _setup_widgets(self) -> None: cfg = ConfigCreator(self) cfg.grid(row=0, column=0) cfg.config(bg="limegreen") cfgx = ConfigCreator(self) cfgx.grid(row=0, column=1) cfgx.config(bg="skyblue") if __name__ == "__main__": root = tk.Tk() MainApplication(root).pack(side="top", fill="both", expand=True) root.mainloop() @edit #ConfigCreator.py import tkinter as tk class ConfigCreator(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs)
[ "You've configured the weight on the columns, but you haven't given any weight to any rows. Because of that, and because the frames by default are only 1 pixel tall, the columns will be virtually invisible.\nTo fix this you can give non-zero weight to one or more rows. For example, self.rowconfigure(0, weight=1)\nYou also haven't configured the contents in each column to fill the space allocated. You should add sticky=\"nsew\" to get each of the inner frames to expand to fill. For example, cfg.grid(row=0, column=0, sticky=\"nsew\")\n\n" ]
[ 2 ]
[]
[]
[ "python", "tkinter", "user_interface" ]
stackoverflow_0074677698_python_tkinter_user_interface.txt
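The Tkinter answer above names the two missing pieces (a row weight and sticky="nsew") without showing them together. Below is a minimal, hedged sketch of how that fix could look, using plain tk.Frame children in place of the asker's ConfigCreator class; the sizes and colors are taken from the question, everything else is illustrative.

import tkinter as tk

root = tk.Tk()
root.geometry("800x600")

# Container frame packed to fill the window, as in the question.
app = tk.Frame(root)
app.pack(side="top", fill="both", expand=True)

# Column weights 1 and 2 from the question, plus the row weight the answer says was missing.
app.columnconfigure(0, weight=1)
app.columnconfigure(1, weight=2)
app.rowconfigure(0, weight=1)

# sticky="nsew" makes each child frame expand to fill its grid cell.
left = tk.Frame(app, bg="limegreen")
left.grid(row=0, column=0, sticky="nsew")

right = tk.Frame(app, bg="skyblue")
right.grid(row=0, column=1, sticky="nsew")

root.mainloop()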
Q: import matplotlib.pyplot as plt i have python 3.2.3 on windows. installed matplotlib i'm trying to do this: import matplotlib.pyplot as plt i get this: Traceback (most recent call last): File "<pyshell#10>", line 1, in <module> import matplotlib.pyplot as plt File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\pyplot.py", line 24, in <module> import matplotlib.colorbar File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\colorbar.py", line 29, in <module> import matplotlib.collections as collections File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\collections.py", line 23, in <module> import matplotlib.backend_bases as backend_bases File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\backend_bases.py", line 50, in <module> import matplotlib.textpath as textpath File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\textpath.py", line 5, in <module> import urllib.request, urllib.parse, urllib.error ImportError: No module named urllib.request any idea? A: You need to install urllib. This is required by matplotlib A: Install Python 3.10 and update pip and then try to install Matplotlib and also install urllib using pip install urllib
import matplotlib.pyplot as plt
I have Python 3.2.3 on Windows and installed matplotlib. I'm trying to do this: import matplotlib.pyplot as plt and I get this: Traceback (most recent call last): File "<pyshell#10>", line 1, in <module> import matplotlib.pyplot as plt File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\pyplot.py", line 24, in <module> import matplotlib.colorbar File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\colorbar.py", line 29, in <module> import matplotlib.collections as collections File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\collections.py", line 23, in <module> import matplotlib.backend_bases as backend_bases File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\backend_bases.py", line 50, in <module> import matplotlib.textpath as textpath File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\textpath.py", line 5, in <module> import urllib.request, urllib.parse, urllib.error ImportError: No module named urllib.request Any idea?
[ "You need to install urllib.\nThis is required by matplotlib\n", "Install Python 3.10 and update pip and then try to install Matplotlib and also install urllib using pip install urllib\n" ]
[ 0, 0 ]
[ "Look Install urllib from : https://pypi.python.org/pypi/urllib2_file/0.2.1\nOr if you was instaled pip you can use sudo pip install urllib\n" ]
[ -1 ]
[ "matplotlib", "python", "python_3.x" ]
stackoverflow_0023370715_matplotlib_python_python_3.x.txt
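A small diagnostic sketch related to the matplotlib entry above. It only touches standard-library and matplotlib attributes; note that urllib.request ships with the Python 3 standard library, so checking which interpreter and which matplotlib build are actually in use is usually the first step when an import error like this appears. This is a generic check, not a guaranteed fix.

import sys
print(sys.version)          # confirm which interpreter is actually running

import urllib.request       # part of the Python 3 standard library; no pip install needed
print(urllib.request.__file__)

import matplotlib           # if this import still fails, the installed matplotlib build
print(matplotlib.__version__)   # likely does not match this interpreter version
print(matplotlib.__file__)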
Q: What is the correct type annotation for bot3.client(service) With boto3 we can create a client for any service of our choice. For example, client = boto3.client('s3') Then if we check the type of the returned object, print(type(client)) <class 'botocore.client.S3'> But since there is no botocore.client.S3 (since it is dynamically created), how do we strongly type the client? The closest I can think of is botocore.client.BaseClient as shown below, which is far from S3 type. from botocore.client import BaseClient client: BaseClient = boto3.client('s3') Any idea? A: You could use the botocore.client.S3 class directly as a type hint, even though it doesn't exist at runtime. This works because type hints are only checked at compile time and are not used at runtime: from botocore.client import S3 client: S3 = boto3.client('s3') Alternatively, you could create a custom type hint for your S3 client by creating a class that extends BaseClient and use that as the type hint for your client variable: from botocore.client import BaseClient class S3Client(BaseClient): pass client: S3Client = boto3.client('s3') Using the first option, you can ensure that your client variable has the correct type and get the benefits of type checking, without having to create a custom class that extends BaseClient. A: Have a look at mypy-boto3: import boto3 from mypy_boto3_s3 import S3Client from typing_extensions import reveal_type def foo(client: S3Client): ... s3_client = boto3.client("s3") reveal_type(s3_client) # revealed type is S3Client foo(s3_client) # OK!
What is the correct type annotation for boto3.client(service)
With boto3 we can create a client for any service of our choice. For example, client = boto3.client('s3') Then if we check the type of the returned object, print(type(client)) <class 'botocore.client.S3'> But since there is no botocore.client.S3 (since it is dynamically created), how do we strongly type the client? The closest I can think of is botocore.client.BaseClient as shown below, which is far from S3 type. from botocore.client import BaseClient client: BaseClient = boto3.client('s3') Any idea?
[ "You could use the botocore.client.S3 class directly as a type hint, even though it doesn't exist at runtime. This works because type hints are only checked at compile time and are not used at runtime:\nfrom botocore.client import S3\n\nclient: S3 = boto3.client('s3')\n\nAlternatively, you could create a custom type hint for your S3 client by creating a class that extends BaseClient and use that as the type hint for your client variable:\nfrom botocore.client import BaseClient\n\nclass S3Client(BaseClient):\n pass\n\nclient: S3Client = boto3.client('s3')\n\nUsing the first option, you can ensure that your client variable has the correct type and get the benefits of type checking, without having to create a custom class that extends BaseClient.\n", "Have a look at mypy-boto3:\nimport boto3\nfrom mypy_boto3_s3 import S3Client\nfrom typing_extensions import reveal_type\n\n\ndef foo(client: S3Client):\n ...\n\n\ns3_client = boto3.client(\"s3\")\n\nreveal_type(s3_client) # revealed type is S3Client\n\nfoo(s3_client) # OK!\n\n" ]
[ 0, 0 ]
[]
[]
[ "boto3", "python_3.x", "python_typing" ]
stackoverflow_0074677544_boto3_python_3.x_python_typing.txt
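Building on the mypy-boto3 answer in the entry above, a common pattern is to keep the stub-only import behind typing.TYPE_CHECKING, so the annotation is visible to the type checker without requiring the stubs at runtime. This is a sketch and assumes the mypy-boto3-s3 package (which provides the mypy_boto3_s3 module used in that answer) is installed in the development environment; list_bucket_names is just an illustrative helper.

from __future__ import annotations

from typing import TYPE_CHECKING

import boto3

if TYPE_CHECKING:
    # Stub-only import: mypy/pyright see S3Client, nothing is imported at runtime.
    from mypy_boto3_s3 import S3Client


def list_bucket_names(client: S3Client) -> list[str]:
    # list_buckets is a standard S3 API call returning a dict with a "Buckets" list.
    response = client.list_buckets()
    return [bucket["Name"] for bucket in response["Buckets"]]


if __name__ == "__main__":
    s3 = boto3.client("s3")  # credentials and region come from the usual boto3 configuration
    print(list_bucket_names(s3))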
Q: XCUI Distributed UI Testing I am working on XCUI test automation for an application involving two devices- for instance a video call application. Currently I have two projects- project A running device 1's automation and project B running device 2's automation. Is there a way in XCUI to achieve this with a single project? A: Currently, there is not a way to coordinate 2 devices using XCTest. With some effort, you could set up your own server that coordinates the actions between these 2 devices by pinging the server before each XCTest action to check the status of your other device.
XCUI Distributed UI Testing
I am working on XCUI test automation for an application involving two devices, for instance a video call application. Currently I have two projects: project A runs device 1's automation and project B runs device 2's automation. Is there a way in XCUI to achieve this with a single project?
[ "Currently, there is not a way to coordinate 2 devices using XCTest.\nWith some effort, you could set up your own server that coordinates the actions between these 2 devices by pinging the server before each XCTest action to check the status of your other device.\n" ]
[ 0 ]
[]
[]
[ "xcode", "xctest", "xcuitest" ]
stackoverflow_0073733091_xcode_xctest_xcuitest.txt
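The answer in the XCUI entry above suggests coordinating the two devices through a server that each test run pings before its next action. Purely as an illustration of that idea (this is not part of XCTest, and every name and route here is invented for the sketch), a tiny shared-status endpoint could look like this:

# Minimal shared-status server: each device POSTs its own state and GETs the other's.
# Illustrative only; a real setup would need authentication, timeouts and error handling.
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {}  # e.g. {"deviceA": "ready", "deviceB": "in_call"}


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /status/deviceA -> current state of deviceA (or "unknown")
        name = self.path.rsplit("/", 1)[-1]
        body = STATE.get(name, "unknown").encode()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # POST /status/deviceA with the new state in the request body
        name = self.path.rsplit("/", 1)[-1]
        length = int(self.headers.get("Content-Length", 0))
        STATE[name] = self.rfile.read(length).decode() or "unknown"
        self.send_response(204)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()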
Q: menuItem Action not executed in a swift app I have a swift app for MacOS, and have a menu with sub menuitems. I add the menu from the appdelegate and assign an action via the interface builder, but the target action is never called: statusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength) if let menu = menu { statusItem?.menu = menu menu.delegate = self } pauseMenuItem.target = self pauseMenuItem.action = #selector(pausePressed(_:)) We can see from the bullet on the left of IBAction and from the InterfaceBuilder that the link is well done, but whenever I press on the corresponding menuitem, the action is not executed: What am I missing ? A: I ended up creating the menu programmatically and it worked: let menu = NSMenu() let pauseButton = NSMenuItem(title: "Pause", action: #selector(pausePressed), keyEquivalent: "") menu.addItem(pauseButton)
menuItem Action not executed in a swift app
I have a Swift app for macOS with a menu containing sub menu items. I add the menu from the app delegate and assign an action via Interface Builder, but the target action is never called: statusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength) if let menu = menu { statusItem?.menu = menu menu.delegate = self } pauseMenuItem.target = self pauseMenuItem.action = #selector(pausePressed(_:)) We can see from the bullet on the left of the IBAction and from Interface Builder that the link is wired up correctly, but whenever I press the corresponding menu item, the action is not executed: What am I missing?
[ "I ended up creating the menu programmatically and it worked:\nlet menu = NSMenu()\n\nlet pauseButton = NSMenuItem(title: \"Pause\", action: #selector(pausePressed), keyEquivalent: \"\")\nmenu.addItem(pauseButton)\n\n" ]
[ 0 ]
[]
[]
[ "interface_builder", "macos", "menuitem", "swift", "xcode" ]
stackoverflow_0074674594_interface_builder_macos_menuitem_swift_xcode.txt
Q: adb command to rotate the Android device (not emulator) display? Is there any "adb command" or anything by which Android device (not emulator) display can get rotate i.e landscape to potrait and vice versa A: I think this should work. To change the screen orientation to lanscape : service call window 18 i32 1 and to change it to portrait : service call window 18 i32 0. A: I've found the solution: adb shell content insert --uri content://settings/system --bind name:s:user_rotation --bind value:i:1 1 at the end means that the device rotation will be set to "landscape". 0 - "portrait". 2 - "reversed portrait". 3 - "other landscape". Sources: https://gist.github.com/whunter/832c844db80dda873a81, https://www.declarecode.com/code-solutions/shell/adb-shell-command-to-rotate-screen.
adb command to rotate the Android device (not emulator) display?
Is there any "adb command" or anything by which an Android device's (not emulator's) display can be rotated, i.e. landscape to portrait and vice versa?
[ "I think this should work. To change the screen orientation to lanscape :\nservice call window 18 i32 1 \nand to change it to portrait : \nservice call window 18 i32 0.\n", "I've found the solution:\nadb shell content insert --uri content://settings/system --bind name:s:user_rotation --bind value:i:1\n\n\n1 at the end means that the device rotation will be set to \"landscape\".\n0 - \"portrait\".\n2 - \"reversed portrait\".\n3 - \"other landscape\".\n\nSources:\n\nhttps://gist.github.com/whunter/832c844db80dda873a81,\nhttps://www.declarecode.com/code-solutions/shell/adb-shell-command-to-rotate-screen.\n\n" ]
[ 0, 0 ]
[]
[]
[ "android" ]
stackoverflow_0023244545_android.txt
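If the rotation commands from the adb entry above need to be scripted, they can be driven from Python with subprocess. This sketch assumes adb is on PATH and a single device is attached; the content insert command is taken verbatim from the second answer, while the accelerometer_rotation step is an extra, commonly needed setting that the answers do not mention.

import subprocess

# Rotation values listed in the second answer above.
ROTATIONS = {"portrait": 0, "landscape": 1, "reversed_portrait": 2, "other_landscape": 3}


def set_rotation(orientation: str) -> None:
    value = ROTATIONS[orientation]
    # Disable accelerometer-based auto-rotation first, otherwise the sensor can
    # immediately override user_rotation (not mentioned in the answers above).
    subprocess.run(
        ["adb", "shell", "settings", "put", "system", "accelerometer_rotation", "0"],
        check=True,
    )
    subprocess.run(
        ["adb", "shell", "content", "insert",
         "--uri", "content://settings/system",
         "--bind", "name:s:user_rotation",
         "--bind", f"value:i:{value}"],
        check=True,
    )


if __name__ == "__main__":
    set_rotation("landscape")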
Q: How to exclude an object from KInematic Trajectory Optimization #Drake I want to enable a gripper to do pick&place an object from a shelf. For this, I am trying to use trajectry optimization using KInematic Trajectory Optimization and I am using this deepnote. But the problem is when I add an object to the plant, also the object is considered in the optimization prog. How can I exclude this object from being considered in the optimization. I am kinda new to the drake and I kinda don't know how to exclude an object from conraints. Thanks in advance, I am kinda in rush :) drake A: Short answer: I think you want to create a separate MultibodyPlant to your KinematicTrajectoryOptimization that doesn't have the objects in it. More generally, it's very common to have multiple plants flowing through your robot + control stack, e.g. one for the "robot model" and another for the "robot + objects in the world" model. (It's reasonable that the physics engine's model of the world would be different than the model in the head of the robot). Currently in Drake, it's more recommended to just load two independent plants, rather than try to add/remove objects from an existing plant.
How to exclude an object from Kinematic Trajectory Optimization #Drake
I want to enable a gripper to pick and place an object from a shelf. For this, I am trying to use trajectory optimization via Kinematic Trajectory Optimization, and I am using this deepnote. The problem is that when I add an object to the plant, the object is also considered in the optimization program. How can I exclude this object from being considered in the optimization? I am fairly new to Drake and don't know how to exclude an object from the constraints. Thanks in advance, I am in a bit of a rush :) drake
[ "Short answer: I think you want to create a separate MultibodyPlant to your KinematicTrajectoryOptimization that doesn't have the objects in it.\nMore generally, it's very common to have multiple plants flowing through your robot + control stack, e.g. one for the \"robot model\" and another for the \"robot + objects in the world\" model. (It's reasonable that the physics engine's model of the world would be different than the model in the head of the robot). Currently in Drake, it's more recommended to just load two independent plants, rather than try to add/remove objects from an existing plant.\n" ]
[ 1 ]
[]
[]
[ "drake" ]
stackoverflow_0074678835_drake.txt
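The Drake answer above recommends two separate MultibodyPlants: one with the objects for simulation and a robot-only one for the optimizer, so the object never contributes decision variables or constraints. A rough pydrake sketch of that split is below; the model file paths are placeholders, and AddModels is the parser call in recent Drake releases (older versions used AddModelFromFile), so adjust to the installed version.

from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph, MultibodyPlant
from pydrake.systems.framework import DiagramBuilder

ROBOT_MODEL = "models/arm_with_gripper.sdf"   # placeholder path
OBJECT_MODEL = "models/shelf_object.sdf"      # placeholder path

# 1) Simulation plant: everything the physics engine should know about.
builder = DiagramBuilder()
sim_plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
sim_parser = Parser(sim_plant)
sim_parser.AddModels(ROBOT_MODEL)
sim_parser.AddModels(OBJECT_MODEL)
sim_plant.Finalize()

# 2) Optimization plant: the robot only.  Hand this plant (not sim_plant) to
#    KinematicTrajectoryOptimization and to any constraints built for it.
opt_plant = MultibodyPlant(time_step=0.0)
Parser(opt_plant).AddModels(ROBOT_MODEL)
opt_plant.Finalize()

print(f"simulation plant positions: {sim_plant.num_positions()}, "
      f"optimization plant positions: {opt_plant.num_positions()}")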
Q: My ESP32 is scanning all the nearby WiFi Networks but it does not connect to my WiFi Router using Arduino IDE (Return Value of WiFi.status API = 6) I am trying to connect my ESP32 to my Wifi Router using Arduino IDE but it is not connecting & giving a connection failed or disconnected status. I also confirmed it is scanning all the available Wifi Networks but not connecting to my router. I even tried with another ESP32 board but the problem is still there. I tried this code below. This code would scan/give the available Wifi networks and it did. Also, I was expecting this code to run smoothly but my ESP32 won't connect to my Wifi router. #include<WiFi.h> const char *ssid = "my_SSID"; const char *password = "my_Password"; void setup() { Serial.begin(115200); delay(2000); WiFi.mode(WIFI_STA); WiFi.disconnect(); delay(100); Serial.println("scan start"); // WiFi.scanNetworks will return the number of networks found int n = WiFi.scanNetworks(); Serial.println("scan done"); if (n == 0) { Serial.println("no networks found"); } else { Serial.print(n); Serial.println(" networks found");} // Connect to my network. WiFi.begin(ssid,password); // Check Status of your WiFi Connection int x = WiFi.status(); // If x=3 (Connected to Network) & If x=6 (Disconnected from Network) Serial.print("WiFi Connection Status is "); Serial.println(x); while(WiFi.status() != WL_CONNECTED) { delay(1000); Serial.println("WiFi Connection Failed..."); WiFi.disconnect(); WiFi.reconnect(); } //Print local IP address and start web server Serial.println("\nConnecting"); Serial.println(""); Serial.println("WiFi connected."); Serial.println("ESP32 IP address: "); Serial.println(WiFi.localIP()); } void loop() {} 1st image shows the output of my serial monitor. 2nd inamge shows the return value for WiFi.status function A: Try this code: #include<WiFi.h> const char *ssid = "YourSSID"; const char *password = "YourPassword"; void initWiFi() { WiFi.mode(WIFI_STA); WiFi.begin(ssid, password); Serial.print("Connecting to WiFi .."); while (WiFi.status() != WL_CONNECTED) { Serial.print('.'); delay(1000); } Serial.println(WiFi.localIP()); } void setup() { Serial.begin(115200); initWiFi(); Serial.print("RRSI: "); Serial.println(WiFi.RSSI()); } void loop() { // put your main code here, to run repeatedly: } A: You can also use your code in this simplified form. As mode needs to be called again just before Wifi.begin(). void setup() { Serial.begin(115200); //initWiFi(); WiFi.mode(WIFI_STA); WiFi.disconnect(); delay(100); WiFi.mode(WIFI_STA); WiFi.begin(ssid, password); Serial.print("Connecting to WiFi .."); while (WiFi.status() != WL_CONNECTED) { Serial.print('.'); delay(1000); } ///////////////////////////// Serial.print("RRSI: "); Serial.println(WiFi.RSSI()); } void loop() { // put your main code here, to run repeatedly: }
My ESP32 is scanning all the nearby WiFi Networks but it does not connect to my WiFi Router using Arduino IDE (Return Value of WiFi.status API = 6)
I am trying to connect my ESP32 to my Wifi Router using Arduino IDE but it is not connecting & giving a connection failed or disconnected status. I also confirmed it is scanning all the available Wifi Networks but not connecting to my router. I even tried with another ESP32 board but the problem is still there. I tried this code below. This code would scan/give the available Wifi networks and it did. Also, I was expecting this code to run smoothly but my ESP32 won't connect to my Wifi router. #include<WiFi.h> const char *ssid = "my_SSID"; const char *password = "my_Password"; void setup() { Serial.begin(115200); delay(2000); WiFi.mode(WIFI_STA); WiFi.disconnect(); delay(100); Serial.println("scan start"); // WiFi.scanNetworks will return the number of networks found int n = WiFi.scanNetworks(); Serial.println("scan done"); if (n == 0) { Serial.println("no networks found"); } else { Serial.print(n); Serial.println(" networks found");} // Connect to my network. WiFi.begin(ssid,password); // Check Status of your WiFi Connection int x = WiFi.status(); // If x=3 (Connected to Network) & If x=6 (Disconnected from Network) Serial.print("WiFi Connection Status is "); Serial.println(x); while(WiFi.status() != WL_CONNECTED) { delay(1000); Serial.println("WiFi Connection Failed..."); WiFi.disconnect(); WiFi.reconnect(); } //Print local IP address and start web server Serial.println("\nConnecting"); Serial.println(""); Serial.println("WiFi connected."); Serial.println("ESP32 IP address: "); Serial.println(WiFi.localIP()); } void loop() {} 1st image shows the output of my serial monitor. 2nd inamge shows the return value for WiFi.status function
[ "Try this code:\n#include<WiFi.h>\n\nconst char *ssid = \"YourSSID\"; \nconst char *password = \"YourPassword\";\n \nvoid initWiFi() {\n WiFi.mode(WIFI_STA);\n WiFi.begin(ssid, password);\n Serial.print(\"Connecting to WiFi ..\");\n while (WiFi.status() != WL_CONNECTED) {\n Serial.print('.');\n delay(1000);\n }\n Serial.println(WiFi.localIP());\n }\n \n void setup() {\n Serial.begin(115200);\n initWiFi();\n Serial.print(\"RRSI: \");\n Serial.println(WiFi.RSSI());\n }\n \n void loop() {\n // put your main code here, to run repeatedly:\n }\n\n", "You can also use your code in this simplified form. As mode needs to be called again just before Wifi.begin().\nvoid setup() {\n Serial.begin(115200);\n //initWiFi();\n WiFi.mode(WIFI_STA);\n WiFi.disconnect();\n delay(100);\nWiFi.mode(WIFI_STA);\nWiFi.begin(ssid, password);\nSerial.print(\"Connecting to WiFi ..\");\n while (WiFi.status() != WL_CONNECTED) {\n Serial.print('.');\n delay(1000);\n }\n\n /////////////////////////////\n Serial.print(\"RRSI: \");\n Serial.println(WiFi.RSSI());\n}\n\nvoid loop() {\n // put your main code here, to run repeatedly:\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "arduino", "arduino_ide", "esp32", "esp8266wifi", "wifi" ]
stackoverflow_0074677361_arduino_arduino_ide_esp32_esp8266wifi_wifi.txt
Q: Angular2 .gitignore I'm setting up an Angular 2 sample project. Is my .gitignore enough already or did I miss anything? node_modules typings jspm_packages bower_components app/**/*.js app/**/*.map A: angular-cli generates # See http://help.github.com/ignore-files/ for more about ignoring files. # compiled output /dist /tmp /out-tsc # dependencies /node_modules # IDEs and editors /.idea .project .classpath .c9/ *.launch .settings/ *.sublime-workspace # IDE - VSCode .vscode/* !.vscode/settings.json !.vscode/tasks.json !.vscode/launch.json !.vscode/extensions.json # misc /.sass-cache /connect.lock /coverage /libpeerconnection.log npm-debug.log testem.log /typings # e2e /e2e/*.js /e2e/*.map # System Files .DS_Store Thumbs.db A: Updated auto-generated .gitignore for Angular 15.0.0 by "@angular/cli": "~15.0.2" includes /.angular/cache and other new folders that should be ignored in Angular: # See http://help.github.com/ignore-files/ for more about ignoring files. # Compiled output /dist /tmp /out-tsc /bazel-out # Node /node_modules npm-debug.log yarn-error.log # IDEs and editors .idea/ .project .classpath .c9/ *.launch .settings/ *.sublime-workspace # Visual Studio Code .vscode/* !.vscode/settings.json !.vscode/tasks.json !.vscode/launch.json !.vscode/extensions.json .history/* # Miscellaneous /.angular/cache .sass-cache/ /connect.lock /coverage /libpeerconnection.log testem.log /typings # System files .DS_Store Thumbs.db
Angular2 .gitignore
I'm setting up an Angular 2 sample project. Is my .gitignore enough already or did I miss anything? node_modules typings jspm_packages bower_components app/**/*.js app/**/*.map
[ "angular-cli generates\n# See http://help.github.com/ignore-files/ for more about ignoring files.\n\n# compiled output\n/dist\n/tmp\n/out-tsc\n\n# dependencies\n/node_modules\n\n# IDEs and editors\n/.idea\n.project\n.classpath\n.c9/\n*.launch\n.settings/\n*.sublime-workspace\n\n# IDE - VSCode\n.vscode/*\n!.vscode/settings.json\n!.vscode/tasks.json\n!.vscode/launch.json\n!.vscode/extensions.json\n\n# misc\n/.sass-cache\n/connect.lock\n/coverage\n/libpeerconnection.log\nnpm-debug.log\ntestem.log\n/typings\n\n# e2e\n/e2e/*.js\n/e2e/*.map\n\n# System Files\n.DS_Store\nThumbs.db\n\n", "Updated auto-generated .gitignore for Angular 15.0.0 by \"@angular/cli\": \"~15.0.2\" includes /.angular/cache and other new folders that should be ignored in Angular:\n# See http://help.github.com/ignore-files/ for more about ignoring files.\n\n# Compiled output\n/dist\n/tmp\n/out-tsc\n/bazel-out\n\n# Node\n/node_modules\nnpm-debug.log\nyarn-error.log\n\n# IDEs and editors\n.idea/\n.project\n.classpath\n.c9/\n*.launch\n.settings/\n*.sublime-workspace\n\n# Visual Studio Code\n.vscode/*\n!.vscode/settings.json\n!.vscode/tasks.json\n!.vscode/launch.json\n!.vscode/extensions.json\n.history/*\n\n# Miscellaneous\n/.angular/cache\n.sass-cache/\n/connect.lock\n/coverage\n/libpeerconnection.log\ntestem.log\n/typings\n\n# System files\n.DS_Store\nThumbs.db\n\n" ]
[ 66, 0 ]
[]
[]
[ "angular", "git", "gitignore", "javascript" ]
stackoverflow_0037187092_angular_git_gitignore_javascript.txt
Q: Check if two columns are having matching values, but values are not in the same index places(Python, Pandas) So, I have this data frame about Super Store Sales. I have 2 sheets: First is named "Orders" Second one is named "Returns" In both sheets we have a matching column called "Order ID", but in the Return sheet we have less rows in "Order ID" of returned purchases and what I basically want to do is make a new column and check if Order ids are matching In Order sheet and in Return sheet and if they are matching I want a value "Returned" to be written and if values are not matching "Not returned". This is df_order data frame This is df_return This is how i thought it should be checked but it is definitely not correct cause everywhere says "not returned", but I've checked manually and seen that some orders are matching. Please, help me out. excel_path = r'C:\Users\Korisnik\Desktop\PythonFiles\Omega\SuperStoreUS.xlsx' df = pd.read_excel(excel_path, sheet_name=None) # 1. df_order = df.get('Orders') df_returns = df.get('Returns') df_users = df.get('Users') df_n.reset_index(drop=True) df_returns.reset_index(drop=True) df_n['Status'] = np.where( df_n['Order ID'].equals(df_returns['Order ID']) and df_returns["Status"] == "Returned", "Returned", "Not returned") df_order= {'City':['Prior Lake','Chicago','NY','Prior Lake', 'Round Rock'], 'Order ID':[86838 ,90154,15000,10000, 12447]} df_return= {'Order ID':[90154, 86838 ], 'Returned':['Returned', 'Returned']} # Create DataFrame from dict df_orders = pd.DataFrame.from_dict(df_order) df_returns = pd.DataFrame.from_dict(df_return) A: You can use pandas.DataFrame.merge with pandas.Series.fillna : df_order = pd.read_excel("SuperStoreUS.xlsx", sheet_name="Orders") df_return = pd.read_excel("SuperStoreUS.xlsx", sheet_name="Returns") Use either : # --- To create a new dataframe out = df_order.merge(df_return, on="Order ID", how="left") out["Status"] = out["Status"].fillna("Not Returned") Or: # --- To update df_order df_order = df_order.merge(df_return, on="Order ID", how="left") df_order["Status"] = df_order["Status"].fillna("Not Returned") A: Another way is to create a new column in df_order, with values conditioned on the row's Order ID: df_orders['Status'] = df_orders['Order ID'].map(lambda x: 'Returned' if x in df_returns['Order ID'].tolist() else 'Not returned')
Check if two columns have matching values when the values are not at the same index positions (Python, Pandas)
So, I have this data frame about Super Store Sales. I have 2 sheets: First is named "Orders" Second one is named "Returns" In both sheets we have a matching column called "Order ID", but in the Return sheet we have less rows in "Order ID" of returned purchases and what I basically want to do is make a new column and check if Order ids are matching In Order sheet and in Return sheet and if they are matching I want a value "Returned" to be written and if values are not matching "Not returned". This is df_order data frame This is df_return This is how i thought it should be checked but it is definitely not correct cause everywhere says "not returned", but I've checked manually and seen that some orders are matching. Please, help me out. excel_path = r'C:\Users\Korisnik\Desktop\PythonFiles\Omega\SuperStoreUS.xlsx' df = pd.read_excel(excel_path, sheet_name=None) # 1. df_order = df.get('Orders') df_returns = df.get('Returns') df_users = df.get('Users') df_n.reset_index(drop=True) df_returns.reset_index(drop=True) df_n['Status'] = np.where( df_n['Order ID'].equals(df_returns['Order ID']) and df_returns["Status"] == "Returned", "Returned", "Not returned") df_order= {'City':['Prior Lake','Chicago','NY','Prior Lake', 'Round Rock'], 'Order ID':[86838 ,90154,15000,10000, 12447]} df_return= {'Order ID':[90154, 86838 ], 'Returned':['Returned', 'Returned']} # Create DataFrame from dict df_orders = pd.DataFrame.from_dict(df_order) df_returns = pd.DataFrame.from_dict(df_return)
[ "You can use pandas.DataFrame.merge with pandas.Series.fillna :\ndf_order = pd.read_excel(\"SuperStoreUS.xlsx\", sheet_name=\"Orders\")\ndf_return = pd.read_excel(\"SuperStoreUS.xlsx\", sheet_name=\"Returns\")\n\nUse either :\n# --- To create a new dataframe\nout = df_order.merge(df_return, on=\"Order ID\", how=\"left\")\nout[\"Status\"] = out[\"Status\"].fillna(\"Not Returned\")\n\nOr:\n# --- To update df_order\ndf_order = df_order.merge(df_return, on=\"Order ID\", how=\"left\")\ndf_order[\"Status\"] = df_order[\"Status\"].fillna(\"Not Returned\")\n\n", "Another way is to create a new column in df_order, with values conditioned on the row's Order ID:\ndf_orders['Status'] = df_orders['Order ID'].map(lambda x: 'Returned' if x in df_returns['Order ID'].tolist() else 'Not returned')\n\n" ]
[ 1, 1 ]
[]
[]
[ "matching", "multiple_columns", "pandas", "python" ]
stackoverflow_0074679052_matching_multiple_columns_pandas_python.txt
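As a complement to the merge and map answers in the pandas entry above, the Status column can also be built without a join, using isin plus numpy.where. The small frames below reuse the example data from the question; only the column name Status is assumed, matching the answers.

import numpy as np
import pandas as pd

df_orders = pd.DataFrame(
    {"City": ["Prior Lake", "Chicago", "NY", "Prior Lake", "Round Rock"],
     "Order ID": [86838, 90154, 15000, 10000, 12447]}
)
df_returns = pd.DataFrame({"Order ID": [90154, 86838]})

# Vectorised membership test: no index alignment or merge needed.
is_returned = df_orders["Order ID"].isin(df_returns["Order ID"])
df_orders["Status"] = np.where(is_returned, "Returned", "Not returned")

print(df_orders)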
Q: Why my images from url not appearing in TouchableOpacity In react-native i am trying to make a scrollview where every element has a title and an image. Because I want to load the image from the url, I wrote the following code: import {Text, TouchableOpacity,Image } from 'react-native' import React from 'react' const CatagoryCard = ({imgUrl,title}) => { return ( <TouchableOpacity> <Image source = {{uri:imgUrl}} resizeMode = 'contain' className = "h-20 w-20 rounded flex-2" /> <Text>{title}</Text> </TouchableOpacity> ); }; export default CatagoryCard; And calling them from another parent component class. import { View, Text, ScrollView } from 'react-native' import React from 'react' import CatagoryCard from './CatagoryCard' const Catagories = () => { return ( <ScrollView horizontal showsVerticalScrollIndicator={false} contentContainerStyle={{ paddingHorizontal:15, paddingTop:10 }}> <CatagoryCard imgUrl = "https://i.ibb.co/ZYvGfFY/Untitled-design-7.png" title = " TEST 1"/> <CatagoryCard imgUrl = "https://i.ibb.co/ZYvGfFY/Untitled-design-7.png" title = " TEST 2"/> <CatagoryCard imgUrl = "https://i.ibb.co/ZYvGfFY/Untitled-design-7.png" title = " TEST 3"/> </ScrollView> ) } export default Catagories The problem is the title are showing perfectly on the view in individual elements but the images are not loading for some unknown reason. A: Set size for the image: <TouchableOpacity> <Image style={{height: 50, width: 50}} source={{ uri: 'https://reactnative.dev/img/tiny_logo.png', }} /> <Text>Text</Text> </TouchableOpacity>
Why are my images from a URL not appearing in TouchableOpacity?
In react-native i am trying to make a scrollview where every element has a title and an image. Because I want to load the image from the url, I wrote the following code: import {Text, TouchableOpacity,Image } from 'react-native' import React from 'react' const CatagoryCard = ({imgUrl,title}) => { return ( <TouchableOpacity> <Image source = {{uri:imgUrl}} resizeMode = 'contain' className = "h-20 w-20 rounded flex-2" /> <Text>{title}</Text> </TouchableOpacity> ); }; export default CatagoryCard; And calling them from another parent component class. import { View, Text, ScrollView } from 'react-native' import React from 'react' import CatagoryCard from './CatagoryCard' const Catagories = () => { return ( <ScrollView horizontal showsVerticalScrollIndicator={false} contentContainerStyle={{ paddingHorizontal:15, paddingTop:10 }}> <CatagoryCard imgUrl = "https://i.ibb.co/ZYvGfFY/Untitled-design-7.png" title = " TEST 1"/> <CatagoryCard imgUrl = "https://i.ibb.co/ZYvGfFY/Untitled-design-7.png" title = " TEST 2"/> <CatagoryCard imgUrl = "https://i.ibb.co/ZYvGfFY/Untitled-design-7.png" title = " TEST 3"/> </ScrollView> ) } export default Catagories The problem is the title are showing perfectly on the view in individual elements but the images are not loading for some unknown reason.
[ "Set size for the image:\n<TouchableOpacity>\n <Image\n style={{height: 50, width: 50}}\n source={{\n uri: 'https://reactnative.dev/img/tiny_logo.png',\n }}\n />\n <Text>Text</Text>\n</TouchableOpacity>\n\n" ]
[ 1 ]
[]
[]
[ "android", "react_native" ]
stackoverflow_0074677844_android_react_native.txt
Q: How do I convert this pinescript v2 to pinescript v4? I have a pinescript v2 that needs to be converted to v4 to be able to use alert messages, but I still haven't understood how, any inputs will be useful, thanks. This script was developed a long time back and visiting it now is confusing me. I have been reading that the variables need to be declared first, I get that, but what is the most efficient way to convert this into v4 of pine? //@version=2 strategy("Bollinger + RSI, Double Strategy (by ChartArt) v1.1", shorttitle="CA_-_RSI_Bol_Strat_1.1", overlay=true) ///////////// RSI RSIlength = input(6,title="RSI Period Length") RSIoverSold = 50 RSIoverBought = 50 price = close vrsi = rsi(price, RSIlength) ///////////// Bollinger Bands BBlength = input(200, minval=1,title="Bollinger Period Length") BBmult = 2 // input(2.0, minval=0.001, maxval=50,title="Bollinger Bands Standard Deviation") BBbasis = sma(price, BBlength) BBdev = BBmult * stdev(price, BBlength) BBupper = BBbasis + BBdev BBlower = BBbasis - BBdev source = close buyEntry = crossover(source, BBlower) sellEntry = crossunder(source, BBupper) plot(BBbasis, color=aqua,title="Bollinger Bands SMA Basis Line") p1 = plot(BBupper, color=silver,title="Bollinger Bands Upper Line") p2 = plot(BBlower, color=silver,title="Bollinger Bands Lower Line") fill(p1, p2) ///////////// Colors switch1=input(true, title="Enable Bar Color?") switch2=input(true, title="Enable Background Color?") TrendColor = RSIoverBought and (price[1] > BBupper and price < BBupper) and BBbasis < BBbasis[1] ? red : RSIoverSold and (price[1] < BBlower and price > BBlower) and BBbasis > BBbasis[1] ? green : na barcolor(switch1?TrendColor:na) bgcolor(switch2?TrendColor:na,transp=50) ///////////// RSI + Bollinger Bands Strategy if (not na(vrsi)) if (crossover(vrsi, RSIoverSold) and crossover(source, BBlower)) strategy.entry("RSI_BB_L", strategy.long, stop=BBlower, oca_type=strategy.oca.cancel, comment="RSI_BB_L") else strategy.cancel(id="RSI_BB_L") if (crossunder(vrsi, RSIoverBought) and crossunder(source, BBupper)) strategy.entry("RSI_BB_S", strategy.short, stop=BBupper, oca_type=strategy.oca.cancel, comment="RSI_BB_S") else strategy.cancel(id="RSI_BB_S") //plot(strategy.equity, title="equity", color=red, linewidth=2, style=areabr) A: You can use the migration guide to convert your script to v3 and then use the converter tool to convert it to v4 and above. //@version=4 strategy("Bollinger + RSI, Double Strategy (by ChartArt) v1.1", shorttitle="CA_-_RSI_Bol_Strat_1.1", overlay=true) ///////////// RSI RSIlength = input(6, title="RSI Period Length") RSIoverSold = 50 RSIoverBought = 50 price = close vrsi = rsi(price, RSIlength) ///////////// Bollinger Bands BBlength = input(200, minval=1, title="Bollinger Period Length") BBmult = 2 // input(2.0, minval=0.001, maxval=50,title="Bollinger Bands Standard Deviation") BBbasis = sma(price, BBlength) BBdev = BBmult * stdev(price, BBlength) BBupper = BBbasis + BBdev BBlower = BBbasis - BBdev source = close buyEntry = crossover(source, BBlower) sellEntry = crossunder(source, BBupper) plot(BBbasis, color=color.aqua, title="Bollinger Bands SMA Basis Line") p1 = plot(BBupper, color=color.silver, title="Bollinger Bands Upper Line") p2 = plot(BBlower, color=color.silver, title="Bollinger Bands Lower Line") fill(p1, p2) ///////////// Colors switch1 = input(true, title="Enable Bar Color?") switch2 = input(true, title="Enable Background Color?") TrendColor = RSIoverBought and price[1] > BBupper and price < BBupper and BBbasis < BBbasis[1] ? 
color.red : RSIoverSold and price[1] < BBlower and price > BBlower and BBbasis > BBbasis[1] ? color.green : na barcolor(switch1 ? TrendColor : na) bgcolor(switch2 ? TrendColor : na, transp=50) ///////////// RSI + Bollinger Bands Strategy if not na(vrsi) if crossover(vrsi, RSIoverSold) and crossover(source, BBlower) strategy.entry("RSI_BB_L", strategy.long, stop=BBlower, oca_type=strategy.oca.cancel, comment="RSI_BB_L") else strategy.cancel(id="RSI_BB_L") if crossunder(vrsi, RSIoverBought) and crossunder(source, BBupper) strategy.entry("RSI_BB_S", strategy.short, stop=BBupper, oca_type=strategy.oca.cancel, comment="RSI_BB_S") else strategy.cancel(id="RSI_BB_S") //plot(strategy.equity, title="equity", color=red, linewidth=2, style=areabr)
How do I convert this pinescript v2 to pinescript v4?
I have a pinescript v2 that needs to be converted to v4 to be able to use alert messages, but I still haven't understood how, any inputs will be useful, thanks. This script was developed a long time back and visiting it now is confusing me. I have been reading that the variables need to be declared first, I get that, but what is the most efficient way to convert this into v4 of pine? //@version=2 strategy("Bollinger + RSI, Double Strategy (by ChartArt) v1.1", shorttitle="CA_-_RSI_Bol_Strat_1.1", overlay=true) ///////////// RSI RSIlength = input(6,title="RSI Period Length") RSIoverSold = 50 RSIoverBought = 50 price = close vrsi = rsi(price, RSIlength) ///////////// Bollinger Bands BBlength = input(200, minval=1,title="Bollinger Period Length") BBmult = 2 // input(2.0, minval=0.001, maxval=50,title="Bollinger Bands Standard Deviation") BBbasis = sma(price, BBlength) BBdev = BBmult * stdev(price, BBlength) BBupper = BBbasis + BBdev BBlower = BBbasis - BBdev source = close buyEntry = crossover(source, BBlower) sellEntry = crossunder(source, BBupper) plot(BBbasis, color=aqua,title="Bollinger Bands SMA Basis Line") p1 = plot(BBupper, color=silver,title="Bollinger Bands Upper Line") p2 = plot(BBlower, color=silver,title="Bollinger Bands Lower Line") fill(p1, p2) ///////////// Colors switch1=input(true, title="Enable Bar Color?") switch2=input(true, title="Enable Background Color?") TrendColor = RSIoverBought and (price[1] > BBupper and price < BBupper) and BBbasis < BBbasis[1] ? red : RSIoverSold and (price[1] < BBlower and price > BBlower) and BBbasis > BBbasis[1] ? green : na barcolor(switch1?TrendColor:na) bgcolor(switch2?TrendColor:na,transp=50) ///////////// RSI + Bollinger Bands Strategy if (not na(vrsi)) if (crossover(vrsi, RSIoverSold) and crossover(source, BBlower)) strategy.entry("RSI_BB_L", strategy.long, stop=BBlower, oca_type=strategy.oca.cancel, comment="RSI_BB_L") else strategy.cancel(id="RSI_BB_L") if (crossunder(vrsi, RSIoverBought) and crossunder(source, BBupper)) strategy.entry("RSI_BB_S", strategy.short, stop=BBupper, oca_type=strategy.oca.cancel, comment="RSI_BB_S") else strategy.cancel(id="RSI_BB_S") //plot(strategy.equity, title="equity", color=red, linewidth=2, style=areabr)
[ "You can use the migration guide to convert your script to v3 and then use the converter tool to convert it to v4 and above.\n//@version=4\nstrategy(\"Bollinger + RSI, Double Strategy (by ChartArt) v1.1\", shorttitle=\"CA_-_RSI_Bol_Strat_1.1\", overlay=true)\n\n\n///////////// RSI\nRSIlength = input(6, title=\"RSI Period Length\")\nRSIoverSold = 50\nRSIoverBought = 50\nprice = close\nvrsi = rsi(price, RSIlength)\n\n\n///////////// Bollinger Bands\nBBlength = input(200, minval=1, title=\"Bollinger Period Length\")\nBBmult = 2 // input(2.0, minval=0.001, maxval=50,title=\"Bollinger Bands Standard Deviation\")\nBBbasis = sma(price, BBlength)\nBBdev = BBmult * stdev(price, BBlength)\nBBupper = BBbasis + BBdev\nBBlower = BBbasis - BBdev\nsource = close\nbuyEntry = crossover(source, BBlower)\nsellEntry = crossunder(source, BBupper)\nplot(BBbasis, color=color.aqua, title=\"Bollinger Bands SMA Basis Line\")\np1 = plot(BBupper, color=color.silver, title=\"Bollinger Bands Upper Line\")\np2 = plot(BBlower, color=color.silver, title=\"Bollinger Bands Lower Line\")\nfill(p1, p2)\n\n\n///////////// Colors\nswitch1 = input(true, title=\"Enable Bar Color?\")\nswitch2 = input(true, title=\"Enable Background Color?\")\nTrendColor = RSIoverBought and price[1] > BBupper and price < BBupper and \n BBbasis < BBbasis[1] ? color.red : \n RSIoverSold and price[1] < BBlower and price > BBlower and BBbasis > BBbasis[1] ? \n color.green : na\nbarcolor(switch1 ? TrendColor : na)\nbgcolor(switch2 ? TrendColor : na, transp=50)\n\n\n///////////// RSI + Bollinger Bands Strategy\nif not na(vrsi)\n\n if crossover(vrsi, RSIoverSold) and crossover(source, BBlower)\n strategy.entry(\"RSI_BB_L\", strategy.long, stop=BBlower, oca_type=strategy.oca.cancel, comment=\"RSI_BB_L\")\n else\n strategy.cancel(id=\"RSI_BB_L\")\n\n if crossunder(vrsi, RSIoverBought) and crossunder(source, BBupper)\n strategy.entry(\"RSI_BB_S\", strategy.short, stop=BBupper, oca_type=strategy.oca.cancel, comment=\"RSI_BB_S\")\n else\n strategy.cancel(id=\"RSI_BB_S\")\n\n//plot(strategy.equity, title=\"equity\", color=red, linewidth=2, style=areabr)\n\n" ]
[ 0 ]
[]
[]
[ "pine_script" ]
stackoverflow_0074679266_pine_script.txt
Q: How to change a URL Link in ASP.NET MVC Breadcrumb I found a nice answer on creating breadcrumbs for this project here: How can dynamic breadcrumbs be achieved with ASP.net MVC? The modified code that I am using is here: public static string BuildBreadcrumbNavigation(this HtmlHelper helper) { var result = string.Empty; var controllerName = helper.ViewContext.RouteData.Values["controller"].ToString(); if ((controllerName != "Home") && (controllerName != "Account")) { var htmlLink = helper.ActionLink( linkText: "Home", actionName: "/", controllerName: "Home").ToHtmlString(); var sb = new StringBuilder($"<ol class='breadcrumb'><li>{htmlLink}</li>"); var controllerLink = helper.ActionLink( linkText: controllerName, actionName: "/", controllerName: controllerName); sb.Append($"<li>{controllerLink}</li>"); var actionName = helper.ViewContext.RouteData.Values["action"].ToString(); var actionLink = helper.ActionLink( linkText: actionName, actionName: actionName, controllerName: controllerName); sb.Append($"<li>{actionLink}</li>"); result = sb.Append("</ol>").ToString(); } return result; } The webpage I want to implement it on has a navigation menu: Services Parts Bulletins Publications Warranty Sales Buyers Image Resources Videos For something like Home > Services > Parts, there is a controller called ServiceController.cs and a model called PartsInformation.cs. The links for Home and Parts work fine, but there is nothing to display for the intermediate Services because it is a menu item only. Clicking Services attempts to redirect here: https://localhost:44383/Services/ What should be done here? Should I remove the link for Services and leave the text, or should I have Services route to Home? I would like to route the root navigation menu items back to Home, but I don't understand enough about this ActionLink. A: Here is what I have done until someone posts a better solution: In the file Global.asax.cs, I added the Application_Error method to redirect when a controller is not found: protected void Application_Error() { var err = Server.GetLastError(); if (err.GetType() == typeof(InvalidOperationException)) { if (err.Message.IndexOf("The view 'Index' or its master was not found") == 0) { if (-1 < err.StackTrace.IndexOf("System.Web.Mvc.ViewResult.FindView(ControllerContext context)")) { // this controller does not have a view var request = HttpContext.Current.Request; var homeLink = $"{request.Url.Scheme}://{request.Url.Authority}{request.ApplicationPath.TrimEnd('/')}/"; Response.Redirect(homeLink); } } } } It works, but it just doesn't seem like the way ASP.NET MVC should handle the case of links going to controllers that do not exist.
How to change a URL Link in ASP.NET MVC Breadcrumb
I found a nice answer on creating breadcrumbs for this project here: How can dynamic breadcrumbs be achieved with ASP.net MVC? The modified code that I am using is here: public static string BuildBreadcrumbNavigation(this HtmlHelper helper) { var result = string.Empty; var controllerName = helper.ViewContext.RouteData.Values["controller"].ToString(); if ((controllerName != "Home") && (controllerName != "Account")) { var htmlLink = helper.ActionLink( linkText: "Home", actionName: "/", controllerName: "Home").ToHtmlString(); var sb = new StringBuilder($"<ol class='breadcrumb'><li>{htmlLink}</li>"); var controllerLink = helper.ActionLink( linkText: controllerName, actionName: "/", controllerName: controllerName); sb.Append($"<li>{controllerLink}</li>"); var actionName = helper.ViewContext.RouteData.Values["action"].ToString(); var actionLink = helper.ActionLink( linkText: actionName, actionName: actionName, controllerName: controllerName); sb.Append($"<li>{actionLink}</li>"); result = sb.Append("</ol>").ToString(); } return result; } The webpage I want to implement it on has a navigation menu: Services Parts Bulletins Publications Warranty Sales Buyers Image Resources Videos For something like Home > Services > Parts, there is a controller called ServiceController.cs and a model called PartsInformation.cs. The links for Home and Parts work fine, but there is nothing to display for the intermediate Services because it is a menu item only. Clicking Services attempts to redirect here: https://localhost:44383/Services/ What should be done here? Should I remove the link for Services and leave the text, or should I have Services route to Home? I would like to route the root navigation menu items back to Home, but I don't understand enough about this ActionLink.
[ "Here is what I have done until someone posts a better solution:\nIn the file Global.asax.cs, I added the Application_Error method to redirect when a controller is not found:\nprotected void Application_Error()\n{\n var err = Server.GetLastError();\n if (err.GetType() == typeof(InvalidOperationException))\n {\n if (err.Message.IndexOf(\"The view 'Index' or its master was not found\") == 0)\n {\n if (-1 < err.StackTrace.IndexOf(\"System.Web.Mvc.ViewResult.FindView(ControllerContext context)\"))\n {\n // this controller does not have a view\n var request = HttpContext.Current.Request;\n var homeLink = $\"{request.Url.Scheme}://{request.Url.Authority}{request.ApplicationPath.TrimEnd('/')}/\";\n Response.Redirect(homeLink);\n }\n }\n }\n}\n\nIt works, but it just doesn't seem like the way ASP.NET MVC should handle the case of links going to controllers that do not exist.\n" ]
[ 0 ]
[]
[]
[ "asp.net_mvc", "c#" ]
stackoverflow_0074678526_asp.net_mvc_c#.txt
Q: How can I read a file in a swift playground Im trying to read a text file using a Swift playground with the following let dirs : String[]? = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true) as? String[] if (dirs != nil) { let directories:String[] = dirs!; let dir = directories[0]; //documents directory let path = dir.stringByAppendingPathComponent(file); //read let content = String.stringWithContentsOfFile(path, encoding: NSUTF8StringEncoding, error: nil) } However this fails with no error. It seems the first line stops the playground from outputting anything below A: You can also put your file into your playground's resources. To do this: show Project Navigator with CMD + 1. Drag and drop your file into the resources folder. Then read the file: On Xcode 6.4 and Swift 1.2: var error: NSError? let fileURL = NSBundle.mainBundle().URLForResource("Input", withExtension: "txt") let content = String(contentsOfURL: fileURL!, encoding: NSUTF8StringEncoding, error: &error) On Xcode 7 and Swift 2: let fileURL = NSBundle.mainBundle().URLForResource("Input", withExtension: "txt") let content = try String(contentsOfURL: fileURL!, encoding: NSUTF8StringEncoding) On Xcode 8 and Swift 3: let fileURL = Bundle.main.url(forResource: "Input", withExtension: "txt") let content = try String(contentsOf: fileURL!, encoding: String.Encoding.utf8) If the file has binary data, you can use NSData(contentsOfURL: fileURL!) or Data(contentsOf: fileURL!) (for Swift 3). A: While the answer has been supplied for a quick fix, there is a better solution. Each time the playground is opened it will be assigned a new container. This means using the normal directory structure you would have to copy the file you want into the new container every time. Instead, inside the container there is a symbolic link to a Shared Playground Data directory (/Users/UserName/Documents/Shared Playground Data) which remains when reopening the playground, and can be accessed from multiple playgrounds. You can use XCPlayground to access this shared folder. import XCPlayground let path = XCPlaygroundSharedDataDirectoryURL.appendingPathComponent("foo.txt") The official documentation can be found here: XCPlayground Module Reference Cool post on how to organize this directory per-playground: Swift, Playgrounds, and XCPlayground UPDATE: For swift 4.2 use playgroundSharedDataDirectory. Don't need to import anything. Looks like: let path = playgroundSharedDataDirectory.appendingPathComponent("file") A: 1. Access a file that is located in the Resources folder of your Playground With Swift 3, Bundle has a method called url(forResource:withExtension:). url(forResource:withExtension:) has the following declaration: func url(forResource name: String?, withExtension ext: String?) -> URL? Returns the file URL for the resource identified by the specified name and file extension. 
You can use url(forResource:withExtension:) in order to read the content of a json file located in the Resources folder of an iOS or Mac Playground: import Foundation do { guard let fileUrl = Bundle.main.url(forResource: "Data", withExtension: "json") else { fatalError() } let data = try Data(contentsOf: fileUrl) let json = try JSONSerialization.jsonObject(with: data, options: []) print(json) } catch { print(error) } You can use url(forResource:withExtension:) in order to read the content of a text file located in the Resources folder of an iOS or Mac Playground: import Foundation do { guard let fileUrl = Bundle.main.url(forResource: "Text", withExtension: "txt") else { fatalError() } let text = try String(contentsOf: fileUrl, encoding: String.Encoding.utf8) print(text) } catch { print(error) } As an alternative to let image = UIImage(named: "image"), you can use url(forResource:withExtension:) in order to access an image located in the Resources folder of an iOS Playground: import UIKit do { guard let fileUrl = Bundle.main.url(forResource: "Image", withExtension: "png") else { fatalError() } let data = try Data(contentsOf: fileUrl) let image = UIImage(data: data) } catch { print(error) } 2. Access a file that is located in the ~/Documents/Shared Playground Data folder of your computer With Swift 3, PlaygroundSupport module provides a global constant called playgroundSharedDataDirectory. playgroundSharedDataDirectory has the following declaration: let playgroundSharedDataDirectory: URL The path to the directory containing data shared between all playgrounds. You can use playgroundSharedDataDirectory in order to read the content of a json file located in the ~/Documents/Shared Playground Data folder of your computer from an iOS or Mac Playground: import Foundation import PlaygroundSupport do { let fileUrl = PlaygroundSupport.playgroundSharedDataDirectory.appendingPathComponent("Data.json") let data = try Data(contentsOf: fileUrl) let json = try JSONSerialization.jsonObject(with: data, options: []) print(json) } catch { print(error) } You can use playgroundSharedDataDirectory in order to read the content of a text file located in the ~/Documents/Shared Playground Data folder of your computer from an iOS or Mac Playground: import Foundation import PlaygroundSupport do { let fileUrl = PlaygroundSupport.playgroundSharedDataDirectory.appendingPathComponent("Text.txt") let text = try String(contentsOf: fileUrl, encoding: String.Encoding.utf8) print(text) } catch { print(error) } You can use playgroundSharedDataDirectory in order to access an image located in the ~/Documents/Shared Playground Data folder of your computer from an iOS Playground: import UIKit import PlaygroundSupport do { let fileUrl = PlaygroundSupport.playgroundSharedDataDirectory.appendingPathComponent("Image.png") let data = try Data(contentsOf: fileUrl) let image = UIImage(data: data) } catch { print(error) } A: Swift 3 (Xcode 8) The code below works in both iOS and macOS playgrounds. The text file ("MyText.txt" in this example) must be in the Resources directory of the playground. (Note: You may need to open the navigator window to see the directory structure of your playground.) import Foundation if let fileURL = Bundle.main.url(forResource:"MyText", withExtension: "txt") { do { let contents = try String(contentsOf: fileURL, encoding: String.Encoding.utf8) print(contents) } catch { print("Error: \(error.localizedDescription)") } } else { print("No such file URL.") } A: This works for me. 
The only thing I changed was to be explicit about the file name (which is implied in your example) - perhaps you have a typo in the off-screen definition of the "file" variable? let dirs = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true) as? [String] let file = "trial.txt" // My change to your code - yours is presumably set off-screen if let directories = dirs { let dir = directories[0]; //documents directory let path = dir.stringByAppendingPathComponent(file); //read let content = NSString(contentsOfFile: path, usedEncoding: nil, error: nil) // works... } Update Swift 4.2 As @raistlin points out, this would now be let dirs = NSSearchPathForDirectoriesInDomains( FileManager.SearchPathDirectory.documentDirectory, FileManager.SearchPathDomainMask.userDomainMask, true) or, more tersely: let dirs = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true) A: Select the .playground file. Open Utility inspector, In the playground press opt-cmd-1 to open the File Inspector. You should see the playground on the right. If you don't have it selected, press cmd-1 to open the Project Navigator and click on the playground file. Under 'Resource Path' in Playground Settings choose 'Relative To Playground' and platform as OSX. A: On Mavericks with Xcode 6.0.1 you can read using iOS platform too. import UIKit let dirs : [String]? = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true) as? [String] let myDir = "/Shared Playground Data" let file = "README.md" // My change to your code - yours is presumably set off-screen if (dirs != nil) { let directories:[String] = dirs!; let dir = directories[0] + myDir; // iOS playground documents directory let path = dir.stringByAppendingPathComponent(file); //read let content = String.stringWithContentsOfFile(path, encoding: NSUTF8StringEncoding, error: nil) // works... println(content!) } Remember, you need to create a directory called "Shared Playground Data" in your Documents directory. Im my case I used this command: mkdir "/Users/joao_parana/Documents/Shared Playground Data" and put there my file README.md A: String.stringWithContentsOfFile is DEPRECATED and doesn't work anymore with Xcode 6.1.1 Create your documentDirectoryUrl let documentDirectoryUrl = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask).first! as NSURL To make sure the file is located there you can use the finder command Go To Folder e copy paste the printed documentDirectoryUrl.path there println(documentDirectoryUrl.path!) // should look like this: /Users/userName/Library/Containers/com.apple.dt.playground.stub.OSX.PLAYGROUNDFILENAME-5AF5B25D-D0D1-4B51-A297-00015EE97F13/Data/Documents Just append the file name to the folder url as a path component let fileNameUrl = documentDirectoryUrl.URLByAppendingPathComponent("ReadMe.txt") var fileOpenError:NSError? Check if the file exists before attempting to open it if NSFileManager.defaultManager().fileExistsAtPath(fileNameUrl.path!) { if let fileContent = String(contentsOfURL: fileNameUrl, encoding: NSUTF8StringEncoding, error: &fileOpenError) { println(fileContent) // prints ReadMe.txt contents if successful } else { if let fileOpenError = fileOpenError { println(fileOpenError) // Error Domain=NSCocoaErrorDomain Code=XXX "The file “ReadMe.txt” couldn’t be opened because...." 
} } } else { println("file not found") } A: I was unable to read a file with ease in playground and ended up just creating a command line app in Xcode. This seemed to work for me very well. A: The other answers, relying on "playgroundSharedDataDirectory" never works for me, especially if using an iOS playground. let documentsDirectoryShareURL = PlaygroundSupport.playgroundSharedDataDirectory.absoluteURL let fileManager = FileManager() try? fileManager.copyItem(at: URL(fileURLWithPath: "/Users/rufus/Documents/Shared Playground Data/"), to: documentsDirectoryShareURL) I just do the above now. I can populate my documents/shared folder, and it is just manually automatically copied to the playgrounds documents directory. My code will not overwrite files that exist there. You could enhance this if you need it to look at file timestamps and then copy if necessary etc. A: Swift 5.7.1 - Xcode 14.1 func readFile() -> [String] { if let fileURL = Bundle.main.url(forResource: "File", withExtension: "txt") { do { let content = try String(contentsOf: fileURL) var x = content.components(separatedBy: "\n") x.removeAll { data in data.isEmpty } return x } catch { print(error) } } return [String]() } //Usage: let input = readFile()
How can I read a file in a swift playground
I'm trying to read a text file using a Swift playground with the following: let dirs : String[]? = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true) as? String[] if (dirs != nil) { let directories:String[] = dirs!; let dir = directories[0]; //documents directory let path = dir.stringByAppendingPathComponent(file); //read let content = String.stringWithContentsOfFile(path, encoding: NSUTF8StringEncoding, error: nil) } However, this fails with no error. It seems the first line stops the playground from outputting anything below it.
[ "You can also put your file into your playground's resources. To do this: show Project Navigator with CMD + 1. Drag and drop your file into the resources folder. Then read the file:\nOn Xcode 6.4 and Swift 1.2:\nvar error: NSError?\nlet fileURL = NSBundle.mainBundle().URLForResource(\"Input\", withExtension: \"txt\")\nlet content = String(contentsOfURL: fileURL!, encoding: NSUTF8StringEncoding, error: &error)\n\nOn Xcode 7 and Swift 2:\nlet fileURL = NSBundle.mainBundle().URLForResource(\"Input\", withExtension: \"txt\")\nlet content = try String(contentsOfURL: fileURL!, encoding: NSUTF8StringEncoding)\n\nOn Xcode 8 and Swift 3:\nlet fileURL = Bundle.main.url(forResource: \"Input\", withExtension: \"txt\")\nlet content = try String(contentsOf: fileURL!, encoding: String.Encoding.utf8)\n\nIf the file has binary data, you can use NSData(contentsOfURL: fileURL!) or Data(contentsOf: fileURL!) (for Swift 3).\n", "While the answer has been supplied for a quick fix, there is a better solution. \nEach time the playground is opened it will be assigned a new container. This means using the normal directory structure you would have to copy the file you want into the new container every time.\nInstead, inside the container there is a symbolic link to a Shared Playground Data directory (/Users/UserName/Documents/Shared Playground Data) which remains when reopening the playground, and can be accessed from multiple playgrounds. \nYou can use XCPlayground to access this shared folder.\nimport XCPlayground\n\nlet path = XCPlaygroundSharedDataDirectoryURL.appendingPathComponent(\"foo.txt\")\n\nThe official documentation can be found here: XCPlayground Module Reference\nCool post on how to organize this directory per-playground: Swift, Playgrounds, and XCPlayground \n\nUPDATE: For swift 4.2 use playgroundSharedDataDirectory. Don't need to import anything.\nLooks like:\nlet path = playgroundSharedDataDirectory.appendingPathComponent(\"file\")\n\n", "1. Access a file that is located in the Resources folder of your Playground\nWith Swift 3, Bundle has a method called url(forResource:withExtension:). url(forResource:withExtension:) has the following declaration:\nfunc url(forResource name: String?, withExtension ext: String?) 
-> URL?\n\n\nReturns the file URL for the resource identified by the specified name and file extension.\n\nYou can use url(forResource:withExtension:) in order to read the content of a json file located in the Resources folder of an iOS or Mac Playground:\nimport Foundation\n\ndo {\n guard let fileUrl = Bundle.main.url(forResource: \"Data\", withExtension: \"json\") else { fatalError() }\n let data = try Data(contentsOf: fileUrl)\n let json = try JSONSerialization.jsonObject(with: data, options: [])\n print(json)\n} catch {\n print(error)\n} \n\nYou can use url(forResource:withExtension:) in order to read the content of a text file located in the Resources folder of an iOS or Mac Playground:\nimport Foundation\n\ndo {\n guard let fileUrl = Bundle.main.url(forResource: \"Text\", withExtension: \"txt\") else { fatalError() }\n let text = try String(contentsOf: fileUrl, encoding: String.Encoding.utf8)\n print(text)\n} catch {\n print(error)\n}\n\nAs an alternative to let image = UIImage(named: \"image\"), you can use url(forResource:withExtension:) in order to access an image located in the Resources folder of an iOS Playground:\nimport UIKit\n\ndo {\n guard let fileUrl = Bundle.main.url(forResource: \"Image\", withExtension: \"png\") else { fatalError() }\n let data = try Data(contentsOf: fileUrl)\n let image = UIImage(data: data)\n} catch {\n print(error)\n}\n\n\n2. Access a file that is located in the ~/Documents/Shared Playground Data folder of your computer\nWith Swift 3, PlaygroundSupport module provides a global constant called playgroundSharedDataDirectory. playgroundSharedDataDirectory has the following declaration:\nlet playgroundSharedDataDirectory: URL\n\n\nThe path to the directory containing data shared between all playgrounds.\n\nYou can use playgroundSharedDataDirectory in order to read the content of a json file located in the ~/Documents/Shared Playground Data folder of your computer from an iOS or Mac Playground:\nimport Foundation\nimport PlaygroundSupport\n\ndo {\n let fileUrl = PlaygroundSupport.playgroundSharedDataDirectory.appendingPathComponent(\"Data.json\") \n let data = try Data(contentsOf: fileUrl)\n let json = try JSONSerialization.jsonObject(with: data, options: [])\n print(json)\n} catch {\n print(error)\n}\n\nYou can use playgroundSharedDataDirectory in order to read the content of a text file located in the ~/Documents/Shared Playground Data folder of your computer from an iOS or Mac Playground:\nimport Foundation\nimport PlaygroundSupport\n\ndo {\n let fileUrl = PlaygroundSupport.playgroundSharedDataDirectory.appendingPathComponent(\"Text.txt\")\n let text = try String(contentsOf: fileUrl, encoding: String.Encoding.utf8)\n print(text)\n} catch {\n print(error)\n}\n\nYou can use playgroundSharedDataDirectory in order to access an image located in the ~/Documents/Shared Playground Data folder of your computer from an iOS Playground:\nimport UIKit\nimport PlaygroundSupport\n\ndo {\n let fileUrl = PlaygroundSupport.playgroundSharedDataDirectory.appendingPathComponent(\"Image.png\")\n let data = try Data(contentsOf: fileUrl)\n let image = UIImage(data: data)\n} catch {\n print(error)\n}\n\n", "Swift 3 (Xcode 8)\nThe code below works in both iOS and macOS playgrounds. The text file (\"MyText.txt\" in this example) must be in the Resources directory of the playground. 
(Note: You may need to open the navigator window to see the directory structure of your playground.)\nimport Foundation\n\nif let fileURL = Bundle.main.url(forResource:\"MyText\", withExtension: \"txt\")\n{\n do {\n let contents = try String(contentsOf: fileURL, encoding: String.Encoding.utf8)\n print(contents)\n } catch {\n print(\"Error: \\(error.localizedDescription)\")\n }\n} else {\n print(\"No such file URL.\")\n}\n\n", "This works for me. The only thing I changed was to be explicit about the file name (which is implied in your example) - perhaps you have a typo in the off-screen definition of the \"file\" variable?\nlet dirs = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true) as? [String]\n\nlet file = \"trial.txt\" // My change to your code - yours is presumably set off-screen\nif let directories = dirs {\n let dir = directories[0]; //documents directory\n let path = dir.stringByAppendingPathComponent(file);\n\n //read\n let content = NSString(contentsOfFile: path, usedEncoding: nil, error: nil)\n // works...\n}\n\nUpdate Swift 4.2\nAs @raistlin points out, this would now be\nlet dirs = NSSearchPathForDirectoriesInDomains(\n FileManager.SearchPathDirectory.documentDirectory,\n FileManager.SearchPathDomainMask.userDomainMask,\n true)\n\nor, more tersely:\nlet dirs = NSSearchPathForDirectoriesInDomains(.documentDirectory,\n .userDomainMask, true)\n\n", "\nSelect the .playground file.\nOpen Utility inspector, In the playground press opt-cmd-1 to open the File Inspector. You should see the playground on the right. If you don't have it selected, press cmd-1 to open the Project Navigator and click on the playground file.\nUnder 'Resource Path' in Playground Settings choose 'Relative To Playground' and platform as OSX.\n\n", "On Mavericks with Xcode 6.0.1 you can read using iOS platform too.\nimport UIKit\nlet dirs : [String]? = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true) as? [String]\nlet myDir = \"/Shared Playground Data\"\n\nlet file = \"README.md\" // My change to your code - yours is presumably set off-screen\nif (dirs != nil) {\n let directories:[String] = dirs!;\n let dir = directories[0] + myDir; // iOS playground documents directory\n let path = dir.stringByAppendingPathComponent(file);\n\n //read\n let content = String.stringWithContentsOfFile(path, encoding: NSUTF8StringEncoding, error: nil)\n // works...\n println(content!)\n}\n\nRemember, you need to create a directory called \"Shared Playground Data\" in your Documents directory. Im my case I used this command: mkdir \"/Users/joao_parana/Documents/Shared Playground Data\" and put there my file README.md\n", "String.stringWithContentsOfFile is DEPRECATED and doesn't work anymore with Xcode 6.1.1\nCreate your documentDirectoryUrl\nlet documentDirectoryUrl = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask).first! 
as NSURL\n\nTo make sure the file is located there you can use the finder command Go To Folder e copy paste the printed documentDirectoryUrl.path there\nprintln(documentDirectoryUrl.path!)\n// should look like this: /Users/userName/Library/Containers/com.apple.dt.playground.stub.OSX.PLAYGROUNDFILENAME-5AF5B25D-D0D1-4B51-A297-00015EE97F13/Data/Documents\n\nJust append the file name to the folder url as a path component\nlet fileNameUrl = documentDirectoryUrl.URLByAppendingPathComponent(\"ReadMe.txt\")\nvar fileOpenError:NSError?\n\nCheck if the file exists before attempting to open it\nif NSFileManager.defaultManager().fileExistsAtPath(fileNameUrl.path!) {\n\n if let fileContent = String(contentsOfURL: fileNameUrl, encoding: NSUTF8StringEncoding, error: &fileOpenError) {\n println(fileContent) // prints ReadMe.txt contents if successful\n } else {\n if let fileOpenError = fileOpenError {\n println(fileOpenError) // Error Domain=NSCocoaErrorDomain Code=XXX \"The file “ReadMe.txt” couldn’t be opened because....\"\n }\n }\n} else {\n println(\"file not found\")\n}\n\n", "I was unable to read a file with ease in playground and ended up just creating a command line app in Xcode. This seemed to work for me very well. \n", "The other answers, relying on \"playgroundSharedDataDirectory\" never works for me, especially if using an iOS playground.\n\n\nlet documentsDirectoryShareURL = PlaygroundSupport.playgroundSharedDataDirectory.absoluteURL\nlet fileManager = FileManager()\ntry? fileManager.copyItem(at: URL(fileURLWithPath: \"/Users/rufus/Documents/Shared Playground Data/\"), to: documentsDirectoryShareURL)\n\n\r\n\nI just do the above now. I can populate my documents/shared folder, and it is just manually automatically copied to the playgrounds documents directory.\nMy code will not overwrite files that exist there. You could enhance this if you need it to look at file timestamps and then copy if necessary etc.\n", "Swift 5.7.1 - Xcode 14.1\nfunc readFile() -> [String] {\n \n if let fileURL = Bundle.main.url(forResource: \"File\", withExtension: \"txt\") {\n do {\n let content = try String(contentsOf: fileURL)\n var x = content.components(separatedBy: \"\\n\")\n x.removeAll { data in\n data.isEmpty\n }\n return x\n } catch {\n print(error)\n }\n }\n \n return [String]()\n \n}\n//Usage:\nlet input = readFile()\n\n" ]
[ 81, 38, 28, 13, 8, 3, 1, 1, 0, 0, 0 ]
[]
[]
[ "swift" ]
stackoverflow_0024245916_swift.txt
Q: Some questions about dReal: delta-satisfiability, parameter with 0.0, doing the same in Z3 and obtaining sat/unsat result I am starting with dReal and I have a set of questions about it. These questions are based on the tutorial we can find in https://github.com/dreal/dreal4, section "Python bindings", with the following code: from dreal import * x = Variable("x") y = Variable("y") z = Variable("z") f_sat = And(0 <= x, x <= 10, 0 <= y, y <= 10, 0 <= z, z <= 10, sin(x) + cos(y) == z) result = CheckSatisfiability(f_sat, 0.001) print(result) If we execute the code, then we obtain the following: x : [1.2472345184845743, 1.2475802036740027] y : [8.9290649281238181, 8.9297562985026744] z : [0.068150554073343028, 0.068589052763514458] I know these x,y and z are somehow the model that satisfies the formula, but I do not get their exact meanings. I mean, I know they have to do with delta-satisfiability, but what does x:[1.2472345184845743, 1.2475802036740027] mean? My (possibly wrong) interpretation is that any x within those bounds is a model. But, in that case, why does not the tool simply return any model within the bounds? What is the second parameter of CheckSatisfiability(f_sat, 0.001)? Once again, it has to do with delta-satisfiability, but I do not know what it is exactly. Does it mean the 'comma precision' for which we want to find a model? That is, there could be cases in which a model is, say, 1.23455 so this would mean setting a precision of 'only' 0.001 is not capable to find the model, so would return unsat. Playing with this precision, I find that I cant set it to be 0.0. For instance: f_sat2 = And(0 <= x, x <= 10, 0 <= y, y <= 10, 0 <= z, z <= 10) result2 = CheckSatisfiability(f_sat2, 0.0) print(result2) This outputs: x : [5, 5] y : [5, 5] z : [5, 5] Does this (a bound with a single number) mean that 5 is the (unique) model of x,y and z? That is, does setting precision to 0.0 yield the classic (not a delta-sat) satisfiabiliy problem? This would mean that dReal can be used also as a classic SMT solver. If this is so, is the problem with 0.0 representable with Z3? In that case, when I do the following in dReal: f_sat3 = And(0 <= x, x <= 10, 0 <= y, y <= 10, 0 <= z, z <= 10, sin(x) + cos(y) == z) result3 = CheckSatisfiability(f_sat3, 0.0) print(result3) And get the (unique) model: x : [1.2473857557646206, 1.2473857557646206] y : [8.9296050612226239, 8.9296050612226239] z : [0.068270483891846451, 0.068270483891846451] Does this mean that Z3 would also be able to give me these models? But, in that case, how could I implement these 'correct' sin() and cos() methods in Z3? By the way, the reason that it is giving models with a huge comma precision (having set parameter 0.0) responds NO to the interpretation I made in the second question. So, again, what does the second parameter of CheckSatisfiability(f_sat, 0.001) mean? How can I get the result SAT/UNSAT in dReal, instead of a model? PS: Where can I find more info, such as tutorials about the tool? Do we know any other similar tools that deal with nonlinear functions? I only have heard about MetiTarski. A: I am one of the authors of dReal. As suggested in the comment, I recommend to read dReal tool paper and “Delta-Complete Decision Procedures for Satisfiability over the Reals” paper. You can find them in https://scungao.github.io . SMT problems over Reals are undecidable when they include non-linear math functions (e.g. trigonometric functions). This means that we cannot have a generic SMT solver for this theory. 
delta-satisfiability is a way to tackle this problem by introducing over-approximation. Consequently, a delta-satisfiability solver may return two types of answers; delta-SAT and UNSAT. The interpretation of UNSAT is standard, the input formula is unsatisfiable. The interpretation of delta-sat is that the over-approximated problem is satisfiable. The degree of over-approximation is determined by the user-provided input parameter (—precision). To be precise, when a solver returns a box, any point sampled in this box satisfies the over-approximated formula.
Some questions about dReal: delta-satisfiability, parameter with 0.0, doing the same in Z3 and obtaining sat/unsat result
I am starting with dReal and I have a set of questions about it. These questions are based on the tutorial we can find in https://github.com/dreal/dreal4, section "Python bindings", with the following code: from dreal import * x = Variable("x") y = Variable("y") z = Variable("z") f_sat = And(0 <= x, x <= 10, 0 <= y, y <= 10, 0 <= z, z <= 10, sin(x) + cos(y) == z) result = CheckSatisfiability(f_sat, 0.001) print(result) If we execute the code, then we obtain the following: x : [1.2472345184845743, 1.2475802036740027] y : [8.9290649281238181, 8.9297562985026744] z : [0.068150554073343028, 0.068589052763514458] I know these x,y and z are somehow the model that satisfies the formula, but I do not get their exact meanings. I mean, I know they have to do with delta-satisfiability, but what does x:[1.2472345184845743, 1.2475802036740027] mean? My (possibly wrong) interpretation is that any x within those bounds is a model. But, in that case, why does not the tool simply return any model within the bounds? What is the second parameter of CheckSatisfiability(f_sat, 0.001)? Once again, it has to do with delta-satisfiability, but I do not know what it is exactly. Does it mean the 'comma precision' for which we want to find a model? That is, there could be cases in which a model is, say, 1.23455 so this would mean setting a precision of 'only' 0.001 is not capable to find the model, so would return unsat. Playing with this precision, I find that I cant set it to be 0.0. For instance: f_sat2 = And(0 <= x, x <= 10, 0 <= y, y <= 10, 0 <= z, z <= 10) result2 = CheckSatisfiability(f_sat2, 0.0) print(result2) This outputs: x : [5, 5] y : [5, 5] z : [5, 5] Does this (a bound with a single number) mean that 5 is the (unique) model of x,y and z? That is, does setting precision to 0.0 yield the classic (not a delta-sat) satisfiabiliy problem? This would mean that dReal can be used also as a classic SMT solver. If this is so, is the problem with 0.0 representable with Z3? In that case, when I do the following in dReal: f_sat3 = And(0 <= x, x <= 10, 0 <= y, y <= 10, 0 <= z, z <= 10, sin(x) + cos(y) == z) result3 = CheckSatisfiability(f_sat3, 0.0) print(result3) And get the (unique) model: x : [1.2473857557646206, 1.2473857557646206] y : [8.9296050612226239, 8.9296050612226239] z : [0.068270483891846451, 0.068270483891846451] Does this mean that Z3 would also be able to give me these models? But, in that case, how could I implement these 'correct' sin() and cos() methods in Z3? By the way, the reason that it is giving models with a huge comma precision (having set parameter 0.0) responds NO to the interpretation I made in the second question. So, again, what does the second parameter of CheckSatisfiability(f_sat, 0.001) mean? How can I get the result SAT/UNSAT in dReal, instead of a model? PS: Where can I find more info, such as tutorials about the tool? Do we know any other similar tools that deal with nonlinear functions? I only have heard about MetiTarski.
[ "I am one of the authors of dReal.\nAs suggested in the comment, I recommend to read dReal tool paper and “Delta-Complete Decision Procedures for Satisfiability over the Reals” paper. You can find them in https://scungao.github.io .\nSMT problems over Reals are undecidable when they include non-linear math functions (e.g. trigonometric functions). This means that we cannot have a generic SMT solver for this theory. delta-satisfiability is a way to tackle this problem by introducing over-approximation. Consequently, a delta-satisfiability solver may return two types of answers; delta-SAT and UNSAT. The interpretation of UNSAT is standard, the input formula is unsatisfiable. The interpretation of delta-sat is that the over-approximated problem is satisfiable. The degree of over-approximation is determined by the user-provided input parameter (—precision). To be precise, when a solver returns a box, any point sampled in this box satisfies the over-approximated formula.\n" ]
[ 2 ]
[]
[]
[ "dreal", "satisfiability", "smt", "z3", "z3py" ]
stackoverflow_0074678641_dreal_satisfiability_smt_z3_z3py.txt
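A minimal sketch of how the delta-sat / unsat distinction described above can be surfaced in code, assuming the Python binding's CheckSatisfiability returns a Box of intervals when the formula is delta-sat and None when it is unsatisfiable (that return-value convention is an assumption inferred from typical usage, not something stated in this record):

from dreal import *

x = Variable("x")
# Deliberately unsatisfiable: x cannot be both smaller than 1 and larger than 2.
f_unsat = And(0 <= x, x <= 10, x < 1, x > 2)

result = CheckSatisfiability(f_unsat, 0.001)  # the second argument is the precision delta
if result is None:
    print("unsat")
else:
    print("delta-sat, witness box:")
    print(result)

As the answer explains, a returned box only guarantees that points sampled from it satisfy the delta-weakened (over-approximated) formula, not the exact one.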
Q: Custom Attribute For Class Library Classes and Functions in C# I'm developing 3rd party API connector bridge in class library NOT in ASP.NET. User Levels API has 3 user levels, lets say: UserGoer UserDoer UserMaker Service Restriction Each API operation can work with one or multiple user level roles. For example, lets assume operations and reachable user levels as follows; JokerService (reachable by UserGoer, UserMaker) PokerService (reachable by UserGoer, UserDoer) MokerService (reachable by UserGoer, UserDoer, UserMaker) If UserDoer requests for JokerService, API returns bad request. JokerService is only reachable for UserGoer and UserMaker. So, I want to restrict and throw an exception. User Token Structure public interface IToken { string AccessToken { get; set; } string RefreshToken { get; set; } } public class AuthenticationToken : IToken { [JsonProperty("access_token")] public string AccessToken { get; set; } [JsonProperty("refresh_token")] public string RefreshToken { get; set; } } public class UserGoerAuthenticationToken : AuthenticationToken { } public class UserDoerAuthenticationToken : AuthenticationToken { } public class UserMakerAuthenticationToken : AuthenticationToken { } Enum public enum TokenType { Undefined = 0, UserGoer = 1, UserDoer = 2, UserMaker = 3 } Customized Authentication Attribute public class AuthenticationFilter : Attribute { public TokenType[] TokenTypes { get; private set; } public AuthenticationFilter(params TokenType[] TokenTypes) { this.TokenTypes = TokenTypes; } } Example Service [AuthenticationFilter(TokenType.UserGoer, TokenType.UserMaker)] internal class JokerService : BaseService<JokerEntity> { public JokerService(IToken AuthenticationToken) : base(AuthenticationToken) { var tokenTypes = (typeof(JokerService).GetCustomAttributes(true)[0] as AuthenticationFilter) .TokenTypes; bool throwExceptionFlag = true; foreach (var item in tokenTypes) { // Check AuthenticationToken is UserGoer or UserMaker by StartsWith function if (AuthenticationToken.GetType().Name.StartsWith(item.ToString())) { throwExceptionFlag = false; break; } } if (throwExceptionFlag) throw new Exception("Invalid Authentication Token"); } public JokerEntity Create(RequestModel<JokerEntity> model) => base.Create(model); public JokerEntity Update(RequestModel<JokerEntity> model) => base.Update(model); public JokerEntity Get(RequestModel<JokerEntity> model) => base.Get(model); public List<JokerEntity> List(RequestModel<JokerEntity> model) => base.List(model); } In summary, JokerService can be executable by UserGoer and UserMaker. UserDoer has no permission for this service. As you see the the usage of AuthenticationFilter attribute, I'm getting custom attributes in the constructor, because i want to know what IToken is. If there is an irrelevant "User Authentication Token" type that is passed as parameter (IToken), program should be throw an exception. This is my solution, do you think is there any best practice for my problem? Thank you for your help. A: Interesting question. My initial thought at constructive critique would be that the tokens accepted by a particular class via the attribute is something decided at compile time and is unable to change. But, the checking for permissions is happening on the construction of each object. You can prevent this with a static constructor that sets the tokenTypes variable. Static constructors always run before instance constructors. This is also a good place to ensure that tokenTypes is never null (in the absence of your custom attribute). 
Likewise, the looping through tokenTypes can probably be a function that takes in an IToken and the tokenTypes, and more importantly, could probably live in the BaseService.cs. Writing that logic once will make it easier to maintain when some future requirement necessitates its change. :) See also: https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/static-constructors Hope this helps.
Custom Attribute For Class Library Classes and Functions in C#
I'm developing 3rd party API connector bridge in class library NOT in ASP.NET. User Levels API has 3 user levels, lets say: UserGoer UserDoer UserMaker Service Restriction Each API operation can work with one or multiple user level roles. For example, lets assume operations and reachable user levels as follows; JokerService (reachable by UserGoer, UserMaker) PokerService (reachable by UserGoer, UserDoer) MokerService (reachable by UserGoer, UserDoer, UserMaker) If UserDoer requests for JokerService, API returns bad request. JokerService is only reachable for UserGoer and UserMaker. So, I want to restrict and throw an exception. User Token Structure public interface IToken { string AccessToken { get; set; } string RefreshToken { get; set; } } public class AuthenticationToken : IToken { [JsonProperty("access_token")] public string AccessToken { get; set; } [JsonProperty("refresh_token")] public string RefreshToken { get; set; } } public class UserGoerAuthenticationToken : AuthenticationToken { } public class UserDoerAuthenticationToken : AuthenticationToken { } public class UserMakerAuthenticationToken : AuthenticationToken { } Enum public enum TokenType { Undefined = 0, UserGoer = 1, UserDoer = 2, UserMaker = 3 } Customized Authentication Attribute public class AuthenticationFilter : Attribute { public TokenType[] TokenTypes { get; private set; } public AuthenticationFilter(params TokenType[] TokenTypes) { this.TokenTypes = TokenTypes; } } Example Service [AuthenticationFilter(TokenType.UserGoer, TokenType.UserMaker)] internal class JokerService : BaseService<JokerEntity> { public JokerService(IToken AuthenticationToken) : base(AuthenticationToken) { var tokenTypes = (typeof(JokerService).GetCustomAttributes(true)[0] as AuthenticationFilter) .TokenTypes; bool throwExceptionFlag = true; foreach (var item in tokenTypes) { // Check AuthenticationToken is UserGoer or UserMaker by StartsWith function if (AuthenticationToken.GetType().Name.StartsWith(item.ToString())) { throwExceptionFlag = false; break; } } if (throwExceptionFlag) throw new Exception("Invalid Authentication Token"); } public JokerEntity Create(RequestModel<JokerEntity> model) => base.Create(model); public JokerEntity Update(RequestModel<JokerEntity> model) => base.Update(model); public JokerEntity Get(RequestModel<JokerEntity> model) => base.Get(model); public List<JokerEntity> List(RequestModel<JokerEntity> model) => base.List(model); } In summary, JokerService can be executable by UserGoer and UserMaker. UserDoer has no permission for this service. As you see the the usage of AuthenticationFilter attribute, I'm getting custom attributes in the constructor, because i want to know what IToken is. If there is an irrelevant "User Authentication Token" type that is passed as parameter (IToken), program should be throw an exception. This is my solution, do you think is there any best practice for my problem? Thank you for your help.
[ "Interesting question. My initial thought at constructive critique would be that the tokens accepted by a particular class via the attribute is something decided at compile time and is unable to change. But, the checking for permissions is happening on the construction of each object.\nYou can prevent this with a static constructor that sets the tokenTypes variable. Static constructors always run before instance constructors. This is also a good place to ensure that tokenTypes is never null (in the absence of your custom attribute).\nLikewise, the looping through tokenTypes can probably be a function that takes in an IToken and the tokenTypes, and more importantly, could probably live in the BaseService.cs. Writing that logic once will make it easier to maintain when some future requirement necessitates its change. :)\nSee also: https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/static-constructors\nHope this helps.\n" ]
[ 1 ]
[]
[]
[ "attributes", "c#", "class_library", "design_patterns", "oop" ]
stackoverflow_0074679119_attributes_c#_class_library_design_patterns_oop.txt
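A rough C# sketch of the static-constructor suggestion from the answer above, reusing the types declared in the question (AuthenticationFilter, TokenType, IToken, BaseService); the field name and the exact reflection call are illustrative assumptions, not code from the original post:

using System;
using System.Linq;

internal class JokerService : BaseService<JokerEntity>
{
    private static readonly TokenType[] AllowedTokenTypes;

    // Runs once per type, before the first instance is constructed,
    // so the attribute lookup is not repeated on every object creation.
    static JokerService()
    {
        var filter = (AuthenticationFilter)Attribute.GetCustomAttribute(
            typeof(JokerService), typeof(AuthenticationFilter));
        AllowedTokenTypes = filter?.TokenTypes ?? Array.Empty<TokenType>();
    }

    public JokerService(IToken authenticationToken) : base(authenticationToken)
    {
        // This check could live in BaseService so the loop is written only once.
        bool allowed = AllowedTokenTypes.Any(
            t => authenticationToken.GetType().Name.StartsWith(t.ToString()));
        if (!allowed)
            throw new Exception("Invalid Authentication Token");
    }
}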
Q: Make margin style to mat-slide-toggle-bar element I want to do a margin style for mat-slide-toggle-bar which is in a specific parent element mat-slide-toggle with class name parent-element. Here my Html: <mat-slide-toggle _ngcontent-ng-cli-universal-c397="" class="parent-element mat-slide-toggle parent-element mat-accent mat-checked mat-disabled ng-untouched ng-pristine" ng-reflect-form="[object Object]" id="mat-slide-toggle-1"><label class="mat-slide-toggle-label" for="mat-slide-toggle-1-input"> <span class="mat-slide-toggle-bar"> <input type="checkbox" role="switch" class="mat-slide-toggle-input cdk-visually-hidden" id="mat-slide-toggle-1-input" tabindex="-1" disabled="" aria-checked="true"> <span class="mat-slide-toggle-thumb-container"> <span class="mat-slide-toggle-thumb"></span> <span mat-ripple="" class="mat-ripple mat-slide-toggle-ripple mat-focus-indicator" ng-reflect-trigger="[object HTMLLabelElement]" ng-reflect-disabled="true" ng-reflect-centered="true" ng-reflect-radius="20" ng-reflect-animation="[object Object]"> <span class="mat-ripple-element mat-slide-toggle-persistent-ripple"></span> </span> </span> </span> <span class="mat-slide-toggle-content"><span style="display: none;">&nbsp;</span> text</span></label> </mat-slide-toggle> What I did in style file but dosn't work : .parent-element .mat-slide-toggle-bar { margin-left: 80px; }}} A: You can try this: .mat-slide-toggle-bar {margin-left: 80px !important;} !important will take precedence over most other rules. The first answer to the linked question gives more details about it: What is the order of precedence for CSS? A: I solved it by this code in the scss of my parent component : .parent-element .mat-slide-toggle-bar { margin-left: 20px !important; }
Make margin style to mat-slide-toggle-bar element
I want to do a margin style for mat-slide-toggle-bar which is in a specific parent element mat-slide-toggle with class name parent-element. Here my Html: <mat-slide-toggle _ngcontent-ng-cli-universal-c397="" class="parent-element mat-slide-toggle parent-element mat-accent mat-checked mat-disabled ng-untouched ng-pristine" ng-reflect-form="[object Object]" id="mat-slide-toggle-1"><label class="mat-slide-toggle-label" for="mat-slide-toggle-1-input"> <span class="mat-slide-toggle-bar"> <input type="checkbox" role="switch" class="mat-slide-toggle-input cdk-visually-hidden" id="mat-slide-toggle-1-input" tabindex="-1" disabled="" aria-checked="true"> <span class="mat-slide-toggle-thumb-container"> <span class="mat-slide-toggle-thumb"></span> <span mat-ripple="" class="mat-ripple mat-slide-toggle-ripple mat-focus-indicator" ng-reflect-trigger="[object HTMLLabelElement]" ng-reflect-disabled="true" ng-reflect-centered="true" ng-reflect-radius="20" ng-reflect-animation="[object Object]"> <span class="mat-ripple-element mat-slide-toggle-persistent-ripple"></span> </span> </span> </span> <span class="mat-slide-toggle-content"><span style="display: none;">&nbsp;</span> text</span></label> </mat-slide-toggle> What I did in style file but dosn't work : .parent-element .mat-slide-toggle-bar { margin-left: 80px; }}}
[ "You can try this:\n.mat-slide-toggle-bar {margin-left: 80px !important;}\n\n!important will take precedence over most other rules.\nThe first answer to the linked question gives more details about it:\nWhat is the order of precedence for CSS?\n", "I solved it by this code in the scss of my parent component :\n\n\n.parent-element .mat-slide-toggle-bar {\n margin-left: 20px !important;\n}\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "angular", "html", "sass" ]
stackoverflow_0074641523_angular_html_sass.txt
Q: python select polygons containing a point I would like to select polygons that contain at least a point. I can use QGIS's tool called "Select by location: Select all buildings by location of point". Is there a python alternative? So far, I wrote a jupyter notebook and worked with GeoPandas. I have tried import geopandas as gpd import pandas as pd polygon_layer = gpd.read_file(r'file.shp') excel = pd.read_excel('file2.xlsx') points_layer = gpd.GeoDataFrame(excel, geometry=gpd.points_from_xy(excel.X, excel.Y)) subset = gpd.sjoin(polygon_layer, points_layer, how='inner', predicate='within') However, the spatial join does not work as it returns an empty geoDataFrame A: You can use the within method in GeoPandas to select the polygons that contain at least one point in your points layer: import geopandas as gpd import pandas as pd # Read the polygon layer from a shapefile. polygon_layer = gpd.read_file(r'file.shp') # Read the points layer from an Excel file. excel = pd.read_excel('file2.xlsx') points_layer = gpd.GeoDataFrame(excel, geometry=gpd.points_from_xy(excel.X, excel.Y)) # Select the polygons that contain at least one point. selected_polygons = polygon_layer[polygon_layer.within(points_layer.unary_union)] # Print the selected polygons. print(selected_polygons)
python select polygons containing a point
I would like to select polygons that contain at least one point. I can use QGIS's tool called "Select by location: Select all buildings by location of point". Is there a Python alternative? So far, I wrote a Jupyter notebook and worked with GeoPandas. I have tried: import geopandas as gpd import pandas as pd polygon_layer = gpd.read_file(r'file.shp') excel = pd.read_excel('file2.xlsx') points_layer = gpd.GeoDataFrame(excel, geometry=gpd.points_from_xy(excel.X, excel.Y)) subset = gpd.sjoin(polygon_layer, points_layer, how='inner', predicate='within') However, the spatial join does not work: it returns an empty GeoDataFrame.
[ "You can use the within method in GeoPandas to select the polygons that contain at least one point in your points layer:\nimport geopandas as gpd\nimport pandas as pd\n\n# Read the polygon layer from a shapefile.\npolygon_layer = gpd.read_file(r'file.shp')\n\n# Read the points layer from an Excel file.\nexcel = pd.read_excel('file2.xlsx')\npoints_layer = gpd.GeoDataFrame(excel, geometry=gpd.points_from_xy(excel.X, excel.Y))\n\n# Select the polygons that contain at least one point.\nselected_polygons = polygon_layer[polygon_layer.within(points_layer.unary_union)]\n\n# Print the selected polygons.\nprint(selected_polygons)\n\n" ]
[ 0 ]
[]
[]
[ "geopandas", "gis" ]
stackoverflow_0074679307_geopandas_gis.txt
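A hedged sketch of the same selection done from the polygon side. The key detail is that spatial predicates in sjoin are directional: with polygons as the left frame the test has to be "contains" (polygon contains point), whereas "within" asks whether each polygon lies inside the points, which is why the join in the question comes back empty. The crs argument below is an assumption that the spreadsheet's X/Y values are expressed in the shapefile's coordinate system; if they are not, the layers must be reprojected first.

import geopandas as gpd
import pandas as pd

polygon_layer = gpd.read_file("file.shp")
excel = pd.read_excel("file2.xlsx")
points_layer = gpd.GeoDataFrame(
    excel,
    geometry=gpd.points_from_xy(excel.X, excel.Y),
    crs=polygon_layer.crs,  # assumption: same coordinate system as the shapefile
)

# Keep polygons that contain at least one point.
joined = gpd.sjoin(polygon_layer, points_layer, how="inner", predicate="contains")

# sjoin repeats a polygon once per matching point; keep each polygon only once.
subset = joined.loc[~joined.index.duplicated(keep="first")]
print(len(subset), "polygons contain at least one point")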
Q: read entire folder and extract multiple lines and append to new file I'm very new to python and this is far beyond what I'm capable of. I have multiple text files test01.txt test02.txt test03.txt test*.txt Each file has same # of lines and same structure i want to extract lines 20-25 and put that into a text file that i can manipulate in excel. since there are 100s of files, it would be great if we can put the text file name on top or next to the data too. this is basically what i was able to do but as you can see it's not exactly "fast" thanks! file1 = open("test01.txt", "r") content = file1.readlines() file1 = open("values.txt","w") file1.write("test01.txt" + "\n") file1.writelines(content[33:36]) file1.close() file1 = open("test02.txt", "r") content = file1.readlines() #Append-adds at last file1 = open("values.txt","a")#append mode file1.write("test02.txt" + "\n") file1.writelines(content[33:36]) file1.close() file1 = open("test03.txt", "r") content = file1.readlines() #Append-adds at last file1 = open("values.txt","a")#append mode file1.write("test03.txt" + "\n") file1.writelines(content[33:36]) file1.close() A: Here is a script where you can read all files in a directory and write the name of the file and the content into a another file like you did. import os ValuesTextFile = open("values.txt","a") Path = './files/' for Filename in os.listdir(Path): print (Filename) ValuesTextFile.writelines(Filename) File = open(Path + Filename, "r") Content = File.readlines() ValuesTextFile.writelines(Content[33:36]) File.close() ValuesTextFile.close()
read entire folder and extract multiple lines and append to new file
I'm very new to python and this is far beyond what I'm capable of. I have multiple text files test01.txt test02.txt test03.txt test*.txt Each file has same # of lines and same structure i want to extract lines 20-25 and put that into a text file that i can manipulate in excel. since there are 100s of files, it would be great if we can put the text file name on top or next to the data too. this is basically what i was able to do but as you can see it's not exactly "fast" thanks! file1 = open("test01.txt", "r") content = file1.readlines() file1 = open("values.txt","w") file1.write("test01.txt" + "\n") file1.writelines(content[33:36]) file1.close() file1 = open("test02.txt", "r") content = file1.readlines() #Append-adds at last file1 = open("values.txt","a")#append mode file1.write("test02.txt" + "\n") file1.writelines(content[33:36]) file1.close() file1 = open("test03.txt", "r") content = file1.readlines() #Append-adds at last file1 = open("values.txt","a")#append mode file1.write("test03.txt" + "\n") file1.writelines(content[33:36]) file1.close()
[ "Here is a script where you can read all files in a directory and write the name of the file and the content into a another file like you did.\nimport os\n\nValuesTextFile = open(\"values.txt\",\"a\")\nPath = './files/'\nfor Filename in os.listdir(Path):\n print (Filename)\n ValuesTextFile.writelines(Filename)\n File = open(Path + Filename, \"r\")\n Content = File.readlines()\n ValuesTextFile.writelines(Content[33:36])\n File.close()\nValuesTextFile.close()\n\n" ]
[ 0 ]
[]
[]
[ "new_operator", "python" ]
stackoverflow_0074679114_new_operator_python.txt
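A compact way to do the same thing for every test*.txt in a folder with a single loop, using pathlib. The folder path, the slice bounds, and the tab-separated layout are assumptions chosen so the output opens cleanly in Excel; note the original code slices content[33:36] while the text says lines 20-25, so set the bounds to whichever is actually wanted.

from pathlib import Path

folder = Path(".")        # assumption: the test*.txt files sit in the current directory
start, stop = 19, 25      # assumption: "lines 20-25" as a 0-based slice

with open("values.txt", "w") as out:
    for txt_path in sorted(folder.glob("test*.txt")):
        lines = txt_path.read_text().splitlines()
        for line in lines[start:stop]:
            # one row per extracted line: file name, a tab, then the line itself
            out.write(f"{txt_path.name}\t{line}\n")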
Q: How to add space between create button and a success message that will show at the bottom after creating? In my form, I have the following JSX written with React, and after I create with the button, the success message will touch the button at the bottom, leaving no space. Is there any way to add space between them? Thanks again. ` <button className="btn btn-primary">Create a hat</button> </div> </form> <div className={successClass} id="success-message"> You have created a new hat! </div> ` I tried googling methods to add properties to the button and the message, but nothing seems to split them apart. A: Try adding top padding to the success message div (in a JSX inline style object the CSS property is written as paddingTop). <button className="btn btn-primary">Create a hat</button> </div> </form> <div className={successClass} id="success-message" style={{paddingTop: "2rem"}} > You have created a new hat! </div> `
How to add space between create button and a success message that will show at the bottom after creating?
In my form, I have the following JSX written with React, and after I create with the button, the success message will touch the button at the bottom, leaving no space. Is there any way to add space between them? Thanks again. ` <button className="btn btn-primary">Create a hat</button> </div> </form> <div className={successClass} id="success-message"> You have created a new hat! </div> ` I tried googling methods to add properties to the button and the message, but nothing seems to split them apart.
[ "Try adding padding-top to the success message div.\n<button className=\"btn btn-primary\">Create a hat</button>\n </div>\n </form>\n <div className={successClass} id=\"success-message\"\nstyle={{padding-top :\"2rem\"}}\n>\n You have created a new hat!\n </div>\n\n`\n\n" ]
[ 0 ]
[]
[]
[ "jsx", "reactjs" ]
stackoverflow_0074679272_jsx_reactjs.txt
Q: Argument of type 'this' is not assignable to parameter of type 'Construct' in AWS CDK I am having an issue while using the CDK in that the this property is erroring and saying that I can't assign 'this' to parameter of type construct. This is happens start on the const s3ListLambdaRole part and makes every new variable declaration after that also error for the same thing. import * as sns from '@aws-cdk/aws-sns'; import * as subs from '@aws-cdk/aws-sns-subscriptions'; import * as sqs from '@aws-cdk/aws-sqs'; import * as cdk from '@aws-cdk/core'; import * as s3 from '@aws-cdk/aws-s3'; import * as lambda from '@aws-cdk/aws-lambda'; import * as path from 'path'; import { Bucket } from '@aws-cdk/aws-s3'; import * as iam from'@aws-cdk/aws-iam'; export class SecurityBaselineDevStack extends cdk.Stack { constructor(scope: cdk.App, id: string, props?: cdk.StackProps) { super(scope, id, props); const testSecurityqueue = new sqs.Queue(this, 'testSecurityqueue', { visibilityTimeout: cdk.Duration.seconds(300) }); const testSecuritytopic = new sns.Topic(this, 'testSecuritytopic'); testSecuritytopic.addSubscription(new subs.SqsSubscription(testSecurityqueue)); //Creating lambda role below const s3ListLambdaRole = new iam.Role(this, 's3ListLambdaRole', { assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'), }); s3ListLambdaRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AWSLambdaFullAccess')) //creates LambdaFullAccess Role //Adding specific permissions to role now s3ListLambdaRole.addToPolicy(new iam.PolicyStatement({ resources: ['*'], //adds full access to lamda actions: ['s3'] })); const s3ListLambda = new lambda.Function (this, 's3ListLambda', { runtime: lambda.Runtime.PYTHON_3_6, handler: 'listS3.handler', role:s3ListLambdaRole, code: lambda.Code.fromAsset(path.join(__dirname, '../lambda')) }); const testSecurityBucket = new s3.Bucket(this, 'testSecurityBucket'); } } Thank you in advance! A: This happens when version of CDK dependencies are at different versions.Make sure CDK dependencies have same version. Delete node_modules folder Delete package-lock.json Ensure all dependencies in package.json are using same version. Remove carrot ^ symbol before dependencies npm install A: If anyone comes across this issue currently, like I did which is what led me here, the thing I did to resolve this was use the cdk lib and import the things I needed from that. import {Stack, StackProps, App, aws_s3 as s3, aws_iam as iam } from 'aws-cdk-lib'; import { BucketEncryption } from 'aws-cdk-lib/aws-s3'; A: update your CDK library. This is normally caused when your CDK library has different versions. npm update -g aws-cdk A: The issue was that the @aws-cdk/lambda dependency was not the same version as the @aws-cdk/sqs and @aws-cdk/sns dependencies. A: Just to pile on to Yogeshwar's answer (which was incredibly helpful)... While making sure the versions being used are consistent was a good first step, understanding where the specific versions were coming from and why more deeply was key for me to fix this issue for me. I'm using CDK v2, which only requires you to pull in a single module (aws-cdk-lib), but I was still seeing the issue related to the construct library. Two additional pieces of information that helped were to run npm why constructs to see where the different versions of the packages were being pulled in from. This led me to another module that I had created that was asking for "constructs": "^10.0.0", but was getting 10.1.x... 
This pushed me towards getting a greater understanding of semver, and finding out that ^, will pull in anything in the 10.x series of versions, thus bringing in the incompatible package (the "Remove ^" step from the other answer). This caused me to update how I'm specifying dependencies, and I was eventually able to fix the issue.
Argument of type 'this' is not assignable to parameter of type 'Construct' in AWS CDK
I am having an issue while using the CDK in that the this property is erroring and saying that I can't assign 'this' to parameter of type construct. This is happens start on the const s3ListLambdaRole part and makes every new variable declaration after that also error for the same thing. import * as sns from '@aws-cdk/aws-sns'; import * as subs from '@aws-cdk/aws-sns-subscriptions'; import * as sqs from '@aws-cdk/aws-sqs'; import * as cdk from '@aws-cdk/core'; import * as s3 from '@aws-cdk/aws-s3'; import * as lambda from '@aws-cdk/aws-lambda'; import * as path from 'path'; import { Bucket } from '@aws-cdk/aws-s3'; import * as iam from'@aws-cdk/aws-iam'; export class SecurityBaselineDevStack extends cdk.Stack { constructor(scope: cdk.App, id: string, props?: cdk.StackProps) { super(scope, id, props); const testSecurityqueue = new sqs.Queue(this, 'testSecurityqueue', { visibilityTimeout: cdk.Duration.seconds(300) }); const testSecuritytopic = new sns.Topic(this, 'testSecuritytopic'); testSecuritytopic.addSubscription(new subs.SqsSubscription(testSecurityqueue)); //Creating lambda role below const s3ListLambdaRole = new iam.Role(this, 's3ListLambdaRole', { assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'), }); s3ListLambdaRole.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AWSLambdaFullAccess')) //creates LambdaFullAccess Role //Adding specific permissions to role now s3ListLambdaRole.addToPolicy(new iam.PolicyStatement({ resources: ['*'], //adds full access to lamda actions: ['s3'] })); const s3ListLambda = new lambda.Function (this, 's3ListLambda', { runtime: lambda.Runtime.PYTHON_3_6, handler: 'listS3.handler', role:s3ListLambdaRole, code: lambda.Code.fromAsset(path.join(__dirname, '../lambda')) }); const testSecurityBucket = new s3.Bucket(this, 'testSecurityBucket'); } } Thank you in advance!
[ "This happens when version of CDK dependencies are at different versions.Make sure CDK dependencies have same version.\n\nDelete node_modules folder\nDelete package-lock.json\nEnsure all dependencies in package.json are using same version.\nRemove carrot ^ symbol before dependencies\nnpm install\n\n", "If anyone comes across this issue currently, like I did which is what led me here, the thing I did to resolve this was use the cdk lib and import the things I needed from that.\nimport {Stack, StackProps, App, aws_s3 as s3, aws_iam as iam } from 'aws-cdk-lib';\nimport { BucketEncryption } from 'aws-cdk-lib/aws-s3';\n\n", "update your CDK library. This is normally caused when your CDK library has different versions. npm update -g aws-cdk\n", "The issue was that the @aws-cdk/lambda dependency was not the same version as the @aws-cdk/sqs and @aws-cdk/sns dependencies.\n", "Just to pile on to Yogeshwar's answer (which was incredibly helpful)...\nWhile making sure the versions being used are consistent was a good first step, understanding where the specific versions were coming from and why more deeply was key for me to fix this issue for me.\nI'm using CDK v2, which only requires you to pull in a single module (aws-cdk-lib), but I was still seeing the issue related to the construct library. Two additional pieces of information that helped were to run npm why constructs to see where the different versions of the packages were being pulled in from. This led me to another module that I had created that was asking for \"constructs\": \"^10.0.0\", but was getting 10.1.x...\nThis pushed me towards getting a greater understanding of semver, and finding out that ^, will pull in anything in the 10.x series of versions, thus bringing in the incompatible package (the \"Remove ^\" step from the other answer).\nThis caused me to update how I'm specifying dependencies, and I was eventually able to fix the issue.\n" ]
[ 6, 6, 3, 0, 0 ]
[]
[]
[ "aws_cdk", "typescript" ]
stackoverflow_0063745383_aws_cdk_typescript.txt
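To make the "same version, no caret" advice from the accepted answer concrete, the dependencies block of package.json for the CDK v1 modules used in the question would be pinned roughly like this; the version number is only a placeholder, the point is that every @aws-cdk/* entry carries one identical, caret-free version so npm cannot resolve them to different releases:

"dependencies": {
  "@aws-cdk/core": "1.204.0",
  "@aws-cdk/aws-iam": "1.204.0",
  "@aws-cdk/aws-lambda": "1.204.0",
  "@aws-cdk/aws-s3": "1.204.0",
  "@aws-cdk/aws-sns": "1.204.0",
  "@aws-cdk/aws-sns-subscriptions": "1.204.0",
  "@aws-cdk/aws-sqs": "1.204.0"
}

As the other answers note, CDK v2 largely avoids the problem because everything is imported from the single aws-cdk-lib package, with only the constructs library as a separate dependency.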
Q: Convert base64 string to ArrayBuffer I need to convert a base64 encode string into an ArrayBuffer. The base64 strings are user input, they will be copy and pasted from an email, so they're not there when the page is loaded. I would like to do this in javascript without making an ajax call to the server if possible. I found those links interesting, but they didt'n help me: ArrayBuffer to base64 encoded string this is about the opposite conversion, from ArrayBuffer to base64, not the other way round http://jsperf.com/json-vs-base64/2 this looks good but i can't figure out how to use the code. Is there an easy (maybe native) way to do the conversion? thanks A: Try this: function _base64ToArrayBuffer(base64) { var binary_string = window.atob(base64); var len = binary_string.length; var bytes = new Uint8Array(len); for (var i = 0; i < len; i++) { bytes[i] = binary_string.charCodeAt(i); } return bytes.buffer; } A: Using TypedArray.from: Uint8Array.from(atob(base64_string), c => c.charCodeAt(0)) Performance to be compared with the for loop version of Goran.it answer. A: Goran.it's answer does not work because of unicode problem in javascript - https://developer.mozilla.org/en-US/docs/Web/API/WindowBase64/Base64_encoding_and_decoding. I ended up using the function given on Daniel Guerrero's blog: http://blog.danguer.com/2011/10/24/base64-binary-decoding-in-javascript/ Function is listed on github link: https://github.com/danguer/blog-examples/blob/master/js/base64-binary.js Use these lines var uintArray = Base64Binary.decode(base64_string); var byteArray = Base64Binary.decodeArrayBuffer(base64_string); A: For Node.js users: const myBuffer = Buffer.from(someBase64String, 'base64'); myBuffer will be of type Buffer which is a subclass of Uint8Array. Unfortunately, Uint8Array is NOT an ArrayBuffer as the OP was asking for. But when manipulating an ArrayBuffer I almost always wrap it with Uint8Array or something similar, so it should be close to what's being asked for. A: Just found base64-arraybuffer, a small npm package with incredibly high usage, 5M downloads last month (2017-08). https://www.npmjs.com/package/base64-arraybuffer For anyone looking for something of a best standard solution, this may be it. A: Async solution, it's better when the data is big: // base64 to buffer function base64ToBufferAsync(base64) { var dataUrl = "data:application/octet-binary;base64," + base64; fetch(dataUrl) .then(res => res.arrayBuffer()) .then(buffer => { console.log("base64 to buffer: " + new Uint8Array(buffer)); }) } // buffer to base64 function bufferToBase64Async( buffer ) { var blob = new Blob([buffer], {type:'application/octet-binary'}); console.log("buffer to blob:" + blob) var fileReader = new FileReader(); fileReader.onload = function() { var dataUrl = fileReader.result; console.log("blob to dataUrl: " + dataUrl); var base64 = dataUrl.substr(dataUrl.indexOf(',')+1) console.log("dataUrl to base64: " + base64); }; fileReader.readAsDataURL(blob); } A: Javascript is a fine development environment so it seems odd than it doesn't provide a solution to this small problem. The solutions offered elsewhere on this page are potentially slow. Here is my solution. It employs the inbuilt functionality that decodes base64 image and sound data urls. 
var req = new XMLHttpRequest; req.open('GET', "data:application/octet;base64," + base64Data); req.responseType = 'arraybuffer'; req.onload = function fileLoaded(e) { var byteArray = new Uint8Array(e.target.response); // var shortArray = new Int16Array(e.target.response); // var unsignedShortArray = new Int16Array(e.target.response); // etc. } req.send(); The send request fails if the base 64 string is badly formed. The mime type (application/octet) is probably unnecessary. Tested in chrome. Should work in other browsers. A: Pure JS - no string middlestep (no atob) I write following function which convert base64 in direct way (without conversion to string at the middlestep). IDEA get 4 base64 characters chunk find index of each character in base64 alphabet convert index to 6-bit number (binary string) join four 6 bit numbers which gives 24-bit numer (stored as binary string) split 24-bit string to three 8-bit and covert each to number and store them in output array corner case: if input base64 string ends with one/two = char, remove one/two numbers from output array Below solution allows to process large input base64 strings. Similar function for convert bytes to base64 without btoa is HERE function base64ToBytesArr(str) { const abc = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"]; // base64 alphabet let result = []; for(let i=0; i<str.length/4; i++) { let chunk = [...str.slice(4*i,4*i+4)] let bin = chunk.map(x=> abc.indexOf(x).toString(2).padStart(6,0)).join(''); let bytes = bin.match(/.{1,8}/g).map(x=> +('0b'+x)); result.push(...bytes.slice(0,3 - (str[4*i+2]=="=") - (str[4*i+3]=="="))); } return result; } // -------- // TEST // -------- let test = "Alice's Adventure in Wonderland."; console.log('test string:', test.length, test); let b64_btoa = btoa(test); console.log('encoded string:', b64_btoa); let decodedBytes = base64ToBytesArr(b64_btoa); // decode base64 to array of bytes console.log('decoded bytes:', JSON.stringify(decodedBytes)); let decodedTest = decodedBytes.map(b => String.fromCharCode(b) ).join``; console.log('Uint8Array', JSON.stringify(new Uint8Array(decodedBytes))); console.log('decoded string:', decodedTest.length, decodedTest); Caution! If you want to decode base64 to STRING (not bytes array) and you know that result contains utf8 characters then atob will fail in general e.g. for character the atob("8J+SqQ==") will give wrong result . In this case you can use above solution and convert result bytes array to string in proper way e.g. 
: function base64ToBytesArr(str) { const abc = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"]; // base64 alphabet let result = []; for(let i=0; i<str.length/4; i++) { let chunk = [...str.slice(4*i,4*i+4)] let bin = chunk.map(x=> abc.indexOf(x).toString(2).padStart(6,0)).join(''); let bytes = bin.match(/.{1,8}/g).map(x=> +('0b'+x)); result.push(...bytes.slice(0,3 - (str[4*i+2]=="=") - (str[4*i+3]=="="))); } return result; } // -------- // TEST // -------- let testB64 = "8J+SqQ=="; // for string: ""; console.log('input base64 :', testB64); let decodedBytes = base64ToBytesArr(testB64); // decode base64 to array of bytes console.log('decoded bytes :', JSON.stringify(decodedBytes)); let result = new TextDecoder("utf-8").decode(new Uint8Array(decodedBytes)); console.log('properly decoded string :', result); let result_atob = atob(testB64); console.log('decoded by atob :', result_atob); Snippets tested 2022-08-04 on: chrome 103.0.5060.134 (arm64), safari 15.2, firefox 103.0.1 (64 bit), edge 103.0.1264.77 (arm64), and node-js v12.16.1 A: I would strongly suggest using an npm package implementing correctly the base64 specification. The best one I know is rfc4648 The problem is that btoa and atob use binary strings instead of Uint8Array and trying to convert to and from it is cumbersome. Also there is a lot of bad packages in npm for that. I lose a lot of time before finding that one. The creators of that specific package did a simple thing: they took the specification of Base64 (which is here by the way) and implemented it correctly from the beginning to the end. (Including other formats in the specification that are also useful like Base64-url, Base32, etc ...) That doesn't seem a lot but apparently that was too much to ask to the bunch of other libraries. So yeah, I know I'm doing a bit of proselytism but if you want to avoid losing your time too just use rfc4648. A: I used the accepted answer to this question to create base64Url string <-> arrayBuffer conversions in the realm of base64Url data transmitted via ASCII-cookie [atob, btoa are base64[with +/]<->js binary string], so I decided to post the code. Many of us may want both conversions and client-server communication may use the base64Url version (though a cookie may contain +/ as well as -_ characters if I understand well, only ",;\ characters and some wicked characters from the 128 ASCII are disallowed). But a url cannot contain / character, hence the wider use of b64 url version which of course not what atob-btoa supports... Seeing other comments, I would like to stress that my use case here is base64Url data transmission via url/cookie and trying to use this crypto data with the js crypto api (2017) hence the need for ArrayBuffer representation and b64u <-> arrBuff conversions... if array buffers represent other than base64 (part of ascii) this conversion wont work since atob, btoa is limited to ascii(128). Check out an appropriate converter like below: The buff -> b64u version is from a tweet from Mathias Bynens, thanks for that one (too)! He also wrote a base64 encoder/decoder: https://github.com/mathiasbynens/base64 Coming from java, it may help when trying to understand the code that java byte[] is practically js Int8Array (signed int) but we use here the unsigned version Uint8Array since js conversions work with them. They are both 256bit, so we call it byte[] in js now... The code is from a module class, that is why static. 
//utility /** * Array buffer to base64Url string * - arrBuff->byte[]->biStr->b64->b64u * @param arrayBuffer * @returns {string} * @private */ static _arrayBufferToBase64Url(arrayBuffer) { console.log('base64Url from array buffer:', arrayBuffer); let base64Url = window.btoa(String.fromCodePoint(...new Uint8Array(arrayBuffer))); base64Url = base64Url.replaceAll('+', '-'); base64Url = base64Url.replaceAll('/', '_'); console.log('base64Url:', base64Url); return base64Url; } /** * Base64Url string to array buffer * - b64u->b64->biStr->byte[]->arrBuff * @param base64Url * @returns {ArrayBufferLike} * @private */ static _base64UrlToArrayBuffer(base64Url) { console.log('array buffer from base64Url:', base64Url); let base64 = base64Url.replaceAll('-', '+'); base64 = base64.replaceAll('_', '/'); const binaryString = window.atob(base64); const length = binaryString.length; const bytes = new Uint8Array(length); for (let i = 0; i < length; i++) { bytes[i] = binaryString.charCodeAt(i); } console.log('array buffer:', bytes.buffer); return bytes.buffer; } A: made a ArrayBuffer from a base64: function base64ToArrayBuffer(base64) { var binary_string = window.atob(base64); var len = binary_string.length; var bytes = new Uint8Array(len); for (var i = 0; i < len; i++) { bytes[i] = binary_string.charCodeAt(i); } return bytes.buffer; } I was trying to use above code and It's working fine. A: Solution without atob I've seen many people complaining about using atob and btoa in the replies. There are some issues to take into account when using them. There's a solution without using them in the MDN page about Base64. Below you can find the code to convert a base64 string into a Uint8Array copied from the docs. Note that the function below returns a Uint8Array. To get the ArrayBuffer version you just need to do uintArray.buffer. function b64ToUint6(nChr) { return nChr > 64 && nChr < 91 ? nChr - 65 : nChr > 96 && nChr < 123 ? nChr - 71 : nChr > 47 && nChr < 58 ? nChr + 4 : nChr === 43 ? 62 : nChr === 47 ? 63 : 0; } function base64DecToArr(sBase64, nBlocksSize) { const sB64Enc = sBase64.replace(/[^A-Za-z0-9+/]/g, ""); const nInLen = sB64Enc.length; const nOutLen = nBlocksSize ? Math.ceil(((nInLen * 3 + 1) >> 2) / nBlocksSize) * nBlocksSize : (nInLen * 3 + 1) >> 2; const taBytes = new Uint8Array(nOutLen); let nMod3; let nMod4; let nUint24 = 0; let nOutIdx = 0; for (let nInIdx = 0; nInIdx < nInLen; nInIdx++) { nMod4 = nInIdx & 3; nUint24 |= b64ToUint6(sB64Enc.charCodeAt(nInIdx)) << (6 * (3 - nMod4)); if (nMod4 === 3 || nInLen - nInIdx === 1) { nMod3 = 0; while (nMod3 < 3 && nOutIdx < nOutLen) { taBytes[nOutIdx] = (nUint24 >>> ((16 >>> nMod3) & 24)) & 255; nMod3++; nOutIdx++; } nUint24 = 0; } } return taBytes; } If you're interested in the reverse operation, ArrayBuffer to base64, you can find how to do it in the same link.
Convert base64 string to ArrayBuffer
I need to convert a base64 encode string into an ArrayBuffer. The base64 strings are user input, they will be copy and pasted from an email, so they're not there when the page is loaded. I would like to do this in javascript without making an ajax call to the server if possible. I found those links interesting, but they didt'n help me: ArrayBuffer to base64 encoded string this is about the opposite conversion, from ArrayBuffer to base64, not the other way round http://jsperf.com/json-vs-base64/2 this looks good but i can't figure out how to use the code. Is there an easy (maybe native) way to do the conversion? thanks
[ "Try this:\nfunction _base64ToArrayBuffer(base64) {\n var binary_string = window.atob(base64);\n var len = binary_string.length;\n var bytes = new Uint8Array(len);\n for (var i = 0; i < len; i++) {\n bytes[i] = binary_string.charCodeAt(i);\n }\n return bytes.buffer;\n}\n\n", "Using TypedArray.from:\nUint8Array.from(atob(base64_string), c => c.charCodeAt(0))\n\nPerformance to be compared with the for loop version of Goran.it answer.\n", "Goran.it's answer does not work because of unicode problem in javascript - https://developer.mozilla.org/en-US/docs/Web/API/WindowBase64/Base64_encoding_and_decoding. \nI ended up using the function given on Daniel Guerrero's blog: http://blog.danguer.com/2011/10/24/base64-binary-decoding-in-javascript/\nFunction is listed on github link: https://github.com/danguer/blog-examples/blob/master/js/base64-binary.js\n \nUse these lines \nvar uintArray = Base64Binary.decode(base64_string); \nvar byteArray = Base64Binary.decodeArrayBuffer(base64_string); \n\n", "For Node.js users:\nconst myBuffer = Buffer.from(someBase64String, 'base64');\n\nmyBuffer will be of type Buffer which is a subclass of Uint8Array. Unfortunately, Uint8Array is NOT an ArrayBuffer as the OP was asking for. But when manipulating an ArrayBuffer I almost always wrap it with Uint8Array or something similar, so it should be close to what's being asked for.\n", "Just found base64-arraybuffer, a small npm package with incredibly high usage, 5M downloads last month (2017-08).\nhttps://www.npmjs.com/package/base64-arraybuffer\nFor anyone looking for something of a best standard solution, this may be it.\n", "Async solution, it's better when the data is big:\n// base64 to buffer\nfunction base64ToBufferAsync(base64) {\n var dataUrl = \"data:application/octet-binary;base64,\" + base64;\n\n fetch(dataUrl)\n .then(res => res.arrayBuffer())\n .then(buffer => {\n console.log(\"base64 to buffer: \" + new Uint8Array(buffer));\n })\n}\n\n// buffer to base64\nfunction bufferToBase64Async( buffer ) {\n var blob = new Blob([buffer], {type:'application/octet-binary'}); \n console.log(\"buffer to blob:\" + blob)\n\n var fileReader = new FileReader();\n fileReader.onload = function() {\n var dataUrl = fileReader.result;\n console.log(\"blob to dataUrl: \" + dataUrl);\n\n var base64 = dataUrl.substr(dataUrl.indexOf(',')+1) \n console.log(\"dataUrl to base64: \" + base64);\n };\n fileReader.readAsDataURL(blob);\n}\n\n", "Javascript is a fine development environment so it seems odd than it doesn't provide a solution to this small problem. The solutions offered elsewhere on this page are potentially slow. Here is my solution. It employs the inbuilt functionality that decodes base64 image and sound data urls.\nvar req = new XMLHttpRequest;\nreq.open('GET', \"data:application/octet;base64,\" + base64Data);\nreq.responseType = 'arraybuffer';\nreq.onload = function fileLoaded(e)\n{\n var byteArray = new Uint8Array(e.target.response);\n // var shortArray = new Int16Array(e.target.response);\n // var unsignedShortArray = new Int16Array(e.target.response);\n // etc.\n}\nreq.send();\n\nThe send request fails if the base 64 string is badly formed.\nThe mime type (application/octet) is probably unnecessary.\nTested in chrome. Should work in other browsers.\n", "Pure JS - no string middlestep (no atob)\nI write following function which convert base64 in direct way (without conversion to string at the middlestep). 
IDEA\n\nget 4 base64 characters chunk\nfind index of each character in base64 alphabet\nconvert index to 6-bit number (binary string)\njoin four 6 bit numbers which gives 24-bit numer (stored as binary string)\nsplit 24-bit string to three 8-bit and covert each to number and store them in output array\ncorner case: if input base64 string ends with one/two = char, remove one/two numbers from output array\n\nBelow solution allows to process large input base64 strings. Similar function for convert bytes to base64 without btoa is HERE\n\n\nfunction base64ToBytesArr(str) {\n const abc = [...\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"]; // base64 alphabet\n let result = [];\n\n for(let i=0; i<str.length/4; i++) {\n let chunk = [...str.slice(4*i,4*i+4)]\n let bin = chunk.map(x=> abc.indexOf(x).toString(2).padStart(6,0)).join(''); \n let bytes = bin.match(/.{1,8}/g).map(x=> +('0b'+x));\n result.push(...bytes.slice(0,3 - (str[4*i+2]==\"=\") - (str[4*i+3]==\"=\")));\n }\n return result;\n}\n\n\n// --------\n// TEST\n// --------\n\n\nlet test = \"Alice's Adventure in Wonderland.\"; \n\nconsole.log('test string:', test.length, test);\nlet b64_btoa = btoa(test);\nconsole.log('encoded string:', b64_btoa);\n\nlet decodedBytes = base64ToBytesArr(b64_btoa); // decode base64 to array of bytes\nconsole.log('decoded bytes:', JSON.stringify(decodedBytes));\nlet decodedTest = decodedBytes.map(b => String.fromCharCode(b) ).join``;\nconsole.log('Uint8Array', JSON.stringify(new Uint8Array(decodedBytes)));\nconsole.log('decoded string:', decodedTest.length, decodedTest);\n\n\n\nCaution!\nIf you want to decode base64 to STRING (not bytes array) and you know that result contains utf8 characters then atob will fail in general e.g. for character the atob(\"8J+SqQ==\") will give wrong result . In this case you can use above solution and convert result bytes array to string in proper way e.g. :\n\n\nfunction base64ToBytesArr(str) {\n const abc = [...\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"]; // base64 alphabet\n let result = [];\n\n for(let i=0; i<str.length/4; i++) {\n let chunk = [...str.slice(4*i,4*i+4)]\n let bin = chunk.map(x=> abc.indexOf(x).toString(2).padStart(6,0)).join(''); \n let bytes = bin.match(/.{1,8}/g).map(x=> +('0b'+x));\n result.push(...bytes.slice(0,3 - (str[4*i+2]==\"=\") - (str[4*i+3]==\"=\")));\n }\n return result;\n}\n\n\n// --------\n// TEST\n// --------\n\n\nlet testB64 = \"8J+SqQ==\"; // for string: \"\"; \nconsole.log('input base64 :', testB64);\n\nlet decodedBytes = base64ToBytesArr(testB64); // decode base64 to array of bytes\nconsole.log('decoded bytes :', JSON.stringify(decodedBytes));\n\nlet result = new TextDecoder(\"utf-8\").decode(new Uint8Array(decodedBytes));\nconsole.log('properly decoded string :', result);\n\nlet result_atob = atob(testB64);\nconsole.log('decoded by atob :', result_atob);\n\n\n\nSnippets tested 2022-08-04 on: chrome 103.0.5060.134 (arm64), safari 15.2, firefox 103.0.1 (64 bit), edge 103.0.1264.77 (arm64), and node-js v12.16.1\n", "I would strongly suggest using an npm package implementing correctly the base64 specification.\nThe best one I know is rfc4648\nThe problem is that btoa and atob use binary strings instead of Uint8Array and trying to convert to and from it is cumbersome. Also there is a lot of bad packages in npm for that. 
I lose a lot of time before finding that one.\nThe creators of that specific package did a simple thing: they took the specification of Base64 (which is here by the way) and implemented it correctly from the beginning to the end. (Including other formats in the specification that are also useful like Base64-url, Base32, etc ...) That doesn't seem a lot but apparently that was too much to ask to the bunch of other libraries.\nSo yeah, I know I'm doing a bit of proselytism but if you want to avoid losing your time too just use rfc4648.\n", "I used the accepted answer to this question to create base64Url string <-> arrayBuffer conversions in the realm of base64Url data transmitted via ASCII-cookie [atob, btoa are base64[with +/]<->js binary string], so I decided to post the code.\nMany of us may want both conversions and client-server communication may use the base64Url version (though a cookie may contain +/ as well as -_ characters if I understand well, only \",;\\ characters and some wicked characters from the 128 ASCII are disallowed). But a url cannot contain / character, hence the wider use of b64 url version which of course not what atob-btoa supports...\nSeeing other comments, I would like to stress that my use case here is base64Url data transmission via url/cookie and trying to use this crypto data with the js crypto api (2017) hence the need for ArrayBuffer representation and b64u <-> arrBuff conversions... if array buffers represent other than base64 (part of ascii) this conversion wont work since atob, btoa is limited to ascii(128). Check out an appropriate converter like below:\nThe buff -> b64u version is from a tweet from Mathias Bynens, thanks for that one (too)! He also wrote a base64 encoder/decoder:\nhttps://github.com/mathiasbynens/base64\nComing from java, it may help when trying to understand the code that java byte[] is practically js Int8Array (signed int) but we use here the unsigned version Uint8Array since js conversions work with them. 
They are both 256bit, so we call it byte[] in js now...\nThe code is from a module class, that is why static.\n//utility\n\n/**\n * Array buffer to base64Url string\n * - arrBuff->byte[]->biStr->b64->b64u\n * @param arrayBuffer\n * @returns {string}\n * @private\n */\nstatic _arrayBufferToBase64Url(arrayBuffer) {\n console.log('base64Url from array buffer:', arrayBuffer);\n\n let base64Url = window.btoa(String.fromCodePoint(...new Uint8Array(arrayBuffer)));\n base64Url = base64Url.replaceAll('+', '-');\n base64Url = base64Url.replaceAll('/', '_');\n\n console.log('base64Url:', base64Url);\n return base64Url;\n}\n\n/**\n * Base64Url string to array buffer\n * - b64u->b64->biStr->byte[]->arrBuff\n * @param base64Url\n * @returns {ArrayBufferLike}\n * @private\n */\nstatic _base64UrlToArrayBuffer(base64Url) {\n console.log('array buffer from base64Url:', base64Url);\n\n let base64 = base64Url.replaceAll('-', '+');\n base64 = base64.replaceAll('_', '/');\n const binaryString = window.atob(base64);\n const length = binaryString.length;\n const bytes = new Uint8Array(length);\n for (let i = 0; i < length; i++) {\n bytes[i] = binaryString.charCodeAt(i);\n }\n\n console.log('array buffer:', bytes.buffer);\n return bytes.buffer;\n}\n\n", "made a ArrayBuffer from a base64:\nfunction base64ToArrayBuffer(base64) {\n var binary_string = window.atob(base64);\n var len = binary_string.length;\n var bytes = new Uint8Array(len);\n for (var i = 0; i < len; i++) {\n bytes[i] = binary_string.charCodeAt(i);\n }\n return bytes.buffer;\n }\n\nI was trying to use above code and It's working fine.\n", "Solution without atob\nI've seen many people complaining about using atob and btoa in the replies. There are some issues to take into account when using them.\nThere's a solution without using them in the MDN page about Base64. Below you can find the code to convert a base64 string into a Uint8Array copied from the docs.\nNote that the function below returns a Uint8Array. To get the ArrayBuffer version you just need to do uintArray.buffer.\nfunction b64ToUint6(nChr) {\n return nChr > 64 && nChr < 91\n ? nChr - 65\n : nChr > 96 && nChr < 123\n ? nChr - 71\n : nChr > 47 && nChr < 58\n ? nChr + 4\n : nChr === 43\n ? 62\n : nChr === 47\n ? 63\n : 0;\n}\n\nfunction base64DecToArr(sBase64, nBlocksSize) {\n const sB64Enc = sBase64.replace(/[^A-Za-z0-9+/]/g, \"\");\n const nInLen = sB64Enc.length;\n const nOutLen = nBlocksSize\n ? Math.ceil(((nInLen * 3 + 1) >> 2) / nBlocksSize) * nBlocksSize\n : (nInLen * 3 + 1) >> 2;\n const taBytes = new Uint8Array(nOutLen);\n\n let nMod3;\n let nMod4;\n let nUint24 = 0;\n let nOutIdx = 0;\n for (let nInIdx = 0; nInIdx < nInLen; nInIdx++) {\n nMod4 = nInIdx & 3;\n nUint24 |= b64ToUint6(sB64Enc.charCodeAt(nInIdx)) << (6 * (3 - nMod4));\n if (nMod4 === 3 || nInLen - nInIdx === 1) {\n nMod3 = 0;\n while (nMod3 < 3 && nOutIdx < nOutLen) {\n taBytes[nOutIdx] = (nUint24 >>> ((16 >>> nMod3) & 24)) & 255;\n nMod3++;\n nOutIdx++;\n }\n nUint24 = 0;\n }\n }\n\n return taBytes;\n}\n\nIf you're interested in the reverse operation, ArrayBuffer to base64, you can find how to do it in the same link.\n" ]
[ 228, 119, 45, 42, 29, 22, 13, 7, 3, 1, 1, 0 ]
[ "The result of atob is a string that is separated with some comma\n\n,\n\nA simpler way is to convert this string to a json array string and after that parse it to a byteArray\nbelow code can simply be used to convert base64 to an array of number\nlet byteArray = JSON.parse('['+atob(base64)+']'); \nlet buffer = new Uint8Array(byteArray);\n\n" ]
[ -1 ]
[ "arraybuffer", "arrays", "base64", "javascript" ]
stackoverflow_0021797299_arraybuffer_arrays_base64_javascript.txt
Q: Getting Class type from String I have a String which has a name of a class say "Ex" (no .class extension). I want to assign it to a Class variable, like this: Class cls = (string).class How can i do that? A: Class<?> cls = Class.forName(className); But your className should be fully-qualified - i.e. com.mycompany.MyClass A: String clsName = "Ex"; // use fully qualified name Class cls = Class.forName(clsName); Object clsInstance = (Object) cls.newInstance(); Check the Java Tutorial trail on Reflection at http://java.sun.com/docs/books/tutorial/reflect/TOC.html for further details. A: You can use the forName method of Class: Class cls = Class.forName(clsName); Object obj = cls.newInstance(); A: You can get the Class reference of any class during run time through the Java Reflection Concept. Check the Below Code. Explanation is given below Here is one example that uses returned Class to create an instance of AClass: package com.xyzws; class AClass { public AClass() { System.out.println("AClass's Constructor"); } static { System.out.println("static block in AClass"); } } public class Program { public static void main(String[] args) { try { System.out.println("The first time calls forName:"); Class c = Class.forName("com.xyzws.AClass"); AClass a = (AClass)c.newInstance(); System.out.println("The second time calls forName:"); Class c1 = Class.forName("com.xyzws.AClass"); } catch (ClassNotFoundException e) { // ... } catch (InstantiationException e) { // ... } catch (IllegalAccessException e) { // ... } } } The printed output is The first time calls forName: static block in AClass AClass's Constructor The second time calls forName: The class has already been loaded so there is no second "static block in AClass" The Explanation is below Class.ForName is called to get a Class Object By Using the Class Object we are creating the new instance of the Class. Any doubts about this let me know A: It should be: Class.forName(String classname) A: Not sure what you are asking, but... Class.forname, maybe? A: public static Class<?> getType(String typeName) { if(typeName.equals("Date")) { return Date.class; } else if(typeName.equals("Float")) { return Float.class; } else if(typeName.equals("Double")) { return Double.class; } else if(typeName.equals("Integer")) { return Integer.class; } return String.class; }
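A small addition to the answers above, as a sketch (the package name com.example is a placeholder): on modern Java, Class.newInstance() is deprecated, and asSubclass gives a typed Class reference without an unchecked cast.

// "com.example.Ex" stands in for your fully-qualified class name.
Class<? extends Ex> cls = Class.forName("com.example.Ex").asSubclass(Ex.class);

// Preferred over the deprecated cls.newInstance(); this throws ReflectiveOperationException,
// so catch it or declare it on the calling method.
Ex instance = cls.getDeclaredConstructor().newInstance();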
Getting Class type from String
I have a String which has a name of a class say "Ex" (no .class extension). I want to assign it to a Class variable, like this: Class cls = (string).class How can i do that?
[ "Class<?> cls = Class.forName(className);\n\nBut your className should be fully-qualified - i.e. com.mycompany.MyClass\n", "String clsName = \"Ex\"; // use fully qualified name\nClass cls = Class.forName(clsName);\nObject clsInstance = (Object) cls.newInstance();\n\nCheck the Java Tutorial trail on Reflection at http://java.sun.com/docs/books/tutorial/reflect/TOC.html for further details.\n", "You can use the forName method of Class:\nClass cls = Class.forName(clsName);\nObject obj = cls.newInstance();\n\n", "You can get the Class reference of any class during run time through the Java Reflection Concept.\nCheck the Below Code. Explanation is given below\nHere is one example that uses returned Class to create an instance of AClass:\npackage com.xyzws;\nclass AClass {\n public AClass() {\n System.out.println(\"AClass's Constructor\"); \n } \n static { \n System.out.println(\"static block in AClass\"); \n }\n}\npublic class Program { \n public static void main(String[] args) {\n try { \n System.out.println(\"The first time calls forName:\"); \n Class c = Class.forName(\"com.xyzws.AClass\"); \n AClass a = (AClass)c.newInstance(); \n System.out.println(\"The second time calls forName:\"); \n Class c1 = Class.forName(\"com.xyzws.AClass\"); \n } catch (ClassNotFoundException e) { \n // ...\n } catch (InstantiationException e) { \n // ...\n } catch (IllegalAccessException e) { \n // ...\n } \n }\n}\n\nThe printed output is\n The first time calls forName:\n static block in AClass\n AClass's Constructor\n The second time calls forName:\n\nThe class has already been loaded so there is no second \"static block in AClass\" \nThe Explanation is below\nClass.ForName is called to get a Class Object\nBy Using the Class Object we are creating the new instance of the Class.\nAny doubts about this let me know \n", "It should be:\nClass.forName(String classname)\n", "Not sure what you are asking, but... Class.forname, maybe?\n", "public static Class<?> getType(String typeName) {\n if(typeName.equals(\"Date\")) {\n return Date.class;\n } else if(typeName.equals(\"Float\")) {\n return Float.class;\n } else if(typeName.equals(\"Double\")) {\n return Double.class;\n } else if(typeName.equals(\"Integer\")) {\n return Integer.class;\n }\n return String.class;\n}\n\n" ]
[ 193, 45, 10, 4, 3, 2, 0 ]
[]
[]
[ "class", "java", "reflection" ]
stackoverflow_0002408789_class_java_reflection.txt
Q: Read CSV file in Kotlin I want to read text from a CSV file. When the code is inside the MainActivity class it works fine, but when the code is in a function outside this class it doesn't work and shows "Unresolved reference open". How can I make it work? My code: val bufferRead = BufferedReader(assets.open("test.csv").reader())
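One common fix, shown as a minimal sketch (the function name and default file name are made up), is to pass the Context, or its AssetManager, into the function, since assets is a property of Context and is not available outside an Activity:

import android.content.Context

// Hypothetical top-level helper living outside MainActivity.
fun readCsv(context: Context, fileName: String = "test.csv"): List<String> =
    // assets belongs to Context, so it must be passed in explicitly.
    context.assets.open(fileName).bufferedReader().use { it.readLines() }

From MainActivity it can then be called as readCsv(this).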
Read CSV file in Kotlin
I want to read text from CSV file, when the code is in the class MainActivity it works ok, but when the code is in a function apart from this class it doesn't work. Shows "Unresolved reference open". How to make it work? My code: val bufferRead = BufferedReader(assets.open("test.csv").reader())
[]
[]
[ "Wrap the code with try/catch .\n" ]
[ -2 ]
[ "android", "csv", "kotlin" ]
stackoverflow_0074679211_android_csv_kotlin.txt
Q: How to improve or debug Typescript ts-node compile time? I switched my codebase to TypeScript. It's around 100k lines of code in hundreds of files. Before, my launch time was 2 seconds with ESLint --fix --cache. Now with TypeScript (ts-node) it is 25 seconds (20 seconds of that is TypeScript alone). The project is backend only. 25 seconds is unacceptably slow. Is this normal? I tried to remove every dynamic require I could find, but it still didn't help. Could it be some large file that is taking too long? How can I find out what's taking so long? A: You do not have to run a production build with ts-node; it just transpiles your code on the fly. You can check this easily: run ts-node, paste some code, and then inspect the source of the runtime code. The compilation target and other settings in your tsconfig file also matter. > const a = (a: string) => a + 'hello'; undefined > a.toString() "function (a) { return a + 'hello'; }" So when you run your project it: Typechecks every file Transpiles every file Interprets each file Only then runs your code As it is an interpreted process, your time goes to (require -> typecheck -> transpile -> run -> repeat) until all your code is executed. Because your codebase loads as a cascade of requires, you may run into performance issues on typechecking. A: Two solutions, for the dev environment only: Use tsc --watch; it rebuilds on file changes and is very fast since it rebuilds only the changed files. OR Use ts-node-dev. It's a mix of ts-node and nodemon or pm2, so it recompiles and restarts the server on file changes.
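If the type checking itself is what costs the 20 seconds, a common dev-only workaround is to run ts-node in transpile-only mode and check types separately with tsc --noEmit (or rely on the editor). A sketch, assuming a plain Node backend entry point at src/index.ts:

{
  "scripts": {
    "dev": "ts-node-dev --respawn --transpile-only src/index.ts",
    "start": "ts-node --transpile-only src/index.ts",
    "typecheck": "tsc --noEmit"
  }
}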
How to improve or debug Typescript ts-node compile time?
I switched my codebase to typescript. It's around 100k lines of codes in hundreds of files. Before my launch time was 2 seconds with ESLint --fix --cache. Now with Typescript (ts-node) it is 25 seconds (20 seconds is typescript only). The project is backend only. 25 seconds is kind of unacceptable speed. Is this normal? I tried to remove every dynamic require I could find but still didn't help. Could it be some large file that is taking too long? How can I know what's taking so long?
[ "You do not have to run production build with ts-node, it just transpiles your code on a fly. You can simply check it, run ts-node and paste some code, then try to see sources of runtime code. Also compilation level and other configurations matter at your tsconfig file.\n> const a = (a: string) => a + 'hello';\nundefined\n> a.toString()\n\"function (a) { return a + 'hello'; }\"\n\nSo when you run your project it:\n\nTypechecks your every file\nTranspiles your every file\nInterprets such file\nThen only runs your code\n\nAs it is an interpreted process so your time goes to (require -> typecheck -> transpile -> run -> repeat) till all your code is executed. As your codebase is cascade you my run into performance issues on typechecking\n", "Two solutions for dev env only\n\nUse tsc --watch, it will rebuild on file changes and is very fast since it rebuild only the changed files\n\nOR\n\nUse ts-node-dev. It's a mix of ts-node and nodemon or pm2. So it recompiles and restart server on file changes.\n\n" ]
[ 1, 0 ]
[]
[]
[ "typescript" ]
stackoverflow_0060297192_typescript.txt
Q: Listening to events on Web Components in HTML I've looked all over for this; I can find lots of examples of listening to custom events programmatically with JS, but nothing declaratively in HTML. I've got a web component that dispatches an event from inside. If I attach a listener with JS, the handler fires; if I attach the listener in the HTML, it never fires. Is this possible? <script> document.getElementById("myWebComp").addEventListener("myevent", progListener); function progListener(event) { //This will fire console.log("progListener") } function declListener(event) { //This will NOT fire console.log("declListener") } </script> <body> <my-web-comp id="myWebComp" myevent="declListener(event)" onmyevent="declListener(event)"></my-web-comp> </body>
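To make the programmatic path concrete, here is a minimal sketch (element and event names are invented for illustration) of a component dispatching a custom event that addEventListener can catch; bubbles and composed matter if the listener sits on an ancestor or the component uses shadow DOM:

// Register the listener before the element is defined/connected so the event isn't missed.
document.addEventListener("myevent", (e) => console.log("caught", e.detail));

class MyWebComp extends HTMLElement {
  connectedCallback() {
    // bubbles lets ancestors (or document) hear it; composed lets it cross a shadow root.
    this.dispatchEvent(new CustomEvent("myevent", {
      detail: { when: Date.now() },
      bubbles: true,
      composed: true,
    }));
  }
}
customElements.define("my-web-comp", MyWebComp);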
Listening to events on Web Components in HTML
I've looked all over for this, can find lots of examples listening to custom events programatically with JS, but nothing declaratively in HTML. I've got a web component where I dispatch an event from inside. If I attach a listener with JS, the handler fires, if I attach the listener in the HTML, it never fires. Is this possible? <script> document.getElementById("myWebComp").addEventListener("myevent", progListener); function progListener(event) { //This will fire console.log("progListener") } function declListener(event) { //This will NOT fire console.log("declListener") } </script> <body> <my=web-comp id="myWebComp" myevent="declListener(event)" onmyevent="declListener(event)></my-web-comp> </body>`
[ "From your comment on the now deleted answer:\n\na web component still don't act like a proper element\n\nWeb Components are proper elements, extended from HTMLElement\nAnd you can not set Custom Event on* declarative Event Handlers on a DIV either\n(unless you use build-step tooling that transpiles your code)\nSo all on* declarative Event handlers can be set on a Web Component:\n\nhttps://html.spec.whatwg.org/multipage/webappapis.html#event-handlers-on-elements,-document-objects,-and-window-objects\n\nIf you want a custom handler, you use addEventListener\n" ]
[ 0 ]
[]
[]
[ "custom_events", "web_component" ]
stackoverflow_0074669861_custom_events_web_component.txt
Q: Xcode commands are taking a long time within Flutter projects Some context I'm working with Flutter, but after doing a couple of changes to the iOS Podfile, .plist files, and Runner.xcworkspace things "stopped" working. The problem I'm having is that everything Xcode related is taking a very long time to run in all of my Flutter projects. To give some context the app I was building when Xcode started giving me problems uses Cloud Firestore. For this to compile faster I added the following code to my Podfile, this was suggested by Google in some docs. platform :ios, '16.1' target 'Runner' do # Code to reduce compile time for iOS. pod 'FirebaseFirestore/WithLeveldb', :git => 'https://github.com/invertase/firestore-ios-sdk-frameworks.git', :tag => '10.2.0' use_frameworks! use_modular_headers! flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__)) end After doing this change and importing the Firestore package a file called GoogleService-Info.plist was created and I added this file to the Runner.xcworkspace as a Runner. This step was mentioned in another Google document for activating sign-in with Google. A weird thing about all of this is that if I try to run open Runner.xcworkspace the Xcode app also takes forever to open (it's been over an hour since I ran it and it has not been opened). This worked earlier as I was able to open this directory to make the aforementioned change (make GoogleService-Info.plist a Runner file). Attempts to solve this After identifying the error I tried doing the following things, but nothing has worked so far: Uninstall Xcode Command Line Tools and install them back on. Uninstall Flutter and install it back on. Uninstall Xcode completely and install it back on. Restart my computer. Try to build the project in another computer, but now this other computer is having the same issue. I've tried to run things on both an Intel-based Mac and an Apple Silicon Mac, but in both computers Xcode "stopped" working for Flutter. Reproducing this problem The problem occurs when I try to run flutter clean, flutter run, or open Runner.xcworkspace. After running the first two commands in --verbose mode the problem comes up when the following commands appear: xcrun xcodebuild -list xcrun xcodebuild -workspace $PATH/Runner.xcworkspace -scheme Flutter Assamble clean xcrun xcodebuild -workspace $PATH/Runner.xcworkspace -scheme Runner clean The first command is currently running on the Apple Silicon Mac and its already been over an hour since it's been stuck there (Intel-based Mac already finished running this command). The second command follows the first one and it took over 20 minutes for it to run in the Intel computer. The third command is currently running on the Intel-based computer and it's been there for over 40 minutes. Final details This problem is persistent in all of my Flutter projects, it doesn't matter if the project has the Firebase packages or not. I don't know what I could have changed in my Xcode configuration for things to stop working so abruptly, but I hope someone is able to help me out. P.S. I already tried compiling a native Swift project and everything seems to work, this issue seems to affect the Flutter projects exclusively. Obviously if I try to run the commands listed earlier outside of the Flutter execution they take a very long time as well. A: After running more tests I realized that the problem was iCloud. For some reason working on both computers at the same time made my local computers work very slowly. 
The problem was hard to find because the iCloud bird process didn't appear to use more resources than it usually does. To solve this I had to kill iCloud on both computers and restart the service. I had recently updated both computers to Ventura 13.0.1, so I suspect the problem was introduced somewhere around that update.
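For reference, the daemon can be restarted from a terminal. A sketch, assuming the iCloud sync daemon is still named bird, as it is on recent macOS releases (launchd relaunches it automatically):

# restart the iCloud sync daemon; launchd brings it back up on its own
killall bird

# optionally watch iCloud Drive activity to confirm syncing has settled
# (flags as commonly documented; verify brctl options on your macOS version)
brctl log --wait --shorten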
Xcode commands are taking a long time within Flutter projects
Some context I'm working with Flutter, but after doing a couple of changes to the iOS Podfile, .plist files, and Runner.xcworkspace things "stopped" working. The problem I'm having is that everything Xcode related is taking a very long time to run in all of my Flutter projects. To give some context the app I was building when Xcode started giving me problems uses Cloud Firestore. For this to compile faster I added the following code to my Podfile, this was suggested by Google in some docs. platform :ios, '16.1' target 'Runner' do # Code to reduce compile time for iOS. pod 'FirebaseFirestore/WithLeveldb', :git => 'https://github.com/invertase/firestore-ios-sdk-frameworks.git', :tag => '10.2.0' use_frameworks! use_modular_headers! flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__)) end After doing this change and importing the Firestore package a file called GoogleService-Info.plist was created and I added this file to the Runner.xcworkspace as a Runner. This step was mentioned in another Google document for activating sign-in with Google. A weird thing about all of this is that if I try to run open Runner.xcworkspace the Xcode app also takes forever to open (it's been over an hour since I ran it and it has not been opened). This worked earlier as I was able to open this directory to make the aforementioned change (make GoogleService-Info.plist a Runner file). Attempts to solve this After identifying the error I tried doing the following things, but nothing has worked so far: Uninstall Xcode Command Line Tools and install them back on. Uninstall Flutter and install it back on. Uninstall Xcode completely and install it back on. Restart my computer. Try to build the project in another computer, but now this other computer is having the same issue. I've tried to run things on both an Intel-based Mac and an Apple Silicon Mac, but in both computers Xcode "stopped" working for Flutter. Reproducing this problem The problem occurs when I try to run flutter clean, flutter run, or open Runner.xcworkspace. After running the first two commands in --verbose mode the problem comes up when the following commands appear: xcrun xcodebuild -list xcrun xcodebuild -workspace $PATH/Runner.xcworkspace -scheme Flutter Assamble clean xcrun xcodebuild -workspace $PATH/Runner.xcworkspace -scheme Runner clean The first command is currently running on the Apple Silicon Mac and its already been over an hour since it's been stuck there (Intel-based Mac already finished running this command). The second command follows the first one and it took over 20 minutes for it to run in the Intel computer. The third command is currently running on the Intel-based computer and it's been there for over 40 minutes. Final details This problem is persistent in all of my Flutter projects, it doesn't matter if the project has the Firebase packages or not. I don't know what I could have changed in my Xcode configuration for things to stop working so abruptly, but I hope someone is able to help me out. P.S. I already tried compiling a native Swift project and everything seems to work, this issue seems to affect the Flutter projects exclusively. Obviously if I try to run the commands listed earlier outside of the Flutter execution they take a very long time as well.
[ "After running more tests I realized that the problem was iCloud. For some reason working on both computers at the same time made my local computers work very slowly. The problem was hard to find because the iCloud bird process didn't appear to use more resources than it usually would.\nTo solve this I had to kill iCloud on both computers and restart the service. I recently updated both computers to Ventura 13.0.1 so I think the problem might be somewhere along those lines.\n" ]
[ 0 ]
[]
[]
[ "flutter_ios", "ios", "xcode", "xcodebuild" ]
stackoverflow_0074596625_flutter_ios_ios_xcode_xcodebuild.txt
Q: how to pass images through intent? First class public class ChooseDriver extends Activity implements OnItemClickListener { private static final String rssFeed = "http://trade2rise.com/project/seattle/windex.php?itfpage=driver_list"; private static final String ARRAY_NAME = "itfdata"; private static final String ID = "id"; private static final String NAME = "name"; private static final String IMAGE = "image"; List<Item> arrayOfList; ListView listView; MyAdapter objAdapter1; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.choose_driver); listView = (ListView) findViewById(R.id.listView1); listView.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> adapterView, View view, int position, long id) { Item item = (Item) objAdapter1.getItem(position); Intent intent = new Intent(ChooseDriver.this, DriverDetail.class); intent.putExtra(ID, item.getId()); intent.putExtra(NAME, item.getName().toString()); // intent.putExtra(IMAGE, item.getImage().toString()); // image.buildDrawingCache(); // Bitmap image= image.getDrawingCache(); Bundle extras = new Bundle(); // extras.putParcelable("imagebitmap", image); intent.putExtras(extras); startActivity(intent); } }); } } Second class public class DriverDetail extends Activity { private ImageLoader imageLoader; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.driver_detail); ImageView imageView = (ImageView) findViewById(R.id.imageView1); TextView tv = (TextView) findViewById(R.id.tvDname); Bundle extras = getIntent().getExtras(); Bitmap bmp = (Bitmap) extras.getParcelable("imagebitmap"); imageView.setImageBitmap(bmp ); //imageView.set(getIntent().getExtras().getString("image")); tv.setText(getIntent().getExtras().getString("name")); } } By using it I can show text very well, but I can't show an image on second Aactivity. A: To pass images through intent in Android. You can do this by converting the image to a Base64 string and then pass it in the intent extras. You can then decode the Base64 string in the receiving activity and get the image back. Kotlin: // Convert image to Base64 string val imageString = Base64.encodeToString(imageByteArray, Base64.DEFAULT) // Create intent val intent = Intent(context, TargetActivity::class.java) // Add image string to intent intent.putExtra("IMAGE", imageString) // Start activity startActivity(intent) // In TargetActivity: // Get image string from intent val imageString = intent.getStringExtra("IMAGE") // Decode image string val imageByteArray = Base64.decode(imageString, Base64.DEFAULT) // Create bitmap from decoded image val bitmap = BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.size) JAVA: // Convert image to Base64 string String imageString = Base64.encodeToString(imageByteArray, Base64.DEFAULT); // Create intent Intent intent = new Intent(context, TargetActivity.class); // Add image string to intent intent.putExtra("IMAGE", imageString); // Start activity startActivity(intent); // In TargetActivity: // Get image string from intent String imageString = intent.getStringExtra("IMAGE"); // Decode image string byte[] imageByteArray = Base64.decode(imageString, Base64.DEFAULT); // Create bitmap from decoded image Bitmap bitmap = BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length);
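Worth noting as an alternative (a sketch with a hypothetical extra key): Intent extras travel through the Binder transaction buffer, which is limited to roughly 1 MB, so Base64-encoding a full-size photo into an extra can fail with TransactionTooLargeException. Passing only the image's Uri (or, for a remote image as in this question, its URL string) and loading it in the second Activity avoids that:

// Hypothetical key; imageUri would come from the picker/camera/content provider.
Intent intent = new Intent(ChooseDriver.this, DriverDetail.class);
intent.putExtra("IMAGE_URI", imageUri.toString());
startActivity(intent);

// In DriverDetail.onCreate():
Uri uri = Uri.parse(getIntent().getStringExtra("IMAGE_URI"));
try (InputStream in = getContentResolver().openInputStream(uri)) {
    imageView.setImageBitmap(BitmapFactory.decodeStream(in));
} catch (IOException e) {
    e.printStackTrace();
}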
how to pass images through intent?
First class public class ChooseDriver extends Activity implements OnItemClickListener { private static final String rssFeed = "http://trade2rise.com/project/seattle/windex.php?itfpage=driver_list"; private static final String ARRAY_NAME = "itfdata"; private static final String ID = "id"; private static final String NAME = "name"; private static final String IMAGE = "image"; List<Item> arrayOfList; ListView listView; MyAdapter objAdapter1; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.choose_driver); listView = (ListView) findViewById(R.id.listView1); listView.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> adapterView, View view, int position, long id) { Item item = (Item) objAdapter1.getItem(position); Intent intent = new Intent(ChooseDriver.this, DriverDetail.class); intent.putExtra(ID, item.getId()); intent.putExtra(NAME, item.getName().toString()); // intent.putExtra(IMAGE, item.getImage().toString()); // image.buildDrawingCache(); // Bitmap image= image.getDrawingCache(); Bundle extras = new Bundle(); // extras.putParcelable("imagebitmap", image); intent.putExtras(extras); startActivity(intent); } }); } } Second class public class DriverDetail extends Activity { private ImageLoader imageLoader; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.driver_detail); ImageView imageView = (ImageView) findViewById(R.id.imageView1); TextView tv = (TextView) findViewById(R.id.tvDname); Bundle extras = getIntent().getExtras(); Bitmap bmp = (Bitmap) extras.getParcelable("imagebitmap"); imageView.setImageBitmap(bmp ); //imageView.set(getIntent().getExtras().getString("image")); tv.setText(getIntent().getExtras().getString("name")); } } By using it I can show text very well, but I can't show an image on second Aactivity.
[ "To pass images through intent in Android. You can do this by converting the image to a Base64 string and then pass it in the intent extras. You can then decode the Base64 string in the receiving activity and get the image back.\nKotlin:\n// Convert image to Base64 string\nval imageString = Base64.encodeToString(imageByteArray, Base64.DEFAULT)\n\n// Create intent\nval intent = Intent(context, TargetActivity::class.java)\n\n// Add image string to intent\nintent.putExtra(\"IMAGE\", imageString)\n\n// Start activity\nstartActivity(intent)\n\n\n// In TargetActivity:\n\n// Get image string from intent\nval imageString = intent.getStringExtra(\"IMAGE\")\n\n// Decode image string\nval imageByteArray = Base64.decode(imageString, Base64.DEFAULT)\n\n// Create bitmap from decoded image\nval bitmap = BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.size)\n\nJAVA:\n// Convert image to Base64 string\nString imageString = Base64.encodeToString(imageByteArray, Base64.DEFAULT);\n\n// Create intent\nIntent intent = new Intent(context, TargetActivity.class);\n\n// Add image string to intent\nintent.putExtra(\"IMAGE\", imageString);\n\n// Start activity\nstartActivity(intent);\n\n\n// In TargetActivity:\n\n// Get image string from intent\nString imageString = intent.getStringExtra(\"IMAGE\");\n\n// Decode image string\nbyte[] imageByteArray = Base64.decode(imageString, Base64.DEFAULT);\n\n// Create bitmap from decoded image\nBitmap bitmap = BitmapFactory.decodeByteArray(imageByteArray, 0, imageByteArray.length);\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_intent" ]
stackoverflow_0023577827_android_android_intent.txt
Q: How do I use collect::>.intersection() without the values becoming borrowed? I am looping loop over a Vec<&str>, each time reassigning a variable that holds the intersection of the last two checked. This is resulting in "expected char, found &char". I think this is happening because the loop is a new block scope, which means the values from the original HashSet are borrowed, and go into the new HashSet as borrowed. Unfortunately, the type checker doesn't like that. How do I create a new HashSet<char> instead of HashSet<&char>? Here is my code: use std::collections::HashSet; fn find_item_in_common(sacks: Vec::<&str>) -> char { let mut item: Option<char> = None; let mut sacks_iter = sacks.iter(); let matching_chars = sacks_iter.next().unwrap().chars().collect::<HashSet<_>>(); loop { let next_sack = sacks_iter.next(); if next_sack.is_none() { break; } let next_sack_values: HashSet<_> = next_sack.unwrap().chars().collect(); matching_chars = matching_chars.intersection(&next_sack_values).collect::<HashSet<_>>(); } matching_chars.drain().nth(0).unwrap() } and here are the errors that I'm seeing: error[E0308]: mismatched types --> src/bin/03.rs:13:26 | 6 | let matching_chars = sacks_iter.next().unwrap().chars().collect::<HashSet<_>>(); | ---------------------------------------------------------- expected due to this value ... 13 | matching_chars = matching_chars.intersection(&next_sack_values).collect::<HashSet<_>>(); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `char`, found `&char` | = note: expected struct `HashSet<char>` found struct `HashSet<&char>` By the way, what is that first error trying to tell me? It seems like it is missing something before or after "expected" -- <missing thing?> expected <or missing thing?> due to this value? I also tried changing matching_chars = matching_chars to matching_chars = matching_chars.cloned() and I get the following error. I understand what the error is saying, but I don't know how to resolve it. error[E0599]: the method `cloned` exists for struct `HashSet<char>`, but its trait bounds were not satisfied --> src/bin/03.rs:13:41 | 13 | matching_chars = matching_chars.cloned().intersection(&next_sack_values).collect::<HashSet<_>>(); | ^^^^^^ method cannot be called on `HashSet<char>` due to unsatisfied trait bounds | ::: /Users/brandoncc/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/collections/hash/set.rs:112:1 | 112 | pub struct HashSet<T, S = RandomState> { | -------------------------------------- doesn't satisfy `HashSet<char>: Iterator` | = note: the following trait bounds were not satisfied: `HashSet<char>: Iterator` which is required by `&mut HashSet<char>: Iterator` A: Looking at the signature of HashSet::intersection will make this clearer: pub fn intersection<'a>( &'a self, other: &'a HashSet<T, S> ) -> Intersection<'a, T, S> The type Intersection<'a, T, S> implements Iterator<Item=&'a T>. So when you collect this iterator, you get a HashSet<&char> as opposed to a HashSet<char>. 
The solution is simply to use .cloned on the iterator before you use .collect, since char is Clone, like so: matching_chars = matching_chars.intersection(&next_sack_values).cloned().collect() A: Your attempt at using cloned() was almost right, but you have to call it after you create the iterator: matching_chars.intersection(&next_sack_values).cloned().collect::<HashSet<_>>() or, for Copy types, you should use the more appropriate .copied() adapter: matching_chars.intersection(&next_sack_values).copied().collect::<HashSet<_>>() A: "By the way, what is that first error trying to tell me?" The error is telling you that it expects char because the original value for matching_chars has type HashSet<char>. "I also tried changing matching_chars = matching_chars to matching_chars = matching_chars.cloned() and I get the following error. I understand what the error is saying, but I don't know how to resolve it." Do you, really? str::chars is an Iterator<Item=char>, so when you collect() to a hashset you get a HashSet<char>. The problem is that intersection borrows the hashset, and since the items the hashset contains may or may not be Clone, it also has to borrow the set items; it can't just copy or clone them (not without restricting its flexibility anyway). So that's where you need to add the cloned call, on the result of HashSet::intersection, in order to adapt it from an Iterator<Item=&char> to an Iterator<Item=char>. Or you can just use the & operator, which takes two borrowed hashsets and returns an owned hashset (requiring that the items be Clone). Alternatively, use Iterator::filter or Iterator::find on one of the sets, checking whether the other set HashSet::contains the item being looked at. Fundamentally that's basically what intersection does, and you know there's just one item at the end.
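Putting the advice together, a compact rewrite of the original function can keep a HashSet<char> throughout by using HashSet::retain, so no borrowed items ever need collecting. A sketch under the question's own assumptions (at least one sack, exactly one item in common):

use std::collections::HashSet;

fn find_item_in_common(sacks: Vec<&str>) -> char {
    let mut iter = sacks.iter();
    // chars() yields owned char values, so this is a HashSet<char>.
    let mut common: HashSet<char> = iter.next().expect("at least one sack").chars().collect();
    for sack in iter {
        let next: HashSet<char> = sack.chars().collect();
        // Keep only characters also present in the next sack.
        common.retain(|c| next.contains(c));
    }
    *common.iter().next().expect("exactly one item in common")
}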
How do I use collect::<HashSet<_>>().intersection() without the values becoming borrowed?
I am looping loop over a Vec<&str>, each time reassigning a variable that holds the intersection of the last two checked. This is resulting in "expected char, found &char". I think this is happening because the loop is a new block scope, which means the values from the original HashSet are borrowed, and go into the new HashSet as borrowed. Unfortunately, the type checker doesn't like that. How do I create a new HashSet<char> instead of HashSet<&char>? Here is my code: use std::collections::HashSet; fn find_item_in_common(sacks: Vec::<&str>) -> char { let mut item: Option<char> = None; let mut sacks_iter = sacks.iter(); let matching_chars = sacks_iter.next().unwrap().chars().collect::<HashSet<_>>(); loop { let next_sack = sacks_iter.next(); if next_sack.is_none() { break; } let next_sack_values: HashSet<_> = next_sack.unwrap().chars().collect(); matching_chars = matching_chars.intersection(&next_sack_values).collect::<HashSet<_>>(); } matching_chars.drain().nth(0).unwrap() } and here are the errors that I'm seeing: error[E0308]: mismatched types --> src/bin/03.rs:13:26 | 6 | let matching_chars = sacks_iter.next().unwrap().chars().collect::<HashSet<_>>(); | ---------------------------------------------------------- expected due to this value ... 13 | matching_chars = matching_chars.intersection(&next_sack_values).collect::<HashSet<_>>(); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `char`, found `&char` | = note: expected struct `HashSet<char>` found struct `HashSet<&char>` By the way, what is that first error trying to tell me? It seems like it is missing something before or after "expected" -- <missing thing?> expected <or missing thing?> due to this value? I also tried changing matching_chars = matching_chars to matching_chars = matching_chars.cloned() and I get the following error. I understand what the error is saying, but I don't know how to resolve it. error[E0599]: the method `cloned` exists for struct `HashSet<char>`, but its trait bounds were not satisfied --> src/bin/03.rs:13:41 | 13 | matching_chars = matching_chars.cloned().intersection(&next_sack_values).collect::<HashSet<_>>(); | ^^^^^^ method cannot be called on `HashSet<char>` due to unsatisfied trait bounds | ::: /Users/brandoncc/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/collections/hash/set.rs:112:1 | 112 | pub struct HashSet<T, S = RandomState> { | -------------------------------------- doesn't satisfy `HashSet<char>: Iterator` | = note: the following trait bounds were not satisfied: `HashSet<char>: Iterator` which is required by `&mut HashSet<char>: Iterator`
[ "Looking at the signature of HashSet::intersection will make this clearer:\npub fn intersection<'a>(\n &'a self,\n other: &'a HashSet<T, S>\n) -> Intersection<'a, T, S>\n\nThe type Intersection<'a, T, S> implements Iterator<Item=&'a T>. So when you collect this iterator, you get a HashSet<&char> as opposed to a HashSet<char>.\nThe solution is simply to use .cloned on the iterator before you use .collect, since char is Clone, like so:\nmatching_chars = matching_chars.intersection(&next_sack_values).cloned().collect()\n\n", "Your attempt at using cloned() was almost right but you have to call it after you create the iterator:\nmatching_chars.intersection(&next_sack_values).cloned().collect::<HashSet<_>>()\n\nor for Copy types you should use the more appropriate .copied() adapter:\nmatching_chars.intersection(&next_sack_values).copied().collect::<HashSet<_>>()\n\n", "\nBy the way, what is that first error trying to tell me?\n\nThe error is telling you that it expects char because (due to) the original value for matching_chars has type HashSet<char>.\n\nI also tried changing matching_chars = matching_chars to matching_chars = matching_chars.cloned() and I get the following error. I understand what the error is saying, but I don't know how to resolve it.\n\nDo you, really?\nstr::chars is an Iterator<Item=char>, so when you collect() to a hashset you get a HashSet<char>.\nThe problem is that intersection borrows the hashset, and since the items the hashset contains may or may not be Clone, it also has to borrow the set items, it can't just copy or clone them (not without restricting its flexibility anyway).\nSo that's where you need to add the cloned call, on the HashSet::intersection in order to adapt it from an Iterator<Item=&char> to an Iterator<Item=char>.\nOr you can just use the & operator, which takes two borrowed hashsets and returns an owned hashset (requiring that the items be Clone).\nAlternatively use Iterator::filter or Iterator::findon one of the sets, checking if the othersHashSet::containsthe item being looked at. Fundamentally that's basically whatintersection` does, and you know there's just one item at the end.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "rust" ]
stackoverflow_0074679122_rust.txt
Q: Display all students that have promoted all courses I have the following tables: Students (id, name, surname, study_year, department_id) Courses(id, name) Course_Signup(id, student_id, course_id, year) Grades(signup_id, grade_type, mark, date), where grade_type can be 'e' (exam), 'l' (lab) or 'p' (project) I want to display the students with their yearly grade (average of the final grade for all courses), but only for those students that have passed ALL the courses that they have signed up for. For example, if a student signs up for 3 courses but only gets a final grade >= 5 to 2 of those courses, that student should not appear in the result. I wrote this so far: SELECT new_table.id, new_table.name, AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) AS "Average Exam Grade", AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END) AS "Average Activity Grade", (2*AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) + AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END))/3 AS "Course Final Grade" FROM ( SELECT s.id, c.name, g.grade_type, g.mark FROM Students s JOIN Course_Signup csn ON s.id = csn.student_id JOIN Courses c ON c.id = csn.course_id JOIN Grades g ON g.signup_id = csn.id ) new_table GROUP BY new_table.id, new_table.name HAVING ((2*AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) + AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END))/3) >= 5.00 ORDER BY new_table.id ASC This gives me the students with every course that they promoted. What would be the easiest way to check that a student has a course final grade >= 5, for every course that they signed up for? A: You can use the MIN statement in your query to check that the minimal score is more or equal to 5. This assumes that you will have every grade recorded in the database. Also, in your query you have Recap_ prefix for your table names. I removed it to align with the table names in the question. SELECT new_table.id, new_table.name, AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) AS "Average Exam Grade", AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END) AS "Average Activity Grade", (2*AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) + AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END))/3 AS "Course Final Grade" FROM ( SELECT s.id, c.name, g.grade_type, g.mark FROM Students s JOIN Course_Signup csn ON s.id = csn.student_id JOIN Courses c ON c.id = csn.course_id JOIN Grades g ON g.signup_id = csn.id ) new_table GROUP BY new_table.id, new_table.name HAVING MIN(new_table.mark) >= 5.00 ORDER BY new_table.id ASC A: To get the students satisfying the conditions "grades on all sign-up and mark >=5", you can use traditional relational division query: select s.id, s.name, cs.year, g.grade_type, g.mark from students s join Course_Signup cs on cs.student_id = s.id join Grades g ON cs.id = g.signup_id where not exists( select 1 from Course_Signup cs where s.id = cs.student_id and not exists( select 1 from grades g where g.signup_id = cs.id and g.mark >= 5 ) );
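One way to combine both answers' ideas, computing a final grade per course first and then keeping only students whose weakest course is still a pass, is sketched below (Oracle-style; the 2:1 exam/activity weighting is taken from the posted query, and it assumes every sign-up has both exam and activity marks recorded):

WITH course_final AS (
  SELECT csn.student_id,
         (2 * AVG(CASE WHEN g.grade_type = 'e' THEN g.mark END)
            + AVG(CASE WHEN g.grade_type <> 'e' THEN g.mark END)) / 3 AS final_grade
  FROM Course_Signup csn
  JOIN Grades g ON g.signup_id = csn.id
  GROUP BY csn.student_id, csn.course_id
)
SELECT s.id, s.name, AVG(cf.final_grade) AS yearly_grade
FROM Students s
JOIN course_final cf ON cf.student_id = s.id
GROUP BY s.id, s.name
HAVING MIN(cf.final_grade) >= 5
ORDER BY s.id;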
Display all students that have promoted all courses
I have the following tables: Students (id, name, surname, study_year, department_id) Courses(id, name) Course_Signup(id, student_id, course_id, year) Grades(signup_id, grade_type, mark, date), where grade_type can be 'e' (exam), 'l' (lab) or 'p' (project) I want to display the students with their yearly grade (average of the final grade for all courses), but only for those students that have passed ALL the courses that they have signed up for. For example, if a student signs up for 3 courses but only gets a final grade >= 5 to 2 of those courses, that student should not appear in the result. I wrote this so far: SELECT new_table.id, new_table.name, AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) AS "Average Exam Grade", AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END) AS "Average Activity Grade", (2*AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) + AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END))/3 AS "Course Final Grade" FROM ( SELECT s.id, c.name, g.grade_type, g.mark FROM Students s JOIN Course_Signup csn ON s.id = csn.student_id JOIN Courses c ON c.id = csn.course_id JOIN Grades g ON g.signup_id = csn.id ) new_table GROUP BY new_table.id, new_table.name HAVING ((2*AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) + AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END))/3) >= 5.00 ORDER BY new_table.id ASC This gives me the students with every course that they promoted. What would be the easiest way to check that a student has a course final grade >= 5, for every course that they signed up for?
[ "You can use the MIN statement in your query to check that the minimal score is more or equal to 5.\nThis assumes that you will have every grade recorded in the database.\nAlso, in your query you have Recap_ prefix for your table names. I removed it to align with the table names in the question.\nSELECT\n new_table.id,\n new_table.name,\n AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) AS \"Average Exam Grade\",\n AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END) AS \"Average Activity Grade\",\n (2*AVG(CASE WHEN grade_type = 'e' THEN new_table.mark END) + AVG(CASE WHEN GRADE_TYPE <> 'e' THEN new_table.mark END))/3 AS \"Course Final Grade\"\nFROM\n(\n SELECT s.id, c.name, g.grade_type, g.mark\n FROM Students s\n JOIN Course_Signup csn\n ON s.id = csn.student_id\n JOIN Courses c\n ON c.id = csn.course_id\n JOIN Grades g\n ON g.signup_id = csn.id\n) new_table\nGROUP BY new_table.id, new_table.name\nHAVING MIN(new_table.mark) >= 5.00\nORDER BY new_table.id ASC\n\n", "To get the students satisfying the conditions \"grades on all sign-up and mark >=5\", you can use traditional relational division query:\nselect s.id, s.name, cs.year, g.grade_type, g.mark \nfrom students s\n join Course_Signup cs on cs.student_id = s.id\n join Grades g ON cs.id = g.signup_id\nwhere not exists(\n select 1 from Course_Signup cs\n where s.id = cs.student_id\n and not exists(\n select 1 from grades g \n where g.signup_id = cs.id \n and g.mark >= 5\n )\n);\n\n" ]
[ 1, 0 ]
[]
[]
[ "aggregate_functions", "join", "oracle", "sql" ]
stackoverflow_0074677459_aggregate_functions_join_oracle_sql.txt
Q: pandas how to avoid iterations on rows? using python 3.7+ want to split paragraphs into new rows. need to use Spicy on each row to get the relevant result (not just split('.')). Is it possible with pandas vectorization? any help would be much appreciated have this df - >>> df = pd.DataFrame({'num_legs': [2, 4, 8, 0], ... 'num_wings': [2, 0, 0, 0], ... 'some_description': ['falcons have wings. falcons fly', 'dog have 4 legs. they are the best', 'spiders create webs. spiders have 8 legs', 'fish swims. fish lives in water']}, ... index=['falcon', 'dog', 'spider', 'fish']) >>> df num_legs num_wings some_description falcon 2 2 'falcons have wings. falcons fly' dog 4 0 'dog have 4 legs. they are the best' spider 8 0 'spiders create webs. spiders have 8 legs' fish 0 0 'fish swims. fish lives in water' I want to iterate over rows and split every sentence into 2 so the result would be - num_legs num_wings some_description falcon 2 2 'falcons have wings.' falcon 2 2 'falcons fly.' dog 4 0 'dog have 4 legs' dog 4 0 'they are the best' spider 8 0 'spiders create webs' spider 8 0 'spiders have 8 legs' fish 0 0 'fish swims.' fish 0 0 'fish lives in water' maybe the only way is with iterrows/itertuples (which I understand are bad practice)? Thank you A: a = 'some_description' df.assign(some_description=df[a].str.split(r'\. ')).explode(a) result: num_legs num_wings some_description falcon 2 2 falcons have wings falcon 2 2 falcons fly dog 4 0 dog have 4 legs dog 4 0 they are the best spider 8 0 spiders create webs spider 8 0 spiders have 8 legs fish 0 0 fish swims fish 0 0 fish lives in water
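Since the question actually wants spaCy sentence segmentation (the "Spicy" mentioned above) rather than a plain split('.'), the same assign-and-explode pattern works with any per-row splitter. A sketch, assuming the en_core_web_sm model is installed:

import pandas as pd
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def split_sentences(text):
    # spaCy's segmenter handles abbreviations etc. better than str.split('.')
    return [sent.text.strip() for sent in nlp(text).sents]

col = "some_description"
out = df.assign(**{col: df[col].apply(split_sentences)}).explode(col)

The per-row apply is not vectorized, but for NLP work that is unavoidable; nlp.pipe(df[col]) can batch the spaCy calls if speed matters.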
pandas how to avoid iterations on rows?
using python 3.7+ want to split paragraphs into new rows. need to use Spicy on each row to get the relevant result (not just split('.')). Is it possible with pandas vectorization? any help would be much appreciated have this df - >>> df = pd.DataFrame({'num_legs': [2, 4, 8, 0], ... 'num_wings': [2, 0, 0, 0], ... 'some_description': ['falcons have wings. falcons fly', 'dog have 4 legs. they are the best', 'spiders create webs. spiders have 8 legs', 'fish swims. fish lives in water']}, ... index=['falcon', 'dog', 'spider', 'fish']) >>> df num_legs num_wings some_description falcon 2 2 'falcons have wings. falcons fly' dog 4 0 'dog have 4 legs. they are the best' spider 8 0 'spiders create webs. spiders have 8 legs' fish 0 0 'fish swims. fish lives in water' I want to iterate over rows and split every sentence into 2 so the result would be - num_legs num_wings some_description falcon 2 2 'falcons have wings.' falcon 2 2 'falcons fly.' dog 4 0 'dog have 4 legs' dog 4 0 'they are the best' spider 8 0 'spiders create webs' spider 8 0 'spiders have 8 legs' fish 0 0 'fish swims.' fish 0 0 'fish lives in water' maybe the only way is with iterrows/itertuples (which I understand are bad practice)? Thank you
[ "a = 'some_description'\ndf.assign(some_description=df[a].str.split(r'\\. ')).explode(a)\n\nresult:\n num_legs num_wings some_description\nfalcon 2 2 falcons have wings\nfalcon 2 2 falcons fly\ndog 4 0 dog have 4 legs\ndog 4 0 they are the best\nspider 8 0 spiders create webs\nspider 8 0 spiders have 8 legs\nfish 0 0 fish swims\nfish 0 0 fish lives in water\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "vectorization" ]
stackoverflow_0074679289_dataframe_pandas_python_vectorization.txt
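The accepted answer above compresses to a single assign/str.split/explode step; here is a self-contained sketch of it (the frame is copied from the question, the names col and out are mine). For the spaCy splitting the question mentions (spelled "Spicy" there), the same explode pattern applies: build the list column with an apply over the text instead of a regex split.

import pandas as pd

df = pd.DataFrame(
    {"num_legs": [2, 4, 8, 0],
     "num_wings": [2, 0, 0, 0],
     "some_description": ["falcons have wings. falcons fly",
                          "dog have 4 legs. they are the best",
                          "spiders create webs. spiders have 8 legs",
                          "fish swims. fish lives in water"]},
    index=["falcon", "dog", "spider", "fish"])

col = "some_description"
out = df.assign(**{col: df[col].str.split(r"\. ")}).explode(col)  # one row per sentence
print(out)

# With a real sentence splitter the only change is how the list column is built,
# e.g. (nlp here stands for a loaded spaCy pipeline with a sentencizer):
# df[col] = df[col].apply(lambda text: [s.text for s in nlp(text).sents])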
Q: How to deploy/publish an ASP.NET Core React JS website with a particular .env file? I have a simple ASP.NET Core React JS web app. I've been publishing it directly to Azure just fine. Recently I introduced a 2nd environment (prod vs dev). How can I publish it to Prod using .env, and publish to Dev using .env.dev ? Notes: I build using VS Enterprise. I deploy using VS Enterprise (right-click -> Publish). I know I can update the 'scripts' section in packages.json. But I dont believe this script(s) are called when I do a Publish from VS IDE. Perhaps there is a way to specify the script?? Ex: build:dev would build using .env.development, and build:prod would build using .env Thanks A: You can use the following steps: Create two environment files: .env for production and .env.dev for development. In your package.json file, update the scripts section to include two new commands: build:dev and build:prod. These commands will be used to build your app using the appropriate environment file. Here is an example of what your scripts section might look like: "scripts": { "build:dev": "env-cmd -f .env.dev react-scripts build", "build:prod": "env-cmd -f .env react-scripts build", "start:dev": "env-cmd -f .env.dev react-scripts start", "start:prod": "env-cmd -f .env react-scripts start", "test": "react-scripts test", "eject": "react-scripts eject" }, In your .csproj file, update the TargetFramework element to include the Publish target: <TargetFramework>netcoreapp3.1</TargetFramework> <TargetFramework>net5.0</TargetFramework> In your project, create a new file named publish.prod.cmd that contains the following commands: npm run build:prod dotnet publish This file will be used to build your app using the production environment file and publish it to Azure. In your project, create a new file named publish.dev.cmd that contains the following commands: npm run build:dev dotnet publish This file will be used to build your app using the development environment file and publish it to Azure. In Visual Studio, open the Publish window for your project. In the Build section, select the Custom option and enter the path to the appropriate publish script file (publish.prod.cmd for production and publish.dev.cmd for development). Select the appropriate publish profile and click the Publish button to publish your app to Azure. The app will be built using the environment file specified in the selected publish script, and the resulting build will be published to Azure.
How to deploy/publish an ASP.NET Core React JS website with a particular .env file?
I have a simple ASP.NET Core React JS web app. I've been publishing it directly to Azure just fine. Recently I introduced a 2nd environment (prod vs dev). How can I publish it to Prod using .env, and publish to Dev using .env.dev ? Notes: I build using VS Enterprise. I deploy using VS Enterprise (right-click -> Publish). I know I can update the 'scripts' section in packages.json. But I dont believe this script(s) are called when I do a Publish from VS IDE. Perhaps there is a way to specify the script?? Ex: build:dev would build using .env.development, and build:prod would build using .env Thanks
[ "You can use the following steps:\nCreate two environment files: .env for production and .env.dev for development.\nIn your package.json file, update the scripts section to include two new commands: build:dev and build:prod. These commands will be used to build your app using the appropriate environment file.\nHere is an example of what your scripts section might look like:\n\"scripts\": {\n \"build:dev\": \"env-cmd -f .env.dev react-scripts build\",\n \"build:prod\": \"env-cmd -f .env react-scripts build\",\n \"start:dev\": \"env-cmd -f .env.dev react-scripts start\",\n \"start:prod\": \"env-cmd -f .env react-scripts start\",\n \"test\": \"react-scripts test\",\n \"eject\": \"react-scripts eject\"\n},\n\nIn your .csproj file, update the TargetFramework element to include the Publish target:\n<TargetFramework>netcoreapp3.1</TargetFramework>\n<TargetFramework>net5.0</TargetFramework>\n\nIn your project, create a new file named publish.prod.cmd that contains the following commands:\nnpm run build:prod\ndotnet publish\n\nThis file will be used to build your app using the production environment file and publish it to Azure.\nIn your project, create a new file named publish.dev.cmd that contains the following commands:\nnpm run build:dev\ndotnet publish\n\nThis file will be used to build your app using the development environment file and publish it to Azure.\nIn Visual Studio, open the Publish window for your project. In the Build section, select the Custom option and enter the path to the appropriate publish script file (publish.prod.cmd for production and publish.dev.cmd for development).\nSelect the appropriate publish profile and click the Publish button to publish your app to Azure. The app will be built using the environment file specified in the selected publish script, and the resulting build will be published to Azure.\n" ]
[ 0 ]
[]
[]
[ ".env", "asp.net_core", "azure", "reactjs", "visual_studio" ]
stackoverflow_0074646925_.env_asp.net_core_azure_reactjs_visual_studio.txt
Q: valid date format for mm/dd/yyyy and dd/mm/yyyy I am trying to use a regex for for both types of dates which is dd/mm/yyyy or mm/dd/yyyy right now i am using this regex /^\d{2}\/\d{2}\/\d{4}$/ which makes sure my dates always end up in doubledigits for day and double digits for month and 4 digits for year and that is what is required, but any javascript expert can guide if i have two valid regex dates following the two formats i am trying to do. i do not want my dates to end up in 99/99/9999 but at least like 31/12/3000 or 12/31/3000 so the ending date valid is the 3000 and not more than that this is what i tried /^\d{2}\/\d{2}\/\d{4}$/ i9 am not expert at regex but this is what i was trying to do to make sure they enter date in the format it is required A: you can test this way : const rgxDte = /^\d{1,2}\/\d{1.2}\/\d{4}$/ , msgTxt = [ 'date is valid' , 'date doesn\'t respect regex' , 'date year is > 3000' , 'date has invalid day / month value(s)' ]; document.querySelectorAll('button').forEach( btn => { const inDateTxt = btn.closest('label').querySelector('input') , outMsg = btn.closest('label').querySelector('output') , dte_DMY = inDateTxt.placeholder === 'dd/mm/yyyy' ; inDateTxt.oninput =_=> { outMsg.textContent = '' } btn.onclick =_=> { let msgIndx = rgxDte.test(inDateTxt.value) ? 0 : 1 ; if (!msgIndx) // same as if (msgIndx===0) { let [ DM, MD, year ] = inDateTxt.value.split('/').map(Number) , day = dte_DMY ? DM : MD , month = (dte_DMY ? MD : DM) -1 // month value is an index (first month indx is zero) , dateTest = new Date( year, month, day ) // work with adds,ex: if day > month number days it add days value and do month++ ; if (year > 3000) msgIndx = 2; if (!msgIndx && ( dateTest.getDate() !== day || dateTest.getMonth() !== month )) msgIndx = 3; } outMsg.classList.toggle('BAD', !!msgIndx) outMsg.textContent = msgTxt[msgIndx] } }) body { font-family : Arial, Helvetica, sans-serif; font-size : 16px; margin : 1rem; } label, input, output { display: block; } input { padding : .2rem .5rem; width : 8rem; font-size : 1.2rem ; } output { height : 2em; margin : .5rem 0 1rem 0; color : green; } output.BAD { color : red; } <label> dd/mm/yyyy : <input type="text" id="inDMY" placeholder="dd/mm/yyyy"> <button>check</button> <output></output> </label> <label> mm/dd/yyyy : <input type="text" id="inMDY" placeholder="mm/dd/yyyy"> <button>check</button> <output></output> </label>
valid date format for mm/dd/yyyy and dd/mm/yyyy
I am trying to use a regex for for both types of dates which is dd/mm/yyyy or mm/dd/yyyy right now i am using this regex /^\d{2}\/\d{2}\/\d{4}$/ which makes sure my dates always end up in doubledigits for day and double digits for month and 4 digits for year and that is what is required, but any javascript expert can guide if i have two valid regex dates following the two formats i am trying to do. i do not want my dates to end up in 99/99/9999 but at least like 31/12/3000 or 12/31/3000 so the ending date valid is the 3000 and not more than that this is what i tried /^\d{2}\/\d{2}\/\d{4}$/ i9 am not expert at regex but this is what i was trying to do to make sure they enter date in the format it is required
[ "you can test this way :\n\n\nconst \n rgxDte = /^\\d{1,2}\\/\\d{1.2}\\/\\d{4}$/\n, msgTxt =\n [ 'date is valid'\n , 'date doesn\\'t respect regex' \n , 'date year is > 3000'\n , 'date has invalid day / month value(s)'\n ];\ndocument.querySelectorAll('button').forEach( btn =>\n {\n const \n inDateTxt = btn.closest('label').querySelector('input')\n , outMsg = btn.closest('label').querySelector('output')\n , dte_DMY = inDateTxt.placeholder === 'dd/mm/yyyy'\n ;\n inDateTxt.oninput =_=>\n {\n outMsg.textContent = ''\n } \n btn.onclick =_=>\n {\n let msgIndx = rgxDte.test(inDateTxt.value) ? 0 : 1\n ;\n if (!msgIndx) // same as if (msgIndx===0)\n {\n let \n [ DM, MD, year ] = inDateTxt.value.split('/').map(Number)\n , day = dte_DMY ? DM : MD\n , month = (dte_DMY ? MD : DM) -1 // month value is an index (first month indx is zero)\n , dateTest = new Date( year, month, day ) // work with adds,ex: if day > month number days it add days value and do month++\n ;\n if (year > 3000)\n msgIndx = 2;\n\n if (!msgIndx && ( dateTest.getDate() !== day || dateTest.getMonth() !== month )) \n msgIndx = 3;\n }\n outMsg.classList.toggle('BAD', !!msgIndx)\n outMsg.textContent = msgTxt[msgIndx] \n }\n })\nbody {\n font-family : Arial, Helvetica, sans-serif;\n font-size : 16px;\n margin : 1rem;\n}\nlabel, input, output {\n display: block;\n }\ninput {\n padding : .2rem .5rem;\n width : 8rem;\n font-size : 1.2rem ;\n}\noutput {\n height : 2em;\n margin : .5rem 0 1rem 0;\n color : green;\n }\noutput.BAD { color : red; }\n<label>\n dd/mm/yyyy : \n <input type=\"text\" id=\"inDMY\" placeholder=\"dd/mm/yyyy\">\n <button>check</button>\n <output></output>\n</label>\n\n<label>\n mm/dd/yyyy : \n <input type=\"text\" id=\"inMDY\" placeholder=\"mm/dd/yyyy\">\n <button>check</button>\n <output></output>\n</label>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "jquery", "regex" ]
stackoverflow_0074677632_javascript_jquery_regex.txt
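The JavaScript answer above hinges on two things: a shape-only regex, then a round trip through a real Date to reject impossible values such as 31/02 or a year past 3000 (note that the quantifier {1.2} in the answer's rgxDte looks like a typo for {1,2}). Below is a rough Python sketch of the same logic, with names of my own choosing, handling both field orders.

import re
from datetime import date

PATTERN = re.compile(r"^\d{1,2}/\d{1,2}/\d{4}$")  # use \d{2} if two digits are mandatory, as in the question

def is_valid(text: str, day_first: bool = True) -> bool:
    if not PATTERN.match(text):
        return False
    a, b, year = map(int, text.split("/"))
    day, month = (a, b) if day_first else (b, a)
    if year > 3000:
        return False
    try:
        date(year, month, day)  # raises ValueError for 31/02, month 13, etc.
    except ValueError:
        return False
    return True

print(is_valid("31/12/3000"))                   # True  (dd/mm/yyyy)
print(is_valid("12/31/3000", day_first=False))  # True  (mm/dd/yyyy)
print(is_valid("99/99/9999"))                   # False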
Q: updating failed. the response is not a valid json response. WordPress website is on maintenance mode and when I want to publish a page it shows this From Permalinks URLs are set to "Post", and Simple SSL is activated but when I want to create a new page, "Home" for example, and publish I get the "updating failed. the response is not a valid JSON response" error on WordPress. The site is in maintenance mode. How can I fix it? A: Please check your custom shortcodes: if you added any shortcode in your theme's functions.php file, make sure the shortcode output is wrapped with ob_start() and ob_clean(), as this may cause the error. https://wordpress.stackexchange.com/questions/242769/clean-way-of-using-ob-start-and-ob-end-clean-in-wordpress
updating failed. the response is not a valid json response. WordPress website is on maintenance mode and when I want to publish a page it shows this
From Permalinks URLs are set to "Post", and Simple SSL is activated but when I want to create a new page, "Home" for example, and publish I get the "updating failed. the response is not a valid JSON response" error on WordPress. The site is in maintenance mode. How can I fix it?
[ "Please check the custom shortcodes if you were added any shortcode in your theme function.php file and make sure the shortcode output wrapped with ob_start() and ob_clean() this may causes the error.\nhttps://wordpress.stackexchange.com/questions/242769/clean-way-of-using-ob-start-and-ob-end-clean-in-wordpress\n" ]
[ 0 ]
[]
[]
[ "custom_wordpress_pages", "error_handling", "json", "maintenance_mode", "wordpress" ]
stackoverflow_0074679249_custom_wordpress_pages_error_handling_json_maintenance_mode_wordpress.txt
Q: Get specific values out of dictionary with multiple keys in Python I want to extract multiple ISINs out of a output.json file in python. The output.json file looks like the following: {'A1J780': {'ter': '0.20%', 'wkn': 'A1J780', 'isin': 'IE00B88DZ566'}, 'A1J7W9': {' 'ter': '0.20%', 'isin': 'IE00B8KMSQ34'}, 'LYX0VQ': {'isin': 'LU1302703878'}, 'A2AMYP': {'ter': '0.22%', 'savingsPlan': None, 'inceptionDate': '02.11.16', 'fundSize': '48', 'isin': 'IE00BD34DB16'}} ... My current approach is the following: with open('output.json') as f: data = json.load(f) value_list = list() for i in data: value_list.append(i['isin']) print(value_list) However, I receive the error message: Traceback (most recent call last): File "/Users/placeholder.py", line 73, in <module> value_list.append(i['isin']) ~^^^^^^^^ TypeError: string indices must be integers, not 'str' I would highly appreciate your input! Thank you in advance! A: Use data.values() as target in for loop to iterate over the JSON objects. Doing a loop over data iterates over the keys which is a string value (e.g. "A1J780"). data = { 'A1J780': {'ter': '0.20%', 'wkn': 'A1J780', 'isin': 'IE00B88DZ566'}, 'A1J7W9': {'ter': '0.20%', 'isin': 'IE00B8KMSQ34'} } value_list = [] for i in data.values(): value_list.append(i['isin']) print(value_list) Output: ['IE00B88DZ566', 'IE00B8KMSQ34'] If the isin key is not present in any of the dictionary objects then you would need to use i.get('isin') and check if the value is not None otherwise i['isin'] would raise an exception. A: The error message TypeError: string indices must be integers, not 'str' indicates that you are trying to access a dictionary value using a string as the key, but the type of the key should be an integer instead. In your code, the i variable in the for loop is a string, because it represents the keys in the data dictionary. However, you are trying to access the 'isin' value in the dictionary using the i['isin'] syntax, which is not valid for a string key. To fix this issue, you can use the i variable as the key to access the dictionary value, like this: with open('output.json') as f: data = json.load(f) value_list = list() for i in data: value_list.append(data[i]['isin']) print(value_list) In this updated code, the data[i] syntax is used to access the dictionary value associated with the key i, and then the ['isin'] syntax is used to access the 'isin' value in the nested dictionary. This code should produce the expected output of a list of ISINs from the output.json file.
Get specific values out of dictionary with multiple keys in Python
I want to extract multiple ISINs out of a output.json file in python. The output.json file looks like the following: {'A1J780': {'ter': '0.20%', 'wkn': 'A1J780', 'isin': 'IE00B88DZ566'}, 'A1J7W9': {' 'ter': '0.20%', 'isin': 'IE00B8KMSQ34'}, 'LYX0VQ': {'isin': 'LU1302703878'}, 'A2AMYP': {'ter': '0.22%', 'savingsPlan': None, 'inceptionDate': '02.11.16', 'fundSize': '48', 'isin': 'IE00BD34DB16'}} ... My current approach is the following: with open('output.json') as f: data = json.load(f) value_list = list() for i in data: value_list.append(i['isin']) print(value_list) However, I receive the error message: Traceback (most recent call last): File "/Users/placeholder.py", line 73, in <module> value_list.append(i['isin']) ~^^^^^^^^ TypeError: string indices must be integers, not 'str' I would highly appreciate your input! Thank you in advance!
[ "Use data.values() as target in for loop to iterate over the JSON objects. Doing a loop over data iterates over the keys which is a string value (e.g. \"A1J780\").\ndata = {\n 'A1J780': {'ter': '0.20%', 'wkn': 'A1J780', 'isin': 'IE00B88DZ566'},\n 'A1J7W9': {'ter': '0.20%', 'isin': 'IE00B8KMSQ34'}\n}\nvalue_list = []\nfor i in data.values():\n value_list.append(i['isin'])\nprint(value_list)\n\nOutput:\n['IE00B88DZ566', 'IE00B8KMSQ34']\n\nIf the isin key is not present in any of the dictionary objects then you would need to use i.get('isin') and check if the value is not None otherwise i['isin'] would raise an exception.\n", "The error message TypeError: string indices must be integers, not 'str' indicates that you are trying to access a dictionary value using a string as the key, but the type of the key should be an integer instead.\nIn your code, the i variable in the for loop is a string, because it represents the keys in the data dictionary. However, you are trying to access the 'isin' value in the dictionary using the i['isin'] syntax, which is not valid for a string key.\nTo fix this issue, you can use the i variable as the key to access the dictionary value, like this:\nwith open('output.json') as f:\n data = json.load(f)\n\nvalue_list = list()\nfor i in data:\n value_list.append(data[i]['isin'])\nprint(value_list)\n\nIn this updated code, the data[i] syntax is used to access the dictionary value associated with the key i, and then the ['isin'] syntax is used to access the 'isin' value in the nested dictionary.\nThis code should produce the expected output of a list of ISINs from the output.json file.\n" ]
[ 0, 0 ]
[]
[]
[ "dictionary", "json", "python" ]
stackoverflow_0074679216_dictionary_json_python.txt
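Both answers above come down to the same fact: iterating over a dict yields its keys, which here are strings, so i['isin'] ends up indexing a string with a string and raises the TypeError. A short sketch combining the two suggestions (the sample data is a cleaned-up copy of the question's snippet; json.load would produce the same structure from the file):

data = {
    "A1J780": {"ter": "0.20%", "wkn": "A1J780", "isin": "IE00B88DZ566"},
    "A1J7W9": {"ter": "0.20%", "isin": "IE00B8KMSQ34"},
    "LYX0VQ": {"isin": "LU1302703878"},
    "A2AMYP": {"ter": "0.22%", "isin": "IE00BD34DB16"},
}

# Iterate over the values (the inner dicts), not the keys, and tolerate entries
# that might lack an 'isin' field.
isins = [entry["isin"] for entry in data.values() if "isin" in entry]
print(isins)  # ['IE00B88DZ566', 'IE00B8KMSQ34', 'LU1302703878', 'IE00BD34DB16']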
Q: Disable `pip install` Timeout For Slow Connections I recently moved to a place with terrible internet connection. Ever since then I have been having huge issues getting my programming environments set up with all the tools I need - you don't realize how many things you need to download until each one of those things takes over a day. For this post I would like to try to figure out how to deal with this in pip. The Problem Almost every time I pip install something it ends out timing out somewhere in the middle. It takes many tries until I get lucky enough to have it complete without a time out. This happens with many different things I have tried, big or small. Every time an install fails the next time starts all over again from 0%, no matter how far I got before. I get something along the lines of pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. What I want to happen Ideally I would like to either extend the definition of time pip uses before it declares a timeout or be able to disable the option of a timeout all together. I am not sure either of these are possible, so if anyone has any other solution for me that would be greatly appreciated as well. Other Information Not sure this helps any but what I found is that the only reliable way for me to download anything here is using torrents, as they do not restart a download once they lose connection, rather they always continue where they left off. If there is a way to use this fact in any way that would be nice too. A: Use option --timeout <sec> to set socket time out. Also, as @Iain Shelvington mentioned, timeout = <sec> in pip configuration will also work. TIP: Every time you want to know something (maybe an option) about a command (tool), before googling, check the manual page of the command by using man <command> or use <command> --help or check that command's docs online will be very useful too (Maybe better than Google). A: To set the timeout time to 30sec for example. The easiest way is executing: pip config global.timeout 30 or going to the pip configuration file pip.ini located in the directory ~\AppData\Roaming\pip in the case of Windows operating system. If the file does not exist there, create it and write: [global] timeout = 30.
Disable `pip install` Timeout For Slow Connections
I recently moved to a place with terrible internet connection. Ever since then I have been having huge issues getting my programming environments set up with all the tools I need - you don't realize how many things you need to download until each one of those things takes over a day. For this post I would like to try to figure out how to deal with this in pip. The Problem Almost every time I pip install something it ends out timing out somewhere in the middle. It takes many tries until I get lucky enough to have it complete without a time out. This happens with many different things I have tried, big or small. Every time an install fails the next time starts all over again from 0%, no matter how far I got before. I get something along the lines of pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. What I want to happen Ideally I would like to either extend the definition of time pip uses before it declares a timeout or be able to disable the option of a timeout all together. I am not sure either of these are possible, so if anyone has any other solution for me that would be greatly appreciated as well. Other Information Not sure this helps any but what I found is that the only reliable way for me to download anything here is using torrents, as they do not restart a download once they lose connection, rather they always continue where they left off. If there is a way to use this fact in any way that would be nice too.
[ "Use option --timeout <sec> to set socket time out.\nAlso, as @Iain Shelvington mentioned, timeout = <sec> in pip configuration will also work.\nTIP: Every time you want to know something (maybe an option) about a command (tool), before googling, check the manual page of the command by using man <command> or use <command> --help or check that command's docs online will be very useful too (Maybe better than Google).\n", "To set the timeout time to 30sec for example. The easiest way is executing: pip config global.timeout 30 or going to the pip configuration file pip.ini located in the directory ~\\AppData\\Roaming\\pip in the case of Windows operating system. If the file does not exist there, create it and write:\n[global]\ntimeout = 30.\n" ]
[ 10, 0 ]
[]
[]
[ "pip", "python", "request_timed_out" ]
stackoverflow_0059796680_pip_python_request_timed_out.txt
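Two small additions to the answers above: the persistent-config route needs pip config's set action (pip config set global.timeout 30), and for a connection that keeps dropping it can help to separate downloading from installing, since pip download keeps already-fetched files in its destination folder between runs, so repeated attempts should only fetch what is still missing. The retry wrapper below is only a sketch of that idea; the script, the function name and the example package are mine, not part of pip, and it comes closer to the resume-where-you-left-off behaviour the question wishes for.

import subprocess
import sys
import time

def fetch_then_install(package: str, cache_dir: str = "wheel_cache",
                       attempts: int = 20, timeout: int = 120) -> None:
    """Keep retrying 'pip download' into cache_dir, then install offline from it."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(
            [sys.executable, "-m", "pip", "download",
             "--timeout", str(timeout), "--retries", "10",
             "-d", cache_dir, package])
        if result.returncode == 0:
            break
        print(f"attempt {attempt} failed, retrying...")
        time.sleep(5)
    else:
        raise RuntimeError(f"could not download {package}")
    # Files already sitting in cache_dir are reused, so each rerun makes progress.
    subprocess.run(
        [sys.executable, "-m", "pip", "install",
         "--no-index", "--find-links", cache_dir, package],
        check=True)

if __name__ == "__main__":
    fetch_then_install("requests")  # any package name; "requests" is just an example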
Q: iOS WKWebView not showing javascript alert() dialog I am having some trouble getting a WKWebView in iOS 8 to display an alert dialog that is called from Javascript. After creating a standard WKWebView and loading an HTML file, I have a button on the page that creates a simple alert with some text. This works in UIWebView and in Google Chrome/Safari, but does not appear to be working in WKWebView. Any help is appreciated. My setup is as follows: WKWebViewConfiguration *config = [[WKWebViewConfiguration alloc] init]; config.allowsInlineMediaPlayback = YES; config.mediaPlaybackRequiresUserAction = false; _wkViewWeb = [[WKWebView alloc] initWithFrame:_viewWeb.frame config]; _wkViewWeb.scrollView.scrollEnabled = NO; NSString *fullURL = @"file://.../TestSlide.html"; NSURL *url = [NSURL URLWithString:fullURL]; NSURLRequest *request = [NSURLRequest requestWithURL:url cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10]; [_wkViewWeb loadRequest:request]; The html has the following function: <SCRIPT Language="JavaScript"> function alertTest() { alert("Testing Alerts"); } </SCRIPT> And a button: <b>Test Alerts: <input type="button" value="Alert Popup" onclick="alertTest()"><br></b> <br> This setup works in UIWebView and in regular browsers, but does not work in WKWebView. Am I missing something in the configuration? Should I be using one of the WK delegates to control the alert/confirm dialog behavior? Thank you. A: To solve this you need a WKUIDelegate for your web view. It is the duty of the delegate to decide if an alert should be displayed, and in what way. You need to implement this for alert, confirm and text input (prompt). Here is sample code without any validation of the page url or security features: - (void)webView:(WKWebView *)webView runJavaScriptAlertPanelWithMessage:(NSString *)message initiatedByFrame:(WKFrameInfo *)frame completionHandler:(void (^)(void))completionHandler { UIAlertController *alertController = [UIAlertController alertControllerWithTitle:message message:nil preferredStyle:UIAlertControllerStyleAlert]; [alertController addAction:[UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleCancel handler:^(UIAlertAction *action) { completionHandler(); }]]; [self presentViewController:alertController animated:YES completion:^{}]; } More in the Official Documentation A: Swift 3 with all 3 optional functions implemented: func webView(_ webView: WKWebView, runJavaScriptAlertPanelWithMessage message: String, initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping () -> Void) { let alertController = UIAlertController(title: nil, message: message, preferredStyle: .actionSheet) alertController.addAction(UIAlertAction(title: "OK", style: .default, handler: { (action) in completionHandler() })) present(alertController, animated: true, completion: nil) } func webView(_ webView: WKWebView, runJavaScriptConfirmPanelWithMessage message: String, initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping (Bool) -> Void) { let alertController = UIAlertController(title: nil, message: message, preferredStyle: .actionSheet) alertController.addAction(UIAlertAction(title: "OK", style: .default, handler: { (action) in completionHandler(true) })) alertController.addAction(UIAlertAction(title: "Cancel", style: .default, handler: { (action) in completionHandler(false) })) present(alertController, animated: true, completion: nil) } func webView(_ webView: WKWebView, runJavaScriptTextInputPanelWithPrompt prompt: String, defaultText: String?, initiatedByFrame 
frame: WKFrameInfo, completionHandler: @escaping (String?) -> Void) { let alertController = UIAlertController(title: nil, message: prompt, preferredStyle: .actionSheet) alertController.addTextField { (textField) in textField.text = defaultText } alertController.addAction(UIAlertAction(title: "OK", style: .default, handler: { (action) in if let text = alertController.textFields?.first?.text { completionHandler(text) } else { completionHandler(defaultText) } })) alertController.addAction(UIAlertAction(title: "Cancel", style: .default, handler: { (action) in completionHandler(nil) })) present(alertController, animated: true, completion: nil) } A: Just to expand a bit, WKWebView requires you to show alerts, prompts, and confirms yourself. Do this by becoming a WKUIDelegate: #import <WebKit/WebKit.h> @interface MyController : UIViewController<WKUIDelegate> Then assign the delegate: web.UIDelegate = self; Then you need to actually implement alert, prompt, and confirm. I create WKWebViewPanelManager.h/m as an easy implementation, so here's what I do: - (void)webView:(WKWebView *)webView runJavaScriptAlertPanelWithMessage:(NSString *)message initiatedByFrame:(WKFrameInfo *)frame completionHandler:(void (^)(void))completionHandler { [WKWebViewPanelManager presentAlertOnController:self.view.window.rootViewController title:@"Alert" message:message handler:completionHandler]; } - (void)webView:(WKWebView *)webView runJavaScriptConfirmPanelWithMessage:(NSString *)message initiatedByFrame:(WKFrameInfo *)frame completionHandler:(void (^)(BOOL result))completionHandler { [WKWebViewPanelManager presentConfirmOnController:self.view.window.rootViewController title:@"Confirm" message:message handler:completionHandler]; } - (void)webView:(WKWebView *)webView runJavaScriptTextInputPanelWithPrompt:(NSString *)prompt defaultText:(nullable NSString *)defaultText initiatedByFrame:(WKFrameInfo *)frame completionHandler:(void (^)(NSString * __nullable result))completionHandler { [WKWebViewPanelManager presentPromptOnController:self.view.window.rootViewController title:@"Prompt" message:prompt defaultText:defaultText handler:completionHandler]; } Of course, it's up to you to filter out bad alert/confirm/prompt requests. A: And here's that in swift: func webView(webView: WKWebView, runJavaScriptAlertPanelWithMessage message: String, initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping () -> Void) { let alertController = UIAlertController(title: message, message: nil, preferredStyle: .alert) alertController.addAction(UIAlertAction(title: "OK", style: .cancel) { _ in completionHandler()} ) self.present(alertController, animated: true, completion: nil) } A: Implement the protocol to your WKWebview container controller. WKUIDelegate Add preference to WKWebview to enable javascript in viewDidLoad() as. // enable JS webView.configuration.preferences.javaScriptEnabled = true Register WKWebview UI Delegate in viewDidLoad() as self.webView.uiDelegate = self Implement the below delegate in your class. 
func webView(_ webView: WKWebView, runJavaScriptAlertPanelWithMessage message: String, initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping () -> Void) { let alertController = UIAlertController(title: message,message: nil,preferredStyle: .alert) alertController.addAction(UIAlertAction(title: "OK", style: .cancel) {_ in completionHandler()}) self.present(alertController, animated: true, completion: nil) } A: Here is the code in Swift 4.2 func webView(_ webView: WKWebView, runJavaScriptAlertPanelWithMessage message: String, initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping () -> Void) { let alertController = UIAlertController(title: message, message: nil, preferredStyle: UIAlertController.Style.alert); alertController.addAction(UIAlertAction(title: "OK", style: UIAlertAction.Style.cancel) { _ in completionHandler()} ); self.present(alertController, animated: true, completion: {}); } A: The viewController ViewController: UIViewController, WKUIDelegate, WKNavigationDelegate Enable JavaScript webView.configuration.preferences.javaScriptEnabled = true In the ViewDidLoad() Function add the below items webView.uiDelegate = self webView.navigationDelegate = self view.addSubview(webView) Add the delegate as follows func webView(_ webView: WKWebView, runJavaScriptAlertPanelWithMessage message: String, initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping () -> Void) { let alert = UIAlertController(title: nil, message: message, preferredStyle: .alert) let title = NSLocalizedString("OK", comment: "OK Button") let ok = UIAlertAction(title: title, style: .default) { (action: UIAlertAction) -> Void in alert.dismiss(animated: true, completion: nil) } alert.addAction(ok) present(alert, animated: true) completionHandler() }
iOS WKWebView not showing javascript alert() dialog
I am having some trouble getting a WKWebView in iOS 8 to display an alert dialog that is called from Javascript. After creating a standard WKWebView and loading an HTML file, I have a button on the page that creates a simple alert with some text. This works in UIWebView and in Google Chrome/Safari, but does not appear to be working in WKWebView. Any help is appreciated. My setup is as follows: WKWebViewConfiguration *config = [[WKWebViewConfiguration alloc] init]; config.allowsInlineMediaPlayback = YES; config.mediaPlaybackRequiresUserAction = false; _wkViewWeb = [[WKWebView alloc] initWithFrame:_viewWeb.frame config]; _wkViewWeb.scrollView.scrollEnabled = NO; NSString *fullURL = @"file://.../TestSlide.html"; NSURL *url = [NSURL URLWithString:fullURL]; NSURLRequest *request = [NSURLRequest requestWithURL:url cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10]; [_wkViewWeb loadRequest:request]; The html has the following function: <SCRIPT Language="JavaScript"> function alertTest() { alert("Testing Alerts"); } </SCRIPT> And a button: <b>Test Alerts: <input type="button" value="Alert Popup" onclick="alertTest()"><br></b> <br> This setup works in UIWebView and in regular browsers, but does not work in WKWebView. Am I missing something in the configuration? Should I be using one of the WK delegates to control the alert/confirm dialog behavior? Thank you.
[ "To solve this you need a WKUIDelegate for your web view. It is the duty of the delegate to decide if an alert should be displayed, and in what way. You need to implement this for alert, confirm and text input (prompt).\nHere is sample code without any validation of the page url or security features:\n- (void)webView:(WKWebView *)webView runJavaScriptAlertPanelWithMessage:(NSString *)message initiatedByFrame:(WKFrameInfo *)frame completionHandler:(void (^)(void))completionHandler\n{\n UIAlertController *alertController = [UIAlertController alertControllerWithTitle:message\n message:nil\n preferredStyle:UIAlertControllerStyleAlert];\n [alertController addAction:[UIAlertAction actionWithTitle:@\"OK\"\n style:UIAlertActionStyleCancel\n handler:^(UIAlertAction *action) {\n completionHandler();\n }]];\n [self presentViewController:alertController animated:YES completion:^{}];\n}\n\nMore in the Official Documentation\n", "Swift 3 with all 3 optional functions implemented:\nfunc webView(_ webView: WKWebView, runJavaScriptAlertPanelWithMessage message: String, initiatedByFrame frame: WKFrameInfo,\n completionHandler: @escaping () -> Void) {\n\n let alertController = UIAlertController(title: nil, message: message, preferredStyle: .actionSheet)\n alertController.addAction(UIAlertAction(title: \"OK\", style: .default, handler: { (action) in\n completionHandler()\n }))\n\n present(alertController, animated: true, completion: nil)\n}\n\n\nfunc webView(_ webView: WKWebView, runJavaScriptConfirmPanelWithMessage message: String, initiatedByFrame frame: WKFrameInfo,\n completionHandler: @escaping (Bool) -> Void) {\n\n let alertController = UIAlertController(title: nil, message: message, preferredStyle: .actionSheet)\n\n alertController.addAction(UIAlertAction(title: \"OK\", style: .default, handler: { (action) in\n completionHandler(true)\n }))\n\n alertController.addAction(UIAlertAction(title: \"Cancel\", style: .default, handler: { (action) in\n completionHandler(false)\n }))\n\n present(alertController, animated: true, completion: nil)\n}\n\n\nfunc webView(_ webView: WKWebView, runJavaScriptTextInputPanelWithPrompt prompt: String, defaultText: String?, initiatedByFrame frame: WKFrameInfo,\n completionHandler: @escaping (String?) -> Void) {\n\n let alertController = UIAlertController(title: nil, message: prompt, preferredStyle: .actionSheet)\n\n alertController.addTextField { (textField) in\n textField.text = defaultText\n }\n\n alertController.addAction(UIAlertAction(title: \"OK\", style: .default, handler: { (action) in\n if let text = alertController.textFields?.first?.text {\n completionHandler(text)\n } else {\n completionHandler(defaultText)\n }\n }))\n\n alertController.addAction(UIAlertAction(title: \"Cancel\", style: .default, handler: { (action) in\n completionHandler(nil)\n }))\n\n present(alertController, animated: true, completion: nil)\n}\n\n", "Just to expand a bit, WKWebView requires you to show alerts, prompts, and confirms yourself. Do this by becoming a WKUIDelegate:\n#import <WebKit/WebKit.h>\n@interface MyController : UIViewController<WKUIDelegate>\n\nThen assign the delegate:\nweb.UIDelegate = self;\n\nThen you need to actually implement alert, prompt, and confirm. 
I create WKWebViewPanelManager.h/m as an easy implementation, so here's what I do:\n- (void)webView:(WKWebView *)webView runJavaScriptAlertPanelWithMessage:(NSString *)message initiatedByFrame:(WKFrameInfo *)frame completionHandler:(void (^)(void))completionHandler {\n [WKWebViewPanelManager presentAlertOnController:self.view.window.rootViewController title:@\"Alert\" message:message handler:completionHandler];\n}\n\n- (void)webView:(WKWebView *)webView runJavaScriptConfirmPanelWithMessage:(NSString *)message initiatedByFrame:(WKFrameInfo *)frame completionHandler:(void (^)(BOOL result))completionHandler {\n [WKWebViewPanelManager presentConfirmOnController:self.view.window.rootViewController title:@\"Confirm\" message:message handler:completionHandler];\n}\n\n- (void)webView:(WKWebView *)webView runJavaScriptTextInputPanelWithPrompt:(NSString *)prompt defaultText:(nullable NSString *)defaultText initiatedByFrame:(WKFrameInfo *)frame completionHandler:(void (^)(NSString * __nullable result))completionHandler {\n [WKWebViewPanelManager presentPromptOnController:self.view.window.rootViewController title:@\"Prompt\" message:prompt defaultText:defaultText handler:completionHandler];\n}\n\nOf course, it's up to you to filter out bad alert/confirm/prompt requests.\n", "And here's that in swift:\nfunc webView(webView: WKWebView, runJavaScriptAlertPanelWithMessage message: String,\n initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping () -> Void) {\n\n let alertController = UIAlertController(title: message,\n message: nil,\n preferredStyle: .alert)\n\n alertController.addAction(UIAlertAction(title: \"OK\", style: .cancel) {\n _ in completionHandler()}\n )\n\n self.present(alertController, animated: true, completion: nil)\n}\n\n", "\nImplement the protocol to your WKWebview container controller. WKUIDelegate\nAdd preference to WKWebview to enable javascript in viewDidLoad() as. 
\n// enable JS\nwebView.configuration.preferences.javaScriptEnabled = true\n\nRegister WKWebview UI Delegate in viewDidLoad() as \nself.webView.uiDelegate = self\n\nImplement the below delegate in your class.\nfunc webView(_ webView: WKWebView, runJavaScriptAlertPanelWithMessage message: \nString, initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping () -> \nVoid) {\nlet alertController = UIAlertController(title: message,message: nil,preferredStyle: \n.alert)\n\nalertController.addAction(UIAlertAction(title: \"OK\", style: .cancel) {_ in \ncompletionHandler()})\n\nself.present(alertController, animated: true, completion: nil)\n}\n\n\n", "Here is the code in Swift 4.2\nfunc webView(_ webView: WKWebView, runJavaScriptAlertPanelWithMessage message: String,\n initiatedByFrame frame: WKFrameInfo, completionHandler: @escaping () -> Void) {\n\n let alertController = UIAlertController(title: message, message: nil,\n preferredStyle: UIAlertController.Style.alert);\n\n alertController.addAction(UIAlertAction(title: \"OK\", style: UIAlertAction.Style.cancel) {\n _ in completionHandler()}\n );\n\n self.present(alertController, animated: true, completion: {});\n}\n\n", "\nThe viewController\n\nViewController: UIViewController, WKUIDelegate, WKNavigationDelegate \n\n\nEnable JavaScript\n\nwebView.configuration.preferences.javaScriptEnabled = true\n\n\nIn the ViewDidLoad() Function add the below items\n\n webView.uiDelegate = self\n webView.navigationDelegate = self\n view.addSubview(webView)\n\n\nAdd the delegate as follows\n\n func webView(_ webView: WKWebView,\n runJavaScriptAlertPanelWithMessage message: String,\n initiatedByFrame frame: WKFrameInfo,\n completionHandler: @escaping () -> Void) {\n \n let alert = UIAlertController(title: nil, message: message, preferredStyle: .alert)\n let title = NSLocalizedString(\"OK\", comment: \"OK Button\")\n let ok = UIAlertAction(title: title, style: .default) { (action: UIAlertAction) -> Void in\n alert.dismiss(animated: true, completion: nil)\n }\n alert.addAction(ok)\n present(alert, animated: true)\n completionHandler()\n }\n\n" ]
[ 91, 61, 21, 16, 8, 7, 0 ]
[]
[]
[ "ios", "javascript", "wkwebview" ]
stackoverflow_0026898941_ios_javascript_wkwebview.txt
Q: How to run two react projects together in my local My project sturture is as follows : Projects --- package.json |-------Project1 ---- package.json |-------Project2 ---- package.json if I run yarn start from cd Projects/Project1, project1 react app runs. if I run yarn start from cd Projects/Project2, project2 react app runs. for running the app from Projects, I need to specify project name : eg: yarn start-Project1 I want to run both my projects from the top most directory using yarn start. I want to run the projects on different ports. If I change the package.json in "Projects" to this "start": "cd Projects/Project1 && export PORT=3001 && cd ../Project2 && export PORT=3000 && react-scripts start", It is only running of the project. Thank you. A: This is because the && operator runs commands in sequence, meaning each command runs after the previous command exits. Your Project1 doesn't exit, it just sits there running indefinitely. You need to run both your commands in parallel. One way to do this is to use & instead of &&. & will start the first command in the background instead of waiting for it to exit. When using & you should be careful of the order of operations. You can use parentheses to group sequential/parallel commands as necessary. I think this command might work for you: (cd Projects/Project1 && export PORT=3001) & (cd ../Project2 && export PORT=3000) & react-scripts start A: Go to project1 path in terminal and run project1 by yarn start. Open project2 package.json file and edit scripts section as following: "scripts": { "start": "set PORT=3001 && react-scripts start", And run project2 by yarn start in its path too. Hope this helps.
How to run two react projects together in my local
My project sturture is as follows : Projects --- package.json |-------Project1 ---- package.json |-------Project2 ---- package.json if I run yarn start from cd Projects/Project1, project1 react app runs. if I run yarn start from cd Projects/Project2, project2 react app runs. for running the app from Projects, I need to specify project name : eg: yarn start-Project1 I want to run both my projects from the top most directory using yarn start. I want to run the projects on different ports. If I change the package.json in "Projects" to this "start": "cd Projects/Project1 && export PORT=3001 && cd ../Project2 && export PORT=3000 && react-scripts start", It is only running of the project. Thank you.
[ "This is because the && operator runs commands in sequence, meaning each command runs after the previous command exits. Your Project1 doesn't exit, it just sits there running indefinitely.\nYou need to run both your commands in parallel. One way to do this is to use & instead of &&. & will start the first command in the background instead of waiting for it to exit.\nWhen using & you should be careful of the order of operations. You can use parentheses to group sequential/parallel commands as necessary. I think this command might work for you:\n(cd Projects/Project1 && export PORT=3001) & (cd ../Project2 && export PORT=3000) & react-scripts start\n\n", "Go to project1 path in terminal and run project1 by yarn start.\nOpen project2 package.json file and edit scripts section as following:\n\"scripts\": {\n\"start\": \"set PORT=3001 && react-scripts start\",\nAnd run project2 by yarn start in its path too.\nHope this helps.\n" ]
[ 1, 0 ]
[]
[]
[ "frontend", "javascript", "package.json", "reactjs", "yarnpkg" ]
stackoverflow_0060011671_frontend_javascript_package.json_reactjs_yarnpkg.txt
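One caution about the commands above: && runs things one after another, and a CRA dev server never exits, so the second app never starts; and in the parenthesised variant the subshells only cd and export before exiting, so only the trailing react-scripts start actually launches anything, without either PORT. On the JavaScript side, tools such as concurrently or npm-run-all are the usual fix from package.json. For illustration only, here is a small Python sketch (the folder names, ports and the BROWSER=none setting are assumptions) that starts both apps in parallel from the top-level Projects folder, each with its own PORT:

import os
import subprocess

APPS = {"Project1": "3001", "Project2": "3000"}  # folder -> port

processes = []
for folder, port in APPS.items():
    env = dict(os.environ, PORT=port, BROWSER="none")  # BROWSER=none avoids opening two tabs
    # On Windows, yarn resolves to yarn.cmd, so shell=True may be needed there.
    processes.append(subprocess.Popen(["yarn", "start"], cwd=folder, env=env))

try:
    for proc in processes:
        proc.wait()        # both dev servers run until interrupted
except KeyboardInterrupt:
    for proc in processes:
        proc.terminate()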
Q: Why HIlt can not create instance of SharedViewModel? Update 1 I changed delegate from viewModels to hiltNavGraphViewModels and it's works. Thanks to ianhanniballake's comments. But now is another issue. If app have killed (Logcat -> Terminate App), occurs exception No destination with ID 2131296453 is on the NavController's back stack. The current destination is null (stacktrace below). How can I restore back stack in order to hiltNavGraphViewModels don't fail? @AndroidEntryPoint class ScriptDetailFragment: Fragment(R.layout.component_detail_script) { private val viewModel: SharedViewModel by hiltNavGraphViewModels(R.id.scriptFragment) } @AndroidEntryPoint class ScriptFragment : Fragment(R.layout.component_script) { private val viewModel: SharedViewModel by hiltNavGraphViewModels(R.id.scriptFragment) fun navigateToDetail(navDirection: NavDirections) { findNavController().navigate(navDirection) } } @HiltViewModel class SharedViewModel @Inject constructor( private val componentWrapper: ComponentWrapper ) : ViewModel() { } dependencies { implementation 'androidx.core:core-ktx:1.8.0' implementation 'com.google.dagger:hilt-android:2.38' implementation("androidx.hilt:hilt-navigation-fragment:1.0.0") kapt 'com.google.dagger:hilt-compiler:2.38' kapt 'androidx.hilt:hilt-compiler:1.0.0' implementation "androidx.activity:activity-ktx:1.5.1" implementation "androidx.fragment:fragment-ktx:1.5.4" implementation "androidx.navigation:navigation-fragment-ktx:2.5.3" implementation "androidx.navigation:navigation-ui-ktx:2.5.3" } Caused by: java.lang.IllegalArgumentException: No destination with ID 2131296453 is on the NavController's back stack. The current destination is null at androidx.navigation.NavController.getBackStackEntry(NavController.kt:2209) at *.pages.departure.ScriptFragment$special$$inlined$hiltNavGraphViewModels$1.invoke(HiltNavGraphViewModelLazy.kt:49) at *.pages.departure.ScriptFragment$special$$inlined$hiltNavGraphViewModels$1.invoke(Unknown Source:0) at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74) at *.pages.departure.ScriptFragment$special$$inlined$hiltNavGraphViewModels$3.invoke(HiltNavGraphViewModelLazy.kt:57) at *.pages.departure.ScriptFragment$special$$inlined$hiltNavGraphViewModels$3.invoke(Unknown Source:0) at androidx.lifecycle.ViewModelLazy.getValue(ViewModelLazy.kt:47) at androidx.lifecycle.ViewModelLazy.getValue(ViewModelLazy.kt:35) at *.pages.departure.ScriptFragment.getViewModel(ScriptFragment.kt:15) at *.pages.departure.ScriptFragment.onViewCreated(ScriptFragment.kt:20) at androidx.fragment.app.Fragment.performViewCreated(Fragment.java:3128) at androidx.fragment.app.FragmentStateManager.createView(FragmentStateManager.java:552) at androidx.fragment.app.FragmentStateManager.moveToExpectedState(FragmentStateManager.java:261) at androidx.fragment.app.FragmentStore.moveToExpectedState(FragmentStore.java:113) at androidx.fragment.app.FragmentManager.moveToState(FragmentManager.java:1433) at androidx.fragment.app.FragmentManager.dispatchStateChange(FragmentManager.java:2977) at androidx.fragment.app.FragmentManager.dispatchViewCreated(FragmentManager.java:2888) at androidx.fragment.app.Fragment.performViewCreated(Fragment.java:3129) at androidx.fragment.app.FragmentStateManager.createView(FragmentStateManager.java:552) at androidx.fragment.app.FragmentStateManager.moveToExpectedState(FragmentStateManager.java:261) at androidx.fragment.app.FragmentStore.moveToExpectedState(FragmentStore.java:113) at 
androidx.fragment.app.FragmentManager.moveToState(FragmentManager.java:1433) at androidx.fragment.app.FragmentManager.dispatchStateChange(FragmentManager.java:2977) at androidx.fragment.app.FragmentManager.dispatchActivityCreated(FragmentManager.java:2895) at androidx.fragment.app.FragmentController.dispatchActivityCreated(FragmentController.java:263) at androidx.fragment.app.FragmentActivity.onStart(FragmentActivity.java:351) at androidx.appcompat.app.AppCompatActivity.onStart(AppCompatActivity.java:248) at android.app.Instrumentation.callActivityOnStart(Instrumentation.java:1335) at android.app.Activity.performStart(Activity.java:7043) activity_main.xml <androidx.fragment.app.FragmentContainerView android:id="@+id/nav_host" android:name="androidx.navigation.fragment.NavHostFragment" android:layout_width="match_parent" android:layout_height="match_parent" app:defaultNavHost="true" tools:context=".screens.MainActivity" /> MainActivity.ki @AndroidEntryPoint class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) _binding = ActivityMainBinding.inflate(layoutInflater) setContentView(binding.root) configureNavHost() } override fun onSupportNavigateUp() = navController.navigateUp() || super.onSupportNavigateUp() private fun configureNavHost() { val resId = if (dsr.isFirstLaunch.value) R.id.welcomeFragment else R.id.webViewFragment navController = (supportFragmentManager.findFragmentById(R.id.nav_host) as NavHostFragment).navController navController.navInflater.inflate(R.navigation.nav_graph).let { graph -> graph.setStartDestination(resId) navController.graph = graph } } } A: If the app is killed and then restarted, the Navigation graph and the back stack will be reset. This means that any fragments or destinations that were on the back stack will be removed, and the NavController will no longer be able to find them. In order to restore the back stack and avoid the exception, you can use the onSaveInstanceState() method in Fragment or Activity to save the state of the NavController and the back stack. This will allow to restore the stored state when the app is restarted. Example of implementation in a Fragment: @AndroidEntryPoint class MyFragment : Fragment() { private val navController by lazy { findNavController() } override fun onSaveInstanceState(outState: Bundle) { super.onSaveInstanceState(outState) // Save the state of the NavController and the back stack navController.saveState(outState) } override fun onViewStateRestored(savedInstanceState: Bundle?) { super.onViewStateRestored(savedInstanceState) // Restore the state of the NavController and the back stack savedInstanceState?.let { navController.restoreState(it) } } } With this implementation, the state of the NavController and the back stack will be saved when the app is killed and restored when the app is restarted, allowing the hiltNavGraphViewModels() function to find the correct destination and avoid the exception. A: To restore the back stack when using the hiltNavGraphViewModels delegate, you can use the popBackStack method in the FragmentManager to pop the current destination off the back stack. This will allow the hiltNavGraphViewModels delegate to find the destination with the specified ID and create an instance of the SharedViewModel correctly. 
Here is an example of how you can do this in your ScriptFragment class: @AndroidEntryPoint class ScriptFragment : Fragment(R.layout.component_script) { private val viewModel: SharedViewModel by hiltNavGraphViewModels(R.id.scriptFragment) override fun onViewCreated(view: View, savedInstanceState: Bundle?) { super.onViewCreated(view, savedInstanceState) // Pop the current destination off the back stack to restore the back stack childFragmentManager.popBackStack() } fun navigateToDetail(navDirection: NavDirections) { findNavController().navigate(navDirection) } } In this example, the popBackStack method is called in the onViewCreated method, which is called when the fragment's view is created. This ensures that the back stack is restored before the hiltNavGraphViewModels delegate is used to create an instance of the SharedViewModel. You can also use the popBackStackImmediate method instead of the popBackStack method, which will pop the destination off the back stack immediately, without waiting for the fragment's view to be created. This can be useful if you want to restore the back stack as soon as the fragment is created, rather than waiting for the view to be created. @AndroidEntryPoint class ScriptFragment : Fragment(R.layout.component_script) { private val viewModel: SharedViewModel by hiltNavGraphViewModels(R.id.scriptFragment) override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) // Pop the current destination off the back stack to restore the back stack childFragmentManager.popBackStackImmediate() } fun navigateToDetail(navDirection: NavDirections) { findNavController().navigate(navDirection) } } In this example, the popBackStackImmediate method is called in the onCreate method, which is called when the fragment is created. This ensures that the back stack is restored as soon as the fragment is created, before the hiltNavGraphViewModels delegate is used to create an instance of the SharedViewModel.
Why HIlt can not create instance of SharedViewModel?
Update 1 I changed delegate from viewModels to hiltNavGraphViewModels and it's works. Thanks to ianhanniballake's comments. But now is another issue. If app have killed (Logcat -> Terminate App), occurs exception No destination with ID 2131296453 is on the NavController's back stack. The current destination is null (stacktrace below). How can I restore back stack in order to hiltNavGraphViewModels don't fail? @AndroidEntryPoint class ScriptDetailFragment: Fragment(R.layout.component_detail_script) { private val viewModel: SharedViewModel by hiltNavGraphViewModels(R.id.scriptFragment) } @AndroidEntryPoint class ScriptFragment : Fragment(R.layout.component_script) { private val viewModel: SharedViewModel by hiltNavGraphViewModels(R.id.scriptFragment) fun navigateToDetail(navDirection: NavDirections) { findNavController().navigate(navDirection) } } @HiltViewModel class SharedViewModel @Inject constructor( private val componentWrapper: ComponentWrapper ) : ViewModel() { } dependencies { implementation 'androidx.core:core-ktx:1.8.0' implementation 'com.google.dagger:hilt-android:2.38' implementation("androidx.hilt:hilt-navigation-fragment:1.0.0") kapt 'com.google.dagger:hilt-compiler:2.38' kapt 'androidx.hilt:hilt-compiler:1.0.0' implementation "androidx.activity:activity-ktx:1.5.1" implementation "androidx.fragment:fragment-ktx:1.5.4" implementation "androidx.navigation:navigation-fragment-ktx:2.5.3" implementation "androidx.navigation:navigation-ui-ktx:2.5.3" } Caused by: java.lang.IllegalArgumentException: No destination with ID 2131296453 is on the NavController's back stack. The current destination is null at androidx.navigation.NavController.getBackStackEntry(NavController.kt:2209) at *.pages.departure.ScriptFragment$special$$inlined$hiltNavGraphViewModels$1.invoke(HiltNavGraphViewModelLazy.kt:49) at *.pages.departure.ScriptFragment$special$$inlined$hiltNavGraphViewModels$1.invoke(Unknown Source:0) at kotlin.SynchronizedLazyImpl.getValue(LazyJVM.kt:74) at *.pages.departure.ScriptFragment$special$$inlined$hiltNavGraphViewModels$3.invoke(HiltNavGraphViewModelLazy.kt:57) at *.pages.departure.ScriptFragment$special$$inlined$hiltNavGraphViewModels$3.invoke(Unknown Source:0) at androidx.lifecycle.ViewModelLazy.getValue(ViewModelLazy.kt:47) at androidx.lifecycle.ViewModelLazy.getValue(ViewModelLazy.kt:35) at *.pages.departure.ScriptFragment.getViewModel(ScriptFragment.kt:15) at *.pages.departure.ScriptFragment.onViewCreated(ScriptFragment.kt:20) at androidx.fragment.app.Fragment.performViewCreated(Fragment.java:3128) at androidx.fragment.app.FragmentStateManager.createView(FragmentStateManager.java:552) at androidx.fragment.app.FragmentStateManager.moveToExpectedState(FragmentStateManager.java:261) at androidx.fragment.app.FragmentStore.moveToExpectedState(FragmentStore.java:113) at androidx.fragment.app.FragmentManager.moveToState(FragmentManager.java:1433) at androidx.fragment.app.FragmentManager.dispatchStateChange(FragmentManager.java:2977) at androidx.fragment.app.FragmentManager.dispatchViewCreated(FragmentManager.java:2888) at androidx.fragment.app.Fragment.performViewCreated(Fragment.java:3129) at androidx.fragment.app.FragmentStateManager.createView(FragmentStateManager.java:552) at androidx.fragment.app.FragmentStateManager.moveToExpectedState(FragmentStateManager.java:261) at androidx.fragment.app.FragmentStore.moveToExpectedState(FragmentStore.java:113) at androidx.fragment.app.FragmentManager.moveToState(FragmentManager.java:1433) at 
androidx.fragment.app.FragmentManager.dispatchStateChange(FragmentManager.java:2977) at androidx.fragment.app.FragmentManager.dispatchActivityCreated(FragmentManager.java:2895) at androidx.fragment.app.FragmentController.dispatchActivityCreated(FragmentController.java:263) at androidx.fragment.app.FragmentActivity.onStart(FragmentActivity.java:351) at androidx.appcompat.app.AppCompatActivity.onStart(AppCompatActivity.java:248) at android.app.Instrumentation.callActivityOnStart(Instrumentation.java:1335) at android.app.Activity.performStart(Activity.java:7043) activity_main.xml <androidx.fragment.app.FragmentContainerView android:id="@+id/nav_host" android:name="androidx.navigation.fragment.NavHostFragment" android:layout_width="match_parent" android:layout_height="match_parent" app:defaultNavHost="true" tools:context=".screens.MainActivity" /> MainActivity.ki @AndroidEntryPoint class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) _binding = ActivityMainBinding.inflate(layoutInflater) setContentView(binding.root) configureNavHost() } override fun onSupportNavigateUp() = navController.navigateUp() || super.onSupportNavigateUp() private fun configureNavHost() { val resId = if (dsr.isFirstLaunch.value) R.id.welcomeFragment else R.id.webViewFragment navController = (supportFragmentManager.findFragmentById(R.id.nav_host) as NavHostFragment).navController navController.navInflater.inflate(R.navigation.nav_graph).let { graph -> graph.setStartDestination(resId) navController.graph = graph } } }
[ "If the app is killed and then restarted, the Navigation graph and the back stack will be reset. This means that any fragments or destinations that were on the back stack will be removed, and the NavController will no longer be able to find them.\nIn order to restore the back stack and avoid the exception, you can use the onSaveInstanceState() method in Fragment or Activity to save the state of the NavController and the back stack. This will allow to restore the stored state when the app is restarted.\nExample of implementation in a Fragment:\n@AndroidEntryPoint\nclass MyFragment : Fragment() {\n\n private val navController by lazy { findNavController() }\n\n override fun onSaveInstanceState(outState: Bundle) {\n super.onSaveInstanceState(outState)\n // Save the state of the NavController and the back stack\n navController.saveState(outState)\n }\n\n override fun onViewStateRestored(savedInstanceState: Bundle?) {\n super.onViewStateRestored(savedInstanceState)\n // Restore the state of the NavController and the back stack\n savedInstanceState?.let { navController.restoreState(it) }\n }\n}\n\nWith this implementation, the state of the NavController and the back stack will be saved when the app is killed and restored when the app is restarted, allowing the hiltNavGraphViewModels() function to find the correct destination and avoid the exception.\n", "To restore the back stack when using the hiltNavGraphViewModels delegate, you can use the popBackStack method in the FragmentManager to pop the current destination off the back stack. This will allow the hiltNavGraphViewModels delegate to find the destination with the specified ID and create an instance of the SharedViewModel correctly.\nHere is an example of how you can do this in your ScriptFragment class:\n@AndroidEntryPoint\nclass ScriptFragment : Fragment(R.layout.component_script) {\n\n private val viewModel: SharedViewModel by hiltNavGraphViewModels(R.id.scriptFragment)\n\n override fun onViewCreated(view: View, savedInstanceState: Bundle?) {\n super.onViewCreated(view, savedInstanceState)\n\n // Pop the current destination off the back stack to restore the back stack\n childFragmentManager.popBackStack()\n }\n\n fun navigateToDetail(navDirection: NavDirections) {\n findNavController().navigate(navDirection)\n }\n}\n\nIn this example, the popBackStack method is called in the onViewCreated method, which is called when the fragment's view is created. This ensures that the back stack is restored before the hiltNavGraphViewModels delegate is used to create an instance of the SharedViewModel.\nYou can also use the popBackStackImmediate method instead of the popBackStack method, which will pop the destination off the back stack immediately, without waiting for the fragment's view to be created. This can be useful if you want to restore the back stack as soon as the fragment is created, rather than waiting for the view to be created.\n@AndroidEntryPoint\nclass ScriptFragment : Fragment(R.layout.component_script) {\n\n private val viewModel: SharedViewModel by hiltNavGraphViewModels(R.id.scriptFragment)\n\n override fun onCreate(savedInstanceState: Bundle?) 
{\n super.onCreate(savedInstanceState)\n\n // Pop the current destination off the back stack to restore the back stack\n childFragmentManager.popBackStackImmediate()\n }\n\n fun navigateToDetail(navDirection: NavDirections) {\n findNavController().navigate(navDirection)\n }\n}\n\nIn this example, the popBackStackImmediate method is called in the onCreate method, which is called when the fragment is created. This ensures that the back stack is restored as soon as the fragment is created, before the hiltNavGraphViewModels delegate is used to create an instance of the SharedViewModel.\n" ]
[ 1, 0 ]
[]
[]
[ "android", "dagger_hilt", "kotlin" ]
stackoverflow_0074577093_android_dagger_hilt_kotlin.txt
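Note on the NavController state-saving API that the first answer above alludes to: in androidx.navigation, saveState() takes no arguments and returns a Bundle?, and restoreState(Bundle?) accepts the saved bundle and should be called before a graph is set. The sketch below is a minimal, hypothetical illustration of those signatures only; the "nav_state" key, the layout id, and the activity wiring are assumptions rather than code from the question, and when a NavHostFragment is used it normally saves and restores this state by itself, so manual handling like this may be redundant.

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.navigation.NavController
import androidx.navigation.fragment.NavHostFragment
import dagger.hilt.android.AndroidEntryPoint

// Sketch only: shows the actual NavController save/restore signatures.
// The "nav_state" key and the surrounding wiring are assumptions.
@AndroidEntryPoint
class MainActivity : AppCompatActivity() {

    private lateinit var navController: NavController

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        navController = (supportFragmentManager
            .findFragmentById(R.id.nav_host) as NavHostFragment).navController

        // Restore any previously saved navigation state before the graph is set.
        savedInstanceState?.getBundle("nav_state")?.let { navController.restoreState(it) }
        navController.setGraph(R.navigation.nav_graph)
    }

    override fun onSaveInstanceState(outState: Bundle) {
        super.onSaveInstanceState(outState)
        // saveState() returns the controller's back stack and navigator state as a Bundle.
        outState.putBundle("nav_state", navController.saveState())
    }
}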
Q: Need assistance with setting node affinity to a helm subchart
I'm trying to set node affinity for a subchart within a helm chart, and from what I understand I need to use the --set parameter to do this, but I'm struggling a bit with how to pass that on the CLI.
This is the equivalent node affinity I'm trying to set:
mariadb:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: arch
                operator: In
                values:
                  - x86_64

Trying to do this, but with the array declarations and such, it feels wrong (and doesn't do anything):
helm install gitea gitea-charts/gitea -f ./values.yaml --set 'memcached.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.key.arch=x86_64' --set 'mariadb.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.key.arch=x86_64'

A: My recommendation for doing something like this, especially when you have a mix of types, is not to use --set but to use a second values yaml -- or amend the existing values yaml if you have one. Overriding the affinity via set gets quite messy and isn't very readable. But if you use a separate values yaml, then instead of:
helm install gitea gitea-charts/gitea -f ./values.yaml --set 'memcached.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.key.arch=x86_64' --set 'mariadb.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.key.arch=x86_64'

You would just have
helm install gitea gitea-charts/gitea -f ./values.yaml

(if you amended the existing values.yaml)
or
helm install gitea gitea-charts/gitea -f ./values.yaml -f ./affinity.yaml

(if you chose to put the affinity in a separate yaml.)
Need assistance with setting node affinity to a helm subchart
I'm trying to set node affinity for a subchart within a helm chart, and from what I understand I need to use the --set parameter to do this, but I'm struggling a bit with how to pass that on the CLI.
This is the equivalent node affinity I'm trying to set:
mariadb:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: arch
                operator: In
                values:
                  - x86_64

Trying to do this, but with the array declarations and such, it feels wrong (and doesn't do anything):
helm install gitea gitea-charts/gitea -f ./values.yaml --set 'memcached.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.key.arch=x86_64' --set 'mariadb.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.key.arch=x86_64'
[ "My recommendation for doing something like this, especially when you have a mix of types, is not to use --set but to use a second values yaml -- or amend the existing values yaml if you have one. Overriding the affinity via set gets quite messy and isn't very readable. But if you use a separate values yaml, then instead of:\nhelm install gitea gitea-charts/gitea -f ./values.yaml --set 'memcached.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.key.arch=x86_64' --set 'mariadb.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions.key.arch=x86_64'\n\nYou would just have\nhelm install gitea gitea-charts/gitea -f ./values.yaml\n\n(if you amended the existing values.yaml)\nor\nhelm install gitea gitea-charts/gitea -f ./values.yaml -f ./affinity.yaml\n\n(if you chose to put the affinity in a separate yaml.)\n" ]
[ 0 ]
[]
[]
[ "helm3", "kubernetes", "kubernetes_helm" ]
stackoverflow_0074678474_helm3_kubernetes_kubernetes_helm.txt
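As a hedged illustration of the separate values file suggested in the answer above, the affinity.yaml passed via the extra -f flag could look like the following. The mariadb and memcached top-level keys are assumptions carried over from the question's --set attempts, and they only take effect if those subcharts actually expose an affinity value.

# affinity.yaml - hypothetical override file; adjust the top-level keys to
# match the subchart names that actually expose an `affinity` value.
mariadb:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: arch
                operator: In
                values:
                  - x86_64
memcached:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: arch
                operator: In
                values:
                  - x86_64

If --set has to be used after all, Helm addresses list elements by index (for example nodeSelectorTerms[0].matchExpressions[0].key=arch and ...values[0]=x86_64), which is likely part of why the flattened, index-free flags in the question had no effect.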