content: stringlengths 86 to 88.9k
title: stringlengths 0 to 150
question: stringlengths 1 to 35.8k
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: stringlengths 30 to 130
Q: I can access API data until I refresh the page, then it becomes inaccessible When trying to access my api data with this code: const lastActivity = data[data.length - 1]; console.log(lastActivity); const lastActivityTime = lastActivity.time; console.log(lastActivityTime); It works initially, then when I refresh the page I get this error: Uncaught TypeError: Cannot read properties of undefined (reading xxx) I've tried throwing that code into different areas of the useEffect block as well and it did not work. here is the code for the component import { React, useState, useEffect } from 'react'; import { Typography } from '@mui/material'; import { fToNow } from '../../../utils/formatTime'; export default function TimeSince() { const [data, setData] = useState([]); useEffect(() => { fetch(`http://localhost:3000/record`) .then((response) => response.json()) .then((usefulData) => { console.log(usefulData); setData(usefulData); }) .catch((e) => { console.error(`An error occurred: ${e}`); }); }, []); console.log(data); const lastActivity = data[data.length - 1]; console.log(lastActivity); const lastActivityTime = lastActivity.time; console.log(lastActivityTime); return ( <Typography variant="h4" sx={{ color: 'text.secondary' }}> test </Typography> ); } api code: [{"_id":"6383a78ef07f0c12aac4521f","date":"2022-11-28","time":"03:08","activity":"Walk"}, I was assuming it was an issue with the value not being accessible when the page renders? But I threw it into a then block after it gets the values and still doesnt work. A: It looks like the error is thrown when data is undefined. This can happen if the data hasn't been loaded yet from the API. In this case, you can avoid the error by checking if data is defined before accessing its length property and the last element in the array. Here is an example of how you can modify your code to avoid the error: import { React, useState, useEffect } from 'react'; import { Typography } from '@mui/material'; import { fToNow } from '../../../utils/formatTime'; export default function TimeSince() { const [data, setData] = useState([]); useEffect(() => { fetch(`http://localhost:3000/record`) .then((response) => response.json()) .then((usefulData) => { console.log(usefulData); setData(usefulData); }) .catch((e) => { console.error(`An error occurred: ${e}`); }); }, []); console.log(data); // Check if data is defined before accessing its properties if (data && data.length) { const lastActivity = data[data.length - 1]; console.log(lastActivity); const lastActivityTime = lastActivity.time; console.log(lastActivityTime); } return ( <Typography variant="h4" sx={{ color: 'text.secondary' }}> test </Typography> ); } This code will check if data is defined and has at least one element before trying to access its properties. If data is not defined or is an empty array, the code will not throw an error and will simply not access the properties of data.
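The guard in the answer works because the last element, not data itself, is undefined while the list is still empty. A shorter variant of the same guard, assuming the project targets a JavaScript level with optional chaining; this is only an illustrative alternative, not code from the original answer:
// data starts as an empty array, so the last element is undefined on the first render.
// Optional chaining returns undefined instead of throwing while the fetch is pending.
const lastActivity = data[data.length - 1];   // undefined until the API responds
const lastActivityTime = lastActivity?.time;  // no TypeError, just undefined
if (lastActivityTime) {
  console.log(lastActivityTime);
}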
I can access API data until I refresh the page, then it becomes inaccessible
When trying to access my api data with this code: const lastActivity = data[data.length - 1]; console.log(lastActivity); const lastActivityTime = lastActivity.time; console.log(lastActivityTime); It works initially, then when I refresh the page I get this error: Uncaught TypeError: Cannot read properties of undefined (reading xxx) I've tried throwing that code into different areas of the useEffect block as well and it did not work. here is the code for the component import { React, useState, useEffect } from 'react'; import { Typography } from '@mui/material'; import { fToNow } from '../../../utils/formatTime'; export default function TimeSince() { const [data, setData] = useState([]); useEffect(() => { fetch(`http://localhost:3000/record`) .then((response) => response.json()) .then((usefulData) => { console.log(usefulData); setData(usefulData); }) .catch((e) => { console.error(`An error occurred: ${e}`); }); }, []); console.log(data); const lastActivity = data[data.length - 1]; console.log(lastActivity); const lastActivityTime = lastActivity.time; console.log(lastActivityTime); return ( <Typography variant="h4" sx={{ color: 'text.secondary' }}> test </Typography> ); } api code: [{"_id":"6383a78ef07f0c12aac4521f","date":"2022-11-28","time":"03:08","activity":"Walk"}, I was assuming it was an issue with the value not being accessible when the page renders? But I threw it into a then block after it gets the values and still doesnt work.
[ "It looks like the error is thrown when data is undefined. This can happen if the data hasn't been loaded yet from the API. In this case, you can avoid the error by checking if data is defined before accessing its length property and the last element in the array.\nHere is an example of how you can modify your code to avoid the error:\nimport { React, useState, useEffect } from 'react';\nimport { Typography } from '@mui/material';\nimport { fToNow } from '../../../utils/formatTime';\n\nexport default function TimeSince() {\n const [data, setData] = useState([]);\n useEffect(() => {\n fetch(`http://localhost:3000/record`)\n .then((response) => response.json())\n .then((usefulData) => {\n console.log(usefulData);\n setData(usefulData);\n })\n .catch((e) => {\n console.error(`An error occurred: ${e}`);\n });\n }, []);\n console.log(data);\n\n // Check if data is defined before accessing its properties\n if (data && data.length) {\n const lastActivity = data[data.length - 1];\n console.log(lastActivity);\n const lastActivityTime = lastActivity.time;\n console.log(lastActivityTime);\n }\n\n return (\n <Typography variant=\"h4\" sx={{ color: 'text.secondary' }}>\n test\n </Typography>\n );\n}\n\nThis code will check if data is defined and has at least one element before trying to access its properties. If data is not defined or is an empty array, the code will not throw an error and will simply not access the properties of data.\n" ]
[ 0 ]
[]
[]
[ "api", "backend", "javascript", "reactjs" ]
stackoverflow_0074680895_api_backend_javascript_reactjs.txt
Q: How to send audio file from front end to django backend through websockets for processing at the backend I have made a website in django that uses websockets to perform certain tasks as follows : From frontend I want to take audio using MediaRecorder as input api through javascript I want to send this audio back to the backend for processing and that processed data is again send back through the connection in realtime. I have tried various things for the purpose but haven't been successful yet. The bytes data that I've been receiving at the backend when I'm converting that into audio then I'm getting the length and size of the audio file but not getting the content of the file. Means the whole audio is silent but the same audio which I'm listening in the frontend is having some sound. but I don't know what's happening at the backend with the file. Consumer.py : import json from channels.generic.websocket import AsyncWebsocketConsumer class ChatConsumer(AsyncWebsocketConsumer): async def connect(self): self.room_name = self.scope["url_route"]["kwargs"]["username"] self.room_group_name = "realtime_%s" % self.room_name self.channel_layer.group_add( self.room_group_name, self.channel_name) await self.accept() print("Connected") async def disconnect(self , close_code): await self.channel_layer.group_discard( self.roomGroupName , self.channel_layer ) print("Disconnected") async def receive(self, bytes_data): with open('myfile.wav', mode='bx') as f: f.write(bytes_data) audio.js : <script> navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => { if (!MediaRecorder.isTypeSupported('audio/webm')) return alert('Browser not supported') const mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm', }) const socket = new WebSocket('ws://localhost:8000/audio') socket.onopen = () => { document.querySelector('#status').textContent = 'Connected' console.log({ event: 'onopen' }) mediaRecorder.addEventListener('dataavailable', async (event) => { if (event.data.size > 0 && socket.readyState == 1) { socket.send(event.data) } }) mediaRecorder.start(250) } socket.onmessage = (message) => { const received = message.data if (received) { console.log(received) } } socket.onclose = () => { console.log({ event: 'onclose' }) } socket.onerror = (error) => { console.log({ event: 'onerror', error }) } }) </script> A: It looks like the issue is with the bytes_data that you are receiving in the receive method of your ChatConsumer class. The MediaRecorder.ondataavailable event returns a Blob object containing the recorded audio data, which needs to be converted to a bytes object before it can be written to a file. You can convert the Blob object to bytes using the FileReader API in JavaScript, as follows: mediaRecorder.addEventListener('dataavailable', async (event) => { if (event.data.size > 0 && socket.readyState == 1) { const reader = new FileReader(); reader.readAsArrayBuffer(event.data); reader.onloadend = () => { const bytes = reader.result; socket.send(bytes); } } }); This converts the Blob object to an ArrayBuffer, which is then converted to a bytes object using the onloadend event of the FileReader. You can then write this bytes object to a file on the server using the receive method in your ChatConsumer class, as follows: async def receive(self, bytes_data): with open('myfile.wav', mode='bx') as f: f.write(bytes_data) You may also need to specify the correct MIME type (e.g. 'audio/wav') when writing the file, so that it is recognized as a valid audio file by the server.
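A hedged server-side sketch to complement the answer, assuming the browser keeps sending the MediaRecorder's webm chunks as in the question; the file name and append mode are illustrative choices, not part of the original answer. Note that open mode 'bx' creates the file exclusively, so it raises an error on the second chunk, and the chunks are webm/opus fragments rather than WAV data:
from channels.generic.websocket import AsyncWebsocketConsumer

class AudioConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        # MediaRecorder with mimeType 'audio/webm' emits webm fragments,
        # so append them to one .webm file instead of creating a new .wav per chunk.
        if bytes_data:
            with open("recording.webm", "ab") as f:  # 'ab' appends binary data
                f.write(bytes_data)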
How to send audio file from front end to django backend through websockets for processing at the backend
I have made a website in django that uses websockets to perform certain tasks as follows : From frontend I want to take audio using MediaRecorder as input api through javascript I want to send this audio back to the backend for processing and that processed data is again send back through the connection in realtime. I have tried various things for the purpose but haven't been successful yet. The bytes data that I've been receiving at the backend when I'm converting that into audio then I'm getting the length and size of the audio file but not getting the content of the file. Means the whole audio is silent but the same audio which I'm listening in the frontend is having some sound. but I don't know what's happening at the backend with the file. Consumer.py : import json from channels.generic.websocket import AsyncWebsocketConsumer class ChatConsumer(AsyncWebsocketConsumer): async def connect(self): self.room_name = self.scope["url_route"]["kwargs"]["username"] self.room_group_name = "realtime_%s" % self.room_name self.channel_layer.group_add( self.room_group_name, self.channel_name) await self.accept() print("Connected") async def disconnect(self , close_code): await self.channel_layer.group_discard( self.roomGroupName , self.channel_layer ) print("Disconnected") async def receive(self, bytes_data): with open('myfile.wav', mode='bx') as f: f.write(bytes_data) audio.js : <script> navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => { if (!MediaRecorder.isTypeSupported('audio/webm')) return alert('Browser not supported') const mediaRecorder = new MediaRecorder(stream, { mimeType: 'audio/webm', }) const socket = new WebSocket('ws://localhost:8000/audio') socket.onopen = () => { document.querySelector('#status').textContent = 'Connected' console.log({ event: 'onopen' }) mediaRecorder.addEventListener('dataavailable', async (event) => { if (event.data.size > 0 && socket.readyState == 1) { socket.send(event.data) } }) mediaRecorder.start(250) } socket.onmessage = (message) => { const received = message.data if (received) { console.log(received) } } socket.onclose = () => { console.log({ event: 'onclose' }) } socket.onerror = (error) => { console.log({ event: 'onerror', error }) } }) </script>
[ "It looks like the issue is with the bytes_data that you are receiving in the receive method of your ChatConsumer class. The MediaRecorder.ondataavailable event returns a Blob object containing the recorded audio data, which needs to be converted to a bytes object before it can be written to a file.\nYou can convert the Blob object to bytes using the FileReader API in JavaScript, as follows:\nmediaRecorder.addEventListener('dataavailable', async (event) => {\n if (event.data.size > 0 && socket.readyState == 1) {\n const reader = new FileReader();\n reader.readAsArrayBuffer(event.data);\n reader.onloadend = () => {\n const bytes = reader.result;\n socket.send(bytes);\n }\n }\n});\n\nThis converts the Blob object to an ArrayBuffer, which is then converted to a bytes object using the onloadend event of the FileReader. You can then write this bytes object to a file on the server using the receive method in your ChatConsumer class, as follows:\nasync def receive(self, bytes_data):\n with open('myfile.wav', mode='bx') as f:\n f.write(bytes_data)\n\nYou may also need to specify the correct MIME type (e.g. 'audio/wav') when writing the file, so that it is recognized as a valid audio file by the server.\n" ]
[ 0 ]
[]
[]
[ "django", "javascript", "stream", "web_mediarecorder", "websocket" ]
stackoverflow_0074680879_django_javascript_stream_web_mediarecorder_websocket.txt
Q: How to make zsh keybind between emacs mode and vi mode? I want to bind a key to toggle emacs mode and vi mode,which I use oh-my-zsh plugins(vi-mode). I tried Is there a way to switch Bash or zsh from Emacs mode to vi mode with a keystroke? I also try to bindkey like bindkey '^[e' 'set -o emacs' bindkey '^[v' 'set -o vi' But it's not work for me. Does any way to toggle vi/emacs or keybind to set keymap? Thanks a lot ! A: bindkey is used for binding keys to ZLE widgets not any random command. So what you have guessed at is not going to work. You could write a custom ZLE widget to switch keymaps: select-emacs() { set -o emacs } zle -N select-emacs bindkey '^[e' select-emacs In practical terms, I wouldn't recommend this. If you want a hybrid approach, it is better to select emacs mode but bind a key to vi-cmd-mode. In fact Ctrl-X,Ctrl-V is bound to this by default. You might even bind the escape key to vi-cmd-mode - where emacs key sequences involve an initial escape press, that can mostly be replaced by Alt. If you're used to typing it with the actual escape key, you may be able to replace it by a custom widget in vi command mode. A: I finally "found out" how to toggle vi and emacs mode with a singel key, e.g. [alt]+[i] in zsh # in the .zshrc # toggle vi and emacs mode vi-mode() { set -o vi; } emacs-mode() { set -o emacs; } zle -N vi-mode zle -N emacs-mode bindkey '\ei' vi-mode # switch to vi "insert" mode bindkey -M viins 'jk' vi-cmd-mode # (optionally) switch to vi "cmd" mode bindkey -M viins '\ei' emacs-mode # switch to emacs mode now you can toggle from emacs-mode to vi-mode and from vi-mode (both insert or normal mode) to emacs-mode
How to make zsh keybind between emacs mode and vi mode?
I want to bind a key to toggle emacs mode and vi mode,which I use oh-my-zsh plugins(vi-mode). I tried Is there a way to switch Bash or zsh from Emacs mode to vi mode with a keystroke? I also try to bindkey like bindkey '^[e' 'set -o emacs' bindkey '^[v' 'set -o vi' But it's not work for me. Does any way to toggle vi/emacs or keybind to set keymap? Thanks a lot !
[ "bindkey is used for binding keys to ZLE widgets not any random command. So what you have guessed at is not going to work. You could write a custom ZLE widget to switch keymaps:\nselect-emacs() { set -o emacs }\nzle -N select-emacs\nbindkey '^[e' select-emacs\n\nIn practical terms, I wouldn't recommend this. If you want a hybrid approach, it is better to select emacs mode but bind a key to vi-cmd-mode. In fact Ctrl-X,Ctrl-V is bound to this by default. You might even bind the escape key to vi-cmd-mode - where emacs key sequences involve an initial escape press, that can mostly be replaced by Alt. If you're used to typing it with the actual escape key, you may be able to replace it by a custom widget in vi command mode.\n", "I finally \"found out\" how to toggle vi and emacs mode with a singel key, e.g. [alt]+[i] in zsh\n# in the .zshrc\n# toggle vi and emacs mode\nvi-mode() { set -o vi; }\nemacs-mode() { set -o emacs; }\nzle -N vi-mode\nzle -N emacs-mode\nbindkey '\\ei' vi-mode # switch to vi \"insert\" mode\nbindkey -M viins 'jk' vi-cmd-mode # (optionally) switch to vi \"cmd\" mode\nbindkey -M viins '\\ei' emacs-mode # switch to emacs mode\n\nnow you can toggle from emacs-mode to vi-mode and from vi-mode (both insert or normal mode) to emacs-mode\n" ]
[ 3, 0 ]
[]
[]
[ "oh_my_zsh", "zsh" ]
stackoverflow_0065606241_oh_my_zsh_zsh.txt
Q: how to fix 'pylint too many statements'? Is there a way to condense the following code to remove the pylint too many statements error. All this code is contained within a single function: - selection = input("Enter 1, 2, 3, 4 or 5 to start: \n") if selection == "1": print("\nYou have selected "+"'"+question_data[0][0]+"', let's begin!") for key in science: print("--------------------------------------------------------\n") print(key) for i in science_choices[num_question-1]: print("") print(i) choice = input("Enter you answer (A, B or C): \n").upper() answers.append(choice) correct_answers += check_correct_answer(science.get(key), choice) num_question += 1 player_score(correct_answers, answers) elif selection == "2": ................................................... elif selection == "3": ................................................... elif selection == "4": ................................................... elif selection == "5": ................................................... else: print("\nYou entered an incorrect value.Please try again.\n") start_new_quiz() the program works as is but it's for a college assignment and I would prefer not to submit with pylint errors. A: One obvious way is to create a function for each selection: selection = input("Enter 1, 2, 3, 4 or 5 to start: \n") if selection == "1": treat_one() elif selection == "2": ................................................... elif selection == "3": ................................................... elif selection == "4": ................................................... elif selection == "5": ................................................... else: print("\nYou entered an incorrect value.Please try again.\n") start_new_quiz() def treat_one(): print("\nYou have selected "+"'"+question_data[0][0]+"', let's begin!") for key in science: print("--------------------------------------------------------\n") print(key) for i in science_choices[num_question-1]: print("") print(i) choice = input("Enter you answer (A, B or C): \n").upper() answers.append(choice) correct_answers += check_correct_answer(science.get(key), choice) num_question += 1 player_score(correct_answers, answers) You could also create a dictionary: functions_to_use = { "1": treat_one, ... } function = functions_to_use.get(selection) if function is None: print("\nYou entered an incorrect value.Please try again.\n") else: function() Of if you want to over-engineer it, use getattr: function = getattr("treat_{selection}", error) function() def error(): print("\nYou entered an incorrect value.Please try again.\n")
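The getattr idea at the end of the answer needs a lookup target and an f-string to actually run; here is a hedged, self-contained version of that dispatch (the function names are made up for illustration):
import sys

def treat_one():
    print("option one")  # placeholder body

def error():
    print("\nYou entered an incorrect value.Please try again.\n")

def handle(selection):
    # Look up a function named treat_<selection> in this module, fall back to error().
    function = getattr(sys.modules[__name__], f"treat_{selection}", error)
    function()

handle("1")  # calls treat_one
handle("9")  # calls error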
how to fix 'pylint too many statements'?
Is there a way to condense the following code to remove the pylint too many statements error. All this code is contained within a single function: - selection = input("Enter 1, 2, 3, 4 or 5 to start: \n") if selection == "1": print("\nYou have selected "+"'"+question_data[0][0]+"', let's begin!") for key in science: print("--------------------------------------------------------\n") print(key) for i in science_choices[num_question-1]: print("") print(i) choice = input("Enter you answer (A, B or C): \n").upper() answers.append(choice) correct_answers += check_correct_answer(science.get(key), choice) num_question += 1 player_score(correct_answers, answers) elif selection == "2": ................................................... elif selection == "3": ................................................... elif selection == "4": ................................................... elif selection == "5": ................................................... else: print("\nYou entered an incorrect value.Please try again.\n") start_new_quiz() the program works as is but it's for a college assignment and I would prefer not to submit with pylint errors.
[ "One obvious way is to create a function for each selection:\nselection = input(\"Enter 1, 2, 3, 4 or 5 to start: \\n\")\nif selection == \"1\":\n treat_one()\nelif selection == \"2\":\n ...................................................\nelif selection == \"3\":\n ...................................................\nelif selection == \"4\":\n ...................................................\nelif selection == \"5\":\n ...................................................\nelse:\n print(\"\\nYou entered an incorrect value.Please try again.\\n\")\n start_new_quiz()\n\n\ndef treat_one():\n print(\"\\nYou have selected \"+\"'\"+question_data[0][0]+\"', let's begin!\")\n for key in science:\n print(\"--------------------------------------------------------\\n\")\n print(key)\n for i in science_choices[num_question-1]:\n print(\"\")\n print(i)\n\n choice = input(\"Enter you answer (A, B or C): \\n\").upper()\n answers.append(choice)\n\n correct_answers += check_correct_answer(science.get(key), choice)\n num_question += 1\n player_score(correct_answers, answers)\n\nYou could also create a dictionary:\nfunctions_to_use = {\n \"1\": treat_one,\n ...\n}\nfunction = functions_to_use.get(selection)\nif function is None:\n print(\"\\nYou entered an incorrect value.Please try again.\\n\")\nelse:\n function()\n\nOf if you want to over-engineer it, use getattr:\nfunction = getattr(\"treat_{selection}\", error)\nfunction()\n\ndef error():\n print(\"\\nYou entered an incorrect value.Please try again.\\n\")\n\n" ]
[ 0 ]
[]
[]
[ "pylint" ]
stackoverflow_0074679030_pylint.txt
Q: is there a way to grant permissions in redshift so that other users of the group can run dbt models? We have a group of users that have access to our data warehouse dev environment and I am trying to grant this group access to modify and/or locally run the dbt models I have made. I tried using a post-hook to grant all users of the schema access to the schema, but users of the group are still getting a permission denied message whenever they try to execute a dbt run command from their terminal on any of my models in this schema. post-hook: - "grant usage on schema {{ this.schema }} to group data_team" - "grant select on {{ this }} to group data_team" Ideally, all users in the data_team group should be able to (locally) overwrite models created by other users that they have fetched from the git repo storing our dbt models files. A: You need to probably first check if the dbt user has permissions to grant access to all users of a particular schema. Additionally, you will also need to grant them CREATE along with USAGE on a particular schema
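A sketch of the extra grants the answer describes, using the group name from the question; the schema name is a placeholder and the exact statements depend on your Redshift setup:
-- Let the group use and create objects in the schema (run as a user allowed to grant):
grant usage, create on schema my_schema to group data_team;
-- Let the group read the existing relations in the schema:
grant select on all tables in schema my_schema to group data_team;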
is there a way to grant permissions in redshift so that other users of the group can run dbt models?
We have a group of users that have access to our data warehouse dev environment and I am trying to grant this group access to modify and/or locally run the dbt models I have made. I tried using a post-hook to grant all users of the schema access to the schema, but users of the group are still getting a permission denied message whenever they try to execute a dbt run command from their terminal on any of my models in this schema. post-hook: - "grant usage on schema {{ this.schema }} to group data_team" - "grant select on {{ this }} to group data_team" Ideally, all users in the data_team group should be able to (locally) overwrite models created by other users that they have fetched from the git repo storing our dbt models files.
[ "You need to probably first check if the dbt user has permissions to grant access to all users of a particular schema. Additionally, you will also need to grant them CREATE along with USAGE on a particular schema\n" ]
[ 0 ]
[]
[]
[ "amazon_redshift", "amazon_web_services", "dbt" ]
stackoverflow_0074658656_amazon_redshift_amazon_web_services_dbt.txt
Q: Convert typo3 internal url to external url I've been looking for the way to convert the typo3 internal urls whose format is t3://page?uid=xx to a human readable url http://mydomain/page-slug. I'd want to do the same thing that the ViewHelper link.typolink does but in the php context (outside of the Extbase context, I'm not in a controller). The context is that I am gettind a link from a db to send it to an external API. I tried to look for documentation but all helpers are about generating an url from controller name and action, etc... Can somebody please explain what is the proper way to do it? The typo3 version I use is the 11.5. Thanks for your answers. A: Actually a content was needed but I had no clue which value to set in it and... I found it, in the cObject, there is the method typoLink_URL($conf) That takes the same parameter as stdWrap_typolink, calls the typolink method as well but with the $content variable already set which in this case is a pipe |. So you only need to call it so: $this->cObj->typoLink_URL(['parameter' => $typoLink]); And you have your url. Thank you for your help Julian.
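A hedged sketch of the same call from plain PHP code where no $this->cObj is available (for example a service or command); the class names are TYPO3 core classes, the page uid is illustrative, and depending on the context a frontend request may still be needed for site-based URL generation:
use TYPO3\CMS\Core\Utility\GeneralUtility;
use TYPO3\CMS\Frontend\ContentObject\ContentObjectRenderer;

// Build a ContentObjectRenderer manually, then resolve the t3:// link to a URL.
$cObj = GeneralUtility::makeInstance(ContentObjectRenderer::class);
$url = $cObj->typoLink_URL(['parameter' => 't3://page?uid=42']);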
Convert typo3 internal url to external url
I've been looking for the way to convert the typo3 internal urls whose format is t3://page?uid=xx to a human readable url http://mydomain/page-slug. I'd want to do the same thing that the ViewHelper link.typolink does but in the php context (outside of the Extbase context, I'm not in a controller). The context is that I am gettind a link from a db to send it to an external API. I tried to look for documentation but all helpers are about generating an url from controller name and action, etc... Can somebody please explain what is the proper way to do it? The typo3 version I use is the 11.5. Thanks for your answers.
[ "Actually a content was needed but I had no clue which value to set in it and...\nI found it, in the cObject, there is the method\ntypoLink_URL($conf)\n\nThat takes the same parameter as stdWrap_typolink, calls the typolink method as well but with the $content variable already set which in this case is a pipe |.\nSo you only need to call it so:\n$this->cObj->typoLink_URL(['parameter' => $typoLink]);\n\nAnd you have your url.\nThank you for your help Julian.\n" ]
[ 0 ]
[]
[]
[ "external_url", "typo3", "typolink" ]
stackoverflow_0074671129_external_url_typo3_typolink.txt
Q: How to use @next/mdx with NextJS 13 app directory? With the new app directory, all route directories must have a page.js, page.jsx or a page.tsx file to be visible publicly (eg: mywebsite.com/about requires a file app/about/page.js). But when I try with MDX file app/about/page.mdx, and use nextMDX @next/mdx, I got a 404 not found. Here is my next.config.mjs configuration file: import nextMDX from "@next/mdx"; import remarkFrontmatter from "remark-frontmatter"; import rehypeHighlight from "rehype-highlight"; const withMDX = nextMDX({ extension: /\.(md|mdx)$/, options: { remarkPlugins: [remarkFrontmatter], rehypePlugins: [rehypeHighlight], }, }); const nextConfig = { experimental: { appDir: true, } }; export default withMDX({ ...nextConfig, pageExtensions: ["js", "jsx", "ts", "tsx", "md", "mdx"], }); Thanks for any response A: // To fix this, you have a couple of options: // Use a page.js, page.jsx, or page.tsx file for your page, and import and render your .mdx file within that page file. // Don't use the appDir feature, and instead structure your pages in the traditional way (i.e., pages/about.mdx instead of app/about/page.mdx). // Option 1 would involve changing your page.mdx file to a page.js, page.jsx, or page.tsx file, and importing and rendering your .mdx file within that file. Here's an example of what that might look like: // app/about/page.jsx import MyAboutMDX from './page.mdx'; function AboutPage() { return <MyAboutMDX />; } export default AboutPage; // Option 2 would involve changing your file structure to the traditional Next.js structure, where pages are placed in the pages directory at the root of your project. In this case, you would move your page.mdx file to pages/about.mdx, and update your import paths accordingly. // I hope this helps! Let me know if you have any other questions. A: You need to specify the pageExtensions option in your next.config.mjs file. This option tells Next.js which file extensions to consider as pages when it builds your application. In your case, you need to include "mdx" in the list of page extensions to be able to use MDX files as pages. Here is an example of how you can modify your next.config.mjs file to include "mdx" as a page extension: import nextMDX from "@next/mdx"; import remarkFrontmatter from "remark-frontmatter"; import rehypeHighlight from "rehype-highlight"; const withMDX = nextMDX({ extension: /\.(md|mdx)$/, options: { remarkPlugins: [remarkFrontmatter], rehypePlugins: [rehypeHighlight], }, }); const nextConfig = { experimental: { appDir: true, } }; // Include "mdx" as a page extension export default withMDX({ ...nextConfig, pageExtensions: ["js", "jsx", "ts", "tsx", "md", "mdx"], }); With this change, your Next.js application will be able to use MDX files as pages. Note that you will still need to create a page.mdx file in each directory that you want to be visible publicly (e.g. app/about/page.mdx for the /about route).
How to use @next/mdx with NextJS 13 app directory?
With the new app directory, all route directories must have a page.js, page.jsx or a page.tsx file to be visible publicly (eg: mywebsite.com/about requires a file app/about/page.js). But when I try with MDX file app/about/page.mdx, and use nextMDX @next/mdx, I got a 404 not found. Here is my next.config.mjs configuration file: import nextMDX from "@next/mdx"; import remarkFrontmatter from "remark-frontmatter"; import rehypeHighlight from "rehype-highlight"; const withMDX = nextMDX({ extension: /\.(md|mdx)$/, options: { remarkPlugins: [remarkFrontmatter], rehypePlugins: [rehypeHighlight], }, }); const nextConfig = { experimental: { appDir: true, } }; export default withMDX({ ...nextConfig, pageExtensions: ["js", "jsx", "ts", "tsx", "md", "mdx"], }); Thanks for any response
[ "// To fix this, you have a couple of options:\n\n// Use a page.js, page.jsx, or page.tsx file for your page, and import and render your .mdx file within that page file.\n// Don't use the appDir feature, and instead structure your pages in the traditional way (i.e., pages/about.mdx instead of app/about/page.mdx).\n// Option 1 would involve changing your page.mdx file to a page.js, page.jsx, or page.tsx file, and importing and rendering your .mdx file within that file. Here's an example of what that might look like:\n\n// app/about/page.jsx\nimport MyAboutMDX from './page.mdx';\n\nfunction AboutPage() {\n return <MyAboutMDX />;\n}\n\nexport default AboutPage;\n\n// Option 2 would involve changing your file structure to the traditional Next.js structure, where pages are placed in the pages directory at the root of your project. In this case, you would move your page.mdx file to pages/about.mdx, and update your import paths accordingly.\n\n// I hope this helps! Let me know if you have any other questions.\n\n", "You need to specify the pageExtensions option in your next.config.mjs file. This option tells Next.js which file extensions to consider as pages when it builds your application. In your case, you need to include \"mdx\" in the list of page extensions to be able to use MDX files as pages.\nHere is an example of how you can modify your next.config.mjs file to include \"mdx\" as a page extension:\nimport nextMDX from \"@next/mdx\";\nimport remarkFrontmatter from \"remark-frontmatter\";\nimport rehypeHighlight from \"rehype-highlight\";\n \nconst withMDX = nextMDX({\n extension: /\\.(md|mdx)$/,\n options: {\n remarkPlugins: [remarkFrontmatter],\n rehypePlugins: [rehypeHighlight],\n },\n});\n\nconst nextConfig = {\n experimental: {\n appDir: true,\n }\n};\n\n// Include \"mdx\" as a page extension\nexport default withMDX({\n ...nextConfig,\n pageExtensions: [\"js\", \"jsx\", \"ts\", \"tsx\", \"md\", \"mdx\"],\n});\n\nWith this change, your Next.js application will be able to use MDX files as pages. Note that you will still need to create a page.mdx file in each directory that you want to be visible publicly (e.g. app/about/page.mdx for the /about route).\n" ]
[ 0, 0 ]
[ "To use the @next/mdx package with the new app directory in Next.js, you need to add the .mdx file extension to the pageExtensions array in your next.config.mjs file.\nHere is an example of how you can modify your next.config.mjs file to include the .mdx file extension:\n import nextMDX from \"@next/mdx\";\nimport remarkFrontmatter from \"remark-frontmatter\";\nimport rehypeHighlight from \"rehype-highlight\";\n \nconst withMDX = nextMDX({\n extension: /\\.(md|mdx)$/,\n options: {\n remarkPlugins: [remarkFrontmatter],\n rehypePlugins: [rehypeHighlight],\n },\n});\n\nconst nextConfig = {\n experimental: {\n appDir: true,\n }\n};\n\nexport default withMDX({\n ...nextConfig,\n pageExtensions: [\"js\", \"jsx\", \"ts\", \"tsx\", \"md\", \"mdx\"], // include .mdx in pageExtensions\n});\n\nThis will allow Next.js to recognize and serve the MDX files in your app directory.\n", "It looks like the issue might be with your pageExtensions configuration. The pageExtensions property should be an array of file extensions that Next.js will recognize as pages. In your configuration, you have included \"md\" and \"mdx\", but you need to include \"mdx\" twice in order for Next.js to recognize .mdx files as pages.\nTry updating your configuration as follows:\nconst nextConfig = {\n experimental: {\n appDir: true,\n }\n};\n\nexport default withMDX({\n ...nextConfig,\n pageExtensions: [\"js\", \"jsx\", \"ts\", \"tsx\", \"mdx\", \"mdx\"],\n});\n\nThis should allow Next.js to recognize .mdx files as pages and prevent the 404 error you are seeing.\n", "It looks like you're using the appDir experimental feature in Next.js, which requires that page files be placed in directories that match their route. In order to use an MDX file for a route, you will need to create a directory for the route and place the MDX file inside that directory. For example, if you want to use an MDX file for the route mywebsite.com/about, you would need to create a directory called about in the app directory, and place the MDX file inside that directory as page.mdx.\nHere is an example of what your file structure might look like:\n- app\n - about\n - page.mdx\n - next.config.mjs\n\nYou can then import the MDX file in the page.js or page.jsx file for the route and use it to render the page.\nimport AboutPage from \"./page.mdx\";\n\nconst About = () => {\n return <AboutPage />;\n};\n\nexport default About;\n\n" ]
[ -1, -1, -1 ]
[ "mdxjs", "next.js", "reactjs" ]
stackoverflow_0074493702_mdxjs_next.js_reactjs.txt
Q: How to make my CustomButton stretch and shrink like a ElevatedButton Does someone know why my CustomButton keeps stretching while the ElevatedButton doesn't. It looks like both components are using similar constraints.. Elevated button: 64.0<=w<=Infinity, h=48.0 Custom button: 32.0<=w<=Infinity, h=32.0 Only the elevated button doesn't stretch when CrossAxisAlignment.stretch is disabled only my custom button does. I'm trying to get my custom button's width to stretch when CrossAxisAlignment.stretch is set and to shrink when it is not set. Example code: https://dartpad.dev/?id=1beeb8305313f7c52f29d337c4dca4a7 A: It is likely that the ElevatedButton widget is using a ConstrainedBox to set the constraints on its child, while your CustomButton widget is not. The ConstrainedBox widget enforces the given constraints on its child, so the child will not be able to exceed those constraints. To fix this, you can wrap your CustomButton widget in a ConstrainedBox and set the desired constraints on it. For example: ConstrainedBox( constraints: BoxConstraints(minWidth: 32.0, minHeight: 32.0), child: CustomButton(...), ) This should prevent the CustomButton from stretching beyond the given constraints.
How to make my CustomButton stretch and shrink like a ElevatedButton
Does someone know why my CustomButton keeps stretching while the ElevatedButton doesn't. It looks like both components are using similar constraints.. Elevated button: 64.0<=w<=Infinity, h=48.0 Custom button: 32.0<=w<=Infinity, h=32.0 Only the elevated button doesn't stretch when CrossAxisAlignment.stretch is disabled only my custom button does. I'm trying to get my custom button's width to stretch when CrossAxisAlignment.stretch is set and to shrink when it is not set. Example code: https://dartpad.dev/?id=1beeb8305313f7c52f29d337c4dca4a7
[ "It is likely that the ElevatedButton widget is using a ConstrainedBox to set the constraints on its child, while your CustomButton widget is not. The ConstrainedBox widget enforces the given constraints on its child, so the child will not be able to exceed those constraints.\nTo fix this, you can wrap your CustomButton widget in a ConstrainedBox and set the desired constraints on it. For example:\nConstrainedBox(\n constraints: BoxConstraints(minWidth: 32.0, minHeight: 32.0),\n child: CustomButton(...),\n)\n\nThis should prevent the CustomButton from stretching beyond the given constraints.\n" ]
[ 0 ]
[]
[]
[ "flutter" ]
stackoverflow_0074680819_flutter.txt
Q: Reverse words in a given String in Python3.8 using functions We are given a string and we need to reverse the words of the given string. How do I do that? I tried, but the compiler doesn't work properly; something seems wrong with the syntax. A: I don't know what you tried, but that works: s = "this is a string" rev = " ".join(s.split(" ")[::-1]) print(rev) output: string a is this
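Since the title asks for a function, a small hedged wrapper around the same idea:
def reverse_words(s: str) -> str:
    # Split on whitespace, reverse the word order, and join back with single spaces.
    return " ".join(s.split()[::-1])

print(reverse_words("this is a string"))  # -> "string a is this"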
Reverse words in a given String in Python3.8 using functions
We are given a string and we need to reverse the words of the given string. How do I do that? I tried, but the compiler doesn't work properly; something seems wrong with the syntax.
[ "I don't know what you tried, but that works:\ns = \"this is a string\"\nrev = \" \".join(s.split(\" \")[::-1])\nprint(rev)\n\noutput:\nstring a is this\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074680589_python.txt
Q: SVG Icons causing huge lag (Google Maps) i have a problem with google markers, they lag so much (replacing path with url to svg fixes the problem but rotation isnt working if i use url) is there any way to make it lag less, or make svg icon with url rotate like i have rotated the icon with path icon: { path: "M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z", fillColor: "#ffd400", fillOpacity: 1, strokeColor: "000", strokeOpacity: 0.4, scale: window.mobileCheck(), // icons needs to be bigger on mobile rotation: data["response"][i]["dir"], // directon of a plane anchor: new google.maps.Point(13, 13), }, Icon Url example icon: { // path: "M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z", // fillColor: "#ffd400", // fillOpacity: 1, // strokeColor: "000", // strokeOpacity: 0.4, url: "static/aircraft.svg", scale: window.mobileCheck(), // icons needs to be bigger on mobile rotation: data["response"][i]["dir"], // direction of plane (not working with url) anchor: new google.maps.Point(13, 13), }, example that represents the problem: for (let i = 0; i < 100; i++) { new google.maps.Marker({ position: { lat: Math.random() * 5 + 50, lng: Math.random() * 9 + 14, }, map, icon: { // with svg path: path: "M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z", fillColor: "#ffd400", fillOpacity: 1, strokeColor: "000", strokeOpacity: 0.4, // with icon: // url: "static/aircraft.svg", scale: 1, rotation: Math.floor(Math.random() * 360), // direction of plane anchor: new google.maps.Point(13, 13), }, }); } aircraft.svg: <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"> <path stroke="#000" stroke-opacity="0.3" fill="#ffd400" d="M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z"/> </svg> A: I found a way to fix this, so there is a variable with svg, and it works the same as with .svg file but i can edit style, and change its rotation - thats what i wanted to achieve with icon url. heres the code: let template = [ '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" style="transform: rotate(' + data["response"][i]["dir"] + 'deg">', '<path stroke="#000" stroke-opacity="0.3" fill="#ffd400" d="M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z"/>', "</svg>", ].join("\n"); let svg = template.replace("{{ color }}", "#800"); icon: { // path: "M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z", // fillColor: "#ffd400", // fillOpacity: 1, // strokeColor: "000", // strokeOpacity: 0.4, // url: "static/aircraft.svg", url: "data:image/svg+xml;charset=UTF-8," + encodeURIComponent(svg), scaledSize: new google.maps.Size( window.mobileCheck(), window.mobileCheck() ), // rotation: data["response"][i]["dir"], // direction of plane anchor: new google.maps.Point(24, 24), }, i used part of the code from this: question
SVG Icons causing huge lag (Google Maps)
i have a problem with google markers, they lag so much (replacing path with url to svg fixes the problem but rotation isnt working if i use url) is there any way to make it lag less, or make svg icon with url rotate like i have rotated the icon with path icon: { path: "M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z", fillColor: "#ffd400", fillOpacity: 1, strokeColor: "000", strokeOpacity: 0.4, scale: window.mobileCheck(), // icons needs to be bigger on mobile rotation: data["response"][i]["dir"], // directon of a plane anchor: new google.maps.Point(13, 13), }, Icon Url example icon: { // path: "M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z", // fillColor: "#ffd400", // fillOpacity: 1, // strokeColor: "000", // strokeOpacity: 0.4, url: "static/aircraft.svg", scale: window.mobileCheck(), // icons needs to be bigger on mobile rotation: data["response"][i]["dir"], // direction of plane (not working with url) anchor: new google.maps.Point(13, 13), }, example that represents the problem: for (let i = 0; i < 100; i++) { new google.maps.Marker({ position: { lat: Math.random() * 5 + 50, lng: Math.random() * 9 + 14, }, map, icon: { // with svg path: path: "M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z", fillColor: "#ffd400", fillOpacity: 1, strokeColor: "000", strokeOpacity: 0.4, // with icon: // url: "static/aircraft.svg", scale: 1, rotation: Math.floor(Math.random() * 360), // direction of plane anchor: new google.maps.Point(13, 13), }, }); } aircraft.svg: <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"> <path stroke="#000" stroke-opacity="0.3" fill="#ffd400" d="M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z"/> </svg>
[ "I found a way to fix this,\nso there is a variable with svg, and it works the same as with .svg file but i can edit style, and change its rotation - thats what i wanted to achieve with icon url.\nheres the code:\nlet template = [\n '<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"24\" height=\"24\" preserveAspectRatio=\"xMidYMid meet\" viewBox=\"0 0 24 24\" style=\"transform: rotate(' +\n data[\"response\"][i][\"dir\"] +\n 'deg\">',\n '<path stroke=\"#000\" stroke-opacity=\"0.3\" fill=\"#ffd400\" d=\"M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z\"/>',\n \"</svg>\",\n].join(\"\\n\");\nlet svg = template.replace(\"{{ color }}\", \"#800\");\n\n\nicon: {\n // path: \"M14 8.947L22 14v2l-8-2.526v5.36l3 1.666V22l-4.5-1L8 22v-1.5l3-1.667v-5.36L3 16v-2l8-5.053V3.5a1.5 1.5 0 0 1 3 0v5.447z\",\n // fillColor: \"#ffd400\",\n // fillOpacity: 1,\n // strokeColor: \"000\",\n // strokeOpacity: 0.4,\n // url: \"static/aircraft.svg\",\n url:\n \"data:image/svg+xml;charset=UTF-8,\" + encodeURIComponent(svg),\n scaledSize: new google.maps.Size(\n window.mobileCheck(),\n window.mobileCheck()\n ),\n // rotation: data[\"response\"][i][\"dir\"], // direction of plane\n anchor: new google.maps.Point(24, 24),\n},\n\ni used part of the code from this: question\n" ]
[ 0 ]
[]
[]
[ "google_maps", "google_maps_markers", "javascript" ]
stackoverflow_0074677366_google_maps_google_maps_markers_javascript.txt
Q: What does the error "Le.Subject is not a constructor" refer to? I hosted a website using Firebase and Angular Material but when testing the various screens I noticed that one is not working. That screen is a calendar using the Angular Material's schedule component. A: The error "Le.Subject is not a constructor" suggests that there is a problem with the way the Angular Material schedule component is being used in your website. This error usually occurs when the code that is trying to create a new instance of the Le.Subject class is not referencing the class correctly, or if the class itself is not implemented properly.
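The answer above is prose only. As a purely illustrative starting point, and assuming the minified name Le maps to the RxJS module (a common source of "X.Subject is not a constructor" in production Angular builds), two checks that rely only on standard tooling:
npm ls rxjs   # look for duplicate or mismatched rxjs versions pulled in by dependencies

// a Subject used by the calendar component would normally be constructed like this:
import { Subject } from 'rxjs';
const refresh$ = new Subject<void>();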
What does the error "Le.Subject is not a constructor" refer to?
I hosted a website using Firebase and Angular Material but when testing the various screens I noticed that one is not working. That screen is a calendar using the Angular Material's schedule component.
[ "The error \"Le.Subject is not a constructor\" suggests that there is a problem with the way the Angular Material schedule component is being used in your website. This error usually occurs when the code that is trying to create a new instance of the Le.Subject class is not referencing the class correctly, or if the class itself is not implemented properly.\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_material", "firebase", "firebase_hosting" ]
stackoverflow_0074680882_angular_angular_material_firebase_firebase_hosting.txt
Q: Sender address rejected: not owned by user (Laravel) I have faced a problem while working on a contact us form; I am using Hostinger. This is the action code: $request->validate([ 'fullname' => ['required', 'string', 'min:5'], 'email' => ['required', 'email'], 'subject' => ['required', 'string', 'min:5'], 'message' => ['required', 'string', 'min:20'] ]); Mail::to($request->email)->send(new ContactUs($request->fullname, $request->email, $request->subject, $request->message)); return redirect()->back(); .env MAIL_MAILER=smtp MAIL_HOST=smtp.hostinger.com MAIL_PORT=465 MAIL_USERNAME=support@me.com MAIL_PASSWORD=xxxxxx MAIL_ENCRYPTION=tls MAIL_FROM_ADDRESS=support@me.com MAIL_FROM_NAME="${APP_NAME}" I hope to find the answer.
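No accepted fix is recorded for this entry. One common, hedged approach is to keep the SMTP account's own address as the sender and attach the visitor's address as reply-to, so the provider does not reject the envelope; the mailable property names and view name below follow the question's ContactUs class and are assumptions:
// Controller: send the notification to your own mailbox, not to the visitor.
Mail::to('support@me.com')->send(
    new ContactUs($request->fullname, $request->email, $request->subject, $request->message)
);

// app/Mail/ContactUs.php (classic build() style):
public function build()
{
    return $this->from(config('mail.from.address'), $this->fullname)
                ->replyTo($this->email, $this->fullname)  // visitor goes here, not in "from"
                ->subject($this->subject)
                ->view('emails.contact');
}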
Sender address rejected: not owned by user (Laravel)
I have faced a problem while working on a contact us form; I am using Hostinger. This is the action code: $request->validate([ 'fullname' => ['required', 'string', 'min:5'], 'email' => ['required', 'email'], 'subject' => ['required', 'string', 'min:5'], 'message' => ['required', 'string', 'min:20'] ]); Mail::to($request->email)->send(new ContactUs($request->fullname, $request->email, $request->subject, $request->message)); return redirect()->back(); .env MAIL_MAILER=smtp MAIL_HOST=smtp.hostinger.com MAIL_PORT=465 MAIL_USERNAME=support@me.com MAIL_PASSWORD=xxxxxx MAIL_ENCRYPTION=tls MAIL_FROM_ADDRESS=support@me.com MAIL_FROM_NAME="${APP_NAME}" I hope to find the answer.
[]
[]
[ "Mail::to('support@me.com')->send(new ContactUs($request->fullname, $request->email, $request->subject, $request->message));\n\nThis will send the email to the address specified in the to() method, which is support@me.com.\n" ]
[ -1 ]
[ "contacts", "email", "laravel", "php" ]
stackoverflow_0074680874_contacts_email_laravel_php.txt
Q: swap alternate in an array You have been given an array/list(ARR) of size N. You need to swap every pair of alternate elements in the array/list. You don't need to print or return anything, just change in the input array itself. #include <iostream>; using namespace std; void printArr(int arr[], int n) { for (int i = 0; i < n; i++) cout << arr[i]<<i; } void UpdateArr(int arr[], int n) { int i = 0, j = n - 1; while (i < j) { int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; i += 2; j -= 2; } cout<<' printArr(arr[], n)'; } int main() { int t; cin>> t; int n; cin>> n; int input[100]; for(int i=0; i<n; i++) { cin >>input[i]; } int arr[100] ; n = sizeof(arr) / sizeof(arr[0]); UpdateArr(arr, n); return 0; } A: I'm not sure what are you exactly expecting the output to be (pls edit it and show the expected output) but I think this is what you need to do #include <iostream> #include <iomanip> using namespace std; void UpdateArray(int Arr[], size_t n) { for (size_t i = 0; i < n / 2; i++) { int Holder = Arr[i]; Arr[i] = Arr[~i + n]; Arr[~i + n] = Holder; } } int main() { int Arr[7] = { 1,2,3,4,5,6,7 }; UpdateArray(Arr, 7); for (int i = 0; i < 7; i++) { std::cout << Arr[i] << "\n"; } return 0; } size_t is like an int but it can't go into negative, but it can take bigger positive numbers, you can replace it with int, it shouldn't make a difference. so we loop through half the array, replacing first items with last, the [~i + n] flips the value to the other side, so like index 4 in a array size of 20 will become 15
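The listings in this entry rearrange the array from both ends rather than swapping adjacent pairs; the stated problem ("swap every pair of alternate elements", e.g. 1 2 3 4 5 becomes 2 1 4 3 5) is usually solved with a pairwise swap like this hedged sketch:
#include <iostream>
#include <utility>

// Swap elements pairwise: (0,1), (2,3), ...; a trailing element is left untouched.
void swapAlternate(int arr[], int n) {
    for (int i = 0; i + 1 < n; i += 2)
        std::swap(arr[i], arr[i + 1]);
}

int main() {
    int arr[5] = {1, 2, 3, 4, 5};
    swapAlternate(arr, 5);
    for (int i = 0; i < 5; i++)
        std::cout << arr[i] << ' ';  // prints: 2 1 4 3 5
    std::cout << '\n';
    return 0;
}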
swap alternate in an array
You have been given an array/list(ARR) of size N. You need to swap every pair of alternate elements in the array/list. You don't need to print or return anything, just change in the input array itself. #include <iostream>; using namespace std; void printArr(int arr[], int n) { for (int i = 0; i < n; i++) cout << arr[i]<<i; } void UpdateArr(int arr[], int n) { int i = 0, j = n - 1; while (i < j) { int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; i += 2; j -= 2; } cout<<' printArr(arr[], n)'; } int main() { int t; cin>> t; int n; cin>> n; int input[100]; for(int i=0; i<n; i++) { cin >>input[i]; } int arr[100] ; n = sizeof(arr) / sizeof(arr[0]); UpdateArr(arr, n); return 0; }
[ "I'm not sure what are you exactly expecting the output to be (pls edit it and show the expected output) but I think this is what you need to do\n#include <iostream>\n#include <iomanip>\nusing namespace std;\n\nvoid UpdateArray(int Arr[], size_t n) {\n for (size_t i = 0; i < n / 2; i++) {\n int Holder = Arr[i];\n Arr[i] = Arr[~i + n];\n Arr[~i + n] = Holder; } }\n\nint main() {\n int Arr[7] = { 1,2,3,4,5,6,7 };\n UpdateArray(Arr, 7);\n for (int i = 0; i < 7; i++) {\n std::cout << Arr[i] << \"\\n\"; }\n return 0; }\n\nsize_t is like an int but it can't go into negative, but it can take bigger positive numbers, you can replace it with int, it shouldn't make a difference.\nso we loop through half the array, replacing first items with last, the [~i + n] flips the value to the other side, so like index 4 in a array size of 20 will become 15\n" ]
[ 0 ]
[]
[]
[ "alternate", "arrays", "c++", "swap" ]
stackoverflow_0074675238_alternate_arrays_c++_swap.txt
Q: How to disable Log4j configuration from external Maven dependency? I have Spring Boot application with Log4j2 XML configuration file placed in resources/log4j2.xml. One external library I use is installed via Maven dependency and have own logging configuration in logback.xml.It seems that this file overwrites my Log4J2 configuration and logging is now controlled by this config file. I'm getting logger instance (org.apache.logging.log4j.Logger) this way: private static final Logger LOGGER = LogManager.getLogger(Foo.class); Q: How can I disable Log4J configuration from external library? Edit 1: Added Maven dependencies related to Log4j2 <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> <version>${spring.boot.version}</version> <exclusions> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-logging</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-log4j2</artifactId> <version>${spring.boot.version}</version> </dependency> A: In order to disable the external library's Log4J2 configuration, you can specify the -Dlog4j.configurationFile system property when starting your Spring Boot application, pointing it to the location of your own Log4J2 configuration file. For example, if your configuration file is in the resources directory of your project, you can use the following command to start your application: java -Dlog4j.configurationFile=classpath:resources/log4j2.xml -jar your-application.jar This will tell Log4J2 to use your configuration file instead of any other configuration files it may find on the classpath. A: Shortly: The problem in your case is Spring try to use slf4j first (which is part of logback-classic) and Spring get it, due to logback exists in your classpath at runtime. Detailed: Spring-JCL under hood try to init logging system by searching classes in classloader in next order: try to load Log4j (if slf4j available load slf4j bridge) try to load Log4j try to load slf4j load java util logging Full logic is available here: github spring source code After that Spring boot start to config logger (using LoggingApplicationListener). In this phase Spring Boot configure logger using config file and available logger from classpath. In your case one of the dependencies uses logback-classic as a dependency and as a result it is available for springboot in runtime, that is why it uses logback-classic. There are a few possible ways how to solve your problem: Migrate to logback-classic and use it in your project (it is preferable) If it is hard to change log4j2 to logback-classic, you can support both of them (for your code config from log4j2.xml will be used and logback.xml will be used for spring). It is necessary to add logging.config configuration to application.properties with path to your custom logback.xml - doc If you want to use log4j2 as a logger for springboot itself, it is necessary to exclude logback-classic from your dependency(it can lead to problems with dependency, which use it)
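A hedged sketch of option 3 from the second answer, excluding logback-classic from whichever third-party dependency pulls it in (mvn dependency:tree shows which one that is); the host artifact below is a placeholder:
<dependency>
    <groupId>com.example</groupId>            <!-- the library that brings in logback -->
    <artifactId>external-library</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </exclusion>
    </exclusions>
</dependency>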
How to disable Log4j configuration from external Maven dependency?
I have Spring Boot application with Log4j2 XML configuration file placed in resources/log4j2.xml. One external library I use is installed via Maven dependency and have own logging configuration in logback.xml.It seems that this file overwrites my Log4J2 configuration and logging is now controlled by this config file. I'm getting logger instance (org.apache.logging.log4j.Logger) this way: private static final Logger LOGGER = LogManager.getLogger(Foo.class); Q: How can I disable Log4J configuration from external library? Edit 1: Added Maven dependencies related to Log4j2 <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> <version>${spring.boot.version}</version> <exclusions> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-logging</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-log4j2</artifactId> <version>${spring.boot.version}</version> </dependency>
[ "In order to disable the external library's Log4J2 configuration, you can specify the -Dlog4j.configurationFile system property when starting your Spring Boot application, pointing it to the location of your own Log4J2 configuration file. For example, if your configuration file is in the resources directory of your project, you can use the following command to start your application:\njava -Dlog4j.configurationFile=classpath:resources/log4j2.xml -jar your-application.jar\n\nThis will tell Log4J2 to use your configuration file instead of any other configuration files it may find on the classpath.\n", "Shortly:\nThe problem in your case is Spring try to use slf4j first (which is part of logback-classic) and Spring get it, due to logback exists in your classpath at runtime.\nDetailed:\nSpring-JCL under hood try to init logging system by searching classes in classloader in next order:\n\ntry to load Log4j (if slf4j available load slf4j bridge)\ntry to load Log4j\ntry to load slf4j\nload java util logging\n\nFull logic is available here: github spring source code\nAfter that Spring boot start to config logger (using LoggingApplicationListener). In this phase Spring Boot configure logger using config file and available logger from classpath.\nIn your case one of the dependencies uses logback-classic as a dependency and as a result it is available for springboot in runtime, that is why it uses logback-classic.\nThere are a few possible ways how to solve your problem:\n\nMigrate to logback-classic and use it in your project (it is preferable)\nIf it is hard to change log4j2 to logback-classic, you can support both of them (for your code config from log4j2.xml will be used and logback.xml will be used for spring). It is necessary to add logging.config configuration to application.properties with path to your custom logback.xml - doc\nIf you want to use log4j2 as a logger for springboot itself, it is necessary to exclude logback-classic from your dependency(it can lead to problems with dependency, which use it)\n\n" ]
[ 0, 0 ]
[]
[]
[ "java", "log4j2", "maven", "spring_boot" ]
stackoverflow_0074624355_java_log4j2_maven_spring_boot.txt
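A minimal sketch of the exclusion the second answer recommends, assuming the offending library's coordinates are com.example:external-lib (a placeholder; run mvn dependency:tree to find which artifact actually pulls in logback-classic):

<dependency>
    <groupId>com.example</groupId>            <!-- placeholder group of the external library -->
    <artifactId>external-lib</artifactId>     <!-- placeholder artifact -->
    <version>1.0.0</version>                  <!-- placeholder version -->
    <exclusions>
        <!-- drop the Logback backend so spring-boot-starter-log4j2 is the only binding left -->
        <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </exclusion>
    </exclusions>
</dependency>

With logback-classic off the runtime classpath, Spring Boot's logging detection falls back to Log4j2 and resources/log4j2.xml takes effect again; as the answer warns, this only works if the external library does not rely on Logback-specific behavior.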
Q: How to skip errors in stored procedure of redshift tables I have a bunch of SQL queries inside a stored procedure. Is there any way to skip an exception and proceed with the next set of queries inside the stored procedure? Note: I have gone through the AWS documentation; it says a stored procedure ends if there is an exception. We can also create a UDF, so let me know how to create a UDF that skips the exception and proceeds with the other queries inside the SP, or let me know if there is any other way. Thanks. I don't know the solution for this; I am expecting a UDF for ignoring the exception or any other statement that ignores the error. A: you can use a TRY & CATCH block to handle the errors and continue processing the rest of the stored procedure. Here is an example of how you can use a TRY...CATCH block to skip errors in a stored procedure: BEGIN -- Main body of stored procedure BEGIN TRY -- Statements that may cause errors END TRY BEGIN CATCH -- Statements to handle errors END CATCH END; The statements that may cause errors are placed inside the TRY block. If any of these statements fail, the error will be caught by the CATCH block, which contains the code to handle the error. This allows the stored procedure to continue processing even if some of the statements fail.
How to skip errors in stored procedure of redshift tables
I have a bunch of SQL queries inside a stored procedure. Is there any way to skip an exception and proceed with the next set of queries inside the stored procedure? Note: I have gone through the AWS documentation; it says a stored procedure ends if there is an exception. We can also create a UDF, so let me know how to create a UDF that skips the exception and proceeds with the other queries inside the SP, or let me know if there is any other way. Thanks. I don't know the solution for this; I am expecting a UDF for ignoring the exception or any other statement that ignores the error.
[ "you can use a TRY & CATCH block to handle the errors and continue processing the rest of the stored procedure.\nHere is an example of how you can use a TRY...CATCH block to skip errors in a stored procedure:\nBEGIN\n -- Main body of stored procedure\n BEGIN TRY\n -- Statements that may cause errors\n END TRY\n BEGIN CATCH\n -- Statements to handle errors\n END CATCH\nEND;\n\n\nThe statements that may cause errors are placed inside the TRY block. If any of these statements fail, the error will be caught by the CATCH block, which contains the code to handle the error. This allows the stored procedure to continue processing even if some of the statements fail.\n" ]
[ 0 ]
[]
[]
[ "amazon_redshift", "sql", "stored_procedures" ]
stackoverflow_0074654194_amazon_redshift_sql_stored_procedures.txt
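One caveat on the answer above: BEGIN TRY / END TRY / BEGIN CATCH is SQL Server (T-SQL) syntax, and Redshift stored procedures are written in PL/pgSQL, where the analogous construct is an exception block. A hedged sketch of that pattern is below; it assumes a Redshift release that allows exception handling inside procedures (available for NONATOMIC procedures; on older releases the whole procedure still aborts, as the AWS documentation the question cites describes), and the table names are placeholders:

CREATE OR REPLACE PROCEDURE run_steps() NONATOMIC AS $$
BEGIN
    -- step 1: a statement that may fail
    BEGIN
        INSERT INTO step1_target SELECT * FROM step1_source;   -- placeholder statement
    EXCEPTION WHEN OTHERS THEN
        RAISE INFO 'step 1 failed, continuing: %', SQLERRM;
    END;

    -- step 2 runs even if step 1 raised
    BEGIN
        INSERT INTO step2_target SELECT * FROM step2_source;   -- placeholder statement
    EXCEPTION WHEN OTHERS THEN
        RAISE INFO 'step 2 failed, continuing: %', SQLERRM;
    END;
END;
$$ LANGUAGE plpgsql;

Each sub-block traps its own failure and lets the procedure move on to the next statement; whether that is acceptable depends on how much partial work the procedure is allowed to commit.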
Q: str_word_count() function doesn't display Arabic language properly I've made the next function to return a specific number of words from a text: function brief_text($text, $num_words = 50) { $words = str_word_count($text, 1); $required_words = array_slice($words, 0, $num_words); return implode(" ", $required_words); } and it works pretty well with English language but when I try to use it with Arabic language it fails and doesn't return words as expected. For example: $text_en = "Cairo is the capital of Egypt and Paris is the capital of France"; echo brief_text($text_en, 10); will output Cairo is the capital of Egypt and Paris is the while $text_ar = "القاهرة هى عاصمة مصر وباريس هى عاصمة فرنسا"; echo brief_text($text_ar, 10); will output � � � � � � � � � �. I know that the problem is with the str_word_count function but I don't know how to fix it. UPDATE I have already written another function that works pretty good with both English and Arabic languages, but I was looking for a solution for the problem caused by str_word_count() function when using with Arabic. Anyway here is my other function: function brief_text($string, $number_of_required_words = 50) { $string = trim(preg_replace('/\s+/', ' ', $string)); $words = explode(" ", $string); $required_words = array_slice($words, 0, $number_of_required_words); // get sepecific number of elements from the array return implode(" ", $required_words); } A: Try with this function for word count: // You can call the function as you like if (!function_exists('mb_str_word_count')) { function mb_str_word_count($string, $format = 0, $charlist = '[]') { mb_internal_encoding( 'UTF-8'); mb_regex_encoding( 'UTF-8'); $words = mb_split('[^\x{0600}-\x{06FF}]', $string); switch ($format) { case 0: return count($words); break; case 1: case 2: return $words; break; default: return $words; break; } }; } echo mb_str_word_count("القاهرة هى عاصمة مصر وباريس هى عاصمة فرنسا") . 
PHP_EOL; Resources Unicode list for arabic A Rule-Based Arabic Stemming Algorithm A Rule and Template Based Stemming Algorithm for Arabic Language (seems more complete) Recommentations Use the tag <meta charset="UTF-8"/> in HTML files Always add Content-type: text/html; charset=utf-8 headers when serving pages A: For accepting ASCII characters too: if (!function_exists('mb_str_word_count')) { function mb_str_word_count($string, $format = 0, $charlist = '[]') { $string=trim($string); if(empty($string)) $words = array(); else $words = preg_split('~[^\p{L}\p{N}\']+~u',$string); switch ($format) { case 0: return count($words); break; case 1: case 2: return $words; break; default: return $words; break; } } } A: hi friend if you want to get count of word in Farsi language or Arabic you can use below code public function customWordCount($content_text) { $resultArray = explode(' ',trim($content_text)); foreach ($resultArray as $key => $item) { if (in_array($item,["|",";",".","-","=",":","{","}","[","]","(",")"])) { $resultArray[$key] = ''; } } $resultArray = array_filter($resultArray); return count($resultArray); } A: I would change all letters to a random English letter and count it str_word_count(preg_replace("/[\x{0600}-\x{06FF}a-zA-Z]/u", "a", "أشهد أن لا إله إلا الله")) A: A while ago I wanted to calculate the reading time of a paragraph and had the same issue and I just simply count the SPACEs in the paragraph :) (note that it won't be that accurate but it suits me) like this: substr_count($text, ' ') + 1; A: My PHP 8.1 solution is; if (!function_exists('mb_str_word_count')) { function mb_str_word_count($string, $format = 0): array|bool|int { return match ($format) { 1 => get_words($string), 2 => get_words($string, count_word_order_as_index: true), default => count(get_words($string)), }; } } function get_words(string $string, $count_word_order_as_index = false): array { $letters = mb_str_split($string); $words = []; if ($count_word_order_as_index) { $count_word_order_as_index_count = 0; } $word = ''; $total_letters = count($letters); foreach ($letters as $key => $letter) { if ($count_word_order_as_index) { $count_word_order_as_index_count++; } if ($letter !== ' ') { $word .= $letter; if ($total_letters === $key + 1) { $words[] = $word; } } else { if ($count_word_order_as_index) { $words[$count_word_order_as_index_count] = $word; } else { $words[] = $word; } $word = ''; } } return $words; }
str_word_count() function doesn't display Arabic language properly
I've made the next function to return a specific number of words from a text: function brief_text($text, $num_words = 50) { $words = str_word_count($text, 1); $required_words = array_slice($words, 0, $num_words); return implode(" ", $required_words); } and it works pretty well with English language but when I try to use it with Arabic language it fails and doesn't return words as expected. For example: $text_en = "Cairo is the capital of Egypt and Paris is the capital of France"; echo brief_text($text_en, 10); will output Cairo is the capital of Egypt and Paris is the while $text_ar = "القاهرة هى عاصمة مصر وباريس هى عاصمة فرنسا"; echo brief_text($text_ar, 10); will output � � � � � � � � � �. I know that the problem is with the str_word_count function but I don't know how to fix it. UPDATE I have already written another function that works pretty good with both English and Arabic languages, but I was looking for a solution for the problem caused by str_word_count() function when using with Arabic. Anyway here is my other function: function brief_text($string, $number_of_required_words = 50) { $string = trim(preg_replace('/\s+/', ' ', $string)); $words = explode(" ", $string); $required_words = array_slice($words, 0, $number_of_required_words); // get sepecific number of elements from the array return implode(" ", $required_words); }
[ "Try with this function for word count:\n// You can call the function as you like\nif (!function_exists('mb_str_word_count'))\n{\n function mb_str_word_count($string, $format = 0, $charlist = '[]') {\n mb_internal_encoding( 'UTF-8');\n mb_regex_encoding( 'UTF-8');\n\n $words = mb_split('[^\\x{0600}-\\x{06FF}]', $string);\n switch ($format) {\n case 0:\n return count($words);\n break;\n case 1:\n case 2:\n return $words;\n break;\n default:\n return $words;\n break;\n }\n };\n}\n\n\n\necho mb_str_word_count(\"القاهرة هى عاصمة مصر وباريس هى عاصمة فرنسا\") . PHP_EOL;\n\nResources\n\nUnicode list for arabic \nA Rule-Based Arabic Stemming Algorithm \nA Rule and Template Based Stemming Algorithm for Arabic Language (seems more complete)\n\nRecommentations\n\nUse the tag <meta charset=\"UTF-8\"/> in HTML files\nAlways add Content-type: text/html; charset=utf-8 headers when serving pages\n\n", "For accepting ASCII characters too:\nif (!function_exists('mb_str_word_count'))\n{\n function mb_str_word_count($string, $format = 0, $charlist = '[]') {\n $string=trim($string);\n if(empty($string))\n $words = array();\n else\n $words = preg_split('~[^\\p{L}\\p{N}\\']+~u',$string);\n switch ($format) {\n case 0:\n return count($words);\n break;\n case 1:\n case 2:\n return $words;\n break;\n default:\n return $words;\n break;\n }\n }\n}\n\n", "hi friend if you want to get count of word in Farsi language or Arabic you can use below code\npublic function customWordCount($content_text)\n{\n $resultArray = explode(' ',trim($content_text));\n foreach ($resultArray as $key => $item)\n {\n if (in_array($item,[\"|\",\";\",\".\",\"-\",\"=\",\":\",\"{\",\"}\",\"[\",\"]\",\"(\",\")\"]))\n {\n $resultArray[$key] = '';\n }\n }\n\n $resultArray = array_filter($resultArray);\n return count($resultArray);\n}\n\n", "I would change all letters to a random English letter and count it\nstr_word_count(preg_replace(\"/[\\x{0600}-\\x{06FF}a-zA-Z]/u\", \"a\", \"أشهد أن لا إله إلا الله\"))\n\n", "A while ago I wanted to calculate the reading time of a paragraph and had the same issue and I just simply count the SPACEs in the paragraph :) (note that it won't be that accurate but it suits me)\nlike this:\nsubstr_count($text, ' ') + 1;\n\n", "My PHP 8.1 solution is;\nif (!function_exists('mb_str_word_count')) {\n function mb_str_word_count($string, $format = 0): array|bool|int\n {\n return match ($format) {\n 1 => get_words($string),\n 2 => get_words($string, count_word_order_as_index: true),\n default => count(get_words($string)),\n };\n }\n\n\n}\n\n\nfunction get_words(string $string, $count_word_order_as_index = false): array\n{\n $letters = mb_str_split($string);\n $words = [];\n\n if ($count_word_order_as_index) {\n $count_word_order_as_index_count = 0;\n }\n $word = '';\n $total_letters = count($letters);\n foreach ($letters as $key => $letter) {\n if ($count_word_order_as_index) {\n $count_word_order_as_index_count++;\n }\n if ($letter !== ' ') {\n $word .= $letter;\n if ($total_letters === $key + 1) {\n $words[] = $word;\n }\n } else {\n if ($count_word_order_as_index) {\n $words[$count_word_order_as_index_count] = $word;\n } else {\n $words[] = $word;\n }\n\n $word = '';\n }\n }\n return $words;\n}\n\n\n" ]
[ 3, 2, 1, 1, 0, 0 ]
[]
[]
[ "function", "php" ]
stackoverflow_0013884178_function_php.txt
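Pulling the question's UPDATE and the Unicode-aware answers together, a compact sketch of a multibyte-safe brief_text(); it splits on whitespace with the u modifier instead of relying on str_word_count(), and the sample strings are the ones from the post (the source file must be saved as UTF-8):

<?php
// Unicode-aware split on whitespace, so Arabic words are kept intact.
function brief_text(string $text, int $num_words = 50): string
{
    $words = preg_split('/\s+/u', trim($text), -1, PREG_SPLIT_NO_EMPTY);
    return implode(' ', array_slice($words, 0, $num_words));
}

echo brief_text('Cairo is the capital of Egypt and Paris is the capital of France', 10), PHP_EOL;
echo brief_text('القاهرة هى عاصمة مصر وباريس هى عاصمة فرنسا', 5), PHP_EOL;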
Q: PySpark error: java.net.SocketTimeoutException: Accept timed out I am getting error "java.net.SocketTimeoutException: Accept timed out" while running pyspark using python 3.9.6 and spark 3.3.1. Source code: import json from pyspark.sql import SparkSession from pyspark.sql.functions import * from pyspark.sql.types import StringType with open('config.json') as cfg: json_data = json.load(cfg) dataset_path = json_data['Dataset'] # Init spark spark = SparkSession.builder.master('local[*]').appName('A').getOrCreate() sc = spark.sparkContext # Load Dataset df = spark.read.options(delimiter=';', inferSchema=True, header=True).csv(dataset_path); df.show(5) # Dataset preprocessing # Converts integer to double and converts 'quality' column to categorical @udf(returnType=StringType()) def condition(r): if r == 0: label = "bad" else: label = "good" return label df = df.withColumn("NO2", df["NO2"].cast('double')) df = df.withColumn("O3", df["O3"].cast('double')) df = df.withColumn("PM10", df["PM10"].cast('double')) df = df.withColumn("PM25", df["PM25"].cast('double')) df = df.withColumn('quality', condition('quality')) df.show(5) It happens when I try to apply the condition function for dataframe. The full stack trace: py4j.protocol.Py4JJavaError: An error occurred while calling o60.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 3) (host.docker.internal executor driver): org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:164) at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:131) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.SocketTimeoutException: Accept timed out at 
java.base/java.net.PlainSocketImpl.waitForNewConnection(Native Method) at java.base/java.net.PlainSocketImpl.socketAccept(PlainSocketImpl.java:163) at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458) at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565) at java.base/java.net.ServerSocket.accept(ServerSocket.java:533) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 24 more Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:506) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:459) at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48) at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3868) at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2863) at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3858) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3856) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3856) at org.apache.spark.sql.Dataset.head(Dataset.scala:2863) at org.apache.spark.sql.Dataset.take(Dataset.scala:3084) at org.apache.spark.sql.Dataset.getRows(Dataset.scala:288) at org.apache.spark.sql.Dataset.showString(Dataset.scala:327) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:164) at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:131) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ... 1 more Caused by: java.net.SocketTimeoutException: Accept timed out at java.base/java.net.PlainSocketImpl.waitForNewConnection(Native Method) at java.base/java.net.PlainSocketImpl.socketAccept(PlainSocketImpl.java:163) at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458) at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565) at java.base/java.net.ServerSocket.accept(ServerSocket.java:533) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 24 more I have tried to google it but the only appropriate question I've found is without answer. A: The solution is to import "findspark" import findspark findspark.init()
PySpark error: java.net.SocketTimeoutException: Accept timed out
I am getting error "java.net.SocketTimeoutException: Accept timed out" while running pyspark using python 3.9.6 and spark 3.3.1. Source code: import json from pyspark.sql import SparkSession from pyspark.sql.functions import * from pyspark.sql.types import StringType with open('config.json') as cfg: json_data = json.load(cfg) dataset_path = json_data['Dataset'] # Init spark spark = SparkSession.builder.master('local[*]').appName('A').getOrCreate() sc = spark.sparkContext # Load Dataset df = spark.read.options(delimiter=';', inferSchema=True, header=True).csv(dataset_path); df.show(5) # Dataset preprocessing # Converts integer to double and converts 'quality' column to categorical @udf(returnType=StringType()) def condition(r): if r == 0: label = "bad" else: label = "good" return label df = df.withColumn("NO2", df["NO2"].cast('double')) df = df.withColumn("O3", df["O3"].cast('double')) df = df.withColumn("PM10", df["PM10"].cast('double')) df = df.withColumn("PM25", df["PM25"].cast('double')) df = df.withColumn('quality', condition('quality')) df.show(5) It happens when I try to apply the condition function for dataframe. The full stack trace: py4j.protocol.Py4JJavaError: An error occurred while calling o60.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 3) (host.docker.internal executor driver): org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:164) at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:131) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.SocketTimeoutException: Accept timed out at java.base/java.net.PlainSocketImpl.waitForNewConnection(Native Method) at 
java.base/java.net.PlainSocketImpl.socketAccept(PlainSocketImpl.java:163) at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458) at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565) at java.base/java.net.ServerSocket.accept(ServerSocket.java:533) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 24 more Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:506) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:459) at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48) at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3868) at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2863) at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3858) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3856) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3856) at org.apache.spark.sql.Dataset.head(Dataset.scala:2863) at org.apache.spark.sql.Dataset.take(Dataset.scala:3084) at org.apache.spark.sql.Dataset.getRows(Dataset.scala:288) at org.apache.spark.sql.Dataset.showString(Dataset.scala:327) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:164) at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:131) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ... 1 more Caused by: java.net.SocketTimeoutException: Accept timed out at java.base/java.net.PlainSocketImpl.waitForNewConnection(Native Method) at java.base/java.net.PlainSocketImpl.socketAccept(PlainSocketImpl.java:163) at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458) at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565) at java.base/java.net.ServerSocket.accept(ServerSocket.java:533) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 24 more I have tried to google it but the only appropriate question I've found is without answer.
[ "The solution is to import \"findspark\"\nimport findspark\nfindspark.init()\n\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "pyspark", "python" ]
stackoverflow_0074679957_apache_spark_pyspark_python.txt
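The findspark answer helps when the script cannot locate the Spark installation, but the underlying "Python worker failed to connect back" / Accept timed out failure is often resolved simply by pointing Spark at the Python interpreter the workers should use. A hedged sketch, assuming the workers should run the same interpreter as the driver script:

import os
import sys

# Tell Spark which Python to launch for both the driver and the executors'
# worker processes, before the SparkSession (and its JVM) is created.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("A")
    .getOrCreate()
)

Setting the variables after the session is already up has no effect, so they belong at the very top of the script.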
Q: Centering text within an centered element, Css I'm trying to center a card on the page and center the text within that card. Here is the CSS I've attempted below. And It seems to do nothing. I am trying to use JS for the first time with CSS. Confusion about where I'm messing up, so not entirely sure what part to isolate to share. sample here: https://rrrhhhhhhhhh.github.io/sentences/ Thank you CSS .testclass { font: 10px courier, courier new; background: #ffffff; z-index: 10; layer-background-color: #ffffff; } #test { position: absolute; top: 10px; left:10px; padding: 0px; } #rel { position: relative; align: center; } html and js } function writit(text,id) { if (document.getElementById) { x = document.getElementById(id); x.innerHTML = ''; x.innerHTML = text; } else if (document.all) { x = document.all[id]; x.innerHTML = text; } else if (document.layers) { x = document.layers[id]; l = (480-(getNumLines(text)*8))/2; w = (764-(getWidestLine(text)*8))/2; text2 = '<td id=rel align="center" CLASS="testclass" style="font:12px courier,courier new;padding-left:' + w.toString() + 'px' + ';padding-top:' + l.toString() + 'px' + '">' + text + '</td>'; x.document.open(); x.document.write(text2); x.document.close(); } } function getNumLines(mystr) { var a = mystr.split("<br>") return(a.length); } function getWidestLine(mystr) { if (mystr.indexOf("&nbsp;") != -1) { re = /&nbsp;*/g; mystr = mystr.replace(re,"Z"); //alert(mystr); } if (mystr.indexOf("<u>") != -1) { re = /<u>*/g; mystr = mystr.replace(re, ""); re = /<\/u>*/g; mystr = mystr.replace(re, ""); } if (mystr.indexOf("<br>") != -1) { var ss, t; var lngest; ss = mystr.split("<br>"); lngest = ss[0].length; for (t=0; t < ss.length; t++) { if (ss[t].length > lngest) { lngest = ss[t].length; } } } else { lngest = mystr.length; } return(lngest); } // --> </script> <body bgcolor="gainsboro" onload="startup();"> <table bgcolor="white" border height="480px" width="764px" cellpadding="0" cellspacing="0"> <tr> <td align="center"> <table nowrap> <tr> <td><img width="700px" height="1px" src="./resources/images/w.gif"></td> <td> <div class="testclass" id="test"></div> </td> </tr> </table> </td> </tr> </table> <center> <form> <p> <input type="button" onclick="doBack(); return false" value="Back"> <input type="button" onclick="doNext(); return false" value="Next"> </p> </form> </center> </body> </html> A: Just notice this without js .testclass { font: 10px courier,courier new; background: #ffffff; z-index: 10; layer-background-color: #ffffff; } #test {position: absolute; top: 10px; left:10px; padding: 0px; } #rel {position: relative; align: center; } .center-div{ display:flex; flex-direction:column; align-items:center; justify-content:center; } <!Doctype html> <html> <head> <title>Robert Grenier - Sentences</title> </head> <body> <div class="center-div"> <table bgcolor="white" border height="480px" width="764px" cellpadding="0" cellspacing="0"> <tr> <td align="center"> <table nowrap> <tr> <td><img width="700px" height="1px" src="./resources/images/w.gif"></td> <td> <div class="testclass" id="test"></div> </td> </tr> </table> </td> </tr> </table> <center> <form> <p> <input type="button" onclick="doBack(); return false" value="Back"> <input type="button" onclick="doNext(); return false" value="Next"> </p> </form> </center> </div> </body> </html>
Centering text within an centered element, Css
I'm trying to center a card on the page and center the text within that card. Here is the CSS I've attempted below. And It seems to do nothing. I am trying to use JS for the first time with CSS. Confusion about where I'm messing up, so not entirely sure what part to isolate to share. sample here: https://rrrhhhhhhhhh.github.io/sentences/ Thank you CSS .testclass { font: 10px courier, courier new; background: #ffffff; z-index: 10; layer-background-color: #ffffff; } #test { position: absolute; top: 10px; left:10px; padding: 0px; } #rel { position: relative; align: center; } html and js } function writit(text,id) { if (document.getElementById) { x = document.getElementById(id); x.innerHTML = ''; x.innerHTML = text; } else if (document.all) { x = document.all[id]; x.innerHTML = text; } else if (document.layers) { x = document.layers[id]; l = (480-(getNumLines(text)*8))/2; w = (764-(getWidestLine(text)*8))/2; text2 = '<td id=rel align="center" CLASS="testclass" style="font:12px courier,courier new;padding-left:' + w.toString() + 'px' + ';padding-top:' + l.toString() + 'px' + '">' + text + '</td>'; x.document.open(); x.document.write(text2); x.document.close(); } } function getNumLines(mystr) { var a = mystr.split("<br>") return(a.length); } function getWidestLine(mystr) { if (mystr.indexOf("&nbsp;") != -1) { re = /&nbsp;*/g; mystr = mystr.replace(re,"Z"); //alert(mystr); } if (mystr.indexOf("<u>") != -1) { re = /<u>*/g; mystr = mystr.replace(re, ""); re = /<\/u>*/g; mystr = mystr.replace(re, ""); } if (mystr.indexOf("<br>") != -1) { var ss, t; var lngest; ss = mystr.split("<br>"); lngest = ss[0].length; for (t=0; t < ss.length; t++) { if (ss[t].length > lngest) { lngest = ss[t].length; } } } else { lngest = mystr.length; } return(lngest); } // --> </script> <body bgcolor="gainsboro" onload="startup();"> <table bgcolor="white" border height="480px" width="764px" cellpadding="0" cellspacing="0"> <tr> <td align="center"> <table nowrap> <tr> <td><img width="700px" height="1px" src="./resources/images/w.gif"></td> <td> <div class="testclass" id="test"></div> </td> </tr> </table> </td> </tr> </table> <center> <form> <p> <input type="button" onclick="doBack(); return false" value="Back"> <input type="button" onclick="doNext(); return false" value="Next"> </p> </form> </center> </body> </html>
[ "Just notice this without js\n\n\n.testclass {\n font: 10px courier,courier new;\n background: #ffffff;\n z-index: 10;\n layer-background-color: #ffffff;\n}\n\n#test {position: absolute;\n top: 10px;\n left:10px;\n padding: 0px;\n}\n\n#rel {position: relative;\n align: center;\n}\n.center-div{\ndisplay:flex;\nflex-direction:column;\nalign-items:center;\njustify-content:center;\n}\n<!Doctype html>\n<html>\n<head>\n<title>Robert Grenier - Sentences</title>\n\n</head>\n<body>\n<div class=\"center-div\">\n<table bgcolor=\"white\" border height=\"480px\" width=\"764px\" cellpadding=\"0\" cellspacing=\"0\">\n<tr>\n<td align=\"center\">\n<table nowrap>\n<tr>\n<td><img width=\"700px\" height=\"1px\" src=\"./resources/images/w.gif\"></td>\n<td>\n<div class=\"testclass\" id=\"test\"></div>\n</td>\n</tr>\n</table>\n</td>\n</tr>\n</table>\n<center>\n<form>\n<p>\n\n<input type=\"button\" onclick=\"doBack(); return false\" value=\"Back\">\n<input type=\"button\" onclick=\"doNext(); return false\" value=\"Next\">\n</p>\n</form>\n</center>\n</div>\n</body>\n</html>\n\n\n\n" ]
[ 1 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074678130_css_html.txt
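For the original goal of centering the card without rewriting the JavaScript, a small sketch of the CSS-only route; the selectors assume the posted markup (the fixed-width outer table and the #test div) stays as it is:

/* center the 764px-wide card horizontally */
table {
  margin: 0 auto;
}

/* center the generated text inside the card */
#test {
  position: static;   /* drop the absolute 10px offset so centering can apply */
  text-align: center;
  padding: 0;
}

Whether the absolute positioning on #test can safely be dropped depends on what the rest of the legacy writit() logic expects, so treat this as a starting point rather than a drop-in fix.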
Q: Heroku completely messed up my Laravel login (Laravel 9) I am currently trying to host my Laravel 9 project on Heroku. Everything works more or less fine until it comes to the login. The first problem is that the login style is completely messed up: my login was built entirely with Laravel's default login implementation, and locally everything works as it should. My current laravel-9 login page on Heroku This is currently my login page... On top of that, whenever I try to log in, I get the 419 page error. And yes, I did set up the Postgres database properly and added all the migrations. I have tried everything I possibly could and looked everywhere online, but no one has been able to help me out.... A: Go to resources/views/layouts/app.blade.php and include bootstrap.css in header of the page: <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.2.3/css/bootstrap.min.css" /> And include bootstrap.js in scripts section: <script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.2.3/js/bootstrap.min.js"></script>
Heroku completely messed up my Laravel login (Laravel 9)
I am currently trying to host my Laravel 9 project on Heroku. Everything works more or less fine until it comes to the login. The first problem is that the login style is completely messed up: my login was built entirely with Laravel's default login implementation, and locally everything works as it should. My current laravel-9 login page on Heroku This is currently my login page... On top of that, whenever I try to log in, I get the 419 page error. And yes, I did set up the Postgres database properly and added all the migrations. I have tried everything I possibly could and looked everywhere online, but no one has been able to help me out....
[ "Go to resources/views/layouts/app.blade.php and include bootstrap.css in header of the page:\n<link rel=\"stylesheet\" href=\"https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.2.3/css/bootstrap.min.css\" />\n\nAnd include bootstrap.js in scripts section:\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.2.3/js/bootstrap.min.js\"></script>\n\n" ]
[ 0 ]
[]
[]
[ "authentication", "heroku", "laravel", "laravel_9" ]
stackoverflow_0074679746_authentication_heroku_laravel_laravel_9.txt
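The Bootstrap include fixes the styling only; the 419 error is a CSRF/session problem, and on Heroku it most often comes down to a missing APP_KEY or to HTTPS being terminated at Heroku's router without the app trusting that proxy. A hedged sketch of the usual checklist for Laravel 9 (whether it resolves this particular deployment is an assumption):

<?php
// app/Http/Middleware/TrustProxies.php
// Trust Heroku's router so Laravel sees the request as HTTPS and issues the
// session/CSRF cookies correctly behind the proxy.
namespace App\Http\Middleware;

use Illuminate\Http\Middleware\TrustProxies as Middleware;

class TrustProxies extends Middleware
{
    protected $proxies = '*';
}

In addition, make sure an application key is set on the dyno, for example with heroku config:set APP_KEY="$(php artisan key:generate --show)", and if assets still load over plain HTTP, force HTTPS URLs with URL::forceScheme('https') in AppServiceProvider::boot().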
Q: Images aren't showed in tabs in tkinter python I'm trying to show a picture using canvas or directly in the tab, but it doesn't work, it doesn't show an error but picture is not displayed, what am I doing wrong? I need to use a vertical scrollbar and add some widgets, I tried using canvas.create_image and labels but pictures aren't being showed This is my main code: import tkinter as tk from tkinter import ttk from Listado import * import os class Gui(ttk.Frame): def __init__(self): self.db_filename = 'Tienda.db' # Temp file self.temp_dir = "temp" dir_list = os.listdir("./") for file in dir_list: if file.lower() == "temp": self.temp_dir = file break else: self.temp_dir = "temp" if not os.path.exists(self.temp_dir): os.mkdir(self.temp_dir) self.create_gui() def create_gui(self): self.main_window = tk.Tk() self.main_window.attributes('-fullscreen', True) self.screen_width = self.main_window.winfo_screenwidth() self.screen_height = self.main_window.winfo_screenheight() self.tabs = ttk.Notebook(self.main_window) self.tab_admin = ttk.Frame(self.tabs) self.tab_listado = ttk.Frame(self.tabs) self.tab_carrito = ttk.Frame(self.tabs) self.tab_ventas = ttk.Frame(self.tabs) self.tab_login = ttk.Frame(self.tabs) self.tabs.add(self.tab_admin, text ='Administrar') self.tabs.add(self.tab_listado, text ='Ver listado') self.tabs.add(self.tab_carrito, text ='Carrito') self.tabs.add(self.tab_ventas, text ='Ventas') self.tabs.add(self.tab_login, text ='Login') self.listado = Listado(self) self.tabs.pack(expand = 1, fill ="both") self.main_window.mainloop() if __name__ == "__main__": Gui() This is the file named "Listado" import tkinter as tk from tkinter import ttk from tkinter import * from tkinter import scrolledtext as st from PIL import Image, ImageTk class Listado: def __init__(self,Gui): self.create_gui(Gui) def create_gui(self,Gui): img = Image.open("laptop.png") img = img.resize((100, 100), Image.ANTIALIAS) img = ImageTk.PhotoImage(img) container = ttk.Frame(Gui.tab_admin) canvas = tk.Canvas(container) scrollbar = ttk.Scrollbar(container, orient="vertical", command=canvas.yview) scrollable_frame = ttk.Frame(canvas) scrollable_frame.bind( "<Configure>", lambda e: canvas.configure( scrollregion=canvas.bbox("all") ) ) canvas.create_window((0, 0), window=scrollable_frame, anchor="nw") canvas.configure(yscrollcommand=scrollbar.set) for i in range(200): ttk.Label(scrollable_frame, text="Sample scrolling label",image=img).pack() container.pack(side="left",fill="both",expand=True) canvas.pack(side="left", fill="both", expand=True) scrollbar.pack(side="right", fill="y") ttk.Label(scrollable_frame,image=img).pack()
Images aren't showed in tabs in tkinter python
I'm trying to show a picture using canvas or directly in the tab, but it doesn't work, it doesn't show an error but picture is not displayed, what am I doing wrong? I need to use a vertical scrollbar and add some widgets, I tried using canvas.create_image and labels but pictures aren't being showed This is my main code: import tkinter as tk from tkinter import ttk from Listado import * import os class Gui(ttk.Frame): def __init__(self): self.db_filename = 'Tienda.db' # Temp file self.temp_dir = "temp" dir_list = os.listdir("./") for file in dir_list: if file.lower() == "temp": self.temp_dir = file break else: self.temp_dir = "temp" if not os.path.exists(self.temp_dir): os.mkdir(self.temp_dir) self.create_gui() def create_gui(self): self.main_window = tk.Tk() self.main_window.attributes('-fullscreen', True) self.screen_width = self.main_window.winfo_screenwidth() self.screen_height = self.main_window.winfo_screenheight() self.tabs = ttk.Notebook(self.main_window) self.tab_admin = ttk.Frame(self.tabs) self.tab_listado = ttk.Frame(self.tabs) self.tab_carrito = ttk.Frame(self.tabs) self.tab_ventas = ttk.Frame(self.tabs) self.tab_login = ttk.Frame(self.tabs) self.tabs.add(self.tab_admin, text ='Administrar') self.tabs.add(self.tab_listado, text ='Ver listado') self.tabs.add(self.tab_carrito, text ='Carrito') self.tabs.add(self.tab_ventas, text ='Ventas') self.tabs.add(self.tab_login, text ='Login') self.listado = Listado(self) self.tabs.pack(expand = 1, fill ="both") self.main_window.mainloop() if __name__ == "__main__": Gui() This is the file named "Listado" import tkinter as tk from tkinter import ttk from tkinter import * from tkinter import scrolledtext as st from PIL import Image, ImageTk class Listado: def __init__(self,Gui): self.create_gui(Gui) def create_gui(self,Gui): img = Image.open("laptop.png") img = img.resize((100, 100), Image.ANTIALIAS) img = ImageTk.PhotoImage(img) container = ttk.Frame(Gui.tab_admin) canvas = tk.Canvas(container) scrollbar = ttk.Scrollbar(container, orient="vertical", command=canvas.yview) scrollable_frame = ttk.Frame(canvas) scrollable_frame.bind( "<Configure>", lambda e: canvas.configure( scrollregion=canvas.bbox("all") ) ) canvas.create_window((0, 0), window=scrollable_frame, anchor="nw") canvas.configure(yscrollcommand=scrollbar.set) for i in range(200): ttk.Label(scrollable_frame, text="Sample scrolling label",image=img).pack() container.pack(side="left",fill="both",expand=True) canvas.pack(side="left", fill="both", expand=True) scrollbar.pack(side="right", fill="y") ttk.Label(scrollable_frame,image=img).pack()
[]
[]
[ "It looks like you are trying to display an image in a Tkinter canvas widget. However, you are not keeping a reference to the img object that you create, which means that it will be garbage collected and will not be displayed in the canvas.\nTo fix this, you need to keep a reference to the img object. You can do this by assigning it to a variable that is accessible in the scope where you use it to create the image in the canvas. Here is an example of how you could do this:\n# Import the necessary modules\nimport tkinter as tk\nfrom PIL import Image, ImageTk\n\n# Create the main window\nwindow = tk.Tk()\n\n# Load the image and resize it\nimg = Image.open(\"laptop.png\")\nimg = img.resize((100, 100), Image.ANTIALIAS)\n\n# Create a canvas widget\ncanvas = tk.Canvas(window)\n\n# Use the ImageTk.PhotoImage class to create a Tkinter-compatible image\n# object, and keep a reference to it\nimg = ImageTk.PhotoImage(img)\n\n# Create an image item in the canvas and display the image\ncanvas.create_image(0, 0, image=img, anchor=\"nw\")\n\n# Pack the canvas to display it\ncanvas.pack()\n\n# Start the main event loop\nwindow.mainloop()\n\nIn this code, we keep a reference to the img object by assigning it to a variable with the same name. We then pass this variable as the image option when we create the image item in the canvas. This ensures that the img object is not garbage collected and the image is displayed in the canvas.\nYou can apply this same principle to your code to fix the issue with the image not being displayed. You can either assign the img object to a variable with the same name in your create_gui function, or you can create a class attribute to store the reference to the img object and use it in the create_gui function.\n" ]
[ -2 ]
[ "canvas", "image", "python", "scrollbar", "tkinter" ]
stackoverflow_0074680916_canvas_image_python_scrollbar_tkinter.txt
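The keep-a-reference advice in the unaccepted answer is the actual explanation of the blank images: in the posted Listado class the PhotoImage is a local variable, so it is garbage-collected as soon as create_gui() returns and every label ends up empty. A minimal sketch of the fix, assuming the same laptop.png file and the tab structure from the main code:

from tkinter import ttk
from PIL import Image, ImageTk


class Listado:
    def __init__(self, gui):
        self.create_gui(gui)

    def create_gui(self, gui):
        img = Image.open("laptop.png").resize((100, 100))
        # Keep the PhotoImage on self so it outlives this method call; Tkinter
        # does not hold its own strong reference and would otherwise show nothing.
        self.img = ImageTk.PhotoImage(img)

        frame = ttk.Frame(gui.tab_admin)
        for _ in range(200):
            # compound="left" makes the label show both the text and the image
            ttk.Label(frame, text="Sample scrolling label", image=self.img,
                      compound="left").pack()
        frame.pack(fill="both", expand=True)

The same idea applies to the scrollable-canvas version in the question: store self.img (or a list of images, if each row needs its own) and pass that attribute to the labels.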
Q: React/Next.js site doesn't load properly in Safari (blank page) I know this is very general but I have a bug in my Next.js website, where when I open my site in Safari, it sometimes loads and sometimes doesn't (almost 50/50 chance - shows a blank page, but I can see outlines of some of my components, no text though). It happens on both iOS/macOS versions of Safari. I read about Cache-Control headers which apparently cause Safari trouble when trying to load the page, but I tried those solutions and they didn't work for me (e.g. setting headers like so res.header("Cache-Control", "no-cache, no-store, must-revalidate") and adding app.disable('etag') to my node server). All I would like to know at this point is the root cause of this. Is it a React thing? Node thing? Next.js or the browser itself (that would be my guess as all the other browsers don't have this issue). Also it's very strange that this doesn't happen 100% of times. On localhost this issue does not happen. (Different headers?) Has anyone ever run into the same issue? Any feedback is welcomed. Thanks. EDIT: So I managed to fix this. The issue was the font I was trying to use imported from Adobe Fonts. Spent 3 days on this but once I replaced the font with a standard Google Font, everything started working OK. Hope this saves someone a headache. A: For me it was a simple hard refresh (CTRL + SHIFT + R) to solve this.
React/Next.js site doesn't load properly in Safari (blank page)
I know this is very general but I have a bug in my Next.js website, where when I open my site in Safari, it sometimes loads and sometimes doesn't (almost 50/50 chance - shows a blank page, but I can see outlines of some of my components, no text though). It happens on both iOS/macOS versions of Safari. I read about Cache-Control headers which apparently cause Safari trouble when trying to load the page, but I tried those solutions and they didn't work for me (e.g. setting headers like so res.header("Cache-Control", "no-cache, no-store, must-revalidate") and adding app.disable('etag') to my node server). All I would like to know at this point is the root cause of this. Is it a React thing? Node thing? Next.js or the browser itself (that would be my guess as all the other browsers don't have this issue). Also it's very strange that this doesn't happen 100% of times. On localhost this issue does not happen. (Different headers?) Has anyone ever run into the same issue? Any feedback is welcomed. Thanks. EDIT: So I managed to fix this. The issue was the font I was trying to use imported from Adobe Fonts. Spent 3 days on this but once I replaced the font with a standard Google Font, everything started working OK. Hope this saves someone a headache.
[ "For me it was a simple hard refresh (CTRL + SHIFT + R) to solve this.\n" ]
[ 0 ]
[ "There could be several reasons why a React or Next.js site may not load properly in Safari. Some possible reasons include:\nThe site may be using features that are not supported by Safari, such as certain JavaScript or CSS features.\nThe site may be experiencing technical issues, such as server errors or connectivity issues, which are preventing it from loading properly.\nThe site may not have been optimized for use with Safari, and may not have been tested extensively on that browser.\nTo troubleshoot the issue, it would be helpful to check the site's JavaScript console for any error messages, and to try accessing the site on a different browser to see if the issue persists\n" ]
[ -1 ]
[ "javascript", "next.js", "node.js", "reactjs", "safari" ]
stackoverflow_0063055442_javascript_next.js_node.js_reactjs_safari.txt
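Given that the blank page traced back to a blocking Adobe web font, a small hedged sketch of the usual mitigation, which keeps text visible with a fallback font while the real one loads; it assumes the font is declared through an @font-face rule you control (fonts injected purely by a third-party stylesheet need the equivalent display setting configured on the provider's side):

@font-face {
  font-family: "BodyFont";                          /* placeholder family name */
  src: url("/fonts/body.woff2") format("woff2");    /* placeholder path */
  font-display: swap;  /* paint fallback text immediately, swap in the web font later */
}

This does not explain why Safari alone failed, but it removes the invisible-text window in which the page appears blank.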
Q: How can I create a window in python? Write a program that displays a rectangle whose frame consists of asterisk ' * ' characters, the inner part of ' Q ' characters. The program will ask the user to indicate the number of rows and columns of the rectangle, these values ​​cannot be less than 3. I tried to create various print() one below the other but I don't understand how to make them adapt to the user, in the sense that if the user asks me for 10 lines I don't know how to make this happen...the window should look like this: ********************* *QQQQQQQQQQQQQQQQQQQ* *QQQQQQQQQQQQQQQQQQQ* *QQQQQQQQQQQQQQQQQQQ* ********************* A: # Ask the user for the number of rows and columns num_rows = int(input("Enter the number of rows: ")) num_cols = int(input("Enter the number of columns: ")) # Make sure the values are at least 3 num_rows = max(num_rows, 3) num_cols = max(num_cols, 3) # Print the top row of asterisks print("*" * num_cols) # Print the middle rows of asterisks and Qs for i in range(num_rows - 2): print("*" + "Q" * (num_cols - 2) + "*") # Print the bottom row of asterisks print("*" * num_cols)
How can I create a window in python?
Write a program that displays a rectangle whose frame consists of asterisk ' * ' characters and whose inner part consists of ' Q ' characters. The program will ask the user to indicate the number of rows and columns of the rectangle; these values cannot be less than 3. I tried to create various print() calls one below the other, but I don't understand how to make them adapt to the user: if the user asks me for 10 lines, I don't know how to make this happen... The window should look like this: ********************* *QQQQQQQQQQQQQQQQQQQ* *QQQQQQQQQQQQQQQQQQQ* *QQQQQQQQQQQQQQQQQQQ* *********************
[ "# Ask the user for the number of rows and columns\nnum_rows = int(input(\"Enter the number of rows: \"))\nnum_cols = int(input(\"Enter the number of columns: \"))\n\n# Make sure the values are at least 3\nnum_rows = max(num_rows, 3)\nnum_cols = max(num_cols, 3)\n\n# Print the top row of asterisks\nprint(\"*\" * num_cols)\n\n# Print the middle rows of asterisks and Qs\nfor i in range(num_rows - 2):\n print(\"*\" + \"Q\" * (num_cols - 2) + \"*\")\n\n# Print the bottom row of asterisks\nprint(\"*\" * num_cols)\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074680867_python.txt
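The answer silently bumps anything below 3 up to 3; if the exercise instead expects the program to keep asking until the user enters a valid size, a small variation with a re-prompt loop (the prompt wording is an assumption):

def read_size(prompt):
    # Keep asking until the user types a whole number that is at least 3.
    while True:
        try:
            value = int(input(prompt))
        except ValueError:
            print("Please enter a whole number.")
            continue
        if value >= 3:
            return value
        print("The value cannot be less than 3.")


rows = read_size("Enter the number of rows: ")
cols = read_size("Enter the number of columns: ")

print("*" * cols)                      # top border
for _ in range(rows - 2):              # middle rows
    print("*" + "Q" * (cols - 2) + "*")
print("*" * cols)                      # bottom border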
Q: Filter timeseries data based on multiple criteria I have a table where I store timeseries data: customer_id transaction_type transaction_date transaction_value 1 buy 2022-12-04 100.0 1 sell 2022-12-04 80.0 2 buy 2022-12-04 120.0 2 sell 2022-12-03 120.0 1 buy 2022-12-02 90.0 1 sell 2022-12-02 70.0 2 buy 2022-12-01 110.0 2 sell 2022-12-01 110.0 Number of customers and transaction types is not limited. Currently there are over 10,000 customers and over 600 transaction types. Dates of transactions between customers can be unique and will not always align based on any criteria (that's why I've tried using LATERAL JOIN — you'll see it later). I want to filter those records to get customers IDs with the values of the transaction where any arbitrary condition is met. Number of those conditions in a query is not restricted to two — can be anything. For example: Give me all customers who have a buy with value > $90 and a sale with value > 100$ as their latest transactions The final query should return these two rows: customer_id transaction_type transaction_date transaction_value 2 buy 2022-12-04 120$ 2 sell 2022-12-03 120$ The closest I've came to what I need was by creating a materialized view cross-joining customer IDs and transaction_types: customer_id transaction_type 1 buy 1 sell 2 buy 2 sell And then running a LATERAL JOIN between table with transactions and customer_transactions materialized view: SELECT * FROM customer_transactions JOIN LATERAL ( SELECT * FROM transactions WHERE (transactions.customer_id = customer_transactions.customer_id) AND (transactions.transaction_type = customer_transactions.transaction_type) AND transactions.transaction_date <= '2022-12-04' -- this can change for filtering records back in time ORDER BY transactions.transaction_date DESC LIMIT 1 ) transactions ON TRUE WHERE customer_transactions.transaction_type = 'buy' AND customer_transactions.transaction_value > 90 It seems to be working when one condition is specified. But as soon as subsequential conditions are introduced that's where things start falling apart for me; changing condition to: WHERE (customer_transactions.transaction_type = 'buy' AND customer_transactions.transaction_value > 90) AND (customer_transactions.transaction_type = 'sell' AND customer_transactions.transaction_value > 100) is obviously not going to work as there is no row that satisfies both of these conditions. Is it possible to achieve this using the aproach I took? If so what am I missing? Or maybe there is another way to solve that would be more appropriate? 
A: You could use a CTE with row_number and check out the last transactions WITH CTE as (SELECT "customer_id", "transaction_type", "transaction_date", "transaction_value", ROW_NUMBER() OVER(PARTITION BY "customer_id", "transaction_type" ORDER BY "transaction_date" DESC) rn FROM tab1) SELECT "customer_id", "transaction_type", "transaction_date", "transaction_value" FROM CTE WHERE rn = 1 AND CASE WHEN "transaction_type" = 'buy' THEN ("transaction_value" > 90) WHEN "transaction_type" = 'sell' THEN ("transaction_value" > 100) ELSE FALSE END AND (SELECT COUNT(*) FROM CTE c1 WHERE c1."customer_id"= CTE."customer_id" and rn = 1 AND CASE WHEN "transaction_type" = 'buy' THEN ("transaction_value" > 90) WHEN "transaction_type" = 'sell' THEN ("transaction_value" > 100) ELSE FALSE END ) = 2 customer_id transaction_type transaction_date transaction_value 2 buy 2022-12-04 120.0 2 sell 2022-12-03 120.0 SELECT 2 fiddle A: Use distinct on with custom order to select all the latest transactions per customer according to your several criteria (hence the OR) - latest CTE, then count the number of result records per user using count as a window function - latest_with_count CTE - and finally pick those that have a count equal to the number of criteria, i.e. all the criteria are honoured. This may be a somewhat verbose and abstract template, but hopefully it helps with the generic problem. The idea would work for any number of conditions. with t as ( /* your query here with several conditions in DISJUNCTION (OR) here, i.e. WHERE (customer_transactions.transaction_type = 'buy' AND customer_transactions.transaction_value > 90) OR (customer_transactions.transaction_type = 'sell' AND customer_transactions.transaction_value > 100) */ ), latest as ( select distinct on (customer_id, transaction_type) * from t -- pick the latest per customer & type order by customer_id, transaction_type, transaction_date desc ), latest_with_count as ( select *, count(*) over (partition by customer_id) cnt from latest ) select * from latest_with_count where cnt = 2 -- the number of criteria
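Combining the question's lateral join with the counting idea from the second answer, a concrete (untested) version could look like the sketch below; the table and column names come from the question, while the two conditions and the final cnt = 2 are just the example criteria and would change with the number of conditions:

with latest as (
    select ct.customer_id, tr.transaction_type, tr.transaction_date, tr.transaction_value
    from customer_transactions ct
    cross join lateral (
        select t.transaction_type, t.transaction_date, t.transaction_value
        from transactions t
        where t.customer_id = ct.customer_id
          and t.transaction_type = ct.transaction_type
          and t.transaction_date <= '2022-12-04'
        order by t.transaction_date desc
        limit 1
    ) tr
),
matches as (
    select *, count(*) over (partition by customer_id) as cnt
    from latest
    where (transaction_type = 'buy'  and transaction_value > 90)
       or (transaction_type = 'sell' and transaction_value > 100)
)
select customer_id, transaction_type, transaction_date, transaction_value
from matches
where cnt = 2;  -- 2 = number of criteria, i.e. every condition matched

The window count is evaluated after the WHERE filter, so cnt counts only the conditions a customer actually satisfies, which is exactly what the final comparison needs.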
Filter timeseries data based on multiple criteria
I have a table where I store timeseries data: customer_id transaction_type transaction_date transaction_value 1 buy 2022-12-04 100.0 1 sell 2022-12-04 80.0 2 buy 2022-12-04 120.0 2 sell 2022-12-03 120.0 1 buy 2022-12-02 90.0 1 sell 2022-12-02 70.0 2 buy 2022-12-01 110.0 2 sell 2022-12-01 110.0 Number of customers and transaction types is not limited. Currently there are over 10,000 customers and over 600 transaction types. Dates of transactions between customers can be unique and will not always align based on any criteria (that's why I've tried using LATERAL JOIN — you'll see it later). I want to filter those records to get customers IDs with the values of the transaction where any arbitrary condition is met. Number of those conditions in a query is not restricted to two — can be anything. For example: Give me all customers who have a buy with value > $90 and a sale with value > 100$ as their latest transactions The final query should return these two rows: customer_id transaction_type transaction_date transaction_value 2 buy 2022-12-04 120$ 2 sell 2022-12-03 120$ The closest I've came to what I need was by creating a materialized view cross-joining customer IDs and transaction_types: customer_id transaction_type 1 buy 1 sell 2 buy 2 sell And then running a LATERAL JOIN between table with transactions and customer_transactions materialized view: SELECT * FROM customer_transactions JOIN LATERAL ( SELECT * FROM transactions WHERE (transactions.customer_id = customer_transactions.customer_id) AND (transactions.transaction_type = customer_transactions.transaction_type) AND transactions.transaction_date <= '2022-12-04' -- this can change for filtering records back in time ORDER BY transactions.transaction_date DESC LIMIT 1 ) transactions ON TRUE WHERE customer_transactions.transaction_type = 'buy' AND customer_transactions.transaction_value > 90 It seems to be working when one condition is specified. But as soon as subsequential conditions are introduced that's where things start falling apart for me; changing condition to: WHERE (customer_transactions.transaction_type = 'buy' AND customer_transactions.transaction_value > 90) AND (customer_transactions.transaction_type = 'sell' AND customer_transactions.transaction_value > 100) is obviously not going to work as there is no row that satisfies both of these conditions. Is it possible to achieve this using the aproach I took? If so what am I missing? Or maybe there is another way to solve that would be more appropriate?
[ "You could use a CTE with row_number and chech out the last transactios\nWITH CTE as (SELECT\n\"customer_id\", \"transaction_type\", \"transaction_date\",\n \"transaction_value\",\nROW_NUMBER() OVER(PARTITION BY \"customer_id\", \"transaction_type\" ORDER BY \"transaction_date\" DESC) rn\nFROM tab1)\nSELECT \"customer_id\", \"transaction_type\", \"transaction_date\",\n \"transaction_value\" FROM CTE\n WHERE rn = 1 \n AND CASE WHEN \"transaction_type\" = 'buy' THEN (\"transaction_value\" > 90) \nWHEN \"transaction_type\" = 'sell' THEN (\"transaction_value\" > 100) \nELSE FALSE END \nAND (SELECT COUNT(*) FROM CTE c1 \n WHERE c1.\"customer_id\"= CTE.\"customer_id\" and rn = 1\n AND CASE WHEN \"transaction_type\" = 'buy' THEN (\"transaction_value\" > 90) \nWHEN \"transaction_type\" = 'sell' THEN (\"transaction_value\" > 100) \nELSE FALSE END ) = 2\n\n\n\n\n\ncustomer_id\ntransaction_type\ntransaction_date\ntransaction_value\n\n\n\n\n2\nbuy\n2022-12-04\n120.0\n\n\n2\nsell\n2022-12-03\n120.0\n\n\n\n\n\nSELECT 2\n\n\nfiddle\n", "Use distinct on with custom order to select all the latest transactions per customer according to your several criteria (hence the OR) - latest CTE, then count the number of result records per user using count as a window function - latest_with_count CTE - and finally pick these that have a count equal to the number of criteria, i.e. all the criteria are honoured.\nThis may be a bit verbose and abstract template but hopefully would help with the generic problem. The idea would work for any number of conditions.\nwith t as\n(\n /*\n your query here with several conditions in DISJUNCTION (OR) here, i.e.\n WHERE (customer_transactions.transaction_type = 'buy' AND customer_transactions.transaction_value > 90)\n OR (customer_transactions.transaction_type = 'sell' AND customer_transactions.transaction_value > 100)\n */\n),\nlatest as \n(\n select distinct on (customer_id, transaction_type) *\n from t\n -- pick the latest per customer & type\n order by customer_id, transaction_type, transaction_date desc\n),\nlatest_with_count as\n(\n select *, count(*) over (partition by customer_id) cnt\n from latest\n)\nselect * \nfrom latest_with_count\nwhere cnt = 2 -- the number of criteria\n\n" ]
[ 0, 0 ]
[]
[]
[ "postgresql", "sql" ]
stackoverflow_0074680018_postgresql_sql.txt
Q: NextJS - Haveibeenpwned API Fetch error 401 I'm using a valid key that I paid for, yet I keep getting a 401 when trying to fetch? What am I missing here? const checkCompromised = async (email: string) => { const secret = process.env.NEXT_PUBLIC_HAVEIBEENPWNED_API_KEY; const url = `https://haveibeenpwned.com/api/v3/breachedaccount/${email}`; const response = await fetch(url, { method: "GET", mode: "no-cors", headers: { "hibp-api-key": `${secret}`, "User-Agent": "test", }, }); const data = await response.text(); setCompromisedList(data); }; I think I'm following the documentation correctly...? https://haveibeenpwned.com/API/v3#Authorisation Has anybody here used HIBP API before? Any ideas?
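Two things in that snippet are worth ruling out before anything else: mode: "no-cors" tells the browser to send only CORS-safelisted headers, so a custom header like hibp-api-key is silently dropped (which alone would explain the 401), and a NEXT_PUBLIC_ variable is compiled into the client bundle, which exposes the key. A common pattern is to call HIBP from a server-side API route and have the browser call that route instead. A rough sketch, with a hypothetical route file name and a server-only env var name that are not from the original code, and assuming a Next.js/Node runtime where fetch is available on the server:

// pages/api/breaches.ts (hypothetical route)
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const email = String(req.query.email ?? "");
  const response = await fetch(
    `https://haveibeenpwned.com/api/v3/breachedaccount/${encodeURIComponent(email)}`,
    {
      headers: {
        "hibp-api-key": process.env.HAVEIBEENPWNED_API_KEY ?? "", // no NEXT_PUBLIC_ prefix, stays on the server
        "user-agent": "my-app", // HIBP rejects requests without a user agent
      },
    }
  );
  if (response.status === 404) {
    return res.status(200).json([]); // 404 from HIBP means "no breaches for this account"
  }
  const data = await response.json();
  res.status(response.status).json(data);
}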
NextJS - Haveibeenpwned API Fetch error 401
I'm using a valid key that I paid for, yet I keep getting a 401 when trying to fetch? What am I missing here? const checkCompromised = async (email: string) => { const secret = process.env.NEXT_PUBLIC_HAVEIBEENPWNED_API_KEY; const url = `https://haveibeenpwned.com/api/v3/breachedaccount/${email}`; const response = await fetch(url, { method: "GET", mode: "no-cors", headers: { "hibp-api-key": `${secret}`, "User-Agent": "test", }, }); const data = await response.text(); setCompromisedList(data); }; I think I'm following the documentation correctly...? https://haveibeenpwned.com/API/v3#Authorisation Has anybody here used HIBP API before? Any ideas?
[]
[]
[ "set the hibp-api-key header value to use the actual value of your API key instead of the variable name.\nheaders: {\n \"hibp-api-key\": secret,\n \"User-Agent\": \"test\",\n},\n\n" ]
[ -1 ]
[ "api", "next.js", "typescript" ]
stackoverflow_0074680915_api_next.js_typescript.txt
Q: Netbeans 11.3 and Java 14 Preview Features I am using Java 14 as the default Java platform for Netbeans 11.3 (netbeans_jdkhome is set to my Java 14 JDK) and trying to use a preview feature in a simple Java application. I set the source level to 14 and set --enable-preview as a compiler argument. The code compiles without errors. However, when I try to run it within Netbeans, it complains that the major version of the .class files is 57 while the runtime only plays well with 58 files and preview features. Here's the error: java.lang.UnsupportedClassVersionError: javaapplicationtest14/JavaApplicationTest14 (class file version 57.65535) was compiled with preview features that are unsupported. This version of the Java Runtime only recognizes preview features for class file version 58.65535 I checked the major version of the .class files and they are indeed 57. Any ideas why my project won't compile into Java 14 level? I am using an Ant build. A: As well as setting --enable-preview as a compiler option, it should also be set as a VM Option when running the code: However, that doesn't fix the problem, and unfortunately this looks like a NetBeans 11.3 bug. I reproduced your problem with a Java with Ant project, and created Bug Report NETBEANS-4049 UnsupportedClassVersionError when running JDK14 code with --enable-preview. There are a couple of workarounds if you need to use preview features with JDK 14 in NetBeans: Run your application from the command line (with --enable-preview as an option) instead of within NetBeans. The same code which fails with the UnsupportedClassVersionError in NetBeans runs fine in that environment, which strongly suggests that NetBeans is ignoring the --enable-preview run time option. Create a Java with Maven project instead of a Java with Ant project. You can then run your code which uses preview features within NetBeans. Update your question with more details if you still have problems. A: Netbeans Project Configuration (Java 14) Java 14 Netbeans >= 11 (Current: 12.0 LTS) Optional: Can use sdkman or set default java path: /opt/<jdk-install-dir> C:\Program Files\<jdk-install-dir> Project 'Run' Configuration Java Platform pom.xml Notes Check maven.compiler.source / maven.compiler.target Check build->plugins->plugin->...-> compilerArgs -> arg <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mycompany</groupId> <artifactId>Demo</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>14</maven.compiler.source> <maven.compiler.target>14</maven.compiler.target> </properties> <build> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>3.8.0</version> <configuration> <compilerArgs> <arg>--enable-preview</arg> </compilerArgs> </configuration> </plugin> </plugins> </build> </project> A: nb-javac should not be installed (it appears in the plugins). If it is installed in 11.3, it seems to create classfiles with version 57 not 58, which the runtime then objects to, as above. A: I'm trying this with Netbeans 15. I want to use the foreign linker API, and code completion is saying: MemoryAddress is a preview API and is disabled by default. 
(use --enable-preview to enable preview APIs) This is with both JDK 19 and JDK 20 preview. I put the --enable-preview option in the pom.xml and it doesn't change anything. If I go to project properties / Build / Compile, there are options to choose which JDK but no way to enter JDK options. It seems like --enable-preview is recommended by NetBeans hints but there's no way to actually do it. Is this a bug in NB or am I missing something? Very frustrating because I would like to use some of these features and I'm spending my time trying to get my code editor to work. I think this did work in previous versions of NB.
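One detail that is easy to miss in the Maven setups above: the compilerArgs entry only affects compilation. The JVM that later runs the code also needs --enable-preview, i.e. Surefire for tests and the exec plugin for running the application (which, as far as I know, is what NetBeans' Run action uses for Maven projects). A sketch of what that could look like in the pom.xml; the plugin versions and the main class are placeholders, not taken from the original project:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>3.0.0-M7</version>
    <configuration>
        <!-- JVM running the tests must also accept preview classfiles -->
        <argLine>--enable-preview</argLine>
    </configuration>
</plugin>
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>3.1.0</version>
    <configuration>
        <executable>java</executable>
        <arguments>
            <argument>--enable-preview</argument>
            <argument>-classpath</argument>
            <classpath/>
            <argument>com.mycompany.Main</argument> <!-- placeholder main class -->
        </arguments>
    </configuration>
</plugin>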
Netbeans 11.3 and Java 14 Preview Features
I am using Java 14 as the default Java platform for Netbeans 11.3 (netbeans_jdkhome is set to my Java 14 JDK) and trying to use a preview feature in a simple Java application. I set the source level to 14 and set --enable-preview as a compiler argument. The code compiles without errors. However, when I try to run it within Netbeans, it complains that the major version of the .class files is 57 while the runtime only plays well with 58 files and preview features. Here's the error: java.lang.UnsupportedClassVersionError: javaapplicationtest14/JavaApplicationTest14 (class file version 57.65535) was compiled with preview features that are unsupported. This version of the Java Runtime only recognizes preview features for class file version 58.65535 I checked the major version of the .class files and they are indeed 57. Any ideas why my project won't compile into Java 14 level? I am using an Ant build.
[ "As well as setting --enable-preview as a compiler option, it should also be set as a VM Option when running the code: \n\nHowever, that doesn't fix the problem, and unfortunately this looks like a NetBeans 11.3 bug. I reproduced your problem with a Java with Ant project, and created Bug Report NETBEANS-4049 UnsupportedClassVersionError when running JDK14 code with --enable-preview.\nThere are a couple of workarounds if you need to use preview features with JDK 14 in NetBeans:\n\nRun your application from the command line (with --enable-preview as an option) instead of within NetBeans. The same code which fails with the UnsupportedClassVersionError in NetBeans runs fine in that environment, which strongly suggests that NetBeans is ignoring the --enable-preview run time option. \nCreate a Java with Maven project instead of a Java with Ant project. You can then run your code which uses preview features within NetBeans.\n\nUpdate your question with more details if you still have problems.\n", "Netbeans Project Configuration (Java 14)\n\nJava 14\nNetbeans >= 11 (Current: 12.0 LTS)\n\nOptional:\n\nCan use sdkman or set default java path:\n\n /opt/<jdk-install-dir>\n\nC:\\Program Files\\<jdk-install-dir>\n\nProject 'Run' Configuration\n\nJava Platform\n\npom.xml\nNotes\n\nCheck maven.compiler.source / maven.compiler.target\nCheck build->plugins->plugin->...-> compilerArgs -> arg\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n <modelVersion>4.0.0</modelVersion>\n <groupId>com.mycompany</groupId>\n <artifactId>Demo</artifactId>\n <version>1.0-SNAPSHOT</version>\n <packaging>jar</packaging>\n <properties>\n <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n <maven.compiler.source>14</maven.compiler.source>\n <maven.compiler.target>14</maven.compiler.target>\n </properties>\n <build>\n <plugins>\n <plugin>\n <artifactId>maven-compiler-plugin</artifactId>\n <version>3.8.0</version>\n <configuration>\n <compilerArgs>\n <arg>--enable-preview</arg>\n </compilerArgs>\n </configuration>\n </plugin>\n </plugins>\n </build>\n</project>\n\n\n", "nb-javac should not be installed (it appears in the plugins). \nIf it is installed in 11.3, it seems to create classfiles with version 57 not 58, which the runtime then objects to, as above.\n", "I'm trying this with Netbeans 15. I want to use the foreign linker API, and code completion is saying:\n\nMemoryAddress is a preview API and is disabled by default. (use\n--enable-preview to enable preview APIs)\n\nThis is with both JDK 19 and JDK 20 preview. I put the --enable-preview option in the pom.xml and it doesn't change anything. If I go to project properties / Build / Compile, there are options to choose which JDK but no way to enter JDK options. It seems like --enable-preview is recommended by NetBeans hints but there's no way to actually do it. Is this a bug in NB or am I missing something? Very frustrating because I would like to use some of these features and I'm spending my time trying to get my code editor to work. I think this did work in previous versions of NB.\n" ]
[ 4, 2, 0, 0 ]
[]
[]
[ "java", "java_14", "netbeans" ]
stackoverflow_0060790646_java_java_14_netbeans.txt
Q: blocking access to wp-login.php through cloudflare I'm trying to block access to my wp-login.php through cloudflare WAF. I created a firewall rule with the following content: URI path equals /wp-login.php AND IP source address equals <my_ipv4> Action: block As you can see, I'm testing this rule by blocking my own IP address. This does not work: I can still see wp-login.php when visiting my website. A second thing I tried is the IP Access Rules. I blocked my IP for zone "this website", but this also doesn't work. I got my IP address from this website: https://whatismyipaddress.com/ What could be wrong? A: I would start by checking the Security Events log, to make sure you are trying to block the right IP address. Second, check your DNS setup: Cloudflare can only protect an endpoint whose DNS record is “Proxied” (orange cloud), not “DNS only”.
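For reference, the equivalent filter in the rule's expression editor would look roughly like the line below (the address is a placeholder; note that a rule written this way blocks only that one address, so to lock the login page down to everyone except yourself you would invert the IP test with ne / "is not"):

(http.request.uri.path eq "/wp-login.php" and ip.src eq 203.0.113.7)

If the record is grey-clouded (DNS only), traffic bypasses Cloudflare entirely and no WAF rule will ever fire, which matches the symptom described in the question.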
blocking access to wp-login.php through cloudflare
I'm trying to block access to my wp-login.php through cloudflare WAF. I created a firewall rule with the following content: URI path equals /wp-login.php AND IP source address equals <my_ipv4> Action: block As you can see, I'm testing this rule by blocking my own IP-address. This does not work, I can see wp-login.php still when visiting my website. A second thing I tried are the IP Access Rules. I blocked my IP for zone "this website", but this also doesn't work. I got my IP address from this website: https://whatismyipaddress.com/ What could be wrong?
[ "I think you can start from checking the Security Events, for make sure that you trying to block right IP address.\nAnd second one - need to check your DNS setup. Cloudflare can protect yours endpoint only if you have “Proxied” DNS records: .\n" ]
[ 0 ]
[]
[]
[ "cloudflare" ]
stackoverflow_0074679729_cloudflare.txt
Q: Access packages outside of current package setup.py I am trying to access packages outside of the current package using setup.py. My project structure looks like this. Example1/ |-- submodule1/ | |-- __init__.py | |-- main/ | |-- __init__.py | |-- hello.py | |-- setup.py |-- submodule2/ | |-- __init__.py | |-- main/ | |-- __init__.py | |-- world.py | |-- setup.py |-- submodule3/ | |-- __init__.py | |-- main/ | |-- __init__.py | |-- sample.py | |-- setup.py |-- utils/ | |-- __init__.py | |-- util_code1.py | |-- util_code2.py I am trying to include the utils package dir in the setup.py of the submodules. Here is how my setup.py looks: setup( name='sample_package', description='my test wheel', #packages=find_packages(), packages=['main', '../../utils'] entry_points={ 'group_1': 'module1=Example1.main.hello:method1' } ], include_package_data=True, ) When I run the command python setup.py bdist_wheel inside any of the submodules to create a wheel, I get the following error. error: package directory '../../utils' does not exist
Access packages outside of current package setup.py
I am trying to access packages outside of the current package using setup.py. My project structure looks like this. Example1/ |-- submodule1/ | |-- __init__.py | |-- main/ | |-- __init__.py | |-- hello.py | |-- setup.py |-- submodule2/ | |-- __init__.py | |-- main/ | |-- __init__.py | |-- world.py | |-- setup.py |-- submodule3/ | |-- __init__.py | |-- main/ | |-- __init__.py | |-- sample.py | |-- setup.py |-- utils/ | |-- __init__.py | |-- util_code1.py | |-- util_code2.py I am trying to include utils package dir in setup.py of submodules. here is how my setup.py looks setup( name='sample_package', description='my test wheel', #packages=find_packages(), packages=['main', '../../utils'] entry_points={ 'group_1': 'module1=Example1.main.hello:method1' } ], include_package_data=True, ) When I run command inside any of submodule python setup.py bdist_wheel to create a wheel for any submodule I am getting the following error. error: package directory '../../utils' does not exist
[ "It looks like the setup.py file you provided is not correct. In the packages parameter of the setup function, you are trying to include the ../../utils directory as a package, but this directory does not exist relative to the setup.py file.\nIn order to include the utils package, you should include it as utils instead of ../../utils. This will make the setup function look for the utils package in the same directory as the setup.py file.\nHere is how your setup.py file should look:\nsetup(\n name='sample_package',\n description='my test wheel',\n #packages=find_packages(), \n packages=['main', 'utils']\n entry_points={\n 'group_1': 'module1=Example1.main.hello:method1'\n }\n ],\n include_package_data=True,\n)\n\nYou can also use the find_packages function from the setuptools package to automatically find all packages in your project. This can be useful if you have a complex project with many subdirectories.\nHere is an example of how you can use the find_packages function in your setup.py file:\nfrom setuptools import find_packages\n\nsetup(\n name='sample_package',\n description='my test wheel',\n packages=find_packages(), \n entry_points={\n 'group_1': 'module1=Example1.main.hello:method1'\n }\n ],\n include_package_data=True,\n)\n\nThis will automatically find all packages in your project and include them in the wheel file that is generated when you run the python setup.py bdist_wheel command.\n", "#It looks like you are trying to include the utils package in the setup.py file of your submodules. However, the way you have specified the package in the setup function is incorrect.\n\n#To include a package in your setup.py file, you need to specify the package name and its path relative to the setup.py file. In your case, the utils package is located at ../../utils, but this is not a valid package name. Instead, you need to specify the package name, which is utils, and its relative path, which is ../../utils.\n\n#Here is how you can fix this error:\n\nsetup(\n name='sample_package',\n description='my test wheel',\n #packages=find_packages(), \n packages=['main', 'utils'],\n package_dir={'utils': '../../utils'},\n entry_points={\n 'group_1': 'module1=Example1.main.hello:method1'\n }\n ],\n include_package_data=True,\n)\n#The package_dir parameter specifies the package name and its relative path, so that the setup.py script knows where to find the package.\n\n#You can then run the python setup.py bdist_wheel command to build the wheel for your submodule.\n\n" ]
[ 0, 0 ]
[ "The issue is that the package_dir parameter in your setup.py file is not correctly specifying the path to the utils package. The package_dir parameter should be a dictionary that maps package names to the directories where the packages are located. In your case, you could add the following to your setup.py file to correctly specify the path to the utils package:\npackage_dir={\n 'main': 'main',\n 'utils': '../../utils'\n}\n\nWith this change, your setup.py file should look like this:\nsetup(\n name='sample_package',\n description='my test wheel',\n #packages=find_packages(), \n packages=['main', 'utils'],\n package_dir={\n 'main': 'main',\n 'utils': '../../utils'\n },\n entry_points={\n 'group_1': 'module1=Example1.main.hello:method1'\n }\n ],\n include_package_data=True,\n)\n\nThis should fix the error and allow you to correctly create a wheel for your submodule. For more information about the package_dir parameter and other parameters you can use in setup.py, you can check out the documentation for the setuptools package here.\n" ]
[ -1 ]
[ "python", "python_packaging", "setup.py", "setuptools" ]
stackoverflow_0074652871_python_python_packaging_setup.py_setuptools.txt
Q: Bash: iterate over json array and stringify the json value I have a JSON file (test.json) with the following structure [ { "key1" : "value-1", "key2" : { "key3" : "value3" } }, { "key1" : "value-2", "key2" : { "key4" : "value4" } } ] I want to convert the file content into the following structure [ { "key1" : "value-1", "key2" : "{\"key3\":\"value3\"}" // basically the stringified form of the json }, { "key1" : "value-2", "key2" : "{\"key4\":\"value4\"}" // basically the stringified form of the json } ] I tried the following; I am able to convert key2 into stringified JSON but am not sure how to write it back into the JSON. Whatever references I find in existing Stack Overflow questions just add a new value to the existing JSON object. I need to read the existing value, modify it, and then update the same JSON array jq -c '.[]' $BUILD_DIR/test.json | while read i; do echo $(jq -r '.key2' <<< "$i") | jq '@json' done The following works for me if I just need to directly update the key2 value. This might look very obvious, but I am very new to bash script syntax. jq '( .[]).key2 |= "foo"' $BUILD_DIR/test.json A: jq 'map(.key2 |= tostring)' Output [ { "key1": "value-1", "key2": "{\"key3\":\"value3\"}" }, { "key1": "value-2", "key2": "{\"key4\":\"value4\"}" } ] Demo on jq play
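For completeness, one way to apply that filter to the file from the question and write the result back (jq cannot edit a file in place, so a temporary file is used; the path variable is the one from the question):

jq 'map(.key2 |= tostring)' "$BUILD_DIR/test.json" > "$BUILD_DIR/test.json.tmp" \
  && mv "$BUILD_DIR/test.json.tmp" "$BUILD_DIR/test.json"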
Bash: iterate over json array and stringify the json value
I have a json file(test.json) with following structure [ { "key1" : "value-1", "key2" : { "key3" : "value3" } }, { "key1" : "value-2", "key2" : { "key4" : "value4" } } ] I want to covert the file content into following given structure [ { "key1" : "value-1", "key2" : "{\"key3\":\"value3\"}" // basically the stringyfy form of json }, { "key1" : "value-2", "key2" : "{\"key4\":\"value4\"}" // basically the stringyfy form of json } ] I tried like following, able to convert key2 into stringyfy JSON but not sure how to update into JSON. Whatever references I am getting from existing stackoverflow question, they are just adding new value to the existing json object. I need to read the existing value and modify the same and then update it to same json array jq -c '.[]' $BUILD_DIR/test.json | while read i; do echo $(jq -r '.key2' <<< "$i") | jq '@json' done Following one is working for me if I just need to direct update the key2 value, This might look very obvious one but I am very new to bash script syntax. jq '( .[]).key2 |= "foo"' $BUILD_DIR/test.json
[ "jq 'map(.key2 |= tostring)'\n\nOutput\n[\n {\n \"key1\": \"value-1\",\n \"key2\": \"{\\\"key3\\\":\\\"value3\\\"}\"\n },\n {\n \"key1\": \"value-2\",\n \"key2\": \"{\\\"key4\\\":\\\"value4\\\"}\"\n }\n]\n\nDemo on jq play\n" ]
[ 2 ]
[]
[]
[ "arrays", "bash", "json" ]
stackoverflow_0074680431_arrays_bash_json.txt
Q: VBA to find and delete all highlighted text in an email I need to use VBA to find and delete all highlighted text in an email body. I was trying to use WordEditor in Outlook VBA to do so. I know the following would work in a Word document because I recorded the macro in Word: .Find.ClearFormatting .Find.Highlight = True .Find.Replacement.ClearFormatting With .Find .Text = "" .Replacement.Text = "" .Forward = True .Wrap = wdFindContinue .Format = True .MatchCase = False .MatchWholeWord = False .MatchWildcards = False .MatchSoundsLike = False .MatchAllWordForms = False MsgBox "running macro" End With Selection.Find.Execute Replace:=wdReplaceAll Can someone help with defining all the necessary Outlook objects and code that I need to include in the Outlook macro? I know I need to dim ObjectInspector and a few other objects. I am fairly new to Outlook VBA objects and don't know what is required to make it work. Any help will be greatly appreciated. A: To deal with the Word editor in Outlook you need to use the Inspector.WordEditor property which returns the Microsoft Word Document Object Model of the message. You can use the following ways to get an instance of the Inspector class: Use the ActiveInspector method to return the object representing the currently active inspector (if there is one). Sub CloseItem() Dim myinspector As Outlook.Inspector Dim myItem As Outlook.MailItem Set myinspector = Application.ActiveInspector Set myItem = myinspector.CurrentItem myItem.Close olSave End Sub Use the GetInspector property to return the Inspector object associated with an item. Sub InsertBodyTextInWordEditor() Dim myItem As Outlook.MailItem Dim myInspector As Outlook.Inspector 'You must add a reference to the Microsoft Word Object Library 'before this sample will compile Dim wdDoc As Word.Document Dim wdRange As Word.Range On Error Resume Next Set myItem = Application.CreateItem(olMailItem) myItem.Subject = "Testing..." myItem.Display 'GetInspector property returns Inspector Set myInspector = myItem.GetInspector 'Obtain the Word.Document for the Inspector Set wdDoc = myInspector.WordEditor If Not (wdDoc Is Nothing) Then 'Use the Range object to insert text Set wdRange = wdDoc.Range(0, wdDoc.Characters.Count) wdRange.InsertAfter ("Hello world!") End If End Sub
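Putting the two pieces together, a sketch of an Outlook macro that grabs the Word editor of the currently open message and deletes highlighted text could look like this (an untested outline; it assumes a reference to the Microsoft Word Object Library so that constants like wdFindContinue and wdReplaceAll are available):

Sub DeleteHighlightedText()
    Dim myInspector As Outlook.Inspector
    Dim wdDoc As Word.Document

    Set myInspector = Application.ActiveInspector
    If myInspector Is Nothing Then Exit Sub
    Set wdDoc = myInspector.WordEditor
    If wdDoc Is Nothing Then Exit Sub

    With wdDoc.Content.Find
        .ClearFormatting
        .Highlight = True              ' match any highlighted text
        .Replacement.ClearFormatting
        .Text = ""
        .Replacement.Text = ""         ' replace with nothing = delete
        .Forward = True
        .Wrap = wdFindContinue
        .Format = True
        .Execute Replace:=wdReplaceAll
    End With
End Sub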
VBA to find and delete all highlighted text in an email
I need to use VBA to find and delete all highlighted text in an email body. I was trying to use WordEditor in Outlook VBA to do so. I know the following would work in a Word document because I recorded the macro in Word: .Find.ClearFormatting .Find.Highlight = True .Find.Replacement.ClearFormatting With .Find .Text = "" .Replacement.Text = "" .Forward = True .Wrap = wdFindContinue .Format = True .MatchCase = False .MatchWholeWord = False .MatchWildcards = False .MatchSoundsLike = False .MatchAllWordForms = False MsgBox "running macro" End With Selection.Find.Execute Replace:=wdReplaceAll Can someone help with defining all the necessary Outlook objects and code that I need to include in the Outlook macro? I know I need to dim ObjectInspector and a few other objects. I am fairly new to Outlook VBA objects and don't know what is required to make it work. Any help will be greatly appreciated.
[ "To deal with the Word editor in Outlook you need to use the Inspector.WordEditor property which returns the Microsoft Word Document Object Model of the message. You can use the following ways to get an instance of the Inspector class:\n\nUse the ActiveInspector method to return the object representing the currently active inspector (if there is one).\n\nSub CloseItem() \n Dim myinspector As Outlook.Inspector \n Dim myItem As Outlook.MailItem \n Set myinspector = Application.ActiveInspector \n Set myItem = myinspector.CurrentItem \n myItem.Close olSave \nEnd Sub\n\n\nUse the GetInspector property to return the Inspector object associated with an item.\n\nSub InsertBodyTextInWordEditor() \n Dim myItem As Outlook.MailItem \n Dim myInspector As Outlook.Inspector \n 'You must add a reference to the Microsoft Word Object Library \n 'before this sample will compile \n Dim wdDoc As Word.Document \n Dim wdRange As Word.Range \n \n On Error Resume Next \n Set myItem = Application.CreateItem(olMailItem) \n myItem.Subject = \"Testing...\" \n myItem.Display \n 'GetInspector property returns Inspector \n Set myInspector = myItem.GetInspector \n 'Obtain the Word.Document for the Inspector \n Set wdDoc = myInspector.WordEditor \n If Not (wdDoc Is Nothing) Then \n 'Use the Range object to insert text \n Set wdRange = wdDoc.Range(0, wdDoc.Characters.Count) \n wdRange.InsertAfter (\"Hello world!\") \n End If \nEnd Sub\n\n" ]
[ 0 ]
[]
[]
[ "email", "find", "outlook", "replace", "vba" ]
stackoverflow_0074680713_email_find_outlook_replace_vba.txt
Q: How to run an alter table migration with alembic - taking too long and never ends I'm trying to run a migration with alembic (add a column) but it is taking too long - and never ends. The table has 100 rows and I don't see an error. This is my migration code in python """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'd6fe1dec4bcd' down_revision = '3f532791c5f3' branch_labels = None depends_on = None def upgrade() -> None: op.add_column('products2', sa.Column( 'product_status', sa.String(255))) def downgrade() -> None: op.drop_column('products2', 'product_status') This is what I see in Postgres when I check SELECT * FROM pg_stat_activity WHERE state = 'active'; ALTER TABLE products2 ADD COLUMN product_status VARCHAR(255) This is what I see in the terminal INFO [alembic.runtime.migration] Context impl PostgresqlImpl. INFO [alembic.runtime.migration] Will assume transactional DDL. INFO [alembic.runtime.migration] Running upgrade 3f532791c5f3 -> d6fe1dec4bcd, create product status column How can I fix this? I'm running Postgres in Google Cloud Console, but I don't see any error on their platform A: Get the active locks from pg_locks: SELECT t.relname, l.locktype, page, virtualtransaction, pid, mode, granted FROM pg_locks l, pg_stat_all_tables t WHERE l.relation = t.relid ORDER BY relation asc; Copy the pid (e.g. 14210) from the above result and substitute it in the command below. SELECT pg_terminate_backend(14210)
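Besides terminating the blocking session, it can help to make the migration itself fail fast instead of queueing behind the lock forever. One option (PostgreSQL-specific, and the 5-second value is arbitrary) is to set a lock timeout at the top of the upgrade step:

def upgrade() -> None:
    # abort after 5s instead of waiting indefinitely for the ACCESS EXCLUSIVE lock
    op.execute("SET LOCAL lock_timeout = '5s'")
    op.add_column('products2', sa.Column('product_status', sa.String(255)))

SET LOCAL only applies inside the current transaction, which fits the transactional DDL noted in the log output above.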
How to run an alter table migration with alembic - taking too long and never ends
I'm trying to run a migration with alembic (add a column) but it taking too long - and never ends. The table has 100 rows and i don't see an error. This is my migration code in python """ from alembic import op import sqlalchemy as sa # revision identifiers, used by Alembic. revision = 'd6fe1dec4bcd' down_revision = '3f532791c5f3' branch_labels = None depends_on = None def upgrade() -> None: op.add_column('products2', sa.Column( 'product_status', sa.String(255))) def downgrade() -> None: op.drop_column('products2', 'product_status') This is what i see in postgres when I check SELECT * FROM pg_stat_activity WHERE state = 'active'; ALTER TABLE products2 ADD COLUMN product_status VARCHAR(255) This is what I see in terminal INFO [alembic.runtime.migration] Context impl PostgresqlImpl. INFO [alembic.runtime.migration] Will assume transactional DDL. INFO [alembic.runtime.migration] Running upgrade 3f532791c5f3 -> d6fe1dec4bcd, create product status column How can I fix this? I'm running the postgres in a Google Cloud COnsole, but i don`t see any error on their platform
[ "Get the active locks from pg_locks:\nSELECT t.relname, l.locktype, page, virtualtransaction, pid, mode, granted\nFROM pg_locks l, pg_stat_all_tables t \nWHERE l.relation = t.relid \nORDER BY relation asc;\nCopy the pid(ex: 14210) from above result and substitute in the below command.\n\nSELECT pg_terminate_backend(14210)\n\n" ]
[ 0 ]
[]
[]
[ "google_cloud_platform", "postgresql", "python" ]
stackoverflow_0074680825_google_cloud_platform_postgresql_python.txt
Q: Prevent immintrin.h from including avx512 headers when compiling without avx512 support I am compiling without AVX512 support, but I have noticed that immintrin.h drags in ton of LOC for AVX512 , e.g. /usr/lib/gcc/x86_64-linux-gnu/11/include/avx512fintrin.h I have tried to check if specifying march option helps, but it does not seem to help. cat avx512_include.cpp #include <immintrin.h> int main() {} g++ -march=core2 -E -P avx512_include.cpp | wc -l 34365 g++ -march=skylake-avx512 -E -P avx512_include.cpp | wc -l 34257 I know that I could theoretically hack around my gcc instalation and pray that it will work when I remove all content from avx512 headers , but I am looking for a supported way if there is one. I have tried to look inside gcc headers to see if there is some macro check to include avx512 headers or not, but they seem to be included unconditionally. P.S. I tried to find a gcc march tag, could not find any, if somebody knows more appropriate tags beside current please comment A: GCC/clang support __attribute__((target("avx512f"))) for use of AVX-512 stuff in a file compiled without a -march= that implies -mavx512f. So immintrin.h still has to pull in AVX512 definitions. But you'll still get compile errors from GCC and clang if you try to use __m512i v = _mm512_set1_ps(1.0); without enabling it. See The Effect of Architecture When Using SSE / AVX Intrinisics What exactly do the gcc compiler switches (-mavx -mavx2 -mavx512f) do? For shorter compile times, perhaps pre-compiled headers could be an option. Most Linux distros don't do that by default. Or in projects that won't use AVX-512 at all, yeah you could hack things up so the headers don't get included, or at least don't really get compiled. Least intrusive might be to define GCC's include-guard macros before the files are included the first time. GCC's pre-processor will still read through the files, but the compiler proper won't spend any time on them. By size, avx512fintrin.h and avx512vlintrin.h are the big ones; AMX and VNNI and so on are much smaller headers, so probably only really worth bothering with avx512*.h. $ grep --no-filename '#define.*_INCLUDED$' /usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/include/avx512* #define _AVX5124FMAPSINTRIN_H_INCLUDED #define _AVX5124VNNIWINTRIN_H_INCLUDED #define _AVX512BF16INTRIN_H_INCLUDED #define _AVX512BF16VLINTRIN_H_INCLUDED #define _AVX512BITALGINTRIN_H_INCLUDED #define _AVX512BWINTRIN_H_INCLUDED #define _AVX512CDINTRIN_H_INCLUDED #define _AVX512DQINTRIN_H_INCLUDED ... Maybe redirect this into a skip-avx512.h. A more aggressive approach that avoids even preprocessing those lines would be to make a custom version of /usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/include/immintrin.h and simply strip out all the #include lines for avx512* headers, and others like avxvnniintrin.h, amx*, and shaintrin.h that are too new for you to want to use. Maybe call it "custom_gcc_intrin.h" and include that instead of immintrin.h? Or call it immintrin.h but put it somewhere inside that specific project where it's looked for first as an include path? Don't modify your system copy of immintrin.h or set things up so a modified version is used by default; that will probably bite you at some point in the future when you try to compile something that does have AVX-512 code paths with runtime detection. You'll be super confused why your GCC is broken if you forget about this modification years ago.
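To make the least intrusive option concrete: pre-defining the include guards makes the preprocessor skip the bodies of those headers. A sketch (the two guard names follow the pattern in the grep output above, but they are GCC-version-dependent, so verify them against your own installation before relying on this):

// compile with e.g. g++ -march=core2 ...
#define _AVX512FINTRIN_H_INCLUDED   // pretend avx512fintrin.h was already seen
#define _AVX512VLINTRIN_H_INCLUDED  // likewise for avx512vlintrin.h
// ...repeat for the remaining avx512*intrin.h guards you want to skip...
#include <immintrin.h>

int main() {}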
Prevent immintrin.h from including avx512 headers when compiling without avx512 support
I am compiling without AVX512 support, but I have noticed that immintrin.h drags in ton of LOC for AVX512 , e.g. /usr/lib/gcc/x86_64-linux-gnu/11/include/avx512fintrin.h I have tried to check if specifying march option helps, but it does not seem to help. cat avx512_include.cpp #include <immintrin.h> int main() {} g++ -march=core2 -E -P avx512_include.cpp | wc -l 34365 g++ -march=skylake-avx512 -E -P avx512_include.cpp | wc -l 34257 I know that I could theoretically hack around my gcc instalation and pray that it will work when I remove all content from avx512 headers , but I am looking for a supported way if there is one. I have tried to look inside gcc headers to see if there is some macro check to include avx512 headers or not, but they seem to be included unconditionally. P.S. I tried to find a gcc march tag, could not find any, if somebody knows more appropriate tags beside current please comment
[ "GCC/clang support __attribute__((target(\"avx512f\"))) for use of AVX-512 stuff in a file compiled without a -march= that implies -mavx512f. So immintrin.h still has to pull in AVX512 definitions.\nBut you'll still get compile errors from GCC and clang if you try to use __m512i v = _mm512_set1_ps(1.0); without enabling it. See\n\nThe Effect of Architecture When Using SSE / AVX Intrinisics\nWhat exactly do the gcc compiler switches (-mavx -mavx2 -mavx512f) do?\n\n\nFor shorter compile times, perhaps pre-compiled headers could be an option. Most Linux distros don't do that by default.\nOr in projects that won't use AVX-512 at all, yeah you could hack things up so the headers don't get included, or at least don't really get compiled.\nLeast intrusive might be to define GCC's include-guard macros before the files are included the first time. GCC's pre-processor will still read through the files, but the compiler proper won't spend any time on them. By size, avx512fintrin.h and avx512vlintrin.h are the big ones; AMX and VNNI and so on are much smaller headers, so probably only really worth bothering with avx512*.h.\n$ grep --no-filename '#define.*_INCLUDED$' /usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/include/avx512*\n#define _AVX5124FMAPSINTRIN_H_INCLUDED\n#define _AVX5124VNNIWINTRIN_H_INCLUDED\n#define _AVX512BF16INTRIN_H_INCLUDED\n#define _AVX512BF16VLINTRIN_H_INCLUDED\n#define _AVX512BITALGINTRIN_H_INCLUDED\n#define _AVX512BWINTRIN_H_INCLUDED\n#define _AVX512CDINTRIN_H_INCLUDED\n#define _AVX512DQINTRIN_H_INCLUDED\n...\n\nMaybe redirect this into a skip-avx512.h.\nA more aggressive approach that avoids even preprocessing those lines would be to make a custom version of /usr/lib/gcc/x86_64-pc-linux-gnu/12.2.0/include/immintrin.h and simply strip out all the #include lines for avx512* headers, and others like avxvnniintrin.h, amx*, and shaintrin.h that are too new for you to want to use.\nMaybe call it \"custom_gcc_intrin.h\" and include that instead of immintrin.h? Or call it immintrin.h but put it somewhere inside that specific project where it's looked for first as an include path?\n\nDon't modify your system copy of immintrin.h or set things up so a modified version is used by default; that will probably bite you at some point in the future when you try to compile something that does have AVX-512 code paths with runtime detection. You'll be super confused why your GCC is broken if you forget about this modification years ago.\n" ]
[ 1 ]
[]
[]
[ "avx512", "g++", "gcc", "intrinsics" ]
stackoverflow_0074680139_avx512_g++_gcc_intrinsics.txt
Q: how to use type correctly in a function? I am trying to write numerical methods for calculating the integral module NumericalMethods( trapezoidal_method ) where type Func = Double -> Double type Interval = [Double] type N = Integer trapezoidal_method:: Func -> Interval -> N -> Double trapezoidal_method f interval n = h*((f (head interval) + f $ tail interval)/2 + sum_fxs) where h :: Double h = (tail interval - head interval) / fromIntegral n sum_fxs = sum fxs fxs = map f [head interval + h * fromIntegral x | x <- [1..n-1]] I get this error: • Couldn't match expected type ‘Double -> Double’ with actual type ‘Double’ • Possible cause: ‘f’ is applied to too many arguments In the first argument of ‘(+)’, namely ‘f (head interval)’ In the first argument of ‘($)’, namely ‘f (head interval) + f’ In the first argument of ‘(/)’, namely ‘(f (head interval) + f $ tail interval)’ | 10 | trapezoidal_method f interval n = h*((f (head interval) + f $ tail interval)/2 + sum_fxs) | ^^^^^^^^^^^^^^^^^ A: One issue is your use of $ in f (head interval) + f $ tail interval. The $ operator has the lowest precedence of almost everything, so it will be interpreted as (f (head interval) + f) $ (tail interval). It is trying to figure out what you mean with f (head interval) + f. You probably just want to use parentheses: f (head interval) + f (tail interval). A second issue is your use of lists for intervals. I think tuples would be more suitable: type Interval = (Double, Double) Then you can use fst and snd instead of the erroneous head and tail: trapezoidal_method:: Func -> Interval -> N -> Double trapezoidal_method f interval n = h*((f (fst interval) + f (snd interval))/2 + sum_fxs) where h :: Double h = (snd interval - fst interval) / fromIntegral n sum_fxs = sum fxs fxs = map f [fst interval + h * fromIntegral x | x <- [1..n-1]]
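A quick sanity check of the corrected, tuple-based version: integrating x*x over (0, 1) with 1000 strips should print a value very close to 0.3333. This assumes the Interval = (Double, Double) variant from the answer, called for example from a separate Main module that imports NumericalMethods:

main :: IO ()
main = print (trapezoidal_method (\x -> x * x) (0, 1) 1000)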
how to use type correctly in a function?
I am trying to write numerical methods for calculating the integral module NumericalMethods( trapezoidal_method ) where type Func = Double -> Double type Interval = [Double] type N = Integer trapezoidal_method:: Func -> Interval -> N -> Double trapezoidal_method f interval n = h*((f (head interval) + f $ tail interval)/2 + sum_fxs) where h :: Double h = (tail interval - head interval) / fromIntegral n sum_fxs = sum fxs fxs = map f [head interval + h * fromIntegral x | x <- [1..n-1]] I get this error: • Couldn't match expected type ‘Double -> Double’ with actual type ‘Double’ • Possible cause: ‘f’ is applied to too many arguments In the first argument of ‘(+)’, namely ‘f (head interval)’ In the first argument of ‘($)’, namely ‘f (head interval) + f’ In the first argument of ‘(/)’, namely ‘(f (head interval) + f $ tail interval)’ | 10 | trapezoidal_method f interval n = h*((f (head interval) + f $ tail interval)/2 + sum_fxs) | ^^^^^^^^^^^^^^^^^
[ "One issue is your use of $ in f (head interval) + f $ tail interval.\nThe $ operator has the lowest precedence of almost everything, so it will be interpreted as (f (head interval) + f) $ (tail interval). It is trying to figure out what you mean with f (head interval) + f.\nYou probably just want to use parentheses: f (head interval) + f (tail interval).\n\nA second issue is your use of lists for intervals. I think tuples would be more suitable:\ntype Interval = (Double, Double)\n\nThen you can use fst and snd instead of the erroneous head and tail:\ntrapezoidal_method:: Func -> Interval -> N -> Double\ntrapezoidal_method f interval n = h*((f (fst interval) + f (snd interval))/2 + sum_fxs)\n where\n h :: Double\n h = (snd interval - fst interval) / fromIntegral n \n sum_fxs = sum fxs\n fxs = map f [fst interval + h * fromIntegral x | x <- [1..n-1]]\n\n" ]
[ 2 ]
[]
[]
[ "haskell" ]
stackoverflow_0074680807_haskell.txt
Q: Undefined variable when calling a service I have a service that calls two methods of a web API. Here's the implementation: geo.service.ts import { HttpClient, HttpHeaders } from '@angular/common/http'; import { Injectable } from '@angular/core'; import { Observable } from 'rxjs'; import { environment } from 'src/environments/environment'; import { CountryResponse } from '../models/country.interface'; import { StateResponse } from '../models/state.interface'; @Injectable({ providedIn: 'root', }) export class GeoService { private apiUrl = environment.apiUrl + 'parameter'; constructor(private http: HttpClient) {} getCountry(): Observable<CountryResponse> { let requestUrl = this.apiUrl + '/country'; return this.http.get<CountryResponse>(requestUrl); } getStates( country_id: string): Observable<StateResponse[]> { let requestUrl = this.apiUrl + '/country/' + country_id + '/state'; return this.http.get<StateResponse[]>(requestUrl); } And my ngOnInit in the component this.geo_service .getCountry() .subscribe( (data) => { this.country = data; //works fine }, (err) => { this.notification_service.showError( 'Se ha producido un error' ); console.log(err); } ); this.geo_service .getState( this.country.country_id //undefined ) .subscribe( (data) => { this.state = data; }, (err) => { this.notification_service.showError( 'Se ha producido un error' ); console.log(err); } ); The problem is: this.country is undefined in the getState call in ngOnInit, and I don't know why or how to fix it. But the data is OK, because this.country = data works fine, as the code shows. Angular 12. Help please! Calling an API in chain A: The problem is simple: you have to synchronize the two async calls. When you call the state endpoint, the country is not ready yet. The simplest thing you can do is: this.geo_service .getCountry() .subscribe( (data) => { this.country = data //here call the state one. this.geo_service.getState(.... }
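Spelling the answer's idea out fully, either as a nested subscribe or with switchMap to avoid the nesting. Both sketches assume the service method is really named getStates (as in the service shown above; the component snippet calls getState, which would not compile) and that CountryResponse has a country_id field:

// nested subscribe
this.geo_service.getCountry().subscribe((country) => {
  this.country = country;
  this.geo_service.getStates(country.country_id).subscribe((states) => {
    this.state = states;
  });
});

// or, flattened with an RxJS operator
// import { switchMap } from 'rxjs/operators';
this.geo_service.getCountry().pipe(
  switchMap((country) => {
    this.country = country;
    return this.geo_service.getStates(country.country_id);
  })
).subscribe((states) => {
  this.state = states;
});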
Undefined variable when calling a service
I have a service that call two method of and webapi. Here's the implementation: geo.service.ts import { HttpClient, HttpHeaders } from '@angular/common/http'; import { Injectable } from '@angular/core'; import { Observable } from 'rxjs'; import { environment } from 'src/environments/environment'; import { CountryResponse } from '../models/country.interface'; import { StateResponse } from '../models/state.interface'; @Injectable({ providedIn: 'root', }) export class GeoService { private apiUrl = environment.apiUrl + 'parameter'; constructor(private http: HttpClient) {} getCountry(): Observable<CountryResponse> { let requestUrl = this.apiUrl + '/country'; return this.http.get<CountryResponse>(requestUrl); } getStates( country_id: string): Observable<StateResponse[]> { let requestUrl = this.apiUrl + '/country/' + country_id + '/state'; return this.http.get<StateResponse[]>(requestUrl); } And my ngOnInit in component this.geo_service .getCountry() .subscribe( (data) => { this.country = data; //works fine }, (err) => { this.notification_service.showError( 'Se ha producido un error' ); console.log(err); } ); this.geo_service .getState( this.country.country_id //undefined ) .subscribe( (data) => { this.state = data; }, (err) => { this.notification_service.showError( 'Se ha producido un error' ); console.log(err); } ); the problem is: this.country is undefined on getState call method in ngOnInit and i don't know why, and how to fix it. But the data is ok, because this.country = data works fine as code says. Angular 12. Help please! Calling an API in chain
[ "The problem is simple, you have to synchronize the two async call. When you call the state the country is not ready yet. The simplest think you can do is :\n this.geo_service\n .getCountry()\n .subscribe(\n (data) => {\n this.country = data\n //here call the state one.\n this.geo_service.getState(....\n }\n\n" ]
[ 1 ]
[]
[]
[ "angular", "angular12", "typescript" ]
stackoverflow_0074680957_angular_angular12_typescript.txt
Q: Creating as many files as the number of arguments to bash script Two arguments will be entered. One of these arguments will be the name of the file and the other will be the number to be generated. $./Project1.sh ./One.dic 4 -- Created: One-1.dic One-2.dic One-3.dic One-4.dic #!/bin/bash echo "name": $1"; echo "number: $2"; for ((i=1;i<=number;i=i+1)) do touch ??? done A: Assumptions: extension of the name (.dic in this case) is not known in advance the name will always include an extension if the name has multiple extensions (eg, .dic.orig) we want to place the number before the last extension (eg, .dic-1.orig) while OP's expected output shows the (relative) path removed (./ in this case), I'm going to assume we do not want to remove the relative path (eg, what if name is new/sub/dir/One.dic?) we won't worry about creating subdirectories that show up in the name (eg, if name=new/sub/dir/One.dic we'll assume the directories new/sub/dir already exist) One idea: name="$1" number="$2" # use parameter expansion to break name into 2 parts base="${name%.*}" ext="${name##*.}" for ((i=1;i<=number;i++)) do newname="${base}-${i}.${ext}" echo "${newname}" done For name=./One.dic and number=4 this generates: ./One-1.dic ./One-2.dic ./One-3.dic ./One-4.dic For name=new/sub/dir/One.dic.orig and number=3 this generates: new/sub/dir/One.dic-1.orig new/sub/dir/One.dic-2.orig new/sub/dir/One.dic-3.orig NOTES: once the results are verified OP can replace echo with touch (or whatever command is desired to create/populate the new file) OP may want to add some logic to verify the input parameters; for number it must be an integer; for name it must contain an extension ... this would entail more than just testing for a period since ./One has a period but no extension OP will need to decide how to handle name if (when?) it contains path/directories (eg, for name=new/sub/dir/One.dic does OP want to abort if directories do not exist? or go ahead and create the subdirectories?) A: #To create multiple files using a bash script, you can use a loop and the touch command. The touch command is used to create an empty file with a specified name. Here is an example of a script that creates as many files as the number of arguments passed to it: #!/bin/bash # Get the name of the file from the first argument name=$1 # Get the number of files to create from the second argument number=$2 # Use a loop to create the specified number of files for ((i=1;i<=number;i=i+1)) do # Create a file with the specified name and an index number appended to it touch "${name}-${i}.dic" done #In the example above, the script takes two arguments: the name of the file and the number of files to create. It then uses a loop to create the specified number of files, using the touch command and appending an index number to the file name. #To use the script, you would call it from the command line and pass the name of the file and the number of files to create as arguments, like this: $./Project1.sh ./One.dic 4 #This would create four files named One-1.dic, One-2.dic, One-3.dic, and One-4.dic. A: #You can try it then #To create a script that creates a number of files, you could use the following code: #!/bin/bash # Check if at least one argument was provided if [ $# -eq 0 ] then echo "Please provide the number of files to create as an argument." 
exit 1 fi # Create the specified number of files for ((i=1; i<=$1; i++)) do touch "file$i" done #To use the script, save it to a file with a .sh extension and make it executable using the chmod command: chmod +x create_files.sh #You can then run the script and specify the number of files to create as an argument: ./create_files.sh 5 #This will create five files named file1, file2, file3, file4, and file5.
Creating as many files as the number of arguments to bash script
Two arguments will be entered. One of these arguments will be the name of the file and the other will be the number to be generated. $./Project1.sh ./One.dic 4 -- Created: One-1.dic One-2.dic One-3.dic One-4.dic #!/bin/bash echo "name": $1"; echo "number: $2"; for ((i=1;i<=number;i=i+1)) do touch ??? done
[ "Assumptions:\n\nextension of the name (.dic in this case) is not known in advance\nthe name will always include an extension\nif the name has multiple extensions (eg, .dic.orig) we want to place the number before the last extension (eg, .dic-1.orig)\nwhile OP's expected output shows the (relative) path removed (./ in this case), I'm going to assume we do not want to remove the relative path (eg, what if name is new/sub/dir/One.dic?)\nwe won't worry about creating subdirectories that show up in the name (eg, if name=new/sub/dir/One.dic we'll assume the directories new/sub/dir already exist)\n\nOne idea:\nname=\"$1\"\nnumber=\"$2\"\n\n# use parameter expansion to break name into 2 parts\n\nbase=\"${name%.*}\"\next=\"${name##*.}\"\n\nfor ((i=1;i<=number;i++))\ndo\n newname=\"${base}-${i}.${ext}\"\n echo \"${newname}\"\ndone\n\nFor name=./One.dic and number=4 this generates:\n./One-1.dic\n./One-2.dic\n./One-3.dic\n./One-4.dic\n\nFor name=new/sub/dir/One.dic.orig and number=3 this generates:\nnew/sub/dir/One.dic-1.orig\nnew/sub/dir/One.dic-2.orig\nnew/sub/dir/One.dic-3.orig\n\nNOTES:\n\nonce the results are verified OP can replace echo with touch (or whatever command is desired to create/populate the new file)\nOP may want to add some logic to verify the input parameters; for number it must be an integer; for name it must contain an extension ... this would entail more than just testing for a period since ./One has a period but no extension\nOP will need to decide how to handle name if (when?) it contains path/directories (eg, for name=new/sub/dir/One.dic does OP want to abort if directories do not exist? or go ahead and create the subdirectories?)\n\n", "#To create multiple files using a bash script, you can use a loop and the touch command. The touch command is used to create an empty file with a specified name. Here is an example of a script that creates as many files as the number of arguments passed to it:\n\n#!/bin/bash\n\n# Get the name of the file from the first argument\nname=$1\n\n# Get the number of files to create from the second argument\nnumber=$2\n\n# Use a loop to create the specified number of files\nfor ((i=1;i<=number;i=i+1))\ndo \n # Create a file with the specified name and an index number appended to it\n touch \"${name}-${i}.dic\"\ndone\n\n#In the example above, the script takes two arguments: the name of the file and the number of files to create. It then uses a loop to create the specified number of files, using the touch command and appending an index number to the file name.\n\n#To use the script, you would call it from the command line and pass the name of the file and the number of files to create as arguments, like this:\n\n$./Project1.sh ./One.dic 4\n\n#This would create four files named One-1.dic, One-2.dic, One-3.dic, and One-4.dic.\n\n", "#You can try it then \n#To create a script that creates a number of files, you could use the following code:\n\n#!/bin/bash\n\n# Check if at least one argument was provided\nif [ $# -eq 0 ]\nthen\n echo \"Please provide the number of files to create as an argument.\"\n exit 1\nfi\n\n# Create the specified number of files\nfor ((i=1; i<=$1; i++))\ndo\n touch \"file$i\"\ndone\n\n#To use the script, save it to a file with a .sh extension and make it executable using the chmod command:\n\nchmod +x create_files.sh\n\n#You can then run the script and specify the number of files to create as an argument:\n\n./create_files.sh 5\n\n#This will create five files named file1, file2, file3, file4, and file5.\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "bash", "linux", "linux_kernel", "ubuntu" ]
stackoverflow_0074680750_bash_linux_linux_kernel_ubuntu.txt
Q: External CSS not always displaying For some odd reason all my external CSS works fine except for this one rule. It does work if I add it internally to my html page, but not if it is linked from the external sheet. In the external sheet: .title { font-size: 30px; font-family: Bebas Neue, cursive; } but <span class="title">hello</span> does not work in the html sheet. I tried with . class="" or even with # id="", but neither works. It would make sense to me if nothing else worked, but I don't understand how just this one rule can be giving me problems. A: Try to add Internal StyleSheet : <span class="title" style=" font-size: 30px;font-family: Bebas Neue, cursive;" >Hello </span>
External CSS not always displaying
For some odd reason all my external CSS works fine except for this one rule. It does work if I add it internally to my html page, but not if it is linked from the external sheet. In the external sheet: .title { font-size: 30px; font-family: Bebas Neue, cursive; } but <span class="title">hello</span> does not work in the html sheet. I tried with . class="" or even with # id="", but neither works. It would make sense to me if nothing else worked, but I don't understand how just this one rule can be giving me problems.
[ "Try to add Internal StyleSheet :\n\n\n<span class=\"title\" style=\" font-size: 30px;font-family: Bebas Neue, cursive;\" >Hello </span>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css" ]
stackoverflow_0074672431_css.txt
Q: How to delete all rows from pandas dataframe1 that do NOT exist in pandas dataframe2 I have two pandas dataframes, data1 and data2. They each have album and artist columns along with other columns that are different attributes. For the sake of what I'm trying to do, I want to delete all of the rows in data2 that DO NOT exist in data1. So, essentially I want all of the album and artists in data2 to match data1. Does anyone know the right way to go about this in python? TIA! So far I've tried: data2 = data2[data2['album', 'artist'].isin(data1['album', 'artist'])] but it doesn't like the ',' to get both attributes to match. A: To remove all rows from a dataframe that do not exist in another dataframe, you can use the merge() method from pandas, along with the indicator parameter. The indicator parameter allows you to specify whether you want to keep only the rows that exist in both dataframes (the default behavior), only the rows that exist in the left dataframe, only the rows that exist in the right dataframe, or all rows from both dataframes. For example, to remove all rows from data1 that do not exist in data2, you can use the merge() method with the indicator parameter set to 'right_only', like this: # Merge data1 and data2 on the 'album' and 'artist' columns merged_data = data1.merge(data2, on=['album', 'artist'], indicator=True) # Keep only the rows where the _merge column is 'right_only' merged_data = merged_data[merged_data['_merge'] == 'right_only'] # Drop the _merge column merged_data = merged_data.drop('_merge', axis=1) # Print the first few rows of the merged dataframe print(merged_data.head()) This will create a new dataframe called merged_data that contains only the rows from data1 that do not exist in data2. The _merge column indicates whether the row exists in both dataframes ('both'), only in the left dataframe ('left_only'), only in the right dataframe ('right_only'), or in neither dataframe ('neither'). In this case, we use the _merge column to filter the dataframe and keep only the rows that have a value of 'right_only'. Then, we drop the _merge column from the dataframe, since it is no longer needed. A: May be this solves your case: # First, create a new column that concatenates the album and artist columns in data1 data1['combo'] = data1['album'] + data1['artist'] # Repeat this for data2 data2['combo'] = data2['album'] + data2['artist'] # Next, keep only the rows in data2 where the combo column exists in data1 data2 = data2[data2['combo'].isin(data1['combo'])] # Finally, drop the combo column from both dataframes data1.drop(columns=['combo'], inplace=True) data2.drop(columns=['combo'], inplace=True) This approach creates a new column in each dataframe that concatenates the album and artist columns, and then uses the isin method to keep only the rows in data2 where the combo column exists in data1. The combo columns are then dropped from both dataframes. Note that this approach assumes that there are no duplicate rows in either dataframe. If there are duplicate rows, you may need to use a different approach, such as grouping by the combo column and then keeping only groups that exist in both dataframes. A: You can use the merge method in Pandas to join the two dataframes on the album and artist columns and keep only the rows that exist in both dataframes. 
Here is an example of how you could do this: import pandas as pd # Create some sample dataframes data1 = pd.DataFrame({ "album": ["Thriller", "Back in Black", "The Dark Side of the Moon"], "artist": ["Michael Jackson", "AC/DC", "Pink Floyd"], "year": [1982, 1980, 1973] }) data2 = pd.DataFrame({ "album": ["The Bodyguard", "Thriller", "The Dark Side of the Moon"], "artist": ["Whitney Houston", "Michael Jackson", "Pink Floyd"], "genre": ["Soundtrack", "Pop", "Rock"] }) # Merge the dataframes on the album and artist columns, and keep only the rows that exist in both dataframes merged_data = data1.merge(data2, on=["album", "artist"], how="inner") # Print the result print(merged_data) This code will print the following dataframe: album artist year genre 0 Thriller Michael Jackson 1982 Pop 1 The Dark Side of the Moon Pink Floyd 1973 Rock As you can see, this dataframe only contains the rows that exist in both data1 and data2. You can then use this dataframe instead of data2 to work with the rows that exist in both dataframes. Note that the merge method will also join the columns from the two dataframes, so you may need to drop any unnecessary columns or rename columns with the same name to avoid conflicts. You can do this using the drop and rename methods in Pandas, respectively. For example: # Drop the "genre" column from the merged dataframe merged_data = merged_data.drop("genre", axis=1) # Rename the "year" column in the merged dataframe merged_data = merged_data.rename({"year": "release_year"}, axis=1) # Print the result print(merged_data) This code will print the following dataframe: album artist
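A minimal sketch of the pair-wise filter the question asks for, built from a MultiIndex over the album and artist columns. Only the dataframe names data1 and data2 and those two column names come from the question; the sample rows are invented, and the snippet assumes a reasonably recent pandas (MultiIndex.from_frame needs 0.24+):

import pandas as pd

# Invented stand-in data; only the album/artist columns matter for the match.
data1 = pd.DataFrame({"album": ["Thriller", "Back in Black"],
                      "artist": ["Michael Jackson", "AC/DC"],
                      "year": [1982, 1980]})
data2 = pd.DataFrame({"album": ["Thriller", "The Bodyguard"],
                      "artist": ["Michael Jackson", "Whitney Houston"],
                      "genre": ["Pop", "Soundtrack"]})

# Build (album, artist) pairs for each frame, then keep only the rows of
# data2 whose pair also occurs somewhere in data1.
pairs1 = pd.MultiIndex.from_frame(data1[["album", "artist"]])
pairs2 = pd.MultiIndex.from_frame(data2[["album", "artist"]])
data2 = data2[pairs2.isin(pairs1)]

print(data2)  # only the Thriller / Michael Jackson row survives

An inner merge on ["album", "artist"], as in the last answer, returns the same rows; the MultiIndex form just avoids pulling data1's extra columns into the result.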
How to delete all rows from pandas dataframe1 that do NOT exist in pandas dataframe2
I have two pandas dataframes, data1 and data2. They each have album and artist columns along with other columns that are different attributes. For the sake of what I'm trying to do, I want to delete all of the rows in data2 that DO NOT exist in data1. So, essentially I want all of the album and artists in data2 to match data1. Does anyone know the right way to go about this in python? TIA! So far I've tried: data2 = data2[data2['album', 'artist'].isin(data1['album', 'artist'])] but it doesn't like the ',' to get both attributes to match.
[ "To remove all rows from a dataframe that do not exist in another dataframe, you can use the merge() method from pandas, along with the indicator parameter. The indicator parameter allows you to specify whether you want to keep only the rows that exist in both dataframes (the default behavior), only the rows that exist in the left dataframe, only the rows that exist in the right dataframe, or all rows from both dataframes.\nFor example, to remove all rows from data1 that do not exist in data2, you can use the merge() method with the indicator parameter set to 'right_only', like this:\n# Merge data1 and data2 on the 'album' and 'artist' columns\nmerged_data = data1.merge(data2, on=['album', 'artist'], indicator=True)\n\n# Keep only the rows where the _merge column is 'right_only'\nmerged_data = merged_data[merged_data['_merge'] == 'right_only']\n\n# Drop the _merge column\nmerged_data = merged_data.drop('_merge', axis=1)\n\n# Print the first few rows of the merged dataframe\nprint(merged_data.head())\n\nThis will create a new dataframe called merged_data that contains only the rows from data1 that do not exist in data2. The _merge column indicates whether the row exists in both dataframes ('both'), only in the left dataframe ('left_only'), only in the right dataframe ('right_only'), or in neither dataframe ('neither'). In this case, we use the _merge column to filter the dataframe and keep only the rows that have a value of 'right_only'. Then, we drop the _merge column from the dataframe, since it is no longer needed.\n", "May be this solves your case:\n# First, create a new column that concatenates the album and artist columns in data1\ndata1['combo'] = data1['album'] + data1['artist']\n\n# Repeat this for data2\ndata2['combo'] = data2['album'] + data2['artist']\n\n# Next, keep only the rows in data2 where the combo column exists in data1\ndata2 = data2[data2['combo'].isin(data1['combo'])]\n\n# Finally, drop the combo column from both dataframes\ndata1.drop(columns=['combo'], inplace=True)\ndata2.drop(columns=['combo'], inplace=True)\n\n\nThis approach creates a new column in each dataframe that concatenates the album and artist columns, and then uses the isin method to keep only the rows in data2 where the combo column exists in data1. The combo columns are then dropped from both dataframes.\nNote that this approach assumes that there are no duplicate rows in either dataframe. If there are duplicate rows, you may need to use a different approach, such as grouping by the combo column and then keeping only groups that exist in both dataframes.\n", "You can use the merge method in Pandas to join the two dataframes on the album and artist columns and keep only the rows that exist in both dataframes. 
Here is an example of how you could do this:\nimport pandas as pd\n\n# Create some sample dataframes\ndata1 = pd.DataFrame({\n \"album\": [\"Thriller\", \"Back in Black\", \"The Dark Side of the Moon\"],\n \"artist\": [\"Michael Jackson\", \"AC/DC\", \"Pink Floyd\"],\n \"year\": [1982, 1980, 1973]\n})\n\ndata2 = pd.DataFrame({\n \"album\": [\"The Bodyguard\", \"Thriller\", \"The Dark Side of the Moon\"],\n \"artist\": [\"Whitney Houston\", \"Michael Jackson\", \"Pink Floyd\"],\n \"genre\": [\"Soundtrack\", \"Pop\", \"Rock\"]\n})\n\n# Merge the dataframes on the album and artist columns, and keep only the rows that exist in both dataframes\nmerged_data = data1.merge(data2, on=[\"album\", \"artist\"], how=\"inner\")\n\n# Print the result\nprint(merged_data)\n\nThis code will print the following dataframe:\n album artist year genre\n0 Thriller Michael Jackson 1982 Pop\n1 The Dark Side of the Moon Pink Floyd 1973 Rock\n\nAs you can see, this dataframe only contains the rows that exist in both data1 and data2. You can then use this dataframe instead of data2 to work with the rows that exist in both dataframes.\nNote that the merge method will also join the columns from the two dataframes, so you may need to drop any unnecessary columns or rename columns with the same name to avoid conflicts. You can do this using the drop and rename methods in Pandas, respectively. For example:\n# Drop the \"genre\" column from the merged dataframe\nmerged_data = merged_data.drop(\"genre\", axis=1)\n\n# Rename the \"year\" column in the merged dataframe\nmerged_data = merged_data.rename({\"year\": \"release_year\"}, axis=1)\n\n# Print the result\nprint(merged_data)\n\nThis code will print the following dataframe:\nalbum artist\n\n" ]
[ 0, 0, -1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074680948_dataframe_pandas_python.txt
Q: Function to split a list of strings into group of strings on specific element value I'm trying to create a function for the following: Given a list of strings ["abceadfapaq"; "asdqwedasca"; " "; "asdasqyhgahfgasdsadasda"] and a string separator " " returns another list with sublists splitted on the desired separator. [["abceadfapaq"; "asdqwedasca"]; ["asdasqyhgahfgasdsadasda"]] I've implemented two functions to achieve this, one of them failing because it cannot handle the separator to be an empty space " " Failing: let groupWithNoReplace (inputLines: string list) (separator: string) = let complete = seq { for line in inputLines do yield! line.Split(' ') } |> List.ofSeq let folder (a) (cur, acc) = match a with | _ when a <> separator -> a::cur, acc | _ -> [], cur::acc let result = List.foldBack folder (complete) ([], []) (fst result)::(snd result) let groupWithNoReplace0 = groupWithNoReplace ["abceadfapaq"; "asdqwedasca"; " "; "asdasqyhgahfgasdsadasda"] " " val groupWithNoReplace0: string list list = [["abceadfapaq"; "asdqwedasca"; ""; ""; "asdasqyhgahfgasdsadasda"]] let groupWithNoReplace00 = groupWithNoReplace ["abceadfapaq"; "asdqwedasca"; "="; "asdasqyhgahfgasdsadasda"] "=" val groupWithNoReplace00: string list list = [["abceadfapaq"; "asdqwedasca"]; ["asdasqyhgahfgasdsadasda"]] So the second function is the same but replacing the empty space with a symbol I don't think it would appear in the input "§" I don't like this solution as I don't want to force the input to fullfil specific requeriments based on my implementation. Working with replace let groupWithNoReplace (inputLines: string list) (separator: string) = let validlines = inputLines |> List.map(fun e -> if e = " " then "§" else e) let validsplitter = match separator = " " with | true -> "§" | false -> separator let complete = seq { for line in validlines do yield! line.Split(' ') } |> List.ofSeq let folder (a) (cur, acc) = match a with | _ when a <> validsplitter -> a::cur, acc | _ -> [], cur::acc let result = List.foldBack folder (complete) ([], []) (fst result)::(snd result) let groupWithNoReplace1 = groupWithNoReplace ["abceadfapaq"; "asdqwedasca"; " "; "asdasqyhgahfgasdsadasda"] " " val groupWithNoReplace1: string list list = [["abceadfapaq"; "asdqwedasca"]; ["asdasqyhgahfgasdsadasda"]] let groupWithNoReplace11 = groupWithNoReplace ["abceadfapaq"; "asdqwedasca"; "="; "asdasqyhgahfgasdsadasda"] "=" val groupWithNoReplace11: string list list = [["abceadfapaq"; "asdqwedasca"]; ["asdasqyhgahfgasdsadasda"]] A: this should work: let splitWhen predicate list = let folder state t = if predicate t then [] :: state else (t :: state.Head) :: state. Tail list |> List.fold folder [ [] ] |> List.map List.rev |> List.rev
Function to split a list of strings into group of strings on specific element value
I'm trying to create a function for the following: Given a list of strings ["abceadfapaq"; "asdqwedasca"; " "; "asdasqyhgahfgasdsadasda"] and a string separator " " returns another list with sublists splitted on the desired separator. [["abceadfapaq"; "asdqwedasca"]; ["asdasqyhgahfgasdsadasda"]] I've implemented two functions to achieve this, one of them failing because it cannot handle the separator to be an empty space " " Failing: let groupWithNoReplace (inputLines: string list) (separator: string) = let complete = seq { for line in inputLines do yield! line.Split(' ') } |> List.ofSeq let folder (a) (cur, acc) = match a with | _ when a <> separator -> a::cur, acc | _ -> [], cur::acc let result = List.foldBack folder (complete) ([], []) (fst result)::(snd result) let groupWithNoReplace0 = groupWithNoReplace ["abceadfapaq"; "asdqwedasca"; " "; "asdasqyhgahfgasdsadasda"] " " val groupWithNoReplace0: string list list = [["abceadfapaq"; "asdqwedasca"; ""; ""; "asdasqyhgahfgasdsadasda"]] let groupWithNoReplace00 = groupWithNoReplace ["abceadfapaq"; "asdqwedasca"; "="; "asdasqyhgahfgasdsadasda"] "=" val groupWithNoReplace00: string list list = [["abceadfapaq"; "asdqwedasca"]; ["asdasqyhgahfgasdsadasda"]] So the second function is the same but replacing the empty space with a symbol I don't think it would appear in the input "§" I don't like this solution as I don't want to force the input to fullfil specific requeriments based on my implementation. Working with replace let groupWithNoReplace (inputLines: string list) (separator: string) = let validlines = inputLines |> List.map(fun e -> if e = " " then "§" else e) let validsplitter = match separator = " " with | true -> "§" | false -> separator let complete = seq { for line in validlines do yield! line.Split(' ') } |> List.ofSeq let folder (a) (cur, acc) = match a with | _ when a <> validsplitter -> a::cur, acc | _ -> [], cur::acc let result = List.foldBack folder (complete) ([], []) (fst result)::(snd result) let groupWithNoReplace1 = groupWithNoReplace ["abceadfapaq"; "asdqwedasca"; " "; "asdasqyhgahfgasdsadasda"] " " val groupWithNoReplace1: string list list = [["abceadfapaq"; "asdqwedasca"]; ["asdasqyhgahfgasdsadasda"]] let groupWithNoReplace11 = groupWithNoReplace ["abceadfapaq"; "asdqwedasca"; "="; "asdasqyhgahfgasdsadasda"] "=" val groupWithNoReplace11: string list list = [["abceadfapaq"; "asdqwedasca"]; ["asdasqyhgahfgasdsadasda"]]
[ "this should work:\nlet splitWhen predicate list =\n let folder state t =\n if predicate t then\n [] :: state\n else\n (t :: state.Head) :: state. Tail\n\n list \n |> List.fold folder [ [] ] \n |> List.map List.rev\n |> List.rev\n\n" ]
[ 0 ]
[]
[]
[ "f#", "list", "split" ]
stackoverflow_0074675697_f#_list_split.txt
Q: React Custom 404 NotFound page isn't working in production I have my custom 404 NotFound page working properly in the development version, and all other routes are perfectly fine, but after I built a production version I get only the default 404 page instead of my custom one. My route for 404 is: <Routes> <Route path={"/*"} element={<NotFound />} /> </Routes> I also tried the version with only an asterisk, path={"*"}, but it didn't work: A: Try using an Asterisk, like this, it should work: <Routes> <Route path={"*"} element={<NotFound />} /> </Routes> or <Routes> <Route path="*" element={<NotFound />} /> </Routes>
React Custom 404 NotFound page isn't working in production
I have my custom 404 NotFound page working properly in the development version, and all other routes are perfectly fine, but after I built a production version I get only the default 404 page instead of my custom one. My route for 404 is: <Routes> <Route path={"/*"} element={<NotFound />} /> </Routes> I also tried the version with only an asterisk, path={"*"}, but it didn't work:
[ "Try using an Asterisk, like this, it should work:\n<Routes>\n <Route path={\"*\"} element={<NotFound />} />\n</Routes>\n\nor\n<Routes>\n <Route path=\"*\" element={<NotFound />} />\n</Routes>\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "reactjs", "routes" ]
stackoverflow_0074679986_javascript_reactjs_routes.txt
Q: Design Patterns: Factory vs Factory method vs Abstract Factory I was reading design patterns from a website There I read about Factory, Factory method and Abstract factory but they are so confusing, am not clear on the definition. According to definitions Factory - Creates objects without exposing the instantiation logic to the client and Refers to the newly created object through a common interface. Is a simplified version of Factory Method Factory Method - Defines an interface for creating objects, but let subclasses to decide which class to instantiate and Refers to the newly created object through a common interface. Abstract Factory - Offers the interface for creating a family of related objects, without explicitly specifying their classes. I also looked the other stackoverflow threads regarding Abstract Factory vs Factory Method but the UML diagrams drawn there make my understanding even worse. Can anyone please tell me How are these three patterns different from each other? When to use which? And also if possible, any java examples related to these patterns? A: All three Factory types do the same thing: They are a "smart constructor". Let's say you want to be able to create two kinds of Fruit: Apple and Orange. Factory Factory is "fixed", in that you have just one implementation with no subclassing. In this case, you will have a class like this: class FruitFactory { public Apple makeApple() { // Code for creating an Apple here. } public Orange makeOrange() { // Code for creating an orange here. } } Use case: Constructing an Apple or an Orange is a bit too complex to handle in the constructor for either. Factory Method Factory method is generally used when you have some generic processing in a class, but want to vary which kind of fruit you actually use. So: abstract class FruitPicker { protected abstract Fruit makeFruit(); public void pickFruit() { private final Fruit f = makeFruit(); // The fruit we will work on.. <bla bla bla> } } ...then you can reuse the common functionality in FruitPicker.pickFruit() by implementing a factory method in subclasses: class OrangePicker extends FruitPicker { @Override protected Fruit makeFruit() { return new Orange(); } } Abstract Factory Abstract factory is normally used for things like dependency injection/strategy, when you want to be able to create a whole family of objects that need to be of "the same kind", and have some common base classes. Here's a vaguely fruit-related example. The use case here is that we want to make sure that we don't accidentally use an OrangePicker on an Apple. As long as we get our Fruit and Picker from the same factory, they will match. interface PlantFactory { Plant makePlant(); Picker makePicker(); } public class AppleFactory implements PlantFactory { Plant makePlant() { return new Apple(); } Picker makePicker() { return new ApplePicker(); } } public class OrangeFactory implements PlantFactory { Plant makePlant() { return new Orange(); } Picker makePicker() { return new OrangePicker(); } } A: How are these three patterns different from each other? Factory: Creates objects without exposing the instantiation logic to the client. Factory Method: Define an interface for creating an object, but let the subclasses decide which class to instantiate. The Factory method lets a class defer instantiation to subclasses Abstract Factory: Provides an interface for creating families of related or dependent objects without specifying their concrete classes. 
AbstractFactory pattern uses composition to delegate responsibility of creating object to another class while Factory method design pattern uses inheritance and relies on derived class or sub class to create object When to use which? Factory: Client just need a class and does not care about which concrete implementation it is getting. Factory Method: Client doesn't know what concrete classes it will be required to create at runtime, but just wants to get a class that will do the job. AbstactFactory: When your system has to create multiple families of products or you want to provide a library of products without exposing the implementation details. Abstract Factory classes are often implemented with Factory Method. Factory Methods are usually called within Template Methods. And also if possible, any java examples related to these patterns? Factory and FactoryMethod Intent: Define an interface for creating an object, but let sub classes decide which class to instantiate. Factory Method lets a class defer instantiation to sub classes. UML diagram: Product: It defines an interface of the objects the Factory method creates. ConcreteProduct: Implements Product interface Creator: Declares the Factory method ConcreateCreator: Implements the Factory method to return an instance of a ConcreteProduct Problem statement: Create a Factory of Games by using Factory Methods, which defines the game interface. Code snippet: Factory Pattern. When to use factory methods? Comparison with other creational patterns: Design start out using Factory Method (less complicated, more customizable, subclasses proliferate) and evolve toward Abstract Factory, Prototype, or Builder (more flexible, more complex) as the designer discovers where more flexibility is needed Abstract Factory classes are often implemented with Factory Methods, but they can also be implemented using Prototype References for further reading: Sourcemaking design-patterns A: Factory - Separate Factory class to create complex object. Ex: FruitFactory class to create object of Fruit class FruitFactory{ public static Fruit getFruit(){...} } Factory Method - Instead of whole separate class for factory, just add one method in that class itself as a factory. Ex: Calendar.getInstance() (Java's Calendar) Abstract Factory - Factory of Factories Ex: Lets say we want to build factory for computer parts. So there are several types of computers like Laptop, Desktop, Server. So for each compter type we need factory. So we create one highlevel factory of factories like below ComputerTypeAbstractFactory.getComputerPartFactory(String computerType) ---> This will return PartFactory which can be one of these ServerPartFactory, LaptopPartFactory, DesktopPartFactory. Now these 3 itself are again factories. (You will be dealing with PartFactory itself, but under the hood, there will be separate implementation based on what you provided in abstract factory) Interface-> PartFactory. getComputerPart(String s), Implementations -> ServerPartFactory, LaptopPartFactory, DesktopPartFactory. Usage: new ComputerTypeAbstractFactory().getFactory(“Laptop”).getComputerPart(“RAM”) EDIT: edited to provide exact interfaces for Abstract Factory as per the objections in comments. A: Based this images from Design Patterns in C#, 2nd Edition by Vaskaran Sarcar book: 1. Simple Factory Pattern Creates objects without exposing the instantiation logic to the client. 
SimpleFactory simpleFactory = new SimpleFactory(); IAnimal dog = simpleFactory.CreateDog(); // Create dog IAnimal tiger = simpleFactory.CreateTiger(); // Create tiger 2. Factory Method Pattern Defines an interface for creating objects, but let subclasses to decide which class to instantiate. AnimalFactory dogFactory = new DogFactory(); IAnimal dog = dogFactory.CreateAnimal(); // Create dog AnimalFactory tigerFactory = new TigerFactory(); IAnimal tiger = tigerFactory.CreateAnimal(); // Create tiger 3. Abstract Factory pattern (factory of factories) Abstract Factory offers the interface for creating a family of related objects, without explicitly specifying their classes IAnimalFactory petAnimalFactory = FactoryProvider.GetAnimalFactory("pet"); IDog dog = petAnimalFactory.GetDog(); // Create pet dog ITiger tiger = petAnimalFactory.GetTiger(); // Create pet tiger IAnimalFactory wildAnimalFactory = FactoryProvider.GetAnimalFactory("wild"); IDog dog = wildAnimalFactory .GetDog(); // Create wild dog ITiger tiger = wildAnimalFactory .GetTiger(); // Create wild tiger A: For this answer, I refer to the "Gang of Four" book. There are no "Factory" nor "Simple Factory" nor "Virtual Factory" definitions in the book. Usually when people are talking about "Factory" pattern they may be talking about something that creates a particular object of a class (but not the "builder" pattern); they may or may not refer to the "Factory Method" or "Abstract Factory" patterns. Anyone can implement "Factory" as he won't because it's not a formal term (bear in mind that some people\companies\communities can have their own vocabulary). The book only contains definitions for "Abstract Factory" and "Factory Method". Here are definitions from the book and a short explanation of why both can be so confusing. I omit code examples because you can find them in other answers: Factory Method (GOF): Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses. Abstract Factory (GOF): Provide an interface for creating families of related or dependent objects without specifying their concrete classes. Source of Confusion: Often, one can call a class that used in "Factory Method" pattern as "Factory". This class is abstract by definition. That's why it easy to call this class "Abstract Factory". But it's just the name of the class; you shouldn't confuse it with "Abstract Factory" pattern (class name != pattern name). The "Abstract Factory" pattern is different - it does not use an abstract class; it defines an interface (not necessarily a programming language interface) for creating parts of a bigger object or objects that are related to each other or must be created in a particular way. A: Every design pattern strives to help ensure that written, working code is not touched. We all know that once we touch working code, there are defects in existing working flows, and a lot more testing needs to get done to ensure that we did not break anything. A factory pattern creates objects based on input criteria, thus ensuring that you dont need to write code like: if (this) { create this kind of object } else { that kind of object } A good example of this is a travel website. A travel website can only provide travel (flight, train, bus) or / and provide hotels or / and provide tourist attraction packages. Now, when a user selects next, the website needs to decide what objects it needs to create. 
Should it only create the travel or hotel object too. Now, if you envision adding another website to your portfolio, and you believe that the same core be used, for example, a carpooling website, which now searches for cab's and makes payments online, you can use a abstract factory at your core. This way you can just snap in one more factory of cabs and carpools. Both factories have nothing to do with each other, so it's a good design to keep them in different factories. Hope this is clear now. Study the website again keeping this example in mind, hopefully it will help. And I really hope I have represented the patterns correctly :). A: AbstractProductA, A1 and A2 both implementing the AbstractProductA AbstractProductB, B1 and B2 both implementing the AbstractProductB interface Factory { AbstractProductA getProductA(); //Factory Method - generate A1/A2 } Using Factory Method, user can able to create A1 or A2 of AbstractProductA. interface AbstractFactory { AbstractProductA getProductA(); //Factory Method AbstractProductB getProductB(); //Factory Method } But Abstract Factory having more than 1 factory method ( ex: 2 factory methods), using those factory methods it will create the set of objects/ related objects. Using Abstract Factory, user can able to create A1, B1 objects of AbstractProductA, AbstractProductB A: Nobody has quoted the original book Design Patterns: Elements of Reusable Object-Oriented Software, which gives the answer in the first two paragraphs of the section “Discussion of Creational Patterns” (emphasis mine): There are two common ways to parameterize a system by the classes of objects it creates. One way is to subclass the class that creates the objects; this corresponds to using the Factory Method (107) pattern. The main drawback of this approach is that it can require a new subclass just to change the class of the product. Such changes can cascade. For example, when the product creator is itself created by a factory method, then you have to override its creator as well. The other way to parameterize a system relies more on object composition: Define an object that’s responsible for knowing the class of the product objects, and make it a parameter of the system. This is a key aspect of the Abstract Factory (87), Builder (97), and Prototype (117) patterns. All three involve creating a new “factory object” whose responsibility is to create product objects. Abstract Factory has the factory object producing objects of several classes. Builder has the factory object building a complex product incrementally using a correspondingly complex protocol. Prototype has the factory object building a product by copying a prototype object. In this case, the factory object and the prototype are the same object, because the prototype is responsible for returning the product. A: None of the answers really explain the Abstract Factory particularly well - probably because the concept is abstract and this is less frequently used in practice. An easy to understand example arrises from considering the following situation. You have a system, which interfaces with another system. I will use the example given in Design Patterns Explained by Shalloway and Trott, P194, because this situation is so rare I can't think of a better one. In their book, they give the example of having different combinations of local hardware resources. 
They use, as examples: System with high resoltion display and print driver System with low resolution display and print driver There are 2 options for one variable thing (print driver, display driver), and 2 options for the other variable thing (high resolution, low resolution). We want to couple these together in such a way that we have a HighResolutionFactory and a LowResolutionFactory which produce for us both a print driver and display driver of the correct type. This then is the Abstract Factory pattern: class ResourceFactory { virtual AbstractPrintDriver getPrintDriver() = 0; virtual AbstractDisplayDriver getDisplayDriver() = 0; }; class LowResFactory : public ResourceFactory { AbstractPrintDriver getPrintDriver() override { return LowResPrintDriver; } AbstractDisplayDriver getDisplayDriver() override { return LowResDisplayDriver; } }; class HighResFactory : public ResourceFactory { AbstractPrintDriver getPrintDriver() override { return HighResPrintDriver; } AbstractDisplayDriver getDisplayDriver() override { return HighResDisplayDriver; } }; I won't detail both the print driver and display driver hierachies, just one will suffice to demonstrate. class AbstractDisplayDriver { virtual void draw() = 0; }; class HighResDisplayDriver : public AbstractDisplayDriver { void draw() override { // do hardware accelerated high res drawing } }; class LowResDisplayDriver : public AbstractDisplayDriver { void draw() override { // do software drawing, low resolution } }; Why it works: We could have solved this problem with a bunch of if statements: const resource_type = LOW_RESOLUTION; if(resource_type == LOW_RESOLUTION) { drawLowResolution(); printLowResolution(); } else if(resource_type == HIGH_RESOLUTION) { drawHighResolution(); printHighResolution(); } Instead of this we now can do: auto factory = HighResFactory; auto printDriver = factory.getPrintDriver(); printDriver.print(); auto displayDriver = factory.getDisplayDriver(); displayDriver.draw(); Essentially - We have abstracted away the runtime logic into the v-table for our classes. My opinion on this pattern is that it isn't actually very useful. It couples together some things which need not be coupled together. The point of design patterns is usally to reduce coupling, not increase it, therefore in some ways this pattern is really an anti-pattern, but it might be useful in some contexts. If you get to a stage where you are seriously considering implementing this, you might consider some alternative designs. Perhaps you could write a factory which returns factories, which themselves return the objects you eventually want. I imagine this would be more flexible, and would not have the same coupling issues. Addendum: The example from Gang of Four is similarly coupled. They have a MotifFactory and a PMFactory. These then produce PMWindow, PMScrollBar and MotifWindow, MotifScrollBar respectively. This is a somewhat dated text now and so it might be hard to understand the context. I recall reading this chapter and the I understood little from the example beyond having two implementations of a factory base class which return different families of objects.
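The answers above show the patterns in Java, C# and C++; for comparison, here is a compact Python sketch of Factory Method versus Abstract Factory. Every name in it (Shelter, PetFactory, Dog, Cat, and so on) is invented for illustration and does not come from any of the answers:

class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

# Factory Method: the base class defers the choice of concrete product
# to subclasses through make_animal().
class Shelter:
    def make_animal(self):
        raise NotImplementedError
    def adopt(self):
        animal = self.make_animal()   # subclasses decide which class this is
        return f"adopted a pet that says {animal.speak()}"

class DogShelter(Shelter):
    def make_animal(self):
        return Dog()

class CatShelter(Shelter):
    def make_animal(self):
        return Cat()

# Abstract Factory: one factory object produces a whole family of related
# products, so the client never mixes products from different families.
class PetFactory:
    def make_animal(self):
        raise NotImplementedError
    def make_food(self):
        raise NotImplementedError

class DogFactory(PetFactory):
    def make_animal(self):
        return Dog()
    def make_food(self):
        return "dog food"

class CatFactory(PetFactory):
    def make_animal(self):
        return Cat()
    def make_food(self):
        return "cat food"

def feed(factory):
    animal, food = factory.make_animal(), factory.make_food()
    return f"{animal.speak()} eats {food}"

print(DogShelter().adopt())   # Factory Method: the subclass picks the product
print(feed(CatFactory()))     # Abstract Factory: animal and food stay consistent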
Design Patterns: Factory vs Factory method vs Abstract Factory
I was reading design patterns from a website There I read about Factory, Factory method and Abstract factory but they are so confusing, am not clear on the definition. According to definitions Factory - Creates objects without exposing the instantiation logic to the client and Refers to the newly created object through a common interface. Is a simplified version of Factory Method Factory Method - Defines an interface for creating objects, but let subclasses to decide which class to instantiate and Refers to the newly created object through a common interface. Abstract Factory - Offers the interface for creating a family of related objects, without explicitly specifying their classes. I also looked the other stackoverflow threads regarding Abstract Factory vs Factory Method but the UML diagrams drawn there make my understanding even worse. Can anyone please tell me How are these three patterns different from each other? When to use which? And also if possible, any java examples related to these patterns?
[ "All three Factory types do the same thing: They are a \"smart constructor\".\nLet's say you want to be able to create two kinds of Fruit: Apple and Orange.\nFactory\nFactory is \"fixed\", in that you have just one implementation with no subclassing. In this case, you will have a class like this:\nclass FruitFactory {\n\n public Apple makeApple() {\n // Code for creating an Apple here.\n }\n\n public Orange makeOrange() {\n // Code for creating an orange here.\n }\n\n}\n\nUse case: Constructing an Apple or an Orange is a bit too complex to handle in the constructor for either.\nFactory Method\nFactory method is generally used when you have some generic processing in a class, but want to vary which kind of fruit you actually use. So:\nabstract class FruitPicker {\n\n protected abstract Fruit makeFruit();\n\n public void pickFruit() {\n private final Fruit f = makeFruit(); // The fruit we will work on..\n <bla bla bla>\n }\n}\n\n...then you can reuse the common functionality in FruitPicker.pickFruit() by implementing a factory method in subclasses:\nclass OrangePicker extends FruitPicker {\n\n @Override\n protected Fruit makeFruit() {\n return new Orange();\n }\n}\n\nAbstract Factory\nAbstract factory is normally used for things like dependency injection/strategy, when you want to be able to create a whole family of objects that need to be of \"the same kind\", and have some common base classes. Here's a vaguely fruit-related example. The use case here is that we want to make sure that we don't accidentally use an OrangePicker on an Apple. As long as we get our Fruit and Picker from the same factory, they will match.\ninterface PlantFactory {\n \n Plant makePlant();\n\n Picker makePicker(); \n\n}\n\npublic class AppleFactory implements PlantFactory {\n Plant makePlant() {\n return new Apple();\n }\n\n Picker makePicker() {\n return new ApplePicker();\n }\n}\n\npublic class OrangeFactory implements PlantFactory {\n Plant makePlant() {\n return new Orange();\n }\n\n Picker makePicker() {\n return new OrangePicker();\n }\n}\n\n", "\n\nHow are these three patterns different from each other?\n\n\nFactory: Creates objects without exposing the instantiation logic to the client.\nFactory Method: Define an interface for creating an object, but let the subclasses decide which class to instantiate. The Factory method lets a class defer instantiation to subclasses\nAbstract Factory: Provides an interface for creating families of related or dependent objects without specifying their concrete classes.\nAbstractFactory pattern uses composition to delegate responsibility of creating object to another class while Factory method design pattern uses inheritance and relies on derived class or sub class to create object\n\n\nWhen to use which?\n\n\nFactory: Client just need a class and does not care about which concrete implementation it is getting.\nFactory Method: Client doesn't know what concrete classes it will be required to create at runtime, but just wants to get a class that will do the job.\nAbstactFactory: When your system has to create multiple families of products or you want to provide a library of products without exposing the implementation details.\nAbstract Factory classes are often implemented with Factory Method. Factory Methods are usually called within Template Methods.\n\n\nAnd also if possible, any java examples related to these patterns?\n\n\nFactory and FactoryMethod\nIntent:\nDefine an interface for creating an object, but let sub classes decide which class to instantiate. 
Factory Method lets a class defer instantiation to sub classes.\nUML diagram:\n\nProduct: It defines an interface of the objects the Factory method creates.\nConcreteProduct: Implements Product interface\nCreator: Declares the Factory method\nConcreateCreator: Implements the Factory method to return an instance of a ConcreteProduct\nProblem statement: Create a Factory of Games by using Factory Methods, which defines the game interface.\nCode snippet:\nFactory Pattern. When to use factory methods?\nComparison with other creational patterns:\n\nDesign start out using Factory Method (less complicated, more customizable, subclasses proliferate) and evolve toward Abstract Factory, Prototype, or Builder (more flexible, more complex) as the designer discovers where more flexibility is needed \nAbstract Factory classes are often implemented with Factory Methods, but they can also be implemented using Prototype\n\nReferences for further reading: Sourcemaking design-patterns\n", "Factory - Separate Factory class to create complex object.\nEx: FruitFactory class to create object of Fruit\nclass FruitFactory{\n\npublic static Fruit getFruit(){...}\n\n}\n\nFactory Method - Instead of whole separate class for factory, just add one method in that class itself as a factory.\nEx:\nCalendar.getInstance() (Java's Calendar)\n\nAbstract Factory - Factory of Factories\nEx: Lets say we want to build factory for computer parts. So there are several types of computers like Laptop, Desktop, Server.\nSo for each compter type we need factory. So we create one highlevel factory of factories like below\nComputerTypeAbstractFactory.getComputerPartFactory(String computerType) ---> This will return PartFactory which can be one of these ServerPartFactory, LaptopPartFactory, DesktopPartFactory.\n\nNow these 3 itself are again factories. (You will be dealing with PartFactory itself, but under the hood, there will be separate implementation based on what you provided in abstract factory)\n Interface-> PartFactory. getComputerPart(String s), \nImplementations -> ServerPartFactory, LaptopPartFactory, DesktopPartFactory.\n\nUsage:\nnew ComputerTypeAbstractFactory().getFactory(“Laptop”).getComputerPart(“RAM”)\n\nEDIT: edited to provide exact interfaces for Abstract Factory as per the objections in comments.\n", "Based this images from Design Patterns in C#, 2nd Edition by Vaskaran Sarcar book:\n\n1. Simple Factory Pattern\n\nCreates objects without exposing the instantiation logic to the client.\nSimpleFactory simpleFactory = new SimpleFactory();\nIAnimal dog = simpleFactory.CreateDog(); // Create dog\nIAnimal tiger = simpleFactory.CreateTiger(); // Create tiger\n\n\n\n2. Factory Method Pattern\n\nDefines an interface for creating objects, but let subclasses to decide which class to instantiate.\nAnimalFactory dogFactory = new DogFactory(); \nIAnimal dog = dogFactory.CreateAnimal(); // Create dog\n\nAnimalFactory tigerFactory = new TigerFactory();\nIAnimal tiger = tigerFactory.CreateAnimal(); // Create tiger\n\n\n\n3. 
Abstract Factory pattern (factory of factories)\n\nAbstract Factory offers the interface for creating a family of related objects, without explicitly specifying their classes\nIAnimalFactory petAnimalFactory = FactoryProvider.GetAnimalFactory(\"pet\");\nIDog dog = petAnimalFactory.GetDog(); // Create pet dog\nITiger tiger = petAnimalFactory.GetTiger(); // Create pet tiger\n\nIAnimalFactory wildAnimalFactory = FactoryProvider.GetAnimalFactory(\"wild\");\nIDog dog = wildAnimalFactory .GetDog(); // Create wild dog\nITiger tiger = wildAnimalFactory .GetTiger(); // Create wild tiger\n\n\n", "For this answer, I refer to the \"Gang of Four\" book.\nThere are no \"Factory\" nor \"Simple Factory\" nor \"Virtual Factory\" definitions in the book. Usually when people are talking about \"Factory\" pattern they may be talking about something that creates a particular object of a class (but not the \"builder\" pattern); they may or may not refer to the \"Factory Method\" or \"Abstract Factory\" patterns. Anyone can implement \"Factory\" as he won't because it's not a formal term (bear in mind that some people\\companies\\communities can have their own vocabulary).\nThe book only contains definitions for \"Abstract Factory\" and \"Factory Method\".\nHere are definitions from the book and a short explanation of why both can be so confusing. I omit code examples because you can find them in other answers:\nFactory Method (GOF): Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.\nAbstract Factory (GOF): Provide an interface for creating families of related or dependent objects without specifying their concrete classes.\nSource of Confusion: Often, one can call a class that used in \"Factory Method\" pattern as \"Factory\". This class is abstract by definition. That's why it easy to call this class \"Abstract Factory\". But it's just the name of the class; you shouldn't confuse it with \"Abstract Factory\" pattern (class name != pattern name). The \"Abstract Factory\" pattern is different - it does not use an abstract class; it defines an interface (not necessarily a programming language interface) for creating parts of a bigger object or objects that are related to each other or must be created in a particular way.\n", "Every design pattern strives to help ensure that written, working code is not touched. We all know that once we touch working code, there are defects in existing working flows, and a lot more testing needs to get done to ensure that we did not break anything.\nA factory pattern creates objects based on input criteria, thus ensuring that you dont need to write code like:\n if (this) {\n create this kind of object \n } else { \n that kind of object \n }\n\nA good example of this is a travel website. A travel website can only provide travel (flight, train, bus) or / and provide hotels or / and provide tourist attraction packages. Now, when a user selects next, the website needs to decide what objects it needs to create. Should it only create the travel or hotel object too.\nNow, if you envision adding another website to your portfolio, and you believe that the same core be used, for example, a carpooling website, which now searches for cab's and makes payments online, you can use a abstract factory at your core. 
This way you can just snap in one more factory of cabs and carpools.\nBoth factories have nothing to do with each other, so it's a good design to keep them in different factories.\nHope this is clear now. Study the website again keeping this example in mind, hopefully it will help. And I really hope I have represented the patterns correctly :).\n", "AbstractProductA, A1 and A2 both implementing the AbstractProductA\nAbstractProductB, B1 and B2 both implementing the AbstractProductB\n\ninterface Factory {\n AbstractProductA getProductA(); //Factory Method - generate A1/A2\n}\n\nUsing Factory Method, user can able to create A1 or A2 of AbstractProductA.\ninterface AbstractFactory {\n AbstractProductA getProductA(); //Factory Method\n AbstractProductB getProductB(); //Factory Method\n}\n\nBut Abstract Factory having more than 1 factory method ( ex: 2 factory methods), using those factory methods it will create the set of objects/ related objects.\nUsing Abstract Factory, user can able to create A1, B1 objects of AbstractProductA, AbstractProductB \n", "Nobody has quoted the original book Design Patterns: Elements of Reusable Object-Oriented Software, which gives the answer in the first two paragraphs of the section “Discussion of Creational Patterns” (emphasis mine):\n\nThere are two common ways to parameterize a system by the classes of objects it creates. One way is to subclass the class that creates the objects; this corresponds to using the Factory Method (107) pattern. The main drawback of this approach is that it can require a new subclass just to change the class of the product. Such changes can cascade. For example, when the product creator is itself created by a factory method, then you have to override its creator as well.\nThe other way to parameterize a system relies more on object composition: Define an object that’s responsible for knowing the class of the product objects, and make it a parameter of the system. This is a key aspect of the Abstract Factory (87), Builder (97), and Prototype (117) patterns. All three involve creating a new “factory object” whose responsibility is to create product objects. Abstract Factory has the factory object producing objects of several classes. Builder has the factory object building a complex product incrementally using a correspondingly complex protocol. Prototype has the factory object building a product by copying a prototype object. In this case, the factory object and the prototype are the same object, because the prototype is responsible for returning the product.\n\n", "None of the answers really explain the Abstract Factory particularly well - probably because the concept is abstract and this is less frequently used in practice.\nAn easy to understand example arrises from considering the following situation.\nYou have a system, which interfaces with another system. I will use the example given in Design Patterns Explained by Shalloway and Trott, P194, because this situation is so rare I can't think of a better one. In their book, they give the example of having different combinations of local hardware resources. They use, as examples:\n\nSystem with high resoltion display and print driver\nSystem with low resolution display and print driver\n\nThere are 2 options for one variable thing (print driver, display driver), and 2 options for the other variable thing (high resolution, low resolution). 
We want to couple these together in such a way that we have a HighResolutionFactory and a LowResolutionFactory which produce for us both a print driver and display driver of the correct type.\nThis then is the Abstract Factory pattern:\nclass ResourceFactory\n{\n virtual AbstractPrintDriver getPrintDriver() = 0;\n\n virtual AbstractDisplayDriver getDisplayDriver() = 0;\n};\n\nclass LowResFactory : public ResourceFactory\n{\n AbstractPrintDriver getPrintDriver() override\n {\n return LowResPrintDriver;\n }\n\n AbstractDisplayDriver getDisplayDriver() override\n {\n return LowResDisplayDriver;\n }\n};\n\nclass HighResFactory : public ResourceFactory\n{\n AbstractPrintDriver getPrintDriver() override\n {\n return HighResPrintDriver;\n }\n\n AbstractDisplayDriver getDisplayDriver() override\n {\n return HighResDisplayDriver;\n }\n};\n\n\nI won't detail both the print driver and display driver hierachies, just one will suffice to demonstrate.\nclass AbstractDisplayDriver\n{\n virtual void draw() = 0;\n};\n\nclass HighResDisplayDriver : public AbstractDisplayDriver\n{\n void draw() override\n {\n // do hardware accelerated high res drawing\n }\n};\n\nclass LowResDisplayDriver : public AbstractDisplayDriver\n{\n void draw() override\n {\n // do software drawing, low resolution\n }\n};\n\nWhy it works:\nWe could have solved this problem with a bunch of if statements:\nconst resource_type = LOW_RESOLUTION;\n\nif(resource_type == LOW_RESOLUTION)\n{\n drawLowResolution();\n printLowResolution();\n}\nelse if(resource_type == HIGH_RESOLUTION)\n{\n drawHighResolution();\n printHighResolution();\n}\n\nInstead of this we now can do:\n auto factory = HighResFactory;\n auto printDriver = factory.getPrintDriver();\n printDriver.print();\n auto displayDriver = factory.getDisplayDriver();\n displayDriver.draw();\n\nEssentially - We have abstracted away the runtime logic into the v-table for our classes.\nMy opinion on this pattern is that it isn't actually very useful. It couples together some things which need not be coupled together. The point of design patterns is usally to reduce coupling, not increase it, therefore in some ways this pattern is really an anti-pattern, but it might be useful in some contexts.\nIf you get to a stage where you are seriously considering implementing this, you might consider some alternative designs. Perhaps you could write a factory which returns factories, which themselves return the objects you eventually want. I imagine this would be more flexible, and would not have the same coupling issues.\nAddendum: The example from Gang of Four is similarly coupled. They have a MotifFactory and a PMFactory. These then produce PMWindow, PMScrollBar and MotifWindow, MotifScrollBar respectively. This is a somewhat dated text now and so it might be hard to understand the context. I recall reading this chapter and the I understood little from the example beyond having two implementations of a factory base class which return different families of objects.\n" ]
[ 348, 30, 22, 22, 13, 12, 2, 2, 0 ]
[]
[]
[ "design_patterns", "factory", "factory_method", "java", "language_agnostic" ]
stackoverflow_0013029261_design_patterns_factory_factory_method_java_language_agnostic.txt
Q: pyenv deletes python after installing I tried installing python with pyenv install 3.11.0 (though this happens no matter the version) on my Raspberry Pi. When the install was running, there was a 3.11.0 directory in ~/.pyenv/versions, pyenv versions recognized it, and the installed python is actually usable, but the dir disappeared after the installation process finished. Raspberry Pi OS - Debian GNU/Linux 11 (bullseye) aarch64 Aside from one time when it errored out, this has happened every time I tried installing, including 3.11, 3.10, and 3.9 A: #It sounds like something went wrong with the installation of Python on your Raspberry Pi. The first thing you should try is running the pyenv install command with the --verbose flag, which will provide you with more detailed output and may help you identify the issue. For example: pyenv install 3.11.0 --verbose #If that doesn't help, you can try removing the Python version that was partially installed and then try installing it again. You can use the pyenv uninstall command to remove the partially installed Python version, followed by the pyenv install command to try installing it again. For example: pyenv uninstall 3.11.0 pyenv install 3.11.0 #If you continue to have problems, you may want to try installing a different version of Python, such as 3.9.0 or 3.8.6, which are the latest versions of Python 3.9 and 3.8, respectively. You can use the same pyenv install command to install these versions. For example: pyenv install 3.9.0 pyenv install 3.8.6 #If you still can't get Python installed on your Raspberry Pi, you may want to try reinstalling pyenv itself, using the pyenv-installer script, which you can download from the pyenv GitHub page (https://github.com/pyenv/pyenv-installer). This script will automatically install pyenv and its dependencies on your system, which may help resolve any issues you're experiencing. curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash #Alternatively, you can try manually installing pyenv and its dependencies using your system's package manager. For example, on a Raspberry Pi running Debian or Raspbian, you can use the apt-get command to install pyenv and its dependencies. sudo apt-get update sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \ libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \ xz-utils tk-dev libffi-dev liblzma-dev python-openssl git #After installing pyenv and its dependencies, you can try installing Python again using the pyenv install command. #I hope this helps! Let me know if you have any other questions.
pyenv deletes python after installing
I tried installing python with pyenv install 3.11.0 (though this happens no matter the version) on my Raspberry Pi. When the install was running, there was a 3.11.0 directory in ~/.pyenv/versions, pyenv versions recognized it, and the installed python is actually usable, but the dir disappeared after the installation process finished. Raspberry Pi OS - Debian GNU/Linux 11 (bullseye) aarch64 Aside from one time when it errored out, this has happened every time I tried installing, including 3.11, 3.10, and 3.9
[ "#It sounds like something went wrong with the installation of Python on your Raspberry Pi. The first thing you should try is running the pyenv install command with the --verbose flag, which will provide you with more detailed output and may help you identify the issue. For example:\n\npyenv install 3.11.0 --verbose\n\n#If that doesn't help, you can try removing the Python version that was partially installed and then try installing it again. You can use the pyenv uninstall command to remove the partially installed Python version, followed by the pyenv install command to try installing it again. For example:\n\npyenv uninstall 3.11.0\npyenv install 3.11.0\n\n#If you continue to have problems, you may want to try installing a different version of Python, such as 3.9.0 or 3.8.6, which are the latest versions of Python 3.9 and 3.8, respectively. You can use the same pyenv install command to install these versions. For example:\n\npyenv install 3.9.0\npyenv install 3.8.6\n\n#If you still can't get Python installed on your Raspberry Pi, you may want to try reinstalling pyenv itself, using the pyenv-installer script, which you can download from the pyenv GitHub page (https://github.com/pyenv/pyenv-installer). This script will automatically install pyenv and its dependencies on your system, which may help resolve any issues you're experiencing.\n\ncurl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash\n\n#Alternatively, you can try manually installing pyenv and its dependencies using your system's package manager. For example, on a Raspberry Pi running Debian or Raspbian, you can use the apt-get command to install pyenv and its dependencies.\n\nsudo apt-get update\nsudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \\\nlibreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \\\nxz-utils tk-dev libffi-dev liblzma-dev python-openssl git\n#After installing pyenv and its dependencies, you can try installing Python again using the pyenv install command.\n\n#I hope this helps! Let me know if you have any other questions.\n\n" ]
[ 0 ]
[]
[]
[ "linux", "pyenv", "python", "raspberry_pi" ]
stackoverflow_0074648670_linux_pyenv_python_raspberry_pi.txt
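A note on the pyenv entry above: a quick way to confirm whether the 3.11.0 directory really disappears (rather than pyenv simply not listing it) is to inspect ~/.pyenv/versions directly once the install finishes. The sketch below is illustrative only and assumes the default PYENV_ROOT location; adjust the path if PYENV_ROOT points elsewhere.

# Illustrative check, not part of the original answer: list what actually
# sits under ~/.pyenv/versions after `pyenv install` completes.
from pathlib import Path

versions_dir = Path.home() / ".pyenv" / "versions"  # assumes default PYENV_ROOT
if not versions_dir.exists():
    print(f"{versions_dir} does not exist - PYENV_ROOT may point somewhere else")
else:
    installed = sorted(p.name for p in versions_dir.iterdir() if p.is_dir())
    print("Version directories found:", installed or "none")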
Q: Installing Cartopy error on Windows 10 with VSCode I am trying to install Cartopy on my laptop. I have Windows 10, and use VSCode. When installing Cartopy with pip install cartopy I get the following error: ` lib/cartopy/trace.cpp(767): fatal error C1083: Cannot open include file: 'geos_c.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ` I installed shapely, matplotlib and pygeos beforehand, but somehow it doesn't seem to do the trick. I then tried to install GEOS, but didn't succeed; apparently you have to use CMake to install it correctly, but that didn't work (I still get the same error). Is it possible to install it without installing Anaconda? (I have seen that suggested a lot online.) Any help/advice would be greatly appreciated.
Installing Cartopy error on Windows 10 with VSCode
I am trying to install Cartopy on my laptop. I have Windows 10, and use VSCode. When installing Cartopy with pip install cartopy I get the following error: ` lib/cartopy/trace.cpp(767): fatal error C1083: Cannot open include file: 'geos_c.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ` I installed shapely, matplotlib and pygeos beforehand, but somehow it doesn't seem to do the trick. I then tried to install GEOS, but didn't succeed; apparently you have to use CMake to install it correctly, but that didn't work (I still get the same error). Is it possible to install it without installing Anaconda? (I have seen that suggested a lot online.) Any help/advice would be greatly appreciated.
[]
[]
[ "Yes, you can install Cartopy without Anaconda by using the pip package manager. However, the error you are getting indicates that the geos_c.h header file is missing, which is required for Cartopy to build and work properly.\nIn order to fix this issue, you will need to install the GEOS library, which provides the geos_c.h header file. You can install GEOS using the pip command, like this:\npip install geos\n\nThis will install GEOS and its dependencies, including the geos_c.h header file. Once GEOS is installed, you should be able to install Cartopy using the pip install cartopy command without any errors.\nIf you are still having issues with the installation, you may need to set the INCLUDE and LIB environment variables to point to the directories where the GEOS headers and libraries are installed. You can do this by setting the INCLUDE variable to the path of the geos_c.h header file, and the LIB variable to the path of the geos_c.dll library file. For example:\nset INCLUDE=C:\\Program Files\\GEOS\\include\nset LIB=C:\\Program Files\\GEOS\\lib\n\nOnce you have set these variables, you should be able to run the pip install cartopy command without any errors.\nNote that you may need to restart your terminal or command prompt in order for the changes to the environment variables to take effect. Additionally, the paths to the geos_c.h and geos_c.dll files may be different on your system, so you will need to adjust the paths in the INCLUDE and LIB variables accordingly.\n" ]
[ -1 ]
[ "cartopy", "cmake", "geos", "python" ]
stackoverflow_0074680953_cartopy_cmake_geos_python.txt
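For the Cartopy entry above, once a GEOS build is in place it can be worth running a small smoke test to confirm the installed wheel actually links against it. This is only a hedged sketch of such a check (it assumes matplotlib is installed alongside Cartopy); it is not part of the original answers.

# If Cartopy and the GEOS/PROJ libraries it needs are installed correctly,
# this renders a coastline map without raising an ImportError or build error.
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
plt.savefig("cartopy_smoke_test.png")
print("Cartopy imported and rendered a map without errors")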
Q: discord.py limiting a command to only be a slash command I am trying to make a command that is only a slash command however my bot uses hybrid commands and normal prefix commands and Im not sure how to make it just a slash command. @client.event async def on_message(message): if message.content.lower() == ";report" or message.content.lower() == ";suggest": return await client.process_commands(message) @client.hybrid_command(name = "report", with_app_command=True, description="Make a suggestion to the bot or report an issue", aliases=["suggest"]) @commands.guild_only() async def report(interaction: discord.Interaction): await interaction.response.send_modal(my_modal()) I tried making an on_message event that ignores the prefix command but it ignores the on_message and listens to the command. I've tried @tree.command and @client.slash_command but they don't work. A: To make a command that can only be used as a slash command, you can use the is_slash_command attribute of the Interaction object in your command function. This attribute will be True if the command was called using the slash command syntax, and False if it was called using a prefix or hybrid command. Here is an example of how you can use this attribute to limit your report command to only be used as a slash command: @client.hybrid_command(name="report", with_app_command=True, description="Make a suggestion to the bot or report an issue", aliases=["suggest"]) @commands.guild_only() async def report(interaction: discord.Interaction): if interaction.is_slash_command: # The command was called using the slash command syntax, so we can process it await interaction.response.send_modal(my_modal()) else: # The command was not called using the slash command syntax, so we will ignore it return Note that you can also use the command attribute of the Interaction object to access the command object itself, if you need to access its attributes or perform other operations on it. @client.hybrid_command(name="report", with_app_command=True, description="Make a suggestion to the bot or report an issue", aliases=["suggest"]) @commands.guild_only() async def report(interaction: discord.Interaction): if interaction.is_slash_command: # Print the name of the command being called print(interaction.command.name) await interaction.response.send_modal(my_modal()) else: return EDIT: Maybe is_app_command @client.hybrid_command(name="report", with_app_command=True, description="Make a suggestion to the bot or report an issue", aliases=["suggest"]) @commands.guild_only() async def report(interaction: discord.Interaction): if interaction.is_app_command: # Print the name of the command being called print(interaction.command.name) await interaction.response.send_modal(my_modal()) else: return
discord.py limiting a command to only be a slash command
I am trying to make a command that is only a slash command however my bot uses hybrid commands and normal prefix commands and Im not sure how to make it just a slash command. @client.event async def on_message(message): if message.content.lower() == ";report" or message.content.lower() == ";suggest": return await client.process_commands(message) @client.hybrid_command(name = "report", with_app_command=True, description="Make a suggestion to the bot or report an issue", aliases=["suggest"]) @commands.guild_only() async def report(interaction: discord.Interaction): await interaction.response.send_modal(my_modal()) I tried making an on_message event that ignores the prefix command but it ignores the on_message and listens to the command. I've tried @tree.command and @client.slash_command but they don't work.
[ "To make a command that can only be used as a slash command, you can use the is_slash_command attribute of the Interaction object in your command function. This attribute will be True if the command was called using the slash command syntax, and False if it was called using a prefix or hybrid command.\nHere is an example of how you can use this attribute to limit your report command to only be used as a slash command:\n@client.hybrid_command(name=\"report\", with_app_command=True, description=\"Make a suggestion to the bot or report an issue\", aliases=[\"suggest\"])\n@commands.guild_only()\nasync def report(interaction: discord.Interaction):\n if interaction.is_slash_command:\n # The command was called using the slash command syntax, so we can process it\n await interaction.response.send_modal(my_modal())\n else:\n # The command was not called using the slash command syntax, so we will ignore it\n return\n\nNote that you can also use the command attribute of the Interaction object to access the command object itself, if you need to access its attributes or perform other operations on it.\n@client.hybrid_command(name=\"report\", with_app_command=True, description=\"Make a suggestion to the bot or report an issue\", aliases=[\"suggest\"])\n@commands.guild_only()\nasync def report(interaction: discord.Interaction):\n if interaction.is_slash_command:\n # Print the name of the command being called\n print(interaction.command.name)\n await interaction.response.send_modal(my_modal())\n else:\n return\n\nEDIT:\nMaybe is_app_command\n@client.hybrid_command(name=\"report\", with_app_command=True, description=\"Make a suggestion to the bot or report an issue\", aliases=[\"suggest\"])\n@commands.guild_only()\nasync def report(interaction: discord.Interaction):\n if interaction.is_app_command:\n # Print the name of the command being called\n print(interaction.command.name)\n await interaction.response.send_modal(my_modal())\n else:\n return\n\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074680538_discord_discord.py_python.txt
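One caveat on the discord.py entry above: the is_slash_command / is_app_command attributes suggested in the answer may not exist on Interaction in current discord.py releases. A pattern often used with discord.py 2.x hybrid commands is to check Context.interaction, which is None for prefix invocations and set for slash invocations. The sketch below is a hedged alternative under that assumption, not the original poster's code; MyModal is a hypothetical modal class.

import discord
from discord.ext import commands

bot = commands.Bot(command_prefix=";", intents=discord.Intents.default())

@bot.hybrid_command(name="report", description="Make a suggestion or report an issue")
@commands.guild_only()
async def report(ctx: commands.Context):
    # ctx.interaction is None when the command was invoked with the ";" prefix.
    if ctx.interaction is None:
        return  # ignore the prefix form so only the slash form responds
    await ctx.interaction.response.send_modal(MyModal())  # MyModal is hypothetical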
Q: Unity Photon Doesnt Sync Hey guys I want to make a multiplayer game with unity. But I cannot sync players. I use Photon Transform View and Photon View. I have attached Photon Transform View to Photon View but still it doesnt work. This is my player movement code: using System.Collections; using System.Collections.Generic; using UnityEngine.UI; using Photon.Realtime; using UnityEngine; public class Movement : Photon.MonoBehaviour { joystick griliVAl; Animator animasyon; int Idle = Animator.StringToHash("Idle"); // Use this for initialization void Start () { animasyon = GetComponent<Animator>(); griliVAl = GameObject.FindGameObjectWithTag("Joystick").GetComponent<joystick>(); } public void OnButton() { animasyon.Play("attack_01"); } // Update is called once per frame void Update () { float x = griliVAl.griliv.x; float y = griliVAl.griliv.y; animasyon.SetFloat("Valx", x); animasyon.SetFloat("Valy", y); Quaternion targetRotation = Quaternion.LookRotation(griliVAl.griliv*5); transform.rotation = Quaternion.Lerp(transform.rotation, targetRotation, Time.deltaTime); transform.position += griliVAl.griliv * 5 * Time.deltaTime; } } It will be mobile game. So that these griliVal value is joysticks circle. But can someone please help me to solve this issue? A: If by whatever means it doesn't work, try using OnPhotonSerializeView(). Here is what you can put public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info) { if(stream.isWriting) stream.SendNext(transform.position); } else if(stream.isReading) { transform.position = (Vector3)stream.ReceiveNext(); } } It is really similar to using Photon transform View, but you are manually synchronizing the player's position. Check if PhotonView is attached to the same gameobject the script is in for OnPhotonSerializeView to work. Don't forget to add IPunObservable in your code public class Movement : Photon.MonoBehaviour, IPunObservable
Unity Photon Doesnt Sync
Hey guys I want to make a multiplayer game with unity. But I cannot sync players. I use Photon Transform View and Photon View. I have attached Photon Transform View to Photon View but still it doesnt work. This is my player movement code: using System.Collections; using System.Collections.Generic; using UnityEngine.UI; using Photon.Realtime; using UnityEngine; public class Movement : Photon.MonoBehaviour { joystick griliVAl; Animator animasyon; int Idle = Animator.StringToHash("Idle"); // Use this for initialization void Start () { animasyon = GetComponent<Animator>(); griliVAl = GameObject.FindGameObjectWithTag("Joystick").GetComponent<joystick>(); } public void OnButton() { animasyon.Play("attack_01"); } // Update is called once per frame void Update () { float x = griliVAl.griliv.x; float y = griliVAl.griliv.y; animasyon.SetFloat("Valx", x); animasyon.SetFloat("Valy", y); Quaternion targetRotation = Quaternion.LookRotation(griliVAl.griliv*5); transform.rotation = Quaternion.Lerp(transform.rotation, targetRotation, Time.deltaTime); transform.position += griliVAl.griliv * 5 * Time.deltaTime; } } It will be mobile game. So that these griliVal value is joysticks circle. But can someone please help me to solve this issue?
[ "If by whatever means it doesn't work, try using OnPhotonSerializeView().\nHere is what you can put\npublic void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)\n {\n if(stream.isWriting)\n stream.SendNext(transform.position);\n }\n else if(stream.isReading)\n {\n transform.position = (Vector3)stream.ReceiveNext();\n }\n }\n\nIt is really similar to using Photon transform View, but you are manually synchronizing the player's position.\nCheck if PhotonView is attached to the same gameobject the script is in for OnPhotonSerializeView to work.\nDon't forget to add IPunObservable in your code\npublic class Movement : Photon.MonoBehaviour, IPunObservable\n\n" ]
[ 0 ]
[]
[]
[ "photon", "unity3d" ]
stackoverflow_0061990965_photon_unity3d.txt
Q: Why isn't my page re-rendering when props change? React newbie beating his head against the wall for two days I'm working on my first React app (a simple blog site) after doing a few tutorials, and I've been struggling with this issue for two days now. Would love some help on this one! My site lists blog posts which it pulls from the database, and I have an edit button next to a list of posts. When I push the button I query the database to pull all the data for the post in question, and send it via a prop to the edit component. const initPostToEdit = { post_id: 01, title: 'test title 2', sub_title: 'test sub-title', main_content: 'test content', post_url: 'test-url', } export default function BlogPostList() { const [postToEdit, setPostToEdit] = useState(initPostToEdit); const editPost = async (id) => { try { const response = await fetch(`http://localhost:5001/blog-edit/${id}`, { method: "GET" }) const newPostToEdit = await response.json(); setPostToEdit(newPostToEdit[0]); } catch (error) { console.error(error.message) } } return ( <div> <BlogPostEdit postToEdit={postToEdit} /> </div> ) In the edit component, the props pass along an initial filler/dummy info which populate the edit form fine. But I can't figure out how to populate the form with info from the post I want to edit, in such a way that I can submit the form and modify the database. export default function BlogPostEdit(props) { const [blogValues, setBlogValues] = useState(props.postToEdit); const changeHandler = e => { setBlogValues({ ...blogValues, [e.target.name]: e.target.value }); } } return ( <div> <h1 className='text-center mt-5'>Edit Blog Post</h1> <form className="mt-5" onSubmit={onSubmitForm}> <label htmlFor="title">Blog Post Title</label> <input id="title" name="title" type="text" className='form-control' value={blogValues.title} onChange={changeHandler} /> <label htmlFor="sub_title">Blog Post Sub-Title</label> <input id="sub_title" name="sub_title" type="text" className='form-control' value={blogValues.sub_title} onChange={changeHandler} /> </form> </div> ) If I console.log the props in the BlogPostEdit function, I can see them changing to reflect me choosing different posts to edit. The only way I can get this to reflect in the edit form is by using something like value = {props.propsToEdit.title} in the form. But when I do this, whenever I try to make edits to the form it immediately reverts to the original title. When I try to use the useState update function (setBlogValues), to reflect the new prop values, I get errors saying too many renders are happening. I hope this makes sense. Again, any help appreciated! A: You should not write the props.postToEdit on the useState hook, simply execute the function and pass on the post id you would like to edit. You're currently using the state for the function and for manipulating your state, this wont simply work.
Why isn't my page re-rendering when props change? React newbie beating his head against the wall for two days
I'm working on my first React app (a simple blog site) after doing a few tutorials, and I've been struggling with this issue for two days now. Would love some help on this one! My site lists blog posts which it pulls from the database, and I have an edit button next to a list of posts. When I push the button I query the database to pull all the data for the post in question, and send it via a prop to the edit component. const initPostToEdit = { post_id: 01, title: 'test title 2', sub_title: 'test sub-title', main_content: 'test content', post_url: 'test-url', } export default function BlogPostList() { const [postToEdit, setPostToEdit] = useState(initPostToEdit); const editPost = async (id) => { try { const response = await fetch(`http://localhost:5001/blog-edit/${id}`, { method: "GET" }) const newPostToEdit = await response.json(); setPostToEdit(newPostToEdit[0]); } catch (error) { console.error(error.message) } } return ( <div> <BlogPostEdit postToEdit={postToEdit} /> </div> ) In the edit component, the props pass along an initial filler/dummy info which populate the edit form fine. But I can't figure out how to populate the form with info from the post I want to edit, in such a way that I can submit the form and modify the database. export default function BlogPostEdit(props) { const [blogValues, setBlogValues] = useState(props.postToEdit); const changeHandler = e => { setBlogValues({ ...blogValues, [e.target.name]: e.target.value }); } } return ( <div> <h1 className='text-center mt-5'>Edit Blog Post</h1> <form className="mt-5" onSubmit={onSubmitForm}> <label htmlFor="title">Blog Post Title</label> <input id="title" name="title" type="text" className='form-control' value={blogValues.title} onChange={changeHandler} /> <label htmlFor="sub_title">Blog Post Sub-Title</label> <input id="sub_title" name="sub_title" type="text" className='form-control' value={blogValues.sub_title} onChange={changeHandler} /> </form> </div> ) If I console.log the props in the BlogPostEdit function, I can see them changing to reflect me choosing different posts to edit. The only way I can get this to reflect in the edit form is by using something like value = {props.propsToEdit.title} in the form. But when I do this, whenever I try to make edits to the form it immediately reverts to the original title. When I try to use the useState update function (setBlogValues), to reflect the new prop values, I get errors saying too many renders are happening. I hope this makes sense. Again, any help appreciated!
[ "You should not write the props.postToEdit on the useState hook, simply execute the function and pass on the post id you would like to edit. You're currently using the state for the function and for manipulating your state, this wont simply work.\n" ]
[ 2 ]
[]
[]
[ "javascript", "reactjs" ]
stackoverflow_0074680935_javascript_reactjs.txt
Q: Load random image react native I try to randomly load a gif from an array. I tried several ways to do it but none works. I either get an error message or the image just won't appear. Version 1 (result: image doesn't appear): var myPix = new Array("../assets/class/emojis/correct/clapping_hands.gif", "../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif","../assets/class/emojis/correct/confetti_ball.gif","../assets/class/emojis/correct/flexed_biceps.gif"); var randomNum = Math.floor(Math.random() * myPix.length); var theImage= myPix[randomNum]; return ( <View> <Image style={styles.gifAnimation} source={theImage} /> </View> Version 2 (result: "invalid call") var myPix = new Array("../assets/class/emojis/correct/clapping_hands.gif", "../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif","../assets/class/emojis/correct/confetti_ball.gif","../assets/class/emojis/correct/flexed_biceps.gif"); var randomNum = Math.floor(Math.random() * myPix.length); var theImage= myPix[randomNum]; return ( <View> <Image style={styles.gifAnimation} source={require(myPix[randomNum])} /> </View> Version 3 (result: image won't load): const [theImage, setTheImage] = useState(); React.useEffect(() => { var myPix = new Array( "../assets/class/emojis/correct/clapping_hands.gif", "../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif", "../assets/class/emojis/correct/confetti_ball.gif", "../assets/class/emojis/correct/flexed_biceps.gif", ); var randomNum = Math.floor(Math.random() * myPix.length); var x = myPix[randomNum]; setTheImage_Correct(x); source={image_correct Any thoughts? A: Make sure you have implemented the right environment to display GIF as explained below: For RN < 0.60 By default the Gif images are not supported in android react native app. You need to set use Fresco to display the gif images. The code: Edit your android/app/build.gradle file and add the following code: dependencies: { ... compile 'com.facebook.fresco:fresco:1.+' // For animated GIF support compile 'com.facebook.fresco:animated-gif:1.+' // For WebP support, including animated WebP compile 'com.facebook.fresco:animated-webp:1.+' compile 'com.facebook.fresco:webpsupport:1.+' } then you need to bundle the app again, You can display the gif images in two ways like this. 1-> <Image source={require('./../images/load.gif')} style={{width: 100, height: 100 }} /> 2-> <Image source={{uri: 'http://www.clicktorelease.com/code/gif/1.gif'}} style={{width: 100, height:100 }} /> enter code here For RN >= 0.60 implementation 'com.facebook.fresco:animated-gif:1.12.0' //instead of implementation 'com.facebook.fresco:animated-gif:2.0.0' //use As explained in this issue: How do I display an animated gif in React Native? A: I went back to this issue and found out why the images did not load. This is the original code. 
When using the code, the Gifs simply would not load: var myPix = new Array( "../assets/class/emojis/correct/clapping_hands.gif", "../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif", "../assets/class/emojis/correct/confetti_ball.gif", "../assets/class/emojis/correct/flexed_biceps.gif"); var randomNum = Math.floor(Math.random() * myPix.length); var theImage= myPix[randomNum]; return ( <View> <Image style={styles.gifAnimation} source={theImage} /> </View> I simply added "require()" for the image sources and it worked well: var myPix = new Array( require("../assets/class/emojis/correct/clapping_hands.gif"), require("../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif"), require("../assets/class/emojis/correct/confetti_ball.gif"), require("../assets/class/emojis/correct/flexed_biceps.gif"); var randomNum = Math.floor(Math.random() * myPix.length); var theImage= myPix[randomNum]; return ( <View> <Image style={styles.gifAnimation} source={theImage} /> </View>
Load random image react native
I try to randomly load a gif from an array. I tried several ways to do it but none works. I either get an error message or the image just won't appear. Version 1 (result: image doesn't appear): var myPix = new Array("../assets/class/emojis/correct/clapping_hands.gif", "../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif","../assets/class/emojis/correct/confetti_ball.gif","../assets/class/emojis/correct/flexed_biceps.gif"); var randomNum = Math.floor(Math.random() * myPix.length); var theImage= myPix[randomNum]; return ( <View> <Image style={styles.gifAnimation} source={theImage} /> </View> Version 2 (result: "invalid call") var myPix = new Array("../assets/class/emojis/correct/clapping_hands.gif", "../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif","../assets/class/emojis/correct/confetti_ball.gif","../assets/class/emojis/correct/flexed_biceps.gif"); var randomNum = Math.floor(Math.random() * myPix.length); var theImage= myPix[randomNum]; return ( <View> <Image style={styles.gifAnimation} source={require(myPix[randomNum])} /> </View> Version 3 (result: image won't load): const [theImage, setTheImage] = useState(); React.useEffect(() => { var myPix = new Array( "../assets/class/emojis/correct/clapping_hands.gif", "../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif", "../assets/class/emojis/correct/confetti_ball.gif", "../assets/class/emojis/correct/flexed_biceps.gif", ); var randomNum = Math.floor(Math.random() * myPix.length); var x = myPix[randomNum]; setTheImage_Correct(x); source={image_correct Any thoughts?
[ "Make sure you have implemented the right environment to display GIF as explained below:\n\nFor RN < 0.60\n\nBy default the Gif images are not supported in android react native app. You need to set use Fresco to display the gif images. The code:\nEdit your android/app/build.gradle file and add the following code:\ndependencies: {\n...\n\ncompile 'com.facebook.fresco:fresco:1.+'\n\n// For animated GIF support\ncompile 'com.facebook.fresco:animated-gif:1.+'\n\n// For WebP support, including animated WebP\ncompile 'com.facebook.fresco:animated-webp:1.+'\ncompile 'com.facebook.fresco:webpsupport:1.+' \n\n}\nthen you need to bundle the app again, You can display the gif images in two ways like this.\n1-> <Image \n source={require('./../images/load.gif')} \n style={{width: 100, height: 100 }}\n />\n\n2-> <Image \n source={{uri: 'http://www.clicktorelease.com/code/gif/1.gif'}} \n style={{width: 100, height:100 }} \n />\n enter code here\n\n\nFor RN >= 0.60\n\nimplementation 'com.facebook.fresco:animated-gif:1.12.0' //instead of\n\nimplementation 'com.facebook.fresco:animated-gif:2.0.0' //use\n\nAs explained in this issue:\nHow do I display an animated gif in React Native?\n", "I went back to this issue and found out why the images did not load. This is the original code. When using the code, the Gifs simply would not load:\nvar myPix = new Array(\n\"../assets/class/emojis/correct/clapping_hands.gif\", \"../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif\",\n\"../assets/class/emojis/correct/confetti_ball.gif\",\n\"../assets/class/emojis/correct/flexed_biceps.gif\");\n\nvar randomNum = Math.floor(Math.random() * myPix.length);\nvar theImage= myPix[randomNum];\n\n\n return (\n <View>\n <Image\n style={styles.gifAnimation}\n source={theImage}\n/>\n</View>\n\n\nI simply added \"require()\" for the image sources and it worked well:\nvar myPix = new Array(\nrequire(\"../assets/class/emojis/correct/clapping_hands.gif\"), require(\"../assets/class/emojis/correct/beaming_face_with_smiling_eyes.gif\"),\nrequire(\"../assets/class/emojis/correct/confetti_ball.gif\"),\nrequire(\"../assets/class/emojis/correct/flexed_biceps.gif\");\n\nvar randomNum = Math.floor(Math.random() * myPix.length);\nvar theImage= myPix[randomNum];\n\n\n return (\n <View>\n <Image\n style={styles.gifAnimation}\n source={theImage}\n/>\n</View>\n\n" ]
[ 1, 0 ]
[]
[]
[ "image", "react_native" ]
stackoverflow_0073820588_image_react_native.txt
Q: Getting custom elastic beanstalk logs into cloudwatch group Having issues and been unable to create a custom cloudwatch log group from .ebextentions/logs.config Here are different files I have tried. 1 --- files: /opt/elasticbeanstalk/tasks/bundlelogs.d/celery_logs.conf: content: |- /var/log/celery_beat.stdout.log /var/log/celery_flower.stdout.log /var/log/celery_worker.stdout.log /var/log/faust_worker.stdout.log group: root mode: "000755" owner: root 2 --- files: "/opt/elasticbeanstalk/config/private/logtasks/bundle/applogs.conf" : mode: "000755" owner: root group: root content: | /var/log/celery_beat.stdout.log /var/log/celery_flower.stdout.log /var/log/celery_worker.stdout.log /var/log/faust_worker.stdout.log 3 packages: yum: awslogs: [] files: "/etc/awslogs/awscli.conf" : mode: "000600" owner: root group: root content: | [plugins] cwlogs = cwlogs [default] region = `{"Ref":"AWS::Region"}` "/etc/awslogs/config/logs.conf" : mode: "000600" owner: root group: root content: | [/var/log/celery_beat.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_beat.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_beat.stdout.log [/var/log/celery_flower.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_flower.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_flower.stdout.log [/var/log/celery_worker.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_worker.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_worker.stdout.log commands: "01": command: systemctl enable awslogsd.service "02": command: systemctl restart awslogsd The logs are properly showing up in the files: /var/log/celery_beat.stdout.log /var/log/celery_flower.stdout.log /var/log/celery_worker.stdout.log /var/log/faust_worker.stdout.log But no log group is being created with no logs being transferred to it. I've tried 15 or more other similar configurations with no luck. A: #It looks like you're trying to use AWS Elastic Beanstalk to create a custom CloudWatch log group. To do this, you'll need to use an Elastic Beanstalk configuration file called logs.config to specify the settings for your log group. 
#Here is an example logs.config file that you can use as a starting point: files: "/etc/awslogs/awscli.conf": mode: "000600" owner: root group: root content: | [plugins] cwlogs = cwlogs [default] region = `{"Ref":"AWS::Region"}` "/etc/awslogs/config/logs.conf": mode: "000600" owner: root group: root content: | [/var/log/celery_beat.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_beat.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_beat.stdout.log [/var/log/celery_flower.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_flower.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_flower.stdout.log [/var/log/celery_worker.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_worker.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_worker.stdout.log commands: "01": command: systemctl enable awslogsd.service "02": command: systemctl restart awslogsd #This configuration file will create a CloudWatch log group for each of the log files you specified (/var/log/celery_beat.stdout.log, /var/log/celery_flower.stdout.log, and /var/log/celery_worker.stdout.log). The log group will be named using the environment name of your Elastic Beanstalk environment. #To use this configuration file, you'll need to save it as .ebextensions/logs.config in the root of your Elastic Beanstalk application source code. Then, when you deploy your application to Elastic Beanstalk, the log group will be created automatically. #I hope this helps! Let me know if you have any other questions.
Getting custom elastic beanstalk logs into cloudwatch group
Having issues and been unable to create a custom cloudwatch log group from .ebextentions/logs.config Here are different files I have tried. 1 --- files: /opt/elasticbeanstalk/tasks/bundlelogs.d/celery_logs.conf: content: |- /var/log/celery_beat.stdout.log /var/log/celery_flower.stdout.log /var/log/celery_worker.stdout.log /var/log/faust_worker.stdout.log group: root mode: "000755" owner: root 2 --- files: "/opt/elasticbeanstalk/config/private/logtasks/bundle/applogs.conf" : mode: "000755" owner: root group: root content: | /var/log/celery_beat.stdout.log /var/log/celery_flower.stdout.log /var/log/celery_worker.stdout.log /var/log/faust_worker.stdout.log 3 packages: yum: awslogs: [] files: "/etc/awslogs/awscli.conf" : mode: "000600" owner: root group: root content: | [plugins] cwlogs = cwlogs [default] region = `{"Ref":"AWS::Region"}` "/etc/awslogs/config/logs.conf" : mode: "000600" owner: root group: root content: | [/var/log/celery_beat.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_beat.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_beat.stdout.log [/var/log/celery_flower.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_flower.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_flower.stdout.log [/var/log/celery_worker.stdout.log] log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/log/celery_worker.stdout.log"]]}` log_stream_name = {instance_id} file = /var/log/celery_worker.stdout.log commands: "01": command: systemctl enable awslogsd.service "02": command: systemctl restart awslogsd The logs are properly showing up in the files: /var/log/celery_beat.stdout.log /var/log/celery_flower.stdout.log /var/log/celery_worker.stdout.log /var/log/faust_worker.stdout.log But no log group is being created with no logs being transferred to it. I've tried 15 or more other similar configurations with no luck.
[ "#It looks like you're trying to use AWS Elastic Beanstalk to create a custom CloudWatch log group. To do this, you'll need to use an Elastic Beanstalk configuration file called logs.config to specify the settings for your log group.\n\n#Here is an example logs.config file that you can use as a starting point:\n\nfiles:\n \"/etc/awslogs/awscli.conf\":\n mode: \"000600\"\n owner: root\n group: root\n content: |\n [plugins]\n cwlogs = cwlogs\n [default]\n region = `{\"Ref\":\"AWS::Region\"}`\n\n \"/etc/awslogs/config/logs.conf\":\n mode: \"000600\"\n owner: root\n group: root\n content: |\n [/var/log/celery_beat.stdout.log]\n log_group_name = `{\"Fn::Join\":[\"/\", [\"/aws/elasticbeanstalk\", { \"Ref\":\"AWSEBEnvironmentName\" }, \"/var/log/celery_beat.stdout.log\"]]}`\n log_stream_name = {instance_id}\n file = /var/log/celery_beat.stdout.log\n\n [/var/log/celery_flower.stdout.log]\n log_group_name = `{\"Fn::Join\":[\"/\", [\"/aws/elasticbeanstalk\", { \"Ref\":\"AWSEBEnvironmentName\" }, \"/var/log/celery_flower.stdout.log\"]]}`\n log_stream_name = {instance_id}\n file = /var/log/celery_flower.stdout.log\n\n [/var/log/celery_worker.stdout.log]\n log_group_name = `{\"Fn::Join\":[\"/\", [\"/aws/elasticbeanstalk\", { \"Ref\":\"AWSEBEnvironmentName\" }, \"/var/log/celery_worker.stdout.log\"]]}`\n log_stream_name = {instance_id}\n file = /var/log/celery_worker.stdout.log\n\ncommands:\n \"01\":\n command: systemctl enable awslogsd.service\n \"02\":\n command: systemctl restart awslogsd\n\n#This configuration file will create a CloudWatch log group for each of the log files you specified (/var/log/celery_beat.stdout.log, /var/log/celery_flower.stdout.log, and /var/log/celery_worker.stdout.log). The log group will be named using the environment name of your Elastic Beanstalk environment.\n\n#To use this configuration file, you'll need to save it as .ebextensions/logs.config in the root of your Elastic Beanstalk application source code. Then, when you deploy your application to Elastic Beanstalk, the log group will be created automatically.\n\n#I hope this helps! Let me know if you have any other questions.\n\n" ]
[ 0 ]
[]
[]
[ "amazon_cloudwatch", "amazon_elastic_beanstalk", "celery", "logging" ]
stackoverflow_0073464085_amazon_cloudwatch_amazon_elastic_beanstalk_celery_logging.txt
Q: Cannot use variable in while clause that is declared inside the do/while loop Why am I not allowed to use a variable I declared inside do and use it inside the while part of a do/while loop? do { index += index; int composite = index + 1; // more code here } while (UnsetBitmask(bitmasks, composite)); // cannot resolve symbol composite First time I saw this I thought it was a compiler bug. So I restarted Visual Studio multiple times but still it does not recognize the variable in while. What is wrong with the code, or is this a compiler bug? A: Your variable is used outside the do/while (in C# the curlies typically define the scope) and since it is declared inside the do-while, you cannot use it outside that scope. You can fix this by simply declaring it outside the loop. int composite = 0; // you are not required to set it to zero, as long // as you do not use it before you initialize it do { index += index; composite = index + 1; // more code here } while (UnsetBitmask(bitmasks, composite)); Note (1), it is common practice to declare and set a variable in one go, but this is not a requirement, as long as you set it before you use it. Note (2): in C#, curly braces {....} define the scope of variables. You cannot use a variable outside its scope, it is not "visible" anymore, in fact, once it goes out of scope, it is ready to be garbage collected and its contents can be undefined. i konw this,but why compiler cant do this You asked this in the comments. The answer is less trivial than it may seem. It highly depends on the language definition. Some languages, like Perl, or older BASIC, allow variables to be declared at any level and in any scope. Other languages allow scoping of variables and require variables to be declared prior to their use (C#, Java, C++, Python). Usually, but not necessarily, this is common for statically typed languages, as the compiler needs to know beforehand what type of data you want to put in the variable, in order to do type checking. If a compiler would force declaration, but not scope, it means that all variables would be global, which is hard in larger programs, as as a programmer, you will have to maintain where you used what variable names and maintain uniqueness. This is very tedious (think COBOL). C# added the dynamic keyword, which allows for variables that have dynamic type, but this still requires you to define the variable prior to its use. A compiler can do this, but the designers of C# have decided that clear language is better, and in their opinion, declared variables are a good thing, because it prevents spurious errors and impossible-to-follow code: // can you see the problem? va1l = 1; val1 = 2; vall = va1l + vall; Forcing you to declare variables and to have them scoped prevents this and related kinds of errors. A: This is not a bug. The condition inside the while loop is outside of the block in which the variable is defined. int composite; do { index += index; composite = index + 1; // more code here } while (UnsetBitmask(bitmasks, composite)); A: the scope of this variable is just inside the loop .... You cannot use it outside the loop You should do something like this .... int composite; do { index += index; composite = index + 1; // more code here } while (UnsetBitmask(bitmasks, composite)); A: While { ... } means a codeblock all variables within that block are limited to that special scope. Your condition is outside this scope so it cannot access the variable there. 
Declare it outside the loop and it works: int composite = 0; do { index += index; composite += index + 1; // more code here } while (UnsetBitmask(bitmasks, composite)); A: Per Specification the condition variable for do .. while should be declared outside the loop else in every iteration the variable will get re-initialized which doesn't make sense. int composite; do { index += index; composite = index + 1; // more code here } while (UnsetBitmask(bitmasks, composite)); A: More general rule: you can't use variable outside its scope that's in C# it's {...}, e.g. { int x = 123; ... } x = 456; // <- compile time error do {} while in your code just means that the scope belongs to the loop. A: I understand that the "while" is outside the block, but consider this: for (int i = 0; i < 10; i++) { Console.WriteLine(i); } Here, the variable "i" is block scoped (not visible outside the for's block), in fact a new "instance" is created on each loop iteration. But the actual definition is outside the block. You could say the "for" declaration belongs to the block, but then I would say the "while" also belongs to the block. It's just not consistent! A: int composite; do { index += index; composite = index + 1; // more code here } while (UnsetBitmask(bitmasks, composite)); A: If you really do not want to move the variable declaration outside the loop body (as all the answers suggest), there is another (pretty obvious and pretty ugly) workaround: do { index += index; int composite = index + 1; // more code here // check the condition inside the loop body, where the variable exisits if(!UnsetBitmask(bitmasks, composite)) break; } while (true);
Cannot use variable in while clause that is declared inside the do/while loop
Why am I not allowed to use a variable I declared inside do and use it inside the while part of a do/while loop? do { index += index; int composite = index + 1; // more code here } while (UnsetBitmask(bitmasks, composite)); // cannot resolve symbol composite First time I saw this I thought it was a compiler bug. So I restarted Visual Studio multiple times but still it does not recognize the variable in while. What is wrong with the code, or is this a compiler bug?
[ "Your variable is used outside the do/while (in C# the curlies typically define the scope) and since it is declared inside the do-while, you cannot use it outside that scope. \nYou can fix this by simply declaring it outside the loop.\nint composite = 0; // you are not required to set it to zero, as long \n // as you do not use it before you initialize it\ndo\n{\n index += index;\n composite = index + 1;\n // more code here\n} while (UnsetBitmask(bitmasks, composite)); \n\nNote (1), it is common practice to declare and set a variable in one go, but this is not a requirement, as long as you set it before you use it.\nNote (2): in C#, curly braces {....} define the scope of variables. You cannot use a variable outside its scope, it is not \"visible\" anymore, in fact, once it goes out of scope, it is ready to be garbage collected and its contents can be undefined.\n\n\ni konw this,but why compiler cant do this\n\nYou asked this in the comments. The answer is less trivial than it may seem. It highly depends on the language definition. Some languages, like Perl, or older BASIC, allow variables to be declared at any level and in any scope. Other languages allow scoping of variables and require variables to be declared prior to their use (C#, Java, C++, Python). Usually, but not necessarily, this is common for statically typed languages, as the compiler needs to know beforehand what type of data you want to put in the variable, in order to do type checking.\nIf a compiler would force declaration, but not scope, it means that all variables would be global, which is hard in larger programs, as as a programmer, you will have to maintain where you used what variable names and maintain uniqueness. This is very tedious (think COBOL).\nC# added the dynamic keyword, which allows for variables that have dynamic type, but this still requires you to define the variable prior to its use.\nA compiler can do this, but the designers of C# have decided that clear language is better, and in their opinion, declared variables are a good thing, because it prevents spurious errors and impossible-to-follow code:\n// can you see the problem?\nva1l = 1;\nval1 = 2;\nvall = va1l + vall;\n\nForcing you to declare variables and to have them scoped prevents this and related kinds of errors.\n", "This is not a bug. The condition inside the while loop is outside of the block in which the variable is defined.\nint composite;\ndo\n{\n index += index;\n composite = index + 1;\n // more code here\n} while (UnsetBitmask(bitmasks, composite));\n\n", "the scope of this variable is just inside the loop .... You cannot use it outside the loop \nYou should do something like this ....\nint composite;\ndo\n{\n index += index;\n composite = index + 1;\n // more code here\n} while (UnsetBitmask(bitmasks, composite)); \n\n", "While { ... } means a codeblock all variables within that block are limited to that special scope. Your condition is outside this scope so it cannot access the variable there. Declare it outside the loop and it works:\nint composite = 0;\ndo\n{\n index += index;\n composite += index + 1;\n // more code here\n} while (UnsetBitmask(bitmasks, composite));\n\n", "Per Specification the condition variable for do .. 
while should be declared outside the loop else in every iteration the variable will get re-initialized which doesn't make sense.\n int composite;\n do\n {\n index += index;\n composite = index + 1;\n // more code here\n } while (UnsetBitmask(bitmasks, composite)); \n\n", "More general rule: you can't use variable outside its scope that's in C# it's {...}, e.g.\n {\n int x = 123;\n ...\n }\n x = 456; // <- compile time error\n\ndo {} while in your code just means that the scope belongs to the loop. \n", "I understand that the \"while\" is outside the block, but consider this:\nfor (int i = 0; i < 10; i++) \n{\n Console.WriteLine(i);\n}\n\nHere, the variable \"i\" is block scoped (not visible outside the for's block), in fact a new \"instance\" is created on each loop iteration. But the actual definition is outside the block.\nYou could say the \"for\" declaration belongs to the block, but then I would say the \"while\" also belongs to the block. It's just not consistent!\n", " int composite;\ndo\n{\n index += index;\n composite = index + 1;\n // more code here\n} while (UnsetBitmask(bitmasks, composite));\n\n", "If you really do not want to move the variable declaration outside the loop body (as all the answers suggest), there is another (pretty obvious and pretty ugly) workaround:\ndo\n{\n index += index;\n int composite = index + 1;\n // more code here\n\n // check the condition inside the loop body, where the variable exisits\n if(!UnsetBitmask(bitmasks, composite)) break;\n} while (true);\n\n" ]
[ 16, 7, 2, 2, 2, 2, 1, 0, 0 ]
[]
[]
[ "c#", "do_while" ]
stackoverflow_0032334338_c#_do_while.txt
Q: Exception has occurred: ValueError Data cardinality is ambiguous: Trying to build an RNN model for the first time. For some reason I am getting a cardinality error and I am not sure why. Each column is labeled, has a respective date, and has a value in the value field. Excluding the header I have 142 values in each column. ERROR Exception has occurred: ValueError Data cardinality is ambiguous: x sizes: 142 y sizes: 141 Make sure all arrays contain the same number of samples. `import numpy as np import pandas as pd import matplotlib.pyplot as plt training_set=pd.read_csv(r'~/Desktop/HII.csv') training_set=training_set.iloc[:,1:2].values from sklearn.preprocessing import MinMaxScaler sc= MinMaxScaler() training_set=sc.fit_transform(training_set) X_train= training_set[0:142] y_train= training_set[1:142] X_train=np.reshape(X_train, (142 , 1 , 1))` A: The error message is telling you that the sizes of the x and y arrays are different. You are trying to create a dataset with 142 samples, but you are only providing 141 values for the y array. Here is the code that is causing the error: X_train= training_set[0:142] y_train= training_set[1:142] The training_set array has 142 samples, so the X_train array is correct. However, the y_train array only contains the second through the last values of the training_set array, so it only has 141 values. This is why you are getting the error message. To fix the problem, you can simply add one more value to the y_train array. For example, you could use the following code to create the X_train and y_train arrays: X_train = training_set[0:142] y_train = training_set[0:142] In this case, the y_train array will contain the same values as the X_train array, shifted by one position. This will make the sizes of the x and y arrays match, so the error will not occur.
Exception has occurred: ValueError Data cardinality is ambiguous:
Trying to build an RNN model for the first time. For some reason I am getting a cardinality error and I am not sure why. Each column is labeled, has a respective date, and has a value in the value field. Excluding the header I have 142 values in each column. ERROR Exception has occurred: ValueError Data cardinality is ambiguous: x sizes: 142 y sizes: 141 Make sure all arrays contain the same number of samples. `import numpy as np import pandas as pd import matplotlib.pyplot as plt training_set=pd.read_csv(r'~/Desktop/HII.csv') training_set=training_set.iloc[:,1:2].values from sklearn.preprocessing import MinMaxScaler sc= MinMaxScaler() training_set=sc.fit_transform(training_set) X_train= training_set[0:142] y_train= training_set[1:142] X_train=np.reshape(X_train, (142 , 1 , 1))`
[ "The error message is telling you that the sizes of the x and y arrays are different. You are trying to create a dataset with 142 samples, but you are only providing 141 values for the y array.\nHere is the code that is causing the error:\nX_train= training_set[0:142]\ny_train= training_set[1:142]\n\nThe training_set array has 142 samples, so the X_train array is correct. However, the y_train array only contains the second through the last values of the training_set array, so it only has 141 values. This is why you are getting the error message.\nTo fix the problem, you can simply add one more value to the y_train array. For example, you could use the following code to create the X_train and y_train arrays:\nX_train = training_set[0:142]\ny_train = training_set[0:142]\n\nIn this case, the y_train array will contain the same values as the X_train array, shifted by one position. This will make the sizes of the x and y arrays match, so the error will not occur.\n" ]
[ 0 ]
[]
[]
[ "ml", "python", "recurrent_neural_network" ]
stackoverflow_0074681011_ml_python_recurrent_neural_network.txt
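On the RNN entry above: making X_train and y_train identical does remove the cardinality error, but for next-step prediction the targets are usually the inputs shifted by one step, which also keeps the two arrays the same length. A short illustrative sketch, using a stand-in array rather than the poster's CSV:

import numpy as np

series = np.arange(142, dtype=float).reshape(-1, 1)   # stand-in for the scaled training_set

X_train = series[:-1]                                  # steps 0..140
y_train = series[1:]                                   # steps 1..141, same length as X_train
X_train = X_train.reshape((X_train.shape[0], 1, 1))    # (samples, timesteps, features)

print(X_train.shape, y_train.shape)                    # (141, 1, 1) (141, 1)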
Q: How to let the program detect the previous form where a new form was accessed so that the input from the new form is transferred to that previous form I have multiple forms (e.g. Form1, Form2) that both contains a button that opens another form (Form3). In Form3 (pop-up form), the user is prompted to pick among the options, and once these were submitted through a button in Form3, the selected options will be transferred to the previous form where it was opened (either form1 or form2). Both forms1 and 2 are linked to one input form3, so im thinking of using a conditional statement upon clicking the "Submit" button in Form 3 that will determine whether the active form/currently maximized form is Form1 or Form2, and will let the program redirect and transfer the data accordingly to the specific form. In maximized Form1 > clicks a button > Form 3 pop-up opens > User Input is submitted through a button > User Input is transferred to Form1 In maximized Form2 > clicks a button > Form 3 pop-up opens > User Input is submitted through a button > User Input is transferred to Form2 private void button1_Click(object sender, EventArgs e) { if (Form1.ActiveForm != null) { Form1.transfer.labQuan.Text = label8.Text; double InitAmount, AmountwFee; InitAmount = Convert.ToDouble(label12.Text); AmountwFee = InitAmount + 100; Form1.transfer.labAmount.Text = String.Format("P {0:N2}", AmountwFee); this.Hide(); } else if (Form2.ActiveForm != null) { Form2.transfer.labQuan.Text = label8.Text; double InitAmount, AmountwFee; InitAmount = Convert.ToDouble(label12.Text); AmountwFee = InitAmount + 100; Form2.transfer.labAmount.Text = String.Format("P {0:N2}", AmountwFee); this.Hide(); } } It shows the output for Form1, but for Form2 there's no output. I tried placing Form2 in the first condition (if) and that works but not for Form1 this time. Apparently, what comes first is the only condition performed by the program, and the else if is not executed. I tested if (Form1.Visible = true) works, but I've already tried and there was an error in the program. Should there be additional declarations or such or perhaps a new public class? A: You're thinking about it backwards. Instead of "pushing" from Form3 back to Form1 or Form2, you should "pull" from Form3 directly from within the code in either Form1 or Form2. You can do this by showing Form3 with ShowDialog(), which will cause execution in the current form (Form1 or Form2j) to STOP until Form3 is dismissed. In Form3, you can make public properties that can be accessed to retrieve the values from it. For example, here's a boiled down Form3: public partial class Form3 : Form { public Form3() { InitializeComponent(); } public String LabQuantity { get { return label8.Text; } } public double AmountwFee { get { return Convert.ToDouble(label12.Text) + 100; } } private void button1_Click(object sender, EventArgs e) { this.DialogResult = DialogResult.OK; } } Then in either Form1 or Form2, you'd do something like: private void button1_Click(object sender, EventArgs e) { Form3 f3 = new Form3(); if (f3.ShowDialog() == DialogResult.OK) // code STOPS here until "f3" is dismissed! { // ... do something with the data from "f3" ... Console.WriteLine(f3.LabQuantity); Console.WriteLine(f3.AmountwFee); } } A: Thank you for all the suggestions! I'm learning a lot from the feedback because I'm only new to this as a student. Since coding is still very complicated for me, I decided to learn how to duplicate forms and proceeded to make a separate form3 for both form1 and form2. 
They were the common "copy, paste" techniques plus the renaming of forms, but it worked well in the end. I will try to implement the suggestions the next time we have a coding assignment. Thank you!
How to let the program detect the previous form where a new form was accessed so that the input from the new form is transferred to that previous form
I have multiple forms (e.g. Form1, Form2) that both contains a button that opens another form (Form3). In Form3 (pop-up form), the user is prompted to pick among the options, and once these were submitted through a button in Form3, the selected options will be transferred to the previous form where it was opened (either form1 or form2). Both forms1 and 2 are linked to one input form3, so im thinking of using a conditional statement upon clicking the "Submit" button in Form 3 that will determine whether the active form/currently maximized form is Form1 or Form2, and will let the program redirect and transfer the data accordingly to the specific form. In maximized Form1 > clicks a button > Form 3 pop-up opens > User Input is submitted through a button > User Input is transferred to Form1 In maximized Form2 > clicks a button > Form 3 pop-up opens > User Input is submitted through a button > User Input is transferred to Form2 private void button1_Click(object sender, EventArgs e) { if (Form1.ActiveForm != null) { Form1.transfer.labQuan.Text = label8.Text; double InitAmount, AmountwFee; InitAmount = Convert.ToDouble(label12.Text); AmountwFee = InitAmount + 100; Form1.transfer.labAmount.Text = String.Format("P {0:N2}", AmountwFee); this.Hide(); } else if (Form2.ActiveForm != null) { Form2.transfer.labQuan.Text = label8.Text; double InitAmount, AmountwFee; InitAmount = Convert.ToDouble(label12.Text); AmountwFee = InitAmount + 100; Form2.transfer.labAmount.Text = String.Format("P {0:N2}", AmountwFee); this.Hide(); } } It shows the output for Form1, but for Form2 there's no output. I tried placing Form2 in the first condition (if) and that works but not for Form1 this time. Apparently, what comes first is the only condition performed by the program, and the else if is not executed. I tested if (Form1.Visible = true) works, but I've already tried and there was an error in the program. Should there be additional declarations or such or perhaps a new public class?
[ "You're thinking about it backwards. Instead of \"pushing\" from Form3 back to Form1 or Form2, you should \"pull\" from Form3 directly from within the code in either Form1 or Form2. You can do this by showing Form3 with ShowDialog(), which will cause execution in the current form (Form1 or Form2j) to STOP until Form3 is dismissed. In Form3, you can make public properties that can be accessed to retrieve the values from it.\nFor example, here's a boiled down Form3:\npublic partial class Form3 : Form\n{\n\n public Form3()\n {\n InitializeComponent();\n }\n\n public String LabQuantity\n {\n get\n {\n return label8.Text;\n }\n }\n\n public double AmountwFee\n {\n get\n {\n return Convert.ToDouble(label12.Text) + 100;\n }\n }\n\n private void button1_Click(object sender, EventArgs e)\n {\n this.DialogResult = DialogResult.OK;\n }\n\n}\n\nThen in either Form1 or Form2, you'd do something like:\nprivate void button1_Click(object sender, EventArgs e)\n{\n Form3 f3 = new Form3();\n if (f3.ShowDialog() == DialogResult.OK) // code STOPS here until \"f3\" is dismissed!\n {\n // ... do something with the data from \"f3\" ...\n Console.WriteLine(f3.LabQuantity);\n Console.WriteLine(f3.AmountwFee);\n }\n}\n\n", "Thank you for all the suggestions! I'm learning a lot from the feedback because I'm only new to this as a student. Since coding is still very complicated for me, I decided to learn how to duplicate forms and proceeded to make a separate form3 for both form1 and form2. They were the common \"copy, paste\" techniques plus the renaming of forms, but it worked well in the end.\nI will try to implement the suggestions the next time we have a coding assignment. Thank you!\n" ]
[ 0, 0 ]
[]
[]
[ "active_form", "button", "c#", "forms", "visible" ]
stackoverflow_0074679008_active_form_button_c#_forms_visible.txt
Q: How can I see the execution time in JanusGraph by Gremlin in Ubuntu 20.04? In JanusGraph, when any query is executed by Gremlin, how do you know the time it takes to execute the query? A: Gremlin has a profile step that will show you the time taken by each part of your query. For example, you might do: g.V().has('team-name','Arsenal').profile()
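Editor's note: a slightly fuller illustration of the profile() step; the sample traversal below is an assumption, not taken from this thread. Appending profile() to any traversal makes Gremlin return a TraversalMetrics table instead of results, with one row per step showing element counts, the time spent in that step in milliseconds, and its percentage of the total duration:
gremlin> g.V().has('team-name', 'Arsenal').out().values('name').profile()
The final row of the table reports the total traversal time, which is the overall execution time of the query.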
How can I see the execution time in JanusGraph by Gremlin in Ubuntu 20.04?
In JanusGraph, when any query is executed by Gremlin, how do you know the time it takes to execute the query?
[ "Gremlin has a profile step that will show you the time taken by each part or your query. For example you might do:\ng.V().has('team-name','Arsenal').profile()\n\n" ]
[ 0 ]
[]
[]
[ "graphdb", "gremlin", "janusgraph", "tinkerpop" ]
stackoverflow_0074665310_graphdb_gremlin_janusgraph_tinkerpop.txt
Q: Docker app requests blocked by CORS to another container I've got two containers, one for a frontend app (build with Sveltekit) and the another for a backend app (Laravel, Nginx, MySQL). I'm trying to access my backend app from my frontend, but I'm getting the following error: Access to XMLHttpRequest at 'http://127.0.0.1:8000/api/auth/login' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header contains multiple values '*, *', but only one is allowed. xhr.js:220 POST http://127.0.0.1:8000/api/auth/login net::ERR_FAILED Weird enough, I can access the API from my REST client, meaning that my backend app is accessible from outside the container. Here's my Nginx config: server { listen 80; index index.php index.html; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; root /var/www/public; location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass app:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } location / { try_files $uri $uri/ /index.php?$query_string; gzip_static on; } } Here's my backend's docker-compose file: version: "3.9" services: app: build: context: ./ dockerfile: Dockerfile image: dmc container_name: dmc-app restart: unless-stopped working_dir: /var/www/ depends_on: - db - nginx volumes: - ./:/var/www/ - ./docker/php/conf.d/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini - ./docker/php/conf.d/error_reporting.ini:/usr/local/etc/php/conf.d/error_reporting.ini expose: - "9003" networks: - dmc-net nginx: image: nginx:1.23.2-alpine container_name: dmc-nginx restart: unless-stopped # environment: # PHP_IDE_CONFIG: "serverName=dmc-server" ports: - "8000:80" volumes: - ./:/var/www - ./docker-compose/nginx:/etc/nginx/conf.d networks: - dmc-net db: image: mysql:8.0.31 container_name: dmc-db restart: unless-stopped # using 3307 on the host machine to avoid collisions in case there's a local MySQL instance installed already. ports: - "3307:3306" # use the variables declared in .env file environment: MYSQL_HOST: ${DB_HOST} MYSQL_DATABASE: ${DB_DATABASE} MYSQL_PASSWORD: ${DB_PASSWORD} MYSQL_ROOT_PASSWORD: abcd1234 MYSQL_USER: ${DB_USERNAME} SERVICE_TAGS: dev SERVICE_NAME: mysql volumes: - ./docker-compose/mysql:/docker-entrypoint-initdb.d - mysql-data:/var/lib/mysql networks: - dmc-net networks: dmc-net: driver: bridge volumes: mysql-data: and here's my frontend's docker-compose file: version: "3.9" services: dmc-web: build: context: . dockerfile: Dockerfile container_name: dmc-web restart: always ports: - "3000:3000" - "3010:3010" - "8080:8080" - "5050:5050" - "24678:24678" volumes: - ./:/var/www/html what am I missing? why can I access my backend from my REST client but not from my frontend app? Thanks A: It looks like the issue is with the Access-Control-Allow-Origin header in your backend app's response. This header specifies which domains are allowed to make requests to your API. The error message states that the header contains multiple values (*, *), but only one value is allowed. To fix this issue, you will need to update the Access-Control-Allow-Origin header in your backend app's response to only include the domain that your frontend app is running on (e.g. http://localhost:8080). This will allow requests from your frontend app to be accepted by your backend app. 
In your Laravel app, you can set the Access-Control-Allow-Origin header in the app/Http/Middleware/Cors.php file. The code should look something like this: public function handle($request, Closure $next) { return $next($request) ->header('Access-Control-Allow-Origin', 'http://localhost:8080') ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS'); } Make sure to update the value of the Access-Control-Allow-Origin header to match the domain of your frontend app. You may also need to update the Access-Control-Allow-Methods header to include any other HTTP methods that your API supports. After making this change, be sure to restart your backend app for the changes to take effect. This should resolve the CORS policy error and allow your frontend app to make requests to your backend API.
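Editor's note: the duplicated '*, *' value in the error usually means the CORS headers are being added twice, for example once by a custom middleware and once by Laravel's built-in CORS handling (or by the proxy in front of it). On recent Laravel versions the usual place to configure this is config/cors.php rather than a hand-written middleware; a minimal sketch, assuming the frontend origin used in the question:
// config/cors.php
return [
    'paths' => ['api/*'],
    'allowed_methods' => ['*'],
    'allowed_origins' => ['http://localhost:8080'],
    'allowed_headers' => ['*'],
    'supports_credentials' => true,
];
If a custom Cors middleware is also registered, removing one of the two sources of the header should leave the preflight response with a single Access-Control-Allow-Origin value.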
Docker app requests blocked by CORS to another container
I've got two containers, one for a frontend app (build with Sveltekit) and the another for a backend app (Laravel, Nginx, MySQL). I'm trying to access my backend app from my frontend, but I'm getting the following error: Access to XMLHttpRequest at 'http://127.0.0.1:8000/api/auth/login' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header contains multiple values '*, *', but only one is allowed. xhr.js:220 POST http://127.0.0.1:8000/api/auth/login net::ERR_FAILED Weird enough, I can access the API from my REST client, meaning that my backend app is accessible from outside the container. Here's my Nginx config: server { listen 80; index index.php index.html; error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; root /var/www/public; location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass app:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } location / { try_files $uri $uri/ /index.php?$query_string; gzip_static on; } } Here's my backend's docker-compose file: version: "3.9" services: app: build: context: ./ dockerfile: Dockerfile image: dmc container_name: dmc-app restart: unless-stopped working_dir: /var/www/ depends_on: - db - nginx volumes: - ./:/var/www/ - ./docker/php/conf.d/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini - ./docker/php/conf.d/error_reporting.ini:/usr/local/etc/php/conf.d/error_reporting.ini expose: - "9003" networks: - dmc-net nginx: image: nginx:1.23.2-alpine container_name: dmc-nginx restart: unless-stopped # environment: # PHP_IDE_CONFIG: "serverName=dmc-server" ports: - "8000:80" volumes: - ./:/var/www - ./docker-compose/nginx:/etc/nginx/conf.d networks: - dmc-net db: image: mysql:8.0.31 container_name: dmc-db restart: unless-stopped # using 3307 on the host machine to avoid collisions in case there's a local MySQL instance installed already. ports: - "3307:3306" # use the variables declared in .env file environment: MYSQL_HOST: ${DB_HOST} MYSQL_DATABASE: ${DB_DATABASE} MYSQL_PASSWORD: ${DB_PASSWORD} MYSQL_ROOT_PASSWORD: abcd1234 MYSQL_USER: ${DB_USERNAME} SERVICE_TAGS: dev SERVICE_NAME: mysql volumes: - ./docker-compose/mysql:/docker-entrypoint-initdb.d - mysql-data:/var/lib/mysql networks: - dmc-net networks: dmc-net: driver: bridge volumes: mysql-data: and here's my frontend's docker-compose file: version: "3.9" services: dmc-web: build: context: . dockerfile: Dockerfile container_name: dmc-web restart: always ports: - "3000:3000" - "3010:3010" - "8080:8080" - "5050:5050" - "24678:24678" volumes: - ./:/var/www/html what am I missing? why can I access my backend from my REST client but not from my frontend app? Thanks
[ "It looks like the issue is with the Access-Control-Allow-Origin header in your backend app's response. This header specifies which domains are allowed to make requests to your API. The error message states that the header contains multiple values (*, *), but only one value is allowed.\nTo fix this issue, you will need to update the Access-Control-Allow-Origin header in your backend app's response to only include the domain that your frontend app is running on (e.g. http://localhost:8080). This will allow requests from your frontend app to be accepted by your backend app.\nIn your Laravel app, you can set the Access-Control-Allow-Origin header in the app/Http/Middleware/Cors.php file. The code should look something like this:\npublic function handle($request, Closure $next)\n{\n return $next($request)\n ->header('Access-Control-Allow-Origin', 'http://localhost:8080')\n ->header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');\n}\n\nMake sure to update the value of the Access-Control-Allow-Origin header to match the domain of your frontend app. You may also need to update the Access-Control-Allow-Methods header to include any other HTTP methods that your API supports.\nAfter making this change, be sure to restart your backend app for the changes to take effect. This should resolve the CORS policy error and allow your frontend app to make requests to your backend API.\n" ]
[ 0 ]
[]
[]
[ "cors", "docker", "docker_compose", "laravel", "nginx" ]
stackoverflow_0074680388_cors_docker_docker_compose_laravel_nginx.txt
Q: What's wrong with my code? I'm using Photon PUN 2 in Unity So I'm trying to make a VR game in Unity and I'm following a tutorial on how to make a multiplayer game. I imported Photon PUN 2 and copied this code, but when I try to build or start the game I get these errors using System.Collections; using System.Collections.Generic; using UnityEngine; using Photon.Pun; using Photon.Realtime; public class networkmanager : MonoBehaviourPunCallBacks { // Start is called before the first frame update void Start() { ConnectToServer(); } void ConnectToServer() { PhotonNetwork.ConnectUsingSettings(); Debug.Log("Conecting To Server..."); } public override void OnConnectedToMaster() { Debug.Log("Connected To Server!"); base.OnConnectedToMaster(); RoomOptions roomOptions = new RoomOptions(); roomOptions.MaxPlayers = 10; roomOptions.IsVisible = true; roomOptions.IsOpen = true; PhotonNetwork.JoinOrCreateRoom("Lobby 1", roomOptions, TypedLobby.Default); } public override void OnJoinedRoom() { Debug.Log("Lobby Joined!"); base.OnJoinedRoom(); } public override void OnPlayerEnteredRoom(Player newPlayer) { Debug.Log("A New Player Has Joined The Lobby!"); base.OnPlayerEnteredRoom(newPlayer); } } A: Very simple: replace MonoBehaviourPunCallBacks with MonoBehaviourPunCallbacks and those errors should go away. A: You uppercased b in backs in MonoBehaviourPunCallbacks. It must be lowercase
What's wrong with my code? I'm using Photon PUN 2 in Unity
So I'm trying to make a VR game in Unity and I'm following a tutorial on how to make a multiplayer game. I imported Photon PUN 2 and copied this code, but when I try to build or start the game I get these errors using System.Collections; using System.Collections.Generic; using UnityEngine; using Photon.Pun; using Photon.Realtime; public class networkmanager : MonoBehaviourPunCallBacks { // Start is called before the first frame update void Start() { ConnectToServer(); } void ConnectToServer() { PhotonNetwork.ConnectUsingSettings(); Debug.Log("Conecting To Server..."); } public override void OnConnectedToMaster() { Debug.Log("Connected To Server!"); base.OnConnectedToMaster(); RoomOptions roomOptions = new RoomOptions(); roomOptions.MaxPlayers = 10; roomOptions.IsVisible = true; roomOptions.IsOpen = true; PhotonNetwork.JoinOrCreateRoom("Lobby 1", roomOptions, TypedLobby.Default); } public override void OnJoinedRoom() { Debug.Log("Lobby Joined!"); base.OnJoinedRoom(); } public override void OnPlayerEnteredRoom(Player newPlayer) { Debug.Log("A New Player Has Joined The Lobby!"); base.OnPlayerEnteredRoom(newPlayer); } }
[ "Very simple: replace MonoBehaviourPunCallBacks with MonoBehaviourPunCallbacks and those errors should go away.\n", "You uppercased b in backs in MonoBehaviourPunCallbacks. It must be lowercase\n" ]
[ 0, 0 ]
[]
[]
[ "photon", "unity3d", "visual_studio" ]
stackoverflow_0072884061_photon_unity3d_visual_studio.txt
Q: Getting unexpected behavior with multiple OR conditions Here is my code: df.where((F.col("A") != F.col("B")) | \ (F.col("A").isNotNull()) | \ (F.col("C") == F.col("D"))).show() When I do this, I do see instances that contradict some of the conditions above. Now, when I structure the code like this, it runs successfully: df.where((F.col("A") != F.col("B")))\ .where((F.col("A").isNotNull()))\ .where((F.col("C") == F.col("D"))) A: The first snippet uses | to combine the three conditions. However, | checks whether any of the conditions evaluates to true rather than all of them. Chaining where clauses, on the other hand, is equivalent to combining the conditions with and. Hence, the two snippets are not equivalent and produce different results. For equivalence, your first snippet would become df.where((F.col("A") != F.col("B")) & \ (F.col("A").isNotNull()) & \ (F.col("C") == F.col("D"))).show()
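Editor's note: a small, self-contained sketch of the difference described above; the data is made up purely for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 1, "x", "y"), (2, 3, "x", "x")], ["A", "B", "C", "D"])

# OR: a row is kept if ANY of the three conditions holds
df.where((F.col("A") != F.col("B")) | (F.col("A").isNotNull()) | (F.col("C") == F.col("D"))).show()

# chained where (logical AND): a row is kept only if ALL three conditions hold
df.where(F.col("A") != F.col("B")).where(F.col("A").isNotNull()).where(F.col("C") == F.col("D")).show()
Because every non-null A already satisfies isNotNull(), the OR version keeps both rows, while the chained version keeps only the second row, where all three conditions are true.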
Getting unexpected behavior with multiple OR conditions
Here is my code: df.where((F.col("A") != F.col("B")) | \ (F.col("A").isNotNull()) | \ (F.col("C") == F.col("D"))).show() When I do this, I do see instances that contradict some of the conditions above. Now, when I structure the code like this, it runs successfully: df.where((F.col("A") != F.col("B")))\ .where((F.col("A").isNotNull()))\ .where((F.col("C") == F.col("D")))
[ "The first snipper uses the | to combine the three conditions.However, the | checks if any of the conditions evaluate to true rather than all of them.\nHowever, chaining using where clause is equivalent to combining the conditions using and.\nHence, the snippets in the code are not equivalent and produce different results.\nFor equivalence, you first snipper will become\ndf.where((F.col(\"A\") != F.col(\"B\")) & \\\n (F.col(\"A\").isNotNull()) & \\\n (F.col(\"C\") == F.col(\"D\"))).show()\n\n" ]
[ 1 ]
[]
[]
[ "pyspark" ]
stackoverflow_0074679091_pyspark.txt
Q: Azure Function to communicate to external service via VPN I would like my Azure Function to be able to access a Postgres DB hosted on fly.io; however, to connect to the fly.io organisation I need to do it via a VPN/WireGuard tunnel, as it's hosted on a private network. Is this possible with an Azure Function? Can I have an Azure Function connect to an external service via a VPN? I have created a virtual network on Azure and have a VM on that virtual network talking to the DB via WireGuard, but I am not sure how to do it from an Azure Function. Any help would be most appreciated. A: Azure Functions supports accessing resources in a VNet through Virtual Network Integration. With this, plus a VPN Gateway on the VNet that your DB is connected to, the Azure Function would be able to access it.
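Editor's note: a rough sketch of the wiring the answer describes, using the Azure CLI; the resource names are placeholders, and the function app must be on a plan that supports VNet integration (e.g. Premium). The idea is to put the function app into the same VNet that already reaches fly.io over WireGuard (here via the existing VM acting as the tunnel endpoint):
# integrate the function app with a subnet of the VNet
az functionapp vnet-integration add \
  --name <function-app> \
  --resource-group <rg> \
  --vnet <vnet-name> \
  --subnet <integration-subnet>
Outbound traffic from the function can then be routed through the VNet (the vnetRouteAllEnabled site setting, formerly the WEBSITE_VNET_ROUTE_ALL app setting) so that it reaches the database over the existing tunnel.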
Azure Function to communicate to external service via VPN
I would like my Azure Function to be able to access a Postgres DB hosted on fly.io; however, to connect to the fly.io organisation I need to do it via a VPN/WireGuard tunnel, as it's hosted on a private network. Is this possible with an Azure Function? Can I have an Azure Function connect to an external service via a VPN? I have created a virtual network on Azure and have a VM on that virtual network talking to the DB via WireGuard, but I am not sure how to do it from an Azure Function. Any help would be most appreciated.
[ "Azure Functions supports accessing resources in a VNET through Virtual Network Integration.\nWith this and setting up a VPN Gateway on the VNET to where your DB is connected, would allow the Azure Function to access it.\n" ]
[ 0 ]
[]
[]
[ "azure_functions", "fly", "postgresql", "wireguard" ]
stackoverflow_0074329679_azure_functions_fly_postgresql_wireguard.txt
Q: How to prevent error "WARNING: Unable to acquire token for tenant 'tenantid'" when running Powershell scripts in Azure I'm creating a Service principal to be used as an Azure Runas Account for Azure Automation, using a Powershell script. The script works, however I get the following warning when it's completed WARNING: Unable to acquire token for tenant 'tenantID'. The tenantID from the warning message is another tenant that my account has access to, which has multiple subscriptions within it. However it's unrelated to the tenantid and subscription I'm logging in to. I've tried logging in via the Powershell window, then running the script without having the login inside the script, but get the same error. When I run get-AzContext in the Powershell window after the script runs, it lists the correct tenantID Function being used to login is below. the tenant ID is not the same as the one I get the Warning for function Login { # Log in $tenantid = "tenantID" $subscriptionId = "subscriptionID" $subscriptionName = "subscriptionname" Clear-AzContext -Force Message("Logging In") $account = $(Get-AzContext).Account if ([string]::IsNullOrEmpty($account)) { Login-AzAccount -Tenant $tenantid -Subscription $subscriptionId } # Select the subscription Message("Selecting the '$subscriptionName' Subscription") Set-AzContext $subscriptionId | Out-Null } I have no other references to tenantID. The only other reference I have is for the subscriptionID, in a script which is called by the original script. $Subscription = $(Get-AzContext).Subscription I'd like to understand why it's trying to access the different TenantID for a token, and not to have the error when running the script A: Login Connect-AzAccount Check your current available subscriptions Get-AzContext -ListAvailable Select the subscription you want to work on Select-AzContext -Name '' A: I posted the answer already. The Get-AzSubscription command is the issue, it tries to access all the subscriptions you have access to. You need another command to get the subscription id, I used get-azcontext to get the current subscription id A: You are trying to logon to an MFA enabled tenant. Try this and then MFA accept on your phone # Connect to your Subscription # Ex: Connect-AzAccount -Credential $credentials -Subscription 0000-4566-bcb4-000 -TenantId 00-f750-00-91d3-00 Connect-AzAccount -Subscription 00-9f21-4566-bcb4-00 -TenantId 00-f750-4013-91d3-00 A: I was having the exact same error, and I fixed it by specifying the tenant when setting the context to a specific subscription. I did by updating the following line of code: Set-AzContext $subscriptionId | Out-Null to this one: Set-AzContext -Subscription $subscription -Tenant $tenantId | Out-null A: None of the above helped me but this did! Logout of all your contexts. These persist and accumulate and eventually nothing works. Clear-AzContext
How to prevent error "WARNING: Unable to acquire token for tenant 'tenantid'" when running Powershell scripts in Azure
I'm creating a Service principal to be used as an Azure Runas Account for Azure Automation, using a Powershell script. The script works, however I get the following warning when it's completed WARNING: Unable to acquire token for tenant 'tenantID'. The tenantID from the warning message is another tenant that my account has access to, which has multiple subscriptions within it. However it's unrelated to the tenantid and subscription I'm logging in to. I've tried logging in via the Powershell window, then running the script without having the login inside the script, but get the same error. When I run get-AzContext in the Powershell window after the script runs, it lists the correct tenantID Function being used to login is below. the tenant ID is not the same as the one I get the Warning for function Login { # Log in $tenantid = "tenantID" $subscriptionId = "subscriptionID" $subscriptionName = "subscriptionname" Clear-AzContext -Force Message("Logging In") $account = $(Get-AzContext).Account if ([string]::IsNullOrEmpty($account)) { Login-AzAccount -Tenant $tenantid -Subscription $subscriptionId } # Select the subscription Message("Selecting the '$subscriptionName' Subscription") Set-AzContext $subscriptionId | Out-Null } I have no other references to tenantID. The only other reference I have is for the subscriptionID, in a script which is called by the original script. $Subscription = $(Get-AzContext).Subscription I'd like to understand why it's trying to access the different TenantID for a token, and not to have the error when running the script
[ "Login\nConnect-AzAccount\n\nCheck your current available subscriptions\nGet-AzContext -ListAvailable\n\nSelect the subscription you want to work on\nSelect-AzContext -Name ''\n\n", "I posted the answer already. The Get-AzSubscription command is the issue, it tries to access all the subscriptions you have access to. You need another command to get the subscription id, I used get-azcontext to get the current subscription id\n", "You are trying to logon to an MFA enabled tenant.\nTry this and then MFA accept on your phone\n# Connect to your Subscription\n# Ex: Connect-AzAccount -Credential $credentials -Subscription 0000-4566-bcb4-000 -TenantId 00-f750-00-91d3-00 \nConnect-AzAccount -Subscription 00-9f21-4566-bcb4-00 -TenantId 00-f750-4013-91d3-00\n\n", "I was having the exact same error, and I fixed it by specifying the tenant when setting the context to a specific subscription. I did by updating the following line of code:\nSet-AzContext $subscriptionId | Out-Null\n\nto this one:\nSet-AzContext -Subscription $subscription -Tenant $tenantId | Out-null\n\n", "None of the above helped me but this did! Logout of all your contexts. These persist and accumulate and eventually nothing works.\nClear-AzContext\n" ]
[ 5, 4, 4, 1, 0 ]
[ "One of the ways to get rid of this issue is to use Azure CLI.\n" ]
[ -5 ]
[ "azure_powershell" ]
stackoverflow_0058762844_azure_powershell.txt
Q: How to use U2F with NextJS I am trying to implement U2F in my NextJS project. Currently I am using NextJS 13 (beta). I already have the server side code working with the u2f library but how do I implement it on the client side? const U2F = require("u2f"); const Express = require("express"); const BodyParser = require("body-parser"); const Cors = require("cors"); const HTTPS = require("https"); const FS = require("fs"); const session = require("express-session"); const APP_ID = "https://localhost:2015"; const server = Express(); server.use(session({ secret: "123456", cookie: { secure: true, maxAge: 60000 }, saveUninitialized: true, resave: true })); server.use(BodyParser.json()); server.use(BodyParser.urlencoded({ extended: true })); server.use(Cors({ origin: [APP_ID], credentials: true })); let user; server.get("/register", (request, response, next) => { request.session.u2f = U2F.request(APP_ID); response.send(request.session.u2f); }); server.post("/register", (request, response, next) => { const registration = U2F.checkRegistration(request.session.u2f, request.body.registerResponse); if(!registration.successful) { return response.status(500).send({ message: "error" }); } user = registration; response.send({ message: "The hardware key has been registered" }); }); server.get("/login", (request, response, next) => { request.session.u2f = U2F.request(APP_ID, user.keyHandle); response.send(request.session.u2f); }); server.post("/login", (request, response, next) => { const success = U2F.checkSignature(request.session.u2f, request.body.loginResponse, user.publicKey); response.send(success); }); HTTPS.createServer({ key: FS.readFileSync("server.key"), cert: FS.readFileSync("server.cert") }, server).listen(443, () => { console.log("Listening at :443..."); }); Edit: What I found out is that you should use WebAuthn these days. Do any of you have a good tutorial that explains how to use it with nextjs? A: // To implement U2F on the client side, you'll need to use a U2F-compatible browser and include a U2F JavaScript library on your page. The library will handle the communication with the U2F device and allow you to register and authenticate users using their hardware keys. // Here's an example of how you could implement U2F on the client side using the u2f-api library: // Import the U2F library const u2f = require('u2f-api'); // Set the app ID for the U2F device const APP_ID = "https://localhost:2015"; // Set up the U2F request object const request = { appId: APP_ID, challenge: '123456', }; // Register the user with their U2F device u2f.register(request, function(error, data) { if (error) { // Handle error } else { // Send the U2F registration data to the server sendRegistrationDataToServer(data); } }); // Authenticate the user with their U2F device u2f.sign(request, function(error, data) { if (error) { // Handle error } else { // Send the U2F authentication data to the server sendAuthenticationDataToServer(data); } }); // You'll need to modify your server-side code to handle the registration and authentication data from the client. Then you can use the u2f library on the server to verify the data and complete the registration or authentication process. // I hope this helps! Let me know if you have any other questions.
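Editor's note: since the question's edit asks about WebAuthn rather than the deprecated U2F browser API, one commonly used option with Next.js is the @simplewebauthn pair of packages (browser + server). The route paths below are assumptions, and the exact call shape can vary slightly between library versions; this is only a sketch of the browser half of a registration flow:
import { startRegistration } from '@simplewebauthn/browser';

async function registerKey() {
  // 1. ask the server (e.g. a Next.js API route) for registration options
  const options = await fetch('/api/webauthn/register-options').then((r) => r.json());
  // 2. let the browser / authenticator create the credential
  const attestation = await startRegistration(options);
  // 3. send the result back to be verified on the server
  await fetch('/api/webauthn/register-verify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(attestation),
  });
}
The server side would use generateRegistrationOptions() and verifyRegistrationResponse() from @simplewebauthn/server inside the corresponding API routes, and an analogous pair of calls (startAuthentication on the client) handles login.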
How to use U2F with NextJS
I am trying to implement U2F in my NextJS project. Currently I am using NextJS 13 (beta). I already have the server side code working with the u2f library but how do I implement it on the client side? const U2F = require("u2f"); const Express = require("express"); const BodyParser = require("body-parser"); const Cors = require("cors"); const HTTPS = require("https"); const FS = require("fs"); const session = require("express-session"); const APP_ID = "https://localhost:2015"; const server = Express(); server.use(session({ secret: "123456", cookie: { secure: true, maxAge: 60000 }, saveUninitialized: true, resave: true })); server.use(BodyParser.json()); server.use(BodyParser.urlencoded({ extended: true })); server.use(Cors({ origin: [APP_ID], credentials: true })); let user; server.get("/register", (request, response, next) => { request.session.u2f = U2F.request(APP_ID); response.send(request.session.u2f); }); server.post("/register", (request, response, next) => { const registration = U2F.checkRegistration(request.session.u2f, request.body.registerResponse); if(!registration.successful) { return response.status(500).send({ message: "error" }); } user = registration; response.send({ message: "The hardware key has been registered" }); }); server.get("/login", (request, response, next) => { request.session.u2f = U2F.request(APP_ID, user.keyHandle); response.send(request.session.u2f); }); server.post("/login", (request, response, next) => { const success = U2F.checkSignature(request.session.u2f, request.body.loginResponse, user.publicKey); response.send(success); }); HTTPS.createServer({ key: FS.readFileSync("server.key"), cert: FS.readFileSync("server.cert") }, server).listen(443, () => { console.log("Listening at :443..."); }); Edit: What I found out is that you should use WebAuthn these days. Do any of you have a good tutorial that explains how to use it with nextjs?
[ "// To implement U2F on the client side, you'll need to use a U2F-compatible browser and include a U2F JavaScript library on your page. The library will handle the communication with the U2F device and allow you to register and authenticate users using their hardware keys.\n\n// Here's an example of how you could implement U2F on the client side using the u2f-api library:\n// Import the U2F library\nconst u2f = require('u2f-api');\n\n// Set the app ID for the U2F device\nconst APP_ID = \"https://localhost:2015\";\n\n// Set up the U2F request object\nconst request = {\n appId: APP_ID,\n challenge: '123456',\n};\n\n// Register the user with their U2F device\nu2f.register(request, function(error, data) {\n if (error) {\n // Handle error\n } else {\n // Send the U2F registration data to the server\n sendRegistrationDataToServer(data);\n }\n});\n\n// Authenticate the user with their U2F device\nu2f.sign(request, function(error, data) {\n if (error) {\n // Handle error\n } else {\n // Send the U2F authentication data to the server\n sendAuthenticationDataToServer(data);\n }\n});\n// You'll need to modify your server-side code to handle the registration and authentication data from the client. Then you can use the u2f library on the server to verify the data and complete the registration or authentication process.\n\n// I hope this helps! Let me know if you have any other questions.\n\n\n" ]
[ 0 ]
[]
[]
[ "authentication", "fido_u2f", "next.js", "node.js", "webauthn" ]
stackoverflow_0074646638_authentication_fido_u2f_next.js_node.js_webauthn.txt
Q: Unity 3D: Error CS0116: A namespace cannot directly contain members such as fields or methods I was coding AI for enemies in a game I’m making in Unity, but then Unity gave me this error. It says the error is on line 21, but I can't find it. Any help is appreciated! using System.Collections; using System.Collections.Generic; using UnityEngine; public class EnemyController : MonoBehaviour { public float lookRadius = 10f; // Start is called before the first frame update void Start() { } // Update is called once per frame void Update() { } } void OnDrawGizmosSelected() { Gizmos.color = Color.red; Gizmos.DrawWireSphere(transform.position, lookRadius); } A: You have a closing bracket after the empty Update() function that doesn't belong there, it should be below OnDrawGizmosSelected() so that it encloses the EnemyController class as follows: public class EnemyController : MonoBehaviour { public float lookRadius = 10f; // Start is called before the first frame update void Start() { } // Update is called once per frame void Update() { } void OnDrawGizmosSelected() { Gizmos.color = Color.red; Gizmos.DrawWireSphere(transform.position, lookRadius); } }
Unity 3D: Error CS0116: A namespace cannot directly contain members such as fields or methods
I was coding AI for enemies in a game I’m making in Unity, but then Unity gave me this error. It says the error is on line 21, but I can't find it. Any help is appreciated! using System.Collections; using System.Collections.Generic; using UnityEngine; public class EnemyController : MonoBehaviour { public float lookRadius = 10f; // Start is called before the first frame update void Start() { } // Update is called once per frame void Update() { } } void OnDrawGizmosSelected() { Gizmos.color = Color.red; Gizmos.DrawWireSphere(transform.position, lookRadius); }
[ "You have a closing bracket after the empty Update() function that doesn't belong there, it should be below OnDrawGizmosSelected() so that it encloses the EnemyController class as follows:\npublic class EnemyController : MonoBehaviour\n{\n public float lookRadius = 10f;\n\n // Start is called before the first frame update\n void Start()\n {\n \n }\n\n // Update is called once per frame\n void Update()\n {\n \n }\n\n void OnDrawGizmosSelected()\n {\n Gizmos.color = Color.red;\n Gizmos.DrawWireSphere(transform.position, lookRadius);\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#", "unity3d" ]
stackoverflow_0074680991_c#_unity3d.txt
Q: Process a large file using Apache Airflow Task Groups I need to process a zip file(that contains a text file) using task groups in airflow. No. of lines can vary from 1 to 50 Million. I want to read the text file in the zip file process each line and write the processed line to another text file, zip it, update Postgres tables and call another DAG to transmit this new zip file to an SFTP server. Since a single task can take more time to process a file with millions of lines, I would like to process the file using a task group. That is, a single task in the task group can process certain no. of lines and transform them. For ex. if we receive a file with 15 Million lines, 6 task groups can be called to process 2.5 Million lines each. But I am confused how to make the task group dynamic and pass the offset to each task. Below is a sample that I tried with fixed offset in islice(), def start_task(**context): print("starting the Main task...") def apply_transformation(line): return f"{line}_NEW" def task1(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 1, 2000000): apply_transformation(record) def task2(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 2000001, 4000000): apply_transformation(record) def task3(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 4000001, 6000000): apply_transformation(record) def task4(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 6000001, 8000000): apply_transformation(record) def task5(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 8000001, 10000000): apply_transformation(record) def final_task(**context): print("This is the final task to update postgres tables and call SFTP DAG...") with DAG("main", schedule_interval=None, default_args=default_args, catchup=False) as dag: st = PythonOperator( task_id='start_task', dag=dag, python_callable=start_task ) with TaskGroup(group_id='task_group_1') as tg1: t1 = PythonOperator( task_id='task1', python_callable=task1, dag=dag, ) t2 = PythonOperator( task_id='task2', python_callable=task2, dag=dag, ) t3 = PythonOperator( task_id='task3', python_callable=task3, dag=dag, ) t4 = PythonOperator( task_id='task4', python_callable=task4, dag=dag, ) t5 = PythonOperator( task_id='task5', python_callable=task5, dag=dag, ) ft = PythonOperator( task_id='final_task', dag=dag, python_callable=final_task ) st >> tg1 >> ft After applying transformation to each line, I want to get these transformed lines from different tasks and merge them into a new file and do rest of the operations in the final_task. Or are there any other methods to process large files with millions of lines in parallel? 
A: Apache Spark, Apache Hadoop, and Apache Flink are distributed computing frameworks that can be used to process large datasets in parallel. They can be used to read the text file in the zip file, process each line in parallel, and write the processed line to another text file. After that, you can zip the file, update Postgres tables, and call another DAG to transmit the new zip file to an SFTP server. A: Yes, there are several methods to process large files with millions of lines in parallel. Here are a few options: MapReduce: MapReduce is a programming model for distributed computing. It splits the data into chunks and process each chunk in parallel. It is an efficient way to process large datasets. Apache Spark: Apache Spark is an open source distributed computing platform which can be used to process large datasets. It uses a cluster of computers to process the data in parallel. Hadoop: Hadoop is a distributed computing platform that can be used to store and process large datasets. It also uses a cluster of computers to process the data in parallel. Distributed task queue: A distributed task queue is a distributed computing system that allows the execution of tasks on multiple machines, in parallel. It is a great way to process large datasets, as each task can run on a different machine, in parallel. Simple Parallel Processing: Simple parallel processing allows you to execute multiple tasks on different machines simultaneously. The advantage of this approach is that it is easy to implement and requires minimal setup. Cloud Computing: Cloud computing allows you to leverage the power of a large network of computers to process large datasets. The advantage of this approach is that it is cost-effective and can scale up easily. A: One possible solution is to use a single PythonOperator for all tasks in the task group and pass the offset as a parameter to this operator. The operator can then read the lines from the specified offset and process the required number of lines. 
Here is an example of how this can be done: def process_lines(**context): # Read the parameters passed to the operator data = context['dag_run'].conf file_name = data.get("file_name") offset = data.get("offset") num_lines = data.get("num_lines") # Open the zip file and read the text file with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: # Read the lines from the specified offset and process them for record in islice(fp, offset, offset + num_lines): apply_transformation(record) with DAG("main", schedule_interval=None, default_args=default_args, catchup=False) as dag: st = PythonOperator( task_id='start_task', dag=dag, python_callable=start_task ) with TaskGroup(group_id='task_group_1') as tg1: # Call the process_lines operator with the appropriate offset and number of lines to process t1 = PythonOperator( task_id='task1', python_callable=process_lines, dag=dag, op_kwargs={"offset": 1, "num_lines": 2000000} ) t2 = PythonOperator( task_id='task2', python_callable=process_lines, dag=dag, op_kwargs={"offset": 2000001, "num_lines": 2000000} ) t3 = PythonOperator( task_id='task3', python_callable=process_lines, dag=dag, op_kwargs={"offset": 4000001, "num_lines": 2000000} ) # Add other tasks to the task group in a similar way # Add dependencies between the tasks in the task group tg1 >> final_task In the above example, we have defined a single operator process_lines that reads the lines from a specified offset and processes the specified number of lines. This operator is called multiple times in the task group with different offsets and number of lines to process.
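Editor's note: on newer Airflow versions (2.3+) an alternative to a fixed number of hand-written tasks is dynamic task mapping, which creates one mapped task instance per chunk at runtime. The sketch below makes assumptions about the chunk size, the conf keys, and the transformation/output handling (elided here); it shows the shape, not a drop-in DAG:
import io
import zipfile
from datetime import datetime
from itertools import islice

from airflow.decorators import dag, task


@dag(schedule_interval=None, start_date=datetime(2023, 1, 1), catchup=False)
def chunked_processing():

    @task
    def make_chunks(chunk_size: int = 2_000_000, **context) -> list:
        file_name = context["dag_run"].conf["file_name"]
        with zipfile.ZipFile(file_name) as zf:
            name = zf.namelist()[0]
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                total = sum(1 for _ in fp)  # one pass just to count lines
        return [
            {"file_name": file_name, "start": s, "end": min(s + chunk_size, total)}
            for s in range(0, total, chunk_size)
        ]

    @task
    def process_chunk(chunk: dict) -> None:
        with zipfile.ZipFile(chunk["file_name"]) as zf:
            name = zf.namelist()[0]
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, chunk["start"], chunk["end"]):
                    pass  # apply_transformation(record) and write to a part file

    # one mapped task instance is created per chunk, so the degree of
    # parallelism follows the size of the input file
    process_chunk.expand(chunk=make_chunks())

chunked_processing()
The per-chunk outputs can then be concatenated, zipped and handed to the existing final_task / SFTP DAG.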
Process a large file using Apache Airflow Task Groups
I need to process a zip file(that contains a text file) using task groups in airflow. No. of lines can vary from 1 to 50 Million. I want to read the text file in the zip file process each line and write the processed line to another text file, zip it, update Postgres tables and call another DAG to transmit this new zip file to an SFTP server. Since a single task can take more time to process a file with millions of lines, I would like to process the file using a task group. That is, a single task in the task group can process certain no. of lines and transform them. For ex. if we receive a file with 15 Million lines, 6 task groups can be called to process 2.5 Million lines each. But I am confused how to make the task group dynamic and pass the offset to each task. Below is a sample that I tried with fixed offset in islice(), def start_task(**context): print("starting the Main task...") def apply_transformation(line): return f"{line}_NEW" def task1(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 1, 2000000): apply_transformation(record) def task2(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 2000001, 4000000): apply_transformation(record) def task3(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 4000001, 6000000): apply_transformation(record) def task4(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 6000001, 8000000): apply_transformation(record) def task5(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 8000001, 10000000): apply_transformation(record) def final_task(**context): print("This is the final task to update postgres tables and call SFTP DAG...") with DAG("main", schedule_interval=None, default_args=default_args, catchup=False) as dag: st = PythonOperator( task_id='start_task', dag=dag, python_callable=start_task ) with TaskGroup(group_id='task_group_1') as tg1: t1 = PythonOperator( task_id='task1', python_callable=task1, dag=dag, ) t2 = PythonOperator( task_id='task2', python_callable=task2, dag=dag, ) t3 = PythonOperator( task_id='task3', python_callable=task3, dag=dag, ) t4 = PythonOperator( task_id='task4', python_callable=task4, dag=dag, ) t5 = PythonOperator( task_id='task5', python_callable=task5, dag=dag, ) ft = PythonOperator( task_id='final_task', dag=dag, python_callable=final_task ) st >> tg1 >> ft After applying transformation to each line, I want to get these transformed lines from different tasks and merge them into a new file and do rest of the operations in the final_task. Or are there any other methods to process large files with millions of lines in parallel?
[ "Apache Spark, Apache Hadoop, and Apache Flink are distributed computing frameworks that can be used to process large datasets in parallel. They can be used to read the text file in the zip file, process each line in parallel, and write the processed line to another text file. After that, you can zip the file, update Postgres tables, and call another DAG to transmit the new zip file to an SFTP server.\n", "Yes, there are several methods to process large files with millions of lines in parallel. Here are a few options:\n\nMapReduce: MapReduce is a programming model for distributed computing. It splits the data into chunks and process each chunk in parallel. It is an efficient way to process large datasets.\n\nApache Spark: Apache Spark is an open source distributed computing platform which can be used to process large datasets. It uses a cluster of computers to process the data in parallel.\n\nHadoop: Hadoop is a distributed computing platform that can be used to store and process large datasets. It also uses a cluster of computers to process the data in parallel.\n\nDistributed task queue: A distributed task queue is a distributed computing system that allows the execution of tasks on multiple machines, in parallel. It is a great way to process large datasets, as each task can run on a different machine, in parallel.\n\nSimple Parallel Processing: Simple parallel processing allows you to execute multiple tasks on different machines simultaneously. The advantage of this approach is that it is easy to implement and requires minimal setup.\n\nCloud Computing: Cloud computing allows you to leverage the power of a large network of computers to process large datasets. The advantage of this approach is that it is cost-effective and can scale up easily.\n\n\n", "One possible solution is to use a single PythonOperator for all tasks in the task group and pass the offset as a parameter to this operator. 
The operator can then read the lines from the specified offset and process the required number of lines.\nHere is an example of how this can be done:\ndef process_lines(**context):\n # Read the parameters passed to the operator\n data = context['dag_run'].conf\n file_name = data.get(\"file_name\")\n offset = data.get(\"offset\")\n num_lines = data.get(\"num_lines\")\n\n # Open the zip file and read the text file\n with zipfile.ZipFile(file_name) as zf:\n for name in zf.namelist():\n with io.TextIOWrapper(zf.open(name), encoding=\"UTF-8\") as fp:\n # Read the lines from the specified offset and process them\n for record in islice(fp, offset, offset + num_lines):\n apply_transformation(record)\n\n\nwith DAG(\"main\",\n schedule_interval=None,\n default_args=default_args, catchup=False) as dag:\n\n st = PythonOperator(\n task_id='start_task',\n dag=dag,\n python_callable=start_task\n )\n\n with TaskGroup(group_id='task_group_1') as tg1:\n # Call the process_lines operator with the appropriate offset and number of lines to process\n t1 = PythonOperator(\n task_id='task1',\n python_callable=process_lines,\n dag=dag,\n op_kwargs={\"offset\": 1, \"num_lines\": 2000000}\n )\n t2 = PythonOperator(\n task_id='task2',\n python_callable=process_lines,\n dag=dag,\n op_kwargs={\"offset\": 2000001, \"num_lines\": 2000000}\n )\n t3 = PythonOperator(\n task_id='task3',\n python_callable=process_lines,\n dag=dag,\n op_kwargs={\"offset\": 4000001, \"num_lines\": 2000000}\n )\n # Add other tasks to the task group in a similar way\n\n # Add dependencies between the tasks in the task group\n tg1 >> final_task\n\nIn the above example, we have defined a single operator process_lines that reads the lines from a specified offset and processes the specified number of lines. This operator is called multiple times in the task group with different offsets and number of lines to process.\n" ]
[ 0, 0, 0 ]
[ "Here is a possible solution to your problem:\nFirst, you can define a function that calculates the start and end offsets for a given task and the total number of lines in the input file. For example:\ndef calculate_offsets(task_id, num_tasks, num_lines):\n chunk_size = num_lines // num_tasks\n start_offset = (task_id - 1) * chunk_size\n end_offset = task_id * chunk_size\n if task_id == num_tasks:\n end_offset = num_lines\n return start_offset, end_offset\n\nThen, you can use this function to calculate the start and end offsets for each task in the task group, and pass these values as parameters to the tasks. You can also define a helper function that applies the transformation to a slice of the input file:\ndef apply_transformation(start_offset, end_offset, file_name):\n with zipfile.ZipFile(file_name) as zf:\n for name in zf.namelist():\n with io.TextIOWrapper(zf.open(name), encoding=\"UTF-8\") as fp:\n for record in islice(fp, start_offset, end_offset):\n # Apply the transformation here and write the result to a new file\n\nFinally, you can use this helper function in the tasks of the task group.\nyou can use the calculate_offsets() and apply_transformation() functions we defined earlier to calculate the start and end offsets for each task in the task group, and apply the transformation to the corresponding slice of the input file.\nHere is an example of how you can define the tasks in the task group:\ndef task1(**context):\n data = context['dag_run'].conf\n file_name = data.get(\"file_name\")\n num_lines = data.get(\"num_lines\")\n start_offset, end_offset = calculate_offsets(1, 6, num_lines)\n apply_transformation(start_offset, end_offset, file_name)\n\ndef task2(**context):\n data = context['dag_run'].conf\n file_name = data.get(\"file_name\")\n num_lines = data.get(\"num_lines\")\n start_offset, end_offset = calculate_offsets(2, 6, num_lines)\n apply_transformation(start_offset, end_offset, file_name)\n\n# Define the other tasks in the same way\nOnce you have defined the tasks, you can call them in the task group and pass the necessary parameters to them. For example:\n\nCopy code\nwith DAG(\"main\",\n schedule_interval=None,\n default_args=default_args, catchup=False) as dag:\n\n st = PythonOperator(\n task_id='start_task',\n dag=dag,\n python_callable=start_task\n )\n\n with TaskGroup(group_id='task_group_1') as tg1:\n t1 = PythonOperator(\n task_id='task1',\n python_callable=task1,\n dag=dag,\n op_kwargs={'file_name': '{{ dag_run.conf.file_name }}',\n 'num_lines': '{{ dag_run.conf.num_lines }}'}\n )\n\n t2 = PythonOperator(\n task_id='task2',\n python_callable=task2,\n dag=dag,\n op_kwargs={'file_name': '{{ dag_run.conf.file_name }}',\n 'num_lines': '{{ dag_run.conf.num_lines }}'}\n )\n\n # Call the other tasks in the same way\n\n ft = PythonOperator(\n task_id='final_task',\n dag=dag,\n python_callable=final_task\n )\n\n st >> tg1 >> ft\n\nYou can then run this DAG and pass the necessary parameters (the name of the input file and the total number of lines in the file) to the dag_run object when you trigger the DAG.\nI hope this helps! Let me know if you have any questions.\n" ]
[ -1 ]
[ "airflow", "airflow_2.x", "python", "python_3.x" ]
stackoverflow_0074559428_airflow_airflow_2.x_python_python_3.x.txt
Q: How can I add an element to my multidimensional array I have a multidimensional array, and I would like to add an element to each entry in my lessons array. This is my array: $array = [ 'id' => 1, 'lessons' => [ [ 'name' => 'PHP', ], [ 'name' => 'Python', ] ] ]; I would like to add a flag element to my lessons. Desired output: $array = [ 'id' => 1, 'lessons' => [ [ 'name' => 'PHP', 'flag' => true ], [ 'name' => 'Python', 'flag' => true ] ] ]; I would like to do that without a foreach or a nested foreach. I tried with array_map. Code I tried: $csmap_data = array_map(function($array){ return $topics['lessons'] + ['flag' => true]; }, $topics); A: Try this: $array['lessons'] = array_map(function($lesson) { $lesson['flag'] = true; return $lesson; }, $array['lessons']); print_r($array); A: foreach( $array['lessons'] as $lesson ) { $lesson['flag'] = true; }
How can I add an element to my multidimensional array
I have a multidimensional array, and I would like to add an element to each entry in my lessons array. This is my array: $array = [ 'id' => 1, 'lessons' => [ [ 'name' => 'PHP', ], [ 'name' => 'Python', ] ] ]; I would like to add a flag element to my lessons. Desired output: $array = [ 'id' => 1, 'lessons' => [ [ 'name' => 'PHP', 'flag' => true ], [ 'name' => 'Python', 'flag' => true ] ] ]; I would like to do that without a foreach or a nested foreach. I tried with array_map. Code I tried: $csmap_data = array_map(function($array){ return $topics['lessons'] + ['flag' => true]; }, $topics);
[ "Try this:\n$array['lessons'] = array_map(function($lesson) {\n $lesson['flag'] = true;\n return $lesson;\n}, $array['lessons']);\n\nprint_r($array);\n\n", "foreach( $array['lessons'] as $lesson )\n{\n $lesson['flag'] = true;\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "php" ]
stackoverflow_0074680808_php.txt
Q: Change the JSON serialization settings of a single ASP.NET Core controller I'm having two controller controllers: ControllerA and ControllerB. The base class of each controller is Controller. The ControllerA needs to return JSON in the default format (camelCase). The ControllerB needs to return data in a different JSON format: snake_case. How can I implement this in ASP.NET Core 3.x and 2.1? I've tried the startup with: services .AddMvc() .AddJsonOptions(options => { options.SerializerSettings.Converters.Add(new StringEnumConverter()); options.SerializerSettings.ContractResolver = new DefaultContractResolver() { NamingStrategy = new SnakeCaseNamingStrategy() }; }) .AddControllersAsServices(); But this will change the serialization for all controllers, not just for ControllerB. How can I configure or annotate this feature for 1 controller? A: ASP.NET Core 3.0+ You can achieve this with a combination of an Action Filter and an Output Formatter. Things look a little different for 3.0+, where the default JSON-formatters for 3.0+ are based on System.Text.Json. At the time of writing, these don't have built-in support for a snake-case naming strategy. However, if you're using Json.NET with 3.0+ (details in the docs), the SnakeCaseAttribute from above is still viable, with a couple of changes: JsonOutputFormatter is now NewtonsoftJsonOutputFormatter. The NewtonsoftJsonOutputFormatter constructor requires an argument of MvcOptions. Here's the code: public class SnakeCaseAttribute : ActionFilterAttribute { public override void OnActionExecuted(ActionExecutedContext ctx) { if (ctx.Result is ObjectResult objectResult) { objectResult.Formatters.Add(new NewtonsoftJsonOutputFormatter( new JsonSerializerSettings { ContractResolver = new DefaultContractResolver { NamingStrategy = new SnakeCaseNamingStrategy() } }, ctx.HttpContext.RequestServices.GetRequiredService<ArrayPool<char>>(), ctx.HttpContext.RequestServices.GetRequiredService<IOptions<MvcOptions>>().Value)); } } } ASP.NET Core 2.x You can achieve this with a combination of an Action Filter and an Output Formatter. Here's an example of what the Action Filter might look like: public class SnakeCaseAttribute : ActionFilterAttribute { public override void OnActionExecuted(ActionExecutedContext ctx) { if (ctx.Result is ObjectResult objectResult) { objectResult.Formatters.Add(new JsonOutputFormatter( new JsonSerializerSettings { ContractResolver = new DefaultContractResolver { NamingStrategy = new SnakeCaseNamingStrategy() } }, ctx.HttpContext.RequestServices.GetRequiredService<ArrayPool<char>>())); } } } Using OnActionExecuted, the code runs after the corresponding action and first checks to see if the result is an ObjectResult (which also applies to OkObjectResult thanks to inheritance). If it is an ObjectResult, the filter simply adds a customised version of a JsonOutputFormatter that will serialise the properties using SnakeCaseNamingStrategy. The second parameter in the JsonOutputFormatter constructor is retrieved from the DI container. In order to use this filter, just apply it to the relevant controller: [SnakeCase] public class ControllerB : Controller { } Note: You might want to create the JsonOutputFormatter/NewtonsoftJsonOutputFormatter ahead of time somewhere, for example - I've not gone that far in the example as that's secondary to the question at hand. 
A: Ended up creating this method that I use on my end points: { // needed to get the same date and property formatting // as the Search Service: var settings = new JsonSerializerSettings { ContractResolver = new DefaultContractResolver() { NamingStrategy = new SnakeCaseNamingStrategy() }, DateFormatString = "yyyy-MM-ddTHH:mm:ss.fffZ" }; return Json(result, settings); } A: No need for action filters etc. Just override Json() in your controller and that's it. public class MyController : Controller { public override JsonResult Json(object data) { return base.Json(data, new JsonSerializerSettings { // set whataever default options you want }); } } A: In my case, I had to prevent camelCase property naming policy, at action level only, by sticking to System.Text.Json. So, I combined @Kirk's answer and another one from here : public class NoPropertyNamingPolicyAttribute : ActionFilterAttribute { private static readonly SystemTextJsonOutputFormatter SSystemTextJsonOutputFormatter = new SystemTextJsonOutputFormatter(new JsonSerializerOptions { // do not apply any policy, // leave the property names as they are defined in the TestResponse class: PropertyNamingPolicy = null, // to apply camelCase policy: //PropertyNamingPolicy = System.Text.Json.JsonNamingPolicy.CamelCase }); public override void OnActionExecuted(ActionExecutedContext context) { if (context.Result is ObjectResult objectResult) { objectResult.Formatters.Add(SSystemTextJsonOutputFormatter); } } } Usage: [HttpGet("test")] [NoPropertyNamingPolicy] public TestResponse TestAction() { return new TestResponse(); }
Change the JSON serialization settings of a single ASP.NET Core controller
I have two controllers: ControllerA and ControllerB. The base class of each controller is Controller. ControllerA needs to return JSON in the default format (camelCase). ControllerB needs to return data in a different JSON format: snake_case. How can I implement this in ASP.NET Core 3.x and 2.1? I've tried this in the startup: services .AddMvc() .AddJsonOptions(options => { options.SerializerSettings.Converters.Add(new StringEnumConverter()); options.SerializerSettings.ContractResolver = new DefaultContractResolver() { NamingStrategy = new SnakeCaseNamingStrategy() }; }) .AddControllersAsServices(); But this changes the serialization for all controllers, not just ControllerB. How can I configure or annotate this feature for a single controller?
[ "ASP.NET Core 3.0+\nYou can achieve this with a combination of an Action Filter and an Output Formatter.\nThings look a little different for 3.0+, where the default JSON-formatters for 3.0+ are based on System.Text.Json. At the time of writing, these don't have built-in support for a snake-case naming strategy.\nHowever, if you're using Json.NET with 3.0+ (details in the docs), the SnakeCaseAttribute from above is still viable, with a couple of changes:\n\nJsonOutputFormatter is now NewtonsoftJsonOutputFormatter.\nThe NewtonsoftJsonOutputFormatter constructor requires an argument of MvcOptions.\n\nHere's the code:\npublic class SnakeCaseAttribute : ActionFilterAttribute\n{\n public override void OnActionExecuted(ActionExecutedContext ctx)\n {\n if (ctx.Result is ObjectResult objectResult)\n {\n objectResult.Formatters.Add(new NewtonsoftJsonOutputFormatter(\n new JsonSerializerSettings\n {\n ContractResolver = new DefaultContractResolver\n {\n NamingStrategy = new SnakeCaseNamingStrategy()\n }\n },\n ctx.HttpContext.RequestServices.GetRequiredService<ArrayPool<char>>(),\n ctx.HttpContext.RequestServices.GetRequiredService<IOptions<MvcOptions>>().Value));\n }\n }\n}\n\nASP.NET Core 2.x\nYou can achieve this with a combination of an Action Filter and an Output Formatter. Here's an example of what the Action Filter might look like:\npublic class SnakeCaseAttribute : ActionFilterAttribute\n{\n public override void OnActionExecuted(ActionExecutedContext ctx)\n {\n if (ctx.Result is ObjectResult objectResult)\n {\n objectResult.Formatters.Add(new JsonOutputFormatter(\n new JsonSerializerSettings\n {\n ContractResolver = new DefaultContractResolver\n {\n NamingStrategy = new SnakeCaseNamingStrategy()\n }\n },\n ctx.HttpContext.RequestServices.GetRequiredService<ArrayPool<char>>()));\n }\n }\n}\n\nUsing OnActionExecuted, the code runs after the corresponding action and first checks to see if the result is an ObjectResult (which also applies to OkObjectResult thanks to inheritance). If it is an ObjectResult, the filter simply adds a customised version of a JsonOutputFormatter that will serialise the properties using SnakeCaseNamingStrategy. The second parameter in the JsonOutputFormatter constructor is retrieved from the DI container.\nIn order to use this filter, just apply it to the relevant controller:\n[SnakeCase]\npublic class ControllerB : Controller { }\n\n\nNote: You might want to create the JsonOutputFormatter/NewtonsoftJsonOutputFormatter ahead of time somewhere, for example - I've not gone that far in the example as that's secondary to the question at hand.\n", "Ended up creating this method that I use on my end points:\n{ \n // needed to get the same date and property formatting \n // as the Search Service:\n var settings = new JsonSerializerSettings\n {\n ContractResolver = new DefaultContractResolver()\n {\n NamingStrategy = new SnakeCaseNamingStrategy()\n },\n DateFormatString = \"yyyy-MM-ddTHH:mm:ss.fffZ\"\n };\n\n return Json(result, settings);\n}\n\n", "No need for action filters etc. 
Just override Json() in your controller and that's it.\npublic class MyController : Controller\n{\n public override JsonResult Json(object data)\n {\n return base.Json(data, new JsonSerializerSettings {\n // set whataever default options you want\n });\n }\n}\n\n", "In my case, I had to prevent camelCase property naming policy, at action level only, by sticking to System.Text.Json.\nSo, I combined @Kirk's answer and another one from here :\npublic class NoPropertyNamingPolicyAttribute : ActionFilterAttribute\n{\n private static readonly SystemTextJsonOutputFormatter SSystemTextJsonOutputFormatter = new SystemTextJsonOutputFormatter(new JsonSerializerOptions\n {\n // do not apply any policy,\n // leave the property names as they are defined in the TestResponse class:\n PropertyNamingPolicy = null,\n\n // to apply camelCase policy:\n //PropertyNamingPolicy = System.Text.Json.JsonNamingPolicy.CamelCase\n });\n\n public override void OnActionExecuted(ActionExecutedContext context)\n {\n if (context.Result is ObjectResult objectResult)\n {\n objectResult.Formatters.Add(SSystemTextJsonOutputFormatter);\n }\n }\n}\n\nUsage:\n[HttpGet(\"test\")]\n[NoPropertyNamingPolicy]\npublic TestResponse TestAction()\n{\n return new TestResponse();\n}\n\n" ]
[ 44, 3, 3, 0 ]
[]
[]
[ ".net_core", "asp.net_core", "asp.net_core_mvc", "c#", "json.net" ]
stackoverflow_0052605946_.net_core_asp.net_core_asp.net_core_mvc_c#_json.net.txt
Q: Alphabetize and word frequency from a file using strtok in C My goal is to analyze a text file, tokenize each word, then alphabetize each word with its word frequency. Example: Input: The house is on the ground on earth. Output: earth - 1 ground - 1 house - 1 is - 1 on - 2 the - 2 I have been able to open the file, read the file line by line, tokenize each word, converted the tokens to lowercase. I am stuck grouping and alphabetizing each token. #include <stdio.h> #include <stdlib.h> void lower_string(char s[]); int main() { FILE *file; //char path[100]; char ch[100]; int characters; /* Input path of files to merge to third file printf("Enter source file path: "); scanf("%s", path); file = fopen(path, "r");*/ file = fopen("test.txt", "r"); //testing w.o repeated input /* Check if file opened successfully */ if (file == NULL) { printf("\nUnable to open file.\n"); printf("Please check if file exists and you have read privilege.\n"); exit(EXIT_FAILURE); } const char delim[] = " ,.;!?[\n]"; char *token; int tokenNum; while (fgets(ch, sizeof(ch), file) != NULL) { lower_string(ch); token = strtok(ch, delim); while (token != NULL) { printf("Token:%s\n", token); token = strtok(NULL, delim); tokenNum++; } } printf("%d\n", tokenNum); //total words testing /* Close files to release resources */ fclose(file); return 0; } void lower_string(char s[]) { int c = 0; while (s[c] != '\0') { if (s[c] >= 'A' && s[c] <= 'Z') { s[c] = s[c] + 32; } c++; } } I have been looking into building and manipulating an ordered linked list of integers and binary search tree of integers. I'm having a hard time figuring out where I should begin to implement these features. So far i have been looking at the code below for ordered linked list. #include <stdio.h> #include <stdlib.h> //These structures are declared globally so they are available to all functions //in the program. typedef struct list_node_s { //defines structure of one node int key; //key value - here an integer int count; //frequency key value encountered in input struct list_node_s *restp; //pointer to the next node in list = NULL if EOL } list_node_t; typedef struct //defines head of list structure { list_node_t *headp; //pointer to first node in list, NULL if list is empty int size; //current number of nodes in the list } ordered_list_t; //Prototypes list_node_t * insert_in_order (list_node_t * old_listp, int new_key); void insert (ordered_list_t * listp, int key); int delete (ordered_list_t * listp, int target); list_node_t * delete_ordered_node (list_node_t * listp, int target,int *is_deleted); void print_list (ordered_list_t * listp); #define SEND -999 //end of input sentinal int main (void) { int next_key; ordered_list_t my_list = {NULL, 0}; printf("\n\nProgram to build, display and manipulate (delete) an Ordered Linked List \n"); printf("\nAdapted from code in \"Problem Solving and Programming in C\" by J.R. Hanly and E.B. Koffman\n\n"); printf ("enter integer keys - end list with %d\n", SEND); /* build list by in-order insertions*/ for (scanf ("%d", &next_key); next_key != SEND; scanf ("%d", &next_key)) { insert (&my_list, next_key); } /* Display completed list */ printf ("\nOrdered list as built:\n"); print_list(&my_list); /* Process requested deletions */ printf("enter key value for node to be removed from list or %d to end > ", SEND); for (scanf ("%d", &next_key); next_key != SEND; scanf ("%d", &next_key)) { if (delete (&my_list, next_key)) { printf ("%d deleted.\n New list:\n", next_key); print_list (&my_list); } else { printf ("No deletion. 
%d not found\n", next_key); } printf ("enter key value for node to be removed from list or %d to end > ", SEND); } return (0); } /* prints contents of a linked list Display the elements in the list pointed to by the pointer list.*/ void print_list (ordered_list_t * listp) { list_node_t * tmp; for (tmp = listp->headp; tmp != NULL; tmp = tmp->restp) printf ("key = %d; count = %d\n", tmp->key, tmp->count); printf ("\n\n"); } //Inserts a new node containing new_key into an existing list and returns a pointer to the first node of the new list list_node_t * insert_in_order (list_node_t * old_listp, int new_key) { list_node_t * new_listp; if (old_listp == NULL) //check for end of list (EOL) { new_listp = (list_node_t *) malloc (sizeof (list_node_t)); new_listp->key = new_key; new_listp->count = 1; new_listp->restp = NULL; } else if (old_listp->key == new_key) //check for matching key, increment count { old_listp->count++; new_listp = old_listp; } else if (old_listp->key > new_key) //Next node key value > new key, so insert new node at current location { new_listp = (list_node_t *) malloc (sizeof (list_node_t)); new_listp->key = new_key; new_listp->count = 1; new_listp->restp = old_listp; } else { new_listp = old_listp; new_listp->restp = insert_in_order (old_listp->restp, new_key); } return (new_listp); } //inserts a node into an ordered list_node_t void insert (ordered_list_t * listp, int key) { ++(listp->size); listp->headp = insert_in_order (listp->headp, key); } //deletes the first node containing the target key from an ordered list; returns 1 //if target found & deleted, 0 otherwise (means target not in list) int delete (ordered_list_t * listp, int target) { int is_deleted; listp->headp = delete_ordered_node (listp->headp, target, &is_deleted); if (is_deleted) --(listp->size); //reduce current node count (size); keep size of list current return (is_deleted); } /* deletes node containing target key from a list whose head is listp; returns a pointer to the modified list (incase it is the first node, pointed to by listp), frees the memory used by tyhe deleted node and sets a flag to indicate success (1) or failure (0; usually means no such node found). */ list_node_t * delete_ordered_node (list_node_t * listp, int target, int *is_deleted) { list_node_t *to_freep, *ansp; // if list empty, nothing to do; return NULL printf ("check for empty list; target: %d \n", target); if (listp == NULL) { *is_deleted = 0; ansp = NULL; } //if first node is to be deleted, do it; relink rest of list to list header struct else if (listp->key == target) { printf ("at first node; target: %d \n", target); *is_deleted = 1; to_freep = listp; //keeps track of node memory location to be freed ansp = listp->restp; free (to_freep); //release the memory of the deleted node for reuse } //if target exists, it is further down the list (recursive step), make recursive call //to move down the list looking for the target value else { printf ("chase down list to find: %d \n", target); ansp = listp; ansp->restp = delete_ordered_node (listp->restp, target, is_deleted); } return (ansp); } I'm finding it hard to implement that with strtok. 12/4 EDIT: added: Nodes for BST. Questions- Don't know if key needs to be tracked.(I assume it'll be useful to pull specific words). Where/how would I add the logic to alphabetize the tree.(study sources appreciated) How do I pass each word through this tree? 
#define WLENGTH 100 //Base Node info struct node { char word[WLENGTH]; int key; int freq; struct node *left, *right; }; //Function to create a new node struct node *newNode(char wordn, int item, int freqn) { struct node *temp = (struct node *) malloc(sizeof(struct node)); temp->word = wordn; temp->key = item; temp->freq = freqn; temp->left = temp->right = NULL; return temp; } //Function to place nodes in order void inorder(struct node *root) { if (root != NULL) { inorder(root->left); printf("%d ", root->key); inorder(root->right); } } /*Function to insert a new node with given key*/ struct node* insert(struct node* node, int key) { /* If the tree is empty, return a new node */ if (node == NULL) return newNode(key); /* Otherwise, recur down the tree */ if (key < node->key) node->left = insert(node->left, key); else if (key > node->key) node->right = insert(node->right, key); /* return the (unchanged) node pointer */ return node; } A: At the request of the OP, here is a bit of code to bulk load an entire text file for processing: FILE *mustOpen( char *fname, char *mode ) { FILE *fp = fopen( fname, mode ); if( fp == NULL ) { fprintf( stderr, "Cannot open '%s'\n", fname ); exit( EXIT_FAILURE ); } return fp; } // Passed the path to a file, opens, measures and bulk loads entire file (plus terminating '\0') char *loadFile( char *fname ) { FILE *fp = mustOpen( fname, "rb" ); fseek( fp, 0, SEEK_END ); size_t size = ftell( fp ); fseek( fp, 0, SEEK_SET ); char *buf; if( ( buf = malloc( size + 1) ) == NULL ) fprintf( stderr, "Malloc() failed\n" ), exit( EXIT_FAILURE ); if( fread( buf, sizeof *buf, size, fp ) != size ) fprintf( stderr, "Read incomplete\n" ), exit( EXIT_FAILURE ); fclose( fp ); *(buf + size) = '\0'; // xtra byte allows strXXX() to work return buf; // pointer to heap allocated buffer containing file's bytes } Remember to free() the buffer when done with its contents. With the entire text loaded (and NULL terminated), here is a way to skip along the entire "string" finding each "word" (as defined by the delimiters): for( char *cp = buf; (cp = strtok( cp, delim )) != NULL; cp = NULL ) { /* process each single "word" */ } Since the "text" is in memory, instances of each of the "words" are in memory, too. All that's needed is populating a BST with nodes that 'point to' one instance of each "word" and a counter that counts multiple occurrences of each word. Finally, an "in order" traversal of the BST will give an alphabetised list of words and their frequency in the text. Be sure to compartmentalize each of the functions. The "blocks" of functionality can then be re-used in other projects, and, who knows?... You may want to first load a dictionary and only report the words (and locations) that do not appear in the dictionary (typos?). The code that handles the BST "structure" (searching, adding, traversing) should be somewhat independent of what "information fields" comprise each node.
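To connect the tokenising loop to the BST sketched in the question, here is one possible word-based insert (my own sketch, not part of the answer above): it drops the separate int key, orders nodes with strcmp so the in-order traversal comes out alphabetical, and bumps freq when a word repeats. It assumes string.h is included and that every token is shorter than WLENGTH.

struct node *insert_word(struct node *root, const char *word)
{
    if (root == NULL) {                              /* empty spot: make a node for this word */
        struct node *n = malloc(sizeof *n);
        strcpy(n->word, word);                       /* assumes strlen(word) < WLENGTH */
        n->freq = 1;
        n->left = n->right = NULL;
        return n;
    }
    int cmp = strcmp(word, root->word);              /* strcmp supplies the alphabetical order */
    if (cmp == 0)
        root->freq++;                                /* same word seen again: just count it */
    else if (cmp < 0)
        root->left = insert_word(root->left, word);
    else
        root->right = insert_word(root->right, word);
    return root;
}

Inside the strtok loop the call would look like root = insert_word(root, token); and an in-order traversal afterwards prints each word with its frequency in alphabetical order.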
Alphabetize and word frequency from a file using strtok in C
My goal is to analyze a text file, tokenize each word, then alphabetize each word with its word frequency. Example: Input: The house is on the ground on earth. Output: earth - 1 ground - 1 house - 1 is - 1 on - 2 the - 2 I have been able to open the file, read the file line by line, tokenize each word, converted the tokens to lowercase. I am stuck grouping and alphabetizing each token. #include <stdio.h> #include <stdlib.h> void lower_string(char s[]); int main() { FILE *file; //char path[100]; char ch[100]; int characters; /* Input path of files to merge to third file printf("Enter source file path: "); scanf("%s", path); file = fopen(path, "r");*/ file = fopen("test.txt", "r"); //testing w.o repeated input /* Check if file opened successfully */ if (file == NULL) { printf("\nUnable to open file.\n"); printf("Please check if file exists and you have read privilege.\n"); exit(EXIT_FAILURE); } const char delim[] = " ,.;!?[\n]"; char *token; int tokenNum; while (fgets(ch, sizeof(ch), file) != NULL) { lower_string(ch); token = strtok(ch, delim); while (token != NULL) { printf("Token:%s\n", token); token = strtok(NULL, delim); tokenNum++; } } printf("%d\n", tokenNum); //total words testing /* Close files to release resources */ fclose(file); return 0; } void lower_string(char s[]) { int c = 0; while (s[c] != '\0') { if (s[c] >= 'A' && s[c] <= 'Z') { s[c] = s[c] + 32; } c++; } } I have been looking into building and manipulating an ordered linked list of integers and binary search tree of integers. I'm having a hard time figuring out where I should begin to implement these features. So far i have been looking at the code below for ordered linked list. #include <stdio.h> #include <stdlib.h> //These structures are declared globally so they are available to all functions //in the program. typedef struct list_node_s { //defines structure of one node int key; //key value - here an integer int count; //frequency key value encountered in input struct list_node_s *restp; //pointer to the next node in list = NULL if EOL } list_node_t; typedef struct //defines head of list structure { list_node_t *headp; //pointer to first node in list, NULL if list is empty int size; //current number of nodes in the list } ordered_list_t; //Prototypes list_node_t * insert_in_order (list_node_t * old_listp, int new_key); void insert (ordered_list_t * listp, int key); int delete (ordered_list_t * listp, int target); list_node_t * delete_ordered_node (list_node_t * listp, int target,int *is_deleted); void print_list (ordered_list_t * listp); #define SEND -999 //end of input sentinal int main (void) { int next_key; ordered_list_t my_list = {NULL, 0}; printf("\n\nProgram to build, display and manipulate (delete) an Ordered Linked List \n"); printf("\nAdapted from code in \"Problem Solving and Programming in C\" by J.R. Hanly and E.B. Koffman\n\n"); printf ("enter integer keys - end list with %d\n", SEND); /* build list by in-order insertions*/ for (scanf ("%d", &next_key); next_key != SEND; scanf ("%d", &next_key)) { insert (&my_list, next_key); } /* Display completed list */ printf ("\nOrdered list as built:\n"); print_list(&my_list); /* Process requested deletions */ printf("enter key value for node to be removed from list or %d to end > ", SEND); for (scanf ("%d", &next_key); next_key != SEND; scanf ("%d", &next_key)) { if (delete (&my_list, next_key)) { printf ("%d deleted.\n New list:\n", next_key); print_list (&my_list); } else { printf ("No deletion. 
%d not found\n", next_key); } printf ("enter key value for node to be removed from list or %d to end > ", SEND); } return (0); } /* prints contents of a linked list Display the elements in the list pointed to by the pointer list.*/ void print_list (ordered_list_t * listp) { list_node_t * tmp; for (tmp = listp->headp; tmp != NULL; tmp = tmp->restp) printf ("key = %d; count = %d\n", tmp->key, tmp->count); printf ("\n\n"); } //Inserts a new node containing new_key into an existing list and returns a pointer to the first node of the new list list_node_t * insert_in_order (list_node_t * old_listp, int new_key) { list_node_t * new_listp; if (old_listp == NULL) //check for end of list (EOL) { new_listp = (list_node_t *) malloc (sizeof (list_node_t)); new_listp->key = new_key; new_listp->count = 1; new_listp->restp = NULL; } else if (old_listp->key == new_key) //check for matching key, increment count { old_listp->count++; new_listp = old_listp; } else if (old_listp->key > new_key) //Next node key value > new key, so insert new node at current location { new_listp = (list_node_t *) malloc (sizeof (list_node_t)); new_listp->key = new_key; new_listp->count = 1; new_listp->restp = old_listp; } else { new_listp = old_listp; new_listp->restp = insert_in_order (old_listp->restp, new_key); } return (new_listp); } //inserts a node into an ordered list_node_t void insert (ordered_list_t * listp, int key) { ++(listp->size); listp->headp = insert_in_order (listp->headp, key); } //deletes the first node containing the target key from an ordered list; returns 1 //if target found & deleted, 0 otherwise (means target not in list) int delete (ordered_list_t * listp, int target) { int is_deleted; listp->headp = delete_ordered_node (listp->headp, target, &is_deleted); if (is_deleted) --(listp->size); //reduce current node count (size); keep size of list current return (is_deleted); } /* deletes node containing target key from a list whose head is listp; returns a pointer to the modified list (incase it is the first node, pointed to by listp), frees the memory used by tyhe deleted node and sets a flag to indicate success (1) or failure (0; usually means no such node found). */ list_node_t * delete_ordered_node (list_node_t * listp, int target, int *is_deleted) { list_node_t *to_freep, *ansp; // if list empty, nothing to do; return NULL printf ("check for empty list; target: %d \n", target); if (listp == NULL) { *is_deleted = 0; ansp = NULL; } //if first node is to be deleted, do it; relink rest of list to list header struct else if (listp->key == target) { printf ("at first node; target: %d \n", target); *is_deleted = 1; to_freep = listp; //keeps track of node memory location to be freed ansp = listp->restp; free (to_freep); //release the memory of the deleted node for reuse } //if target exists, it is further down the list (recursive step), make recursive call //to move down the list looking for the target value else { printf ("chase down list to find: %d \n", target); ansp = listp; ansp->restp = delete_ordered_node (listp->restp, target, is_deleted); } return (ansp); } I'm finding it hard to implement that with strtok. 12/4 EDIT: added: Nodes for BST. Questions- Don't know if key needs to be tracked.(I assume it'll be useful to pull specific words). Where/how would I add the logic to alphabetize the tree.(study sources appreciated) How do I pass each word through this tree? 
#define WLENGTH 100 //Base Node info struct node { char word[WLENGTH]; int key; int freq; struct node *left, *right; }; //Function to create a new node struct node *newNode(char wordn, int item, int freqn) { struct node *temp = (struct node *) malloc(sizeof(struct node)); temp->word = wordn; temp->key = item; temp->freq = freqn; temp->left = temp->right = NULL; return temp; } //Function to place nodes in order void inorder(struct node *root) { if (root != NULL) { inorder(root->left); printf("%d ", root->key); inorder(root->right); } } /*Function to insert a new node with given key*/ struct node* insert(struct node* node, int key) { /* If the tree is empty, return a new node */ if (node == NULL) return newNode(key); /* Otherwise, recur down the tree */ if (key < node->key) node->left = insert(node->left, key); else if (key > node->key) node->right = insert(node->right, key); /* return the (unchanged) node pointer */ return node; }
[ "At the request of the OP, here is a bit of code to bulk load an entire text file for processing:\nFILE *mustOpen( char *fname, char *mode ) {\n FILE *fp = fopen( fname, mode );\n if( fp == NULL ) {\n fprintf( stderr, \"Cannot open '%s'\\n\", fname );\n exit( EXIT_FAILURE );\n }\n return fp;\n}\n\n// Passed the path to a file, opens, measures and bulk loads entire file (plus terminating '\\0')\nchar *loadFile( char *fname ) {\n FILE *fp = mustOpen( fname, \"rb\" );\n\n fseek( fp, 0, SEEK_END );\n size_t size = ftell( fp );\n fseek( fp, 0, SEEK_SET );\n\n char *buf;\n if( ( buf = malloc( size + 1) ) == NULL )\n fprintf( stderr, \"Malloc() failed\\n\" ), exit( EXIT_FAILURE );\n\n if( fread( buf, sizeof *buf, size, fp ) != size )\n fprintf( stderr, \"Read incomplete\\n\" ), exit( EXIT_FAILURE );\n\n fclose( fp );\n\n *(buf + size) = '\\0'; // xtra byte allows strXXX() to work\n\n return buf; // pointer to heap allocated buffer containing file's bytes\n}\n\nRemember to free() the buffer when done with its contents.\nWith the entire text loaded (and NULL terminated), here is a way to skip along the entire \"string\" finding each \"word\" (as defined by the delimiters):\nfor( char *cp = buf; (cp = strtok( cp, delim )) != NULL; cp = NULL ) {\n /* process each single \"word\" */\n}\n\nSince the \"text\" is in memory, instances of each of the \"words\" are in memory, too. All that's needed is populating a BST with nodes that 'point to' one instance of each \"word\" and a counter that counts multiple occurrences of each word.\nFinally, an \"in order\" traversal of the BST will give an alphabetised list of words and their frequency in the text.\nBe sure to compartmentalize each of the functions. The \"blocks\" of functionality can then be re-used in other projects, and, who knows?... You may want to first load a dictionary and only report the words (and locations) that do not appear in the dictionary (typos?). The code that handles the BST \"structure\" (searching, adding, traversing) should be somewhat independent of what \"information fields\" comprise each node.\n" ]
[ 0 ]
[]
[]
[ "c", "file", "frequency", "project", "strtok" ]
stackoverflow_0074672550_c_file_frequency_project_strtok.txt
Q: How to properly render form fields with django? I am currently working on a login page for a django webapp. I am trying to include the login form within the index.html file. However, the form fields are not being rendered. My urls are correct I believe but I'm not sure where I am going wrong. Here is my views.py, forms.py and a snippet of the index.html. (I do not want to create a new page for the login I'd like to keep it on the index page) # Home view def index(request): form = LoginForm() if form.is_valid(): user = authenticate( username=form.cleaned_data['username'], password=form.cleaned_data['password'], ) if user is not None: login(request, user) messages.success(request, f' welcome {user} !!') return redirect('loggedIn') else: messages.info(request, f'Password or Username is wrong. Please try again.') return render(request, "index_logged_out.html") class LoginForm(forms.Form): username = forms.CharField(max_length=63) password = forms.CharField(max_length=63, widget=forms.PasswordInput) <!-- Login --> <section class="page-section" id="login"> <div class="container"> <div class="text-center"> <h2 class="section-heading text-uppercase">Login</h2> </div> <form> {% csrf_token %} {{form}} <center><button class="btn btn-primary btn-block fa-lg gradient-custom-2 mb-3" type="submit" style="width: 300px;">Login</button></center> </form> <div class="text-center pt-1 mb-5 pb-1"> <center><a class="text-muted" href="#!">Forgot password?</a></center> </div> <div class="d-flex align-items-center justify-content-center pb-4"> <p class="mb-0 me-2">Don't have an account?</p> <button type="button" class="btn btn-outline-primary"><a href="{% url 'register' %}">Create New</a></button> </div> </form> </div> </section> A: In your index() view, you are creating a LoginForm object, but you are not passing it to the template when you render it. This means that the form fields will not be rendered in the template. To fix this, you can pass the form object to the template when you render it, like this: def index(request): form = LoginForm() if form.is_valid(): # ... return render(request, "index_logged_out.html", {"form": form})
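One detail worth adding to the answer above: the view as posted never binds the form to request.POST, so form.is_valid() can never return True even once the form renders. A sketch of how the view might look with binding added, keeping the original names (the template's form tag would also need method="post"):

def index(request):
    # Bind submitted data on POST; an unbound (empty) form is rendered on GET.
    form = LoginForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        user = authenticate(
            username=form.cleaned_data['username'],
            password=form.cleaned_data['password'],
        )
        if user is not None:
            login(request, user)
            messages.success(request, f'welcome {user} !!')
            return redirect('loggedIn')
        messages.info(request, 'Password or Username is wrong. Please try again.')
    return render(request, 'index_logged_out.html', {'form': form})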
How to properly render form fields with django?
I am currently working on a login page for a django webapp. I am trying to include the login form within the index.html file. However, the form fields are not being rendered. My urls are correct I believe but I'm not sure where I am going wrong. Here is my views.py, forms.py and a snippet of the index.html. (I do not want to create a new page for the login I'd like to keep it on the index page) # Home view def index(request): form = LoginForm() if form.is_valid(): user = authenticate( username=form.cleaned_data['username'], password=form.cleaned_data['password'], ) if user is not None: login(request, user) messages.success(request, f' welcome {user} !!') return redirect('loggedIn') else: messages.info(request, f'Password or Username is wrong. Please try again.') return render(request, "index_logged_out.html") class LoginForm(forms.Form): username = forms.CharField(max_length=63) password = forms.CharField(max_length=63, widget=forms.PasswordInput) <!-- Login --> <section class="page-section" id="login"> <div class="container"> <div class="text-center"> <h2 class="section-heading text-uppercase">Login</h2> </div> <form> {% csrf_token %} {{form}} <center><button class="btn btn-primary btn-block fa-lg gradient-custom-2 mb-3" type="submit" style="width: 300px;">Login</button></center> </form> <div class="text-center pt-1 mb-5 pb-1"> <center><a class="text-muted" href="#!">Forgot password?</a></center> </div> <div class="d-flex align-items-center justify-content-center pb-4"> <p class="mb-0 me-2">Don't have an account?</p> <button type="button" class="btn btn-outline-primary"><a href="{% url 'register' %}">Create New</a></button> </div> </form> </div> </section>
[ "In your index() view, you are creating a LoginForm object, but you are not passing it to the template when you render it. This means that the form fields will not be rendered in the template.\nTo fix this, you can pass the form object to the template when you render it, like this:\ndef index(request):\n form = LoginForm()\n if form.is_valid():\n # ...\n return render(request, \"index_logged_out.html\", {\"form\": form})\n\n" ]
[ 0 ]
[]
[]
[ "django", "html", "python" ]
stackoverflow_0074681034_django_html_python.txt
Q: Reflection to avoid class load I was reading through the PerfMark code and saw a comment about avoid an accidental class load through using reflection in a commit: if (Boolean.getBoolean("io.perfmark.PerfMark.debug")) { - Logger.getLogger(PerfMark.class.getName()).log(Level.FINE, "Error during PerfMark.<clinit>", err); + // We need to be careful here, as it's easy to accidentally cause a class load. Logger is loaded + // reflectively to avoid accidentally pulling it in. + // TODO(carl-mastrangelo): Maybe make this load SLF4J instead? + Class<?> logClass = Class.forName("java.util.logging.Logger"); + Object logger = logClass.getMethod("getLogger", String.class).invoke(null, PerfMark.class.getName()); .. } I don't quite understand which class is prevented from being accidentally loaded here. According to Class#forName will cause the logger class to be loaded. From my understanding, the class will only be loaded if the enclosing if condition is true. Or is this the point I am missing? Commit with more context is here: https://github.com/perfmark/perfmark/commit/4f87fb72c2077df6ade958b524d6d217766c9f93#diff-f9fdc8ad347ee9aa7a11a5259d5ab41c81e84c0ff375de17faebe7625cf50fb5R116 I ran the part with the if block and set a breakpoint on static and non-static fields in the Logger class. It hit the breakpoint only when the call was executed irregardless of using reflection or direct. When the if condition was false, no logger was loaded in any case. A: I think the important point of that commit is to load the classes from java.util.logging only when it is really required (when the system property "io.perfmark.PerfMark.debug" is "true" and err is not null, i.e. when the class io.perfmark.impl.SecretPerfMarkImpl$PerfMarkImpl is not available or that class has not the required constructor.) If the code is Logger.getLogger(PerfMark.class.getName()).log(Level.FINE, "Error during PerfMark.<clinit>", err); then the java.util.logging.Logger class is loaded as soon as the PerfMark class is loaded (since loading PerfMark requires that the static initializer block is executed). With this convoluted code the java.util.logging.Logger is only loaded if PerfMark cannot load its support class io.perfmark.impl.SecretPerfMarkImpl$PerfMarkImpl and the system property "io.perfmark.PerfMark.debug" is set to "true" (which probably means that java.util.logging.Logger is almost never loaded just because you use PerfMark)
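For illustration only (this is not copied from the PerfMark commit), the reflective snippet in the question still has to invoke the log call itself without naming Logger or Level in the source. One way that could look, using only standard java.util.logging API resolved reflectively (exception handling omitted):

Class<?> logClass = Class.forName("java.util.logging.Logger");
Object logger = logClass.getMethod("getLogger", String.class).invoke(null, PerfMark.class.getName());
Class<?> levelClass = Class.forName("java.util.logging.Level");
Object fine = levelClass.getField("FINE").get(null);                  // Level.FINE without a compile-time reference
logClass.getMethod("log", levelClass, String.class, Throwable.class)
        .invoke(logger, fine, "Error during PerfMark.<clinit>", err); // Logger.log(Level, String, Throwable)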
Reflection to avoid class load
I was reading through the PerfMark code and saw a comment about avoid an accidental class load through using reflection in a commit: if (Boolean.getBoolean("io.perfmark.PerfMark.debug")) { - Logger.getLogger(PerfMark.class.getName()).log(Level.FINE, "Error during PerfMark.<clinit>", err); + // We need to be careful here, as it's easy to accidentally cause a class load. Logger is loaded + // reflectively to avoid accidentally pulling it in. + // TODO(carl-mastrangelo): Maybe make this load SLF4J instead? + Class<?> logClass = Class.forName("java.util.logging.Logger"); + Object logger = logClass.getMethod("getLogger", String.class).invoke(null, PerfMark.class.getName()); .. } I don't quite understand which class is prevented from being accidentally loaded here. According to Class#forName will cause the logger class to be loaded. From my understanding, the class will only be loaded if the enclosing if condition is true. Or is this the point I am missing? Commit with more context is here: https://github.com/perfmark/perfmark/commit/4f87fb72c2077df6ade958b524d6d217766c9f93#diff-f9fdc8ad347ee9aa7a11a5259d5ab41c81e84c0ff375de17faebe7625cf50fb5R116 I ran the part with the if block and set a breakpoint on static and non-static fields in the Logger class. It hit the breakpoint only when the call was executed irregardless of using reflection or direct. When the if condition was false, no logger was loaded in any case.
[ "I think the important point of that commit is to load the classes from java.util.logging only when it is really required (when the system property \"io.perfmark.PerfMark.debug\" is \"true\" and err is not null, i.e. when the class io.perfmark.impl.SecretPerfMarkImpl$PerfMarkImpl is not available or that class has not the required constructor.)\nIf the code is\nLogger.getLogger(PerfMark.class.getName()).log(Level.FINE, \"Error during PerfMark.<clinit>\", err);\n\nthen the java.util.logging.Logger class is loaded as soon as the PerfMark class is loaded (since loading PerfMark requires that the static initializer block is executed).\nWith this convoluted code the java.util.logging.Logger is only loaded if PerfMark cannot load its support class io.perfmark.impl.SecretPerfMarkImpl$PerfMarkImpl and the system property \"io.perfmark.PerfMark.debug\" is set to \"true\" (which probably means that java.util.logging.Logger is almost never loaded just because you use PerfMark)\n" ]
[ 1 ]
[]
[]
[ "classloader", "dynamic_class_creation", "java", "reflection" ]
stackoverflow_0074680847_classloader_dynamic_class_creation_java_reflection.txt
Q: How do I iterate through an array of structs? I have this hashmap I want to implement. typedef void * Data; typedef struct { Data data; //Data pointer to the data char * key; //char pointer to the string key } HashMapItem; typedef struct hashmap { HashMapItem * items; //items of the hashmaps size_t size; //size of the hashmaps int count; //how many elements are in the hashmap } HashMap; I declare it like so: HashMap * create_hashmap(size_t key_space){ if(key_space == 0) return NULL; HashMap * hm = malloc(sizeof(HashMap)); //allocate memory to store hashmap hm->items = calloc(key_space, sizeof(HashMapItem)); //allocate memory to store every item inside the map, null it hm->size = key_space; //set sitze of hashmap hm->count = 0; //empty at the begining return hm; } When i try to iterate through it, it says that expression must have arithmetic or pointer type but has type "HashMapItem" even though i declare it as a pointer of HashMapItems if((hm->items)[index] != NULL) Any idea? A: typedef void * Data; Never hide pointers behind typedefs. It is a very, very bad practice. How to iterate. typedef struct { void *data; //Data pointer to the data char * key; //char pointer to the string key } HashMapItem; typedef struct hashmap { HashMapItem * items; //items of the hashmaps size_t size; //size of the hashmaps size_t count; //how many elements are in the hashmap } HashMap; void foo(HashMap *map) { for(size_t i = 0; i < map -> count; i ++) { puts(map -> items[i].key); } } PS count member should be also size_t EDIT. Below is wrong as hm->items)[index] has type HashMapItem and it is not pointer. You cant compare it to NULL. if((hm->items)[index] != NULL)
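Since create_hashmap uses calloc, every slot starts out zeroed, so one common convention (an assumption about the intent here, not something the hashmap enforces by itself) is to treat a NULL key pointer as an empty slot and test that member rather than the struct:

for (size_t index = 0; index < hm->size; index++) {
    if (hm->items[index].key != NULL) {      /* slot holds an entry */
        printf("%s\n", hm->items[index].key);
    }
}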
How do I iterate through an array of structs?
I have this hashmap I want to implement. typedef void * Data; typedef struct { Data data; //Data pointer to the data char * key; //char pointer to the string key } HashMapItem; typedef struct hashmap { HashMapItem * items; //items of the hashmaps size_t size; //size of the hashmaps int count; //how many elements are in the hashmap } HashMap; I declare it like so: HashMap * create_hashmap(size_t key_space){ if(key_space == 0) return NULL; HashMap * hm = malloc(sizeof(HashMap)); //allocate memory to store hashmap hm->items = calloc(key_space, sizeof(HashMapItem)); //allocate memory to store every item inside the map, null it hm->size = key_space; //set sitze of hashmap hm->count = 0; //empty at the begining return hm; } When i try to iterate through it, it says that expression must have arithmetic or pointer type but has type "HashMapItem" even though i declare it as a pointer of HashMapItems if((hm->items)[index] != NULL) Any idea?
[ "typedef void * Data;\n\nNever hide pointers behind typedefs. It is a very, very bad practice.\nHow to iterate.\ntypedef struct {\n void *data; //Data pointer to the data\n char * key; //char pointer to the string key\n} HashMapItem;\n\ntypedef struct hashmap {\n HashMapItem * items; //items of the hashmaps\n size_t size; //size of the hashmaps\n size_t count; //how many elements are in the hashmap\n} HashMap;\n\n\nvoid foo(HashMap *map)\n{\n for(size_t i = 0; i < map -> count; i ++)\n {\n puts(map -> items[i].key);\n }\n}\n\nPS count member should be also size_t\nEDIT. Below is wrong as hm->items)[index] has type HashMapItem and it is not pointer. You cant compare it to NULL.\n if((hm->items)[index] != NULL)\n\n" ]
[ 1 ]
[]
[]
[ "c", "pointers" ]
stackoverflow_0074681000_c_pointers.txt
Q: Incremental query in DBT based on current month with Jinja I am trying to implement a incremental query in DBT using Jinja. Considering there are tables getting created every month in warehouse with year and month suffix and I need to write a logic to union the new table which gets created every month to execute the DBT model. Below is the code which I have started with #initialize the months in a list {% set months= ['03','04','05','06','07','08','09','10','11','12','01','02'] %} #first select query for Feb month of 2022 SELECT *, '2022-02-01' AS ref_month FROM source_table_2022_02 #initilalize year variable to 2022 {% set year= namespace(items=2022) %} #loop through the months to generate dynamic query for upcoming months {% for month in months %} #if month is Jan increment the year {% if month == '01' %} {% set year.items = year.items + 1 %} {% endif %} UNION ALL SELECT *, '{{ year.items }}-{{ month }}-01' AS ref_month FROM source_table_{{ year.items }}_{{ month }} {% endfor %} output of above logic is as below SELECT *, '2022-02-01' AS ref_month FROM source_table_2022_02 UNION ALL SELECT *, '2022-03-01' AS ref_month FROM source_table_2022_03 UNION ALL SELECT *, '2022-04-01' AS ref_month FROM source_table_2022_04 . . . UNION ALL SELECT *, '2023-02-01' AS ref_month FROM source_table_2023_02 I need help in stopping the for loop when we reach the current month i.e Dec(because there is no current_month method in Jinja and I need to implement this logic in DBT models.sql file and not a python file), instead of looping through the upcoming months. Note: as mentioned earlier the source table gets created every month with year and month suffix I also want to continue the loop after 2023 Feb in the upcoming months. Current logic stops immediately after the list iteration ends i.e 2023 Feb A: If you want to stop the loop when you reach the current month, you can simply add a conditional check to the loop to check if the month is equal to the current month, and if so, break out of the loop. Here is an example of how you might do this: #initialize the months in a list {% set months= ['03','04','05','06','07','08','09','10','11','12','01','02'] %} #first select query for Feb month of 2022 SELECT *, '2022-02-01' AS ref_month FROM source_table_2022_02 #initilalize year variable to 2022 {% set year= namespace(items=2022) %} #get the current month {% set current_month = "12" %} #loop through the months to generate dynamic query for upcoming months {% for month in months %} #if month is Jan increment the year {% if month == '01' %} {% set year.items = year.items + 1 %} {% endif %} #check if the current month has been reached, and if so, break out of the loop {% if month == current_month %} {% break %} {% endif %} UNION ALL SELECT *, '{{ year.items }}-{{ month }}-01' AS ref_month FROM source_table_{{ year.items }}_{{ month }} {% endfor %} This will cause the loop to break when it reaches the current month, and the code following the loop will not be executed. A: Would you consider installing the commonly used dbt-utils package? If so, they have a macro called get_relations_by_pattern and another called union_relations. These could be used to solve your problem as follows: {% set monthly_relations = dbt_utils.get_relations_by_pattern('my_schema', 'source_table_%') %} SELECT * FROM {{ dbt_utils.union_relations(relations = monthly_relations) }} Note that a new field _dbt_source_relation will be added, listing the original table name. You'll be able to parse the month and year from this.
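As an alternative to {% break %} (plain Jinja only allows break with the loopcontrols extension enabled), dbt exposes run_started_at in the model context, so the loop can simply skip any month later than the run date. This is a sketch under that assumption, reusing the variables from the question; zero-padded 'YYYY-MM' strings compare correctly as text:

{% set cutoff = run_started_at.strftime('%Y-%m') %}
{% for month in months %}
    {% if month == '01' %}
        {% set year.items = year.items + 1 %}
    {% endif %}
    {% if year.items ~ '-' ~ month <= cutoff %}
    UNION ALL
    SELECT *, '{{ year.items }}-{{ month }}-01' AS ref_month
    FROM source_table_{{ year.items }}_{{ month }}
    {% endif %}
{% endfor %}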
Incremental query in DBT based on current month with Jinja
I am trying to implement a incremental query in DBT using Jinja. Considering there are tables getting created every month in warehouse with year and month suffix and I need to write a logic to union the new table which gets created every month to execute the DBT model. Below is the code which I have started with #initialize the months in a list {% set months= ['03','04','05','06','07','08','09','10','11','12','01','02'] %} #first select query for Feb month of 2022 SELECT *, '2022-02-01' AS ref_month FROM source_table_2022_02 #initilalize year variable to 2022 {% set year= namespace(items=2022) %} #loop through the months to generate dynamic query for upcoming months {% for month in months %} #if month is Jan increment the year {% if month == '01' %} {% set year.items = year.items + 1 %} {% endif %} UNION ALL SELECT *, '{{ year.items }}-{{ month }}-01' AS ref_month FROM source_table_{{ year.items }}_{{ month }} {% endfor %} output of above logic is as below SELECT *, '2022-02-01' AS ref_month FROM source_table_2022_02 UNION ALL SELECT *, '2022-03-01' AS ref_month FROM source_table_2022_03 UNION ALL SELECT *, '2022-04-01' AS ref_month FROM source_table_2022_04 . . . UNION ALL SELECT *, '2023-02-01' AS ref_month FROM source_table_2023_02 I need help in stopping the for loop when we reach the current month i.e Dec(because there is no current_month method in Jinja and I need to implement this logic in DBT models.sql file and not a python file), instead of looping through the upcoming months. Note: as mentioned earlier the source table gets created every month with year and month suffix I also want to continue the loop after 2023 Feb in the upcoming months. Current logic stops immediately after the list iteration ends i.e 2023 Feb
[ "If you want to stop the loop when you reach the current month, you can simply add a conditional check to the loop to check if the month is equal to the current month, and if so, break out of the loop. Here is an example of how you might do this:\n#initialize the months in a list\n{% set months= ['03','04','05','06','07','08','09','10','11','12','01','02'] %}\n\n#first select query for Feb month of 2022\nSELECT *, '2022-02-01' AS ref_month\nFROM source_table_2022_02\n\n#initilalize year variable to 2022\n{% set year= namespace(items=2022) %}\n\n#get the current month\n{% set current_month = \"12\" %}\n\n#loop through the months to generate dynamic query for upcoming months\n{% for month in months %}\n\n #if month is Jan increment the year\n {% if month == '01' %}\n {% set year.items = year.items + 1 %}\n {% endif %}\n\n #check if the current month has been reached, and if so, break out of the loop\n {% if month == current_month %}\n {% break %}\n {% endif %}\n\n UNION ALL \n \n SELECT *, '{{ year.items }}-{{ month }}-01' AS ref_month\n FROM source_table_{{ year.items }}_{{ month }}\n\n{% endfor %}\n\n\nThis will cause the loop to break when it reaches the current month, and the code following the loop will not be executed.\n", "Would you consider installing the commonly used dbt-utils package?\nIf so, they have a macro called get_relations_by_pattern and another called union_relations.\nThese could be used to solve your problem as follows:\n{% set monthly_relations = dbt_utils.get_relations_by_pattern('my_schema', 'source_table_%') %}\n\n SELECT *\n FROM {{ dbt_utils.union_relations(relations = monthly_relations) }}\n\nNote that a new field _dbt_source_relation will be added, listing the original table name. You'll be able to parse the month and year from this.\n" ]
[ 0, 0 ]
[]
[]
[ "dbt", "jinja2" ]
stackoverflow_0074677489_dbt_jinja2.txt
Q: Memory issue while running ARIMA model I am trying to run my ARIMA model and am getting the error below: MemoryError: Unable to allocate 52.4 GiB for an array with shape (83873, 83873) and data type float64 My Python/Anaconda installation is on the C drive, which has around 110 GB of free space, but I am still getting this error. How do I resolve this? My code is below: from statsmodels.tsa.arima_model import ARIMA model=ARIMA(df['Sales'],order=(1,0,1)) model_fit=model.fit() I tried slicing the dataframe to only 1 year of values, but I still have the same issue. The Anaconda version is 3.8, 64-bit. My dataframe has somewhere around 83,873 rows. A: I did a pivot transformation and it solved my issue.
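One likely reason the pivot helped: the (83873, 83873) allocation scales with the number of observations handed to ARIMA, so fitting on every raw transactional row is what blows up memory. Aggregating to one observation per period first (which is effectively what a pivot does) shrinks the problem. A hedged sketch, assuming the dataframe also has a date column named 'Date' (a guess, since the original screenshot is not available):

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA   # newer API; statsmodels.tsa.arima_model is deprecated

# One row per month instead of one row per transaction.
monthly = (df.assign(Date=pd.to_datetime(df['Date']))
             .set_index('Date')['Sales']
             .resample('MS').sum())

model = ARIMA(monthly, order=(1, 0, 1))
model_fit = model.fit()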
Memory issue while running ARIMA model
I am trying to run my ARIMA model and am getting the error below: MemoryError: Unable to allocate 52.4 GiB for an array with shape (83873, 83873) and data type float64 My Python/Anaconda installation is on the C drive, which has around 110 GB of free space, but I am still getting this error. How do I resolve this? My code is below: from statsmodels.tsa.arima_model import ARIMA model=ARIMA(df['Sales'],order=(1,0,1)) model_fit=model.fit() I tried slicing the dataframe to only 1 year of values, but I still have the same issue. The Anaconda version is 3.8, 64-bit. My dataframe has somewhere around 83,873 rows.
[ "I did a pivot transformation and it solved my issue.\n" ]
[ 0 ]
[ "I have the same problem than you had and I cannot see the solution... Could you help me please? I'd be so greatful thanks :)\n" ]
[ -3 ]
[ "arima", "memory", "python", "time_series" ]
stackoverflow_0070726861_arima_memory_python_time_series.txt
Q: Why does `Tags.findOne();` return a TypeError? The /tag [tag name] command is supposed to send the tag description or a message if the tag doesn't exist. Currently I have no items in the model. When I run the command with any arguments I get TypeError: Cannot read properties of undefined (reading 'findOne'). Here is my code: commands/tag.js: const { Tags } = require('../models/tag.js'); const { SlashCommandBuilder } = require('discord.js'); module.exports = { data: new SlashCommandBuilder() .setName('tag') .setDescription('Show a tag.') .addStringOption(option => option .setName('name') .setDescription('The name of the tag') .setRequired(true) ), async execute(interaction) { const name = interaction.options.getString('name'); const tag = Tags.findOne({where: { name: name } }); if (tag) return interaction.reply(tag.get('description')); return interaction.reply('That tag doesn\'t exist!'); } } models/tag.js: module.exports = (db, DataTypes) => { return db.define('tags', { name: { type: DataTypes.STRING, unique: true, }, description: DataTypes.TEXT, }); } db-init.js (this is run manually): const { Sequelize, DataTypes } = require('sequelize'); const db = new Sequelize({ dialect: 'sqlite', storage: './database.sqlite', }); require('./models/tag.js')(db, DataTypes); const force = process.argv.includes('--force') || process.argv.includes('-f'); db.sync({ force }).then(async () => { console.log('Database synced.'); db.close(); }).catch(console.error); commands/addTag.js: const { Tags } = require('../models/tag.js'); const { SlashCommandBuilder } = require('discord.js'); module.exports = { data: new SlashCommandBuilder() .setName('addtag') .setDescription('Create a tag!') .addStringOption(option => option .setName('name') .setDescription('The name of your tag') .setRequired(true) ) .addStringOption(option => option .setName('description') .setDescription('The tag description') .setRequired(true) ), async execute(interaction) { const name = interaction.options.getString('name'); const description = interaction.options.getString('description'); try { const tag = await Tags.create({ name: name, description: description, }); return interaction.reply(`Tag \`${tag.name}\` created.`); } catch (error) { if (error.name = 'SequelizeUniqueConstraintError') return interaction.reply('That tag already exists!'); return interaction.reply({ content: 'An error occured.', ephemeral: true }); } } } The /addtag [tag name] [tag description] command is supposed to create a tag or send a message if the tag already exists. Whenever I run the command, I don't get any errors but the bot says the tag already exists even though it doesn't. Can someone please explain why these commands don't function properly? A: I can see that you are using Sequelize to interact with your SQLite database. In order to use the findOne method, you need to use the db object you created in db-init.js to access your Tags model and then call the findOne method on that object. 
Here's how you can fix your tag.js file: const { db } = require('../db-init.js'); const { Tags } = require('../models/tag.js')(db, DataTypes); // This line is changed const { SlashCommandBuilder } = require('discord.js'); module.exports = { data: new SlashCommandBuilder() .setName('tag') .setDescription('Show a tag.') .addStringOption(option => option .setName('name') .setDescription('The name of the tag') .setRequired(true) ), async execute(interaction) { const name = interaction.options.getString('name'); // This is the new code const tag = await Tags.findOne({ where: { name: name } }); // Notice the use of await here if (!tag) return interaction.reply('That tag doesn\'t exist!'); return interaction.reply(tag.get('description')); } } The issue in your addTag.js file is that you are using the SequelizeUniqueConstraintError error name in a comparison instead of an assignment. In JavaScript, the assignment operator = is different from the comparison operator ==. The assignment operator assigns a value to a variable, while the comparison operator checks if two values are equal. Here's how you can fix your addTag.js file: const { db } = require('../db-init.js'); const { Tags } = require('../models/tag.js')(db, DataTypes); // This line is changed const { SlashCommandBuilder } = require('discord.js'); module.exports = { data: new SlashCommandBuilder() .setName('addtag') .setDescription('Create a tag!') .addStringOption(option => option .setName('name') .setDescription('The name of your tag') .setRequired(true) ) .addStringOption(option => option .setName('description') .setDescription('The tag description') .setRequired(true) ), async execute(interaction) { const name = interaction.options.getString('name'); const description = interaction.options.getString('description'); try { const tag = await Tags.create({ name: name, description: description, }); return interaction.reply(`Tag \`${tag.name}\` created.`); } catch (error) { // This is the new code if (error.name === 'SequelizeUniqueConstraintError') return interaction.reply('That tag already exists!'); return interaction.reply({ content: 'An error occured.', ephemeral: true }); } } }
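One extra detail that may explain the original undefined error (my own note, not part of the answer above): models/tag.js exports a factory function, so const { Tags } = require('../models/tag.js') destructures a property that does not exist and Tags ends up undefined. A common pattern is to call the factory once against a shared Sequelize instance and export the resulting model, for example in a hypothetical database.js:

// database.js (hypothetical file name)
const { Sequelize, DataTypes } = require('sequelize');

const db = new Sequelize({ dialect: 'sqlite', storage: './database.sqlite' });
const Tags = require('./models/tag.js')(db, DataTypes);   // call the factory once

module.exports = { db, Tags };

A command file could then use const { Tags } = require('../database.js'); and await Tags.findOne({ where: { name } }); as in the answer above.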
Why does `Tags.findOne();` return a TypeError?
The /tag [tag name] command is supposed to send the tag description or a message if the tag doesn't exist. Currently I have no items in the model. When I run the command with any arguments I get TypeError: Cannot read properties of undefined (reading 'findOne'). Here is my code: commands/tag.js: const { Tags } = require('../models/tag.js'); const { SlashCommandBuilder } = require('discord.js'); module.exports = { data: new SlashCommandBuilder() .setName('tag') .setDescription('Show a tag.') .addStringOption(option => option .setName('name') .setDescription('The name of the tag') .setRequired(true) ), async execute(interaction) { const name = interaction.options.getString('name'); const tag = Tags.findOne({where: { name: name } }); if (tag) return interaction.reply(tag.get('description')); return interaction.reply('That tag doesn\'t exist!'); } } models/tag.js: module.exports = (db, DataTypes) => { return db.define('tags', { name: { type: DataTypes.STRING, unique: true, }, description: DataTypes.TEXT, }); } db-init.js (this is run manually): const { Sequelize, DataTypes } = require('sequelize'); const db = new Sequelize({ dialect: 'sqlite', storage: './database.sqlite', }); require('./models/tag.js')(db, DataTypes); const force = process.argv.includes('--force') || process.argv.includes('-f'); db.sync({ force }).then(async () => { console.log('Database synced.'); db.close(); }).catch(console.error); commands/addTag.js: const { Tags } = require('../models/tag.js'); const { SlashCommandBuilder } = require('discord.js'); module.exports = { data: new SlashCommandBuilder() .setName('addtag') .setDescription('Create a tag!') .addStringOption(option => option .setName('name') .setDescription('The name of your tag') .setRequired(true) ) .addStringOption(option => option .setName('description') .setDescription('The tag description') .setRequired(true) ), async execute(interaction) { const name = interaction.options.getString('name'); const description = interaction.options.getString('description'); try { const tag = await Tags.create({ name: name, description: description, }); return interaction.reply(`Tag \`${tag.name}\` created.`); } catch (error) { if (error.name = 'SequelizeUniqueConstraintError') return interaction.reply('That tag already exists!'); return interaction.reply({ content: 'An error occured.', ephemeral: true }); } } } The /addtag [tag name] [tag description] command is supposed to create a tag or send a message if the tag already exists. Whenever I run the command, I don't get any errors but the bot says the tag already exists even though it doesn't. Can someone please explain why these commands don't function properly?
[ "I can see that you are using Sequelize to interact with your SQLite database. In order to use the findOne method, you need to use the db object you created in db-init.js to access your Tags model and then call the findOne method on that object.\nHere's how you can fix your tag.js file:\nconst { db } = require('../db-init.js');\nconst { Tags } = require('../models/tag.js')(db, DataTypes); // This line is changed\nconst { SlashCommandBuilder } = require('discord.js');\n\nmodule.exports = {\n data: new SlashCommandBuilder()\n .setName('tag')\n .setDescription('Show a tag.')\n .addStringOption(option =>\n option\n .setName('name')\n .setDescription('The name of the tag')\n .setRequired(true)\n ),\n async execute(interaction) {\n const name = interaction.options.getString('name');\n\n // This is the new code\n const tag = await Tags.findOne({ where: { name: name } }); // Notice the use of await here\n if (!tag) return interaction.reply('That tag doesn\\'t exist!');\n return interaction.reply(tag.get('description'));\n }\n}\n\nThe issue in your addTag.js file is that you are using the SequelizeUniqueConstraintError error name in a comparison instead of an assignment. In JavaScript, the assignment operator = is different from the comparison operator ==. The assignment operator assigns a value to a variable, while the comparison operator checks if two values are equal.\nHere's how you can fix your addTag.js file:\nconst { db } = require('../db-init.js');\nconst { Tags } = require('../models/tag.js')(db, DataTypes); // This line is changed\nconst { SlashCommandBuilder } = require('discord.js');\n\nmodule.exports = {\n data: new SlashCommandBuilder()\n .setName('addtag')\n .setDescription('Create a tag!')\n .addStringOption(option =>\n option\n .setName('name')\n .setDescription('The name of your tag')\n .setRequired(true) \n )\n .addStringOption(option =>\n option\n .setName('description')\n .setDescription('The tag description')\n .setRequired(true)\n ),\n async execute(interaction) {\n const name = interaction.options.getString('name');\n const description = interaction.options.getString('description');\n\n try {\n const tag = await Tags.create({\n name: name,\n description: description,\n });\n\n return interaction.reply(`Tag \\`${tag.name}\\` created.`);\n } catch (error) {\n // This is the new code\n if (error.name === 'SequelizeUniqueConstraintError') return interaction.reply('That tag already exists!');\n\n return interaction.reply({ content: 'An error occured.', ephemeral: true });\n }\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "discord.js", "javascript", "node.js", "sequelize.js", "typeerror" ]
stackoverflow_0074681008_discord.js_javascript_node.js_sequelize.js_typeerror.txt
Q: Example query from my textbook is coming up with an Error message. "Ambiguous column name 'V_CODE'" For my homework, I have to enter example queries from the book to sql. There are also example database I downloaded for the homework and properly put them into SQL. The following is what I have to put in but an error message comes up. SELECT V_CODE, V_NAME, V_STATE, P_CODE, P_DESCRIPT, P_PRICE * P_QOH AS TOTAL FROM PRODUCT P JOIN VENDOR V ON P.V_CODE = V.V_CODE WHERE V_STATE IN ('TN','KY') ORDER BY V_STATE, TOTAL DESC; --Total value of products from Tennessee and Kentucky Ambiguous column name 'V_CODE' Not sure how to fix. I had to put in an example query from the homework but an error message comes up. "Ambiguous column name 'V_CODE'" A: You need to choose which table you are choosing V_CODE from. In this example it's taken from the PRODUCT table. SELECT P.V_CODE, V_NAME, V_STATE, P_CODE, P_DESCRIPT, P_PRICE * P_QOH AS TOTAL FROM PRODUCT P JOIN VENDOR V ON P.V_CODE = V.V_CODE WHERE V_STATE IN ('TN','KY') ORDER BY V_STATE, TOTAL DESC; --Total value of products from Tennessee and Kentucky
Example query from my textbook is coming up with an Error message. "Ambiguous column name 'V_CODE'"
For my homework, I have to enter example queries from the book to sql. There are also example database I downloaded for the homework and properly put them into SQL. The following is what I have to put in but an error message comes up. SELECT V_CODE, V_NAME, V_STATE, P_CODE, P_DESCRIPT, P_PRICE * P_QOH AS TOTAL FROM PRODUCT P JOIN VENDOR V ON P.V_CODE = V.V_CODE WHERE V_STATE IN ('TN','KY') ORDER BY V_STATE, TOTAL DESC; --Total value of products from Tennessee and Kentucky Ambiguous column name 'V_CODE' Not sure how to fix. I had to put in an example query from the homework but an error message comes up. "Ambiguous column name 'V_CODE'"
[ "You need to choose which table you are choosing V_CODE from. In this example it's taken from the PRODUCT table.\nSELECT P.V_CODE, V_NAME, V_STATE, P_CODE, P_DESCRIPT, P_PRICE * P_QOH AS TOTAL\nFROM PRODUCT P JOIN VENDOR V ON P.V_CODE = V.V_CODE\nWHERE V_STATE IN ('TN','KY')\nORDER BY V_STATE, TOTAL DESC;\n--Total value of products from Tennessee and Kentucky\n\n" ]
[ 0 ]
[]
[]
[ "ambiguous", "sql", "sql_server" ]
stackoverflow_0074681024_ambiguous_sql_sql_server.txt
Q: Kinesis Firehose with S3 in the same account and Redshift in another account I'm exploring the different ways to ingest data into Redshift from a Kinesis delivery firehose. I saw that it requires Firehose + an intermediary S3 bucket + Redshift, and I saw that it is possible to use an S3 bucket from another account with a Redshift cluster in the same account as that S3 bucket. My question is: Is it possible to have the S3 bucket and firehose in account A, but ingest into a Redshift cluster in account B? Thank you in advance. A: Yes, this should be possible as long as you can create an IAM role / policy that provides read access to the accounts in Firehose and S3 that are separate. Redshift must have the "AmazonS3ReadOnlyAccess" and "AmazonKinesisFirehoseFullAccess" permissions enabled.
Kinesis Firehose with S3 in the same account and Redshift in another account
I'm exploring the different ways to ingest data into Redshift from a Kinesis delivery firehose. I saw that it requires Firehose + an intermediary S3 bucket + Redshift, and I saw that it is possible to use an S3 bucket from another account with a Redshift cluster in the same account as that S3 bucket. My question is: Is it possible to have the S3 bucket and firehose in account A, but ingest into a Redshift cluster in account B? Thank you in advance.
[ "Yes, this should be possible as long as you can create an IAM role / policy that provides read access to the accounts in Firehose and S3 that are separate. Redshift must have the \"AmazonS3ReadOnlyAccess\" and \"AmazonKinesisFirehoseFullAccess\" permissions enabled.\n" ]
[ 0 ]
[]
[]
[ "amazon_kinesis", "amazon_redshift", "amazon_s3", "firehose" ]
stackoverflow_0074497105_amazon_kinesis_amazon_redshift_amazon_s3_firehose.txt
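Note on the answer above: a minimal Python sketch of one way the cross-account load could be wired up. None of this comes from the original post - the account IDs, role ARNs, bucket, cluster, table and region names are placeholder assumptions - and it assumes account A (S3 bucket + Firehose) grants read access to a role that the Redshift cluster's role in account B can chain to, with the COPY issued through the Redshift Data API:

import boto3

# COPY run against the account-B cluster, reading account A's Firehose output via chained IAM roles
copy_sql = """
    COPY public.firehose_events
    FROM 's3://account-a-firehose-bucket/prefix/'
    IAM_ROLE 'arn:aws:iam::222222222222:role/redshift-copy-role,arn:aws:iam::111111111111:role/cross-account-s3-read'
    FORMAT AS JSON 'auto';
"""

client = boto3.client("redshift-data", region_name="us-east-1")
resp = client.execute_statement(
    ClusterIdentifier="account-b-cluster",  # Redshift cluster in account B
    Database="dev",
    DbUser="awsuser",
    Sql=copy_sql,
)
print(resp["Id"])  # statement id; poll describe_statement() with it to check completion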
Q: INVALID_NONCE authenticating on Crypto.com C# REST API I am working on private endpoints of Crypto.com, and I am receiving the following problem { "id":60, "method":"private/get-currency-networks", "code":10007, "message":"INVALID_NONCE" } when sending the message "{\"id\":60, \"method\":\"private/get-currency-networks\", \"nonce\":1667591280642, \"params\":{}, \"sig\":\"xxxxxxx\", \"api_key\":\"xxxxxx\" }" with signature Hex encoded and all. I tried to get nonce using UTCtime since the Epoch: long nonce = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds(); But I still receive the error message that according to the documentation means "Nonce value differs by more than 30 seconds from server" A: for me it helped substracting 10 sec from the current-time when setting the nonce, like: .setNonce(LocalDateTime.now().minusSeconds(10).atZone(ZoneId.systemDefault()).toInstant().toEpochMilli()) see: https://exchange-docs.crypto.com/exchange/v1/rest-ws/index.html#invalid_nonce-on-all-requests
INVALID_NONCE authenticating on Crypto.com C# REST API
I am working on private endpoints of Crypto.com, and I am receiving the following problem { "id":60, "method":"private/get-currency-networks", "code":10007, "message":"INVALID_NONCE" } when sending the message "{\"id\":60, \"method\":\"private/get-currency-networks\", \"nonce\":1667591280642, \"params\":{}, \"sig\":\"xxxxxxx\", \"api_key\":\"xxxxxx\" }" with signature Hex encoded and all. I tried to get nonce using UTCtime since the Epoch: long nonce = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds(); But I still receive the error message that according to the documentation means "Nonce value differs by more than 30 seconds from server"
[ "for me it helped substracting 10 sec from the current-time when setting the nonce, like:\n.setNonce(LocalDateTime.now().minusSeconds(10).atZone(ZoneId.systemDefault()).toInstant().toEpochMilli())\n\nsee:\nhttps://exchange-docs.crypto.com/exchange/v1/rest-ws/index.html#invalid_nonce-on-all-requests\n" ]
[ 0 ]
[]
[]
[ "authentication", "bad_request", "rest" ]
stackoverflow_0074322364_authentication_bad_request_rest.txt
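Following up on the answer above: the question's code is C# and the answer's is Java, but the adjustment itself is trivial to express in Python. The 10-second back-shift is the answer's suggestion (an assumption, not a documented constant); the point is only to keep the nonce within the +/-30 second window the INVALID_NONCE error refers to:

import time

CLOCK_SKEW_MS = 10_000  # assumed offset, per the answer; tune to your clock drift

def make_nonce() -> int:
    # epoch milliseconds, shifted back so a fast local clock stays inside the server's window
    return int(time.time() * 1000) - CLOCK_SKEW_MS

request = {
    "id": 60,
    "method": "private/get-currency-networks",
    "nonce": make_nonce(),
    "params": {},
    # "sig" and "api_key" are added by the signing step, exactly as in the question
}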
Q: How to create a matrix with four 1's in each row staggered How can I easily create this matrix using clever commands: 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 A: unname(model.matrix(~gl(3,4) + 0)) [,1] [,2] [,3] [1,] 1 0 0 [2,] 1 0 0 [3,] 1 0 0 [4,] 1 0 0 [5,] 0 1 0 [6,] 0 1 0 [7,] 0 1 0 [8,] 0 1 0 [9,] 0 0 1 [10,] 0 0 1 [11,] 0 0 1 [12,] 0 0 1 Another Option: as.matrix(Matrix::bdiag(rep(list(rep(1,4)),3))) [,1] [,2] [,3] [1,] 1 0 0 [2,] 1 0 0 [3,] 1 0 0 [4,] 1 0 0 [5,] 0 1 0 [6,] 0 1 0 [7,] 0 1 0 [8,] 0 1 0 [9,] 0 0 1 [10,] 0 0 1 [11,] 0 0 1 [12,] 0 0 1 as.matrix(Matrix::bdiag(replicate(3, numeric(4)+1, FALSE))) A: Perhaps the shortest and fastest option uses diag and modular math: diag(3)[0:11 %/% 4 + 1,] [,1] [,2] [,3] [1,] 1 0 0 [2,] 1 0 0 [3,] 1 0 0 [4,] 1 0 0 [5,] 0 1 0 [6,] 0 1 0 [7,] 0 1 0 [8,] 0 1 0 [9,] 0 0 1 [10,] 0 0 1 [11,] 0 0 1 [12,] 0 0 1 A: matrix(rep(c(1,0,0,0,1,0,0,0,1), each = 4),ncol = 3) [,1] [,2] [,3] [1,] 1 0 0 [2,] 1 0 0 [3,] 1 0 0 [4,] 1 0 0 [5,] 0 1 0 [6,] 0 1 0 [7,] 0 1 0 [8,] 0 1 0 [9,] 0 0 1 [10,] 0 0 1 [11,] 0 0 1 [12,] 0 0 1 A: #To create the matrix you have described, you can use the repmat function in MATLAB or Octave. This function creates a matrix by repeating the elements of another matrix. #For example, to create the matrix you have described, you could do the following: >> A = [1 0 0; 1 0 0; 1 0 0; 1 0 0] A = 1 0 0 1 0 0 1 0 0 1 0 0 >> B = [0 1 0; 0 1 0; 0 1 0; 0 1 0] B = 0 1 0 0 1 0 0 1 0 0 1 0 >> C = [0 0 1; 0 0 1; 0 0 1; 0 0 1] C = 0 0 1 0 0 1 0 0 1 0 0 1 >> D = [A;B;C] D = 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 #Alternatively, you could use the repmat function to create the matrix more concisely, like this: >> E = repmat([1 0 0; 0 1 0; 0 0 1], 4, 1) E = 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 #Note that the repmat function takes two arguments: the matrix to be repeated, and the number of times to repeat it in each dimension. In the example above, we have repeated the matrix [1 0 0; 0 1 0; 0 0 1] four times in the first dimension (i.e. along the rows), and once in the second dimension (i.e. along the columns). This creates the desired matrix.
How to create a matrix with four 1's in each row staggered
How can I easily create this matrix using clever commands: 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 1
[ "unname(model.matrix(~gl(3,4) + 0))\n\n [,1] [,2] [,3]\n [1,] 1 0 0\n [2,] 1 0 0\n [3,] 1 0 0\n [4,] 1 0 0\n [5,] 0 1 0\n [6,] 0 1 0\n [7,] 0 1 0\n [8,] 0 1 0\n [9,] 0 0 1\n[10,] 0 0 1\n[11,] 0 0 1\n[12,] 0 0 1\n\nAnother Option:\nas.matrix(Matrix::bdiag(rep(list(rep(1,4)),3)))\n\n [,1] [,2] [,3]\n [1,] 1 0 0\n [2,] 1 0 0\n [3,] 1 0 0\n [4,] 1 0 0\n [5,] 0 1 0\n [6,] 0 1 0\n [7,] 0 1 0\n [8,] 0 1 0\n [9,] 0 0 1\n[10,] 0 0 1\n[11,] 0 0 1\n[12,] 0 0 1\n\nas.matrix(Matrix::bdiag(replicate(3, numeric(4)+1, FALSE)))\n\n", "Perhaps the shortest and fastest option uses diag and modular math:\n diag(3)[0:11 %/% 4 + 1,]\n [,1] [,2] [,3]\n [1,] 1 0 0\n [2,] 1 0 0\n [3,] 1 0 0\n [4,] 1 0 0\n [5,] 0 1 0\n [6,] 0 1 0\n [7,] 0 1 0\n [8,] 0 1 0\n [9,] 0 0 1\n[10,] 0 0 1\n[11,] 0 0 1\n[12,] 0 0 1\n\n", "matrix(rep(c(1,0,0,0,1,0,0,0,1), each = 4),ncol = 3)\n\n [,1] [,2] [,3]\n [1,] 1 0 0\n [2,] 1 0 0\n [3,] 1 0 0\n [4,] 1 0 0\n [5,] 0 1 0\n [6,] 0 1 0\n [7,] 0 1 0\n [8,] 0 1 0\n [9,] 0 0 1\n[10,] 0 0 1\n[11,] 0 0 1\n[12,] 0 0 1\n\n", "#To create the matrix you have described, you can use the repmat function in MATLAB or Octave. This function creates a matrix by repeating the elements of another matrix.\n\n#For example, to create the matrix you have described, you could do the following:\n\n>> A = [1 0 0; 1 0 0; 1 0 0; 1 0 0]\nA =\n 1 0 0\n 1 0 0\n 1 0 0\n 1 0 0\n\n>> B = [0 1 0; 0 1 0; 0 1 0; 0 1 0]\nB =\n 0 1 0\n 0 1 0\n 0 1 0\n 0 1 0\n\n>> C = [0 0 1; 0 0 1; 0 0 1; 0 0 1]\nC =\n 0 0 1\n 0 0 1\n 0 0 1\n 0 0 1\n\n>> D = [A;B;C]\nD =\n 1 0 0\n 1 0 0\n 1 0 0\n 1 0 0\n 0 1 0\n 0 1 0\n 0 1 0\n 0 1 0\n 0 0 1\n 0 0 1\n 0 0 1\n 0 0 1\n#Alternatively, you could use the repmat function to create the matrix more concisely, like this:\n\n>> E = repmat([1 0 0; 0 1 0; 0 0 1], 4, 1)\nE =\n 1 0 0\n 1 0 0\n 1 0 0\n 1 0 0\n 0 1 0\n 0 1 0\n 0 1 0\n 0 1 0\n 0 0 1\n 0 0 1\n 0 0 1\n 0 0 1\n\n#Note that the repmat function takes two arguments: the matrix to be repeated, and the number of times to repeat it in each dimension. In the example above, we have repeated the matrix [1 0 0; 0 1 0; 0 0 1] four times in the first dimension (i.e. along the rows), and once in the second dimension (i.e. along the columns). This creates the desired matrix.\n\n\n\n\n" ]
[ 4, 3, 2, 1 ]
[]
[]
[ "matrix", "r" ]
stackoverflow_0074680911_matrix_r.txt
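For comparison only - the answers above are R (and one MATLAB/Octave) - the same staggered block matrix is a one-liner in Python/NumPy; this is an aside, not part of the original answers:

import numpy as np

m = np.repeat(np.eye(3, dtype=int), 4, axis=0)  # repeat each row of the 3x3 identity 4 times
print(m)  # 12 x 3 matrix: four [1 0 0] rows, then four [0 1 0] rows, then four [0 0 1] rows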
Q: How do show my tests passing/failing in Github? I have a project on github that has extensive unit tests (using mocha for node.js). I'd like to show off by showing those tests passing/failing on each page. I notice other projects on Github are doing this. I've been unable to find any documentation on how to make the test status display. How can I make Github show unit test output? Does Github run the tests or do you need to hook up with an external webapp? Is there a free webservice to do this (my app is Open Source)? A: Take a look at Travis CI. You can use it with GitHub. They have docs on using NodeJS Those badges you see are called "status images" and Travis provides MarkDown that you can insert into your project's README.md file. A: Note that since April 26th 2013, you can see the build status on your GitHub repo branch page: The Commit Status API allows you to use that elsewhere: see " Repo Statuses API". Starting April 30th, 2013, the API endpoint for commit statuses has been extended to allow branch and tag names, as well as commit SHAs. A: CircleCI the status badges are also simply images that you can drop into your README.md file with the markdown. For example: ![Build Status](https://circleci.com/gh/<your github name>/<repo name>.png?circle-token=:circle-token) or ![Build Status](https://circleci.com/gh/<your github name>/<repo name>.svg?style=shield&circle-token=:circle-token) A: Yes I'm quite sure you mean something like Jenkins or Travis-CI. They work on your github account! On every commit the tests are executed. A: GitHub also has workflow status badges for GitHub Actions. The image is usually embedded into the README.md file using Markdown like this: ![example workflow](https://github.com/<OWNER>/<REPOSITORY>/actions/workflows/<WORKFLOW>.yml/badge.svg)
How do I show my tests passing/failing in Github?
I have a project on github that has extensive unit tests (using mocha for node.js). I'd like to show off by showing those tests passing/failing on each page. I notice other projects on Github are doing this. I've been unable to find any documentation on how to make the test status display. How can I make Github show unit test output? Does Github run the tests or do you need to hook up with an external webapp? Is there a free webservice to do this (my app is Open Source)?
[ "Take a look at Travis CI. You can use it with GitHub.\nThey have docs on using NodeJS\nThose badges you see are called \"status images\" and Travis provides MarkDown that you can insert into your project's README.md file.\n", "Note that since April 26th 2013, you can see the build status on your GitHub repo branch page:\n\nThe Commit Status API allows you to use that elsewhere: see \"\nRepo Statuses API\".\nStarting April 30th, 2013, the API endpoint for commit statuses has been extended to allow branch and tag names, as well as commit SHAs.\n", "CircleCI the status badges are also simply images that you can drop into your README.md file with the markdown. For example:\n![Build Status](https://circleci.com/gh/<your github name>/<repo name>.png?circle-token=:circle-token)\n\nor\n![Build Status](https://circleci.com/gh/<your github name>/<repo name>.svg?style=shield&circle-token=:circle-token)\n\n", "Yes I'm quite sure you mean something like Jenkins or Travis-CI.\nThey work on your github account! On every commit the tests are executed.\n", "GitHub also has workflow status badges for GitHub Actions.\nThe image is usually embedded into the README.md file using Markdown like this:\n![example workflow](https://github.com/<OWNER>/<REPOSITORY>/actions/workflows/<WORKFLOW>.yml/badge.svg)\n\n" ]
[ 55, 13, 7, 2, 0 ]
[]
[]
[ "continuous_integration", "github", "unit_testing" ]
stackoverflow_0013546097_continuous_integration_github_unit_testing.txt
Q: problem with backface culling on OpenGL python My goal is to render a .pmx 3D model using PyOpenGL on pygame. I've found pymeshio module that extracts vertices and normal vectors and etc. found an example code on it's github repo that renders on tkinter. I changed the code to render on pygame instead, didn't change parts related to OpenGL rendering. The output is this: The model file is not corrupted, I checked it on Blender and MMD. I'm new with OpenGL and 3D programming but I think it might be related to sequence of vertices for back-face culling, some of triangles are clockwise and some of the others are counter clockwise. this is rendering code. it uses draw function to render. class IndexedVertexArray(object): def __init__(self): # vertices self.vertices=[] self.normal=[] self.colors=[] self.uvlist=[] self.b0=[] self.b1=[] self.w0=[] self.materials=[] self.indices=[] self.buffers=[] self.new_vertices=[] self.new_normal=[] def addVertex(self, pos, normal, uv, color, b0, b1, w0): self.vertices+=pos self.normal+=normal self.colors+=color self.uvlist+=uv self.b0.append(b0) self.b1.append(b1) self.w0.append(w0) def setIndices(self, indices): self.indices=indices def addMaterial(self, material): self.materials.append(material) def create_array_buffer(self, buffer_id, floats): # print('create_array_buuffer', buffer_id) glBindBuffer(GL_ARRAY_BUFFER, buffer_id) glBufferData(GL_ARRAY_BUFFER, len(floats)*4, # byte size (ctypes.c_float*len(floats))(*floats), # 謎のctypes GL_STATIC_DRAW) def create_vbo(self): self.buffers = glGenBuffers(4+1) # print("create_vbo", self.buffers) self.create_array_buffer(self.buffers[0], self.vertices) self.create_array_buffer(self.buffers[1], self.normal) self.create_array_buffer(self.buffers[2], self.colors) self.create_array_buffer(self.buffers[3], self.uvlist) # indices glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.buffers[4]) glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(self.indices)*4, # byte size (ctypes.c_uint*len(self.indices))(*self.indices), # 謎のctypes GL_STATIC_DRAW) def draw(self): if len(self.buffers)==0: self.create_vbo() glEnableClientState(GL_VERTEX_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[0]); glVertexPointer(4, GL_FLOAT, 0, None); glEnableClientState(GL_NORMAL_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[1]); glNormalPointer(GL_FLOAT, 0, None); glEnableClientState(GL_COLOR_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[2]); glColorPointer(4, GL_FLOAT, 0, None); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[3]); glTexCoordPointer(2, GL_FLOAT, 0, None); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.buffers[4]); index_offset=0 for i, m in enumerate(self.materials): # submesh m.begin() glDrawElements(GL_TRIANGLES, m.vertex_count, GL_UNSIGNED_INT, ctypes.c_void_p(index_offset)); index_offset+=m.vertex_count * 4 # byte size m.end() # cleanup glDisableClientState(GL_TEXTURE_COORD_ARRAY) glDisableClientState(GL_COLOR_ARRAY) glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_VERTEX_ARRAY) this is the part related to back-face culling class MQOMaterial(object): def __init__(self): self.rgba=(1, 1, 1, 1) self.vcol=False self.texture=None def __enter__(self): self.begin() def __exit__(self): self.end() def begin(self): glColor4f(*self.rgba) if self.texture: self.texture.begin() # backface culling glEnable(GL_CULL_FACE) glFrontFace(GL_CW) glCullFace(GL_BACK) # glCullFace(GL_FRONT) # alpha test glEnable(GL_ALPHA_TEST); glAlphaFunc(GL_GREATER, 0.5); def end(self): if self.texture: 
self.texture.end() First I disabled alpha channel and did nothing. I tried GL_FRONT and GL_CCW but it didn't work. I tried to separate vertices groups and render them using glVertex3fv. the original code already saves vertices in this format: vertices = [v0.x, v0.y, v0.z, 1, v1.x, v1.y, v1.z, 1, v2.x, v2.y, v2.z, 1, ...] ___________________ ___________________ ___________________ v0 v1 v2 normal = [v0.normal.x, v0.normal.y, v0.normal.z, v1.normal.x, v1.normal.y, v1.normal.z, ...] _____________________________________ _____________________________________ v0 v1 indices = [0, 1, 2, 1, 4, 5, 2, 4, 6, ...] ------- ------- ------- group0 group1 group2 I tried to render triangles with this code: def _draw(self): glBegin(GL_TRIANGLES) for i in range(len(self.indices) // 3): # glTexCoord2fv( tex_coords[ti] ) if i == len(self.new_normal): break # glNormal3fv( self.new_normal[i] ) glVertex3fv( self.new_vertices[i]) glEnd() def new_sort(self): for i in range(len(self.indices) // 3): if i <= -1: continue k = 4 * i j = 3 * i if k + 2 >= len(self.vertices) or j + 2 >= len(self.normal): break self.new_vertices.append(tuple((self.vertices[k], self.vertices[k + 1], self.vertices[k + 2] ))) self.new_normal.append(tuple((self.normal[j], self.normal[j + 1], self.normal[j + 2] ))) the output I thought maybe wrong points were together so shifted them with 1 and 2 to set correct points but the output became uglier. I tested this with quadrilateral and no change. I would be appreciated for any help or hint. A: The colorful images on the top seem to be rendered without depth test. You have to enable the Depth Test and clear the depth buffer: glEnable(GL_DEPTH_TEST) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
problem with backface culling on OpenGL python
My goal is to render a .pmx 3D model using PyOpenGL on pygame. I've found pymeshio module that extracts vertices and normal vectors and etc. found an example code on it's github repo that renders on tkinter. I changed the code to render on pygame instead, didn't change parts related to OpenGL rendering. The output is this: The model file is not corrupted, I checked it on Blender and MMD. I'm new with OpenGL and 3D programming but I think it might be related to sequence of vertices for back-face culling, some of triangles are clockwise and some of the others are counter clockwise. this is rendering code. it uses draw function to render. class IndexedVertexArray(object): def __init__(self): # vertices self.vertices=[] self.normal=[] self.colors=[] self.uvlist=[] self.b0=[] self.b1=[] self.w0=[] self.materials=[] self.indices=[] self.buffers=[] self.new_vertices=[] self.new_normal=[] def addVertex(self, pos, normal, uv, color, b0, b1, w0): self.vertices+=pos self.normal+=normal self.colors+=color self.uvlist+=uv self.b0.append(b0) self.b1.append(b1) self.w0.append(w0) def setIndices(self, indices): self.indices=indices def addMaterial(self, material): self.materials.append(material) def create_array_buffer(self, buffer_id, floats): # print('create_array_buuffer', buffer_id) glBindBuffer(GL_ARRAY_BUFFER, buffer_id) glBufferData(GL_ARRAY_BUFFER, len(floats)*4, # byte size (ctypes.c_float*len(floats))(*floats), # 謎のctypes GL_STATIC_DRAW) def create_vbo(self): self.buffers = glGenBuffers(4+1) # print("create_vbo", self.buffers) self.create_array_buffer(self.buffers[0], self.vertices) self.create_array_buffer(self.buffers[1], self.normal) self.create_array_buffer(self.buffers[2], self.colors) self.create_array_buffer(self.buffers[3], self.uvlist) # indices glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.buffers[4]) glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(self.indices)*4, # byte size (ctypes.c_uint*len(self.indices))(*self.indices), # 謎のctypes GL_STATIC_DRAW) def draw(self): if len(self.buffers)==0: self.create_vbo() glEnableClientState(GL_VERTEX_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[0]); glVertexPointer(4, GL_FLOAT, 0, None); glEnableClientState(GL_NORMAL_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[1]); glNormalPointer(GL_FLOAT, 0, None); glEnableClientState(GL_COLOR_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[2]); glColorPointer(4, GL_FLOAT, 0, None); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[3]); glTexCoordPointer(2, GL_FLOAT, 0, None); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.buffers[4]); index_offset=0 for i, m in enumerate(self.materials): # submesh m.begin() glDrawElements(GL_TRIANGLES, m.vertex_count, GL_UNSIGNED_INT, ctypes.c_void_p(index_offset)); index_offset+=m.vertex_count * 4 # byte size m.end() # cleanup glDisableClientState(GL_TEXTURE_COORD_ARRAY) glDisableClientState(GL_COLOR_ARRAY) glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_VERTEX_ARRAY) this is the part related to back-face culling class MQOMaterial(object): def __init__(self): self.rgba=(1, 1, 1, 1) self.vcol=False self.texture=None def __enter__(self): self.begin() def __exit__(self): self.end() def begin(self): glColor4f(*self.rgba) if self.texture: self.texture.begin() # backface culling glEnable(GL_CULL_FACE) glFrontFace(GL_CW) glCullFace(GL_BACK) # glCullFace(GL_FRONT) # alpha test glEnable(GL_ALPHA_TEST); glAlphaFunc(GL_GREATER, 0.5); def end(self): if self.texture: self.texture.end() First I disabled alpha channel and did nothing. 
I tried GL_FRONT and GL_CCW but it didn't work. I tried to separate vertices groups and render them using glVertex3fv. the original code already saves vertices in this format: vertices = [v0.x, v0.y, v0.z, 1, v1.x, v1.y, v1.z, 1, v2.x, v2.y, v2.z, 1, ...] ___________________ ___________________ ___________________ v0 v1 v2 normal = [v0.normal.x, v0.normal.y, v0.normal.z, v1.normal.x, v1.normal.y, v1.normal.z, ...] _____________________________________ _____________________________________ v0 v1 indices = [0, 1, 2, 1, 4, 5, 2, 4, 6, ...] ------- ------- ------- group0 group1 group2 I tried to render triangles with this code: def _draw(self): glBegin(GL_TRIANGLES) for i in range(len(self.indices) // 3): # glTexCoord2fv( tex_coords[ti] ) if i == len(self.new_normal): break # glNormal3fv( self.new_normal[i] ) glVertex3fv( self.new_vertices[i]) glEnd() def new_sort(self): for i in range(len(self.indices) // 3): if i <= -1: continue k = 4 * i j = 3 * i if k + 2 >= len(self.vertices) or j + 2 >= len(self.normal): break self.new_vertices.append(tuple((self.vertices[k], self.vertices[k + 1], self.vertices[k + 2] ))) self.new_normal.append(tuple((self.normal[j], self.normal[j + 1], self.normal[j + 2] ))) the output I thought maybe wrong points were together so shifted them with 1 and 2 to set correct points but the output became uglier. I tested this with quadrilateral and no change. I would be appreciated for any help or hint.
[ "The colorful images on the top seem to be rendered without depth test. You have to enable the Depth Test and clear the depth buffer:\nglEnable(GL_DEPTH_TEST)\n\nglClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)\n\n" ]
[ 1 ]
[]
[]
[ "3d", "opengl", "pyopengl", "python" ]
stackoverflow_0074680930_3d_opengl_pyopengl_python.txt
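The answer above gives only the two GL calls; here is a rough, hypothetical sketch of where they sit in a pygame + PyOpenGL loop. Nothing below is from the original post - the window size, clear colour and loop structure are assumptions, and the question's vertex-array draw call is only indicated by a comment:

import pygame
from OpenGL.GL import (GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_DEPTH_TEST,
                       glClear, glClearColor, glEnable)

pygame.init()
pygame.display.set_mode((800, 600), pygame.OPENGL | pygame.DOUBLEBUF)

glEnable(GL_DEPTH_TEST)           # once, after the GL context exists
glClearColor(0.1, 0.1, 0.1, 1.0)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # clear BOTH buffers every frame; stale depth values are what garble the model
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    # ... IndexedVertexArray.draw() and material setup from the question would go here ...
    pygame.display.flip()

pygame.quit()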
Q: Netbeans profiler OpenJDK 64-Bit Server VM (Zulu 8.58.0.13-CA-macos-aarch64) I am trying to use Netbeans 12.5 profiler on OpenJDK 64-Bit Server VM (Zulu 8.58.0.13-CA-macos-aarch64) The first calibration process always fail. Deleting ~/.nbprofiler doesn't fix the problem. I am getting the error: Profiler calibration data file does not exists: /Users/myhome/.nbprofiler/machinedata.jdk18 and Netbeans output: *** Profiler message (Sun Jan 23 12:30:38 CET 2022): Starting target application... /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/bin/java -agentpath:/Applications/NetBeans/Apache NetBeans 12.5.app/Contents/Resources/NetBeans/netbeans/profiler/lib/deployed/jdk16/mac/libprofilerinterface.jnilib -Xbootclasspath/a:/Applications/NetBeans/Apache NetBeans 12.5.app/Contents/Resources/NetBeans/netbeans/profiler/lib/jfluid-server.jar:/Applications/NetBeans/Apache NetBeans 12.5.app/Contents/Resources/NetBeans/netbeans/profiler/lib/jfluid-server-15.jar org.netbeans.lib.profiler.server.ProfilerServer /Applications/NetBeans/Apache NetBeans 12.5.app/Contents/Resources/NetBeans/netbeans/profiler/lib/deployed/jdk16/mac 5141 10 Profiler+Calibration+Run *** Profiler warning (Sun Jan 23 12:33:11 CET 2022): timed out while trying to connect to the target JVM. *** Profiler error (Sun Jan 23 12:33:11 CET 2022): connection with server not open My environment: Netbeans 12.5 openjdk version "1.8.0_312" OpenJDK Runtime Environment (Zulu 8.58.0.13-CA-macos-aarch64) (build 1.8.0_312-b07) OpenJDK 64-Bit Server VM (Zulu 8.58.0.13-CA-macos-aarch64) (build 25.312-b07, mixed mode) macOS 12.1 Chip Apple M1 Pro Thanks in advance for any help/suggestion A: The error message indicates that the profiler cannot connect to the target JVM (Java Virtual Machine) during the calibration process. This could be due to several reasons, including a conflict with the version of the Java runtime you are using or a problem with the profiler's configuration. One possible solution is to try using a different version of the Java runtime, or to update to the latest version of NetBeans. You can also try using the profiler with a different project or application to see if the problem is specific to your current setup. Another option is to try resetting the profiler's settings by deleting the .nbprofiler directory in your home directory. This will remove any existing profiler configuration data and allow you to start with a clean configuration. It's also worth checking the NetBeans documentation and forums for additional troubleshooting tips or known issues with the profiler.
Netbeans profiler OpenJDK 64-Bit Server VM (Zulu 8.58.0.13-CA-macos-aarch64)
I am trying to use Netbeans 12.5 profiler on OpenJDK 64-Bit Server VM (Zulu 8.58.0.13-CA-macos-aarch64) The first calibration process always fail. Deleting ~/.nbprofiler doesn't fix the problem. I am getting the error: Profiler calibration data file does not exists: /Users/myhome/.nbprofiler/machinedata.jdk18 and Netbeans output: *** Profiler message (Sun Jan 23 12:30:38 CET 2022): Starting target application... /Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home/bin/java -agentpath:/Applications/NetBeans/Apache NetBeans 12.5.app/Contents/Resources/NetBeans/netbeans/profiler/lib/deployed/jdk16/mac/libprofilerinterface.jnilib -Xbootclasspath/a:/Applications/NetBeans/Apache NetBeans 12.5.app/Contents/Resources/NetBeans/netbeans/profiler/lib/jfluid-server.jar:/Applications/NetBeans/Apache NetBeans 12.5.app/Contents/Resources/NetBeans/netbeans/profiler/lib/jfluid-server-15.jar org.netbeans.lib.profiler.server.ProfilerServer /Applications/NetBeans/Apache NetBeans 12.5.app/Contents/Resources/NetBeans/netbeans/profiler/lib/deployed/jdk16/mac 5141 10 Profiler+Calibration+Run *** Profiler warning (Sun Jan 23 12:33:11 CET 2022): timed out while trying to connect to the target JVM. *** Profiler error (Sun Jan 23 12:33:11 CET 2022): connection with server not open My environment: Netbeans 12.5 openjdk version "1.8.0_312" OpenJDK Runtime Environment (Zulu 8.58.0.13-CA-macos-aarch64) (build 1.8.0_312-b07) OpenJDK 64-Bit Server VM (Zulu 8.58.0.13-CA-macos-aarch64) (build 25.312-b07, mixed mode) macOS 12.1 Chip Apple M1 Pro Thanks in advance for any help/suggestion
[ "The error message indicates that the profiler cannot connect to the target JVM (Java Virtual Machine) during the calibration process. This could be due to several reasons, including a conflict with the version of the Java runtime you are using or a problem with the profiler's configuration.\nOne possible solution is to try using a different version of the Java runtime, or to update to the latest version of NetBeans. You can also try using the profiler with a different project or application to see if the problem is specific to your current setup.\nAnother option is to try resetting the profiler's settings by deleting the .nbprofiler directory in your home directory. This will remove any existing profiler configuration data and allow you to start with a clean configuration.\nIt's also worth checking the NetBeans documentation and forums for additional troubleshooting tips or known issues with the profiler.\n" ]
[ 0 ]
[]
[]
[ "java", "macos", "netbeans" ]
stackoverflow_0070821648_java_macos_netbeans.txt
Q: How to disable QWebEngineView logging with webEngineContextLog? I'm using a QWebEngineView in my application. After upgrading to PyQt6 it has started to output the logging information shown below. How can I disable these messages? I have found the code that is emitting them here: logContext It looks like I have to change the output of webEngineContextLog.isInfoEnabled() to False, but it is unclear how to achieve this. Logging output: qt.webenginecontext: GL Type: desktop Surface Type: OpenGL Surface Profile: CompatibilityProfile Surface Version: 4.6 QSG RHI Backend: OpenGL Using Supported QSG Backend: yes Using Software Dynamic GL: no Using Multithreaded OpenGL: yes Init Parameters: * application-name python * browser-subprocess-path C:\Users\xxx\miniconda3\envs\xxx\lib\site-packages\PyQt6\Qt6\bin\QtWebEngineProcess.exe * create-default-gl-context * disable-es3-gl-context * disable-features ConsolidatedMovementXY,InstalledApp,BackgroundFetch,WebOTP,WebPayments,WebUSB,PictureInPicture * disable-speech-api * enable-features NetworkServiceInProcess,TracingServiceInProcess * enable-threaded-compositing * in-process-gpu * use-gl desktop Minimal code to reproduce: from PyQt6.QtWidgets import QApplication from PyQt6.QtWebEngineWidgets import QWebEngineView app = QApplication(['test']) QWebEngineView().settings() A: I stumbled over the same problem today when I tried to integrate a silent unit test into my current project. After a quick investigation and having a look at how it is logged here in the function logContext, I came up with the following solution which works fine for me: from PySide6.QtCore import QUrl, QLoggingCategory from PySide6.QtWidgets import (QApplication, QMainWindow) from PySide6.QtWebEngineWidgets import QWebEngineView web_engine_context_log = QLoggingCategory("qt.webenginecontext") web_engine_context_log.setFilterRules("*.info=false") web_view = QWebEngineView()
How to disable QWebEngineView logging with webEngineContextLog?
I'm using a QWebEngineView in my application. After upgrading to PyQt6 it has started to output the logging information shown below. How can I disable these messages? I have found the code that is emitting them here: logContext It looks like I have to change the output of webEngineContextLog.isInfoEnabled() to False, but it is unclear how to achieve this. Logging output: qt.webenginecontext: GL Type: desktop Surface Type: OpenGL Surface Profile: CompatibilityProfile Surface Version: 4.6 QSG RHI Backend: OpenGL Using Supported QSG Backend: yes Using Software Dynamic GL: no Using Multithreaded OpenGL: yes Init Parameters: * application-name python * browser-subprocess-path C:\Users\xxx\miniconda3\envs\xxx\lib\site-packages\PyQt6\Qt6\bin\QtWebEngineProcess.exe * create-default-gl-context * disable-es3-gl-context * disable-features ConsolidatedMovementXY,InstalledApp,BackgroundFetch,WebOTP,WebPayments,WebUSB,PictureInPicture * disable-speech-api * enable-features NetworkServiceInProcess,TracingServiceInProcess * enable-threaded-compositing * in-process-gpu * use-gl desktop Minimal code to reproduce: from PyQt6.QtWidgets import QApplication from PyQt6.QtWebEngineWidgets import QWebEngineView app = QApplication(['test']) QWebEngineView().settings()
[ "I stumbled over the same problem today when I tried to integrate a silent unit test into my current project.\nAfter a quick investigation and having a look at how it is logged here in the function logContext, I came up with the following solution which works fine for me:\nfrom PySide6.QtCore import QUrl, QLoggingCategory\nfrom PySide6.QtWidgets import (QApplication, QMainWindow)\nfrom PySide6.QtWebEngineWidgets import QWebEngineView\n\nweb_engine_context_log = QLoggingCategory(\"qt.webenginecontext\")\nweb_engine_context_log.setFilterRules(\"*.info=false\")\nweb_view = QWebEngineView()\n\n" ]
[ 0 ]
[]
[]
[ "pyqt6", "python", "qwebengineview" ]
stackoverflow_0074499940_pyqt6_python_qwebengineview.txt
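The accepted answer above is written against PySide6 while the question imports PyQt6; the same filter rule should work there too. This is an untested adaptation that assumes QLoggingCategory (and its static setFilterRules) is exposed identically in PyQt6.QtCore:

from PyQt6.QtCore import QLoggingCategory
from PyQt6.QtWidgets import QApplication
from PyQt6.QtWebEngineWidgets import QWebEngineView

# silence the "qt.webenginecontext" info output before the web engine initialises
QLoggingCategory.setFilterRules("qt.webenginecontext.info=false")

app = QApplication(['test'])
QWebEngineView().settings()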
Q: Error: ! Can't combine `..1$exists` and `..350$exists` I wrote a code in which loop through a folder with different files which are 2500. The names of the variable found in the list called flight_info_1 are path and data. When I want to unnest it, I god following error. Can give me someone a hint? With kind regards library(prettydoc) library(tidyverse) library(fs) library(purrr) library(haven) library(gghalves) library(tidygraph) library(ggraph) library(GGally) library(vtable) library(ggridges) library(scales) library(ggpubr) library(highcharter) library(plotly) library(patchwork) library(ggdark) library(ggthemes) library(rnaturalearth) `%||%` <- rlang::`%||%` flight_info_1 <- "Path/Path/" %>% dir_ls(recurse = TRUE, type = "file") %>% as_tibble_col("path") %>% mutate(data = map(path, function(path_i){ path_i <<- path_i data_i <- read_csv(path_i, show_col_types=FALSE) # if(nrow(data_i)==0) # {return(NULL)} data_i$date <- as.character(data_i[["date"]] %||% NA_character_) moved_vars <- tibble(.rows=nrow(data_i)) if(isTRUE(str_detect(data_i[["airline"]] %||% "", "€"))) { moved_vars <- bind_cols(moved_vars, select(data_i, price = airline)) } if(isTRUE(str_detect(data_i[["price"]] %||% "", "hr|min"))){ moved_vars <- bind_cols(moved_vars, select(data_i, duration = price)) } if(isTRUE(str_detect(data_i[["emissions"]] %||% "", "[included]"))){ moved_vars <- bind_cols(moved_vars, select(data_i, luggage = emissions)) } if(isTRUE(str_detect(data_i[["luggage"]] %||% "", "(?i)kg"))){ moved_vars <- bind_cols(moved_vars, select(data_i, emissions = luggage)) } if(isTRUE(str_detect(data_i[["duration"]] %||% "", "^[[:alpha:][:space:]]+$"))){ moved_vars <- bind_cols(moved_vars, select(data_i, airline = duration)) } data_i_mod <- bind_cols( moved_vars, select(data_i, -any_of(colnames(moved_vars))) ) return(data_i_mod) })) data_csv <- unnest(flight_info_1, data) Error: ! Can't combine `..1$exists` <double> and `..350$exists` <character>. Backtrace: 1. tidyr::unnest(flight_info_1, data) 2. tidyr:::unnest.data.frame(flight_info_1, data) 3. tidyr::unchop(data, any_of(cols), keep_empty = keep_empty, ptype = ptype) 4. tidyr:::df_unchop(cols, ptype = ptype, keep_empty = keep_empty) 5. vctrs::vec_unchop(col, ptype = col_ptype) 6. vctrs (local) `<fn>`() 7. vctrs::vec_default_ptype2(...) 8. vctrs::stop_incompatible_type(...) 9. vctrs:::stop_incompatible(...) 10. vctrs:::stop_vctrs(...) A: The solution was data_i_mod <- bind_cols( moved_vars, select(data_i, -any_of(colnames(moved_vars))) ) data_i_mod <- data_i_mod %>% mutate(across(where(is.double), as.character)) return(data_i_mod)
Error: ! Can't combine `..1$exists` and `..350$exists`
I wrote a code in which loop through a folder with different files which are 2500. The names of the variable found in the list called flight_info_1 are path and data. When I want to unnest it, I god following error. Can give me someone a hint? With kind regards library(prettydoc) library(tidyverse) library(fs) library(purrr) library(haven) library(gghalves) library(tidygraph) library(ggraph) library(GGally) library(vtable) library(ggridges) library(scales) library(ggpubr) library(highcharter) library(plotly) library(patchwork) library(ggdark) library(ggthemes) library(rnaturalearth) `%||%` <- rlang::`%||%` flight_info_1 <- "Path/Path/" %>% dir_ls(recurse = TRUE, type = "file") %>% as_tibble_col("path") %>% mutate(data = map(path, function(path_i){ path_i <<- path_i data_i <- read_csv(path_i, show_col_types=FALSE) # if(nrow(data_i)==0) # {return(NULL)} data_i$date <- as.character(data_i[["date"]] %||% NA_character_) moved_vars <- tibble(.rows=nrow(data_i)) if(isTRUE(str_detect(data_i[["airline"]] %||% "", "€"))) { moved_vars <- bind_cols(moved_vars, select(data_i, price = airline)) } if(isTRUE(str_detect(data_i[["price"]] %||% "", "hr|min"))){ moved_vars <- bind_cols(moved_vars, select(data_i, duration = price)) } if(isTRUE(str_detect(data_i[["emissions"]] %||% "", "[included]"))){ moved_vars <- bind_cols(moved_vars, select(data_i, luggage = emissions)) } if(isTRUE(str_detect(data_i[["luggage"]] %||% "", "(?i)kg"))){ moved_vars <- bind_cols(moved_vars, select(data_i, emissions = luggage)) } if(isTRUE(str_detect(data_i[["duration"]] %||% "", "^[[:alpha:][:space:]]+$"))){ moved_vars <- bind_cols(moved_vars, select(data_i, airline = duration)) } data_i_mod <- bind_cols( moved_vars, select(data_i, -any_of(colnames(moved_vars))) ) return(data_i_mod) })) data_csv <- unnest(flight_info_1, data) Error: ! Can't combine `..1$exists` <double> and `..350$exists` <character>. Backtrace: 1. tidyr::unnest(flight_info_1, data) 2. tidyr:::unnest.data.frame(flight_info_1, data) 3. tidyr::unchop(data, any_of(cols), keep_empty = keep_empty, ptype = ptype) 4. tidyr:::df_unchop(cols, ptype = ptype, keep_empty = keep_empty) 5. vctrs::vec_unchop(col, ptype = col_ptype) 6. vctrs (local) `<fn>`() 7. vctrs::vec_default_ptype2(...) 8. vctrs::stop_incompatible_type(...) 9. vctrs:::stop_incompatible(...) 10. vctrs:::stop_vctrs(...)
[ "The solution was\n data_i_mod <- bind_cols(\n moved_vars, select(data_i, -any_of(colnames(moved_vars)))\n )\n \n data_i_mod <- data_i_mod %>% mutate(across(where(is.double), as.character))\n \n return(data_i_mod)\n\n" ]
[ 0 ]
[]
[]
[ "data_wrangling", "r" ]
stackoverflow_0074680675_data_wrangling_r.txt
Q: Unity Take Render Each List Element Separately I want to get renders of multiple items and set them to a list in order to their itemId. For that, first program instantiate the object, get render and destroy it. In each render I am using the clone of the previous render because of the optimization issues. But there is some problem about ordering and set the correct render for item. I tried to re-order rendering code but it is not working. There is no error but the renders doesn't match with the item. public async void SetRenderAsync(string itemId, RawImage image, WeatherCondition var renderPool = renderPoolList.Find(rp => rp.ItemId == itemId); if (renderPool == null) { var result = await AssetManager.Instance.InstantiateAsync(itemId, new Vector3(0,1.5f,0), new Quaternion(),trailersParent.transform); if (result) { renderCamera.Render(); RenderTexture.active = renderTexture; renderCamera.targetTexture = Instantiate(renderCamera.activeTexture); renderPool = new RenderPool() { ItemId = itemId, renderTexture = renderCamera.activeTexture, }; renderPoolList.Add(renderPool); Destroy(result); } } image.texture = renderPool.renderTexture;} A: You could sort the render pool list by item ID before assigning the render texture to the item. This can be done using the List<T>.Sort method, which takes a comparison function that specifies how the elements in the list should be compared and sorted. For example, you could add the following code before the line where you assign the render texture to the item: renderPoolList.Sort((r1, r2) => string.Compare(r1.ItemId, r2.ItemId)); This will sort the render pool list in ascending order by item ID, ensuring that the correct render texture is assigned to each item.
Unity Take Render Each List Element Separately
I want to get renders of multiple items and set them to a list in order to their itemId. For that, first program instantiate the object, get render and destroy it. In each render I am using the clone of the previous render because of the optimization issues. But there is some problem about ordering and set the correct render for item. I tried to re-order rendering code but it is not working. There is no error but the renders doesn't match with the item. public async void SetRenderAsync(string itemId, RawImage image, WeatherCondition var renderPool = renderPoolList.Find(rp => rp.ItemId == itemId); if (renderPool == null) { var result = await AssetManager.Instance.InstantiateAsync(itemId, new Vector3(0,1.5f,0), new Quaternion(),trailersParent.transform); if (result) { renderCamera.Render(); RenderTexture.active = renderTexture; renderCamera.targetTexture = Instantiate(renderCamera.activeTexture); renderPool = new RenderPool() { ItemId = itemId, renderTexture = renderCamera.activeTexture, }; renderPoolList.Add(renderPool); Destroy(result); } } image.texture = renderPool.renderTexture;}
[ "You could sort the render pool list by item ID before assigning the render texture to the item. This can be done using the List<T>.Sort method, which takes a comparison function that specifies how the elements in the list should be compared and sorted.\nFor example, you could add the following code before the line where you assign the render texture to the item:\nrenderPoolList.Sort((r1, r2) => string.Compare(r1.ItemId, r2.ItemId));\nThis will sort the render pool list in ascending order by item ID, ensuring that the correct render texture is assigned to each item.\n" ]
[ 0 ]
[]
[]
[ "c#", "render", "textures", "unity3d" ]
stackoverflow_0074678297_c#_render_textures_unity3d.txt
Q: Unable to access UDF from external view Redshift I have created datashare, and shared VIEW with Consumer cluster. That view contains one UDF, I have added that view in External Schema. Now when I try to run that view in consumer cluster, I am getting below error. [Amazon](500310) Invalid operation: External view contains unsupported objects; Can someone help me with this? Thanks in advance A: A few possible reasons could be: The UDF is not defined in the correct schema. Redshift external views can only access UDFs that are defined in the same schema as the external view. If the UDF is defined in a different schema, you will need to specify the schema name when calling the UDF from the external view. For example SELECT my_udf(col1) FROM my_schema.my_table; The UDF is not visible to the user who is accessing the external view. Redshift UDFs are only visible to users who have the necessary privileges to access the UDF's schema. If the user who is accessing the external view does not have access to the UDF's schema, you will need to grant the user the "USAGE" privilege on the UDF's schema. The UDF is not compatible with the external view's data type. Redshift UDFs can only be used with data types that are compatible with the UDF's input and output arguments. If the external view's data type is not compatible with the UDF's arguments, you will need to convert the data type before calling the UDF
Unable to access UDF from external view Redshift
I have created datashare, and shared VIEW with Consumer cluster. That view contains one UDF, I have added that view in External Schema. Now when I try to run that view in consumer cluster, I am getting below error. [Amazon](500310) Invalid operation: External view contains unsupported objects; Can someone help me with this? Thanks in advance
[ "A few possible reasons could be:\n\nThe UDF is not defined in the correct schema. Redshift external views can only access UDFs that are defined in the same schema as the external view. If the UDF is defined in a different schema, you will need to specify the schema name when calling the UDF from the external view. For example\n\nSELECT my_udf(col1) FROM my_schema.my_table;\n\n\nThe UDF is not visible to the user who is accessing the external view. Redshift UDFs are only visible to users who have the necessary privileges to access the UDF's schema. If the user who is accessing the external view does not have access to the UDF's schema, you will need to grant the user the \"USAGE\" privilege on the UDF's schema.\n\nThe UDF is not compatible with the external view's data type. Redshift UDFs can only be used with data types that are compatible with the UDF's input and output arguments. If the external view's data type is not compatible with the UDF's arguments, you will need to convert the data type before calling the UDF\n\n\n" ]
[ 0 ]
[]
[]
[ "amazon_redshift", "data_sharing", "user_defined_functions" ]
stackoverflow_0074444090_amazon_redshift_data_sharing_user_defined_functions.txt
Q: How do I make my code capitalize the first letter of the word that has a capital letter in it? (Pig Latin) My code so far is: def to_pig(string): words = string.split() for i, word in enumerate(words): ''' if first letter is a vowel ''' if word[0] in 'aeiou': words[i] = words[i]+ "yay" elif word[0] in 'AEIOU': words[i] = words[i]+ "yay" else: ''' else get vowel position and postfix all the consonants present before that vowel to the end of the word along with "ay" ''' has_vowel = False for j, letter in enumerate(word): if letter in 'aeiou': words[i] = word[j:] + word[:j] + "ay" has_vowel = True break #if the word doesn't have any vowel then simply postfix "ay" if(has_vowel == False): words[i] = words[i]+ "ay" pig_latin = ' '.join(words) return pig_latin My code right now coverts a string to pig latin string. If a word starts with one or more consonants followed by a vowel, the consonants up to but not including the first vowel are moved to the end of the word and "ay" is added. If a word begins with a vowel, then "yay" is added to the end. String: "The rain in Spain stays mainly in the plains" However, my code returns: "eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay" While it should return: "Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay" How do I fix my code so that it returns the first letter capital for the word that has a capital letter? A: Use any(... isupper()) to check for the presence of a capital letter and str.title() to capitalize the first letter. >>> words = "eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay".split() >>> words = [word.title() if any(c.isupper() for c in word) else word for word in words] >>> ' '.join(words) 'Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay' A: A one-line solution would be to check whether the word contains a capital letter. If so, you want to convert the capital letter to a lowercase letter and then capitalize the first letter of that word. You could do that as such. Suppose you have your array of words, then: words = [i[0].upper() + i[1:].lower() if i.lower() != i else i for i in words]
How do I make my code capitalize the first letter of the word that has a capital letter in it? (Pig Latin)
My code so far is: def to_pig(string): words = string.split() for i, word in enumerate(words): ''' if first letter is a vowel ''' if word[0] in 'aeiou': words[i] = words[i]+ "yay" elif word[0] in 'AEIOU': words[i] = words[i]+ "yay" else: ''' else get vowel position and postfix all the consonants present before that vowel to the end of the word along with "ay" ''' has_vowel = False for j, letter in enumerate(word): if letter in 'aeiou': words[i] = word[j:] + word[:j] + "ay" has_vowel = True break #if the word doesn't have any vowel then simply postfix "ay" if(has_vowel == False): words[i] = words[i]+ "ay" pig_latin = ' '.join(words) return pig_latin My code right now coverts a string to pig latin string. If a word starts with one or more consonants followed by a vowel, the consonants up to but not including the first vowel are moved to the end of the word and "ay" is added. If a word begins with a vowel, then "yay" is added to the end. String: "The rain in Spain stays mainly in the plains" However, my code returns: "eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay" While it should return: "Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay" How do I fix my code so that it returns the first letter capital for the word that has a capital letter?
[ "Use any(... isupper()) to check for the presence of a capital letter and str.title() to capitalize the first letter.\n>>> words = \"eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay\".split()\n>>> words = [word.title() if any(c.isupper() for c in word) else word for word in words]\n>>> ' '.join(words)\n'Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay'\n\n", "A one-line solution would be to check whether the word contains a capital letter. If so, you want to convert the capital letter to a lowercase letter and then capitalize the first letter of that word. You could do that as such. Suppose you have your array of words, then:\nwords = [i[0].upper() + i[1:].lower() if i.lower() != i else i for i in words]\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074681051_python.txt
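One way to fold the two answers above back into the question's function - a sketch, not the only possible fix. It remembers whether the original word contained any capital letter and, if so, applies str.title() to the converted word so exactly the first letter ends up capitalised; everything else follows the question's own rules:

def to_pig(string):
    words = string.split()
    for i, word in enumerate(words):
        had_capital = any(c.isupper() for c in word)
        if word[0] in 'aeiouAEIOU':
            converted = word + 'yay'
        else:
            converted = word + 'ay'  # fallback when the word has no vowel at all
            for j, letter in enumerate(word):
                if letter.lower() in 'aeiou':
                    converted = word[j:] + word[:j] + 'ay'
                    break
        words[i] = converted.title() if had_capital else converted
    return ' '.join(words)

print(to_pig("The rain in Spain stays mainly in the plains"))
# Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay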
Q: Emailed invoices say sent from messaging-service@post.xero.com - Reply email address is correct though I am using the xero api with Python. I am correctly emailing invoices and all appears well except that the "sent from User" is appearing in the received email as messaging-service@post.xero.com rather than my business name. Replying to the email correctly goes to my address (not the messaging-service.... one). Any help would be much appreciated as I cannot find any reference to this error online. I tried to find how I might pass the user details along with the email send method, but this parameter doesn't appear to exist. I tried to find whether these details can be included with the invoice, but again no luck. I think the user details are set beyond the scope of what I can access - the send email method clearly knows the user, since the return address is correct; it just isn't being included in the email header information for some reason. Cheers Steve A: I have temporarily "fixed" this by unselecting "Logged in user" under email settings in xero and selecting the user account I wish emails to be sent from. I don't think this is the best solution but it works for now. Cheers Steve
Emailed invoices say sent from messaging-service@post.xero.com - Reply email address is correct though
I am using the xero api with Python. I am correctly emailing invoices and all appears well except that the "sent from User" is appearing in the received email as messaging-service@post.xero.com rather than my business name. Replying to the email correctly goes to my address (not the messaging-service.... one). Any help would be much appreciated as I cannot find any reference to this error online. I tried to find how I might pass the user details along with the email send method, but this parameter doesn't appear to exist. I tried to find whether these details can be included with the invoice, but again no luck. I think the user details are set beyond the scope of what I can access - the send email method clearly knows the user, since the return address is correct; it just isn't being included in the email header information for some reason. Cheers Steve
[ "I have temporarily \"fixed\" this by unselecting \"Logged in user\" under email settings in xero and selecting the user account I wish emails to be sent from.\nI don't think this is the best solution but it works for now.\nCheers\nSteve\n" ]
[ 0 ]
[]
[]
[ "email" ]
stackoverflow_0074673101_email.txt
Q: Why my example is passing test without model validation? I've got the following model and test file. To my understanding, the last example should fail until I validate the body attribute in the model but it's passing the test. I'm not sure what it is that I'm missing. Any assistance is highly appreciated in advance, thanks. article.rb class Article < ApplicationRecord validates :title, presence: true, length: { in: 6..25 } end article_spec.rb require 'rails_helper' RSpec.describe Article, type: :model do subject { Article.new(title: 'Lorem ipsum dolor sit, amet ', body: 'consectetur adipisicing elit. Unde, labore?') } before { subject.save } it 'is not valid without a title' do subject.title = nil expect(subject).to_not be_valid end it 'is not valid if the title is too short' do subject.title = 'a' expect(subject).to_not be_valid end it 'is not valid if the title is too long' do subject.title = 'a' * 26 expect(subject).to_not be_valid end it 'is not valid without a body' do subject.body = nil expect(subject).to_not be_valid end end A: You are missing the validation for the body attribute. class Article < ApplicationRecord validates :title, presence: true, length: { in: 6..25 } validates :body, presence: true end
Why my example is passing test without model validation?
I've got the following model and test file. To my understanding, the last example should fail until I validate the body attribute in the model but it's passing the test. I'm not sure what it is that I'm missing. Any assistance is highly appreciated in advance, thanks. article.rb class Article < ApplicationRecord validates :title, presence: true, length: { in: 6..25 } end article_spec.rb require 'rails_helper' RSpec.describe Article, type: :model do subject { Article.new(title: 'Lorem ipsum dolor sit, amet ', body: 'consectetur adipisicing elit. Unde, labore?') } before { subject.save } it 'is not valid without a title' do subject.title = nil expect(subject).to_not be_valid end it 'is not valid if the title is too short' do subject.title = 'a' expect(subject).to_not be_valid end it 'is not valid if the title is too long' do subject.title = 'a' * 26 expect(subject).to_not be_valid end it 'is not valid without a body' do subject.body = nil expect(subject).to_not be_valid end end
[ "You are missing the validation for the body attribute.\nclass Article < ApplicationRecord\n validates :title, presence: true, length: { in: 6..25 }\n validates :body, presence: true\nend\n\n" ]
[ 0 ]
[]
[]
[ "rspec", "ruby", "ruby_on_rails", "testing" ]
stackoverflow_0074679222_rspec_ruby_ruby_on_rails_testing.txt
Q: Is it inefficient to open and close a WebSocket connection multiple times? I'm building a game that involves realtime voice chat, for which I want to use WebSockets. My understanding is that WebSockets are not scalable, meaning they would require a lot of server infrastructure to handle. Although I plan to host the server through a serverless method (AWS), I still want to maximize efficiency to reduce cost. My idea was to minimize the number of WebSocket connections on the server by opening the connection when voice chat is an option (i.e. in the game lobby) and then closing the connection when voice chat is no longer an option, instead of keeping the connection open the whole time the user is in the game. Common sense would tell me that this is much more efficient than keeping the connection open for the duration of the game, but I just thought I would ask to see if my thought process is correct since I am pretty new to using WebSockets. Is it inefficient to open and close a connection multiple times? Are there any drawbacks to this? Also, as a side question, is it bad practice to send large payloads over WebSockets (e.g. a megabyte)? I want the client to be able to send an audio file to multiple other clients. Would it just be best to stick with typical polling to handle this, or could I use WebSockets? Thank you to all who can help! A: Opening and closing WebSocket connections multiple times is not necessarily inefficient, as the overhead associated with establishing a new connection is usually minimal. However, if the WebSocket connection needs to be re-established frequently, there may be additional overhead associated with the authentication process. It is best to open the connection just once and keep it open for the duration of the client's session. Sending large payloads over a WebSocket connection is generally not recommended, as the amount of data that can be transferred is limited. The best approach for sending large payloads would be to use a protocol such as HTTP or FTP, which are designed to handle large amounts of data. Additionally, if the payloads are audio files, it may be more efficient to use a streaming protocol such as RTSP.
Is it inefficient to open and close a WebSocket connection multiple times?
I'm building a game that involves realtime voice chat, which I want to use WebSockets for. My understanding is that WebSockets are not scalable, meaning they would require a lot of server infrastructure to handle. Although I plan to host the server through a serverless method (AWS), I still want to maximize efficiency to reduce cost. My idea was to minimize the number of WebSocket connections on the server by opening the connection when voice chat is an option (i.e. in the game lobby) and then closing the connection when voice chat is no longer an option, instead of keeping the connection open the whole time the user is in the game. Common sense tells me that this is much more efficient than keeping the connection open for the duration of the game, but I thought I would ask whether my thought process is correct, since I am pretty new to WebSockets. Is it inefficient to open and close a connection multiple times? Are there any drawbacks to this? Also, as a side question, is it bad practice to send large payloads over WebSockets (e.g. a megabyte)? I want the client to be able to send an audio file to multiple other clients. Would it be best to stick with typical polling to handle this, or could I use WebSockets? Thank you to all who can help!
[ "Opening and closing WebSocket connections multiple times is not necessarily inefficient, as the overhead associated with establishing a new connection is usually minimal. However, if the WebSocket connection needs to be re-established frequently, there may be additional overhead associated with the authentication process. It is best to open the connection just once and keep it open for the duration of the client's session.\nSending large payloads over a WebSocket connection is generally not recommended, as the amount of data that can be transferred is limited. The best approach for sending large payloads would be to use a protocol such as HTTP or FTP, which are designed to handle large amounts of data. Additionally, if the payloads are audio files, it may be more efficient to use a streaming protocol such as RTSP.\n" ]
[ 0 ]
[]
[]
[ "http", "polling", "server", "websocket" ]
stackoverflow_0074681105_http_polling_server_websocket.txt
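To illustrate the open-per-lobby, close-on-exit pattern discussed above, here is a minimal client-side sketch in Python. It assumes the third-party websockets package and a hypothetical endpoint URL; a real game client would hand received frames to audio playback instead of discarding them.

import asyncio
import websockets

LOBBY_WS_URL = "wss://example.com/voice"   # hypothetical endpoint

async def relay_audio(ws):
    async for message in ws:               # audio chunks arrive as frames
        pass                               # hand off to playback in a real client

async def lobby_session(duration_seconds):
    # The connection exists only while voice chat is an option; the handshake
    # cost is paid once per lobby visit, not per message.
    async with websockets.connect(LOBBY_WS_URL) as ws:
        await ws.send("join-voice")
        try:
            await asyncio.wait_for(relay_audio(ws), timeout=duration_seconds)
        except asyncio.TimeoutError:
            pass                           # lobby over; the context manager closes the socket

asyncio.run(lobby_session(60.0))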
Q: How to specify local shell for Fabric2/Paramiko/Invoke? When trying to create a fabric2.Connection, Paramiko tries to invoke a local /bin/bash command: $ fab2 db-shell Traceback (most recent call last): File "/nix/store/m2iyj18cifr4a1rvpfgphg7kfgsf2pj2-python3.9-fabric2-2.7.1/bin/.fab2-wrapped", line 9, in <module> sys.exit(program.run()) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/program.py", line 384, in run self.execute() File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/program.py", line 566, in execute executor.execute(*self.tasks) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/executor.py", line 129, in execute result = call.task(*args, **call.kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/tasks.py", line 127, in __call__ result = self.body(*args, **kwargs) File "/home/username/project/fabfile.py", line 645, in db_shell bastion_connection().run( File "/home/username/project/fabfile_utils.py", line 141, in bastion_connection conn = Connection( File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/fabric2/connection.py", line 403, in __init__ self.ssh_config = self.config.base_ssh_config.lookup(host) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/paramiko/config.py", line 223, in lookup options = self._lookup(hostname=hostname) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/paramiko/config.py", line 250, in _lookup or self._does_match( File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/paramiko/config.py", line 389, in _does_match passed = invoke.run(exec_cmd, hide="stdout", warn=True).ok File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/__init__.py", line 48, in run return Context().run(command, **kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/context.py", line 95, in run return self._run(runner, command, **kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/context.py", line 102, in _run return runner.run(command, **kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/runners.py", line 380, in run return self._run_body(command, **kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/runners.py", line 431, in _run_body self.start(command, self.opts["shell"], self.env) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/runners.py", line 1291, in start self.process = Popen( File "/nix/store/0zzvjh5gnz0ny7ckilzyn9hmg5lypszf-python3-3.9.13/lib/python3.9/subprocess.py", line 951, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/nix/store/0zzvjh5gnz0ny7ckilzyn9hmg5lypszf-python3-3.9.13/lib/python3.9/subprocess.py", line 1821, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: '/bin/bash' I've not been able to find anything in the docs about this yet. 
I'm running fabric2==2.7.1, invoke==1.6.0, and paramiko==2.8.0 on NixOS stable 22.11. A: Invoke currently defaults to /bin/bash, which is a known issue, since operating systems like Alpine and NixOS don't have that path. Fortunately we can specify a command in $PATH rather than manually looking it up, so adding the following to the Connection call does the trick: config=invoke.Config(overrides={"shell": "bash"}).
How to specify local shell for Fabric2/Paramiko/Invoke?
When trying to create a fabric2.Connection, Paramiko tries to invoke a local /bin/bash command: $ fab2 db-shell Traceback (most recent call last): File "/nix/store/m2iyj18cifr4a1rvpfgphg7kfgsf2pj2-python3.9-fabric2-2.7.1/bin/.fab2-wrapped", line 9, in <module> sys.exit(program.run()) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/program.py", line 384, in run self.execute() File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/program.py", line 566, in execute executor.execute(*self.tasks) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/executor.py", line 129, in execute result = call.task(*args, **call.kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/tasks.py", line 127, in __call__ result = self.body(*args, **kwargs) File "/home/username/project/fabfile.py", line 645, in db_shell bastion_connection().run( File "/home/username/project/fabfile_utils.py", line 141, in bastion_connection conn = Connection( File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/fabric2/connection.py", line 403, in __init__ self.ssh_config = self.config.base_ssh_config.lookup(host) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/paramiko/config.py", line 223, in lookup options = self._lookup(hostname=hostname) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/paramiko/config.py", line 250, in _lookup or self._does_match( File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/paramiko/config.py", line 389, in _does_match passed = invoke.run(exec_cmd, hide="stdout", warn=True).ok File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/__init__.py", line 48, in run return Context().run(command, **kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/context.py", line 95, in run return self._run(runner, command, **kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/context.py", line 102, in _run return runner.run(command, **kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/runners.py", line 380, in run return self._run_body(command, **kwargs) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/runners.py", line 431, in _run_body self.start(command, self.opts["shell"], self.env) File "/nix/store/20k85mfdkbrj5w1pq0d7dzagygfip70h-python3-3.9.13-env/lib/python3.9/site-packages/invoke/runners.py", line 1291, in start self.process = Popen( File "/nix/store/0zzvjh5gnz0ny7ckilzyn9hmg5lypszf-python3-3.9.13/lib/python3.9/subprocess.py", line 951, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/nix/store/0zzvjh5gnz0ny7ckilzyn9hmg5lypszf-python3-3.9.13/lib/python3.9/subprocess.py", line 1821, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: '/bin/bash' I've not been able to find anything in the docs about this yet. I'm running fabric2==2.7.1, invoke==1.6.0, and paramiko==2.8.0 on NixOS stable 22.11.
[ "Invoke currently defaults to /bin/bash, which is a known issue, since operating systems like Alpine and NixOS don't have that path. Fortunately we can specify a command in $PATH rather than manually looking it up, so adding the following to the Connection call does the trick: config=invoke.Config(overrides={\"shell\": \"bash\"}).\n" ]
[ 0 ]
[]
[]
[ "paramiko", "pyinvoke", "python_3.x", "python_fabric_2", "shell" ]
stackoverflow_0074681115_paramiko_pyinvoke_python_3.x_python_fabric_2_shell.txt
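A minimal sketch of the fix described in the answer above, with placeholder host and user names; the override makes Invoke resolve bash from $PATH instead of hard-coding /bin/bash.

from fabric2 import Connection
from invoke import Config

conn = Connection(
    host="bastion.example.com",                   # placeholder host
    user="deploy",                                # placeholder user
    config=Config(overrides={"shell": "bash"}),   # look bash up on $PATH
)
result = conn.run("uname -a", hide=True)
print(result.stdout.strip())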
Q: Blank window vscode as root it's been a long time that i run vscode as root via this command (ubuntu 20.04.4 LTS) : > sudo code --user-data-dir="~/.vscode-root" Today, i used the same command to execute vscode as root, but i don't have the same result. in this time, i have a blank window. i tried to remove it + remove /$Home/Code and .vscode folder install it again but i have the same problem, i tried to use the same command with other flags without being able to solve it. Flags tried : --disable-gpu /// --disable-features=CalculateNativeWinOcclusion is there a way to delete (package + config) and all data of vscode to install it properly again ? screen shot Any idea ? Thank you & have a nice day A: That works for me on VSCodium: sudo su -c "codium --user-data-dir="/root" --no-sandbox && sleep infinity" For some reason, vscode does not start with sudo, but it works calmly with su -c PS: I do not know if it can be done easier. I needed this to work with git in vscode
Blank window vscode as root
For a long time I have run vscode as root via this command (Ubuntu 20.04.4 LTS): > sudo code --user-data-dir="~/.vscode-root" Today I used the same command to run vscode as root, but I no longer get the same result: this time I get a blank window. I tried removing it, deleting the /$Home/Code and .vscode folders, and installing it again, but I have the same problem. I also tried the same command with other flags without being able to solve it. Flags tried: --disable-gpu /// --disable-features=CalculateNativeWinOcclusion Is there a way to delete the package, its config, and all vscode data so I can install it properly again? screen shot Any idea? Thank you & have a nice day
[ "That works for me on VSCodium:\nsudo su -c \"codium --user-data-dir=\"/root\" --no-sandbox && sleep infinity\"\n\nFor some reason, vscode does not start with sudo, but it works calmly with su -c\nPS: I do not know if it can be done easier. I needed this to work with git in vscode\n" ]
[ 0 ]
[]
[]
[ "ubuntu_20.04", "visual_studio_code", "vscode_tasks" ]
stackoverflow_0072015007_ubuntu_20.04_visual_studio_code_vscode_tasks.txt
Q: REST API Design - Async REST Client Vs Async REST API I already have REST API (for System-to-System communication) which takes lot of time to process. I want to have asynchronous processing. I see two options here: To make the API itself as asynchronous, where it returns a LOCATION header giving another URI to fetch result. To make the client asynchronous - using asynchronous HTTP Client or AsyncRestTemplate etc. I was wondering what is better way in such scenarios, as both seems to solve the issue. A: Both of the options you have mentioned can be effective ways to handle asynchronous processing in a REST API. The choice of which one to use will depend on the specific requirements of your API and the preferences of the team implementing it. In general, making the API itself asynchronous by returning a location header with a URI to fetch the result can be a good approach if you want to keep the API interface simple and consistent. This can be especially useful if the API is intended to be used by a variety of different clients, some of which may not support asynchronous requests. On the other hand, if your API is intended to be used by clients that can support asynchronous requests, using an asynchronous HTTP client or AsyncRestTemplate can be a good way to improve the performance of the API by allowing the client to make multiple requests concurrently. This approach can also be useful if you want to provide a more fine-grained control over the asynchronous processing of requests. Ultimately, the best approach will depend on the specific requirements of your API and the capabilities of the clients that will be using it. It may be worth considering implementing both options and comparing their performance to see which one works best for your use case.
REST API Design - Async REST Client Vs Async REST API
I already have a REST API (for system-to-system communication) which takes a lot of time to process. I want to have asynchronous processing. I see two options here: Make the API itself asynchronous, where it returns a LOCATION header giving another URI to fetch the result. Make the client asynchronous - using an asynchronous HTTP client or AsyncRestTemplate etc. I was wondering which is the better way in such scenarios, as both seem to solve the issue.
[ "Both of the options you have mentioned can be effective ways to handle asynchronous processing in a REST API. The choice of which one to use will depend on the specific requirements of your API and the preferences of the team implementing it.\nIn general, making the API itself asynchronous by returning a location header with a URI to fetch the result can be a good approach if you want to keep the API interface simple and consistent. This can be especially useful if the API is intended to be used by a variety of different clients, some of which may not support asynchronous requests.\nOn the other hand, if your API is intended to be used by clients that can support asynchronous requests, using an asynchronous HTTP client or AsyncRestTemplate can be a good way to improve the performance of the API by allowing the client to make multiple requests concurrently. This approach can also be useful if you want to provide a more fine-grained control over the asynchronous processing of requests.\nUltimately, the best approach will depend on the specific requirements of your API and the capabilities of the clients that will be using it. It may be worth considering implementing both options and comparing their performance to see which one works best for your use case.\n" ]
[ 0 ]
[]
[]
[ "asynchronous", "asyncresttemplate", "http", "httpclient", "rest" ]
stackoverflow_0074627880_asynchronous_asyncresttemplate_http_httpclient_rest.txt
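As a concrete illustration of the first option (an asynchronous API that returns a Location header), here is a minimal polling-client sketch in Python using requests; the endpoint paths and the status-code convention (202 while processing, 200 when done) are assumptions, not part of the question.

import time
import requests

def submit_and_wait(base_url, payload, poll_seconds=2.0):
    # Kick off the long-running job; the server answers quickly with 202 Accepted.
    response = requests.post(f"{base_url}/jobs", json=payload)
    response.raise_for_status()
    status_url = response.headers["Location"]     # URI to fetch the result from

    while True:
        status = requests.get(status_url)
        if status.status_code == 200:             # finished; body holds the result
            return status.json()
        time.sleep(poll_seconds)                  # still processing; poll again

result = submit_and_wait("https://api.example.com", {"input": "data"})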
Q: Why can't I deploy my next.js app to port 80? I never deployed any project before and I'm currently running with an issue while deploying a next.js app to godaddy. I uploaded my next.js app to the public_html folder of my cpanel and then i connected through ssh and executed the npm run dev command with server.js pointing to my domain name as hostname and 3000 as port number however, to access it i will have to write in the url www.mydomain.com:3000 . I learned that in order to access it by the following url www.mydomain.com I have to specify port 80 in the server.js file. However, when I do so and run the npm run dev command it says that I do not have the permission to port 0:0:0:0:80 Screenshot here I have a VPS server and a domain name. I am doing something wrong or is there something I missed? Should I not use my domain name as hostname in the server.js file? Should I maybe keep the hostname as my VPS ip and port 3000 then point my domain to read from this address? I am a beginner with no previous exprience in deployement and this is my first next.js app ever. Any help is appreciated and thank you! A: Did you get this sorted? You need to set up .htaccess rules for the port. DirectoryIndex disabled RewriteEngine On RewriteRule ^$ http://127.0.0.1:30000/ [P,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ http://127.0.0.1:30000/$1 [P,L]` This tutorial explains how to deploy node.js via cPanel. https://youtu.be/sIcy3q3Ib_s
Why can't I deploy my next.js app to port 80?
I have never deployed a project before and I'm currently running into an issue while deploying a next.js app to GoDaddy. I uploaded my next.js app to the public_html folder of my cPanel, connected through ssh, and executed the npm run dev command with server.js pointing to my domain name as hostname and 3000 as the port number; however, to access it I have to write the url as www.mydomain.com:3000 . I learned that in order to access it at www.mydomain.com I have to specify port 80 in the server.js file. However, when I do so and run the npm run dev command, it says that I do not have permission to bind to port 0:0:0:0:80 Screenshot here I have a VPS server and a domain name. Am I doing something wrong, or is there something I missed? Should I not use my domain name as the hostname in the server.js file? Should I maybe keep the hostname as my VPS IP and port 3000, then point my domain to read from this address? I am a beginner with no previous experience in deployment and this is my first next.js app ever. Any help is appreciated, thank you!
[ "Did you get this sorted?\nYou need to set up .htaccess rules for the port.\n DirectoryIndex disabled\n RewriteEngine On\n RewriteRule ^$ http://127.0.0.1:30000/ [P,L]\n RewriteCond %{REQUEST_FILENAME} !-f\n RewriteCond %{REQUEST_FILENAME} !-d\n RewriteRule ^(.*)$ http://127.0.0.1:30000/$1 [P,L]`\n\nThis tutorial explains how to deploy node.js via cPanel.\nhttps://youtu.be/sIcy3q3Ib_s\n" ]
[ 0 ]
[]
[]
[ "cpanel", "deployment", "javascript", "next.js", "node.js" ]
stackoverflow_0072222283_cpanel_deployment_javascript_next.js_node.js.txt
Q: How disable and enable usb port via command prompt? How disable and enable usb port via command prompt? or using batch script ? or using vb script in windows 7? A: You can use batch which gives you a couple of options. You can edit the registry key to disable usb devices from being used reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "4" /f To enable change value to 3. Or you can deny access to the files Usbstor.pnf and Usbstor.inf cacls %windir%\Inf\Usbstor.pnf /d user cacls %windir%\Inf\Usbstor.inf /d user Where user is the user account that you want to deny access for. To enable use cacls %windir%\Inf\Usbstor.pnf /p user:R cacls %windir%\Inf\Usbstor.inf /p user:R Both commands will need admin rights. Hope this helps A: You can also have a look at devcon command. Available freely on microsoft site, for win7+ windows. A: I have the same problem and I use a solution that takes the best of the two previous answers: 1º-We disable the functionality that allow us to detect new external storage devices: reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "4" /f 2º-We remove all the drivers of USB devices installed on the PC (This will also eliminate the possibility of using keyboard and mouse, but only momentarily): devcon.exe remove *USB* 3º- We re-scan the connected USB devices, so that Windows will automatically install the drivers of devices different than external storage (eg Mouse, keyboard ...), thus obtaining the desired result: devcon.exe rescan 4º- If we want to re-allow the use of external storage devices in our PC, we must use the command: reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v "Start" /t REG_DWORD /d "3" /f PD: Every command will need admin rights A: I use USBDeview. See documentation and examples to use it. Easy and works fine
How to disable and enable a USB port via the command prompt?
How can I disable and enable a USB port via the command prompt, a batch script, or a VB script in Windows 7?
[ "You can use batch which gives you a couple of options. You can edit the registry key to disable usb devices from being used\nreg add HKLM\\SYSTEM\\CurrentControlSet\\Services\\UsbStor /v \"Start\" /t REG_DWORD /d \"4\" /f\n\nTo enable change value to 3.\nOr you can deny access to the files Usbstor.pnf and Usbstor.inf\ncacls %windir%\\Inf\\Usbstor.pnf /d user\ncacls %windir%\\Inf\\Usbstor.inf /d user\n\nWhere user is the user account that you want to deny access for.\nTo enable use\ncacls %windir%\\Inf\\Usbstor.pnf /p user:R\ncacls %windir%\\Inf\\Usbstor.inf /p user:R\n\nBoth commands will need admin rights.\nHope this helps\n", "You can also have a look at devcon command. Available freely on microsoft site, for win7+ windows.\n", "I have the same problem and I use a solution that takes the best of the two previous answers: \n1º-We disable the functionality that allow us to detect new external storage devices: \nreg add HKLM\\SYSTEM\\CurrentControlSet\\Services\\UsbStor /v \"Start\" /t REG_DWORD /d \"4\" /f\n\n2º-We remove all the drivers of USB devices installed on the PC (This will also eliminate the possibility of using keyboard and mouse, but only momentarily): \ndevcon.exe remove *USB*\n\n3º- We re-scan the connected USB devices, so that Windows will automatically install the drivers of devices different than external storage (eg Mouse, keyboard ...), thus obtaining the desired result: \ndevcon.exe rescan\n\n4º- If we want to re-allow the use of external storage devices in our PC, we must use the command: \nreg add HKLM\\SYSTEM\\CurrentControlSet\\Services\\UsbStor /v \"Start\" /t REG_DWORD /d \"3\" /f\n\nPD: Every command will need admin rights\n", "I use USBDeview. See documentation and examples to use it. Easy and works fine\n" ]
[ 11, 2, 1, 0 ]
[]
[]
[ "batch_file", "vbscript", "windows" ]
stackoverflow_0013267236_batch_file_vbscript_windows.txt
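For anyone scripting the registry approach from Python rather than a .bat file, here is a minimal sketch; it reuses the exact reg add command quoted in the answers (4 disables USB mass-storage detection, 3 re-enables it) and still needs to run elevated.

import subprocess

def set_usbstor_start(value):
    subprocess.run(
        [
            "reg", "add",
            r"HKLM\SYSTEM\CurrentControlSet\Services\UsbStor",
            "/v", "Start", "/t", "REG_DWORD", "/d", str(value), "/f",
        ],
        check=True,
    )

set_usbstor_start(4)    # disable
# set_usbstor_start(3)  # enable again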
Q: csv.writer not writing entire output to CSV file I am attempting to scrape the artists' Spotify streaming rankings from Kworb.net into a CSV file and I've nearly succeeded except I'm running into a weird issue. The code below successfully scrapes all 10,000 of the listed artists into the console: import requests from bs4 import BeautifulSoup import csv URL = "https://kworb.net/spotify/artists.html" result = requests.get(URL) src = result.content soup = BeautifulSoup(src, 'html.parser') table = soup.find('table', id="spotifyartistindex") header_tags = table.find_all('th') headers = [header.text.strip() for header in header_tags] rows = [] data_rows = table.find_all('tr') for row in data_rows: value = row.find_all('td') beautified_value = [dp.text.strip() for dp in value] print(beautified_value) if len(beautified_value) == 0: continue rows.append(beautified_value) The issue arises when I use the following code to save the output to a CSV file: with open('artist_rankings.csv', 'w', newline="") as output: writer = csv.writer(output) writer.writerow(headers) writer.writerows(rows) For whatever reason, only 738 of the artists are saved to the file. Does anyone know what could be causing this? Thanks so much for any help! A: As an alternative approach, you might want to make your life easier next time and use pandas. Here's how: import requests import pandas as pd source = requests.get("https://kworb.net/spotify/artists.html") df = pd.concat(pd.read_html(source.text, flavor="bs4")) df.to_csv("artists.csv", index=False) This outputs a .csv file with 10,000 artists. A: The issue with your code is that you are using the print statement to display the data on the console, but this is not included in the rows list that you are writing to the CSV file. Instead, you need to append the data to the rows list before writing it to the CSV file. Here is how you can modify your code to fix this issue: import requests from bs4 import BeautifulSoup import csv URL = "https://kworb.net/spotify/artists.html" result = requests.get(URL) src = result.content soup = BeautifulSoup(src, 'html.parser') table = soup.find('table', id="spotifyartistindex") header_tags = table.find_all('th') headers = [header.text.strip() for header in header_tags] rows = [] data_rows = table.find_all('tr') for row in data_rows: value = row.find_all('td') beautified_value = [dp.text.strip() for dp in value] # Append the data to the rows list rows.append(beautified_value) Write the data to the CSV file with open('artist_rankings.csv', 'w', newline="") as output: writer = csv.writer(output) writer.writerow(headers) writer.writerows(rows) In this modified code, the data is first appended to the rows list, and then it is written to the CSV file. This will ensure that all of the data is saved to the file, and not just the first 738 rows. Note that you may also want to add some error handling to your code in case the request to the URL fails, or if the HTML of the page is not in the expected format. This will help prevent your code from crashing when it encounters unexpected data. 
You can do this by adding a try-except block to your code, like this: import requests from bs4 import BeautifulSoup import csv URL = "https://kworb.net/spotify/artists.html" try: result = requests.get(URL) src = result.content soup = BeautifulSoup(src, 'html.parser') table = soup.find('table', id="spotifyartistindex") if table is None: raise Exception("Could not find table with id 'spotifyartistindex'") header_tags = table.find_all('th') headers = [header.text.strip() for header in header_tags] rows = [] data_rows = table.find_all('tr') for row in data_rows: value = row.find_all('td') beautified_value = [dp.text.strip() for dp in value] # Append the data to the rows list rows.append(beautified_value) # Write the data to the CSV file with open('artist_rankings.csv', 'w', newline="") as output: writer = csv.writer(output)
csv.writer not writing entire output to CSV file
I am attempting to scrape the artists' Spotify streaming rankings from Kworb.net into a CSV file and I've nearly succeeded except I'm running into a weird issue. The code below successfully scrapes all 10,000 of the listed artists into the console: import requests from bs4 import BeautifulSoup import csv URL = "https://kworb.net/spotify/artists.html" result = requests.get(URL) src = result.content soup = BeautifulSoup(src, 'html.parser') table = soup.find('table', id="spotifyartistindex") header_tags = table.find_all('th') headers = [header.text.strip() for header in header_tags] rows = [] data_rows = table.find_all('tr') for row in data_rows: value = row.find_all('td') beautified_value = [dp.text.strip() for dp in value] print(beautified_value) if len(beautified_value) == 0: continue rows.append(beautified_value) The issue arises when I use the following code to save the output to a CSV file: with open('artist_rankings.csv', 'w', newline="") as output: writer = csv.writer(output) writer.writerow(headers) writer.writerows(rows) For whatever reason, only 738 of the artists are saved to the file. Does anyone know what could be causing this? Thanks so much for any help!
[ "As an alternative approach, you might want to make your life easier next time and use pandas.\nHere's how:\nimport requests\nimport pandas as pd\n\nsource = requests.get(\"https://kworb.net/spotify/artists.html\")\ndf = pd.concat(pd.read_html(source.text, flavor=\"bs4\"))\ndf.to_csv(\"artists.csv\", index=False)\n\nThis outputs a .csv file with 10,000 artists.\n\n", "The issue with your code is that you are using the print statement to display the data on the console, but this is not included in the rows list that you are writing to the CSV file. Instead, you need to append the data to the rows list before writing it to the CSV file.\nHere is how you can modify your code to fix this issue:\nimport requests\nfrom bs4 import BeautifulSoup\nimport csv\n\nURL = \"https://kworb.net/spotify/artists.html\"\nresult = requests.get(URL)\nsrc = result.content\nsoup = BeautifulSoup(src, 'html.parser')\n\ntable = soup.find('table', id=\"spotifyartistindex\")\n\nheader_tags = table.find_all('th')\nheaders = [header.text.strip() for header in header_tags]\n\nrows = []\ndata_rows = table.find_all('tr')\n\nfor row in data_rows:\nvalue = row.find_all('td')\nbeautified_value = [dp.text.strip() for dp in value]\n# Append the data to the rows list\nrows.append(beautified_value)\n\nWrite the data to the CSV file\nwith open('artist_rankings.csv', 'w', newline=\"\") as output:\nwriter = csv.writer(output)\nwriter.writerow(headers)\nwriter.writerows(rows)\n\nIn this modified code, the data is first appended to the rows list, and then it is written to the CSV file. This will ensure that all of the data is saved to the file, and not just the first 738 rows.\nNote that you may also want to add some error handling to your code in case the request to the URL fails, or if the HTML of the page is not in the expected format. This will help prevent your code from crashing when it encounters unexpected data. You can do this by adding a try-except block to your code, like this:\nimport requests\nfrom bs4 import BeautifulSoup\nimport csv\n\nURL = \"https://kworb.net/spotify/artists.html\"\n\ntry:\nresult = requests.get(URL)\nsrc = result.content\nsoup = BeautifulSoup(src, 'html.parser')\n\ntable = soup.find('table', id=\"spotifyartistindex\")\n\nif table is None:\n raise Exception(\"Could not find table with id 'spotifyartistindex'\")\n\nheader_tags = table.find_all('th')\nheaders = [header.text.strip() for header in header_tags]\n\nrows = []\ndata_rows = table.find_all('tr')\n\nfor row in data_rows:\n value = row.find_all('td')\n beautified_value = [dp.text.strip() for dp in value]\n # Append the data to the rows list\n rows.append(beautified_value)\n\n# Write the data to the CSV file\nwith open('artist_rankings.csv', 'w', newline=\"\") as output:\n writer = csv.writer(output)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_3.x", "web_scraping" ]
stackoverflow_0074680982_python_python_3.x_web_scraping.txt
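A small follow-up sketch that makes a silent partial write (like the 738-row symptom above) visible: write the rows, then read the file back and compare counts. It uses only the standard csv module.

import csv

def write_and_verify(path, headers, rows):
    with open(path, "w", newline="") as output:
        writer = csv.writer(output)
        writer.writerow(headers)
        writer.writerows(rows)

    with open(path, newline="") as check:
        written = sum(1 for _ in csv.reader(check)) - 1   # subtract the header row
    if written != len(rows):
        raise RuntimeError(f"expected {len(rows)} rows, file contains {written}")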
Q: how can i add fontawesome to javascript element? Good evening guys. I am preparing a todo app. I added delete button but I want to add icon from fontawesome to delete button. Where exactly do i need to add <i class="fa fa-trash" aria-hidden="true"></i> ? I would be glad if you help me <body> <div class="container"> <div id="newtask"> <input type="text" id="myInput" placeholder="Title..." /> <button onclick="newElement()">Add</button> </div> </div> <!--Script--> <script> var myNodelist = document.getElementsByTagName('LI'); var i; for (i = 0; i < myNodelist.length; i++) { var span = document.createElement('SPAN'); var txt = document.createTextNode('\u00D7'); span.className = 'close'; span.appendChild(txt); myNodelist[i].appendChild(span); } var close = document.getElementsByClassName('close'); var i; for (i = 0; i < close.length; i++) { close[i].onclick = function () { var div = this.parentElement; div.style.display = 'none'; }; } function newElement() { var li = document.createElement('li'); var inputValue = document.getElementById('myInput').value; var t = document.createTextNode(inputValue); li.appendChild(t); if (inputValue === '') { alert('You must write something!'); } else { document.getElementById('newtask').appendChild(li); } document.getElementById('myInput').value = ''; var span = document.createElement('SPAN'); var txt = document.createTextNode('\u00D7'); span.className = 'close'; span.appendChild(txt); li.appendChild(span); for (i = 0; i < close.length; i++) { close[i].onclick = function () { var div = this.parentElement; div.style.display = 'none'; }; } } </script> </body> A: There are two things to highlight here is, First, the code piece to delete a list item is not deleting the actual item, but just hiding it from rendered HTML and that's wrong. You should remove that element completely from the document. The below code is not deleting in actuality. close[i].onclick = function() { var div = this.parentElement; div.style.display = 'none'; }; Secondly, answering your query about adding icons using <i> tag. If you are adding this dynamically it should be done the same way you did with the <span> tag. You have to do createElement and append that element to its parent. You can link the fontawesome in your HTML document's <head> part. Here is the generic CDN link. <link rel="stylesheet" href="path/to/font-awesome/css/font-awesome.min.css"> For an updated version of the above link, you can check their official website.
How can I add a Font Awesome icon to a JavaScript-created element?
Good evening guys. I am preparing a todo app. I added delete button but I want to add icon from fontawesome to delete button. Where exactly do i need to add <i class="fa fa-trash" aria-hidden="true"></i> ? I would be glad if you help me <body> <div class="container"> <div id="newtask"> <input type="text" id="myInput" placeholder="Title..." /> <button onclick="newElement()">Add</button> </div> </div> <!--Script--> <script> var myNodelist = document.getElementsByTagName('LI'); var i; for (i = 0; i < myNodelist.length; i++) { var span = document.createElement('SPAN'); var txt = document.createTextNode('\u00D7'); span.className = 'close'; span.appendChild(txt); myNodelist[i].appendChild(span); } var close = document.getElementsByClassName('close'); var i; for (i = 0; i < close.length; i++) { close[i].onclick = function () { var div = this.parentElement; div.style.display = 'none'; }; } function newElement() { var li = document.createElement('li'); var inputValue = document.getElementById('myInput').value; var t = document.createTextNode(inputValue); li.appendChild(t); if (inputValue === '') { alert('You must write something!'); } else { document.getElementById('newtask').appendChild(li); } document.getElementById('myInput').value = ''; var span = document.createElement('SPAN'); var txt = document.createTextNode('\u00D7'); span.className = 'close'; span.appendChild(txt); li.appendChild(span); for (i = 0; i < close.length; i++) { close[i].onclick = function () { var div = this.parentElement; div.style.display = 'none'; }; } } </script> </body>
[ "There are two things to highlight here is,\nFirst, the code piece to delete a list item is not deleting the actual item, but just hiding it from rendered HTML and that's wrong. You should remove that element completely from the document. The below code is not deleting in actuality.\n\nclose[i].onclick = function() {\n var div = this.parentElement;\n div.style.display = 'none';\n};\n\n\nSecondly, answering your query about adding icons using <i> tag. If you are adding this dynamically it should be done the same way you did with the <span> tag. You have to do createElement and append that element to its parent. You can link the fontawesome in your HTML document's <head> part. Here is the generic CDN link.\n\n<link rel=\"stylesheet\" href=\"path/to/font-awesome/css/font-awesome.min.css\">\n\n\nFor an updated version of the above link, you can check their official website.\n" ]
[ 0 ]
[]
[]
[ "font_awesome", "javascript" ]
stackoverflow_0074680967_font_awesome_javascript.txt
Q: Run multiple Data fusion replication jobs on one dataproc cluster I am currently analyzing GCP data fusion replication features to ingest initial snapshot followed by the CDC. The plan is to create one replication job per table because adding a new table is not supported once the replication job is created. I tried to a table by deleting and creating the replication job with same name. But it results the initial snapshot load for the tables. Having said that, in order to overcome the above 2 scenarios, I am planning to create replication job per table. However, every replication job creates its own dataproc cluster which will incur more costs. Is it possible to run all replication jobs on one dataproc autoscaling cluster? Note: The instance type is Basic.  A: Yes, it is possible to run all replication jobs on one dataproc autoscaling cluster. You can configure the dataproc cluster to autoscale based on the needs of the replication jobs. This way, you can avoid running multiple clusters and minimize costs. Additionally, you can configure the cluster to shut down when the replication jobs are completed, helping to further reduce costs.
Run multiple Data fusion replication jobs on one dataproc cluster
I am currently analyzing GCP Data Fusion replication features to ingest an initial snapshot followed by CDC. The plan is to create one replication job per table because adding a new table is not supported once the replication job is created. I tried to add a table by deleting and recreating the replication job with the same name, but it results in the initial snapshot load for the tables. Having said that, in order to overcome the above two scenarios, I am planning to create a replication job per table. However, every replication job creates its own Dataproc cluster, which will incur more costs. Is it possible to run all replication jobs on one Dataproc autoscaling cluster? Note: The instance type is Basic.
[ "Yes, it is possible to run all replication jobs on one dataproc autoscaling cluster. You can configure the dataproc cluster to autoscale based on the needs of the replication jobs. This way, you can avoid running multiple clusters and minimize costs.\nAdditionally, you can configure the cluster to shut down when the replication jobs are completed, helping to further reduce costs.\n" ]
[ 0 ]
[]
[]
[ "change_data_capture", "google_cloud_data_fusion", "google_cloud_platform" ]
stackoverflow_0074681102_change_data_capture_google_cloud_data_fusion_google_cloud_platform.txt
Q: Understanding snake game extension logic There is a function "extend" which is behaving as expected but I don't understand how. The writer of the code is using -1 as the position of the item in the list "segments". Should this not add an extra element to the already created snake at the position of its last segment? If so, how would that lengthen the snake as the segment created at the end will overlap with the segment that is already there? The complete code of the relevant files is described at the end. def extend(self): self.add_segment(self.segments[-1].position()) The code for main.py is mentioned below: from turtle import Screen from snake import Snake from food import Food from scoreboard import ScoreBoard import time screen = Screen() screen.setup(width=600, height=600) screen.bgcolor("black") screen.title("My Snake Game") screen.tracer() scoreboard = ScoreBoard() snake = Snake() food = Food() screen.listen() screen.onkey(snake.up, "Up") screen.onkey(snake.down, "Down") screen.onkey(snake.left, "Left") screen.onkey(snake.right, "Right") game_is_on = True while game_is_on: screen.update() snake.move() if snake.head.distance(food) < 15: food.refresh() scoreboard.increase_score() snake.extend() #Detect collision with wall if snake.head.xcor() > 280 or snake.head.xcor() < -280 or snake.head.ycor() > 280 or snake.head.ycor() < -280: game_is_on = False scoreboard.game_over() #Detect collision with tail for segment in snake.segments: if segment == snake.head: pass elif snake.head.position() == segment.position(): game_is_on = False scoreboard.game_over() screen.exitonclick() The code for snake.py is mentioned below: from turtle import Turtle STARTING_POSITIONS = [(0, 0), (-20, 0), (-40, 0)] MOVE_DISTANCE = 20 UP = 90 DOWN = 270 LEFT = 180 RIGHT = 0 class Snake: def __init__(self): self.segments = [] self.create_snake() self.head = self.segments[0] def create_snake(self): for position in STARTING_POSITIONS: self.add_segment(position) def add_segment(self, position): new_segment = Turtle("square") new_segment.color("white") new_segment.penup() new_segment.goto(position) self.segments.append(new_segment) ################ def extend(self): self.add_segment(self.segments[-1].position()) ################ def move(self): for seg_num in range(len(self.segments) - 1, 0, -1): new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) self.segments[0].forward(MOVE_DISTANCE) def up(self): if self.head.heading() != DOWN: self.head.setheading(UP) def down(self): if self.head.heading() != UP: self.head.setheading(DOWN) def left(self): if self.head.heading() != RIGHT: self.head.setheading(LEFT) def right(self): if self.head.heading() != LEFT: self.head.setheading(RIGHT) A: Should this not add an extra element to the already created snake at the position of its last segment? If so, how would that lengthen the snake as the segment created at the end will overlap with the segment that is already there? A good question: intuitively, it seems like it should. But examine the movement code: def move(self): for seg_num in range(len(self.segments) - 1, 0, -1): new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) self.segments[0].forward(MOVE_DISTANCE) This iterates the snake segments from tail to head, moving each segment to the position of the segment ahead of it, then finally moving the head forward a step in whatever direction the snake is heading. 
It's clear that this works fine under normal movement. The snake's tail will vacate its previous location and nothing will fill it in, leaving an empty space, while the head will occupy a new, previously empty space. Here's an example of a normal move call, with the snake of length 5 moving one step to the right: 4 3 2 1 H --------> 4 3 2 1 H ^ | empty Now, after a call to extend we get this seemingly invalid situation (imagine the two 4s share the exact same square/position on a 1-d axis, rather than positioned one square vertically above it): 4 4 3 2 1 H --------> But the next move call resolves this scenario just fine. Even though there are two 4s sharing the same position, the snake will move as follows after one tick to the right: 4 4 3 2 1 H ^ | filled in by new segment Although I'm still using 4, it's really a 5th tail segment with its own unique position: 5 4 3 2 1 H ^ | filled in by new segment The snake has moved to the right, but the last element fell into place naturally because it was assigned to the coordinate space occupied by the segment ahead of it, self.segments[seg_num - 1], which would have been left empty under normal movement. This vacant space is exactly the length - 2-th element's previous position, and that's precisely what the new (seemingly) duplicate tail element was set to by extend. The position of the new tail is never "passed back" to any other element, so it doesn't really matter what its initial value is; it will momentarily be assigned to whatever the old tail's space was. To concisely summarize this: Under normal movement, there's no segment that is assigned to the tail's old position, leaving an empty space. After a call to extend when the snake grows, the duplicate tail is assigned to the empty space that the old tail would have vacated, filling it in.
Understanding snake game extension logic
There is a function "extend" which is behaving as expected but I don't understand how. The writer of the code is using -1 as the position of the item in the list "segments". Should this not add an extra element to the already created snake at the position of its last segment? If so, how would that lengthen the snake as the segment created at the end will overlap with the segment that is already there? The complete code of the relevant files is described at the end. def extend(self): self.add_segment(self.segments[-1].position()) The code for main.py is mentioned below: from turtle import Screen from snake import Snake from food import Food from scoreboard import ScoreBoard import time screen = Screen() screen.setup(width=600, height=600) screen.bgcolor("black") screen.title("My Snake Game") screen.tracer() scoreboard = ScoreBoard() snake = Snake() food = Food() screen.listen() screen.onkey(snake.up, "Up") screen.onkey(snake.down, "Down") screen.onkey(snake.left, "Left") screen.onkey(snake.right, "Right") game_is_on = True while game_is_on: screen.update() snake.move() if snake.head.distance(food) < 15: food.refresh() scoreboard.increase_score() snake.extend() #Detect collision with wall if snake.head.xcor() > 280 or snake.head.xcor() < -280 or snake.head.ycor() > 280 or snake.head.ycor() < -280: game_is_on = False scoreboard.game_over() #Detect collision with tail for segment in snake.segments: if segment == snake.head: pass elif snake.head.position() == segment.position(): game_is_on = False scoreboard.game_over() screen.exitonclick() The code for snake.py is mentioned below: from turtle import Turtle STARTING_POSITIONS = [(0, 0), (-20, 0), (-40, 0)] MOVE_DISTANCE = 20 UP = 90 DOWN = 270 LEFT = 180 RIGHT = 0 class Snake: def __init__(self): self.segments = [] self.create_snake() self.head = self.segments[0] def create_snake(self): for position in STARTING_POSITIONS: self.add_segment(position) def add_segment(self, position): new_segment = Turtle("square") new_segment.color("white") new_segment.penup() new_segment.goto(position) self.segments.append(new_segment) ################ def extend(self): self.add_segment(self.segments[-1].position()) ################ def move(self): for seg_num in range(len(self.segments) - 1, 0, -1): new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) self.segments[0].forward(MOVE_DISTANCE) def up(self): if self.head.heading() != DOWN: self.head.setheading(UP) def down(self): if self.head.heading() != UP: self.head.setheading(DOWN) def left(self): if self.head.heading() != RIGHT: self.head.setheading(LEFT) def right(self): if self.head.heading() != LEFT: self.head.setheading(RIGHT)
[ "\nShould this not add an extra element to the already created snake at the position of its last segment? If so, how would that lengthen the snake as the segment created at the end will overlap with the segment that is already there?\n\nA good question: intuitively, it seems like it should. But examine the movement code:\ndef move(self):\n for seg_num in range(len(self.segments) - 1, 0, -1):\n new_x = self.segments[seg_num - 1].xcor()\n new_y = self.segments[seg_num - 1].ycor()\n self.segments[seg_num].goto(new_x, new_y)\n self.segments[0].forward(MOVE_DISTANCE)\n\nThis iterates the snake segments from tail to head, moving each segment to the position of the segment ahead of it, then finally moving the head forward a step in whatever direction the snake is heading.\nIt's clear that this works fine under normal movement. The snake's tail will vacate its previous location and nothing will fill it in, leaving an empty space, while the head will occupy a new, previously empty space. Here's an example of a normal move call, with the snake of length 5 moving one step to the right:\n4 3 2 1 H\n-------->\n\n 4 3 2 1 H\n^\n|\nempty\n\nNow, after a call to extend we get this seemingly invalid situation (imagine the two 4s share the exact same square/position on a 1-d axis, rather than positioned one square vertically above it):\n4\n4 3 2 1 H\n-------->\n\nBut the next move call resolves this scenario just fine. Even though there are two 4s sharing the same position, the snake will move as follows after one tick to the right:\n4 4 3 2 1 H\n^\n|\nfilled in by new segment\n\nAlthough I'm still using 4, it's really a 5th tail segment with its own unique position:\n5 4 3 2 1 H\n^\n|\nfilled in by new segment\n\nThe snake has moved to the right, but the last element fell into place naturally because it was assigned to the coordinate space occupied by the segment ahead of it, self.segments[seg_num - 1], which would have been left empty under normal movement.\nThis vacant space is exactly the length - 2-th element's previous position, and that's precisely what the new (seemingly) duplicate tail element was set to by extend.\nThe position of the new tail is never \"passed back\" to any other element, so it doesn't really matter what its initial value is; it will momentarily be assigned to whatever the old tail's space was.\n\nTo concisely summarize this:\n\nUnder normal movement, there's no segment that is assigned to the tail's old position, leaving an empty space.\nAfter a call to extend when the snake grows, the duplicate tail is assigned to the empty space that the old tail would have vacated, filling it in.\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074677711_python_python_turtle_turtle_graphics.txt
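The answer's point can be reproduced without turtle at all; here is a minimal sketch using plain (x, y) tuples, with the head stored first as in the original snake.py, showing that the duplicated tail is overwritten on the very next move and fills the square the old tail vacates.

segments = [(40, 0), (20, 0), (0, 0)]            # head first, tail last

def extend(segments):
    segments.append(segments[-1])                # duplicate of the current tail

def move(segments, step=20):
    for i in range(len(segments) - 1, 0, -1):    # tail -> head, copy position ahead
        segments[i] = segments[i - 1]
    head_x, head_y = segments[0]
    segments[0] = (head_x + step, head_y)        # head advances to a new square

extend(segments)   # [(40, 0), (20, 0), (0, 0), (0, 0)] - two segments share (0, 0)
move(segments)     # [(60, 0), (40, 0), (20, 0), (0, 0)] - overlap gone, length is 4
print(segments)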
Q: Is there a FIXED function in QuickSight like Tableau? I’m trying to manipulate a Tableau function to QS. My output is not getting the same number, so I am doing something wrong here. Can anyone help with this? Tableau - { FIXED [marketplace_id],[monthending],[page_type],[device_type]:MAX([traffic_o])} QS - maxOver(traffic_o,[{marketplace_id},monthending,{page_type},{device_type}],PRE_AGG) A: Are you applying any visualization filters? In Tableau, the FIXED() function gets evaluated before the visualization filters. In QuickSight, your LOD expressions can be executed before or after the filters. If you want to replicate the FIXED() function in QuickSight, you should use the PRE_FILTER parameter in you LOD Expression. https://help.tableau.com/current/pro/desktop/en-us/calculations_calculatedfields_lod_filters.htm https://docs.aws.amazon.com/quicksight/latest/user/level-aware-calculations.html
Is there a FIXED function in QuickSight like Tableau?
I’m trying to translate a Tableau function to QuickSight. My output is not giving the same number, so I am doing something wrong here. Can anyone help with this? Tableau - { FIXED [marketplace_id],[monthending],[page_type],[device_type]:MAX([traffic_o])} QS - maxOver(traffic_o,[{marketplace_id},monthending,{page_type},{device_type}],PRE_AGG)
[ "Are you applying any visualization filters?\nIn Tableau, the FIXED() function gets evaluated before the visualization filters. In QuickSight, your LOD expressions can be executed before or after the filters. If you want to replicate the FIXED() function in QuickSight, you should use the PRE_FILTER parameter in you LOD Expression.\nhttps://help.tableau.com/current/pro/desktop/en-us/calculations_calculatedfields_lod_filters.htm\nhttps://docs.aws.amazon.com/quicksight/latest/user/level-aware-calculations.html\n" ]
[ 0 ]
[]
[]
[ "amazon_quicksight", "tableau_api" ]
stackoverflow_0072522601_amazon_quicksight_tableau_api.txt
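Concretely, if the answer's suggestion applies here, the question's expression would keep the same partition fields and only swap the level-aware calculation parameter, i.e. maxOver(traffic_o, [{marketplace_id}, monthending, {page_type}, {device_type}], PRE_FILTER), so that the maximum is computed before the visualization filters, as Tableau's FIXED does.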
Q: Azure Node Function w/ Eventhub output binding to dynamically routed ADX table Having difficulties outputting from my function to an eventhub and finally into ADX when I want to target a table to route to. I have had no issues hitting target tables with the Node SDK via EventHubProducerClient in which case, you simply specify those properties next to the body of the event your sending: { body: { some:"fieldValue" }, properties: { Table: 'table_name', Format: 'JSON', IngestionMappingReference: 'table_name_mapping' } } But doing this in the same manner when using the output binding documented in Azure Event Hubs output binding for Azure Functions where the messages that I would push would take the form of the above JS object does not work. The SDK documentation is non-helpful. I can confirm that the data is in fact flowing from the Function to the Eventhub and into ADX if and only if I change the adx data connection for said eventhub to target a specific table (the opposite of the behavior I want) which is documented in Ingest data from event hub into Azure Data Explorer. Any help would be greatly appreciated, this seems so silly! Edit: grammar A: The returned object is set as the data payload of the Event Hub Message that the Azure Functions runtime returns. Unfortunately, there is no way to change this from the JS Function itself. In a C# Function you can return an EventData object but this isn't supported in non-C# languages. You're only option, if you need your function to be in JS, is to use the Event Hub SDK directly.
Azure Node Function w/ Eventhub output binding to dynamically routed ADX table
Having difficulties outputting from my function to an eventhub and finally into ADX when I want to target a table to route to. I have had no issues hitting target tables with the Node SDK via EventHubProducerClient in which case, you simply specify those properties next to the body of the event your sending: { body: { some:"fieldValue" }, properties: { Table: 'table_name', Format: 'JSON', IngestionMappingReference: 'table_name_mapping' } } But doing this in the same manner when using the output binding documented in Azure Event Hubs output binding for Azure Functions where the messages that I would push would take the form of the above JS object does not work. The SDK documentation is non-helpful. I can confirm that the data is in fact flowing from the Function to the Eventhub and into ADX if and only if I change the adx data connection for said eventhub to target a specific table (the opposite of the behavior I want) which is documented in Ingest data from event hub into Azure Data Explorer. Any help would be greatly appreciated, this seems so silly! Edit: grammar
[ "The returned object is set as the data payload of the Event Hub Message that the Azure Functions runtime returns. Unfortunately, there is no way to change this from the JS Function itself.\nIn a C# Function you can return an EventData object but this isn't supported in non-C# languages.\nYou're only option, if you need your function to be in JS, is to use the Event Hub SDK directly.\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_data_explorer", "azure_eventhub", "azure_functions" ]
stackoverflow_0074354652_azure_azure_data_explorer_azure_eventhub_azure_functions.txt
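The question and answer above are about the Node SDK and a JS Function, but for reference here is a minimal sketch of the same "use the SDK directly" approach with the Python Event Hubs SDK (azure-eventhub), setting the ADX routing values as per-event application properties; the connection string, hub name, and table names are placeholders.

from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<eventhub-connection-string>",
    eventhub_name="<hub-name>",
)

event = EventData('{"some": "fieldValue"}')
event.properties = {                         # read by the ADX data connection
    "Table": "table_name",
    "Format": "JSON",
    "IngestionMappingReference": "table_name_mapping",
}

with producer:
    batch = producer.create_batch()
    batch.add(event)
    producer.send_batch(batch)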
Q: Solving equation of motion due to (Lorentz acceleration) using Forward Euler and Runge-Kutta 4th order using Python 3 I am tring to solve the equation of motion of charged particle in planetary magnetic field to see the path of the particle using Forward Euler's and RK5 method in python (as an excercise in learning Numerical methods) I encounter two problems: The 'for loop' in the RK4 method does not update the new values. It give the values of the first iteration for all iteration. With the change of the sing of 'β = charge/mass' the path of particle which is expected does not change. It seems the path is unaffected by the nature(sign) of the particle. What does this mean physically or mathematically? The codes are adapted from : python two coupled second order ODEs Runge Kutta 4th order and Applying Forward Euler Method to a Three-Box Model System of ODEs I would be immensely grateful if anyone explain to me what is wrong in the code. thank you. The Code are as under: import numpy as np import matplotlib.pyplot as plt from math import sin, cos from scipy.integrate import odeint scales = np.array([1e7, 0.1, 1, 1e-5, 10, 1e-5]) def LzForce(t,p): # assigning each ODE to a vector element r,x,θ,y,ϕ,z = p*scales # constants R = 60268e3 # metre g_20 = 1583e-9 Ω = 9.74e-3 # degree/second B_θ = (R/r)**4*g_20*cos(θ)*sin(θ) B_r = 2*(R/r)**4*g_20*(0.5*(3*cos(θ)**2-1)) β = +9.36e10 # defining the ODEs drdt = x dxdt = r*(y**2 +(z+Ω)**2*sin(θ)**2-β*z*sin(θ)*B_θ) dθdt = y dydt = (-2*x*y +r*(z+Ω)**2*sin(θ)*cos(θ)+β*r*z*sin(θ)*B_r)/r dϕdt = z dzdt = (-2*x*(z+Ω)*sin(θ)-2*r*y*(z+Ω)*cos(θ)+β*(x*B_θ-r*y*B_r))/(r*sin(θ)) return np.array([drdt,dxdt,dθdt,dydt,dϕdt,dzdt])/scales def ForwardEuler(fun,t0,p0,tf,dt): r0 = 6.6e+07 x0 = 0. θ0 = 88. y0 = 0. ϕ0 = 0. z0 = 22e-3 p0 = np.array([r0,x0,θ0,y0,ϕ0,z0]) t = np.arange(t0,tf+dt,dt) p = np.zeros([len(t), len(p0)]) p[0] = p0 for i in range(len(t)-1): p[i+1,:] = p[i,:] + fun(t[i],p[i,:]) * dt return t, p def rk4(fun,t0,p0,tf,dt): # initial conditions r0 = 6.6e+07 x0 = 0. θ0 = 88. y0 = 0. ϕ0 = 0. z0 = 22e-3 p0 = np.array([r0,x0,θ0,y0,ϕ0,z0]) t = np.arange(t0,tf+dt,dt) p = np.zeros([len(t), len(p0)]) p[0] = p0 for i in range(len(t)-1): k1 = dt * fun(t[i], p[i]) k2 = dt * fun(t[i] + 0.5*dt, p[i] + 0.5 * k1) k3 = dt * fun(t[i] + 0.5*dt, p[i] + 0.5 * k2) k4 = dt * fun(t[i] + dt, p[i] + k3) p[i+1] = p[i] + (k1 + 2*(k2 + k3) + k4)/6 return t,p dt = 0.5 tf = 1000 p0 = [6.6e+07,0.0,88.0,0.0,0.0,22e-3] t0 = 0 #Solution with Forward Euler t,p_Euler = ForwardEuler(LzForce,t0,p0,tf,dt) #Solution with RK4 t ,p_RK4 = rk4(LzForce,t0, p0 ,tf,dt) print(t,p_Euler) print(t,p_RK4) # Plot Solutions r,x,θ,y,ϕ,z = p_Euler.T fig,ax=plt.subplots(2,3,figsize=(8,4)) plt.xlabel('time in sec') plt.ylabel('parameters') for a,s in zip(ax.flatten(),[r,x,θ,y,ϕ,z]): a.plot(t,s); a.grid() plt.title("Forward Euler", loc='left') plt.tight_layout(); plt.show() r,x,θ,y,ϕ,z = p_RK4.T fig,ax=plt.subplots(2,3,figsize=(8,4)) plt.xlabel('time in sec') plt.ylabel('parameters') for a,q in zip(ax.flatten(),[r,x,θ,y,ϕ,z]): a.plot(t,q); a.grid() plt.title("RK4", loc='left') plt.tight_layout(); plt.show() [RK4 solution plot][1] [Euler's solution methods][2] ''''RK4 does not give iterated values. The path is unaffected by the change of sign which is expected as it is under Lorentz force'''' [1]: https://i.stack.imgur.com/bZdIw.png [2]: https://i.stack.imgur.com/tuNDp.png A: You are not iterating more than once inside the for loop in rk4 because it returns after the first iteration. 
for i in range(len(t)-1):
    k1 = dt * fun(t[i], p[i])
    k2 = dt * fun(t[i] + 0.5*dt, p[i] + 0.5 * k1)
    k3 = dt * fun(t[i] + 0.5*dt, p[i] + 0.5 * k2)
    k4 = dt * fun(t[i] + dt, p[i] + k3)
    p[i+1] = p[i] + (k1 + 2*(k2 + k3) + k4)/6
# This is the problem line: the return was indented (tabbed in) to be inside the for block, so the block executed once and returned.
return t,p

For physics questions please try a different forum.
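As a quick check that the fix works, here is a small sketch using the question's own parameters (this assumes the return has been dedented as shown above; the exact values are not asserted, only that the later rows get filled in):

t, p_RK4 = rk4(LzForce, 0, p0, 1000, 0.5)
print(p_RK4.shape)    # (len(t), 6): one row of scaled state values per time step
print(p_RK4[1])       # state after the first step
print(p_RK4[-1])      # state after the last step; all zeros here would mean the loop still exits early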
Solving equation of motion due to (Lorentz acceleration) using Forward Euler and Runge-Kutta 4th order using Python 3
I am tring to solve the equation of motion of charged particle in planetary magnetic field to see the path of the particle using Forward Euler's and RK5 method in python (as an excercise in learning Numerical methods) I encounter two problems: The 'for loop' in the RK4 method does not update the new values. It give the values of the first iteration for all iteration. With the change of the sing of 'β = charge/mass' the path of particle which is expected does not change. It seems the path is unaffected by the nature(sign) of the particle. What does this mean physically or mathematically? The codes are adapted from : python two coupled second order ODEs Runge Kutta 4th order and Applying Forward Euler Method to a Three-Box Model System of ODEs I would be immensely grateful if anyone explain to me what is wrong in the code. thank you. The Code are as under: import numpy as np import matplotlib.pyplot as plt from math import sin, cos from scipy.integrate import odeint scales = np.array([1e7, 0.1, 1, 1e-5, 10, 1e-5]) def LzForce(t,p): # assigning each ODE to a vector element r,x,θ,y,ϕ,z = p*scales # constants R = 60268e3 # metre g_20 = 1583e-9 Ω = 9.74e-3 # degree/second B_θ = (R/r)**4*g_20*cos(θ)*sin(θ) B_r = 2*(R/r)**4*g_20*(0.5*(3*cos(θ)**2-1)) β = +9.36e10 # defining the ODEs drdt = x dxdt = r*(y**2 +(z+Ω)**2*sin(θ)**2-β*z*sin(θ)*B_θ) dθdt = y dydt = (-2*x*y +r*(z+Ω)**2*sin(θ)*cos(θ)+β*r*z*sin(θ)*B_r)/r dϕdt = z dzdt = (-2*x*(z+Ω)*sin(θ)-2*r*y*(z+Ω)*cos(θ)+β*(x*B_θ-r*y*B_r))/(r*sin(θ)) return np.array([drdt,dxdt,dθdt,dydt,dϕdt,dzdt])/scales def ForwardEuler(fun,t0,p0,tf,dt): r0 = 6.6e+07 x0 = 0. θ0 = 88. y0 = 0. ϕ0 = 0. z0 = 22e-3 p0 = np.array([r0,x0,θ0,y0,ϕ0,z0]) t = np.arange(t0,tf+dt,dt) p = np.zeros([len(t), len(p0)]) p[0] = p0 for i in range(len(t)-1): p[i+1,:] = p[i,:] + fun(t[i],p[i,:]) * dt return t, p def rk4(fun,t0,p0,tf,dt): # initial conditions r0 = 6.6e+07 x0 = 0. θ0 = 88. y0 = 0. ϕ0 = 0. z0 = 22e-3 p0 = np.array([r0,x0,θ0,y0,ϕ0,z0]) t = np.arange(t0,tf+dt,dt) p = np.zeros([len(t), len(p0)]) p[0] = p0 for i in range(len(t)-1): k1 = dt * fun(t[i], p[i]) k2 = dt * fun(t[i] + 0.5*dt, p[i] + 0.5 * k1) k3 = dt * fun(t[i] + 0.5*dt, p[i] + 0.5 * k2) k4 = dt * fun(t[i] + dt, p[i] + k3) p[i+1] = p[i] + (k1 + 2*(k2 + k3) + k4)/6 return t,p dt = 0.5 tf = 1000 p0 = [6.6e+07,0.0,88.0,0.0,0.0,22e-3] t0 = 0 #Solution with Forward Euler t,p_Euler = ForwardEuler(LzForce,t0,p0,tf,dt) #Solution with RK4 t ,p_RK4 = rk4(LzForce,t0, p0 ,tf,dt) print(t,p_Euler) print(t,p_RK4) # Plot Solutions r,x,θ,y,ϕ,z = p_Euler.T fig,ax=plt.subplots(2,3,figsize=(8,4)) plt.xlabel('time in sec') plt.ylabel('parameters') for a,s in zip(ax.flatten(),[r,x,θ,y,ϕ,z]): a.plot(t,s); a.grid() plt.title("Forward Euler", loc='left') plt.tight_layout(); plt.show() r,x,θ,y,ϕ,z = p_RK4.T fig,ax=plt.subplots(2,3,figsize=(8,4)) plt.xlabel('time in sec') plt.ylabel('parameters') for a,q in zip(ax.flatten(),[r,x,θ,y,ϕ,z]): a.plot(t,q); a.grid() plt.title("RK4", loc='left') plt.tight_layout(); plt.show() [RK4 solution plot][1] [Euler's solution methods][2] ''''RK4 does not give iterated values. The path is unaffected by the change of sign which is expected as it is under Lorentz force'''' [1]: https://i.stack.imgur.com/bZdIw.png [2]: https://i.stack.imgur.com/tuNDp.png
[ "You are not iterating more than once inside the for loop in rk4 because it returns after the first iteration.\nfor i in range(len(t)-1):\n k1 = dt * fun(t[i], p[i]) \n k2 = dt * fun(t[i] + 0.5*dt, p[i] + 0.5 * k1)\n k3 = dt * fun(t[i] + 0.5*dt, p[i] + 0.5 * k2)\n k4 = dt * fun(t[i] + dt, p[i] + k3)\n p[i+1] = p[i] + (k1 + 2*(k2 + k3) + k4)/6 \n# This is the problem line, the return was tabbed in, to be inside the for block, so the block executed once and returned.\nreturn t,p\n\nFor physics questions please try a different forum.\n" ]
[ 0 ]
[]
[]
[ "nonlinear_equation", "numerical_methods", "python_3.x", "runge_kutta" ]
stackoverflow_0074681075_nonlinear_equation_numerical_methods_python_3.x_runge_kutta.txt
Q: Angular Universal deployment in shared hosting cpanel node.js I am trying to deploy the dist folder in public_html and then start a Node.js application in cPanel with the startup file set to main.js, but I am always getting bad luck. Can anyone please help with the steps to deploy successfully? A: If you are using the "setup node.js app" module, then you need to specify a different folder for the app. Please see this video for steps on how to deploy: https://youtu.be/sIcy3q3Ib_s
Angular Universal deployment in shared hosting cpanel node.js
I am trying to deploy dist folder in public_html and then starting a node.js application in cpanel with startup file as main.js. But always getting bad luck. Can anyone please help with the steps to deploy succesfully??
[ "If you are using the \"setup node.js app\" module, then you need to specify and different folder for the app.\nPlease see this video for steps on how to deploy https://youtu.be/sIcy3q3Ib_s\n" ]
[ 0 ]
[]
[]
[ "angular_universal", "node.js", "server_side_rendering", "shared_hosting", "web_deployment" ]
stackoverflow_0073144896_angular_universal_node.js_server_side_rendering_shared_hosting_web_deployment.txt
Q: How to put this bar in front of elements? I seem to run into this problem that i cant bring this bar upfront, ive used z-index, still nothing. And it wont work on any of my pages. Im adding media part, if theres a problem or something i can add please comment. Thank you!! Can somebody help me out? This is the bar @media (max-width: 700px) { .text-box h1 { font-size: 35px; text-align: center; } .text-box p { font-size: 12px; width: 300px; height: 220px; text-align: left; padding-left: 30px; position: relative; } .nav-links ul li { display: block; z-index: 1; } .nav-links { position: absolute; background: rgb(128, 27, 27); height: 101vh; width: 130px; top: 0; right: -700px; text-align: left; z-index: 0; transition: 1s; } #myimage { height: 300px; width: 300px; } nav .fa { display: flex; color: rgb(255, 255, 255); margin: 30px; font-size: 22px; cursor: pointer; } .nav-links ul { padding: 30px; A: To bring an element in front of other elements on a page, you can use the z-index property in CSS. This property specifies the stack order of an element and determines whether it is in front of or behind other elements. Elements with a higher z-index are positioned in front of elements with a lower z-index. In your case, it looks like you are already using the z-index property in the .nav-links ul li selector, but it is not working as expected. This is likely because the z-index property only works on elements that have a position value other than the default value of static. Therefore, to fix this issue, you need to set a position value (e.g. relative, absolute, or fixed) for the .nav-links element. Here is an example of how you can do this: .nav-links { position: relative; /* Add this line */ background: rgb(128, 27, 27); height: 101vh; width: 130px; top: 0; right: -700px; text-align: left; z-index: 1; /* Set a non-zero z-index value here */ transition: 1s; } With this change, the .nav-links element will be positioned in front of the other elements on the page, and the navigation bar will appear in front of the other elements as well.
How to put this bar in front of elements?
I seem to run into this problem that i cant bring this bar upfront, ive used z-index, still nothing. And it wont work on any of my pages. Im adding media part, if theres a problem or something i can add please comment. Thank you!! Can somebody help me out? This is the bar @media (max-width: 700px) { .text-box h1 { font-size: 35px; text-align: center; } .text-box p { font-size: 12px; width: 300px; height: 220px; text-align: left; padding-left: 30px; position: relative; } .nav-links ul li { display: block; z-index: 1; } .nav-links { position: absolute; background: rgb(128, 27, 27); height: 101vh; width: 130px; top: 0; right: -700px; text-align: left; z-index: 0; transition: 1s; } #myimage { height: 300px; width: 300px; } nav .fa { display: flex; color: rgb(255, 255, 255); margin: 30px; font-size: 22px; cursor: pointer; } .nav-links ul { padding: 30px;
[ "To bring an element in front of other elements on a page, you can use the z-index property in CSS. This property specifies the stack order of an element and determines whether it is in front of or behind other elements. Elements with a higher z-index are positioned in front of elements with a lower z-index.\nIn your case, it looks like you are already using the z-index property in the .nav-links ul li selector, but it is not working as expected. This is likely because the z-index property only works on elements that have a position value other than the default value of static. Therefore, to fix this issue, you need to set a position value (e.g. relative, absolute, or fixed) for the .nav-links element.\nHere is an example of how you can do this:\n.nav-links {\n position: relative; /* Add this line */\n background: rgb(128, 27, 27);\n height: 101vh;\n width: 130px;\n top: 0;\n right: -700px;\n text-align: left;\n z-index: 1; /* Set a non-zero z-index value here */\n transition: 1s;\n}\n\nWith this change, the .nav-links element will be positioned in front of the other elements on the page, and the navigation bar will appear in front of the other elements as well.\n" ]
[ 0 ]
[]
[]
[ "css", "html", "sass" ]
stackoverflow_0074680478_css_html_sass.txt
Q: Print Dictionnary using generator Is it possible to print a dictionnary using a generator using a pattern ? Exemple : giving this dictionnary people = [ { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, ] Is it possible to print them as [name,date_birth,class] using generator ? A: Yes, it is possible to print the elements of a dictionary using a generator and a pattern. Here is an example of how you can do this: people = [ {'name': 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, {'name': 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, {'name': 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, ] # Define a generator function that yields the elements of the dictionary # according to the specified pattern def print_people(people, pattern): for person in people: yield [person[key] for key in pattern] # Use the generator function to print the elements of the dictionary for person in print_people(people, ['name', 'date_birth', 'class']): print(person) In this example, the print_people() function is a generator that yields the elements of the people dictionary according to the specified pattern (a list of keys in the dictionary). The for loop at the end of the code iterates over the generator and prints each element. When you run this code, it will print the elements of the people dictionary as a list of values for the keys specified in the pattern: ['AAA', '12/08/1990', '1st'] ['BB', '12/08/1992', '2nd'] ['CC', '12/08/1988', '3rd'] A: This works too: gen = ([v for v in a.values()] for a in people) for i in gen: print(i) A: ### Define the dictionary people = [ { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, ] ### Define the pattern for the generator pattern = ['name', 'date_birth', 'class'] ### Create the generator person_generator = (person[key] for person in people for key in pattern) ### Print the elements of the dictionary using the generator print(list(person_generator))
Print Dictionnary using generator
Is it possible to print a dictionnary using a generator using a pattern ? Exemple : giving this dictionnary people = [ { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, ] Is it possible to print them as [name,date_birth,class] using generator ?
[ "Yes, it is possible to print the elements of a dictionary using a generator and a pattern. Here is an example of how you can do this:\npeople = [\n {'name': 'AAA', 'date_birth': '12/08/1990', 'class': '1st'},\n {'name': 'BB', 'date_birth': '12/08/1992', 'class': '2nd'},\n {'name': 'CC', 'date_birth': '12/08/1988', 'class': '3rd'},\n]\n\n# Define a generator function that yields the elements of the dictionary\n# according to the specified pattern\ndef print_people(people, pattern):\n for person in people:\n yield [person[key] for key in pattern]\n\n# Use the generator function to print the elements of the dictionary\nfor person in print_people(people, ['name', 'date_birth', 'class']):\n print(person)\n\nIn this example, the print_people() function is a generator that yields the elements of the people dictionary according to the specified pattern (a list of keys in the dictionary). The for loop at the end of the code iterates over the generator and prints each element.\nWhen you run this code, it will print the elements of the people dictionary as a list of values for the keys specified in the pattern:\n['AAA', '12/08/1990', '1st']\n['BB', '12/08/1992', '2nd']\n['CC', '12/08/1988', '3rd']\n\n", "This works too:\ngen = ([v for v in a.values()] for a in people)\n\nfor i in gen:\n print(i)\n\n", "### Define the dictionary\npeople = [\n { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, \n { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, \n { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, \n]\n### Define the pattern for the generator\npattern = ['name', 'date_birth', 'class']\n\n### Create the generator\nperson_generator = (person[key] for person in people for key in pattern)\n\n### Print the elements of the dictionary using the generator\nprint(list(person_generator))\n\n" ]
[ 0, 0, 0 ]
[ "You can use a list generator as follows:\npeople = [(i['name'], i['date_birth'], i['class']) for i in people]\n\n", "Yes, it is possible to use a generator to print the elements of a dictionary using a pattern. You can do this by using a generator expression to iterate over the dictionary items and yield a formatted string for each item. Here's an example of how you might do this:\n# Define the dictionary\npeople = [\n { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, \n { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, \n { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, \n]\n\n# Define the pattern to be used for printing\npattern = '[{name},{date_birth},{class}]'\n\n# Use a generator expression to iterate over the dictionary items and yield a formatted string for each item\nformatted_people = (pattern.format(**person) for person in people)\n\n# Print the formatted strings\nfor person in formatted_people:\n print(person)\n\nThis will print the dictionary items using the specified pattern, with each item being printed on a separate line.\n" ]
[ -1, -1 ]
[ "python" ]
stackoverflow_0074681100_python.txt
Q: foreach() argument must be of type array|object, string given Laravel I'm trying to retrieve data from database and display all the data using foreachloop. Im getting first row data easily without foreach loop but whenever I try using loop the error displays "foreach() argument must be of type array|object, string given" This is my Controller Code class dbcontroller extends Controller { /** * Display a listing of the resource. * * @return \Illuminate\Http\Response */ public function index() { $posts = DB::table('table1')->get(); $d = $posts[0]->Name; $a =$posts[0]->Age; return view('db',compact('d','a')); } } And my Blade.php <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Document</title> </head> <body> @foreach ($d as $user => $data) <p>This name {{ $data->name }}</p> @endforeach {{-- <p>Name is {{$d}} & Age is {{$a}}</p> --}} </body> </html> A: You should bring $posts for your foreach. controller: return view('db', compact('posts')); in blade: @foreach ($posts as $post) <p>{{ $post->name }}</p> @endforeach
foreach() argument must be of type array|object, string given Laravel
I'm trying to retrieve data from database and display all the data using foreachloop. Im getting first row data easily without foreach loop but whenever I try using loop the error displays "foreach() argument must be of type array|object, string given" This is my Controller Code class dbcontroller extends Controller { /** * Display a listing of the resource. * * @return \Illuminate\Http\Response */ public function index() { $posts = DB::table('table1')->get(); $d = $posts[0]->Name; $a =$posts[0]->Age; return view('db',compact('d','a')); } } And my Blade.php <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Document</title> </head> <body> @foreach ($d as $user => $data) <p>This name {{ $data->name }}</p> @endforeach {{-- <p>Name is {{$d}} & Age is {{$a}}</p> --}} </body> </html>
[ "You should bring $posts for your foreach.\ncontroller:\nreturn view('db', compact('posts'));\n\nin blade:\n@foreach ($posts as $post)\n <p>{{ $post->name }}</p>\n@endforeach\n\n" ]
[ 0 ]
[]
[]
[ "database", "laravel" ]
stackoverflow_0074680626_database_laravel.txt
Q: What happens if i send the null pointer to fclose()? For example: FILE* file_name; file_name = fopen("some.txt", "r"); // some.txt isn't exist if (file_name !=NULL) printf("nice"); fclose(file_name); What happens in fclose? A: Passing a NULL pointer to fclose triggers undefined behavior. The fclose function is documented as a library function in section 7.21.5.1 of the C standard, and section 7.1.4p1 states the following regarding library functions: Each of the following statements applies unless explicitly stated otherwise in the detailed descriptions that follow: If an argument to a function has an invalid value (such as a value outside the domain of the function, or a pointer outside the address space of the program, or a null pointer, or a pointer to non-modifiable storage when the corresponding parameter is not const-qualified) or a type (after promotion) not expected by a function with variable number of arguments, the behavior is undefined. Section 7.21.5.1 makes no explicit mention of a NULL pointer being passed to fclose, so the above statement applies. A: The C standard does not define the behavior.1 Some implementations may test the passed pointer and disregard a null pointer and may return success or may return failure. Other implementations may crash. You should not do this without a special purpose such as aiding diagnosis of a bug or investigating how it affects program vulnerabilities. Footnote 1 The behavior is undefined becaue the specification for fclose in C 2018 7.21.5.1 specifies what fclose does when passed a pointer to a stream and does not specify what it does when passed a null pointer, and 7.1.4 1 says “… If an argument to a [standard library] function has an invalid value (such as… a null pointer…)…, the behavior is undefined.”
What happens if i send the null pointer to fclose()?
For example: FILE* file_name; file_name = fopen("some.txt", "r"); // some.txt isn't exist if (file_name !=NULL) printf("nice"); fclose(file_name); What happens in fclose?
[ "Passing a NULL pointer to fclose triggers undefined behavior.\nThe fclose function is documented as a library function in section 7.21.5.1 of the C standard, and section 7.1.4p1 states the following regarding library functions:\n\nEach of the following statements applies unless explicitly stated\notherwise in the detailed descriptions that follow: If an argument to\na function has an invalid value (such as a value outside the domain of\nthe function, or a pointer outside the address space of the program,\nor a null pointer, or a pointer to non-modifiable storage when the\ncorresponding parameter is not const-qualified) or a type (after\npromotion) not expected by a function with variable number of\narguments, the behavior is undefined.\n\nSection 7.21.5.1 makes no explicit mention of a NULL pointer being passed to fclose, so the above statement applies.\n", "The C standard does not define the behavior.1 Some implementations may test the passed pointer and disregard a null pointer and may return success or may return failure. Other implementations may crash. You should not do this without a special purpose such as aiding diagnosis of a bug or investigating how it affects program vulnerabilities.\nFootnote\n1 The behavior is undefined becaue the specification for fclose in C 2018 7.21.5.1 specifies what fclose does when passed a pointer to a stream and does not specify what it does when passed a null pointer, and 7.1.4 1 says “… If an argument to a [standard library] function has an invalid value (such as… a null pointer…)…, the behavior is undefined.”\n" ]
[ 6, 4 ]
[]
[]
[ "c" ]
stackoverflow_0074681080_c.txt
Q: Non-const member reference is mutable on const object? Given the following:

struct S
{
    int x;
    int& y;
};

int main()
{
    int i = 6;
    const S s{5, i}; // (1)
    // s.x = 10;     // (2)
    s.y = 99;        // (3)
}

Why is (3) allowed when s is const? (2) produces a compiler error, which is expected. I'd expect (3) to result in a compiler error as well.
A: Why is s.y = 99 allowed when s is const?
The type of s.y for const S s is not int const& but int&. It is not a reference to a const int, but a const reference to an int. Of course, all references are constant; you cannot rebind a reference.
What if you wanted a type S' for which a const object cannot be used to change the value y refers to? You cannot do it simply, and must resort to accessors, or any non-const function (e.g. operator=):

class U
{
    int& _y;
public:
    int x;
    void setY(int y) { _y = y; } // cannot be called on const U
};
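A small usage sketch of the accessor idea above; the constructor and the main function are added here only for illustration and are not part of the answer's snippet:

#include <iostream>

class U
{
    int& _y;
public:
    int x;
    U(int x0, int& y) : _y(y), x(x0) {}
    void setY(int y) { _y = y; }   // non-const, so not callable on a const U
};

int main()
{
    int i = 6;
    U u{5, i};
    u.setY(99);          // fine: u is not const
    const U cu{5, i};
    // cu.setY(42);      // would not compile: setY is not const-qualified
    std::cout << i << '\n';   // prints 99
}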
Non-const member reference is mutable on const object?
Given the following: struct S { int x; int& y; }; int main() { int i = 6; const S s{5, i}; // (1) // s.x = 10; // (2) s.y = 99; // (3) } Why is (3) allowed when s is const? (2) produces a compiler error, which is expected. I'd expect (3) to result in a compiler error as well.
[ "\nWhy is s.y = 99 allowed when s is const?\n\nThe type of s.y for const S s is not int const& but int&. It is not a reference to a const int, but a const reference to an int. Of course, all references are constant, you cannot rebind a reference.\nWhat if you wanted a type S' for which const object cannot be used to change the value y refers to? You cannot do it simply, and must resort to accessors, or any non-const function (e.g. operator=):\nclass U\n{\n int& _y;\npublic:\n int x;\n void setY(int y) { _y = y; } // cannot be called on const U\n};\n\n" ]
[ 4 ]
[]
[]
[ "c++", "class", "member", "reference", "struct" ]
stackoverflow_0074681127_c++_class_member_reference_struct.txt