| row_id (int64, 0-48.4k) | init_message (string, lengths 1-342k) | conversation_hash (string, length 32) | scores (dict) |
|---|---|---|---|
47,490
|
Take a look at this:
""C:\Users\bower\.ollama\models\bin>hfdownloader_windows_amd64_1.3.4 -m vicgalle/Unsafe-Llama-3-8B -s "C:\Users\bower.ollama\models\bin
"
Model: vicgalle/Unsafe-Llama-3-8B
Branch: main
Storage: C:\Users\bower.ollama\models\bin
NumberOfConcurrentConnections: 5
Append Filter Names to Folder: false
Skip SHA256 Check: false
Token:
Getting File Download Files List Tree from: https://huggingface.co/api/models/vicgalle/Unsafe-Llama-3-8B/tree/main/
Checking file size matching: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/.gitattributes
Checking file size matching: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/README.md
Checking file size matching: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/config.json
Checking file size matching: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/generation_config.json
Downloading C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00001-of-00004.safetensors Speed: 5.32 MB/sec, 100.00%
Merging C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00001-of-00004.safetensors Chunks
Checking SHA256 Hash for LFS file: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00001-of-00004.safetensors
Hash Matched for LFS file: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00001-of-00004.safetensors
Downloading C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00002-of-00004.safetensors Speed: 4.81 MB/sec, 100.00%
Merging C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00002-of-00004.safetensors Chunks
Checking SHA256 Hash for LFS file: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00002-of-00004.safetensors
Hash Matched for LFS file: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00002-of-00004.safetensors
Downloading C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00003-of-00004.safetensors Speed: 5.70 MB/sec, 100.00%
Merging C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00003-of-00004.safetensors Chunks
Checking SHA256 Hash for LFS file: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00003-of-00004.safetensors
Hash Matched for LFS file: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00003-of-00004.safetensors
Downloading C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00004-of-00004.safetensors Speed: 8.97 MB/sec, 100.00%
Merging C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00004-of-00004.safetensors Chunks
Checking SHA256 Hash for LFS file: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00004-of-00004.safetensors
Hash Matched for LFS file: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model-00004-of-00004.safetensors
Checking file size matching: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/model.safetensors.index.json
Checking file size matching: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/special_tokens_map.json
Checking file size matching: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/tokenizer.json
Checking file size matching: C:\Users\bower.ollama\models\bin/vicgalle_Unsafe-Llama-3-8B/tokenizer_config.json
Download of vicgalle/Unsafe-Llama-3-8B completed successfully
""
But when I checked the folder, nothing is there. Where are the files?
|
f8d72175b9b3678379db35a5aadd0a91
|
{
"intermediate": 0.3517095744609833,
"beginner": 0.4232886731624603,
"expert": 0.22500182688236237
}
|
47,491
|
I am using docker-compose and pull the image from the hub, but I need to modify it with my own Dockerfile (which only contains a COPY command for a config file). My docker-compose.yml looks like the one below; can you help me fix it?
version: '3'
services:
nginx:
build:
context: .
args:
- NODE_ENV=local
dockerfile: Dockerfile
image: docker.io/nginx:latest
ports:
- "8080:80"
- "8443:443"
volumes:
- /var/www/html:/usr/share/nginx/html:ro
- /var/www/ssl:/etc/nginx/certs:ro
restart: always
|
90811e74b06badd1f598806d0dbb6f50
|
{
"intermediate": 0.41142964363098145,
"beginner": 0.3097233772277832,
"expert": 0.27884697914123535
}
|
47,492
|
You are an excellent expert in the crewai framework. Your goal is to answer any question the user asks you about the crewai framework. Here is the detailed crewai framework source:
"""{
"agent.py": "import os\nimport uuid\nfrom typing import Any, Dict, List, Optional, Tuple\n\nfrom langchain.agents.agent import RunnableAgent\nfrom langchain.agents.tools import tool as LangChainTool\nfrom langchain.memory import ConversationSummaryMemory\nfrom langchain.tools.render import render_text_description\nfrom langchain_core.agents import AgentAction\nfrom langchain_core.callbacks import BaseCallbackHandler\nfrom langchain_openai import ChatOpenAI\nfrom pydantic import (\n UUID4,\n BaseModel,\n ConfigDict,\n Field,\n InstanceOf,\n PrivateAttr,\n field_validator,\n model_validator,\n)\nfrom pydantic_core import PydanticCustomError\n\nfrom crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser, ToolsHandler\nfrom crewai.utilities import I18N, Logger, Prompts, RPMController\nfrom crewai.utilities.token_counter_callback import TokenCalcHandler, TokenProcess\n\n\nclass Agent(BaseModel):\n \"\"\"Represents an agent in a system.\n\n Each agent has a role, a goal, a backstory, and an optional language model (llm).\n The agent can also have memory, can operate in verbose mode, and can delegate tasks to other agents.\n\n Attributes:\n agent_executor: An instance of the CrewAgentExecutor class.\n role: The role of the agent.\n goal: The objective of the agent.\n backstory: The backstory of the agent.\n config: Dict representation of agent configuration.\n llm: The language model that will run the agent.\n function_calling_llm: The language model that will the tool calling for this agent, it overrides the crew function_calling_llm.\n max_iter: Maximum number of iterations for an agent to execute a task.\n memory: Whether the agent should have memory or not.\n max_rpm: Maximum number of requests per minute for the agent execution to be respected.\n verbose: Whether the agent execution should be in verbose mode.\n allow_delegation: Whether the agent is allowed to delegate tasks to other agents.\n tools: Tools at agents disposal\n step_callback: Callback to be executed after each step of the agent execution.\n callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process\n \"\"\"\n\n __hash__ = object.__hash__ # type: ignore\n _logger: Logger = PrivateAttr()\n _rpm_controller: RPMController = PrivateAttr(default=None)\n _request_within_rpm_limit: Any = PrivateAttr(default=None)\n _token_process: TokenProcess = TokenProcess()\n\n formatting_errors: int = 0\n model_config = ConfigDict(arbitrary_types_allowed=True)\n id: UUID4 = Field(\n default_factory=uuid.uuid4,\n frozen=True,\n description=\"Unique identifier for the object, not set by user.\",\n )\n role: str = Field(description=\"Role of the agent\")\n goal: str = Field(description=\"Objective of the agent\")\n backstory: str = Field(description=\"Backstory of the agent\")\n config: Optional[Dict[str, Any]] = Field(\n description=\"Configuration for the agent\",\n default=None,\n )\n max_rpm: Optional[int] = Field(\n default=None,\n description=\"Maximum number of requests per minute for the agent execution to be respected.\",\n )\n memory: bool = Field(\n default=False, description=\"Whether the agent should have memory or not\"\n )\n verbose: bool = Field(\n default=False, description=\"Verbose mode for the Agent Execution\"\n )\n allow_delegation: bool = Field(\n default=True, description=\"Allow delegation of tasks to agents\"\n )\n tools: Optional[List[Any]] = Field(\n default_factory=list, description=\"Tools at agents disposal\"\n )\n max_iter: Optional[int] = 
Field(\n default=15, description=\"Maximum iterations for an agent to execute a task\"\n )\n agent_executor: InstanceOf[CrewAgentExecutor] = Field(\n default=None, description=\"An instance of the CrewAgentExecutor class.\"\n )\n tools_handler: InstanceOf[ToolsHandler] = Field(\n default=None, description=\"An instance of the ToolsHandler class.\"\n )\n cache_handler: InstanceOf[CacheHandler] = Field(\n default=CacheHandler(), description=\"An instance of the CacheHandler class.\"\n )\n step_callback: Optional[Any] = Field(\n default=None,\n description=\"Callback to be executed after each step of the agent execution.\",\n )\n i18n: I18N = Field(default=I18N(), description=\"Internationalization settings.\")\n llm: Any = Field(\n default_factory=lambda: ChatOpenAI(\n model=os.environ.get(\"OPENAI_MODEL_NAME\", \"gpt-4\")\n ),\n description=\"Language model that will run the agent.\",\n )\n function_calling_llm: Optional[Any] = Field(\n description=\"Language model that will run the agent.\", default=None\n )\n callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(\n default=None, description=\"Callback to be executed\"\n )\n\n def __init__(__pydantic_self__, **data):\n config = data.pop(\"config\", {})\n super().__init__(**config, **data)\n\n @field_validator(\"id\", mode=\"before\")\n @classmethod\n def _deny_user_set_id(cls, v: Optional[UUID4]) -> None:\n if v:\n raise PydanticCustomError(\n \"may_not_set_field\", \"This field is not to be set by the user.\", {}\n )\n\n @model_validator(mode=\"after\")\n def set_attributes_based_on_config(self) -> \"Agent\":\n \"\"\"Set attributes based on the agent configuration.\"\"\"\n if self.config:\n for key, value in self.config.items():\n setattr(self, key, value)\n return self\n\n @model_validator(mode=\"after\")\n def set_private_attrs(self):\n \"\"\"Set private attributes.\"\"\"\n self._logger = Logger(self.verbose)\n if self.max_rpm and not self._rpm_controller:\n self._rpm_controller = RPMController(\n max_rpm=self.max_rpm, logger=self._logger\n )\n return self\n\n @model_validator(mode=\"after\")\n def set_agent_executor(self) -> \"Agent\":\n \"\"\"set agent executor is set.\"\"\"\n if hasattr(self.llm, \"model_name\"):\n self.llm.callbacks = [\n TokenCalcHandler(self.llm.model_name, self._token_process)\n ]\n if not self.agent_executor:\n self.set_cache_handler(self.cache_handler)\n return self\n\n def execute_task(\n self,\n task: Any,\n context: Optional[str] = None,\n tools: Optional[List[Any]] = None,\n ) -> str:\n \"\"\"Execute a task with the agent.\n\n Args:\n task: Task to execute.\n context: Context to execute the task in.\n tools: Tools to use for the task.\n\n Returns:\n Output of the agent\n \"\"\"\n self.tools_handler.last_used_tool = {}\n\n task_prompt = task.prompt()\n\n if context:\n task_prompt = self.i18n.slice(\"task_with_context\").format(\n task=task_prompt, context=context\n )\n\n tools = self._parse_tools(tools or self.tools)\n self.create_agent_executor(tools=tools)\n self.agent_executor.tools = tools\n self.agent_executor.task = task\n\n self.agent_executor.tools_description = render_text_description(tools)\n self.agent_executor.tools_names = self.__tools_names(tools)\n\n result = self.agent_executor.invoke(\n {\n \"input\": task_prompt,\n \"tool_names\": self.agent_executor.tools_names,\n \"tools\": self.agent_executor.tools_description,\n }\n )[\"output\"]\n\n if self.max_rpm:\n self._rpm_controller.stop_rpm_counter()\n\n return result\n\n def set_cache_handler(self, cache_handler: CacheHandler) -> 
None:\n \"\"\"Set the cache handler for the agent.\n\n Args:\n cache_handler: An instance of the CacheHandler class.\n \"\"\"\n self.cache_handler = cache_handler\n self.tools_handler = ToolsHandler(cache=self.cache_handler)\n self.create_agent_executor()\n\n def set_rpm_controller(self, rpm_controller: RPMController) -> None:\n \"\"\"Set the rpm controller for the agent.\n\n Args:\n rpm_controller: An instance of the RPMController class.\n \"\"\"\n if not self._rpm_controller:\n self._rpm_controller = rpm_controller\n self.create_agent_executor()\n\n def create_agent_executor(self, tools=None) -> None:\n \"\"\"Create an agent executor for the agent.\n\n Returns:\n An instance of the CrewAgentExecutor class.\n \"\"\"\n tools = tools or self.tools\n\n agent_args = {\n \"input\": lambda x: x[\"input\"],\n \"tools\": lambda x: x[\"tools\"],\n \"tool_names\": lambda x: x[\"tool_names\"],\n \"agent_scratchpad\": lambda x: self.format_log_to_str(\n x[\"intermediate_steps\"]\n ),\n }\n\n executor_args = {\n \"llm\": self.llm,\n \"i18n\": self.i18n,\n \"tools\": self._parse_tools(tools),\n \"verbose\": self.verbose,\n \"handle_parsing_errors\": True,\n \"max_iterations\": self.max_iter,\n \"step_callback\": self.step_callback,\n \"tools_handler\": self.tools_handler,\n \"function_calling_llm\": self.function_calling_llm,\n \"callbacks\": self.callbacks,\n }\n\n if self._rpm_controller:\n executor_args[\n \"request_within_rpm_limit\"\n ] = self._rpm_controller.check_or_wait\n\n if self.memory:\n summary_memory = ConversationSummaryMemory(\n llm=self.llm, input_key=\"input\", memory_key=\"chat_history\"\n )\n executor_args[\"memory\"] = summary_memory\n agent_args[\"chat_history\"] = lambda x: x[\"chat_history\"]\n prompt = Prompts(i18n=self.i18n, tools=tools).task_execution_with_memory()\n else:\n prompt = Prompts(i18n=self.i18n, tools=tools).task_execution()\n\n execution_prompt = prompt.partial(\n goal=self.goal,\n role=self.role,\n backstory=self.backstory,\n )\n\n bind = self.llm.bind(stop=[self.i18n.slice(\"observation\")])\n inner_agent = agent_args | execution_prompt | bind | CrewAgentParser(agent=self)\n self.agent_executor = CrewAgentExecutor(\n agent=RunnableAgent(runnable=inner_agent), **executor_args\n )\n\n def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:\n \"\"\"Interpolate inputs into the agent description and backstory.\"\"\"\n if inputs:\n self.role = self.role.format(**inputs)\n self.goal = self.goal.format(**inputs)\n self.backstory = self.backstory.format(**inputs)\n\n def increment_formatting_errors(self) -> None:\n \"\"\"Count the formatting errors of the agent.\"\"\"\n self.formatting_errors += 1\n\n def format_log_to_str(\n self,\n intermediate_steps: List[Tuple[AgentAction, str]],\n observation_prefix: str = \"Observation: \",\n llm_prefix: str = \"\",\n ) -> str:\n \"\"\"Construct the scratchpad that lets the agent continue its thought process.\"\"\"\n thoughts = \"\"\n for action, observation in intermediate_steps:\n thoughts += action.log\n thoughts += f\"\\n{observation_prefix}{observation}\\n{llm_prefix}\"\n return thoughts\n\n def _parse_tools(self, tools: List[Any]) -> List[LangChainTool]:\n \"\"\"Parse tools to be used for the task.\"\"\"\n # tentatively try to import from crewai_tools import BaseTool as CrewAITool\n tools_list = []\n try:\n from crewai_tools import BaseTool as CrewAITool\n\n for tool in tools:\n if isinstance(tool, CrewAITool):\n tools_list.append(tool.to_langchain())\n else:\n tools_list.append(tool)\n except ModuleNotFoundError:\n 
for tool in tools:\n tools_list.append(tool)\n return tools_list\n\n @staticmethod\n def __tools_names(tools) -> str:\n return \", \".join([t.name for t in tools])\n\n def __repr__(self):\n return f\"Agent(role={self.role}, goal={self.goal}, backstory={self.backstory})\"\n",
"agents": {
"cache": {
"cache_handler.py": "from typing import Optional\n\n\nclass CacheHandler:\n \"\"\"Callback handler for tool usage.\"\"\"\n\n _cache: dict = {}\n\n def __init__(self):\n self._cache = {}\n\n def add(self, tool, input, output):\n self._cache[f\"{tool}-{input}\"] = output\n\n def read(self, tool, input) -> Optional[str]:\n return self._cache.get(f\"{tool}-{input}\")\n",
"__init__.py": "from .cache_handler import CacheHandler\n",
"__pycache__": {}
},
"executor.py": "import time\nfrom typing import Any, Dict, Iterator, List, Optional, Tuple, Union\n\nfrom langchain.agents import AgentExecutor\nfrom langchain.agents.agent import ExceptionTool\nfrom langchain.callbacks.manager import CallbackManagerForChainRun\nfrom langchain_core.agents import AgentAction, AgentFinish, AgentStep\nfrom langchain_core.exceptions import OutputParserException\nfrom langchain_core.pydantic_v1 import root_validator\nfrom langchain_core.tools import BaseTool\nfrom langchain_core.utils.input import get_color_mapping\nfrom pydantic import InstanceOf\n\nfrom crewai.agents.tools_handler import ToolsHandler\nfrom crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException\nfrom crewai.utilities import I18N\n\n\nclass CrewAgentExecutor(AgentExecutor):\n _i18n: I18N = I18N()\n llm: Any = None\n iterations: int = 0\n task: Any = None\n tools_description: str = \"\"\n tools_names: str = \"\"\n function_calling_llm: Any = None\n request_within_rpm_limit: Any = None\n tools_handler: InstanceOf[ToolsHandler] = None\n max_iterations: Optional[int] = 15\n have_forced_answer: bool = False\n force_answer_max_iterations: Optional[int] = None\n step_callback: Optional[Any] = None\n\n @root_validator()\n def set_force_answer_max_iterations(cls, values: Dict) -> Dict:\n values[\"force_answer_max_iterations\"] = values[\"max_iterations\"] - 2\n return values\n\n def _should_force_answer(self) -> bool:\n return (\n self.iterations == self.force_answer_max_iterations\n ) and not self.have_forced_answer\n\n def _call(\n self,\n inputs: Dict[str, str],\n run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Dict[str, Any]:\n \"\"\"Run text through and get agent response.\"\"\"\n # Construct a mapping of tool name to tool for easy lookup\n name_to_tool_map = {tool.name: tool for tool in self.tools}\n # We construct a mapping from each tool to a color, used for logging.\n color_mapping = get_color_mapping(\n [tool.name for tool in self.tools], excluded_colors=[\"green\", \"red\"]\n )\n intermediate_steps: List[Tuple[AgentAction, str]] = []\n # Let's start tracking the number of iterations and time elapsed\n self.iterations = 0\n time_elapsed = 0.0\n start_time = time.time()\n # We now enter the agent loop (until it returns something).\n while self._should_continue(self.iterations, time_elapsed):\n if not self.request_within_rpm_limit or self.request_within_rpm_limit():\n next_step_output = self._take_next_step(\n name_to_tool_map,\n color_mapping,\n inputs,\n intermediate_steps,\n run_manager=run_manager,\n )\n\n if self.step_callback:\n self.step_callback(next_step_output)\n\n if isinstance(next_step_output, AgentFinish):\n return self._return(\n next_step_output, intermediate_steps, run_manager=run_manager\n )\n\n intermediate_steps.extend(next_step_output)\n if len(next_step_output) == 1:\n next_step_action = next_step_output[0]\n # See if tool should return directly\n tool_return = self._get_tool_return(next_step_action)\n if tool_return is not None:\n return self._return(\n tool_return, intermediate_steps, run_manager=run_manager\n )\n self.iterations += 1\n time_elapsed = time.time() - start_time\n output = self.agent.return_stopped_response(\n self.early_stopping_method, intermediate_steps, **inputs\n )\n return self._return(output, intermediate_steps, run_manager=run_manager)\n\n def _iter_next_step(\n self,\n name_to_tool_map: Dict[str, BaseTool],\n color_mapping: Dict[str, str],\n inputs: Dict[str, str],\n intermediate_steps: List[Tuple[AgentAction, str]],\n 
run_manager: Optional[CallbackManagerForChainRun] = None,\n ) -> Iterator[Union[AgentFinish, AgentAction, AgentStep]]:\n \"\"\"Take a single step in the thought-action-observation loop.\n\n Override this to take control of how the agent makes and acts on choices.\n \"\"\"\n try:\n if self._should_force_answer():\n error = self._i18n.errors(\"force_final_answer\")\n output = AgentAction(\"_Exception\", error, error)\n self.have_forced_answer = True\n yield AgentStep(action=output, observation=error)\n return\n\n intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)\n # Call the LLM to see what to do.\n output = self.agent.plan(\n intermediate_steps,\n callbacks=run_manager.get_child() if run_manager else None,\n **inputs,\n )\n\n except OutputParserException as e:\n if isinstance(self.handle_parsing_errors, bool):\n raise_error = not self.handle_parsing_errors\n else:\n raise_error = False\n if raise_error:\n raise ValueError(\n \"An output parsing error occurred. \"\n \"In order to pass this error back to the agent and have it try \"\n \"again, pass `handle_parsing_errors=True` to the AgentExecutor. \"\n f\"This is the error: {str(e)}\"\n )\n str(e)\n if isinstance(self.handle_parsing_errors, bool):\n if e.send_to_llm:\n observation = f\"\\n{str(e.observation)}\"\n str(e.llm_output)\n else:\n observation = \"\"\n elif isinstance(self.handle_parsing_errors, str):\n observation = f\"\\n{self.handle_parsing_errors}\"\n elif callable(self.handle_parsing_errors):\n observation = f\"\\n{self.handle_parsing_errors(e)}\"\n else:\n raise ValueError(\"Got unexpected type of `handle_parsing_errors`\")\n output = AgentAction(\"_Exception\", observation, \"\")\n if run_manager:\n run_manager.on_agent_action(output, color=\"green\")\n tool_run_kwargs = self.agent.tool_run_logging_kwargs()\n observation = ExceptionTool().run(\n output.tool_input,\n verbose=False,\n color=None,\n callbacks=run_manager.get_child() if run_manager else None,\n **tool_run_kwargs,\n )\n\n if self._should_force_answer():\n error = self._i18n.errors(\"force_final_answer\")\n output = AgentAction(\"_Exception\", error, error)\n yield AgentStep(action=output, observation=error)\n return\n\n yield AgentStep(action=output, observation=observation)\n return\n\n # If the tool chosen is the finishing tool, then we end and return.\n if isinstance(output, AgentFinish):\n yield output\n return\n\n actions: List[AgentAction]\n actions = [output] if isinstance(output, AgentAction) else output\n yield from actions\n for agent_action in actions:\n if run_manager:\n run_manager.on_agent_action(agent_action, color=\"green\")\n # Otherwise we lookup the tool\n tool_usage = ToolUsage(\n tools_handler=self.tools_handler,\n tools=self.tools,\n tools_description=self.tools_description,\n tools_names=self.tools_names,\n function_calling_llm=self.function_calling_llm,\n task=self.task,\n action=agent_action,\n )\n tool_calling = tool_usage.parse(agent_action.log)\n\n if isinstance(tool_calling, ToolUsageErrorException):\n observation = tool_calling.message\n else:\n if tool_calling.tool_name.lower().strip() in [\n name.lower().strip() for name in name_to_tool_map\n ]:\n observation = tool_usage.use(tool_calling, agent_action.log)\n else:\n observation = self._i18n.errors(\"wrong_tool_name\").format(\n tool=tool_calling.tool_name,\n tools=\", \".join([tool.name for tool in self.tools]),\n )\n yield AgentStep(action=agent_action, observation=observation)\n",
"parser.py": "import re\nfrom typing import Any, Union\n\nfrom langchain.agents.output_parsers import ReActSingleInputOutputParser\nfrom langchain_core.agents import AgentAction, AgentFinish\nfrom langchain_core.exceptions import OutputParserException\n\nfrom crewai.utilities import I18N\n\nFINAL_ANSWER_ACTION = \"Final Answer:\"\nMISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE = \"I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.\\n\"\nMISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE = \"I did it wrong. Invalid Format: I missed the 'Action Input:' after 'Action:'. I will do right next, and don't use a tool I have already used.\\n\"\nFINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE = \"I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other\"\n\n\nclass CrewAgentParser(ReActSingleInputOutputParser):\n \"\"\"Parses ReAct-style LLM calls that have a single tool input.\n\n Expects output to be in one of two formats.\n\n If the output signals that an action should be taken,\n should be in the below format. This will result in an AgentAction\n being returned.\n\n Thought: agent thought here\n Action: search\n Action Input: what is the temperature in SF?\n\n If the output signals that a final answer should be given,\n should be in the below format. This will result in an AgentFinish\n being returned.\n\n Thought: agent thought here\n Final Answer: The temperature is 100 degrees\n \"\"\"\n\n _i18n: I18N = I18N()\n agent: Any = None\n\n def parse(self, text: str) -> Union[AgentAction, AgentFinish]:\n includes_answer = FINAL_ANSWER_ACTION in text\n regex = (\n r\"Action\\s*\\d*\\s*:[\\s]*(.*?)[\\s]*Action\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n )\n action_match = re.search(regex, text, re.DOTALL)\n if action_match:\n if includes_answer:\n raise OutputParserException(\n f\"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}\"\n )\n action = action_match.group(1).strip()\n action_input = action_match.group(2)\n tool_input = action_input.strip(\" \")\n tool_input = tool_input.strip('\"')\n\n return AgentAction(action, tool_input, text)\n\n elif includes_answer:\n return AgentFinish(\n {\"output\": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text\n )\n\n if not re.search(r\"Action\\s*\\d*\\s*:[\\s]*(.*?)\", text, re.DOTALL):\n self.agent.increment_formatting_errors()\n raise OutputParserException(\n f\"Could not parse LLM output: `{text}`\",\n observation=f\"{MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE}\\n{self._i18n.slice('final_answer_format')}\",\n llm_output=text,\n send_to_llm=True,\n )\n elif not re.search(\n r\"[\\s]*Action\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\", text, re.DOTALL\n ):\n self.agent.increment_formatting_errors()\n raise OutputParserException(\n f\"Could not parse LLM output: `{text}`\",\n observation=MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,\n llm_output=text,\n send_to_llm=True,\n )\n else:\n format = self._i18n.slice(\"format_without_tools\")\n error = f\"{format}\"\n self.agent.increment_formatting_errors()\n raise OutputParserException(\n error,\n observation=error,\n llm_output=text,\n send_to_llm=True,\n )\n",
"tools_handler.py": "from typing import Any\n\nfrom ..tools.cache_tools import CacheTools\nfrom ..tools.tool_calling import ToolCalling\nfrom .cache.cache_handler import CacheHandler\n\n\nclass ToolsHandler:\n \"\"\"Callback handler for tool usage.\"\"\"\n\n last_used_tool: ToolCalling = {}\n cache: CacheHandler\n\n def __init__(self, cache: CacheHandler):\n \"\"\"Initialize the callback handler.\"\"\"\n self.cache = cache\n self.last_used_tool = {}\n\n def on_tool_use(self, calling: ToolCalling, output: str) -> Any:\n \"\"\"Run when tool ends running.\"\"\"\n self.last_used_tool = calling\n if calling.tool_name != CacheTools().name:\n self.cache.add(\n tool=calling.tool_name,\n input=calling.arguments,\n output=output,\n )\n",
"__init__.py": "from .cache.cache_handler import CacheHandler\nfrom .executor import CrewAgentExecutor\nfrom .parser import CrewAgentParser\nfrom .tools_handler import ToolsHandler\n",
"__pycache__": {}
},
"cli": {
"cli.py": "import click\n\nfrom .create_crew import create_crew\n\n\<PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>()\ndef crewai():\n \"\"\"Top-level command group for crewai.\"\"\"\n\n\n@crewai.command()\n@click.argument(\"project_name\")\ndef create(project_name):\n \"\"\"Create a new crew.\"\"\"\n create_crew(project_name)\n\n\nif __name__ == \"__main__\":\n crewai()\n",
"create_crew.py": "import os\nfrom pathlib import Path\n\nimport click\n\n\ndef create_crew(name):\n \"\"\"Create a new crew.\"\"\"\n folder_name = name.replace(\" \", \"_\").replace(\"-\", \"_\").lower()\n class_name = name.replace(\"_\", \" \").replace(\"-\", \" \").title().replace(\" \", \"\")\n\n click.secho(f\"Creating folder {folder_name}...\", fg=\"green\", bold=True)\n\n if not os.path.exists(folder_name):\n os.mkdir(folder_name)\n os.mkdir(folder_name + \"/tests\")\n os.mkdir(folder_name + \"/src\")\n os.mkdir(folder_name + f\"/src/{folder_name}\")\n os.mkdir(folder_name + f\"/src/{folder_name}/tools\")\n os.mkdir(folder_name + f\"/src/{folder_name}/config\")\n with open(folder_name + \"/.env\", \"w\") as file:\n file.write(\"OPENAI_API_KEY=YOUR_API_KEY\")\n else:\n click.secho(\n f\"\\tFolder {folder_name} already exists. Please choose a different name.\",\n fg=\"red\",\n )\n return\n\n package_dir = Path(__file__).parent\n templates_dir = package_dir / \"templates\"\n\n # List of template files to copy\n root_template_files = [\n \".gitignore\",\n \"pyproject.toml\",\n \"README.md\",\n ]\n tools_template_files = [\"tools/custom_tool.py\", \"tools/__init__.py\"]\n config_template_files = [\"config/agents.yaml\", \"config/tasks.yaml\"]\n src_template_files = [\"__init__.py\", \"main.py\", \"crew.py\"]\n\n for file_name in root_template_files:\n src_file = templates_dir / file_name\n dst_file = Path(folder_name) / file_name\n copy_template(src_file, dst_file, name, class_name, folder_name)\n\n for file_name in src_template_files:\n src_file = templates_dir / file_name\n dst_file = Path(folder_name) / \"src\" / folder_name / file_name\n copy_template(src_file, dst_file, name, class_name, folder_name)\n\n for file_name in tools_template_files:\n src_file = templates_dir / file_name\n dst_file = Path(folder_name) / \"src\" / folder_name / file_name\n copy_template(src_file, dst_file, name, class_name, folder_name)\n\n for file_name in config_template_files:\n src_file = templates_dir / file_name\n dst_file = Path(folder_name) / \"src\" / folder_name / file_name\n copy_template(src_file, dst_file, name, class_name, folder_name)\n\n click.secho(f\"Crew {name} created successfully!\", fg=\"green\", bold=True)\n\n\ndef copy_template(src, dst, name, class_name, folder_name):\n \"\"\"Copy a file from src to dst.\"\"\"\n with open(src, \"r\") as file:\n content = file.read()\n\n # Interpolate the content\n content = content.replace(\"{{name}}\", name)\n content = content.replace(\"{{crew_name}}\", class_name)\n content = content.replace(\"{{folder_name}}\", folder_name)\n\n # Write the interpolated content to the new file\n with open(dst, \"w\") as file:\n file.write(content)\n\n click.secho(f\" - Created {dst}\", fg=\"green\")\n",
"templates": {
"config": {
"agents.yaml": "researcher:\n role: >\n {topic} Senior Data Researcher\n goal: >\n Uncover cutting-edge developments in {topic}\n backstory: >\n You're a seasoned researcher with a knack for uncovering the latest\n developments in {topic}. Known for your ability to find the most relevant\n information and present it in a clear and concise manner.\n\nreporting_analyst:\n role: >\n {topic} Reporting Analyst\n goal: >\n Create detailed reports based on {topic} data analysis and research findings\n backstory: >\n You're a meticulous analyst with a keen eye for detail. You're known for\n your ability to turn complex data into clear and concise reports, making\n it easy for others to understand and act on the information you provide.",
"tasks.yaml": "research_task:\n description: >\n Conduct a thorough research about {topic}\n Make sure you find any interesting and relevant information given\n the current year is 2024.\n expected_output: >\n A list with 10 bullet points of the most relevant information about {topic}\n\nreporting_task:\n description: >\n Review the context you got and expand each topic into a full section for a report.\n Make sure the report is detailed and contains any and all relevant information.\n expected_output: >\n A fully fledge reports with the mains topics, each with a full section of information.\n Formated as markdown with out '
|
ab71dcaa600236c801f5fde370bdac15
|
{
"intermediate": 0.582137942314148,
"beginner": 0.34477242827415466,
"expert": 0.073089599609375
}
|
47,493
|
Traceback (most recent call last):
File "R:\python\stable dif\OnnxDiffusersUI\onnxUI.py", line 14, in <module>
from diffusers import (
ImportError: cannot import name 'OnnxStableDiffusionInpaintPipelineLegacy' from 'diffusers' (R:\python\stable dif\OnnxDiffusersUI\virtualenv\Lib\site-packages\diffusers\__init__.py)
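A minimal sketch of a workaround, assuming the legacy ONNX inpaint pipeline was removed from the installed diffusers release: pin an older diffusers version, or guard the import and fall back to the non-legacy pipeline class.
try:
    from diffusers import OnnxStableDiffusionInpaintPipelineLegacy as InpaintPipeline
except ImportError:
    # Assumes the non-legacy ONNX inpaint pipeline still ships with the installed diffusers version.
    from diffusers import OnnxStableDiffusionInpaintPipeline as InpaintPipeline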
|
f45e6c5067e3e2f591d545485e9b6375
|
{
"intermediate": 0.4990330934524536,
"beginner": 0.2818257212638855,
"expert": 0.21914124488830566
}
|
47,494
|
Does training the model like this:
optimizer = keras.optimizers.Adam(learning_rate=0.0003,clipvalue=0.5)
model.compile(optimizer=optimizer, loss = 'mean_squared_error', metrics=[tf.keras.metrics.MeanAbsoluteError()])
model.fit(x_train, y_train,epochs = 250)
return the final MAE, or should I calculate it myself?
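A minimal sketch, reusing model, x_train and y_train from the snippet above: fit() returns a History object that already records the metric per epoch, so the last entry is the final training MAE.
history = model.fit(x_train, y_train, epochs=250)
# Key name assumes the default name of tf.keras.metrics.MeanAbsoluteError().
final_mae = history.history['mean_absolute_error'][-1]
print("Final training MAE:", final_mae)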
|
56a2f127040e55a9e96af4cc742a2547
|
{
"intermediate": 0.1719370037317276,
"beginner": 0.14882557094097137,
"expert": 0.679237425327301
}
|
47,495
|
How can I reinstall an installed program on Winget without downloading it again?
|
cc5b1657918b7ba2ff9d9d0c0d149261
|
{
"intermediate": 0.4866047501564026,
"beginner": 0.23238512873649597,
"expert": 0.28101015090942383
}
|
47,496
|
In TensorFlow Keras, can I set the input_shape of a model outside the Dense layer?
Like, is there a model.set_input_shape or something like that?
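A minimal sketch, assuming a hypothetical 10-feature input: there is no model.set_input_shape, but the input shape can be declared separately from the Dense layer with a keras Input object, or later with model.build().
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(10,)))   # declares the input shape up front; 10 features is an assumption
model.add(tf.keras.layers.Dense(3))
# Alternative: skip the Input and call model.build(input_shape=(None, 10)) before training.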
|
d7ed963994a525752443b6e95d1353eb
|
{
"intermediate": 0.36457163095474243,
"beginner": 0.07524912804365158,
"expert": 0.5601792335510254
}
|
47,497
|
import matplotlib.pyplot as plt
from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
digits = datasets.load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=6)
X_train, X_test, y_train, y_test = train_test_split(
data, digits.target, test_size=0.2)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
disp = metrics.plot_confusion_matrix(knn, X_test, y_test)
plt.show()
This is going to be on an exam, so please shuffle the order of the code lines for me.
|
1927827dcbecf577b67954c56e9009f6
|
{
"intermediate": 0.4181957542896271,
"beginner": 0.21753054857254028,
"expert": 0.36427372694015503
}
|
47,498
|
Hi there, please be a sapui5 senior developer and answer my question with working code examples.
|
5864ad5eb8cc12f8cdd427e4aacbe199
|
{
"intermediate": 0.4024829566478729,
"beginner": 0.2911725640296936,
"expert": 0.3063444495201111
}
|
47,499
|
This is my code to train a model:
# %%
import pandas as pd
import datetime as dt
from datetime import date
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf
import keras
# %%
df = pd.read_csv(r'C:\Users\arisa\Desktop\ddd\Binance_1INCHBUSD_d.csv')
# %%
include_substrings = ["y_"]
exact_columns_to_keep = ["Open", "High", "Low", "Close","volume_base", "volume_crypto", "tradecount",]
# %%
# %%
filtered_columns = [col for col in df.columns if any(col.startswith(s) for s in include_substrings)]
columns_to_keep = list(set(exact_columns_to_keep + filtered_columns))
# %%
df = df[columns_to_keep]
# %%
# Assuming ‘data’ already contains the selected features and targets
features = df.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
targets = df[['y_High_5d', 'y_Low_5d', 'y_Priority_5d']]
# Scale the features and targets
feature_scaler = MinMaxScaler(feature_range=(0, 1))
target_scaler = MinMaxScaler(feature_range=(0, 1))
scaled_features = feature_scaler.fit_transform(features)
scaled_targets = target_scaler.fit_transform(targets)
# %%
look_back = 60 # Number of previous time steps to consider for each output
x_train = []
y_train = []
for i in range(look_back, len(scaled_features)):
x_train.append(scaled_features[i-look_back:i])
y_train.append(scaled_targets[i]) # Assuming the target is the next time step
x_train, y_train = np.array(x_train), np.array(y_train)
# %%
input_shape = (x_train.shape[1], x_train.shape[2]) # (time steps, number of features)
# %% [markdown]
# # ML Model (LSTM)
#
# ---
#
#
# %%
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization
# %%
model = Sequential()
model.add(LSTM(units = 100, activation = 'tanh', return_sequences=True
,input_shape = input_shape))
model.add(Dropout(0.4))
model.add(LSTM(units = 240, activation = 'tanh'))
model.add(Dropout(0.5))
model.add(Dense(units = 3))
# %%
optimizer = keras.optimizers.Adam(learning_rate=0.0003,clipvalue=0.5)
model.compile(optimizer=optimizer, loss = 'mean_squared_error', metrics=[tf.keras.metrics.MeanAbsoluteError()])
model.fit(x_train, y_train,epochs = 250)
# %%
model.save('keras_model.h5')
# %%
val_loss, val_mae = model.evaluate(x_train, y_train)
print("Validation MAE:", val_mae)
How can I make it run faster?
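Two common levers, sketched below with illustrative values (not tuned), reusing model, x_train and y_train from the script above.
# 1) Train with a larger batch size than the Keras default of 32.
model.fit(x_train, y_train, epochs=250, batch_size=128)
# 2) On a recent GPU, enable mixed precision before the model is constructed (GPU only):
# tf.keras.mixed_precision.set_global_policy('mixed_float16')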
|
971b892919840ceaf85f3e910fe919bd
|
{
"intermediate": 0.3457399904727936,
"beginner": 0.39225322008132935,
"expert": 0.26200684905052185
}
|
47,500
|
Is my code for training and testing the model correct?
# %%
import pandas as pd
import datetime as dt
from datetime import date
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf
import keras
from sklearn.model_selection import train_test_split
# %%
df = pd.read_csv(r'C:\Users\arisa\Desktop\ddd\Binance_1INCHBUSD_d.csv')
# %%
include_substrings = ["y_"]
exact_columns_to_keep = ["Open", "High", "Low", "Close","volume_base", "volume_crypto", "tradecount",]
# %%
# %%
filtered_columns = [col for col in df.columns if any(col.startswith(s) for s in include_substrings)]
columns_to_keep = list(set(exact_columns_to_keep + filtered_columns))
# %%
df = df[columns_to_keep]
# %%
# Assuming ‘data’ already contains the selected features and targets
features = df.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
targets = df[['y_High_5d', 'y_Low_5d', 'y_Priority_5d']]
# Scale the features and targets
feature_scaler = MinMaxScaler(feature_range=(0, 1))
target_scaler = MinMaxScaler(feature_range=(0, 1))
scaled_features = feature_scaler.fit_transform(features)
scaled_targets = target_scaler.fit_transform(targets)
# %%
look_back = 60 # Number of previous time steps to consider for each output
x_train = []
y_train = []
for i in range(look_back, len(scaled_features)):
x_train.append(scaled_features[i-look_back:i])
y_train.append(scaled_targets[i]) # Assuming the target is the next time step
x_train, y_train = np.array(x_train), np.array(y_train)
# %%
x_train_full, x_test, y_train_full, y_test = train_test_split(x_train, y_train, test_size=0.2, random_state=42)
# %%
input_shape = (x_train_full.shape[1], x_train_full.shape[2]) # (time steps, number of features)
# %% [markdown]
# # ML Model (LSTM)
#
# ---
#
#
# %%
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization
# %%
model = Sequential()
model.add(LSTM(units = 100, activation = 'tanh', return_sequences=True
,input_shape = input_shape))
model.add(Dropout(0.4))
model.add(LSTM(units = 240, activation = 'tanh'))
model.add(Dropout(0.5))
model.add(Dense(units = 3))
# %%
optimizer = keras.optimizers.Adam(learning_rate=0.0003,clipvalue=0.5)
model.compile(optimizer=optimizer, loss = 'mean_squared_error', metrics=[tf.keras.metrics.MeanAbsoluteError()])
model.fit(x_train_full, y_train_full,epochs = 1000)
# %%
model.save('keras_model.h5')
# %%
test_loss, test_mae = model.evaluate(x_test, y_test)
print("Test MAE:", test_mae)
# %%
from sklearn.metrics import mean_absolute_error
predictions = model.predict(x_test)
mae_each_target = [mean_absolute_error(y_test[:,i], predictions[:,i]) for i in range(y_test.shape[1])]
# If you know the names of your target variables
target_names = ['y_High_5d', 'y_Low_5d', 'y_Priority_5d']
# Print MAE for each target
for name, mae in zip(target_names, mae_each_target):
print(f"MAE for {name}: {mae}")
|
e5c6c88c6a7c37521f6a7433bbcf01ce
|
{
"intermediate": 0.3536593019962311,
"beginner": 0.3340369760990143,
"expert": 0.3123037815093994
}
|
47,501
|
intervals = [1,2,3,5]
look_back = 60
optimizer = keras.optimizers.Adam(learning_rate=0.0003,clipvalue=0.5)
for csv_file in csv_files:
file_path = os.path.join(csv_directory, csv_file)
unique_part = file_path.split('_')[-2]
df = pd.read_csv(r'C:\Users\arisa\Desktop\ddd\Binance_1INCHBUSD_d.csv')
include_substrings = ["y_"]
exact_columns_to_keep = ["Open", "High", "Low", "Close","volume_base", "volume_crypto", "tradecount",]
filtered_columns = [col for col in df.columns if any(col.startswith(s) for s in include_substrings)]
columns_to_keep = list(set(exact_columns_to_keep + filtered_columns))
df = df[columns_to_keep]
df.head()
features = df.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
# Scale the features and targets
feature_scaler = MinMaxScaler(feature_range=(0, 1))
scaled_features = feature_scaler.fit_transform(features)
joblib.dump(feature_scaler,f'x_scalers/x_scaler_{unique_part}.sav')
for i in intervals:
y_cols = [[f'y_High_{i}d', 'y_Low_{i}d', 'y_Priority_{i}d']]
targets = df[y_cols]
target_scaler = MinMaxScaler(feature_range=(0, 1))
scaled_targets = target_scaler.fit_transform(targets)
joblib.dump(feature_scaler,f'y_scalers/y{i}_scaler_{unique_part}.sav')
x_train = []
y_train = []
for i in range(look_back, len(scaled_features)):
x_train.append(scaled_features[i-look_back:i])
y_train.append(scaled_targets[i]) # Assuming the target is the next time step
x_train, y_train = np.array(x_train), np.array(y_train)
input_shape = (x_train.shape[1], x_train.shape[2])
model = Sequential()
model.add(LSTM(units = 100, activation = 'tanh', return_sequences=True))
model.add(Dropout(0.4))
model.add(LSTM(units = 240, activation = 'tanh'))
model.add(Dropout(0.5))
model.add(Dense(units = 3))
model.compile(optimizer=optimizer, loss = 'mean_squared_error', metrics=[tf.keras.metrics.MeanAbsoluteError()])
model.fit(x_train, y_train,epochs = 1000)
mae = model.evaluate(x_train, y_train)
model.save(f'models/lstm_model_{unique_part}_y{i}_mae_{mae}.h5')
error:
{
"name": "KeyError",
"message": "\"None of [Index([('y_High_1d', 'y_Low_{i}d', 'y_Priority_{i}d')], dtype='object')] are in the [columns]\"",
"stack": "---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[3], line 42
40 for i in intervals:
41 y_cols = [[f'y_High_{i}d', 'y_Low_{i}d', 'y_Priority_{i}d']]
---> 42 targets = df[y_cols]
43 target_scaler = MinMaxScaler(feature_range=(0, 1))
44 scaled_targets = target_scaler.fit_transform(targets)
File c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\pandas\\core\\frame.py:4108, in DataFrame.__getitem__(self, key)
4106 if is_iterator(key):
4107 key = list(key)
-> 4108 indexer = self.columns._get_indexer_strict(key, \"columns\")[1]
4110 # take() does not accept boolean indexers
4111 if getattr(indexer, \"dtype\", None) == bool:
File c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\pandas\\core\\indexes\\base.py:6200, in Index._get_indexer_strict(self, key, axis_name)
6197 else:
6198 keyarr, indexer, new_indexer = self._reindex_non_unique(keyarr)
-> 6200 self._raise_if_missing(keyarr, indexer, axis_name)
6202 keyarr = self.take(indexer)
6203 if isinstance(key, Index):
6204 # GH 42790 - Preserve name from an Index
File c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\pandas\\core\\indexes\\base.py:6249, in Index._raise_if_missing(self, key, indexer, axis_name)
6247 if nmissing:
6248 if nmissing == len(indexer):
-> 6249 raise KeyError(f\"None of [{key}] are in the [{axis_name}]\")
6251 not_found = list(ensure_index(key)[missing_mask.nonzero()[0]].unique())
6252 raise KeyError(f\"{not_found} not in index\")
KeyError: \"None of [Index([('y_High_1d', 'y_Low_{i}d', 'y_Priority_{i}d')], dtype='object')] are in the [columns]\""
}
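The traceback points at the target selection: y_cols is double-bracketed and only the first string has an f-prefix, so pandas receives a single tuple-like column label. Note also that the inner for i in range(look_back, ...) loop reuses the interval variable i. A minimal sketch of the intended selection, with a separate window index:
for i in intervals:
    y_cols = [f'y_High_{i}d', f'y_Low_{i}d', f'y_Priority_{i}d']  # flat list, every entry an f-string
    targets = df[y_cols]
    # ... build windows with a different index name, e.g. for t in range(look_back, len(scaled_features)):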
|
9fe9e69e9d2a4f9eb809b14c614aca65
|
{
"intermediate": 0.3158862292766571,
"beginner": 0.4526892602443695,
"expert": 0.231424480676651
}
|
47,502
|
I am using podman with docker-compose file like below:
|
41147985a761db73bd9fe66c2d4a90ec
|
{
"intermediate": 0.4186263680458069,
"beginner": 0.2279214709997177,
"expert": 0.3534521460533142
}
|
47,503
|
I am upgrading the Maven dependency org.springframework.batch:spring-batch-core from 4.3.4 (spring-batch-core-4.3.4.jar) to 5.0.5 (spring-batch-core-5.0.5.jar). 'org.springframework.batch.core.configuration.annotation.JobBuilderFactory' is deprecated and marked for removal, and I also get "Consider defining a bean of type 'org.springframework.batch.core.configuration.annotation.JobBuilderFactory' in your configuration." 'org.springframework.batch.core.configuration.annotation.StepBuilderFactory' is deprecated and marked for removal as well. Please fix the issue and update this class:
package com.mns.oms.batch.config;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.batch.item.data.MongoItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.TaskExecutor;
import org.springframework.data.domain.Sort.Direction;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import com.mns.oms.batch.domain.CarrierData;
import com.mns.oms.batch.listener.CarrierStepListener;
import com.mns.oms.batch.listener.JobStatusNotificationListener;
import com.mns.oms.batch.model.BeamDataDTO;
import com.mns.oms.batch.processor.BeamDataProcessor;
import com.mns.oms.batch.writer.KafkaBatchWriter;
/**
* @author Mrinmoy Mandal
*
* Module: WISMR
*
*
*/
@Configuration
@EnableScheduling
@ConditionalOnProperty(value = "beam.batchjob.enabled", matchIfMissing = true, havingValue = "true")
public class BeamDataBatchConfiguration {
@Autowired
private JobStatusNotificationListener jobListener;
@Value("${beam.data.write.chunk.size}")
private String chunkSize;
@Autowired
@Qualifier("beamTaskExecutor")
private TaskExecutor beamTaskExecutor;
@Value("${beam.batchjob.step.partitioner.each.range}")
private int range;
@Autowired
private MongoTemplate mongoTemplate;
@Autowired
private JobBuilderFactory beamJobBuilderFactory;
@Autowired
private StepBuilderFactory beamStepBuilderFactory;
@Autowired
private JobLauncher jobLauncher;
@Scheduled(cron = "${beam.spring.batch.job.cron.expression}")
public void ffiSchedule() {
try {
JobParameters jobParameters = new JobParametersBuilder().addDate("launchDate", new Date())
.toJobParameters();
jobLauncher.run(exportDataToBeam(), jobParameters);
} catch (Exception e) {
e.printStackTrace();
}
}
@Bean
@StepScope
public MongoItemReader<CarrierData> mongoItemReader(@Value("#{stepExecutionContext['minValue']}") Long minValue,
@Value("#{stepExecutionContext['maxValue']}") Long maxValue) {
MongoItemReader<CarrierData> reader = new MongoItemReader<>();
reader.setTemplate(mongoTemplate);
Map<String, Direction> sortMap = new HashMap<>();
sortMap.put("_id", Direction.DESC);
reader.setSort(sortMap);
reader.setTargetType(CarrierData.class);
reader.setPageSize(range);
reader.setQuery("{isProcessed: {$eq: false} }");
return reader;
}
@Bean
public BeamDataProcessor beamDataProcessor() {
return new BeamDataProcessor();
}
@Autowired
private KafkaBatchWriter kafkaItemWriter;
@Bean
public Job exportDataToBeam() throws Exception {
return this.beamJobBuilderFactory.get("exportDataToBeam").incrementer(new RunIdIncrementer())
.listener(jobListener).start(beamMasterStep()).build();
}
@Bean
public Step beamMasterStep() throws Exception {
return this.beamStepBuilderFactory.get("beamStep").<CarrierData, BeamDataDTO>chunk(Integer.valueOf(chunkSize))
.reader(mongoItemReader(null, null)).processor(beamDataProcessor()).writer(kafkaItemWriter)
.taskExecutor(beamTaskExecutor).listener(new CarrierStepListener()).build();
}
}
|
bc38bf246012afa7e9a63f73e268780b
|
{
"intermediate": 0.33161014318466187,
"beginner": 0.43318822979927063,
"expert": 0.2352016568183899
}
|
47,504
|
I am using podman-compose with docker-compose.yml like below:
version: '3'
services:
nginx:
build:
context: .
dockerfile: Dockerfile
#image: docker.io/nginx:latest
labels:
app: nginx
io.containers.autoupdate: image
ports:
- "8080:80"
- "8443:443"
volumes:
- /var/www/html:/usr/share/nginx/html:ro
- /var/www/ssl:/etc/nginx/ssl:ro
restart: always
and Dockerfile:
# Use the official Nginx image as the base
FROM docker.io/nginx:latest
# Copy custom nginx configuration file
COPY nginx.conf /etc/nginx/conf.d/nginx.conf
I want it to have autoupdate functionality. Can you help me?
|
e5d61916ce7f6ae3ce4f42702ff7abb8
|
{
"intermediate": 0.4476372003555298,
"beginner": 0.315123051404953,
"expert": 0.23723971843719482
}
|
47,505
|
Are there any applications for Android that can create archives while keeping all file attributes unchanged (such as creation and access times) and can also unarchive all the files with those attributes kept unchanged?
|
e974fa0083e6222b4625c2ff31597f65
|
{
"intermediate": 0.5866076350212097,
"beginner": 0.1531466394662857,
"expert": 0.26024574041366577
}
|
47,506
|
what is ppp.h?
|
598b3737fb8d70a912d7bd4f342a27a7
|
{
"intermediate": 0.2801728844642639,
"beginner": 0.39407193660736084,
"expert": 0.32575514912605286
}
|
47,507
|
Hi
|
563507fc8056912604627a20cb4731a7
|
{
"intermediate": 0.33010533452033997,
"beginner": 0.26984941959381104,
"expert": 0.400045245885849
}
|
47,508
|
How do I run MeetingNotes.docx from the Linux terminal?
|
84cf6f671230fa358c17674a03d59f47
|
{
"intermediate": 0.43657243251800537,
"beginner": 0.23360824584960938,
"expert": 0.32981938123703003
}
|
47,509
|
How do I run programs in the Linux terminal?
|
ad87b5abe13706bef7551e747b151290
|
{
"intermediate": 0.40844669938087463,
"beginner": 0.34092551469802856,
"expert": 0.2506277859210968
}
|
47,510
|
From now on you will act like a Linux terminal. I will give you inputs and you have to give the outputs without any explanations. You are a Debian Linux terminal. Give the output in a code block.
|
51184867e16d8f3d7336342777896ffb
|
{
"intermediate": 0.38654154539108276,
"beginner": 0.3281387984752655,
"expert": 0.28531965613365173
}
|
47,511
|
Make me a way to build a node system using only Scratch programming, and include accessing booleans from other sprites.
|
7e8b5412a823b34ab7f256d495241e52
|
{
"intermediate": 0.4416605830192566,
"beginner": 0.27226322889328003,
"expert": 0.2860761880874634
}
|
47,512
|
Make me a way to code a node system using only Scratch code
|
49192592ca0714865c8161cc8e82a171
|
{
"intermediate": 0.20500504970550537,
"beginner": 0.3746955096721649,
"expert": 0.4202995002269745
}
|
47,513
|
From now on you will act like a Windows 10 cmd. I will give you inputs and you have to give the outputs without any explanations. You are a Windows 10 cmd. Give the output in a code block.
|
c463c46d320c15b72883a5fb3544e50f
|
{
"intermediate": 0.35724565386772156,
"beginner": 0.3286612033843994,
"expert": 0.3140932023525238
}
|
47,514
|
From now on you will act like a Linux terminal. I will give you inputs and you have to give the outputs without any explanations. You are a Debian Linux terminal. Give the output in a code block.
|
c630b52b52bab236bc4bd07e7110ef4c
|
{
"intermediate": 0.38654154539108276,
"beginner": 0.3281387984752655,
"expert": 0.28531965613365173
}
|
47,515
|
How can I make a fuel gauge with a Raspberry Pi?
|
a141bd95b46eb83a22fbd6d4441b9231
|
{
"intermediate": 0.30124080181121826,
"beginner": 0.269703209400177,
"expert": 0.42905592918395996
}
|
47,516
|
Make a node system using clones in one sprite in Scratch code.
|
2fecefde090f2e494773ff256903b308
|
{
"intermediate": 0.5256258845329285,
"beginner": 0.1628757119178772,
"expert": 0.31149840354919434
}
|
47,517
|
Create a type-safe, optimized, debounced custom React hook.
|
5919c547b1c9f14e0817685001074be9
|
{
"intermediate": 0.27249670028686523,
"beginner": 0.10574175417423248,
"expert": 0.6217615604400635
}
|
47,518
|
Make a Scratch extension called Single Inputs that uses reporter blocks with one input, which can be: String, Number, Boolean, Color, or Matrix.
|
e0ed39ec64e0d4085700fab553d7fb9d
|
{
"intermediate": 0.49278193712234497,
"beginner": 0.2719685137271881,
"expert": 0.23524953424930573
}
|
47,519
|
Make a Scratch Advanced Input extension and use this as a reference:
class SelectorEX {
getInfo() {
return {
id: 'selectorExtension',
name: 'Selector',
color1: '#7fde2c',
color2: '#73bf30',
blocks: [
{
opcode: 'selectorOn',
blockType: Scratch.BlockType.BOOLEAN,
text: 'is [SELECTOR] on',
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL,
}
}
},
{
opcode: 'reportSelector',
blockType: Scratch.BlockType.REPORTER,
text: '[SELECTOR]',
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.STRING,
menu: 'SELECTOR'
}
}
},
{
opcode: 'toSelectorS',
blockType: Scratch.BlockType.REPORTER,
text: "convert [BOOL] to selector",
arguments: {
BOOL: {
type: Scratch.ArgumentType.BOOLEAN
}
}
},
{
opcode: 'toBooleanS',
blockType: Scratch.BlockType.BOOLEAN,
text: "convert [SELECTOR] to boolean",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
}
}
},
{
opcode: 'whileOn',
blockType: Scratch.BlockType.LOOP,
text: "while [SELECTOR] is [SELECTORMENU]",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
SELECTORMENU: {
type: Scratch.ArgumentType.STRING,
menu: 'SELECTOR'
}
}
},
{
opcode: 'repeatUntilOn',
blockType: Scratch.BlockType.LOOP,
text: "repeat until [SELECTOR] is [SELECTORMENU]",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
SELECTORMENU: {
type: Scratch.ArgumentType.STRING,
menu: 'SELECTOR'
}
}
},
{
opcode: 'ifSelectorIsOn',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if [SELECTOR] is [SELECTORMENU] then",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
SELECTORMENU: {
type: Scratch.ArgumentType.STRING,
menu: 'SELECTOR'
}
}
},
{
opcode: 'selectorOneAndSelectorTwoAreEqual',
blockType: Scratch.BlockType.BOOLEAN,
text: "[SELECTORONE] and [SELECTORTWO] are equal",
arguments: {
SELECTORONE: {
type: Scratch.ArgumentType.NULL
},
SELECTORTWO: {
type: Scratch.ArgumentType.NULL
}
}
},
{
opcode: 'ifSelectorOneAndSelectorTwoAreEqual',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if [SELECTORONE] and [SELECTORTWO] are equal then",
arguments: {
SELECTORONE: {
type: Scratch.ArgumentType.NULL
},
SELECTORTWO: {
type: Scratch.ArgumentType.NULL
}
}
},
{
opcode: 'convertSelectorToNumber',
blockType: Scratch.BlockType.REPORTER,
text: "convert [SELECTOR] to number",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
}
}
},
{
opcode: 'convertNumberToSelector',
blockType: Scratch.BlockType.REPORTER,
text: "convert [NUMBER] to selector",
arguments: {
NUMBER: {
type: Scratch.ArgumentType.NUMBER
}
}
},
{
opcode: 'selectorIsTheSameAsNumber',
blockType: Scratch.BlockType.BOOLEAN,
text: "[SELECTOR] is the same as [NUMBER]",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
NUMBER: {
type: Scratch.ArgumentType.NUMBER
}
}
},
{
opcode: 'selectorIsTheSameAsBoolean',
blockType: Scratch.BlockType.BOOLEAN,
text: "[SELECTOR] is the same as [BOOL]",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
BOOL: {
type: Scratch.ArgumentType.BOOLEAN
}
}
},
{
opcode: 'ifSelectorIsTheSameAsNumber',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if [SELECTOR] is the same as [NUMBER] then",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
NUMBER: {
type: Scratch.ArgumentType.NUMBER
}
}
},
{
opcode: 'ifSelectorIsTheSameAsBoolean',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if [SELECTOR] is the same as [BOOL] then",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
BOOL: {
type: Scratch.ArgumentType.BOOLEAN
}
}
},
{
opcode: 'textIsSelector',
blockType: Scratch.BlockType.BOOLEAN,
text: "text [TEXT] is selector",
arguments: {
TEXT: {
type: Scratch.ArgumentType.STRING
}
}
},
{
opcode: 'textIsNotSelector',
blockType: Scratch.BlockType.BOOLEAN,
text: "text [TEXT] is not selector",
arguments: {
TEXT: {
type: Scratch.ArgumentType.STRING
}
}
},
{
opcode: 'ifTextIsSelector',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if text [TEXT] is selector then",
arguments: {
TEXT: {
type: Scratch.ArgumentType.STRING
}
}
},
{
opcode: 'ifTextIsNotSelector',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if text [TEXT] is not selector then",
arguments: {
TEXT: {
type: Scratch.ArgumentType.STRING
}
}
}
],
menus: {
SELECTOR: {
acceptReporters: true,
items: ["on", "off"]
}
}
};
}
selectorOn(args) {
return args.SELECTOR === 'on';
}
reportSelector(args) {
return args.SELECTOR;
}
toSelectorS(args) {
if (args.BOOL) {
return 'on';
} else {
return 'off';
}
}
toBooleanS(args) {
if (args.SELECTOR === 'on') {
return true;
} else if (args.SELECTOR === 'off') {
return false;
}
}
whileOn(args, util) {
if (args.SELECTOR === args.SELECTORMENU) {
util.startBranch(1, true);
}
}
repeatUntilOn(args, util) {
if (args.SELECTOR !== args.SELECTORMENU) {
util.startBranch(1, true);
}
}
ifSelectorIsOn(args, util) {
return (args.SELECTOR === args.SELECTORMENU);
}
selectorOneAndSelectorTwoAreEqual(args) {
return (args.SELECTORONE === args.SELECTORTWO);
}
ifSelectorOneAndSelectorTwoAreEqual(args, util) {
return (args.SELECTORONE === args.SELECTORTWO);
}
convertSelectorToNumber(args) {
if (args.SELECTOR === 'on') {
return 1;
} else if (args.SELECTOR === 'off') {
return 0;
}
}
convertNumberToSelector(args) {
if (args.NUMBER === 1) {
return 'on';
} else if (args.NUMBER === 0) {
return 'off';
}
}
selectorIsTheSameAsNumber(args) {
return (args.SELECTOR === args.NUMBER);
}
selectorIsTheSameAsBoolean(args) {
return (args.SELECTOR === args.BOOL);
}
ifSelectorIsTheSameAsNumber(args, util) {
return (args.SELECTOR === args.NUMBER);
}
ifSelectorIsTheSameAsBoolean(args, util) {
return (args.SELECTOR === args.BOOL);
}
textIsSelector(args) {
if (args.TEXT === 'on' || args.TEXT === 'off') {
return true;
} else {
return false;
}
}
textIsNotSelector(args) {
if (args.TEXT !== 'on' && args.TEXT !== 'off') {
return true;
} else {
return false;
}
}
ifTextIsSelector(args, util) {
return (args.TEXT === 'on' || args.TEXT === 'off');
}
ifTextIsNotSelector(args, util) {
return (args.TEXT !== 'on' && args.TEXT !== 'off');
}
}
Scratch.extensions.register(new SelectorEX());
|
f6cd53420041051761445792ff415ed7
|
{
"intermediate": 0.34701964259147644,
"beginner": 0.4679103195667267,
"expert": 0.18507006764411926
}
|
47,520
|
Make a "More Sensing" extension that mostly makes use of util.target. Do it in JS with code blocks, no smart quotes, and using [x] where x is the argument name in the text. Also include hat and conditional blocks. Use this as a reference:
class SelectorEX {
getInfo() {
return {
id: 'selectorExtension',
name: 'Selector',
color1: '#7fde2c',
color2: '#73bf30',
blocks: [
{
opcode: 'selectorOn',
blockType: Scratch.BlockType.BOOLEAN,
text: 'is [SELECTOR] on',
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL,
}
}
},
{
opcode: 'reportSelector',
blockType: Scratch.BlockType.REPORTER,
text: '[SELECTOR]',
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.STRING,
menu: 'SELECTOR'
}
}
},
{
opcode: 'toSelectorS',
blockType: Scratch.BlockType.REPORTER,
text: "convert [BOOL] to selector",
arguments: {
BOOL: {
type: Scratch.ArgumentType.BOOLEAN
}
}
},
{
opcode: 'toBooleanS',
blockType: Scratch.BlockType.BOOLEAN,
text: "convert [SELECTOR] to boolean",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
}
}
},
{
opcode: 'whileOn',
blockType: Scratch.BlockType.LOOP,
text: "while [SELECTOR] is [SELECTORMENU]",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
SELECTORMENU: {
type: Scratch.ArgumentType.STRING,
menu: 'SELECTOR'
}
}
},
{
opcode: 'repeatUntilOn',
blockType: Scratch.BlockType.LOOP,
text: "repeat until [SELECTOR] is [SELECTORMENU]",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
SELECTORMENU: {
type: Scratch.ArgumentType.STRING,
menu: 'SELECTOR'
}
}
},
{
opcode: 'ifSelectorIsOn',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if [SELECTOR] is [SELECTORMENU] then",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
SELECTORMENU: {
type: Scratch.ArgumentType.STRING,
menu: 'SELECTOR'
}
}
},
{
opcode: 'selectorOneAndSelectorTwoAreEqual',
blockType: Scratch.BlockType.BOOLEAN,
text: "[SELECTORONE] and [SELECTORTWO] are equal",
arguments: {
SELECTORONE: {
type: Scratch.ArgumentType.NULL
},
SELECTORTWO: {
type: Scratch.ArgumentType.NULL
}
}
},
{
opcode: 'ifSelectorOneAndSelectorTwoAreEqual',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if [SELECTORONE] and [SELECTORTWO] are equal then",
arguments: {
SELECTORONE: {
type: Scratch.ArgumentType.NULL
},
SELECTORTWO: {
type: Scratch.ArgumentType.NULL
}
}
},
{
opcode: 'convertSelectorToNumber',
blockType: Scratch.BlockType.REPORTER,
text: "convert [SELECTOR] to number",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
}
}
},
{
opcode: 'convertNumberToSelector',
blockType: Scratch.BlockType.REPORTER,
text: "convert [NUMBER] to selector",
arguments: {
NUMBER: {
type: Scratch.ArgumentType.NUMBER
}
}
},
{
opcode: 'selectorIsTheSameAsNumber',
blockType: Scratch.BlockType.BOOLEAN,
text: "[SELECTOR] is the same as [NUMBER]",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
NUMBER: {
type: Scratch.ArgumentType.NUMBER
}
}
},
{
opcode: 'selectorIsTheSameAsBoolean',
blockType: Scratch.BlockType.BOOLEAN,
text: "[SELECTOR] is the same as [BOOL]",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
BOOL: {
type: Scratch.ArgumentType.BOOLEAN
}
}
},
{
opcode: 'ifSelectorIsTheSameAsNumber',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if [SELECTOR] is the same as [NUMBER] then",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
NUMBER: {
type: Scratch.ArgumentType.NUMBER
}
}
},
{
opcode: 'ifSelectorIsTheSameAsBoolean',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if [SELECTOR] is the same as [BOOL] then",
arguments: {
SELECTOR: {
type: Scratch.ArgumentType.NULL
},
BOOL: {
type: Scratch.ArgumentType.BOOLEAN
}
}
},
{
opcode: 'textIsSelector',
blockType: Scratch.BlockType.BOOLEAN,
text: "text [TEXT] is selector",
arguments: {
TEXT: {
type: Scratch.ArgumentType.STRING
}
}
},
{
opcode: 'textIsNotSelector',
blockType: Scratch.BlockType.BOOLEAN,
text: "text [TEXT] is not selector",
arguments: {
TEXT: {
type: Scratch.ArgumentType.STRING
}
}
},
{
opcode: 'ifTextIsSelector',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if text [TEXT] is selector then",
arguments: {
TEXT: {
type: Scratch.ArgumentType.STRING
}
}
},
{
opcode: 'ifTextIsNotSelector',
blockType: Scratch.BlockType.CONDITIONAL,
text: "if text [TEXT] is not selector then",
arguments: {
TEXT: {
type: Scratch.ArgumentType.STRING
}
}
}
],
menus: {
SELECTOR: {
acceptReporters: true,
items: ["on", "off"]
}
}
};
}
selectorOn(args) {
return args.SELECTOR === 'on';
}
reportSelector(args) {
return args.SELECTOR;
}
toSelectorS(args) {
if (args.BOOL) {
return 'on';
} else {
return 'off';
}
}
toBooleanS(args) {
if (args.SELECTOR === 'on') {
return true;
} else if (args.SELECTOR === 'off') {
return false;
}
}
whileOn(args, util) {
if (args.SELECTOR === args.SELECTORMENU) {
util.startBranch(1, true);
}
}
repeatUntilOn(args, util) {
if (args.SELECTOR !== args.SELECTORMENU) {
util.startBranch(1, true);
}
}
ifSelectorIsOn(args, util) {
return (args.SELECTOR === args.SELECTORMENU);
}
selectorOneAndSelectorTwoAreEqual(args) {
return (args.SELECTORONE === args.SELECTORTWO);
}
ifSelectorOneAndSelectorTwoAreEqual(args, util) {
return (args.SELECTORONE === args.SELECTORTWO);
}
convertSelectorToNumber(args) {
if (args.SELECTOR === 'on') {
return 1;
} else if (args.SELECTOR === 'off') {
return 0;
}
}
convertNumberToSelector(args) {
if (args.NUMBER === 1) {
return 'on';
} else if (args.NUMBER === 0) {
return 'off';
}
}
selectorIsTheSameAsNumber(args) {
return (args.SELECTOR === args.NUMBER);
}
selectorIsTheSameAsBoolean(args) {
return (args.SELECTOR === args.BOOL);
}
ifSelectorIsTheSameAsNumber(args, util) {
return (args.SELECTOR === args.NUMBER);
}
ifSelectorIsTheSameAsBoolean(args, util) {
return (args.SELECTOR === args.BOOL);
}
textIsSelector(args) {
if (args.TEXT === 'on' || args.TEXT === 'off') {
return true;
} else {
return false;
}
}
textIsNotSelector(args) {
if (args.TEXT !== 'on' && args.TEXT !== 'off') {
return true;
} else {
return false;
}
}
ifTextIsSelector(args, util) {
return (args.TEXT === 'on' || args.TEXT === 'off');
}
ifTextIsNotSelector(args, util) {
return (args.TEXT !== 'on' && args.TEXT !== 'off');
}
}
Scratch.extensions.register(new SelectorEX());
|
e18d8a78ee3548855a0a25035b6134b0
|
{
"intermediate": 0.3231900930404663,
"beginner": 0.4739345610141754,
"expert": 0.20287536084651947
}
|
47,521
|
Come up with an idea for a random Scratch extension to make in JS that does not involve interacting with other sprites.
|
f8f4099bc1bdbbcb6b16bb0d1d846a3a
|
{
"intermediate": 0.4164249300956726,
"beginner": 0.3275378942489624,
"expert": 0.2560372054576874
}
|
47,522
|
File "R:\python\stable dif\OnnxDiffusersUI\onnxUI.py", line 1288, in <module>
image_t2 = gr.Image(
^^^^^^^^^
File "R:\python\stable dif\OnnxDiffusersUI\virtualenv\Lib\site-packages\gradio\component_meta.py", line 159, in wrapper
return fn(self, **kwargs)
^^^^^^^^^^^^^^^^^^
TypeError: Image.__init__() got an unexpected keyword argument 'source'
I ran into this error when launching Stable Diffusion on my PC.
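The traceback means the installed Gradio is a 4.x release, where gr.Image no longer accepts the source keyword. A minimal sketch of the updated call, assuming the component was meant to be an upload/webcam image input (the label is a placeholder):
import gradio as gr

# Gradio 4.x: the old source="..." keyword became sources=[...]
image_t2 = gr.Image(
    sources=["upload", "webcam"],
    type="pil",              # assumption: the rest of the UI works with PIL images
    label="Input image",     # placeholder label
)
Alternatively, pinning Gradio below 4 (for example gradio==3.50.2) keeps the old keyword working without touching onnxUI.py.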
|
004d14cbd7efccfa6ae77b027793032d
|
{
"intermediate": 0.36804354190826416,
"beginner": 0.41520780324935913,
"expert": 0.2167486697435379
}
|
47,523
|
I cannot reach services in the VPN's LAN when connecting via wg-quick up on Fedora Linux. On my Android smartphone it works perfectly:
[Interface]
PrivateKey = qEcrXXXpoI5VmgIGgytp6I/vXXXifzy6w/KMRXXXpnA=
Address = 10.8.0.4/24
DNS = 1.1.1.1
[Peer]
PublicKey = ZXjKByqqD6QXXXSjGYXv+iXXXbtlV4hj90XXXuPJk3o=
PresharedKey = bF29jCNBXrXXX9obTXXX+Vi105qfDbMmiWdjXXXUsIg=
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 0
Endpoint = public.vpn.com:51820
|
9756bdee4ec86b7bbae76c5172a28a38
|
{
"intermediate": 0.4419126808643341,
"beginner": 0.30068108439445496,
"expert": 0.25740623474121094
}
|
47,524
|
I have a measurement where, after about 50 points, I start looking for the point where acceleration turns into deceleration. My code needs to be corrected; can you help me?
def seekZeroCrossing(data):
# if the data array is empty or not a list, return None
if (isinstance(data, list) == False and isinstance(data, np.ndarray) == False):
return None
# Initialize previous sign to 0
prev_sign = 0
position = None
for i, element in enumerate(data):
if isinstance(element, (int, float)):
current_sign = 1 if element > 0 else -1 # Determine the sign of the current number
if current_sign != prev_sign and prev_sign != 0: # Check for zero-crossing condition
position = i
break
prev_sign = current_sign
return position
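A hedged rewrite of the function; the 50-point start offset, the treatment of exact zeros, and reading "acceleration turns into deceleration" as a positive-to-negative sign change are assumptions layered on top of the original code:
import numpy as np

def seek_zero_crossing(data, start=50):
    # Accept only list or ndarray input, as in the original
    if not isinstance(data, (list, np.ndarray)):
        return None
    prev_sign = 0
    for i, element in enumerate(data):
        # Skip the initial settling points and anything non-numeric
        if i < start or not isinstance(element, (int, float, np.integer, np.floating)):
            continue
        if element == 0:
            continue  # zero is neither positive nor negative; keep the previous sign
        current_sign = 1 if element > 0 else -1
        # Acceleration turning into deceleration: sign flips from +1 to -1
        if prev_sign == 1 and current_sign == -1:
            return i
        prev_sign = current_sign
    return None
If data is always a NumPy array, np.sign(data[start:]) combined with np.diff gives the same result in a vectorized form.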
|
2b97721bc288da82b056e4a71e46a2b7
|
{
"intermediate": 0.4754122197628021,
"beginner": 0.22701396048069,
"expert": 0.29757383465766907
}
|
47,525
|
How can I see the processes of a program from the terminal in Windows?
|
015cc42e90846d214e2ad11be9c35046
|
{
"intermediate": 0.5014653205871582,
"beginner": 0.2991132438182831,
"expert": 0.19942142069339752
}
|
47,526
|
Make code in scratchblocks that generates images of some random blocks
|
df82f11049902dc357942c8c5cdab65e
|
{
"intermediate": 0.3306294083595276,
"beginner": 0.2062833458185196,
"expert": 0.4630872309207916
}
|
47,527
|
Make code in scratchblocks that generates images of some random blocks
|
3a4814c091df280b6f6df11fe4205d1b
|
{
"intermediate": 0.3306294083595276,
"beginner": 0.2062833458185196,
"expert": 0.4630872309207916
}
|
47,528
|
Use scratchblocks to make a block that creates a clone at random position
|
cd0673592d10ca04d6d6a58233b472c2
|
{
"intermediate": 0.3786442279815674,
"beginner": 0.2356339991092682,
"expert": 0.38572174310684204
}
|
47,529
|
Make new blocks in the scratchblocks language based on this:
Hi
Lists::list
Variables::variables
Controls::control
Start stuff::events
Adding::operators
[QWERTYUIOP v] turns to (0)::extension
Boolean time <AhahaHAAHh::variables>::sensing
I love to move::motion
Looks [Hello!] (costume1 v) (size::looks)::looks
Speak::sound
HELLO::custom
Obsolete::obsolete
Last thing::grey
abc:: looks
say [I'm not a Motion block!]:: motion
eat (pen color:: pen):: control
if <touching (mouse pointer v)?:: list> then
die:: grey
end
abc:: events hat
def:: motion stack
ghi:: pen reporter
jkl:: operators boolean
Hi there::looks hat
mnop@addInput {
qrstuv {
destroy {move (3829) steps and wait::motion ring}::events
}wxyz{
say {
say ((1) + (1)) and stop loop::looks cap
}@loopArrow::looks
}::motion
}::grey cap
speak (Hello I am a banger <wait, you're not a variable [yes v]::variables> [#d7a936]::variables)::tts
when @greenFlag clicked::events hat
repeat (1000){
glide (1) [millisecs v] to (edges v)::motion
}then repeat (5){
say [Fin] for (1) seconds::looks
say [ished!] for (1) seconds::looks
}@loopArrow::control
wait until <touching color (#d7a0b0)?>::control
run ({create clone:: control} @addInput:: grey ring):: control
<() @addInput:: grey ring>
say (http:// [snap.berkeley.edu]:: sensing)
((6) × (7):: operators)
(join [hello ] [world] @delInput @addInput:: operators)
script variables ((foo):: grey) ((bar):: grey) @delInput @addInput:: grey
warp {
move (10) steps
} :: grey
report [Done!]:: control cap
(<> @addInput) // without even the:: grey ring
|
a9b8184d81e4e15d46f403d56c00e8be
|
{
"intermediate": 0.16700518131256104,
"beginner": 0.6767373085021973,
"expert": 0.1562575101852417
}
|
47,530
|
Hi
Lists::list
Variables::variables
Controls::control
Start stuff::events
Adding::operators
[QWERTYUIOP v] turns to (0)::extension
Boolean time <AhahaHAAHh::variables>::sensing
I love to move::motion
Looks [Hello!] (costume1 v) (size::looks)::looks
Speak::sound
HELLO::custom
Obsolete::obsolete
Last thing::grey
abc:: looks
say [I’m not a Motion block!]:: motion
eat (pen color:: pen):: control
if <touching (mouse pointer v)?:: list> then
die:: grey
end
abc:: events hat
def:: motion stack
ghi:: pen reporter
jkl:: operators boolean
Hi there::looks hat
mnop@addInput {
qrstuv {
destroy {move (3829) steps and wait::motion ring}::events
}wxyz{
say {
say ((1) + (1)) and stop loop::looks cap
}@loopArrow::looks
}::motion
}::grey cap
speak (Hello I am a banger <wait, you’re not a variable [yes v]::variables> [#d7a936]::variables)::tts
when @greenFlag clicked::events hat
repeat (1000){
glide (1) [millisecs v] to (edges v)::motion
}then repeat (5){
say [Fin] for (1) seconds::looks
say [ished!] for (1) seconds::looks
}@loopArrow::control
wait until <touching color (#d7a0b0)?>::control
run ({create clone:: control} @addInput:: grey ring):: control
<() @addInput:: grey ring>
say (http:// [snap.berkeley.edu]:: sensing)
((6) × (7):: operators)
(join [hello ] [world] @delInput @addInput:: operators)
script variables ((foo):: grey) ((bar):: grey) @delInput @addInput:: grey
warp {
move (10) steps
} :: grey
report [Done!]:: control cap
(<> @addInput) // without even the:: grey ring
|
ef977a73565f6585bbcc5e8455a61362
|
{
"intermediate": 0.21844448149204254,
"beginner": 0.5984563827514648,
"expert": 0.1830991506576538
}
|
47,531
|
F:\AIgene_anki>pyinstaller --onefile --windowed guipyqt.py
2987 INFO: PyInstaller: 6.5.0, contrib hooks: 2024.3
2987 INFO: Python: 3.11.5 (conda)
3000 INFO: Platform: Windows-10-10.0.22631-SP0
3001 INFO: wrote F:\AIgene_anki\guipyqt.spec
3004 INFO: Extending PYTHONPATH with paths
['F:\\AIgene_anki']
3433 INFO: checking Analysis
3433 INFO: Building Analysis because Analysis-00.toc is non existent
3434 INFO: Initializing module dependency graph...
3434 INFO: Caching module graph hooks...
3462 INFO: Analyzing base_library.zip ...
12419 INFO: Loading module hook 'hook-heapq.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
12516 INFO: Loading module hook 'hook-encodings.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
16679 INFO: Loading module hook 'hook-pickle.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
18047 INFO: Caching module dependency graph...
18164 INFO: Running Analysis Analysis-00.toc
18164 INFO: Looking for Python shared library...
18168 INFO: Using Python shared library: D:\anaconda\python311.dll
18168 INFO: Analyzing F:\AIgene_anki\guipyqt.py
18175 INFO: Loading module hook 'hook-PyQt5.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
18269 INFO: Loading module hook 'hook-PyQt5.QtWidgets.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
18391 INFO: Loading module hook 'hook-PyQt5.QtCore.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
18610 INFO: Loading module hook 'hook-pydantic.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
18644 INFO: Loading module hook 'hook-platform.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
19803 INFO: Loading module hook 'hook-importlib_metadata.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
20021 INFO: Loading module hook 'hook-multiprocessing.util.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
20283 INFO: Loading module hook 'hook-xml.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
21017 INFO: Loading module hook 'hook-pycparser.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
21469 INFO: Processing pre-safe import module hook distutils from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-distutils.py'.
21470 INFO: Processing pre-find module path hook distutils from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-distutils.py'.
21856 INFO: Loading module hook 'hook-distutils.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
21916 INFO: Loading module hook 'hook-distutils.util.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
22066 INFO: Loading module hook 'hook-sysconfig.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
22253 INFO: Loading module hook 'hook-setuptools.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
28831 INFO: Loading module hook 'hook-packaging.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
29624 INFO: Loading module hook 'hook-pkg_resources.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
37544 INFO: Loading module hook 'hook-cryptography.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
43759 INFO: Loading module hook 'hook-bcrypt.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
44191 INFO: Loading module hook 'hook-py.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
44623 INFO: Loading module hook 'hook-pytest.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
44809 INFO: Loading module hook 'hook-numpy.py' from 'D:\\anaconda\\Lib\\site-packages\\numpy\\_pyinstaller'...
45490 INFO: Loading module hook 'hook-difflib.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
45681 INFO: Loading module hook 'hook-psutil.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
47479 INFO: Loading module hook 'hook-pygments.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
57084 INFO: Loading module hook 'hook-certifi.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
57102 INFO: Loading module hook 'hook-anyio.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
61934 INFO: Loading module hook 'hook-IPython.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
63750 INFO: Loading module hook 'hook-xml.dom.domreg.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
63939 INFO: Loading module hook 'hook-matplotlib.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
64324 INFO: Processing pre-safe import module hook gi from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-gi.py'.
64409 INFO: Loading module hook 'hook-PIL.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
64485 INFO: Loading module hook 'hook-PIL.Image.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
67424 INFO: Loading module hook 'hook-xml.etree.cElementTree.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
68075 INFO: Loading module hook 'hook-PIL.ImageFilter.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
68559 INFO: Loading module hook 'hook-jinja2.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
70455 INFO: Processing pre-safe import module hook six.moves from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-six.moves.py'.
71619 INFO: Loading module hook 'hook-matplotlib.backends.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
72254 INFO: Loading module hook 'hook-importlib_resources.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
72749 INFO: Loading module hook 'hook-lib2to3.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
73251 INFO: Loading module hook 'hook-wcwidth.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
74319 INFO: Loading module hook 'hook-jedi.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
76766 INFO: Loading module hook 'hook-parso.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
77169 INFO: Loading module hook 'hook-docutils.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
90668 INFO: Loading module hook 'hook-sphinx.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
114842 INFO: Loading module hook 'hook-babel.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
115200 INFO: Loading module hook 'hook-pytz.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
116952 INFO: Loading module hook 'hook-sqlite3.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
120185 INFO: Loading module hook 'hook-nbformat.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
120558 INFO: Loading module hook 'hook-jsonschema.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
120689 INFO: Loading module hook 'hook-jsonschema_specifications.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
120774 INFO: Processing pre-safe import module hook urllib3.packages.six.moves from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-urllib3.packages.six.moves.py'.
121047 INFO: Loading module hook 'hook-charset_normalizer.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
121594 INFO: Loading module hook 'hook-platformdirs.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
122602 INFO: Loading module hook 'hook-zmq.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
126378 INFO: Loading module hook 'hook-nacl.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
126572 INFO: Loading module hook 'hook-pywintypes.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
127780 INFO: Loading module hook 'hook-_tkinter.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
127801 INFO: Loading module hook 'hook-PyQt6.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
Aborting build process due to attempt to collect multiple Qt bindings packages: attempting to run hook for 'PyQt6', while hook for 'PyQt5' has already been run! PyInstaller does not support multiple Qt bindings packages in a frozen application - either ensure that the build environment has only one Qt bindings package installed, or exclude the extraneous bindings packages via the module exclusion mechanism (--exclude command-line option, or excludes list in the spec file).
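The build aborts because both PyQt5 and PyQt6 are importable in this Anaconda environment, and one of the later hooks (matplotlib/IPython in this log) drags in the second binding. One way out, as the message itself suggests, is to exclude the binding the app does not use. A sketch of the excludes list in the generated guipyqt.spec, assuming the app really targets PyQt5; the PySide entries are only a precaution:
# guipyqt.spec (relevant part only) - keep PyQt5, exclude the other Qt bindings
a = Analysis(
    ['guipyqt.py'],
    pathex=[],
    binaries=[],
    datas=[],
    hiddenimports=[],
    hookspath=[],
    excludes=['PyQt6', 'PySide6', 'PySide2'],
)
Rebuilding with pyinstaller guipyqt.spec (or passing --exclude-module PyQt6 on the command line) should get past this abort; a clean environment with only one Qt binding installed avoids the issue entirely.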
|
724e5aaf82b2ea1baf905a59736edee8
|
{
"intermediate": 0.5143664479255676,
"beginner": 0.36132195591926575,
"expert": 0.12431155890226364
}
|
47,532
|
Can you program a gradio chatbot interface?
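A minimal sketch using Gradio's built-in chat component (gr.ChatInterface, available in recent Gradio releases); the respond function is a placeholder to be replaced with a real model call:
import gradio as gr

def respond(message, history):
    # history holds the previous (user, assistant) turns; here we simply echo
    return f"You said: {message}"

demo = gr.ChatInterface(fn=respond, title="Simple Chatbot")

if __name__ == "__main__":
    demo.launch()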
|
f1f60b0b4744c8f0ff7d7895b67d2694
|
{
"intermediate": 0.27933311462402344,
"beginner": 0.09795621037483215,
"expert": 0.622710645198822
}
|
47,533
|
My web application returns an object of type Image. Can I somehow visualize it in Postman, or at least show that the image was received? Here is part of the code
@api_view(['POST'])
@renderer_classes([JSONRenderer])
def generating_image_based_on_text(request: Request):
method_name = "generating_image_based_on_text"
logger.info(f"[{app_name}] -> {method_name} start:")
start_time = time.time()
response_status = status.HTTP_200_OK
exception = None
result = {}
try:
body = json.loads(request.body.decode('utf-8'))
text = body.get("text", "")
img = MainGeneratingImageBasedOnText().apply(text)
result = # something needs to be returned here
logger.info(f"result = {result}")
except Exception as exc:
logger.error(f"{method_name} finished failure, exception: ", exc)
response_status = status.HTTP_400_BAD_REQUEST
exception = exc
finally:
finish_time = time.time() - start_time
logger.info(f"[{app_name}] -> {method_name} finished with {finish_time:.2f} sec")
return Response(data=result, status=response_status, exception=exception)
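Two common options, both assuming img is a PIL Image: return the raw bytes with an image content type (Postman then renders the picture in the response body preview), or keep the JSON renderer and embed the image as base64. A minimal sketch of both helpers; the names are placeholders:
import base64
from io import BytesIO
from django.http import HttpResponse

def image_as_http_response(img):
    # Option 1: raw PNG bytes; Postman shows the picture in the Body preview
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    return HttpResponse(buffer.getvalue(), content_type="image/png")

def image_as_json_payload(img):
    # Option 2: keep the JSON response and ship the image as a base64 string
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    return {"image_base64": base64.b64encode(buffer.getvalue()).decode("ascii"), "format": "png"}
With option 1 the view would return this HttpResponse directly instead of the DRF Response; with option 2, result = image_as_json_payload(img) slots into the existing return path, and Postman at least shows that an image came back.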
|
3f34ec51a5a4016a8b2e67faea70b780
|
{
"intermediate": 0.42779481410980225,
"beginner": 0.49160826206207275,
"expert": 0.08059690147638321
}
|
47,534
|
When mounting a SMB share as CIFS in Linux, what does the `noperm` option do?
|
386de685e0b243b5f49c4acd70bb0b5f
|
{
"intermediate": 0.40160802006721497,
"beginner": 0.23558811843395233,
"expert": 0.3628038167953491
}
|
47,535
|
My gradio app currently looks like this:
from chromadb.utils import embedding_functions
import chromadb
from openai import OpenAI
import gradio as gr
import time
anyscale_base_url = "https://api.endpoints.anyscale.com/v1"
multilingual_embeddings = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="jost/multilingual-e5-base-politics-de")
def predict(api_key, user_input):
client = chromadb.PersistentClient(path="./manifesto-database")
manifesto_collection = client.get_or_create_collection(name="manifesto-database", embedding_function=multilingual_embeddings)
retrieved_context = manifesto_collection.query(query_texts=[user_input], n_results=3, where={"ideology": "Authoritarian-right"})
contexts = [context for context in retrieved_context['documents']]
print(contexts[0])
prompt = f"""[INST] {user_input} [/INST]"""
client = OpenAI(base_url=anyscale_base_url, api_key=api_key)
completion = client.completions.create(
model="mistralai/Mixtral-8x7B-Instruct-v0.1",
prompt=prompt,
temperature=0.7,
max_tokens=1000)
response = completion.choices[0].text
return response
def main():
description = "This is a simple interface to interact with OpenAI’s Chat Completion API. Please enter your API key and your message."
with gr.Blocks() as demo:
with gr.Row():
api_key_input = gr.Textbox(label="API Key", placeholder="Enter your API key here", show_label=True, type="password")
user_input = gr.Textbox(label="Your Message", placeholder="Enter your message here")
submit_btn = gr.Button("Submit")
output = gr.Textbox(label="LLM Response")
submit_btn.click(fn=predict, inputs=[api_key_input, user_input], outputs=output)
demo.launch()
if __name__ == "__main__":
main()
I want to compare the same prompt across two models, which I select from drop-down menus. The responses should be next to each other.
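A minimal sketch of the layout change: two model dropdowns, one predict call that returns two strings, and two output boxes side by side in a gr.Row. The model names and the placeholder responses are assumptions; the real API calls from the code above would go inside predict:
import gradio as gr

MODELS = ["mistralai/Mixtral-8x7B-Instruct-v0.1", "meta-llama/Llama-2-70b-chat-hf"]  # assumed list

def predict(api_key, user_input, model1, model2):
    # Call both models here; placeholders keep the sketch self-contained
    response1 = f"[{model1}] response to: {user_input}"
    response2 = f"[{model2}] response to: {user_input}"
    return response1, response2

with gr.Blocks() as demo:
    with gr.Row():
        api_key_input = gr.Textbox(label="API Key", type="password")
        user_input = gr.Textbox(label="Your Message")
    with gr.Row():
        model_selector1 = gr.Dropdown(label="Model 1", choices=MODELS)
        model_selector2 = gr.Dropdown(label="Model 2", choices=MODELS)
    submit_btn = gr.Button("Submit")
    with gr.Row():
        output1 = gr.Textbox(label="Model 1 Response")
        output2 = gr.Textbox(label="Model 2 Response")
    submit_btn.click(fn=predict,
                     inputs=[api_key_input, user_input, model_selector1, model_selector2],
                     outputs=[output1, output2])

demo.launch()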
|
b3051ac33339a0722fb0ea6029d323a5
|
{
"intermediate": 0.4953461289405823,
"beginner": 0.3212879002094269,
"expert": 0.18336600065231323
}
|
47,536
|
I have this gradio code:
from chromadb.utils import embedding_functions
import chromadb
from openai import OpenAI
import gradio as gr
import time
anyscale_base_url = "https://api.endpoints.anyscale.com/v1"
multilingual_embeddings = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="jost/multilingual-e5-base-politics-de")
def predict(api_key, user_input, model1, model2, prompt_manipulation=None, direct_steering_option=None):
# client = chromadb.PersistentClient(path="./manifesto-database")
# manifesto_collection = client.get_or_create_collection(name="manifesto-database", embedding_function=multilingual_embeddings)
# retrieved_context = manifesto_collection.query(query_texts=[user_input], n_results=3, where={"ideology": "Authoritarian-right"})
# contexts = [context for context in retrieved_context['documents']]
# print(contexts[0])
prompt = f"""[INST] {user_input} [/INST]"""
client = OpenAI(base_url=anyscale_base_url, api_key=api_key)
response1 = client.completions.create(
model=model1,
prompt=prompt,
temperature=0.7,
max_tokens=1000).choices[0].text
response2 = client.completions.create(
model=model2,
prompt=prompt,
temperature=0.7,
max_tokens=1000).choices[0].text
return response1, response2
def main():
description = "This is a simple interface to compare two model prodided by Anyscale. Please enter your API key and your message."
with gr.Blocks() as demo:
# Prompt manipulation setup
with gr.Row():
prompt_manipulation = gr.Dropdown(
label="Prompt Manipulation",
choices=[
"Impersonation (direct steering)",
"Most similar RAG (indirect steering with related context)",
"Random RAG (indirect steering with randomized context)"
]
)
# Conditional dropdown - options revealed based on ‘Prompt Manipulation’ selection
with gr.Row():
direct_steering_option = gr.Dropdown(
label="Direct Steering Options",
choices=["Option 1", "Option 2", "Option 3", "Option 4"],
visible=False # Initially hidden
)
# Making direct_steering_option visible based on prompt_manipulation choice
def show_direct_steering_options(prompt_choice):
return prompt_choice == "Impersonation (direct steering)"
prompt_manipulation.change(fn=show_direct_steering_options,
inputs=[prompt_manipulation],
outputs=[direct_steering_option])
with gr.Row():
api_key_input = gr.Textbox(label="API Key", placeholder="Enter your API key here", show_label=True, type="password")
user_input = gr.Textbox(label="Prompt", placeholder="Enter your message here")
model_selector1 = gr.Dropdown(label="Model 1", choices=["mistralai/Mixtral-8x7B-Instruct-v0.1", "mistralai/Mixtral-8x22B-Instruct-v0.1"])
model_selector2 = gr.Dropdown(label="Model 2", choices=["mistralai/Mixtral-8x7B-Instruct-v0.1", "mistralai/Mixtral-8x22B-Instruct-v0.1"])
submit_btn = gr.Button("Submit")
with gr.Row():
output1 = gr.Textbox(label="Model 1 Response")
output2 = gr.Textbox(label="Model 2 Response")
submit_btn.click(fn=predict, inputs=[api_key_input, user_input, model_selector1, model_selector2], outputs=[output1, output2])
demo.launch()
if __name__ == "__main__":
main()
I want to add another drop-down menu next to the Prompt Manipulation menu. But this can only be accessed when something is selected from the Prompt Manipulation drop-down. Based on this selection, there should be four different options in the new drop-down menu.
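A minimal sketch of the missing piece: the change handler should return a gr.update(...) that both reveals the second dropdown and swaps in the four options that belong to the selected manipulation mode. The option lists below are placeholders, not taken from the original:
import gradio as gr

# Placeholder option lists - replace with the real four options per mode
OPTIONS_BY_MODE = {
    "Impersonation (direct steering)": ["Option A1", "Option A2", "Option A3", "Option A4"],
    "Most similar RAG (indirect steering with related context)": ["Option B1", "Option B2", "Option B3", "Option B4"],
    "Random RAG (indirect steering with randomized context)": ["Option C1", "Option C2", "Option C3", "Option C4"],
}

def show_direct_steering_options(prompt_choice):
    choices = OPTIONS_BY_MODE.get(prompt_choice, [])
    # Reveal the dropdown and replace its choices based on the first menu
    return gr.update(visible=bool(choices), choices=choices, value=None)

# Wiring inside gr.Blocks(), as in the code above:
# prompt_manipulation.change(fn=show_direct_steering_options,
#                            inputs=[prompt_manipulation],
#                            outputs=[direct_steering_option])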
|
538d65890755ca8e76851b3361497f1d
|
{
"intermediate": 0.3936256170272827,
"beginner": 0.4511127471923828,
"expert": 0.15526168048381805
}
|
47,537
|
I have a multilabel CSV dataset with the following structure:
one column with tweets as text, and another string-type column that contains at least one word, for example:
1365287445114322946t,We do not need any crazy twisted politician telling us whether to take a vaccine. We do not need them to tell us how to run our lives. What we need to do is fight back and ignore the nonsensical narrative.,political
1364157842022891520t,"@AgreeT0D1sagree @Matteo30115900 @Femi_Sorry I respect Natalie’s view even if it differs from mine. I genuinely hope you don’t encounter problems for not having a vaccine passport, if they are issued",mandatory
What is the most efficient and effective way to create a multilabel array to process the targets?
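A common approach is scikit-learn's MultiLabelBinarizer, sketched below; the column names, the lack of a header row, and the label delimiter (a space) are assumptions about the CSV:
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.read_csv("tweets.csv", header=None, names=["id", "text", "labels"])  # assumed layout

# Split the label string into a list of labels (assumed to be space-separated)
label_lists = df["labels"].str.split()

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(label_lists)   # shape: (n_tweets, n_distinct_labels), 0/1 entries
print(mlb.classes_)                  # label vocabulary, in column order of y
y can then be fed directly to any multilabel-capable estimator (for example OneVsRestClassifier) alongside the vectorized tweet text.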
|
88801cf3e5f8f670a9cb2c1d4e283676
|
{
"intermediate": 0.44217053055763245,
"beginner": 0.22550244629383087,
"expert": 0.3323270082473755
}
|
47,538
|
got this when I ran 'wsl.exe' on Powershell (Administrator).
<3>WSL (10) ERROR: CreateProcessParseCommon:711: Failed to translate C:\Users\arron baranquil
<3>WSL (10) ERROR: CreateProcessParseCommon:757: getpwuid(0) failed 2
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\App\cmder\bin
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\App\cmder\vendor\bin
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\App\cmder\vendor\conemu-maximus5\ConEmu\Scripts
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\App\cmder\vendor\conemu-maximus5
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\App\cmder\vendor\conemu-maximus5\ConEmu
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\Eclipse Adoptium\jre-8.0.402.6-hotspot\bin
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files (x86)\Common Files\Oracle\Java\javapath
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files (x86)\Intel\TXE Components\TCS\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\Intel\TXE Components\TCS\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\WINDOWS\system32
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\WINDOWS
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\WINDOWS\System32\Wbem
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\WINDOWS\System32\WindowsPowerShell\v1.0\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\Intel\TXE Components\DAL\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files (x86)\Intel\TXE Components\DAL\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\Intel\TXE Components\IPT\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files (x86)\Intel\TXE Components\IPT\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\WINDOWS\System32\OpenSSH\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\FirefoxPWA\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\dotnet\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\PowerShell\7\
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\scoop\apps\yt-dlp\2024.04.09
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\scoop\apps\ghostscript\10.03.0
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\Tesseract-OCR
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Program Files\Docker\Docker\resources\bin
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\scoop\apps\ghostscript\current\lib
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\scoop\apps\gsudo\current
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\scoop\apps\python\current\Scripts
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\scoop\apps\python\current
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\scoop\shims
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\AppData\Local\Programs\WingetUI\choco-cli\bin
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\AppData\Local\Microsoft\WindowsApps
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\Users\arron baranquil\AppData\Local\Microsoft\WinGet\Packages\Microsoft.Sysinternals.ProcessExplorer_Microsoft.Winget.Source_8wekyb3d8bbwe
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\texlive\2024\bin\windows
<3>WSL (10) ERROR: UtilTranslatePathList:2866: Failed to translate C:\App\cmder
Processing fstab with mount -a failed.
Failed to mount C:\, see dmesg for more details.
Failed to mount D:\, see dmesg for more details.
<3>WSL (10) ERROR: CreateProcessEntryCommon:334: getpwuid(0) failed 2
<3>WSL (10) ERROR: CreateProcessEntryCommon:505: execvpe /bin/sh failed 2
<3>WSL (10) ERROR: CreateProcessEntryCommon:508: Create process not expected to return
|
54db7ea83c610d42f2bf6e01509cd862
|
{
"intermediate": 0.297981321811676,
"beginner": 0.43965551257133484,
"expert": 0.26236316561698914
}
|
47,539
|
C#: I want to create an attribute that, if you add it to an async method, makes the method run on my custom SynchronizationContext.
|
1795c75b539e8569f596730ce96b94a4
|
{
"intermediate": 0.5708900094032288,
"beginner": 0.19306980073451996,
"expert": 0.2360401451587677
}
|
47,540
|
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.label import Label
from kivy.uix.button import Button
from kivy.uix.popup import Popup
from kivy.graphics import Color
from kivy.uix.textinput import TextInput
from kivy.uix.screenmanager import ScreenManager, Screen
class HealthyBMIAppli(App):
def build(self):
sm = ScreenManager()
sm.add_widget(LoginScreen(name='login'))
sm.add_widget(MainScreen(name='main'))
return sm
class LoginScreen(Screen):
def __init__(self, **kwargs):
super(LoginScreen, self).__init__(**kwargs)
layout = BoxLayout(orientation='vertical')
self.add_widget(layout)
label = Label(text='Healthy BMI!', font_size=30, color=[0.5, 0.8, 0.2, 1], halign='center', valign='middle')
layout.add_widget(label)
button = Button(text='Login', font_size=20)
layout.add_widget(button)
self.button = button
def on_button_press(self):
sm = self.parent
sm.current = 'main'
class MainScreen(Screen):
def __init__(self, **kwargs):
super(MainScreen, self).__init__(**kwargs)
layout = BoxLayout(orientation='vertical')
self.add_widget(layout)
label = Label(text='Enter your height and weight to calculate your BMI', font_size=20, color=[0.5, 0.8, 0.2, 1], halign='center', valign='middle')
layout.add_widget(label)
input_height = TextInput(multiline=False, hint_text='Height (in cm)')
input_weight = TextInput(multiline=False, hint_text='Weight (in kg)')
calculate_button = Button(text='Calculate BMI', font_size=20)
layout.add_widget(input_height)
layout.add_widget(input_weight)
layout.add_widget(calculate_button)
self.calculate_button = calculate_button
def on_calculate_button_press(self):
height = float(self.ids.input_height.text)
weight = float(self.ids.input_weight.text)
bmi = weight / (height ** 2) * 10000
popup = Popup(title='Your BMI', content=Label(text=f'Your BMI is {bmi:.2f}'), size_hint=(None, None), size=(200, 100))
popup.open()
class ThemeSwitcher(Screen):
def __init__(self, **kwargs):
super(ThemeSwitcher, self).__init__(**kwargs)
layout = BoxLayout(orientation='vertical')
self.add_widget(layout)
switch_button = Button(text='Switch theme', font_size=20)
layout.add_widget(switch_button)
self.switch_button = switch_button
def on_switch_button_press(self):
if self.parent.theme == 'light':
self.parent.theme = 'dark'
else:
self.parent.theme = 'light'
class HealthyBMIAppliApp(App):
def build(self):
sm = ScreenManager()
sm.add_widget(LoginScreen(name='login'))
sm.add_widget(MainScreen(name='main'))
sm.add_widget(ThemeSwitcher(name='theme_switcher'))
return sm
if __name__ == '__main__':
HealthyBMIAppli().run()
|
1aa495891f19948e8acc11f542fed87e
|
{
"intermediate": 0.2533782720565796,
"beginner": 0.5474810600280762,
"expert": 0.19914069771766663
}
|
47,541
|
I need to wait until the animation below finishes and proceed further with drawing other graphs
ani = FuncAnimation(plt.gcf(), animate, fargs=(axs, data), interval=100) # call function 'animate' every 100 milliseconds (or 1/10th of a second)
|
b5711d3df6176b4bc97217524e66ab26
|
{
"intermediate": 0.4670620858669281,
"beginner": 0.28645065426826477,
"expert": 0.24648727476596832
}
|
47,542
|
<!DOCTYPE html>
<html>
<head>
<title>ASMR Audio Positioner</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
body {
margin: 0;
padding: 0;
overflow: hidden;
background-color: #1c1c1c;
color: #f1f1f1;
}
.container {
display: flex;
justify-content: center;
align-items: center;
height: 80vh;
width: 100%;
position: fixed;
top: 0;
left: 0;
}
.ear {
width: 20vw;
height: 20vw;
max-width: 120px;
max-height: 120px;
background-color: #333;
border-radius: 50%;
margin: 0 5vw;
display: flex;
justify-content: center;
align-items: center;
overflow: hidden;
position: relative;
}
.ear img {
width: 100%;
height: 100%;
object-fit: cover;
}
.right-ear {
order: 1;
}
.left-ear {
order: 3;
}
.center-ear {
order: 2;
width: 25vw;
height: 25vw;
max-width: 160px;
max-height: 160px;
background-color: #555;
border-radius: 50%;
margin: 0 3vw;
}
.audio-position {
width: 8vw;
height: 8vw;
max-width: 40px;
max-height: 40px;
background-color: #888;
border-radius: 50%;
position: absolute;
cursor: move;
z-index: 1;
}
.button-container {
position: fixed;
bottom: 20px;
left: 0;
right: 0;
text-align: center;
}
.record-btn {
margin: 0 5px;
padding: 10px 20px;
font-size: 16px;
background-color: #333;
color: #f1f1f1;
border: none;
border-radius: 4px;
cursor: pointer;
}
.record-btn:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.config-menu {
position: fixed;
bottom: -200px;
left: 0;
right: 0;
background-color: rgba(51, 51, 51, 0.9);
padding: 20px;
transition: bottom 0.3s ease;
border-top-left-radius: 10px;
border-top-right-radius: 10px;
display: flex;
flex-direction: column;
align-items: center;
}
.config-menu.open {
bottom: 0;
}
.config-option {
margin-bottom: 10px;
display: flex;
align-items: center;
}
.config-option label {
margin-right: 10px;
}
.transparency-line {
position: absolute;
left: 0;
right: 0;
height: 1px;
background-color: rgba(255, 255, 255, 0.2);
display: flex;
align-items: center;
justify-content: flex-end;
padding-right: 10px;
font-size: 12px;
}
.close-btn {
margin-top: 20px;
padding: 10px 20px;
font-size: 16px;
background-color: #555;
color: #f1f1f1;
border: none;
border-radius: 4px;
cursor: pointer;
}
</style>
</head>
<body>
<div class="container">
<div class="ear right-ear">
<img src="imagenes/derecho.jpg" alt="Right Ear">
</div>
<div class="ear center-ear">
<img src="C:\Users\ACER\Desktop\Asmr\imagenes\muñeca.jpeg" alt="Center Ear">
</div>
<div class="ear left-ear">
<img src="imagenes/izquierdo.jpg" alt="Left Ear">
</div>
<div class="audio-position" id="audioPosition"></div>
</div>
<div class="button-container">
<button class="record-btn" id="recordBtn">Record</button>
<button class="record-btn" id="stopBtn" disabled>Stop</button>
<button class="record-btn" id="downloadBtn" disabled>Download</button>
<button class="record-btn" id="configBtn">Config</button>
</div>
<div class="config-menu" id="configMenu">
<div class="config-option">
<label for="transparencyLines">Transparency Lines:</label>
<input type="checkbox" id="transparencyLines">
</div>
<button class="close-btn" id="closeBtn">Close</button>
</div>
<script>
const audioPosition = document.getElementById('audioPosition');
const recordBtn = document.getElementById('recordBtn');
const stopBtn = document.getElementById('stopBtn');
const downloadBtn = document.getElementById('downloadBtn');
const configBtn = document.getElementById('configBtn');
const configMenu = document.getElementById('configMenu');
const transparencyLinesCheckbox = document.getElementById('transparencyLines');
const closeBtn = document.getElementById('closeBtn');
let isDragging = false;
let currentX;
let currentY;
let initialX;
let initialY;
let xOffset = 0;
let yOffset = 0;
let mediaRecorder;
let recordedChunks = [];
let audioContext;
let sourceNode;
let pannerNode;
let gainNode;
let destinationNode;
let positions = [];
audioPosition.addEventListener('mousedown', dragStart);
audioPosition.addEventListener('touchstart', dragStart);
document.addEventListener('mouseup', dragEnd);
document.addEventListener('touchend', dragEnd);
document.addEventListener('mousemove', drag);
document.addEventListener('touchmove', drag);
configBtn.addEventListener('click', toggleConfigMenu);
transparencyLinesCheckbox.addEventListener('change', toggleTransparencyLines);
closeBtn.addEventListener('click', closeConfigMenu);
function dragStart(e) {
if (e.type === 'touchstart') {
initialX = e.touches[0].clientX - xOffset;
initialY = e.touches[0].clientY - yOffset;
} else {
initialX = e.clientX - xOffset;
initialY = e.clientY - yOffset;
}
isDragging = true;
}
function dragEnd() {
isDragging = false;
}
function drag(e) {
if (isDragging) {
e.preventDefault();
if (e.type === 'touchmove') {
currentX = e.touches[0].clientX - initialX;
currentY = e.touches[0].clientY - initialY;
} else {
currentX = e.clientX - initialX;
currentY = e.clientY - initialY;
}
xOffset = currentX;
yOffset = currentY;
setTranslate(currentX, currentY, audioPosition);
updateAudio();
}
}
function setTranslate(xPos, yPos, el) {
el.style.transform = `translate3d(${xPos}px, ${yPos}px, 0)`;
}
recordBtn.addEventListener('click', startRecording);
stopBtn.addEventListener('click', stopRecording);
downloadBtn.addEventListener('click', downloadRecording);
function startRecording() {
recordedChunks = [];
positions = [];
navigator.mediaDevices.getUserMedia({ audio: true })
.then(stream => {
audioContext = new AudioContext();
sourceNode = audioContext.createMediaStreamSource(stream);
pannerNode = audioContext.createPanner();
gainNode = audioContext.createGain();
destinationNode = audioContext.createMediaStreamDestination();
sourceNode.connect(pannerNode);
pannerNode.connect(gainNode);
gainNode.connect(destinationNode);
mediaRecorder = new MediaRecorder(destinationNode.stream);
mediaRecorder.addEventListener('dataavailable', handleDataAvailable);
mediaRecorder.start();
recordBtn.disabled = true;
stopBtn.disabled = false;
updateAudio();
})
.catch(error => {
console.error('Error accessing microphone:', error);
});
}
function stopRecording() {
mediaRecorder.stop();
recordBtn.disabled = false;
stopBtn.disabled = true;
downloadBtn.disabled = false;
sourceNode.disconnect();
pannerNode.disconnect();
gainNode.disconnect();
destinationNode.disconnect();
audioContext.close();
}
function handleDataAvailable(event) {
if (event.data.size > 0) {
recordedChunks.push(event.data);
}
}
function downloadRecording() {
const blob = new Blob(recordedChunks, { type: 'audio/webm' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'recording.webm';
a.click();
URL.revokeObjectURL(url);
downloadBtn.disabled = true;
}
function updateAudio() {
if (pannerNode && gainNode) {
const containerWidth = document.querySelector('.container').offsetWidth;
const containerHeight = document.querySelector('.container').offsetHeight;
const earWidth = document.querySelector('.ear').offsetWidth;
const centerX = currentX + initialX;
const centerY = currentY + initialY;
const normalizedX = (centerX - earWidth) / (containerWidth - 2 * earWidth);
const normalizedY = 1 - (centerY / containerHeight);
const panValue = normalizedX * 2 - 1;
pannerNode.setPosition(panValue, 0, 1 - Math.abs(panValue));
const gainValue = normalizedY;
gainNode.gain.setValueAtTime(gainValue, audioContext.currentTime);
positions.push({ x: normalizedX, y: normalizedY });
}
}
function toggleConfigMenu() {
configMenu.classList.toggle('open');
}
function toggleTransparencyLines() {
if (transparencyLinesCheckbox.checked) {
showTransparencyLines();
} else {
hideTransparencyLines();
}
}
function closeConfigMenu() {
configMenu.classList.remove('open');
}
// Function to show the transparency lines
function showTransparencyLines() {
const volumeLevels = [0, 20, 40, 60, 80, 100, 120, 140];
const container = document.querySelector('.container');
const containerHeight = container.offsetHeight;
// Remove the existing lines to avoid duplicates
hideTransparencyLines();
volumeLevels.forEach(level => {
const line = document.createElement('div');
line.classList.add('transparency-line');
// Calculate the line's position based on its level
// The highest level (140) sits at the top of the container (top = 0)
// The lowest level (0) sits at the bottom of the container (top = containerHeight)
line.style.top = `${containerHeight - (level / 140) * containerHeight}px`;
line.textContent = level;
container.appendChild(line);
});
}
// Function to hide the transparency lines
function hideTransparencyLines() {
const lines = document.querySelectorAll('.transparency-line');
lines.forEach(line => line.remove());
}
</script>
</body>
</html>
|
5fe8d5e50dadf463e0a812b4e504e16f
|
{
"intermediate": 0.36617401242256165,
"beginner": 0.43180936574935913,
"expert": 0.20201656222343445
}
|
47,543
|
write Arduino code, for example
|
4a94fa8ed52194faa5dd29c4a131c78c
|
{
"intermediate": 0.44479453563690186,
"beginner": 0.2977016270160675,
"expert": 0.25750380754470825
}
|
47,544
|
how do I delete all Docker containers via PowerShell
|
2c41003b53824426f35b38e0eef569c4
|
{
"intermediate": 0.3406429886817932,
"beginner": 0.34088680148124695,
"expert": 0.31847020983695984
}
|
47,545
|
[WARNING] [Config ] Older configuration version detected (0 instead of 27)
[WARNING] [Config ] Upgrading configuration in progress.
[DEBUG ] [Config ] Upgrading from 0 to 1
[INFO ] [Logger ] Record log in C:\Users\L14\.kivy\logs\kivy_24-04-21_0.txt
[INFO ] [deps ] Successfully imported "kivy_deps.angle" 0.3.3
[INFO ] [deps ] Successfully imported "kivy_deps.glew" 0.3.1
[INFO ] [deps ] Successfully imported "kivy_deps.sdl2" 0.6.0
[INFO ] [Kivy ] v2.2.1
[INFO ] [Kivy ] Installed at "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\__init__.py"
[INFO ] [Python ] v3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
[INFO ] [Python ] Interpreter at "C:\Users\L14\AppData\Local\Programs\Python\Python310\python.exe"
[INFO ] [Logger ] Purge log fired. Processing...
[INFO ] [Logger ] Purge finished!
[INFO ] [Factory ] 190 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO ] [Window ] Provider: sdl2
[INFO ] [GL ] Using the "OpenGL" graphics system
[INFO ] [GL ] GLEW initialization succeeded
[INFO ] [GL ] Backend used <glew>
[INFO ] [GL ] OpenGL version <b'4.6.0 Compatibility Profile Context 23.11.1.231017'>
[INFO ] [GL ] OpenGL vendor <b'ATI Technologies Inc.'>
[INFO ] [GL ] OpenGL renderer <b'AMD Radeon (TM) Graphics'>
[INFO ] [GL ] OpenGL parsed version: 4, 6
[INFO ] [GL ] Shading version <b'4.60'>
[INFO ] [GL ] Texture max size <16384>
[INFO ] [GL ] Texture max units <32>
[INFO ] [Window ] auto add sdl2 input provider
[INFO ] [Window ] virtual keyboard not allowed, single mode, not docked
[INFO ] [Text ] Provider: sdl2
[INFO ] [GL ] NPOT texture support is available
[INFO ] [Base ] Start application main loop
[INFO ] [Base ] Leaving application in progress...
Traceback (most recent call last):
File "kivy\properties.pyx", line 961, in kivy.properties.ObservableDict.__getattr__
KeyError: 'input_height'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\L14\Documents\Projets\Healthy\Healthy_BMI.py", line 88, in <module>
HealthyBMIAppli().run()
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\app.py", line 956, in run
runTouchApp()
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\base.py", line 574, in runTouchApp
EventLoop.mainloop()
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\base.py", line 339, in mainloop
self.idle()
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\base.py", line 383, in idle
self.dispatch_input()
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\base.py", line 334, in dispatch_input
post_dispatch_input(*pop(0))
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\base.py", line 263, in post_dispatch_input
listener.dispatch('on_motion', etype, me)
File "kivy\_event.pyx", line 731, in kivy._event.EventDispatcher.dispatch
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\core\window\__init__.py", line 1691, in on_motion
self.dispatch('on_touch_down', me)
File "kivy\_event.pyx", line 731, in kivy._event.EventDispatcher.dispatch
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\core\window\__init__.py", line 1708, in on_touch_down
if w.dispatch('on_touch_down', touch):
File "kivy\_event.pyx", line 731, in kivy._event.EventDispatcher.dispatch
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\uix\screenmanager.py", line 1210, in on_touch_down
return super(ScreenManager, self).on_touch_down(touch)
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\uix\widget.py", line 589, in on_touch_down
if child.dispatch('on_touch_down', touch):
File "kivy\_event.pyx", line 731, in kivy._event.EventDispatcher.dispatch
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\uix\relativelayout.py", line 306, in on_touch_down
ret = super(RelativeLayout, self).on_touch_down(touch)
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\uix\widget.py", line 589, in on_touch_down
if child.dispatch('on_touch_down', touch):
File "kivy\_event.pyx", line 731, in kivy._event.EventDispatcher.dispatch
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\uix\widget.py", line 589, in on_touch_down
if child.dispatch('on_touch_down', touch):
File "kivy\_event.pyx", line 731, in kivy._event.EventDispatcher.dispatch
File "C:\Users\L14\AppData\Local\Programs\Python\Python310\lib\site-packages\kivy\uix\behaviors\button.py", line 151, in on_touch_down
self.dispatch('on_press')
File "kivy\_event.pyx", line 727, in kivy._event.EventDispatcher.dispatch
File "kivy\_event.pyx", line 1307, in kivy._event.EventObservers.dispatch
File "kivy\_event.pyx", line 1231, in kivy._event.EventObservers._dispatch
File "c:\Users\L14\Documents\Projets\Healthy\Healthy_BMI.py", line 56, in on_calculate_button_press
height = float(self.ids.input_height.text)
File "kivy\properties.pyx", line 964, in kivy.properties.ObservableDict.__getattr__
AttributeError: 'super' object has no attribute '__getattr__'. Did you mean: '__setattr__'?
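The failure comes from self.ids.input_height: the ids dictionary is only populated for widgets declared in a .kv file, so it stays empty when the widgets are created in Python, and the button handler is never bound either. A minimal sketch of a MainScreen that keeps direct references and wires the handler; the names mirror the code in the earlier question:
from kivy.uix.screenmanager import Screen
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.label import Label
from kivy.uix.button import Button
from kivy.uix.textinput import TextInput
from kivy.uix.popup import Popup

class MainScreen(Screen):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        layout = BoxLayout(orientation='vertical')
        self.add_widget(layout)
        layout.add_widget(Label(text='Enter your height and weight to calculate your BMI'))
        # Keep direct references instead of relying on self.ids
        self.input_height = TextInput(multiline=False, hint_text='Height (in cm)')
        self.input_weight = TextInput(multiline=False, hint_text='Weight (in kg)')
        calculate_button = Button(text='Calculate BMI', font_size=20)
        calculate_button.bind(on_press=self.on_calculate_button_press)  # wire the handler
        for widget in (self.input_height, self.input_weight, calculate_button):
            layout.add_widget(widget)

    def on_calculate_button_press(self, instance):
        height = float(self.input_height.text)
        weight = float(self.input_weight.text)
        bmi = weight / (height ** 2) * 10000
        Popup(title='Your BMI',
              content=Label(text=f'Your BMI is {bmi:.2f}'),
              size_hint=(None, None), size=(300, 150)).open()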
|
f4362cfa08a7b4ec85762543afadfc00
|
{
"intermediate": 0.4226619601249695,
"beginner": 0.3157869875431061,
"expert": 0.26155102252960205
}
|
47,546
|
I need to wait until the animation below finishes and proceed further with drawing other graphs
The most important to me is how to stop or exit the animation when finished
ani = FuncAnimation(plt.gcf(), animate, fargs=(axs, data), interval=100) # call function ‘animate’ every 100 milliseconds (or 1/10th of a second)
|
3ecdc7aed1d48c2583ad081241162edb
|
{
"intermediate": 0.5247259736061096,
"beginner": 0.25644397735595703,
"expert": 0.21882997453212738
}
|
47,547
|
I have this gradio code:
from chromadb.utils import embedding_functions
import chromadb
from openai import OpenAI
import gradio as gr
import time
anyscale_base_url = "https://api.endpoints.anyscale.com/v1"
multilingual_embeddings = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="jost/multilingual-e5-base-politics-de")
def predict(api_key, user_input, model1, model2, prompt_manipulation=None, direct_steering_option=None):
# client = chromadb.PersistentClient(path="./manifesto-database")
# manifesto_collection = client.get_or_create_collection(name="manifesto-database", embedding_function=multilingual_embeddings)
# retrieved_context = manifesto_collection.query(query_texts=[user_input], n_results=3, where={"ideology": "Authoritarian-right"})
# contexts = [context for context in retrieved_context['documents']]
# print(contexts[0])
prompt = f"""[INST] {user_input} [/INST]"""
client = OpenAI(base_url=anyscale_base_url, api_key=api_key)
response1 = client.completions.create(
model=model1,
prompt=prompt,
temperature=0.7,
max_tokens=1000).choices[0].text
response2 = client.completions.create(
model=model2,
prompt=prompt,
temperature=0.7,
max_tokens=1000).choices[0].text
return response1, response2
def main():
description = “This is a simple interface to compare two model prodided by Anyscale. Please enter your API key and your message.”
with gr.Blocks() as demo:
# Prompt manipulation setup
with gr.Row():
prompt_manipulation = gr.Dropdown(
label=“Prompt Manipulation”,
choices=[
“Impersonation (direct steering)”,
“Most similar RAG (indirect steering with related context)”,
“Random RAG (indirect steering with randomized context)”
]
)
with gr.Row():
api_key_input = gr.Textbox(label=“API Key”, placeholder=“Enter your API key here”, show_label=True, type=“password”)
user_input = gr.Textbox(label=“Prompt”, placeholder=“Enter your message here”)
model_selector1 = gr.Dropdown(label=“Model 1”, choices=[“mistralai/Mixtral-8x7B-Instruct-v0.1”, “mistralai/Mixtral-8x22B-Instruct-v0.1”])
model_selector2 = gr.Dropdown(label=“Model 2”, choices=[“mistralai/Mixtral-8x7B-Instruct-v0.1”, “mistralai/Mixtral-8x22B-Instruct-v0.1”])
submit_btn = gr.Button(“Submit”)
with gr.Row():
output1 = gr.Textbox(label=“Model 1 Response”)
output2 = gr.Textbox(label=“Model 2 Response”)
submit_btn.click(fn=predict, inputs=[api_key_input, user_input, model_selector1, model_selector2], outputs=[output1, output2])
demo.launch()
if name == “main”:
main()
I want to add another drop-down menu next to the Prompt Manipulation menu. But this should only be accessible when something is selected from the Prompt Manipulation drop-down. Based on this selection, there should be four different options in the new drop-down menu.
|
22dc4905cb59d7392f3b05c8024451bf
|
{
"intermediate": 0.31724628806114197,
"beginner": 0.46412014961242676,
"expert": 0.2186334878206253
}
|
47,548
|
I need to wait until the animation below finishes and then proceed with drawing other graphs.
The most important thing to me is how to stop or exit the animation when it has finished.
Another issue is that "plt.show()" blocks execution until the plot is closed, BUT I want to draw additional data after the animation is finished.
ani = FuncAnimation(plt.gcf(), animate, fargs=(axs, data), interval=100)  # call function 'animate' every 100 milliseconds (or 1/10th of a second)
plt.show()
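One way to do this, as a minimal self-contained sketch (N_FRAMES and draw_additional_graphs are hypothetical names introduced here, not part of the original code): give the animation a fixed number of frames with repeat=False, stop its timer from inside the last frame with ani.event_source.stop(), and draw the extra data on the same axes at that point, before plt.show() returns.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
data = np.random.randn(200)
N_FRAMES = 50  # hypothetical frame count for this sketch

def draw_additional_graphs(ax):
    # placeholder for whatever should be drawn once the animation is done
    ax.axhline(data.mean(), linestyle="--")

def animate(frame, ax, data):
    ax.clear()
    ax.plot(data[: 4 * (frame + 1)])
    if frame == N_FRAMES - 1:
        ani.event_source.stop()       # stop the timer driving the animation
        draw_additional_graphs(ax)    # continue drawing on the same axes

ani = FuncAnimation(fig, animate, fargs=(ax, data),
                    frames=N_FRAMES, interval=100, repeat=False)
plt.show()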
|
b3edb0089ead8fa715eb4d797b8ef630
|
{
"intermediate": 0.62840336561203,
"beginner": 0.18925641477108002,
"expert": 0.18234024941921234
}
|
47,549
|
I made this gradio app:
from chromadb.utils import embedding_functions
import chromadb
from openai import OpenAI
import gradio as gr
import time
anyscale_base_url = "https://api.endpoints.anyscale.com/v1"
multilingual_embeddings = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="jost/multilingual-e5-base-politics-de")
def predict(api_key, user_input, model1, model2, prompt_manipulation=None, direct_steering_option=None):
# client = chromadb.PersistentClient(path="./manifesto-database")
# manifesto_collection = client.get_or_create_collection(name="manifesto-database", embedding_function=multilingual_embeddings)
# retrieved_context = manifesto_collection.query(query_texts=[user_input], n_results=3, where={"ideology": "Authoritarian-right"})
# contexts = [context for context in retrieved_context['documents']]
# print(contexts[0])
prompt = f"""[INST] {user_input} [/INST]"""
client = OpenAI(base_url=anyscale_base_url, api_key=api_key)
response1 = client.completions.create(
model=model1,
prompt=prompt,
temperature=0.7,
max_tokens=1000).choices[0].text
response2 = client.completions.create(
model=model2,
prompt=prompt,
temperature=0.7,
max_tokens=1000).choices[0].text
return response1, response2
def main():
description = "This is a simple interface to compare two models provided by Anyscale. Please enter your API key and your message."
with gr.Blocks() as demo:
# Prompt manipulation dropdown
with gr.Row():
prompt_manipulation = gr.Dropdown(
label="Prompt Manipulation",
choices=[
"None",
"Impersonation (direct steering)",
"Most similar RAG (indirect steering with related context)",
"Random RAG (indirect steering with randomized context)"
],
value="None", # default value
)
direct_steering_option = gr.Dropdown(label="Direct Steering Option")
with gr.Row():
api_key_input = gr.Textbox(label="API Key", placeholder="Enter your API key here", show_label=True, type="password")
user_input = gr.Textbox(label="Prompt", placeholder="Enter your message here")
model_selector1 = gr.Dropdown(label="Model 1", choices=["mistralai/Mixtral-8x7B-Instruct-v0.1", "mistralai/Mixtral-8x22B-Instruct-v0.1"])
model_selector2 = gr.Dropdown(label="Model 2", choices=["mistralai/Mixtral-8x7B-Instruct-v0.1", "mistralai/Mixtral-8x22B-Instruct-v0.1"])
submit_btn = gr.Button("Submit")
with gr.Row():
output1 = gr.Textbox(label="Model 1 Response")
output2 = gr.Textbox(label="Model 2 Response")
submit_btn.click(fn=predict, inputs=[api_key_input, user_input, model_selector1, model_selector2], outputs=[output1, output2])
demo.launch()
if __name__ == "__main__":
main()
As you can see, the direct_steering_option dropdown has no values so far. This dropdown menu should show different options depending on the value selected in the prompt_manipulation dropdown. How can I achieve this?
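A minimal sketch of one way to wire this up, assuming Gradio 3.x where gr.update() is available (in Gradio 4 the handler would return gr.Dropdown(choices=...) instead); the steering_options mapping and its values are made-up placeholders:
# illustrative mapping from the first dropdown's value to the dependent choices
steering_options = {
    "Impersonation (direct steering)": ["Option 1", "Option 2", "Option 3", "Option 4"],
    "Most similar RAG (indirect steering with related context)": ["Context A", "Context B", "Context C", "Context D"],
    "Random RAG (indirect steering with randomized context)": ["Random 1", "Random 2", "Random 3", "Random 4"],
}

def update_direct_steering(choice):
    options = steering_options.get(choice, [])
    # return an update for the second dropdown: new choices, cleared value,
    # and only interactive once a real selection was made in the first one
    return gr.update(choices=options, value=None, interactive=bool(options))

# inside `with gr.Blocks() as demo:`, after both dropdowns exist:
prompt_manipulation.change(
    fn=update_direct_steering,
    inputs=prompt_manipulation,
    outputs=direct_steering_option,
)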
|
21f029808fd7c0e1e4fae0061872c735
|
{
"intermediate": 0.39012932777404785,
"beginner": 0.41811561584472656,
"expert": 0.19175514578819275
}
|
47,550
|
Correct this code to answer the question. Don't make too many changes.
#include <bits/stdc++.h>
#include <vector>
#include <queue>
using namespace std;
void add_edge(vector<vector<int>> &adj, int u, int v)
{
adj[u - 1].push_back(v - 1);
adj[v - 1].push_back(u - 1);
}
void bfs(vector<vector<int>> &adj, int start, vector<int> &vis)
{
queue<int> myqueue;
vis[start] = true;
myqueue.push(start);
while (!myqueue.empty())
{
int curr = myqueue.front();
myqueue.pop();
for (int neighbour : adj[curr])
{
if (!vis[neighbour])
{
vis[neighbour] = true;
myqueue.push(neighbour);
}
}
}
}
int main()
{
int n, m;
cin >> n >> m;
vector<vector<int>> adj(n);
vector<int> vis(n, 0);
for (int i = 0; i < m; i++)
{
int u, v;
cin >> u >> v;
add_edge(adj, u, v);
}
vector<int> arr;
for (int i = 0; i < n; i++)
{
if (!vis[i])
{
bfs(adj, i, vis);
arr.push_back(i);
}
}
// int cnt=arr.size();
int cnt = arr.size() - 1;
cout << cnt << "\n";
for (int i = 0; i < cnt; i++)
{
cout << arr[i]+1 << " " << arr[i + 1]+1 << "\n";
}
return 0;
}
The offices and (bi-directional) connections (both normal and fiber) are given to you. Each normal connection connects two offices and has a latency. Each fiber connection connects the HQ with one office and also has a latency. The total latency of a path is the sum of the latencies on its connections. You are to output the maximum number of fiber connections that can be removed such that the smallest latency between the HQ and any other node remains the same as before.
There are some number of offices, normal connections and high-speed fiber connections. Each normal connection connects two offices (bi-directionally) with a latency. Each fiber connection connects the HQ with an office (bi-directionally) with a latency.
Input Format
The first line of the input file will contain three space-separated integers: the number of offices, the number of normal connections and the number of fiber connections.
After this there will be one line per normal connection, each containing three space-separated integers: the two offices that are connected and the latency of the connection, respectively.
After this there will be one line per fiber connection, each containing the office connected to the HQ and the latency of the fiber connection, respectively.
Output Format
Output only one integer - the maximum number of fiber connections that can be removed without changing the latency of the smallest-latency path from office 1 to any other office.
|
4997b7f4437011d69e485e3905d5c9f4
|
{
"intermediate": 0.31178513169288635,
"beginner": 0.5039395093917847,
"expert": 0.18427540361881256
}
|
47,551
|
i have following code to train models on my csv dataset files :
intervals = [1,2,3,5]
look_back = 60
optimizer = keras.optimizers.Adam(learning_rate=0.0003,clipvalue=0.5)
for csv_file in csv_files:
file_path = os.path.join(csv_directory, csv_file)
unique_part = file_path.split('_')[-2]
df = pd.read_csv(file_path)
include_substrings = ["y_"]
exact_columns_to_keep = ["Open", "High", "Low", "Close","volume_base", "volume_crypto", "tradecount",]
filtered_columns = [col for col in df.columns if any(col.startswith(s) for s in include_substrings)]
columns_to_keep = list(set(exact_columns_to_keep + filtered_columns))
df = df[columns_to_keep]
df.head()
features = df.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
# Scale the features and targets
feature_scaler = MinMaxScaler(feature_range=(0, 1))
scaled_features = feature_scaler.fit_transform(features)
joblib.dump(feature_scaler,f'x_scalers/x_{unique_part}_scaler.sav')
for p in intervals:
# Corrected to create a flat list of column names
y_cols = [f'y_High_{p}d', f'y_Low_{p}d', f'y_Priority_{p}d']
# Now you can properly index df with y_cols
targets = df[y_cols]
# Continuing with your existing code…
target_scaler = MinMaxScaler(feature_range=(0, 1))
scaled_targets = target_scaler.fit_transform(targets)
joblib.dump(target_scaler,f'y_scalers/y{i}_{unique_part}_scaler.sav')
x_train = []
y_train = []
for i in range(look_back, len(scaled_features)):
x_train.append(scaled_features[i-look_back:i])
y_train.append(scaled_targets[i]) # Assuming the target is the next time step
x_train, y_train = np.array(x_train), np.array(y_train)
input_shape = (x_train.shape[1], x_train.shape[2])
model = Sequential()
model.add(LSTM(units = 100, activation = 'tanh', return_sequences=True))
model.add(Dropout(0.4))
model.add(LSTM(units = 240, activation = 'tanh'))
model.add(Dropout(0.5))
model.add(Dense(units = 3))
model.compile(optimizer=optimizer, loss = 'mean_squared_error', metrics=[tf.keras.metrics.MeanAbsoluteError()])
model.fit(x_train, y_train,epochs = 1000)
mae = model.evaluate(x_train, y_train)
model.save(f'models/lstm_model_{unique_part}_y{p}_mae_{mae}.h5')
How can I free resources (GPU, RAM, ...) after each model is trained and saved?
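A sketch of the usual cleanup steps, to be placed at the end of each loop iteration right after model.save(...). Whether GPU memory is actually handed back to the operating system depends on the TensorFlow version, so treat this as best effort rather than a guarantee:
import gc
import tensorflow as tf

# ... after model.save(...), at the end of each (csv_file, p) iteration:
del model, x_train, y_train          # drop references to the large objects
tf.keras.backend.clear_session()     # reset Keras' global graph/session state
gc.collect()                         # let Python reclaim the freed memory

# TensorFlow often keeps its GPU memory pool for the lifetime of the process;
# if that becomes a problem, running each training job in its own process
# (e.g. via multiprocessing.Process) is the more reliable way to release it.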
|
ef7080aad3dd0323b8372312f46f5542
|
{
"intermediate": 0.34073635935783386,
"beginner": 0.39686381816864014,
"expert": 0.2623997926712036
}
|
47,552
|
How do I make Python code pause before exiting until I press a key?
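Two common options, shown as a small sketch (the second one is Windows-only):
# simplest, cross-platform: waits for the Enter key
input("Press Enter to exit...")

# Windows-only: reacts to any single key press
import msvcrt
print("Press any key to exit...")
msvcrt.getch()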
|
488040f62bbc30c994e9321d1bbd22be
|
{
"intermediate": 0.45807337760925293,
"beginner": 0.18217507004737854,
"expert": 0.35975152254104614
}
|
47,553
|
hi
|
5ddd84e2621c9f9ab45814e59fb7ae1f
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
47,554
|
Write a C# script for my character in Unity to move forward with a variable speed. Divide the x axis into 3 lanes; the character should be able to switch between lanes with the left and right (horizontal) controls. Also add a variable to control the smoothness of lane switching, and only move between the 3 assigned lanes.
|
f9f570bd55cb53c115780ff39ef67f20
|
{
"intermediate": 0.44831225275993347,
"beginner": 0.23878642916679382,
"expert": 0.3129012882709503
}
|
47,555
|
For the following question THAT IS AN ESSAY QUESTION AND ALL FOR EDUCATIONAL PURPOSES:
“Task 2 (600 word):
Investigate how an organisation could benefit from the following security best practices (e.g., Cyber essentials, ISO27000 series standard, NIST800, COBIT, GDPR) and information security policies, regulations, procedures and guidelines.”
Whilst taking the following question specifications into consideration, IN EXACTLY ONLY 600 WORDS (that doesn't factor in references) WRITE AN ENTRY FOR Task 2. Please ensure that during the creation of this entry the following applies: that the tone one would use when answering an important examination question is used and abides by the following: employs a vast range of references (embedded and harvard referenced) utilises impressive grammar/ demonstrates an astonishing linguistic prowess, shows extensive research that is abundant in detail and illustrates amazing attention to detail, is both concise and insightful, demonstrates excellent reflection, showcases linguistic elegance of a publishable quality, and finally has embedded harvard references and a separate references section. Please also to ensure to abide by the following grading criteria to ensure whatever is produced is workings of which are deserving of the highest/ best grading band possible: “A very comprehensive technically correct submission. All major aspects of the assignment covered. Clear expression of ideas. A very high standard of presentation. All problems identified and solutions are feasible and within the restrictions of the assignment. All sources acknowledged and referenced to a high standard.” (please also display the word count minus references.):
|
d6561aaf0ad99556a864c1c83eb5e350
|
{
"intermediate": 0.17599236965179443,
"beginner": 0.47350847721099854,
"expert": 0.35049915313720703
}
|
47,556
|
Write a C# script for my character in Unity to move forward with a variable speed. Divide the x axis into 3 lanes; the character should be able to switch between lanes with the left and right (horizontal) controls. Lane switching should be smoothed using a variable. I also have an Animator component in the child object, and I want to add the ability to jump using Space or any jump-related control. I have set a condition named isJumping to switch to the jump animation.
|
f4a1aca14f3ea2e3e0ac7a7a3edc9650
|
{
"intermediate": 0.4471053183078766,
"beginner": 0.2742515206336975,
"expert": 0.2786431610584259
}
|
47,557
|
$(".custom-select").each(function () {
var classes = $(this).attr("class"),
id = $(this).attr("id"),
name = $(this).attr("name");
var template = '<div class="' + classes + '">';
template +=
'<span class="custom-select-trigger">' +
$(this).attr("placeholder") +
"</span>";
template += '<div class="custom-options">';
$(this)
.find("option")
.each(function () {
template +=
'<span class="custom-option ' +
$(this).attr("class") +
'" data-value="' +
$(this).attr("value") +
'">' +
$(this).html() +
"</span>";
});
template += "</div></div>";
$(this).wrap('<div class="custom-select-wrapper"></div>');
$(this).hide();
$(this).after(template);
});
$(".custom-option:first-of-type").hover(
function () {
$(this).parents(".custom-options").addClass("option-hover");
},
function () {
$(this).parents(".custom-options").removeClass("option-hover");
}
);
$(".custom-select-trigger").on("click", function () {
$("html").one("click", function () {
$(".custom-select").removeClass("opened");
});
$(this).parents(".custom-select").toggleClass("opened");
event.stopPropagation();
});
$(".custom-option").on("click", function () {
$(this)
.parents(".custom-select-wrapper")
.find("select")
.val($(this).data("value"));
$(this)
.parents(".custom-options")
.find(".custom-option")
.removeClass("selection");
$(this).addClass("selection");
$(this).parents(".custom-select").removeClass("opened");
$(this)
.parents(".custom-select")
.find(".custom-select-trigger")
.text($(this).text());
});
$(".custom-select-wrapper .custom-options .custom-option").on("click", function() {
var value = $(this).data("value");
var select = $(this).closest(".custom-select-wrapper").find("select");
select.val(value).trigger("change");
});
I need to change my JS code so that if the select's placeholder equals an option's value, that option gets the selected attribute.
|
50a34a79a249da74a99cf6ff7ca881aa
|
{
"intermediate": 0.145483136177063,
"beginner": 0.7349308729171753,
"expert": 0.11958599835634232
}
|
47,558
|
$(".custom-select").each(function () {
var classes = $(this).attr("class"),
id = $(this).attr("id"),
name = $(this).attr("name");
var template = '<div class="' + classes + '">';
template +=
'<span class="custom-select-trigger">' +
$(this).attr("placeholder") +
"</span>";
template += '<div class="custom-options">';
$(this)
.find("option")
.each(function () {
template +=
'<span class="custom-option ' +
$(this).attr("class") +
'" data-value="' +
$(this).attr("value") +
'">' +
$(this).html() +
"</span>";
});
template += "</div></div>";
$(this).wrap('<div class="custom-select-wrapper"></div>');
$(this).hide();
$(this).after(template);
});
$(".custom-option:first-of-type").hover(
function () {
$(this).parents(".custom-options").addClass("option-hover");
},
function () {
$(this).parents(".custom-options").removeClass("option-hover");
}
);
$(".custom-select-trigger").on("click", function () {
$("html").one("click", function () {
$(".custom-select").removeClass("opened");
});
$(this).parents(".custom-select").toggleClass("opened");
event.stopPropagation();
});
$(".custom-option").on("click", function () {
$(this)
.parents(".custom-select-wrapper")
.find("select")
.val($(this).data("value"));
$(this)
.parents(".custom-options")
.find(".custom-option")
.removeClass("selection");
$(this).addClass("selection");
$(this).parents(".custom-select").removeClass("opened");
$(this)
.parents(".custom-select")
.find(".custom-select-trigger")
.text($(this).text());
});
$(".custom-select-wrapper .custom-options .custom-option").on("click", function() {
var value = $(this).data("value");
var select = $(this).closest(".custom-select-wrapper").find("select");
select.val(value).trigger("change");
});
I need to change my JS code so that if the select's placeholder equals an option's value, that option gets the selected attribute.
|
65e726f1974e97f5fa9746242e2ff322
|
{
"intermediate": 0.145483136177063,
"beginner": 0.7349308729171753,
"expert": 0.11958599835634232
}
|
47,559
|
$(".custom-select").each(function () {
var classes = $(this).attr("class"),
id = $(this).attr("id"),
name = $(this).attr("name");
var template = '<div class="' + classes + '">';
template +=
'<span class="custom-select-trigger">' +
$(this).attr("placeholder") +
"</span>";
template += '<div class="custom-options">';
$(this)
.find("option")
.each(function () {
template +=
'<span class="custom-option ' +
$(this).attr("class") +
'" data-value="' +
$(this).attr("value") +
'">' +
$(this).html() +
"</span>";
});
template += "</div></div>";
$(this).wrap('<div class="custom-select-wrapper"></div>');
$(this).hide();
$(this).after(template);
});
$(".custom-option:first-of-type").hover(
function () {
$(this).parents(".custom-options").addClass("option-hover");
},
function () {
$(this).parents(".custom-options").removeClass("option-hover");
}
);
$(".custom-select-trigger").on("click", function () {
$("html").one("click", function () {
$(".custom-select").removeClass("opened");
});
$(this).parents(".custom-select").toggleClass("opened");
event.stopPropagation();
});
$(".custom-option").on("click", function () {
$(this)
.parents(".custom-select-wrapper")
.find("select")
.val($(this).data("value"));
$(this)
.parents(".custom-options")
.find(".custom-option")
.removeClass("selection");
$(this).addClass("selection");
$(this).parents(".custom-select").removeClass("opened");
$(this)
.parents(".custom-select")
.find(".custom-select-trigger")
.text($(this).text());
});
$(".custom-select-wrapper .custom-options .custom-option").on("click", function() {
var value = $(this).data("value");
var select = $(this).closest(".custom-select-wrapper").find("select");
select.val(value).trigger("change");
});
I need to change my JS code so that if the select's placeholder equals an option's value, that option gets the selected attribute.
|
15e91cc4caa6842d9e77353d57b20da1
|
{
"intermediate": 0.145483136177063,
"beginner": 0.7349308729171753,
"expert": 0.11958599835634232
}
|
47,560
|
$(".custom-select").each(function () {
var classes = $(this).attr("class"),
id = $(this).attr("id"),
name = $(this).attr("name");
var template = '<div class="' + classes + '">';
template +=
'<span class="custom-select-trigger">' +
$(this).attr("placeholder") +
"</span>";
template += '<div class="custom-options">';
$(this)
.find("option")
.each(function () {
template +=
'<span class="custom-option ' +
$(this).attr("class") +
'" data-value="' +
$(this).attr("value") +
'">' +
$(this).html() +
"</span>";
});
template += "</div></div>";
$(this).wrap('<div class="custom-select-wrapper"></div>');
$(this).hide();
$(this).after(template);
});
$(".custom-option:first-of-type").hover(
function () {
$(this).parents(".custom-options").addClass("option-hover");
},
function () {
$(this).parents(".custom-options").removeClass("option-hover");
}
);
$(".custom-select-trigger").on("click", function () {
$("html").one("click", function () {
$(".custom-select").removeClass("opened");
});
$(this).parents(".custom-select").toggleClass("opened");
event.stopPropagation();
});
$(".custom-option").on("click", function () {
$(this)
.parents(".custom-select-wrapper")
.find("select")
.val($(this).data("value"));
$(this)
.parents(".custom-options")
.find(".custom-option")
.removeClass("selection");
$(this).addClass("selection");
$(this).parents(".custom-select").removeClass("opened");
$(this)
.parents(".custom-select")
.find(".custom-select-trigger")
.text($(this).text());
});
$(".custom-select-wrapper .custom-options .custom-option").on("click", function() {
var value = $(this).data("value");
var select = $(this).closest(".custom-select-wrapper").find("select");
select.val(value).trigger("change");
});
I need to change my JS code so that if the select's placeholder equals an option's value, that option gets the selected attribute.
|
029f34eca4d039c27c3ac48ba47a9f2c
|
{
"intermediate": 0.145483136177063,
"beginner": 0.7349308729171753,
"expert": 0.11958599835634232
}
|
47,561
|
$(".custom-select").each(function () {
var classes = $(this).attr("class"),
id = $(this).attr("id"),
name = $(this).attr("name");
var template = '<div class="' + classes + '">';
template +=
'<span class="custom-select-trigger">' +
$(this).attr("placeholder") +
"</span>";
template += '<div class="custom-options">';
$(this)
.find("option")
.each(function () {
template +=
'<span class="custom-option ' +
$(this).attr("class") +
'" data-value="' +
$(this).attr("value") +
'">' +
$(this).html() +
"</span>";
});
template += "</div></div>";
$(this).wrap('<div class="custom-select-wrapper"></div>');
$(this).hide();
$(this).after(template);
});
$(".custom-option:first-of-type").hover(
function () {
$(this).parents(".custom-options").addClass("option-hover");
},
function () {
$(this).parents(".custom-options").removeClass("option-hover");
}
);
$(".custom-select-trigger").on("click", function () {
$("html").one("click", function () {
$(".custom-select").removeClass("opened");
});
$(this).parents(".custom-select").toggleClass("opened");
event.stopPropagation();
});
$(".custom-option").on("click", function () {
$(this)
.parents(".custom-select-wrapper")
.find("select")
.val($(this).data("value"));
$(this)
.parents(".custom-options")
.find(".custom-option")
.removeClass("selection");
$(this).addClass("selection");
$(this).parents(".custom-select").removeClass("opened");
$(this)
.parents(".custom-select")
.find(".custom-select-trigger")
.text($(this).text());
});
$(".custom-select-wrapper .custom-options .custom-option").on("click", function() {
var value = $(this).data("value");
var select = $(this).closest(".custom-select-wrapper").find("select");
select.val(value).trigger("change");
});
I need to change my JS code so that if the select's placeholder equals an option's value, that option gets the selected attribute.
|
34674e729c1747f30d8e208847a91ff8
|
{
"intermediate": 0.145483136177063,
"beginner": 0.7349308729171753,
"expert": 0.11958599835634232
}
|
47,562
|
$(".custom-select").each(function () {
var classes = $(this).attr("class"),
id = $(this).attr("id"),
name = $(this).attr("name");
var template = '<div class="' + classes + '">';
template +=
'<span class="custom-select-trigger">' +
$(this).attr("placeholder") +
"</span>";
template += '<div class="custom-options">';
$(this)
.find("option")
.each(function () {
template +=
'<span class="custom-option ' +
$(this).attr("class") +
'" data-value="' +
$(this).attr("value") +
'">' +
$(this).html() +
"</span>";
});
I need to change my JS code so that if the select's placeholder equals an option's value, that option gets the selected attribute.
|
ec8edc3bd225db06b720980dec58549f
|
{
"intermediate": 0.30694442987442017,
"beginner": 0.44620949029922485,
"expert": 0.24684609472751617
}
|
47,563
|
Using this .env file:
PORT=3000
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASSWORD=1234
MYSQL_DATABASE=frankenstein
Create the SQL database from Node: delete the database and tables if they exist (DROP TABLE IF EXISTS), and create 2 tables, using ES6 imports and the mysql2 package, with this file (db.js) for the connection:
import mysql from 'mysql2/promise';
const { MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD, MYSQL_DATABASE } = process.env;
let pool;
const getConnection = async () => {
if (!pool) {
pool = mysql.createPool({
connectionLimit: 10,
host: MYSQL_HOST,
user: MYSQL_USER,
password: MYSQL_PASSWORD,
database: MYSQL_DATABASE,
timezone: 'Z',
});
}
return await pool.getConnection();
};
export { getConnection };
|
aa6158ac1ac4e7db1e4f8877ce7b26a8
|
{
"intermediate": 0.48413753509521484,
"beginner": 0.3528771698474884,
"expert": 0.16298536956310272
}
|
47,564
|
1:1 Error: Expected newline after "use client" directive. lines-around-directive
This is an ESLint error that occurs in Next.js 13. Please fix it.
|
92c5a44a72ea1d506cec80c20aebadde
|
{
"intermediate": 0.43525436520576477,
"beginner": 0.294435977935791,
"expert": 0.2703096568584442
}
|
47,565
|
How do I check if a file exists in Python with the os module?
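A minimal example; "data/config.json" is just a placeholder path:
import os

path = "data/config.json"

if os.path.exists(path):    # True for files and directories
    print("exists")
if os.path.isfile(path):    # True only for regular files
    print("is a regular file")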
|
170f3beae66d7cba39fcacd39a5a9d03
|
{
"intermediate": 0.46470901370048523,
"beginner": 0.2598950266838074,
"expert": 0.2753959596157074
}
|
47,566
|
File "/usr/lib64/python3.12/site-packages/dbus/service.py", line 712, in _message_cb
retval = candidate_method(self, *args, **keywords)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/waydroid/tools/actions/container_manager.py", line 34, in Start
do_start(self.args, session)
File "/usr/lib/waydroid/tools/actions/container_manager.py", line 189, in do_start
helpers.lxc.start(args)
File "/usr/lib/waydroid/tools/helpers/lxc.py", line 397, in start
wait_for_running(args)
File "/usr/lib/waydroid/tools/helpers/lxc.py", line 391, in wait_for_running
|
ee19d8176181de4b413d3e1f1d942831
|
{
"intermediate": 0.38972023129463196,
"beginner": 0.4542795419692993,
"expert": 0.15600018203258514
}
|
47,567
|
What is the purpose of reference variables in Java?
|
2c5f253db3cfa61d3e25e14372c529ec
|
{
"intermediate": 0.4631463289260864,
"beginner": 0.41057288646698,
"expert": 0.12628073990345
}
|
47,568
|
make sample config with `credits` currency
namespace Shop.Configs;
public class MainConfig
{
public List<ShopCurrency> Currencies { get; set; }
public DatabaseData DatabaseData { get; set; }
}
public class ShopCurrency
{
public string Name { get; set; }
public string ShortName { get; set; }
public string FullName { get; set; }
public int DefaultValue { get; set; }
}
public class DatabaseData
{
public string Host { get; set; }
public string User { get; set; }
public string Database { get; set; }
public string Password { get; set; }
}
|
eef77eeab1ca28d20c7553bcf8f2d774
|
{
"intermediate": 0.37092554569244385,
"beginner": 0.4085434079170227,
"expert": 0.22053103148937225
}
|
47,569
|
HI!
|
18b7b1b89e913cbd67715172319144ac
|
{
"intermediate": 0.3374777138233185,
"beginner": 0.2601830065250397,
"expert": 0.40233927965164185
}
|
47,570
|
Define the decimal type with 3 digits in the fractional part and 5 total digits.
HINT (by Luong Nhan):
Decimal(Precision, Scale)
- Precision: total count of digits
- Scale: total count of digits to the right of the decimal point
|
7f81138a51ac2620442cd3ef4c55a692
|
{
"intermediate": 0.44407230615615845,
"beginner": 0.197031170129776,
"expert": 0.35889652371406555
}
|
47,571
|
Why doesn't it work?
import socket
import concurrent.futures

def scan_port(ip, port):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        result = sock.connect_ex((ip, port))
        if result == 0:
            return f"Port {port} is open"
        sock.close()
    except socket.error:
        print("error")
        return

# Address and list of ports to scan
ip_address = '50.7.93.85'
ports = range(1, 120)
# Use ThreadPoolExecutor to scan in parallel
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
    results = executor.map(scan_port, [ip_address] * len(ports), ports)
    # print(results)
# Print the results
for result in results:
    if result:
        print(result)
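For reference, a tidied-up sketch of the same scan: the original only closes the socket when the port is closed, and if none of ports 1-119 are open the script prints nothing at all, which can look like it is not working; the 1.0 s timeout here is an arbitrary choice.
import socket
import concurrent.futures

def scan_port(ip, port):
    try:
        # the `with` block closes the socket in every case
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            if sock.connect_ex((ip, port)) == 0:
                return f"Port {port} is open"
    except socket.error as exc:
        return f"Port {port}: error {exc}"
    return None

ip_address = "50.7.93.85"
ports = range(1, 120)

with concurrent.futures.ThreadPoolExecutor(max_workers=100) as executor:
    for result in executor.map(scan_port, [ip_address] * len(ports), ports):
        if result:
            print(result)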
|
2ac8c8d39337951b597919e88c191525
|
{
"intermediate": 0.42638546228408813,
"beginner": 0.30136147141456604,
"expert": 0.2722530663013458
}
|
47,572
|
My row styles aren't working correctly.
I'm trying to fix this function:
(defn- row-style-fn [_ _ {quote :raw-data}]
(cond (:status quote)
"bg-secondary-yellow"
(:approved quote )
"bg-light-green-2"
:else
"bg-light-grey"))
(defn quotes-table [job-id sales-contractor?]
[table/table-component {:table-id :quote.job
:header [header-render]
:items-sub [:quote.job/filtered-by-status]
:search {:placeholder "Search"
:render-fn status-filter}
:actions [{:label-text "Email Quote"
:type :button
:button-type :secondary
:dispatch [:quote.job/open-email-quote-modal job-id]
:required-roles #{:customer-job/staff}}
{:label-text @(subscribe [:quote.job/create-button-text])
:type :button
:button-type :primary
:disabled @(subscribe [:quote.job/approved-quotes-exist?])
:dispatch [:quote.job/new-quote-page job-id]
:required-roles #{:customer-job/staff}}]
:hover-over true
:click {:dispatch :quote.job/navigate-to-edit-quote-page}
:row-style-fn row-style-fn
:columns [{:label ""
:key :reference
:render-fn open-pdf-link
:click nil}
{:label "Quote Number"
:key :quote-number
:sortable true}
{:label "Reference"
:key :reference
:sortable true}
{:label "Total (Tax Exclusive)"
:key :sub-total
:format :currency
:sortable true}
{:label "Date Issued"
:key :create-ts
:format :datetime
:sortable true}
{:label "Status"
:key :status
:render-fn render-status}
{:label "Sent"
:key :sent
:format :yes-no}
{:key :id
:render-fn sales-order
:click nil}
{:key :id
:render-fn (partial quote-actions sales-contractor?)
:click nil}]
:sort {:key :create-ts
:direction :descending}}])
The data I'm getting is like this:
[:quotes {
[:sub-total 12000]
[:editable true]
[:quote-s-3-name "d0bba468-3c54-400b-b95a-7d85349df37f"]
[:sent false]
[:currency-code "AUD"]
[:reference "7 - J00008"]
[:sales-order-file-url "/retrieve/095fa6aa-3 … 6b-9650-a75daea56d6c"]
[:total 13200]
[:status :approved]
[:id "1059aa3a-2ed2-42b0-87bb-45edd01b380d"]
[:quote-file-url "/retrieve/1059aa3a-2 … 0b-b95a-7d85349df37f"]
[:comment nil]
[:quote-number "ORC1081"]
[:create-ts #inst "2024-04-19T06:02:22.396-00:00"]}
|
d19a1663f9486751e4b62a69150d6e57
|
{
"intermediate": 0.4278520345687866,
"beginner": 0.4254589080810547,
"expert": 0.1466890424489975
}
|
47,573
|
@echo off
SETLOCAL
for %%i in (*.pdf) do "C:\Program Files (x86)\gs\gs10.03.0\bin\gswin32c.exe" -q -dNOPAUSE -sDEVICE=txtwrite -sOutputFile="%%~ni.txt" -dFirstPage=1 -dLastPage=1 "%%i" -c quit
echo Conversion Complete!
ENDLOCAL
For this, I want to show an option to choose the PDF files.
|
4016d46c8085da7185b9be23861fcea5
|
{
"intermediate": 0.39278632402420044,
"beginner": 0.2880994379520416,
"expert": 0.3191142976284027
}
|
47,574
|
ld: command not found
|
c775769df255b4b8b3a963afaa74da07
|
{
"intermediate": 0.32514509558677673,
"beginner": 0.3876981735229492,
"expert": 0.28715670108795166
}
|
47,575
|
I want a step-by-step plan to make a Space on Hugging Face like PixArt.
|
6c0f89441ea1127ae50fe8b061536646
|
{
"intermediate": 0.37119778990745544,
"beginner": 0.271129310131073,
"expert": 0.3576728403568268
}
|
47,576
|
'...
298233 INFO: Loading module hook 'hook-qtpy.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
298234 INFO: hook-qtpy: selected 'PyQt5' as Qt bindings because hook for 'PyQt5' has been run before.
298974 INFO: Loading module hook 'hook-markdown.py' from 'D:\\anaconda\\Lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
303940 INFO: Loading module hook 'hook-PyQt5.QtGui.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
304522 INFO: Loading module hook 'hook-PyQt5.QtNetwork.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
304793 INFO: Loading module hook 'hook-PyQt5.QtWebChannel.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
304876 INFO: Loading module hook 'hook-PyQt5.QtWebEngineCore.py' from 'D:\\anaconda\\Lib\\site-packages\\PyInstaller\\hooks'...
305091 WARNING: QtLibraryInfo(PyQt5): could not find translations with base name 'qtwebengine'! These translations will not be collected.
Unable to find 'D:\\anaconda\\Lib\\site-packages\\PyQt5\\Qt5\\translations\\qtwebengine_locales' when adding binary and data files.
|
de61512b2833c80ccfd4982c448fc62c
|
{
"intermediate": 0.3971843719482422,
"beginner": 0.32128003239631653,
"expert": 0.2815355956554413
}
|
47,577
|
How do I allow a user to input pi in Python?
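One simple approach, as a sketch: accept the literal string "pi" and map it to math.pi, otherwise parse the input as a float.
import math

raw = input("Enter a number (or 'pi'): ").strip().lower()
value = math.pi if raw == "pi" else float(raw)
print(value)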
|
f04beedab47433411cdb8481c72590d2
|
{
"intermediate": 0.40244993567466736,
"beginner": 0.2002992182970047,
"expert": 0.39725083112716675
}
|
47,578
|
I'm getting the following error:
22:19:18.216 [info] Restart requested ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
22:19:18.218 [warn] Cancel all remaining cells due to dead kernel
22:19:19.477 [info] Process Execution: ~\.conda\envs\tf\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
22:19:19.483 [info] Process Execution: ~\.conda\envs\tf\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-95208yFYTXnN1W1U.json
> cwd: ~\Desktop\Stock-Price-Prediction-Using-LSTM-main
22:19:20.524 [info] Restarted 8277bb6d-9f1c-4aed-a911-2fd5d1ef7886
with the following message:
The Kernel crashed while executing code in the current cell or a previous cell.
Please review the code in the cell(s) to identify a possible cause of the failure.
Click here for more info.
View Jupyter log for further details.
my code:
# %%
import pandas as pd
import datetime as dt
from datetime import date
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
import keras
import os
import joblib
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.models import Sequential
import gc
import time
# %%
# gpus = tf.config.experimental.list_physical_devices('GPU')
# if gpus:
#     try:
#         # Currently, memory growth needs to be the same across GPUs
#         for gpu in gpus:
#             tf.config.experimental.set_memory_growth(gpu, True)
#         logical_gpus = tf.config.experimental.list_logical_devices('GPU')
#         print(f'{len(gpus)} Physical GPUs, {len(logical_gpus)} Logical GPUs')
#     except RuntimeError as e:
#         # Memory growth must be set before GPUs have been initialized
#         print(e)
# %%
csv_directory = r"C:\Users\arisa\Desktop\day_spot_summary_3"
csv_files = [file for file in os.listdir(csv_directory) if file.endswith('.csv')]
# %%
intervals = [1,2,3,5]
look_back = 60
optimizer = keras.optimizers.Adam(learning_rate=0.0003,clipvalue=0.5)
for csv_file in csv_files:
    file_path = os.path.join(csv_directory, csv_file)
    unique_part = file_path.split('_')[-2]
    df = pd.read_csv(file_path)
    include_substrings = ["y_"]
    exact_columns_to_keep = ["Open", "High", "Low", "Close", "volume_base", "volume_crypto", "tradecount",]
    filtered_columns = [col for col in df.columns if any(col.startswith(s) for s in include_substrings)]
    columns_to_keep = list(set(exact_columns_to_keep + filtered_columns))
    df = df[columns_to_keep]
    features = df.drop([
        'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
        'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
        'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
        'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
    # Scale the features and targets
    feature_scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_features = feature_scaler.fit_transform(features)
    joblib.dump(feature_scaler, f'x_scalers/x_{unique_part}_scaler.sav')
    for p in intervals:
        # Corrected to create a flat list of column names
        y_cols = [f'y_High_{p}d', f'y_Low_{p}d', f'y_Priority_{p}d']
        # Now you can properly index df with y_cols
        targets = df[y_cols]
        # Continuing with your existing code…
        target_scaler = MinMaxScaler(feature_range=(0, 1))
        scaled_targets = target_scaler.fit_transform(targets)
        joblib.dump(target_scaler, f'y_scalers/y{p}_{unique_part}_scaler.sav')
        x_train = []
        y_train = []
        for i in range(look_back, len(scaled_features)):
            x_train.append(scaled_features[i-look_back:i])
            y_train.append(scaled_targets[i])  # Assuming the target is the next time step
        x_train, y_train = np.array(x_train), np.array(y_train)
        input_shape = (x_train.shape[1], x_train.shape[2])
        model = Sequential()
        model.add(LSTM(units = 100, activation = 'tanh', return_sequences=True))
        model.add(Dropout(0.4))
        model.add(LSTM(units = 240, activation = 'tanh'))
        model.add(Dropout(0.5))
        model.add(Dense(units = 3))
        model.compile(optimizer=optimizer, loss = 'mean_squared_error', metrics=[tf.keras.metrics.MeanAbsoluteError()])
        model.fit(x_train, y_train, epochs = 1000)
        mae = model.evaluate(x_train, y_train)
        if os.path.exists(f'models/lstm_model_{unique_part}_y{p}_mae_{mae}.h5'):
            milis = round(time.time() * 1000)
            model.save(f'models/lstm_model_{unique_part}_y{p}_mae_{mae}_dup{milis}.h5')
        else:
            model.save(f'models/lstm_model_{unique_part}_y{p}_mae_{mae}.h5')
# tf.keras.backend.clear_session()
|
c4e6a448cbb4dbcec8c4a33fb01ccbee
|
{
"intermediate": 0.3402222692966461,
"beginner": 0.49658921360969543,
"expert": 0.16318851709365845
}
|
47,579
|
I'm getting the following error:
Visual Studio Code (1.88.1, undefined, desktop)
Jupyter Extension Version: 2024.3.1.
Python Extension Version: 2024.4.1.
Pylance Extension Version: 2024.4.1.
Platform: win32 (x64).
Workspace folder ~\Desktop\Stock-Price-Prediction-Using-LSTM-main, Home = c:\Users\arisa
13:54:57.957 [info] Starting Kernel startUsingPythonInterpreter, .jvsc74a57bd0436411c074290f12765d02881f33b31f4740c9f9e514b7051a62a818066633e1.~\.conda\envs\tf\python.exe.~\.conda\envs\tf\python.exe.-m#ipykernel_launcher (Python Path: ~\.conda\envs\tf\python.exe, Conda, tf (Python 3.9.19), 3.9.19) for '~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb' (disableUI=true)
13:55:01.064 [info] Process Execution: ~\.conda\envs\tf\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
13:55:01.071 [info] Process Execution: ~\.conda\envs\tf\python.exe ~\.vscode\extensions\ms-toolsai.jupyter-2024.3.1-win32-x64\pythonFiles\vscode_datascience_helpers\kernel_interrupt_daemon.py --ppid 9520
> cwd: ~\.vscode\extensions\ms-toolsai.jupyter-2024.3.1-win32-x64\pythonFiles\vscode_datascience_helpers
13:55:01.082 [info] Process Execution: ~\.conda\envs\tf\python.exe -m pip list
13:55:01.206 [info] Process Execution: ~\.conda\envs\tf\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-95206eUXcaYB7uSI.json
> cwd: ~\Desktop\Stock-Price-Prediction-Using-LSTM-main
13:55:02.434 [info] Process Execution: ~\.conda\envs\tf\python.exe ~\.vscode\extensions\ms-toolsai.jupyter-2024.3.1-win32-x64\pythonFiles\printJupyterDataDir.py
13:55:02.509 [warn] Got a non-existent Jupyter Data Dir file:///c%3A/Users/<username>/AppData/Roaming/Python/share/jupyter
13:55:55.066 [info] Execution of code ms-toolsai.jupyter-1 completed in 27ms
13:57:44.436 [info] Execution of code ms-toolsai.jupyter-2 completed in 17ms
13:57:52.276 [info] Execution of code ms-toolsai.jupyter-3 completed in 18ms
13:57:53.027 [info] Execution of code ms-toolsai.jupyter-4 completed in 17ms
13:57:53.118 [info] Execution of code ms-toolsai.jupyter-5 completed in 17ms
13:57:59.272 [info] Restart requested ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
13:57:59.277 [info] Process Execution: c:\Windows\System32\taskkill.exe /F /T /PID 20972
13:57:59.284 [info] Process Execution: ~\.conda\envs\tf\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
13:57:59.335 [info] Process Execution: ~\.conda\envs\tf\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-9520thkWsR6h6ohm.json
> cwd: ~\Desktop\Stock-Price-Prediction-Using-LSTM-main
13:58:00.169 [info] Restarted 8277bb6d-9f1c-4aed-a911-2fd5d1ef7886
13:58:01.320 [info] Handle Execution of Cells 0,1,2,3 for ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
13:58:04.339 [info] Cell 0 completed in 3.012s (start: 1713733081327, end: 1713733084339)
13:58:04.486 [warn] StdErr from Kernel Process 2024-04-21 13:58:04.486714: I tensorflow/core/platform/cpu_feature_guard.cc:193] This
13:58:04.486 [warn] StdErr from Kernel Process TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
13:58:05.042 [warn] StdErr from Kernel Process 2024-04-21 13:58:05.042950: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device
13:58:05.043 [warn] StdErr from Kernel Process /job:localhost/replica:0/task:0/device:GPU:0 with 5450 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3070, pci bus id: 0000:41:00.0, compute capability: 8.6
13:58:05.051 [info] Cell 1 completed in 0.626s (start: 1713733084425, end: 1713733085051)
13:58:05.068 [info] Cell 2 completed in 0.01s (start: 1713733085057, end: 1713733085067)
13:58:08.767 [warn] StdErr from Kernel Process 2024-04-21 13:58:08.767571: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8100
13:58:09.574 [warn] StdErr from Kernel Process 2024-04-21 13:58:09.574465: I tensorflow/stream_executor/cuda/cuda_blas.cc:1614] TensorFloat-32 will be used for the ma
13:58:09.574 [warn] StdErr from Kernel Process trix multiplication. This will only be logged once.
14:11:10.753 [warn] Cell completed with errors [hd [Error]: Unable to synchronously create dataset (name already exists)
at r.execute (~\.vscode\extensions\ms-toolsai.jupyter-2024.3.1-win32-x64\dist\extension.node.js:327:4943)] {
ename: 'ValueError',
evalue: 'Unable to synchronously create dataset (name already exists)',
traceback: [
'\x1B[1;31m---------------------------------------------------------------------------\x1B[0m',
'\x1B[1;31mValueError\x1B[0m Traceback (most recent call last)',
'Cell \x1B[1;32mIn[4], line 71\x1B[0m\n' +
"\x1B[0;32m 69\x1B[0m model\x1B[38;5;241m.\x1B[39msave(\x1B[38;5;124mf\x1B[39m\x1B[38;5;124m'\x1B[39m\x1B[38;5;124mmodels/lstm_model_\x1B[39m\x1B[38;5;132;01m{\x1B[39;00munique_part\x1B[38;5;132;01m}\x1B[39;00m\x1B[38;5;124m_y\x1B[39m\x1B[38;5;132;01m{\x1B[39;00mp\x1B[38;5;132;01m}\x1B[39;00m\x1B[38;5;124m_mae_\x1B[39m\x1B[38;5;132;01m{\x1B[39;00mmae\x1B[38;5;132;01m}\x1B[39;00m\x1B[38;5;124m_dup\x1B[39m\x1B[38;5;132;01m{\x1B[39;00mmilis\x1B[38;5;132;01m}\x1B[39;00m\x1B[38;5;124m.h5\x1B[39m\x1B[38;5;124m'\x1B[39m)\n" +
'\x1B[0;32m 70\x1B[0m \x1B[38;5;28;01melse\x1B[39;00m:\n' +
"\x1B[1;32m---> 71\x1B[0m \x1B[43mmodel\x1B[49m\x1B[38;5;241;43m.\x1B[39;49m\x1B[43msave\x1B[49m\x1B[43m(\x1B[49m\x1B[38;5;124;43mf\x1B[39;49m\x1B[38;5;124;43m'\x1B[39;49m\x1B[38;5;124;43mmodels/lstm_model_\x1B[39;49m\x1B[38;5;132;43;01m{\x1B[39;49;00m\x1B[43munique_part\x1B[49m\x1B[38;5;132;43;01m}\x1B[39;49;00m\x1B[38;5;124;43m_y\x1B[39;49m\x1B[38;5;132;43;01m{\x1B[39;49;00m\x1B[43mp\x1B[49m\x1B[38;5;132;43;01m}\x1B[39;49;00m\x1B[38;5;124;43m_mae_\x1B[39;49m\x1B[38;5;132;43;01m{\x1B[39;49;00m\x1B[43mmae\x1B[49m\x1B[38;5;132;43;01m}\x1B[39;49;00m\x1B[38;5;124;43m.h5\x1B[39;49m\x1B[38;5;124;43m'\x1B[39;49m\x1B[43m)\x1B[49m\n" +
'\x1B[0;32m 72\x1B[0m tf\x1B[38;5;241m.\x1B[39mkeras\x1B[38;5;241m.\x1B[39mbackend\x1B[38;5;241m.\x1B[39mclear_session() \n',
'File \x1B[1;32mc:\\Users\\<username>\\.conda\\envs\\tf\\lib\\site-packages\\keras\\utils\\traceback_utils.py:70\x1B[0m, in \x1B[0;36mfilter_traceback.<locals>.error_handler\x1B[1;34m(*args, **kwargs)\x1B[0m\n' +
'\x1B[0;32m 67\x1B[0m filtered_tb \x1B[38;5;241m=\x1B[39m _process_traceback_frames(e\x1B[38;5;241m.\x1B[39m__traceback__)\n' +
'\x1B[0;32m 68\x1B[0m \x1B[38;5;66;03m# To get the full stack trace, call:\x1B[39;00m\n' +
'\x1B[0;32m 69\x1B[0m \x1B[38;5;66;03m# `tf.debugging.disable_traceback_filtering()`\x1B[39;00m\n' +
'\x1B[1;32m---> 70\x1B[0m \x1B[38;5;28;01mraise\x1B[39;00m e\x1B[38;5;241m.\x1B[39mwith_traceback(filtered_tb) \x1B[38;5;28;01mfrom\x1B[39;00m \x1B[38;5;28;01mNone\x1B[39;00m\n' +
'\x1B[0;32m 71\x1B[0m \x1B[38;5;28;01mfinally\x1B[39;00m:\n' +
'\x1B[0;32m 72\x1B[0m \x1B[38;5;28;01mdel\x1B[39;00m filtered_tb\n',
'File \x1B[1;32mc:\\Users\\<username>\\.conda\\envs\\tf\\lib\\site-packages\\h5py\\_hl\\group.py:183\x1B[0m, in \x1B[0;36mGroup.create_dataset\x1B[1;34m(self, name, shape, dtype, data, **kwds)\x1B[0m\n' +
"\x1B[0;32m 180\x1B[0m parent_path, name \x1B[38;5;241m=\x1B[39m name\x1B[38;5;241m.\x1B[39mrsplit(\x1B[38;5;124mb\x1B[39m\x1B[38;5;124m'\x1B[39m\x1B[38;5;124m/\x1B[39m\x1B[38;5;124m'\x1B[39m, \x1B[38;5;241m1\x1B[39m)\n" +
'\x1B[0;32m 181\x1B[0m group \x1B[38;5;241m=\x1B[39m \x1B[38;5;28mself\x1B[39m\x1B[38;5;241m.\x1B[39mrequire_group(parent_path)\n' +
'\x1B[1;32m--> 183\x1B[0m dsid \x1B[38;5;241m=\x1B[39m dataset\x1B[38;5;241m.\x1B[39mmake_new_dset(group, shape, dtype, data, name, \x1B[38;5;241m*\x1B[39m\x1B[38;5;241m*\x1B[39mkwds)\n' +
'\x1B[0;32m 184\x1B[0m dset \x1B[38;5;241m=\x1B[39m dataset\x1B[38;5;241m.\x1B[39mDataset(dsid)\n' +
'\x1B[0;32m 185\x1B[0m \x1B[38;5;28;01mreturn\x1B[39;00m dset\n',
'File \x1B[1;32mc:\\Users\\<username>\\.conda\\envs\\tf\\lib\\site-packages\\h5py\\_hl\\dataset.py:163\x1B[0m, in \x1B[0;36mmake_new_dset\x1B[1;34m(parent, shape, dtype, data, name, chunks, compression, shuffle, fletcher32, maxshape, compression_opts, fillvalue, scaleoffset, track_times, external, track_order, dcpl, dapl, efile_prefix, virtual_prefix, allow_unknown_filter, rdcc_nslots, rdcc_nbytes, rdcc_w0)\x1B[0m\n' +
'\x1B[0;32m 160\x1B[0m \x1B[38;5;28;01melse\x1B[39;00m:\n' +
'\x1B[0;32m 161\x1B[0m sid \x1B[38;5;241m=\x1B[39m h5s\x1B[38;5;241m.\x1B[39mcreate_simple(shape, maxshape)\n' +
'\x1B[1;32m--> 163\x1B[0m dset_id \x1B[38;5;241m=\x1B[39m \x1B[43mh5d\x1B[49m\x1B[38;5;241;43m.\x1B[39;49m\x1B[43mcreate\x1B[49m\x1B[43m(\x1B[49m\x1B[43mparent\x1B[49m\x1B[38;5;241;43m.\x1B[39;49m\x1B[43mid\x1B[49m\x1B[43m,\x1B[49m\x1B[43m \x1B[49m\x1B[43mname\x1B[49m\x1B[43m,\x1B[49m\x1B[43m \x1B[49m\x1B[43mtid\x1B[49m\x1B[43m,\x1B[49m\x1B[43m \x1B[49m\x1B[43msid\x1B[49m\x1B[43m,\x1B[49m\x1B[43m \x1B[49m\x1B[43mdcpl\x1B[49m\x1B[38;5;241;43m=\x1B[39;49m\x1B[43mdcpl\x1B[49m\x1B[43m,\x1B[49m\x1B[43m \x1B[49m\x1B[43mdapl\x1B[49m\x1B[38;5;241;43m=\x1B[39;49m\x1B[43mdapl\x1B[49m\x1B[43m)\x1B[49m\n' +
'\x1B[0;32m 165\x1B[0m \x1B[38;5;28;01mif\x1B[39;00m (data \x1B[38;5;129;01mis\x1B[39;00m \x1B[38;5;129;01mnot\x1B[39;00m \x1B[38;5;28;01mNone\x1B[39;00m) \x1B[38;5;129;01mand\x1B[39;00m (\x1B[38;5;129;01mnot\x1B[39;00m \x1B[38;5;28misinstance\x1B[39m(data, Empty)):\n' +
'\x1B[0;32m 166\x1B[0m dset_id\x1B[38;5;241m.\x1B[39mwrite(h5s\x1B[38;5;241m.\x1B[39mALL, h5s\x1B[38;5;241m.\x1B[39mALL, data)\n',
'File \x1B[1;32mh5py\\\\_objects.pyx:54\x1B[0m, in \x1B[0;36mh5py._objects.with_phil.wrapper\x1B[1;34m()\x1B[0m\n',
'File \x1B[1;32mh5py\\\\_objects.pyx:55\x1B[0m, in \x1B[0;36mh5py._objects.with_phil.wrapper\x1B[1;34m()\x1B[0m\n',
'File \x1B[1;32mh5py\\\\h5d.pyx:137\x1B[0m, in \x1B[0;36mh5py.h5d.create\x1B[1;34m()\x1B[0m\n',
'\x1B[1;31mValueError\x1B[0m: Unable to synchronously create dataset (name already exists)'
]
}
14:11:10.756 [info] Cell 3 completed in 785.672s (start: 1713733085081, end: 1713733870753)
14:12:33.624 [info] Restart requested ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
14:12:33.630 [info] Process Execution: c:\Windows\System32\taskkill.exe /F /T /PID 20412
14:12:34.891 [info] Process Execution: ~\.conda\envs\tf\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
14:12:34.896 [info] Process Execution: ~\.conda\envs\tf\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-9520AY5NlkvQg6rr.json
> cwd: ~\Desktop\Stock-Price-Prediction-Using-LSTM-main
14:12:35.726 [info] Restarted 8277bb6d-9f1c-4aed-a911-2fd5d1ef7886
14:12:38.591 [info] Handle Execution of Cells 0,1,2,3 for ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
14:12:41.626 [info] Cell 0 completed in 3.027s (start: 1713733958598, end: 1713733961625)
14:12:41.642 [info] Cell 1 completed in 0.011s (start: 1713733961631, end: 1713733961642)
14:12:41.749 [info] Cell 2 completed in 0.099s (start: 1713733961650, end: 1713733961749)
14:12:41.846 [warn] StdErr from Kernel Process 2024-04-21 14:12:41.846363: I tensorflow/core/platform/cpu_feat
14:12:41.846 [warn] StdErr from Kernel Process ure_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
14:12:42.382 [warn] StdErr from Kernel Process 2024-04-21 14:12:42.382738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/repli
14:12:42.383 [warn] StdErr from Kernel Process ca:0/task:0/device:GPU:0 with 5450 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3070, pci bus id: 0000:41:00.0, compute capability: 8.6
14:12:46.152 [warn] StdErr from Kernel Process 2024-04-21 14:12:46.152691: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN ve
14:12:46.153 [warn] StdErr from Kernel Process rsion 8100
14:12:46.946 [warn] StdErr from Kernel Process 2024-04-21 14:12:46.946404: I tensorflow/stream_executor/cuda/cuda_blas.cc:1614] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
14:13:33.804 [info] Interrupt kernel execution
14:13:33.804 [info] Interrupt requested ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
14:13:33.804 [info] Interrupt kernel execution
14:13:33.805 [info] Interrupting kernel: python3919jvsc74a57bd0436411c074290f12765d02881f33b31f4740c9f9e514b7051a62a818066633e1
14:13:33.805 [info] Interrupting kernel via custom event (Win32)
14:13:36.107 [warn] Cell completed with errors (cancelled)
14:13:36.107 [info] Cell 3 completed in 54.344s (start: 1713733961763, end: 1713734016107)
14:13:36.110 [info] Interrupt requested & sent for ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb in notebookEditor.
14:13:36.988 [info] Restart requested ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
14:13:36.992 [info] Process Execution: c:\Windows\System32\taskkill.exe /F /T /PID 20856
14:13:36.997 [info] Process Execution: ~\.conda\envs\tf\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
14:13:37.046 [info] Process Execution: ~\.conda\envs\tf\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-9520HyVFTdhDvtwf.json
> cwd: ~\Desktop\Stock-Price-Prediction-Using-LSTM-main
14:13:37.894 [info] Restarted 8277bb6d-9f1c-4aed-a911-2fd5d1ef7886
14:13:48.774 [info] Restart requested ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
14:13:48.778 [info] Process Execution: c:\Windows\System32\taskkill.exe /F /T /PID 18136
14:13:48.782 [info] Process Execution: ~\.conda\envs\tf\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
14:13:48.823 [info] Process Execution: ~\.conda\envs\tf\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-9520MJuNGKTNxnU3.json
> cwd: ~\Desktop\Stock-Price-Prediction-Using-LSTM-main
14:13:49.643 [info] Restarted 8277bb6d-9f1c-4aed-a911-2fd5d1ef7886
14:13:51.211 [info] Handle Execution of Cells 0,1,2,3 for ~\Desktop\Stock-Price-Prediction-Using-LSTM-main\lastm_all.ipynb
14:13:54.264 [info] Cell 0 completed in 3.047s (start: 1713734031217, end: 1713734034264)
14:13:54.280 [info] Cell 1 completed in 0.01s (start: 1713734034270, end: 1713734034280)
14:13:54.373 [info] Cell 2 completed in 0.086s (start: 1713734034286, end: 1713734034372)
14:13:54.474 [warn] StdErr from Kernel Process 2024-04-21 14:13:54.475208: I tensorflow/core/platform/cpu_feature_guard.cc:193]
14:13:54.474 [warn] StdErr from Kernel Process This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
14:13:55.021 [warn] StdErr from Kernel Process 2024-04-21 14:13:55.022493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1
14:13:55.022 [warn] StdErr from Kernel Process 616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 5450 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3070, pci bus id: 0000:41:00.0, compute capability: 8.6
14:13:58.812 [warn] StdErr from Kernel Process 2024-04-21 14:13:58.812991: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 81
14:13:58.812 [warn] StdErr from Kernel Process 00
14:13:59.659 [warn] StdErr from Kernel Process 2024-04-21 14:13:59.660253: I tensorflow/stream_executor/cuda/cuda_blas.cc:1614] T
14:13:59.659 [warn] StdErr from Kernel Process ensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
15:30:11.012 [warn] StdErr from Kernel Process 2024-04-21 15:30:11.019164: F .\tensorflow/core/kernels/conv_2d_gpu.h:1028] Non-OK-status: Gpu
15:30:11.012 [warn] StdErr from Kernel Process LaunchKernel( SwapDimension1And2InTensor3UsingTiles<T, kNumThreads, kTileSize, kTileSize, conjugate>, total_tiles_count, kNumThreads, 0, d.stream(), input, input_dims, output) status: INTERNAL: an illegal memory access was encountered
2024-04-21 15:30:11.019529: F tensorflow/core/kernels/training_ops_gpu.cu.c
15:30:11.595 [error] Disposing session as kernel process died ExitCode: 3221226505, Reason: 2024-04-21 14:13:54.475208: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-21 14:13:55.022493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 5450 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3070, pci bus id: 0000:41:00.0, compute capability: 8.6
2024-04-21 14:13:58.812991: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8100
2024-04-21 14:13:59.660253: I tensorflow/stream_executor/cuda/cuda_blas.cc:1614] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
2024-04-21 15:30:11.019164: F .\tensorflow/core/kernels/conv_2d_gpu.h:1028] Non-OK-status: GpuLaunchKernel( SwapDimension1And2InTensor3UsingTiles<T, kNumThreads, kTileSize, kTileSize, conjugate>, total_tiles_count, kNumThreads, 0, d.stream(), input, input_dims, output) status: INTERNAL: an illegal memory access was encountered
2024-04-21 15:30:11.019529: F tensorflow/core/kernels/training_ops_gpu.cu.c
15:30:11.615 [info] Cell 3 completed in -1713734034.387s (start: 1713734034387, end: undefined)
with the following message:
The Kernel crashed while executing code in the current cell or a previous cell.
Please review the code in the cell(s) to identify a possible cause of the failure.
Click here for more info.
View Jupyter log for further details.
my code:
# %%
import pandas as pd
import datetime as dt
from datetime import date
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
import keras
import os
import joblib
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.models import Sequential
import gc
import time
# %%
# gpus = tf.config.experimental.list_physical_devices('GPU')
# if gpus:
#     try:
#         # Currently, memory growth needs to be the same across GPUs
#         for gpu in gpus:
#             tf.config.experimental.set_memory_growth(gpu, True)
#         logical_gpus = tf.config.experimental.list_logical_devices('GPU')
#         print(f'{len(gpus)} Physical GPUs, {len(logical_gpus)} Logical GPUs')
#     except RuntimeError as e:
#         # Memory growth must be set before GPUs have been initialized
#         print(e)
# %%
csv_directory = r"C:\Users\arisa\Desktop\day_spot_summary_3"
csv_files = [file for file in os.listdir(csv_directory) if file.endswith('.csv')]
# %%
intervals = [1,2,3,5]
look_back = 60
optimizer = keras.optimizers.Adam(learning_rate=0.0003,clipvalue=0.5)
for csv_file in csv_files:
    file_path = os.path.join(csv_directory, csv_file)
    unique_part = file_path.split('_')[-2]
    df = pd.read_csv(file_path)
    include_substrings = ["y_"]
    exact_columns_to_keep = ["Open", "High", "Low", "Close", "volume_base", "volume_crypto", "tradecount"]
    filtered_columns = [col for col in df.columns if any(col.startswith(s) for s in include_substrings)]
    columns_to_keep = list(set(exact_columns_to_keep + filtered_columns))
    df = df[columns_to_keep]
    features = df.drop([
        'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
        'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
        'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
        'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
    # Scale the features and targets
    feature_scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_features = feature_scaler.fit_transform(features)
    joblib.dump(feature_scaler, f'x_scalers/x_{unique_part}_scaler.sav')
    for p in intervals:
        # Flat list of target column names for this prediction interval
        y_cols = [f'y_High_{p}d', f'y_Low_{p}d', f'y_Priority_{p}d']
        targets = df[y_cols]
        # Scale the targets with their own scaler so predictions can be inverted later
        target_scaler = MinMaxScaler(feature_range=(0, 1))
        scaled_targets = target_scaler.fit_transform(targets)
        joblib.dump(target_scaler, f'y_scalers/y{p}_{unique_part}_scaler.sav')
        x_train = []
        y_train = []
        for i in range(look_back, len(scaled_features)):
            x_train.append(scaled_features[i-look_back:i])
            y_train.append(scaled_targets[i])  # Assuming the target is the next time step
        x_train, y_train = np.array(x_train), np.array(y_train)
        input_shape = (x_train.shape[1], x_train.shape[2])
        model = Sequential()
        model.add(LSTM(units=100, activation='tanh', return_sequences=True, input_shape=input_shape))
        model.add(Dropout(0.4))
        model.add(LSTM(units=240, activation='tanh'))
        model.add(Dropout(0.5))
        model.add(Dense(units=3))
        model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=[tf.keras.metrics.MeanAbsoluteError()])
        model.fit(x_train, y_train, epochs=1000)
        mae = model.evaluate(x_train, y_train)
        if os.path.exists(f'models/lstm_model_{unique_part}_y{p}_mae_{mae}.h5'):
            milis = round(time.time() * 1000)
            model.save(f'models/lstm_model_{unique_part}_y{p}_mae_{mae}_dup{milis}.h5')
        else:
            model.save(f'models/lstm_model_{unique_part}_y{p}_mae_{mae}.h5')
        # tf.keras.backend.clear_session()
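One thing worth trying for the crash above (a hedged sketch, not a guaranteed fix for the CUDA error): explicitly release the model and training arrays at the end of each inner-loop iteration so GPU memory does not accumulate across the many models trained here, and consider constructing a fresh Adam optimizer per model instead of reusing one instance across compiles. The names below are the ones already used in the loop.

# Cleanup sketch for the end of each inner-loop iteration (same variable names as above).
del model, x_train, y_train              # drop references to the large objects
tf.keras.backend.clear_session()         # release the Keras graph and cached GPU memory
gc.collect()                             # force garbage collection before the next iteration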
|
d5278b4a9b4594beb8933a76ef4a379d
|
{
"intermediate": 0.3780849277973175,
"beginner": 0.3172858953475952,
"expert": 0.3046291470527649
}
|
47,580
|
I'm using VS Code with Jupyter notebooks.
Is it possible to restart the kernel and run all cells programmatically from code?
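Partly. A code cell can ask its own kernel to restart, but once the kernel goes down it cannot trigger "Run All Cells" in the VS Code UI; that part has to come from the client (command palette, a keybinding, or an extension). A minimal sketch of the restart half, assuming an ipykernel-based kernel as VS Code uses:

# Restart the current Jupyter kernel from inside a cell (ipykernel assumed).
from IPython import get_ipython

ip = get_ipython()
if ip is not None and hasattr(ip, "kernel"):
    # Shut the kernel down and ask the client to restart it.
    ip.kernel.do_shutdown(restart=True)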
|
f83635fa32795a1f365da78a33ac7432
|
{
"intermediate": 0.525931715965271,
"beginner": 0.16388382017612457,
"expert": 0.310184508562088
}
|
47,581
|
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv
import numpy as np
import torch.optim as optim
from torch.distributions import MultivariateNormal
class CustomGNN(torch.nn.Module):
    def __init__(self, in_channels, out_channels):
        super(CustomGNN, self).__init__()
        self.gat1 = GATConv(in_channels, 8, heads=8, dropout=0.6)
        self.gat2 = GATConv(8 * 8, out_channels, heads=1, concat=False, dropout=0.6)
        self.component_nodes_indices = torch.tensor([7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], dtype=torch.long)
        # Define masks for tuning selective features
        self.m_features_mask = torch.zeros(24, dtype=torch.bool)
        self.m_features_mask[[18, 19]] = True
        self.c_features_mask = torch.zeros(24, dtype=torch.bool)
        self.c_features_mask[20] = True
        self.i_features_mask = torch.zeros(24, dtype=torch.bool)
        self.i_features_mask[21] = True
        self.v_features_mask = torch.zeros(24, dtype=torch.bool)
        self.v_features_mask[22] = True

    def forward(self, x, edge_index):
        x = F.dropout(x, p=0.6, training=self.training)
        x = F.elu(self.gat1(x, edge_index))
        x = F.dropout(x, p=0.6, training=self.training)
        x = self.gat2(x, edge_index)
        # Synchronize updates for defined node pairs before updating dynamic features:
        # averaging the values for the synchronous pairs for features at indices [18] and [19].
        # Ensure the original_features tensor is prepared for this operation to not alter unrelated features.
        original_features = x.clone()
        # Define synchronous node pairs and their associated feature indices
        sync_pairs = [(7, 8), (9, 10), (11, 14)]  # Indices in self.component_nodes_indices
        features_to_sync = [18, 19]
        # Perform synchronization
        for pair in sync_pairs:
            # Calculate the mean of the paired node features
            avg_features = original_features[[self.component_nodes_indices[pair[0]], self.component_nodes_indices[pair[1]]], :][:, features_to_sync].mean(dim=0)
            # Assign the averaged features back to the original positions for both nodes in the pair
            original_features[self.component_nodes_indices[pair[0]], features_to_sync] = avg_features
            original_features[self.component_nodes_indices[pair[1]], features_to_sync] = avg_features
        # Apply mask and update dynamic features (if there is any additional logic for individual component node updates)
        dynamic_updates = torch.zeros_like(x)
        # Update logic as previous, but now considering synchronization is already handled.
        # Note: With the current use-case, dynamic updates remain as initially set.
        # This placeholder exists for cases where further dynamic processing is applied after synchronization.
        # Ensuring static features are kept as is from original_features and only dynamic are updated
        return original_features * (1 - dynamic_updates) + x * dynamic_updates


class Actor(torch.nn.Module):
    def __init__(self, gnn_model):
        super(Actor, self).__init__()
        self.gnn = gnn_model

    def forward(self, state):
        # State contains node_features_tensor, edge_feature_tensor, edge_index
        node_features_tensor, edge_feature_tensor, edge_index = state
        action_probs = self.gnn(node_features_tensor, edge_index)
        return action_probs


class Critic(torch.nn.Module):
    def __init__(self, state_dim):
        super(Critic, self).__init__()
        self.network = torch.nn.Sequential(
            torch.nn.Linear(state_dim, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 1)
        )

    def forward(self, state):
        return self.network(state)


class PPOAgent:
    def __init__(self, gnn_model, state_dim, action_space, lr_actor, lr_critic, gamma, gae_lambda, epsilon, policy_clip, epochs):
        self.gamma = gamma
        self.gae_lambda = gae_lambda
        self.epsilon = epsilon
        self.policy_clip = policy_clip
        self.epochs = epochs
        self.actor = Actor(gnn_model)
        self.critic = Critic(state_dim)
        self.optimizer_actor = optim.Adam(self.actor.parameters(), lr=lr_actor)
        self.optimizer_critic = optim.Adam(self.critic.parameters(), lr=lr_critic)
        self.action_space = action_space  # Assume continuous

    def select_action(self, state):
        state_tensor = torch.FloatTensor(state).unsqueeze(0)  # Adjust dimensions as necessary
        action_probs = self.actor(state_tensor)
        cov_mat = torch.diag(action_probs.var()).unsqueeze(0)  # Ensure variances are positive and form a covariance matrix
        dist = MultivariateNormal(action_probs, cov_mat)
        action = dist.sample()
        log_prob = dist.log_prob(action)
        return action.numpy().squeeze(), log_prob.item()

    def compute_gae(self, next_value, rewards, masks, values):
        values = values + [next_value]
        gae = 0
        returns = []
        for step in reversed(range(len(rewards))):
            delta = rewards[step] + self.gamma * values[step + 1] * masks[step] - values[step]
            gae = delta + self.gamma * self.gae_lambda * masks[step] * gae
            returns.insert(0, gae + values[step])
        return returns

    def update_policy(self, prev_states, prev_actions, prev_log_probs, returns, advantages):
        advantages = torch.tensor(advantages)
        returns = torch.tensor(returns)
        prev_log_probs = torch.tensor(prev_log_probs)
        for _ in range(self.epochs):
            log_probs, state_values, entropy = self.evaluate(prev_states, prev_actions)
            ratios = torch.exp(log_probs - prev_log_probs.detach())
            advantages = returns - state_values.detach()
            surr1 = ratios * advantages
            surr2 = torch.clamp(ratios, 1 - self.policy_clip, 1 + self.policy_clip) * advantages
            actor_loss = -torch.min(surr1, surr2).mean()
            critic_loss = F.mse_loss(state_values, returns)
            self.optimizer_actor.zero_grad()
            actor_loss.backward()
            self.optimizer_actor.step()
            self.optimizer_critic.zero_grad()
            critic_loss.backward()
            self.optimizer_critic.step()

    def evaluate(self, states, actions):
        # Replace with actual evaluation logic based on your training loop requirements
        pass
# Create the environment
env = CircuitEnvironment(server_address, username, password, bounds_low, bounds_high, target_metrics, netlist_content)
For the code given above, provide a proper, complete RL training loop that ties into the PPO agent class and all of its functions ('select_action', 'compute_gae', and 'update_policy'), and also provide the necessary initialization and instantiation requirements for the code above.
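A minimal sketch of such a loop, under explicit assumptions: the environment exposes a Gym-style reset()/step() interface, the dimensions and hyperparameters below are placeholders, and evaluate() in PPOAgent is still a stub that must be implemented before update_policy() will actually run.

# Hypothetical instantiation; in_channels/state_dim and hyperparameters are placeholders.
gnn = CustomGNN(in_channels=24, out_channels=24)
agent = PPOAgent(gnn_model=gnn, state_dim=24, action_space=None,
                 lr_actor=3e-4, lr_critic=1e-3, gamma=0.99, gae_lambda=0.95,
                 epsilon=0.2, policy_clip=0.2, epochs=10)

num_episodes = 500
for episode in range(num_episodes):
    state = env.reset()                                   # assumed Gym-style API
    states, actions, log_probs, rewards, masks, values = [], [], [], [], [], []
    done = False
    while not done:
        action, log_prob = agent.select_action(state)
        value = agent.critic(torch.FloatTensor(state).unsqueeze(0)).item()
        next_state, reward, done, _ = env.step(action)    # assumed Gym-style API
        states.append(state)
        actions.append(action)
        log_probs.append(log_prob)
        rewards.append(reward)
        masks.append(0.0 if done else 1.0)
        values.append(value)
        state = next_state
    next_value = agent.critic(torch.FloatTensor(state).unsqueeze(0)).item()
    returns = agent.compute_gae(next_value, rewards, masks, values)
    advantages = [r - v for r, v in zip(returns, values)]
    agent.update_policy(states, actions, log_probs, returns, advantages)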
|
1b4ff6fe3093e6815cec587d318c6d57
|
{
"intermediate": 0.2318788319826126,
"beginner": 0.3672078251838684,
"expert": 0.4009133279323578
}
|
47,582
|
I have a Jupyter notebook.
Can I create a Python automation that clicks somewhere on the screen, and call it from my Jupyter notebook?
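Yes. This kind of GUI automation is usually done with a third-party package such as pyautogui; a minimal sketch (coordinates are placeholders) that can be called from a notebook cell:

# Requires: pip install pyautogui
import pyautogui

def click_at(x, y):
    """Move the mouse to screen coordinates (x, y) and left-click."""
    pyautogui.click(x=x, y=y)

click_at(500, 300)  # placeholder coordinates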
|
4fa3c705a0d1306daf3d5334df9506be
|
{
"intermediate": 0.4121481478214264,
"beginner": 0.26505246758461,
"expert": 0.3227993845939636
}
|
47,583
|
explain
|
3f3573d8016ff543c33250844744d067
|
{
"intermediate": 0.3545367121696472,
"beginner": 0.31888994574546814,
"expert": 0.32657337188720703
}
|
47,584
|
Answer fast: how can I restart a Python .py file from itself?
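A minimal sketch of one common approach: replace the running process with a fresh invocation of the same script and arguments.

import os
import sys

def restart_program():
    """Re-execute the current script with the same interpreter and arguments."""
    os.execv(sys.executable, [sys.executable] + sys.argv)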
|
550126c04993117f72d4b7cb26c25347
|
{
"intermediate": 0.4651561677455902,
"beginner": 0.1638764590024948,
"expert": 0.37096741795539856
}
|
47,585
|
HI!
|
da8d55d77de6ca52790c55d077ced179
|
{
"intermediate": 0.3374777138233185,
"beginner": 0.2601830065250397,
"expert": 0.40233927965164185
}
|
47,586
|
Can you tell me what this regex does?
const patternLink =
/(?:https?:\/\/)?(?:[\w\.]+)\.(?:[a-z]{2,6}\.?)(?:\/[\w\.]*)*\/?/g;
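For illustration, a rough Python equivalent of that pattern (the JavaScript /g flag corresponds to scanning the whole string with re.findall); the sample text here is made up:

import re

# Same pattern as the JavaScript literal; all groups are non-capturing, so
# findall() returns the full URL-like matches.
pattern_link = re.compile(r"(?:https?:\/\/)?(?:[\w\.]+)\.(?:[a-z]{2,6}\.?)(?:\/[\w\.]*)*\/?")

text = "Docs live at https://example.com/docs and also at sub.domain.org/page"
print(pattern_link.findall(text))  # prints the URL-like substrings it finds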
|
2f6b8008ab377402344171a866cbf184
|
{
"intermediate": 0.49602648615837097,
"beginner": 0.32052209973335266,
"expert": 0.18345142900943756
}
|
47,587
|
If a residential proxy's IP address has a reverse DNS that points to a proxy server, it can be detected.
Reverse DNS is the most effective method of detecting residential proxies because it is the most accurate. If an IP address has a reverse DNS that points to a proxy server, it is almost certainly a residential proxy.
How do I do that?
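A minimal sketch of the reverse-DNS (PTR) lookup itself, using only the standard library; interpreting the returned hostname (e.g. spotting datacenter or proxy-provider names) is a heuristic, and many IPs simply have no PTR record at all:

import socket
from typing import Optional

def reverse_dns(ip: str) -> Optional[str]:
    """Return the PTR hostname for ip, or None if no reverse record exists."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return None

print(reverse_dns("8.8.8.8"))  # example IP; prints e.g. 'dns.google'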
|
9572fd59dae2f1e07cfb66f3613f2593
|
{
"intermediate": 0.3912259638309479,
"beginner": 0.28586024045944214,
"expert": 0.3229138255119324
}
|
47,588
|
X:1
T:Unified Composition Adaptation
C:Adapted for Jay Chong style
M:4/4
L:1/4
K:A
%%score {V:1}
V:V1 clef=treble
% Part 1 - Using A major as the foundational key
|:"A"A2 "E"E2 | "F#m"F2 "D"D2 | "E"E2 "A/C#"A2 | "Am/C"A2 "G/B"G2 | "Gm/Bb"G2 "Dm/A"D2 | "A"A2 "E"E2 | "E4"E2:|
|:"A"A2 "E"E2 | "F#m"F2 "D"D2 | "E"E2 | "A/C#"A2 "Am/C"A2 | "G/B"G2 "Gm/Bb"G2 | "Dm"D2 "A"A2 | "E"E2 "E4"E2:|
% Refrain from Part 1
|:"A"A2 "E"E2 | "F#m"F2 "C#m"C2 | "D"D2 "E"E2 | "E4"E2 "A"A2:|
% Transitioning to F major for Part 2
K:F
% Introduction of Part 2 - Modulated to fit within the new key
|:"F"F2 "Fm"F2 | "Em7"E2 "Am7"A2 | "Dm7"D2 "G"G2 | "Cadd9"C2 "Cadd9"G2:|
|"C"C2 "G/B"G2 | "Am7"A2 "Am7/G"G2 | "F"F2 "Em7"E2 "Am7"A2 | "Dm7"D2 "G"G2 |
% Bridging Themes - Merging elements from both parts
|:"Bm7-5"B2 "E7"E2 | "Am"A2 "Am7/G"G2 | "F"F2 "Em7"E2 "Am7"A2 | "Dm7"D2 "G"G2:|
% Interlude (間奏) - Serving as a thematic breather
|:"Am"A2 "F"F2 | "C"C2 "G"G2:| X2
% Modulating Back to A major for the Conclusion
K:A
% Solo adapted from Part 1 with a return to A major, blending in motifs
|:"A"A2 "B/A"B2 | "E/G#"E2 "Em/G"E2 | "D/F#"D2 |
"C/E"C2 "G/D"G2 | "C"C2 "F/A"F2 | "E/G#"E2 |
"A/C#"A2 "Am/C"A2 | "G/B"G2 "Gm/Bb"G2 | "Dm"D2 "A"A2 | "E"E2 | "E4"E2:|
% Coda, echoing themes from throughout for cohesion
|:"F"F2 "Fm"F2 | "Em7"E2 "Am7"A2 | "Dm7"D2 "G"G2 | "Cadd9"C2 "G"G2:|
|:"Am"A2 "F"F2 | "C"C2 "G"G2:| X2
|:"A"A2 "E"E2 | "F#m"F2 "C#m"C2 | "D"D2 "E"E2 | "A"A2:| Suit this lyrics 为了爱你更多 in this ABC notation.
|
55609ec83a7eaa337d437edd6386e70a
|
{
"intermediate": 0.334383100271225,
"beginner": 0.38257601857185364,
"expert": 0.2830409109592438
}
|
47,589
|
# %%
import pandas as pd
import datetime as dt
from datetime import date
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler
import keras
import os
import joblib
from tensorflow.keras.layers import Dense, Dropout, LSTM
from tensorflow.keras.models import Sequential
import gc
import time
import shutil
from multiprocessing import Process
import sys
csv_directory = r"C:\Users\arisa\Desktop\day_spot_summary_3"
csv_files = [file for file in os.listdir(csv_directory) if file.endswith('.csv')]
csv_file = csv_files[0]
look_back = 60
optimizer = keras.optimizers.Adam(learning_rate=0.0003,clipvalue=0.5)
# for csv_file in csv_files:
file_path = os.path.join(csv_directory, csv_file)
unique_part = file_path.split('_')[-2]
df = pd.read_csv(file_path)
include_substrings = ["y_"]
exact_columns_to_keep = ["Open", "High", "Low", "Close","volume_base", "volume_crypto", "tradecount",]
filtered_columns = [col for col in df.columns if any(col.startswith(s) for s in include_substrings)]
columns_to_keep = list(set(exact_columns_to_keep + filtered_columns))
df = df[columns_to_keep]
features = df.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
# Scale the features and targets
feature_scaler = MinMaxScaler(feature_range=(0, 1))
scaled_features = feature_scaler.fit_transform(features)
joblib.dump(feature_scaler,f'x_scalers/x_{unique_part}_scaler.sav')
def train_model(size):
    # Flat list of target column names for this prediction interval
    y_cols = [f'y_High_{size}d', f'y_Low_{size}d', f'y_Priority_{size}d']
    targets = df[y_cols]
    # Scale the targets with their own scaler so predictions can be inverted later
    target_scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_targets = target_scaler.fit_transform(targets)
    joblib.dump(target_scaler, f'y_scalers/y{size}_{unique_part}_scaler.sav')
    x_train = []
    y_train = []
    for i in range(look_back, len(scaled_features)):
        x_train.append(scaled_features[i-look_back:i])
        y_train.append(scaled_targets[i])  # Assuming the target is the next time step
    x_train, y_train = np.array(x_train), np.array(y_train)
    input_shape = (x_train.shape[1], x_train.shape[2])
    model = Sequential()
    model.add(LSTM(units=100, activation='tanh', return_sequences=True, input_shape=input_shape))
    model.add(Dropout(0.4))
    model.add(LSTM(units=240, activation='tanh'))
    model.add(Dropout(0.5))
    model.add(Dense(units=3))
    model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=[tf.keras.metrics.MeanAbsoluteError()])
    model.fit(x_train, y_train, epochs=1000)
    mae = model.evaluate(x_train, y_train)
    loss_value = str(mae[0])[0:5]
    mae_value = str(mae[1])[0:5]
    base_filename = f'models/lstm_model_{unique_part}_y{size}_mae_{loss_value}_{mae_value}'
    filename = base_filename + '.h5'
    counter = 0
    # Loop until a unique filename is found
    while os.path.exists(filename):
        counter += 1
        filename = f"{base_filename}_dup{counter}.h5"
    model.save(filename)
if __name__ == '__main__':
    intervals = [1, 2, 3, 5]
    for size in intervals:
        proc = Process(target=train_model, args=(size))
        proc.start()
        proc.join()
    doned_file_path = os.path.join(r"C:\Users\arisa\Desktop\doned_path", csv_file)
    shutil.move(file_path, doned_file_path)
    try:
        # This is for Windows systems
        if sys.platform == "win32":
            os.system(f'start /B {sys.executable} {" ".join(sys.argv)}')
        else:
            # This is for Unix/Linux/MacOS systems
            os.execv(sys.executable, [sys.executable] + sys.argv)
    except Exception as e:
        # If the attempt to restart fails, fall back to this method
        print(f"Failed to restart: {e}")
        os.system(f'python {" ".join(sys.argv)}')
error:
Traceback (most recent call last):
File "c:\Users\arisa\Desktop\Stock-Price-Prediction-Using-LSTM-main\lstm_all.py", line 102, in <module>
proc = Process(target=train_model, args=(size))
File "C:\Users\arisa\.conda\envs\tf\lib\multiprocessing\process.py", line 91, in __init__
self._args = tuple(args)
TypeError: 'int' object is not iterable
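The traceback comes from args=(size): without a trailing comma, (size) is just an int in parentheses, and multiprocessing tries to iterate it. Passing a one-element tuple fixes that line:

# args must be an iterable; a single argument needs a trailing comma to form a tuple
proc = Process(target=train_model, args=(size,))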
|
9de40aebd7e302286f57a6dc786a2fa2
|
{
"intermediate": 0.3058694303035736,
"beginner": 0.3332005739212036,
"expert": 0.36093005537986755
}
|