| row_id (int64, 0–48.4k) | init_message (string, 1–342k chars) | conversation_hash (string, 32 chars) | scores (dict) |
|---|---|---|---|
603
|
Please write C code for me to plot a 3D parametric surface, where x, y and z are parametric functions of u and v, and tell me a step-by-step procedure to run this code on Debian Linux. I am a total noob at C and Linux; my Linux is bullseye, and I also have Visual Studio Code.
|
49ed365be242df025db0e2ca7adcf7fa
|
{
"intermediate": 0.4435424208641052,
"beginner": 0.2767217457294464,
"expert": 0.27973583340644836
}
|
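Row 603's plotting task boils down to sampling a (u, v) grid and mapping it through x(u,v), y(u,v), z(u,v). A minimal sketch of that idea in Python/Matplotlib (the row asks for C, where one would typically generate the grid and pipe it to gnuplot, installable on bullseye with `sudo apt install gcc gnuplot`); the torus is an assumed example surface:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample the parameter domain: u, v in [0, 2*pi).
u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 60), np.linspace(0, 2 * np.pi, 60))

# Example parametric surface (a torus); swap in any x(u,v), y(u,v), z(u,v).
x = (2 + np.cos(v)) * np.cos(u)
y = (2 + np.cos(v)) * np.sin(u)
z = np.sin(v)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(x, y, z)
plt.show()
```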
604
|
So what is the value of sqrt(6+2sqrt(7+3sqrt(8+4sqrt(9+5sqrt(10+...))))) carried out to infinity?
Define a function, named Ramanujan, which takes as one of its arguments the depth of a rational approximation to the above nested expression. If the depth is 2, the return value should be the value of sqrt(6+2sqrt(7+3sqrt(8))), computed recursively.
|
4abcdc2ec5861d0ffbb42409c2528c58
|
{
"intermediate": 0.2810628414154053,
"beginner": 0.41068342328071594,
"expert": 0.30825376510620117
}
|
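A minimal recursive sketch for row 604: the radical follows two arithmetic progressions (addends 6, 7, 8, ... and coefficients 2, 3, 4, ...), and Ramanujan's identity k + 2 = sqrt((k + 4) + k(k + 3)) gives the infinite value 4. The helper parameters `a` and `c`, which track the two progressions, are an assumption of this sketch:

```python
import math

def Ramanujan(depth, a=6, c=2):
    """Depth-limited value of sqrt(a + c*sqrt((a+1) + (c+1)*sqrt(...)))."""
    if depth == 0:
        return math.sqrt(a)  # truncate the nesting at the innermost addend
    return math.sqrt(a + c * Ramanujan(depth - 1, a + 1, c + 1))

print(Ramanujan(2))   # sqrt(6 + 2*sqrt(7 + 3*sqrt(8))) ~= 3.7243
print(Ramanujan(40))  # approaches the closed-form value 4
```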
605
|
write java code for the assignment in https://cs.clarku.edu/~cs121/homework/shuffles1/
|
f07b7f53968c10c5d74c2017efa27102
|
{
"intermediate": 0.30502429604530334,
"beginner": 0.40709352493286133,
"expert": 0.2878822088241577
}
|
606
|
A fictional company named NorthWind Traders has hired you to make some enhancements to their existing order-tracking database. You will be required to analyze their existing database, design a solution to meet their requirements, and create the new database according to the new design.
Enhancement #1 – Better Inventory Tracking
The existing database has three fields in the Products table that are currently used to track inventory stock (UnitsInStock, UnitsOnOrder, ReorderLevel). However, the company owns three different warehouses, and currently has no way of tracking stock levels for each product in each warehouse. Inventory for any product could be stored at any of the three warehouse locations. Each warehouse has a name and a street address that should be stored in the new database. The warehouses are commonly known by their location within the city: The Dockside warehouse, the Airport warehouse and the Central warehouse. The new scheme should be able to record the same basic data as the existing database (UnitsInStock, UnitsOnOrder, ReorderLevel), but also reference the warehouse in which the products are currently being stored in, or re-ordered from. Note: Your script should distribute existing products amongst the three warehouses. Feel free to make up the distribution.
|
db259fe5f2a79d4d1e3ae9f697c30636
|
{
"intermediate": 0.31408220529556274,
"beginner": 0.40369224548339844,
"expert": 0.28222551941871643
}
|
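One possible shape for row 606's Enhancement #1, sketched as SQLite DDL driven from Python (table and column names beyond those quoted in the row are assumptions, and the real Northwind Products table would be referenced by its existing key):

```python
import sqlite3

con = sqlite3.connect("northwind_enhanced_sketch.db")
con.executescript("""
CREATE TABLE Warehouses (
    WarehouseID   INTEGER PRIMARY KEY,
    WarehouseName TEXT NOT NULL,   -- 'Dockside', 'Airport', 'Central'
    StreetAddress TEXT NOT NULL
);

-- Stock moves off Products into one row per (product, warehouse) pair.
CREATE TABLE ProductInventory (
    ProductID    INTEGER NOT NULL,
    WarehouseID  INTEGER NOT NULL REFERENCES Warehouses(WarehouseID),
    UnitsInStock INTEGER NOT NULL DEFAULT 0,
    UnitsOnOrder INTEGER NOT NULL DEFAULT 0,
    ReorderLevel INTEGER NOT NULL DEFAULT 0,
    PRIMARY KEY (ProductID, WarehouseID)
);

-- A made-up round-robin distribution of existing products, as the note allows:
-- INSERT INTO ProductInventory (ProductID, WarehouseID, UnitsInStock, UnitsOnOrder, ReorderLevel)
-- SELECT ProductID, (ProductID % 3) + 1, UnitsInStock, UnitsOnOrder, ReorderLevel FROM Products;
""")
con.close()
```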
607
|
Hi ChatGPT, I want you to act as an expert in evolutionary computing. I will provide you with some information about a problem that I would like to optimize. You will then explain to me, step by step, how to solve it, as well as generate Python code. The problem that needs to be optimized is the Rastrigin function.
|
2bb1cf3f613c5051e13e9eaceb9df18b
|
{
"intermediate": 0.17595168948173523,
"beginner": 0.15502095222473145,
"expert": 0.6690272688865662
}
|
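Row 607 is well served by a plain genetic algorithm. A compact sketch minimizing the Rastrigin function f(x) = 10n + Σ(x_i² − 10·cos(2πx_i)) with tournament selection, uniform crossover, and Gaussian mutation; all hyperparameters here are arbitrary starting points:

```python
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
dim, pop_size, generations = 10, 50, 200
pop = rng.uniform(-5.12, 5.12, (pop_size, dim))   # random initial population

for gen in range(generations):
    fitness = np.array([rastrigin(ind) for ind in pop])
    # Tournament selection: keep the better of two random individuals.
    i, j = rng.integers(pop_size, size=(2, pop_size))
    parents = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
    # Uniform crossover between consecutive parents.
    mask = rng.random((pop_size, dim)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # Gaussian mutation on ~20% of genes, clipped to the search domain.
    children += rng.normal(0.0, 0.1, children.shape) * (rng.random(children.shape) < 0.2)
    pop = np.clip(children, -5.12, 5.12)

best = min(pop, key=rastrigin)
print(rastrigin(best), best)
```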
608
|
I need some help making a UI for an AutoHotkey script. The UI can be considered like a list. A user can add entries to that list. A user can remove individual entries from the list. Later on, the script will need to loop through each item on the list. Let's start with that and go from there.
|
d3599d85545063cf90c7f44123a4cd2b
|
{
"intermediate": 0.29329046607017517,
"beginner": 0.36282074451446533,
"expert": 0.34388887882232666
}
|
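Row 608 wants AutoHotkey; as a language-neutral illustration of the same structure (an input box, add/remove buttons, a list, and a loop over its items), here is a tkinter sketch in Python:

```python
import tkinter as tk

root = tk.Tk()
entry = tk.Entry(root)
entry.pack()
listbox = tk.Listbox(root)
listbox.pack()

def add():
    if entry.get():
        listbox.insert(tk.END, entry.get())   # add the typed entry
        entry.delete(0, tk.END)

def remove():
    for i in reversed(listbox.curselection()):  # delete selected, back to front
        listbox.delete(i)

def run_all():
    for item in listbox.get(0, tk.END):         # later: loop over every entry
        print("processing", item)

tk.Button(root, text="Add", command=add).pack()
tk.Button(root, text="Remove selected", command=remove).pack()
tk.Button(root, text="Run", command=run_all).pack()
root.mainloop()
```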
609
|
Write me a SQL file based on these instructions: Enhancement #1 – Better Inventory Tracking
The existing database has three fields in the Products table that are currently used to track inventory stock (UnitsInStock, UnitsOnOrder, ReorderLevel). However, the company owns three different warehouses, and currently has no way of tracking stock levels for each product in each warehouse. Inventory for any product could be stored at any of the three warehouse locations. Each warehouse has a name and a street address that should be stored in the new database. The warehouses are commonly known by their location within the city: The Dockside warehouse, the Airport warehouse and the Central warehouse. The new scheme should be able to record the same basic data as the existing database (UnitsInStock, UnitsOnOrder, ReorderLevel), but also reference the warehouse in which the products are currently being stored in, or re-ordered from. Note: Your script should distribute existing products amongst the three warehouses. Feel free to make up the distribution.
Enhancement #2 – Full Names with Title
The HR department has requested a database change to help integration with the new HR software program. The new software does not work well when importing separate values for first/last names and the title of courtesy. They propose adding a new field that contains a combined value in the following format: <Courtesy Title> <First Name> <Last Name>. Examples: Mr. John Doe or Dr. Jane Smith. The proposed field will not replace the usage of the existing separate fields; it will simply be a redundant field containing the full value from the three other fields, for use only with the new HR software program.
Deliverable Descriptions & Requirements
Database Setup & Requirements
For this project, you will need to create one new database:
Create a new, empty database named exactly: northwind_Enhanced. This db represents the “new and improved” db you will give your client upon completion of all work and testing.
After the initial database creation, no structural or data changes (DDL or DML) should occur in the original Northwind database.
|
c3949a0e088f3e85af515972cb3a1007
|
{
"intermediate": 0.4264000952243805,
"beginner": 0.25199875235557556,
"expert": 0.32160118222236633
}
|
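For row 609's Enhancement #2, the combined field is just string concatenation over the three existing Employees columns. A hedged sketch in SQLite syntax via Python (SQL Server would use `+` or CONCAT; the new column's name is an assumption):

```python
import sqlite3

con = sqlite3.connect("northwind_Enhanced.db")  # assumes the new database already exists
con.executescript("""
ALTER TABLE Employees ADD COLUMN FullNameWithTitle TEXT;

-- Redundant field for the HR software, e.g. 'Mr. John Doe' or 'Dr. Jane Smith'.
UPDATE Employees
SET FullNameWithTitle = TitleOfCourtesy || ' ' || FirstName || ' ' || LastName;
""")
con.commit()
con.close()
```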
610
|
Anime-style wallpaper with a face made of computer symbols, with a pink, violet, white, blue, and yellow color palette. Use Stable Diffusion.
|
6422e0dad8988e3d7663e344f6f29620
|
{
"intermediate": 0.30067870020866394,
"beginner": 0.2986868917942047,
"expert": 0.4006343483924866
}
|
611
|
Berlusconi flying over the Gulf of Naples in a smoking suit
|
f1efee0b8d01aa39cfe1e03a32037580
|
{
"intermediate": 0.34568408131599426,
"beginner": 0.3251830041408539,
"expert": 0.32913288474082947
}
|
612
|
Hello ChatGPT. I got this error in my train_pong_ai script in python: Traceback (most recent call last):
File "C:\Users\RSupreme4\PycharmProjects\pythonProjectAI\train_pong_ai.py", line 5, in <module>
from pong_game import PongGame, Paddle, Ball, get_obs
ImportError: cannot import name 'Paddle' from 'pong_game' (C:\Users\RSupreme4\PycharmProjects\pythonProjectAI\pong_game.py). The code for pong_game is:

import pygame
from pygame.locals import *

class PongGame:
    def __init__(self):
        self.screen = pygame.display.set_mode((640, 480))
        pygame.display.set_caption("Pong AI")
        self.clock = pygame.time.Clock()
        self.font = pygame.font.Font(None, 36)
        self.ball_pos = [320, 240]
        self.ball_speed = [2, 2]
        self.score = [0, 0]

    def draw_ball(self, ball_pos):
        pygame.draw.rect(self.screen, (255, 255, 255), pygame.Rect(ball_pos[0], ball_pos[1], 15, 15))

    def draw_paddle(self, paddle_pos):
        pygame.draw.rect(self.screen, (255, 255, 255), pygame.Rect(paddle_pos[0], paddle_pos[1], 10, 60))

    def move_ball(self, paddle_a_y, paddle_b_y):
        self.ball_pos[0] += self.ball_speed[0]
        self.ball_pos[1] += self.ball_speed[1]
        if self.ball_pos[1] <= 0 or self.ball_pos[1] + 15 >= 480:
            self.ball_speed[1] = -self.ball_speed[1]
        if (self.ball_pos[0] <= 30 and paddle_a_y <= self.ball_pos[1] <= paddle_a_y + 60) or \
                (self.ball_pos[0] >= 595 and paddle_b_y <= self.ball_pos[1] <= paddle_b_y + 60):
            self.ball_speed[0] = -self.ball_speed[0]
        if self.ball_pos[0] <= 0:
            self.score[1] += 1
            self.ball_pos = [320, 240]
        if self.ball_pos[0] + 15 >= 640:
            self.score[0] += 1
            self.ball_pos = [320, 240]

    def display_score(self):
        score_text = self.font.render(f"{self.score[0]} - {self.score[1]}", True, (255, 255, 255))
        self.screen.blit(score_text, (290, 10))

    def play(self, paddle_a_y, paddle_b_y):
        self.screen.fill((0, 0, 0))
        self.draw_paddle((20, paddle_a_y))
        self.draw_paddle((610, paddle_b_y))
        self.move_ball(paddle_a_y, paddle_b_y)
        self.draw_ball(self.ball_pos)
        self.display_score()
        pygame.display.flip()
        self.clock.tick(60)
|
ac6584cbdb0a852d64a1fcc036fcff56
|
{
"intermediate": 0.27706319093704224,
"beginner": 0.5558151602745056,
"expert": 0.16712158918380737
}
|
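The traceback in row 612 is consistent with pong_game.py defining only PongGame: the `from pong_game import PongGame, Paddle, Ball, get_obs` line fails because Paddle, Ball, and get_obs simply don't exist in that module. A hedged sketch of stubs that would satisfy the import; their real interfaces depend on what train_pong_ai.py expects:

```python
# Additions to pong_game.py -- shapes are assumptions, not the original design.

class Paddle:
    def __init__(self, x, y):
        self.x, self.y = x, y  # top-left corner of the 10x60 rectangle

class Ball:
    def __init__(self, pos=(320, 240), speed=(2, 2)):
        self.pos, self.speed = list(pos), list(speed)

def get_obs(game):
    # One plausible observation vector for the AI: ball state plus score.
    return game.ball_pos + game.ball_speed + game.score
```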
613
|
Here's a simple C program to convert a PNG image to an 8-bit color-palette grayscale PNG using the stb libraries, writing the output as an indexed image.
|
682fe2998e1f7005409612059e0762d0
|
{
"intermediate": 0.4517427682876587,
"beginner": 0.15371239185333252,
"expert": 0.394544780254364
}
|
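Row 613's C/stb route would decode with stb_image and encode with stb_image_write. For comparison, the same indexed-grayscale conversion in Python with Pillow (file names are placeholders):

```python
from PIL import Image

gray = Image.open("in.png").convert("L")          # 8-bit grayscale

indexed = Image.new("P", gray.size)               # palette (indexed) mode
indexed.putpalette([v for i in range(256) for v in (i, i, i)])  # 256 gray entries
indexed.putdata(list(gray.getdata()))             # pixel value == palette index
indexed.save("out.png")
```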
614
|
Hi, I got a dataset and I want you to implement the following on the dataset named 'dataset.csv'. Preprocessing steps to implement:
Replace any missing or invalid values: In the given dataset, there are some missing or invalid values represented by letters like 'f', 'd', 'e', 'c', and 'a'. You should replace these values with appropriate values like the mean or median of the column, or you can remove the rows that contain these values.
Convert all values to numerical format: Some columns in the dataset contain string values. You need to convert them to numerical format using appropriate techniques like one-hot encoding; use OneHotEncoder.
Scale the data: Scale numerical variables to have zero mean and unit variance. You can use StandardScaler from sklearn or Normalize from PyTorch. Since the columns in the dataset have different ranges of values, you need to scale the data to ensure that all columns have the same range of values. You can use techniques like min-max scaling or standardization for this purpose.
Split the dataset into training and testing sets: You should split the dataset into two parts: a training set and a testing set. The training set is used to train the model, while the testing set is used to evaluate the performance of the model on new data.
Normalize the data: If you are using certain machine learning algorithms like neural networks, you may need to normalize the data to ensure that the features have similar variances. Use PyTorch.
Check for outliers: You should check for outliers in the dataset and remove them if necessary. Outliers can have a significant impact on the performance of the model.
The dataset is given with header = ['f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'target']:
1 108 60 46 178 35.5 0.415 0
5 97 76 27 0 35.6 0.378 1
4 83 86 19 0 29.3 0.317 0
1 114 66 36 200 38.1 0.289 0
1 149 68 29 127 29.3 0.349 1
5 117 86 30 105 39.1 0.251 0
1 111 94 0 0 32.8 0.265 0
4 112 78 40 0 39.4 0.236 0
1 116 78 29 180 36.1 0.496 0
0 141 84 26 0 32.4 0.433 0
2 175 88 0 0 22.9 0.326 0
2 92 52 0 0 30.1 0.141 0
3 130 78 23 79 28.4 0.323 1
8 120 86 0 0 28.4 0.259 1
2 174 88 37 120 44.5 0.646 1
2 106 56 27 165 29 0.426 0
2 105 75 0 0 23.3 0.56 0
4 95 60 32 0 35.4 0.284 0
0 126 86 27 120 27.4 0.515 0
8 65 72 23 0 32 0.6 0
2 99 60 17 160 36.6 0.453 0
1 102 74 0 0 39.5 0.293 1
11 120 80 37 150 42.3 0.785 1
3 102 44 20 94 30.8 0.4 0
1 109 58 18 116 28.5 0.219 0
9 140 94 0 0 32.7 0.734 1
13 153 88 37 140 40.6 1.174 0
12 100 84 33 105 30 0.488 0
1 147 94 41 0 49.3 0.358 1
1 81 74 41 57 46.3 1.096 0
3 187 70 22 200 36.4 0.408 1
6 162 62 0 0 24.3 0.178 1
4 136 70 0 0 31.2 1.182 1
1 121 78 39 74 39 0.261 0
3 108 62 24 0 26 0.223 0
0 181 88 44 510 43.3 0.222 1
8 154 78 32 0 32.4 0.443 1
1 128 88 39 110 36.5 1.057 1
7 137 90 41 0 32 0.391 0
0 123 72 0 0 36.3 0.258 1
1 106 76 0 0 37.5 0.197 0
6 190 92 0 0 35.5 0.278 1
9 f 74 31 0 44 0.403 1
9 89 62 0 0 22.5 e 0
10 101 76 48 180 d 0.171 0
2 122 70 27 b 36.8 0.34 0
c 121 72 23 112 26.2 0.245 0
1 126 60 a 0 30.1 0.349 1.
|
057927c76c243bdb54c80360192dd757
|
{
"intermediate": 0.3793551027774811,
"beginner": 0.32215404510498047,
"expert": 0.29849085211753845
}
|
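A sketch of row 614's preprocessing chain, assuming dataset.csv holds the whitespace-separated rows shown above; median imputation and standardization are just one of the several options the row itself allows:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

header = ['f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'target']
data = pd.read_csv('dataset.csv', names=header, sep=r'\s+')

# Letters such as 'f', 'd', 'e', 'c', 'a' mark invalid cells: coerce them to
# NaN, then impute with the column median.
data = data.apply(pd.to_numeric, errors='coerce')
data = data.fillna(data.median(numeric_only=True))

X, y = data.drop(columns=['target']), data['target'].astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit the scaler on the training split only, to avoid leakage.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```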
615
|
how to run a script until a key is pressed?
|
9c6fb271d753c42dc8afc5f094b8ea60
|
{
"intermediate": 0.33942899107933044,
"beginner": 0.33856573700904846,
"expert": 0.3220052421092987
}
|
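For row 615, the usual pattern is to poll a flag that a background key-listener sets. A portable Python sketch reacting to Enter (detecting arbitrary keys needs platform help, e.g. msvcrt.kbhit() on Windows or the third-party `keyboard` package):

```python
import threading
import time

stop = threading.Event()

def wait_for_key():
    input()      # blocks until the user presses Enter
    stop.set()

threading.Thread(target=wait_for_key, daemon=True).start()

while not stop.is_set():
    print("working...")  # the script's real work goes here
    time.sleep(1)
print("key pressed, exiting")
```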
616
|
Hi, I was asked to implement the following. Note on using libraries: any pre-trained or pre-built neural networks or CNN architectures cannot be used (e.g. torchvision.models, keras.applications). This time you can use scikit-learn for data preprocessing. For this assignment you can use the PyTorch or Keras/TensorFlow deep learning framework (work using sklearn.neural_network.MLPClassifier won't be considered).
Part I: Building a Basic NN
In this part, implement a neural network using the PyTorch/Keras library. You will train the network on the dataset named 'dataset.csv', which contains seven features and a target. Your goal is to predict the target, which has a binary representation.
Step 1: Loading the Dataset
Load the dataset. You can use the pandas library to load the dataset into a DataFrame.
Step 2: Preprocessing the Dataset
First, we need to preprocess the dataset before we use it to train the neural network. Preprocessing typically involves converting categorical variables to numerical variables, scaling numerical variables, and splitting the dataset into training and validation sets. For this dataset, you can use the following preprocessing steps:
• Convert categorical variables to numerical variables using one-hot encoding
o You can use OneHotEncoder from sklearn
• Scale numerical variables to have zero mean and unit variance.
o You can use StandardScaler from sklearn or Normalize from PyTorch
• Split the dataset into training and validation sets.
o train_test_split from sklearn
You can also check Keras preprocessing tools here.
Step 3: Defining the Neural Network
Now, we need to define the neural network that we will use to make predictions on the dataset. For this part, you can define a simple neural network.
Decide your NN architecture:
• How many input neurons are there?
• What activation function will you choose?
o Suggestion: try ReLU
• What is the number of hidden layers?
o Suggestion: start with a small network, e.g. 2 or 3 layers
• What is the size of each hidden layer?
o Suggestion: try 64 or 128 nodes for each layer
• What activation function is used for the hidden and output layer?
Step 4: Training the Neural Network
Training has to be defined from scratch, e.g. code with the built-in .fit() function won't be evaluated.
1. Set up the training loop: In this step, you will create a loop that iterates over the training data for a specified number of epochs. For each epoch, you will iterate over the batches of the training data, compute the forward pass through the neural network, compute the loss, compute the gradients using backpropagation, and update the weights of the network using an optimizer such as Stochastic Gradient Descent (SGD) or Adam.
2. Define the loss function that will be used to compute the error between the predicted output of the neural network and the true labels of the training data. For binary classification problems, a commonly used loss function is Binary Cross Entropy Loss.
3. Choose an optimizer and a learning rate. The optimizer will update the weights of the neural network during training. Stochastic Gradient Descent (SGD) is one of the most commonly used; you can also explore other optimizers like Adam or RMSProp.
4. Train the neural network. Run the training loop and train the neural network on the training data. Select the number of epochs and batch size. Monitor the training loss and the validation loss at each epoch to ensure that the model is not overfitting to the training data.
5. Evaluate the performance of the model on the testing data. The expected accuracy for this task is more than 75%.
6. Save the weights of the trained neural network.
7. Visualize the results. Use visualization techniques such as confusion matrices. The training loop implementation should report both test and train accuracy and their losses as well.
|
68dac35905787b43d4d38d7b6c6a8539
|
{
"intermediate": 0.5481632947921753,
"beginner": 0.1738663762807846,
"expert": 0.2779703140258789
}
|
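A compressed sketch of the from-scratch loop that row 616's Step 4 requires (no .fit()): forward pass, loss, backpropagation, and optimizer step per mini-batch. The random tensors stand in for the preprocessed dataset.csv of Steps 1 and 2:

```python
import torch
import torch.nn as nn

X_train = torch.randn(400, 7)                 # stand-in: 7 preprocessed features
y_train = (torch.rand(400, 1) > 0.5).float()  # stand-in: binary target

model = nn.Sequential(nn.Linear(7, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):
    for i in range(0, len(X_train), 32):           # iterate over mini-batches
        xb, yb = X_train[i:i + 32], y_train[i:i + 32]
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)            # forward pass + loss
        loss.backward()                            # backpropagation
        optimizer.step()                           # weight update
```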
617
|
pixel = (uint8_t)((*p + *(p + 1) + *(p + 2)) / 3.0);
|
b9377805d2db25477a8ac484972a6bb3
|
{
"intermediate": 0.3552331030368805,
"beginner": 0.2668601870536804,
"expert": 0.3779067099094391
}
|
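Row 617's C expression averages the R, G and B bytes at pointer p into one grayscale byte. A vectorized numpy rendering of the same arithmetic over a whole H×W×3 image (the zero image is a placeholder):

```python
import numpy as np

rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder image data
gray = ((rgb[..., 0].astype(float) + rgb[..., 1] + rgb[..., 2]) / 3.0).astype(np.uint8)
```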
618
|
Create an advanced application in Python that helps the user achieve his goals, using ChatGPT 4 or 3.5 and the libraries that you consider appropriate.
|
159b8a4837d08c112df898049e2f80c7
|
{
"intermediate": 0.6066034436225891,
"beginner": 0.14033842086791992,
"expert": 0.2530581057071686
}
|
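A minimal sketch for rows 618/619, assuming the official `openai` Python client and an API key in the environment; the model name and prompt framing are placeholders, and a real goal assistant would add persistence and a UI on top:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def advise(goal: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder: whichever model is available
        messages=[
            {"role": "system", "content": "You are a goal-planning assistant."},
            {"role": "user", "content": f"Break this goal into concrete steps: {goal}"},
        ],
    )
    return response.choices[0].message.content

print(advise("learn basic statistics in three months"))
```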
619
|
Create an advanced application in Python that helps the user achieve his goals, using ChatGPT 4 or 3.5 and the libraries that you consider appropriate.
|
8c442b8f205f50811a012a3bbe7d93f6
|
{
"intermediate": 0.6066034436225891,
"beginner": 0.14033842086791992,
"expert": 0.2530581057071686
}
|
620
|
Rewrite this without using win32api:

def Is_Clicked(hwnd, game_window):
    if win32gui.GetForegroundWindow() != hwnd:
        continue
    if not win32api.GetAsyncKeyState(win32con.VK_LBUTTON) & 0x8000:
        continue
    x, y = win32gui.GetCursorPos()
    if hwnd == win32gui.GetForegroundWindow() and game_window.left <= x <= game_window.left + game_window.width and game_window.top <= y <= game_window.top + game_window.height:
        return True
    return False
|
7b0841688ec6df5c5b6dd3034cbc4a6b
|
{
"intermediate": 0.38996919989585876,
"beginner": 0.38969454169273376,
"expert": 0.22033622860908508
}
|
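One way to satisfy row 620 without win32api is raw ctypes (still Windows-only). Note that the original `continue` statements are illegal outside a loop; `return False` below is the assumed intent:

```python
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
VK_LBUTTON = 0x01

def is_clicked(hwnd, game_window):
    if user32.GetForegroundWindow() != hwnd:
        return False                                  # assumed intent of `continue`
    if not user32.GetAsyncKeyState(VK_LBUTTON) & 0x8000:
        return False
    pt = wintypes.POINT()
    user32.GetCursorPos(ctypes.byref(pt))             # cursor position in screen coords
    return (game_window.left <= pt.x <= game_window.left + game_window.width
            and game_window.top <= pt.y <= game_window.top + game_window.height)
```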
621
|
I need help with AutoHotkey. I need to loop through a ListView and print each entry in a message box.
|
924ef017d34d4c6f4d35712eca6899b0
|
{
"intermediate": 0.34134814143180847,
"beginner": 0.30247464776039124,
"expert": 0.3561771512031555
}
|
622
|
'''
import streamlit as st
import pandas as pd
import requests
import json
from PIL import Image, ImageOps
from io import BytesIO
from itertools import groupby
import datetime
import altair as alt
access_token = "EAAIui8JmOHYBAESXLZAnsSRe4OITHYzy3Q5osKgMXGRQnoVMtiIwJUonFjVHEjl9EZCEmURy9I9S9cnyFUXBquZCsWnGx1iJCYTvkKuUZBpBwwSceZB0ZB6YY9B83duIwZCoOlrOODhnA3HLLGbRKGPJ9hbQPLCrkVbc5ibhE43wIAinV0gVkJ30x4UEpb8fXLD8z5J9EYrbQZDZD"
account_id = "17841458386736965"
def load_media_info(access_token, account_id):
    base_url = f"https://graph.facebook.com/v11.0/{account_id}/media"
    params = {
        "fields": "id,media_type,media_url,thumbnail_url,permalink,caption,timestamp,like_count,comments_count,insights.metric(impressions,reach,engagement)",
        "access_token": access_token
    }
    items = []
    while base_url:
        response = requests.get(base_url, params=params)
        data = json.loads(response.text)
        items.extend(data["data"])
        if "paging" in data and "next" in data["paging"]:
            base_url = data["paging"]["next"]
            params = {}
        else:
            base_url = None
    return pd.DataFrame(items)

def load_comments_for_post(post_id, access_token):
    base_url = f"https://graph.facebook.com/v11.0/{post_id}/comments"
    params = {
        "access_token": access_token
    }
    comments = []
    while base_url:
        response = requests.get(base_url, params=params)
        data = json.loads(response.text)
        comments.extend(data["data"])
        if "paging" in data and "next" in data["paging"]:
            base_url = data["paging"]["next"]
            params = {}
        else:
            base_url = None
    return comments
df = load_media_info(access_token, account_id)
if 'thumbnail_url' not in df.columns:
    df['thumbnail_url'] = df['media_url']
df['thumbnail_url'] = df.apply(lambda x: x["media_url"] if x["media_type"] == "IMAGE" else x["thumbnail_url"], axis=1)
df["id"] = df["timestamp"]
df["id"] = df["id"].apply(lambda x: datetime.datetime.strptime(x.split("+")[0], "%Y-%m-%dT%H:%M:%S").strftime("%Y%m%d"))
df = df.sort_values("timestamp", ascending=False)
df["id_rank"] = [f"_{len(list(group))}" for _, group in groupby(df["id"])]
df["id"] += df["id_rank"]
menu = ["Content", "Analytics"]
choice = st.sidebar.radio("Menu", menu)
if "load_more" not in st.session_state:
    st.session_state.load_more = 0

def display_carousel(carousel_items):
    scale_factor = 0.15
    display_images = []
    for url in carousel_items:
        req_img = requests.get(url)
        img_bytes = req_img.content
        img = Image.open(BytesIO(img_bytes))
        display_image = ImageOps.scale(img, scale_factor)
        display_images.append(display_image)
    st.image(display_images, width=300)
if choice == "Content":
    selected_id = st.sidebar.selectbox("Select Post", df["id"].unique())
    selected_data = df[df["id"] == selected_id].iloc[0]
    image_url = selected_data["media_url"] if selected_data["media_type"] == "IMAGE" else selected_data["thumbnail_url"]
    image_response = requests.get(image_url)
    image = Image.open(BytesIO(image_response.content))
    display_carousel([image_url])

    # Process caption text
    caption_text = selected_data["caption"]
    if caption_text:
        start_desc_index = caption_text.find("[Description]")
        if start_desc_index != -1:
            caption_text = caption_text[start_desc_index + 13:]  # Remove text before "[Description]"
        end_tags_index = caption_text.find("[Tags]")
        if end_tags_index != -1:
            caption_text = caption_text[:end_tags_index]  # Remove text from "[Tags]"
        st.write(caption_text.strip())

    likes = selected_data["like_count"]
    if "insights" in selected_data.keys():
        try:
            impressions = selected_data["insights"][0]['values'][0]['value']
            percentage = (likes * 100) / impressions
            st.write(f"いいね: {likes} (インプレッションに対する割合: {percentage:.1f}%)")
        except (KeyError, IndexError):
            st.write(f"いいね: {likes}")
    else:
        st.write(f"いいね: {likes}")
    st.write(f"コメント数: {selected_data['comments_count']}")

    # Get comments
    try:
        post_id = selected_data["id"]
        comments = load_comments_for_post(post_id, access_token)
        if st.session_state.load_more:
            for comment in comments:
                st.write(f"{comment['username']}: {comment['text']}")
        else:
            for comment in comments[:5]:  # Show only the first 5 comments
                st.write(f"{comment['username']}: {comment['text']}")
        # Load more button
        if st.button("さらに表示"):
            st.session_state.load_more += 1
    except Exception as e:
        st.write("コメントの取得中にエラーが発生しました。")

elif choice == "Analytics":
    categories = ["いいね数", "コメント数"]
    selected_category = st.selectbox("Select metric", categories)
    if selected_category == "いいね数":
        metric = "like_count"
    elif selected_category == "コメント数":
        metric = "comments_count"
    chart_df = df[["id", "timestamp", metric]].copy()
    chart_df["timestamp"] = pd.to_datetime(chart_df["timestamp"]).dt.date
    chart = alt.Chart(chart_df).mark_line().encode(
        x="timestamp:T",
        y=metric + ":Q"
    ).properties(title=f"Time Series of {selected_category}",
                 width=800,
                 height=300)
    st.altair_chart(chart)
'''
Running the above code produces the error below. Please show the corrected code in full, without omissions.
'''
gaierror Traceback (most recent call last)
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/connection.py:174, in HTTPConnection._new_conn(self)
173 try:
--> 174 conn = connection.create_connection(
175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
178 except SocketTimeout:
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/util/connection.py:72, in create_connection(address, timeout, source_address, socket_options)
68 return six.raise_from(
69 LocationParseError(u"'%s', label empty or too long" % host), None
70 )
---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
73 af, socktype, proto, canonname, sa = res
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/socket.py:918, in getaddrinfo(host, port, family, type, proto, flags)
917 addrlist = []
--> 918 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
919 af, socktype, proto, canonname, sa = res
gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/connectionpool.py:386, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/connectionpool.py:1042, in HTTPSConnectionPool._validate_conn(self, conn)
1041 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1042 conn.connect()
1044 if not conn.is_verified:
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/connection.py:363, in HTTPSConnection.connect(self)
361 def connect(self):
362 # Add certificate verification
--> 363 self.sock = conn = self._new_conn()
364 hostname = self.host
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/connection.py:186, in HTTPConnection._new_conn(self)
185 except SocketError as e:
--> 186 raise NewConnectionError(
187 self, "Failed to establish a new connection: %s" % e
188 )
190 return conn
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f9356876a30>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/requests/adapters.py:489, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
488 if not chunked:
--> 489 resp = conn.urlopen(
490 method=request.method,
491 url=url,
492 body=request.body,
493 headers=request.headers,
494 redirect=False,
495 assert_same_host=False,
496 preload_content=False,
497 decode_content=False,
498 retries=self.max_retries,
499 timeout=timeout,
500 )
502 # Send the request.
503 else:
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/connectionpool.py:787, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
785 e = ProtocolError("Connection aborted.", e)
--> 787 retries = retries.increment(
788 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
789 )
790 retries.sleep()
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/urllib3/util/retry.py:592, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
591 if new_retry.is_exhausted():
--> 592 raise MaxRetryError(_pool, url, error or ResponseError(cause))
594 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPSConnectionPool(host='graph.facebook.com', port=443): Max retries exceeded with url: /v11.0/17841458386736965/media?fields=id%2Cmedia_type%2Cmedia_url%2Cthumbnail_url%2Cpermalink%2Ccaption%2Ctimestamp%2Clike_count%2Ccomments_count%2Cinsights.metric%28impressions%2Creach%2Cengagement%29&access_token=EAAIui8JmOHYBAESXLZAnsSRe4OITHYzy3Q5osKgMXGRQnoVMtiIwJUonFjVHEjl9EZCEmURy9I9S9cnyFUXBquZCsWnGx1iJCYTvkKuUZBpBwwSceZB0ZB6YY9B83duIwZCoOlrOODhnA3HLLGbRKGPJ9hbQPLCrkVbc5ibhE43wIAinV0gVkJ30x4UEpb8fXLD8z5J9EYrbQZDZD (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9356876a30>: Failed to establish a new connection: [Errno -2] Name or service not known'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
Cell In[45], line 57
53 base_url = None
55 return comments
---> 57 df = load_media_info(access_token, account_id)
58 if 'thumbnail_url' not in df.columns:
59 df['thumbnail_url'] = df['media_url']
Cell In[45], line 23, in load_media_info(access_token, account_id)
21 items = []
22 while base_url:
---> 23 response = requests.get(base_url, params=params)
24 data = json.loads(response.text)
26 items.extend(data["data"])
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/requests/api.py:73, in get(url, params, **kwargs)
62 def get(url, params=None, **kwargs):
63 r"""Sends a GET request.
64
65 :param url: URL for the new :class:`Request` object.
(...)
70 :rtype: requests.Response
71 """
---> 73 return request("get", url, params=params, **kwargs)
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/requests/api.py:59, in request(method, url, **kwargs)
55 # By using the 'with' statement we are sure the session is closed, thus we
56 # avoid leaving sockets open which can trigger a ResourceWarning in some
57 # cases, and look like a memory leak in others.
58 with sessions.Session() as session:
---> 59 return session.request(method=method, url=url, **kwargs)
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/requests/sessions.py:587, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
582 send_kwargs = {
583 "timeout": timeout,
584 "allow_redirects": allow_redirects,
585 }
586 send_kwargs.update(settings)
--> 587 resp = self.send(prep, **send_kwargs)
589 return resp
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/requests/sessions.py:701, in Session.send(self, request, **kwargs)
698 start = preferred_clock()
700 # Send the request
--> 701 r = adapter.send(request, **kwargs)
703 # Total elapsed time of the request (approximately)
704 elapsed = preferred_clock() - start
File ~/.var/app/org.jupyter.JupyterLab/config/jupyterlab-desktop/jlab_server/lib/python3.8/site-packages/requests/adapters.py:565, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
561 if isinstance(e.reason, _SSLError):
562 # This branch is for urllib3 v1.22 and later.
563 raise SSLError(e, request=request)
--> 565 raise ConnectionError(e, request=request)
567 except ClosedPoolError as e:
568 raise ConnectionError(e, request=request)
ConnectionError: HTTPSConnectionPool(host='graph.facebook.com', port=443): Max retries exceeded with url: /v11.0/17841458386736965/media?fields=id%2Cmedia_type%2Cmedia_url%2Cthumbnail_url%2Cpermalink%2Ccaption%2Ctimestamp%2Clike_count%2Ccomments_count%2Cinsights.metric%28impressions%2Creach%2Cengagement%29&access_token=EAAIui8JmOHYBAESXLZAnsSRe4OITHYzy3Q5osKgMXGRQnoVMtiIwJUonFjVHEjl9EZCEmURy9I9S9cnyFUXBquZCsWnGx1iJCYTvkKuUZBpBwwSceZB0ZB6YY9B83duIwZCoOlrOODhnA3HLLGbRKGPJ9hbQPLCrkVbc5ibhE43wIAinV0gVkJ30x4UEpb8fXLD8z5J9EYrbQZDZD (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9356876a30>: Failed to establish a new connection: [Errno -2] Name or service not known'))
'''
|
3a1d0fdee2ec52939011033a210ec3ff
|
{
"intermediate": 0.3365078866481781,
"beginner": 0.40272417664527893,
"expert": 0.2607679069042206
}
|
623
|
Implement a desktop mask-layer window spanning multiple monitors, using C++.
|
98ac1d1971531c77f91c1e99610720e8
|
{
"intermediate": 0.32206466794013977,
"beginner": 0.21377567946910858,
"expert": 0.46415960788726807
}
|
624
|
Give an example BERT model using PyTorch
|
50926f9a65ac80726f792e1de0956a8d
|
{
"intermediate": 0.2624857723712921,
"beginner": 0.0967104360461235,
"expert": 0.6408037543296814
}
|
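A minimal sketch for row 624 using the Hugging Face `transformers` package, which wraps a pretrained PyTorch BERT; the model name is the standard base checkpoint:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Give an example BERT model using PyTorch", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence length, 768)
```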
625
|
I need help debugging an AutoHotkey script. I think it hangs while waiting for the color change.

ColorCheck := 0
Loop
{
    WinActivate, hahah
    WinWaitActive, hahah
    ; Get the color at the specified window coordinate
    PixelGetColor, color, 615, 90, RGB
    ; Check if the color has changed from 0070EF to 0086FF
    if (color == 0x0086FF)
    {
        break
    }
    ColorCheck++
    if (ColorCheck >= 600)
    {
        ColorCheck := 0
        Msgbox, 4, 1 minute has passed downloading %CurItem%. Do you want to continue?`n`nClick Yes to continue, or No to exit.
        IfMsgBox Yes
            continue
        else
            break
    }
    ; Sleep for a short interval before checking again
    Sleep, 100
}
|
f1f6326df9846c85ce39b1da2af24622
|
{
"intermediate": 0.4054631292819977,
"beginner": 0.44019731879234314,
"expert": 0.1543395221233368
}
|
626
|
I'm working on a FiveM volleyball script. How can I make it so that when I press E within a certain zone, it adds me to the team, shows on the server that the team has one person (for everyone), and doesn't let anyone else join the team?
|
863bab5abf89185ccf593263290bd989
|
{
"intermediate": 0.3424164652824402,
"beginner": 0.2972871959209442,
"expert": 0.3602963984012604
}
|
627
|
Hi, I've implemented a neural net for a small dataset. The max test accuracy I'm getting is 78%. I want you to thoroughly check my implementation and see what I can best do to increase the performance of my model. Consider all possible ways and give me the code lines where I need to make changes. Here is my implementation; I have all necessary imports installed as per this code.
# Step 1: Load the Dataset
data = pd.read_csv('dataset.csv')
data.head()
# Visualize scatter plots for features against the target
sns.scatterplot(data=data, x=data.columns[0], y='target')
plt.title('Feature1 vs Target')
plt.show()
sns.scatterplot(data=data, x=data.columns[1], y='target')
plt.title('Feature2 vs Target')
plt.show()
sns.scatterplot(data=data, x=data.columns[2], y='target')
plt.title('Feature3 vs Target')
plt.show()
"""# **Step 2**: **Preprocessing the Dataset**
"""
# Identify categorical columns
categorical_columns = data.select_dtypes(include=['object']).columns
# Create an empty list to store invalid values
invalid_values = []
# Iterate over all object columns and find the invalid values
for col in categorical_columns:
    invalid_values.extend(data.loc[data[col].str.isalpha(), col].unique())
# Print the object columns & the unique list of invalid values
print('Categorical columns:' , categorical_columns.to_list(), 'Invalid Values:', set(invalid_values))
# Replace missing/invalid values with pd.NA
data = data.replace(invalid_values, np.nan)
# Find the missing values
missing_values = data.isna().sum()
print('Missing values:')
print(missing_values)
# Fill missing values with mode or mean, depending on column type
fill_columns = [k for k, v in missing_values.to_dict().items() if v != 0]
for col in fill_columns:
    if data[col].dtype == 'object':
        data[col].fillna(data[col].mode()[0], inplace=True)
    else:
        data[col].fillna(data[col].mean(), inplace=True)
# Convert the dataframe to numeric
data = data.astype('float')
X = data.drop(columns=['target'])
y = data['target']
# Scale numerical variables to have zero mean and unit variance.
scaler = StandardScaler(with_mean= False)
X_scaled = scaler.fit_transform(X)
# Compute the mean and variance of each column
mean = np.mean(X_scaled, axis=0)
var = np.var(X_scaled, axis=0)
print(f'Mean: {mean} Variance: {var}')
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42, shuffle= False)
#Step 3 & 4 : Defining the Neural Network and its Architecture
class NNClassifier(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(NNClassifier, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(hidden_size, hidden_size)
        self.relu3 = nn.ReLU()
        self.fc4 = nn.Linear(hidden_size, output_size)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.relu1(self.fc1(x))
        x = self.relu2(self.fc2(x))
        x = self.relu3(self.fc3(x))
        x = self.sigmoid(self.fc4(x))
        return x
hidden_size = 128
input_size = X_train.shape[1]
output_size = 1
model = NNClassifier(input_size, hidden_size, output_size)
# Set hyperparameters
epochs = 1000
batch_size = 64
learning_rate = 0.01
# Define loss and optimizer
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
# Training segment
train_losses = []
train_accuracies = []
test_losses = []
test_accuracies = []
for epoch in range(epochs):
    epoch_train_losses = []
    epoch_y_true = []
    epoch_y_pred = []
    for i in range(0, len(X_train), batch_size):
        #X_batch = torch.tensor(X_train.iloc[i:i + batch_size].values, dtype=torch.float32)
        X_batch = torch.tensor(X_train[i:i + batch_size], dtype=torch.float32)
        y_batch = torch.tensor(y_train[i:i + batch_size].values, dtype=torch.float32).view(-1, 1)
        optimizer.zero_grad()
        y_pred = model(X_batch)
        loss = criterion(y_pred, y_batch)
        loss.backward()
        optimizer.step()
        epoch_train_losses.append(loss.item())
        epoch_y_true.extend(y_batch.numpy().flatten().tolist())
        epoch_y_pred.extend((y_pred > 0.5).float().numpy().flatten().tolist())
    train_losses.append(sum(epoch_train_losses) / len(epoch_train_losses))
    train_accuracies.append(accuracy_score(epoch_y_true, epoch_y_pred))

    # Testing segment
    with torch.no_grad():
        #X_test_tensor = torch.tensor(X_test.values, dtype=torch.float32)
        X_test_tensor = torch.tensor(X_test, dtype=torch.float32)
        y_test_tensor = torch.tensor(y_test.values, dtype=torch.float32).view(-1, 1)
        test_pred = model(X_test_tensor)
        test_loss = criterion(test_pred, y_test_tensor)
        test_accuracy = accuracy_score(y_test_tensor, (test_pred > 0.5).float())
    test_losses.append(test_loss.item())
    test_accuracies.append(test_accuracy)

    if epoch % 100 == 0:
        print(f"Epoch: {epoch+1}/{epochs}, Training Loss: {train_losses[-1]}, Test Loss: {test_loss.item()}, Training Accuracy: {train_accuracies[-1]}, Test Accuracy: {test_accuracy}")
# Compare training and test losses
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Test Loss')
plt.title('Training vs Test Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Compare training and test accuracies
plt.plot(train_accuracies, label='Training Accuracy')
plt.plot(test_accuracies, label='Test Accuracy')
plt.title('Training vs Test Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
|
61141fe447f70ec4d7e787695b0e1bd5
|
{
"intermediate": 0.24290770292282104,
"beginner": 0.4966225326061249,
"expert": 0.26046982407569885
}
|
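A few targeted tweaks for row 627's code, expressed as drop-in replacements for the corresponding lines above (judgment calls, not guaranteed accuracy gains): re-enable centering in the scaler, shuffle and stratify the split, and try Adam over plain SGD.

```python
# Drop-in replacements for the corresponding lines in the code above.
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import torch.optim as optim

scaler = StandardScaler()                      # default with_mean=True gives truly zero-mean features
X_scaled = scaler.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42, shuffle=True, stratify=y)

optimizer = optim.Adam(model.parameters(), lr=1e-3)   # often converges better than SGD at lr=0.01
```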
628
|
I want you to become a senior frontend developer with 10 years of experience. As a senior developer, you have already mastered the best practices and can create efficient and scalable applications. Your task is to refactor and fix the Tasks Logger application. The application does the following.
Allows user to input task title
Allows user to select task type. Either Front-End or Back-End
If the task type is Front-End, it should automatically be added to the Front-End Tasks container; otherwise, add it to the Back-End container when the user clicks 'Add'. Each task should have a progress bar.
Delete button beside each task to delete the task
When the user clicks on a task, display a modal where the user can input and add a subtask. The modal should also display the task title and information about the last added task, for example: Last Added: April 13, 2023 (12:03pm)
Each subtask should have option 'Pending, Ongoing, and Completed' beside it and Change the background color of subtask depending on its status.
Use local storage so that when a user reloads the page, the tasks and subtasks are still there. The progress bar of each task depends on the average % of its subtasks' statuses (100% Complete, 50% Ongoing, 0% Pending).
You can use the following React Hooks: useState, useEffect, useContext, useReducer
Perform TDD with Mocha and Chai
You can ask me for clarifying questions before you start.
|
6503c7856b82d83d12a80f3ee90347d4
|
{
"intermediate": 0.49571147561073303,
"beginner": 0.2300984114408493,
"expert": 0.27419009804725647
}
|
629
|
git: "RSA key fingerprint is ..." followed by "This key is not known by any other names"
|
daa980a3465dbf308ce9f3de3ab3d7be
|
{
"intermediate": 0.4698849320411682,
"beginner": 0.1807658076286316,
"expert": 0.3493492901325226
}
|
630
|
'''
import streamlit as st
import pandas as pd
import requests
import json
from PIL import Image, ImageOps
from io import BytesIO
from itertools import groupby
import datetime
import altair as alt
access_token = "EAAIui8JmOHYBAESXLZAnsSRe4OITHYzy3Q5osKgMXGRQnoVMtiIwJUonFjVHEjl9EZCEmURy9I9S9cnyFUXBquZCsWnGx1iJCYTvkKuUZBpBwwSceZB0ZB6YY9B83duIwZCoOlrOODhnA3HLLGbRKGPJ9hbQPLCrkVbc5ibhE43wIAinV0gVkJ30x4UEpb8fXLD8z5J9EYrbQZDZD"
account_id = "17841458386736965"
def load_media_info(access_token, account_id):
    base_url = f"https://graph.facebook.com/v11.0/{account_id}/media"
    params = {
        "fields": "id,media_type,media_url,thumbnail_url,permalink,caption,timestamp,like_count,comments_count,insights.metric(impressions,reach,engagement),children",
        "access_token": access_token
    }
    items = []
    while base_url:
        response = requests.get(base_url, params=params)
        data = json.loads(response.text)
        items.extend(data["data"])
        if "paging" in data and "next" in data["paging"]:
            base_url = data["paging"]["next"]
            params = {}
        else:
            base_url = None
    return pd.DataFrame(items)

def load_comments_for_post(post_id, access_token):
    base_url = f"https://graph.facebook.com/v11.0/{post_id}/comments"
    params = {
        "access_token": access_token
    }
    comments = []
    while base_url:
        response = requests.get(base_url, params=params)
        data = json.loads(response.text)
        comments.extend(data["data"])
        if "paging" in data and "next" in data["paging"]:
            base_url = data["paging"]["next"]
            params = {}
        else:
            base_url = None
    return comments
df = load_media_info(access_token, account_id)
if 'thumbnail_url' not in df.columns:
    df['thumbnail_url'] = df['media_url']
df['thumbnail_url'] = df.apply(lambda x: x["media_url"] if x["media_type"] == "IMAGE" else x["thumbnail_url"], axis=1)
df["id"] = df["timestamp"]
df["id"] = df["id"].apply(lambda x: datetime.datetime.strptime(x.split("+")[0], "%Y-%m-%dT%H:%M:%S").strftime("%Y%m%d"))
df = df.sort_values("timestamp", ascending=False)
df["id_rank"] = [f"_{len(list(group))}" for _, group in groupby(df["id"])]
df["id"] += df["id_rank"]
menu = ["Content", "Analytics"]
choice = st.sidebar.radio("Menu", menu)
if "load_more" not in st.session_state:
    st.session_state.load_more = 0

def display_carousel(carousel_items):
    scale_factor = 0.15
    display_images = []
    for url in carousel_items:
        req_img = requests.get(url)
        img_bytes = req_img.content
        img = Image.open(BytesIO(img_bytes))
        display_image = ImageOps.scale(img, scale_factor)
        display_images.append(display_image)
    st.image(display_images, width=300)
if choice == "Content":
    selected_id = st.sidebar.selectbox("Select Post", df["id"].unique())
    selected_data = df[df["id"] == selected_id].iloc[0]
    image_url = selected_data["media_url"] if selected_data["media_type"] == "IMAGE" else selected_data["thumbnail_url"]
    image_response = requests.get(image_url)
    image = Image.open(BytesIO(image_response.content))

    if "children" in selected_data:
        carousel_items = [datum["media_url"] for datum in selected_data["children"]["data"]]
    else:
        carousel_items = [image_url]
    display_carousel(carousel_items)

    # Process caption text
    caption_text = selected_data["caption"]
    if caption_text:
        start_desc_index = caption_text.find("[Description]")
        if start_desc_index != -1:
            caption_text = caption_text[start_desc_index + 13:]  # Remove text before "[Description]"
        end_tags_index = caption_text.find("[Tags]")
        if end_tags_index != -1:
            caption_text = caption_text[:end_tags_index]  # Remove text from "[Tags]"
        st.write(caption_text.strip())

    likes = selected_data["like_count"]
    if "insights" in selected_data.keys():
        try:
            impressions = selected_data["insights"][0]['values'][0]['value']
            percentage = (likes * 100) / impressions
            st.write(f"いいね: {likes} (インプレッションに対する割合: {percentage:.1f}%)")
        except (KeyError, IndexError):
            st.write(f"いいね: {likes}")
    else:
        st.write(f"いいね: {likes}")
    st.write(f"コメント数: {selected_data['comments_count']}")

    # Get comments
    post_id = selected_data["id"]
    try:
        comments = load_comments_for_post(post_id, access_token)
        if st.session_state.load_more:
            for comment in comments:
                st.write(f"{comment['id']}: {comment['text']}")
        else:
            for comment in comments[:5]:  # Show only the first 5 comments
                st.write(f"{comment['id']}: {comment['text']}")
        # Load more button
        if st.button("さらに表示"):
            st.session_state.load_more += 1
    except Exception as e:
        st.write("コメントの取得中にエラーが発生しました。")
elif choice == "Analytics":
    categories = ["いいね数", "コメント数"]
    selected_category = st.selectbox("Select metric", categories)
    if selected_category == "いいね数":
        metric = "like_count"
    elif selected_category == "コメント数":
        metric = "comments_count"
    chart_df = df[["id", "timestamp", metric]].copy()
    chart_df["timestamp"] = pd.to_datetime(chart_df["timestamp"]).dt.date
    chart = alt.Chart(chart_df).mark_line().encode(
        x="timestamp:T",
        y=metric + ":Q"
    ).properties(title=f"Time Series of {selected_category}",
                 width=800,
                 height=300)
    st.altair_chart(chart)
'''
Running the above code produces the error below. Please show the corrected code in full, without omissions.
'''
KeyError Traceback (most recent call last)
Cell In[48], line 97
     94 image = Image.open(BytesIO(image_response.content))
     96 if "children" in selected_data:
---> 97     carousel_items = [datum["media_url"] for datum in selected_data["children"]["data"]]
     98 else:
     99     carousel_items = [image_url]
Cell In[48], line 97, in <listcomp>(.0)
     94 image = Image.open(BytesIO(image_response.content))
     96 if "children" in selected_data:
---> 97     carousel_items = [datum["media_url"] for datum in selected_data["children"]["data"]]
     98 else:
     99     carousel_items = [image_url]
KeyError: 'media_url'
'''
|
04059bc225c9841cd59d35924f4827b8
|
{
"intermediate": 0.34018293023109436,
"beginner": 0.48244380950927734,
"expert": 0.17737331986427307
}
|
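The KeyError in row 630 is consistent with requesting the bare `children` field: the Graph API then returns only child IDs, so no child carries media_url (and VIDEO children may expose thumbnail_url rather than media_url even when subfields are requested). Two hedged adjustments to the code above, requesting the subfields explicitly and falling back per child:

```python
# Drop-in changes to the code above.
params = {
    "fields": "id,media_type,media_url,thumbnail_url,permalink,caption,"
              "timestamp,like_count,comments_count,"
              "insights.metric(impressions,reach,engagement),"
              "children{media_type,media_url,thumbnail_url}",
    "access_token": access_token
}

carousel_items = [
    child.get("media_url") or child.get("thumbnail_url")
    for child in selected_data["children"]["data"]
    if child.get("media_url") or child.get("thumbnail_url")
]
```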
631
|
Does the Quidway E050 support AAA authentication?
|
c25a6e259e8b19157f1e95cb5e742a37
|
{
"intermediate": 0.38464364409446716,
"beginner": 0.1937766820192337,
"expert": 0.4215797185897827
}
|
632
|
Evaluate these classes:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Assignment_4
{
    public class HashMap<K, V>
    {
        /* Properties */
        public Entry<K, V>[] Table { get; set; }
        public int Capacity { get; set; }
        public double LoadFactor { get; set; }
        public int size; // Field used since we need a Size() method to satisfy the tests.

        /* Constructors */
        public HashMap()
        {
            this.Capacity = 11;
            this.Table = new Entry<K, V>[this.Capacity];
            this.LoadFactor = 0.75;
            this.size = 0;
        }

        public HashMap(int initialCapacity)
        {
            if (initialCapacity <= 0)
            {
                throw new ArgumentException();
            }
            this.Capacity = initialCapacity;
            this.Table = new Entry<K, V>[this.Capacity];
            this.LoadFactor = 0.75;
            this.size = 0;
        }

        public HashMap(int initialCapacity, double loadFactor)
        {
            if (initialCapacity <= 0 || loadFactor <= 0)
            {
                throw new ArgumentException();
            }
            this.Capacity = initialCapacity;
            this.Table = new Entry<K, V>[this.Capacity];
            this.LoadFactor = loadFactor;
            this.size = 0;
        }

        /* Methods */
        public int Size()
        {
            return this.size;
        }

        public bool IsEmpty()
        {
            return this.size == 0;
        }

        public void Clear()
        {
            Array.Clear(this.Table, 0, this.Table.Length);
            this.size = 0;
        }

        //public int GetMatchingOrNextAvailableBucket(K key)
        //{
        //}

        //public V Get(K key)
        //{
        //}

        public V Put(K key, V value)
        {
            if (key == null)
            {
                throw new ArgumentNullException();
            }
            if (value == null)
            {
                throw new ArgumentNullException();
            }
            if (IsEmpty())
            {
                Entry<K, V> newEntry = new Entry<K, V>(key, value);
                this.size++;
            }
            V oldValue = value;
            return oldValue;
        }

        //public V Remove(K key)
        //{
        //}

        //private int ReSize()
        //{
        //}

        //public void ReHash()
        //{
        //}

        //public IEnumerator<V> Values()
        //{
        //}

        //public IEnumerator<K> Keys()
        //{
        //}
    }
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Assignment_4
{
    public class Entry<K, V>
    {
        /* Properties */
        public K Key { get; set; }
        public V Value { get; set; }

        /* Constructors */
        public Entry(K key, V value)
        {
            this.Key = key;
            this.Value = value;
        }
    }
}
|
88e8e79ecf8f9a18b9ec3c7c6536206f
|
{
"intermediate": 0.39889922738075256,
"beginner": 0.4000631272792816,
"expert": 0.20103761553764343
}
|
633
|
Can you write an app that monitors a 4chan general thread and sends a notification if a post gets many replies?
|
93d3d0d331940b5af2778e3bee1e1836
|
{
"intermediate": 0.4232189357280731,
"beginner": 0.11105728149414062,
"expert": 0.46572375297546387
}
|
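A minimal sketch for row 633 using 4chan's read-only JSON API (a.4cdn.org); the board, thread number, and reply threshold are placeholders:

```python
import re
import time
from collections import Counter

import requests

BOARD, THREAD_NO, THRESHOLD = "g", 123456, 10   # placeholders

def reply_counts():
    """Count how often each post number is quoted (>>NNNN) in the thread."""
    url = f"https://a.4cdn.org/{BOARD}/thread/{THREAD_NO}.json"
    posts = requests.get(url, timeout=10).json()["posts"]
    counts = Counter()
    for post in posts:
        # Quotelinks appear as &gt;&gt;12345678 inside the comment HTML.
        for quoted in re.findall(r"&gt;&gt;(\d+)", post.get("com", "")):
            counts[int(quoted)] += 1
    return counts

notified = set()
while True:
    for post_no, n in reply_counts().items():
        if n >= THRESHOLD and post_no not in notified:
            notified.add(post_no)
            print(f"post {post_no} has {n} replies")   # swap in a real notifier here
    time.sleep(60)   # poll politely
```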
634
|
I need help, please, making an AutoHotkey script that scrapes a (I think JavaScript) webpage and, using regex, extracts hyperlinks with a base URL of /us-en/album/. For example, a line in the HTML could be <a href="/us-en/album/no-escape-doctor-p-remix-ganja-white-night-and-dirt-monkey/lxoh1ckw0j04b" title="More details on No Escape (Doctor P Remix) by Ganja White Night."> and I need just what is between the first two " characters, so the following: /us-en/album/no-escape-doctor-p-remix-ganja-white-night-and-dirt-monkey/lxoh1ckw0j04b
|
8df615a80f3da4374da40d58efd0a793
|
{
"intermediate": 0.5087490081787109,
"beginner": 0.169886514544487,
"expert": 0.32136452198028564
}
|
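Row 634 wants AutoHotkey, but the regex itself is the portable part. A Python sketch of the same extraction (the URL is a placeholder; if the links are injected by JavaScript, a plain HTTP fetch may not see them and a browser-automation tool would be needed):

```python
import re

import requests

html = requests.get("https://example.com/some-album-page").text   # placeholder URL
links = re.findall(r'href="(/us-en/album/[^"]+)"', html)          # capture between the quotes
print(links)   # e.g. ['/us-en/album/no-escape-doctor-p-remix-.../lxoh1ckw0j04b']
```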
635
|
'''
import streamlit as st
import pandas as pd
import requests
import json
from PIL import Image, ImageOps
from io import BytesIO
from itertools import groupby
import instaloader
import datetime
import altair as alt
loader = instaloader.Instaloader()
# For login
username = "walhalax"
password = "W@lhalax4031"
loader.context.login(username, password) # Login
loader.context.request_timeout = (9, 15) # Increase request timeout
access_token = "EAAIui8JmOHYBAESXLZAnsSRe4OITHYzy3Q5osKgMXGRQnoVMtiIwJUonFjVHEjl9EZCEmURy9I9S9cnyFUXBquZCsWnGx1iJCYTvkKuUZBpBwwSceZB0ZB6YY9B83duIwZCoOlrOODhnA3HLLGbRKGPJ9hbQPLCrkVbc5ibhE43wIAinV0gVkJ30x4UEpb8fXLD8z5J9EYrbQZDZD"
account_id = "17841458386736965"
def load_media_info(access_token, account_id):
    base_url = f"https://graph.facebook.com/v11.0/{account_id}/media"
    params = {
        "fields": "id,media_type,media_url,thumbnail_url,permalink,caption,timestamp,like_count,comments_count,insights.metric(impressions,reach,engagement),children{media_type,media_url}",
        "access_token": access_token
    }
    items = []
    while base_url:
        response = requests.get(base_url, params=params)
        data = json.loads(response.text)
        items.extend(data["data"])
        if "paging" in data and "next" in data["paging"]:
            base_url = data["paging"]["next"]
            params = {}
        else:
            base_url = None
    return pd.DataFrame(items)
df = load_media_info(access_token, account_id)
if 'thumbnail_url' not in df.columns:
    df['thumbnail_url'] = df['media_url']
df['thumbnail_url'] = df.apply(lambda x: x["media_url"] if x["media_type"] == "IMAGE" else x["thumbnail_url"], axis=1)
df["id"] = df["timestamp"]
df["id"] = df["id"].apply(lambda x: datetime.datetime.strptime(x.split("+")[0], "%Y-%m-%dT%H:%M:%S").strftime("%Y%m%d"))
df = df.sort_values("timestamp", ascending=False)
df["id_rank"] = [f"_{len(list(group))}" for _, group in groupby(df["id"])]
df["id"] += df["id_rank"]
menu = ["Content", "Analytics"]
choice = st.sidebar.radio("Menu", menu)
if "load_more" not in st.session_state:
    st.session_state.load_more = 0

def display_carousel(carousel_items):
    scale_factor = 0.15
    display_images = []
    for url in carousel_items:
        req_img = requests.get(url)
        img_bytes = req_img.content
        img = Image.open(BytesIO(img_bytes))
        display_image = ImageOps.scale(img, scale_factor)
        display_images.append(display_image)
    st.image(display_images, width=300)
if choice == "Content":
    selected_id = st.sidebar.selectbox("Select Post", df["id"].unique())
    selected_data = df[df["id"] == selected_id].iloc[0]
    image_url = selected_data["media_url"] if selected_data["media_type"] == "IMAGE" else selected_data["thumbnail_url"]
    image_response = requests.get(image_url)
    image = Image.open(BytesIO(image_response.content))

    # Display carousel
    if "children" in selected_data.keys():
        carousel_items = [child_data["media_url"] for child_data in selected_data["children"]["data"]]
        display_carousel(carousel_items)
    else:
        display_carousel([image_url])

    # Process caption text
    caption_text = selected_data["caption"]
    if caption_text:
        start_desc_index = caption_text.find("[Description]")
        if start_desc_index != -1:
            caption_text = caption_text[start_desc_index + 13:]  # Remove text before "[Description]"
        end_tags_index = caption_text.find("[Tags]")
        if end_tags_index != -1:
            caption_text = caption_text[:end_tags_index]  # Remove text from "[Tags]"
        st.write(caption_text.strip())

    likes = selected_data["like_count"]
    if "insights" in selected_data.keys():
        try:
            impressions = selected_data["insights"][0]['values'][0]['value']
            percentage = (likes * 100) / impressions
            st.write(f"いいね: {likes} (インプレッションに対する割合: {percentage:.1f}%)")
        except (KeyError, IndexError):
            st.write(f"いいね: {likes}")
    else:
        st.write(f"いいね: {likes}")
    st.write(f"コメント数: {selected_data['comments_count']}")

    # Get comments and usernames
    try:
        shortcode = selected_data["permalink"].split("/")[-2]
        post = instaloader.Post.from_shortcode(loader.context, shortcode)
        comments = post.get_comments()
        comment_list = [(comment.owner.username, comment.text) for comment in comments]
        if st.session_state.load_more:
            for username, text in comment_list:
                st.write(f"{username}: {text}")
        else:
            for username, text in comment_list[:3]:  # Show only the first 3 comments
                st.write(f"{username}: {text}")
        # Load more button
        if st.button("さらに表示"):
            st.session_state.load_more += 1
    except Exception as e:
        st.write("コメントの取得中にエラーが発生しました。")
elif choice == "Analytics":
    categories = ["いいね数", "コメント数"]
    selected_category = st.selectbox("Select metric", categories)
    if selected_category == "いいね数":
        metric = "like_count"
    elif selected_category == "コメント数":
        metric = "comments_count"
    chart_df = df[["id", "timestamp", metric]].copy()
    chart_df["timestamp"] = pd.to_datetime(chart_df["timestamp"]).dt.date
    chart = alt.Chart(chart_df).mark_line().encode(
        x="timestamp:T",
        y=metric + ":Q"
    ).properties(
        title=f"Time Series of {selected_category}",
        width=800,
        height=300
    )
    st.altair_chart(chart)
'''
With the above code, the Jupyter development environment shows no errors, and I confirmed that it renders in Streamlit. However, some parts are not displayed as expected.
① The feature in "Content" that removes the text before "[Description]" and the text from "[Tags]" onward from the description is not working; please fix it, including any fundamental rework needed.
② For a post's content, show only the first image normally, and when the image-expand button is pressed, show all images of that post in a list.
Please show the corrected code in full, without omissions, so that it works properly.
|
85684b9dfc2213608a698ea273261b69
|
{
"intermediate": 0.3743874430656433,
"beginner": 0.3443370461463928,
"expert": 0.28127551078796387
}
|
636
|
Modal won't pop up. SubtaskModal should pop up when I click on a task.
|
eb1d0e870dcc15bb00e5201a1f2be760
|
{
"intermediate": 0.39130038022994995,
"beginner": 0.2735100984573364,
"expert": 0.3351895213127136
}
|
637
|
Evaluate these classes:
'''
A module that encapsulates a web scraper. This module scrapes data from a website.
'''
from html.parser import HTMLParser
import urllib.request
from datetime import datetime, timedelta
import logging
from dateutil.parser import parse
class WeatherScraper(HTMLParser):
    """A parser for extracting temperature values from a website."""

    logger = logging.getLogger("main." + __name__)

    def __init__(self):
        try:
            super().__init__()
            self.is_tbody = False
            self.is_td = False
            self.is_tr = False
            self.last_page = False
            self.counter = 0
            self.daily_temps = {}
            self.weather = {}
            self.row_date = ""
        except Exception as error:
            self.logger.error("scrape:init:%s", error)

    def is_valid_date(self, date_str):
        """Check if a given string is a valid date."""
        try:
            parse(date_str, default=datetime(1900, 1, 1))
            return True
        except ValueError:
            return False

    def is_numeric(self, temp_str):
        """Check if given temperature string can be converted to a float."""
        try:
            float(temp_str)
            return True
        except ValueError:
            return False

    def handle_starttag(self, tag, attrs):
        """Handle the opening tags."""
        try:
            if tag == "tbody":
                self.is_tbody = True
            if tag == "tr" and self.is_tbody:
                self.is_tr = True
            if tag == "td" and self.is_tr:
                self.counter += 1
                self.is_td = True
            # Only parses the valid dates, all other values are excluded.
            if tag == "abbr" and self.is_tr and self.is_valid_date(attrs[0][1]):
                self.row_date = str(datetime.strptime(attrs[0][1], "%B %d, %Y").date())
            if len(attrs) == 2:
                if attrs[1][1] == "previous disabled":
                    self.last_page = True
        except Exception as error:
            self.logger.error("scrape:starttag:%s", error)

    def handle_endtag(self, tag):
        """Handle the closing tags."""
        try:
            if tag == "td":
                self.is_td = False
            if tag == "tr":
                self.counter = 0
                self.is_tr = False
        except Exception as error:
            self.logger.error("scrape:end:%s", error)

    def handle_data(self, data):
        """Handle the data inside the tags."""
        # if data.startswith("Daily Data Report for January 2020"):
        #     self.last_page = True
        try:
            if self.is_tbody and self.is_td and self.counter <= 3 and data.strip():
                if self.counter == 1 and self.is_numeric(data.strip()):
                    self.daily_temps["Max"] = float(data.strip())
                if self.counter == 2 and self.is_numeric(data.strip()):
                    self.daily_temps["Min"] = float(data.strip())
                if self.counter == 3 and self.is_numeric(data.strip()):
                    self.daily_temps["Mean"] = float(data.strip())
                    self.weather[self.row_date] = self.daily_temps
                    self.daily_temps = {}
        except Exception as error:
            self.logger.error("scrape:data:%s", error)

    def get_data(self):
        """Fetch the weather data and return it as a dictionary of dictionaries."""
        current_date = datetime.now()
        while not self.last_page:
            try:
                url = f"https://climate.weather.gc.ca/climate_data/daily_data_e.html?StationID=27174&timeframe=2&StartYear=1840&EndYear=2018&Day={current_date.day}&Year={current_date.year}&Month={current_date.month}"
                with urllib.request.urlopen(url) as response:
                    html = response.read().decode()
                self.feed(html)
                # Subtracts one day from the current date and assigns the
                # resulting date back to the current_date variable.
                current_date -= timedelta(days=1)
            except Exception as error:
                self.logger.error("scrape:get_data:%s", error)
        return self.weather


# Test program.
if __name__ == "__main__":
    print_data = WeatherScraper().get_data()
    for k, v in print_data.items():
        print(k, v)
'''
A Module that creates and modifies a database. In this case, the data is weather information
scraped from a webpage.
'''
import sqlite3
import logging
from dateutil import parser
from scrape_weather import WeatherScraper
class DBOperations:
"""Class for performing operations on a SQLite database"""
def __init__(self, dbname):
"""
Constructor for DBOperations class.
Parameters:
- dbname: str, the name of the SQLite database file to use
"""
self.dbname = dbname
self.logger = logging.getLogger(__name__)
def initialize_db(self):
"""
Initialize the SQLite database by creating the weather_data table.
This method should be called every time the program runs.
"""
with self.get_cursor() as cursor:
try:
cursor.execute('''
CREATE TABLE IF NOT EXISTS weather_data (
id INTEGER PRIMARY KEY AUTOINCREMENT,
sample_date TEXT UNIQUE,
location TEXT,
min_temp REAL,
max_temp REAL,
avg_temp REAL
)
''')
self.logger.info("Initialized database successfully.")
except sqlite3.Error as error:
self.logger.error("An error occurred while creating the table: %s", error)
def save_data(self, data):
"""
Save weather data to the SQLite database.
If the data already exists in the database, it will not be duplicated.
Parameters:
- data: dict, the weather data to save to the database. Must have keys for
sample_date, location, min_temp, max_temp, and avg_temp.
"""
with self.get_cursor() as cursor:
try:
cursor.execute('''
INSERT OR IGNORE INTO weather_data
(sample_date, location, min_temp, max_temp, avg_temp)
VALUES (?, ?, ?, ?, ?)
''', (data['sample_date'], data['location'], data['min_temp'], data['max_temp'],
data['avg_temp']))
self.logger.info("Data saved successfully.")
except sqlite3.Error as error:
self.logger.error("An error occurred while saving data to the database: %s", error)
def fetch_data(self, location):
"""
Fetch weather data from the SQLite database for a specified location.
Parameters:
- location: str, the location to fetch weather data for
Returns:
- A list of tuples containing the weather data for the specified location,
where each tuple has the format (sample_date, min_temp, max_temp, avg_temp).
Returns an empty list if no data is found for the specified location.
"""
with self.get_cursor() as cursor:
try:
cursor.execute('''
SELECT sample_date, min_temp, max_temp, avg_temp
FROM weather_data
WHERE location = ?
''', (location,))
data = cursor.fetchall()
self.logger.info("Data fetched successfully.")
return data
except sqlite3.Error as error:
self.logger.error("An error occurred while fetching data from the database: %s",
error)
return []
def fetch_data_year_to_year(self, first_year, last_year):
'''
Fetch weather data from the SQLite database for a specified year range.
Parameters:
- first_year: int, the first year in the range.
- end_year: int, the final year in the range.
Returns:
- A list of data that falls in the range of years specified by the user.'''
month_data = {1:[], 2:[], 3:[], 4:[], 5:[], 6:[],
7:[], 8:[], 9:[], 10:[], 11:[], 12:[]}
start_year = f'{first_year}-01-01'
end_year = f'{last_year}-01-01'
with self.get_cursor() as cursor:
try:
for row in cursor.execute('''
SELECT sample_date, avg_temp
FROM weather_data
WHERE sample_date BETWEEN ? AND ?
ORDER BY sample_date''',(start_year, end_year)):
month = parser.parse(row[0]).month
month_data[month].append(row[1])
self.logger.info("Data fetched successfully.")
return month_data
except sqlite3.Error as error:
self.logger.error("An error occurred while fetching data from the database: %s",
error)
return []
def fetch_data_single_month(self, month, year):
'''
Fetch weather data from the SQLite database for a specified month and year.
Parameters:
- month: int, the month to search for.
- year: int, the year to search for.
Returns:
- A list of temperatures for the month and year the user searched for'''
temperatures = {}
with self.get_cursor() as cursor:
try:
for row in cursor.execute('''
SELECT sample_date, avg_temp
FROM weather_data
WHERE sample_date LIKE ?||'-'||'0'||?||'-'||'%'
ORDER BY sample_date''',(year, month)):
temperatures[row[0]] = row[1]
return temperatures
except sqlite3.Error as error:
self.logger.error("An error occurred while fetching data from the database: %s",
error)
return []
def purge_data(self):
"""
Purge all weather data from the SQLite database.
"""
with self.get_cursor() as cursor:
try:
cursor.execute('DELETE FROM weather_data')
self.logger.info("Data purged successfully.")
except sqlite3.Error as error:
self.logger.error("An error occurred while purging data from the database: %s",
error)
def get_cursor(self):
"""
Get a cursor to use for database operations.
Returns:
- A cursor object for the SQLite database.
"""
return DBCM(self.dbname)
class DBCM:
'''
A class that represents a connection to a database.
'''
def __init__(self, dbname):
self.dbname = dbname
self.logger = logging.getLogger(__name__)
def __enter__(self):
try:
self.conn = sqlite3.connect(self.dbname)
self.cursor = self.conn.cursor()
self.logger.info("Connection to database established successfully.")
return self.cursor
except sqlite3.Error as error:
self.logger.error("An error occurred while connecting to the database: %s", error)
return None
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type is not None:
self.conn.rollback()
else:
try:
self.conn.commit()
self.logger.info("Changes committed successfully.")
except sqlite3.Error as error:
self.logger.error("An error occurred while committing changes to the database: %s",
error)
try:
self.cursor.close()
self.conn.close()
self.logger.info("Connection to database closed successfully.")
except sqlite3.Error as error:
self.logger.error("An error occurred while closing the database connection: %s", error)
def main():
'''
The main method.
'''
# Initialize the database
data_base = DBOperations("mydatabase.db")
data_base.initialize_db()
# Get the weather data
scraper = WeatherScraper()
data = scraper.get_data()
# Process the data and prepare the rows
rows = []
for date, temps in data.items():
row = (
date,
"Winnipeg",
temps["Max"],
temps["Min"],
temps["Mean"]
)
rows.append(row)
# Save the data to the database
with data_base.get_cursor() as cursor:
try:
cursor.executemany('''
INSERT OR IGNORE INTO weather_data
(sample_date, location, min_temp, max_temp, avg_temp)
VALUES (?, ?, ?, ?, ?)
''', rows)
data_base.logger.info("Inserted %s rows into the database.", len(rows))
except sqlite3.Error as error:
data_base.logger.error("An error occurred while inserting data: %s", error)
if __name__ == '__main__':
main()
'''
A module that creates graphs from data pulled from a database.
'''
import logging
import matplotlib.pyplot as plt
from db_operations import DBOperations
class PlotOperations():
'''A class that plots data based on user input.'''
logger = logging.getLogger("main." + __name__)
def create_box_plot(self, start_year, end_year, data):
'''
A function that creates a box plot of data pulled from the database based on user input.
Parameters:
- start_year: int - The year in which the user wants the range to begin.
- end_year: int - The year in which the user wants the range to end.'''
try:
data_to_plot = list(data.values())
plt.boxplot(data_to_plot) #Feed the data
plt.title(f'Monthly temperature distribution for: {start_year} to {end_year}')
plt.xlabel('Month') # Label the x-axis
plt.ylabel('Temperature (Celsius)') # Label the y-axis
plt.show() # Show the graph
except Exception as error:
self.logger.error("PlotOps:boxplot:%s", error)
def create_line_plot(self, data):
"""
Creates a line plot based on the data provided by the user.
Parameters:
- data: dict - A collection of data stored in a dictionary."""
try:
dates = list(data.keys()) # Dates are the keys in the dictionary
temps = list(data.values()) # Temperatures are the values in the dictionary
plt.plot(dates, temps) # Feed the data
plt.title('Daily Avg Temperatures') # Create the title
plt.xlabel('Days of Month') # Label the x axis
plt.ylabel('Avg Daily Temp') # Label the y axis
# Create text rotation on the x axis so they all fit properly
plt.xticks(rotation = 50, horizontalalignment = 'right')
plt.show() # Show the graph
except Exception as error:
self.logger.error("PlotOps:lineplot:%s", error)
db = DBOperations("mydatabase.db")
data = db.fetch_data_year_to_year(1996, 2023)
print(data)
PlotOperations().create_box_plot(1996, 2023, data)
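One concrete point worth evaluating, shown as a hedged sketch: fetch_data_single_month's LIKE pattern hard-codes a leading '0' for the month, so months 10 to 12 can never match. Zero-padding in Python (month_pattern is an illustrative helper name) keeps the query correct for every month:
def month_pattern(year: int, month: int) -> str:
    """Build a 'YYYY-MM-%' LIKE pattern with a zero-padded month."""
    return f"{year}-{month:02d}-%"

# usage inside fetch_data_single_month:
# cursor.execute("SELECT sample_date, avg_temp FROM weather_data "
#                "WHERE sample_date LIKE ? ORDER BY sample_date",
#                (month_pattern(year, month),))
print(month_pattern(2023, 4), month_pattern(2023, 11))  # 2023-04-% 2023-11-%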
|
f24c569eb9d17463e7b804770ab1d6c0
|
{
"intermediate": 0.4034900367259979,
"beginner": 0.4515967071056366,
"expert": 0.14491327106952667
}
|
638
|
When I click on a task nothing happens. It should display the modal when I click on a task
|
8d9fd913daffd63eca405a05d01721f4
|
{
"intermediate": 0.3766084313392639,
"beginner": 0.19356772303581238,
"expert": 0.4298238754272461
}
|
639
|
In Part 1, I was asked to implement the following. # -*- coding: utf-8 -*-
"""ML_2.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1Nlj4SKnJgUNA0rers6Vt1nwNN3mV49c0
"""
import pandas as pd
import torch
import numpy as np
from torch import nn, optim
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns
# Step 1: Load the Dataset
data = pd.read_csv('dataset.csv')
data.head()
# Visualize scatter plots for features against the target
sns.scatterplot(data=data, x=data.columns[0], y='target')
plt.title('Feature1 vs Target')
plt.show()
sns.scatterplot(data=data, x=data.columns[1], y='target')
plt.title('Feature2 vs Target')
plt.show()
sns.scatterplot(data=data, x=data.columns[2], y='target')
plt.title('Feature3 vs Target')
plt.show()
"""# **Step 2**: **Preprocessing the Dataset**
"""
# Identify categorical columns
categorical_columns = data.select_dtypes(include=['object']).columns
# Create an empty list to store invalid values
invalid_values = []
# Iterate over all object columns and find the invalid values
for col in categorical_columns:
invalid_values.extend(data.loc[data[col].str.isalpha(), col].unique())
# Print the object columns & the unique list of invalid values
print('Categorical columns:' , categorical_columns.to_list(), 'Invalid Values:', set(invalid_values))
# Replace missing/invalid values with pd.NA
data = data.replace(invalid_values, np.nan)
# Find the missing values
missing_values = data.isna().sum()
print('Missing values:')
print(missing_values)
# Fill missing values with mode or mean, depending on column type
fill_columns = [k for k, v in missing_values.to_dict().items() if v != 0]
for col in fill_columns:
if data[col].dtype == 'object':
data[col].fillna(data[col].mode()[0], inplace=True)
else:
data[col].fillna(data[col].mean(), inplace=True)
# Convert the dataframe to numeric
data = data.astype('float')
# Store Features and Target in X and y respectively
X = data.drop(columns=['target'])
y = data['target']
# Scale numerical variables to have zero mean and unit variance.
scaler = StandardScaler(with_mean= False)
X_scaled = scaler.fit_transform(X)
# Compute the mean and variance of each column
mean = np.mean(X_scaled, axis=0)
var = np.var(X_scaled, axis=0)
print(f'Mean: {mean} Variance: {var}')
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42, shuffle= False)
#Step 3 & 4 : Defining the Neural Network and its Architecture
class NNClassifier(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(NNClassifier, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(hidden_size, hidden_size)
self.relu2 = nn.ReLU()
self.fc3 = nn.Linear(hidden_size,hidden_size)
self.relu3 = nn.ReLU()
self.fc4 = nn.Linear(hidden_size, output_size)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = self.relu1(self.fc1(x))
x = self.relu2(self.fc2(x))
x = self.relu3(self.fc3(x))
x = self.sigmoid(self.fc4(x))
return x
hidden_size = 128
input_size = X_train.shape[1]
output_size = 1
model = NNClassifier(input_size, hidden_size, output_size)
# Set hyperparameters
epochs = 1000
batch_size = 64
learning_rate = 0.01
# Define loss and optimizer
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
#scheduler = StepLR(optimizer, step_size=100, gamma=0.90)
# Training segment
train_losses = []
train_accuracies = []
test_losses = []
test_accuracies = []
for epoch in range(epochs):
epoch_train_losses = []
epoch_y_true = []
epoch_y_pred = []
for i in range(0, len(X_train), batch_size):
#X_batch = torch.tensor(X_train.iloc[i:i + batch_size].values, dtype=torch.float32)
X_batch = torch.tensor(X_train[i:i + batch_size], dtype=torch.float)
y_batch = torch.tensor(y_train[i:i + batch_size].values, dtype=torch.float).view(-1, 1)
optimizer.zero_grad()
y_pred = model(X_batch)
loss = criterion(y_pred, y_batch)
loss.backward()
optimizer.step()
epoch_train_losses.append(loss.item())
epoch_y_true.extend(y_batch.numpy().flatten().tolist())
epoch_y_pred.extend((y_pred > 0.5).float().numpy().flatten().tolist())
#scheduler.step()
train_losses.append(sum(epoch_train_losses) / len(epoch_train_losses))
train_accuracies.append(accuracy_score(epoch_y_true, epoch_y_pred))
# Testing segment
with torch.no_grad():
#X_test_tensor = torch.tensor(X_test.values, dtype=torch.float32)
X_test_tensor = torch.tensor(X_test, dtype=torch.float32)
y_test_tensor = torch.tensor(y_test.values, dtype=torch.float32).view(-1, 1)
test_pred = model(X_test_tensor)
test_loss = criterion(test_pred, y_test_tensor)
test_accuracy = accuracy_score(y_test_tensor, (test_pred > 0.5).float())
test_losses.append(test_loss.item())
test_accuracies.append(test_accuracy)
if epoch % 100 == 0:
print(f"Epoch: {epoch+1}/{epochs}, Training Loss: {train_losses[-1]}, Test Loss: {test_loss.item()}, Training Accuracy: {train_accuracies[-1]}, Test Accuracy: {test_accuracy}")
# Compare training and test losses
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Test Loss')
plt.title('Training vs Test Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Compare training and test accuracies
plt.plot(train_accuracies, label='Training Accuracy')
plt.plot(test_accuracies, label='Test Accuracy')
plt.title('Training vs Test Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
#Step 6: Save the Weights
torch.save(model.state_dict(), 'trained_weights.h5')
# Confusion Matrix
y_pred = (model(X_test_tensor) > 0.5).float().numpy()
cm = confusion_matrix(y_test, y_pred)
print('Confusion matrix: \n', cm)
. This BetterNNClassifier is for Part II, which we are about to discuss. I'll provide more details once you get a clear picture of what happened in Part 1. Also, before proceeding to Part II: the given implementation is 100% correct, clean, and without any loopholes.
class BetterNNClassifier(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(BetterNNClassifier, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size)
self.bn1 = nn.BatchNorm1d(hidden_size)
self.dropout1 = nn.Dropout(0.5)
self.relu1 = nn.LeakyReLU(0.1)
self.fc2 = nn.Linear(hidden_size, hidden_size)
self.bn2 = nn.BatchNorm1d(hidden_size)
self.dropout2 = nn.Dropout(0.5)
self.relu2 = nn.LeakyReLU(0.1)
self.fc3 = nn.Linear(hidden_size, output_size)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = self.dropout1(self.bn1(self.relu1(self.fc1(x))))
x = self.dropout2(self.bn2(self.relu2(self.fc2(x))))
x = self.sigmoid(self.fc3(x))
return x
hidden_size = 128
input_size = X_train.shape[1]
output_size = 1
model = BetterNNClassifier(input_size, hidden_size, output_size)
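As a quick sanity check before Part II, a small sketch (my assumption of typical usage, not part of the assignment): BatchNorm1d needs batches larger than one sample, so feed a small random batch and confirm the output shape and range:
import torch

x = torch.randn(8, input_size)          # 8 samples, same feature count as X_train
model.eval()                            # eval mode so BatchNorm uses running stats
with torch.no_grad():
    out = model(x)
print(out.shape, out.min().item(), out.max().item())  # (8, 1), values in (0, 1)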
|
7fb98bc424e12d3ccbc1aabb7b41c532
|
{
"intermediate": 0.32012638449668884,
"beginner": 0.3506920337677002,
"expert": 0.3291815519332886
}
|
640
|
Hi, I implemented a train_and_evaluate_model ias follows:
from sklearn.model_selection import KFold
from torch.optim.lr_scheduler import ReduceLROnPlateau
def train_and_evaluate_model(model, learning_rate = 0.01, epochs = 1000, optimization_technique=None, k_splits=None, batch_size=64, patience=None, scheduler_patience=None, **kwargs):
epoch_train_losses = []
epoch_train_accuracies = []
epoch_test_losses = []
epoch_test_accuracies = []
if optimization_technique == 'k_fold' and k_splits:
kfold = KFold(n_splits=k_splits, shuffle=True)
else:
kfold = KFold(n_splits=1, shuffle=False)
for train_index, test_index in kfold.split(X_scaled):
X_train, X_test = X_scaled[train_index], X_scaled[test_index]
y_train, y_test = y[train_index], y[test_index]
X_train, X_test = torch.tensor(X_train, dtype=torch.float32), torch.tensor(X_test, dtype=torch.float32)
y_train, y_test = torch.tensor(y_train, dtype=torch.float32).view(-1, 1), torch.tensor(y_test, dtype=torch.float32).view(-1, 1)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.BCELoss()
scheduler = ReduceLROnPlateau(optimizer, 'min', patience=scheduler_patience, verbose=True) if optimization_technique == 'learning_rate_scheduler' else None
best_loss = float('inf')
stopping_counter = 0
for epoch in range(epochs):
optimizer.zero_grad()
y_pred_train = model(X_train)
loss = criterion(y_pred_train, y_train)
loss.backward()
optimizer.step()
with torch.no_grad():
y_pred_test = model(X_test)
test_loss = criterion(y_pred_test, y_test)
epoch_train_losses.append(loss.item())
epoch_train_accuracies.extend((y_pred_train > 0.5).float().numpy().flatten().tolist())
epoch_train_accuracies.append(accuracy_score(y_train, (y_pred_train > 0.5).float()))
epoch_test_losses.append(test_loss.item())
epoch_test_accuracies.extend((y_pred_test > 0.5).float().numpy().flatten().tolist())
epoch_test_accuracies.append(accuracy_score(y_test, (y_pred_test > 0.5).float()))
if optimization_technique == 'early_stopping' and patience:
if test_loss < best_loss:
best_loss = test_loss
stopping_counter = 0
else:
stopping_counter += 1
if stopping_counter > patience:
break
if optimization_technique == 'learning_rate_scheduler' and scheduler_patience and scheduler:
scheduler.step(test_loss)
if optimization_technique == 'k_fold' and k_splits:
if epoch == 999:
break
if optimization_technique != 'k_fold':
break
return epoch_train_losses, epoch_train_accuracies, epoch_test_losses, epoch_test_accuracies
This line here, train_losses, train_accuracies, test_losses, test_accuracies = train_and_evaluate_model(model, "dropout"), throws a KFold error at the for loop since I'm not passing any value for the k_splits parameter. So, modify the above code to also work for finding the best_model, which is defined as follows: # Step 2: Train the classifier with three different dropout values
dropout_values = [0.3, 0.5, 0.7]
models = []
losses = []
accuracies = []
for dropout in dropout_values:
model = BetterNNClassifier(input_size, hidden_size, output_size, dropout)
# Train the model and get training and testing losses and accuracies
train_losses, train_accuracies, test_losses, test_accuracies = train_and_evaluate_model(model)
models.append(model)
losses.append((train_losses, test_losses))
accuracies.append((train_accuracies, test_accuracies))
# Step 3: Choose the best dropout value (based on the highest test accuracy)
best_dropout_index = np.argmax([max(acc[1]) for acc in accuracies])
best_dropout = dropout_values[best_dropout_index]
best_model = models[best_dropout_index]
print(f"Best Dropout Value: {best_dropout}")
|
66bd8dd02165167d8ce681e0757b5468
|
{
"intermediate": 0.3728182911872864,
"beginner": 0.36840879917144775,
"expert": 0.2587728798389435
}
|
641
|
Build a scraper for https://arcjav.arcjavdb.workers.dev/0:/001-050/%E4%B8%8A%E5%8E%9F%E4%BA%9A%E8%A1%A3/
|
0e8f67ad753c902dec6808ea93e48dfa
|
{
"intermediate": 0.4615998864173889,
"beginner": 0.17387060821056366,
"expert": 0.36452946066856384
}
|
642
|
Point out the bugs
SubtaskModal.jsx:51 Uncaught ReferenceError: subtaskTitle is not defined
at SubtaskModal (SubtaskModal.jsx:51:14)
at renderWithHooks (react-dom.development.js:16305:18)
at mountIndeterminateComponent (react-dom.development.js:20074:13)
at beginWork (react-dom.development.js:21587:16)
at HTMLUnknownElement.callCallback2 (react-dom.development.js:4164:14)
at Object.invokeGuardedCallbackDev (react-dom.development.js:4213:16)
at invokeGuardedCallback (react-dom.development.js:4277:31)
at beginWork$1 (react-dom.development.js:27451:7)
at performUnitOfWork (react-dom.development.js:26557:12)
at workLoopSync (react-dom.development.js:26466:5)
SubtaskModal @ SubtaskModal.jsx:51
renderWithHooks @ react-dom.development.js:16305
mountIndeterminateComponent @ react-dom.development.js:20074
beginWork @ react-dom.development.js:21587
callCallback2 @ react-dom.development.js:4164
invokeGuardedCallbackDev @ react-dom.development.js:4213
invokeGuardedCallback @ react-dom.development.js:4277
beginWork$1 @ react-dom.development.js:27451
performUnitOfWork @ react-dom.development.js:26557
workLoopSync @ react-dom.development.js:26466
renderRootSync @ react-dom.development.js:26434
performSyncWorkOnRoot @ react-dom.development.js:26085
flushSyncCallbacks @ react-dom.development.js:12042
(anonymous) @ react-dom.development.js:25651
SubtaskModal.jsx:51 Uncaught ReferenceError: subtaskTitle is not defined
at SubtaskModal (SubtaskModal.jsx:51:14)
at renderWithHooks (react-dom.development.js:16305:18)
at mountIndeterminateComponent (react-dom.development.js:20074:13)
at beginWork (react-dom.development.js:21587:16)
at HTMLUnknownElement.callCallback2 (react-dom.development.js:4164:14)
at Object.invokeGuardedCallbackDev (react-dom.development.js:4213:16)
at invokeGuardedCallback (react-dom.development.js:4277:31)
at beginWork$1 (react-dom.development.js:27451:7)
at performUnitOfWork (react-dom.development.js:26557:12)
at workLoopSync (react-dom.development.js:26466:5)
SubtaskModal @ SubtaskModal.jsx:51
renderWithHooks @ react-dom.development.js:16305
mountIndeterminateComponent @ react-dom.development.js:20074
beginWork @ react-dom.development.js:21587
callCallback2 @ react-dom.development.js:4164
invokeGuardedCallbackDev @ react-dom.development.js:4213
invokeGuardedCallback @ react-dom.development.js:4277
beginWork$1 @ react-dom.development.js:27451
performUnitOfWork @ react-dom.development.js:26557
workLoopSync @ react-dom.development.js:26466
renderRootSync @ react-dom.development.js:26434
recoverFromConcurrentError @ react-dom.development.js:25850
performSyncWorkOnRoot @ react-dom.development.js:26096
flushSyncCallbacks @ react-dom.development.js:12042
(anonymous) @ react-dom.development.js:25651
react_devtools_backend.js:2655 The above error occurred in the <SubtaskModal> component:
at SubtaskModal (http://localhost:5173/src/components/SubtaskModal.jsx?t=1681359029397:15:25)
at div
at TaskContainer (http://localhost:5173/src/components/TaskContainer.jsx?t=1681363520683:19:60)
at div
at TaskProvider (http://localhost:5173/src/TaskContext.jsx:13:25)
at App
Consider adding an error boundary to your tree to customize error handling behavior.
Visit https://reactjs.org/link/error-boundaries to learn more about error boundaries.
overrideMethod @ react_devtools_backend.js:2655
logCapturedError @ react-dom.development.js:18687
update.callback @ react-dom.development.js:18720
callCallback @ react-dom.development.js:13923
commitUpdateQueue @ react-dom.development.js:13944
commitLayoutEffectOnFiber @ react-dom.development.js:23391
commitLayoutMountEffects_complete @ react-dom.development.js:24688
commitLayoutEffects_begin @ react-dom.development.js:24674
commitLayoutEffects @ react-dom.development.js:24612
commitRootImpl @ react-dom.development.js:26823
commitRoot @ react-dom.development.js:26682
performSyncWorkOnRoot @ react-dom.development.js:26117
flushSyncCallbacks @ react-dom.development.js:12042
(anonymous) @ react-dom.development.js:25651
react-dom.development.js:12056 Uncaught ReferenceError: subtaskTitle is not defined
at SubtaskModal (SubtaskModal.jsx:51:14)
at renderWithHooks (react-dom.development.js:16305:18)
at mountIndeterminateComponent (react-dom.development.js:20074:13)
at beginWork (react-dom.development.js:21587:16)
at beginWork$1 (react-dom.development.js:27426:14)
at performUnitOfWork (react-dom.development.js:26557:12)
at workLoopSync (react-dom.development.js:26466:5)
at renderRootSync (react-dom.development.js:26434:7)
at recoverFromConcurrentError (react-dom.development.js:25850:20)
at performSyncWorkOnRoot (react-dom.development.js:26096:20)
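Reading the trace, SubtaskModal.jsx line 51 references subtaskTitle, which is never declared in that component's scope. A hedged sketch of the usual fix, assuming the modal holds an input for a new subtask's title (prop and class names are my guesses):
// Sketch: declare the missing state inside SubtaskModal.
import React, { useState } from "react";

const SubtaskModal = ({ task, onAddSubtask }) => {
  const [subtaskTitle, setSubtaskTitle] = useState(""); // <- this was missing

  return (
    <div className="modal">
      <input
        value={subtaskTitle}
        onChange={(e) => setSubtaskTitle(e.target.value)}
      />
      <button onClick={() => onAddSubtask(task.id, subtaskTitle)}>Add</button>
    </div>
  );
};

export default SubtaskModal;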
|
971a878f9445185559503bf5f6748fea
|
{
"intermediate": 0.3536204695701599,
"beginner": 0.40857595205307007,
"expert": 0.23780357837677002
}
|
643
|
There is a chrome extension (whatsapp-web-incognito) that is open source on github (https://github.com/tomer8007/whatsapp-web-incognito).
this extension has many features including revoke deleted messages on whatsapp web.
this feature works well if the other person deleted the message, or if I delete it on my phone for everyone, but it doesn't work if I delete it only for me.
I want to edit the extension to also handle messages deleted for me.
Can you help me?
|
1b0bf50bc576b6c649c05506abbe5354
|
{
"intermediate": 0.4530963599681854,
"beginner": 0.34243878722190857,
"expert": 0.20446480810642242
}
|
644
|
#include<pthread.h>
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#define OVER -1
#define SIZE 10
int ring_buffer[SIZE];
pthread_mutex_t mutex; //mutex lock
pthread_cond_t not_full, not_empty; //avoid race conditions
int write_index =0;
int read_index = 0;
void *producer (void *arg){
int N = 0;// the item which will be written in buffer
while(1){
pthread_mutex_lock(&mutex);//lock the thread first
// test whether the buffer is full or not, if it's full wait until not full
// the buffer is a ring_array
while(write_index + 1 == read_index ||( read_index == 0 && write_index== SIZE -1 )){
// wait the thread until satisfy the condition variable
pthread_cond_wait(& not_full, & mutex);
}
N = rand()%10000+1;
if (N >100000){
ring_buffer[write_index] = OVER;
printf("exit producer");
pthread_mutex_unlock(&mutex);
break;
}
else {
ring_buffer[write_index] = N;
printf("%d has been written in buffer%d!\n",N,write_index);
write_index = (write_index + 1)% SIZE;//update index
if (write_index == read_index){ //add this condition to avoid awake more..
pthread_cond_signal(&not_empty);// awake the consumer thread
}
pthread_mutex_unlock(&mutex);
sleep(1);
}
}
pthread_exit(NULL);
}
void *consumer(void *arg){
while(1){
pthread_mutex_lock(&mutex);//lock the thread first
// test whether the buffer is empty or not, if it's empty wait until not full
// the buffer is a ring_array
while(write_index == read_index){
// wait the thread until satisfy the condition variable
pthread_cond_wait(& not_empty, & mutex);
}
int d = ring_buffer[read_index];
ring_buffer[read_index] = 0; //clear the item
printf("an item %d in buffer %d has been read !\n",d,read_index);
if (d == OVER){
printf("exit consumer!");
pthread_mutex_unlock(&mutex);
break;
}
else{
read_index = (read_index + 1)% SIZE;//update index
if (write_index + 1 == read_index ||( read_index == 0 && write_index== SIZE -1 )){ //add this condition to avoid awake more..
pthread_cond_signal(&not_full);// awake the producer thread
}
pthread_mutex_unlock(&mutex);
sleep(1);
}
}
pthread_exit(NULL);
}
int main (){
pthread_mutex_init(&mutex,NULL);
pthread_cond_init(&not_empty,NULL);
pthread_cond_init(&not_full,NULL);
srand(time(NULL));// int
pthread_t t1,t2;
int res1,res2;
res1 = pthread_create(&t1,NULL,&producer,NULL);
res2 = pthread_create(&t2,NULL,&consumer,NULL);
if (res1 != 0) {
printf("线程1创建失败");
return 0;
}
else if(res2 != 0){
printf("Thread2 failed to init");
return 0;
}
pthread_join(t1,NULL);
pthread_join(t2,NULL);
pthread_mutex_destroy(&mutex);
pthread_cond_destroy(&not_empty);
pthread_cond_destroy(&not_full);
}
Why does my consumer thread block after running for a while?
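My guess at the fix, as a sketch (same globals as above): the conditional pthread_cond_signal calls can miss wakeups, so signal after every successful write or read. Only the producer side is shown; the consumer is symmetric:
#include <pthread.h>
#include <stdlib.h>

#define SIZE 10

/* Same shared state as in the question. */
extern int ring_buffer[SIZE];
extern int write_index, read_index;
extern pthread_mutex_t mutex;
extern pthread_cond_t not_full, not_empty;

void *producer_fixed(void *arg)
{
    (void)arg;
    while (1) {
        pthread_mutex_lock(&mutex);
        while ((write_index + 1) % SIZE == read_index)      /* buffer full */
            pthread_cond_wait(&not_full, &mutex);
        ring_buffer[write_index] = rand() % 10000 + 1;
        write_index = (write_index + 1) % SIZE;
        pthread_cond_signal(&not_empty);  /* always signal, never conditionally */
        pthread_mutex_unlock(&mutex);
    }
    return NULL;
}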
|
1b9d75d48a4c5848adb1c129bbc61df0
|
{
"intermediate": 0.3575059771537781,
"beginner": 0.44421306252479553,
"expert": 0.19828101992607117
}
|
645
|
When I add a subtask it is not added in realtime. I have to close and reopen the modal to show the added subtask. Please fix the bug, newly added subtask should be displayed in real time
|
28b99e4fb4800249ec42c44ee37ca630
|
{
"intermediate": 0.3653285503387451,
"beginner": 0.23368580639362335,
"expert": 0.4009856879711151
}
|
646
|
Input: A person's face image.
Output: That image is to be inserted in a frame. (Like how you insert a person's photo in a photo frame). Decide by yourself if you would be needing any ML models for face recognition to be able to detect the part of the image which contains the face and make sure to insert the facial part in the frame image.)
NOTE: Come up with the best approach you think would give the guaranteed desired output.
Write me exact code (function is preffered) for the above task.
|
f842c1c57b90f6a55287c81d88107086
|
{
"intermediate": 0.22357702255249023,
"beginner": 0.21771052479743958,
"expert": 0.5587124824523926
}
|
647
|
difference between id and name html
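In short: id is a unique, per-document hook for CSS, JavaScript, and labels; name is the key a form control submits under. A minimal example:
<!-- id: unique hook for CSS/JS; name: the key sent when the form submits -->
<form action="/login" method="post">
  <label for="user-email">Email</label>
  <input id="user-email" name="email" type="email">
  <!-- Submitting sends "email=...", while #user-email styles/targets the field -->
</form>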
|
f229aa972e8b7ac38b4c5ba935e7c1ed
|
{
"intermediate": 0.40927842259407043,
"beginner": 0.33347755670547485,
"expert": 0.2572441101074219
}
|
648
|
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% block title %}My App{% endblock %}</title>
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+6
this is my index.html, and I have others too; I also have a question for you.
|
58625a276c649ce0f86de51023e956eb
|
{
"intermediate": 0.3878227174282074,
"beginner": 0.27833613753318787,
"expert": 0.3338411748409271
}
|
649
|
what is the best mongoDB database structure for a game of hearts?
|
77f5b25034c3caa79b0e50c4882f561a
|
{
"intermediate": 0.37066832184791565,
"beginner": 0.3583180904388428,
"expert": 0.2710135579109192
}
|
650
|
Can you show me how to keep track of a hearts game using a mongoDB dataset with mongoose
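For context, one plausible shape (collection and field names are illustrative, not a fixed standard): one document per game, embedding the players, the current trick, and running scores:
// Sketch: a Mongoose schema for a hearts game document.
const mongoose = require('mongoose');

const gameSchema = new mongoose.Schema({
  players: [{
    seat: Number,                  // 0-3 around the table
    userId: String,
    hand: [String],                // card codes, e.g. "QS", "10H"
    roundPoints: { type: Number, default: 0 },
    totalScore: { type: Number, default: 0 },
  }],
  currentTrick: [{ seat: Number, card: String }],
  heartsBroken: { type: Boolean, default: false },
  round: { type: Number, default: 1 },
  turn: Number,                    // seat index whose move it is
  status: { type: String, enum: ['lobby', 'playing', 'finished'], default: 'lobby' },
}, { timestamps: true });

module.exports = mongoose.model('HeartsGame', gameSchema);
Keeping the whole game in one document makes each move a single atomic update, which suits a four-player turn-based game.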
|
56c5cfca48056e79cac436c6336de851
|
{
"intermediate": 0.6980383396148682,
"beginner": 0.09634249657392502,
"expert": 0.20561917126178741
}
|
651
|
add handle for TextField selection
|
c6bc9af25ae14c879ba81da6396ba617
|
{
"intermediate": 0.29564934968948364,
"beginner": 0.22999083995819092,
"expert": 0.47435978055000305
}
|
652
|
fivem lua
RegisterCommand("testing", function(score)
print(score)
TriggerClientEvent("qb-nui:update-score", -1, score)
end)
For some reason, when I enter /testing 5 or /testing "0-5", it always prints 0.
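I suspect the parameter mix-up shown in this sketch: RegisterCommand handlers receive (source, args, rawCommand), so the first parameter is the invoking source, not the typed argument; the value after /testing arrives in args[1]:
-- Sketch: read the score from args, not from the first callback parameter.
RegisterCommand("testing", function(source, args, rawCommand)
    local score = args[1]            -- e.g. "5" or "0-5" (a string)
    print(score)
    TriggerClientEvent("qb-nui:update-score", -1, score)
end)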
|
d7ba48d67751783c314d856d61191b66
|
{
"intermediate": 0.27437639236450195,
"beginner": 0.5502946972846985,
"expert": 0.17532886564731598
}
|
653
|
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-26-4d31f351fda5> in <cell line: 7>()
8 model = BetterNNClassifier(input_size, hidden_size, output_size, dropout)
9 # Train the model and get training and testing losses and accuracies
---> 10 train_losses, train_accuracies, test_losses, test_accuracies = train_and_evaluate_model(model)
11 models.append(model)
12 losses.append((train_losses, test_losses))
2 frames
/usr/local/lib/python3.9/dist-packages/sklearn/model_selection/_split.py in __init__(self, n_splits, shuffle, random_state)
296
297 if n_splits <= 1:
--> 298 raise ValueError(
299 "k-fold cross-validation requires at least one"
300 " train/test split by setting n_splits=2 or more,"
ValueError: k-fold cross-validation requires at least one train/test split by setting n_splits=2 or more, got n_splits=1.
The code I used is:
from sklearn.model_selection import KFold
from torch.optim.lr_scheduler import ReduceLROnPlateau
def train_and_evaluate_model(model, learning_rate = 0.01, epochs = 1000, optimization_technique=None, k_splits=None, batch_size=64, patience=None, scheduler_patience=None, **kwargs):
epoch_train_losses = []
epoch_train_accuracies = []
epoch_test_losses = []
epoch_test_accuracies = []
if optimization_technique == 'k_fold' and k_splits:
kfold = KFold(n_splits=k_splits, shuffle=True)
else:
kfold = KFold(n_splits=1, shuffle=False)
for train_index, test_index in kfold.split(X_scaled):
X_train, X_test = X_scaled[train_index], X_scaled[test_index]
y_train, y_test = y[train_index], y[test_index]
X_train, X_test = torch.tensor(X_train, dtype=torch.float32), torch.tensor(X_test, dtype=torch.float32)
y_train, y_test = torch.tensor(y_train, dtype=torch.float32).view(-1, 1), torch.tensor(y_test, dtype=torch.float32).view(-1, 1)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.BCELoss()
scheduler = ReduceLROnPlateau(optimizer, 'min', patience=scheduler_patience, verbose=True) if optimization_technique == 'learning_rate_scheduler' else None
best_loss = float('inf')
stopping_counter = 0
for epoch in range(epochs):
optimizer.zero_grad()
y_pred_train = model(X_train)
loss = criterion(y_pred_train, y_train)
loss.backward()
optimizer.step()
with torch.no_grad():
y_pred_test = model(X_test)
test_loss = criterion(y_pred_test, y_test)
epoch_train_losses.append(loss.item())
epoch_train_accuracies.extend((y_pred_train > 0.5).float().numpy().flatten().tolist())
epoch_train_accuracies.append(accuracy_score(y_train, (y_pred_train > 0.5).float()))
epoch_test_losses.append(test_loss.item())
epoch_test_accuracies.extend((y_pred_test > 0.5).float().numpy().flatten().tolist())
epoch_test_accuracies.append(accuracy_score(y_test, (y_pred_test > 0.5).float()))
if optimization_technique == 'early_stopping' and patience:
if test_loss < best_loss:
best_loss = test_loss
stopping_counter = 0
else:
stopping_counter += 1
if stopping_counter > patience:
break
if optimization_technique == 'learning_rate_scheduler' and scheduler_patience and scheduler:
scheduler.step(test_loss)
if optimization_technique == 'k_fold' and k_splits:
if epoch == 999:
break
if optimization_technique != 'k_fold':
break
return epoch_train_losses, epoch_train_accuracies, epoch_test_losses, epoch_test_accuracies
When I try to run the following, I get the error shown above:
# Step 2: Train the classifier with three different dropout values
dropout_values = [0.3, 0.5, 0.7]
models = []
losses = []
accuracies = []
for dropout in dropout_values:
model = BetterNNClassifier(input_size, hidden_size, output_size, dropout)
# Train the model and get training and testing losses and accuracies
train_losses, train_accuracies, test_losses, test_accuracies = train_and_evaluate_model(model)
models.append(model)
losses.append((train_losses, test_losses))
accuracies.append((train_accuracies, test_accuracies))
# Step 3: Choose the best dropout value (based on the highest test accuracy)
best_dropout_index = np.argmax([max(acc[1]) for acc in accuracies])
best_dropout = dropout_values[best_dropout_index]
best_model = models[best_dropout_index]
print(f"Best Dropout Value: {best_dropout}"). So, fix the error.
|
58cffdcfb8f2825e06ebac66c14ae70b
|
{
"intermediate": 0.3738024830818176,
"beginner": 0.4127262234687805,
"expert": 0.21347130835056305
}
|
654
|
https://cdn.discordapp.com/attachments/1052780891300184096/1095659338003128380/image.png
I'm trying to create a FiveM NUI like the one in the image. It is positioned centered horizontally and at the very top. The NUI consists of a background div; inside the main div are three smaller divs: one says Team A, one shows 1-1 as a placeholder for the score (which can be changed), and the last says Team B.
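Roughly what I have in mind, as a sketch (class names are illustrative):
<!-- Sketch: top-center scoreboard for the NUI page. -->
<div id="scoreboard">
  <div class="team">Team A</div>
  <div id="score">1 - 1</div>
  <div class="team">Team B</div>
</div>

<style>
  #scoreboard {
    position: absolute;
    top: 0;
    left: 50%;
    transform: translateX(-50%);   /* horizontal centering */
    display: flex;
    gap: 12px;
    padding: 6px 16px;
    background: rgba(0, 0, 0, 0.6);
    color: #fff;
    border-radius: 0 0 8px 8px;
    font-family: sans-serif;
  }
</style>
The score would then be updated from the NUI message handler, e.g. document.getElementById("score").textContent = newScore;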
|
d6d8e61ed1b96b2f57c4bc713efadaa5
|
{
"intermediate": 0.311023086309433,
"beginner": 0.3423793315887451,
"expert": 0.3465975821018219
}
|
655
|
Accept a number and identify whether it is an odd or even number. Print "EVEN" or "ODD" as your output.
Filenames: Seatwork_02_01.jpg, Seatwork_02_01.c
Accept the room number, room capacity, and the number of students enrolled in the semester. Output whether the room is "FULL" or "NOT FULL".
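A minimal sketch for the first task (the room task follows the same read-compare-print pattern, with enrolled >= capacity deciding "FULL"):
/* Sketch for Seatwork_02_01.c: odd/even check via the modulo operator. */
#include <stdio.h>

int main(void)
{
    int n;
    scanf("%d", &n);
    printf(n % 2 == 0 ? "EVEN\n" : "ODD\n");
    return 0;
}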
|
3d33868e89eb90de1e0f6d84b8b77953
|
{
"intermediate": 0.43318748474121094,
"beginner": 0.20887359976768494,
"expert": 0.3579389154911041
}
|
656
|
I got a function implemented as
def train_model(model, epochs = 1000, batch_size = 64, learning_rate = 0.01):
# Define loss and optimizer
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
#scheduler = StepLR(optimizer, step_size=100, gamma=0.90)
# Training segment
train_losses = []
train_accuracies = []
test_losses = []
test_accuracies = []
for epoch in range(epochs):
epoch_train_losses = []
epoch_y_true = []
epoch_y_pred = []
for i in range(0, len(X_train), batch_size):
#X_batch = torch.tensor(X_train.iloc[i:i + batch_size].values, dtype=torch.float32)
X_batch = torch.tensor(X_train[i:i + batch_size], dtype=torch.float)
y_batch = torch.tensor(y_train[i:i + batch_size].values, dtype=torch.float).view(-1, 1)
optimizer.zero_grad()
y_pred = model(X_batch)
loss = criterion(y_pred, y_batch)
loss.backward()
optimizer.step()
epoch_train_losses.append(loss.item())
epoch_y_true.extend(y_batch.numpy().flatten().tolist())
epoch_y_pred.extend((y_pred > 0.5).float().numpy().flatten().tolist())
#scheduler.step()
train_losses.append(sum(epoch_train_losses) / len(epoch_train_losses))
train_accuracies.append(accuracy_score(epoch_y_true, epoch_y_pred))
# Testing segment
with torch.no_grad():
#X_test_tensor = torch.tensor(X_test.values, dtype=torch.float32)
X_test_tensor = torch.tensor(X_test, dtype=torch.float32)
y_test_tensor = torch.tensor(y_test.values, dtype=torch.float32).view(-1, 1)
test_pred = model(X_test_tensor)
test_loss = criterion(test_pred, y_test_tensor)
test_accuracy = accuracy_score(y_test_tensor, (test_pred > 0.5).float())
test_losses.append(test_loss.item())
test_accuracies.append(test_accuracy)
if epoch % 100 == 0:
print(f"Epoch: {epoch+1}/{epochs}, Training Loss: {train_losses[-1]}, Test Loss: {test_loss.item()}, Training Accuracy: {train_accuracies[-1]}, Test Accuracy: {test_accuracy}")
return train_losses, train_accuracies, test_losses, test_accuracies
Based on this ,I also implemented the following to handle multiple techniques :
from sklearn.model_selection import KFold
from torch.optim.lr_scheduler import ReduceLROnPlateau
def train_and_evaluate_model(model, learning_rate = 0.01, epochs = 1000, optimization_technique=None, k_splits=None, batch_size=64, patience=None, scheduler_patience=None, **kwargs):
epoch_train_losses = []
epoch_train_accuracies = []
epoch_test_losses = []
epoch_test_accuracies = []
if optimization_technique == 'k_fold' and k_splits:
kfold = KFold(n_splits=k_splits, shuffle=True)
else:
kfold = KFold(n_splits=2, shuffle=False)
for train_index, test_index in kfold.split(X_scaled):
X_train, X_test = X_scaled[train_index], X_scaled[test_index]
y_train, y_test = y[train_index], y[test_index]
X_train, X_test = torch.tensor(X_train, dtype=torch.float32), torch.tensor(X_test, dtype=torch.float32)
y_train, y_test = torch.tensor(y_train.values, dtype=torch.float32).view(-1, 1), torch.tensor(y_test.values, dtype=torch.float32).view(-1, 1)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.BCELoss()
scheduler = ReduceLROnPlateau(optimizer, 'min', patience=scheduler_patience, verbose=True) if optimization_technique == 'learning_rate_scheduler' else None
best_loss = float('inf')
stopping_counter = 0
for epoch in range(epochs):
optimizer.zero_grad()
y_pred_train = model(X_train)
loss = criterion(y_pred_train, y_train)
loss.backward()
optimizer.step()
with torch.no_grad():
y_pred_test = model(X_test)
test_loss = criterion(y_pred_test, y_test)
epoch_train_losses.append(loss.item())
epoch_train_accuracies.extend((y_pred_train > 0.5).float().numpy().flatten().tolist())
test_accuracy = accuracy_score(y_train, (y_pred_train > 0.5).float())
epoch_train_accuracies.append(test_accuracy)
epoch_test_losses.append(test_loss.item())
epoch_test_accuracies.extend((y_pred_test > 0.5).float().numpy().flatten().tolist())
epoch_test_accuracies.append(accuracy_score(y_test, (y_pred_test > 0.5).float()))
if optimization_technique == 'early_stopping' and patience:
if test_loss < best_loss:
best_loss = test_loss
stopping_counter = 0
else:
stopping_counter += 1
if stopping_counter > patience:
break
if optimization_technique == 'learning_rate_scheduler' and scheduler_patience and scheduler:
scheduler.step(test_loss)
if optimization_technique == 'k_fold' and k_splits:
if epoch == 999:
break
if epoch % 100 == 0:
print(f"Epoch: {epoch+1}/{epochs}, Training Loss: {epoch_train_losses[-1]}, Test Loss: {test_loss.item()}, Training Accuracy: {epoch_train_accuracies[-1]}, Test Accuracy: {test_accuracy}")
if optimization_technique != 'k_fold':
break
return epoch_train_losses, epoch_train_accuracies, epoch_test_losses, epoch_test_accuracies
Now train_and_evaluate_model() duplicates 100% of the train_model code. So, instead of rewriting the same code again, how can I reuse the train_model() code inside train_and_evaluate_model() without rewriting it, for example by storing train_model in a lambda or something similar? Help would be appreciated.
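Sketch of what I mean (the signatures are my assumption): pass the split data into train_model instead of reading globals, then let train_and_evaluate_model delegate to it; this reuses the KFold import already present above:
def train_model(model, X_train, y_train, X_test, y_test,
                epochs=1000, batch_size=64, learning_rate=0.01):
    ...  # the original body, unchanged except it reads these parameters

def train_and_evaluate_model(model, optimization_technique=None, k_splits=None, **kwargs):
    if optimization_technique == 'k_fold' and k_splits:
        results = []
        for tr, te in KFold(n_splits=k_splits, shuffle=True).split(X_scaled):
            results.append(train_model(model, X_scaled[tr], y[tr], X_scaled[te], y[te], **kwargs))
        return results
    # default: single split, same helper
    return train_model(model, X_train, y_train, X_test, y_test, **kwargs)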
|
eac75f767a33eaa2cf8086a765a3ea6f
|
{
"intermediate": 0.34950152039527893,
"beginner": 0.3513781726360321,
"expert": 0.29912033677101135
}
|
657
|
write a program in js and html that takes details about the execution time and time period of the processes, and the program will generate corresponding Gantt Chart for Rate Monotonic, and Earliest Deadline First algorithms. Furthermore, you also have to tell whether the processes are schedulable or not.
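The schedulability part reduces to utilization math: EDF schedules implicit-deadline periodic tasks iff U = sum(Ci/Ti) <= 1, while RM has the sufficient Liu and Layland bound U <= n(2^(1/n) - 1). A sketch of those checks (the Gantt rendering is separate):
// Sketch: schedulability tests for implicit-deadline periodic tasks.
// tasks = [{ execution: C, period: T }, ...]
function utilization(tasks) {
  return tasks.reduce((u, t) => u + t.execution / t.period, 0);
}

function isSchedulableEDF(tasks) {
  return utilization(tasks) <= 1;               // necessary and sufficient
}

function isSchedulableRM(tasks) {
  const n = tasks.length;
  const bound = n * (Math.pow(2, 1 / n) - 1);   // Liu & Layland bound
  return utilization(tasks) <= bound;            // sufficient only
}

console.log(isSchedulableRM([{execution: 1, period: 4}, {execution: 2, period: 6}])); // true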
|
2dd3ae1b1a8cb027b51f30edb16b9240
|
{
"intermediate": 0.2929273843765259,
"beginner": 0.14889846742153168,
"expert": 0.5581741333007812
}
|
658
|
teach the basics of how to code in assembly 8086
|
4c5aed860dfd7fdd63509d59a6ff8127
|
{
"intermediate": 0.06169584393501282,
"beginner": 0.8511004447937012,
"expert": 0.08720370382070541
}
|
659
|
Pretend you are marketing genius. We are going to launch a hackathon at Akaike Technologies called Hackaike. You should give me a document that outlines an email marketing campaign and a single webpage to promote it.
|
e535fd25d791b0a107485c2f2b101e40
|
{
"intermediate": 0.33291390538215637,
"beginner": 0.40728622674942017,
"expert": 0.25979989767074585
}
|
660
|
React TypeScript, klinecharts charting library. Market data needs to be accessed as it is here: https://www.binance.com/ru/futures/DOGEUSDT
Here is my code:
import React, {useState, useEffect, ChangeEvent} from "react";
import {
    Badge, Box,
    Button,
    ButtonGroup,
    Chip,
    CircularProgress,
    Grid,
    Icon, IconButton, Skeleton, TextField,
    Typography
} from "@mui/material";
import ArrowUpwardRoundedIcon from '@mui/icons-material/ArrowUpwardRounded';
import ArrowDownwardRoundedIcon from '@mui/icons-material/ArrowDownwardRounded';
import {useAuthContext} from "../../AuthProvider/AuthProvider";
import FormatPrice from "../../Common/FormatPrice";
import {Order, readCandlesByTrade, readOrderByTrade, Trade, updateTrade} from "../../../actions/cicap-diary-trades";
import DataTable from "../../DataTable/DataTable";
import {useSnackbar} from "notistack";
import {CandleChart} from "../CandleChart/CandleChart";
import {TradeEntity} from "../CandleChart/CandleChart.props";
import {FormattedNumber} from "react-intl";
import ImageIcon from "next/image";
import {createTradeImage, deleteTradeImage, tradeImagesCollection, TradeImage} from "../../../actions/cicap-diary-trades-images";
const timezoneOffset = new Date().getTimezoneOffset() * 60;
interface SideTypeProps {
    order: Order;
}
const SideType = ({order}: SideTypeProps) => {
    let label = '';
    let color = 'default';
    const icon = order?.side && 'sell' === order.side
        ? ArrowDownwardRoundedIcon
        : ArrowUpwardRoundedIcon;
    switch (order?.type) {
        case 'limit':
            label = 'L';
            color = 'success'
            break;
        case 'market':
            label = 'M';
            color = 'warning'
            break;
        case 'stop_market':
            label = 'F';
            color = 'primary'
            break;
    }
    return <>
        <Badge
            badgeContent={label}
            // @ts-ignore
            color={color}
        >
            <Icon component={icon} color="action" />
        </Badge>
    </>
}
interface CandlePrice {
    timestamp: number;
    open: number;
    high: number;
    low: number;
    close: number;
    volume: number;
}
interface TradeDetailsProps {
    tradeId: string;
    defaultChartInterval: string;
}
const TradeDetails = ({tradeId, defaultChartInterval}: TradeDetailsProps) => {
    const [trade, setTrade] = useState<{orders: Array<Order>, data: Trade} | undefined>(undefined);
    const [orders, setOrders] = useState<Array<TradeEntity>>([]);
    const [chartInterval, setChartInterval] = useState(defaultChartInterval);
    const [waiting, setWaiting] = useState(false);
    const [candleData, setCandleData] = useState<Array<CandlePrice>>([]);
    const [images, setImages] = useState<Array<TradeImage>>([]);
    const [imageIdDeletion, setImageIdDeletion] = useState<string|null>(null);
    const {diaryToken} = useAuthContext();
    const [description, setDescription] = useState('');
    const [conclusion, setConclusion] = useState('');
    const [videoLink, setVideoLink] = useState('');
    const {enqueueSnackbar} = useSnackbar();
    const ref = React.createRef<HTMLInputElement>();
    const fetchTrade = () => {
        if (!tradeId || !diaryToken) {
            return;
        }
        setWaiting(true);
        readOrderByTrade(tradeId, diaryToken)
            .then(data => {
                setWaiting(false);
                if (!data) return;
                const newOrders: Array<TradeEntity> = [];
                data.orders.forEach(order => {
                    const timestamp = Math.floor(order.msTimestamp) - timezoneOffset;
                    newOrders.push({
                        time: timestamp,
                        position: order.side,
                        value: parseFloat(order.price),
                    });
                });
                setTrade(data);
                setOrders(newOrders);
            });
    }
    return <> <Box sx={{pr: 2}}>
        {
            trade?.data.id && candleData ? (<>
                <CandleChart
                    images={images}
                    candles={candleData}
                    tradeId={trade?.data.id}
                    orders={orders}
                    interval={chartInterval}
                    openPrice={trade?.data.openPrice}
                    closePrice={trade?.data.closePrice}
                    pricePrecision={trade.data.pricePrecision}
                    quantityPrecision={trade.data.quantityPrecision}
                    createImage={createImage}
                />
            </>) : (<>
                <Skeleton variant="rectangular" height={500} />
            </>)
        }
    </Box>
    </>
}
import React, {useEffect, useRef, useState} from "react";
import {
    init,
    dispose,
    Chart,
    DeepPartial,
    IndicatorFigureStylesCallbackData,
    Indicator,
    IndicatorStyle,
    KLineData,
    utils,
} from "klinecharts";
import {CandleChartProps} from "./CandleChart.props";
import CandleChartToolbar from "./CandleChartToolbar";
import {Style} from "util";
import {Box, Icon, IconButton, Stack} from "@mui/material";
import getMinutesTickSizeByInterval from "./utils/getMinutesTickSizeByInterval.util";
import drawTrade from "./utils/drawTrade.util";
import drawTradeLines from "./utils/drawTradeLines.util";
import {BasketIcon, ScreenIcon} from "../../icons";
import {FullScreen, useFullScreenHandle} from "react-full-screen";
interface Vol {
    volume?: number
}
export const CandleChart = ({
    images,
    candles,
    tradeId,
    orders,
    interval,
    openPrice,
    closePrice,
    pricePrecision,
    quantityPrecision,
    createImage
}: CandleChartProps) => {
    console.log(candles);
    const chart = useRef<Chart|null>();
    const paneId = useRef<string>("");
    const [figureId, setFigureId] = useState<string>("")
    const ref = useRef<HTMLDivElement>(null);
    const handle = useFullScreenHandle();
    useEffect(() => {
        chart.current = init(`chart-${tradeId}`, {styles: chartStyles});
        return () => dispose(`chart-${tradeId}`);
    }, [tradeId]);
    useEffect(() => {
        const onWindowResize = () => chart.current?.resize();
        window.addEventListener("resize", onWindowResize);
        return () => window.removeEventListener("resize", onWindowResize);
    }, []);
    useEffect(() => {
        chart.current?.applyNewData(candles);
        chart.current?.overrideIndicator({
            name: "VOL",
            shortName: "Volume",
            calcParams: [],
            figures: [
                {
                    key: "volume",
                    title: "",
                    type: "bar",
                    baseValue: 0,
                    styles: (data: IndicatorFigureStylesCallbackData<Vol>, indicator: Indicator, defaultStyles: IndicatorStyle) => {
                        const kLineData = data.current.kLineData as KLineData
                        let color: string
                        if (kLineData.close > kLineData.open) {
                            color = utils.formatValue(indicator.styles, "bars[0].upColor", (defaultStyles.bars)[0].upColor) as string
                        } else if (kLineData.close < kLineData.open) {
                            color = utils.formatValue(indicator.styles, "bars[0].downColor", (defaultStyles.bars)[0].downColor) as string
                        } else {
                            color = utils.formatValue(indicator.styles, "bars[0].noChangeColor", (defaultStyles.bars)[0].noChangeColor) as string
                        }
                        return { color }
                    }
                }
            ]
        }, paneId.current);
        chart.current?.createIndicator("VOL", false, { id: paneId.current });
        chart.current?.setPriceVolumePrecision(+pricePrecision, +quantityPrecision);
    }, [candles]);
    useEffect(() => {
        if (!orders || orders.length === 0 || candles.length === 0) return;
        const minTime = orders[0].time;
        const maxTime = orders[orders.length - 1].time;
        const needleTime = minTime + (maxTime - minTime) / 2;
        chart.current?.scrollToTimestamp(needleTime + 45 * getMinutesTickSizeByInterval(interval) * 60 * 1000);
        drawTrade(chart, paneId, orders, interval);
        if (openPrice && closePrice) {
            let openTime = Infinity;
            let closeTime = -Infinity;
            orders.forEach(order => {
                if (openTime > order.time) {
                    openTime = order.time;
                }
                if (closeTime < order.time) {
                    closeTime = order.time;
                }
            });
            drawTradeLines(
                chart,
                openPrice,
                openTime,
                closePrice,
                closeTime,
                orders[0].position,
                paneId,
                pricePrecision,
                quantityPrecision,
            );
        }
    }, [orders, candles, tradeId]);
    return (<> <Stack direction="row" height={!handle.active ? 550 : "100%"} width="100%">
        <CandleChartToolbar
            setFigureId={setFigureId}
            chart={chart} paneId={paneId}
            handle={handle}
        />
        <Box
            ref={ref}
            id={`chart-${tradeId}`}
            width="calc(100% - 55px)"
            height={!handle.active ? 550 : "100%"}
            sx={{ borderLeft: "1px solid #ddd" }}
        >
            {
                figureId.length > 0 &&
                <Stack
                    sx={{
                        backgroundColor: "#CBD4E3",
                        borderRadius: 1,
                        position: "absolute",
                        zIndex: 10,
                        right: 80,
                        top: 30,
                        border: "1px solid #697669",
                    }}
                    spacing={2}
                >
                    <IconButton sx={{ borderRadius: 1 }} onClick={removeFigure}>
                        <Icon component={BasketIcon} />
                    </IconButton>
                </Stack>
            }
        </Box>
    </Stack>
    </>);
}
For trades where there is no closing order yet, the chart needs to display real-time candles and calculate the percentage change at the current price, so that the trade can be followed live. The chart must reflect the current market situation, as it does here: https://www.binance.com/ru/futures/DOGEUSDT
import { Options, WebSocketHook, JsonValue } from './types';
export declare const useWebSocket: <T extends JsonValue | null = JsonValue | null>(url: string | (() => string | Promise<string>) | null, options?: Options, connect?: boolean) => WebSocketHook<T, MessageEvent<any> | null>;
useWebSocket
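For the live part, a sketch of my assumption of the wiring: Binance USDT-M futures pushes kline updates over wss://fstream.binance.com/ws/<symbol>@kline_<interval>, and klinecharts' updateData() replaces the bar with the same timestamp or appends a new one (useLiveCandles is an illustrative name, not existing code):
import React, {useEffect} from "react";
import {Chart} from "klinecharts";

export function useLiveCandles(
    symbol: string,
    interval: string,
    chart: React.MutableRefObject<Chart | null | undefined>,
    openPrice?: string,
) {
    useEffect(() => {
        const ws = new WebSocket(`wss://fstream.binance.com/ws/${symbol.toLowerCase()}@kline_${interval}`);
        ws.onmessage = event => {
            const msg = JSON.parse(event.data);
            if (msg.e !== "kline") return;
            const k = msg.k; // Binance kline payload: t/o/h/l/c/v fields
            chart.current?.updateData({
                timestamp: k.t,
                open: parseFloat(k.o),
                high: parseFloat(k.h),
                low: parseFloat(k.l),
                close: parseFloat(k.c),
                volume: parseFloat(k.v),
            });
            // Unrealized percentage for a still-open trade, at the live price.
            if (openPrice) {
                const pct = (parseFloat(k.c) - parseFloat(openPrice)) / parseFloat(openPrice) * 100;
                console.log(`unrealized: ${pct.toFixed(2)}%`);
            }
        };
        return () => ws.close();
    }, [symbol, interval]);
}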
|
57ad5de7d133295d0a85fd42a02769ca
|
{
"intermediate": 0.3814411759376526,
"beginner": 0.41001394391059875,
"expert": 0.20854489505290985
}
|
661
|
I'm working on a fivem volley ball script written in lua. I'm unsure how to detect if the ball hits the ground and also do the physics of the player interacting with the object
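My current idea, as a sketch (ball is assumed to hold the object handle; thresholds and force values are guesses): GetEntityHeightAboveGround gives a cheap ground-contact test, and ApplyForceToEntity handles the hit:
-- Client-side sketch: detect the ball touching the ground.
CreateThread(function()
    while true do
        Wait(0)
        if ball and DoesEntityExist(ball) then
            local height = GetEntityHeightAboveGround(ball)
            if height < 0.15 then
                TriggerEvent("volleyball:ballGrounded", GetEntityCoords(ball))
                Wait(500) -- debounce so one bounce fires once
            end
        end
    end
end)

-- Hitting the ball: push it away from the player (force values are guesses).
local function hitBall(ball)
    local dir = GetEntityCoords(ball) - GetEntityCoords(PlayerPedId())
    ApplyForceToEntity(ball, 1, dir.x, dir.y, 6.0, 0.0, 0.0, 0.0, 0, false, true, true, false, true)
end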
|
92c07428adc7219108ccefb79f9022e1
|
{
"intermediate": 0.5269007086753845,
"beginner": 0.22597284615039825,
"expert": 0.2471264898777008
}
|
662
|
This is a CMake script; what improvements could be made to it:
# static builds: link qml import plugins into the target.
function(add_qml_static_imports target qml_DIRS)
get_filename_component(targetName ${target} NAME)
set(_qml_imports_file ${CMAKE_CURRENT_BINARY_DIR}/${target}_qml_plugin_import.cpp)
set(bin_DIRS ${QT_INSTALL_PREFIX}/bin)
if (DEFINED QT_HOST_PATH)
list(APPEND bin_DIRS "${QT_HOST_PATH}/bin" "${QT_HOST_PATH}/libexec")
endif()
find_program(QMLIMPORTSCANNER_COMMAND qmlimportscanner PATHS ${bin_DIRS}
DOC "The Qt qmlimportscanner executable"
NO_DEFAULT_PATH CMAKE_FIND_ROOT_PATH_BOTH)
list(LENGTH qml_DIRS qml_DIRS_LENGTH)
if (QMLIMPORTSCANNER_COMMAND AND qml_DIRS_LENGTH GREATER 0)
set(_qml_imports)
set(_link_directories)
set(_link_libraries)
foreach (_gml_SOURCE_DIR ${qml_DIRS})
set(QMLIMPORTSCANNER_ARGS "${_gml_SOURCE_DIR}" -importPath "${QT_INSTALL_PREFIX}/qml")
execute_process(
COMMAND "${QMLIMPORTSCANNER_COMMAND}" ${QMLIMPORTSCANNER_ARGS}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
OUTPUT_VARIABLE _imports
)
string(REGEX REPLACE "[\r\n \[{]" "" _imports "${_imports}")
string(REGEX REPLACE "}," ";" _imports "${_imports}")
string(REGEX REPLACE "[}\]]" ";" _imports "${_imports}")
foreach(_line ${_imports})
if(${_line} MATCHES "plugin" AND ${_line} MATCHES "path" AND ${_line} MATCHES "classname")
string(REGEX MATCH "\"[ ]*path[ ]*\"[ ]*:[ ]*\"[^,]*[ ]*\"" _path ${_line})
string(REGEX REPLACE "^\"[ ]*path[ ]*\"[ ]*:" "" _path "${_path}")
if (_path)
list(APPEND _link_directories "${CMAKE_LIBRARY_PATH_FLAG}${_path}")
endif()
string(REGEX MATCH "\"[ ]*plugin[ ]*\"[ ]*:[ ]*\"[^,]*[ ]*\"" _plugin ${_line})
string(REGEX REPLACE "^\"[ ]*plugin[ ]*\"[ ]*:" "" _plugin "${_plugin}")
string(REGEX REPLACE "\"" "" _plugin "${_plugin}")
if (_plugin)
if (IOS)
list(APPEND _link_libraries ${_plugin}$<$<CONFIG:Debug>:_debug>)
elseif (WIN32)
list(APPEND _link_libraries ${_plugin}$<$<CONFIG:Debug>:d>)
else ()
list(APPEND _link_libraries ${_plugin})
endif ()
endif()
string(REGEX MATCH "\"[ ]*classname[ ]*\"[ ]*:[ ]*\"[^,]*[ ]*\"" _classname ${_line})
string(REGEX REPLACE "^\"[ ]*classname[ ]*\"[ ]*:" "" _classname "${_classname}")
string(REGEX REPLACE "\"" "" _classname "${_classname}")
if (_classname)
list(APPEND _qml_imports "Q_IMPORT_PLUGIN(${_classname})")
endif()
endif ()
endforeach()
endforeach()
list(REMOVE_DUPLICATES _link_directories)
list(REMOVE_DUPLICATES _link_libraries)
list(REMOVE_DUPLICATES _qml_imports)
target_link_libraries(${target} PRIVATE ${_link_directories} ${_link_libraries})
set(_import_header "// This file is autogenerated by cmake toolchain."
" //It imports static plugin classes for plugins used by QML imports."
"#include <QtPlugin>")
string(REPLACE ";" "\n" _import_header "${_import_header}")
string(REPLACE ";" "\n" _qml_imports "${_qml_imports}")
file(GENERATE OUTPUT "${_qml_imports_file}" CONTENT "${_import_header}\n${_qml_imports}\n")
set_source_files_properties(${_qml_imports_file} PROPERTIES GENERATED 1 SKIP_AUTOMOC ON)
get_property(_sources TARGET ${target} PROPERTY SOURCES)
list(APPEND _sources ${_qml_imports_file})
set_target_properties(${target} PROPERTIES SOURCES "${_sources}")
endif()
endfunction(add_qml_static_imports)
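One concrete improvement, sketched below: execute_process currently ignores failures, and the output is parsed with fragile regexes. At minimum check the scanner's exit code; on CMake 3.19+ the string(JSON ...) command could replace the regex parsing entirely:
# Sketch: fail loudly if qmlimportscanner cannot be run or returns an error.
execute_process(
    COMMAND "${QMLIMPORTSCANNER_COMMAND}" ${QMLIMPORTSCANNER_ARGS}
    WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
    OUTPUT_VARIABLE _imports
    RESULT_VARIABLE _scan_result
    ERROR_VARIABLE _scan_error
)
if (NOT _scan_result EQUAL 0)
    message(FATAL_ERROR "qmlimportscanner failed: ${_scan_error}")
endif()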
|
8a2d1da148e12e07d84389d655773660
|
{
"intermediate": 0.3324063718318939,
"beginner": 0.4921172559261322,
"expert": 0.1754762828350067
}
|
663
|
write a code to make a chatbot like you and implement it
|
9d35c8b396d63cc434c5189370a8c4e9
|
{
"intermediate": 0.23955124616622925,
"beginner": 0.1437550187110901,
"expert": 0.6166937351226807
}
|
664
|
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% block title %}My App{% endblock %}</title>
<!-- Bootstrap CSS -->
<!-- <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+6 -->
{% block content %}
{% include 'grid.html' %}
{% endblock %}
this is my index.html
and I have a question
|
763cc3a295f35c18ef53733f4fb1d05e
|
{
"intermediate": 0.3644965887069702,
"beginner": 0.3001033365726471,
"expert": 0.3354000151157379
}
|
665
|
Can you explain how the Diffusion/Wiener Model can be utilized in image quality studies, and what insights it could provide researchers about the decision-making process of observers rating static images on a scale of 1 to 5, including the possibility of some ratings requiring less evidence before they are made?
|
dc4b40366b38b3358d608648def0d8de
|
{
"intermediate": 0.16838230192661285,
"beginner": 0.059534717351198196,
"expert": 0.7720829844474792
}
|
666
|
What suggestions can you make for improving this cmake script:
# static builds: link qml import plugins into the target.
function(add_qml_static_imports target qml_DIRS)
get_filename_component(targetName ${target} NAME)
set(_qml_imports_file ${CMAKE_CURRENT_BINARY_DIR}/${target}_qml_plugin_import.cpp)
set(bin_DIRS ${QT_INSTALL_PREFIX}/bin)
if (DEFINED QT_HOST_PATH)
list(APPEND bin_DIRS "${QT_HOST_PATH}/bin" "${QT_HOST_PATH}/libexec")
endif()
find_program(QMLIMPORTSCANNER_COMMAND qmlimportscanner PATHS ${bin_DIRS}
DOC "The Qt qmlimportscanner executable"
NO_DEFAULT_PATH CMAKE_FIND_ROOT_PATH_BOTH)
list(LENGTH qml_DIRS qml_DIRS_LENGTH)
if (QMLIMPORTSCANNER_COMMAND AND qml_DIRS_LENGTH GREATER 0)
set(_qml_imports)
set(_link_directories)
set(_link_libraries)
foreach (_gml_SOURCE_DIR ${qml_DIRS})
set(QMLIMPORTSCANNER_ARGS "${_gml_SOURCE_DIR}" -importPath "${QT_INSTALL_PREFIX}/qml")
execute_process(
COMMAND "${QMLIMPORTSCANNER_COMMAND}" ${QMLIMPORTSCANNER_ARGS}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
OUTPUT_VARIABLE _imports
)
string(REGEX REPLACE "[\r\n \[{]" "" _imports "${_imports}")
string(REGEX REPLACE "}," ";" _imports "${_imports}")
string(REGEX REPLACE "[}\]]" ";" _imports "${_imports}")
foreach(_line ${_imports})
if(${_line} MATCHES "plugin" AND ${_line} MATCHES "path" AND ${_line} MATCHES "classname")
string(REGEX MATCH "\"[ ]*path[ ]*\"[ ]*:[ ]*\"[^,]*[ ]*\"" _path ${_line})
string(REGEX REPLACE "^\"[ ]*path[ ]*\"[ ]*:" "" _path "${_path}")
if (_path)
list(APPEND _link_directories "${CMAKE_LIBRARY_PATH_FLAG}${_path}")
endif()
string(REGEX MATCH "\"[ ]*plugin[ ]*\"[ ]*:[ ]*\"[^,]*[ ]*\"" _plugin ${_line})
string(REGEX REPLACE "^\"[ ]*plugin[ ]*\"[ ]*:" "" _plugin "${_plugin}")
string(REGEX REPLACE "\"" "" _plugin "${_plugin}")
if (_plugin)
if (IOS)
list(APPEND _link_libraries ${_plugin}$<$<CONFIG:Debug>:_debug>)
elseif (WIN32)
list(APPEND _link_libraries ${_plugin}$<$<CONFIG:Debug>:d>)
else ()
list(APPEND _link_libraries ${_plugin})
endif ()
endif()
string(REGEX MATCH "\"[ ]*classname[ ]*\"[ ]*:[ ]*\"[^,]*[ ]*\"" _classname ${_line})
string(REGEX REPLACE "^\"[ ]*classname[ ]*\"[ ]*:" "" _classname "${_classname}")
string(REGEX REPLACE "\"" "" _classname "${_classname}")
if (_classname)
list(APPEND _qml_imports "Q_IMPORT_PLUGIN(${_classname})")
endif()
endif ()
endforeach()
endforeach()
list(REMOVE_DUPLICATES _link_directories)
list(REMOVE_DUPLICATES _link_libraries)
list(REMOVE_DUPLICATES _qml_imports)
target_link_libraries(${target} PRIVATE ${_link_directories} ${_link_libraries})
set(_import_header "// This file is autogenerated by cmake toolchain."
" //It imports static plugin classes for plugins used by QML imports."
"#include <QtPlugin>")
string(REPLACE ";" "\n" _import_header "${_import_header}")
string(REPLACE ";" "\n" _qml_imports "${_qml_imports}")
file(GENERATE OUTPUT "${_qml_imports_file}" CONTENT "${_import_header}\n${_qml_imports}\n")
set_source_files_properties(${_qml_imports_file} PROPERTIES GENERATED 1 SKIP_AUTOMOC ON)
get_property(_sources TARGET ${target} PROPERTY SOURCES)
list(APPEND _sources ${_qml_imports_file})
set_target_properties(${target} PROPERTIES SOURCES "${_sources}")
endif()
endfunction(add_qml_static_imports)
|
417be78a76cbabc6b6986bc4a904c9c8
|
{
"intermediate": 0.30373629927635193,
"beginner": 0.5640760064125061,
"expert": 0.13218769431114197
}
|
667
|
npm WARN ERESOLVE overriding peer dependency
npm WARN While resolving: eslint-loader@2.2.1
npm WARN Found: eslint@7.15.0
npm WARN node_modules/eslint
npm WARN dev eslint@"7.15.0" from the root project
npm WARN 4 more (@vue/cli-plugin-eslint, babel-eslint, ...)
npm WARN
npm WARN Could not resolve dependency:
npm WARN peer eslint@">=1.6.0 <7.0.0" from eslint-loader@2.2.1
How can this problem be solved?
|
4381684446654679fa74de697b0c5a24
|
{
"intermediate": 0.30332404375076294,
"beginner": 0.3446481227874756,
"expert": 0.35202786326408386
}
|
668
|
hi
|
1c373dbef747933a5576b1e0fae0ac98
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
669
|
In an environment where an "instagram pro account", the "instagram graph API" (version 16), "instaloader", "Python3", and the usual Python libraries are available, I plan to develop and test in "jupyterlab" and then finally display the information in a browser using "Streamlit".
For retrieving information from Instagram, use the "instagram graph API" as the primary method; if "instaloader" is used, Instagram will detect it as an unauthorized login, so use a method that does not rely on login credentials. Also, restrict the displayed content to "still images" only, excluding "videos", "real" (Reels), and "Stories".
(1) Automatically download each post's images from Instagram, display only the first image as a representative cover about 600 pixels wide, and switch to a full-screen overlay showing all images in the post when that image is clicked.
(2) Assign an ID of the form "YYYYMMDD" based on the posting date (appending branch numbers "_1", "_2" when there are multiple posts on the same day) and display it directly below the image from (1).
(3) From the post text, take the description I wrote myself, remove everything before the string "[Description]" and everything from the space just before "[Tags]" onward, and display only the remaining text directly below the date from (2).
(4) Below (3), display the post's "like count" and, to its right, the "like rate" computed from the post's "impression count" and "like count", e.g. "(like rate: 23.9%)".
(5) Directly below (4), under an item called "follow count", display the number of users who followed as a result of seeing that post, with the matching user IDs to its right, comma separated.
(6) Below (5), lay out each commenter's user ID and comment side by side separated by ":", show only the latest 5 by default, and add a "show more" button underneath that reveals all comments when pressed.
(7) Treat (1) through (6) as one section and tile multiple posts on the Streamlit browser screen, ordered newest to oldest from top-left to bottom-right.
Please show the complete, properly indented Python code that implements the features above.
|
03e40fc9bc12e28476f9615ba15312fe
|
{
"intermediate": 0.3593831956386566,
"beginner": 0.4251910448074341,
"expert": 0.2154257744550705
}
|
670
|
predictors = housing_df[['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'LSTAT']]
outcome = housing_df[['MEDV']]
x=pd.DataFrame(predictors)
y=pd.DataFrame(outcome)
train_X,valid_X,train_y,valid_y=train_test_split(x,y,test_size=0.4,random_state=1)
print('Training set:', train_X.shape, 'Validation set:', valid_X.shape)
def train_model(variables):
model = LinearRegression()
model.fit(train_X[variables], train_y)
return model
def score_model(model, variables):
return AIC_score(train_y, model.predict(train_X[variables]), model)
#Run the backward_elimination function created by the author and shown in Ch06
#MISSING 1 line of code (a sketch of this line follows below)
print("Best Subset:", best_variables)
|
777e52e4a141dbdcad9eee247e70bf2e
|
{
"intermediate": 0.3054628074169159,
"beginner": 0.4122174382209778,
"expert": 0.2823197543621063
}
|
671
|
In python how can I add to the setup.cfg file the requirements of a package that is located in Azure DevOps Feed?
|
e8e01f5d7e996fb7d900ee83eb6c5092
|
{
"intermediate": 0.4893832802772522,
"beginner": 0.2575233578681946,
"expert": 0.2530933618545532
}
|
672
|
i uploaded this dataset to kaggle https://www.kaggle.com/datasets/hiyassat/quran-corpus/code using the tutorial from speechbrain https://colab.research.google.com/drive/1aFgzrUv3udM_gNJNUoLaHIm78QHtxdIz?usp=sharing#scrollTo=b3tnXnrWc2My . You are requested to train this data using SpeechBrain from Hugging Face; as deliverables, provide Python code that can be used to train this data on my machine using SpeechBrain from Hugging Face.
|
178c758c99d85a308515af03cd37dd96
|
{
"intermediate": 0.3435021936893463,
"beginner": 0.26835644245147705,
"expert": 0.3881414234638214
}
|
673
|
make simple database
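One minimal reading of this request, sketched with Python's built-in sqlite3 module and a hypothetical users table:
import sqlite3

# Create (or open) a local database file and a simple table.
conn = sqlite3.connect("example.db")  # hypothetical file name
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

# Insert a row and read it back.
cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
conn.commit()
print(cur.execute("SELECT id, name FROM users").fetchall())
conn.close()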
|
7f8f26d7bb3947cfb304a758a7538c8f
|
{
"intermediate": 0.40100905299186707,
"beginner": 0.37352386116981506,
"expert": 0.22546714544296265
}
|
674
|
https://drive.google.com/file/d/1ye-iAX_Fb18DxoDLjrh26zLQCiut24Ty/view?usp=share_link this is my dataset link; please perform the task I give you.
|
d7a0240f8904d07759284a5e773ac5bb
|
{
"intermediate": 0.31026744842529297,
"beginner": 0.29322218894958496,
"expert": 0.39651042222976685
}
|
675
|
Deep learning based recognition of foetal anticipation using cardiotocograph data
I would like someone to extract the features, do feature selection and labeling, and select the best optimized method from the given dataset:
Step 1) Use K-means Clustering for Outlier Removal (see the sketch after this list)
Step 2) Feature Extraction and Classification: Feature Pyramid Siamese network
Step 3) Loss function optimization using Rain Optimization algorithm
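A minimal sketch of Step 1 only, assuming scikit-learn and a numeric feature matrix X; the 95th-percentile distance cutoff is an illustrative choice, not part of the original task:
import numpy as np
from sklearn.cluster import KMeans

def remove_outliers_kmeans(X, n_clusters=3, quantile=0.95):
    # Drop points unusually far from their assigned cluster centre.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    keep = dists <= np.quantile(dists, quantile)  # keep the closest 95%
    return X[keep], keep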
|
1deae6deabb5cbeb016a93b08bbe9193
|
{
"intermediate": 0.020494285970926285,
"beginner": 0.01643778756260872,
"expert": 0.9630679488182068
}
|
676
|
can you make this code send user_id, gsr_data, emg_data, current_time to a Google Firebase database? (a minimal sketch of the Firebase side follows after the code)
import serial
import time
import gspread
from oauth2client.service_account import ServiceAccountCredentials
from datetime import datetime
# Set up the serial connection to the Arduino Uno board
ser = serial.Serial('/dev/ttyACM0', 9600)
user_id = "1"
# Set up the connection to Google Sheets
scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('/home/salvador/Downloads/credentials.json', scope)
client = gspread.authorize(creds)
# Provide the Google Sheet Id
gs1 = client.open_by_key('1OsyLoK7MP-6LnLnJbmKKjrnOmZtDEVJ__6f75HWw_uo')
ws1 = gs1.sheet1
while True:
# read the serial data
line = ser.readline().decode().strip()
# split the data into GSR and EMG values
gsr_data, emg_data = map(int, line.split(','))
# get the current time
current_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
# create a row of data to append to the worksheet
row = [user_id, gsr_data, emg_data, current_time]
# append the row to the worksheet
ws1.append_row(row)
print("UserID: " , user_id)
print("GSR Data: " , gsr_data)
print("EMG Data: " , emg_data)
print("Time: " , current_time)
# wait for 5 minutes before reading data again
time.sleep(1)
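Not a full rewrite of the script above, just a minimal sketch of the Firebase side, assuming the firebase_admin SDK, a Realtime Database, and hypothetical credential and URL values:
import firebase_admin
from firebase_admin import credentials, db

# Hypothetical service-account file and database URL.
cred = credentials.Certificate("/path/to/serviceAccountKey.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project-id-default-rtdb.firebaseio.com/"
})

def push_reading(user_id, gsr_data, emg_data, current_time):
    # Appends one reading under /readings with an auto-generated key.
    db.reference("readings").push({
        "user_id": user_id,
        "gsr_data": gsr_data,
        "emg_data": emg_data,
        "time": current_time,
    })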
|
0c55af519749a1562e8547125a2ce0f9
|
{
"intermediate": 0.5679903626441956,
"beginner": 0.2736339271068573,
"expert": 0.15837572515010834
}
|
677
|
can you make this code send user_id, gsr_data, emg_data, current_time to a firebase database
import serial
import time
import gspread
from oauth2client.service_account import ServiceAccountCredentials
from datetime import datetime
import tensorflow.lite as lite
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
user_id = "1"
# Set up the serial connection to the Arduino Uno board
ser = serial.Serial('/dev/ttyACM0', 9600)
# Set up credentials and connect to Google Sheets
scope = ['https://spreadsheets.google.com/feeds', 'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('/home/salvador/Downloads/credentials.json', scope)
client = gspread.authorize(creds)
# Open the sheet and get the new data
sheet1 = client.open("Projectdata").worksheet("Sheet1")
training_data = pd.DataFrame(sheet1.get_all_records())
# Get Sheet2 for appending the new data
sheet2 = client.open("Projectdata").worksheet("Sheet2")
# Load the TensorFlow Lite model
interpreter = lite.Interpreter(model_path="/home/salvador/Downloads/model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Create a scaler and fit it to the training data
scaler = MinMaxScaler()
scaler.fit(training_data[['GSRdata', 'EMGdata']])
while True:
# Read the serial data
line = ser.readline().decode().strip()
# Split the data into GSR and EMG values
gsr_data, emg_data = map(int, line.split(','))
# Get the current time
current_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
# Preprocess the data and make a prediction
input_data = np.array([gsr_data, emg_data], dtype=np.float32)
input_data = scaler.transform(input_data.reshape(1, -1))
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]['index'])
predicted_label = np.argmax(prediction, axis=1)
soreness_label = ''
if predicted_label == 0:
soreness_label = 'relaxed'
elif predicted_label == 1:
soreness_label = 'tense'
elif predicted_label == 2:
soreness_label = 'exhausted'
else:
print('Unknown category')
# Create a row of data to append to the worksheet
row = [user_id,gsr_data, emg_data, current_time,soreness_label]
# Append the row to Sheet2
sheet2.append_row(row)
print("UserID: " , user_id)
print("GSR Data: " , gsr_data)
print("EMG Data: " , emg_data)
print("Time: " , current_time)
print("Soreness: ", soreness_label)
# Wait for 5 seconds before reading data again
time.sleep(5)
|
252e158413e480b6645844685a91c121
|
{
"intermediate": 0.41639238595962524,
"beginner": 0.3864298164844513,
"expert": 0.19717776775360107
}
|
678
|
const async_hooks = require('async_hooks'); // import needed for the calls below
function foo() {
const eid_1 = async_hooks.executionAsyncId()
const pid_1 = async_hooks.triggerAsyncId()
console.log('async_hooks.executionAsyncId(): ', eid_1)
console.log('async_hooks.triggerAsyncId(): ', pid_1)
console.log("Start of foo");
}
foo();
|
9309eddab36fbadf2b272f75575f779a
|
{
"intermediate": 0.32730504870414734,
"beginner": 0.45896780490875244,
"expert": 0.21372713148593903
}
|
679
|
hive reports the error: Error running query: java.lang.NoClassDefFoundError: org/apache/tez/runtime/api/Event
|
b31be4485b8c45d924b8093cce6333aa
|
{
"intermediate": 0.3653029203414917,
"beginner": 0.4451913833618164,
"expert": 0.1895056813955307
}
|
680
|
I am coding a custom Android soft keyboard and encountering a bug where, when the keyboard is downsized, the keys shrink as expected but the keyboard does not drop down to the bottom of the screen. Only when certain keys are pressed on the downsized keyboard does it drop to the bottom of the screen.
There is no bug when the keyboard grows in size. In the input method service's `onStartInputView`, I have the following:
|
34707d4d6ae16328e3aad079b26c2bd8
|
{
"intermediate": 0.3814776539802551,
"beginner": 0.3662436902523041,
"expert": 0.2522786259651184
}
|
681
|
Generate a python code optimizing HARTMANN 6-DIMENSIONAL FUNCTION with plot graph
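A minimal sketch under the commonly cited Hartmann 6-D constants (worth verifying against a reference before relying on them), using scipy's differential evolution; the plot is only hinted at in a comment:
import numpy as np
from scipy.optimize import differential_evolution

# Commonly cited Hartmann 6-D constants (verify against a reference).
ALPHA = np.array([1.0, 1.2, 3.0, 3.2])
A = np.array([[10, 3, 17, 3.5, 1.7, 8],
              [0.05, 10, 17, 0.1, 8, 14],
              [3, 3.5, 1.7, 10, 17, 8],
              [17, 8, 0.05, 10, 0.1, 14]])
P = 1e-4 * np.array([[1312, 1696, 5569, 124, 8283, 5886],
                     [2329, 4135, 8307, 3736, 1004, 9991],
                     [2348, 1451, 3522, 2883, 3047, 6650],
                     [4047, 8828, 8732, 5743, 1091, 381]])

def hartmann6(x):
    # f(x) = -sum_i alpha_i * exp(-sum_j A_ij (x_j - P_ij)^2)
    inner = np.sum(A * (np.asarray(x) - P) ** 2, axis=1)
    return float(-np.sum(ALPHA * np.exp(-inner)))

result = differential_evolution(hartmann6, bounds=[(0, 1)] * 6, seed=0)
print(result.x, result.fun)  # global minimum is roughly -3.32237
# For a plot, record hartmann6 values per iteration via the callback
# argument of differential_evolution and draw them with matplotlib.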
|
a06b79a46b8172ea9f694fee40d2031d
|
{
"intermediate": 0.12137851864099503,
"beginner": 0.06657644361257553,
"expert": 0.8120450377464294
}
|
682
|
I am coding a custom Android soft keyboard and encountering a bug where, when the keyboard is downsized, the keys shrink as expected but the keyboard does not drop down to the bottom of the screen.
There is no bug when the keyboard grows in size. In the input method service's `onStartInputView`, I have the following:
|
c00ec70764ef3bb400ec099fe32245ad
|
{
"intermediate": 0.35885676741600037,
"beginner": 0.4003942906856537,
"expert": 0.24074897170066833
}
|
683
|
In mongo, I have an app schema which contains an array of services. Create an index which will ensure service names are unique per application
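A sketch using pymongo with hypothetical names; note that a MongoDB unique index cannot deduplicate values inside a single document's array, so per-document uniqueness is handled at write time here:
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # hypothetical URI
apps = client["mydb"]["apps"]                      # hypothetical db/collection

# Guards the (application _id, service name) pair across documents.
apps.create_index([("_id", ASCENDING), ("services.name", ASCENDING)],
                  unique=True, name="uniq_service_name_per_app")

# Within one app document, $addToSet skips exact-duplicate service entries.
apps.update_one({"_id": "app-1"},  # hypothetical application id
                {"$addToSet": {"services": {"name": "billing"}}})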
|
3410aeacd29d21a03c960dcac642abad
|
{
"intermediate": 0.419045627117157,
"beginner": 0.26499754190444946,
"expert": 0.31595686078071594
}
|
684
|
I am coding a custom Android soft keyboard app. In the settings page of the app, there is a SeekBar that resizes the height of the keyboard.
I am encountering a bug where, when the keyboard is downsized, the keys shrink as expected but the keyboard does not drop down to the bottom of the screen. There is no bug when the keyboard grows in size.
In the input method service's `onStartInputView`, I have the following:
|
b7d494628251c6ff6de58f2d42b731ea
|
{
"intermediate": 0.3624108135700226,
"beginner": 0.4041331112384796,
"expert": 0.23345601558685303
}
|
685
|
can you make this script send the notification as an email to another address, instead of just showing it in the shell? Make the recipient email an input prompt at the start of the script. (a sketch of the email-sending side follows after the script)
since you can't make a long message, please split it into two parts. Make your first answer the first part, and after that I'll tell you to make the latter part.
here's the script:
import time
import requests
import msvcrt
from plyer import notification
def get_thread_posts(board, thread_id):
url = f"https://a.4cdn.org/{board}/thread/{thread_id}.json"
response = requests.get(url)
if response.status_code == 200:
data = response.json()
return data["posts"]
return None
def notify(reply_count, post, board, thread_id):
post_content = post.get('com', 'No content').replace('<br>', '\n')
post_content = post_content.strip()
post_link = f"https://boards.4channel.org/{board}/thread/{thread_id}#p{post['no']}"
post_header = f"Post with {reply_count} replies ({post_link}):"
def count_replies(posts):
reply_count = {}
for post in posts:
com = post.get("com", "")
hrefs = post.get("com", "").split('href="#p')
for href in hrefs[1:]:
num = int(href.split('"')[0])
if num in reply_count:
reply_count[num] += 1
else:
reply_count[num] = 1
return reply_count
def monitor_thread(board, thread_id, min_replies, delay):
seen_posts = set()
print(f"Monitoring thread {board}/{thread_id} every {delay} seconds…")
tries = 0
while True:
posts = get_thread_posts(board, thread_id)
if posts:
print("Checking posts for replies…")
reply_count = count_replies(posts)
for post in posts:
post_id = post['no']
replies = reply_count.get(post_id, 0)
if replies >= min_replies and post_id not in seen_posts:
print(f"Found post with {replies} replies. Sending notification…")
notify(replies, post, board, thread_id)
seen_posts.add(post_id)
# Check if the thread is archived
is_archived = any(post.get('archived') for post in posts)
if is_archived:
print(f"Thread {board}/{thread_id} is archived. Checking catalog for /aicg/ thread and switching to the newest one...")
latest_aicg_thread = get_latest_aicg_thread(board)
if latest_aicg_thread:
thread_id = latest_aicg_thread
tries = 0
else:
tries += 1
print(f"No /aicg/ thread found in the catalog. Retrying ({tries}/5)...")
if tries >= 5:
print("No /aicg/ thread found after 5 tries. Exiting in 1 minute...")
time.sleep(60)
if not msvcrt.kbhit():
return
time.sleep(delay)
def get_latest_aicg_thread(board):
url = f"https://a.4cdn.org/{board}/catalog.json"
response = requests.get(url)
if response.status_code == 200:
data = response.json()
threads = []
for page in data:
for thread in page['threads']:
if thread.get('sub') and "/aicg/" in thread['sub']:
threads.append(thread)
if threads:
latest_thread = max(threads, key=lambda t: t['last_modified'])
return latest_thread['no']
return None
if __name__ == "__main__":
BOARD = "g"
THREAD_ID = input("Enter thread ID to monitor: ")
MIN_REPLIES = int(input("Enter minimum replies for notification: "))
DELAY = int(input("Enter delay time in minutes: ")) * 60
monitor_thread(BOARD, THREAD_ID, MIN_REPLIES, DELAY)
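A minimal sketch of the email-sending side only, assuming a Gmail SMTP server and an app-specific password (both placeholders):
import smtplib
from email.message import EmailMessage

def send_email_notification(recipient, subject, body):
    # Placeholder sender credentials; an app-specific password is assumed.
    sender = "sender@example.com"
    password = "app-password-here"
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, password)
        server.send_message(msg)

# e.g. call it from notify() with the post header and content:
# send_email_notification(RECIPIENT, post_header, post_content)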
|
b4e586f263cdb7ed177aa5487b9527ec
|
{
"intermediate": 0.4121207296848297,
"beginner": 0.35967200994491577,
"expert": 0.22820724546909332
}
|
686
|
import React, { useState, useEffect } from "react";
import { Autocomplete, AutocompleteGetTagProps, Chip, TextField, Typography } from "@mui/material";
import { createTag } from "../../../actions/cicap-diary-widgets";
import { useAuthContext } from "../../AuthProvider/AuthProvider";
import { createTradeTag, deleteTradeTag } from "../../../actions/cicap-diary-trades-tags";
import { TradesColumnTagPropsType } from "./TradesColumnTag.props";
import CheckIcon from '@mui/icons-material/Check';
import CloseIcon from '@mui/icons-material/Close';
import { styled } from '@mui/material/styles';
export interface Tag {
id: string;
name: string;
}
export interface TradesColumnTagPropsType {
tradeTags: Tag[]
tags: Tag[]
tradeId: string
fetchTags: () => void
}
const TradesColumnTag = ({ tradeTags, tags, tradeId, fetchTags }: TradesColumnTagPropsType) => {
const [selectTags, setSelectTags] = useState<string[]>(tradeTags.map(t => t.name));
const { diaryToken } = useAuthContext();
const [newTag, setNewTag] = useState<string>("");
useEffect(() => {
setSelectTags(tradeTags.map(t => t.name))
}, [tradeTags])
const createNewTag = (newTag: string) => {
if (!diaryToken || newTag === "") return
createTag({ name: newTag }, diaryToken)
.then(data => {
if (!data) return;
fetchTags();
});
}
const createNewTradeTag = (tagId: string) => {
if (!diaryToken) return
createTradeTag(tradeId, tagId, diaryToken)
.then(data => {
if (!data) return;
const tags = data.data
setSelectTags(tags.map(tag => tag.name))
});
}
if (newTag !== "") {
const newTagId = tags.find(t => t.name === newTag)
if (newTagId) {
createNewTradeTag(newTagId.id)
setNewTag("")
}
}
const deleteTag = (tagId: string) => {
if (!diaryToken) return
deleteTradeTag(tradeId, tagId, diaryToken)
.then(data => {
if (!data) return;
const tags = data.data
if (tags.length > 0) {
setSelectTags(tags.map(tag => tag.name))
} else {
setSelectTags([])
}
});
}
const handleTagsChange = (event: React.ChangeEvent<{}>, value: string[]) => {
setSelectTags(value)
const toBeRemoved = selectTags.filter(tag => !value.includes(tag));
if (toBeRemoved.length > 0) {
const filteredTagsToBeRemoved = tags.filter(tag => toBeRemoved.includes(tag.name));
if (filteredTagsToBeRemoved.length === 0) return
if (filteredTagsToBeRemoved.length > 1) {
let i = 0;
const interval = setInterval(() => {
deleteTag(filteredTagsToBeRemoved[i].id);
i++;
if (i >= filteredTagsToBeRemoved.length) {
clearInterval(interval);
}
}, 100);
} else if (filteredTagsToBeRemoved.length === 1) {
deleteTag(filteredTagsToBeRemoved[0].id);
}
}
if (value.length === 0) return
if (value.length < selectTags.length) return
const filteredData = tags.filter(tag => tag.name === value[value.length - 1]);
if (filteredData.length > 0) {
createNewTradeTag(filteredData[0].id)
}
const newArr = []
for (let i = 0; i < value.length; i++) {
let isPresent = false;
for (let j = 0; j < tags.length; j++) {
if (value[i] === tags[j].name) {
isPresent = true;
break;
}
}
if (!isPresent) {
newArr.push(value[i]);
}
}
if (newArr.length > 0) {
createNewTag(newArr[0])
setNewTag(newArr[0])
}
};
const randomColor = () => {
const colors = ["#FFE9B8", "#D0E9D7", "#D6E6FF"];
return colors[Math.floor(Math.random() * colors.length)];
};
interface TagProps extends ReturnType<AutocompleteGetTagProps> {
label: string;
}
const Tag = (props: TagProps) => {
const { label, onDelete, ...other } = props;
return (
<div {...other}>
<span>{label}</span>
<CloseIcon onClick={onDelete} />
</div>
);
}
const StyledTag = styled(Tag)<TagProps>(
({ theme }) => `
display: flex;
align-items: center;
margin: 2px;
line-height: 1.5;
background-color: ${randomColor()};
border: none;
border-radius: 2px;
box-sizing: content-box;
padding: 0 10px 0 10px;
& span {
overflow: hidden;
white-space: nowrap;
text-overflow: ellipsis;
}
& svg {
display: none;
}
&:hover {
padding: 0 2px 0 10px;
& svg {
display: block;
font-size: 18px;
cursor: pointer;
margin-top: -1px;
}
}
`,
);
const truncate = (str: any, n: number) => {
if (str) return str.length > n ? str.substr(0, n) + '...' : str
}
return (
<Autocomplete
multiple
options={tags.map(t => t.name)}
freeSolo
value={selectTags}
onChange={handleTagsChange}
sx={{ minWidth: "180px", fontSize: "8px" }}
renderTags={(value: string[], getTagProps) => {
return value.map((option: any, index: number) => (
<span key={index}>
<StyledTag label={truncate(option, 15)} {...getTagProps({ index })} />
</span>
))
}}
renderInput={(params) => (
<TextField {...params}
sx={{"& input": { fontSize: "12px"}, "& :before": { borderBottom: "3px solid #CFCFCF" }}}
placeholder={selectTags.length === 0 ? "пробой уровня" : ""}
variant="standard"
/>
)}
renderOption={(props, option) => (
<li style={{ padding: 4, }} {...props}>
<span style={{ backgroundColor: randomColor(), borderRadius: 3, marginLeft: 4, padding: "0px 10px 0px 10px" }} >
{option}
</span>
</li>
)}
/>
)
}
export default TradesColumnTag
Rewrite the component so that
const [selectTags, setSelectTags] = useState() is an array of objects of the shape interface Tag {
name: string;
color: string;
}
so that when the component renders, useEffect(() => {
setSelectTags(tradeTags.map(t => t.name))
}, [tradeTags]) sets not only the name but also assigns a new color, and so that handleTagsChange, which receives an array of strings, keeps the colors of names already present in the array, while each new value being set into state gets a new color attached.
|
69d71441eafbc2c27dde15caf155f43b
|
{
"intermediate": 0.4072017967700958,
"beginner": 0.46349772810935974,
"expert": 0.12930043041706085
}
|
687
|
1) A CNN and VGG16-based image classifier that would give us how likely a person has a heart disease (a minimal transfer-learning sketch follows this list)
2) The heart diseases can be Angina pectoris, Hypotension, Coronary Artery Disease, and Cardiovascular disease, or any other disease whose dataset of echocardiograms is available. A dataset of around 1000 images per disease would be ideal
3)The dataset used should be Echocardiograms of the diseases, from which the CNN algorithm will extract features
4)we need a trained model something that will readily integrate with a website. And we may require your help in integrating it to our website
5)we need to know the diseases and their respective accuracy.
6) We would likely need the following things from you: the dataset of echocardiograms, the file wherein you built the model, the exported model along with its accuracy scores for each diseases
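A minimal transfer-learning sketch, assuming Keras/TensorFlow, 224x224 RGB echocardiogram images, and four disease classes; the class count and the fit call's datasets are placeholders:
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # placeholder: one per disease

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed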
|
feda84e041f43e855e7ec252320f358c
|
{
"intermediate": 0.14944235980510712,
"beginner": 0.09523607045412064,
"expert": 0.7553215622901917
}
|
688
|
can you make this script send the notification as an email to another address, instead of just showing it in the shell? Make the recipient email an input prompt at the start of the script.
here’s the script:
import time
import requests
import msvcrt
from plyer import notification
def get_thread_posts(board, thread_id):
url = f"https://a.4cdn.org/{board}/thread/{thread_id}.json"
response = requests.get(url)
if response.status_code == 200:
data = response.json()
return data["posts"]
return None
def notify(reply_count, post, board, thread_id):
post_content = post.get('com', 'No content').replace('<br>', '\n')
post_content = post_content.strip()
post_link = f"https://boards.4channel.org/{board}/thread/{thread_id}#p{post['no']}"
post_header = f"Post with {reply_count} replies ({post_link}):"
def count_replies(posts):
reply_count = {}
for post in posts:
com = post.get("com", "")
hrefs = post.get("com", "").split('href="#p')
for href in hrefs[1:]:
num = int(href.split('"')[0])
if num in reply_count:
reply_count[num] += 1
else:
reply_count[num] = 1
return reply_count
def monitor_thread(board, thread_id, min_replies, delay):
seen_posts = set()
print(f"Monitoring thread {board}/{thread_id} every {delay} seconds…“)
tries = 0
while True:
posts = get_thread_posts(board, thread_id)
if posts:
print("Checking posts for replies…")
reply_count = count_replies(posts)
for post in posts:
post_id = post['no']
replies = reply_count.get(post_id, 0)
if replies >= min_replies and post_id not in seen_posts:
print(f"Found post with {replies} replies. Sending notification…")
notify(replies, post, board, thread_id)
seen_posts.add(post_id)
# Check if the thread is archived
is_archived = any(post.get('archived') for post in posts)
if is_archived:
print(f"Thread {board}/{thread_id} is archived. Checking catalog for /aicg/ thread and switching to the newest one…")
latest_aicg_thread = get_latest_aicg_thread(board)
if latest_aicg_thread:
thread_id = latest_aicg_thread
tries = 0
else:
tries += 1
print(f"No /aicg/ thread found in the catalog. Retrying ({tries}/5)…”)
if tries >= 5:
print(“No /aicg/ thread found after 5 tries. Exiting in 1 minute…”)
time.sleep(60)
if not msvcrt.kbhit():
return
time.sleep(delay)
def get_latest_aicg_thread(board):
url = f"https://a.4cdn.org/{board}/catalog.json"
response = requests.get(url)
if response.status_code == 200:
data = response.json()
threads = []
for page in data:
for thread in page['threads']:
if thread.get('sub') and "/aicg/" in thread['sub']:
threads.append(thread)
if threads:
latest_thread = max(threads, key=lambda t: t['last_modified'])
return latest_thread['no']
return None
if __name__ == "__main__":
BOARD = "g"
THREAD_ID = input("Enter thread ID to monitor: ")
MIN_REPLIES = int(input("Enter minimum replies for notification: "))
DELAY = int(input("Enter delay time in minutes: ")) * 60
monitor_thread(BOARD, THREAD_ID, MIN_REPLIES, DELAY)
|
fba98fbf97b49914e85f7697d958a701
|
{
"intermediate": 0.41259923577308655,
"beginner": 0.4419596195220947,
"expert": 0.14544108510017395
}
|
689
|
create or replace synonym bis_ibr.IBR_RECEIPTS for IBR_RECEIPTS
ORA-00955: name is already used by an existing object
|
51818f5ff0b103076df1fe277865ad42
|
{
"intermediate": 0.36508047580718994,
"beginner": 0.18319819867610931,
"expert": 0.45172131061553955
}
|
690
|
I am coding a custom Android soft keyboard app. In the settings page of the app, there is a SeekBar that resizes the height of the keyboard.
I am encountering a bug where, when the keyboard is downsized using the SeekBar, the keys shrink as expected but the keyboard does not drop down to the bottom of the screen. Only if the SeekBar is touched again does the keyboard get repositioned at the bottom of the screen.
(There is no bug when the keyboard grows in size.)
In the input method service's `onStartInputView`, I have the following:
|
036e9cdd6d99b250ca1908bc3d5a5c6b
|
{
"intermediate": 0.427354633808136,
"beginner": 0.3541746735572815,
"expert": 0.2184707075357437
}
|
691
|
I want you to write a script that would search value x from 2160 to 3564, and y from -768 to 1880, and change their values, add 400 to them, path to file A:\Programs\Obsidian\classic\Life\Kaizen\KanBan April 2023.canvas
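A minimal sketch, assuming the .canvas file is JSON whose top-level "nodes" list carries numeric x/y fields (the usual Obsidian Canvas layout); back up the file before running anything like this:
import json

PATH = r"A:\Programs\Obsidian\classic\Life\Kaizen\KanBan April 2023.canvas"

with open(PATH, encoding="utf-8") as f:
    canvas = json.load(f)

for node in canvas.get("nodes", []):
    # Shift only nodes whose position falls inside the given window.
    if 2160 <= node.get("x", 0) <= 3564 and -768 <= node.get("y", 0) <= 1880:
        node["x"] += 400
        node["y"] += 400

with open(PATH, "w", encoding="utf-8") as f:
    json.dump(canvas, f, indent=2)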
|
e1f23477210df58f8b9c5282e0158580
|
{
"intermediate": 0.38811352849006653,
"beginner": 0.1446126401424408,
"expert": 0.46727386116981506
}
|
692
|
I want you to write a script that would search value x from 2160 to 3564, and y from -768 to 1880, and change their values, add 400 to them, path to file A:\Programs\Obsidian\classic\Life\Kaizen\KanBan April 2023.canvas
|
1b7b89c1c5efa4681db2f24870e9faf4
|
{
"intermediate": 0.38811352849006653,
"beginner": 0.1446126401424408,
"expert": 0.46727386116981506
}
|
693
|
This code:
# Load required packages
library(brms)
# Load or create your data
# The data should have columns:
# - response_time: The time taken for each decision
# - choice: The actual choice made by a participant (e.g., 1, 2, or 3 for a 3-alternative forced-choice decision)
# - condition: The experimental condition (if applicable)
# data <- …
# Fit the diffusion model
fit <- brm(
formula = response_time | dec(rate) ~ 1 + level,
family = wiener(),
data = data,
prior = c(
# Priors for the model parameters
prior(normal(0, 1), class = "Intercept"),
prior(normal(0, 1), class = "b"),
prior(uniform(0, 1), class = "phi"),
prior(uniform(0, 1), class = "theta")
),
chains = 4,
iter = 2000,
warmup = 1000,
control = list(max_treedepth = 15)
)
# Check the results
summary(fit)
Returns:
Error: The following priors do not correspond to any model parameter:
phi ~ uniform(0, 1)
theta ~ uniform(0, 1)
Function 'get_prior' might be helpful to you.
|
f96c22712a94f6f70309dd4e705f181b
|
{
"intermediate": 0.4800994396209717,
"beginner": 0.16918164491653442,
"expert": 0.3507189452648163
}
|
694
|
Flink MySQL code implementation
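One reading of this request, as a minimal PyFlink Table API sketch writing to MySQL through the JDBC connector; the connector jar on the classpath, the credentials, and the table names are all assumptions:
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Sink table backed by MySQL via the Flink JDBC connector (jar assumed present).
t_env.execute_sql("""
    CREATE TABLE mysql_sink (
        id BIGINT,
        name STRING
    ) WITH (
        'connector' = 'jdbc',
        'url' = 'jdbc:mysql://localhost:3306/testdb',
        'table-name' = 'users',
        'username' = 'root',
        'password' = 'secret'
    )
""")

t_env.execute_sql("INSERT INTO mysql_sink VALUES (1, 'alice')").wait()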
|
130942937c7e4370b1ad4aabf6f51f3b
|
{
"intermediate": 0.2963688373565674,
"beginner": 0.2610434591770172,
"expert": 0.4425877034664154
}
|
695
|
I want to create a chatbot that is powered by AI, starting with ChatGPT and then moving to my own AI.
|
4f64e6539930ef0f48dfc92e2fc7d4e6
|
{
"intermediate": 0.23035338521003723,
"beginner": 0.23332853615283966,
"expert": 0.5363180637359619
}
|
696
|
MyBatisPlusSink
|
ceb105002f4a0b6fb20b67f59c45d18b
|
{
"intermediate": 0.3610963225364685,
"beginner": 0.2803078293800354,
"expert": 0.35859590768814087
}
|
697
|
how to reset the Remote Debugging settings in Safari
|
13456cad0893a912a8f41be55048e514
|
{
"intermediate": 0.6156879663467407,
"beginner": 0.24556052684783936,
"expert": 0.1387515515089035
}
|
698
|
Here are the docs for the pain 008.001.02 format, write me a conversion library for js that can take a csv as input and send back an xml in the correct iso 20022 format
---
Input XML File for PAIN.008.001.02
This page contains a description of the input XML file for the PAIN.008.001.02 specification, which is used for ACH Direct Debit (ACH Pull) CCD and PPD payments.
The following is a high-level example of a PAIN.008.001.02 input XML file:
XML
<?xml version="1.0" encoding="UTF-8" ?>
<Document xmlns="urn:iso:std:iso:20022:tech:xsd:pain.008.001.02">
<CstmrDrctDbtInitn>
<GrpHdr>
...
</GrpHdr>
<PmtInf>
...
</PmtInf>
</CstmrDrctDbtInitn>
</Document>
The root document tag for the input file is CstmrDrctDbtInitn. It contains a Group Header and at least one Payment Information building block (corresponding to a batch). The Group Header contains metadata that relates to all the batches in the file. Each batch contains meta data for all the transactions within.
Group Header (GrpHdr)
Your XML Input file must include a group header using the GrpHdr building block. This building block is only present once in a file and contains a set of characteristics shared by all the individual instructions included in the message.
XML
<GrpHdr>
<MsgId>ABCDEFG090301</MsgId>
<CreDtTm>2013-08-28T17:12:44</CreDtTm>
<NbOfTxs>5</NbOfTxs>
<CtrlSum>43236.93</CtrlSum>
<InitgPty>
...
</InitgPty>
</GrpHdr>
Here's an explanation of the different tags in the GrpHdr block as shown above:
MsgId: An ID number for the message, which you assign before sending the file to CRB. Best practice: Use a new MsgId for each file.
CreDtTm: The date and time the payment instruction was created.
NbOfTxs: The total number of transaction instruction blocks in the message. Each instruction corresponds to one transaction, and will form a separate instruction block. For more information on the transaction instruction blocks, see the Direct Debit Transaction Information section below.
CtrlSum: The total amount (as a number) of all the instructions included in the file, irrespective of currencies, used as a control sum. Note: Unicorn currently supports USD only.
InitgPty: Indicates the party initiating the transfer. This is the party initiating the credit transfer on behalf of the debtor. See the Initiating Party section that follows for more details on the tags inside the InitgPty block.
Note: All the above elements in the GrpHdr block are required.
Initiating Party (InitgPty)
The Initiating Party is the party initiating the payment. This is the party that initiates the credit transfer on behalf of the debtor.
XML
<InitgPty>
<Nm>John Doe Corporation</Nm>
<Id>
<OrgId>
<Othr>
<Id>0123456789</Id>
</Othr>
</OrgId>
</Id>
</InitgPty>
Here's an explanation of the different tags in the InitgPty block as shown above:
Nm: Name by which the originating party is known and which is usually used to identify that party.
Id: Identifier. The parent element of the OrgId element containing the identifying information about the initiating party.
OrgId: Organization Identification block containing the initiating party's identification in its child elements.
Othr: A block containing the initiating party's identification in a child element.
Id: The unique and unambiguous identifier of the initiating party, which can be an organization or an individual person.
Note: All the above elements in the InitgPty block are required.
Payment Information (PmtInf)
The PmtInf block contains payment information per batch in your file. You must include at least one PmtInf block in your file. In most cases the input file contains only one PmtInf block, with one set of payment instructions. This enables you to indicate general properties (such as execution date, creditor information, and credited account) once at the level of the PmtInf block.
You might want to use multiple PmtInf blocks if the file includes instructions to credit more than one account. In that case, you need a PmtInf block for each account that is going to be credited.
XML
<PmtInf>
<PmtInfId>DOMC10000025</PmtInfId>
<PmtMtd>DD</PmtMtd>
<BtchBookg>false</BtchBookg>
<NbOfTxs>5</NbOfTxs>
<CtrlSum>12.01</CtrlSum>
<PmtTpInf>
...
</PmtTpInf>
<ReqdColltnDt>2020-08-21</ReqdColltnDt>
<Cdtr>
...
</Cdtr>
<CdtrAcct>
...
</CdtrAcct>
<CdtrAgt>
...
</CdtrAgt>
<DrctDbtTxInf>
...
</DrctDbtTxInf>
</PmtInf>
Below is an explanation of the different top-level tags and blocks found in the PmtInf block as shown above. These tags and blocks appear once for each PmtInf block, except for the DrctDbtTxInf block, which can appear multiple times, representing multiple transactions. Each of the other tags and blocks applies to all DrctDbtTxInf blocks that appear in the PmtInf block:
PmtInf: This block contains payment information, such as creditor and payment type information. You can use this block repeatedly within the same input file. Note: One or more instances of the PmtInf element is required.
PmtInfId: The unique ID number for this batch, which is assigned by the originating party. Note: This element is required.
PmtMtd: Payment Method. For direct debit transactions you should define it as "DD". Note: This element is required.
BtchBookg: Defines how CRB should handle the debit. If the tag is set to "TRUE", then all debit instructions will be handled as one consolidated debit. If the tag is set to "FALSE", it means that you want each debit to be handled separately. Note: Currently the system will always behave as if the value is "FALSE".
NbOfTxs: The number of transactions within this batch. Note: This element is required.
CtrlSum: The sum total of all instructions within this batch, irrespective of currencies, used as a control sum. Note: Unicorn currently supports "USD" only. Note: This element is required.
PmtTpInf: The Payment Type Information block, including a priority level. See the Payment Type Information section below for more details on the PmtTpInf block. Note: This element is required.
ReqdColltnDt: Requested Collection Date. The date on which the originator's account is to be debited. This tag currently supports current dates. Support for future dates will come in a future release. Note: This element is required.
Cdtr: Creditor block. Contains the name and postal address of the originator. See the Creditor section below for an example of the Cdtr block. Note: This element is required.
CdtrAcct: Creditor Account. The account of the originator that will be credited. See the Creditor Account section below for more details on the CdtrAcct block. Note: This element is required.
CdtrAgt: Creditor Agent block. Details on the creditor's financial institution. See the Creditor Agent section below for more details on the CdtrAgt block. Note: This element is required.
DrctDbtTxInf: Direct Debit Transaction Information. Includes elements related to the debit side of the transaction, such as debtor and remittance information. This block can appear multiple times within the same PmtInf block. See the Direct Debit Transaction Information section below for more details on the DrctDbtTxInf block. Note: One or more of the CdtTrfTxInf element is required.
Payment Type Information (PmtTpInf)
The PmtTpInf block contains information on the payment type.
XML
<PmtTpInf>
<SvcLvl>
<Cd>NURG</Cd>
</SvcLvl>
<LclInstrm>
<Prtry>CCD</Prtry>
</LclInstrm>
</PmtTpInf>
Here's an explanation of the different tags in the PmtTpInf block as shown in the examples above:
SvcLvl: Service Level. Contains the payment urgency level (Cd) in a child element.
Cd: Code: Payment urgency level. Note: This element is required. For direct debit transactions this has a fixed value of "NURG".
LclInstrm: Local Instrument. Used to specify a local instrument, local clearing option and/or to further qualify the service or service level. Note: This element is required.
Prtry: Proprietary. Note: This element is required. The value must be the ACH Pull type ("CCD" or "PPD").
Creditor (Cdtr)
The Cdtr block contains information on the name, postal address and ID of the originator (creditor).
XML
<Cdtr>
<Nm>John Doe Corporation</Nm>
<PstlAdr>
<Ctry>US</Ctry>
<AdrLine>999 Any Street, 13th Floor</AdrLine>
<AdrLine>99999 Anytown</AdrLine>
</PstlAdr>
<Id>
<OrgId>
<Othr>
<Id>0123456789</Id>
</Othr>
</OrgId>
</Id>
</Cdtr>
Nm: Creditor name.
PstlAdr: A block containing the postal address of the creditor, including country and address lines.
Id: Identification block, containing information used to identify the creditor in child elements.
OrgId: Organization Identification block containing the creditor identification in its child elements.
Othr: A block containing the creditor identification in a child element.
Id: A unique and unambiguous identifier of the creditor. This ID is identical to the Id field in the InitgPty block described above.
Creditor Account (CdtrAcct)
The CdtrAccount block contains information on the account of the originator that will be credited.
XML
<CdtrAcct>
<Id>
<Othr>
<Id>0123456789</Id>
</Othr>
</Id>
<Ccy>USD</Ccy>
</CdtrAcct>
Here's an explanation of the different tags in the CdtrAccount block as shown above:
Id: The sub-block containing the creditor's account identification information.
Othr: The sub-block containing the creditor's Id tag.
Id: The unique identifier of the creditor's account.
Ccy: Currency. The ISO currency code of the debtor's account.
Note: All the above elements in the CdtrAcct block are required.
Creditor Agent (CdtrAgt)
The CdtrAgt block contains information on the originator's financial institution.
XML
<CdtrAgt>
<FinInstnId>
<ClrSysMmbId>
<MmbId>123456789</MmbId>
</ClrSysMmbId>
<PstlAdr>
<Ctry>US</Ctry>
</PstlAdr>
</FinInstnId>
</CdtrAgt>
Here's an explanation of the different tags in the CdtrAgt block as shown above:
FinInstnId: Financial Institution Identification sub-block.
ClrSysMmbId: Clearing System Member Identification sub-block. Contains Information used to identify a member within a clearing system.
MmbId: Member Identification: Identifier (routing number) of the creditor's financial institution in the creditor's clearing system. For Direct Debit, this value must always be CRB's routing number (021214891).
PstlAdr: Postal Address sub-block.
Ctry: Country code. The country code for the debtor's financial institution.
Note: All the above elements in the CdtrAgt block are required.
Direct Debit Transaction Information (DrctDbtTxInf)
The DrctDbtTxInf block includes elements related to the debit side of the transaction, such as debtor and remittance information for the transaction. You can use this block repeatedly within the same PmtInf block. The number of occurrences of the DrctDbtTxInf block within a file is indicated by the NbOfTxs field in the Group Header (GrpHdr).
XML
<DrctDbtTxInf>
<PmtId>
<EndToEndId>100DDEB000000</EndToEndId>
</PmtId>
<InstdAmt Ccy="USD">0.01</InstdAmt>
<DbtrAgt>
<FinInstnId>
<ClrSysMmbId>
<MmbId>123456789</MmbId>
</ClrSysMmbId>
<Nm>DUMMY</Nm>
<PstlAdr>
<Ctry>US</Ctry>
</PstlAdr>
</FinInstnId>
</DbtrAgt>
<Dbtr>
<Nm>John Doe</Nm>
<PstlAdr>
<Ctry>US</Ctry>
</PstlAdr>
</Dbtr>
<DbtrAcct>
<Id>
<Othr>
<Id>01234567890</Id>
</Othr>
</Id>
</DbtrAcct>
<RmtInf>
<Ustrd>Testing</Ustrd>
</RmtInf>
</DrctDbtTxInf>
Here's an explanation of the different tags in the DrctDbtTxInf block as shown above:
PmtId: Payment Identification sub-block. Provides identifying information regarding the transaction in child elements. Note: This element is required.
EndToEndId: End to End Identification: End-to-end reference number of the credit transfer. This information is sent to the beneficiary. Note: This element is required.
InstdAmt: Instructed Amount. The amount of the credit transfer in the indicated currency. Note: This element is required.
DbtrAgt: Debtor Agent block. Details on the debtor's financial institution for the transaction. See the Debtor Agent section below for more details on the DbtrAgt block. Note: This element is required.
Dbtr: The debtor sub-block. Contains details on the debtor for the transaction, including Nm (name) and PstlAdr (postal address) elements. Note: The Dbtr block and its Nm element are required. The PstlAdr element is not required.
DbtrAcct: Debtor account sub-block for the transaction, containing the debtor account number in its child elements. Note: This element is required.
Id: Identification sub-block. Contains an identification of the debtor account in child elements. Note: This element is required.
Othr: Sub-block containing the debtor's Id tag. Note: This element is required.
Id: The unique identifier of the debtor's account. Note: This element is required.
RmtInf: The remittance information to send along with the transaction. Note: This element is required for ACH Direct Debit.
Ustrd: Unstructured description of the transaction. Note: This element is required.
Debtor Agent (DbtrAgt)
The DbtrAgt block contains information on the debtor's financial institution.
XML
<DbtrAgt>
<FinInstnId>
<ClrSysMmbId>
<MmbId>123456789</MmbId>
</ClrSysMmbId>
<Nm>John Doe Bank</Nm>
<PstlAdr>
<Ctry>US</Ctry>
</PstlAdr>
</FinInstnId>
</DbtrAgt>
Here's an explanation of the different tags in the DbtrAgt block as shown above:
FinInstnId: Financial Institution Identification sub-block.
ClrSysMmbId: Clearing System Member Identification sub-block. Contains Information used to identify a member within a clearing system.
MmbId: Member Identification: Identification (routing number) of the debtor's financial institution in the debtor's clearing system.
Nm: Name of the debtor's financial institution.
PstlAdr: Postal Address sub-block.
Ctry: Country code. The ISO country code for the creditor's financial institution.
Note: All the above elements in the DbtrAgt block are required.
|
7ad50633b8c75a48f47046885c0b05c6
|
{
"intermediate": 0.4250049591064453,
"beginner": 0.3717614412307739,
"expert": 0.20323359966278076
}
|
699
|
flutter how to use EditableText
|
be36f905c58f4e331dde92eeeaf355ea
|
{
"intermediate": 0.5512300729751587,
"beginner": 0.2512036859989166,
"expert": 0.1975662261247635
}
|
700
|
Write code to recursively find all the direct urls to the files at https://arcjav.arcjavdb.workers.dev/0:/001-050/%E4%B8%8A%E5%8E%9F%E4%BA%9A%E8%A1%A3/
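A sketch only, assuming the URL serves a plain HTML directory index with relative links (true for many such worker-based indexes); it does not handle authentication, rate limits, or robots rules:
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def crawl(url, seen=None):
    """Recursively yield file URLs from an HTML directory listing."""
    seen = seen if seen is not None else set()
    if url in seen:
        return
    seen.add(url)
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"])
        if not link.startswith(url):
            continue  # stay inside the starting directory
        if link.endswith("/"):
            yield from crawl(link, seen)  # descend into subdirectories
        else:
            yield link  # direct file URL

# for f in crawl("<the directory URL above>"):
#     print(f)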
|
c8cc83fb1182617f31163992bbd4e077
|
{
"intermediate": 0.47031620144844055,
"beginner": 0.19839036464691162,
"expert": 0.3312934637069702
}
|
701
|
how can I generate a set number of augmentations using albumentations
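A minimal sketch assuming the albumentations and OpenCV packages and a hypothetical input image; each loop iteration re-samples the random transform, giving N distinct augmentations:
import cv2
import albumentations as A

transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.5),
    A.Rotate(limit=15, p=0.5),
])

image = cv2.imread("input.jpg")  # hypothetical path
N = 10  # the set number of augmentations to generate
for i in range(N):
    augmented = transform(image=image)["image"]
    cv2.imwrite(f"aug_{i}.jpg", augmented)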
|
d0269b17816ab2ee5087ae2f163562e3
|
{
"intermediate": 0.2768872082233429,
"beginner": 0.2040298581123352,
"expert": 0.5190829634666443
}
|
702
|
Here are the docs for the pain 008.001.02 format, write me a conversion library for js that can take a csv as input and send back an xml in the correct iso 20022 format, vanilla js please
---
Input XML File for PAIN.008.001.02
This page contains a description of the input XML file for the PAIN.008.001.02 specification, which is used for ACH Direct Debit (ACH Pull) CCD and PPD payments.
The following is a high-level example of a PAIN.008.001.02 input XML file:
XML
<?xml version="1.0" encoding="UTF-8" ?>
<Document xmlns="urn:iso:std:iso:20022:tech:xsd:pain.008.001.02">
<CstmrDrctDbtInitn>
<GrpHdr>
…
</GrpHdr>
<PmtInf>
…
</PmtInf>
</CstmrDrctDbtInitn>
</Document>
The root document tag for the input file is CstmrDrctDbtInitn. It contains a Group Header and at least one Payment Information building block (corresponding to a batch). The Group Header contains metadata that relates to all the batches in the file. Each batch contains meta data for all the transactions within.
Group Header (GrpHdr)
Your XML Input file must include a group header using the GrpHdr building block. This building block is only present once in a file and contains a set of characteristics shared by all the individual instructions included in the message.
XML
<GrpHdr>
<MsgId>ABCDEFG090301</MsgId>
<CreDtTm>2013-08-28T17:12:44</CreDtTm>
<NbOfTxs>5</NbOfTxs>
<CtrlSum>43236.93</CtrlSum>
<InitgPty>
…
</InitgPty>
</GrpHdr>
Here’s an explanation of the different tags in the GrpHdr block as shown above:
MsgId: An ID number for the message, which you assign before sending the file to CRB. Best practice: Use a new MsgId for each file.
CreDtTm: The date and time the payment instruction was created.
NbOfTxs: The total number of transaction instruction blocks in the message. Each instruction corresponds to one transaction, and will form a separate instruction block. For more information on the transaction instruction blocks, see the Direct Debit Transaction Information section below.
CtrlSum: The total amount (as a number) of all the instructions included in the file, irrespective of currencies, used as a control sum. Note: Unicorn currently supports USD only.
InitgPty: Indicates the party initiating the transfer. This is the party initiating the credit transfer on behalf of the debtor. See the Initiating Party section that follows for more details on the tags inside the InitgPty block.
Note: All the above elements in the GrpHdr block are required.
Initiating Party (InitgPty)
The Initiating Party is the party initiating the payment. This is the party that initiates the credit transfer on behalf of the debtor.
XML
<InitgPty>
<Nm>John Doe Corporation</Nm>
<Id>
<OrgId>
<Othr>
<Id>0123456789</Id>
</Othr>
</OrgId>
</Id>
</InitgPty>
Here’s an explanation of the different tags in the InitgPty block as shown above:
Nm: Name by which the originating party is known and which is usually used to identify that party.
Id: Identifier. The parent element of the OrgId element containing the identifying information about the initiating party.
OrgId: Organization Identification block containing the initiating party’s identification in its child elements.
Othr: A block containing the initiating party’s identification in a child element.
Id: The unique and unambiguous identifier of the initiating party, which can be an organization or an individual person.
Note: All the above elements in the InitgPty block are required.
Payment Information (PmtInf)
The PmtInf block contains payment information per batch in your file. You must include at least one PmtInf block in your file. In most cases the input file contains only one PmtInf block, with one set of payment instructions. This enables you to indicate general properties (such as execution date, creditor information, and credited account) once at the level of the PmtInf block.
You might want to use multiple PmtInf blocks if the file includes instructions to credit more than one account. In that case, you need a PmtInf block for each account that is going to be credited.
XML
<PmtInf>
<PmtInfId>DOMC10000025</PmtInfId>
<PmtMtd>DD</PmtMtd>
<BtchBookg>false</BtchBookg>
<NbOfTxs>5</NbOfTxs>
<CtrlSum>12.01</CtrlSum>
<PmtTpInf>
…
</PmtTpInf>
<ReqdColltnDt>2020-08-21</ReqdColltnDt>
<Cdtr>
…
</Cdtr>
<CdtrAcct>
…
</CdtrAcct>
<CdtrAgt>
…
</CdtrAgt>
<DrctDbtTxInf>
…
</DrctDbtTxInf>
</PmtInf>
Below is an explanation of the different top-level tags and blocks found in the PmtInf block as shown above. These tags and blocks appear once for each PmtInf block, except for the DrctDbtTxInf block, which can appear multiple times, representing multiple transactions. Each of the other tags and blocks applies to all DrctDbtTxInf blocks that appear in the PmtInf block:
PmtInf: This block contains payment information, such as creditor and payment type information. You can use this block repeatedly within the same input file. Note: One or more instances of the PmtInf element is required.
PmtInfId: The unique ID number for this batch, which is assigned by the originating party. Note: This element is required.
PmtMtd: Payment Method. For direct debit transactions you should define it as “DD”. Note: This element is required.
BtchBookg: Defines how CRB should handle the debit. If the tag is set to “TRUE”, then all debit instructions will be handled as one consolidated debit. If the tag is set to “FALSE”, it means that you want each debit to be handled separately. Note: Currently the system will always behave as if the value is “FALSE”.
NbOfTxs: The number of transactions within this batch. Note: This element is required.
CtrlSum: The sum total of all instructions within this batch, irrespective of currencies, used as a control sum. Note: Unicorn currently supports “USD” only. Note: This element is required.
PmtTpInf: The Payment Type Information block, including a priority level. See the Payment Type Information section below for more details on the PmtTpInf block. Note: This element is required.
ReqdColltnDt: Requested Collection Date. The date on which the originator’s account is to be debited. This tag currently supports current dates. Support for future dates will come in a future release. Note: This element is required.
Cdtr: Creditor block. Contains the name and postal address of the originator. See the Creditor section below for an example of the Cdtr block. Note: This element is required.
CdtrAcct: Creditor Account. The account of the originator that will be credited. See the Creditor Account section below for more details on the CdtrAcct block. Note: This element is required.
CdtrAgt: Creditor Agent block. Details on the creditor’s financial institution. See the Creditor Agent section below for more details on the CdtrAgt block. Note: This element is required.
DrctDbtTxInf: Direct Debit Transaction Information. Includes elements related to the debit side of the transaction, such as debtor and remittance information. This block can appear multiple times within the same PmtInf block. See the Direct Debit Transaction Information section below for more details on the DrctDbtTxInf block. Note: One or more of the CdtTrfTxInf element is required.
Payment Type Information (PmtTpInf)
The PmtTpInf block contains information on the payment type.
XML
<PmtTpInf>
<SvcLvl>
<Cd>NURG</Cd>
</SvcLvl>
<LclInstrm>
<Prtry>CCD</Prtry>
</LclInstrm>
</PmtTpInf>
Here’s an explanation of the different tags in the PmtTpInf block as shown in the examples above:
SvcLvl: Service Level. Contains the payment urgency level (Cd) in a child element.
Cd: Code: Payment urgency level. Note: This element is required. For direct debit transactions this has a fixed value of “NURG”.
LclInstrm: Local Instrument. Used to specify a local instrument, local clearing option and/or to further qualify the service or service level. Note: This element is required.
Prtry: Proprietary. Note: This element is required. The value must be the ACH Pull type (“CCD” or “PPD”).
Creditor (Cdtr)
The Cdtr block contains information on the name, postal address and ID of the originator (creditor).
XML
<Cdtr>
<Nm>John Doe Corporation</Nm>
<PstlAdr>
<Ctry>US</Ctry>
<AdrLine>999 Any Street, 13th Floor</AdrLine>
<AdrLine>99999 Anytown</AdrLine>
</PstlAdr>
<Id>
<OrgId>
<Othr>
<Id>0123456789</Id>
</Othr>
</OrgId>
</Id>
</Cdtr>
Nm: Creditor name.
PstlAdr: A block containing the postal address of the creditor, including country and address lines.
Id: Identification block, containing information used to identify the creditor in child elements.
OrgId: Organization Identification block containing the creditor identification in its child elements.
Othr: A block containing the creditor identification in a child element.
Id: A unique and unambiguous identifier of the creditor. This ID is identical to the Id field in the InitgPty block described above.
Creditor Account (CdtrAcct)
The CdtrAccount block contains information on the account of the originator that will be credited.
XML
<CdtrAcct>
<Id>
<Othr>
<Id>0123456789</Id>
</Othr>
</Id>
<Ccy>USD</Ccy>
</CdtrAcct>
Here’s an explanation of the different tags in the CdtrAccount block as shown above:
Id: The sub-block containing the creditor’s account identification information.
Othr: The sub-block containing the creditor’s Id tag.
Id: The unique identifier of the creditor’s account.
Ccy: Currency. The ISO currency code of the debtor’s account.
Note: All the above elements in the CdtrAcct block are required.
Creditor Agent (CdtrAgt)
The CdtrAgt block contains information on the originator’s financial institution.
XML
<CdtrAgt>
<FinInstnId>
<ClrSysMmbId>
<MmbId>123456789</MmbId>
</ClrSysMmbId>
<PstlAdr>
<Ctry>US</Ctry>
</PstlAdr>
</FinInstnId>
</CdtrAgt>
Here’s an explanation of the different tags in the CdtrAgt block as shown above:
FinInstnId: Financial Institution Identification sub-block.
ClrSysMmbId: Clearing System Member Identification sub-block. Contains Information used to identify a member within a clearing system.
MmbId: Member Identification: Identifier (routing number) of the creditor’s financial institution in the creditor’s clearing system. For Direct Debit, this value must always be CRB’s routing number (021214891).
PstlAdr: Postal Address sub-block.
Ctry: Country code. The country code for the debtor’s financial institution.
Note: All the above elements in the CdtrAgt block are required.
Direct Debit Transaction Information (DrctDbtTxInf)
The DrctDbtTxInf block includes elements related to the debit side of the transaction, such as debtor and remittance information for the transaction. You can use this block repeatedly within the same PmtInf block. The number of occurrences of the DrctDbtTxInf block within a file is indicated by the NbOfTxs field in the Group Header (GrpHdr).
XML
<DrctDbtTxInf>
<PmtId>
<EndToEndId>100DDEB000000</EndToEndId>
</PmtId>
<InstdAmt Ccy="USD">0.01</InstdAmt>
<DbtrAgt>
<FinInstnId>
<ClrSysMmbId>
<MmbId>123456789</MmbId>
</ClrSysMmbId>
<Nm>DUMMY</Nm>
<PstlAdr>
<Ctry>US</Ctry>
</PstlAdr>
</FinInstnId>
</DbtrAgt>
<Dbtr>
<Nm>John Doe</Nm>
<PstlAdr>
<Ctry>US</Ctry>
</PstlAdr>
</Dbtr>
<DbtrAcct>
<Id>
<Othr>
<Id>01234567890</Id>
</Othr>
</Id>
</DbtrAcct>
<RmtInf>
<Ustrd>Testing</Ustrd>
</RmtInf>
</DrctDbtTxInf>
Here’s an explanation of the different tags in the DrctDbtTxInf block as shown above:
PmtId: Payment Identification sub-block. Provides identifying information regarding the transaction in child elements. Note: This element is required.
EndToEndId: End to End Identification: End-to-end reference number of the credit transfer. This information is sent to the beneficiary. Note: This element is required.
InstdAmt: Instructed Amount. The amount of the credit transfer in the indicated currency. Note: This element is required.
DbtrAgt: Debtor Agent block. Details on the debtor’s financial institution for the transaction. See the Debtor Agent section below for more details on the DbtrAgt block. Note: This element is required.
Dbtr: The debtor sub-block. Contains details on the debtor for the transaction, including Nm (name) and PstlAdr (postal address) elements. Note: The Dbtr block and its Nm element are required. The PstlAdr element is not required.
DbtrAcct: Debtor account sub-block for the transaction, containing the debtor account number in its child elements. Note: This element is required.
Id: Identification sub-block. Contains an identification of the debtor account in child elements. Note: This element is required.
Othr: Sub-block containing the debtor’s Id tag. Note: This element is required.
Id: The unique identifier of the debtor’s account. Note: This element is required.
RmtInf: The remittance information to send along with the transaction. Note: This element is required for ACH Direct Debit.
Ustrd: Unstructured description of the transaction. Note: This element is required.
Debtor Agent (DbtrAgt)
The DbtrAgt block contains information on the debtor’s financial institution.
XML
<DbtrAgt>
<FinInstnId>
<ClrSysMmbId>
<MmbId>123456789</MmbId>
</ClrSysMmbId>
<Nm>John Doe Bank</Nm>
<PstlAdr>
<Ctry>US</Ctry>
</PstlAdr>
</FinInstnId>
</DbtrAgt>
Here’s an explanation of the different tags in the DbtrAgt block as shown above:
FinInstnId: Financial Institution Identification sub-block.
ClrSysMmbId: Clearing System Member Identification sub-block. Contains Information used to identify a member within a clearing system.
MmbId: Member Identification: Identification (routing number) of the debtor’s financial institution in the debtor’s clearing system.
Nm: Name of the debtor’s financial institution.
PstlAdr: Postal Address sub-block.
Ctry: Country code. The ISO country code for the creditor’s financial institution.
Note: All the above elements in the DbtrAgt block are required.
|
bf01da78c427a2db12811a947176dfa2
|
{
"intermediate": 0.4206574261188507,
"beginner": 0.3920121490955353,
"expert": 0.187330424785614
}
|