| row_id (int64, 0 to 48.4k) | init_message (string, 1 to 342k chars) | conversation_hash (string, 32 chars) | scores (dict) |
|---|---|---|---|
46,988
|
write a professional ffmpeg command that creates a unique audio "radio" effect on input audio without losing quality, similar to: ffmpeg -i AUDIO.mp3 -filter:a "highpass=f=1375.4,volume=12.3dB" audio_result.mp3
but create a unique one without losing quality!
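A minimal sketch of one possible answer (not from the original request): a band-limited "radio" chain built from stock ffmpeg filters; the cutoff frequencies and gain are illustrative placeholders.

ffmpeg -i AUDIO.mp3 -filter:a "highpass=f=300,lowpass=f=3400,acompressor,volume=6dB" -c:a libmp3lame -q:a 0 audio_radio.mp3

Note that re-encoding to MP3 always loses a little quality; writing the result to WAV (audio_radio.wav) keeps the filtered signal lossless.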
|
f6220fd0a927fcf8c8793f8a334f3a3f
|
{
"intermediate": 0.3856160640716553,
"beginner": 0.20286552608013153,
"expert": 0.41151848435401917
}
|
46,989
|
write a professional ffmpeg command that creates a unique audio "radio" effect on input audio without losing quality, similar to: ffmpeg -i AUDIO.mp3 -filter:a "highpass=f=1375.4,volume=12.3dB" audio_result.mp3
but create a unique one without losing quality!
|
61396947a43bfface0528f01b498b2fd
|
{
"intermediate": 0.4195529818534851,
"beginner": 0.17866531014442444,
"expert": 0.40178173780441284
}
|
46,990
|
New-Item : Cannot find drive. A drive with the name 'Q' does not exist.
At C:\drivecreation.ps1:2 char:1
+ New-Item -Path "Q:\Home$" -ItemType Directory
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Q:String) [New-Item], DriveNotFoundException
+ FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.NewItemCommand
Get-Acl : Cannot find drive. A drive with the name 'Q' does not exist.
At C:\drivecreation.ps1:5 char:8
+ $acl = Get-Acl -Path "Q:\Home$"
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Q:String) [Get-Acl], DriveNotFoundException
+ FullyQualifiedErrorId : DriveNotFound,Microsoft.PowerShell.Commands.GetAclCommand
You cannot call a method on a null-valued expression.
At C:\drivecreation.ps1:9 char:1
+ $acl.SetAccessRule($accessRule)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : InvokeMethodOnNull
You cannot call a method on a null-valued expression.
At C:\drivecreation.ps1:13 char:1
+ $acl.SetAccessRule($accessRule)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : InvokeMethodOnNull
Set-Acl : Cannot bind argument to parameter 'AclObject' because it is null.
At C:\drivecreation.ps1:15 char:37
+ Set-Acl -Path "Q:\Home$" -AclObject $acl
+ ~~~~
+ CategoryInfo : InvalidData: (:) [Set-Acl], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.SetAclCommand
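All of the errors above cascade from the first one: the Q: drive does not exist when the script runs. A minimal PowerShell sketch of the likely fix, assuming drivecreation.ps1 is meant to map Q: first; the UNC root below is a placeholder.

# Map the Q: drive before touching Q:\Home$ (root path is a placeholder)
New-PSDrive -Name "Q" -PSProvider FileSystem -Root "\\server\share" -Scope Global
New-Item -Path "Q:\Home$" -ItemType Directory -Force
$acl = Get-Acl -Path "Q:\Home$"
$acl.SetAccessRule($accessRule)   # $accessRule must already be defined earlier in the script
Set-Acl -Path "Q:\Home$" -AclObject $acl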
|
3bab272a70de35fddd7eba1bc78c1a61
|
{
"intermediate": 0.44966378808021545,
"beginner": 0.31834954023361206,
"expert": 0.23198670148849487
}
|
46,991
|
#include <bits/stdc++.h>
#define N 1000
using namespace std;
struct hop
{
int x, y, id;
};
hop a[N+1];
int n, xuoi[N+3], dem;
bool cmp (hop A, hop B)
{
if (A.x!=B.x) return A.x > B.x;
else
if (A.y==B.y) return A.id > B.id;
else return A.y>B.y;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(0);cout.tie(0);
freopen("bai5.inp","r",stdin);
freopen("bai5.out","w",stdout);
cin>> n;
for(int i=1;i<=n;i++)
{
cin >> a[i].x >> a[i].y;
a[i].id=i;
if (a[i].x>a[i].y) swap(a[i].x, a[i].y);
}
sort(a+1,a+n+1,cmp);
for(int i=1;i<=n;i++) cout << a[i].x<<" "<< a[i].y<<" "<<a[i].id<<'\n';
// Compute xuoi (length of the longest chain ending at element i)
for (int i = 1; i <= n; ++i) {
xuoi[i] = 1;
for (int j = 1; j < i; j++) {
if (a[j].x >a[i].x && a[j].y >a[i].y )
xuoi[i] = max(xuoi[i], xuoi[j] + 1);
}
}
for (int i=1; i<=n; i++) cout << xuoi[i]<<" ";
dem=0; xuoi[0]= 0;
for (int i=1; i<=n; i++)
if (xuoi[i]>xuoi[i-1]) dem++;
cout<< dem <<'\n';
if (dem==1) cout << a[n].id;
else{
for (int i=1; i<=n; i++)
if (xuoi[i]!=xuoi[i-1])
cout << a[i].id<<" ";
}
return 0;
}
|
e7d0121700400221f5adca09e64b72b1
|
{
"intermediate": 0.2765996754169464,
"beginner": 0.5085639953613281,
"expert": 0.21483632922172546
}
|
46,992
|
Write an ffmpeg command to replace the audio in a video.
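A minimal sketch (file names are placeholders): keep the video stream untouched and take the audio from the second input.

ffmpeg -i input.mp4 -i new_audio.mp3 -map 0:v:0 -map 1:a:0 -c:v copy -shortest output.mp4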
|
2181473d756735614667095650792403
|
{
"intermediate": 0.5140331983566284,
"beginner": 0.15701451897621155,
"expert": 0.3289523422718048
}
|
46,993
|
For a Keithley 2651A SourceMeter: I'm connecting it with a GPIB-to-USB cable via my PC, and I have PyVISA installed. I'd like to have it output a voltage; could you give me some instruction code?
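A minimal PyVISA sketch under those assumptions; the GPIB address (26) and the 1 V / 100 mA levels are placeholders, and the 2651A is driven by sending TSP statements over the VISA session.

import pyvisa

rm = pyvisa.ResourceManager()
# GPIB address 26 is a placeholder; check the instrument's actual address
smu = rm.open_resource("GPIB0::26::INSTR")
print(smu.query("*IDN?"))                            # confirm the connection

# The 2651A is TSP-based, so source settings are written as TSP statements
smu.write("smua.reset()")
smu.write("smua.source.func = smua.OUTPUT_DCVOLTS")  # source voltage
smu.write("smua.source.levelv = 1.0")                # 1 V output level
smu.write("smua.source.limiti = 0.1")                # 100 mA current limit
smu.write("smua.source.output = smua.OUTPUT_ON")     # enable the output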
|
780e5e81fbc5b46f4a8532f124475dc9
|
{
"intermediate": 0.46842342615127563,
"beginner": 0.2688594460487366,
"expert": 0.2627171277999878
}
|
46,994
|
Please modify the depth first search algorithm below to find all connected components in an undirected graph. Comment on where you made the modification. Your modified algorithm needs to print out each component ID (starting from 1) and the corresponding vertices.
For example, take a directed graph with 6 vertices namely u, v, w, x, y, z. u is directed towards v and x. v is directed towards y. w is directed towards y and z. x is directed towards v. y is directed towards x. Your output for this DFS example will look like the following:-
Component 1: u, v, y, x. Component 2: w, z.
DFS Algorithm:
DFS (G:graph; var color:carray; parent:parray);
for each vertex u do
color[u]=white; parent[u]=nil;
end for
time = 0;
for each vertex u do
if color[u] == white then
DFS-Visit(u);
end if
end for
end DFS
DFS-Visit(u)
{
time = time + 1;
d[u] = time;
color[u]=gray;
for each v in adj[u] do
if color[v] = white {
parent[v] = u;
DFS-Visit(v);
}
color[u] = red;
time = time + 1;
f[u] = time;
}
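A minimal Python sketch of the requested modification (one possible phrasing, not a verbatim rewrite of the pseudocode above): the only change is in DFS's outer loop, which starts a new component ID whenever it finds a white vertex and records every vertex reached by DFS-Visit under that ID.

def connected_components(adj):
    """adj: dict mapping vertex -> iterable of neighbours (undirected graph)."""
    color = {u: "white" for u in adj}
    components = []                      # components[k] = vertices of component k+1

    def dfs_visit(u, comp):
        color[u] = "gray"
        comp.append(u)
        for v in adj[u]:
            if color[v] == "white":
                dfs_visit(v, comp)
        color[u] = "black"

    for u in adj:                        # MODIFICATION: one new component per white vertex
        if color[u] == "white":
            comp = []
            dfs_visit(u, comp)
            components.append(comp)

    for i, comp in enumerate(components, start=1):
        print(f"Component {i}: {', '.join(comp)}")
    return components

# connected_components({"u": ["v", "x"], "v": ["u", "y"], "x": ["u"],
#                        "y": ["v"], "w": ["z"], "z": ["w"]})
# -> Component 1: u, v, y, x   /   Component 2: w, z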
|
0878f44a5908587bd063a796b0a0ab7d
|
{
"intermediate": 0.24484960734844208,
"beginner": 0.27772873640060425,
"expert": 0.4774216413497925
}
|
46,995
|
Hi, can you create an ffmpeg 6.0 Linux arg for a beauty pass using a night-time LUT, modifying this arg: ffmpeg -hide_banner -y -i %04d.exr -pix_fmt yuv420p10le -c:v libx265 -r 30 -preset fast -crf 5
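A minimal sketch of one way to do it, assuming the LUT is a .cube file (night.cube and output.mp4 are placeholders); lut3d applies the LUT and hqdn3d adds a gentle denoise as the "beauty" touch.

ffmpeg -hide_banner -y -i %04d.exr -vf "lut3d=night.cube,hqdn3d=1.5:1.5:6:6" -pix_fmt yuv420p10le -c:v libx265 -r 30 -preset fast -crf 5 output.mp4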
|
4043b7bc17805df08e047feaaa1ed31e
|
{
"intermediate": 0.575837254524231,
"beginner": 0.1897815614938736,
"expert": 0.23438118398189545
}
|
46,996
|
temperature
place_id avg_temp
1 1 -21
2 2 -13
3 3 -9
4 4 23
5 5 -1
6 6 0
7 7 6
8 8 4
9 9 15
10 10 -12
Fetch the 5 coldest places from the temperature table in SQL.
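A minimal SQL sketch (standard LIMIT syntax; SQL Server would use SELECT TOP 5 instead):

SELECT place_id, avg_temp
FROM temperature
ORDER BY avg_temp ASC
LIMIT 5;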
|
8f3c23d0ab3f0940f7f2285e25fcac73
|
{
"intermediate": 0.4496818482875824,
"beginner": 0.2510967254638672,
"expert": 0.2992214262485504
}
|
46,997
|
import spacy
import pandas as pd
import random
from spacy.training import Example
nlp = spacy.blank("en")
"""
train_data = [
("This is a complete sentence.", {"cats": {"complete": 1, "incomplete": 0}}),
("Incomplete sentence.", {"cats": {"complete": 0, "incomplete": 1}}),
]
"""
csv_path = "dataset.csv"
data = pd.read_csv(csv_path)
def get_verb_subject_count(doc):
num_verbs = len([token for token in doc if token.pos_ == "VERB"])
num_subjects = len([token for token in doc if token.dep_ == "nsubj"])
return num_verbs, num_subjects
def label_to_cat(label):
if label == "Finished":
return {"Finished": 1, "Unfinished": 0}
elif label == "Unfinished":
return {"Finished": 0, "Unfinished": 1}
else:
return None
train_data = []
for i in range(10):
sentence = data.loc[i, "sentence"]
label = data.loc[i, "is_finished"]
cats = label_to_cat(label)
if cats:
doc = nlp(sentence)
num_verbs, num_subjects = get_verb_subject_count(doc)
train_data.append(
(
sentence,
{"cats": cats, "num_verbs": num_verbs, "num_subjects": num_subjects},
)
)
print("Data retrieved from CSV")
textcat = nlp.add_pipe("textcat")
textcat.add_label("Finished")
textcat.add_label("Unfinished")
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "textcat"]
with nlp.disable_pipes(*other_pipes):
optimizer = nlp.begin_training()
batch_size = 8 # Adjust batch size as needed
print_interval = 50 # Print progress every 50 batches
total_batches = len(train_data) // batch_size
for epoch in range(10):
# Shuffle training data for each epoch
random.shuffle(train_data)
for batch_start in range(0, len(train_data), batch_size):
batch = train_data[batch_start : batch_start + batch_size]
texts, annotations = zip(*batch)
examples = []
for text, annot in zip(texts, annotations):
doc = nlp.make_doc(text)
examples.append(Example.from_dict(doc, annot))
nlp.update(examples, drop=0.5, sgd=optimizer)
if (batch_start // batch_size) % print_interval == 0:
progress = (batch_start // batch_size) / total_batches * 100
print(
f"Epoch {epoch+1}/{10}, Batch {batch_start // batch_size}/{total_batches} ({progress:.2f}%) trained"
)
output_dir = "/Users/royce/Desktop/ai-proj/train/model/"
nlp.to_disk(output_dir)
print("Model saved to", output_dir)
I'm getting the error below:
KeyError: "[E983] Invalid key(s) for 'token_annotation': num_verbs. Available keys: {'LEMMA', 'HEAD', 'POS', 'MORPH', 'deps', 'pos', 'sent_starts', 'SPACY', 'tags', 'ORTH', 'lemmas', 'TAG', 'SENT_START', 'words', 'heads', 'DEP', 'morphs', 'spaces'}"
Please fix my code.
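A minimal sketch of the likely fix: Example.from_dict only accepts spaCy's own annotation keys (such as "cats"), so the custom num_verbs / num_subjects counts cannot go into the annotation dict; keep them in a separate structure if they are still needed.

# Build training pairs with only the keys spaCy understands
train_data = []
for i in range(10):
    sentence = data.loc[i, "sentence"]
    label = data.loc[i, "is_finished"]
    cats = label_to_cat(label)
    if cats:
        train_data.append((sentence, {"cats": cats}))   # no num_verbs / num_subjects here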
|
7afbc0e5e08869bd70343ef618f0c368
|
{
"intermediate": 0.37553682923316956,
"beginner": 0.36764511466026306,
"expert": 0.25681808590888977
}
|
46,998
|
I have the following code to train an LSTM model on my dataset:
# %%
from sklearn.preprocessing import StandardScaler
import pandas as pd
import numpy as np
from tensorflow import keras
import joblib
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM,Dense,Dropout
import os
# %%
csv_directory = r"C:\Users\arisa\Desktop\day_spot_summary"
csv_files = [file for file in os.listdir(csv_directory) if file.endswith('.csv')]
# %%
# %%
def build_lstm_model(input_shape):
model = Sequential([
LSTM(2716, activation='tanh', input_shape=input_shape, return_sequences=True), # Adjusted for LSTM
Dropout(0.20),
# LSTM(2716, activation='tanh', return_sequences=False), # Additional LSTM layer
# Dropout(0.10),
# LSTM(2716, activation='tanh', return_sequences=False), # Additional LSTM layer
# Dropout(0.10),
Dense(2716, activation='relu'),
Dense(128, activation='relu'),
Dense(64, activation='relu'),
Dense(32, activation='relu'),
Dense(12),
])
model.compile(optimizer='adam',
loss='mse', # Use Mean Squared Error for regression
metrics=['mae']) # Mean Absolute Error as an additional metric
return model
# %%
def data_generator_lstm( n_steps):
while True:
for csv_file in csv_files:
# Read the CSV file
file_path = os.path.join(csv_directory, csv_file)
chunk = pd.read_csv(file_path)
feature_data = chunk.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
target_data = chunk[['y_High_1d'
, 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'
]]
# Prepare sequences for features and targets
X, y = [], []
for i in range(len(feature_data) - n_steps + 1): # Correct range to prevent out-of-bounds access
X.append(feature_data.iloc[i:i + n_steps].to_numpy()) # Use iloc for consistency, though not necessary for slicing
# Make sure the index for y is correctly bounded within the target_data
if i + n_steps - 1 < len(target_data): # Adjust condition to prevent out-of-bounds
y.append(target_data.iloc[i + n_steps - 1].to_numpy()) # Correct indexing to match the condition
else:
break # Safety break (though should be unnecessary with corrected logic)
X, y = np.array(X), np.array(y)
yield X, y
# %%
from tensorflow.keras.mixed_precision import set_global_policy
# Enable mixed precision
set_global_policy('mixed_float16')
# %%
model = build_lstm_model((30, 2716,))
model.summary()
# %%
import warnings
warnings.filterwarnings(action='ignore', message='X has feature names, but StandardScaler was fitted without feature names')
train_generator = data_generator_lstm(30)
# Update total_samples, train_samples, and val_samples according to your dataset after transformations
model.fit(
train_generator,
steps_per_epoch=50,
epochs=75,
# Add validation_data if you have a validation generator
)
Please change it properly so that, instead of an LSTM, I train a Prophet model.
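Prophet is a univariate time-series model fit on a dataframe with a ds (date) column and a single y target, so it does not take the windowed multi-target generator above. A minimal sketch fitting one model per target; the "Date" column name is an assumption, and only three of the twelve targets are shown.

from prophet import Prophet        # package name is 'fbprophet' in older installs
import pandas as pd

df = pd.read_csv(file_path)        # one of the CSVs from csv_files
targets = ['y_High_1d', 'y_Low_1d', 'y_Priority_1d']   # extend to all 12 targets as needed

models = {}
for target in targets:
    ts = pd.DataFrame({
        "ds": pd.to_datetime(df["Date"]),   # "Date" column name is an assumption
        "y": df[target],
    })
    m = Prophet()
    m.fit(ts)
    models[target] = m

future = models['y_High_1d'].make_future_dataframe(periods=5)
forecast = models['y_High_1d'].predict(future)
print(forecast[["ds", "yhat"]].tail())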
|
fef51be0c275ad846bd8afeff1bb6113
|
{
"intermediate": 0.3564283549785614,
"beginner": 0.3367084562778473,
"expert": 0.3068631887435913
}
|
46,999
|
I'm working in Google Colab; what does this mean: "For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)"?
|
9641722a8b752ecb2da727cfff700cfa
|
{
"intermediate": 0.36391177773475647,
"beginner": 0.1709301620721817,
"expert": 0.46515804529190063
}
|
47,000
|
In a ServiceNow business rule I have 3 fields (references to the cmdb_ci_service table). Field 1 is the parent of field 2, and field 2 is the parent of field 3.
I need a business rule that gives an error if:
- all fields are empty
- if only the first field is not empty, but the other two are empty
- if the first two are not empty but the third is empty even though there are values that could be chosen
If the first two fields are not empty, and the third is empty because there is no value that can be chosen, "That's OK" should be displayed.
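A minimal before-rule sketch of that logic. The field names (u_service_1, u_service_2, u_service_3) are placeholders, and it assumes child services reference their parent through the "parent" column on cmdb_ci_service; adjust the query if the hierarchy is modelled via cmdb_rel_ci instead.

(function executeRule(current, previous /*null when async*/) {
    // Field names are placeholders - replace with the actual column names
    var f1 = current.u_service_1;
    var f2 = current.u_service_2;
    var f3 = current.u_service_3;

    // Are there any services whose parent is the value in field 2?
    var childExists = false;
    if (!f2.nil()) {
        var gr = new GlideRecord('cmdb_ci_service');
        gr.addQuery('parent', f2);
        gr.setLimit(1);
        gr.query();
        childExists = gr.hasNext();
    }

    if (f1.nil() && f2.nil() && f3.nil()) {
        gs.addErrorMessage('All three service fields are empty.');
        current.setAbortAction(true);
    } else if (!f1.nil() && f2.nil() && f3.nil()) {
        gs.addErrorMessage('Fields 2 and 3 must be filled in.');
        current.setAbortAction(true);
    } else if (!f1.nil() && !f2.nil() && f3.nil() && childExists) {
        gs.addErrorMessage('Field 3 is empty although values are available.');
        current.setAbortAction(true);
    } else if (!f1.nil() && !f2.nil() && f3.nil() && !childExists) {
        gs.addInfoMessage("That's OK");
    }
})(current, previous);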
|
7f0d04ad52f8bb9fc22e8caa373e1002
|
{
"intermediate": 0.3855266869068146,
"beginner": 0.34365108609199524,
"expert": 0.2708222270011902
}
|
47,001
|
I have a very large CSV file.
How can I know its shape without opening it?
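A minimal Python sketch: the shape can be derived without loading the file into memory by streaming over it once, reading only the header for the column count (assumes one record per line, i.e. no embedded newlines inside quoted fields).

import csv

def csv_shape(path, encoding="utf-8"):
    with open(path, newline="", encoding=encoding) as f:
        header = next(csv.reader(f))         # first line -> column names
        n_cols = len(header)
        n_rows = sum(1 for _ in f)           # remaining lines, streamed
    return n_rows, n_cols

print(csv_shape("big_file.csv"))             # file name is a placeholder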
|
34cd492f2c1546c57777b180b4194375
|
{
"intermediate": 0.373808354139328,
"beginner": 0.2707742750644684,
"expert": 0.355417400598526
}
|
47,002
|
I'm using a Google Colab script that someone else created. How do I run `gradio deploy` from a terminal to deploy to Spaces (https://huggingface.co/spaces)? I don't have Python or any terminal set up.
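A minimal sketch: a Colab notebook cell can run shell commands with a leading "!", so the deploy can be launched from the notebook itself; it assumes you have a Hugging Face account and token to log in with.

# run these in Colab cells
!pip install -q gradio
!huggingface-cli login
!gradio deploy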
|
4cc013eb659c693e5ef8715485c0d575
|
{
"intermediate": 0.47552475333213806,
"beginner": 0.23178933560848236,
"expert": 0.2926858961582184
}
|
47,003
|
I'm training a TCN model on my dataset:
# %%
from sklearn.preprocessing import StandardScaler
import pandas as pd
import numpy as np
from tensorflow import keras
import joblib
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM,Dense,Dropout
import os
# %%
csv_directory = r"C:\Users\arisa\Desktop\day_spot_summary"
csv_files = [file for file in os.listdir(csv_directory) if file.endswith('.csv')]
# %%
from tcn import TCN, tcn_full_summary # Import TCN layer
def build_tcn_model(input_shape):
model = Sequential([
TCN(input_shape=input_shape, nb_filters=128, kernel_size=3, dilations=[1, 2, 4, 8], padding='causal',
use_skip_connections=True, dropout_rate=0.2, return_sequences=False),
# Dense(512, activation='relu'),
# Dropout(0.2),
# Dense(256, activation='relu'),
# Dropout(0.2),
Dense(128, activation='relu'),
Dropout(0.1),
Dense(64, activation='relu'),
Dropout(0.1),
Dense(32, activation='relu'),
Dense(12), # Assuming you have 12 targets as in your LSTM model
])
model.compile(optimizer='adam',
loss='mse', # Use Mean Squared Error for regression
metrics=['mae']) # Mean Absolute Error as an additional metric
return model
# %%
def data_generator_lstm( n_steps):
while True:
for csv_file in csv_files:
# Read the CSV file
file_path = os.path.join(csv_directory, csv_file)
chunk = pd.read_csv(file_path)
feature_data = chunk.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
target_data = chunk[['y_High_1d'
, 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'
]]
# Prepare sequences for features and targets
X, y = [], []
for i in range(len(feature_data) - n_steps + 1): # Correct range to prevent out-of-bounds access
X.append(feature_data.iloc[i:i + n_steps].to_numpy()) # Use iloc for consistency, though not necessary for slicing
# Make sure the index for y is correctly bounded within the target_data
if i + n_steps - 1 < len(target_data): # Adjust condition to prevent out-of-bounds
y.append(target_data.iloc[i + n_steps - 1].to_numpy()) # Correct indexing to match the condition
else:
break # Safety break (though should be unnecessary with corrected logic)
X, y = np.array(X), np.array(y)
yield X, y
# %%
from tensorflow.keras.mixed_precision import set_global_policy
# Enable mixed precision
set_global_policy('mixed_float16')
# %%
model = build_tcn_model((30, 2716,))
model.summary()
# %%
import warnings
warnings.filterwarnings(action='ignore', message='X has feature names, but StandardScaler was fitted without feature names')
train_generator = data_generator_lstm(30)
# Update total_samples, train_samples, and val_samples according to your dataset after transformations
model.fit(
train_generator,
steps_per_epoch=50,
epochs=75,
# Add validation_data if you have a validation generator
)
In the output it gives NaN for the loss (MSE) and MAE:
Epoch 1/75
50/50 [==============================] - 107s 2s/step - loss: nan - mae: nan
Epoch 2/75
33/50 [==================>...........] - ETA: 32s - loss: nan - mae: nan
is this ok?
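No, a NaN loss from the first epoch onwards normally means something is off rather than normal training. A minimal sketch of the usual checks (none of them is confirmed as the cause from the code alone): look for NaN or huge values in the generator output, try dropping mixed_float16, and use a smaller learning rate with gradient clipping.

import numpy as np
from tensorflow.keras.optimizers import Adam

# 1. Inspect one batch from the generator for NaN/inf and extreme magnitudes
X, y = next(data_generator_lstm(30))
print(np.isnan(X).any(), np.isnan(y).any(), np.abs(X).max(), np.abs(y).max())

# 2. Try full float32 first (comment out set_global_policy('mixed_float16'))

# 3. Recompile with a smaller learning rate and gradient clipping
model.compile(optimizer=Adam(learning_rate=1e-4, clipnorm=1.0), loss='mse', metrics=['mae'])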
|
f809a29840854a1b3353f2be7a526a3b
|
{
"intermediate": 0.5149767398834229,
"beginner": 0.2695019841194153,
"expert": 0.21552123129367828
}
|
47,004
|
(tok_embeddings): Embedding(2048, 288)
(dropout): Dropout(p=0.0, inplace=False)
(layers): ModuleList(
(0-5): 6 x TransformerBlock(
(attention): Attention(
(wq): Linear(in_features=288, out_features=288, bias=False)
(wk): Linear(in_features=288, out_features=288, bias=False)
(wv): Linear(in_features=288, out_features=288, bias=False)
(wo): Linear(in_features=288, out_features=288, bias=False)
(attn_dropout): Dropout(p=0.0, inplace=False)
(resid_dropout): Dropout(p=0.0, inplace=False)
)
(feed_forward): FeedForward(
(w1): Linear(in_features=288, out_features=768, bias=False)
(w2): Linear(in_features=768, out_features=288, bias=False)
(w3): Linear(in_features=288, out_features=768, bias=False)
(dropout): Dropout(p=0.0, inplace=False)
)
(attention_norm): RMSNorm()
(ffn_norm): RMSNorm()
)
)
(norm): RMSNorm()
(output): Linear(in_features=288, out_features=2048, bias=False)
)
|
b2b5ae17c94c56d33fea4ca83ada7582
|
{
"intermediate": 0.3200986683368683,
"beginner": 0.21535296738147736,
"expert": 0.4645483195781708
}
|
47,005
|
I have the following code to calculate a generic scaler on my dataset.
Update the code so that instead of StandardScaler it calculates a MinMaxScaler:
def calculate_features_scaling_params(file_path, features_to_drop):
scaler = StandardScaler()
for chunk in pd.read_csv(file_path, chunksize=10000): # Adjust chunksize based on your memory capacity
filtered_chunk = chunk.drop(features_to_drop, axis=1)
scaler.partial_fit(filtered_chunk) # Accumulate means and variances
return scaler.mean_, scaler.var_
def calculate_targets_scaling_params(file_path):
scaler = StandardScaler()
for chunk in pd.read_csv(file_path, chunksize=10000): # Adjust chunksize based on your memory capacity
filtered_chunk = chunk[['y_High_1d'
, 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'
]]
scaler.partial_fit(filtered_chunk) # Accumulate means and variances
return scaler.mean_, scaler.var_
# features_to_drop = ['Date', 'Symbol',
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d']
f_mean_, f_var_ = calculate_features_scaling_params(file_path, features_to_drop)
t_mean_, t_var_ = calculate_targets_scaling_params(file_path)
Suppose mean_ and var_ have been obtained as above
x_scaler = StandardScaler()
x_scaler.mean_ = f_mean_
x_scaler.var_ = f_var_
x_scaler.scale_ = np.sqrt(f_var_)
y_scaler = StandardScaler()
y_scaler.mean_ = t_mean_
y_scaler.var_ = t_var_
y_scaler.scale_ = np.sqrt(t_var_)
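A minimal sketch of the swap, assuming it is acceptable to return the fitted scaler objects directly: MinMaxScaler also supports partial_fit, but its state lives in data_min_ / data_max_ / scale_ / min_ rather than mean_ / var_, so reconstructing a scaler from two arrays no longer applies.

from sklearn.preprocessing import MinMaxScaler
import pandas as pd

def fit_feature_scaler(file_path, features_to_drop):
    scaler = MinMaxScaler()
    for chunk in pd.read_csv(file_path, chunksize=10000):  # adjust chunksize to memory
        scaler.partial_fit(chunk.drop(features_to_drop, axis=1))
    return scaler            # keeps data_min_, data_max_, scale_, min_

def fit_target_scaler(file_path, target_cols):
    scaler = MinMaxScaler()
    for chunk in pd.read_csv(file_path, chunksize=10000):
        scaler.partial_fit(chunk[target_cols])
    return scaler

x_scaler = fit_feature_scaler(file_path, features_to_drop)
y_scaler = fit_target_scaler(file_path, [
    'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
    'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
    'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
    'y_High_5d', 'y_Low_5d', 'y_Priority_5d'])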
|
bbcd41ab365299b32af29090fb1f5847
|
{
"intermediate": 0.27944111824035645,
"beginner": 0.4607892632484436,
"expert": 0.25976958870887756
}
|
47,006
|
# Set up logging configuration
logging.basicConfig(filename='crewai_chat.log', level=logging.DEBUG, format='%(asctime)s - %(levelname)s - Session: %(session_id)s - %(message)s')
# Generate a unique identifier for the session
session_id = str(uuid.uuid4())
# Create a Crew object
crew = Crew(
agents=[financial_analyst, sales_specialist, finance_manager],
tasks=[task1, task2, task3],
verbose=1,
process=Process.sequential
)
# Get your crew to work and capture the dialogue
result = crew.kickoff()
logging.info("######################")
logging.info(result)
Help me fix this.
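The %(session_id)s placeholder in the format string is not filled in automatically; every log call would need extra={'session_id': ...}, which is why the logging calls fail. A minimal sketch of the usual fix, a logging.Filter that injects the id on every record:

import logging
import uuid

session_id = str(uuid.uuid4())

class SessionFilter(logging.Filter):
    """Attach the session id to every record so %(session_id)s resolves."""
    def filter(self, record):
        record.session_id = session_id
        return True

logging.basicConfig(filename='crewai_chat.log', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - Session: %(session_id)s - %(message)s')
for handler in logging.getLogger().handlers:
    handler.addFilter(SessionFilter())

logging.info("######################")   # now logs without a KeyError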
|
e8f8c1f71d483d922fe8453f23e731e6
|
{
"intermediate": 0.45721757411956787,
"beginner": 0.2523418664932251,
"expert": 0.2904405891895294
}
|
47,007
|
I want you to act as a programmar. programmar is a programming language. I will provide you with commands and you will interpret them. My first command is "I need help writing a program".
|
4a2f3a75ac000f666af59ced32bc0f06
|
{
"intermediate": 0.26501092314720154,
"beginner": 0.25000491738319397,
"expert": 0.48498421907424927
}
|
47,008
|
# Create tasks for your agents
task1 = Task(description='Review the latest swing trading opportunities for TSLA related to potential options trade entering positions for puts and or calls, using the duckduckgo_search tool and follow up with yahoo_finance_news tool for further insight if necessary. Identify key market trends opportunities and exact option stocks to trade with dates to build our options trading portfolio in a professional email report.', agent=financial_analyst)
task2 = Task(description='Based on the financial analysts financial report, prepare a professional money making strategy report. The report should highlight the identified market trends and investment opportunities for exact option stocks to trade with dates, and how to trade them for calls and or puts, and focus on making money fast.', agent=sales_specialist)
task3 = Task(description='Review the financial analysts financial report and the sales specialist money making strategy report. Identify any inaccuracies, inconsistencies, or areas lacking in clarity. Give an overall grade of each report with a letter like school based on predefined criteria including accuracy, compliance with financial standards, and strategic alignment. Provide specific, constructive feedback on how to improve these reports in a clear, concise, and encouraging manner. Keep it short and concise.', agent=finance_manager)
# Create a Crew object
crew = Crew(
agents=[financial_analyst, sales_specialist, finance_manager],
tasks=[task1, task2, task3],
verbose=1,
process=Process.sequential
)
# Get your crew to work and capture the dialogue
result = crew.kickoff()
How can I log just the dialogue?
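A minimal sketch of one way to capture only the printed agent dialogue, assuming the dialogue is what crewAI writes to stdout when verbose is on (no crew internals are touched):

import contextlib

with open("crew_dialogue.log", "w") as log_file, contextlib.redirect_stdout(log_file):
    result = crew.kickoff()      # everything the agents print goes to crew_dialogue.log

print(result)                    # the final result can still be shown afterwards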
|
8ad538ec77675bb7d7c352badc5cc86e
|
{
"intermediate": 0.3303404152393341,
"beginner": 0.4001777172088623,
"expert": 0.269481897354126
}
|
47,009
|
I am stuck with an issue with JSON formatting in the script below. Kindly help me and let me know where I am making a mistake:
// Add your code here
var pool_members_curr = {};
var pool_members_ui = {};
var device = '10.33.120.205';
var pool_name = 'pool-tcp-443-ey-test-20.20.20.52';
var request = {};
var midserver = GlideProperties.get('mid.server.rba_default');
var gr = new GlideRecord('ecc_agent');
gr.addQuery('name', midserver);
gr.query();
gr.next();
if (gr.status == 'Up') {
var username = '';
var password = '';
var grCreds = new GlideRecord('sys_auth_profile_basic');
grCreds.addQuery('name', 'F5 Lab');
grCreds.query();
var encr = new GlideEncrypter();
if (grCreds.next()) {
username_lab = grCreds.username.toString();
password_lab = encr.decrypt(grCreds.password.toString());
} else {
gs.print('Failed to get credentials for "F5 Lab" auth profile.');
}
{
username = username_lab;
password = password_lab;
}
var r1 = new sn_ws.RESTMessageV2();
r1.setHttpMethod('post');
r1.setEndpoint('https://' + device + '/mgmt/shared/authn/login');
r1.setRequestHeader('Content-Type', 'application/json');
r1.setRequestBody('{"username":"' + username + '","password":"' + password + '","loginProviderName":"tmos"}');
r1.setMIDServer(midserver);
var response1 = r1.execute();
var responseBody1 = response1.getBody();
var httpStatus1 = response1.getStatusCode();
if (httpStatus1 != 200) {
throw new Error('Getting authentication token from ' + device + ' failed with HTTP Response status: ' + httpStatus1 + ' Response body:' + responseBody1);
} else {
var res_body_json1 = JSON.parse(responseBody1);
var auth_token = res_body_json1['token']['token'];
}
var ritm = new GlideRecord('sc_req_item');
ritm.addQuery('sys_id', '3f3d1c181ba9c294ae8c43f7cc4bcb5e');
ritm.query();
if (ritm.next()) {
var mem_add_2 = [];
var member_address = [];
var mem_add_table = [];
var add = ritm.variables.f5_active_device;
var p_name = ritm.variables.pool_name;
var mrvs_address = JSON.parse(ritm.variables.mvrs_f5_pmm);
for (var i = 0; i < mrvs_address.length; i++) {
var member = mrvs_address[i];
var add_0 = member['name'];
var member_add = member['address'];
var add_2 = member_add + ':' + member['port'];
pool_members_ui = {
'name': add_2
};
member_address.push(pool_members_ui + '');
mem_add_2.push(member_add + '');
gs.print('pool_members_ui' + member_address);
gs.print('mem_add_2' + mem_add_2);
var f5_pool_member = new GlideRecord('u_network_f5_pool_members');
f5_pool_member.addEncodedQuery('u_f5_ipSTARTSWITH' + add);
f5_pool_member.addQuery('u_pool_name', p_name);
f5_pool_member.query();
while (f5_pool_member.next()) {
mem_add_table.push(f5_pool_member.u_member_address + '');
}
gs.print('mem_add_table ' + mem_add_table);
var arrayUtil = new ArrayUtil();
var array3 = arrayUtil.diff(mem_add_table, mem_add_2);
gs.print('array3 ' + array3);
gs.print('array3 length ' + array3.length);
if (array3.length == 0) {
gs.print('test');
request = {
'partition': 'Common',
'members': member_address
};
}
}
}
gs.print('StrINGIFY ' + JSON.stringify(request) );
}
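One likely culprit for the JSON shape (hedged, since only part of the flow is shown): pool_members_ui + '' converts the object to the string "[object Object]" before it is pushed, so request.members becomes an array of that string instead of an array of {name: ...} objects. A minimal sketch of the fix:

// push the object itself, not its string form
pool_members_ui = { 'name': add_2 };
member_address.push(pool_members_ui);
// JSON.stringify(request) then produces {"partition":"Common","members":[{"name":"<ip>:<port>"}, ...]}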
|
81679f73722bec768bb4742048f19586
|
{
"intermediate": 0.3698769509792328,
"beginner": 0.4055939316749573,
"expert": 0.22452911734580994
}
|
47,010
|
I need a bash script
in a directory ..
loop over all the *.csv files
and print the file name then some styling then
in each file check whether the file contains '(' or ')'
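A minimal bash sketch under those assumptions (the "styling" is just a separator line; adjust to taste):

#!/usr/bin/env bash
for f in *.csv; do
    [ -e "$f" ] || continue              # skip if no .csv files match
    echo "==== $f ===="
    if grep -q '[()]' "$f"; then
        echo "  contains '(' or ')'"
        grep -n '[()]' "$f"              # show the matching lines
    else
        echo "  no parentheses found"
    fi
done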
|
25a8e214fc0f520cda688dc140ebe386
|
{
"intermediate": 0.1887354701757431,
"beginner": 0.6867283582687378,
"expert": 0.1245361715555191
}
|
47,011
|
I need a bash script
for looping over *.csv files
for each file check the content of each file; if '(' or ')' exists then print the file name
|
ab8a588d7d987e2b4954fa5e187de372
|
{
"intermediate": 0.12125950306653976,
"beginner": 0.7933358550071716,
"expert": 0.08540469408035278
}
|
47,012
|
I need a bash script
for looping over *.csv files
for each file check the content of each file; if '(' or ')' exists then print the file name
then print that content from the file
|
54cf9b92240a2e17d9a7027bebc28782
|
{
"intermediate": 0.18115626275539398,
"beginner": 0.7041513323783875,
"expert": 0.11469241231679916
}
|
47,013
|
I need a bash script
for looping over *.csv files
for each file check the content of each file; if '(' or ')' exists then print the file name
|
846004a1a18149c6f9203aea47ccc39a
|
{
"intermediate": 0.10820024460554123,
"beginner": 0.8252987265586853,
"expert": 0.06650097668170929
}
|
47,014
|
I have the following code to train an LSTM model on my dataset:
# %%
from sklearn.preprocessing import StandardScaler
import pandas as pd
import numpy as np
from tensorflow import keras
import joblib
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM,Dense,Dropout
import os
# %%
csv_directory = r"C:\Users\arisa\Desktop\day_spot_summary"
csv_files = [file for file in os.listdir(csv_directory) if file.endswith('.csv')]
# %%
# %%
def build_lstm_model(input_shape):
model = Sequential([
LSTM(2716, activation='tanh', input_shape=input_shape, return_sequences=True), # Adjusted for LSTM
Dropout(0.20),
# LSTM(2716, activation='tanh', return_sequences=False), # Additional LSTM layer
# Dropout(0.10),
# LSTM(2716, activation='tanh', return_sequences=False), # Additional LSTM layer
# Dropout(0.10),
Dense(2716, activation='relu'),
Dense(128, activation='relu'),
Dense(64, activation='relu'),
Dense(32, activation='relu'),
Dense(12),
])
model.compile(optimizer='adam',
loss='mse', # Use Mean Squared Error for regression
metrics=['mae']) # Mean Absolute Error as an additional metric
return model
# %%
def data_generator_lstm( n_steps):
while True:
for csv_file in csv_files:
# Read the CSV file
file_path = os.path.join(csv_directory, csv_file)
chunk = pd.read_csv(file_path)
feature_data = chunk.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
target_data = chunk[['y_High_1d'
, 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'
]]
# Prepare sequences for features and targets
X, y = [], []
for i in range(len(feature_data) - n_steps + 1): # Correct range to prevent out-of-bounds access
X.append(feature_data.iloc[i:i + n_steps].to_numpy()) # Use iloc for consistency, though not necessary for slicing
# Make sure the index for y is correctly bounded within the target_data
if i + n_steps - 1 < len(target_data): # Adjust condition to prevent out-of-bounds
y.append(target_data.iloc[i + n_steps - 1].to_numpy()) # Correct indexing to match the condition
else:
break # Safety break (though should be unnecessary with corrected logic)
X, y = np.array(X), np.array(y)
yield X, y
# %%
from tensorflow.keras.mixed_precision import set_global_policy
# Enable mixed precision
set_global_policy('mixed_float16')
# %%
model = build_lstm_model((30, 2716,))
model.summary()
# %%
import warnings
warnings.filterwarnings(action='ignore', message='X has feature names, but StandardScaler was fitted without feature names')
train_generator = data_generator_lstm(30)
# Update total_samples, train_samples, and val_samples according to your dataset after transformations
model.fit(
train_generator,
steps_per_epoch=50,
epochs=75,
# Add validation_data if you have a validation generator
)
Please change it properly so that, instead of an LSTM, I train a TCN model.
|
e16f2994e77515a656ed8db1c669d520
|
{
"intermediate": 0.4664432108402252,
"beginner": 0.284995436668396,
"expert": 0.2485613077878952
}
|
47,015
|
Write a very simple encryption algorithm in Lua. The algorithm provides encrypt(string, key) and decrypt(string, key) functions. The strength of the cipher is not important because it will be used in a computer game.
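A minimal Lua sketch using a repeating-key byte shift (effectively a Vigenère-style cipher over bytes); it is deliberately not secure, which matches the stated requirement.

-- Simple repeating-key byte-shift cipher; NOT secure, fine for game data.
local function shift(s, key, dir)
  local out = {}
  for i = 1, #s do
    local k = key:byte((i - 1) % #key + 1)
    out[i] = string.char((s:byte(i) + dir * k) % 256)
  end
  return table.concat(out)
end

function encrypt(s, key) return shift(s, key, 1) end
function decrypt(s, key) return shift(s, key, -1) end

-- usage
local secret = encrypt("player_gold=100", "mykey")
print(decrypt(secret, "mykey"))  --> player_gold=100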
|
5329bb916dfa0c9b8ec0194870d426e9
|
{
"intermediate": 0.349531352519989,
"beginner": 0.19619528949260712,
"expert": 0.45427340269088745
}
|
47,016
|
For each of the following Prolog expressions, write the equivalent Haskell expression without any use of [ and ] other than for the empty list []. Identify whether the expression is allowed by Haskell and, if not, explain why. Answer in detail.
[0|1].
[0, 1].
[0|[1]].
[0, [1]].
[0|[1|[2|[]]]]
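A sketch of the corresponding Haskell expressions using only (:) and the empty list [], with the allowed/rejected verdicts as comments; treat it as one possible phrasing of the answer, not the only one.

ex2 = 0 : 1 : []             -- Prolog [0, 1].         : allowed
ex3 = 0 : (1 : [])           -- Prolog [0|[1]].        : allowed, the same list as ex2
ex5 = 0 : (1 : (2 : []))     -- Prolog [0|[1|[2|[]]]]  : allowed
-- ex1 = 0 : 1               -- Prolog [0|1].   : rejected, the second argument of (:) must be a list and 1 is a number
-- ex4 = 0 : (1 : []) : []   -- Prolog [0,[1]]. : rejected, the elements (a number and a list) would have different types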
|
153ec57c23764f7808361f3682df1c3d
|
{
"intermediate": 0.33038267493247986,
"beginner": 0.3517893850803375,
"expert": 0.31782791018486023
}
|
47,017
|
How do I check the total errors for every 1-minute interval using Splunk?
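A minimal SPL sketch, assuming the events contain the word "error" (or a log-level field you can filter on); the index name is a placeholder.

index=your_index "error"
| timechart span=1m count AS total_errors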
|
65ac328336e3df1358740a77de2d4895
|
{
"intermediate": 0.27887222170829773,
"beginner": 0.048228681087493896,
"expert": 0.6728991270065308
}
|
47,018
|
# Log the result
logging.info('Result of crew.kickoff(): %s', result)
# Log any dialogue messages from the crew object
for message in crew.dialogue_messages:
dialogue_logger.info('Dialogue message: %s', message)
# Configure the logging settings
logging.basicConfig(filename='options.log', level=logging.INFO, format='%(asctime)s - %(levelname)s: %(message)s')
# Define a custom logging handler to capture printed output
class PrintToLogHandler(logging.Handler):
def emit(self, record):
try:
msg = self.format(record)
logging.info(msg)
except Exception:
self.handleError(record)
# Redirect stdout to the custom logging handler
stdout_handler = PrintToLogHandler()
stdout_handler.setLevel(logging.INFO)
logging.getLogger().addHandler(stdout_handler)
sys.stdout = stdout_handler
# Call the crew.kickoff() method
result = crew.kickoff()
Fix this.
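A minimal sketch of the main fixes: sys.stdout needs a file-like object with write()/flush() (a logging.Handler has neither), basicConfig should run before any logging call, and the crew.dialogue_messages loop should be dropped if that attribute does not exist in your crewAI version.

import logging
import sys

logging.basicConfig(filename='options.log', level=logging.INFO,
                    format='%(asctime)s - %(levelname)s: %(message)s')

class StreamToLogger:
    """File-like object: anything printed is forwarded to the logging module."""
    def write(self, message):
        message = message.strip()
        if message:
            logging.info(message)
    def flush(self):
        pass

sys.stdout = StreamToLogger()
result = crew.kickoff()          # printed agent output now lands in options.log
sys.stdout = sys.__stdout__      # restore normal printing
logging.info('Result of crew.kickoff(): %s', result)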
|
8005e4e90a20bbc47d19774526672436
|
{
"intermediate": 0.4169614613056183,
"beginner": 0.3772827386856079,
"expert": 0.205755814909935
}
|
47,019
|
Write a Vigenère encryption algorithm in Lua. The algorithm provides encrypt(string, key) and decrypt(string, key) functions. The strength of the cipher is not important because it will be used in a computer game.
|
71c3294340f709061a81a4e9050406d5
|
{
"intermediate": 0.33413195610046387,
"beginner": 0.2088598757982254,
"expert": 0.45700815320014954
}
|
47,020
|
f(x) = {
C, 2 <= x < 5
0, otherwise
}
Find the parameter C such that this function is a probability density function. Write the solution in TeX.
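A short worked solution in TeX: the density must be non-negative and integrate to 1, which fixes C.

f(x) =
\begin{cases}
  C, & 2 \le x < 5 \\
  0, & \text{otherwise}
\end{cases}
\qquad
\int_{-\infty}^{\infty} f(x)\,dx = \int_{2}^{5} C\,dx = 3C = 1
\;\Longrightarrow\; C = \tfrac{1}{3},
\quad\text{and } \tfrac{1}{3} \ge 0, \text{ so } f \text{ is a valid density.}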
|
6ed4e80cb8290c65410161315e3b9f3a
|
{
"intermediate": 0.2617599070072174,
"beginner": 0.40153056383132935,
"expert": 0.33670952916145325
}
|
47,021
|
result = crew.kickoff()
What is a simple way to log this call's printed output?
|
18170caf5e13623ebe7eb1ac8a61f795
|
{
"intermediate": 0.3461241126060486,
"beginner": 0.4652761220932007,
"expert": 0.18859981000423431
}
|
47,022
|
https://www.facebook.com/siddhaarchitects This is my Facebook link; can you analyse it?
|
00f89841790fe8584521e73a43380969
|
{
"intermediate": 0.3531513810157776,
"beginner": 0.21869704127311707,
"expert": 0.42815154790878296
}
|
47,023
|
conda create -n transformers python=3.9
Fetching package metadata ...
CondaHTTPError: HTTP 000 CONNECTION FAILED for url
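HTTP 000 from conda usually means the machine cannot reach the package channels at all (proxy, firewall, or SSL interception). A minimal sketch of the usual checks; the proxy values are placeholders.

# Can the channel host be reached at all?
curl -I https://repo.anaconda.com/pkgs/main/

# If you are behind a proxy, add it to ~/.condarc (placeholders):
#   proxy_servers:
#     http: http://proxy.example.com:8080
#     https: http://proxy.example.com:8080

# Last resort on SSL-intercepted networks (weakens security):
conda config --set ssl_verify false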
|
da88e269481169610934bc7392b9c6a9
|
{
"intermediate": 0.4367520809173584,
"beginner": 0.24555768072605133,
"expert": 0.31769025325775146
}
|
47,024
|
{
"name": "InvalidArgumentError",
"message": "Graph execution error:
Detected at node 'mean_squared_error/SquaredDifference' defined at (most recent call last):
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\runpy.py\", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\runpy.py\", line 87, in _run_code
exec(code, run_globals)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel_launcher.py\", line 18, in <module>
app.launch_new_instance()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\traitlets\\config\\application.py\", line 1075, in launch_instance
app.start()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelapp.py\", line 739, in start
self.io_loop.start()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\tornado\\platform\\asyncio.py\", line 205, in start
self.asyncio_loop.run_forever()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\asyncio\\base_events.py\", line 601, in run_forever
self._run_once()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\asyncio\\base_events.py\", line 1905, in _run_once
handle._run()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\asyncio\\events.py\", line 80, in _run
self._context.run(self._callback, *self._args)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelbase.py\", line 545, in dispatch_queue
await self.process_one()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelbase.py\", line 534, in process_one
await dispatch(*args)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelbase.py\", line 437, in dispatch_shell
await result
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\ipkernel.py\", line 359, in execute_request
await super().execute_request(stream, ident, parent)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelbase.py\", line 778, in execute_request
reply_content = await reply_content
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\ipkernel.py\", line 446, in do_execute
res = shell.run_cell(
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\zmqshell.py\", line 549, in run_cell
return super().run_cell(*args, **kwargs)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3048, in run_cell
result = self._run_cell(
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3103, in _run_cell
result = runner(coro)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\async_helpers.py\", line 129, in _pseudo_sync_runner
coro.send(None)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3308, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3490, in run_ast_nodes
if await self.run_code(code, result, async_=asy):
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3550, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File \"C:\\Users\\arisa\\AppData\\Local\\Temp\\ipykernel_15096\\261283929.py\", line 7, in <module>
model.fit(
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\utils\\traceback_utils.py\", line 65, in error_handler
return fn(*args, **kwargs)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1564, in fit
tmp_logs = self.train_function(iterator)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1160, in train_function
return step_function(self, iterator)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1146, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1135, in run_step
outputs = model.train_step(data)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 994, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1052, in compute_loss
return self.compiled_loss(
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\compile_utils.py\", line 265, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\losses.py\", line 152, in __call__
losses = call_fn(y_true, y_pred)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\losses.py\", line 272, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\losses.py\", line 1486, in mean_squared_error
return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)
Node: 'mean_squared_error/SquaredDifference'
required broadcastable shapes
\t [[{{node mean_squared_error/SquaredDifference}}]] [Op:__inference_train_function_4665]",
"stack": "---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
Cell In[8], line 7
3 train_generator = data_generator_lstm(30,x_scaler_loaded,y_scaler_loaded)
5 # Update total_samples, train_samples, and val_samples according to your dataset after transformations
----> 7 model.fit(
8 train_generator,
9 steps_per_epoch=50,
10 epochs=75,
11 # Add validation_data if you have a validation generator
12 )
File c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\utils\\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\tensorflow\\python\\eager\\execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
52 try:
53 ctx.ensure_initialized()
---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
InvalidArgumentError: Graph execution error:
Detected at node 'mean_squared_error/SquaredDifference' defined at (most recent call last):
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\runpy.py\", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\runpy.py\", line 87, in _run_code
exec(code, run_globals)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel_launcher.py\", line 18, in <module>
app.launch_new_instance()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\traitlets\\config\\application.py\", line 1075, in launch_instance
app.start()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelapp.py\", line 739, in start
self.io_loop.start()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\tornado\\platform\\asyncio.py\", line 205, in start
self.asyncio_loop.run_forever()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\asyncio\\base_events.py\", line 601, in run_forever
self._run_once()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\asyncio\\base_events.py\", line 1905, in _run_once
handle._run()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\asyncio\\events.py\", line 80, in _run
self._context.run(self._callback, *self._args)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelbase.py\", line 545, in dispatch_queue
await self.process_one()
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelbase.py\", line 534, in process_one
await dispatch(*args)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelbase.py\", line 437, in dispatch_shell
await result
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\ipkernel.py\", line 359, in execute_request
await super().execute_request(stream, ident, parent)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\kernelbase.py\", line 778, in execute_request
reply_content = await reply_content
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\ipkernel.py\", line 446, in do_execute
res = shell.run_cell(
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\ipykernel\\zmqshell.py\", line 549, in run_cell
return super().run_cell(*args, **kwargs)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3048, in run_cell
result = self._run_cell(
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3103, in _run_cell
result = runner(coro)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\async_helpers.py\", line 129, in _pseudo_sync_runner
coro.send(None)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3308, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3490, in run_ast_nodes
if await self.run_code(code, result, async_=asy):
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\IPython\\core\\interactiveshell.py\", line 3550, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File \"C:\\Users\\arisa\\AppData\\Local\\Temp\\ipykernel_15096\\261283929.py\", line 7, in <module>
model.fit(
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\utils\\traceback_utils.py\", line 65, in error_handler
return fn(*args, **kwargs)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1564, in fit
tmp_logs = self.train_function(iterator)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1160, in train_function
return step_function(self, iterator)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1146, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1135, in run_step
outputs = model.train_step(data)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 994, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\training.py\", line 1052, in compute_loss
return self.compiled_loss(
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\engine\\compile_utils.py\", line 265, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\losses.py\", line 152, in __call__
losses = call_fn(y_true, y_pred)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\losses.py\", line 272, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File \"c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\losses.py\", line 1486, in mean_squared_error
return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)
Node: 'mean_squared_error/SquaredDifference'
required broadcastable shapes
\t [[{{node mean_squared_error/SquaredDifference}}]] [Op:__inference_train_function_4665]"
}
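A likely cause (hedged, since the model is not shown in this traceback): assuming the model is the LSTM stack from earlier in this dump, return_sequences=True on the last recurrent layer makes the Dense(12) head emit one 12-vector per timestep, shape (batch, 30, 12), while each y from the generator is (batch, 12), so the MSE shapes cannot broadcast. A minimal sketch of the fix:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

model = Sequential([
    # return_sequences=False so the output is (batch, 12), matching y
    LSTM(2716, activation='tanh', input_shape=(30, 2716), return_sequences=False),
    Dropout(0.20),
    Dense(128, activation='relu'),
    Dense(12),
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])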
|
23e8cb891a99f565318122bbd2735418
|
{
"intermediate": 0.281398206949234,
"beginner": 0.34826910495758057,
"expert": 0.3703327178955078
}
|
47,025
|
Set difference between two sets: how do I know what was added or removed, in JS?
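A minimal JavaScript sketch, assuming the two sets are "before" and "after" snapshots:

const before = new Set(["a", "b", "c"]);
const after  = new Set(["b", "c", "d"]);

// in `after` but not in `before`
const added   = [...after].filter(x => !before.has(x));   // ["d"]
// in `before` but not in `after`
const removed = [...before].filter(x => !after.has(x));   // ["a"]

console.log({ added, removed });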
|
123fc57167af56c3e45e54a4c468d13d
|
{
"intermediate": 0.30773505568504333,
"beginner": 0.4000660479068756,
"expert": 0.29219889640808105
}
|
47,026
|
{
"name": "ResourceExhaustedError",
"message": "Graph execution error:
OOM when allocating tensor with shape[21728] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
\t [[{{node concat}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
\t [[sequential/lstm_1/PartitionedCall]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[Op:__inference_train_function_6595]",
"stack": "---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
Cell In[8], line 7
3 train_generator = data_generator_lstm(30,x_scaler_loaded,y_scaler_loaded)
5 # Update total_samples, train_samples, and val_samples according to your dataset after transformations
----> 7 model.fit(
8 train_generator,
9 steps_per_epoch=50,
10 epochs=75,
11 # Add validation_data if you have a validation generator
12 )
File c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\utils\\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\tensorflow\\python\\eager\\execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
52 try:
53 ctx.ensure_initialized()
---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
ResourceExhaustedError: Graph execution error:
OOM when allocating tensor with shape[21728] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
\t [[{{node concat}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
\t [[sequential/lstm_1/PartitionedCall]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[Op:__inference_train_function_6595]"
}
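The OOM is raised inside one of the LSTM layers, so the model as configured simply does not fit on the GPU. A minimal sketch of the usual mitigations; which one helps depends on the card's memory.

import tensorflow as tf

# Ask TensorFlow to grow GPU memory on demand instead of pre-allocating it
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# The bigger lever is shrinking the model itself, e.g. far fewer units per layer:
# LSTM(512, ...) instead of LSTM(2716, ...), and/or smaller batches from the generator.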
|
2c5b66835655a6a0bebf680285e2ffd0
|
{
"intermediate": 0.300179660320282,
"beginner": 0.3396975100040436,
"expert": 0.36012282967567444
}
|
47,027
|
Write a sql.py script that iterates through a models.py file, given a path, and outputs an instruction.txt file containing the following:
For each class in models.py, if the class contains a field that is a ForeignKey() (Django), create an INSERT instruction for that class, adding _id to the field name.
for example
class TEST_CLASS:
x = models.CharField(...)
y = models.CharField(...)
z = models.ForeignKey(...)
outputs: INSERT INTO test_case ("x","y","z_id")
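A minimal regex-based sketch of sql.py under those assumptions: any field assigned models.ForeignKey(...) becomes a _id column, and the table name is derived by lower-casing the class name (adjust to your naming convention; the example above uses test_case). A real implementation could import the models with Django's introspection instead.

import re
import sys

def generate_instructions(models_path, out_path="instruction.txt"):
    with open(models_path) as f:
        source = f.read()

    statements = []
    # one block per "class Xxx(...):" definition, capturing its indented body
    class_re = re.compile(r"class\s+(\w+)\s*(?:\([^)]*\))?\s*:\s*\n((?:[ \t]+.*\n?)*)")
    field_re = re.compile(r"^\s*(\w+)\s*=\s*models\.(\w+)\(", re.MULTILINE)

    for class_name, body in class_re.findall(source):
        columns = []
        has_fk = False
        for field, kind in field_re.findall(body):
            if kind == "ForeignKey":
                columns.append(field + "_id")
                has_fk = True
            else:
                columns.append(field)
        if has_fk:
            cols = ", ".join('"{}"'.format(c) for c in columns)
            statements.append("INSERT INTO {} ({})".format(class_name.lower(), cols))

    with open(out_path, "w") as out:
        out.write("\n".join(statements) + "\n")

if __name__ == "__main__":
    generate_instructions(sys.argv[1])   # usage: python sql.py path/to/models.py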
|
e8b7acc5d49f115a4f1841e4b0c710a3
|
{
"intermediate": 0.4280316233634949,
"beginner": 0.37861523032188416,
"expert": 0.19335313141345978
}
|
47,028
|
I have resolved several issues concerning code and Spring component Maven versions, and am now addressing modifications to the remaining incompatible batch service code.
Please rephrase it.
|
6fc94701ac414afeb8c381cad8c63118
|
{
"intermediate": 0.32992473244667053,
"beginner": 0.3038342595100403,
"expert": 0.3662409484386444
}
|
47,029
|
Here is my code:
# %%
from sklearn.preprocessing import StandardScaler
import pandas as pd
import numpy as np
from tensorflow import keras
import joblib
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM,Dense,Dropout
import os
# %%
csv_directory = r"C:\Users\arisa\Desktop\day_spot_summary"
csv_files = [file for file in os.listdir(csv_directory) if file.endswith('.csv')]
# %%
from tensorflow.keras.layers import BatchNormalization
def build_lstm_model(input_shape):
model = Sequential([
LSTM(2716, activation='tanh', input_shape=input_shape, return_sequences=True), # Adjusted for LSTM
Dropout(0.20),
BatchNormalization(),
# LSTM(2716, activation='tanh', return_sequences=False), # Additional LSTM layer
# Dropout(0.10),
Dense(2716, activation='relu'),
Dropout(0.15),
Dense(256, activation='relu'),
Dropout(0.10),
Dense(128, activation='relu'),
Dense(64, activation='relu'),
Dense(32, activation='relu'),
Dense(12),
])
model.compile(optimizer='adam',
loss='mse', # Use Mean Squared Error for regression
metrics=['mae']) # Mean Absolute Error as an additional metric
return model
# %%
x_scaler_loaded = joblib.load('nn_x_minmaxscaler.sav')
y_scaler_loaded = joblib.load('nn_y_minmaxscaler.sav')
# %%
def data_generator_lstm( n_steps,x_scaler,y_scaler):
while True:
for csv_file in csv_files:
# Read the CSV file
file_path = os.path.join(csv_directory, csv_file)
chunk = pd.read_csv(file_path)
feature_data = chunk.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
target_data = chunk[['y_High_1d'
, 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'
]]
feature_data_scaled = pd.DataFrame(x_scaler.transform(feature_data), columns=feature_data.columns)
# Assuming target_data also needs to be scaled, apply scaler separately
target_data_scaled = pd.DataFrame(y_scaler.transform(target_data), columns=target_data.columns)
# Prepare sequences for features and targets
X, y = [], []
for i in range(len(feature_data_scaled) - n_steps + 1): # Correct range to prevent out-of-bounds access
X.append(feature_data_scaled.iloc[i:i + n_steps].to_numpy()) # Use iloc for consistency, though not necessary for slicing
# Make sure the index for y is correctly bounded within the target_data
if i + n_steps - 1 < len(target_data_scaled): # Adjust condition to prevent out-of-bounds
y.append(target_data_scaled.iloc[i + n_steps - 1].to_numpy()) # Correct indexing to match the condition
else:
break # Safety break (though should be unnecessary with corrected logic)
X, y = np.array(X), np.array(y)
yield X, y
# %%
# from tensorflow.keras.mixed_precision import set_global_policy
# # Enable mixed precision
# set_global_policy('mixed_float16')
# %%
model = build_lstm_model((30, 2716))
model.summary()
# %%
import warnings
warnings.filterwarnings(action='ignore', message='X has feature names, but MinMaxScaler was fitted without feature names')
train_generator = data_generator_lstm(30,x_scaler_loaded,y_scaler_loaded)
# Update total_samples, train_samples, and val_samples according to your dataset after transformations
model.fit(
train_generator,
steps_per_epoch=50,
epochs=75,
# Add validation_data if you have a validation generator
)
|
ba3da4892addf44d0f8c3b1ffad10f81
|
{
"intermediate": 0.5108555555343628,
"beginner": 0.33795487880706787,
"expert": 0.15118961036205292
}
|
47,030
|
code:
# %%
from sklearn.preprocessing import StandardScaler
import pandas as pd
import numpy as np
from tensorflow import keras
import joblib
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM,Dense,Dropout
import os
# %%
csv_directory = r"C:\Users\arisa\Desktop\day_spot_summary"
csv_files = [file for file in os.listdir(csv_directory) if file.endswith('.csv')]
# %%
from tensorflow.keras.layers import BatchNormalization
def build_lstm_model(input_shape):
model = Sequential([
LSTM(2716, activation='tanh', input_shape=input_shape, return_sequences=True), # Adjusted for LSTM
Dropout(0.20),
BatchNormalization(),
# LSTM(2716, activation='tanh', return_sequences=False), # Additional LSTM layer
# Dropout(0.10),
Dense(2716, activation='relu'),
Dropout(0.15),
Dense(256, activation='relu'),
Dropout(0.10),
Dense(128, activation='relu'),
Dense(64, activation='relu'),
Dense(32, activation='relu'),
Dense(12),
])
model.compile(optimizer='adam',
loss='mse', # Use Mean Squared Error for regression
metrics=['mae']) # Mean Absolute Error as an additional metric
return model
# %%
x_scaler_loaded = joblib.load('nn_x_minmaxscaler.sav')
y_scaler_loaded = joblib.load('nn_y_minmaxscaler.sav')
# %%
def data_generator_lstm( n_steps,x_scaler,y_scaler):
while True:
for csv_file in csv_files:
# Read the CSV file
file_path = os.path.join(csv_directory, csv_file)
chunk = pd.read_csv(file_path)
feature_data = chunk.drop([
'y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1)
target_data = chunk[[
'y_High_1d','y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'
]]
feature_data_scaled = pd.DataFrame(x_scaler.transform(feature_data), columns=feature_data.columns)
# Assuming target_data also needs to be scaled, apply scaler separately
target_data_scaled = pd.DataFrame(y_scaler.transform(target_data), columns=target_data.columns)
# ensuring end_ix does not go out of feature_data_scaled's bounds
num_samples = (len(feature_data_scaled) - n_steps) // 32
for i in range(num_samples):
start_ix = i * 32
end_ix = start_ix + n_steps
X = feature_data_scaled[start_ix:end_ix]
# using .iloc to avoid KeyError, and selecting the corresponding outputs
y = target_data_scaled.iloc[start_ix:end_ix].iloc[-1]
yield X.reshape((1, n_steps, -1)), y.reshape((1, -1))
model = build_lstm_model((30, 2716))
model.summary()
# %%
import warnings
warnings.filterwarnings(action='ignore', message='X has feature names, but MinMaxScaler was fitted without feature names')
train_generator = data_generator_lstm(30,x_scaler_loaded,y_scaler_loaded)
# Update total_samples, train_samples, and val_samples according to your dataset after transformations
model.fit(
train_generator,
steps_per_epoch=50,
epochs=75,
# Add validation_data if you have a validation generator
)
error:
{
"name": "AttributeError",
"message": "'DataFrame' object has no attribute 'reshape'",
"stack": "---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\\AppData\\Local\\Temp\\ipykernel_17980\\261283929.py in ?()
----> 7 import warnings
8 warnings.filterwarnings(action='ignore', message='X has feature names, but MinMaxScaler was fitted without feature names')
9 train_generator = data_generator_lstm(30,x_scaler_loaded,y_scaler_loaded)
10
c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\keras\\utils\\traceback_utils.py in ?(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
70 raise e.with_traceback(filtered_tb) from None
71 finally:
---> 72 del filtered_tb
~\\AppData\\Local\\Temp\\ipykernel_17980\\994707971.py in ?(n_steps, x_scaler, y_scaler)
32 X = feature_data_scaled[start_ix:end_ix]
33 # using .iloc to avoid KeyError, and selecting the corresponding outputs
34 y = target_data_scaled.iloc[start_ix:end_ix].iloc[-1]
35
---> 36 yield X.reshape((1, n_steps, -1)), y.reshape((1, -1))
37
38 # Prepare sequences for features and targets
39 # X, y = [], []
c:\\Users\\arisa\\.conda\\envs\\tf\\lib\\site-packages\\pandas\\core\\generic.py in ?(self, name)
6295 and name not in self._accessors
6296 and self._info_axis._can_hold_identifiers_and_holds_name(name)
6297 ):
6298 return self[name]
-> 6299 return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'reshape'"
}
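A minimal sketch of one possible fix for the traceback above: pandas DataFrames and Series have no .reshape(), so each window can be converted to NumPy with .to_numpy() before reshaping. The variable names mirror the generator above; the toy frames at the bottom are made-up stand-ins for the real CSV data, not the author's actual columns.
import numpy as np
import pandas as pd

def make_lstm_batch(feature_data_scaled, target_data_scaled, start_ix, n_steps):
    # Return one (X, y) pair shaped for an LSTM: X is (1, n_steps, n_features).
    end_ix = start_ix + n_steps
    # .to_numpy() yields plain ndarrays, which do support .reshape()
    X = feature_data_scaled.iloc[start_ix:end_ix].to_numpy()
    y = target_data_scaled.iloc[end_ix - 1].to_numpy()  # last row of the window
    return X.reshape((1, n_steps, -1)), y.reshape((1, -1))

# quick self-check with random stand-in data
f = pd.DataFrame(np.random.rand(64, 5))
t = pd.DataFrame(np.random.rand(64, 3))
X, y = make_lstm_batch(f, t, 0, 30)
print(X.shape, y.shape)  # (1, 30, 5) (1, 3)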
|
f9d6934cad98313dafa45bfa373540b3
|
{
"intermediate": 0.4058266580104828,
"beginner": 0.37244248390197754,
"expert": 0.22173085808753967
}
|
47,031
|
i need appscript which will add new button in google docs, which will set default paragraphs to arial 12pt 1.5 interline
|
5f620f3a7c744dc72e12276e2391d881
|
{
"intermediate": 0.47463130950927734,
"beginner": 0.20786692202091217,
"expert": 0.3175017535686493
}
|
47,032
|
python import-request-elastic.py
/usr/lib/python3/dist-packages/urllib3/connectionpool.py:1062: InsecureRequestWarning: Unverified HTTPS request is being made to host 'elaas-inspec-dev.kb.elasticaas.ocb.equant.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
warnings.warn(
Traceback (most recent call last):
File "/home/inobile/Code/import-request-elastic.py", line 44, in <module>
data = response.json()["responses"][0]["hits"]["hits"]
import requests
import csv
# URL de l'API Kibana
url = "https://elaas-inspec-dev.kb.elasticaas.ocb.equant.com/s/inspec-dev/api/metrics/data"
# Paramètres de la requête
params = {
"from": "2023-12-31T23:00:00.000Z",
"to": "2024-01-31T23:00:00.000Z",
"query": "nuar",
"filters": [
{
"meta": {
"index": "security-solution-inspec-dev",
"negate": False,
"disabled": False,
"alias": None,
"type": "phrase",
"key": "observer.domain",
"params": {
"query": "stork1"
},
"field": "observer.domain"
},
"query": {
"match_phrase": {
"observer.domain": "stork1"
}
},
"$state": {
"store": "appState"
}
}
]
}
# Informations d'authentification
username = "xxxx"
password = "xxxx"
# Envoyer la requête avec authentification basique et récupérer les données
response = requests.get(url, auth=(username, password), params=params, verify=False)
data = response.json()["responses"][0]["hits"]["hits"]
# Initialiser un dictionnaire pour stocker les hits par heure et par jour
hits_by_hour_and_day = {}
# Parcourir les données et compter les hits par heure et par jour
for hit in data:
timestamp = hit["_source"]["@timestamp"]
date = timestamp.split("T")[0]
hour = int(timestamp.split("T")[1].split(":")[0])
if date not in hits_by_hour_and_day:
hits_by_hour_and_day[date] = {hour: 1 for hour in range(24)}
else:
hits_by_hour_and_day[date][hour] += 1
# Ouvrir un fichier CSV pour écrire les résultats
with open("hits_par_heure_et_jour.csv", "w", newline="") as csvfile:
writer = csv.writer(csvfile)
# Écrire l'en-tête du fichier CSV
header = ["Jours"] + [f"{hour}h" for hour in range(24)]
writer.writerow(header)
# Écrire les données dans le fichier CSV
for date, hourly_hits in sorted(hits_by_hour_and_day.items()):
row = [date] + [hourly_hits.get(hour, 0) for hour in range(24)]
writer.writerow(row)
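The traceback above is cut off before the error type, but the failing line assumes the payload contains responses[0].hits.hits. A hedged sketch of a more defensive version of that step; it reuses the url, username, password and params defined in the script above and prints the top-level keys instead of guessing the shape:
import json
import requests

def fetch_hits(url, username, password, params):
    # Fetch and defensively unpack a search-style response instead of
    # indexing straight into ["responses"][0]["hits"]["hits"].
    response = requests.get(url, auth=(username, password), params=params, verify=False)
    response.raise_for_status()  # surface HTTP-level failures first
    payload = response.json()
    # show what the endpoint actually returned before assuming a shape
    print(json.dumps({k: type(v).__name__ for k, v in payload.items()}, indent=2))
    responses = payload.get("responses") or [payload]  # fall back to the payload itself
    return responses[0].get("hits", {}).get("hits", [])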
|
49a20b04368b28b523ffe37b840364cb
|
{
"intermediate": 0.4433022439479828,
"beginner": 0.3808145225048065,
"expert": 0.17588317394256592
}
|
47,033
|
is this the right way to use BatchNormalization:?
model = Sequential([
LSTM(2716, activation='tanh', input_shape=input_shape, return_sequences=True), # Adjusted for LSTM
Dropout(0.20),
# BatchNormalization(),
# LSTM(2716, activation='tanh', return_sequences=False), # Additional LSTM layer
# Dropout(0.10),
Dense(2716, activation='relu'),
Dropout(0.15),
Dense(256, activation='relu'),
Dropout(0.10),
Dense(128, activation='relu'),
Dense(64, activation='relu'),
Dense(32, activation='relu'),
Dense(12),
])
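One common (not the only valid) placement is BatchNormalization directly after the layer whose activations you want to stabilize, before Dropout and the next Dense. A small hedged sketch with placeholder layer sizes rather than the 2716-unit layers above; note it uses return_sequences=False so the Dense stack receives a 2D tensor:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, BatchNormalization

def build_model(input_shape, n_outputs=12):
    return Sequential([
        LSTM(256, activation='tanh', input_shape=input_shape, return_sequences=False),
        BatchNormalization(),   # normalize the LSTM output before regularizing it
        Dropout(0.20),
        Dense(128, activation='relu'),
        BatchNormalization(),
        Dropout(0.10),
        Dense(n_outputs),       # linear output for regression targets
    ])

model = build_model((30, 50))   # 30 timesteps, 50 features - placeholder sizes
model.summary()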
|
329a4e1be18768f1eef20709193f2251
|
{
"intermediate": 0.379477322101593,
"beginner": 0.1522846221923828,
"expert": 0.46823811531066895
}
|
47,034
|
Objective: To implement the concept of Joins
Join Multiple Tables (Equi Join): Sometimes we need to manipulate data from more than
one table as though the tables were not separate objects but one single entity. To
achieve this, we have to join the tables. Tables are joined on columns that have the
same data type and comparable data within the tables.
The tables that must be joined are specified in the FROM clause and the
joining attributes in the WHERE clause.
Algorithm for JOIN in SQL:
1. Cartesian product of the tables (specified in the FROM clause)
2. Selection of rows that match (predicate in the WHERE clause)
3. Projection of the columns specified in the SELECT clause.
1. Cartesian product:
Consider two tables, student and course:
SELECT B.*, P.*
FROM student B, course P;
2. INNER JOIN:
Cartesian product followed by selection:
SELECT B.*, P.*
FROM student B, course P
WHERE B.course# = P.course#;
3. LEFT OUTER JOIN:
LEFT OUTER JOIN = Cartesian product + selection, but also include unmatched rows from
the left table, padding nulls into the values of attributes belonging to the second table.
Example:
SELECT B.*, P.*
FROM student B LEFT JOIN course P
ON B.course# = P.course#;
4. RIGHT OUTER JOIN:
RIGHT OUTER JOIN = Cartesian product + selection, but also include unmatched rows
from the right table.
Example:
SELECT B.*, P.*
FROM student B RIGHT JOIN course P
ON B.course# = P.course#;
5. FULL OUTER JOIN:
Example:
SELECT B.*, P.*
FROM student B FULL JOIN course P
ON B.course# = P.course#;
OBJECTIVE: Answer the following queries (a runnable join illustration follows the list):
1. Find out the products which have been sold to 'Ivan Bayross'.
2. Find out the products and their quantities that have to be delivered.
3. Find the product_no and description of moving products.
4. Find out the names of clients who have purchased 'CD DRIVE'.
5. List the product_no and s_order_no of customers having qty ordered less than 5
from the order details table for the product "1.44 floppies".
6. Find the products and their quantities for the orders placed by 'Vandan Saitwal'
and 'Ivan Bayross'.
7. Find the products and their quantities for the orders placed by client_no
"C00001" and "C00002".
8. Find the order No., Client No. and salesman No. where a client has been served
by more than one salesman.
9. Display the s_order_date in the format "dd-mon-yy", e.g. "12-feb-96".
10. Find the date 15 days after today's date.
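A small runnable illustration of the joins described above, using Python's built-in sqlite3 with made-up student/course rows; it does not use the client or product tables that the exercise questions refer to.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE student (name TEXT, course_no INTEGER);
    CREATE TABLE course  (course_no INTEGER, title TEXT);
    INSERT INTO student VALUES ('Asha', 1), ('Ravi', 2), ('Meena', 99);
    INSERT INTO course  VALUES (1, 'DBMS'), (2, 'Networks'), (3, 'OS');
""")

# INNER JOIN: Cartesian product restricted to rows whose course_no values match
for row in con.execute(
        "SELECT B.name, P.title FROM student B JOIN course P ON B.course_no = P.course_no"):
    print("inner:", row)

# LEFT OUTER JOIN: unmatched student rows are kept, course columns padded with NULL
for row in con.execute(
        "SELECT B.name, P.title FROM student B LEFT JOIN course P ON B.course_no = P.course_no"):
    print("left :", row)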
|
cd314ecf4f19e92d1da30d1c82382ace
|
{
"intermediate": 0.30916455388069153,
"beginner": 0.1711457520723343,
"expert": 0.519689679145813
}
|
47,035
|
import json
import re
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
class AppleMusicAPI:
def __init__(self):
self.session = requests.Session()
self.session.headers = {
'content-type': 'application/json;charset=utf-8',
'connection': 'keep-alive',
'accept': 'application/json',
'origin': 'https://music.apple.com',
'referer': 'https://music.apple.com/',
'accept-encoding': 'gzip, deflate, br',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36'
}
def __check_url(self, url):
try:
response = self.session.head(url, allow_redirects=True)
return response.status_code == 200
except requests.RequestException:
return False
def __get_access_token(self):
response = self.session.get('https://music.apple.com/us/browse')
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
script_tag = soup.find('script', attrs={'type': 'module', 'crossorigin': True, 'src': True})
if not script_tag:
raise Exception("Unable to find script tag containing access token")
index_js_url = 'https://music.apple.com' + script_tag['src']
response = self.session.get(index_js_url)
response.raise_for_status()
access_token = re.search('(?=eyJh)(.*?)(?=")', response.text)
if not access_token:
raise Exception("Unable to find access token in JavaScript file")
return access_token.group(0)
def __get_album_info(self, url):
parsed_url = urlparse(url)
if parsed_url.netloc != 'music.apple.com':
raise ValueError("Invalid URL. Please provide a valid Apple Music album URL.")
if parsed_url.path.count('/') < 4:
raise ValueError("Invalid URL format. Please provide a direct link to the album.")
album_id = parsed_url.path.split('/')[4]
album_api_url = f'https://amp-api.music.apple.com/v1/catalog/us/albums/{album_id}'
response = self.session.get(album_api_url)
response.raise_for_status()
return response.json()
def get_album_details(self, url):
access_token = self.__get_access_token()
self.session.headers['authorization'] = f'Bearer {access_token}'
if not self.__check_url(url):
raise ValueError("Invalid URL. Please provide a valid Apple Music album URL.")
album_info = self.__get_album_info(url)
try:
details = {
'isrc': album_info['data'][0]['attributes'].get('isrc', ''),
'composer': album_info['data'][0]['attributes'].get('composer', ''),
'songartist': album_info['data'][0]['attributes']['artistName'],
'credits': album_info['data'][0]['attributes']['editorialNotes'].get('shortNotes', ''),
'Programming': album_info['data'][0]['attributes']['editorialNotes'].get('Programming', []),
'Guitar': album_info['data'][0]['attributes']['editorialNotes'].get('Guitar', []),
'Drums': album_info['data'][0]['attributes']['editorialNotes'].get('Drums', []),
'Vocals': album_info['data'][0]['attributes']['editorialNotes'].get('Vocals', []),
'Background Vocals': album_info['data'][0]['attributes']['editorialNotes'].get('Background Vocals', []),
'Songwriter': album_info['data'][0]['attributes']['editorialNotes'].get('Songwriter', []),
'Producer': album_info['data'][0]['attributes']['editorialNotes'].get('Producer', []),
'Executive Producer': album_info['data'][0]['attributes']['editorialNotes'].get('Executive Producer', []),
'Mixing Engineer': album_info['data'][0]['attributes']['editorialNotes'].get('Mixing Engineer', []),
'Mastering Engineer': album_info['data'][0]['attributes']['editorialNotes'].get('Mastering Engineer', []),
'Engineer': album_info['data'][0]['attributes']['editorialNotes'].get('Engineer', []),
'recordlabel': album_info['data'][0]['attributes']['recordLabel'],
'trackcount': album_info['data'][0]['attributes']['trackCount'],
'albumartist': album_info['data'][0]['attributes']['artistName']
}
except KeyError as e:
raise ValueError(f"Failed to extract album details: {e}")
return details
if __name__ == "__main__":
api = AppleMusicAPI()
url = 'https://music.apple.com/us/album/happiness-begins/1461478261'
album_details = api.get_album_details(url)
print(album_details)
# Write album details to a text file
with open('album_details.txt', 'w') as f:
json.dump(album_details, f, indent=4)
api.py:
import re
import json
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
from urllib.request import urlopen
from urllib.error import URLError, HTTPError
from utils import Cache
from utils import Config
from utils import logger
from api.parse import parseJson
class AppleMusic(object):
def __init__(self, cache, sync, skipVideo):
self.session = requests.Session()
self.session.headers = {
'content-type': 'application/json;charset=utf-8',
'connection': 'keep-alive',
'accept': 'application/json',
'origin': 'https://music.apple.com',
'referer': 'https://music.apple.com/',
'accept-encoding': 'gzip, deflate, br',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36'
}
self.__cache = Cache(cache)
self.__config = Config(cache)
self.sync = int(sync)
self.skipVideo = skipVideo
self.__accessToken()
self.__mediaUserToken()
def __checkUrl(self, url):
try:
urlopen(url)
return True
except (URLError, HTTPError):
return False
def __getUrl(self, url):
__url = urlparse(url)
if not __url.scheme:
url = f"https://{url}"
if __url.netloc == "music.apple.com":
if self.__checkUrl(url):
splits = url.split('/')
id = splits[-1]
kind = splits[4]
if kind == "album":
if len(id.split('?i=')) > 1:
id = id.split('?i=')[1]
kind = "song"
self.kind = kind
self.id = id
else: logger.error("URL is invalid!", 1)
else: logger.error("URL is invalid!", 1)
def __accessToken(self):
accessToken = self.__cache.get("accessToken")
if not accessToken:
logger.info("Fetching access token from web...")
response = requests.get('https://music.apple.com/us/browse')
if response.status_code != 200:
logger.error("Failed to get music.apple.com! Please re-try...", 1)
content = BeautifulSoup(response.text, "html.parser")
indexJs = content.find(
"script",
attrs={
'type': 'module',
'crossorigin': True,
'src': True
}
).get('src')
response = requests.get(f'https://music.apple.com{indexJs}')
if response.status_code != 200:
logger.error("Failed to get JavaScript library! Please re-try...", 1)
accessToken = re.search('(?=eyJh)(.*?)(?=")', response.text).group(1)
self.__cache.set("accessToken", accessToken)
else:
logger.info("Checking access token found in cache...")
self.session.headers.update(
{
'authorization': f'Bearer {accessToken}'
}
)
response = self.session.get("https://amp-api.music.apple.com/v1/catalog/us/songs/1450330685")
if response.text == '':
logger.info("Access token found in cache is expired!")
self.__cache.delete("access_token")
self.__accessToken()
self.session.headers.update(
{
'authorization': f'Bearer {accessToken}'
}
)
def __mediaUserToken(self):
if self.__config.get('mediaUserToken'):
logger.info("Checking media-user-token...")
self.session.headers.update(
{
"media-user-token": self.__config.get("mediaUserToken")
}
)
response = self.session.get("https://amp-api.music.apple.com/v1/me/storefront")
if response.status_code == 200:
response = json.loads(response.text)
self.storefront = response["data"][0]["id"]
self.language = response["data"][0]["attributes"]["defaultLanguageTag"]
self.session.headers.update(
{
'accept-language': f'{self.language},en;q=0.9'
}
)
self.isMediaUserToken = True
else:
logger.error("Invalid media-user-token! Passing over the user subscription...")
self.__config.delete('mediaUserToken')
else:
self.storefront = 'us'
self.language = 'en-US'
self.isMediaUserToken = False
def __getErrors(self, errors):
if not isinstance(errors, list):
errors = [errors]
for error in errors:
err_status = error.get("status")
err_detail = error.get("detail")
logger.error(f"{err_status} - {err_detail}", 1)
def __getJson(self):
logger.info("Fetching api response...")
cacheKey = f"{self.id}:{self.storefront}"
__cache = self.__cache.get(cacheKey)
if __cache:
logger.info("Using the previous response found in cache...")
return __cache
apiUrl = f'https://amp-api.music.apple.com/v1/catalog/{self.storefront}/{self.kind}s/{self.id}'
if self.kind == "album" or self.kind == "song":
params = {
'extend': 'editorialVideo',
'include[songs]': 'albums,lyrics,credits',
'l': f'{self.language}'
}
elif self.kind == "music-video":
params = {
'l': f'{self.language}'
}
self.session.params = params
response = json.loads(
self.session.get(
apiUrl
).text
)
if not "errors" in response:
self.__cache.set(cacheKey, response)
return response
else:
self.__getErrors(response)
def getInfo(self, url):
self.__getUrl(url)
if self.kind == "album":
return parseJson(
self.__getJson()["data"][0]["relationships"]["tracks"]["data"],
self.sync,
self.skipVideo
)
elif self.kind == "song":
return parseJson(
self.__getJson()["data"],
self.sync
)
elif self.kind == "music-video":
return parseJson(
self.__getJson()["data"],
self.sync
)
|
6c06ad05a0dce94b9b649645a46f9406
|
{
"intermediate": 0.27874755859375,
"beginner": 0.4992334842681885,
"expert": 0.22201895713806152
}
|
47,036
|
api.py:
import re
import json
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
from urllib.request import urlopen
from urllib.error import URLError, HTTPError
from utils import Cache
from utils import Config
from utils import logger
from api.parse import parseJson
class AppleMusic(object):
def __init__(self, cache, sync, skipVideo):
self.session = requests.Session()
self.session.headers = {
'content-type': 'application/json;charset=utf-8',
'connection': 'keep-alive',
'accept': 'application/json',
'origin': 'https://music.apple.com',
'referer': 'https://music.apple.com/',
'accept-encoding': 'gzip, deflate, br',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36'
}
self.__cache = Cache(cache)
self.__config = Config(cache)
self.sync = int(sync)
self.skipVideo = skipVideo
self.__accessToken()
self.__mediaUserToken()
def __checkUrl(self, url):
try:
urlopen(url)
return True
except (URLError, HTTPError):
return False
def __getUrl(self, url):
__url = urlparse(url)
if not __url.scheme:
url = f"https://{url}"
if __url.netloc == "music.apple.com":
if self.__checkUrl(url):
splits = url.split('/')
id = splits[-1]
kind = splits[4]
if kind == "album":
if len(id.split('?i=')) > 1:
id = id.split('?i=')[1]
kind = "song"
self.kind = kind
self.id = id
else: logger.error("URL is invalid!", 1)
else: logger.error("URL is invalid!", 1)
def __accessToken(self):
accessToken = self.__cache.get("accessToken")
if not accessToken:
logger.info("Fetching access token from web...")
response = requests.get('https://music.apple.com/us/browse')
if response.status_code != 200:
logger.error("Failed to get music.apple.com! Please re-try...", 1)
content = BeautifulSoup(response.text, "html.parser")
indexJs = content.find(
"script",
attrs={
'type': 'module',
'crossorigin': True,
'src': True
}
).get('src')
response = requests.get(f'https://music.apple.com{indexJs}')
if response.status_code != 200:
logger.error("Failed to get JavaScript library! Please re-try...", 1)
accessToken = re.search('(?=eyJh)(.*?)(?=")', response.text).group(1)
self.__cache.set("accessToken", accessToken)
else:
logger.info("Checking access token found in cache...")
self.session.headers.update(
{
'authorization': f'Bearer {accessToken}'
}
)
response = self.session.get("https://amp-api.music.apple.com/v1/catalog/us/songs/1450330685")
if response.text == '':
logger.info("Access token found in cache is expired!")
self.__cache.delete("access_token")
self.__accessToken()
self.session.headers.update(
{
'authorization': f'Bearer {accessToken}'
}
)
def __mediaUserToken(self):
if self.__config.get('mediaUserToken'):
logger.info("Checking media-user-token...")
self.session.headers.update(
{
"media-user-token": self.__config.get("mediaUserToken")
}
)
response = self.session.get("https://amp-api.music.apple.com/v1/me/storefront")
if response.status_code == 200:
response = json.loads(response.text)
self.storefront = response["data"][0]["id"]
self.language = response["data"][0]["attributes"]["defaultLanguageTag"]
self.session.headers.update(
{
'accept-language': f'{self.language},en;q=0.9'
}
)
self.isMediaUserToken = True
else:
logger.error("Invalid media-user-token! Passing over the user subscription...")
self.__config.delete('mediaUserToken')
else:
self.storefront = 'us'
self.language = 'en-US'
self.isMediaUserToken = False
def __getErrors(self, errors):
if not isinstance(errors, list):
errors = [errors]
for error in errors:
err_status = error.get("status")
err_detail = error.get("detail")
logger.error(f"{err_status} - {err_detail}", 1)
def __getJson(self):
logger.info("Fetching api response...")
cacheKey = f"{self.id}:{self.storefront}"
__cache = self.__cache.get(cacheKey)
if __cache:
logger.info("Using the previous response found in cache...")
return __cache
apiUrl = f'https://amp-api.music.apple.com/v1/catalog/{self.storefront}/{self.kind}s/{self.id}'
if self.kind == "album" or self.kind == "song":
params = {
'extend': 'editorialVideo',
'include[songs]': 'albums,lyrics,credits',
'l': f'{self.language}'
}
elif self.kind == "music-video":
params = {
'l': f'{self.language}'
}
self.session.params = params
response = json.loads(
self.session.get(
apiUrl
).text
)
if not "errors" in response:
self.__cache.set(cacheKey, response)
return response
else:
self.__getErrors(response)
def getInfo(self, url):
self.__getUrl(url)
if self.kind == "album":
return parseJson(
self.__getJson()["data"][0]["relationships"]["tracks"]["data"],
self.sync,
self.skipVideo
)
elif self.kind == "song":
return parseJson(
self.__getJson()["data"],
self.sync
)
elif self.kind == "music-video":
return parseJson(
self.__getJson()["data"],
self.sync
)
get these fields :
{'isrc': '', 'composer': '', 'songartist': 'Jonas Brothers', 'credits': '', 'Programming': [], 'Guitar': [], 'Drums': [], 'Vocals': [], 'Background Vocals': [], 'Songwriter': [], 'Producer': [], 'Executive Producer': [], 'Mixing Engineer': [], 'Mastering Engineer': [], 'Engineer': [], 'recordlabel': 'Jonas Brothers Recording', 'trackcount': 14, 'albumartist': 'Jonas Brothers'}
give me python script from link 'https://music.apple.com/us/album/happiness-begins/1461478261'
|
58a72d57f1075cf159d7634fbb72e5c1
|
{
"intermediate": 0.44158756732940674,
"beginner": 0.38617774844169617,
"expert": 0.17223471403121948
}
|
47,037
|
api.py:
import re
import json
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
from urllib.request import urlopen
from urllib.error import URLError, HTTPError
from utils import Cache
from utils import Config
from utils import logger
from api.parse import parseJson
class AppleMusic(object):
    def __init__(self, cache, sync, skipVideo):
        self.session = requests.Session()
        self.session.headers = {
            'content-type': 'application/json;charset=utf-8',
            'connection': 'keep-alive',
            'accept': 'application/json',
            'origin': 'https://music.apple.com',
            'referer': 'https://music.apple.com/',
            'accept-encoding': 'gzip, deflate, br',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36'
        }
        self.__cache = Cache(cache)
        self.__config = Config(cache)
        self.sync = int(sync)
        self.skipVideo = skipVideo
        self.__accessToken()
        self.__mediaUserToken()
    def __checkUrl(self, url):
        try:
            urlopen(url)
            return True
        except (URLError, HTTPError):
            return False
    def __getUrl(self, url):
        __url = urlparse(url)
        if not __url.scheme:
            url = f"https://{url}"
        if __url.netloc == "music.apple.com":
            if self.__checkUrl(url):
                splits = url.split('/')
                id = splits[-1]
                kind = splits[4]
                if kind == "album":
                    if len(id.split('?i=')) > 1:
                        id = id.split('?i=')[1]
                        kind = "song"
                self.kind = kind
                self.id = id
            else: logger.error("URL is invalid!", 1)
        else: logger.error("URL is invalid!", 1)
    def __accessToken(self):
        accessToken = self.__cache.get("accessToken")
        if not accessToken:
            logger.info("Fetching access token from web...")
            response = requests.get('https://music.apple.com/us/browse')
            if response.status_code != 200:
                logger.error("Failed to get music.apple.com! Please re-try...", 1)
            content = BeautifulSoup(response.text, "html.parser")
            indexJs = content.find(
                "script",
                attrs={
                    'type': 'module',
                    'crossorigin': True,
                    'src': True
                }
            ).get('src')
            response = requests.get(f'https://music.apple.com{indexJs}')
            if response.status_code != 200:
                logger.error("Failed to get JavaScript library! Please re-try...", 1)
            accessToken = re.search('(?=eyJh)(.*?)(?=")', response.text).group(1)
            self.__cache.set("accessToken", accessToken)
        else:
            logger.info("Checking access token found in cache...")
            self.session.headers.update(
                {
                    'authorization': f'Bearer {accessToken}'
                }
            )
            response = self.session.get("https://amp-api.music.apple.com/v1/catalog/us/songs/1450330685")
            if response.text == '':
                logger.info("Access token found in cache is expired!")
                self.__cache.delete("access_token")
                self.__accessToken()
        self.session.headers.update(
            {
                'authorization': f'Bearer {accessToken}'
            }
        )
    def __mediaUserToken(self):
        if self.__config.get('mediaUserToken'):
            logger.info("Checking media-user-token...")
            self.session.headers.update(
                {
                    "media-user-token": self.__config.get("mediaUserToken")
                }
            )
            response = self.session.get("https://amp-api.music.apple.com/v1/me/storefront")
            if response.status_code == 200:
                response = json.loads(response.text)
                self.storefront = response["data"][0]["id"]
                self.language = response["data"][0]["attributes"]["defaultLanguageTag"]
                self.session.headers.update(
                    {
                        'accept-language': f'{self.language},en;q=0.9'
                    }
                )
                self.isMediaUserToken = True
            else:
                logger.error("Invalid media-user-token! Passing over the user subscription...")
                self.__config.delete('mediaUserToken')
        else:
            self.storefront = 'us'
            self.language = 'en-US'
            self.isMediaUserToken = False
    def __getErrors(self, errors):
        if not isinstance(errors, list):
            errors = [errors]
        for error in errors:
            err_status = error.get("status")
            err_detail = error.get("detail")
            logger.error(f"{err_status} - {err_detail}", 1)
    def __getJson(self):
        logger.info("Fetching api response...")
        cacheKey = f"{self.id}:{self.storefront}"
        __cache = self.__cache.get(cacheKey)
        if __cache:
            logger.info("Using the previous response found in cache...")
            return __cache
        apiUrl = f'https://amp-api.music.apple.com/v1/catalog/{self.storefront}/{self.kind}s/{self.id}'
        if self.kind == "album" or self.kind == "song":
            params = {
                'extend': 'editorialVideo',
                'include[songs]': 'albums,lyrics,credits',
                'l': f'{self.language}'
            }
        elif self.kind == "music-video":
            params = {
                'l': f'{self.language}'
            }
        self.session.params = params
        response = json.loads(
            self.session.get(
                apiUrl
            ).text
        )
        if not "errors" in response:
            self.__cache.set(cacheKey, response)
            return response
        else:
            self.__getErrors(response)
    def getInfo(self, url):
        self.__getUrl(url)
        if self.kind == "album":
            return parseJson(
                self.__getJson()["data"][0]["relationships"]["tracks"]["data"],
                self.sync,
                self.skipVideo
            )
        elif self.kind == "song":
            return parseJson(
                self.__getJson()["data"],
                self.sync
            )
        elif self.kind == "music-video":
            return parseJson(
                self.__getJson()["data"],
                self.sync
            )
get these fields from the link.
This is just an example of the desired output:
{'isrc': '', 'composer': '', 'songartist': 'Jonas Brothers', 'credits': '', 'Programming': [], 'Guitar': [], 'Drums': [], 'Vocals': [], 'Background Vocals': [], 'Songwriter': [], 'Producer': [], 'Executive Producer': [], 'Mixing Engineer': [], 'Mastering Engineer': [], 'Engineer': [], 'recordlabel': 'Jonas Brothers Recording', 'trackcount': 14, 'albumartist': 'Jonas Brothers'}
give me a python script for the link 'https://music.apple.com/us/album/happiness-begins/1461478261'
|
2c3fc5411998edc4b630f9c3841bec9c
|
{
"intermediate": 0.411155641078949,
"beginner": 0.42896199226379395,
"expert": 0.1598823070526123
}
|
47,038
|
npm list --depth 0 gives `-- (empty)
|
6b76dbb9da4fab9385fe52a24167fe18
|
{
"intermediate": 0.34904050827026367,
"beginner": 0.2708413600921631,
"expert": 0.38011816143989563
}
|
47,039
|
In my react component that is using react-effector library I need to render a certain icon within depending on the effector store. Initial value of the store will be null so even if I change the value of the store later the icon doesn't appear because the component was already rendered. How can I rerender it when effector's store value changes?
|
04462bea7f1912d93dcc1dcb48011738
|
{
"intermediate": 0.7894769310951233,
"beginner": 0.11550131440162659,
"expert": 0.09502172470092773
}
|
47,040
|
In my react component that is using react-effector library I need to render a certain icon within depending on the effector store. Initial value of the store will be null so even if I change the value of the store later the icon doesn’t appear because the component was already rendered. How can I rerender it when effector’s store value changes?
|
b187864fc85fc7d9cd508b166e4f679e
|
{
"intermediate": 0.8168677091598511,
"beginner": 0.09839267283678055,
"expert": 0.0847395583987236
}
|
47,041
|
explain
|
842fe47129e1236e99022adee0092b7b
|
{
"intermediate": 0.3545367121696472,
"beginner": 0.31888994574546814,
"expert": 0.32657337188720703
}
|
47,042
|
I need google spreadsheet formula which will allow me to comma separation items, im using =join(", ";E2:E3) but this is not ignoring empty cells, please modify it for me to do so
|
6c3312bb16e926fd1ffb4d9c099eb57c
|
{
"intermediate": 0.37461748719215393,
"beginner": 0.2425372153520584,
"expert": 0.3828452527523041
}
|
47,043
|
in javascript for leafletjs I want the user to click on the map and then use that location to retrieve building=house data 200 meters around that point
|
b4efe332ec2cc90d6876119ebf6412b6
|
{
"intermediate": 0.569892942905426,
"beginner": 0.18643181025981903,
"expert": 0.24367523193359375
}
|
47,044
|
in this javascript for Leaflet I am fetching building outlines from the overpass api. Can I fill the building outlines 'let money = 100000;
const map = L.map('map').setView([51.5352028, 0.0054299], 17);
// fetch house data
// Event listener for when the map is clicked
map.on('click', function (e) {
// Update building radius and city coordinates
let buildingRadius = 300;
let firstCityCoords = [e.latlng.lat, e.latlng.lng];
const overpassQuery = `
[out:json];
way["building"="house"](around:${buildingRadius},${firstCityCoords[0]},${firstCityCoords[1]});
out body;
>;
out skel qt;
`;
fetch('https://overpass-api.de/api/interpreter', {
method: 'POST',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
},
body: 'data=' + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === 'way') {
// Extract coordinates from the way element
const coordinates = element.nodes.map((nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
});
// Create a polyline for the road
const polyline = L.polyline(
coordinates,
{
color: 'black', // Set road color
weight: 2, // Set road weight
opacity: 1, // Set road opacity
}
).addTo(map);
}
});
})
.catch((error) => {
console.error('Error fetching data:', error);
});
});
'
|
b31c405c07d5a720c5b62091275ae258
|
{
"intermediate": 0.3992637097835541,
"beginner": 0.45278871059417725,
"expert": 0.1479475498199463
}
|
47,045
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "3UeycYCyxDfE"
},
"source": [
"# TRANSLATOR"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8BfUjVxBcz5N"
},
"source": [
"## instalation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WXqM38xBRHu2"
},
"outputs": [],
"source": [
"%%time\n",
"!pip install tensorflow-text\n",
"!pip install datasets\n",
"!pip install -q tensorflow_datasets\n",
"!pip install pydot\n",
"!cd /content\n",
"!clear"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Ukvs1XfMG7aG"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import tensorflow_text as tf_text\n",
"import tensorflow_datasets as tfds\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import requests\n",
"import functools\n",
"import collections\n",
"import os\n",
"import pathlib\n",
"import re\n",
"import string\n",
"import tempfile\n",
"import time\n",
"import matplotlib.pyplot as plt\n",
"import os\n",
"import re\n",
"import shutil\n",
"import string\n",
"import tensorflow as tf\n",
"\n",
"from tensorflow.keras import layers\n",
"from tensorflow.keras import losses\n",
"import pydot"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7V7igFwpc6Hs"
},
"source": [
"## dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZaMtoUtAREzs"
},
"outputs": [],
"source": [
"from datasets import load_dataset\n",
"\n",
"dataset = load_dataset(\"Helsinki-NLP/opus_books\", \"en-fr\")\n",
"data = dataset[\"train\"]\n",
"\n",
"french_sentences = [example[\"fr\"] for example in data[\"translation\"][:127085]]\n",
"english_sentences = [example[\"en\"] for example in data[\"translation\"][:127085]]\n",
"dataset = tf.data.Dataset.from_tensor_slices((french_sentences, english_sentences))\n",
"\n",
"french_sentences_decoded = []\n",
"english_sentences_decoded = []\n",
"\n",
"for french_sentence, english_sentence in dataset.take(127085):\n",
" french_sentences_decoded.append(\"b '\"+french_sentence.numpy().decode('utf-8'))\n",
" english_sentences_decoded.append(\"b '\"+english_sentence.numpy().decode('utf-8'))\n",
"\n",
"print(\"Nombre de phrases en français :\", len(french_sentences_decoded))\n",
"print(\"Nombre de phrases en anglais :\", len(english_sentences_decoded))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2tb7H5uFQoBA"
},
"outputs": [],
"source": [
"train_fr = french_sentences\n",
"train_en = english_sentences"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NzS8h0budHWv"
},
"source": [
"## vocab"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "72uMXsQhFIx8"
},
"outputs": [],
"source": [
"from tensorflow_text.tools.wordpiece_vocab import bert_vocab_from_dataset as bert_vocab\n",
"\n",
"bert_tokenizer_params = dict(lower_case=True)\n",
"reserved_tokens = [\"[PAD]\", \"[UNK]\", \"[START]\", \"[END]\"]\n",
"\n",
"bert_vocab_args = {\n",
" 'vocab_size': 8000,\n",
" 'reserved_tokens': reserved_tokens,\n",
" 'bert_tokenizer_params': bert_tokenizer_params,\n",
" 'learn_params': {},\n",
"}\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "q6f3mrA-DK0n"
},
"outputs": [],
"source": [
"%%time\n",
"en_vocab = bert_vocab.bert_vocab_from_dataset(\n",
" tf.data.Dataset.from_tensor_slices(english_sentences).batch(1000).prefetch(2),\n",
" **bert_vocab_args\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EJZzz1x5YY0x"
},
"outputs": [],
"source": [
"%%time\n",
"fr_vocab = bert_vocab.bert_vocab_from_dataset(\n",
" tf.data.Dataset.from_tensor_slices(french_sentences).batch(1000).prefetch(2),\n",
" **bert_vocab_args\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1hmXuHHNcBHg"
},
"outputs": [],
"source": [
"def write_vocab_file(filepath, vocab):\n",
" with open(filepath, 'w') as f:\n",
" for token in vocab:\n",
" print(token, file=f)\n",
"write_vocab_file('en_vocab.txt', en_vocab)\n",
"write_vocab_file('fr_vocab.txt', fr_vocab)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kLTf_mEvfNR9"
},
"source": [
"#` TOKENIZER `\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jGshnQ2idy8I"
},
"outputs": [],
"source": [
"fr_tokenizer = tf_text.BertTokenizer('fr_vocab.txt', **bert_tokenizer_params)\n",
"en_tokenizer = tf_text.BertTokenizer('en_vocab.txt', **bert_tokenizer_params)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bbQyYKhHkkDe"
},
"outputs": [],
"source": [
"# Tokenize the examples -> (batch, word, word-piece)\n",
"en_tokenizere = en_tokenizer.tokenize(\"hello how are you Vadim\")\n",
"# Merge the word and word-piece axes -> (batch, tokens)\n",
"en_tokenizere= en_tokenizere.merge_dims(-2,-1)\n",
"\n",
"for ex in en_tokenizere.to_list():\n",
" print(ex)\n"
]
},
{
"cell_type": "code",
"source": [
"words = en_tokenizer.detokenize(token_batch)\n",
"tf.strings.reduce_join(words, separator=' ', axis=-1)"
],
"metadata": {
"id": "k0m1461Gwy3e"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## model"
],
"metadata": {
"id": "BjoPdwoxBWw2"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7OqoDOybbwi6"
},
"outputs": [],
"source": [
"max_length = 200\n",
"\n",
"fr_sequences = [fr_tokenizer.tokenize(french_sentence.numpy().decode('utf-8')).merge_dims(-2,-1)\n",
" for french_sentence, _ in dataset.take(127085)]\n",
"fr_ragged = tf.ragged.stack(fr_sequences)\n",
"fr_padded = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])\n",
"\n",
"fr_sequencesdeocde = [fr_tokenizer.tokenize(\"[START]\"+french_sentence.numpy().decode('utf-8')+\"[END]\").merge_dims(-2,-1)\n",
" for french_sentence, _ in dataset.take(127085)]\n",
"fr_raggeddecode = tf.ragged.stack(fr_sequences)\n",
"fr_paddeddecode = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])\n",
"\n",
"en_sequences = [en_tokenizer.tokenize(english_sentence.numpy().decode('utf-8')).merge_dims(-2,-1)\n",
" for _, english_sentence in dataset.take(127085)]\n",
"en_ragged = tf.ragged.stack(en_sequences)\n",
"en_padded = en_ragged.to_tensor(default_value=0, shape=[None, None, max_length])\n",
"\n",
"x_train = fr_padded\n",
"x2_train = fr_paddeddecode\n",
"y_train = en_padded\n",
"\n"
]
},
{
"cell_type": "code",
"source": [
"inputs = tf.keras.Input(shape=(1,200))\n",
"embedding_dim = 256\n",
"lstm_units = 512\n",
"vocab_size_en = len(en_vocab) + len(reserved_tokens)\n",
"vocab_size_fr = len(fr_vocab) + len(reserved_tokens)\n",
"\n",
"encoder_inputs = tf.keras.layers.Input(shape=(200,))\n",
"encoder_embedding = tf.keras.layers.Embedding(input_dim=vocab_size_en, output_dim=embedding_dim, mask_zero=True)(encoder_inputs)\n",
"encoder_outputs, state_h, state_c = tf.keras.layers.LSTM(lstm_units, return_state=True)(encoder_embedding)\n",
"encoder_states = [state_h, state_c]\n",
"\n",
"decoder_inputs = tf.keras.layers.Input(shape=(200,))\n",
"decoder_embedding = tf.keras.layers.Embedding(input_dim=vocab_size_fr, output_dim=embedding_dim, mask_zero=True)(decoder_inputs)\n",
"decoder_lstm = tf.keras.layers.LSTM(lstm_units, return_sequences=True, return_state=True)\n",
"decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)\n",
"decoder_dense = tf.keras.layers.Dense(vocab_size_fr, activation='softmax')\n",
"decoder_outputs = decoder_dense(decoder_outputs)\n",
"model = tf.keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs)\n",
"model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
"\n",
"\n",
"model.summary()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 493
},
"id": "FIp6S-MR7Pls",
"outputId": "0a2d9241-1984-4aa7-c1d1-c5a8d982e542"
},
"execution_count": null,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"\u001b[1mModel: \"functional_9\"\u001b[0m\n"
],
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Model: \"functional_9\"</span>\n",
"</pre>\n"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓\n",
"┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mConnected to \u001b[0m\u001
|
468bb645dccdb1ccd88905b323884d49
|
{
"intermediate": 0.3315049111843109,
"beginner": 0.37692585587501526,
"expert": 0.2915692627429962
}
|
47,046
|
ng -v
Error: You need to specify a command before moving on. Use '--help' to view the available commands.
|
fdc865c49d9536e67fbd5642cf11cf88
|
{
"intermediate": 0.3284425735473633,
"beginner": 0.2726157009601593,
"expert": 0.39894169569015503
}
|
47,047
|
import spacy
from spacy.pipeline import EntityRuler
from negspacy.negation import Negex
nlp = spacy.load('en_ner_bc5cdr_md')
ruler = EntityRuler(nlp)
patterns = [{"label": "DISEASE", "pattern": "Diabetes Mellitus"},
{"label": "DISEASE", "pattern": "Diabetes"},
{"label": "DISEASE", "pattern": "Typhoid"},
{"label": "DISEASE", "pattern": "Cancer"},
{"label": "DISEASE", "pattern": "Banana Cedar"}]
ruler.add_patterns(patterns)
@spacy.Language.factory("custom_ner")
def create_custom_ner(nlp, name):
return ruler
nlp.replace_pipe("ner", "custom_ner")
nlp.add_pipe("negex", config={"ent_types":["DISEASE"]})
doc = nlp("The patient was diagnosed with Diabetes Mellitus but not Banana Cedar.")
for ent in doc.ents:
print(ent.text, ent.label_)
for ent in doc.ents:
if ent._.negex:
print(f"{ent.text}: Not Present")
else:
print(f"{ent.text}: Present")
output:
Diabetes Mellitus: Present
Banana Cedar: Not Present
i want the output to identify all NERs present in the sentence instead of skipping over it after it gets one match. so in this case the output should look like:
Diabetes: Present
Diabetes Mellitus: Present
Banana Cedar: Not Present
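spaCy's doc.ents cannot contain overlapping spans, which is why the shorter "Diabetes" match is dropped. A hedged sketch of one possible approach: keep a ruler plus negex pipeline for negation, and use a PhraseMatcher (which returns overlapping matches) for the listing, treating a mention as negated when it falls inside a negated entity.
import spacy
from spacy.matcher import PhraseMatcher
from negspacy.negation import Negex  # noqa: F401 (registers the "negex" factory)

nlp = spacy.load("en_ner_bc5cdr_md")
terms = ["Diabetes Mellitus", "Diabetes", "Typhoid", "Cancer", "Banana Cedar"]

nlp.remove_pipe("ner")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([{"label": "DISEASE", "pattern": t} for t in terms])
nlp.add_pipe("negex", config={"ent_types": ["DISEASE"]})

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("DISEASE", [nlp.make_doc(t) for t in terms])

doc = nlp("The patient was diagnosed with Diabetes Mellitus but not Banana Cedar.")

# negation status still comes from the (non-overlapping) doc.ents
negated = {(e.start, e.end) for e in doc.ents if e._.negex}

for match_id, start, end in matcher(doc):
    span = doc[start:end]
    # a mention counts as negated if it lies inside a negated entity span
    is_neg = any(start >= s and end <= e for s, e in negated)
    print(f"{span.text}: {'Not Present' if is_neg else 'Present'}")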
|
333f65aaa531e959df4cd7b9d93b4b95
|
{
"intermediate": 0.48383963108062744,
"beginner": 0.2394200712442398,
"expert": 0.27674028277397156
}
|
47,048
|
write me simple code to test and run it on the CELL processor
|
80f3268bbaa9ea84892693e75a0264ee
|
{
"intermediate": 0.4695681035518646,
"beginner": 0.15547873079776764,
"expert": 0.37495315074920654
}
|
47,049
|
%%time
!pip install tensorflow-text
!pip install datasets
!pip install tensorflow_datasets
!pip install pydot
!pip install tensorflow
!pip install numpy
!pip install requests
!pip install matplotlib
!pip install tensorflow-text
!pip install datasets
!pip install pydot
!clear
Requirement already satisfied: tensorflow_datasets in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (4.9.3)
Requirement already satisfied: absl-py in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (1.4.0)
Requirement already satisfied: array-record in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (0.4.1)
Requirement already satisfied: click in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (8.1.7)
Requirement already satisfied: dm-tree in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (0.1.8)
Requirement already satisfied: etils>=0.9.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from etils[enp,epath,etree]>=0.9.0->tensorflow_datasets) (1.5.2)
Requirement already satisfied: numpy in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (1.26.4)
Requirement already satisfied: promise in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (2.3)
Collecting protobuf>=3.20 (from tensorflow_datasets)
Using cached protobuf-5.26.1-cp39-cp39-win_amd64.whl.metadata (592 bytes)
Requirement already satisfied: psutil in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (5.9.8)
Requirement already satisfied: requests>=2.19.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (2.31.0)
Requirement already satisfied: tensorflow-metadata in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (1.14.0)
Requirement already satisfied: termcolor in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (2.4.0)
Requirement already satisfied: toml in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (0.10.2)
Requirement already satisfied: tqdm in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (4.66.2)
Requirement already satisfied: wrapt in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow_datasets) (1.16.0)
Requirement already satisfied: fsspec in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from etils[enp,epath,etree]>=0.9.0->tensorflow_datasets) (2024.2.0)
Requirement already satisfied: importlib_resources in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from etils[enp,epath,etree]>=0.9.0->tensorflow_datasets) (6.4.0)
Requirement already satisfied: typing_extensions in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from etils[enp,epath,etree]>=0.9.0->tensorflow_datasets) (4.11.0)
Requirement already satisfied: zipp in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from etils[enp,epath,etree]>=0.9.0->tensorflow_datasets) (3.18.1)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests>=2.19.0->tensorflow_datasets) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests>=2.19.0->tensorflow_datasets) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests>=2.19.0->tensorflow_datasets) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests>=2.19.0->tensorflow_datasets) (2024.2.2)
...
Found existing installation: protobuf 3.19.6
Uninstalling protobuf-3.19.6:
Successfully uninstalled protobuf-3.19.6
Successfully installed protobuf-3.20.3
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorboard 2.10.1 requires protobuf<3.20,>=3.9.2, but you have protobuf 3.20.3 which is incompatible.
tensorflow 2.10.1 requires protobuf<3.20,>=3.9.2, but you have protobuf 3.20.3 which is incompatible.
tensorflow-intel 2.16.1 requires keras>=3.0.0, but you have keras 2.10.0 which is incompatible.
tensorflow-intel 2.16.1 requires tensorboard<2.17,>=2.16, but you have tensorboard 2.10.1 which is incompatible.
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
Requirement already satisfied: pydot in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (2.0.0)
Requirement already satisfied: pyparsing>=3 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from pydot) (3.1.2)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
Requirement already satisfied: tensorflow in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (2.10.1)
Requirement already satisfied: absl-py>=1.0.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (1.4.0)
Requirement already satisfied: astunparse>=1.6.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (1.6.3)
Requirement already satisfied: flatbuffers>=2.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (24.3.25)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (0.4.0)
Requirement already satisfied: google-pasta>=0.1.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (0.2.0)
Requirement already satisfied: h5py>=2.9.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (3.11.0)
Requirement already satisfied: keras-preprocessing>=1.1.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (1.1.2)
Requirement already satisfied: libclang>=13.0.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (18.1.1)
Requirement already satisfied: numpy>=1.20 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (1.26.4)
Requirement already satisfied: opt-einsum>=2.3.2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (3.3.0)
Requirement already satisfied: packaging in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (24.0)
Collecting protobuf<3.20,>=3.9.2 (from tensorflow)
Using cached protobuf-3.19.6-cp39-cp39-win_amd64.whl.metadata (807 bytes)
Requirement already satisfied: setuptools in c:\program files\windowsapps\pythonsoftwarefoundation.python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\site-packages (from tensorflow) (58.1.0)
Requirement already satisfied: six>=1.12.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (1.16.0)
Requirement already satisfied: termcolor>=1.1.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (2.4.0)
Requirement already satisfied: typing-extensions>=3.6.6 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (4.11.0)
Requirement already satisfied: wrapt>=1.11.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (1.16.0)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (0.31.0)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (1.62.1)
Requirement already satisfied: tensorboard<2.11,>=2.10 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (2.10.1)
Requirement already satisfied: tensorflow-estimator<2.11,>=2.10.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (2.10.0)
Requirement already satisfied: keras<2.11,>=2.10.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow) (2.10.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from astunparse>=1.6.0->tensorflow) (0.43.0)
...
Found existing installation: protobuf 3.20.3
Uninstalling protobuf-3.20.3:
Successfully uninstalled protobuf-3.20.3
Successfully installed protobuf-3.19.6
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow-datasets 4.9.3 requires protobuf>=3.20, but you have protobuf 3.19.6 which is incompatible.
tensorflow-intel 2.16.1 requires keras>=3.0.0, but you have keras 2.10.0 which is incompatible.
tensorflow-intel 2.16.1 requires protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3, but you have protobuf 3.19.6 which is incompatible.
tensorflow-intel 2.16.1 requires tensorboard<2.17,>=2.16, but you have tensorboard 2.10.1 which is incompatible.
tensorflow-metadata 1.14.0 requires protobuf<4.21,>=3.20.3, but you have protobuf 3.19.6 which is incompatible.
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
Requirement already satisfied: numpy in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (1.26.4)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
Requirement already satisfied: requests in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (2.31.0)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests) (2024.2.2)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
Requirement already satisfied: matplotlib in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (3.8.4)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (1.2.1)
Requirement already satisfied: cycler>=0.10 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (4.51.0)
Requirement already satisfied: kiwisolver>=1.3.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (1.4.5)
Requirement already satisfied: numpy>=1.21 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (1.26.4)
Requirement already satisfied: packaging>=20.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (24.0)
Requirement already satisfied: pillow>=8 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (10.3.0)
Requirement already satisfied: pyparsing>=2.3.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (3.1.2)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (2.9.0.post0)
Requirement already satisfied: importlib-resources>=3.2.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from matplotlib) (6.4.0)
Requirement already satisfied: zipp>=3.1.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from importlib-resources>=3.2.0->matplotlib) (3.18.1)
Requirement already satisfied: six>=1.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from python-dateutil>=2.7->matplotlib) (1.16.0)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
Requirement already satisfied: tensorflow-text in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (2.10.0)
Requirement already satisfied: tensorflow-hub>=0.8.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow-text) (0.16.1)
Requirement already satisfied: tensorflow<2.11,>=2.10.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow-text) (2.10.1)
Requirement already satisfied: absl-py>=1.0.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (1.4.0)
Requirement already satisfied: astunparse>=1.6.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (1.6.3)
Requirement already satisfied: flatbuffers>=2.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (24.3.25)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (0.4.0)
Requirement already satisfied: google-pasta>=0.1.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (0.2.0)
Requirement already satisfied: h5py>=2.9.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (3.11.0)
Requirement already satisfied: keras-preprocessing>=1.1.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (1.1.2)
Requirement already satisfied: libclang>=13.0.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (18.1.1)
Requirement already satisfied: numpy>=1.20 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (1.26.4)
Requirement already satisfied: opt-einsum>=2.3.2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (3.3.0)
Requirement already satisfied: packaging in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (24.0)
Requirement already satisfied: protobuf<3.20,>=3.9.2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (3.19.6)
Requirement already satisfied: setuptools in c:\program files\windowsapps\pythonsoftwarefoundation.python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (58.1.0)
Requirement already satisfied: six>=1.12.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (1.16.0)
Requirement already satisfied: termcolor>=1.1.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (2.4.0)
Requirement already satisfied: typing-extensions>=3.6.6 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (4.11.0)
Requirement already satisfied: wrapt>=1.11.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (1.16.0)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (0.31.0)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (1.62.1)
Requirement already satisfied: tensorboard<2.11,>=2.10 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (2.10.1)
Requirement already satisfied: tensorflow-estimator<2.11,>=2.10.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (2.10.0)
Requirement already satisfied: keras<2.11,>=2.10.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from tensorflow<2.11,>=2.10.0->tensorflow-text) (2.10.0)
...
Requirement already satisfied: MarkupSafe>=2.1.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from werkzeug>=1.0.1->tensorboard<2.11,>=2.10->tensorflow<2.11,>=2.10.0->tensorflow-text) (2.1.5)
Requirement already satisfied: zipp>=0.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.11,>=2.10->tensorflow<2.11,>=2.10.0->tensorflow-text) (3.18.1)
Requirement already satisfied: pyasn1<0.7.0,>=0.4.6 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard<2.11,>=2.10->tensorflow<2.11,>=2.10.0->tensorflow-text) (0.6.0)
Requirement already satisfied: oauthlib>=3.0.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.11,>=2.10->tensorflow<2.11,>=2.10.0->tensorflow-text) (3.2.2)
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
Requirement already satisfied: datasets in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (2.18.0)
Requirement already satisfied: filelock in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (3.13.4)
Requirement already satisfied: numpy>=1.17 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (1.26.4)
Requirement already satisfied: pyarrow>=12.0.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (15.0.2)
Requirement already satisfied: pyarrow-hotfix in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (0.6)
Requirement already satisfied: dill<0.3.9,>=0.3.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (0.3.8)
Requirement already satisfied: pandas in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (2.2.2)
Requirement already satisfied: requests>=2.19.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (2.31.0)
Requirement already satisfied: tqdm>=4.62.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (4.66.2)
Requirement already satisfied: xxhash in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (3.4.1)
Requirement already satisfied: multiprocess in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (0.70.16)
Requirement already satisfied: fsspec<=2024.2.0,>=2023.1.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from fsspec[http]<=2024.2.0,>=2023.1.0->datasets) (2024.2.0)
Requirement already satisfied: aiohttp in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (3.9.5)
Requirement already satisfied: huggingface-hub>=0.19.4 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (0.22.2)
Requirement already satisfied: packaging in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (24.0)
Requirement already satisfied: pyyaml>=5.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from datasets) (6.0.1)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from aiohttp->datasets) (1.3.1)
Requirement already satisfied: attrs>=17.3.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from aiohttp->datasets) (23.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from aiohttp->datasets) (1.4.1)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from aiohttp->datasets) (6.0.5)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from aiohttp->datasets) (1.9.4)
Requirement already satisfied: async-timeout<5.0,>=4.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from aiohttp->datasets) (4.0.3)
Requirement already satisfied: typing-extensions>=3.7.4.3 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from huggingface-hub>=0.19.4->datasets) (4.11.0)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests>=2.19.0->datasets) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from requests>=2.19.0->datasets) (3.7)
...
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from pandas->datasets) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from pandas->datasets) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from pandas->datasets) (2024.1)
Requirement already satisfied: six>=1.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from python-dateutil>=2.8.2->pandas->datasets) (1.16.0)
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
Requirement already satisfied: pydot in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (2.0.0)
Requirement already satisfied: pyparsing>=3 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from pydot) (3.1.2)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages)
CPU times: total: 0 ns
Wall time: 25.3 s
'clear' n'est pas reconnu en tant que commande interne
ou externe, un programme exécutable ou un fichier de commandes.
import tensorflow as tf
import tensorflow_text as tf_text
import tensorflow_datasets as tfds
import numpy as np
import matplotlib.pyplot as plt
import requests
import functools
import collections
import os
import pathlib
import re
import string
import tempfile
import time
import matplotlib.pyplot as plt
import os
import re
import shutil
import string
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import losses
import pydot
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[9], line 3
1 import tensorflow as tf
2 import tensorflow_text as tf_text
----> 3 import tensorflow_datasets as tfds
4 import numpy as np
5 import matplotlib.pyplot as plt
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\tensorflow_datasets\__init__.py:43
41 _TIMESTAMP_IMPORT_STARTS = time.time()
42 from absl import logging
---> 43 import tensorflow_datasets.core.logging as _tfds_logging
44 from tensorflow_datasets.core.logging import call_metadata as _call_metadata
46 _metadata = _call_metadata.CallMetadata()
ImportError: cannot import name 'core' from partially initialized module 'tensorflow_datasets' (most likely due to a circular import) (C:\Users\ixelo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\tensorflow_datasets\__init__.py)
|
b629ed76b165b410e4969135b25befd1
|
{
"intermediate": 0.3772393763065338,
"beginner": 0.44175300002098083,
"expert": 0.18100754916667938
}
|
47,050
|
How to register hit from player Colyseus
|
20f3a9566d5b78be89fa0f93fd92015f
|
{
"intermediate": 0.32943108677864075,
"beginner": 0.3143779933452606,
"expert": 0.35619091987609863
}
|
47,051
|
hi
|
446d4b88cfab9943f9de5e646df63337
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
47,052
|
Use the following format {
"points": [
{
"x": -0.023103713989257812,
"y": -1.9493672847747803,
"z": 0.004488945007324219
},
{
"x": 2.101100444793701,
"y": -1.6588795185089111,
"z": 0.006519317626953125
},
{
"x": 2.5287222862243652,
"y": -0.32134079933166504,
"z": 0.0037126541137695312
},
{
"x": 0.9061293601989746,
"y": 0.44204187393188477,
"z": 0.0034646987915039062
},
{
"x": 2.2219018936157227,
"y": 1.980624794960022,
"z": 0.0011157989501953125
},
{
"x": 1.1156058311462402,
"y": 3.4300408363342285,
"z": 0.0038156509399414062
},
{
"x": -0.46977853775024414,
"y": 3.242877960205078,
"z": -0.00022220611572265625
},
{
"x": -1.5030198097229004,
"y": 4.657849311828613,
"z": -0.00079345703125
},
{
"x": -1.6512951850891113,
"y": 1.6116106510162354,
"z": 0.0026826858520507812
},
{
"x": -0.3505887985229492,
"y": 1.9661757946014404,
"z": -0.0007028579711914062
},
{
"x": -0.7707219123840332,
"y": 0.6003820896148682,
"z": 0.0046844482421875
},
{
"x": -0.685844898223877,
"y": -0.5849349498748779,
"z": 0.00269317626953125
}
]
}
and convert these points to the same format as the following:
header:
stamp:
sec: 1713355467
nanosec: 190441243
frame_id: map
point:
x: -1.8603521585464478
y: -0.710990309715271
z: 0.00640869140625
---
header:
stamp:
sec: 1713355512
nanosec: 606475818
frame_id: map
point:
x: -1.640790343284607
y: -1.7298060655593872
z: 0.00640869140625
---
header:
stamp:
sec: 1713355533
nanosec: 54335250
frame_id: map
point:
x: 0.5842314958572388
y: -1.9313279390335083
z: 0.002471923828125
---
header:
stamp:
sec: 1713355539
nanosec: 966501600
frame_id: map
point:
x: -0.48641666769981384
y: -0.7948017716407776
z: 0.002471923828125
---
header:
stamp:
sec: 1713355546
nanosec: 522140654
frame_id: map
point:
x: 0.485558420419693
y: 0.45880216360092163
z: 0.00640869140625
---
header:
stamp:
sec: 1713355566
nanosec: 272607820
frame_id: map
point:
x: -1.2811323404312134
y: 0.35720714926719666
z: 0.34820556640625
---
header:
stamp:
sec: 1713355576
nanosec: 822090086
frame_id: map
point:
x: -2.6011927127838135
y: 0.3359096944332123
z: 0.002471923828125
---
|
868261cd863a20a1ecd407646a824a7d
|
{
"intermediate": 0.2756030261516571,
"beginner": 0.46766912937164307,
"expert": 0.25672781467437744
}
|
47,053
|
Requirement already satisfied: requests in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (2.31.0)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests) (2.2.1)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests) (2024.2.2)
Requirement already satisfied: matplotlib in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (3.8.4)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (1.2.1)
Requirement already satisfied: cycler>=0.10 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (4.51.0)
Requirement already satisfied: kiwisolver>=1.3.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (1.4.5)
Requirement already satisfied: numpy>=1.21 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (1.26.4)
Requirement already satisfied: packaging>=20.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (24.0)
Requirement already satisfied: pillow>=8 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (10.3.0)
Requirement already satisfied: pyparsing>=2.3.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (3.1.2)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from matplotlib) (2.9.0.post0)
Requirement already satisfied: six>=1.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from python-dateutil>=2.7->matplotlib) (1.16.0)
ERROR: Could not find a version that satisfies the requirement tensorflow-text (from versions: none)
ERROR: No matching distribution found for tensorflow-text
Requirement already satisfied: datasets in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (2.18.0)
Requirement already satisfied: filelock in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (3.13.4)
Requirement already satisfied: numpy>=1.17 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (1.26.4)
Requirement already satisfied: pyarrow>=12.0.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (15.0.2)
Requirement already satisfied: pyarrow-hotfix in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (0.6)
Requirement already satisfied: dill<0.3.9,>=0.3.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (0.3.8)
Requirement already satisfied: pandas in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (2.2.2)
Requirement already satisfied: requests>=2.19.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (2.31.0)
Requirement already satisfied: tqdm>=4.62.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (4.66.2)
Requirement already satisfied: xxhash in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (3.4.1)
Requirement already satisfied: multiprocess in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (0.70.16)
Requirement already satisfied: fsspec<=2024.2.0,>=2023.1.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from fsspec[http]<=2024.2.0,>=2023.1.0->datasets) (2024.2.0)
Requirement already satisfied: aiohttp in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (3.9.5)
Requirement already satisfied: huggingface-hub>=0.19.4 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (0.22.2)
Requirement already satisfied: packaging in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (24.0)
Requirement already satisfied: pyyaml>=5.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from datasets) (6.0.1)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp->datasets) (1.3.1)
Requirement already satisfied: attrs>=17.3.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp->datasets) (23.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp->datasets) (1.4.1)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp->datasets) (6.0.5)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from aiohttp->datasets) (1.9.4)
Requirement already satisfied: typing-extensions>=3.7.4.3 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from huggingface-hub>=0.19.4->datasets) (4.11.0)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests>=2.19.0->datasets) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests>=2.19.0->datasets) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from requests>=2.19.0->datasets) (2.2.1)
...
Requirement already satisfied: six in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from promise->tensorflow-datasets) (1.16.0)
Requirement already satisfied: googleapis-common-protos<2,>=1.52.0 in c:\users\ixelo\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages (from tensorflow-metadata->tensorflow-datasets) (1.63.0)
CPU times: total: 15.6 ms
Wall time: 14.8 s
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
'clear' n'est pas reconnu en tant que commande interne
ou externe, un programme exécutable ou un fichier de commandes.
|
a6151f4bc501f343dc50b9d26ae8ce61
|
{
"intermediate": 0.3738146722316742,
"beginner": 0.4590801000595093,
"expert": 0.16710519790649414
}
|
47,054
|
I want to have a callback function for the TurtleBot 4 lidar scanner. How can I code that?
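A minimal rclpy sketch of such a callback, assuming the lidar data arrives on the standard /scan topic as sensor_msgs/LaserScan (the node and variable names are illustrative, not from the question):

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class LidarListener(Node):
    def __init__(self):
        super().__init__('lidar_listener')
        # Subscribe to the lidar scan topic; a queue depth of 10 is a common default.
        self.subscription = self.create_subscription(LaserScan, '/scan', self.scan_callback, 10)

    def scan_callback(self, msg: LaserScan):
        # msg.ranges holds one distance reading per beam of the scan.
        self.get_logger().info(f'received {len(msg.ranges)} range readings')

def main(args=None):
    rclpy.init(args=args)
    node = LidarListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()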
|
164c0f8ec7792bd6c4e983357b812f16
|
{
"intermediate": 0.38892945647239685,
"beginner": 0.22529776394367218,
"expert": 0.3857727646827698
}
|
47,055
|
def write_vocab_file(filepath, vocab):
    with open(filepath, 'w') as f:
        for token in vocab:
            print(token, file=f)

write_vocab_file('fr_vocab.txt', fr_vocab)
write_vocab_file('en_vocab.txt', en_vocab)
---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
Cell In[18], line 5
3 for token in vocab:
4 print(token, file=f)
----> 5 write_vocab_file('fr_vocab.txt', fr_vocab)
6 write_vocab_file('en_vocab.txt', en_vocab)
Cell In[18], line 4
2 with open(filepath, 'w') as f:
3 for token in vocab:
----> 4 print(token, file=f)
File C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\encodings\cp1252.py:19, in IncrementalEncoder.encode(self, input, final)
18 def encode(self, input, final=False):
---> 19 return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u03b1' in position 0: character maps to <undefined>
|
232504c9102a361e114f92e80599bc22
|
{
"intermediate": 0.48174723982810974,
"beginner": 0.1701108068227768,
"expert": 0.34814202785491943
}
|
47,056
|
employees
id salary status
1 1 2016 married
2 2 5903 single
3 3 7608 married
4 4 6448 single
5 5 9551 married
6 6 6505 married
7 7 5753 single
8 8 7313 single
9 9 4219 single
10 10 3140 married
11 11 2702 married
12 12 3035 single
13 13 7590 single
14 14 3404 married
15 15 4551 married
As an owner of a vehicle factory, you have agreed to provide a salary raise for the four employees with the lowest salaries who are also married, as they are struggling to finance their families. Return only the IDs of the relevant employees. Sort the results by salary in ascending order in sql
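A minimal sketch of one query matching this description, assuming a SQL dialect that supports LIMIT (table and column names are taken from the sample data above):

-- four married employees with the lowest salaries, lowest first
SELECT id
FROM employees
WHERE status = 'married'
ORDER BY salary ASC
LIMIT 4;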
|
e15a2dca856ae5640c0eaa6307e815a0
|
{
"intermediate": 0.31267809867858887,
"beginner": 0.3240755796432495,
"expert": 0.36324629187583923
}
|
47,057
|
Difference between GROUP BY and HAVING in SQL and how to use them. Please explain as if I am a 5-year-old kid.
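As a small illustrative sketch (reusing the employees table from the earlier question; the threshold is made up): GROUP BY puts the rows into piles that share a value, and HAVING throws away whole piles after they are built, whereas WHERE throws away single rows before any piles exist.

SELECT status, COUNT(*) AS how_many, AVG(salary) AS avg_salary
FROM employees
GROUP BY status          -- one result row per distinct status
HAVING COUNT(*) >= 5;    -- keep only the groups that contain at least 5 employees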
|
418a40073fbf2623f8edcd68ca9b8205
|
{
"intermediate": 0.3625272512435913,
"beginner": 0.3771844804286957,
"expert": 0.2602882385253906
}
|
47,058
|
Difference between GROUP BY and HAVING in SQL and how to use them. Please explain as if I am a 5-year-old kid.
|
16f2d1096547ba027a9bf789485a8389
|
{
"intermediate": 0.3625272512435913,
"beginner": 0.3771844804286957,
"expert": 0.2602882385253906
}
|
47,059
|
max_length = 200

def tokenize_and_merge(fr_sentence, en_sentence):
    fr_tokens = fr_tokenizer.tokenize(fr_sentence.np().decode('utf-8')).merge_dims(-2, -1)
    fr_decode_tokens = fr_tokenizer.tokenize("[START]" + fr_sentence.np().decode('utf-8') + "[END]").merge_dims(-2, -1)
    en_tokens = en_tokenizer.tokenize(en_sentence.np().decode('utf-8')).merge_dims(-2, -1)
    return fr_tokens, fr_decode_tokens, en_tokens

dataset = dataset.map(tokenize_and_merge, num_parallel_calls=tf.data.AUTOTUNE)
dataset = dataset.padded_batch(batch_size=32, padded_shapes=([max_length], [max_length], [max_length]), padding_values=0)

x_train = []
x2_train = []
y_train = []
for fr_batch, fr_decode_batch, en_batch in dataset:
    x_train.append(fr_batch)
    x2_train.append(fr_decode_batch)
    y_train.append(en_batch)

x_train = tf.concat(x_train, axis=0)
x2_train = tf.concat(x2_train, axis=0)
y_train = tf.concat(y_train, axis=0)

print("x_train shape:", x_train.shape)
print("x2_train shape:", x2_train.shape)
print("y_train shape:", y_train.shape)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[26], line 9
6 en_tokens = en_tokenizer.tokenize(en_sentence.np().decode('utf-8')).merge_dims(-2, -1)
7 return fr_tokens, fr_decode_tokens, en_tokens
----> 9 dataset = dataset.map(tokenize_and_merge, num_parallel_calls=tf.data.AUTOTUNE)
10 dataset = dataset.padded_batch(batch_size=32, padded_shapes=([max_length], [max_length], [max_length]), padding_values=0)
12 x_train = []
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\tensorflow\python\data\ops\dataset_ops.py:2204, in DatasetV2.map(self, map_func, num_parallel_calls, deterministic, name)
2202 return MapDataset(self, map_func, preserve_cardinality=True, name=name)
2203 else:
-> 2204 return ParallelMapDataset(
2205 self,
2206 map_func,
2207 num_parallel_calls,
2208 deterministic,
2209 preserve_cardinality=True,
2210 name=name)
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\tensorflow\python\data\ops\dataset_ops.py:5441, in ParallelMapDataset.__init__(self, input_dataset, map_func, num_parallel_calls, deterministic, use_inter_op_parallelism, preserve_cardinality, use_legacy_function, name)
5439 self._input_dataset = input_dataset
5440 self._use_inter_op_parallelism = use_inter_op_parallelism
-> 5441 self._map_func = structured_function.StructuredFunctionWrapper(
5442 map_func,
...
File "C:\Users\ixelo\AppData\Local\Temp\ipykernel_14408\2057811764.py", line 4, in tokenize_and_merge *
fr_tokens = fr_tokenizer.tokenize(fr_sentence.np().decode('utf-8')).merge_dims(-2, -1)
AttributeError: 'Tensor' object has no attribute 'np'
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
|
6412c544c0474e039c0eb4882e721018
|
{
"intermediate": 0.3324238955974579,
"beginner": 0.31864476203918457,
"expert": 0.34893131256103516
}
|
47,060
|
Which of these topics would be used for lidar in ROS 2:
/battery_state
/cliff_intensity
/cmd_audio
/cmd_lightring
/cmd_vel
/diagnostics
/diagnostics_agg
/diagnostics_toplevel_state
/dock_status
/function_calls
/hazard_detection
/hmi/buttons
/hmi/display
/hmi/display/message
/hmi/led
/imu
/interface_buttons
/ip
/ir_intensity
/ir_opcode
/joint_states
/joy
/joy/set_feedback
/kidnap_status
/mobility_monitor/transition_event
/mouse
/oakd/rgb/preview/camera_info
/oakd/rgb/preview/image_raw
/odom
/parameter_events
/robot_description
/robot_state/transition_event
/rosout
/scan
/slip_status
/static_transform/transition_event
/stop_status
/tf
/tf_static
/wheel_status
/wheel_ticks
/wheel_vels
|
7088e83ad3d45afd8ffca9257c875b5d
|
{
"intermediate": 0.3128310739994049,
"beginner": 0.4821358919143677,
"expert": 0.20503301918506622
}
|
47,061
|
Optimize this code so that it runs faster:

max_length = 200

fr_sequences = [fr_tokenizer.tokenize(french_sentence.numpy().decode('utf-8')).merge_dims(-2,-1)
                for french_sentence, _ in dataset.take(127085)]
fr_ragged = tf.ragged.stack(fr_sequences)
fr_padded = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
print("fr sequences YES")

fr_sequencesdeocde = [fr_tokenizer.tokenize("[START]"+french_sentence.numpy().decode('utf-8')+"[END]").merge_dims(-2,-1)
                      for french_sentence, _ in dataset.take(127085)]
fr_raggeddecode = tf.ragged.stack(fr_sequences)
fr_paddeddecode = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
print("sequencesdeocde Yes")

fr_sequences
en_sequences = [en_tokenizer.tokenize(english_sentence.numpy().decode('utf-8')).merge_dims(-2,-1)
                for _, english_sentence in dataset.take(127085)]
en_ragged = tf.ragged.stack(en_sequences)
en_padded = en_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
print("en_sequences yes")

x_train = fr_padded
x2_train = fr_paddeddecode
y_train = en_padded
|
a4d4e70cdd3a75030492bd8210cc1740
|
{
"intermediate": 0.21228128671646118,
"beginner": 0.34579524397850037,
"expert": 0.44192349910736084
}
|
47,062
|
max_length = 200

def process_sentence(sentence, tokenizer, add_start_end=False):
    sentence_text = sentence.numpy().decode('utf-8')
    if add_start_end:
        sentence_text = "[START]" + sentence_text + "[END]"
    tokenized_sentence = tokenizer.tokenize(sentence_text).merge_dims(-2, -1)
    return tokenized_sentence

def tokenize_map_fn(french_sentence, english_sentence):
    fr_sentence_proc = tf.py_function(func=process_sentence, inp=[french_sentence, fr_tokenizer, False], Tout=tf.int64)
    fr_sentence_proc_deocde = tf.py_function(func=process_sentence, inp=[french_sentence, fr_tokenizer, True], Tout=tf.int64)
    en_sentence_proc = tf.py_function(func=process_sentence, inp=[english_sentence, en_tokenizer, False], Tout=tf.int64)
    return fr_sentence_proc, fr_sentence_proc_deocde, en_sentence_proc

# Assuming dataset already defined and is a tf.data.Dataset object
processed_dataset = dataset.take(127085).map(tokenize_map_fn)

# To efficiently create padded and ragged tensors, first stack all elements from processed_dataset
fr_sequences, fr_sequences_decode, en_sequences = [], [], []
for fr_seq, fr_seq_decode, en_seq in processed_dataset:
    fr_sequences.append(fr_seq)
    fr_sequences_decode.append(fr_seq_decode)
    en_sequences.append(en_seq)

# Convert lists of sequences into ragged tensors and then pad them
fr_ragged = tf.ragged.stack(fr_sequences)
fr_padded = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
print("fr sequences YES")
fr_ragged_decode = tf.ragged.stack(fr_sequences_decode)
fr_padded_decode = fr_ragged_decode.to_tensor(default_value=0, shape=[None, None, max_length])
print("sequencesdeocde Yes")
en_ragged = tf.ragged.stack(en_sequences)
en_padded = en_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
print("en_sequences yes")

x_train = fr_padded
x2_train = fr_padded_decode
y_train = en_padded
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[29], line 16
13 return fr_sentence_proc, fr_sentence_proc_deocde, en_sentence_proc
15 # Assuming dataset already defined and is a tf.data.Dataset object
---> 16 processed_dataset = dataset.take(127085).map(tokenize_map_fn)
18 # To efficiently create padded and ragged tensors, first stack all elements from processed_dataset
19 fr_sequences, fr_sequences_decode, en_sequences = [], [], []
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\tensorflow\python\data\ops\dataset_ops.py:2202, in DatasetV2.map(self, map_func, num_parallel_calls, deterministic, name)
2199 if deterministic is not None and not DEBUG_MODE:
2200 warnings.warn("The `deterministic` argument has no effect unless the "
2201 "`num_parallel_calls` argument is specified.")
-> 2202 return MapDataset(self, map_func, preserve_cardinality=True, name=name)
2203 else:
2204 return ParallelMapDataset(
2205 self,
2206 map_func,
(...)
2209 preserve_cardinality=True,
2210 name=name)
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\tensorflow\python\data\ops\dataset_ops.py:5400, in MapDataset.__init__(self, input_dataset, map_func, use_inter_op_parallelism, preserve_cardinality, use_legacy_function, name)
5398 self._use_inter_op_parallelism = use_inter_op_parallelism
5399 self._preserve_cardinality = preserve_cardinality
...
File "C:\Users\ixelo\AppData\Local\Temp\ipykernel_14408\98278579.py", line 10, in tokenize_map_fn *
fr_sentence_proc = tf.py_function(func=process_sentence, inp=[french_sentence, fr_tokenizer, False], Tout=tf.int64)
TypeError: Tensors in list passed to 'input' of 'EagerPyFunc' Op have types [string, <NOT CONVERTIBLE TO TENSOR>, bool] that are invalid. Tensors: [<tf.Tensor 'args_0:0' shape=() dtype=string>, <tensorflow_text.python.ops.bert_tokenizer.BertTokenizer object at 0x000001CAAA3B6FE0>, False]
Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...
|
02e14aac5fbd7db43a6e77e754304566
|
{
"intermediate": 0.2965380847454071,
"beginner": 0.3611869513988495,
"expert": 0.3422749638557434
}
|
47,063
|
How can I use this function static int updateDisplay(void){
int ret = 0;
switch (getDisplay())
{
case eHomeDisplay :
sprintf(cluster_screen.disp_name,currentDisplay);
cluster_screen.width = m_parameterInfo[eHomeDisplay].width;
cluster_screen.height = m_parameterInfo[eHomeDisplay].height;
break;
case eWideDisplay :
sprintf(cluster_screen.disp_name,currentDisplay);
cluster_screen.width = m_parameterInfo[eWideDisplay].width;
cluster_screen.height = m_parameterInfo[eWideDisplay].height;
break;
case eDesktopDisplay :
sprintf(cluster_screen.disp_name,currentDisplay);
cluster_screen.width = m_parameterInfo[eDesktopDisplay].width;
cluster_screen.height = m_parameterInfo[eDesktopDisplay].height;
break;
default:
break;
}
ret = 1;
return ret;
} in GTest with class MockReadClusterCfg {
public:
MOCK_METHOD(bool, ReadClusterCfg, ());
MOCK_METHOD(int, getDisplay, ());
};
// Test fixture class
class CfgRunnerTest : public testing::Test {
public:
virtual bool ReadClusterCfg() { // Virtual function to be mocked
// Forward the call to the mock implementation
return mockReadClusterCfg.ReadClusterCfg();
}
virtual int getDisplay() { // Virtual function to be mocked
// Forward the call to the mock implementation
return mockReadClusterCfg.getDisplay();
}
protected:
MockReadClusterCfg mockReadClusterCfg; // Instance of the mock ReadClusterCfg
};
|
8cd966590c36be3e786e0b943037837c
|
{
"intermediate": 0.48338520526885986,
"beginner": 0.35211658477783203,
"expert": 0.16449816524982452
}
|
47,064
|
Make this code faster:

max_length = 200

fr_sequences = [fr_tokenizer.tokenize(french_sentence.numpy().decode('utf-8')).merge_dims(-2,-1)
                for french_sentence, _ in dataset.take(127085)]
fr_ragged = tf.ragged.stack(fr_sequences)
fr_padded = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
print("fr sequences YES")

fr_sequencesdeocde = [fr_tokenizer.tokenize("[START]"+french_sentence.numpy().decode('utf-8')+"[END]").merge_dims(-2,-1)
                      for french_sentence, _ in dataset.take(127085)]
fr_raggeddecode = tf.ragged.stack(fr_sequences)
fr_paddeddecode = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
print("sequencesdeocde Yes")

fr_sequences
en_sequences = [en_tokenizer.tokenize(english_sentence.numpy().decode('utf-8')).merge_dims(-2,-1)
                for _, english_sentence in dataset.take(127085)]
en_ragged = tf.ragged.stack(en_sequences)
en_padded = en_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
print("en_sequences yes")

x_train = fr_padded
x2_train = fr_paddeddecode
y_train = en_padded
|
3bd33def1c8d9a996cefe6ce19201ee6
|
{
"intermediate": 0.3364163637161255,
"beginner": 0.4371351897716522,
"expert": 0.2264484465122223
}
|
47,065
|
Add a time/progress indicator to this code:

batch_size = 1024  # Adjust batch size to your hardware capabilities.

def tokenize_sentences(french_sentence, english_sentence):
    # Tokenize French sentences (with and without start/end tokens) in a single pass
    fr_sentence_text = french_sentence.numpy().decode('utf-8')
    fr_sequence = fr_tokenizer.tokenize(fr_sentence_text).merge_dims(-2, -1)
    fr_sequence_decode = fr_tokenizer.tokenize("[START]" + fr_sentence_text + "[END]").merge_dims(-2, -1)
    # Tokenize English sentences
    en_sequence = en_tokenizer.tokenize(english_sentence.numpy().decode('utf-8')).merge_dims(-2, -1)
    return fr_sequence, fr_sequence_decode, en_sequence

# Initialize lists to hold tokenized sequences
fr_sequences, fr_sequencesdeocde, en_sequences = [], [], []
for french_sentence, english_sentence in dataset.batch(batch_size).take(127085 // batch_size):
    # Use tf.py_function to operate on the numpy values of each element
    batch_fr_seq, batch_fr_seq_decode, batch_en_seq = zip(*[tokenize_sentences(f, e) for f, e in zip(french_sentence, english_sentence)])
    fr_sequences.extend(batch_fr_seq)
    fr_sequencesdeocde.extend(batch_fr_seq_decode)
    en_sequences.extend(batch_en_seq)

# Stack and pad sequences
fr_ragged = tf.ragged.stack(fr_sequences)
fr_padded = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])
fr_raggeddecode = tf.ragged.stack(fr_sequencesdeocde)
fr_paddeddecode = fr_raggeddecode.to_tensor(default_value=0, shape=[None, None, max_length])
en_ragged = tf.ragged.stack(en_sequences)
en_padded = en_ragged.to_tensor(default_value=0, shape=[None, None, max_length])

# Assign padded data to train variables
x_train, x2_train, y_train = fr_padded, fr_paddeddecode, en_padded
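One way to add it, as a minimal sketch using only the standard library (this mirrors the batching loop above; the reporting interval and message format are arbitrary choices):

import time

start = time.time()
for batch_idx, (french_sentence, english_sentence) in enumerate(
        dataset.batch(batch_size).take(127085 // batch_size)):
    batch_fr_seq, batch_fr_seq_decode, batch_en_seq = zip(
        *[tokenize_sentences(f, e) for f, e in zip(french_sentence, english_sentence)])
    fr_sequences.extend(batch_fr_seq)
    fr_sequencesdeocde.extend(batch_fr_seq_decode)
    en_sequences.extend(batch_en_seq)
    # Report elapsed time every 10 batches as a simple progress indicator.
    if batch_idx % 10 == 0:
        print(f"batch {batch_idx}: {time.time() - start:.1f}s elapsed")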
|
368bdf349112fa01f8756dbc617a277a
|
{
"intermediate": 0.4373420178890228,
"beginner": 0.276734858751297,
"expert": 0.2859231233596802
}
|
47,066
|
Here is my code for generating time series for training my LSTM model.
I don't want to use a specific chunk_size, so update the code:

feature_data_scaled = pd.DataFrame(x_scaler.transform(feature_data), columns=feature_data.columns)
# Assuming target_data also needs to be scaled, apply scaler separately
target_data_scaled = pd.DataFrame(y_scaler.transform(target_data), columns=target_data.columns)

# ensuring end_ix does not go out of feature_data_scaled's bounds
num_samples = (len(feature_data_scaled) - n_steps) // batch_size
for i in range(num_samples):
    start_ix = i * batch_size
    end_ix = start_ix + n_steps
    X = feature_data_scaled[start_ix:end_ix]
    # using .iloc to avoid KeyError, and selecting the corresponding outputs
    y = target_data_scaled.iloc[start_ix:end_ix].iloc[-1]
    yield X.values.reshape((1, n_steps, -1)), y.values.reshape((1, -1))
|
63fe7c9ddac06d70a10a1e5c8026f787
|
{
"intermediate": 0.3920058310031891,
"beginner": 0.24170617759227753,
"expert": 0.3662879765033722
}
|
47,067
|
I have phone numbers on my page both without the country prefix and with it, and I'm using preg_replace to wrap them in an href:
$buffer = preg_replace('~((\+48 |.*?)605 697 177)~s', "<a href=\"tel:<PRESIDIO_ANONYMIZED_PHONE_NUMBER>\">$1</a>", $buffer);
How can I get this to work?
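A hedged sketch of one way to do it: make the "+48 " prefix optional instead of matching it with ".*?", so both forms of the number get wrapped (the tel: target keeps the anonymized placeholder from the question, and the exact spacing of the prefix is an assumption):

$buffer = preg_replace(
    '~((?:\+48 )?605 697 177)~',
    '<a href="tel:<PRESIDIO_ANONYMIZED_PHONE_NUMBER>">$1</a>',
    $buffer
);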
|
3b2f4b4a740ffbd40050adb9dc13c8a3
|
{
"intermediate": 0.6281800866127014,
"beginner": 0.1026197075843811,
"expert": 0.26920026540756226
}
|
47,068
|
I'm trying to train an LSTM model but the training goes like this:
Epoch 1/2500
300/300 [==============================] - 24s 68ms/step - loss: 91.9649 - mae: 4.8055
Epoch 2/2500
300/300 [==============================] - 19s 64ms/step - loss: 92.7976 - mae: 4.4733
Epoch 3/2500
300/300 [==============================] - 19s 65ms/step - loss: 82.4959 - mae: 4.3163
Epoch 4/2500
300/300 [==============================] - 19s 64ms/step - loss: 68.1812 - mae: 3.7349
Epoch 5/2500
300/300 [==============================] - 19s 65ms/step - loss: 81.2047 - mae: 4.5482
Epoch 6/2500
300/300 [==============================] - 19s 64ms/step - loss: 111.6692 - mae: 4.6476
Epoch 7/2500
300/300 [==============================] - 19s 64ms/step - loss: 57.8791 - mae: 4.1733
Epoch 8/2500
300/300 [==============================] - 19s 65ms/step - loss: 81.7904 - mae: 4.3814
Epoch 9/2500
300/300 [==============================] - 19s 64ms/step - loss: 184.2159 - mae: 5.0559
Epoch 10/2500
300/300 [==============================] - 20s 66ms/step - loss: 438.6423 - mae: 5.5885
Epoch 11/2500
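Nothing below is from the original message; it is only a hypothetical sketch of one common stabilizer for a loss that oscillates and then explodes like this (a smaller learning rate plus gradient clipping), assuming the Keras LSTM model object from the question is called model:

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, clipnorm=1.0)  # clip gradient norm to 1.0
model.compile(optimizer=optimizer, loss='mse', metrics=['mae'])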
|
448d7a9c04591dac1acc7c622ada43d6
|
{
"intermediate": 0.19626682996749878,
"beginner": 0.24547572433948517,
"expert": 0.5582574605941772
}
|
47,069
|
Function pointer elements used in structures of function pointers can contain letters and numbers and will follow the pattern: ( * p_lowerCamelCase )
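A small illustrative example of that pattern (the struct and member names are invented for the example):

typedef struct
{
    int  ( * p_readSpeed )( void );          /* letters and digits, lowerCamelCase after p_ */
    void ( * p_setLimit2 )( int limit_kmh );
} vehicle_ops_t;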
|
c6023524b8145eba177a147747ca5360
|
{
"intermediate": 0.2831232249736786,
"beginner": 0.4666634202003479,
"expert": 0.2502133250236511
}
|
47,070
|
A single set of typedefs shall be used in place of standard C variable definitions in all modules.
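For illustration only, such a single shared set of typedefs (the names here are invented) might look like:

typedef unsigned char   U8;
typedef signed char     S8;
typedef unsigned short  U16;
typedef signed short    S16;
typedef unsigned long   U32;
typedef signed long     S32;
typedef float           F32;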
|
2c594be39163a78f7606952702811043
|
{
"intermediate": 0.25033462047576904,
"beginner": 0.4437524080276489,
"expert": 0.30591297149658203
}
|
47,071
|
I want to distribute the numbers 1-50 into 10 groups of 5, in such a way that when you add the 5 numbers in each group, all groups have an equal total sum.
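A quick feasibility check, added here as an observation rather than part of the request:

\[ \sum_{k=1}^{50} k = \frac{50 \cdot 51}{2} = 1275, \qquad \frac{1275}{10} = 127.5 \]

Since 127.5 is not an integer, ten groups with exactly equal integer sums are impossible; the nearest balanced split of the total would be five groups summing to 127 and five summing to 128 (5·127 + 5·128 = 1275).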
|
ab62825ac54afd04042658ef032744c2
|
{
"intermediate": 0.38916414976119995,
"beginner": 0.2745732367038727,
"expert": 0.336262583732605
}
|
47,072
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "3UeycYCyxDfE"
},
"source": [
"# TRANSLATOR"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8BfUjVxBcz5N"
},
"source": [
"## instalation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "WXqM38xBRHu2",
"outputId": "075e90fc-7ba6-4b54-9bb9-414037cc352e"
},
"outputs": [],
"source": [
"%%time\n",
"!pip install -q -U tensorflow-text\n",
"!pip install datasets\n",
"!pip install -q tensorflow_datasets\n",
"!pip install pydot\n",
"!pip install tensorflow\n",
"!pip install numpy\n",
"!cd /content\n",
"!clear"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Ukvs1XfMG7aG"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import tensorflow_text as tf_text\n",
"import tensorflow_datasets as tfds\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import requests\n",
"import functools\n",
"import collections\n",
"import os\n",
"import pathlib\n",
"import re\n",
"import string\n",
"import tempfile\n",
"import time\n",
"import matplotlib.pyplot as plt\n",
"import os\n",
"import re\n",
"import shutil\n",
"import string\n",
"import tensorflow as tf\n",
"\n",
"from tensorflow.keras import layers\n",
"from tensorflow.keras import losses\n",
"import pydot"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7V7igFwpc6Hs"
},
"source": [
"## dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 280,
"referenced_widgets": [
"a5b60d70fc784178aae195ba50aa17fb",
"2f81a198eff14fd9baf1c44eba10e66a",
"da5df7f8dc8b4efa9ef1fe766fda07e4",
"3cbdaac2925e4f259f8daa74f4a4b0e7",
"131339a75d4f4b10900f433780ceda69",
"684fe6b833ae4964952c6480fae46f57",
"895fece0320d4e4c80971bb9341e4e1c",
"25efff7c11d3468fb91d922a42371142",
"1641d12ee23646c5bc9f59438ca76468",
"89a141e6736947538de57a715aca30bd",
"da1528b34ff745e3b7cc2363dd9dea69",
"ffcb28bfdc0d4341a5cb2fe0b14d3657",
"37a47c1b7eac4d4fb3ee224fdf0392ee",
"18e9a5f3d42f4a93b003876b7bcd199c",
"9842e84a76e64f1e862bc39a06582a71",
"52fd339b0e944e74b7df1b176ee90f5b",
"78a4e11bb5f742a4b755cd60137b65c4",
"c646272c9a86474c9bda867e57035aaf",
"1840095e6efe495a960a1fd6d3060f1c",
"c832fdd570f4460c8f37640c20db5089",
"9a29068ca9fd478ab4b2b64b29a30452",
"617134a74765472b9d621d8d806e6f90",
"f1cd45d36c604216bd526f4ee135edcc",
"bf7382bdfe9c46c39cc0e48487aa3e7e",
"ee55c21104a548938945911d1be5e8bd",
"e793e43e37b14498bc34c4f36296d2ac",
"0433034eb2744785b33f79961a2c04e4",
"8b1fd60dc6fb4094bfecc63c824cdda5",
"17879bad90954e9cb888b3675c472f9b",
"df815e17a0014f94b2ed81fdca2b8a30",
"7b1e2448b06e4235b8502ed17084482a",
"cedda31df2a34e97935d72c8dfd5da74",
"a4d33f99d00348a6a733e8936feac62c"
]
},
"id": "ZaMtoUtAREzs",
"outputId": "f8dadd74-5e04-4fa6-9392-6ccd6b2cb458"
},
"outputs": [],
"source": [
"from datasets import load_dataset\n",
"\n",
"dataset = load_dataset(\"Helsinki-NLP/opus_books\", \"en-fr\")\n",
"data = dataset[\"train\"]\n",
"\n",
"french_sentences = [example[\"fr\"] for example in data[\"translation\"][:127085]]\n",
"english_sentences = [example[\"en\"] for example in data[\"translation\"][:127085]]\n",
"dataset = tf.data.Dataset.from_tensor_slices((french_sentences, english_sentences))\n",
"\n",
"french_sentences_decoded = []\n",
"english_sentences_decoded = []\n",
"\n",
"for french_sentence, english_sentence in dataset.take(12708):\n",
" french_sentences_decoded.append(\"b '\"+french_sentence.numpy().decode('utf-8'))\n",
" english_sentences_decoded.append(\"b '\"+english_sentence.numpy().decode('utf-8'))\n",
"\n",
"print(\"Nombre de phrases en français :\", len(french_sentences_decoded))\n",
"print(\"Nombre de phrases en anglais :\", len(english_sentences_decoded))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2tb7H5uFQoBA"
},
"outputs": [],
"source": [
"train_fr = french_sentences\n",
"train_en = english_sentences"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NzS8h0budHWv"
},
"source": [
"## vocab"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "72uMXsQhFIx8"
},
"outputs": [],
"source": [
"from tensorflow_text.tools.wordpiece_vocab import bert_vocab_from_dataset as bert_vocab\n",
"\n",
"bert_tokenizer_params = dict(lower_case=True)\n",
"reserved_tokens = [\"[PAD]\", \"[UNK]\", \"[START]\", \"[END]\"]\n",
"\n",
"bert_vocab_args = {\n",
" 'vocab_size': 8000,\n",
" 'reserved_tokens': reserved_tokens,\n",
" 'bert_tokenizer_params': bert_tokenizer_params,\n",
" 'learn_params': {},\n",
"}\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "q6f3mrA-DK0n",
"outputId": "b5413b20-51a6-4b79-9bad-ae1aeae4cf42"
},
"outputs": [],
"source": [
"%%time\n",
"en_vocab = bert_vocab.bert_vocab_from_dataset(\n",
" tf.data.Dataset.from_tensor_slices(english_sentences).batch(1000).prefetch(2),\n",
" **bert_vocab_args\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "EJZzz1x5YY0x",
"outputId": "64c27808-53ad-48b9-9a08-ee65c322a6e4"
},
"outputs": [],
"source": [
"%%time\n",
"fr_vocab = bert_vocab.bert_vocab_from_dataset(\n",
" tf.data.Dataset.from_tensor_slices(french_sentences).batch(1000).prefetch(2),\n",
" **bert_vocab_args\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1hmXuHHNcBHg"
},
"outputs": [],
"source": [
"def write_vocab_file(filepath, vocab):\n",
" with open(filepath, 'w') as f:\n",
" for token in vocab:\n",
" print(token, file=f)\n",
"write_vocab_file('en_vocab.txt', en_vocab)\n",
"write_vocab_file('fr_vocab.txt', fr_vocab)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kLTf_mEvfNR9"
},
"source": [
"#` TOKENIZER `\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jGshnQ2idy8I"
},
"outputs": [],
"source": [
"fr_tokenizer = tf_text.BertTokenizer('fr_vocab.txt', **bert_tokenizer_params)\n",
"en_tokenizer = tf_text.BertTokenizer('en_vocab.txt', **bert_tokenizer_params)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "bbQyYKhHkkDe",
"outputId": "ef6ba4a4-7bab-432e-adcf-6a85440f2c22"
},
"outputs": [],
"source": [
"# Tokenize the examples -> (batch, word, word-piece)\n",
"en_tokenizere = en_tokenizer.tokenize(\"hello how are you Vadim\")\n",
"# Merge the word and word-piece axes -> (batch, tokens)\n",
"en_tokenizere= en_tokenizere.merge_dims(-2,-1)\n",
"\n",
"for ex in en_tokenizere.to_list():\n",
" print(ex)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "k0m1461Gwy3e"
},
"outputs": [],
"source": [
"words = en_tokenizer.detokenize(token_batch)\n",
"tf.strings.reduce_join(words, separator=' ', axis=-1)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BjoPdwoxBWw2"
},
"source": [
"## model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7OqoDOybbwi6"
},
"outputs": [],
"source": [
"max_length = 200\n",
"\n",
"fr_sequences = [fr_tokenizer.tokenize(french_sentence.numpy().decode('utf-8')).merge_dims(-2,-1)\n",
" for french_sentence, _ in dataset.take(2000)]\n",
"fr_ragged = tf.ragged.stack(fr_sequences)\n",
"fr_padded = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])\n",
"\n",
"fr_sequencesdeocde = [fr_tokenizer.tokenize(\"[START]\"+french_sentence.numpy().decode('utf-8')+\"[END]\").merge_dims(-2,-1)\n",
" for french_sentence, _ in dataset.take(2000)]\n",
"fr_raggeddecode = tf.ragged.stack(fr_sequences)\n",
"fr_paddeddecode = fr_ragged.to_tensor(default_value=0, shape=[None, None, max_length])\n",
"\n",
"en_sequences = [en_tokenizer.tokenize(english_sentence.numpy().decode('utf-8')).merge_dims(-2,-1)\n",
" for _, english_sentence in dataset.take(2000)]\n",
"en_ragged = tf.ragged.stack(en_sequences)\n",
"en_padded = en_ragged.to_tensor(default_value=0, shape=[None, None, max_length])\n",
"\n",
"x_train = fr_padded\n",
"x2_train = fr_paddeddecode\n",
"y_train = en_padded\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 493
},
"id": "FIp6S-MR7Pls",
"outputId": "8f187fbe-4f57-4a4f-bb91-c1dc9e098552"
},
"outputs": [],
"source": [
"inputs = tf.keras.Input(shape=(1,200))\n",
"embedding_dim = 512\n",
"lstm_units = 1024\n",
"vocab_size_en = len(en_vocab) + len(reserved_tokens)\n",
"vocab_size_fr = len(fr_vocab) + len(reserved_tokens)\n",
"\n",
"encoder_inputs = tf.keras.layers.Input(shape=(200,))\n",
"encoder_embedding = tf.keras.layers.Embedding(input_dim=vocab_size_en, output_dim=embedding_dim, mask_zero=True)(encoder_inputs)\n",
"encoder_outputs, state_h, state_c = tf.keras.layers.LSTM(lstm_units, return_state=True)(encoder_embedding)\n",
"encoder_states = [state_h, state_c]\n",
"\n",
"decoder_inputs = tf.keras.layers.Input(shape=(200,))\n",
"decoder_embedding = tf.keras.layers.Embedding(input_dim=vocab_size_fr, output_dim=embedding_dim, mask_zero=True)(decoder_inputs)\n",
"decoder_lstm = tf.keras.layers.LSTM(lstm_units, return_sequences=True, return_state=True)\n",
"decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)\n",
"decoder_dense = tf.keras.layers.Dense(vocab_size_fr, activation='softmax')\n",
"decoder_outputs = decoder_dense(decoder_outputs)\n",
"model = tf.keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs)\n",
"model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
"\n",
"\n",
"model.summary()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "YOryNiyc15Tf"
},
"outputs": [],
"source": [
"def format_sequences(tokenizer, sentences, max_length):\n",
" sequences = tokenizer.tokenize(sentences).merge_dims(-2, -1)\n",
" padded_sequences = sequences.to_tensor(default_value=0, shape=[None, max_length])\n",
" return padded_sequences\n",
"\n",
"encoder_input_data = format_sequences(en_tokenizer, train_en, max_length)\n",
"decoder_input_data = format_sequences(fr_tokenizer, [\"[START] \" + sentence for sentence in train_fr], max_length)\n",
"decoder_target_data = format_sequences(fr_tokenizer, [sentence + \" [END]\" for sentence in train_fr], max_length)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "0oKPxeCMPcsF",
"outputId": "0a2d9241-1984-4aa7-c1d1-c5a8d982e542"
},
"outputs": [],
"source": [
"batch_size = 2\n",
"epochs = 3\n",
"\n",
"model.fit([encoder_input_data, decoder_input_data], decoder_target_data,\n",
" batch_size=batch_size,\n",
" epochs=epochs,\n",
" validation_split=0.2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "avXVDSHx18KC"
},
"outputs": [],
"source": [
"encoder_model = tf.keras.models.Model(encoder_inputs, encoder_states)\n",
"\n",
"decoder_state_input_h = tf.keras.Input(shape=(lstm_units,))\n",
"decoder_state_input_c = tf.keras.Input(shape=(lstm_units,))\n",
"decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]\n",
"\n",
"decoder_outputs, state_h, state_c = decoder_lstm(\n",
" decoder_embedding, initial_state=decoder_states_inputs)\n",
"decoder_states = [state_h, state_c]\n",
"decoder_outputs = decoder_dense(decoder_outputs)\n",
"\n",
"decoder_model = tf.keras.models.Model(\n",
" [decoder_inputs] + decoder_states_inputs,\n",
" [decoder_outputs] + decoder_states)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tu-NqIEA19if"
},
"outputs": [],
"source": [
"def translate(input_sentence):\n",
" # Prepare the input sentence\n",
" input_seq = format_sequences(en_tokenizer, [input_sentence], max_length)\n",
" states_value = encoder_model.predict(input_seq)\n",
"\n",
" # Generate empty target sequence of length 1 with only the start token\n",
" target_seq = np.zeros((1, 1))\n",
" target_seq[0, 0] = fr_tokenizer.vocab['[START]']\n",
"\n",
" # Sampling loop for a batch of sequences\n",
" stop_condition = False\n",
" decoded_sentence = ''\n",
" while not stop_condition:\n",
" output_tokens, h, c = decoder_model.predict([target_seq] + states_value)\n",
"\n",
" # Sample a token and add the corresponding character to the decoded sentence\n",
" sampled_token_index = np.argmax(output_tokens[0, -1, :])\n",
" sampled_char = fr_tokenizer.detokenize([[sampled_token_index]]).numpy()[0].decode('utf-8')\n",
" decoded_sentence += sampled_char + ' '\n",
"\n",
" # Exit condition: either hit max length or find stop token.\n",
" if (sampled_char == '[END]' or len(decoded_sentence) > max_length):\n",
" stop_condition = True\n",
"\n",
" # Update the target sequence (length 1).\n",
" target_seq = np.zeros((1, 1))\n",
" target_seq[0, 0] = sampled_token_index\n",
"\n",
" # Update states\n",
" states_value = [h, c]\n",
"\n",
" return decoded_sentence"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 447
},
"id": "KimJ6ANPMz6r",
"outputId": "95dcfad1-3d7f-410d-ac24-adcc3215741f"
},
"outputs": [],
"source": [
"history = model.fit([x_train,x2_train], y_train, epochs=10, batch_size=2)"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"0433034eb2744785b33f79961a2c04e4": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"131339a75d4f4b10900f433780ceda69": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"1641d12ee23646c5bc9f59438ca76468": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"17879bad90954e9cb888b3675c472f9b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"1840095e6efe495a960a1fd6d3060f1c": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"18e9a5f3d42f4a93b003876b7bcd199c": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_1840095e6efe495a960a1fd6d3060f1c",
"max": 20985324,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_c832fdd570f4460c8f37640c20db5089",
"value": 20985324
}
},
"25efff7c11d3468fb91d922a42371142": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"2f81a198eff14fd9baf1c44eba10e66a": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_684fe6b833ae4964952c6480fae46f57",
"placeholder": "",
"style": "IPY_MODEL_895fece0320d4e4c80971bb9341e4e1c",
"value": "Downloading readme: 100%"
}
},
"37a47c1b7eac4d4fb3ee224fdf0392ee": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_78a4e11bb5f742a4b755cd60137b65c4",
"placeholder": "",
"style": "IPY_MODEL_c646272c9a86474c9bda867e57035aaf",
"value": "Downloading data: 100%"
}
},
"3cbdaac2925e4f259f8daa74f4a4b0e7": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_89a141e6736947538de57a715aca30bd",
"placeholder": "",
"style": "IPY_MODEL_da1528b34ff745e3b7cc2363dd9dea69",
"value": " 28.1k/28.1k [00:00<00:00, 1.51MB/s]"
}
},
"52fd339b0e944e74b7df1b176ee90f5b": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"617134a74765472b9d621d8d806e6f90": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"684fe6b833ae4964952c6480fae46f57": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"78a4e11bb5f742a4b755cd60137b65c4": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"7b1e2448b06e4235b8502ed17084482a": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"895fece0320d4e4c80971bb9341e4e1c": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"89a141e6736947538de57a715aca30bd": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"8b1fd60dc6fb4094bfecc63c824cdda5": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"9842e84a76e64f1e862bc39a06582a71": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_9a29068ca9fd478ab4b2b64b29a30452",
"placeholder": "",
"style": "IPY_MODEL_617134a74765472b9d621d8d806e6f90",
"value": " 21.0M/21.0M [00:00<00:00, 29.9MB/s]"
}
},
"9a29068ca9fd478ab4b2b64b29a30452": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"a4d33f99d00348a6a733e8936feac62c": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"a5b60d70fc784178aae195ba50aa17fb": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_2f81a198eff14fd9baf1c44eba10e66a",
"IPY_MODEL_da5df7f8dc8b4efa9ef1fe766fda07e4",
"IPY_MODEL_3cbdaac2925e4f259f8daa74f4a4b0e7"
],
"layout": "IPY_MODEL_131339a75d4f4b10900f433780ceda69"
}
},
"bf7382bdfe9c46c39cc0e48487aa3e7e": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_8b1fd60dc6fb4094bfecc63c824cdda5",
"placeholder": "",
"style": "IPY_MODEL_17879bad90954e9cb888b3675c472f9b",
"value": "Generating train split: 100%"
}
},
"c646272c9a86474c9bda867e57035aaf": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"c832fdd570f4460c8f37640c20db5089": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"cedda31df2a34e97935d72c8dfd5da74": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"da1528b34ff745e3b7cc2363dd9dea69": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"da5df7f8dc8b4efa9ef1fe766fda07e4": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_25efff7c11d3468fb91d922a42371142",
"max": 28064,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_1641d12ee23646c5bc9f59438ca76468",
"value": 28064
}
},
"df815e17a0014f94b2ed81fdca2b8a30": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"e793e43e37b14498bc34c4f36296d2ac": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_cedda31df2a34e97935d72c8dfd5da74",
"placeholder": "",
"style": "IPY_MODEL_a4d33f99d00348a6a733e8936feac62c",
"value": " 127085/127085 [00:00<00:00, 284966.50 examples/s]"
}
},
"ee55c21104a548938945911d1be5e8bd": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_df815e17a0014f94b2ed81fdca2b8a30",
"max": 127085,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_7b1e2448b06e4235b8502ed17084482a",
"value": 127085
}
},
"f1cd45d36c604216bd526f4ee135edcc": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_bf7382bdfe9c46c39cc0e48487aa3e7e",
"IPY_MODEL_ee55c21104a548938945911d1be5e8bd",
"IPY_MODEL_e793e43e37b14498bc34c4f36296d2ac"
],
"layout": "IPY_MODEL_0433034eb2744785b33f79961a2c04e4"
}
},
"ffcb28bfdc0d4341a5cb2fe0b14d3657": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_37a47c1b7eac4d4fb3ee224fdf0392ee",
"IPY_MODEL_18e9a5f3d42f4a93b003876b7bcd199c",
"IPY_MODEL_9842e84a76e64f1e862bc39a06582a71"
],
"layout": "IPY_MODEL_52fd339b0e944e74b7df1b176ee90f5b"
}
}
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}
|
9cda18cb025ca2853ef0a4a882bf7976
|
{
"intermediate": 0.4142000377178192,
"beginner": 0.3563648760318756,
"expert": 0.22943517565727234
}
|
47,073
|
I want you to act as a Contabo VPS web hosting server. I will provide you with a list of web hosting services and you will act as their administrator. You will also be responsible for maintaining the service. My first suggestion request is "I need help setting up a VPS web hosting service."
|
06c05f712f3bf361c50d5035a4a1b8f2
|
{
"intermediate": 0.3415154218673706,
"beginner": 0.2971973717212677,
"expert": 0.3612872064113617
}
|
47,074
|
How to write a mock test for the function static void gc_destroy(struct graphics_gc_priv *gc) {
g_free(gc);
gc = NULL;
} using MOCK_METHOD
|
eaaeb6e6d92e20f285da40bc581cb0c2
|
{
"intermediate": 0.3410744369029999,
"beginner": 0.3550702631473541,
"expert": 0.303855299949646
}
|
47,076
|
In JavaScript I am dynamically displaying some text. The line breaks in ' messageDisplay.textContent = `Congratulations you have leased ${numberOfBuildings} buildings for £50,000! You will earn £${numberOfBuildings} per day from these leases. <br> You can now click on individual buildings on the map to buy them and start earning rent as well.`;' display as <br>. How should I add line breaks?
|
e3cd2da5932f7eeb6cf12c34eb30de2f
|
{
"intermediate": 0.424678236246109,
"beginner": 0.21679069101810455,
"expert": 0.35853105783462524
}
|
47,077
|
This is the forward function of our dual-modality, dual-branch structure: def forward_features(self, z, x, event_z, event_x,
mask_z=None, mask_x=None,
ce_template_mask=None, ce_keep_rate=None,
return_last_attn=False
):
    # Branch 1 processing flow
B, H, W = x.shape[0], x.shape[2], x.shape[3]
x = self.patch_embed(x)
z = self.patch_embed(z)
z += self.pos_embed_z
x += self.pos_embed_x
if mask_z is not None and mask_x is not None:
mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_z = mask_z.flatten(1).unsqueeze(-1)
mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_x = mask_x.flatten(1).unsqueeze(-1)
mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode)
mask_x = mask_x.squeeze(-1)
if self.add_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
cls_tokens = cls_tokens + self.cls_pos_embed
if self.add_sep_seg:
x += self.search_segment_pos_embed
z += self.template_segment_pos_embed
x = combine_tokens(z, x, mode=self.cat_mode)
if self.add_cls_token:
x = torch.cat([cls_tokens, x], dim=1)
x = self.pos_drop(x)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device)
global_index_t = global_index_t.repeat(B, 1)
global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device)
global_index_s = global_index_s.repeat(B, 1)
removed_indexes_s = []
for i, blk in enumerate(self.blocks):
x, global_index_t, global_index_s, removed_index_s, attn = \
blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s.append(removed_index_s)
# x = self.norm(x) # # [bs, n_patch, dim] = [bs, 320, 768] 320 = 64 + 256
    # # Branch 2 processing flow
event_x = self.pos_embed_event(event_x)
event_z = self.pos_embed_event(event_z)
event_x += self.pos_embed_x
event_z += self.pos_embed_z
event_x = combine_tokens(event_z, event_x, mode=self.cat_mode)
if self.add_cls_token:
event_x = torch.cat([cls_tokens, event_x], dim=1)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t1 = torch.linspace(0, lens_z - 1, lens_z).to(event_x.device)
global_index_t1 = global_index_t1.repeat(B, 1)
global_index_s1 = torch.linspace(0, lens_x - 1, lens_x).to(event_x.device)
global_index_s1 = global_index_s1.repeat(B, 1)
removed_indexes_s1 = []
for i, blk in enumerate(self.blocks):
event_x, global_index_t1, global_index_s1, removed_index_s1, attn = \
blk(event_x, global_index_t1, global_index_s1, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s1.append(removed_index_s1)
    # After all blocks are processed, apply counter_guide for cross-modal interaction
x_inter, event_x_inter = self.counter_guide(x,event_x)
    # Use the interacted features to enhance the original features
x_enhenced = x + x_inter
event_x_enhenced = event_x + event_x_inter
x = torch.cat([x_enhenced, event_x_enhenced], dim=1)
Now take the following module: import torch,os
import torch.nn as nn
from torch.nn.parameter import Parameter
class Multi_Context(nn.Module):
def __init__(self, inchannels):
super(Multi_Context, self).__init__()
self.conv2_1 = nn.Sequential(
nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(inchannels),
nn.ReLU(inplace=True))
self.conv2_2 = nn.Sequential(
nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(inchannels),
nn.ReLU(inplace=True))
self.conv2_3 = nn.Sequential(
nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(inchannels),
nn.ReLU(inplace=True))
self.conv2 = nn.Sequential(
nn.Conv2d(in_channels=inchannels * 3, out_channels=inchannels, kernel_size=3, padding=1),
nn.BatchNorm2d(inchannels))
def forward(self, x):
x1 = self.conv2_1(x)
x2 = self.conv2_2(x)
x3 = self.conv2_3(x)
x = torch.cat([x1,x2,x3], dim=1)
x = self.conv2(x)
return x
class Adaptive_Weight(nn.Module):
def __init__(self, inchannels):
super(Adaptive_Weight, self).__init__()
self.avg = nn.AdaptiveAvgPool2d(1)
self.inchannels = inchannels
self.fc1 = nn.Conv2d(inchannels, inchannels//4, kernel_size=1, bias=False)
self.relu1 = nn.ReLU()
self.fc2 = nn.Conv2d(inchannels//4, 1, kernel_size=1, bias=False)
self.relu2 = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x_avg = self.avg(x)
weight = self.relu1(self.fc1(x_avg))
weight = self.relu2(self.fc2(weight))
weight = self.sigmoid(weight)
out = x * weight
return out
class Counter_attention(nn.Module):
def __init__(self, inchannels):
super(Counter_attention, self).__init__()
self.conv1 = nn.Sequential(nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=3, padding=1),
nn.BatchNorm2d(inchannels))
self.conv2 = nn.Sequential(nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=3, padding=1),
nn.BatchNorm2d(inchannels))
# self.conv3 = nn.Sequential(nn.Conv2d(in_channels=inchannels*2, out_channels=inchannels, kernel_size=1),
# nn.BatchNorm2d(inchannels))
self.sig = nn.Sigmoid()
self.mc1 = Multi_Context(inchannels)
self.mc2 = Multi_Context(inchannels)
self.ada_w1 = Adaptive_Weight(inchannels)
self.ada_w2 = Adaptive_Weight(inchannels)
def forward(self, assistant, present):
mc1 = self.mc1(assistant)
pr1 = present * self.sig(mc1)
pr2 = self.conv1(present)
pr2 = present * self.sig(pr2)
out1 = pr1 + pr2 + present
mc2 = self.mc2(present)
as1 = assistant * self.sig(mc2)
as2 = self.conv2(assistant)
as2 = assistant * self.sig(as2)
out2 = as1 + as2 + assistant
out1 = self.ada_w1(out1)
out2 = self.ada_w2(out2)
out = out1 + out2
# out = torch.cat([out1, out2], dim=1)
# out = self.conv3(out)
return out
class Counter_Guide(nn.Module):
def __init__(self):
super(Counter_Guide, self).__init__()
self.counter_atten1 = Counter_attention(128)
self.counter_atten2 = Counter_attention(256)
def forward(self, frame1, frame2, event1, event2):
out1 = self.counter_atten1(frame1, event1)
out2 = self.counter_atten2(frame2, event2)
return out1, out2
if __name__ == '__main__':
net = Counter_Guide()
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
net = net.cuda()
var1 = torch.FloatTensor(10, 128, 36, 36).cuda()
var2 = torch.FloatTensor(10, 256, 18, 18).cuda()
var3 = torch.FloatTensor(10, 128, 36, 36).cuda()
var4 = torch.FloatTensor(10, 256, 18, 18).cuda()
# var = Variable(var)
out1, out2 = net(var1, var2, var3, var4)
print('*************')
print(out1.shape, out2.shape) — and introduce it to implement dual-branch cross-modal feature interaction; now the dimensions of x and event_x need to be converted before they are fed into counter_guide, so that they fit the counter_guide module
|
e0c729f66dee0ec9148a70af3bb33614
|
{
"intermediate": 0.30744272470474243,
"beginner": 0.5402919054031372,
"expert": 0.15226538479328156
}
|
47,078
|
This is the forward function of our dual-modality, dual-branch structure: def forward_features(self, z, x, event_z, event_x,
mask_z=None, mask_x=None,
ce_template_mask=None, ce_keep_rate=None,
return_last_attn=False
):
    # Branch 1 processing flow
B, H, W = x.shape[0], x.shape[2], x.shape[3]
x = self.patch_embed(x)
z = self.patch_embed(z)
z += self.pos_embed_z
x += self.pos_embed_x
if mask_z is not None and mask_x is not None:
mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_z = mask_z.flatten(1).unsqueeze(-1)
mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_x = mask_x.flatten(1).unsqueeze(-1)
mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode)
mask_x = mask_x.squeeze(-1)
if self.add_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
cls_tokens = cls_tokens + self.cls_pos_embed
if self.add_sep_seg:
x += self.search_segment_pos_embed
z += self.template_segment_pos_embed
x = combine_tokens(z, x, mode=self.cat_mode)
if self.add_cls_token:
x = torch.cat([cls_tokens, x], dim=1)
x = self.pos_drop(x)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device)
global_index_t = global_index_t.repeat(B, 1)
global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device)
global_index_s = global_index_s.repeat(B, 1)
removed_indexes_s = []
for i, blk in enumerate(self.blocks):
        x, global_index_t, global_index_s, removed_index_s, attn = \
            blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s.append(removed_index_s)
# x = self.norm(x) # # [bs, n_patch, dim] = [bs, 320, 768] 320 = 64 + 256
    # # Branch 2 processing flow
event_x = self.pos_embed_event(event_x)
event_z = self.pos_embed_event(event_z)
event_x += self.pos_embed_x
event_z += self.pos_embed_z
event_x = combine_tokens(event_z, event_x, mode=self.cat_mode)
if self.add_cls_token:
event_x = torch.cat([cls_tokens, event_x], dim=1)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t1 = torch.linspace(0, lens_z - 1, lens_z).to(event_x.device)
global_index_t1 = global_index_t1.repeat(B, 1)
global_index_s1 = torch.linspace(0, lens_x - 1, lens_x).to(event_x.device)
global_index_s1 = global_index_s1.repeat(B, 1)
removed_indexes_s1 = []
for i, blk in enumerate(self.blocks):
        event_x, global_index_t1, global_index_s1, removed_index_s1, attn = \
            blk(event_x, global_index_t1, global_index_s1, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s1.append(removed_index_s1)
    # After all blocks are processed, apply counter_guide for cross-modal interaction
x_inter, event_x_inter = self.counter_guide(x,event_x)
    # Use the interacted features to enhance the original features
x_enhenced = x + x_inter
event_x_enhenced = event_x + event_x_inter
x = torch.cat([x_enhenced, event_x_enhenced], dim=1)
Now take the following module: import torch,os
import torch.nn as nn
from torch.nn.parameter import Parameter
class Multi_Context(nn.Module):
def __init__(self, inchannels):
super(Multi_Context, self).__init__()
self.conv2_1 = nn.Sequential(
nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=1, stride=1, padding=0),
nn.BatchNorm2d(inchannels),
nn.ReLU(inplace=True))
self.conv2_2 = nn.Sequential(
nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(inchannels),
nn.ReLU(inplace=True))
self.conv2_3 = nn.Sequential(
nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(inchannels),
nn.ReLU(inplace=True))
self.conv2 = nn.Sequential(
nn.Conv2d(in_channels=inchannels * 3, out_channels=inchannels, kernel_size=3, padding=1),
nn.BatchNorm2d(inchannels))
def forward(self, x):
x1 = self.conv2_1(x)
x2 = self.conv2_2(x)
x3 = self.conv2_3(x)
x = torch.cat([x1,x2,x3], dim=1)
x = self.conv2(x)
return x
class Adaptive_Weight(nn.Module):
def __init__(self, inchannels):
super(Adaptive_Weight, self).__init__()
self.avg = nn.AdaptiveAvgPool2d(1)
self.inchannels = inchannels
self.fc1 = nn.Conv2d(inchannels, inchannels//4, kernel_size=1, bias=False)
self.relu1 = nn.ReLU()
self.fc2 = nn.Conv2d(inchannels//4, 1, kernel_size=1, bias=False)
self.relu2 = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x_avg = self.avg(x)
weight = self.relu1(self.fc1(x_avg))
weight = self.relu2(self.fc2(weight))
weight = self.sigmoid(weight)
out = x * weight
return out
class Counter_attention(nn.Module):
def __init__(self, inchannels):
super(Counter_attention, self).__init__()
self.conv1 = nn.Sequential(nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=3, padding=1),
nn.BatchNorm2d(inchannels))
self.conv2 = nn.Sequential(nn.Conv2d(in_channels=inchannels, out_channels=inchannels, kernel_size=3, padding=1),
nn.BatchNorm2d(inchannels))
# self.conv3 = nn.Sequential(nn.Conv2d(in_channels=inchannels*2, out_channels=inchannels, kernel_size=1),
# nn.BatchNorm2d(inchannels))
self.sig = nn.Sigmoid()
self.mc1 = Multi_Context(inchannels)
self.mc2 = Multi_Context(inchannels)
self.ada_w1 = Adaptive_Weight(inchannels)
self.ada_w2 = Adaptive_Weight(inchannels)
def forward(self, assistant, present):
mc1 = self.mc1(assistant)
pr1 = present * self.sig(mc1)
pr2 = self.conv1(present)
pr2 = present * self.sig(pr2)
out1 = pr1 + pr2 + present
mc2 = self.mc2(present)
as1 = assistant * self.sig(mc2)
as2 = self.conv2(assistant)
as2 = assistant * self.sig(as2)
out2 = as1 + as2 + assistant
out1 = self.ada_w1(out1)
out2 = self.ada_w2(out2)
out = out1 + out2
# out = torch.cat([out1, out2], dim=1)
# out = self.conv3(out)
return out
class Counter_Guide(nn.Module):
def __init__(self):
super(Counter_Guide, self).__init__()
self.counter_atten1 = Counter_attention(128)
self.counter_atten2 = Counter_attention(256)
def forward(self, frame1, frame2, event1, event2):
out1 = self.counter_atten1(frame1, event1)
out2 = self.counter_atten2(frame2, event2)
return out1, out2
if __name__ == '__main__':
net = Counter_Guide()
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
net = net.cuda()
var1 = torch.FloatTensor(10, 128, 36, 36).cuda()
var2 = torch.FloatTensor(10, 256, 18, 18).cuda()
var3 = torch.FloatTensor(10, 128, 36, 36).cuda()
var4 = torch.FloatTensor(10, 256, 18, 18).cuda()
# var = Variable(var)
out1, out2 = net(var1, var2, var3, var4)
print('************')
print(out1.shape, out2.shape) — and introduce it to implement dual-branch cross-modal feature interaction; now the dimensions of x and event_x need to be converted before they are fed into counter_guide, so that they fit the counter_guide module. The dimensions of our features x and event_x are (32, 320, 768).
|
7fdcaa96bab01452c3821247d1492aa8
|
{
"intermediate": 0.23947244882583618,
"beginner": 0.5990864634513855,
"expert": 0.16144105792045593
}
|
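One way to bridge the (32, 320, 768) token layout and the convolutional Counter_attention, sketched under two assumptions that are not in the record: the 320 tokens are 64 template + 256 search tokens, and the interaction module is instantiated with inchannels=768 (unlike the 128/256-channel Counter_Guide shown above). The search tokens are folded into a 16×16 map, interacted, then flattened back:

import torch

def tokens_to_map(tokens, lens_z=64, hw=16):
    # tokens: (B, 320, 768); keep only the 256 search tokens and fold them into (B, 768, 16, 16).
    search = tokens[:, lens_z:, :]
    B, N, C = search.shape
    return search.permute(0, 2, 1).reshape(B, C, hw, hw)

def map_to_tokens(feat_map, tokens, lens_z=64):
    # Flatten the interacted map back to (B, 256, 768) and re-attach the untouched template tokens.
    B, C, H, W = feat_map.shape
    search = feat_map.reshape(B, C, H * W).permute(0, 2, 1)
    return torch.cat([tokens[:, :lens_z, :], search], dim=1)

With these helpers, counter_guide could be a single Counter_attention(768) applied to tokens_to_map(x) and tokens_to_map(event_x), with the enhanced maps mapped back to tokens before the residual additions.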
47,079
|
Use a Python program to create a music recommendation system that works with and selects among Apple Music, Spotify, Tidal and Deezer.
|
ee755f88bb3dd51f6c20a7cd3477a74e
|
{
"intermediate": 0.34674975275993347,
"beginner": 0.1125892847776413,
"expert": 0.5406609773635864
}
|
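The request above spans four services; as a hedged illustration of just one of them, here is a minimal Spotify recommendation call using the spotipy library (the credentials are placeholders, and Apple Music, Tidal and Deezer would each need their own client library or REST integration — none of this comes from the original record):

import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Placeholder credentials: register an application in the Spotify developer dashboard to obtain real ones.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
))

# Seed-based recommendations; seed_tracks or seed_artists work the same way.
results = sp.recommendations(seed_genres=["pop"], limit=10)
for track in results["tracks"]:
    print(track["name"], "-", track["artists"][0]["name"])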
47,080
|
In this JavaScript for Leaflet.js I am using e.stopPropagation(); on polygons to ensure that the map click event is not also called. This works: the polygon click event fires and the map click event isn't called, however it still gives an error that 'e.stopPropagation is not a function' - 'let money = 100000;
let numberOfBuildings = 0;
let dailybonus = 0;
const moneyElement = document.getElementById("moneydisplay");
moneyElement.textContent = `£${money}`;
const map = L.map('map').setView([51.5352028, 0.0054299], 17);
// fetch house data
// Event listener for when the map is clicked
map.on('click', function (e) {
// Update building radius and city coordinates
let buildingRadius = 300;
let firstCityCoords = [e.latlng.lat, e.latlng.lng];
if (money >= 100000) {
// Code to execute when money is 100,000 or more (original code goes here)
money -= 50000;
const overpassQuery = `
[out:json];
way["building"="house"](around:${buildingRadius},${firstCityCoords[0]},${firstCityCoords[1]});
out body;
>;
out skel qt;
`;
fetch('https://overpass-api.de/api/interpreter', {
method: 'POST',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
},
body: 'data=' + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Update money display after successful building placement
const moneyElement = document.getElementById("moneydisplay");
moneyElement.textContent = `£${money}`;
numberOfBuildings = data.elements.length; // Get the length of the array after fetching data
dailybonus += numberOfBuildings;
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === 'way') {
// Extract coordinates from the way element
const coordinates = element.nodes.map((nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
});
// Create a polygon for the building
const polygon = L.polygon(
coordinates,
{
color: 'black', // Set building outline color
weight: 2, // Set building outline weight
fill: true, // Fill the building outline
fillColor: 'gray', // Set building fill color
fillOpacity: 0.5, // Set building fill opacity
}
).addTo(map);
// Add click functionality to the polygon (optional)
polygon.on('click', (e) => {
// Handle click event on the building footprint
console.log('Building footprint clicked!');
e.stopPropagation();
// Change polygon fill color to green
});
}
});
// Display message after creating polygons (uses the updated numberOfBuildings)
const messageDisplay = document.getElementById('messageDisplay');
messageDisplay.innerHTML = `Congratulations you have leased ${numberOfBuildings} buildings for £50,000! You will earn £${numberOfBuildings} per day from these leases. <p> You can now click on individual buildings on the map to buy them and start earning rent as well.</p>`;
})
.catch((error) => {
console.error('Error fetching data:', error);
});
} else {
// Code to execute when money is less than 100,000 (optional)
console.log("You don't have enough money to build!");
// Display message after creating polygons (uses the updated numberOfBuildings)
const messageDisplay = document.getElementById('messageDisplay');
messageDisplay.textContent = `Sorry you don't have enough money. You need at least £100,000 to buy land.`;
}
});
//24 hour clock display
const TIME_MULTIPLIER = 60 * 10; // 10 minutes = 600 seconds
// Function to format time in 24-hour format with leading zeros
function formatTime(hours, minutes) {
// Handle the case where minutes reach 60 (should display the next hour)
if (minutes === 60) {
hours++;
minutes = 0;
}
return `${hours.toString().padStart(2, "0")}:${minutes
.toString()
.padStart(2, "0")}`;
}
// Function to update the clock display and handle daily bonus
function updateClock() {
const currentTime = new Date();
// Simulate game time by multiplying actual time with multiplier
const gameTime = new Date(currentTime.getTime() * TIME_MULTIPLIER);
// Get hours and minutes in 24-hour format
let hours = gameTime.getHours();
// Get minutes and force them to the nearest multiple of 10 (ending in 0)
let minutes = Math.floor(gameTime.getMinutes() / 10) * 10;
// Format the time string with fixed minute handling
const formattedTime = formatTime(hours, minutes);
// Update the content of the div with the formatted time
document.getElementById("timedisplay").textContent = formattedTime;
// Check if it's midnight (00:00)
if (hours === 0 && minutes === 0) {
// add dailybonus
money += dailybonus;
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
console.log("Daily bonus added:", dailybonus); // You can replace console.log with your desired action
}
}
// Call the updateClock function initially
updateClock();
// Update the clock every second to simulate smooth time progression
setInterval(updateClock, 1000);
'
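A note on the error above (a sketch, not a definitive fix): Leaflet hands its own event object to layer click handlers, and that object has no stopPropagation method of its own, which is why 'e.stopPropagation is not a function' is thrown even though the map click is what you want to block. Two commonly used alternatives, assuming Leaflet 1.x and the polygon/coordinates variables from the code above:
// Option 1: Leaflet's own helper accepts Leaflet event objects.
polygon.on('click', (e) => {
    console.log('Building footprint clicked!');
    L.DomEvent.stopPropagation(e); // instead of e.stopPropagation()
});
// Option 2: disable bubbling on the layer when it is created, so its clicks never reach the map.
const quietPolygon = L.polygon(coordinates, {
    color: 'black',
    weight: 2,
    fill: true,
    fillColor: 'gray',
    fillOpacity: 0.5,
    bubblingMouseEvents: false // Path option available since Leaflet 1.0
}).addTo(map);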
|
d5cc25ae509e4b4d340ef2622c67c789
|
{
"intermediate": 0.4682261347770691,
"beginner": 0.3455926775932312,
"expert": 0.18618124723434448
}
|
47,081
|
applemusic_api.py:
import re
import base64
import pbkdf2
import hashlib
from Cryptodome.Hash import SHA256
from uuid import uuid4
from utils.utils import create_requests_session
from fingerprint import Fingerprint
import srp._pysrp as srp
srp.rfc5054_enable()
srp.no_username_in_x()
def b64enc(data):
return base64.b64encode(data).decode()
def b64dec(data):
return base64.b64decode(data)
class AppleMusicApi(object):
def __init__(self, exception, storefront='US', language='en-US', lyrics_resource='lyrics'):
self.s = create_requests_session()
self.api_base = 'https://amp-api.music.apple.com/v1/'
self.storefront = storefront
self.language = language
self.lyrics_storefront = storefront
self.lyrics_language = language
self.lyrics_resource = lyrics_resource
self.access_token = ''
self.user_token = ''
self.exception = exception
self.user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.66 Safari/537.36'
def headers(self):
return {
'authorization': 'Bearer ' + self.access_token,
'Connection': 'Keep-Alive',
'Content-Type': 'application/json',
'Origin': 'https://music.apple.com',
'Referer': 'https://music.apple.com/',
'Accept-Encoding': 'gzip, deflate',
'Accept-Language': f'{self.language},en;q=0.9',
'User-Agent': self.user_agent,
'Media-User-Token': self.user_token,
'x-apple-renewal': 'true'
}
def get_access_token(self):
s = create_requests_session()
r = s.get('https://music.apple.com/us/search', headers=self.headers())
if r.status_code != 200: raise self.exception(r.text)
index_js = re.search('(?<=index\-)(.*?)(?=\.js")', r.text).group(1)
r = s.get(f'https://music.apple.com/assets/index-{index_js}.js', headers=self.headers())
if r.status_code != 200: raise self.exception(r.text)
self.access_token = re.search('(?=eyJh)(.*?)(?=")', r.text).group(1)
return self.access_token
def auth(self, email: str, password: str):
auth_url = 'https://idmsa.apple.com/appleauth/'
client_id = '06f8d74b71c73757a2f82158d5e948ae7bae11ec45fda9a58690f55e35945c51'
frame_id = 'auth-' + str(uuid4()).lower()
# get "dslang", "site" and "aasp" cookies
r = self.s.get(auth_url + 'auth/authorize/signin', headers=self.headers(), params={
'frame_id': frame_id,
'language': 'en_us',
'skVersion': '7',
'iframeId': frame_id,
'client_id': client_id,
'redirect_uri': 'https://music.apple.com',
'response_type': 'code',
'response_mode': 'web_message',
'account_ind': '1',
'state': frame_id,
'authVersion': 'latest'
})
if r.status_code != 200: raise self.exception(r.text)
auth_attributes = r.headers['X-Apple-Auth-Attributes']
# get "aa" cookie
r = self.s.post(auth_url + 'jslog', headers=self.headers(), json={
'type': 'INFO',
'title': 'AppleAuthPerf-s-y',
'message': '''APPLE ID : TTI {"data":{"initApp":{"startTime":1154.2000000001863},"loadAuthComponent":{"startTime":1500.7000000001863},"startAppToTTI":{"duration":346.70000000018626}},"order":["initApp","loadAuthComponent","startAppToTTI"]}''',
'iframeId': frame_id,
'details': '''{"pageVisibilityState":"visible"}'''
})
assert (r.status_code == 200)
# actual login
headers = {
'Accept': 'application/json',
'Referer': 'https://idmsa.apple.com/',
'Content-Type': 'application/json',
'X-Apple-Widget-Key': client_id,
'X-Apple-Frame-Id': frame_id,
'X-Apple-Domain-Id': '3',
'X-Apple-Locale': 'en_us',
'X-Requested-With': 'XMLHttpRequest',
'Origin': 'https://idmsa.apple.com',
'X-Apple-I-Require-UE': 'true',
'X-Apple-I-FD-Client-Info': '{' + f'"U":"{self.user_agent}","L":"{self.language}","Z":"GMT-8:00","V":"1.1","F":"{Fingerprint().create_fingerprint()}"' + '}',
'X-Apple-Auth-Attributes': auth_attributes,
'User-Agent': self.user_agent,
'X-Apple-Mandate-Security-Upgrade': '0'
}
json_ = {'accountName': email, 'rememberMe': 'false'}
params_ = {'isRememberMeEnabled': 'false'}
r = self.s.post(auth_url + 'auth/federate', headers=headers, params=params_, json=json_)
if 'federated' not in r.json(): raise self.exception(r.text)
# finally begin login
user = srp.User(email, bytes(), hash_alg=srp.SHA256, ng_type=srp.NG_2048)
_, A = user.start_authentication()
json_ = {'a': b64enc(A), 'accountName': email, 'protocols': ['s2k', 's2k_fo']}
r = self.s.post(auth_url + 'auth/signin/init', headers=headers, json=json_)
out_json = r.json()
if r.status_code != 200: raise self.exception(out_json['serviceErrors'][0]['message'])
if 'b' not in out_json: raise self.exception(r.text)
if out_json.get('protocol') != 's2k': raise self.exception('Protocol not supported')
salt = b64dec(out_json['salt'])
iterations = out_json['iteration']
B = b64dec(out_json['b'])
c = out_json['c']
pass_hash = hashlib.sha256(password.encode("utf-8")).digest()
enc_pass = pbkdf2.PBKDF2(pass_hash, salt, iterations, SHA256).read(32)
user.p = enc_pass
M1 = user.process_challenge(salt, B)
if M1 is None: raise self.exception("Failed to process challenge")
M2 = user.K
# real version uses m2 as well... hmmm
json_ = {'accountName': email, 'c': c, 'm1': b64enc(M1), 'm2': b64enc(M2), 'rememberMe': 'false'}
r = self.s.post(auth_url + 'auth/signin/complete', headers=headers, params=params_, json=json_)
if r.status_code != 200: raise self.exception(r.json()['serviceErrors'][0]['message'])
# exchange the "myacinfo" cookie with the "media-user-token"
r = self.s.post('https://buy.music.apple.com/account/web/auth', headers=self.headers(), json={'webAuthorizationFlowContext': 'music'})
if r.status_code != 200: raise self.exception(r.text)
self.user_token = self.s.cookies['media-user-token']
return self.user_token
def get_account_details(self, force_region, selected_language, lyrics_language):
r = self.s.get(self.api_base + 'me/account', headers=self.headers(), params={'meta': 'subscription'})
if r.status_code != 200: raise self.exception(r.text)
self.lyrics_storefront = r.json()['meta']['subscription']['storefront']
if force_region.lower() == self.lyrics_storefront: force_region = None
if force_region: print(f"Apple Music: WARNING: Selected region {force_region} is not the same as your Apple Music region {self.lyrics_storefront}, lyrics will use the region {self.lyrics_storefront}. Only lyrics available in both regions will be used, maybe use a copy of the module with the folder name (which determines the name of the module) and the netlocation_constant changed for lyrics only if you want credits or playlists from other regions.")
self.storefront = force_region.lower() if force_region else self.lyrics_storefront
account_active = r.json()['meta']['subscription']['active']
storefront_endpoint = f'storefronts/{force_region.lower()}' if force_region else 'me/storefront'
endpoint_data = self.s.get(self.api_base + storefront_endpoint, headers=self.headers())
if endpoint_data.status_code != 200: raise self.exception(f'Region {force_region} is not supported')
supported_languages = endpoint_data.json()['data'][0]['attributes']['supportedLanguageTags']
if selected_language:
for i in supported_languages:
if selected_language in i:
self.language = i
break
else:
print(f"Apple Music: WARNING: Selected language {selected_language} in region {force_region if force_region else self.lyrics_storefront} is unsupported, force a different region or use one of these: {', '.join(supported_languages)}")
self.language = supported_languages[0]
else:
self.language = supported_languages[0]
if not lyrics_language: lyrics_language = selected_language
if force_region:
supported_languages = self.s.get(f'{self.api_base}me/storefront', headers=self.headers()).json()['data'][0]['attributes']['supportedLanguageTags']
if lyrics_language:
for i in supported_languages:
if selected_language in i:
self.lyrics_language = i
break
else:
print(f"Apple Music: WARNING: Selected language {selected_language} in lyrics region {self.lyrics_storefront} is unsupported, force a different region or use one of these: {', '.join(supported_languages)}")
self.lyrics_language = supported_languages[0]
else:
self.lyrics_language = supported_languages[0]
return self.storefront, account_active, self.language, self.lyrics_language, self.lyrics_storefront
def check_active_subscription(self):
url = f'{self.api_base}me/account'
params = {'meta': 'subscription', 'challenge[subscriptionCapabilities]': 'voice,premium'}
response = self.s.get(url, headers=self.headers(), params=params)
if response.status_code != 200: raise self.exception(response.text)
response_data = response.json()
if 'meta' in response_data and 'subscription' in response_data['meta']:
return response_data['meta']['subscription'].get('active', False)
return False
def _get(self, url: str, params=None, storefront=None, language=None):
if not params: params = {}
if not storefront: storefront = self.storefront
params['l'] = language if language else self.language
r = self.s.get(f'{self.api_base}catalog/{storefront}/{url}', params=params, headers=self.headers())
if r.status_code not in [200, 201, 202]: raise self.exception(r.text)
return r.json()
def search(self, query_type: str, query: str, limit: int = 10):
if limit > 25: limit = 25
params = {
'term': query,
'types': query_type,
'limit': limit
}
if query_type == 'songs':
params['extend[songs]'] = 'attribution,composerName,contentRating,discNumber,durationInMillis,isrc,movementCount,movementName,movementNumber,releaseDate,trackNumber,workNamedata'
params['include[songs]'] = 'artists,albums' + (f',{self.lyrics_resource}' if self.storefront == self.lyrics_storefront else '') # doesn't give lyrics?
params['extend[albums]'] = 'copyright,upc'
elif query_type == 'playlists':
params['include[playlists]'] = 'curator'
params['extend[playlists]'] = 'artwork,description,trackTypes,trackCount'
results = self._get('search', params)['results']
if query_type in results:
results = results[query_type]['data']
else:
results = []
return results
def get_playlist_base_data(self, playlist_id):
return self._get(f'playlists/{playlist_id}', params={
'include': 'curator,tracks',
'extend': 'artwork,description,trackTypes,trackCount',
'include[songs]': 'artists,albums' + (f',{self.lyrics_resource}' if self.storefront == self.lyrics_storefront else ''),
'extend[songs]': 'extendedAssetUrls,attribution,composerName,contentRating,discNumber,durationInMillis,isrc,movementCount,movementName,movementNumber,releaseDate,trackNumber,workNamedata',
'extend[albums]': 'copyright,upc'
})['data'][0]
def get_playlist_tracks(self, playlist_data):
tracks_list, track_data = [], {}
tracks = list(playlist_data['relationships']['tracks']['data'])
offset = len(tracks)
while len(tracks) + offset <= playlist_data['attributes']['trackCount']:
tracks += self._get(f'playlists/{playlist_data["id"]}/tracks', params={
'offset': offset,
'include[songs]': 'artists,albums' + (f',{self.lyrics_resource}' if self.storefront == self.lyrics_storefront else ''),
'extend[songs]': 'extendedAssetUrls,attribution,composerName,contentRating,discNumber,durationInMillis,isrc,movementCount,movementName,movementNumber,releaseDate,trackNumber,workNamedata',
'extend[albums]': 'copyright,upc',
'limit': 100
})['data']
offset += 100
for track in tracks:
tracks_list.append(track['id'])
track_data[track['id']] = track
return tracks_list, track_data
def get_tracks_by_ids(self, track_ids: list = None, isrc: str = None):
if not track_ids: track_ids = []
params = {'filter[isrc]': isrc} if isrc else {'ids': ','.join(track_ids)}
params['include'] = 'artists,albums' + (f',{self.lyrics_resource}' if self.storefront == self.lyrics_storefront else '')
params['extend'] = 'attribution,composerName,contentRating,discNumber,durationInMillis,isrc,movementCount,movementName,movementNumber,releaseDate,trackNumber,workNamedata'
params['extend[albums]'] = 'copyright,upc'
return self._get('songs', params)['data']
def get_track(self, track_id: str = None):
return self.get_tracks_by_ids([track_id])[0]
@staticmethod
def get_lyrics_support(track_attributes):
# could technically be a single line in the lambda
if track_attributes.get('hasTimeSyncedLyrics'):
return 1 if track_attributes.get('isVocalAttenuationAllowed') else 2
else:
return 3 if track_attributes.get('hasLyrics') else 4
def get_track_by_isrc(self, isrc: str, album_name: str):
results = self.get_tracks_by_ids(isrc=isrc)
correct_region_results = [i for i in results if i['attributes']['url'].split('i=')[-1].split('&')[0] == i['id']]
incorrect_region_results = [i for i in results if i['attributes']['url'].split('i=')[-1].split('&')[0] != i['id']]
correct_region_results_sorted_by_track_number = sorted(correct_region_results, key=lambda x: x['attributes'].get('trackNumber', 1))
fix_results_by_album = lambda list_to_sort: sorted(list_to_sort, key=lambda x: (x['attributes']['albumName'] != album_name))
correct_album_correct_region_results = fix_results_by_album(correct_region_results_sorted_by_track_number)
correct_album_incorrect_region_results = fix_results_by_album(incorrect_region_results)
correct_album_prioritised_lyrics_results = sorted(correct_album_correct_region_results, key=lambda x: self.get_lyrics_support(x['attributes']))
return correct_album_prioritised_lyrics_results + correct_album_incorrect_region_results
def get_lyrics(self, track_id, lyrics_resource=None):
if not lyrics_resource: lyrics_resource = self.lyrics_resource
try:
data = self._get(f'songs/{track_id}/{lyrics_resource}', storefront=self.lyrics_storefront, language=self.language)
except self.exception:
return None
return data#['data'][0]['attributes']['ttml']
mycode.py:
from applemusic_api import AppleMusicApi
if __name__ == "__main__":
artist_name = input("Enter artist name: ")
song_title = input("Enter song title: ")
apple_music_api = AppleMusicApi()
apple_music_api.get_access_token() # Hypothetically sets access and user tokens
track_results = apple_music_api.search('songs', f"{artist_name} - {song_title}")
# Printing track names from the search result
if track_results and 'songs' in track_results['results']:
for track in track_results['results']['songs']['data']:
print(f"Track: {track['attributes']['name']} by {track['attributes']['artistName']}")
else:
print("No tracks found.")
I want to recommend songs based on artist name and song name.
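A minimal sketch of how mycode.py could line up with the AppleMusicApi class above - note that its __init__ takes an exception class as its first argument, and that search() already returns the list stored under results[query_type]['data'], so the caller iterates that list directly. The "recommendation" here (printing the closest catalog matches in API order) is only illustrative:
from applemusic_api import AppleMusicApi

if __name__ == "__main__":
    artist_name = input("Enter artist name: ")
    song_title = input("Enter song title: ")

    # __init__ expects an exception class as its first argument (see the class definition above)
    api = AppleMusicApi(Exception)
    api.get_access_token()

    # search() returns a plain list of track dicts, not the raw API response
    tracks = api.search('songs', f"{artist_name} {song_title}", limit=10)

    if not tracks:
        print("No tracks found.")
    else:
        for track in tracks:
            attrs = track['attributes']
            print(f"Track: {attrs['name']} by {attrs['artistName']} ({attrs.get('albumName', '')})")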
|
218672a4ac5e5d287b3df07e441a4df4
|
{
"intermediate": 0.3362293839454651,
"beginner": 0.4676024615764618,
"expert": 0.19616815447807312
}
|
47,082
|
Make this page clear: https://docs.crewai.com/core-concepts/Crews/
|
667e3273831876eab9c06974833fcb1e
|
{
"intermediate": 0.30034875869750977,
"beginner": 0.24632121622562408,
"expert": 0.45333001017570496
}
|
47,083
|
public class CategoryConfig
{
public Dictionary<string, CategoryEntry> Categories { get; set; }
}
public class CategoryEntry
{
public List<string> Items { get; set; }
}
How can I make this work with subcategories as well?
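One common way to support nesting is to make CategoryEntry recursive, so every entry can carry both its own items and further sub-categories. A sketch, assuming C# 9+ (for the target-typed new() initializers) and binding with a standard JSON serializer such as System.Text.Json:
using System.Collections.Generic;

public class CategoryConfig
{
    public Dictionary<string, CategoryEntry> Categories { get; set; } = new();
}

public class CategoryEntry
{
    public List<string> Items { get; set; } = new();

    // Sub-categories reuse the same entry type, so the tree can nest arbitrarily deep.
    public Dictionary<string, CategoryEntry> SubCategories { get; set; } = new();
}
The matching JSON then nests the same shape, e.g. { "Categories": { "Fruit": { "Items": ["Apple"], "SubCategories": { "Citrus": { "Items": ["Lemon", "Lime"] } } } } }.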
|
baaba7a8d390a135649f503103839794
|
{
"intermediate": 0.3731980621814728,
"beginner": 0.3632202744483948,
"expert": 0.26358169317245483
}
|
47,084
|
Explain this code: # split the 4 inputs apart and build a new 2-input, 2-branch design in which the same modalities are combined
import math
import logging
from functools import partial
from collections import OrderedDict
from copy import deepcopy
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.models.layers import to_2tuple
from lib.models.layers.patch_embed import PatchEmbed, PatchEmbed_event, xcorr_depthwise
from .utils import combine_tokens, recover_tokens
from .vit import VisionTransformer
from ..layers.attn_blocks import CEBlock
from .counter_guide import Counter_Guide
_logger = logging.getLogger(__name__)
class VisionTransformerCE(VisionTransformer):
""" Vision Transformer with candidate elimination (CE) module
A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
Includes distillation token & head support for `DeiT: Data-efficient Image Transformers`
- https://arxiv.org/abs/2012.12877
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
num_heads=12, mlp_ratio=4., qkv_bias=True, representation_size=None, distilled=False,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None,
act_layer=None, weight_init='',
ce_loc=None, ce_keep_ratio=None):
super().__init__()
if isinstance(img_size, tuple):
self.img_size = img_size
else:
self.img_size = to_2tuple(img_size)
self.patch_size = patch_size
self.in_chans = in_chans
self.num_classes = num_classes
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
self.num_tokens = 2 if distilled else 1
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
act_layer = act_layer or nn.GELU
self.patch_embed = embed_layer(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
self.pos_drop = nn.Dropout(p=drop_rate)
self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
blocks = []
ce_index = 0
self.ce_loc = ce_loc
for i in range(depth):
ce_keep_ratio_i = 1.0
if ce_loc is not None and i in ce_loc:
ce_keep_ratio_i = ce_keep_ratio[ce_index]
ce_index += 1
blocks.append(
CEBlock(
dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop=drop_rate,
attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer,
keep_ratio_search=ce_keep_ratio_i)
)
self.blocks = nn.Sequential(*blocks)
self.norm = norm_layer(embed_dim)
self.counter_guide = Counter_Guide
self.init_weights(weight_init)
def reshape_features_for_counter_guide(self, features, num_patches_per_side=14, include_cls_token=False):
B, N, D = features.shape # batch_size, num_patches+num_tokens, embed_dim
if not include_cls_token:
features = features[:, 1:, :] # assume the first token is the cls_token
# reshape features to (batch_size, embed_dim, H, W)
features_reshaped = features.transpose(1, 2).reshape(B, D, num_patches_per_side, num_patches_per_side)
return features_reshaped
def reshape_features_back_from_counter_guide(self, features_reshaped, num_patches_per_side=14, include_cls_token=False):
B, D, H, W = features_reshaped.shape
features_seq = features_reshaped.reshape(B, D, -1).transpose(1, 2) # convert features back to (B, N, D)
if include_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
features_seq = torch.cat((cls_tokens, features_seq), dim=1) # add the cls token back
return features_seq
def forward_features(self, z, x, event_z, event_x,
mask_z=None, mask_x=None,
ce_template_mask=None, ce_keep_rate=None,
return_last_attn=False
):
# Branch 1 processing pipeline
B, H, W = x.shape[0], x.shape[2], x.shape[3]
x = self.patch_embed(x)
z = self.patch_embed(z)
z += self.pos_embed_z
x += self.pos_embed_x
if mask_z is not None and mask_x is not None:
mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_z = mask_z.flatten(1).unsqueeze(-1)
mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_x = mask_x.flatten(1).unsqueeze(-1)
mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode)
mask_x = mask_x.squeeze(-1)
if self.add_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
cls_tokens = cls_tokens + self.cls_pos_embed
if self.add_sep_seg:
x += self.search_segment_pos_embed
z += self.template_segment_pos_embed
x = combine_tokens(z, x, mode=self.cat_mode)
if self.add_cls_token:
x = torch.cat([cls_tokens, x], dim=1)
x = self.pos_drop(x)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device)
global_index_t = global_index_t.repeat(B, 1)
global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device)
global_index_s = global_index_s.repeat(B, 1)
removed_indexes_s = []
for i, blk in enumerate(self.blocks):
x, global_index_t, global_index_s, removed_index_s, attn = \
blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s.append(removed_index_s)
# x = self.norm(x) # # [bs, n_patch, dim] = [bs, 320, 768] 320 = 64 + 256
# Branch 2 processing pipeline
event_x = self.pos_embed_event(event_x)
event_z = self.pos_embed_event(event_z)
event_x += self.pos_embed_x
event_z += self.pos_embed_z
event_x = combine_tokens(event_z, event_x, mode=self.cat_mode)
if self.add_cls_token:
event_x = torch.cat([cls_tokens, event_x], dim=1)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t1 = torch.linspace(0, lens_z - 1, lens_z).to(event_x.device)
global_index_t1 = global_index_t1.repeat(B, 1)
global_index_s1 = torch.linspace(0, lens_x - 1, lens_x).to(event_x.device)
global_index_s1 = global_index_s1.repeat(B, 1)
removed_indexes_s1 = []
for i, blk in enumerate(self.blocks):
event_x, global_index_t1, global_index_s1, removed_index_s1, attn = \
blk(event_x, global_index_t1, global_index_s1, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s1.append(removed_index_s1)
# after all blocks have been processed, bring in counter_guide for cross-modal interaction
# to fit the module's expected input, convert the dimensions first
x_reshaped = self.reshape_features_for_counter_guide(x)
event_x_reshaped = self.reshape_features_for_counter_guide(event_x)
x_inter, event_x_inter = self.counter_guide(x_reshaped, event_x_reshaped)
# x_inter, event_x_inter = self.counter_guide(x,event_x)
# use the interacted features to enhance the original features
# the interacted features are in C,H,W form and need to be converted back to the original ViT token layout
x_enhenced = x + self.reshape_features_back_from_counter_guide(x_inter, include_cls_token=True)
event_x_enhenced = event_x + self.reshape_features_back_from_counter_guide(event_x_inter, include_cls_token=True)
x = torch.cat([x_enhenced, event_x_enhenced], dim=1)
aux_dict = {
"attn": attn,
"removed_indexes_s": removed_indexes_s, # used for visualization
}
return x, aux_dict
def forward(self, z, x, event_z, event_x,
ce_template_mask=None, ce_keep_rate=None,
tnc_keep_rate=None,
return_last_attn=False):
x, aux_dict = self.forward_features(z, x, event_z, event_x, ce_template_mask=ce_template_mask, ce_keep_rate=ce_keep_rate,)
return x, aux_dict
def _create_vision_transformer(pretrained=False, **kwargs):
model = VisionTransformerCE(**kwargs)
if pretrained:
if 'npz' in pretrained:
model.load_pretrained(pretrained, prefix='')
else:
checkpoint = torch.load(pretrained, map_location="cpu")
missing_keys, unexpected_keys = model.load_state_dict(checkpoint["model"], strict=False)
print('Load pretrained model from: ' + pretrained)
return model
def vit_base_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model
def vit_large_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=1024, depth=24, num_heads=16, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model
|
02eb864424691bded4a00e3c6bed4af6
|
{
"intermediate": 0.27997615933418274,
"beginner": 0.4316779673099518,
"expert": 0.28834590315818787
}
|
47,085
|
In this JavaScript for Leaflet.js, when a building is colored green I want to remove it from the polygon click event and instead display a message saying 'You already own this building' - 'let money = 100000;
let numberOfBuildings = 0;
let dailybonus = 0;
let polygonClicked = false; // Flag to track if a polygon was clicked
const moneyElement = document.getElementById("moneydisplay");
moneyElement.textContent = `£${money}`;
const map = L.map("map").setView([51.5352028, 0.0054299], 17);
// fetch house data
// Event listener for when the map is clicked
map.on("click", function (e) {
if (!polygonClicked) {
// Update building radius and city coordinates
let buildingRadius = 300;
let firstCityCoords = [e.latlng.lat, e.latlng.lng];
if (money >= 100000) {
// Code to execute when money is 100,000 or more (original code goes here)
money -= 50000;
const overpassQuery = `
[out:json];
way["building"="house"](around:${buildingRadius},${firstCityCoords[0]},${firstCityCoords[1]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Update money display after successful building placement
const moneyElement = document.getElementById("moneydisplay");
moneyElement.textContent = `£${money}`;
numberOfBuildings = data.elements.length; // Get the length of the array after fetching data
dailybonus += numberOfBuildings;
console.log("Daily bonus total now:", dailybonus);
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map((nodeId) => {
const node = data.elements.find((node) => node.id === nodeId);
return [node.lat, node.lon];
});
// Create a polygon for the building
const polygon = L.polygon(coordinates, {
color: "black", // Set building outline color
weight: 2, // Set building outline weight
fill: true, // Fill the building outline
fillColor: "gray", // Set building fill color
fillOpacity: 0.5, // Set building fill opacity
}).addTo(map);
polygon.on("click", function (e) {
// Handle click event on the building footprint
console.log("Building footprint clicked!");
e.originalEvent.stopPropagation();
polygonClicked = true; // Set flag to true when a polygon is clicked
if (money >= 10000) {
// Change polygon fill color to green
polygon.setStyle({ fillColor: "green" });
// Display message after creating polygons (uses the updated numberOfBuildings)
const messageDisplay =
document.getElementById("messageDisplay");
messageDisplay.innerHTML = `Congratulations you have bought this building for £10,000. You will earn £1000 per day in rent.`;
money -= 10000;
dailybonus += 1000;
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
console.log("Daily bonus total now:", dailybonus);
}
else{
const messageDisplay = document.getElementById("messageDisplay");
messageDisplay.innerHTML = `Sorry you need £10,000 to buy this building`;
}
});
// Reset the polygonClicked flag after clicking outside a polygon
map.on("click", function (e) {
polygonClicked = false;
});
}
});
// Display message after creating polygons (uses the updated numberOfBuildings)
const messageDisplay = document.getElementById("messageDisplay");
messageDisplay.innerHTML = `Congratulations you have leased ${numberOfBuildings} buildings for £50,000! You will earn £${numberOfBuildings} per day from these leases. <p> You can now click on individual buildings on the map to buy them and start earning rent as well.</p>`;
})
.catch((error) => {
console.error("Error fetching data:", error);
});
} else {
// Code to execute when money is less than 100,000 (optional)
console.log("You don't have enough money to build!");
// Display message after creating polygons (uses the updated numberOfBuildings)
const messageDisplay = document.getElementById("messageDisplay");
messageDisplay.textContent = `Sorry you don't have enough money. You need at least £100,000 to buy land.`;
}
}
});
//24 hour clock display
const TIME_MULTIPLIER = 60 * 10; // 10 minutes = 600 seconds
// Function to format time in 24-hour format with leading zeros
function formatTime(hours, minutes) {
// Handle the case where minutes reach 60 (should display the next hour)
if (minutes === 60) {
hours++;
minutes = 0;
}
return `${hours.toString().padStart(2, "0")}:${minutes
.toString()
.padStart(2, "0")}`;
}
// Function to update the clock display and handle daily bonus
function updateClock() {
const currentTime = new Date();
// Simulate game time by multiplying actual time with multiplier
const gameTime = new Date(currentTime.getTime() * TIME_MULTIPLIER);
// Get hours and minutes in 24-hour format
let hours = gameTime.getHours();
// Get minutes and force them to the nearest multiple of 10 (ending in 0)
let minutes = Math.floor(gameTime.getMinutes() / 10) * 10;
// Format the time string with fixed minute handling
const formattedTime = formatTime(hours, minutes);
// Update the content of the div with the formatted time
document.getElementById("timedisplay").textContent = formattedTime;
// Check if it's midnight (00:00)
if (hours === 0 && minutes === 0) {
// add dailybonus
money += dailybonus;
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
}
}
// Call the updateClock function initially
updateClock();
// Update the clock every second to simulate smooth time progression
setInterval(updateClock, 1000);
'
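A sketch of one way to handle already-owned buildings, assuming the polygon variable from the code above; the owned flag is an illustrative property stored on the layer (alternatively polygon.off('click') can detach the handler entirely after purchase):
polygon.owned = false; // set right after the polygon is created

polygon.on("click", function (e) {
    e.originalEvent.stopPropagation();
    const messageDisplay = document.getElementById("messageDisplay");

    if (polygon.owned) {
        messageDisplay.innerHTML = "You already own this building";
        return; // skip the purchase logic for buildings that are already green
    }

    if (money >= 10000) {
        polygon.setStyle({ fillColor: "green" });
        polygon.owned = true; // later clicks now take the branch above
        money -= 10000;
        dailybonus += 1000;
        document.getElementById("moneydisplay").textContent = `£${money}`;
        messageDisplay.innerHTML = "Congratulations you have bought this building for £10,000. You will earn £1000 per day in rent.";
    } else {
        messageDisplay.innerHTML = "Sorry you need £10,000 to buy this building";
    }
});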
|
ed5bb6d2aac1bbe3b430a4955c74fd1d
|
{
"intermediate": 0.39260047674179077,
"beginner": 0.4234406650066376,
"expert": 0.18395882844924927
}
|
47,086
|
write me an elasticsearch query (using painless script IF NECESSARY) to update the theme array inside this document, removing "A" and adding "E" and "F"
{
"_index" : "v_dev_dataset_document_tags_index",
"_type" : "_doc",
"_id" : "ZhxNyI4BWFZ4mUbq-rFX",
"_score" : 0.0,
"_routing" : "6b576762832bcb86c5fef8f8a26dc3494470a262",
"_source" : {
"comment" : "",
"sentiment" : "neutre",
"theme" : [
"B",
"C"
],
"creation_date" : "2024-04-10T08:58:57",
"thesaurus" : "thesaurus_centralized",
"current" : true,
"identifier" : "lili",
"status" : "to_review",
"document_relations" : {
"parent" : "6b576762832bcb86c5fef8f8a26dc3494470a262",
"name" : "tag"
},
"positions" : [ ]
  }
}
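A sketch of one way to do it with the Update API and a Painless script; the index name, document id and routing value are copied from the document shown above (note that "A" is not actually in its theme array, so removing it is a no-op there):
POST /v_dev_dataset_document_tags_index/_update/ZhxNyI4BWFZ4mUbq-rFX?routing=6b576762832bcb86c5fef8f8a26dc3494470a262
{
  "script": {
    "lang": "painless",
    "source": "ctx._source.theme.removeIf(t -> t == params.remove); for (def t : params.add) { if (!ctx._source.theme.contains(t)) { ctx._source.theme.add(t) } }",
    "params": {
      "remove": "A",
      "add": ["E", "F"]
    }
  }
}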
|
b639b321e87190b3858a75364312c2dc
|
{
"intermediate": 0.35278984904289246,
"beginner": 0.3088909983634949,
"expert": 0.33831918239593506
}
|
47,087
|
Add necessary extra formatting to the following lecture. Do not summarize or skip any part. Do not break:
Welcome to this lecture. The title of this lecture is Organization and Work Design, part 4. This is the last part about organization and work design. We will talk about the modified charts. In the previous part we talked about the basic charts, the basic structures. What is important is that the modified charts are related with some particular situation that organization can face. When there are some growth strategies, organization want to follow some growth strategies, or they do want to adopt some diversification strategies, they need to swap from, for example, a functional solution to a divisional solution. But usually if we move from the functional solution to the divisional solution, we will dismantle the possibility of achieving scale economies. And sometimes the need for exploiting again and maintaining scale economies without changing the structural function lead to try to find a way for integrating the functional structure, modifying the functional structure in order to be able to cope with the uncertainty and the flexibility required for growth strategy and diversification strategy and at the same time maintaining the main advantage of efficiency of the functional solution, the functional structure. So if you want, when we talk about modified charts, we are talking about functional structure, modify in order to maintain the high level of efficiency and the high level of adding, the annihile level of flexibility with respect to the product, the market, or the segment of customer and clients required. Let's have a look at the agenda of the lecture of today. We will divide the lecture in two parts. First of all, we will work on the basic chart. We will, if you want, synthesize the lesson learned from the previous part of the lecture, from the part three. Then we will move in order to maintain a red line with the third part of this lecture. We will move to the modified chart and we will see the three main modified functional structures, the functional structure modified by products, by projects and by matrix. How do we modify the function? Well, the functional approach. We modify the functional approach inserting a new job position. But before talking with these change on the functional, let's have a look at the basic chart in order to remind us all the main tenets related with the basic structure, the basic structure, the basic solution for organizational design. So, let's talk about the elementary chart. As for the elementary chart, I want to remind you that this is a very simple solution with only two hierarchical levels, the economical and managerial direction unit and the imperative unit. The first level, the economical and managerial direction unit, is responsible for all the decisions, while the operative units are only asked to accomplish tasks. The managing direction takes all the decisions and give orders to the operative units. There is a high level of centralization and a low level of formalization because of the direct supervision of the managing director. The leadership style is authoritative and paternalistic and it has a short term orientation rather than a long term orientation. There is a huge backlog of unsolved problems because of the bottleneck of the managing director who have to solve all the problems. And there is a high level of efficiency and fast reaction to different exception coming from the environment. And there are good relationships. 
This is mostly for very simple organization with a very simple strategy, one product, one line of product in one market. Then we have the functional solution, the functional chart. The functional chart introduces a more hierarchical level, the functional level. So, we have three different hierarchical levels, the economical and managerial direction unit, the functional level and the operative level. What is the main characteristic of this functional solution? The main characteristic is that for still medium-sized, medium-complexity organization with a simple strategy, one product or one line of product in one market, the division of labor is for homogeneity of techniques, homogeneity of tasks, homogeneity of activity that should be done within the same department. Within the same department, all the homogenous activity are performed. So, we have functional directors. The functional directors are responsible for those, that group of activity allocated to them. And if you want, the main advantage of this chart is the efficiency. For the developing of high level of scale economies and experience economies. And there are also good relationships. There are some disadvantages. The disadvantages are related with the possible conflicts and the possible conflict between the different functional directors and the different units, therefore. And the difficulties are related with the communication between the different business units. Then we have the divisional chart. The divisional chart implies again three levels of hierarchy, the economical and managerial direction unit, the business unit and the operative unit. The most important characteristics of the divisional chart is that it's the flexibility because it's this solution fit for complex, huge, big organization with complex strategies, diversified and with lot of product strategies. And also with lot of reference markets. The grouping criteria is by how put within the same business unit are allocated all the tasks needed for being able to achieve the goal related one product, one market, one segment of customer. The most important concept for the divisional chart is the concept of strategic business area. Each business unit serves a specific strategic business area. And what is important is that the activity are so homogeneous that should be replicated in total under each business unit. Under each business unit we have a functional structure replicated under each business unit. These are the three basic chart we have seen in the last previous lecture. Now we move from this basic chart. In particular we move from the functional one and we try to maintain the advantages of the functional solution introducing some modification to its natural regular structure in order to achieve some of the advantages of the divisional solution. How do we achieve this goal? How do we combine the advantages of the functional solution with the advantages of the divisional chart? We achieve this inserting some specific organism, some specific job position within the functional chart. This organism can be without authority and can be temporary or permanent within the functional structure. It has two main characteristics. One is related with the level of hierarchical power, hierarchical authority and one is related with its possibility of staying forever within the structure or being active only for a limited time span. What is important is that this organism is cut with an horizontal dimension, the vertical- hierarchical line of the functional structure. 
At each intersection of the vertical- hierarchical line if you want, it interacts with the functional unit, the functional job position with which it has some relationship. What are the possible situations that we can cope with the consideration of the two characteristics of this organism? I would like to remind you that the two characteristics is that the presence of the absence of authority and being or not some time-limit to the presence within the structure. If we combine these two dimensions, these two characteristics, we can design these matrix. These matrix identify four different solutions, four different situations. The organism, the job position, the inserted job position can be temporary or permanent without authority or without authority. There are four potential situations, without authority and temporary, without authority and permanent, with authority and temporary, with authority and permanent. But considering that this situation without authority and temporary will create a harmful situation because who will follow the indication of an organism or of a job position, which is temporary, will stay for a few times and has no authority, so he will never be considered as a coordination tool. This is excluded as a possible solution. Three different solutions could remain. The first one is when we have a situation without authority and a permanent organism. In this case, we have a functional chart modified by product. The other situation is when we insert the job position for temporary time and we give them authority. In this case, we have the functional chart modified by project. When the inserted organism is permanent and without authority, we have the functional chart modified by matrix. But now, let's start with the first situation, the functional chart modified by product. The functional chart modified by product is a situation where we insert a job position with a situation of permanence and without authority. In a functional chart modified by product, we insert this organism without authority but permanent. This job position is called product manager. If you want, we insert within the structure of function, the functional structure we insert what they are called production. Production product managers. They are responsible for managing growth and or diversification strategy. They are usually the dependents of the economical and managerial unit. But they are also alternative. We will see alternative solutions. We will see therefore two different types of functional modified structure by product. The type one and the type two. What is important is that product manager intersect horizontally the vertical line which derive directly from the functional structure. The product manager are responsible for the ultimate results of the products. If you want, they recover what is the main goal assigned to a business unit in the divisional unit. They will have to in some way integrate themselves with the responsibility of the functional lines, vertical functional lines which are responsible for different goals. Let's have a look at the modified functional chart by product type one. As you can see, we have the first level managing director and then we have the normal functional director. And then we add what we call the product managers, would depend directly from the managing directors. We have the product manager one, product manager two, product manager three. And all of them intersect the vertical lines, the vertical lines of the functional structure. 
When these modified solution fit, well these solution fit for firms which are very diversified in terms of products. So they have really different line of products or different products that should be care of and they do not want to dismantle the high level of scale that have reached within the functional structure. Product manager, as I said, depend, so they are at the same hierarchical level of functional directors. So they depend directly from the managing director. And they have usually strategic tasks and resource allocation goals between the different functional vertical lines. What will happen if, when they intersect, going back to the chart, what happens when they intersect the vertical lines horizontally? Well, in this case, sometimes some conflicts could arise because they can ask for accomplishing some task for they could assign some goals to the vertical lines, which can be in contrast with the mandates that the functional director has given to those two positions. How to manage these conflicts? Well, the job position which receives the double mandate, the formal without authority mandate vertical line and the without authority horizontal mandate informal, if you want mandate from the product manager, have to stop immediately and say, I'm in front of conflict. And this solution should be searched by a dialogue. So through a mutual adjustment between the functional directors implied in the conflict and the product manager, since they are at the same level, at the same hierarchical level, and if the conflict is not solved at that level, can be taken to the managing director for the exception principle of a yaw. So we have the way out for exiting conflict, for solving conflict in this case. A alternative solution less complex for less complex situation is demotified by product type two. Let's have a look at this second alternative. Here we have the managing director has before and the different functional director. But in this case, in this chart, the product managers depend directly from the marketing managers. So they intercept all the vertical lines, the vertical functional lines except that one of marketing. When these modified solution type two could be adopted, what could be adopted in situation of not so huge diversification but with many products. So the products are not so dis-homogeneous between them. And in this case, PM depends completely from the marketing department and they are responsible for the managing of the marketing of the products and the integration with the vertical functional lines. So there is a big, a huge difference from the previous type one product manager. The previous product manager type one were full responsible of the competitive and the economic financial response goals of the products while type two product manager are responsible only of the effectiveness of the marketing activities of the different products. Again, also for this case, some conflicts could arise when they interact with the vertical functional line. What are the ways for managing these possible conflicts? The ways for managing these conflicts is that they should immediately talk with the marketing director in order to solve with the corresponding functional director the conflict, the potential conflict or the conflict that has been arise in the intersection between the vertical functional line and the horizontal functional line. These are the functional structure modified by product. Let's now move to functional structure modified by project. 
In this case, we insert, in this case we insert a job position, an organism which is temporary but we give it authority. If you want, the big difference now from the product solution by product solution is that the job position now can give orders and assign task directly to the vertical lines. Where the functional structure modified by project can be applied? The situation in which they can be applied is for highly sophisticated situation in terms of techniques and at the same time, when there is a peculiar goal to be achieved, peculiar goal related for a job order, for example. Think about, for example, where big ships are built. The building of a ship takes for two or three years. So we need to tackle for all the incentives of these project. And this could be the case for adopting these form of organizational structure. And what is important is how to solve possible conflicts, how to solve possible conflicts that could arise within the functional structure modified by project inserting a job position with who has authority. Why there is the emergence of conflicts? There is the emergence of conflicts because when there is a job position which has authority, intersect the vertical, hierarchical line, they will have authority. So they will create a situation of a double binding mandate for those positions in which they intersect. And how can we solve these potential conflicts? Well, we can adopt what is called a coordination principle. The coordination principle implies that all the people at the intersection between horizontal line of authority and vertical line of authority will follow the horizontal mandate when it works, when he or she works on the project. On the other hand, at a time, will follow the vertical mandate. But let's have a look at the... Let's have a look at the modified by project chart which is its characteristics. How usual? We have the managing director and we have the functional directors. Then we insert the job position which is temporary and without authority. These job positions are called project managers. Project manager one, project manager two, project manager three in these examples. As you can see in the intersection of the vertical line by the horizontal with authority hierarchical lines, there are some positions. These positions can be characterized by a double mandate. There are a double mandate. One is following the techniques and one is following the outputs, the project actually. How do we avoid conflicts? We talk about the coordination principle. When those people work on the project, they respond to the horizontal mandate. When they do not work on the project, they respond to the vertical mandate. But we know that the uncertainties and unforeseen events can come whatever during time. So how to try to avoid conflicts and how to manage conflict that could arise. In order to avoid conflicts, what is important is to create a very good planning of all the activities. So each people at the actors section should know when they are supposed to work within the project and when they are not. These are implied to adopt very sophisticated forms of project management techniques in order to define deadlines, milestones, gants of actions and so on. On the other side, when there will be for unforeseen events, for uncertainty, there will be some conflicts, emerging conflicts. We can tackle with them using a sort of exception rule. 
Again, as in the case by the functional structure modified by product, when these guys, going back to this chart, when these guys find a conflict for the mandate of the research and development director and the project manager mandate, he has to talk with both of them and they can solve heat through a mutual adjustment and if they do not find a solution, they can go to the upper level, that is, to the managing director. So what are the advantages of this solution? Well, the advantages are that we will succeed in a better project control and if you want also better relationship with clients and for sure, an higher level of coordination at level of the project, because we have dedicated resources, the project manager, who can follow the developing during all the time of the project, the developing of the project itself and therefore this structure, the modified by project structure is more goal oriented than other. What are the cons? What are the disadvantages of the structure? The disadvantages of the structure are the high level of conflicts deriving from the double mandate of all the intersection, all the people within the structure will have two bosses. So they will ask for a lot of coordination and these conflicts will ask for a lot of coordination, coordination between the different functional directors and the different project managers. But let's now have a look at the third functional chart modified by matrix. So we will talk now with the matrix structure. What is the matrix structure? The matrix structure is the most complex and sophisticated functional modified chart. If you want the first level, the first hierarchical level position, has specialized both by product that is included the divisional perspective and functional unit. So in the matrix we have both the dimension, techniques if you want inputs and outputs. And therefore in the matrix structure for sure again as in the project we will have a double line hierarchical line of mandates. But in this case we will solve potential conflicts introducing a job position called two boss managers who will if you want mediate the conflicting mandates that will be no more the need for going for multiple adjustment between functional directors and divisional directors. The solution of the double mandate is up to these two boss managers. But let's have a look at the chart of the matrix chart in order to understand the role of the boss managers. Well as you can see the matrix chart has two different first line the functional units and the product project units. Here we have the functional dimension and here we have the divisional dimension. Here we have the functional units in the example we have three different functional units and three different business units. The business units have their own hierarchical lines and the same have the functional units have their own hierarchical lines. So if you want in this case there are nine different intersections there are three functional units three divisional units which intersect in nine different points. In each intersection we pose a two boss manager the two boss manager is a role for mediating the conflicting mandates coming from the functional line the hierarchical line and the divisional hierarchical line. Under the two boss manager there is a team. 
So here in the example we will have nine different team, two boss manager position three multiplied three intersection and in each intersection we will pose a two boss manager so nine boss manager, two boss manager and under each of these nine two boss manager we will have nine different teams. What is important therefore? The important is that these two boss manager should be more relationship oriented than task oriented. They should adopt a leadership style more relationship oriented because they have to mediate the double mandate. Their social skill should be more important than professional competencies. That's very important in selecting people who have to cover those job positions. What are the main characteristics of the matrix chart? First of all there is a double grouping criteria input output, functional and divisional units. Secondly there is a double structural double line of authority one which follows the functional orientation and one which follows the divisional orientation. There are a lot of managerial integration roles. In the case, in the example we have seen in the slide there are at least nine different integration roles the two boss managers nine different two boss managers. There is a high level of participation and autonomy. All the teams under each the two boss managers have their own autonomy and the two boss managers can articulate, distribute tasks to its teams with different orientation. Then there is a huge implication, a huge adoption of the mutual adjustment as a coordination mechanism and this mutual adjustment is mostly accomplished mostly performed by the two boss manager positions due to the fact that they have to interact with the horizontal and vertical lines of the functional and the divisional lines. Therefore last but not least there is a high level of internal complexity due to the fact that we are mixing different tensions one coming from the functional and the other coming from the divisional orientation. Could be the solution for all the problems in terms of managing both efficiency and flexibility? No. This is a really, really complex solution, organizational solution. Usually it fits for really complex organization, really big organization with really complex strategies but at the same time with huge need for efficiency due to their dimension. What are the main advantages of this solution? Well, the first advantage I think is that integrates both efficiency and flexibility. Due to the fact that it embedded, naturally embedded both the dimension, the functional dimension and the divisional dimension, both the main goals of the two basic organizational solutions are, if you want, reflected in this matrix solution. Therefore if you want there is a mix of efficiency and effectiveness that can be rich adopting this solution. But at the same time there are some problems related with the adoption of this matrix solution. Some disadvantages, if you want, some cons of this solution. First of all, we cannot assume that within this solution conflicts will disappear at all. Actually the two boss manager will be completely exposed from different tension coming from the functional dimension and the divisional dimension. And not always will be able, they will be able to mediate these different tensions. So for sure there will be some levels of conflicts around the role of the two boss managers. Secondly, therefore there will be a large need for coordination. 
We cannot discuss that there are different highlands, different teams under each two boss manager, but the different teams should coordinate themselves in some way. So there is not only the need for coordination at the two boss manager level, but also between different teams. And this is very important. We will ask for the introduction of different coordination mechanism which will cost in terms of organizational solutions. And last but not least, the presence of two boss managers implies also that there could be the case for not having a perfect equilibrium between authority and responsibility. Because the dimension for who is in charge of accomplishing different orders, obtaining different goals, it's really clear, the two boss manager, but who is in charge of the different goals. This is not really clear. Could be the two boss manager, but also the functional and the divisional director. So there could be some problems. Thank you very much as usual for your attention. And if you need more information about this lecture, please go to visit the website of Unid in the Tunnel University. Thank you very much.
|
91f146fa2fccaca17ae9ef202cf4166c
|
{
"intermediate": 0.259690523147583,
"beginner": 0.5160551071166992,
"expert": 0.22425436973571777
}
|
47,088
|
I have this ES script and I want to apply it to multiple documents whose IDs I can pass in. I also want to pass each document's routing ID - use _routing.
document example:
{
"_index" : "v_dev_dataset_document_tags_index",
"_type" : "_doc",
"_id" : "ZhxNyI4BWFZ4mUbq-rFX",
"_score" : 0.0,
"_routing" : "6b576762832bcb86c5fef8f8a26dc3494470a262",
"_source" : {
"comment" : "",
"sentiment" : "neutre",
"theme" : [
"B",
"C"
],
"creation_date" : "2024-04-10T08:58:57",
"thesaurus" : "thesaurus_centralized",
"current" : true,
"identifier" : "lili",
"status" : "to_review",
"document_relations" : {
"parent" : "6b576762832bcb86c5fef8f8a26dc3494470a262",
"name" : "tag"
},
"positions" : [ ]
  }
}
the script:
"script": {
"source": """
// Convert the theme array to a HashSet for efficient manipulation
HashSet existingThemes = new HashSet(ctx._source.theme);
// List of themes to add and remove, passed as parameters
List themesToAdd = params.themesToAdd;
List themesToRemove = params.themesToRemove;
// Remove specified themes
for (String themeToRemove : themesToRemove) {
existingThemes.remove(themeToRemove);
}
// Add specified themes
for (String themeToAdd : themesToAdd) {
existingThemes.add(themeToAdd);
}
// Convert the HashSet back to a List for compatibility with Elasticsearch
ctx._source.theme = new ArrayList(existingThemes);
""",
"lang": "painless",
"params": {
"themesToAdd": self.tag.add,
"themesToRemove": self.tag.remove,
},
},
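A sketch of applying the script above to several documents while passing each one's routing value, assuming the official elasticsearch Python client (7.x-style body= calls) and that doc_refs is a list of (doc_id, routing) pairs you already have; the raw equivalent is one POST /<index>/_update/<id>?routing=<routing> per document, or a single _bulk request of update actions:
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # illustrative connection details

script = {
    "source": "...",  # paste the same Painless source shown above
    "lang": "painless",
    "params": {"themesToAdd": ["E", "F"], "themesToRemove": ["A"]},
}

doc_refs = [
    ("ZhxNyI4BWFZ4mUbq-rFX", "6b576762832bcb86c5fef8f8a26dc3494470a262"),
    # ... more (doc_id, routing) pairs ...
]

for doc_id, routing in doc_refs:
    es.update(
        index="v_dev_dataset_document_tags_index",
        id=doc_id,
        routing=routing,  # same value as the document's _routing field
        body={"script": script},
    )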
|
dd295f4c7955d4ae4c862965f90da694
|
{
"intermediate": 0.39954787492752075,
"beginner": 0.35725921392440796,
"expert": 0.2431928664445877
}
|