row_id
int64 0
48.4k
| init_message
stringlengths 1
342k
| conversation_hash
stringlengths 32
32
| scores
dict |
|---|---|---|---|
48,092
|
\begin{figure*}[!htb]
\centering
\subfloat[Hazy image]{\includegraphics[width=0.8in]{twopick_hazy.png}%
\label{fig_first_case}}
\hfil
\subfloat[DCP]{\includegraphics[width=0.8in]{twopick_DCP.png}%
\label{fig_second_case}}
\hfil
\subfloat[CAP]{\includegraphics[width=0.8in]{twopick_CAP.png}%
\label{fig_second_case}}
\hfil
\subfloat[NLD]{\includegraphics[width=0.8in]{twopick_NLD.png}%
\label{fig_second_case}}
\hfil
\subfloat[SLP]{\includegraphics[width=0.8in]{twopick_SLP.png}%
\label{fig_second_case}}
\hfil
\subfloat[DehazeNet]{\includegraphics[width=0.8in]{twopick_dehazenet.png}%
\label{fig_second_case}}
\hfil
\subfloat[AOD-Net]{\includegraphics[width=0.8in]{twopick_AOD.png}%
\label{fig_second_case}}
\hfil
\subfloat[FFA-Net]{\includegraphics[width=0.8in]{twopick_FFA.png}%
\label{fig_second_case}}
\hfil
\subfloat[Our method]{\includegraphics[width=0.8in]{twopick_my.png}%
\label{fig_second_case}}
\caption{Dehazing results of classical images without clear image reference. The lower part show local details of image.}
\label{fig7}
\end{figure*}帮我把这段latex改为单栏的两行三列的图像分布
|
e6486be81f7431517ab343279fa1da01
|
{
"intermediate": 0.3029104769229889,
"beginner": 0.3791201412677765,
"expert": 0.3179693818092346
}
|
48,093
|
Make ascii art for the letter m
|
f9a557d775aab0ca8c5d281626f7dd16
|
{
"intermediate": 0.41269373893737793,
"beginner": 0.3237380385398865,
"expert": 0.2635682225227356
}
|
48,094
|
What’s the difference between headers payload and form data?
|
f78be9fe8cc3917d299cf690dcdddd32
|
{
"intermediate": 0.41577795147895813,
"beginner": 0.31949350237846375,
"expert": 0.2647285759449005
}
|
48,095
|
I have this react custom input:
import { TextInput } from "@mantine/core";
import { useState } from "react";
const [_value, setValue] = useState<any>(0);
const handleInputChange = (value: string) => {
let newValue : number | string = value.replace(/\D/g, '');
newValue = newValue === '' ? '0' : newValue;
newValue = parseInt(newValue);
newValue = Math.max(0, Math.min(0, 100));
setValue(newValue)
};
const TableInput = () => {
return(
<TextInput
defaultValue={0}
type="number"
value={_value}
onChange={() => handleInputChange}
/>
)
}
export default TableInput;
And I want to import it to page but also to call some other function onChange like this:
<TableInput
onChange = {(event:any) => {handleInputChange(index, parseInt(event.currentTarget.value))} }
/>
How to do it?
|
705f4a70df332d2694b2d1455fec5abb
|
{
"intermediate": 0.39734160900115967,
"beginner": 0.4826660752296448,
"expert": 0.11999238282442093
}
|
48,096
|
I am performing the kaggle competition :""
|
b152acb4e70edb573d6241d59a821bf2
|
{
"intermediate": 0.24844972789287567,
"beginner": 0.19325228035449982,
"expert": 0.5582979917526245
}
|
48,097
|
I am doing the Kaggle competition "Titanic - Machine Learning from Disaster", my code:
"# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
!pip install tensorflow
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train_data = pd.read_csv("/kaggle/input/titanic/train.csv")
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
train_data.head(100)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
features = ['PassengerId', 'Survived', 'Pclass','Name','Sex','Age','SibSp','Parch','Ticket','Fare','Cabin','Embarked']
X = pd.get_dummies(train_data[features])
y = train_data['Survived']
# Splitting the training data for validation
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
# It's generally a good idea to scale your data for neural networks
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_val_scaled = scaler.transform(X_val)
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
model = Sequential([
Dense(32, activation='relu', input_shape=(X_train_scaled.shape[1],)),
Dropout(0.5),
Dense(32, activation='relu'),
Dropout(0.5),
Dense(32, activation='relu'),
Dropout(0.5),
Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train_scaled, y_train, epochs=100, validation_data=(X_val_scaled, y_val),
verbose=1, batch_size=2)
# Scale the test data
X_test = pd.get_dummies(test_data[features])
X_test_scaled = scaler.transform(X_test)
# Prediction
test_predictions = model.predict(X_test_scaled)
test_predictions = (test_predictions > 0.5).astype(int).reshape(-1)
# Preparing for submission
output = pd.DataFrame({'PassengerId': test_data.PassengerId, 'Survived': test_predictions})
output.to_csv('neural_network_submission.csv', index=False)
print("Your neural network submission was successfully saved!")
"
However:Epoch 1/100
/opt/conda/lib/python3.10/site-packages/keras/src/layers/core/dense.py:86: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
the output of the layer (its "activation").
92/356 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.6802 - loss: nan
W0000 00:00:1714140438.775364 112 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update
336/356 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.6417 - loss: nan
W0000 00:00:1714140439.695673 113 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update
|
0a22f5d83af19fdf7de7a0834ab9d41f
|
{
"intermediate": 0.4611983299255371,
"beginner": 0.21999861299991608,
"expert": 0.3188031017780304
}
|
48,098
|
I am doing the Kaggle competition “Titanic - Machine Learning from Disaster”, my code:
“# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here’s several helpful packages to load
!pip install tensorflow
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only “…/input/” directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk(‘/kaggle/input’):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using “Save & Run All”
# You can also write temporary files to /kaggle/temp/, but they won’t be saved outside of the current session
train_data = pd.read_csv(”/kaggle/input/titanic/train.csv")
test_data = pd.read_csv(“/kaggle/input/titanic/test.csv”)
train_data.head(100)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
features = [‘PassengerId’, ‘Survived’, ‘Pclass’,‘Name’,‘Sex’,‘Age’,‘SibSp’,‘Parch’,‘Ticket’,‘Fare’,‘Cabin’,‘Embarked’]
X = pd.get_dummies(train_data[features])
y = train_data[‘Survived’]
# Splitting the training data for validation
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
# It’s generally a good idea to scale your data for neural networks
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_val_scaled = scaler.transform(X_val)
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
model = Sequential([
Dense(32, activation=‘relu’, input_shape=(X_train_scaled.shape[1],)),
Dropout(0.5),
Dense(32, activation=‘relu’),
Dropout(0.5),
Dense(32, activation=‘relu’),
Dropout(0.5),
Dense(1, activation=‘sigmoid’)
])
model.compile(optimizer=‘adam’,
loss=‘binary_crossentropy’,
metrics=[‘accuracy’])
history = model.fit(X_train_scaled, y_train, epochs=100, validation_data=(X_val_scaled, y_val),
verbose=1, batch_size=2)
# Scale the test data
X_test = pd.get_dummies(test_data[features])
X_test_scaled = scaler.transform(X_test)
# Prediction
test_predictions = model.predict(X_test_scaled)
test_predictions = (test_predictions > 0.5).astype(int).reshape(-1)
# Preparing for submission
output = pd.DataFrame({‘PassengerId’: test_data.PassengerId, ‘Survived’: test_predictions})
output.to_csv(‘neural_network_submission.csv’, index=False)
print(“Your neural network submission was successfully saved!”)
"
However:Epoch 1/100
/opt/conda/lib/python3.10/site-packages/keras/src/layers/core/dense.py:86: UserWarning: Do not pass an input_shape/input_dim argument to a layer. When using Sequential models, prefer using an Input(shape) object as the first layer in the model instead.
the output of the layer (its “activation”).
92/356 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.6802 - loss: nan
W0000 00:00:1714140438.775364 112 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update
336/356 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - accuracy: 0.6417 - loss: nan
W0000 00:00:1714140439.695673 113 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update
|
f96923504ecec7bd70d52eb0f2db8b83
|
{
"intermediate": 0.44246071577072144,
"beginner": 0.23457184433937073,
"expert": 0.32296743988990784
}
|
48,099
|
following the list of country codes: AF,AL,DZ,AS,AD,AO,AI,AG,AR,AM,AW,AU,AT,#CYRL,BS,BH,BD,BB,BY,BE,BZ,BJ,BM,BT,BO,#CYRL,BW,BR,IO,VG,BN,BG,#ADLM,BI,KH,CM,CA,IC,CV,BQ,KY,CF,EA,TD,CL,CN,CX,CC,CO,KM,CG,CD,CK,CR,HR,CU,CW,CY,CZ,CI,DI,DK,DG,DJ,DM,DO,EC,EG,SV,GQ,ER,EE,SZ,ET,150,FK,FO,FJ,FI,FR,GF,PF,GA,GM,GE,DE,GH,GI,GR,GL,GD,GP,GU,GT,GG,#ADLM,#ADLM,GY,HT,HN,HK,HU,IS,IN,ID,IR,IQ,IE,IM,IL,IT,JM,JP,JE,JO,KZ,KE,KI,XK,KW,KG,LA,419,LV,LB,LS,LR,LY,LI,LT,LU,MO,MG,MW,MY,MV,ML,MT,MH,MQ,MR,MU,YT,MX,FM,MD,MC,MN,#CYRL,MS,MA,MZ,MM,NA,NR,NP,NL,NC,NZ,NI,NE,NG,NU,NF,KP,MK,MP,NO,OM,PK,PW,PS,PA,PG,PY,PE,PH,PN,PL,PT,XA,XB,PR,QA,RO,RU,RW,RE,WS,SM,SA,SN,#CYRL,SC,SL,SP,SG,SX,SK,SI,SB,SO,ZA,KR,SS,ES,LK,BL,SH,KN,LC,MF,PM,VC,SD,SR,SJ,SE,CH,SY,ST,#HANT,TJ,TZ,TH,TL,TG,TK,TO,TT,TN,TM,TC,TV,TR,UM,VI,UG,UA,AE,GB,US,UY,#CYRL,VU,VA,VE,VN,WF,EH,YE,ZM,ZG,ZW,001,AX
I need a json array containing as key the country code and with two values groups containing regex and a mask following the rules on this page https://github.com/RedMadRobot/input-mask-android/wiki/Mask-Syntax%3A-Basics for vat numbers and zip codes
|
4f29b9e8773f6192e553c9d7d3a74c1b
|
{
"intermediate": 0.21980752050876617,
"beginner": 0.5366187691688538,
"expert": 0.24357371032238007
}
|
48,100
|
I am doing the Kaggle competition “Titanic - Machine Learning from Disaster”, my code:
"# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train_data = pd.read_csv("/kaggle/input/titanic/train.csv")
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
train_data.head(100)
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
# Imputing Age
age_imputer = SimpleImputer(strategy='median')
train_data['Age'] = age_imputer.fit_transform(train_data[['Age']])
test_data['Age'] = age_imputer.transform(test_data[['Age']])
# Assuming Fare missing values can be filled with -1 (or you could use mean or median)
fare_imputer = SimpleImputer(strategy='median')
train_data['Fare'] = fare_imputer.fit_transform(train_data[['Fare']])
test_data['Fare'] = fare_imputer.transform(test_data[['Fare']])
features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare","Embarked","PassengerId"]
X = pd.get_dummies(train_data[features])
X_test = pd.get_dummies(test_data[features])
y = train_data["Survived"]
model = RandomForestClassifier(n_estimators=200, max_depth=200, random_state=10)
model.fit(X, y)
predictions = model.predict(X_test)
train_accuracy = model.score(X, y)
print(f"Training Accuracy: {train_accuracy:.4f}")
output = pd.DataFrame({'PassengerId': test_data.PassengerId, 'Survived': predictions})
output.to_csv('submission.csv', index=False)
print("Your submission was successfully saved!")
"
I want to change the model to neural network, show code.
|
1fed2f64247133c05d8ed2e915a0b23f
|
{
"intermediate": 0.5781989693641663,
"beginner": 0.15743790566921234,
"expert": 0.2643630802631378
}
|
48,101
|
convert this pinescript to python script and use talib : // This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
//@version=5
indicator(title='Twin Range Filter', overlay=true, timeframe='')
source = input(defval=close, title='Source')
showsignals = input(title='Show Buy/Sell Signals ?', defval=true)
per1 = input.int(defval=27, minval=1, title='Fast period')
mult1 = input.float(defval=1.6, minval=0.1, title='Fast range')
per2 = input.int(defval=55, minval=1, title='Slow period')
mult2 = input.float(defval=2, minval=0.1, title='Slow range')
smoothrng(x, t, m) =>
wper = t * 2 - 1
avrng = ta.ema(math.abs(x - x[1]), t)
smoothrng = ta.ema(avrng, wper) * m
smoothrng
smrng1 = smoothrng(source, per1, mult1)
smrng2 = smoothrng(source, per2, mult2)
smrng = (smrng1 + smrng2) / 2
rngfilt(x, r) =>
rngfilt = x
rngfilt := x > nz(rngfilt[1]) ? x - r < nz(rngfilt[1]) ? nz(rngfilt[1]) : x - r : x + r > nz(rngfilt[1]) ? nz(rngfilt[1]) : x + r
rngfilt
filt = rngfilt(source, smrng)
upward = 0.0
upward := filt > filt[1] ? nz(upward[1]) + 1 : filt < filt[1] ? 0 : nz(upward[1])
downward = 0.0
downward := filt < filt[1] ? nz(downward[1]) + 1 : filt > filt[1] ? 0 : nz(downward[1])
STR = filt + smrng
STS = filt - smrng
FUB = 0.0
FUB := STR < nz(FUB[1]) or close[1] > nz(FUB[1]) ? STR : nz(FUB[1])
FLB = 0.0
FLB := STS > nz(FLB[1]) or close[1] < nz(FLB[1]) ? STS : nz(FLB[1])
TRF = 0.0
TRF := nz(TRF[1]) == FUB[1] and close <= FUB ? FUB : nz(TRF[1]) == FUB[1] and close >= FUB ? FLB : nz(TRF[1]) == FLB[1] and close >= FLB ? FLB : nz(TRF[1]) == FLB[1] and close <= FLB ? FUB : FUB
long = ta.crossover(close, TRF)
short = ta.crossunder(close, TRF)
plotshape(showsignals and long, title='Long', text='BUY', style=shape.labelup, textcolor=color.white, size=size.tiny, location=location.belowbar, color=color.rgb(0, 19, 230))
plotshape(showsignals and short, title='Short', text='SELL', style=shape.labeldown, textcolor=color.white, size=size.tiny, location=location.abovebar, color=color.rgb(0, 19, 230))
alertcondition(long, title='Long', message='Long')
alertcondition(short, title='Short', message='Short')
Trfff = plot(TRF)
mPlot = plot(ohlc4, title='', style=plot.style_circles, linewidth=0)
longFillColor = close > TRF ? color.green : na
shortFillColor = close < TRF ? color.red : na
fill(mPlot, Trfff, title='UpTrend Highligter', color=longFillColor, transp=90)
fill(mPlot, Trfff, title='DownTrend Highligter', color=shortFillColor, transp=90)
|
0af9a5e7e531fa94763f33b5941f22d5
|
{
"intermediate": 0.3307849168777466,
"beginner": 0.3821905851364136,
"expert": 0.28702446818351746
}
|
48,102
|
def CheckCarAvailability(instructions,summary):
flag=False
dates=[]
car=instructions[1]
start=datetime.strptime(instructions[2],"%Y-%m-%d")
print(start)
start = datetime.strptime(instructions[2], "%Y-%m-%d")
duration = int(instructions[3])
end = start + timedelta(days=duration-1)
for _, value in summary.items():
if value[0]==car:
date = datetime.strptime(value[1], "%Y-%m-%d")
dateend = datetime.strptime(value[1], "%Y-%m-%d") + timedelta(days=int(value[4]) - 1)
if start<date<end or start<dateend<end:
flag=True
if start<date<dateend<end:
for _ in range(value[4]):
dates.append(date)
date+=1
elif date<start<dateend<end:
for _ in range(dateend-start+1):
dates.append(start)
start+=1
elif start<date<end<dateend:
for _ in range(end-date+1):
dates.append(date)
date+=1
if flag==False:
print(f"The Car {car} is available in this period.")
else:
print(f"The Car {car} is occupied in this period.")
dates=sorted(dates)
for date in dates:
print(date)
|
d1cf43f4c05f3fe6f6bf31c7b0fba98e
|
{
"intermediate": 0.22443349659442902,
"beginner": 0.5122814178466797,
"expert": 0.2632850110530853
}
|
48,103
|
у мене є основна головна сторінка,нижче код<?php
require_once 'include/db.php';
$mysqli = new mysqli('localhost', 'root', '17020575', 'тур');
if ($mysqli->connect_error) {
die('Помилка з\'єднання: ' . $mysqli->connect_error);
}
?>
<!DOCTYPE html>
<html lang="ua">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title> Empire of Travel</title> <style>
body {
margin: 0;
font-family: Arial, sans-serif;
}
header {
background-color: #66CDAA;
padding: 15px;
color: #fff;
display: flex;
justify-content: space-between;
align-items: center;
}
header img{
height: 50px;
width: 50px;
}
nav {
display: flex;
}
nav ul {
list-style: none;
margin: 0;
padding: 0;
display: flex;
}
nav li {
margin-right: 20px;
color: #933;
}
section.hero {
height: 500px;
width: 100%;
background-image: url('https://wide-w.com/wp-content/uploads/2019/01/gora-jeverest.jpg');
background-size: cover;
background-position: center;
display: flex;
justify-content: center;
align-items: center;
color: #ffffff;
text-align: center;
}
section {
padding: 20px;
}
section#about {
border: 2px solid #333;
background-color: #00FA9A;
padding: 20px;
margin: 20px 0;
overflow: hidden;
}
section#about img {
float: left;
margin-right: 20px;
max-width: 300px;
}
section#about p {
margin: 0 0 20px 0;
}
section#best-tour {
text-align: center;
}
section#best-tour h2 {
margin-bottom: 20px;
}
section#best-tour .feature {
display: inline-block;
margin: 0 20px 20px 0;
border: 10px solid #F0E68C;
padding: 10px;
width: calc((100% - 60px) / 3);
box-sizing: border-box;
}
section#best-tour img {
width: 80px;
height: 80px;
border-radius: 50%;
margin-bottom: 10px;
display: block;
margin-left: auto;
margin-right: auto;
}
.hottourscontainer {
max-width: 1200px;
margin: 0 auto;
display: flex;
flex-wrap: wrap;
align-items: center;
justify-content: space-around;
padding: 20px;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}
.hottourscontainer img {
max-width: 200px;
margin-right: 20px;
border-radius: 10px;
margin-bottom: 20px;
}
.hottoursinfo {
flex-grow: 1;
width: 70%;
}
.tourtitle {
font-size: 24px;
font-weight: bold;
margin-bottom: 10px;
}
.tourdescription {
font-size: 16px;
margin-bottom: 10px;
}
.tourprice {
font-size: 18px;
font-weight: bold;
color: green;
border: 2px solid #eee;
padding: 10px;
display: inline-block;
}
.hottourstitle {
text-align: center;
font-size: 32px;
margin-bottom: 40px;
}
section#route-example {
text-align: center;
background-color: #FFF8DC;
}
section#route-example .route-block {
display: flex;
justify-content: center;
align-items: center;
margin-bottom: 40px;
border: 2px solid #DCDCDC;
padding: 10px;
margin-bottom: 10px;
}
section#route-example .route-block img{
width: 500px;
height: 400px;
}
section#route-example .route-block p {
width: 48%;
margin: 0 2%;
box-sizing: border-box;
}
footer {
background-color: #333;
color: #fff;
text-align: center;
padding: 10px;
position: fixed;
width: 100%;
bottom: 0;
}
#book-tour {
background-color: #F0FFF0;
text-align: center;
padding: 50px 0;
}
.book-button-container {
display: inline-block;
}
.book-button {
background-color: #4CAF50;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
font-size: 16px;
margin: 4px 2px;
cursor: pointer;
border-radius: 5px;
}
.book-button:hover {
background-color: #45a049;
}
#testimonials {
text-align: center;
padding: 50px;
}
.testimonial-container {
display: flex;
justify-content: space-around;
flex-wrap: wrap;
gap: 20px;
}
.testimonial {
background-color: #ffffff;
border: 1px solid #eaeaea;
padding: 20px;
border-radius: 5px;
box-shadow: 0px 2px 4px rgba(0, 0, 0, 0.1);
flex-basis: calc(30% - 40px);
margin: 10px;
flex-grow: 1;
}
.testimonial blockquote {
font-style: italic;
color: #555;
}
.testimonial-author, .tour-date {
font-weight: bold;
font-size: 0.9em;
color: #333;
text-align: right;
margin-top: 15px;
}
#map {
margin: 20px;
padding: 20px;
}
#map h2 {
text-align: center;
margin-bottom: 20px;
}
#map iframe {
width: 100%;
}
#contacts {
background-color: #f8f8f8;
padding: 50px 0;
}
#contacts .container {
max-width: 1200px;
margin: 0 auto;
padding: 0 15px;
}
#contacts h2 {
text-align: center;
margin-bottom: 15px;
#contacts p {
text-align: center;
margin: 10px 0;
font-size: 1rem;
}
#contacts a {
color: #007bff;
text-decoration: none;
}
#contacts a:hover {
text-decoration: underline;
}
footer{
height: 25px;
}
</style>
</head>body>
<header>
<div>
<img src="logo.png" alt="Логотип">
</div>
<nav>
<ul>
<li><a href="#about">Про нас</a></li>
<li><a href="hotels.php">Готелі</a></li>
<li><a href="blog.php">Блог</a></li>
<li><a href="newpage.php">Оформити тур</a></li>
<li><a href="login.php">Admin</a></li>
</ul>
</nav>
</header> я хочу,щоб ти взяв головну сторінку з коду нижче та змінівши мій код <!DOCTYPE html>
<html class="wide wow-animation" lang="en">
<head>
<!--Site Title-->
<title>Home</title>
<meta charset="utf-8">
<meta name="format-detection" content="telephone=no">
<meta name="viewport" content="width=device-width, height=device-height, initial-scale=1.0, maximum-scale=1.0, user-scalable=0">
<!--Stylesheets -->
<link href="//fonts.googleapis.com/css?family=Pacifico%7CLato:400,100,100italic,300,300italic,700,400italic,900,700italic,900italic%7CMontserrat:400,700" rel="stylesheet" type="text/css">
<link rel="icon" href="images/favicon.ico" type="image/x-icon">
<!--Bootstrap -->
<link rel="stylesheet" href="css/bootstrap.css">
<link rel="stylesheet" href="css/fonts.css">
<link rel="stylesheet" href="css/style.css">
<style>.ie-panel{display: none;background: #212121;padding: 10px 0;box-shadow: 3px 3px 5px 0 rgba(0,0,0,.3);clear: both;text-align:center;position: relative;z-index: 1;} html.ie-10 .ie-panel, html.lt-ie-10 .ie-panel {display: block;}</style>
</head>
<body>
<div class="ie-panel"><a href="http://windows.microsoft.com/en-US/internet-explorer/"><img src="images/ie8-panel/warning_bar_0000_us.jpg" height="42" width="820" alt="You are using an outdated browser. For a faster, safer browsing experience, upgrade for free today."></a></div>
<div class="preloader">
<div class="preloader-body">
<div class="cssload-container">
<div class="cssload-speeding-wheel"> </div>
</div>
<p>Loading...</p>
</div>
</div>
<!--The Main Wrapper--><a class="section section-banner d-none d-xl-flex" href="https://www.templatemonster.com/website-templates/58434.html" style="background-image: url(images/banner/banner-1-bg-1600x60.jpg); background-image: -webkit-image-set( url(images/banner/banner-1-bg-1600x60.jpg) 1x, url(images/banner/banner-1-bg-3200x120.jpg) 2x )"> <img src="images/banner/banner-1-1600x60.png" srcset="images/banner/banner-1-1600x60.png 1x, images/banner/banner-1-3200x120.png 2x" alt=""></a>
<div class="page">
<!--
========================================================
HEADER
========================================================
-->
<header class="page-header">
<!--RD Navbar-->
<div class="rd-navbar-wrap">
<nav class="rd-navbar top-panel-none-items" data-layout="rd-navbar-fixed" data-sm-layout="rd-navbar-fixed" data-md-layout="rd-navbar-fixed" data-md-device-layout="rd-navbar-fixed" data-lg-layout="rd-navbar-fixed" data-lg-device-layout="rd-navbar-fixed" data-xl-layout="rd-navbar-static" data-xl-device-layout="rd-navbar-static" data-lg-stick-up-offset="46px" data-xl-stick-up-offset="46px" data-xxl-stick-up-offset="46px" data-lg-stick-up="true" data-xl-stick-up="true" data-xxl-stick-up="true">
<div class="rd-navbar-inner">
<!--RD Navbar Panel-->
<div class="rd-navbar-panel">
<!--RD Navbar Toggle-->
<button class="rd-navbar-toggle" data-rd-navbar-toggle=".rd-navbar"><span></span></button>
<!--END RD Navbar Toggle-->
<!--RD Navbar Brand-->
<div class="rd-navbar-brand"><a href="index.html"><img src="images/logo-default.png" alt=""></a></div>
<!--END RD Navbar Brand-->
</div>
<!--END RD Navbar Panel-->
<div class="rd-navbar-nav-wrap">
<!--RD Navbar Nav-->
<ul class="rd-navbar-nav">
<li class="active"><a href="./">Home</a></li>
<li><a href="about.html">About Us</a></li>
<li><a href="typography.html">Typography</a></li>
<li><a href="contact_us.html">Contact Us</a></li>
</ul>
<!--END RD Navbar Nav-->
</div>
</div>
</nav>
</div>
<!--END RD Navbar-->
<section>
<!--Swiper-->
<div class="swiper-container swiper-slider" data-autoplay="5000" data-slide-effect="fade" data-loop="false">
<div class="jumbotron text-center">
<h1><small>#1 Web Design Agency</small>Welcome to Modicate
</h1>
<p class="big">We create quality products <br> tailored to your needs and requirements.</p><div class='button-group-variant'> <a class='button button-default round-xl button-sm' href='https://www.templatemonster.com/website-templates/58434.html'>Buy Now</a> <a class='button button-default round-xl button-sm' href='#'>View Features</a></div>
</div>
<div class="swiper-wrapper">
<div class="swiper-slide" data-slide-bg="images/nature-landscape-mountains-4k_1551644850.jpg">
<div class="swiper-slide-caption"></div>
</div>
<div class="swiper-slide" data-slide-bg="images/12690-mountains-peak-snow-snowy-sky-clouds-4k.jpg">
<div class="swiper-slide-caption"></div>
</div>
<div class="swiper-slide" data-slide-bg="images/mountains-peaks-snow-covered-snow-wallpaper.jpg">
<div class="swiper-slide-caption"></div>
</div>
</div>
</div>
</section>
</header>
|
3b2d6aad9d296b505a9554d827a2ab9b
|
{
"intermediate": 0.3450203537940979,
"beginner": 0.5187655091285706,
"expert": 0.13621410727500916
}
|
48,104
|
есть ли тут код отвечающий за выборку от start char до далее по порядку идущих? #include "stdafx.h"
#include "CBFGDefs.h"
#include "Procs.h"
#include "FontMapClass.h"
#include "file-request.h"
#include "WriteFont.h"
#include <shellapi.h>
// Required includes for Win2000
#include "Ws2tcpip.h"
#include "Wspiapi.h"
extern HINSTANCE G_Inst;
extern HWND hMain;
extern BFontMap *Fnt;
extern AppInfo *info;
extern bool cfgChange;
int CALLBACK EnumFontMgr(ENUMLOGFONTEX *lpelfe,NEWTEXTMETRICEX *lpntme,int FontType,LPARAM lParam)
{
SendDlgItemMessage(hMain,CBO_FONTS,CB_ADDSTRING,0,(LPARAM)lpelfe->elfFullName);
return 1;
}
BOOL CALLBACK MainProc(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam)
{
HDC dc;
LOGFONT fDef;
char Text[256];
int RowDex,Index;
int tVal,BPPVal,Flags,RetVal;
SCROLLINFO scrInf;
string VerData,VerNum;
RECT rcArea;
HBRUSH hBr;
BFG_RGB ColVal;
CHOOSECOLOR SelCol;
static COLORREF CustCol[16]; // array of custom colors for color picker
switch(msg)
{
case WM_INITDIALOG:
SendMessage(hDlg,WM_SETICON,ICON_BIG,(LPARAM)LoadIcon(G_Inst,MAKEINTRESOURCE(MAKEINTRESOURCE(APP_ICON))));
SendDlgItemMessage(hDlg,CMD_UP,BM_SETIMAGE,IMAGE_ICON,(LPARAM)LoadIcon(G_Inst,MAKEINTRESOURCE(ICO_UP)));
SendDlgItemMessage(hDlg,CMD_DOWN,BM_SETIMAGE,IMAGE_ICON,(LPARAM)LoadIcon(G_Inst,MAKEINTRESOURCE(ICO_DOWN)));
SendDlgItemMessage(hDlg,CMD_RIGHT,BM_SETIMAGE,IMAGE_ICON,(LPARAM)LoadIcon(G_Inst,MAKEINTRESOURCE(ICO_RIGHT)));
SendDlgItemMessage(hDlg,CMD_LEFT,BM_SETIMAGE,IMAGE_ICON,(LPARAM)LoadIcon(G_Inst,MAKEINTRESOURCE(ICO_LEFT)));
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"16");
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"32");
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"64");
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"128");
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"256");
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"512");
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"1024");
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"2048");
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_ADDSTRING,0,(LPARAM)"4096");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"16");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"32");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"64");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"128");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"256");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"512");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"1024");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"2048");
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_ADDSTRING,0,(LPARAM)"4096");
tVal=Fnt->GetSize(MAPWIDTH);
if(tVal==32)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,1,0);
else if(tVal==64)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,2,0);
else if(tVal==128)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,3,0);
else if(tVal==256)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,4,0);
else if(tVal==512)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,5,0);
else if(tVal==1024)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,6,0);
else if(tVal==2048)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,7,0);
else if(tVal==4096)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,8,0);
else
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,0,0);
tVal=Fnt->GetSize(MAPHEIGHT);
if(tVal==32)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,1,0);
if(tVal==64)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,2,0);
if(tVal==128)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,3,0);
else if(tVal==256)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,4,0);
else if(tVal==512)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,5,0);
else if(tVal==1024)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,6,0);
else if(tVal==2048)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,7,0);
else if(tVal==4096)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,8,0);
else
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,0,0);
SendDlgItemMessage(hDlg,CBO_ALIAS,CB_ADDSTRING,0,(LPARAM)"None");
SendDlgItemMessage(hDlg,CBO_ALIAS,CB_ADDSTRING,0,(LPARAM)"Normal Anti-Alias");
SendDlgItemMessage(hDlg,CBO_ALIAS,CB_ADDSTRING,0,(LPARAM)"ClearType (WinXP Only)");
SendDlgItemMessage(hDlg,CBO_ALIAS,CB_SETCURSEL,0,0);
SendDlgItemMessage(hDlg,CBO_ZOOM,CB_ADDSTRING,0,(LPARAM)"25%");
SendDlgItemMessage(hDlg,CBO_ZOOM,CB_ADDSTRING,0,(LPARAM)"50%");
SendDlgItemMessage(hDlg,CBO_ZOOM,CB_ADDSTRING,0,(LPARAM)"100%");
SendDlgItemMessage(hDlg,CBO_ZOOM,CB_ADDSTRING,0,(LPARAM)"200%");
SendDlgItemMessage(hDlg,CBO_ZOOM,CB_ADDSTRING,0,(LPARAM)"400%");
SendDlgItemMessage(hDlg,CBO_ZOOM,CB_SETCURSEL,2,0);
wsprintf(Text,"%d",Fnt->GetSize(CELLWIDTH));
SendDlgItemMessage(hDlg,TXT_CELLWIDTH,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetSize(CELLHEIGHT));
SendDlgItemMessage(hDlg,TXT_CELLHEIGHT,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetFontWidth());
SendDlgItemMessage(hDlg,TXT_FONTWIDTH,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetFontHeight());
SendDlgItemMessage(hDlg,TXT_FONTHEIGHT,WM_SETTEXT,0,(LPARAM)Text);
SendDlgItemMessage(hDlg,SPN_CELLWIDTH,UDM_SETRANGE,0,MAKELONG(256,8));
SendDlgItemMessage(hDlg,SPN_CELLHEIGHT,UDM_SETRANGE,0,MAKELONG(256,8));
SendDlgItemMessage(hDlg,SPN_FONTHEIGHT,UDM_SETRANGE,0,MAKELONG(256,1));
SendDlgItemMessage(hDlg,SPN_FONTWIDTH,UDM_SETRANGE,0,MAKELONG(256,0));
SendDlgItemMessage(hDlg,SPN_WIDTH,UDM_SETRANGE,0,MAKELONG(100,-100));
SendDlgItemMessage(hDlg,SPN_START,UDM_SETRANGE,0,MAKELONG(254,0));
SendDlgItemMessage(hDlg,RAD_ALL,BM_SETCHECK,BST_CHECKED,0);
info->MaxChars=Fnt->GetSize(MAXCHARS);
PostMessage(hDlg,WM_APP,0,0);
return TRUE;
case WM_DRAWITEM:
if(wParam==ODR_FORECOL)
{
dc=((LPDRAWITEMSTRUCT)lParam)->hDC;
ColVal=Fnt->GetCol(TEXTCOL);
GetClientRect(hDlg, &rcArea);
hBr=CreateSolidBrush(RGB(ColVal.Red,ColVal.Green,ColVal.Blue));
FillRect(dc,&rcArea,hBr);
DeleteObject(hBr);
}
if(wParam==ODR_BACKCOL)
{
dc=((LPDRAWITEMSTRUCT)lParam)->hDC;
ColVal=Fnt->GetCol(BACKCOL);
GetClientRect(hDlg, &rcArea);
hBr=CreateSolidBrush(RGB(ColVal.Red,ColVal.Green,ColVal.Blue));
FillRect(dc,&rcArea,hBr);
DeleteObject(hBr);
}
CreateFontMap();
return TRUE;
case WM_APP:
SendDlgItemMessage(hDlg,CBO_FONTS,CB_RESETCONTENT,0,0);
fDef.lfCharSet=ANSI_CHARSET;
fDef.lfFaceName[0]=NULL;
fDef.lfPitchAndFamily=0;
dc=GetDC(hMain);
EnumFontFamiliesEx(dc,&fDef,(FONTENUMPROC)EnumFontMgr,0,0);
ReleaseDC(hMain,dc);
SendDlgItemMessage(hDlg,CBO_FONTS,CB_SETCURSEL,0,0);
SendDlgItemMessage(hDlg,CBO_FONTS,CB_GETLBTEXT,0,(LPARAM)Text);
Fnt->SetFontName(Text);
if(info->Grid)
{
SendDlgItemMessage(hMain,CHK_GRID,BM_SETCHECK,BST_CHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_SHOWGRID,MF_CHECKED);
}
else
{
SendDlgItemMessage(hMain,CHK_GRID,BM_SETCHECK,BST_UNCHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_SHOWGRID,MF_UNCHECKED);
}
if(info->wMarker)
{
SendDlgItemMessage(hDlg,CHK_WIDTH,BM_SETCHECK,BST_CHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_WIDTHMARKERS,MF_CHECKED);
}
else
{
SendDlgItemMessage(hDlg,CHK_WIDTH,BM_SETCHECK,BST_UNCHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_WIDTHMARKERS,MF_UNCHECKED);
}
EnableWindow(GetDlgItem(hMain,SCR_HOR),FALSE);
EnableWindow(GetDlgItem(hMain,SCR_VERT),FALSE);
Fnt->SetBaseChar(32);
wsprintf(Text,"%d",Fnt->GetBaseChar());
SendDlgItemMessage(hDlg,TXT_START,WM_SETTEXT,0,(LPARAM)Text);
SendMessage(hMain,WM_APP+1,0,0);
EnableWindow(GetDlgItem(hMain,TXT_WIDTH),FALSE);
EnableWindow(GetDlgItem(hMain,STA_WIDTH),FALSE);
CalcScroll();
CreateFontMap();
return FALSE;
case WM_APP+1: // Control Update
if(info->ModAll==TRUE)
{
wsprintf(Text,"%d",Fnt->GetGlobal(HOFFSET));
SendDlgItemMessage(hMain,TXT_XADJ,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetGlobal(VOFFSET));
SendDlgItemMessage(hMain,TXT_YADJ,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetGlobal(WIDTH));
SendDlgItemMessage(hMain,TXT_WADJ,WM_SETTEXT,0,(LPARAM)Text);
SendDlgItemMessage(hMain,TXT_WIDTH,WM_SETTEXT,0,(LPARAM)"");
}
else
{
wsprintf(Text,"%d",Fnt->GetCharVal(info->Select+Fnt->GetBaseChar(),HOFFSET));
SendDlgItemMessage(hMain,TXT_XADJ,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetCharVal(info->Select+Fnt->GetBaseChar(),VOFFSET));
SendDlgItemMessage(hMain,TXT_YADJ,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetCharVal(info->Select+Fnt->GetBaseChar(),WOFFSET));
SendDlgItemMessage(hMain,TXT_WADJ,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetCharVal(info->Select+Fnt->GetBaseChar(),EWIDTH));
SendDlgItemMessage(hMain,TXT_WIDTH,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"Adjust Selection (%d) Only",info->Select+Fnt->GetBaseChar());
SendDlgItemMessage(hMain,RAD_SEL,WM_SETTEXT,0,(LPARAM)Text);
}
return TRUE;
case WM_CLOSE:
case WM_DESTROY:
EndDialog(hDlg,0);
PostQuitMessage(0);
return TRUE;
case WM_HSCROLL:
{
switch(LOWORD(wParam))
{
case SB_THUMBTRACK:
SetScrollPos((HWND)lParam,SB_CTL,HIWORD(wParam),TRUE);
info->hScr=HIWORD(wParam);
CreateFontMap();
return 0;
case SB_LINELEFT:
if(info->hScroll==FALSE)
return 0;
info->hScr-=8;
if(info->hScr<0)
info->hScr=0;
SetScrollPos(GetDlgItem(hMain,SCR_HOR),SB_CTL,info->hScr,TRUE);
CreateFontMap();
return 0;
case SB_LINERIGHT:
if(info->hScroll==FALSE)
return 0;
info->hScr+=8;
scrInf.cbSize=sizeof(SCROLLINFO);
scrInf.fMask=SIF_RANGE;
GetScrollInfo(GetDlgItem(hMain,SCR_HOR),SB_CTL,&scrInf);
if(info->hScr>scrInf.nMax)
info->hScr=scrInf.nMax;
SetScrollPos(GetDlgItem(hMain,SCR_HOR),SB_CTL,info->hScr,TRUE);
CreateFontMap();
return 0;
case SB_PAGELEFT:
info->hScr-=24;
SetScrollPos(GetDlgItem(hMain,SCR_HOR),SB_CTL,info->hScr,TRUE);
CreateFontMap();
return 0;
case SB_PAGERIGHT:
info->hScr+=24;
SetScrollPos(GetDlgItem(hMain,SCR_HOR),SB_CTL,info->hScr,TRUE);
CreateFontMap();
return 0;
}
return FALSE;
}
case WM_VSCROLL:
{
switch(LOWORD(wParam))
{
case SB_THUMBTRACK:
SetScrollPos((HWND)lParam,SB_CTL,HIWORD(wParam),TRUE);
info->vScr=HIWORD(wParam);
CreateFontMap();
return 0;
case SB_LINEUP:
if(info->vScroll==FALSE)
return 0;
info->vScr-=8;
if(info->vScr<0)
info->vScr=0;
SetScrollPos(GetDlgItem(hMain,SCR_VERT),SB_CTL,info->vScr,TRUE);
CreateFontMap();
return 0;
case SB_LINEDOWN:
if(info->vScroll==FALSE)
return 0;
info->vScr+=8;
scrInf.cbSize=sizeof(SCROLLINFO);
scrInf.fMask=SIF_RANGE;
GetScrollInfo(GetDlgItem(hMain,SCR_VERT),SB_CTL,&scrInf);
if(info->vScr>scrInf.nMax)
info->vScr=scrInf.nMax;
SetScrollPos(GetDlgItem(hMain,SCR_VERT),SB_CTL,info->vScr,TRUE);
CreateFontMap();
return 0;
case SB_PAGEDOWN:
info->vScr+=24;
SetScrollPos(GetDlgItem(hMain,SCR_VERT),SB_CTL,info->vScr,TRUE);
CreateFontMap();
return 0;
case SB_PAGEUP:
info->vScr-=24;
SetScrollPos(GetDlgItem(hMain,SCR_VERT),SB_CTL,info->vScr,TRUE);
CreateFontMap();
return 0;
}
return FALSE;
}
case WM_NOTIFY:
{
NMUPDOWN *Hdr;
Hdr=(LPNMUPDOWN) lParam;
if(Hdr->hdr.code==UDN_DELTAPOS)
{
switch(Hdr->hdr.idFrom)
{
case SPN_CELLHEIGHT:
Fnt->SetSize(CELLHEIGHT,Hdr->iPos+Hdr->iDelta);
info->MaxChars=Fnt->GetSize(MAXCHARS);
info->Select=LimitSelection(info->Select,info->MaxChars);
CreateFontMap();
return 0;
case SPN_CELLWIDTH:
Fnt->SetSize(CELLWIDTH,Hdr->iPos+Hdr->iDelta);
info->MaxChars=Fnt->GetSize(MAXCHARS);
info->Select=LimitSelection(info->Select,info->MaxChars);
CreateFontMap();
return 0;
case SPN_FONTHEIGHT:
Fnt->SetFontHeight(Hdr->iPos+Hdr->iDelta);
CreateFontMap();
return 0;
case SPN_FONTWIDTH:
Fnt->SetFontWidth(Hdr->iPos+Hdr->iDelta);
CreateFontMap();
return 0;
case SPN_WIDTH:
if(info->ModAll)
{
Fnt->SetGlobal(WIDTH,Hdr->iPos+Hdr->iDelta);
CreateFontMap();
}
else
{
Fnt->SetCharVal(info->Select+Fnt->GetBaseChar(),WOFFSET,Hdr->iPos+Hdr->iDelta);
wsprintf(Text,"%d",Fnt->GetCharVal(info->Select+Fnt->GetBaseChar(),EWIDTH));
SendDlgItemMessage(hMain,TXT_WIDTH,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
}
return 0;
case SPN_START:
if(Hdr->iPos>0)
Fnt->SetBaseChar(Hdr->iPos+Hdr->iDelta);
if(Fnt->GetBaseChar()+info->Select>255)
info->Select=255-Fnt->GetBaseChar();
SendMessage(hMain,WM_APP+1,0,0);
CreateFontMap();
return 0;
}
}
return 0;
break;
}
case WM_COMMAND:
{
switch(LOWORD(wParam)) // Buttons & Menu items
{
case ID_COLOUR_SETTEXTCOLOUR:
case ODR_FORECOL:
ColVal=Fnt->GetCol(TEXTCOL);
SelCol.lStructSize=sizeof(CHOOSECOLOR);
SelCol.hwndOwner=hDlg;
SelCol.rgbResult=RGB(ColVal.Red,ColVal.Green,ColVal.Blue);
SelCol.lpCustColors=(LPDWORD)CustCol;
SelCol.Flags=CC_FULLOPEN | CC_RGBINIT | CC_ANYCOLOR;
if(ChooseColor(&SelCol))
Fnt->SetCol(TEXTCOL,GetRValue(SelCol.rgbResult),GetGValue(SelCol.rgbResult),GetBValue(SelCol.rgbResult));
InvalidateRgn(hDlg,NULL,NULL);
return TRUE;
case ID_COLOUR_SETBACKGROUNDCOLOUR:
case ODR_BACKCOL:
ColVal=Fnt->GetCol(BACKCOL);
SelCol.lStructSize=sizeof(CHOOSECOLOR);
SelCol.hwndOwner=hDlg;
SelCol.rgbResult=RGB(ColVal.Red,ColVal.Green,ColVal.Blue);
SelCol.lpCustColors=(LPDWORD)CustCol;
SelCol.Flags=CC_FULLOPEN | CC_RGBINIT | CC_ANYCOLOR;
if(ChooseColor(&SelCol))
Fnt->SetCol(BACKCOL,GetRValue(SelCol.rgbResult),GetGValue(SelCol.rgbResult),GetBValue(SelCol.rgbResult));
InvalidateRgn(hDlg,NULL,NULL);
return TRUE;
case ID_VIEW_SHOWGRID:
case CHK_GRID:
info->Grid^=1;
if(info->Grid)
{
SendDlgItemMessage(hMain,CHK_GRID,BM_SETCHECK,BST_CHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_SHOWGRID,MF_CHECKED);
}
else
{
SendDlgItemMessage(hMain,CHK_GRID,BM_SETCHECK,BST_UNCHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_SHOWGRID,MF_UNCHECKED);
}
CreateFontMap();
return TRUE;
case ID_VIEW_WIDTHMARKERS:
case CHK_WIDTH:
info->wMarker^=1;
if(info->wMarker)
{
SendDlgItemMessage(hMain,CHK_WIDTH,BM_SETCHECK,BST_CHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_WIDTHMARKERS,MF_CHECKED);
}
else
{
SendDlgItemMessage(hMain,CHK_WIDTH,BM_SETCHECK,BST_UNCHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_WIDTHMARKERS,MF_UNCHECKED);
}
CreateFontMap();
return TRUE;
case CHK_BOLD:
if(Fnt->GetFontWeight()==FW_NORMAL)
Fnt->SetFontWeight(FW_BOLD);
else
Fnt->SetFontWeight(FW_NORMAL);
CreateFontMap();
return TRUE;
case CHK_ITAL:
if(Fnt->GetFontItalic())
Fnt->SetFontItalic(FALSE);
else
Fnt->SetFontItalic(TRUE);
CreateFontMap();
return TRUE;
case CMD_LEFT:
if(info->ModAll)
{
tVal=Fnt->GetGlobal(HOFFSET);
Fnt->SetGlobal(HOFFSET,tVal-1);
SendMessage(hMain,WM_APP+1,0,0);
}
else
{
tVal=Fnt->GetCharVal(Fnt->GetBaseChar()+info->Select,HOFFSET);
Fnt->SetCharVal(Fnt->GetBaseChar()+info->Select,HOFFSET,tVal-1);
SendMessage(hMain,WM_APP+1,0,0);
}
CreateFontMap();
return TRUE;
case CMD_RIGHT:
if(info->ModAll)
{
tVal=Fnt->GetGlobal(HOFFSET);
Fnt->SetGlobal(HOFFSET,tVal+1);
SendMessage(hMain,WM_APP+1,0,0);
}
else
{
tVal=Fnt->GetCharVal(Fnt->GetBaseChar()+info->Select,HOFFSET);
Fnt->SetCharVal(Fnt->GetBaseChar()+info->Select,HOFFSET,tVal+1);
SendMessage(hMain,WM_APP+1,0,0);
}
CreateFontMap();
return TRUE;
case CMD_UP:
if(info->ModAll)
{
tVal=Fnt->GetGlobal(VOFFSET);
Fnt->SetGlobal(VOFFSET,tVal-1);
SendMessage(hMain,WM_APP+1,0,0);
}
else
{
tVal=Fnt->GetCharVal(Fnt->GetBaseChar()+info->Select,VOFFSET);
Fnt->SetCharVal(Fnt->GetBaseChar()+info->Select,VOFFSET,tVal-1);
SendMessage(hMain,WM_APP+1,0,0);
}
CreateFontMap();
return TRUE;
case CMD_DOWN:
if(info->ModAll)
{
tVal=Fnt->GetGlobal(VOFFSET);
Fnt->SetGlobal(VOFFSET,tVal+1);
SendMessage(hMain,WM_APP+1,0,0);
}
else
{
tVal=Fnt->GetCharVal(Fnt->GetBaseChar()+info->Select,VOFFSET);
Fnt->SetCharVal(Fnt->GetBaseChar()+info->Select,VOFFSET,tVal+1);
SendMessage(hMain,WM_APP+1,0,0);
}
CreateFontMap();
return TRUE;
case RAD_ALL:
info->ModAll=TRUE;
EnableWindow(GetDlgItem(hMain,TXT_WIDTH),FALSE);
EnableWindow(GetDlgItem(hMain,STA_WIDTH),FALSE);
SendDlgItemMessage(hMain,RAD_SEL,WM_SETTEXT,0,(LPARAM)"Adjust Selection Only");
SendMessage(hMain,WM_APP+1,0,0);
CreateFontMap();
return TRUE;
case RAD_SEL:
info->ModAll=FALSE;
SendMessage(hMain,WM_APP+1,0,0);
EnableWindow(GetDlgItem(hMain,TXT_WIDTH),TRUE);
EnableWindow(GetDlgItem(hMain,STA_WIDTH),TRUE);
wsprintf(Text,"Adjust Selection (%d) Only",info->Select+Fnt->GetBaseChar());
SendDlgItemMessage(hMain,RAD_SEL,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
return TRUE;
case ID_FILE_RESET:
Flags=Fnt->LoadConfig("bfg.cfg");
tVal=Fnt->GetSize(MAPWIDTH);
if(tVal==32)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,1,0);
else if(tVal==64)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,2,0);
else if(tVal==128)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,3,0);
else if(tVal==256)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,4,0);
else if(tVal==512)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,5,0);
else if(tVal==1024)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,6,0);
else if(tVal==2048)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,7,0);
else if(tVal==4096)
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,8,0);
else
SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_SETCURSEL,0,0);
tVal=Fnt->GetSize(MAPHEIGHT);
if(tVal==32)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,1,0);
else if(tVal==64)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,2,0);
else if(tVal==128)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,3,0);
else if(tVal==256)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,4,0);
else if(tVal==512)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,5,0);
else if(tVal==1024)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,6,0);
else if(tVal==2048)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,7,0);
else if(tVal==4096)
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,8,0);
else
SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_SETCURSEL,0,0);
wsprintf(Text,"%d",Fnt->GetSize(CELLHEIGHT));
SendDlgItemMessage(hMain,TXT_CELLHEIGHT,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetSize(CELLWIDTH));
SendDlgItemMessage(hMain,TXT_CELLWIDTH,WM_SETTEXT,0,(LPARAM)Text);
info->MaxChars=Fnt->GetSize(MAXCHARS);
info->hScr=0;
info->vScr=0;
info->Zoom=1.0f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,1,0);
if(Flags & SHOW_GRID)
{
info->Grid=true;
SendDlgItemMessage(hMain,CHK_GRID,BM_SETCHECK,BST_CHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_SHOWGRID,MF_CHECKED);
}
else
{
info->Grid=false;
SendDlgItemMessage(hMain,CHK_GRID,BM_SETCHECK,BST_UNCHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_SHOWGRID,MF_UNCHECKED);
}
if(Flags & SHOW_WIDTH)
{
info->wMarker=true;
SendDlgItemMessage(hDlg,CHK_WIDTH,BM_SETCHECK,BST_CHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_WIDTHMARKERS,MF_CHECKED);
}
else
{
info->wMarker=false;
SendDlgItemMessage(hDlg,CHK_WIDTH,BM_SETCHECK,BST_UNCHECKED,0);
CheckMenuItem(GetMenu(hMain),ID_VIEW_WIDTHMARKERS,MF_UNCHECKED);
}
SendDlgItemMessage(hMain,CBO_FONTS,CB_SETCURSEL,0,0);
SendDlgItemMessage(hDlg,CBO_FONTS,CB_GETLBTEXT,0,(LPARAM)Text);
Fnt->SetFontName(Text);
Fnt->SetBaseChar(32);
wsprintf(Text,"%d",32);
SendDlgItemMessage(hMain,TXT_START,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetFontHeight());
SendDlgItemMessage(hMain,TXT_FONTHEIGHT,WM_SETTEXT,0,(LPARAM)Text);
wsprintf(Text,"%d",Fnt->GetFontWidth());
SendDlgItemMessage(hMain,TXT_FONTWIDTH,WM_SETTEXT,0,(LPARAM)Text);
Fnt->SetFontWeight(FW_NORMAL);
SendDlgItemMessage(hMain,CHK_BOLD,BM_SETCHECK,BST_UNCHECKED,0);
Fnt->SetFontItalic(FALSE);
SendDlgItemMessage(hMain,CHK_ITAL,BM_SETCHECK,BST_UNCHECKED,0);
Fnt->SetFontQuality(NONANTIALIASED_QUALITY);
SendDlgItemMessage(hDlg,CBO_ALIAS,CB_SETCURSEL,0,0);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NONE,MF_CHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NORMAL,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_CLEARTYPE,MF_UNCHECKED);
Fnt->ResetOffsets();
info->ModAll=TRUE;
info->Select=0;
SendDlgItemMessage(hMain,RAD_ALL,BM_SETCHECK,BST_CHECKED,0);
SendDlgItemMessage(hMain,RAD_SEL,BM_SETCHECK,BST_UNCHECKED,0);
SendDlgItemMessage(hMain,RAD_SEL,WM_SETTEXT,0,(LPARAM)"Adjust Selection Only");
EnableWindow(GetDlgItem(hMain,TXT_WIDTH),FALSE);
EnableWindow(GetDlgItem(hMain,STA_WIDTH),FALSE);
SendMessage(hMain,WM_APP+1,0,0);
CreateFontMap();
return TRUE;
case ID_FILE_SAVEBFF:
lstrcpy(Text,Fnt->GetFontName());
lstrcat(Text,".bff");
if(GetTargetName(Text,"Save BFF","Bitmap Font Files (BFF)\0*.bff\0All Files\0*.*\0\0","bff"))
{
if(CheckOverwrite(Text))
{
tVal=DialogBox(G_Inst,MAKEINTRESOURCE(DLG_SAVEOPT),hMain,SaveOptProc);
if(tVal)
{
// Extract BPP and pre-processing flags from dialog return value
BPPVal=tVal & 0x3F;
switch(BPPVal)
{
case 8:
RetVal=Fnt->SaveFont(SAVE_BFF8,Text,tVal);
break;
case 24:
RetVal=Fnt->SaveFont(SAVE_BFF24,Text);
break;
case 32:
RetVal=Fnt->SaveFont(SAVE_BFF32,Text,tVal);
break;
}
if(RetVal)
MessageBox(hDlg,"Save Complete","File Operation",MB_OK);
else
MessageBox(hDlg,"Save Failed","Error",MB_OK | MB_ICONEXCLAMATION);
}
}
}
return TRUE;
case ID_IMPORT_FONTDATA:
Text[0]=NULL;
if(GetSourceName(Text,"Import Font Data","Font Data Files (CSV)\0*.csv\0All Files\0*.*\0\0","csv"))
{
if(Fnt->ImportData(Text))
{
// Set font face
wsprintf(Text,"%d",Fnt->GetFontName());
Index=SendDlgItemMessage(hMain,CBO_FONTS,CB_FINDSTRING,-1,(LPARAM)Text);
// Set Start Char
wsprintf(Text,"%d",Fnt->GetBaseChar());
SendDlgItemMessage(hMain,TXT_START,WM_SETTEXT,0,(LPARAM)Text);
// Set Bold Checkbox
if(Fnt->GetFontWeight()==FW_NORMAL)
SendDlgItemMessage(hMain,CHK_BOLD,BM_SETCHECK,BST_UNCHECKED,0);
else
SendDlgItemMessage(hMain,CHK_BOLD,BM_SETCHECK,BST_CHECKED,0);
// Set Italic Checkbox
if(Fnt->GetFontItalic())
SendDlgItemMessage(hMain,CHK_ITAL,BM_SETCHECK,BST_CHECKED,0);
else
SendDlgItemMessage(hMain,CHK_ITAL,BM_SETCHECK,BST_UNCHECKED,0);
CreateFontMap();
}
else
{
MessageBox(hDlg,"Import Failed","Error",MB_OK | MB_ICONEXCLAMATION);
}
}
return TRUE;
case ID_EXPORT_BITMAP:
lstrcpy(Text,"ExportedFont.bmp");
if(GetTargetName(Text,"Export BMP","Bitmap Images (BMP)\0*.bmp\0All Files\0*.*\0\0","bmp"))
{
if(CheckOverwrite(Text))
{
if(Fnt->ExportMap(Text,EXPORT_BMP)==SBM_OK)
MessageBox(hDlg,"Export Complete","BMP Export",MB_OK);
else
MessageBox(hDlg,"Export Failed","Error",MB_OK | MB_ICONEXCLAMATION);
}
}
return TRUE;
case ID_EXPORT_TARGA:
lstrcpy(Text,"ExportedFont.tga");
if(GetTargetName(Text,"Export TGA","Targa Images (TGA)\0*.tga\0All Files\0*.*\0\0","tga"))
{
if(CheckOverwrite(Text))
{
if(Fnt->ExportMap(Text,EXPORT_TGA)==SBM_OK)
MessageBox(hDlg,"Export Complete","TGA Export",MB_OK);
else
MessageBox(hDlg,"Export Failed","Error",MB_OK | MB_ICONEXCLAMATION);
}
}
return TRUE;
case ID_EXPORT_TARGA32:
lstrcpy(Text,"ExportedFont.tga");
if(GetTargetName(Text,"Export TGA","Targa Images (TGA)\0*.tga\0All Files\0*.*\0\0","tga"))
{
if(CheckOverwrite(Text))
{
if(Fnt->ExportMap(Text,EXPORT_TGA32)==SBM_OK)
MessageBox(hDlg,"Export Complete","TGA Export",MB_OK);
else
MessageBox(hDlg,"Export Failed","Error",MB_OK | MB_ICONEXCLAMATION);
}
}
return TRUE;
case ID_EXPORT_FONTDATA:
lstrcpy(Text,"FontData.csv");
if(GetTargetName(Text,"Export Font Data","Comma Separated Values (CSV)\0*.csv\0All Files\0*.*\0\0","csv"))
{
if(CheckOverwrite(Text))
{
if(Fnt->SaveFont(SAVE_CSV,Text))
MessageBox(hDlg,"Export Complete","Font Data Export",MB_OK);
else
MessageBox(hDlg,"Export Failed","Error",MB_OK | MB_ICONEXCLAMATION);
}
}
return TRUE;
case ID_EXPORT_BIN:
lstrcpy(Text,"FontData.dat");
if(GetTargetName(Text,"Export Binary Font Data","Binary Font Files (dat)\0*.dat\0All Files\0*.*\0\0","dat"))
{
if(CheckOverwrite(Text))
{
if(Fnt->SaveFont(SAVE_BIN,Text))
MessageBox(hDlg,"Export Complete","Font Data Export",MB_OK);
else
MessageBox(hDlg,"Export Failed","Error",MB_OK | MB_ICONEXCLAMATION);
}
}
return TRUE;
break;
case ID_FILE_EXIT:
EndDialog(hDlg,0);
PostQuitMessage(0);
return TRUE;
case ID_VIEW_ZOOMIN:
RowDex=SendDlgItemMessage(hMain,CBO_ZOOM,CB_GETCURSEL,0,0);
switch(RowDex)
{
case 0:
info->Zoom=0.5f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,1,0);
CalcScroll();
CreateFontMap();
return TRUE;
case 1:
info->Zoom=1.0f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,2,0);
CalcScroll();
CreateFontMap();
return TRUE;
case 2:
info->Zoom=2.0f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,3,0);
CalcScroll();
CreateFontMap();
return TRUE;
case 3:
info->Zoom=4.0f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,4,0);
CalcScroll();
CreateFontMap();
return TRUE;
}
return TRUE;
case ID_VIEW_ZOOMOUT:
RowDex=SendDlgItemMessage(hMain,CBO_ZOOM,CB_GETCURSEL,0,0);
switch(RowDex)
{
case 1:
info->Zoom=0.25f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,0,0);
CalcScroll();
CreateFontMap();
return TRUE;
case 2:
info->Zoom=0.5f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,1,0);
CalcScroll();
CreateFontMap();
return TRUE;
case 3:
info->Zoom=1.0f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,2,0);
CalcScroll();
CreateFontMap();
return TRUE;
case 4:
info->Zoom=2.0f;
SendDlgItemMessage(hMain,CBO_ZOOM,CB_SETCURSEL,3,0);
CalcScroll();
CreateFontMap();
return TRUE;
}
return TRUE;
case ID_ANTIALIAS_NONE:
SendDlgItemMessage(hDlg,CBO_ALIAS,CB_SETCURSEL,0,0);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NONE,MF_CHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NORMAL,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_CLEARTYPE,MF_UNCHECKED);
Fnt->SetFontQuality(NONANTIALIASED_QUALITY);
CreateFontMap();
return TRUE;
case ID_ANTIALIAS_NORMAL:
SendDlgItemMessage(hDlg,CBO_ALIAS,CB_SETCURSEL,1,0);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NONE,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NORMAL,MF_CHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_CLEARTYPE,MF_UNCHECKED);
Fnt->SetFontQuality(ANTIALIASED_QUALITY);
CreateFontMap();
return TRUE;
case ID_ANTIALIAS_CLEARTYPE:
SendDlgItemMessage(hDlg,CBO_ALIAS,CB_SETCURSEL,2,0);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NONE,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NORMAL,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_CLEARTYPE,MF_CHECKED);
Fnt->SetFontQuality(5); // CLEARTYPE_QUALITY;
CreateFontMap();
return TRUE;
case ID_TOOLS_PREVIEW:
DialogBox(G_Inst,MAKEINTRESOURCE(DLG_PREVIEW),hDlg,PreviewWinProc);
return TRUE;
case ID_TOOLS_CONFIGURATION:
DialogBox(G_Inst,MAKEINTRESOURCE(DLG_CONFIG),hDlg,ConfigWinProc);
info->MaxChars=Fnt->GetSize(MAXCHARS);
info->Select=LimitSelection(info->Select,info->MaxChars);
SendMessage(hMain,WM_APP+1,0,0);
InvalidateRgn(hDlg,NULL,NULL);
CreateFontMap();
return TRUE;
case ID_HELP_CONTENTS:
if((int)ShellExecute(hDlg,"open","CBFGHelp.chm",NULL,NULL,SW_SHOWMAXIMIZED)<32)
MessageBox(hDlg,"Unable to open Help file","Error",MB_OK | MB_ICONERROR);
return TRUE;
case ID_HELP_ABOUT:
DialogBox(G_Inst,MAKEINTRESOURCE(DLG_ABOUT),hMain,AboutProc);
return TRUE;
} // End Switch LOWORD(wParam)
switch(HIWORD(wParam)) // Notifications
{
case EN_KILLFOCUS:
switch(LOWORD(wParam))
{
case TXT_CELLWIDTH:
SendDlgItemMessage(hDlg,TXT_CELLWIDTH,WM_GETTEXT,256,(LPARAM)Text);
tVal=Fnt->SetSize(CELLWIDTH,atoi(Text));
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_CELLWIDTH,WM_SETTEXT,0,(LPARAM)Text);
info->MaxChars=Fnt->GetSize(MAXCHARS);
CreateFontMap();
return TRUE;
case TXT_CELLHEIGHT:
SendDlgItemMessage(hDlg,TXT_CELLHEIGHT,WM_GETTEXT,256,(LPARAM)Text);
tVal=Fnt->SetSize(CELLHEIGHT,atoi(Text));
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_CELLHEIGHT,WM_SETTEXT,0,(LPARAM)Text);
info->MaxChars=Fnt->GetSize(MAXCHARS);
CreateFontMap();
return TRUE;
case TXT_FONTWIDTH:
SendDlgItemMessage(hDlg,TXT_FONTWIDTH,WM_GETTEXT,256,(LPARAM)Text);
tVal=Fnt->SetFontWidth(atoi(Text));
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_FONTWIDTH,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
return TRUE;
case TXT_FONTHEIGHT:
SendDlgItemMessage(hDlg,TXT_FONTHEIGHT,WM_GETTEXT,256,(LPARAM)Text);
tVal=Fnt->SetFontHeight(atoi(Text));
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_FONTHEIGHT,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
return TRUE;
case TXT_START:
SendDlgItemMessage(hDlg,TXT_START,WM_GETTEXT,256,(LPARAM)Text);
tVal=Fnt->SetBaseChar(atoi(Text));
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_START,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
return TRUE;
case TXT_XADJ:
if(info->ModAll)
{
SendDlgItemMessage(hDlg,TXT_XADJ,WM_GETTEXT,256,(LPARAM)Text);
tVal=Limit(atoi(Text));
tVal=Fnt->SetGlobal(HOFFSET,tVal);
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_XADJ,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
}
else
{
SendDlgItemMessage(hDlg,TXT_XADJ,WM_GETTEXT,256,(LPARAM)Text);
tVal=Limit(atoi(Text));
tVal=Fnt->SetCharVal(info->Select+Fnt->GetBaseChar(),HOFFSET,tVal);
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_XADJ,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
}
case TXT_YADJ:
if(info->ModAll)
{
SendDlgItemMessage(hDlg,TXT_YADJ,WM_GETTEXT,256,(LPARAM)Text);
tVal=Limit(atoi(Text));
tVal=Fnt->SetGlobal(VOFFSET,tVal);
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_YADJ,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
}
else
{
SendDlgItemMessage(hDlg,TXT_YADJ,WM_GETTEXT,256,(LPARAM)Text);
tVal=Limit(atoi(Text));
tVal=Fnt->SetCharVal(info->Select+Fnt->GetBaseChar(),VOFFSET,tVal);
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_YADJ,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
}
case TXT_WADJ:
if(info->ModAll)
{
SendDlgItemMessage(hDlg,TXT_WADJ,WM_GETTEXT,256,(LPARAM)Text);
tVal=Limit(atoi(Text));
tVal=Fnt->SetGlobal(WIDTH,tVal);
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_WADJ,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
}
else
{
SendDlgItemMessage(hDlg,TXT_WADJ,WM_GETTEXT,256,(LPARAM)Text);
tVal=Limit(atoi(Text));
tVal=Fnt->SetCharVal(info->Select+Fnt->GetBaseChar(),WOFFSET,tVal);
wsprintf(Text,"%d",tVal);
SendDlgItemMessage(hDlg,TXT_WADJ,WM_SETTEXT,0,(LPARAM)Text);
CreateFontMap();
wsprintf(Text,"%d",Fnt->GetCharVal(info->Select+Fnt->GetBaseChar(),EWIDTH));
SendDlgItemMessage(hMain,TXT_WIDTH,WM_SETTEXT,0,(LPARAM)Text);
}
return TRUE;
}
return FALSE;
break; // End EN_KILLFOCUS
case CBN_SELCHANGE:
switch(LOWORD(wParam))
{
case CBO_FONTS:
RowDex=SendDlgItemMessage(hDlg,CBO_FONTS,CB_GETCURSEL,0,0);
SendDlgItemMessage(hDlg,CBO_FONTS,CB_GETLBTEXT,RowDex,(LPARAM)Text);
Fnt->SetFontName(Text);
CreateFontMap();
return TRUE;
case CBO_ALIAS:
RowDex=SendDlgItemMessage(hDlg,CBO_ALIAS,CB_GETCURSEL,0,0);
if(RowDex==0)
{
Fnt->SetFontQuality(NONANTIALIASED_QUALITY);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NONE,MF_CHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NORMAL,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_CLEARTYPE,MF_UNCHECKED);
}
else if(RowDex==1)
{
Fnt->SetFontQuality(ANTIALIASED_QUALITY);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NONE,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NORMAL,MF_CHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_CLEARTYPE,MF_UNCHECKED);
}
else if(RowDex==2)
{
Fnt->SetFontQuality(5); //CLEARTYPE_QUALITY;
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NONE,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_NORMAL,MF_UNCHECKED);
CheckMenuItem(GetMenu(hMain),ID_ANTIALIAS_CLEARTYPE,MF_CHECKED);
}
CreateFontMap();
return TRUE;
case CBO_IMGXRES:
RowDex=SendDlgItemMessage(hDlg,CBO_IMGXRES,CB_GETCURSEL,0,0);
if(RowDex==0)
Fnt->SetSize(MAPWIDTH,16);
else if(RowDex==1)
Fnt->SetSize(MAPWIDTH,32);
else if(RowDex==2)
Fnt->SetSize(MAPWIDTH,64);
else if(RowDex==3)
Fnt->SetSize(MAPWIDTH,128);
else if(RowDex==4)
Fnt->SetSize(MAPWIDTH,256);
else if(RowDex==5)
Fnt->SetSize(MAPWIDTH,512);
else if(RowDex==6)
Fnt->SetSize(MAPWIDTH,1024);
else if(RowDex==7)
Fnt->SetSize(MAPWIDTH,2048);
else if(RowDex==8)
Fnt->SetSize(MAPWIDTH,4096);
info->MaxChars=Fnt->GetSize(MAXCHARS);
CalcScroll();
CreateFontMap();
return TRUE;
case CBO_IMGYRES:
RowDex=SendDlgItemMessage(hDlg,CBO_IMGYRES,CB_GETCURSEL,0,0);
if(RowDex==0)
Fnt->SetSize(MAPHEIGHT,16);
else if(RowDex==1)
Fnt->SetSize(MAPHEIGHT,32);
else if(RowDex==2)
Fnt->SetSize(MAPHEIGHT,64);
else if(RowDex==3)
Fnt->SetSize(MAPHEIGHT,128);
else if(RowDex==4)
Fnt->SetSize(MAPHEIGHT,256);
else if(RowDex==5)
Fnt->SetSize(MAPHEIGHT,512);
else if(RowDex==6)
Fnt->SetSize(MAPHEIGHT,1024);
else if(RowDex==7)
Fnt->SetSize(MAPHEIGHT,2048);
else if(RowDex==8)
Fnt->SetSize(MAPHEIGHT,4096);
info->MaxChars=Fnt->GetSize(MAXCHARS);
CalcScroll();
CreateFontMap();
return TRUE;
case CBO_ZOOM:
RowDex=SendDlgItemMessage(hDlg,CBO_ZOOM,CB_GETCURSEL,0,0);
if(RowDex==0)
info->Zoom=0.25;
else if(RowDex==1)
info->Zoom=0.5f;
else if(RowDex==2)
info->Zoom=1.0f;
else if(RowDex==3)
info->Zoom=2.0f;
else if(RowDex==4)
info->Zoom=4.0f;
CalcScroll();
CreateFontMap();
return TRUE;
}
return FALSE;
break;
}
default:
return 0;
} // End WM_COMMAND
} // End Switch MSG
}// End FontProc
|
602c9054276bc3cc7210f86eb200cc05
|
{
"intermediate": 0.3828771710395813,
"beginner": 0.3663898706436157,
"expert": 0.25073301792144775
}
|
48,105
|
можно ли изменить этот код чтобы вместо прибавления к basechar по порядку символов оно использовало заранее заготовленную карту(порядок) символов? #include "stdafx.h"
#include <memory.h>
#include "FontMapClass.h"
#include "UtilFunctions.h"
BFontMap::BFontMap()
{
int loop;
BaseChar=32;
MapWidth=256;
MapHeight=256;
CellHeight=32;
CellWidth=32;
gHMod=0;
gVMod=0;
gWidthMod=0;
for(loop=0;loop!=256;loop++)
{
HMod[loop]=0;
VMod[loop]=0;
WidthMod[loop]=0;
}
fnt=NULL;
FntDef.lfHeight=20;
FntDef.lfWidth=0;
FntDef.lfEscapement=0;
FntDef.lfOrientation=0;
FntDef.lfWeight=FW_NORMAL;
FntDef.lfItalic=FALSE;
FntDef.lfUnderline=FALSE;
FntDef.lfStrikeOut=FALSE;
FntDef.lfCharSet=DEFAULT_CHARSET;
FntDef.lfOutPrecision=OUT_DEFAULT_PRECIS;
FntDef.lfClipPrecision=CLIP_DEFAULT_PRECIS;
FntDef.lfQuality=NONANTIALIASED_QUALITY;
FntDef.lfPitchAndFamily=DEFAULT_PITCH;
FntDef.lfFaceName[0]=NULL;
BkCol.Red=0;
BkCol.Green=0;
BkCol.Blue=0;
TextCol.Red=255;
TextCol.Green=255;
TextCol.Blue=255;
GridCol.Red=170;
GridCol.Green=0;
GridCol.Blue=170;
WidthCol.Red=170;
WidthCol.Green=170;
WidthCol.Blue=0;
SelCol.Red=0;
SelCol.Green=154;
SelCol.Blue=0;
}
BFontMap::~BFontMap()
{
DeleteObject(fnt);
}
int BFontMap::SetSize(int Which, int NewSize)
{
switch(Which)
{
case MAPWIDTH:
if(!IsPower(NewSize))
NewSize=256;
MapWidth=NewSize;
return MapWidth;
case MAPHEIGHT:
if(!IsPower(NewSize))
NewSize=256;
MapHeight=NewSize;
return MapHeight;
case CELLWIDTH:
if(NewSize<8)
CellWidth=8;
else if(NewSize>256)
CellWidth=256;
else
CellWidth=NewSize;
return CellWidth;
case CELLHEIGHT:
if(NewSize<8)
CellHeight=8;
else if(NewSize>256)
CellHeight=256;
else
CellHeight=NewSize;
return CellHeight;
}
return 0;
}
int BFontMap::GetSize(int Which)
{
switch(Which)
{
case MAPWIDTH:
return MapWidth;
case MAPHEIGHT:
return MapHeight;
case CELLWIDTH:
return CellWidth;
case CELLHEIGHT:
return CellHeight;
case MAXCHARS:
return (MapWidth/CellWidth)*(MapHeight/CellHeight);
}
return 0;
}
unsigned char BFontMap::SetBaseChar(int NewBase)
{
if(NewBase<0)
NewBase=0;
if(NewBase>255)
NewBase=255;
BaseChar=NewBase;
return BaseChar;
}
unsigned char BFontMap::GetBaseChar()
{
return BaseChar;
}
char BFontMap::SetGlobal(int Which, char Value)
{
switch(Which)
{
case VOFFSET:
gVMod=Value;
break;
case HOFFSET:
gHMod=Value;
break;
case WIDTH:
gWidthMod=Value;
break;
}
return Value;
}
char BFontMap::GetGlobal(int Which)
{
switch(Which)
{
case VOFFSET:
return gVMod;
case HOFFSET:
return gHMod;
case WIDTH:
return gWidthMod;
}
return 0;
}
char BFontMap::SetCharVal(int Char, int Which, char NewVal)
{
switch(Which)
{
case WOFFSET:
WidthMod[Char]=NewVal;
break;
case HOFFSET:
HMod[Char]=NewVal;
break;
case VOFFSET:
VMod[Char]=NewVal;
break;
}
return NewVal;
}
char BFontMap::GetCharVal(int Char, int Which)
{
switch(Which)
{
case WIDTH:
return BaseWidth[Char];
case HOFFSET:
return HMod[Char];
case VOFFSET:
return VMod[Char];
case WOFFSET:
return WidthMod[Char];
case EWIDTH:
return WidthMod[Char]+BaseWidth[Char]+gWidthMod;
}
return 0;
}
long BFontMap::SetFontHeight(long NewHeight)
{
if(NewHeight<1)
NewHeight=1;
if(NewHeight>256)
NewHeight=256;
FntDef.lfHeight=NewHeight;
return FntDef.lfHeight;
}
long BFontMap::GetFontHeight()
{
return FntDef.lfHeight;
}
long BFontMap::SetFontWidth(long NewWidth)
{
if(NewWidth<0)
NewWidth=0;
if(NewWidth>256)
NewWidth=256;
FntDef.lfWidth=NewWidth;
return FntDef.lfWidth;
}
long BFontMap::GetFontWidth()
{
return FntDef.lfWidth;
}
bool BFontMap::SetFontName(char* NewName)
{
if(lstrcpy(FntDef.lfFaceName,NewName))
return true;
else
return false;
}
char* BFontMap::GetFontName()
{
return FntDef.lfFaceName;
}
long BFontMap::SetFontWeight(long NewWeight)
{
FntDef.lfWeight=NewWeight;
return FntDef.lfWeight;
}
long BFontMap::GetFontWeight()
{
return FntDef.lfWeight;
}
long BFontMap::SetFontQuality(long NewQual)
{
FntDef.lfQuality=(BYTE)NewQual;
return FntDef.lfQuality;
}
long BFontMap::GetFontQuality()
{
return FntDef.lfQuality;
}
long BFontMap::SetFontItalic(long NewItal)
{
FntDef.lfItalic=(BYTE)NewItal;
return FntDef.lfItalic;
}
long BFontMap::GetFontItalic()
{
return FntDef.lfItalic;
}
void BFontMap::SetCol(int Which, BFG_RGB NewCol)
{
BFG_RGB *Tgt;
switch(Which)
{
case GRIDCOL:
Tgt=&GridCol;
break;
case WIDTHCOL:
Tgt=&WidthCol;
break;
case SELCOL:
Tgt=&SelCol;
break;
case TEXTCOL:
Tgt=&TextCol;
break;
case BACKCOL:
Tgt=&BkCol;
break;
default:
return;
}
Tgt->Red=NewCol.Red;
Tgt->Green=NewCol.Green;
Tgt->Blue=NewCol.Blue;
}
void BFontMap::SetCol(int Which, unsigned char Red, unsigned char Green, unsigned char Blue)
{
BFG_RGB *Tgt;
switch(Which)
{
case GRIDCOL:
Tgt=&GridCol;
break;
case WIDTHCOL:
Tgt=&WidthCol;
break;
case SELCOL:
Tgt=&SelCol;
break;
case TEXTCOL:
Tgt=&TextCol;
break;
case BACKCOL:
Tgt=&BkCol;
break;
default:
return;
}
Tgt->Red=Red;
Tgt->Green=Green;
Tgt->Blue=Blue;
}
BFG_RGB BFontMap::GetCol(int Which)
{
switch(Which)
{
case GRIDCOL:
return GridCol;
break;
case WIDTHCOL:
return WidthCol;
break;
case SELCOL:
return SelCol;
break;
case TEXTCOL:
return TextCol;
break;
case BACKCOL:
return BkCol;
break;
}
return BkCol; // Default
}
bool BFontMap::CalcWidths(HDC hdc)
{
BOOL Test;
int Letter;
ABC CharWidth[256];
int nttWidth[256];
// Populate Width data
Test=GetCharABCWidths(hdc,0,255,CharWidth);
if(Test)
{
for(Letter=0;Letter!=256;Letter++)
BaseWidth[Letter]=(unsigned char)(CharWidth[Letter].abcA+
CharWidth[Letter].abcB+
CharWidth[Letter].abcC);
}
else
{
// GetCharWidth32 for non truetype fonts
Test=GetCharWidth32(hdc,0,255,nttWidth);
if(Test)
for(Letter=0;Letter!=256;Letter++)
BaseWidth[Letter]=(unsigned char)nttWidth[Letter];
}
return true;
}
HBITMAP* BFontMap::DrawFontMap(int Flags, int Sel)
{
HDC wDC,mDC;
HBITMAP *fDIB;
BITMAPINFO BMDat;
HBRUSH Brush;
HPEN Pen;
int RowDex,ColDex,Letter;
HRGN ClipRgn;
RECT CharArea;
char Symbol[2];
unsigned char eVal;
// Create Device context
wDC=CreateDC("DISPLAY",NULL,NULL,NULL);
mDC=CreateCompatibleDC(wDC);
if(!wDC || !mDC)
return NULL;
// Create bitmap for font rendering
fDIB=new HBITMAP;
if(!fDIB)
return NULL;
BMDat.bmiHeader.biSize=sizeof(BITMAPINFOHEADER);
BMDat.bmiHeader.biWidth=MapWidth;
BMDat.bmiHeader.biHeight=MapHeight;
BMDat.bmiHeader.biPlanes=1;
BMDat.bmiHeader.biBitCount=24;
BMDat.bmiHeader.biCompression=BI_RGB;
BMDat.bmiHeader.biSizeImage=(MapWidth*MapHeight)*3;
*fDIB=CreateDIBSection(mDC,&BMDat,DIB_RGB_COLORS,NULL,NULL,0);
if(!fDIB)
return NULL;
if(!SelectObject(mDC,*fDIB))
return NULL;
// Fill background
if(Flags & DFM_ALPHA)
{
Brush=CreateSolidBrush(RGB(0,0,0));
Pen=CreatePen(PS_SOLID,0,RGB(0,0,0));
}
else
{
Brush=CreateSolidBrush(RGB(BkCol.Red,BkCol.Green,BkCol.Blue));
Pen=CreatePen(PS_SOLID,0,RGB(BkCol.Red,BkCol.Green,BkCol.Blue));
}
SelectObject(mDC,Brush);
SelectObject(mDC,Pen);
Rectangle(mDC,0,0,MapWidth,MapHeight);
DeleteObject(Pen);
DeleteObject(Brush);
// Draw Selection
Pen=CreatePen(PS_SOLID,0,RGB(SelCol.Red,SelCol.Green,SelCol.Blue));
Brush=CreateSolidBrush(RGB(SelCol.Red,SelCol.Green,SelCol.Blue));
if(Sel>-1)
{
SelectObject(mDC,Pen);
SelectObject(mDC,Brush);
RowDex=(Sel/(MapWidth/CellWidth));
ColDex=(Sel-((MapWidth/CellWidth)*RowDex));
ColDex*=CellWidth;
RowDex*=CellHeight;
Rectangle(mDC,ColDex,RowDex,ColDex+CellWidth,RowDex+CellHeight);
}
DeleteObject(Brush);
DeleteObject(Pen);
// Draw letters
// Create the font
if(fnt)
DeleteObject(fnt);
fnt=CreateFontIndirect(&FntDef);
SelectObject(mDC,fnt);
CalcWidths(mDC);
if(Flags & DFM_ALPHA)
{
SetTextColor(mDC,RGB(255,255,255));
SetBkColor(mDC,RGB(0,0,0));
}
else
{
SetTextColor(mDC,RGB(TextCol.Red,TextCol.Green,TextCol.Blue));
SetBkColor(mDC,RGB(BkCol.Red,BkCol.Green,BkCol.Blue));
}
SetBkMode(mDC,TRANSPARENT);
Pen=CreatePen(PS_SOLID,0,RGB(WidthCol.Red,WidthCol.Green,WidthCol.Blue));
SelectObject(mDC,Pen);
Letter=BaseChar;
for(RowDex=0;RowDex<(MapHeight-CellHeight)+1;RowDex+=CellHeight)
{
for(ColDex=0;ColDex<(MapWidth-CellWidth)+1 && Letter<256;ColDex+=CellWidth)
{
// Set Clipping Region
ClipRgn=CreateRectRgn(ColDex,RowDex,ColDex+CellWidth,RowDex+CellHeight);
SelectClipRgn(mDC,ClipRgn);
// Draw width marker
if(Flags & DFM_WIDTHLINE)
{
eVal=BaseWidth[Letter]+WidthMod[Letter]+gWidthMod;
MoveToEx(mDC,ColDex+eVal,RowDex,NULL);
LineTo(mDC,ColDex+eVal,RowDex+CellHeight);
}
// Render Char
CharArea.left=ColDex+HMod[Letter]+gHMod;
CharArea.right=ColDex+CellWidth;
CharArea.top=RowDex+VMod[Letter]+gVMod;
CharArea.bottom=RowDex+CellHeight;
wsprintf(Symbol,"%c",Letter);
Letter++;
DrawText(mDC,Symbol,-1,&CharArea,DT_LEFT | DT_NOPREFIX | DT_NOCLIP);
// Remove clip region
SelectClipRgn(mDC,NULL);
DeleteObject(ClipRgn);
}
}
DeleteObject(Pen);
// Draw grid lines
Pen=CreatePen(PS_SOLID,0,RGB(GridCol.Red,GridCol.Green,GridCol.Blue));
if(Flags & DFM_GRIDLINES)
{
SelectObject(mDC,Pen);
for(RowDex=CellHeight-1;RowDex<MapHeight;RowDex+=CellHeight)
{
MoveToEx(mDC,0,RowDex,NULL);
LineTo(mDC,MapWidth,RowDex);
}
for(ColDex=CellWidth-1;ColDex<MapWidth;ColDex+=CellWidth)
{
MoveToEx(mDC,ColDex,0,NULL);
LineTo(mDC,ColDex,MapHeight);
}
}
DeleteObject(Pen);
DeleteDC(wDC);
DeleteDC(mDC);
return fDIB;
}
int BFontMap::LoadConfig(char *fname)
{
ifstream cfgfile;
long fSize;
char *dat;
char Hdr[7];
int tVal,Flags;
cfgfile.open(fname,ios::binary);
if(cfgfile.fail())
return -1;
cfgfile.seekg(0,ios_base::end);
fSize=cfgfile.tellg();
cfgfile.seekg(0,ios_base::beg);
dat=new char[fSize];
if(!dat)
return -1;
cfgfile.read(dat,fSize);
cfgfile.close();
// Check ID
lstrcpyn(Hdr,dat,7);
Hdr[6]=NULL;
if(lstrcmp(Hdr,"BFGCFG"))
{
delete [] dat;
return -1;
}
memcpy(&MapWidth,&dat[6],4);
memcpy(&MapHeight,&dat[10],4);
memcpy(&CellWidth,&dat[14],4);
memcpy(&CellHeight,&dat[18],4);
memcpy(&tVal,&dat[22],4);
FntDef.lfHeight=tVal;
memcpy(&tVal,&dat[26],4);
FntDef.lfWidth=tVal;
memcpy(&Flags,&dat[30],4);
memcpy(&GridCol,&dat[34],3);
memcpy(&WidthCol,&dat[37],3);
memcpy(&SelCol,&dat[40],3);
memcpy(&TextCol,&dat[43],3);
memcpy(&BkCol,&dat[46],3);
delete [] dat;
return Flags;
}
bool BFontMap::SaveConfig(char *fname, bool Grid, bool Width)
{
ofstream cfgfile;
int tVal,Flags=0;
cfgfile.open(fname,ios_base::binary | ios_base::trunc );
if(cfgfile.fail())
return false;
cfgfile.write("BFGCFG",6);
cfgfile.write((char*)&MapWidth,sizeof(int));
cfgfile.write((char*)&MapHeight,sizeof(int));
cfgfile.write((char*)&CellWidth,sizeof(int));
cfgfile.write((char*)&CellHeight,sizeof(int));
tVal=(int)FntDef.lfHeight;
cfgfile.write((char*)&tVal,sizeof(int));
tVal=(int)FntDef.lfWidth;
cfgfile.write((char*)&tVal,sizeof(int));
if(Grid)
Flags |= SHOW_GRID;
if(Width)
Flags |= SHOW_WIDTH;
cfgfile.write((char*)&Flags,sizeof(int));
cfgfile.write((char*)&GridCol,sizeof(BFG_RGB));
cfgfile.write((char*)&WidthCol,sizeof(BFG_RGB));
cfgfile.write((char*)&SelCol,sizeof(BFG_RGB));
cfgfile.write((char*)&TextCol,sizeof(BFG_RGB));
cfgfile.write((char*)&BkCol,sizeof(BFG_RGB));
cfgfile.close();
return true;
}
void BFontMap::ResetOffsets()
{
int Loop;
for(Loop=0;Loop!=256;++Loop)
{
WidthMod[Loop]=0;
VMod[Loop]=0;
HMod[Loop]=0;
}
gWidthMod=gHMod=gVMod=0;
}
bool BFontMap::SaveFont(int Format, char *fname, int flags)
{
bool Inv,Sat;
Inv=Sat=false;
if(flags & SAVE_INV_ALPHA)
Inv=true;
if(flags & SAVE_RGB_SAT)
Sat=true;
switch(Format)
{
case SAVE_BFF8:
return SaveBFF2(fname,8,Inv,false);
break;
case SAVE_BFF24:
return SaveBFF2(fname,24,false,false);
break;
case SAVE_BFF32:
return SaveBFF2(fname,32,Inv,Sat);
break;
case SAVE_BIN:
return ExportBinData(fname);
break;
case SAVE_CSV:
return ExportCSVData(fname);
}
return false;
}
bool BFontMap::SaveBFF2(char *fname, char OutputBPP, bool Invert, bool RGBSat)
{
ofstream out;
HBITMAP *hBMP;
FontFileHeader Hdr;
DIBSECTION bmInfo;
SBM_Image FntImg,AlphaImg;
int Loop;
unsigned char EffWidth[256];
out.open(fname, ios::binary | ios::trunc);
if(out.fail())
return false;
// Populate header
Hdr.ID1 = 0xBF;
Hdr.ID2 = 0xF2;
Hdr.BPP=24;
Hdr.ImageWidth=MapWidth;
Hdr.ImageHeight=MapHeight;
Hdr.CellWidth=CellWidth;
Hdr.CellHeight=CellHeight;
Hdr.StartPoint=BaseChar;
// Create the SBM image
FntImg.Create(Hdr.ImageWidth,Hdr.ImageHeight,Hdr.BPP);
// Render the font image
if(OutputBPP==8)
hBMP=DrawFontMap(DFM_ALPHA,-1);
else
hBMP=DrawFontMap(0,-1);
// Grab the bitmap information
if(!GetObject(*hBMP,sizeof(DIBSECTION),&bmInfo))
return FALSE;
// Copy bitmap to SBM
memcpy(FntImg.GetImg(),bmInfo.dsBm.bmBits,(Hdr.ImageWidth*Hdr.ImageHeight)*(Hdr.BPP/8));
// Flip memory bitmap BGR to BFF RGB
FntImg.BGRtoRGB();
// Free the bitmap
delete hBMP;
// Add in alpha channel if required
if(OutputBPP==32)
{
// Render new alpha fontmap
hBMP=DrawFontMap(DFM_ALPHA,-1);
// Create the SBM alpha image
AlphaImg.Create(Hdr.ImageWidth,Hdr.ImageHeight,Hdr.BPP);
// Get RGB data ptr from Img
if(!GetObject(*hBMP,sizeof(DIBSECTION),&bmInfo))
return FALSE;
// Copy bitmap to alpha SBM
memcpy(AlphaImg.GetImg(),bmInfo.dsBm.bmBits,(Hdr.ImageWidth*Hdr.ImageHeight)*(Hdr.BPP/8));
// Free the bitmap
delete hBMP;
// Post-process images and insert alpha channel into font map
AlphaImg.Grayscale();
if(RGBSat)
FntImg.Saturate(0,0,0,255,255,255);
if(Invert)
AlphaImg.InvertCol();
FntImg.InsertAlpha(AlphaImg.GetImg());
Hdr.BPP=32;
}
if(OutputBPP==8)
{
FntImg.Grayscale();
if(Invert)
FntImg.InvertCol();
Hdr.BPP=8;
}
// Invert image
FntImg.FlipImg();
// Write header data
out.write((char*)&Hdr,sizeof(Hdr));
// Write char widths
for(Loop=0;Loop!=256;++Loop)
EffWidth[Loop]=BaseWidth[Loop]+WidthMod[Loop]+gWidthMod;
out.write((char*)EffWidth,256);
// Write bitmap
out.write((char*)FntImg.GetImg(),(Hdr.ImageWidth*Hdr.ImageHeight)*(OutputBPP/8));
out.close();
return true;
}
int BFontMap::ExportMap(char* fname, int fmt)
{
ofstream out;
HBITMAP *hBMP;
FontFileHeader Hdr;
DIBSECTION bmInfo;
SBM_Image FntImg,AlphaImg;
int Result;
out.open(fname, ios::binary | ios::trunc);
if(out.fail())
return false;
// Populate header
Hdr.ID1 = 0xBF;
Hdr.ID2 = 0xF2;
Hdr.BPP=24;
Hdr.ImageWidth=MapWidth;
Hdr.ImageHeight=MapHeight;
Hdr.CellHeight=CellHeight;
Hdr.CellWidth=CellHeight;
Hdr.StartPoint=BaseChar;
// Create the SBM image
FntImg.Create(Hdr.ImageWidth,Hdr.ImageHeight,Hdr.BPP);
// Render the font image
hBMP=DrawFontMap(0,-1);
// Grab the bitmap information
if(!GetObject(*hBMP,sizeof(DIBSECTION),&bmInfo))
return false;
// Copy bitmap to SBM
memcpy(FntImg.GetImg(),bmInfo.dsBm.bmBits,(Hdr.ImageWidth*Hdr.ImageHeight)*(Hdr.BPP/8));
// Free the bitmap
delete hBMP;
// Add in alpha channel if required
if(fmt==EXPORT_TGA32)
{
// Render new alpha fontmap
hBMP=DrawFontMap(DFM_ALPHA,-1);
// Create the SBM alpha image
AlphaImg.Create(Hdr.ImageWidth,Hdr.ImageHeight,Hdr.BPP);
// Get RGB data ptr from Img
if(!GetObject(*hBMP,sizeof(DIBSECTION),&bmInfo))
return false;
// Copy bitmap to alpha SBM
memcpy(AlphaImg.GetImg(),bmInfo.dsBm.bmBits,(Hdr.ImageWidth*Hdr.ImageHeight)*(Hdr.BPP/8));
// Free the bitmap
delete hBMP;
// Grayscale the alphamap
AlphaImg.Grayscale();
// Insert alpha channel into font map
FntImg.InsertAlpha(AlphaImg.GetImg());
}
switch(fmt)
{
case EXPORT_TGA32:
Result=FntImg.SaveTGA(fname);
break;
case EXPORT_TGA:
Result=FntImg.SaveTGA(fname);
break;
case EXPORT_BMP:
Result=FntImg.SaveBMP(fname);
break;
default:
Result=false;
break;
}
return Result;
}
bool BFontMap::ImportData(char *fname)
{
/* extern BFontMap *Fnt;
FILE *in;
long fsize,datptr;
int Index,Val;
char *data;
in=fopen(fname,"r");
if(in==NULL)
return FALSE;
// Get filesize
fseek(in,0,SEEK_END);
fsize=ftell(in);
rewind(in);
// Allocate space for file contents
data = new char[fsize];
if(data==NULL)
{
fclose(in);
return FALSE;
}
// Read in the file contents
fread(data,fsize,1,in);
fclose(in);
// Extract the font data
datptr=0;
// Image Width
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&(cfg->ImgSize));
// Image Height
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&(cfg->ImgSize));
// Cell Width
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&(cfg->CellHeight));
// Cell Height
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&(cfg->CellHeight));
// Start char
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&(cfg->CharBase));
// Font Name
while(data[datptr]!=',')
++datptr;
datptr++;
Index=0;
while(data[datptr]!='\n')
{
cfg->FntDef.lfFaceName[Index]=data[datptr];
++Index;
++datptr;
}
cfg->FntDef.lfFaceName[Index]=NULL;
// Font Height
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&(cfg->FntDef.lfHeight));
// Font Width
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&(cfg->FntDef.lfWidth));
// Char Widths
for(Index=0;Index!=256;++Index)
{
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->width[Index]=Val; // Prevents stack damage
}
// Char X Offsets
for(Index=0;Index!=256;++Index)
{
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->hAdj[Index]=Val;
}
// Char Y Offsets
for(Index=0;Index!=256;++Index)
{
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->vAdj[Index]=Val;
}
// Global Width Offset
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->gwAdj=Val;
// Global X Offset
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->ghAdj=Val;
// Global Y Offset
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->gvAdj=Val;
// Bold Value
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->FntDef.lfWeight=Val;
// Italic Value
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->FntDef.lfItalic=Val;
// AntiAlias Value
while(data[datptr]!=',')
++datptr;
datptr++;
sscanf(&data[datptr],"%d",&Val);
cfg->FntDef.lfQuality=Val;
delete [] data;*/
return TRUE;
}
bool BFontMap::ExportCSVData(char *fname)
{
ofstream out;
int Loop;
out.open(fname, ios::out | ios::trunc);
if(out.fail())
return false;
out<<"Image Width,"<<MapWidth<<"\n";
out<<"Image Height,"<<MapHeight<<"\n";
out<<"Cell Width,"<<CellWidth<<"\n";
out<<"Cell Height,"<<CellHeight<<"\n";
out<<"Start Char,"<<(int)BaseChar<<"\n";
out<<"Font Name,"<<FntDef.lfFaceName<<"\n";
out<<"Font Height,"<<FntDef.lfHeight<<"\n";
out<<"Font Width (0 is default),"<<FntDef.lfWidth<<"\n";
for(Loop=0;Loop!=256;++Loop)
{
out<<"Char "<<Loop<<" Base Width,"<<(int)BaseWidth[Loop]<<"\n";
}
for(Loop=0;Loop!=256;++Loop)
{
out<<"Char "<<Loop<<" Width Offset,"<<(int)WidthMod[Loop]<<"\n";
}
for(Loop=0;Loop!=256;++Loop)
{
out<<"Char "<<Loop<<" X Offset,"<<(int)HMod[Loop]<<"\n";
}
for(Loop=0;Loop!=256;++Loop)
{
out<<"Char "<<Loop<<" Y Offset,"<<(int)VMod[Loop]<<"\n";
}
out<<"Global Width Offset,"<<(int)gWidthMod<<"\n";
out<<"Global X Offset,"<<(int)gHMod<<"\n";
out<<"Global Y Offset,"<<(int)gVMod<<"\n";
out<<"Bold,"<<FntDef.lfWeight<<"\n";
out<<"Italic,"<<(int)FntDef.lfItalic<<"\n";
out<<"AntiAlias,"<<(int)FntDef.lfQuality<<"\n";
out.close();
return TRUE;
}
bool BFontMap::ExportBinData(char *fname)
{
ofstream out;
int Loop;
unsigned char eWidth[256];
out.open(fname, ios::binary | ios::trunc);
if(out.fail())
return false;
// Image dims
out.write((char*)&MapWidth,sizeof(int));
out.write((char*)&MapHeight,sizeof(int));
// Cell dims
out.write((char*)&CellWidth,sizeof(int));
out.write((char*)&CellHeight,sizeof(int));
// Start char
out.write((char*)&BaseChar,1);
// Font Widths
for(Loop=0;Loop!=256;++Loop)
eWidth[Loop]=BaseWidth[Loop]+WidthMod[Loop]+gWidthMod;
out.write((char*)eWidth,256);
out.close();
return TRUE;
}
bool BFontMap::IsPower(int TestValue)
{
bool Ret=FALSE;
float Val;
Val=(float)TestValue;
while(Val>=2.0f)
{
if(Val==2.0f)
Ret=TRUE;
Val=Val/2.0f;
}
return Ret;
}
|
cbbed06b89e8b14adaadcf150938c32f
|
{
"intermediate": 0.28728821873664856,
"beginner": 0.4311906099319458,
"expert": 0.28152111172676086
}
|
48,106
|
I am doing the Kaggle competition “Titanic - Machine Learning from Disaster”, my code:
“# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here’s several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only “…/input/” directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk(‘/kaggle/input’):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using “Save & Run All”
# You can also write temporary files to /kaggle/temp/, but they won’t be saved outside of the current session
train_data = pd.read_csv(”/kaggle/input/titanic/train.csv")
test_data = pd.read_csv(“/kaggle/input/titanic/test.csv”)
train_data.head(100)
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
# Imputing Age
age_imputer = SimpleImputer(strategy=‘median’)
train_data[‘Age’] = age_imputer.fit_transform(train_data[[‘Age’]])
test_data[‘Age’] = age_imputer.transform(test_data[[‘Age’]])
# Assuming Fare missing values can be filled with -1 (or you could use mean or median)
fare_imputer = SimpleImputer(strategy=‘median’)
train_data[‘Fare’] = fare_imputer.fit_transform(train_data[[‘Fare’]])
test_data[‘Fare’] = fare_imputer.transform(test_data[[‘Fare’]])
features = [“Pclass”, “Sex”, “Age”, “SibSp”, “Parch”, “Fare”,“Embarked”,“PassengerId”]
X = pd.get_dummies(train_data[features])
X_test = pd.get_dummies(test_data[features])
y = train_data[“Survived”]
model = RandomForestClassifier(n_estimators=200, max_depth=200, random_state=10)
model.fit(X, y)
predictions = model.predict(X_test)
train_accuracy = model.score(X, y)
print(f"Training Accuracy: {train_accuracy:.4f}“)
output = pd.DataFrame({‘PassengerId’: test_data.PassengerId, ‘Survived’: predictions})
output.to_csv(‘submission.csv’, index=False)
print(“Your submission was successfully saved!”)
”
I want to change the model to neural network, show code.
|
fa46d1c32853dac7cf5cd2da4930c42d
|
{
"intermediate": 0.4141998291015625,
"beginner": 0.2301241010427475,
"expert": 0.3556760549545288
}
|
48,107
|
<style>
body {
margin: 0;
font-family: Arial, sans-serif;
}
header {
background-color: #66CDAA;
padding: 15px;
color: #fff;
display: flex;
justify-content: space-between;
align-items: center;
}
header img{
height: 50px;
width: 50px;
}
nav {
display: flex;
}
nav ul {
list-style: none;
margin: 0;
padding: 0;
display: flex;
}
nav li {
margin-right: 20px;
color: #933;
}
section.hero {
height: 800px;
width: 100%;
background-image: url('https://wide-w.com/wp-content/uploads/2019/01/gora-jeverest.jpg');
background-size: cover;
background-position: center;
display: flex;
justify-content: center;
align-items: center;
color: #ffffff;
text-align: center;
}
section {
padding: 20px;
}
section#about {
border: 2px solid #333;
background-color: #00FA9A;
padding: 20px;
margin: 20px 0;
overflow: hidden;
}
section#about img {
float: left;
margin-right: 20px;
max-width: 300px;
}
section#about p {
margin: 0 0 20px 0;
}
section#best-tour {
text-align: center;
}
section#best-tour h2 {
margin-bottom: 20px;
}
section#best-tour .feature {
display: inline-block;
margin: 0 20px 20px 0;
border: 10px solid #F0E68C;
padding: 10px;
width: calc((100% - 60px) / 3);
box-sizing: border-box;
}
section#best-tour img {
width: 80px;
height: 80px;
border-radius: 50%;
margin-bottom: 10px;
display: block;
margin-left: auto;
margin-right: auto;
}
.hottourscontainer {
max-width: 1200px;
margin: 0 auto;
display: flex;
flex-wrap: wrap;
align-items: center;
justify-content: space-around;
padding: 20px;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}
.hottourscontainer img {
max-width: 200px;
margin-right: 20px;
border-radius: 10px;
margin-bottom: 20px;
}
.hottoursinfo {
flex-grow: 1;
width: 70%;
}
.tourtitle {
font-size: 24px;
font-weight: bold;
margin-bottom: 10px;
}
.tourdescription {
font-size: 16px;
margin-bottom: 10px;
}
.tourprice {
font-size: 18px;
font-weight: bold;
color: green;
border: 2px solid #eee;
padding: 10px;
display: inline-block;
}
.hottourstitle {
text-align: center;
font-size: 32px;
margin-bottom: 40px;
}
section#route-example {
text-align: center;
background-color: #FFF8DC;
}
section#route-example .route-block {
display: flex;
justify-content: center;
align-items: center;
margin-bottom: 40px;
border: 2px solid #DCDCDC;
padding: 10px;
margin-bottom: 10px;
}
section#route-example .route-block img{
width: 500px;
height: 400px;
}
section#route-example .route-block p {
width: 48%;
margin: 0 2%;
box-sizing: border-box;
}
footer {
background-color: #333;
color: #fff;
text-align: center;
padding: 10px;
position: fixed;
width: 100%;
bottom: 0;
}
#book-tour {
background-color: #F0FFF0;
text-align: center;
padding: 50px 0;
}
.book-button-container {
display: inline-block;
}
.book-button {
background-color: #4CAF50;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
font-size: 16px;
margin: 4px 2px;
cursor: pointer;
border-radius: 5px;
}
.book-button:hover {
background-color: #45a049;
}
#testimonials {
text-align: center;
padding: 50px;
}
.testimonial-container {
display: flex;
justify-content: space-around;
flex-wrap: wrap;
gap: 20px;
}
.testimonial {
background-color: #ffffff;
border: 1px solid #eaeaea;
padding: 20px;
border-radius: 5px;
box-shadow: 0px 2px 4px rgba(0, 0, 0, 0.1);
flex-basis: calc(30% - 40px);
margin: 10px;
flex-grow: 1;
}
.testimonial blockquote {
font-style: italic;
color: #555;
}
.testimonial-author, .tour-date {
font-weight: bold;
font-size: 0.9em;
color: #333;
text-align: right;
margin-top: 15px;
}
#map {
margin: 20px;
padding: 20px;
}
#map h2 {
text-align: center;
margin-bottom: 20px;
}
#map iframe {
width: 100%;
}
#contacts {
background-color: #f8f8f8;
padding: 50px 0;
}
#contacts .container {
max-width: 1200px;
margin: 0 auto;
padding: 0 15px;
}
#contacts h2 {
text-align: center;
margin-bottom: 15px;
#contacts p {
text-align: center;
margin: 10px 0;
font-size: 1rem;
}
#contacts a {
color: #007bff;
text-decoration: none;
}
#contacts a:hover {
text-decoration: underline;
}
footer{
height: 25px;
}
.swiper-container {
height: 1000vh;
margin: 0 auto;
position: relative;
overflow: hidden;
list-style: none;
padding: 0;
/* Fix of Webkit flickering */
z-index: 1;
</style> <section class="hero">
<h1>Ласкаво просимо в нашу туристичну компанію<br>
Тур від агентства " <b><i> Empire of Travel".</i></b> </h1>
</section>
<section id="about">
<img src="https://seo-evolution.com.ua/imgfly/public/Ej5uSGigkkhfGF9Sq2yBC9xGtsPNaiiNQ7uy0W4i.jpg?h=600" alt="Про нас">
<p>
<h1>Фантастична подорож для вас!</h1><br>
</p>
<p>Чому б не відсвяткувати свою наступну літню відпустку в Європі! Відвідайте замки, парки, пляжі під час цього чудового туру... У старій Європі є безліч меморіалів і музеїв з античною архітектурою. Долина Луари відома на весь світ своїми замками,
Рим – автентичною архітектурою, а Амальфі чудово підходить для пляжного відпочинку.</p>
<p>Південь Франції та більша частина сусідньої Італії є майданчиками для захоплення. Це зони, які здавна активізували почуття розкоші та поблажливості. Тепер у вас є чудова нагода випробувати все на собі.</p>
</section>
покращи візуал
|
a24c36688efd3e1901432cdff91c363b
|
{
"intermediate": 0.3339492976665497,
"beginner": 0.43508729338645935,
"expert": 0.23096346855163574
}
|
48,108
|
I am doing the Kaggle competition “Titanic - Machine Learning from Disaster”, my code:
“# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here’s several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only “…/input/” directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk(‘/kaggle/input’):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using “Save & Run All”
# You can also write temporary files to /kaggle/temp/, but they won’t be saved outside of the current session
train_data = pd.read_csv(”/kaggle/input/titanic/train.csv")
test_data = pd.read_csv(“/kaggle/input/titanic/test.csv”)
train_data.head(100)
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
# Imputing Age
age_imputer = SimpleImputer(strategy=‘median’)
train_data[‘Age’] = age_imputer.fit_transform(train_data[[‘Age’]])
test_data[‘Age’] = age_imputer.transform(test_data[[‘Age’]])
# Assuming Fare missing values can be filled with -1 (or you could use mean or median)
fare_imputer = SimpleImputer(strategy=‘median’)
train_data[‘Fare’] = fare_imputer.fit_transform(train_data[[‘Fare’]])
test_data[‘Fare’] = fare_imputer.transform(test_data[[‘Fare’]])
features = [“Pclass”, “Sex”, “Age”, “SibSp”, “Parch”, “Fare”,“Embarked”,“PassengerId”]
X = pd.get_dummies(train_data[features])
X_test = pd.get_dummies(test_data[features])
y = train_data[“Survived”]
model = RandomForestClassifier(n_estimators=200, max_depth=200, random_state=10)
model.fit(X, y)
predictions = model.predict(X_test)
train_accuracy = model.score(X, y)
print(f"Training Accuracy: {train_accuracy:.4f}“)
output = pd.DataFrame({‘PassengerId’: test_data.PassengerId, ‘Survived’: predictions})
output.to_csv(‘submission.csv’, index=False)
print(“Your submission was successfully saved!”)
”
I want to change the model to neural network, show code.
|
1cb448bb2a9cf86a9181227055abb58e
|
{
"intermediate": 0.4141998291015625,
"beginner": 0.2301241010427475,
"expert": 0.3556760549545288
}
|
48,109
|
accessors in computer science
|
ec287cc4bcd02ba0d1dc7dc76d442d2a
|
{
"intermediate": 0.2793835997581482,
"beginner": 0.2402334213256836,
"expert": 0.4803830087184906
}
|
48,110
|
i have challenge like this: import time
import numpy as np
import socket
import base64
# This might be useful with the exploitation of the device at some point!
#import lascar
HOST = '0.0.0.0' # This must be changed to the corresponding value of the live instance
PORT = 1337 # This must be changed to the corresponding value of the live instance
# This function is used to decode the base64 transmitted power trace (which is a NumPy array)
# The function should only be called for the response of the 1. option and on the data received
# after we send the plaintext (as seen in the example code below)
def b64_decode_trace(leakage):
byte_data = base64.b64decode(leakage)
return np.frombuffer(byte_data) # convert binary data into a NumPy array
# This function is used to communicate with the remote machine (Laptop-2) via socket
def connect_to_socket(option, data):
# Initialize a socket connection
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.connect((HOST, PORT))
resp_1 = s.recv(1024)
s.sendall(option)
resp_2 = s.recv(1024) # Receive response
# Send the data
# option one: binary plaintext
# option two: Hex encoded AES KEY
s.sendall(data)
# Receive response
# option one: receive base64 encoded binary data
# that represented the power traces as a Numpy array
# option two: receive an ASCII Message
# (if the key is correct the flag will be returned)
resp_data = b''
while True:
temp_data = s.recv(8096)
if not temp_data:
break
resp_data += temp_data
s.close()
# The print commands can be used for debugging in order to observe the responses
# The following print commands can be commented out.
print(resp_1.decode('ascii'))
print(option)
print(resp_2.decode('ascii'))
print(data)
#print(resp_data)
return resp_data
# Sample binary plaintext
plaintext = b'0123456789ABCDEF'
# Example use of option 1
print("Option 1:")
leakage = connect_to_socket(b'1', plaintext)
power_trace = b64_decode_trace(leakage)
print("Length of power trace: {}".format(power_trace.shape))
print(power_trace) # Outputs the NumPy array that represents the power trace.
# Always use a delay between each connection
# in order to have a stable connection
time.sleep(0.1)
# Sample HEX encoded AES KEY
KEY = b'00112233445566778899AABBCCDDEEFF'
print("\nOption 2:")
# Example use of option 2
response = connect_to_socket(b'2', KEY)
print(response)
|
f0e066b968b8043075ff0321f9f839f4
|
{
"intermediate": 0.500298261642456,
"beginner": 0.3520253002643585,
"expert": 0.1476764678955078
}
|
48,111
|
I am doing the Kaggle competition “Titanic - Machine Learning from Disaster”, my code:
"# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
!pip install tensorflow
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, InputLayer, Dropout
from tensorflow.keras.utils import to_categorical
train_data = pd.read_csv("/kaggle/input/titanic/train.csv")
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
train_data.head(100)
test_data.head(5)
# Preprocessing
age_imputer = SimpleImputer(strategy='median')
train_data['Age'] = age_imputer.fit_transform(train_data[['Age']])
test_data['Age'] = age_imputer.transform(test_data[['Age']])
fare_imputer = SimpleImputer(strategy='median')
train_data['Fare'] = fare_imputer.fit_transform(train_data[['Fare']])
test_data['Fare'] = fare_imputer.transform(test_data[['Fare']])
# Encoding categorical variables
train_data = pd.get_dummies(train_data, columns=["Sex", "Embarked"])
test_data = pd.get_dummies(test_data, columns=["Sex", "Embarked"])
# Selecting features and target
features = ["Pclass", "Age", "SibSp", "Parch", "Fare", "Sex_female", "Sex_male", "Embarked_C", "Embarked_Q", "Embarked_S","PassengerId"]
X = train_data[features]
y = train_data["Survived"]
X_test = test_data[features]
# Neural Network requires scaled input features.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_test_scaled = scaler.transform(X_test)
# Splitting the data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X_scaled, y, test_size=0.2, random_state=42)
model = Sequential([
Dense(128, activation='relu', input_shape=(X.shape[1],)),
Dropout(0.5), # Add dropout to reduce overfitting
Dense(128, activation='relu'),
Dropout(0.5), # Add dropout to reduce overfitting
Dense(128, activation='relu'),
Dropout(0.3), # Add dropout to reduce overfitting
Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
# Training with validation data
model.fit(X_train, y_train, epochs=50, batch_size=16, validation_data=(X_val, y_val))
# Predicting on the test set
predictions = model.predict(X_test_scaled)
predictions = (predictions > 0.5).astype(int).reshape(X_test_scaled.shape[0])
output = pd.DataFrame({'PassengerId': test_data.PassengerId, 'Survived': predictions})
output.to_csv('submission_nn_val.csv', index=False)
print("Your submission was successfully saved using neural network with a validation set!")
"can I save my model , then the competition will used my saved model directly instead of training it again when running the code?
|
40597ed5613b2777d6f85d744256bab6
|
{
"intermediate": 0.4356418550014496,
"beginner": 0.27508458495140076,
"expert": 0.2892735004425049
}
|
48,112
|
I am doing the Kaggle competition “Titanic - Machine Learning from Disaster”, my code:
"# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
!pip install tensorflow
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, InputLayer, Dropout
from tensorflow.keras.utils import to_categorical
train_data = pd.read_csv("/kaggle/input/titanic/train.csv")
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
train_data.head(100)
test_data.head(5)
# Preprocessing
age_imputer = SimpleImputer(strategy='median')
train_data['Age'] = age_imputer.fit_transform(train_data[['Age']])
test_data['Age'] = age_imputer.transform(test_data[['Age']])
fare_imputer = SimpleImputer(strategy='median')
train_data['Fare'] = fare_imputer.fit_transform(train_data[['Fare']])
test_data['Fare'] = fare_imputer.transform(test_data[['Fare']])
# Encoding categorical variables
train_data = pd.get_dummies(train_data, columns=["Sex", "Embarked"])
test_data = pd.get_dummies(test_data, columns=["Sex", "Embarked"])
# Selecting features and target
features = ["Pclass", "Age", "SibSp", "Parch", "Fare", "Sex_female", "Sex_male", "Embarked_C", "Embarked_Q", "Embarked_S","PassengerId"]
X = train_data[features]
y = train_data["Survived"]
X_test = test_data[features]
# Neural Network requires scaled input features.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_test_scaled = scaler.transform(X_test)
# Splitting the data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X_scaled, y, test_size=0.2, random_state=42)
model = Sequential([
Dense(128, activation='relu', input_shape=(X.shape[1],)),
Dropout(0.5), # Add dropout to reduce overfitting
Dense(128, activation='relu'),
Dropout(0.5), # Add dropout to reduce overfitting
Dense(128, activation='relu'),
Dropout(0.3), # Add dropout to reduce overfitting
Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
# Training with validation data
model.fit(X_train, y_train, epochs=50, batch_size=16, validation_data=(X_val, y_val))
# Predicting on the test set
predictions = model.predict(X_test_scaled)
predictions = (predictions > 0.5).astype(int).reshape(X_test_scaled.shape[0])
output = pd.DataFrame({'PassengerId': test_data.PassengerId, 'Survived': predictions})
output.to_csv('submission_nn_val.csv', index=False)
print("Your submission was successfully saved using neural network with a validation set!")
"can I save my model , then the competition will used my saved model directly instead of training it again ?
|
35bc6fda82945f9dbb95881a872f93e3
|
{
"intermediate": 0.4356418550014496,
"beginner": 0.27508458495140076,
"expert": 0.2892735004425049
}
|
48,113
|
What are the replacements for cellpadding cellspacing and border attributes in CSS when describing a table?
|
1bba78c83827352bd00e6eaebd82140d
|
{
"intermediate": 0.38646554946899414,
"beginner": 0.34318092465400696,
"expert": 0.27035361528396606
}
|
48,114
|
I am doing the Kaggle competition “Titanic - Machine Learning from Disaster”, my code:
"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train_data = pd.read_csv("/kaggle/input/titanic/train.csv")
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
train_data.head(100)
test_data.head(5)
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
# Imputing Age
age_imputer = SimpleImputer(strategy='median')
train_data['Age'] = age_imputer.fit_transform(train_data[['Age']])
test_data['Age'] = age_imputer.transform(test_data[['Age']])
# Assuming Fare missing values can be filled with -1 (or you could use mean or median)
fare_imputer = SimpleImputer(strategy='median')
train_data['Fare'] = fare_imputer.fit_transform(train_data[['Fare']])
test_data['Fare'] = fare_imputer.transform(test_data[['Fare']])
features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare","Embarked","PassengerId"]
X = pd.get_dummies(train_data[features])
X_test = pd.get_dummies(test_data[features])
y = train_data["Survived"]
model = RandomForestClassifier(n_estimators=200, max_depth=200, random_state=10)
model.fit(X, y)
predictions = model.predict(X_test)
train_accuracy = model.score(X, y)
print(f"Training Accuracy: {train_accuracy:.4f}")
output = pd.DataFrame({'PassengerId': test_data.PassengerId, 'Survived': predictions})
output.to_csv('submission.csv', index=False)
print("Your submission was successfully saved!")
Training Accuracy: 1.0000
Your submission was successfully saved!
"
The test accuarcy is 79, how to further improve?
|
b3a82091a61981f9aa5af931e64fd524
|
{
"intermediate": 0.6388859748840332,
"beginner": 0.17631752789020538,
"expert": 0.1847964972257614
}
|
48,115
|
I am doing the Kaggle competition “Titanic - Machine Learning from Disaster”, my code:
"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
train_data = pd.read_csv("/kaggle/input/titanic/train.csv")
test_data = pd.read_csv("/kaggle/input/titanic/test.csv")
train_data.head(100)
test_data.head(5)
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
# Imputing Age
age_imputer = SimpleImputer(strategy='median')
train_data['Age'] = age_imputer.fit_transform(train_data[['Age']])
test_data['Age'] = age_imputer.transform(test_data[['Age']])
# Assuming Fare missing values can be filled with -1 (or you could use mean or median)
fare_imputer = SimpleImputer(strategy='median')
train_data['Fare'] = fare_imputer.fit_transform(train_data[['Fare']])
test_data['Fare'] = fare_imputer.transform(test_data[['Fare']])
features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare","Embarked","PassengerId"]
X = pd.get_dummies(train_data[features])
X_test = pd.get_dummies(test_data[features])
y = train_data["Survived"]
model = RandomForestClassifier(n_estimators=200, max_depth=200, random_state=10)
model.fit(X, y)
predictions = model.predict(X_test)
train_accuracy = model.score(X, y)
print(f"Training Accuracy: {train_accuracy:.4f}")
output = pd.DataFrame({'PassengerId': test_data.PassengerId, 'Survived': predictions})
output.to_csv('submission.csv', index=False)
print("Your submission was successfully saved!")
Training Accuracy: 1.0000
Your submission was successfully saved!
"
The test accuarcy is 79, how to further improve? Show code.
|
e60a8a9aaa4fe92495f52af355216e2d
|
{
"intermediate": 0.5621781349182129,
"beginner": 0.23322196304798126,
"expert": 0.20459993183612823
}
|
48,116
|
public class Star
{
private int mySize;
private double myDistance;
private String myName;
public Star()
{
mySize = 0;
myDistance = 0;
myName = "none";
the following
attempts to create a Star Object.
I. Star s = new Star(15, 25.9,
Vega);
II. Star s = new Star();
III. Star s = new Star(11, 14.25, "Rigel");
Which line of code above does NOT compile?
}
public Star(int s, double d, String n)
{
mySize = s;
myDistance = d;
A. II.
B. I.
myName = n;
}
public double getDistance()
{
return myDistance;
}
public double getInfo()
{
return mySize / 2. myDistance;
+
}
public int calc(int m)
{
int xm 5;
+
return x;
C. III.
|
911eecc884149fbd6a32db1b36d1330a
|
{
"intermediate": 0.2957535684108734,
"beginner": 0.5563386678695679,
"expert": 0.14790773391723633
}
|
48,117
|
i have following code to load my trained models
i have 4 trained models and 4 equal y parameters(each has 3 values to be predicted)
complete the code so predict each y with each mmodel and then validate predicted result based on mae :
# Placeholder for storing the data frames
data_frames = []
x_scaler = joblib.load('snn_all_1142_x_scaler.sav')
x_scaler = joblib.load('snn_all_1142_yhlp1_scaler.sav')
x_scaler = joblib.load('snn_all_1142_yhlp2_scaler.sav')
x_scaler = joblib.load('snn_all_1142_yhlp3_scaler.sav')
x_scaler = joblib.load('snn_all_1142_yhlp5_scaler.sav')
model_y1 = load_model('snn_1142_hlp1_sec_5m_relu_trainloss09_mae17_valloss22_mae22.h5')
model_y1 = load_model('snn_1142_hlp2_sec_5m_relu_trainloss07_mae15_valloss36_mae19.h5')
model_y1 = load_model('snn_1142_hlp3_sec_5m_relu_trainloss05_mae13_valloss17_mae17.h5')
model_y1 = load_model('snn_1142_hlp5_5m_relu_trainloss05_mae12_valloss12_mae14.h5')
# Loop over the list of csv files
for csv_file in csv_files:
# Read the CSV file
file_path = os.path.join(csv_directory, csv_file)
df = pd.read_csv(file_path)
# Assuming ‘df’ is your DataFrame and ‘Label’ is the target column
X = df.drop(['y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1).values
Y1 = df[['y_High_1d', 'y_Low_1d', 'y_Priority_1d']].values
Y2 = df[['y_High_2d', 'y_Low_2d', 'y_Priority_2d']].values
Y3 = df[['y_High_3d', 'y_Low_3d', 'y_Priority_3d']].values
Y5 = df[['y_High_5d', 'y_Low_5d', 'y_Priority_5d']].values
# Y = to_categorical(y) # Convert labels to one-hot encoding
|
9634765a1014facbec4b379ad981dde7
|
{
"intermediate": 0.24265849590301514,
"beginner": 0.6116739511489868,
"expert": 0.14566761255264282
}
|
48,118
|
code:
# Placeholder for storing the data frames
data_frames = []
x_scaler = joblib.load('snn_all_1142_x_scaler.sav')
x_scaler = joblib.load('snn_all_1142_yhlp1_scaler.sav')
x_scaler = joblib.load('snn_all_1142_yhlp2_scaler.sav')
x_scaler = joblib.load('snn_all_1142_yhlp3_scaler.sav')
x_scaler = joblib.load('snn_all_1142_yhlp5_scaler.sav')
model_y1 = load_model('snn_1142_hlp1_sec_5m_relu_trainloss09_mae17_valloss22_mae22.h5')
model_y2 = load_model('snn_1142_hlp2_sec_5m_relu_trainloss07_mae15_valloss36_mae19.h5')
model_y3 = load_model('snn_1142_hlp3_sec_5m_relu_trainloss05_mae13_valloss17_mae17.h5')
model_y5 = load_model('snn_1142_hlp5_5m_relu_trainloss05_mae12_valloss12_mae14.h5')
# Loop over the list of csv files
for csv_file in csv_files:
# Read the CSV file
unique_part = csv_file.split('_')[-2]
file_path = os.path.join(csv_directory, csv_file)
df = pd.read_csv(file_path)
# Assuming ‘df’ is your DataFrame and ‘Label’ is the target column
X = df.drop(['y_High_1d', 'y_Low_1d', 'y_Priority_1d',
'y_High_2d', 'y_Low_2d', 'y_Priority_2d',
'y_High_3d', 'y_Low_3d', 'y_Priority_3d',
'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1).values
Y1_actual = df[['y_High_1d', 'y_Low_1d', 'y_Priority_1d']].values
Y2_actual = df[['y_High_2d', 'y_Low_2d', 'y_Priority_2d']].values
Y3_actual = df[['y_High_3d', 'y_Low_3d', 'y_Priority_3d']].values
Y5_actual = df[['y_High_5d', 'y_Low_5d', 'y_Priority_5d']].values
X_scaled = x_scaler.fit(X)
# Make predictions
Y1_pred = model_y1.predict(X_scaled)
Y2_pred = model_y2.predict(X_scaled)
Y3_pred = model_y3.predict(X_scaled)
Y5_pred = model_y5.predict(X_scaled)
# Note: If your y values were scaled during training, you should inverse transform
# the predictions and the actual y values before calculating the MAE
Y1_pred_actual = scaler_y1.inverse_transform(Y1_pred)
Y2_pred_actual = scaler_y2.inverse_transform(Y2_pred)
Y3_pred_actual = scaler_y3.inverse_transform(Y3_pred)
Y5_pred_actual = scaler_y5.inverse_transform(Y5_pred)
file_results = {'file_name': csv_file}
# Calculate MAE for each column and add to the dictionary
for i, label in enumerate(labels):
file_results[f'Y1_{label}_MAE'] = mean_absolute_error(Y1_actual[:, i], Y1_pred_actual[:, i])
file_results[f'Y2{label}_MAE'] = mean_absolute_error(Y2_actual[:, i], Y2_pred_actual[:, i])
file_results[f'Y3{label}_MAE'] = mean_absolute_error(Y3_actual[:, i], Y3_pred_actual[:, i])
file_results[f'Y5{label}_MAE'] = mean_absolute_error(Y5_actual[:, i], Y5_pred_actual[:, i])
# Append the results of this file to the main results list
results.append(file_results)
# Convert the list of dictionaries to a DataFrame
results_df = pd.DataFrame(results)
# Output the DataFrame to a CSV file
results_csv_path = 'mae_results.csv' # Define the output path
results_df.to_csv(results_csv_path, index=False) # Export to CSV without the index
error:
{
"name": "ImportError",
"message": "Filepath looks like a hdf5 file but h5py is not available. filepath=snn_1142_hlp1_sec_5m_relu_trainloss09_mae17_valloss22_mae22.h5",
"stack": "---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[9], line 10
7 x_scaler = joblib.load('snn_all_1142_yhlp3_scaler.sav')
8 x_scaler = joblib.load('snn_all_1142_yhlp5_scaler.sav')
---> 10 model_y1 = load_model('snn_1142_hlp1_sec_5m_relu_trainloss09_mae17_valloss22_mae22.h5')
11 model_y2 = load_model('snn_1142_hlp2_sec_5m_relu_trainloss07_mae15_valloss36_mae19.h5')
12 model_y3 = load_model('snn_1142_hlp3_sec_5m_relu_trainloss05_mae13_valloss17_mae17.h5')
File c:\\Users\\Fazel\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\keras\\utils\\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File c:\\Users\\Fazel\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\keras\\saving\\save.py:236, in load_model(filepath, custom_objects, compile, options)
234 else:
235 if h5py is None:
--> 236 raise ImportError(
237 \"Filepath looks like a hdf5 file but h5py is \"
238 \"not available.\"
239 f\" filepath={filepath_str}\"
240 )
241 return hdf5_format.load_model_from_hdf5(
242 tf.io.gfile.GFile(filepath_str, mode=\"rb\"),
243 custom_objects,
244 compile,
245 )
246 elif h5py is not None and isinstance(filepath, h5py.File):
ImportError: Filepath looks like a hdf5 file but h5py is not available. filepath=snn_1142_hlp1_sec_5m_relu_trainloss09_mae17_valloss22_mae22.h5"
}
|
32bd8e7be6e2d7d546b3ccb5fa355073
|
{
"intermediate": 0.32182252407073975,
"beginner": 0.39669960737228394,
"expert": 0.28147783875465393
}
|
48,119
|
in following code i want to perform a sigmoid on thirs column of Y1_pred:
Y1_pred = model_y1.predict(X_scaled)
give me proper python copde
|
9f339f1dc59dc288936fbe16cf0305f4
|
{
"intermediate": 0.3222676217556,
"beginner": 0.15759402513504028,
"expert": 0.5201382637023926
}
|
48,120
|
Could you write a script that reads a simple syntax consisting of an instruction and parameters, both provided in an array?
|
3707bd1454c7756d70495ad8c03a9545
|
{
"intermediate": 0.3574806749820709,
"beginner": 0.3314478099346161,
"expert": 0.3110715448856354
}
|
48,121
|
write me an order confirmation email template, professional tone, in arabic
|
971cfc2cec6d71fe3a16b7b408fea96d
|
{
"intermediate": 0.4065905809402466,
"beginner": 0.3189408779144287,
"expert": 0.2744685411453247
}
|
48,122
|
Could you write a price of code that reads a specific instruction in a syntax, looks it up in a array, gets the specified parameters, stored in the same array as the instruction?
|
66d6738f38d2acab77dbe5b5b0159fa7
|
{
"intermediate": 0.46966472268104553,
"beginner": 0.1541164368391037,
"expert": 0.37621885538101196
}
|
48,123
|
Could you write a price of code that reads a specific instruction in a syntax, looks it up in a array, gets the specified parameters, stored in the same array as the instruction? The parameters are also written by the user, but to know how many parameters are required by the instruction the amount of parameters need to be stored beforehand.
|
eb7efd3072330a35b3cd5850e17758e3
|
{
"intermediate": 0.4388827979564667,
"beginner": 0.23160450160503387,
"expert": 0.32951271533966064
}
|
48,124
|
Suppose I wanted to play a simple beeping sound using ALSA in C. How would I do that?
|
da73b01924c68040fbfffdd2af09aae3
|
{
"intermediate": 0.4339192509651184,
"beginner": 0.14053072035312653,
"expert": 0.42555004358291626
}
|
48,125
|
convert flac to mp3 with ffmpeg using amd gpu hardware accelaration on linux
|
cf8f35963d34562dbc8d4da5a2eb6a10
|
{
"intermediate": 0.37496092915534973,
"beginner": 0.2066117525100708,
"expert": 0.41842734813690186
}
|
48,126
|
please check the code for errors:
|
202d313b19e7897751f503d74f347c00
|
{
"intermediate": 0.3656284809112549,
"beginner": 0.279302716255188,
"expert": 0.3550688326358795
}
|
48,127
|
please identify what type of cipher this is: thkhdliSrsyeosoTOnanAtSeelmeAttargeIsediaotelhedoeIoolelteAamehsReTeHnSeSnlDhFeePFuFsmeMtNidlhAccgseiaslalAsnlTdieishKeADehodrDFerhuhSinvhaDaIDehosWrrnrhfdySnFhTaeBTreeksdn
|
101a9364a9074fcea715325b1368db86
|
{
"intermediate": 0.4046347439289093,
"beginner": 0.38990628719329834,
"expert": 0.20545890927314758
}
|
48,128
|
#
# For licensing see accompanying LICENSE file.
# Copyright (C) 2024 Apple Inc. All Rights Reserved.
#
import argparse
import functools
from dataclasses import dataclass, field
from numbers import Number
from typing import Callable, Dict, List, Optional, Tuple, Union
import numpy as np
import torch
from torch import Tensor, nn
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from torch.nn import functional as F
from corenet.modeling.layers import (
Embedding,
LinearLayer,
RotaryEmbedding,
get_normalization_layer,
norm_layers_tuple,
)
from corenet.modeling.layers.activation import build_activation_layer
from corenet.modeling.models import MODEL_REGISTRY
from corenet.modeling.models.language_modeling.base_lm import BaseLanguageModel
from corenet.utils import logger
from corenet.utils.math_utils import make_divisible
def compute_heads(model_dim: int, head_dim: int) -> int:
"""Compute the number of heads.
Args:
model_dim: Model dimension.
head_dim: Head dimension.
...note:
If model dimension is not divisible by head dimension, ValueError is raised. Otherwise, integer denoting
number of heads in multi-head attention is returned.
"""
if model_dim % head_dim == 0:
return model_dim // head_dim
else:
raise ValueError(
f"Model dimension should be divisible by head dimension. Got: {model_dim} and {head_dim}."
)
@dataclass
class GPTConfig:
vocab_size: int = 32000
max_context_length: int = 2048
num_transformer_layers: int = 12
model_dim: int = 2048
head_dim: int = 128
qkv_multipliers: Union[Number, List[Number]] = 1.0
num_query_heads: int = compute_heads(model_dim=model_dim, head_dim=head_dim)
# This variable allows to switch between multi-head attention, group query attention, and multi-query attention.
# When num_gqa_groups == 1, then it is multi-head attention.
# When 1 < num_gqa_groups < num_heads and num_heads is divisible by num_gqa_groups, then it is group query attention
# When num_gqa_groups == num_heads, then it is multi-query attention
num_gqa_groups: int = 1
# Multipliers for the feed-forward network.
ffn_multipliers: Union[Number, List[Number]] = 4.0
# use FFN with Gated Linear Unit (GLU)
ffn_with_glu: bool = True
ffn_dim_divisor: int = 256
activation_fn_name: str = "swish"
normalization_layer_name: str = "rms_norm"
normalize_qk_projections: bool = False
share_input_output_layers: bool = False
rope_freq_constant: int = 10000
# Note that rope_max_length is set to twice of max_context_length.
# This allows flexibility in token lengths during training or fine-tuning.
rope_max_length: int = 4096
def __post_init__(self) -> None:
if self.num_gqa_groups is not None:
head_multiple_of = self.num_gqa_groups
else:
head_multiple_of = 2
if isinstance(self.qkv_multipliers, Number):
# All attention layers have the same latent dimensions, resulting in uniform allocation of parameters.
qkv_dim = make_divisible(
self.model_dim * self.qkv_multipliers,
divisor=self.head_dim * head_multiple_of,
)
query_dims = [int(qkv_dim)] * self.num_transformer_layers
elif (
isinstance(self.qkv_multipliers, (tuple, list))
and len(self.qkv_multipliers) == 2
):
# Each attention layer have different latent dimensions assuming qkv_multipliers[0] != qkv_multipliers[1].
# This results in variable allocation of parameters in attention layer.
# This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
qkv_multipliers = [
round(v, 2)
for v in np.linspace(
self.qkv_multipliers[0],
self.qkv_multipliers[1],
num=self.num_transformer_layers,
dtype=float,
)
]
# Make sure that scaled model dimension is divisible by scaled head dimension.
query_dims = [
int(
make_divisible(
self.model_dim * m, divisor=self.head_dim * head_multiple_of
)
)
for m in qkv_multipliers
]
else:
raise NotImplementedError(
f"QKV multipliers should be a single number or a list containing exactly two numbers. Got: {qkv_multipliers}."
)
# compute the number of query, key, and value heads
# For multi-head and multi-query attention, the number of heads for query, key, and value are the same.
# For group query attention, the number of key and value heads are the same.
self.num_query_heads = [
int(compute_heads(q_dim, self.head_dim)) for q_dim in query_dims
]
self.num_kv_heads = [
q_heads // self.num_gqa_groups for q_heads in self.num_query_heads
]
# Feed-forward network (FFN) multipliers
if isinstance(self.ffn_multipliers, Number):
# All FFN layers have the same latent dimensions, resulting in uniform allocation of parameters.
self.ffn_multipliers = [self.ffn_multipliers] * self.num_transformer_layers
elif (
isinstance(self.ffn_multipliers, (tuple, list))
and len(self.ffn_multipliers) == 2
):
# Each FFN layer have different latent dimensions assuming ffn_multipliers[0] != ffn_multipliers[1].
# This results in variable allocation of parameters in FFN layer.
# This scaling is known as layer-wise or block-wise scaling: https://arxiv.org/abs/2008.00623
self.ffn_multipliers = [
round(v, 2)
for v in np.linspace(
self.ffn_multipliers[0],
self.ffn_multipliers[1],
num=self.num_transformer_layers,
dtype=float,
)
]
else:
raise NotImplementedError(
f"FFN multipliers should be a single number or a list containing exactly two numbers. Got: {qkv_multipliers}."
)
@classmethod
def from_name(
cls, model_name: str, vocab_size: int, max_context_length: int
) -> "GPTConfig":
if model_name in gpt_configs:
config = gpt_configs[model_name]
else:
raise NotImplementedError(f"{model_name} is not yet implemented")
config["vocab_size"] = vocab_size
config["max_context_length"] = max_context_length
return cls(**config)
gpt_configs = {
"gpt-test": dict(
num_transformer_layers=1,
model_dim=128,
head_dim=64,
num_gqa_groups=1,
normalize_qk_projections=True,
share_input_output_layers=True,
# Vary the FFN and QKV multiplier to create variable FFN and attention layers respectively.
ffn_multipliers=(0.25, 0.75),
qkv_multipliers=(0.25, 0.5),
),
# A sample GPT configuration.
"gpt-1_3B": dict(
num_transformer_layers=24,
model_dim=2048,
head_dim=64,
max_context_length=2048,
# For gated FFN, the value is around 3. while for standard FFN, the value is 4.0.
ffn_multipliers=3.0,
# Number of GQA groups.
num_gqa_groups=4,
normalize_qk_projections=True,
share_input_output_layers=True,
),
"OpenELM-270M": dict(
num_transformer_layers=16,
model_dim=1280,
head_dim=64,
num_gqa_groups=4,
normalize_qk_projections=True,
share_input_output_layers=True,
# Vary the FFN and QKV multiplier to create variable FFN and attention layers respectively.
ffn_multipliers=(0.5, 4.0),
qkv_multipliers=(0.5, 1.0),
),
"OpenELM-450M": dict(
num_transformer_layers=20,
model_dim=1536,
head_dim=64,
num_gqa_groups=4,
normalize_qk_projections=True,
share_input_output_layers=True,
# Vary the FFN and QKV multiplier to create variable FFN and attention layers respectively.
ffn_multipliers=(0.5, 4.0),
qkv_multipliers=(0.5, 1.0),
),
"OpenELM-1_1B": dict(
num_transformer_layers=28,
model_dim=2048,
head_dim=64,
num_gqa_groups=4,
normalize_qk_projections=True,
share_input_output_layers=True,
# Vary the FFN and QKV multiplier to create variable FFN and attention layers respectively.
ffn_multipliers=(0.5, 4.0),
qkv_multipliers=(0.5, 1.0),
),
"OpenELM-3B": dict(
num_transformer_layers=36,
model_dim=3072,
head_dim=128,
num_gqa_groups=4,
normalize_qk_projections=True,
share_input_output_layers=True,
# Vary the FFN and QKV multiplier to create variable FFN and attention layers respectively.
ffn_multipliers=(0.5, 4.0),
qkv_multipliers=(0.5, 1.0),
),
}
class MultiHeadCausalAttention(nn.Module):
"""Multi-head causal attention.
Args:
opts: Command-line arguments.
model_config: Model configuration.
layer_idx: Layer index.
"""
def __init__(
self, opts: argparse.Namespace, model_config: GPTConfig, layer_idx: int
) -> None:
super().__init__()
assert (
model_config.num_query_heads[layer_idx]
% model_config.num_kv_heads[layer_idx]
== 0
), f"Number of query heads are not divisible by number of key/value heads. Got: {model_config.num_query_heads[layer_idx]} and {model_config.num_kv_heads[layer_idx]}."
head_dim = model_config.head_dim
q_heads = model_config.num_query_heads[layer_idx]
k_heads = model_config.num_kv_heads[layer_idx]
v_heads = model_config.num_kv_heads[layer_idx]
self.qkv_proj = LinearLayer(
in_features=model_config.model_dim,
out_features=(q_heads + k_heads + v_heads) * head_dim,
bias=False,
)
self.pos_embedding = RotaryEmbedding(
model_dim=model_config.head_dim,
max_seq_length=model_config.rope_max_length,
freq_constant=model_config.rope_freq_constant,
)
if model_config.normalize_qk_projections:
self.q_norm = get_normalization_layer(
opts,
num_features=model_config.head_dim,
norm_type=model_config.normalization_layer_name,
)
self.k_norm = get_normalization_layer(
opts,
num_features=model_config.head_dim,
norm_type=model_config.normalization_layer_name,
)
else:
self.q_norm = None
self.k_norm = None
self.out_proj = LinearLayer(
in_features=q_heads * head_dim,
out_features=model_config.model_dim,
bias=False,
)
self.head_dim = model_config.head_dim
self.num_q_heads = q_heads
self.num_k_heads = k_heads
self.num_v_heads = v_heads
self.model_dim = model_config.model_dim
self.num_groups = self.num_q_heads // self.num_k_heads
def extra_repr(self) -> str:
return (
super().extra_repr()
+ f"model_dim={self.model_dim}, num_query_heads={self.num_q_heads}, num_key_heads={self.num_k_heads}, num_value_heads={self.num_v_heads}"
)
def forward(
self,
x: Tensor,
past_keys: Optional[Tensor] = None,
past_values: Optional[Tensor] = None,
use_kv_cache: bool = False,
is_causal: bool = True,
) -> Tuple[Tensor, Optional[Tensor], Optional[Tensor]]:
"""
Forward pass of multi-head self-attention.
Args:
x: Input tensor of the shape [batch size, sequence length, model dimension].
past_keys: Tensor storing the cached keys.
The shape of tensor is [batch size, number of key heads, sequence length, head dimension].
past_values: Tensor storing the cached values. The shape of the tensor is the same as 'past_keys'.
use_kv_cache: Cache the output of key and value projection layers for faster inference.
is_causal: Specifies whether to apply causal masking in scaled dot-product attention.
Returns:
The output of the same shape as the input, optionally with a tensor containing cached keys and values.
"""
batch_size, seq_length, d_model = x.shape
# [batch_size, seq_length, d_model] --> [batch_size, seq_length, (num_q_heads + num_k_heads + num_v_heads) * head_dim]
qkv = self.qkv_proj(x)
# [batch_size, seq_length, (num_q_heads + num_k_heads + num_v_heads) * head_dim] --> [batch_size, seq_length, (num_q_heads + num_k_heads + num_v_heads), head_dim]
qkv = qkv.reshape(
batch_size,
seq_length,
self.num_q_heads + self.num_k_heads + self.num_v_heads,
self.head_dim,
)
# [batch_size, seq_length, (num_q_heads + num_k_heads + num_v_heads), head_dim] --> [batch_size, (num_q_heads + num_k_heads + num_v_heads), seq_length, head_dim]
qkv = qkv.transpose(1, 2)
# [batch_size, (num_q_heads + num_k_heads + num_v_heads), seq_length, head_dim] --> [batch_size, num_q_heads, seq_length, head_dim], [batch_size, num_k_heads, seq_length, head_dim], [batch_size, num_v_heads, seq_length, head_dim]
queries, keys, values = qkv.split(
[self.num_q_heads, self.num_k_heads, self.num_v_heads], dim=1
)
if self.q_norm is not None:
queries = self.q_norm(queries)
if self.k_norm is not None:
keys = self.k_norm(keys)
if use_kv_cache:
if past_keys is not None:
assert past_values is not None
# concatenate past and current keys along the sequence dimension.
keys = torch.cat([past_keys, keys], dim=-2)
values = torch.cat([past_values, values], dim=-2)
past_keys = keys
past_values = values
# Add positional embedding
queries, keys = self.pos_embedding(queries, keys)
if self.num_groups != 1:
# Group-query attention.
# [batch_size, num_k_heads, seq_length, head_dim] --> [batch_size, num_q_heads, seq_length, head_dim]
keys = keys.repeat_interleave(self.num_groups, dim=1)
# [batch_size, num_v_heads, seq_length, head_dim] --> [batch_size, num_q_heads, seq_length, head_dim]
values = values.repeat_interleave(self.num_groups, dim=1)
# scaled dot-product attention.
# The output of this operation has size of [batch_size, num_q_heads, seq_length, head_dim]
attn_output = F.scaled_dot_product_attention(
queries,
keys,
values,
attn_mask=None,
dropout_p=0,
is_causal=is_causal,
)
# [batch_size, num_q_heads, seq_length, head_dim] --> [batch_size, seq_length, num_q_heads, head_dim]
attn_output = attn_output.transpose(1, 2).contiguous()
# [batch_size, seq_length, num_q_heads, head_dim] --> [batch_size, seq_length, num_q_heads * head_dim]
attn_output = attn_output.reshape(
batch_size, seq_length, self.num_q_heads * self.head_dim
)
# [batch_size, seq_length, num_q_heads * head_dim] --> [batch_size, seq_length, d_model]
out = self.out_proj(attn_output)
return out, past_keys, past_values
class FeedForwardNetwork(nn.Module):
"""Feed-forward network.
Args:
opts: Command-line arguments.
model_config: Model configuration.
layer_idx: Layer index.
"""
def __init__(
self, opts: argparse.Namespace, model_config: GPTConfig, layer_idx: int
) -> None:
super().__init__()
ffn_multiplier = model_config.ffn_multipliers[layer_idx]
intermediate_dim = int(
make_divisible(
ffn_multiplier * model_config.model_dim,
divisor=model_config.ffn_dim_divisor,
)
)
if model_config.ffn_with_glu:
# FFN with Gated linear unit, as described in https://arxiv.org/abs/2002.05202v1.
self.proj_1 = LinearLayer(
in_features=model_config.model_dim,
out_features=2 * intermediate_dim,
bias=False,
)
self.proj_2 = LinearLayer(
in_features=intermediate_dim,
out_features=model_config.model_dim,
bias=False,
)
self.ffn_with_glu = True
else:
# Standard FFN, as described in https://arxiv.org/abs/1706.03762
self.proj_1 = LinearLayer(
in_features=model_config.model_dim,
out_features=intermediate_dim,
bias=False,
)
self.proj_2 = LinearLayer(
in_features=intermediate_dim,
out_features=model_config.model_dim,
bias=False,
)
self.ffn_with_glu = False
self.act = build_activation_layer(
opts=opts, act_type=model_config.activation_fn_name
)
def extra_repr(self) -> str:
return super().extra_repr() + f"(ffn_with_glu) : {self.ffn_with_glu}"
def forward(self, x: Tensor) -> Tensor:
"""Forward function of FFN layer.
Args:
x: Input tensor of the shape [batch size, sequence length, model dimension].
Returns:
A tensor of the same shape as the input.
"""
if self.ffn_with_glu:
y_12 = self.proj_1(x)
y_1, y_2 = y_12.chunk(2, dim=-1)
y = self.act(y_1) * y_2
return self.proj_2(y)
else:
return self.proj_2(self.act(self.proj_1(x)))
class TransformerDecoderLayer(nn.Module):
"""Transformer decoder layer.
Args:
opts: Command-line arguments.
model_config: Model configuration.
layer_idx: Layer index.
"""
def __init__(
self, opts: argparse.Namespace, model_config: GPTConfig, layer_idx: int
) -> None:
super().__init__()
self.attn = MultiHeadCausalAttention(
opts, model_config=model_config, layer_idx=layer_idx
)
self.ffn = FeedForwardNetwork(
opts, model_config=model_config, layer_idx=layer_idx
)
self.ffn_norm = get_normalization_layer(
opts,
num_features=model_config.model_dim,
norm_type=model_config.normalization_layer_name,
)
self.attn_norm = get_normalization_layer(
opts,
num_features=model_config.model_dim,
norm_type=model_config.normalization_layer_name,
)
def forward(
self,
x: Tensor,
past_keys: Optional[Tensor] = None,
past_values: Optional[Tensor] = None,
use_kv_cache: bool = False,
is_causal: bool = True,
) -> Tuple[Tensor, Optional[Tensor], Optional[Tensor]]:
"""
Forward pass of decoder layer.
Args:
x: Input tensor of the shape [batch size, sequence length, model dimension].
past_keys: Tensor storing the cached keys.
The shape of tensor is [batch size, number of key heads, sequence length, head dimension].
past_values: Tensor storing the cached values. The shape of the tensor is the same as 'past_keys'.
use_kv_cache: Cache the output of key and value projection layers for faster inference.
is_causal: Specifies whether to apply causal masking in scaled dot-product attention.
Returns:
The output of the same shape as the input, optionally with a tensor containing cached keys and values.
"""
# Pre-norm attention.
y_attn = self.attn_norm(x)
y_attn, past_keys, past_values = self.attn(
y_attn, past_keys, past_values, use_kv_cache, is_causal
)
y_attn = x + y_attn
# Pre-norm FFN.
y_ffn = y_attn + self.ffn(self.ffn_norm(y_attn))
return y_ffn, past_keys, past_values
@MODEL_REGISTRY.register(name="general_gpt", type="language_modeling")
class GeneralGPTModel(BaseLanguageModel):
"""General GPT model.
Args:
opts: Command-line arguments.
"""
def __init__(self, opts: argparse.Namespace, *args, **kwargs) -> None:
super().__init__(opts, *args, **kwargs)
model_name = getattr(opts, "model.language_modeling.general_gpt.model_name")
if model_name is None:
logger.error(
"Please specify model name using 'model.language_modeling.general_gpt.model_name' parameter in your configuration file."
)
vocab_size = getattr(opts, "model.language_modeling.general_gpt.vocab_size")
if vocab_size is None:
logger.error(
"Please specify vocabulary size using 'model.language_modeling.general_gpt.vocab_size' parameter in your configuration file."
)
max_context_length = getattr(
opts, "model.language_modeling.general_gpt.max_context_length"
)
if max_context_length is None:
logger.error(
"Please specify maximum context length using 'model.language_modeling.general_gpt.max_context_length' parameter in your configuration file."
)
padding_index = getattr(
opts, "model.language_modeling.general_gpt.padding_index"
)
model_config = GPTConfig.from_name(
model_name=model_name,
vocab_size=vocab_size,
max_context_length=max_context_length,
)
self.token_embeddings = Embedding(
opts,
embedding_dim=model_config.model_dim,
num_embeddings=model_config.vocab_size,
padding_idx=padding_index,
)
self.layers = nn.ModuleList(
TransformerDecoderLayer(
opts, model_config=model_config, layer_idx=layer_idx
)
for layer_idx in range(model_config.num_transformer_layers)
)
self.norm = get_normalization_layer(
opts,
num_features=model_config.model_dim,
norm_type=model_config.normalization_layer_name,
)
if model_config.share_input_output_layers:
self.classifier = None
else:
self.classifier = LinearLayer(
in_features=model_config.model_dim,
out_features=model_config.vocab_size,
bias=False,
)
self.reset_parameters(model_config=model_config)
self.num_transformer_layers = model_config.num_transformer_layers
@classmethod
def add_arguments(cls, parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
"""Add General GPT model arguments."""
if cls == GeneralGPTModel:
group = parser.add_argument_group(cls.__name__)
group.add_argument(
"--model.language-modeling.general-gpt.model-name",
type=str,
default=None,
choices=list(gpt_configs.keys()),
help="Name of the generative transformer-based LM model. Defaults to None (i.e., user need to specify the model name.).",
)
group.add_argument(
"--model.language-modeling.general-gpt.max-context-length",
type=int,
default=None,
help="Maximum context length. Defaults to None (i.e., user needs to specify the maximum contenxt length value.).",
)
group.add_argument(
"--model.language-modeling.general-gpt.vocab-size",
type=int,
default=None,
help="Vocabulary size. Defaults to None (i.e., user needs to specify the vocabulary size.).",
)
group.add_argument(
"--model.language-modeling.general-gpt.padding-index",
type=int,
default=None,
help="Padding index. Defaults to None (i.e., no padding).",
)
return parser
def forward(
self, model_input: Union[Tensor, Dict[str, Tensor]]
) -> Union[Tensor, Dict[str, Tensor]]:
"""Forward function of GPT model.
Args:
model_input: Input to the model. It can be a tensor or a dictionary.
In case of a tensor, the expected shape is [batch size, sequence length].
In case of a dictionary, the expected keys are 'input_ids', 'past_keys', 'past_values',
'use_kv_cache', and 'is_causal'. The shape of the values for each key is:
{
"input_ids": [batch size, sequence length],
"past_keys": [ [batch size, number of key heads, sequence length, head dimension] ]* number of transformer layers,
"past_values": [ [batch size, number of value heads, sequence length, head dimension] ] * number of transformer layers,
"use_kv_cache": boolean,
"is_causal": boolean,
}
where
'input_ids' represents input token indices.
'past_keys' and 'past_values' represents the cached tensor outputs of key and value branch in multi-head attention respectively.
These values can be None.
'use_kv_cache' indicates to use KV caching or not.
'is_causal' indicates to use causal masking in scaled dot-product attention or not.
Returns:
Output of the model.
1. When 'use_kv_cache' is enabled, a dictionary with 'logits', 'past_keys', and 'past_values' is returned.
The expected shape of the values is
{
"logits": [batch size, sequence length, vocabular size],
"past_keys": [ [batch size, number of key heads, sequence length, head dimension] ] * number of transformer layers,
"past_values": [ [batch size, number of value heads, sequence length, head dimension] ] * number of transformer layers,
}
2. Logits tensor is returned. The shape of logits tensor is [batch size, sequence length, vocabulary size].
...note:
1. For pre-training, 'model_input' is typically a tensor.
2. For inference, we have two scenarios.
2.a. Processing prefix or prompt: When dealing with a prefix or prompt, it is expected that the 'sequence length' is more than one and past keys
or values are None. If the intention of the user is to perform generation following a prefix, it's recommended to provide the prefix inputs
as a dictionary, specifying 'use_kv_cache=True', 'is_causal=True', 'past_keys=None', and 'past_values=None'. Otherwise, users should pass token
indices as a tensor.
2.b. Generation: In this case, 'sequence length' should be one. In other words, one token is generated at a time with KV caching.
Ideally, when using KV caching, 'is_causal' should be set to False.
The generation logic may vary from task to task and we rely on user for correctly passing the inputs.
"""
if isinstance(model_input, dict):
expected_input_keys = {
"input_ids",
"past_keys",
"past_values",
"use_kv_cache",
"is_causal",
}
assert expected_input_keys == set(
model_input.keys()
), f"Model input does not contain all keys. Expected keys are {expected_input_keys}, but got {set(model_input.keys())}."
input_ids = model_input["input_ids"]
past_keys = model_input["past_keys"]
past_values = model_input["past_values"]
use_kv_cache = model_input["use_kv_cache"]
is_causal = model_input["is_causal"]
if past_keys is None:
assert past_values is None
past_keys = [None] * self.num_transformer_layers
past_values = [None] * self.num_transformer_layers
elif isinstance(model_input, Tensor):
input_ids = model_input
past_keys = [None] * self.num_transformer_layers
past_values = [None] * self.num_transformer_layers
use_kv_cache = False
is_causal = True
else:
raise NotImplementedError(
f"Supported input types are either Tensor or Dictionary. Got: {type(model_input)}."
)
x = self.token_embeddings(input_ids)
for layer_idx in range(self.num_transformer_layers):
past_keys_layer_i = past_keys[layer_idx]
past_values_layer_i = past_values[layer_idx]
x, past_keys_layer_i, past_values_layer_i = self.layers[layer_idx](
x, past_keys_layer_i, past_values_layer_i, use_kv_cache, is_causal
)
# update the kv cache
past_keys[layer_idx] = past_keys_layer_i
past_values[layer_idx] = past_values_layer_i
x = self.norm(x)
if self.classifier is None:
logits = F.linear(x, weight=self.token_embeddings.weight)
else:
logits = self.classifier(x)
if use_kv_cache:
return {
"logits": logits,
"past_keys": past_keys,
"past_values": past_values,
}
else:
return logits
def get_fsdp_wrap_policy(
self,
) -> Callable[[torch.nn.Module, bool, int], bool]:
"""Returns the FSDP policy."""
general_gpt_auto_wrap_policy = functools.partial(
transformer_auto_wrap_policy,
transformer_layer_cls={TransformerDecoderLayer},
)
return general_gpt_auto_wrap_policy
def get_activation_checkpoint_submodule_class(self) -> Callable:
"""Returns the layer that should be used for activation checkpointing."""
return TransformerDecoderLayer
def reset_parameters(self, model_config: GPTConfig) -> None:
"""Initialize the parameters of language model.
Args:
model_config: Model configuration.
"""
for module in self.modules():
if isinstance(module, (LinearLayer, nn.Linear)):
std = module.in_features**-0.5
torch.nn.init.normal_(module.weight, mean=0.0, std=std)
if module.bias is not None:
torch.nn.init.zeros_(module.bias)
elif isinstance(module, (nn.Embedding, Embedding)):
std = module.embedding_dim**-0.5
torch.nn.init.normal_(module.weight, mean=0.0, std=std)
elif isinstance(module, norm_layers_tuple):
if module.weight is not None:
torch.nn.init.ones_(module.weight)
if hasattr(module, "bias") and module.bias is not None:
torch.nn.init.zeros_(module.bias)
model_dim = model_config.model_dim
n_layers = model_config.num_transformer_layers
# standard deviation of output layers in transformer block is scaled,
# following https://arxiv.org/pdf/2205.01068.pdf
std = (model_dim**-0.5) * ((2 * n_layers) ** -0.5)
for param_name, param in self.named_parameters():
if param_name.endswith("out_proj.weight") or param_name.endswith(
"ffn.proj_2.weight"
):
torch.nn.init.normal_(param, mean=0.0, std=std)
|
dbad29ee9f5e9d8331d174291430f2e8
|
{
"intermediate": 0.35025927424430847,
"beginner": 0.3643319606781006,
"expert": 0.28540876507759094
}
|
48,129
|
am running a django project under django rest framework and running it with runserver , i set a static folder to server an excel file under it to react frontend and set allowed_cors to all , when react tries to fetch the file from the static path ( localhost/static/excel_file ) it return a has been blocked by CORS cuz no access contro;: aallow origin header is present on the requested resource , how can i fix it ?
|
b965e7e70fd61bf75aadc300d2b91b60
|
{
"intermediate": 0.8267281651496887,
"beginner": 0.08556053787469864,
"expert": 0.08771131187677383
}
|
48,130
|
In this exercise, we are going to look at creating a Superclass / Subclass relationship for Students.
Our superclass will be the Student class and contain the following instance variables:
String name - Student’s first and last name
int id - Student’s ID number
double gpa - Student’s GPA
Our subclass will be StudentAthlete and contain the following instance variables:
String sport - Name of sport student plays
String level - The level at which the student plays (varsity, junior varsity, etc)
For this exercise, you will focus on the constructors for both classes. Remember that your subclass constructor needs to call the superclass constructor, so make sure you have the parameters to do that.
Note: For the autograder, your constructor needs to list the parameters in the order they are listed above.
The classes will have getters and a toString, but no setters. You can use these to test, but do not need to alter them.
Once completed, create two students as noted in the StudentTester class. public class Student
{
private String name;
private int id;
private double gpa;
// Constructor goes here
public String getName(){
return name;
}
public int getID(){
return id;
}
public double getGPA(){
return gpa;
}
public String toString(){
return name + " (" + id + ")";
}
}
public class StudentAthlete extends Student
{
private String sport;
private String level;
// Add the constructor here
public String getSport(){
return sport;
}
public String getLevel(){
return level;
}
@Override
public String toString(){
return super.toString() + " plays " + sport;
}
}
public class StudentTester
{
public static void main(String[] args)
{
/**
* Create a student with id # 123987, GPA: 2.56
*/
/**
* Create a student athlete with id # 987456, GPA: 3.47,
* who plays lacrosse for the varsity team
*/
// Print out both objects
}
}
|
a72bc1a6a2b923aa1df839ca0e959239
|
{
"intermediate": 0.23127776384353638,
"beginner": 0.5463882684707642,
"expert": 0.2223338931798935
}
|
48,131
|
User
const UploadModal: React.FC<UploadModalProps> = ({ open, onClose, files }) => {
const onFileUpload = () => {
files.forEach(file => {
const uploadTask = ref(storage, `uploads/${file.name}`).put(file);
uploadTask.on(
"state_changed",
snapshot => {},
error => {
console.error('Upload failed', error);
},
() => {
uploadTask.snapshot.ref.getDownloadURL().then(downloadURL => {
console.log('File available at', downloadURL);
// Here, you can now save the downloadURL to your database or state for sharing
});
}
);
});
};
return (
<Modal
open={open}
onClose={onClose}
aria-labelledby="upload-modal-title"
aria-describedby="upload-modal-description"
>
<Box sx={modalStyle}>
<Button onClick={onFileUpload} disabled={files.length === 0}>Upload Files</Button>
</Box>
</Modal>
);
};
export default UploadModal;
Add a MUI based progress circle, a check mark transition if its successful followed by the link, and a X if there is an error uploading followed by the error message. Additionally, I am using firebase auth, so use the unique uid from firebases const uid = user.uid; provided by auth.currentUser to upload files into a uid/file name folder.
|
e5d699e58e3e744dbac0f56cc56abfa9
|
{
"intermediate": 0.4697251617908478,
"beginner": 0.3119722902774811,
"expert": 0.21830259263515472
}
|
48,132
|
can you give me a python code for implementing tabnet
|
7ac0fadde61c3cec82c7ea2bc5bb304a
|
{
"intermediate": 0.48116442561149597,
"beginner": 0.15523484349250793,
"expert": 0.3636007606983185
}
|
48,133
|
suddenly with no reason after the project was worked successfully, i tried to serve again give me this D:\projects\oms_admin\node_modules\webpack-sources\lib\SizeOnlySource.js:16
return new Error(
^
Error: Content and Map of this Source is not available (only size() is supported)
at SizeOnlySource._error (D:\projects\oms_admin\node_modules\webpack-sources\lib\SizeOnlySource.js:16:10)
at SizeOnlySource.buffer (D:\projects\oms_admin\node_modules\webpack-sources\lib\SizeOnlySource.js:30:14)
at _isSourceEqual (D:\projects\oms_admin\node_modules\@angular-devkit\build-angular\node_modules\webpack\lib\util\source.js:21:51)
at isSourceEqual (D:\projects\oms_admin\node_modules\@angular-devkit\build-angular\node_modules\webpack\lib\util\source.js:43:17)
at Compilation.emitAsset (D:\projects\oms_admin\node_modules\@angular-devkit\build-angular\node_modules\webpack\lib\Compilation.js:4253:9)
at D:\projects\oms_admin\node_modules\@angular-devkit\build-angular\node_modules\webpack\lib\Compiler.js:566:28
at D:\projects\oms_admin\node_modules\@angular-devkit\build-angular\node_modules\webpack\lib\Compiler.js:1200:17
at eval (eval at create (D:\projects\oms_admin\node_modules\tapable\lib\HookCodeFactory.js:33:10), <anonymous>:13:1)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at runNextTicks (node:internal/process/task_queues:64:3)
|
57abb8d05de409192e8c9769cc1e57ac
|
{
"intermediate": 0.5322514772415161,
"beginner": 0.23960137367248535,
"expert": 0.22814717888832092
}
|
48,134
|
I would like to build a WP plugin to get familiar with WP, I don't have much experience with PHP
|
4e206c43f99530f972be2711a9c0fe0b
|
{
"intermediate": 0.5601149797439575,
"beginner": 0.23322321474552155,
"expert": 0.20666180551052094
}
|
48,135
|
I would like to build a WP plugin to get familiar with WP, I don’t have much experience with PHP
what should i build?
|
8ee1d564b9af5e7e6c49939a99c0a415
|
{
"intermediate": 0.5390779972076416,
"beginner": 0.31684499979019165,
"expert": 0.14407697319984436
}
|
48,136
|
how to make something liek this
[
"test item",
"test category" : {
"test item 2"
}
]
|
1aadf2ac7de2bc7bc15f34f2220453a9
|
{
"intermediate": 0.3090571165084839,
"beginner": 0.32061269879341125,
"expert": 0.370330274105072
}
|
48,137
|
move ngb modal dialog to center of screen horizontally, it approximately move to left
|
17b47affd2bea8f52cc87e7b87991944
|
{
"intermediate": 0.36236611008644104,
"beginner": 0.2024621069431305,
"expert": 0.43517184257507324
}
|
48,138
|
c#:
i have a lot of servers and i have chat between servers like player joins player one and can send messages to other servers, what should i use for this dublex messaaging or how to name that? connections should be constanly established between servers and not only chatmessages are sent between
|
e7fa7376dfa273e43c15d9fdc800c898
|
{
"intermediate": 0.46295416355133057,
"beginner": 0.2583869993686676,
"expert": 0.2786588966846466
}
|
48,139
|
convert mssql table to entity framework - "CREATE TABLE [dbo].[ObjectInCategory](
[ObjectId] [uniqueidentifier] NOT NULL,
[ObjectTypeId] [uniqueidentifier] NOT NULL,
[IsDeleted] [bit] NOT NULL,
[CreationDate] [datetime] NOT NULL,
[DeleteDate] [datetime] NOT NULL,
[Ts] [timestamp] NOT NULL,
[ObjectInCategoryId] [uniqueidentifier] NOT NULL,
[CategoryId] [uniqueidentifier] NOT NULL)"
|
80379c41d85498b4e68b68c38e948024
|
{
"intermediate": 0.5225460529327393,
"beginner": 0.23452189564704895,
"expert": 0.2429320365190506
}
|
48,140
|
Convert to entity framework - "CREATE TABLE [dbo].[ElementPath](
[Path] [varchar](500) NULL,
[ElementId] [uniqueidentifier] NULL
)"
|
b097639ed2bb3153169d5f4e2495fe7e
|
{
"intermediate": 0.46537306904792786,
"beginner": 0.241585373878479,
"expert": 0.29304152727127075
}
|
48,141
|
I am trying to make a chat interface in wxWidgets C++ project, how would you tackle that? I want the left side panel to have the conversations and the rest I want to have the current selected chat. What widgets would you use to do this? How would you represent the conversations of the two participants and how would you differenciate between them?
|
518c3b2f6a146f754a5300b456caedd3
|
{
"intermediate": 0.5176844000816345,
"beginner": 0.217820942401886,
"expert": 0.26449474692344666
}
|
48,142
|
이뜌링 모음집 - 2021-06-15 221401
이뜌링 모음집 - 23106 t
이뜌링 모음집 - 23107 t
이뜌링 모음집 - 23108 t
이런 목록의 버튼을 눌리는 consol code를 만들고 싶다. 도와줘 아래는 해당 HTML을 가져왔다.
<div class="mg-blog-post-box"><div class="mg-header"><div class="mg-blog-category"><div class="mg-blog-category"><a class="newsup-categories category-color-1" href="https://s7.watchfreejavonline.co/category/live-webcam-korean-bj/" alt="View all posts in Korean">
Korean
</a></div></div><h1 class="title single"> <a title="Permalink to: 이뜌링 모음집">
이뜌링 모음집</a></h1><div class="media mg-info-author-block"><div class="media-body"></div></div></div><article class="small single"><section id="custom_html-72" class="widget_text widget widget_custom_html"><div class="textwidget custom-html-widget"><div class="banners-container"><div class="banner"> <script data-cfasync="false" type="text/javascript" src="//pk910324e.com/lv/esnk/2004265/code.js" async="" class="__clb-2004265"></script> </div></div></div></section><div class="video_player"><iframe src="https://xxembed.com/p/12z4l8" frameborder="0" marginwidth="0" marginheight="0" scrolling="NO" width="640" height="360" allowfullscreen="" __idm_id__="65537"></iframe> <script>var vid1 = "<IFRAME SRC=\"https:\/\/xxembed.com\/p\/12z4l8\" FRAMEBORDER=0 MARGINWIDTH=0 MARGINHEIGHT=0 SCROLLING=NO WIDTH=640 HEIGHT=360 allowfullscreen><\/IFRAME>";
$(document).ready(function(){
//$('.video_player > iframe').remove();
$("#reload_button").hide(); // hide refresh button at start
$('.img_player').click(function(){ // Add Ad
//window.open("https://satisfactorilybewitchgreatness.com/wfm9wreipd?key=b29bfe175a8e73930083198952d02d09");
$('.img_player').hide();
$('.video_player').prepend(vid1);
$("#reload_button").show();
});
$("#reload_button").click(function() {
$('.video_player > iframe').remove();
$('.video_player').prepend(vid1);
});
});</script> <img class="img_player" src="/wp-content/uploads/2020/09/playvideo.png" width="100%" style="display: none;"><div style="text-align: center;">
<a class="btn btn-success" href="https://link-to.net/1077184/592.0079368689959/dynamic/?r=aHR0cHM6Ly94eGVtYmVkLmNvbS9wLzEyejRsOA==" target="_blank" _target="blank">다운로드</a>
<i id="reload_button" class="fa fa-refresh" aria-hidden="true" style="background-color: red; padding: 10px; border-radius: 50%; color: white; font-size: 24px;"></i></div></div><p><img fetchpriority="high" decoding="async" class="alignnone size-medium wp-image-79619" src="https://s7.watchfreejavonline.co/wp-content/uploads/2024/04/이뜌링-모음집추가자료-사카시섹-영상-225x300.jpg" alt="" width="225" height="300" srcset="https://s7.watchfreejavonline.co/wp-content/uploads/2024/04/이뜌링-모음집추가자료-사카시섹-영상-225x300.jpg 225w, https://s7.watchfreejavonline.co/wp-content/uploads/2024/04/이뜌링-모음집추가자료-사카시섹-영상-768x1024.jpg 768w, https://s7.watchfreejavonline.co/wp-content/uploads/2024/04/이뜌링-모음집추가자료-사카시섹-영상.jpg 1108w" sizes="(max-width: 225px) 100vw, 225px"></p><p> </p> <script>function pinIt()
{
var e = document.createElement('script');
e.setAttribute('type','text/javascript');
e.setAttribute('charset','UTF-8');
e.setAttribute('src','https://assets.pinterest.com/js/pinmarklet.js?r='+Math.random()*99999999);
document.body.appendChild(e);
}</script> <div class="post-share"><div class="post-share-icons cf">
<a href="https://www.facebook.com/sharer.php?u=https%3A%2F%2Fs7.watchfreejavonline.co%2F%25ec%259d%25b4%25eb%259c%258c%25eb%25a7%2581-%25eb%25aa%25a8%25ec%259d%258c%25ec%25a7%2591%25ec%25b6%2594%25ea%25b0%2580%25ec%259e%2590%25eb%25a3%258c-%25ec%2582%25ac%25ec%25b9%25b4%25ec%258b%259c%25ec%2584%25b9-%25ec%2598%2581%25ec%2583%2581%2F" class="link facebook" target="_blank">
<i class="fab fa-facebook"></i></a>
<a href="http://twitter.com/share?url=https%3A%2F%2Fs7.watchfreejavonline.co%2F%25ec%259d%25b4%25eb%259c%258c%25eb%25a7%2581-%25eb%25aa%25a8%25ec%259d%258c%25ec%25a7%2591%25ec%25b6%2594%25ea%25b0%2580%25ec%259e%2590%25eb%25a3%258c-%25ec%2582%25ac%25ec%25b9%25b4%25ec%258b%259c%25ec%2584%25b9-%25ec%2598%2581%25ec%2583%2581%2F&text=%E1%84%8B%E1%85%B5%E1%84%84%E1%85%B2%E1%84%85%E1%85%B5%E1%86%BC%20%E1%84%86%E1%85%A9%E1%84%8B%E1%85%B3%E1%86%B7%E1%84%8C%E1%85%B5%E1%86%B8%5B%E1%84%8E%E1%85%AE%E1%84%80%E1%85%A1%E1%84%8C%E1%85%A1%E1%84%85%E1%85%AD%20%E1%84%89%E1%85%A1%E1%84%8F%E1%85%A1%E1%84%89%E1%85%B5%2B%E1%84%89%E1%85%A6%E1%86%A8%20%E1%84%8B%E1%85%A7%E1%86%BC%E1%84%89%E1%85%A1%E1%86%BC%5D" class="link x-twitter" target="_blank">
<i class="fa-brands fa-x-twitter"></i></a>
<a href="mailto:?subject=이뜌링%20모음집%5B추가자료%20사카시+섹%20영상%5D&body=https%3A%2F%2Fs7.watchfreejavonline.co%2F%25ec%259d%25b4%25eb%259c%258c%25eb%25a7%2581-%25eb%25aa%25a8%25ec%259d%258c%25ec%25a7%2591%25ec%25b6%2594%25ea%25b0%2580%25ec%259e%2590%25eb%25a3%258c-%25ec%2582%25ac%25ec%25b9%25b4%25ec%258b%259c%25ec%2584%25b9-%25ec%2598%2581%25ec%2583%2581%2F" class="link email" target="_blank">
<i class="fas fa-envelope"></i></a><a href="https://www.linkedin.com/sharing/share-offsite/?url=https%3A%2F%2Fs7.watchfreejavonline.co%2F%25ec%259d%25b4%25eb%259c%258c%25eb%25a7%2581-%25eb%25aa%25a8%25ec%259d%258c%25ec%25a7%2591%25ec%25b6%2594%25ea%25b0%2580%25ec%259e%2590%25eb%25a3%258c-%25ec%2582%25ac%25ec%25b9%25b4%25ec%258b%259c%25ec%2584%25b9-%25ec%2598%2581%25ec%2583%2581%2F&title=%E1%84%8B%E1%85%B5%E1%84%84%E1%85%B2%E1%84%85%E1%85%B5%E1%86%BC%20%E1%84%86%E1%85%A9%E1%84%8B%E1%85%B3%E1%86%B7%E1%84%8C%E1%85%B5%E1%86%B8%5B%E1%84%8E%E1%85%AE%E1%84%80%E1%85%A1%E1%84%8C%E1%85%A1%E1%84%85%E1%85%AD%20%E1%84%89%E1%85%A1%E1%84%8F%E1%85%A1%E1%84%89%E1%85%B5%2B%E1%84%89%E1%85%A6%E1%86%A8%20%E1%84%8B%E1%85%A7%E1%86%BC%E1%84%89%E1%85%A1%E1%86%BC%5D" class="link linkedin" target="_blank">
<i class="fab fa-linkedin"></i></a><a href="https://telegram.me/share/url?url=https%3A%2F%2Fs7.watchfreejavonline.co%2F%25ec%259d%25b4%25eb%259c%258c%25eb%25a7%2581-%25eb%25aa%25a8%25ec%259d%258c%25ec%25a7%2591%25ec%25b6%2594%25ea%25b0%2580%25ec%259e%2590%25eb%25a3%258c-%25ec%2582%25ac%25ec%25b9%25b4%25ec%258b%259c%25ec%2584%25b9-%25ec%2598%2581%25ec%2583%2581%2F&text&title=%E1%84%8B%E1%85%B5%E1%84%84%E1%85%B2%E1%84%85%E1%85%B5%E1%86%BC%20%E1%84%86%E1%85%A9%E1%84%8B%E1%85%B3%E1%86%B7%E1%84%8C%E1%85%B5%E1%86%B8%5B%E1%84%8E%E1%85%AE%E1%84%80%E1%85%A1%E1%84%8C%E1%85%A1%E1%84%85%E1%85%AD%20%E1%84%89%E1%85%A1%E1%84%8F%E1%85%A1%E1%84%89%E1%85%B5%2B%E1%84%89%E1%85%A6%E1%86%A8%20%E1%84%8B%E1%85%A7%E1%86%BC%E1%84%89%E1%85%A1%E1%86%BC%5D" class="link telegram" target="_blank">
<i class="fab fa-telegram"></i></a><a href="javascript:pinIt();" class="link pinterest"><i class="fab fa-pinterest"></i></a><a class="print-r" href="javascript:window.print()"> <i class="fas fa-print"></i></a></div></div><div class="clearfix mb-3"></div><nav class="navigation post-navigation" aria-label="Posts"><h2 class="screen-reader-text">Post navigation</h2><div class="nav-links"><div class="nav-previous"><a href="https://s7.watchfreejavonline.co/%ec%a7%9c%eb%af%b8-%eb%b0%a9%ec%85%80-%eb%aa%a8%ec%9d%8c/" rel="prev">짜미 방셀 모음<div class="fa fa-angle-double-right"></div><span></span></a></div><div class="nav-next"><a href="https://s7.watchfreejavonline.co/%ec%84%9c%eb%a6%b0%eb%b0%95%ec%86%8c%ec%97%b0/" rel="next"><div class="fa fa-angle-double-left"></div><span></span> 서린(박소연)</a></div></div></nav></article></div>
|
bbb6ac6491abdded33eba20316ad3aff
|
{
"intermediate": 0.27507302165031433,
"beginner": 0.5232821106910706,
"expert": 0.2016448676586151
}
|
48,143
|
How to read uncommited entity framework
|
df40265d7ff2125a146f135d61b50d85
|
{
"intermediate": 0.5265507698059082,
"beginner": 0.2007337212562561,
"expert": 0.2727155089378357
}
|
48,144
|
fix this python script : async def twin_range_filter(symbol, timeframe):
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
return None
# Ambil data candlestick dari MetaTrader
candles = mt5.copy_rates_from_pos(symbol, timeframe, 0, 100)
# Buat DataFrame dari data candlestick
df = pd.DataFrame(candles)
close = df['close']
def smoothrng(x, t, m):
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=t) * m
return smoothrng.fillna(0)
per1 = 27
mult1 = 1.6
per2 = 55
mult2 = 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
rngfilt = rngfilt.ffill()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = np.zeros(len(df))
FLB = np.zeros(len(df))
FUB[0] = STR.iloc[0]
FLB[0] = STS.iloc[0]
for i in range(1, len(df)):
FUB[i] = np.where((STR[i] < STR[i-1]) | (close[i-1] > FUB[i-1]), STR[i], FUB[i-1])
FLB[i] = np.where((STS[i] > STS[i-1]) | (close[i-1] < FLB[i-1]), STS[i], FLB[i-1])
TRF = np.zeros(len(df))
TRF[0] = FUB[0] # Initialize TRF with the first value of FUB
for i in range(1, len(df)):
if (np.roll(TRF, 1)[i] == FUB[i]) and (close[i] <= FUB[i]):
TRF[i] = FUB[i]
elif (np.roll(TRF, 1)[i] == FUB[i]) and (close[i] >= FUB[i]):
TRF[i] = FLB[i]
elif (np.roll(TRF, 1)[i] == FLB[i]) and (close[i] >= FLB[i]):
TRF[i] = FLB[i]
elif (np.roll(TRF, 1)[i] == FLB[i]) and (close[i] <= FLB[i]):
TRF[i] = FUB[i]
else:
TRF[i] = FUB[i] # Default to FUB if none of the conditions match
long_signal = (close > np.roll(TRF, 1))
short_signal = (close < np.roll(TRF, 1))
df['TRF'] = TRF
df['long_signal'] = long_signal
df['short_signal'] = short_signal
# Filtering signals to print:
if df.iloc[-1]['long_signal']:
print("Current signal: BUY")
elif df.iloc[-1]['short_signal']:
print("Current signal: SELL")
else:
print("No clear signal"),,,,,, bcs the signal generate from this script not same with pine script from trading view indicator, this script : // This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
//@version=5
indicator(title='Twin Range Filter', overlay=true, timeframe='')
source = input(defval=close, title='Source')
showsignals = input(title='Show Buy/Sell Signals ?', defval=true)
per1 = input.int(defval=27, minval=1, title='Fast period')
mult1 = input.float(defval=1.6, minval=0.1, title='Fast range')
per2 = input.int(defval=55, minval=1, title='Slow period')
mult2 = input.float(defval=2, minval=0.1, title='Slow range')
smoothrng(x, t, m) =>
wper = t * 2 - 1
avrng = ta.ema(math.abs(x - x[1]), t)
smoothrng = ta.ema(avrng, wper) * m
smoothrng
smrng1 = smoothrng(source, per1, mult1)
smrng2 = smoothrng(source, per2, mult2)
smrng = (smrng1 + smrng2) / 2
rngfilt(x, r) =>
rngfilt = x
rngfilt := x > nz(rngfilt[1]) ? x - r < nz(rngfilt[1]) ? nz(rngfilt[1]) : x - r : x + r > nz(rngfilt[1]) ? nz(rngfilt[1]) : x + r
rngfilt
filt = rngfilt(source, smrng)
upward = 0.0
upward := filt > filt[1] ? nz(upward[1]) + 1 : filt < filt[1] ? 0 : nz(upward[1])
downward = 0.0
downward := filt < filt[1] ? nz(downward[1]) + 1 : filt > filt[1] ? 0 : nz(downward[1])
STR = filt + smrng
STS = filt - smrng
FUB = 0.0
FUB := STR < nz(FUB[1]) or close[1] > nz(FUB[1]) ? STR : nz(FUB[1])
FLB = 0.0
FLB := STS > nz(FLB[1]) or close[1] < nz(FLB[1]) ? STS : nz(FLB[1])
TRF = 0.0
TRF := nz(TRF[1]) == FUB[1] and close <= FUB ? FUB : nz(TRF[1]) == FUB[1] and close >= FUB ? FLB : nz(TRF[1]) == FLB[1] and close >= FLB ? FLB : nz(TRF[1]) == FLB[1] and close <= FLB ? FUB : FUB
long = ta.crossover(close, TRF)
short = ta.crossunder(close, TRF)
plotshape(showsignals and long, title='Long', text='BUY', style=shape.labelup, textcolor=color.white, size=size.tiny, location=location.belowbar, color=color.rgb(0, 19, 230))
plotshape(showsignals and short, title='Short', text='SELL', style=shape.labeldown, textcolor=color.white, size=size.tiny, location=location.abovebar, color=color.rgb(0, 19, 230))
alertcondition(long, title='Long', message='Long')
alertcondition(short, title='Short', message='Short')
Trfff = plot(TRF)
mPlot = plot(ohlc4, title='', style=plot.style_circles, linewidth=0)
longFillColor = close > TRF ? color.green : na
shortFillColor = close < TRF ? color.red : na
fill(mPlot, Trfff, title='UpTrend Highligter', color=longFillColor, transp=90)
fill(mPlot, Trfff, title='DownTrend Highligter', color=shortFillColor, transp=90)
|
f686f9f3cd9f75ae23e83c177541464b
|
{
"intermediate": 0.3431336283683777,
"beginner": 0.39342987537384033,
"expert": 0.26343652606010437
}
|
48,145
|
entity framework add read uncommited to every query entity framework
|
e5ff87c11d2702448c903322c0810f59
|
{
"intermediate": 0.644354522228241,
"beginner": 0.21372148394584656,
"expert": 0.1419239491224289
}
|
48,146
|
entity framework set read uncommited to data context entity framework
|
015427bd411f0d40f6c2bbeac14542c9
|
{
"intermediate": 0.5572892427444458,
"beginner": 0.2541193664073944,
"expert": 0.18859143555164337
}
|
48,147
|
play typing of the dead on linux
|
fb3ae0a1c94f68926ae19f421b450738
|
{
"intermediate": 0.30916693806648254,
"beginner": 0.41596680879592896,
"expert": 0.2748662233352661
}
|
48,148
|
现在给你很多的大表,现在请你写hivesql来统一判断哪些表是空的,哪些表是不为空的,这些大表的表名是下面这些:hll_dm.dm_app_point_motion_trend_v1_1h_in
hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in
hll_dwb.dwb_order_expose_chain_1d_in
hll_dm.dm_compass_user_funnel_behavior_1d_in
hll_dwb.dwb_sensor_driver_order_expo_click_slide_detail_1d_in
hll_dwb.dwb_driver_compass_funnel_expo_1d_in
hll_dws.dws_user_order_create_ib_1d_tm
hll_dws.dws_user_sensor_tags_1d_in
hll_dm.dm_profile_driver_extend_1d_tm
hll_dws.dws_user_order_executed_sc_1d_tm
hll_dws.dws_user_order_create_sc_1d_tm
hll_dws.dws_app_abtest_user_p1_1d_in
hll_dm.dm_app_abtest_driver_idx_1d_in
hll_dwb.dwb_driver_order_grab_1d_in
hll_dws.dws_user_tag_operation_1d_tm
hll_dm.dm_app_point_hot_words_v1_1h_in
hll_dm.dm_multi_bus_user_idx_sum_p0_90d_1d_in
hll_dws.dws_user_idx_sum_p0_90d_1d_in
hll_dm.dm_app_abtest_user_idx_p1_1d_in
hll_dws.dws_order_createdate_idx_p1_1d_in
hll_dwb.dwb_user_compass_funnel_start_1d_in
hll_dwb.dwb_lbs_compass_push_order_driver_area_1d_in
hll_dwb.dwb_app_abtest_user_tags_p1_1d_in
hll_dws.dws_driver_market_1d_tm
hll_dwb.dwb_driver_grab_order_idx_city_funnel_base_1d_in
hll_dws.dws_app_abtest_user_1d_in
hll_dm.dm_app_abtest_user_idx_1d_in
hll_bi_cf.dsp_duoyewu_kexin_kpi_city_final
hll_dwb.dwb_driver_compass_funnel_push_1d_in
hll_dwb.dwb_user_compass_funnel_evaluate_1d_in
hll_dwb.dwb_user_tags_p1_1d_in
hll_dws.dws_user_coupon_1d_tm
hll_dws.dws_driver_common_idx_sum_p1_90d_1d_in
hll_dws.dws_user_order_complete_move_wf_1d_tm
hll_dws.dws_move_porter_login_orderexpo_1d_tm
hll_dws.dws_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_trade_order_1d_in
hll_dwb.dwb_driver_order_complete_1d_in
hll_dwb.dwb_app_abtest_user_aborder_1d_in
hll_dwb.dwb_user_tags_1d_in
hll_dws.dws_driver_order_grab_1d_tm
hll_dws.dws_user_business_line_1d_tm
hll_dwb.dwb_order_ltl_new_base_1d_tm
hll_dwb.dwb_user_compass_funnel_order_1d_in
hll_dwb.dwb_order_tags_p1_1d_in
hll_dws.dws_order_idx_sum_p0_90d_1d_in
hll_dwb.dwb_order_tags_1d_in
hll_dwb.dwb_app_abtest_driver_order_1d_in
hll_dws.dws_driver_app_behavior_extend_1d_tm
|
e6dbb3b8dbaa828b42f47e90064e8a1d
|
{
"intermediate": 0.33905577659606934,
"beginner": 0.43175485730171204,
"expert": 0.22918936610221863
}
|
48,149
|
whats a query string parameters? what is nc=1?
and how to send them via python's requests?
|
9d5c3c8038688d69194263c951b868ac
|
{
"intermediate": 0.6273849606513977,
"beginner": 0.17382800579071045,
"expert": 0.19878706336021423
}
|
48,150
|
ossim server linux server scan username password and private key. on private key are password protected
|
69b5d4f65d4cb941848f5533715fa1b5
|
{
"intermediate": 0.35762256383895874,
"beginner": 0.1887214183807373,
"expert": 0.45365604758262634
}
|
48,151
|
using System;
using PlayerScript.Input;
using UnityEngine;
namespace PlayerScript
{
public class PlayerCamera : MonoBehaviour
{
public event Action<float> OnCameraRotate;
private static PlayerCamera s_Instance { get; set; }
public static PlayerCamera Instance => s_Instance;
public Transform target;
public float maxDistance = 10f;
public float sens = 1f;
public float verticalOffset = 2f;
public float collisionRadius = 0.2f;
private Vector3 _offset;
private float _xRotation;
private float _yRotation;
private void Awake()
{
s_Instance = this;
}
private void Start()
{
Cursor.visible = false;
Cursor.lockState = CursorLockMode.Locked;
_offset = transform.position - target.position;
}
private void Update()
{
var lookInput = PlayerInput.Instance.Look;
Rotate(lookInput);
}
private void Rotate(Vector2 look)
{
_xRotation += look.y * Time.deltaTime * sens;
_xRotation = Mathf.Clamp(_xRotation, -45f, 45f);
_yRotation += look.x * Time.deltaTime * sens;
transform.localRotation = Quaternion.Euler(_xRotation, 0, 0);
target.localRotation = Quaternion.Euler(0, _yRotation, 0);
var newPosition = target.position + target.TransformDirection(Vector3.forward * _offset.magnitude);
newPosition.y += verticalOffset * Mathf.Sin(_xRotation * Mathf.Deg2Rad);
if (Physics.SphereCast(target.position, collisionRadius, -transform.forward, out var hit, maxDistance, -1))
{
newPosition = hit.point + hit.normal * collisionRadius;
}
else
{
newPosition = target.position + target.TransformDirection(_offset);
newPosition.y += verticalOffset * Mathf.Sin(_xRotation * Mathf.Deg2Rad);
}
transform.position = newPosition;
OnCameraRotate?.Invoke(_yRotation);
}
}
}
исправь, сделай колизию лучше
|
6cd22e539555f428fd371f9ba3d7a53a
|
{
"intermediate": 0.32165947556495667,
"beginner": 0.3886631727218628,
"expert": 0.2896774411201477
}
|
48,152
|
现在给你很多的大表,现在请你写hivesql来统一判断哪些表是空的,哪些表是不为空的,记住要判断所有的表,不要遗漏,这些大表的表名是下面这些:hll_dm.dm_app_point_motion_trend_v1_1h_in
hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in
hll_dwb.dwb_order_expose_chain_1d_in
hll_dm.dm_compass_user_funnel_behavior_1d_in
hll_dwb.dwb_sensor_driver_order_expo_click_slide_detail_1d_in
hll_dwb.dwb_driver_compass_funnel_expo_1d_in
hll_dws.dws_user_order_create_ib_1d_tm
hll_dws.dws_user_sensor_tags_1d_in
hll_dm.dm_profile_driver_extend_1d_tm
hll_dws.dws_user_order_executed_sc_1d_tm
hll_dws.dws_user_order_create_sc_1d_tm
hll_dws.dws_app_abtest_user_p1_1d_in
hll_dm.dm_app_abtest_driver_idx_1d_in
hll_dwb.dwb_driver_order_grab_1d_in
hll_dws.dws_user_tag_operation_1d_tm
hll_dm.dm_app_point_hot_words_v1_1h_in
hll_dm.dm_multi_bus_user_idx_sum_p0_90d_1d_in
hll_dws.dws_user_idx_sum_p0_90d_1d_in
hll_dm.dm_app_abtest_user_idx_p1_1d_in
hll_dws.dws_order_createdate_idx_p1_1d_in
hll_dwb.dwb_user_compass_funnel_start_1d_in
hll_dwb.dwb_lbs_compass_push_order_driver_area_1d_in
hll_dwb.dwb_app_abtest_user_tags_p1_1d_in
hll_dws.dws_driver_market_1d_tm
hll_dwb.dwb_driver_grab_order_idx_city_funnel_base_1d_in
hll_dws.dws_app_abtest_user_1d_in
hll_dm.dm_app_abtest_user_idx_1d_in
hll_bi_cf.dsp_duoyewu_kexin_kpi_city_final
hll_dwb.dwb_driver_compass_funnel_push_1d_in
hll_dwb.dwb_user_compass_funnel_evaluate_1d_in
hll_dwb.dwb_user_tags_p1_1d_in
hll_dws.dws_user_coupon_1d_tm
hll_dws.dws_driver_common_idx_sum_p1_90d_1d_in
hll_dws.dws_user_order_complete_move_wf_1d_tm
hll_dws.dws_move_porter_login_orderexpo_1d_tm
hll_dws.dws_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_trade_order_1d_in
hll_dwb.dwb_driver_order_complete_1d_in
hll_dwb.dwb_app_abtest_user_aborder_1d_in
hll_dwb.dwb_user_tags_1d_in
hll_dws.dws_driver_order_grab_1d_tm
hll_dws.dws_user_business_line_1d_tm
hll_dwb.dwb_order_ltl_new_base_1d_tm
hll_dwb.dwb_user_compass_funnel_order_1d_in
hll_dwb.dwb_order_tags_p1_1d_in
hll_dws.dws_order_idx_sum_p0_90d_1d_in
hll_dwb.dwb_order_tags_1d_in
hll_dwb.dwb_app_abtest_driver_order_1d_in
hll_dws.dws_driver_app_behavior_extend_1d_tm
|
f89a0202cdccb3f5d39ba4fc7b4808ce
|
{
"intermediate": 0.31844618916511536,
"beginner": 0.46480926871299744,
"expert": 0.216744527220726
}
|
48,153
|
现在给你很多的大表,现在请你写hivesql来统一判断每一张表哪些表是空的,哪些表是不为空的,记住要判断所有的表,不要遗漏,不要省略任何的表,这些大表的表名是下面这些:hll_dm.dm_app_point_motion_trend_v1_1h_in
hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in
hll_dwb.dwb_order_expose_chain_1d_in
hll_dm.dm_compass_user_funnel_behavior_1d_in
hll_dwb.dwb_sensor_driver_order_expo_click_slide_detail_1d_in
hll_dwb.dwb_driver_compass_funnel_expo_1d_in
hll_dws.dws_user_order_create_ib_1d_tm
hll_dws.dws_user_sensor_tags_1d_in
hll_dm.dm_profile_driver_extend_1d_tm
hll_dws.dws_user_order_executed_sc_1d_tm
hll_dws.dws_user_order_create_sc_1d_tm
hll_dws.dws_app_abtest_user_p1_1d_in
hll_dm.dm_app_abtest_driver_idx_1d_in
hll_dwb.dwb_driver_order_grab_1d_in
hll_dws.dws_user_tag_operation_1d_tm
hll_dm.dm_app_point_hot_words_v1_1h_in
hll_dm.dm_multi_bus_user_idx_sum_p0_90d_1d_in
hll_dws.dws_user_idx_sum_p0_90d_1d_in
hll_dm.dm_app_abtest_user_idx_p1_1d_in
hll_dws.dws_order_createdate_idx_p1_1d_in
hll_dwb.dwb_user_compass_funnel_start_1d_in
hll_dwb.dwb_lbs_compass_push_order_driver_area_1d_in
hll_dwb.dwb_app_abtest_user_tags_p1_1d_in
hll_dws.dws_driver_market_1d_tm
hll_dwb.dwb_driver_grab_order_idx_city_funnel_base_1d_in
hll_dws.dws_app_abtest_user_1d_in
hll_dm.dm_app_abtest_user_idx_1d_in
hll_bi_cf.dsp_duoyewu_kexin_kpi_city_final
hll_dwb.dwb_driver_compass_funnel_push_1d_in
hll_dwb.dwb_user_compass_funnel_evaluate_1d_in
hll_dwb.dwb_user_tags_p1_1d_in
hll_dws.dws_user_coupon_1d_tm
hll_dws.dws_driver_common_idx_sum_p1_90d_1d_in
hll_dws.dws_user_order_complete_move_wf_1d_tm
hll_dws.dws_move_porter_login_orderexpo_1d_tm
hll_dws.dws_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_trade_order_1d_in
hll_dwb.dwb_driver_order_complete_1d_in
hll_dwb.dwb_app_abtest_user_aborder_1d_in
hll_dwb.dwb_user_tags_1d_in
hll_dws.dws_driver_order_grab_1d_tm
hll_dws.dws_user_business_line_1d_tm
hll_dwb.dwb_order_ltl_new_base_1d_tm
hll_dwb.dwb_user_compass_funnel_order_1d_in
hll_dwb.dwb_order_tags_p1_1d_in
hll_dws.dws_order_idx_sum_p0_90d_1d_in
hll_dwb.dwb_order_tags_1d_in
hll_dwb.dwb_app_abtest_driver_order_1d_in
hll_dws.dws_driver_app_behavior_extend_1d_tm
|
d7e4118c42f5a11714cf8508dd8b28c3
|
{
"intermediate": 0.27771130204200745,
"beginner": 0.42535701394081116,
"expert": 0.2969317138195038
}
|
48,154
|
detect the text from image without pytessract, option to select image using tkinter
|
f93ddcaa0d88c23b6307f0d53d01feba
|
{
"intermediate": 0.34648337960243225,
"beginner": 0.11261957883834839,
"expert": 0.5408970713615417
}
|
48,155
|
现在给你很多的大表,现在请你写hivesql来统一判断每一张表哪些表是空的,哪些表是不为空的,记住要判断所有的表,不要遗漏不要省略,也不要写脚本,只写hivesql就好,这些大表的表名是下面这些:hll_dm.dm_app_point_motion_trend_v1_1h_in
hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in
hll_dwb.dwb_order_expose_chain_1d_in
hll_dm.dm_compass_user_funnel_behavior_1d_in
hll_dwb.dwb_sensor_driver_order_expo_click_slide_detail_1d_in
hll_dwb.dwb_driver_compass_funnel_expo_1d_in
hll_dws.dws_user_order_create_ib_1d_tm
hll_dws.dws_user_sensor_tags_1d_in
hll_dm.dm_profile_driver_extend_1d_tm
hll_dws.dws_user_order_executed_sc_1d_tm
hll_dws.dws_user_order_create_sc_1d_tm
hll_dws.dws_app_abtest_user_p1_1d_in
hll_dm.dm_app_abtest_driver_idx_1d_in
hll_dwb.dwb_driver_order_grab_1d_in
hll_dws.dws_user_tag_operation_1d_tm
hll_dm.dm_app_point_hot_words_v1_1h_in
hll_dm.dm_multi_bus_user_idx_sum_p0_90d_1d_in
hll_dws.dws_user_idx_sum_p0_90d_1d_in
hll_dm.dm_app_abtest_user_idx_p1_1d_in
hll_dws.dws_order_createdate_idx_p1_1d_in
hll_dwb.dwb_user_compass_funnel_start_1d_in
hll_dwb.dwb_lbs_compass_push_order_driver_area_1d_in
hll_dwb.dwb_app_abtest_user_tags_p1_1d_in
hll_dws.dws_driver_market_1d_tm
hll_dwb.dwb_driver_grab_order_idx_city_funnel_base_1d_in
hll_dws.dws_app_abtest_user_1d_in
hll_dm.dm_app_abtest_user_idx_1d_in
hll_bi_cf.dsp_duoyewu_kexin_kpi_city_final
hll_dwb.dwb_driver_compass_funnel_push_1d_in
hll_dwb.dwb_user_compass_funnel_evaluate_1d_in
hll_dwb.dwb_user_tags_p1_1d_in
hll_dws.dws_user_coupon_1d_tm
hll_dws.dws_driver_common_idx_sum_p1_90d_1d_in
hll_dws.dws_user_order_complete_move_wf_1d_tm
hll_dws.dws_move_porter_login_orderexpo_1d_tm
hll_dws.dws_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_trade_order_1d_in
hll_dwb.dwb_driver_order_complete_1d_in
hll_dwb.dwb_app_abtest_user_aborder_1d_in
hll_dwb.dwb_user_tags_1d_in
hll_dws.dws_driver_order_grab_1d_tm
hll_dws.dws_user_business_line_1d_tm
hll_dwb.dwb_order_ltl_new_base_1d_tm
hll_dwb.dwb_user_compass_funnel_order_1d_in
hll_dwb.dwb_order_tags_p1_1d_in
hll_dws.dws_order_idx_sum_p0_90d_1d_in
hll_dwb.dwb_order_tags_1d_in
hll_dwb.dwb_app_abtest_driver_order_1d_in
hll_dws.dws_driver_app_behavior_extend_1d_tm
|
07f05576c58e06d8b6ce2324fd5ae5e6
|
{
"intermediate": 0.32218194007873535,
"beginner": 0.42789456248283386,
"expert": 0.2499234527349472
}
|
48,156
|
现在给你很多的大表,现在请你写完整的hivesql来统一判断每一张表哪些表是空的,哪些表是不为空的,记住要判断所有的表,不要遗漏不要省略,也不要写脚本,只写hivesql就好,这些大表的表名是下面这些:hll_dm.dm_app_point_motion_trend_v1_1h_in
hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in
hll_dwb.dwb_order_expose_chain_1d_in
hll_dm.dm_compass_user_funnel_behavior_1d_in
hll_dwb.dwb_sensor_driver_order_expo_click_slide_detail_1d_in
hll_dwb.dwb_driver_compass_funnel_expo_1d_in
hll_dws.dws_user_order_create_ib_1d_tm
hll_dws.dws_user_sensor_tags_1d_in
hll_dm.dm_profile_driver_extend_1d_tm
hll_dws.dws_user_order_executed_sc_1d_tm
hll_dws.dws_user_order_create_sc_1d_tm
hll_dws.dws_app_abtest_user_p1_1d_in
hll_dm.dm_app_abtest_driver_idx_1d_in
hll_dwb.dwb_driver_order_grab_1d_in
hll_dws.dws_user_tag_operation_1d_tm
hll_dm.dm_app_point_hot_words_v1_1h_in
hll_dm.dm_multi_bus_user_idx_sum_p0_90d_1d_in
hll_dws.dws_user_idx_sum_p0_90d_1d_in
hll_dm.dm_app_abtest_user_idx_p1_1d_in
hll_dws.dws_order_createdate_idx_p1_1d_in
hll_dwb.dwb_user_compass_funnel_start_1d_in
hll_dwb.dwb_lbs_compass_push_order_driver_area_1d_in
hll_dwb.dwb_app_abtest_user_tags_p1_1d_in
hll_dws.dws_driver_market_1d_tm
hll_dwb.dwb_driver_grab_order_idx_city_funnel_base_1d_in
hll_dws.dws_app_abtest_user_1d_in
hll_dm.dm_app_abtest_user_idx_1d_in
hll_bi_cf.dsp_duoyewu_kexin_kpi_city_final
hll_dwb.dwb_driver_compass_funnel_push_1d_in
hll_dwb.dwb_user_compass_funnel_evaluate_1d_in
hll_dwb.dwb_user_tags_p1_1d_in
hll_dws.dws_user_coupon_1d_tm
hll_dws.dws_driver_common_idx_sum_p1_90d_1d_in
hll_dws.dws_user_order_complete_move_wf_1d_tm
hll_dws.dws_move_porter_login_orderexpo_1d_tm
hll_dws.dws_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_trade_order_1d_in
hll_dwb.dwb_driver_order_complete_1d_in
hll_dwb.dwb_app_abtest_user_aborder_1d_in
hll_dwb.dwb_user_tags_1d_in
hll_dws.dws_driver_order_grab_1d_tm
hll_dws.dws_user_business_line_1d_tm
hll_dwb.dwb_order_ltl_new_base_1d_tm
hll_dwb.dwb_user_compass_funnel_order_1d_in
hll_dwb.dwb_order_tags_p1_1d_in
hll_dws.dws_order_idx_sum_p0_90d_1d_in
hll_dwb.dwb_order_tags_1d_in
hll_dwb.dwb_app_abtest_driver_order_1d_in
hll_dws.dws_driver_app_behavior_extend_1d_tm
|
d1c4103da82bf8f2037ded2fdcfbdc5d
|
{
"intermediate": 0.26017090678215027,
"beginner": 0.5260268449783325,
"expert": 0.2138022631406784
}
|
48,157
|
i have this now:
response = session.post(url, data=payload, headers=headers, files={'file[]': ('image.png', files)})
what if i dont want to send any files? what code would that be?
|
8f1566c6b3af70d131e64bf398bbd4c7
|
{
"intermediate": 0.5740837454795837,
"beginner": 0.23329457640647888,
"expert": 0.19262169301509857
}
|
48,158
|
以下是hivesql判断一张表是否为空的:学习下面的hivesql:SELECT
'hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in' as table_name,
CASE
WHEN COUNT(1) > 0 THEN 'Not empty'
ELSE 'empty'
END as status
FROM
(SELECT 1 FROM hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in where dt='2024-04-25' LIMIT 1) t
|
f29819489c4a360cc421d4798a031ff9
|
{
"intermediate": 0.3484835922718048,
"beginner": 0.37530580163002014,
"expert": 0.27621060609817505
}
|
48,159
|
以下是hivesql判断一张表是否为空的:学习下面的hivesql:SELECT
‘hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in’ as table_name,
CASE
WHEN COUNT(1) > 0 THEN ‘Not empty’
ELSE ‘empty’
END as status
FROM
(SELECT 1 FROM hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in where dt=‘2024-04-25’ LIMIT 1) t
|
0ab4c09d8a3cc58fa6879feed98ddbdc
|
{
"intermediate": 0.33117178082466125,
"beginner": 0.36566829681396484,
"expert": 0.3031599223613739
}
|
48,160
|
以下是hivesql判断一张表是否为空的:学习下面的hivesql:SELECT
‘hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in’ as table_name,
CASE
WHEN COUNT(1) > 0 THEN ‘Not empty’
ELSE ‘empty’
END as status
FROM
(SELECT 1 FROM hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in where dt=‘2024-04-25’ LIMIT 1) t
|
45ca1e3a1ba266faf8df29b3d7c1702d
|
{
"intermediate": 0.33117178082466125,
"beginner": 0.36566829681396484,
"expert": 0.3031599223613739
}
|
48,161
|
Vue 3 element plus. How to validate form on mount
|
7130a895613e7ee97bbb95fb2b71d511
|
{
"intermediate": 0.4764839708805084,
"beginner": 0.23880818486213684,
"expert": 0.28470781445503235
}
|
48,162
|
hll_dm.dm_app_point_motion_trend_v1_1h_in
hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in
hll_dwb.dwb_order_expose_chain_1d_in。以上是一些库表名,格式是“库名.表名”。现在希望你把上面的格式改成:hll_compare.库名__表名
|
a90d9cf1d22aa982afc65ba12df68d04
|
{
"intermediate": 0.35972192883491516,
"beginner": 0.2555314302444458,
"expert": 0.3847466707229614
}
|
48,163
|
Write a persuasive essay on why the 1st amendment is the most important
Amendment to the U.S.
Constitution: Amendment
Include three reasons to support your thesis.
Suggested Organization: Paragraph 1 - Introduction and thesis statement
Paragraph 2 - Reason 1 and
argument
Paragraph 3 - Reason 2 and
argument
Paragraph 4 - Reason 3 and
argument
Paragraph 5 - Conclusion
Write at least 300 words.
|
de6db05a91efb69c0985016ef795e965
|
{
"intermediate": 0.37556353211402893,
"beginner": 0.2765560746192932,
"expert": 0.34788036346435547
}
|
48,164
|
import asyncio
import sys
import numpy as np
from datetime import datetime
import MetaTrader5 as mt5
import talib
import pandas as pd
from telegram import Bot
## Telegram ----------------------------
TOKEN = '6015612448:AAFGB5C35wkCItukxTEJrWY3gyqZy-iK5r4'
CHAT_ID = '882283026'
bot = Bot(TOKEN)
## Metatrader ----------------------------
# login = 116237638
# server = "Exness-MT5Trial6"
login = 124385496
server = "Exness-MT5Trial7"
password = "748798lokaldeN#"
symbol = "XAUUSDm"
volume = 0.01
## Timing ----------------------------
timeframe = mt5.TIMEFRAME_M5
trend_macd = mt5.TIMEFRAME_M5
trend_tf = mt5.TIMEFRAME_M15
time_candle = 30 #second 3 min = 180s, 5 Min = 300s, 15 min = 900s, 20 min = 1200s
candle_close = 56 #timeframe in second
comment = "testing"
## Message Telegram, Timer ---------------------------------------------------------------------------------
async def send_message_async(message):
await bot.send_message(chat_id=CHAT_ID, text=message)
async def count_down(seconds):
for i in range(seconds, 0, -1):
print(i, end='', flush=True)
await asyncio.sleep(1)
print('\r', end='', flush=True)
print("OK!")
## Checking Side For OP ---------------------------------------------------------------------------------
async def macd(symbol, timeframe=trend_macd):
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
return None
candles = mt5.copy_rates_from_pos(symbol, timeframe, 0, 500)
df = pd.DataFrame(candles)
macd, signal, hist = talib.MACD(df['close'], fastperiod=12, slowperiod=26, signalperiod=9)
# print(f"MACD Line: {macd.iloc[-1]:.5f}")
# print(f"Signal Line: {signal.iloc[-1]:.5f}")
# print(f"MACD Histogram: {hist.iloc[-1]:.5f}")
if macd.iloc[-1] > signal.iloc[-1]:
print("MACD: Buy")
await send_message_async("MACD: Buy")
return "Buy"
elif macd.iloc[-1] < signal.iloc[-1]:
print("MACD: Sell")
await send_message_async("MACD: Sell")
return "Sell"
async def twin_range_filter(symbol, timeframe=trend_tf):
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
return None
candles = mt5.copy_rates_from_pos(symbol, timeframe, 0, 500)
df = pd.DataFrame(candles)
close = df['close']
def smoothrng(x, t, m):
wper = t * 2 - 1
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=wper) * m
return smoothrng
per1, mult1, per2, mult2 = 27, 1.6, 55, 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = [STR.iloc[0]]
FLB = [STS.iloc[0]]
for i in range(1, len(df)):
FUB.append(STR.iloc[i] if (STR.iloc[i] < STR.iloc[i-1]) or (close.iloc[i-1] > FUB[i-1]) else FUB[i-1])
FLB.append(STS.iloc[i] if (STS.iloc[i] > STS.iloc[i-1]) or (close.iloc[i-1] < FLB[i-1]) else FLB[i-1])
FUB = np.array(FUB)
FLB = np.array(FLB)
TRF = [FUB[0]]
for i in range(1, len(df)):
last_trf = TRF[-1]
if (last_trf == FUB[i-1] and close.iloc[i] <= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] <= FLB[i]):
TRF.append(FUB[i])
elif (last_trf == FUB[i-1] and close.iloc[i] >= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] >= FLB[i]):
TRF.append(FLB[i])
else:
TRF.append(FUB[i])
TRF = np.array(TRF)
long_signal = (close > np.roll(TRF, 1))[1:]
short_signal = (close < np.roll(TRF, 1))[1:]
df['TRF'] = TRF
df['long_signal'] = np.append([False], long_signal)
df['short_signal'] = np.append([False], short_signal)
if df.iloc[-1]['long_signal']:
print("Trend: BUY")
await send_message_async("Trend: Buy")
return "Buy"
elif df.iloc[-1]['short_signal']:
print("Trend: SELL")
await send_message_async("Trend: Sell")
return "Sell"
## Condition for OP ---------------------------------------------------------------------------------
async def detect_engulfing(symbol, timeframe):
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
return None
start_time = datetime.now()
start_seconds = start_time.second
wait_seconds = candle_close - start_seconds
print("Waiting for candle to close. Sleeping for", wait_seconds, "seconds")
await send_message_async(f"Waiting for candle to close: {wait_seconds} seconds")
await count_down(wait_seconds)
candles = mt5.copy_rates_from_pos(symbol, timeframe, 0, 2)
df = pd.DataFrame(candles)
for i in range(1, len(df)):
current = df.iloc[i].copy()
previous = df.iloc[i-1].copy()
if np.abs(current['open'] - previous['close']) > 0.005:
current['open'] = previous['close']
if previous['open'] > previous['close'] and \
current['close'] > current['open'] and \
current['close'] >= previous['open'] and \
previous['close'] >= current['open'] and \
current['close'] - current['open'] > previous['open'] - previous['close']:
print("Bullish Engulfing")
await send_message_async("Bullish Engulfing")
return "Bullish Engulfing"
elif previous['close'] > previous['open'] and \
current['open'] > current['close'] and \
current['open'] >= previous['close'] and \
previous['open'] >= current['close'] and \
current['open'] - current['close'] > previous['close'] - previous['open']:
print("Bearish Engulfing")
await send_message_async("Bearish Engulfing")
return "Bearish Engulfing"
else:
print("No Engulfing")
await send_message_async("No Engulfing")
return "No Engulfing"
async def parabolic_sar(symbol, timeframe):
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
return None
candles = mt5.copy_rates_from_pos(symbol, timeframe, 0, 100)
df = pd.DataFrame(candles)
df['SAR'] = talib.SAR(df['high'], df['low'], acceleration=0.02, maximum=0.2)
current_price = df['close'].iloc[-1]
current_sar = df['SAR'].iloc[-1]
if current_price > current_sar:
print("Parabolic SAR: Bullish")
await send_message_async("Parabolic SAR: Bullish")
return "Bullish"
else:
print("Parabolic SAR: Bearish")
await send_message_async("Parabolic SAR: Bearish")
return "Bearish"
async def bollinger_bands(symbol, timeframe):
if not mt5.initialize():
print("initialize() failed, error code =", mt5.last_error())
return None
candles = mt5.copy_rates_from_pos(symbol, timeframe, 0, 100)
df = pd.DataFrame(candles)
upper_band, middle_band, lower_band = talib.BBANDS(df['close'], timeperiod=20, nbdevup=2, nbdevdn=2, matype=0)
current_close = df['close'].iloc[-1]
# print(f"Upper Band: {upper_band.iloc[-1]:.5f}")
# print(f"Middle Band: {middle_band.iloc[-1]:.5f}")
# print(f"Lower Band: {lower_band.iloc[-1]:.5f}")
deviation = 0.5
if upper_band.iloc[-1] - deviation <= current_close <= upper_band.iloc[-1] + deviation:
print("Bollinger Bands: Price around upper band")
await send_message_async("Bollinger Bands: Price around upper band")
elif lower_band.iloc[-1] - deviation <= current_close <= lower_band.iloc[-1] + deviation:
print("Bollinger Bands: Price around lower band")
await send_message_async("Bollinger Bands: Price around lower band")
elif current_close > upper_band.iloc[-1]:
print("Bollinger Bands: Overbought")
await send_message_async("Bollinger Bands: Overbought")
elif current_close < lower_band.iloc[-1]:
print("Bollinger Bands: Oversold")
await send_message_async("Bollinger Bands: Oversold")
elif current_close > middle_band.iloc[-1]:
print("Bollinger Bands: Neutral UP")
elif current_close < middle_band.iloc[-1]:
print("Bollinger Bands: Neutral Down")
## OP ---------------------------------------------------------------------------------
async def execute_trade(symbol, timeframe, order_type):
engulfing = await detect_engulfing(symbol, timeframe)
psar = await parabolic_sar(symbol, timeframe)
if order_type == "buy" and engulfing == "Bullish Engulfing" and psar == "Bullish":
await send_order(order_type=order_type)
elif order_type == "sell" and engulfing == "Bearish Engulfing" and psar == "Bearish":
await send_order(order_type=order_type)
else:
print("Bad Candles & SAR")
await send_message_async("Bad Candles & SAR")
await main()
async def send_order(login=login, server=server, password=password, symbol=symbol, volume=volume, order_type=None):
if not mt5.initialize(login=login, server=server, password=password):
print("initialize() failed, error code =", mt5.last_error())
return
positions = mt5.positions_get(symbol=symbol)
for position in positions:
if position.profit > 0:
print("There's an open position with profit. New order will be canceled.")
await send_message_async("Existing position is profitable. New order canceled.")
mt5.shutdown()
return
action = mt5.TRADE_ACTION_DEAL
order_type = mt5.ORDER_TYPE_BUY if order_type == "buy" else mt5.ORDER_TYPE_SELL
result = mt5.order_send({
"action": action,
"symbol": symbol,
"volume": volume,
"type": order_type,
"price": mt5.symbol_info_tick(symbol).ask if order_type == mt5.ORDER_TYPE_BUY else mt5.symbol_info_tick(symbol).bid,
"deviation": 20,
"magic": 234000,
"comment": comment,
"type_time": mt5.ORDER_TIME_GTC,
"type_filling": mt5.ORDER_FILLING_FOK,
})
if result.retcode == mt5.TRADE_RETCODE_DONE:
print("Order successful")
await send_message_async("Order Position open")
await count_down(3)
else:
print("Order failed")
await send_message_async("Order Failed")
await count_down(3)
mt5.shutdown()
## Initial ---------------------------------------------------------------------------------
async def main():
while True:
try:
await send_message_async("Waiting.....")
macd_signal = await macd(symbol)
trend_signal = await twin_range_filter(symbol)
if macd_signal == "Buy" and trend_signal == "Sell":
print("Wait Candle for Sell")
await send_message_async("Wait Candle for Sell")
await execute_trade(symbol, timeframe, order_type="sell")
await count_down(5)
elif macd_signal == "Sell" and trend_signal == "Buy":
print("Wait Candle for Buy")
await send_message_async("Wait Candle for Buy")
await execute_trade(symbol, timeframe, order_type="buy")
await count_down(5)
else:
print("No suitable conditions found for trade execution.")
await send_message_async("Wait next MACD, No Trade open")
await count_down(5)
except Exception as e:
print("An error occurred:", e)
await send_message_async("An error occurred: " + str(e))
await count_down(5)
continue
# await twin_range_filter(symbol)
# await macd(symbol, timeframe)
# await bollinger_bands(symbol, timeframe)
# await execute_trade(symbol, timeframe, order_type="sell")
try:
asyncio.run(main())
except KeyboardInterrupt:
print(".....Script stopped by user. Exiting gracefully.....")
sys.exit(0)
|
eeb5f3750ec5ae60d200eaf59fb5ed99
|
{
"intermediate": 0.38881534337997437,
"beginner": 0.4451081156730652,
"expert": 0.16607648134231567
}
|
48,165
|
Use the following class to answer the
question.
public class Automobiles
{
private String myMandM;
private int myNumPassengers;
private double myEngineSize;
private boolean isHybrid;
public Automobiles()
{
myManoM = "none";
Which of the following lines of code does NOT compile?
A. Automobiles a3 = new Automobiles("vw", 8.0, 285.6, true);
B. Automobiles a2 = new Automobiles();
C. Automobiles a1 = new
Automobiles("Nova", 4, 360.5, false);
myNumPassengers = -1;
myEngineSize = 0.0;
isHybrid
false;
}
public Automobiles (String m, int np, double es, boolean h)
{
}
myMandM = m;
myNumPassengers = np;
myEngineSize = es;
isHybrid=h;
public String getMakeAndModel()
{
}
return myMandM;
public int getNumPassengers()
{
}
return myNumPassengers;
public double getEngineSize()
{
}
return myEngineSize;
public boolean getHybrid()
{
return isHybrid;
}
}
|
97406073ea4712c405ae27e951343bc5
|
{
"intermediate": 0.17165692150592804,
"beginner": 0.6715929508209229,
"expert": 0.1567501723766327
}
|
48,166
|
理解下面用hivesql判断表是否为空语句:SELECT ‘hll_compare.hll_dm__dm_app_point_motion_trend_v1_1h_in’ AS table_name
, CASE
WHEN COUNT(1) > 0 THEN ‘Not empty’
ELSE ‘empty’
END AS status
FROM (
SELECT 1
FROM hll_compare.hll_dm__dm_app_point_motion_trend_v1_1h_in
WHERE dt = ‘2024-04-25’
LIMIT 1
) t。现在给你更多的表名,请你按照上面给你hivesql一样的判断方法来写hivesql。只需要更改表名,其他的不要改变。这些表是:hll_compare.hll_dm__dm_app_point_motion_trend_v1_1h_in
hll_compare.hll_dwb__dwb_lbs_compass_expo_order_driver_area_1d_in
hll_compare.hll_dwb__dwb_order_expose_chain_1d_in
hll_compare.hll_dm__dm_compass_user_funnel_behavior_1d_in
hll_compare.hll_dwb__dwb_sensor_driver_order_expo_click_slide_detail_1d_in
|
46c1a07aab8807602b3cc3afe994acef
|
{
"intermediate": 0.34320592880249023,
"beginner": 0.3499491810798645,
"expert": 0.3068448603153229
}
|
48,167
|
现在给你很多的大表,现在请你写hivesql来统一判断每一张表哪些表是空的,哪些表是不为空的,记住要判断所有的表,不要遗漏不要省略,这些大表的表名是下面这些:hll_dm.dm_app_point_motion_trend_v1_1h_in
hll_dwb.dwb_lbs_compass_expo_order_driver_area_1d_in
hll_dwb.dwb_order_expose_chain_1d_in
hll_dm.dm_compass_user_funnel_behavior_1d_in
hll_dwb.dwb_sensor_driver_order_expo_click_slide_detail_1d_in
hll_dwb.dwb_driver_compass_funnel_expo_1d_in
hll_dws.dws_user_order_create_ib_1d_tm
hll_dws.dws_user_sensor_tags_1d_in
hll_dm.dm_profile_driver_extend_1d_tm
hll_dws.dws_user_order_executed_sc_1d_tm
hll_dws.dws_user_order_create_sc_1d_tm
hll_dws.dws_app_abtest_user_p1_1d_in
hll_dm.dm_app_abtest_driver_idx_1d_in
hll_dwb.dwb_driver_order_grab_1d_in
hll_dws.dws_user_tag_operation_1d_tm
hll_dm.dm_app_point_hot_words_v1_1h_in
hll_dm.dm_multi_bus_user_idx_sum_p0_90d_1d_in
hll_dws.dws_user_idx_sum_p0_90d_1d_in
hll_dm.dm_app_abtest_user_idx_p1_1d_in
hll_dws.dws_order_createdate_idx_p1_1d_in
hll_dwb.dwb_user_compass_funnel_start_1d_in
hll_dwb.dwb_lbs_compass_push_order_driver_area_1d_in
hll_dwb.dwb_app_abtest_user_tags_p1_1d_in
hll_dws.dws_driver_market_1d_tm
hll_dwb.dwb_driver_grab_order_idx_city_funnel_base_1d_in
hll_dws.dws_app_abtest_user_1d_in
hll_dm.dm_app_abtest_user_idx_1d_in
hll_bi_cf.dsp_duoyewu_kexin_kpi_city_final
hll_dwb.dwb_driver_compass_funnel_push_1d_in
hll_dwb.dwb_user_compass_funnel_evaluate_1d_in
hll_dwb.dwb_user_tags_p1_1d_in
hll_dws.dws_user_coupon_1d_tm
hll_dws.dws_driver_common_idx_sum_p1_90d_1d_in
hll_dws.dws_user_order_complete_move_wf_1d_tm
hll_dws.dws_move_porter_login_orderexpo_1d_tm
hll_dws.dws_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_order_1d_in
hll_dwb.dwb_app_abtest_user_trade_order_1d_in
hll_dwb.dwb_driver_order_complete_1d_in
hll_dwb.dwb_app_abtest_user_aborder_1d_in
hll_dwb.dwb_user_tags_1d_in
hll_dws.dws_driver_order_grab_1d_tm
hll_dws.dws_user_business_line_1d_tm
hll_dwb.dwb_order_ltl_new_base_1d_tm
hll_dwb.dwb_user_compass_funnel_order_1d_in
hll_dwb.dwb_order_tags_p1_1d_in
hll_dws.dws_order_idx_sum_p0_90d_1d_in
hll_dwb.dwb_order_tags_1d_in
hll_dwb.dwb_app_abtest_driver_order_1d_in
hll_dws.dws_driver_app_behavior_extend_1d_tm
|
c764970736a255df108936c4a9a7283c
|
{
"intermediate": 0.26672911643981934,
"beginner": 0.4248306155204773,
"expert": 0.308440238237381
}
|
48,168
|
在import math
import logging
from functools import partial
from collections import OrderedDict
from copy import deepcopy
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.models.layers import to_2tuple
from lib.models.layers.patch_embed import PatchEmbed, PatchEmbed_event, xcorr_depthwise
from .utils import combine_tokens, recover_tokens
from .vit import VisionTransformer
from ..layers.attn_blocks import CEBlock
import random
import numpy as np
_logger = logging.getLogger(__name__)
class VisionTransformerCE(VisionTransformer):
""" Vision Transformer with candidate elimination (CE) module
A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
Includes distillation token & head support for `DeiT: Data-efficient Image Transformers`
- https://arxiv.org/abs/2012.12877
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
num_heads=12, mlp_ratio=4., qkv_bias=True, representation_size=None, distilled=False,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None,
act_layer=None, weight_init='',
ce_loc=None, ce_keep_ratio=None):
"""
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
num_classes (int): number of classes for classification head
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
mlp_ratio (int): ratio of mlp hidden dim to embedding dim
qkv_bias (bool): enable bias for qkv if True
representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
distilled (bool): model includes a distillation token and head as in DeiT models
drop_rate (float): dropout rate
attn_drop_rate (float): attention dropout rate
drop_path_rate (float): stochastic depth rate
embed_layer (nn.Module): patch embedding layer
norm_layer: (nn.Module): normalization layer
weight_init: (str): weight init scheme
"""
# super().__init__()
super().__init__()
if isinstance(img_size, tuple):
self.img_size = img_size
else:
self.img_size = to_2tuple(img_size)
self.patch_size = patch_size
self.in_chans = in_chans
self.num_classes = num_classes
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
self.num_tokens = 2 if distilled else 1
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
act_layer = act_layer or nn.GELU
self.patch_embed = embed_layer(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
self.pos_drop = nn.Dropout(p=drop_rate)
self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
# self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
# self.pos_embed_event_z = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=3, stride=1)
# attn = CrossAttn(768, 4, 3072, 0.1, 'relu')
# self.cross_attn = Iter_attn(attn, 2)
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
blocks = []
ce_index = 0
self.ce_loc = ce_loc
for i in range(depth):
ce_keep_ratio_i = 1.0
if ce_loc is not None and i in ce_loc:
ce_keep_ratio_i = ce_keep_ratio[ce_index]
ce_index += 1
blocks.append(
CEBlock(
dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop=drop_rate,
attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer,
keep_ratio_search=ce_keep_ratio_i)
)
self.blocks = nn.Sequential(*blocks)
self.norm = norm_layer(embed_dim)
self.init_weights(weight_init)
def masking_fea(self,z, event_z, x, event_x, ratio=0.8 ):
b,nz,c = z.shape
b,nez,c = event_z.shape
b,nx,c = x.shape
b,nex,c = event_x.shape
assert(nz == nez)
assert(nx == nex)
lenz_out = int(nz*ratio)
lenx_out = int(nx*ratio)
mask_nz = torch.rand(b,nz).float()
mask_ez = torch.rand(b,nez).float()
mask_nx = torch.rand(b,nx).float()
mask_ex = torch.rand(b,nex).float()
mask_nz = mask_nz>0.4
mask_ez = mask_ez>0.4
mask_ez = ~mask_nz + mask_ez
mask_nz_idx = mask_nz.float().sort(1,descending=True)[-1].to(device = z.device)
mask_ez_idx = mask_ez.float().sort(1,descending=True)[-1].to(device = z.device)
mask_nx = mask_nx>0.4
mask_ex = mask_ex>0.4
mask_ex = ~mask_nx + mask_ex
mask_nx_idx = mask_nx.float().sort(1,descending=True)[-1].to(device = z.device)
mask_ex_idx = mask_ex.float().sort(1,descending=True)[-1].to(device = z.device)
masked_z = torch.gather(z, 1, mask_nz_idx[:,:lenz_out,None].repeat([1,1,c]))
masked_ez = torch.gather(event_z, 1, mask_ez_idx[:,:lenz_out,None].repeat([1,1,c]))
masked_x = torch.gather(x, 1, mask_nx_idx[:,:lenx_out,None].repeat([1,1,c]))
masked_ex = torch.gather(event_x, 1, mask_ex_idx[:,:lenx_out,None].repeat([1,1,c]))
return masked_z, masked_ez, masked_x, masked_ex,{'x1':mask_nx_idx[:,:lenx_out],'x0':mask_nx_idx[:,lenx_out:],
'ex1':mask_ex_idx[:,:lenx_out],'ex0':mask_ex_idx[:,lenx_out:], }
def forward_features(self, z, x, event_z, event_x,
mask_z=None, mask_x=None,
ce_template_mask=None, ce_keep_rate=None,
return_last_attn=False,Track=False
):
B, H, W = x.shape[0], x.shape[2], x.shape[3]
# print('shape of event_z before projection:{}, event_x:{}'.format(event_z.shape, event_x.shape))
event_z = self.pos_embed_event(event_z) # [:,:,:,:1000]
event_x = self.pos_embed_event(event_x) # B 768 1024
x = self.patch_embed(x)
z = self.patch_embed(z)
# print('shape of event_z:{}, event_x:{}, x:{}, z:{}'.format(event_z.shape,event_x.shape,x.shape,z.shape ))
event_z += self.pos_embed_z
event_x += self.pos_embed_x
z += self.pos_embed_z
x += self.pos_embed_x
# attention mask handling # B, H, W
if mask_z is not None and mask_x is not None:
mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_z = mask_z.flatten(1).unsqueeze(-1)
mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_x = mask_x.flatten(1).unsqueeze(-1)
mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode)
mask_x = mask_x.squeeze(-1)
if self.add_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
cls_tokens = cls_tokens + self.cls_pos_embed
if self.add_sep_seg:
x += self.search_segment_pos_embed
z += self.template_segment_pos_embed
if Track == False:
z, event_z, x, event_x, token_idx = self.masking_fea(z, event_z, x, event_x, ratio=0.9)
x = combine_tokens(z, event_z, x, event_x, mode=self.cat_mode) # 64+64+256+256=640
# x = combine_tokens(z, x, event_z, event_x, mode=self.cat_mode) # 64+64+256+256=640
if self.add_cls_token:
x = torch.cat([cls_tokens, x], dim=1)
x = self.pos_drop(x)
# lens_z = self.pos_embed_z.shape[1]
# lens_x = self.pos_embed_x.shape[1]
lens_z = z.shape[1]
lens_x = x.shape[1]
global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device)
global_index_t = global_index_t.repeat(B, 1)
global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device)
global_index_s = global_index_s.repeat(B, 1)
removed_indexes_s = []
out_attn = []
for i, blk in enumerate(self.blocks):
# out_global_s.append(global_index_s)
# out_global_t.append(global_index_t)
x, global_index_t, global_index_s, removed_index_s, attn = \
blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s.append(removed_index_s)
out_attn.append(attn)
# print('shape of attn:{}, lens_z:{}, lens_x:{}'.format(attn.shape, lens_z, lens_x))
out_attn_idx = random.choice(np.arange(len(out_attn)))
out_attn = out_attn[out_attn_idx]
x = self.norm(x)
lens_x_new = global_index_s.shape[1]
lens_z_new = global_index_t.shape[1]
z = x[:, :lens_z_new*2]
x = x[:, lens_z_new*2:]
if Track == False:
idx1 = token_idx['x1']
idx0 = token_idx['x0']
idex1 = token_idx['ex1']
idex0 = token_idx['ex0']
ex = x[:,idex1.shape[1]:]
x = x[:,:idex1.shape[1]]
# if removed_indexes_s and removed_indexes_s[0] is not None:
# removed_indexes_cat = torch.cat(removed_indexes_s, dim=1)
pruned_lens_x = idx0.shape[1]
pad_x = torch.zeros([B, pruned_lens_x, x.shape[2]], device=x.device)
x = torch.cat([x, pad_x], dim=1)
index_all = torch.cat([idx1, idx0], dim=1)
# recover original token order
C = x.shape[-1]
x = torch.zeros_like(x).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=x)
ex = torch.cat([ex, pad_x], dim=1)
index_all = torch.cat([idex1, idex0], dim=1)
# recover original token order
C = ex.shape[-1]
ex = torch.zeros_like(ex).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=ex)
x = torch.cat([x,ex],dim=1)
x = recover_tokens(x, lens_z_new, lens_x, mode=self.cat_mode)
event_x = x[:, lens_x:] # RGB head
x = x[:, :lens_x] # RGB head
x = torch.cat([event_x, x], dim=1)
aux_dict = {
# "attn": attn,
"attn": out_attn,
"removed_indexes_s": removed_indexes_s, # used for visualization
}
return x, aux_dict
def forward(self, z, x, event_z, event_x,
ce_template_mask=None, ce_keep_rate=None,
tnc_keep_rate=None,
return_last_attn=False,Track=False):
x, aux_dict = self.forward_features(z, x, event_z, event_x, ce_template_mask=ce_template_mask, ce_keep_rate=ce_keep_rate,Track=Track)
return x, aux_dict
def _create_vision_transformer(pretrained=False, **kwargs):
model = VisionTransformerCE(**kwargs)
if pretrained:
if 'npz' in pretrained:
model.load_pretrained(pretrained, prefix='')
else:
checkpoint = torch.load(pretrained, map_location="cpu")
missing_keys, unexpected_keys = model.load_state_dict(checkpoint["model"], strict=False)
print('Load pretrained model from: ' + pretrained)
return model
def vit_base_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model
def vit_large_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=1024, depth=24, num_heads=16, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model是主模型,然后from . import BaseActor
from lib.utils.misc import NestedTensor
from lib.utils.box_ops import box_cxcywh_to_xyxy, box_xywh_to_xyxy
import torch
from lib.utils.merge import merge_template_search
from ...utils.heapmap_utils import generate_heatmap
from ...utils.ce_utils import generate_mask_cond, adjust_keep_rate
class CEUTrackActor(BaseActor):
""" Actor for training CEUTrack models """
def __init__(self, net, objective, loss_weight, settings, cfg=None):
super().__init__(net, objective)
self.loss_weight = loss_weight
self.settings = settings
self.bs = self.settings.batchsize # batch size
self.cfg = cfg
def __call__(self, data):
"""
args:
data - The input data, should contain the fields 'template', 'search', 'gt_bbox'.
template_images: (N_t, batch, 3, H, W)
search_images: (N_s, batch, 3, H, W)
returns:
loss - the training loss
status - dict containing detailed losses
"""
# forward pass
out_dict = self.forward_pass(data)
# compute losses
loss, status = self.compute_losses(out_dict, data)
return loss, status
def forward_pass(self, data):
# currently only support 1 template and 1 search region
assert len(data['template_images']) == 1
assert len(data['search_images']) == 1
assert len(data['template_event']) == 1
assert len(data['search_event']) == 1
template_list = []
for i in range(self.settings.num_template):
template_img_i = data['template_images'][i].view(-1,
*data['template_images'].shape[2:]) # (batch, 3, 128, 128)
# template_att_i = data['template_att'][i].view(-1, *data['template_att'].shape[2:]) # (batch, 128, 128)
template_list.append(template_img_i)
search_img = data['search_images'][0].view(-1, *data['search_images'].shape[2:]) # (batch, 3, 320, 320)
# search_att = data['search_att'][0].view(-1, *data['search_att'].shape[2:]) # (batch, 320, 320)
template_event = data['template_event'][0].view(-1, *data['template_event'].shape[2:])
search_event = data['search_event'][0].view(-1, *data['search_event'].shape[2:])
box_mask_z = None
ce_keep_rate = None
if self.cfg.MODEL.BACKBONE.CE_LOC:
box_mask_z = generate_mask_cond(self.cfg, template_list[0].shape[0], template_list[0].device,
data['template_anno'][0])
ce_start_epoch = self.cfg.TRAIN.CE_START_EPOCH
ce_warm_epoch = self.cfg.TRAIN.CE_WARM_EPOCH
ce_keep_rate = adjust_keep_rate(data['epoch'], warmup_epochs=ce_start_epoch,
total_epochs=ce_start_epoch + ce_warm_epoch,
ITERS_PER_EPOCH=1,
base_keep_rate=self.cfg.MODEL.BACKBONE.CE_KEEP_RATIO[0])
if len(template_list) == 1:
template_list = template_list[0]
out_dict = self.net(template=template_list,
search=search_img,
event_template=template_event,
event_search=search_event,
ce_template_mask=box_mask_z,
ce_keep_rate=ce_keep_rate,
return_last_attn=False)
return out_dict
def compute_losses(self, pred_dict, gt_dict, return_status=True):
# gt gaussian map
gt_bbox = gt_dict['search_anno'][-1] # (Ns, batch, 4) (x1,y1,w,h) -> (batch, 4)
gt_gaussian_maps = generate_heatmap(gt_dict['search_anno'], self.cfg.DATA.SEARCH.SIZE, self.cfg.MODEL.BACKBONE.STRIDE)
gt_gaussian_maps = gt_gaussian_maps[-1].unsqueeze(1)
# Get boxes
pred_boxes = pred_dict['pred_boxes']
if torch.isnan(pred_boxes).any():
raise ValueError("Network outputs is NAN! Stop Training")
num_queries = pred_boxes.size(1)
pred_boxes_vec = box_cxcywh_to_xyxy(pred_boxes).view(-1, 4) # (B,N,4) --> (BN,4) (x1,y1,x2,y2)
gt_boxes_vec = box_xywh_to_xyxy(gt_bbox)[:, None, :].repeat((1, num_queries, 1)).view(-1, 4).clamp(min=0.0,
max=1.0) # (B,4) --> (B,1,4) --> (B,N,4)
# compute giou and iou
try:
giou_loss, iou = self.objective['giou'](pred_boxes_vec, gt_boxes_vec) # (BN,4) (BN,4)
except:
giou_loss, iou = torch.tensor(0.0).cuda(), torch.tensor(0.0).cuda()
# compute l1 loss
l1_loss = self.objective['l1'](pred_boxes_vec, gt_boxes_vec) # (BN,4) (BN,4)
# compute location loss
if 'score_map' in pred_dict:
location_loss = self.objective['focal'](pred_dict['score_map'], gt_gaussian_maps)
else:
location_loss = torch.tensor(0.0, device=l1_loss.device)
rank_loss = self.loss_rank(pred_dict,gt_dict['search_anno'], gt_dict['template_anno'])
# weighted sum
loss = self.loss_weight['giou'] * giou_loss + self.loss_weight['l1'] * l1_loss + self.loss_weight['focal'] * location_loss + rank_loss*1.2
if return_status:
# status for log
mean_iou = iou.detach().mean()
status = {"Loss/total": loss.item(),
"Loss/giou": giou_loss.item(),
"Loss/l1": l1_loss.item(),
"Loss/location": location_loss.item(),
"IoU": mean_iou.item()}
return loss, status
else:
return loss
def _random_permute(self,matrix):
# matrix = random.choice(matrix)
b, c, h, w = matrix.shape
idx = [ torch.randperm(c).to(matrix.device) for i in range(b)]
idx = torch.stack(idx, dim=0)[:, :, None, None].repeat([1,1,h,w])
# idx = torch.randperm(c)[None,:,None,None].repeat([b,1,h,w]).to(matrix.device)
matrix01 = torch.gather(matrix, 1, idx)
return matrix01
def crop_flag(self, flag, global_index_s, global_index_t,H1 = 64, H2 = 256):
B,Ls = global_index_s.shape
B, Lt = global_index_t.shape
B,C,L1,L2 = flag.shape
flag_t = flag[:,:,:H1,:]
flag_s = flag[:,:,H1:,:]
flag_t = torch.gather(flag_t,2,global_index_t[:,None,:,None].repeat([1,C,1,L2]).long())
flag_s = torch.gather(flag_s,2,global_index_s[:,None,:,None].repeat([1,C,1,L2]).long())
flag = torch.cat([flag_t, flag_s], dim = 2)
flag_t = flag[:,:,:,:H1]
flag_s = flag[:,:,:,H1:]
flag_t = torch.gather(flag_t,3,global_index_t[:,None,None,:].repeat([1,C,int(Ls+Lt),1]).long())
flag_s = torch.gather(flag_s,3,global_index_s[:,None,None,:].repeat([1,C,int(Ls+Lt),1]).long())
flag = torch.cat([flag_t, flag_s], dim = 3)
B, C, L11, L12 = flag.shape
try:
assert(L11 == int(Lt + Ls))
assert(L12 == int(Lt + Ls))
except:
print('L11:{}, L12:{}, L1:{}, L2:{}'.format(L11, L12, L1, L2))
return flag
def crop_fusion(self, flag, attn, global_index_s, global_index_t,H1 = 64, H2 = 256 ):
flag = self.crop_flag(flag=flag, global_index_s=global_index_s, global_index_t=global_index_t)
B,C,L1,L2 = flag.shape
Ba, Ca, La, La2 = attn.shape
_,idx1 = flag.mean(dim=3,keepdim=False).sort(dim=2,descending=True)
# print('shape of flag:{}, idx1:{}'.format(flag.shape, idx1[:,:,:32,None].repeat([1,Ca,1,L2]).shape))
flag = torch.gather(flag,2,idx1[:,:,:32,None].repeat([1,C,1,L2]).long())
attn = torch.gather(attn,2,idx1[:,:,:32,None].repeat([1,Ca,1,L2]).long())
_,idx2 = flag.mean(dim=2,keepdim=False).sort(dim=2,descending=True)
flag = torch.gather(flag,3,idx2[:,:,None,:32].repeat([1,C,32,1]).long())
attn = torch.gather(attn,3,idx2[:,:,None,:32].repeat([1,Ca,32,1]).long())
return attn * flag
def loss_rank(self, outputs, targetsi, temp_annoi=None):
"""Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss
targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4]
The target boxes are expected in format (center_x, center_y, h, w), normalized by the image size.
"""
attn = outputs['attn']
# print('attn shape:{}'.format(attn.shape))
attn1 = torch.cat([attn[:,:,114:344,57:114], attn[:,:,114:344,344:]],dim=3)
attn1 = attn1.mean(dim=0, keepdim=True).mean(dim=1, keepdim=True)
attn2 = torch.cat([attn[:,:,344:,:57], attn[:,:,344:,114:344]],dim=3)
attn2 = attn2.mean(dim=0, keepdim=True).mean(dim=1, keepdim=True)
# print('attn1 shape:{},attn2 shape:{}, attn:{}'.format(attn1.shape,attn2.shape,attn.shape))
# attn = self._random_permute(attn)
# attn = attn[:,:,:,:]
# B1, C1, H1, W1 = attn.shape
# global_index_s = outputs['out_global_s']
# global_index_t = outputs['out_global_t']
# try:
# assert((global_index_s.shape[1] + global_index_t.shape[1])== int(H1/2))
# except:
# print('Falut,shape of attn:{}, s:{}, t:{}'.format(attn.shape,global_index_s.shape, global_index_t.shape ))
# H1 = int(64)
# H2 = int(256)
# l_t = int(math.sqrt(64))
# l_s = int(math.sqrt(256))
# temp_anno = temp_annoi[0,:,:]
# targets = targetsi[0,:,:]
# r_s = torch.arange(l_s).to(temp_anno.device)
# r_t = torch.arange(l_t).to(temp_anno.device)
# r_t = r_t[None,:].repeat([B1,1])
# cx, cy, w, h = temp_anno[:,0:1], temp_anno[:,1:2], temp_anno[:,2:3], temp_anno[:,3:4]
# cx *= l_t
# cy *= l_t
# w *= l_t
# h *= l_t
# flagx_01 = r_t >= cx - w/2
# flagx_02 = r_t <= cx + w/2
# flagy_02 = r_t >= cy - h/2
# flagy_01 = r_t <= cy + h/2
# flagx = flagx_01.float()*flagx_02.float()
# flagy = flagy_01.float()*flagy_02.float()
# flagx = flagx[:,None,:].repeat([1,l_t,1])
# flagy = flagy[:,:,None].repeat([1,1,l_t])
# flag = flagx*flagy
# flagt = flag.reshape([B1, H1])
# cx, cy, w, h = targets[:,0:1], targets[:,1:2], targets[:,2:3], targets[:,3:4]
# cx *= l_s
# cy *= l_s
# w *= l_s
# h *= l_s
# flagx_01 = r_s >= cx - w/2
# flagx_02 = r_s <= cx + w/2
# flagy_02 = r_s >= cy - h/2
# flagy_01 = r_s <= cy + h/2
# flagx = flagx_01.float()*flagx_02.float()
# flagy = flagy_01.float()*flagy_02.float()
# flagx = flagx[:,None,:].repeat([1,l_s,1])
# flagy = flagy[:,:,None].repeat([1,1,l_s])
# flag = flagx*flagy
# flags = flag.reshape([B1, H2])
# flag = torch.cat([flagt, flags], dim=1)
# flag_total = flag[:,:,None].repeat([1,1,int(H1+H2)]) * flag[:,None,:].repeat([1,int(H1+H2),1])
# attn1 = self.crop_fusion(flag_total[:,None,:,:], attn, global_index_s, global_index_t)
attn = torch.cat([attn1, attn2],dim=1)
B, C, H, W = attn.shape
# _,s1,_ = torch.svd(attn1.reshape([B*C, H, W]))
_,s1,_ = torch.svd(attn.reshape([B*C, H, W]))
s01 = torch.abs(s1 - 1)
return torch.mean(s01)是CEUTrackActor,那么现在的模型是# 将 4输入分开,构建新的相同模态结合的2输入,2分支
import math
import logging
from functools import partial
from collections import OrderedDict
from copy import deepcopy
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.models.layers import to_2tuple
from lib.models.layers.patch_embed import PatchEmbed, PatchEmbed_event, xcorr_depthwise
from .utils import combine_tokens, recover_tokens
from .vit import VisionTransformer
from ..layers.attn_blocks import CEBlock
from .new_counter_guide import Counter_Guide
# from .ad_counter_guide import Counter_Guide_Enhanced
from .ad_counter_guide_downdim import Counter_Guide_Enhanced
_logger = logging.getLogger(__name__)
class VisionTransformerCE(VisionTransformer):
""" Vision Transformer with candidate elimination (CE) module
A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
Includes distillation token & head support for `DeiT: Data-efficient Image Transformers`
- https://arxiv.org/abs/2012.12877
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
num_heads=12, mlp_ratio=4., qkv_bias=True, representation_size=None, distilled=False,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None,
act_layer=None, weight_init='',
ce_loc=None, ce_keep_ratio=None):
super().__init__()
if isinstance(img_size, tuple):
self.img_size = img_size
else:
self.img_size = to_2tuple(img_size)
self.patch_size = patch_size
self.in_chans = in_chans
self.num_classes = num_classes
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
self.num_tokens = 2 if distilled else 1
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
act_layer = act_layer or nn.GELU
self.patch_embed = embed_layer(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
self.pos_drop = nn.Dropout(p=drop_rate)
self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
blocks = []
ce_index = 0
self.ce_loc = ce_loc
for i in range(depth):
ce_keep_ratio_i = 1.0
if ce_loc is not None and i in ce_loc:
ce_keep_ratio_i = ce_keep_ratio[ce_index]
ce_index += 1
blocks.append(
CEBlock(
dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop=drop_rate,
attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer,
keep_ratio_search=ce_keep_ratio_i)
)
self.blocks = nn.Sequential(*blocks)
self.norm = norm_layer(embed_dim)
self.init_weights(weight_init)
# 添加交互模块counter_guide
self.counter_guide = Counter_Guide_Enhanced(768, 768)
def forward_features(self, z, x, event_z, event_x,
mask_z=None, mask_x=None,
ce_template_mask=None, ce_keep_rate=None,
return_last_attn=False
):
# 分支1 处理流程
B, H, W = x.shape[0], x.shape[2], x.shape[3]
x = self.patch_embed(x)
z = self.patch_embed(z)
z += self.pos_embed_z
x += self.pos_embed_x
if mask_z is not None and mask_x is not None:
mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_z = mask_z.flatten(1).unsqueeze(-1)
mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_x = mask_x.flatten(1).unsqueeze(-1)
mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode)
mask_x = mask_x.squeeze(-1)
if self.add_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
cls_tokens = cls_tokens + self.cls_pos_embed
if self.add_sep_seg:
x += self.search_segment_pos_embed
z += self.template_segment_pos_embed
x = combine_tokens(z, x, mode=self.cat_mode)
if self.add_cls_token:
x = torch.cat([cls_tokens, x], dim=1)
x = self.pos_drop(x)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device)
global_index_t = global_index_t.repeat(B, 1)
global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device)
global_index_s = global_index_s.repeat(B, 1)
removed_indexes_s = []
# # 分支2 处理流程
event_x = self.pos_embed_event(event_x)
event_z = self.pos_embed_event(event_z)
event_x += self.pos_embed_x
event_z += self.pos_embed_z
event_x = combine_tokens(event_z, event_x, mode=self.cat_mode)
if self.add_cls_token:
event_x = torch.cat([cls_tokens, event_x], dim=1)
lens_z = self.pos_embed_z.shape[1]
lens_x = self.pos_embed_x.shape[1]
global_index_t1 = torch.linspace(0, lens_z - 1, lens_z).to(event_x.device)
global_index_t1 = global_index_t1.repeat(B, 1)
global_index_s1 = torch.linspace(0, lens_x - 1, lens_x).to(event_x.device)
global_index_s1 = global_index_s1.repeat(B, 1)
removed_indexes_s1 = []
for i, blk in enumerate(self.blocks):
# 第一个分支处理
x, global_index_t, global_index_s, removed_index_s, attn = \
blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate)
# 第二个分支处理
event_x, global_index_t1, global_index_s1, removed_index_s1, attn = \
blk(event_x, global_index_t1, global_index_s1, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s.append(removed_index_s)
removed_indexes_s1.append(removed_index_s1)
# 在第1层和第2层增加counter_guide模块,验证早期融合效果
if i == 0 :
enhanced_x, enhanced_event_x = self.counter_guide(x, event_x)
# 将增强后的特征与原特征相加
x = x + enhanced_x
event_x = event_x + enhanced_event_x
# 应用LayerNorm归一化处理
x = self.norm(x)
event_x = self.norm(event_x)
x_cat = torch.cat([event_x,x], dim=1)
x = x_cat
aux_dict = {
"attn": attn,
"removed_indexes_s": removed_indexes_s, # used for visualization
}
return x, aux_dict
def forward(self, z, x, event_z, event_x,
ce_template_mask=None, ce_keep_rate=None,
tnc_keep_rate=None,
return_last_attn=False):
x, aux_dict = self.forward_features(z, x, event_z, event_x, ce_template_mask=ce_template_mask, ce_keep_rate=ce_keep_rate,)
return x, aux_dict
def _create_vision_transformer(pretrained=False, **kwargs):
model = VisionTransformerCE(**kwargs)
if pretrained:
if 'npz' in pretrained:
model.load_pretrained(pretrained, prefix='')
else:
checkpoint = torch.load(pretrained, map_location="cpu")
missing_keys, unexpected_keys = model.load_state_dict(checkpoint["model"], strict=False)
print('Load pretrained model from: ' + pretrained)
return model
def vit_base_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model
def vit_large_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=1024, depth=24, num_heads=16, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model,然后他的CEUTrackActor是from . import BaseActor
from lib.utils.misc import NestedTensor
from lib.utils.box_ops import box_cxcywh_to_xyxy, box_xywh_to_xyxy
import torch
from lib.utils.merge import merge_template_search
from ...utils.heapmap_utils import generate_heatmap
from ...utils.ce_utils import generate_mask_cond, adjust_keep_rate
class CEUTrackActor(BaseActor):
""" Actor for training CEUTrack models """
def __init__(self, net, objective, loss_weight, settings, cfg=None):
super().__init__(net, objective)
self.loss_weight = loss_weight
self.settings = settings
self.bs = self.settings.batchsize # batch size
self.cfg = cfg
def __call__(self, data):
"""
args:
data - The input data, should contain the fields 'template', 'search', 'gt_bbox'.
template_images: (N_t, batch, 3, H, W)
search_images: (N_s, batch, 3, H, W)
returns:
loss - the training loss
status - dict containing detailed losses
"""
# forward pass
out_dict = self.forward_pass(data)
# compute losses
loss, status = self.compute_losses(out_dict, data)
return loss, status
def forward_pass(self, data):
# currently only support 1 template and 1 search region
assert len(data['template_images']) == 1
assert len(data['search_images']) == 1
assert len(data['template_event']) == 1
assert len(data['search_event']) == 1
template_list = []
for i in range(self.settings.num_template):
template_img_i = data['template_images'][i].view(-1,
*data['template_images'].shape[2:]) # (batch, 3, 128, 128)
# template_att_i = data['template_att'][i].view(-1, *data['template_att'].shape[2:]) # (batch, 128, 128)
template_list.append(template_img_i)
search_img = data['search_images'][0].view(-1, *data['search_images'].shape[2:]) # (batch, 3, 320, 320)
# search_att = data['search_att'][0].view(-1, *data['search_att'].shape[2:]) # (batch, 320, 320)
template_event = data['template_event'][0].view(-1, *data['template_event'].shape[2:])
search_event = data['search_event'][0].view(-1, *data['search_event'].shape[2:])
box_mask_z = None
ce_keep_rate = None
if self.cfg.MODEL.BACKBONE.CE_LOC:
box_mask_z = generate_mask_cond(self.cfg, template_list[0].shape[0], template_list[0].device,
data['template_anno'][0])
ce_start_epoch = self.cfg.TRAIN.CE_START_EPOCH
ce_warm_epoch = self.cfg.TRAIN.CE_WARM_EPOCH
ce_keep_rate = adjust_keep_rate(data['epoch'], warmup_epochs=ce_start_epoch,
total_epochs=ce_start_epoch + ce_warm_epoch,
ITERS_PER_EPOCH=1,
base_keep_rate=self.cfg.MODEL.BACKBONE.CE_KEEP_RATIO[0])
if len(template_list) == 1:
template_list = template_list[0]
out_dict = self.net(template=template_list,
search=search_img,
event_template=template_event,
event_search=search_event,
ce_template_mask=box_mask_z,
ce_keep_rate=ce_keep_rate,
return_last_attn=False)
return out_dict
def compute_losses(self, pred_dict, gt_dict, return_status=True):
# gt gaussian map
gt_bbox = gt_dict['search_anno'][-1] # (Ns, batch, 4) (x1,y1,w,h) -> (batch, 4)
gt_gaussian_maps = generate_heatmap(gt_dict['search_anno'], self.cfg.DATA.SEARCH.SIZE, self.cfg.MODEL.BACKBONE.STRIDE)
gt_gaussian_maps = gt_gaussian_maps[-1].unsqueeze(1)
# Get boxes
pred_boxes = pred_dict['pred_boxes']
if torch.isnan(pred_boxes).any():
raise ValueError("Network outputs is NAN! Stop Training")
num_queries = pred_boxes.size(1)
pred_boxes_vec = box_cxcywh_to_xyxy(pred_boxes).view(-1, 4) # (B,N,4) --> (BN,4) (x1,y1,x2,y2)
gt_boxes_vec = box_xywh_to_xyxy(gt_bbox)[:, None, :].repeat((1, num_queries, 1)).view(-1, 4).clamp(min=0.0,
max=1.0) # (B,4) --> (B,1,4) --> (B,N,4)
# compute giou and iou
try:
giou_loss, iou = self.objective['giou'](pred_boxes_vec, gt_boxes_vec) # (BN,4) (BN,4)
except:
giou_loss, iou = torch.tensor(0.0).cuda(), torch.tensor(0.0).cuda()
# compute l1 loss
l1_loss = self.objective['l1'](pred_boxes_vec, gt_boxes_vec) # (BN,4) (BN,4)
# compute location loss
if 'score_map' in pred_dict:
location_loss = self.objective['focal'](pred_dict['score_map'], gt_gaussian_maps)
else:
location_loss = torch.tensor(0.0, device=l1_loss.device)
# weighted sum
loss = self.loss_weight['giou'] * giou_loss + self.loss_weight['l1'] * l1_loss + self.loss_weight['focal'] * location_loss
if return_status:
# status for log
mean_iou = iou.detach().mean()
status = {"Loss/total": loss.item(),
"Loss/giou": giou_loss.item(),
"Loss/l1": l1_loss.item(),
"Loss/location": location_loss.item(),
"IoU": mean_iou.item()}
return loss, status
else:
return loss
,那么第一个代码中的CEUTrackActor中引入了正交高秩正则化,那么在第二个代码的CEUTrackActor中也将正交高秩正则化添加进去
|
ca8d4dbd0be70773f0d215a7138318b5
|
{
"intermediate": 0.4144701659679413,
"beginner": 0.39271798729896545,
"expert": 0.19281193614006042
}
|
48,169
|
5. An electrical firm manufactures light bulbs that have a normally distributed
lifespan with a mean of 800 hours and a standard deviation of 40 hours.
(a) Find the probability the bulb burns between 778 and 834 hours.
(b) Find the probability the bulb burns longer than 896 hours.
|
80d69c566b907314d7d34b1e04c536a1
|
{
"intermediate": 0.4016805589199066,
"beginner": 0.39570924639701843,
"expert": 0.20261022448539734
}
|
48,170
|
What is the code that muted line 6 in G code?
|
da427b7ce58611c36b8bfad8e95554cd
|
{
"intermediate": 0.2921062707901001,
"beginner": 0.35837578773498535,
"expert": 0.34951794147491455
}
|
48,171
|
What is the code that muted line 6 in G chord?
|
7455d236e42576afbe21fb77598069ae
|
{
"intermediate": 0.3753429651260376,
"beginner": 0.3347611725330353,
"expert": 0.2898959219455719
}
|
48,172
|
源代码中包含的代码① import math
import logging
from functools import partial
from collections import OrderedDict
from copy import deepcopy
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.models.layers import to_2tuple
from lib.models.layers.patch_embed import PatchEmbed, PatchEmbed_event, xcorr_depthwise
from .utils import combine_tokens, recover_tokens
from .vit import VisionTransformer
from ..layers.attn_blocks import CEBlock
import random
import numpy as np
_logger = logging.getLogger(__name__)
class VisionTransformerCE(VisionTransformer):
""" Vision Transformer with candidate elimination (CE) module
A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale`
- https://arxiv.org/abs/2010.11929
Includes distillation token & head support for `DeiT: Data-efficient Image Transformers`
- https://arxiv.org/abs/2012.12877
"""
def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
num_heads=12, mlp_ratio=4., qkv_bias=True, representation_size=None, distilled=False,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None,
act_layer=None, weight_init='',
ce_loc=None, ce_keep_ratio=None):
"""
Args:
img_size (int, tuple): input image size
patch_size (int, tuple): patch size
in_chans (int): number of input channels
num_classes (int): number of classes for classification head
embed_dim (int): embedding dimension
depth (int): depth of transformer
num_heads (int): number of attention heads
mlp_ratio (int): ratio of mlp hidden dim to embedding dim
qkv_bias (bool): enable bias for qkv if True
representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
distilled (bool): model includes a distillation token and head as in DeiT models
drop_rate (float): dropout rate
attn_drop_rate (float): attention dropout rate
drop_path_rate (float): stochastic depth rate
embed_layer (nn.Module): patch embedding layer
norm_layer: (nn.Module): normalization layer
weight_init: (str): weight init scheme
"""
# super().__init__()
super().__init__()
if isinstance(img_size, tuple):
self.img_size = img_size
else:
self.img_size = to_2tuple(img_size)
self.patch_size = patch_size
self.in_chans = in_chans
self.num_classes = num_classes
self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
self.num_tokens = 2 if distilled else 1
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
act_layer = act_layer or nn.GELU
self.patch_embed = embed_layer(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
self.pos_drop = nn.Dropout(p=drop_rate)
self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
# self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4)
# self.pos_embed_event_z = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=3, stride=1)
# attn = CrossAttn(768, 4, 3072, 0.1, 'relu')
# self.cross_attn = Iter_attn(attn, 2)
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
blocks = []
ce_index = 0
self.ce_loc = ce_loc
for i in range(depth):
ce_keep_ratio_i = 1.0
if ce_loc is not None and i in ce_loc:
ce_keep_ratio_i = ce_keep_ratio[ce_index]
ce_index += 1
blocks.append(
CEBlock(
dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop=drop_rate,
attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer,
keep_ratio_search=ce_keep_ratio_i)
)
self.blocks = nn.Sequential(*blocks)
self.norm = norm_layer(embed_dim)
self.init_weights(weight_init)
def masking_fea(self,z, event_z, x, event_x, ratio=0.8 ):
b,nz,c = z.shape
b,nez,c = event_z.shape
b,nx,c = x.shape
b,nex,c = event_x.shape
assert(nz == nez)
assert(nx == nex)
lenz_out = int(nz*ratio)
lenx_out = int(nx*ratio)
mask_nz = torch.rand(b,nz).float()
mask_ez = torch.rand(b,nez).float()
mask_nx = torch.rand(b,nx).float()
mask_ex = torch.rand(b,nex).float()
mask_nz = mask_nz>0.4
mask_ez = mask_ez>0.4
mask_ez = ~mask_nz + mask_ez
mask_nz_idx = mask_nz.float().sort(1,descending=True)[-1].to(device = z.device)
mask_ez_idx = mask_ez.float().sort(1,descending=True)[-1].to(device = z.device)
mask_nx = mask_nx>0.4
mask_ex = mask_ex>0.4
mask_ex = ~mask_nx + mask_ex
mask_nx_idx = mask_nx.float().sort(1,descending=True)[-1].to(device = z.device)
mask_ex_idx = mask_ex.float().sort(1,descending=True)[-1].to(device = z.device)
masked_z = torch.gather(z, 1, mask_nz_idx[:,:lenz_out,None].repeat([1,1,c]))
masked_ez = torch.gather(event_z, 1, mask_ez_idx[:,:lenz_out,None].repeat([1,1,c]))
masked_x = torch.gather(x, 1, mask_nx_idx[:,:lenx_out,None].repeat([1,1,c]))
masked_ex = torch.gather(event_x, 1, mask_ex_idx[:,:lenx_out,None].repeat([1,1,c]))
return masked_z, masked_ez, masked_x, masked_ex,{'x1':mask_nx_idx[:,:lenx_out],'x0':mask_nx_idx[:,lenx_out:],
'ex1':mask_ex_idx[:,:lenx_out],'ex0':mask_ex_idx[:,lenx_out:], }
def forward_features(self, z, x, event_z, event_x,
mask_z=None, mask_x=None,
ce_template_mask=None, ce_keep_rate=None,
return_last_attn=False,Track=False
):
B, H, W = x.shape[0], x.shape[2], x.shape[3]
# print('shape of event_z before projection:{}, event_x:{}'.format(event_z.shape, event_x.shape))
event_z = self.pos_embed_event(event_z) # [:,:,:,:1000]
event_x = self.pos_embed_event(event_x) # B 768 1024
x = self.patch_embed(x)
z = self.patch_embed(z)
# print('shape of event_z:{}, event_x:{}, x:{}, z:{}'.format(event_z.shape,event_x.shape,x.shape,z.shape ))
event_z += self.pos_embed_z
event_x += self.pos_embed_x
z += self.pos_embed_z
x += self.pos_embed_x
# attention mask handling # B, H, W
if mask_z is not None and mask_x is not None:
mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_z = mask_z.flatten(1).unsqueeze(-1)
mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0]
mask_x = mask_x.flatten(1).unsqueeze(-1)
mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode)
mask_x = mask_x.squeeze(-1)
if self.add_cls_token:
cls_tokens = self.cls_token.expand(B, -1, -1)
cls_tokens = cls_tokens + self.cls_pos_embed
if self.add_sep_seg:
x += self.search_segment_pos_embed
z += self.template_segment_pos_embed
if Track == False:
z, event_z, x, event_x, token_idx = self.masking_fea(z, event_z, x, event_x, ratio=0.9)
x = combine_tokens(z, event_z, x, event_x, mode=self.cat_mode) # 64+64+256+256=640
# x = combine_tokens(z, x, event_z, event_x, mode=self.cat_mode) # 64+64+256+256=640
if self.add_cls_token:
x = torch.cat([cls_tokens, x], dim=1)
x = self.pos_drop(x)
# lens_z = self.pos_embed_z.shape[1]
# lens_x = self.pos_embed_x.shape[1]
lens_z = z.shape[1]
lens_x = x.shape[1]
global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device)
global_index_t = global_index_t.repeat(B, 1)
global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device)
global_index_s = global_index_s.repeat(B, 1)
removed_indexes_s = []
out_attn = []
for i, blk in enumerate(self.blocks):
# out_global_s.append(global_index_s)
# out_global_t.append(global_index_t)
x, global_index_t, global_index_s, removed_index_s, attn = \
blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate)
if self.ce_loc is not None and i in self.ce_loc:
removed_indexes_s.append(removed_index_s)
out_attn.append(attn)
# print('shape of attn:{}, lens_z:{}, lens_x:{}'.format(attn.shape, lens_z, lens_x))
out_attn_idx = random.choice(np.arange(len(out_attn)))
out_attn = out_attn[out_attn_idx]
x = self.norm(x)
lens_x_new = global_index_s.shape[1]
lens_z_new = global_index_t.shape[1]
z = x[:, :lens_z_new*2]
x = x[:, lens_z_new*2:]
if Track == False:
idx1 = token_idx['x1']
idx0 = token_idx['x0']
idex1 = token_idx['ex1']
idex0 = token_idx['ex0']
ex = x[:,idex1.shape[1]:]
x = x[:,:idex1.shape[1]]
# if removed_indexes_s and removed_indexes_s[0] is not None:
# removed_indexes_cat = torch.cat(removed_indexes_s, dim=1)
pruned_lens_x = idx0.shape[1]
pad_x = torch.zeros([B, pruned_lens_x, x.shape[2]], device=x.device)
x = torch.cat([x, pad_x], dim=1)
index_all = torch.cat([idx1, idx0], dim=1)
# recover original token order
C = x.shape[-1]
x = torch.zeros_like(x).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=x)
ex = torch.cat([ex, pad_x], dim=1)
index_all = torch.cat([idex1, idex0], dim=1)
# recover original token order
C = ex.shape[-1]
ex = torch.zeros_like(ex).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=ex)
x = torch.cat([x,ex],dim=1)
x = recover_tokens(x, lens_z_new, lens_x, mode=self.cat_mode)
event_x = x[:, lens_x:] # RGB head
x = x[:, :lens_x] # RGB head
x = torch.cat([event_x, x], dim=1)
aux_dict = {
# "attn": attn,
"attn": out_attn,
"removed_indexes_s": removed_indexes_s, # used for visualization
}
return x, aux_dict
def forward(self, z, x, event_z, event_x,
ce_template_mask=None, ce_keep_rate=None,
tnc_keep_rate=None,
return_last_attn=False,Track=False):
x, aux_dict = self.forward_features(z, x, event_z, event_x, ce_template_mask=ce_template_mask, ce_keep_rate=ce_keep_rate,Track=Track)
return x, aux_dict
def _create_vision_transformer(pretrained=False, **kwargs):
model = VisionTransformerCE(**kwargs)
if pretrained:
if 'npz' in pretrained:
model.load_pretrained(pretrained, prefix='')
else:
checkpoint = torch.load(pretrained, map_location="cpu")
missing_keys, unexpected_keys = model.load_state_dict(checkpoint["model"], strict=False)
print('Load pretrained model from: ' + pretrained)
return model
def vit_base_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=768, depth=12, num_heads=12, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model
def vit_large_patch16_224_ce(pretrained=False, **kwargs):
""" ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
"""
model_kwargs = dict(
patch_size=16, embed_dim=1024, depth=24, num_heads=16, **kwargs)
model = _create_vision_transformer(pretrained=pretrained, **model_kwargs)
return model是主模型,代码② 是模型的损失:from . import BaseActor
from lib.utils.misc import NestedTensor
from lib.utils.box_ops import box_cxcywh_to_xyxy, box_xywh_to_xyxy
import torch
from lib.utils.merge import merge_template_search
from ...utils.heapmap_utils import generate_heatmap
from ...utils.ce_utils import generate_mask_cond, adjust_keep_rate
class CEUTrackActor(BaseActor):
""" Actor for training CEUTrack models """
def __init__(self, net, objective, loss_weight, settings, cfg=None):
super().__init__(net, objective)
self.loss_weight = loss_weight
self.settings = settings
self.bs = self.settings.batchsize # batch size
self.cfg = cfg
def __call__(self, data):
"""
args:
data - The input data, should contain the fields 'template', 'search', 'gt_bbox'.
template_images: (N_t, batch, 3, H, W)
search_images: (N_s, batch, 3, H, W)
returns:
loss - the training loss
status - dict containing detailed losses
"""
# forward pass
out_dict = self.forward_pass(data)
# compute losses
loss, status = self.compute_losses(out_dict, data)
return loss, status
def forward_pass(self, data):
# currently only support 1 template and 1 search region
assert len(data['template_images']) == 1
assert len(data['search_images']) == 1
assert len(data['template_event']) == 1
assert len(data['search_event']) == 1
template_list = []
for i in range(self.settings.num_template):
template_img_i = data['template_images'][i].view(-1,
*data['template_images'].shape[2:]) # (batch, 3, 128, 128)
# template_att_i = data['template_att'][i].view(-1, *data['template_att'].shape[2:]) # (batch, 128, 128)
template_list.append(template_img_i)
search_img = data['search_images'][0].view(-1, *data['search_images'].shape[2:]) # (batch, 3, 320, 320)
# search_att = data['search_att'][0].view(-1, *data['search_att'].shape[2:]) # (batch, 320, 320)
template_event = data['template_event'][0].view(-1, *data['template_event'].shape[2:])
search_event = data['search_event'][0].view(-1, *data['search_event'].shape[2:])
box_mask_z = None
ce_keep_rate = None
if self.cfg.MODEL.BACKBONE.CE_LOC:
box_mask_z = generate_mask_cond(self.cfg, template_list[0].shape[0], template_list[0].device,
data['template_anno'][0])
ce_start_epoch = self.cfg.TRAIN.CE_START_EPOCH
ce_warm_epoch = self.cfg.TRAIN.CE_WARM_EPOCH
ce_keep_rate = adjust_keep_rate(data['epoch'], warmup_epochs=ce_start_epoch,
total_epochs=ce_start_epoch + ce_warm_epoch,
ITERS_PER_EPOCH=1,
base_keep_rate=self.cfg.MODEL.BACKBONE.CE_KEEP_RATIO[0])
if len(template_list) == 1:
template_list = template_list[0]
out_dict = self.net(template=template_list,
search=search_img,
event_template=template_event,
event_search=search_event,
ce_template_mask=box_mask_z,
ce_keep_rate=ce_keep_rate,
return_last_attn=False)
return out_dict
def compute_losses(self, pred_dict, gt_dict, return_status=True):
# gt gaussian map
gt_bbox = gt_dict['search_anno'][-1] # (Ns, batch, 4) (x1,y1,w,h) -> (batch, 4)
gt_gaussian_maps = generate_heatmap(gt_dict['search_anno'], self.cfg.DATA.SEARCH.SIZE, self.cfg.MODEL.BACKBONE.STRIDE)
gt_gaussian_maps = gt_gaussian_maps[-1].unsqueeze(1)
# Get boxes
pred_boxes = pred_dict['pred_boxes']
if torch.isnan(pred_boxes).any():
raise ValueError("Network outputs is NAN! Stop Training")
num_queries = pred_boxes.size(1)
pred_boxes_vec = box_cxcywh_to_xyxy(pred_boxes).view(-1, 4) # (B,N,4) --> (BN,4) (x1,y1,x2,y2)
gt_boxes_vec = box_xywh_to_xyxy(gt_bbox)[:, None, :].repeat((1, num_queries, 1)).view(-1, 4).clamp(min=0.0,
max=1.0) # (B,4) --> (B,1,4) --> (B,N,4)
# compute giou and iou
try:
giou_loss, iou = self.objective['giou'](pred_boxes_vec, gt_boxes_vec) # (BN,4) (BN,4)
except:
giou_loss, iou = torch.tensor(0.0).cuda(), torch.tensor(0.0).cuda()
# compute l1 loss
l1_loss = self.objective['l1'](pred_boxes_vec, gt_boxes_vec) # (BN,4) (BN,4)
# compute location loss
if 'score_map' in pred_dict:
location_loss = self.objective['focal'](pred_dict['score_map'], gt_gaussian_maps)
else:
location_loss = torch.tensor(0.0, device=l1_loss.device)
rank_loss = self.loss_rank(pred_dict,gt_dict['search_anno'], gt_dict['template_anno'])
# weighted sum
loss = self.loss_weight['giou'] * giou_loss + self.loss_weight['l1'] * l1_loss + self.loss_weight['focal'] * location_loss + rank_loss*1.2
if return_status:
# status for log
mean_iou = iou.detach().mean()
status = {"Loss/total": loss.item(),
"Loss/giou": giou_loss.item(),
"Loss/l1": l1_loss.item(),
"Loss/location": location_loss.item(),
"IoU": mean_iou.item()}
return loss, status
else:
return loss
def _random_permute(self,matrix):
# matrix = random.choice(matrix)
b, c, h, w = matrix.shape
idx = [ torch.randperm(c).to(matrix.device) for i in range(b)]
idx = torch.stack(idx, dim=0)[:, :, None, None].repeat([1,1,h,w])
# idx = torch.randperm(c)[None,:,None,None].repeat([b,1,h,w]).to(matrix.device)
matrix01 = torch.gather(matrix, 1, idx)
return matrix01
def crop_flag(self, flag, global_index_s, global_index_t,H1 = 64, H2 = 256):
B,Ls = global_index_s.shape
B, Lt = global_index_t.shape
B,C,L1,L2 = flag.shape
flag_t = flag[:,:,:H1,:]
flag_s = flag[:,:,H1:,:]
flag_t = torch.gather(flag_t,2,global_index_t[:,None,:,None].repeat([1,C,1,L2]).long())
flag_s = torch.gather(flag_s,2,global_index_s[:,None,:,None].repeat([1,C,1,L2]).long())
flag = torch.cat([flag_t, flag_s], dim = 2)
flag_t = flag[:,:,:,:H1]
flag_s = flag[:,:,:,H1:]
flag_t = torch.gather(flag_t,3,global_index_t[:,None,None,:].repeat([1,C,int(Ls+Lt),1]).long())
flag_s = torch.gather(flag_s,3,global_index_s[:,None,None,:].repeat([1,C,int(Ls+Lt),1]).long())
flag = torch.cat([flag_t, flag_s], dim = 3)
B, C, L11, L12 = flag.shape
try:
assert(L11 == int(Lt + Ls))
assert(L12 == int(Lt + Ls))
except:
print('L11:{}, L12:{}, L1:{}, L2:{}'.format(L11, L12, L1, L2))
return flag
def crop_fusion(self, flag, attn, global_index_s, global_index_t,H1 = 64, H2 = 256 ):
flag = self.crop_flag(flag=flag, global_index_s=global_index_s, global_index_t=global_index_t)
B,C,L1,L2 = flag.shape
Ba, Ca, La, La2 = attn.shape
_,idx1 = flag.mean(dim=3,keepdim=False).sort(dim=2,descending=True)
# print('shape of flag:{}, idx1:{}'.format(flag.shape, idx1[:,:,:32,None].repeat([1,Ca,1,L2]).shape))
flag = torch.gather(flag,2,idx1[:,:,:32,None].repeat([1,C,1,L2]).long())
attn = torch.gather(attn,2,idx1[:,:,:32,None].repeat([1,Ca,1,L2]).long())
_,idx2 = flag.mean(dim=2,keepdim=False).sort(dim=2,descending=True)
flag = torch.gather(flag,3,idx2[:,:,None,:32].repeat([1,C,32,1]).long())
attn = torch.gather(attn,3,idx2[:,:,None,:32].repeat([1,Ca,32,1]).long())
return attn * flag
def loss_rank(self, outputs, targetsi, temp_annoi=None):
"""Compute the losses related to the bounding boxes, the L1 regression loss and the GIoU loss
targets dicts must contain the key "boxes" containing a tensor of dim [nb_target_boxes, 4]
The target boxes are expected in format (center_x, center_y, h, w), normalized by the image size.
"""
attn = outputs['attn']
# print('attn shape:{}'.format(attn.shape))
attn1 = torch.cat([attn[:,:,114:344,57:114], attn[:,:,114:344,344:]],dim=3)
attn1 = attn1.mean(dim=0, keepdim=True).mean(dim=1, keepdim=True)
attn2 = torch.cat([attn[:,:,344:,:57], attn[:,:,344:,114:344]],dim=3)
attn2 = attn2.mean(dim=0, keepdim=True).mean(dim=1, keepdim=True)
# print('attn1 shape:{},attn2 shape:{}, attn:{}'.format(attn1.shape,attn2.shape,attn.shape))
# attn = self._random_permute(attn)
# attn = attn[:,:,:,:]
# B1, C1, H1, W1 = attn.shape
# global_index_s = outputs['out_global_s']
# global_index_t = outputs['out_global_t']
# try:
# assert((global_index_s.shape[1] + global_index_t.shape[1])== int(H1/2))
# except:
# print('Falut,shape of attn:{}, s:{}, t:{}'.format(attn.shape,global_index_s.shape, global_index_t.shape ))
# H1 = int(64)
# H2 = int(256)
# l_t = int(math.sqrt(64))
# l_s = int(math.sqrt(256))
# temp_anno = temp_annoi[0,:,:]
# targets = targetsi[0,:,:]
# r_s = torch.arange(l_s).to(temp_anno.device)
# r_t = torch.arange(l_t).to(temp_anno.device)
# r_t = r_t[None,:].repeat([B1,1])
# cx, cy, w, h = temp_anno[:,0:1], temp_anno[:,1:2], temp_anno[:,2:3], temp_anno[:,3:4]
# cx *= l_t
# cy *= l_t
# w *= l_t
# h *= l_t
# flagx_01 = r_t >= cx - w/2
# flagx_02 = r_t <= cx + w/2
# flagy_02 = r_t >= cy - h/2
# flagy_01 = r_t <= cy + h/2
# flagx = flagx_01.float()*flagx_02.float()
# flagy = flagy_01.float()*flagy_02.float()
# flagx = flagx[:,None,:].repeat([1,l_t,1])
# flagy = flagy[:,:,None].repeat([1,1,l_t])
# flag = flagx*flagy
# flagt = flag.reshape([B1, H1])
# cx, cy, w, h = targets[:,0:1], targets[:,1:2], targets[:,2:3], targets[:,3:4]
# cx *= l_s
# cy *= l_s
# w *= l_s
# h *= l_s
# flagx_01 = r_s >= cx - w/2
# flagx_02 = r_s <= cx + w/2
# flagy_02 = r_s >= cy - h/2
# flagy_01 = r_s <= cy + h/2
# flagx = flagx_01.float()*flagx_02.float()
# flagy = flagy_01.float()*flagy_02.float()
# flagx = flagx[:,None,:].repeat([1,l_s,1])
# flagy = flagy[:,:,None].repeat([1,1,l_s])
# flag = flagx*flagy
# flags = flag.reshape([B1, H2])
# flag = torch.cat([flagt, flags], dim=1)
# flag_total = flag[:,:,None].repeat([1,1,int(H1+H2)]) * flag[:,None,:].repeat([1,int(H1+H2),1])
# attn1 = self.crop_fusion(flag_total[:,None,:,:], attn, global_index_s, global_index_t)
attn = torch.cat([attn1, attn2],dim=1)
B, C, H, W = attn.shape
# _,s1,_ = torch.svd(attn1.reshape([B*C, H, W]))
_,s1,_ = torch.svd(attn.reshape([B*C, H, W]))
s01 = torch.abs(s1 - 1)
return torch.mean(s01),此处的rank loss计算的是什么,对应于vit_ce中的什么?
|
8b7faacbb96ed30fb45f0a192575168c
|
{
"intermediate": 0.3723090589046478,
"beginner": 0.4451221823692322,
"expert": 0.18256868422031403
}
|
48,173
|
Hello ChatGPT, I need some matplotlib code written to display a figure. I need to load the following csv file: exposure_bias_non_gap.csv
it looks like this: model prefix Rep_2
gpt2 20 0.0456923548707846
gpt2 40 0.0376174829107073
gpt2 60 0.0385942191087302
gpt2 80 0.0399946315930748
gpt2 100 0.0160664433651837
gpt2+knn 20 0.0455856574444207
gpt2+knn 40 0.0391952024753186
gpt2+knn 60 0.0376619694566408
gpt2+knn 80 0.0339169018652571
gpt2+knn 100 0.0337967929147069
gpt2+ft 20 0.0381663910336764
gpt2+ft 40 0.0473735821920047
gpt2+ft 60 0.0540370239182715
gpt2+ft 80 0.0567987647160258
gpt2+ft 100 0.0607158716726295
gpt2+ft+knn 20 0.0350845336064363
gpt2+ft+knn 40 0.0431638175747123
gpt2+ft+knn 60 0.0484514332141914
gpt2+ft+knn 80 0.054032185896671
gpt2+ft+knn 100 0.0597133626728642
I want to create a plot where the x axis is the prefix [20, 40, 60, 80, 100] and the y axis is the rep2 score (floating point integers). I want each of the different labels ['gpt2, 'gpt+knn', 'gpt2+ft', 'gpt2+ft+knn'] are colored differently as well.
|
3226f1cf03a84d8fe97aaa0b0fc33da0
|
{
"intermediate": 0.3885008990764618,
"beginner": 0.2789492905139923,
"expert": 0.3325497806072235
}
|
48,174
|
multipart/form-data
what is it. explain like im 5
|
6d5f8d5936c73fb00a419b3fdba8ff1d
|
{
"intermediate": 0.37924084067344666,
"beginner": 0.2952773869037628,
"expert": 0.32548174262046814
}
|
48,175
|
generate 1 color image with random resolution from 50x50 to 150x150 pixels using python
|
f5a83292a5d6f580ff824a890310c063
|
{
"intermediate": 0.2849481403827667,
"beginner": 0.16303536295890808,
"expert": 0.5520164370536804
}
|
48,176
|
import numpy as np
import pandas as pd
from backtesting import Strategy, Backtest
import talib
import MetaTrader5 as mt5
mt5.initialize()
def get_historical_data(symbol, timeframe, start_pos, count):
rates = mt5.copy_rates_from_pos(symbol, timeframe, start_pos, count)
df = pd.DataFrame(rates)
df['time'] = pd.to_datetime(df['time'], unit='s') # Convert time to datetime
df.set_index('time', inplace=True) # Set time as index
df.drop(['spread', 'real_volume'], axis=1, inplace=True) # Drop unnecessary columns
# Rename columns to match backtesting library requirements
df.rename(columns={'open': 'Open', 'high': 'High', 'low': 'Low', 'close': 'Close'}, inplace=True)
return df[['Open', 'High', 'Low', 'Close']]
def macd(self):
macd, signal = talib.MACD(self.data['close'], fastperiod=12, slowperiod=26, signalperiod=9)
if macd.iloc[-1] > signal.iloc[-1]:
return "Buy"
elif macd.iloc[-1] < signal.iloc[-1]:
return "Sell"
def twin_range_filter(self):
close = self.data['close']
def smoothrng(x, t, m):
wper = t * 2 - 1
avrng = talib.EMA(np.abs(x.diff()), timeperiod=t)
smoothrng = talib.EMA(avrng, timeperiod=wper) * m
return smoothrng
per1, mult1, per2, mult2 = 27, 1.6, 55, 2.0
smrng1 = smoothrng(close, per1, mult1)
smrng2 = smoothrng(close, per2, mult2)
smrng = (smrng1 + smrng2) / 2
def rngfilt(x, r):
rngfilt = x.copy()
for i in range(1, len(x)):
prev_val = rngfilt.iloc[i-1]
if x.iloc[i] > prev_val:
rngfilt.iloc[i] = max(prev_val, x.iloc[i] - r.iloc[i])
else:
rngfilt.iloc[i] = min(prev_val, x.iloc[i] + r.iloc[i])
return rngfilt
filt = rngfilt(close, smrng)
STR = filt + smrng
STS = filt - smrng
FUB = [STR.iloc[0]]
FLB = [STS.iloc[0]]
for i in range(1, len(df)):
FUB.append(STR.iloc[i] if (STR.iloc[i] < STR.iloc[i-1]) or (close.iloc[i-1] > FUB[i-1]) else FUB[i-1])
FLB.append(STS.iloc[i] if (STS.iloc[i] > STS.iloc[i-1]) or (close.iloc[i-1] < FLB[i-1]) else FLB[i-1])
FUB = np.array(FUB)
FLB = np.array(FLB)
TRF = [FUB[0]]
for i in range(1, len(df)):
last_trf = TRF[-1]
if (last_trf == FUB[i-1] and close.iloc[i] <= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] <= FLB[i]):
TRF.append(FUB[i])
elif (last_trf == FUB[i-1] and close.iloc[i] >= FUB[i]) or (last_trf == FLB[i-1] and close.iloc[i] >= FLB[i]):
TRF.append(FLB[i])
else:
TRF.append(FUB[i])
TRF = np.array(TRF)
long_signal = (close > np.roll(TRF, 1))[1:]
short_signal = (close < np.roll(TRF, 1))[1:]
self.data['TRF'] = TRF
self.data['long_signal'] = np.append([False], long_signal)
self.data['short_signal'] = np.append([False], short_signal)
if self.data.iloc[-1]['long_signal']:
return "Buy"
elif self.data.iloc[-1]['short_signal']:
return "Sell"
def detect_engulfing(self):
for i in range(1, len(self.data)):
current = self.data.iloc[i].copy()
previous = self.data.iloc[i-1].copy()
if np.abs(current['open'] - previous['close']) > 0.005:
current['open'] = previous['close']
if previous['open'] > previous['close'] and \
current['close'] > current['open'] and \
current['close'] >= previous['open'] and \
previous['close'] >= current['open'] and \
current['close'] - current['open'] > previous['open'] - previous['close']:
return "Bullish Engulfing"
elif previous['close'] > previous['open'] and \
current['open'] > current['close'] and \
current['open'] >= previous['close'] and \
previous['open'] >= current['close'] and \
current['open'] - current['close'] > previous['close'] - previous['open']:
return "Bearish Engulfing"
else:
return "No Engulfing"
class EngulfingStrategy(Strategy):
def init(self):
self.macd = self.I(self.macd)
self.trf = self.I(self.twin_range_filter)
self.engulfing = self.I(self.detect_engulfing)
def next(self):
# Check for bullish engulfing condition
if self.macd() == "Sell" and self.trf() == "Buy" and self.engulfing() == "Bullish Engulfing":
self.buy()
# Check for bearish engulfing condition
elif self.macd() == "Buy" and self.trf() == "Sell" and self.engulfing() == "Bearish Engulfing":
self.sell()
#Define your backtest parameters
symbol = 'XAUUSDm' # Example symbol
timeframe = mt5.TIMEFRAME_M5 # Example timeframe (H1)
data = get_historical_data(symbol, timeframe, 0, 10000) # Example: Get 1000 bars of historical data
bt = Backtest(data, EngulfingStrategy, cash=10000, commission=0.0)
# Run the backtest
stats = bt.run()
# Print the performance statistics
print(stats)
|
a1d3c2a3616cc1c2cf4f6ddcf5ca1b07
|
{
"intermediate": 0.4968603849411011,
"beginner": 0.30280935764312744,
"expert": 0.2003302425146103
}
|
48,177
|
what is gradio?
|
92f5051aeaec076513d34ff5fe97b5f5
|
{
"intermediate": 0.24201053380966187,
"beginner": 0.12483757734298706,
"expert": 0.6331518292427063
}
|
48,178
|
Make CSS code for this HTML, which is a redesign. Use a code block for this.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Progresswire Coo</title>
<link rel="stylesheet" href="newstyle.css">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="This is Progresswire Coo! It is a site by the creators behind Progresswire Coo.">
</head>
<body>
<header>
<img src="progresswirecoo.png" alt="Progresswire Coo Logo">
<h1>Progresswire Coo</h1>
</header>
<p>Hello! This is the Progresswire Coo home page thing.</p>
<h2>Links</h2>
<ul>
<li><a href="news.html">News</a><br><small>See the old page <a href="old.html">here</a>. This is WIP.</small></li>
<li><a href="terminal/index.html">Terminal</a><br><small>Incomplete</small></li>
<li><a href="#">Text Edition</a><br><small><a href="textedition.png">What does it look like?</a></small></li>
<li><a href="https://linktr.ee/i4kthetf1thingy">Download Progresswire Coo 1 (Linktree)</a><br><small>The project that started it all.</small></li>
<li><a href="ProgresswireCoo.wav">Download the Progresswire Coo music</a><br><small>If you want to listen to the Progresswire Coo music, click here.</small></li>
<li><a href="bootstrapver.html">Bootstrap version</a><br><small>This design is still supported along with this one.</small></li>
</ul>
</body>
</html>
|
1d1a4abeb4c5f84bc7b51f7e26fdbfe9
|
{
"intermediate": 0.3535723090171814,
"beginner": 0.26828986406326294,
"expert": 0.3781377971172333
}
|
48,179
|
Make a CSS for this that is a redesign. Use code blocks, no comments, flexes, and the color #447db7. Also don't use smart quotes
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Progresswire Coo</title>
<link rel="stylesheet" href="newstyle.css">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="This is Progresswire Coo! It is a site by the creators behind Progresswire Coo.">
</head>
<body>
<header>
<img src="progresswirecoo.png" alt="Progresswire Coo Logo">
<h1>The Progresswire Coo homepage</h1>
</header>
<p>Hello! This is the Progresswire Coo home page thing.</p>
<ul>
<li><a href="news.html">News</a><br><small>See the old page <a href="old.html">here</a>. This is WIP.</small></li>
<li><a href="terminal/index.html">Terminal</a><br><small>Incomplete</small></li>
<li><a href="#">Text Edition</a><br><small><a href="textedition.png">What does it look like?</a></small></li>
<li><a href="https://linktr.ee/i4kthetf1thingy">Download Progresswire Coo 1 (Linktree)</a><br><small>The project that started it all.</small></li>
<li><a href="ProgresswireCoo.wav">Download the Progresswire Coo music</a><br><small>If you want to listen to the Progresswire Coo music, click here.</small></li>
<li><a href="bootstrapver.html">Bootstrap version</a><br><small>This design is still supported along with this one.</small></li>
</ul>
</body>
</html>
|
f448168a22e57ab1b701af710631399a
|
{
"intermediate": 0.3376358151435852,
"beginner": 0.3734409511089325,
"expert": 0.2889232039451599
}
|
48,180
|
Make a CSS for this redesign. Use code blocks, no comments, flexes and the color #447db7
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Progresswire Coo</title>
<link rel="stylesheet" href="newstyle.css">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="This is Progresswire Coo! It is a site by the creators behind Progresswire Coo.">
</head>
<body>
<header>
<img src="progresswirecoo.png" alt="Progresswire Coo Logo">
<h1>The Progresswire Coo homepage</h1>
</header>
<p>Hello! This is the Progresswire Coo home page thing.</p>
<h2>Links</h2>
<ul>
<li><a href="news.html">News</a><br><small>See the old page <a href="old.html">here</a> (WARNING: NO STYLE). This is WIP.</small></li>
<li><a href="terminal/index.html">Terminal</a><br><small>Incomplete</small></li>
<li><a href="#">Text Edition</a><br><small><a href="textedition.png">What does it look like?</a></small></li>
<li><a href="https://linktr.ee/i4kthetf1thingy">Download Progresswire Coo 1 (Linktree)</a><br><small>The project that started it all.</small></li>
<li><a href="ProgresswireCoo.wav">Download the Progresswire Coo music</a><br><small>If you want to listen to the Progresswire Coo music, click here.</small></li>
<li><a href="bootstrapver.html">Bootstrap version</a><br><small>This design is still supported along with this one.</small></li>
</ul>
</body>
</html>
|
35608a9f6ece3e22d7d52b2c2da318de
|
{
"intermediate": 0.36556583642959595,
"beginner": 0.2306639850139618,
"expert": 0.40377017855644226
}
|
48,181
|
Make a CSS for this redesign. Use code blocks, no comments, flexes and the color #447db7
<!DOCTYPE html>
<html>
<head>
<meta charset=“utf-8”>
<title>Progresswire Coo</title>
<link rel=“stylesheet” href=“newstyle.css”>
<meta name=“viewport” content=“width=device-width, initial-scale=1.0”>
<meta name=“description” content=“This is Progresswire Coo! It is a site by the creators behind Progresswire Coo.”>
</head>
<body>
<header>
<img src=“progresswirecoo.png” alt=“Progresswire Coo Logo”>
<h1>The Progresswire Coo homepage</h1>
</header>
<p>Hello! This is the Progresswire Coo home page thing.</p>
<h2>Links</h2>
<ul>
<li><a href=“news.html”>News</a><br><small>See the old page <a href=“old.html”>here</a> (WARNING: NO STYLE). This is WIP.</small></li>
<li><a href=“terminal/index.html”>Terminal</a><br><small>Incomplete</small></li>
<li><a href=“#”>Text Edition</a><br><small><a href=“textedition.png”>What does it look like?</a></small></li>
<li><a href=“https://linktr.ee/i4kthetf1thingy”>Download Progresswire Coo 1 (Linktree)</a><br><small>The project that started it all.</small></li>
<li><a href=“ProgresswireCoo.wav”>Download the Progresswire Coo music</a><br><small>If you want to listen to the Progresswire Coo music, click here.</small></li>
<li><a href=“bootstrapver.html”>Bootstrap version</a><br><small>This design is still supported along with this one.</small></li>
</ul>
</body>
</html>
|
aaf6e918680e9b537a79cd842f92a906
|
{
"intermediate": 0.35384154319763184,
"beginner": 0.2902519404888153,
"expert": 0.35590648651123047
}
|
48,182
|
Make a CSS for this redesign. Use code blocks, no comments, flexes and the color #447db7
<!DOCTYPE html>
<html>
<head>
<meta charset=“utf-8”>
<title>Progresswire Coo</title>
<link rel=“stylesheet” href=“newstyle.css”>
<meta name=“viewport” content=“width=device-width, initial-scale=1.0”>
<meta name=“description” content=“This is Progresswire Coo! It is a site by the creators behind Progresswire Coo.”>
</head>
<body>
<header>
<img src=“progresswirecoo.png” alt=“Progresswire Coo Logo”>
<h1>The Progresswire Coo homepage</h1>
</header>
<p>Hello! This is the Progresswire Coo home page thing.</p>
<h2>Links</h2>
<ul>
<li><a href=“news.html”>News</a><br><small>See the old page <a href=“old.html”>here</a> (WARNING: NO STYLE). This is WIP.</small></li>
<li><a href=“terminal/index.html”>Terminal</a><br><small>Incomplete</small></li>
<li><a href=“#”>Text Edition</a><br><small><a href=“textedition.png”>What does it look like?</a></small></li>
<li><a href=“https://linktr.ee/i4kthetf1thingy”>Download Progresswire Coo 1 (Linktree)</a><br><small>The project that started it all.</small></li>
<li><a href=“ProgresswireCoo.wav”>Download the Progresswire Coo music</a><br><small>If you want to listen to the Progresswire Coo music, click here.</small></li>
<li><a href=“bootstrapver.html”>Bootstrap version</a><br><small>This design is still supported along with this one.</small></li>
</ul>
</body>
</html>
|
578fbf6583287a3cb8a7399298bbb4c5
|
{
"intermediate": 0.35384154319763184,
"beginner": 0.2902519404888153,
"expert": 0.35590648651123047
}
|
48,183
|
Make a CSS for this redesign using code blocks, no comments, flexes and the color #447db7 for links and lightened/darkened for other stuff like headers
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Progresswire Coo</title>
<link rel="stylesheet" href="newstyle.css">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="This is Progresswire Coo! It is a site by the creators behind Progresswire Coo.">
</head>
<body>
<header>
<img src="progresswirecoo.png" alt="Progresswire Coo Logo">
<h1>The Progresswire Coo homepage</h1>
</header>
<p>Hello! This is the Progresswire Coo home page thing.</p>
<h2>Links</h2>
<ul>
<li><a href="news.html">News</a><br><small>See the old page <a href="old.html">here</a> (WARNING: NO STYLE). This is WIP.</small></li>
<li><a href="terminal/index.html">Terminal</a><br><small>Incomplete</small></li>
<li><a href="#">Text Edition</a><br><small><a href="textedition.png">What does it look like?</a></small></li>
<li><a href="https://linktr.ee/i4kthetf1thingy">Download Progresswire Coo 1 (Linktree)</a><br><small>The project that started it all.</small></li>
<li><a href="ProgresswireCoo.wav">Download the Progresswire Coo music</a><br><small>If you want to listen to the Progresswire Coo music, click here.</small></li>
<li><a href="bootstrapver.html">Bootstrap version</a><br><small>This design is still supported along with this one.</small></li>
</ul>
</body>
</html>
|
d306b0348aa86348b527a0ac486c96a4
|
{
"intermediate": 0.2883540689945221,
"beginner": 0.3006053864955902,
"expert": 0.4110405147075653
}
|
48,184
|
Merge these 2 CSS's into one.
Reference:
@import url('https://fonts.googleapis.com/css2?family=Overpass&display=swap');
@import url('https://fonts.googleapis.com/css2?family=Overpass:wght@700&display=swap');
body {
background-color: #f0f0f0;
font-family: "Overpass", sans-serif;
color: #333;
}
header {
color: #fff;
text-align: center;
padding: 50px;
margin: 20px;
}
h1 {
color: #447db7;
}
h2 {
color: #23558c;
}
ul {
list-style: none;
padding: 0;
display: grid;
grid-template-columns: 1fr 1fr;
grid-gap: 10px;
}
li {
border: 1px solid #ddd;
padding: 10px;
}
a {
color: #447db7;
text-decoration: none;
}
a:hover {
color: #23558c;
}
small {
color: #666;
}
main {
margin: 0px 4in;
}
CSS to be mixed with:
body {
font-family: system-ui;
padding: 20px;
background-color: #242424;
color: #fff;
}
h1 {
text-align: center;
}
img {
display: block;
margin: 0 auto;
width: 200px;
}
p {
text-align: center;
}
a {
color: #fff;
text-decoration: none;
}
hr {
border-color: #fff;
margin: 2rem 0;
}
p a:hover {
text-decoration: underline;
}
p a:visited {
color: #fff;
}
p a:focus,
p a:active {
color: #ccc;
}
p img {
max-width: 100%;
}
|
f1256952d3724651b72454ff301cd630
|
{
"intermediate": 0.41566428542137146,
"beginner": 0.3311455547809601,
"expert": 0.25319015979766846
}
|
48,185
|
write java code: Create concurrent Refinable Hashset data structure with 1 Million nodes and perform the basic operations (contains, insert, and remove) by varying the number of threads from 1 to 20 (i.e., 1, 2, 4, 6, 8, 10, 12 ... 20) and for different workloads (100C-0I-0D, 90C-9I-1D, 50C-25I-25D, 30C-35I-35D, 0C-50I-50D). Prefill the data structure with 50% of elements and duration of each run is 10 seconds.
To measure the throghput, consider the average of FIVE runs and also measure
the cache misses per 1000 operations using perf tool
|
e56b717dea27dab30bc5288384174fb1
|
{
"intermediate": 0.5024796724319458,
"beginner": 0.24879270792007446,
"expert": 0.24872764945030212
}
|
48,186
|
Сдейлай лаконичный пересказ на английском языке, рассказывающий об истоии The Teck Turquoise Tiara "The incredible Gloucester tiara collection includes multiple pieces inherited from Queen Mary, including the Teck Turquoise Tiara. Combining turquoises with diamonds in an intricate design, the tiara has been worn by two generations of Gloucester duchesses.
The suite from the Illustrated London News drawings of Queen Mary’s wedding gifts, 1893
The piece was originally made for Princess Mary Adelaide of Cambridge, a granddaughter of George III, who married Prince Francis, Duke of Teck. The tiara was a part of a parure of turquoise jewelry that was made for Mary Adelaide around 1850; the set also originally included a necklace, earrings, and three brooches. (Eventually, additional turquoise pieces were added to the suite of jewels.)
Queen Mary wears the tiara before its 1912 renovation
In 1893, the Duke and Duchess of Teck gave the parure to Princess Mary as a wedding present. Her new husband was, of course, the future King George V, meaning that the turquoises would adorn a British queen consort. Always tinkering with her jewels, the turquoise tiara didn’t escape Mary’s eye for adaptation. The tiara was originally taller, but Mary had its size reduced in 1912 by E. Wolff and Co., who often did work for Garrard.
The tiara and is accompanying jewels are displayed with Princess Alice’s wedding gifts
Two decades on, she gave the entire set to her new daughter-in-law, Alice, as a wedding present, echoing her own parents’ gift. The suite was included in the elaborate display of Alice’s wedding gifts, which you can learn more about here!
Princess Alice, Duchess of Gloucester wears the tiara and jewels at the Dorchester Hotel, 15 November 1955 (Keystone Pictures USA/Alamy)
Alice wore the tiara often, and even posed in it for a famous series of photographs taken by Cecil Beaton. Here, she wears the tiara and jewels in 1955 in London for a fashion show at the Dorchester Hotel.
Alice wears the tiara, with Prince William of Gloucester and the Queen Mother, during the Dutch state visit, 13 April 1972 (PA Images/Alamy)
She continued to wear the tiara throughout her life as a working royal. Above, Alice wears the suite in April 1972 during a state visit from Queen Juliana of the Netherlands. Also pictured here: the Queen Mother (more on her jewels from this occasion here) and Alice’s elder son, Prince William of Gloucester. William died only four months later in a plane crash.
JUSTIN TALLIS/AFP/Getty Images
Today, the set is worn by Alice’s daughter-in-law, Birgitte, the wife of the current Duke of Gloucester. It’s not her most-worn tiara (that prize probably goes to the Honeysuckle Tiara, another legacy from Queen Mary), but it pops up now and again, as it did in March 2015 for the Guildhall banquet during the Mexican state visit. You’ll recognize quite a few pieces from the parure on Birgitte in the image above, taken during that banquet."
|
03a89106f759ce77f3e2db4c203bfc58
|
{
"intermediate": 0.17757540941238403,
"beginner": 0.5293183326721191,
"expert": 0.2931062579154968
}
|
48,187
|
120/120 - 39s - 329ms/step - accuracy: 0.7124 - loss: 0.6110 - val_accuracy: 0.6900 - val_loss: 1.2085
Epoch 2/20
120/120 - 34s - 286ms/step - accuracy: 0.7751 - loss: 0.4634 - val_accuracy: 0.6900 - val_loss: 0.9913
Epoch 3/20
120/120 - 33s - 272ms/step - accuracy: 0.8211 - loss: 0.3980 - val_accuracy: 0.6200 - val_loss: 0.8050
Epoch 4/20
120/120 - 39s - 321ms/step - accuracy: 0.8294 - loss: 0.3657 - val_accuracy: 0.5900 - val_loss: 0.7533
Epoch 5/20
120/120 - 38s - 318ms/step - accuracy: 0.8771 - loss: 0.2988 - val_accuracy: 0.6000 - val_loss: 0.9564
Epoch 6/20
120/120 - 41s - 340ms/step - accuracy: 0.8821 - loss: 0.2879 - val_accuracy: 0.6900 - val_loss: 0.7461
Epoch 7/20
120/120 - 40s - 330ms/step - accuracy: 0.9030 - loss: 0.2489 - val_accuracy: 0.7200 - val_loss: 0.8114
Epoch 8/20
120/120 - 38s - 319ms/step - accuracy: 0.9089 - loss: 0.2373 - val_accuracy: 0.7000 - val_loss: 0.7795
Epoch 9/20
120/120 - 33s - 275ms/step - accuracy: 0.9381 - loss: 0.1824 - val_accuracy: 0.6900 - val_loss: 0.7876
Epoch 10/20
120/120 - 37s - 308ms/step - accuracy: 0.9323 - loss: 0.1813 - val_accuracy: 0.4800 - val_loss: 1.4136
Epoch 11/20
120/120 - 39s - 321ms/step - accuracy: 0.9515 - loss: 0.1505 - val_accuracy: 0.5200 - val_loss: 1.1952
Epoch 12/20
120/120 - 41s - 341ms/step - accuracy: 0.9482 - loss: 0.1518 - val_accuracy: 0.6300 - val_loss: 0.9851
Epoch 13/20
120/120 - 38s - 318ms/step - accuracy: 0.9624 - loss: 0.1264 - val_accuracy: 0.5100 - val_loss: 1.5504
Epoch 14/20
120/120 - 40s - 336ms/step - accuracy: 0.9640 - loss: 0.1234 - val_accuracy: 0.6100 - val_loss: 0.9077
Epoch 15/20
120/120 - 40s - 330ms/step - accuracy: 0.9548 - loss: 0.1379 - val_accuracy: 0.6000 - val_loss: 0.9919
Epoch 16/20
120/120 - 50s - 417ms/step - accuracy: 0.9615 - loss: 0.1096 - val_accuracy: 0.6400 - val_loss: 1.0118
Epoch 17/20
120/120 - 48s - 400ms/step - accuracy: 0.9783 - loss: 0.0867 - val_accuracy: 0.6300 - val_loss: 1.1686
Epoch 18/20
120/120 - 43s - 359ms/step - accuracy: 0.9808 - loss: 0.0797 - val_accuracy: 0.6900 - val_loss: 1.1012
Epoch 19/20
120/120 - 43s - 355ms/step - accuracy: 0.9666 - loss: 0.0974 - val_accuracy: 0.6600 - val_loss: 1.0525
Epoch 20/20
120/120 - 44s - 364ms/step - accuracy: 0.9916 - loss: 0.0558 - val_accuracy: 0.6400 - val_loss: 1.1115
<keras.src.callbacks.history.History at 0x142531a2490>
|
50347eec55f21ce1f8e92bbdce4cbfca
|
{
"intermediate": 0.26877427101135254,
"beginner": 0.4403032064437866,
"expert": 0.29092252254486084
}
|
48,188
|
what are this model can do that other LLMs cant do like chatGPT?
|
3fff2bc5732058a6c44b059d71f08873
|
{
"intermediate": 0.2707754373550415,
"beginner": 0.12431307137012482,
"expert": 0.6049114465713501
}
|
48,189
|
what are the free to use openai models using API?
|
5d90fec581cc2ae42bf035c7b0dca11f
|
{
"intermediate": 0.3350169062614441,
"beginner": 0.1075458899140358,
"expert": 0.5574371814727783
}
|
48,190
|
add channel as header in http request
|
1e16db83c0f9729b7b072ad734cb38b8
|
{
"intermediate": 0.2882600724697113,
"beginner": 0.3026033341884613,
"expert": 0.409136563539505
}
|
48,191
|
convert python lang grammar to ebnf
|
74282c3cd2c38af278c216b6d219172a
|
{
"intermediate": 0.25920674204826355,
"beginner": 0.4904835820198059,
"expert": 0.25030967593193054
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.