Hi! Please fix the parsing and saving of the response after a request to YandexGPT. It returns responses like this:
"Вот возможная сводка и биографии:
name: Иван
surname: Иванов
patronym: Иванович
birthdayat: 97.08.1979
diedat: 1980-е
epitaph: «Иван Иванович Иванов. Всю свою жизнь посвятил науке и просвещению. Его вклад в образование неоценим. Память о нём будет жить в сердцах всех, кто знал и любил его.»
placeOfBirth: Уфа
placeOfDeath: Уфа
children: сын и дочь
wifehusband: жена
education: УГАТУ (Уфимский государственный авиационный технический университет)
occupation: инженер-конструктор
awards: награды за выдающиеся достижения в области инженерии
title1: Жизнь и карьера
biography1: Иван Иванович Иванов родился 97.08.1979 в Уфе. После окончания УГАТУ он начал карьеру инженера-конструктора. Благодаря своему таланту и трудолюбию, он быстро продвигался по служебной лестнице и вскоре стал ведущим инженером. Он посвятил свою карьеру развитию авиационной промышленности и внёс значительный вклад в разработку новых моделей самолётов.
title2: Семья и личная жизнь
biography2: Иван Иванович был женат и имел двоих детей: сына и дочь. Он был любящим и заботливым отцом и мужем.
title3: Смерть и память
biography3: Иван Иванович скончался в 1980-х годах. Его смерть была тяжёлой утратой для его семьи и друзей. Его память будет жить в его работах и в сердцах всех, кто его знал.
conclusion: Иван Иванович оставил неизгладимый след в истории Уфы и всей страны благодаря своему вкладу в развитие авиации и инженерного дела. Его жизнь и наследие будут вдохновлять будущие поколения.
"
The information needs to be extracted from this by tag (name, surname, etc.) and saved to public_info. Here is the bot code:
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
name TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# MarkdownV2 escaping
def mdv2(text: str) -> str:
    escape_chars = [
        "_", "[", "]", "(", ")", "~", "`", ">",
        "#", "+", "-", "=", "|", "{", "}", ".", "!"
    ]
    for char in escape_chars:
        text = text.replace(char, f"\\{char}")  # a literal backslash, not an invalid escape sequence
    text = text.replace("**", "*").replace('"', '“')
    return text
# Callbacks
change_action_cb = CallbackData('change', 'action')
# Menu button
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
await bot.send_message(callback_query.from_user.id, "Введите номер вопроса, который хотите изменить:")
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
await set_next_question(user_id)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
    await message.answer("Введите номер вопроса, на который хотите изменить ответ:", reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
    text = message.text
    if not text.isdigit():
        await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
        return
    question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# GENERATION
FIELDS = ("name|surname|patronym|birthdayat|diedat|epitaph|placeOfBirth|placeOfDeath|"
          "children|wifehusband|education|occupation|awards|title[123]|biography[123]|conclusion")
def parse_yandexgpt(answer_text: str) -> dict:
    # (.*?) instead of (.?): the original captured at most one character of each value.
    # The lookahead stops only at the next known field name on a new line (or at end
    # of text), so multi-line values survive intact.
    pattern = re.compile(rf'\b({FIELDS})\b\s*:\s*(.*?)(?=\n\s*(?:{FIELDS})\b\s*:|\Z)', re.DOTALL)
    matches = pattern.findall(answer_text)
    return {key.strip(): value.strip() for key, value in matches}
async def save_result(user_id: int, answer_dict: dict):
async with aiosqlite.connect('base.db') as db:
columns = ", ".join(answer_dict.keys())
placeholders = ", ".join(["?"] * len(answer_dict))
values = list(answer_dict.values())
await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
await db.commit()
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
name = {}
surname = {}
patronym = {}
birthdayat = {}
diedat = {}
epitaph = {} # не больше 300 символов
placeOfBirth = {}
placeOfDeath = {}
children = {}
wifehusband = {} # Это супруг или супруга
education = {}
occupation = {} # Род деятельности человека
awards = {}
title1 = {}
biography1 = {}
title2 = {}
biography2 = {}
title3 = {}
biography3 = {}
conclusion = {}
В поле name должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (name), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph придумай сам, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
await bot.send_message(user_id, mdv2(answer), parse_mode="MarkdownV2")
await save_result(user_id, parse_yandexgpt(answer))
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# ADMIN PANEL
# Back button
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# Admin keyboard
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать", "В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
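For the parsing itself, a regex is fragile against small drifts in the model's output. Since the sample response is a flat `key: value` list, a line-based parser is a simpler alternative. A minimal sketch, with the field set taken from the sample response; `parse_summary` is a hypothetical helper, not part of the bot above:

```python
# Known field names from the sample YandexGPT response.
FIELDS = {
    "name", "surname", "patronym", "birthdayat", "diedat", "epitaph",
    "placeOfBirth", "placeOfDeath", "children", "wifehusband", "education",
    "occupation", "awards", "title1", "biography1", "title2", "biography2",
    "title3", "biography3", "conclusion",
}

def parse_summary(text: str) -> dict:
    """Collect known 'key: value' fields; glue continuation lines to the last key."""
    data = {}
    current = None
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() in FIELDS:
            # Line starts a known field.
            current = key.strip()
            data[current] = value.strip()
        elif current is not None and line.strip():
            # Non-empty line without a known key: continuation of the previous value.
            data[current] += " " + line.strip()
    return data
```

Lines before the first known field (like the model's "Вот возможная сводка и биографии:" preamble) are dropped automatically, and because only whitelisted keys end up in the dict, passing it to `save_result` keeps the column-name interpolation safe.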
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
await set_next_question(user_id)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
text = message.text
if not text.isdigit():
    await message.answer("Пожалуйста, введите номер вопроса цифрами:")
    return
question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# ГЕНЕРАЦИЯ
def parse_yandexgpt(answer_text: str) -> dict:
    # ключ в начале строки; значение тянется до следующего ключа или конца текста
    pattern = re.compile(
        r'^(name|surname|patronym|birthday_?at|died_?at|epitaph|placeOfBirth|placeOfDeath|children|wifehusband|education|occupation|awards|title[123]|biography[123]|conclusion)\s*[:=]\s*(.*?)(?=\n[A-Za-z_]+\s*[:=]|\Z)',
        re.DOTALL | re.MULTILINE | re.IGNORECASE)
    matches = pattern.findall(answer_text)
    # нормализуем ключ (birthday_at -> birthdayat), отбрасываем пустые значения и "null"
    data = {key.replace('_', ''): value.strip() for key, value in matches}
    return {k: v for k, v in data.items() if v and v.lower() != 'null'}
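A more defensive line-based alternative to the regex parser can be sanity-checked in isolation; this is a standalone sketch (function and key names here are illustrative, not the bot's) that survives the model's preamble, multi-line values, and "null" fields:

```python
import re

# Известные ключи сводки; сравниваем без регистра и подчёркиваний,
# чтобы birthday_at и birthdayat считались одним ключом
KNOWN_KEYS = {
    'name', 'surname', 'patronym', 'birthdayat', 'diedat', 'epitaph',
    'placeofbirth', 'placeofdeath', 'children', 'wifehusband',
    'education', 'occupation', 'awards',
    'title1', 'biography1', 'title2', 'biography2',
    'title3', 'biography3', 'conclusion',
}

def parse_summary(text: str) -> dict:
    data, current = {}, None
    for line in text.splitlines():
        m = re.match(r'\s*([A-Za-z][A-Za-z0-9_]*)\s*[:=]\s*(.*)', line)
        key = m.group(1).replace('_', '').lower() if m else None
        if key in KNOWN_KEYS:
            current = key
            data[current] = m.group(2).strip()
        elif current and line.strip():
            # строка без ключа — продолжение многострочного значения
            data[current] += ' ' + line.strip()
    # пустые значения и "null" не сохраняем
    return {k: v for k, v in data.items() if v and v.lower() != 'null'}

sample = "Вот возможная сводка:\nname: Иван\nsurname: Иванов\nepitaph: null"
print(parse_summary(sample))
```

The preamble line is skipped automatically because Cyrillic words never match the Latin-only key pattern, so no special-casing of «Вот возможная сводка…» is needed.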
async def save_result(user_id: int, answer_dict: dict):
    if not answer_dict:
        return  # нечего сохранять — не собираем пустой INSERT
    async with aiosqlite.connect('base.db') as db:
        # перезаписываем прежнюю сводку пользователя, чтобы не копить дубликаты
        await db.execute('DELETE FROM public_info WHERE user_id = ?', (user_id,))
        columns = ", ".join(answer_dict.keys())
        placeholders = ", ".join(["?"] * len(answer_dict))
        values = list(answer_dict.values())
        await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
        await db.commit()
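The dynamic INSERT built from the parsed dict can be exercised against an in-memory SQLite table; a minimal synchronous sketch (the `public_info` column set here is an assumption, values stay parameterized so nothing is interpolated into SQL):

```python
import sqlite3

# Таблица-заглушка с той же формой, что предполагается у public_info
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE public_info (user_id INTEGER, name TEXT, surname TEXT)')

answer_dict = {'name': 'Иван', 'surname': 'Иванов'}
columns = ', '.join(answer_dict)                  # список колонок из ключей сводки
placeholders = ', '.join('?' * len(answer_dict))  # по одному "?" на значение
conn.execute(
    f'INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})',
    [42, *answer_dict.values()],
)
row = conn.execute('SELECT user_id, name, surname FROM public_info').fetchone()
print(row)  # (42, 'Иван', 'Иванов')
```

Only column names are formatted into the query string; since they come from the fixed key whitelist in the parser, this does not open an injection path.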
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """"Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3 ) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
name = {}
surname = {}
patronym = {}
birthday_at = {}
died_at = {}
epitaph = {} # не больше 300 символов
placeOfBirth = {}
placeOfDeath = {}
children = {}
wifehusband = {} # Это супруг или супруга
education = {}
occupation = {} # Род деятельности человека
awards = {}
title1 = {}
biography1 = {}
title2 = {}
biography2 = {}
title3 = {}
biography3 = {}
conclusion = {}
В поле name должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (name), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph придумай сам, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
await bot.send_message(user_id, mdv2(answer), parse_mode="MarkdownV2")
await save_result(user_id, parse_yandexgpt(answer))
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# АДМИН-ПАНЕЛЬ
# КНОПКА НАЗАД
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# КЛАВА
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
|
d69fa8805f29c28e3c9ed030fa694c20
|
{
"intermediate": 0.1717446744441986,
"beginner": 0.7224477529525757,
"expert": 0.10580762475728989
}
|
46,489
|
Dim sourceWorkbook As Workbook
Dim destWorkbook As Workbook
Dim sourceSheet As Worksheet
Dim destSheet As Worksheet
Dim folderPath As String
Dim sourceFileName As String
Dim sourceFilePath As String
' Dynamically get the folder path of the workbook containing this script (.xlsm)
' and ensure it ends with a backslash
folderPath = ThisWorkbook.Path
If Right(folderPath, 1) <> "\" Then
folderPath = folderPath & "\"
End If
' Get the name of the first .xlsx file in the folder
sourceFileName = Dir(folderPath & "*.xlsx")
' Check if an .xlsx file was found
If sourceFileName = "" Then
MsgBox "No .xlsx file found in the same folder."
Exit Sub
End If
' Construct the full file path for the .xlsx file
sourceFilePath = folderPath & sourceFileName
' Set the destination workbook and sheet
' ThisWorkbook refers to the workbook containing this script (.xlsm)
Set destWorkbook = ThisWorkbook
Set destSheet = destWorkbook.Sheets(1) ' Adjust as needed if copying to a different sheet
' Attempt to open the source .xlsx file
On Error Resume Next ' In case the file doesn't open
Set sourceWorkbook = Workbooks.Open(sourceFilePath)
On Error GoTo 0 ' Turn back on regular error handling after attempt to open
' Check if the workbook was successfully opened
If sourceWorkbook Is Nothing Then
MsgBox "Failed to open the .xlsx file."
Exit Sub
End If
' Set the source sheet (assuming data is on the first sheet)
Set sourceSheet = sourceWorkbook.Sheets(1)
' Copy the used range from the source sheet to the destination sheet
sourceSheet.UsedRange.Copy Destination:=destSheet.Cells(2, 2) ' Pastes starting at cell B2 (row 2, column B)
' Close the source workbook without saving changes
sourceWorkbook.Close SaveChanges:=False
MsgBox "Data copied successfully from " & sourceFileName
From this code, I want the data copied from the source to always be pasted into the destination starting at column B, regardless of how it is laid out in the source.
|
613476fc4adc1fe00323340b87d2d23d
|
{
"intermediate": 0.4886661469936371,
"beginner": 0.22550170123577118,
"expert": 0.28583213686943054
}
|
46,490
|
Loading Linux linux-zen ...
error: file `/vmlinuz-linux-zen' not found.
Loading initial ramdisk ...
error: you need to load the kernel first.

Press any key to continue...
|
d6c5db587e6a2440f8047cff1691a828
|
{
"intermediate": 0.3574255108833313,
"beginner": 0.23204867541790009,
"expert": 0.4105258584022522
}
|
46,491
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
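The Dice term of `segmentation_loss` can be checked numerically outside TensorFlow; this is a NumPy re-implementation on toy masks (same formula and smoothing constant as above), offered as a sketch rather than a drop-in replacement:

```python
import numpy as np

# Соответствует dice_loss-части segmentation_loss: 1 - 2*(пересечение)/(объединение)
def dice_loss(y_true, y_pred, smooth=1e-5):
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

y_true = np.array([[1.0, 0.0], [1.0, 0.0]])
perfect = dice_loss(y_true, y_true)        # идентичные маски -> потеря ~0
disjoint = dice_loss(y_true, 1.0 - y_true) # нет пересечения -> потеря ~1
print(perfect, disjoint)
```

The two limiting cases (perfect overlap, no overlap) bracket the loss range, which is a quick way to catch a sign or normalization mistake after editing the TF version.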
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.resize(img, (image_size, image_size))
img = np.expand_dims(img, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title("Input Image")
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(seg_output[0, :, :, 0], cmap='gray')
plt.title("Segmentation Output")
plt.axis("off")
plt.subplot(1, 3, 3)
class_probabilities = class_output[0]
y_pos = np.arange(len(labels))
plt.barh(y_pos, class_probabilities, align='center')
plt.yticks(y_pos, labels)
plt.xlabel('Probability')
plt.title('Classification Output')
plt.gca().invert_yaxis()
plt.tight_layout()
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
display(uploader)
display(out)
display(clear_button)
"
The above is Python code using Keras to run inference with a multi-task learning model: it takes an image as input and produces one classification output and one segmentation output, and it lets the user select an image and perform inference. Help me make the output probability display look better.
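One way to make the printed probabilities easier to read is to sort them and render a small text bar next to each percentage. A minimal sketch in plain Python, assuming `labels` and the per-class probabilities are as in `predict_image` above; the helper name `format_probabilities` is made up for illustration:

```python
def format_probabilities(labels, probs, width=20):
    """Return probabilities as sorted 'label  bar  percent' lines."""
    pairs = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    lines = []
    for label, prob in pairs:
        bar = "#" * round(prob * width)  # text bar proportional to probability
        lines.append(f"{label:<10s} {bar:<{width}s} {prob:6.2%}")
    return "\n".join(lines)

print(format_probabilities(["bridge", "excess", "good", "insuff", "no"],
                           [0.05, 0.10, 0.70, 0.10, 0.05]))
```

The same string could replace the `print` loop inside `predict_image`, or be passed to `plt.title` for the bar-chart subplot.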
|
d886cae6ab4bbeea5cd25b82e522f57a
|
{
"intermediate": 0.36243683099746704,
"beginner": 0.3620155155658722,
"expert": 0.275547593832016
}
|
46,492
|
Dim fs As Object, folder As Object, file As Object
Dim ws As Worksheet
Dim i As Integer
Dim textFile As Integer
Dim fileContent As String
Dim RegEx As Object
Dim matches As Object
' Ensure references to the active worksheet and FileSystemObject
Set ws = ThisWorkbook.Sheets("Sheet1")
Set fs = CreateObject("Scripting.FileSystemObject")
' Using the folder where the workbook is saved
Set folder = fs.GetFolder(ThisWorkbook.Path)
' Start writing from row 2 in column G
i = 2
For Each file In folder.Files
If LCase(fs.GetExtensionName(file.Name)) = "txt" Then
textFile = FreeFile()
Open file.Path For Input As textFile
fileContent = ""
fileContent = Input(LOF(textFile), textFile)
Close textFile
' Debug output to help diagnose the issue
Debug.Print "Content of " & file.Name & ": " & fileContent
Set RegEx = CreateObject("VBScript.RegExp")
With RegEx
.Global = True
.IgnoreCase = True
.Pattern = "Placed\s*:\s*(\d+)"
End With
Set matches = RegEx.Execute(fileContent)
If matches.Count > 0 Then
Dim numericValue As String
numericValue = matches(0).SubMatches(0)
' Debug print the captured value for verification
Debug.Print "Captured Numeric Value: " & numericValue
' Write captured numeric value into column G starting from row 2
ws.Cells(i, 7).Value = numericValue
i = i + 1
Else
' If no match found, print a message
Debug.Print "No match found in " & file.Name
End If
' Clean up RegExp object after use
Set RegEx = Nothing
End If
Next file
' Clean up FileSystemObject after use
Set fs = Nothing
For this code, I want to search the .xlsm file for the exact name of each .txt file (without the .txt extension) and, in the same row where that name is found, put the numeric value that follows "Placed:" into column G.
|
32d522837d770f26b9dbb019f8779f4f
|
{
"intermediate": 0.3211047053337097,
"beginner": 0.4937071204185486,
"expert": 0.1851881742477417
}
|
46,493
|
Hi! Help me add a new feature to the bot. After the bot sends the user the answer from YandexGPT, an inline button should be attached to that message; pressing it should send the data the bot saved in public_info to the page-update call (the `# ОБНОВЛЕНИЕ СТРАНИЦЫ` section), and that page-update code also needs to be reworked for this purpose. Here is the code:
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
name TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# Обработка под MarkdownV2
def mdv2(text: str) -> str:
escape_chars = [
"_", "[", "]", "(", ")", "~", "`", ">",
"#", "+", "-", "=", "|", "{", "}", ".", "!"
]
for char in escape_chars:
text = text.replace(char, f"\\{char}")  # escape the backslash itself so MarkdownV2 receives \<char>
text = text.replace("**", "*").replace('"', '“')
return text
# калбэки
change_action_cb = CallbackData('change', 'action')
# КНОПКА МЕНЮ
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
await bot.send_message(callback_query.from_user.id, "Введите номер вопроса, который хотите изменить:")
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
await set_next_question(user_id)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
text = message.text
if not text.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# ГЕНЕРАЦИЯ
def parse_yandexgpt(answer_text: str) -> dict:
pattern = re.compile(
r'(name|surname|patronym|birthdayat|diedat|epitaph|placeOfBirth|placeOfDeath|children|wifehusband|education|occupation|awards|title[123]|biography[123]|conclusion): (.+?)(?=\n\w+: |\n\n|\Z)',
re.DOTALL)  # (.+?) instead of ([^:]+?) so values that themselves contain a colon are still captured
matches = pattern.findall(answer_text)
data = {key.strip(): value.strip() for key, value in matches}
return data
async def save_result(user_id: int, answer_dict: dict):
async with aiosqlite.connect('base.db') as db:
columns = ", ".join(answer_dict.keys())
placeholders = ", ".join(["?"] * len(answer_dict))
values = list(answer_dict.values())
print(columns)
print(placeholders)
print(values)
await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
await db.commit()
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """"Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3 ) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
name = {}
surname = {}
patronym = {}
birthday_at = {}
died_at = {}
epitaph = {} # не больше 300 символов
placeOfBirth = {}
placeOfDeath = {}
children = {}
wifehusband = {} # Это супруг или супруга
education = {}
occupation = {} # Род деятельности человека
awards = {}
title1 = {}
biography1 = {}
title2 = {}
biography2 = {}
title3 = {}
biography3 = {}
conclusion = {}
В поле name должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (name), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph придумай сам, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
await bot.send_message(user_id, mdv2(answer), parse_mode="MarkdownV2")
await save_result(user_id, parse_yandexgpt(answer))
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# ОБНОВЛЕНИЕ СТРАНИЦЫ
async def fetch_access_token():
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": "team1@hackathon.ru",
"password": "PA9dEa7x",
"device": "bot-v0.0.1"
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=payload, headers={'Accept': 'application/json', 'Content-Type': 'application/json;charset=UTF-8'}) as response:
if response.status == 200:
data = await response.json()
access_token = data.get('access_token')
print(f'Access Token: {access_token}')
await update_page(session, access_token) # Вызываем функцию update_page после получения токена
else:
print(f'Error: {response.status}')
async def update_page(session, access_token):
url = 'https://mc.dev.rand.agency/api/page/64577087'
payload = {
"id": 8734,
"name": "Хакатон",
"surname": None,
"patronym": None,
"birthday_at": "2024-01-01 00:00:00",
"died_at": "2024-01-02 00:00:00",
"epitaph": "хз",
"start": {
"day": "01",
"month": "01",
"year": 2024
},
"end": {
"day": "02",
"month": "01",
"year": 2024
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30958,
"page_id": 8734,
"title": "pageInformation.placeOfDeath",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30959,
"page_id": 8734,
"title": "pageInformation.children",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
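For the page update to use the answers saved in public_info instead of the hard-coded values, the saved row can be mapped into the PUT payload before calling the API. A minimal sketch in pure Python; the helper name `build_page_payload` and the exact field mapping are assumptions based on the payload shown in `update_page`:

```python
def build_page_payload(info: dict) -> dict:
    """Map a public_info row (column name -> value) into page-update fields.

    Fields the model reported as the literal string 'null' are sent as None,
    matching the existing payload where missing values are None.
    """
    def val(key):
        v = info.get(key)
        return None if v in (None, "null") else v

    return {
        "name": val("name"),
        "surname": val("surname"),
        "patronym": val("patronym"),
        "birthday_at": val("birthdayat"),
        "died_at": val("diedat"),
        "epitaph": val("epitaph"),
        "biographies": [
            {"title": val(f"title{i}"), "description": val(f"biography{i}"), "order": i}
            for i in (1, 2, 3)
        ],
        "_method": "PUT",
    }
```

`update_page` could then merge this dict into its `payload` (keeping the ids and timestamps the API expects) instead of sending the fixed test values.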
|
206e039ad5f96eb60a9f8ce7c4944e60
|
{
"intermediate": 0.3506529927253723,
"beginner": 0.439303457736969,
"expert": 0.2100435346364975
}
|
46,494
|
Hi
|
f6b058df646aefc7e16d7cffae98b13a
|
{
"intermediate": 0.33010533452033997,
"beginner": 0.26984941959381104,
"expert": 0.400045245885849
}
|
46,495
|
Dim fs As Object, folder As Object, file As Object
Dim ws As Worksheet
Dim i As Integer
Dim textFile As Integer
Dim fileContent As String
Dim RegEx As Object
Dim matches As Object
' Ensure references to the active worksheet and FileSystemObject
Set ws = ThisWorkbook.Sheets("Sheet1")
Set fs = CreateObject("Scripting.FileSystemObject")
' Using the folder where the workbook is saved
Set folder = fs.GetFolder(ThisWorkbook.Path)
' Start writing from row 2 in column G
i = 2
For Each file In folder.Files
If LCase(fs.GetExtensionName(file.Name)) = "txt" Then
textFile = FreeFile()
Open file.Path For Input As textFile
fileContent = ""
fileContent = Input(LOF(textFile), textFile)
Close textFile
' Debug output to help diagnose the issue
Debug.Print "Content of " & file.Name & ": " & fileContent
Set RegEx = CreateObject("VBScript.RegExp")
With RegEx
.Global = True
.IgnoreCase = True
.Pattern = "Placed\s*:\s*(\d+)"
End With
Set matches = RegEx.Execute(fileContent)
If matches.Count > 0 Then
Dim numericValue As String
numericValue = matches(0).SubMatches(0)
' Debug print the captured value for verification
Debug.Print "Captured Numeric Value: " & numericValue
' Write captured numeric value into column G starting from row 2
ws.Cells(i, 7).Value = numericValue
i = i + 1
Else
' If no match found, print a message
Debug.Print "No match found in " & file.Name
End If
' Clean up RegExp object after use
Set RegEx = Nothing
End If
Next file
' Clean up FileSystemObject after use
Set fs = Nothing
For this code, before pasting values into column G, I want to search for the file names in the cells of column B and put the numeric value into column G on the same row.
|
273452012d5faae4a4b93b67894d6572
|
{
"intermediate": 0.4066333770751953,
"beginner": 0.35283419489860535,
"expert": 0.24053248763084412
}
|
46,496
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
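For illustration, the Dice term combined with BCE in `segmentation_loss` above can be restated in plain NumPy; `dice_loss_np` is a hypothetical helper for checking the arithmetic, not used by the model:

```python
import numpy as np

def dice_loss_np(y_true, y_pred, smooth=1e-5):
    """Dice loss as in segmentation_loss above:
    1 - 2 * (|A intersect B| + s) / (|A| + |B| + s)."""
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
```

Perfect overlap gives a value near 0, disjoint masks a value near 1, matching the behaviour the combined BCE + Dice loss relies on.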
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
    # Note: this function also reads X_train/y_train_* and X_val/y_val_*
    # from the script's global scope, not from its arguments.
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title('Classification Output')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
display(uploader)
display(out)
display(clear_button)
"
The above is Python code with Keras for doing inference with the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; this code lets the user select one image and perform inference. Help me remove the printed output but include the predicted class name in the plot.
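One minimal way to do this is to fold the predicted class and its probability into the subplot title instead of printing them; only the title-building helper is sketched here (`classification_title` is a hypothetical name), with the rest of `predict_image` unchanged:

```python
import numpy as np

def classification_title(class_probabilities, labels):
    """Build a plot title carrying the predicted class name and its
    probability, replacing the print statements in predict_image."""
    idx = int(np.argmax(class_probabilities))
    return (f"Classification Output\n"
            f"Predicted: {labels[idx]} ({class_probabilities[idx]:.2f})")
```

Inside `predict_image`, drop the `print(...)` calls and replace `axs[2].set_title('Classification Output')` with `axs[2].set_title(classification_title(class_probabilities, labels))`; the same string could equally go into `axs[0]`'s title next to "Input Image".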
|
e93c10cce654c08f7c9040f547cc053f
|
{
"intermediate": 0.36243683099746704,
"beginner": 0.3620155155658722,
"expert": 0.275547593832016
}
|
46,497
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
    # Note: this function also reads X_train/y_train_* and X_val/y_val_*
    # from the script's global scope, not from its arguments.
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title('Classification Output')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
display(uploader)
display(out)
display(clear_button)
"
The above is Python code using Keras to run inference with a multi-task learning model: it takes an image as input and produces one classification output and one segmentation output, and it lets the user select an image and perform inference. Help me place the Clear button right next to the upload button.
|
028029cea53a3df55104dd2ba3f9f037
|
{
"intermediate": 0.36243683099746704,
"beginner": 0.3620155155658722,
"expert": 0.275547593832016
}
|
46,498
|
Private Sub CommandButton2_Click()
Dim fs As Object, folder As Object, file As Object
Dim wbPath As String, ws As Worksheet
Dim i As Integer
' Set the worksheet object to the active sheet
Set ws = ThisWorkbook.Sheets("Sheet1")
' Initialize the FileSystemObject
Set fs = CreateObject("Scripting.FileSystemObject")
' Get the folder where this workbook is saved
Set folder = fs.GetFolder(ThisWorkbook.Path)
' Initialize row index for Sheet1 column G
i = 4
' Loop through each file in the directory
For Each file In folder.Files
If LCase(fs.GetExtensionName(file.Name)) = "txt" Then
' Open the text file
textFile = FreeFile
Open file.Path For Input As textFile
fileContent = Input(LOF(textFile), textFile)
Close textFile
' Use Regular Expression to find the numeric value after "Placed"
Set RegEx = CreateObject("VBScript.RegExp")
With RegEx
.Global = False
.IgnoreCase = True
.Pattern = "Placed\D*(\d+)"
End With
' If a matching pattern is found, extract the numeric value
If RegEx.Test(fileContent) Then
numericValue = RegEx.Execute(fileContent)(0).SubMatches(0)
' Write the numeric value into column G starting from row 4
ws.Cells(i, 7).Value = numericValue
i = i + 1
End If
' Clean up RegExp object
Set RegEx = Nothing
End If
Next file
' Clean up FileSystemObject
Set fs = Nothing
End Sub
Add this to the code: before pasting the numeric value into column G, first search for the .txt file names in this file, and paste each numeric value only into the corresponding row in column G. Show a message box saying "file found" if the name is found, and "file not found" if it is not.
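The extraction-and-matching logic being requested can be sketched outside VBA as well. Below is a minimal Python illustration (file names, cell values, and helper names are made up) of pulling the number after "Placed" and reporting found/not-found per file:

```python
import re

# Pattern equivalent to the VBA RegExp: "Placed" followed by the next run of digits
PLACED_RE = re.compile(r"Placed\D*(\d+)", re.IGNORECASE)

def extract_placed(text):
    """Return the numeric value after 'Placed', or None if absent."""
    m = PLACED_RE.search(text)
    return int(m.group(1)) if m else None

def match_files_to_rows(file_texts, column_names):
    """Map each .txt file to the row whose column entry matches its name.

    file_texts: {"a.txt": "...Placed 42...", ...} (hypothetical contents)
    column_names: list of names as they appear in the worksheet column.
    Returns (placements, not_found); placements maps row offset -> value,
    not_found lists files that would trigger the "file not found" MsgBox.
    """
    placements, not_found = {}, []
    for fname, text in file_texts.items():
        value = extract_placed(text)
        if fname in column_names:
            placements[column_names.index(fname)] = value
        else:
            not_found.append(fname)
    return placements, not_found
```

In the workbook itself this corresponds to looping over `folder.Files`, testing `RegEx.Test`, and writing to `ws.Cells(row, 7)`.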
|
3069f7ee65dfca26d7dba9d592e235cb
|
{
"intermediate": 0.4735398590564728,
"beginner": 0.2542262077331543,
"expert": 0.27223387360572815
}
|
46,499
|
import gradio as gr
import os
import sys
import json
import requests
MODEL = "gpt-4-0125-preview"
API_URL = os.getenv("API_URL")
DISABLED = os.getenv("DISABLED") == 'True'
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
print (API_URL)
print (OPENAI_API_KEY)
NUM_THREADS = int(os.getenv("NUM_THREADS"))
print (NUM_THREADS)
def exception_handler(exception_type, exception, traceback):
print("%s: %s" % (exception_type.__name__, exception))
sys.excepthook = exception_handler
sys.tracebacklimit = 0
#https://github.com/gradio-app/gradio/issues/3531#issuecomment-1484029099
def parse_codeblock(text):
lines = text.split("\n")
for i, line in enumerate(lines):
if "
|
978845a107be4b1d487966aa808e2fdc
|
{
"intermediate": 0.5010150074958801,
"beginner": 0.2961530089378357,
"expert": 0.20283201336860657
}
|
46,500
|
Hi, I have two QR-scanning scripts for Unity and I need to merge them into one: take the multithreaded processing from one and the on-screen output from the other. Here are the scripts:
public class NewQRScan : MonoBehaviour
{
[SerializeField]
private RawImage _rawImageBackground;
[SerializeField]
private AspectRatioFitter _aspectRatioFitter;
[SerializeField]
private TextMeshProUGUI _textOut;
[SerializeField]
private RectTransform _scanZone;
private bool _isCamAvaible;
private WebCamTexture _cameraTexture;
void Start()
{
SetUpCamera();
}
// Update is called once per frame
void Update()
{
UpdateCameraRender();
}
private void SetUpCamera()
{
WebCamDevice[] devices = WebCamTexture.devices;
if (devices.Length == 0)
{
_isCamAvaible = false;
return;
}
for (int i = 0; i < devices.Length; i++)
{
if (devices[i].isFrontFacing == false)
{
_cameraTexture = new WebCamTexture(devices[i].name, (int)_scanZone.rect.width, (int)_scanZone.rect.height);
break;
}
}
_cameraTexture.Play();
_rawImageBackground.texture = _cameraTexture;
_isCamAvaible = true;
}
private void UpdateCameraRender()
{
if (_isCamAvaible == false)
{
return;
}
float ratio = (float)_cameraTexture.width / (float)_cameraTexture.height;
_aspectRatioFitter.aspectRatio = ratio;
int orientation = _cameraTexture.videoRotationAngle;
orientation = orientation * 3;
_rawImageBackground.rectTransform.localEulerAngles = new Vector3(0, 0, orientation);
}
public void OnClickScan()
{
Scan();
}
private void Scan()
{
try
{
IBarcodeReader barcodeReader = new BarcodeReader();
Result result = barcodeReader.Decode(_cameraTexture.GetPixels32(), _cameraTexture.width, _cameraTexture.height);
if (result != null)
{
_textOut.text = result.Text;
}
else
{
_textOut.text = "Failed to Read QR CODE";
}
}
catch
{
_textOut.text = "FAILED IN TRY";
}
}
}
public class QRCodeScanner : MonoBehaviour
{
[SerializeField]
private string lastResult;
private WebCamTexture camTexture;
private Rect screenRect;
private Color32[] cameraColorData;
private int width, height;
private CancellationTokenSource cts = new CancellationTokenSource();
private bool startEncoding;
private bool startDecoding;
private BarcodeWriter writer;
private Result result;
public event Action<string> OnResult;
private BarcodeReader barcodeReader = new BarcodeReader
{
AutoRotate = false,
Options = new ZXing.Common.DecodingOptions
{
TryHarder = false
}
};
public void ActiveScanner()
{
SetupWebcamTexture();
PlayWebcamTexture();
cameraColorData = new Color32[width * height];
screenRect = new Rect(0, 0, Screen.width, Screen.height);
if (cts != null)
{
cts.Dispose();
}
cts = new CancellationTokenSource();
// Restart the camera and scanning
RestartScannerThreads();
}
private void OnEnable()
{
RestartScanner();
}
private void RestartScannerThreads()
{
ThreadPool.QueueUserWorkItem(new WaitCallback(GetCodeFromImageData), cts.Token);
ThreadPool.QueueUserWorkItem(new WaitCallback(EncodeNewFromLastResult), cts.Token);
}
private void Update()
{
if (camTexture != null && camTexture.isPlaying && !startDecoding)
{
camTexture.GetPixels32(cameraColorData);
startDecoding = true;
}
}
private void OnGUI()
{
DrawCameraTexture();
if (result != null)
{
HandleDecodedResult();
}
}
private void DrawCameraTexture()
{
if (camTexture != null)
{
// Save the current GUI matrix so it can be restored later
Matrix4x4 matrixBackup = GUI.matrix;
// Get the rotation angle and check for mirroring
float rotationAngle = -camTexture.videoRotationAngle;
// Add 180 degrees for the extra rotation
rotationAngle += 180;
bool isVerticallyMirrored = camTexture.videoVerticallyMirrored;
Rect rect = screenRect;
if (isVerticallyMirrored)
{
// If the image is mirrored, adjust the rect position
rect.y = Screen.height - rect.y - rect.height;
}
// Compute the pivot point of the rotation
Vector2 pivotPoint = new Vector2(rect.xMin + rect.width * 0.5f, rect.yMin + rect.height * 0.5f);
// Apply the rotation around the pivot point
GUIUtility.RotateAroundPivot(rotationAngle, pivotPoint);
// Draw the texture, accounting for rotation and mirroring
GUI.DrawTexture(rect, camTexture, ScaleMode.ScaleToFit);
// Restore the GUI matrix
GUI.matrix = matrixBackup;
}
}
private void HandleDecodedResult()
{
try
{
if (result != null)
{
// QR code decoded successfully
Debug.Log("Decoded QR: " + result.Text);
// Stop scanning and the camera
camTexture.Stop();
var resultURL = result.Text.Substring(0, result.Text.Length - 1);
OnResult?.Invoke(resultURL);
// Follow the link if needed
result = null;
//Application.OpenURL(resultURL);
}
}
catch (System.Exception ex)
{
Debug.LogWarning(ex.Message);
}
}
private void OnDestroy()
{
// Called when the object is destroyed
if (camTexture != null)
{
camTexture.Stop();
camTexture = null;
}
cts.Cancel();
cts.Dispose();
}
private void RestartScanner()
{
if (camTexture != null && !camTexture.isPlaying)
{
camTexture.Play();
}
startDecoding = false;
startEncoding = false;
ActiveScanner();
}
private void OnApplicationFocus(bool hasFocus)
{
if (hasFocus)
{
// Restart the scanner when the user returns to the app
RestartScanner();
}
}
private void SetupWebcamTexture()
{
camTexture = new WebCamTexture();
camTexture.requestedHeight = Screen.height;
camTexture.requestedWidth = Screen.width;
}
private void PlayWebcamTexture()
{
if (camTexture != null)
{
camTexture.Play();
width = camTexture.width;
height = camTexture.height;
}
}
private void EncodeNewFromLastResult(object obj)
{
CancellationToken token = (CancellationToken)obj;
while (!token.IsCancellationRequested)
{
if (startEncoding && lastResult != null)
{
startEncoding = !startEncoding;
}
}
}
private void GetCodeFromImageData(object obj)
{
CancellationToken token = (CancellationToken)obj;
while (!token.IsCancellationRequested)
{
// decode the current frame
if (startDecoding && cameraColorData != null)
{
result = barcodeReader.Decode(cameraColorData, width, height);
if (result != null)
{
lastResult = result.Text + " " + result.BarcodeFormat;
startEncoding = true;
}
startDecoding = !startDecoding;
}
}
}
}
|
d8e505cf2c5f28a6ba249465fd3e5f5a
|
{
"intermediate": 0.34353578090667725,
"beginner": 0.424638032913208,
"expert": 0.23182614147663116
}
|
46,501
|
Hi
|
6e3086a8ec95245dab6946b65f0892e7
|
{
"intermediate": 0.33010533452033997,
"beginner": 0.26984941959381104,
"expert": 0.400045245885849
}
|
46,502
|
write python code that simulates visually the solar system
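A visual simulation (e.g. with pygame or matplotlib) boils down to advancing each planet along its orbit every frame. A minimal sketch of the circular-orbit arithmetic such a program needs, with made-up radii and periods, looks like this:

```python
import math

# Hypothetical orbital parameters: (radius in pixels, period in days)
PLANETS = {
    "Mercury": (40, 88),
    "Earth": (100, 365),
    "Mars": (150, 687),
}

def orbit_position(radius, period_days, t_days):
    """Position on a circular orbit around the origin at time t (days)."""
    theta = 2 * math.pi * (t_days / period_days)  # fraction of the orbit completed
    return radius * math.cos(theta), radius * math.sin(theta)

def positions_at(t_days):
    """Positions of all planets at a given simulation time."""
    return {name: orbit_position(r, p, t_days) for name, (r, p) in PLANETS.items()}
```

A drawing loop would call `positions_at(t)` each frame, offset the coordinates by the screen centre, and draw one circle per planet.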
|
e4f740676b88d39e20d8dfa3984163f8
|
{
"intermediate": 0.3328382670879364,
"beginner": 0.24173706769943237,
"expert": 0.4254246950149536
}
|
46,503
|
Dim fs As Object, folder As Object, file As Object
Dim wbPath As String, ws As Worksheet
Dim i As Integer
' Set the worksheet object to the active sheet
Set ws = ThisWorkbook.Sheets("Sheet1")
' Initialize the FileSystemObject
Set fs = CreateObject("Scripting.FileSystemObject")
' Get the folder where this workbook is saved
Set folder = fs.GetFolder(ThisWorkbook.Path)
' Initialize row index for Sheet1 column G
i = 4
' Loop through each file in the directory
For Each file In folder.Files
If LCase(fs.GetExtensionName(file.Name)) = "txt" Then
' Open the text file
textFile = FreeFile
Open file.Path For Input As textFile
fileContent = Input(LOF(textFile), textFile)
Close textFile
' Use Regular Expression to find the numeric value after "Placed"
Set RegEx = CreateObject("VBScript.RegExp")
With RegEx
.Global = False
.IgnoreCase = True
.Pattern = "Placed\D*(\d+)"
End With
' If a matching pattern is found, extract the numeric value
If RegEx.Test(fileContent) Then
numericValue = RegEx.Execute(fileContent)(0).SubMatches(0)
' Write the numeric value into column G starting from row 4
ws.Cells(i, 7).Value = numericValue
i = i + 1
End If
' Clean up RegExp object
Set RegEx = Nothing
End If
Next file
' Clean up FileSystemObject
Set fs = Nothing
End Sub
For this code I want to add one more step: before pasting the numeric value into column G, search column B for the .txt file names without the extension, and then paste the numeric value into the corresponding row where the name was found.
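The column-B lookup step can be sketched as plain logic: strip the `.txt` extension from each file name and search the column values for it (the function and parameter names below are illustrative, not part of the macro):

```python
import os

def find_target_row(file_name, column_b_values, first_row=4):
    """Return the worksheet row whose column-B cell equals the file's
    base name (extension stripped), or None if there is no match."""
    base = os.path.splitext(file_name)[0]  # "report1.txt" -> "report1"
    for offset, cell in enumerate(column_b_values):
        if cell == base:
            return first_row + offset
    return None  # caller would show the "file not found" message
```

In VBA the same lookup is a loop comparing `ws.Cells(r, 2).Value` against `fs.GetBaseName(file.Name)`.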
|
99c0c9241d39bdf8c9f58c0b30e56576
|
{
"intermediate": 0.4770796597003937,
"beginner": 0.3008566200733185,
"expert": 0.22206373512744904
}
|
46,504
|
I have a problem: the image is upside down and I need to rotate it 90 degrees
[SerializeField] private RawImage rawImageBackground;
[SerializeField] private AspectRatioFitter aspectRatioFitter;
private WebCamTexture camTexture;
private CancellationTokenSource cts;
private Result result;
private BarcodeReader barcodeReader = new BarcodeReader
{
AutoRotate = false,
Options = new ZXing.Common.DecodingOptions
{
TryHarder = false
}
};
void Start()
{
cts = new CancellationTokenSource();
SetupWebcamTexture();
PlayWebcamTexture();
// Start multithreaded QR code scanning
StartScanningAsync(cts.Token);
}
void Update()
{
if (camTexture.width < 100)
{
return;
}
// Update the image and the aspect ratio
rawImageBackground.texture = camTexture;
rawImageBackground.material.mainTexture = camTexture;
aspectRatioFitter.aspectRatio = (float)camTexture.width / camTexture.height;
// Other per-frame UI updates or operations can also go here
}
private void SetupWebcamTexture()
{
WebCamDevice[] devices = WebCamTexture.devices;
string desiredName = devices.Length > 0 ? devices[0].name : null;
camTexture = new WebCamTexture(desiredName);
camTexture.requestedHeight = Screen.height;
camTexture.requestedWidth = Screen.width;
}
private void PlayWebcamTexture()
{
if (camTexture != null)
{
camTexture.Play();
}
}
private async void StartScanningAsync(CancellationToken ct)
{
await Task.Run(() =>
{
while (!ct.IsCancellationRequested)
{
try
{
if (camTexture.didUpdateThisFrame && camTexture.isPlaying)
{
var result = barcodeReader.Decode(camTexture.GetPixels32(), camTexture.width, camTexture.height);
if (result != null)
{
ct.ThrowIfCancellationRequested(); // Exit the loop if the operation was cancelled
UpdateTextOnMainThread(result.Text);
}
}
}
catch
{
// Handle exceptions (e.g. the operation was cancelled)
return;
}
}
}, ct);
}
void UpdateTextOnMainThread(string text)
{
try
{
if (result != null)
{
// QR code decoded successfully
Debug.Log("Decoded QR: " + result.Text);
// Stop scanning and the camera
camTexture.Stop();
var resultURL = result.Text.Substring(0, result.Text.Length - 1);
//OnResult?.Invoke(resultURL);
// Follow the link if needed
result = null;
Application.OpenURL(resultURL);
}
}
catch (System.Exception ex)
{
Debug.LogWarning(ex.Message);
}
}
private void OnDestroy() // Stop and clean up on destruction
{
cts.Cancel(); // Cancel the scanning task
if (camTexture != null)
{
camTexture.Stop();
camTexture = null;
}
}
}
|
48c44cdc2ce88f357d9af444c60c4592
|
{
"intermediate": 0.31356123089790344,
"beginner": 0.4527140259742737,
"expert": 0.23372472822666168
}
|
46,505
|
How to uninstall self-compiled FFmpeg from Fedora?
|
68c6416769bcda3a88c6422f47ade949
|
{
"intermediate": 0.5381168127059937,
"beginner": 0.2333441525697708,
"expert": 0.22853906452655792
}
|
46,506
|
raise libvirtError('virDomainCreateXML() failed')
libvirt.libvirtError: unsupported configuration: emulator "/usr/lib/xen/bin/qemu-system-i386" not found
|
5631305c0dedd2b0b15e5700d235ea33
|
{
"intermediate": 0.4497281610965729,
"beginner": 0.37677547335624695,
"expert": 0.17349636554718018
}
|
46,507
|
Hi! How would I find a title for a section of a document that contains the provisional schedule for the project, hand-out, tools, documentation requirements, hand-in, and the evaluation schedule?
|
75d41803127ca872aeae562b6c35f2ed
|
{
"intermediate": 0.38363713026046753,
"beginner": 0.16185803711414337,
"expert": 0.4545048177242279
}
|
46,508
|
Correct errors in the program code for C++: float FUNC(float x)
{
return -0.5+ 1.0/(4 - 3*cos(x));
}
float f(float x)
{
return (-3*sin(x))/pow((4-3*cos(x)),2);
}
//---------------------------------------------------------------------------
void __fastcall TForm1::Button1Click(TObject *Sender)
{
float a,b,e,x,Fa,Fx;
int i;
a=-2;
b=1;
i=0;
e = StrToFloat(Edit1->Text);
while (abs(b-a)>e) {
x=a;
Fa=FUNC(x);
x=(a+b)/2;
Fx=FUNC(x);
if (Fa*Fx<0) { a=x;
}
else
{
b=x;
}
i++;
}
Edit2-> Text=x;
Edit3-> Text=i;
}
//---------------------------------------------------------------------------
void __fastcall TForm1::Button3Click(TObject *Sender)
{
float x, x0, e;
int i;
e = StrToFloat(Edit7->Text);
x0 = -1;
i = 0;
x = x0;
do {
x0 = x;
x = -0.5 + (1 / (4 - 3 * cos(x0)));
i++;
} while (fabs(x - x0) > e);
Edit8->Text = FloatToStr(x);
Edit9->Text = IntToStr(i);
}
|
9deb820077362d0c8a847aec542a044c
|
{
"intermediate": 0.32421234250068665,
"beginner": 0.49038010835647583,
"expert": 0.18540748953819275
}
|
46,509
|
I have a 3d area and a 3d character i made in blender, area is a obj and character is fbx. make a pygame to test out character and map, after this i'll start coding the game
|
10d0e53f669f5485afd4acc34f26c86d
|
{
"intermediate": 0.38150259852409363,
"beginner": 0.22105607390403748,
"expert": 0.3974413573741913
}
|
46,510
|
def forward(self, query, key, value, mask=None):
if mask is not None:
mask = mask.unsqueeze(1)
batch_size = query.size(0)
print('query shape is', query.shape)
query, key, value = [model(x).view(batch_size, -1, self.Head, self.d_k).transpose(1,2)
for model, x in zip(self.linears, (query, key, value))]
x, self.attn = attention(query, key, value ,dropout=self.drop_out)
x = x.transpose(1,2).contiguous().view(batch_size, -1, self.Head * self.d_k)
return self.linears[-1](x)
This function raises "TypeError: view(): argument 'size' failed to unpack the object at pos 4 with error \"type must be tuple of ints, but got float\"". What is the cause? query, key, and value are all tensors of shape [2, 4, 512], and Head = 8.
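The error is consistent with `self.d_k` being a float: computing it as `d_model / Head` uses true division, so `512 / 8` is `64.0`, and `view()` only accepts integer sizes. A minimal sketch of the distinction (plain Python, no torch):

```python
d_model, heads = 512, 8

d_k_float = d_model / heads   # true division  -> 64.0, a float
d_k_int = d_model // heads    # floor division -> 64, an int

# torch.Tensor.view() requires every size to be an int, so the
# attention module should compute d_k with // (or wrap it in int()).
view_shape = (2, -1, heads, d_k_int)
```

With `d_k_int` in place of the float, `x.view(batch_size, -1, Head, d_k)` unpacks cleanly.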
|
4606f46914461c486124e93f9400acb5
|
{
"intermediate": 0.34960097074508667,
"beginner": 0.44818246364593506,
"expert": 0.20221658051013947
}
|
46,511
|
Hi! I need to make process_update_page select the LATEST saved biography.
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
name TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# Escaping for Telegram MarkdownV2
def mdv2(text: str) -> str:
escape_chars = [
"_", "[", "]", "(", ")", "~", "`", ">",
"#", "+", "-", "=", "|", "{", "}", ".", "!"
]
for char in escape_chars:
text = text.replace(char, f"\\{char}")
text = text.replace("**", "*").replace('"', '“')
return text
# callbacks
change_action_cb = CallbackData('change', 'action')
# MENU BUTTON
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
await bot.send_message(callback_query.from_user.id, "Введите номер вопроса, который хотите изменить:")
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
await set_next_question(user_id)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
    text = message.text
    if not text.isdigit():
        await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
        return
    question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# ГЕНЕРАЦИЯ
def parse_yandexgpt(answer_text: str) -> dict:
    # Приводим ключи модели ("name"/"nome", "birthday_at"/"birthdayat" и т.п.) к колонкам public_info
    fields = {"name": "nome", "nome": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
    for col in ("surname", "patronym", "birthdayat", "diedat", "epitaph", "placeOfBirth",
                "placeOfDeath", "children", "wifehusband", "education", "occupation", "awards",
                "title1", "biography1", "title2", "biography2", "title3", "biography3", "conclusion"):
        fields[col] = col
    # "ключ: значение" с начала строки; значение тянется до следующего ключа или конца
    # текста, поэтому двоеточия внутри значения не обрывают разбор
    pattern = re.compile(r'^[ \t]*(\w+):[ \t]*(.*?)(?=\n[ \t]*\w+:|\Z)', re.DOTALL | re.MULTILINE)
    data = {}
    for key, value in pattern.findall(answer_text):
        column, value = fields.get(key.strip()), value.strip()
        if column and value and value.lower() != "null":
            data[column] = value
    return data
async def save_result(user_id: int, answer_dict: dict):
    # Сохраняем только известные колонки public_info, чтобы имена колонок
    # в SQL не зависели напрямую от текста ответа модели
    allowed = ("nome", "surname", "patronym", "birthdayat", "diedat", "epitaph",
               "placeOfBirth", "placeOfDeath", "children", "wifehusband", "education",
               "occupation", "awards", "title1", "biography1", "title2", "biography2",
               "title3", "biography3", "conclusion")
    answer_dict = {k: v for k, v in answer_dict.items() if k in allowed}
    if not answer_dict:
        raise ValueError("Из ответа YandexGPT не удалось извлечь ни одного поля")
    async with aiosqlite.connect('base.db') as db:
        # Очищаем предыдущие данные пользователя, чтобы не плодить дубликаты строк
        await db.execute("DELETE FROM public_info WHERE user_id = ?", (user_id,))
        columns = ", ".join(answer_dict.keys())
        placeholders = ", ".join(["?"] * len(answer_dict))
        values = list(answer_dict.values())
        await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
        await db.commit()
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """"Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3 ) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
nome: {}
surname: {}
patronym: {}
birthday_at: {}
died_at: {}
epitaph: {} # не больше 300 символов
placeOfBirth: {}
placeOfDeath: {}
children: {}
wifehusband: {} # Это супруг или супруга
education: {}
occupation: {} # Род деятельности человека
awards: {}
title1: {}
biography1: {}
title2: {}
biography2: {}
title3: {}
biography3: {}
conclusion: {}
В поле nome должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (nome), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph - это эпитафия, придумай ее сам, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
try:
await save_result(user_id, parse_yandexgpt(answer))
await bot.send_message(user_id, mdv2(answer), parse_mode="MarkdownV2",reply_markup=InlineKeyboardMarkup().add(InlineKeyboardButton("Обновить страницу",callback_data="update_page")))
                    except Exception as e:
                        # Логируем причину, а не глотаем ошибку молча
                        print(f"Ошибка при сохранении результата генерации: {e}")
                        await bot.send_message(user_id, "Не удалось сохранить данные. Пожалуйста, нажмите Сгенерировать еще раз")
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# ОБНОВЛЕНИЕ СТРАНИЦЫ
@dp.callback_query_handler(lambda c: c.data == 'update_page', state="*")
async def process_update_page(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT nome, surname, patronym, birthdayat, diedat, epitaph, placeOfBirth, placeOfDeath, children, wifehusband, education, occupation, awards FROM public_info WHERE user_id=?", (user_id,))
data = await cursor.fetchone()
if data:
            # Дата может отсутствовать или быть не в формате dd.mm.yyyy (например, "1980-е"),
            # поэтому разбираем её с проверкой, а не по индексам вслепую
            def split_date(value):
                parts = (value or "").split('.')
                if len(parts) == 3 and parts[2].strip().isdigit():
                    return parts[0], parts[1], int(parts[2])
                return None, None, None
            day_b, month_b, year_b = split_date(data[3])
            day_d, month_d, year_d = split_date(data[4])
            data_structure = {
                "nome": data[0],
                "surname": data[1],
                "patronym": data[2],
                "birthday_at": f"{data[3]}T21:00:00.000000Z" if data[3] else None,
                "died_at": f"{data[4]}T21:00:00.000000Z" if data[4] else None,
                "epitaph": data[5],
                "placeOfBirth": data[6],
                "placeOfDeath": data[7],
                "children": data[8],
                "wifehusband": data[9],
                "education": data[10],
                "occupation": data[11],
                "awards": data[12],
                "day_b": day_b,
                "month_b": month_b,
                "year_b": year_b,
                "day_d": day_d,
                "month_d": month_d,
                "year_d": year_d,
            }
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": "team1@hackathon.ru",
"password": "PA9dEa7x",
"device": "bot-v0.0.1"
}
            access_token = None
            async with aiohttp.ClientSession() as session:
                async with session.post(url, json=payload, headers={'Accept': 'application/json', 'Content-Type': 'application/json;charset=UTF-8'}) as response:
                    if response.status == 200:
                        token_data = await response.json()
                        access_token = token_data.get('access_token')
                    else:
                        print(f'Error: {response.status}')
            if not access_token:
                await bot.send_message(user_id, "Не удалось получить токен доступа. Попробуйте обновить страницу позже.")
                return
            async with aiohttp.ClientSession() as session:
                await update_page(session, access_token, data_structure)
            await bot.send_message(user_id, "Страница обновлена")
async def update_page(session, access_token, data_structure):
url = 'https://mc.dev.rand.agency/api/page/64577087'
print(data_structure["nome"])
payload = {
"id": 8734,
"name": data_structure["nome"],
"surname": data_structure["surname"],
"patronym": data_structure["patronym"],
"birthday_at": data_structure["birthday_at"],
"died_at": data_structure["died_at"],
"epitaph": data_structure["epitaph"],
"start": {
"day": data_structure["day_b"],
"month": data_structure["month_b"],
"year": data_structure["year_b"]
},
"end": {
"day": data_structure["day_d"],
"month": data_structure["month_d"],
"year": data_structure["year_d"]
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": data_structure["placeOfBirth"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30958,
"page_id": 8734,
"title": data_structure["placeOfDeath"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30959,
"page_id": 8734,
"title": data_structure["children"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": data_structure["children"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": data_structure["education"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": data_structure["occupation"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": data_structure["awards"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
# АДМИН-ПАНЕЛЬ
# КНОПКА НАЗАД
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# КЛАВА
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
Привет! Мне нужно сделать так, чтобы save_result перед сохранением очищал предыдущие данные, закрепленные за пользователем.
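Минимальный набросок того, как save_result может очищать старые записи перед вставкой. Для наглядности пример на синхронном sqlite3 (в боте используется aiosqlite, логика та же); список колонок здесь сокращён до подмножества схемы public_info, имя функции save_result_sync — условное.

```python
import sqlite3

def save_result_sync(conn: sqlite3.Connection, user_id: int, answer_dict: dict) -> None:
    # Сначала удаляем прежние записи пользователя, затем вставляем новые.
    # Имена колонок берутся из фиксированного списка, а не из ключей словаря напрямую.
    allowed = ["nome", "surname", "epitaph"]  # для примера — подмножество колонок public_info
    data = {k: answer_dict[k] for k in allowed if k in answer_dict}
    columns = ", ".join(data)
    placeholders = ", ".join("?" for _ in data)
    with conn:  # одна транзакция: удаление и вставка выполняются атомарно
        conn.execute("DELETE FROM public_info WHERE user_id = ?", (user_id,))
        conn.execute(
            f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})",
            [user_id, *data.values()],
        )
```

Повторный вызов для того же user_id оставляет в таблице ровно одну актуальную запись.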
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
nome TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# Обработка под MarkdownV2
def mdv2(text: str) -> str:
escape_chars = [
"_", "[", "]", "(", ")", "~", "`", ">",
"#", "+", "-", "=", "|", "{", "}", ".", "!"
]
for char in escape_chars:
        text = text.replace(char, f"\\{char}")
text = text.replace("**", "*").replace('"', '“')
return text
# калбэки
change_action_cb = CallbackData('change', 'action')
# КНОПКА МЕНЮ
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
await bot.send_message(callback_query.from_user.id, "Введите номер вопроса, который хотите изменить:")
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
await set_next_question(user_id)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
    text = message.text
    if not text.isdigit():
        await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
        return
    question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# ГЕНЕРАЦИЯ
# Колонки таблицы public_info, которые умеем заполнять из ответа модели
PUBLIC_INFO_COLUMNS = {
    "nome", "surname", "patronym", "birthdayat", "diedat", "epitaph",
    "placeOfBirth", "placeOfDeath", "children", "wifehusband", "education",
    "occupation", "awards", "title1", "biography1", "title2", "biography2",
    "title3", "biography3", "conclusion"}
# Модель называет поля по-разному от запуска к запуску ("name" вместо "nome",
# "birthday_at" вместо "birthdayat"), приводим к именам колонок
KEY_ALIASES = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
def parse_yandexgpt(answer_text: str) -> dict:
    # Построчный разбор "ключ: значение": строки без известного ключа
    # приклеиваются к последнему начатому полю (биографии часто многострочные),
    # значение "null" превращается в None, чтобы в БД лежал NULL
    data, current_key = {}, None
    for line in answer_text.splitlines():
        m = re.match(r'^\s*(\w+)\s*:\s*(.*)$', line)
        key = KEY_ALIASES.get(m.group(1), m.group(1)) if m else None
        if key in PUBLIC_INFO_COLUMNS:
            current_key = key
            data[current_key] = m.group(2).strip()
        elif current_key and line.strip():
            data[current_key] += " " + line.strip()
    return {k: (None if v.lower() == "null" else v) for k, v in data.items()}
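Тот же построчный разбор можно проверить изолированно. Набросок с сокращённым набором ключей (набор KNOWN придуман для примера, псевдонимы взяты из фактических ответов модели):

```python
import re

ALIASES = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
KNOWN = {"nome", "surname", "birthdayat", "biography1"}

def parse_kv(text: str) -> dict:
    # Каждая строка вида "ключ: значение"; строки без известного ключа
    # приклеиваются к последнему начатому полю
    data, current = {}, None
    for line in text.splitlines():
        m = re.match(r'^\s*(\w+)\s*:\s*(.*)$', line)
        key = ALIASES.get(m.group(1), m.group(1)) if m else None
        if key in KNOWN:
            current = key
            data[current] = m.group(2).strip()
        elif current and line.strip():
            data[current] += " " + line.strip()
    return data

sample = "Вот возможная сводка:\nname: Иван\nsurname: Иванов\nbiography1: Родился в Уфе.\nРаботал инженером."
print(parse_kv(sample))
```

Преамбула «Вот возможная сводка:» не проходит по регулярному выражению (после первого слова идёт пробел, а не двоеточие) и отбрасывается, а строка-продолжение приклеивается к biography1.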
async def save_result(user_id: int, answer_dict: dict):
    # Имена колонок подставляются в SQL через f-строку, поэтому пропускаем
    # только ключи-идентификаторы; прежний результат пользователя перезаписываем
    answer_dict = {k: v for k, v in answer_dict.items() if k.isidentifier()}
    if not answer_dict:
        raise ValueError("в ответе YandexGPT не найдено ни одного поля")
    async with aiosqlite.connect('base.db') as db:
        columns = ", ".join(answer_dict.keys())
        placeholders = ", ".join("?" * len(answer_dict))
        await db.execute('DELETE FROM public_info WHERE user_id = ?', (user_id,))
        await db.execute(
            f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})",
            [user_id, *answer_dict.values()])
        await db.commit()
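Логику save_result можно проверить изолированно на sqlite3 в памяти. Набросок при упрощённых допущениях: синхронный sqlite3 вместо aiosqlite, укороченный белый список колонок ALLOWED придуман для примера:

```python
import sqlite3

ALLOWED = {"nome", "surname"}  # в боте - полный список колонок public_info

def save(db, user_id, fields):
    # Белый список колонок защищает f-строку SQL от неожиданных ключей,
    # прежняя запись пользователя удаляется перед вставкой
    fields = {k: v for k, v in fields.items() if k in ALLOWED}
    db.execute("DELETE FROM public_info WHERE user_id = ?", (user_id,))
    cols = ", ".join(fields)
    ph = ", ".join("?" * len(fields))
    db.execute(f"INSERT INTO public_info (user_id, {cols}) VALUES (?, {ph})",
               [user_id, *fields.values()])

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE public_info (user_id INTEGER, nome TEXT, surname TEXT)")
save(db, 1, {"nome": "Иван", "surname": "Иванов", "лишний ключ": "x"})
print(db.execute("SELECT nome, surname FROM public_info").fetchone())
```

Повторный вызов save для того же user_id не плодит дубликатов строк, что важно при повторной генерации био.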
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """"Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3 ) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
nome: {}
surname: {}
patronym: {}
birthday_at: {}
died_at: {}
epitaph: {} # не больше 300 символов
placeOfBirth: {}
placeOfDeath: {}
children: {}
wifehusband: {} # Это супруг или супруга
education: {}
occupation: {} # Род деятельности человека
awards: {}
title1: {}
biography1: {}
title2: {}
biography2: {}
title3: {}
biography3: {}
conclusion: {}
В поле nome должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (nome), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph - это эпитафия, придумай ее сам, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
    answer = response_data['result']['alternatives'][0]['message']['text']
    answer = answer.replace("*", "").replace("_", "")
    try:
        await save_result(user_id, parse_yandexgpt(answer))
        await bot.send_message(
            user_id, mdv2(answer), parse_mode="MarkdownV2",
            reply_markup=InlineKeyboardMarkup().add(
                InlineKeyboardButton("Обновить страницу", callback_data="update_page")))
    except Exception as e:
        # Не глотаем ошибку молча: без этого невозможно понять, почему не сохранилось
        print(f"Ошибка сохранения результата генерации: {e}")
        await bot.send_message(user_id, "Не удалось сохранить данные. Пожалуйста, нажмите «Сгенерировать» ещё раз")
except KeyError:
    await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте ещё раз.")
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# ОБНОВЛЕНИЕ СТРАНИЦЫ
@dp.callback_query_handler(lambda c: c.data == 'update_page', state="*")
async def process_update_page(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT nome, surname, patronym, birthdayat, diedat, epitaph, placeOfBirth, placeOfDeath, children, wifehusband, education, occupation, awards FROM public_info WHERE user_id=?", (user_id,))
data = await cursor.fetchone()
if not data:
    await bot.send_message(user_id, "Данных для страницы пока нет. Сначала нажмите «Сгенерировать био»")
    return
birth, death = data[3] or "", data[4] or ""
b_parts, d_parts = birth.split('.'), death.split('.')
# Модель может вернуть null или дату не в формате dd.mm.yyyy (например, "1980-е")
if len(b_parts) != 3 or len(d_parts) != 3 or not (b_parts[2].isdigit() and d_parts[2].isdigit()):
    await bot.send_message(user_id, "Даты рождения и смерти должны быть в формате dd.mm.yyyy. Отредактируйте ответы и сгенерируйте био заново")
    return
data_structure = {
    "nome": data[0],
    "surname": data[1],
    "patronym": data[2],
    "birthday_at": birth + "T21:00:00.000000Z",
    "died_at": death + "T21:00:00.000000Z",
    "epitaph": data[5],
    "placeOfBirth": data[6],
    "placeOfDeath": data[7],
    "children": data[8],
    "wifehusband": data[9],
    "education": data[10],
    "occupation": data[11],
    "awards": data[12],
    "day_b": b_parts[0], "month_b": b_parts[1], "year_b": int(b_parts[2]),
    "day_d": d_parts[0], "month_d": d_parts[1], "year_d": int(d_parts[2]),
}
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
    "email": "team1@hackathon.ru",
    "password": "PA9dEa7x",
    "device": "bot-v0.0.1"
}
access_token = None
async with aiohttp.ClientSession() as session:
    async with session.post(url, json=payload, headers={'Accept': 'application/json', 'Content-Type': 'application/json;charset=UTF-8'}) as response:
        if response.status == 200:
            token_data = await response.json()
            access_token = token_data.get('access_token')
        else:
            print(f'Ошибка получения токена: {response.status}')
# Без токена update_page упадёт с NameError, поэтому выходим заранее
if not access_token:
    await bot.send_message(user_id, "Не удалось авторизоваться на сервисе страниц, попробуйте позже")
    return
async with aiohttp.ClientSession() as session:
    await update_page(session, access_token, data_structure)
await bot.send_message(user_id, "Страница обновлена")
async def update_page(session, access_token, data_structure):
url = 'https://mc.dev.rand.agency/api/page/64577087'
print(data_structure["nome"])
payload = {
"id": 8734,
"name": data_structure["nome"],
"surname": data_structure["surname"],
"patronym": data_structure["patronym"],
"birthday_at": data_structure["birthday_at"],
"died_at": data_structure["died_at"],
"epitaph": data_structure["epitaph"],
"start": {
"day": data_structure["day_b"],
"month": data_structure["month_b"],
"year": data_structure["year_b"]
},
"end": {
"day": data_structure["day_d"],
"month": data_structure["month_d"],
"year": data_structure["year_d"]
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": None,
"description": None,
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": data_structure["placeOfBirth"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
    "id": 30958,
    "page_id": 8734,
    "title": "pageInformation.placeOfDeath",
    "is_system": True,
    "description": data_structure["placeOfDeath"],
    "created_at": "2024-04-11T12:16:20.000000Z",
    "updated_at": "2024-04-11T12:16:20.000000Z"
},
{
    "id": 30959,
    "page_id": 8734,
    "title": "pageInformation.children",
    "is_system": True,
    "description": data_structure["children"],
    "created_at": "2024-04-11T12:16:20.000000Z",
    "updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": data_structure["children"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": data_structure["education"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": data_structure["occupation"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": data_structure["awards"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
# АДМИН-ПАНЕЛЬ
# КНОПКА НАЗАД
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# КЛАВА
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
|
957efbd47267aecdefcd5e40d9cde591
|
{
"intermediate": 0.3883317708969116,
"beginner": 0.4529540240764618,
"expert": 0.15871421992778778
}
|
46,513
|
PS D:\projects\oms_admin> ng update @angular/core
Error: Repository is not clean. Please commit or stash any changes before updating.
|
4a474b542e12082d6581eaf2a76b0dc7
|
{
"intermediate": 0.413261353969574,
"beginner": 0.2694779932498932,
"expert": 0.3172605633735657
}
|
46,514
|
What mpv config would be the best for my computer? These are some information about my computer.
Processors
Intel(R) Celeron(R) CPU N3060 @ 1.60GHz
Cores: 2
Threads: 2
Virtualization: Yes
Data Execution Prevention: Yes
L2 Cache: 1.02 MB
L3 Cache: 0 b
Stepping: Intel64 Family 6 Model 76 Stepping 4
Revision: 19460
Memory
Module 0
Manufacturer: Samsung
Size: 4 GiB
Memory Type: DDR3
Speed: 1600 MHz
Form Factor: Other
Virtual Memory
Total: 9.54 GiB
Available : 6.32 GiB
Used: 3.22 GiB
Graphics
Intel(R) HD Graphics
Video Memory: 1 GiB
Video Memory Type: Unknown
DAC Type: Internal
Current Resolution: 1366 x 768
Current Refresh Rate: 60 Hz
|
8a07e0fac97012afba1231f7a1244b4b
|
{
"intermediate": 0.335544615983963,
"beginner": 0.3701308071613312,
"expert": 0.2943246364593506
}
|
46,515
|
1_ Translate the following legal text into colloquial Farsi 2_ Place the Persian and English text side by side in the table 3_ From the beginning to the end of the text, there should be an English sentence on the left side and a Persian sentence on the right side.
4- Using legal language for Persian translation
.Lesson Fourty
Word Study
alter / 'o:ltə(r) / to change; to make different;to vary.
The Parliament has no right to alter the fundamental laws of the country set out in the constitution.
It is prohibited to alter governmental instruments.
When the treaty is signed, no party has a right to alter its terms.
automatically / p:tə'mætikli / occurring or happening in a natural way or without direct control.
The process was controlled automatically.
Residence in the area automatically qualifies you for membership.
Arbitrators are not automatically disqualified by a business relationship with the parties.
conclusion / kən'klu:3n / the formal and final arrangement or settlement.
Hostilities ended with the successful conclusion of a peace treaty.
The risk in respect of goods sold in transit passes to the buyer from the time of the conclusion of the contract.
If the goods were non-existent at the time of the conclusion of the contract,the contract may be void.
continue / kən'tinju: / to keep on; to go on; not to stop; maintain; to go or move further.
The workers continue to accept low pay.
Property values continue to advance rapidly.
The company continued to experience financialdifficulties.
destroy / dr'stror / to damage so badly that it no longer exists or works;to break down; demolish.
They have destroyed all the evidence.
Before the buyer took possession of the sugar, it was destroyed in a fire and the seller sued for the price.
The loss falls on the party who has title at the time that the goods are damaged or destroyed.
frustration/fra'strerfn / force majeure; the state of being ineffective.
Frustration discharges the contract and any obligations in respect of further performance.
Frustration is to be assessed at the time that the frustrating event occurred.
If there is a frustration in respect of a specific obligation,the contract may only be discharged in respect of that obligation.
fulfil/fol'fil/ to carry out; to perform; to satisfy the specific requirements of something.
Not all foreign visitors fulfil legal entry requirements.
You must fulfil the terms of your contract.
He failed to fulfil his early promise.
generate/ 'dsenəreit / to bring into existence; to produce; to make something exist or occur.
A letter of comfort may not generate legal obligations.
The sale generated record profits.
The plaintiff will be unable to generate proof that the contract has been broken by the defendant.
in force / in fo:s / effective; in effect; in operation; valid.
The new tax law has become in force since January.
The Government has published bilateral and multilateral treaties which are in force.
The Vienna Sales Convention is in force in many countries.
support/ sə'po:t / to strengthen; to encourage; to give hep or approval; to maintain.
The Government provides financing to support exports of dairy products.
No scholars have supported the recent decision of the high court.
A moral obligation which is not supported by consideration is unenforce-able.
|
4bb6a6a6fec2d4b6e1d4b040ab2d253f
|
{
"intermediate": 0.3290564715862274,
"beginner": 0.32395175099372864,
"expert": 0.34699180722236633
}
|
46,516
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
Dropout,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is deprecated in recent Keras versions
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):  # pass all splits explicitly instead of reading X_val/X_train from globals
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(
X_train, {"classification_output": y_train_class, "segmentation_output": y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
img = cv2.imread(image_path)
img = cv2.resize(img, (image_size, image_size))
img = np.expand_dims(img, axis=0)
class_output, seg_output = model.predict(img)
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
with out:
clear_output(wait=True)
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title("Input Image")
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(seg_output[0, :, :, 0], cmap='gray')
plt.title("Segmentation Output")
plt.axis("off")
plt.subplot(1, 3, 3)
class_probabilities = class_output[0]
y_pos = np.arange(len(labels))
plt.barh(y_pos, class_probabilities, align='center')
plt.yticks(y_pos, labels)
plt.xlabel('Probability')
plt.title('Classification Output')
plt.gca().invert_yaxis()
plt.tight_layout()
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
display(uploader)
display(out)
display(clear_button)
"
The above is Python code that uses Keras to run inference with a multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; the code lets the user select an image and perform inference. Help me make the classification plot look better, more high-tech.
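One way to make the classification bar chart feel more polished is matplotlib's built-in `dark_background` style, with the winning class highlighted and each bar annotated. A self-contained sketch (the probabilities, colors, and output filename here are placeholders, not values from the original script):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

labels = ["bridge", "excess", "good", "insuff", "no"]           # from the script above
class_probabilities = np.array([0.05, 0.10, 0.70, 0.10, 0.05])  # placeholder softmax output

plt.style.use("dark_background")
fig, ax = plt.subplots(figsize=(5, 3))
y_pos = np.arange(len(labels))
# Highlight the winning class, dim the rest
colors = ["#00e5ff" if i == np.argmax(class_probabilities) else "#444c55" for i in y_pos]
bars = ax.barh(y_pos, class_probabilities, color=colors)
ax.set_yticks(y_pos)
ax.set_yticklabels(labels)
ax.set_xlim(0, 1)
ax.invert_yaxis()
ax.set_xlabel("Probability")
ax.set_title("Classification Output")
for side in ("top", "right"):
    ax.spines[side].set_visible(False)
# Annotate each bar with its probability
for bar, p in zip(bars, class_probabilities):
    ax.text(bar.get_width() + 0.02, bar.get_y() + bar.get_height() / 2,
            f"{p:.2f}", va="center", fontsize=8)
fig.tight_layout()
fig.savefig("classification_output.png", dpi=150)
```

Inside `predict_image` the same styling can be applied to the third subplot, using `class_output[0]` in place of the placeholder array.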
|
b123e592752bd68d89fcbefaa726da0d
|
{
"intermediate": 0.36243683099746704,
"beginner": 0.3620155155658722,
"expert": 0.275547593832016
}
|
46,518
|
In this JavaScript for Leaflet.js, a polyline is drawn between the two points clicked on by the user. However, an error occurs if the user clicks the same point twice. Is it possible to prevent the user from clicking the same point twice? - 'var money = 100000;
var numberOfCarriages = 1;
var speed = 60;
var dailybonus = 0;
const map = L.map("map").setView([54.2231637, -1.9381623], 6);
// Add custom zoom control to the map with position set to 'topright'
const customZoomControl = L.control.zoom({ position: "topright" }).addTo(map);
// Remove the default zoom control from the map
map.removeControl(map.zoomControl);
let clickedPoints = [];
let isLineDrawn = false;
let marker; // Declare the marker variable
let progress = 0;
let cafeOneBonus = 0;
let cafeTwoBonus = 0;
let hotelOneBonus = 0;
let hotelTwoBonus = 0;
let buildingRadius = 0;
let buildingRadius2 = 0;
const increaseSpeed = () => {
const speedIncrease = 20;
speed += speedIncrease;
};
// Function to create circle markers with click functionality
function createCircleMarkers(geojson) {
return L.geoJSON(geojson, {
pointToLayer: function (feature, latlng) {
const circleMarker = L.circleMarker(latlng, {
radius: 4,
fillColor: "#ff7800",
color: "#000",
weight: 0.2,
opacity: 1,
fillOpacity: 0.8,
});
// Attach the feature to the circle marker
circleMarker.feature = feature;
circleMarker.on("mouseover", function () {
this.bindPopup(feature.properties.city).openPopup();
});
circleMarker.on("click", function (e) {
if (!isLineDrawn) {
clickedPoints.push(e.target); // Push the circle marker with attached feature
if (clickedPoints.length === 2) {
const firstCityCoords =
clickedPoints[0].feature.geometry.coordinates;
const secondCityCoords =
clickedPoints[1].feature.geometry.coordinates;
const polyline = L.polyline(
clickedPoints.map((p) => p.getLatLng())
).addTo(map);
const firstCity = clickedPoints[0].feature.properties.city;
const secondCity = clickedPoints[1].feature.properties.city;
clickedPoints = [];
isLineDrawn = true;
// Remove click event listener after a line has been drawn
map.off("click");
// Set the map bounds to show the area with the polyline
map.fitBounds(polyline.getBounds());
money = money - 50000; // Subtract 50000 from money
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`; // Assuming money is a number
moneyDisplay.textContent = moneyString;
const instructionsElement = document.getElementById("instructions");
// Clear any existing content in the instructions element:
instructionsElement.innerHTML = "";
// Create separate paragraph elements:
const congratulationsParagraph = document.createElement("p");
congratulationsParagraph.textContent = `Congratulations you have built your first train line from ${firstCity} to ${secondCity}!`;
const costsParagraph = document.createElement("p");
costsParagraph.textContent = `Your construction costs were £50,000. You have £${money} remaining.`;
const buyTrainParagraph = document.createElement("p");
buyTrainParagraph.textContent = "You now need to buy a train.";
const newTrainParagraph = document.createElement("p");
newTrainParagraph.textContent =
"At this time you can only afford to buy the train engine the Sleeping Lion. The Sleeping Lion has a traveling speed of 60 miles per hour. It can pull four carriages, which means your train will have a capacity of around 120 seated passengers.";
const traincost = document.createElement("p");
traincost.textContent = `The Sleeping Lion will cost you £30,000 to purchase. Do you wish to buy the Sleeping Lion?`;
// Append paragraphs to the instructions element:
instructionsElement.appendChild(congratulationsParagraph);
instructionsElement.appendChild(costsParagraph);
instructionsElement.appendChild(buyTrainParagraph);
instructionsElement.appendChild(newTrainParagraph);
instructionsElement.appendChild(traincost);
// Add button element:
const buyButton = document.createElement("button");
buyButton.id = "buybutton";
buyButton.textContent = "Buy Train";
// Append the button element to the instructions element:
instructionsElement.appendChild(buyButton);
//buybutton event listener
document
.getElementById("buybutton")
.addEventListener("click", function () {
// Deduct the price of the train (note: no affordability check is performed here)
money = money - 30000; // Subtract 30000 from money
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Update instructions content after successful purchase
instructionsElement.innerHTML = ""; // Clear previous content
const successMessage = document.createElement("p");
successMessage.textContent = `You now have a train line from ${firstCity} to ${secondCity} and a train! Press the button below to begin operations.`;
instructionsElement.appendChild(successMessage);
// Add button element:
const trainButton = document.createElement("button");
trainButton.id = "trainbutton";
trainButton.textContent = "Start Train";
// Append the button element to the instructions element:
instructionsElement.appendChild(trainButton);
trainButton.addEventListener("click", function () {
console.log("Train Started");
//emptyinstructions add improvement buttons
instructionsElement.innerHTML = ""; // Clear previous content
//randomgeneration of dailybonus
function generateDailyBonus(minBonus, maxBonus) {
const randomNumber =
Math.floor(Math.random() * (maxBonus - minBonus + 1)) +
minBonus;
dailybonus += randomNumber;
console.log(`Daily bonus of ${randomNumber} added!`);
}
//buy carriages
//add carriages button
const carriageButton = document.createElement("button");
carriageButton.id = "carriagebutton"; // unique id (was duplicating "trainbutton")
carriageButton.textContent = "Buy Train Carriage";
const carriageMessage = document.createElement("p");
carriageMessage.textContent = `Buy another passenger carriage for your train for £20,000`;
instructionsElement.appendChild(carriageMessage);
// Append the button element to the instructions element:
instructionsElement.appendChild(carriageButton);
//cariagebutton logic
carriageButton.addEventListener("click", () => {
console.log("Carriage Bought");
// Check if enough money is available
if (money >= 20000) {
// Check if maximum number of carriages reached
if (numberOfCarriages < 4) {
numberOfCarriages++;
money -= 20000; // Subtract 20000 from money
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Update marker content using the previously retrieved reference
markerContent.textContent = numberOfCarriages;
} else {
console.log(
"Maximum number of carriages reached! You can't buy more."
);
instructionsElement.removeChild(carriageButton);
instructionsElement.removeChild(carriageMessage);
}
}
});
//buy station cafes
//add station one cafe button
const stationOneMessage = document.createElement("p");
stationOneMessage.textContent = `Open a cafe in ${firstCity} Station for £2,500.`;
instructionsElement.appendChild(stationOneMessage);
// Add button element:
const cafeOneButton = document.createElement("button");
cafeOneButton.id = "cafeonebutton"; // unique id (was duplicating "trainbutton")
cafeOneButton.textContent = "Buy Cafe";
// Append the button element to the instructions element:
instructionsElement.appendChild(cafeOneButton);
//cafeonelogic
cafeOneButton.addEventListener("click", () => {
if (money >= 2500) {
// Add a random number between 2000 and 7000 to dailybonus
const bonusBefore = dailybonus;
generateDailyBonus(2000, 7000); // Call with cafe bonus range
cafeOneBonus = dailybonus - bonusBefore; // Only this cafe's share, not the running total
console.log("Cafe one bought");
money -= 2500;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(cafeOneButton);
instructionsElement.removeChild(stationOneMessage);
//fetch data from Overpass API around first station
buildingRadius += 150;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius},${firstCityCoords[1]},${firstCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
})
.catch(error => {
console.error('Error fetching data:', error);
});
} else {
console.log("Not enough money to open this cafe.");
}
});
//add station two cafe buttons
const stationTwoMessage = document.createElement("p");
stationTwoMessage.textContent = `Open a cafe in ${secondCity} Station for £2,500.`;
instructionsElement.appendChild(stationTwoMessage);
// Add button element:
const cafeTwoButton = document.createElement("button");
cafeTwoButton.id = "cafetwobutton"; // unique id (was duplicating "trainbutton")
cafeTwoButton.textContent = "Buy Cafe";
// Append the button element to the instructions element:
instructionsElement.appendChild(cafeTwoButton);
//cafetwologic
cafeTwoButton.addEventListener("click", () => {
if (money >= 2500) {
// Add a random number between 2000 and 7000 (both inclusive) to dailybonus
const bonusBefore = dailybonus;
generateDailyBonus(2000, 7000); // Call with cafe bonus range
cafeTwoBonus = dailybonus - bonusBefore; // Only this cafe's share, not the running total
console.log("Cafe two bought");
money -= 2500;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(cafeTwoButton);
instructionsElement.removeChild(stationTwoMessage);
//fetch data from Overpass API around first station
buildingRadius2 += 150;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius2},${secondCityCoords[1]},${secondCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
})
.catch((error) => {
console.error("Error fetching data:", error);
});
} else {
console.log("Not enough money to open this cafe.");
}
});
//buyhotel
const hoteloneMessage = document.createElement("p");
hoteloneMessage.textContent = `Open a hotel in ${firstCity} Station for £10,000.`;
instructionsElement.appendChild(hoteloneMessage);
// Add button element:
const hoteloneButton = document.createElement("button");
hoteloneButton.id = "hotelonebutton"; // unique id (was duplicating "trainbutton")
hoteloneButton.textContent = "Buy Hotel";
// Append the button element to the instructions element:
instructionsElement.appendChild(hoteloneButton);
//hotelonelogic
hoteloneButton.addEventListener("click", () => {
if (money >= 10000) {
const bonusBefore = dailybonus;
generateDailyBonus(8000, 24000); // Call with hotel bonus range
hotelOneBonus = dailybonus - bonusBefore; // Only this hotel's share, not the running total
money -= 10000;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(hoteloneButton);
instructionsElement.removeChild(hoteloneMessage);
//fetch data from Overpass API around first station
buildingRadius += 200;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius},${firstCityCoords[1]},${firstCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
})
.catch((error) => {
console.error("Error fetching data:", error);
});
} else {
console.log("Not enough money to open this hotel.");
}
});
const hoteltwoMessage = document.createElement("p");
hoteltwoMessage.textContent = `Open a hotel in ${secondCity} Station for £10,000.`;
instructionsElement.appendChild(hoteltwoMessage);
// Add button element:
const hoteltwoButton = document.createElement("button");
hoteltwoButton.id = "hoteltwobutton"; // unique id (was duplicating "trainbutton")
hoteltwoButton.textContent = "Buy Hotel";
// Append the button element to the instructions element:
instructionsElement.appendChild(hoteltwoButton);
//hotelonelogic
hoteltwoButton.addEventListener("click", () => {
if (money >= 10000) {
const bonusBefore = dailybonus;
generateDailyBonus(8000, 24000); // Call with hotel bonus range
hotelTwoBonus = dailybonus - bonusBefore; // Only this hotel's share, not the running total
money -= 10000;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(hoteltwoButton);
instructionsElement.removeChild(hoteltwoMessage);
//fetch data from Overpass API around first station
buildingRadius2 += 200;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius2},${secondCityCoords[1]},${secondCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
})
.catch((error) => {
console.error("Error fetching data:", error);
});
} else {
console.log("Not enough money to open this hotel.");
}
});
// starttrain
const firstPoint = L.latLng(
firstCityCoords[1],
firstCityCoords[0]
);
const secondPoint = L.latLng(
secondCityCoords[1],
secondCityCoords[0]
);
const intervalDuration = 10; // milliseconds per frame
const distance = firstPoint.distanceTo(secondPoint);
const steps = ((distance / speed) * 1000) / intervalDuration; // frames needed: travel time in ms divided by frame duration
const latStep = (secondPoint.lat - firstPoint.lat) / steps;
const lngStep = (secondPoint.lng - firstPoint.lng) / steps;
const marker = L.marker(firstPoint, {
icon: L.divIcon({
className: 'circle-marker', // Add a CSS class for styling (optional)
html: `<b>${numberOfCarriages}</b>`, // Include the number inside a bold tag
iconSize: [20, 20] // Adjust iconSize as needed (optional)
})
}).addTo(map);
// Assuming the marker variable is defined in this scope
const markerContent = marker.getElement().querySelector('b'); // Assuming bold tag for number
const moveMarker = (speed) => {
if (progress < steps) {
const newLat = firstPoint.lat + latStep * progress;
const newLng = firstPoint.lng + lngStep * progress;
const newLatLng = L.latLng(newLat, newLng);
marker.setLatLng(newLatLng); // Update the marker's position
progress++;
setTimeout(function () {
moveMarker(speed);
}, intervalDuration);
} else {
// Marker reaches the second point, update money
money +=
(Math.floor(Math.random() * (2000 - 1000 + 1)) + 1000) *
numberOfCarriages; // random £1,000–£2,000 fare income per carriage
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Wait two seconds before animating back and call moveBackMarker recursively
setTimeout(() => {
moveBackMarker(speed);
}, 2000); // Wait for 2 seconds (2000 milliseconds)
}
};
const moveBackMarker = (speed) => {
// Corrected calculation for animating back from second point to first
if (progress > 0) {
const newLat =
secondPoint.lat - latStep * (steps - progress);
const newLng =
secondPoint.lng - lngStep * (steps - progress);
const newLatLng = L.latLng(newLat, newLng);
marker.setLatLng(newLatLng); // Update the marker's position
progress--;
setTimeout(function () {
moveBackMarker(speed);
}, intervalDuration);
} else {
console.log("Reached starting point again.");
// Add random number to money and update display
money +=
(Math.floor(Math.random() * (2000 - 1000 + 1)) + 1000) *
numberOfCarriages; // random £1,000–£2,000 fare income per carriage
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Reset progress for next round trip
progress = 0;
// Recursively call moveMarker to start next animation cycle
moveMarker(speed);
}
};
moveMarker(speed); // Start the animation
});
});
}
}
});
return circleMarker;
},
});
}
fetch("gb.geojson")
.then((response) => response.json())
.then((geojson) => {
L.geoJSON(geojson, {
// Path options belong under "style" for GeoJSON layers
style: {
fillColor: "none", // Style for polygon (empty fill)
weight: 1,
color: "#000",
opacity: 1,
fillOpacity: 0,
},
}).addTo(map);
})
.catch((error) => {
console.error("Error loading GeoJSON:", error);
});
fetch("cities.geojson")
.then((response) => response.json())
.then((geojson) => {
createCircleMarkers(geojson).addTo(map);
})
.catch((error) => {
console.error("Error loading GeoJSON:", error);
});
//24 hour clock display
const TIME_MULTIPLIER = 60 * 10; // 1 real second = 600 game seconds (10 game minutes)
// Function to format time in 24-hour format with leading zeros
function formatTime(hours, minutes) {
// Handle the case where minutes reach 60 (should display the next hour)
if (minutes === 60) {
hours++;
minutes = 0;
}
return `${hours.toString().padStart(2, "0")}:${minutes
.toString()
.padStart(2, "0")}`;
}
// Reference point so that elapsed (not absolute) time is scaled
const gameStart = Date.now();
// Function to update the clock display and handle daily bonus
function updateClock() {
const currentTime = new Date();
// Simulate game time: scale the elapsed real time by the multiplier
const gameTime = new Date(gameStart + (currentTime.getTime() - gameStart) * TIME_MULTIPLIER);
// Get hours and minutes in 24-hour format
let hours = gameTime.getHours();
// Get minutes and force them to the nearest multiple of 10 (ending in 0)
let minutes = Math.floor(gameTime.getMinutes() / 10) * 10;
// Format the time string with fixed minute handling
const formattedTime = formatTime(hours, minutes);
// Update the content of the div with the formatted time
document.getElementById("timedisplay").textContent = formattedTime;
// Check if it's midnight (00:00)
if (hours === 0 && minutes === 0) {
// Add the accumulated daily bonuses from both cafes and both hotels
money += cafeOneBonus + cafeTwoBonus + hotelOneBonus + hotelTwoBonus;
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
console.log(
`Daily bonus of ${
cafeOneBonus + cafeTwoBonus + hotelOneBonus + hotelTwoBonus
} added! Total money: ${money}`
); // You can replace console.log with your desired action
}
}
// Call the updateClock function initially
updateClock();
// Update the clock every second to simulate smooth time progression
setInterval(updateClock, 1000);
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import io
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
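A quick toy check of the mask binarization step in load_data (the np.where call); this is a stand-alone NumPy sketch, not part of the pipeline:

```python
import numpy as np

# Any nonzero pixel in the segmentation mask becomes 1, background stays 0,
# mirroring the np.where(seg_img > 0, 1, 0) step in load_data.
seg = np.array([[0, 5], [128, 0]])
binary = np.where(seg > 0, 1, 0)
print(binary.tolist())  # [[0, 1], [1, 0]]
```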
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
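The sequential ratio split above can be sketched on toy indices (a minimal stand-in; shuffling, label encoding and one-hot conversion are omitted):

```python
import numpy as np

# Minimal mirror of split_data's slicing logic: the first train_ratio share
# goes to training, the next val_ratio share to validation, and so on.
def sequential_split(n, train_ratio, val_ratio, test_ratio):
    n_train = int(train_ratio * n)
    n_val = int(val_ratio * n)
    n_test = int(test_ratio * n)
    idx = np.arange(n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:n_train + n_val + n_test])

train_idx, val_idx, test_idx = sequential_split(10, 0.6, 0.2, 0.2)
print(len(train_idx), len(val_idx), len(test_idx))  # 6 2 2
```

Note that `int()` truncation means a few samples can be dropped when the ratios do not divide the dataset size evenly.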
def build_model(input_shape, num_classes):
num_filter = 32  # 16 or 32 work best; 8 gives good classification but poor segmentation
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
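To see how the combined BCE + Dice objective behaves, here is a pure-NumPy stand-in (an illustration only; the function above operates on tf tensors): the loss is near zero for a perfect mask and large for an inverted one.

```python
import numpy as np

# NumPy illustration of the BCE + Dice segmentation loss defined above.
def combined_seg_loss(y_true, y_pred, smooth=1e-5, eps=1e-7):
    y_true = y_true.astype(np.float64)
    y_pred = np.clip(y_pred.astype(np.float64), eps, 1 - eps)  # avoid log(0)
    # Pixel-wise binary cross-entropy, averaged over all pixels
    bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # Soft Dice loss over the whole mask
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    dice = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
    return bce + dice

mask = np.array([[1, 0], [0, 1]])
print(combined_seg_loss(mask, mask.astype(float)) < 1e-3)   # True (near-perfect prediction)
print(combined_seg_loss(mask, 1 - mask.astype(float)) > 1)  # True (inverted prediction)
```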
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is deprecated in recent Keras
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
# Note: X_train/y_train_* and X_val/y_val_* below are read from the enclosing (global) scope
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code that uses Keras to run inference with the multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select an image and run inference on it. Help me make the classification plot look better, more high-tech.
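One way to make the classification panel look more "high tech" is a dark-themed horizontal bar chart with the winning class highlighted and value labels on each bar. A sketch under those assumptions (the function name and color choices are my own, not from the original code); it could replace the `axs[2]` bar-plot section inside `predict_image`:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_class_probs(labels, probs, predicted_class):
    """Dark-themed horizontal bar chart of class probabilities."""
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)  # largest probability ends up on top
    fig, ax = plt.subplots(figsize=(6, 4))
    fig.patch.set_facecolor("#101418")
    ax.set_facecolor("#101418")
    # Highlight the predicted (max-probability) class in a brighter color
    colors = ["#00e5ff" if i == probs.argmax() else "#2979ff" for i in order]
    bars = ax.barh(np.array(labels)[order], probs[order], color=colors)
    for bar, p in zip(bars, probs[order]):  # annotate each bar with its value
        ax.text(p + 0.02, bar.get_y() + bar.get_height() / 2,
                f"{p:.2f}", color="white", va="center")
    ax.set_xlim(0, 1.1)
    ax.tick_params(colors="white")
    for spine in ax.spines.values():
        spine.set_visible(False)
    ax.grid(axis="x", color="#2a2f36", linestyle="--", alpha=0.6)
    ax.set_title(f"Classification Output: {predicted_class}", color="#00e5ff")
    fig.tight_layout()
    return fig, ax
```

Sorting the bars and muting the non-winning classes draws the eye to the prediction; the same palette can be reused for the segmentation panels to keep the figure consistent.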
|
d2d476cc590690c49250c3fb00bb1b2a
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,520
|
Hi! I need to make sure the user cannot enter invalid dates (for example, a date of death earlier than the date of birth) and cannot enter something that is not a date at all. This check has to happen before
try:
await save_result(user_id, parse_yandexgpt(answer))
await bot.send_message(user_id, mdv2(answer), parse_mode="MarkdownV2",reply_markup=InlineKeyboardMarkup().add(InlineKeyboardButton("Обновить страницу",callback_data="update_page")))
After save_result, the dates need to be pulled from the DB and validated. If they are invalid, the user should be told so. Here is the bot code:
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
nome TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# Escape text for Telegram MarkdownV2
def mdv2(text: str) -> str:
escape_chars = [
"_", "[", "]", "(", ")", "~", "`", ">",
"#", "+", "-", "=", "|", "{", "}", ".", "!"
]
for char in escape_chars:
        text = text.replace(char, f"\\{char}")
text = text.replace("**", "*").replace('"', '“')
return text
# Callback factories
change_action_cb = CallbackData('change', 'action')
# MENU BUTTON
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
await bot.send_message(callback_query.from_user.id, "Введите номер вопроса, который хотите изменить:")
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
await set_next_question(user_id)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
    text = message.text
    # Reject non-numeric input instead of crashing on int()
    if not text.isdigit():
        await message.answer("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
        return
    question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# GENERATION
def parse_yandexgpt(answer_text: str) -> dict:
    keys = ("nome|name|surname|patronym|birthdayat|birthday_at|diedat|died_at|"
            "epitaph|placeOfBirth|placeOfDeath|children|wifehusband|education|"
            "occupation|awards|title1|biography1|title2|biography2|title3|"
            "biography3|conclusion")
    # Match "key: value" pairs one per line; the value runs to the end of the line
    pattern = re.compile(rf'^({keys}):[ \t]*(.+)$', re.MULTILINE)
    # Normalize the spellings the model sometimes uses to the public_info column names
    aliases = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
    return {aliases.get(key, key): value.strip() for key, value in pattern.findall(answer_text)}
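For the date check requested above, here is a minimal sketch of a validator (the helper name and error texts are placeholders, not part of the original bot): the generation flow could call it on the parsed `birthdayat`/`diedat` values after `save_result` and forward the message to the user when validation fails.

```python
from datetime import datetime

def validate_dates(birth: str, died: str):
    """Return (ok, error_message); both dates must be dd.mm.yyyy and death must not precede birth."""
    try:
        b = datetime.strptime((birth or "").strip(), "%d.%m.%Y")
        d = datetime.strptime((died or "").strip(), "%d.%m.%Y")
    except ValueError:
        # Covers impossible dates like 97.08.1979 as well as free-form text like "1980-е"
        return False, "Даты должны быть настоящими датами в формате dd.mm.yyyy."
    if d < b:
        return False, "Дата смерти не может быть раньше даты рождения."
    return True, ""
```

`datetime.strptime` rejects both malformed strings and impossible calendar dates, so one `try` block handles both failure modes the request mentions.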
async def save_result(user_id: int, answer_dict: dict):
async with aiosqlite.connect('base.db') as db:
await db.execute("DELETE FROM public_info WHERE user_id=?", (user_id, ))
columns = ", ".join(answer_dict.keys())
placeholders = ", ".join(["?"] * len(answer_dict))
values = list(answer_dict.values())
        # Keys come from parse_yandexgpt's fixed whitelist, so interpolating them into SQL is safe
await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
await db.commit()
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """"Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3 ) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
nome: {}
surname: {}
patronym: {}
birthday_at: {}
died_at: {}
epitaph: {} # не больше 300 символов
placeOfBirth: {}
placeOfDeath: {}
children: {}
wifehusband: {} # Это супруг или супруга
education: {}
occupation: {} # Род деятельности человека
awards: {}
title1: {}
biography1: {}
title2: {}
biography2: {}
title3: {}
biography3: {}
conclusion: {}
В поле nome должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (nome), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph - это эпитафия, сделай так, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
try:
await save_result(user_id, parse_yandexgpt(answer))
await bot.send_message(user_id, mdv2(answer), parse_mode="MarkdownV2",reply_markup=InlineKeyboardMarkup().add(InlineKeyboardButton("Обновить страницу",callback_data="update_page")))
                    except Exception:
await bot.send_message(user_id, "Не удалось сохранить данные. Пожалуйста, нажмите Сгенерировать еще раз")
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# PAGE UPDATE
@dp.callback_query_handler(lambda c: c.data == 'update_page', state="*")
async def process_update_page(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT nome, surname, patronym, birthdayat, diedat, epitaph, placeOfBirth, placeOfDeath, children, wifehusband, education, occupation, awards, title1, biography1, title2, biography2, title3, biography3, conclusion FROM public_info WHERE user_id=?", (user_id,))
data = await cursor.fetchone()
if data:
print(data)
data_structure = {
"nome": data[0],
"surname": data[1],
"patronym": data[2],
"birthday_at": data[3]+"T21:00:00.000000Z",
"died_at": data[4]+"T21:00:00.000000Z",
"epitaph": data[5],
"placeOfBirth": data[6],
"placeOfDeath": data[7],
"children": data[8],
"wifehusband": data[9],
"education": data[10],
"occupation": data[11],
"awards": data[12],
"title1" : data[13],
"biography1": data[14],
"title2": data[15],
"biography2": data[16],
"title3": data[17],
"biography3": data[18],
"conclusion": data[19],
"day_b": data[3].split('.')[0],
"month_b": data[3].split('.')[1],
"year_b": int(data[3].split('.')[2]),
"day_d": data[4].split('.')[0],
"month_d": data[4].split('.')[1],
"year_d": int(data[4].split('.')[2]),
}
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": "team1@hackathon.ru",
"password": "PA9dEa7x",
"device": "bot-v0.0.1"
}
            access_token = None
            async with aiohttp.ClientSession() as session:
                async with session.post(url, json=payload, headers={'Accept': 'application/json', 'Content-Type': 'application/json;charset=UTF-8'}) as response:
                    if response.status == 200:
                        token_data = await response.json()  # do not shadow the DB row held in `data`
                        access_token = token_data.get('access_token')
                    else:
                        print(f'Error: {response.status}')
            if access_token:
                async with aiohttp.ClientSession() as session:
                    await update_page(session, access_token, data_structure)
                await bot.send_message(user_id, "Страница обновлена")
            else:
                await bot.send_message(user_id, "Не удалось обновить страницу: нет токена доступа")
async def update_page(session, access_token, data_structure):
url = 'https://mc.dev.rand.agency/api/page/64577087'
print(data_structure["nome"])
payload = {
"id": 8734,
"name": data_structure["nome"],
"surname": data_structure["surname"],
"patronym": data_structure["patronym"],
"birthday_at": data_structure["birthday_at"],
"died_at": data_structure["died_at"],
"epitaph": data_structure["epitaph"],
"start": {
"day": data_structure["day_b"],
"month": data_structure["month_b"],
"year": data_structure["year_b"]
},
"end": {
"day": data_structure["day_d"],
"month": data_structure["month_d"],
"year": data_structure["year_d"]
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": data_structure["title1"],
"description": data_structure["biography1"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": data_structure["title2"],
"description": data_structure["biography2"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": data_structure["title3"],
"description": data_structure["biography3"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": "Заключение",
"description": data_structure["conclusion"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": data_structure["placeOfBirth"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30958,
"page_id": 8734,
"title": data_structure["placeOfDeath"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30959,
"page_id": 8734,
"title": data_structure["children"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": data_structure["wifehusband"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": data_structure["education"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": data_structure["occupation"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": data_structure["awards"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
# ADMIN PANEL
# BACK BUTTON
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# KEYBOARD
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
|
ef0f2a8f6ae3608231abd66a42f4a1e9
|
{
"intermediate": 0.3617526590824127,
"beginner": 0.47032949328422546,
"expert": 0.16791784763336182
}
|
46,521
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
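The custom loss above sums binary cross-entropy with a soft Dice term. As a quick sanity check of the Dice part alone, here is a plain-NumPy re-implementation with the same `smooth` constant (a sketch for verification only, not used by the training code):

```python
import numpy as np

def dice_loss_np(y_true, y_pred, smooth=1e-5):
    # Soft Dice loss: 1 - 2*|A∩B| / (|A| + |B|), mirroring the TF version above
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

# A perfect match gives ~0, fully disjoint masks give ~1
perfect = dice_loss_np(np.ones((4, 4)), np.ones((4, 4)))
disjoint = dice_loss_np(np.ones((4, 4)), np.zeros((4, 4)))
```

This makes the behaviour of the combined loss easy to reason about: the BCE part penalizes per-pixel confidence, while the Dice part directly penalizes poor overlap between mask and prediction.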
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg, X_val, y_val_class, y_val_seg, X_train, y_train_class, y_train_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras for inference with a multi-task learning model: it takes an image as input and produces one classification output and one segmentation output, and it lets the user select an image and run inference. Help me make the classification plot look better and more high-tech.
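One possible direction (a styling sketch, not tied to your data): a dark, neon-accented probability chart that highlights the predicted class. It could replace the bar-plot portion of `predict_image`; the color values and figure size are arbitrary choices:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch also runs in scripts
import matplotlib.pyplot as plt
import numpy as np

def plot_class_probabilities(labels, probs, ax=None):
    """Dark-themed probability bar chart with the top class highlighted.

    `labels`/`probs` are assumed to come from the model's classification
    head; everything below is presentation only.
    """
    if ax is None:
        _, ax = plt.subplots(figsize=(6, 4))
    probs = np.asarray(probs, dtype=float)
    top = int(np.argmax(probs))
    # Neon cyan for the winning class, muted slate for the rest
    colors = ["#00e5ff" if i == top else "#37474f" for i in range(len(labels))]
    ax.figure.set_facecolor("#101418")
    ax.set_facecolor("#101418")
    ax.bar(range(len(labels)), probs, color=colors, edgecolor="#00e5ff", linewidth=0.6)
    ax.set_xticks(range(len(labels)))
    ax.set_xticklabels(labels, rotation=45, ha="right", color="white")
    ax.set_ylim(0, 1)
    ax.set_ylabel("Probability", color="white")
    ax.tick_params(colors="white")
    for spine in ax.spines.values():
        spine.set_color("#37474f")
    ax.set_title(f"Predicted: {labels[top]} ({probs[top]:.2f})", color="#00e5ff")
    for i, v in enumerate(probs):
        ax.text(i, v + 0.02, f"{v:.2f}", color="white", ha="center", fontsize=8)
    return ax
```

Inside `predict_image` you would call `plot_class_probabilities(labels, class_output[0], ax=axs[2])` instead of the current `axs[2].bar(...)` block.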
|
3eeee356e6775c3ed39abd64a6579b61
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,522
|
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
from datetime import datetime
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
name TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
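# Ответ ЯндексГПТ приходит простым текстом вида «тег: значение», поэтому перед
# сохранением в public_info его нужно разобрать. Набросок парсера (имена тегов
# взяты из примера ответа модели; продолжения многострочных значений
# присоединяются к последнему найденному тегу, а вступительные строки без тега
# отбрасываются):

```python
import re

# Теги, которые модель возвращает в ответе (совпадают с полями public_info)
PUBLIC_INFO_FIELDS = [
    "name", "surname", "patronym", "birthdayat", "diedat", "epitaph",
    "placeOfBirth", "placeOfDeath", "children", "wifehusband",
    "education", "occupation", "awards",
    "title1", "biography1", "title2", "biography2",
    "title3", "biography3", "conclusion",
]

TAG_RE = re.compile(r"^\s*([A-Za-z][A-Za-z0-9]*)\s*:\s*(.*)$")

def parse_gpt_response(text: str) -> dict:
    """Извлекает пары «тег: значение» из текстового ответа ЯндексГПТ.

    Строки без известного тега до первого поля (например, «Вот возможная
    сводка и биографии:») игнорируются; после — считаются продолжением
    последнего поля.
    """
    result = {}
    current_key = None
    for line in text.splitlines():
        m = TAG_RE.match(line)
        if m and m.group(1) in PUBLIC_INFO_FIELDS:
            current_key = m.group(1)
            result[current_key] = m.group(2).strip()
        elif current_key and line.strip():
            result[current_key] += " " + line.strip()
    return result
```

# Полученный словарь затем сохраняется в public_info после запроса к ЯндексГПТ.
# Значения вроде «1980-е» или «97.08.1979» не пройдут строгую проверку
# validate_dates, поэтому они хранятся как текст, как и объявлено в схеме.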
# Обработка под MarkdownV2
def mdv2(text: str) -> str:
escape_chars = [
"_", "[", "]", "(", ")", "~", "`", ">",
"#", "+", "-", "=", "|", "{", "}", ".", "!"
]
for char in escape_chars:
text = text.replace(char, f"\\{char}")
text = text.replace("**", "*").replace('"', '“')
return text
# Проверка дат
def validate_dates(birthdate_str: str, deathdate_str: str) -> bool:
try:
birthdate = datetime.strptime(birthdate_str, "%d.%m.%Y")
deathdate = datetime.strptime(deathdate_str, "%d.%m.%Y")
except ValueError:
return False
if deathdate < birthdate:
return False
return True
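# Сохранение разобранного ответа в public_info можно набросать так (для
# наглядности — на синхронном sqlite3; в боте тот же SQL выполняется через
# aiosqlite внутри async-хендлера). Предполагается, что имена колонок
# public_info совпадают с тегами ответа модели:

```python
import sqlite3

def save_public_info_sync(db_path: str, user_id: int, info: dict, fields) -> None:
    """Перезаписывает запись public_info пользователя разобранными полями.

    `fields` — белый список допустимых колонок; лишние ключи из `info`
    отбрасываются, чтобы не собирать SQL из недоверенных имён.
    """
    present = [f for f in fields if f in info]
    if not present:
        return
    columns = ", ".join(present)
    placeholders = ", ".join("?" for _ in present)
    with sqlite3.connect(db_path) as db:
        # Старую запись удаляем, чтобы повторная генерация не плодила дубли
        db.execute("DELETE FROM public_info WHERE user_id = ?", (user_id,))
        db.execute(
            f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})",
            (user_id, *(info[f] for f in present)),
        )
        db.commit()
```

# В async-варианте достаточно заменить sqlite3 на aiosqlite и добавить await
# перед execute/commit — сами запросы не меняются.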
# калбэки
change_action_cb = CallbackData('change', 'action')
# КНОПКА МЕНЮ
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
text = "Вопросы:\n\n"
if questions:
text += "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
text += "\n\nВведите номер вопроса, который хотите изменить:"
await bot.send_message(callback_query.from_user.id, text, reply_markup=back)
else:
await bot.send_message(callback_query.from_user.id, "Вопросы отсутствуют.", reply_markup=back)
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=Form.choosing_action)
async def generate_bio(message: types.Message):
user_id = message.from_user.id
await set_next_question(user_id)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
text = message.text
question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# GENERATION
def parse_yandexgpt(answer_text: str) -> dict:
    # Map the keys the model actually emits (name, birthdayat / birthday_at, ...) onto public_info columns
    key_map = {"name": "nome", "nome": "nome", "birthdayat": "birthdayat",
               "birthday_at": "birthdayat", "diedat": "diedat", "died_at": "diedat"}
    known = {"surname", "patronym", "epitaph", "placeOfBirth", "placeOfDeath", "children",
             "wifehusband", "education", "occupation", "awards", "title1", "biography1",
             "title2", "biography2", "title3", "biography3", "conclusion"}
    # A value runs until the next "key:" line or the end of text, so colons inside values are fine
    pattern = re.compile(r'^(\w+):[ \t]*(.*?)(?=^\w+:|\Z)', re.DOTALL | re.MULTILINE)
    data = {}
    for key, value in pattern.findall(answer_text):
        value = value.strip()
        if (key in key_map or key in known) and value and value.lower() != "null":
            data[key_map.get(key, key)] = value
    return data
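`validate_dates` is used later in `YandexGPT.generate` but is not defined in this fragment. One possible implementation, assuming both dates arrive as `dd.mm.yyyy` strings (the format the prompt demands), which also rejects malformed dates like `97.08.1979` from the sample reply:

```python
from datetime import datetime

def validate_dates(birth_str: str, death_str: str) -> bool:
    # True only if both strings parse as dd.mm.yyyy and death is not before birth
    try:
        birth = datetime.strptime(birth_str, "%d.%m.%Y")
        death = datetime.strptime(death_str, "%d.%m.%Y")
    except ValueError:
        return False
    return death >= birth
```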
async def save_result(user_id: int, answer_dict: dict):
async with aiosqlite.connect('base.db') as db:
await db.execute("DELETE FROM public_info WHERE user_id=?", (user_id, ))
columns = ", ".join(answer_dict.keys())
placeholders = ", ".join(["?"] * len(answer_dict))
values = list(answer_dict.values())
print(columns)
print(placeholders)
print(values)
await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
await db.commit()
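`save_result` interpolates the parsed keys straight into the column list of the `INSERT`, so any unexpected key from the model breaks the query (and untrusted identifiers in SQL are unsafe in general). A hedged sketch of a whitelist filter that could be applied to `answer_dict` before saving; `filter_columns` is a hypothetical helper, and the column set mirrors the `SELECT` in `process_update_page` below:

```python
ALLOWED_COLUMNS = {
    "nome", "surname", "patronym", "birthdayat", "diedat", "epitaph",
    "placeOfBirth", "placeOfDeath", "children", "wifehusband", "education",
    "occupation", "awards", "title1", "biography1", "title2", "biography2",
    "title3", "biography3", "conclusion",
}

def filter_columns(answer_dict: dict) -> dict:
    # Keep only keys that are actual public_info columns
    return {k: v for k, v in answer_dict.items() if k in ALLOWED_COLUMNS}
```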
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
nome: {}
surname: {}
patronym: {}
birthday_at: {}
died_at: {}
epitaph: {} # не больше 300 символов
placeOfBirth: {}
placeOfDeath: {}
children: {}
wifehusband: {} # Это супруг или супруга
education: {}
occupation: {} # Род деятельности человека
awards: {}
title1: {}
biography1: {}
title2: {}
biography2: {}
title3: {}
biography3: {}
conclusion: {}
В поле nome должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (nome), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph - это эпитафия, сделай так, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
answer_data = parse_yandexgpt(answer)
birthdate_str = answer_data.get("birthdayat", "")
deathdate_str = answer_data.get("diedat", "")
if birthdate_str and deathdate_str and not validate_dates(birthdate_str, deathdate_str):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change")))
await bot.send_message(user_id, "Некорректные даты: дата смерти не может быть раньше даты рождения или введены неверные даты.", reply_markup=markup)
else:
try:
await save_result(user_id, answer_data)
await bot.send_message(user_id, mdv2(answer), parse_mode="MarkdownV2",reply_markup=InlineKeyboardMarkup().add(InlineKeyboardButton("Обновить страницу",callback_data="update_page")))
except Exception:
await bot.send_message(user_id, "Не удалось сохранить данные. Пожалуйста, нажмите Сгенерировать еще раз")
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
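`mdv2` is called in the success branch above but is not defined in this fragment; presumably it escapes Telegram MarkdownV2 special characters before the message is sent with `parse_mode="MarkdownV2"`. A minimal sketch under that assumption (the function name is taken from the call site):

```python
import re

def mdv2(text: str) -> str:
    # Escape every character Telegram treats as special in MarkdownV2
    return re.sub(r'([_*\[\]()~`>#+\-=|{}.!])', r'\\\1', text)
```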
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# PAGE UPDATE
@dp.callback_query_handler(lambda c: c.data == 'update_page', state="*")
async def process_update_page(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT nome, surname, patronym, birthdayat, diedat, epitaph, placeOfBirth, placeOfDeath, children, wifehusband, education, occupation, awards, title1, biography1, title2, biography2, title3, biography3, conclusion FROM public_info WHERE user_id=?", (user_id,))
data = await cursor.fetchone()
if data:
    print(data)
    # Dates are required in dd.mm.yyyy, otherwise building the timestamps and start/end below fails
    if not data[3] or data[3].count('.') != 2 or not data[4] or data[4].count('.') != 2:
        await bot.send_message(user_id, "Даты отсутствуют или не в формате dd.mm.yyyy, страница не обновлена.")
        return
    data_structure = {
        "nome": data[0],
        "surname": data[1],
        "patronym": data[2],
        "birthday_at": data[3] + "T21:00:00.000000Z",
        "died_at": data[4] + "T21:00:00.000000Z",
"epitaph": data[5],
"placeOfBirth": data[6],
"placeOfDeath": data[7],
"children": data[8],
"wifehusband": data[9],
"education": data[10],
"occupation": data[11],
"awards": data[12],
"title1" : data[13],
"biography1": data[14],
"title2": data[15],
"biography2": data[16],
"title3": data[17],
"biography3": data[18],
"conclusion": data[19],
"day_b": data[3].split('.')[0],
"month_b": data[3].split('.')[1],
"year_b": int(data[3].split('.')[2]),
"day_d": data[4].split('.')[0],
"month_d": data[4].split('.')[1],
"year_d": int(data[4].split('.')[2]),
}
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": "team1@hackathon.ru",
"password": "PA9dEa7x",
"device": "bot-v0.0.1"
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=payload, headers={'Accept': 'application/json','Content-Type': 'application/json;charset=UTF-8'}) as response:
if response.status == 200:
data = await response.json()
access_token = data.get('access_token')
print(f'Access Token: {access_token}')
else:
    print(f'Error: {response.status}')
    return  # no access token, cannot update the page
async with aiohttp.ClientSession() as session:
await update_page(session, access_token, data_structure)
await bot.send_message(user_id, "Страница обновлена")
async def update_page(session, access_token, data_structure):
url = 'https://mc.dev.rand.agency/api/page/64577087'
print(data_structure["nome"])
payload = {
"id": 8734,
"name": data_structure["nome"],
"surname": data_structure["surname"],
"patronym": data_structure["patronym"],
"birthday_at": data_structure["birthday_at"],
"died_at": data_structure["died_at"],
"epitaph": data_structure["epitaph"],
"start": {
"day": data_structure["day_b"],
"month": data_structure["month_b"],
"year": data_structure["year_b"]
},
"end": {
"day": data_structure["day_d"],
"month": data_structure["month_d"],
"year": data_structure["year_d"]
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": data_structure["title1"],
"description": data_structure["biography1"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": data_structure["title2"],
"description": data_structure["biography2"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": data_structure["title3"],
"description": data_structure["biography3"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": "Заключение",
"description": data_structure["conclusion"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": data_structure["placeOfBirth"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30958,
"page_id": 8734,
"title": data_structure["placeOfDeath"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30959,
"page_id": 8734,
"title": data_structure["children"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": data_structure["wifehusband"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": data_structure["education"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": data_structure["occupation"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": data_structure["awards"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
# ADMIN PANEL
# BACK BUTTON
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# KEYBOARD
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
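The slicing in `split_data` relies on `int()` truncation of `ratio * num_samples`; when the ratios do not divide the sample count evenly, a few trailing samples are silently dropped. The arithmetic can be checked in isolation (`split_sizes` is a hypothetical helper mirroring the code above):

```python
def split_sizes(num_samples: int, train_ratio: float, val_ratio: float, test_ratio: float):
    # Mirrors the int() truncation used in split_data
    n_train = int(train_ratio * num_samples)
    n_val = int(val_ratio * num_samples)
    n_test = int(test_ratio * num_samples)
    return n_train, n_val, n_test

print(split_sizes(1000, 0.7, 0.2, 0.1))
print(split_sizes(7, 0.7, 0.2, 0.1))  # truncation drops some samples here
```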
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
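The Dice term of `segmentation_loss` is easy to sanity-check by hand; the following pure-Python mirror of the same formula (no TensorFlow, masks as flat 0/1 lists, same `smooth=1e-5`) gives roughly 0 for a perfect match and roughly 1 for disjoint masks:

```python
def dice_loss(y_true, y_pred, smooth=1e-5):
    # Same Dice formula as in segmentation_loss, without TensorFlow
    intersection = sum(t * p for t, p in zip(y_true, y_pred))
    union = sum(y_true) + sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

print(dice_loss([1, 1, 0, 0], [1, 1, 0, 0]))  # ~0.0 for a perfect match
print(dice_loss([1, 1, 0, 0], [0, 0, 1, 1]))  # ~1.0 for disjoint masks
```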
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg, X_val, y_val_class, y_val_seg, X_train, y_train_class, y_train_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is the Python code with Keras to do inference using the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; this code lets the user select one image and perform inference. Help me to make the classification plot look better, more high tech.
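One possible direction for a more polished probability plot (a sketch only, independent of the model above; `plot_class_probs`, the hex colors, and the dark theme are my own choices, not from the original code): use a horizontal bar layout and highlight the argmax class so the prediction stands out.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line when running inline in Colab
import matplotlib.pyplot as plt

plt.style.use("dark_background")  # dark theme for a more "tech" look

def plot_class_probs(labels, probs, ax=None):
    """Horizontal bar chart of class probabilities with the predicted class highlighted."""
    probs = np.asarray(probs, dtype=float)
    pred = int(np.argmax(probs))
    if ax is None:
        _, ax = plt.subplots(figsize=(6, 3))
    # bright cyan for the winning class, muted grey for the rest
    colors = ["#00e5ff" if i == pred else "#37474f" for i in range(len(labels))]
    ax.barh(labels, probs, color=colors)
    ax.set_xlim(0, 1)
    ax.set_xlabel("Probability")
    ax.set_title(f"Predicted: {labels[pred]} ({probs[pred]:.2f})")
    for i, v in enumerate(probs):
        ax.text(v + 0.01, i, f"{v:.2f}", va="center")
    return ax

ax = plot_class_probs(["bridge", "excess", "good", "insuff", "no"],
                      [0.05, 0.10, 0.70, 0.10, 0.05])
```

Inside `predict_image`, the same function could be called with `labels` and `class_output[0]` in place of the current `axs[2].bar(...)` code.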
|
e28a011789895ee58b8025ac813e30bb
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,524
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
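`load_data` binarizes the grayscale PNG masks with `np.where(seg_img > 0, 1, 0)`, i.e. any non-zero intensity becomes foreground. A minimal standalone check of that step on a toy array (the values here are illustrative, not from the dataset):

```python
import numpy as np

# toy grayscale mask: 0 = background, any positive intensity = foreground
seg_img = np.array([[0, 128], [255, 0]], dtype=np.uint8)
binary_mask = np.where(seg_img > 0, 1, 0)
print(binary_mask)  # [[0 1] [1 0]]
```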
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
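Note that `split_data` computes each split size with `int(ratio * num_samples)`, which truncates; when the ratios do not divide the sample count evenly, a few trailing samples are silently dropped. A small sketch of that arithmetic (`split_counts` is a hypothetical helper, not part of the original code):

```python
def split_counts(num_samples, train_ratio, val_ratio, test_ratio):
    # mirrors the integer truncation used in split_data
    n_train = int(train_ratio * num_samples)
    n_val = int(val_ratio * num_samples)
    n_test = int(test_ratio * num_samples)
    return n_train, n_val, n_test

print(split_counts(100, 0.2, 0.2, 0.6))  # (20, 20, 60): ratios divide evenly
print(split_counts(101, 0.2, 0.2, 0.6))  # (20, 20, 60): one sample is dropped
```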
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
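The Dice term of `segmentation_loss` can be sanity-checked with a small NumPy reimplementation (a standalone sketch, not the TensorFlow code above): perfect overlap should give a loss near 0, and a fully disjoint prediction a loss near 1.

```python
import numpy as np

def dice_loss_np(y_true, y_pred, smooth=1e-5):
    # same formula as the Dice term in segmentation_loss, in NumPy
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

mask = np.array([[1.0, 0.0], [0.0, 1.0]])
print(dice_loss_np(mask, mask))        # ~0: perfect overlap
print(dice_loss_np(mask, 1.0 - mask))  # ~1: disjoint prediction
```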
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is a deprecated alias in recent Keras versions
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is the Python code with Keras to do inference using the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; this code lets the user select one image and perform inference. Help me to make the classification plot look more advanced, more future tech style, more pro, and more dynamic.
|
040b8318b08adff3aa3cc730290ed3cf
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,525
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
total_loss = bce_loss + 1 * dice_loss # the 1 is an explicit weight on the dice term
return total_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001), # "lr" is deprecated in recent Keras versions
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with a multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select an image and run inference on it. Help me make the classification plot look more futuristic, more advanced, and more professional.
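One way to give the probability panel a more polished, dashboard-like look is matplotlib's built-in dark style with sorted horizontal bars, a highlighted predicted class, and value annotations. Below is a minimal, self-contained sketch: the function name `plot_class_probabilities`, the hex colors, and the placeholder probability vector are all my own choices, standing in for `labels` and `class_output[0]` in the code above. In a notebook you would drop the `Agg` backend line and call `plt.show()` instead of keeping the returned figure.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering for scripts; remove in a notebook
import matplotlib.pyplot as plt

def plot_class_probabilities(labels, probs, predicted=None):
    """Render class probabilities as a dark-themed horizontal bar chart.

    The predicted class bar is drawn in a brighter accent color, bars are
    sorted so the most likely class sits on top, and each bar is annotated
    with its probability.
    """
    probs = np.asarray(probs, dtype=float)
    if predicted is None:
        predicted = labels[int(np.argmax(probs))]
    order = np.argsort(probs)  # ascending, so the largest bar ends up on top
    with plt.style.context("dark_background"):
        fig, ax = plt.subplots(figsize=(6, 3.5))
        colors = ["#00e5ff" if labels[i] == predicted else "#3a7ca5" for i in order]
        bars = ax.barh(np.arange(len(order)), probs[order], color=colors, height=0.6)
        ax.set_yticks(np.arange(len(order)))
        ax.set_yticklabels([labels[i] for i in order])
        ax.set_xlim(0, 1.05)
        ax.set_xlabel("Probability")
        ax.set_title(f"Classification Output\nPredicted: {predicted}")
        for side in ("top", "right"):
            ax.spines[side].set_visible(False)
        for bar, p in zip(bars, probs[order]):
            ax.text(bar.get_width() + 0.01, bar.get_y() + bar.get_height() / 2,
                    f"{p:.2f}", va="center", fontsize=9)
        fig.tight_layout()
    return fig, ax

# Example with placeholder probabilities for the five solder-joint classes
fig, ax = plot_class_probabilities(
    ["bridge", "excess", "good", "insuff", "no"],
    [0.05, 0.10, 0.70, 0.10, 0.05],
)
```

To wire this into `predict_image`, replace the `axs[2]` bar-chart section with a call that draws onto that axes (or return the figure from this helper); the sorting and accent color do most of the visual work, so the same idea also transfers to other style sheets such as `"seaborn-v0_8-dark"`.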
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(lr=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
    # ipywidgets 7.x stores uploads as a dict keyed by filename;
    # in ipywidgets 8.x, uploader.value is a tuple of dicts instead.
    uploaded_filename = next(iter(uploader.value))
    uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with a multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select an image and run inference on it. Help me make the classification plot look more futuristic, high-tech, advanced, and professional.
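One way the requested styling could be sketched (an illustrative example, not part of the original code: the function name `plot_class_probabilities` and the dummy probabilities are hypothetical, and it assumes matplotlib's built-in `dark_background` style and `cool` colormap):

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_class_probabilities(labels, probs, predicted_class):
    # A dark theme with a confidence-driven colormap reads as more "high-tech"
    # than the default blue bars.
    with plt.style.context("dark_background"):
        fig, ax = plt.subplots(figsize=(6, 4))
        x = np.arange(len(labels))
        colors = plt.cm.cool(probs)  # bar color encodes probability
        ax.bar(x, probs, color=colors, edgecolor="cyan", linewidth=0.8)
        for xi, p in zip(x, probs):
            ax.text(xi, p + 0.02, f"{p:.2f}", ha="center", color="white")
        ax.set_xticks(x)
        ax.set_xticklabels(labels, rotation=45, ha="right")
        ax.set_ylim(0, 1.05)
        ax.set_ylabel("Probability")
        ax.set_title(f"Classification Output\nPredicted: {predicted_class}")
        ax.grid(axis="y", alpha=0.3)
        fig.tight_layout()
    return fig

# Example with dummy probabilities for the five labels used above
labels = ["bridge", "excess", "good", "insuff", "no"]
probs = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
fig = plot_class_probabilities(labels, probs, labels[int(np.argmax(probs))])
```

Because the colormap ties bar color to confidence, the predicted class stands out without any extra annotation.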
|
4e7942e82745eb77188a43713b662340
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,527
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
        optimizer=Adam(learning_rate=0.001),  # "lr" was renamed to "learning_rate" in newer Keras
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
    # ipywidgets 7.x stores uploads as a dict keyed by filename;
    # in ipywidgets 8.x, uploader.value is a tuple of dicts instead.
    uploaded_filename = next(iter(uploader.value))
    uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with a multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select an image and run inference on it. Help me make the classification plot look more futuristic, high-tech, advanced, and professional. Show code.
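A different, more "futuristic" presentation of the same information could use a polar (radar) chart. The sketch below is illustrative only (the helper name `radar_class_plot` and the dummy probabilities are made up here) and assumes matplotlib's built-in `dark_background` style:

```python
import matplotlib.pyplot as plt
import numpy as np

def radar_class_plot(labels, probs, predicted_class):
    # One evenly spaced angle per class; repeat the first point to close the polygon
    angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
    angles_closed = np.concatenate([angles, angles[:1]])
    probs_closed = np.concatenate([probs, probs[:1]])
    with plt.style.context("dark_background"):
        fig, ax = plt.subplots(subplot_kw={"projection": "polar"}, figsize=(5, 5))
        ax.plot(angles_closed, probs_closed, color="cyan", linewidth=2)
        ax.fill(angles_closed, probs_closed, color="cyan", alpha=0.25)
        ax.set_xticks(angles)
        ax.set_xticklabels(labels)
        ax.set_ylim(0, 1)
        ax.set_title(f"Predicted: {predicted_class}")
    return fig

# Example with dummy probabilities for the five labels used above
labels = ["bridge", "excess", "good", "insuff", "no"]
probs = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
fig = radar_class_plot(labels, probs, "good")
```

The filled polygon makes a confident single-class prediction look like a sharp spike, while an uncertain prediction spreads into a rounded shape, which is easy to read at a glance.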
|
76b44aec4042ed7943fbf3f5ce1ca703
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,528
|
If a Golang application uses GORM and connects to a PostgreSQL database, keeping the connection open for global use, in which cases can the error "too many clients" appear?
|
5cc2950208f00daed049c5ab881ccac3
|
{
"intermediate": 0.616111695766449,
"beginner": 0.09761622548103333,
"expert": 0.28627198934555054
}
|
46,529
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is deprecated in recent Keras versions
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
# (note: X_val/y_val_* and X_train/y_train_* below are not parameters; they are read from module-level globals)
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with a multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select an image and run inference on it. Help me make the classification plot look more high-tech and more dynamic. Show code.
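One possible direction, shown as a minimal sketch rather than a drop-in replacement: a dark-themed, color-mapped bar chart with per-bar annotations. The `labels` list matches the notebook above, but `class_probabilities` is an illustrative placeholder standing in for `class_output[0]` from `model.predict`.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative inputs; in the notebook these come from model.predict()
labels = ["bridge", "excess", "good", "insuff", "no"]
class_probabilities = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
predicted_class = labels[int(np.argmax(class_probabilities))]

plt.style.use("dark_background")  # dark theme for a "high-tech" look
fig, ax = plt.subplots(figsize=(6, 4))

# Color each bar by its relative probability; edges add a neon accent
colors = plt.cm.viridis(class_probabilities / class_probabilities.max())
bars = ax.bar(labels, class_probabilities, color=colors,
              edgecolor="cyan", linewidth=0.8)

# Annotate each bar with its probability value
for bar, p in zip(bars, class_probabilities):
    ax.text(bar.get_x() + bar.get_width() / 2, p + 0.02, f"{p:.2f}",
            ha="center", color="white", fontsize=9)

ax.set_ylim(0, 1.05)
ax.set_ylabel("Probability")
ax.set_title(f"Classification Output\nPredicted: {predicted_class}", color="cyan")
ax.grid(axis="y", alpha=0.3, linestyle="--")
plt.tight_layout()
plt.show()
```

To use it inside `predict_image`, the placeholder array would be replaced by `class_output[0]` and the figure drawn on `axs[2]` instead of a new figure.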
|
adde4815dae193a42fb338bec0d0167e
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,530
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is deprecated in recent Keras versions
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
# (note: X_val/y_val_* and X_train/y_train_* below are not parameters; they are read from module-level globals)
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with a multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select an image and run inference on it. Help me make the classification plot look more futuristic, more advanced, and more professional. Show code.
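One illustrative option for a more futuristic presentation: a polar (radar) chart of the class probabilities. This is a sketch, not part of the original notebook; `labels` matches the code above, while `probs` is a placeholder standing in for `class_output[0]` from `model.predict`.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative inputs; in the notebook these come from model.predict()
labels = ["bridge", "excess", "good", "insuff", "no"]
probs = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
predicted = labels[int(np.argmax(probs))]

# Radar charts need the polygon closed: repeat the first point at the end
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
angles_closed = np.concatenate([angles, angles[:1]])
probs_closed = np.concatenate([probs, probs[:1]])

plt.style.use("dark_background")  # dark theme for a futuristic look
fig, ax = plt.subplots(figsize=(5, 5), subplot_kw={"projection": "polar"})
ax.plot(angles_closed, probs_closed, color="cyan", linewidth=2)
ax.fill(angles_closed, probs_closed, color="cyan", alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.set_ylim(0, 1)
ax.set_title(f"Class probabilities (predicted: {predicted})", color="cyan")
plt.tight_layout()
plt.show()
```

A radar chart works here because all five values share the same 0–1 scale; for replacing `axs[2]` in `predict_image`, that subplot would need to be created with `projection="polar"`.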
|
7be44906fa5780d9ef7ca6add9c177fe
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,531
|
Hi, I have two QR-code scanning scripts for Unity that I need to merge into one: take the multithreaded processing from one and the on-screen output from the other. Here are the scripts:
public class NewQRScan : MonoBehaviour
{
[SerializeField]
private RawImage _rawImageBackground;
[SerializeField]
private AspectRatioFitter _aspectRatioFitter;
[SerializeField]
private TextMeshProUGUI _textOut;
[SerializeField]
private RectTransform _scanZone;
private bool _isCamAvaible;
private WebCamTexture _cameraTexture;
void Start()
{
SetUpCamera();
}
// Update is called once per frame
void Update()
{
UpdateCameraRender();
}
private void SetUpCamera()
{
WebCamDevice[] devices = WebCamTexture.devices;
if (devices.Length == 0)
{
_isCamAvaible = false;
return;
}
for (int i = 0; i < devices.Length; i++)
{
if (devices[i].isFrontFacing == false)
{
_cameraTexture = new WebCamTexture(devices[i].name, (int)_scanZone.rect.width, (int)_scanZone.rect.height);
break;
}
}
_cameraTexture.Play();
_rawImageBackground.texture = _cameraTexture;
_isCamAvaible = true;
}
private void UpdateCameraRender()
{
if (_isCamAvaible == false)
{
return;
}
float ratio = (float)_cameraTexture.width / (float)_cameraTexture.height;
_aspectRatioFitter.aspectRatio = ratio;
int orientation = _cameraTexture.videoRotationAngle;
orientation = orientation * 3;
_rawImageBackground.rectTransform.localEulerAngles = new Vector3(0, 0, orientation);
}
public void OnClickScan()
{
Scan();
}
private void Scan()
{
try
{
IBarcodeReader barcodeReader = new BarcodeReader();
Result result = barcodeReader.Decode(_cameraTexture.GetPixels32(), _cameraTexture.width, _cameraTexture.height);
if (result != null)
{
_textOut.text = result.Text;
}
else
{
_textOut.text = "Failed to Read QR CODE";
}
}
catch
{
_textOut.text = "FAILED IN TRY";
}
}
}
public class QRCodeScanner : MonoBehaviour
{
[SerializeField]
private string lastResult;
private WebCamTexture camTexture;
private Rect screenRect;
private Color32[] cameraColorData;
private int width, height;
private CancellationTokenSource cts = new CancellationTokenSource();
private bool startEncoding;
private bool startDecoding;
private BarcodeWriter writer;
private Result result;
public event Action<string> OnResult;
private BarcodeReader barcodeReader = new BarcodeReader
{
AutoRotate = false,
Options = new ZXing.Common.DecodingOptions
{
TryHarder = false
}
};
public void ActiveScanner()
{
SetupWebcamTexture();
PlayWebcamTexture();
cameraColorData = new Color32[width * height];
screenRect = new Rect(0, 0, Screen.width, Screen.height);
if (cts != null)
{
cts.Dispose();
}
cts = new CancellationTokenSource();
// Restart camera handling and scanning
RestartScannerThreads();
}
private void OnEnable()
{
RestartScanner();
}
private void RestartScannerThreads()
{
ThreadPool.QueueUserWorkItem(new WaitCallback(GetCodeFromImageData), cts.Token);
ThreadPool.QueueUserWorkItem(new WaitCallback(EncodeNewFromLastResult), cts.Token);
}
private void Update()
{
if (camTexture != null && camTexture.isPlaying && !startDecoding)
{
camTexture.GetPixels32(cameraColorData);
startDecoding = true;
}
}
private void OnGUI()
{
DrawCameraTexture();
if (result != null)
{
HandleDecodedResult();
}
}
private void DrawCameraTexture()
{
if (camTexture != null)
{
// Save the current GUI matrix so it can be restored later
Matrix4x4 matrixBackup = GUI.matrix;
// Get the rotation angle and check for mirroring
float rotationAngle = -camTexture.videoRotationAngle;
// Add 180 degrees for an extra rotation
rotationAngle += 180;
bool isVerticallyMirrored = camTexture.videoVerticallyMirrored;
Rect rect = screenRect;
if (isVerticallyMirrored)
{
// If the image is vertically mirrored, adjust the rect position
rect.y = Screen.height - rect.y - rect.height;
}
// Compute the rotation pivot
Vector2 pivotPoint = new Vector2(rect.xMin + rect.width * 0.5f, rect.yMin + rect.height * 0.5f);
// Apply the rotation around the pivot point
GUIUtility.RotateAroundPivot(rotationAngle, pivotPoint);
// Draw the texture taking rotation and mirroring into account
GUI.DrawTexture(rect, camTexture, ScaleMode.ScaleToFit);
// Restore the GUI matrix
GUI.matrix = matrixBackup;
}
}
private void HandleDecodedResult()
{
try
{
if (result != null)
{
// QR code successfully decoded
Debug.Log("Decoded QR: " + result.Text);
// Stop scanning and the camera
camTexture.Stop();
var resultURL = result.Text.Substring(0, result.Text.Length - 1);
OnResult?.Invoke(resultURL);
// Open the link if needed
result = null;
//Application.OpenURL(resultURL);
}
}
catch (System.Exception ex)
{
Debug.LogWarning(ex.Message);
}
}
private void OnDestroy()
{
// Called when the object is destroyed
if (camTexture != null)
{
camTexture.Stop();
camTexture = null;
}
cts.Cancel();
cts.Dispose();
}
private void RestartScanner()
{
if (camTexture != null && !camTexture.isPlaying)
{
camTexture.Play();
}
startDecoding = false;
startEncoding = false;
ActiveScanner();
}
private void OnApplicationFocus(bool hasFocus)
{
if (hasFocus)
{
// Restart the scanner when the user returns to the app
RestartScanner();
}
}
private void SetupWebcamTexture()
{
camTexture = new WebCamTexture();
camTexture.requestedHeight = Screen.height;
camTexture.requestedWidth = Screen.width;
}
private void PlayWebcamTexture()
{
if (camTexture != null)
{
camTexture.Play();
width = camTexture.width;
height = camTexture.height;
}
}
private void EncodeNewFromLastResult(object obj)
{
CancellationToken token = (CancellationToken)obj;
while (!token.IsCancellationRequested)
{
if (startEncoding && lastResult != null)
{
startEncoding = !startEncoding;
}
}
}
private void GetCodeFromImageData(object obj)
{
CancellationToken token = (CancellationToken)obj;
while (!token.IsCancellationRequested)
{
// decode the current frame
if (startDecoding && cameraColorData != null)
{
result = barcodeReader.Decode(cameraColorData, width, height);
if (result != null)
{
lastResult = result.Text + " " + result.BarcodeFormat;
startEncoding = true;
}
startDecoding = !startDecoding;
}
}
}
}
|
6e949c13914c5421c473f03897a141c2
|
{
"intermediate": 0.29701918363571167,
"beginner": 0.4816162884235382,
"expert": 0.22136451303958893
}
|
46,532
|
I want to know the Angular CLI version; when I run ng -v it only gives me the Node version.
|
93e9373c6d034e88e0759e187b5b41b2
|
{
"intermediate": 0.5981019735336304,
"beginner": 0.18029139935970306,
"expert": 0.22160665690898895
}
|
46,533
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code with Keras for running inference with a multi-task learning model: it takes an image as input and produces one classification output and one segmentation output, and it lets the user select an image and run inference. Help me give the classification plot a dark theme and make it look more professional and high-tech. Show code.
|
e00277f430d64846662960ee719e2a1d
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,534
|
Hi! I need the bot to show the user only the biography (all three titles with their biographies) and the epitaph after generation, while still saving everything to the DB as before.
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
from datetime import datetime
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
name TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# Escaping for MarkdownV2
def mdv2(text: str) -> str:
escape_chars = [
"_", "[", "]", "(", ")", "~", "`", ">",
"#", "+", "-", "=", "|", "{", "}", ".", "!"
]
for char in escape_chars:
text = text.replace(char, f"\\{char}")
text = text.replace("**", "*").replace('"', '“')
return text
# Date validation
def validate_dates(birthdate_str: str, deathdate_str: str) -> bool:
try:
birthdate = datetime.strptime(birthdate_str, "%d.%m.%Y")
deathdate = datetime.strptime(deathdate_str, "%d.%m.%Y")
except ValueError:
return False
if deathdate < birthdate:
return False
return True
# Callbacks
change_action_cb = CallbackData('change', 'action')
# MENU BUTTON
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
            await dp.current_state(user=user_id, chat=user_id).reset_state(with_data=False)
            await dp.current_state(user=user_id, chat=user_id).set_state(Form.choosing_action.state)
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
text = "Вопросы:\n\n"
if questions:
text += "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
text += "\n\nВведите номер вопроса, который хотите изменить:"
await bot.send_message(callback_query.from_user.id, text,reply_markup=menu)
else:
await bot.send_message(callback_query.from_user.id,"Вопросы отсутствуют.", reply_markup=menu)
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=[Form.choosing_action, ])
async def generate_bio(message: types.Message):
user_id = message.from_user.id
await set_next_question(user_id)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
    text = message.text
    if not text.isdigit():
        await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
        return
    question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# ГЕНЕРАЦИЯ
def parse_yandexgpt(answer_text: str) -> dict:
    # Модель не всегда соблюдает шаблон: вместо "nome" может прийти "name",
    # вместо "birthdayat" - "birthday_at" и т.д. Принимаем синонимы и
    # приводим их к именам колонок таблицы public_info; "null" отбрасываем.
    tags = (r'name|nome|surname|patronym|birthday_at|birthdayat|died_at|diedat|epitaph|'
            r'placeOfBirth|placeOfDeath|children|wifehusband|education|occupation|awards|'
            r'title1|biography1|title2|biography2|title3|biography3|conclusion')
    pattern = re.compile(rf'^({tags})\s*:\s*(.*?)(?=\n(?:{tags})\s*:|\Z)',
                         re.DOTALL | re.MULTILINE)
    aliases = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
    data = {}
    for key, value in pattern.findall(answer_text):
        key, value = aliases.get(key, key), value.strip()
        if value and value.lower() != "null":
            data[key] = value
    return data
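A quick standalone check of the tag extraction: the model prefixes its reply with free text ("Вот возможная сводка...") and may use `name`/`birthday_at` instead of `nome`/`birthdayat`, so a tolerant parser matches only known tags at line starts and maps synonyms onto the `public_info` column names. A minimal sketch, separate from the bot code:

```python
import re

# Synonym map: model tag -> public_info column (assumption for this sketch)
ALIASES = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
TAGS = ("name|nome|surname|patronym|birthday_at|birthdayat|died_at|diedat|"
        "epitaph|placeOfBirth|placeOfDeath|children|wifehusband|education|"
        "occupation|awards|title1|biography1|title2|biography2|title3|"
        "biography3|conclusion")
PATTERN = re.compile(rf'^({TAGS})\s*:\s*(.*?)(?=\n(?:{TAGS})\s*:|\Z)',
                     re.DOTALL | re.MULTILINE)

def parse_reply(text: str) -> dict:
    out = {}
    for key, value in PATTERN.findall(text):
        key, value = ALIASES.get(key, key), value.strip()
        # Drop empty fields and the literal "null" the prompt asks for
        if value and value.lower() != "null":
            out[key] = value
    return out

sample = ("Вот возможная сводка:\nname: Иван\nsurname: Иванов\n"
          "birthdayat: 97.08.1979\nawards: null")
print(parse_reply(sample))
# {'nome': 'Иван', 'surname': 'Иванов', 'birthdayat': '97.08.1979'}
```

Multi-line values (the biographies) are covered too, because a value only ends where the next known tag starts a line.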
async def save_result(user_id: int, answer_dict: dict):
    async with aiosqlite.connect('base.db') as db:
        await db.execute("DELETE FROM public_info WHERE user_id=?", (user_id,))
        # Ключи берутся из фиксированного списка тэгов в parse_yandexgpt,
        # поэтому их безопасно подставлять как имена колонок;
        # значения идут только через плейсхолдеры "?"
        columns = ", ".join(answer_dict.keys())
        placeholders = ", ".join(["?"] * len(answer_dict))
        await db.execute(
            f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})",
            [user_id, *answer_dict.values()])
        await db.commit()
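`save_result` interpolates dict keys into the SQL as column names; SQLite can only bind values with `?`, not identifiers, so the keys must come from a fixed whitelist. A standalone sqlite3 sketch of that pattern (hypothetical `demo` table and `ALLOWED` subset):

```python
import sqlite3

ALLOWED = {"nome", "surname", "patronym"}  # subset of public_info columns

def insert_row(conn, user_id, fields):
    # Column names cannot be bound with "?", so only whitelisted keys are
    # interpolated into the statement; every value still uses a placeholder.
    safe = {k: v for k, v in fields.items() if k in ALLOWED}
    cols = ", ".join(safe)
    marks = ", ".join("?" * len(safe))
    conn.execute(f"INSERT INTO demo (user_id, {cols}) VALUES (?, {marks})",
                 [user_id, *safe.values()])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (user_id INTEGER, nome TEXT, surname TEXT, patronym TEXT)")
# The non-whitelisted key is silently dropped instead of reaching the SQL text
insert_row(conn, 1, {"nome": "Иван", "surname": "Иванов", "hack)--": "x"})
print(conn.execute("SELECT user_id, nome, surname FROM demo").fetchone())
# (1, 'Иван', 'Иванов')
```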
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации в каком-то пункте нет, пиши ТОЛЬКО null (без кавычек), а не что-либо другое. Создай следующую сводку:
nome: {}
surname: {}
patronym: {}
birthdayat: {}
diedat: {}
epitaph: {} # не больше 300 символов
placeOfBirth: {}
placeOfDeath: {}
children: {}
wifehusband: {} # Это супруг или супруга
education: {}
occupation: {} # Род деятельности человека
awards: {}
title1: {}
biography1: {}
title2: {}
biography2: {}
title3: {}
biography3: {}
conclusion: {}
В поле nome должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (nome), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph - это эпитафия, сделай так, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что-либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
answer_data = parse_yandexgpt(answer)
birthdate_str = answer_data.get("birthdayat", "")
deathdate_str = answer_data.get("diedat", "")
if birthdate_str and deathdate_str and not validate_dates(birthdate_str, deathdate_str):
markup = InlineKeyboardMarkup()
markup.add(InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change")))
await bot.send_message(user_id, "Некорректные даты: дата смерти не может быть раньше даты рождения или введены неверные даты.", reply_markup=markup)
else:
try:
await save_result(user_id, answer_data)
await bot.send_message(user_id, mdv2(answer), parse_mode="MarkdownV2",reply_markup=InlineKeyboardMarkup().add(InlineKeyboardButton("Обновить страницу",callback_data="update_page")))
                    except Exception as e:
                        print(f"Ошибка сохранения результата: {e}")
                        await bot.send_message(user_id, "Не удалось сохранить данные. Пожалуйста, нажмите Сгенерировать еще раз")
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
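The happy-path indexing `response_data['result']['alternatives'][0]['message']['text']` raises on any error payload from the API. A defensive accessor (a sketch that mirrors the response shape this code already assumes, not an official client helper) can return `None` instead:

```python
def extract_text(response_data):
    # Walk result.alternatives[0].message.text without letting a
    # KeyError/IndexError escape on an error payload
    alternatives = (response_data.get("result") or {}).get("alternatives") or []
    if not alternatives:
        return None
    return (alternatives[0].get("message") or {}).get("text")

ok = {"result": {"alternatives": [{"message": {"text": "nome: Иван"}}]}}
bad = {"error": {"message": "quota exceeded"}}
print(extract_text(ok))   # nome: Иван
print(extract_text(bad))  # None
```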
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
    # Ключи не стоит хранить в исходниках - читаем их из переменных окружения
    # (имена переменных условные; требуется import os в начале файла)
    api_key = os.getenv("YANDEX_API_KEY", "")
    sa_id = os.getenv("YANDEX_SA_ID", "")
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
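`YandexGPT.generate` calls `validate_dates`, which is defined elsewhere in the file and not shown in this excerpt. A plausible sketch under the assumption that both dates arrive as dd.mm.yyyy strings (not necessarily the project's actual helper):

```python
from datetime import datetime

def validate_dates(birth, death):
    # False for unparseable dates (e.g. "97.08.1979" or "1980-е")
    # and when the death date precedes the birth date.
    try:
        b = datetime.strptime(birth, "%d.%m.%Y")
        d = datetime.strptime(death, "%d.%m.%Y")
    except (ValueError, TypeError):
        return False
    return d >= b

print(validate_dates("07.08.1979", "01.01.1990"))  # True
print(validate_dates("97.08.1979", "01.01.1990"))  # False (day 97)
```

With the sample reply above, `97.08.1979` fails to parse, so the bot correctly offers to fix the answer instead of saving it.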
# ОБНОВЛЕНИЕ СТРАНИЦЫ
@dp.callback_query_handler(lambda c: c.data == 'update_page', state="*")
async def process_update_page(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT nome, surname, patronym, birthdayat, diedat, epitaph, placeOfBirth, placeOfDeath, children, wifehusband, education, occupation, awards, title1, biography1, title2, biography2, title3, biography3, conclusion FROM public_info WHERE user_id=?", (user_id,))
data = await cursor.fetchone()
        if data:
            def split_date(value):
                # Модель может вернуть "null" или "1980-е" вместо dd.mm.yyyy -
                # разбираем дату осторожно, чтобы не упасть на split/int
                parts = (value or "").split('.')
                if len(parts) == 3 and parts[2].isdigit():
                    return parts[0], parts[1], int(parts[2])
                return "01", "01", 1900
            day_b, month_b, year_b = split_date(data[3])
            day_d, month_d, year_d = split_date(data[4])
            data_structure = {
                "nome": data[0],
                "surname": data[1],
                "patronym": data[2],
                "birthday_at": (data[3] or "") + "T21:00:00.000000Z",
                "died_at": (data[4] or "") + "T21:00:00.000000Z",
                "epitaph": data[5],
                "placeOfBirth": data[6],
                "placeOfDeath": data[7],
                "children": data[8],
                "wifehusband": data[9],
                "education": data[10],
                "occupation": data[11],
                "awards": data[12],
                "title1": data[13],
                "biography1": data[14],
                "title2": data[15],
                "biography2": data[16],
                "title3": data[17],
                "biography3": data[18],
                "conclusion": data[19],
                "day_b": day_b, "month_b": month_b, "year_b": year_b,
                "day_d": day_d, "month_d": month_d, "year_d": year_d,
            }
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": "team1@hackathon.ru",
"password": "PA9dEa7x",
"device": "bot-v0.0.1"
}
            async with aiohttp.ClientSession() as session:
                async with session.post(url, json=payload, headers={'Accept': 'application/json', 'Content-Type': 'application/json;charset=UTF-8'}) as response:
                    if response.status == 200:
                        token_data = await response.json()
                        access_token = token_data.get('access_token')
                    else:
                        access_token = None
                        print(f'Error: {response.status}')
            if access_token:
                async with aiohttp.ClientSession() as session:
                    await update_page(session, access_token, data_structure)
                await bot.send_message(user_id, "Страница обновлена")
            else:
                await bot.send_message(user_id, "Не удалось получить токен доступа. Попробуйте позже")
await Form.choosing_action.set()
async def update_page(session, access_token, data_structure):
url = 'https://mc.dev.rand.agency/api/page/64577087'
print(data_structure["nome"])
payload = {
"id": 8734,
"name": data_structure["nome"],
"surname": data_structure["surname"],
"patronym": data_structure["patronym"],
"birthday_at": data_structure["birthday_at"],
"died_at": data_structure["died_at"],
"epitaph": data_structure["epitaph"],
"start": {
"day": data_structure["day_b"],
"month": data_structure["month_b"],
"year": data_structure["year_b"]
},
"end": {
"day": data_structure["day_d"],
"month": data_structure["month_d"],
"year": data_structure["year_d"]
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": data_structure["title1"],
"description": data_structure["biography1"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": data_structure["title2"],
"description": data_structure["biography2"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": data_structure["title3"],
"description": data_structure["biography3"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": "Заключение",
"description": data_structure["conclusion"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": data_structure["placeOfBirth"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30958,
"page_id": 8734,
"title": data_structure["placeOfDeath"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30959,
"page_id": 8734,
"title": data_structure["children"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": data_structure["wifehusband"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": data_structure["education"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": data_structure["occupation"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": data_structure["awards"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
# АДМИН-ПАНЕЛЬ
# КНОПКА НАЗАД
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# КЛАВА
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
|
abad928fa1faf75d06168876b49fce5d
|
{
"intermediate": 0.2358989715576172,
"beginner": 0.6563599109649658,
"expert": 0.10774113237857819
}
|
46,535
|
Hi! I have a Unity script that draws its interface in OnGUI, and I'd like to rework it along the lines of the second script below:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Threading;
using UnityEngine;
using ZXing;
using ZXing.QrCode;
public class QRCodeScanner : MonoBehaviour
{
[SerializeField]
private string lastResult;
private WebCamTexture camTexture;
private Rect screenRect;
private Color32[] cameraColorData;
private int width, height;
private CancellationTokenSource cts = new CancellationTokenSource();
private bool startEncoding;
private bool startDecoding;
private BarcodeWriter writer;
private Result result;
public event Action<string> OnResult;
private BarcodeReader barcodeReader = new BarcodeReader
{
AutoRotate = false,
Options = new ZXing.Common.DecodingOptions
{
TryHarder = false
}
};
public void ActiveScanner()
{
SetupWebcamTexture();
PlayWebcamTexture();
cameraColorData = new Color32[width * height];
screenRect = new Rect(0, 0, Screen.width, Screen.height);
if (cts != null)
{
cts.Dispose();
}
cts = new CancellationTokenSource();
// Перезапуск работы с камерой и сканированием
RestartScannerThreads();
}
private void OnEnable()
{
RestartScanner();
}
private void RestartScannerThreads()
{
ThreadPool.QueueUserWorkItem(new WaitCallback(GetCodeFromImageData), cts.Token);
ThreadPool.QueueUserWorkItem(new WaitCallback(EncodeNewFromLastResult), cts.Token);
}
private void Update()
{
if (camTexture != null && camTexture.isPlaying && !startDecoding)
{
camTexture.GetPixels32(cameraColorData);
startDecoding = true;
}
}
private void OnGUI()
{
DrawCameraTexture();
if (result != null)
{
HandleDecodedResult();
}
}
private void DrawCameraTexture()
{
if (camTexture != null)
{
// Сохраняем текущую матрицу GUI, чтобы восстановить её позже
Matrix4x4 matrixBackup = GUI.matrix;
// Получаем угол поворота и проверяем на зеркальное отображение
float rotationAngle = -camTexture.videoRotationAngle;
// Добавляем 180 градусов для дополнительного поворота
rotationAngle += 180;
bool isVerticallyMirrored = camTexture.videoVerticallyMirrored;
Rect rect = screenRect;
if (isVerticallyMirrored)
{
// Если изображение отображается зеркально, корректируем положение rect
rect.y = Screen.height - rect.y - rect.height;
}
// Высчитываем центр поворота
Vector2 pivotPoint = new Vector2(rect.xMin + rect.width * 0.5f, rect.yMin + rect.height * 0.5f);
// Применяем поворот вокруг центра поворота
GUIUtility.RotateAroundPivot(rotationAngle, pivotPoint);
// Рисуем текстуру с учётом поворота и зеркального отображения
GUI.DrawTexture(rect, camTexture, ScaleMode.ScaleToFit);
// Восстанавливаем матрицу GUI
GUI.matrix = matrixBackup;
}
}
private void HandleDecodedResult()
{
try
{
if (result != null)
{
// QR-код успешно считан
Debug.Log("Decoded QR: " + result.Text);
// Останавливаем сканирование и камеру
camTexture.Stop();
var resultURL = result.Text.Substring(0, result.Text.Length - 1);
OnResult?.Invoke(resultURL);
// Переходим по ссылке, если это нужно
result = null;
//Application.OpenURL(resultURL);
}
}
catch (System.Exception ex)
{
Debug.LogWarning(ex.Message);
}
}
private void OnDestroy()
{
// Вызывается при уничтожении объекта
if (camTexture != null)
{
camTexture.Stop();
camTexture = null;
}
cts.Cancel();
cts.Dispose();
}
private void RestartScanner()
{
if (camTexture != null && !camTexture.isPlaying)
{
camTexture.Play();
}
startDecoding = false;
startEncoding = false;
ActiveScanner();
}
private void OnApplicationFocus(bool hasFocus)
{
if (hasFocus)
{
// Перезапускаем сканер, когда пользователь возвращается в приложение
RestartScanner();
}
}
private void SetupWebcamTexture()
{
camTexture = new WebCamTexture();
camTexture.requestedHeight = Screen.height;
camTexture.requestedWidth = Screen.width;
}
private void PlayWebcamTexture()
{
if (camTexture != null)
{
camTexture.Play();
width = camTexture.width;
height = camTexture.height;
}
}
    private void EncodeNewFromLastResult(object obj)
    {
        CancellationToken token = (CancellationToken)obj;
        while (!token.IsCancellationRequested)
        {
            if (startEncoding && lastResult != null)
            {
                startEncoding = !startEncoding;
            }
            Thread.Sleep(50); // не крутим цикл вхолостую - иначе поток грузит ядро на 100%
        }
    }
    private void GetCodeFromImageData(object obj)
    {
        CancellationToken token = (CancellationToken)obj;
        while (!token.IsCancellationRequested)
        {
            // decode the current frame
            if (startDecoding && cameraColorData != null)
            {
                result = barcodeReader.Decode(cameraColorData, width, height);
                if (result != null)
                {
                    lastResult = result.Text + " " + result.BarcodeFormat;
                    startEncoding = true;
                }
                startDecoding = !startDecoding;
            }
            Thread.Sleep(50); // не крутим цикл вхолостую
        }
    }
}
// NewQRScan uses RawImage/AspectRatioFitter and TextMeshProUGUI, so its file
// additionally needs: using UnityEngine.UI; using TMPro;
public class NewQRScan : MonoBehaviour
{
[SerializeField]
private RawImage _rawImageBackground;
[SerializeField]
private AspectRatioFitter _aspectRatioFitter;
[SerializeField]
private TextMeshProUGUI _textOut;
[SerializeField]
private RectTransform _scanZone;
private bool _isCamAvaible;
private WebCamTexture _cameraTexture;
void Start()
{
SetUpCamera();
}
// Update is called once per frame
void Update()
{
UpdateCameraRender();
}
private void SetUpCamera()
{
WebCamDevice[] devices = WebCamTexture.devices;
string desiredName = devices.Length > 0 ? devices[0].name : null;
_cameraTexture = new WebCamTexture(desiredName,(int)_scanZone.rect.width, (int)_scanZone.rect.height);
_cameraTexture.Play();
_rawImageBackground.texture = _cameraTexture;
_isCamAvaible = true;
}
private void UpdateCameraRender()
{
if (_isCamAvaible == false)
{
return;
}
float ratio = (float)_cameraTexture.width / (float)_cameraTexture.height;
_aspectRatioFitter.aspectRatio = ratio;
int orientation = _cameraTexture.videoRotationAngle;
orientation = orientation * 3;
_rawImageBackground.rectTransform.localEulerAngles = new Vector3(0, 0, orientation);
}
|
0cef945b6bdc20b951d97cf88c8a52ef
|
{
"intermediate": 0.38581398129463196,
"beginner": 0.45847877860069275,
"expert": 0.1557072401046753
}
|
46,536
|
How to check ip address in Python without accepting connection (realise simple DDoS protection)?
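One answer sketch: userspace Python cannot see a TCP client's IP before `accept()` completes (true pre-accept filtering needs a firewall such as iptables/nftables), but you can accept, check the peer address against a rate limit, and drop the connection immediately without reading anything. A minimal sliding-window limiter (hypothetical `WINDOW`/`MAX_CONN` values):

```python
import time
from collections import defaultdict, deque

WINDOW = 10.0   # seconds per window
MAX_CONN = 5    # connections allowed per IP per window
_hits = defaultdict(deque)

def allow(ip, now=None):
    # Sliding-window limiter: drop timestamps older than WINDOW,
    # then admit the connection only if the IP is under MAX_CONN.
    now = time.monotonic() if now is None else now
    q = _hits[ip]
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= MAX_CONN:
        return False
    q.append(now)
    return True

# Hypothetical usage with a listening socket:
# conn, (ip, _port) = server.accept()
# if not allow(ip):
#     conn.close()          # drop immediately, before any read
```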
|
c2e79f03e38546db5e80b011465a835c
|
{
"intermediate": 0.525459885597229,
"beginner": 0.0869787260890007,
"expert": 0.3875614106655121
}
|
46,537
|
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import Output, Button
from warnings import filterwarnings
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = []
y_class_labels = []
y_seg_labels = []
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0)
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
y_class_labels = np.argmax(y_class_categorical, axis=1)
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 16
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
concat1 = concatenate([up1, relu8], axis=-1)
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1)
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1)
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1)
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
Categorical_Focal_loss = tf.keras.losses.CategoricalFocalCrossentropy(
alpha=0.25,
gamma=2.0,
from_logits=False,
label_smoothing=0.0,
axis=-1,)
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": Categorical_Focal_loss, "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
    # Note: besides its parameters, this function also reads X_val/y_val_* and
    # X_train/y_train_* from the enclosing module scope.
    with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
        best_model = load_model("multitask_best_weights.h5")
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
return test_class_acc
def plot_performance(history):
bg_color = '#303030'
text_color = 'white'
plt.rcParams['figure.facecolor'] = bg_color
plt.rcParams['axes.facecolor'] = bg_color
plt.rcParams['axes.edgecolor'] = 'grey'
plt.rcParams['axes.labelcolor'] = text_color
plt.rcParams['text.color'] = text_color
plt.rcParams['xtick.color'] = text_color
plt.rcParams['ytick.color'] = text_color
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Reset to defaults for other plots that may not want these settings
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['axes.edgecolor'] = 'black'
plt.rcParams['axes.labelcolor'] = 'black'
plt.rcParams['text.color'] = 'black'
plt.rcParams['xtick.color'] = 'black'
plt.rcParams['ytick.color'] = 'black'
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030'
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
out = Output()
def predict_image(image_path):
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
class_output, seg_output = model.predict(img_expanded)
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
with out:
clear_output(wait=True)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
bg_color = '#303030'
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
def on_upload(change):
    uploaded_filename = next(iter(uploader.value))
    uploaded_file = uploader.value[uploaded_filename]
    with open(uploaded_filename, 'wb') as f:
        f.write(uploaded_file['content'])
    predict_image(uploaded_filename)
# The uploader widget was missing; create it and wire up the callback
# (value layout follows the ipywidgets 7.x FileUpload API).
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
    out.clear_output()
clear_button.on_click(clear_output_button_clicked)
image_size = 224
labels = ["bridge", "excess", "good", "insuff", "no"]
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
class_data_counts = {
"bridge": [80, 80, 80],
"excess": [80, 80, 80],
"good": [80, 80, 80],
"insuff": [80, 80, 80],
"no": [80, 80, 80]}
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, class_data_counts)
label_encoder = LabelEncoder()
label_encoder.fit(y_class_labels)
train_counts = count_labels(y_train_class, label_encoder)
val_counts = count_labels(y_val_class, label_encoder)
test_counts = count_labels(y_test_class, label_encoder)
print("Train counts: ", train_counts," Total in train set:", sum(train_counts.values()))
print("Validation counts:", val_counts, " Total in validation set:", sum(val_counts.values()))
print("Test counts: ", test_counts," Total in test set:", sum(test_counts.values()))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
batch_size = 16
epochs = 2
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
"
The above is Python code using Keras to train a multi-task learning model: it takes an image as input and produces one classification output and one segmentation output. Before training, I want to randomly plot some images (e.g. 10) together with their segmentation masks and labels. Use '#303030' as the background color.
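One way to satisfy the request above is a minimal sketch like the following. It assumes the `X_train`, `y_train_class` (one-hot) and `y_train_seg` arrays plus the `labels` list defined in the script; the helper name `plot_random_samples` is my own, not part of the original code.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_random_samples(X, y_class, y_seg, labels, num_images=10, bg_color="#303030"):
    # Pick distinct random samples to display before training.
    idx = np.random.choice(len(X), num_images, replace=False)
    for i in idx:
        fig, axs = plt.subplots(1, 2, figsize=(8, 4))
        fig.patch.set_facecolor(bg_color)
        # Left panel: the input image with its class label in the title.
        axs[0].imshow(X[i])
        axs[0].set_title(f"Label: {labels[np.argmax(y_class[i])]}", color="white")
        axs[0].axis("off")
        # Right panel: the binary segmentation mask.
        axs[1].imshow(y_seg[i].squeeze(), cmap="gray")
        axs[1].set_title("Segmentation Mask", color="white")
        axs[1].axis("off")
        plt.tight_layout()
        plt.show()
        plt.close(fig)
    return idx
```

Calling `plot_random_samples(X_train, y_train_class, y_train_seg, labels, num_images=10)` just before `train_model(...)` would show the requested preview.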
|
22d502c76fe3c533345807c912ae79ff
|
{
"intermediate": 0.34744828939437866,
"beginner": 0.4292181432247162,
"expert": 0.22333356738090515
}
|
46,538
|
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import Output, Button
from warnings import filterwarnings
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = []
y_class_labels = []
y_seg_labels = []
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0)
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
y_class_labels = np.argmax(y_class_categorical, axis=1)
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 16
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
concat1 = concatenate([up1, relu8], axis=-1)
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1)
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1)
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1)
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
Categorical_Focal_loss = tf.keras.losses.CategoricalFocalCrossentropy(
alpha=0.25,
gamma=2.0,
from_logits=False,
label_smoothing=0.0,
axis=-1,)
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": Categorical_Focal_loss, "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
    # Note: besides its parameters, this function also reads X_val/y_val_* and
    # X_train/y_train_* from the enclosing module scope.
    with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
        best_model = load_model("multitask_best_weights.h5")
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
return test_class_acc
def plot_performance(history):
bg_color = '#303030'
text_color = 'white'
plt.rcParams['figure.facecolor'] = bg_color
plt.rcParams['axes.facecolor'] = bg_color
plt.rcParams['axes.edgecolor'] = 'grey'
plt.rcParams['axes.labelcolor'] = text_color
plt.rcParams['text.color'] = text_color
plt.rcParams['xtick.color'] = text_color
plt.rcParams['ytick.color'] = text_color
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Reset to defaults for other plots that may not want these settings
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['axes.edgecolor'] = 'black'
plt.rcParams['axes.labelcolor'] = 'black'
plt.rcParams['text.color'] = 'black'
plt.rcParams['xtick.color'] = 'black'
plt.rcParams['ytick.color'] = 'black'
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030'
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
out = Output()
def predict_image(image_path):
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
class_output, seg_output = model.predict(img_expanded)
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
with out:
clear_output(wait=True)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
bg_color = '#303030'
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
def on_upload(change):
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
predict_image(uploaded_filename)
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
image_size = 224
labels = ["bridge", "excess", "good", "insuff", "no"]
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
class_data_counts = {
"bridge": [80, 80, 80],
"excess": [80, 80, 80],
"good": [80, 80, 80],
"insuff": [80, 80, 80],
"no": [80, 80, 80]}
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, class_data_counts)
label_encoder = LabelEncoder()
label_encoder.fit(y_class_labels)
train_counts = count_labels(y_train_class, label_encoder)
val_counts = count_labels(y_val_class, label_encoder)
test_counts = count_labels(y_test_class, label_encoder)
print("Train counts: ", train_counts," Total in train set:", sum(train_counts.values()))
print("Validation counts:", val_counts, " Total in validation set:", sum(val_counts.values()))
print("Test counts: ", test_counts," Total in test set:", sum(test_counts.values()))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
model = build_model(input_shape, num_classes)
batch_size = 16
epochs = 2
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
"
The above is the Python code with Keras to train the multi-task learning model, this model will take the image as input and provide one classification output and one segmentation output. I want to plot some images (e.g. 10) with their segmentation masks and labels randomly before training. Use '#303030' as background color.
|
2874dba355b5969100c56f567cb51b93
|
{
"intermediate": 0.34744828939437866,
"beginner": 0.4292181432247162,
"expert": 0.22333356738090515
}
|
46,539
|
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import Output, Button
from warnings import filterwarnings
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = []
y_class_labels = []
y_seg_labels = []
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0)
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
y_class_labels = np.argmax(y_class_categorical, axis=1)
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 16
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
concat1 = concatenate([up1, relu8], axis=-1)
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1)
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1)
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1)
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
Categorical_Focal_loss = tf.keras.losses.CategoricalFocalCrossentropy(
alpha=0.25,
gamma=2.0,
from_logits=False,
label_smoothing=0.0,
axis=-1,)
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": Categorical_Focal_loss, "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
best_model = load_model("multitask_best_weights.h5")
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
return test_class_acc
def plot_performance(history):
bg_color = '#303030'
text_color = 'white'
plt.rcParams['figure.facecolor'] = bg_color
plt.rcParams['axes.facecolor'] = bg_color
plt.rcParams['axes.edgecolor'] = 'grey'
plt.rcParams['axes.labelcolor'] = text_color
plt.rcParams['text.color'] = text_color
plt.rcParams['xtick.color'] = text_color
plt.rcParams['ytick.color'] = text_color
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Reset to defaults for other plots that may not want these settings
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['axes.edgecolor'] = 'black'
plt.rcParams['axes.labelcolor'] = 'black'
plt.rcParams['text.color'] = 'black'
plt.rcParams['xtick.color'] = 'black'
plt.rcParams['ytick.color'] = 'black'
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030'
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
out = Output()
def predict_image(image_path):
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
class_output, seg_output = model.predict(img_expanded)
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
with out:
clear_output(wait=True)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
bg_color = '#303030'
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
def on_upload(change):
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
predict_image(uploaded_filename)
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
image_size = 224
labels = ["bridge", "excess", "good", "insuff", "no"]
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
class_data_counts = {
"bridge": [80, 80, 80],
"excess": [80, 80, 80],
"good": [80, 80, 80],
"insuff": [80, 80, 80],
"no": [80, 80, 80]}
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, class_data_counts)
label_encoder = LabelEncoder()
label_encoder.fit(y_class_labels)
train_counts = count_labels(y_train_class, label_encoder)
val_counts = count_labels(y_val_class, label_encoder)
test_counts = count_labels(y_test_class, label_encoder)
print("Train counts: ", train_counts," Total in train set:", sum(train_counts.values()))
print("Validation counts:", val_counts, " Total in validation set:", sum(val_counts.values()))
print("Test counts: ", test_counts," Total in test set:", sum(test_counts.values()))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
model = build_model(input_shape, num_classes)
batch_size = 16
epochs = 2
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
"
The above is the Python code with Keras to train the multi-task learning model, this model will take the image as input and provide one classification output and one segmentation output. I want to plot some images (e.g. 10) with their segmentation masks and labels randomly before training. Use '#303030' as background color.
|
6c8dd0e2a4f22d5940287d1003ff118c
|
{
"intermediate": 0.34744828939437866,
"beginner": 0.4292181432247162,
"expert": 0.22333356738090515
}
|
46,540
|
mocking a function inside a function using Gtest
|
75f0c20b9073a1fdc0648ca32b20cfc6
|
{
"intermediate": 0.42606696486473083,
"beginner": 0.33443161845207214,
"expert": 0.23950143158435822
}
|
46,541
|
in javascript is it possible to set two conditions for an if statement. for example - if (!isLineDrawn & selectedMarker === null)
|
2fcab150bc202de0a19d73f05531bc16
|
{
"intermediate": 0.3547849655151367,
"beginner": 0.41620486974716187,
"expert": 0.2290101945400238
}
|
46,542
|
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import Output, Button
from warnings import filterwarnings
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = []
y_class_labels = []
y_seg_labels = []
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0)
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
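The per-class slicing in split_data (train, then val, then test, taken consecutively) can be sanity-checked with a tiny NumPy sketch; the data and counts below are made up for illustration.

```python
import numpy as np

# Toy check of the slicing used in split_data: for one class with
# counts [train, val, test] = [3, 2, 1], consecutive slices must
# partition the first 6 samples without overlap.
data = np.arange(10)  # stand-in for one class's samples
train_count, val_count, test_count = 3, 2, 1

train = data[:train_count]
val = data[train_count:train_count + val_count]
test = data[train_count + val_count:train_count + val_count + test_count]

print(train.tolist(), val.tolist(), test.tolist())  # [0, 1, 2] [3, 4] [5]
```

Note that samples beyond train+val+test (here indices 6-9) are silently dropped, which is why the printed per-split totals later in the script matter.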
def count_labels(y_class_categorical, label_encoder):
y_class_labels = np.argmax(y_class_categorical, axis=1)
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
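What count_labels computes can be sketched without sklearn: argmax recovers class indices from one-hot rows, a lookup array plays the role of LabelEncoder.inverse_transform, and np.unique tallies the names. The class names and labels below are illustrative only.

```python
import numpy as np

# Minimal re-creation of count_labels using a plain lookup array
# in place of LabelEncoder.
class_names = np.array(["bridge", "excess", "good"])
y_categorical = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 1],
])
y_indices = np.argmax(y_categorical, axis=1)   # one-hot -> class index
y_names = class_names[y_indices]               # index -> class name
unique, counts = np.unique(y_names, return_counts=True)
counts_dict = dict(zip(unique, counts))
print(counts_dict)  # {'bridge': 1, 'excess': 2, 'good': 1}
```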
def build_model(input_shape, num_classes):
num_filter = 16
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
concat1 = concatenate([up1, relu8], axis=-1)
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1)
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1)
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1)
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
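The spatial arithmetic behind build_model can be checked on paper: four 2x2 poolings take the 224x224 input down to 14x14 at the bottleneck, while the classification branch re-pools the full-resolution segmentation map through three 4x4 poolings (224 -> 56 -> 14 -> 3). A short sketch, assuming image_size=224 and num_filter=16 as set later in the script:

```python
# Back-of-envelope shape check for build_model ("same"-padded convs keep
# spatial size; each pooling floor-divides it).
size = 224
for _ in range(4):       # four 2x2 encoder poolings: 224 -> 14
    size //= 2

seg_size = 224           # the classification branch pools the seg map
for _ in range(3):       # three 4x4 poolings: 224 -> 56 -> 14 -> 3
    seg_size //= 4

num_filter = 16
flatten_features = seg_size * seg_size * (num_filter * 1)       # conv22: 16 channels
gap_features = num_filter * 8 + num_filter * 16 + num_filter * 8  # relu8/relu10/relu12
print(size, seg_size, gap_features + flatten_features)  # 14 3 656
```

So the Dense(1024) head sees a 656-dimensional vector: 512 from the three global-average-pooled encoder/decoder stages plus 144 (3x3x16) from the flattened segmentation branch.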
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
total_loss = bce_loss + 1 * dice_loss
return total_loss
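The Dice term in segmentation_loss can be sanity-checked at its extremes with a NumPy re-implementation (same formula and smooth constant, TensorFlow swapped for NumPy so it runs standalone):

```python
import numpy as np

# NumPy mirror of the Dice part of segmentation_loss: near 0 for a
# perfect match, near 1 for fully disjoint masks.
def dice_loss(y_true, y_pred, smooth=1e-5):
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

mask = np.array([[1.0, 0.0], [0.0, 1.0]])
print(dice_loss(mask, mask))        # close to 0.0
print(dice_loss(mask, 1.0 - mask))  # close to 1.0
```

Because intersection and union are reduced over the whole batch, the Dice term is a single scalar added (via broadcasting) onto the per-pixel binary cross-entropy.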
Categorical_Focal_loss = tf.keras.losses.CategoricalFocalCrossentropy(
alpha=0.25,
gamma=2.0,
from_logits=False,
label_smoothing=0.0,
axis=-1,)
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": Categorical_Focal_loss, "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
# NOTE: the `model` argument is unused (the best checkpoint is reloaded from disk),
# and the validation/train evaluations below read X_train, X_val and their labels
# from the enclosing (global) scope, so this must run after split_data.
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
best_model = load_model("multitask_best_weights.h5")
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
return test_class_acc
def plot_performance(history):
bg_color = '#303030'
text_color = 'white'
plt.rcParams['figure.facecolor'] = bg_color
plt.rcParams['axes.facecolor'] = bg_color
plt.rcParams['axes.edgecolor'] = 'grey'
plt.rcParams['axes.labelcolor'] = text_color
plt.rcParams['text.color'] = text_color
plt.rcParams['xtick.color'] = text_color
plt.rcParams['ytick.color'] = text_color
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Reset to defaults for other plots that may not want these settings
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['axes.edgecolor'] = 'black'
plt.rcParams['axes.labelcolor'] = 'black'
plt.rcParams['text.color'] = 'black'
plt.rcParams['xtick.color'] = 'black'
plt.rcParams['ytick.color'] = 'black'
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030'
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
out = Output()
def predict_image(image_path):
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
class_output, seg_output = model.predict(img_expanded)
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
with out:
clear_output(wait=True)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
bg_color = '#303030'
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
def on_upload(change):
# NOTE: assumes a FileUpload widget named `uploader` is created elsewhere and
# wired up via uploader.observe(on_upload, names='value').
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
predict_image(uploaded_filename)
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
image_size = 224
labels = ["bridge", "excess", "good", "insuff", "no"]
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
class_data_counts = {
"bridge": [80, 80, 80],
"excess": [80, 80, 80],
"good": [80, 80, 80],
"insuff": [80, 80, 80],
"no": [80, 80, 80]}
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, class_data_counts)
label_encoder = LabelEncoder()
label_encoder.fit(y_class_labels)
train_counts = count_labels(y_train_class, label_encoder)
val_counts = count_labels(y_val_class, label_encoder)
test_counts = count_labels(y_test_class, label_encoder)
print("Train counts: ", train_counts," Total in train set:", sum(train_counts.values()))
print("Validation counts:", val_counts, " Total in validation set:", sum(val_counts.values()))
print("Test counts: ", test_counts," Total in test set:", sum(test_counts.values()))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
batch_size = 16
epochs = 2
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
"
The above is the Python code with Keras used to train the multi-task learning model; it takes an image as input and produces one classification output and one segmentation output. I want to plot some images with their segmentation masks and labels randomly before training (at least one for each class). Use '#303030' as the background color.
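A minimal sketch of such a pre-training preview, assuming it is called after load_data (so X_data, y_class_labels, y_seg_labels, and labels exist) and before train_model; the function name plot_class_examples is hypothetical:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch also runs without a display
import matplotlib.pyplot as plt
import numpy as np

def plot_class_examples(X_data, y_class_labels, y_seg_labels, labels, bg_color="#303030"):
    # One random image per class (top row) with its ground-truth mask (bottom row).
    fig, axs = plt.subplots(2, len(labels), figsize=(3 * len(labels), 6), squeeze=False)
    fig.patch.set_facecolor(bg_color)
    for col, cls in enumerate(labels):
        idx = np.random.choice(np.where(y_class_labels == cls)[0])
        axs[0][col].imshow(X_data[idx].astype(np.uint8))
        axs[0][col].set_title(cls, color="white", fontweight="bold")
        axs[1][col].imshow(np.squeeze(y_seg_labels[idx]), cmap="gray")
        for row in (0, 1):
            axs[row][col].axis("off")
    plt.tight_layout()
    return fig
```

Usage would be a single call such as `plot_class_examples(X_data, y_class_labels, y_seg_labels, labels)` placed right after the load_data call.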
|
39be8841fe82c651f800c422992d51e8
|
{
"intermediate": 0.34744828939437866,
"beginner": 0.4292181432247162,
"expert": 0.22333356738090515
}
|
46,543
|
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import Output, Button
from warnings import filterwarnings
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = []
y_class_labels = []
y_seg_labels = []
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0)
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
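The mask preprocessing inside load_data collapses each grayscale PNG into a binary {0, 1} mask: every non-zero pixel becomes foreground. A tiny illustration with a made-up 2x2 mask:

```python
import numpy as np

# np.where(seg_img > 0, 1, 0): any non-zero grayscale value -> 1.
seg_img = np.array([[0, 12], [255, 0]])
binary = np.where(seg_img > 0, 1, 0)
print(binary.tolist())  # [[0, 1], [1, 0]]
```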
def split_data(X_data, y_class_labels, y_seg_labels, class_data_counts):
X_train = []
y_train_class = []
y_train_seg = []
X_val = []
y_val_class = []
y_val_seg = []
X_test = []
y_test_class = []
y_test_seg = []
for label, count in class_data_counts.items():
label_indices = np.where(y_class_labels == label)[0]
class_X_data = X_data[label_indices]
class_y_class_labels = y_class_labels[label_indices]
class_y_seg_labels = y_seg_labels[label_indices]
train_count = count[0]
val_count = count[1]
test_count = count[2]
class_X_train = class_X_data[:train_count]
class_y_train_class = class_y_class_labels[:train_count]
class_y_train_seg = class_y_seg_labels[:train_count]
class_X_val = class_X_data[train_count: train_count + val_count]
class_y_val_class = class_y_class_labels[train_count: train_count + val_count]
class_y_val_seg = class_y_seg_labels[train_count: train_count + val_count]
class_X_test = class_X_data[train_count + val_count: train_count + val_count + test_count]
class_y_test_class = class_y_class_labels[train_count + val_count: train_count + val_count + test_count]
class_y_test_seg = class_y_seg_labels[train_count + val_count: train_count + val_count + test_count]
X_train.extend(class_X_train)
y_train_class.extend(class_y_train_class)
y_train_seg.extend(class_y_train_seg)
X_val.extend(class_X_val)
y_val_class.extend(class_y_val_class)
y_val_seg.extend(class_y_val_seg)
X_test.extend(class_X_test)
y_test_class.extend(class_y_test_class)
y_test_seg.extend(class_y_test_seg)
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
np.array(X_train),
np.array(y_train_class_categorical),
np.array(y_train_seg),
np.array(X_val),
np.array(y_val_class_categorical),
np.array(y_val_seg),
np.array(X_test),
np.array(y_test_class_categorical),
np.array(y_test_seg),
)
def count_labels(y_class_categorical, label_encoder):
y_class_labels = np.argmax(y_class_categorical, axis=1)
y_class_names = label_encoder.inverse_transform(y_class_labels)
unique, counts = np.unique(y_class_names, return_counts=True)
return dict(zip(unique, counts))
def build_model(input_shape, num_classes):
num_filter = 16
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
concat1 = concatenate([up1, relu8], axis=-1)
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1)
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1)
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1)
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
total_loss = bce_loss + 1 * dice_loss
return total_loss
Categorical_Focal_loss = tf.keras.losses.CategoricalFocalCrossentropy(
alpha=0.25,
gamma=2.0,
from_logits=False,
label_smoothing=0.0,
axis=-1,)
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": Categorical_Focal_loss, "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
# NOTE: the `model` argument is unused (the best checkpoint is reloaded from disk),
# and the validation/train evaluations below read X_train, X_val and their labels
# from the enclosing (global) scope, so this must run after split_data.
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
best_model = load_model("multitask_best_weights.h5")
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
return test_class_acc
def plot_performance(history):
bg_color = '#303030'
text_color = 'white'
plt.rcParams['figure.facecolor'] = bg_color
plt.rcParams['axes.facecolor'] = bg_color
plt.rcParams['axes.edgecolor'] = 'grey'
plt.rcParams['axes.labelcolor'] = text_color
plt.rcParams['text.color'] = text_color
plt.rcParams['xtick.color'] = text_color
plt.rcParams['ytick.color'] = text_color
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3), dpi=200)
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss", fontweight='bold')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Reset to defaults for other plots that may not want these settings
plt.rcParams['figure.facecolor'] = 'white'
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['axes.edgecolor'] = 'black'
plt.rcParams['axes.labelcolor'] = 'black'
plt.rcParams['text.color'] = 'black'
plt.rcParams['xtick.color'] = 'black'
plt.rcParams['ytick.color'] = 'black'
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030'
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
out = Output()
def predict_image(image_path):
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
class_output, seg_output = model.predict(img_expanded)
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
with out:
clear_output(wait=True)
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
bg_color = '#303030'
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
def on_upload(change):
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
predict_image(uploaded_filename)
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
image_size = 224
labels = ["bridge", "excess", "good", "insuff", "no"]
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
class_data_counts = {
"bridge": [80, 80, 80],
"excess": [80, 80, 80],
"good": [80, 80, 80],
"insuff": [80, 80, 80],
"no": [80, 80, 80]}
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, class_data_counts)
label_encoder = LabelEncoder()
label_encoder.fit(y_class_labels)
train_counts = count_labels(y_train_class, label_encoder)
val_counts = count_labels(y_val_class, label_encoder)
test_counts = count_labels(y_test_class, label_encoder)
print("Train counts: ", train_counts," Total in train set:", sum(train_counts.values()))
print("Validation counts:", val_counts, " Total in validation set:", sum(val_counts.values()))
print("Test counts: ", test_counts," Total in test set:", sum(test_counts.values()))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
model = build_model(input_shape, num_classes)
batch_size = 16
epochs = 2
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
"
The above is the Python code with Keras to train the multi-task learning model, this model will take the image as input and provide one classification output and one segmentation output. I want to plot some images with their segmentation masks and labels randomly before training (at least one for each class). Use '#303030' as background color.
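A minimal sketch of the requested pre-training check, assuming arrays shaped like `X_train`, one-hot `y_train_class`, and binary `y_train_seg` from the code above; synthetic data stands in here so the snippet runs on its own, and `plot_samples_per_class` is an illustrative name, not part of the original code:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch also runs without a display
import matplotlib.pyplot as plt
import numpy as np

def plot_samples_per_class(X, y_class_onehot, y_seg, labels, bg_color="#303030"):
    """Plot one image (top row) and its segmentation mask (bottom row) per class."""
    class_idx = np.argmax(y_class_onehot, axis=1)
    fig, axs = plt.subplots(2, len(labels), figsize=(3 * len(labels), 6))
    fig.patch.set_facecolor(bg_color)
    for col, label in enumerate(labels):
        i = np.where(class_idx == col)[0][0]  # first sample of this class
        axs[0, col].imshow(X[i])
        axs[0, col].set_title(label, color="white", fontweight="bold")
        axs[1, col].imshow(y_seg[i].squeeze(), cmap="gray")
        axs[1, col].set_title("mask", color="white")
        for row in (0, 1):
            axs[row, col].axis("off")
    fig.tight_layout()
    return fig

# Synthetic stand-ins: 2 samples per class, 32x32 images, binary masks
labels = ["bridge", "excess", "good", "insuff", "no"]
X = np.random.rand(10, 32, 32, 3)
y_class = np.eye(len(labels))[np.repeat(np.arange(len(labels)), 2)]
y_seg = np.random.randint(0, 2, (10, 32, 32))
fig = plot_samples_per_class(X, y_class, y_seg, labels)
```

With the real data this would be called as `plot_samples_per_class(X_train, y_train_class, y_train_seg, labels)` right after `split_data`, before `train_model`.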
|
bdf7ae19b00471ecc96f4b1e4d938bf8
|
{
"intermediate": 0.34744828939437866,
"beginner": 0.4292181432247162,
"expert": 0.22333356738090515
}
|
46,544
|
failed to make directory /boot/grub on arch Linux installation
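A likely cause (a guess, not a diagnosis) is that the boot partition was never mounted inside the chroot, so grub-install has nowhere to create /boot/grub. A hedged check-and-fix sketch; /dev/sda1 and the BIOS target are assumptions to adapt to the actual disk layout:

```shell
# Check whether /boot is a real mounted filesystem or just an empty directory
if mountpoint -q /boot; then
  echo "/boot is mounted"
else
  echo "/boot is NOT mounted -- mount the boot partition first, e.g.:"
  echo "  mount /dev/sda1 /boot    # /dev/sda1 is an assumption; check lsblk"
fi

# Then retry (BIOS example; UEFI would use --target=x86_64-efi --efi-directory=/boot):
#   grub-install --target=i386-pc /dev/sda
```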
|
027e434238e2b9661842ff74cfebc9ad
|
{
"intermediate": 0.3370038568973541,
"beginner": 0.31935933232307434,
"expert": 0.34363681077957153
}
|
46,545
|
in this javascript for leaflet.js I want to create a function to allow the user to draw another polyline from either firstpoint or secondpoint to any other circleMarker. However, the polyline should only be added to the map if money >= 50000 - 'var money = 100000;
var numberOfCarriages = 1;
var speed = 60;
var dailybonus = 0;
var selectedMarker = null; // Variable to store the selected marker
const map = L.map("map").setView([54.2231637, -1.9381623], 6);
// Add custom zoom control to the map with position set to "topright"
const customZoomControl = L.control.zoom({ position: "topright" }).addTo(map);
// Remove the default zoom control from the map
map.removeControl(map.zoomControl);
let clickedPoints = [];
let isLineDrawn = false;
let marker; // Declare the marker variable
let progress = 0;
let cafeOneBonus = 0;
let cafeTwoBonus = 0;
let hotelOneBonus = 0;
let hotelTwoBonus = 0;
let buildingRadius = 0;
let buildingRadius2 = 0;
// Function to create circle markers with click functionality
function createCircleMarkers(geojson) {
return L.geoJSON(geojson, {
pointToLayer: function (feature, latlng) {
const circleMarker = L.circleMarker(latlng, {
radius: 4,
fillColor: "#ff7800",
color: "#000",
weight: 0.2,
opacity: 1,
fillOpacity: 0.8,
});
// Attach the feature to the circle marker
circleMarker.feature = feature;
circleMarker.on("mouseover", function () {
this.bindPopup(feature.properties.city).openPopup();
});
circleMarker.on("click", function (e) {
if (!isLineDrawn && selectedMarker !== e.target) {
clickedPoints.push(e.target); // Push the circle marker with attached feature
if (clickedPoints.length === 2) {
const firstCityCoords =
clickedPoints[0].feature.geometry.coordinates;
const secondCityCoords =
clickedPoints[1].feature.geometry.coordinates;
const polyline = L.polyline(
clickedPoints.map((p) => p.getLatLng())
).addTo(map);
const firstCity = clickedPoints[0].feature.properties.city;
const secondCity = clickedPoints[1].feature.properties.city;
clickedPoints = [];
isLineDrawn = true;
// Remove click event listener after a line has been drawn
map.off("click");
// Set the map bounds to show the area with the polyline
map.fitBounds(polyline.getBounds());
money = money - 50000; // Subtract 50000 from money
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`; // Assuming money is a number
moneyDisplay.textContent = moneyString;
const instructionsElement = document.getElementById("instructions");
// Clear any existing content in the instructions element:
instructionsElement.innerHTML = "";
// Create separate paragraph elements:
const congratulationsParagraph = document.createElement("p");
congratulationsParagraph.textContent = `Congratulations you have built your first train line from ${firstCity} to ${secondCity}!`;
const costsParagraph = document.createElement("p");
costsParagraph.textContent = `Your construction costs were £50,000. You have £50,000 remaining.`;
const buyTrainParagraph = document.createElement("p");
buyTrainParagraph.textContent = "You now need to buy a train.";
const newTrainParagraph = document.createElement("p");
newTrainParagraph.textContent =
"At this time you can only afford to buy the train engine the Sleeping Lion. The Sleeping Lion has a traveling speed of 60 miles per hour. It can pull four carriages, which means your train will have a capacity of around 120 seated passengers.";
const traincost = document.createElement("p");
traincost.textContent = `The Sleeping Lion will cost you £30,000 to purchase. Do you wish to buy the Sleeping Lion?`;
// Append paragraphs to the instructions element:
instructionsElement.appendChild(congratulationsParagraph);
instructionsElement.appendChild(costsParagraph);
instructionsElement.appendChild(buyTrainParagraph);
instructionsElement.appendChild(newTrainParagraph);
instructionsElement.appendChild(traincost);
// Add button element:
const buyButton = document.createElement("button");
buyButton.id = "buybutton";
buyButton.textContent = "Buy Train";
// Append the button element to the instructions element:
instructionsElement.appendChild(buyButton);
//buybutton event listener
document
.getElementById("buybutton")
.addEventListener("click", function () {
// Check if you have enough money before purchase
money = money - 30000; // Subtract 30000 from money
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Update instructions content after successful purchase
instructionsElement.innerHTML = ""; // Clear previous content
const successMessage = document.createElement("p");
successMessage.textContent = `You now have a train line from ${firstCity} to ${secondCity} and a train! Press the button below to begin operations.`;
instructionsElement.appendChild(successMessage);
// Add button element:
const trainButton = document.createElement("button");
trainButton.id = "trainbutton";
trainButton.textContent = "Start Train";
// Append the button element to the instructions element:
instructionsElement.appendChild(trainButton);
trainButton.addEventListener("click", function () {
console.log("Train Started");
//emptyinstructions add improvement buttons
instructionsElement.innerHTML = ""; // Clear previous content
//randomgeneration of dailybonus
function generateDailyBonus(minBonus, maxBonus) {
const randomNumber =
Math.floor(Math.random() * (maxBonus - minBonus + 1)) +
minBonus;
dailybonus += randomNumber;
console.log(`Daily bonus of ${randomNumber} added!`);
}
//buy carriages
//add carriages button
const carriageButton = document.createElement("button");
carriageButton.id = "trainbutton";
carriageButton.textContent = "Buy Train Carriage";
const carriageMessage = document.createElement("p");
carriageMessage.textContent = `Buy another passenger carriage for your train for £20,000`;
instructionsElement.appendChild(carriageMessage);
// Append the button element to the instructions element:
instructionsElement.appendChild(carriageButton);
//cariagebutton logic
carriageButton.addEventListener("click", () => {
console.log("Carriage Bought");
// Check if enough money is available
if (money >= 20000) {
// Check if maximum number of carriages reached
if (numberOfCarriages < 4) {
numberOfCarriages++;
money -= 20000; // Subtract 20000 from money
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Update marker content using the previously retrieved reference
markerContent.textContent = numberOfCarriages;
} else {
console.log(
"Maximum number of carriages reached! You can't buy more."
);
instructionsElement.removeChild(carriageButton);
instructionsElement.removeChild(carriageMessage);
}
}
});
//buy station cafes
//add station one cafe button
const stationOneMessage = document.createElement("p");
stationOneMessage.textContent = `Open a cafe in ${firstCity} Station for £2,500.`;
instructionsElement.appendChild(stationOneMessage);
// Add button element:
const cafeOneButton = document.createElement("button");
cafeOneButton.id = "trainbutton";
cafeOneButton.textContent = "Buy Cafe";
// Append the button element to the instructions element:
instructionsElement.appendChild(cafeOneButton);
//cafeonelogic
cafeOneButton.addEventListener("click", () => {
if (money >= 2500) {
// add a random number between 2000 and 7000 to dailybonus
generateDailyBonus(2000, 7000); // Call with cafe bonus range
cafeOneBonus = dailybonus;
console.log("Cafe one bought");
money -= 2500;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(cafeOneButton);
instructionsElement.removeChild(stationOneMessage);
//fetch data from Overpass API around first station
buildingRadius += 150;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius},${firstCityCoords[1]},${firstCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
})
.catch((error) => {
console.error("Error fetching data:", error);
});
} else {
}
});
//add station two cafe buttons
const stationTwoMessage = document.createElement("p");
stationTwoMessage.textContent = `Open a cafe in ${secondCity} Station for £2,500.`;
instructionsElement.appendChild(stationTwoMessage);
// Add button element:
const cafeTwoButton = document.createElement("button");
cafeTwoButton.id = "trainbutton";
cafeTwoButton.textContent = "Buy Cafe";
// Append the button element to the instructions element:
instructionsElement.appendChild(cafeTwoButton);
//cafetwologic
cafeTwoButton.addEventListener("click", () => {
if (money >= 2500) {
// Generate a random number between 2000 and 7000 (both inclusive)
generateDailyBonus(2000, 7000); // Call with cafe bonus range
cafeTwoBonus = dailybonus;
console.log("Cafe two bought");
money -= 2500;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(cafeTwoButton);
instructionsElement.removeChild(stationTwoMessage);
//fetch data from Overpass API around second station
buildingRadius2 += 150;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius2},${secondCityCoords[1]},${secondCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
});
} else {
}
});
//buyhotel
const hoteloneMessage = document.createElement("p");
hoteloneMessage.textContent = `Open a hotel in ${firstCity} Station for £10,000.`;
instructionsElement.appendChild(hoteloneMessage);
// Add button element:
const hoteloneButton = document.createElement("button");
hoteloneButton.id = "trainbutton";
hoteloneButton.textContent = "Buy Hotel";
// Append the button element to the instructions element:
instructionsElement.appendChild(hoteloneButton);
//hotelonelogic
hoteloneButton.addEventListener("click", () => {
if (money >= 10000) {
generateDailyBonus(8000, 24000); // Call with hotel bonus range
hotelOneBonus = dailybonus;
money -= 10000;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(hoteloneButton);
instructionsElement.removeChild(hoteloneMessage);
//fetch data from Overpass API around first station
buildingRadius += 200;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius},${firstCityCoords[1]},${firstCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
});
} else {
}
});
const hoteltwoMessage = document.createElement("p");
hoteltwoMessage.textContent = `Open a hotel in ${secondCity} Station for £10,000.`;
instructionsElement.appendChild(hoteltwoMessage);
// Add button element:
const hoteltwoButton = document.createElement("button");
hoteltwoButton.id = "trainbutton";
hoteltwoButton.textContent = "Buy Hotel";
// Append the button element to the instructions element:
instructionsElement.appendChild(hoteltwoButton);
//hoteltwologic
hoteltwoButton.addEventListener("click", () => {
if (money >= 10000) {
generateDailyBonus(8000, 24000); // Call with hotel bonus range
hotelTwoBonus = dailybonus;
money -= 10000;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(hoteltwoButton);
instructionsElement.removeChild(hoteltwoMessage);
//fetch data from Overpass API around second station
buildingRadius2 += 200;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius2},${secondCityCoords[1]},${secondCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
});
} else {
}
});
// starttrain
const firstPoint = L.latLng(
firstCityCoords[1],
firstCityCoords[0]
);
const secondPoint = L.latLng(
secondCityCoords[1],
secondCityCoords[0]
);
const intervalDuration = 10; // milliseconds per frame
const distance = firstPoint.distanceTo(secondPoint);
const steps = ((distance / speed) * 1000) / intervalDuration; // animation steps derived from the distance and the current speed
const latStep = (secondPoint.lat - firstPoint.lat) / steps;
const lngStep = (secondPoint.lng - firstPoint.lng) / steps;
const marker = L.marker(firstPoint, {
icon: L.divIcon({
className: "circle-marker", // Add a CSS class for styling (optional)
html: `<b>${numberOfCarriages}</b>`, // Include the number inside a bold tag
iconSize: [20, 20], // Adjust iconSize as needed (optional)
}),
}).addTo(map);
// Assuming the marker variable is defined in this scope
const markerContent = marker.getElement().querySelector("b"); // Assuming bold tag for number
const moveMarker = (speed) => {
if (progress < steps) {
const newLat = firstPoint.lat + latStep * progress;
const newLng = firstPoint.lng + lngStep * progress;
const newLatLng = L.latLng(newLat, newLng);
marker.setLatLng(newLatLng); // Update the marker's position
progress++;
setTimeout(function () {
moveMarker(speed);
}, intervalDuration);
} else {
// Marker reaches the second point, update money
money +=
(Math.floor(Math.random() * (2000 - 1000 + 1)) + 1000) *
numberOfCarriages; // parenthesised so the £1000–£2000 fare scales per carriage
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Wait two seconds before animating back and call moveBackMarker recursively
setTimeout(() => {
moveBackMarker(speed);
}, 2000); // Wait for 2 seconds (2000 milliseconds)
}
};
const moveBackMarker = (speed) => {
// Corrected calculation for animating back from second point to first
if (progress > 0) {
const newLat =
secondPoint.lat - latStep * (steps - progress);
const newLng =
secondPoint.lng - lngStep * (steps - progress);
const newLatLng = L.latLng(newLat, newLng);
marker.setLatLng(newLatLng); // Update the marker's position
progress--;
setTimeout(function () {
moveBackMarker(speed);
}, intervalDuration);
} else {
console.log("Reached starting point again.");
// Add random number to money and update display
money +=
(Math.floor(Math.random() * (2000 - 1000 + 1)) + 1000) *
numberOfCarriages; // parenthesised so the £1000–£2000 fare scales per carriage
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Reset progress for next round trip
progress = 0;
// Recursively call moveMarker to start next animation cycle
moveMarker(speed);
}
};
moveMarker(speed); // Start the animation
});
});
}
else {
selectedMarker = e.target; // Set the clicked marker as selected
// Optional visual indication for selection (e.g., reduce opacity)
}
}
});
return circleMarker;
},
});
}
fetch("gb.geojson")
.then((response) => response.json())
.then((geojson) => {
L.geoJSON(geojson, {
fillColor: "none", // Style for polygon (empty fill)
weight: 1,
color: "#000",
opacity: 1,
fillOpacity: 0,
}).addTo(map);
})
.catch((error) => {
console.error("Error loading GeoJSON:", error);
});
fetch("cities.geojson")
.then((response) => response.json())
.then((geojson) => {
createCircleMarkers(geojson).addTo(map);
})
.catch((error) => {
console.error("Error loading GeoJSON:", error);
});
//24 hour clock display
const TIME_MULTIPLIER = 60 * 10; // 10 minutes = 600 seconds
// Function to format time in 24-hour format with leading zeros
function formatTime(hours, minutes) {
// Handle the case where minutes reach 60 (should display the next hour)
if (minutes === 60) {
hours++;
minutes = 0;
}
return `${hours.toString().padStart(2, "0")}:${minutes
.toString()
.padStart(2, "0")}`;
}
// Function to update the clock display and handle daily bonus
function updateClock() {
const currentTime = new Date();
// Simulate game time by multiplying actual time with multiplier
const gameTime = new Date(currentTime.getTime() * TIME_MULTIPLIER);
// Get hours and minutes in 24-hour format
let hours = gameTime.getHours();
// Get minutes and force them to the nearest multiple of 10 (ending in 0)
let minutes = Math.floor(gameTime.getMinutes() / 10) * 10;
// Format the time string with fixed minute handling
const formattedTime = formatTime(hours, minutes);
// Update the content of the div with the formatted time
document.getElementById("timedisplay").textContent = formattedTime;
// Check if it's midnight (00:00)
if (hours === 0 && minutes === 0) {
// Generate random daily bonus (modify as needed)
money += cafeOneBonus + cafeTwoBonus + hotelOneBonus + hotelTwoBonus;
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
console.log(
`Daily bonus of ${
cafeOneBonus + cafeTwoBonus + hotelOneBonus + hotelTwoBonus
} added! Total money: ${money}`
); // You can replace console.log with your desired action
}
}
// Call the updateClock function initially
updateClock();
// Update the clock every second to simulate smooth time progression
setInterval(updateClock, 1000);
'
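The branch-line feature the message asks for boils down to an affordability gate plus a polyline between an existing endpoint and the newly clicked marker. A framework-free sketch of that gate (`tryBuildBranch` and `BRANCH_COST` are illustrative names, not part of the original code); the Leaflet side would follow a successful result with `L.polyline(result.line).addTo(map)`:

```javascript
const BRANCH_COST = 50000;

// Pure decision logic: may a branch be built from an existing endpoint
// (firstpoint or secondpoint) to another marker, and what is the new balance?
function tryBuildBranch(money, fromLatLng, toLatLng) {
  if (money < BRANCH_COST) {
    return { built: false, money: money, line: null };
  }
  return {
    built: true,
    money: money - BRANCH_COST,
    line: [fromLatLng, toLatLng], // coordinates an L.polyline would use
  };
}
```

Keeping the money check in a pure function like this lets the gate be unit-tested without a map or DOM attached.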
|
552f4343347de46167d060ec654a767d
|
{
"intermediate": 0.45153433084487915,
"beginner": 0.4089650511741638,
"expert": 0.13950054347515106
}
|
46,546
|
I need to store my JWT access token in a cookie. How do I do it via Nest.js?
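Assuming the framework meant here is Nest.js (which sits on Express by default), a controller would typically call `res.cookie('access_token', token, { httpOnly: true, secure: true, sameSite: 'strict' })`; under the hood that just writes a `Set-Cookie` header. A dependency-free sketch of that header so the flags are visible (the cookie name and max-age are illustrative):

```javascript
// Build the Set-Cookie header value res.cookie() would emit for a JWT.
// HttpOnly hides the token from document.cookie (XSS mitigation),
// Secure restricts it to HTTPS, SameSite limits cross-site sending.
function buildJwtCookie(token, maxAgeSeconds = 900) {
  return [
    `access_token=${encodeURIComponent(token)}`,
    `Max-Age=${maxAgeSeconds}`,
    "Path=/",
    "HttpOnly",
    "Secure",
    "SameSite=Strict",
  ].join("; ");
}
```

In a real Nest controller you would inject the response (`@Res({ passthrough: true }) res: Response`) and call `res.cookie(...)` rather than building the header by hand.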
|
dcd22dba09fefa732f0d414525581d48
|
{
"intermediate": 0.5144830346107483,
"beginner": 0.19034647941589355,
"expert": 0.2951704263687134
}
|
46,547
|
You are a Python expert who can provide clear, concise, high-quality code.
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import io
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
total_loss = bce_loss + dice_loss  # avoid shadowing the enclosing function's name
return total_loss
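The combined BCE + Dice loss above can be sanity-checked in isolation. Below is a minimal pure-Python sketch of just the Dice term, working on flat lists instead of tensors; `dice_loss` here is an illustrative helper, not part of the original code:

```python
# Hypothetical, list-based re-implementation of the Dice term, for intuition only.
def dice_loss(y_true, y_pred, smooth=1e-5):
    # Element-wise overlap between ground truth and prediction
    intersection = sum(t * p for t, p in zip(y_true, y_pred))
    # Total "mass" of both masks
    union = sum(y_true) + sum(y_pred)
    # Perfect overlap -> loss near 0; no overlap -> loss near 1
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

perfect = dice_loss([1, 1, 0, 0], [1, 1, 0, 0])  # near 0
wrong = dice_loss([1, 1, 0, 0], [0, 0, 1, 1])    # near 1
```

The `smooth` constant keeps the ratio defined when both masks are empty, mirroring the TensorFlow version above.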
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is deprecated in recent Keras versions
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(
X_train, {"classification_output": y_train_class, "segmentation_output": y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}")
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask")
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
# file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with a multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select an image and perform inference. Help me make the classification plot dark-themed, more professional and high-tech. Show code.
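One possible direction (a sketch, not a final answer): matplotlib ships a built-in `dark_background` style sheet that restyles the bar chart globally. The cyan accent colour and the dummy probabilities below are arbitrary assumptions, stand-ins for the real `class_probabilities` from the model:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

labels = ["bridge", "excess", "good", "insuff", "no"]
class_probabilities = [0.05, 0.10, 0.70, 0.10, 0.05]  # dummy values for illustration

plt.style.use("dark_background")  # built-in dark style sheet
fig, ax = plt.subplots(figsize=(5, 4))
x_pos = list(range(len(labels)))
ax.bar(x_pos, class_probabilities, color="#00e5ff", alpha=0.85)
ax.set_xticks(x_pos)
ax.set_xticklabels(labels, rotation=45, ha="right", color="#cccccc")
ax.set_ylabel("Probability", color="#cccccc")
ax.set_ylim([0, 1])
ax.set_title("Classification Output", color="#00e5ff")
ax.grid(axis="y", color="#444444", linewidth=0.5)  # subtle horizontal grid lines
for i, v in enumerate(class_probabilities):
    ax.text(i, v + 0.02, f"{v:.2f}", color="#00e5ff", ha="center")
fig.tight_layout()
```

The same `plt.style.use("dark_background")` call, placed before the `plt.subplots` in `predict_image`, would apply the theme to the existing three-panel figure as well.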
|
7562d23a49ae8f18e42e1ba5d412d90e
|
{
"intermediate": 0.35522809624671936,
"beginner": 0.4343754053115845,
"expert": 0.2103964388370514
}
|
46,548
|
in this javascript for leaflet.js if money<=30000 I want them to be able to add another marker to the polyline. - var money = 100000;
var numberOfCarriages = 1;
var speed = 60;
var dailybonus = 0;
var selectedMarker = null; // Variable to store the selected marker
const map = L.map("map").setView([54.2231637, -1.9381623], 6);
// Add custom zoom control to the map with position set to "topright"
const customZoomControl = L.control.zoom({ position: "topright" }).addTo(map);
// Remove the default zoom control from the map
map.removeControl(map.zoomControl);
let clickedPoints = [];
let isLineDrawn = false;
let marker; // Declare the marker variable
let progress = 0;
let cafeOneBonus = 0;
let cafeTwoBonus = 0;
let hotelOneBonus = 0;
let hotelTwoBonus = 0;
let buildingRadius = 0;
let buildingRadius2 = 0;
// Function to create circle markers with click functionality
function createCircleMarkers(geojson) {
return L.geoJSON(geojson, {
pointToLayer: function (feature, latlng) {
const circleMarker = L.circleMarker(latlng, {
radius: 4,
fillColor: "#ff7800",
color: "#000",
weight: 0.2,
opacity: 1,
fillOpacity: 0.8,
});
// Attach the feature to the circle marker
circleMarker.feature = feature;
circleMarker.on("mouseover", function () {
this.bindPopup(feature.properties.city).openPopup();
});
circleMarker.on("click", function (e) {
if (!isLineDrawn && selectedMarker !== e.target) {
clickedPoints.push(e.target); // Push the circle marker with attached feature
if (clickedPoints.length === 2) {
const firstCityCoords =
clickedPoints[0].feature.geometry.coordinates;
const secondCityCoords =
clickedPoints[1].feature.geometry.coordinates;
const polyline = L.polyline(
clickedPoints.map((p) => p.getLatLng())
).addTo(map);
const firstCity = clickedPoints[0].feature.properties.city;
const secondCity = clickedPoints[1].feature.properties.city;
clickedPoints = [];
isLineDrawn = true;
// Remove click event listener after a line has been drawn
map.off("click");
// Set the map bounds to show the area with the polyline
map.fitBounds(polyline.getBounds());
money = money - 50000; // Subtract 50000 from money
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`; // Assuming money is a number
moneyDisplay.textContent = moneyString;
const instructionsElement = document.getElementById("instructions");
// Clear any existing content in the instructions element:
instructionsElement.innerHTML = "";
// Create separate paragraph elements:
const congratulationsParagraph = document.createElement("p");
congratulationsParagraph.textContent = `Congratulations you have built your first train line from ${firstCity} to ${secondCity}!`;
const costsParagraph = document.createElement("p");
costsParagraph.textContent = `Your construction costs were £50,000. You have £${money} remaining.`;
const buyTrainParagraph = document.createElement("p");
buyTrainParagraph.textContent = "You now need to buy a train.";
const newTrainParagraph = document.createElement("p");
newTrainParagraph.textContent =
"At this time you can only afford to buy the train engine the Sleeping Lion. The Sleeping Lion has a traveling speed of 60 miles per hour. It can pull four carriages, which means your train will have a capacity of around 120 seated passengers.";
const traincost = document.createElement("p");
traincost.textContent = `The Sleeping Lion will cost you £30,000 to purchase. Do you wish to buy the Sleeping Lion?`;
// Append paragraphs to the instructions element:
instructionsElement.appendChild(congratulationsParagraph);
instructionsElement.appendChild(costsParagraph);
instructionsElement.appendChild(buyTrainParagraph);
instructionsElement.appendChild(newTrainParagraph);
instructionsElement.appendChild(traincost);
// Add button element:
const buyButton = document.createElement("button");
buyButton.id = "buybutton";
buyButton.textContent = "Buy Train";
// Append the button element to the instructions element:
instructionsElement.appendChild(buyButton);
//buybutton event listener
document
.getElementById("buybutton")
.addEventListener("click", function () {
// Check if you have enough money before purchase
money = money - 30000; // Subtract 30000 from money
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Update instructions content after successful purchase
instructionsElement.innerHTML = ""; // Clear previous content
const successMessage = document.createElement("p");
successMessage.textContent = `You now have a train line from ${firstCity} to ${secondCity} and a train! Press the button below to begin operations.`;
instructionsElement.appendChild(successMessage);
// Add button element:
const trainButton = document.createElement("button");
trainButton.id = "trainbutton";
trainButton.textContent = "Start Train";
// Append the button element to the instructions element:
instructionsElement.appendChild(trainButton);
trainButton.addEventListener("click", function () {
console.log("Train Started");
//emptyinstructions add improvement buttons
instructionsElement.innerHTML = ""; // Clear previous content
//randomgeneration of dailybonus
function generateDailyBonus(minBonus, maxBonus) {
const randomNumber =
Math.floor(Math.random() * (maxBonus - minBonus + 1)) +
minBonus;
dailybonus += randomNumber;
console.log(`Daily bonus of ${randomNumber} added!`);
}
//buy carriages
//add carriages button
const carriageButton = document.createElement("button");
carriageButton.id = "carriagebutton"; // unique id; element ids must not repeat
carriageButton.textContent = "Buy Train Carriage";
const carriageMessage = document.createElement("p");
carriageMessage.textContent = `Buy another passenger carriage for your train for £20,000`;
instructionsElement.appendChild(carriageMessage);
// Append the button element to the instructions element:
instructionsElement.appendChild(carriageButton);
//cariagebutton logic
carriageButton.addEventListener("click", () => {
console.log("Carriage Bought");
// Check if enough money is available
if (money >= 20000) {
// Check if maximum number of carriages reached
if (numberOfCarriages < 4) {
numberOfCarriages++;
money -= 20000; // Subtract 20000 from money
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Update marker content using the previously retrieved reference
markerContent.textContent = numberOfCarriages;
} else {
console.log(
"Maximum number of carriages reached! You can't buy more."
);
instructionsElement.removeChild(carriageButton);
instructionsElement.removeChild(carriageMessage);
}
}
});
//buy station cafes
//add station one cafe button
const stationOneMessage = document.createElement("p");
stationOneMessage.textContent = `Open a cafe in ${firstCity} Station for £2,500.`;
instructionsElement.appendChild(stationOneMessage);
// Add button element:
const cafeOneButton = document.createElement("button");
cafeOneButton.id = "cafeonebutton";
cafeOneButton.textContent = "Buy Cafe";
// Append the button element to the instructions element:
instructionsElement.appendChild(cafeOneButton);
//cafeonelogic
cafeOneButton.addEventListener("click", () => {
if (money >= 2500) {
// add a random number between 2000 and 7000 to dailbonus
generateDailyBonus(2000, 7000); // Call with cafe bonus range
cafeOneBonus = dailybonus;
console.log("Cafe one bought");
money -= 2500;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(cafeOneButton);
instructionsElement.removeChild(stationOneMessage);
//fetch data from Overpass API around first station
buildingRadius += 150;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius},${firstCityCoords[1]},${firstCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
})
.catch((error) => {
console.error("Error fetching data:", error);
});
} else {
}
});
//add station two cafe buttons
const stationTwoMessage = document.createElement("p");
stationTwoMessage.textContent = `Open a cafe in ${secondCity} Station for £2,500.`;
instructionsElement.appendChild(stationTwoMessage);
// Add button element:
const cafeTwoButton = document.createElement("button");
cafeTwoButton.id = "cafetwobutton";
cafeTwoButton.textContent = "Buy Cafe";
// Append the button element to the instructions element:
instructionsElement.appendChild(cafeTwoButton);
//cafetwologic
cafeTwoButton.addEventListener("click", () => {
if (money >= 2500) {
// Generate a random number between 2000 (inclusive) and 7000 (exclusive)
generateDailyBonus(2000, 7000); // Call with cafe bonus range
cafeTwoBonus = dailybonus;
console.log("Cafe two bought");
money -= 2500;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(cafeTwoButton);
instructionsElement.removeChild(stationTwoMessage);
//fetch data from Overpass API around first station
buildingRadius2 += 150;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius2},${secondCityCoords[1]},${secondCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
});
} else {
}
});
//buyhotel
const hoteloneMessage = document.createElement("p");
hoteloneMessage.textContent = `Open a hotel in ${firstCity} Station for £10,000.`;
instructionsElement.appendChild(hoteloneMessage);
// Add button element:
const hoteloneButton = document.createElement("button");
hoteloneButton.id = "hotelonebutton";
hoteloneButton.textContent = "Buy Hotel";
// Append the button element to the instructions element:
instructionsElement.appendChild(hoteloneButton);
//hotelonelogic
hoteloneButton.addEventListener("click", () => {
if (money >= 10000) {
generateDailyBonus(8000, 24000); // Call with cafe bonus range
hotelOneBonus = dailybonus;
money -= 10000;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(hoteloneButton);
instructionsElement.removeChild(hoteloneMessage);
//fetch data from Overpass API around first station
buildingRadius += 200;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius},${firstCityCoords[1]},${firstCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
});
} else {
}
});
const hoteltwoMessage = document.createElement("p");
hoteltwoMessage.textContent = `Open a hotel in ${secondCity} Station for £10,000.`;
instructionsElement.appendChild(hoteltwoMessage);
// Add button element:
const hoteltwoButton = document.createElement("button");
hoteltwoButton.id = "hoteltwobutton";
hoteltwoButton.textContent = "Buy Hotel";
// Append the button element to the instructions element:
instructionsElement.appendChild(hoteltwoButton);
//hotelonelogic
hoteltwoButton.addEventListener("click", () => {
if (money >= 10000) {
generateDailyBonus(8000, 24000); // Call with cafe bonus range
hotelTwoBonus = dailybonus;
money -= 10000;
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
instructionsElement.removeChild(hoteltwoButton);
instructionsElement.removeChild(hoteltwoMessage);
//fetch data from Overpass API around first station
buildingRadius2 += 200;
const overpassQuery = `
[out:json];
way["building"](around:${buildingRadius2},${secondCityCoords[1]},${secondCityCoords[0]});
out body;
>;
out skel qt;
`;
fetch("https://overpass-api.de/api/interpreter", {
method: "POST",
headers: {
"Content-Type": "application/x-www-form-urlencoded",
},
body: "data=" + encodeURIComponent(overpassQuery),
})
.then((response) => response.json())
.then((data) => {
// Process the data returned by the Overpass API
data.elements.forEach((element) => {
if (element.type === "way") {
// Extract coordinates from the way element
const coordinates = element.nodes.map(
(nodeId) => {
const node = data.elements.find(
(node) => node.id === nodeId
);
return [node.lat, node.lon];
}
);
// Create a polyline for the road
const polyline = L.polyline(coordinates, {
color: "#333333",
weight: 1,
}).addTo(map);
}
});
});
} else {
}
});
// starttrain
const firstPoint = L.latLng(
firstCityCoords[1],
firstCityCoords[0]
);
const secondPoint = L.latLng(
secondCityCoords[1],
secondCityCoords[0]
);
const intervalDuration = 10; // milliseconds per frame
const distance = firstPoint.distanceTo(secondPoint);
const steps = ((distance / speed) * 1000) / intervalDuration; // uses the global speed (60)
const latStep = (secondPoint.lat - firstPoint.lat) / steps;
const lngStep = (secondPoint.lng - firstPoint.lng) / steps;
const marker = L.marker(firstPoint, {
icon: L.divIcon({
className: "circle-marker", // Add a CSS class for styling (optional)
html: `<b>${numberOfCarriages}</b>`, // Include the number inside a bold tag
iconSize: [20, 20], // Adjust iconSize as needed (optional)
}),
}).addTo(map);
// Assuming the marker variable is defined in this scope
const markerContent = marker.getElement().querySelector("b"); // Assuming bold tag for number
const moveMarker = (speed) => {
if (progress < steps) {
const newLat = firstPoint.lat + latStep * progress;
const newLng = firstPoint.lng + lngStep * progress;
const newLatLng = L.latLng(newLat, newLng);
marker.setLatLng(newLatLng); // Update the marker's position
progress++;
setTimeout(function () {
moveMarker(speed);
}, intervalDuration);
} else {
// Marker reaches the second point, update money
money +=
(Math.floor(Math.random() * (2000 - 1000 + 1)) + 1000) *
numberOfCarriages; // parenthesised so the whole 1000-2000 bonus scales per carriage
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Wait two seconds before animating back and call moveBackMarker recursively
setTimeout(() => {
moveBackMarker(speed);
}, 2000); // Wait for 2 seconds (2000 milliseconds)
}
};
const moveBackMarker = (speed) => {
// Corrected calculation for animating back from second point to first
if (progress > 0) {
const newLat =
secondPoint.lat - latStep * (steps - progress);
const newLng =
secondPoint.lng - lngStep * (steps - progress);
const newLatLng = L.latLng(newLat, newLng);
marker.setLatLng(newLatLng); // Update the marker's position
progress--;
setTimeout(function () {
moveBackMarker(speed);
}, intervalDuration);
} else {
console.log("Reached starting point again.");
// Add random number to money and update display
money +=
(Math.floor(Math.random() * (2000 - 1000 + 1)) + 1000) *
numberOfCarriages; // parenthesised so the whole 1000-2000 bonus scales per carriage
const moneyDisplay =
document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
// Reset progress for next round trip
progress = 0;
// Recursively call moveMarker to start next animation cycle
moveMarker(speed);
}
};
moveMarker(speed); // Start the animation
});
});
}
else {
selectedMarker = e.target; // Set the clicked marker as selected
// Optional visual indication for selection (e.g., reduce opacity)
}
}
});
return circleMarker;
},
});
}
fetch("gb.geojson")
.then((response) => response.json())
.then((geojson) => {
L.geoJSON(geojson, {
fillColor: "none", // Style for polygon (empty fill)
weight: 1,
color: "#000",
opacity: 1,
fillOpacity: 0,
}).addTo(map);
})
.catch((error) => {
console.error("Error loading GeoJSON:", error);
});
fetch("cities.geojson")
.then((response) => response.json())
.then((geojson) => {
createCircleMarkers(geojson).addTo(map);
})
.catch((error) => {
console.error("Error loading GeoJSON:", error);
});
//24 hour clock display
const TIME_MULTIPLIER = 60 * 10; // 10 minutes = 600 seconds
// Function to format time in 24-hour format with leading zeros
function formatTime(hours, minutes) {
// Handle the case where minutes reach 60 (should display the next hour)
if (minutes === 60) {
hours++;
minutes = 0;
}
return `${hours.toString().padStart(2, "0")}:${minutes
.toString()
.padStart(2, "0")}`;
}
// Function to update the clock display and handle daily bonus
function updateClock() {
const currentTime = new Date();
// Simulate game time by multiplying actual time with multiplier
const gameTime = new Date(currentTime.getTime() * TIME_MULTIPLIER);
// Get hours and minutes in 24-hour format
let hours = gameTime.getHours();
// Get minutes and force them to the nearest multiple of 10 (ending in 0)
let minutes = Math.floor(gameTime.getMinutes() / 10) * 10;
// Format the time string with fixed minute handling
const formattedTime = formatTime(hours, minutes);
// Update the content of the div with the formatted time
document.getElementById("timedisplay").textContent = formattedTime;
// Check if it's midnight (00:00)
if (hours === 0 && minutes === 0) {
// Generate random daily bonus (modify as needed)
money += cafeOneBonus + cafeTwoBonus + hotelOneBonus + hotelTwoBonus;
const moneyDisplay = document.getElementById("moneydisplay");
const moneyString = `£${money}`;
moneyDisplay.textContent = moneyString;
console.log(
`Daily bonus of ${
cafeOneBonus + cafeTwoBonus + hotelOneBonus + hotelTwoBonus
} added! Total money: ${money}`
); // You can replace console.log with your desired action
}
}
// Call the updateClock function initially
updateClock();
// Update the clock every second to simulate smooth time progression
setInterval(updateClock, 1000);
|
c03eab842688081beb36f29652fd5453
|
{
"intermediate": 0.3641330599784851,
"beginner": 0.4247852861881256,
"expert": 0.21108172833919525
}
|
46,549
|
I am making a C++ SDL based game engine, and I am finishing some tasks before moving into a new stage. Now I want to replace every raw pointer from my original code which was based of SDL examples into smart pointers. Let's start. First I need to know when to use a unique_ptr vs a shared_ptr one, for example, my font class uses a TTF_Font* member, how would I know if this is unique or shared?
|
c4f9c5f81ceaae27442b2da276c56461
|
{
"intermediate": 0.5557474493980408,
"beginner": 0.28327304124832153,
"expert": 0.16097944974899292
}
|
46,550
|
Write a python script that produces a random 5 by 5 grid, where each place in the grid can be either empty or occupied by any number between 1 and 4 inclusive. Empty places must be represented as X. The only condition that must be satisfied is that there should be no row or column all the places of which are the same number. But there should be a possibility of swapping any two places after which the grid should contain exactly one row or column all the places of which have the same number whatever it may be.
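A starting sketch for the question above, covering only the first constraint (no row or column made of five identical numbers) via rejection sampling; checking that exactly one uniform line becomes reachable by a single swap would need an extra pass over all position pairs. The names `random_grid` and `has_uniform_line` are my own:

```python
import random

VALUES = ["X", "1", "2", "3", "4"]

def has_uniform_line(grid):
    # A line is "uniform" if all five cells hold the same number ("X" does not count)
    lines = [list(row) for row in grid] + [list(col) for col in zip(*grid)]
    return any(line[0] != "X" and len(set(line)) == 1 for line in lines)

def random_grid(rng=random):
    # Rejection sampling: redraw until no row/column is five identical numbers
    while True:
        grid = [[rng.choice(VALUES) for _ in range(5)] for _ in range(5)]
        if not has_uniform_line(grid):
            return grid
```

A grid can be printed with `"\n".join(" ".join(row) for row in random_grid())`; the rejection loop terminates quickly because uniform lines are rare in random grids.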
|
143d69593f18ba2b64b92545aa4eeda2
|
{
"intermediate": 0.339432954788208,
"beginner": 0.16989915072917938,
"expert": 0.4906679391860962
}
|
46,551
|
import asyncio, socket, pickle, threading
from kivy.clock import Clock
from kivy.uix.gridlayout import GridLayout
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.scrollview import ScrollView
from kivy.uix.boxlayout import BoxLayout
from kivymd.app import MDApp
from discord.ext import commands
import discord
class DiscordGUI(BoxLayout):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.orientation = 'vertical'
self.padding = [10, 10, 10, 10]
intents = discord.Intents.default()
intents.typing = False
intents.presences = False
self.bot = commands.Bot(command_prefix="!", intents=intents)
self.channels = []
self.selected_channel = None
self.match_channel = None
self.bot_token_entry = TextInput(hint_text="Bot Token:", multiline=False)
self.server_id_entry = TextInput(hint_text="Server ID:", multiline=False)
self.add_widget(self.bot_token_entry)
self.add_widget(self.server_id_entry)
self.add_widget(Button(text="Start Discord Bot", on_press=self.run_bot))
self.fetch_button = Button(text="Fetch Channels", on_press=self.fetch_channels, disabled=True)
self.add_widget(self.fetch_button)
self.listen_button = Button(text="Listen", on_press=self.listen_server, disabled=True)
self.add_widget(self.listen_button)
self.channel_layout = GridLayout(cols=4, size_hint_y=None)
self.channel_layout.bind(minimum_height=self.channel_layout.setter('height'))
self.channel_buttons = ScrollView()
self.channel_buttons.add_widget(self.channel_layout)
self.add_widget(self.channel_buttons)
def run_bot(self, instance):
loop = asyncio.get_event_loop()
self.bot_task = loop.create_task(self.bot.start(self.bot_token_entry.text.strip()))
Clock.schedule_interval(lambda dt: loop.run_until_complete(asyncio.sleep(0)), 0.01)
self.fetch_button.disabled = False
def fetch_channels(self, instance):
loop = asyncio.get_event_loop()
loop.create_task(self._fetch_channels())
async def _fetch_channels(self):
guild_id = int(self.server_id_entry.text.strip())
guild = self.bot.get_guild(guild_id)
if guild:
self.channels = [(channel.name, channel.id) for channel in guild.text_channels]
Clock.schedule_once(lambda dt: self.update_buttons())
def update_buttons(self):
for widget in self.channel_layout.children[:]:
if isinstance(widget, Button) and widget.text in [channel[0] for channel in self.channels]:
self.channel_layout.remove_widget(widget)
for channel in self.channels:
button = Button(text=channel[0], size_hint_y=None, height=50)
button.bind(on_press=self.toggle_button)
self.channel_layout.add_widget(button)
self.listen_button.disabled = False
def toggle_button(self, instance):
if self.selected_channel:
self.selected_channel.background_color = (1, 1, 1, 1)
instance.background_color = (0, 0, 0, 1)
self.selected_channel = instance
self.match_channel = next(channel[1] for channel in self.channels if channel[0] == instance.text)
def listen_server(self, instance):
if self.match_channel:
listen_thread = threading.Thread(target=self._listen_server)
listen_thread.start()
def _listen_server(self):
server_address = ('localhost', 12345)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.connect(server_address)
while True:
data = sock.recv(1024)
if not data:
break
shared_dict = pickle.loads(data)
Clock.schedule_once(lambda dt: self.send_message_to_discord(shared_dict))
def send_message_to_discord(self, match_data):
if hasattr(self.selected_channel, 'text'):
channel_id = self.match_channel
channel = self.bot.get_channel(channel_id)
if channel:
asyncio.run_coroutine_threadsafe(channel.send(str(match_data)), self.bot.loop)
class DiscordApp(MDApp):
def build(self):
self.gui = DiscordGUI()
return self.gui
def on_stop(self):
self.gui.bot_task.cancel()
if __name__ == "__main__":
DiscordApp().run()
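The `run_bot` method above keeps the asyncio loop alive by pumping it from Kivy's `Clock`. An alternative pattern, sketched here with only the standard library (no Kivy or discord.py), runs a dedicated loop in a daemon thread and submits coroutines to it via `run_coroutine_threadsafe`; the class name `BackgroundLoop` is my own:

```python
import asyncio
import threading

class BackgroundLoop:
    """Owns an asyncio event loop running in its own daemon thread."""

    def __init__(self):
        self.loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()

    def submit(self, coro):
        # Thread-safe: schedule a coroutine on the background loop,
        # returning a concurrent.futures.Future for its result
        return asyncio.run_coroutine_threadsafe(coro, self.loop)

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)
        self._thread.join()
```

With this, something like `channel.send(...)` could be submitted from the socket-listener thread as `bg.submit(channel.send(str(match_data)))` without touching Kivy's clock.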
|
ad6638ac9c3457225fb8d1ae768aa8d6
|
{
"intermediate": 0.25978973507881165,
"beginner": 0.5837438702583313,
"expert": 0.15646637976169586
}
|
46,552
|
Task exception was never retrieved
future: <Task finished name='Task-47' coro=<Dispatcher._process_polling_updates() done, defined at C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\dispatcher.py:407> exception=IntegrityError('NOT NULL constraint failed: users.username')>
Traceback (most recent call last):
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 415, in _process_polling_updates
for responses in itertools.chain.from_iterable(await self.process_updates(updates, fast)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 235, in process_updates
return await asyncio.gather(*tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 256, in process_update
return await self.message_handlers.notify(update.message)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\main.py", line 302, in process_password
await db.execute("INSERT OR REPLACE INTO users (id, login, password) VALUES (?, ?, ?)", (user_id, login, password))
File "C:\YandexGPT\.venv\Lib\site-packages\aiosqlite\core.py", line 193, in execute
cursor = await self._execute(self._conn.execute, sql, parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiosqlite\core.py", line 132, in _execute
return await future
^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiosqlite\core.py", line 115, in run
result = function()
^^^^^^^^^^
sqlite3.IntegrityError: NOT NULL constraint failed: users.username
Here is the bot code:
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
from datetime import datetime
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
login = State()
password = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
login TEXT,
password TEXT,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
nome TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# Escaping for Telegram MarkdownV2
def mdv2(text: str) -> str:
escape_chars = [
"_", "[", "]", "(", ")", "~", "`", ">",
"#", "+", "-", "=", "|", "{", "}", ".", "!"
]
for char in escape_chars:
text = text.replace(char, f"\\{char}")
text = text.replace("**", "*").replace('"', '“')
return text
# Date validation
def validate_dates(birthdate_str: str, deathdate_str: str) -> bool:
try:
birthdate = datetime.strptime(birthdate_str, "%d.%m.%Y")
deathdate = datetime.strptime(deathdate_str, "%d.%m.%Y")
except ValueError:
return False
if deathdate < birthdate:
return False
return True
# Callbacks
change_action_cb = CallbackData('change', 'action')
# MENU BUTTON
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
await Form.choosing_action.set()
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
text = "Вопросы:\n\n"
if questions:
text += "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
text += "\n\nВведите номер вопроса, который хотите изменить:"
await bot.send_message(callback_query.from_user.id, text,reply_markup=menu)
else:
await bot.send_message(callback_query.from_user.id,"Вопросы отсутствуют.", reply_markup=menu)
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=[Form.choosing_action, ])
async def check_login(message: types.Message):
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT login, password FROM users WHERE id = ?', (user_id,))
user_login_info = await cursor.fetchone()
if not user_login_info or user_login_info[0] is None or user_login_info[1] is None:
# If the login or password is missing, start requesting them
await message.answer("Введите Ваш логин:")
await Form.login.set()
else:
# If credentials are present, proceed to bio generation
await set_next_question(user_id)
@dp.message_handler(state=Form.login)
async def process_login(message: types.Message, state: FSMContext):
# Store the login in temporary FSM storage
async with state.proxy() as data:
data['login'] = message.text
await message.answer("Введите Ваш пароль:")
await Form.password.set()
@dp.message_handler(state=Form.password)
async def process_password(message: types.Message, state: FSMContext):
user_id = message.from_user.id
async with state.proxy() as data:
login = data['login']
password = message.text
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": login,
"password": password,
"device": "bot-v0.0.1"
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=payload, headers={'Accept': 'application/json','Content-Type': 'application/json;charset=UTF-8'}) as response:
if response.status == 200:
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE users SET login = ?, password = ? WHERE id = ?", (login, password, user_id))
await db.commit()
await message.answer("Логин и пароль успешно сохранены.")
await set_next_question(user_id)
else:
await bot.send_message(user_id,"Логин и/или пароль неверные. Пожалуйста, попробуйте еще раз.")
await check_login(message)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
text = message.text
question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# GENERATION
def parse_yandexgpt(answer_text: str) -> dict:
# The model sometimes emits name/birthday_at/died_at instead of nome/birthdayat/diedat;
# normalise such keys to the public_info column names
aliases = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
tags = ["name", "nome", "surname", "patronym", "birthdayat", "birthday_at", "diedat", "died_at",
"epitaph", "placeOfBirth", "placeOfDeath", "children", "wifehusband", "education", "occupation",
"awards", "title1", "biography1", "title2", "biography2", "title3", "biography3", "conclusion"]
# A value runs from "tag:" to the next "tag:" line (or end of text), so multi-line
# biographies and values containing colons are captured intact
pattern = re.compile(rf'^({"|".join(tags)}):[ \t]*(.*?)(?=^\w+:|\Z)', re.DOTALL | re.MULTILINE)
return {aliases.get(key, key): value.strip() for key, value in pattern.findall(answer_text)}
async def save_result(user_id: int, answer_dict: dict):
async with aiosqlite.connect('base.db') as db:
await db.execute("DELETE FROM public_info WHERE user_id=?", (user_id, ))
columns = ", ".join(answer_dict.keys())
placeholders = ", ".join(["?"] * len(answer_dict))
values = list(answer_dict.values())
print(columns)
print(placeholders)
print(values)
await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
await db.commit()
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
nome: {}
surname: {}
patronym: {}
birthdayat: {}
diedat: {}
epitaph: {} # не больше 300 символов
placeOfBirth: {}
placeOfDeath: {}
children: {}
wifehusband: {} # Это супруг или супруга
education: {}
occupation: {} # Род деятельности человека
awards: {}
title1: {}
biography1: {}
title2: {}
biography2: {}
title3: {}
biography3: {}
conclusion: {}
В поле nome должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (nome), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph - это эпитафия, сделай так, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
answer_data = parse_yandexgpt(answer)
birthdate_str = answer_data.get("birthdayat", "")
deathdate_str = answer_data.get("diedat", "")
if birthdate_str and deathdate_str and not validate_dates(birthdate_str, deathdate_str):
markup = InlineKeyboardMarkup()
markup.add(InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change")))
await bot.send_message(user_id, "Некорректные даты: дата смерти не может быть раньше даты рождения или введены неверные даты.", reply_markup=markup)
else:
try:
await save_result(user_id, answer_data)
biography_output = mdv2(f"*Биография*\n\n*{answer_data.get('title1', '')}*: {answer_data.get('biography1', '')}\n*{answer_data.get('title2', '')}*: {answer_data.get('biography2', '')}\n*{answer_data.get('title3', '')}*: {answer_data.get('biography3', '')}\n*Заключение:*{answer_data.get('conclusion', '')}\n\n*Эпитафия:* {answer_data.get('epitaph', '')}")
await bot.send_message(user_id, biography_output, parse_mode="MarkdownV2",reply_markup=InlineKeyboardMarkup().add(InlineKeyboardButton("Обновить страницу",callback_data="update_page")))
except:
await bot.send_message(user_id, "Не удалось сохранить данные. Пожалуйста, нажмите Сгенерировать еще раз")
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# PAGE UPDATE
@dp.callback_query_handler(lambda c: c.data == 'update_page', state="*")
async def process_update_page(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT nome, surname, patronym, birthdayat, diedat, epitaph, placeOfBirth, placeOfDeath, children, wifehusband, education, occupation, awards, title1, biography1, title2, biography2, title3, biography3, conclusion FROM public_info WHERE user_id=?", (user_id,))
data = await cursor.fetchone()
if data:
data_structure = {
"nome": data[0],
"surname": data[1],
"patronym": data[2],
"birthday_at": data[3]+"T21:00:00.000000Z",
"died_at": data[4]+"T21:00:00.000000Z",
"epitaph": data[5],
"placeOfBirth": data[6],
"placeOfDeath": data[7],
"children": data[8],
"wifehusband": data[9],
"education": data[10],
"occupation": data[11],
"awards": data[12],
"title1" : data[13],
"biography1": data[14],
"title2": data[15],
"biography2": data[16],
"title3": data[17],
"biography3": data[18],
"conclusion": data[19],
"day_b": data[3].split('.')[0],
"month_b": data[3].split('.')[1],
"year_b": int(data[3].split('.')[2]),
"day_d": data[4].split('.')[0],
"month_d": data[4].split('.')[1],
"year_d": int(data[4].split('.')[2]),
}
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": "team1@hackathon.ru",
"password": "PA9dEa7x",
"device": "bot-v0.0.1"
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=payload, headers={'Accept': 'application/json','Content-Type': 'application/json;charset=UTF-8'}) as response:
if response.status == 200:
data = await response.json()
access_token = data.get('access_token')
print(f'Access Token: {access_token}')
else:
print(f'Error: {response.status}')
async with aiohttp.ClientSession() as session:
await update_page(session, access_token, data_structure)
await bot.send_message(user_id, "Страница обновлена")
await Form.choosing_action.set()
async def update_page(session, access_token, data_structure):
url = 'https://mc.dev.rand.agency/api/page/64577087'
print(data_structure["nome"])
payload = {
"id": 8734,
"name": data_structure["nome"],
"surname": data_structure["surname"],
"patronym": data_structure["patronym"],
"birthday_at": data_structure["birthday_at"],
"died_at": data_structure["died_at"],
"epitaph": data_structure["epitaph"],
"start": {
"day": data_structure["day_b"],
"month": data_structure["month_b"],
"year": data_structure["year_b"]
},
"end": {
"day": data_structure["day_d"],
"month": data_structure["month_d"],
"year": data_structure["year_d"]
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": data_structure["title1"],
"description": data_structure["biography1"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": data_structure["title2"],
"description": data_structure["biography2"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": data_structure["title3"],
"description": data_structure["biography3"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": "Заключение",
"description": data_structure["conclusion"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": data_structure["placeOfBirth"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30958,
"page_id": 8734,
"title": data_structure["placeOfDeath"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30959,
"page_id": 8734,
"title": data_structure["children"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": data_structure["wifehusband"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": data_structure["education"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": data_structure["occupation"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": data_structure["awards"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
# ADMIN PANEL
# BACK button
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# Keyboard
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
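For the parsing question this row asks about, a line-based parser with a key-alias map is more robust than one large regex, since YandexGPT replies with `name`/`birthdayat` while the schema expects `nome`/`birthdayat` and values can span several lines. A minimal sketch, assuming the key list from the bot's `public_info` columns (`parse_reply` is a hypothetical helper name):

```python
import re

ALIASES = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
KEYS = {"nome", "surname", "patronym", "birthdayat", "diedat", "epitaph",
        "placeOfBirth", "placeOfDeath", "children", "wifehusband", "education",
        "occupation", "awards", "title1", "biography1", "title2", "biography2",
        "title3", "biography3", "conclusion"}

def parse_reply(text: str) -> dict:
    data, last = {}, None
    for line in text.splitlines():
        # Only Latin identifiers can be keys, so Russian prose lines never match.
        m = re.match(r"\s*([A-Za-z_][A-Za-z_0-9]*)\s*:\s*(.*)", line)
        key = ALIASES.get(m.group(1), m.group(1)) if m else None
        if key in KEYS:
            data[key] = m.group(2).strip()
            last = key
        elif last and line.strip():
            data[last] += " " + line.strip()  # continuation of a multi-line value
    return data

sample = "Вот возможная сводка:\nname: Иван\nsurname: Иванов\nbirthdayat: 97.08.1979"
print(parse_reply(sample))  # {'nome': 'Иван', 'surname': 'Иванов', 'birthdayat': '97.08.1979'}
```

Preamble lines like «Вот возможная сводка…» are skipped automatically because they do not start with a Latin key.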
Task exception was never retrieved
future: <Task finished name='Task-47' coro=<Dispatcher._process_polling_updates() done, defined at C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\dispatcher.py:407> exception=IntegrityError('NOT NULL constraint failed: users.username')>
Traceback (most recent call last):
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 415, in _process_polling_updates
for responses in itertools.chain.from_iterable(await self.process_updates(updates, fast)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 235, in process_updates
return await asyncio.gather(*tasks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\dispatcher.py", line 256, in process_update
return await self.message_handlers.notify(update.message)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiogram\dispatcher\handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\main.py", line 302, in process_password
await db.execute("INSERT OR REPLACE INTO users (id, login, password) VALUES (?, ?, ?)", (user_id, login, password))
File "C:\YandexGPT\.venv\Lib\site-packages\aiosqlite\core.py", line 193, in execute
cursor = await self._execute(self._conn.execute, sql, parameters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiosqlite\core.py", line 132, in _execute
return await future
^^^^^^^^^^^^
File "C:\YandexGPT\.venv\Lib\site-packages\aiosqlite\core.py", line 115, in run
result = function()
^^^^^^^^^^
sqlite3.IntegrityError: NOT NULL constraint failed: users.username
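The traceback above happens because `INSERT OR REPLACE` deletes the conflicting row and inserts a brand-new one in which every unlisted column, including the NOT NULL `username`, is unset. A minimal standalone reproduction with an in-memory table mirroring the bot's `users` schema (the `alice` data is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT NOT NULL, "
            "login TEXT, password TEXT)")
con.execute("INSERT INTO users (id, username) VALUES (1, 'alice')")

# INSERT OR REPLACE rewrites the entire row: username becomes NULL and fails.
try:
    con.execute("INSERT OR REPLACE INTO users (id, login, password) VALUES (1, 'a@b', 'pw')")
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed: users.username

# An UPSERT only touches the listed columns on conflict, so username survives.
con.execute("INSERT INTO users (id, username, login, password) VALUES (1, 'alice', 'a@b', 'pw') "
            "ON CONFLICT(id) DO UPDATE SET login = excluded.login, password = excluded.password")
print(con.execute("SELECT username, login FROM users WHERE id = 1").fetchone())  # ('alice', 'a@b')
```

Since the bot always creates the row in `add_user` on /start, a plain `UPDATE users SET login = ?, password = ? WHERE id = ?` also works.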
The problem occurs at await db.execute("INSERT OR REPLACE INTO users (id, login, password) VALUES (?, ?, ?)", (user_id, login, password)). Here is the bot code:
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
from datetime import datetime
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
login = State()
password = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
login TEXT,
password TEXT,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
nome TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# Escaping for Telegram MarkdownV2
def mdv2(text: str) -> str:
    escape_chars = [
        "_", "[", "]", "(", ")", "~", "`", ">",
        "#", "+", "-", "=", "|", "{", "}", ".", "!"
    ]
    for char in escape_chars:
        text = text.replace(char, f"\\{char}")
    text = text.replace("**", "*").replace('"', '“')
    return text
# Date validation
def validate_dates(birthdate_str: str, deathdate_str: str) -> bool:
try:
birthdate = datetime.strptime(birthdate_str, "%d.%m.%Y")
deathdate = datetime.strptime(deathdate_str, "%d.%m.%Y")
except ValueError:
return False
if deathdate < birthdate:
return False
return True
# Callbacks
change_action_cb = CallbackData('change', 'action')
# MENU button
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
await Form.choosing_action.set()
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
text = "Вопросы:\n\n"
if questions:
text += "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
text += "\n\nВведите номер вопроса, который хотите изменить:"
await bot.send_message(callback_query.from_user.id, text,reply_markup=menu)
else:
await bot.send_message(callback_query.from_user.id,"Вопросы отсутствуют.", reply_markup=menu)
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=[Form.choosing_action, ])
async def check_login(message: types.Message):
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT login, password FROM users WHERE id = ?', (user_id,))
user_login_info = await cursor.fetchone()
if not user_login_info or user_login_info[0] is None or user_login_info[1] is None:
            # If the login or password is missing, start asking for them
await message.answer("Введите Ваш логин:")
await Form.login.set()
else:
            # Credentials are present; proceed to bio generation
await set_next_question(user_id)
@dp.message_handler(state=Form.login)
async def process_login(message: types.Message, state: FSMContext):
    # Store the login in temporary FSM storage
async with state.proxy() as data:
data['login'] = message.text
await message.answer("Введите Ваш пароль:")
await Form.password.set()
@dp.message_handler(state=Form.password)
async def process_password(message: types.Message, state: FSMContext):
user_id = message.from_user.id
async with state.proxy() as data:
login = data['login']
password = message.text
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": login,
"password": password,
"device": "bot-v0.0.1"
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=payload, headers={'Accept': 'application/json','Content-Type': 'application/json;charset=UTF-8'}) as response:
if response.status == 200:
async with aiosqlite.connect('base.db') as db:
await db.execute("INSERT OR REPLACE INTO users (id, login, password) VALUES (?, ?, ?)", (user_id, login, password))
await db.commit()
await message.answer("Логин и пароль успешно сохранены.")
await set_next_question(user_id)
else:
await bot.send_message(user_id,"Логин и/или пароль неверные. Пожалуйста, попробуйте еще раз.")
await check_login(message)
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
    text = message.text
    if not text.isdigit():
        await message.answer("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
        return
    question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# GENERATION
def parse_yandexgpt(answer_text: str) -> dict:
    # Line-based parsing; aliases map the model's key variants onto the public_info columns.
    keys = {"nome", "surname", "patronym", "birthdayat", "diedat", "epitaph", "placeOfBirth", "placeOfDeath",
            "children", "wifehusband", "education", "occupation", "awards", "title1", "biography1",
            "title2", "biography2", "title3", "biography3", "conclusion"}
    aliases = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
    data, last_key = {}, None
    for line in answer_text.splitlines():
        m = re.match(r"\s*([A-Za-z_][A-Za-z_0-9]*)\s*:\s*(.*)", line)
        key = aliases.get(m.group(1), m.group(1)) if m else None
        if key in keys:
            data[key] = m.group(2).strip()
            last_key = key
        elif last_key and line.strip():  # continuation of a multi-line value
            data[last_key] += " " + line.strip()
    return data
async def save_result(user_id: int, answer_dict: dict):
    # Drop "null" placeholders so missing fields end up as NULL in the table.
    answer_dict = {k: v for k, v in answer_dict.items() if v and v.lower() != "null"}
    async with aiosqlite.connect('base.db') as db:
        await db.execute("DELETE FROM public_info WHERE user_id=?", (user_id,))
        columns = ", ".join(answer_dict.keys())
        placeholders = ", ".join(["?"] * len(answer_dict))
        values = list(answer_dict.values())
        await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
        await db.commit()
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """"Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3 ) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
nome: {}
surname: {}
patronym: {}
birthday_at: {}
died_at: {}
epitaph: {} # не больше 300 символов
placeOfBirth: {}
placeOfDeath: {}
children: {}
wifehusband: {} # Это супруг или супруга
education: {}
occupation: {} # Род деятельности человека
awards: {}
title1: {}
biography1: {}
title2: {}
biography2: {}
title3: {}
biography3: {}
conclusion: {}
В поле nome должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (nome), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph - это эпитафия, сделай так, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
answer_data = parse_yandexgpt(answer)
birthdate_str = answer_data.get("birthdayat", "")
deathdate_str = answer_data.get("diedat", "")
if birthdate_str and deathdate_str and not validate_dates(birthdate_str, deathdate_str):
markup = InlineKeyboardMarkup()
markup.add(InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change")))
await bot.send_message(user_id, "Некорректные даты: дата смерти не может быть раньше даты рождения или введены неверные даты.", reply_markup=markup)
else:
try:
await save_result(user_id, answer_data)
biography_output = mdv2(f"*Биография*\n\n*{answer_data.get('title1', '')}*: {answer_data.get('biography1', '')}\n*{answer_data.get('title2', '')}*: {answer_data.get('biography2', '')}\n*{answer_data.get('title3', '')}*: {answer_data.get('biography3', '')}\n*Заключение:*{answer_data.get('conclusion', '')}\n\n*Эпитафия:* {answer_data.get('epitaph', '')}")
await bot.send_message(user_id, biography_output, parse_mode="MarkdownV2",reply_markup=InlineKeyboardMarkup().add(InlineKeyboardButton("Обновить страницу",callback_data="update_page")))
                        except Exception:
await bot.send_message(user_id, "Не удалось сохранить данные. Пожалуйста, нажмите Сгенерировать еще раз")
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# PAGE UPDATE
@dp.callback_query_handler(lambda c: c.data == 'update_page', state="*")
async def process_update_page(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT nome, surname, patronym, birthdayat, diedat, epitaph, placeOfBirth, placeOfDeath, children, wifehusband, education, occupation, awards, title1, biography1, title2, biography2, title3, biography3, conclusion FROM public_info WHERE user_id=?", (user_id,))
data = await cursor.fetchone()
if data:
data_structure = {
"nome": data[0],
"surname": data[1],
"patronym": data[2],
"birthday_at": data[3]+"T21:00:00.000000Z",
"died_at": data[4]+"T21:00:00.000000Z",
"epitaph": data[5],
"placeOfBirth": data[6],
"placeOfDeath": data[7],
"children": data[8],
"wifehusband": data[9],
"education": data[10],
"occupation": data[11],
"awards": data[12],
"title1" : data[13],
"biography1": data[14],
"title2": data[15],
"biography2": data[16],
"title3": data[17],
"biography3": data[18],
"conclusion": data[19],
"day_b": data[3].split('.')[0],
"month_b": data[3].split('.')[1],
"year_b": int(data[3].split('.')[2]),
"day_d": data[4].split('.')[0],
"month_d": data[4].split('.')[1],
"year_d": int(data[4].split('.')[2]),
}
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": "team1@hackathon.ru",
"password": "PA9dEa7x",
"device": "bot-v0.0.1"
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=payload, headers={'Accept': 'application/json','Content-Type': 'application/json;charset=UTF-8'}) as response:
if response.status == 200:
data = await response.json()
access_token = data.get('access_token')
print(f'Access Token: {access_token}')
else:
print(f'Error: {response.status}')
async with aiohttp.ClientSession() as session:
await update_page(session, access_token, data_structure)
await bot.send_message(user_id, "Страница обновлена")
await Form.choosing_action.set()
async def update_page(session, access_token, data_structure):
url = 'https://mc.dev.rand.agency/api/page/64577087'
print(data_structure["nome"])
payload = {
"id": 8734,
"name": data_structure["nome"],
"surname": data_structure["surname"],
"patronym": data_structure["patronym"],
"birthday_at": data_structure["birthday_at"],
"died_at": data_structure["died_at"],
"epitaph": data_structure["epitaph"],
"start": {
"day": data_structure["day_b"],
"month": data_structure["month_b"],
"year": data_structure["year_b"]
},
"end": {
"day": data_structure["day_d"],
"month": data_structure["month_d"],
"year": data_structure["year_d"]
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": data_structure["title1"],
"description": data_structure["biography1"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": data_structure["title2"],
"description": data_structure["biography2"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": data_structure["title3"],
"description": data_structure["biography3"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": "Заключение",
"description": data_structure["conclusion"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": data_structure["placeOfBirth"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30958,
"page_id": 8734,
"title": data_structure["placeOfDeath"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30959,
"page_id": 8734,
"title": data_structure["children"],
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": data_structure["wifehusband"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": data_structure["education"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": data_structure["occupation"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": data_structure["awards"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
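The dd.mm.yyyy strings stored in public_info have to be turned into the ISO-style timestamps the payload above carries. A minimal standalone sketch of that conversion (the T21:00 suffix simply mirrors the bot's payload; the sample date is illustrative):

```python
from datetime import datetime

def to_api_timestamp(date_ddmmyyyy: str) -> str:
    # Convert "dd.mm.yyyy" from public_info into the timestamp format used in the payload
    d = datetime.strptime(date_ddmmyyyy, "%d.%m.%Y")
    return d.strftime("%Y-%m-%dT21:00:00.000000Z")

print(to_api_timestamp("07.08.1979"))  # 1979-08-07T21:00:00.000000Z
```

Using strptime also rejects garbage like "97.08.1979" with a ValueError instead of silently sending a malformed date.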
# АДМИН-ПАНЕЛЬ
# КНОПКА НАЗАД
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# КЛАВА
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
|
a10f79429fa13eeb115ad5db06229667
|
{
"intermediate": 0.29830652475357056,
"beginner": 0.3785805106163025,
"expert": 0.32311302423477173
}
|
46,554
|
fix the error in the code below
//@version=5
indicator(shorttitle="BB", title="Bollinger Bands", overlay=true, timeframe="", timeframe_gaps=true)
// Bollinger Bands Inputs
length = input.int(20, minval=1)
maType = input.string("SMA", "Basis MA Type", options = ["SMA", "EMA", "SMMA (RMA)", "WMA", "VWMA"])
src = input(close, title="Source")
mult = input.float(2.0, minval=0.001, maxval=50, title="StdDev")
// Bollinger Bands Width with Squeeze Alert Inputs
comp_len = input(125, title="Comparison Period for Squeeze", minval=1)
// Bollinger Bands Calculations
ma(source, length, _type) =>
switch _type
"SMA" => ta.sma(source, length)
"EMA" => ta.ema(source, length)
"SMMA (RMA)" => ta.rma(source, length)
"WMA" => ta.wma(source, length)
"VWMA" => ta.vwma(source, length) // Ensure VWMA line is indented the same as SMA
basis = ma(src, length, maType)
dev = mult * ta.stdev(src, length)
upper = basis + dev
lower = basis - dev
offset = input.int(0, "Offset", minval = -500, maxval = 500, display = display.data_window)
// Bollinger Bands Plots
plot(basis, "Basis", color=#FF6D00, offset = offset)
p1 = plot(upper, "Upper", color=#2962FF, offset = offset)
p2 = plot(lower, "Lower", color=#2962FF, offset = offset)
fill(p1, p2, title = "Background", color=color.rgb(33, 150, 243, 95))
// Bollinger Bands Width with Squeeze Alert Calculations
bbwidth() =>
(upper - lower) / basis
bbw = bbwidth()
squeeze = bbw == ta.lowest(bbw, comp_len)
// Bollinger Bands Width with Squeeze Alert Plots (Pine v5 has no window.new(); with overlay=true these draw on the main chart)
bgcolor(squeeze ? color.new(color.fuchsia, 75) : na)
plot(bbw, "BBW", color=#138484)
hline(0)
// Alert for Squeeze
alertcondition(squeeze, "Bollinger squeeze ", "Bollinger squeeze ")
|
6e47e37c5dbbad05e21964d645f840d5
|
{
"intermediate": 0.3383235037326813,
"beginner": 0.3010649085044861,
"expert": 0.36061155796051025
}
|
46,555
|
hi bro
|
a5912d063eedb5a2b510aa67d2fd5193
|
{
"intermediate": 0.34584465622901917,
"beginner": 0.2482806295156479,
"expert": 0.40587475895881653
}
|
46,556
|
Привет! Помоги сделать так, чтобы в update_page емейл и пароль брались из БД. Вот код бота:
from aiogram import Bot, Dispatcher, executor, types
from aiogram.dispatcher import FSMContext
from aiogram.dispatcher.filters.state import State, StatesGroup
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.types import ReplyKeyboardMarkup, KeyboardButton, InlineKeyboardButton, InlineKeyboardMarkup
from aiogram.utils.callback_data import CallbackData
import aiosqlite
import asyncio
import aiohttp
import json
import re
from datetime import datetime
API_TOKEN = '6996318383:AAEcQfdQhzEg3L_6DKQVidJEn46Wb27Sy4g'
ADMINS = [989037374, 1515567046]
bot = Bot(token=API_TOKEN)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
class Form(StatesGroup):
choosing_action = State()
login = State()
password = State()
answer_question = State()
class lk(StatesGroup):
personal_account = State()
edit_answer = State()
new_answer = State()
edit_answer_select = State()
edit_answer_cb = State()
new_answer_cb = State()
class admin(StatesGroup):
admin_panel = State()
select_question_to_delete = State()
select_question_to_edit = State()
edit_question_text = State()
new_question = State()
async def create_db():
async with aiosqlite.connect('base.db') as db:
await db.execute('''CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL,
login TEXT,
password TEXT,
last_question_idx INTEGER DEFAULT 0)''')
await db.execute('''CREATE TABLE IF NOT EXISTS questions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT NOT NULL,
order_num INTEGER NOT NULL)''')
await db.execute('''CREATE TABLE IF NOT EXISTS answers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
question TEXT,
answer TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.execute('''CREATE TABLE IF NOT EXISTS public_info (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER,
nome TEXT,
surname TEXT,
patronym TEXT,
birthdayat TEXT,
diedat TEXT,
epitaph TEXT,
placeOfBirth TEXT,
placeOfDeath TEXT,
children TEXT,
wifehusband TEXT,
education TEXT,
occupation TEXT,
awards TEXT,
title1 TEXT,
biography1 TEXT,
title2 TEXT,
biography2 TEXT,
title3 TEXT,
biography3 TEXT,
conclusion TEXT,
FOREIGN KEY (user_id) REFERENCES users (id))''')
await db.commit()
# Обработка под MarkdownV2
def mdv2(text: str) -> str:
escape_chars = [
"_", "[", "]", "(", ")", "~", "`", ">",
"#", "+", "-", "=", "|", "{", "}", ".", "!"
]
for char in escape_chars:
text = text.replace(char, f"\\{char}")
text = text.replace("**", "*").replace('"', '“')
return text
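The MarkdownV2 escaping above can be exercised standalone; a self-contained restatement for a quick check (sample strings are illustrative):

```python
def mdv2(text: str) -> str:
    # Escape each MarkdownV2 special character with a backslash
    for char in "_[]()~`>#+-=|{}.!":
        text = text.replace(char, f"\\{char}")
    # Collapse double asterisks to single and swap straight quotes, as the bot does
    return text.replace("**", "*").replace('"', '“')

print(mdv2("Иванов (1979). **Биография**"))
```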
# Проверка дат
def validate_dates(birthdate_str: str, deathdate_str: str) -> bool:
try:
birthdate = datetime.strptime(birthdate_str, "%d.%m.%Y")
deathdate = datetime.strptime(deathdate_str, "%d.%m.%Y")
except ValueError:
return False
if deathdate < birthdate:
return False
return True
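The same dd.mm.yyyy validation can be checked in isolation; a sketch with illustrative dates:

```python
from datetime import datetime

def validate_dates(birth: str, death: str) -> bool:
    # Both strings must parse as dd.mm.yyyy, and death may not precede birth
    try:
        b = datetime.strptime(birth, "%d.%m.%Y")
        d = datetime.strptime(death, "%d.%m.%Y")
    except ValueError:
        return False
    return d >= b

print(validate_dates("07.08.1979", "15.03.1980"))  # True
print(validate_dates("97.08.1979", "15.03.1980"))  # False: no day 97
```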
# калбэки
change_action_cb = CallbackData('change', 'action')
# КНОПКА МЕНЮ
menu = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
menu.add(KeyboardButton("В меню"))
async def add_user(user_id: int, username: str):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT id FROM users WHERE id = ?', (user_id,))
user_exists = await cursor.fetchone()
if user_exists:
await db.execute('UPDATE users SET username = ? WHERE id = ?', (username, user_id))
else:
await db.execute('INSERT INTO users (id, username) VALUES (?, ?)', (user_id, username))
await db.commit()
@dp.message_handler(commands="start", state="*")
async def cmd_start(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
user_id = message.from_user.id
username = message.from_user.username or "unknown"
await add_user(user_id, username)
if user_id not in ADMINS:
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Выберите действие:", reply_markup=markup)
await Form.choosing_action.set()
@dp.message_handler(lambda message: message.text == "В меню", state="*")
async def back_to_menu(message: types.Message):
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Сгенерировать био"))
markup.add(KeyboardButton("Личный кабинет"))
if message.from_user.id not in ADMINS:
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
else:
markup.add(KeyboardButton("Админ-панель"))
await message.answer("Вернули вас в меню. Выберите действие", reply_markup=markup)
await Form.choosing_action.set()
async def save_answer(user_id: int, question: str, answer: str, question_idx: int):
async with aiosqlite.connect('base.db') as db:
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question, answer))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (question_idx, user_id))
await db.commit()
async def set_next_question(user_id: int):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
last_question_idx = result[0] if result else 0
next_question_idx = last_question_idx + 1
question_cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (next_question_idx,))
question_text = await question_cursor.fetchone()
if question_text:
await bot.send_message(user_id, question_text[0], reply_markup=menu)
await Form.answer_question.set()
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (next_question_idx, user_id))
await db.commit()
else:
answers_text = ""
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question} - {answer}\n"
markup = InlineKeyboardMarkup(
inline_keyboard=[
[InlineKeyboardButton(text="Сгенерировать", callback_data=change_action_cb.new(action="generate"))],
[InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change"))],
[InlineKeyboardButton(text="Заполнить заново",
callback_data=change_action_cb.new(action="refill"))],
]
)
await bot.send_message(user_id, f"Вот ваши ответы:\n\n{answers_text}", reply_markup=markup)
await dp.current_state(user=user_id).reset_state(with_data=False)
await Form.choosing_action.set()
@dp.callback_query_handler(change_action_cb.filter(action="change"), state="*")
async def change_answer(callback_query: types.CallbackQuery, state: FSMContext):
await bot.answer_callback_query(callback_query.id)
await lk.edit_answer.set()
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
text = "Вопросы:\n\n"
if questions:
text += "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
text += "\n\nВведите номер вопроса, который хотите изменить:"
await bot.send_message(callback_query.from_user.id, text,reply_markup=menu)
else:
await bot.send_message(callback_query.from_user.id,"Вопросы отсутствуют.", reply_markup=menu)
@dp.message_handler(state=lk.edit_answer_cb)
async def enter_question_number(message: types.Message, state: FSMContext):
question_number = message.text
if not question_number.isdigit():
await message.reply("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
await state.update_data(question_number=int(question_number))
await lk.new_answer.set()
await message.answer("Введите новый ответ:")
@dp.callback_query_handler(change_action_cb.filter(action="refill"), state="*")
async def process_refill(callback_query: types.CallbackQuery, callback_data: dict):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да, начать заново", callback_data="confirm_refill"))
await bot.send_message(user_id, "Вы уверены, что хотите начать заново? Ваши текущие ответы будут удалены.", reply_markup=markup)
@dp.message_handler(state=lk.new_answer_cb)
async def update_answer(message: types.Message, state: FSMContext):
new_answer_text = message.text
user_data = await state.get_data()
question_number = user_data['question_number']
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?',
(new_answer_text, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer_text}", reply_markup=menu)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Сгенерировать био", state=[Form.choosing_action, ])
async def check_login(message: types.Message):
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT login, password FROM users WHERE id = ?', (user_id,))
user_login_info = await cursor.fetchone()
if not user_login_info or user_login_info[0] is None or user_login_info[1] is None:
# Если логин или пароль отсутствуют, начинаем процесс их запроса
await message.answer("Введите Ваш логин:")
await Form.login.set()
else:
# Если данные присутствуют, переходим к генерации био
await set_next_question(user_id)
@dp.message_handler(state=Form.login)
async def process_login(message: types.Message, state: FSMContext):
# Сохраняем логин во временное хранилище
async with state.proxy() as data:
data['login'] = message.text
await message.answer("Введите Ваш пароль:")
await Form.password.set()
@dp.message_handler(state=Form.password)
async def process_password(message: types.Message, state: FSMContext):
user_id = message.from_user.id
async with state.proxy() as data:
login = data['login']
password = message.text
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": login,
"password": password,
"device": "bot-v0.0.1"
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=payload, headers={'Accept': 'application/json','Content-Type': 'application/json;charset=UTF-8'}) as response:
if response.status == 200:
async with aiosqlite.connect('base.db') as db:
await db.execute("INSERT OR REPLACE INTO users (id, username, login, password) VALUES (?, ?, ?, ?)", (user_id, message.from_user.username, login, password))
await db.commit()
await message.answer("Логин и пароль успешно сохранены.")
await set_next_question(user_id)
else:
await bot.send_message(user_id,"Логин и/или пароль неверные. Пожалуйста, попробуйте еще раз.")
await check_login(message)
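Since the login and password are persisted in the users table here, other handlers (for instance a page-update call) can read them back instead of hardcoding credentials. A minimal synchronous sqlite3 sketch of that lookup, with illustrative values (the bot itself uses aiosqlite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, login TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?, ?)", (1, "team1@hackathon.ru", "secret"))

def get_credentials(conn, user_id):
    # Read stored credentials back from the users table; None if the user never logged in
    return conn.execute("SELECT login, password FROM users WHERE id = ?", (user_id,)).fetchone()

print(get_credentials(conn, 1))
```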
@dp.message_handler(state=Form.answer_question)
async def process_question_answer(message: types.Message, state: FSMContext):
user_id = message.from_user.id
answer_text = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT last_question_idx FROM users WHERE id = ?', (user_id,))
result = await cursor.fetchone()
current_question_idx = result[0] if result else 0
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (current_question_idx,))
question = await cursor.fetchone()
if question:
question_text = question[0]
await db.execute('INSERT INTO answers (user_id, question, answer) VALUES (?, ?, ?)',
(user_id, question_text, answer_text))
await db.execute('UPDATE users SET last_question_idx = ? WHERE id = ?', (current_question_idx, user_id))
await db.commit()
else:
await message.answer("Произошла ошибка при сохранении вашего ответа.")
await set_next_question(user_id)
@dp.message_handler(lambda message: message.text == "Личный кабинет", state=Form.choosing_action)
async def personal_account(message: types.Message):
user_id = message.from_user.id
answers_text = "Личный кабинет\n\nВаши ответы:\n"
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question, answer FROM answers WHERE user_id=? ORDER BY id', (user_id,))
answers = await cursor.fetchall()
for idx, (question, answer) in enumerate(answers, start=1):
answers_text += f"{idx}. {question}: {answer}\n"
if answers_text == "Личный кабинет\n\nВаши ответы:\n":
answers_text = "Личный кабинет\n\nВы еще не отвечали на вопросы. Пожалуйста, нажмите «В меню» и выберите «Сгенерировать био», чтобы ответить на вопросы"
await message.answer(answers_text, reply_markup=menu)
else:
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
await message.answer(answers_text, reply_markup=markup)
await lk.personal_account.set()
@dp.message_handler(lambda message: message.text == "Изменить ответ", state=lk.personal_account)
async def change_answer(message: types.Message):
await message.answer("Введите номер вопроса, на который хотите изменить ответ:",reply_markup=menu)
await lk.edit_answer.set()
@dp.message_handler(state=lk.edit_answer)
async def process_question_number(message: types.Message, state: FSMContext):
text = message.text
if not text.isdigit():
await message.answer("Пожалуйста, введите номер вопроса цифрами. Попробуйте снова:")
return
question_number = int(text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await state.update_data(question=question_text[0], question_number=question_number)
await message.answer("Введите новый ответ:")
await lk.new_answer.set()
else:
await message.answer(f"Вопроса под номером {question_number} не существует.")
@dp.message_handler(state=lk.new_answer)
async def process_new_answer(message: types.Message, state: FSMContext):
user_data = await state.get_data()
question_number = user_data['question_number']
new_answer = message.text
markup = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=True)
markup.add(KeyboardButton("Изменить ответ"))
markup.add(KeyboardButton("Заполнить заново"))
markup.add(KeyboardButton("В меню"))
user_id = message.from_user.id
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute('SELECT question FROM questions WHERE order_num = ?', (question_number,))
question_text = await cursor.fetchone()
if question_text:
await db.execute('UPDATE answers SET answer = ? WHERE user_id = ? AND question = ?', (new_answer, user_id, question_text[0]))
await db.commit()
await message.answer(f"Ваш ответ на вопрос изменен на: {new_answer}", reply_markup=markup)
else:
await message.answer(f"Проблема при редактировании ответа, вопрос не найден.")
await state.finish()
await personal_account(message)
@dp.message_handler(lambda message: message.text == "Заполнить заново", state=lk.personal_account)
async def refill_form(message: types.Message):
markup = InlineKeyboardMarkup().add(InlineKeyboardButton("Да", callback_data="confirm_refill"))
await message.answer("Вы уверены, что хотите начать заново? Все текущие ответы будут удалены.", reply_markup=markup)
@dp.callback_query_handler(lambda c: c.data == 'confirm_refill', state="*")
async def process_refill(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
async with aiosqlite.connect('base.db') as db:
await db.execute('DELETE FROM answers WHERE user_id=?', (user_id,))
await db.commit()
await db.execute('UPDATE users SET last_question_idx = 0 WHERE id = ?', (user_id,))
await db.commit()
state = dp.current_state(user=user_id)
await state.reset_state(with_data=False)
await bot.answer_callback_query(callback_query.id)
await bot.send_message(user_id, "Ваши ответы удалены.")
await cmd_start(callback_query.message)
# ГЕНЕРАЦИЯ
def parse_yandexgpt(answer_text: str) -> dict:
# Модель может вернуть "name" вместо "nome" и "birthday_at"/"died_at" с подчёркиванием; приводим такие ключи к именам колонок public_info
aliases = {"name": "nome", "birthday_at": "birthdayat", "died_at": "diedat"}
keys = {"nome", "surname", "patronym", "birthdayat", "diedat", "epitaph", "placeOfBirth", "placeOfDeath", "children", "wifehusband", "education", "occupation", "awards", "title1", "biography1", "title2", "biography2", "title3", "biography3", "conclusion"}
data = {}
# Построчный разбор по первому двоеточию: двоеточия внутри значений не ломают парсинг
for line in answer_text.splitlines():
key, sep, value = line.partition(":")
key = aliases.get(key.strip(), key.strip())
if sep and key in keys and value.strip():
data[key] = value.strip()
return data
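A minimal, self-contained sketch of extracting key: value pairs from a reply shaped like the example at the top of the file (the sample text is illustrative). Splitting each line on its first colon keeps colons inside values intact and drops preamble lines such as "Вот возможная сводка:":

```python
def parse_reply(answer_text: str) -> dict:
    # Split each line on the first colon so colons inside values survive
    data = {}
    for line in answer_text.splitlines():
        key, sep, value = line.partition(":")
        if sep and value.strip():
            data[key.strip()] = value.strip()
    return data

sample = "Вот возможная сводка:\nname: Иван\nsurname: Иванов\noccupation: инженер-конструктор"
print(parse_reply(sample))
```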
async def save_result(user_id: int, answer_dict: dict):
async with aiosqlite.connect('base.db') as db:
await db.execute("DELETE FROM public_info WHERE user_id=?", (user_id, ))
columns = ", ".join(answer_dict.keys())
placeholders = ", ".join(["?"] * len(answer_dict))
values = list(answer_dict.values())
await db.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})", [user_id, *values])
await db.commit()
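The dynamic column/placeholder construction used in save_result can be sketched with the standard sqlite3 module (table shape and values here are illustrative; the bot uses aiosqlite):

```python
import sqlite3

answer_dict = {"nome": "Иван", "surname": "Иванов"}  # illustrative parsed fields
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE public_info (user_id INTEGER, nome TEXT, surname TEXT)")
columns = ", ".join(answer_dict)                  # "nome, surname"
placeholders = ", ".join("?" * len(answer_dict))  # "?, ?"
conn.execute(f"INSERT INTO public_info (user_id, {columns}) VALUES (?, {placeholders})",
             [42, *answer_dict.values()])
print(conn.execute("SELECT * FROM public_info").fetchone())  # (42, 'Иван', 'Иванов')
```

Binding the values with "?" placeholders keeps the statement safe even though the column list is built dynamically (the keys are whitelisted by the parser, not user-typed).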
class YandexGPT:
@staticmethod
async def generate(prompt: str, apikey: str, sa_id: str, user_id : str):
url = 'https://llm.api.cloud.yandex.net/foundationModels/v1/completion'
headers = {
'Content-Type': 'application/json',
'Authorization': f'Api-Key {apikey}'
}
data = {
"modelUri": f"gpt://{sa_id}/yandexgpt-lite/latest",
"completionOptions": {
"stream": False,
"temperature": 0.4,
"maxTokens": "3000"
},
"messages": [
{
"role": "system",
"text": """"Твоя задача - создать информационную сводку и биографию (title1, biography1, title2, biography2, title3, biography3 ) в соответствии с ответами на вопросы пользователя (пишутся в формате вопрос - ответ).
Не пиши ничего кроме этой сводки и НЕ ОТХОДИ ОТ ШАБЛОНА. Если информации данных в каком-то пункте нет, пиши ТОЛЬКО “null“, а не что-либо другое. Создай следующую сводку:
nome: {}
surname: {}
patronym: {}
birthday_at: {}
died_at: {}
epitaph: {} # не больше 300 символов
placeOfBirth: {}
placeOfDeath: {}
children: {}
wifehusband: {} # Это супруг или супруга
education: {}
occupation: {} # Род деятельности человека
awards: {}
title1: {}
biography1: {}
title2: {}
biography2: {}
title3: {}
biography3: {}
conclusion: {}
В поле nome должно быть ТОЛЬКО имя, без фамилии и отчества. Не путай имя (nome), фамилию (surname) и отчество (patronym) - они должны стоять на правильных местах.
epitaph - это эпитафия, сделай так, чтобы она соответствовала жизненному пути человека.
Не придумывай в биографии ничего от себя, распиши подробнее, но только ту информацию, которая есть от пользователя.
Все даты пиши в формате dd.mm.yyyy.
В conclusion пиши заключение БИОГРАФИИ, а не что либо другое.
"""
},
{
"role": "user",
"text": prompt
}
]
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as response:
response_data = await response.json()
try:
answer = response_data['result']['alternatives'][0]['message']['text']
answer = answer.replace("*","").replace("_","")
answer_data = parse_yandexgpt(answer)
birthdate_str = answer_data.get("birthdayat", "")
deathdate_str = answer_data.get("diedat", "")
if birthdate_str and deathdate_str and not validate_dates(birthdate_str, deathdate_str):
markup = InlineKeyboardMarkup()
markup.add(InlineKeyboardButton(text="Изменить ответ", callback_data=change_action_cb.new(action="change")))
await bot.send_message(user_id, "Некорректные даты: дата смерти не может быть раньше даты рождения или введены неверные даты.", reply_markup=markup)
else:
try:
await save_result(user_id, answer_data)
biography_output = mdv2(f"*Биография*\n\n*{answer_data.get('title1', '')}*: {answer_data.get('biography1', '')}\n*{answer_data.get('title2', '')}*: {answer_data.get('biography2', '')}\n*{answer_data.get('title3', '')}*: {answer_data.get('biography3', '')}\n*Заключение:*{answer_data.get('conclusion', '')}\n\n*Эпитафия:* {answer_data.get('epitaph', '')}")
await bot.send_message(user_id, biography_output, parse_mode="MarkdownV2",reply_markup=InlineKeyboardMarkup().add(InlineKeyboardButton("Обновить страницу",callback_data="update_page")))
except Exception:
await bot.send_message(user_id, "Не удалось сохранить данные. Пожалуйста, нажмите Сгенерировать еще раз")
except KeyError as e:
await bot.send_message(user_id, "Не удалось получить ответ от сервера. Проверьте переданные данные и попробуйте еще раз.")
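Neither `parse_yandexgpt` nor `validate_dates` appears in this fragment. A minimal sketch of both, under the assumptions that the model replies with `key: value` lines as in the sample answer, that the DB column for the first name is `nome` while the model emits `name`, and that valid dates follow `dd.mm.yyyy`:

```python
import re
from datetime import datetime

# Hypothetical sketch of parse_yandexgpt / validate_dates (not the bot's actual
# implementation). Keys the model is expected to emit, per the sample answer:
KNOWN_KEYS = {
    "name", "surname", "patronym", "birthdayat", "diedat", "epitaph",
    "placeOfBirth", "placeOfDeath", "children", "wifehusband", "education",
    "occupation", "awards", "title1", "biography1", "title2", "biography2",
    "title3", "biography3", "conclusion",
}

def parse_yandexgpt(answer: str) -> dict:
    data = {}
    current_key = None
    for line in answer.splitlines():
        match = re.match(r"^\s*([A-Za-z][A-Za-z0-9]*)\s*:\s*(.*)$", line)
        if match and match.group(1) in KNOWN_KEYS:
            # Map the model's "name" onto the DB column "nome".
            current_key = "nome" if match.group(1) == "name" else match.group(1)
            data[current_key] = match.group(2).strip()
        elif current_key is not None and line.strip():
            # Multi-line values (long biographies) continue the previous key.
            data[current_key] += " " + line.strip()
    return data

def validate_dates(birth: str, death: str) -> bool:
    # True only when both dates parse as dd.mm.yyyy and death is not earlier
    # than birth; vague values like "1980-е" fail the parse and return False.
    try:
        b = datetime.strptime(birth, "%d.%m.%Y")
        d = datetime.strptime(death, "%d.%m.%Y")
    except ValueError:
        return False
    return d >= b
```

With this shape, preamble lines such as «Вот возможная сводка и биографии:» are ignored, and malformed dates from the model (e.g. «97.08.1979») trigger the "Некорректные даты" branch above.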
@dp.callback_query_handler(change_action_cb.filter(action="generate"), state="*")
async def process_generate(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
prompt = ""
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question, answer FROM answers WHERE user_id=?", (user_id,))
answers = await cursor.fetchall()
for question, answer in answers:
prompt += f"\n{question} - {answer}"
api_key = "AQVN1J4sCxYR98rj-tVppyp6gXQthbdmYvmgtO7a"
sa_id = "b1g5og37bgh1ghh2s2qc"
await YandexGPT.generate(prompt, api_key, sa_id, user_id)
# PAGE UPDATE
@dp.callback_query_handler(lambda c: c.data == 'update_page', state="*")
async def process_update_page(callback_query: types.CallbackQuery):
user_id = callback_query.from_user.id
await bot.answer_callback_query(callback_query.id)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT nome, surname, patronym, birthdayat, diedat, epitaph, placeOfBirth, placeOfDeath, children, wifehusband, education, occupation, awards, title1, biography1, title2, biography2, title3, biography3, conclusion FROM public_info WHERE user_id=?", (user_id,))
data = await cursor.fetchone()
if data:
data_structure = {
"nome": data[0],
"surname": data[1],
"patronym": data[2],
"birthday_at": "-".join(reversed(data[3].split('.'))) + "T21:00:00.000000Z", # dd.mm.yyyy -> ISO yyyy-mm-dd
"died_at": "-".join(reversed(data[4].split('.'))) + "T21:00:00.000000Z",
"epitaph": data[5],
"placeOfBirth": data[6],
"placeOfDeath": data[7],
"children": data[8],
"wifehusband": data[9],
"education": data[10],
"occupation": data[11],
"awards": data[12],
"title1" : data[13],
"biography1": data[14],
"title2": data[15],
"biography2": data[16],
"title3": data[17],
"biography3": data[18],
"conclusion": data[19],
"day_b": data[3].split('.')[0],
"month_b": data[3].split('.')[1],
"year_b": int(data[3].split('.')[2]),
"day_d": data[4].split('.')[0],
"month_d": data[4].split('.')[1],
"year_d": int(data[4].split('.')[2]),
}
url = 'https://mc.dev.rand.agency/api/v1/get-access-token'
payload = {
"email": "team1@hackathon.ru",
"password": "PA9dEa7x",
"device": "bot-v0.0.1"
}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=payload, headers={'Accept': 'application/json','Content-Type': 'application/json;charset=UTF-8'}) as response:
if response.status == 200:
data = await response.json()
access_token = data.get('access_token')
print(f'Access Token: {access_token}')
else:
print(f'Error: {response.status}')
async with aiohttp.ClientSession() as session:
await update_page(session, access_token, data_structure)
await bot.send_message(user_id, "Страница обновлена")
await Form.choosing_action.set()
async def update_page(session, access_token, data_structure):
url = 'https://mc.dev.rand.agency/api/page/64577087'
print(data_structure["nome"])
payload = {
"id": 8734,
"name": data_structure["nome"],
"surname": data_structure["surname"],
"patronym": data_structure["patronym"],
"birthday_at": data_structure["birthday_at"],
"died_at": data_structure["died_at"],
"epitaph": data_structure["epitaph"],
"start": {
"day": data_structure["day_b"],
"month": data_structure["month_b"],
"year": data_structure["year_b"]
},
"end": {
"day": data_structure["day_d"],
"month": data_structure["month_d"],
"year": data_structure["year_d"]
},
"link": "https://mc.dev.rand.agency/page/64577087",
"page_type_id": 1,
"biographies": [
{
"id": 4610,
"title": data_structure["title1"],
"description": data_structure["biography1"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 1,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4611,
"title": data_structure["title2"],
"description": data_structure["biography2"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 2,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
}
],
"media": []
},
{
"id": 4612,
"title": data_structure["title3"],
"description": data_structure["biography3"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 3,
"checked": False,
"photos": [
{
"url": None,
"title": None,
"order": 1
},
{
"url": None,
"title": None,
"order": 2
}
],
"media": []
},
{
"id": 4613,
"title": "Заключение",
"description": data_structure["conclusion"],
"page_id": 8734,
"created_at": "2024-04-11T12:16:46.000000Z",
"updated_at": "2024-04-11T12:16:46.000000Z",
"order": 4,
"checked": False,
"photos": [],
"media": []
}
],
"page_information": [
{
"id": 30957,
"page_id": 8734,
"title": "pageInformation.placeOfBirth",
"is_system": True,
"description": data_structure["placeOfBirth"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30958,
"page_id": 8734,
"title": "pageInformation.placeOfDeath",
"is_system": True,
"description": data_structure["placeOfDeath"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30959,
"page_id": 8734,
"title": "pageInformation.children",
"is_system": True,
"description": data_structure["children"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30960,
"page_id": 8734,
"title": "^pageInformation.wife||pageInformation.husband",
"is_system": True,
"description": data_structure["wifehusband"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30961,
"page_id": 8734,
"title": "pageInformation.citizenship",
"is_system": True,
"description": None,
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30962,
"page_id": 8734,
"title": "pageInformation.education",
"is_system": True,
"description": data_structure["education"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30963,
"page_id": 8734,
"title": "pageInformation.occupation",
"is_system": True,
"description": data_structure["occupation"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
},
{
"id": 30964,
"page_id": 8734,
"title": "pageInformation.awards",
"is_system": True,
"description": data_structure["awards"],
"created_at": "2024-04-11T12:16:20.000000Z",
"updated_at": "2024-04-11T12:16:20.000000Z"
}
],
"_method": "PUT"
}
headers = {
'Accept': 'application/json',
'Content-Type': 'application/json;charset=UTF-8',
'Authorization': f'Bearer {access_token}'
}
async with session.put(url, json=payload, headers=headers) as response:
if response.status == 200:
data = await response.json()
print(f'Update Page Response: {data}')
else:
data = await response.json()
print(f'Error: {data}')
# ADMIN PANEL
# "BACK" BUTTON
back = ReplyKeyboardMarkup(resize_keyboard=True, one_time_keyboard=False)
back.add(KeyboardButton("Назад"))
# ADMIN KEYBOARD
admin_kb = ReplyKeyboardMarkup(resize_keyboard=True)
admin_kb.add("Вопросы", "Добавить", "Удалить", "Редактировать","В меню")
@dp.message_handler(lambda message: message.text == "Назад", state=[admin.new_question, admin.edit_question_text, admin.select_question_to_edit, admin.select_question_to_delete])
async def back_to_admin_panel(message: types.Message, state: FSMContext):
await state.finish()
await admin_panel(message)
@dp.message_handler(lambda message: message.text == "Админ-панель", state=Form.choosing_action)
async def admin_panel(message: types.Message):
if message.from_user.id not in ADMINS:
await message.answer("Доступ запрещен.")
return
await message.answer("Админ-панель:", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Вопросы", state=admin.admin_panel)
async def show_questions(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if questions:
text = "\n".join([f"{idx + 1}. {question[0]}" for idx, question in enumerate(questions)])
else:
text = "Вопросы отсутствуют."
await message.answer(text)
@dp.message_handler(lambda message: message.text == "Добавить", state=admin.admin_panel)
async def add_question_start(message: types.Message):
await message.answer("Введите текст нового вопроса:", reply_markup=back)
await admin.new_question.set()
@dp.message_handler(state=admin.new_question)
async def add_question_process(message: types.Message, state: FSMContext):
new_question = message.text
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT MAX(order_num) FROM questions")
max_order_num = await cursor.fetchone()
next_order_num = (max_order_num[0] or 0) + 1
await db.execute("INSERT INTO questions (question, order_num) VALUES (?, ?)", (new_question, next_order_num))
await db.commit()
await message.answer("Вопрос успешно добавлен.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Редактировать", state=admin.admin_panel)
async def select_question_to_edit_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для редактирования:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_edit.set()
@dp.message_handler(state=admin.select_question_to_edit)
async def edit_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with state.proxy() as data:
data['question_id'] = qid
await admin.edit_question_text.set()
await message.answer("Введите новый текст вопроса:", reply_markup=back)
@dp.message_handler(state=admin.edit_question_text)
async def update_question(message: types.Message, state: FSMContext):
new_text = message.text
async with state.proxy() as data:
qid = data['question_id']
async with aiosqlite.connect('base.db') as db:
await db.execute("UPDATE questions SET question = ? WHERE id = ?", (new_text, qid))
await db.commit()
await message.answer("Вопрос успешно отредактирован.", reply_markup=admin_kb)
await admin.admin_panel.set()
@dp.message_handler(lambda message: message.text == "Удалить", state=admin.admin_panel)
async def select_question_to_delete_start(message: types.Message):
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT id, question FROM questions ORDER BY order_num ASC")
questions = await cursor.fetchall()
if not questions:
await message.answer("Вопросы отсутствуют.")
return
text = "Выберите номер вопроса для удаления:\n\n"
text += "\n".join(f"{qid}. {qtext}" for qid, qtext in questions)
await message.answer(text, reply_markup=back)
await admin.select_question_to_delete.set()
@dp.message_handler(state=admin.select_question_to_delete)
async def delete_question(message: types.Message, state: FSMContext):
qid_text = message.text
if not qid_text.isdigit():
await message.answer("Пожалуйста, введите число. Попробуйте еще раз:", reply_markup=back)
return
qid = int(qid_text)
async with aiosqlite.connect('base.db') as db:
cursor = await db.execute("SELECT order_num FROM questions WHERE id = ?", (qid,))
question = await cursor.fetchone()
if not question:
await message.answer(f"Вопрос под номером {qid} не найден. Пожалуйста, попробуйте другой номер.")
return
order_num_to_delete = question[0]
await db.execute("DELETE FROM questions WHERE id = ?", (qid,))
await db.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > ?", (order_num_to_delete,))
await db.commit()
await message.answer("Вопрос успешно удален.", reply_markup=admin_kb)
await admin.admin_panel.set()
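delete_question removes a row and then closes the numbering gap with `order_num = order_num - 1`. The same two statements can be exercised against an in-memory database (schema assumed to match the bot's questions table):

```python
import sqlite3

# In-memory sketch of the DELETE + renumber logic used by delete_question.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE questions (id INTEGER PRIMARY KEY, question TEXT, order_num INTEGER)"
)
conn.executemany(
    "INSERT INTO questions (question, order_num) VALUES (?, ?)",
    [("q1", 1), ("q2", 2), ("q3", 3)],
)
# Delete the question with order_num 2, then shift later questions up by one.
conn.execute("DELETE FROM questions WHERE order_num = 2")
conn.execute("UPDATE questions SET order_num = order_num - 1 WHERE order_num > 2")
rows = conn.execute(
    "SELECT question, order_num FROM questions ORDER BY order_num"
).fetchall()
assert rows == [("q1", 1), ("q3", 2)]
```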
async def main():
await create_db()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
executor.start_polling(dp, skip_updates=True)
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
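load_data binarizes the grayscale segmentation masks with `np.where`; in isolation the step looks like this:

```python
import numpy as np

# Any nonzero grayscale pixel becomes foreground (1), zero stays background (0),
# mirroring the np.where step in load_data.
seg_img = np.array([[0, 5], [255, 0]], dtype=np.uint8)
binary = np.where(seg_img > 0, 1, 0)
assert binary.tolist() == [[0, 1], [1, 0]]
```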
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
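split_data slices by integer counts derived from the ratios, so a ragged total can silently drop a few trailing samples. The index arithmetic can be checked on a dummy array:

```python
import numpy as np

# Mirror of the slicing arithmetic in split_data, on 10 dummy samples.
X = np.arange(10)
train_ratio, val_ratio, test_ratio = 0.6, 0.2, 0.2
n = len(X)
n_train = int(train_ratio * n)
n_val = int(val_ratio * n)
n_test = int(test_ratio * n)
X_train = X[:n_train]
X_val = X[n_train:n_train + n_val]
X_test = X[n_train + n_val:n_train + n_val + n_test]
assert (len(X_train), len(X_val), len(X_test)) == (6, 2, 2)
```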
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 work best; 8 gives the best classification but poor segmentation
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
total_loss = bce_loss + 1.0 * dice_loss
return total_loss
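segmentation_loss combines binary cross-entropy with a Dice term. The Dice term can be sanity-checked in isolation with a small NumPy mirror (pure illustration, not part of the training graph):

```python
import numpy as np

def dice_loss_np(y_true, y_pred, smooth=1e-5):
    # NumPy mirror of the Dice component of segmentation_loss above.
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

# Perfect overlap drives the Dice loss to ~0, disjoint masks to ~1.
perfect = dice_loss_np(np.ones((4, 4)), np.ones((4, 4)))
disjoint = dice_loss_np(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
assert abs(perfect) < 1e-5 and abs(disjoint - 1.0) < 1e-4
```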
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030' # Lighter than black, as per the previous specification
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
# preprocess and predict on the selected image
out = Output()
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with the multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select a single image and run inference on it. Check these functions: "def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)"
I cannot get the correct predicted classification output; please check for the error.
|
c19c85bd82a0878cf0bf8f63312c0edf
|
{
"intermediate": 0.33455729484558105,
"beginner": 0.3478151261806488,
"expert": 0.31762760877609253
}
|
46,558
|
"import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
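The mask-binarization step inside load_data (`np.where(seg_img > 0, 1, 0)`) converts any grayscale mask into a 0/1 array, regardless of the original pixel values. A tiny self-contained illustration:

```python
import numpy as np

toy_mask = np.array([[0, 10, 255],
                     [0,  0, 128]])          # grayscale mask pixel values
binary_mask = np.where(toy_mask > 0, 1, 0)   # every nonzero pixel becomes 1
print(binary_mask.tolist())  # -> [[0, 1, 1], [0, 0, 1]]
```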
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
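The slicing in split_data is plain contiguous indexing, so the three subsets never overlap. A sketch with 10 toy samples and the same 0.2/0.2/0.6 ratios used later in this script:

```python
num_samples = 10
train_ratio, val_ratio, test_ratio = 0.2, 0.2, 0.6

n_train = int(train_ratio * num_samples)  # 2
n_val = int(val_ratio * num_samples)      # 2
n_test = int(test_ratio * num_samples)    # 6

data = list(range(num_samples))
train = data[:n_train]
val = data[n_train:n_train + n_val]
test = data[n_train + n_val:n_train + n_val + n_test]
print(train, val, test)  # -> [0, 1] [2, 3] [4, 5, 6, 7, 8, 9]
```

Note that `int()` truncates, so with ratios that don't divide the sample count evenly a few trailing samples can be dropped from all three splits.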
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
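For orientation, the encoder above halves the spatial size four times (one MaxPooling2D per stage) before the middle block, and the decoder's UpSampling2D layers reverse the process. With 224x224 inputs the feature-map sizes can be traced with simple arithmetic (a sketch for intuition, not part of the model):

```python
size = 224
encoder_sizes = [size]
for _ in range(4):  # four MaxPooling2D(pool_size=(2, 2), strides=2) stages
    size //= 2
    encoder_sizes.append(size)
print(encoder_sizes)  # -> [224, 112, 56, 28, 14]
```

The 14x14 map is where the middle block operates, and each skip connection concatenates an encoder map with the decoder map of matching size.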
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
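The Dice term in segmentation_loss can be checked by hand. A numpy-only sketch of the same formula (the smooth constant matches the TF version; the toy masks are illustrative):

```python
import numpy as np

def dice_loss_np(y_true, y_pred, smooth=1e-5):
    # Mirrors the TF formula above: 1 - 2*(intersection + smooth)/(union + smooth)
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

perfect = np.array([0.0, 1.0, 1.0])
loss_match = dice_loss_np(perfect, perfect)        # close to 0 for perfect overlap
loss_miss = dice_loss_np(perfect, 1.0 - perfect)   # close to 1 for zero overlap
print(loss_match, loss_miss)
```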
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001), # 'lr' was renamed to 'learning_rate' in Keras 2.x
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg, X_val, y_val_class, y_val_seg, X_train, y_train_class, y_train_seg): # pass all splits explicitly instead of relying on globals
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030' # Lighter than black, as per the previous specification
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
# preprocess and predict on the selected image
out = Output()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
'''
def predict_image(image_path):
# Load and preprocess the image
img_bgr = cv2.imread(image_path)
img = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB) # RGB copy for display only
# NOTE: load_data() trains the network on raw BGR frames from cv2.imread,
# so the model input below keeps BGR; feeding RGB at inference while the
# model was trained on BGR is a common cause of wrong classification outputs.
img_resized = cv2.resize(img_bgr, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
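One more detail worth making explicit: model.predict expects a batch dimension, which is why the single resized image is expanded from (H, W, 3) to (1, H, W, 3) before prediction. A quick shape check with a stand-in array:

```python
import numpy as np

single_image = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in for a resized image
batched = np.expand_dims(single_image, axis=0)          # add the batch axis
print(single_image.shape, batched.shape)  # -> (224, 224, 3) (1, 224, 224, 3)
```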
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
# ipywidgets 7 dict-style API; in ipywidgets 8, uploader.value is a tuple of
# dicts, so this callback would need value[0]['name'] and value[0]['content']
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with the multi-task learning model: the model takes an image as input and produces one classification output and one segmentation output, and the code lets the user select a single image and run inference on it. Check these functions: "def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)"
I cannot get the correct predicted classification output; please check for the error.
|
c74cf307888b6fef46a9311b607bf711
|
{
"intermediate": 0.33455729484558105,
"beginner": 0.3478151261806488,
"expert": 0.31762760877609253
}
|
46,559
|
"import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
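The mask binarization in `load_data` hinges on `np.where(seg_img > 0, 1, 0)`, which treats any non-zero grayscale pixel as foreground. A minimal, numpy-only check of that step (no cv2 needed):

```python
import numpy as np

# Grayscale mask values as cv2.imread(..., 0) might return them
seg_img = np.array([[0, 12], [255, 0]])

# Same thresholding as in load_data: any non-zero pixel becomes foreground
binary = np.where(seg_img > 0, 1, 0)

print(binary.tolist())  # [[0, 1], [1, 0]]
```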
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
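The encoding step in `split_data` can be sketched in isolation. This is a numpy-only mirror of what `LabelEncoder` + `to_categorical` produce (assuming the five-class label list used later in the script): `LabelEncoder` assigns indices by sorted class name, and `to_categorical` one-hots those indices.

```python
import numpy as np

labels = ["bridge", "excess", "good", "insuff", "no"]
y = np.array(["good", "no", "good", "bridge"])

# LabelEncoder orders classes alphabetically; searchsorted reproduces its mapping
classes = np.array(sorted(set(labels)))
encoded = np.searchsorted(classes, y)

# to_categorical is an identity-matrix row lookup
one_hot = np.eye(len(classes))[encoded]

print(encoded.tolist())  # [2, 4, 2, 0]
print(one_hot.shape)     # (4, 5)
```

One consequence worth noting: the encoder is fit on the *training* labels only, so a class absent from the training split would make `transform` fail on val/test.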
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
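As a sanity check, the Dice term in `segmentation_loss` should be near 0 for a perfect prediction and near 1 for a fully disjoint one. A numpy mirror of the same formula (same `smooth` constant; TF swapped for numpy only so it can run standalone):

```python
import numpy as np

def dice_loss(y_true, y_pred, smooth=1e-5):
    # Mirrors the TF version: 1 - 2*(intersection + smooth) / (union + smooth)
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

perfect = dice_loss(np.array([1.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]))
disjoint = dice_loss(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0]))
print(perfect, disjoint)
```

Note the TF function reduces the Dice term over the whole batch (a single scalar), while `binary_crossentropy` returns a per-pixel map; Keras broadcasts the sum, which is valid but weights the two terms differently than a per-sample Dice would.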
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030' # Lighter than black, as per the previous specification
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
# preprocess and predict on the selected image
out = Output()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
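`on_upload` assumes the ipywidgets 7 API, where `FileUpload.value` is a dict keyed by filename; in ipywidgets 8 it became a tuple of dicts with `name`/`content` keys (with `content` as a memoryview). A version-agnostic accessor, sketched against those two documented shapes:

```python
def get_uploaded_file(uploader):
    """Return (filename, raw_bytes) from a FileUpload widget, for ipywidgets 7 or 8."""
    value = uploader.value
    if isinstance(value, dict):
        # ipywidgets 7.x: {filename: {'content': bytes, 'metadata': {...}}}
        name = next(iter(value))
        content = value[name]['content']
    else:
        # ipywidgets 8.x: ({'name': ..., 'content': memoryview, ...},)
        item = value[0]
        name, content = item['name'], bytes(item['content'])
    return name, content
```

With this helper, `on_upload` would call `name, content = get_uploaded_file(uploader)` instead of indexing `uploader.value` directly.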
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is the Python code, using Keras, to run inference with the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output. The code lets the user select an image and run inference; however, the predicted classification output is always wrong. Check and correct these functions to solve the problem:"def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)"
Also, this code works well:"import ipywidgets as widgets
from IPython.display import display, clear_output
from ipywidgets import Output
import matplotlib.pyplot as plt
import cv2
import numpy as np
# Load the saved model
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Create an Output widget to capture the output
out = Output()
# Function to preprocess and predict on the selected image
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.resize(img, (image_size, image_size))
img = np.expand_dims(img, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title("Input Image")
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(seg_output[0, :, :, 0], cmap='gray')
plt.title("Segmentation Output")
plt.axis("off")
plt.subplot(1, 3, 3)
class_probabilities = class_output[0]
y_pos = np.arange(len(labels))
plt.barh(y_pos, class_probabilities, align='center')
plt.yticks(y_pos, labels)
plt.xlabel('Probability')
plt.title('Classification Output')
plt.gca().invert_yaxis() # Invert y-axis to display labels from top to bottom
plt.tight_layout()
# Print the predicted class
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Create a file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Attach the callback function to the file uploader
uploader.observe(on_upload, names='value')
# Display the file uploader widget and the Output widget
display(uploader)
display(out)"
Show the fixed code.
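One likely culprit, judging from the two versions above: the working code feeds the model the raw `cv2.imread` output (BGR, matching how `load_data` prepared the training images), while the broken version calls `cv2.cvtColor(..., COLOR_BGR2RGB)` *before* predicting, so the network sees swapped channels. A minimal sketch of the safer preprocessing split (numpy-only stand-in for the cv2 steps; this assumes the model really was trained on BGR images):

```python
import numpy as np

def prepare_inputs(img_bgr):
    # Feed the network the image exactly as trained on (BGR from cv2.imread);
    # convert a copy to RGB only for matplotlib display.
    model_input = np.expand_dims(img_bgr, axis=0)  # shape (1, H, W, 3), still BGR
    display_img = img_bgr[..., ::-1]               # channel flip BGR -> RGB
    return model_input, display_img

# Tiny 1x1 pure-blue BGR pixel: B=255, G=0, R=0
img = np.array([[[255, 0, 0]]], dtype=np.uint8)
model_in, shown = prepare_inputs(img)
```

In `predict_image`, that means predicting on the resized BGR array and passing only `display_img` to `axs[0].imshow(...)`.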
|
1bb72f4a4c02020569205ef52d373a44
|
{
"intermediate": 0.33455729484558105,
"beginner": 0.3478151261806488,
"expert": 0.31762760877609253
}
|
46,560
|
"import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import io
from PIL import Image
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
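As a sanity check on the Dice term above, a quick NumPy replica (a sketch, not the TF code itself) should give a loss near 0 for identical masks and near 1 for disjoint masks:

```python
import numpy as np

def dice_loss_np(y_true, y_pred, smooth=1e-5):
    # Mirrors the Dice term of segmentation_loss above, in plain NumPy
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

mask = np.array([[1.0, 0.0], [0.0, 1.0]])
perfect = dice_loss_np(mask, mask)        # ~0.0 (perfect overlap)
disjoint = dice_loss_np(mask, 1.0 - mask)  # ~1.0 (no overlap)
```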
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is deprecated in recent Keras
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(
X_train, {"classification_output": y_train_class, "segmentation_output": y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030' # Lighter than black, as per the previous specification
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
# preprocess and predict on the selected image
out = Output()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
'''
def predict_image(image_path):
# Load and preprocess the image
img_bgr = cv2.imread(image_path)
# Keep BGR for the model: load_data reads training images with cv2.imread
# (BGR) and never converts them, so predicting on an RGB image breaks the
# classification output
img_resized = cv2.resize(img_bgr, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
img = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)  # RGB copy for display only
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is the Python code, using Keras, to do inference with the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output. The code lets the user select an image and run inference, but the prediction always produces the wrong classification output. Check and correct these functions to solve the problem:"def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)"
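For reference, the likely culprit is a channel-order mismatch: load_data feeds the model BGR images straight from cv2.imread, while the failing predict_image converts to RGB before calling predict. A minimal NumPy sketch (no OpenCV needed; the helper name is mine) shows how to keep one array for the model and a reversed copy for display:

```python
import numpy as np

def split_model_and_display(img_bgr):
    """Return (model input in BGR, display copy in RGB).

    Hypothetical helper: the model was trained on BGR images
    (cv2.imread without cvtColor), so only the display copy is reversed.
    """
    img_rgb = img_bgr[..., ::-1]  # equivalent to cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    return img_bgr, img_rgb

img_bgr = np.zeros((2, 2, 3), dtype=np.uint8)
img_bgr[..., 0] = 255  # pure blue in BGR layout
model_in, display_img = split_model_and_display(img_bgr)
# The model still sees blue in channel 0; the display copy has it in channel 2
```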
Also, this code works well: "import ipywidgets as widgets
from IPython.display import display, clear_output
from ipywidgets import Output
import matplotlib.pyplot as plt
import cv2
import numpy as np
# Load the saved model
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Create an Output widget to capture the output
out = Output()
# Function to preprocess and predict on the selected image
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.resize(img, (image_size, image_size))
img = np.expand_dims(img, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title("Input Image")
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(seg_output[0, :, :, 0], cmap='gray')
plt.title("Segmentation Output")
plt.axis("off")
plt.subplot(1, 3, 3)
class_probabilities = class_output[0]
y_pos = np.arange(len(labels))
plt.barh(y_pos, class_probabilities, align='center')
plt.yticks(y_pos, labels)
plt.xlabel('Probability')
plt.title('Classification Output')
plt.gca().invert_yaxis() # Invert y-axis to display labels from top to bottom
plt.tight_layout()
# Print the predicted class
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Create a file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Attach the callback function to the file uploader
uploader.observe(on_upload, names='value')
# Display the file uploader widget and the Output widget
display(uploader)
display(out)"
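One caveat on the upload handler: it assumes the ipywidgets 7.x FileUpload API, where `.value` is a dict keyed by filename. In ipywidgets 8.x, `.value` becomes a tuple of dicts with `name` and `content` keys, so `next(iter(uploader.value))` no longer yields a filename. A small version-agnostic extractor (a sketch; the helper name is mine):

```python
def first_upload(value):
    # ipywidgets 7.x: value is {filename: {'content': bytes, ...}}
    if isinstance(value, dict):
        name, item = next(iter(value.items()))
        return name, item['content']
    # ipywidgets 8.x: value is a tuple of dicts with 'name'/'content' keys
    item = value[0]
    return item['name'], bytes(item['content'])

# Works against either shape:
v7 = {'board.jpg': {'content': b'\x89PNG'}}
v8 = ({'name': 'board.jpg', 'content': memoryview(b'\x89PNG')},)
```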
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is deprecated in recent Keras
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(
X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030' # Lighter than black, as per the previous specification
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
# preprocess and predict on the selected image
out = Output()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is Python code using Keras to run inference with a multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output. The code lets the user select an image and run inference; however, the predicted classification output is always wrong. Check and correct these functions to solve the problem:"def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)"
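One plausible cause, given the code above: load_data feeds cv2.imread output (BGR channel order) to the model with no conversion, while predict_image converts the image to RGB before calling model.predict, so the network sees the channels in the opposite order at inference time. A minimal sketch of preprocessing that matches training — plain NumPy nearest-neighbour resizing stands in for cv2.resize here, so this illustrates the channel-order point rather than being a drop-in replacement:

```python
import numpy as np

def preprocess_for_model(img_bgr, image_size=224):
    """Prepare an image the way load_data did: keep BGR channel
    order, resize, and add a batch dimension."""
    h, w = img_bgr.shape[:2]
    rows = np.arange(image_size) * h // image_size   # nearest-neighbour row indices
    cols = np.arange(image_size) * w // image_size   # nearest-neighbour column indices
    resized = img_bgr[rows][:, cols]
    return np.expand_dims(resized, axis=0)

# Demonstrate the mismatch: an RGB-converted copy feeds the channels
# to the network in the opposite order, which is enough to flip the
# prediction of a colour-sensitive classifier.
bgr = np.zeros((8, 8, 3), dtype=np.uint8)
bgr[..., 0] = 255            # pure blue in BGR layout
rgb = bgr[..., ::-1]         # what cv2.cvtColor(..., COLOR_BGR2RGB) would produce
batch_bgr = preprocess_for_model(bgr)
batch_rgb = preprocess_for_model(rgb)
assert batch_bgr.shape == (1, 224, 224, 3)
assert not np.array_equal(batch_bgr, batch_rgb)   # channel order differs
```

In predict_image this would mean predicting on the un-converted BGR array and converting to RGB only for the matplotlib display — the converse fix (converting to RGB inside load_data and retraining) would also make training and inference consistent.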
conversation_hash: a72549acaf84ab01ee7dcefc8570fc2a
scores: intermediate 0.33455729484558105, beginner 0.3478151261806488, expert 0.31762760877609253
row_id: 46,562
"import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
Dropout,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
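As a quick sanity check of the Dice term in segmentation_loss above, restated in plain NumPy so the arithmetic is easy to verify by hand (the TensorFlow version is the one actually used in training):

```python
import numpy as np

def dice_loss_np(y_true, y_pred, smooth=1e-5):
    # Same formula as the Dice term in segmentation_loss above.
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
perfect = dice_loss_np(y_true, y_true)         # full overlap: loss near 0
disjoint = dice_loss_np(y_true, 1.0 - y_true)  # no overlap: loss near 1
# the smooth term in the numerator pushes the perfect-overlap loss a hair below zero
assert perfect < 1e-4
assert disjoint > 0.999
```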
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" is deprecated in recent Keras; use learning_rate
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data (note: X_val / y_val_* are read from the enclosing script scope, not from this function's arguments)
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030' # Lighter than black, as per the previous specification
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
# preprocess and predict on the selected image
out = Output()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is the Python code with Keras to do inference using the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; the code lets the user select one image and perform inference. However, the predicted output always produces the wrong classification. Check and correct these functions to solve the problem:"def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)"
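A likely culprit here is a channel-order mismatch rather than a model bug: `load_data()` feeds raw `cv2.imread()` arrays (BGR order) to the network, while `predict_image()` converts to RGB before `model.predict()`. A minimal NumPy-only sketch (no cv2 needed, values are illustrative) of what that swap does to every pixel the model sees:

```python
import numpy as np

# load_data() trains on raw cv2.imread() arrays, which use BGR channel order;
# predict_image() converts to RGB first, so at inference every pixel's
# channels are reversed relative to what the network was trained on:
bgr_pixel = np.array([[[10, 20, 30]]], dtype=np.uint8)  # one BGR pixel
rgb_pixel = bgr_pixel[..., ::-1]  # same effect as cv2.cvtColor(..., COLOR_BGR2RGB)
print(rgb_pixel[0, 0].tolist())  # [30, 20, 10]
```

Feeding the BGR array to the model (and converting to RGB only for display) should make inference consistent with training.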
|
00ad80b05bc6f4f434b123ea8a62af06
|
{
"intermediate": 0.33455729484558105,
"beginner": 0.3478151261806488,
"expert": 0.31762760877609253
}
|
46,563
|
"import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual, HBox, VBox
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from ipywidgets import Output, Button
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
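The `np.where(seg_img > 0, 1, 0)` step in `load_data` maps any nonzero grayscale value to foreground. A quick NumPy check with illustrative values:

```python
import numpy as np

# Any nonzero grayscale mask value becomes binary foreground (1),
# mirroring the np.where(seg_img > 0, 1, 0) step in load_data():
seg_img = np.array([[0, 5], [255, 0]], dtype=np.uint8)
binary_mask = np.where(seg_img > 0, 1, 0)
print(binary_mask.tolist())  # [[0, 1], [1, 0]]
```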
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
total_loss = bce_loss + 1.0 * dice_loss  # avoid shadowing the function name
return total_loss
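As a sanity check on the Dice term, here is a NumPy re-derivation (not the TF version above, just the same formula) showing it goes to roughly 0 for perfect overlap and roughly 1 for no overlap; `dice_loss_np` and the mask values are illustrative:

```python
import numpy as np

def dice_loss_np(y_true, y_pred, smooth=1e-5):
    # Mirrors the TF formula: 1 - 2*(intersection + smooth) / (union + smooth)
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    return 1.0 - 2.0 * (intersection + smooth) / (union + smooth)

mask = np.array([[1.0, 0.0], [0.0, 1.0]])
print(dice_loss_np(mask, mask))        # ≈ 0.0 (perfect overlap)
print(dice_loss_np(mask, 1.0 - mask))  # ≈ 1.0 (no overlap)
```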
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # 'lr' is deprecated in recent Keras
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg):  # pass all splits explicitly instead of relying on globals
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
def test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5):
random_indices = np.random.choice(len(X_test), num_images, replace=False)
selected_images = X_test[random_indices]
selected_labels = y_test_class[random_indices]
selected_masks = y_test_seg[random_indices]
predictions = model.predict(selected_images)
predicted_classes = np.argmax(predictions[0], axis=1)
predicted_masks = predictions[1]
bg_color = '#303030' # Lighter than black, as per the previous specification
text_color = 'white'
for i in range(num_images):
plt.figure(figsize=(10, 5))
plt.subplot(2, 3, 1)
plt.imshow(selected_images[i])
plt.title(f"Input Image\nGround Truth Label: {labels[np.argmax(selected_labels[i])]}\nPredicted Label: {labels[predicted_classes[i]]}", color=text_color)
plt.axis("off")
plt.gcf().set_facecolor(bg_color)
plt.subplot(2, 3, 2)
plt.imshow(selected_masks[i].squeeze(), cmap="gray")
plt.title("Ground Truth Segmentation Mask", color=text_color)
plt.axis("off")
plt.subplot(2, 3, 3)
plt.imshow(predicted_masks[i].squeeze(), cmap="gray")
plt.title("Predicted Segmentation Mask", color=text_color)
plt.axis("off")
plt.tight_layout()
plt.show()
# preprocess and predict on the selected image
out = Output()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
axs[0].imshow(img)
axs[0].set_title("Input Image")
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output")
axs[1].axis("off")
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='blue')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right")
axs[2].set_ylabel('Probability')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}') # Increased title font size
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='black', ha='center')
plt.tight_layout()
plt.show()
'''
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
custom_objects = {"segmentation_loss": segmentation_loss}
# Load model
model = load_model(saved_model_path, custom_objects=custom_objects)
# Test model
test_model(model, X_test, y_test_class, y_test_seg, labels, num_images=5)
uploader = widgets.FileUpload(accept='image/*', multiple=False)
uploader.observe(on_upload, names='value')
buttons_layout = HBox([uploader, clear_button])
display(buttons_layout)
display(out)
"
The above is the Python code with Keras to do inference using the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; the code lets the user select one image and perform inference. However, the predicted output always produces the wrong classification. Check and correct these functions to solve the problem:"def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Convert BGR to RGB
img_resized = cv2.resize(img, (image_size, image_size))
img_expanded = np.expand_dims(img_resized, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img_expanded)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Configure specific style options for lighter background
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
# Lighten background
bg_color = '#303030' # Lighter than black for contrast
fig.patch.set_facecolor(bg_color)
for ax in axs:
ax.set_facecolor(bg_color)
# Set spines color to lighter tone
ax.spines['bottom'].set_color('grey')
ax.spines['top'].set_color('grey')
ax.spines['right'].set_color('grey')
ax.spines['left'].set_color('grey')
ax.tick_params(axis='x', colors='white')
ax.tick_params(axis='y', colors='white')
axs[0].imshow(img)
axs[0].set_title("Input Image", color='white', fontweight='bold')
axs[0].axis("off")
axs[1].imshow(seg_output[0, :, :, 0], cmap='gray')
axs[1].set_title("Segmentation Output", color='white', fontweight='bold')
axs[1].axis("off")
# Adjusted plot for classification
class_probabilities = class_output[0]
x_pos = np.arange(len(labels))
bars = axs[2].bar(x_pos, class_probabilities, align='center', alpha=0.7, color='cyan')
axs[2].set_xticks(x_pos)
axs[2].set_xticklabels(labels, rotation=45, ha="right", color='white')
axs[2].set_ylabel('Probability', color='white')
axs[2].set_ylim([0, 1])
axs[2].set_title(f'Classification Output\nPredicted Class: {predicted_class}', color='white', fontweight='bold')
for i, v in enumerate(class_probabilities):
axs[2].text(i, v + 0.02, f"{v:.2f}", color='white', ha='center')
plt.tight_layout()
plt.show()
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Define a function to handle the button click event and clear the output
clear_button = Button(description='Clear')
def clear_output_button_clicked(button):
out.clear_output()
clear_button.on_click(clear_output_button_clicked)"
Also, this code works well:"import ipywidgets as widgets
from IPython.display import display, clear_output
from ipywidgets import Output
import matplotlib.pyplot as plt
import cv2
import numpy as np
# Load the saved model
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Create an Output widget to capture the output
out = Output()
# Function to preprocess and predict on the selected image
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.resize(img, (image_size, image_size))
img = np.expand_dims(img, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title("Input Image")
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(seg_output[0, :, :, 0], cmap='gray')
plt.title("Segmentation Output")
plt.axis("off")
plt.subplot(1, 3, 3)
class_probabilities = class_output[0]
y_pos = np.arange(len(labels))
plt.barh(y_pos, class_probabilities, align='center')
plt.yticks(y_pos, labels)
plt.xlabel('Probability')
plt.title('Classification Output')
plt.gca().invert_yaxis() # Invert y-axis to display labels from top to bottom
plt.tight_layout()
# Print the predicted class
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Create a file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Attach the callback function to the file uploader
uploader.observe(on_upload, names='value')
# Display the file uploader widget and the Output widget
display(uploader)
display(out)"
Show fixed code.
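Note the working version above never converts the model input to RGB, while the broken one does. Assuming that channel-order mismatch is indeed the cause, a minimal sketch of the corrected preprocessing: the model input keeps the BGR order produced by `cv2.imread` (matching `load_data`), and RGB is used only for display. `preprocess_for_model` and `to_display` are illustrative helper names, not part of the original code:

```python
import numpy as np

def preprocess_for_model(img_bgr):
    # Keep the BGR channel order produced by cv2.imread, matching load_data();
    # only add the batch axis that model.predict expects.
    return np.expand_dims(img_bgr, axis=0)

def to_display(img_bgr):
    # Matplotlib expects RGB, so reverse the channel axis for display only
    # (same effect as cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)).
    return img_bgr[..., ::-1]

img = np.zeros((224, 224, 3), dtype=np.uint8)
img[..., 0] = 255  # a pure-blue image in BGR order
batch = preprocess_for_model(img)
print(batch.shape)                     # (1, 224, 224, 3)
print(to_display(img)[0, 0].tolist())  # [0, 0, 255] -> renders as blue in RGB
```

In `predict_image`, that means calling `model.predict` on the resized BGR array and passing the RGB-converted copy only to `imshow`.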
|
32fbd1ea12908bc20a2d68bbf5fd064c
|
{
"intermediate": 0.33455729484558105,
"beginner": 0.3478151261806488,
"expert": 0.31762760877609253
}
|
46,564
|
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001), # 'lr' was renamed to 'learning_rate' in newer Keras
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
# Build model
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
test_class_acc_list = []
for i in range(1):
print(f"\nTrain {i+1}:\n")
# Train model
batch_size = 16
epochs = 50
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
# Evaluate model on test data
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
test_class_acc_list.append(test_class_acc)
# Calculate average test classification accuracy
average_test_class_acc = sum(test_class_acc_list) / len(test_class_acc_list)
print("Test Classification Accuracy List:", test_class_acc_list)
print("Average Test Classification Accuracy:", average_test_class_acc)
# Load and test model
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
# Define segmentation loss as a custom_loss function
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
import ipywidgets as widgets
from IPython.display import display, clear_output
from ipywidgets import Output
import matplotlib.pyplot as plt
import cv2
import numpy as np
# Load the saved model
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Create an Output widget to capture the output
out = Output()
# Function to preprocess and predict on the selected image
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.resize(img, (image_size, image_size))
img = np.expand_dims(img, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title("Input Image")
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(seg_output[0, :, :, 0], cmap='gray')
plt.title("Segmentation Output")
plt.axis("off")
plt.subplot(1, 3, 3)
class_probabilities = class_output[0]
y_pos = np.arange(len(labels))
plt.barh(y_pos, class_probabilities, align='center')
plt.yticks(y_pos, labels)
plt.xlabel('Probability')
plt.title('Classification Output')
plt.gca().invert_yaxis() # Invert y-axis to display labels from top to bottom
plt.tight_layout()
# Print the predicted class
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Create a file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Attach the callback function to the file uploader
uploader.observe(on_upload, names='value')
# Display the file uploader widget and the Output widget
display(uploader)
display(out)
"
The above is Python code using Keras to run inference with the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; the code lets the user select one image and perform inference. Help me remove the printed output but include the predicted class name in the plot. Also, change the background color of the plot to '#303030'.
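The two requested changes (dark background, predicted class shown in the plot instead of printed) can be sketched independently of the model. The snippet below uses hypothetical stand-in probabilities in place of `model.predict()` output; the label list matches the one in the script, everything else is illustrative.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs outside a notebook
import matplotlib.pyplot as plt
import numpy as np

# Stand-in values for the model's classification output (hypothetical).
labels = ["bridge", "excess", "good", "insuff", "no"]
class_probabilities = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
predicted_class = labels[int(np.argmax(class_probabilities))]

# Dark background is set on both the figure and the axes.
fig = plt.figure(figsize=(6, 3), facecolor="#303030")
ax = fig.add_subplot(1, 1, 1)
ax.set_facecolor("#303030")

y_pos = np.arange(len(labels))
ax.barh(y_pos, class_probabilities, align="center")
ax.set_yticks(y_pos)
ax.set_yticklabels(labels, color="white")
ax.tick_params(colors="white")
ax.set_xlabel("Probability", color="white")
# Predicted class goes into the title instead of a print() call.
ax.set_title(f"Predicted: {predicted_class}", color="white")
ax.invert_yaxis()
plt.tight_layout()
```

In the original `predict_image`, the same idea means passing `facecolor="#303030"` to `plt.figure(...)`, calling `set_facecolor` on each subplot axes, and folding `predicted_class` into one of the subplot titles while deleting the `print` statements.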
|
04163a6840fde432868a5b5b3b76fb01
|
{
"intermediate": 0.33926141262054443,
"beginner": 0.3307265639305115,
"expert": 0.3300120532512665
}
|
46,565
|
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001), # 'lr' was renamed to 'learning_rate' in newer Keras
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
# Build model
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
test_class_acc_list = []
for i in range(1):
print(f"\nTrain {i+1}:\n")
# Train model
batch_size = 16
epochs = 50
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
# Evaluate model on test data
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
test_class_acc_list.append(test_class_acc)
# Calculate average test classification accuracy
average_test_class_acc = sum(test_class_acc_list) / len(test_class_acc_list)
print("Test Classification Accuracy List:", test_class_acc_list)
print("Average Test Classification Accuracy:", average_test_class_acc)
# Load and test model
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
# Define segmentation loss as a custom_loss function
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
import ipywidgets as widgets
from IPython.display import display, clear_output
from ipywidgets import Output
import matplotlib.pyplot as plt
import cv2
import numpy as np
# Load the saved model
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Create an Output widget to capture the output
out = Output()
# Function to preprocess and predict on the selected image
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.resize(img, (image_size, image_size))
img = np.expand_dims(img, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title("Input Image")
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(seg_output[0, :, :, 0], cmap='gray')
plt.title("Segmentation Output")
plt.axis("off")
plt.subplot(1, 3, 3)
class_probabilities = class_output[0]
y_pos = np.arange(len(labels))
plt.barh(y_pos, class_probabilities, align='center')
plt.yticks(y_pos, labels)
plt.xlabel('Probability')
plt.title('Classification Output')
plt.gca().invert_yaxis() # Invert y-axis to display labels from top to bottom
plt.tight_layout()
# Print the predicted class
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Create a file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Attach the callback function to the file uploader
uploader.observe(on_upload, names='value')
# Display the file uploader widget and the Output widget
display(uploader)
display(out)
"
The above is Python code using Keras to do inference with the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; the code lets the user select one image and perform inference. Help me to remove the printed output but include the predicted output class name in the plot. Also, change the background color of the plot to '#303030'
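A minimal sketch of the change requested above, not the only way to do it: the predicted class name moves into the plot title instead of being printed, and the figure background is set to '#303030'. The helper name `plot_prediction` and its arguments are hypothetical; toy inputs stand in for the real model outputs.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import numpy as np

def plot_prediction(image, seg_mask, class_probabilities, labels, bg_color="#303030"):
    """Show input, segmentation and class probabilities; the predicted
    class appears in the plot title instead of being printed."""
    predicted_class = labels[int(np.argmax(class_probabilities))]
    fig = plt.figure(figsize=(10, 5), facecolor=bg_color)

    ax1 = fig.add_subplot(1, 3, 1)
    ax1.imshow(image)
    ax1.set_title("Input Image", color="white")
    ax1.axis("off")

    ax2 = fig.add_subplot(1, 3, 2)
    ax2.imshow(seg_mask, cmap="gray")
    ax2.set_title("Segmentation Output", color="white")
    ax2.axis("off")

    ax3 = fig.add_subplot(1, 3, 3)
    ax3.set_facecolor(bg_color)  # dark background behind the bar chart too
    y_pos = np.arange(len(labels))
    ax3.barh(y_pos, class_probabilities, align="center")
    ax3.set_yticks(y_pos)
    ax3.set_yticklabels(labels, color="white")
    ax3.set_xlabel("Probability", color="white")
    ax3.tick_params(colors="white")
    ax3.invert_yaxis()  # labels read top to bottom
    # Predicted class goes into the title rather than stdout
    ax3.set_title(f"Predicted: {predicted_class}", color="white")

    fig.tight_layout()
    return fig, predicted_class
```

In the original widget callback, `plt.figure(figsize=(10, 5))` and the `print(...)` calls would be replaced by one call to a helper like this.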
|
e07cbcc6653eaba59a4df292e54ccf77
|
{
"intermediate": 0.33926141262054443,
"beginner": 0.3307265639305115,
"expert": 0.3300120532512665
}
|
46,566
|
"
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import cv2
import random
import tensorflow as tf
import tkinter as tk
from tkinter import filedialog
from PIL import ImageTk, Image
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from IPython.display import display, clear_output
from tensorflow.keras.preprocessing import image
from tensorflow.keras.optimizers import Adam, SGD, RMSprop, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tqdm import tqdm
import os
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import (
GlobalAveragePooling2D,
Dropout,
Dense,
Conv2D,
MaxPooling2D,
Flatten,
BatchNormalization,
Activation,
concatenate,
Conv2DTranspose,
Input,
Reshape,
UpSampling2D,
)
from tensorflow.keras.applications import (
EfficientNetV2B0,
EfficientNetV2B1,
EfficientNetV2B2,
EfficientNetV2B3,
EfficientNetV2L,
EfficientNetV2M,
EfficientNetV2S,
)
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.applications import ResNet50, ResNet101, ResNet152, ResNetRS50, ResNetRS101
from tensorflow.keras.applications import InceptionResNetV2, ConvNeXtXLarge, ConvNeXtBase, DenseNet121, MobileNetV2, NASNetLarge, NASNetMobile
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, TensorBoard, ModelCheckpoint
from sklearn.metrics import classification_report, confusion_matrix
import ipywidgets as widgets
import io
from PIL import Image
from IPython.display import display, clear_output
from warnings import filterwarnings
from google.colab import drive
drive.mount("/content/gdrive")
def load_data(data_folders):
X_data = [] # Combined data
y_class_labels = [] # Combined classification labels
y_seg_labels = [] # Combined segmentation labels
for folderPath in data_folders:
for label in labels:
label_folder_path = os.path.join(folderPath, label)
for filename in tqdm(os.listdir(label_folder_path)):
if filename.endswith(".jpg"):
img = cv2.imread(os.path.join(label_folder_path, filename))
img = cv2.resize(img, (image_size, image_size))
X_data.append(img)
y_class_labels.append(label)
seg_filename = filename.split(".")[0] + ".png"
seg_img = cv2.imread(os.path.join(label_folder_path, seg_filename), 0)
seg_img = cv2.resize(seg_img, (image_size, image_size))
seg_img = np.where(seg_img > 0, 1, 0) # Convert segmentation mask to binary
y_seg_labels.append(seg_img)
X_data = np.array(X_data)
y_class_labels = np.array(y_class_labels)
y_seg_labels = np.array(y_seg_labels)
X_data, y_class_labels, y_seg_labels = shuffle(X_data, y_class_labels, y_seg_labels, random_state=101)
return X_data, y_class_labels, y_seg_labels
def split_data(X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio):
num_samples = len(X_data)
num_train_samples = int(train_ratio * num_samples)
num_val_samples = int(val_ratio * num_samples)
num_test_samples = int(test_ratio * num_samples)
X_train = X_data[:num_train_samples]
y_train_class = y_class_labels[:num_train_samples]
y_train_seg = y_seg_labels[:num_train_samples]
X_val = X_data[num_train_samples: num_train_samples + num_val_samples]
y_val_class = y_class_labels[num_train_samples: num_train_samples + num_val_samples]
y_val_seg = y_seg_labels[num_train_samples: num_train_samples + num_val_samples]
X_test = X_data[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_class = y_class_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
y_test_seg = y_seg_labels[num_train_samples + num_val_samples: num_train_samples + num_val_samples + num_test_samples]
# Convert class labels to categorical
label_encoder = LabelEncoder()
y_train_class_encoded = label_encoder.fit_transform(y_train_class)
y_train_class_categorical = to_categorical(y_train_class_encoded)
y_val_class_encoded = label_encoder.transform(y_val_class)
y_val_class_categorical = to_categorical(y_val_class_encoded)
y_test_class_encoded = label_encoder.transform(y_test_class)
y_test_class_categorical = to_categorical(y_test_class_encoded)
return (
X_train,
y_train_class_categorical,
y_train_seg,
X_val,
y_val_class_categorical,
y_val_seg,
X_test,
y_test_class_categorical,
y_test_seg,
)
def build_model(input_shape, num_classes):
num_filter = 32 # 16/32 best, 8: best classification but no segment
# Encoder (Done)
inputs = Input(input_shape)
conv1 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(inputs)
bn1 = BatchNormalization()(conv1)
relu1 = Activation("relu")(bn1)
conv2 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu1)
bn2 = BatchNormalization()(conv2)
relu2 = Activation("relu")(bn2)
down1 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu2)
conv3 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(down1)
bn3 = BatchNormalization()(conv3)
relu3 = Activation("relu")(bn3)
conv4 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu3)
bn4 = BatchNormalization()(conv4)
relu4 = Activation("relu")(bn4)
down2 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu4)
conv5 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(down2)
bn5 = BatchNormalization()(conv5)
relu5 = Activation("relu")(bn5)
conv6 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu5)
bn6 = BatchNormalization()(conv6)
relu6 = Activation("relu")(bn6)
down3 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu6)
conv7 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(down3)
bn7 = BatchNormalization()(conv7)
relu7 = Activation("relu")(bn7)
conv8 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu7)
bn8 = BatchNormalization()(conv8)
relu8 = Activation("relu")(bn8)
# Middle
down4 = MaxPooling2D(pool_size=(2, 2), strides=2)(relu8)
conv9 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(down4)
bn9 = BatchNormalization()(conv9)
relu9 = Activation("relu")(bn9)
conv10 = Conv2D(num_filter * 16, 3, activation="linear", padding="same", strides=1)(relu9)
bn10 = BatchNormalization()(conv10)
relu10 = Activation("relu")(bn10)
up1 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu10)
# Decoder (Done)
concat1 = concatenate([up1, relu8], axis=-1) # , axis=3
conv11 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(concat1)
bn11 = BatchNormalization()(conv11)
relu11 = Activation("relu")(bn11)
conv12 = Conv2D(num_filter * 8, 3, activation="linear", padding="same", strides=1)(relu11)
bn12 = BatchNormalization()(conv12)
relu12 = Activation("relu")(bn12)
up2 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu12)
concat2 = concatenate([up2, relu6], axis=-1) # , axis=3
conv13 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(concat2)
bn13 = BatchNormalization()(conv13)
relu13 = Activation("relu")(bn13)
conv14 = Conv2D(num_filter * 4, 3, activation="linear", padding="same", strides=1)(relu13)
bn14 = BatchNormalization()(conv14)
relu14 = Activation("relu")(bn14)
up3 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu14)
concat3 = concatenate([up3, relu4], axis=-1) # , axis=3
conv15 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(concat3)
bn15 = BatchNormalization()(conv15)
relu15 = Activation("relu")(bn15)
conv16 = Conv2D(num_filter * 2, 3, activation="linear", padding="same", strides=1)(relu15)
bn16 = BatchNormalization()(conv16)
relu16 = Activation("relu")(bn16)
up4 = UpSampling2D(size=(2, 2), interpolation="bilinear")(relu16)
concat4 = concatenate([up4, relu2], axis=-1) # , axis=3
conv17 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(concat4)
bn17 = BatchNormalization()(conv17)
relu17 = Activation("relu")(bn17)
conv18 = Conv2D(num_filter * 1, 3, activation="linear", padding="same", strides=1)(relu17)
bn18 = BatchNormalization()(conv18)
relu18 = Activation("relu")(bn18)
# Segmentation branch
segmentation_output = Conv2D(1, 1, activation="sigmoid", name="segmentation_output")(relu18) # original
# Classification branch (Not done)
gap1 = GlobalAveragePooling2D()(relu8)
gap2 = GlobalAveragePooling2D()(relu10)
gap3 = GlobalAveragePooling2D()(relu12)
conv20 = Conv2D(16, 3, activation="linear", padding="same", strides=1)(segmentation_output)
bn20 = BatchNormalization()(conv20)
relu20 = Activation("relu")(bn20)
down5 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu20)
conv21 = Conv2D(32, 3, activation="linear", padding="same", strides=1)(down5)
bn21 = BatchNormalization()(conv21)
relu21 = Activation("relu")(bn21)
down6 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu21)
conv22 = Conv2D(64, 3, activation="linear", padding="same", strides=1)(down6)
bn22 = BatchNormalization()(conv22)
relu22 = Activation("relu")(bn22)
down7 = MaxPooling2D(pool_size=(4, 4), strides=4)(relu22)
flatten1 = Flatten()(down7)
concat5 = concatenate([gap1, gap2, gap3, flatten1], axis=-1)
# FC layers
fc1 = Dense(1024, activation="relu")(concat5)
dropout1 = Dropout(0.5)(fc1)
fc2 = Dense(1024, activation="relu")(dropout1)
dropout2 = Dropout(0.5)(fc2)
classification_output = Dense(num_classes, activation="softmax", name="classification_output")(dropout2)
# Define the model
model = Model(inputs=inputs, outputs=[classification_output, segmentation_output])
return model
def segmentation_loss(y_true, y_pred):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
bce_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
smooth = 1e-5
intersection = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
segmentation_loss = bce_loss + 1 * dice_loss
return segmentation_loss
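For intuition, the BCE + Dice combination computed by `segmentation_loss` above can be re-derived in plain NumPy. This is a scalar sketch under simplifying assumptions (the Keras `binary_crossentropy` keeps a per-pixel loss map, while here everything is averaged to one number); `dice_plus_bce` is a made-up name.

```python
import numpy as np

def dice_plus_bce(y_true, y_pred, smooth=1e-5, eps=1e-7):
    # Scalar NumPy analogue of segmentation_loss: binary cross-entropy
    # plus (1 - Dice coefficient), both over flattened masks.
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred)
    dice_loss = 1.0 - 2.0 * (intersection + smooth) / (union + smooth)
    return bce + dice_loss
```

A perfect prediction drives both terms to (nearly) zero, while an inverted mask is penalized by both the BCE and the Dice term.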
def train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs):
checkpoint = ModelCheckpoint(
"multitask_best_weights.h5",
monitor="val_classification_output_accuracy",
save_best_only=True,
mode="max",
verbose=1,)
reduce_lr = ReduceLROnPlateau(
monitor="val_classification_output_accuracy",
factor=0.3,
patience=2,
min_delta=0.001,
mode="auto",
verbose=1,)
tensorboard = TensorBoard(log_dir="logs")
model.compile(
optimizer=Adam(learning_rate=0.001),  # "lr" was renamed to "learning_rate" in newer Keras
loss={"classification_output": "categorical_crossentropy", "segmentation_output": segmentation_loss},
metrics={"classification_output": "accuracy", "segmentation_output": "accuracy"},
loss_weights={"classification_output": 1, "segmentation_output": 1},)
history = model.fit(
X_train,
{"classification_output": y_train_class, "segmentation_output": y_train_seg},
validation_data=(X_val, {"classification_output": y_val_class, "segmentation_output": y_val_seg}),
epochs=epochs,
verbose=1,
batch_size=batch_size,
callbacks=[checkpoint, reduce_lr, tensorboard],)
return history
def evaluate_model(model, X_test, y_test_class, y_test_seg):
with tf.keras.utils.custom_object_scope({"segmentation_loss": segmentation_loss}):
# Load the best model weights
best_model = load_model("multitask_best_weights.h5")
# Evaluate the model on test data
test_loss, test_class_loss, test_seg_loss, test_class_acc, test_seg_acc = best_model.evaluate(
X_test, {"classification_output": y_test_class, "segmentation_output": y_test_seg})
print("Test Classification Loss:", test_class_loss)
print("Test Segmentation Loss:", test_seg_loss)
print("Test Classification Accuracy:", test_class_acc)
print("Test Segmentation Accuracy:", test_seg_acc)
# Evaluate the model on validation data
val_loss, val_class_loss, val_seg_loss, val_class_acc, val_seg_acc = best_model.evaluate(
X_val, {'classification_output': y_val_class, 'segmentation_output': y_val_seg})
print("Validation Classification Loss:", val_class_loss)
print("Validation Segmentation Loss:", val_seg_loss)
print("Validation Classification Accuracy:", val_class_acc)
print("Validation Segmentation Accuracy:", val_seg_acc)
# Evaluate the model on training data
train_loss, train_class_loss, train_seg_loss, train_class_acc, train_seg_acc = best_model.evaluate(X_train, {'classification_output': y_train_class, 'segmentation_output': y_train_seg})
print("Train Classification Loss:", train_class_loss)
print("Train Segmentation Loss:", train_seg_loss)
print("Train Classification Accuracy:", train_class_acc)
print("Train Segmentation Accuracy:", train_seg_acc)
# Return test classification accuracy
return test_class_acc
def plot_performance(history):
# Plot classification accuracy
classification_train_accuracy = history.history["classification_output_accuracy"]
classification_val_accuracy = history.history["val_classification_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_accuracy, label="Training Accuracy")
plt.plot(classification_val_accuracy, label="Validation Accuracy")
plt.title("Classification Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot classification loss
classification_train_loss = history.history["classification_output_loss"]
classification_val_loss = history.history["val_classification_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(classification_train_loss, "b", label="Training Loss")
plt.plot(classification_val_loss, "r", label="Validation Loss")
plt.title("Classification Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Plot segmentation accuracy
segmentation_train_accuracy = history.history["segmentation_output_accuracy"]
segmentation_val_accuracy = history.history["val_segmentation_output_accuracy"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_accuracy, label="Training Accuracy")
plt.plot(segmentation_val_accuracy, label="Validation Accuracy")
plt.title("Segmentation Accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# Plot segmentation loss
segmentation_train_loss = history.history["segmentation_output_loss"]
segmentation_val_loss = history.history["val_segmentation_output_loss"]
plt.figure(figsize=(7, 3))
plt.plot(segmentation_train_loss, "b", label="Training Loss")
plt.plot(segmentation_val_loss, "r", label="Validation Loss")
plt.title("Segmentation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
# Set image size
image_size = 224
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Set data folders
data_folders = [
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/train",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/val",
"/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/test",]
# Load data
X_data, y_class_labels, y_seg_labels = load_data(data_folders)
# Set image size
image_size = 224
# Split data
train_ratio = 0.2
val_ratio = 0.2
test_ratio = 0.6
X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, X_test, y_test_class, y_test_seg = split_data(
X_data, y_class_labels, y_seg_labels, train_ratio, val_ratio, test_ratio)
# Print the counts
print("Number of train images:", len(X_train))
print("Number of train binary masks:", len(y_train_seg))
print("Number of validation images:", len(X_val))
print("Number of validation binary masks:", len(y_val_seg))
print("Number of test images:", len(X_test))
print("Number of test binary masks:", len(y_test_seg))
# Build model
input_shape = (image_size, image_size, 3)
num_classes = len(labels)
model = build_model(input_shape, num_classes)
model.summary()
test_class_acc_list = []
for i in range(1):
print(f"\nTrain {i+1}:\n")
# Train model
batch_size = 16
epochs = 50
history = train_model(model, X_train, y_train_class, y_train_seg, X_val, y_val_class, y_val_seg, batch_size, epochs)
# Evaluate model on test data
test_class_acc = evaluate_model(model, X_test, y_test_class, y_test_seg)
plot_performance(history)
test_class_acc_list.append(test_class_acc)
# Calculate average test classification accuracy
average_test_class_acc = sum(test_class_acc_list) / len(test_class_acc_list)
print("Test Classification Accuracy List:", test_class_acc_list)
print("Average Test Classification Accuracy:", average_test_class_acc)
# Load and test model
saved_model_path = "/content/gdrive/MyDrive/Deep learning/FYP_2/4 Dataset Ratio 60 20 20/jit012/jit0/b_dip/model/pcbmtl_bdip.h5"
# Define segmentation loss as a custom_loss function
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
import ipywidgets as widgets
from IPython.display import display, clear_output
from ipywidgets import Output
import matplotlib.pyplot as plt
import cv2
import numpy as np
# Load the saved model
custom_objects = {"segmentation_loss": segmentation_loss}
model = load_model(saved_model_path, custom_objects=custom_objects)
# Define labels
labels = ["bridge", "excess", "good", "insuff", "no"]
# Create an Output widget to capture the output
out = Output()
# Function to preprocess and predict on the selected image
def predict_image(image_path):
# Load and preprocess the image
img = cv2.imread(image_path)
img = cv2.resize(img, (image_size, image_size))
img = np.expand_dims(img, axis=0)
# Predict using the model
class_output, seg_output = model.predict(img)
# Get the predicted class
predicted_class_index = np.argmax(class_output)
predicted_class = labels[predicted_class_index]
# Clear the previous output
with out:
clear_output(wait=True)
# Visualize the input and segmentation output
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB))
plt.title("Input Image")
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(seg_output[0, :, :, 0], cmap='gray')
plt.title("Segmentation Output")
plt.axis("off")
plt.subplot(1, 3, 3)
class_probabilities = class_output[0]
y_pos = np.arange(len(labels))
plt.barh(y_pos, class_probabilities, align='center')
plt.yticks(y_pos, labels)
plt.xlabel('Probability')
plt.title('Classification Output')
plt.gca().invert_yaxis() # Invert y-axis to display labels from top to bottom
plt.tight_layout()
# Print the predicted class
print("Predicted Class:", predicted_class)
print("Class Probabilities:")
for label, prob in zip(labels, class_probabilities):
print(f"{label}: {prob:.4f}")
plt.show()
# Create a file uploader widget
uploader = widgets.FileUpload(accept='image/*', multiple=False)
# Define a callback function to handle the uploaded file
def on_upload(change):
# Get the uploaded file
uploaded_filename = next(iter(uploader.value))
uploaded_file = uploader.value[uploaded_filename]
# Save the uploaded file
with open(uploaded_filename, 'wb') as f:
f.write(uploaded_file['content'])
# Predict and visualize the image
predict_image(uploaded_filename)
# Attach the callback function to the file uploader
uploader.observe(on_upload, names='value')
# Display the file uploader widget and the Output widget
display(uploader)
display(out)
"
The above is Python code using Keras to do inference with the multi-task learning model. The model takes an image as input and produces one classification output and one segmentation output; the code lets the user select one image and perform inference. Help me to remove the printed output but include the predicted output class name in the plot. Also, change the background color of the plot to '#303030'
|
fb7bc7151d7a0ecae5aa3b452e443a0b
|
{
"intermediate": 0.33926141262054443,
"beginner": 0.3307265639305115,
"expert": 0.3300120532512665
}
|
46,567
|
moving characters around renpy
|
785b34e0b2ea4e002d79773cd791e8d0
|
{
"intermediate": 0.3048594295978546,
"beginner": 0.3681989908218384,
"expert": 0.3269416093826294
}
|
46,568
|
make me code for arduino with servo on pin 2 and bluetooth module. the servo opens and closes the box . make the box to open on "1" in bluetooth terminal and close on "0"
|
7aa18f7a434a2e8404e73989899dadaf
|
{
"intermediate": 0.5315998792648315,
"beginner": 0.2174597680568695,
"expert": 0.25094035267829895
}
|
46,569
|
fix my code
def maximalRectangle(self, matrix: List[List[str]]) -> int:
if not matrix or not matrix[0]:
return 0
width, height = len(matrix[0]), len(matrix)
h_histogram = [0] * (width + 1)
max_area = 0
for i in range(height):
for j in range(width):
h_histogram[j] = h_histogram[j] + 1 if matrix[i][j] == '1' else 0
stack = []
for j in range(width + 1):
while stack and (j == width or stack[-1] > h_histogram[j]):
h = stack.pop()
start = stack[-1] + 1 if stack else 0
max_area = max(max_area, h * (j - start))
stack.append(h_histogram[j])
return max_area
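The bug in the snippet above: `stack` holds *heights* (`stack.append(h_histogram[j])`), yet `stack[-1] + 1` is then used as a start *index*. A corrected standalone sketch (pulled out of its class, so no `self`) pushes indices instead; the `j == width` check also becomes unnecessary because `h_histogram[width]` stays 0 and acts as a sentinel.

```python
from typing import List

def maximalRectangle(matrix: List[List[str]]) -> int:
    if not matrix or not matrix[0]:
        return 0
    width, height = len(matrix[0]), len(matrix)
    h_histogram = [0] * (width + 1)  # trailing 0 flushes the stack each row
    max_area = 0
    for i in range(height):
        # Grow/reset the column histogram for this row
        for j in range(width):
            h_histogram[j] = h_histogram[j] + 1 if matrix[i][j] == '1' else 0
        stack = []  # stack of *indices* with non-decreasing heights
        for j in range(width + 1):
            while stack and h_histogram[stack[-1]] > h_histogram[j]:
                h = h_histogram[stack.pop()]
                start = stack[-1] + 1 if stack else 0
                max_area = max(max_area, h * (j - start))
            stack.append(j)
    return max_area
```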
|
d47196dd7fb40d1cfb7cdd18899c4461
|
{
"intermediate": 0.3965253531932831,
"beginner": 0.4201783239841461,
"expert": 0.183296337723732
}
|
46,570
|
create a blender python script that shows 3 explosions with multiple bursts of breaking up and splitting and moving in a vortex to form letter V on a random motion make sure code works 100%
|
023c8ecf3950070a8b3825c8bc8b3c13
|
{
"intermediate": 0.2538433074951172,
"beginner": 0.18776395916938782,
"expert": 0.5583927035331726
}
|
46,571
|
hi
|
c40cc63b4510fa17795a383215afc755
|
{
"intermediate": 0.3246487081050873,
"beginner": 0.27135494351387024,
"expert": 0.40399640798568726
}
|
46,572
|
#!/usr/bin/env -S swipl
%-*- mode: prolog; -*-
:- module(prj4_sol, [
sublist_lengths/2,
same_length_sublists/1,
fibonacci_sublists/1,
assoc_lookup/3,
assoc_replace/3,
add_to_plus_expr/2,
named_to_op_expr/2,
named_expr_eval/2,
named_expr_to_prefix_tokens/2,
op_expr_to_prefix_tokens/2
]).
%%%%%%%%%%%%%%%%%%%%%%%%%%% sublist_lengths/2 %%%%%%%%%%%%%%%%%%%%%%%%%%%
% #1: 10-points
% sublist_lengths(List, Lengths) should succeed iff List is a list
% containing sub-lists and Lengths is a list having the same length
% as List and whose elements are the lengths of the corresponding
% sub-list in List. You may assume that all the elements of List
% are sub-lists.
sublist_lengths(_List, _Lengths) :- 'TODO'.
:-begin_tests(sublist_lengths, [blocked('TODO')]).
test(empty, [nondet]) :-
sublist_lengths([], Lengths), Lengths = [].
test(sublist_lengths1, [nondet]) :-
sublist_lengths([[a, b]], Lengths), Lengths = [2].
test(sublist_lengths3, [nondet]) :-
sublist_lengths([[2], [a, b], [x, y, z]], Lengths), Lengths = [1, 2, 3].
test(sublist_lengths_var_list, [nondet]) :-
sublist_lengths(List, [1, 2, 3]), length(List, 3).
:-end_tests(sublist_lengths).
%%%%%%%%%%%%%%%%%%%%%%%%% same_length_sublists/1 %%%%%%%%%%%%%%%%%%%%%%%%
% #2: 10-points
% same_length_sublists(List): succeeds iff all the sublists in
% List have the same length. You may assume that all the elements of
% List are sub-lists. The procedure should succeed for an empty List.
%
% *Hint*: use an auxiliary procedure.
same_length_sublists(_List) :- 'TODO'.
:-begin_tests(same_length_sublists, [blocked('TODO')]).
test(empty, [nondet]) :-
same_length_sublists([]).
test(empties, [nondet]) :-
same_length_sublists([[], [], []]).
test(empties_fail, [fail]) :-
same_length_sublists([[], [2], []]).
test(sublists1, [nondet]) :-
same_length_sublists([[[a, 2]], [[]], [c]]).
test(sublists1_fail, [fail]) :-
same_length_sublists([[a], [[]], [c, 2]]).
test(sublists3, [nondet]) :-
same_length_sublists([[a, [2], 4], [b, 5, [1]], [3, 2, c]]).
test(sublists3_fail, [fail]) :-
same_length_sublists([[a, 2, 4], [b, 5, 1], [3, [2, c]]]).
:-end_tests(same_length_sublists).
%%%%%%%%%%%%%%%%%%%%%%%%%% fibonacci_sublists/1 %%%%%%%%%%%%%%%%%%%%%%%%%
% #3: 10-points
% fibonacci_sublists(List) should succeed iff List is a list of
% sublists whose lengths have a Fibonacci relationship; i.e.
% length(List[i]) == length(List[i-2]) + length(List[i-1])
% where List[i] is the sublist at index i in List. You may
% assume that List contains only sublists. The procedure
% should trivially succeed if the length of List is < 3.
fibonacci_sublists(_List) :- 'TODO'.
:-begin_tests(fibonacci_sublists, [blocked('TODO')]).
test(empty, [nondet]) :-
fibonacci_sublists([]).
test(zero, [nondet]) :-
fibonacci_sublists([[]]).
test(one, [nondet]) :-
fibonacci_sublists([[], [a]]).
test(two, [nondet]) :-
fibonacci_sublists([[], [a], [c]]).
test(three, [nondet]) :-
fibonacci_sublists([[], [a], [c], [a, c]]).
test(three_fail, [fail]) :-
fibonacci_sublists([[], [a], [c], [a, c, c]]).
test(four, [nondet]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3] ]).
test(four_fail, [fail]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3, 4] ]).
test(ten, [nondet]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3], [1, 2, 3, 4, 5],
Eight, Thirteen, TwentyOne, ThirtyFour, FiftyFive]),
length(Eight, 8),
length(Thirteen, 13),
length(TwentyOne, 21),
length(ThirtyFour, 34),
length(FiftyFive, 55).
test(ten_fail, [fail]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3], [1, 2, 3, 4, 5],
Eight, Thirteen, TwentyOne, ThirtyFour, FiftySix]),
!, %prevent backtracking
length(Eight, 8),
length(Thirteen, 13),
length(TwentyOne, 21),
length(ThirtyFour, 34),
length(FiftySix, 56).
test(four_start_22, [nondet]) :-
fibonacci_sublists([[1, 2], [1, 2], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6]]).
test(four_start_22_fail, [fail]) :-
fibonacci_sublists([[1, 2], [1, 2], [1, 2, 3, 4], [1, 2, 3, 4, 5]]).
:-end_tests(fibonacci_sublists).
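% A possible sketch for fibonacci_sublists/1: walk the list carrying the
% lengths of the two preceding sublists; is/2 is the only arithmetic used.
% Kept in a block comment so the 'TODO' skeleton above stays active.
/*
fibonacci_sublists([]).
fibonacci_sublists([_]).
fibonacci_sublists([A, B|Rest]) :-
    length(A, LenA),
    length(B, LenB),
    fibonacci_sublists_aux(Rest, LenA, LenB).

% each further sublist's length is the sum of the previous two lengths
fibonacci_sublists_aux([], _LenA, _LenB).
fibonacci_sublists_aux([C|Rest], LenA, LenB) :-
    LenC is LenA + LenB,
    length(C, LenC),
    fibonacci_sublists_aux(Rest, LenB, LenC).
*/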
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% assoc_lookup/3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%
% #4: 5-points
% An "association list" is a list of (Key, Value) pairs and can
% be used as a dictionary.
% assoc_lookup(Assoc, Key, Value): succeeds if Value is associated with
% Key in association list Assoc.
% *Restriction*: you may not use recursion.
% *Hint* your solution should simply call a Prolog built-in.
assoc_lookup(_Assoc, _Key, _Value) :- 'TODO'.
:-begin_tests(assoc_lookup, [blocked('TODO')]).
test(empty, [fail]) :-
assoc_lookup([], key, _Value).
test(first, [nondet]) :-
assoc_lookup([(key, 42), (a, 22), (b, 33)], key, Value),
42 = Value.
test(last, [nondet]) :-
assoc_lookup([(a, 22), (b, 33), (key, 42)], key, Value),
Value = 42.
test(mid, [nondet]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33)], key, Value),
42 = Value.
test(multi, [nondet]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33), (key, 22) ], key, Value),
Value = 42.
test(multi_fail, [fail]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33), (key, 22) ], key, Value),
43 = Value.
test(bound_value, [nondet]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33), (key, 22) ], key, 22).
test(unbound_key, [nondet]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33), (key, 22) ], Key, 33),
b = Key.
:-end_tests(assoc_lookup).
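% A possible sketch for assoc_lookup/3: as the hint says, a single call to
% the member/2 built-in suffices, and backtracking enumerates later pairs.
% Kept in a block comment so the 'TODO' skeleton above stays active.
/*
assoc_lookup(Assoc, Key, Value) :-
    member((Key, Value), Assoc).
*/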
%%%%%%%%%%%%%%%%%%%%%%%%%%%% assoc_replace/3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%
% #5: 10-points
% assoc_replace(AtomIntList, Assoc, ListZ): given a list AtomIntList
% containing Prolog atoms and integers, match ListZ with the list
% which is the result of replacing all atoms in AtomIntList with their
% value in assoc-list Assoc.
%
% *Hints*: Use assoc_lookup/3 from your answer to the previous
% exercise and Prolog's built-ins atom(A) which succeeds if A is an
% atom and integer(I) which succeeds if I is an integer.
assoc_replace(_AtomIntList, _Assoc, _ListZ) :- 'TODO'.
:-begin_tests(assoc_replace, [blocked('TODO')]).
test(empty, [nondet]) :-
assoc_replace([], [(a,22), (b, 33), (c, 42)], Z),
Z = [].
test(single, [nondet]) :-
assoc_replace([c], [(a,22), (b, 33), (c, 42)], Z),
Z = [42].
test(none, [nondet]) :-
assoc_replace([77], [(a,22), (b, 33), (c, 42)], Z),
Z = [77].
test(multi, [nondet]) :-
assoc_replace([c, a, 8, b, 44], [(a,22), (b, 33), (c, 42)], Z),
Z = [42, 22, 8, 33, 44].
test(multi_fail, [fail]) :-
assoc_replace([c, a, d, b, 44], [(a,22), (b, 33), (c, 42)], Z),
Z = [42, 22, d, 33, 44].
:-end_tests(assoc_replace).
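% A possible sketch for assoc_replace/3, distinguishing atoms (looked up
% via assoc_lookup/3) from integers (copied through unchanged), per the
% hints. Kept in a block comment so the 'TODO' skeleton above stays active.
/*
assoc_replace([], _Assoc, []).
assoc_replace([X|Xs], Assoc, [V|Zs]) :-
    atom(X),
    assoc_lookup(Assoc, X, V),
    assoc_replace(Xs, Assoc, Zs).
assoc_replace([X|Xs], Assoc, [X|Zs]) :-
    integer(X),
    assoc_replace(Xs, Assoc, Zs).
*/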
%%%%%%%%%%%%%%%%%%%%%%%%%%% add_to_plus_expr/2 %%%%%%%%%%%%%%%%%%%%%%%%%%
% #6: 10-points
% An add-expr is an integer or of the form add(X, Y), where X and
% Y are add-expr's. A plus-expr is an integer or of the form +(X, Y),
% where X and Y are plus-expr's (note that +(X, Y) can also be
% written in Prolog as X + Y).
%
% add_to_plus_expr(AddExpr, PlusExpr) should succeed iff PlusExpr
% is the same as AddExpr with each add replaced by +.
%
% *Hint*: the Prolog built-in integer(I) succeeds iff I is an integer.
add_to_plus_expr(_AddExprI, _PlusExpr) :- 'TODO'.
:-begin_tests(add_to_plus_expr, [blocked('TODO')]).
test(int, [nondet]) :-
add_to_plus_expr(42, Z), Z = 42.
test(add_2_3, [nondet]) :-
add_to_plus_expr(add(2, 3), Z), Z = 2 + 3.
test(add_add_2_3_add_4_5, [nondet]) :-
add_to_plus_expr(add(add(2, 3), add(4, 5)), Z), Z = (2 + 3) + (4 + 5).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
add_to_plus_expr(add(add(add(add(1, 2), 3), 4), 5), Z),
Z = 1 + 2 + 3 + 4 + 5.
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
add_to_plus_expr(add(add(add(add(1, 2), 3), 4), 5), Z),
Z = 1 + 2 + 3 + (4 + 5).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
add_to_plus_expr(add(1, add(2, add(3, add(4, 5)))), Z),
Z = 1 + (2 + (3 + (4 + 5))).
% reversed instantiation patterns
test(rev_int, [nondet]) :-
add_to_plus_expr(Z, 42), Z = 42.
test(rev_add_2_3, [nondet]) :-
add_to_plus_expr(Z, 2 + 3), Z = add(2, 3).
test(rev_add_add_2_3_add_4_5, [nondet]) :-
add_to_plus_expr(Z, (2 + 3) + (4 + 5)), Z = add(add(2, 3), add(4, 5)).
test(rev_add_add_add_add_1_2_3_4_5, [nondet]) :-
add_to_plus_expr(Z, 1 + 2 + 3 + 4 + 5),
Z = add(add(add(add(1, 2), 3), 4), 5).
test(rev_add_add_add_add_1_2_3_4_5_fail, [fail]) :-
add_to_plus_expr(Z, 1 + 2 + 3 + (4 + 5)),
Z = add(add(add(add(1, 2), 3), 4), 5).
test(rev_add_1_add_2_add_3_add_4_5, [nondet]) :-
add_to_plus_expr(Z, 1 + (2 + (3 + (4 + 5)))),
Z = add(1, add(2, add(3, add(4, 5)))).
:-end_tests(add_to_plus_expr).
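% A possible sketch for add_to_plus_expr/2: structural recursion in which
% both arguments are pattern-matched, so either instantiation direction
% works. Kept in a block comment so the 'TODO' skeleton above stays active.
/*
add_to_plus_expr(I, I) :- integer(I).
add_to_plus_expr(add(X, Y), PX + PY) :-
    add_to_plus_expr(X, PX),
    add_to_plus_expr(Y, PY).
*/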
%%%%%%%%%%%%%%%%%%%%%%%%%%% named_to_op_expr/2 %%%%%%%%%%%%%%%%%%%%%%%%%%
% #7: 10-points
% A named-expr is either an integer, or is one of add(X, Y) or
% mul(X, Y) where X and Y are named-expr's. An op-expr
% is an arithmetic expression over integers and binary operators + and
% *.
%
% named_to_op_expr(NamedExpr, OpExpr) should succeed iff OpExpr
% is the same as NamedExpr with each add and mul replaced by
% + and * respectively.
% It should be possible to run this procedure with either one or
% both arguments instantiated.
named_to_op_expr(_NamedExpr, _OpExpr) :- 'TODO'.
:-begin_tests(named_to_op_expr, [blocked('TODO')]).
test(int, [nondet]) :-
NamedExpr = 42, OpExpr = 42,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_2_3, [nondet]) :-
NamedExpr = add(2, 3), OpExpr = 2 + 3,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_add_2_3_add_4_5, [nondet]) :-
NamedExpr = add(add(2, 3), add(4, 5)), OpExpr = (2 + 3) + (4 + 5),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
NamedExpr = add(add(add(add(1, 2), 3), 4), 5), OpExpr = 1 + 2 + 3 + 4 + 5,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
NamedExpr = add(add(add(add(1, 2), 3), 4), 5), OpExpr = 1 + 2 + 3 + (4 + 5),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
NamedExpr = add(1, add(2, add(3, add(4, 5)))),
OpExpr = 1 + (2 + (3 + (4 + 5))),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_2_3, [nondet]) :-
NamedExpr = mul(2, 3), OpExpr = 2 * 3,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_mul_2_3_mul_4_5, [nondet]) :-
NamedExpr = mul(mul(2, 3), mul(4, 5)), OpExpr = (2 * 3) * (4 * 5),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
NamedExpr = mul(mul(mul(mul(1, 2), 3), 4), 5), OpExpr = 1 * 2 * 3 * 4 * 5,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
NamedExpr = mul(mul(mul(mul(1, 2), 3), 4), 5),
OpExpr = 1 * 2 * 3 * (4 * 5),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
NamedExpr = mul(1, mul(2, mul(3, mul(4, 5)))),
OpExpr = 1 * (2 * (3 * (4 * 5))),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_add_1_mul_2_3, [nondet]) :-
NamedExpr = mul(add(1, 2), 3), OpExpr = (1 + 2) * 3,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_1_mul_2_3, [nondet]) :-
NamedExpr = add(1, mul(2, 3)), OpExpr = 1 + 2*3,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_mul_1_2_add_3_4, [nondet]) :-
NamedExpr = add(mul(1, 2), mul(3, 4)), OpExpr = 1*2 + 3*4,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_add_1_2_mul_3_4, [nondet]) :-
NamedExpr = mul(add(1, 2), add(3, 4)), OpExpr = (1 + 2) * (3 + 4),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
% reversed instantiation patterns
test(rev_int, [nondet]) :-
NamedExpr = 42, OpExpr = 42,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_2_3, [nondet]) :-
NamedExpr = add(2, 3), OpExpr = 2 + 3,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_add_2_3_add_4_5, [nondet]) :-
NamedExpr = add(add(2, 3), add(4, 5)), OpExpr = (2 + 3) + (4 + 5),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_add_add_add_1_2_3_4_5, [nondet]) :-
NamedExpr = add(add(add(add(1, 2), 3), 4), 5), OpExpr = 1 + 2 + 3 + 4 + 5,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_add_add_add_1_2_3_4_5_fail, [fail]) :-
NamedExpr = add(add(add(add(1, 2), 3), 4), 5), OpExpr = 1 + 2 + 3 + (4 + 5),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_1_add_2_add_3_add_4_5, [nondet]) :-
NamedExpr = add(1, add(2, add(3, add(4, 5)))),
OpExpr = 1 + (2 + (3 + (4 + 5))),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_2_3, [nondet]) :-
NamedExpr = mul(2, 3), OpExpr = 2 * 3,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_mul_2_3_mul_4_5, [nondet]) :-
NamedExpr = mul(mul(2, 3), mul(4, 5)), OpExpr = (2 * 3) * (4 * 5),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
NamedExpr = mul(mul(mul(mul(1, 2), 3), 4), 5), OpExpr = 1 * 2 * 3 * 4 * 5,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
NamedExpr = mul(mul(mul(mul(1, 2), 3), 4), 5),
OpExpr = 1 * 2 * 3 * (4 * 5),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
NamedExpr = mul(1, mul(2, mul(3, mul(4, 5)))),
OpExpr = 1 * (2 * (3 * (4 * 5))),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_add_1_mul_2_3, [nondet]) :-
NamedExpr = mul(add(1, 2), 3), OpExpr = (1 + 2) * 3,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_1_mul_2_3, [nondet]) :-
NamedExpr = add(1, mul(2, 3)), OpExpr = 1 + 2*3,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_mul_1_2_add_3_4, [nondet]) :-
NamedExpr = add(mul(1, 2), mul(3, 4)), OpExpr = 1*2 + 3*4,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_add_1_2_mul_3_4, [nondet]) :-
NamedExpr = mul(add(1, 2), add(3, 4)), OpExpr = (1 + 2) * (3 + 4),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
:-end_tests(named_to_op_expr).
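% A possible sketch for named_to_op_expr/2: the same shape as
% add_to_plus_expr/2, with one clause per operator, so it runs with either
% argument instantiated. Kept in a block comment so the 'TODO' skeleton
% above stays active.
/*
named_to_op_expr(I, I) :- integer(I).
named_to_op_expr(add(X, Y), OX + OY) :-
    named_to_op_expr(X, OX),
    named_to_op_expr(Y, OY).
named_to_op_expr(mul(X, Y), OX * OY) :-
    named_to_op_expr(X, OX),
    named_to_op_expr(Y, OY).
*/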
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% named_expr_eval/2 %%%%%%%%%%%%%%%%%%%%%%%%%
% #8: 10-points
% named_expr_eval(NamedExpr, Value): Value matches the result of evaluating
% named-expr NamedExpr (named-expr is as in the previous question, add
% should add its operands and mul should multiply them).
%
% *Hint*: combine your solution to the previous exercise with is/2.
named_expr_eval(_NamedExpr, _Value) :- 'TODO'.
:-begin_tests(named_expr_eval, [blocked('TODO')]).
test(int, [nondet]) :-
named_expr_eval(42, 42).
test(add_2_3, [nondet]) :-
named_expr_eval(add(2, 3), 5).
test(add_add_2_3_add_4_5, [nondet]) :-
named_expr_eval(add(add(2, 3), add(4, 5)), 14).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
named_expr_eval(add(add(add(add(1, 2), 3), 4), 5), 15).
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
named_expr_eval(add(add(add(add(1, 2), 3), 4), 5), 16).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
named_expr_eval(add(1, add(2, add(3, add(4, 5)))), 15).
test(mul_2_3, [nondet]) :-
named_expr_eval(mul(2, 3), 6).
test(mul_mul_2_3_mul_4_5, [nondet]) :-
named_expr_eval(mul(mul(2, 3), mul(4, 5)), 120).
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
named_expr_eval(mul(mul(mul(mul(1, 2), 3), 4), 5), 120).
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
named_expr_eval(mul(mul(mul(mul(1, 2), 3), 4), 5), 121).
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
named_expr_eval(mul(1, mul(2, mul(3, mul(4, 5)))), 120).
test(mul_add_1_mul_2_3, [nondet]) :-
named_expr_eval(mul(add(1, 2), 3), 9).
test(add_1_mul_2_3, [nondet]) :-
named_expr_eval(add(1, mul(2, 3)), 7).
test(add_mul_1_2_add_3_4, [nondet]) :-
named_expr_eval(add(mul(1, 2), mul(3, 4)), 14).
test(mul_add_1_2_mul_3_4, [nondet]) :-
named_expr_eval(mul(add(1, 2), add(3, 4)), 21).
:-end_tests(named_expr_eval).
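% A possible sketch for named_expr_eval/2, combining named_to_op_expr/2
% with is/2 as the hint suggests. Kept in a block comment so the 'TODO'
% skeleton above stays active.
/*
named_expr_eval(NamedExpr, Value) :-
    named_to_op_expr(NamedExpr, OpExpr),
    Value is OpExpr.
*/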
%%%%%%%%%%%%%%%%%%%%% named_expr_to_prefix_tokens/2 %%%%%%%%%%%%%%%%%%%%%
% #9: 15-points
% named_expr_to_prefix_tokens(NamedExpr, PrefixTokens): PrefixTokens is
% a list of the tokens in NamedExpr in prefix notation.
%
% *Hint*: use append/3.
named_expr_to_prefix_tokens(_NamedExpr, _PrefixTokens) :- 'TODO'.
:-begin_tests(named_expr_to_prefix_tokens, [blocked('TODO')]).
test(int, [nondet]) :-
named_expr_to_prefix_tokens(42, [42]).
test(add_2_3, [nondet]) :-
named_expr_to_prefix_tokens(add(2, 3), [add, 2, 3]).
test(add_add_2_3_add_4_5, [nondet]) :-
named_expr_to_prefix_tokens(add(add(2, 3), add(4, 5)),
[add, add, 2, 3, add, 4, 5]).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
named_expr_to_prefix_tokens(add(add(add(add(1, 2), 3), 4), 5),
[add, add, add, add, 1, 2, 3, 4, 5]).
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
named_expr_to_prefix_tokens(add(add(add(add(1, 2), 3), 4), 5),
[add, add, add, 1, 2, 3, 4, 5]).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
named_expr_to_prefix_tokens(add(1, add(2, add(3, add(4, 5)))),
[add, 1, add, 2, add, 3, add, 4, 5]).
test(mul_2_3, [nondet]) :-
named_expr_to_prefix_tokens(mul(2, 3), [mul, 2, 3]).
test(mul_mul_2_3_mul_4_5, [nondet]) :-
named_expr_to_prefix_tokens(mul(mul(2, 3), mul(4, 5)),
[mul, mul, 2, 3, mul, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
named_expr_to_prefix_tokens(mul(mul(mul(mul(1, 2), 3), 4), 5),
[mul, mul, mul, mul, 1, 2, 3, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
named_expr_to_prefix_tokens(mul(mul(mul(mul(1, 2), 3), 4), 5),
[mul, mul, mul, 1, 2, 3, 4, 5]).
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
named_expr_to_prefix_tokens(mul(1, mul(2, mul(3, mul(4, 5)))),
[mul, 1, mul, 2, mul, 3, mul, 4, 5]).
test(mul_add_1_2_3, [nondet]) :-
named_expr_to_prefix_tokens(mul(add(1, 2), 3), [mul, add, 1, 2, 3]).
test(add_1_mul_2_3, [nondet]) :-
named_expr_to_prefix_tokens(add(1, mul(2, 3)), [add, 1, mul, 2, 3]).
test(add_mul_1_2_add_3_4, [nondet]) :-
named_expr_to_prefix_tokens(add(mul(1, 2), mul(3, 4)),
[add, mul, 1, 2, mul, 3, 4]).
test(mul_add_1_2_mul_3_4, [nondet]) :-
named_expr_to_prefix_tokens(mul(add(1, 2), add(3, 4)),
[mul, add, 1, 2, add, 3, 4]).
:-end_tests(named_expr_to_prefix_tokens).
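% A possible sketch for named_expr_to_prefix_tokens/2: emit the operator
% name, then append the prefix tokens of the two operands, using append/3
% as hinted. Kept in a block comment so the 'TODO' skeleton above stays
% active.
/*
named_expr_to_prefix_tokens(I, [I]) :- integer(I).
named_expr_to_prefix_tokens(add(X, Y), [add|Tokens]) :-
    named_expr_to_prefix_tokens(X, XTokens),
    named_expr_to_prefix_tokens(Y, YTokens),
    append(XTokens, YTokens, Tokens).
named_expr_to_prefix_tokens(mul(X, Y), [mul|Tokens]) :-
    named_expr_to_prefix_tokens(X, XTokens),
    named_expr_to_prefix_tokens(Y, YTokens),
    append(XTokens, YTokens, Tokens).
*/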
%%%%%%%%%%%%%%%%%%%%%% op_expr_to_prefix_tokens/2 %%%%%%%%%%%%%%%%%%%%%%
% #10: 10-points
% op_expr_to_prefix_tokens(OpExpr, PrefixTokens): given an OpExpr
% involving integers, + and *, set PrefixTokens to a list containing its
% tokens in prefix notation.
%
% *Restriction*: must be implemented using *only* earlier procedures;
% cannot directly use recursion or Prolog built-ins.
op_expr_to_prefix_tokens(_OpExpr, _PrefixTokens) :- 'TODO'.
:-begin_tests(op_expr_to_prefix_tokens, [blocked('TODO')]).
test(int, [nondet]) :-
op_expr_to_prefix_tokens(42, [42]).
test(add_2_3, [nondet]) :-
op_expr_to_prefix_tokens(+(2, 3), [+, 2, 3]).
test(add_add_2_3_add_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(+(2, 3), +(4, 5)),
[+, +, 2, 3, +, 4, 5]).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(+(+(+(1, 2), 3), 4), 5),
[+, +, +, +, 1, 2, 3, 4, 5]).
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
op_expr_to_prefix_tokens(+(+(+(+(1, 2), 3), 4), 5),
[+, +, +, 1, 2, 3, 4, 5]).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(1, +(2, +(3, +(4, 5)))),
[+, 1, +, 2, +, 3, +, 4, 5]).
test(mul_2_3, [nondet]) :-
op_expr_to_prefix_tokens(*(2, 3), [*, 2, 3]).
test(mul_mul_2_3_mul_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(*(2, 3), *(4, 5)),
[*, *, 2, 3, *, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(*(*(*(1, 2), 3), 4), 5),
[*, *, *, *, 1, 2, 3, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
op_expr_to_prefix_tokens(*(*(*(*(1, 2), 3), 4), 5),
[*, *, *, 1, 2, 3, 4, 5]).
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(1, *(2, *(3, *(4, 5)))),
[*, 1, *, 2, *, 3, *, 4, 5]).
test(mul_add_1_2_3, [nondet]) :-
op_expr_to_prefix_tokens(*(+(1, 2), 3), [*, +, 1, 2, 3]).
test(add_1_mul_2_3, [nondet]) :-
op_expr_to_prefix_tokens(+(1, *(2, 3)), [+, 1, *, 2, 3]).
test(add_mul_1_2_add_3_4, [nondet]) :-
op_expr_to_prefix_tokens(+(*(1, 2), *(3, 4)),
[+, *, 1, 2, *, 3, 4]).
test(mul_add_1_2_mul_3_4, [nondet]) :-
op_expr_to_prefix_tokens(*(+(1, 2), +(3, 4)),
[*, +, 1, 2, +, 3, 4]).
:-end_tests(op_expr_to_prefix_tokens).
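% A possible sketch for op_expr_to_prefix_tokens/2 built solely from
% earlier procedures, honoring the restriction: map OpExpr back to a
% named-expr, flatten it to prefix tokens, then substitute the + and *
% atoms for add and mul via assoc_replace/3. Kept in a block comment so
% the 'TODO' skeleton above stays active.
/*
op_expr_to_prefix_tokens(OpExpr, PrefixTokens) :-
    named_to_op_expr(NamedExpr, OpExpr),
    named_expr_to_prefix_tokens(NamedExpr, NamedTokens),
    assoc_replace(NamedTokens, [(add, +), (mul, *)], PrefixTokens).
*/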
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% main/0 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
main :-
current_prolog_flag(argv, Argv),
(length(Argv, 0) -> run_tests ; run_tests(Argv)).
:-initialization(main, main).
|
16337b9f577ab3023c8c74280c893151
|
{
"intermediate": 0.2771926820278168,
"beginner": 0.32356879115104675,
"expert": 0.39923855662345886
}
|
46,573
|
How can I integrate next-on-pages to an existing project?
|
4a666313f0cfcb347b4d90f044e24274
|
{
"intermediate": 0.4371550977230072,
"beginner": 0.16390034556388855,
"expert": 0.39894458651542664
}
|
46,574
|
How can I integrate @cloudflare/next-on-pages to an existing project?
|
cb373253c9dd8f45033c092f0a48a382
|
{
"intermediate": 0.6099856495857239,
"beginner": 0.14418187737464905,
"expert": 0.2458324283361435
}
|
46,575
|
hello
|
7cbd1921486aab8e0f66e2b8b00c625c
|
{
"intermediate": 0.32064199447631836,
"beginner": 0.28176039457321167,
"expert": 0.39759764075279236
}
|
46,576
|
import asyncio, socket, pickle, threading
from kivy.clock import Clock
from kivy.uix.gridlayout import GridLayout
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.scrollview import ScrollView
from kivy.uix.boxlayout import BoxLayout
from kivymd.app import MDApp
from discord.ext import commands
import discord
class DiscordGUI(BoxLayout):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.orientation = 'vertical'
self.padding = [10, 10, 10, 10]
intents = discord.Intents.default()
intents.typing = False
intents.presences = False
self.bot = commands.Bot(command_prefix="!", intents=intents)
self.channels = []
self.selected_channel = None
self.match_channel = None
self.bot_token_entry = TextInput(hint_text="Bot Token:", multiline=False)
self.server_id_entry = TextInput(hint_text="Server ID:", multiline=False)
self.add_widget(self.bot_token_entry)
self.add_widget(self.server_id_entry)
self.add_widget(Button(text="Start Discord Bot", on_press=self.run_bot))
self.fetch_button = Button(text="Fetch Channels", on_press=self.fetch_channels, disabled=True)
self.add_widget(self.fetch_button)
self.listen_button = Button(text="Listen", on_press=self.listen_server, disabled=True)
self.add_widget(self.listen_button)
self.channel_layout = GridLayout(cols=4, size_hint_y=None)
self.channel_layout.bind(minimum_height=self.channel_layout.setter('height'))
self.channel_buttons = ScrollView()
self.channel_buttons.add_widget(self.channel_layout)
self.add_widget(self.channel_buttons)
def run_bot(self, instance):
loop = asyncio.get_event_loop()
self.bot_task = loop.create_task(self.bot.start(self.bot_token_entry.text.strip()))
Clock.schedule_interval(lambda dt: loop.run_until_complete(asyncio.sleep(0)), 0.01)
self.fetch_button.disabled = False
def fetch_channels(self, instance):
loop = asyncio.get_event_loop()
loop.create_task(self._fetch_channels())
async def _fetch_channels(self):
guild_id = int(self.server_id_entry.text.strip())
guild = self.bot.get_guild(guild_id)
if guild:
self.channels = [(channel.name, channel.id) for channel in guild.text_channels]
Clock.schedule_once(lambda dt: self.update_buttons())
def update_buttons(self):
for widget in self.channel_layout.children[:]:
if isinstance(widget, Button) and widget.text in [channel[0] for channel in self.channels]:
self.channel_layout.remove_widget(widget)
for channel in self.channels:
button = Button(text=channel[0], size_hint_y=None, height=50)
button.bind(on_press=self.toggle_button)
self.channel_layout.add_widget(button)
self.listen_button.disabled = False
def toggle_button(self, instance):
if self.selected_channel:
self.selected_channel.background_color = (1, 1, 1, 1)
instance.background_color = (0, 0, 0, 1)
self.selected_channel = instance
self.match_channel = next(channel[1] for channel in self.channels if channel[0] == instance.text)
def listen_server(self, instance):
if self.match_channel:
listen_thread = threading.Thread(target=self._listen_server)
listen_thread.start()
def _listen_server(self):
server_address = ('localhost', 12345)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.connect(server_address)
while True:
data = sock.recv(1024)
if not data:
break
shared_dict = pickle.loads(data)
Clock.schedule_once(lambda dt: self.send_message_to_discord(shared_dict))
def send_message_to_discord(self, match_data):
if hasattr(self.selected_channel, 'text'):
channel_id = self.match_channel
channel = self.bot.get_channel(channel_id)
if channel:
asyncio.run_coroutine_threadsafe(channel.send(str(match_data)), self.bot.loop)
class DiscordApp(MDApp):
def build(self):
self.gui = DiscordGUI()
return self.gui
def on_stop(self):
self.gui.bot_task.cancel()
if __name__ == "__main__":
DiscordApp().run()
|
46ad016522323325eb23bd1291fdcddd
|
{
"intermediate": 0.25978973507881165,
"beginner": 0.5837438702583313,
"expert": 0.15646637976169586
}
|
46,577
|
in this js code how to access state from event listener
class Game {
this.state = { x: 0 }
|
627145f0d82ef08b7b89e03f111c4743
|
{
"intermediate": 0.42724597454071045,
"beginner": 0.38464123010635376,
"expert": 0.1881127804517746
}
|
46,578
|
in this js code how to access state from event listener
class Game {
this.state = {
phase: "aiming",
}
bombGrabAreaDOM.addEventListener("mousedown", function(e) {
});
}
|
2fe3a4b2e7d7647f23fe6294ecb61f53
|
{
"intermediate": 0.5866776704788208,
"beginner": 0.23540295660495758,
"expert": 0.17791937291622162
}
|
46,579
|
Rewrite this timeline but make it more detailed, descriptive, and imaginative.
"
|
a57985297a57658a987447943d334a62
|
{
"intermediate": 0.3286379873752594,
"beginner": 0.3434045612812042,
"expert": 0.3279574513435364
}
|
46,580
|
Implement all the procedures specified in prj4_sol.pro using swipl Prolog. You may define auxiliary procedures as needed.
The resulting prj4_sol.pro should have all tests unblocked.
You are not allowed to use any non-logical features of Prolog other than arithmetic procedures like is/2, =:=/2 or =</2. Non-logical features include the explicit cut !, implicit cut within ->, assert, retract, record, etc.
Each procedure proc that you are required to write has a skeleton rule of the form: proc(_Arg1Name, ...) :- 'TODO'. Replace this skeleton line with facts and rules for the procedure.
The tests provided in the prj4_sol.pro are currently blocked. To run the tests for an individual procedure, unblock the tests by removing the blocked('TODO') argument.
prj4_sol.pro:
#!/usr/bin/env -S swipl
%-*- mode: prolog; -*-
:- module(prj4_sol, [
sublist_lengths/2,
same_length_sublists/1,
fibonacci_sublists/1,
assoc_lookup/3,
assoc_replace/3,
add_to_plus_expr/2,
named_to_op_expr/2,
named_expr_eval/2,
named_expr_to_prefix_tokens/2,
op_expr_to_prefix_tokens/2
]).
%%%%%%%%%%%%%%%%%%%%%%%%%%% sublist_lengths/2 %%%%%%%%%%%%%%%%%%%%%%%%%%%
% #1: 10-points
% sublist_lengths(List, Lengths) should succeed iff List is a list
% containing sub-lists and Lengths is a list having the same length
% as List and whose elements are the lengths of the corresponding
% sub-list in List. You may assume that all the elements of List
% are sub-lists.
sublist_lengths(_List, _Lengths) :- 'TODO'.
:-begin_tests(sublist_lengths, [blocked('TODO')]).
test(empty, [nondet]) :-
sublist_lengths([], Lengths), Lengths = [].
test(sublist_lengths1, [nondet]) :-
sublist_lengths([[a, b]], Lengths), Lengths = [2].
test(sublist_lengths3, [nondet]) :-
sublist_lengths([[2], [a, b], [x, y, z]], Lengths), Lengths = [1, 2, 3].
test(sublist_lengths_var_list, [nondet]) :-
sublist_lengths(List, [1, 2, 3]), length(List, 3).
:-end_tests(sublist_lengths).
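% A possible sketch for sublist_lengths/2: recurse in lock-step over both
% lists, relating each sublist to its length via length/2 (this also
% covers the var_list test, since length/2 can generate a list of a given
% length). Kept in a block comment so the 'TODO' skeleton above stays
% active.
/*
sublist_lengths([], []).
sublist_lengths([Sub|Subs], [Len|Lens]) :-
    length(Sub, Len),
    sublist_lengths(Subs, Lens).
*/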
%%%%%%%%%%%%%%%%%%%%%%%%% same_length_sublists/1 %%%%%%%%%%%%%%%%%%%%%%%%
% #2: 10-points
% same_length_sublists(List): succeed iff all the sublists in
% List have the same length. You may assume that all the elements of
% List are sub-lists. The procedure should succeed for an empty List.
%
% *Hint*: use an auxiliary procedure.
same_length_sublists(_List) :- 'TODO'.
:-begin_tests(same_length_sublists, [blocked('TODO')]).
test(empty, [nondet]) :-
same_length_sublists([]).
test(empties, [nondet]) :-
same_length_sublists([[], [], []]).
test(empties_fail, [fail]) :-
same_length_sublists([[], [2], []]).
test(sublists1, [nondet]) :-
same_length_sublists([[[a, 2]], [[]], [c]]).
test(sublists1_fail, [fail]) :-
same_length_sublists([[a], [[]], [c, 2]]).
test(sublists3, [nondet]) :-
same_length_sublists([[a, [2], 4], [b, 5, [1]], [3, 2, c]]).
test(sublists3_fail, [fail]) :-
same_length_sublists([[a, 2, 4], [b, 5, 1], [3, [2, c]]]).
:-end_tests(same_length_sublists).
%%%%%%%%%%%%%%%%%%%%%%%%%% fibonacci_sublists/1 %%%%%%%%%%%%%%%%%%%%%%%%%
% #3: 10-points
% fibonacci_sublists(List) should succeed iff List is a list of
% sublists whose lengths have a Fibonacci relationship; i.e.
% length(List[i]) == length(List[i-2]) + length(List[i-1])
% where List[i] is the sublist at index i in List. You may
% assume that List contains only sublists. The procedure
% should trivially succeed if the length of List is < 3.
fibonacci_sublists(_List) :- 'TODO'.
:-begin_tests(fibonacci_sublists, [blocked('TODO')]).
test(empty, [nondet]) :-
fibonacci_sublists([]).
test(zero, [nondet]) :-
fibonacci_sublists([[]]).
test(one, [nondet]) :-
fibonacci_sublists([[], [a]]).
test(two, [nondet]) :-
fibonacci_sublists([[], [a], [c]]).
test(three, [nondet]) :-
fibonacci_sublists([[], [a], [c], [a, c]]).
test(three_fail, [fail]) :-
fibonacci_sublists([[], [a], [c], [a, c, c]]).
test(four, [nondet]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3] ]).
test(four_fail, [fail]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3, 4] ]).
test(ten, [nondet]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3], [1, 2, 3, 4, 5],
Eight, Thirteen, TwentyOne, ThirtyFour, FiftyFive]),
length(Eight, 8),
length(Thirteen, 13),
length(TwentyOne, 21),
length(ThirtyFour, 34),
length(FiftyFive, 55).
test(ten_fail, [fail]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3], [1, 2, 3, 4, 5],
Eight, Thirteen, TwentyOne, ThirtyFour, FiftySix]),
!, %prevent backtracking
length(Eight, 8),
length(Thirteen, 13),
length(TwentyOne, 21),
length(ThirtyFour, 34),
length(FiftySix, 56).
test(four_start_22, [nondet]) :-
fibonacci_sublists([[1, 2], [1, 2], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6]]).
test(four_start_22_fail, [fail]) :-
fibonacci_sublists([[1, 2], [1, 2], [1, 2, 3, 4], [1, 2, 3, 4, 5]]).
:-end_tests(fibonacci_sublists).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% assoc_lookup/3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%
% #4: 5-points
% An "association list" is a list of (Key, Value) pairs and can
% be used as a dictionary.
% assoc_lookup(Assoc, Key, Value): succeeds if Value is associated with
% Key in association list Assoc.
% *Restriction*: you may not use recursion.
% *Hint* your solution should simply call a Prolog built-in.
assoc_lookup(_Assoc, _Key, _Value) :- 'TODO'.
:-begin_tests(assoc_lookup, [blocked('TODO')]).
test(empty, [fail]) :-
assoc_lookup([], key, _Value).
test(first, [nondet]) :-
assoc_lookup([(key, 42), (a, 22), (b, 33)], key, Value),
42 = Value.
test(last, [nondet]) :-
assoc_lookup([(a, 22), (b, 33), (key, 42)], key, Value),
Value = 42.
test(mid, [nondet]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33)], key, Value),
42 = Value.
test(multi, [nondet]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33), (key, 22) ], key, Value),
Value = 42.
test(multi_fail, [fail]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33), (key, 22) ], key, Value),
43 = Value.
test(bound_value, [nondet]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33), (key, 22) ], key, 22).
test(unbound_key, [nondet]) :-
assoc_lookup([(a, 22), (key, 42), (b, 33), (key, 22) ], Key, 33),
b = Key.
:-end_tests(assoc_lookup).
%%%%%%%%%%%%%%%%%%%%%%%%%%%% assoc_replace/3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%
% #5: 10-points
% assoc_replace(AtomIntList, Assoc, ListZ): given a list AtomIntList
% containing Prolog atoms and integers, match ListZ with the list
% which is the result of replacing all atoms in AtomIntList with their
% value in assoc-list Assoc.
%
% *Hints*: Use assoc_lookup/3 from your answer to the previous
% exercise and Prolog's built-ins atom(A) which succeeds if A is an
% atom and integer(I) which succeeds if I is an integer.
assoc_replace(_AtomIntList, _Assoc, _ListZ) :- 'TODO'.
:-begin_tests(assoc_replace, [blocked('TODO')]).
test(empty, [nondet]) :-
assoc_replace([], [(a,22), (b, 33), (c, 42)], Z),
Z = [].
test(single, [nondet]) :-
assoc_replace([c], [(a,22), (b, 33), (c, 42)], Z),
Z = [42].
test(none, [nondet]) :-
assoc_replace([77], [(a,22), (b, 33), (c, 42)], Z),
Z = [77].
test(multi, [nondet]) :-
assoc_replace([c, a, 8, b, 44], [(a,22), (b, 33), (c, 42)], Z),
Z = [42, 22, 8, 33, 44].
test(multi_fail, [fail]) :-
assoc_replace([c, a, d, b, 44], [(a,22), (b, 33), (c, 42)], Z),
Z = [42, 22, d, 33, 44].
:-end_tests(assoc_replace).
%%%%%%%%%%%%%%%%%%%%%%%%%%% add_to_plus_expr/2 %%%%%%%%%%%%%%%%%%%%%%%%%%
% #6: 10-points
% An add-expr is an integer or of the form add(X, Y), where X and
% Y are add-expr's. A plus-expr is an integer or of the form +(X, Y),
% where X and Y are plus-expr's (note that +(X, Y) can also be
% written in Prolog as X + Y).
%
% add_to_plus_expr(AddExpr, PlusExpr) should succeed iff PlusExpr
% is the same as AddExpr with each add replaced by +.
%
% *Hint*: the Prolog built-in integer(I) succeeds iff I is an integer.
add_to_plus_expr(_AddExprI, _PlusExpr) :- 'TODO'.
:-begin_tests(add_to_plus_expr, [blocked('TODO')]).
test(int, [nondet]) :-
add_to_plus_expr(42, Z), Z = 42.
test(add_2_3, [nondet]) :-
add_to_plus_expr(add(2, 3), Z), Z = 2 + 3.
test(add_add_2_3_add_4_5, [nondet]) :-
add_to_plus_expr(add(add(2, 3), add(4, 5)), Z), Z = (2 + 3) + (4 + 5).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
add_to_plus_expr(add(add(add(add(1, 2), 3), 4), 5), Z),
Z = 1 + 2 + 3 + 4 + 5.
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
add_to_plus_expr(add(add(add(add(1, 2), 3), 4), 5), Z),
Z = 1 + 2 + 3 + (4 + 5).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
add_to_plus_expr(add(1, add(2, add(3, add(4, 5)))), Z),
Z = 1 + (2 + (3 + (4 + 5))).
% reversed instantiation patterns
test(rev_int, [nondet]) :-
add_to_plus_expr(Z, 42), Z = 42.
test(rev_add_2_3, [nondet]) :-
add_to_plus_expr(Z, 2 + 3), Z = add(2, 3).
test(rev_add_add_2_3_add_4_5, [nondet]) :-
add_to_plus_expr(Z, (2 + 3) + (4 + 5)), Z = add(add(2, 3), add(4, 5)).
test(rev_add_add_add_add_1_2_3_4_5, [nondet]) :-
add_to_plus_expr(Z, 1 + 2 + 3 + 4 + 5),
Z = add(add(add(add(1, 2), 3), 4), 5).
test(rev_add_add_add_add_1_2_3_4_5_fail, [fail]) :-
add_to_plus_expr(Z, 1 + 2 + 3 + (4 + 5)),
Z = add(add(add(add(1, 2), 3), 4), 5).
test(rev_add_1_add_2_add_3_add_4_5, [nondet]) :-
add_to_plus_expr(Z, 1 + (2 + (3 + (4 + 5)))),
Z = add(1, add(2, add(3, add(4, 5)))).
:-end_tests(add_to_plus_expr).
%%%%%%%%%%%%%%%%%%%%%%%%%%% named_to_op_expr/2 %%%%%%%%%%%%%%%%%%%%%%%%%%
% #7: 10-points
% A named-expr is either an integer, or is one of add(X, Y) or
% mul(X, Y) where X and Y are named-expr's. An op-expr
% is an arithmetic expression over integers and binary operators + and
% *.
%
% named_to_op_expr(NamedExpr, OpExpr) should succeed iff OpExpr
% is the same as NamedExpr with each add and mul replaced by
% + and * respectively.
% It should be possible to run this procedure with either one or
% both arguments instantiated.
named_to_op_expr(_NamedExpr, _OpExpr) :- 'TODO'.
:-begin_tests(named_to_op_expr, [blocked('TODO')]).
test(int, [nondet]) :-
NamedExpr = 42, OpExpr = 42,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_2_3, [nondet]) :-
NamedExpr = add(2, 3), OpExpr = 2 + 3,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_add_2_3_add_4_5, [nondet]) :-
NamedExpr = add(add(2, 3), add(4, 5)), OpExpr = (2 + 3) + (4 + 5),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
NamedExpr = add(add(add(add(1, 2), 3), 4), 5), OpExpr = 1 + 2 + 3 + 4 + 5,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
NamedExpr = add(add(add(add(1, 2), 3), 4), 5), OpExpr = 1 + 2 + 3 + (4 + 5),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
NamedExpr = add(1, add(2, add(3, add(4, 5)))),
OpExpr = 1 + (2 + (3 + (4 + 5))),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_2_3, [nondet]) :-
NamedExpr = mul(2, 3), OpExpr = 2 * 3,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_mul_2_3_mul_4_5, [nondet]) :-
NamedExpr = mul(mul(2, 3), mul(4, 5)), OpExpr = (2 * 3) * (4 * 5),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
NamedExpr = mul(mul(mul(mul(1, 2), 3), 4), 5), OpExpr = 1 * 2 * 3 * 4 * 5,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
NamedExpr = mul(mul(mul(mul(1, 2), 3), 4), 5),
OpExpr = 1 * 2 * 3 * (4 * 5),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
NamedExpr = mul(1, mul(2, mul(3, mul(4, 5)))),
OpExpr = 1 * (2 * (3 * (4 * 5))),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_add_1_mul_2_3, [nondet]) :-
NamedExpr = mul(add(1, 2), 3), OpExpr = (1 + 2) * 3,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_1_mul_2_3, [nondet]) :-
NamedExpr = add(1, mul(2, 3)), OpExpr = 1 + 2*3,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(add_mul_1_2_add_3_4, [nondet]) :-
NamedExpr = add(mul(1, 2), mul(3, 4)), OpExpr = 1*2 + 3*4,
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
test(mul_add_1_2_mul_3_4, [nondet]) :-
NamedExpr = mul(add(1, 2), add(3, 4)), OpExpr = (1 + 2) * (3 + 4),
named_to_op_expr(NamedExpr, Z),
Z = OpExpr.
% reversed instantiation patterns
test(rev_int, [nondet]) :-
NamedExpr = 42, OpExpr = 42,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_2_3, [nondet]) :-
NamedExpr = add(2, 3), OpExpr = 2 + 3,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_add_2_3_add_4_5, [nondet]) :-
NamedExpr = add(add(2, 3), add(4, 5)), OpExpr = (2 + 3) + (4 + 5),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_add_add_add_1_2_3_4_5, [nondet]) :-
NamedExpr = add(add(add(add(1, 2), 3), 4), 5), OpExpr = 1 + 2 + 3 + 4 + 5,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_add_add_add_1_2_3_4_5_fail, [fail]) :-
NamedExpr = add(add(add(add(1, 2), 3), 4), 5), OpExpr = 1 + 2 + 3 + (4 + 5),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_1_add_2_add_3_add_4_5, [nondet]) :-
NamedExpr = add(1, add(2, add(3, add(4, 5)))),
OpExpr = 1 + (2 + (3 + (4 + 5))),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_2_3, [nondet]) :-
NamedExpr = mul(2, 3), OpExpr = 2 * 3,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_mul_2_3_mul_4_5, [nondet]) :-
NamedExpr = mul(mul(2, 3), mul(4, 5)), OpExpr = (2 * 3) * (4 * 5),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
NamedExpr = mul(mul(mul(mul(1, 2), 3), 4), 5), OpExpr = 1 * 2 * 3 * 4 * 5,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
NamedExpr = mul(mul(mul(mul(1, 2), 3), 4), 5),
OpExpr = 1 * 2 * 3 * (4 * 5),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
NamedExpr = mul(1, mul(2, mul(3, mul(4, 5)))),
OpExpr = 1 * (2 * (3 * (4 * 5))),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_add_1_mul_2_3, [nondet]) :-
NamedExpr = mul(add(1, 2), 3), OpExpr = (1 + 2) * 3,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_1_mul_2_3, [nondet]) :-
NamedExpr = add(1, mul(2, 3)), OpExpr = 1 + 2*3,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_add_mul_1_2_add_3_4, [nondet]) :-
NamedExpr = add(mul(1, 2), mul(3, 4)), OpExpr = 1*2 + 3*4,
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
test(rev_mul_add_1_2_mul_3_4, [nondet]) :-
NamedExpr = mul(add(1, 2), add(3, 4)), OpExpr = (1 + 2) * (3 + 4),
named_to_op_expr(Z, OpExpr),
Z = NamedExpr.
:-end_tests(named_to_op_expr).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% named_expr_eval/2 %%%%%%%%%%%%%%%%%%%%%%%%%
% #8: 10-points
% named_expr_eval(NamedExpr, Value): Value matches the result of evaluating
% named-expr NamedExpr (named-expr is as in the previous question, add
% should add its operands and mul should multiply them).
%
% *Hint*: combine your solution to the previous exercise with is/2.
named_expr_eval(_NamedExpr, _Value) :- 'TODO'.
:-begin_tests(named_expr_eval, [blocked('TODO')]).
test(int, [nondet]) :-
named_expr_eval(42, 42).
test(add_2_3, [nondet]) :-
named_expr_eval(add(2, 3), 5).
test(add_add_2_3_add_4_5, [nondet]) :-
named_expr_eval(add(add(2, 3), add(4, 5)), 14).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
named_expr_eval(add(add(add(add(1, 2), 3), 4), 5), 15).
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
named_expr_eval(add(add(add(add(1, 2), 3), 4), 5), 16).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
named_expr_eval(add(1, add(2, add(3, add(4, 5)))), 15).
test(mul_2_3, [nondet]) :-
named_expr_eval(mul(2, 3), 6).
test(mul_mul_2_3_mul_4_5, [nondet]) :-
named_expr_eval(mul(mul(2, 3), mul(4, 5)), 120).
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
named_expr_eval(mul(mul(mul(mul(1, 2), 3), 4), 5), 120).
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
named_expr_eval(mul(mul(mul(mul(1, 2), 3), 4), 5), 121).
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
named_expr_eval(mul(1, mul(2, mul(3, mul(4, 5)))), 120).
test(mul_add_1_mul_2_3, [nondet]) :-
named_expr_eval(mul(add(1, 2), 3), 9).
test(add_1_mul_2_3, [nondet]) :-
named_expr_eval(add(1, mul(2, 3)), 7).
test(add_mul_1_2_add_3_4, [nondet]) :-
named_expr_eval(add(mul(1, 2), mul(3, 4)), 14).
test(mul_add_1_2_mul_3_4, [nondet]) :-
named_expr_eval(mul(add(1, 2), add(3, 4)), 21).
:-end_tests(named_expr_eval).
%%%%%%%%%%%%%%%%%%%%% named_expr_to_prefix_tokens/2 %%%%%%%%%%%%%%%%%%%%%
% #9: 15-points
% named_expr_to_prefix_tokens(NamedExpr, PrefixTokens): PrefixTokens is
% a list of the tokens in NamedExpr in prefix notation.
%
% *Hint*: use append/3.
named_expr_to_prefix_tokens(_NamedExpr, _PrefixTokens) :- 'TODO'.
:-begin_tests(named_expr_to_prefix_tokens, [blocked('TODO')]).
test(int, [nondet]) :-
named_expr_to_prefix_tokens(42, [42]).
test(add_2_3, [nondet]) :-
named_expr_to_prefix_tokens(add(2, 3), [add, 2, 3]).
test(add_add_2_3_add_4_5, [nondet]) :-
named_expr_to_prefix_tokens(add(add(2, 3), add(4, 5)),
[add, add, 2, 3, add, 4, 5]).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
named_expr_to_prefix_tokens(add(add(add(add(1, 2), 3), 4), 5),
[add, add, add, add, 1, 2, 3, 4, 5]).
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
named_expr_to_prefix_tokens(add(add(add(add(1, 2), 3), 4), 5),
[add, add, add, 1, 2, 3, 4, 5]).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
named_expr_to_prefix_tokens(add(1, add(2, add(3, add(4, 5)))),
[add, 1, add, 2, add, 3, add, 4, 5]).
test(mul_2_3, [nondet]) :-
named_expr_to_prefix_tokens(mul(2, 3), [mul, 2, 3]).
test(mul_mul_2_3_mul_4_5, [nondet]) :-
named_expr_to_prefix_tokens(mul(mul(2, 3), mul(4, 5)),
[mul, mul, 2, 3, mul, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
named_expr_to_prefix_tokens(mul(mul(mul(mul(1, 2), 3), 4), 5),
[mul, mul, mul, mul, 1, 2, 3, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
named_expr_to_prefix_tokens(mul(mul(mul(mul(1, 2), 3), 4), 5),
[mul, mul, mul, 1, 2, 3, 4, 5]).
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
named_expr_to_prefix_tokens(mul(1, mul(2, mul(3, mul(4, 5)))),
[mul, 1, mul, 2, mul, 3, mul, 4, 5]).
test(mul_add_1_2_3, [nondet]) :-
named_expr_to_prefix_tokens(mul(add(1, 2), 3), [mul, add, 1, 2, 3]).
test(add_1_mul_2_3, [nondet]) :-
named_expr_to_prefix_tokens(add(1, mul(2, 3)), [add, 1, mul, 2, 3]).
test(add_mul_1_2_add_3_4, [nondet]) :-
named_expr_to_prefix_tokens(add(mul(1, 2), mul(3, 4)),
[add, mul, 1, 2, mul, 3, 4]).
test(mul_add_1_2_mul_3_4, [nondet]) :-
named_expr_to_prefix_tokens(mul(add(1, 2), add(3, 4)),
[mul, add, 1, 2, add, 3, 4]).
:-end_tests(named_expr_to_prefix_tokens).
%%%%%%%%%%%%%%%%%%%%%%% op_expr_to_prefix_expr/2 %%%%%%%%%%%%%%%%%%%%%%%
% #10: 10-points
% op_to_prefix_expr(OpExpr, PrefixTokens): Given a OpExpr involving
% integers, + and *, set PrefixTokens to a list containing its tokens
% in prefix notation.
%
% *Restriction*: must be implemented using *only* earlier procedures;
% cannot directly use recursion or Prolog built-ins.
op_expr_to_prefix_tokens(_OpExpr, _PrefixTokens) :- 'TODO'.
:-begin_tests(op_expr_to_prefix_tokens, [blocked('TODO')]).
test(int, [nondet]) :-
op_expr_to_prefix_tokens(42, [42]).
test(add_2_3, [nondet]) :-
op_expr_to_prefix_tokens(+(2, 3), [+, 2, 3]).
test(add_add_2_3_add_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(+(2, 3), +(4, 5)),
[+, +, 2, 3, +, 4, 5]).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(+(+(+(1, 2), 3), 4), 5),
[+, +, +, +, 1, 2, 3, 4, 5]).
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
op_expr_to_prefix_tokens(+(+(+(+(1, 2), 3), 4), 5),
[+, +, +, 1, 2, 3, 4, 5]).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(1, +(2, +(3, +(4, 5)))),
[+, 1, +, 2, +, 3, +, 4, 5]).
test(mul_2_3, [nondet]) :-
op_expr_to_prefix_tokens(*(2, 3), [*, 2, 3]).
test(mul_mul_2_3_mul_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(*(2, 3), *(4, 5)),
[*, *, 2, 3, *, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(*(*(*(1, 2), 3), 4), 5),
[*, *, *, *, 1, 2, 3, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
op_expr_to_prefix_tokens(*(*(*(*(1, 2), 3), 4), 5),
[*, *, *, 1, 2, 3, 4, 5]).
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(1, *(2, *(3, *(4, 5)))),
[*, 1, *, 2, *, 3, *, 4, 5]).
test(mul_add_1_2_3, [nondet]) :-
op_expr_to_prefix_tokens(*(+(1, 2), 3), [*, +, 1, 2, 3]).
test(add_1_mul_2_3, [nondet]) :-
op_expr_to_prefix_tokens(+(1, *(2, 3)), [+, 1, *, 2, 3]).
test(add_mul_1_2_add_3_4, [nondet]) :-
op_expr_to_prefix_tokens(+(*(1, 2), *(3, 4)),
[+, *, 1, 2, *, 3, 4]).
test(mul_add_1_2_mul_3_4, [nondet]) :-
op_expr_to_prefix_tokens(*(+(1, 2), +(3, 4)),
[*, +, 1, 2, +, 3, 4]).
:-end_tests(op_expr_to_prefix_tokens).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% main/0 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
main :-
current_prolog_flag(argv, Argv),
(length(Argv, 0) -> run_tests ; run_tests(Argv)).
:-initialization(main, main).
|
5a951b12a1ae3058a2140d15d108fff4
|
{
"intermediate": 0.34270182251930237,
"beginner": 0.407720148563385,
"expert": 0.24957804381847382
}
|
46,581
|
can I use multiprocessing to make the listen server run all the time once clicked?
import asyncio, socket, pickle, threading
from kivy.clock import Clock
from kivy.uix.gridlayout import GridLayout
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.scrollview import ScrollView
from kivy.uix.boxlayout import BoxLayout
from kivymd.app import MDApp
from discord.ext import commands
import discord
class DiscordGUI(BoxLayout):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.orientation = 'vertical'
self.padding = [10, 10, 10, 10]
intents = discord.Intents.default()
intents.typing = False
intents.presences = False
self.bot = commands.Bot(command_prefix="!", intents=intents)
self.channels = []
self.selected_channel = None
self.match_channel = None
self.bot_token_entry = TextInput(hint_text="Bot Token:", multiline=False)
self.server_id_entry = TextInput(hint_text="Server ID:", multiline=False)
self.add_widget(self.bot_token_entry)
self.add_widget(self.server_id_entry)
self.add_widget(Button(text="Start Discord Bot", on_press=self.run_bot))
self.fetch_button = Button(text="Fetch Channels", on_press=self.fetch_channels, disabled=True)
self.add_widget(self.fetch_button)
self.listen_button = Button(text="Listen", on_press=self.listen_server, disabled=True)
self.add_widget(self.listen_button)
self.channel_layout = GridLayout(cols=4, size_hint_y=None)
self.channel_layout.bind(minimum_height=self.channel_layout.setter('height'))
self.channel_buttons = ScrollView()
self.channel_buttons.add_widget(self.channel_layout)
self.add_widget(self.channel_buttons)
def run_bot(self, instance):
loop = asyncio.get_event_loop()
self.bot_task = loop.create_task(self.bot.start(self.bot_token_entry.text.strip()))
Clock.schedule_interval(lambda dt: loop.run_until_complete(asyncio.sleep(0)), 0.01)
self.fetch_button.disabled = False
def fetch_channels(self, instance):
loop = asyncio.get_event_loop()
loop.create_task(self._fetch_channels())
async def _fetch_channels(self):
guild_id = int(self.server_id_entry.text.strip())
guild = self.bot.get_guild(guild_id)
if guild:
self.channels = [(channel.name, channel.id) for channel in guild.text_channels]
Clock.schedule_once(lambda dt: self.update_buttons())
def update_buttons(self):
for widget in self.channel_layout.children[:]:
if isinstance(widget, Button) and widget.text in [channel[0] for channel in self.channels]:
self.channel_layout.remove_widget(widget)
for channel in self.channels:
button = Button(text=channel[0], size_hint_y=None, height=50)
button.bind(on_press=self.toggle_button)
self.channel_layout.add_widget(button)
self.listen_button.disabled = False
def toggle_button(self, instance):
if self.selected_channel:
self.selected_channel.background_color = (1, 1, 1, 1)
instance.background_color = (0, 0, 0, 1)
self.selected_channel = instance
self.match_channel = next(channel[1] for channel in self.channels if channel[0] == instance.text)
def listen_server(self, instance):
if self.match_channel:
listen_thread = threading.Thread(target=self._listen_server)
listen_thread.start()
def _listen_server(self):
server_address = ('localhost', 12345)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.connect(server_address)
while True:
data = sock.recv(1024)
if not data:
break
shared_dict = pickle.loads(data)
Clock.schedule_once(lambda dt: self.send_message_to_discord(shared_dict))
def send_message_to_discord(self, match_data):
if hasattr(self.selected_channel, 'text'):
channel_id = self.match_channel
channel = self.bot.get_channel(channel_id)
if channel:
asyncio.run_coroutine_threadsafe(channel.send(str(match_data)), self.bot.loop)
class DiscordApp(MDApp):
def build(self):
self.gui = DiscordGUI()
return self.gui
def on_stop(self):
self.gui.bot_task.cancel()
if __name__ == "__main__":
DiscordApp().run()
|
2369ff9a9fbaa703daef9d23a6802779
|
{
"intermediate": 0.267214298248291,
"beginner": 0.5658004879951477,
"expert": 0.16698522865772247
}
|
46,582
|
Write a part 2 to this timeline chronicling the events from 9/14/2099 (solar eclipse and sounding of the first trumpet of the rapture) to 2110, God's despair, disappearance and supposed suicide after losing faith in humanity
|
358e4a94df24a29d6de9e7e8443ffddc
|
{
"intermediate": 0.3324647545814514,
"beginner": 0.2613134980201721,
"expert": 0.40622174739837646
}
|
46,583
|
#!/usr/bin/env -S swipl
%-- mode: prolog; --
:- module(prj4_sol, [
sublist_lengths/2,
same_length_sublists/1,
fibonacci_sublists/1,
assoc_lookup/3,
assoc_replace/3,
add_to_plus_expr/2,
named_to_op_expr/2,
named_expr_eval/2,
named_expr_to_prefix_tokens/2,
op_expr_to_prefix_tokens/2
]).
%%%%%%%%%%%%%%%%%%%%%%%%%%% sublist_lengths/2 %%%%%%%%%%%%%%%%%%%%%%%%%%%
sublist_lengths([], []).
sublist_lengths([H|T], [Len|LT]) :- length(H, Len), sublist_lengths(T, LT).
:-begin_tests(sublist_lengths).
test(empty, [nondet]) :-
sublist_lengths([], Lengths), Lengths = [].
test(sublist_lengths1, [nondet]) :-
sublist_lengths([[a, b]], Lengths), Lengths = [2].
test(sublist_lengths3, [nondet]) :-
sublist_lengths([[2], [a, b], [x, y, z]], Lengths), Lengths = [1, 2, 3].
test(sublist_lengths_var_list, [nondet]) :-
sublist_lengths(List, [1, 2, 3]), length(List, 3).
:-end_tests(sublist_lengths).
%%%%%%%%%%%%%%%%%%%%%%%%% same_length_sublists/1 %%%%%%%%%%%%%%%%%%%%%%%%
all_same_length([]).
all_same_length([]).
all_same_length([X, Y | T]) :- length(X, L), length(Y, L), all_same_length([Y | T]).
same_length_sublists(List) :- all_same_length(List).
:-begin_tests(same_length_sublists).
test(empty, [nondet]) :-
same_length_sublists([]).
test(empties, [nondet]) :-
same_length_sublists([[], [], []]).
test(empties_fail, [fail]) :-
same_length_sublists([[], [2], []]).
test(sublists1, [nondet]) :-
same_length_sublists([[[a, 2]], [[]], [c]]).
test(sublists1_fail, [fail]) :-
same_length_sublists([[a], [[]], [c, 2]]).
test(sublists3, [nondet]) :-
same_length_sublists([[a, [2], 4], [b, 5, [1]], [3, 2, c]]).
test(sublists3_fail, [fail]) :-
same_length_sublists([[a, 2, 4], [b, 5, 1], [3, [2, c]]]).
:-end_tests(same_length_sublists).
%%%%%%%%%%%%%%%%%%%%%%%%%% fibonacci_sublists/1 %%%%%%%%%%%%%%%%%%%%%%%%%
fibonacci_sublists([]).
fibonacci_sublists([]).
fibonacci_sublists([_, _]).
fibonacci_sublists([A, B, C|T]) :-
length(A, AL), length(B, BL), length(C, CL),
CL is AL+BL,
fibonacci_sublists([B, C|T]).
:-begin_tests(fibonacci_sublists).
test(empty, [nondet]) :-
fibonacci_sublists([]).
test(zero, [nondet]) :-
fibonacci_sublists([[]]).
test(one, [nondet]) :-
fibonacci_sublists([[], [a]]).
test(two, [nondet]) :-
fibonacci_sublists([[], [a], [c]]).
test(three, [nondet]) :-
fibonacci_sublists([[], [a], [c], [a, c]]).
test(three_fail, [fail]) :-
fibonacci_sublists([[], [a], [c], [a, c, c]]).
test(four, [nondet]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3] ]).
test(four_fail, [fail]) :-
fibonacci_sublists([[], [a], [c], [a, c], [1, 2, 3, 4] ]).
:-end_tests(fibonacci_sublists).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% assoc_lookup/3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%
assoc_lookup(Assoc, Key, Value) :- member((Key, Value), Assoc).
:-begin_tests(assoc_lookup).
test(empty, [fail]) :-
assoc_lookup([], key, _Value).
test(first, [nondet]) :-
assoc_lookup([(key, 42), (a, 22), (b, 33)], key, Value),
42 = Value.
:-end_tests(assoc_lookup).
%%%%%%%%%%%%%%%%%%%%%%%%%%%% assoc_replace/3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%
assoc_replace([], _, []).
assoc_replace([H|T], Assoc, [Z|ZT]) :-
(atom(H) -> assoc_lookup(Assoc, H, Z); Z = H),
assoc_replace(T, Assoc, ZT).
:-begin_tests(assoc_replace).
test(empty, [nondet]) :-
assoc_replace([], [(a,22), (b, 33), (c, 42)], Z),
Z = [].
:-end_tests(assoc_replace).
%%%%%%%%%%%%%%%%%%%%%%%%%%% add_to_plus_expr/2 %%%%%%%%%%%%%%%%%%%%%%%%%%
add_to_plus_expr(Expr, Expr) :- integer(Expr).
add_to_plus_expr(add(X, Y), PlusExpr) :-
add_to_plus_expr(X, XExpr),
add_to_plus_expr(Y, YExpr),
PlusExpr = XExpr + YExpr.
:-begin_tests(add_to_plus_expr).
test(int, [nondet]) :-
add_to_plus_expr(42, Z), Z = 42.
:-end_tests(add_to_plus_expr).
%%%%%%%%%%%%%%%%%%%%%%%%%%% named_to_op_expr/2 %%%%%%%%%%%%%%%%%%%%%%%%%%
named_to_op_expr(Expr, Expr) :- integer(Expr).
named_to_op_expr(add(X, Y), XExpr + YExpr) :-
named_to_op_expr(X, XExpr),
named_to_op_expr(Y, YExpr).
named_to_op_expr(mul(X, Y), XExpr * YExpr) :-
named_to_op_expr(X, XExpr),
named_to_op_expr(Y, YExpr).
:-begin_tests(named_to_op_expr).
test(int, [nondet]) :-
named_to_op_expr(42, Z), Z = 42.
:-end_tests(named_to_op_expr).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%% named_expr_eval/2 %%%%%%%%%%%%%%%%%%%%%%%%%
named_expr_eval(Expr, Val) :- integer(Expr), Val = Expr.
named_expr_eval(add(X, Y), Val) :-
named_expr_eval(X, XVal),
named_expr_eval(Y, YVal),
Val is XVal + YVal.
named_expr_eval(mul(X, Y), Val) :-
named_expr_eval(X, XVal),
named_expr_eval(Y, YVal),
Val is XVal * YVal.
:-begin_tests(named_expr_eval).
test(int, [nondet]) :-
named_expr_eval(42, 42).
:-end_tests(named_expr_eval).
%%%%%%%%%%%%%%%%%%%%% named_expr_to_prefix_tokens/2 %%%%%%%%%%%%%%%%%%%%%
named_expr_to_prefix_tokens(N, [N]) :- integer(N).
named_expr_to_prefix_tokens(add(X, Y), [add|T]) :-
named_expr_to_prefix_tokens(X, TX), named_expr_to_prefix_tokens(Y, TY), append(TX, TY, T).
named_expr_to_prefix_tokens(mul(X, Y), [mul|T]) :-
named_expr_to_prefix_tokens(X, TX), named_expr_to_prefix_tokens(Y, TY), append(TX, TY, T).
:-begin_tests(named_expr_to_prefix_tokens).
test(int, [nondet]) :-
named_expr_to_prefix_tokens(42, [42]).
:-end_tests(named_expr_to_prefix_tokens).
%%%%%%%%%%%%%%%%%%%%%%% op_expr_to_prefix_expr/2 %%%%%%%%%%%%%%%%%%%%%%%
% #10: 10-points
% op_to_prefix_expr(OpExpr, PrefixTokens): Given a OpExpr involving
% integers, + and *, set PrefixTokens to a list containing its tokens
% in prefix notation.
%
% *Restriction*: must be implemented using *only* earlier procedures;
% cannot directly use recursion or Prolog built-ins.
op_expr_to_prefix_tokens(_OpExpr, _PrefixTokens) :- 'TODO'.
:-begin_tests(op_expr_to_prefix_tokens, [blocked('TODO')]).
test(int, [nondet]) :-
op_expr_to_prefix_tokens(42, [42]).
test(add_2_3, [nondet]) :-
op_expr_to_prefix_tokens(+(2, 3), [+, 2, 3]).
test(add_add_2_3_add_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(+(2, 3), +(4, 5)),
[+, +, 2, 3, +, 4, 5]).
test(add_add_add_add_1_2_3_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(+(+(+(1, 2), 3), 4), 5),
[+, +, +, +, 1, 2, 3, 4, 5]).
test(add_add_add_add_1_2_3_4_5_fail, [fail]) :-
op_expr_to_prefix_tokens(+(+(+(+(1, 2), 3), 4), 5),
[+, +, +, 1, 2, 3, 4, 5]).
test(add_1_add_2_add_3_add_4_5, [nondet]) :-
op_expr_to_prefix_tokens(+(1, +(2, +(3, +(4, 5)))),
[+, 1, +, 2, +, 3, +, 4, 5]).
test(mul_2_3, [nondet]) :-
op_expr_to_prefix_tokens(*(2, 3), [*, 2, 3]).
test(mul_mul_2_3_mul_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(*(2, 3), *(4, 5)),
[*, *, 2, 3, *, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(*(*(*(1, 2), 3), 4), 5),
[*, *, *, *, 1, 2, 3, 4, 5]).
test(mul_mul_mul_mul_1_2_3_4_5_fail, [fail]) :-
op_expr_to_prefix_tokens(*(*(*(*(1, 2), 3), 4), 5),
[*, *, *, 1, 2, 3, 4, 5]).
test(mul_1_mul_2_mul_3_mul_4_5, [nondet]) :-
op_expr_to_prefix_tokens(*(1, *(2, *(3, *(4, 5)))),
[*, 1, *, 2, *, 3, *, 4, 5]).
test(mul_add_1_2_3, [nondet]) :-
op_expr_to_prefix_tokens(*(+(1, 2), 3), [*, +, 1, 2, 3]).
test(add_1_mul_2_3, [nondet]) :-
op_expr_to_prefix_tokens(+(1, *(2, 3)), [+, 1, *, 2, 3]).
test(add_mul_1_2_add_3_4, [nondet]) :-
op_expr_to_prefix_tokens(+(*(1, 2), *(3, 4)),
[+, *, 1, 2, *, 3, 4]).
test(mul_add_1_2_mul_3_4, [nondet]) :-
op_expr_to_prefix_tokens(*(+(1, 2), +(3, 4)),
[*, +, 1, 2, +, 3, 4]).
:-end_tests(op_expr_to_prefix_tokens).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% main/0 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
main :-
current_prolog_flag(argv, Argv),
(length(Argv, 0) -> run_tests ; run_tests(Argv)).
:-initialization(main, main).
?- ['prj4_sol.pro'].
true.
?- run_tests.
% PL-Unit: sublist_lengths .... passed 0.001 sec
% PL-Unit: same_length_sublists .
ERROR: /home/i571/submit/prj4-sol/prj4_sol.pro:44:
test empties: failed
.
ERROR: /home/i571/submit/prj4-sol/prj4_sol.pro:48:
test sublists1: failed
.
ERROR: /home/i571/submit/prj4-sol/prj4_sol.pro:52:
test sublists3: failed
.**FAILED 0.000 sec
% PL-Unit: fibonacci_sublists .
ERROR: /home/i571/submit/prj4-sol/prj4_sol.pro:72:
test zero: failed
......**FAILED 0.000 sec
% PL-Unit: assoc_lookup .. passed 0.003 sec
% PL-Unit: assoc_replace . passed 0.000 sec
% PL-Unit: add_to_plus_expr . passed 0.000 sec
% PL-Unit: named_to_op_expr . passed 0.000 sec
% PL-Unit: named_expr_eval . passed 0.000 sec
% PL-Unit: named_expr_to_prefix_tokens . passed 0.000 sec
% PL-Unit: op_expr_to_prefix_tokens blocked: TODO
% 4 tests failed
% 22 tests passed
false.
|
070562f462f7e39138ac367ab73bf782
|
{
"intermediate": 0.3597465753555298,
"beginner": 0.4477899372577667,
"expert": 0.1924634873867035
}
|
46,584
|
i have nested array i want to check if the length of the parent array > 0 then check the length of child array > 0, do the condition using js
|
aac36203ab1401a1a47c878a5dca5ffd
|
{
"intermediate": 0.5483621954917908,
"beginner": 0.20868676900863647,
"expert": 0.24295106530189514
}
|
46,585
|
Dialogue: USER: How do I create an event in Kivy for Android for a press of the pause/play button on a Bluetooth headset? ASSISTANT: To handle presses of the pause/play button on a Bluetooth headset in a Kivy application for Android, you can use Plyer, a library for accessing hardware features. Note, however, that by default Plyer does not provide direct support for the media buttons of Bluetooth headsets. In that case you will need Pyjnius to call the Android API, or you can consider other approaches, for example Java classes integrated with Kivy.

In general, you will need to:

1. Use Pyjnius to access Android's Intent and BroadcastReceiver so you can listen for the relevant system events, such as ACTION_MEDIA_BUTTON.

2. Register a BroadcastReceiver in your application to handle these events.

Here is a basic example of how you could start:

from jnius import autoclass
from kivy.app import App

# Get access to the Java classes
Intent = autoclass('android.content.Intent')
IntentFilter = autoclass('android.content.IntentFilter')
BroadcastReceiver = autoclass('android.content.BroadcastReceiver')
Context = autoclass('android.content.Context')
KeyEvent = autoclass('android.view.KeyEvent')

class MediaButtonReceiver(BroadcastReceiver):
    def __init__(self, **kwargs):
        super(MediaButtonReceiver, self).__init__(**kwargs)
        self.onReceiveCallback = kwargs.get('onReceive')

    def onReceive(self, context, intent):
        action = intent.getAction()
        if action == Intent.ACTION_MEDIA_BUTTON:
            event = intent.getParcelableExtra(Intent.EXTRA_KEY_EVENT)
            if event.getAction() == KeyEvent.ACTION_DOWN:
                keyCode = event.getKeyCode()
                if keyCode == KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE:
                    self.onReceiveCallback()

class MyApp(App):
    def build(self):
        self.register_media_button_receiver()
        # Add your user-interface construction here

    def register_media_button_receiver(self):
        intentFilter = IntentFilter(Intent.ACTION_MEDIA_BUTTON)
        intentFilter.setPriority(1000)  # Set your receiver's priority
        self.mediaButtonReceiver = MediaButtonReceiver(onReceive=self.on_media_button)
        self.activity = autoclass('org.kivy.android.PythonActivity').mActivity
        self.activity.registerReceiver(self.mediaButtonReceiver, intentFilter)

    def on_media_button(self):
        print("Pause/play button on the Bluetooth headset was pressed")

    def on_stop(self):
        # Don't forget to unregister the BroadcastReceiver when the app closes
        self.activity.unregisterReceiver(self.mediaButtonReceiver)

if __name__ == '__main__':
    MyApp().run()

Note that for this solution to work, your application must be launched with sufficient permissions. Make sure you have requested the necessary media permissions in your buildozer.spec file.

This code is a basic example and may require additional configuration and adaptation to your specific requirements and usage conditions. USER: Which permissions need to be granted? ASSISTANT:
|
59efa25ea8e56119252fa921fcaa3fba
|
{
"intermediate": 0.37979215383529663,
"beginner": 0.4493032395839691,
"expert": 0.17090460658073425
}
|
46,586
|
Dialogue: USER: How do I create an event in Kivy for Android for a press of the pause/play button on a Bluetooth headset? ASSISTANT: To handle presses of the pause/play button on a Bluetooth headset in a Kivy application for Android, you can use Plyer, a library for accessing hardware features. Note, however, that by default Plyer does not provide direct support for the media buttons of Bluetooth headsets. In that case you will need Pyjnius to call the Android API, or you can consider other approaches, for example Java classes integrated with Kivy.

In general, you will need to:

1. Use Pyjnius to access Android's Intent and BroadcastReceiver so you can listen for the relevant system events, such as ACTION_MEDIA_BUTTON.

2. Register a BroadcastReceiver in your application to handle these events.

Here is a basic example of how you could start:

from jnius import autoclass
from kivy.app import App

# Get access to the Java classes
Intent = autoclass('android.content.Intent')
IntentFilter = autoclass('android.content.IntentFilter')
BroadcastReceiver = autoclass('android.content.BroadcastReceiver')
Context = autoclass('android.content.Context')
KeyEvent = autoclass('android.view.KeyEvent')

class MediaButtonReceiver(BroadcastReceiver):
    def __init__(self, **kwargs):
        super(MediaButtonReceiver, self).__init__(**kwargs)
        self.onReceiveCallback = kwargs.get('onReceive')

    def onReceive(self, context, intent):
        action = intent.getAction()
        if action == Intent.ACTION_MEDIA_BUTTON:
            event = intent.getParcelableExtra(Intent.EXTRA_KEY_EVENT)
            if event.getAction() == KeyEvent.ACTION_DOWN:
                keyCode = event.getKeyCode()
                if keyCode == KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE:
                    self.onReceiveCallback()

class MyApp(App):
    def build(self):
        self.register_media_button_receiver()
        # Add your user-interface construction here

    def register_media_button_receiver(self):
        intentFilter = IntentFilter(Intent.ACTION_MEDIA_BUTTON)
        intentFilter.setPriority(1000)  # Set your receiver's priority
        self.mediaButtonReceiver = MediaButtonReceiver(onReceive=self.on_media_button)
        self.activity = autoclass('org.kivy.android.PythonActivity').mActivity
        self.activity.registerReceiver(self.mediaButtonReceiver, intentFilter)

    def on_media_button(self):
        print("Pause/play button on the Bluetooth headset was pressed")

    def on_stop(self):
        # Don't forget to unregister the BroadcastReceiver when the app closes
        self.activity.unregisterReceiver(self.mediaButtonReceiver)

if __name__ == '__main__':
    MyApp().run()

Note that for this solution to work, your application must be launched with sufficient permissions. Make sure you have requested the necessary media permissions in your buildozer.spec file.

This code is a basic example and may require additional configuration and adaptation to your specific requirements and usage conditions. USER: Which permissions need to be granted? ASSISTANT:
|
3b5340b03ce19745658b222319370d86
|
{
"intermediate": 0.37979215383529663,
"beginner": 0.4493032395839691,
"expert": 0.17090460658073425
}
|