Columns:
row_id: int64 (values 0 to 48.4k)
init_message: string (length 1 to 342k characters)
conversation_hash: string (length 32 characters)
scores: dict with keys "beginner", "intermediate", "expert"

Each record below lists these four fields in order: row_id, init_message, conversation_hash, scores.
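As a minimal sketch of how records in this shape could be consumed, the snippet below reads rows with the four fields listed above and picks the highest-scoring label from the scores dict. It assumes the data has been exported as JSON Lines; the file name "prompts_with_scores.jsonl" is a hypothetical placeholder, not something named in this document.

```python
# Sketch: iterate rows shaped like the records below and report the
# highest-scoring label from the "scores" dict.
# Assumption: the rows are stored as JSON Lines with the four fields above;
# the file name is a hypothetical placeholder.
import json

def top_label(scores: dict) -> str:
    # Return the key ("beginner", "intermediate", or "expert") with the largest value.
    return max(scores, key=scores.get)

with open("prompts_with_scores.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        print(row["row_id"], top_label(row["scores"]), row["init_message"][:60])
```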
47,089
elastic search api search multiple documents based on their _routing id
0328abe0cfc56a839e90417df552ec84
{ "intermediate": 0.3770725429058075, "beginner": 0.12269148975610733, "expert": 0.5002359747886658 }
47,090
c# what if i want to store 2 types in single list?
932994ff8af59b23c1fdefe2623e1266
{ "intermediate": 0.5790239572525024, "beginner": 0.18770143389701843, "expert": 0.2332746386528015 }
47,091
c# what if i want to store 2 types in single list?
dd4597fd9b46d9ede20aade344b8d981
{ "intermediate": 0.5790239572525024, "beginner": 0.18770143389701843, "expert": 0.2332746386528015 }
47,092
es update_by_query pass multiple _id
c29203c510f92e50803ec2878c97dcbf
{ "intermediate": 0.3550374209880829, "beginner": 0.25657740235328674, "expert": 0.38838517665863037 }
47,093
translate lyrics of Lazarus by porcupine tree into asl gloss.
68d586c038c3b9e17c194923bcd4d86c
{ "intermediate": 0.4070125222206116, "beginner": 0.30831781029701233, "expert": 0.2846696078777313 }
47,094
django no active acount found with credentials even though they are in databse IN ONE PC and not in the other , it works on my local machine with postgres and in my coworker it odesn't with postgres or sqlite , authentication ius used through simple_jwt
90a428db4114874218619998b2a3ddc0
{ "intermediate": 0.725898265838623, "beginner": 0.10551289469003677, "expert": 0.168588787317276 }
47,095
Write me an arduino code, a long-short term memory neural network MPC algorithm from sketch.
1b76a12141ae1b944b7b57b4c1e18c02
{ "intermediate": 0.06581301987171173, "beginner": 0.03326448053121567, "expert": 0.9009225368499756 }
47,096
i want Trace layers during inference in MistralForCausalLM in transformer
cbf739cacfe5320b97f2b1bcf5d80cd0
{ "intermediate": 0.34261175990104675, "beginner": 0.12186073511838913, "expert": 0.5355274677276611 }
47,097
Explain the functions I need to write in arduino C from sketch to create a short term memory neural network MPC algorithm.
ffc1ceb737d37b7cdc87172e4ac48971
{ "intermediate": 0.12338302284479141, "beginner": 0.0810927152633667, "expert": 0.7955242395401001 }
47,098
how to install network client vpn on ubuntu
0b783487419f273e2c0b86e0fb9adaf3
{ "intermediate": 0.30833059549331665, "beginner": 0.34979864954948425, "expert": 0.3418707847595215 }
47,099
Question: """ Discord.js 14 Music Bot won't play music on Ubuntu but works on Windows """ Detail: """ I'm currently using this custom music bot with discord.js 14. While it runs smoothly on Windows, I encounter an issue when attempting to launch it from Ubuntu. The bot successfully locates music, but upon trying to load it, the music fails to start, and the bot promptly responds with a stopped message. No error log was generated. Works on Windows 11 23H2 Not working on Ubuntu 22.04.4 LTS What I've tried: Installing the @discordjs/opus package (https://stackoverflow.com/a/62719993/18463835). Updated packages with npm update --save. Installed ffmpeg using sudo apt install. Tried 4-5 other music bot source codes found on GitHub; every bot encountered the same issue. Updated Ubuntu and its packages. Reinstalled Ubuntu from a new ISO. Node Version: v20.12.2 (same on Ubuntu and Windows) Here's an image: ![img](https://i.imgur.com/Rustkwq.png) """ Don't repeat the question and just provide answer to the question above. If you are uncertain about the answer, please inform me that you do not possess the required knowledge regarding this question. Response in English
439d844f8294007312d88483de7b7b8f
{ "intermediate": 0.3571334481239319, "beginner": 0.3352232575416565, "expert": 0.3076432943344116 }
47,100
i have a pdf invoice, and i am trying to extract the invoice number and invoice total amount out of it with uipath string method. what is the methodology i can apply to help me identify the needed information.
574d547d456f250a03ed4d439dc13a17
{ "intermediate": 0.3940264582633972, "beginner": 0.27771955728530884, "expert": 0.32825398445129395 }
47,101
For each of the following functions, provide a Prolog interface and specification (not implementation) which is an analog of the function: a. A function which returns the roots of the quadratic equations ax^2 + bx + c = 0. b. A function which splits a list into two halves. c. A function which splits a list into sublists on every occurence of a particular separator (similar to split() in popular languages like JavaScript, Java and Python). If you expect the behavior of the Prolog procedure to be different from that of the function, then your specification should include that behavior. For example, an answer for the Prolog analog to the function which appends one list to another may be something like: append(Xs, Ys, Zs): Zs matches the result of appending Ys to list Xs. If at least two of the arguments are instantiated, then it will succeed at most once; if the first argument or its tail is uninstantiated, then depending on the other arguments, it may succeed multiple times producing longer and longer lists for the first argument. In this sample answer, the initial append(Xs, Ys, Zs) provides the interface and the subsequent description provides the specification. Answer in detail.
0ebffc4930bcee3dc4fc12b2d857d282
{ "intermediate": 0.3257952630519867, "beginner": 0.37975794076919556, "expert": 0.2944468855857849 }
47,102
import math import logging from functools import partial from collections import OrderedDict from copy import deepcopy import torch import torch.nn as nn import torch.nn.functional as F from timm.models.layers import to_2tuple from lib.models.layers.patch_embed import PatchEmbed, PatchEmbed_event, xcorr_depthwise from .utils import combine_tokens, recover_tokens from .vit import VisionTransformer from ..layers.attn_blocks import CEBlock _logger = logging.getLogger(__name__) class VisionTransformerCE(VisionTransformer): """ Vision Transformer with candidate elimination (CE) module A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - https://arxiv.org/abs/2010.11929 Includes distillation token & head support for `DeiT: Data-efficient Image Transformers` - https://arxiv.org/abs/2012.12877 """ def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4., qkv_bias=True, representation_size=None, distilled=False, drop_rate=0., attn_drop_rate=0., drop_path_rate=0., embed_layer=PatchEmbed, norm_layer=None, act_layer=None, weight_init='', ce_loc=None, ce_keep_ratio=None): """ Args: img_size (int, tuple): input image size patch_size (int, tuple): patch size in_chans (int): number of input channels num_classes (int): number of classes for classification head embed_dim (int): embedding dimension depth (int): depth of transformer num_heads (int): number of attention heads mlp_ratio (int): ratio of mlp hidden dim to embedding dim qkv_bias (bool): enable bias for qkv if True representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set distilled (bool): model includes a distillation token and head as in DeiT models drop_rate (float): dropout rate attn_drop_rate (float): attention dropout rate drop_path_rate (float): stochastic depth rate embed_layer (nn.Module): patch embedding layer norm_layer: (nn.Module): normalization layer weight_init: (str): weight init scheme """ # super().__init__() super().__init__() if isinstance(img_size, tuple): self.img_size = img_size else: self.img_size = to_2tuple(img_size) self.patch_size = patch_size self.in_chans = in_chans self.num_classes = num_classes self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models self.num_tokens = 2 if distilled else 1 norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) act_layer = act_layer or nn.GELU self.patch_embed = embed_layer( img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim) num_patches = self.patch_embed.num_patches self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim)) self.pos_drop = nn.Dropout(p=drop_rate) self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4) # self.pos_embed_event = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=4, stride=4) # self.pos_embed_event_z = PatchEmbed_event(in_chans=32, embed_dim=768, kernel_size=3, stride=1) # attn = CrossAttn(768, 4, 3072, 0.1, 'relu') # self.cross_attn = Iter_attn(attn, 2) dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule blocks = [] ce_index = 0 self.ce_loc = ce_loc for i in range(depth): ce_keep_ratio_i = 1.0 if ce_loc is not None and i in ce_loc: ce_keep_ratio_i = 
ce_keep_ratio[ce_index] ce_index += 1 blocks.append( CEBlock( dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, act_layer=act_layer, keep_ratio_search=ce_keep_ratio_i) ) self.blocks = nn.Sequential(*blocks) self.norm = norm_layer(embed_dim) self.init_weights(weight_init) def forward_features(self, z, x, event_z, event_x, mask_z=None, mask_x=None, ce_template_mask=None, ce_keep_rate=None, return_last_attn=False ): B, H, W = x.shape[0], x.shape[2], x.shape[3] event_z = self.pos_embed_event(event_z) # [:,:,:,:1000] event_x = self.pos_embed_event(event_x) # B 768 1024 x = self.patch_embed(x) z = self.patch_embed(z) event_z += self.pos_embed_z event_x += self.pos_embed_x z += self.pos_embed_z x += self.pos_embed_x # attention mask handling # B, H, W if mask_z is not None and mask_x is not None: mask_z = F.interpolate(mask_z[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0] mask_z = mask_z.flatten(1).unsqueeze(-1) mask_x = F.interpolate(mask_x[None].float(), scale_factor=1. / self.patch_size).to(torch.bool)[0] mask_x = mask_x.flatten(1).unsqueeze(-1) mask_x = combine_tokens(mask_z, mask_x, mode=self.cat_mode) mask_x = mask_x.squeeze(-1) if self.add_cls_token: cls_tokens = self.cls_token.expand(B, -1, -1) cls_tokens = cls_tokens + self.cls_pos_embed if self.add_sep_seg: x += self.search_segment_pos_embed z += self.template_segment_pos_embed x = combine_tokens(z, event_z, x, event_x, mode=self.cat_mode) # 64+64+256+256=640 # x = combine_tokens(z, x, event_z, event_x, mode=self.cat_mode) # 64+64+256+256=640 if self.add_cls_token: x = torch.cat([cls_tokens, x], dim=1) x = self.pos_drop(x) lens_z = self.pos_embed_z.shape[1] lens_x = self.pos_embed_x.shape[1] global_index_t = torch.linspace(0, lens_z - 1, lens_z).to(x.device) global_index_t = global_index_t.repeat(B, 1) global_index_s = torch.linspace(0, lens_x - 1, lens_x).to(x.device) global_index_s = global_index_s.repeat(B, 1) removed_indexes_s = [] for i, blk in enumerate(self.blocks): x, global_index_t, global_index_s, removed_index_s, attn = \ blk(x, global_index_t, global_index_s, mask_x, ce_template_mask, ce_keep_rate) if self.ce_loc is not None and i in self.ce_loc: removed_indexes_s.append(removed_index_s) x = self.norm(x) lens_x_new = global_index_s.shape[1] lens_z_new = global_index_t.shape[1] z = x[:, :lens_z_new*2] x = x[:, lens_z_new*2:] if removed_indexes_s and removed_indexes_s[0] is not None: removed_indexes_cat = torch.cat(removed_indexes_s, dim=1) pruned_lens_x = lens_x - lens_x_new pad_x = torch.zeros([B, pruned_lens_x, x.shape[2]], device=x.device) x = torch.cat([x, pad_x], dim=1) index_all = torch.cat([global_index_s, removed_indexes_cat], dim=1) # recover original token order C = x.shape[-1] x = torch.zeros_like(x).scatter_(dim=1, index=index_all.unsqueeze(-1).expand(B, -1, C).to(torch.int64), src=x) x = recover_tokens(x, lens_z_new, lens_x, mode=self.cat_mode) x = x[:, :lens_x] # RGB head x = torch.cat([event_x, x], dim=1) # x = x[:, lens_x//2:] # event head # x = torch.cat([z, x], dim=1) # re-concatenate with the template, which may be further used by other modules # x, event_x = x[:, :lens_x//2], x[:, lens_x//2:] # x = x[:, -lens_x//2:] aux_dict = { "attn": attn, "removed_indexes_s": removed_indexes_s, # used for visualization } return x, aux_dict def forward(self, z, x, event_z, event_x, ce_template_mask=None, ce_keep_rate=None, tnc_keep_rate=None, return_last_attn=False): x, aux_dict = 
self.forward_features(z, x, event_z, event_x, ce_template_mask=ce_template_mask, ce_keep_rate=ce_keep_rate,) return x, aux_dict def _create_vision_transformer(pretrained=False, **kwargs): model = VisionTransformerCE(**kwargs) if pretrained: if 'npz' in pretrained: model.load_pretrained(pretrained, prefix='') else: checkpoint = torch.load(pretrained, map_location="cpu") missing_keys, unexpected_keys = model.load_state_dict(checkpoint["model"], strict=False) print('Load pretrained model from: ' + pretrained) return model def vit_base_patch16_224_ce(pretrained=False, **kwargs): """ ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929). """ model_kwargs = dict( patch_size=16, embed_dim=768, depth=12, num_heads=12, **kwargs) model = _create_vision_transformer(pretrained=pretrained, **model_kwargs) return model def vit_large_patch16_224_ce(pretrained=False, **kwargs): """ ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929). """ model_kwargs = dict( patch_size=16, embed_dim=1024, depth=24, num_heads=16, **kwargs) model = _create_vision_transformer(pretrained=pretrained, **model_kwargs) return model取出每一层vit的输出特征
9aa8da52dbe22f496178e5af7d7a1fd2
{ "intermediate": 0.34594112634658813, "beginner": 0.43554434180259705, "expert": 0.21851447224617004 }
47,103
Heres a snippet of my quadmesh code: public class QuadMesh extends Mesh { private static final int[] INDICES = new int[] {0, 1, 2, 3}; private static final MeshBuffer TEX_COORDS = MeshAttribute.TEX_COORDS.fill(new float[] {0, 1, 0, 0, 1, 1, 1, 0}); /** * Initializes a QuadMesh.<br> * <br> * The origin of a quad is at its bottom left, like the projection / orthographic matrices. is that even true anymore * * @param x bottom left corner origin * @param y bottom left corner origin * @param width width (screen width == Valence.FRAME_WIDTH) * @param height height (screen height == Valence.FRAME_HEIGHT) * @param texture texture GLTexture that will be bound during render */ public QuadMesh(float x, float y, float width, float height, GLTexture texture) { super(INDICES, texture, MeshAttribute.VERTICES_2D.fill(new float[] {x, y, x, y + height, x + width, y, x + width, y + height}), TEX_COORDS); } In another class, I'm defining this: private static final QuadMesh mesh = new QuadMesh(-1, -1, 2, 2, new float[] {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1}); When the constructor is called, the INDICES array in quadmesh is still null. Why
a0417d022bcc6f75bca11939ae4180c3
{ "intermediate": 0.4027214050292969, "beginner": 0.3824884593486786, "expert": 0.21479012072086334 }
47,104
Code a random number generator in Python
4cc2885a51255cd8fdacabdfbabb2301
{ "intermediate": 0.2694496214389801, "beginner": 0.14967195689678192, "expert": 0.5808784365653992 }
47,105
create firefox driver in selenium python
9397acbb42f5223545d7b0d306597a3d
{ "intermediate": 0.3098966181278229, "beginner": 0.19145400822162628, "expert": 0.49864938855171204 }
47,106
Just like how you can upload model matrices to opengl shaders to dynamically change the vertex locations, can you upload a matrix of colors to change the colors of quads? I have a quad shader with always 4 vertices and I dont want 4 separate uniform colors to update, just one array. Is this possible
e248e472ed8e76d4fc0fec47a3426e00
{ "intermediate": 0.5011819005012512, "beginner": 0.08999016135931015, "expert": 0.4088279604911804 }
47,107
Just like how you can upload model matrices to opengl shaders to dynamically change the vertex locations, can you upload a matrix of colors to change the colors of quads? I have a quad shader with always 4 vertices and I dont want 4 separate uniform colors to update, just one array. Is this possible?
e2de4396b35e0b1133ee5365879084fd
{ "intermediate": 0.5031083226203918, "beginner": 0.09765716642141342, "expert": 0.39923444390296936 }
47,108
I want you to act as a developer. I will provide you with tools that will allow you to develop software for developing mobile applications. My first request is "I need help developing a mobile app for Android."
bcf94023a972deaa906c7bf34c7b34b5
{ "intermediate": 0.4524758756160736, "beginner": 0.2594115138053894, "expert": 0.2881125807762146 }
47,109
my code is: rom selenium import webdriver import time from selenium import webdriver from selenium.webdriver.firefox.options import Options from selenium.webdriver.firefox.service import Service from webdriver_manager.firefox import GeckoDriverManager from selenium.webdriver.firefox.options import Options options = Options() profile_path = r"\Firefox profiles\e36jr3kj.default-release" options.profile = profile_path driver = webdriver.Firefox(options) driver.get("https://bot.sannysoft.com/") how to set this profile.set_preference('dom.webdriver.enabled', False) profile.set_preference('useAutomationExtension', False) profile.update_preferences() to porfile?
a5b8a5e397ad7a0fabda271f4934e2e9
{ "intermediate": 0.4409286081790924, "beginner": 0.31466537714004517, "expert": 0.24440602958202362 }
47,110
my gecko wedriver gets detectec by cloudflare using selenium python, what can i do?
d698c592698f6b360c57a6fd6db6a28b
{ "intermediate": 0.4526938498020172, "beginner": 0.213321715593338, "expert": 0.33398449420928955 }
47,111
Explain to me, how can I code a Short-Term Memory Neural Network MPC Algorithm in Arduino C
fb75664dd56f33793d830a5bccc75c3e
{ "intermediate": 0.122499980032444, "beginner": 0.03916376456618309, "expert": 0.8383362889289856 }
47,112
C:\Users\ILEG-i5-11\Desktop>python lyrics.py Traceback (most recent call last): File "C:\Users\ILEG-i5-11\Desktop\lyrics.py", line 59, in <module> main() File "C:\Users\ILEG-i5-11\Desktop\lyrics.py", line 46, in main image = generate_image_with_text(lyric, image_size, font_path, font_size, glow_factor) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\ILEG-i5-11\Desktop\lyrics.py", line 23, in generate_image_with_text text_width, text_height = draw.textsize(text, font=font) ^^^^^^^^^^^^^ AttributeError: 'ImageDraw' object has no attribute 'textsize' import os import re import subprocess from PIL import Image, ImageDraw, ImageFont def parse_lrc_file(file_path): timestamps = [] lyrics = [] with open(file_path, 'r') as file: for line in file: match = re.match(r'\[(\d+:\d+\.\d+)\]', line) if match: timestamp = match.group(1) lyrics.append(line[len(match.group(0)):].strip()) timestamps.append(timestamp) return timestamps, lyrics def generate_image_with_text(text, image_size, font_path, font_size, glow_factor): image = Image.new('RGB', image_size, color='black') draw = ImageDraw.Draw(image) font = ImageFont.truetype(font_path, font_size) text_width, text_height = draw.textsize(text, font=font) draw.text(((image_size[0] - text_width) / 2, (image_size[1] - text_height) / 2), text, font=font, fill='white') # Create glowing effect for i in range(glow_factor): blurred_image = image.filter(ImageFilter.GaussianBlur(radius=i)) image = Image.blend(image, blurred_image, alpha=0.2) return image def main(): lrc_file = 'lyrics.lrc' font_path = 'arial.ttf' # Change this to the path of your desired font font_size = 48 image_size = (1280, 720) # Change this to your desired image size output_dir = 'output_images' timestamps, lyrics = parse_lrc_file(lrc_file) os.makedirs(output_dir, exist_ok=True) # Number of frames for glowing effect glow_factor = 10 for idx, lyric in enumerate(lyrics): image = generate_image_with_text(lyric, image_size, font_path, font_size, glow_factor) image_path = os.path.join(output_dir, f'image_{idx:03d}.png') image.save(image_path) # Generate video using ffmpeg subprocess.run([ 'ffmpeg', '-r', '1', '-i', os.path.join(output_dir, 'image_%03d.png'), '-c:v', 'libx264', '-vf', 'fps=25', 'output_video.mp4' ]) print("Video created successfully!") if __name__ == "__main__": main()
1cb63d9c94e389446b2f7cd6600e7ed9
{ "intermediate": 0.39215120673179626, "beginner": 0.41547614336013794, "expert": 0.192372664809227 }
47,113
Traceback (most recent call last): File “C:\Users\ILEG-i5-11\Desktop\lyrics.py”, line 10, in <module> from moviepy.video.fx.all import gaussian_blur ImportError: cannot import name ‘gaussian_blur’ from ‘moviepy.video.fx.all’ (D:\Program Files\Python\Lib\site-packages\moviepy\video\fx\all_init_.py) The issue you’re encountering with the ImportError suggests that the direct import path for gaussian_blur is incorrect based on how the MoviePy library structures its effects modules. The gaussian_blur effect is available in MoviePy, but it seems the import path used isn’t right. In MoviePy, effect functions are typically applied to clips using the .fx() method with the effect function passed as an argument, rather than directly imported from moviepy.video.fx.all. The correct way to access effects in MoviePy doesn’t usually involve importing them directly by name like gaussian_blur from moviepy.video.fx.all. Given that misunderstanding, let’s correct the approach to applying a blur effect within your script. Since a direct import statement for gaussian_blur didn’t work as expected, you should utilize the effects through the .fx() method on clip objects directly, which doesn’t require importing the effects by name: ### Applying Blur Effect Correctly: Instead of trying to import gaussian_blur directly, you can apply effects to a clip by referencing them as methods of the clip object. MoviePy automatically makes all effects available as methods on clips via the .fx() interface or directly as methods named after the effect. However, seeing as there might have been confusion or an update issue with MoviePy’s documentation or structure, let’s go with a more generic approach. ### Revised Approach — Using .fx(): To apply a blur
5caaed11efd2e1d853c0fd01fb086a48
{ "intermediate": 0.6687471270561218, "beginner": 0.20212328433990479, "expert": 0.1291295737028122 }
47,114
browser is under remote control how to turn it off geckodriver python selenium
21c425339af26a6d0ce787eb6134a67a
{ "intermediate": 0.3541311025619507, "beginner": 0.4110836684703827, "expert": 0.23478522896766663 }
47,115
browser is under remote control how to hide it so it wouldnt be possible to detect that im using gecko weindriver in python selenium
41083db29c56250f51b041c1f94b69ea
{ "intermediate": 0.39879921078681946, "beginner": 0.23249471187591553, "expert": 0.368706077337265 }
47,116
Create an entertaining game in HTML
5c70ffa434639e39f5e8abf0ab6779d6
{ "intermediate": 0.35861119627952576, "beginner": 0.3732753098011017, "expert": 0.26811349391937256 }
47,117
I want complete Python code to create a student management website
9aa9d7310017c040cceeb8d608b08d15
{ "intermediate": 0.3603806495666504, "beginner": 0.33198171854019165, "expert": 0.30763763189315796 }
47,118
Make this website more polished, return the full polished version of the HTML, use a better CSS library like Tailwind or something else to achieve a modern and very Pro look, You should return the full HTML code for the polished website, Follow the instructions, here is the base HTML: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Introducing the HX-GPU: A Breakthrough in GPU Technology</title> <style> body, html { background-color: #0a0a0a; /* Xbox One theme color */ color: #ffffff; font-family: Arial, sans-serif; } header { background-color: #1a1a1a; /* Darker shade for header */ backdrop-filter: blur(10px); } header h1 { font-size: 2rem; font-weight: 700; padding: 1rem 0; } main { backdrop-filter: blur(5px); background-color: #121212; /* Darker main background */ color: #fff; } section { background-color: rgba(255, 255, 255, 0.05); /* Translucent sections */ backdrop-filter: blur(20px); border-radius: 10px; /* Polished glass look */ padding: 20px; margin-bottom: 20px; } section h2 { color: #4A90E2; /* Bright blue for better visibility */ border-bottom: 1px solid #555; /* Adds a subtle separator */ padding-bottom: 10px; margin-bottom: 20px; } p { color: #ccc; line-height: 1.6; } /* Add to all elements that should have rounded corners */ *, section, #reflective-demo img, #matte-demo img, header { border-radius: 10px; } #hx-gpu-details img { height: 100%; width: 100%; object-fit: cover; } #hx-gpu-details ul { list-style-type: none; padding-left: 0; } #hx-gpu-details ul li { margin-bottom: 10px; } #contact-section a { color: #4A90E2; text-decoration: underline; } </style> <link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.0.2/dist/tailwind.min.css" rel="stylesheet"> </head> <body class="bg-gray-100 text-gray-900"> <!-- Header --> <header class="bg-black text-white"> <div class="max-w-7xl mx-auto py-4 px-4 sm:px-6 lg:px-8"> <h1 class="text-lg leading-6 font-semibold">HX-GPU: The Future of Graphics Processing</h1> </div> </header> <!-- Main Content --> <main class="bg-gray-100 text-gray-900 mt-8"> <div class="max-w-7xl mx-auto sm:px-6 lg:px-8"> <!-- Introductory text or banner --> <section class="bg-black text-white shadow sm:rounded-md p-6 mb-8"> <h2 class="text-lg leading-6 font-medium">Introducing the HX-GPU</h2> <p>Seven years in the making, built on an FPGA foundation with 32 Xtended Matrix cores, the HX-GPU redefines GPU capability, bringing unparalleled efficiency and performance. Dive into the story of its creation and explore its potential in the realm of DX8 and DX9 titles.</p> </section> <!-- HX-GPU Tech Specs --> <section id="hx-gpu-details" class="bg-white shadow sm:rounded-md p-6 mb-8"> <h2 class="mb-4 text-lg leading-6 font-medium text-gray-900">HX-GPU Tech Specs</h2> <img src="https://source.unsplash.com/featured/?fpga,technology" alt="FPGA Technology" class="h-64 w-full object-cover"> <ul> <li>Base: FPGA</li> <li>Cores: 32 Xtended Matrix</li> <li>Performance: 76.8 FP32 at 30 Watts</li> <li>DX8/DX9 Performance: Exemplary</li> <li>Crysis Benchmark: 22 FPS at 640p Very Low Settings</li> </ul> </section> <!-- Personal Story --> <section> <h2>My Journey as a Tech Enthusiast</h2> <p>I embarked on the journey of building the HX-GPU when I was 20 years old, driven by my passion for technology and innovation. 
After seven years of relentless development and testing, the HX-GPU stands as a testament to my dedication.</p> </section> <!-- Contact Section --> <section id="contact-section"> <h2>Contact Us</h2> <p>Interested in learning more about the HX-GPU or discussing potential collaborations? Reach out at:</p> <a href="mailto:<PRESIDIO_ANONYMIZED_EMAIL_ADDRESS>"><PRESIDIO_ANONYMIZED_EMAIL_ADDRESS></a> </section> </div> </main> <!-- Script for Demo Initialization --> <script> console.log('WebGPU demos initialized. Enjoy the cutting-edge web graphics!'); // Example placeholder for future interactive enhancement document.addEventListener('DOMContentLoaded', (event) => { // Future implementation spot for interactive demo enhancements }); // Smooth scroll effect for internal links document.addEventListener('DOMContentLoaded', (event) => { document.querySelectorAll('a[href^="#"]').forEach(anchor => { anchor.addEventListener('click', function (e) { e.preventDefault(); document.querySelector(this.getAttribute('href')).scrollIntoView({ behavior: 'smooth' }); }); }); }); // Add onClick event for mailto link to log contact inquiries document.querySelector('a[href^="mailto:"]').addEventListener('click', function() { console.log('Email contact initiated:', this.href); }); </script> </body> </html>
17310ce35f3f5b06398a3ac71fc91af1
{ "intermediate": 0.4057055711746216, "beginner": 0.37312978506088257, "expert": 0.22116461396217346 }
47,119
Hello, if I have X amount of dice with Y amount of sides. How many combinations would that be, but if (1, 2, 1) is considered the same as (2, 1, 1)
bd629d29ec42399b35170bea5c9ff033
{ "intermediate": 0.37478020787239075, "beginner": 0.3086678087711334, "expert": 0.31655198335647583 }
47,120
<!DOCTYPE html><html lang="ru"><head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <meta name="title" content="MODER SHOP - площадка по продаже и покупке аккаунтов SAMP"> <meta name="Keywords" content="Магазин аккаунтов SAMP, купить вирты самп, купить аккаунт SAMP, продать аккаунт SAMP, SAMP"> <meta name="description" content="На нашей площадке вы можете купить аккаунт SAMP, либо же продать свой аккаунт в САМП. Это полностью бесплатно, удобно и безопасно, ведь перед продажей аккаунта его проверяют наши боты, а так же в течение 12 часов вы сможете вернуть деньги."> <title>MODER SHOP - площадка по продаже модов и разработок</title> <link rel="shortcut icon" href="images/favicon.png" type="image/x-icon"> <link rel="stylesheet" href="css/libs-beta.css"> <link rel="stylesheet" href="css/main-beta.css"> </head> <body> <!-- ШАПКА САЙТА (start) LiveInternet counter<script type="25e3660b161ee406eb08dbd9-text/javascript"> new Image().src = "//counter.yadro.ru/hit?r"+ escape(document.referrer)+((typeof(screen)=="undefined")?"": ";s"+screen.width+"*"+screen.height+"*"+(screen.colorDepth? screen.colorDepth:screen.pixelDepth))+";u"+escape(document.URL)+ ";h"+escape(document.title.substring(0,150))+ ";"+Math.random();</script><!--/LiveInternet <nav class="header__navbar garland0 inner__navbar navbar navbar-toggleable-md "> <div class="container"> <button class="navbar-toggler navbar-toggler-right" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation"> <i class="fas fa-bars"></i> </button> <div class="header__navbar__logo"><a href="/"><img src="images/logo.png" alt=""></a></div> <div class="collapse navbar-collapse" id="navbarSupportedContent"> <ul class="header__navbar__menu ml-auto"> <li><a href="/catalog/">Модификации</a></li> <li><a href="/register/">Регистрация</a></li> <li><a href="/login/">Вход</a></li> </ul> </div> </div> </nav> Header Navbar (end) --> <!-- Кнопки main (start) --> <section class="index__description"> <div class="container"> <div class="row wow animated fadeInDown"> <div class="col-12"> <h1>MODER SHOP</h1> <p>Это сайт для креативных людей создающие скрипты, софт и другие модификации, множество товаров, проверенные продавцы, безопасные сделки и регулярное обновление каталога - все это вы найдете в нашем магазине, не упусти свой шанс.</p> <a class="button button-1" href="/catalog/">Модификации</a> <a class="button button-1" href="/money/">Готовые решения</a> <a class="button button-1" href="/vacansy/">Заказать</a> <a class="button button-1" href="/buytask/">Вакансии</a> </div> </div> <div class="index__macbook wow animated fadeInUp"><img class="img-fluid" src="images/macbook.png" alt=""></div> <a class="button button-1" href="/login/">Вход</a> <a class="button button-1" href="/reg/">Регистрация</a> </div> </section> <!-- Кнопки main (end) --> <!-- Advantages (start) --> <section class="index__advantages"> <div class="container"> <div class="section__title"><h2>Почему нам стоит доверять?</h2></div> <div class="index__advantages__items"> <div class="row"> <div class="col-lg-4"> <div class="adv__item wow animated fadeInUp" data-wow-delay="0.2s"> <div class="icon"><div class="bg-adv bg-adv_icon_1"></div></div> <div class="text"> <h3>Это бесплатно</h3> <p>�&nbsp;егистрация и размещение аккаунтов абсолютно бесплатные, мы лишь взымаем небольшой процент с ваших продаж.</p> </div> </div> </div> <div class="col-lg-4"> 
<div class="adv__item wow animated fadeInUp" data-wow-delay="0.4s"> <div class="icon"><div class="bg-adv bg-adv_icon_2"></div></div> <div class="text"> <h3>Безопасность</h3> <p>После покупки аккаунт проверяется нашим ботом и в случае ошибки деньги будут возвращены вам.</p> </div> </div> </div> <div class="col-lg-4"> <div class="adv__item wow animated fadeInUp" data-wow-delay="0.6s"> <div class="icon"><div class="bg-adv bg-adv_icon_3"></div></div> <div class="text"> <h3>Отзывчивая поддержка</h3> <p>Наши менеджеры работают 24/7 без выходных и всегда готовы ответить на ваши вопросы и помочь с решением возникших проблем.</p> </div> </div> </div> </div> <div class="row"> <div class="col-lg-4 offset-lg-2"> <div class="adv__item wow animated fadeInUp" data-wow-delay="0.8s"> <div class="icon"><div class="bg-adv bg-adv_icon_4"></div></div> <div class="text"> <h3>Большой спрос</h3> <p>Мы вкладываем огромные силы и вложения в продвижения нашего магазина, чтобы обойти всех конкурентов и доказать, что мы лучшие.</p> </div> </div> </div> <div class="col-lg-4"> <div class="adv__item wow animated fadeInUp" data-wow-delay="1s"> <div class="icon"><div class="bg-adv bg-adv_icon_5"></div></div> <div class="text"> <h3>Бесплатный софт</h3> <p>Продавая аккаунты у нас, вы получите бесплатный чекер, и вам не придется платить за него.</p> </div> </div> </div> </div> </div> <center><a class="button button-1 wow animated zoomIn" href="/register/">Зарегистрироваться</a></center> </div> </section> <!-- Advantages (end) --> <!-- Низ (start) <footer class="footer wow animated fadeInUp"> <div class="container"> <div class="row d-flex justify-content-between"> <div class="footer__info"> <div class="footer__info__logo"><a href="">SAMP Store</a></div> <div class="footer__info__copy"> <span class="rights">All rights reserved © 2018-2023<br></span> <span class="dev">Контакты поддержки:<br> <a target="_blank" href="https://vk.com/berkut_tnt">vk.com</a> | <a href="/cdn-cgi/l/email-protection" class="__cf_email__" data-cfemail="7312171e1a1d3300121e035e00071c01165d0106">[email�&nbsp;protected]</a></span> <a href="https://www.free-kassa.ru/"><img src="images/18.png"></a> </div> </div> <ul class="footer__menu"> <li><a href="/catalog/">Модификации</a></li> <li><a href="/money/">Разработки</a></li> <li><a href="/register/">Зарегистрироваться</a></li> <li><a href="/login/">Войти</a></li> </ul> </div> </div> </footer> <!-- Footer (end) --> <!-- Load styles and scripts --> <script data-cfasync="false" src="js/email-decode.min.js"></script><script src="js/scripts.min.js" type="25e3660b161ee406eb08dbd9-text/javascript"></script> <script src="js/wow.min.js" type="25e3660b161ee406eb08dbd9-text/javascript"></script> <script src="js/main.js" type="25e3660b161ee406eb08dbd9-text/javascript"></script> <script type="25e3660b161ee406eb08dbd9-text/javascript">new WOW().init();</script> <script src="js/rocket-loader.min.js" data-cf-settings="25e3660b161ee406eb08dbd9-|49" defer=""></script> </body></html> Ответь кодом, в виде кода. Разработай систему регистрации и авторизации на этом сайте.
70b4f9934e6c38da934fbb4132152d0f
{ "intermediate": 0.4706020653247833, "beginner": 0.40538567304611206, "expert": 0.12401226907968521 }
47,121
How do I calculate every subset of a set, but with repeating numbers
193f034c33b9c20283d66472134002b3
{ "intermediate": 0.33010467886924744, "beginner": 0.27430689334869385, "expert": 0.39558839797973633 }
47,122
Who are you?
74f6831da3b0fac24cfbb2b24ba2470b
{ "intermediate": 0.3994603753089905, "beginner": 0.2791310250759125, "expert": 0.32140862941741943 }
47,123
hi please write to me a complete code for a simple website that use rust smart contract of solana to create an artissana shop
775abc45cb8553acbf46e3470075f76b
{ "intermediate": 0.501025915145874, "beginner": 0.16395829617977142, "expert": 0.33501580357551575 }
47,124
Waria-Ki, a Confectionery Manufacturing company has several departments. A department may have many employees but must have at least seven employees. Every employee works for one and only one department. Every department has a manager. Employee’s details include, employee number, first name, last name, middle initial, gender, address, hobbies and salary. Department details include department number, department name, location and a phone number. The department number and department name are both unique identifiers of a department. Waria-Ki also tracks dependents of each employee. A dependent must be associated with one and only one employee. Some employees will not have dependents, while others will have many dependents. For all dependents, dependent number, first and last names and gender are recorded. Each employee is assigned an office. Some employees do share offices, but an employee must be allocated an office. Office details include Office number, location and office phone number. Some employees are assigned a company car. The company cars are identified by their vehicle id, and have a license plate number, make, model, and year of manufacture. Waria-Ki manufactures many sugar confectionery products including gum and jelly products, hard sweets, lollipops and Chewables. Each of these products are produced by a particular department. Product details include product number, description, manufacture date and the current price. The products are usually distributed for storage in one or many of the company’s warehouses. A warehouse may store one or many type of products. A warehouse details include code, location and phone number. Waria-Ki products are very popular among many kiosk owners. Each kiosk owners can buy any number of products, and each product can be bought by any number of kiosk owners. Kiosk owner’s details include code, last name, first name, kiosk location, address and email. Tasks (a) Given the information provided on the Waria-Ki background: i. Draw the fully labelled and implementable Crow’s Foot ERD for the requirements provided. Include all the entities, relationships, optionalities and multiplicities. ii. Translate the ERD that you have developed into a relational schema (Tables). Make sure that you identify all the primary and foreign keys. iii. Create an Access Control (Authorization) Matrix for the tables in your relational Schema. (b) Implement your Database design using Oracle SQL*Plus. i. Use at least one sequence ii. List of your SQL DDL commands (with the necessary integrity and domain constraints). iii. Populate the database. Sample of tuples for each relation (for each relation about 3 records). iv. Try five possible queries and Five sub-queries on the database tables. (c) Add some advanced features including i. Create 2 Views (involving two different relations and with different limits to modifications) ii. Create 2 different procedures iii. Create 2 different synonyms iv. Create 2 different triggers v. Index two of your tables vi. Allow the user system to use and to allow other users to manipulate records in any of your project tables. He should also allow other users to access your tables.
5f7ba2cf85a27e7e43e2b47b46d801d3
{ "intermediate": 0.4283900856971741, "beginner": 0.3186377286911011, "expert": 0.25297218561172485 }
47,125
what is my best option for crossplatform development as a hobby project?
562ac41ec00bf5028dbd8597f6ccf7a7
{ "intermediate": 0.5857173204421997, "beginner": 0.16721026599407196, "expert": 0.24707239866256714 }
47,126
https://raw.githubusercontent.com/dropcreations/Manzana-Apple-Music-Lyrics/main/assets/preview_premium.mp4 creating an apple music lyrics play in video format without audio but only using lyrics.lrc format with glow letter by letter of current line and turn that line to previous line as blur and converting next line from blur to glow letter by letter accordingly throughout write python program
6e5865feb3d37e88eb6ca9148ab5d568
{ "intermediate": 0.32481345534324646, "beginner": 0.17441990971565247, "expert": 0.5007666349411011 }
47,127
obsidian dataview get files which has link to current file
b4963203e95264d25326ae5e1ae6cea8
{ "intermediate": 0.5053612589836121, "beginner": 0.1779302954673767, "expert": 0.3167084753513336 }
47,128
obsidian dataview get files which has link to current file no talking, only query
1095e9058651942c32b6e7a75327fccb
{ "intermediate": 0.5151592493057251, "beginner": 0.23279553651809692, "expert": 0.252045214176178 }
47,129
make it work def print1(): print(1) def print2(): print(2) def print3(): print(3) func = random.choice(print1,print2, print3) func()
ad79decbbcd3609d1ed3f6edb16c9874
{ "intermediate": 0.376377671957016, "beginner": 0.40345028042793274, "expert": 0.2201719880104065 }
47,130
How to archive in .rar format using 7z on linix terminal
182830a04e95ab3e32f78a17d66e1d7b
{ "intermediate": 0.3477896451950073, "beginner": 0.2479284107685089, "expert": 0.4042819142341614 }
47,131
write a complex python code
c00173cd1dbdb19e4e74cc632386eee3
{ "intermediate": 0.23497506976127625, "beginner": 0.37202417850494385, "expert": 0.3930007517337799 }
47,132
как мне сделать github workflows build и publish
8459dcea9cd01372c598c64ded9eb3f0
{ "intermediate": 0.5031489133834839, "beginner": 0.13733039796352386, "expert": 0.35952070355415344 }
47,133
Run actions/setup-dotnet@v1 /home/runner/work/_actions/actions/setup-dotnet/v1/externals/install-dotnet.sh --version 8.1.101 dotnet-install: Note that the intended use of this script is for Continuous Integration (CI) scenarios, where: dotnet-install: - The SDK needs to be installed without user interaction and without admin rights. dotnet-install: - The SDK installation doesn't need to persist across multiple CI runs. dotnet-install: To set up a development environment or to run apps, use installers rather than this script. Visit https://dotnet.microsoft.com/download to get the installer. dotnet-install: Attempting to download using primary link https://dotnetcli.azureedge.net/dotnet/Sdk/8.1.101/dotnet-sdk-8.1.101-linux-x64.tar.gz curl: (22) The requested URL returned error: 404 dotnet-install: The resource at primary link 'https://dotnetcli.azureedge.net/dotnet/Sdk/8.1.101/dotnet-sdk-8.1.101-linux-x64.tar.gz' is not available. dotnet-install: Attempting to download using primary link https://dotnetbuilds.azureedge.net/public/Sdk/8.1.101/dotnet-sdk-8.1.101-linux-x64.tar.gz curl: (22) The requested URL returned error: 404 dotnet-install: The resource at primary link 'https://dotnetbuilds.azureedge.net/public/Sdk/8.1.101/dotnet-sdk-8.1.101-linux-x64.tar.gz' is not available. dotnet_install: Error: Could not find `.NET Core SDK` with version = 8.1.101 dotnet_install: Error: Refer to: https://aka.ms/dotnet-os-lifecycle for information on .NET Core support Error: The process '/home/runner/work/_actions/actions/setup-dotnet/v1/externals/install-dotnet.sh' failed with exit code 1 name: .NET Build and Publish on: push: branches: [ main ] pull_request: branches: [ main ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Setup .NET Core uses: actions/setup-dotnet@v1 with: dotnet-version: '8.1.101' # Укажите необходимую версию - name: Install dependencies run: dotnet restore - name: Build run: dotnet build --no-restore -c Release - name: Test run: dotnet test --no-build -c Release - name: Publish run: dotnet publish -c Release -o out - name: Upload Artifacts uses: actions/upload-artifact@v2 with: name: published-app path: out/**/*
0f8db11daa1ed5ee63ef58b33d203743
{ "intermediate": 0.2750646770000458, "beginner": 0.44948792457580566, "expert": 0.27544736862182617 }
47,134
using yt-dlp and python use check_formats "selected" to find if video has format 251
d5b332208e0febad0d6e7bd80a5c8792
{ "intermediate": 0.4405210316181183, "beginner": 0.22475677728652954, "expert": 0.3347221612930298 }
47,135
c:\Users\14802\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The class `langchain_community.llms.openai.OpenAI` was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import OpenAI`. warn_deprecated(
9f0e4092c2fd1df31c222651459a6235
{ "intermediate": 0.3215694725513458, "beginner": 0.2778814733028412, "expert": 0.400549054145813 }
47,136
how to make it work private void Test<T>(IShopItem<T> item) where T : BaseItemConfig { IShopItem<BaseItemConfig> test = item; }
e070aed7faa32ebe5fad44a095401af0
{ "intermediate": 0.4825517237186432, "beginner": 0.3687817454338074, "expert": 0.14866656064987183 }
47,137
can I delete a specific value from a HashSet in Rust? if I have this [1,2,3,4,5,6] and only want to end up with [1,2,3,5,6]
c6b3049a6e25ef277963e1612ffb6f3a
{ "intermediate": 0.5565566420555115, "beginner": 0.15894146263599396, "expert": 0.2845018804073334 }
47,138
error SocketException: Failed host lookup:OS Error: No address associated with hostname, errno = 7)
9a522b103a399d070110b4a5fbcd746c
{ "intermediate": 0.30477333068847656, "beginner": 0.2989290952682495, "expert": 0.3962976038455963 }
47,139
how can I "grep" exclusive? Meaning that I have a file with a lot of rows and I want to preserve everything minus the line I am grepping
071b13f1c1a033995453e0a195070c21
{ "intermediate": 0.45110586285591125, "beginner": 0.1949581652879715, "expert": 0.35393592715263367 }
47,140
https://huggingface.co/spaces/yuntian-deng/ChatGPT4
9debb33b596ecdd00c3a148887bf2fee
{ "intermediate": 0.3405991196632385, "beginner": 0.2803560197353363, "expert": 0.37904486060142517 }
47,141
uninstall app my developed app from emulator and reinstall the apk
8fe3ff56b5542f89207fd2d8b11b3e5c
{ "intermediate": 0.3904098868370056, "beginner": 0.23887450993061066, "expert": 0.37071555852890015 }
47,142
Let's chat only with 0 and 1.
692f86e0168fd7250bec595d0bfd57f1
{ "intermediate": 0.36056268215179443, "beginner": 0.17136676609516144, "expert": 0.46807053685188293 }
47,143
assets: List<Asset>.from(json["assets"]!.map((x) => Asset.fromJson(x))), Null check operator used on a null value
9d54f7380facd40c5de7591faad31680
{ "intermediate": 0.43340492248535156, "beginner": 0.2547285556793213, "expert": 0.31186655163764954 }
47,144
Create a method removeEvens that removes all even elements from an ArrayList of Integers. Once the method is created, call it on the existing ArrayList in the main method, then print the results. import java.util.ArrayList; public class Odds { public static void main(String[] args) { ArrayList<Integer> odds = new ArrayList<Integer>(); // Pre-load the array list with values. for(int index = 1; index < 21; index++) { odds.add(index); odds.add(index); } // Call removeEvens on the ArrayList above // Print the ArrayList to verify the even numbers have been removed } public static void removeEvens(ArrayList<Integer> array) { } }
422b99aab8f3ba56b2864e2ec6054792
{ "intermediate": 0.32342395186424255, "beginner": 0.5213357210159302, "expert": 0.15524032711982727 }
47,145
keysight sourcemeter 2651 programming with pyvisa pulling current
48bfea4ddc0da1b1015bce2b4a5f3c01
{ "intermediate": 0.4244803190231323, "beginner": 0.2935464382171631, "expert": 0.2819732129573822 }
47,146
For each of the following Prolog predicates p(X, Y), provide a corresponding Haskell equation px=E where E is an expression over Haskell variable x. The expression should use only head and tail and be equivalent to the Prolog expression Y when p is called with an argument equivalent to X. p1([_|X], X). p2([_, _|X], X). p3([, X|], X). p4([, [X|]|_], X). p5([, [, X]|_], X). For example, given Prolog predicate p1([X|_], X), the Haskell "equivalent" of p1/2 would be p1 x = head x.
4616b53bd3529f3404c62af234c2dc98
{ "intermediate": 0.3524182438850403, "beginner": 0.31792429089546204, "expert": 0.3296574354171753 }
47,147
Output File (optional) Saves the task output to a file. If used with Output JSON or Output Pydantic, specifies how the output is saved. give an example
bb3ef7fe22429afa7223b9e54be5d2e7
{ "intermediate": 0.33120623230934143, "beginner": 0.2932072579860687, "expert": 0.37558650970458984 }
47,148
Create a method removeEvens that removes all even elements from an ArrayList of Integers. Once the method is created, call it on the existing ArrayList in the main method, then print the results. import java.util.ArrayList; public class Odds { public static void main(String[] args) { ArrayList<Integer> odds = new ArrayList<Integer>(); // Pre-load the array list with values. for(int index = 1; index < 21; index++) { odds.add(index); } // Call removeEvens on the ArrayList above removeEvens(odds); // Print the ArrayList to verify the even numbers have been removed System.out.println(odds); } public static void removeEvens(ArrayList<Integer> array) { for (int i = array.size() - 1; i >= 0; i--) { if(array.get(i) % 2 == 0) { array.remove(i); } } } }
5c7b2f18c081d50d5b5fb5ebffb58d35
{ "intermediate": 0.3536384105682373, "beginner": 0.4950121343135834, "expert": 0.15134942531585693 }
47,149
Magic Squares A n x n matrix that is filled with numbers 1, 2, 3….n2. The sum of the elements in each row, column and the two diagonals is the same value. Option 1 Write a program that reads in n2 values from the keyboard and tests to see if they form a magic square when arranged in a square matrix. You need to test for three conditions: Did the user enter n2 numbers for some n? Do each of the numbers 1, 2, 3….n2 occur exactly once in the user input? When put into a square, are the sums of the rows, columns and diagonals equal to each other? IF the size of the input is a square, then test whether all numbers between 1 and n2 are present. Then compute the row, column and diagonal sums. Implement a class Square with methods public void add(int i) public boolean isMagic()
fd873d53bbd5a96c50c2b4484e1b5807
{ "intermediate": 0.426045298576355, "beginner": 0.27187684178352356, "expert": 0.30207788944244385 }
47,150
import java.util.*; /** * Your implementation of various different graph algorithms. * * @author Royce Arockiasamy * @userid rarockiasamy6 * @GTID 903754593 * @version 1.0 */ public class GraphAlgorithms { /** * Performs a breadth first search (bfs) on the input graph, starting at * the parameterized starting vertex. * * When exploring a vertex, explore in the order of neighbors returned by * the adjacency list. Failure to do so may cause you to lose points. * * You may import/use java.util.Set, java.util.List, java.util.Queue, and * any classes that implement the aforementioned interfaces, as long as they * are efficient. * * The only instance of java.util.Map that you may use is the * adjacency list from graph. DO NOT create new instances of Map * for BFS (storing the adjacency list in a variable is fine). * * DO NOT modify the structure of the graph. The graph should be unmodified * after this method terminates. * * @param <T> the generic typing of the data * @param start the vertex to begin the bfs on * @param graph the graph to search through * @return list of vertices in visited order * @throws IllegalArgumentException if any input is null, or if start * doesn't exist in the graph */ public static <T> List<Vertex<T>> bfs(Vertex<T> start, Graph<T> graph) { if (start == null || graph == null) { throw new IllegalArgumentException("Start vertex or graph is null"); } if (!graph.getAdjList().containsKey(start)) { throw new IllegalArgumentException("Start vertex doesn't exist in the graph"); } Set<Vertex<T>> visited = new HashSet<>(); List<Vertex<T>> vertices = new ArrayList<>(); Queue<Vertex<T>> queue = new LinkedList<>(); queue.add(start); visited.add(start); while (!queue.isEmpty()) { Vertex<T> curr = queue.poll(); vertices.add(curr); for (VertexDistance<T> adj : graph.getAdjList().get(curr)) { if (!visited.contains(adj.getVertex())) { queue.add(adj.getVertex()); visited.add(adj.getVertex()); } } } return vertices; } /** * Performs a depth first search (dfs) on the input graph, starting at * the parameterized starting vertex. * * When exploring a vertex, explore in the order of neighbors returned by * the adjacency list. Failure to do so may cause you to lose points. * * *NOTE* You MUST implement this method recursively, or else you will lose * all points for this method. * * You may import/use java.util.Set, java.util.List, and * any classes that implement the aforementioned interfaces, as long as they * are efficient. * * The only instance of java.util.Map that you may use is the * adjacency list from graph. DO NOT create new instances of Map * for DFS (storing the adjacency list in a variable is fine). * * DO NOT modify the structure of the graph. The graph should be unmodified * after this method terminates. * * @param <T> the generic typing of the data * @param start the vertex to begin the dfs on * @param graph the graph to search through * @return list of vertices in visited order * @throws IllegalArgumentException if any input is null, or if start * doesn't exist in the graph */ public static <T> List<Vertex<T>> dfs(Vertex<T> start, Graph<T> graph) { if (start == null || graph == null) { throw new IllegalArgumentException("Start vertex or graph is null"); } if (!graph.getAdjList().containsKey(start)) { throw new IllegalArgumentException("Start vertex not in graph"); } List<Vertex<T>> list = new ArrayList<>(); Set<Vertex<T>> visited = new HashSet<>(); rDfs(start, graph, list, visited); return list; } /** * helper method for the dfs algorithm. 
* @param start dfs starts on this node * @param graph graph consisting of all the nodes * @param visited set to keep track of visited nodes * @param list vertices list */ private static <T> void rDfs(Vertex<T> start, Graph<T> graph, List<Vertex<T>> list, Set<Vertex<T>> visited) { visited.add(start); list.add(start); for (VertexDistance<T> adj : graph.getAdjList().get(start)) { if (!visited.contains(adj.getVertex())) { rDfs(adj.getVertex(), graph, list, visited); } } } /** * Finds the single-source shortest distance between the start vertex and * all vertices given a weighted graph (you may assume non-negative edge * weights). * * Return a map of the shortest distances such that the key of each entry * is a node in the graph and the value for the key is the shortest distance * to that node from start, or Integer.MAX_VALUE (representing * infinity) if no path exists. * * You may import/use java.util.PriorityQueue, * java.util.Map, and java.util.Set and any class that * implements the aforementioned interfaces, as long as your use of it * is efficient as possible. * * You should implement the version of Dijkstra's where you use two * termination conditions in conjunction. * * 1) Check if all of the vertices have been visited. * 2) Check if the PQ is empty. * * DO NOT modify the structure of the graph. The graph should be unmodified * after this method terminates. * * @param <T> the generic typing of the data * @param start the vertex to begin the Dijkstra's on (source) * @param graph the graph we are applying Dijkstra's to * @return a map of the shortest distances from start to every * other node in the graph * @throws IllegalArgumentException if any input is null, or if start * doesn't exist in the graph. */ public static <T> Map<Vertex<T>, Integer> dijkstras(Vertex<T> start, Graph<T> graph) { if (start == null || graph == null) { throw new IllegalArgumentException("Start vertex or graph is null"); } if (!graph.getVertices().contains(start)) { throw new IllegalArgumentException("Start vertex not in graph"); } Set<Vertex<T>> visited = new HashSet<>(); Map<Vertex<T>, Integer> weights = new HashMap<>(); PriorityQueue<VertexDistance<T>> priorityQueue = new PriorityQueue<>(); VertexDistance<T> starter = new VertexDistance<>(start, 0); priorityQueue.add(starter); for (Vertex<T> vertex : graph.getVertices()) { weights.put(vertex, Integer.MAX_VALUE); } int dist = 0; while (!(priorityQueue.isEmpty()) && visited.size() < graph.getVertices().size()) { VertexDistance<T> pointer = priorityQueue.poll(); if (!visited.contains(pointer.getVertex())) { dist = pointer.getDistance(); visited.add(pointer.getVertex()); weights.put(pointer.getVertex(), dist); } for (VertexDistance<T> vertex : graph.getAdjList().get(pointer.getVertex())) { if (!visited.contains(vertex.getVertex())) { VertexDistance<T> temp = new VertexDistance<>(vertex.getVertex(), dist + vertex.getDistance()); priorityQueue.add(temp); } } } return weights; } /** * Runs Prim's algorithm on the given graph and returns the Minimum * Spanning Tree (MST) in the form of a set of Edges. If the graph is * disconnected and therefore no valid MST exists, return null. * * You may assume that the passed in graph is undirected. In this framework, * this means that if (u, v, 3) is in the graph, then the opposite edge * (v, u, 3) will also be in the graph, though as a separate Edge object. * * The returned set of edges should form an undirected graph. 
This means * that every time you add an edge to your return set, you should add the * reverse edge to the set as well. This is for testing purposes. This * reverse edge does not need to be the one from the graph itself; you can * just make a new edge object representing the reverse edge. * * You may assume that there will only be one valid MST that can be formed. * * You should NOT allow self-loops or parallel edges in the MST. * * You may import/use PriorityQueue, java.util.Set, and any class that * implements the aforementioned interface. * * DO NOT modify the structure of the graph. The graph should be unmodified * after this method terminates. * * The only instance of java.util.Map that you may use is the * adjacency list from graph. DO NOT create new instances of Map * for this method (storing the adjacency list in a variable is fine). * * @param <T> the generic typing of the data * @param start the vertex to begin Prims on * @param graph the graph we are applying Prims to * @return the MST of the graph or null if there is no valid MST * @throws IllegalArgumentException if any input is null, or if start * doesn't exist in the graph. */ public static <T> Set<Edge<T>> prims(Vertex<T> start, Graph<T> graph) { if (start == null) { throw new IllegalArgumentException("The start vertex is null"); } if (graph == null) { throw new IllegalArgumentException("The graph is null"); } if (!graph.getVertices().contains(start)) { throw new IllegalArgumentException("The start is not in the graph"); } Set<Vertex<T>> visited = new HashSet<>(); Set<Edge<T>> edgeSet = new HashSet<>(); PriorityQueue<Edge<T>> pq = new PriorityQueue<>(); Map<Vertex<T>, List<VertexDistance<T>>> adjList = graph.getAdjList(); for (VertexDistance<T> v : adjList.get(start)) { Edge<T> curr = new Edge<>(start, v.getVertex(), v.getDistance()); pq.add(curr); } visited.add(start); while (!pq.isEmpty() && visited.size() < graph.getVertices().size()) { Edge<T> curr = pq.poll(); if (!visited.contains(curr.getV())) { visited.add(curr.getV()); Edge<T> edge = new Edge<>(curr.getU(), curr.getV(), curr.getWeight()); edgeSet.add(edge); Edge<T> edge1 = new Edge<>(curr.getV(), curr.getU(), curr.getWeight()); edgeSet.add(edge1); for (VertexDistance<T> vx : adjList.get(curr.getV())) { if (!visited.contains(vx.getVertex())) { Edge<T> pointer = new Edge<>(curr.getV(), vx.getVertex(), vx.getDistance()); pq.add(pointer); } } } } if (edgeSet.size() < graph.getVertices().size() - 1) { return null; } return edgeSet; } } correct the code for prim's algorithm and only gimme code for that
e4ca2d6bf9fe927209d191614ae2ea04
{ "intermediate": 0.5319790244102478, "beginner": 0.30302512645721436, "expert": 0.16499587893486023 }
47,151
training material on the methodology of extracting specific text from a string using UiPath

33c905ab4fb48aee055ee9cc3a15c13d
{ "intermediate": 0.17857202887535095, "beginner": 0.13683825731277466, "expert": 0.6845897436141968 }
47,152
c# dictionary by int or string at the same time
b02248b2d6d368ac843fbba659894699
{ "intermediate": 0.38135963678359985, "beginner": 0.45565757155418396, "expert": 0.16298285126686096 }
47,153
Magic Squares
An n x n matrix that is filled with the numbers 1, 2, 3, …, n². The sum of the elements in each row, column and the two diagonals is the same value.
Write a program that reads in n² values from the keyboard and tests to see if they form a magic square when arranged in a square matrix. You need to test for three conditions:
Did the user enter n² numbers for some n?
Does each of the numbers 1, 2, 3, …, n² occur exactly once in the user input?
When put into a square, are the sums of the rows, columns and diagonals equal to each other?
If the size of the input is a square, then test whether all numbers between 1 and n² are present. Then compute the row, column and diagonal sums. Implement a class Square with methods public void add(int i) and public boolean isMagic().
This algorithm only works if n is odd. Place a 1 in the middle of the bottom row. Put the number 2 (we will call it k) at the top of the next column. Then follow this pattern: place k+1 into the square to the right and down, wrapping around the borders. However, if the square to the right and down has already been filled, or if you are in the lower right corner, then you must move to the square straight up instead. Here is a 5 x 5 version that follows this algorithm.
11 18 25  2  9
10 12 19 21  3
 4  6 13 20 22
23  5  7 14 16
17 24  1  8 15
Write a program whose input is the number n and whose output is the magic square of order n if n is odd. Implement a class MagicSquare with a constructor that constructs the square and a toString method that returns a representation of the square.
4f4ff555d430feceef3314acc3ec0bb6
{ "intermediate": 0.2207583338022232, "beginner": 0.14017243683338165, "expert": 0.6390691995620728 }
47,154
Write a program that reads in n² values from the keyboard and tests to see if they form a magic square when arranged in a square matrix. You need to test for three conditions:
Did the user enter n² numbers for some n?
Does each of the numbers 1, 2, 3, …, n² occur exactly once in the user input?
When put into a square, are the sums of the rows, columns and diagonals equal to each other?
If the size of the input is a square, then test whether all numbers between 1 and n² are present. Then compute the row, column and diagonal sums. Implement a class Square with methods public void add(int i) and public boolean isMagic().
1d7094e059cd4a69af35e08b7eb8c380
{ "intermediate": 0.5185003876686096, "beginner": 0.15137170255184174, "expert": 0.3301279544830322 }
47,155
How do I run a terminal? "For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)." I'm running someone else's Google Colab script; with that in mind, how do I execute this?
7e37d13c55ad81205233f4248c7149e6
{ "intermediate": 0.356963187456131, "beginner": 0.3461777865886688, "expert": 0.2968589663505554 }
47,156
For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces). How do I go about this?
8d1eee69134ff7d9d349fa42cf2c5d28
{ "intermediate": 0.4292006492614746, "beginner": 0.2078474760055542, "expert": 0.3629518747329712 }
47,157
What is the error? C:\Users\Perelygin>ssh root@192.168.18.2 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. The fingerprint for the ECDSA key sent by the remote host is SHA256:17Erak0p+QDymxDIOunFB9gf0DRdQlOnFUrGVSosvk0. Please contact your system administrator. Add correct host key in C:\\Users\\Perelygin/.ssh/known_hosts to get rid of this message. Offending ECDSA key in C:\\Users\\Perelygin/.ssh/known_hosts:5 ECDSA host key for 192.168.18.2 has changed and you have requested strict checking. Host key verification failed.
ff30e156306bd96eba3996558697e19e
{ "intermediate": 0.34663665294647217, "beginner": 0.33951300382614136, "expert": 0.3138503134250641 }
47,158
For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces). How do I go about this? I don't understand code and don't have any basic or prior knowledge of Python, coding, terminals, etc.
ef59fa2e622dd77939d33074d898e0bc
{ "intermediate": 0.4752129316329956, "beginner": 0.34790945053100586, "expert": 0.17687758803367615 }
47,159
Given a table kitchen_items:
   cutlery       amount
1  knife         3
2  spoon         13
3  fork          9
4  toothpick     49
5  straw         32
6  chopsticks    14
7  nutcracker    1
8  spatula       2
9  rolling pin   1
10 honey dipper  1
Fetch all of the kitchen items where the amount is less than 3. Rename the cutlery column to silverware, in SQL.
1e19717286cd79611af7a2fc966b07ca
{ "intermediate": 0.38449159264564514, "beginner": 0.3003693222999573, "expert": 0.3151390850543976 }
47,160
Fetch all of the cellphone models that start with the letter m and whose 3rd letter is o, where the price is between 1000 and 1500 and they support 5G. Return only the cellphone model and rename that column to id, in SQL.
7934885bd9f31d93e3154ba2c0583401
{ "intermediate": 0.3536529242992401, "beginner": 0.2555729150772095, "expert": 0.390774130821228 }
47,161
if num_devices == 1: low_zero = False
a2b2ae9fbd367a88868c0fc21078750d
{ "intermediate": 0.3209393322467804, "beginner": 0.3560725152492523, "expert": 0.3229881227016449 }
47,162
from langchain import HuggingFaceHub llm_huggingface=HuggingFaceHub(repo_id="mistral-community/Mixtral-8x22B-v0.1",model_kwargs={"temperature":0,"max_length":64})
dca219c385f1e6c4c8748e4d45c85f17
{ "intermediate": 0.3516319990158081, "beginner": 0.24084368348121643, "expert": 0.40752434730529785 }
47,163
Hi, can you create a simple HTML5/Bootstrap 5/JavaScript ffmpeg 6.0 curves tool with an input image button for the image file input area to adjust the image? The image needs to be viewed and manipulated in real time while sliders are adjusted, using ffmpeg -filter:v curves with full RGB options with sliders. Add a convert button that will convert the final code, wherever the sliders are adjusted, into an ffmpeg 6.0 arg; highlight the text with a green background at the bottom <br> of the convert button. Add a clear image and reset button at the bottom of the highlighted text from the arg. The output arg should look formatted similar to this: ffmpeg -i image_name.exr -filter:v curves=r='0/0.11 .42/.51 1/0.95':g='0.50/0.48':b='0/0.22 .49/.44 1/0.8'
056e673c00b9caf3dfe44017473586b7
{ "intermediate": 0.6671550869941711, "beginner": 0.10523971915245056, "expert": 0.22760522365570068 }
47,164
For each of the following Prolog expressions, write the equivalent Haskell expression without any use of [ and ] other than for the empty list []. Identify whether the expression is allowed by Haskell and, if not, explain why. 10-points [0|1]. [0, 1]. [0|[1]]. [0, [1]]. [0|[1|[2|[]]]].
84f4d3e0046d3e731b54253b5f132b15
{ "intermediate": 0.3475254774093628, "beginner": 0.3346267640590668, "expert": 0.31784772872924805 }
47,165
What is this error? [root@localhost network-scripts]# ping ya.ru ping: ya.ru: Temporary failure in name resolution
a13fc6063f101ab1a81d407fe353a4fb
{ "intermediate": 0.3546752333641052, "beginner": 0.33347341418266296, "expert": 0.3118513524532318 }
47,166
My Ubuntu 22.04 boot-up time takes at least 2 and a half minutes. How can I improve the boot-up time?
e72f1f4426601edd1ea186de712a9f02
{ "intermediate": 0.3017406165599823, "beginner": 0.35393279790878296, "expert": 0.34432655572891235 }
47,167
Write a program that reads in n² values from the keyboard and tests to see if they form a magic square when arranged in a square matrix. You need to test for three conditions:
Did the user enter n² numbers for some n?
Does each of the numbers 1, 2, 3, …, n² occur exactly once in the user input?
When put into a square, are the sums of the rows, columns and diagonals equal to each other?
If the size of the input is a square, then test whether all numbers between 1 and n² are present. Then compute the row, column and diagonal sums. Implement a class Square with methods public void add(int i) and public boolean isMagic().
b7050df9ea4721303698e8b54bd300bb
{ "intermediate": 0.5185003876686096, "beginner": 0.15137170255184174, "expert": 0.3301279544830322 }
47,168
Warning: count(): Parameter must be an array or an object that implements Countable in C:\xampp\htdocs\kamna\getinserData.php on line 82. What is the problem?
d216bd4e205423359714aa44e3690200
{ "intermediate": 0.48527398705482483, "beginner": 0.2294946014881134, "expert": 0.2852313816547394 }
47,169
package com.mns.returns.producer; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Value; import org.springframework.kafka.core.KafkaTemplate; import org.springframework.kafka.support.SendResult; import org.springframework.stereotype.Component; import org.springframework.util.concurrent.ListenableFuture; import org.springframework.util.concurrent.ListenableFutureCallback; @Component public class KafkaProducer { private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProducer.class); @Value("${kafka.topic.omsapi.wismr}") String topic; @Autowired KafkaTemplate<String, String> kafkaTemplate; public String sendCommunicationMessageToKafkaTopic(String message) { String responseStatus = "Not Sent"; try { LOGGER.info("Publishing communication message to KAFKA Topic---->{} ", topic); ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message); future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() { @Override public void onSuccess(SendResult<String, String> result) { LOGGER.debug( "Sent message=[" + message + "] with partition [" + result.getRecordMetadata().partition() + "] offset=[" + result.getRecordMetadata().offset() + "]"); } @Override public void onFailure(Throwable ex) { System.out.println("Unable to send message=[" + message + "] due to : " + ex.getMessage()); } }); LOGGER.info("Published communication message to KAFKA Topic::: " + topic); responseStatus = "Sent"; } catch (Exception ex) { LOGGER.info("Error in publishing communication message to KAFKA ::: " + ex); LOGGER.info("Error in publishing communication message to KAFKA ::: " + ex.getMessage()); LOGGER.info("Error in publishing communication message to KAFKA ::: " + ex.getLocalizedMessage()); } return responseStatus; } } this is the issue.....com.mns.oms.batch.writer.KafkaBatchWriter is not abstract and does not override abstract method write(org.springframework.batch.item.Chunk<? extends com.mns.oms.batch.model.BeamDataDTO>) in org.springframework.batch.item.ItemWriter......explain and fix this is the issue.....com.mns.oms.batch.writer.KafkaBatchWriter is not abstract and does not override abstract method write(org.springframework.batch.item.Chunk<? extends com.mns.oms.batch.model.BeamDataDTO>) in org.springframework.batch.item.ItemWriter......explain and fix
db60cb1f9061eed9782a425253ce6ab9
{ "intermediate": 0.48955607414245605, "beginner": 0.3333601653575897, "expert": 0.17708371579647064 }
47,170
Hi, can you create a simple HTML5/Bootstrap 5/JavaScript/WebGL ffmpeg 6.0 curves tool with an imageInput button for the image file input area to adjust the image? The image needs to be viewed and manipulated in real time while sliders are adjusted, using ffmpeg or WebGL -filter:v curves with full RGB options with sliders. Add a convert button that will convert the final code, wherever the sliders are adjusted, into an ffmpeg 6.0 arg; highlight the text with a green background at the bottom <br> of the convert button. Add a clear image and reset button at the bottom of the highlighted text from the arg. The output arg should look formatted similar to this: ffmpeg -i ${imageName} -filter:v curves=r='0/0.5 0.42/0.51 1/0.95':g='0.50/0.5':b='0/0.5 0.49/0.44 1/0.8'
deba5f3b0b677dfabf32024c138a1803
{ "intermediate": 0.7100542187690735, "beginner": 0.10785946249961853, "expert": 0.18208634853363037 }
47,171
from pdb2sql import interface from pdb2sql import pdb2sql pdb_db = pdb2sql('1a02.pdb') db = interface(pdb_db) itf_atom = db.get_contact_atoms(cutoff = 3) itf_atom_pair = db.get_contact_atoms(cutoff = 3, return_contact_pairs=True) print("interface atom:\n", itf_atom) print("interface residue:\n", itf_residue) print("interface residue pairs:\n", itf_residue_pair) Modify the above code to extract all interface residue related data from 1a02 pdb file and write all those data in a csv.
b5d103bc1223f46e7765dd6fe46499e7
{ "intermediate": 0.5162786841392517, "beginner": 0.2308029681444168, "expert": 0.2529183626174927 }
47,172
package com.mns.oms.batch.writer; import java.util.ArrayList; import java.util.List; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.batch.item.ItemWriter; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Value; import org.springframework.data.mongodb.core.MongoTemplate; import org.springframework.data.mongodb.core.query.Criteria; import org.springframework.data.mongodb.core.query.Query; import org.springframework.data.mongodb.core.query.Update; import org.springframework.kafka.core.KafkaTemplate; import org.springframework.stereotype.Component; import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.ObjectMapper; import com.mns.oms.batch.domain.CarrierData; import com.mns.oms.batch.model.BeamDataDTO; import com.mongodb.client.result.UpdateResult; /** * @author Mrinmoy Mandal * * Module: WISMR * * */ @Component public class KafkaBatchWriter implements ItemWriter<BeamDataDTO> { private static final Logger log = LoggerFactory.getLogger(KafkaBatchWriter.class); @Value("${kafka.topic.omsapi.beam}") private String topic; @Autowired MongoTemplate mongoTemplate; @Autowired private KafkaTemplate<String, String> kafkaTemplate; @Override public void write(List<? extends BeamDataDTO> items) throws Exception { List<String> idList = new ArrayList<>(); items.forEach(item -> { try { ObjectMapper objectMapper = new ObjectMapper(); String data = objectMapper.writeValueAsString(item); log.debug("Data to be sent to BEAM------->{}", data); kafkaTemplate.send(topic, data); idList.add(item.getMessageID().split("_")[2]); } catch (JsonProcessingException e) { // TODO Auto-generated catch block log.error("ERROR in SENDING DATA to BEAM---------------{}", e); } }); // Batch update if (idList.size() > 0) { Query recordsToUpdateQuery = Query.query(Criteria.where("_id").in(idList)); UpdateResult updtResult = mongoTemplate.updateMulti(recordsToUpdateQuery, Update.update("isProcessed", true), CarrierData.class); log.info("Rows matched to update records in DB after sending data to BEAM------------{}", updtResult.getMatchedCount()); log.info("Rows modified in DB after sending data to BEAM------------{}", updtResult.getModifiedCount()); } } } this is the issue.....com.mns.oms.batch.writer.KafkaBatchWriter is not abstract and does not override abstract method write(org.springframework.batch.item.Chunk<? extends com.mns.oms.batch.model.BeamDataDTO>) in org.springframework.batch.item.ItemWriter this is the issue.....com.mns.oms.batch.writer.KafkaBatchWriter is not abstract and does not override abstract method write(org.springframework.batch.item.Chunk<? extends com.mns.oms.batch.model.BeamDataDTO>) in org.springframework.batch.item.ItemWriter Update the above code and provide full implementation
4335c68e6b3406f810fd06e3387e1824
{ "intermediate": 0.36293989419937134, "beginner": 0.4405391812324524, "expert": 0.19652092456817627 }
47,173
package com.mns.oms.batch.writer; import java.util.ArrayList; import java.util.List; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.batch.item.ItemWriter; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Value; import org.springframework.data.mongodb.core.MongoTemplate; import org.springframework.data.mongodb.core.query.Criteria; import org.springframework.data.mongodb.core.query.Query; import org.springframework.data.mongodb.core.query.Update; import org.springframework.kafka.core.KafkaTemplate; import org.springframework.stereotype.Component; import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.ObjectMapper; import com.mns.oms.batch.domain.CarrierData; import com.mns.oms.batch.model.BeamDataDTO; import com.mongodb.client.result.UpdateResult; /** * @author Mrinmoy Mandal * * Module: WISMR * * */ @Component public class KafkaBatchWriter implements ItemWriter<BeamDataDTO> { private static final Logger log = LoggerFactory.getLogger(KafkaBatchWriter.class); @Value("${kafka.topic.omsapi.beam}") private String topic; @Autowired MongoTemplate mongoTemplate; @Autowired private KafkaTemplate<String, String> kafkaTemplate; @Override public void write(List<? extends BeamDataDTO> items) throws Exception { List<String> idList = new ArrayList<>(); items.forEach(item -> { try { ObjectMapper objectMapper = new ObjectMapper(); String data = objectMapper.writeValueAsString(item); log.debug("Data to be sent to BEAM------->{}", data); kafkaTemplate.send(topic, data); idList.add(item.getMessageID().split("_")[2]); } catch (JsonProcessingException e) { // TODO Auto-generated catch block log.error("ERROR in SENDING DATA to BEAM---------------{}", e); } }); // Batch update if (idList.size() > 0) { Query recordsToUpdateQuery = Query.query(Criteria.where("_id").in(idList)); UpdateResult updtResult = mongoTemplate.updateMulti(recordsToUpdateQuery, Update.update("isProcessed", true), CarrierData.class); log.info("Rows matched to update records in DB after sending data to BEAM------------{}", updtResult.getMatchedCount()); log.info("Rows modified in DB after sending data to BEAM------------{}", updtResult.getModifiedCount()); } } } this is the issue.....com.mns.oms.batch.writer.KafkaBatchWriter is not abstract and does not override abstract method write(org.springframework.batch.item.Chunk<? extends com.mns.oms.batch.model.BeamDataDTO>) in org.springframework.batch.item.ItemWriter......explain and fix this is the issue.....method does not override or implement a method from a supertype......explain and fix Update the above code and provide full implementation
2f7798a6f02d63e77941883f59bc73d5
{ "intermediate": 0.3784722089767456, "beginner": 0.4054439067840576, "expert": 0.21608386933803558 }
47,174
My workbook has over 50 worksheets. In the VBA code of each of the worksheets, I want to search for a specific line of text within the VBA code and replace it with multiple lines of VBA code. How can I do this?
852d599516b3bab84c92c21711042191
{ "intermediate": 0.3372982442378998, "beginner": 0.31684258580207825, "expert": 0.34585919976234436 }
47,175
Fetch all of the cellphone models that start with the letter m and whose 3rd letter is o, where the price is between 1000 and 1500 and they support 5G. Return only the cellphone model and rename that column to id, in SQL. Given below is the table cellphones:
   model     price  wifi_5g
1  mqopal    2590   1
2  mlop12    1293   1
3  maqw99    1490   0
4  qpola21   1092   1
5  hj52wdf   800    0
6  m1oa32    1392   1
7  12o09p    999    0
8  mtozavg   452    1
9  kflwp67   3098   0
10 nbgfert   1189   1
dccfbe734502506053fdad6c2698d545
{ "intermediate": 0.3331575393676758, "beginner": 0.3002822697162628, "expert": 0.3665601909160614 }
47,176
Write a program that reads in n² values from the keyboard and tests to see if they form a magic square when arranged in a square matrix. You need to test for three conditions:
Did the user enter n² numbers for some n?
Does each of the numbers 1, 2, 3, …, n² occur exactly once in the user input?
When put into a square, are the sums of the rows, columns and diagonals equal to each other?
If the size of the input is a square, then test whether all numbers between 1 and n² are present. Then compute the row, column and diagonal sums. Implement a class Square with methods public void add(int i) and public boolean isMagic().
e1e44ce83cec2f2f08ad0c299f0cd111
{ "intermediate": 0.522680938243866, "beginner": 0.14678078889846802, "expert": 0.3305383324623108 }
47,177
What is the error? [root@localhost etc]# systemctl star firewall.service Unknown command verb star.
f0bcb3aa6ec9e957cf7fb3460c24390b
{ "intermediate": 0.38686126470565796, "beginner": 0.3117620050907135, "expert": 0.30137673020362854 }
47,179
Mssql how to store query result
5aebd1d811208d0e09f9d5eebf803281
{ "intermediate": 0.5380074381828308, "beginner": 0.2561972439289093, "expert": 0.20579524338245392 }
47,180
Mssql how to make 2 pivots with same fields
87905da2eefd376fefdf607d81c225cf
{ "intermediate": 0.24414601922035217, "beginner": 0.18352419137954712, "expert": 0.5723298192024231 }
47,181
i have following code to train my NN model : # %% from sklearn.preprocessing import StandardScaler import pandas as pd import numpy as np from tensorflow import keras import joblib def calculate_targets_scaling_params(file_path): scaler = StandardScaler() for chunk in pd.read_csv(file_path, chunksize=10000): # Adjust chunksize based on your memory capacity filtered_chunk = chunk[['y_High_5d','y_Low_5d','y_Priority_5d']] # 'y_Priority_1d', # 'y_High_2d', 'y_Low_2d', 'y_Priority_2d', # 'y_High_3d', 'y_Low_3d', 'y_Priority_3d', # 'y_High_5d', 'y_Low_5d', 'y_Priority_5d' scaler.partial_fit(filtered_chunk) # Accumulate means and variances return scaler.mean_, scaler.var_ # %% x_scaler_loaded = joblib.load('nn_x_scaler.sav') y_scaler_loaded = joblib.load('nn_y_hlp5_scaler.sav') def data_generator(file_path, batch_size, x_scaler, y_scaler): chunksize = batch_size while True: # Loop forever, so the generator never terminates for chunk in pd.read_csv(file_path, chunksize=chunksize): # if data_type == 'train' and row_counter >= train_rows: # continue # Skip the rest if we are fetching training data but have reached the end of the train set # elif data_type == 'val' and row_counter < train_rows: # row_counter += len(chunk) # continue # Skip this chunk if we are fetching validation data but are still in the train range # Assuming your CSV has headers that match features/targets # Normalizing the features filtered_c = chunk.drop(['Date', 'Symbol'], axis=1) feature_data = filtered_c.drop([ 'y_High_1d', 'y_Low_1d', 'y_Priority_1d', 'y_High_2d', 'y_Low_2d', 'y_Priority_2d', 'y_High_3d', 'y_Low_3d', 'y_Priority_3d', 'y_High_5d', 'y_Low_5d', 'y_Priority_5d'], axis=1) target_data = filtered_c[['y_High_5d', 'y_Low_5d', 'y_Priority_5d' # , 'y_Low_1d', 'y_Priority_1d', # 'y_High_2d', 'y_Low_2d', 'y_Priority_2d', # 'y_High_3d', 'y_Low_3d', 'y_Priority_3d', # 'y_High_5d', 'y_Low_5d', 'y_Priority_5d' ]] feature_data_scaled = pd.DataFrame(x_scaler.transform(feature_data), columns=feature_data.columns) # Assuming target_data also needs to be scaled, apply scaler separately target_data_scaled = pd.DataFrame(y_scaler.transform(target_data), columns=target_data.columns) # Now, feature_data_scaled and target_data_scaled are both DataFrames, scaled and ready to use yield feature_data_scaled.values, target_data_scaled.values # row_counter += len(chunk) # %% from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Input import tensorflow as tf from tensorflow.keras.layers import BatchNormalization def build_model(): input_shape = (6427,) model = Sequential([ Dense(6427, activation='relu', input_shape = input_shape), Dropout(0.17), BatchNormalization(), Dense(4096, activation='relu'), Dropout(0.15), BatchNormalization(), Dense(2048, activation='relu'), Dropout(0.12), BatchNormalization(), Dense(1024, activation='relu'), Dropout(0.10), BatchNormalization(), Dense(512, activation='relu'), Dropout(0.05), BatchNormalization(), Dense(256, activation='relu'), BatchNormalization(), Dense(128, activation='relu'), BatchNormalization(), Dense(64, activation='relu'), BatchNormalization(), Dense(32, activation='relu'), BatchNormalization(), Dense(3), ]) optimizer = keras.optimizers.Adam(learning_rate=0.006) model.compile(optimizer='adam', loss='mse', # Use Mean Squared Error for regression metrics=['mae']) # Mean Absolute Error as an additional metric return model # %% # Instantiate the model model = build_model() model.summary() # %% import warnings warnings.filterwarnings(action='ignore', 
message='X has feature names, but StandardScaler was fitted without feature names') train_generator = data_generator(file_path, batch_size,x_scaler=x_scaler_loaded,y_scaler=y_scaler_loaded) total_samples = 301617 # Assuming same example size train_samples = int(total_samples) steps_per_epoch = train_samples // batch_size # Modify the model fitting call to include validation data model.fit( train_generator, steps_per_epoch=steps_per_epoch, epochs=45, ) currently my model not using any validation dataset when training change the code properly for implementing val set
363852c14c95f24789544f87fe3468a4
{ "intermediate": 0.3473767042160034, "beginner": 0.46195611357688904, "expert": 0.19066713750362396 }
47,182
I would like to click on cell H3 in my Excel sheet and open a text file 'Help & Comments.txt' in a window that is 800 by 600. Can you please write the VBA code?
3ab70a7016c0158045f7a4a0bdf4baee
{ "intermediate": 0.303143709897995, "beginner": 0.44209516048431396, "expert": 0.2547611594200134 }
47,183
The response shows a 404.
7375bd6bb733f753588504f3ddd41e01
{ "intermediate": 0.34294721484184265, "beginner": 0.31499385833740234, "expert": 0.342058926820755 }
47,184
Write the following sentences in quantification logic: All students in CSCI 170 can go to a concert if there exists a concert event that still has enough tickets available and there is a metro schedule that matches the concert event. The universe is x = students in USC, y = concert events, and z = metro schedules. Also define the following predicates: S(x) = x is a student in CSCI 170. T(y) = y has enough tickets. M(z,y) = z matches the concert y. G(x,y) = x goes to concert y.
1559548beda1047652f6b42e56be164d
{ "intermediate": 0.3371276259422302, "beginner": 0.28277504444122314, "expert": 0.38009732961654663 }
47,185
I have a local RAG pipeline using Private GPT, which uses the Mistral LLM. I want to use RAGAs to evaluate the pipeline using the Mixtral model. How do I do it?
9bf3bd856ac5dc67648c2359b9b2d814
{ "intermediate": 0.36875349283218384, "beginner": 0.1158137172460556, "expert": 0.5154328346252441 }
47,186
How to make 2 pivots for same fields
82aff880b520cf5571dbd4cae558a123
{ "intermediate": 0.20237936079502106, "beginner": 0.11936582624912262, "expert": 0.6782548427581787 }
47,187
i have following code to train my NN model : # %% from sklearn.preprocessing import StandardScaler import pandas as pd import numpy as np from tensorflow import keras import joblib def calculate_targets_scaling_params(file_path): scaler = StandardScaler() for chunk in pd.read_csv(file_path, chunksize=10000): # Adjust chunksize based on your memory capacity filtered_chunk = chunk[[‘y_High_5d’,‘y_Low_5d’,‘y_Priority_5d’]] scaler.partial_fit(filtered_chunk) # Accumulate means and variances return scaler.mean_, scaler.var_ # %% x_scaler_loaded = joblib.load(‘nn_x_scaler.sav’) y_scaler_loaded = joblib.load(‘nn_y_hlp5_scaler.sav’) def data_generator(file_path, batch_size, x_scaler, y_scaler): chunksize = batch_size while True: # Loop forever, so the generator never terminates for chunk in pd.read_csv(file_path, chunksize=chunksize): filtered_c = chunk.drop([‘Date’, ‘Symbol’], axis=1) feature_data = filtered_c.drop([ ‘y_High_1d’, ‘y_Low_1d’, ‘y_Priority_1d’, ‘y_High_2d’, ‘y_Low_2d’, ‘y_Priority_2d’, ‘y_High_3d’, ‘y_Low_3d’, ‘y_Priority_3d’, ‘y_High_5d’, ‘y_Low_5d’, ‘y_Priority_5d’], axis=1) target_data = filtered_c[[‘y_High_5d’, ‘y_Low_5d’, ‘y_Priority_5d’]] feature_data_scaled = pd.DataFrame(x_scaler.transform(feature_data), columns=feature_data.columns) # Assuming target_data also needs to be scaled, apply scaler separately target_data_scaled = pd.DataFrame(y_scaler.transform(target_data), columns=target_data.columns) # Now, feature_data_scaled and target_data_scaled are both DataFrames, scaled and ready to use yield feature_data_scaled.values, target_data_scaled.values # %% from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Input import tensorflow as tf from tensorflow.keras.layers import BatchNormalization def build_model(): input_shape = (6427,) model = Sequential([ Dense(6427, activation=‘relu’, input_shape = input_shape), Dropout(0.17), BatchNormalization(), Dense(4096, activation=‘relu’), Dropout(0.15), BatchNormalization(), Dense(2048, activation=‘relu’), Dropout(0.12), BatchNormalization(), Dense(1024, activation=‘relu’), Dropout(0.10), BatchNormalization(), Dense(512, activation=‘relu’), Dropout(0.05), BatchNormalization(), Dense(256, activation=‘relu’), BatchNormalization(), Dense(128, activation=‘relu’), BatchNormalization(), Dense(64, activation=‘relu’), BatchNormalization(), Dense(32, activation=‘relu’), BatchNormalization(), Dense(3), ]) optimizer = keras.optimizers.Adam(learning_rate=0.006) model.compile(optimizer=‘adam’, loss=‘mse’, # Use Mean Squared Error for regression metrics=[‘mae’]) # Mean Absolute Error as an additional metric return model # %% # Instantiate the model model = build_model() model.summary() # %% import warnings warnings.filterwarnings(action=‘ignore’, message=‘X has feature names, but StandardScaler was fitted without feature names’) train_generator = data_generator(file_path, batch_size,x_scaler=x_scaler_loaded,y_scaler=y_scaler_loaded) total_samples = 301617 # Assuming same example size train_samples = int(total_samples) steps_per_epoch = train_samples // batch_size # Modify the model fitting call to include validation data model.fit( train_generator, steps_per_epoch=steps_per_epoch, epochs=45, ) currently my model not using any validation dataset when training change the code properly for implementing val set
18c65ffbd41237f4bdeae2a86983a176
{ "intermediate": 0.34660783410072327, "beginner": 0.4356030225753784, "expert": 0.21778912842273712 }
47,188
i have following code to train my NN model : # %% from sklearn.preprocessing import StandardScaler import pandas as pd import numpy as np from tensorflow import keras import joblib def calculate_targets_scaling_params(file_path): scaler = StandardScaler() for chunk in pd.read_csv(file_path, chunksize=10000): # Adjust chunksize based on your memory capacity filtered_chunk = chunk[[‘y_High_5d’,‘y_Low_5d’,‘y_Priority_5d’]] scaler.partial_fit(filtered_chunk) # Accumulate means and variances return scaler.mean_, scaler.var_ # %% x_scaler_loaded = joblib.load(‘nn_x_scaler.sav’) y_scaler_loaded = joblib.load(‘nn_y_hlp5_scaler.sav’) def data_generator(file_path, batch_size, x_scaler, y_scaler): chunksize = batch_size while True: # Loop forever, so the generator never terminates for chunk in pd.read_csv(file_path, chunksize=chunksize): filtered_c = chunk.drop([‘Date’, ‘Symbol’], axis=1) feature_data = filtered_c.drop([ ‘y_High_1d’, ‘y_Low_1d’, ‘y_Priority_1d’, ‘y_High_2d’, ‘y_Low_2d’, ‘y_Priority_2d’, ‘y_High_3d’, ‘y_Low_3d’, ‘y_Priority_3d’, ‘y_High_5d’, ‘y_Low_5d’, ‘y_Priority_5d’], axis=1) target_data = filtered_c[[‘y_High_5d’, ‘y_Low_5d’, ‘y_Priority_5d’]] feature_data_scaled = pd.DataFrame(x_scaler.transform(feature_data), columns=feature_data.columns) # Assuming target_data also needs to be scaled, apply scaler separately target_data_scaled = pd.DataFrame(y_scaler.transform(target_data), columns=target_data.columns) # Now, feature_data_scaled and target_data_scaled are both DataFrames, scaled and ready to use yield feature_data_scaled.values, target_data_scaled.values # %% from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Input import tensorflow as tf from tensorflow.keras.layers import BatchNormalization def build_model(): input_shape = (6427,) model = Sequential([ Dense(6427, activation=‘relu’, input_shape = input_shape), Dropout(0.17), BatchNormalization(), Dense(4096, activation=‘relu’), Dropout(0.15), BatchNormalization(), Dense(2048, activation=‘relu’), Dropout(0.12), BatchNormalization(), Dense(1024, activation=‘relu’), Dropout(0.10), BatchNormalization(), Dense(512, activation=‘relu’), Dropout(0.05), BatchNormalization(), Dense(256, activation=‘relu’), BatchNormalization(), Dense(128, activation=‘relu’), BatchNormalization(), Dense(64, activation=‘relu’), BatchNormalization(), Dense(32, activation=‘relu’), BatchNormalization(), Dense(3), ]) optimizer = keras.optimizers.Adam(learning_rate=0.006) model.compile(optimizer=‘adam’, loss=‘mse’, # Use Mean Squared Error for regression metrics=[‘mae’]) # Mean Absolute Error as an additional metric return model # %% # Instantiate the model model = build_model() model.summary() # %% import warnings warnings.filterwarnings(action=‘ignore’, message=‘X has feature names, but StandardScaler was fitted without feature names’) train_generator = data_generator(file_path, batch_size,x_scaler=x_scaler_loaded,y_scaler=y_scaler_loaded) total_samples = 301617 # Assuming same example size train_samples = int(total_samples) steps_per_epoch = train_samples // batch_size # Modify the model fitting call to include validation data model.fit( train_generator, steps_per_epoch=steps_per_epoch, epochs=45, ) currently my model not using any validation dataset when training change the code properly for and implement val set
304f36ab3aef0bc6d52fa0c65c534ab1
{ "intermediate": 0.34660783410072327, "beginner": 0.4356030225753784, "expert": 0.21778912842273712 }
47,189
реализуй api для фронта по задаче 1. Аутентификация: На вход принимаем логин и пароль и проверяем их с помощью com.ncs.security.service.AuthServiceImpl#auth(java.lang.String, java.lang.String, java.lang.String). Дополнительно делаем проверки как в com.ncs.company.security.NcsAuthenticationProvider#getAuthToken. В случае успеха, отправляем назад token. Методы auth и getAuthToken предоставляю public AuthResult auth(String login, String password, String ip) { LOGGER.debug("Authenticating with {}, {}, IP:{}", login, password, ip); if (isEmpty(login)) { return invalidAuthResult("Не указан логин", "Ваш логин введен неверно", "В случае возникновения вопросов, обратитесь в службу поддержки клиентов по телефону", WARNING, false); } UserFilter filter = new UserFilter(); filter.setLogin(login); User user = userRepository.findOne(UserSpecification.buildFilterSpecification(filter)); if (user == null) { return invalidAuthResult( "Введите номер телефона или e-mail, которые вы указали при регистрации аккаунта в системе Цифра банк.", "Ваш логин введен неверно.", "В случае возникновения вопросов, обратитесь в службу поддержки клиентов по телефону", WARNING, false); } if (isDeleted(user)) { return invalidAuthResult( "Указанный аккаунт заблокирован. Для разблокировки обратитесь в службу поддержки клиентов", "Ошибка аутентификации", EMPTY, DANGER, false); } HashCreator hc = new HashCreator(); String passwordHash = hc.getPasswordHash(password, user.getSalt()); if (!passwordHash.equals(user.getPasword())) { if (user.getUnsuccessfulLogin() == null) { user.setUnsuccessfulLogin(1L); } else { user.setUnsuccessfulLogin(user.getUnsuccessfulLogin() + 1); user.setLastLogin(new Date()); user.setLastIp(ip); if (user.getUnsuccessfulLogin().equals(Long.parseLong(Constants.get(MAX_UNSUCCESSFUL_LOGIN)) - 1)) { return invalidAuthResult( "Для предотвращения несанкционированного доступа к данным у Вас осталась последняя попытка входа в систему.", "Не верный пароль.", EMPTY, DANGER, false); } if (user.getUnsuccessfulLogin() >= Long.parseLong(Constants.get(MAX_UNSUCCESSFUL_LOGIN))) { user.setState(UserState.DELETED); return invalidAuthResult( "Превышено допустимое кол-во попыток входа в систему. Для предотвращения несанкционированного доступа к данным Ваша учетная запись заблокирована. 
Для разблокировки пожалуйста обратитесь в службу поддержки клиентов.", "Не верный пароль.", EMPTY, DANGER, true); } } return invalidAuthResult( "У вас осталось 2 попытки на ввод пароля, после этого ваш аккаунт будет заблокирован.", "Не верный пароль.", EMPTY, DANGER, false); } user.setLastIp(ip); user.setLastLogin(new Date()); user.setUnsuccessfulLogin(0L); AuthResult result = new AuthResult(); Token token = new Token(); String tokenStr = getSecureCode(); token.setToken(tokenStr); token.setTokenValidity(new Date(new Date().getTime() + singletonBean.getTokenValidity())); token.setUserId(user.getId()); token.setValid(true); token = tokenRepository.save(token); tokenCache.put(token); result.setToken(token); result.setValid(true); result.setUserId(user.getId()); return result; } public UsernamePasswordAuthenticationToken getAuthToken(Object principal, Object credentials, Employee employee) { PartnerEmployeeFilter filter = new PartnerEmployeeFilter(); filter.setEmployeeId(employee.getId()); List<PartnerEmployee> partnerEmployees = partnerEmployeeService.find(filter); if (partnerEmployees == null || partnerEmployees.size() == 0 || partnerEmployees.size() > 1) { throw getAuthenticationException(); } List<UserInfoPartner> partners = new ArrayList<>(); for (PartnerEmployee e : partnerEmployees) { if (!ActiveState.activeStates().contains(e.getActiveState())) { throw new NcsAuthenticationException("Доступ в систему заблокирован.", "Ошибка аутентификации", "В случае возникновения вопросов, обратитесь в службу поддержки клиентов по телефону", AuthResultType.WARNING); } UserInfoPartner partner = new UserInfoPartner(); partner.setId(e.getPartner().getId()); partner.setName(e.getPartner().getName()); if (e.getPartner() instanceof Company) { Company company = (Company) e.getPartner(); if (!ActiveState.activeStates().contains(company.getActiveState())) { throw new NcsAuthenticationException("Доступ в систему заблокирован.", "Ошибка аутентификации", "В случае возникновения вопросов, обратитесь в службу поддержки клиентов по телефону", AuthResultType.WARNING); } } partners.add(partner); } UserInfo info = new UserInfo(); info.setId(employee.getId()); info.setLogin(employee.getEmail()); info.setPhone(employee.getPhone()); PassportData passportData = employee.getPassportData(); if (passportData != null) { info.setFirstName(passportData.getFirstName()); info.setLastName(passportData.getLastName()); info.setMiddleName(passportData.getMiddleName()); } info.setCompanyCssDir(WebConstants.THEME_DEFAULT); info.setPartners(partners); if (info.getPartners().size() > 0) { info.setPartner(info.getPartners().get(0)); } info.setAccessType(employee.getAccessType()); // TODO: тут список ролей пользователя. (при необходимости) Collection<GrantedAuthority> authorities = new HashSet<GrantedAuthority>(); UsernamePasswordAuthenticationToken authToken = new UsernamePasswordAuthenticationToken(principal, credentials, authorities); authToken.setDetails(info); return authToken;
24a6b0f4589b66544dc18f9b168b4821
{ "intermediate": 0.2923183739185333, "beginner": 0.43483278155326843, "expert": 0.27284884452819824 }