You can do this by decompiling the module you want to monkey patch, changing the Erlang abstract code, and then reloading the module. It will be a bit hard to find the specific cond branch you want to update, but not impossible. I have used this approach to wrap a private function with another function, in order to manipulate the return value of the private function.

{_, beam, _} = :code.get_object_code(Mix.Tasks.Deps.Compile)
{:ok, {_, [{:abstract_code, {:raw_abstract_v1, code}}]}} = :beam_lib.chunks(beam, ~w/abstract_code/a)
# do the manipulation to the abstract code here, then recompile and load the binary code
{:ok, mod_name, binary} = :compile.forms(code)
:code.load_binary(mod_name, [], binary)

I am not going to make any judgments on whether this is right or wrong; it could just be a means to an end. The following libraries use this method: Patch, meck, and maybe more mocking libraries.
2024-03-09 08:30:50.050000000
I have a web app, and for the best user experience I need to hide the toolbar of mobile Safari (iOS). I know that many websites use an overlay taller than the screen size plus some kind of indication for the user to scroll up (like a "swipe up" GIF), since scrolling causes the toolbar to disappear. But I was wondering if there is any library that already does this: detect whether the user is on iOS and, if they are not in fullscreen mode, trigger the popup, then hide it once the user is back in fullscreen. If there is no library, any recommendations are welcome on how to detect iOS, detect that the user is not in fullscreen mode, and hide the popup after fullscreen has been triggered.
2024-03-09 15:29:12.297000000
Is there a way to move all the dependency folders to a subfolder and still have Python (embedded version) work? Yes. Dependencies can in principle be installed anywhere, as long as the folder that contains them is on the PYTHONPATH (see: PYTHONPATH). The OP later added: NB: the goal here is to use an embedded Python which is totally independent from the system's global Python install, so this is independent from any environment variable such as PYTHONPATH etc. In that case the third-party dependencies can still be installed in any subfolder. You should be able to add the folder to the ._pth file, according to a Python core dev: "The embeddable package is intended for environments that you don't control, so you need the isolation. If you are adding a site-packages folder that you want imported, just add it to the ._pth file and it will be included in sys.path" ([LINK]). This information can also be found in the Python docs: "For those who want to bundle Python into their application or distribution, the following advice will prevent conflicts with other installations: Include a ._pth file alongside your executable containing the directories to include. This will ignore paths listed in the registry and environment variables, and also ignore site unless import site is listed." ([LINK]). But if the intended usage is to simply run pythonw.exe -m myprogram, then this is not running in an isolated, sandboxed environment, and it's not clear why a special embeddable Python version would be used in that case. It's definitely not the typical use case for embeddable Pythons.
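To make the quoted advice concrete, here is a minimal sketch of what such a ._pth file might contain and how to check that the folder landed on sys.path; "libs" is an assumed name for the dependency subfolder, and the file only takes effect when placed alongside the embeddable python.exe:

# Sketch: contents of e.g. python311._pth next to the embeddable python.exe.
#
#   python311.zip
#   .
#   libs
#   import site
#
# Once the interpreter starts, verify the subfolder was picked up:
import sys
print([p for p in sys.path if p.endswith("libs")])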
2024-03-11 16:28:16.637000000
The "access is denied" error when trying to connect to Azure from a Jenkins pipeline could be caused by incorrect credentials or insufficient permissions. In addition to the points I mentioned in the comment section, please check the following:
- Double-check that the Azure service principal credentials are correct and have the necessary permissions to access the Azure Container App.
- Make sure the Azure service principal credentials are correctly configured in Jenkins, i.e. as "Username with password": username = service principal appId, password = service principal password, ID = credential identifier (such as azureServicePrincipal).
- Check under Subscription > IAM > Role assignments whether your Jenkins identity has Contributor access.
- Make sure you have the two required plugins enabled.
If all of these are correctly set up, you should be able to connect your Jenkins to Azure. For additional details, kindly refer to the documents below.
References: Tutorial: Create a Jenkins pipeline using GitHub and Docker; Deploy to Azure Container Apps from Azure Pipelines - configuration; Deploy to Azure App Service with Jenkins; [LINK] thread
2024-02-09 12:15:55.037000000
Go to the path of your application, right-click on that folder, choose Properties, and go to the Security tab. From there, allow all the available options for the IIS user. Then restart your application pool in IIS and launch the app again.
2024-03-26 18:18:45.737000000
In the last weeks before migrating our Xamarin app to MAUI, our customer came up with a very important improvement. Currently, when we play a video in our app, background music playing from Spotify, YouTube Music, etc. stops. This behaviour would make total sense if our video had an audio track, but it hasn't. So they want us to change the code to keep playing music while watching the video (if the video has no audio track). I read about setAudioFocusRequest for Android, but this is not exactly what we need. Maybe there is a way to regularly keep our app's audio focus in the background (or whatever it is called) and only request audio focus when we have an audio track. FYI, we play videos in two ways: either it's a help/instruction video (contains audio) or an animated video (has no audio). These are separate and get called from different locations, so we know exactly whether the video has an audio track or not (in other words, whether we have to mute the app or not). Unfortunately, I have not found anything helpful so far. Maybe some of you have faced this issue and can help us. Thanks in advance.
2024-03-11 14:59:20.357000000
Let's say I have a function that sends const char* data asynchronously and invokes a callback when the data is sent or an error happens (I omitted error handling as irrelevant to the question):

void sendData(const char* data, std::function<void()> callback);

So, until the callback is invoked, I have to keep the data passed to the function valid. One solution is to pass the result of std::string::c_str() and move this string into the callback:

void send(std::string str)
{
    auto data = str.c_str();
    sendData(data, [st = std::move(str)]() {});
}

So, str should be moved into the lambda, and when the lambda is invoked, it should destroy the string at the end. Of course, I can move the original string into a std::unique_ptr<std::string> and then move that pointer into the lambda, but this code is simpler. So, is it safe to do so?
2024-03-07 23:58:47.957000000
I want to restrict my application to only support string-type values in its JSON configuration. JSON Schema by default supports a set of datatypes such as integer, boolean, etc. However, the parsing logic in my application only supports string values. How do I make sure the schema does not define a property of any type except string?

Allow: { "key1": "a", "key2": "b" }
Reject: { "key1": "a", "key2": true } or { "key1": "a", "key2": [1, 2] }
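For context, here is a minimal sketch of how this could be validated in Python with the jsonschema library, assuming additionalProperties is the right mechanism for forcing every property value to be a string (the schema itself is plain JSON Schema, so the idea carries over outside Python):

# Sketch: a schema that rejects any non-string property value.
from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "additionalProperties": {"type": "string"},  # every value must be a string
}

validate({"key1": "a", "key2": "b"}, schema)       # passes
try:
    validate({"key1": "a", "key2": True}, schema)  # fails: True is not a string
except ValidationError as e:
    print(e.message)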
2024-03-18 23:24:18.417000000
You need to export default your middleware function from the middleware file:

import { NextRequest, NextResponse } from "next/server";
import { getToken } from "next-auth/jwt";

export default async function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl;
  if (pathname === "/login" || pathname === "/admin") {
    return NextResponse.next();
  }
  const token = await getToken({ req: request });
  // protected routes for user
  const userProtectedRoutes = ["/"];
  // protected routes for admin
  const adminProtectedRoutes = ["/admin/dashboard"];
  if (
    token == null &&
    (userProtectedRoutes.includes(pathname) || adminProtectedRoutes.includes(pathname))
  ) {
    return NextResponse.redirect(
      new URL("/login?error=Please login first", request.url)
    );
  }
  // get user from token
  const user: CustomUser | null = token?.user as CustomUser;
  // if a user tries to access admin routes
  if (adminProtectedRoutes.includes(pathname) && user.role == "user") {
    return NextResponse.redirect(
      new URL("/admin?error=Please login first", request.url)
    );
  }
  // if an admin tries to access user routes
  if (userProtectedRoutes.includes(pathname) && user.role == "admin") {
    return NextResponse.redirect(
      new URL("/login?error=Please login first", request.url)
    );
  }
}
2024-03-04 20:27:20.023000000
Unable to use Git in IntelliJ Community Edition. Error details: "Git is not installed", empty git --version output. After a long fight with the installation, I was unable to fix it.
2024-03-22 09:08:12.730000000
Here is some sample JSON:

{
  "type": "donut",
  "batter": [
    { "id": "1001", "type": "regular" },
    { "id": "1002", "type": "chocolate" }
  ]
}

I want to create a caller like this (returns the batter array):

getPath(theJson, new String[] { "batter" })

or (returns "regular"):

getPath(theJson, new String[] { "batter", "0", "type" })

or:

getPath(theJson, "batter.0.type")

Something like this:

public Object getPath(JSONObject json, String[] path) {
    Object retOb = new Object();
    for (String curPath : path) {
        if (json.has(curPath)) {
            retOb = json.get(curPath);
        }
    }
    System.out.println("curPath: " + ((JSONObject) retOb).toString(3));
    return retOb;
}

Is this possible? I've only been writing Java for about 6 weeks now, and I'm hoping a base class can be returned so the caller can cast it to a JSONObject, a JSONArray, or a String. I also know that I haven't wrapped it in a try or handed errors back. This is just enough to discuss it here.
2024-02-20 01:39:46.767000000
I created a class that handles my SQLite database in Python. It is written as synchronous code. Now I want to use this code in an asynchronous program, so I used asyncio.to_thread to execute a database insert asynchronously. However, I get different SystemError messages every time I run my code. Below you find a minimal example which raises these errors. There are easy workarounds, but I thought SQLite was thread-safe? It took me a while to isolate the problem and I want to understand what is wrong, so I don't run into the same problems later on. I use Python 3.11, sqlite3 version 2.6, and sqlite3.threadsafety is 3.

import asyncio
import sqlite3
from pathlib import Path

Path("test.db").unlink(missing_ok=True)
con = sqlite3.connect("test.db", check_same_thread=False)

def init():
    con.execute("CREATE TABLE test (test INTEGER NOT NULL);")

async def write():
    while True:
        with con:
            con.execute("INSERT INTO test VALUES (1)")
        await asyncio.sleep(0)

async def write2():
    while True:
        await asyncio.to_thread(write3)

def write3():
    with con:
        con.execute("INSERT INTO test VALUES (2)")

if __name__ == "__main__":
    init()
    loop = asyncio.get_event_loop()
    try:
        loop.create_task(write())
        loop.create_task(write2())
        loop.run_forever()
    finally:
        con.commit()
        con.close()
        loop.close()
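For reference, one of the easy workarounds I mean is serializing all use of the shared connection myself; a minimal sketch with a threading.Lock (the table mirrors the example above):

import asyncio
import sqlite3
import threading

con = sqlite3.connect("test.db", check_same_thread=False)
db_lock = threading.Lock()  # only one thread may touch the connection at a time

def write_locked(value: int) -> None:
    with db_lock, con:
        con.execute("INSERT INTO test VALUES (?)", (value,))

async def main() -> None:
    con.execute("CREATE TABLE IF NOT EXISTS test (test INTEGER NOT NULL)")
    # Both worker threads and the event-loop thread now go through the lock.
    await asyncio.gather(*(asyncio.to_thread(write_locked, 2) for _ in range(10)))
    write_locked(1)

asyncio.run(main())
con.close()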
2024-03-13 20:16:52.913000000
Everyone! I'm trying to make TensorFlow 2.15 work with my GPU, an NVIDIA GeForce GTX 750 Ti. TensorFlow docs - [LINK]>; GPU config from the TensorFlow docs - [LINK]. Tested build configuration (table header translated from Portuguese):

Version | Python version | Compiler | Build tools | cuDNN | CUDA
tensorflow-2.15.0 | 3.9-3.11 | Clang 16.0.0 | Bazel 6.1.0 | 8.8 | 12.2

My OS is Ubuntu 23.04.

$ nvidia-smi
NVIDIA-SMI 545.29.06    Driver Version: 545.29.06    CUDA Version: 12.3
GPU 0: NVIDIA GeForce GTX 750 Ti (Off) | [HASH]:01:00.0 On | N/A
Fan 33% | Temp 37C | Perf P8 | Pwr 1W / 38W | Memory 436MiB / 4096MiB | GPU-Util 0% | Compute M. Default | MIG M. N/A
Processes: Xorg (187MiB), gnome-shell (26MiB), two browser renderer processes (53MiB, 129MiB), /usr/bin/python3 (27MiB)

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

$ python --version
Python 3.11.4

$ bazel --version
bazel 6.1.0

$ gcc --version
gcc (Ubuntu 12.3.0-1ubuntu1~23.04) 12.3.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

$ cat /etc/os-release
PRETTY_NAME="Ubuntu 23.04"
NAME="Ubuntu"
VERSION_ID="23.04"
VERSION="23.04 (Lunar Lobster)"
VERSION_CODENAME=lunar
ID=ubuntu
ID_LIKE=debian
HOME_URL="[LINK]/"
SUPPORT_URL="[LINK]/"
BUG_REPORT_URL="[LINK]/"
PRIVACY_POLICY_URL="[LINK]"
UBUNTU_CODENAME=lunar
LOGO=ubuntu-logo

Now, to reproduce the error, you can run the following:

import tensorflow as tf
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

Output:

[name: "/device:CPU:0" device_type: "CPU" memory_limit: [HASH] ...]

Log output error:

2024-02-14 17:58:40.696088: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-02-14 17:58:40.696166: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-02-14 17:58:40.829634: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-02-14 17:58:40.984397: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-02-14 17:58:43.234200: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-02-14 17:58:44.809184: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at [LINK]>
2024-02-14 17:58:45.101985: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2256] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at [LINK]> for how to download and setup the required libraries for your platform. Skipping registering GPU devices...

If someone can help me fix this issue! I tried reinstalling, and I don't know what else to try; I've tried a lot of things! Thanks!
2024-02-16 01:32:56.453000000
When I log in to the app and then clear the app from the background, a problem occurs and it shows me "appname keeps crashing". When I try it on my phone it doesn't happen, just on some phones and in emulators. I tried to detect the exact problem with Firebase Crashlytics, but unfortunately I can't see anything. How can I fix the problem?

xmpp_service.dart

import 'dart:async';
import 'dart:convert';
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:get/get.dart';
import 'package:app/config.dart';
import 'package:app/services/rest.dart';
import 'package:app/view-models/command_controller.dart';
import 'package:xmpp_plugin/ennums/xmpp_connection_state.dart';
import 'package:xmpp_plugin/error_response_event.dart';
import 'package:xmpp_plugin/models/chat_state_model.dart';
import 'package:xmpp_plugin/models/connection_event.dart';
import 'package:xmpp_plugin/models/message_model.dart';
import 'package:xmpp_plugin/models/present_mode.dart';
import 'package:xmpp_plugin/success_response_event.dart';
import 'package:xmpp_plugin/xmpp_plugin.dart';

extension ConnectionStateToString on XmppConnectionState {
  String toConnectionName() {
    return toString().split('.').last;
  }
}

class XmppHelper implements DataChangeEvents {
  late XmppConnection flutterXmpp;
  late Timer timer;
  AppLifecycleState appLifecycleState = AppLifecycleState.resumed;
  bool isLogged = false;
  final CommandController commandController = Get.find();
  String host = HOST;
  String appBotJid = BOT_JID;
  String connectionStatus = "Disconnected";
  String userJid = "";
  String password = "";

  XmppHelper() {
    timer = Timer.periodic(const Duration(seconds: 10), (Timer t) async {
      if (connectionStatus == "Authenticated" &&
          appLifecycleState == AppLifecycleState.resumed) {
        await ping();
      }
    });
  }

  Future<void> connect(String userJid, String password) async {
    final Map auth = {
      "user_jid":
          "$userJid/${Platform.isAndroid ? "android" : "ios"}${DateTime.now().millisecondsSinceEpoch}",
      "password": password,
      "host": host,
      "port": '5222',
      "require_ssl_connection": true,
      "auto_delivery_receipt": true,
      "use_stream_management": true,
      "automatic_reconnection": true,
    };
    this.userJid = userJid;
    this.password = password;
    flutterXmpp = XmppConnection(auth);
    await flutterXmpp.start(onError);
    await flutterXmpp.login();
  }

  void createRoster(String roster) {
    flutterXmpp.createRoster(roster);
  }

  void onError(Object error) {}

  @override
  void onChatMessage(MessageChat messageChat) {}

  @override
  void onNormalMessage(MessageChat messageChat) {
    if (messageChat.body == "") {
      return;
    }
    commandController.messageHandler(messageChat.body!);
  }

  @override
  void onChatStateChange(ChatState chatState) {}

  @override
  void onConnectionEvents(ConnectionEvent connectionEvent) {
    connectionStatus = connectionEvent.type!.toConnectionName();
    debugPrint("connStatus: $connectionStatus");
    if (connectionStatus == "Authenticated") {
      isLogged = true;
      login();
    } else if (connectionStatus == "Failed") {
    } else if (connectionStatus == "Disconnected") {
    } else if (connectionStatus == "Connected") {}
  }

  @override
  void onGroupMessage(MessageChat messageChat) {}

  @override
  void onPresenceChange(PresentModel message) {}

  @override
  void onSuccessEvent(SuccessResponseEvent successResponseEvent) {}

  @override
  void onXmppError(ErrorResponseEvent errorResponseEvent) {}

  void sendMessage(Map commandData) {
    int time = DateTime.now().millisecondsSinceEpoch;
    commandData["gonderen"] = userJid.toString().split("@")[0];
    debugPrint("commandData: $commandData");
    flutterXmpp.sendMessageWithType(appBotJid, jsonEncode(commandData), "$time", time);
  }

  Future<void> stop() async => await flutterXmpp.stop();
  Future<void> xmppLogout() async => await flutterXmpp.logout();
}

layout.dart

import 'package:flutter/material.dart';
import 'package:get/get.dart';
import 'package:app/view-models/command_controller.dart';
import 'package:app/widgets/side_menu.dart';
import 'package:xmpp_plugin/xmpp_plugin.dart';
import 'package:app/globals.dart' as globals;

class Layout extends StatefulWidget implements PreferredSizeWidget {
  const Layout({super.key});

  @override
  State<Layout> createState() => LayoutState();

  @override
  Size get preferredSize => const Size.fromHeight(100);
}

class LayoutState extends State<Layout> with WidgetsBindingObserver {
  final CommandController controller = Get.put(CommandController());
  List pages = [
    PatientInfoPage(),
    const Page1(),
    const Page2(),
    const Page3(),
    const Page4(),
    const Page5(),
    const Page6(),
    const Page7()
  ];

  @override
  void initState() {
    super.initState();
    WidgetsBinding.instance.addObserver(this);
    XmppConnection.addListener(globals.xmppHelper);
    controller.selectedPage.value = 7;
  }

  @override
  void dispose() {
    super.dispose();
    XmppConnection.removeListener(globals.xmppHelper);
  }

  @override
  Future<void> didChangeAppLifecycleState(AppLifecycleState state) async {
    super.didChangeAppLifecycleState(state);
    globals.xmppHelper.appLifecycleState = state;
    debugPrint("state: ${state.toString()}");
    if (state == AppLifecycleState.resumed) {
      debugPrint("state: resumed");
    } else if (state == AppLifecycleState.detached) {
      debugPrint("state: detached");
    }
  }

  @override
  Widget build(BuildContext context) {
    List<AppBar> appBars = [
      AppBar(title: Text(''.tr)),
      AppBar(),
      AppBar(),
      AppBar(title: Text(''.tr)),
      AppBar(title: Text(''.tr)),
      AppBar(),
      AppBar(title: Text(''.tr), actions: const [
        Padding(
            padding: EdgeInsets.only(right: 16.0),
            child: Icon(Icons.telegram, size: 50, color: Colors.blue)),
      ]),
      AppBar(
        automaticallyImplyLeading: false,
        title: const Text(''),
      )
    ];
    return PopScope(
      canPop: false,
      child: Scaffold(
        appBar: PreferredSize(
          preferredSize: const Size.fromHeight(50),
          child: Obx(() => appBars[controller.selectedPage.value]),
        ),
        drawer: const SideNavigationMenu(),
        body: Obx(() => pages[controller.selectedPage.value]),
      ),
    );
  }
}

Also, the connection breaks when the app is in the background or there are no actions for 10 minutes. I tried to handle it in many ways, but I failed.
2024-02-23 14:27:11.550000000
I'm developing a NestJS application where I need to obtain a JWT token by making a POST request. I'm utilizing the HttpService from the @nestjs/common library. However, I consistently encounter a 400 Bad Request error. Here's the code using HttpService to get the JWT token:

import { HttpService, InternalServerErrorException } from '@nestjs/common';
import { Observable, throwError } from 'rxjs';
import { map, tap, catchError } from 'rxjs/operators';
import { AxiosResponse } from 'axios';

export class JwtService {
  constructor(private httpService: HttpService, private cache: CacheService) {}

  private getAuthToken(): Observable<OAuth> {
    const url = 'endpointURL';
    const body = JSON.stringify({
      client_id: 'xxxxxxxxxxxxxxx',
      client_secret: 'xxxxxxxxxxxxx',
      grant_type: 'client_credentials',
    });
    const headers = { 'Content-Type': 'application/json' };
    return this.callApi(url, body, headers);
  }

  private callApi(url: string, body, headers: { [key: string]: string }): Observable<any> {
    return this.httpService.post(url, body, headers).pipe(
      map((result: AxiosResponse<any>) => result.data),
      tap((result: any) => {
        const { access_token } = result; // eslint-disable-line
        this.cache
          .set('authToken', access_token, { ttl: env.secondsToExpire })
          .subscribe();
      }),
      catchError((err: any) => {
        return throwError(new InternalServerErrorException(err.message, 'Auth Service'));
      })
    );
  }
}

Here's what I've already tried:
Other APIs: I tested the same code with other APIs, and it worked flawlessly. The issue seems specific to the JWT token API.
Postman success: when I manually send the request using Postman (same URL, body, and headers), I successfully receive the token.
Body format: I attempted using a plain object for the request body instead of a JSON string, but the error persists.
Fetch API: I also experimented with the fetch() API, and it returned the expected response (code snippet below).

Sample code using the Fetch API:

const req = {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    client_id: 'xxxxxxxxxx',
    client_secret: 'xxxxxxxxxxx',
    grant_type: 'xxxxxxxxxxx',
  }),
};

fetch('jwtTokenEndpoint', req)
  .then((response) => response.json())
  .then((data) => {
    this.logger.log('Success retrieving JWT token');
    console.log(data);
    this.jwtSubject.next(`${data.token}`);
  })
  .catch((error) => {
    this.logger.error(`Error retrieving JWT token: ${error}`);
  });

I am looking for assistance to resolve the error and get the token successfully.
2024-03-12 07:41:13.393000000
Our firewall admin had to change the settings; the address was blocked by the firewall.
2024-02-26 11:53:32.180000000
Delegated permissions are handled through oAuth2PermissionGrant objects instead of appRoleAssignments. Documentation: [LINK]>. I don't remember the exact parameters for the AzureAD module, but essentially you need to create an oAuth2PermissionGrant with:

clientId: object ID of the service principal that will get the permission
consentType: AllPrincipals (unless you want to grant this only for some users)
principalId: null (or the object ID of a user)
resourceId: object ID of the service principal that defines the permission (MS Graph in your case)
scope: space-separated list of required permissions

You only need one of these per API that you want to call, unless you want to grant the permission only for some users, in which case you need one per user as well. Note that this is what grants the permissions to the application. To keep it clean, I would usually recommend also defining these permissions on the application object in its required permissions. That does not grant the permissions by itself, though; it allows an admin to click "Grant consent" and create all of the required appRoleAssignments and oAuth2PermissionGrants through that. You can check the application object contents here: [LINK]>. You would want to define the permissions under requiredResourceAccess.
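Since I don't remember the AzureAD cmdlet parameters, here is the same call sketched against the raw Microsoft Graph REST endpoint in Python; the token, both object IDs, and the scope list are placeholders you must supply:

import requests

token = "TOKEN"  # placeholder: an admin access token for Microsoft Graph

grant = {
    "clientId": "<object id of the client service principal>",
    "consentType": "AllPrincipals",  # or "Principal" together with a principalId
    "principalId": None,
    "resourceId": "<object id of the Microsoft Graph service principal>",
    "scope": "User.Read Mail.Read",  # placeholder: space-separated delegated permissions
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/oauth2PermissionGrants",
    headers={"Authorization": f"Bearer {token}"},
    json=grant,
)
resp.raise_for_status()
print(resp.json()["id"])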
2024-03-26 19:33:31.727000000
I've written code that will find a specific email based on filtered criteria and then forward it to a new email address that gets passed into the code as a variable. It works perfectly, though the use case is to have it come from a generic email address, such as noreply@example.com (replacing example with our actual enterprise domain). I've tried to add a From element to the step that builds the RequestBody that gets posted, but keep getting an error. I've researched online and read the MS guides on Graph, but have been unable to find a solution. My code looks like this:

public async Task EmailForward(string notifNum, string forwardTo)
{
    try
    {
        Console.WriteLine("Find and forward email - AMS notification " + notifNum);
        string[] scopes = new[] { "[LINK]" };
        var tenantId = ConfigurationManager.AppSettings["tenantId"];
        var clientId = ConfigurationManager.AppSettings["clientId"];
        var clientSecret = ConfigurationManager.AppSettings["clientSecret"];
        var options = new TokenCredentialOptions { AuthorityHost = AzureAuthorityHosts.AzurePublicCloud };
        var clientSecretCredential = new ClientSecretCredential(tenantId, clientId, clientSecret, options);
        var graphClient = new GraphServiceClient(clientSecretCredential, scopes);

        // Generates the user request builder
        var userRequestBuilder = graphClient.Users["redacted-user-data"];
        var sharedInbox = await userRequestBuilder.MailFolders["Inbox"].GetAsync();

        // We need to first parse notifNum to a number, then reconvert it to a string so we can
        // force the padding to always be exactly 12 digits. So, let's start with defining the
        // variable we'll eventually use to hold the properly formatted number along with the
        // bit that precedes it in the subject line.
        string notifFormatted = "";
        // Let's convert our notification number from a string to Int32
        Int32 notifNumeric = 0;
        if (Int32.TryParse(notifNum, out notifNumeric))
        {
            notifFormatted = "notf" + notifNumeric.ToString("D12"); // pad to exactly 12 digits
        }

        // Set up the filter so that it looks for the notification and only from the us.ams email
        // account (to prevent hits from postmaster@example.com on failed delivery of emails for
        // the same notification, potentially to a bad email address the first time, which may be
        // triggering the request to forward in the first place).
        string filterString = "contains(subject, '" + notifFormatted + "') and startswith(from/emailAddress/address, 'valid-email-address@example.com')";

        // Find the one (1) email that has the notification number in the subject line
        var emlId = await userRequestBuilder.MailFolders["Inbox"].Messages.GetAsync(x =>
        {
            x.QueryParameters.Top = 1; // we have a filter in place to make sure it is only from us.ams@example.com, but still only want 1 email to be forwarded
            x.QueryParameters.Filter = filterString;
        });

        if (emlId.Value.Count == 0)
        {
            // No email matching the notification number in the filter was found... we need to
            // report back and not attempt to forward, since there's nothing to forward.
            Console.WriteLine("!!! No email found for notification " + notifNumeric + ". Skipping this notification!");
            ExitCode = 2;
            return;
        }
        Console.WriteLine("Found email for subject line: " + notifFormatted);
        string emlToFwd = emlId.Value[0].Id;

        // Build up the needed setup to forward the message as RequestBody to a different recipient
        var requestBody = new Microsoft.Graph.Users.Item.Messages.Item.CreateForward.CreateForwardPostRequestBody
        {
            ToRecipients = new List<Recipient>
            {
                new Recipient
                {
                    EmailAddress = new EmailAddress
                    {
                        Address = forwardTo,
                    },
                },
            },
        };
        var result = await userRequestBuilder.Messages[emlToFwd].CreateForward.PostAsync(requestBody);
        await userRequestBuilder.Messages[result.Id].Send.PostAsync();
        Console.WriteLine("Email forwarded to " + forwardTo + " for notification " + notifFormatted);
        ExitCode = 0;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
        ExitCode = 1;
    }
}

I've tried adding something like the following after the close of the ToRecipients block:

From = new Recipient
{
    EmailAddress = new EmailAddress
    {
        Address = "noreply@example.com"
    }
}

But I get the error that 'CreateForwardPostRequestBody' does not contain a definition for 'From'. I haven't been able to find in the Graph documentation how to build the RequestBody to include a different email address from which to forward the email. How do I get it to forward from a generic email like noreply@example.com instead of the actual mailbox/user executing the code via the credential flow?
2024-02-19 22:04:31.293000000
Not sure if this is exactly your requirement, but it seems like you want the text to be highlighted on mouseover. A really easy way to achieve this is by using the CSS :hover selector (this won't even require any JavaScript).

Implementation:

return (
  <span className="word" key={i}>{substring}</span>
);

CSS:

.word:hover {
  background-color: #ffff00; /* add your highlight styles */
}
2024-03-13 06:01:41.930000000
I need to calculate a running total, year-wise. If the user selects Dec 2023 as the date, it should show the running total from Jan 2023 to Dec 2023. A few examples are given below.

User selection (in slicer dropdown) -> running total
Jan 2023 -> data for Jan 2023 only
June 2023 -> sum from Jan 2023 to June 2023
October 2023 -> sum from Jan 2023 to October 2023
December 2023 -> sum from Jan 2023 to December 2023
Jan 2024 -> sum of Jan 2024 only (no sum of year 2023)
Mar 2024 -> sum of Jan + Feb + Mar of 2024

It is not a YTD calculation but a calendar-year calculation. What logic do I need to write? Please note that the month filter is in the form "December 2023", i.e. month year. I have tried YTD, but that is not the requirement.
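To make the required logic unambiguous, here it is sketched in pandas (not DAX; the numbers are made up): a cumulative sum that restarts at each calendar-year boundary.

import pandas as pd

df = pd.DataFrame({
    "month": pd.to_datetime(["2023-01-01", "2023-06-01", "2023-10-01",
                             "2023-12-01", "2024-01-01", "2024-03-01"]),
    "amount": [10, 20, 30, 40, 5, 7],
}).sort_values("month")

# cumsum within each calendar year: Jan 2024 starts over instead of
# continuing the 2023 total
df["running_total"] = df.groupby(df["month"].dt.year)["amount"].cumsum()
print(df)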
2024-03-14 15:26:25.563000000
See [LINK]. You can use Transport.schedule(), which works almost like setTimeout(). In general, it is better to use the schedule() method when scheduling audio events in Tone.js. This ensures that the events occur at the correct time in the musical timeline, regardless of the tempo.
2024-02-28 13:43:13.640000000
You can do it within the pandas plotting function, like:

df.col.plot.line(
    figsize=(10, 8),
    title='figure title',
    xlabel='year, month',
    xticks=range(len(df)),
    rot=90
)

where:
xticks is the parameter where you set the frequency of the ticks on the abscissa;
rot rotates the abscissa labels by 90 degrees.

I find this quicker for rapid visualisation.
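For instance, with a small made-up monthly series the call above runs end to end:

import pandas as pd
import matplotlib.pyplot as plt

# Made-up monthly data so the snippet is self-contained.
idx = pd.period_range("2022-01", periods=12, freq="M").astype(str)
df = pd.DataFrame({"col": range(12)}, index=idx)

df.col.plot.line(
    figsize=(10, 8), title="figure title", xlabel="year, month",
    xticks=range(len(df)), rot=90,
)
plt.show()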
2024-03-20 14:30:41.693000000
Page-level security is not currently possible in Power BI. If you can't use RLS, I would do the following (though I would recommend RLS). There would be some redundancy in this solution: you would have to maintain a sales version of the report and a management version of the report.
1. Create security groups for your different audiences, for example Sales and Management. You may need to work with IT or whoever is your Microsoft admin. Your admin needs to put those people in their respective groups.
2. Create a Power BI app with your reports in it. When you publish the app, you can have two different audiences. The Sales audience would have access to the sales report, and the management team would have access to the management report.
Here's a great blog about app audiences: [LINK]>
2024-02-26 19:29:23.213000000
Install the "HTML Snippets" extension in VS Code. Extension ID: abusaidm.html-snippets. Edit: it has been deprecated.
2024-02-22 18:04:06.273000000
All you need is a centralized storage engine. There are multiple options you can choose from:
- A distributed cache system, such as Redis or Memcache, for high-performance scenarios with a simple key-value data structure. I would recommend Redis as the cache system, since it supports more complex data structures like hashes, lists, and sorted sets, and it also provides data persistence.
- A centralized database, like MySQL, for structured data and durable storage, utilizing disk-based data persistence. MySQL also supports ACID and transactions, which ensure data consistency and integrity. Generally, MySQL's performance is not as good as Redis under similar conditions.
For your microservices architecture, you can implement a data-access service in front of your data storage to avoid direct connections from multiple services, protecting your data by not exposing the data source to all services. All other services will then request the shared data from the data-access service. Alternatively, if simplicity is not a concern, you can directly connect every microservice in need to the data source.
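For illustration, a minimal sketch of the Redis option using the redis-py client; the host name and keys are assumptions:

import redis

r = redis.Redis(host="shared-cache", port=6379, decode_responses=True)

# A hash lets several services share one structured record:
r.hset("session:42", mapping={"user": "alice", "cart_items": 3})
print(r.hgetall("session:42"))  # {'user': 'alice', 'cart_items': '3'}

# A sorted set, e.g. for a leaderboard shared across services:
r.zadd("scores", {"alice": 120, "bob": 95})
print(r.zrange("scores", 0, -1, withscores=True))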
2024-02-20 07:25:27.010000000
I have a VPS from Hostinger which was on a US server with CentOS installed. I changed it to a UK server, and it started giving error 503 on a few sites, while other sites are working fine. Please help me. I tried reinstalling Apache and changed the IP in CentOS, but it didn't help.
2024-03-14 17:31:05.963000000
We're building a Flink app that consumes events from different Kafka topics. The app uses the bounded-out-of-orderness watermark strategy on the source. During normal execution everything works as expected and we do not get any late-arriving data (based on watermarks), but on checkpoint/savepoint restores we get late-arriving events, no matter how much we increase the out-of-orderness bound. Did anyone ever encounter this situation?
2024-03-21 21:54:23.730000000
If using webhooks, you can add a "Slack user email" variable in the workflow and then add that variable to the message text with the "mention (default)" option selected. (See the webhook settings and the "Send a message" settings screenshots.)
2024-02-20 19:16:12.737000000
I use PJPROJECT on a Raspberry Pi to make calls. The call works, but when I use DTMF keys like '*' or '#', I get the error in the title: "bad RTP pt 101 (expecting 8)". I understand it to mean that telephone-events are going through the audio media but are not correctly interpreted. The version of PJPROJECT used is 2.14 (see attached PJSIP output). I tried to change the configuration of PJPROJECT multiple times, but nothing changes.
2024-03-06 08:50:12.507000000
When the setUpBeforeClass method is called, the application is not yet bootstrapped, so you will not be able to access the features associated with using models and factories. Instead, you can make the $token property static and add a check in the setUp method to see if it has any value and, if so, skip reinitializing it. Also, if you want to authorize a user inside a test, you don't have to get the token yourself; instead, you can use the actingAs helper method. For example:

<?php

namespace Tests\Feature;

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class ExampleTest extends TestCase
{
    use RefreshDatabase;

    const TEST_ENDPOINT = '/myendpoint';

    private static $user;

    public function setUp(): void
    {
        parent::setUp();
        if (!self::$user) {
            self::$user = createUser();
        }
    }

    public function testEndpointInvalidAuthorization()
    {
        $response = $this->post(self::TEST_ENDPOINT);
        $response->assertStatus(400);
    }

    // other tests...

    public function testEndpointSuccess()
    {
        $response = $this->actingAs(self::$user)->post(self::TEST_ENDPOINT);
        $response->assertStatus(200);
    }
}

See the Session / Authentication section of the docs for details.
2024-02-15 12:43:49.533000000
You might try a check using the ternary operator, like this:

console.log(typeof ans == 'object' ? ans.name : ans)
2024-02-07 06:44:56.610000000
I have this code:

import SwiftUI
import Combine

final class ViewModel: ObservableObject {
    @Published var text = ""
    var cancellables = Set<AnyCancellable>()

    init() {
        $text
            .sink { completion in
            } receiveValue: { text in
                print("text \(text)")
            }.store(in: &cancellables)
    }
}

struct ContentView: View {
    @StateObject private var vm = ViewModel()

    var body: some View {
        TextField("title", text: $vm.text)
    }
}

#Preview {
    ContentView()
}

What I have noticed is that if I type one letter, I receive two values. At first it will just say "text". And if I type one letter, say the letter A, the output will look like this:

text a
text a

In some cases I saw it print the same value even more than two times. I know I can use removeDuplicates, but why is this actually happening?
2024-03-22 12:30:47.073000000
I have a challenge in Postgres and have spent the whole day trying to nail it; it's killing me. I have these two tables:

tracking_plan
id | parameters | subject
1 | [{"coding":{"code":"8231-0"}}, {"coding":{"code":"82311-0"}}] | {"identifier": 1, "ref": 1}
2 | [{"coding":{"code":"8234-0"}}, {"coding":{"code":"82319-0"}}] | {"identifier": 2, "ref": "x2"}
3 | [{"coding":{"code":"1234-0"}}, {"coding":{"code":"1234-0"}}] | {"identifier": 2, "ref": "x2"}

observations
id | component | code
1 | [{"coding":{"code":"8231-0"}, "value": "x", "unit": "y"}] | {"coding": {"code": "8281-0"}}
2 | [{"coding":{"code":"8234-0"}, "value": "a", "unit": "y"}] | {"coding": {"code": "82311-0"}}

They are more complex, but I will try to simplify. Basically, the tracking plan table holds some control parameters, and the observations table registers every update. I need a query to get the latest update on a specific parameter that belongs to a plan. As such, I was trying this query:

SELECT tp.id, o.component
FROM ecosystem.trackingplan tp
LEFT JOIN ecosystem.observation o ON (
    EXISTS (
        SELECT *
        FROM jsonb_array_elements(tp.parameters) comp
        WHERE comp->'coding'->>'code' IN (
            SELECT comp2->'coding'->>'code' -- changed this line
            FROM jsonb_array_elements(o.component) comp2
        )
    )
)
WHERE tp.subject->>'reference' LIKE 'Patient/%'
AND tp.subject->>'identifier' = '238'::varchar;

But I get all parameters in one row, and some of them do not even belong to the tracking plan. I am trying to achieve something like this:

trackingplan | observation
1 | [{"coding":{"code":"8231-0"}, "value": "x", "unit": "y"}]
2 | [{"coding":{"code":"8234-0"}, "value": "a", "unit": "y"}]
3 | null

Basically, I want the id of the plan with all the latest observations for the parameters of that plan. Any ideas?
2024-03-29 19:29:59.863000000
Step #1: I think the way to go is creating a script that will apply the desired changes to the dev environment first. You can validate the results and proceed to step #2 if everything looks good; otherwise, just restore prod data to the dev environment again, adjust/fix the script, and repeat until satisfied.
Step #2: Now run the same script in another (i.e. pre-prod) environment against the data from production, e.g. the most recent copy of production data. Note that the outcome of step #1 may be inaccurate, as many developers/features may have modified the dev environment's data, and your script will produce different results depending on which version of the data it's applied to. So for better confidence, you want an environment with locked write access holding data that has just been restored from prod to pre-prod. Proceed if satisfied.
Step #3: Then, finally, run against production and do some testing.
Hope this helps.
2024-02-28 17:16:00.700000000
I have made a basic Angular project, and when I run the application, the app starts correctly, but I would like it to start as part of the Visual Studio 2022 window if possible, not in a new cmd window. I have tried searching for it on the web, but I didn't find anything useful.
2024-02-21 09:05:00.353000000
I have this simple Chainlit app:

import chainlit as cl
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough, RunnableParallel
from langchain_community.document_transformers import LongContextReorder
from lib.aws import AWSAdapter


def format_docs(docs):
    """Format the retrieved documents."""
    return "\n\n".join(doc.page_content for doc in docs)


@cl.on_chat_start
async def on_chat_start():
    adapter = AWSAdapter()
    vector_db = adapter.opensearch_client
    retriever = vector_db.as_retriever()
    prompt = PromptTemplate(
        template="""\
Answer the question based only on the following context:
{context}

Question: {question}
""",
        input_variables=["context", "question"],
    )
    reordering = LongContextReorder()
    chain = (prompt | adapter.chat | StrOutputParser())
    rag = RunnableParallel(
        {"context": retriever | reordering.transform_documents | format_docs,
         "question": RunnablePassthrough()}
    ).assign(answer=chain)
    cl.user_session.set("rag", rag)


@cl.on_message
async def main(message: cl.Message):
    rag = cl.user_session.get("rag")
    msg = cl.Message(content="")
    await msg.send()
    async for chunk in rag.astream(message.content):
        for key in chunk:
            if key == "answer":
                await msg.stream_token(chunk[key])
    await msg.send()

As far as I understand, on paper all should work fine, and it almost does. The problem is that I receive the final answer from my model all at once; it is not streaming it token by token. I am not really familiar with async programming in Python, and I am not sure why it works this way. Note that rag.astream(message.content) is an asynchronous Python generator, and adapter.chat is langchain_community.chat_models.BedrockChat, which supports streaming. Can someone point me to what the solution might be?

P.S.: this is the only example in the Chainlit docs which describes streaming output. Also, I have implemented the same thing using a synchronous generator and it works as expected:

output = {}
curr_key = None
for chunk in rag.stream("question ..."):
    for key in chunk:
        if key not in output:
            output[key] = chunk[key]
        else:
            output[key] += chunk[key]
        if key == 'answer':
            print(chunk[key], end="", flush=True)
        curr_key = key
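To isolate whether the consumption pattern itself streams, here is a toy stand-in for rag.astream(), an async generator yielding chunks, which I'd expect to print token by token; if this streams but the real chain doesn't, the buffering happens upstream of the async for loop:

import asyncio

async def fake_astream():
    # Toy stand-in for rag.astream(): yields answer chunks with a delay.
    for token in ["str", "eam", "ing", " works"]:
        await asyncio.sleep(0.2)  # simulate model latency
        yield {"answer": token}

async def main():
    async for chunk in fake_astream():
        if "answer" in chunk:
            print(chunk["answer"], end="", flush=True)
    print()

asyncio.run(main())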
2024-03-11 22:33:32.470000000
I have an existing ASP.NET MVC application that uses Azure ACS authentication with a ClientId and ClientSecret. Now that ACS is deprecated, I have to migrate it to Azure AD authentication, which has a ClientId and TenantId. What changes will there be in the authentication function? And can I use my previous CSOM code as-is? The data is on SharePoint.
2024-02-28 18:28:15.150000000
The key to the answer is to define an index and increase it when clicking the button. If the current section and the previous section both have an img, hide the previous one; if there is no next section, disable the button.

<style>
  section { display: none; }
  .d-block { display: block; }
</style>

<section class="d-block">horace</section>
<section>ursa</section>
<section><img id="sad" src="[LINK]"></section>
<section><img id="neutral" src="[LINK]"></section>
<section><img id="happy" src="[LINK]"></section>
<section>thus</section>
<button>next</button>

<script>
  document.addEventListener("DOMContentLoaded", () => {
    let index = 1;
    const button = document.querySelector('button');
    const sections = document.querySelectorAll('section');
    button.addEventListener('click', function() {
      if (sections[index]) {
        if (hasImg(index) && hasImg(index - 1)) {
          sections[index - 1].classList.remove("d-block");
        }
        sections[index].classList.add("d-block");
      }
      if (!sections[++index]) {
        // there is no next section; disable the button
        button.disabled = true;
      }
    });

    function hasImg(index) {
      return sections[index].querySelector('img');
    }
  });
</script>
2024-03-15 04:13:34.487000000
The problem has been solved by adding a route rule in the VPC routing table to forward the default route to the internet gateway.
2024-03-15 03:03:11.867000000
I contacted Power BI support and the reply was: "We investigated your case and this visual has really not been maintained for a long period of time. Since any further development of this visual is not planned and it will not be supported, we will deprecate this visual in the nearest time. We apologize for any inconvenience this situation may have caused you."
2024-02-09 07:48:01.907000000
When using FLINT to work with polynomials, it is best to work with fmpz_mod_poly_t instead of *fmpz_mod_poly. The type fmpz_mod_poly_t is typedef'ed to an array of length 1 of fmpz_mod_poly_struct, so it behaves similarly to a pointer and allows passing parameters of type fmpz_mod_poly_struct by reference. (This pattern is held throughout FLINT, Arb, Calcium, and the other libraries in the same family.) The init and clear functions expect these types and will handle them appropriately. For example, this smaller code initializes a polynomial, squares it, prints it, and then clears the memory for both the polynomials and the modulus.

#include "fmpz_mod_poly.h"

int main()
{
    fmpz_t n;              // array of length 1 of fmpz_struct
    fmpz_mod_poly_t x, y;  // array of length 1 of fmpz_mod_poly_struct

    fmpz_init_set_ui(n, 7); // n = 7
    fmpz_mod_poly_init(x, n);
    fmpz_mod_poly_init(y, n);
    fmpz_mod_poly_set_coeff_ui(x, 3, 5);
    fmpz_mod_poly_set_coeff_ui(x, 0, 6);
    fmpz_mod_poly_sqr(y, x);
    fmpz_mod_poly_print(x); flint_printf("\n");
    fmpz_mod_poly_print(y); flint_printf("\n");
    fmpz_mod_poly_clear(x); // clear x memory
    fmpz_mod_poly_clear(y); // clear y memory
    fmpz_clear(n);          // clear modulus memory
}

Internally, each fmpz_mod_poly_struct consists of several coefficients, each of which is an fmpz integer. On creation and during manipulation, these might be allocated and changed. On deletion, each of these needs to be cleared as well, which is why simply calling delete on *fmpz_mod_poly won't work; instead, that will cause memory leaks.
2024-02-20 19:18:37.153000000
I am building an app using Flutter and Firebase. I am trying to create a user profile on their first login to the app. I am using the Google sign-in method provided by Firebase Authentication. This is my code for the same:

void handleGoogleSignIn() async {
  try {
    GoogleAuthProvider googleAuthProvider = GoogleAuthProvider();
    UserCredential userCredential =
        await auth.signInWithProvider(googleAuthProvider);
    User? user = userCredential.user;

    // check if user profile exists
    if (user != null) {
      DocumentSnapshot userDoc = await FirebaseFirestore.instance
          .collection("users")
          .doc(user.uid)
          .get();

      // if not there, then create
      if (!userDoc.exists) {
        await FirebaseFirestore.instance.collection("users").doc(user.uid).set(
          {
            "photoURL": user.photoURL,
            "displayName": user.displayName,
            "email": user.email,
          },
        );
      }
    }
  } catch (e) {
    showDialog(
      context: context,
      builder: (context) {
        return AlertDialog(
          content: Text(e.toString()),
        );
      },
    );
  }
}

I am storing displayName, photoURL, email, and other fields (still thinking). But in the Firestore window, it shows that the document was stored with displayName set to null. So what am I doing wrong, and what is the fix?
2024-02-20 18:32:29.953000000
Based on the name, ideally you should refactor IFileUploadingService.UploadFile to be async (truly async), since it seems to be an IO-bound operation. If the interface/implementation is outside of your control, then the usual approach would be to return Task.CompletedTask (though there can be edge cases):

public class UploadFileCommandHandler(IFileUploadingService fileUploadingService)
    : IRequestHandler<UploadFileCommand>
{
    private readonly IFileUploadingService fileUploadingService = fileUploadingService;

    public Task Handle(UploadFileCommand request, CancellationToken cancellationToken)
    {
        fileUploadingService.UploadFile(request.File); // <- this method returns void, not a Task (see the IFileUploadingService interface)
        return Task.CompletedTask;
    }
}
2024-02-27 13:55:02.030000000
import mlflow.pyfunc
import mlflow
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain import OpenAI

llm = OpenAI(temperature=0)

class UCBot():
    def __init__(self, llm):
        self.llm = llm
        self.toolkit = SQLDatabaseToolkit(
            db=SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi"),
            llm=llm)
        self.agent = create_sql_agent(llm=self.llm, toolkit=self.toolkit,
                                      verbose=True, top_k=1)

    def get_answer(self, question):
        return self.agent.run(question)

class MLflowUCBot(mlflow.pyfunc.PythonModel):
    def __init__(self, llm):
        self.llm = llm

    def predict(self, context, input):
        ucbot = UCBot(self.llm)
        return ucbot.get_answer(input)

# persist model to mlflow
with mlflow.start_run():
    mlflow.pyfunc.log_model(
        python_model=MLflowUCBot(llm),
        extra_pip_requirements=['langchain', 'databricks-sql-connector', 'sqlalchemy', 'openai'],
        artifact_path='model',
        registered_model_name="mymodel",
        input_example={"input": "how many tables?"}
    )

The code is able to create the model and predict. When I try to create a model serving endpoint, I get this error: "An error occurred while loading the model. No module named 'openai'". After adding openai as a dependency, I get the following error: "An error occurred while loading the model. No module named 'openai.api_resources'".
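A guess worth testing (an assumption on my side, not a confirmed fix): openai.api_resources only exists in openai versions before 1.0, so the serving environment may need exact version pins matching the notebook where the model was built. The version numbers below are illustrative:

import mlflow

# Re-log the model with pinned versions (numbers are illustrative;
# match them to the environment where the agent actually ran).
with mlflow.start_run():
    mlflow.pyfunc.log_model(
        python_model=MLflowUCBot(llm),  # from the snippet above
        artifact_path="model",
        registered_model_name="mymodel",
        extra_pip_requirements=[
            "langchain==0.0.330",    # assumed version
            "openai==0.28.1",        # last pre-1.0 release; still has openai.api_resources
            "databricks-sql-connector",
            "sqlalchemy",
        ],
        input_example={"input": "how many tables?"},
    )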
2024-03-07 14:54:23.007000000
Make sure you have Bluetooth enabled on the Mac.
2024-02-13 17:19:44.800000000
I've found various other questions with workarounds, but none of them seem to work. I've tried embedding views in NavigationStacks, VStacks, etc. I've pinpointed the issue to having to do with .fullScreenCover, because if I call the second view via .sheet:

.sheet(isPresented: $showSecondView, content: {
    SecondView()
})

then it works. The other testing I've done has pinpointed the issue, I think, down to .onAppear in the second view. If I assign focus to the first text field using a button (see code below), then the keyboard appears with the toolbar; but if I assign focus via .onAppear, then it doesn't:

struct FirstView: View {
    @State var showSecondView: Bool = false

    var body: some View {
        NavigationStack {
            Button(action: { showSecondView.toggle() }, label: {
                Text("Show second view")
            })
            .fullScreenCover(isPresented: $showSecondView, content: {
                SecondView()
            })
            .navigationTitle("First View")
        }
    }
}

struct SecondView: View {
    @State var first: String = ""
    @State var second: String = ""
    @FocusState private var focusedTextField: TextFields?

    var body: some View {
        NavigationStack {
            Button("Show keyboard") {
                focusedTextField = .firstTextField
            }
            VStack(spacing: 20) {
                TextField(text: $first) {
                    Text("First")
                }
                .focused($focusedTextField, equals: .firstTextField)
                TextField(text: $second) {
                    Text("Second")
                }
                .focused($focusedTextField, equals: .secondTextField)
            }
            Spacer()
                .toolbar {
                    ToolbarItemGroup(placement: .keyboard) {
                        Button(action: {}, label: {
                            Text("Prev")
                        })
                        Button(action: {}, label: {
                            Text("Next")
                        })
                        Spacer()
                        Button(action: {}, label: {
                            Image(systemName: "keyboard.chevron.compact.down")
                        })
                    }
                }
                .navigationTitle("Second View")
                .onAppear(perform: {
                    focusedTextField = .firstTextField
                })
        }
    }

    enum TextFields {
        case firstTextField
        case secondTextField
    }
}
2024-02-18 23:28:49.263000000
You have to use a loop. Assuming strings, you could use apply:

df['name'] = df['name'].apply(lambda x: x[::-1])

or a list comprehension:

df['name'] = [x[::-1] for x in df['name']]

Output:

   name
0  dcba
1   zyx

If you don't have only strings, a safer approach would be to check the type:

df['name'] = df['name'].apply(lambda x: x[::-1] if isinstance(x, str) else x)

or

df['name'] = [x[::-1] if isinstance(x, str) else x for x in df['name']]
2024-03-19 13:34:09.203000000
I'm trying to build a view which can be swiped up and down. I'm not sure why the above behaviour is occurring in my code: the text is kind of jumping/stuttering. I want the view to be draggable while the text is animating, without the animation breaking. Am I doing something I'm not supposed to do? Below is my code. Any tips would be highly appreciated.

struct ContentView: View {
    @State var offset: CGFloat = 0
    @State var time: String = "0.0"
    let timer = Timer.publish(every: 0.1, on: .main, in: .common).autoconnect()

    var body: some View {
        VStack {
            Spacer()
            Text(String(time)).contentTransition(.numericText())
                .onReceive(timer) { value in
                    withAnimation {
                        time = value.timeIntervalSince1970.minuteSecond // I have an extension for this.
                    }
                }
            Spacer()
        }
        .frame(maxWidth: .infinity)
        .background(.red)
        .offset(y: offset)
        .gesture(swipeDownGesture)
    }
}

extension ContentView {
    var swipeDownGesture: some Gesture {
        DragGesture()
            .onChanged(onDrag(value:))
            .onEnded(onSwipeDownEnded(value:))
    }

    func onDrag(value: DragGesture.Value) {
        let horizontalAmount = value.translation.width
        let verticalAmount = value.translation.height
        let isHorizontalSwipe = abs(horizontalAmount) > abs(verticalAmount)
        let isSwipeDown = verticalAmount > 0
        if isHorizontalSwipe {
            let isSwipeRight = horizontalAmount > 0
            if isSwipeRight {
                // ignore
            } else {
                // ignore
            }
        } else if isSwipeDown {
            withAnimation {
                offset = value.translation.height
            }
        }
    }

    func onSwipeDownEnded(value: DragGesture.Value) {
        withAnimation(.smooth(duration: 0.32, extraBounce: 0.22)) {
            offset = 0
        }
    }
}
2024-03-08 10:03:31.693000000
i"m unable to update my gitlab-runner install due to bad keys being detected. is this a gitlab update issue or something gone wrong on my system? update and install was working without problems in 2023. root[USER]-runner:~apt-get update hit:1 [LINK] bookworm-security inrelease hit:2 [LINK] bookworm inrelease get:3 [LINK] bookworm inrelease [23.3 kb] err:3 [LINK] bookworm inrelease the following signatures were invalid: expkeysig [HASH] gitlab b.v. (package repository signing key) packages[USER].com fetched 23.3 kb in 1s (21.0 kb/s) reading package lists... done w: an error occurred during the signature verification. the repository is not updated and the previous index files will be used. gpg error: [LINK] bookworm inrelease: the following signatures were invalid: expkeysig [HASH] gitlab b.v. (package repository signing key) packages[USER].com w: failed to fetch [LINK] the following signatures were invalid: expkeysig [HASH] gitlab b.v. (package repository signing key) packages[USER].com w: some index files failed to download. they have been ignored, or old ones used instead. many suggest to add gitlab apt gpg key like this root[USER]-runner:~curl -s [LINK] | apt-key add - ok still it does not resolve the issue on debian 12 and ubuntu 22. same error on apt update.
2024-03-22 07:49:15.507000000
This is working for me on a PrimeNG table:

<p-table [tableStyle]="{'min-width': '50rem', 'border-collapse': 'separate', 'border-spacing': '0'}">
2024-02-15 13:30:41.453000000
I installed Android Studio recently and set up an emulator (an Android 14 emulator, to be specific), but when I try to run the emulator, it just shows the Google logo and never comes on. I never got to open any apps on the emulator. I don't know whether it is because of my laptop specs or not: I have 4 GB of RAM and my laptop speed is about 2.45 GHz.
2024-03-21 22:33:26.987000000
The View Transitions API works in Google Chrome but not in Microsoft Edge 122, although both browsers should be supported according to the compatibility table. I am using macOS. There is no JavaScript involved; it works in Chrome just by using these two HTML files. The index.html page navigates to child.html when the image is clicked. The child.html page has the same image but bigger, and the View Transitions API works nicely in Chrome as it handles the size difference. Here are my HTML files:

index.html

<meta name="view-transition" content="same-origin">
<a href="child.html"><img class="img-small" src="[LINK]"></a>
<style>
  .img-small {
    view-transition-name: small;
    width: 600px;
  }
</style>

child.html

<meta name="view-transition" content="same-origin">
<a href="index.html"><img class="img-small" src="[LINK]"></a>
<body></body>
<style>
  .img-small {
    view-transition-name: small;
    width: 900px;
  }
</style>

Do you know where the problem might be? Thanks!
2024-03-13 14:16:29.030000000
Try something along these lines. Formula in G1:

=GROUPBY(XLOOKUP(B2:B8,E2:E8,D2:D8),A2:A8,SUM,0,0)
2024-02-28 22:31:43.287000000
Q: What is my goal?
A: I need a way to search best-match columns across many rows.

Q: Can you explain with more details?
A: Suppose I have the following data:

id | key a | key b | key c | val a | val b | val c
1  | abc   | b     | c     | va0   | null  | vc0
2  | a     | bcd   | c     | null  | vb1   | vc1

In my circumstance, I have a score function that will calculate best-match values. For given condition keys key a = "abc", key b = "bcd" and key c = "cde", I need to query the best-match values val a = "va0", val b = "vb1" and val c = "vc1" (a little special, as follows). In conclusion, suppose our score function is:

when same: 2*i (i is the position index, starting from 1)
when pattern match: 1+i
when the value is null: not a match

With key a = "abc", key b = "bcd" and key c = "cde", for row 1 (id=1): key a is an exact match, key b is a pattern match, and key c is a pattern match too, but the longer match with the larger index is prior, so the weight is weight1 = 2*1 + (1+2) + (1+3) = 9. With the same algorithm, for row 2 (id=2), the weight is weight2 = (1+1) + 2*(1+2) + (1+3) = 12. For val a, row 2 is null, so we take va0 from row 1; for val b, row 1 is null, so we take vb1 from row 2. However, for val c, since weight2 > weight1, we take vc1. That's why we get val a = "va0", val b = "vb1" and val c = "vc1".

Furthermore, finding the highest-score row for an entire row is clear (for the given data and query conditions, row 2 is expected), but a column-level highest-score match would be a problem. Maybe we could search the columns one by one, but if there are more than 100 columns this solution will die, and that's not acceptable. BTW, any version of Elasticsearch is fine; we are using the latest, 8.12.2, for testing purposes. We are in a PoC phase, any solution will be appreciated, and we are glad to run any test.

We tried to index entire rows and query the row with the highest score using a custom score function; it works and the solution is easy to understand. We read the manual and know the bulk query API might work, but our columns could number ~5k, so it cannot work as expected.
2024-02-26 15:10:14.663000000
xceed.document.net.formatting f = new xceed.document.net.formatting();
f.fontfamily = new font("times new roman");
f.size = 11d;
var numberedlist = document.addlist("", 0, listitemtype.numbered, 1, formatting: f);
2024-02-23 09:19:06.073000000
this script will work fine. click here or go to this repository: [LINK] you need to add a new trigger and select the dopost function in it, then deploy it as a new deployment. then use the deployment url for the form submission.
2024-02-22 13:18:50.440000000
problem i am trying to connect to a kafka pod from a different kafka consumer pod in the same namespace but the consumer can't connect to the broker. i am seeing this error in the consumer pod logs [2024-02-28 16:51:02,193] warn [consumer clientid=consumer-console-consumer-81790-1, groupid=console-consumer-81790] connection to node 0 (localhost/127.0.0.1:19092) could not be established. broker may not be available. (org.apache.kafka.clients.networkclient) [2024-02-28 16:51:03,098] warn [consumer clientid=consumer-console-consumer-81790-1, groupid=console-consumer-81790] connection to node 0 (localhost/127.0.0.1:19092) could not be established. broker may not be available. (org.apache.kafka.clients.networkclient) my questions what am i missing in the k8s config? any pointers to troubleshoot? i have shared multiple things that i have tried below. context my kafka yaml file --- apiversion: v1 kind: pod metadata: name: kafka labels: app: kafka service: kafka namespace: datastores spec: containers: - image: xxx.xxx.xx/images/kafka:latest name: kafka ports: - containerport: 19092 environment: these are some failed trial attempts to get the kafka pod to listen on localhost - name: kafkaadvertisedlisteners value: plaintext://127.0.0.1:19092,plaintexthost://localhost:19092,connectionsfromhost://localhost:19092 - name: kafkalistenersecurityprotocolmap value: plaintext:plaintext,connectionsfrom_host:plaintext resources: {} restartpolicy: always status: {} --- apiversion: v1 kind: service metadata: labels: app: kafka service: kafka name: kafka namespace: datastores spec: ports: - name: 19092 port: 19092 targetport: 19092 selector: app: kafka status: loadbalancer: {} my kafka-consumer yaml file --- apiversion: v1 kind: pod metadata: name: kafka-consumer-test labels: app: kafka-consumer-test namespace: datastores spec: containers: - name: kafka-consumer image: xxx.xxx.xx/images/kafka:latest command: [ /bin/bash , -c ] args: - | kafka-console-consumer.sh \ --bootstrap-server localhost:19092 \ --topic my-test-topic \ --from-beginning restartpolicy: never what i have tried so far setting listeners=plaintext://127.0.0.1:9092 in server.properties as suggested in . you can see the alternative option to set this via the config in the yaml file lemme know if any additional information is required. tia!
2024-02-29 07:17:03.533000000
you can provide actions and notActions as below:

{
  "actions": ["*"],
  "notActions": ["*/write", "*/delete"],
  "dataActions": [],
  "notDataActions": []
}
2024-03-18 15:12:55.877000000
i think you are running into a protection the browser puts in place that prevents js from artificially triggering click events that are not part of an original click event. for example, the browser will prevent you from triggering a click event on an element if you trigger the event from a mouse move event. in your scenario, because the confirm() method is triggered, it interrupts the flow of the click event from one handler to the next. here's my suggestion: extract the order-submit handler into a separate function, so that function can be called while passing the event to it. the original click event listener can call that function, as well as the new one that has the 'are you sure' confirm() call.

function submitOrder(event) {
    // ... original submit order
}

$('onestepcheckout-place-order-button').observe('click', submitOrder);
// or
$('onestepcheckout-place-order-button').observe('click', function(event) {
    if (confirm('are you sure?') === false) {
        console.log('the order should not be submitted');
        event.preventDefault();
        event.stopPropagation();
        return false;
    }
    submitOrder(event);
});
2024-03-21 16:42:43.797000000
how can i create an efficient loop that lets 2 child processes work with 2 different files? i tried using a loop like:

for (i = 0; i < 2; i++) {
    if ((pids[i] = fork()) < 0) {
        fprintf(stderr, "something wrong happened creating child n:%d\n", i + 1);
        exit(EXIT_FAILURE);
    }
    if (pids[i] == 0) {
        sprintf(nomefile, "filefiglio%d.txt", i + 1);
        fd[i] = open(nomefile, O_CREAT | O_TRUNC | O_WRONLY, 0660);
        printf("write on the file n:%d\n", i);
        fgets(buffer, sizeof(buffer), stdin);
        write(fd[i], buffer, strlen(buffer));
        close(fd[i]);
        exit(EXIT_SUCCESS);
    }
}

the output is going to be:

./7afebb
hello 1
write on the file n:1
hello 0
write on the file n:0
(empty space to fill file 1)
(empty space to fill file 2)

how can i make it ask me to fill file 1 and only after that fill file 2? // this is how i corrected the code:

void sighandler(int sig) {
    printf("signal received: %d\n", sig);
    return;
}

for (i = 0; i < n; i++) {
    if ((pid[i] = fork()) < 0) {
        fprintf(stderr, "something went wrong\n");
        exit(EXIT_FAILURE);
    }
    if (pid[i] == 0) {
        pause();
        sprintf(nomefile, "figliomag%d.txt", i + 1);
        fd[i] = open(nomefile, O_CREAT | O_WRONLY | O_TRUNC, 0660);
        printf("write on the file:\n");
        fgets(buffer, sizeof(buffer), stdin);
        write(fd[i], buffer, strlen(buffer));
        close(fd[i]);
        exit(EXIT_SUCCESS);
    }
}

action.sa_handler = SIG_DFL;
for (i = 0; i < n; i++) {
    printf("parent: click enter to write on file n:%d\n", i);
    getchar();
    kill(pid[i], SIGUSR1);
    wait(0);
}
2024-02-21 14:46:42.380000000
create a uicollectionview and set scrolling to horizontal, and hide the plus image in the last cell. you can set minimumlinespacing and minimuminteritemspacing to very small values.
2024-03-15 12:47:30.927000000
i have an api key for my node js application to access google spreadsheets. now i would like to also access the gmail api, so i activated the gmail api in the cloud console, but when i add the scope '[LINK]' to my google.auth() request i get an error message: 'invalid credentials'. what else is required to use the gmail api with that api key? because the api key is for the whole project, i assumed that it would also authorize newly activated apis. i created a new key for the api, but it did not help. do i have to create a new api key?
2024-03-12 18:58:13.607000000
as david mentioned, you can add scripts directly from cdn. you can see an example todo app here . i am working on a laravel module named lamx that provides functionality to easily incorporate htmx into laravel applications.
2024-03-14 09:22:19.213000000
i have been researching this over and over: what is the difference between yarn, npm and npx in the context of creating react apps? i have read a lot of articles on it, yet found no solid explanation. i'm trying to use these separately to see if there are different features in them, but i see nothing.
2024-02-14 20:02:17.283000000
i would invoke it using closure functions:

test.describe('inside a loop test', () => {
  for (let index = 0; index <= 5; index++) {
    (function (currentIndex) {
      test.beforeEach(async () => {
        console.log('before each #' + currentIndex);
      });
      test(`test #${currentIndex}`, async ({ page }) => {
        console.log('test #' + currentIndex);
        // example test action
        await page.goto('[LINK]');
        // add your playwright test actions here
      });
      test.afterEach(async () => {
        console.log('after each #' + currentIndex);
      });
    })(index);
  }
});
2024-02-07 17:28:01.747000000
we've got a webview that we're loading under the application class whenever the app starts. this website is for internal data loading and won't appear to end users. all was well until we upgraded the application to android sdk 34. while loading the webview, at some point during the loading it unexpectedly stops and stays in the same state until we close the application. note that we've got another webview that we use to display web pages. is anyone having a similar issue? any clue or solution is appreciated. thank you.
2024-03-02 16:32:37.970000000
using the code from the example above: if today is 2024/03/24, then the total spend should be taken only as the sum of "completed" orders for the period from 2023/03/24 to 2024/03/24.
2024-03-24 17:15:21.730000000
if you are trying to read the associated files, such as labels.txt, you can simply unzip the .tflite file: [LINK]
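for example, a minimal python sketch (the file name and the labels.txt member are assumptions; a .tflite file with attached metadata is just a standard zip archive):

import zipfile

# a .tflite model with packed metadata is a plain zip archive, so the
# associated files can be listed and extracted directly.
with zipfile.ZipFile("model.tflite") as archive:  # hypothetical file name
    print(archive.namelist())            # may show e.g. labels.txt
    archive.extract("labels.txt", ".")   # extract an associated file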
2024-03-05 09:44:38.787000000
i simplified your code so it only showcases your problem. never use float unless you know what you're doing. never assign width or height to an element unless you know what you're doing; use max-width and max-height instead. always use const or let to declare variables. use the correct semantics for the tags. use understandable variable names and classes that actually describe what they are used for. use loops when you have several elements that need to change. i used a css variable to control the spacing on all elements. i added a label and inputs that use the same name so they are all connected. i added event listeners to all inputs that listen to the 'change' event. i renamed the box ids to match the input ids, so it's easier to take the input id and add -info to find the id of the corresponding .box. you can use the hidden attribute that almost all elements have: hide all elements, then show the element that you want to display.

document.querySelectorAll('.sidenav input').forEach(input => {
  input.addEventListener('change', showContent);
});

function showContent({target}) {
  document.querySelectorAll('section .box').forEach(infoBox => {
    infoBox.hidden = true;
  });
  const boxToDisplay = document.getElementById(target.id + '-info');
  if (boxToDisplay) {
    boxToDisplay.hidden = false;
  }
}

showContent({target: document.querySelector('.sidenav input[checked]')});

* { box-sizing: border-box; }
:root { --spacing: 1rem; --spacing-quarter: 0.25rem; }
html, body { font-family: arial, helvetica, sans-serif; margin: 0; padding: var(--spacing); background: darkblue; }
input, label { cursor: pointer; }
label { color: white; text-align: left; }
.links a:hover { color: blueviolet; }
main { display: flex; }
section { flex: 1 1 auto; }
.sidenav { background: linear-gradient(to right, black, dimgray, darkgray); padding: var(--spacing); text-align: right; border-radius: 10px; max-width: 200px; }
.sidenav div { padding: var(--spacing-quarter) 0px; }
.box { background-image: url(../prof_slike/image3.jpg); background-repeat: no-repeat; background-attachment: fixed; background-size: 100% 100%; border-radius: 10px; padding: var(--spacing); }
.box h3 { color: white; font-size: 20px; padding: 2px; margin: 0px; }
.card { min-height: 340px; background-color: white; border-radius: 5px; text-align: initial; }
.card p { color: black; font-size: 12px; padding: var(--spacing); }

<main>
  <div class="sidenav">
    <div><label for="skills">skills</label> <input type="radio" name="navselector" checked id="skills" /></div>
    <div><label for="interests">interests</label> <input type="radio" name="navselector" id="interests" /></div>
    <div><label for="education">education</label> <input type="radio" name="navselector" id="education" /></div>
  </div>
  <section>
    <div class="box" id="skills-info">
      <h3>skills</h3>
      <div class="card" contenteditable="true"><p>my skills are...</p></div>
    </div>
    <div class="box" id="interests-info">
      <h3>interests</h3>
      <div class="card" contenteditable="true"><p>my interests are...</p></div>
    </div>
    <div class="box" id="education-info">
      <h3>education</h3>
      <div class="card" contenteditable="true"><p>my education is...</p></div>
    </div>
  </section>
</main>
2024-02-23 00:55:18.303000000
textScaleFactor has been deprecated since v3.12.0-2.0.pre; please check this for the deprecated property. we can use the scale method from TextScaler. here is a sample:

final ts = MediaQuery.textScalerOf(context);
final res = ts.scale(1.0);
print('scale factor: $res');
2024-03-19 19:36:23.577000000
i have just seen this: [LINK], but it is not completely what i need... i have seen that with .net 8 i no longer get a success response when i call the api function with this code:

var claims = new[] {
    new Claim(JwtRegisteredClaimNames.Sub, user.Username),
    new Claim(JwtRegisteredClaimNames.Jti, Guid.NewGuid().ToString()),
    new Claim(ClaimTypes.Role, "Administrator")
};
var token = new JwtSecurityToken(
    issuer: "[LINK]",
    audience: "[LINK]",
    expires: DateTime.UtcNow.AddHours(1),
    claims: claims,
    signingCredentials: new SigningCredentials(signingKey, SecurityAlgorithms.HmacSha256));
return Ok(new {
    token = new JwtSecurityTokenHandler().WriteToken(token),
    expiration = token.ValidTo
});

i read that now the object is a JsonWebToken, but i don't know how to change the code... can someone help me?
2024-02-29 15:30:33.573000000
there are no api calls available to clone/duplicate an amazon rekognition face collection. in fact, you cannot even 'retrieve' an entry from a face collection. rather, face collections are used by other api calls (e.g. searchfacesbyimage()).
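for illustration, a hedged boto3 sketch of that search-style usage (the collection id, image file, and thresholds below are placeholders, not values from any real setup):

import boto3

# face collections are queried through search calls rather than read directly.
rekognition = boto3.client("rekognition")
with open("face.jpg", "rb") as f:  # placeholder image file
    response = rekognition.search_faces_by_image(
        CollectionId="my-collection",   # placeholder collection id
        Image={"Bytes": f.read()},
        MaxFaces=5,
        FaceMatchThreshold=90,
    )
for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])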
2024-02-13 05:36:33.110000000
since the derived class doesn't add any fields, it shouldn't provide a custom swap at all; the one from the base class will be used automatically. if it did have extra members, the swap should've been implemented as:

using std::swap;
swap(static_cast<user&>(first), static_cast<user&>(second));
swap(first.extra, second.extra); // only swap the fields added in the derived class.

some other things: move constructor, copy&swap operator=, and swap() should all be noexcept. unless you're doing this for practice, you should follow the rule of 0 and get rid of all those custom copy/move constructors, assignment operators, and swap(); the implicitly generated ones will work just fine. setters should accept parameters by value, then std::move them. it might be a good idea to get into the habit of writing using std::swap; swap(...); instead of calling std::swap directly, to be able to utilize a custom swap without having to think about whether you need to omit std:: or not for each individual type.
2024-03-17 11:37:23.927000000
we have the following setup on production for mqtt: 5 emqx brokers (version 3.x), an aws load balancer to distribute load across the mqtt brokers (and haproxy in some environments), and the paho mqtt python client (version 1.1). we are noticing an issue where messages are getting frequently dropped (around 1 or 2 in every 100 messages). the mqtt connect configuration is as follows: client_id = (random int from 1 to 100) + (current hostname), clean_session = false, keep alive timeout = 60. how are the messages published? we have x number of celery workers publishing to the same topic in parallel, with a message rate of 10/s at max. the client id is unique across each celery worker as it uses the hostname in the client id. for the messages which are getting dropped or missed, the paho mqtt library returns a 0 on publish, indicating the message was published successfully. sample code for publish:

(res, mid) = self.conn.publish(topic=topic, payload=payload, qos=qos)
if res == 0:
    log.debug(f"successfully published message::{str(res)} with id {mid} for payload::{payload}", client_id=self.client_id)
else:
    log.info(f"error publishing message::{str(res)} with id {mid} for payload::{payload}", client_id=self.client_id)

but there are no logs in emqx (even with debug logs enabled) for the ones which have been dropped. this is happening only on production, where there are multiple clients publishing to the same topic, whereas with a single client we haven't noticed the issue. is there any issue with the configuration above, or would upgrading to a newer version of the library help fix the issue? or could this be something specific to the emqx broker? python version: 3.6.9; library version: 1.1; operating system (including version): linux; mqtt server (name, version, configuration, hosting details): emqx
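for reference, a sketch of what confirming delivery could look like on a newer paho 1.x release (>= 1.3, where publish() returns an mqttmessageinfo); the broker address and ids below are placeholders, and a return code of 0 only means the message was queued client-side:

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="example-publisher")  # placeholder id
client.connect("broker.example.com", 1883)           # placeholder broker
client.loop_start()

info = client.publish("some/topic", b"payload", qos=1)
info.wait_for_publish()            # block until the qos 1 handshake completes
print("published:", info.is_published())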
2024-03-26 13:47:28.217000000
hi still confused on whether it is possible for rotary and relative positional embeddings to be integrated with the fast kernels in pytorch sdpa, allowing for faster training/inference? if so what would be a blue print of how to merge previous architecture that use both to the new sdpa one ? i would want to integrate sdpa into the esm model, available on huggingface at [LINK]>. here is the section of the code of interest, the forward call of the attention def forward( self, hiddenstates: torch.tensor, attentionmask: optional[torch.floattensor] = none, headmask: optional[torch.floattensor] = none, encoderhiddenstates: optional[torch.floattensor] = none, encoderattentionmask: optional[torch.floattensor] = none, pastkeyvalue: optional[tuple[tuple[torch.floattensor]]] = none, outputattentions: optional[bool] = false, ) - tuple[torch.tensor]: mixedquerylayer = self.query(hiddenstates) if this is instantiated as a cross-attention module, the keys and values come from an encoder; the attention mask needs to be such that the encoder's padding tokens are not attended to. iscrossattention = encoderhiddenstates is not none if iscrossattention and pastkeyvalue is not none: reuse k,v, crossattentions keylayer = pastkeyvalue[0] valuelayer = pastkeyvalue[1] attentionmask = encoderattentionmask elif iscrossattention: keylayer = self.transposeforscores(self.key(encoderhiddenstates)) valuelayer = self.transposeforscores(self.value(encoderhiddenstates)) attentionmask = encoderattentionmask elif pastkeyvalue is not none: keylayer = self.transposeforscores(self.key(hiddenstates)) valuelayer = self.transposeforscores(self.value(hiddenstates)) keylayer = torch.cat([pastkeyvalue[0], keylayer], dim=2) valuelayer = torch.cat([pastkeyvalue[1], valuelayer], dim=2) else: keylayer = self.transposeforscores(self.key(hiddenstates)) valuelayer = self.transposeforscores(self.value(hiddenstates)) querylayer = self.transposeforscores(mixedquerylayer) matt: our bert model (which this code was derived from) scales attention logits down by sqrt(headdim). esm scales the query down by the same factor instead. modulo numerical stability these are equivalent, but not when rotary embeddings get involved. therefore, we scale the query here to match the original esm code and fix rotary embeddings. querylayer = querylayer * self.attentionheadsize**-0.5 if self.isdecoder: if crossattention save tuple(torch.tensor, torch.tensor) of all cross attention key/valuestates. further calls to crossattention layer can then reuse all cross-attention key/valuestates (first if case) if uni-directional self-attention (decoder) save tuple(torch.tensor, torch.tensor) of all previous decoder key/valuestates. further calls to uni-directional self-attention can concat previous decoder key/valuestates to current projected key/valuestates (third elif case) if encoder bi-directional self-attention pastkeyvalue is always none pastkeyvalue = (keylayer, valuelayer) if self.positionembeddingtype == rotary : querylayer, keylayer = self.rotaryembeddings(querylayer, keylayer) take the dot product between query and key to get the raw attention scores. 
attentionscores = torch.matmul(querylayer, keylayer.transpose(-1, -2)) if self.positionembeddingtype == relativekey or self.positionembeddingtype == relativekeyquery : seqlength = hiddenstates.size()[1] positionidsl = torch.arange(seqlength, dtype=torch.long, device=hiddenstates.device).view(-1, 1) positionidsr = torch.arange(seqlength, dtype=torch.long, device=hiddenstates.device).view(1, -1) distance = positionidsl - positionidsr positionalembedding = self.distanceembedding(distance + self.maxpositionembeddings - 1) positionalembedding = positionalembedding.to(dtype=querylayer.dtype) fp16 compatibility if self.positionembeddingtype == relativekey : relativepositionscores = torch.einsum( bhld,lrd- bhlr , querylayer, positionalembedding) attentionscores = attentionscores + relativepositionscores elif self.positionembeddingtype == relativekeyquery : relativepositionscoresquery = torch.einsum( bhld,lrd- bhlr , querylayer, positionalembedding) relativepositionscoreskey = torch.einsum( bhrd,lrd- bhlr , keylayer, positionalembedding) attentionscores = attentionscores + relativepositionscoresquery + relativepositionscoreskey if attentionmask is not none: apply the attention mask is (precomputed for all layers in esmmodel forward() function) attentionscores = attentionscores + attentionmask normalize the attention scores to probabilities. attentionprobs = nn.functional.softmax(attentionscores, dim=-1) this is actually dropping out entire tokens to attend to, which might seem a bit unusual, but is taken from the original transformer paper. attentionprobs = self.dropout(attentionprobs) mask heads if we want to if headmask is not none: attentionprobs = attentionprobs * headmask contextlayer = torch.matmul(attentionprobs, valuelayer) contextlayer = contextlayer.permute(0, 2, 1, 3).contiguous() newcontextlayershape = contextlayer.size()[:-2] + (self.allheadsize,) contextlayer = contextlayer.view(newcontextlayershape) outputs = (contextlayer, attentionprobs) if outputattentions else (contextlayer,) if self.isdecoder: outputs = outputs + (pastkeyvalue,) return outputs how would you change the code to replace with the pytorch function sdpa at [LINK]>, specifically, how/could you change the relative positional encoding into a attention mask that could be passed on to the function? scouring online it seems possible, meaning the scaleddotproduct_attention function can expect an arbitrary mask but not sure how this would be achieved in this case.
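for context, here is a minimal standalone sketch (not the esm code itself) of what i understand a float attn_mask to do: it is added to the scaled q-k scores before the softmax, so a bias with shape broadcastable to (batch, heads, seq, seq), such as precomputed relative-position scores, can be folded in this way (all tensors below are made up):

import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 2, 4, 16, 8
q = torch.randn(batch, heads, seq, head_dim)
k = torch.randn(batch, heads, seq, head_dim)
v = torch.randn(batch, heads, seq, head_dim)

# hypothetical additive positional bias, broadcastable to (batch, heads, seq, seq)
rel_bias = torch.randn(1, heads, seq, seq)

# a float attn_mask is added to the attention scores before the softmax
out = F.scaled_dot_product_attention(q, k, v, attn_mask=rel_bias)
print(out.shape)  # torch.Size([2, 4, 16, 8])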
2024-03-12 21:22:17.550000000
i have addressed this matter by changing the return type from string to httpresponsedata class within the microsoft.azure.functions.worker.http namespace. additionally, i have updated the request object from httprequest to httprequestdata , resulting in the following code structure. reference : azure signalr service [function(nameof(negotiate))] public static httpresponsedata negotiate([httptrigger(authorizationlevel.anonymous, post )] httprequestdata req, [signalrconnectioninfoinput(hubname = serverless )] string connectioninfo) { var response = req.createresponse(httpstatuscode.ok); response.headers.add( content-type , application/json ); response.writestringasync(connectioninfo); return response; }
2024-02-21 10:02:37.110000000
i tested using qt 6.6.1 and msvc2019 64bit. note: mingw does not include qwebengineview. i used qmake, so i added 'webenginewidgets' to the .pro file. the relevant parts are these two lines: qt += core gui webenginewidgets greaterthan(qtmajorversion, 4): qt += widgets webenginewidgets if you forget 'webenginewidgets' you'll get linker errors using qt creator. (unresolved external symbol ... qwebengineview...) main.cpp doesn't require any mention of your qwebengineview. because you want to use a widget in your mainwindow class, put a pointer to the 'view' in your mainwindow.h. in your mainwindow.cpp constructor, 'new' the view and set its parent to the ui- widget you have. also delete 'view' pointer in the destructor, ~mainwindow(). then in the constructor (or wherever is appropriate): view- load(...) the url size the view- resize(ui- widget-size()) view- show() the website. if you forget to size the view you're likely to get a tiny image of the url in your ui- widget. the source code ... webviewer.pro - assumes app name webviewer qt += core gui webenginewidgets greaterthan(qtmajorversion, 4): qt += widgets webenginewidgets config += c++17 sources += \ main.cpp \ mainwindow.cpp headers += \ mainwindow.h forms += \ mainwindow.ui translations += \ webviewerenus.ts config += lrelease config += embed_translations default rules for deployment. qnx: target.path = /tmp/$${target}/bin else: unix:!android: target.path = /opt/$${target}/bin !isempty(target.path): installs += target mainwindow.ui ?xml version= 1.0 encoding= utf-8 ? ui version= 4.0 class mainwindow /class widget class= qmainwindow name= mainwindow property name= geometry rect x 0 /x y 0 /y width 602 /width height 436 /height /rect /property property name= windowtitle string mainwindow /string /property widget class= qwidget name= centralwidget widget class= qwidget name= webwidget native= true property name= geometry rect x 10 /x y 10 /y width 581 /width height 411 /height /rect /property /widget /widget /widget resources/ connections/ /ui main.cpp #include mainwindow.h #include qapplication int main(int argc, char *argv[]) { qapplication a(argc, argv); mainwindow w; w.show(); return a.exec(); } mainwindow.h #ifndef mainwindowh #define mainwindowh #include qmainwindow #include qtwebenginewidgets/qwebengineview qtbeginnamespace namespace ui { class mainwindow; } qtendnamespace class mainwindow : public qmainwindow { q_object public: mainwindow(qwidget parent = nullptr); ~mainwindow(); private: ui::mainwindow ui; qwebengineview view = nullptr; // add this* }; #endif // mainwindowh mainwindow.cpp #include mainwindow.h #include uimainwindow.h mainwindow::mainwindow(qwidget *parent) : qmainwindow(parent) , ui(new ui::mainwindow) , view(nullptr) { ui- setupui(this); if (ui- webwidget) { view = new qwebengineview(ui- webwidget); view- load(qurl( [LINK]/ )); view- resize(ui- webwidget- size()); view- show(); } } mainwindow::~mainwindow() { delete ui; delete view; }
2024-02-21 22:40:38.980000000
i had this error too, with this expression, in javascript, inside a .vue component: let myvar = new RegExp('[^*]+\*='); meaning one or more characters except a literal *, followed by a literal *=; to solve the issue i had to 'double escape' the second *, so \\* instead of simply \*: let myvar = new RegExp('[^*]+\\*=');
2024-02-09 09:31:29.030000000
i'm currently working on implementing role-based authentication using nextauth.js in my next.js application. i've followed the documentation provided by nextauth.js, but i encountered an error(in profile snippet and callback snippet which i copied from next-auth documentation) when trying to add role-based authentication to my api routes. i'm using typescript and my api route file is located at pages/api/auth/[..nextauth]/route.ts. import nextauth from next-auth import credentialsprovider from next-auth/providers/credentials ; import {signinwithemailandpassword} from 'firebase/auth'; import auth from '@/app/lib/auth'; export const authoptions = { secret: process.env.auth_secret, pages: { signin: '/signin' }, session: { strategy: jwt as const, maxage: 3600, }, providers: [ credentialsprovider({ //error profile(profile) { return { role: profile.role ?? user , } }, name: 'credentials', credentials: {}, async authorize(credentials): promise any { return await signinwithemailandpassword(auth, (credentials as any).email || '', (credentials as any).password || '') .then(usercredential = { if (usercredential.user) { return usercredential.user; } return null; }) .catch(error = (console.log(error))) .catch((error) = { console.log(error); }); } }) ], //error callbacks: { async jwt({ token, user }) { if (user) token.role = user.role; return token; }, async session({ session, token }) { if (session?.user) session.user.role = token.role; return session; }, }, } const handler = nextauth(authoptions) export { handler as get, handler as post} could someone please help me understand why this error is happening and how i can properly implement role-based authentication with nextauth.js in my api routes? what i tried: using nextauth.js documentation: i'm setting up role-based authentication in my next.js app by following the nextauth.js documentation. copying code: i copied code snippets from the documentation to implement role-based authentication. encountering error: after implementing the code, i encountered an error.
2024-03-03 19:26:00.767000000
i'm grappling with a pl/sql data copy procedure that works flawlessly for multiple remote sites but times out on one specific site. our awr and ash reports pinpoint 'virtual circuit' waits as the predominant bottleneck, accounting for a staggering 79% of the total database time. the procedure's logic identifies rows absent from our database compared to a remote database and triggers data copying for the missing entries. while other sites' data copy tasks wrap up in under a minute, this particular site's operation is subject to frustrating timeouts. enclosed below is a snippet from our awr report, which underscores the extensive wait times under the 'virtual circuit wait' event: awr report. i will greatly appreciate any insights or recommended diagnostic approaches from the wisdom of this community. thanks in advance for your time and help. whenever i run this procedure manually for the problematic site, it takes at least five minutes to run. i just ran it on the test database, and it took about 22 minutes total. this is probably longer than normal because it hasn't copied all of the data in almost a week. when i run it for any other site, it completes in less than a minute.
2024-02-13 05:17:57.260000000
that limitation is documented: be aware that despite forcing a screenshot resolution to a particular height and width for a test, if this test is run on different infrastructure (i.e. a 13" mac vs a pc attached to a 30" monitor), the results will be different. so it's extremely important that you standardize where the tests will run, both locally and in ci. one way to handle this is by running it in a docker container or against browserstack or the like.
2024-03-22 18:15:54.367000000
try this expression if you only want the value itself from your provided sample: value="([^"]*)"
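for instance, applied in python (the sample string is made up, and this assumes the attribute value is double-quoted in the real input):

import re

sample = 'foo value="some result" bar'          # made-up input
match = re.search(r'value="([^"]*)"', sample)
if match:
    print(match.group(1))                       # prints: some result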
2024-03-21 14:07:06.810000000
i had a wrinkle on this scenario, where some of my sourcecontext entries were generic classes and contained periods later in the output string, from dll version strings of the implementing class; this code ended up doing the trick for me:

Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .Enrich.WithComputed("SourceContextWithoutDll",
        "Substring(SourceContext, 0, if LastIndexOf(SourceContext, '`') > 0 then LastIndexOf(SourceContext, '`') else Length(SourceContext))")
    .Enrich.WithComputed("SourceContextName",
        "Substring(SourceContextWithoutDll, LastIndexOf(SourceContextWithoutDll, '.') + 1)")
    .ReadFrom.Configuration(configuration)
    .WriteTo.Console(outputTemplate: "[{Timestamp:HH:mm:ss} {SourceContextName} {Level:u3}] {Message:lj}{NewLine}{Exception}")
    .CreateLogger();
2024-02-22 10:12:58.063000000
just a thought... normalize your delimiters. this assumes your source file does not exceed 2gb. [newval] can be parsed as necessary.

example:

declare [USER] varchar(max) = 'george bush b berry goldwater b silver surfer mork mindy '

select newval = trim(value)
      ,seq    = ordinal
 from string_split(replace(replace([USER],' b ','|'),' ','|'),'|',1)

or another option if truly fixed width:

select newval = substring([USER], n*22, 20)
      ,seq    = n+1
 from (select top 5000 n = -1 + row_number() over (order by 1/0) from master..spt_values n1) a
 where n <= len([USER])/22

results:

newval           seq
george bush      1
berry goldwater  2
silver surfer    3
mork mindy       4

i should add... to import your source file:

declare [USER] varchar(max);
select [USER] = BulkColumn
 from openrowset(bulk 'c:\working\testdata.txt', single_blob) x;
2024-03-25 14:53:13.967000000
i have this hyper client: use httpbodyutil::{combinators::unsyncboxbody, bodyext, bodystream, streambody}; use hypertls::httpsconnector; use hyperutil::{ client::legacy::{self, connect::httpconnector}, rt::tokioexecutor, } use tokioutil::{ bytes::bytes, io::{readerstream, streamreader}, }; pub struct httpsclient { legacyclient: legacy::client httpsconnector httpconnector , unsyncboxbody bytes, std::io::error , } and i'm using legacyclient for code like: legacyclient .request( hyper::request::builder() .method(method::put) .uri(uri) .header(header::contentlength, size) .body(bodyext::boxedunsync(streambody::new(readerstream::new(stream).mapok(frame::data)))) .unwrap(), ) .await?; now i would like to use that legacyclient also for a simple delete call. is it possible? i tried with: let req = hyper::request::builder() .uri(uri) .method( delete ) .body(()) .unwrap(); let resp = legacyclient.request(req).await?; but it throws with: error[e0308]: mismatched types | 128 | let resp = legacyclient.request(req).await?; | ------- ^^^ expected request unsyncboxbody bytes, ... , found request () | | | arguments to this method are incorrect | = note: expected struct hyper::request httpbodyutil::combinators::unsyncboxbody hyper::body::bytes, std::io::error found struct hyper::request () note: method defined here | 201 | pub fn request(&self, mut req: request b ) - responsefuture { | ^^^^^^^ how can i fix this? i would like to use the hyper_util::client::legacy because i don't want to add another dependency such as reqwest to my project.
2024-03-28 00:16:53.390000000
i am working on a project where app a has to be running all the time, meaning that after each crash or reboot of the device, the application has to be restarted. i want to do this using another application, app b, which will constantly check the version (or something else) of app a. is this possible, and if so, what is the best method? i have tried using appstate && linking, but this gave performance issues. i have tried writing a script in the android folder that automatically boots the application, and thankfully this works. so my question is: is it possible for app b to check the status of app a and, if there is no response, start app a?
2024-03-12 09:11:18.063000000
i would like to have a cloneable struct with a type parameter that is uncloneable, used for a field of type arc. it looks like rust requires all type parameters of a cloneable struct to be cloneable too. without a type parameter, it is possible to declare a cloneable struct with an uncloneable field that is an arc. here is the code that shows: a cloneable struct with a type parameter that is also cloneable, and a cloneable struct that does not have a type parameter and has a field that is an arc of an uncloneable type. i think it is unnecessary to require the type parameter to be cloneable if it is used in a field that is an arc of that type. my specific case is a multi-threaded pool of resources; i do not want to require my resource to be cloneable. the pool uses Arc<Mutex<Vec<T>>> as a way to store resources. is there a way to make the pool cloneable and keep the T uncloneable? is the solution to manually implement the clone() method and not use #[derive(Clone)]?

use std::sync::Arc;
use std::clone::Clone;

#[derive(Clone)]
struct ContainerWithGenerics<T> {
    contained: Arc<T>,
}

#[derive(Clone)]
struct ValueCloneable {
    value: i32,
}

#[derive(Clone)]
struct ContainerWithoutGenerics {
    contained: Arc<ValueNotCloneable>,
}

struct ValueNotCloneable {
    value: i32,
}

fn main() {
    let container0 = ContainerWithGenerics { contained: Arc::new(ValueCloneable { value: 1i32 }) }.clone();
    let container1 = ContainerWithoutGenerics { contained: Arc::new(ValueNotCloneable { value: 1i32 }) }.clone();
}
2024-03-15 17:11:26.360000000
i need to get a working chatgpt client in python. i used the official documentation from the openai website and wrote this code:

from openai import OpenAI

client = OpenAI(api_key='my api is written here')
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "you are a helpful assistant."},
        {"role": "user", "content": "who won the world series in 2020?"},
        {"role": "assistant", "content": "the los angeles dodgers won the world series in 2020."},
        {"role": "user", "content": "where was it played?"}
    ]
)

after the launch, i received the following error:

traceback (most recent call last):
  file "c:\users\danuk\appdata\local\programs\python\python312\1.py", line 5, in <module>
    response = client.chat.completions.create(
  file "c:\users\danuk\appdata\local\programs\python\python312\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  file "c:\users\danuk\appdata\local\programs\python\python312\lib\site-packages\openai\resources\chat\completions.py", line 663, in create
    return self._post(
  file "c:\users\danuk\appdata\local\programs\python\python312\lib\site-packages\openai\_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  file "c:\users\danuk\appdata\local\programs\python\python312\lib\site-packages\openai\_base_client.py", line 889, in request
    return self._request(
  file "c:\users\danuk\appdata\local\programs\python\python312\lib\site-packages\openai\_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.PermissionDeniedError: <!doctype html> ... attention required! | cloudflare ... sorry, you have been blocked ... you are unable to access api.openai.com ... this website is using a security service to protect itself from online attacks. the action you just performed triggered the security solution. there are several actions that could trigger this block including submitting a certain word or phrase, a sql command or malformed data. ... cloudflare ray id: [HASH] ... your ip: 5.228.83.46 ... performance & security by cloudflare

i have not found any information about this error. i've tried changing the version, writing different variations of the code, but it doesn't work.
2024-03-10 21:17:52.087000000
arns for iam roles should look something like arn:aws:iam::<account-id>:role/<role-name>, and for iam users it should look something like arn:aws:iam::<account-id>:user/<user-name>. so, if m69-aws-jxx-icxxx-developer/vdoxxx is the name of your role/user, you'll want to adjust your policy to follow the format i mentioned above. make sure the role/user name matches exactly what you've set up in aws iam, including any paths (i am curious as to why there is a path in your role name or user name) or special characters. your principal should look like:

"Principal": {
    "AWS": "arn:aws:iam::[HASH]:role/m69-aws-jxx-icxxx-developer/vdoxxx"
}

again, double-check that the role name is correct and exists in your iam, and you should be good to go.
2024-03-19 07:19:53.297000000
there are two techniques with which you can create a shortcut of any app programmatically, without a badge: 1. by using a widget; 2. by creating an activity with a transparent launcher icon ([LINK]). use this to set that activity on the shortcut: shortcutInfo.setActivity(ComponentName(mActivity!!, OpenAppActivity::class.java)). complete method for the 2nd option:

fun addShortcut(bitmap: Bitmap) {
    val launchIntent = mActivity!!.packageManager.getLaunchIntentForPackage(
        mApplication.packageNameApp.takeIf { it.isNotEmpty() } ?: mActivity!!.packageName)
    val shortcutInfo = ShortcutInfoCompat.Builder(
        mActivity!!.applicationContext,
        mainViewModel.selectedApp!!.appName
    )
        .setIntent(launchIntent!!)
        .setShortLabel("test")
        .setAlwaysBadged()
        .setActivity(ComponentName(mActivity!!, OpenAppActivity::class.java))
        .setIcon(IconCompat.createWithBitmap(bitmap))
        .build()
    val broadcastIntent = Intent(Intent.ACTION_CREATE_SHORTCUT)
    mActivity?.registerReceiver(
        object : BroadcastReceiver() {
            override fun onReceive(c: Context?, intent: Intent) {
                Toast.makeText(mActivity!!.applicationContext, "icon created", Toast.LENGTH_SHORT).show()
                mActivity!!.unregisterReceiver(this)
            }
        },
        IntentFilter(Intent.ACTION_CREATE_SHORTCUT),
        Context.RECEIVER_NOT_EXPORTED
    )
    val successCallback = PendingIntent.getBroadcast(
        mActivity!!, 99, broadcastIntent,
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
    )
    ShortcutManagerCompat.requestPinShortcut(
        requireContext(), shortcutInfo, successCallback.intentSender
    )
}
2024-02-27 05:44:34.683000000
i tried using pypdf2 to decrypt and then send off attachments of pdf files landing in my inbox, but realised this is not going to work because those pdfs have attachments in them (excel) which can only be accessed if decrypted via adobe reader. is there an api, or a simple way of using an adobe reader api via python, to decrypt these pdf files and retain those excel files? i've tried chatgpt and google but there's no documentation on this.
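for reference, this untested sketch is the kind of thing i'm after; pikepdf is one library i'm considering, and the file name and password below are placeholders:

import pikepdf

# open the encrypted pdf and walk its embedded files (attachments)
with pikepdf.open("inbox.pdf", password="secret") as pdf:  # placeholders
    for name, spec in pdf.attachments.items():
        data = spec.get_file().read_bytes()
        with open(name, "wb") as out:  # e.g. writes the embedded .xlsx
            out.write(data)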
2024-02-21 03:01:32.193000000
i have a field in my struct that stores either a floating-point array or an integer array, but i can't use generics to make sure that this field can take both types; normally it would only take one type. i can only use 'any' to make sure i can accept 'float32' or 'uint32', but i need to make sure i can't accept any type other than 'float32' & 'uint32'. i tried asking ai, and ai told me that golang's slices can't accept two or more types.

type animkeyframe[t float32 | int32] struct {
    frame  uint32
    // vector []t
    intan  []any
    outtan []any
}

func test[t float32 | int32]() {
    // t := animkeyframe[t]{}
    t := new(animkeyframe[t])
    t.vector[0] = float32(1) // error
    t.vector[1] = int(1)     // error
    t.intan[0] = float32(1)  // no error
    t.intan[1] = int(1)      // no error
}
2024-03-12 08:24:08.833000000
i'm developing powershell cmdlets in c#. i would like to define mandatory parameters that are mutually exclusive. example: parameter a, parameter b. when invoking the command, i must set only parameter a or only parameter b; i cannot set both and i cannot set none.
2024-03-28 10:07:50.233000000
you can use self.close() in every class you want.
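assuming this refers to qt widgets in python (pyqt/pyside), since the answer doesn't say, a minimal sketch:

import sys
from PyQt5.QtWidgets import QApplication, QPushButton, QWidget

class MyWindow(QWidget):
    def __init__(self):
        super().__init__()
        button = QPushButton("quit", self)
        # self.close() works from any QWidget subclass: it closes this
        # window (and ends the app if it was the last one open).
        button.clicked.connect(self.close)

app = QApplication(sys.argv)
win = MyWindow()
win.show()
sys.exit(app.exec_())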
2024-03-01 11:24:19.237000000
i'm trying to use react-quill with react-admin v4 (i'm not too happy with the richtextinput). while i easily did so on the 'create' page, i struggle with the 'edit' page. here is the simplified code snippet: import { edit, simpleform, textinput, dateinput, useinput, required } from react-admin ; import reactquill from react-quill ; import { modules } from ../utils/quillmodules ; const quillinput = ({ source }) = { const { field, } = useinput({ source }); const handlechange = (value: any) = { field.onchange(value); }; return ( reactquill modules={modules} theme= snow value={field.value || ''} onchange={handlechange} / / ); }; export const findingedit = () = ( edit simpleform warnwhenunsavedchanges quillinput source= description / dateinput source= duedate validate={[required()]} / /simpleform /edit ); but loading this page throws the following error, and i not skilled enough to understand how to handle the changes in this custom input: cannot call an event handler while rendering. at reactquill2 ([LINK]) at quillinput ([LINK]) at div at [LINK] at grid4 ([LINK]) at div at [LINK] at cardcontent2 ([LINK]) at [LINK] at form at formgroupsprovider ([LINK]) at formprovider ([LINK]) at labelprefixcontextprovider ([LINK]) at recordcontextprovider ([LINK]) at optionalrecordcontextprovider ([LINK]) at form2 ([LINK]) at simpleform ([LINK]) at div at [LINK] at paper2 ([LINK]) at [LINK] at card2 ([LINK]) at div at div at [LINK] at editview ([LINK]) at recordcontextprovider ([LINK]) at savecontextprovider ([LINK]) at editcontextprovider ([LINK]) at editbase ([LINK]) at edit ([LINK]) at findingedit at renderedroute ([LINK]) at routes ([LINK]) at resourcecontextprovider ([LINK]) at resource ([LINK]) at renderedroute ([LINK]) at routes ([LINK]) at suspense at errorboundary2 ([LINK]) at div at main at div at div at [LINK] at layout ([LINK]) at div at renderedroute ([LINK]) at routes ([LINK]) at coreadminroutes ([LINK]) at renderedroute ([LINK]) at routes ([LINK]) at coreadminui ([LINK]) at div at [LINK] at scopedcssbaseline2 ([LINK]) at adminui ([LINK]) at themeprovider ([LINK]) at themeprovider2 ([LINK]) at themeprovider3 ([LINK]) at themeprovider2 ([LINK]) at resourcedefinitioncontextprovider ([LINK]) at notificationcontextprovider ([LINK]) at i18ncontextprovider ([LINK]) at router ([LINK]) at historyrouter2 ([LINK]) at internalrouter ([LINK]) at basenamecontextprovider ([LINK]) at adminrouter ([LINK]) at queryclientprovider2 ([LINK]) at preferenceseditorcontextprovider ([LINK]) at storecontextprovider ([LINK]) at coreadmincontext ([LINK]) at admincontext ([LINK]) at admin ([LINK]) at app i read the official react-admin documentation
2024-03-11 08:41:45.520000000
context i am running an sql query over pyspark, on top of an azure blob storage datalake. the lake is partitioned with several nested keys. i ran two queries whose logic is the same to me regarding the filters, but actually query 1 returns 20 million rows, while query 2 returns 28 million rows. here is the structure of the blob storage container : . |-- company=first | |-- year=2021 | |-- year=2022 | |-- year=2023 | -- year=2024 |-- company=other | |-- year=2021 | |-- year=2022 | |-- year=2023 | -- year=2024 -- company=second |-- year=2021 |-- year=2022 |-- year=2023 -- year=2024 query 1 -- file : query1.sql select col1, col2, col3 from parquet.abfs://container[USER].dfs.core.windows.net/company=first/year=2023 where col1 in ('a','b','c') union select col1, col2, col3 from parquet.abfs://container[USER].dfs.core.windows.net/company=second/year=2023 where col1 in ('a','b','c') union select col1, col2, col3 from parquet.abfs://container[USER].dfs.core.windows.net/company=other/year=2023 where col1 in ('a','b','c') query 2 -- file : query2.sql select from parquet.abfs://container[USER].dfs.core.windows.net/company=first/year=2023 where col1 in ('a','b','c') union select from parquet.abfs://container[USER].dfs.core.windows.net/company=second/year=2023 where col1 in ('a','b','c') union select * from parquet.abfs://container[USER].dfs.core.windows.net/company=other/year=2023 where col1 in ('a','b','c') pyspark execution code here is the code used to run the spark query from the sql code from pyspark.sql import sparksession spark = sparksession.builder.getorcreate() query1 = open( query1.sql , mode= r ).read() df1 = spark.sql(query1) df1.count() gives 20 million rows query2 = open( query2.sql , mode= r ).read() df2 = spark.sql(query2) df2.count() gives 28 million rows question any idea what could be the cause of this gap ? anything related to using pyspark / pyspark over sql / the sql query itself / something in the data itself ?
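for what it's worth, here is a toy sketch (made-up data) i can run to test one hypothesis: sql union removes duplicate result rows, and it de-duplicates on the selected columns only, so a three-column select can collapse more rows than select *:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# two made-up rows that agree on col1..col3 but differ in col4
df_a = spark.createDataFrame([("a", 1, 1, "x")], ["col1", "col2", "col3", "col4"])
df_b = spark.createDataFrame([("a", 1, 1, "y")], ["col1", "col2", "col3", "col4"])

# DataFrame.union() is UNION ALL; adding .distinct() mirrors sql UNION
three_cols = df_a.select("col1", "col2", "col3").union(
    df_b.select("col1", "col2", "col3")).distinct()
all_cols = df_a.union(df_b).distinct()

print(three_cols.count())  # 1 -> duplicates collapse on three columns
print(all_cols.count())    # 2 -> col4 differs, so both rows survive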
2024-02-27 17:14:48.247000000
let's say that i have a consumer:

public class TestConsumer : IConsumer<TestMessage> { ...

that is configured to use a routing key using:

cfg.ReceiveEndpoint("testqueue", e =>
{
    e.Bind<TestMessage>(x => { x.RoutingKey = "testroutingkey"; });
});

it creates a queue called testqueue and binds it to the exchange with the routing key testroutingkey. if the consumer catches an unhandled exception, it also creates an error queue called testqueue_error and binds it to the 'fault-testmessage' exchange with no routing key. how can i configure the generated error queue to be bound to the fault exchange with a chosen routing key? i wasn't able to find out how to do this in the documentation. if it's not possible, then how can i differentiate between errors thrown by different consumers for the original exchange?
2024-02-15 15:16:36.280000000
i will use data as a name for your table. create a measure (not a calculated column): end score = calculate ( sumx ( summarize ( data, data[location], data[domain], data[phase], data[domain score] ), data[domain score] ), allexcept ( data, data[domain], data[location] ) ) result: how it works: summarize function takes all your columns except answer , and creates a virtual table where every row is unique, thus suppressing duplicates; sumx iterates over this new table, and sums up domain scores; allexcept then makes sure that results are not filtered by any fields except location and domain. note though, the better way to solve this problem is to answer why your data has these duplicates in the first place, and if they could be removed from the data model. using fancy dax to fix data issues is not a good practice.
2024-03-24 03:30:37.517000000
i have an image that will animate to grow in scale. while the growth increases underneath the header element, it persists in growing over the top of the div2 element. i tried z-index and other such things without luck :(

document.addEventListener("DOMContentLoaded", function() {
  document.getElementById("div1").style.height = window.innerHeight - document.querySelector("header").offsetHeight - document.getElementById("div2").offsetHeight + "px";
});

main { width: 100%; height: 100vh; overflow-y: hidden; }
header { background-color: yellow; padding: 20px; }
#div2 { height: 300px; background-color: blue; color: white; }
#imggrow { height: 100%; width: 100%; animation: imggrow 2s ease forwards; object-fit: cover; }
@keyframes imggrow { 0% { transform: scale(1.0); } 100% { transform: scale(1.5); } }

<header>header</header>
<main>
  <div id="div1"><img id="imggrow" src="[LINK]"></div>
  <div id="div2">1<br>2<br>3<br>4<br>5<br>6</div>
</main>
2024-02-14 01:24:48.980000000
if you have a space in the json path, use double quotes inside the single quotes; the json path is also case sensitive.

select * from openjson([USER]) with (
    injectionpoint  nvarchar(max) '$."injection point"',
    tanksize        nvarchar(max) '$."tank size"',
    targetusage     nvarchar(max) '$."target usage"',
    targetallowance nvarchar(max) '$."target allowance"',
    pricegal        nvarchar(max) '$."price/gal"',
    tanklocationid  nvarchar(max) '$."tank location id"',
    sracctmgr       nvarchar(max) '$."sr acct mgr"',
    district        nvarchar(max) '$.district',
    county          nvarchar(max) '$.county',
    liftmechanism   nvarchar(max) '$."lift mechanism"',
    opregion        nvarchar(max) '$.opregion',
    area            nvarchar(max) '$.area',
    uid             nvarchar(max) '$.uid'
) as json
2024-03-14 20:22:59.947000000