input: string (0 to 27.7k characters) | created_at: string (29 characters)
To install Next.js on Windows 7, you can follow these steps:

1. Install Node.js: first, you need Node.js installed on your system. You can download the installer from the official Node.js website ([LINK]) and follow the installation instructions. Make sure to install the LTS version, which is recommended for most users.

2. Open Command Prompt: once Node.js is installed, open the Command Prompt by pressing Win + R, typing cmd, and hitting Enter.

3. Navigate to the target directory: in the Command Prompt, use the cd command to move to the directory where you want to create your Next.js project. For example:

   cd path\to\desired\directory

4. Initialize a new Next.js project: once you're in the desired directory, initialize a new Next.js project with:

   npx create-next-app my-next-app

   Replace my-next-app with the name you want for your project.

5. Navigate to the project directory: after the project is created, move into it:

   cd my-next-app

6. Start the development server: once inside the project directory, run:

   npm run dev

   This command starts the Next.js development server, and you can access your Next.js application by opening a web browser and navigating to [LINK]
2024-03-18 06:57:32.040000000
Actually, you can set the header height from the Tabs stack options, not from the normal Stack.
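(Illustrative sketch, not from the original answer: assuming an expo-router / React Navigation tab layout, the height would go into headerStyle on the Tabs navigator's screen options rather than on the regular Stack.)

// Hypothetical _layout file; 80 is just an example value.
import { Tabs } from 'expo-router';

export default function TabsLayout() {
  return (
    <Tabs
      screenOptions={{
        // The header height is configured here, on the Tabs navigator.
        headerStyle: { height: 80 },
      }}
    />
  );
}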
2024-03-18 06:25:30.957000000
background i am trying to use react-dropzone inside of a remix application to do a standard file upload. i have followed the basic example in the react-dropzone documents, but keep running into the same issue when trying to submit the file in a normal form tag. to validate my back-end strategy, i used a standard input type= file / and it is working as expected leading me to believe my issue is with the react-dropzone implementation. questions does react-dropzone actually use the input tag just like normal or is there more processing needed to get the actual files ready for upload? could this be a mime type issue because i am using the standard remix server? code // dropzone component export default function dropzonecomponent(){ const {getrootprops, getinputprops, isdragactive, acceptedfiles} = usedropzone() const filelist = acceptedfiles.map(f = p key={f.name} {f.name} /p ) return ( div {...getrootprops({classname: 'dropzone'})} input {...getinputprops({name: 'upload-file'})} / {filelist.length 0 ? ( div classname= filelist {filelist} /div ) : ( civiewlist classname= dropzone-icon / )} {isdragactive ? ( p drop here... /p ) : ( p drag files here or button type='button' browse /button )} /div ) } // remix route export async function action({request}: actionfunctionargs){ const uploadhandler = unstablecomposeuploadhandlers( unstablecreatefileuploadhandler({ maxpartsize: [HASH], file: ({filename}) = filename, directory: './public/uploads' }), unstablecreatememoryuploadhandler() ) const formdata = await unstableparsemultipartformdata(request, uploadhandler) const submission = parsewithzod(formdata, {schema}) // do other things with file (irrelevant as the submission never makes it past the zod validation) } export default uploadroute(){ return ( // ... form method= post enctype= multipart/form-data dropzonecomponent / button type= submit upload /button /form ) } expected behavior select or drag file into dropzone area submit form just like any other remix form submission parse data in action using the uploadhandler validate data using conform and zod do further processing with file upload actual behavior select file using dropzone area (appears to work as i can list the file using acceptedfiles ) submit form as usual (using form submit button) parse data using uploadhandler action fails to store the file and the zod/conform validation also fails details i happen to be uploading .csv files. when doing this with a standard input type= file / and console logging inside of the action for the route, it shows that piece of the formdata to be of the type blob {size: xxx, type: 'text/csv'} and is then stored into the appropriate file location and passes validation. when done with react-dropzone it just shows blob {size: 0, type: 'application/octet-stream'} and is not stored in the file location. i'm assuming it is because the actual file is not being sent with the request, but the documentation is silent on how to submit the file as a standard form input element. i also tried specifying the accept parameter inside the usedropzone hook, but it didn't change the result. any help would be appreciated....
2024-03-13 20:17:31.443000000
For those who are looking for an answer, here is the quickest way to solve the problem:

[USER]("yourKey") var yourVar = "default value"
2024-02-14 05:01:24.100000000
You could make the triangular polygon slightly bigger, style it appropriately, and add the respective patch to the plot instead of drawing a special diagonal line. Then you could add the circle, clip it using the polygon, and finally clip the polygon too, using the circle:

import matplotlib.pyplot as plt

# Create the circle with radius 6
circle = plt.Circle((0, 0), 6, color='r', fill=False)

# Set up the plot (reuse the previous grid settings)
plt.figure(figsize=(8, 8))
plt.xlim(0, 10)
plt.ylim(0, 10)
plt.grid()

# Add the circle to the plot
ax = plt.gca()

# Set aspect ratio to ensure square grid cells
ax.set_aspect("equal")

# Make a polygon looking like a diagonal line on the visible part of the plot
polygon = plt.Polygon([[-1, -1], [7, 0], [0, 7]], transform=ax.transData,
                      edgecolor='b', facecolor='w', linestyle='--')

# Add the polygon and circle to the plot
ax.add_patch(polygon)
ax.add_patch(circle)

# Clip the circle using the polygon appearing as a diagonal line
circle.set_clip_path(polygon)

# Clip the polygon (exterior parts of the diagonal line) using the circle
polygon.set_clip_path(circle)

# Show the plot
plt.title("circle centered at (0,0) clipped by diagonal line")
plt.show()

The output:
2024-03-08 12:40:20.647000000
I'm studying JS and currently trying to understand objects. I can't understand why the same variable (item) can show different results. If I type console.log(item); it shows the keys of the object. If I use console.log(menu[item]); it shows the values.

let menu = {
  width: 200,
  height: 300,
  title: "my menu"
};

for (let item in menu) {
  console.log(item); // item = width, height, title
}

for (let item in menu) {
  console.log(menu[item]); // item = 200, 300, my menu
}

I couldn't find the answer on my own, probably because I don't even know how to describe my question correctly.
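(Added note, not part of the original question: in a for...in loop the variable holds the property name, and bracket notation looks up the value stored under that name. A small sketch of that distinction:)

const menu = { width: 200, height: 300, title: "my menu" };

for (const key in menu) {
  // key is the property name: "width", "height", "title"
  // menu[key] is the value stored under that name: 200, 300, "my menu"
  console.log(key, menu[key]);
}

// The same split, viewed through the built-in helpers:
console.log(Object.keys(menu));   // ["width", "height", "title"]
console.log(Object.values(menu)); // [200, 300, "my menu"]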
2024-03-04 20:17:29.670000000
What I do: I use useForm with some fields; one of them is for file uploading. It is a custom field that uses the Controller. The problem: the upload field is optional, but once the user decides to use it, I would like to prevent the form from submitting while it is uploading the file. So the question is: how do I invalidate the form or block submission for a while and then enable it again? Thank you.
2024-02-15 12:07:50.207000000
I had the same issue. Try reopening the terminal. If the issue is still there, try restarting your PC and your router.
2024-03-03 03:33:50.717000000
I have one model inside my models.py named Account. I want to create a field in which I can choose my own model's objects. For example, if I had 3 accounts {jack, harry, helena}, I want to choose which of them to follow. Essentially, I just want to create a following system: jack can follow harry and helena, and helena can also follow jack or harry. How can I do that, guys?

Account class inside models.py:

class Account(models.Model):
    username = models.CharField(max_length=100)
    following = models.ManyToManyField(Account, blank=True)
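(Illustrative sketch, not the asker's code: one common way to model a follower system is a self-referential ManyToManyField; Django accepts the string 'self' here, and symmetrical=False keeps "A follows B" from implying the reverse.)

from django.db import models

class Account(models.Model):
    username = models.CharField(max_length=100)
    # 'self' points the relation back at Account itself.
    following = models.ManyToManyField(
        'self', symmetrical=False, related_name='followers', blank=True
    )

# Usage sketch:
# jack.following.add(harry, helena)
# helena.following.add(jack)
# harry.followers.all()   # accounts that follow harry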
2024-03-05 09:10:59.403000000
Here is my state:

const icon = ref('image.jpg')

<style scoped>
background-image: url(icon);
</style>

This approach is not working. Please advise how to go about it.
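(Added sketch, assuming Vue 3 with <script setup>: a scoped style block cannot read a ref by name, but v-bind() in <style> can bind to reactive state, so one possible shape is the following.)

<script setup>
import { ref, computed } from 'vue'

const icon = ref('image.jpg')
// Build the full CSS value in JS so the style rule can use it directly.
const bgImage = computed(() => `url(${icon.value})`)
</script>

<template>
  <div class="banner"></div>
</template>

<style scoped>
.banner {
  background-image: v-bind(bgImage);
}
</style>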
2024-02-06 14:17:22.733000000
assuming that the number of columns in your table is fixed and that the number of columns in the result table are also fixed, it is possible to combine your result with an auxiliary table just to replicate the data the number of times necessary so that it is possible to later transform the multiple rows in columns, example: drop table if exists table1; create table table1 (id int, month varchar(10), col1 char(1), col2 char(1), col3 char(1), col4 char(1)); insert into table1 (id, month, col1, col2, col3, col4) values (101, 'jan', 'a', 'b', null, 'b'), (102, 'feb', 'c', 'a', 'g', 'e'), (103, 'mar', 'x', 'y', 'z', 't'), (104, 'apr', '1', '2', '3', null) ; select * from table1; your data is: +------+-------+------+------+------+------+ | id | month | col1 | col2 | col3 | col4 | +------+-------+------+------+------+------+ | 101 | jan | a | b | null | b | | 102 | feb | c | a | g | e | | 103 | mar | x | y | z | t | | 104 | apr | 1 | 2 | 3 | null | +------+-------+------+------+------+------+ 4 rows in set (0,00 sec) then, create the auxiliary table with records necessary for the number of columns you want to transform: create table amount (number int); insert into amount (number) values (1), (2), (3), (4); now, just combine the results of both tables: select amount.number, case when number = 1 then 'col1' when number = 2 then 'col2' when number = 3 then 'col3' when number = 4 then 'col4' end as description, case when number = 1 then groupconcat(case when month = 'jan' then col1 else '' end separator '') when number = 2 then groupconcat(case when month = 'jan' then col2 else '' end separator '') when number = 3 then groupconcat(case when month = 'jan' then col3 else '' end separator '') when number = 4 then groupconcat(case when month = 'jan' then col4 else '' end separator '') end as jan, case when number = 1 then groupconcat(case when month = 'feb' then col1 else '' end separator '') when number = 2 then groupconcat(case when month = 'feb' then col2 else '' end separator '') when number = 3 then groupconcat(case when month = 'feb' then col3 else '' end separator '') when number = 4 then groupconcat(case when month = 'feb' then col4 else '' end separator '') end as feb, case when number = 1 then groupconcat(case when month = 'mar' then col1 else '' end separator '') when number = 2 then groupconcat(case when month = 'mar' then col2 else '' end separator '') when number = 3 then groupconcat(case when month = 'mar' then col3 else '' end separator '') when number = 4 then groupconcat(case when month = 'mar' then col4 else '' end separator '') end as mar, case when number = 1 then groupconcat(case when month = 'apr' then col1 else '' end separator '') when number = 2 then groupconcat(case when month = 'apr' then col2 else '' end separator '') when number = 3 then groupconcat(case when month = 'apr' then col3 else '' end separator '') when number = 4 then groupconcat(case when month = 'apr' then col4 else '' end separator '') end as apr from table1 cross join amount group by amount.number ; and the result is: +--------+-------------+------+------+------+------+ | number | description | jan | feb | mar | apr | +--------+-------------+------+------+------+------+ | 1 | col1 | a | c | x | 1 | | 2 | col2 | b | a | y | 2 | | 3 | col3 | | g | z | 3 | | 4 | col4 | b | e | t | | +--------+-------------+------+------+------+------+ 4 rows in set (0,00 sec)
2024-03-15 14:15:22.673000000
the difference is due to very different gpu utilization in each case. lets see if we can get a better understanding of what your code is doing by calculating all relevant data (then we can think through it carefully). in your first launch ( 1d grid), iy starts out at zero for all threads, and the grid stride in y is 1. therefore the outer loop iterates dimy times. on a sm89 device, on cuda 12.2, the max blocks occupancy api returns 12 for me, which makes sense. 12x128 = 1536, the maximum possible threads on a sm89 sm. your rtx 4090 has 128 sms. so your total blocks launched is 12x128, or 1536 blocks. the total number of threads in that case is 1536x128 = 196,608. this effectively is the grid width in x for the first launch. the grid width in y , as we have already discovered, is 1. so how will your 2d grid-stride loop in the kernel behave? on the first iteration in iy loop, you will have 196,608 threads in x , but you only need 8x1024=8192 threads to cover the dataset width. so of your 196,608 launched threads, only 8192 will do any useful work for that iteration of iy , and the remainder will simply hit the ix dimx test and do nothing. that doesn't sound like a sensible use of resources. of course this same behavior manifests on every iteration of iy : 8192 threads do something useful, the others don't. so if we naively applied that to the gpu (since you have sized your grid to match the gpu), then we could say that we are using 8192/196,608 of the gpu in a useful fashion, ie. we are using ~4.2% of the gpu available capability. sure, you might say, but doesn't the grid stride loop help there? don't those non-useful blocks just iterate quickly through the iy extent, and retire? yes, they do, but they leave behind empty space on the gpu. the work that could be done in the actual dataset does not get scheduled, because it is already assigned to other already scheduled threads/blocks. so you really have assigned just 4.2% of the thread capacity of your gpu to work on the problem. in the second launch case ( 2d grid), the situation is different. the patch that a single grid stride works on is not entirely linear in x. it is a two dimensional patch, consisting of threads in x (128), and blocks in y (1536). i'm not going to work through all the same arithmetic, but this different shape patch means that in many/most iterations of your grid stride, the entire grid (all threads) are doing useful work. so your gpu utilization is much higher in this case. if this treatment is valid, then we should observe a significant difference in average sm utilization. i don't have a rtx4090 to work on, but i do have a sm89 device, an l4 gpu, with 58 sms. so it still has the mismatched shape issue, but not quite as extreme as your rtx4090 with 128 sms. nsight compute in the sol section includes a measurement of sm utilization, called compute (sm) throughput in the cli output. 
here is what it looks like running your code on my l4 gpu with 58 sms: cat t139.cu #include iostream #include time.h #include sys/time.h #define usecpsec 1000000ull unsigned long long dtimeusec(unsigned long long start=0){ timeval tv; gettimeofday(&tv, 0); return ((tv.tv_sec*usecpsec)+tv.tvusec)-start; } global void kernela(float *g_data, int dimx, int dimy, int niterations) { for (int iy = blockidx.y blockdim.y + threadidx.y; iy dimy; iy += blockdim.y griddim.y) { for (int ix = blockidx.x blockdim.x + threadidx.x; ix dimx; ix += blockdim.x griddim.x) { int idx = iy * dimx + ix; float value = gdata[idx]; for (int i = 0; i niterations; i++) { value += fsqrtrn(logf(value) + 1.f); } g_data[idx] = value; } } } void launchkernel(float * ddata, int dimx, int dimy, int niterations, int type=0) { cudadeviceprop prop; cudagetdeviceproperties(&prop, 0); int numsms = prop.multiprocessorcount; int numthreads = 128; int numblocks; cudaoccupancymaxactiveblockspermultiprocessor(&numblocks, kernela, numthreads, 0); numblocks *= numsms; std::cout num blocks: numblocks std::endl; dim3 block(numthreads); dim3 grid; if (type) grid=dim3(1,numblocks); else grid=dim3(numblocks); kernela grid, block (d_data, dimx, dimy, niterations); } int main(){ int niterations=5; int dimx=81024; int dimy=81024; float *ddata; cudamalloc(&ddata, sizeof(*d_data)dimxdimy); launchkernel(ddata, dimx, dimy, niterations, 0); cudadevicesynchronize(); unsigned long long dt = dtimeusec(0); launchkernel(ddata, dimx, dimy, niterations, 0); cudadevicesynchronize(); dt = dtimeusec(dt); std::cout elapsed 0: dt/(float)usecpsec s std::endl; dt = dtimeusec(0); launchkernel(ddata, dimx, dimy, niterations, 1); cudadevicesynchronize(); dt = dtimeusec(dt); std::cout elapsed 1: dt/(float)usecpsec s std::endl; } nvcc -o t139 t139.cu -arch=sm89 ncu ./t139 ==prof== connected to process 137887 (/root/bobc/t139) num blocks: 696 ==prof== profiling kernela - 0: 0%....50%....100% - 9 passes num blocks: 696 ==prof== profiling kernela - 1: 0%....50%....100% - 9 passes elapsed 0: 0.478202s num blocks: 696 ==prof== profiling kernela - 2: 0%....50%....100% - 9 passes elapsed 1: 0.342003s ==prof== disconnected from process 137887 [137887] t139[USER].0.0.1 kernela(float *, int, int, int) (696, 1, 1)x(128, 1, 1), context 1, stream 7, device 0, cc 8.9 section: gpu speed of light throughput ----------------------- ------------- ------------ metric name metric unit metric value ----------------------- ------------- ------------ dram frequency cycle/nsecond 6.25 sm frequency cycle/usecond 827.17 elapsed cycles cycle 14,037,832 memory throughput % 12.69 dram throughput % 12.69 duration msecond 16.77 l1/tex cache throughput % 3.07 l2 cache throughput % 3.12 sm active cycles cycle 5,878,996.60 compute (sm) throughput % 17.66 ----------------------- ------------- ------------ opt this kernel exhibits low compute throughput and memory bandwidth utilization relative to the peak performance of this device. achieved compute throughput and/or memory bandwidth below 60.0% of peak typically indicate latency issues. look at scheduler statistics and warp state statistics for potential reasons. 
section: launch statistics -------------------------------- --------------- --------------- metric name metric unit metric value -------------------------------- --------------- --------------- block size 128 function cache configuration cacheprefernone grid size 696 registers per thread register/thread 22 shared memory configuration size kbyte 32.77 driver shared memory per block kbyte/block 1.02 dynamic shared memory per block byte/block 0 static shared memory per block byte/block 0 threads thread 89,088 waves per sm 1 -------------------------------- --------------- --------------- section: occupancy ------------------------------- ----------- ------------ metric name metric unit metric value ------------------------------- ----------- ------------ block limit sm block 24 block limit registers block 21 block limit shared mem block 32 block limit warps block 12 theoretical active warps per sm warp 48 theoretical occupancy % 100 achieved occupancy % 35.36 achieved active warps per sm warp 16.97 ------------------------------- ----------- ------------ opt estimated speedup: 64.64% this kernel's theoretical occupancy is not impacted by any block limit. the difference between calculated theoretical (100.0%) and measured achieved occupancy (35.4%) can be the result of warp scheduling overheads or workload imbalances during the kernel execution. load imbalances can occur between warps within a block as well as across blocks of the same kernel. see the cuda best practices guide ([LINK]) for more details on optimizing occupancy. kernel_a(float *, int, int, int) (696, 1, 1)x(128, 1, 1), context 1, stream 7, device 0, cc 8.9 section: gpu speed of light throughput ----------------------- ------------- ------------ metric name metric unit metric value ----------------------- ------------- ------------ dram frequency cycle/nsecond 6.15 sm frequency cycle/usecond 798.00 elapsed cycles cycle 14,171,756 memory throughput % 12.32 dram throughput % 12.32 duration msecond 17.54 l1/tex cache throughput % 3.05 l2 cache throughput % 3.08 sm active cycles cycle 5,926,135.90 compute (sm) throughput % 17.69 ----------------------- ------------- ------------ opt this kernel exhibits low compute throughput and memory bandwidth utilization relative to the peak performance of this device. achieved compute throughput and/or memory bandwidth below 60.0% of peak typically indicate latency issues. look at scheduler statistics and warp state statistics for potential reasons. 
section: launch statistics -------------------------------- --------------- --------------- metric name metric unit metric value -------------------------------- --------------- --------------- block size 128 function cache configuration cacheprefernone grid size 696 registers per thread register/thread 22 shared memory configuration size kbyte 32.77 driver shared memory per block kbyte/block 1.02 dynamic shared memory per block byte/block 0 static shared memory per block byte/block 0 threads thread 89,088 waves per sm 1 -------------------------------- --------------- --------------- section: occupancy ------------------------------- ----------- ------------ metric name metric unit metric value ------------------------------- ----------- ------------ block limit sm block 24 block limit registers block 21 block limit shared mem block 32 block limit warps block 12 theoretical active warps per sm warp 48 theoretical occupancy % 100 achieved occupancy % 35.28 achieved active warps per sm warp 16.93 ------------------------------- ----------- ------------ opt estimated speedup: 64.72% this kernel's theoretical occupancy is not impacted by any block limit. the difference between calculated theoretical (100.0%) and measured achieved occupancy (35.3%) can be the result of warp scheduling overheads or workload imbalances during the kernel execution. load imbalances can occur between warps within a block as well as across blocks of the same kernel. see the cuda best practices guide ([LINK]) for more details on optimizing occupancy. kernel_a(float *, int, int, int) (1, 696, 1)x(128, 1, 1), context 1, stream 7, device 0, cc 8.9 section: gpu speed of light throughput ----------------------- ------------- ------------ metric name metric unit metric value ----------------------- ------------- ------------ dram frequency cycle/nsecond 6.24 sm frequency cycle/usecond 806.90 elapsed cycles cycle 2,160,446 memory throughput % 79.10 dram throughput % 79.10 duration msecond 2.64 l1/tex cache throughput % 16.89 l2 cache throughput % 20.03 sm active cycles cycle 2,083,805.60 compute (sm) throughput % 74.22 ----------------------- ------------- ------------ inf compute and memory are well-balanced: to reduce runtime, both computation and memory traffic must be reduced. check both the compute workload analysis and memory workload analysis sections. section: launch statistics -------------------------------- --------------- --------------- metric name metric unit metric value -------------------------------- --------------- --------------- block size 128 function cache configuration cacheprefernone grid size 696 registers per thread register/thread 22 shared memory configuration size kbyte 32.77 driver shared memory per block kbyte/block 1.02 dynamic shared memory per block byte/block 0 static shared memory per block byte/block 0 threads thread 89,088 waves per sm 1 -------------------------------- --------------- --------------- section: occupancy ------------------------------- ----------- ------------ metric name metric unit metric value ------------------------------- ----------- ------------ block limit sm block 24 block limit registers block 21 block limit shared mem block 32 block limit warps block 12 theoretical active warps per sm warp 48 theoretical occupancy % 100 achieved occupancy % 90.75 achieved active warps per sm warp 43.56 ------------------------------- ----------- ------------ inf this kernel's theoretical occupancy is not impacted by any block limit. 
we see that in the first and second launch, both of which i am issuing as type 0 , ie. your 1d case, the sm throughput is about 17%. in the 3rd launch, which is my type 1 corresponding to your 2d case, the sm throughput is about 75%. furthermore the nsight compute rules or expert system output in the 1d case mentions the following: this kernel exhibits low compute throughput and memory bandwidth utilization relative to the peak performance of this device. the low compute throughput is directly related to the compute (sm) throughput measurement. that notation is not given in the 2d case, and instead it states: compute and memory are well-balanced: my l4 gpu only has 58 sms compared to your 128, so the shape mismatch issue while severe, is not quite as bad as on your rtx 4090. i would expect the above comparison to be even worse (the 1d case probably has even lower sm throughput) on your rtx 4090.
2024-02-17 17:12:12.907000000
I had already upgraded my pandas to version 2.2.0. I need to downgrade pandas to get .iteritems back. When I downgrade with the following command:

pip install pandas==1.5.3

it shows me the error below:

Failed to build pandas
ERROR: Could not build wheels for pandas, which is required to install pyproject.toml-based projects

Reference - error image

Can anyone help me avoid this error?
2024-02-16 13:18:45.850000000
I am trying to learn Riverpod, so this may be a stupid question, but here is what I am struggling with. I have an AuthService class that extends Notifier. This class works with Firebase Auth to sign the user in and out. I am trying to monitor the FirebaseAuth.authStateChanges() event so that I can display the correct widget to the user based on whether the user is authenticated or not. Here is how my AuthService class looks:

class AuthService extends Notifier<AppUser> {
  AuthService(this.firebaseAuth) {
    // empty constructor body
  }

  AppUser? appUserFromFirebaseUser(User? user) {
    if (null != user) {
      return AppUser(uid: user.uid);
    } else {
      return null;
    }
  }

  Stream<AppUser?> get user {
    return firebaseAuth.authStateChanges().map((user) {
      return appUserFromFirebaseUser(user);
    });
  }
}

The problem I am facing is that Visual Studio Code shows a warning for the Stream<AppUser?> getter saying:

Notifiers should not have public properties/getters. Instead, all their public API should be exposed through the state property.

which tells me I must be doing something wrong in the way I am trying to monitor the FirebaseAuth.authStateChanges() stream. Can someone please tell me how to do this the right way?
2024-02-28 17:00:59.270000000
I need to update a .NET assembly in my project. I've checked the decompiled code and saw that one method in the new version of the assembly has time-consuming logic. For example, assembly X.1 has some internal class Y. I need to update the assembly to version X.2, but there are time-consuming changes in the Y class. Example:

X.1

internal class Y {
    public void Z() {
        Method1();
        Method2();
    }
}

X.2

internal class Y {
    public void Z() {
        Method1();
        Method2();
        TimeConsumingMethod3();
    }
}

I need to run method Z without TimeConsumingMethod3(). Is it possible to modify only one public method in an internal class from an assembly by reflection, or in another way?
2024-02-23 13:25:43.143000000
What's the shortest way to build an array in Rust as follows:

let byte: u8 = 1;
let foo: u32 = 5;
// how do I do this?
let out: [u8; 5] = byte + foo.to_be_bytes();

In other words, I want to concatenate a single byte and 4 bytes together to form a 5-byte array.
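(One possible sketch, added for illustration rather than taken from the post: build the array and copy the pieces into place.)

fn main() {
    let byte: u8 = 1;
    let foo: u32 = 5;

    // Fill a zeroed 5-byte array: the single byte first, then the 4 big-endian bytes.
    let mut out = [0u8; 5];
    out[0] = byte;
    out[1..].copy_from_slice(&foo.to_be_bytes());

    assert_eq!(out, [1, 0, 0, 0, 5]);
}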
2024-02-18 08:05:44.163000000
I created a web app and enabled a system-assigned identity.

Check whether the user has a mailbox:

Connect-ExchangeOnline
Get-Mailbox -Identity user[USER].onmicrosoft.com

When I tried to run the Add-MailboxPermission command by passing the ObjectId, I got the same error:

Add-MailboxPermission -Identity user[USER].onmicrosoft.com -User <ManagedIdentityObjectId> -AccessRights FullAccess

To resolve the error, you need to explicitly create the service principal by passing the ObjectId and the ApplicationId like below. Search for the system managed identity enterprise application, then run:

New-ServicePrincipal -AppId <EnterpriseApplicationApplicationId> -ServiceId <EnterpriseApplicationObjectId>

Now, to add the mailbox permission to the managed identity, copy the ObjectId from the response and pass it in the -User parameter:

Add-MailboxPermission -Identity user[USER].onmicrosoft.com -User <ObjectId> -AccessRights FullAccess

You can verify by using the command below:

Get-MailboxPermission -Identity user[USER].onmicrosoft.com -User <EnterpriseApplicationObjectId>
2024-02-28 10:25:42.337000000
I am encountering some sporadic crashes when calling into a C++ library (not under my control) from C#. The internal code of the C++ library is roughly like this:

#include <string>
#include <memory>

class Holder {
public:
    static std::shared_ptr<Holder> create() {
        return std::make_shared<Holder>();
    }

    const std::string& getRef() {
        return value;
    }

private:
    std::string value;
};

extern "C" void getRef(std::string& v) {
    auto h = Holder::create();
    v = h->getRef();
}

getRef is called from C# via P/Invoke with no special marshalling. Is this guaranteed to work? h goes out of scope at the end of getRef, but a reference to its member value is held and passed on. Is this reference still valid when being accessed on the C# side?
2024-03-26 13:41:13.813000000
A fix for this is in Insiders v1.88 (release notes link: [LINK]). You can change your tab names to something like /parentFolder index by using this value:

${dirname} ${filename}

in the setting Workbench > Editor: Custom Labels: Patterns (setting id workbench.editor.customLabels.patterns). See more at "Provide API to access and change editor tab labels".
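(Small illustrative sketch; the glob key here is my own example, not from the answer. The setting maps a file glob to a label template in settings.json:)

{
  // Show tabs for any index.* file as "<parent folder> <file name>".
  "workbench.editor.customLabels.patterns": {
    "**/index.*": "${dirname} ${filename}"
  }
}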
2024-03-20 17:59:44.617000000
I'm trying to build some navigation animation on my own with the Mapbox Maps SDK and without the Navigation SDK. I want to align the map bearing and the puck direction (where the arrow points) with the route line I draw on the map. I use followPuckViewportState so that my map center can move along with the user's current location. I am able to change the map bearing by setting the bearing on viewport.options, but the puck is still pointing to where my phone is physically pointing. I've looked through the docs but didn't find anything that allows me to set the direction of the puck; puckBearing on the puck manager only allows heading or course, but I want to set a specific bearing value. I've already checked out "How to fix the Mapbox navigation puck's position and direction" ([LINK]) but didn't find a bearingSmoothing equivalent in the iOS SDK. Would appreciate any help!
2024-03-14 22:26:15.847000000
I wrote an API that is able to do almost exactly what you're asking for: [LINK]. It works as an HTTP API proxy instead of a native Python module, but it would be easily adaptable if you want.
2024-03-07 05:53:28.330000000
If you want to make it smaller, remove the padding. I recommend adding py-3 to the div above the header; adjust the py-3 (padding on the y axis) to lower values to make it even smaller.

<div class="container fixed-top bg-white py-3">
  <header class="d-flex flex-wrap justify-content-center">

Without padding, how it looks:

<style>
  * {
    border: 0.2px solid red;
  }
</style>

Use that bit of code inside your header to see the containers/layouts separately; then you will understand what's happening, at least that's how I do it. Once you see the sections properly you can figure it out. And once you're done, remove the border.
2024-03-19 01:20:19.277000000
Adding to P.T.'s excellent answer: you can also get htop to report this value by going to Setup > Columns > Available Columns and then selecting M_SWAP. This value most likely has the same limitations regarding shared pages that P.T. describes.
2024-03-28 08:29:51.923000000
to enable netty you have to use reactive gateway in your pom.xml file instead of the mvc gateway ?xml version= 1.0 encoding= utf-8 ? project xmlns= [LINK] xmlns:xsi= [LINK] xsi:schemalocation= [LINK] [LINK] modelversion 4.0.0 /modelversion parent groupid org.springframework.boot /groupid artifactid spring-boot-starter-parent /artifactid version 3.2.3 /version relativepath/ !-- lookup parent from repository -- /parent groupid com.fmsia2.api.gateway /groupid artifactid apigateway /artifactid version 0.0.1-snapshot /version name apigateway /name description demo project for spring boot /description properties java.version 17 /java.version spring-cloud.version 2023.0.0 /spring-cloud.version /properties dependencies dependency groupid org.springframework.boot /groupid artifactid spring-boot-starter-webflux /artifactid /dependency dependency groupid org.springframework.cloud /groupid artifactid spring-cloud-starter /artifactid /dependency dependency groupid org.springframework.cloud /groupid artifactid spring-cloud-starter-gateway /artifactid /dependency dependency groupid org.springframework.cloud /groupid artifactid spring-cloud-starter-netflix-eureka-client /artifactid /dependency dependency groupid org.projectlombok /groupid artifactid lombok /artifactid optional true /optional /dependency dependency groupid org.springframework.boot /groupid artifactid spring-boot-starter-test /artifactid scope test /scope /dependency dependency groupid io.projectreactor /groupid artifactid reactor-test /artifactid scope test /scope /dependency /dependencies dependencymanagement dependencies dependency groupid org.springframework.cloud /groupid artifactid spring-cloud-dependencies /artifactid version ${spring-cloud.version} /version type pom /type scope import /scope /dependency /dependencies /dependencymanagement build plugins plugin groupid org.springframework.boot /groupid artifactid spring-boot-maven-plugin /artifactid configuration excludes exclude groupid org.projectlombok /groupid artifactid lombok /artifactid /exclude /excludes /configuration /plugin /plugins /build /project
2024-03-09 12:05:38.650000000
I'm trying to scrape the authors from this URL: [LINK] and it is grabbing only 3 of the writers, with the fourth being truncated into an ellipsis in the output. Here is the code:

import csv
import requests
from bs4 import BeautifulSoup

# URL
url = '[LINK]'

# Send a GET request to the URL
response = requests.get(url)

# Parse the HTML content
soup = BeautifulSoup(response.content, 'html.parser')

authors = soup.find("meta", {"name": "authors"})['content']
print(authors)

This is the output:

datian bi | jingyuan kong | ... | junli yang

Why why why? Thanks!
2024-03-15 22:29:02.090000000
i am learning redux, so wrote below logic by following online resources, but it throws this error: /home/sssawant/sarvesh/notesfullstack/frontend/reactnotes/reduxdemo/nodemodules/redux/dist/cjs/redux.cjs:407 const chain = middlewares.map((middleware) = middleware(middlewareapi)); typeerror: middleware is not a function at /home/sssawant/sarvesh/notesfullstack/frontend/reactnotes/reduxdemo/nodemodules/redux/dist/cjs/redux.cjs:407:51 at array.map () at /home/sssawant/sarvesh/notesfullstack/frontend/reactnotes/reduxdemo/nodemodules/redux/dist/cjs/redux.cjs:407:31 at createstore (/home/sssawant/sarvesh/notesfullstack/frontend/reactnotes/reduxdemo/nodemodules/redux/dist/cjs/redux.cjs:133:33) at object. (/home/sssawant/sarvesh/notesfullstack/frontend/reactnotes/reduxdemo/asyncactions2.js:79:15) at module.compile (node:internal/modules/cjs/loader:1376:14) at module.extensions..js (node:internal/modules/cjs/loader:1435:10) at module.load (node:internal/modules/cjs/loader:1207:32) at module.load (node:internal/modules/cjs/loader:1023:12) at function.executeuserentrypoint [as runmain] (node:internal/modules/runmain:135:12) node.js v20.11.0 i have below versions for dependencies. node version - v20.11.0 current dependencies axios[USER].6.7 redux-logger[USER].0.6 redux-thunk[USER].1.0 redux[USER].0.1 my code: const redux = require( redux ); const { thunk } = require( redux-thunk ); const createstore = redux.createstore; const applymiddleware = redux.applymiddleware; const thunkmiddleware = require( redux-thunk ).default const axios = require( axios ); const initialstate = { loading: false, users: [], error: , }; const fetchusersrequest = fetchusersrequest ; const fetchuserssuccess = fetchuserssuccess ; const fetchusersfailure = fetchusersfailure ; const fetchusersrequest = () = { return { type: fetchusersrequest, }; }; const fetchuserssuccess = (users) = { return { type: fetchuserssuccess, payload: users, }; }; const fetchusersfailure = (error) = { return { type: fetchusersfailure, payload: error, }; }; const reducer = (curstate, action) = { switch (action.type) { case fetchusersrequest: return { ...curstate, loading: true, }; case fetchuserssuccess: return { ...curstate, loading: false, users: action.payload, error: , }; case fetchusers_failure: return { ...curstate, loading: false, users: [], error: action.payload, }; } }; const fetchusers = () = { return function (dispatch) { dispatch(fetchusersrequest()); axios .get( [LINK] ) .then((rsponse) = { const users = response.data.map((user) = user.id); dispatch(fetchuserssuccess(users)); }) .catch((error) = { dispatch(fetchusersfailure(error.message)); }); }; }; const store = createstore(reducer, applymiddleware(thunkmiddleware)); store.subscribe(() = {console.log(store.getstate())}); store.dispatch(fetchusers()) why am i getting this error? is it a syntactical error, or due to different versions for node and redux-thunk?
2024-02-09 09:17:45.167000000
i am looking for a public list of world banks i don't need branch offices and full addresses - but the name and the website. i think of data ... xml, csv ... with these fields: bank name, country name or country code (iso two letters) website: optional: city of bank headquarters for each bank, one record per country of presence. btw: especially small banks are interesting i have found a great page that is very very comprehensive - see - it has got 9000 banks in europe: see from a to z: [LINK]> a [LINK] [LINK] [LINK] b [LINK] u [LINK] [LINK] see a detailed page: [LINK]> i need to have this data contacts mitteldorfstrasse 48, 9524, zuzwil sg, switzerland 071 944 15 51071 944 27 52 [LINK]/ approach: my approach is to use bs4, request and pandas btw : perhaps we can count form zero to 100 000 - in order to get all the bank that are stored inthe db: see a detailed page: [LINK]> i run on colab this: import requests from bs4 import beautifulsoup import pandas as pd function to scrape bank data from my url def scrapebankdata(url): response = requests.get(url) soup = beautifulsoup(response.content, html.parser ) here we try to find bank name, country, and website bankname = soup.find( h1 , class= entry-title ).text.strip() country = soup.find( span , class= country-name ).text.strip() website = soup.find( a , class= site-url ).text.strip() print(f scraped: {bankname}, {country}, {website} ) return { bank name : bankname, country : country, website : website} the list of urls for scraping bank data by country urls = [ [LINK] , [LINK] , [LINK] , we could add more urls for other countries as needed ] list to store bank data bankdata = [] iterate through the urls and scrape bank data for url in urls: response = requests.get(url) soup = beautifulsoup(response.content, html.parser ) banklinks = soup.findall( div , class= search-bank ) for banklink in banklinks: bankurl = [LINK] + banklink.find( a ).get( href ) bankinfo = scrapebankdata(bankurl) bankdata.append(bankinfo) and now we convert the list of dictionaries to a pandas dataframe df = pd.dataframe(bankdata) subsequently we print the dataframe print(df) see what is getting back empty dataframe columns: [] index: [] well it seems to me that there is an issue with the scraping process. i tried some different approachs by inspecting the elements on the webpage again and again to ensure i am extracting the correct information on the page. well should also print out some extra debug information to help diagnose the problem. update: good evening dear [USER] m. and [USER] thank you very much for your comments and for sharing your ideas: food for thoughts: as for the selenium - i think that this is a good idea - and for running it (selenium) on google-colab i have learned from jacob padilla [USER] / [USER]:[HASH] :: see jacobs page: [LINK]> with the google-colab-selenium: [LINK]> and the default options: the google-colab-selenium package is preconfigured with a set of default options optimized for google colab environments. these defaults include: • --headless: runs chrome in headless mode (without a gui). • --no-sandbox: disables the chrome sandboxing feature, necessary in the colab environment. • --disable-dev-shm-usage: prevents issues with limited shared memory in docker containers. • --lang=en: sets the language to english. 
well i think that this approach is worth to think about: so we can go like so: were using selenium in google colab to bypass the cloudflare (that you mentioned - eternalwhite) blocking and scrape the desired data can be nice but feasible approach. here some thoughts on a step-by-step aproach - and how to set it up using the google-colab-selenium package by jacob padilla: install google-colab-selenium: you can install the google-colab-selenium package using pip: diff !pip install google-colab-selenium we also need to install selenium: diff !pip install selenium import necessary libraries: import the required libraries in your colab notebook: from selenium import webdriver from selenium.webdriver.chrome.service import service from selenium.webdriver.common.by import by from selenium.webdriver.chrome.options import options from google.colab import output import time and then we need to setup selenium webdriver: configure the chrome webdriver with the necessary options: set up options options = webdriver.chromeoptions() options.addargument('--headless') options.addargument('--no-sandbox') options.addargument('--disable-dev-shm-usage') create a new instance of the chrome driver driver = webdriver.chrome('chromedriver', options=options) here we re going to define function for scraping: we define a function to scrape bank data using selenium: def scrapebankdatawithselenium(url): driver.get(url) time.sleep(5) first of all - we let the page load completely bankname = driver.findelement(by.classname, 'entry-title').text.strip() country = driver.findelement(by.classname, 'country-name').text.strip() website = driver.findelement(by.classname, 'site-url').text.strip() print(f scraped: {bankname}, {country}, {website} ) return { bank name : bankname, country : country, website : website} and then we could go and scrape data: now we can scrape the data using the defined function: list of urls for scraping bank data by country urls = [ [LINK] , [LINK] , [LINK] , hmm - we could add more urls for other countries as needed ] list to store bank data bankdata = [] now we can iterate through the urls and scrape bank data for url in urls: bankdata.append(scrapebankdatawithselenium(url)) and now we can convert the list of dictionaries to a pandas dataframe df = pd.dataframe(bankdata) print the dataframe print(df) and - in one single shot: first of all we need to install all the required packages - for example the packages of jakobs selenium approach etc etx: !pip install google-colab-selenium !apt-get update to update ubuntu to correctly run apt install !apt install chromium-chromedriver !cp /usr/lib/chromium-browser/chromedriver /usr/bin and afterwards we need to import all the necessary libraries from selenium import webdriver from selenium.webdriver.common.by import by import pandas as pd import time set up options for chrome webdriver chromeoptions = webdriver.chromeoptions() chromeoptions.addargument('--headless') chromeoptions.addargument('--no-sandbox') chromeoptions.addargument('--disable-dev-shm-usage') chromeoptions.addargument('--remote-debugging-port=9222') add this option create a new instance of the chrome driver driver = webdriver.chrome('chromedriver', options=chromeoptions) define function to scrape bank data using selenium def scrapebankdatawithselenium(url): driver.get(url) time.sleep(5) let the page load completely bankname = driver.findelement(by.classname, 'entry-title').text.strip() country = driver.findelement(by.classname, 'country-name').text.strip() website = 
driver.findelement(by.classname, 'site-url').text.strip() print(f scraped: {bankname}, {country}, {website} ) return { bank name : bankname, country : country, website : website} list of urls for scraping bank data by country urls = [ [LINK] , [LINK] , [LINK] , add more urls for other countries as needed ] list to store bank data bankdata = [] iterate through the urls and scrape bank data for url in urls: bankdata.append(scrapebankdatawithselenium(url)) convert the list of dictionaries to a pandas dataframe df = pd.dataframe(bankdata) print the dataframe print(df) close the webdriver driver.quit() see what i have got back - on google-colab: typeerror traceback (most recent call last) ipython-input-4-[HASH] in cell line: 21 () 19 20 create a new instance of the chrome driver --- 21 driver = webdriver.chrome('chromedriver', options=chromeoptions) 22 23 define function to scrape bank data using selenium typeerror: webdriver.init__() got multiple values for argument 'options' update : btw if we want to gather data form all the countries we can go like so: [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] and herzegovina [LINK] virgin islands [LINK] [LINK] islands [LINK] [LINK] [LINK] [LINK] republic [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] of man [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] [LINK] marino [LINK] [LINK] [LINK] [LINK] [LINK]= sweden [LINK] [LINK] [LINK] and caicos islands [LINK] [LINK] kingdom
2024-03-08 11:18:55.543000000
this question is a follow-up to my <a href="[LINK] question</a> which was answered. goal enable handwriting to text conversion in a shiny app while using nested modules. this works three files are relevant: handwritingforshiny2.js (placed in www folder) modhandwriting.r (placed in r folder; this is a module that utilizes the js function) app.r (placed in main app folder) running app.r shows a canvas where you can write with a stylus and it is converted to text when the send button is clicked. there also undo, redo, and clear canvas buttons. this does not work i then introduced another module ( modsimple.r ) that utilizes the module modhandwriting.r . then i created another app file called app2.r where i used the modsimple.r . but this time the buttons don't work. although, i can still write in the canvas. code here is all the code: handwritingforshiny2.js $(document).ready(function() { var handwritingcanvas = {}; shiny.addcustommessagehandler( initialize , function(id) { var canvas = document.getelementbyid(id + '-handwritingcanvas'); handwritingcanvas[id] = new handwriting.canvas(canvas, 3); // enable undo and redo handwritingcanvas[id].setundoredo(true, true); console.log( this is initialize function ); }); shiny.addcustommessagehandler( clearcanvas , function(id) { handwritingcanvas[id].erase(); console.log( this is clearcanvas function ); }); shiny.addcustommessagehandler( undocanvas , function(id) { handwritingcanvas[id].undo(); }); shiny.addcustommessagehandler( redocanvas , function(id) { handwritingcanvas[id].redo(); }); shiny.addcustommessagehandler( sendcanvas , function(id) { var trace = handwritingcanvas[id].trace; var options = { language: 'en', numofreturn: 1 }; var callback = function(result, err) { if (err) { console.error(err); } else { shiny.setinputvalue(id + '-recognizedtext', result[0]); } }; handwriting.recognize(trace, options, callback); }); }); modhandwriting.r handwritingui - function(id) { ns - ns(id) fluidrow( column( width = 4, actionbutton(ns( clearcanvas ), clear canvas ), actionbutton(ns( undo ), undo ), actionbutton(ns( redo ), redo ), actionbutton(ns( send ), send ), textareainput(ns( manualtext ), enter text , value = ), tags$canvas(id = ns( handwritingcanvas ), width = 400px , height = 200px ) ) ) } handwritingserver - function(id) { moduleserver( id, function(input, output, session) { observe({ session$sendcustommessage( initialize , message = id) }) observeevent(input$clearcanvas, { session$sendcustommessage( clearcanvas , message = id) }) observeevent(input$send, { session$sendcustommessage( sendcanvas , message = id) }) observeevent(input$undo, { print( undo button clicked ) diagnostic output session$sendcustommessage( undocanvas , message = id) }) observeevent(input$redo, { session$sendcustommessage( redocanvas , message = id) }) observeevent(input$recognizedtext, { if (!is.null(input$recognizedtext)) { updatetextareainput(session, manualtext , value = input$recognizedtext) } }) observe({ session$sendcustommessage( initcanvas , message = null) }) } ) } app.r library(shiny) ui - fluidpage( tags$head( tags$script(src = handwriting.js ), tags$script(src = handwriting.canvas.js ), tags$script(src = handwritingforshiny2.js ), tags$style(html( #handwritingcanvas { border: 1px solid #000; margin-top: 10px; margin-bottom: 10px; } )) ), titlepanel( handwriting recognition ), handwritingui( module1 ) ) server - function(input, output, session) { handwritingserver( module1 ) } shinyapp(ui, server) modsimple.r modsimpleui - function(id) { ns - ns(id) taglist( 
handwritingui(ns( foo )) ) } modsimpleserver - function(id) { moduleserver( id, function(input, output, session) { ns - session$ns handwritingserver(ns( foo )) } ) } app2.r library(shiny) ui - fluidpage( tags$head( tags$script(src = handwriting.js ), tags$script(src = handwriting.canvas.js ), tags$script(src = handwritingforshiny2.js ), tags$style(html( #handwritingcanvas { border: 1px solid #000; margin-top: 10px; margin-bottom: 10px; } )) ), titlepanel( handwriting recognition ), modsimpleui( module1 ) ) server - function(input, output, session) { modsimpleserver( module1 ) } shinyapp(ui, server) debugging the js code: i have put console.log statements in the js code. i see both prints with app.r but only this is initialize function with app2.r . somehow, the clearcanvas function is not triggered (and so are the other button functions). this seems to be a namespace issue. i'd appreciate any help. edit changed the modsimple.r by removing ns in the server part: modsimpleui - function(id) { ns - ns(id) taglist( handwritingui(ns( foo )) ) } modsimpleserver - function(id) { moduleserver( id, function(input, output, session) { ns - session$ns handwritingserver( foo ) } ) } but with this change, i lost the ability to write inside the canvas. also, the console shows this error: shinyapp.ts:442 typeerror: cannot read properties of null (reading 'getcontext') at new handwriting.canvas (handwriting.canvas.js:19:24) at e. anonymous (handwritingforshiny2.js:6:29) at e. anonymous (shinyapp.ts:866:40) at m (shinyapp.ts:5:1357) at generator. anonymous (shinyapp.ts:5:4174) at generator.next (shinyapp.ts:5:2208) at f0 (shinyapp.ts:6:99) at s (shinyapp.ts:7:194) at shinyapp.ts:7:364 at new promise ( anonymous ) (anonymous) @ shinyapp.ts:442 shinyapp.ts:442 typeerror: cannot read properties of undefined (reading 'erase') at e. anonymous (handwritingfor_shiny2.js:14:27) at e. anonymous (shinyapp.ts:866:40) at m (shinyapp.ts:5:1357) at generator. anonymous (shinyapp.ts:5:4174) at generator.next (shinyapp.ts:5:2208) at f0 (shinyapp.ts:6:99) at s (shinyapp.ts:7:194) at shinyapp.ts:7:364 at new promise ( anonymous ) at e. anonymous (shinyapp.ts:7:97) (anonymous) @ shinyapp.ts:442 m @ shinyapp.ts:5 (anonymous) @ shinyapp.ts:5 (anonymous) @ shinyapp.ts:5 f0 @ shinyapp.ts:6 c @ shinyapp.ts:7 promise.then (async) f0 @ shinyapp.ts:6 s @ shinyapp.ts:7 promise.then (async) f0 @ shinyapp.ts:6 s @ shinyapp.ts:7 promise.then (async) f0 @ shinyapp.ts:6 s @ shinyapp.ts:7 promise.then (async) f0 @ shinyapp.ts:6 s @ shinyapp.ts:7 promise.then (async) f0 @ shinyapp.ts:6 s @ shinyapp.ts:7 promise.then (async) f0 @ shinyapp.ts:6 s @ shinyapp.ts:7 promise.then (async) f0 @ shinyapp.ts:6 s @ shinyapp.ts:7
2024-02-14 23:43:37.247000000
To solve this, I had to change the version of com.google.gms.google-services inside the settings.gradle file, from:

plugins {
    ...
    id 'com.google.gms.google-services' version '4.4.1' apply false
}

to:

plugins {
    ...
    id 'com.google.gms.google-services' version '4.3.15' apply false
}
2024-02-24 08:16:50.927000000
They are different clocks. The timestamps in the kernel dmesg output are retrieved by the kernel-internal local_clock(), which is not exported to userspace; it is neither CLOCK_MONOTONIC nor CLOCK_MONOTONIC_RAW. The time shown in /proc/uptime is CLOCK_BOOTTIME. CLOCK_BOOTTIME is identical to CLOCK_MONOTONIC, except that it also includes any time that the system is suspended. On x86, these two clocks are usually both based on the TSC, but they are managed independently.
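(Added sketch to make the comparison concrete: read both userspace clocks; on a machine that has been suspended, CLOCK_BOOTTIME runs ahead of CLOCK_MONOTONIC by roughly the total suspended time.)

#define _GNU_SOURCE   /* CLOCK_BOOTTIME is Linux-specific */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec mono, boot;

    clock_gettime(CLOCK_MONOTONIC, &mono);  /* stops while the system is suspended */
    clock_gettime(CLOCK_BOOTTIME, &boot);   /* keeps counting across suspend; /proc/uptime matches this */

    printf("CLOCK_MONOTONIC: %ld.%09ld s\n", (long)mono.tv_sec, mono.tv_nsec);
    printf("CLOCK_BOOTTIME:  %ld.%09ld s\n", (long)boot.tv_sec, boot.tv_nsec);
    printf("difference (approx. suspended time): %.3f s\n",
           (boot.tv_sec - mono.tv_sec) + (boot.tv_nsec - mono.tv_nsec) / 1e9);
    return 0;
}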
2024-02-20 02:03:07.087000000
i have an expo bare workflow working fine in sdk 49 . but when i run expo-doctor and other solutions, i receive many errors about gradle and dependencies. i've tried to fix it, but many developers with the same problem advised me to create a new project. in my new project, i succeeded with an expo-doctor (no issues). but now i need help with svg, png, and fonts. unable to resolve missing-asset-registry-path from nodemodules/[USER]-google-fonts/poppins/poppins100thin.ttf i'll receive a png or an svg if i remove google fonts. below is my package.json . dependencies : { [USER]/preset-env : ^7.23.9 , [USER]-google-fonts/poppins : ^0.2.3 , [USER]/firestore : ^4.4.2 , [USER]/bottom-sheet : ^4.6.1 , [USER]-native-firebase/app : ^19.0.0 , [USER]-native-firebase/auth : ^19.0.0 , [USER]-native/babel-preset : ^0.73.21 , [USER]-navigation/native : ^6.1.12 , [USER]-navigation/stack : ^6.3.23 , [USER]/toolkit : ^2.2.1 , [USER]/react : ~18.2.45 , [USER]/styled-components : ^5.1.34 , dotenv : ^16.4.5 , expo : ~50.0.7 , expo-asset : ~9.0.2 , expo-checkbox : ~2.7.0 , expo-clipboard : ~5.0.1 , expo-constants : ~15.4.5 , expo-dev-client : ^3.3.8 , expo-image-manipulator : ~11.8.0 , expo-image-picker : ~14.7.1 , expo-splash-screen : ~0.26.4 , expo-status-bar : ~1.11.1 , expo-system-ui : ^2.9.3 , expo-updates : ~0.24.11 , firebase : ^10.8.0 , lodash : ^4.17.21 , phosphor-react-native : ^2.0.0 , react : 18.2.0 , react-native : 0.73.4 , react-native-bouncy-checkbox : ^3.0.7 , react-native-dotenv : ^3.4.10 , react-native-dropdown-picker : ^5.4.6 , react-native-element-dropdown : ^2.10.1 , react-native-gesture-handler : 2.14.0 , react-native-get-random-values : 1.8.0 , react-native-paper : ^5.12.3 , react-native-reanimated : 3.6.2 , react-native-safe-area-context : 4.8.2 , react-native-svg : ^14.1.0 , react-native-svg-transformer : ^1.3.0 , react-native-vector-icons : ^10.0.3 , react-redux : ^9.1.0 , styled-components : ^6.1.8 , typescript : ^5.3.0 , uuid : ^9.0.1 }, devdependencies : { [USER]/core : ^7.20.0 , [USER]/uuid : ^9.0.7 , typescript : ^5.1.3 }, i've tried to update all dependencies, downgrade expo, create a new project, but the project has a great complexity to restart everything. i've tried also to remove the pictures and fonts. but the problem persists. also tried clearing the cache.
2024-02-26 17:59:30.280000000
So, conveniently, I am able to answer my own question. Apparently using ngx-markdown in conjunction with Tailwind CSS creates this problem for React users as well. The solution is as simple as installing the Tailwind Typography plugin and then using the prose class in your classes.

Install Tailwind Typography:

npm install -D [USER]/typography

Include [USER]/typography in your tailwind.config.js in the plugins array:

/** [USER] {import('tailwindcss').Config} */
module.exports = {
  content: ["./src/**/*.{html,js}"],
  theme: {
    extend: {},
  },
  plugins: [
    require('[USER]/typography')
  ],
}

The Tailwind documentation for this can also be found here: [LINK]

Then after you've done all that, it's as simple as this (I recommend using max-w-full):

<div class="prose max-w-full">
  <markdown [src]="'assets/md/data.md'"></markdown>
</div>

I hope this helps anyone else on the internet experiencing this issue. Learning a framework is a learning curve :)
2024-02-28 20:04:52.133000000
I have the following code for a Telegram bot in TypeScript:

import TelegramBot, { Message } from 'node-telegram-bot-api'

const bot = new TelegramBot(myToken, {
  polling: {
    params: {
      allowed_updates: ['chat_member', 'message', 'my_chat_member']
    }
  }
})

bot.on('message', async (msg) => {
  console.log("message")
})

The bot.on('message') handler triggers correctly when a user joins a group that has the bot as an admin, but it doesn't trigger when the user leaves the group. Is this the correct approach to get notified when users leave the group? How can I achieve this?
2024-03-21 16:26:37.570000000
i'm using devexpress gantt for react, when i try to use the custom task template i get the following error: error failed to execute 'removechild' on 'node': the node to be removed is not a child of this node. this is my current implementation: gantt rootvalue={0} height={ inherit } scaletype={viewmode.tolowercase()} taskcontentrender={tasktemplate} ontaskclick={(taskclickevent: any) = {updateportfolioscenario(taskclickevent)}} ontaskupdating={(data) = {update(data, data.newvalues.start, data.newvalues.end)}} tasks datasource={tasks} keyexpr= id parentidexpr= parentid titleexpr= title progressexpr= taskprogress startexpr= start endexpr= end colorexpr= color / contextmenu enabled={false} / editing enabled / column datafield= portfolioscenariohtml caption= width={ 15% } cellcomponent={checkbox} / column datafield= title caption={t( generaltitle ) ?? title } width={ 50% } / column datafield= start calculatedisplayvalue={getstartdateformat} caption={t( generalstartdate ) ?? start date } width={ 35% } / column datafield= end calculatedisplayvalue={getenddateformat} caption={t( generalenddate ) ?? end date } width={ 35% } / column datafield= user caption={t( generalowner ) ?? owner } cellcomponent={usercomp} width={ 20% } / column datafield= oldstart caption={ } width={ 0% } / column datafield= oldend caption={ } width={ 0% } / /gantt this is my tasktemplate function: const tasktemplate = (data: any) = { let realtask = tasks.find((x) = x.id == data.taskdata.id); var taskdata = data.taskdata; var taskrange = taskdata.end - taskdata.start; var ticksize = data.tasksize.width / taskrange; let oldstart: any = new date(taskdata.oldstart); var actualtaskdelta = old_start - taskdata.start; var actualtaskleftposition = actualtaskdelta * ticksize; var actualtaskleftpositionpx = actualtaskleftposition + px ; return ( div div {realtask.portfolioscenario == 1 && realtask.oldstart && realtask.oldend && actualtaskleftpositionpx != nanpx && ( div classname= dx-gantt-taskwrapper style={{ position: absolute , left: actualtaskleftpositionpx }} div classname= dx-gantt-task style={{ width: ${data.tasksize.width}px, backgroundcolor: rgba(146, 152, 159,0.4) }} div classname= dx-gantt-tasktitle dx-gantt-titlein {data.taskdata.title} /div div classname= dx-gantt-tprg /div /div /div )} div classname= dx-gantt-taskwrapper div classname= dx-gantt-task style={{ width: ${data.tasksize.width}px, backgroundcolor: data.taskdata.color }} div classname= dx-gantt-tasktitle dx-gantt-titlein {data.taskdata.title} /div div classname= dx-gantt-tprg /div /div /div /div /div ); }; when i don't use the taskcontentrender={tasktemplate} everything works fine. but i want to add another task bar to show the previous data. to do so i use the tasktemplate function, and at first glance it seems to work fine. but whenever i try to update the data (by clicking the checkbox or by dragging the bar and changing the dates) it crashes. i saw a lot of ticket on devexpress where people where complaining of similar issues using different libraries, sometimes the dev team said to add a div /div tag to wrap the custom template code. as you can see i did that but to no avail. can someone help?
2024-03-28 14:46:02.393000000
i'm trying to program a gui in tkinter that sets the root to a darkmode waits a specific amount of time and then closing or withdrawing itself and at the same moment a new toplevel should be generated. the withdrawed main window should pop up again after i closed the toplevel. therefore i used the if statement, but i get a nameerror: newwindow not defined , evenhough i set it to global. this my code: from tkinter import * def answer(event): anslabel[ text ]= you clicked the button! def darkmode(event): hallo[ fg ]= white hallo[ bg ]= black main[ bg ]= black anslabel[ bg ]= black anslabel[ fg ]= white button[ bg ]= black button[ fg ]= white main.after(3000,createwindow) def createwindow(): global newwindow newwindow=toplevel() newlabel=label(newwindow,text= you found the secret function! ) newlabel.pack() main.withdraw() main=tk() hallo=label(main,text= hello ) button=button(main,text= button ) button.bind( button-1 ,answer) button.bind( button-3 ,darkmode) anslabel=label(main,text= hello! ) hallo.pack() button.pack() anslabel.pack() if not newwindow.winfo_exists(): main.deiconify() main.mainloop()
2024-02-13 21:20:47.440000000
you actually figured the representations out correctly. what you input is a so-called h-representation (which lists inequalities and possibly also equations), get_generators() returns the corresponding v-representation (which in turn lists vertices and possibly also extreme rays). the manual of cddlib , the c-library underlying pycddlib , defines the two formats in detail.
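for illustration, a minimal round trip with the classic pycddlib api (Matrix / RepType / Polyhedron; the exact constructor names may differ in newer releases) could look roughly like this:

import cdd

# h-representation of the unit square: each row is [b, a1, a2] meaning b + a1*x + a2*y >= 0
mat = cdd.Matrix([[0,  1,  0],   # x >= 0
                  [0,  0,  1],   # y >= 0
                  [1, -1,  0],   # 1 - x >= 0
                  [1,  0, -1]])  # 1 - y >= 0
mat.rep_type = cdd.RepType.INEQUALITY

poly = cdd.Polyhedron(mat)
print(poly.get_generators())  # v-representation: the four vertices of the square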
2024-03-22 12:35:32.160000000
i was following the documentation on drf social auth [LINK]> i've encountered an issue while working with django-rest-social-auth and simple-jwt. the error message is as follows: noreversematch at /api/login/social/jwt-pair/facebook/ 'http' is not a registered namespace i've registered my path in the root urls.py (located in the config folder) as follows: path('api/login/', include('rest_social_auth.urls_jwt_pair')) despite referring to the documentation, i'm unable to resolve this error. if anyone has encountered a similar issue or has insights into what might be causing this problem, could you kindly assist?
2024-02-07 15:03:06.340000000
here's a one-liner i ended up using: remote= origin\/ ; for branch in $(git branch -r | grep $remote | grep -v head | sed s/$remote// ); do echo $branch; git checkout $branch; done breakdown: define the remote wanted. i needed this because this repo had multiple ones. on this step we include an escaped slash; loop through every branch; use grep to narrow down to the branches we want. in this case we use the $remote defined earlier and 'head' to remove 'remote/head' from the options; use sed to remove the remote name and the slash from the options; finally we can checkout every branch
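a slightly more robust variant of the same idea, if you would rather avoid escaping the slash and parsing git branch output, is to let git for-each-ref list the remote-tracking refs (the remote name is assumed to be origin here):

remote="origin"
for branch in $(git for-each-ref --format='%(refname:short)' "refs/remotes/$remote" | grep -v HEAD | sed "s|^$remote/||"); do
  echo "$branch"
  git checkout "$branch"
done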
2024-02-21 18:42:33.143000000
i use the decimal pipe to format the number in an input field value| number:'0.0-6': 'en-us' when i use a number with more than 10 digits it shows this: on [HASH].123456 returns 11,111,111,111.123455 on [HASH].123456 returns 111,111,111,111.12345 on [HASH].123456 returns 1,111,111,111,111.1234 on [HASH].123456 returns 11,111,111,111,111.123 the width of the input field does not matter. has anyone had the same issue? is there any workaround? i also tried with [HASH].999999 and it shows 12,000,000,000,000
2024-03-20 12:44:31.887000000
i have an app both in android and ios platform and i have a different back-end lambdas based on node.js. i want to add social login in app - when user clicks sign in with google or sign in with apple . i need to exchange these credentials to aws cognito credentials (id token) to give access to other resources. i have achieved this by using hosted ui - so in my nodejs lambda, i create a url with temporary state and send to the apps. but user needs to click twice - first button in app, then app opens with url of web page and needs to click again with same button. i need to achieve this without opening the web page. do you have any suggestions or ways to fix this? regards,
2024-03-10 13:49:24.217000000
i am trying to only show the category and subcategory when the post's custom field is not empty. as of right now it does show the category and subcategory but then it is just an empty space below the subcategory. (because the custom field is and should be empty). ?php function owcategorieswithsubcategoriesandposts( $taxonomy, $posttype ) { // get the top categories that belong to the provided taxonomy (the ones without parent) $categories = getterms( array( 'taxonomy' = $taxonomy, 'parent' = 0, 'orderby' = 'termid', 'hideempty' = true ) ); ? div class= accordions-container ?php // iterate through all categories to display each individual category foreach ( $categories as $category ) { $catname = $category- name; $catid = $category- termid; $catslug = $category- slug; ? div class= accordion-wrapper ?php // display the name of each individual category echo ' h2 i class= fa-regular fa-lightbulb-on /i ' . $catname . ' /h3 '; // get all the subcategories that belong to the current category $subcategories = getterms( array( 'taxonomy' = $taxonomy, 'parent' = $catid, // -- the parent is the current category 'orderby' = 'termid', 'hideempty' = true ) ); ? div class= subcategory-wrapper ?php // iterate through all subcategories to display each individual subcategory foreach ( $subcategories as $subcategory ) { $subcatname = $subcategory- name; $subcatid = $subcategory- termid; $subcatslug = $subcategory- slug; $childsubcategories = getterms( array( 'taxonomy' = $taxonomy, 'parent' = $subcatid, // -- the parent is the current category 'orderby' = 'termid', 'hideempty' = true ) ); // display the name of each individual subcategory with id and slug echo ' button class= accordion ' . $subcatname . ' /button '; echo ' div class= panel '; ? div ?php foreach ( $childsubcategories as $childsubcategory ) { $childsubcatname = $childsubcategory- name; $childsubcatid = $childsubcategory- termid; $childsubcatslug = $childsubcategory- slug; ? h4 class= accordian-child-subcategory ?php echo $childsubcatname ? /h4 ?php // get all posts that belong to this specific subcategory $posts = new wpquery( array( 'posttype' = $posttype, 'postsperpage' = -1, // -- show all posts 'hideempty' = true, 'order' = 'asc', 'metaquery' = array( array( 'key' = 'specificationsheet', 'value' = '', 'compare' = '!=' ) ), 'taxquery' = array( array( 'taxonomy' = $taxonomy, 'terms' = $childsubcatid, 'field' = 'id' ) ) ) ); // if there are posts available within this subcategory if ( $posts- haveposts() ){ // as long as there are posts to show while ( $posts- haveposts() ): $posts- thepost(); //show the title of each post with the post id ? div class= post-loop-single div class= post-loop-text div class= post-loop-title h3 ?php thetitle(); ? ?php if( getfield('discontinued')) { ? - discontinued ?php } else if(getfield('phasingout')) { ? - phasing out ?php } ? /h3 div class= post-loop-spec-sheet a href= ?php thefield('specificationsheet'); ? target= blank i class= fa-light fa-file-magnifying-glass /i specification sheet /a /div /div /div /div ?php endwhile; ? ?php } else { } wpresetquery(); } ? /div ?php echo ' /div '; } ? /div /div ?php } ? /div ?php } owcategorieswithsubcategoriesandposts( 'product-document-category', 'product-document' ); ? this is an example of what a correct category - subcategory - post with not empty custom field looks like correct this is an example of what an incorrect category - subcategory - post with empty custom field looks like incorrect the goal is to have the incorrect category not show up at all. 
i have tried many things. i have rearranged, attempted different get_terms functionalities, and none have proven to work so far.
2024-03-12 21:00:42.463000000
im using static headers on my ag grid, and there are some headers/columns who can have values or not. when i get the data from the backend, the values for those headers may or may not come. however, when using ag grid cell data type boolean, if the value is undefined it shows a grey checkbox, and i want it to be an empty/false checkbox instead, does anyone have an idea for that? left are undefined checkboxes and right are false/true checkboxes i've tried to create a custom cell renderer, however i'm also having trouble on applying it on only boolean cell data types and not on everything else. also i can't apply the cellrenderer on whatever data i want since even though this grid has static headers, some others might come from backend, so if i use this custom cell renderer i need to check if the cell datatype is boolean and only then apply it.
2024-02-15 17:37:26.890000000
i want to interpolate points on a curve in python having the coordinates of the start and end points, and their curvature values (curvature is 1/r , r being the radius of the circumference). the sign of the curvature value indicates if the curve is to the right or to the left. the distance of the curve is also known. example approach: pointa = { 'coordinates' : (latitudea, longitudea), 'curvature': 0 } pointb = { 'coordinates' : (latitudeb, longitudeb), 'curvature': -30 } curvedistance = 120 in meters stepsize = 1 interpolate points every stepsize meters calculate interpolated points if curvature value of both points is 0, it is a usual linear interpolation. i'm adding the code if someone is interested in how to calculate the gps points. def haversinedistance(lat1, lon1, lat2, lon2): convert latitude and longitude from degrees to radians lat1 = math.radians(lat1) lon1 = math.radians(lon1) lat2 = math.radians(lat2) lon2 = math.radians(lon2) radius of the earth in kilometers earth_radius = 6371 haversine formula dlat = lat2 - lat1 dlon = lon2 - lon1 a = math.sin(dlat/2)2 + math.cos(lat1) math.cos(lat2) math.sin(dlon/2)2 c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a)) calculate the distance distance = earth_radius c return distance 1000 return distance in meters def linearinterpolation(agps, bgps, stepsize): latitudea = agps[0] longitudea = agps[1] latitudeb = bgps[0] longitudeb = bgps[1] distance = haversinedistance(latitudea, longitudea, latitudeb, longitudeb) if stepsize = distance: return none else: numintervals = int(distance / stepsize) generate points at specified intervals along the path intermediatepoints = [] intermediatepoints.append((latitudea, longitudea)) add start point to the returning vector for i in range(1, numintervals + 1): fraction = i / (numintervals + 1) intermediatelat = latitudea + fraction * (latitudeb - latitudea) intermediate_lon = longitudea + fraction * (longitudeb - longitudea) intermediatepoints.append((intermediatelat, intermediatelon)) add interpolated point intermediatepoints.append((latitudeb, longitudeb)) add end point to the returning vector return intermediate_points the issue i'm having is when i have different curvature values (i assume it is a clothoid case) and same curvature values (a circular case). i'm looking for a python library/function that could help me with these two cases. thanks in advance
2024-02-16 09:13:12.963000000
just going based on your config it appears that you are trying to use mezzio configuration in an mvc application. laminas mvc expects a module.php file which should expose a getconfig method. / [USER] array string, mixed / public function getconfig(): array { return include __DIR__ . '/../config/module.config.php'; } furthermore, mezzio does not use controllers, it's psr-7 / psr-15 middleware, so i am not sure what you have going on there. you need to pick a framework. mezzio and laminas mvc are not the same; what works in one may not work in the other.
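for completeness, a minimal Module.php for a laminas mvc module (assuming the module is named Application and keeps its config in config/module.config.php) would look roughly like this:

<?php

declare(strict_types=1);

namespace Application;

class Module
{
    /** @return array<string, mixed> */
    public function getConfig(): array
    {
        return include __DIR__ . '/../config/module.config.php';
    }
}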
2024-02-18 08:15:01.567000000
i am running into an issue where running various azure module commands will hang. this appears to only really happen when launching outside of ise. $azurevmdetails = get-azvm -name $servername when ran in ise the results return fine each time. when executed in normal powershell 5.1 window inline with my script that contains gui/forms it will just hang as soon as it gets to this part with no errors indicating in the console. this happens about 90% of the time. i have tried an alternate approach below which has the same result. $command = get-azvm -name $servername $azurevmdetails = invoke-expression $command i have also tried running as a job but the problem is the whole powershell console seems to go unresponsive and is not able to recover. i have also tried updating modules to the latest version. trying to avoid powershell 7 as it does not work out of the box with the gui and the main script this ties into is thousands of lines of code.
2024-03-25 16:44:00.777000000
assuming a data frame like this: data = { persnr : [22222, 22222, 22222, 22222, 55555, 55555], xyz : [ a , b , a , b , a , b ], date : [ jan , jan , feb , feb , jan , jan ], value : [0.8, 0.2, 0.8, 0.2, 0.8, 0.2], } persnr xyz date value 0 22222 a jan 0.8 1 22222 b jan 0.2 2 22222 a feb 0.8 3 22222 b feb 0.2 4 55555 a jan 0.8 5 55555 b jan 0.2 merge two data frames: (i) the original minus column value and (ii) a data frame where you group on persnr (assuming there will be more than one value for this; otherwise, this column isn't necessary) and date , then sum the value in each group and reset the index. set column value with 0 where df[ xyz ] == b . use .loc to do the previous steps only for the selected persnr values. selectedpersnr = [22222] add all selected values here df.loc[df[ persnr ].isin(selectedpersnr)] = pd.merge( df.drop(columns= value ), df.groupby([ persnr , date ])[ value ].sum().resetindex(), ) df.loc[(df[ persnr ].isin(selectedpersnr)) & (df[ xyz ] == b ), value ] = 0 persnr xyz date value 0 22222 a jan 1.0 1 22222 b jan 0.0 2 22222 a feb 1.0 3 22222 b feb 0.0 4 55555 a jan 0.8 5 55555 b jan 0.2
2024-03-13 15:38:45.280000000
my chrome extension creates an iframe with an html file included in the extension on each web page. i would like to access and modify values of elements in the iframe, but the contentdocument attribute of the iframe object is null. here are the files: manifest.json { manifestversion : 3, name : iframe example , version : 1.0 , description : iframe , webaccessibleresources : [ { resources : [ foo.html ], matches : [ allurls ] } ], contentscripts : [ { matches : [ allurls ], js : [ content.js ] } ] } content.js (function() { const frame = document.createelement('iframe'); frame.id = 'foo'; frame.src = chrome.runtime.geturl('foo.html'); frame.style.width = '100px'; frame.style.height = '100px'; frame.style.position = 'fixed'; frame.style.zindex = '99999'; frame.style.top = '0'; frame.style.left = '0'; document.body.appendchild(frame); })(); document.addeventlistener('keydown', function(event) { if (event.ctrlkey && event.shiftkey && event.code === 'keyy') { frame = document.getelementbyid('foo'); console.log(frame.contentdocument.getelementbyid('foo')); } }); foo.html !doctype html html lang= en body div id= foo waiting /div /body /html error message: uncaught typeerror: cannot read properties of null (reading 'getelementbyid') of course the iframe is fully loaded when i press the keys. the error is likely due to the web page and the html files not being from the same origin. how to resolve the issue? in real case, i need access both to the web page and the iframe as i'd like to extract some content of the web page, transform it and set the value of a div object in the iframe.
2024-03-06 21:31:47.737000000
i am trying to write a simple github workflow where it will install and run tests for my maven project. when i try to run mvn clean install locally the tests run but not when it runs on the pipeline. it keeps showing zero test run. what could be the possible reason? name: build and test on: push: branches: [ master ] pull_request: branches: [ master ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout[USER] with: ref: feat/add-test - name: set up jdk 11 uses: actions/setup-java[USER] with: java-version: 11 distribution: 'adopt' cache: maven - name: install dependencies run: mvn package -dskiptests - name: build with maven run: mvn clean install - name: run formatter run: mvn com.coveo:fmt-maven-plugin:format - name: test run: | mvn clean test ?xml version= 1.0 encoding= utf-8 ? project xmlns= [LINK] xmlns:xsi= [LINK] xsi:schemalocation= [LINK] [LINK] modelversion 4.0.0 /modelversion properties maven.compiler.source 11 /maven.compiler.source maven.compiler.target 11 /maven.compiler.target /properties groupid speed /groupid artifactid simulator /artifactid version 1.0 /version build plugins plugin groupid org.apache.maven.plugins /groupid artifactid maven-compiler-plugin /artifactid version 3.8.1 /version configuration source 11 /source target 11 /target /configuration /plugin plugin groupid org.jacoco /groupid artifactid jacoco-maven-plugin /artifactid version 0.8.11 /version executions execution goals goal prepare-agent /goal /goals /execution !-- attached to maven test phase -- execution id report /id phase test /phase goals goal report /goal /goals /execution /executions /plugin /plugins /build dependencies !-- junit -- dependency groupid org.junit.jupiter /groupid artifactid junit-jupiter-api /artifactid version 5.10.2 /version scope test /scope /dependency dependency groupid org.junit.jupiter /groupid artifactid junit-jupiter-engine /artifactid version 5.10.2 /version scope test /scope /dependency dependency groupid org.mockito /groupid artifactid mockito-core /artifactid version 5.11.0 /version scope test /scope /dependency /dependencies /project
2024-03-28 09:24:52.680000000
i am currently getting an error while trying to install shopware 6. looked everywhere around the internet but couldn't find a solution. the problem is when i run the shopware installer using xampp, during the configuration process it asks me about php binary and i choose c:\xampp\apache\bin\httpd.exe after that when i try to choose a version and download shopware 6, it gives me an error saying: usage: c:\\xampp\\apache\\bin\\httpd.exe [-d name] [-d directory] [-f file] [-c directive ] [-c directive ] [-w] [-k start|restart|stop|shutdown] [-n servicename] [-k install|config|uninstall] [-n servicename] [-v] [-v] [-h] [-l] [-l] [-t] [-t] [-s] [-x] options: -d name : define a name for use in directives -d directory : specify an alternate initial serverroot -f file : specify an alternate serverconfigfile -c directive : process directive before reading config files -c directive : process directive after reading config files -n name : set service name and use its serverconfigfile and serverroot -k start : tell apache to start -k restart : tell running apache to do a graceful restart -k stop|shutdown : tell running apache to shutdown -k install : install an apache service -k config : change startup options of an apache service -k uninstall : uninstall an apache service -w : hold open the console window on error -e level : show startup errors of level (see loglevel) -e file : log startup errors to file -v : show version number -v : show compile settings -h : list available command line options (this page) -l : list compiled in modules -l : list available configuration directives -t -d dumpvhosts : show parsed vhost settings -t -d dumpruncfg : show parsed run settings -s : a synonym for -t -d dumpvhosts -d dumpruncfg -t -d dumpmodules : show all loaded modules -m : a synonym for -t -d dumpmodules -t -d dump_includes: show all included configuration files -t : run syntax check for config files -t : start without documentroot(s) check -x : debug mode (only one worker, do not detach) did anyone encounter the same problem before? can you guys recommend any solutions? thank you.
2024-03-17 18:38:01.217000000
is there something similar to the azure sql database deployment for sql server? conceptually, i'm thinking of running a task that executes update-database as a deployment task. am i on the right track? the azure sql database deployment task uses sqlpackage.exe and the invoke-sqlcmd cmdlet to deploy dacpacs or sql server scripts to azure sql server. for sql server, the similar task is sql server database deploy . you are on the right track by using entity framework core (ef core) to manage your database schema. instead of update-database , you can use dotnet ef migrations script --idempotent , which is equivalent; please check the similar thread [LINK] for your reference.
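as a rough sketch, assuming the agent has the ef core tools and sqlcmd installed (the server and database names below are placeholders), the idempotent script can be generated and then applied like this:

# generate a script that is safe to re-run against any schema version
dotnet ef migrations script --idempotent --output migrate.sql

# apply it to the target sql server instance
sqlcmd -S <your-server> -d <your-database> -i migrate.sql

the --idempotent flag makes the generated script check the migrations history table before applying each migration, so the same script can be used as a deployment artifact across environments.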
2024-03-19 14:16:48.300000000
this is the ability to destructure a class instance field. to use the fields of an object at the time of destructuring, you can use the syntax object(:[type] field) (conditionally). here are more examples: asyncvalue? value; final result = switch (value) { asyncerror(:final error) = '$error', asyncerror(:var error, :stacktrace stacktrace) = '$error $stacktrace', asyncerror(:var error, stacktrace: stacktrace st) = '$error $st', asyncerror int (:final int value) = '$value', asyncerror(:object error) = '$error', asyncvalue int (hasvalue: true) = 'hasvalue is true', asyncvalue int (:var hasvalue) when hasvalue == false = '$hasvalue', _ = 'default', the example itself doesn't make sense here, i only showed the syntax of field extraction. specify : (or just inside the object's parentheses) and use the ide prompts to see the fields available for destructuring. use a custom field when destructuring if necessary (as with stacktrace: stacktrace st ). you can also immediately start mapping fields to value, as in the asyncvalue.hasvalue example. use the when syntax if you need this field later. there is a very large syntax, i recommend viewing the information using the links below: [LINK]> records and pattern matching in dart - complete guide | sandro maglione dive into dart's patterns and records
2024-03-12 08:08:33.067000000
i am building an app using react, ts, mongodb, prisma and stripe. while implementing stripe subscription, i have this malformed objectid: invalid character 'u' was found at index 0 in the provided hex string: user2dxrtvzciy9tt3yngsz8ey7xdvq . error. how can i fix this problem ? my prisma.schema file: generator client { provider = prisma-client-js } datasource db { provider = mongodb url = env( databaseurl ) } model userapilimit { id string [USER] [USER](auto()) [USER]( id ) [USER].objectid userid string [USER] count int [USER](0) createdat datetime [USER](now()) updatedat datetime [USER] } model usersubscription { id string [USER] [USER].objectid [USER]( id ) userid string [USER] stripecustomerid string? [USER] [USER](name: stripecustomerid ) stripesubscriptionid string? [USER] [USER](name: stripesubscriptionid ) stripepriceid string? [USER](name: stripepriceid ) stripecurrentperiodend datetime? [USER](name: currentperiodend ) } i had no problems with the id on userapilimit and tried to changed the id on subscription to id string [USER] [USER](auto()) [USER]( _id ) [USER].objectid but it didn't work
2024-03-21 16:25:49.420000000
the return type of a component defined in kubeflow pipelines (kfp) dsl must be serializable since pipeline steps require the outputs of the component to be transferable. the error message typeerror: todict() missing 1 required positional argument: 'self' indicates that the windowgenerator class is not serializable , likely because it does not have a method todict(). the approaches to solve this are following: return serializable data : you can modify the return type of the component to be a serializable type. for example, you can return a dictionary containing the necessary information to reconstruct the windowgenerator object: [USER].dsl.component def turnwindowgenerator(df: pd.dataframe) - dict[str, any]: widewindow = windowgenerator(...) return widewindow.todict() return a path to a saved object : saved the output in some file and returns its path [USER].dsl.component def turnwindowgenerator(df: pd.dataframe) - str: ... widewindow = ... create the windowgenerator object path = /path/to/save/windowgenerator.pkl joblib.dump(widewindow, path) return path utilize component outputs : save the output as an dsl.output or artifact [USER].dsl.component def turnwindowgenerator(df: pd.dataframe) - none: ... widewindow = ... create the windowgenerator object kfp.dsl.outputartifact( windowgenerator ).writemlpipelinemodel(wide_window) choose the approach that best suits your pipeline's structure and data needs.
2024-02-15 05:59:43.157000000
i have been stuck with this for the last couple of days. i created a private github repository with cmake, c++. i then added actions to build and test the code. since the repo is private, i added a secret token as suggested by the codeconv docs. after the tests are completed, i wish to also check for code coverage with the codeconv, so i added another another step at the bottom of my yaml: - name: test working-directory: ${{ steps.strings.outputs.build-output-dir }} run: ctest --verbose -c ${{ matrix.buildtype }} --reporter xml - name: upload coverage reports to codecov uses: codecov/codecov-action[USER].0.1 with: token: ${{ secrets.codecovtoken }} slug: user / repo as was suggested by the codecov docs. in case that matters, my executables are not in the same directory as build. the tests pass, but the codeconv exits with an error: info - 2024-03-20 09:56:26,137 -- ci service found: github-actions warning - 2024-03-20 09:56:26,141 -- no config file could be found. ignoring config. warning - 2024-03-20 09:56:26,147 -- xcrun is not installed or can't be found. warning - 2024-03-20 09:56:26,163 -- no gcov data found. warning - 2024-03-20 09:56:26,164 -- coverage.py is not installed or can't be found. info - 2024-03-20 09:56:26,196 -- found 0 coverage files to upload error: no coverage reports found. please make sure you're generating reports successfully. warning: codecov: failed to properly upload report: the process '/home/runner/work/_actions/codecov/codecov-action/v4.0.1/dist/codecov' failed with exit code 1 complaining that the report was not found. does anybody have an idea what to do?
2024-03-20 10:05:39.197000000
i created an asp.net core mvc application with ef core. several problems appeared one after the other. i eventually could create a controller on the orders table of northwind database. it went with views automatically. when launching the application, this opened the homecontroller , that is provided by default. as i want to open the orderscontroller , in the address bar, after [LINK]/ i add orders , but i get a transient error, whereas the content has this property : .usesqlserver( @ server=(localdb)\mssqllocaldb;database=efmiscellanous.connectionresiliency;trusted_connection=true , options = options.enableretryonfailure( maxretrycount: 5, maxretrydelay: system.timespan.fromseconds(30), errornumberstoadd: null) ); i first tried with nothing in the parenthesis for enableretryonfailure , and then tried something i saw in the forums. well the example was on mysql, as i use sql server express, perhaps good to know. the database is on an instance of sql server express rather than on localdb , but i am not sure this is the point. i remember that the two controllers can target two ports, how do i verify that? and supposing it is not the point either, what must i do?
2024-03-12 23:20:09.963000000
if anyone is still facing this issue and has tried all the other things, check the modules that you have added to your project; it's very possible that one of the modules pulls in an older version of androidx.appcompat:appcompat. in my case, i had added the 'nativetemplates' module for admob native ads, which had the older androidx.appcompat:appcompat:1.2.0 , and i simply updated it to the latest version and it resolved the issue.
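for example, if the offending module's build.gradle pins the old version, bumping it looks roughly like this (the version number here is just an example, use whatever the current release is):

dependencies {
    // was: implementation "androidx.appcompat:appcompat:1.2.0"
    implementation "androidx.appcompat:appcompat:1.6.1"
}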
2024-03-05 18:57:22.017000000
does this mean setting to binary is not supported by server? or could this be client problem? connecting to xxx.xx.xx.xx... ssh server supporting sftp and scp password: sftp ls / /inbox sftp ls /+mode=binary couldn't stat remote file: no such file or directory can't ls: /+mode=binary not found sftp or should it be ls / +mode=binary ? (space between / and + )
2024-02-19 10:02:15.200000000
your code is not react-ing very correctly. instead of declaring some unrestricted global and mutating it locally you should create a react context provider to hold the state and provide the component and visibility value down to consumers. example: myglobalcomponentprovider.tsx import { propswithchildren, reactnode, dispatch, createcontext, usecontext, usestate } from react ; type globalcomponentprops = { component: reactnode; setcomponent: dispatch reactnode ; isvisible: boolean; setisvisible: dispatch boolean ; }; export const myglobalcomponentcontext = createcontext globalcomponentprops ({ component: null, setcomponent: () = null, isvisible: false, setisvisible: () = false }); export const usemyglobalcomponentcontext = () = usecontext(myglobalcomponentcontext); export const myglobalcomponent = () = { const { component, isvisible } = usemyglobalcomponentcontext(); return div {isvisible ? component : null} /div ; }; const myglobalcomponentprovider = ({ children }: propswithchildren {} ) = { const [component, setcomponent] = usestate reactnode (null); const [isvisible, setisvisible] = usestate(false); return ( myglobalcomponentcontext.provider value={{ component, isvisible, setcomponent, setisvisible }} {children} /myglobalcomponentcontext.provider ); }; export default myglobalcomponentprovider; import the myglobalcomponentprovider component and wrap the app or root-level component to provide the context value to that sub-reacttree. consumers use the exported usemyglobalcomponentcontext hook to access the values and handle/render accordingly. app import myglobalcomponentprovider, { myglobalcomponent, } from ./myglobalcomponentprovider ; export default function app() { return ( myglobalcomponentprovider div classname= app myglobalcomponent / anothercomponent / /div /myglobalcomponentprovider ); } anothercomponent.tsx import { useeffect } from 'react'; import { usemyglobalcomponentcontext } from ./myglobalcomponentprovider ; const anothercomponent = () = { const { setcomponent, setisvisible } = usemyglobalcomponentcontext(); useeffect(() = { setcomponent( tertiarycomponent myprop= asds / ); setisvisible(true); }, []); return h1 anothercomponent /h1 ; };
2024-02-15 19:40:48.800000000
none of the supplied answers are complete. some are wrong in terms of parameters. here's an example of multi-file selection and open: // an approximate arbitrary file limit of 200 uint32_t maxopenfiles = 200; // a buffer to hold 200 file paths (using the old 260 char path limit). going to assume its allocated. tchar lpszfiles = new tchar[maxopenfiles maxpath]; // maxpath is 260 here lpszfiles[0] = 0; // don't forget this! cfiledialog dlgopen(true, l * , null, ofnexplorer | ofnallowmultiselect, l all files (.)|.|| ); dlgopen.mofn.lpstrfile = lpszfiles; dlgopen.mofn.nmaxfile = maxopenfiles * max_path; // not the file count, but the size of the buffer in characters. if (dlgopen.domodal() == idok) { cstring sfilepath; position pos = dlgopen.getstartposition(); while (pos) { // fetch each file path sfilepath = dlgopen.getnextpathname(pos); // do something with the file path // openme(sfilepath); } } // free the path buffer (assuming it was allocated) delete[] lpszfiles;
2024-03-21 17:29:38.430000000
i want to write a regex to match this that starts with -----begin private key----- and things that end with -----end private key----- including the -----begin private key----- and -----end private key----- . this is my attempt: (?ms)(?=[-]+begin private key[-]+)(.*?)(?=[-]+end private key[-]+) given this: whatever -----begin private key----- danbgkqhkig9w0bfqefaascbkkwggslagea danbgkqhkig9w0bfqefaascbkkwggslagea danbgkqhkig9w0bfqefaascbkkwggslagea s1hqdfvgttzigzwfxggjxyvfow== -----end private key----- whatever i want to get: -----begin private key----- danbgkqhkig9w0bfqefaascbkkwggslagea danbgkqhkig9w0bfqefaascbkkwggslagea danbgkqhkig9w0bfqefaascbkkwggslagea s1hqdfvgttzigzwfxggjxyvfow== -----end private key-----
2024-03-10 05:29:42.143000000
something along the lines of this should be close to what you seem to be asking for: source.from(partitions()) .map(partition - (slowsource(partition).buffer(1, overflowstrategy.backpressure())).prematerialize(actorsystem)) .flatmapconcat(pair::second) by attaching a source to a buffer and materializing it, you effectively prime the pump by initializing it and signaling demand to the source. you can tweak the buffer sizes (e.g. can use statefulmap to have larger buffers on later partitions) if so inclined. if the main reason the sources are slow is that there's a long delay before the first element appears but after that they come fairly quickly, this will allow each partition to get set up quickly.
2024-02-28 15:47:23.073000000
i have a simple plot to draw closing price in pinescript v5. plot(series=close, title='close', linewidth=2, color=color.lime) i would like to modify the plot such that if price of close 100, the plot will be plotted in dotted line. otherwise, the plot will be in normal line. i am using pinescript v5. i tried the following; plotstyle = plot.styleline if close = 100 plotstyle := plot.stylecross plot(close, title='close', linewidth=2, color=color.lime, style=plotstyle) i received the error error at 652:64 cannot call 'plot' with argument 'style'='plotstyle'. an argument of 'series plotstyle' type was used but a 'input plotstyle' is expected.
2024-02-15 03:47:59.890000000
try with: if gunzip --test "${file}" 2>/dev/null 1>/dev/null; then echo "this is a gzipped file!"; else echo "this is not a gzipped file."; fi
2024-03-12 14:19:38.577000000
i would like to upload a few files into an s3 bucket via terraform with a kms key. the problem is i cannot upload the files because the server-side-encryption property defaults to do not specify an encryption key . i can change it manually to specify an encryption key and that works! but i want to get it working in terraform, that should be possible via terraform!
2024-03-19 13:26:25.750000000
need some help. trying to solve this, kindly help. have a table of sales with some months with zero sales. my running total is failing to return a cumulative value for months with zero sales. product jan feb mar april may jun bread 4 2 0 2 3 4 runtot 4 6 0 8 11 15 expected solution product jan feb mar april may jun bread 4 2 0 2 3 4 runtot 4 6 6 8 11 15
2024-03-15 03:31:15.237000000
i'm trying to query the microsoft graph api to interact with plans within a group. i'm using this method in the sdk: graphclient.groups[id.tostring()].planner.plans.getasync(); however, this incurs the following exception: microsoft.graph.models.odataerrors.odataerror: you do not have the required permissions to access this item. i've used the same graphclient to list the groups, and to retrieve the group whose id is used in the above method call, which has the following permissions assigned: group.read.all ( application and delegated ) group.readwrite.all ( application and delegated ) groupmember.read.all ( application ) tasks.read ( delegated ) tasks.read.shared ( delegated ) tasks.readwrite ( delegated ) tasks.readwrite.shared ( delegated ) user.read ( delegated ) to obtain a graphclient i used the code found in the sample [here][1] . the only part i changed is to use my own tenantid , clientid , and clientsecret . what extra permission is needed to list the plans within a group?
2024-03-22 14:53:17.883000000
i am trying to implement a modalbottomsheet with material3 . everything works like expected but i don't get colored the area which google called container. or more precisely: i would like to have a background image. i do not care if the modalbottomsheet has the background image or the element inside. i tried something like that modalbottomsheet( sheetstate = sheetstate, content = { surface(modifier = modifier.fillmaxsize()) { image( painter = painterresource(id = r.drawable.app_background), contentdescription = null, modifier = modifier .fillmaxsize(), contentscale = contentscale.crop ) } }, ondismissrequest = { pressedback.invoke() }, modifier = modifier .fillmaxsize(), tonalelevation = 20.dp, ) does anybody know how i can manipulate this area? i cannot find anything.
2024-02-28 18:23:38.950000000
posts = post.objects.all().alive() is passed via the system to be parsed with list[postschema] . how can i achieve the access of orm passed object, as mentioned below? schema is slightly extended version of basemodel class postschema(schema, ): id: int createdat: datetime updatedat: datetime softdeletedat: datetime | none = none id: int title: str slug: str body: str tags: list[tagsschema] comments: simplecommentschema [USER]( comments , mode= before ) def getcomments(cls, v, info, obj): obj is the each-object passed via list[postschema] comments = polymorphiccomments.objects.filter( parentid=obj.id, parenttype=contenttype.objects.getformodel(obj)).alive() return { 'commentcount': comments.count(), 'lastcomment': comments.last() if comments.exists() else none, } return { 'commentcount': 0, 'lastcomment': none } the post object by orm does not have a comments attribute, it must be a later calculated filled.
2024-03-18 01:30:04.427000000
another laravel issue can be the order in the down method: you have to drop the referencing (child) table first, before the table it points to. hope that makes sense!
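a minimal sketch of that ordering inside a migration's down() method, assuming a posts table that holds a foreign key to users (the table names are made up):

public function down(): void
{
    // drop the child table that holds the foreign key first...
    Schema::dropIfExists('posts');

    // ...then the table it references
    Schema::dropIfExists('users');
}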
2024-02-14 19:42:10.963000000
here's why the first approach wasn't working as expected and better ways to handle 404 logging in laravel 10. understanding the issue the reportable method within the app\exceptions\handler class is primarily designed to report exceptions for error tracking services, not specifically for logging every 404. this is why you weren't seeing the effect you expected. recommended approaches here's how to effectively log 404 errors in laravel 10: custom exception handler middleware create a middleware (e.g., lognotfoundrequests.php) ?php namespace app\http\middleware; use closure; use illuminate\http\request; use symfony\component\httpkernel\exception\notfoundhttpexception; use illuminate\support\facades\log; class lognotfoundrequests { public function handle(request $request, closure $next) { $response = $next($request); if ($response- status() === 404) { log::error('404 not found: ' . $request- path()); } return $response; } } register the middleware in your app\http\kernel.php file. add it to either your global middleware stack or a specific route group. event listener create an event listener (e.g., lognotfoundexception.php) ?php namespace app\listeners; use illuminate\contracts\queue\shouldqueue; use illuminate\queue\interactswithqueue; use illuminate\support\facades\log; use symfony\component\httpkernel\exception\notfoundhttpexception; class lognotfoundexception { public function handle(notfoundhttpexception $event) { log::error('404 not found: ' . $event- getrequest()- path()); } } register the listener in your app\providers\eventserviceprovider.php file: protected $listen = [ // ... other events notfoundhttpexception::class = [ lognotfoundexception::class, ], ]; choosing a method middleware: good for simple, direct logging of the request path. event listener: offers more flexibility if you want more elaborate error handling, such as passing data to other systems or creating custom log messages. why avoid try-catch blocks manually adding try-catch blocks with modelnotfoundexception everywhere is less ideal because: repetition: it creates unnecessary code duplication. limited control: it doesn't let you easily customize log messages or log levels.
2024-02-28 11:04:16.113000000
i have the exact same issue as the person who asked this question: git config --global http.sslverify false git config --global --unset http.proxy git config --global --unset https.proxy git config http.postbuffer [HASH] however, none of these solutions have worked for me. i would greatly appreciate any further suggestions. (note: i refrained from commenting on the original question or providing an answer due to my insufficient reputation, as stack overflow advises against asking questions within answer sections.)
2024-03-10 16:34:28.057000000
adding app:layout_constrainedHeight="true" to the views referenced by the barrier solved my problem.
2024-03-18 11:51:25.063000000
i have a php file within /var/www/html that is called from the client side, and within this php file i require a file from a directory i created and called /app/lib/ where all of my custom libraries and classes reside. however, every time the endpoint is hit, i get the following php warning & error: php warning: require(/app/lib/user/user.php): failed to open stream: permission denied php fatal error: uncaught error: failed opening required '/app/lib/user/user.php' (include_path='.:/usr/share/pear:/usr/share/php') my server os is centos currently, the permissions on the /app/lib directory are apache:apache, with permissions on all directories set to 755, and the php files being set to 644. i am not sure what else i am missing, so if anyone has any insight, i would greatly appreciate the help
2024-02-14 11:15:59.740000000
well, after i run docker-compose up or --build,i've never experienced this error, i have no idea how to resolve it., i'm getting this error: error response from daemon: error while creating mount source path '/run/desktop/mnt/host/c/users/asus/namatus/db/data': mkdir /run/desktop/mnt/host/c: file exists docker-compose: version: '3' services: rabbitmq: image: rabbitmq:3-management containername: rabbitmq hostname: rabbitmq volumes: - /var/lib/rabbitmq ports: - '5672:5672' - '15672:15672' envfile: - .env auth: build: context: ./ dockerfile: ./apps/auth/dockerfile envfile: - .env dependson: - rabbitmq - postgres volumes: - .:/usr/src/app - /usr/src/app/nodemodules command: npm run start:dev auth namatus: build: context: ./ dockerfile: ./apps/namatus/dockerfile ports: - '4000:5000' envfile: - .env dependson: - rabbitmq - auth volumes: - .:/usr/src/app - /usr/src/app/nodemodules command: npm run start:dev namatus postgres: image: postgres envfile: - .env ports: - '5432:5432' volumes: - ./db/data:/var/lib/postgresql/data postgresadmin: image: dpage/pgadmin4 dependson: - postgres envfile: - .env ports: - '15432:80' dockerfile: from node workdir /usr/src/app copy package*.json . run npm install copy . . does anyone know what is causing this? and i also have another one that hasn't helped me yet, but to resolve this one, i need to resolve this one first that i wasn't receiving.
2024-02-19 15:16:03.177000000
most scanf formats will read and discard leading white space (e.g. space, tab, and newline ). so, in the shown code, the newline will be left after the first call to scanf . but the second call to scanf will read and discard it. however, the newline after that input will be left in the input buffer. the only three scanf formats that do not read and discard leading white space are %c , %[…] (scan sets) and %n . if you want the same behavior for those formats, you need to add an explicit leading space in the format string yourself. because error handling and invalid input could be hard to handle with scanf , i always recommend using fgets to read a whole line instead. then, you can use, e.g. sscanf to parse the line. this way, if there's invalid input, it will not be left in the input buffer.
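a minimal sketch of that pattern, reading a whole line with fgets and then parsing it with sscanf:

#include <stdio.h>

int main(void)
{
    char line[256];
    int value;

    printf("enter a number: ");
    if (fgets(line, sizeof line, stdin) == NULL) {
        return 1;                  /* EOF or read error */
    }
    if (sscanf(line, "%d", &value) != 1) {
        printf("invalid input\n"); /* the bad input stays in line, not in stdin */
        return 1;
    }
    printf("you entered %d\n", value);
    return 0;
}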
2024-02-16 18:20:12.207000000
the issue lies in the form action. since you're updating a student record, the form action should point to the update method in your studentcontroller, not the edit method. the correct action for the form is the students.update route instead of students.edit.
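roughly, the blade form would look like this (assuming a resource controller and a $student variable passed to the view; the field name is just an example):

<form action="{{ route('students.update', $student) }}" method="POST">
    @csrf
    @method('PUT')   {{-- resource update routes expect PUT/PATCH --}}

    <input type="text" name="name" value="{{ old('name', $student->name) }}">

    <button type="submit">update</button>
</form>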
2024-03-29 05:52:25.330000000
i'm very new to web development and have never used the new blazor implementation. i'm trying to set up a web app to connect to a microsoft sql server 2000 instance. i can't use the standard system.data.sqlclient as this doesn't support sql 2000, so i believe i have to use an odbc connection. the problem is that i can't find any examples out there that have done this. i know it's a lot to ask, but would anyone have an example app of this or know where i might find a solution? the sql server isn't local and it uses sql authentication. i'm using visual studio 2022, .net 8 thanks
2024-02-26 18:44:38.587000000
i have the boilerplate identity pages for user creation/login etc. all works fine, users are stored in my mssql backend. i have 2 pages in my project that are both behind an [authorize] tag. the first page you are redirected to after successful log in and it works without an issue. when you click a button on that page to go to the next page, you're asked to sign in again. once logged in for the second time that page displays correctly. i've gone around and around the documentation trying to figure out why the user can't presist its log in status between pages but my knowledge of these systems is very limited and the documentations are often written for people with a lot more technical experience than i have. does anyone have any idea what the problem could be? my program.cs setup relevant to identity is here: builder.services.adddefaultidentity user (options = options.signin.requireconfirmedaccount = true) .addentityframeworkstores mydatabasecontext () .adddefaulttokenproviders(); builder.services.configureapplicationcookie(options = { options.expiretimespan = timespan.fromminutes(20); options.loginpath = /identity/account/login ; options.slidingexpiration = true; }); // add logger services builder.host.useserilog((context, services, configuration) = configuration .readfrom.services(services) .enrich.fromlogcontext() .writeto.console() .writeto.file( log.txt , rollinginterval: rollinginterval.day)); builder.services.adddistributedmemorycache(); // add services to the container. builder.services.addcontrollerswithviews(); builder.services.addrazorpages(); var app = builder.build(); // configure the http request pipeline. if (!app.environment.isdevelopment()) { app.useexceptionhandler( /home/error ); // the default hsts value is 30 days. you may want to change this for production scenarios, see [LINK]. // app.usehsts(); } app.useserilogrequestlogging(); app.usehttpsredirection(); app.usestaticfiles(); app.userouting(); app.useauthentication(); app.useauthorization(); app.maprazorpages(); app.mapcontrollers(); this post sounds very similar to my issue, but i don't see what their solution is suggesting that i haven't done? <a href="[LINK] core identity successful login redirecting back to login page</a>
2024-02-27 23:07:27.510000000
i am afraid that there is no out-of-box method can read all variables in the variable group and pass them to environmentvariables of azurecontainerapps task. to meet your requirement, i suggest that you can use the powershell script to run the rest api: variablegroups - get to list all variable names in the variable group and add them to the string list. then you can define pipeline variable to save the variable list and pass it to azurecontainerapps task. for example: variables: - group: cms-aeh-poc steps: - task: powershell[USER] inputs: targettype: 'inline' script: | $token = $(pat) $url= [LINK] $token = [system.convert]::tobase64string([system.text.encoding]::ascii.getbytes( :$($token) )) $response = invoke-restmethod -uri $url -headers @{authorization = basic $token } -method get -contenttype application/json $string= foreach($variablename in $response.variables.psobject.properties.name) { $env = $variablename=$($variablename) $string= -join( $string , , $env ) } echo $string echo ##vso[task.setvariable variable=variablelist]$string - task: azurecontainerapps[USER] inputs: azuresubscription: 'xx' xxxx environmentvariables: 'build_id=$(build.buildid) $(variablelist)'
2024-03-19 12:40:19.873000000
logic that will allow your user to hit the max length and then delete after code: [USER] fun customtextfield( streamtitle:string, updatetext:(string)- unit ){ val maxlength = 141 textfield( value = streamtitle, onvaluechange = { if (streamtitle.length = maxlength || it.length streamtitle.length) { updatetext(it) } } explanation streamtitle.length = maxlength is the typical logic that will stop the user from entering text once the maxlength is hit. this has a problem that it will not allow the user to delete any text due to the maxlength being hit. the additional logic of it.length streamtitle.length is needed to combat the previously mentioned scenario. when a user deletes a character onvaluechange() is called with new value of it minus the character deleted. meaning that if our user has reached maxlength they will trigger our second conditional and now be allowed to delete characters
2024-02-24 01:33:20.140000000
i have two functions that are called when my bot is added to a server. one fetches the guilds channels and one fetches the guilds roles. i have recently implemented sharding within my environment, and when editing my channel and role fetching functions using broadcasteval() from the shardingmanager with discord.js (im running v14.7.1) i am getting duplication of roles, where it is returning roles multiplied by the amount of shards, to indicate the guild is accessible on each shard which shouldn't be the case. my fetch channels function works fine and does return on one set of channels (not duplicated) from one shard the guild is located on. it's almost exactly the same function: here is the fetch roles function: export const getshardroles = async (guildid) = { const roles = await client.shard.broadcasteval( async (client, context) = { try { const guild = await client.guilds.fetch(context.guildid, false, true); if (guild) { const roles = guild.roles.cache .filter((role) = role.name !== "[USER]") .map((role) = ({ id: role.id, name: role.name })); console.log(shard id: ${client.shard.ids[0]}, roles:, roles); return roles; } } catch (error) { // guild not found on this shard } }, { context: { guildid } } ); console.log("roles:", roles); return roles.flat(); }; this function currently returns all roles but from each shard? what am i missing? how is my channels function working as expected, but my roles function is acting weird? this is my fetch channels function that works as expected for reference: const channels = await client.shard.broadcasteval( async (client, context) = { try { const guild = await client.guilds.fetch(context.guildid); if (guild) { return guild.channels.cache .filter((channel) = channel.type === 0 || channel.type === 5) .map((channel) = ({ id: channel.id, name: channel.name })); } } catch (error) { console.log( no guild channels found with id on any shard: , context.guildid, error ); } }, { context: { guildid } } ); console.log( channels: , channels); return channels.flat(); };[CODE]`
2024-02-14 16:05:18.273000000
operator precedence is important. you can force order of evaluation by parenthesizing: yq e '( .spec.groups[] | select(.name == deeplearning ) | .items[] | select(.name == tritonreleaseversion ) | .default ) = newreleaseversion ' config.yaml first select the items you want to update ( …[] | select(…) | … ), then update them ( … = newreleaseversion ). an alternative way of writing the program is: yq e '( .spec.groups[] | select(.name == deeplearning ).items[] | select(.name == tritonreleaseversion ) ).default = newreleaseversion ' config.yaml with mikefarah's yq , you get: apiversion: kots.io/v1beta1 kind: config metadata: name: enterprise spec: groups: - name: apiauthorization title: api authorization items: - name: apikey title: api key type: password required: true - name: workspaceid title: workspace id type: text required: true - name: deeplearning title: deep learning description: deep learning service options items: - name: tritonreleaseoptions title: release options type: heading - name: tritonreleaseversion title: triton release version type: text default: newreleaseversion shoudl be updated when the triton release is true required: true - name: trackerreleaseversion title: tracker version type: text default: release-v308 shoudl be updated when the tracker is being build as part of the component required: true
2024-03-21 10:38:32.897000000
you can do this with a self-join . join the table with itself, matching rows on id, but only the rows for which expire is null. for each id you will have the data for both december and null as one row. select t1.id, t1.expire, t2.lastwdif, t2.name, t2.vendor from test t1 join test t2 on t1.id = t2.id and t2.expire is null where left(t1.expire, 3) = '12/' order by id; demonstration . note: this will produce a row for every row in december.
2024-02-27 01:46:43.280000000
i am using a nodered flow to read modbus data from a plc and query it to postgresql and mariadb. there is a function which looks at a variable from the holding register wheter it is 0 or 1. it will only query the data if the variable is 1. my modbus plc extracts the data every 500ms. my problem is, that if there are no changes for some hours or days, i need to restart the flow or it wont be able to read out the data, even if the modbus node says active . is there a way to restart a nodered flow via a script or in it itself? tyvm!
2024-02-26 09:33:35.777000000
i have a file named .env.production and .env.development. i can build my app manually using the dev variables but once i call firebase deploy --only functions it rebuilds the my app using the production variables. is there a way to prevent firebase from rebuilding using the production variables? i have tried hosting : { predeploy : vite build --mode development , } scripts : { deploy:dev : vite build --mode development && firebase deploy --only hosting }, thank you!!
2024-02-27 19:46:11.927000000
i am relatively new to golang and i'm facing an issue with a grpc server program i've developed. in this program, upon receiving each grpc request, i'm making an http request to a third-party api. while this setup works for individual requests, i encounter errors when the server receives numerous concurrent requests. the errors i'm encountering are mainly: [eof] [context deadline exceeded (client.timeout exceeded while awaiting headers)] i have a reusable http client implemented as follows: // http.go (utils package) var client http.client var clientmutex sync.mutex func createhttpclient() http.client { clientmutex.lock() defer clientmutex.unlock() var tlsconfig = &tls.config{ insecureskipverify: true, } if client == nil { client = &http.client{ transport: &http.transport{ maxidleconnsperhost: 5, maxidleconns: 50, tlsclientconfig: tlsconfig, }, timeout: time.second 10, } } return client } func init() { client = createhttpclient() } and i'm making post requests using the following function: // http.go (utils package) func postrequest(payload, url string) (http.response, error) { req, err := http.newrequest( post , url, bytes.newbuffer([]byte(payload))) if err != nil { log.error().msgf( error while posting request: %s , err.error()) return nil, err } return client.do(req) } i call the postrequest function from another file in my project: // somefile.go (controllers package) resp, err := utils.postrequest(requestpayload, url) if err != nil { // handle error } defer resp.body.close() i suspect the issue might be related to the keep-alive configuration of the http client. however, i'm not entirely sure how to address this problem. could someone experienced with grpc and http clients in golang guide me on how to resolve these errors and provide insights into what might be causing them? thank you in advance for your help.
2024-03-20 05:31:42.217000000
use a group by clause to aggregate the data. here you want to aggregate the data for each timestamp. check the query: select sum(used), timestamp from usage group by timestamp
2024-03-12 13:17:56.673000000
we have been using spanner and recently noticed a dip in the 'total storage' graph in 'system insights' which is not expected and we would like to know what are the possible causes for the storage to get reduced and then rise up again. there have been no action taken from our end which would cause the storage to reduce and then rise up again. thanks! in spanner system insights the 'total storage' graph shows a dip which should not have been there. there should have been only increased usage pattern.
2024-03-22 10:03:07.597000000
the reason is data_error is exclusively an exception for the io packages, so functions like get will throw dataerror. core language attributes like 'value are not part of the io packages, so they don't use the dataerror exception. instead they use one of the core standard exceptions, constrainterror. the list of standard exceptions, found in the package standard are: constrainterror: exception; programerror : exception; storageerror : exception; taskingerror : exception; exceptions available to io are found in the package ada.ioexceptions and include: statuserror : exception; modeerror : exception; nameerror : exception; useerror : exception; deviceerror : exception; enderror : exception; dataerror : exception; layout_error : exception;
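so to catch a failed integer'value conversion you handle constraint_error rather than data_error; a rough sketch (the procedure and variable names are made up):

with Ada.Text_IO; use Ada.Text_IO;

procedure Read_Number is
   Line : constant String := Get_Line;
   N    : Integer;
begin
   N := Integer'Value (Line);
   Put_Line ("Got" & Integer'Image (N));
exception
   when Constraint_Error =>
      Put_Line ("Not a valid integer: " & Line);
end Read_Number;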
2024-03-07 19:34:53.287000000
i have a use-case of developing several nextjs applications (ssg) and deploying them to aws amplify. i am using aws amplify hosting, directly linking each github repository to an amplify app and configuring a yaml file for the frontend deployment. all applications are deployed and working fine. problem statement: now i need to set up a quality gate in the deployment process, where i have to check certain parameters (the package versions of a few npm modules) before deployment. i am trying to implement this with command hooks. officially it is mentioned that pre- and post-push hooks will get triggered by default in amplify hosting. i have added an amplify/hooks folder and placed two files, pre-push.js and post-push.js, with some console statements (refer to the attached image and build settings yaml). when i try to trigger a build, these scripts are not getting executed. am i missing anything, or are these hooks applicable only for backend execution? is there any better way to handle this use-case with aws amplify for ssg deployments? thanks in advance.
2024-03-06 11:03:22.533000000
for the scene delegate you can use this code:

func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
    // check if there was a tapped shortcut item when the app was closed
    if let shortcutItem = connectionOptions.shortcutItem {
        // do something with shortcutItem
    }
}
2024-02-28 13:35:39.663000000
the -w, --write-out <format> option can be very helpful. you can get all http headers, or a single one:

$ curl -s -w '%{header_json}' [LINK] -o /dev/null
{
  "date": ["sun, 18 feb 2024 13:47:12 gmt"],
  "content-type": ["application/json"],
  "content-length": ["254"],
  "server": ["gunicorn/19.9.0"],
  "access-control-allow-origin": ["*"],
  "access-control-allow-credentials": ["true"]
}

$ curl -s -w '%header{content-type}' [LINK] -o /dev/null
application/json

read more
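if you want to consume the %{header_json} output programmatically, here is a minimal sketch; it assumes curl is on PATH and is new enough (7.83.0+) to support %{header_json}, and the url below is just a placeholder:

import json
import subprocess

# run curl, discard the body, and capture only the json-formatted headers from stdout
result = subprocess.run(
    ["curl", "-s", "-w", "%{header_json}", "-o", "/dev/null", "https://example.com"],
    capture_output=True,
    text=True,
    check=True,
)

headers = json.loads(result.stdout)
# each value is a list, since a header can appear more than once in a response
print(headers.get("content-type", ["<missing>"])[0])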
2024-02-18 13:52:55.037000000
the issue is that currentvalue is set to nil by the call to tonumber(keyget). from the lua manual for tonumber:

tries to convert its argument to a number. if the argument is already a number or a string convertible to a number, then tonumber returns this number; otherwise, it returns nil.

so, even though keyget is not nil, whatever it is, it is still not convertible into a number. add a line to log the value of keyget after local keyget = redis.call(...):

redis.log(redis.LOG_NOTICE, keyget)

hopefully, that will tell you why the value of keyget isn't working with tonumber.
2024-03-05 09:42:25.087000000
i can make code like this, and it works:

// template class
template <class T>
class TemplateClass {
public:
    void print();
};

template <class T>
void TemplateClass<T>::print() {
    return;
}

// template function
class Print {
public:
    template <class T>
    void templateFunction();
};

template <class T>
void Print::templateFunction() {
    return;
}

but how can i make this work in a similar way?

// template class and function
class TemplateClass {
public:
    template <class T>
    void templateFunction();
};

template <class T, class T1>
TemplateClass<T>::templateFunction() {
    return;
}
2024-03-22 09:19:54.557000000
composite is special. do you get the same number of results when you run each inner query manually? maybe some of them just have nextRecordsUrl set but you ignore it and retrieve only the 1st page of results.
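as an illustration of what following nextRecordsUrl looks like, here is a minimal sketch against the plain query rest endpoint; the instance url, api version and token are placeholders, and this is not the composite api itself, only the paging behaviour the answer refers to:

import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # placeholder
HEADERS = {"Authorization": "Bearer <access-token>"}     # placeholder

def fetch_all(soql):
    """follow nextRecordsUrl until the query result is fully drained."""
    records = []
    url = f"{INSTANCE_URL}/services/data/v59.0/query"
    params = {"q": soql}
    while url:
        resp = requests.get(url, headers=HEADERS, params=params)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body["records"])
        # nextRecordsUrl is a relative path; it is absent once 'done' is true
        next_path = body.get("nextRecordsUrl")
        url = f"{INSTANCE_URL}{next_path}" if next_path else None
        params = None  # only the first request carries the soql text
    return records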
2024-03-24 07:20:22.590000000
i want to create a dashboard page where the admin can create accounts for teachers, students, and respondents. when the admin selects the account type, a specific form will appear on the same page. i tried using ngmodel, but i encountered the error mentioned above.

create-accounts.component.html

<section class="p-6 rounded-md bg-slate-400 mt-4">
  <h1 class="mb-2 text-3xl">creer comptes</h1>
  <select [(ngModel)]="selectAccount" name="selectAccount">
    <option value="student" selected>etudiant</option>
    <option value="teacher">enseignant</option>
    <option value="resp">responsable</option>
  </select>
  <div *ngIf="selectAccount === 'student'"><app-student-form></app-student-form></div>
  <div *ngIf="selectAccount === 'teacher'"><app-teacher-form></app-teacher-form></div>
  <div *ngIf="selectAccount === 'resp'"><app-resp-form></app-resp-form></div>
</section>

create-accounts.component.ts

import { Component } from '@angular/core';
import { TeacherFormComponent } from '../../components/form-components/teacher-form/teacher-form.component';
import { StudentFormComponent } from '../../components/form-components/student-form/student-form.component';
import { RespFormComponent } from '../../components/form-components/resp-form/resp-form.component';

@Component({
  selector: 'app-create-accounts',
  standalone: true,
  imports: [FontAwesomeModule, TeacherFormComponent, StudentFormComponent, RespFormComponent],
  templateUrl: './create-accounts.component.html',
  styleUrl: './create-accounts.component.css'
})
export class CreateAccountsComponent {
  selectAccount = "student-form";
}

app.component.ts

import { Component } from '@angular/core';
import { RouterOutlet } from '@angular/router';
import { CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';
import { LoginComponent } from './pages/login/login.component';
import { RecoverPasswordComponent } from './pages/recover-password/recover-password.component';
import { CreateAccountsComponent } from './pages/admin/administrator/create-accounts/create-accounts.component';

@Component({
  selector: 'app-root',
  standalone: true,
  imports: [RouterOutlet, LoginComponent, RecoverPasswordComponent, CreateAccountsComponent],
  templateUrl: './app.component.html',
  styleUrl: './app.component.css',
  schemas: [CUSTOM_ELEMENTS_SCHEMA]
})
export class AppComponent {
  title = 'my-app';
}
2024-03-15 21:26:08.903000000
try adding this code. this should make the scrollbar always visible on firefox:

.rc-virtual-list-holder {
  scrollbar-width: thin;
  scrollbar-color: rgba(0, 0, 0, .3) transparent;
}

.rc-virtual-list-holder::-webkit-scrollbar {
  display: none;
}
2024-03-06 15:46:45.457000000
for my e2e tests (using jest, mongoose and nestjs), using --runInBand and awaiting the connection.db.dropDatabase() didn't work. i was still getting 'database is in the process of being dropped' within the next test case. as an alternative, what i did was, instead of dropping the database, just clearing all collections:

const collections = (await connection.db.listCollections().toArray()).filter(val => val.name !== 'system.views');
for (const collection of collections) {
  const collectionInDb = connection.collection(collection.name);
  await collectionInDb.deleteMany({});
}
2024-02-27 16:44:30.963000000
this is an example of what i have in the log file:

log 1
log 2
log 2
log 2
log 3
log 4

and this is what i want to have:

log 1
log 2
log 3
log 4

how can i filter out repeated new logs?
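a minimal sketch of one way to express the transformation above, collapsing consecutive duplicate lines; the question does not say which tool or language is in use, so this is only an illustration:

def dedupe_consecutive(lines):
    """yield each line, skipping lines identical to the one immediately before."""
    previous = None
    for line in lines:
        if line != previous:
            yield line
        previous = line

sample = ["log 1", "log 2", "log 2", "log 2", "log 3", "log 4"]
print(list(dedupe_consecutive(sample)))
# ['log 1', 'log 2', 'log 3', 'log 4']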
2024-02-21 19:23:53.390000000
i have a very basic python cloud function (with cors enabled) that looks like:

[USER].http
def functionwithcorsenabled(event):
    method = event.method
    endpoint = event.path
    routekey = method + " " + endpoint
    logging.info(f"route key : {routekey}")

    if method == "options":
        # allows get requests from any origin with the content-type header
        # and caches the preflight response for 3600s
        headers = {
            'access-control-allow-origin': '*',
            'access-control-allow-methods': 'post, get',
            'access-control-allow-headers': 'content-type, x-appcheck',
            'access-control-max-age': '3600'
        }
        return '', 204, headers

    # set cors headers for the main request
    headers = {
        'access-control-allow-origin': '*',
    }

    if routekey == "get /test":
        return testcall(), 200, headers

    return {"error": "not allowed"}, 401

def testcall():
    logging.info("test !")
    return {"message": "success"}

i have tried all combinations of return:

return testcall(), 200, headers to match the documentation
return testcall(), headers with testcall() returning the status code in addition to the dict
return test_call() without headers

and so on... when i check my cloud logging logs, i do see everything successfully running in python, the "test !" log with the return status code 200. in flutter web i see an undefined error:

clientexception: xmlhttprequest error., uri=[LINK]

how do i make a successful http request in flutter web from a python cloud function?
2024-02-16 21:18:33.510000000