|.net|docker|docker-compose|open-telemetry|open-telemetry-collector|
I'm running several applications on GKE with a Spring Cloud Gateway in front of them, behind a Google Cloud Load Balancer. While debugging I noticed that most traces reported to Google Cloud Trace are missing their root span. The Gateway (correctly) reads the `X-Cloud-Trace-Context` header and reports its own span with a parent trace ID, but that parent span does not actually exist in the trace. I'm using Google's own Spring integration for tracing and can't find any span reported by the load balancer, so I'm inclined to believe something is wrong there. Do I have to activate tracing on the ingress/load balancer somewhere? I can't find any documentation on that.
No Traces reported by GCP Load Balancer to Google Cloud Trace
|spring-cloud-sleuth|gcp-load-balancer|google-cloud-trace|
You are assuming that submitting a parallel stream operation as a job to another executor service will make the Stream implementation use that executor service. This is not the case.

There is an undocumented trick to make a parallel stream operation use a different Fork/Join pool by initiating it from a worker thread of that pool. But the executor service producing virtual threads is not a Fork/Join pool. So when you initiate the parallel stream operation from a virtual thread, the parallel stream will use the [common pool] for the operation.

In other words, you are still using platform threads, except for the one initiating virtual thread, as the Stream implementation also performs work in the caller thread. So when I use the following program

```java
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class ParallelStreamInsideVirtualThread {
    public static void main(String[] args) throws Exception {
        var executorService = Executors.newVirtualThreadPerTaskExecutor();
        var job = executorService.submit(() -> {
            Thread init = Thread.currentThread();
            return IntStream.rangeClosed(0, 10).parallel()
                .peek(x -> printThread(init))
                .mapToObj(String::valueOf)
                .toList();
        });
        job.get();
    }

    static void printThread(Thread initial) {
        Thread t = Thread.currentThread();
        System.out.println((t.isVirtual() ? "Virtual " : "Platform ")
            + (t == initial ? "(initiator)" : t.getName()));
    }
}
```

it will print something like

```none
Virtual (initiator)
Virtual (initiator)
Platform ForkJoinPool.commonPool-worker-1
Platform ForkJoinPool.commonPool-worker-3
Platform ForkJoinPool.commonPool-worker-2
Platform ForkJoinPool.commonPool-worker-4
Virtual (initiator)
Platform ForkJoinPool.commonPool-worker-1
Platform ForkJoinPool.commonPool-worker-3
Platform ForkJoinPool.commonPool-worker-5
Platform ForkJoinPool.commonPool-worker-2
```

In short, you are not measuring the performance of virtual threads at all.

[common pool]: https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/util/concurrent/ForkJoinPool.html#commonPool()
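If the goal is to actually put the per-element work on virtual threads, one option is to skip the parallel stream entirely and submit one task per element to the virtual-thread executor. This is a minimal sketch, not a drop-in replacement for the original benchmark; the class and method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadFanOut {

    // Runs one small task per virtual thread and collects the results in order.
    static List<String> fanOut() throws Exception {
        // try-with-resources closes the executor and waits for all tasks (Java 21)
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Callable<String>> tasks = IntStream.rangeClosed(0, 10)
                    .<Callable<String>>mapToObj(i -> () -> String.valueOf(i))
                    .toList();
            // invokeAll blocks until all tasks finish; futures come back in submission order
            List<String> results = new ArrayList<>();
            for (var future : executor.invokeAll(tasks)) {
                results.add(future.get());
            }
            return results;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fanOut());
    }
}
```

Here every task really does run on its own virtual thread, so a benchmark built on this shape measures virtual threads rather than the common Fork/Join pool.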
## Background

* Gradle project that uses the maven-publish and jfrog artifactory plugins.
* Upgrading from Gradle 5.6.4 to Gradle 6.9.4 (reproducible when upgrading to 6.0).

## Problem

Running the build, e.g.

```
./gradlew clean build
```

fails before any tasks are run, because of:

```
[ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] Caused by: java.lang.NoSuchMethodError: org.gradle.api.publish.maven.internal.publication.MavenPublicationInternal.getPublishableFiles()Lorg/gradle/api/file/FileCollection;
```

## Code (excerpted)

```
apply plugin: "maven-publish"
apply plugin: "com.jfrog.artifactory"

buildscript {
    dependencies {
        classpath "org.jfrog.buildinfo:build-info-extractor-gradle:4.8.1"
    }
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}

artifactory {
    publish {
        defaults {
            publications('mavenJava') // This is the line that throws the exception
        }
    }
}
```

## Notes

[Someone else ran into the same issue](https://discuss.gradle.org/t/after-updated-from-5-4-1-to-6-6-1-getting-what-went-wrong-failed-to-notify-build-listener-org-gradle-api-file-filecollection-org-gradle-api-publish-ivy-internal-publication-ivypublicationinternal-getpublishablefiles/42640) and the solution was "Looks I have to update the plugin in root project build.gradle", which didn't quite get me there, since it wasn't clear which plugin they were talking about.
|c|operating-system|kill-process|
The [answer by Mark](https://stackoverflow.com/a/65818022/2745495) shows how to disable the colorings completely. But if you instead want to customize the colors, you can adjust various properties (documented [here][1]) in your `workbench.colorCustomizations` setting:

- `list.errorForeground`: Files containing errors.
- `list.warningForeground`: Files containing warnings.
- `gitDecoration.addedResourceForeground`: Added Git files.
- `gitDecoration.modifiedResourceForeground`: Modified Git files.
- Several other Git-related settings; see the documentation linked above.

Note that these will also affect the filename coloring in the sidebar/explorer view. I don't think there's a way to separate the two, as they're both controlled by the same setting.

Example in settings.json:

    "workbench.colorCustomizations": {
        "[Monokai]": { // Optional - limit to a specific theme
            "list.errorForeground": "#ff00ff" // make errors purple just for fun
        }
    }

[1]: https://code.visualstudio.com/api/references/theme-color
The problem is that my code works, but when the file is downloaded it says the file type is not supported. Here is my code:

```
var urlContract = "https://dashboard-8tl3.onrender.com/%D0%90%D0%BD%D0%BD%D0%BE%D1%82%D0%B0%D1%86%D0%B8%D1%8F_2024-03-11_144401.png"

function downloadContract(urlContract) {
  if (urlContract.includes("png")) {
    const blob = new Blob([urlContract], { type: "image/png" });
    const link = document.createElement("a");
    link.setAttribute("href", URL.createObjectURL(blob));
    link.setAttribute("download", "Contract.png");
    link.click();
  }
  if (urlContract.includes("jpeg")) {
    const blob = new Blob([urlContract], { type: "image/jpeg" });
    const link = document.createElement("a");
    link.setAttribute("href", URL.createObjectURL(blob));
    link.setAttribute("download", "Contract.jpeg");
    link.click();
  }
  if (urlContract.includes("pdf")) {
    const blob = new Blob([urlContract], { type: "application/pdf" });
    const link = document.createElement("a");
    link.setAttribute("href", URL.createObjectURL(blob));
    link.setAttribute("download", "Contract.pdf");
    link.click();
  }
}
```
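A likely culprit in the code above: `new Blob([urlContract], ...)` wraps the URL *string* itself, so the downloaded file contains text, not the image or PDF bytes, and viewers report an unsupported file type. A sketch of fetching the real bytes first (the `typeFromUrl` helper name is made up, and the browser part assumes the server allows cross-origin requests):

```javascript
// Infer the MIME type and extension from the URL, mirroring the original
// includes() checks; returns null if nothing matches.
// (Pure helper so it can be exercised without a browser.)
function typeFromUrl(url) {
  const types = {
    png: "image/png",
    jpeg: "image/jpeg",
    pdf: "application/pdf",
  };
  for (const [ext, mime] of Object.entries(types)) {
    if (url.includes(ext)) return { ext, mime };
  }
  return null;
}

// Browser-only part: download the actual bytes, then save them.
async function downloadContract(urlContract) {
  const info = typeFromUrl(urlContract);
  if (!info) return;
  const response = await fetch(urlContract);
  const blob = await response.blob(); // real file contents, not the URL string
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = `Contract.${info.ext}`;
  link.click();
  URL.revokeObjectURL(link.href);
}
```

Note the blob's MIME type then comes from the HTTP response itself, so the explicit `type` option is no longer needed.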
The framework is React. The question is: how can I upload (an image or PDF) when clicking on it, inside the `<img>` and `<div>` tags?
|reactjs|
I work with the Spring framework and I've encountered a problem with mapping a set of roles from a many-to-many table. Let me show you my classes.

```
public class Role {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @NonNull
    private String authority;

    @ManyToMany(mappedBy = "roles", fetch = FetchType.EAGER/*, cascade=CascadeType.MERGE*/)
    private Set<AppUser> appUsers;

    public Role(@NonNull String authority) {
        this.authority = authority;
    }

    public Role() {
    }
}
```

```
public class AppUser implements Serializable {

    @Serial
    private static final long serialVersionUID = -8357820005040969853L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    // @NonNull
    @Column(unique = true)
    private /*final*/ String username;

    // @NonNull
    private /*final*/ String password;

    @ManyToMany(fetch = FetchType.EAGER/*, cascade=CascadeType.MERGE*/)
    // @NonNull
    @JoinTable(name = "users_roles",
            joinColumns = @JoinColumn(name = "role_id"/*, referencedColumnName = "id"*/),
            inverseJoinColumns = @JoinColumn(name = "user_id"/*, referencedColumnName = "id"*/))
    private Set<Role> roles;
}
```

Here is the saving process:

```
@Bean
// @Transactional
public CommandLineRunner run(RoleRepository roleRepository, UserRepository userRepository,
                             PasswordEncoder passwordEncoder) {
    return args -> {
        if (roleRepository.findByAuthority("ADMIN").isEmpty()) {
            Role admin = roleRepository.save(new Role("ADMIN"));
            Role user = roleRepository.save(new Role("USER"));
            userRepository.save(new AppUser("admin", passwordEncoder.encode("admin"), new HashSet<>() {
                {
                    add(admin);
                    add(user);
                }
            }));
        }
    };
}
```

And here is the moment when I want to retrieve the roles for the user:

```
@Component
public class UserInterceptor implements HandlerInterceptor {

    private final UserRepository userRepository;
    private final RoleRepository roleRepository;

    public UserInterceptor(UserRepository userRepository, RoleRepository roleRepository) {
        this.userRepository = userRepository;
        this.roleRepository = roleRepository;
    }

    @Override
    public void postHandle(@NonNull HttpServletRequest request, @NonNull HttpServletResponse response,
                           @NonNull Object handler, ModelAndView modelAndView) {
        if (modelAndView != null) {
            Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
            Optional<AppUser> appUser = userRepository.findByUsername(authentication.getName());
            if (appUser.isPresent()) {
                Set<Role> set = roleRepository.findByAppUsers(new HashSet<>() {
                    {
                        add(appUser.get());
                    }
                });
                log.info("Roles for user {}: {}", appUser.get().getUsername(), appUser.get().getRoles());
                modelAndView.addObject("username", appUser.get().getUsername());
                modelAndView.addObject("roles", appUser.get().getRoles());
            } else
                modelAndView.addObject("username", "anonymous");
        }
    }
}
```

The most interesting part: when I log into the system as "admin" with the password "admin", it finds my user in the database and I actually have a value inside `Optional<AppUser> appUser`. After checking its presence, I create a simple set just to check whether it maps the roles for particular users, and it does; after that I have a set of roles retrieved by the User object. And now comes the magic part: if I try to get the roles from the user via `appUser.get().getRoles()`, I get an EMPTY set of roles in the `appUser` object. I could retrieve all the roles using the extra set created through the repository, but I feel like it shouldn't work this way. Can anyone help me, please? I would appreciate it and it would help me understand the Spring framework. Thanks in advance.

P.S. Here is the table from the database; and yes, the names of the columns are messed up — I've fixed that already.

[![Database](https://i.stack.imgur.com/jsPHv.png)](https://i.stack.imgur.com/jsPHv.png)
Spring does not map set of roles
|java|spring|database|spring-boot|many-to-many|
How do I specify the name of the span operation in the OpenTelemetry package for the Go language? Span names from the documentation list: https://develop.sentry.dev/sdk/performance/span-operations/

I tried to specify it through an attribute, but that just creates another attribute with the given name:

```
ctx, span := trace.Tracer.Start(ctx, "name", trace.WithAttributes(attribute.String("Operation", "http.server")))
defer span.End()
```
Update for anyone searching for the same thing: since Wagtail 2.12, the original answer does not work, as `stream_data` as a property of `StreamValue` was deprecated. You can use the following to achieve the same thing:

```
def find_block(block_name):
    for page in YourPageModel.objects.all():
        for block in page.body:
            if block.block_type == block_name:
                print('@@@ Found one: ' + str(block.block_type) + ' at page: ' + str(page))
```
[![enter image description here][1]][1]

**Error Message**:

```
The portal is having issues getting an authentication token. The experience rendered may be degraded.
Additional information from the call to get a token:
Extension: Microsoft_Azure_Monitoring
Resource: loganalyticsapi
Details: The extension 'Microsoft_Azure_Monitoring' has not defined the resource access for resource 'loganalyticsapi' in the extension's configuration, or in the portal's configuration.
```

When I create the resource in Azure, the access is not showing in the IAM of that resource, and I'm also getting an authentication issue. I'm the owner of my subscription.

[1]: https://i.stack.imgur.com/Vl7Tq.png
The portal is having issues getting an authentication token. The experience rendered may be degraded
|azure|azure-keyvault|azure-authentication|
GKE Autopilot is adjusting my resource requests because of my sidecar container:

```
autopilot.gke.io/resource-adjustment:
{
  "input": {
    "containers": [
      {
        "limits": { "cpu": "30m", "memory": "100Mi" },
        "requests": { "cpu": "30m", "memory": "100Mi" },
        "name": "agones-gameserver-sidecar"
      },
      {
        "limits": { "cpu": "300m", "ephemeral-storage": "2Gi", "memory": "750Mi" },
        "requests": { "cpu": "300m", "ephemeral-storage": "2Gi", "memory": "750Mi" },
        "name": "websocket-server"
      }
    ]
  },
  "output": {
    "containers": [
      {
        "limits": { "cpu": "30m", "ephemeral-storage": "1Gi", "memory": "100Mi" },
        "requests": { "cpu": "30m", "ephemeral-storage": "1Gi", "memory": "100Mi" },
        "name": "agones-gameserver-sidecar"
      },
      {
        "limits": { "cpu": "470m", "ephemeral-storage": "2Gi", "memory": "750Mi" },
        "requests": { "cpu": "470m", "ephemeral-storage": "2Gi", "memory": "750Mi" },
        "name": "websocket-server"
      }
    ]
  },
  "modified": true
}
```

Is there any way to prevent this? I don't see why my CPU request is being scaled up when I'm within the resource request limits.
GKE Autopilot scales up workload due to sidecar resource requests
|kubernetes|google-cloud-platform|google-kubernetes-engine|
I'm building a Spring Boot application that has friendship functionality. I have `User` and `Friendship` entities and a `FriendshipDTO` (for simplicity I removed the JPA annotations):

```
public class User {
    private Long id;
    private String email;
    private String firstName;
    private String lastName;
    private String phone;
    private String login;
    ....
}

public class Friendship {
    private Long id;
    @ManyToOne
    private User fromUser;
    @ManyToOne
    private User toUser;
    private FriendStatus friendStatus;
    private LocalDateTime createdAt;
}

public class FriendshipDTO {
    private Long id;
    private User friend;
}
```

I want to get all friendships by user id and map them to `FriendshipDTO`:

```
@Query("select new com.dto.FriendshipDTO(fr.id, " +
       "  case when fr.fromUser.id = :id then fr.toUser " +
       "  else fr.fromUser end) " +
       "from Friendship fr " +
       "join fr.fromUser fs on fs.id = st.id " +
       "join fr.toUser tt on tt.id = st.id " +
       "where fr.friendStatus = 'ACCEPTED' " +
       "  and (fr.toUser.id = :id or fr.fromUser.id = :id)")
Page<FriendshipDTO> findAllFriendshipsByUserId(@Param("id") Long id, Pageable pageable);
```

It seems JPA doesn't like `case when ... then` and throws an exception:

```
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: java.lang.ClassCastException: class org.hibernate.persister.entity.SingleTableEntityPersister cannot be cast to class org.hibernate.metamodel.mapping.BasicValuedMapping (org.hibernate.persister.entity.SingleTableEntityPersister and org.hibernate.metamodel.mapping.BasicValuedMapping are in unnamed module of loader 'app')] with root cause
java.lang.ClassCastException: class org.hibernate.persister.entity.SingleTableEntityPersister cannot be cast to class org.hibernate.metamodel.mapping.BasicValuedMapping (org.hibernate.persister.entity.SingleTableEntityPersister and org.hibernate.metamodel.mapping.BasicValuedMapping are in unnamed module of loader 'app')
```

What am I doing wrong?
how to print an array forward and backward based on boolean flag true/false using single for loop in java?
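One way to do what the title asks is to derive the index from the flag inside a single loop. A minimal sketch; the class and method names are made up for illustration:

```java
public class ArrayPrinter {

    // Builds the output string instead of printing directly, so it is easy to test.
    static String traverse(int[] arr, boolean forward) {
        StringBuilder sb = new StringBuilder();
        // One loop: the flag decides whether we read arr[i] or arr[length-1-i].
        for (int i = 0; i < arr.length; i++) {
            int index = forward ? i : arr.length - 1 - i;
            if (i > 0) sb.append(' ');
            sb.append(arr[index]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] numbers = {1, 2, 3, 4};
        System.out.println(traverse(numbers, true));   // 1 2 3 4
        System.out.println(traverse(numbers, false));  // 4 3 2 1
    }
}
```

The loop itself always runs forward; only the computed index changes direction, so no second loop is needed.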
I notice this API call: [`CMFCPropertyGridProperty::AdjustButtonRect`][1]. But how do I add a button to a `CMFCPropertyGridProperty`? I would like to show a custom button on the right and manage clicking the button. Related question: https://stackoverflow.com/q/78127577/2287576 [1]: https://learn.microsoft.com/en-us/cpp/mfc/reference/cmfcpropertygridproperty-class?view=msvc-170#adjustbuttonrect [2]: https://learn.microsoft.com/en-us/cpp/mfc/reference/cmfcpropertygridproperty-class?view=msvc-170#hasbutton
You are trying to assign all props to state instead of only `suggestionsList`. Also, you don't need to put `suggestionsList` into state at all: that makes an additional mutable variable and can lead to potential bugs. Just use it from props directly. Try this:

```
// Write your code here
import {Component} from 'react'
import SuggestionItem from '../SuggestionItem'
import './index.css'

class GoogleSuggestions extends Component {
  state = {searchInput: ''}

  showoptions = event => {
    this.setState({searchInput: event.target.value})
  }

  render() {
    const {searchInput} = this.state
    const {suggestionsList} = this.props
    console.log(typeof suggestionsList)
    return (
      <div className="bg-container">
        <img
          className="googleLogo"
          src="https://assets.ccbp.in/frontend/react-js/google-logo.png"
          alt="google logo"
        />
        <div className="input-container">
          <div>
            <img
              className="search-icon"
              src="https://assets.ccbp.in/frontend/react-js/google-search-icon.png"
              alt="search icon"
            />
            <input
              type="search"
              value={searchInput}
              onChange={this.showoptions}
              className="input"
              placeholder="Search Google"
            />
          </div>
          <ul className="ul-cont">
            {suggestionsList.map(eachItem => (
              <SuggestionItem itemDetails={eachItem} key={eachItem.id} />
            ))}
          </ul>
        </div>
      </div>
    )
  }
}

export default GoogleSuggestions
```
This builds a full-text `CONTAINS` condition that requires every word, with a prefix match on the last one (`'4S Iphone'` becomes `'"4S" And "Iphone*"'`):

```
DECLARE @fullTextCondition NVARCHAR(1000) = '4S Iphone'

SET @fullTextCondition = '"' + REPLACE(@fullTextCondition, ' ', '" And "') + '*"'

SELECT *
FROM Product
WHERE CONTAINS(ProductName, @fullTextCondition);
```
I am trying to make a CountDownTimer for my Android app, but every time I try to start it using just `timer.start()`, it starts multiple timers. I don't know why. My code for this is: ``` val timer = object: CountDownTimer(5000, 1000) { override fun onTick(millisUntilFinished: Long) { println(millisUntilFinished) if (new == null) { skipTimeButton.visibility = View.GONE exoSkip.visibility = View.VISIBLE disappeared = false return } } override fun onFinish() { val skip = currentTimeStamp skipTimeButton.visibility = View.GONE exoSkip.visibility = View.VISIBLE disappeared = true } } timer.start() ```
I'm trying to fetch APIs and combine the JSON objects into a single variable array that I can loop through. Using `.push`, my variable array ends up as:

```
[ [ {"a":"1"} ], [ {"b":"2"} ] ]
```

when I want this:

```
[ {"a":"1"}, {"b":"2"} ]
```

Here's my trimmed-down code:

```
var combinedJson = [];

const r1 = fetch(firstJson).then(response => response.json());
const r2 = fetch(secondJson).then(response => response.json());

Promise.all([r1, r2])
  .then(([d1, d2]) => {
    combinedJson.push(d1, d2);
    console.log(combinedJson);
  })
  .catch(error => {
    console.error(error);
  });
```
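The nesting happens because each response is itself an array, and `push(d1, d2)` appends the arrays as single elements. Spreading them appends their *elements* instead. A sketch (the `combine` helper name is made up; in the actual code it would just be `combinedJson.push(...d1, ...d2)` inside the `Promise.all` callback):

```javascript
// Flattens two arrays of objects into one array of objects.
function combine(d1, d2) {
  const combinedJson = [];
  combinedJson.push(...d1, ...d2); // equivalent alternatives: d1.concat(d2), [...d1, ...d2]
  return combinedJson;
}
```

Any of the three forms shown in the comment produces the flat `[ {"a":"1"}, {"b":"2"} ]` shape.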
```
AvroParquetWriter.<GenericRecord>builder(filePath)
    .withSchema(schema)
    .withCompressionCodec(CompressionCodecName.SNAPPY)
    .withConf(configuration)
    .withDataModel(GenericData.get())
    .withWriteMode(Mode.OVERWRITE)
    .withRowGroupSize(8 * 1024 * 1024)
    .withPageSize(64 * 1024 * 1024)
    .build()
```

For the path I am using logic like `path = "hdfsLocation" + String.format(tid % rowsPerFile, counter / rowsPerFile) + "_parquet"`.

With this I am able to achieve a file size of 382 KB, but I need files of around 100 MB. Please share a solution: I expected a file size of 100 MB and got 382.4 KB.
I am facing an issue with Parquet file writing to HDFS in Flink, where the Parquet file size is around 382 KB. I want the Parquet files to be around 100 MB
|apache|apache-flink|parquet|
App theme: `Theme.Material3.DayNight.NoActionBar`

This is the manifest:

```
<activity
    android:name=".ui.activities.BaseActivity"
    android:exported="false"
    android:label="@string/title_activity_base"
    android:theme="@style/AppTheme" />
<activity
    android:name=".ui.activities.MainActivity"
    android:exported="true"
    android:screenOrientation="portrait"
    android:theme="@style/AppTheme">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
```
Strange black bar at the bottom of the screen android
|android|android-constraintlayout|
Processing a list is quite easy with a `for` loop:

```
@echo off
setlocal
set /P "op=Enter the number(s, separated by space or comma): "
echo DEBUG: %op%
for %%i in (%op%) do call :op%%i 2>nul || goto :fail
goto :eof

:op1
echo one
goto :eof

:op2
echo two
goto :eof

:op3
echo three
goto :eof

:op4
echo four
goto :eof

:fail
echo wrong input
goto :eof
```

Some more verification of correct input might be desirable.
I wish to write a function that uses the `trace()` method inside it, like this:

```
library(IRanges)

mf <- function(){
    insert.expr <- quote(message("tracing ..."))
    trace(what = "findOverlaps",
          signature = c("IntegerRanges", "IntegerRanges"),
          tracer = insert.expr,
          edit = TRUE,
          print = FALSE)
}
```

When I executed this function, I found the following situation (marked by the arrow):

[enter image description here](https://i.stack.imgur.com/gocX4.png)

When I call the function I want to trace, the following error is thrown:

```
mf()
query <- IRanges(c(1, 3, 9), c(1, 4, 11))
subject <- IRanges(c(2, 2, 10), c(2, 4, 12))
findOverlaps(query, subject, type = "within")
```

[enter image description here](https://i.stack.imgur.com/iBYTf.png)

I looked at the source code of the `trace()` method and found that the problem occurs in `methods:::.TraceWithMethods()`. In RStudio, the problematic code appears on line 126; I think the `substitute()` function should not be used there. The problem was indeed solved when I changed `substitute(tracer)` to `tracer`.

[enter image description here](https://i.stack.imgur.com/eaVPQ.png)

I'm not sure if this is a bug in the method itself; if not, I'm hoping to find a workaround here.
I am trying to dynamically generate a sidebar using the CoreUI template with Angular 17, and I get an ExpressionChangedAfterItHasBeenCheckedError:

```
ERROR Error: NG0100: ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: 'undefined'. Current value: 'disabled'. Expression location: _SidebarNavLinkComponent component It seems like the view has been created after its parent and its children have been dirty checked. Has it been created in a change detection hook? Find more at https://angular.io/errors/NG0100
    at throwErrorIfNoChangesMode (core.mjs:11912:11)
    at bindingUpdated (core.mjs:17587:17)
    at ɵɵproperty (core.mjs:20396:9)
    at SidebarNavLinkComponent_Template (coreui-angular.mjs:13043:14)
    at executeTemplate (core.mjs:12263:9)
    at refreshView (core.mjs:13490:13)
    at detectChangesInView (core.mjs:13714:9)
    at detectChangesInViewIfAttached (core.mjs:13677:5)
    at detectChangesInComponent (core.mjs:13666:5)
    at detectChangesInChildComponents (core.mjs:13727:9)
```

I'm trying to update the CoreUI sidebar dynamically.
Here is my view, layout.component.html:

```
<!--sidebar-->
<c-sidebar #sidebar="cSidebar" class="d-print-none sidebar sidebar-fixed" id="sidebar" visible>
  <c-sidebar-brand
    [brandFull]="{ src: 'assets/img/brand/Success-logo.svg', width: 200, height: 46, alt: 'Success sarl Logo' }"
    [brandNarrow]="{ src: 'assets/img/brand/coreui-signet-white.svg', width: 46, height: 46, alt: 'Success sarl Logo' }"
    routerLink="./" />
  <ng-scrollbar pointerEventsMethod="scrollbar">
    <c-sidebar-nav [navItems]="navItems" dropdownMode="close" />
  </ng-scrollbar>
  <c-sidebar-toggler *ngIf="!sidebar.narrow" toggle="unfoldable" cSidebarToggle="sidebar" />
</c-sidebar>

<!--main-->
<div class="wrapper d-flex flex-column min-vh-100 bg-light dark:bg-transparent">
  <!--app-header-->
  <app-default-header class="mb-4 d-print-none header header-sticky" position="sticky" sidebarId="sidebar" />
  <!--app-body-->
  <div class="body flex-grow-1 px-3">
    <c-container breakpoint="lg" class="h-auto">
      <router-outlet />
    </c-container>
  </div>
  <!--app footer-->
  <app-default-footer />
</div>
```

Here is my layout.component.ts:

```
export class UserLayoutComponent implements OnInit {
  public navItem?: Observable<any>;
  public navItems: any;
  public userAuthenticated = false;
  public showView: any;

  constructor(
    private _authService: AuthService,
    private _repository: RepositoryService,
    private route: ActivatedRoute,
    private _router: Router,
    private sideBarGenerateService: SideBarGenerateService,
  ) {
    this._authService.loginChanged.subscribe(userAuthenticated => {
      this.userAuthenticated = userAuthenticated;
    })
  }

  ngOnInit(): void {
    this._authService.isAuthenticated().then(userAuthenticated => {
      this.userAuthenticated = userAuthenticated;
    });
    this._authService.checkUserClaims(Constants.adminClaims).then(res => {
      if (!res) {
        let user = this._authService.getUserInfo();
        this._repository.getData(`api/v1/UserHasContrat/users/${this.route.snapshot.queryParams['filter']}`)
          .subscribe((response: ApiHttpResponse) => {
            if (!(response.body?.every((elem: any) => (user.profile.sub).includes(elem?.userId)))) {
              this._router.navigate(['/dashboard']);
            }
          });
      }
    });
    setTimeout(() => {
      this.navItems = this.sideBarGenerateService.getSideBar();
    }, 0)
  }
}
```

My service:

```
getSideBar = () => {
  // Get claims of user
  this.userClaims = this._authService.getUserInfo()['profile']['role'];
  return navItems.filter(this.isToBeDisplayed);
}

// Filter items to display
isToBeDisplayed = (value: any) => {
  if (value.hasOwnProperty('children') && value.children.length > 0) {
    value.children = value.children.filter((currentValue: any) => {
      if (currentValue?.attributes) {
        if ((currentValue.attributes['role'].every((elem: string) => this.userClaims.includes(elem)))) {
          return currentValue;
        }
      }
    })
    return value;
  } else {
    return value;
  }
}
```

Please help me check this code; I can't get a result without ExpressionChangedAfterItHasBeenCheckedError.
Hi. Basically, I guess you created the table (blogs) before and then dropped it to run the migration again. Note that every time you create a new table, a row is also added to the migrations table. So in this case:

1. Be very sure you dropped the table.
2. Then open the migrations table and delete the row for blogs.

Then you're good to go.
I had to ensure, in a PHP and MySQL application, that two different tables A and B, which share a common document_number column, keep that column unique across a UNION ALL of the two tables, meaning its values must appear in unique and consecutive order. Example:

1. I make an A document and give it number=1
2. I make an A document and give it number=2
3. I make a B document and give it number=3
4. I make an A document and give it number=4

I have made an N (Numbers) table with just one column, number, and I have used BEFORE INSERT and AFTER DELETE triggers on A and B to insert/delete the document number in the N table. Number is the primary key of table N, so it is also unique! What am I missing? Should I also make the document number column in A and B a foreign key pointing to number in Numbers? Please help.

Table A has id, number, date, client_id. We obtain table A from several D documents. We have ADetails = A_ID, D_ID. D modifies the stock. Table B has id, number, date, client_id, value. Table B already existed and modified the stock by itself. DDetails and BDetails have product, quantity and value. Tables B and D have similar fields but have different number ranges and are different entities. Tables A, B, D and their details tables have already been implemented as different tables. So I have made a table Numbers with a primary key number into which I copy the number from A and B with triggers. Updates on the number column in A and B are not allowed.
```
DELIMITER $$
CREATE or replace TRIGGER `before_insert_trigger_A`
BEFORE INSERT ON `A`
FOR EACH ROW
BEGIN
    insert into `numbers` values (NEW.number);
END$$
DELIMITER ;

DELIMITER $$
CREATE or replace TRIGGER `before_insert_trigger_B`
BEFORE INSERT ON `B`
FOR EACH ROW
BEGIN
    insert into `numbers` values (NEW.number);
END$$
DELIMITER ;

DELIMITER $$
CREATE or replace TRIGGER `after_delete_trigger_A`
AFTER DELETE ON `A`
FOR EACH ROW
BEGIN
    delete from `numbers`
    where `number` = OLD.number
      and not exists (select 1 from B where `number` = OLD.number);
END$$
DELIMITER ;

DELIMITER $$
CREATE or replace TRIGGER `after_delete_trigger_B`
AFTER DELETE ON `B`
FOR EACH ROW
BEGIN
    delete from `numbers`
    where `number` = OLD.number
      and not exists (select 1 from A where `number` = OLD.number);
END$$
DELIMITER ;
```
**Swift 5 Update**

Set the boolean in `didFinishLaunchingWithOptions` in your AppDelegate like so:

```
// Assuming you have made a singleton for UserDefaults
public let defaults = UserDefaults.standard

func application(_ application: UIApplication,
                 didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    defaults.set(true, forKey: "SoundActive")
    // boilerplate code
    ...
}
```
I've been trying to figure this out too; the closest I can get is finding the file registry for *imported* themes only. It doesn't include RStudio's own themes, annoyingly.

> `appdata/roaming/rstudio/themes` on Windows
>
> `~/.config/rstudio/themes` on Mac
It seems to be a bug in `base::trace()` or `methods:::.TraceWithMethods()`?
|r|
I am creating a mobile application in Android Studio where the entire screen is occupied by an image of a human. The user should click, for example, on the leg, and information about the leg will be displayed. But I ran into the problem of specifying arbitrary regions of the image which, when clicked, open a new activity. Please tell me, how can this be implemented? There will be a lot of arbitrary "buttons" on the image. Here's my example using coordinates, but it's terribly inconvenient; maybe someone knows a better way?

```
public class MainActivity extends AppCompatActivity {

    private ImageView imageView;

    @SuppressLint("ClickableViewAccessibility")
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        imageView = findViewById(R.id.imageView);
        imageView.setImageResource(R.drawable.front_man);

        // Define the image regions
        Rect handRect = new Rect(100, 100, 200, 200);
        Rect legRect = new Rect(300, 300, 400, 400);

        // Handle taps
        imageView.setOnTouchListener(new View.OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                float x = event.getX();
                float y = event.getY();

                // Check which region was tapped
                if (handRect.contains((int) x, (int) y)) {
                    // Open the layout describing the hand
                    Intent intent = new Intent(MainActivity.this, HandActivity.class);
                    startActivity(intent);
                } else if (legRect.contains((int) x, (int) y)) {
                    // Open the layout describing the leg
                    Intent intent = new Intent(MainActivity.this, LegActivity.class);
                    startActivity(intent);
                }
                return true;
            }
        });
    }
}
```

I write code in Java, but you can also drop your tips in Kotlin.
How can I specify specific areas for clicking on ImageView?
|android|
I started to learn WireMock. My first experience is not very positive. Here's a failing MRE:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.Test;

public class GenericTest {

    @Test
    void test() {
        new WireMockServer(8090);
    }
}
```

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.wiremock</groupId>
    <artifactId>wiremock</artifactId>
    <version>3.5.1</version>
    <scope>test</scope>
</dependency>
```

```lang-none
java.lang.NoClassDefFoundError: org/eclipse/jetty/util/thread/ThreadPool
```

I debugged it a little:

```java
public WireMockServer(int port) {
    this(/* -> this */ wireMockConfig() /* <- throws */.port(port));
}
```

```java
// WireMockConfiguration
// ↓ throwing inline
private ThreadPoolFactory threadPoolFactory = new QueuedThreadPoolFactory();

public static WireMockConfiguration wireMockConfig() {
    return /* implicit no-args constructor */ new WireMockConfiguration();
}
```

```java
package com.github.tomakehurst.wiremock.jetty;

import com.github.tomakehurst.wiremock.core.Options;
import com.github.tomakehurst.wiremock.http.ThreadPoolFactory;
// ↓ package org.eclipse does not exist, these lines are in red
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.eclipse.jetty.util.thread.ThreadPool;

public class QueuedThreadPoolFactory implements ThreadPoolFactory {

    @Override
    public ThreadPool buildThreadPool(Options options) {
        return new QueuedThreadPool(options.containerThreads());
    }
}
```

My conclusions:

1. WireMock has a dependency on `org.eclipse`
2. WireMock doesn't include this dependency in its artifact
3. I have to provide it manually

```xml
<!-- like so -->
<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-util</artifactId>
    <version>12.0.7</version>
</dependency>
```

I even visited their [GitHub][1] to see for myself whether the dependency is marked as `provided`, but they use Gradle, and I don't know Gradle.

But that's not all! You'll also have to include (at least) `com.github.jknack.handlebars` and `com.google.common.cache` (see `com.github.tomakehurst.wiremock.extension.responsetemplating.TemplateEngine`).

Luckily, I found this "stand-alone" artifact that doesn't require any manual props:

```xml
<dependency>
    <groupId>org.wiremock</groupId>
    <artifactId>wiremock-standalone</artifactId>
    <version>3.5.1</version>
    <scope>test</scope>
</dependency>
```

My question: why aren't all artifacts "stand-alone"? Why do artifacts that don't work unless propped up by manually declared dependencies even exist? What are their advantages?

[1]: https://github.com/wiremock/wiremock/tree/master
SingleTableEntityPersister cannot be cast to class org.hibernate.metamodel.mapping.BasicValuedMapping
|postgresql|spring-boot|hibernate|jpa|spring-data-jpa|
|r|selenium-webdriver|tor|rselenium|
We have a custom live search feature on our WordPress website, which only displays 20 products in the middle of the screen for any given search. We would like to remove this limit of 20 so that an unlimited number of products is shown. So far we have come up with the code below for our functions.php file, which unfortunately doesn't work:

    function search_filter($query) {
        if ( !is_admin() && $query->is_main_query() ) {
            if ($query->is_wpsc_gc_live_search_pre_get_posts) {
                $query->set('posts_per_page', -1);
            }
        }
    }
    add_action('pre_get_posts', 'search_filter');

Code below from the live search file that may help with tweaking the code above:

    function wpsc_gc_start_search_query() {
        global $wp_query, $wpsc_query;

        $product_page_id = wpsc_get_the_post_id_by_shortcode('[productspage]');
        $post = get_post( $product_page_id );
        $wp_query = new WP_Query( array( 'pagename' => $post->post_name ) );

        add_action( 'pre_get_posts', 'wpsc_gc_live_search_pre_get_posts' );
        wpsc_start_the_query();
        remove_action( 'pre_get_posts', 'wpsc_gc_live_search_pre_get_posts' );

        list( $wp_query, $wpsc_query ) = array( $wpsc_query, $wp_query ); // swap the wpsc_query object

        $GLOBALS['nzshpcrt_activateshpcrt'] = true;

What we are basically trying to do is set the number of products shown per page during the live search to either 50 or unlimited (we would like to try both scenarios). The setting below relates to the number of products shown per page:

    wpsc_products_per_page

It would be great to get some help and advice on how we can get the above filter to work. We created the filter for our theme's functions.php file, and it needs tweaking.
Trying To Get A PHP Filter To Work In Wordpress To Show Unlimited Products On The Page
|php|wordpress|function|filter|themes|
null
You can achieve it by splitting the string into an array:

    function swapCharacters(str, i, j) {
        str = str.split("");
        [str[i], str[j]] = [str[j], str[i]];
        return str.join("");
    }

Note the semicolon after the `split` call: the next statement starts with `[`, so without it automatic semicolon insertion joins the two lines into one indexing expression and the swap silently breaks.

Looks clean :)
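A quick runnable check of this approach (the function is restated here with explicit semicolons so the snippet is self-contained):

```javascript
// Swap the characters at positions i and j by round-tripping through an array.
// The semicolon after split("") matters: the next statement begins with "[",
// which would otherwise be parsed as an index into the freshly split array.
function swapCharacters(str, i, j) {
  str = str.split("");
  [str[i], str[j]] = [str[j], str[i]];
  return str.join("");
}

console.log(swapCharacters("abcd", 1, 2)); // "acbd"
console.log(swapCharacters("hello", 0, 4)); // "oellh"
```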
Trying To Get A PHP Filter To Work In Wordpress To Show Unlimited Products On The Page During Custom Search
Is there any way to persist data from server to client in SSR apart from using "store" in SSRContext? The docs mostly mention Vuex, even though Vuex is deprecated.

I'm using @tanstack/vue-query, which saves a lot of headaches when managing async or server state. It manages deduping requests, retries, query caching and invalidation, garbage collection, query refetching, infinite queries, and so much more.

In the docs, the guidelines for integrating with Vite SSR are:

```js
// main.js (entry point)
import App from './App.vue'
import viteSSR from 'vite-ssr/vue'
import {
  QueryClient,
  VueQueryPlugin,
  hydrate,
  dehydrate,
} from '@tanstack/vue-query'

export default viteSSR(App, { routes: [] }, ({ app, initialState }) => {
  // -- This is Vite SSR main hook, which is called once per request

  // Create a fresh VueQuery client
  const queryClient = new QueryClient()

  // Sync initialState with the client state
  if (import.meta.env.SSR) {
    // Indicate how to access and serialize VueQuery state during SSR
    initialState.vueQueryState = { toJSON: () => dehydrate(queryClient) }
  } else {
    // Reuse the existing state in the browser
    hydrate(queryClient, initialState.vueQueryState)
  }

  // Mount and provide the client to the app components
  app.use(VueQueryPlugin, { queryClient })
})
```

Then in the Vue component:

```Vue
<!-- MyComponent.vue -->
<template>
  <div>
    <button @click="refetch">Refetch</button>
    <p>{{ data }}</p>
  </div>
</template>

<script setup>
import { useQuery } from '@tanstack/vue-query'
import { onServerPrefetch } from 'vue'

// This will be prefetched and sent from the server
const { refetch, data, suspense } = useQuery({
  queryKey: ['todos'],
  queryFn: getTodos,
})
onServerPrefetch(suspense)
</script>
```

I tried the same with boot files in Quasar SSR, replacing `initialState` with `ssrContext`, but it reads as undefined on the client.
Here is what I tried:

```js
import { boot } from 'quasar/wrappers'
import { QueryClient, VueQueryPlugin, dehydrate, keepPreviousData } from '@tanstack/vue-query'
import { hydrate } from 'vue'

export default boot(({ app, ssrContext }) => {
  const globalQueryClient = new QueryClient({
    defaultOptions: {
      queries: {
        networkMode: 'always',
        placeholderData: keepPreviousData,
        retry: false,
        staleTime: 1000 * 60 * 5 // 5 minutes
      }
    }
  })

  if (process.env.SERVER) {
    // Indicate how to access and serialize VueQuery state during SSR
    ssrContext.vueQueryState = { toJSON: () => dehydrate(globalQueryClient) }
  } else {
    // Reuse the existing state in the browser
    hydrate(globalQueryClient, ssrContext.vueQueryState)
  }

  app.use(VueQueryPlugin, { queryClient: globalQueryClient })
})
```

`vueQueryState` was undefined on the client.
I would like to ask how you create and manage modals in your React applications. I have seen a solution where modals are handled using Redux: you dispatch the modal key and props into the state and then display the modal using that key. The minus of this approach is that when you need to pass a function into the modal as a prop, Redux warns you that functions can't be serialized, which is why it is not a recommended approach. Some developers prefer using emitters or something else to work around this Redux issue. What is your favorite solution? I would be thankful for your recommendations.
Modals in React applications
|reactjs|modal-dialog|
Use `CROSS JOIN` and a subquery to get your desired result:

```
select AgeGroup, CAST(SUMNumberOfFans1 AS FLOAT)/SUMNumberOfFans2
from
    (Select Date, AgeGroup, SUM(NumberOfFans) SUMNumberOfFans1
     From FansPerGenderAge
     WHERE date = (SELECT max(date) from FansPerGenderAge)
     GROUP BY AgeGroup) a
CROSS JOIN
    (Select SUM(NumberOfFans) SUMNumberOfFans2
     From FansPerGenderAge
     WHERE date = (SELECT max(date) from FansPerGenderAge)) b
```
You should declare a local `var`, and pass that to `Eval`.

```
// in the Button() { ... } closure
var theCV = CLIPSValue()
Eval(clipsEnv, "(find-all-facts ((?f message)) TRUE)", &theCV)
```

You don't need a `@State` here unless you also want to use `theCV` in other parts of the view. In that case, you can declare a `@State` of type `CLIPSValue?`:

```
@State var theCV: CLIPSValue?
```

Then assign to this after `Eval`:

```
var theCV = CLIPSValue()
Eval(clipsEnv, "(find-all-facts ((?f message)) TRUE)", &theCV)
self.theCV = theCV
```
Here's one approach:

* Use [`pd.json_normalize`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.json_normalize.html) with 'type' as metadata (include the same levels except 'prices').

```python
df = pd.json_normalize(red,
                       record_path=['data', 'events', 'markets', 'outcomes', 'prices'],
                       meta=[['data', 'events', 'markets', 'outcomes', 'type']]
                       )

df.head()

  numerator denominator  decimal  displayOrder priceType handicapLow  \
0        13           4     4.25             1        LP        None
1        63         100     1.63             1        LP        None
2         4           1     5.00             1        LP        None
3        11          10     2.10             1        LP        None
4        13          20     1.65             1        LP        None

  handicapHigh data.events.markets.outcomes.type
0         None                                MR
1         None                                MR
2         None                                MR
3         None                                --
4         None                                --
```

* Now, use [`df.loc`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html) with [`Series.eq`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html) to get back all the values from column 'decimal' where our meta column equals 'MR'.

```python
meta_col = 'data.events.markets.outcomes.type'
decimals = df.loc[df[meta_col].eq('MR'), 'decimal']

decimals.head(5)

0     4.25
1     1.63
2     5.00
20    3.40
21    2.40
Name: decimal, dtype: float64
```

Here the index values (`0, 1, 2, 20, 21`) refer to the rows where 'type' equals 'MR'.
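Since `red` itself isn't shown, here is a minimal self-contained sketch of the same `record_path`/`meta` pattern; the toy structure below is made up and collapses the nesting to two levels:

```python
import pandas as pd

# Toy data mirroring ... -> outcomes (with 'type') -> prices (with 'decimal')
red_toy = {"outcomes": [
    {"type": "MR", "prices": [{"decimal": 4.25}, {"decimal": 1.63}]},
    {"type": "--", "prices": [{"decimal": 2.10}]},
]}

# One row per price, with the parent outcome's 'type' carried along as meta
df = pd.json_normalize(red_toy,
                       record_path=["outcomes", "prices"],
                       meta=[["outcomes", "type"]])

# Keep only the decimals belonging to 'MR' outcomes
decimals = df.loc[df["outcomes.type"].eq("MR"), "decimal"]
print(decimals.tolist())  # [4.25, 1.63]
```

The meta column name is the path joined with dots, which is why the full version above filters on `data.events.markets.outcomes.type`.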
I want to find what percentage of the memory load accesses in a program is memory indirect, for example, A[B[i]], A[(B[i]&mask)+c], A[B[C[i]]], etc. My idea (I have not started implementing it yet) is to track whether any register holds a value that has been loaded from memory. For ALU or bit operations, if one of the source registers is being tracked, then the destination will also be tracked. If such a register value is used to perform a memory access, then flag it as a memory indirect access. Is there a better way to do it? How can I go about implementing it?
Percentage memory indirect accesses
|x86-64|cpu-architecture|intel-pin|
I want to convert my WordPress website into a mobile app; I am using the BuddyBoss theme. I want an app where the user logs in once and then stays logged in unless he logs out himself. I have tried many sites but didn't find a solution.
How can I convert my website to an application?
|android|wordpress|web|native|
null
I am trying to resize iframes based on content changes for cross-origin content. Also, I am trying to open someone else's website in my iframe, so I don't have access to their site. Is there a way to do so?

<!-- language: lang-js -->

    function resizeIFrameToFitContent(iFrame) {
        iFrame.width = iFrame.contentWindow.document.body.scrollWidth;
        iFrame.height = iFrame.contentWindow.document.body.scrollHeight;
    }

    window.addEventListener('DOMContentLoaded', function(e) {
        var iFrame = document.getElementById('iFrame1');
        resizeIFrameToFitContent(iFrame);

        // or, to resize all iframes:
        var iframes = document.querySelectorAll("iframe");
        for (var i = 0; i < iframes.length; i++) {
            resizeIFrameToFitContent(iframes[i]);
        }
    });

<!-- language: lang-html -->

    <iframe src="usagelogs/default.aspx" id="iFrame1"></iframe>

<!-- end snippet -->
[Check image][1]

I'm new to this, but I understand that a file is missing. I'm working in VS Code, and the error comes from Visual Studio. I've been trying to install pandas using pip, but it always fails and gives me a giant wall of error text with this at the bottom.

[1]: https://i.stack.imgur.com/h1Qdv.png
When using ObjectMessage to serialize and de-serialize an object, that class needs to be on the classpath in both the sending and the receiving application. In general your project needs to include a common JAR on the receiver side that contains the class being transmitted for this to work.

ObjectMessage has a rather checkered past in terms of security vulnerabilities and should be avoided when possible; many brokers place limits on what types you can send without specialized configuration. You may want to consider sending a payload like JSON in a TextMessage instead.
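As a sketch of the suggested alternative: build a plain JSON string (hand-rolled here; a real project would use Jackson or Gson) and hand it to a `TextMessage`. The JMS `Session` and `MessageProducer` are assumed to exist and appear only in comments, since they need a live broker:

```java
public class JsonPayloadSketch {

    // Serialize the fields we care about to JSON by hand; the receiver only
    // needs a JSON parser, not this class on its classpath.
    static String toJson(String user, int count) {
        return "{\"user\":\"" + user + "\",\"count\":" + count + "}";
    }

    public static void main(String[] args) {
        String json = toJson("alice", 3);
        System.out.println(json);

        // With a live JMS Session and MessageProducer you would then do:
        // TextMessage msg = session.createTextMessage(json);
        // producer.send(msg);
    }
}
```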
I am dealing with a library database today. The structure is kind of odd, and I am having trouble pulling the data how I want it to appear.

So I have this query:

    SELECT lc.catalogID,
           hb_g.intro AS 'Genre/Subject',
           gk.kidsAge AS 'Ages',
           pb_g.intro AS 'Genre/Subject',
           pb.ageRange AS 'Ages'
    FROM library.libraryCatalog lc
    INNER JOIN library.hardbacks hb ON lc.catalogID = hb.catalogId
    INNER JOIN library.paperbacks pb ON lc.catalogID = pb.catalogId
    LEFT JOIN library.genres hb_g ON hb.genreId = hb_g.genreId
    LEFT JOIN library.genres pb_g ON pb.genreId = pb_g.genreId
    LEFT JOIN library.bookSeries bs ON hb.id = bs.logId
    LEFT JOIN library.genreKids gk ON bs.kidsId = gk.kidsId
    WHERE lc.libraryID = 87

It produces results like you see below. The issue I have is that I need the `Fairy Tales` and `12+` result to appear in the same columns as the other genres.

    catalogID  Genre/Subject   Age up to  Genre/Subject  Ages
    2021       Mystery         8+         Fairy Tales    12+
    2021       Sci-Fi/Fantasy  12+        Fairy Tales    12+
    2021       Fiction         10+        Fairy Tales    12+
    2021       Non-Fiction     12+        Fairy Tales    12+
    2021       Biography       16+        Fairy Tales    12+
    2021       Historical      10+        Fairy Tales    12+

I am hoping for something like this:

    catalogID  Genre/Subject   Age up to
    2021       Mystery         8+
    2021       Sci-Fi/Fantasy  12+
    2021       Fiction         10+
    2021       Non-Fiction     12+
    2021       Biography       16+
    2021       Historical      12+
    2021       Fairy Tales     12+    <---- moved here

I tried using ISNULL and COALESCE but neither of those worked. Is something like this possible?

Thanks!
I have written code for finding prime numbers using the sieve of Eratosthenes algorithm, but the problem is that my code only works for some numbers. For some inputs it keeps printing the values up to the entered number over and over, as if stuck in an infinite loop, while for other numbers it works perfectly. As I only recently started studying C in depth, I couldn't find what is going wrong. I would appreciate help fixing this, and also understanding what the mistake is, why it happens, and how to prevent it.

Here's the code:

```
#include <stdio.h>

int main()
{
    int n;
    printf("enter number: ");
    scanf("%d",&n);
    int arr[n],i,pr=2;
    for(i=0;pr<=n;i++)
    {
        arr[i]=pr;
        pr++;
    }
    int j,k;
    while(arr[k]<=n)
    {
        for(j=2;j<n;j++)
        {
            for(k=0;k<n;k++)
            {
                if(arr[k]%j==0 && arr[k]>j)
                    arr[k]=0;
            }
        }
    }
    for(i=0;arr[i]<=n;i++)
    {
        if(arr[i]!=0)
            printf(" %d",arr[i]);
    }
    printf("\n");
    return 0;
}
```
I'm trying to fetch APIs and combine the JSON objects into a single array variable that I can loop through. Using .push, my array ends up as:

```
[ [ {"a":"1"} ], [ {"b":"2"} ] ]
```

when I want this:

```
[ {"a":"1"}, {"b":"2"} ]
```

Here's my trimmed down code:

```
var combinedJson = [];

const r1 = fetch(firstJson).then(response => response.json());
const r2 = fetch(secondJson).then(response => response.json());

Promise.all([r1, r2])
    .then(([d1, d2]) => {
        combinedJson.push(d1, d2);
        console.log(combinedJson);
    })
    .catch(error => {
        console.error(error);
    });
```
I am scraping messages about power plant unavailability, converting them into timeseries, and storing them in a SQL Server database. My current structure is the following:

* `Messages`: publicationDate datetime, messageSeriesID nvarchar, version int, messageId identity. The primary key is on `(messageSeriesId, version)`.
* `Units`: messageId int, area nvarchar, fueltype nvarchar, unitname nvarchar, tsId identity. The primary key is on `tsId`. There is a foreign key relation on `messageId` between this table and `Messages`. The main reason for this table is that one message can contain information about multiple power plants.
* `Timeseries`: tsId int, delivery datetime, value decimal. I have a partition scheme based on delivery; each partition contains a month of data. The primary key is on `(tsId, delivery)` and it's partitioned along the monthly partition scheme. There is a foreign key on `tsId` to `tsId` in the `Units` table.

The `Messages` and `Units` tables contain around a million rows each. The `Timeseries` table contains about 500 million rows.

Now, every time I insert a new batch of data, one row goes into the `Messages` table, between one and a few (4) go into the `Units` table, and a lot (up to 100,000s) go into the `Timeseries` table.

The problem I'm encountering is that inserts into the `Timeseries` table are too slow (100,000 rows take up to a minute). I already made some improvements by setting the fill factor to 80 instead of 100 when rebuilding the index there, but it's still too slow. And I am a bit puzzled, because the way I understand it is this: every partition contains all rows with delivery in that month, but the primary key is on `tsId` first and `delivery` second. So to insert data in a partition, the rows should simply be placed at the end of the partition (since `tsId` is the identity column and thus increases by one every transaction). The time series that I am trying to insert spans 3 years and therefore 36 partitions.
If I, however, create a time series of the same length that falls within a single partition, the insert is notably faster (around 1.5 seconds). Likewise, if I create an empty time series table (`timeseries_test`) with the same structure as the original one, then inserts are also very fast (also for data that spans 3 years). However, querying is done based mainly on delivery, so I don't think partitioning by `tsId` is a good idea.

If anyone has a suggestion on the structure or methods to improve inserts it would be greatly appreciated.

Create Table statements (I changed the order of the primary key on the timeseries table, but it didn't make any difference; in fact it seemed to slow down inserts):

    CREATE TABLE [dbo].[remit_messages](
        [publicationDate] [datetime2](0) NOT NULL,
        [version] [int] NOT NULL,
        [messageId] [int] IDENTITY(1,1) NOT NULL,
        [messageSeriesId] [nvarchar](36) NOT NULL,
        PRIMARY KEY CLUSTERED
        (
            [messageSeriesId] ASC,
            [version] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    /****** Object: Index [dbo_remit_messages_messageId] Script Date: 2024-03-30 13:26:36 ******/
    CREATE UNIQUE NONCLUSTERED INDEX [dbo_remit_messages_messageId] ON [dbo].[remit_messages]
    (
        [messageId] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO

    CREATE TABLE [dbo].[remit_units](
        [tsId] [int] IDENTITY(1,1) NOT NULL,
        [fuelTypeId] [int] NOT NULL,
        [areaId] [int] NOT NULL,
        [messageId] [int] NOT NULL,
        [unitName] [nvarchar](200) NULL,
        PRIMARY KEY CLUSTERED
        (
            [messageId] ASC,
            [tsId] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
        CONSTRAINT [dbo_remit_tsId] UNIQUE NONCLUSTERED
        (
            [tsId] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

    /****** Object: Index [dbo_remit_units_tsid] Script Date: 2024-03-30 13:30:39 ******/
    CREATE NONCLUSTERED INDEX [dbo_remit_units_tsid] ON [dbo].[remit_units]
    (
        [tsId] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO

    ALTER TABLE [dbo].[remit_units] WITH CHECK ADD FOREIGN KEY([messageId])
    REFERENCES [dbo].[remit_messages] ([messageId])
    ON UPDATE CASCADE
    ON DELETE CASCADE
    GO

    CREATE TABLE [dbo].[remit_ts](
        [tsId] [int] NOT NULL,
        [delivery] [datetime2](0) NOT NULL,
        [available] [decimal](11, 3) NULL,
        [unavailable] [decimal](11, 3) NULL,
        PRIMARY KEY CLUSTERED
        (
            [delivery] ASC,
            [tsId] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [MonthlyPartitionScheme]([delivery])
    ) ON [MonthlyPartitionScheme]([delivery])
    GO

    /****** Object: Index [idx_remit_ts_delivery_inc] Script Date: 2024-03-30 13:33:34 ******/
    CREATE NONCLUSTERED INDEX [idx_remit_ts_delivery_inc] ON [dbo].[remit_ts]
    (
        [delivery] ASC
    )
    INCLUDE([tsId],[unavailable],[available]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [MonthlyPartitionScheme]([delivery])
    GO

    ALTER TABLE [dbo].[remit_ts] WITH CHECK ADD FOREIGN KEY([tsId])
    REFERENCES [dbo].[remit_units] ([tsId])
    ON UPDATE CASCADE
    ON DELETE CASCADE
    GO

Actual execution plan: https://www.brentozar.com/pastetheplan/?id=HJJiFFS1R
I also got the same error, but in my case it happened because I wasn't getting an env variable which I had recently added. The error was fixed by stopping and restarting the React app (environment variables are only read when the dev server starts).
This is considered an anti-pattern in the modern React ecosystem. Following the single-responsibility principle, keep your business logic in custom hooks (using prototype inheritance in them if required). To manage API calls use TanStack Query, and to store global data use Jotai (atoms). These libraries are very easy to learn and maintain.

You don't need to write Redux (actions, reducers and store), Redux Toolkit and other boilerplate code today. Even if you learn those concepts, they are not very useful in other stacks. Many React interviewers still ask Redux questions; I hope they will update their projects with the mentioned best practices soon.

A sample snippet is given below:

```
const Counter = () => {
  const { sendCounterdata } = useCounterAPI(); // TanStack Query
  const [counter, setCounter] = useAtom(counterAtom); // Jotai atom

  // Custom hook
  const { increment, decrement, submit } = useCounter({ setCounter, onSubmit: sendCounterdata });

  return (
    <>
      {counter}
      <Button onPress={increment}>Increment</Button>
      <Button onPress={decrement}>Decrement</Button>
      <Button onPress={submit}>Submit</Button>
    </>
  );
};
```

Bonus: you can easily write a unit test for the custom hook `useCounter`.
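The `useCounter` hook in the snippet is hypothetical; stripped of React, the state logic it would wrap can be sketched framework-free like this (all names here are assumptions):

```javascript
// Framework-free sketch of the logic a useCounter custom hook could wrap.
// setCounter stands in for the Jotai atom setter, onSubmit for the
// TanStack Query mutation; both are injected so the logic is unit-testable.
function makeCounter({ setCounter, onSubmit }) {
  let value = 0;
  const apply = (next) => {
    value = next;
    setCounter(next); // push the new value into shared state
  };
  return {
    increment: () => apply(value + 1),
    decrement: () => apply(value - 1),
    submit: () => onSubmit(value), // hand the current value to the API layer
  };
}
```

Keeping the setter and the submit callback injected like this is exactly what makes the unit test mentioned in the bonus trivial: the test passes in plain stub functions instead of real atoms or queries.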
I have a SQLAlchemy model. Here is my *models.py*:

```
class PlaceInfoModel(Base):
    __tablename__ = 'place_info'

    id = Column(Integer, primary_key=True, autoincrement=True)
    owner_id = Column(Integer, nullable=False)
    name = Column(String(60))
    address = Column(String(300))
    rating = Column(Float)
    type = Column(String(20))
    image = Column(String)
```

*serializer.py*:

```
from .models import PlaceInfoModel, sessionmaker, engine
from rest_framework import serializers
from django.contrib.auth.models import User
from rest_framework.fields import CurrentUserDefault


class PlaceInfoSerializer(serializers.Serializer):
    name = serializers.CharField()
    address = serializers.CharField()
    rating = serializers.FloatField()
    image = serializers.CharField(required=False)
    owner_id = serializers.IntegerField()  # I want to auto-populate it with auth_user.id on POST requests
```

This is my *views.py*:

```
class PlaceViewSet(ViewSet):
    authentication_classes = [JWTAuthentication]
    permission_classes = [IsAuthenticatedOrReadOnly, IsPermittedForAction]
    ordering_fields = ['id', 'name', 'rating', 'address']

    def create(self, request):
        serializer = PlaceInfoSerializer(data=request.data)
        print(request.user.id)
        if serializer.is_valid():
            Session = sessionmaker(bind=engine)
            session = Session()
            place = PlaceInfoModel(**serializer.validated_data)
            session.add(place)
            session.commit()
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
```

I want to populate `owner_id` in *serializers.py* somehow from the SimpleJWT authentication. I tried to do it with CurrentUserDefault() and failed.
Getting the Django ORM auth_user.id data using serializer
|django|django-models|django-rest-framework|sqlalchemy|orm|
null
I've spent quite some time in the last month trying to figure out how to run Colima and the Docker engine on my Mac Pro M2 Max as x86_64 for both. Everything came from the missing compatible architecture (have only 'i386,x86_64') error for dockerfile-maven-plugin version 1.4.13, as described here: [text](https://stackoverflow.com/questions/71300031/docker-image-build-failed-on-mac-m1-chip), [text](https://github.com/apache/airavata-custos/issues/374) and [text](https://github.com/spotify/dockerfile-maven/issues/394)

This plugin is hardcoded for the moment and I can't change it to any other. In the company we use PCs with x86_64 CPU architecture, but I want to use my MacBook for this purpose when I build images using the mentioned plugin. I have tried the following.

Scenario 1:

I installed Homebrew and then installed colima and docker. After this, I started colima with the argument --arch x86_64, and once I ssh into the VM, I can see that it has x86_64 architecture. So far so good, but I got stuck starting the Docker engine with the same architecture; it starts by default as darwin/arm64.

Scenario 2:

I uninstalled the arm64 Homebrew and installed Rosetta. Afterwards, I proceeded with installing x86_64 Homebrew as:

    arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Then I installed the Docker engine as:

    arch -x86_64 /usr/local/bin/brew install docker

and this way it starts as darwin/amd64 (rosetta), which is what I wanted to achieve. I also installed colima in the same way (arch -x86_64 /usr/local/bin/brew install colima). But ...
when I try to start colima, I get this error message:

```
FATA[0000] limactl is running under rosetta, please reinstall lima with native arch
FATA[0000] lima compatibility error: error checking Lima version: exit status 1
```

Even this [text](https://stackoverflow.com/questions/78169611/how-to-solve-limactl-is-running-under-rosetta-please-reinstall-lima-with-nativ) didn't help much.

One solution could be to run the native arm64 installation of Homebrew and, once I start colima with --arch x86_64, to install the Docker engine inside the VM (which runs Ubuntu by default). Inside the VM I can add the configuration in /etc/docker/daemon.json with: `{"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]}`. Later on, I'll make sure to add the DOCKER_HOST=tcp://localhost:2375 env variable into e.g. my .zshrc file on macOS and source it. This way I'll have the Docker engine running inside the VM, exposed to my Mac terminal, and I can run my build.

Has anyone else hit a similar issue, or does anyone have a better solution for this?
I'm making a web app with Flask, using SQLAlchemy to manage the databases. I need to store data in more than one database, for example "user_database.db" and "teacher_database.db". I read the documentation on how to do this, but when I use it in my code I get this error:

> ModuleNotFoundError: No module named 'MySQLdb'

That's my code:

```
path = os.path.abspath(os.path.dirname(__file__))

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secretkey'
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(path, 'databaseteste.db')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SQLALCHEMY_BINDS'] = {
    'user_database': 'mysql:///root:password@localhost/userdatabase'
}

db = SQLAlchemy(app)


# User model to database
class User(UserMixin, db.Model):
    __bind_key__ = 'user_database'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), unique=True, index=True)
    password_hash = db.Column(db.String(128))
    joined_at = db.Column(db.DateTime(), default=datetime.utcnow, index=True)

    def __repr__(self):
        return f'User {self.username}'

    def set_passoword(self, password):
        self.password_hash = generate_password_hash(password)

...
```

Can someone help me find what I'm doing wrong?
Python - How to create multiple databases with SQLAlchemy in Flask?
```cpp
create<k+1, R..., T...>(v, in);
```

This is not valid, because template parameters are either deduced or provided explicitly. Here they are provided, so deduction is cut off. The first call is then `create<1,double,int,double,int>( v, in )`. The function receives its parameters, but there are 5 indistinguishable possibilities:

```cpp
create<1,double,int,double,int>   R=[],                       T=[double,int,double,int]
create<1,double,int,double,int>   R=[double],                 T=[int,double,int]
create<1,double,int,double,int>   R=[double,int],             T=[double,int]
create<1,double,int,double,int>   R=[double,int,double],      T=[int]
create<1,double,int,double,int>   R=[double,int,double,int],  T=[]
```

Here the third case is the only one possible, but deduction is deactivated! If you do not indicate the pack parameters, they are properly deduced.

It is then imperative to stop the recursion. The call `create<1>( v, in )` produces the instantiation for `k=1`:

```cpp
std::get<1>(v) = std::get<1>(in);
create<2>(v, in); // <== error, function doesn't exist
```

But **because of the SFINAE**, the function create<2> does not exist, and it is an error to call a nonexistent function.

So a possible solution:

```cpp
template<size_t k, typename ...R, typename ...T>
static typename std::enable_if< (k < sizeof...(R)), void>::type
create( std::tuple<R...>& v, std::tuple<T...>const& in )
{
    std::get<k>(v) = std::get<k>(in);
    if constexpr ( k + 1 < sizeof...(R) ) {
        create<k+1>( v, in );
    }
}
```
I have a webpage with two `<picture>` elements that each have an `<a>` in front of them. The images are inline, next to each other. There is whitespace between these two elements (not sure why, but I'm fine with the small gap between them), and the whitespace itself has a little underscore or bottom-border type of characteristic to it. I don't want this little underscore to be visible.

Page viewable here: https://nohappynonsense.net/writtte

If I remove the `<a>` from each picture the little line goes away. But I want these images to have a link, so I'm not sure what else to try.

Also, sorry if my code looks like some horrible mess; I'm not a programmer or coder or anything.
Persisting @tanstack/vue-query query client in Quasar SSR
|javascript|vue.js|vite|server-side-rendering|quasar|
null