1bb423d1ecd7caf2dcd22b2125e845d899183b30
Q: How to use sorting in a Spring Data REST GET method When I create a repository in Spring like the following, it provides a method by default to get all records of the entity through this API: GET http://localhost:3000/api/addresses It returns the data in ascending order, but if I want my data in descending order, how can I specify that? Address repository: public interface AddressRepository extends JpaRepository<Address, Long> { } A: Perhaps you can request localhost:3000/api/addresses?sort={property},desc, which will sort by the given property in descending order. A: You can also specify this as part of your request; take a look at the Spring Data REST documentation on sorting, and at a similar answer here. A: Try something like the following inside the AddressRepository interface: public List<Address> findAllByOrderByIdDesc(); Of course you can replace the "ById" part with any other field you want to sort by.
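A minimal sketch of the two approaches above, assuming a standard Spring Data REST setup; the /api base path is taken from the question, while the sorted property used in the URL comment is only an example:

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// JpaRepository already extends PagingAndSortingRepository, so Spring Data REST
// accepts a sort parameter on the collection resource, e.g.
//   GET /api/addresses?sort=id,desc
public interface AddressRepository extends JpaRepository<Address, Long> {

    // Alternative: a derived query that always sorts descending by id.
    // Spring Data REST exposes it under /api/addresses/search/findAllByOrderByIdDesc
    List<Address> findAllByOrderByIdDesc();
}
```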
stackoverflow
{ "language": "en", "length": 140, "provenance": "stackexchange_0000F.jsonl.gz:912684", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691279" }
05b3e08a4b7166508495609163cde0fe9cf6dd47
Q: typescript function throwing error to return a value Here is what I am trying to accomplish, I am calling a function getGeoLocationOfUser() which I supposed to return me the geolocation of user and function is supposed to return only when geolocation is available or there is some error. but above function is throwing an error A function whose declared type is neither 'void' nor 'any' must return a value. public userGeolocation={latitude:null,longitude:null} getGeoLocationOfUser():{latitude:any,longitude:any}{ this.geolocation.getCurrentPosition().then((resp) => { this.userGeolocation.latitude=resp.coords.latitude; this.userGeolocation.longitude=resp.coords.longitude; console.log(this.userGeolocation); localStorage.setItem('userGeoLocation',JSON.stringify(this.userGeolocation)); return this.userGeolocation; //saving geolocation of user to localStorage }).catch((error) => { console.log('Error getting location', error); return this.userGeolocation; }); } I might be missing a very basic concept here .Any help will be appreciated. A: You need to return a Promise of Geolocation here. //Make return type as Promise<object_type> or Promise<any> getGeoLocationOfUser():Promise<{latitude:any,longitude:any}>{ //return the inner function return this.geolocation.getCurrentPosition().then((resp) => { this.userGeolocation.latitude=resp.coords.latitude; this.userGeolocation.longitude=resp.coords.longitude; console.log(this.userGeolocation); localStorage.setItem('userGeoLocation',JSON.stringify(this.userGeolocation)); return this.userGeolocation; //saving geolocation of user to localStorage }).catch((error) => { console.log('Error getting location', error); return this.userGeolocation; }); } You can then get the value by calling function().then(callback). getGeoLocationOfUser().then( loc =>{ this.location = loc}).catch(err=>{}); A: Kindly change the return type any instead of {latitude:any,longitude:any} getGeoLocationOfUser(): any { return this.geolocation.getCurrentPosition().then((resp) => { this.userGeolocation.latitude = resp.coords.latitude; this.userGeolocation.longitude = resp.coords.longitude; console.log(this.userGeolocation); localStorage.setItem('userGeoLocation', JSON.stringify(this.userGeolocation)); return this.userGeolocation; //saving geolocation of user to localStorage }).catch((error) => { console.log('Error getting location', error); return this.userGeolocation; }); } A: You can try with any return type. getGeoLocationOfUser(): Promise<any> { this.geolocation.getCurrentPosition().then((resp) => { this.userGeolocation.latitude=resp.coords.latitude; this.userGeolocation.longitude=resp.coords.longitude; console.log(this.userGeolocation); localStorage.setItem('userGeoLocation',JSON.stringify(this.userGeolocation)); return this.userGeolocation; //saving geolocation of user to localStorage }).catch((error) => { console.log('Error getting location', error); return this.userGeolocation; }); }
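A sketch of the same idea written with async/await, which is equivalent to returning the promise chain; it assumes, as in the question, that the geolocation plugin's getCurrentPosition() returns a Promise carrying a coords object, and the dependency is passed in as a parameter only to keep the sketch self-contained:

```typescript
interface Coordinates {
  latitude: number | null;
  longitude: number | null;
}

async function getGeoLocationOfUser(
  geolocation: { getCurrentPosition(): Promise<{ coords: { latitude: number; longitude: number } }> }
): Promise<Coordinates> {
  try {
    const resp = await geolocation.getCurrentPosition();
    const userGeolocation: Coordinates = {
      latitude: resp.coords.latitude,
      longitude: resp.coords.longitude,
    };
    // Persist the location, as in the original code.
    localStorage.setItem('userGeoLocation', JSON.stringify(userGeolocation));
    return userGeolocation;
  } catch (error) {
    console.log('Error getting location', error);
    return { latitude: null, longitude: null };
  }
}

// The caller still receives the value asynchronously:
// getGeoLocationOfUser(this.geolocation).then(loc => { this.location = loc; });
```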
stackoverflow
{ "language": "en", "length": 259, "provenance": "stackexchange_0000F.jsonl.gz:912696", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691323" }
cc854c0bace256e28381c85f155ced1f798d01e7
Q: How to understand tf.get_collection() in TensorFlow I am confused by tf.get_collection(); the docs say that it "Returns a list of values in the collection with the given name." An example from the Internet is: from_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, from_scope) Does this mean that it collects variables from tf.GraphKeys.TRAINABLE_VARIABLES into from_scope? And how can I use this function if I want to get variables from another scope? Thank you! A: A collection is nothing but a named set of values. Every value is a node of the computational graph. Every node has a name, and the name is composed of the concatenation of scopes, '/', and the value name, like: preceding/scopes/in/that/way/value get_collection without a scope fetches every value in the collection without applying any filter. When the scope parameter is present, every element of the collection is filtered and is returned only if the name of the node starts with the specified scope. A: As described in the docstring: TRAINABLE_VARIABLES: the subset of Variable objects that will be trained by an optimizer. And for scope: (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix. So it will return the list of trainable variables in the given scope.
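A small runnable sketch (TensorFlow 1.x graph mode) showing how the scope argument filters the trainable-variables collection by name prefix; the scope names used here are just examples:

```python
import tensorflow as tf

with tf.variable_scope("generator"):
    g_w = tf.get_variable("w", shape=[3, 3])
with tf.variable_scope("discriminator"):
    d_w = tf.get_variable("w", shape=[3, 3])

# No scope: every trainable variable in the default graph.
all_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)

# With scope: only variables whose names start with "discriminator" (matched via re.match).
d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="discriminator")

print([v.name for v in all_vars])  # ['generator/w:0', 'discriminator/w:0']
print([v.name for v in d_vars])    # ['discriminator/w:0']
```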
stackoverflow
{ "language": "en", "length": 238, "provenance": "stackexchange_0000F.jsonl.gz:912729", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691406" }
4f6fb844d9b47e42b757c9aa058a4cdc6d127a9b
Q: CannotResolveClassException for ProjectMatrixAuthorizationStrategy I was trying to restore Jenkins on a new machine by taking a backup from the old machine. I replaced the Jenkins home directory of the new machine with the one from the old machine. When I launch Jenkins it gives me this error: Caused: java.io.IOException: Unable to read /var/lib/jenkins/config.xml There is also Caused: hudson.util.HudsonFailedToLoad Caused: org.jvnet.hudson.reactor.ReactorException Debug info is ---- Debugging information ---- message : hudson.security.ProjectMatrixAuthorizationStrategy cause-exception : com.thoughtworks.xstream.mapper.CannotResolveClassException cause-message : hudson.security.ProjectMatrixAuthorizationStrategy class : hudson.model.Hudson required-type : hudson.model.Hudson converter-type : hudson.util.RobustReflectionConverter path : /hudson/authorizationStrategy line number : 11 version : not available ------------------------------- This is what my config.xml looks like: <useSecurity>true</useSecurity> <authorizationStrategy class="hudson.security.ProjectMatrixAuthorizationStrategy"> <permission>hudson.model.Hudson.Administer:visha</permission> </authorizationStrategy> Can someone please help? A: This usually happens when the plugin providing the authorization strategy is not installed or enabled. Make sure the matrix-auth plugin is installed and that it's not disabled (no matrix-auth.jpi.disabled file (or similar) in $JENKINS_HOME/plugins/).
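A rough sketch of how that plugin check could be done on the new machine; the paths assume the default /var/lib/jenkins home from the question, and the Jenkins URL and credentials in the CLI call are placeholders:

```bash
# Look for the matrix-auth plugin files and for any *.disabled marker files.
ls /var/lib/jenkins/plugins/ | grep -i matrix-auth
ls /var/lib/jenkins/plugins/ | grep -i '\.disabled'

# If the plugin is missing, it can be installed through the Jenkins CLI and
# Jenkins restarted so config.xml can be parsed again.
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:API_TOKEN \
     install-plugin matrix-auth -restart
```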
stackoverflow
{ "language": "en", "length": 343, "provenance": "stackexchange_0000F.jsonl.gz:912736", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691438" }
f37a9c559d4eb22d3405dffc8fb600cb83bdf8a4
Q: Environment variable with newline in Amazon OpsWorks I am trying to set up an environment variable in Amazon's OpsWorks with Chef. It is intended to hold a private key, which contains newline characters. The variable is not getting set correctly, and the deployment of my Rails app fails with an exception caused by this incorrect variable. Can someone please help me with this? Thanks. A: Only printable characters can be used as values in OpsWorks environment variables, as documented here: http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-creating.html#workingapps-creating-environment. So the only way I found was to replace the special characters with string placeholders, and then, in the application that uses this variable, replace the string representation received from the environment variable with the corresponding special character. A: As @codeignitor mentioned, only printable characters can be used. I would suggest using a standard printable encoding such as JSON or Base64 instead of inventing a new protocol of your own, which can confuse your colleagues. If you do adopt @codeignitor's solution, at least write extra comments in your source code to explain the special convention.
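A sketch of the Base64 workaround in a Rails context, since Base64 keeps the value printable end to end; the variable name PRIVATE_KEY_BASE64 and the key file path are illustrative choices, not part of the question:

```ruby
require 'base64'

# Done once, locally: encode the multi-line key into a single printable line,
# then paste the output into the OpsWorks environment variable.
#   ruby -rbase64 -e 'puts Base64.strict_encode64(File.read("private_key.pem"))'

# In the application: decode the printable value back into the original key,
# newlines included.
private_key = Base64.strict_decode64(ENV.fetch('PRIVATE_KEY_BASE64'))
```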
stackoverflow
{ "language": "en", "length": 180, "provenance": "stackexchange_0000F.jsonl.gz:912742", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691464" }
00aedddf932b8663b2d3caee1ba795b3f58d9bfc
Q: How to initialize inherited class with base class? I have an instance of this class: public class MyClass { public string Username { get; set; } public string Password { get; set; } public string GetJson() { return JsonConvert.SerializeObject(this); } } but in some cases I need more properties in serialized json. I thought I should make a second inherited class like this: public class MyInheritedClass : MyClass { public string Email { get; set; } } If I'm not solving the problem in the wrong way, how can I initialize a new instance of my second class with an instance of the first one and have a json string from GetJson() that contains all the three properties? A: One of the options is to serialize the base class object and then deserialize it to derived class. E.g. you can use Json.Net serializer var jsonSerializerSettings = new JsonSerializerSettings { //You can specify other settings or no settings at all PreserveReferencesHandling = PreserveReferencesHandling.Objects, ReferenceLoopHandling = ReferenceLoopHandling.Ignore }; string jsonFromBase = JsonConvert.SerializeObject(baseObject, Formatting.Indented, jsonSerializerSettings); derivedClass= JsonConvert.DeserializeObject<DerivedClass>(jsonFromBase) ; A: You just need to create an instance of child class that is MyInheritedClass and it will hold all the properties from both the classes. When you create an instance of child class MyInheritedClass, runtime will call the constructor of Parent class MyInheritedClass first to allocate the memory for the member of parent class and then child class constructor will be invoked. So instance of Child class will have all the properties and you are referring to the this while serializing the object so it should have all the properties serialized in json. Note: Even though you are serializing the object inside the method that is declared in parent class, referring to this object will refer to the current instance that is instance of Child class so will hold all the properties. A: No. you can't initialize the derived instance in base class object. However you can create seprate extension method, public class MyClass { public string Username { get; set; } public string Password { get; set; } public string GetJson() { return JsonConvert.SerializeObject(this); } } public class MyInheritedClass : MyClass { public string Email { get; set; } } public static class MyClassExtension { public static MyInheritedClass ToMyInheritedClass(this MyClass obj, string email) { // You could use some mapper for identical properties. . . 
return new MyInheritedClass() { Email = email, Password = obj.Password, Username = obj.Password }; } } usage: MyClass myClass = new MyClass { Username = "abc", Password = "123" }; var myInheritedClass = myClass.ToMyInheritedClass("abc@mail.com"); Console.WriteLine(myInheritedClass.GetJson()); output would be: {"Email":"abc@mail.com","Username":"123","Password":"123"} A: You can create a constructor in your derived class and map the objects, public class MyInheritedClass : MyClass { MyInheritedClass (MyClass baseObject) { this.UserName = baseObject.UserName; // Do it similarly for rest of the properties } public string Email { get; set; } } MyInheritedClass inheritedClassObject = new MyInheritedClass(myClassObject); inheritedClassObject.GetJson(); Updated Constructor : MyInheritedClass (MyClass baseObject) { //Get the list of properties available in base class var properties = baseObject.GetProperties(); properties.ToList().ForEach(property => { //Check whether that property is present in derived class var isPresent = this.GetType().GetProperty(property); if (isPresent != null && property.CanWrite) { //If present get the value and map it var value = baseObject.GetType().GetProperty(property).GetValue(baseObject, null); this.GetType().GetProperty(property).SetValue(this, value, null); } }); }
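A compact, self-contained sketch of the serialize/deserialize approach from the first answer, using Json.NET as in the question; the sample values are illustrative:

```csharp
using System;
using Newtonsoft.Json;

public class MyClass
{
    public string Username { get; set; }
    public string Password { get; set; }
    public string GetJson() => JsonConvert.SerializeObject(this);
}

public class MyInheritedClass : MyClass
{
    public string Email { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var baseObject = new MyClass { Username = "abc", Password = "123" };

        // Copy the base instance into the derived type via a JSON round trip,
        // then set the extra property.
        var derived = JsonConvert.DeserializeObject<MyInheritedClass>(
            JsonConvert.SerializeObject(baseObject));
        derived.Email = "abc@mail.com";

        // GetJson() is inherited, and serializing `this` picks up the runtime type,
        // so all three properties appear (property order may vary), e.g.
        // {"Email":"abc@mail.com","Username":"abc","Password":"123"}
        Console.WriteLine(derived.GetJson());
    }
}
```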
stackoverflow
{ "language": "en", "length": 544, "provenance": "stackexchange_0000F.jsonl.gz:912806", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691645" }
d74f01db7c2a0778ea9c6d2ee2dbcab9ff1d000a
Q: Will 2 threads ever access my Java AWS Lambda concurrently? Does AWS maintain a thread pool and dispatch concurrent incoming requests to the same Lambda instance, or spin up another instance in these circumstances? I know that at some load factor another instance will be started, but can I rely on single-threaded access within a Lambda? A: A Lambda instance processes one event at a time. If more events arrive before the event is processed, a new instance is spawned. Copied from the AWS Lambda developer guide: The first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler method to process the event. When the function returns a response, it sticks around to process additional events. If you invoke the function again while the first event is being processed, Lambda creates another instance.
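A small illustrative handler (written against the aws-lambda-java-core interfaces) that relies on this behaviour: per-instance state is reused across sequential warm invocations but is never touched by two events at once, while each concurrently spawned instance keeps its own copy. This is a sketch, not part of the original question:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class CounterHandler implements RequestHandler<Object, String> {

    // Survives warm starts of this particular instance; not shared across instances.
    private int invocationsOnThisInstance = 0;

    @Override
    public String handleRequest(Object input, Context context) {
        // Safe without synchronization: a single instance handles one event at a time.
        invocationsOnThisInstance++;
        return "invocation #" + invocationsOnThisInstance + " on this instance";
    }
}
```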
stackoverflow
{ "language": "en", "length": 141, "provenance": "stackexchange_0000F.jsonl.gz:912807", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691657" }
8ea3b61b1ba48fb1043103dbdfb1f1b8749c60c2
Q: Stop Application Insights including path parameters in the Operation Name Our ASP.NET MVC application includes some URI path parameters, like: https://example.com/api/query/14hes1017ceimgS2ESsIec In Application Insights, this URI above becomes Operation Name GET /api/query/14hes1017ceimgS2ESsIec We don't want millions of unique Operations like this; it's just one code method serving them all (see below). We want to roll them up under an Operation Name like GET /api/query/{path} Here is the code method - I think App Insights could detect that the URI contains a query parameter... but it doesn't. [Route("api/query/{hash}")] public HttpResponseMessage Get(string hash) { ... A: The reason Application Insights does not detect that the suffix of your Operation Name is a parameter is because the SDK does not look at your code, and for all practical purposes that's a valid URI. Two options to get what you want: * *Change your API to pass the parameter in the query string (that is stripped out of the Operation Name) *Implement your own ITelemetryProcessor (detailed explanation can be found here), and remove the suffix hash from the Operation Name yourself A: I hacked it with this hardcoded OperationNameMunger (using these docs for inspiration). I wired it into the ApplicationInsights.config, straight after the OperationNameTelemetryInitializer. using System.Text.RegularExpressions; using Microsoft.ApplicationInsights.Channel; using Microsoft.ApplicationInsights.Extensibility; namespace My.Namespace { public class OperationNameMunger : ITelemetryInitializer { public void Initialize(ITelemetry telemetry) { var existingOpName = telemetry.Context?.Operation?.Name; if (existingOpName == null) return; const string matchesInterestingOps = "^([A-Z]+ /api/query/)[^ ]+$"; var match = Regex.Match(existingOpName, matchesInterestingOps); if (match.Success) { telemetry.Context.Operation.Name = match.Groups[1].Value + "{hash}"; } } } } A: MS are working on this feature with https://github.com/Microsoft/ApplicationInsights-dotnet-server/issues/176 A: When I ran into this I felt like it would be more useful to have the actual text of the route as the operation name, rather than try to identify all the different ways an ID could be constructed. The problem is that route template exists down the tree from HttpRequestMessage, but in a TelemetryInitializer you end up with only access to HttpContext.Current.Request which is an HttpRequest. They don't make it easy but this code works: // This class runs at the start of each request and gets the name of the // route template from actionContext.ControllerContext?.RouteData?.Route?.RouteTemplate // then stores it in HttpContext.Current.Items public class AiRewriteUrlsFilter : System.Web.Http.Filters.ActionFilterAttribute { internal const string AiTelemetryName = "AiTelemetryName"; public override void OnActionExecuting(HttpActionContext actionContext) { string method = actionContext.Request?.Method?.Method; string routeData = actionContext.ControllerContext?.RouteData?.Route?.RouteTemplate; if (!string.IsNullOrEmpty(routeData) && routeData.StartsWith("api/1.0/") && HttpContext.Current != null) { HttpContext.Current.Items.Add(AiTelemetryName, $"{method} /{routeData}"); } base.OnActionExecuting(actionContext); } } // This class runs when the telemetry is initialized and pulls // the value we set in HttpContext.Current.Items and uses it // as the new name of the telemetry. 
public class AiRewriteUrlsInitializer : ITelemetryInitializer { public void Initialize(ITelemetry telemetry) { if (telemetry is RequestTelemetry rTelemetry && HttpContext.Current != null) { string telemetryName = HttpContext.Current.Items[AiRewriteUrlsFilter.AiTelemetryName] as string; if (!string.IsNullOrEmpty(telemetryName)) { rTelemetry.Name = telemetryName; } } } } A: Here is a solution for ASP.NET using Minimal API's. This will use the routePattern from the Endpoint as the name and operation_Name. using Microsoft.ApplicationInsights.Channel; using Microsoft.ApplicationInsights.DataContracts; using Microsoft.ApplicationInsights.Extensibility; namespace Web.ApplicationInsights; public class UseRoutePatternAsNameTelemetryInitializer : ITelemetryInitializer { private readonly IHttpContextAccessor _httpContextAccessor; public UseRoutePatternAsNameTelemetryInitializer(IHttpContextAccessor httpContextAccessor) { _httpContextAccessor = httpContextAccessor; } public void Initialize(ITelemetry telemetry) { var httpContext = _httpContextAccessor.HttpContext; if (telemetry is RequestTelemetry requestTelemetry && httpContext != null) { var endpoint = httpContext.GetEndpoint(); if (endpoint is RouteEndpoint routeEndpoint) { var telemetryName = CreateTelemetryName(routeEndpoint, httpContext); requestTelemetry.Name = telemetryName; requestTelemetry.Context.Operation.Name = telemetryName; } } } private static string CreateTelemetryName(RouteEndpoint routeEndpoint, HttpContext httpContext) { var routePattern = routeEndpoint.RoutePattern.RawText ?? ""; var routeName = routePattern.StartsWith("/") ? routePattern : $"/{routePattern}"; var telemetryName = $"{httpContext.Request.Method} {routeName}"; return telemetryName; } } And the following in Program.cs AddApplicationInsightsTelemetry(builder); static void AddApplicationInsightsTelemetry(WebApplicationBuilder webApplicationBuilder) { webApplicationBuilder.Services.AddApplicationInsightsTelemetry(); webApplicationBuilder.Services.AddHttpContextAccessor(); webApplicationBuilder.Services.AddSingleton<ITelemetryInitializer, UseRoutePatternAsNameTelemetryInitializer>(); } A: Inspired by @Mike's answer. * *Updated for ASP.NET Core 5/6 *Uses route name, if specified. *Uses template and API version, if route data is available. Telemetry name before/after: GET /chat/ba1ce6bb-01e8-4633-918b-08d9a363a631/since/2021-11-18T18:51:08 GET /chat/{id}/since/{timestamp} https://gist.github.com/angularsen/551bcbc5f770d85ff9c4dfbab4465546 The solution consists of: * *Global MVC action filter, to compute telemetry name from route data. *ITelemetryInitializer to update the telemetry name. *Configure filter and initializer in ASP.NET's Startup class Global filter to compute the telemetry name from the API action route data. #nullable enable using System; using Microsoft.AspNetCore.Http; using Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.Mvc.Filters; namespace Digma.Api.Common.Telemetry { /// <summary> /// Action filter to construct a simpler telemetry name from the route name or the route template. /// <br/><br/> /// Out of the box, Application Insights sometimes uses request telemetry names like "GET /chat/ba1ce6bb-01e8-4633-918b-08d9a363a631/since/2021-11-18T18:51:08". /// This makes it hard to see how many requests were for a particular API action. /// This is a <a href="https://github.com/microsoft/ApplicationInsights-dotnet/issues/1418">known issue</a>. 
/// <br/><br/> /// - If route name is defined, then use that.<br/> /// - If route template is defined, then the name is formatted as "{method} /{template} v{version}". /// </summary> /// <example> /// - <b>"MyCustomName"</b> if route name is explicitly defined with <c>[Route("my_path", Name="MyCustomName")]</c><br/> /// - <b>"GET /config v2.0"</b> if template is "config" and API version is 2.0.<br/> /// - <b>"GET /config"</b> if no API version is defined. /// </example> /// <remarks> /// The value is passed on via <see cref="HttpContext.Items"/> with the key <see cref="SimpleRequestTelemetryNameInitializer.TelemetryNameKey"/>. /// </remarks> public class SimpleRequestTelemetryNameActionFilter : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext context) { var httpContext = context.HttpContext; var attributeRouteInfo = context.ActionDescriptor.AttributeRouteInfo; if (attributeRouteInfo?.Name is { } name) { // If route name is defined, it takes precedence. httpContext.Items.Add(SimpleRequestTelemetryNameInitializer.TelemetryNameKey, name); } else if (attributeRouteInfo?.Template is { } template) { // Otherwise, use the route template if defined. string method = httpContext.Request.Method; string versionSuffix = GetVersionSuffix(httpContext); httpContext.Items.Add(SimpleRequestTelemetryNameInitializer.TelemetryNameKey, $"{method} /{template}{versionSuffix}"); } base.OnActionExecuting(context); } private static string GetVersionSuffix(HttpContext httpContext) { try { var requestedApiVersion = httpContext.GetRequestedApiVersion()?.ToString(); // Add leading whitespace so we can simply append version string to telemetry name. if (!string.IsNullOrWhiteSpace(requestedApiVersion)) return $" v{requestedApiVersion}"; } catch (Exception) { // Some requests lack the IApiVersioningFeature, like requests to get swagger doc } return string.Empty; } } } Telemetry initializer that updates the name of RequestTelemetry. using Microsoft.ApplicationInsights.Channel; using Microsoft.ApplicationInsights.DataContracts; using Microsoft.ApplicationInsights.Extensibility; using Microsoft.AspNetCore.Http; namespace Digma.Api.Common.Telemetry { /// <summary> /// Changes the name of request telemetry to the value assigned by <see cref="SimpleRequestTelemetryNameActionFilter"/>. /// </summary> /// <remarks> /// The value is passed on via <see cref="HttpContext.Items"/> with the key <see cref="TelemetryNameKey"/>. /// </remarks> public class SimpleRequestTelemetryNameInitializer : ITelemetryInitializer { internal const string TelemetryNameKey = "SimpleOperationNameInitializer:TelemetryName"; private readonly IHttpContextAccessor _httpContextAccessor; public SimpleRequestTelemetryNameInitializer(IHttpContextAccessor httpContextAccessor) { _httpContextAccessor = httpContextAccessor; } public void Initialize(ITelemetry telemetry) { var httpContext = _httpContextAccessor.HttpContext; if (telemetry is RequestTelemetry requestTelemetry && httpContext != null) { if (httpContext.Items.TryGetValue(TelemetryNameKey, out var telemetryNameObj) && telemetryNameObj is string telemetryName && !string.IsNullOrEmpty(telemetryName)) { requestTelemetry.Name = telemetryName; } } } } } ASP.NET startup class to configure the global filter and telemetry initializer. public class Startup { public void ConfigureServices(IServiceCollection services) { // Register telemetry initializer. 
services.AddApplicationInsightsTelemetry(); services.AddSingleton<ITelemetryInitializer, SimpleRequestTelemetryNameInitializer>(); services.AddMvc(opt => { // Global MVC filters. opt.Filters.Add<SimpleRequestTelemetryNameActionFilter>(); }); } public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { // ...other configuration } }
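For completeness, a sketch of the first option mentioned at the top of this thread (moving the hash into the query string); it is written as an ASP.NET Web API 2 controller, and the controller name and response body are only illustrative:

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class QueryController : ApiController
{
    // With the hash out of the path, every request is grouped under the
    // operation name "GET /api/query"; callers use /api/query?hash=14hes1017ceimgS2ESsIec
    [HttpGet]
    [Route("api/query")]
    public HttpResponseMessage Get([FromUri] string hash)
    {
        return Request.CreateResponse(HttpStatusCode.OK, hash);
    }
}
```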
stackoverflow
{ "language": "en", "length": 1141, "provenance": "stackexchange_0000F.jsonl.gz:912808", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691658" }
11489cfb174599525a0676aad199539de3642a44
Q: Remove Time From Pandas DateTime on Excel Export I have a Pandas DataFrame queried with pyODBC that returns 'dates' as floats. I change the data types to datetime after converting to a string with ymd formatting, and then create an Excel file with ExcelWriter. The resulting Excel data keeps the yyyymmdd 00:00:00 format. Some posts suggest creating 'helper' columns in Pandas and using dt.normalize (?), but I would like to do it all on export ... possible? Better way in general? Note [date] is a list of three columns df[date] = df[date].apply(lambda x: pd.to_datetime(x.astype(str), format = '%Y%m%d')) df Col 1 2017-01-19 2016-12-29 2017-01-04 2016-12-29 2017-01-04 writer = ExcelWriter('MyData.xlsx', date_format = 'yyyy mm dd') df.to_excel(writer, 'DATA') writer.save() 2017-01-19 00:00:00 2016-12-29 00:00:00 2017-01-04 00:00:00 2016-12-29 00:00:00 2017-01-04 00:00:00 2017-01-04 00:00:00 2017-01-16 00:00:00 A: I think the problem is that pandas stores these as datetime values, so you need to define datetime_format as well. Docs. writer = pd.ExcelWriter('MyData.xlsx', date_format = 'yyyy mm dd', datetime_format='yyyy mm dd')
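A runnable sketch of the suggested fix; the xlsxwriter engine is an assumption here (it honours both format options), and the sample data mirrors the column shown in the question:

```python
import pandas as pd

df = pd.DataFrame({"Col 1": ["20170119", "20161229", "20170104"]})
df["Col 1"] = pd.to_datetime(df["Col 1"], format="%Y%m%d")

# datetime_format controls datetime64 columns; date_format covers plain dates.
with pd.ExcelWriter("MyData.xlsx",
                    engine="xlsxwriter",
                    date_format="yyyy mm dd",
                    datetime_format="yyyy mm dd") as writer:
    df.to_excel(writer, sheet_name="DATA", index=False)
```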
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:912814", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691675" }
58fc0b9037303606da28737dfa796a6c594f59c7
Q: Promise returning undefined I am trying to use promise to send an ajax request to a php script which checks if a file exist on the server and returns a boolean value. I have the below code but the fileExists function always return undefined. How can I wrap the promise in a function and have the function return the promise value? function fileExists(url) { var promise = new Promise(function(resolve, reject) { var xhr = new XMLHttpRequest(); xhr.onload = function() { resolve(this.responseText); }; xhr.onerror = reject; xhr.open('GET', url); xhr.send(); }); promise.then(function(e) { return e; }); } var result = fileExists("url_to_file"); A: Try this: function fileExists(url) { return promise = new Promise(function(resolve, reject) { var xhr = new XMLHttpRequest(); xhr.onload = function() { resolve(this.responseText); }; xhr.onerror = reject; xhr.open('GET', url); xhr.send(); }); } fileExists("url_to_file").then(text => console.log(text)); Your function returns nothing. If you return the promise you can hold on to the data once it's resolved. A: This part of your code: promise.then(function(e) { return e; }); only returns e to the callback function. You have to handle the result within that callback. promise.then(function() { // handle result; }); Or might as well return the promise as shown by @Ole. A: You should return the promise, because you need to assign your result variable asynchronous. function fileExists(url) { return new Promise(function(resolve, reject) { var xhr = new XMLHttpRequest(); xhr.onload = function() { resolve(this.responseText); }; xhr.onerror = reject; xhr.open('GET', url); xhr.send(); }); } fileExists("url_to_file").then(console.log); A: Your function must return a Promise. function fileExists(url) { return new Promise(function (resolve, reject) { var xhr = new XMLHttpRequest(); xhr.onload = function() { resolve(this.responseText); }; xhr.onerror = reject(); xhr.open('GET', url); xhr.send(); }); } Then just call the function and print the resolved text fileExists("url_to_file").then(function (text) { console.log(text); });
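A short sketch of how the corrected, promise-returning fileExists from the answers above would be consumed; the point is that the resolved value is only available inside then() or via await, never as a plain synchronous return value:

```javascript
// Assumes fileExists(url) is defined as in the answers above and returns a Promise.
async function main() {
  try {
    const responseText = await fileExists('url_to_file');
    console.log('server replied with:', responseText);
  } catch (err) {
    console.error('request failed', err);
  }
}

main();
```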
stackoverflow
{ "language": "en", "length": 291, "provenance": "stackexchange_0000F.jsonl.gz:912847", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691804" }
a641eace4bac9044890d861cfe7d27bdc4effb82
Q: How to get Default Screen Timeout Time from settings programmatically? I want to get the default screen timeout programmatically. I know how to set the screen timeout programmatically by using the code below: android.provider.Settings.System.putInt(getContentResolver(), Settings.System.SCREEN_OFF_TIMEOUT, <Value for Timeout>); But I am not able to get the default screen timeout from settings. A: You need to use getInt() for that: Settings.System.getInt(getContentResolver(), Settings.System.SCREEN_OFF_TIMEOUT) Reference
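A small sketch wrapping the call with the three-argument overload of getInt(), which takes a fallback value instead of throwing SettingNotFoundException; the class name and fallback are arbitrary:

```java
import android.content.ContentResolver;
import android.provider.Settings;

public final class ScreenTimeoutReader {

    // Returns the screen-off timeout in milliseconds, or fallbackMs if the
    // setting is not present on the device.
    public static int getScreenOffTimeout(ContentResolver resolver, int fallbackMs) {
        return Settings.System.getInt(resolver, Settings.System.SCREEN_OFF_TIMEOUT, fallbackMs);
    }
}
```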
stackoverflow
{ "language": "en", "length": 59, "provenance": "stackexchange_0000F.jsonl.gz:912850", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691810" }
4075883d18b99b33d5590be55ac54c7a8fbd21ed
Q: Failed to resolve: com.android.support.design:25.4.0 I added the following line to my build.gradle(Module:app): compile 'com.android.support:design:25.4.0' But when executing Gradle I'm getting Failed to resolve: com.android.support.design:25.4.0 I got that the support code from the android support design library and added it to a new project. I added it to the dependency section as such: dependencies { compile fileTree(dir: 'libs', include: ['*.jar']) androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', { exclude group: 'com.android.support', module: 'support-annotations' }) compile 'com.android.support:appcompat-v7:25.3.1' compile 'com.android.support.constraint:constraint-layout:1.0.2' testCompile 'junit:junit:4.12' compile 'com.android.support:design:25.4.0' } Any ideas on what I'm doing wrong? A: Always keep appcompact version and support lib versionssame, so change com.android.support:design:25.4.0 to com.android.support:design:25.3.1 A: allprojects { repositories { google() jcenter() mavenCentral() } } A: Mr. Bhavesh Patadiya give us a good solution. However, I'd like to share something more, to make fix process more explicit. There are two "build.gradle" files under the project directory. Their pathes are to be "Your-project-root-dir/build.gradle" and "Your-project-root-dir/app/build.gradle" respectively. When you see the error information in your android studio, and try to trace the file, you will probably open the second one. You should add this statement in the first file ("Your-project-root-dir/build.gradle"). allprojects { repositories { jcenter() maven { url "https://maven.google.com" } } } and add the statements in the second build.gradle ("Your-project-root-dir/app/build.gradle") dependencies { ... compile "com.android.support:support-core-utils:27.0.2" } A: A more updated version of the answer of "Bhavesh Patadiya" : * *In your project build.gradle file, add google() into the repositories blocks: repositories { jcenter() google() } *Update the same file with a newer Gradle version: classpath 'com.android.tools.build:gradle:2.3.3' *If the above cause you new issues or the same issue, exit Android-Studio, and delete the "gradle" folder/s (maybe also ".gradle" folder) and the "build" folder and sub-folders, and then open Android-Studio again. A: Important: The support libraries are now available through Google's Maven repository. You do not need to download the support repository from the SDK Manager. For more information, see Support Library Setup. Step 1: Open the build.gradle file for your application. Step 2: Make sure that the repositories section includes a maven section with the "https://maven.google.com" endpoint. For example: allprojects { repositories { jcenter() maven { url "https://maven.google.com" } } } Step 3: Add the support library to the dependencies section. For example, to add the v4 core-utils library, add the following lines: dependencies { ... compile "com.android.support:support-core-utils:25.4.0" } A: You need to update the android support Repository in the SDK manager . Also the Design Library depends on the Support v4 and AppCompat Support Libraries. Same version android support must be the same with others.. 
compile 'com.android.support:appcompat-v7:25.3.1' <-- same compile 'com.android.support:design:23.3.1' <-- same A: after adding : maven { url "https://maven.google.com" } make sure your Gradle sync is on ONLINE mode you can check it from: Android studio -> Preferences -> Build, Execution, Deployment -> Gradle -> Offline work (make sure this check box is not selected) A: This problem occurs when there is andoridtestImplementation is added in app.build. Remove testImplementation,androidTestImplementation from the app.build, that solves this issue. A: Above answers did't resolve anything for me. * *Tried syncing the project- Failed. *Tried building the project -Failed Problem found : Sdk Support Repository was corrupted . Fix: Go to the SDK manager, click the "SDK Tools" tab. If the check-mark for "Support Repository" is selected, unselect it and click OK. This will delete all of the files in the repository. Then recheck the check-mark, click GO and reinstall the repository. A: If you still have the issue, check the project settings for offline mode. if offline mode is on, then off and sync the project. That fixed my issue. A: There is no library by that name. There is com.android.support:recyclerview-v7:25.4.0. Failed to resolve com.android.support:support-compat:25.4.0 Failed to resolve com.android.support:support-core-ui:25.4.0 I am trying to include this library to my project by adding compile 'jp.wasabeef:recyclerview-animators:2.2.7' so remove this line from gradle my error just resolved
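A sketch of how the two main fixes above fit together across the Gradle files; the versions shown are only an example and should match the project's compileSdkVersion:

```groovy
// Project-level build.gradle: make Google's Maven repository available.
allprojects {
    repositories {
        jcenter()
        maven { url "https://maven.google.com" }   // or google() on newer Gradle plugin versions
    }
}

// Module-level app/build.gradle: keep appcompat and design on the same version.
dependencies {
    compile 'com.android.support:appcompat-v7:25.3.1'
    compile 'com.android.support:design:25.3.1'
}
```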
stackoverflow
{ "language": "en", "length": 634, "provenance": "stackexchange_0000F.jsonl.gz:912865", "question_score": "63", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44691858" }
59fb79c08d47af1260823913a9172dd40845a479
Q: TypeError: User.save is not a function I am trying to post the user data to mongodb but I am getting an error as : TypeError: user.save is not a function how to fix this? I am using express here is my code: router.post('/', function (req, res) { let params = { id: req.body.id, name: req.body.name } User .save({ params }) .find() .then((data) => { res.render('index', { title: 'List', data: data }); }) .catch((err) => { return res.send(err); }); }) This is my schema: var mongoose = require('mongoose'); var Schema = mongoose.Schema; var User = new Schema({ id: String, name: String }); User.pre('save', function (next) { next(); }); module.exports = mongoose.model('User', User); A: You have to create a new User instance and call save on it router.post('/', function (req, res) { let user = new User ({ id: req.body.id, name: req.body.name }) user.save((err) => { if (err) { return res.send(err); } User .find() .then((data) => { res.render('index', { title: 'List', data: data }); }) .catch((err) => { return res.send(err); }); }); })
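The same route written as a sketch against Mongoose's promise API instead of callbacks; the require path for the User model is illustrative:

```javascript
const express = require('express');
const router = express.Router();
const User = require('./models/user');   // illustrative path to the schema module

router.post('/', async (req, res) => {
  try {
    // save() exists on a document instance, not on the model itself.
    const user = new User({ id: req.body.id, name: req.body.name });
    await user.save();

    const data = await User.find();
    res.render('index', { title: 'List', data: data });
  } catch (err) {
    res.send(err);
  }
});

module.exports = router;
```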
stackoverflow
{ "language": "en", "length": 172, "provenance": "stackexchange_0000F.jsonl.gz:912901", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692005" }
6423c4630d542a82140206ee2854b82bd4f85e1c
Q: nginx logging : first byte and last byte received I need to log all requests where nginx is taking a long time to respond to the user. Upstream time is around 200 ms, whereas the nginx response time is > 333 secs. I would like nginx to log: * When it received the first byte of the request * When it received the last byte of the request * When it sent the request to upstream * When it got the response from upstream A: You can create a custom logging pattern by editing the nginx config /etc/nginx/nginx.conf log_format apm '"$time_local" client=$remote_addr ' 'method=$request_method request="$request" ' 'request_length=$request_length ' 'status=$status bytes_sent=$bytes_sent ' 'body_bytes_sent=$body_bytes_sent ' 'referer=$http_referer ' 'user_agent="$http_user_agent" ' 'upstream_addr=$upstream_addr ' 'upstream_status=$upstream_status ' 'request_time=$request_time ' 'upstream_response_time=$upstream_response_time ' 'upstream_connect_time=$upstream_connect_time ' 'upstream_header_time=$upstream_header_time'; This sample log format, called apm, includes the four NGINX timing variables along with some other useful information. Then update the access log to use that format: access_log /var/log/nginx/access.log apm; To find the requests with the longest response times, you will have to send the data to ELK or a similar service and filter it there: https://www.youtube.com/watch?v=XaFH2PcYI64&ab_channel=TravelsCode You can create a dashboard in Kibana to get a full picture of the results, e.g.: https://blog.ruanbekker.com/blog/2019/04/02/setup-kibana-dashboards-for-nginx-log-data-to-understand-the-behavior/
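Building on the apm format above, a sketch of logging only the slow requests directly in nginx: it uses a map on $request_time (which nginx measures from the first bytes read from the client until the last bytes are sent) together with the if= parameter of access_log. The 5-second threshold and file name are arbitrary choices:

```nginx
# In the http {} context, alongside the log_format above.
map $request_time $is_slow {
    default           0;
    "~^[5-9]\."       1;   # 5.x to 9.x seconds
    "~^[0-9]{2,}\."   1;   # 10 seconds and more
}

server {
    # ... existing listen / location configuration ...
    access_log /var/log/nginx/slow.log apm if=$is_slow;
}
```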
stackoverflow
{ "language": "en", "length": 193, "provenance": "stackexchange_0000F.jsonl.gz:912914", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692045" }
f21101c925b05269940c7ed623fccc1ea743dd32
Q: How to calculate number of days in each month in php need to calculate the number of days from current date to 27th of each month in PHP In below code, it's calculating correctly for current month but if the current date is 28th it should calculate for next month. $year = date("y"); $month = date("m"); $day = '27'; $current_date = new DateTime(date('Y-m-d'), new DateTimeZone('Asia/Dhaka')); $end_date = new DateTime("$year-$month-$day", new DateTimeZone('Asia/Dhaka')); $interval = $current_date->diff($end_date); echo $interval->format('%a day(s)'); A: Try php cal_days_in_month function cal_days_in_month — Return the number of days in a month for a given year and calendar Ex: $number = cal_days_in_month(CAL_GREGORIAN, 8, 2003); // 31 echo "There were {$number} days in August 2003"; Reference A: I wrote this script quick, because I don't have the time to test it yet. EDIT: $day = 27; $today = date('d'); if($today < $day){ $math = $day - $today; echo "There are " . $math . " days left until the 27th."; } else { $diff = date('t') - $today; $math = $diff + $day; echo "There are " . $math . " days left until the 27th of the next month."; } A: Try below code, <?php $year = date("y"); $month = date("m"); $day = '27'; $current_date = new DateTime(date('Y-m-d'), new DateTimeZone('Asia/Dhaka')); $end_date = new DateTime("$year-$month-$day", new DateTimeZone('Asia/Dhaka')); if($current_date->getTimestamp()<=$end_date->getTimestamp()){ $interval = $current_date->diff($end_date); echo $interval->format('%a day(s)'); } else{ $interval = $end_date->diff($current_date); echo $interval->format('-%a day(s)'); } ?> A: $now = time(); // or your date as well $your_date = strtotime("2010-01-01"); $datediff = $now - $your_date; echo floor($datediff / (60 * 60 * 24)); Source : Finding the number of days between two dates A: by this.... <?php $d=cal_days_in_month(CAL_GREGORIAN,10,2005); echo "There was $d days in October 2005"; ?>
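A compact sketch of the rollover logic using DateTime only, with the same Asia/Dhaka timezone as the question; it counts the days from today to the next 27th, moving to next month's 27th once today is past it:

```php
<?php
$tz     = new DateTimeZone('Asia/Dhaka');
$today  = new DateTime('today', $tz);
$target = new DateTime($today->format('Y-m-27'), $tz);

if ($today > $target) {
    // Already past the 27th, so count towards the 27th of next month.
    $target->modify('+1 month');
}

echo $today->diff($target)->days . " day(s)\n";
```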
stackoverflow
{ "language": "en", "length": 284, "provenance": "stackexchange_0000F.jsonl.gz:912915", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692047" }
2127b317747fa4752bc9cc238d9184ccec2c0a92
Stackoverflow Stackexchange Q: What is the difference between res.send and res.write in express? I am a beginner to express.js and I am trying to understand the difference between res.send and res.write? A: One of the most important differences not indicated in any of the answers is "draining". res.write may return true or false. As per the documentation: Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is free again. So, when doing res.write, the caller should hold off writing until the drain event emits if res.write returned false. All this is handled automatically in res.send. The trade-off is the buffering you will have to do when using the latter.
Q: What is the difference between res.send and res.write in express? I am a beginner to express.js and I am trying to understand the difference between res.send and res.write? A: One of the most important differences not indicated in any of the answers is "draining". res.write may return true or false. As per the documentation: Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is free again. So, when doing res.write, the caller should hold off writing until the drain event emits if res.write returned false. All this is handled automatically in res.send. The trade-off is the buffering you will have to do when using the latter. A: res.send is equivalent to res.write + res.end So the key difference is res.send can be called only once whereas res.write can be called multiple times followed by a res.end. But apart from that, res.send is part of Express. It can automatically set the Content-Length response header. But there may be a chance of a memory spike with res.send(); in the case of large files, the application can hang in between. A: res.send * *res.send is only in Express.js. *Performs many useful tasks for simple non-streaming responses. *Ability to automatically assign the Content-Length HTTP response header field. *Ability to provide automatic HEAD & HTTP cache freshness support. *Practical explanation * *res.send can only be called once, since it is equivalent to res.write + res.end() *Example: app.get('/user/:id', function (req, res) { res.send('OK'); }); For more details: * *Express.js: Response res.write * *Can be called multiple times to provide successive parts of the body. *Example: response.write('<html>'); response.write('<body>'); response.write('<h1>Hello, World!</h1>'); response.write('</body>'); response.write('</html>'); response.end(); For more details: * *response.write(chunk[, encoding][, callback]) *Anatomy of an HTTP Transaction: Sending Response Body A: I am also a beginner in this. But what I observed is that, if you write only res.write() the page will be loading continuously until you write res.end(). Whereas if you write res.send() there is no need for res.end(). res.send() basically does both res.write() and res.end(). A: Suppose you have two lines that need to be shown and you use res.send as res.send("shows only First Line") res.send("won't show second Line") Then only the first line will show up, whereas using res.write you have the flexibility to write multiple lines such as res.write("Shows first line") res.write("Shows second line") res.send() A: res.send() is equivalent to res.write() + res.end(). Basically, res.send is the Express way and res.write + res.end() is the bare-metal Node way to send data.
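To illustrate the draining point, here is a rough sketch of an Express handler that respects backpressure when calling res.write many times (the chunk contents and count are made up for illustration):

const express = require('express');
const app = express();

app.get('/stream', (req, res) => {
  let i = 0;
  const total = 1000000;
  function writeMore() {
    let ok = true;
    while (i < total && ok) {
      ok = res.write('chunk ' + (i++) + '\n'); // false means the buffer is full
    }
    if (i < total) {
      res.once('drain', writeMore);            // resume once the buffer empties
    } else {
      res.end();
    }
  }
  writeMore();
});

app.listen(3000);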
stackoverflow
{ "language": "en", "length": 434, "provenance": "stackexchange_0000F.jsonl.gz:912916", "question_score": "77", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692048" }
cb4baf62d9f4c0bad03a491ba4aeded54fb07706
Stackoverflow Stackexchange Q: How to add an array/list to an object I create a student class with : setClass("student", slots=list(name="character", age="numeric", GPA="numeric")) Now I want to create another class : setClass("classRoom", slots=list(id = "numeric"),contains = "student") only issue is instead of one student I want to add a list of students. How do I achieve this? A: I guess you have already achieved what you wanted. Adding student class in classRoom will give access to all the properties of class student. You can now enter new students in classRoom with the properties of class student. # define a class called student setClass("student", slots=list(name="character", age="numeric", GPA="numeric")) # inherit from student setClass("classRoom", slots=list(id = "numeric"),contains = "student") # Now create new objects of classroom: > s1 <- new("classRoom",name="John", age=21, GPA=3.5, id=001) > s2 <- new("classRoom",name="Grisham", age=24, GPA=345, id=002) > s3 <- new("classRoom",name="Flamingo", age=22, GPA=3.8, id=003) Like this, you have added students in your classRoom.
Q: How to add an array/list to an object I create a student class with : setClass("student", slots=list(name="character", age="numeric", GPA="numeric")) Now I want to create another class : setClass("classRoom", slots=list(id = "numeric"),contains = "student") only issue is instead of one student I want to add a list of students. How do I achieve this? A: I guess you have already achieved what you wanted. Adding student class in classRoom will give access to all the properties of class student. You can now enter new students in classRoom with the properties of class student. # define a class called student setClass("student", slots=list(name="character", age="numeric", GPA="numeric")) # inherit from student setClass("classRoom", slots=list(id = "numeric"),contains = "student") # Now create new objects of classroom: > s1 <- new("classRoom",name="John", age=21, GPA=3.5, id=001) > s2 <- new("classRoom",name="Grisham", age=24, GPA=345, id=002) > s3 <- new("classRoom",name="Flamingo", age=22, GPA=3.8, id=003) Like this, you have added students in your classRoom.
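If the goal is literally a list of students inside one classRoom object, a slot of type "list" can be declared instead of (or in addition to) inheriting; a minimal sketch:

# classRoom holds an id plus a plain list of student objects
setClass("student", slots = list(name = "character", age = "numeric", GPA = "numeric"))
setClass("classRoom", slots = list(id = "numeric", students = "list"))

s1 <- new("student", name = "John", age = 21, GPA = 3.5)
s2 <- new("student", name = "Flamingo", age = 22, GPA = 3.8)
room <- new("classRoom", id = 1, students = list(s1, s2))

room@students[[2]]@name   # "Flamingo"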
stackoverflow
{ "language": "en", "length": 150, "provenance": "stackexchange_0000F.jsonl.gz:912918", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692063" }
7c13302d5a7f5a885e3e0586634a8da283bf06db
Stackoverflow Stackexchange Q: How to convert screenshot object into file object in Ashot selenium I wrote a java method whose return type is file. This method grabs a screenshot of screen with Ashot and store it on a Screenshot object. I need to convert that screenshot object into file object so that I can return file object. public static File grabScreenshot() { try { Thread.sleep(Integer.parseInt(Property.getProperty("screenshotDelay"))); } catch (InterruptedException e) { e.printStackTrace(); } catch (NumberFormatException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } File screenshot=null; //creating null file object to return Screenshot screenshot1 = new AShot().shootingStrategy(new ViewportPastingStrategy(1000)).takeScreenshot(driver()); //Here I have to typecast the screenshot1 to file type so that I can return return screenshot; } A: This should do the trick // getImage() will give buffered image which can be used to write to file BufferedImage bi = new AShot() .shootingStrategy(ShootingStrategies.viewportPasting(100)) .takeScreenshot(driver).getImage(); File outputfile = new File("image.jpg"); try { ImageIO.write(bi, "jpg", outputfile); } catch (IOException e) { e.printStackTrace(); } // Print the absolute path to see where the file is created System.out.println(outputfile.getAbsolutePath());
Q: How to convert screenshot object into file object in Ashot selenium I wrote a java method whose return type is file. This method grabs a screenshot of screen with Ashot and store it on a Screenshot object. I need to convert that screenshot object into file object so that I can return file object. public static File grabScreenshot() { try { Thread.sleep(Integer.parseInt(Property.getProperty("screenshotDelay"))); } catch (InterruptedException e) { e.printStackTrace(); } catch (NumberFormatException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } File screenshot=null; //creating null file object to return Screenshot screenshot1 = new AShot().shootingStrategy(new ViewportPastingStrategy(1000)).takeScreenshot(driver()); //Here I have to typecast the screenshot1 to file type so that I can return return screenshot; } A: This should do the trick // getImage() will give buffered image which can be used to write to file BufferedImage bi = new AShot() .shootingStrategy(ShootingStrategies.viewportPasting(100)) .takeScreenshot(driver).getImage(); File outputfile = new File("image.jpg"); try { ImageIO.write(bi, "jpg", outputfile); } catch (IOException e) { e.printStackTrace(); } // Print the absolute path to see where the file is created System.out.println(outputfile.getAbsolutePath());
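Put into the shape the question asks for, a method that returns a File could look roughly like this (temp-file naming and the PNG format are assumptions; it reuses the driver() helper from the question):

public static File grabScreenshot() {
    // take the screenshot and get it as a BufferedImage
    BufferedImage image = new AShot()
            .shootingStrategy(ShootingStrategies.viewportPasting(1000))
            .takeScreenshot(driver()).getImage();
    try {
        File screenshot = File.createTempFile("screenshot", ".png");
        ImageIO.write(image, "png", screenshot);  // write the image to the file
        return screenshot;
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}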
stackoverflow
{ "language": "en", "length": 171, "provenance": "stackexchange_0000F.jsonl.gz:912927", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692095" }
fe74e379aaf10830ddb1d4bd275a1e8e11ed0d44
Stackoverflow Stackexchange Q: Visual Studio Queries open in Browser instead of inside Visual Studio In Visual Studio 2017, when I click on a query, it opens in a browser instead of opening (like it used to) inside Visual Studio. How can I change this so it opens in Visual Studio? A: In Visual Studio 2017 (aka VS version 15) the default behaviour for opening work items has been changed to the browser. You can switch back to the old behaviour: Open Tools > Options > Work Items and change "Open work items in" to "Visual Studio". More details here: https://blogs.msdn.microsoft.com/devops/2016/08/22/work-items-now-open-in-the-web-from-visual-studio-15/
Q: Visual Studio Queries open in Browser instead of inside Visual Studio In Visual Studio 2017, when I click on a query, it opens in a browser instead of opening (like it used to) inside Visual Studio. How can I change this so it opens in Visual Studio? A: In Visual Studio 2017 (aka VS version 15) the default behaviour for opening work items has been changed to the browser. You can switch back to the old behaviour: Open Tools > Options > Work Items and change "Open work items in" to "Visual Studio". More details here: https://blogs.msdn.microsoft.com/devops/2016/08/22/work-items-now-open-in-the-web-from-visual-studio-15/
stackoverflow
{ "language": "en", "length": 98, "provenance": "stackexchange_0000F.jsonl.gz:912967", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692264" }
b89bcb1cb68790a6f6dd1be6c5370905d62f3225
Stackoverflow Stackexchange Q: webpack.optimize.ModuleConcatenationPlugin is not a constructor Getting the following error on Travis CI since upgrading to Webpack 3. It seems to work fine on my local environment, but when I commit to master and kick off Travis it keeps failing now with the following error. 21 06 2017 20:16:31.514:ERROR [config]: Invalid config file! TypeError: webpack.optimize.ModuleConcatenationPlugin is not a constructor at Object.<anonymous> (/home/travis/build/.../webpack.prod.config.babel.js:91:3) at Module._compile (module.js:569:30) at loader (/home/travis/build/.../node_modules/babel-register/lib/node.js:144:5) at Object.require.extensions.(anonymous function) [as .js] (/home/travis/build/.../node_modules/babel-register/lib/node.js:154:7) at Module.load (module.js:503:32) at tryModuleLoad (module.js:466:12) at Function.Module._load (module.js:458:3) And the line it's complaining about: // Webpack 3 Scope Hoisting new webpack.optimize.ModuleConcatenationPlugin(), And I have also set Webpack to version 3 in my package.json, of course. "webpack": "^3.0.0", And my Travis yml is pretty simple: language: node_js sudo: false node_js: - '8' A: Delete node_modules and package-lock.json then run npm install again to generate a new package-lock.json. Once you commit the new package-lock.json, Travis should work correctly. I was having the exact same issue locally, I did what I described and I stopped having that issue.
Q: webpack.optimize.ModuleConcatenationPlugin is not a constructor Getting the following error on Travis CI since upgrading to Webpack 3. It seems to work fine on my local environment, but when I commit to master and kick off Travis it keeps failing now with the following error. 21 06 2017 20:16:31.514:ERROR [config]: Invalid config file! TypeError: webpack.optimize.ModuleConcatenationPlugin is not a constructor at Object.<anonymous> (/home/travis/build/.../webpack.prod.config.babel.js:91:3) at Module._compile (module.js:569:30) at loader (/home/travis/build/.../node_modules/babel-register/lib/node.js:144:5) at Object.require.extensions.(anonymous function) [as .js] (/home/travis/build/.../node_modules/babel-register/lib/node.js:154:7) at Module.load (module.js:503:32) at tryModuleLoad (module.js:466:12) at Function.Module._load (module.js:458:3) And the line it's complaining about: // Webpack 3 Scope Hoisting new webpack.optimize.ModuleConcatenationPlugin(), And I have also set Webpack to version 3 in my package.json, of course. "webpack": "^3.0.0", And my Travis yml is pretty simple: language: node_js sudo: false node_js: - '8' A: Delete node_modules and package-lock.json then run npm install again to generate a new package-lock.json. Once you commit the new package-lock.json, Travis should work correctly. I was having the exact same issue locally, I did what I described and I stopped having that issue.
stackoverflow
{ "language": "en", "length": 169, "provenance": "stackexchange_0000F.jsonl.gz:912971", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692289" }
308a9439791803b74470fb7b613b9fa56f9b2e33
Stackoverflow Stackexchange Q: Angular CLI - ng serve I know that the command "ng serve" creates all my stuff in memory. Now I have such a URL: http://localhost:4200/some-angular-route But I want this URL: http://localhost:4200/subfolder/some-angular-route My question is how can I create such a "subfolder"? My problem is that in my production environment the requests go via Spring Boot Zuul and there are prefixes on the URLs. A: You can set the base href as follows @NgModule({ imports: [ RouterModule.forRoot(routes) ], declarations: [AppComponent], providers: [{ provide: APP_BASE_HREF, useValue: '/urlPrefix' }] }) export class AppModule { } Also, when you build your code, you can give --deploy-url and --base-href as follows. ng build --deploy-url /mySubFolder/ --base-href /urlPrefix/ You can find more here https://github.com/angular/angular-cli/wiki/build
Q: Angular CLI - ng serve I know that the command "ng serve" creates all my stuff in memory. Now I have such a URL: http://localhost:4200/some-angular-route But I want this URL: http://localhost:4200/subfolder/some-angular-route My question is how can I create such a "subfolder"? My problem is that in my production environment the requests go via Spring Boot Zuul and there are prefixes on the URLs. A: You can set the base href as follows @NgModule({ imports: [ RouterModule.forRoot(routes) ], declarations: [AppComponent], providers: [{ provide: APP_BASE_HREF, useValue: '/urlPrefix' }] }) export class AppModule { } Also, when you build your code, you can give --deploy-url and --base-href as follows. ng build --deploy-url /mySubFolder/ --base-href /urlPrefix/ You can find more here https://github.com/angular/angular-cli/wiki/build
stackoverflow
{ "language": "en", "length": 116, "provenance": "stackexchange_0000F.jsonl.gz:913018", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692410" }
51cd80c3faf587de675d2a52e723dd1d75ac69fb
Stackoverflow Stackexchange Q: How to connect to postgresql google cloud sql instance? Need to import postgresql dump using pg_restore command. I have a google cloud sql postgresql instance. When I try to import the postgresql dump I get the following error. I am able to connect to the instance with the command given below. gcloud sql connect instance-name --user postgres It takes me to the psql command line client where I cannot use a database restore command like pg_restore. Does anyone have an idea on how I can actually connect to the Google Cloud SQL instance so that I can perform operations such as pg_restore? A: If you need pg_restore, you can simply use: gcloud sql connect instance-name --user postgres < dbbackup.sql where dbbackup.sql is a file obtained from: pg_dump olddb --format=plain --no-owner -v >> dbbackup.sql
Q: How to connect to postgresql google cloud sql instance? Need to import postgresql dump using pg_restore command. I have a google cloud sql postgresql instance. When I try to import the postgresql dump I get the following error. I am able to connect to the instance with the command given below. gcloud sql connect instance-name --user postgres It takes me to the psql command line client where I cannot use a database restore command like pg_restore. Does anyone have an idea on how I can actually connect to the Google Cloud SQL instance so that I can perform operations such as pg_restore? A: If you need pg_restore, you can simply use: gcloud sql connect instance-name --user postgres < dbbackup.sql where dbbackup.sql is a file obtained from: pg_dump olddb --format=plain --no-owner -v >> dbbackup.sql
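If a custom-format dump and pg_restore are really required, one common route is to connect through the Cloud SQL Proxy so that the standard PostgreSQL client tools can reach the instance; a rough sketch (project, region, instance and database names are placeholders):

# start the proxy, exposing the instance on localhost:5432
cloud_sql_proxy -instances=my-project:europe-west1:instance-name=tcp:5432 &
# restore a custom-format dump with pg_restore through the proxy
pg_restore -h 127.0.0.1 -p 5432 -U postgres -d mydb --no-owner dbbackup.dump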
stackoverflow
{ "language": "en", "length": 129, "provenance": "stackexchange_0000F.jsonl.gz:913031", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692455" }
91044f0cd73680190290a8b6ae459d06931d7add
Stackoverflow Stackexchange Q: Session handling in Google oauth I'm using Google's javascript auth library for authenticating users to my webapp where I check the token's expiry for validity. Currently I'm using the client-side only oauth flow. Few questions: * *Does Google auth handle token refresh by itself or do I have to do so manually? *If it does handle token refresh by itself - is there a hook method I can use to detect when the access token has changed? *If it doesn't - what alternatives am I facing? Many thanks. A: Revisiting this question: * *After a lot of trial and error I've come to the conclusion that Google's javascript auth lib does handle the refreshing of tokens on its own. *No there is no such hook available. *N/A.
Q: Session handling in Google oauth I'm using Google's javascript auth library for authenticating users to my webapp where I check the token's expiry for validity. Currently I'm using the client-side only oauth flow. Few questions: * *Does Google auth handle token refresh by itself or do I have to do so manually? *If it does handle token refresh by itself - is there a hook method I can use to detect when the access token has changed? *If it doesn't - what alternatives am I facing? Many thanks. A: Revisiting this question: * *After a lot of trial and error I've come to the conclusion that Google's javascript auth lib does handle the refreshing of tokens on its own. *No there is no such hook available. *N/A.
stackoverflow
{ "language": "en", "length": 128, "provenance": "stackexchange_0000F.jsonl.gz:913052", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692526" }
99d62de2eab193b2adf8effdcdc8a4895c555825
Stackoverflow Stackexchange Q: Docker Alpine image + grpc: Warning: insecure environment read function 'getenv' used When I used this image: node:6.11.0-alpine to run my service (use GRPC), then keep getting these warnings: D0622 06:52:01.170502843 1 env_linux.c:66] Warning: insecure environment read function 'getenv' used D0622 06:52:01.554446816 12 env_linux.c:66] Warning: insecure environment read function 'getenv' used D0622 06:52:01.559295167 14 env_linux.c:66] Warning: insecure environment read function 'getenv' used D0622 06:52:01.566036292 13 env_linux.c:66] Warning: insecure environment read function 'getenv' used D0622 06:52:01.569975088 15 env_linux.c:66] Warning: insecure environment read function 'getenv' used Do I miss any package? How to fix this? This issue Issue #8104 can be referred. A: I believe this has been fixed in grpc-1.23.0 and newer: https://github.com/grpc/grpc/commit/5efa660b41ab05971901f3a7b19e3890ea1e8884
Q: Docker Alpine image + grpc: Warning: insecure environment read function 'getenv' used When I used this image: node:6.11.0-alpine to run my service (use GRPC), then keep getting these warnings: D0622 06:52:01.170502843 1 env_linux.c:66] Warning: insecure environment read function 'getenv' used D0622 06:52:01.554446816 12 env_linux.c:66] Warning: insecure environment read function 'getenv' used D0622 06:52:01.559295167 14 env_linux.c:66] Warning: insecure environment read function 'getenv' used D0622 06:52:01.566036292 13 env_linux.c:66] Warning: insecure environment read function 'getenv' used D0622 06:52:01.569975088 15 env_linux.c:66] Warning: insecure environment read function 'getenv' used Do I miss any package? How to fix this? This issue Issue #8104 can be referred. A: I believe this has been fixed in grpc-1.23.0 and newer: https://github.com/grpc/grpc/commit/5efa660b41ab05971901f3a7b19e3890ea1e8884
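If the image installs the grpc npm package, pinning it at or above that release is probably the simplest fix; for example, in package.json (version taken from the linked fix):

"dependencies": {
  "grpc": "^1.23.0"
}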
stackoverflow
{ "language": "en", "length": 113, "provenance": "stackexchange_0000F.jsonl.gz:913071", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692590" }
ebfc293de01360921721ad5d2838f51e9b4057a5
Stackoverflow Stackexchange Q: MySQL trigger before insert with INSERT IGNORE Is MySQL trigger which is set before insert to certain table, executed when I call INSERT IGNORE and inserting is ignored? A: Here's a demonstration: mysql> create table foo (foo_id int primary key); mysql> create table bar (foo_id int); mysql> create trigger foo_ins before insert on foo for each row insert into bar set foo_id = new.foo_id; mysql> insert ignore into foo set foo_id=123; Query OK, 1 row affected (0.01 sec) mysql> insert ignore into foo set foo_id=123; Query OK, 0 rows affected, 1 warning (0.00 sec) This should have inserted just one row in the foo table, because the second try would conflict with the primary key value. We see that the second insert affects 0 rows, which means no new row was inserted. Let's see what the effect on the bar table is: mysql> select * from bar; +--------+ | foo_id | +--------+ | 123 | | 123 | +--------+ Two rows were inserted. This proves that the trigger is fired even when doing insert ignore.
Q: MySQL trigger before insert with INSERT IGNORE Is MySQL trigger which is set before insert to certain table, executed when I call INSERT IGNORE and inserting is ignored? A: Here's a demonstration: mysql> create table foo (foo_id int primary key); mysql> create table bar (foo_id int); mysql> create trigger foo_ins before insert on foo for each row insert into bar set foo_id = new.foo_id; mysql> insert ignore into foo set foo_id=123; Query OK, 1 row affected (0.01 sec) mysql> insert ignore into foo set foo_id=123; Query OK, 0 rows affected, 1 warning (0.00 sec) This should have inserted just one row in the foo table, because the second try would conflict with the primary key value. We see that the second insert affects 0 rows, which means no new row was inserted. Let's see what the effect on the bar table is: mysql> select * from bar; +--------+ | foo_id | +--------+ | 123 | | 123 | +--------+ Two rows were inserted. This proves that the trigger is fired even when doing insert ignore.
stackoverflow
{ "language": "en", "length": 176, "provenance": "stackexchange_0000F.jsonl.gz:913081", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692628" }
f4998a4aabf5b87227c831c64750987e6e16687e
Stackoverflow Stackexchange Q: iOS 11 is removing native facebook integration. So, applications with current facebook sdk will not work on iOS 11? Recently, I have read that iOS 11 is dropping the native system integration with Twitter, Facebook, Flickr, and Vimeo. I have used facebook sdk for various apps which are on the store. Will they stop working with the release of iOS 11? A: Depends on what you are using, the login with facebook is still working with beta 2
Q: iOS 11 is removing native facebook integration. So, applications with current facebook sdk will not work on iOS 11? Recently, I have read that iOS 11 is dropping the native system integration with Twitter, Facebook, Flickr, and Vimeo. I have used facebook sdk for various apps which are on the store. Will they stop working with the release of iOS 11? A: Depends on what you are using, the login with facebook is still working with beta 2 A: They shouldn't. The most important thing that changes is the authorization flow with the new SFAuthenticationSession. You should test the functionalities that rely on FB's SDK thoroughly though, to catch any API changes or bugs. A: You have to remove native code (Account Kit) and use Facebook Latest SDK. this will work. If you have only used Account Kit for login then you have to give a new update of the app on AppStore as there is now option to login in setting in iOS 11 now. Ref Blog
stackoverflow
{ "language": "en", "length": 169, "provenance": "stackexchange_0000F.jsonl.gz:913082", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692632" }
4777fcc04dae998b0bda8b025b27d431b067c0f9
Stackoverflow Stackexchange Q: Plotting Networkx graph in Python I recently started using networkx library in python to generate and visualize graph plots. I started with a simple code (comprising of 4 nodes) as shown import networkx as nx import matplotlib.pyplot as plt G = nx.Graph() G.add_edges_from([(1 ,2) , (2 ,3) , (1 ,3) , (1 ,4) ]) nx.draw(G) plt.show() When I run the code for two consecutive times, the outputs for same code is as shown in the images (the orientation of the plot is random) Is it possible to generate the output of the plots with the same/fixed orientation? A: Per default a spring layout is used to draw the graph. This can have a new orientation each time it is drawn. There are also other layouts available. Using e.g. nx.draw_circular or nx.draw_spectral will give you always the same layout. You may also define the positions of the nodes using a dictionary, which maps nodenumber to a position. import networkx as nx import matplotlib.pyplot as plt G = nx.Graph() G.add_edges_from([(1 ,2) , (2 ,3) , (1 ,3) , (1 ,4) ]) pos = { 1: (20, 30), 2: (40, 30), 3: (30, 10),4: (0, 40)} nx.draw_networkx(G, pos=pos) plt.show()
Q: Plotting Networkx graph in Python I recently started using networkx library in python to generate and visualize graph plots. I started with a simple code (comprising of 4 nodes) as shown import networkx as nx import matplotlib.pyplot as plt G = nx.Graph() G.add_edges_from([(1 ,2) , (2 ,3) , (1 ,3) , (1 ,4) ]) nx.draw(G) plt.show() When I run the code for two consecutive times, the outputs for same code is as shown in the images (the orientation of the plot is random) Is it possible to generate the output of the plots with the same/fixed orientation? A: Per default a spring layout is used to draw the graph. This can have a new orientation each time it is drawn. There are also other layouts available. Using e.g. nx.draw_circular or nx.draw_spectral will give you always the same layout. You may also define the positions of the nodes using a dictionary, which maps nodenumber to a position. import networkx as nx import matplotlib.pyplot as plt G = nx.Graph() G.add_edges_from([(1 ,2) , (2 ,3) , (1 ,3) , (1 ,4) ]) pos = { 1: (20, 30), 2: (40, 30), 3: (30, 10),4: (0, 40)} nx.draw_networkx(G, pos=pos) plt.show()
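If the spring layout itself is preferred, its random state can also be fixed so that the same positions come out on every run; a small sketch (the seed argument needs networkx 2.x or newer):

import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (1, 3), (1, 4)])

pos = nx.spring_layout(G, seed=42)  # fixed seed, identical layout each run
nx.draw_networkx(G, pos=pos)
plt.show()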
stackoverflow
{ "language": "en", "length": 197, "provenance": "stackexchange_0000F.jsonl.gz:913088", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692644" }
e4503ee71cc22471c1d983d8dbf72b001044683b
Stackoverflow Stackexchange Q: How can I upgrade Python version and packages in pyenv virtualenv? I used pyenv, pyenv-virtualenv for managing python virtual environments. I have a project working in a Python 3.4 virtual environment. So all installed packages (pandas, numpy, etc.) are not the newest versions. What I want to do is to upgrade the Python version from 3.4 to 3.6 as well as upgrade the other package versions to higher ones. How can I do this easily? A: Here is how you can switch to 3.9.0 for a given virtual environment venv-name: pip freeze > requirements-lock.txt pyenv virtualenv-delete venv-name pyenv virtualenv 3.9.0 venv-name pip install -r requirements-lock.txt Once everything works correctly you can safely remove the temporary requirements lock file: rm requirements-lock.txt Note that using pip freeze > requirements.txt is usually not a good idea as this file is often used to handle your package requirements (not necessarily pip freeze output). It's better to use a different (temporary) file just to be sure.
Q: How can I upgrade Python version and packages in pyenv virtualenv? I used pyenv, pyenv-virtualenv for managing python virtual environments. I have a project working in a Python 3.4 virtual environment. So all installed packages (pandas, numpy, etc.) are not the newest versions. What I want to do is to upgrade the Python version from 3.4 to 3.6 as well as upgrade the other package versions to higher ones. How can I do this easily? A: Here is how you can switch to 3.9.0 for a given virtual environment venv-name: pip freeze > requirements-lock.txt pyenv virtualenv-delete venv-name pyenv virtualenv 3.9.0 venv-name pip install -r requirements-lock.txt Once everything works correctly you can safely remove the temporary requirements lock file: rm requirements-lock.txt Note that using pip freeze > requirements.txt is usually not a good idea as this file is often used to handle your package requirements (not necessarily pip freeze output). It's better to use a different (temporary) file just to be sure. A: OP asked to upgrade the packages alongside Python. No other answers address the upgrade of packages. Lock files are not the answer here. Save your packages to a requirements file without the version. pip freeze | cut -d"=" -f1 > requirements-to-upgrade.txt Delete your environment, create a new one with the upgraded Python version, then install the requirements file. pyenv virtualenv-delete venv-name pyenv virtualenv 3.6.8 venv-name pip install -r requirements-to-upgrade.txt The dependency resolver in pip should try to find the latest package. This assumes you have the upgraded Python version installed (e.g., pyenv install 3.6.8). A: Use pip freeze > requirements.txt to save a list of installed packages. Create a new venv with python 3.6. Install saved packages with pip install -r requirements.txt. When pip finds a universal wheel in its cache it installs the package from the cache. Other packages will be downloaded, cached, built and installed. A: If you use anaconda, just type conda install python==$pythonversion$
stackoverflow
{ "language": "en", "length": 315, "provenance": "stackexchange_0000F.jsonl.gz:913097", "question_score": "24", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692668" }
f074e1f12ccbfb1eb758a9821585980e01f53191
Stackoverflow Stackexchange Q: How to add color to transparent area of font awesome icons I want to change the transparent middle portion of the fa-youtube-play icon to red. I tried this code: .fa { background-color: red; } <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet"/> <div> <i class="fa fa-youtube-play fa-3x" aria-hidden="true"></i> </div> But using this code overlaps the icon. How do I add color to the inner transparent area only? A: You can do that by using a pseudo-element, i.e. :after. This might help: https://jsfiddle.net/kr8axdc3/
Q: How to add color to transparent area of font awesome icons I want to change the transparent middle portion of the fa-youtube-play icon to red. I tried this code: .fa { background-color: red; } <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet"/> <div> <i class="fa fa-youtube-play fa-3x" aria-hidden="true"></i> </div> But using this code overlaps the icon. How do I add color to the inner transparent area only? A: You can do that by using a pseudo-element, i.e. :after. This might help: https://jsfiddle.net/kr8axdc3/ A: A universal way? No. I'm afraid you'll have to work with each icon individually. .fa { background-image: radial-gradient(at center, red 40%, transparent 40%); } <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet"/> <div> <i class="fa fa-youtube-play fa-3x" aria-hidden="true"></i> </div> A: In CSS use the clip property like so .fa { clip: rect(0px,0px,0px,0px); }. Set to whatever values are appropriate. I believe this property only applies to images, however. A: As you can see there are many ways to do it. The easiest way to fix this is to add line-height. div .fa { background-color:red; line-height: 22px; } <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet"/> <div> <i class="fa fa-youtube-play fa-3x" aria-hidden="true"></i> </div> A: Try this one: .fa{ background-color:red; line-height:25px; width:40px; } A: You could even do that using the pseudo-selector :before as below; it's a bit lengthy styling but it adds a background to that transparent area. div { display: inline-block; position: relative; } div:before { content: ""; position: absolute; background: red; width: 30px; height: 30px; z-index: -1; top: 10px; left: 10px; } <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" /> <div> <i class="fa fa-youtube-play fa-3x" aria-hidden="true"></i> </div>
stackoverflow
{ "language": "en", "length": 248, "provenance": "stackexchange_0000F.jsonl.gz:913105", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692692" }
6b4df4d33020980d96be634416c3f301dd61e250
Stackoverflow Stackexchange Q: Google Script number format in Google Docs I'm currently generating a google document with Google Script. I have an issue with number formats. I have a table with currency values, so I tried to format the text like this ( like how I'd do it normally ): var number = 12345; var currencyString = number.toLocaleString(); // This 'works' but outputs : 12345 var euroCurrencyString = number.toLocaleString("nl-NL", { style: 'currency', currency: 'EUR' }); // This throws exception But it throws this exception: TypeError: InternalError: Illegal radix 0. Is there any other way to format a number to a currency string? ( eq; € 12.345,00 ) EDIT: toLocaleString() actually works, I just mistyped it
Q: Google Script number format in Google Docs I'm currently generating a google document with Google Script. I have an issue with number formats. I have a table with currency values, so I tried to format the text like this ( like how I'd do it normally ): var number = 12345; var currencyString = number.toLocaleString(); // This 'works' but outputs : 12345 var euroCurrencyString = number.toLocaleString("nl-NL", { style: 'currency', currency: 'EUR' }); // This throws exception But it throws this exception: TypeError: InternalError: Illegal radix 0. Is there any other way to format a number to a currency string? ( eq; € 12.345,00 ) EDIT: toLocaleString() actually works, I just mistyped it
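A workaround that avoids locale options entirely is to build the string by hand in plain JavaScript; a rough sketch with the nl-NL separators hard-coded (works for non-negative amounts):

function formatEuro(n) {
  var parts = n.toFixed(2).split('.');                          // ["12345", "00"]
  var grouped = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, '.'); // "12.345"
  return '€ ' + grouped + ',' + parts[1];                       // "€ 12.345,00"
}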
stackoverflow
{ "language": "en", "length": 113, "provenance": "stackexchange_0000F.jsonl.gz:913113", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692708" }
02b2eb5cc2ecc28ad0f355a6d845845fec707169
Stackoverflow Stackexchange Q: how to compute the probability of Poisson distribution in julia I have to compute the probability of a Poisson distribution in Julia. I just know how to get the Poisson distribution, but I have to compute the probability. Also, I have lambda from 20 to 100. using Distributions Poisson() A: The objects in Distributions.jl are like random variables. If you declare a value to be of a distribution, you can sample from it using rand, but there are a whole lot of other methods you can apply to it. Among them is pdf: julia> X = Poisson(30) Distributions.Poisson{Float64}(λ=30.0) julia> pdf(X, 2) 4.2109303359780846e-11 julia> pdf(X, 0:1:10) 11-element Array{Float64,1}: 9.35762e-14 2.80729e-12 4.21093e-11 4.21093e-10 3.1582e-9 1.89492e-8 9.47459e-8 4.06054e-7 1.5227e-6 5.07567e-6 1.5227e-5
Q: how to compute the probability of Poisson distribution in julia I have to compute the probability of a Poisson distribution in Julia. I just know how to get the Poisson distribution, but I have to compute the probability. Also, I have lambda from 20 to 100. using Distributions Poisson() A: The objects in Distributions.jl are like random variables. If you declare a value to be of a distribution, you can sample from it using rand, but there are a whole lot of other methods you can apply to it. Among them is pdf: julia> X = Poisson(30) Distributions.Poisson{Float64}(λ=30.0) julia> pdf(X, 2) 4.2109303359780846e-11 julia> pdf(X, 0:1:10) 11-element Array{Float64,1}: 9.35762e-14 2.80729e-12 4.21093e-11 4.21093e-10 3.1582e-9 1.89492e-8 9.47459e-8 4.06054e-7 1.5227e-6 5.07567e-6 1.5227e-5
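For probabilities of events rather than point masses, cdf works the same way, and the lambda range from the question can be swept with a comprehension; a short sketch:

using Distributions

cdf(Poisson(30), 25)                    # P(X <= 25) for lambda = 30
1 - cdf(Poisson(30), 25)                # P(X > 25)
[cdf(Poisson(l), 25) for l in 20:100]   # the same probability across lambda = 20..100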
stackoverflow
{ "language": "en", "length": 117, "provenance": "stackexchange_0000F.jsonl.gz:913122", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692754" }
b234bdcb57fa70ee8a142843471688c553a13000
Stackoverflow Stackexchange Q: Configure spring boot to redirect 404 to a single page app I want to configure my Spring Boot app to redirect any 404 not found request to my single page app. For example if I am calling localhost:8080/asdasd/asdasdasd/asdasd which is does not exist, it should redirect to localhost:8080/notFound. The problem is that I have a single page react app and it runs in the root path localhost:8080/. So spring should redirect to localhost:8080/notFound and then forward to / (to keep route). A: This is the full Spring Boot 2.0 example: @Configuration public class WebApplicationConfig implements WebMvcConfigurer { @Override public void addViewControllers(ViewControllerRegistry registry) { registry.addViewController("/notFound").setViewName("forward:/index.html"); } @Bean public WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> containerCustomizer() { return container -> { container.addErrorPages(new ErrorPage(HttpStatus.NOT_FOUND, "/notFound")); }; } }
Q: Configure spring boot to redirect 404 to a single page app I want to configure my Spring Boot app to redirect any 404 not found request to my single page app. For example if I am calling localhost:8080/asdasd/asdasdasd/asdasd which is does not exist, it should redirect to localhost:8080/notFound. The problem is that I have a single page react app and it runs in the root path localhost:8080/. So spring should redirect to localhost:8080/notFound and then forward to / (to keep route). A: This is the full Spring Boot 2.0 example: @Configuration public class WebApplicationConfig implements WebMvcConfigurer { @Override public void addViewControllers(ViewControllerRegistry registry) { registry.addViewController("/notFound").setViewName("forward:/index.html"); } @Bean public WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> containerCustomizer() { return container -> { container.addErrorPages(new ErrorPage(HttpStatus.NOT_FOUND, "/notFound")); }; } } A: This should do the trick: Add an error page for 404 that routes to /notFound, and forward that to your SPA (assuming the entry is on /index.html): @Configuration public class WebApplicationConfig extends WebMvcConfigurerAdapter { @Override public void addViewControllers(ViewControllerRegistry registry) { registry.addViewController("/notFound").setViewName("forward:/index.html"); } @Bean public EmbeddedServletContainerCustomizer containerCustomizer() { return container -> { container.addErrorPages(new ErrorPage(HttpStatus.NOT_FOUND, "/notFound")); }; } } A: //add this controller : perfect solution(from jhipster) @Controller public class ClientForwardController { @GetMapping(value = "/**/{path:[^\\.]*}") public String forward() { return "forward:/"; } } A: In case anyone stumbles here looking for how to handle Angular/React/other routes and paths in a Spring Boot app - but not always return index.html for any 404 - it can be done in a standard Spring controller RequestMapping. This can be done without adding view controllers and/or customizing the container error page. The RequestMapping supports wild cards, so you can make it match against a set of well known paths (ie. angular routes etc.) in your application and only then return forward index.html: @Controller public class Html5PathsController { @RequestMapping( method = {RequestMethod.OPTIONS, RequestMethod.GET}, path = {"/path1/**", "/path2/**", "/"} ) public String forwardAngularPaths() { return "forward:/index.html"; } } Another option (borrowed from an old Spring article here: https://spring.io/blog/2015/05/13/modularizing-the-client-angular-js-and-spring-security-part-vii) is to use a naming convention: @Controller public class Html5PathsController { @RequestMapping(value = "/{[path:[^\\.]*}") public String redirect() { return "forward:/index.html"; } } The above configuration will match all paths that do not contain a period and are not already mapped to another controller. 
A: Here is the security configuration (SecurityConfig.java) @Configuration @EnableWebSecurity @EnableGlobalMethodSecurity(prePostEnabled=true) public class SecurityConfig extends WebSecurityConfigurerAdapter { @Autowired private Environment env; @Autowired private UserSecurityService userSecurityService; private BCryptPasswordEncoder passwordEncoder() { return SecurityUtility.passwordEncoder(); } private static final String[] PUBLIC_MATCHERS = { "/css/**", "/js/**", "/data/**", "/sound/**", "/img/**", "/", "/login", "/logout", "/error", "/index2", }; @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests(). /* antMatchers("/**").*/ antMatchers(PUBLIC_MATCHERS). permitAll().anyRequest().authenticated(); //.logout().logoutRequestMatcher(new AntPathRequestMatcher("/logout")).logoutSuccessUrl("/login"); http .csrf().disable().cors().disable() .formLogin().failureUrl("/login?error") .defaultSuccessUrl("/index2") .loginPage("/login").permitAll() .and() .logout().logoutRequestMatcher(new AntPathRequestMatcher("/logout")) .logoutSuccessUrl("/?logout").deleteCookies("remember-me").permitAll() .and() .rememberMe() .and() .sessionManagement().maximumSessions(3600) .and(). invalidSessionUrl("/login"); } @Autowired public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(userSecurityService).passwordEncoder(passwordEncoder()); } } If no resource is found, redirect to the error page: @Controller public class IndexController implements ErrorController{ private static final String PATH = "/error"; @RequestMapping(value = PATH) public String error() { return PATH; } @Override public String getErrorPath() { return PATH; } } The error page looks like: <!DOCTYPE html> <html lang="en" xmlns:th="http://www.w3.org/1999/xhtml" xmlns:sec="http://www.thymeleaf.org/extras/spring-security"> <meta http-equiv="refresh" content="5;url=/login" /> <body> <h1>Page not found, please log in to the system!</h1> </body> </html> A: Simply implementing the org.springframework.boot.web.servlet.error.ErrorController did the trick for me. I use SpringBoot 2.0 with React. (If you are interested in how to do that here is a boilerplate project made by me: https://github.com/archangel1991/react-with-spring) @Controller public class CustomErrorController implements ErrorController { @Override public String getErrorPath() { return "/error"; } } I am not sure why this is working, though.
stackoverflow
{ "language": "en", "length": 579, "provenance": "stackexchange_0000F.jsonl.gz:913129", "question_score": "19", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692781" }
ac1b5603e0c27ec2559ac3f8e58e42e4c01d83e3
Stackoverflow Stackexchange Q: @page :first not working I have to use two different headers in the print version: one for the first page and one for the other pages. I would like to put a header (fixed on the top) for the other pages and use the CSS display: none for the first page. But I get no effect with @page :first. This is my code: @page :first { .header { display: none; } } I also tried putting !important in the CSS but nothing happens. What should I do? A: :first allows only a few CSS properties. You can only change margins, page breaks and widows with it. Other CSS properties will be ignored. So I assume display:none may not work. You can read more about how to use @page and which CSS properties it works with here: https://developer.mozilla.org/en/docs/Web/CSS/:first
Q: @page :first not working I have to use two different headers in the print version: one for the first page and one for the other pages. I would like to put a header (fixed on the top) for the other pages and use the CSS display: none for the first page. But I get no effect with @page :first. This is my code: @page :first { .header { display: none; } } I also tried putting !important in the CSS but nothing happens. What should I do? A: :first allows only a few CSS properties. You can only change margins, page breaks and widows with it. Other CSS properties will be ignored. So I assume display:none may not work. You can read more about how to use @page and which CSS properties it works with here: https://developer.mozilla.org/en/docs/Web/CSS/:first A: According to: https://developer.mozilla.org/en/docs/Web/CSS/@page The @page CSS at-rule is used to modify some CSS properties when printing a document. You can't change all CSS properties with @page. You can only change the margins, orphans, widows, and page breaks of the document. Attempts to change any other CSS properties will be ignored. And also for the :first https://developer.mozilla.org/en-US/docs/Web/CSS/:first Note: you cannot change all CSS properties with :first. You can only change the margins, orphans, widows, and page breaks of the document. All other CSS properties will be ignored. So since you're trying to remove one of your own elements - try using media queries instead: @media print { .header { display: none; } } https://benfrain.com/create-print-styles-using-css3-media-queries/ A: It looks like it's a Mozilla bug. I am not able to get margins working, even when following their own example here: https://developer.mozilla.org/en-US/docs/Web/CSS/:first Both pages are printed in an identical way, no difference.
stackoverflow
{ "language": "en", "length": 287, "provenance": "stackexchange_0000F.jsonl.gz:913130", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692784" }
bac714e58f674a401b189d9f8160cec5bf66d99b
Stackoverflow Stackexchange Q: iOS silent notifications for killed apps after device restart According to the documentation silent notifications are processed by the delegate application(_:didReceiveRemoteNotification:fetchCompletionHandler:) also when the app is in "Not Running" state. This behavior does not apply if the app was force-quitted by the user. But the documentation mentions that if the device has been restarted after a force-quit, the notification will trigger again the app launch on the device. Excerpt from the documentation: ... However, the system does not automatically launch your app if the user has force-quit it. In that situation, the user must relaunch your app or restart the device before the system attempts to launch your app automatically again. Can anyone confirm this to be working (maybe with earlier iOS versions)? My experience (using iOS 10.x) is that if the app was force-quitted the app won't be restarted even after (multiple) device reboot.
Q: iOS silent notifications for killed apps after device restart According to the documentation silent notifications are processed by the delegate application(_:didReceiveRemoteNotification:fetchCompletionHandler:) also when the app is in "Not Running" state. This behavior does not apply if the app was force-quitted by the user. But the documentation mentions that if the device has been restarted after a force-quit, the notification will trigger again the app launch on the device. Excerpt from the documentation: ... However, the system does not automatically launch your app if the user has force-quit it. In that situation, the user must relaunch your app or restart the device before the system attempts to launch your app automatically again. Can anyone confirm this to be working (maybe with earlier iOS versions)? My experience (using iOS 10.x) is that if the app was force-quitted the app won't be restarted even after (multiple) device reboot.
stackoverflow
{ "language": "en", "length": 146, "provenance": "stackexchange_0000F.jsonl.gz:913131", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692786" }
b0466e5a5ec7755f6d828a6847ad2d7816556214
Stackoverflow Stackexchange Q: C# restart a while loop in the middle I have a while loop that I want to restart if a condition is false, in my case. It is because I'm checking if an IP is valid; if it is, the whole loop runs normally, and if not I need it to restart from the start again at that point. I can't use a break and I don't want to use a goto. What are your suggestions? This is my code, where I want to restart my while loop. while (calculateAgain == "y") { Console.Write("Your ip here: "); string ip = Console.ReadLine(); IpValidation(ip); if (IpValidation(ip) == false) { Console.WriteLine("Ugyldig IP og eller Subnet mask!\n"); // goto or break } After this my code runs on a lot... Tell me some solutions to this other than goto. A: You need to use the word continue if (IpValidation(ip) == false) { Console.WriteLine("Ugyldig IP og eller Subnet mask!\n"); continue; } This will skip the rest and go to the top of your loop.
Q: C# restart a while loop in the middle I have a while loop that I want to restart if a condition is false, in my case. It is because I'm checking if an IP is valid; if it is, the whole loop runs normally, and if not I need it to restart from the start again at that point. I can't use a break and I don't want to use a goto. What are your suggestions? This is my code, where I want to restart my while loop. while (calculateAgain == "y") { Console.Write("Your ip here: "); string ip = Console.ReadLine(); IpValidation(ip); if (IpValidation(ip) == false) { Console.WriteLine("Ugyldig IP og eller Subnet mask!\n"); // goto or break } After this my code runs on a lot... Tell me some solutions to this other than goto. A: You need to use the word continue if (IpValidation(ip) == false) { Console.WriteLine("Ugyldig IP og eller Subnet mask!\n"); continue; } This will skip the rest and go to the top of your loop. A: There is the continue statement. If it is hit, it will skip back to the start of the loop. while (calculateAgain == "y") { // ... if (IpValidation(ip) == false) { Console.WriteLine("Ugyldig IP og eller Subnet mask!\n"); continue; } Console.WriteLine("This will not be executed when continue is called"); } A: You can do this in two ways (by using break or continue): break will exit the loop completely, continue will just skip the current iteration. So, reading your question, you need to use continue here, so your example might look like this: while (calculateAgain == "y") { Console.Write("Your ip here: "); string ip = Console.ReadLine(); IpValidation(ip); if (IpValidation(ip) == false) { Console.WriteLine("Ugyldig IP og eller Subnet mask!\n"); continue; } } This means that, for the condition if (IpValidation(ip) == false), the code below it will be skipped (never executed) if the condition is satisfied.
stackoverflow
{ "language": "en", "length": 317, "provenance": "stackexchange_0000F.jsonl.gz:913138", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692801" }
4f40547932ef65e7e222617a20007a83333f8490
Stackoverflow Stackexchange Q: How to use "vuetify" transitions on my components? I'm using Vuetifyjs library in my project. I want to add transitions to my components - but there are no documentation about how to start transitions. For example I want to add some transitions to appearance of my cards on screen. <v-card transition="v-slide-y-transition">...</v-card> How to start transition? A: If you are using Vuetify transitions around your router-view, the transition can sometimes be jarring on leave/enter. To make the transition look smoother try the prop hide-on-leave="true" <v-scroll-x-transition mode="in" hide-on-leave="true"> <router-view></router-view> </v-scroll-x-transition>
Q: How to use "vuetify" transitions on my components? I'm using Vuetifyjs library in my project. I want to add transitions to my components - but there are no documentation about how to start transitions. For example I want to add some transitions to appearance of my cards on screen. <v-card transition="v-slide-y-transition">...</v-card> How to start transition? A: If you are using Vuetify transitions around your router-view, the transition can sometimes be jarring on leave/enter. To make the transition look smoother try the prop hide-on-leave="true" <v-scroll-x-transition mode="in" hide-on-leave="true"> <router-view></router-view> </v-scroll-x-transition> A: This wraps some card text in a transition. When I trigger the v-show="show" model to true, the text slides in. <v-slide-y-transition> <v-card-text v-show="show"> Example text </v-card-text> </v-slide-y-transition> You could have a button trigger it or even add an onCreate() method that triggers the show to true after the component loads. A: try this : <v-fab-transition appear> <p>hello</p> </v-fab-transition>
stackoverflow
{ "language": "en", "length": 148, "provenance": "stackexchange_0000F.jsonl.gz:913144", "question_score": "33", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692813" }
e089a6982ae52b231991d494198b48bd4ad0b3b1
Stackoverflow Stackexchange Q: ShadowAlertDialog.getLatestAlertDialog() returns null for android.support.v7.app.AlertDialog Is there any workaround that will allow us to test android.support.v7.app.AlertDialog in Robolectric? someActivity.findViewById(R.id.alet_btn).performClick(); AlertDialog alert = ShadowAlertDialog.getLatestAlertDialog(); ShadowAlertDialog shadowAlertDialog = Shadows.shadowOf(alert); assertThat(shadowAlertDialog.getTitle()).isEqualTo("Hello"); A: Try the below List shownDialogs = ShadowAlertDialog.getShownDialogs(); if (shownDialogs.get(0) instanceof AlertDialog) { AlertDialog dialog = (android.support.v7.app.AlertDialog) shownDialogs.get(0); assertThat(dialog).isShowing(); dialog.getButton(AlertDialog.BUTTON_NEGATIVE).performClick(); } or if (ShadowAlertDialog.getLatestDialog() instanceof AlertDialog) { AlertDialog dialog = (android.support.v7.app.AlertDialog) ShadowAlertDialog.getLatestDialog(); assertThat(dialog).isShowing(); dialog.getButton(AlertDialog.BUTTON_POSITIVE).performClick(); } It allows you to click the buttons, but can't verify title or message AFAIK.
Q: ShadowAlertDialog.getLatestAlertDialog() returns null for android.support.v7.app.AlertDialog Is there any workaround that will allow us to test android.support.v7.app.AlertDialog in Robolectric? someActivity.findViewById(R.id.alet_btn).performClick(); AlertDialog alert = ShadowAlertDialog.getLatestAlertDialog(); ShadowAlertDialog shadowAlertDialog = Shadows.shadowOf(alert); assertThat(shadowAlertDialog.getTitle()).isEqualTo("Hello"); A: Try the below List shownDialogs = ShadowAlertDialog.getShownDialogs(); if (shownDialogs.get(0) instanceof AlertDialog) { AlertDialog dialog = (android.support.v7.app.AlertDialog) shownDialogs.get(0); assertThat(dialog).isShowing(); dialog.getButton(AlertDialog.BUTTON_NEGATIVE).performClick(); } or if (ShadowAlertDialog.getLatestDialog() instanceof AlertDialog) { AlertDialog dialog = (android.support.v7.app.AlertDialog) ShadowAlertDialog.getLatestDialog(); assertThat(dialog).isShowing(); dialog.getButton(AlertDialog.BUTTON_POSITIVE).performClick(); } It allows you to click the buttons, but can't verify title or message AFAIK.
stackoverflow
{ "language": "en", "length": 78, "provenance": "stackexchange_0000F.jsonl.gz:913156", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692860" }
162fbfac9fdaff6a506e9a0a026cbe600ab6f1c5
Stackoverflow Stackexchange Q: DNS resolution fails on android There is a restriction on the UDP response size for the DNS protocol: it can only contain ~500 bytes. When the data exceeds the limit, all DNS servers set the "truncated" flag in the response, but some (Google 8.8.8.8, for example) do not put any IPs in the response, while others put a trimmed list. Utilities like nslookup and dig try to ask the DNS server over TCP to get the full response, but Android does not. Instead it fails. An example of code that fails is below. var host = "prdimg.affili.net"; var addressList = Dns.GetHostEntry(host).AddressList; ModernHttpClient gets IPs the same way, so I cannot get files from prdimg.affili.net. To fix it I've implemented a temporary solution: I use GooglePublicDnsClient to resolve DNS and then change the hostname to the resolved IP with UriBuilder. var builder = new UriBuilder(originalUrl); builder.Host = ip; But the solution has two disadvantages * *it does not work for https because of the certificate check *it does not work if the server uses Vhosts Can anyone propose a better solution?
Q: DNS resolution fails on android There is a restriction on the UDP response size for the DNS protocol: it can only contain ~500 bytes. When the data exceeds the limit, all DNS servers set the "truncated" flag in the response, but some (Google 8.8.8.8, for example) do not put any IPs in the response, while others put a trimmed list. Utilities like nslookup and dig try to ask the DNS server over TCP to get the full response, but Android does not. Instead it fails. An example of code that fails is below. var host = "prdimg.affili.net"; var addressList = Dns.GetHostEntry(host).AddressList; ModernHttpClient gets IPs the same way, so I cannot get files from prdimg.affili.net. To fix it I've implemented a temporary solution: I use GooglePublicDnsClient to resolve DNS and then change the hostname to the resolved IP with UriBuilder. var builder = new UriBuilder(originalUrl); builder.Host = ip; But the solution has two disadvantages * *it does not work for https because of the certificate check *it does not work if the server uses Vhosts Can anyone propose a better solution?
stackoverflow
{ "language": "en", "length": 169, "provenance": "stackexchange_0000F.jsonl.gz:913157", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692863" }
5a27fba2e7e2261abd837239a55d52a9ccd28176
Stackoverflow Stackexchange Q: Is a (local) file path a URI? On some input we allow the following paths: * *C:\Folder *\\server\Folder *http://example.com/... Can I mark them all as "URI"s? A: Strictly speaking no, unless you make sure it's an absolute path and add "file://" to the beginning. As per RFC 3986 Section 3: URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ] hier-part = "//" authority path-abempty / path-absolute / path-rootless / path-empty The scheme and the ":" are not in square brackets [], which means they are not optional. However, the HTML standard calls these "path-relative-scheme-less-URL strings" and they're valid in the href attribute of an HTML element, so maybe it's fine to call relative Unix paths "URLs" (not absolute Unix paths or Windows paths though).
Q: Is a (local) file path a URI? On some input we allow the following paths: * *C:\Folder *\\server\Folder *http://example.com/... Can I mark them all as "URI"s? A: Strictly speaking no, unless you make sure it's an absolute path and add "file://" to the beginning. As per RFC 3986 Section 3: URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ] hier-part = "//" authority path-abempty / path-absolute / path-rootless / path-empty The scheme and the ":" are not in square brackets [], which means they are not optional. However, the HTML standard calls these "path-relative-scheme-less-URL strings" and they're valid in the href attribute of an HTML element, so maybe it's fine to call relative Unix paths "URLs" (not absolute Unix paths or Windows paths though). A: C:/Folder and /server/Folder/ are file paths. http://example.com/ is a URL, which is a URI sub-type, so you could mark it as a URI but not the other way around (like how squares are rectangles but not vice versa). Of course, here you have posted a clear, simple example. When discussing the distinction between URI and URL, not only is the answer not clear cut, it is also disputed. I recommend taking a look at this thread and the answers posted in it for clarification. Generally though, it is mostly agreed that the main difference is that URLs provide an access method (such as http://). So if we were to convert your first file path into a URL it would become the following (note the addition of the access method): file:///c:/Folder/test.txt If you modify all your file paths to include an access method like in my example, then it will be okay for you to mark them as URIs.
stackoverflow
{ "language": "en", "length": 289, "provenance": "stackexchange_0000F.jsonl.gz:913159", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692872" }
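A minimal Python sketch of the distinction discussed in the preceding question: an absolute local path only becomes a URI once it carries the file:// scheme, which pathlib can produce. The file names below are placeholders, not paths from the original post.

from pathlib import PurePosixPath, PureWindowsPath

# Absolute local paths can be expressed as file:// URIs.
print(PureWindowsPath(r"C:\Folder\test.txt").as_uri())        # file:///C:/Folder/test.txt
print(PureWindowsPath(r"\\server\Folder\test.txt").as_uri())  # UNC paths are supported too
print(PurePosixPath("/srv/data/test.txt").as_uri())           # file:///srv/data/test.txt

# An http:// address is already a URI (more precisely, a URL), so no conversion is needed.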
7b799dc81b390e16f10e44c0ddb6a1b4184a7b42
Stackoverflow Stackexchange Q: Effect of aria-hidden in Font Awesome what's the effect of aria-hidden in Font Awesome? Is it necessary to use it? why? or why not? for example, I want to know the effect of aria-hidden="true" in the code below: <i class="fa fa-star" aria-hidden="true"></i> A: In short, it makes the icon not visible to screen readers, to enhance accessibility. From the documentation: If you're using an icon to add some extra decoration or branding, it does not need to be announced to users as they are navigating your site or app aurally. Additionally, if you're using an icon to visually re-emphasize or add styling to content already present in your HTML, it does not need to be repeated to an assistive technology-using user. You can make sure this is not read by adding the aria-hidden="true" to your Font Awesome markup.
Q: Effect of aria-hidden in Font Awesome what's the effect of aria-hidden in Font Awesome? Is it necessary to use it? why? or why not? for example, I want to know the effect of aria-hidden="true" in the code below: <i class="fa fa-star" aria-hidden="true"></i> A: In short, it makes the icon not visible to screen readers, to enhance accessibility. From the documentation: If you're using an icon to add some extra decoration or branding, it does not need to be announced to users as they are navigating your site or app aurally. Additionally, if you're using an icon to visually re-emphasize or add styling to content already present in your HTML, it does not need to be repeated to an assistive technology-using user. You can make sure this is not read by adding the aria-hidden="true" to your Font Awesome markup. A: If you're using an icon to add some extra decoration or branding, it does not need to be announced to users as they are navigating your site or app aurally. Additionally, if you're using an icon to visually re-emphasize or add styling to content already present in your HTML, it does not need to be repeated to an assistive technology-using user. You can make sure this is not read by adding the aria-hidden="true" to your Font Awesome markup. <i class="fa fa-fighter-jet" aria-hidden="true"></i> an icon being used as pure decoration <h1 class="logo"> <i class="fa fa-pied-piper" aria-hidden="true"></i> Pied Piper, A Middle-Out Compression Solution Making Data Storage Problems Smaller </h1> an icon being used as a logo Source: http://fontawesome.io/accessibility/
stackoverflow
{ "language": "en", "length": 256, "provenance": "stackexchange_0000F.jsonl.gz:913172", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692917" }
901516b8c38fd5fc5b73eacc10e917e36644a286
Stackoverflow Stackexchange Q: Open XML change fontsize of table for (var i = 0; i <= data.GetUpperBound(0); i++) { var tr = new DocumentFormat.OpenXml.Wordprocessing.TableRow(); for (var j = 0; j <= data.GetUpperBound(1); j++) { var tc = new DocumentFormat.OpenXml.Wordprocessing.TableCell(); tc.Append(new DocumentFormat.OpenXml.Wordprocessing.Paragraph(new DocumentFormat.OpenXml.Wordprocessing.Run(new DocumentFormat.OpenXml.Wordprocessing.Text(data[i, j])))); tr.Append(tc); } table.Append(tr); } I want to change fontsize in table cell. Can you help me with that? I don't know why they didn't add a property for cell fontsize. A: To change the fontsize of a table cell, you need to add a RunProperties to the Run. The fontsize is specified inside a FontSize element inside that RunProperties. For example to change all of your entries to fontsize 18, your code would look like: for (var i = 0; i <= data.GetUpperBound(0); i++) { var tr = new DocumentFormat.OpenXml.Wordprocessing.TableRow(); for (var j = 0; j <= data.GetUpperBound(1); j++) { var tc = new DocumentFormat.OpenXml.Wordprocessing.TableCell(); var paragraph = new DocumentFormat.OpenXml.Wordprocessing.Paragraph(); var run = new DocumentFormat.OpenXml.Wordprocessing.Run(); var text = new DocumentFormat.OpenXml.Wordprocessing.Text(data[i, j]); // your old code for reference: tc.Append(new DocumentFormat.OpenXml.Wordprocessing.Paragraph(new DocumentFormat.OpenXml.Wordprocessing.Run(new DocumentFormat.OpenXml.Wordprocessing.Text(data[i, j])))); RunProperties runProperties1 = new RunProperties(); FontSize fontSize1 = new FontSize(){ Val = "36" }; runProperties1.Append(fontSize1); run.Append(runProperties1); run.Append(text); paragraph.Append(run); tc.Append(paragraph); tr.Append(tc); } table.Append(tr); }
Q: Open XML change fontsize of table for (var i = 0; i <= data.GetUpperBound(0); i++) { var tr = new DocumentFormat.OpenXml.Wordprocessing.TableRow(); for (var j = 0; j <= data.GetUpperBound(1); j++) { var tc = new DocumentFormat.OpenXml.Wordprocessing.TableCell(); tc.Append(new DocumentFormat.OpenXml.Wordprocessing.Paragraph(new DocumentFormat.OpenXml.Wordprocessing.Run(new DocumentFormat.OpenXml.Wordprocessing.Text(data[i, j])))); tr.Append(tc); } table.Append(tr); } I want to change fontsize in table cell. Can you help me with that? I don't know why they didn't add a property for cell fontsize. A: To change the fontsize of a table cell, you need to add a RunProperties to the Run. The fontsize is specified inside a FontSize element inside that RunProperties. For example to change all of your entries to fontsize 18, your code would look like: for (var i = 0; i <= data.GetUpperBound(0); i++) { var tr = new DocumentFormat.OpenXml.Wordprocessing.TableRow(); for (var j = 0; j <= data.GetUpperBound(1); j++) { var tc = new DocumentFormat.OpenXml.Wordprocessing.TableCell(); var paragraph = new DocumentFormat.OpenXml.Wordprocessing.Paragraph(); var run = new DocumentFormat.OpenXml.Wordprocessing.Run(); var text = new DocumentFormat.OpenXml.Wordprocessing.Text(data[i, j]); // your old code for reference: tc.Append(new DocumentFormat.OpenXml.Wordprocessing.Paragraph(new DocumentFormat.OpenXml.Wordprocessing.Run(new DocumentFormat.OpenXml.Wordprocessing.Text(data[i, j])))); RunProperties runProperties1 = new RunProperties(); FontSize fontSize1 = new FontSize(){ Val = "36" }; runProperties1.Append(fontSize1); run.Append(runProperties1); run.Append(text); paragraph.Append(run); tc.Append(paragraph); tr.Append(tc); } table.Append(tr); }
stackoverflow
{ "language": "en", "length": 197, "provenance": "stackexchange_0000F.jsonl.gz:913176", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44692925" }
4e187c9f1f9f4fb18d6251fdb6c47a3f1f145176
Stackoverflow Stackexchange Q: Regular expression for Visual Studio Power Tools custom tab color to match on tabs that start with I (for Interface) I'd like to colour code my tabs according to file types. I'd like Interface files to be white. My naming convention for interfaces is the usual IFoo.cs IIgloo.cs etc To try and colour code these I'm using this regex ^I([A-Z][a-z0-9]*){1}\.cs$ However that is colour coding Invoice.cs and IInvoice.cs Where I would like it to only catch IInvoice.cs Where have I gone wrong with my regex? I thought this regex would match: ^I - Starts with I ([A-Z][a-z0-9]*) - Then has an upper case character followed by lowercase characters or numbers {1}\.cs$ - Ends with .cs Other regex I've tried: I[A-Z]+[a-z0-9]+\.cs - Matches both Invoice.cs and IInvoice.cs (?:I[A-Z]+[a-z0-9]+\.cs) - Matches both Invoice.cs and IInvoice.cs
Q: Regular expression for Visual Studio Power Tools custom tab color to match on tabs that start with I (for Interface) I'd like to colour code my tabs according to file types. I'd like Interface files to be white. My naming convention for interfaces is the usual IFoo.cs IIgloo.cs etc To try and colour code these I'm using this regex ^I([A-Z][a-z0-9]*){1}\.cs$ However that is colour coding Invoice.cs and IInvoice.cs Where I would like it to only catch IInvoice.cs Where have I gone wrong with my regex? I thought this regex would match: ^I - Starts with I ([A-Z][a-z0-9]*) - Then has an upper case character followed by lowercase characters or numbers {1}\.cs$ - Ends with .cs Other regex I've tried: I[A-Z]+[a-z0-9]+\.cs - Matches both Invoice.cs and IInvoice.cs (?:I[A-Z]+[a-z0-9]+\.cs) - Matches both Invoice.cs and IInvoice.cs
stackoverflow
{ "language": "en", "length": 134, "provenance": "stackexchange_0000F.jsonl.gz:913205", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693018" }
fcbb10e64d66f5a45edf608dfd31ebed5025eaf9
Stackoverflow Stackexchange Q: Issue converting video to live photo to wallpaper I'm using this way of converting a video + image to live photo. It adds the needed metadata to the video and still image. It exports the new video with the correct metadata using AVAssetExportPresetPassthrough so the codec settings shouldn't be changed. After I set the output high resolution live photo as wallpaper on my phone's home screen, the quality is greatly reduced and pixelated. This does not happen, or at least not to the same extent, when setting a photo taken from the camera as live wallpaper. Anyone have any idea on what could be the issue?
Q: Issue converting video to live photo to wallpaper I'm using this way of converting a video + image to live photo. It adds the needed metadata to the video and still image. It exports the new video with the correct metadata using AVAssetExportPresetPassthrough so the codec settings shouldn't be changed. After I set the output high resolution live photo as wallpaper on my phone's home screen, the quality is greatly reduced and pixelated. This does not happen, or at least not to the same extent, when setting a photo taken from the camera as live wallpaper. Anyone have any idea on what could be the issue?
stackoverflow
{ "language": "en", "length": 107, "provenance": "stackexchange_0000F.jsonl.gz:913211", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693037" }
70b3657cd621734d12d70bddfdeaefc10f26def5
Stackoverflow Stackexchange Q: Rx-Android library import rx.android.plugins vs io.reactivex.android.plugins my build.gradle: compile 'io.reactivex.rxjava2:rxandroid:2.0.1' compile "io.reactivex.rxjava2:rxjava:2.1.0" With these exact libraries, External Libraries shows rxandroid-1.2.0 & rxandroid-2.0.1 in one project of mine, whereas another shows just the latter. 1.2.0 uses rx.android.schedulers.Scheduler, which is compatible with rx.Observable when used as follows: someRxObservable .subscribeOn(Schedulers.io()) .observeOn(AndroidSchedulers.mainThread()) .subscribe(); The io.reactivex.Scheduler used by 2.0.1 isn't, as it does not subclass rx.Scheduler. I'm not sure if this is a Gradle dependency issue, since I expected to have downloaded both libraries via the Gradle dependencies mentioned above. That said however, looking at the source code on GitHub for the latest version 2.0.1 only shows the io.reactivex solution. A: io.reactivex is the newer RxJava 2, and the issue was that RxAndroid 2 only works with version 2 of RxJava. The fix was to use io.reactivex across the board with its Observables, Schedulers, & Subscriptions.
Q: Rx-Android library import rx.android.plugins vs io.reactivex.android.plugins my build.gradle: compile 'io.reactivex.rxjava2:rxandroid:2.0.1' compile "io.reactivex.rxjava2:rxjava:2.1.0" With these exact libraries, External Libraries shows rxandroid-1.2.0 & rxandroid-2.0.1 in one project of mine, whereas another shows just the latter. 1.2.0 uses rx.android.schedulers.Scheduler, which is compatible with rx.Observable when used as follows: someRxObservable .subscribeOn(Schedulers.io()) .observeOn(AndroidSchedulers.mainThread()) .subscribe(); The io.reactivex.Scheduler used by 2.0.1 isn't, as it does not subclass rx.Scheduler. I'm not sure if this is a Gradle dependency issue, since I expected to have downloaded both libraries via the Gradle dependencies mentioned above. That said however, looking at the source code on GitHub for the latest version 2.0.1 only shows the io.reactivex solution. A: io.reactivex is the newer RxJava 2, and the issue was that RxAndroid 2 only works with version 2 of RxJava. The fix was to use io.reactivex across the board with its Observables, Schedulers, & Subscriptions.
stackoverflow
{ "language": "en", "length": 141, "provenance": "stackexchange_0000F.jsonl.gz:913227", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693095" }
eb36a32465876179cecc74dd2d2f84cdeb98507c
Stackoverflow Stackexchange Q: how to add inline style width:calc(100% / var) in reactjs? for (let i = 0; i < divCount.length; i++) { var radio = divCount[i].getElementsByClassName("radio_count"); for (var index = 0; index < radio.length; index++) { var element: any = radio[index]; console.log(element); var style ="width:calc(100% / " + element + ")"; element.style = style; } } * *I want to add an inline style *set a dynamic value by which 100% is divided, e.g. width:calc(100% / var); How can I do this? A: What @Andrew wrote is correct, however here's a non ES2015/ES6 version (the question did not specify): In the react component render, you can use the following JSX: render: function() { var dynamicWidth = 'calc(100% / ' + value + ')'; return ( <div> <div style={{width: dynamicWidth}}></div> </div> ); Basically how this works is that on each render the string gets interpolated. What @Andrew has shown you is just a better syntax available to do the same thing in ES6: calc(100% / ${value}) is conceptually equivalent to 'calc(100% / ' + value + ')'
Q: how to add inline style width:calc(100% / var) in reactjs? for (let i = 0; i < divCount.length; i++) { var radio = divCount[i].getElementsByClassName("radio_count"); for (var index = 0; index < radio.length; index++) { var element: any = radio[index]; console.log(element); var style ="width:calc(100% / " + element + ")"; element.style = style; } } * *I want to add an inline style *set a dynamic value by which 100% is divided, e.g. width:calc(100% / var); How can I do this? A: What @Andrew wrote is correct, however here's a non ES2015/ES6 version (the question did not specify): In the react component render, you can use the following JSX: render: function() { var dynamicWidth = 'calc(100% / ' + value + ')'; return ( <div> <div style={{width: dynamicWidth}}></div> </div> ); Basically how this works is that on each render the string gets interpolated. What @Andrew has shown you is just a better syntax available to do the same thing in ES6: calc(100% / ${value}) is conceptually equivalent to 'calc(100% / ' + value + ')' A: Template literals should help. <div style={{width: `calc(100% / ${yourVariable})`}}></div> Example https://jsfiddle.net/69z2wepo/81579/ A: 1. Inline styles can be added to a component in ReactJS as shown below: render() { return ( <div style={{ color: 'white', backgroundColor: 'red' }}> Div to be styled </div> ) } Or by passing a style object to the component: render() { const divStyle = { color: 'white', backgroundColor: 'red' }; return ( <div style={divStyle}> Div to be styled </div> ) } 2. In order to set a dynamic width based on the number of child elements as you have described, I would recommend using CSS. Use display: flex on the wrapping element and flex-grow: 1 on the inner element. Example: https://jsfiddle.net/edyjt34r/ A: Without using the calc function you can get the desired result: <div style={{width: `${100 / yourVariable}%`}}></div> A: for (let i = 0; i < divCount.length; i++) { var radio = divCount[i].getElementsByClassName("radio_count"); for (var index = 0; index < radio.length; index++) { var element: any = radio[index]; console.log(element); let width = 100 / radio.length; element.style.width=width + "%"; } }
stackoverflow
{ "language": "en", "length": 343, "provenance": "stackexchange_0000F.jsonl.gz:913257", "question_score": "15", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693208" }
0699af6357ad395ec8ef07f4b0a8c5f44d5a8134
Stackoverflow Stackexchange Q: Uploading large files with Django: How should one go about doing this? I have to upload >= 20 GBs of data with Django. Should I break the file into chunks and then upload it with some kind of a checksum to maintain integrity, or does Django do this implicitly? Would it be better if I use FTP instead of regular HTTP for such large files? A: Django uses so-called Upload Handlers to upload files, and has a related setting called FILE_UPLOAD_MAX_MEMORY_SIZE (default value of 2.5Mb). Files smaller than this threshold will be handled in memory; larger files will be streamed into a temporary file on disk. I haven't yet tried uploading files larger than about 1Gb, but I would expect you can just use Django without problems.
Q: Uploading large files with Django: How should one go about doing this? I have to upload >= 20 GBs of data with Django. Should I break the file into chunks and then upload it with some kind of a checksum to maintain integrity, or does Django do this implicitly? Would it be better if I use FTP instead of regular HTTP for such large files? A: Django uses so-called Upload Handlers to upload files, and has a related setting called FILE_UPLOAD_MAX_MEMORY_SIZE (default value of 2.5Mb). Files smaller than this threshold will be handled in memory; larger files will be streamed into a temporary file on disk. I haven't yet tried uploading files larger than about 1Gb, but I would expect you can just use Django without problems.
stackoverflow
{ "language": "en", "length": 128, "provenance": "stackexchange_0000F.jsonl.gz:913277", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693272" }
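As a rough illustration of the chunk-plus-checksum idea from the Django upload question above, here is a minimal view sketch. It relies on Django's built-in streaming of large uploads to disk and verifies an MD5 checksum sent by the client; the form field names and destination path are assumptions made for this example, not part of the original post.

import hashlib

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def upload(request):
    # Files larger than FILE_UPLOAD_MAX_MEMORY_SIZE (2.5 MB by default)
    # are streamed to a temporary file on disk by Django's upload handlers.
    uploaded = request.FILES["payload"]            # field name assumed for the example
    expected_md5 = request.POST.get("md5", "")     # checksum supplied by the client

    digest = hashlib.md5()
    with open("/tmp/received.bin", "wb") as out:   # destination path is illustrative
        for chunk in uploaded.chunks():            # read the upload in chunks
            digest.update(chunk)
            out.write(chunk)

    ok = digest.hexdigest() == expected_md5
    return JsonResponse({"received_bytes": uploaded.size, "checksum_ok": ok})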
fc33a6d3fad868dfb318971e7aa3a4ef0b0ca392
Stackoverflow Stackexchange Q: Run code when an element is added to a list How do I detect when something is added to a list? Is there an equivalent to the dictionary __setitem__ method, that will get called when something is added to the list by insert, extend, append or using + or += (__add__, __iadd__), or some other method that I probably forgot about? Or do I need to hook into each of those methods, one by one? A: You'll need to override each method separately. Especially as the operations you've mentioned are different in nature - append, insert, extend, and += modify the list in place while + creates a new list. If you're feeling fancy, this is a potential way to do it without having to write too much boilerplate: class MyList(list): pass for method in ['append', 'insert', 'extend', '__add__', '__iadd__']: def code_added(self, *args, **kwargs): # Your code here getattr(super(MyList, self), method)(*args, **kwargs) setattr(MyList, method, code_added) Depending on what the code you want to run accesses, you might need to handle __add__ separately.
Q: Run code when an element is added to a list How do I detect when something is added to a list? Is there an equivalent to the dictionary __setitem__ method, that will get called when something is added to the list by insert, extend, append or using + or += (__add__, __iadd__), or some other method that I probably forgot about? Or do I need to hook into each of those methods, one by one? A: You'll need to override each method separately. Especially as the operations you've mentioned are different in nature - append, insert, extend, and += modify the list in place while + creates a new list. If you're feeling fancy, this is a potential way to do it without having to write too much boilerplate: class MyList(list): pass for method in ['append', 'insert', 'extend', '__add__', '__iadd__']: def code_added(self, *args, **kwargs): # Your code here getattr(super(MyList, self), method)(*args, **kwargs) setattr(MyList, method, code_added) Depending on what the code you want to run accesses, you might need to handle __add__ separately. A: as obskyr answer is suggesting, you have to define a child class of list and override a lot of methods, and test carefully to see if you're not missing something. My approach uses a deeper hook using __getattribute__ (for method calls), __iadd__ (for +=) and __setitem__ (for slice assignment) to catch the maximum of changes, and call the original parent method so it acts like a generic middleman: class MyList(list): def __getattribute__(self,a): if a in {"append","extend","remove","insert","pop","reverse","sort","clear"}: print("modification by {}".format(a)) else: print("not modified {}".format(a)) return list.__getattribute__(self,a) def __iadd__(self,v): print("in place add") return list.__iadd__(self,v) def __setitem__(self,i,v): print("setitem {},{}".format(i,v)) return list.__setitem__(self,i,v) l = MyList() l.append(12) l.extend([12]) l.remove(12) print(l) l[:] = [4,5,6] l += [5] print(l) output: modification by append modification by extend modification by remove [12] setitem slice(None, None, None),[4, 5, 6] in place add [4, 5, 6, 5] as you see * *the list is modified properly *all changes are detected I may have missed some accesses, but that seems pretty close to me.
stackoverflow
{ "language": "en", "length": 336, "provenance": "stackexchange_0000F.jsonl.gz:913318", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693429" }
42dbe55b748a586ea94ade4b11c01d48595cf067
Stackoverflow Stackexchange Q: Image color changed after converting from numpy array to PIL image python I am trying to convert an image that I read using cv2.imread, which is stored in a numpy array, to a PIL Image object, but the colors of the image get changed. Here is the code: I=cv2.imread("Image.jpg") PILImage=Image.fromarray(I,mode='RGB') How can I get back my original image? A: OpenCV likes to treat images as having BGR layers instead of RGB layers. Adding I = cv2.cvtColor(I, cv2.COLOR_BGR2RGB) will swap the layers to what you expect.
Q: Image color changed after converting from numpy array to PIL image python I am trying to convert an image that I read using cv2.imread, which is stored in a numpy array, to a PIL Image object, but the colors of the image get changed. Here is the code: I=cv2.imread("Image.jpg") PILImage=Image.fromarray(I,mode='RGB') How can I get back my original image? A: OpenCV likes to treat images as having BGR layers instead of RGB layers. Adding I = cv2.cvtColor(I, cv2.COLOR_BGR2RGB) will swap the layers to what you expect.
stackoverflow
{ "language": "en", "length": 82, "provenance": "stackexchange_0000F.jsonl.gz:913343", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693507" }
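Putting the BGR-to-RGB fix from the answer above into a complete, runnable snippet (the file names are placeholders):

import cv2
from PIL import Image

bgr = cv2.imread("Image.jpg")                 # OpenCV loads images with BGR channel order
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # reorder the channels to RGB
pil_image = Image.fromarray(rgb)              # colors now match the original image
pil_image.save("Image_rgb.jpg")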
7949279f40b2c77e5d8419b23e69aa054f1b118d
Stackoverflow Stackexchange Q: How to use the API to download a repo I want to download a single file from my Bitbucket repository. In the documentation I found the following API call. https://api.bitbucket.org/1.0/repositories/{accountname}/{repo_slug}/raw/{revision}/{path} However I struggle to find out what my "accountname", "repo_slug", "revision" and "path" are. If I open the folder "scripts" in my Bitbucket account the browser displays the following link. https://example.com/projects/MMMA/repos/iapc_reporting/browse/scripts For accountname I used "MMMA", for repo_slug "iapc_reporting", for revision the branch "master", and for path "scripts/main.py". The URL now looks like this: https://api.bitbucket.org/1.0/repositories/MMMA/iapc_reporting/raw/master/scripts/main.py Unfortunately opening this link in my browser gives me a 404 error. How do I properly build this link? If you have a solution with the V2 API that would be even better. A: If your server "example.com" is managed by a Bitbucket Server instance, then the API url should be: https://example.com/rest/api/1.0/projects/MMMA/repos/iapc_reporting See "Bitbucket Server REST APIs". In your case, since it is a private repo, with a curl --user user:pw: https://example.com/rest/api/1.0/projects/MMMA/repos/iapc_reporting/raw/master/scripts/main.py
Q: How to use the API to download a repo I want to download a single file from my Bitbucket repository. In the documentation I found the following API call. https://api.bitbucket.org/1.0/repositories/{accountname}/{repo_slug}/raw/{revision}/{path} However I struggle to find out what my "accountname", "repo_slug", "revision" and "path" are. If I open the folder "scripts" in my Bitbucket account the browser displays the following link. https://example.com/projects/MMMA/repos/iapc_reporting/browse/scripts For accountname I used "MMMA", for repo_slug "iapc_reporting", for revision the branch "master", and for path "scripts/main.py". The URL now looks like this: https://api.bitbucket.org/1.0/repositories/MMMA/iapc_reporting/raw/master/scripts/main.py Unfortunately opening this link in my browser gives me a 404 error. How do I properly build this link? If you have a solution with the V2 API that would be even better. A: If your server "example.com" is managed by a Bitbucket Server instance, then the API url should be: https://example.com/rest/api/1.0/projects/MMMA/repos/iapc_reporting See "Bitbucket Server REST APIs". In your case, since it is a private repo, with a curl --user user:pw: https://example.com/rest/api/1.0/projects/MMMA/repos/iapc_reporting/raw/master/scripts/main.py
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:913411", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693698" }
8c66a1f49e1ce6b04e8a0bd0d41c8831f0f577d5
Stackoverflow Stackexchange Q: Get the exit code of a Python script in a Bash script I'm new to Bash and want to catch my Python script's exit code. My script.py looks like: #! /usr/bin/python def foofoo(): ret = # Do logic if ret != 0: print repr(ret) + ' number of errors' sys.exit(1) else: print 'NO ERRORS!!!!' sys.exit(0) def main(argv): # Do main stuff foofoo() if __name__ == "__main__": main(sys.argv[1:]) My Bash script: #!/bin/bash python script.py -a a1 -b a2 -c a3 if [ $?!=0 ]; then echo "exit 1" fi echo "EXIT 0" My problem is that I am always getting exit 1 printed in my Bash script. How can I get my Python exit code in my Bash script? A: The spaces are important because they are the argument delimiters: if [ $? != 0 ]; then echo "exit 1" fi echo "EXIT 0" Or use the numeric test -ne. See man [ for the [ command or man bash for the builtin for more details. # Store it in a variable. Otherwise, it will be overwritten after the next command exit_status=$? if [ "${exit_status}" -ne 0 ]; then echo "exit ${exit_status}" fi echo "EXIT 0"
Q: Get the exit code of a Python script in a Bash script I'm new to Bash and want to catch my Python script's exit code. My script.py looks like: #! /usr/bin/python def foofoo(): ret = # Do logic if ret != 0: print repr(ret) + ' number of errors' sys.exit(1) else: print 'NO ERRORS!!!!' sys.exit(0) def main(argv): # Do main stuff foofoo() if __name__ == "__main__": main(sys.argv[1:]) My Bash script: #!/bin/bash python script.py -a a1 -b a2 -c a3 if [ $?!=0 ]; then echo "exit 1" fi echo "EXIT 0" My problem is that I am always getting exit 1 printed in my Bash script. How can I get my Python exit code in my Bash script? A: The spaces are important because they are the argument delimiters: if [ $? != 0 ]; then echo "exit 1" fi echo "EXIT 0" Or use the numeric test -ne. See man [ for the [ command or man bash for the builtin for more details. # Store it in a variable. Otherwise, it will be overwritten after the next command exit_status=$? if [ "${exit_status}" -ne 0 ]; then echo "exit ${exit_status}" fi echo "EXIT 0"
stackoverflow
{ "language": "en", "length": 189, "provenance": "stackexchange_0000F.jsonl.gz:913426", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693740" }
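For completeness, here is the Python side of the exit-code question above rewritten as a minimal runnable script; the error-counting logic was elided in the original post, so a placeholder value is used here.

#!/usr/bin/python
import sys

def foofoo():
    ret = 0  # placeholder for the real error-counting logic
    if ret != 0:
        print("%d errors found" % ret)
        sys.exit(1)   # non-zero exit code signals failure to the calling shell
    print("NO ERRORS!!!!")
    sys.exit(0)

def main(argv):
    foofoo()

if __name__ == "__main__":
    main(sys.argv[1:])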
945b0b3af8bca0db3b999e7106217987c12f8445
Stackoverflow Stackexchange Q: PHP - How to start google chrome with PHP script Here is my code: <?php $output = exec('/bin/bash /var/www/html/test.sh'); echo "<pre>$output</pre>"; ?> Here is what i have in test.sh: #!/bin/bash export DISPLAY=:0 google-chrome echo "Starting Chrome" When i execute ./test.sh google-chrome is started and then i see in my terminal the text Starting Chrome. However when i execute the php script i see only the text Starting Chrome. Why google-chrome does not start when the test.sh is called by apache2 ? I think there is something regarding permissions. Is it even possible to achieve such thing somehow ?
Q: PHP - How to start google chrome with PHP script Here is my code: <?php $output = exec('/bin/bash /var/www/html/test.sh'); echo "<pre>$output</pre>"; ?> Here is what i have in test.sh: #!/bin/bash export DISPLAY=:0 google-chrome echo "Starting Chrome" When i execute ./test.sh google-chrome is started and then i see in my terminal the text Starting Chrome. However when i execute the php script i see only the text Starting Chrome. Why google-chrome does not start when the test.sh is called by apache2 ? I think there is something regarding permissions. Is it even possible to achieve such thing somehow ?
stackoverflow
{ "language": "en", "length": 99, "provenance": "stackexchange_0000F.jsonl.gz:913428", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693747" }
13e9ab4bcdb8878c0c187a87c9aa2aa05f66dc70
Stackoverflow Stackexchange Q: Performance issues when using C# attributes? As far as I've understood, attributes are generated at compile time (otherwise it would not be possible to load attributes via reflection). Anyhow, are there certain cases where attributes might lead to performance issues at runtime? A: Anyhow, are there certain cases where attributes might lead to performance issues at runtime? Only if your program is excessively looking for attributes in your code. Attributes are only useful when something is looking for them. If you decorate your class with [MyAwsomeAttribute] and nothing is looking for it, then there will be no performance difference. The performance difference will depend on how many attributes you have; whether they are discovered; and the time it takes to execute the offending attribute (assuming it has this ability; many attributes are purely metadata). Good examples are WCF custom behaviour attributes with their detailed and potentially complex implementation methods.
Q: Performance issues when using C# attributes? As far as I've understood, attributes are generated at compile time (otherwise it would not be possible to load attributes via reflection). Anyhow, are there certain cases where attributes might lead to performance issues at runtime? A: Anyhow, are there certain cases where attributes might lead to performance issues at runtime? Only if your program is excessively looking for attributes in your code. Attributes are only useful when something is looking for them. If you decorate your class with [MyAwsomeAttribute] and nothing is looking for it, then there will be no performance difference. The performance difference will depend on how many attributes you have; whether they are discovered; and the time it takes to execute the offending attribute (assuming it has this ability; many attributes are purely metadata). Good examples are WCF custom behaviour attributes with their detailed and potentially complex implementation methods. A: (To test out MickyD's answer) Even this code, with a really evil attribute, still has no performance impact because the attribute is never constructed. class Program { static void Main(string[] args) { Console.WriteLine(new Foo().ToString()); Console.ReadLine(); } } [ThrowExceptionAttribute] public class Foo { } public class ThrowExceptionAttribute : Attribute { public ThrowExceptionAttribute() { throw new NotImplementedException(); } } Of course if you do reflect over the attribute you can get performance impacts. But then your question becomes, "Can running arbitrary code have performance implications?".
stackoverflow
{ "language": "en", "length": 234, "provenance": "stackexchange_0000F.jsonl.gz:913499", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693976" }
bf424b19b2107fa14be6cfc4b06da233363efff3
Stackoverflow Stackexchange Q: Spring new transaction combined with Retryable If I have a method that has a Spring retryable for a certain exception, and also has a Transactional(Requires_new), every time the retry is done, will it create a new transaction or use the existing one? ie @Retryable(maxAttempts = 5, backoff = @Backoff(delay = 250), include = {ActivitiOptimisticLockingException.class}) @Transactional(propagation = Propagation.REQUIRES_NEW) public void setVariable(String processId, String variableName, String variableValue){ engine.getRuntimeService().setVariable(processId, variableName, variableValue); } What will actually happen here? A: A new transaction will be created each time. It is the same as getting the service from the Spring context and calling the method N times: every call creates a new transaction (using the propagation you declared on the service or method). Calling your transactional service method goes through a proxy, and the retry mechanism calls that proxy as well. Also, your transaction might have a timeout; each retry attempt runs within its own timeout duration, rather than N attempts sharing one timeout.
Q: Spring new transaction combined with Retryable If I have a method that has a Spring retryable for a certain exception, and also has a Transactional(Requires_new), every time the retry is done, will it create a new transaction or use the existing one? ie @Retryable(maxAttempts = 5, backoff = @Backoff(delay = 250), include = {ActivitiOptimisticLockingException.class}) @Transactional(propagation = Propagation.REQUIRES_NEW) public void setVariable(String processId, String variableName, String variableValue){ engine.getRuntimeService().setVariable(processId, variableName, variableValue); } What will actually happen here? A: A new transaction will be created each time. It is the same as getting the service from the Spring context and calling the method N times: every call creates a new transaction (using the propagation you declared on the service or method). Calling your transactional service method goes through a proxy, and the retry mechanism calls that proxy as well. Also, your transaction might have a timeout; each retry attempt runs within its own timeout duration, rather than N attempts sharing one timeout.
stackoverflow
{ "language": "en", "length": 145, "provenance": "stackexchange_0000F.jsonl.gz:913502", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44693983" }
a1c90180121241302ea235df3b65f04d1b3902eb
Stackoverflow Stackexchange Q: How to change key of lookup kwargs field in RetrieveAPIView in Django REST Framework? Django REST Framework uses pk as the lookup field when using the RetrieveAPIView, and the same has to be defined in the URL kwargs. This makes the URL look like: url(r'^foobar/(?P<pk>[\d]+)/$', FooBarFetch.as_view(), name="foo_bar") But I want to replace the pk in the URL with something more descriptive like foo_bar_id. Changing the lookup_field doesn't work, as it still has to use the pk to perform the lookup. Only the kwargs key in the URL has to be changed. A: So I dug into the classes GenericAPIView etc. and found that it uses a field lookup_url_kwarg for this purpose. By default it is None, which makes the kwarg key default to pk in the URL. We just need to override the field like: lookup_url_kwarg = 'foo_bar_id' where foo_bar_id is the key used in the URL.
Q: How to change key of lookup kwargs field in RetrieveAPIView in Django REST Framework? Django REST Framework uses pk as the lookup field when using the RetrieveAPIView, and the same has to be defined in the URL kwargs. This makes the URL look like: url(r'^foobar/(?P<pk>[\d]+)/$', FooBarFetch.as_view(), name="foo_bar") But I want to replace the pk in the URL with something more descriptive like foo_bar_id. Changing the lookup_field doesn't work, as it still has to use the pk to perform the lookup. Only the kwargs key in the URL has to be changed. A: So I dug into the classes GenericAPIView etc. and found that it uses a field lookup_url_kwarg for this purpose. By default it is None, which makes the kwarg key default to pk in the URL. We just need to override the field like: lookup_url_kwarg = 'foo_bar_id' where foo_bar_id is the key used in the URL.
stackoverflow
{ "language": "en", "length": 149, "provenance": "stackexchange_0000F.jsonl.gz:913529", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694082" }
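A short sketch of how the lookup_url_kwarg override from the answer above fits into a view and URL pattern. The FooBar model and FooBarSerializer are assumed for the example; only FooBarFetch and foo_bar_id come from the original post.

from rest_framework.generics import RetrieveAPIView

class FooBarFetch(RetrieveAPIView):
    queryset = FooBar.objects.all()          # model assumed for the example
    serializer_class = FooBarSerializer      # serializer assumed for the example
    lookup_field = "pk"                      # model field used for the lookup
    lookup_url_kwarg = "foo_bar_id"          # URL kwarg that carries the value

# urls.py
# url(r'^foobar/(?P<foo_bar_id>\d+)/$', FooBarFetch.as_view(), name="foo_bar")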
6fb71f3cf224a82e510ffb50591a68701935d168
Stackoverflow Stackexchange Q: EditorConfig in Android Studio There is an option in Android studio to enable EditorConfig support( defaultsettings->codingstyle dialog),but not sure how it works.Could you please let me know how to integrate .editorconfig file in the Andriod Studio project? A: The website EditorConfig.org has a good explanation of how to get it set up. Android Studio has it built-in so there should be no need to add a plugin or set up Android Studio, just.. * *Add your .editorconfig file in the root directory of the project (as mentioned at EditorConfig.org) *Enable EditorConfig support (Settings → Editor → Code Style → Enable EditorConfig support) Note: A search for .editorconfig files will stop if the root filepath is reached or an EditorConfig file with root=true is found. Edit: I would recommend starting with something along these lines, and tweaking it as your team sees fit: # indicate this is the root of the project root = true [*.{kt,java,xml,gradle,md}] charset = utf-8 indent_style = space indent_size = 4 trim_trailing_whitespace = true insert_final_newline = true end_of_line = lf
Q: EditorConfig in Android Studio There is an option in Android studio to enable EditorConfig support( defaultsettings->codingstyle dialog),but not sure how it works.Could you please let me know how to integrate .editorconfig file in the Andriod Studio project? A: The website EditorConfig.org has a good explanation of how to get it set up. Android Studio has it built-in so there should be no need to add a plugin or set up Android Studio, just.. * *Add your .editorconfig file in the root directory of the project (as mentioned at EditorConfig.org) *Enable EditorConfig support (Settings → Editor → Code Style → Enable EditorConfig support) Note: A search for .editorconfig files will stop if the root filepath is reached or an EditorConfig file with root=true is found. Edit: I would recommend starting with something along these lines, and tweaking it as your team sees fit: # indicate this is the root of the project root = true [*.{kt,java,xml,gradle,md}] charset = utf-8 indent_style = space indent_size = 4 trim_trailing_whitespace = true insert_final_newline = true end_of_line = lf
stackoverflow
{ "language": "en", "length": 174, "provenance": "stackexchange_0000F.jsonl.gz:913543", "question_score": "19", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694132" }
ab95655c96b9250ab66e5827a5c822b9d44af854
Stackoverflow Stackexchange Q: xlwings: Delete a col | row from Excel How do I delete a row in Excel? wb = xw.Book('Shipment.xlsx') wb.sheets['Page1_1'].range('1:1').clear() .clear() works to remove the content. I want to delete the row. I'm surprise the .clear() function works, but not .delete() Any advice helps! Thank you A: I use xlwings 0.11.7 with Python 3.6.0 on my Windows7. I do this, and it can work very well : import xlwings as xw from xlwings.constants import DeleteShiftDirection app = xw.App() wb = app.books.open('name.xlsx') sht = wb.sheets['Sheet1'] # Delete row 2 sht.range('2:2').api.Delete(DeleteShiftDirection.xlShiftUp) # Delete row 2, 3 and 4 sht.range('2:4').api.Delete(DeleteShiftDirection.xlShiftUp) # Delete Column A sht.range('A:A').api.Delete(DeleteShiftDirection.xlShiftToLeft) # Delete Column A, B and C sht.range('A:C').api.Delete(DeleteShiftDirection.xlShiftToLeft) wb.save() app.kill()
Q: xlwings: Delete a col | row from Excel How do I delete a row in Excel? wb = xw.Book('Shipment.xlsx') wb.sheets['Page1_1'].range('1:1').clear() .clear() works to remove the content. I want to delete the row. I'm surprise the .clear() function works, but not .delete() Any advice helps! Thank you A: I use xlwings 0.11.7 with Python 3.6.0 on my Windows7. I do this, and it can work very well : import xlwings as xw from xlwings.constants import DeleteShiftDirection app = xw.App() wb = app.books.open('name.xlsx') sht = wb.sheets['Sheet1'] # Delete row 2 sht.range('2:2').api.Delete(DeleteShiftDirection.xlShiftUp) # Delete row 2, 3 and 4 sht.range('2:4').api.Delete(DeleteShiftDirection.xlShiftUp) # Delete Column A sht.range('A:A').api.Delete(DeleteShiftDirection.xlShiftToLeft) # Delete Column A, B and C sht.range('A:C').api.Delete(DeleteShiftDirection.xlShiftToLeft) wb.save() app.kill() A: It is now possible to delete without using api property: wb.sheets['Page1_1'].range('1:1').delete() A: Try using the .Rows, for example: wb.sheets("Page1_1").Rows(1).Delete And you would similarly use the .Columns for deleting columns: wb.sheets("Page1_1").Columns(1).Delete A: for i in range(end+1, max_col): workbook.sheets[sheet_name].api.Columns(end+1).Delete() I have done this to Delete a certain range of columns, should work with rows too, you just need to keep in mind that the Columns/Rows shift so i just stay at the same column
stackoverflow
{ "language": "en", "length": 186, "provenance": "stackexchange_0000F.jsonl.gz:913553", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694156" }
8fe01128bcac299a51dda5a04390b0ce2973ad73
Stackoverflow Stackexchange Q: Elasticsearch Java API 5.4 - How to get the inner hits of a nested query? I recently started to migrate from Elasticsearch version 2.4 to version 5.4. In version 2.4 I implemented some nested queries including inner hits, using the official Java API, which do not work in version 5.4 anymore. Can anyone tell me, how to get the inner hits of a nested query using the Elasticsearch Java API 5.4? Unfortunately, I can't find any sources regarding this topic, not even in the Elasticsearch documentation. My functioning nested query in version 2.4: QueryBuilders.nestedQuery("classes.links", QueryBuilders.boolQuery() .must(QueryBuilders.termQuery("classes.links.name", "xyz")) ).innerHit(new QueryInnerHitBuilder()) My attempt to get this query to work in version 5.4: QueryBuilders.nestedQuery("classes.links", QueryBuilders.boolQuery() .must(QueryBuilders.termQuery("classes.links.name", "xyz")), ScoreMode.Avg ).innerHit(new InnerHitBuilder()) //Error here As suggested here Elastic Search - java api for inner hit, I tried to replace QueryInnerHitBuilder() with InnerHitBuilder(), but it still does not work. I am getting following error: "Cannot resolve method 'innerHit(org.elasticsearch.index.query.InnerHitBuilder)'" A: The below implementation worked for me . innerHit(new InnerHitBuilder())
Q: Elasticsearch Java API 5.4 - How to get the inner hits of a nested query? I recently started to migrate from Elasticsearch version 2.4 to version 5.4. In version 2.4 I implemented some nested queries including inner hits, using the official Java API, which do not work in version 5.4 anymore. Can anyone tell me, how to get the inner hits of a nested query using the Elasticsearch Java API 5.4? Unfortunately, I can't find any sources regarding this topic, not even in the Elasticsearch documentation. My functioning nested query in version 2.4: QueryBuilders.nestedQuery("classes.links", QueryBuilders.boolQuery() .must(QueryBuilders.termQuery("classes.links.name", "xyz")) ).innerHit(new QueryInnerHitBuilder()) My attempt to get this query to work in version 5.4: QueryBuilders.nestedQuery("classes.links", QueryBuilders.boolQuery() .must(QueryBuilders.termQuery("classes.links.name", "xyz")), ScoreMode.Avg ).innerHit(new InnerHitBuilder()) //Error here As suggested here Elastic Search - java api for inner hit, I tried to replace QueryInnerHitBuilder() with InnerHitBuilder(), but it still does not work. I am getting following error: "Cannot resolve method 'innerHit(org.elasticsearch.index.query.InnerHitBuilder)'" A: The below implementation worked for me . innerHit(new InnerHitBuilder())
stackoverflow
{ "language": "en", "length": 163, "provenance": "stackexchange_0000F.jsonl.gz:913561", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694174" }
71df6d21889d06de29e5966b42e8d62eb1977bf7
Stackoverflow Stackexchange Q: How to check for correlation among continuous and categorical variables? I have a dataset including categorical variables(binary) and continuous variables. I'm trying to apply a linear regression model for predicting a continuous variable. Can someone please let me know how to check for correlation among the categorical variables and the continuous target variable. Current Code: import pandas as pd df_hosp = pd.read_csv('C:\Users\LAPPY-2\Desktop\LengthOfStay.csv') data = df_hosp[['lengthofstay', 'male', 'female', 'dialysisrenalendstage', 'asthma', \ 'irondef', 'pneum', 'substancedependence', \ 'psychologicaldisordermajor', 'depress', 'psychother', \ 'fibrosisandother', 'malnutrition', 'hemo']] print data.corr() All of the variables apart from lengthofstay are categorical. Should this work? A: correlation in this scenario is quite misleading as we are comparing categorical variable with continuous variable
Q: How to check for correlation among continuous and categorical variables? I have a dataset including categorical variables(binary) and continuous variables. I'm trying to apply a linear regression model for predicting a continuous variable. Can someone please let me know how to check for correlation among the categorical variables and the continuous target variable. Current Code: import pandas as pd df_hosp = pd.read_csv('C:\Users\LAPPY-2\Desktop\LengthOfStay.csv') data = df_hosp[['lengthofstay', 'male', 'female', 'dialysisrenalendstage', 'asthma', \ 'irondef', 'pneum', 'substancedependence', \ 'psychologicaldisordermajor', 'depress', 'psychother', \ 'fibrosisandother', 'malnutrition', 'hemo']] print data.corr() All of the variables apart from lengthofstay are categorical. Should this work? A: correlation in this scenario is quite misleading as we are comparing categorical variable with continuous variable A: Convert your categorical variable into dummy variables here and put your variable in numpy.array. For example: data.csv: age,size,color_head 4,50,black 9,100,blonde 12,120,brown 17,160,black 18,180,brown Extract data: import numpy as np import pandas as pd df = pd.read_csv('data.csv') df: Convert categorical variable color_head into dummy variables: df_dummies = pd.get_dummies(df['color_head']) del df_dummies[df_dummies.columns[-1]] df_new = pd.concat([df, df_dummies], axis=1) del df_new['color_head'] df_new: Put that in numpy array: x = df_new.values Compute the correlation: correlation_matrix = np.corrcoef(x.T) print(correlation_matrix) Output: array([[ 1. , 0.99574691, -0.23658011, -0.28975028], [ 0.99574691, 1. , -0.30318496, -0.24026862], [-0.23658011, -0.30318496, 1. , -0.40824829], [-0.28975028, -0.24026862, -0.40824829, 1. ]]) See : numpy.corrcoef A: There is one more method to compute the correlation between continuous variable and dichotomic (having only 2 classes) variable, since this is also a categorical variable, we can use it for the correlation computation. The link for point biserial correlation is given below. https://www.statology.org/point-biserial-correlation-python/
stackoverflow
{ "language": "en", "length": 257, "provenance": "stackexchange_0000F.jsonl.gz:913578", "question_score": "18", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694228" }
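A small sketch of the point-biserial correlation mentioned in the last answer of the preceding question, using scipy.stats.pointbiserialr. The column names follow the question's dataset, but the values are made up for illustration.

import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "lengthofstay": [3, 7, 2, 10, 5, 8],   # continuous target variable
    "asthma":       [0, 1, 0, 1, 0, 1],    # binary (dichotomous) predictor
})

r, p_value = stats.pointbiserialr(df["asthma"], df["lengthofstay"])
print("point-biserial r = %.3f, p-value = %.3f" % (r, p_value))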
530110dceea1430285a1af457587fe7a73f5a74b
Stackoverflow Stackexchange Q: How to scale from left to right only This is my CSS code: body { transform: scaleX(0.67); } With this, my entire website shrinks from both the right and the left, but I need it to scale from the left only. How can I do this? A: You add transform-origin, which defines from which position the transform should occur. Its default value is center center (50% 50%) and you need left center (0 50%): body { transform: scaleX(0.67); transform-origin: left center; }
Q: How to scale from left to right only This is my CSS code: body { transform: scaleX(0.67); } With this, my entire website shrinks from both the right and the left, but I need it to scale from the left only. How can I do this? A: You add transform-origin, which defines from which position the transform should occur. Its default value is center center (50% 50%) and you need left center (0 50%): body { transform: scaleX(0.67); transform-origin: left center; } A: I believe the transform-origin can be helpful here: body { transform:scale(0.67); transform-origin:left center; }
stackoverflow
{ "language": "en", "length": 92, "provenance": "stackexchange_0000F.jsonl.gz:913579", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694230" }
b49745aa1453d59e1964b82b2cedb07302825c5d
Stackoverflow Stackexchange Q: Detect when the device emulation mode is turned on or off in Chrome Chrome DevTools has the option to use the device emulation mode. I know there's a way to test whether the mode is on or not. But I'd like to know when it's being activated or deactivated, on click. Are there any events I could listen to, fired by the browser, that indicate the mode was turned on or off? A: I ended up doing this: $(window).on('orientationchange', function(e) { if (e.target && e.target.devicePixelRatio > 1) { // Emulation mode activated } else { // Emulation mode deactivated } }); Works for Google Chrome (my version: 58.0). Is it a bulletproof way? Not sure. It's enough for my needs, though. orientationchange docs here.
Q: Detect when the device emulation mode is turned on or off in Chrome Chrome DevTools has the option to use the device emulation mode. I know there's a way to test whether the mode is on or not. But I'd like to know when it's being activated or deactivated, on click. Are there any events I could listen to, fired by the browser, that indicate the mode was turned on or off? A: I ended up doing this: $(window).on('orientationchange', function(e) { if (e.target && e.target.devicePixelRatio > 1) { // Emulation mode activated } else { // Emulation mode deactivated } }); Works for Google Chrome (my version: 58.0). Is it a bulletproof way? Not sure. It's enough for my needs, though. orientationchange docs here. A: My solution: $(window).on('orientationchange', function(e) { setTimeout(function() { var emulationModeActivated = window.navigator.userAgent.indexOf('Mobile') !== -1; }, 0); }); Chrome adds Mobile to the userAgent in device emulation mode, for example "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) CriOS/56.0.2924.75 Mobile/14E5239e Safari/602.1" e.target.devicePixelRatio isn't usable on a Mac with a Retina Display, as the value is always > 1
stackoverflow
{ "language": "en", "length": 184, "provenance": "stackexchange_0000F.jsonl.gz:913581", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694234" }
39993a7feda3c696433d8040ab00775bc0c212aa
Stackoverflow Stackexchange Q: How to get a WCF ClientBase in a faulted state? As I remember correctly, in .NET Framework 3.x, a WCF service client (ClientBase<T>) would get a faulted state when the service returned an exception (FaultException). This could result in problems when the service client was not disposed / closed correctly. (See the many posts about this subject, like: What is the best workaround for the WCF client `using` block issue?). This behavior has been changed! Somehow, when the service throws an exception, the service client rethrows that exception, but the client does not enter a faulted state anymore. Two questions: * *Under which conditions does the service client enter a faulted state? *Since when (what framework version or patch) has this behavior been altered?
Q: How to get a WCF ClientBase in a faulted state? As I remember correctly, in .NET Framework 3.x, a WCF service client (ClientBase<T>) would get a faulted state when the service returned an exception (FaultException). This could result in problems when the service client was not disposed / closed correctly. (See the many posts about this subject, like: What is the best workaround for the WCF client `using` block issue?). This behavior has been changed! Somehow, when the service throws an exception, the service client rethrows that exception, but the client does not enter a faulted state anymore. Two questions: * *Under which conditions does the service client enter a faulted state? *Since when (what framework version or patch) has this behavior been altered?
stackoverflow
{ "language": "en", "length": 125, "provenance": "stackexchange_0000F.jsonl.gz:913595", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694294" }
48acf0bf4cab711c97f70a5813003cf7561cf416
Stackoverflow Stackexchange Q: How to accept JSON POST parameters as @RequestParam in Spring Servlet? I'm trying to create a POST servlet that should be called with JSON request. The following should work, but does not. What might be missing? @RestController public class MyServlet { @PostMapping("/") public String test(@RequestParam String name, @RequestParam String[] params) { return "name was: " + name; } } JSON POST: { "name": "test", "params": [ "first", "snd" ] } Result: name is always null. Why? "Response could not be created: org.springframework.web.bind.MissingServletRequestParameterException: Required String parameter 'name' is not present" A: In general I don't pass a request param in a POST method. Instead, I am using a DTO to pass it in the body like: @RequestMapping(value = "/items", method = RequestMethod.POST) public void addItem(@RequestBody ItemDTO itemDTO) Then, you need to create the ItemDTO as a POJO with the necessary fields.
Q: How to accept JSON POST parameters as @RequestParam in Spring Servlet? I'm trying to create a POST servlet that should be called with a JSON request. The following should work, but does not. What might be missing? @RestController public class MyServlet { @PostMapping("/") public String test(@RequestParam String name, @RequestParam String[] params) { return "name was: " + name; } } JSON POST: { "name": "test", "params": [ "first", "snd" ] } Result: name is always null. Why? "Response could not be created: org.springframework.web.bind.MissingServletRequestParameterException: Required String parameter 'name' is not present" A: In general I don't pass a request param in a POST method. Instead, I am using a DTO to pass it in the body like: @RequestMapping(value = "/items", method = RequestMethod.POST) public void addItem(@RequestBody ItemDTO itemDTO) Then, you need to create the ItemDTO as a POJO with the necessary fields. A: In addition to @stzoannos' answer, if you do not want to create a POJO for the JSON object, you can use Google's GSON library to parse the JSON into a JsonObject, which lets you read parameters much as you would with getters and setters. JsonObject jsonObj = new JsonParser().parse(json).getAsJsonObject(); return "name is: " + jsonObj.get("name").getAsString();
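For illustration, a minimal sketch of the DTO approach described in the first answer, written against the JSON body from the question. The ItemDTO name, its fields, and the nested-class layout are placeholders (not from the original answers); it assumes Jackson, Spring Boot's default JSON mapper, binds the body by field name:

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyServlet {

    // Plain POJO matching the JSON body from the question; Jackson binds it by field name.
    public static class ItemDTO {
        private String name;
        private String[] params;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String[] getParams() { return params; }
        public void setParams(String[] params) { this.params = params; }
    }

    @PostMapping("/")
    public String test(@RequestBody ItemDTO body) {   // whole JSON body, not individual request params
        return "name was: " + body.getName();
    }
}

Posting the JSON from the question to "/" would then return "name was: test".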
stackoverflow
{ "language": "en", "length": 194, "provenance": "stackexchange_0000F.jsonl.gz:913599", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694305" }
1c599f83b5bf7df382bc014db726a29d4c1293a7
Stackoverflow Stackexchange Q: What is the "M- notation" and where is it documented? The man page of cat says: -v, --show-nonprinting use ^ and M- notation, except for LFD and TAB What is the M- notation and where is it documented? Example: $cat log -A wrote 262144 bytes from file test.x in 9.853947s (25.979 KiB/s)^M$ ^M> ^H^H ^H^H> What do ^M and ^H mean? A: ^M is for Control-M (a carriage return), ^H for Control-H (a backspace). M-Something is Meta-Something (Meta- is what the Alt key does in some terminals).
Q: What is the "M- notation" and where is it documented? The man page of cat says: -v, --show-nonprinting use ^ and M- notation, except for LFD and TAB What is the M- notation and where is it documented? Example: $cat log -A wrote 262144 bytes from file test.x in 9.853947s (25.979 KiB/s)^M$ ^M> ^H^H ^H^H> What do ^M and ^H mean? A: ^M is for Control-M (a carriage return), ^H for Control-H (a backspace). M-Something is Meta-Something (Meta- is what the Alt key does in some terminals). A: I was wondering this too. I checked the source but it seemed easier to create a input file to get the mapping. I created a test input file with a Perl scrip for( my $i=0 ; $i < 256; $i++ ) { print ( sprintf( "%c is %d %x\n", $i, $i ,$i ) ); } and then ran it through cat -v Also if you see M-oM-;M-? at the start of a file it is the UTF-8 byte order mark. Scroll down through these to get to the M- values: ^@ is 0 0 ^A is 1 1 ^B is 2 2 ^C is 3 3 ^D is 4 4 ^E is 5 5 ^F is 6 6 ^G is 7 7 ^H is 8 8 (9 is tab) (10 is NL) ^K is 11 b ^L is 12 c ^M is 13 d ^N is 14 e ^O is 15 f ^P is 16 10 ^Q is 17 11 ^R is 18 12 ^S is 19 13 ^T is 20 14 ^U is 21 15 ^V is 22 16 ^W is 23 17 ^X is 24 18 ^Y is 25 19 ^Z is 26 1a ^[ is 27 1b ^\ is 28 1c ^] is 29 1d ^^ is 30 1e ^_ is 31 1f ...printing chars removed... ^? is 127 7f M-^@ is 128 80 M-^A is 129 81 M-^B is 130 82 M-^C is 131 83 M-^D is 132 84 M-^E is 133 85 M-^F is 134 86 M-^G is 135 87 M-^H is 136 88 M-^I is 137 89 M-^J is 138 8a M-^K is 139 8b M-^L is 140 8c M-^M is 141 8d M-^N is 142 8e M-^O is 143 8f M-^P is 144 90 M-^Q is 145 91 M-^R is 146 92 M-^S is 147 93 M-^T is 148 94 M-^U is 149 95 M-^V is 150 96 M-^W is 151 97 M-^X is 152 98 M-^Y is 153 99 M-^Z is 154 9a M-^[ is 155 9b M-^\ is 156 9c M-^] is 157 9d M-^^ is 158 9e M-^_ is 159 9f M- is 160 a0 M-! is 161 a1 M-" is 162 a2 M-# is 163 a3 M-$ is 164 a4 M-% is 165 a5 M-& is 166 a6 M-' is 167 a7 M-( is 168 a8 M-) is 169 a9 M-* is 170 aa M-+ is 171 ab M-, is 172 ac M-- is 173 ad M-. is 174 ae M-/ is 175 af M-0 is 176 b0 M-1 is 177 b1 M-2 is 178 b2 M-3 is 179 b3 M-4 is 180 b4 M-5 is 181 b5 M-6 is 182 b6 M-7 is 183 b7 M-8 is 184 b8 M-9 is 185 b9 M-: is 186 ba M-; is 187 bb M-< is 188 bc M-= is 189 bd M-> is 190 be M-? is 191 bf M-@ is 192 c0 M-A is 193 c1 M-B is 194 c2 M-C is 195 c3 M-D is 196 c4 M-E is 197 c5 M-F is 198 c6 M-G is 199 c7 M-H is 200 c8 M-I is 201 c9 M-J is 202 ca M-K is 203 cb M-L is 204 cc M-M is 205 cd M-N is 206 ce M-O is 207 cf M-P is 208 d0 M-Q is 209 d1 M-R is 210 d2 M-S is 211 d3 M-T is 212 d4 M-U is 213 d5 M-V is 214 d6 M-W is 215 d7 M-X is 216 d8 M-Y is 217 d9 M-Z is 218 da M-[ is 219 db M-\ is 220 dc M-] is 221 dd M-^ is 222 de M-_ is 223 df M-` is 224 e0 M-a is 225 e1 M-b is 226 e2 M-c is 227 e3 M-d is 228 e4 M-e is 229 e5 M-f is 230 e6 M-g is 231 e7 M-h is 232 e8 M-i is 233 e9 M-j is 234 ea M-k is 235 eb M-l is 236 ec M-m is 237 ed M-n is 238 ee M-o is 239 ef M-p is 240 f0 M-q is 241 f1 M-r is 242 f2 M-s is 243 f3 M-t is 244 f4 M-u is 245 f5 M-v is 246 f6 M-w is 247 f7 M-x is 248 f8 M-y is 249 f9 M-z is 250 fa M-{ is 251 fb M-| is 252 fc M-} is 253 fd M-~ is 254 fe M-^? is 255 ff A: I am not sure about the M- notation, but the ones involving ^ uses the caret notation: Caret notation is a notation for control characters in ASCII. In particular, The digraph stands for the control character whose ASCII code is the same as the character's ASCII code with the uppermost bit, in a 7-bit encoding, reversed. 
which you can verify by looking at the ASCII binary (octal) representation: Image source: http://www.asciitable.com Because ASCII is such a limited character set (as you can see above), it's straightforward to list all control chars representable by the caret notation, e.g., http://xahlee.info/comp/unicode_character_representation.html. A: You can see the definition in the key_name(3) manpage Likewise, the meta(3X) function allows the caller to change the output of keyname, i.e., it determines whether to use the “M-” prefix for “meta” keys (codes in the range 128 to 255). Both use_legacy_coding(3X) and meta(3X) succeed only after curses is initialized. X/Open Curses does not document the treatment of codes 128 to 159. When treating them as “meta” keys (or if keyname is called before initializing curses), this implementation returns strings “M-^@”, “M-^A”, etc. key_name(3X) So basically the Meta analog of the Ctrl version is the keycode of Ctrl + 128. You can see that easily in Brian's table. Here's a slightly modified version for ease of comparison $ LC_ALL=C perl -e 'for( my $i=0 ; $i < 128; $i++ ) { print ( sprintf( "%c is %d %x\t\t%c is %d %x\n", $i, $i, $i, $i + 128, $i + 128, $i + 128 ) ); }' >bytes.txt $ cat -v bytes.txt ^@ is 0 0 M-^@ is 128 80 ^A is 1 1 M-^A is 129 81 ^B is 2 2 M-^B is 130 82 ^C is 3 3 M-^C is 131 83 ... ^Y is 25 19 M-^Y is 153 99 ^Z is 26 1a M-^Z is 154 9a ^[ is 27 1b M-^[ is 155 9b ^\ is 28 1c M-^\ is 156 9c ^] is 29 1d M-^] is 157 9d ^^ is 30 1e M-^^ is 158 9e ^_ is 31 1f M-^_ is 159 9f is 32 20 M- is 160 a0 ! is 33 21 M-! is 161 a1 " is 34 22 M-" is 162 a2 # is 35 23 M-# is 163 a3 $ is 36 24 M-$ is 164 a4 % is 37 25 M-% is 165 a5 & is 38 26 M-& is 166 a6 ' is 39 27 M-' is 167 a7 ( is 40 28 M-( is 168 a8 ) is 41 29 M-) is 169 a9 * is 42 2a M-* is 170 aa + is 43 2b M-+ is 171 ab , is 44 2c M-, is 172 ac - is 45 2d M-- is 173 ad . is 46 2e M-. is 174 ae / is 47 2f M-/ is 175 af 0 is 48 30 M-0 is 176 b0 1 is 49 31 M-1 is 177 b1 ... : is 58 3a M-: is 186 ba ; is 59 3b M-; is 187 bb < is 60 3c M-< is 188 bc = is 61 3d M-= is 189 bd > is 62 3e M-> is 190 be ? is 63 3f M-? is 191 bf @ is 64 40 M-@ is 192 c0 A is 65 41 M-A is 193 c1 B is 66 42 M-B is 194 c2 ... Z is 90 5a M-Z is 218 da [ is 91 5b M-[ is 219 db \ is 92 5c M-\ is 220 dc ] is 93 5d M-] is 221 dd ^ is 94 5e M-^ is 222 de _ is 95 5f M-_ is 223 df ` is 96 60 M-` is 224 e0 a is 97 61 M-a is 225 e1 b is 98 62 M-b is 226 e2 ... z is 122 7a M-z is 250 fa { is 123 7b M-{ is 251 fb | is 124 7c M-| is 252 fc } is 125 7d M-} is 253 fd ~ is 126 7e M-~ is 254 fe ^? is 127 7f M-^? is 255 ff The part after M- on the right is exactly the same as on the left, with the keycodes differ by 128 You can also check cat's source code, the basic expression is *bpout++ = ch - 128; for the Meta key version in the show_nonprinting case A: Answer from the book. Unix power tools. 25.7 Show Non-Printing Characters with cat -v or od -c. "cat -v has its own symbol for characters outside the ASCII range with their high bits set, also called metacharacters. cat -v prints those as M- followed by another character. There are two of them in the cat -v output: M-^? and M-a . To get a metacharacter, you add 200 octal. "Say what?" Let's look at M-a first. The octal value of the letter a is 141. When cat -v prints M-a , it means the character you get by adding 141+200, or 341 octal. You can decode the character cat prints as M-^? in the same way. The ^? stands for the DEL character, which is octal 177. Add 200+177 to get 377 octal. "
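As a small illustration of the mapping described above, here is a Java re-implementation of the notation (a sketch only, not GNU cat's actual code): control bytes below 32 become ^ plus the character 64 higher, 127 becomes ^?, bytes with the high bit set get an M- prefix and are then notated as if the high bit were stripped, and LFD and TAB are passed through as the man page excerpt says.

public class CatVNotation {

    // Render one byte the way `cat -v` does.
    static String show(int b) {
        b &= 0xFF;
        if (b == '\n' || b == '\t') return String.valueOf((char) b); // -v leaves LFD and TAB alone
        return notate(b);
    }

    static String notate(int b) {
        if (b >= 128) return "M-" + notate(b - 128);   // high bit set: meta prefix, then recurse
        if (b == 127) return "^?";                     // DEL
        if (b < 32)  return "^" + (char) (b + 64);     // control chars map to ^@ .. ^_
        return String.valueOf((char) b);               // printable ASCII unchanged
    }

    public static void main(String[] args) {
        System.out.println(show(13));   // ^M   (carriage return)
        System.out.println(show(8));    // ^H   (backspace)
        System.out.println(show(225));  // M-a  (225 = 'a' + 128, i.e. 0141 + 0200 octal)
        System.out.println(show(255));  // M-^? (255 = DEL + 128)
    }
}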
stackoverflow
{ "language": "en", "length": 1694, "provenance": "stackexchange_0000F.jsonl.gz:913608", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694331" }
ff6958944786c32346e12f735a392d5b2e110a31
Stackoverflow Stackexchange Q: API for Spark Streaming Statistics I'm looking for API which allows for accessing Spark Streaming Statistics which are available in "Streaming" tab in history server. I'm mainly interested in batch processing time value but it's not directly available via REST API at least according to documentation: https://spark.apache.org/docs/latest/monitoring.html#rest-api Any ideas how to get various information like in "Streaming" tab or running job in history server? A: There's a metrics endpoint available on the same port as the Spark UI on the driver node. http://<host>:<sparkUI-port>/metrics/json/ Streaming-related metrics have a .StreamingMetrics in their name: Sample from a local test job: local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_processingDelay: { value: 30 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_processingEndTime: { value: 1498124090031 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_processingStartTime: { value: 1498124090001 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_schedulingDelay: { value: 1 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_submissionTime: { value: 1498124090000 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_totalDelay: { value: 31 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastReceivedBatch_processingEndTime: { value: 1498124090031 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastReceivedBatch_processingStartTime: { value: 1498124090001 } To get the processing time we need to diff local-StreamingMetrics.streaming.lastCompletedBatch_processingEndTime - StreamingMetrics.streaming.lastCompletedBatch_processingStartTime
Q: API for Spark Streaming Statistics I'm looking for API which allows for accessing Spark Streaming Statistics which are available in "Streaming" tab in history server. I'm mainly interested in batch processing time value but it's not directly available via REST API at least according to documentation: https://spark.apache.org/docs/latest/monitoring.html#rest-api Any ideas how to get various information like in "Streaming" tab or running job in history server? A: There's a metrics endpoint available on the same port as the Spark UI on the driver node. http://<host>:<sparkUI-port>/metrics/json/ Streaming-related metrics have a .StreamingMetrics in their name: Sample from a local test job: local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_processingDelay: { value: 30 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_processingEndTime: { value: 1498124090031 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_processingStartTime: { value: 1498124090001 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_schedulingDelay: { value: 1 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_submissionTime: { value: 1498124090000 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastCompletedBatch_totalDelay: { value: 31 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastReceivedBatch_processingEndTime: { value: 1498124090031 }, local-1498040220092.driver.printWriter.snb.StreamingMetrics.streaming.lastReceivedBatch_processingStartTime: { value: 1498124090001 } To get the processing time we need to diff local-StreamingMetrics.streaming.lastCompletedBatch_processingEndTime - StreamingMetrics.streaming.lastCompletedBatch_processingStartTime A: As Spark 2.2.0 was released in july, one month after your post I guess your link refers to: spark 2.1.0. Apparently the REST API got extended for Spark Streaming, see spark 2.2.0. So if you still got the possibility to update the Spark version, I recommend doing that. You can then receive data from all batches with the endpoint: /applications/[app-id]/streaming/batches
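A rough Java sketch of reading those two gauges and taking the difference. It assumes the driver UI is on the default port 4040 and that the endpoint returns the standard Dropwizard-style JSON ("<metric>": {"value": N}); the URL and the regex-based extraction are illustrative only — a real client would use a proper JSON parser.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BatchProcessingTime {

    // Pull one gauge value out of the /metrics/json/ payload by metric-name suffix.
    static long gauge(String json, String suffix) {
        Pattern p = Pattern.compile(Pattern.quote(suffix) + "\"\\s*:\\s*\\{\\s*\"value\"\\s*:\\s*(\\d+)");
        Matcher m = p.matcher(json);
        if (!m.find()) throw new IllegalStateException("metric not found: " + suffix);
        return Long.parseLong(m.group(1));
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL -- point this at the driver's Spark UI host and port.
        String url = "http://localhost:4040/metrics/json/";
        String body = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                      HttpResponse.BodyHandlers.ofString())
                .body();

        long start = gauge(body, "lastCompletedBatch_processingStartTime");
        long end   = gauge(body, "lastCompletedBatch_processingEndTime");
        System.out.println("last batch processing time: " + (end - start) + " ms");
    }
}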
stackoverflow
{ "language": "en", "length": 211, "provenance": "stackexchange_0000F.jsonl.gz:913640", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694424" }
c2e179bcd202a8d606075fb083cf902eb2204e96
Stackoverflow Stackexchange Q: Extract first object from array with es6 array methods I have this code to get an array of one object: let selectedShop = initialResultsState.get('products') .filter(product => product.shop.selected) console.log(selectedShop) result: Can I extract the object from the array in the same operation by chaining another ES6 array method onto the end of filter, rather than doing let newVariable = selectedShop[0]? I tried chaining this to it: .map(x => {return { shop: x.shop, products: x.products }}) but it is still an array of one object because map always returns a new array. A: Two basic ways: The first way is shifting: as an Array method, you can use Array.prototype.shift(). let selectedShop = initialResultsState.get('products') .filter(product => product.shop.selected) .shift(); The second way is an assignment: you can do this with a destructuring assignment. In your case: let [selectedShop] = initialResultsState.get('products') .filter(product => product.shop.selected); This is available in ES6, supported in major browsers without transpiling. But you can see another approach in the answers (Mikael Lennholm's answer): Array.prototype.find(). This can be more efficient.

Q: Extract first object from array with es6 array methods I have this code to get an array of one object: let selectedShop = initialResultsState.get('products') .filter(product => product.shop.selected) console.log(selectedShop) result: Can I extract the object from the array in the same operation by chaining another ES6 array method onto the end of filter, rather than doing let newVariable = selectedShop[0]? I tried chaining this to it: .map(x => {return { shop: x.shop, products: x.products }}) but it is still an array of one object because map always returns a new array. A: Two basic ways: The first way is shifting: as an Array method, you can use Array.prototype.shift(). let selectedShop = initialResultsState.get('products') .filter(product => product.shop.selected) .shift(); The second way is an assignment: you can do this with a destructuring assignment. In your case: let [selectedShop] = initialResultsState.get('products') .filter(product => product.shop.selected); This is available in ES6, supported in major browsers without transpiling. But you can see another approach in the answers (Mikael Lennholm's answer): Array.prototype.find(). This can be more efficient. A: How about using the find() method instead of filter()? find() always returns a single item, not wrapped in an array, unless it doesn't find any item, in which case it returns undefined let selectedShop = initialResultsState.get('products') .find(product => product.shop.selected) It's also a lot more efficient since it actually stops iterating over the array as soon as it has found an item. filter() will always iterate over the entire array, which is a waste if you're only interested in the first relevant item anyway.
stackoverflow
{ "language": "en", "length": 249, "provenance": "stackexchange_0000F.jsonl.gz:913656", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694494" }
da31eb5d859075aafc180fd81580a10614707382
Stackoverflow Stackexchange Q: How to check if Error is subclass of NSError I want a function which, for any given Error, will give me some description of it protocol CustomError { } func customDescription(_ error: Error) -> String { switch error { case let customError as CustomError: return "custom error" case ???: return "not subclass of NSError" case let nsError as NSError: return "subclass of NSError" } } Above is not the real code, and I don't want a String description, but a Dictionary, but this is not important in context of the question. The problem is I don't know how to distinguish Errors which is subclass of NSError and which is not because any swift error could be bridged to NSError. Is it possible in swift? A: As you already noticed, any type conforming to Error can be bridged to NSError, therefore error is NSError is always true, and a cast error as NSError does always succeed. What you can do is to check the dynamic type of the value with type(of:): type(of: error) is NSError.Type evaluates to true if error is an instance of NSError or a subclass.
Q: How to check if Error is subclass of NSError I want a function which, for any given Error, will give me some description of it protocol CustomError { } func customDescription(_ error: Error) -> String { switch error { case let customError as CustomError: return "custom error" case ???: return "not subclass of NSError" case let nsError as NSError: return "subclass of NSError" } } Above is not the real code, and I don't want a String description, but a Dictionary, but this is not important in context of the question. The problem is I don't know how to distinguish Errors which is subclass of NSError and which is not because any swift error could be bridged to NSError. Is it possible in swift? A: As you already noticed, any type conforming to Error can be bridged to NSError, therefore error is NSError is always true, and a cast error as NSError does always succeed. What you can do is to check the dynamic type of the value with type(of:): type(of: error) is NSError.Type evaluates to true if error is an instance of NSError or a subclass. A: private protocol _NSError: Error { // private, so _NSError is equal to NSError } extension NSError: _NSError { } public func customDescription(_ error: Error) -> String { switch error { case let nsError as _NSError: print(nsError as! NSError) return "NSError" default: return "others" } }
stackoverflow
{ "language": "en", "length": 234, "provenance": "stackexchange_0000F.jsonl.gz:913673", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694549" }
cb82f430f317cf935a058fd822cbd1bf31beecca
Stackoverflow Stackexchange Q: Docker Swarm with image versions externalized to .env file I used to externalized my image versions to my .env file. This make it easy to maintain and I don't modify my docker-compose.yml file just to upgrade a version, so I'm sure I won't delete a line by mistake or whatever. But when I try to deploy my services with stack to the swarm, docker engine complains that my image is not a correct reposity/tag, with the exact following message : Error response from daemon: rpc error: code = 3 desc = ContainerSpec: "GROUP/IMAGE:" is not a valid repository/tag To fix this, I can fix the image version directly in the docker-compose.yml file. Is there any logic here or it that a bug? But this mixes fix part of the docker-compose and variable ones. Cheers, Olivier A: The answer is quite simple: it's not a bug, nor a feature. .env is not currently supported by docker stack. You must source manually the .env running export $(cat .env) before running docker stack ... There is an issue discussing this needs in the Docker Github. https://github.com/docker/docker.github.io/issues/3654 and another one discussing the problem and solution: https://github.com/moby/moby/issues/29133#issuecomment-285980447
Q: Docker Swarm with image versions externalized to .env file I used to externalized my image versions to my .env file. This make it easy to maintain and I don't modify my docker-compose.yml file just to upgrade a version, so I'm sure I won't delete a line by mistake or whatever. But when I try to deploy my services with stack to the swarm, docker engine complains that my image is not a correct reposity/tag, with the exact following message : Error response from daemon: rpc error: code = 3 desc = ContainerSpec: "GROUP/IMAGE:" is not a valid repository/tag To fix this, I can fix the image version directly in the docker-compose.yml file. Is there any logic here or it that a bug? But this mixes fix part of the docker-compose and variable ones. Cheers, Olivier A: The answer is quite simple: it's not a bug, nor a feature. .env is not currently supported by docker stack. You must source manually the .env running export $(cat .env) before running docker stack ... There is an issue discussing this needs in the Docker Github. https://github.com/docker/docker.github.io/issues/3654 and another one discussing the problem and solution: https://github.com/moby/moby/issues/29133#issuecomment-285980447 A: you can create a deploy.sh export $(cat .env) > /dev/null 2>&1; docker stack deploy ${1:-STACK_NAME} * *The .env parses without regular expressions or unstable tricks. *Errors on stderr produced by #comments inside the .env will be redirected to stdin (2>&1) *Undesired prints of all export and error now on stdin too, are redirected to /dev/null. This prevents console flood. *Those errors do not prevent .env being parsed correctly. We can define STACK_NAME in our .env but we can also pass our customized stack_name . deploy.sh <stack_name> (stack_name opcional) This workaround gave me headaches for 3 nights A: As already mentioned the .env isn't currently supported by the docker stack. So the alternative way is to clearly specify the environment file using by env_file, e.g.: version: '3.3' services: foo-service: image: foo-image env_file: - .env environment: - SOME_ENV=qwerty A: The yaml parser in docker stack deploy doesn't have all the same features of that in docker-compose. However, you can use docker-compose config to output a yaml file after it's done all the variable substitutions, extending other files, and merging multiple files together. This effectively turns docker-compose into a preprocessor. A: I have encountered the same issue, My issue was caused by running docker-compose in a different python virtual environment. I hope this could help. It seems that by running on a different python virtual env, I ruined the docker-compose dependency pythondotenv
stackoverflow
{ "language": "en", "length": 423, "provenance": "stackexchange_0000F.jsonl.gz:913704", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694640" }
cbe40ed5ff7ee6eefe598e3899556a35900f7bcd
Stackoverflow Stackexchange Q: The difference between logical shift right, arithmetic shift right, and rotate right I've been reading the classic Hacker's delight and I am having trouble understanding the difference between logical shift right,arithmetic shift right, and rotate right. Please excuse if the doubt seems too simple. A: The difference is pretty much explained in the right-most column. * *Logical shift treats the number as a bunch of bits, and shifts in zeros. This is the >> operator in C. *Arithmetic shift treats the number as a signed integer (in 2s complement), and "retains" the topmost bit, shifting in zeros if the topmost bit was 0, and ones if it was one. C's right-shift operator has implementation-defined behavior if the number being shifted is negative.For example, the binary number 11100101 (-27 in decimal, assuming 2s complement), when right-shifted 3 bits using logical shift, becomes 00011100 (decimal 28). This is clearly confusing. Using an arithmetic shift, the sign bit would be kept, and the result would become 11111100 (decimal -4, which is about right for -27 / 8). *Rotation does neither, since topmost bits are replaced by lowermost bits. C does not have an operator to do rotation.
Q: The difference between logical shift right, arithmetic shift right, and rotate right I've been reading the classic Hacker's delight and I am having trouble understanding the difference between logical shift right,arithmetic shift right, and rotate right. Please excuse if the doubt seems too simple. A: The difference is pretty much explained in the right-most column. * *Logical shift treats the number as a bunch of bits, and shifts in zeros. This is the >> operator in C. *Arithmetic shift treats the number as a signed integer (in 2s complement), and "retains" the topmost bit, shifting in zeros if the topmost bit was 0, and ones if it was one. C's right-shift operator has implementation-defined behavior if the number being shifted is negative.For example, the binary number 11100101 (-27 in decimal, assuming 2s complement), when right-shifted 3 bits using logical shift, becomes 00011100 (decimal 28). This is clearly confusing. Using an arithmetic shift, the sign bit would be kept, and the result would become 11111100 (decimal -4, which is about right for -27 / 8). *Rotation does neither, since topmost bits are replaced by lowermost bits. C does not have an operator to do rotation. A: First remember that machine words are of fixed size. Say 4, and that your input is: +---+---+---+---+ | a | b | c | d | +---+---+---+---+ Then pushing everything one position to the left gives: +---+---+---+---+ | b | c | d | X | +---+---+---+---+ Question what to put as X? * *with a shift put 0 *with rotate put a Now push everything one position to the right gives: +---+---+---+---+ | X | a | b | c | +---+---+---+---+ Question what to put as X? * *with a logical shift put 0 *with an arithmetic shift put a *with rotate put d Roughly. Logical shift correspond to (left-shift) multiplication by 2, (right-shift) integer division by 2. Arithmetic shift is something related to 2's-complement representation of signed numbers. In this representation, the sign is the leftmost bit, then arithmetic shift preserves the sign (this is called sign extension). Rotate has no ordinary mathematical meaning, and is almost an obsolete operation even in computers. A: Logical right shift means shifting the bits to the right and MSB(most significant bit) becomes 0. Example: Logical right shift of number 1 0 1 1 0 1 0 1 is 0 1 0 1 1 0 1 0. Arithmetic right shift means shifting the bits to the right and MSB(most significant bit) is same as in the original number. Example: Arithmetic right shift of number 1 0 1 1 0 1 0 1 is 1 1 0 1 1 0 1 0.
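Java makes the first two cases easy to see side by side, because unlike C it has separate operators for arithmetic (>>) and logical (>>>) right shift; a small sketch reproducing the -27 example from above (the 8-bit masking and the manual rotate are just for illustration):

public class ShiftDemo {
    public static void main(String[] args) {
        int v = 0b1110_0101;              // 0xE5; read as an 8-bit two's-complement value this is -27

        // Arithmetic right shift: Java's >> on a negative int copies the sign bit in from the left.
        System.out.println(-27 >> 3);                          // -4

        // Logical right shift: >>> shifts in zeros; mask to 8 bits to mimic the 8-bit example above.
        System.out.println((v & 0xFF) >>> 3);                  // 28  (0001 1100)

        // Rotate right within 8 bits: the 3 low bits wrap around to the top.
        int rotated = ((v >>> 3) | (v << (8 - 3))) & 0xFF;
        System.out.println(Integer.toBinaryString(rotated));   // 10111100

        // On full 32-bit ints Java provides a rotate directly.
        System.out.println(Integer.toBinaryString(Integer.rotateRight(-27, 3)));
    }
}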
stackoverflow
{ "language": "en", "length": 446, "provenance": "stackexchange_0000F.jsonl.gz:913807", "question_score": "53", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694957" }
d539f5f82f3db957d2f569e8131513e3066ade28
Stackoverflow Stackexchange Q: How to compare 'CBCharacteristic.uuid == "String"' in swift? How to compare this? uuid and string cant compare func charValues() { let charValue = String(data: getCharacteristic.value!, encoding: String.Encoding.ascii)! if getCharacteristic.uuid == "Manufacturer Name String" { self.lblManufactureName.text = charValue } } A: There’s a uuidString property on CBUUID.
Q: How to compare 'CBCharacteristic.uuid == "String"' in swift? How to compare this? uuid and string cant compare func charValues() { let charValue = String(data: getCharacteristic.value!, encoding: String.Encoding.ascii)! if getCharacteristic.uuid == "Manufacturer Name String" { self.lblManufactureName.text = charValue } } A: There’s a uuidString property on CBUUID. A: You need to compare UUID against UUID. Her you compare UUID against name. CBPeripheral: 0x1c41066f0, identifier = 1CE90EC4-F208-ED69-A4A8-8CF804DDA0D3, name = fxc0c945fc40, state = disconnected as you can see there is a difference. try comparing name to name or UUIDS
stackoverflow
{ "language": "en", "length": 87, "provenance": "stackexchange_0000F.jsonl.gz:913814", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694992" }
c066ca7a15d9c8d23489b8f8be7cdeb36de24612
Stackoverflow Stackexchange Q: Storing IV when using AES asymmetric encryption and decryption I'm looking at an C# AES asymmetric encryption and decryption example here and not sure if i should store the IV in a safe place (also encrypted??). Or i can just attach it to the encrypted text for using later when i with to decrypt. From a short reading about AES it seems it's not needed at all for decryption but i'm not sure i got it right and also the aes.CreateDecryptor(keyBytes, iv) need it as parameter. I use a single key for all encryptions. A: It's fairly standard to transmit the encrypted data as IV.Concat(cipherText). It's also fairly standard to put the IV off to the side, like in PKCS#5. The IV-on-the-side approach matches more closely with how .NET wants to process the data, since it's somewhat annoying to slice off the IV to pass it separately to the IV parameter (or property), and then to have a more complicated slicing operation with the ciphertext (or recovered plaintext). But the IV is usually transmitted in the clear either way. So, glue it together, or make it a separate column... whatever fits your program and structure better.
Q: Storing IV when using AES asymmetric encryption and decryption I'm looking at an C# AES asymmetric encryption and decryption example here and not sure if i should store the IV in a safe place (also encrypted??). Or i can just attach it to the encrypted text for using later when i with to decrypt. From a short reading about AES it seems it's not needed at all for decryption but i'm not sure i got it right and also the aes.CreateDecryptor(keyBytes, iv) need it as parameter. I use a single key for all encryptions. A: It's fairly standard to transmit the encrypted data as IV.Concat(cipherText). It's also fairly standard to put the IV off to the side, like in PKCS#5. The IV-on-the-side approach matches more closely with how .NET wants to process the data, since it's somewhat annoying to slice off the IV to pass it separately to the IV parameter (or property), and then to have a more complicated slicing operation with the ciphertext (or recovered plaintext). But the IV is usually transmitted in the clear either way. So, glue it together, or make it a separate column... whatever fits your program and structure better. A: Answer: IV is necessary for decryption as long as the content has been encrypted with it. You don't need to encrypt or hide the IV. It may be public. -- The purpose of the IV is to be combined to the key that you are using, so it's like you are encrypting every "block of data" with a different "final key" and then it guarantees that the cipher data (the encrypted one) will always be different along the encryption (and decryption) process. This is a very good illustration of what happens IF YOU DON'T use IV. Basically, the encryption process is done by encrypting the input data in blocks. So during the encryption of this example, all the parts of the image that have the same color (let's say the white background) will output the same "cipher data" if you use always the same key, then a pattern can still be found and then you didn't hide the image as desired. So combining a different extra data (the IV) to the key for each block is like you are using a different "final key" for each block, then you solve your problem.
stackoverflow
{ "language": "en", "length": 390, "provenance": "stackexchange_0000F.jsonl.gz:913815", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694994" }
03bb26f746b52867cb3765db86f8e11cf9702e5f
Stackoverflow Stackexchange Q: Use Remote Config to set API Url in app - Good or Bad idea? I have been reading a lot of articles in regards to security and other parties reverse engineering your app and then flooding your APIs etc. For my current (nativescript) app I am using Firebase for Auth and then have my own API URL hardcoded into the app. I am considering using Firebase Remote Config to retrieve my API URL and then setting it in the app. In order to not have my API URL exposed. I was wondering if someone has done this before? And if this approach is a good or bad idea? Thanks. Robert
Q: Use Remote Config to set API Url in app - Good or Bad idea? I have been reading a lot of articles in regards to security and other parties reverse engineering your app and then flooding your APIs etc. For my current (nativescript) app I am using Firebase for Auth and then have my own API URL hardcoded into the app. I am considering using Firebase Remote Config to retrieve my API URL and then setting it in the app. In order to not have my API URL exposed. I was wondering if someone has done this before? And if this approach is a good or bad idea? Thanks. Robert
stackoverflow
{ "language": "en", "length": 111, "provenance": "stackexchange_0000F.jsonl.gz:913816", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44694996" }
c0b7209a92a53ec431092f874573e602129bf68b
Stackoverflow Stackexchange Q: Where did the Source Control -> History menu go in Xcode 9? I just installed the Xcode 9 beta version, and I found that I can't see all of SVN's check-in logs in Xcode 9, which were available under Source Control -> History in Xcode 8 and earlier. How do I get this information in Xcode 9? A: They moved it to this tab on the left nav.
Q: Where did the Source Control -> History menu go in Xcode 9? I just installed the Xcode 9 beta version, and I found that I can't see all of SVN's check-in logs in Xcode 9, which were available under Source Control -> History in Xcode 8 and earlier. How do I get this information in Xcode 9? A: They moved it to this tab on the left nav. A: This works for a git repository: Open Menu View/Navigators/Show Source Control Navigator (⌘2) Commit details appear in the Utilities pane if Source Control Inspector is selected. A: Xcode Version 9.2 View -> Navigators -> Show Source Control Navigator or just ⌘ Command + 2
stackoverflow
{ "language": "en", "length": 110, "provenance": "stackexchange_0000F.jsonl.gz:913849", "question_score": "21", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695116" }
5a71ded9499abedd698f27c8dbebc49c2066ec80
Stackoverflow Stackexchange Q: LEFT JOIN with DAX I am looking for a way to do a LEFT JOIN like in SQL, but with DAX. So let's say that I have 2 tables, A and B. B is a subset of A. So, having Table A: rowa rowb rowc and having Table B: rowa I need Table C with: A.rowa; B.rowa A.rowb; null A.rowc; null How can I achieve this with DAX? Thank you for your time! A: For example: DEFINE VAR TABLE1=DATATABLE("L1",STRING,{{1},{2}}) VAR TABLE2=DATATABLE("L1",STRING,{{1},{3}}) EVALUATE NATURALLEFTOUTERJOIN(TABLE1,ADDCOLUMNS(TABLE2,"L2",[L1]))
Q: LEFT JOIN with DAX I am looking for a way to do a LEFT JOIN like in SQL, but with DAX. So let's say that I have 2 tables, A and B. B is a subset of A. So, having Table A: rowa rowb rowc and having Table B: rowa I need Table C with: A.rowa; B.rowa A.rowb; null A.rowc; null How can I achieve this with DAX? Thank you for your time! A: For example: DEFINE VAR TABLE1=DATATABLE("L1",STRING,{{1},{2}}) VAR TABLE2=DATATABLE("L1",STRING,{{1},{3}}) EVALUATE NATURALLEFTOUTERJOIN(TABLE1,ADDCOLUMNS(TABLE2,"L2",[L1])) A: Try this: NATURALINNERJOIN(<leftJoinTable>, <rightJoinTable>) A: Please provide more context and explain what problem you are trying to solve. In general, DAX works with extended tables, which means that it works by default with tables already denormalized according to the relationships you define in the data model. Therefore, unless there are specific needs or constraints, it is best practice to define this as a relationship in the data model, and not as a left join in a DAX formula. Physical relationships in the data model are what make DAX execution quick and clear.
stackoverflow
{ "language": "en", "length": 174, "provenance": "stackexchange_0000F.jsonl.gz:913851", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695125" }
95f696258a561bb4f7b0b9fcb10eda81b5a1df76
Stackoverflow Stackexchange Q: Remove every occurrence of special characters in QString How can I remove every occurrence of the special characters ^ and $ in a QString? I tried: QString str = "^TEST$^TEST$"; str = str.remove(QRegularExpression("[^$].")); A: You missed escaping the ^. To escape that, a \ is needed, but that also needs to be escaped because of C strings. Also, you want one or more occurrences to match, with +. This regular expression should work: [\\^$]+, see online. So it has to be: QString str = "^TEST$^TEST$"; str = str.remove(QRegularExpression("[\\^$]+")); Another possibility, as Joe P said in the comments below, is: QString str = "^TEST$^TEST$"; str = str.remove(QRegularExpression("[$^]+")); because the ^ only has a special meaning at the beginning of a character class, where you would have to escape it to get it literally, see online.
Q: Remove every occurrence of special characters in QString How can I remove every occurrence of the special characters ^ and $ in a QString? I tried: QString str = "^TEST$^TEST$"; str = str.remove(QRegularExpression("[^$].")); A: You missed escaping the ^. To escape that, a \ is needed, but that also needs to be escaped because of C strings. Also, you want one or more occurrences to match, with +. This regular expression should work: [\\^$]+, see online. So it has to be: QString str = "^TEST$^TEST$"; str = str.remove(QRegularExpression("[\\^$]+")); Another possibility, as Joe P said in the comments below, is: QString str = "^TEST$^TEST$"; str = str.remove(QRegularExpression("[$^]+")); because the ^ only has a special meaning at the beginning of a character class, where you would have to escape it to get it literally, see online. A: You can also try using a regular expression where you can remove every non-alphanumeric character: QString str = "$om<Mof*%njas"; str = str.remove(QRegExp("[^a-zA-Z\\d\\s]"));
stackoverflow
{ "language": "en", "length": 154, "provenance": "stackexchange_0000F.jsonl.gz:913863", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695198" }
3f443772721d4a9f7398181f66f5cb61a8e8976c
Stackoverflow Stackexchange Q: Will "go generate" change to the directory to run script? I'm trying to generate some code by several tools. So, I've write a file data.go as package main //go:generate awk -v OFILE=names.go -f gen_table.awk $HOME/prj/header.h //go:generate gofmt -w names.go //go:generate ./gen_index_html.sh The generated names.go won't formatted automatically, so, I added the second line to force the correct format. And I assume it runs subsequently. Will Go generate jump into this directory before run these command? Because the script gen_index_html.sh just accept work in current directory. A: Per the documentation: The generator is run in the package's source directory. So, whatever command is run with generate will be run in the same directory as the file containing the generate directive. Multiple generate directives in one file are executed one at a time in source code order.
Q: Will "go generate" change to the directory to run script? I'm trying to generate some code by several tools. So, I've write a file data.go as package main //go:generate awk -v OFILE=names.go -f gen_table.awk $HOME/prj/header.h //go:generate gofmt -w names.go //go:generate ./gen_index_html.sh The generated names.go won't formatted automatically, so, I added the second line to force the correct format. And I assume it runs subsequently. Will Go generate jump into this directory before run these command? Because the script gen_index_html.sh just accept work in current directory. A: Per the documentation: The generator is run in the package's source directory. So, whatever command is run with generate will be run in the same directory as the file containing the generate directive. Multiple generate directives in one file are executed one at a time in source code order.
stackoverflow
{ "language": "en", "length": 136, "provenance": "stackexchange_0000F.jsonl.gz:913979", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695539" }
7eed36b527145be89afca1b714f533e817602f66
Stackoverflow Stackexchange Q: How can I import a svg file to a Vue component? In vue single file component.I import a svg file like this: import A from 'a.svg' And then how can I use A in my component? A: I would just use vue-svg Install via Vue CLI 3: vue add svg Input: <img src="@/assets/logo.svg?data" /> Output: <img src="data:image/svg+xml;base64,..." /> or this is work also... import LogoImage from "@/assets/logo.svg?inline"
Q: How can I import a svg file to a Vue component? In vue single file component.I import a svg file like this: import A from 'a.svg' And then how can I use A in my component? A: I would just use vue-svg Install via Vue CLI 3: vue add svg Input: <img src="@/assets/logo.svg?data" /> Output: <img src="data:image/svg+xml;base64,..." /> or this is work also... import LogoImage from "@/assets/logo.svg?inline" A: You can also use something like this: <template> <img :src="logo"></img> </template> <script> import logo from '../assets/img/logo.svg' export default { data() { return { logo } } } </script> This doesn't require installing external modules and works out of the box. A: Based on the information you provided, what you can do is: * *Install vue-svg-loader npm install --save-dev vue-svg-loader *Configure webpack: module: { rules: [ { test: /\.svg$/, loader: 'vue-svg-loader', // `vue-svg` for webpack 1.x }, ], }, *Import the svg and use it as a regular component: <template> <nav id="menu"> <a href="..."> <SomeIcon class="icon" /> Some page </a> </nav> </template> <script> import SomeIcon from './assets/some-icon.svg'; export default { name: 'menu', components: { SomeIcon, }, }; </script> Reference: https://github.com/visualfanatic/vue-svg-loader A: I've gotten the following to work in Vue 3. Doesn't require messing with webpack or installing any third party plugins. <template> <img :src="mySVG" /> </template> <script> export default { name: 'App', data(){ return { mySVG: require('./assets/my-svg-file.svg') } } } </script> Note: I'm aware that you cannot modify certain pieces of the SVG when using it in img src, but if you simply want to use SVG files like you would any other image, this seems to be a quick and easy solution. A: If you have control over the svg file, you can just wrap it in a vue file like so: a.vue: <template> <svg>...</svg> </template> Just require the file like this afterwards: import A from 'a.vue' A: I like to use pug as a template engine (comes with many advantages) - if you do so, you will be able to easily include files like SVG's just by writing: include ../assets/some-icon.svg That's it! there is nothing else to do - I think this is an very easy and convenient way to include stuff like smaller svg's - file's easily included code is still clean! Here you can get some more information how to include PugJS into you Vue instance https://www.npmjs.com/package/vue-cli-plugin-pug A: First you need a specific loader for the component which will contain the svg my webpack.base.config.js module: { rules: [ { test: /\.svg$/, loader: 'vue-svg-loader', }, { test: /\.vue$/, use: [ { loader: "vue-loader", options: vueLoaderConfig }, { loader: "vue-svg-inline-loader", options: { /* ... */ } } ] } //.. your other rules } docs of vues-svg-inline-loader : https://www.npmjs.com/package/vue-svg-inline-loader docs of vue-svg-loader : https://www.npmjs.com/package/vue-svg-loader Next, you can initialise a vue file <template> <div> <img svg-inline class="icon" src='../pathtoyourfile/yoursvgfile.svg' alt="example" /> </div> </template> <script> import axios from 'axios' export default { name: 'logo', data () { }, } </script> <!-- Add "scoped" attribute to limit CSS to this component only --> <style scoped> #logo{ width:20%; } .rounded-card{ border-radius:15px; } //the style of your svg //look for it in your svg file .. 
//example .cls-1,.cls-7{isolation:isolate;}.cls-2{fill:url(#linear-gradient);}.cls-3{fill:url(#linear-gradient-2);};stroke-width:2px;}..cls-6{opacity:0.75;mix-blend-mode:multiply;}.cls-7{opacity:0.13;}.cls-8{fill:#ed6a29;}.cls-9{fill:#e2522b;}.cls-10{fill:#ed956e;}.cls-185{fill:#ffc933;}..cls-13{fill:#ffd56e;}.cls-14{fill:#1db4d8;}.cls-15{fill:#0f9fb7;}.cls-16{fill:#3ad4ed;}.cls-17{fill:#25bdde;}.cls-18{fill:#fff;} // </style> Your svg fils must dont contain style tag so copy paste the style in the vue style with scoped propoerty to keep it specific to this component you can just load you component in specific place of your app and use it <template> <v-app id="app"> <logo/> <router-view/> </v-app> </template> <script> import logo from './components/logo.vue' export default { name: 'App', data(){ return { //your data } }, components:{ logo //the name of the component you imported }, } } </script> <style> #app { font-family: 'Hellow', sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #070b0f; margin-top: 60px; } </style> A: If you are using Webpack you can use the require context to load SVG files from a directory. Be aware that this will put all SVG files within your Javascript files and might bloat your code though. As a simplified example I am using this svg component: data() { return { svg: '' }; }, props: { name: { type: String, required: true } } created() { this.svg = require(`resources/assets/images/svg/${this.name}.svg`); } The template simply looks like this: <template> <div :class="classes" v-html="svg"></div> </template> Normally you can't simply load SVG files like that and expect them to be used with a v-html directive since you are not getting the raw output. You have to use the Webpack raw-loader so make sure you get the raw output: { test: /\.svg$/, use: [ { loader: 'raw-loader', query: { name: 'images/svg/[name].[ext]' } }, { loader: 'svgo-loader', options: svgoConfig } ] } The example above also uses the svgo-loader since you will want to heavily optimize your SVG files if you do down this route. Hopefully this help you or anyone else out on how to solve this without diving straight into a third-party solution to fix this. A: You can always save it as a .svg file in your /static/svg/myfile.svg (using webpack) and just use it as an image file: <img src="/static/svg/myfile.svg">. No require / import / loader needed. A: +1 for @Stephan-v's solution, but here's a slightly modified approach for 2021 with Webpack 5. * *Your Vue component <template/> Option A: Single SVG file <template> <svg viewBox="0 0 24 24"> <use :xlink:href="require('@/assets/icons/icon.svg')"></use> </svg> </template> Option B: SVG Sprite (e.g. for FeatherIcons) <template> <svg viewBox="0 0 24 24"> <use :xlink:href="require('@/assets/icons/sprite.svg') + `#${iconName}`" ></use> </svg> </template> <script> export default { props: { // Dynamic property to easily switch out the SVG which will be used iconName: { type: String, default: "star", }, }, }; </script> *You may need a Webpack loader. NOTE: You may not need the Webpack Loader if you're using Vue 3 (as mentioned above) or Vite. If you're using Storybook or Nuxt, you will likely still need it. $ npm install svgo-loader -D $ yarn add svgo-loader -D webpack.config.js (or similar) module.exports = { mode: "development", entry: "./foo.js", output: {}, // ... other config ... 
module: { rules: [ ///////////// { // Webpack 5 SVG loader // https://webpack.js.org/guides/asset-modules/ // https://dev.to/smelukov/webpack-5-asset-modules-2o3h test: /\.svg$/, type: "asset", use: "svgo-loader", }, ], ///////////// }, }; *Done! A: I was able to get svgs loading inline via <div v-html="svgStringHere"></div> Where svgStringHere is a computed property that returns an svg as a string
stackoverflow
{ "language": "en", "length": 1046, "provenance": "stackexchange_0000F.jsonl.gz:913988", "question_score": "73", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695560" }
3d96f226fd46d33aa8db95a0e0081a437b661450
Stackoverflow Stackexchange Q: SessionVariable parametrised in Invoke-WebRequest I want to pass a variable to SessionVariable as an argument, so that I can use any name for the SessionObject so that I can test/send to other functions, and not happy it is being hardcoded... i.e. Pass $TCSessionVar instead of a hardcoded TCSessionVar argument. $Response= Invoke-WebRequest -Uri $TeamCityUrl ` -SessionVariable $TCSessionVar -Method ` Get -Headers @{"Authorization"="Basic $Encoded"} $ReturnObject =New-Object PSCustomObject @{ SessionVar=[Microsoft.PowerShell.Commands.WebRequestSession]$TCSessionVar Response=$Response } The error I get is: Cannot convert the "Microsoft.PowerShell.Commands.WebRequestSession" value of type "System.String" to type "Microsoft.PowerShell.Commands.WebRequestSession". --Note: I can create the WebReuestSessionObject from the response, but then SessionVariable is pointless.. $TCCookieName="TCSESSIONID" $Cookies=$Response.Headers.'set-cookie' $TCCookie= $cookies.Split(";")| Where-Object {$_ -match $TCCookieName} $CookieName=$TCCookie.Split("=")[0] $CookieValue=$TCCookie.Split("=")[1] $TCSession = New-Object Microsoft.PowerShell.Commands.WebRequestSession $cookie = New-Object System.Net.Cookie $cookie.Name = $CookieName $cookie.Value = $CookieValue $cookie.Domain=([uri]$TeamCityUrl).Host $TCSession.Cookies.Add($cookie); A: It works! $sess = New-Object Microsoft.PowerShell.Commands.WebRequestSession $IWRParams = @{ SessionVariable = Get-Variable -name sess -ValueOnly Method = 'GET' Uri = "$($Server)/login" Headers = @{'X-Redmine-API-Key'=$Key} } $Response = Invoke-WebRequest @IWRParams $this.CSRFToken = $Response.Forms.Fields['authenticity_token'] $this.Session = Get-Variable -name $sess -ValueOnly
Q: SessionVariable parametrised in Invoke-WebRequest I want to pass a variable to SessionVariable as an argument, so that I can use any name for the SessionObject so that I can test/send to other functions, and not happy it is being hardcoded... i.e. Pass $TCSessionVar instead of a hardcoded TCSessionVar argument. $Response= Invoke-WebRequest -Uri $TeamCityUrl ` -SessionVariable $TCSessionVar -Method ` Get -Headers @{"Authorization"="Basic $Encoded"} $ReturnObject =New-Object PSCustomObject @{ SessionVar=[Microsoft.PowerShell.Commands.WebRequestSession]$TCSessionVar Response=$Response } The error I get is: Cannot convert the "Microsoft.PowerShell.Commands.WebRequestSession" value of type "System.String" to type "Microsoft.PowerShell.Commands.WebRequestSession". --Note: I can create the WebReuestSessionObject from the response, but then SessionVariable is pointless.. $TCCookieName="TCSESSIONID" $Cookies=$Response.Headers.'set-cookie' $TCCookie= $cookies.Split(";")| Where-Object {$_ -match $TCCookieName} $CookieName=$TCCookie.Split("=")[0] $CookieValue=$TCCookie.Split("=")[1] $TCSession = New-Object Microsoft.PowerShell.Commands.WebRequestSession $cookie = New-Object System.Net.Cookie $cookie.Name = $CookieName $cookie.Value = $CookieValue $cookie.Domain=([uri]$TeamCityUrl).Host $TCSession.Cookies.Add($cookie); A: It works! $sess = New-Object Microsoft.PowerShell.Commands.WebRequestSession $IWRParams = @{ SessionVariable = Get-Variable -name sess -ValueOnly Method = 'GET' Uri = "$($Server)/login" Headers = @{'X-Redmine-API-Key'=$Key} } $Response = Invoke-WebRequest @IWRParams $this.CSRFToken = $Response.Forms.Fields['authenticity_token'] $this.Session = Get-Variable -name $sess -ValueOnly A: It works if I -SessionVariable (get-variable -name "TCSessionVar" -ValueOnly) and get the response as: (get-variable -name (get-variable "TCSessionVar" -valueonly) -valueonly) $Response= Invoke-WebRequest -Uri $TeamCityUrl -SessionVariable (get-variable -name "TCSessionVar" -ValueOnly) -Method Get -Headers @{"Authorization"="Basic $Encoded"} $ReturnObject =New-Object PSCustomObject @{ SessionVar=[Microsoft.PowerShell.Commands.WebRequestSession](get-variable -name (get-variable "TCSessionVar" -valueonly) -valueonly) Response=$Response }
stackoverflow
{ "language": "en", "length": 212, "provenance": "stackexchange_0000F.jsonl.gz:914010", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695645" }
0b6f6f877cefe6d512c278ffd9f8c268b85d84c4
Stackoverflow Stackexchange Q: Failed to acquire termination assertion when installing placeholder. What does it mean? I am getting this error in the Xcode 9 beta. What does it mean? Attaching a snapshot of the same. A: The problem seems to be that Xcode 9 Beta does not (or cannot) close the iPhone app if it is open on the simulator when the app is built and run (which it used to do). Simply closing the app on the iPhone simulator (shift-command-h) seems to do the trick. Another option is to restart the simulator. Hopefully Apple will fix this issue soon.
Q: Failed to acquire termination assertion when installing placeholder. What does it mean? I am getting this error in Xcode 9 beta . What does it mean? Attaching snapshot for the same. A: The problem seems to be that Xcode 9 Beta does/can not close the iPhone app if it is open on the simulator when the app is built and run (which it used to do). Simply closing the app on the iPhone simulator (shift-command-h) seems to do the trick. Another option is to restart the simulator. Hopefully Apple will fix this issue soon.
stackoverflow
{ "language": "en", "length": 95, "provenance": "stackexchange_0000F.jsonl.gz:914019", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695667" }
695f0e4be4f2802d38fbb7b5a765481c2a068360
Stackoverflow Stackexchange Q: A photo to vector 'scribbled line' algorithm I would like to build an algorithm that turns a picture into a scribble line that resembles the original picture. An example would be this: into this: The second image is drawn by hand with one stroke. I am not sure where to begin experiments. My intuition would be to desaturate the image and simulate "drawing" a line with some quasi-randomness that draw more lines in places where the image is darker. Ideally what I would like to get is a vector image in the end with one path. Thank you Disclaimer: I have already asked this question on dsp.stackexchange 3, but there was no answer. I am not sure if that was the right place to ask. I realise this is not purely programming question, but maybe someone can point me in the right direction? A: Try this ... not mine, but pretty neat. The 'files' tab of the code page contains the original image file. Processing is a fun language for doing this kind of stuff :-) https://www.openprocessing.org/sketch/486307
Q: A photo to vector 'scribbled line' algorithm I would like to build an algorithm that turns a picture into a scribble line that resembles the original picture. An example would be this: into this: The second image is drawn by hand with one stroke. I am not sure where to begin experiments. My intuition would be to desaturate the image and simulate "drawing" a line with some quasi-randomness that draw more lines in places where the image is darker. Ideally what I would like to get is a vector image in the end with one path. Thank you Disclaimer: I have already asked this question on dsp.stackexchange 3, but there was no answer. I am not sure if that was the right place to ask. I realise this is not purely programming question, but maybe someone can point me in the right direction? A: Try this ... not mine, but pretty neat. The 'files' tab of the code page contains the original image file. Processing is a fun language for doing this kind of stuff :-) https://www.openprocessing.org/sketch/486307 A: Interesting option (less arty than your example =^) : http://cgv.cs.nthu.edu.tw/projects/Recreational_Graphics/CircularScribbleArtsPoster https://cgv.cs.nthu.edu.tw/publications
stackoverflow
{ "language": "en", "length": 190, "provenance": "stackexchange_0000F.jsonl.gz:914041", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695766" }
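A rough Python sketch of the idea discussed above (it is not the linked Processing sketch): sample points more densely where the photo is dark, join them with a greedy nearest-neighbour path, and emit a single SVG polyline. The file name and point count are placeholders; only standard Pillow/NumPy calls are used, and the greedy ordering is deliberately naive (quadratic, so keep the point count modest).
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float) / 255.0
darkness = 1.0 - img                      # 1 = black, 0 = white
h, w = darkness.shape

rng = np.random.default_rng(0)
n_points = 2000
pts = []
# rejection-sample pixel coordinates with probability proportional to darkness
while len(pts) < n_points:
    y, x = rng.integers(0, h), rng.integers(0, w)
    if rng.random() < darkness[y, x]:
        pts.append((x, y))
pts = np.array(pts, dtype=float)

# greedy nearest-neighbour ordering -> one continuous stroke
order = [0]
remaining = set(range(1, len(pts)))
while remaining:
    last = pts[order[-1]]
    nxt = min(remaining, key=lambda i: np.hypot(*(pts[i] - last)))
    order.append(nxt)
    remaining.remove(nxt)
path = pts[order]

points_attr = " ".join(f"{x:.1f},{y:.1f}" for x, y in path)
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">'
       f'<polyline points="{points_attr}" fill="none" stroke="black" stroke-width="0.6"/></svg>')
with open("scribble.svg", "w") as f:
    f.write(svg)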
241f9727d9c6df4628fe79c3f7a7494a49df6ddc
Stackoverflow Stackexchange Q: Cannot resolve the collation conflict between "Latin1_General_BIN" and "Latin1_General_CI_AS" in the equal to operation I am getting the following error Cannot resolve the collation conflict between "Latin1_General_BIN" and "Latin1_General_CI_AS" in the equal to operation. Code SELECT @PARTS = SUM(Llines_1.[qty]) from pick RIGHT OUTER JOIN op AS Llines_1 ON pick.picknote = Llines_1.[order_no] WHERE (pick.batchid = @batchid) AND (product = @product) group by product Order By product I have tried using COLLATE SQL_Latin1_General_CP1_CI_AS before the from but still get the same error. SELECT @PARTS = SUM(Llines_1.[qty]) COLLATE SQL_Latin1_General_CP1_CI_AS from pick RIGHT OUTER JOIN op AS Llines_1 ON pick.picknote = Llines_1.[order_no] WHERE (pick.batchid = @batchid) AND (product = @product) group by product Order By product A: I assume that the collations of picknote and order_no are different. Try this: SELECT @PARTS = SUM(Llines_1.[qty]) from pick RIGHT OUTER JOIN op AS Llines_1 ON pick.picknote = Llines_1.[order_no] COLLATE SQL_Latin1_General_CP1_CI_AS WHERE (pick.batchid = @batchid) AND (product = @product) group by product Order By product
Q: Cannot resolve the collation conflict between "Latin1_General_BIN" and "Latin1_General_CI_AS" in the equal to operation I am getting the following error Cannot resolve the collation conflict between "Latin1_General_BIN" and "Latin1_General_CI_AS" in the equal to operation. Code SELECT @PARTS = SUM(Llines_1.[qty]) from pick RIGHT OUTER JOIN op AS Llines_1 ON pick.picknote = Llines_1.[order_no] WHERE (pick.batchid = @batchid) AND (product = @product) group by product Order By product I have tried using COLLATE SQL_Latin1_General_CP1_CI_AS before the from but still get the same error. SELECT @PARTS = SUM(Llines_1.[qty]) COLLATE SQL_Latin1_General_CP1_CI_AS from pick RIGHT OUTER JOIN op AS Llines_1 ON pick.picknote = Llines_1.[order_no] WHERE (pick.batchid = @batchid) AND (product = @product) group by product Order By product A: I assume that the collations of picknote and order_no are different. Try this: SELECT @PARTS = SUM(Llines_1.[qty]) from pick RIGHT OUTER JOIN op AS Llines_1 ON pick.picknote = Llines_1.[order_no] COLLATE SQL_Latin1_General_CP1_CI_AS WHERE (pick.batchid = @batchid) AND (product = @product) group by product Order By product
stackoverflow
{ "language": "en", "length": 159, "provenance": "stackexchange_0000F.jsonl.gz:914097", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44695927" }
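A hedged variant of the accepted fix above: if you would rather not hard-code a specific collation, forcing both sides of the join to the database default also resolves the conflict. Table, column and variable names are the question's own.
SELECT @PARTS = SUM(Llines_1.[qty])
FROM pick
RIGHT OUTER JOIN op AS Llines_1
    ON pick.picknote COLLATE DATABASE_DEFAULT
     = Llines_1.[order_no] COLLATE DATABASE_DEFAULT
WHERE pick.batchid = @batchid
  AND product = @product
GROUP BY product;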
255f3d008d53e6a87d72c7256e79e7c9dad060dd
Stackoverflow Stackexchange Q: How to compare a string password with a Laravel encrypted password I had first created a web application in Laravel. Now I am working on its mobile application using the Ionic framework. While working with Laravel, Laravel converts the password to its hashed (encrypted) form. Now, while integrating the API in Ionic with Laravel for the login functionality, I am facing an issue: how can I compare the password entered through the mobile app with the encrypted password in the Laravel table? If it were the web app it would work fine, but for the API integration I am facing this issue. Please help me. A: Hash::check is the best method to do it. if(Hash::check('plain password', 'encrypted password')){ //enter your code }
Q: How to compare a string password with a Laravel encrypted password I had first created a web application in Laravel. Now I am working on its mobile application using the Ionic framework. While working with Laravel, Laravel converts the password to its hashed (encrypted) form. Now, while integrating the API in Ionic with Laravel for the login functionality, I am facing an issue: how can I compare the password entered through the mobile app with the encrypted password in the Laravel table? If it were the web app it would work fine, but for the API integration I am facing this issue. Please help me. A: Hash::check is the best method to do it. if(Hash::check('plain password', 'encrypted password')){ //enter your code } A: 2 ways: 1. $hashedPassword = User::find(1)->password; if (Hash::check('plain-text-password', $hashedPassword)) { // The passwords match... } 2. $hashedPassword = User::find(1)->password; if (Hash::make('plain-text-password') === $hashedPassword) { // The passwords match... } However, as the official documentation says, and as a pro-DRY (Don't Repeat Yourself) approach, if you are using the LoginController included with Laravel, you will probably not need to use these 2 ways directly, as it automatically does the first way already. Ref: https://laravel.com/docs/5.4/hashing IMPORTANT UPDATE The second way posted here does not work because Laravel uses bcrypt, which generates a random salt on each hash, making each hash different from the original hashed password in the database. Use the first way instead. You may also read Where are laravel password salts stored?.
stackoverflow
{ "language": "en", "length": 244, "provenance": "stackexchange_0000F.jsonl.gz:914158", "question_score": "12", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696084" }
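A hypothetical API login action sketching how the accepted answer above is typically used from a mobile client. The route, controller, model and field names are assumptions; Hash::check and the bcrypt hash column are standard Laravel.
// Illustrative only; assumes use Illuminate\Http\Request, App\User and
// Illuminate\Support\Facades\Hash are imported in the controller.
public function login(Request $request)
{
    $user = User::where('email', $request->input('email'))->first();

    if ($user && Hash::check($request->input('password'), $user->password)) {
        return response()->json(['status' => 'ok', 'user_id' => $user->id]);
    }

    return response()->json(['status' => 'invalid credentials'], 401);
}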
11762a4d436245ab8400c213b401ba5f6ff74a13
Stackoverflow Stackexchange Q: How to copy files and folders which start with dot in their name in gulp I am trying to copy files from one folder to another. My source folder has following structure + source + file1.txt + .bin + file2.txt + .ignore Now I have my gulp task as follows gulp.task("copy", function() { gulp.src("./source/**/*", { base: "source" }) .pipe(gulp.dest("./dest/")); }); This task skips copying of .bin folder and .ignore file. How do I update my task to copy the .bin folder and .ignore file as well ? A: Can you try with "dot: true"? gulp.src("./source/**/*", { dot: true, base: "source" }) .pipe(gulp.dest("./dest/")) If you want to know more https://github.com/isaacs/minimatch#options
Q: How to copy files and folders which start with dot in their name in gulp I am trying to copy files from one folder to another. My source folder has following structure + source + file1.txt + .bin + file2.txt + .ignore Now I have my gulp task as follows gulp.task("copy", function() { gulp.src("./source/**/*", { base: "source" }) .pipe(gulp.dest("./dest/")); }); This task skips copying of .bin folder and .ignore file. How do I update my task to copy the .bin folder and .ignore file as well ? A: Can you try with "dot: true"? gulp.src("./source/**/*", { dot: true, base: "source" }) .pipe(gulp.dest("./dest/")) If you want to know more https://github.com/isaacs/minimatch#options
stackoverflow
{ "language": "en", "length": 110, "provenance": "stackexchange_0000F.jsonl.gz:914180", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696155" }
7827970b63ef811571eacc545386bd735d6799d7
Stackoverflow Stackexchange Q: to_json not working with selectExpr in spark I am reading a databricks blog link and I found a problem with the built-in function to_json. In the code below, from that tutorial, it returns the error: org.apache.spark.sql.AnalysisException: Undefined function: 'to_json'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'. Does this mean that this usage in the tutorial is wrong, and that no such function can be used in selectExpr? Could I do something like registering this to_json function in the default database? val deviceAlertQuery = notifydevicesDS .selectExpr("CAST(dcId AS STRING) AS key", "to_json(struct(*)) AS value") .writeStream .format("kafka") .option("kafka.bootstrap.servers", "host1:port1,host2:port2") .option("topic", "device_alerts") .start() A: You need to import the to_json function as import org.apache.spark.sql.functions.to_json This should work, rather than using selectExpr: data.withColumn("key", $"dcId".cast("string")) .select(to_json(struct(data.columns.head, data.columns.tail:_*)).as("value")).show() You must also use Spark 2.x. I hope this helps to solve your problem.
Q: to_json not working with selectExpr in spark I am reading a databricks blog link and I found a problem with the built-in function to_json. In the code below, from that tutorial, it returns the error: org.apache.spark.sql.AnalysisException: Undefined function: 'to_json'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'. Does this mean that this usage in the tutorial is wrong, and that no such function can be used in selectExpr? Could I do something like registering this to_json function in the default database? val deviceAlertQuery = notifydevicesDS .selectExpr("CAST(dcId AS STRING) AS key", "to_json(struct(*)) AS value") .writeStream .format("kafka") .option("kafka.bootstrap.servers", "host1:port1,host2:port2") .option("topic", "device_alerts") .start() A: You need to import the to_json function as import org.apache.spark.sql.functions.to_json This should work, rather than using selectExpr: data.withColumn("key", $"dcId".cast("string")) .select(to_json(struct(data.columns.head, data.columns.tail:_*)).as("value")).show() You must also use Spark 2.x. I hope this helps to solve your problem. A: Based on information I got from the mailing list, these functions were not added to SQL until Spark 2.2.0. Here is the commit link: commit. Hope this will help. Thanks to Hyukjin Kwon and Burak Yavuz.
stackoverflow
{ "language": "en", "length": 177, "provenance": "stackexchange_0000F.jsonl.gz:914193", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696195" }
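A sketch of the same streaming Kafka sink written with the imported DataFrame functions instead of selectExpr, which side-steps the missing SQL registration on Spark versions before 2.2. It reuses the question's notifydevicesDS, column and option values and is otherwise untested.
import org.apache.spark.sql.functions.{col, struct, to_json}

val deviceAlertQuery = notifydevicesDS
  .select(
    col("dcId").cast("string").as("key"),
    to_json(struct(notifydevicesDS.columns.map(col): _*)).as("value"))
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("topic", "device_alerts")
  .start()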
e835965892e0f2c213c17561c24fc2546b1cff0c
Stackoverflow Stackexchange Q: Is it possible to directly call docker run from AWS lambda I have a Java standalone application which I have dockerized. I want to run this docker everytime an object is put into S3 storage. On way is to do it via AWS batch which I am trying to avoid. Is there a direct and easy way to call docker run from a lambda? A: Yes. It is possible to run containers out Docker images stored in Docker Hub within AWS Lambda using SCAR. For example, you can create a Lambda function to execute a container out of the ubuntu:16.04 image in Docker Hub as follows: scar init ubuntu:16.04 And then you can run a command or a shell-script within that container upon each invocation of the function: scar run scar-ubuntu-16-04 whoami SCAR: Request Id: ed5e9f09-ce0c-11e7-8375-6fc6859242f0 Log group name: /aws/lambda/scar-ubuntu-16-04 Log stream name: 2017/11/20/[$LATEST]7e53ed01e54a451494832e21ea933fca --------------------------------------------------------------------------- sbx_user1059 You can use your own Docker images stored in Docker Hub. Some limitations apply but it can be effectively used to run generic applications on AWS Lambda. It also features a programming model for file-processing event-driven applications. It uses uDocker under the hood.
Q: Is it possible to directly call docker run from AWS lambda I have a Java standalone application which I have dockerized. I want to run this docker everytime an object is put into S3 storage. On way is to do it via AWS batch which I am trying to avoid. Is there a direct and easy way to call docker run from a lambda? A: Yes. It is possible to run containers out Docker images stored in Docker Hub within AWS Lambda using SCAR. For example, you can create a Lambda function to execute a container out of the ubuntu:16.04 image in Docker Hub as follows: scar init ubuntu:16.04 And then you can run a command or a shell-script within that container upon each invocation of the function: scar run scar-ubuntu-16-04 whoami SCAR: Request Id: ed5e9f09-ce0c-11e7-8375-6fc6859242f0 Log group name: /aws/lambda/scar-ubuntu-16-04 Log stream name: 2017/11/20/[$LATEST]7e53ed01e54a451494832e21ea933fca --------------------------------------------------------------------------- sbx_user1059 You can use your own Docker images stored in Docker Hub. Some limitations apply but it can be effectively used to run generic applications on AWS Lambda. It also features a programming model for file-processing event-driven applications. It uses uDocker under the hood. A: Yes and no. What you can't do is execute docker run to run a container within the context of the Lambda call. But you can trigger a task on ECS to be executed. For this to work, you need to have a cluster set up on ECS, which means you need to pay for at least one EC2 instance. Because of that, it might be better to not use Docker, but I know too little about your application to judge that. There are a lot of articles out there how to connect S3, Lambda and ECS. Here is a pretty in-depth article by Amazon that you might be interested in: https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/ If you are looking for code, this repository implements what is discussed in the above article: https://github.com/awslabs/lambda-ecs-worker-pattern Here is a snippet we use in our Lambda function (Python) to run a Docker container from Lambda: result = boto3.client('ecs').run_task( cluster=cluster, taskDefinition=task_definition, overrides=overrides, count=1, startedBy='lambda' ) We pass in the name of the cluster on which we want to run the container, as well as the task definition that defines which container to run, the resources it needs and so on. overrides is a dictionary/map with settings that you want to override in the task definition, which we use to specify the command we want to run (i.e. the argument to docker run). This enables us to use the same Lambda function to run a lot of different jobs on ECS. Hope that points you in the right direction. A: Yes try Udocker. Udocker is a simple tool written in Python, it has a minimal set of dependencies so that can be executed in a wide range of Linux systems. udocker does not make use of docker nor requires its installation. udocker "executes" the containers by simply providing a chroot like environment over the extracted container. The current implementation uses PRoot to mimic chroot without requiring privileges. Examples Pull from docker hub and list the pulled images. udocker pull fedora Create the container from a pulled image and run it. udocker create --name=myfed fedora udocker run myfed cat /etc/redhat-release And also its good to check Hackernoon. Because: In Lambda, the only place you are allowed to write is /tmp. But udocker will attempt to write to the homedir by default. 
And other stuff.
stackoverflow
{ "language": "en", "length": 573, "provenance": "stackexchange_0000F.jsonl.gz:914220", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696264" }
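A hypothetical Lambda handler expanding the run_task snippet above into the S3-triggered flow the question describes. The cluster, task-definition and container names are placeholders; the boto3 call signature and the S3 event shape are standard.
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Start one task from the dockerized Java app's task definition and tell it
    # which object to process by overriding the container command.
    return ecs.run_task(
        cluster="my-cluster",                 # placeholder
        taskDefinition="my-java-app",         # placeholder
        count=1,
        startedBy="s3-upload-lambda",
        overrides={
            "containerOverrides": [{
                "name": "my-java-app",        # container name inside the task definition
                "command": ["java", "-jar", "app.jar", f"s3://{bucket}/{key}"],
            }]
        },
    )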
086479f21dd70f1368b6452e14842b8791ffa311
Stackoverflow Stackexchange Q: How to uninstall the latest update for Visual studio 2017? After updating to the 21st of June update for Visual studio 2017, I'm no longer able to build my project. I'm getting a BadImageException, the signature is incorrect on Microsoft.CodeAnalysis. I've tried to clean solution, reboot computer, repair visual studio 2017 (and resharper) (in that order). I can build in release but I can not build in debug. Obviously, this is a problem for me. How can I undo the latest visual studio 2017 update? In visual studio 2015 you used to be able to simply undo the updates but in visual studio 2017 this seems to be obscured. A: A bit of a dead question but I resolved this by simply uninstalling the entire installation and reinstalling it (including the update) and that resolved it for me.
Q: How to uninstall the latest update for Visual studio 2017? After updating to the 21st of June update for Visual studio 2017, I'm no longer able to build my project. I'm getting a BadImageException, the signature is incorrect on Microsoft.CodeAnalysis. I've tried to clean solution, reboot computer, repair visual studio 2017 (and resharper) (in that order). I can build in release but I can not build in debug. Obviously, this is a problem for me. How can I undo the latest visual studio 2017 update? In visual studio 2015 you used to be able to simply undo the updates but in visual studio 2017 this seems to be obscured. A: A bit of a dead question but I resolved this by simply uninstalling the entire installation and reinstalling it (including the update) and that resolved it for me.
stackoverflow
{ "language": "en", "length": 139, "provenance": "stackexchange_0000F.jsonl.gz:914233", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696323" }
3634327e92095b561b4d4db9f4ab6a91192e73cf
Stackoverflow Stackexchange Q: Execute inline JavaScript in Scrapy response I am trying to log into a website with Scrapy, but the response received is an HTML document containing only inline JavaScript. The JS redirects to the page I want to scrape data from. But Scrapy does not execute the JS and therefore doesn't route to the page I want it to. I use the following code to submit the login form required: def parse(self, response): request_id = response.css('input[name="request_id"]::attr(value)').extract_first() data = { 'userid_placeholder': self.login_user, 'foilautofill': '', 'password': self.login_pass, 'request_id': request_id, 'username': self.login_user[1:] } yield scrapy.FormRequest(url='https://www1.up.ac.za/oam/server/auth_cred_submit', formdata=data, callback=self.print_p) The print_p callback function is as follows: def print_p(self, response): print(response.text) I have looked at scrapy-splash but I could not find a way to execute the JS in the response with scrapy-splash. A: I'd suggest using Splash as a rendering service. Personally, I found it more reliable than Selenium. Using scripts, you can instruct it to interact with the page.
Q: Execute inline JavaScript in Scrapy response I am trying to log into a website with Scrapy, but the response received is an HTML document containing only inline JavaScript. The JS redirects to the page I want to scrape data from. But Scrapy does not execute the JS and therefore doesn't route to the page I want it to. I use the following code to submit the login form required: def parse(self, response): request_id = response.css('input[name="request_id"]::attr(value)').extract_first() data = { 'userid_placeholder': self.login_user, 'foilautofill': '', 'password': self.login_pass, 'request_id': request_id, 'username': self.login_user[1:] } yield scrapy.FormRequest(url='https://www1.up.ac.za/oam/server/auth_cred_submit', formdata=data, callback=self.print_p) The print_p callback function is as follows: def print_p(self, response): print(response.text) I have looked at scrapy-splash but I could not find a way to execute the JS in the response with scrapy-splash. A: I'd suggest using Splash as a rendering service. Personally, I found it more reliable than Selenium. Using scripts, you can instruct it to interact with the page. A: Probably selenium can help you pass this JS. If you haven't checked it yet you can use some examples like this. If you'll have luck to reach it then you can get page url with: self.driver.current_url And scrape it after.
stackoverflow
{ "language": "en", "length": 195, "provenance": "stackexchange_0000F.jsonl.gz:914253", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696376" }
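A hedged sketch of the Selenium route suggested in the second answer above: let a real browser submit the login form so the inline JavaScript redirect actually executes, then read driver.current_url. The URL and field names here are placeholders, not the site's real ones.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://example.org/login")                        # placeholder login page

driver.find_element(By.NAME, "username").send_keys("my-user")  # placeholder field names
driver.find_element(By.NAME, "password").send_keys("my-pass")
driver.find_element(By.NAME, "password").submit()

# The browser has now run the inline JavaScript and followed the redirect.
target_url = driver.current_url
print(target_url)   # hand this URL (and driver.get_cookies()) back to the spider
driver.quit()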
ac0a64f672119fa635da3cab58a19fd03ccff9fb
Stackoverflow Stackexchange Q: How to refer to yourself in the anonymous class? I have the following code in Kotlin: handler.postDelayed(object : Runnable { override fun run() { Timber.i("run post msg") handler.postDelayed(this, AppPrefs.SEARCH_DELAY) } },AppPrefs.SOCKET_INTERVAL) As you can see, it's the simple standard way to create a delayed task (with the Runnable class). The value this refers to the anonymous object implementing Runnable, and it compiles and works fine. But when I use a lambda for this: handler.postDelayed({ Timber.i("run post msg") handler.postDelayed(this, AppPrefs.SOCKET_INTERVAL) },AppPrefs.SOCKET_INTERVAL) the value this refers to the outer class. How can I reference the inner anonymous class from itself? A: You cannot do this. A similar question was asked on Kotlin's forum and yole (one of the creators of the language) said this: this in a lambda refers to the instance of the containing class, if any. A lambda is conceptually a function, not a class, so there is no such thing as a lambda instance to which this could refer. The fact that a lambda can be converted into an instance of a SAM interface does not change this. Having this in a lambda mean different things depending on whether the lambda gets SAM-converted would be extremely confusing.
Q: How to refer to yourself in the anonymous class? I have the following code in Kotlin: handler.postDelayed(object : Runnable { override fun run() { Timber.i("run post msg") handler.postDelayed(this, AppPrefs.SEARCH_DELAY) } },AppPrefs.SOCKET_INTERVAL) As you can see, it's the simple standard way to create a delayed task (with the Runnable class). The value this refers to the anonymous object implementing Runnable, and it compiles and works fine. But when I use a lambda for this: handler.postDelayed({ Timber.i("run post msg") handler.postDelayed(this, AppPrefs.SOCKET_INTERVAL) },AppPrefs.SOCKET_INTERVAL) the value this refers to the outer class. How can I reference the inner anonymous class from itself? A: You cannot do this. A similar question was asked on Kotlin's forum and yole (one of the creators of the language) said this: this in a lambda refers to the instance of the containing class, if any. A lambda is conceptually a function, not a class, so there is no such thing as a lambda instance to which this could refer. The fact that a lambda can be converted into an instance of a SAM interface does not change this. Having this in a lambda mean different things depending on whether the lambda gets SAM-converted would be extremely confusing.
stackoverflow
{ "language": "en", "length": 186, "provenance": "stackexchange_0000F.jsonl.gz:914258", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696398" }
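A common workaround sketch (reusing the question's handler, Timber and AppPrefs names): since this inside a lambda can never mean the lambda, store the Runnable in a variable first and reference that variable from inside it; the SAM-constructor form keeps it concise.
var searchTask: Runnable? = null
searchTask = Runnable {
    Timber.i("run post msg")
    // the lambda captures the variable, so it can reschedule itself
    handler.postDelayed(searchTask!!, AppPrefs.SEARCH_DELAY)
}
handler.postDelayed(searchTask, AppPrefs.SOCKET_INTERVAL)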
406e1c8ffed4299b3aa85e2421ca3d5f585688cf
Stackoverflow Stackexchange Q: Where to put <style> in an SVG: in <defs> or directly under the root? A: It doesn't matter. Neither approach is "more standard". <style> elements are not renderable anyway, so there is no real need to put them in the <defs> section, as commented by Paul LeBeau. After reading this article about style on MDN, which shows an example of a <style> placed simply under the SVG root, I am more convinced it is correct to put <style> there rather than under <defs>. Also, since the <defs> tag is indeed for reusable graphical elements that should be rendered, and <style> is not a renderable element, there's no point keeping it there.
Q: Where to put <style> in an SVG: in <defs> or directly under the root? A: It doesn't matter. Neither approach is "more standard". <style> elements are not renderable anyway, so there is no real need to put them in the <defs> section, as commented by Paul LeBeau. After reading this article about style on MDN, which shows an example of a <style> placed simply under the SVG root, I am more convinced it is correct to put <style> there rather than under <defs>. Also, since the <defs> tag is indeed for reusable graphical elements that should be rendered, and <style> is not a renderable element, there's no point keeping it there. A: Graphical elements defined in <defs> are not rendered directly and will be rendered only when referenced with <use>. Hence it is always a good practice to use <defs> if the graphical object is defined for later use. It also increases the readability of the code. More Information
stackoverflow
{ "language": "en", "length": 145, "provenance": "stackexchange_0000F.jsonl.gz:914261", "question_score": "23", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696405" }
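A minimal illustration of the conclusion above: the <style> sits directly under the SVG root, while <defs> holds only a reusable rendered element (a gradient) that is then referenced from the shape.
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <style>
    .frame { stroke: black; stroke-width: 2; }
  </style>
  <defs>
    <linearGradient id="grad">
      <stop offset="0" stop-color="gold"/>
      <stop offset="1" stop-color="tomato"/>
    </linearGradient>
  </defs>
  <rect class="frame" x="10" y="10" width="80" height="80" fill="url(#grad)"/>
</svg>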
316fc8a5e618bad579f9ecd19bffb6ef192c7823
Stackoverflow Stackexchange Q: How to get the package name of a function in R? I'm debugging some code and I think I might have twice the same function in 2 packages. I want to output the package name of the function as it would be executed by the R console. Examples : * *function_package_name(print) # --> base *function_package_name(select) # --> dplyr I cannot simply use ?select because I think it links to the choice of 2 packages: dplyr and MASS. How can I know which select function I'm using ? NB : this is NOT a duplicate of list all functions on CRAN, nor of Find the package names from a function name in R, so sos::findFn() is not an acceptable answer ! I'm not looking for potential other functions named like this one, I'm looking for the package name of the one I'm currently using ! A: Try the function where from the pryr package > library(dplyr) > pryr::where("select") <environment: package:dplyr> attr(,"name") [1] "package:dplyr" attr(,"path") [1] "/home/francois/.R/library/dplyr" > library(MASS) > pryr::where("select") <environment: package:MASS> attr(,"name") [1] "package:MASS" attr(,"path") [1] "/home/francois/.R/library/MASS"
Q: How to get the package name of a function in R? I'm debugging some code and I think I might have twice the same function in 2 packages. I want to output the package name of the function as it would be executed by the R console. Examples : * *function_package_name(print) # --> base *function_package_name(select) # --> dplyr I cannot simply use ?select because I think it links to the choice of 2 packages: dplyr and MASS. How can I know which select function I'm using ? NB : this is NOT a duplicate of list all functions on CRAN, nor of Find the package names from a function name in R, so sos::findFn() is not an acceptable answer ! I'm not looking for potential other functions named like this one, I'm looking for the package name of the one I'm currently using ! A: Try the function where from the pryr package > library(dplyr) > pryr::where("select") <environment: package:dplyr> attr(,"name") [1] "package:dplyr" attr(,"path") [1] "/home/francois/.R/library/dplyr" > library(MASS) > pryr::where("select") <environment: package:MASS> attr(,"name") [1] "package:MASS" attr(,"path") [1] "/home/francois/.R/library/MASS" A: This is not a better answer than jkt's, but I was puttering anyway because AK88's findAllFun intrigued me as a way of finding all of the functions on that may be loaded in the search path (though it should be noted that AK88's function seems to return all of the packages in the library that have the function name in its namespace). Anyway, here is a function that will return a vector of package names that contained a function of a desired name. Most importantly, it orders the package names in the order of the search path. That means that if you just type function_name, the first package in which the function will be encountered is the first package in the result. locate_function <- function(fun, find_in = c("searchpath", "library")){ find_in <- match.arg(arg = find_in, choices = c("searchpath", "library")) # Find all libraries that have a function of this name. h <- help.search(pattern = paste0("^", fun, "$"), agrep = FALSE) h <- h$matches[,"Package"] if (find_in == "library") return(h) # List packages in the search path sp <- search() sp <- sp[grepl("^package", sp)] sp <- sub("^package[:]", "", sp) # List the packages on the search path with a function named `fun` # in the order they appear on the search path. h <- h[h %in% sp] h[order(match(h, sp, NULL))] } ## SAMPLE OUTPUT library(dplyr) library(MASS) locate_function("select") # [1] "MASS" "dplyr" ## Unload the dplyr package, then reload it so. ## This makes it appear BEFORE MASS in the search path. detach("package:dplyr", unload = TRUE) library(dplyr) locate_function("select") # [1] "dplyr" "MASS" It also includes an option that lets you see all of the packages (even unloaded packages) that contain a function of the desired name. locate_function("select", find_in = "library") # [1] "dplyr" "raster" "MASS" A: Perhaps even most convenient, if you are just after the package name: environmentName(environment(select)) The advantage is that this produces a string rather than an environment object.
stackoverflow
{ "language": "en", "length": 497, "provenance": "stackexchange_0000F.jsonl.gz:914272", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696431" }
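A tiny wrapper sketch in the spirit of the question's hypothetical function_package_name(), built from the last answer above; it reports the namespace the function object actually comes from (use pryr::where, as in the first answer, if you specifically need the binding found first on the search path).
function_package_name <- function(f) {
  environmentName(environment(f))
}

function_package_name(print)          # "base"
function_package_name(dplyr::select)  # "dplyr"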
031aded778e06fb662c66f7ee44bf30bd2c89d0d
Stackoverflow Stackexchange Q: Semantic UI browser compatibility I have seen the compatibility list of this UI here: https://github.com/Semantic-Org/Semantic-UI/blob/master/README.md but I cannot understand which Firefox and Chrome versions count as the "last 2 versions". Where can I find the version numbers of these 2 browsers to have a reliable list of compatibility? Thanks FA
Q: Semantic UI browser compatibility I have seen the compatibility list of this UI here: https://github.com/Semantic-Org/Semantic-UI/blob/master/README.md but I cannot understand which Firefox and Chrome versions count as the "last 2 versions". Where can I find the version numbers of these 2 browsers to have a reliable list of compatibility? Thanks FA
stackoverflow
{ "language": "en", "length": 49, "provenance": "stackexchange_0000F.jsonl.gz:914277", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696449" }
5aa07a6b2b119491c925f4d5eac74330a81cd7a8
Stackoverflow Stackexchange Q: Angular shared module via npm link I want to create a project with a shared module that contains some generic things and some components which are shared all over my apps (e.g. header). Later this module will be added as dependency in package.json and should be installed via Nexus. But during development I want to npm link this from my filesystem, because I don´t want to go the "nexus way" every time I change something in the shared module. My questions are: * *Can I use angular decorators (e.g. @Componennt or @NgModule) in this shared module? *How to import all this shared module stuff into my actual project? A: Question 1: yes but you have to build the library locally for each change. Question 2: * *Create a folder in node_modules with the same name of your library as in your package.json *Link this folder to the folder containing the locally built library (npm link) *Use regular import statement in the typescript file The problem at this point is that the whole thing does not work reliably. Therefore it is better to use a monorepo for this kind of problem. A good tool for this is: https://nx.dev/angular/getting-started/why-nx
Q: Angular shared module via npm link I want to create a project with a shared module that contains some generic things and some components which are shared all over my apps (e.g. header). Later this module will be added as dependency in package.json and should be installed via Nexus. But during development I want to npm link this from my filesystem, because I don´t want to go the "nexus way" every time I change something in the shared module. My questions are: * *Can I use angular decorators (e.g. @Componennt or @NgModule) in this shared module? *How to import all this shared module stuff into my actual project? A: Question 1: yes but you have to build the library locally for each change. Question 2: * *Create a folder in node_modules with the same name of your library as in your package.json *Link this folder to the folder containing the locally built library (npm link) *Use regular import statement in the typescript file The problem at this point is that the whole thing does not work reliably. Therefore it is better to use a monorepo for this kind of problem. A good tool for this is: https://nx.dev/angular/getting-started/why-nx
stackoverflow
{ "language": "en", "length": 198, "provenance": "stackexchange_0000F.jsonl.gz:914334", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696635" }
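A hedged sketch of the linking steps the answer above describes; the package name my-shared-lib, the paths and the build command are placeholders for whatever your shared module's package.json declares.
# in the shared module's project: build it, then register the output as a global link
cd ~/dev/my-shared-lib
npm run build        # or whatever produces your publishable output
cd dist && npm link  # run npm link from the folder that holds the library's publishable package.json

# in the consuming app: point node_modules/my-shared-lib at that link
cd ~/dev/my-app
npm link my-shared-lib

Then, in the app's TypeScript, the shared module is imported like any other dependency (export name is hypothetical):
import { SharedModule } from 'my-shared-lib';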
7ba6d431585ff6f92a190ec978c897331ce42159
Stackoverflow Stackexchange Q: Pandas: Converting Columns to Rows based on ID I am new to pandas, I have the following dataframe: df = pd.DataFrame([[1, 'name', 'peter'], [1, 'age', 23], [1, 'height', '185cm']], columns=['id', 'column','value']) id column value 0 1 name peter 1 1 age 23 2 1 height 185cm I need to create a single row for each ID. Like so: id name age height 0 1 peter 23 185cm Any help is greatly appreciated, thank you. A: You can use pivot_table with aggregate join: df = pd.DataFrame([[1, 'name', 'peter'], [1, 'age', 23], [1, 'height', '185cm'], [1, 'age', 25]], columns=['id', 'column','value']) print (df) id column value 0 1 name peter 1 1 age 23 2 1 height 185cm 3 1 age 25 df1 = df.astype(str).pivot_table(index="id",columns="column",values="value",aggfunc=','.join) print (df1) column age height name id 1 23,25 185cm peter Another solution with groupby + apply join and unstack: df1 = df.astype(str).groupby(["id","column"])["value"].apply(','.join).unstack(fill_value=0) print (df1) column age height name id 1 23,25 185cm peter
Q: Pandas: Converting Columns to Rows based on ID I am new to pandas, I have the following dataframe: df = pd.DataFrame([[1, 'name', 'peter'], [1, 'age', 23], [1, 'height', '185cm']], columns=['id', 'column','value']) id column value 0 1 name peter 1 1 age 23 2 1 height 185cm I need to create a single row for each ID. Like so: id name age height 0 1 peter 23 185cm Any help is greatly appreciated, thank you. A: You can use pivot_table with aggregate join: df = pd.DataFrame([[1, 'name', 'peter'], [1, 'age', 23], [1, 'height', '185cm'], [1, 'age', 25]], columns=['id', 'column','value']) print (df) id column value 0 1 name peter 1 1 age 23 2 1 height 185cm 3 1 age 25 df1 = df.astype(str).pivot_table(index="id",columns="column",values="value",aggfunc=','.join) print (df1) column age height name id 1 23,25 185cm peter Another solution with groupby + apply join and unstack: df1 = df.astype(str).groupby(["id","column"])["value"].apply(','.join).unstack(fill_value=0) print (df1) column age height name id 1 23,25 185cm peter A: Assuming your dataframe as "df", below line would help: df.pivot(index="subject",columns="predicate",values="object")
stackoverflow
{ "language": "en", "length": 168, "provenance": "stackexchange_0000F.jsonl.gz:914337", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696642" }
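For the exact frame in the question (no duplicate id/column pairs) a plain pivot plus reset_index reproduces the requested single-row layout; this complements the aggregating variants in the answers above.
import pandas as pd

df = pd.DataFrame([[1, 'name', 'peter'], [1, 'age', 23], [1, 'height', '185cm']],
                  columns=['id', 'column', 'value'])

out = (df.pivot(index='id', columns='column', values='value')
         .reset_index()
         .rename_axis(None, axis=1))[['id', 'name', 'age', 'height']]
print(out)
#    id   name age height
# 0   1  peter  23  185cm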
c98f47133cbc76f662c35941d3f405ef16e54221
Stackoverflow Stackexchange Q: Firebase reset password issue for users with social media authentication I am trying to reset password for Firebase in iOS for a user who has email authentication as well as Facebook and Twitter authentication in Firebase. The password is reset successfully and the user ID is the same, but the user's Facebook and Twitter authentication is removed (see below). How do I reset password in Firebase without removing social media authentication? User authentication with social media linking before password reset User authentication with social media unlinked after password reset This issue also occur on Android A: The following reply from a Googler seems to indicate that the unlinking is an intended consequence of the password reset to allow the user to recover their account in the case it was hijacked and modified by another user: https://stackoverflow.com/a/44694017/1171539
Q: Firebase reset password issue for users with social media authentication I am trying to reset password for Firebase in iOS for a user who has email authentication as well as Facebook and Twitter authentication in Firebase. The password is reset successfully and the user ID is the same, but the user's Facebook and Twitter authentication is removed (see below). How do I reset password in Firebase without removing social media authentication? User authentication with social media linking before password reset User authentication with social media unlinked after password reset This issue also occur on Android A: The following reply from a Googler seems to indicate that the unlinking is an intended consequence of the password reset to allow the user to recover their account in the case it was hijacked and modified by another user: https://stackoverflow.com/a/44694017/1171539 A: First I would check the method they used to sign in: You can lookup the providers linked to an account using: fetchProvidersForEmail To reset the password, use: sendPasswordResetWithEmail There are also instructions on how to send the password reset and redirect back to app: https://firebase.google.com/docs/auth/ios/passing-state-in-email-actions see Firebase forgot password- how to identify whether user signed in with email or facebook? Once you know the sign in method, if the method is email/password, you can call to specifically reset the password with email only. If it's social media then you can just not reset it, or ask the user to unlink the account, reset the password of the account, and relink the account if you're really determined. Have you tried that?
stackoverflow
{ "language": "en", "length": 258, "provenance": "stackexchange_0000F.jsonl.gz:914343", "question_score": "19", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696667" }
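A hedged Swift sketch of the flow the second answer outlines, using the 2017-era Firebase iOS API names: look up the sign-in providers for the e-mail first and only send the reset mail when the account actually has a password provider. The email variable and the branch handling are assumptions.
import FirebaseAuth

Auth.auth().fetchProviders(forEmail: email) { providers, error in
    if let providers = providers, providers.contains("password") {
        Auth.auth().sendPasswordReset(withEmail: email) { error in
            // show success / failure to the user
        }
    } else {
        // social-login-only account: prompt the user to sign in with that
        // provider instead of resetting a password
    }
}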
e5c55220a5df86a5e1ebc286e150bfba0869dc9d
Stackoverflow Stackexchange Q: Ionic1, display badge count when push notification arrives I am working with an Ionic 1 app and have implemented push notifications in it. Now I am trying to implement a badge on the app icon, and for that purpose I am using a Cordova plugin (the Cordova Badge Plugin). It works when a push notification arrives: clicking on that notification opens the app, and I have implemented the badge logic there. I code it like this. $rootScope.$on('cloud:push:notification', function (event, data) { document.addEventListener('deviceready', function () { cordova.plugins.notification.badge.requestPermission(function (granted) { cordova.plugins.notification.badge.set(10); }); }, false); var msg = data.message; alert(msg.title + ': ' + msg.text); }); On closing the app it shows the badge, and when I open the app again it clears the badge, so I know how the badge actually works. To clear the badge I am doing code like this. document.addEventListener('deviceready', function () { cordova.plugins.notification.badge.clear(); }, false); All I want is for the app icon to display the badge count when the notification arrives; right now it shows the count, but only when I open the app.
Q: Ionic1, display badge count when push notification arrives I am working with an Ionic 1 app and have implemented push notifications in it. Now I am trying to implement a badge on the app icon, and for that purpose I am using a Cordova plugin (the Cordova Badge Plugin). It works when a push notification arrives: clicking on that notification opens the app, and I have implemented the badge logic there. I code it like this. $rootScope.$on('cloud:push:notification', function (event, data) { document.addEventListener('deviceready', function () { cordova.plugins.notification.badge.requestPermission(function (granted) { cordova.plugins.notification.badge.set(10); }); }, false); var msg = data.message; alert(msg.title + ': ' + msg.text); }); On closing the app it shows the badge, and when I open the app again it clears the badge, so I know how the badge actually works. To clear the badge I am doing code like this. document.addEventListener('deviceready', function () { cordova.plugins.notification.badge.clear(); }, false); All I want is for the app icon to display the badge count when the notification arrives; right now it shows the count, but only when I open the app.
stackoverflow
{ "language": "en", "length": 172, "provenance": "stackexchange_0000F.jsonl.gz:914364", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696740" }
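A hedged note, not from the thread (the question above has no answer): when the app is not running, iOS updates the icon badge only from the push payload itself, so the JavaScript badge plugin cannot set it at delivery time. The count therefore has to come from whatever server sends the notification, for example via the badge field of a raw APNs payload like the sketch below; the Cordova badge code then only needs to read or clear it once the app opens.
{
  "aps": {
    "alert": { "title": "New message", "body": "Hello" },
    "badge": 3
  }
}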
f79cfc0f8e3daea9143165c1a5ddda14fe98a7d1
Stackoverflow Stackexchange Q: How can I know the Jenkins Build Status I'm writing my own Jenkins plugin, and I have a class that extends from RunListener<Run>, with the following onCompleted() method: @Override public void onCompleted(Run build, TaskListener listener) { int number = build.number; EnvVars env; String name = ""; try { env = build.getEnvironment(listener); name = env.get("JOB_NAME") + "-" + env.get("BUILD_NUMBER"); } catch (IOException | InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } GraphicAction act = new GraphicAction(name); build.getActions().add((Action) act); } Is there any possibility of executing the last 2 lines only if the build has been successful? Thanks! A: You can use the Jenkins REST API to get the job status: {JENKINS_URL}/job/{JOB_NAME}/lastBuild/api/json and then look for the value of "result" (see the Jenkins REST docs).
Q: How can I know the Jenkins Build Status I'm writing my own Jenkins plugin, and I have a class that extends from RunListener<Run>, with the following onCompleted() method: @Override public void onCompleted(Run build, TaskListener listener) { int number = build.number; EnvVars env; String name = ""; try { env = build.getEnvironment(listener); name = env.get("JOB_NAME") + "-" + env.get("BUILD_NUMBER"); } catch (IOException | InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } GraphicAction act = new GraphicAction(name); build.getActions().add((Action) act); } Is there any possibility of executing the last 2 lines only if the build has been successful? Thanks! A: You can use the Jenkins REST API to get the job status: {JENKINS_URL}/job/{JOB_NAME}/lastBuild/api/json and then look for the value of "result" (see the Jenkins REST docs).
stackoverflow
{ "language": "en", "length": 122, "provenance": "stackexchange_0000F.jsonl.gz:914373", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696764" }
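A hedged in-plugin alternative to the REST call above: inside onCompleted() the result is already available on the Run object, so the action can be attached only for successful builds. Result and Run#addAction are standard Jenkins core API; the name string is built exactly as in the question.
// assumes import hudson.model.Result;
@Override
public void onCompleted(Run build, TaskListener listener) {
    // ... build the `name` string as in the question ...
    if (build.getResult() == Result.SUCCESS) {
        GraphicAction act = new GraphicAction(name);
        build.addAction(act);   // preferred over build.getActions().add(...)
    }
}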
edc6a28d622d5be7b2328a0e9ddc7155c0b6dae4
Stackoverflow Stackexchange Q: Jenkinsfile - How to detect destination of pull request so I have a jenkinsfile where I have my pipeline. I would like to be able to distinguish between a pull request which is going to the master and pull requests going elsewhere. I have found example code for detecting pull requests - something like this: env.BRANCH_NAME.startsWith('PR-') But is there any way to find out the target of the pull request from some env. variable? A: As per https://yourjenkinsurl/pipeline-syntax/globals: CHANGE_TARGET For a multibranch project corresponding to some kind of change request, this will be set to the target or base branch to which the change could be merged, if supported; else unset.
Q: Jenkinsfile - How to detect destination of pull request so I have a jenkinsfile where I have my pipeline. I would like to be able to distinguish between a pull request which is going to the master and pull requests going elsewhere. I have found example code for detecting pull requests - something like this: env.BRANCH_NAME.startsWith('PR-') But is there any way to find out the target of the pull request from some env. variable? A: As per https://yourjenkinsurl/pipeline-syntax/globals: CHANGE_TARGET For a multibranch project corresponding to some kind of change request, this will be set to the target or base branch to which the change could be merged, if supported; else unset.
stackoverflow
{ "language": "en", "length": 112, "provenance": "stackexchange_0000F.jsonl.gz:914391", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696833" }
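A small scripted-pipeline sketch combining the question's PR check with the CHANGE_TARGET variable from the answer; the variable names are the standard multibranch ones and the branch names are examples.
if (env.BRANCH_NAME?.startsWith('PR-')) {
    if (env.CHANGE_TARGET == 'master') {
        echo "PR ${env.CHANGE_ID} from ${env.CHANGE_BRANCH} targets master"
        // run the master-only checks here
    } else {
        echo "PR targets ${env.CHANGE_TARGET}, running the lighter checks"
    }
}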
7c5245305b8b8541d1df5249695d944e25811677
Stackoverflow Stackexchange Q: TStringList.LoadFromFile Unicode I am attempting to open a txt file to a StringList but if I open a UTF-8 format it fails to load, this is confusing because I have Unicode XE2, am I missing something stupid here? Simple Sample Sl := tStringList.Create; SL.LoadFromFile(sFilePath); For i =0 to SL.Count -1 do foo but the String does not load when the txt file is UTF-8 but works fine when its in ANSI format. A: TStringList.LoadFromFile will attempt to infer the encoding from the file's byte order mark (BOM). If no BOM is present then ANSI encoding is assumed. In your case it seems clear that there is no BOM, so you must tell LoadFromFile which encoding to use. Do that by specifying the encoding as the second argument passed to LoadFromFile: SL.LoadFromFile(sFilePath, TEncoding.UTF8);
Q: TStringList.LoadFromFile Unicode I am attempting to open a txt file to a StringList but if I open a UTF-8 format it fails to load, this is confusing because I have Unicode XE2, am I missing something stupid here? Simple Sample Sl := tStringList.Create; SL.LoadFromFile(sFilePath); For i =0 to SL.Count -1 do foo but the String does not load when the txt file is UTF-8 but works fine when its in ANSI format. A: TStringList.LoadFromFile will attempt to infer the encoding from the file's byte order mark (BOM). If no BOM is present then ANSI encoding is assumed. In your case it seems clear that there is no BOM, so you must tell LoadFromFile which encoding to use. Do that by specifying the encoding as the second argument passed to LoadFromFile: SL.LoadFromFile(sFilePath, TEncoding.UTF8); A: If your UTF-8 file does have a BOM, then loading a UTF-8 file which contains an invalid UTF-8 byte sequence will produce an empty result, with no exception or indication of the failure. This is a 'feature' of the Delphi file handling. So if you see this result and your file has a valid BOM, check the content.
stackoverflow
{ "language": "en", "length": 192, "provenance": "stackexchange_0000F.jsonl.gz:914407", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696887" }
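The question's loop with the encoding made explicit and the usual try/finally added (the two-argument LoadFromFile overload exists from Delphi 2009 onward; SL, sFilePath, i and Foo are the question's placeholders).
SL := TStringList.Create;
try
  // tell the RTL the file is UTF-8 even though it has no BOM
  SL.LoadFromFile(sFilePath, TEncoding.UTF8);
  for i := 0 to SL.Count - 1 do
    Foo(SL[i]);
finally
  SL.Free;
end;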
c3bae1b0f1e15696fda7cc1a7fded08264f955fc
Stackoverflow Stackexchange Q: How to make proper https request using Finagle to Telegram API Currently I am working on a simple bot that will have a Telegram interface. The problem is that Finagle has means to make an HTTP request, but I have no clue how to make an HTTPS request. I tried to make an HTTPS request to the Telegram bot API with the Scala library Finagle: val service: Service[http.Request, http.Response] = Http.client.withTlsWithoutValidation.newService("api.telegram.org:443") val request = http.Request(http.Method.Get,bottoken + "/getMe") request.host = "api.telegram.org" val t = Await.result(service(request) onSuccess(a => a) onFailure( exc => println("Auth check failed : " + exc.toString ))) if (t.status == Status.Ok) { println("Auth check success") } else { println("Auth check failed : " + t.toString + "\r\n" + t.contentString) } Every time I run this code it yields a 400 Bad Request HTTP response. Http.client.withTls("api.telegram.org") yields the same result. What am I doing wrong? A: You have to add the protocol to the URL in the Request. val request = http.Request(http.Method.Get, "http://yourWholeHost/getMe")
Q: How to make proper https request using Finagle to Telegram API Currently I am working on a simple bot that will have a Telegram interface. The problem is that Finagle has means to make an HTTP request, but I have no clue how to make an HTTPS request. I tried to make an HTTPS request to the Telegram bot API with the Scala library Finagle: val service: Service[http.Request, http.Response] = Http.client.withTlsWithoutValidation.newService("api.telegram.org:443") val request = http.Request(http.Method.Get,bottoken + "/getMe") request.host = "api.telegram.org" val t = Await.result(service(request) onSuccess(a => a) onFailure( exc => println("Auth check failed : " + exc.toString ))) if (t.status == Status.Ok) { println("Auth check success") } else { println("Auth check failed : " + t.toString + "\r\n" + t.contentString) } Every time I run this code it yields a 400 Bad Request HTTP response. Http.client.withTls("api.telegram.org") yields the same result. What am I doing wrong? A: You have to add the protocol to the URL in the Request. val request = http.Request(http.Method.Get, "http://yourWholeHost/getMe")
stackoverflow
{ "language": "en", "length": 153, "provenance": "stackexchange_0000F.jsonl.gz:914412", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696909" }
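A hedged alternative sketch for the question above: besides the protocol issue, Telegram's bot endpoints live under "/bot<TOKEN>/...", so a missing "/bot" prefix or leading slash in the request URI is another common cause of a 400 here. With Finagle the request URI is normally just the path, and TLS is enabled on the client; this reuses the question's imports and assumes bottoken is the raw token.
val client: Service[http.Request, http.Response] =
  Http.client.withTls("api.telegram.org").newService("api.telegram.org:443")

val request = http.Request(http.Method.Get, s"/bot$bottoken/getMe")
request.host = "api.telegram.org"

val response = Await.result(client(request))
println(s"${response.status} ${response.contentString}")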
6c9667b4d5c28c736698d4bb15a30c719174567d
Stackoverflow Stackexchange Q: How to read path variables or URL parameters in Custom deserializer Spring Boot I've written a custom deserializer for an entity in Spring Boot application. Now I need to access URL parameters and path variables in my custom deserializer for some data manipulation. Please tell me how can i do that. Thanks A: For path variables deserialization you don't need to involve jackson but you have to "tune" Spring MVC itself by means of defining your own org.springframework.core.convert.converter.Converter For example: @Component public class StringToLocalDateTimeConverter implements Converter<String, LocalDateTime> { @Override public LocalDateTime convert(String source) { return LocalDateTime.parse( source, DateTimeFormatter.ISO_LOCAL_DATE_TIME); } } @GetMapping("/findbydate/{date}") public GenericEntity findByDate(@PathVariable("date") LocalDateTime date) { return ...; } Here is an article about it.
Q: How to read path variables or URL parameters in Custom deserializer Spring Boot I've written a custom deserializer for an entity in Spring Boot application. Now I need to access URL parameters and path variables in my custom deserializer for some data manipulation. Please tell me how can i do that. Thanks A: For path variables deserialization you don't need to involve jackson but you have to "tune" Spring MVC itself by means of defining your own org.springframework.core.convert.converter.Converter For example: @Component public class StringToLocalDateTimeConverter implements Converter<String, LocalDateTime> { @Override public LocalDateTime convert(String source) { return LocalDateTime.parse( source, DateTimeFormatter.ISO_LOCAL_DATE_TIME); } } @GetMapping("/findbydate/{date}") public GenericEntity findByDate(@PathVariable("date") LocalDateTime date) { return ...; } Here is an article about it. A: 1. HandlerMethodArgumentResolver See spring-mvc-custom-data-binder, Another answer 2. HandlerInterceptor See Interception
stackoverflow
{ "language": "en", "length": 128, "provenance": "stackexchange_0000F.jsonl.gz:914422", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44696941" }
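A hedged sketch of the "inside the deserializer" route, complementing the Converter answer above: Spring keeps the current request in a thread-local, so a Jackson deserializer running during @RequestBody binding can read query parameters, and the path variables are exposed as a request attribute. The entity and field names here are hypothetical.
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import org.springframework.web.servlet.HandlerMapping;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.Map;

public class MyEntityDeserializer extends JsonDeserializer<MyEntity> {

    @Override
    public MyEntity deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        HttpServletRequest request = ((ServletRequestAttributes)
                RequestContextHolder.currentRequestAttributes()).getRequest();

        String mode = request.getParameter("mode");                 // URL parameter, e.g. ?mode=draft

        @SuppressWarnings("unchecked")
        Map<String, String> pathVars = (Map<String, String>) request
                .getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
        String id = pathVars != null ? pathVars.get("id") : null;   // {id} path variable

        JsonNode node = p.getCodec().readTree(p);
        return new MyEntity(node.get("name").asText(), mode, id);
    }
}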
7d7a77b97c82112082e3eeb92197449561087d6d
Stackoverflow Stackexchange Q: How to track FCM push notifications sent from the server side or a REST client? Is it possible to track, in the Firebase console, push notifications that are sent from the server side or a REST client? A: See the duplicate post. No. Currently, only messages sent through the Firebase Notifications Console are visible in the console. Messages sent through the API could be tracked in the Diagnostics Tool, keeping in mind that this doesn't include messages sent to topics.
Q: How to track FCM push notifications sent from the server side or a REST client? Is it possible to track, in the Firebase console, push notifications that are sent from the server side or a REST client? A: See the duplicate post. No. Currently, only messages sent through the Firebase Notifications Console are visible in the console. Messages sent through the API could be tracked in the Diagnostics Tool, keeping in mind that this doesn't include messages sent to topics.
stackoverflow
{ "language": "en", "length": 74, "provenance": "stackexchange_0000F.jsonl.gz:914442", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697015" }