Q: Can I manually set the value of a field annotated by @LastModifiedDate in Java/Spring? I have a Java class that has a modifiedDate field annotated with @LastModifiedDate.
In a particular case, I want to set the "modifiedDate" field manually, using the corresponding setter, before saving it into a Mongo collection (using org.springframework.data.repository.CrudRepository#save(java.lang.Object)).
I tried setting it via the corresponding setter, but it did not work.
I cannot simply remove this annotation, as in all other cases it is extremely helpful.
Note: Please refer to http://ufasoli.blogspot.in/2017/01/spring-data-force-version-and.html. There, using the auditingHandler, they mark an object as modified even though it is not. I want the opposite: to mark an object as unmodified even if it is.
Source: https://stackoverflow.com/questions/44680715
Q: Spring Boot JPA filter by join table I can't find anything concrete in the docs (https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#jpa.query-methods.query-creation)
and no satisfying answer in several blogs,
so here is my question.
I have a table entity like:
@Entity
@Table(name = "workstation")
public class Workstation
{
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "userid", nullable = false)
    public User user;
}
And the User table entity:
@Entity
public class User
{
    @Id
    @GeneratedValue(strategy = IDENTITY)
    @Column(name = "ID", unique = true, nullable = false)
    public Integer id;

    @Column(name = "age", nullable = false)
    public int age;
}
Now I would like to have a query in my Repository like this:
public interface WorkstationRepo extends CrudRepository<Workstation, Long> {
    List<Workstation> findByUserAgeLesserThan(int maxAge);
}
Meaning I want to find all users who are under a certain age through my Workstation entity.
Is this possible without a @Query annotation? And how should it look?
A: Try this
List<Workstation> findByUserAgeLessThan(int maxAge);
Alternatively you can also write your query
@Query("Select w from Workstation w where w.user.age < :maxAge")
List<Workstation> findByUserAgeLesserThan(@Param("maxAge") int maxAge);
A: This works:
@Query("SELECT w from Workstation w INNER JOIN w.user u where u.age < ?1")
List<Workstation> findByUserAgeLessThan(int age);
A: You should have something like this:
@Query("SELECT w from Workstation w INNER JOIN w.user u where u.age < :age")
List<Workstation> findByUserAgeLessThan(@Param("age") int age);
Basically, you need to JOIN the tables using JPQL.
Source: https://stackoverflow.com/questions/44680729
Q: How to get the handle of curves plotted by contour3 in MATLAB? When I want to rotate the lines plotted using contour3, it seems like the graphics handles of the lines are already deleted. How can I get the handle of the contour lines? Or is it possible to rotate the lines with the contour matrix C?
>> x = -2:0.25:2;
>> [X,Y] = meshgrid(x);
>> Z = X.*exp(-X.^2-Y.^2);
>> C=contour3(X,Y,Z,10,'m');
>> hd=gca;
>> rotate(hd,[0 1 0],90,[0 0 0]);
The lines didn't move after entering the last command. (I'm using MATLAB 2016a.)
A: You need to get the second output from contour3, which is the handle to the Contour graphics object:
[C, h] = contour3(...);
Unfortunately, this won't help you with your rotation problem. From the documentation for rotate:
rotate(h,direction,alpha) rotates the graphics object h by alpha degrees. Specify h as a surface, patch, line, text, or image object. ...
Note that rotate won't work on axes or Contour objects. Instead, you'll need to alter the camera view with view.
Source: https://stackoverflow.com/questions/44680766
Q: Computing training score using cross_val_score I am using cross_val_score to compute the mean score for a regressor. Here's a small snippet.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
cross_val_score(LinearRegression(), X, y_reg, cv = 5)
Using this I get an array of scores. I would like to know how the scores on the validation set (as returned in the array above) differ from those on the training set, to understand whether my model is over-fitting or under-fitting.
Is there a way of doing this with the cross_val_score object?
A: You can use cross_validate instead of cross_val_score.
According to the docs:
The cross_validate function differs from cross_val_score in two ways:

* It allows specifying multiple metrics for evaluation.
* It returns a dict containing training scores, fit-times and score-times in addition to the test score.
A: Why would you want that? cross_val_score(cv=5) does that for you, as it splits your training data into 5 folds and reports the accuracy score on each of the 5 validation subsets. This method already serves as a way to detect whether your model is over-fitting.
Anyway, if you are eager to verify accuracy on separate validation data, you have to fit your LinearRegression on X and y_reg first.
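As a concrete, hypothetical sketch of that comparison (synthetic data stands in for the question's X and y_reg), cross_validate with return_train_score=True returns both per-fold training and validation scores:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate

# Synthetic regression data standing in for the question's X and y_reg.
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y_reg = X @ np.array([1.5, -2.0, 1.0]) + 0.1 * rng.randn(100)

# return_train_score=True also scores each fold's training split,
# so training and validation scores can be compared fold by fold.
scores = cross_validate(LinearRegression(), X, y_reg, cv=5,
                        return_train_score=True)

gap = scores["train_score"].mean() - scores["test_score"].mean()
print(sorted(scores))  # keys: fit_time, score_time, test_score, train_score
print(f"train/validation gap: {gap:.4f}")  # a large gap suggests over-fitting
```

A small gap is expected here, since linear data fitted with a linear model barely over-fits.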
Source: https://stackoverflow.com/questions/44680781
Q: Remove duplicate items from list in C#, according to one of their properties I have a list of a class of this type:
public class MyClass
{
public SomeOtherClass classObj;
public string BillId;
}
public List<MyClass> myClassObject;
Sample Values:
BillId = "123",classObj = {},
BillId = "999",classObj = {},
BillId = "777",classObj = {},
BillId = "123",classObj = {}
So in the above example we have duplicate values for BillId. I would like to remove all the duplicated values entirely (not just de-duplicate them), so the result would contain only the 999 and 777 items.
One way to achieve this:

* Loop through all items
* Get the count of each unique BillId
* If the count is greater than 1, store that BillId in another variable
* Loop again and remove items based on those BillIds

Is there any straightforward way to achieve this?
A: You can also do this,
var result = myClassObject.GroupBy(x => x.BillId)
.Where(x => !x.Skip(1).Any())
.Select(x => x.First());
FIDDLE
A: I think this would work:
var result = myClassObject.GroupBy(x => x.BillId)
.Where(x => x.Count() == 1)
.Select(x => x.First());
Fiddle here
A: This may help.
var result = myClassObject
.GroupBy(x => x.BillId)
.Where(x => x.Count()==1)
.Select(x => x.FirstOrDefault());
A: The .Where(x => x.Count()==1) wasn't good for me.
You can try:
.GroupBy(x => x.codeLigne).Select(x => x.First()).ToList()
A: Try this.
var distinctList = myClassObject.GroupBy(m => m.BillId)
.Where(x => x.Count() == 1)
.SelectMany(x => x.ToList())
.ToList();
A: You've asked for a straightforward solution, and the GroupBy+Where+Select solutions satisfy that requirement perfectly, but you might also be interested in a highly performant and memory-efficient solution. Below is an implementation that uses all the tools currently available (.NET 6+) for maximum efficiency:
/// <summary>
/// Returns a sequence of elements that appear exactly once in the source sequence,
/// according to a specified key selector function.
/// </summary>
public static IEnumerable<TSource> UniqueBy<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IEqualityComparer<TKey> comparer = default)
{
    ArgumentNullException.ThrowIfNull(source);
    ArgumentNullException.ThrowIfNull(keySelector);
    Dictionary<TKey, (TSource Item, bool Unique)> dictionary = new(comparer);
    if (source.TryGetNonEnumeratedCount(out int count))
        dictionary.EnsureCapacity(count); // Assume that most items are unique
    foreach (TSource item in source)
        CollectionsMarshal.GetValueRefOrAddDefault(dictionary, keySelector(item),
            out bool exists) = exists ? default : (item, true);
    foreach ((TSource item, bool unique) in dictionary.Values)
        if (unique)
            yield return item;
}
The TryGetNonEnumeratedCount+EnsureCapacity combination can have a significant impact on the amount of memory allocated during the enumeration of the source, in case the source is a type with well known size, like a List<T>.
The CollectionsMarshal.GetValueRefOrAddDefault ensures that each key will be hashed only once, which can be impactful in case the keys have expensive GetHashCode implementations.
Usage example:
List<MyClass> unique = myClassObject.UniqueBy(x => x.BillId).ToList();
Online demo.
The difference between the above UniqueBy and the built-in DistinctBy LINQ operator is that the former eliminates duplicates altogether, while the latter preserves the first instance of each duplicated element.
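Language aside, the semantics these GroupBy answers implement — keep only the elements whose key occurs exactly once — can be sketched with a simple counting pass; here is a minimal Python illustration (not the C# answer itself) using the question's sample BillId values:

```python
from collections import Counter

bills = ["123", "999", "777", "123"]  # sample BillId values from the question

counts = Counter(bills)                        # first pass: count each key
unique = [b for b in bills if counts[b] == 1]  # second pass: keep keys seen once

print(unique)  # ['999', '777']
```

This is the same two-pass idea as the GroupBy approach: count per key, then filter to keys with count 1.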
Source: https://stackoverflow.com/questions/44680799
Q: Electron (Windows) Cannot read property 'hide' of undefined I am learning Electron, and when I use the line app.dock.hide();
I receive the error Cannot read property 'hide' of undefined.
Is this a Windows issue? The training video was made on a Mac. The entire code is:
const path = require('path');
const electron = require('electron');
const TimerTray = require('./app/timer_tray');
const { app, BrowserWindow, Tray } = electron;
let mainWindow;
let tray;
app.on('ready', () => {
app.dock.hide(); // <-- Error happening here
mainWindow = new BrowserWindow({
height: 500,
width: 300,
frame: false,
resizable: false,
show: false
});
mainWindow.loadURL(`file://${__dirname}/src/index.html`);
// Hides mainWindow if another app is clicked
mainWindow.on('blur', () => {
mainWindow.hide();
});
const iconName = process.platform === 'win32' ? 'iconTemplate.png' : 'iconTemplate.png';
const iconPath = path.join(__dirname, `./src/assets/${iconName}`);
tray = new TimerTray(iconPath, mainWindow);
});
This is supposed to hide the icon from the taskbar. Any ideas why Windows throws a fit?
A: Just found it here.
In order to make the window not show in the taskbar, you can either call win.setSkipTaskbar(true); or add skipTaskbar to the options passed to the new BrowserWindow:
{
// ...
skipTaskbar: true,
// ...
}
Source: https://stackoverflow.com/questions/44680931
Q: ASP.NET Core Identity Settings Not Working I've an implementation of Identity Server 4 with ASP.NET Identity. I asked an earlier question about how I would apply certain login rules and received an answer explaining how I could add some options in Startup.cs. Here's what I added to the ConfigureServices method:
services.AddIdentity<ApplicationUser, IdentityRole>(options =>
{
options.Lockout.DefaultLockoutTimeSpan = TimeSpan.FromMinutes(15);
options.Lockout.MaxFailedAccessAttempts = 5;
options.Password.RequiredLength = 9;
options.Password.RequireDigit = true;
options.Password.RequireLowercase = true;
options.Password.RequireUppercase = true;
options.Password.RequireNonAlphanumeric = false;
options.SignIn.RequireConfirmedEmail = true;
})
.AddEntityFrameworkStores<ApplicationDbContext>()
.AddDefaultTokenProviders();
The password rules seem to work, but the lockout rules have no effect. Is there something I need to enable?
A: Not sure how I missed this. The lockout feature happens as part of the sign-in process in the PasswordSignInAsync method on the SignInManager. The line of code I needed to change is part of the Login method in the AccountController:
SignInManager.PasswordSignInAsync(
model.Email,
model.Password,
model.RememberLogin,
lockoutOnFailure: true); // <- HERE
Source: https://stackoverflow.com/questions/44680939
Q: How to join wagtail and django sitemaps? I'm using the wagtail app in my Django project. Is it possible to combine Django sitemaps (https://docs.djangoproject.com/en/1.11/ref/contrib/sitemaps/) with wagtail sitemaps (wagtail.contrib.wagtailsitemaps)? I tried using Django sitemap indexes, but they only cover the Django sitemaps; how can I include the wagtail sitemap?
A: Wagtail uses the Django sitemap framework since version 1.10. This should allow you to easily combine regular Django sitemaps with Wagtail sitemaps.
There is a small catch, however; because wagtail supports multiple sites, the sitemap should know for which site it is generated. For this reason wagtail provides its own sitemap views (index and sitemap). These views extend the Django sitemap views in order to propagate the site object.
So instead of importing the sitemap views from django:
from django.contrib.sitemaps import views as sitemap_views
Use the wagtail versions:
from wagtail.contrib.wagtailsitemaps import views as sitemaps_views
And then use the Django approach to map the urls to the views:
from wagtail.contrib.wagtailsitemaps import Sitemap
urlpatterns = [
# ...
url(r'^sitemap\.xml$', sitemaps_views.index, {
'sitemaps': {
'pages': Sitemap
},
'sitemap_url_name': 'sitemap',
}),
url(r'^sitemap-(?P<section>.+)\.xml$', sitemaps_views.sitemap,
name='sitemap'),
# ...
]
For a complete example you can see the code in the tests:
https://github.com/wagtail/wagtail/blob/911009473bc51e30ff751fda0ea5d2fa1d2b450f/wagtail/tests/urls.py#L36
Source: https://stackoverflow.com/questions/44681046
Q: How do I Specify PDF Output Path in Chrome Headless Mode This page shows that you can now use chrome to generate a PDF of a webpage using the following CLI command:
chrome --headless --disable-gpu --print-to-pdf https://www.chromestatus.com/
However, it does not state how to specify the output path.
How do you specify the output path?
A: chrome --headless --disable-gpu --print-to-pdf="C:/temp/pdftest.pdf" https://www.google.com/
source:
https://cs.chromium.org/chromium/src/headless/app/headless_shell.cc?type=cs&l=63
Source: https://stackoverflow.com/questions/44681065
Q: Spring Boot JPA: Storing entity as JSON In my Spring Boot app I have an entity like this:
@Entity
@Table(name = "UserENT")
public class User implements Serializable {
@Id
private String id;
private Data data;
...
I would like the Data object to be stored in the DB in JSON format, but mapped back onto a Data object when selecting from the DB.
Thanks for any advice.
A: You can implement a javax.persistence.AttributeConverter, as in:
@Converter
public class DataJsonConverter implements AttributeConverter<Data, String> {

    private ObjectMapper objectMapper = ...;

    @Override
    public String convertToDatabaseColumn(Data data) {
        try {
            return objectMapper.writeValueAsString(data);
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Could not convert to Json", e);
        }
    }

    @Override
    public Data convertToEntityAttribute(String json) {
        try {
            return objectMapper.readValue(json, Data.class);
        } catch (IOException e) {
            throw new RuntimeException("Could not convert from Json", e);
        }
    }
}
You can then use it by annotating your field:
@Convert(converter = DataJsonConverter.class)
private Data data;
Source: https://stackoverflow.com/questions/44681127
Q: Web Google authentication with firebase
uncaught exception: Error: This operation is not supported in the
environment this application is running on. "location.protocol" must
be http, https or chrome-extension and web storage must be enabled.
var config = {
apiKey: "*****",
authDomain: "******",
};
firebase.initializeApp(config);
var provider = new firebase.auth.GoogleAuthProvider();
provider.addScope('profile');
provider.addScope('https://www.googleapis.com/auth/drive');
firebase.auth().signInWithRedirect(provider);
alert(1);
}
A:
uncaught exception: Error: This operation is not supported in the
environment this application is running on. "location.protocol" must
be HTTP, HTTPS or chrome-extension and web storage must be enabled.
I recently faced the same error.
You are opening this file directly in the browser, without any web server. Firebase authentication won't work if you open the file directly. Serve your HTML through a web server and it should solve your issue.
The reason behind this bug is that the authentication services use web storage, and web storage does not work when you open an HTML file directly, without a web server.
For example, use Apache and open the page through it, like http://localhost/filename.html, in the browser.
A: The simplest way:
Just upload your files to GitHub and serve them with GitHub Pages (i.e. https://<your name>.github.io/<your dir>/<your html file>).
A: Try this code. It should work.
var config = {
apiKey: "*****",
authDomain: "******",
};
firebase.initializeApp(config);
var provider = new firebase.auth.GoogleAuthProvider();
provider.addScope('profile');
provider.addScope('https://www.googleapis.com/auth/drive');
firebase.auth().signInWithRedirect(provider);
//add the code below to your previous lines
firebase.auth().getRedirectResult().then(function(authData) {
console.log(authData);
}).catch(function(error) {
console.log(error);
});
Source: https://stackoverflow.com/questions/44681352
A: Ah, you are not creating the build in a separate virtual environment.
Create a virtual environment just for build purposes and install only the packages you need in this environment.
In your cmd, execute these commands to create a virtual environment:
python -m venv build_env
cd build_env
C:\build_env\Scripts\Activate
you will see this >>(build_env) C:\build_env
Install all the packages you need for your script, start with pyinstaller
pip install pyinstaller
Once you are all installed, build the exe as before.
The exe built using the virtual environment will be faster and smaller in size!!
For more details check https://python-forum.io/Thread-pyinstaller-exe-size
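The manual steps above can also be scripted. A minimal sketch using only the standard library; the environment and script names are placeholders, and each returned command would still be run with subprocess.run:

```python
import sys
from pathlib import Path

def build_commands(env_dir, script):
    # Commands for a clean-venv PyInstaller build (illustrative):
    # create the venv, install only pyinstaller, then build from inside it.
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    py = str(Path(env_dir) / bindir / "python")
    return [
        [sys.executable, "-m", "venv", env_dir],
        [py, "-m", "pip", "install", "pyinstaller"],
        [py, "-m", "PyInstaller", "--onefile", script],
    ]

for cmd in build_commands("build_env", "myscript.py"):
    print(" ".join(cmd))
```

Running the commands in order reproduces the venv recipe above without activating the environment, because each one invokes the venv's own interpreter directly.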
A: I had a similar problem and found a solution. I used Windows terminal preview. This program allows creation of various virtual environments like Windows Power Shell (btw. Linux Ubuntu too. Also, worth noting: you can have many terminals in this program installed and, even, open a few at once. Very cool stuff).
Inside Windows Power Shell in Windows terminal preview I installed all the necessary libraries (like pandas etc.), then I opened the path to my file and tried to use this command:
pyinstaller --onefile -w 'filename.py'
...but the output exe didn't work. For some reason, the console said that one library was missing (which I had installed earlier). I found the solution by mimicking the auto-py-to-exe library. The command used by this GUI is:
pyinstaller --noconfirm --onedir --console "C:/Users/something/filename.py"
And this one works well. I reduced the size of my output exe program from 911MB to 82,9MB!!!
BTW: 911MB was the size of output made by auto-py-to-exe.
I wonder how it is possible that no one has yet created a compressor that reads the code, checks which libraries are part of it, and then puts only those inside the compression. In my case, auto-py-to-exe probably loaded all the libraries I ever installed. That would explain the size of this compressed folder.
Some suggest using https://virtualenv.pypa.io/en/stable/ but in my opinion, this library is very difficult, at least for me.
| stackoverflow | {
"language": "en",
"length": 465,
"provenance": "stackexchange_0000F.jsonl.gz:909438",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681356"
} |
608cbadcda61fbc915beafb386d64d1d11f01c86 | Stackoverflow Stackexchange
Q: MySQL Duplicate a database I want to copy (all data, schema, procedures and so on) from database1 to database2, that are located on the same server.
I've tried using mysqldump but each time I get
ERROR 1227 (42000) at line 18: Access denied; you need (at least one
of) the SUPER privilege(s) for this operation
That's because the root user that I use for this operation does not have the SUPER privilege, and I don't have access to change this.
Is there a solution to do this without using mysqldump?
Keep in mind that it's a pretty big database with over one hundred tables.
A: You have to grant privileges to the user:
GRANT ALL PRIVILEGES ON database_name.* TO 'username'@'yourhost';
if you want to do it for all databases:
GRANT SELECT ON *.* TO 'username'@'yourhost';
FLUSH PRIVILEGES;
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:909444",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681367"
} |
58e0c6810aaf80b44d558d5c0fca9b53340b046d | Stackoverflow Stackexchange
Q: ggplot using purrr map() to plot same x with multiple y's I want to create multiple plots that have the same x but different y's using purrr package methodology. That is, I would like to use the map() or walk() functions to perform this.
Using mtcars dataset for simplicity.
ggplot(data = mtcars, aes(x = hp, y = mpg)) + geom_point()
ggplot(data = mtcars, aes(x = hp, y = cyl)) + geom_point()
ggplot(data = mtcars, aes(x = hp, y = disp)) + geom_point()
edit
So far I have tried
y <- list("mpg", "cyl", "disp")
mtcars %>% map(y, ggplot(., aes(hp, y)) + geom_point()
A: This is one possibility
ys <- c("mpg","cyl","disp")
ys %>% map(function(y)
ggplot(mtcars, aes(hp)) + geom_point(aes_string(y=y)))
It's just like any other map function; you just need to configure your aesthetics properly in the function.
A: I've made a bit more general function for this, because it's part of EDA protocol (Zuur et al., 2010). This article from Ariel Muldoon helped me.
plotlist <- function(data, resp, efflist) {
require(ggplot2)
require(purrr)
y <- enquo(resp)
map(efflist, function(x)
ggplot(data, aes(!!sym(x), !!y)) +
geom_point(alpha = 0.25, color = "darkgreen") +
ylab(NULL)
)
}
where:
*data is your dataframe
*resp is the response variable
*efflist is a character vector of effects (independent variables)
Of course, you may change the geom and/or aesthetics as it needs. The function returns a list of plots which you can pass to e.g. cowplot or gridExtra as in example:
library(gridExtra)
library(dplyr) # just for pipes
plotlist(mtcars, hp, c("mpg","cyl","disp")) %>%
grid.arrange(grobs = ., left = "HP")
| stackoverflow | {
"language": "en",
"length": 254,
"provenance": "stackexchange_0000F.jsonl.gz:909449",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681380"
} |
9258ef6ccc862743f261a962b066463da1941e29 | Stackoverflow Stackexchange
Q: can't uninstall packages Hi, I uninstalled a package but it still looks accessible, can somebody help please? Thank u!
> remove.packages("RODBC")
Removing package from ‘E:/R/R-3.3.3/library’
(as ‘lib’ is unspecified)
> library(RODBC)
# no error. it's still there
> attr(sessionInfo()$otherPkgs$RODBC, "file")
[1] "E:/R/R-3.3.3/library/RODBC/Meta/package.rds"
# it really is there...
> remove.packages("dplyr")
Removing package from ‘E:/R/R-3.3.3/library’
(as ‘lib’ is unspecified)
> library(dplyr)
Error in library(dplyr) : there is no package called ‘dplyr’
# this guy is removed
> .Library
[1] "E:/R/R-3.3.3/library"
> .libPaths()
[1] "E:/R/R-3.3.3/library"
Would it be possible that the package RODBC was in use so that can not be removed?
A: this has happened to me before and I think what I did was literally go find the package's folder from file explorer on my computer and manually delete it
A: I had the same trouble, so I tried to delete the packages by hand, but found I didn't have root privileges. Then I closed R, started it with sudo, and tried to remove the packages again. That worked for me.
A: Possible causes
There could be many reasons as to why it is happening.
In my case, remove.packages("somepackagehere") was not working because the current system user that I am using doesn't have write privileges to the packages I want to uninstall. So, this is a possible reason for computers/machines with multiple users using R.
Checking the location of packages
This can be checked by issuing the statement in the R console:
.libPaths()
output
[1] "/Library/Frameworks/R.framework/Versions/4.0/Resources/library"
The directory output may vary per R installation. This is just for my case. The directory output is where the installed packages were stored. It may look different for Windows systems.
Checking privileges
In Mac and Linux, the privileges can be checked by:
cd /Library/Frameworks/R.framework/Versions/4.0/Resources/library
ls -la
output
drwxrwxr-x 422 root admin 13504 Apr 21 19:13 .
drwxrwxr-x 18 root admin 576 Jul 16 2020 ..
drwxr-xr-x 3 mario admin 96 Jun 17 2021 RODBC
drwxr-xr-x 3 mario admin 96 Jun 17 2021 dplyr
In this case, it was mario who installed the packages. Since I --luigi-- was currently using the machine, I can not remove those packages. It is only mario that can do it.
In Windows, I have no clue as to how it can be checked.
Granting privileges
cd /Library/Frameworks/R.framework/Versions/4.0/Resources/library
sudo chown -R luigi:admin .
OR
cd /Library/Frameworks/R.framework/Versions/4.0/Resources/library
sudo chmod -R o+w .
In Windows, I have no clue as to how privileges can be granted.
Removing packages
Finally, with the correct privileges, you can now remove the packages like so:
remove.packages("RODBC")
A: Also had problems with remove.packages so deleting the folders as suggested by @sweetmusicality worked for me:
#e.g. remove packages associated with tidyverse
pkremove <- tools::package_dependencies("tidyverse", db = installed.packages())$tidyverse
lapply(pkremove, function(x) {
unlink(paste0(.libPaths()[1], "/", x), recursive = TRUE)
})
| stackoverflow | {
"language": "en",
"length": 458,
"provenance": "stackexchange_0000F.jsonl.gz:909451",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681383"
} |
6ead09dcdec856e93be28dd8a00d9fabcca9e367 | Stackoverflow Stackexchange
Q: replace does not work on temporary file I'm using the replace module from NPM successfully (see the first example). However, I want to keep the original file and process a new (temporary) file that is a copy of it. Here is what I tried:
Works:
var replace = require("replace");
replace({
regex: "foo",
replacement: "bar",
paths: [path_in],
recursive: true,
silent: true,
});
Doesn't work:
var replace = require("replace");
var fs = require('fs');
fs.createReadStream(path_in).pipe(fs.createWriteStream(path_temp));
replace({
regex: "foo",
replacement: "bar",
paths: [path_temp],
recursive: true,
silent: true,
});
Do I need to close the pipe()? Not sure what to do here..
Thanks,
Edit: This GitHub issue is related.
A: The .pipe() is asynchronous so you need to wait for the .pipe() to finish before trying to use the destination file. Since .pipe() returns the destination stream, you can listen for the close or error events to know when it's done:
var replace = require("replace");
var fs = require('fs');
fs.createReadStream(path_in).pipe(fs.createWriteStream(path_temp)).on('close', function() {
replace({
regex: "foo",
replacement: "bar",
paths: [path_temp],
recursive: true,
silent: true,
});
}).on('error', function(err) {
// error occurred with the .pipe()
});
| stackoverflow | {
"language": "en",
"length": 180,
"provenance": "stackexchange_0000F.jsonl.gz:909477",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681489"
} |
d7ffb95c6bac9b0e4f7fd34c872e8fe862717b39 | Stackoverflow Stackexchange
Q: Django - Easiest way of making parallel API calls Suppose that I have a Django view and that I would like to call two APIs at the same time.
def djangoview(request):
#Call API 1
#Call API 2 before API Call 1 finishes
#Wait for both calls to finish
output = [Apicall1.response, Apicall2.response]
return(HttpResponse)
I've been trying to use multiprocessing library and pools without success. I'm using Apache2 and Wsgi, how can I make this work? Maybe making the call on a different thread?
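One common approach is a thread pool: submit both calls, then wait on both results. A minimal sketch with concurrent.futures; call_api here is a hypothetical stand-in for whatever HTTP client you use (requests, urllib, etc.):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def call_api(name, delay):
    # Placeholder for a real HTTP request; sleeps to simulate network latency.
    time.sleep(delay)
    return {"api": name}

def call_both():
    # Both submits return immediately; .result() blocks until each call finishes.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut1 = pool.submit(call_api, "api1", 0.1)
        fut2 = pool.submit(call_api, "api2", 0.1)
        return [fut1.result(), fut2.result()]

print(call_both())
```

In the view you would build output from the two results and return the HttpResponse as before; because the calls overlap, the view waits roughly max(t1, t2) instead of t1 + t2, and threads avoid the fork-related issues that multiprocessing can hit under Apache/WSGI.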
| stackoverflow | {
"language": "en",
"length": 84,
"provenance": "stackexchange_0000F.jsonl.gz:909484",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681513"
} |
ca1438983ba67b70c0ae1b5166a601c02afd400b | Stackoverflow Stackexchange
Q: Testing: blur emulate I would like to test simple angular component with input.
So the example at the bottom has a little preparation for the test; on blur, a test function in the component should fire and show a log, but I get no logs in the console. I tried 2 cases: getting the div native element and clicking it, and using the blur() function on the input native element. In the Angular app the blur occurs successfully, but it doesn't work in the test. How can I fix it?
@Component({
template: '<div><input [(ngModel)]="str" (blur)="testFunc($event)" /></div>'
})
class TestHostComponent {
it: string = '';
testFunc = () => {
console.log('blur!!!');
}
}
describe('blur test', () => {
let component: TestHostComponent;
let fixture: ComponentFixture<TestHostComponent>;
let de: DebugElement;
let inputEl: DebugElement;
beforeEach(() => { /* component configuration, imports... */ })
beforeEach(() => {
fixture = TestBed.createComponent(TestHostComponent);
component = fixture.componentInstance;
de = fixture.debugElement;
inputEl = fixture.debugElement.query(By.css('input'));
fixture.detectChanges();
})
it('test', async(() => {
const inp = inputEl.nativeElement;
inp.value = 123;
inp.dispatchEvent(new Event('input'));
fixture.detectChanges();
expect(component.it).toEqual('123');
inp.blur();
const divEl = fixture.debugElement.query(By.css('div'));
divEl.nativeElement.click();
}))
}
A: You can use dispatchEvent to emulate a blur:
inp.dispatchEvent(new Event('blur'));
A: Use
dispatchFakeEvent(inp, 'blur');
and here is dispatchFakeEvent:
export function createFakeEvent(type: string) {
const event = document.createEvent('Event');
event.initEvent(type, true, true);
return event;
}
export function dispatchFakeEvent(node: Node | Window, type: string)
{
node.dispatchEvent(createFakeEvent(type));
}
| stackoverflow | {
"language": "en",
"length": 215,
"provenance": "stackexchange_0000F.jsonl.gz:909509",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681595"
} |
99721c5c4ea9e0721b4d9ce02170ee4dd3794e02 | Stackoverflow Stackexchange
Q: Transfer Data From Apple Watch To iPhone in Swift I have been trying for multiple hours to find a way to send a global variable from the Apple Watch to the iPhone.
There are a lot of questions about transferring data from iPhone to Apple Watch but not vice versa in Swift 3.
This is not a duplicate because I want to transfer data from the Apple Watch to iPhone, not vice Versa.
How can I do this?
A: Transferring data from Apple Watch to iPhone is very similar to vice versa.
For global variable you probably should use updateApplicationContext() of WCSession:
let session = WCSession.default()
if session.activationState == .activated {
session.updateApplicationContext(["my_global": g_myGlobal])
}
On the iPhone you should assign delegate to the default WCSession and activate it. In the WCSessionDelegate, implement the following method:
func session(_ session: WCSession, didReceiveApplicationContext applicationContext: [String : Any]) {
let receivedGlobal = applicationContext["my_global"] as? TypeOfTheGlobal
}
Alternatively you can use sendMessage(_:replyHandler:errorHandler:) but this will require iPhone to be reachable.
In general I would recommend you to follow Zak Kautz advice and read about WatchConnectivity. This is the most common way to communicate between watch and phone.
A: Using the watch connectivity framework, you can implement two-way communication.
| stackoverflow | {
"language": "en",
"length": 204,
"provenance": "stackexchange_0000F.jsonl.gz:909520",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681638"
} |
a17dcc785ac81ba41fd2927bbf6da2a11b1a5ab9 | Stackoverflow Stackexchange
Q: Stretch the last element in UIStackView I have a vertical UIStackView which contains two horizontal UIStackViews; if the bottom UIStackView is wider than the top one, the top UIStackView is stretched to the bottom one's width.
And now I have the first element in the top UIStackView stretched, but I want to stretch the last element.
Current:
Expected:
How can I achieve this?
A: Assuming your labels have the default Horizontal Content Hugging Priority set to 251 each, change the Green Label to 252
Everything else you should be able to leave as you have it.
| stackoverflow | {
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:909525",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681649"
} |
8bff430bdffb976919d85ab8fec65c8b06483faa | Stackoverflow Stackexchange
Q: linking std::experimental::filesystem with Xcode 9 I am using std::experimental::filesystem with Xcode 9.0 beta. The compiler phase completes OK but the linker complains of undefined symbols:
std::experimental::filesystem::v1::path::__filename() const
std::experimental::filesystem::v1::path::__filename() const
std::experimental::filesystem::v1::path::__stem() const
std::experimental::filesystem::v1::__status(std::experimental::filesystem::v1::path const&, std::__1::error_code*)
I am also using std::experimental::filesystem::canonical(), but the linker isn't complaining about that being missing.
How can I configure the project to include these missing references?
UPDATE:
I have been experimenting with a simpler program. If I just use canonical() then the linker complains that it is missing.
I can use std::experimental::optional - everything compiles, links and runs OK. But 'optional' is a template and so probably doesn't involve a library.
A: Are you compiling with any -std= option ("C++ Language Dialect" option in Xcode)?
| stackoverflow | {
"language": "en",
"length": 119,
"provenance": "stackexchange_0000F.jsonl.gz:909542",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681703"
} |
f774a5b73f8378728f898cb981ce7c9b7120eddd | Stackoverflow Stackexchange
Q: Using split pop and join all at once can it work? I really love Javascript and I wrote my code like this. I feel like it should work. Am I doing it in the wrong order? If it won't work like this why not?
var mydate = new Date();
alert( mydate.toLocaleTimeString().split(":").pop().join(':'));
split() makes it an array, pop() takes off the end of the array, join() makes it a string again right?
A: You could use Array#slice with a negative end/second argument.
Array#pop returns the last element, but not the array itself. slice returns a copy of the array with all emements from start without the last element.
var mydate = new Date();
console.log(mydate.toLocaleTimeString().split(":").slice(0, -1).join(':'));
A: No, pop() will remove the last element from the array and return it.
To achieve what you're trying, you'll need to first assign the result of split() to a variable you can then reference:
var mydate = new Date(),
myarr = mydate.toLocaleTimeString().split(':');
myarr.pop();
console.log(myarr.join(':'));
A: if all you want to achieve is hours:minutes, you can just simply do this
var mydate = new Date();
console.log(mydate.getHours() + ':' + mydate.getMinutes());
A: You are trying to use method chaining, where the next method in the chain uses the output of the previously executed method. The reason it's not working is that join() is a method on the Array prototype, but pop() returns the removed element (a string here), which doesn't have that method, hence the error.
refactor your code as below:
var myDate = new Date(),
myDateArr = myDate.toLocaleTimeString().split(':');
myDateArr.pop(); // Remove the seconds
myDate = myDateArr.join(':'); // Returns string
console.log(myDate);
Hope this helps.
A: Try this
var mydate = new Date();
alert( mydate.toLocaleTimeString().split(":").slice(0, 2).join(":"));
| stackoverflow | {
"language": "en",
"length": 274,
"provenance": "stackexchange_0000F.jsonl.gz:909557",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681742"
} |
5ef595e2698e3183acf39741095df44f0c3f3170 | Stackoverflow Stackexchange
Q: Federating multiple Fuseki endpoints - authentication We are running multiple fuseki servers. We want to run Sparql queries using data from any number of them. That means using the SERVICE key word, no problem.
How do we set up authentication in Fuseki server A to access Fuseki server B?
Presumably it is done using a service .ttl file, but after waterboarding google for an hour it still won't give an straight answer.
| stackoverflow | {
"language": "en",
"length": 73,
"provenance": "stackexchange_0000F.jsonl.gz:909564",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681769"
} |
21c9db05b77c445c2d9916c1d30414f8a7fdb07f | Stackoverflow Stackexchange
Q: Webpack 3 no scope hoisting and no fallback output with --display-optimization-bailout cli flag upgraded to webpack 3 and using the ModuleConcatenationPlugin plugin (followed instructions webpack 3: Official Release!!). i'm not seeing any difference in a diff of my bundle with the plugin added vs without and still seeing all the function closures.
added the --display-optimization-bailout flag to for output on why hoisting was prevented, but i don't see any output.
looked around the webpack source a bit and it appears that bailout reasons are not being set because module.meta.harmonyModule is falsy, not sure why...
https://github.com/webpack/webpack/blob/master/lib/optimize/ModuleConcatenationPlugin.js#L42
anyone else have a similar issue?
| stackoverflow | {
"language": "en",
"length": 102,
"provenance": "stackexchange_0000F.jsonl.gz:909566",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681775"
} |
4c3d1d0ca3228359aa9965bd78eaedaad60e25f6 | Stackoverflow Stackexchange
Q: Ignoring NaN in a dataframe I want to find the unique elements in a column of a dataframe which has missing values. I tried this: df[Column_name].unique(), but it returns nan as one of the elements. What can I do to just ignore the missing values?
The dataframe looks like this (see linked image).
A: Try calling .dropna() right before your call to .unique(). A working example:
import pandas as pd
import numpy as np
df = pd.DataFrame({'col1': np.random.randint(0, 10, 12)})
df.loc[2] = np.nan
df.loc[5] = np.nan
df['col1'].unique()
### output: array([ 4., 0., nan, 8., 1., 3., 2., 6.])
df['col1'].dropna().unique()
### output: array([ 4., 0., 8., 1., 3., 2., 6.])
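The dropna-before-unique pattern can be verified with a tiny self-contained example (the column values here are made up for illustration):

```python
import numpy as np
import pandas as pd

# A column with one missing value.
s = pd.Series([4.0, 0.0, np.nan, 8.0, 4.0], name="col1")

with_nan = s.unique()              # nan shows up among the "unique" values
without_nan = s.dropna().unique()  # dropping nan first leaves only real values

assert any(pd.isna(v) for v in with_nan)
assert list(without_nan) == [4.0, 0.0, 8.0]
```

The same idea applies unchanged to a full DataFrame column: df['col1'].dropna().unique().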
| stackoverflow | {
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:909567",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681776"
} |
3790bff2c36339148422015c2bfb6cad9dad6edc | Stackoverflow Stackexchange
Q: PrimeNG autocomplete with object binding Currently with my autocomplete setup my input field shows "[object Object]" rather than the appropriate property of the selected suggestion.
The suggestions themselves render ok, showing the groupName and groupDescription properties correctly, but after selection my input is just rendering the object rather than the 'groupName' field like I was hoping the [field] attribute would instruct.
<p-autoComplete [(ngModel)]="groupSearchText" [suggestions]="groupResults" (completeMethod)="search($event)" [field]="groupName" [size]="30" [minLength]="3">
<template let-group pTemplate="item">
<div class="ui-helper-clearfix" style="border-bottom:1px solid #D5D5D5">
<div style="font-size:18px;margin:10px 10px 0 0">{{group.groupName}}</div>
<div style="font-size:10px;margin:10px 10px 0 0">{{group.groupDescription}}</div>
</div>
</template>
</p-autoComplete>
A: Change [field]="groupName" to field="groupName"
If you look at PrimeNG's doc, they dont use [] for field either.
Example from PrimeNG doc:
<p-autoComplete [(ngModel)]="countries" [suggestions]="filteredCountriesMultiple" (completeMethod)="filterCountryMultiple($event)" styleClass="wid100"
[minLength]="1" placeholder="Countries" field="name" [multiple]="true">
</p-autoComplete>
I also tested in my own app using [field], caused the same problem you mentioned.
| stackoverflow | {
"language": "en",
"length": 137,
"provenance": "stackexchange_0000F.jsonl.gz:909585",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681844"
} |
0113baed5c85b6a7008840b9c59b7e97ab0f193c | Stackoverflow Stackexchange
Q: How to quickly get the last line from a .csv file over a network drive? I store thousands of time series in .csv files on a network drive. Before I update the files, I first get the last line of the file to see the timestamp and then I update with data after that timestamp. How can I quickly get the last line of a .csv file over a network drive so that I don't have to load the entire huge .csv file only to use the last line?
A: There is a nifty reversed tool for this, assuming you are using the built-in csv module:
how to read a csv file in reverse order in python
In short:
import csv
with open('some_file.csv', 'r') as f:
for row in reversed(list(csv.reader(f))):
print(', '.join(row))
In my test file of:
1: test, 1
2: test, 2
3: test, 3
This outputs:
test, 3
test, 2
test, 1
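Note that the approach above still reads the whole file. A common way to avoid that is to seek to the end and read only the last chunk of bytes — a hedged sketch (last_line and the 1024-byte chunk size are illustrative choices of mine; the chunk must be at least as long as the file's last line):

```python
import os

def last_line(path, chunk=1024):
    """Return the last non-empty line of a file without reading all of it."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        # Jump back at most `chunk` bytes from the end and read only the tail.
        f.seek(max(0, size - chunk))
        tail = f.read().splitlines()
    # Ignore the empty trailing entry caused by a final newline, if any.
    for line in reversed(tail):
        if line.strip():
            return line.decode()
    return ""
```

Over a network drive this transfers only the tail of the file instead of the entire .csv.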
| stackoverflow | {
"language": "en",
"length": 155,
"provenance": "stackexchange_0000F.jsonl.gz:909592",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681865"
} |
8bef61ede4e76879b1daf2123d0c95d62d1c9dfb | Stackoverflow Stackexchange
Q: Rails lightweight/raw WebSockets I have a Rails app that I want to use for some WebSocket stuff without ActionCable imposing Rails specific client-side structure on a separate JS app as well as some inter-process/server overhead on the server for every message (Redis).
I'd still like to use Rails for the server instances so I can directly use ActiveRecord and other such Rails components, but is it possible to strip out the extra stuff (channels etc.) from ActionCable to just get plain messages?
And ideally some control over what instance the socket connects to (e.g. to use the chat room example, make everyone that joins a given room connects to the same Rails process, e.g. via a URL query parameter, and not a tonne of separate subdomains/"separate apps")?
| stackoverflow | {
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:909610",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681920"
} |
efd6c7fa9639d7f76ac380eeb5e9150a9e91467e | Stackoverflow Stackexchange
Q: Width and background color of ng bootstrap tooltip I need to modify the width of the tooltip box and the background of it too. How can I achieve it? I am using angular2 and ng bootstrap.
<i class="fa fa-info-circle info-icon-background" [ngbTooltip]="tooltipContent" aria-hidden="true" ></i>
I have tried putting the following in my "task-modal.component.css" css file but it does not seem to work. Please help.
.tooltip-inner{
width: 400px;
background-color: #FFFFFF;
}
In my angular component, I specify the css file as:
@Component({
selector: 'task-modal',
templateUrl: './task-modal.component.html',
styleUrls: ['task-modal.component.css'],
providers: [TasksService]
})
A: I think you are trying to style an element that is outside of your component's encapsulation.
Give this a shot:
:host >>> .tooltip-inner {
width: 400px;
background-color: #FFFFFF;
}
Note: The use of /deep/ and >>> in Angular 2
>>> Appears to be deprecated. Best way is to have a global style sheet for styles that need to break encapsulation.
You can also break the component's encapsulation for styling with the following: encapsulation: ViewEncapsulation.None. However, I personally prefer breaking it on a case-by-case basis.
@Component({
selector: 'task-modal',
templateUrl: './task-modal.component.html',
styleUrls: ['task-modal.component.css'],
providers: [TasksService],
encapsulation: ViewEncapsulation.None
})
Documentation
https://angular.io/api/core/ViewEncapsulation
A: Did you make sure to set encapsulation to ViewEncapsulation.None in the ts file of the component?
@Component({
selector: 'task-modal',
encapsulation: ViewEncapsulation.None,
templateUrl: './task-modal.component.html',
styleUrls: ['task-modal.component.css'],
providers: [TasksService]
})
in the html add a tooltipClass:
<i class="fa fa-info-circle info-icon-background" tooltipClass="custom-tooltip-class" [ngbTooltip]="tooltipContent" aria-hidden="true" ></i>
and in your css styles use the custom class:
.custom-tooltip-class .tooltip-inner{
width: 400px;
background-color: #FFFFFF;
}
.custom-tooltip-class .arrow::before {
border-top-color: #FFFFFF;
}
https://ng-bootstrap.github.io/#/components/tooltip/examples
A: add this in your css file, in your case task-modal.component.css,
Angular 2:
/deep/ .tooltip-inner {
width: 400px;
background-color: #FFFFFF;
}
Angular 4.3.0
/deep/ was deprecated in Angular 4.3.0 and ::ng-deep is now preferred,
::ng-deep .tooltip-inner {
width: 400px;
background-color: #FFFFFF;
}
A: You can use custom class, define it:
.my-custom-class .tooltip-inner {
max-width: 400px;
width: 400px;
}
and use in tooltip:
<button type="button" class="btn btn-outline-secondary" ngbTooltip="Nice class!"
tooltipClass="my-custom-class">
Tooltip with custom class
</button>
| stackoverflow | {
"language": "en",
"length": 331,
"provenance": "stackexchange_0000F.jsonl.gz:909632",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44681992"
} |
a72183107870c50f7f831e0cb4044cf129b4d93f | Stackoverflow Stackexchange
Q: Http service cache I'm trying to implement caching in an Angular application for an HTTP service.
My code in service countriesService
public get(): Observable<any> {
return this.http.get(`/countries`, {})
.map(res => res.json())
.publishReplay(1)
.refCount();
}
In component CountriesComponent, I have
ngOnInit() {
this.countriesService.get()
.subscribe(res => {
this.countries = res.countries;
});
}
I'm loading component in route config
const appRoutes: Routes = [
{ path: 'countries', component: CountriesComponent },
{ path: 'cities', component: CitiesComponent },
];
Every time I return from cities to countries, I see a request to => /countries. It shouldn't fire a request, as the response should be cached (that's how it works in Angular 1.x with promises), but not with Angular 4 and RxJS.
A: You can save the countries in the service the first time; after that you can reuse the service's variable.
public get(): Observable<any> {
if(this.countries != null)
{
return Observable.of(this.countries );
}
else
{
return this.http.get(`/countries`, {})
.map(res => res.json())
.do(countries => this.countries = countries )
.publishReplay(1)
.refCount();
}
}
| stackoverflow | {
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:909673",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682115"
} |
e1513a48c89230c1c6bdd0a3ee252390595c3cc3 | Stackoverflow Stackexchange
Q: PHP str_replace doesnt replace character "°" Hello !
I am getting a String from a .xlsx file ->"N°X"
I would like to replace "N°" with "Num_", giving "Num_X".
I am trying to do this with
$var = str_replace("N°","Num_",$var);
But nothing is replaced (according to echo $var)
The problem comes from the ° character, because when I try to replace some other string (one without °), str_replace works.
Any suggestions ?
A: Ensure that input string is UTF8.
$var = "N°X";
print mb_detect_encoding($var);
If you don't get UTF-8 out of this, convert it:
$var = mb_convert_encoding($var, 'UTF-8');
And then your str_replace will work as intended.
Another tool that might help you with encoding issues is xxd.
php -r '$var = "N°X"; echo $var;' | xxd
should return
00000000: 4ec2 b058 N..X
which reveals the middle character is encoded as C2B0 hex, which is
Unicode Character 'DEGREE SIGN' (U+00B0). fileformat.info comes handy now.
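The byte values involved can be checked quickly from Python (used here only as a convenient scratchpad — the encodings behave identically in PHP):

```python
degree = "\u00b0"  # DEGREE SIGN

# In UTF-8 the degree sign is two bytes, C2 B0 ...
assert degree.encode("utf-8") == b"\xc2\xb0"
# ... while in Latin-1 it is the single byte B0. A needle in one
# encoding never matches a haystack in the other, which is why
# str_replace appears to "skip" the character.
assert degree.encode("latin-1") == b"\xb0"

# The full string "N°X" in UTF-8, matching the xxd output above.
assert "N\u00b0X".encode("utf-8").hex() == "4ec2b058"
```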
| stackoverflow | {
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:909678",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682131"
} |
d4a6b9f78ae492887b1cd0e4d26032f624c27c1b | Stackoverflow Stackexchange
Q: Add default collation to existing mongodb collection I'm trying to add a default collation to my mongodb collections. It's simple to create a new collection with a collation:
db.createCollection(name, {collation:{locale:"en",strength:1}})
Unfortunately I looked through the docs and didn't see any db.updateCollection function. How am I supposed to add a collation without destroying and recreating all my documents in a new collection?
A: From the collation specifications,
After the initial release of server version 3.4, many users will want
to apply Collations to all operations on an existing collection. Such
users will have to supply the Collation option to each operation
explicitly; however, eventually the majority of users wishing to use
Collations on all operations on a collection will create a collection
with a server-side default. We chose to favor user verbosity right now
over abstracting the feature for short-term gains.
So, as you can see, it's not an option just yet.
A: There's one other option that works for my production needs: Execute mongodump on a collection
mongodump --host hostname --port 32017 --username usr --password pwd --out c:\backup --db my_database --collection my_collection
That will generate two files, one of them named my_collection.metadata.json. Open this file and modify the options property according to the MongoDB docs.
{
"options": {
"collation": {
"locale": "en",
"strength": 1
}
}
...
}
And then restore using mongorestore
mongorestore --host hostname --port 32017 --username usr --password pwd --db contactstore c:\backup\my_database --drop
From then on, any index you create will use that specific collation by default. Unfortunately, this requires a downtime window, so make sure you get one.
| stackoverflow | {
"language": "en",
"length": 260,
"provenance": "stackexchange_0000F.jsonl.gz:909687",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682160"
} |
73c46abf55e8033ceb3dd3f54bb42fe409b9e62e | Stackoverflow Stackexchange
Q: Selecting all XML elements and their values dynamically using LINQ I have the following code which dynamically selects all of the distinct element names, however; I also want to see the values for these elements. How can I do this using LINQ? I am open to doing it other ways as well.
XDocument doc = XDocument.Load("XMLFile1.xml");
foreach (var name in doc.Descendants("QueryResults").Elements()
.Select(x => x.Name).Distinct())
{
}
A: Something like this would work
XDocument doc = XDocument.Load("XMLFile1.xml");
foreach (var name in doc.Descendants("QueryResults").Elements()
    .Select(x => new {Name = x.Name, Value = x.Value}).Distinct())
{
}
A: The accepted query is different from the original one because it changes how Distinct works: it no longer compares only Name but also Value. If you want to see which names have which values, you need to use GroupBy on the Name and get the Value for each item.
var results =
doc
.Descendants("QueryResults")
.Elements()
.GroupBy(x => x.Name, (name, items) => new
{
Name = name,
Values = items.Select(x => x.Value)
});
A: You would just use name.Value, which is a string property of XElement.
| stackoverflow | {
"language": "en",
"length": 180,
"provenance": "stackexchange_0000F.jsonl.gz:909690",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682166"
} |
053d548480c85746e78d865d6dcb70196e240f21 | Stackoverflow Stackexchange
Q: Common Lisp: relative path to absolute Maybe it is a really dumb question, but after playing around with all built-in pathname-family functions and cl-fad/pathname-utils packages I still can't figure out how to convert a relative path to an absolute one (with respect to $PWD):
; let PWD be "/very/long/way"
(abspath "../road/home"); -> "/very/long/road/home"
Where the hypothetical function abspath works just like os.path.abspath() in Python.
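For reference, the os.path.abspath behaviour the question compares against boils down to joining onto the working directory and normalising away the ".." components — a minimal sketch (posixpath is used instead of the real working directory so the example is deterministic):

```python
import posixpath

cwd = "/very/long/way"  # stands in for $PWD from the question

# os.path.abspath performs this same join + normalisation against
# the actual current working directory.
result = posixpath.normpath(posixpath.join(cwd, "../road/home"))
assert result == "/very/long/road/home"
```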
A: The variable *DEFAULT-PATHNAME-DEFAULTS* usually contains your initial working directory, you can merge the pathname with that;
(defun abspath (pathname)
(merge-pathnames pathname *default-pathname-defaults*))
And since this is the default for the second argument to merge-pathnames, you can simply write:
(defun abspath (pathname)
(merge-pathnames pathname))
A: UIOP
Here is what the documentation of UIOP says about cl-fad :-)
UIOP completely replaces it with better design and implementation
A good number of implementations ship with UIOP (used by ASDF3), so it's basically already available when you need it (see "Using UIOP" in the doc.). One of the many functions defined in the library is uiop:parse-unix-namestring, which understands the syntax of Unix filenames without checking if the path designates an existing file or directory. However the double-dot is parsed as :back or :up which is not necessarily supported by your implementation. With SBCL, it is the case and the path is simplified. Note that pathnames allows to use both :back and :up components; :back can be simplified easily by looking at the pathname only (it is a syntactic up directory), whereas :up is the semantic up directory, meaning that it depends on the actual file system. You have a better chance to obtain a canonical file name if the file name exists.
Truename
You can also call TRUENAME, which will probably get rid of the ".." components in your path. See also 20.1.3 Truenames which explains that you can point to the same file by using different pathnames, but that there is generally one "canonical" name.
A: Here's the final solution (based on the previous two answers):
(defun abspath
(path-string)
(uiop:unix-namestring
(uiop:merge-pathnames*
(uiop:parse-unix-namestring path-string))))
uiop:parse-unix-namestring converts the string argument to a pathname, replacing . and .. references; uiop:merge-pathnames* translates a relative pathname to absolute; uiop:unix-namestring converts the pathname back to a string.
Also, if you know for sure what kind of file the path points to, you can use either:
(uiop:unix-namestring (uiop:file-exists-p path))
or
(uiop:unix-namestring (uiop:directory-exists-p path))
because both file-exists-p and directory-exists-p return absolute pathnames (or nil, if file does not exist).
UPDATE:
Apparently in some implementations (like ManKai Common Lisp) uiop:merge-pathnames* does not prepend the directory part if the given pathname lacks ./ prefix (for example if you feed it #P"main.c" rather than #P"./main.c"). So the safer solution is:
(defun abspath
(path-string &optional (dir-name (uiop:getcwd)))
(uiop:unix-namestring
(uiop:ensure-absolute-pathname
(uiop:merge-pathnames*
(uiop:parse-unix-namestring path-string))
dir-name)))
| stackoverflow | {
"language": "en",
"length": 453,
"provenance": "stackexchange_0000F.jsonl.gz:909704",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682199"
} |
105514cd8b26e727b2fac627ce4dbd3243c7a387 | Stackoverflow Stackexchange
Q: Can you use AVG() and PERCENTILE_DISC() functions together in a single Redshift query? I'm having difficulty combining these two functions in to a single query, such as:
WITH time_values AS (
SELECT
(end_time - start_time) * 1.0 / 3600000000 AS num_hours
FROM table
WHERE
end_time >= 1493596800000000
AND start_time < 1493683200000000
)
SELECT
PERCENTILE_DISC(0.25) WITHIN GROUP (ORDER BY num_hours) OVER() AS p25,
PERCENTILE_DISC(0.50) WITHIN GROUP (ORDER BY num_hours) OVER() AS p50,
PERCENTILE_DISC(0.80) WITHIN GROUP (ORDER BY num_hours) OVER() AS p80,
PERCENTILE_DISC(0.99) WITHIN GROUP (ORDER BY num_hours) OVER() AS p99,
AVG(num_hours)
FROM time_values;
This returns ERROR: column "time_values.num_hours" must appear in the GROUP BY clause or be used in an aggregate function
A: AVG() can be both aggregate (requires grouping by some column) and window function (requires frame clause). The error appears because you use a window function PERCENTILE already and there is no frame clause for AVG function though it's not obvious. To use AVG in the same query you need to simulate the clause which will look like
AVG(num_hours) OVER ()
| stackoverflow | {
"language": "en",
"length": 173,
"provenance": "stackexchange_0000F.jsonl.gz:909719",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682256"
} |
871fc7b94f87a76348af438cc321e2631ce5b128 | Stackoverflow Stackexchange
Q: Visual Studio 2017 bin\roslyn files locked during build I am running VS2017 version 26430.13 and every time I try to build a web project I get errors that access to the files in the bin\roslyn is denied.
Over a period of about 5 minutes the files are unlocked and I can build but the 5 minute delay is unacceptable.
These are the files that stay locked:
*Microsoft.CodeAnalysis.CSharp.dll
*Microsoft.CodeAnalysis.dll
*Microsoft.CodeAnalysis.VisualBasic.dll
*Microsoft.DiaSymReader.Native.amd64.dll
*System.Collections.Immutable.dll
*System.Diagnostics.FileVersionInfo.dll
*System.IO.Compression.dll
*System.IO.FileSystem.dll
*System.IO.FileSystem.Primitives.dll
*System.Reflection.Metadata.dll
*System.Security.Cryptography.Algorithms.dl
*System.Security.Cryptography.Primitives.dl
*System.ValueTuple.dll
*VBCSCompiler.exe
A: UPDATE the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package to V1.0.7
*Find Microsoft.CodeDom.Providers.DotNetCompilerPlatform from NuGet
*Uninstall the old version
*Install V1.0.7 or latest
| Q: Visual Studio 2017 bin\roslyn files locked during build I am running VS2017 version 26430.13 and every time I try to build a web project I get errors that access to the files in the bin\roslyn is denied.
Over a period of about 5 minutes the files are unlocked and I can build but the 5 minute delay is unacceptable.
These are the files that stay locked:
*
*Microsoft.CodeAnalysis.CSharp.dll
*Microsoft.CodeAnalysis.dll
*Microsoft.CodeAnalysis.VisualBasic.dll
*Microsoft.DiaSymReader.Native.amd64.dll
*System.Collections.Immutable.dll
*System.Diagnostics.FileVersionInfo.dll
*System.IO.Compression.dll
*System.IO.FileSystem.dll
*System.IO.FileSystem.Primitives.dll
*System.Reflection.Metadata.dll
*System.Security.Cryptography.Algorithms.dll
*System.Security.Cryptography.Primitives.dll
*System.ValueTuple.dll
*VBCSCompiler.exe
A: UPDATE the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package to V1.0.7
*
*Find Microsoft.CodeDom.Providers.DotNetCompilerPlatform from NuGet
Uninstall Old version
Install V1.0.7 or latest
A: Project > Manage NuGet Packages... > Installed (tab) > in the search input, type:
codedom
then click Update.
A: I have VS 2017 Enterprise and for me the issue was resolved by this:
*
*Downgraded Microsoft.Net.Compilers from 2.3.1 to 2.3.0
*Downgraded Microsoft.CodeDom.Providers.DotNetCompilerPlatform from 1.0.5 to 1.0.4.
A: Just open task manager and kill any instances of VBCSCompiler.exe. You don't even need to close Visual Studio.
A: Instead of killing the process manually, you may use the following commands in a Pre-Build Event:
tasklist /FI "IMAGENAME eq VBCSCompiler.exe" 2>NUL | find /I /N "VBCSCompiler.exe">NUL
if "%ERRORLEVEL%"=="0" (taskkill /IM VBCSCompiler.exe /F) else (verify >NUL)
A: A workaround is close VS, open task manager and kill any instances of VBCSCompiler.exe
(Thanks Tom John: https://developercommunity.visualstudio.com/content/problem/71302/binroslyn-files-locked-during-build.html)
A: Revert the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package to V1.0.4
This advice came from a comment on the developer community problem report https://developercommunity.visualstudio.com/solutions/79954/view.html.
We were on v1.0.5 and experienced locked files frequently. After reverting the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package to V1.0.4 we are no longer experiencing locked files.
A: Install Microsoft.CodeDom.Providers.DotNetCompilerPlatform.BinFix nuget
It fixed the issue for me
A: For me, updating the NuGet package
Microsoft.Net.Compilers
to the latest at the time of this post (2.7.0) fixed this. It was version 1.3.2.
A: For me, I just opened the solution in File Explorer and deleted the bin folders of all projects in it. Now it's working fine.
A: In VS2017 & VS2019, this can happen when IIS Express is still running. Open the tray next to the clock, hover over the running IIS Express icon, right-click it, click "Exit", confirm the prompt, and close any active worker processes. This can also happen when trying to publish your web project.
A: In my case I did these two steps:
*
*uninstall-package Microsoft.CodeDom.Providers.DotNetCompilerPlatform
*Install-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -Version 1.0.8
A: Before you try anything drastic, restart your computer
A: I was having the same issue in MVC 5. I just opened Nuget Package Manager, searched and updated the following:
Microsoft.CodeDom.Providers.DotNetCompilerPlatform
A: Deleting this dll Microsoft.CodeDom.Providers.DotNetCompilerPlatform solved the issue for me
A: Update the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package to V1.0.6
| stackoverflow | {
"language": "en",
"length": 443,
"provenance": "stackexchange_0000F.jsonl.gz:909722",
"question_score": "65",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682281"
} |
9a02b583e2119812d1da7903bf1aaba8c660ccfb | Stackoverflow Stackexchange
Q: R ODBC - Querying Column name with spaces I am trying to query through R ODBC. But one column name has a space in it. For example, [Account No].
I am using this code to query:
esiid_ac <- sqlQuery(myconn, paste("
SELECT * FROM CustomerUsage WHERE ((CustomerUsage.Account No ='", 12345, "')) ", sep=""),as.is=TRUE)
I am getting the following error:
[1] "42000 102 [Microsoft][ODBC Driver 11 for SQL Server][SQL
Server]Incorrect syntax near 'No'." [2]
"[RODBC] ERROR: Could not SQLExecDirect '\n SELECT * FROM
CustomerUsage WHERE ((CustomerUsage.Account No ='678987')) '
How to solve this?
Can I read this table with column index instead of column names?
Thank you.
A: After tinkering around with quotes a little bit, this worked for me:
df <- sqlQuery(myconn, 'SELECT * FROM mytable WHERE "column name" =123', as.is=TRUE)
| Q: R ODBC - Querying Column name with spaces I am trying to query through R ODBC. But one column name has a space in it. For example, [Account No].
I am using this code to query:
esiid_ac <- sqlQuery(myconn, paste("
SELECT * FROM CustomerUsage WHERE ((CustomerUsage.Account No ='", 12345, "')) ", sep=""),as.is=TRUE)
I am getting the following error:
[1] "42000 102 [Microsoft][ODBC Driver 11 for SQL Server][SQL
Server]Incorrect syntax near 'No'." [2]
"[RODBC] ERROR: Could not SQLExecDirect '\n SELECT * FROM
CustomerUsage WHERE ((CustomerUsage.Account No ='678987')) '
How to solve this?
Can I read this table with column index instead of column names?
Thank you.
A: After tinkering around with quotes a little bit, this worked for me:
df <- sqlQuery(myconn, 'SELECT * FROM mytable WHERE "column name" =123', as.is=TRUE)
A: Have you tried square brackets (They work for me when there are special characters in column names)?
esiid_ac <- sqlQuery(myconn, paste(" SELECT * FROM CustomerUsage WHERE ((CustomerUsage.[Account No] ='", 12345, "')) ", sep=""),as.is=TRUE)
A: You can use \"COL_NAME\" instead of COL_NAME and use that as you would always use it. For example:
esiid_ac <- sqlQuery(myconn, "SELECT * FROM CustomerUsage WHERE \"Account No\" = 12345")
A: Can you try putting the column name in square brackets, like [Account No], and then try again?
A: You can try...
df <- sqlQuery(myconn, "SELECT * FROM mytab WHERE `crazy column name` =123", as.is=TRUE)
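The identifier-quoting rules in the answers above can be experimented with outside R; here is a sketch using Python's built-in sqlite3 module (which, like SQL Server, accepts [bracketed] identifiers), with an invented table for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE CustomerUsage ("Account No" TEXT, usage REAL)')
conn.execute("INSERT INTO CustomerUsage VALUES ('12345', 9.5)")

# Square brackets (SQL Server style) and double quotes (ANSI style) both quote
# identifiers containing spaces; single quotes are for string literals only.
r1 = conn.execute(
    "SELECT usage FROM CustomerUsage WHERE [Account No] = '12345'"
).fetchone()
r2 = conn.execute(
    "SELECT usage FROM CustomerUsage WHERE \"Account No\" = '12345'"
).fetchone()
print(r1, r2)  # (9.5,) (9.5,)
```

The backtick style shown in the last answer is MySQL/SQLite syntax and would not be accepted by SQL Server itself.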
| stackoverflow | {
"language": "en",
"length": 227,
"provenance": "stackexchange_0000F.jsonl.gz:909736",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682327"
} |
6811efd676ce2c5328bac9f9b28538d68e32f1f4 | Stackoverflow Stackexchange
Q: Static imports in swift In java you can do import static MyClass and you'll be able to access the static methods of MyClass without having to prepend them with the class name:
myMethod() instead of MyClass.myMethod()
Is there a way to do this in swift?
A: I don't think you can import a static class like in Java; it's a traditional static in Swift, where you call it by class name + variable/function.
class MyClass {
static let baseURL = "someURl"
static func myMethod() {
}
}
MyClass.baseURL or MyClass.myMethod.
What you can do is add a typealias to make a shorter alias for your class.
private typealias M = MyClass
And then use the following: M.baseURL or M.myMethod.
| Q: Static imports in swift In java you can do import static MyClass and you'll be able to access the static methods of MyClass without having to prepend them with the class name:
myMethod() instead of MyClass.myMethod()
Is there a way to do this in swift?
A: I don't think you can import a static class like in Java; it's a traditional static in Swift, where you call it by class name + variable/function.
class MyClass {
static let baseURL = "someURl"
static func myMethod() {
}
}
MyClass.baseURL or MyClass.myMethod.
What you can do is add a typealias to make a shorter alias for your class.
private typealias M = MyClass
And then use the following: M.baseURL or M.myMethod.
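For readers less familiar with the Java side being compared against, the feature the question refers to can be sketched as follows (a minimal, hypothetical example):

```java
// With a static import, members of Math are callable without the Math. prefix.
import static java.lang.Math.max;

public class StaticImportDemo {
    // max() resolves to Math.max thanks to the static import above.
    static int larger(int a, int b) {
        return max(a, b);
    }

    public static void main(String[] args) {
        System.out.println(larger(3, 7)); // prints 7
    }
}
```

Swift has no direct equivalent of this per-member import, which is why the typealias shortening is the closest workaround.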
| stackoverflow | {
"language": "en",
"length": 121,
"provenance": "stackexchange_0000F.jsonl.gz:909758",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682401"
} |
86a28fe2f8bf4414fedb664bb663e70488980dd3 | Stackoverflow Stackexchange
Q: RDS SQL Server Execute permission denied on object in msdb database I have an RDS SQL Server instance into which I am logged in as the master user. When I try to create a category for my jobs, the execution of the SP sp_add_category fails with a permission denied error.
Can anyone please guide me on whether this permission is available for us to change? Obviously, granting exec on the SP didn't work.
EXEC dbo.sp_add_category
@class=N'JOB',
@type=N'LOCAL',
@name=N'AdminJobs' ;
Thanks in advance.
| Q: RDS SQL Server Execute permission denied on object in msdb database I have an RDS SQL Server instance into which I am logged in as the master user. When I try to create a category for my jobs, the execution of the SP sp_add_category fails with a permission denied error.
Can anyone please guide me on whether this permission is available for us to change? Obviously, granting exec on the SP didn't work.
EXEC dbo.sp_add_category
@class=N'JOB',
@type=N'LOCAL',
@name=N'AdminJobs' ;
Thanks in advance.
| stackoverflow | {
"language": "en",
"length": 80,
"provenance": "stackexchange_0000F.jsonl.gz:909773",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682458"
} |
d2d3aff51bd42f53005cb7cacb7a0c9609917941 | Stackoverflow Stackexchange
Q: ngIf - else vs two ngIf conditions Consider the following code sample:
<div *ngIf="condition; else elseBlock">
<!-- markup here -->
</div>
<ng-template #elseBlock>
<div>
<!-- additional markup here -->
</div>
</ng-template>
Another way I can achieve the same functionality is:
<div *ngIf="condition">
<!-- markup here -->
</div>
<div *ngIf="!condition">
<!-- additional markup here -->
</div>
I want to know specific reasons for which of these two ways should be used and why?
A: The first solution is better. Although both achieve the same thing, the first is cleaner and easier to follow logically.
You only do one check with the first solution. This is a good solution because:
*
*It takes time to evaluate a condition.
*By using else, you only have to check once. In your second solution you check whether the condition is met (which takes time), and then check whether its negation is met.
*Using else allows you to have an action to do in the case that somehow neither of the conditions are met.
EDIT: See @toazaburo's answer, which addresses unsafeness.
| Q: ngIf - else vs two ngIf conditions Consider the following code sample:
<div *ngIf="condition; else elseBlock">
<!-- markup here -->
</div>
<ng-template #elseBlock>
<div>
<!-- additional markup here -->
</div>
</ng-template>
Another way I can achieve the same functionality is:
<div *ngIf="condition">
<!-- markup here -->
</div>
<div *ngIf="!condition">
<!-- additional markup here -->
</div>
I want to know specific reasons for which of these two ways should be used and why?
A: The first solution is better. Although both achieve the same thing, the first is cleaner and easier to follow logically.
You only do one check with the first solution. This is a good solution because:
*
*It takes time to evaluate a condition.
*By using else, you only have to check once. In your second solution you check whether the condition is met (which takes time), and then check whether its negation is met.
*Using else allows you to have an action to do in the case that somehow neither of the conditions are met.
EDIT: See @toazaburo's answer, which addresses unsafeness.
A: Using else lets you avoid writing the condition twice, which could be a source of bugs, so I would use that preferentially. You can also put the else template down at the bottom of your template if you so choose, which can make the template easier to read.
If you are unwrapping an observable with the async pipe, then it is much easier to write
<div *ngIf="myItems$ | async as myItems; else noItemsYet">
than to add another <div *ngIf="!(myitems | async)">, which would also force Angular to create another internal subscription.
Those voting to close this question as a matter of opinion are misguided. The fact that people might have different opinions on something does not mean that there are not valid points that are worth putting forward.
If using else was a matter of mere preference, I doubt the Angular developers would have prioritized it for Angular 4 as they did.
As @Ishnark points out, there may be performance implications as well.
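The single-evaluation point can be illustrated outside Angular with plain TypeScript; this is a hypothetical sketch using a condition with a counting side effect, not actual Angular change-detection code:

```typescript
// Simulates Angular evaluating a template expression: each call is one check.
let evaluations = 0;

function condition(): boolean {
  evaluations += 1;
  return true;
}

// Two independent *ngIf bindings: the expression is evaluated twice per check.
evaluations = 0;
const showMain = condition();
const showElse = !condition();
console.log(showMain, showElse, evaluations); // true false 2

// *ngIf="condition; else elseBlock": one evaluation drives both branches.
evaluations = 0;
const matched = condition();
const showThen = matched;
const showElseBlock = !matched;
console.log(showThen, showElseBlock, evaluations); // true false 1
```

If the condition involves an async pipe or a getter with side effects, that extra evaluation is exactly the duplication the else form avoids.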
| stackoverflow | {
"language": "en",
"length": 345,
"provenance": "stackexchange_0000F.jsonl.gz:909785",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682495"
} |
82d34c4450bd99d12132e277094c3559cbe82fa5 | Stackoverflow Stackexchange
Q: How to use the <layout> tag in Android I need to know how to use the <layout> tag in an Android XML file. I know it is used for data binding but I do not have complete knowledge of this. Please let me know if anyone can help me with the same.
Thanks in Advance !!
A: The <layout> tag must be the root tag when you are using DataBinding. Doing so you are telling the compiler that you are using DataBinding and your layout will have special tags like <variable> or <import>, so you have to embed your layout within that tag.
In short, you need to use the <layout> tag whenever you are using DataBinding for the compiler to understand the special tags and generate the DataBinding class with the right variables and methods.
If you have a layout like this (layout_data_binding.xml):
<layout xmlns:android="http://schemas.android.com/apk/res/android">
<data>
<variable name="user" type="com.example.User"/>
</data>
<LinearLayout
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<TextView android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@{user.firstName}"/>
<TextView android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@{user.lastName}"/>
</LinearLayout>
</layout>
Based on what is inside the <layout> tag, it creates the auto-generated LayoutDataBinding class with the User variable and its getters and setters.
| Q: How to use the <layout> tag in Android I need to know how to use the <layout> tag in an Android XML file. I know it is used for data binding but I do not have complete knowledge of this. Please let me know if anyone can help me with the same.
Thanks in Advance !!
A: The <layout> tag must be the root tag when you are using DataBinding. Doing so you are telling the compiler that you are using DataBinding and your layout will have special tags like <variable> or <import>, so you have to embed your layout within that tag.
In short, you need to use the <layout> tag whenever you are using DataBinding for the compiler to understand the special tags and generate the DataBinding class with the right variables and methods.
If you have a layout like this (layout_data_binding.xml):
<layout xmlns:android="http://schemas.android.com/apk/res/android">
<data>
<variable name="user" type="com.example.User"/>
</data>
<LinearLayout
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<TextView android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@{user.firstName}"/>
<TextView android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@{user.lastName}"/>
</LinearLayout>
</layout>
Based on what is inside the <layout> tag, it creates the auto-generated LayoutDataBinding class with the User variable and its getters and setters.
| stackoverflow | {
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:909798",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682525"
} |
b360b9bbd669bd20a1c44098e4a7411f6832e640 | Stackoverflow Stackexchange
Q: Azure How to find maximum NICs that can be attached to a VM? I am looking for a REST API that gives me the maximum number of NICs that can be attached to a VM based on the VM size.
I have searched the Azure REST API references, but I couldn't find any such API. I am able to use the below API to get the max. data disks that can be attached to a VM; I also need to get the max. NICs. Any help on how I can get this information?
https://management.azure.com/subscriptions/xxxxxx-xxxx-xxxx-xxxx-xxxxxxx/providers/Microsoft.Compute/locations/westus/vmSizes?api-version=2016-03-30
Sample output:
{
"name": "Standard_DS1_v2",
"numberOfCores": 1,
"osDiskSizeInMB": 1047552,
"resourceDiskSizeInMB": 7168,
"memoryInMB": 3584,
"maxDataDiskCount": 4
},
A: Well, it's dependent on the size of the VM. Check this article; it has got everything you need in it.
| Q: Azure How to find maximum NICs that can be attached to a VM? I am looking for a REST API that gives me the maximum number of NICs that can be attached to a VM based on the VM size.
I have searched the Azure REST API references, but I couldn't find any such API. I am able to use the below API to get the max. data disks that can be attached to a VM; I also need to get the max. NICs. Any help on how I can get this information?
https://management.azure.com/subscriptions/xxxxxx-xxxx-xxxx-xxxx-xxxxxxx/providers/Microsoft.Compute/locations/westus/vmSizes?api-version=2016-03-30
Sample output:
{
"name": "Standard_DS1_v2",
"numberOfCores": 1,
"osDiskSizeInMB": 1047552,
"resourceDiskSizeInMB": 7168,
"memoryInMB": 3584,
"maxDataDiskCount": 4
},
A: Well, it's dependent on the size of the VM. Check this article; it has got everything you need in it.
| stackoverflow | {
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:909838",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682619"
} |
8194fb52b405933b788aba26dec02d1d1ba0ed79 | Stackoverflow Stackexchange
Q: Is it possible to use Metal shader code in an iPad Swift Playground? Is it possible to use custom Metal shader code in an iPad Swift Playground?
If so, how does one get the *.metal file or code (or its pre-compiled object) onto an iPad to use with a Playground and Swift code? Is the use of Xcode on a Mac required to help?
(this question is not about using the built-in performance shaders, or about running stuff in a Playground on a Mac)
A: You have two options: either create an iOS playground in Xcode and send it to your iPad but then you are not allowed to edit the .metal file, just read it, like in this example; or you can create your shaders file as a multiline string (now possible in Swift 4 and Xcode 9 beta) and create your library from that string. A more cumbersome way is to concatenate string lines in Xcode 8/Swift 3 like in this example.
| Q: Is it possible to use Metal shader code in an iPad Swift Playground? Is it possible to use custom Metal shader code in an iPad Swift Playground?
If so, how does one get the *.metal file or code (or its pre-compiled object) onto an iPad to use with a Playground and Swift code? Is the use of Xcode on a Mac required to help?
(this question is not about using the built-in performance shaders, or about running stuff in a Playground on a Mac)
A: You have two options: either create an iOS playground in Xcode and send it to your iPad but then you are not allowed to edit the .metal file, just read it, like in this example; or you can create your shaders file as a multiline string (now possible in Swift 4 and Xcode 9 beta) and create your library from that string. A more cumbersome way is to concatenate string lines in Xcode 8/Swift 3 like in this example.
| stackoverflow | {
"language": "en",
"length": 165,
"provenance": "stackexchange_0000F.jsonl.gz:909846",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682638"
} |
404fc5e49d9e5d19b9d81f5aeb6b7d05aed74ffb | Stackoverflow Stackexchange
Q: Ruby on Rails bootstrap alert box In my application I show messages to users like this way.
<% flash.each do |key, value| %>
<div class="alert alert-<%= key %> alert-dismissible">
<button type="button" class="close" data-dismiss="alert" aria-label="Close"><span aria-hidden="true">×</span></button>
<%= value %>
</div>
<% end %>
I have following lines in my application
.alert-notice {
@extend .alert-warning;
}
When I redirect user with redirect_to root_url, :notice => 'messages' everything perfect. Notice message can be seen. However when I direct user with redirect_to root_url, :info => 'messages', No message can be seen. I have debugged code and realised that flash is empty that condition.
It's ok:
redirect_to root_url, :notice => 'messages'
Here is problem:
redirect_to root_url, :info => 'messages'
Any suggestions ?
Thanks.
A: If you want to use info you have to add this to your ApplicationController (add_flash_types is a controller class method):
add_flash_types :info
You can add as many extra types as you want, e.g.
add_flash_types :info, :success, :warning, :danger
Extra:
I would also suggest you get used to the new hash notation in Ruby
redirect_to root_url, info: 'messages' # instead of :info => ...
| Q: Ruby on Rails bootstrap alert box In my application I show messages to users like this way.
<% flash.each do |key, value| %>
<div class="alert alert-<%= key %> alert-dismissible">
<button type="button" class="close" data-dismiss="alert" aria-label="Close"><span aria-hidden="true">×</span></button>
<%= value %>
</div>
<% end %>
I have following lines in my application
.alert-notice {
@extend .alert-warning;
}
When I redirect user with redirect_to root_url, :notice => 'messages' everything perfect. Notice message can be seen. However when I direct user with redirect_to root_url, :info => 'messages', No message can be seen. I have debugged code and realised that flash is empty that condition.
It's ok:
redirect_to root_url, :notice => 'messages'
Here is problem:
redirect_to root_url, :info => 'messages'
Any suggestions ?
Thanks.
A: If you want to use info you have to add this to your ApplicationController (add_flash_types is a controller class method):
add_flash_types :info
You can add as many extra types as you want, e.g.
add_flash_types :info, :success, :warning, :danger
Extra:
I would also suggest you get used to the new hash notation in Ruby
redirect_to root_url, info: 'messages' # instead of :info => ...
A: :info is not a valid option for redirect (only :alert and :notice can be used that way); so, to use info, you must assign it directly to flash.
Try this instead:
flash[:info] = 'messages'
redirect_to root_url
Or this:
redirect_to root_url, flash: { info: 'messages' }
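The mechanism behind add_flash_types can be sketched in plain Ruby. This is a simplified, framework-free illustration, not the actual Rails implementation; FakeController and its methods are invented for the example:

```ruby
# Sketch of what add_flash_types does conceptually: registered types become
# redirect options that are copied into the flash hash.
class FakeController
  def self.add_flash_types(*types)
    @flash_types = types
  end

  def self.flash_types
    @flash_types || []
  end

  attr_reader :flash

  def initialize
    @flash = {}
  end

  def redirect_to(_url, **options)
    self.class.flash_types.each do |type|
      @flash[type] = options[type] if options.key?(type)
    end
  end
end

FakeController.add_flash_types(:info, :success, :warning, :danger)
c = FakeController.new
c.redirect_to("/", info: "messages")
puts c.flash[:info]  # => messages
```

This also shows why an unregistered type like :info is silently dropped until you declare it.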
| stackoverflow | {
"language": "en",
"length": 222,
"provenance": "stackexchange_0000F.jsonl.gz:909866",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682718"
} |
fbe50b3c4816e3df5b50cf066e55be857095a9e8 | Stackoverflow Stackexchange
Q: Yield in anonymous function in PHP I was refactoring my code and ran into a problem: I want to do something like this:
public function myFunc($time)
{
$this->otherFunc(function () {
try {
yield;
return true;
}
catch (RuntimeException $re) {
// Don't care
}
}, $time);
}
But when I do
foreach($this->myFunc(1000) as $action) {
// Code block here...
}
It raises an error.
I think the problem is the yield inside the anonymous function. Is there any way to do it?
I have also tried a Closure, but it doesn't work:
public function myFunc($time)
{
$yield = yield;
$this->otherFunc(function($test) use ($yield) {
try {
$yield;
return true;
}
catch (RuntimeException $re) {
// Don't care
}
}, $time);
}
and ...
foreach($this->myFunc(1000) as $action) {
{
// Code block here...
}
}
Features:
*
*PHP: v5.6
*Framework: none
*Desperation level: medium/high XP
Thanks in advance!
UPDATE:
Real example with Selenium Test:
public function doWaitUntil($time)
{
$this->waitUntil(function($testCase) {
try {
yield($testCase);
return true;
}
catch (RuntimeException $re) {}
}, $time);
}
public function exampleTest()
{
foreach($this->doWaitUntil(2000) as $action) {
$action->byId('whatever')->click();
}
foreach($this->doWaitUntil(1000) as $action) {
$action->byId('whatever-dependent-element')->click();
}
// ...
}
| Q: Yield in anonymous function in PHP I was refactoring my code and ran into a problem: I want to do something like this:
public function myFunc($time)
{
$this->otherFunc(function () {
try {
yield;
return true;
}
catch (RuntimeException $re) {
// Don't care
}
}, $time);
}
But when I do
foreach($this->myFunc(1000) as $action) {
// Code block here...
}
It raises an error.
I think the problem is the yield inside the anonymous function. Is there any way to do it?
I have also tried a Closure, but it doesn't work:
public function myFunc($time)
{
$yield = yield;
$this->otherFunc(function($test) use ($yield) {
try {
$yield;
return true;
}
catch (RuntimeException $re) {
// Don't care
}
}, $time);
}
and ...
foreach($this->myFunc(1000) as $action) {
{
// Code block here...
}
}
Features:
*
*PHP: v5.6
*Framework: none
*Desperation level: medium/high XP
Thanks in advance!
UPDATE:
Real example with Selenium Test:
public function doWaitUntil($time)
{
$this->waitUntil(function($testCase) {
try {
yield($testCase);
return true;
}
catch (RuntimeException $re) {}
}, $time);
}
public function exampleTest()
{
foreach($this->doWaitUntil(2000) as $action) {
$action->byId('whatever')->click();
}
foreach($this->doWaitUntil(1000) as $action) {
$action->byId('whatever-dependent-element')->click();
}
// ...
}
| stackoverflow | {
"language": "en",
"length": 193,
"provenance": "stackexchange_0000F.jsonl.gz:909867",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682719"
} |
ca9e759fa152a9a294ee609a8aba81929b884eb1 | Stackoverflow Stackexchange
Q: Calculate the code coverage of Selenium tests in Java In IntelliJ, I am coding a Java Spring project built with Maven. For that project, I created some Selenium tests to test my web application and they work as intended. Due to problems with Spring annotations I wasn't able to create JUnit tests for my controllers, so I wanted to test them together with Selenium.
I can get the code coverage of my JUnit tests just fine, but I haven't achieved the same for the Selenium tests. I tried using the integrated code coverage plugin of IntelliJ, Emma, and JaCoCo, but none of them give me any results.
I have already searched on StackOverflow, but all results I get are either with third-party tools, changing some configuration with my tomcat server + maven (I don't know much about these topics) and again JaCoCo (which doesn't work for me). Isn't there any easy way to achieve this within IntelliJ? JUnit code coverage works too, so why not selenium? Any help would be highly appreciated.
| Q: Calculate the code coverage of Selenium tests in Java In IntelliJ, I am coding a Java Spring project built with Maven. For that project, I created some Selenium tests to test my web application and they work as intended. Due to problems with Spring annotations I wasn't able to create JUnit tests for my controllers, so I wanted to test them together with Selenium.
I can get the code coverage of my JUnit tests just fine, but I haven't achieved the same for the Selenium tests. I tried using the integrated code coverage plugin of IntelliJ, Emma, and JaCoCo, but none of them give me any results.
I have already searched on StackOverflow, but all results I get are either with third-party tools, changing some configuration with my tomcat server + maven (I don't know much about these topics) and again JaCoCo (which doesn't work for me). Isn't there any easy way to achieve this within IntelliJ? JUnit code coverage works too, so why not selenium? Any help would be highly appreciated.
| stackoverflow | {
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:909881",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682774"
} |
eeb8b3f8fed52ec8b8ee7c6095407f4720ffa3be | Stackoverflow Stackexchange
Q: Spring boot application need to connect weblogic oracle datasource The spring boot application by default is connecting to derby embedded database as shown in the below statement.
Starting embedded database: url='jdbc:derby:memory:testdb;create=true', username='sa'
I don't know where it is picking the above url from
I need to connect to a WebLogic Oracle datasource. I put the following properties in the application.properties of the application, but it's not picking up the properties below:
spring.jpa.hibernate.ddl-auto=create-drop
# Oracle settings
spring.datasource.url=jdbc:oracle:thin:@//localhost:1521/XE
spring.datasource.username=system
spring.datasource.password=vasu
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
Added the following entry in pom.xml
<dependency>
<groupId>com.github.noraui</groupId>
<artifactId>ojdbc7</artifactId>
<version>12.1.0.2</version>
</dependency>
A: I assume you already have your Oracle datasource defined in WebLogic, so you need neither the Oracle driver in your application classpath nor the spring.datasource.{url,username,password,driver-class-name} properties.
What you need instead is the spring.datasource.jndi-name property. Just set it to the JNDI name of your datasource in WebLogic and Spring will pick it up just like that.
Of course you have to have an Oracle driver in the WebLogic classpath (the lib directory or something like that).
spring.datasource.jndi-name=java:jdbc/OracleDS
Documentation.
| Q: Spring boot application need to connect weblogic oracle datasource The spring boot application by default is connecting to derby embedded database as shown in the below statement.
Starting embedded database: url='jdbc:derby:memory:testdb;create=true', username='sa'
I don't know where it is picking the above url from
I need to connect to a WebLogic Oracle datasource. I put the following properties in the application.properties of the application, but it's not picking up the properties below:
spring.jpa.hibernate.ddl-auto=create-drop
# Oracle settings
spring.datasource.url=jdbc:oracle:thin:@//localhost:1521/XE
spring.datasource.username=system
spring.datasource.password=vasu
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
Added the following entry in pom.xml
<dependency>
<groupId>com.github.noraui</groupId>
<artifactId>ojdbc7</artifactId>
<version>12.1.0.2</version>
</dependency>
A: I assume you already have your Oracle datasource defined in WebLogic, so you need neither the Oracle driver in your application classpath nor the spring.datasource.{url,username,password,driver-class-name} properties.
What you need instead is the spring.datasource.jndi-name property. Just set it to the JNDI name of your datasource in WebLogic and Spring will pick it up just like that.
Of course you have to have an Oracle driver in the WebLogic classpath (the lib directory or something like that).
spring.datasource.jndi-name=java:jdbc/OracleDS
Documentation.
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:909889",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682795"
} |
44c6f2bbdb70464fa15ca810f1f1aa22ca0242f6 | Stackoverflow Stackexchange
Q: Unable to cast object of type 'WhereArrayIterator`1[System.String]' to type 'System.String[]' I'm trying to get sub directories paths from a directory, but ignore some folders, the code below gives me this error,
System.InvalidCastException: Unable to cast object of type 'WhereArrayIterator`1[System.String]' to type 'System.String[]'
can anyone help?
Dim subdirectoryEntries() As String = Directory.GetDirectories(ConfigurationSettings.AppSettings("FsRoot") & Path.DirectorySeparatorChar & readerClientList.GetString(0)).
Where(Function(name) Not name.EndsWith(folder, StringComparison.OrdinalIgnoreCase))
A: The result of GetDirectories(...).Where(...) is a lazily evaluated IEnumerable(Of String), not a String() array. Append a materializing call such as .ToArray() (or .ToList() if you want a list) to get the declared type.
Hope this helps.
| Q: Unable to cast object of type 'WhereArrayIterator`1[System.String]' to type 'System.String[]' I'm trying to get subdirectory paths from a directory while ignoring some folders, but the code below gives me this error,
System.InvalidCastException: Unable to cast object of type 'WhereArrayIterator`1[System.String]' to type 'System.String[]'
can anyone help?
Dim subdirectoryEntries() As String = Directory.GetDirectories(ConfigurationSettings.AppSettings("FsRoot") & Path.DirectorySeparatorChar & readerClientList.GetString(0)).
Where(Function(name) Not name.EndsWith(folder, StringComparison.OrdinalIgnoreCase))
A: The result of GetDirectories(...).Where(...) is a lazily-evaluated IEnumerable(Of String), not a String array, so it cannot be cast to String(). Append a materializing call such as .ToArray() (or .ToList()) to the end of the query so it matches the declared type.
Hope this helps
| stackoverflow | {
"language": "en",
"length": 94,
"provenance": "stackexchange_0000F.jsonl.gz:909904",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682842"
} |
8d9bdc89f3f399c57186c1986c6fa1d9524e9170 | Stackoverflow Stackexchange
Q: Using conda environment in IPython interactive shell I am trying to use the interactive shell of IPython within my conda env and am having issues.
The steps I take are:
source activate myenv
conda install ipython
ipython
When I am in the IPython interactive shell, it calls python from the anaconda root bin (~/anaconda2/bin).
Is there anything I can do to change the python path to ~/anaconda2/envs/myenv/bin and import packages from myenv?
I see few solutions to making env work in jupyter when I search the web, but no answer on making it work on the interactive shell.
A: This is likely due to your $PATH variable being messed up.
The easiest way to make sure you get IPython from within an env is to use $ python -m IPython <rest of the options> to start IPython. This works for many pip-installable applications, like pytest, pip and others.
| Q: Using conda environment in IPython interactive shell I am trying to use the interactive shell of IPython within my conda env and am having issues.
The steps I take are:
source activate myenv
conda install ipython
ipython
When I am in the IPython interactive shell, it calls python from the anaconda root bin (~/anaconda2/bin).
Is there anything I can do to change the python path to ~/anaconda2/envs/myenv/bin and import packages from myenv?
I see few solutions to making env work in jupyter when I search the web, but no answer on making it work on the interactive shell.
A: This is likely due to your $PATH variable being messed up.
The easiest way to make sure you get IPython from within an env is to use $ python -m IPython <rest of the options> to start IPython. This works for many pip-installable applications, like pytest, pip and others.
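As a sanity check (a small sketch, nothing conda-specific assumed), you can ask the running interpreter where it lives from inside the IPython session; inside an activated env both paths should point under ~/anaconda2/envs/myenv rather than the root install:

```python
import sys

# The interpreter binary actually running this session.
print(sys.executable)

# The environment root that packages are imported from.
print(sys.prefix)
```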
| stackoverflow | {
"language": "en",
"length": 151,
"provenance": "stackexchange_0000F.jsonl.gz:909915",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682872"
} |
99266374341ec799b09dc23d454893e0b2a85ca6 | Stackoverflow Stackexchange
Q: How to run unit tests against an app using an embedded framework without x86_x64 We need to integrate into our iOS app a framework that doesn't target x86_x64. We don't have access to that framework source code and we've been informed that they would release it for that architecture in a couple of months. Meanwhile our unit test target doesn't compile with the following linker error:
ld: framework not found FrameworkName for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see
invocation)
I've tried adding a linker flag for Any iOS Simulator SDK with the value -weak_framework FrameworkName. By doing so we can run the app on the simulator, but the test target crashes with the same linker error.
Any idea how to setup the build settings/phase so that we can run our unit test suite? Thanks!
Also note that we need to add that framework as an embedded binaries.
| Q: How to run unit tests against an app using an embedded framework without x86_x64 We need to integrate into our iOS app a framework that doesn't target x86_x64. We don't have access to that framework source code and we've been informed that they would release it for that architecture in a couple of months. Meanwhile our unit test target doesn't compile with the following linker error:
ld: framework not found FrameworkName for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see
invocation)
I've tried adding a linker flag for Any iOS Simulator SDK with the value -weak_framework FrameworkName. By doing so we can run the app on the simulator, but the test target crashes with the same linker error.
Any idea how to setup the build settings/phase so that we can run our unit test suite? Thanks!
Also note that we need to add that framework as an embedded binaries.
| stackoverflow | {
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:909931",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44682910"
} |
62660aa46cf6194f72729d9788be2229ce47c3cc | Stackoverflow Stackexchange
Q: FixedDataTable: isMounted is deprecated Using FixedDataTable in my React project and I'm surprised to see the warning below:
warning.js:36 Warning: FixedDataTable: isMounted is deprecated. Instead, make sure to clean up subscriptions and pending requests in componentWillUnmount to prevent memory leaks.
What I've understood is that isMounted is seen as an antipattern (Link), so I'm surprised to see it in the actual source code. Am I missing something here?
_didScrollStop: function _didScrollStop() {
if (this.isMounted() && this._isScrolling) {
this._isScrolling = false;
this.setState({ redraw: true });
if (this.props.onScrollEnd) {
this.props.onScrollEnd(this.state.scrollX, this.state.scrollY);
}
}
}
| Q: FixedDataTable: isMounted is deprecated Using FixedDataTable in my React project and I'm surprised to see the warning below:
warning.js:36 Warning: FixedDataTable: isMounted is deprecated. Instead, make sure to clean up subscriptions and pending requests in componentWillUnmount to prevent memory leaks.
What I've understood is that isMounted is seen as an antipattern (Link), so I'm surprised to see it in the actual source code. Am I missing something here?
_didScrollStop: function _didScrollStop() {
if (this.isMounted() && this._isScrolling) {
this._isScrolling = false;
this.setState({ redraw: true });
if (this.props.onScrollEnd) {
this.props.onScrollEnd(this.state.scrollX, this.state.scrollY);
}
}
}
| stackoverflow | {
"language": "en",
"length": 93,
"provenance": "stackexchange_0000F.jsonl.gz:909964",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683009"
} |
e45adbffa51826462761a470cdb50764a048bb84 | Stackoverflow Stackexchange
Q: Implement K-fold cross validation in MLPClassification Python I am learning how to develop a Backpropagation Neural Network using scikit-learn. I am still confused about how to implement k-fold cross-validation in my neural network. I hope you guys can help me out. My code is as follows:
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier
f = open("seeds_dataset.txt")
data = np.loadtxt(f)
X=data[:,0:]
y=data[:,-1]
kf = KFold(n_splits=10)
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X, y)
MLPClassifier(activation='relu', alpha=1e-05, batch_size='auto',
beta_1=0.9, beta_2=0.999, early_stopping=False,
epsilon=1e-08, hidden_layer_sizes=(5, 2), learning_rate='constant',
learning_rate_init=0.001, max_iter=200, momentum=0.9,
nesterovs_momentum=True, power_t=0.5, random_state=1, shuffle=True,
solver='lbfgs', tol=0.0001, validation_fraction=0.1, verbose=False,
warm_start=False)
A: In case you are looking for an already built-in method to do this, you can take a look at cross_validate.
from sklearn.model_selection import cross_validate
model = MLPClassifier()
cv_results = cross_validate(model, X, y, cv=10,
return_train_score=False,
scoring='accuracy')
print("Fit scores: {}".format(cv_results['test_score']))
The thing I like about this approach is it gives you access to the fit_time, score_time, and test_score. It also allows you to supply your choice of scoring metrics and cross-validation generator/iterable (i.e. Kfold). Another good resource is Cross Validation.
| Q: Implement K-fold cross validation in MLPClassification Python I am learning how to develop a Backpropagation Neural Network using scikit-learn. I am still confused about how to implement k-fold cross-validation in my neural network. I hope you guys can help me out. My code is as follows:
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier
f = open("seeds_dataset.txt")
data = np.loadtxt(f)
X=data[:,0:]
y=data[:,-1]
kf = KFold(n_splits=10)
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X, y)
MLPClassifier(activation='relu', alpha=1e-05, batch_size='auto',
beta_1=0.9, beta_2=0.999, early_stopping=False,
epsilon=1e-08, hidden_layer_sizes=(5, 2), learning_rate='constant',
learning_rate_init=0.001, max_iter=200, momentum=0.9,
nesterovs_momentum=True, power_t=0.5, random_state=1, shuffle=True,
solver='lbfgs', tol=0.0001, validation_fraction=0.1, verbose=False,
warm_start=False)
A: In case you are looking for an already built-in method to do this, you can take a look at cross_validate.
from sklearn.model_selection import cross_validate
model = MLPClassifier()
cv_results = cross_validate(model, X, y, cv=10,
return_train_score=False,
scoring='accuracy')
print("Fit scores: {}".format(cv_results['test_score']))
The thing I like about this approach is it gives you access to the fit_time, score_time, and test_score. It also allows you to supply your choice of scoring metrics and cross-validation generator/iterable (i.e. Kfold). Another good resource is Cross Validation.
A: Kudos to @COLDSPEED's answer.
If you'd like to have the predictions from n-fold cross-validation, cross_val_predict() is the way to go.
# Scramble and subset data frame into train + validation(80%) and test(10%)
df = df.sample(frac=1).reset_index(drop=True)
train_index = 0.8
df_train = df[: int(len(df) * train_index)]
# convert dataframe to ndarray, since kf.split returns nparray as index
feature = df_train.iloc[:, 0: -1].values
target = df_train.iloc[:, -1].values
solver = MLPClassifier(activation='relu', solver='adam', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1, verbose=True)
y_pred = cross_val_predict(solver, feature, target, cv = 10)
Basically, the option cv indicates how many cross-validation folds you'd like to use in training. y_pred is the same size as target.
A: Do not split your data into train and test. This is automatically handled by the KFold cross-validation.
from sklearn.model_selection import KFold
kf = KFold(n_splits=10)
clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
for train_indices, test_indices in kf.split(X):
clf.fit(X[train_indices], y[train_indices])
print(clf.score(X[test_indices], y[test_indices]))
KFold validation partitions your dataset into n equal, fair portions. Each portion then takes a turn as the test set while the remaining portions form the training set. With this, you get a fairly accurate measure of your model's accuracy, since it is evaluated on every portion of the fairly distributed data.
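To make that partitioning concrete, here is a dependency-free sketch that mimics how scikit-learn's KFold carves up row indices (the first n_samples % n_splits folds get one extra sample); the 210-row size of the UCI seeds dataset is assumed for illustration:

```python
# Minimal, dependency-free sketch of KFold's index partitioning.
def kfold_indices(n_samples, n_splits):
    # The first (n_samples % n_splits) folds get one extra sample, as in scikit-learn.
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    indices = list(range(n_samples))
    current = 0
    for size in fold_sizes:
        test_idx = indices[current:current + size]
        train_idx = indices[:current] + indices[current + size:]
        yield train_idx, test_idx
        current += size

splits = list(kfold_indices(210, 10))  # 210 rows, as in the UCI seeds dataset
print([len(test) for _, test in splits])  # ten folds of 21 samples each
# Every sample lands in exactly one test fold.
assert sorted(i for _, test in splits for i in test) == list(range(210))
```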
| stackoverflow | {
"language": "en",
"length": 380,
"provenance": "stackexchange_0000F.jsonl.gz:910017",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683197"
} |
2f25638415ed35da538655b2c3d2bb6fa3cb3495 | Stackoverflow Stackexchange
Q: use docker for google cloud data flow dependencies I am interested in using Google cloud Dataflow to parallel process videos. My job uses both OpenCV and tensorflow. Is it possible to just run the workers inside a docker instance, rather than installing all the dependencies from source as described:
https://cloud.google.com/dataflow/pipelines/dependencies-python
I would have expected a flag for a docker container, which is already sitting in google container engine.
A: 2021 update
Dataflow now supports custom docker containers. You can create your own container by following these instructions:
https://cloud.google.com/dataflow/docs/guides/using-custom-containers
The short answer is that Beam publishes containers on Docker Hub under apache/beam_${language}_sdk:${version}.
In your Dockerfile you would use one of them as base:
FROM apache/beam_python3.8_sdk:2.30.0
# Add your customizations and dependencies
Then you would upload this image to a container registry like GCR or Docker Hub, and then you would specify the following option: --worker_harness_container_image=$IMAGE_URI
And just like that, you have a custom container.
It is not possible to modify or switch the default Dataflow worker container. You need to install the dependencies according to the documentation.
| Q: use docker for google cloud data flow dependencies I am interested in using Google cloud Dataflow to parallel process videos. My job uses both OpenCV and tensorflow. Is it possible to just run the workers inside a docker instance, rather than installing all the dependencies from source as described:
https://cloud.google.com/dataflow/pipelines/dependencies-python
I would have expected a flag for a docker container, which is already sitting in google container engine.
A: 2021 update
Dataflow now supports custom docker containers. You can create your own container by following these instructions:
https://cloud.google.com/dataflow/docs/guides/using-custom-containers
The short answer is that Beam publishes containers on Docker Hub under apache/beam_${language}_sdk:${version}.
In your Dockerfile you would use one of them as base:
FROM apache/beam_python3.8_sdk:2.30.0
# Add your customizations and dependencies
Then you would upload this image to a container registry like GCR or Docker Hub, and then you would specify the following option: --worker_harness_container_image=$IMAGE_URI
And just like that, you have a custom container.
It is not possible to modify or switch the default Dataflow worker container. You need to install the dependencies according to the documentation.
A: If you have a large number of videos you will have to incur the large startup cost regardless. Such is the nature of grid computing in general.
The other side of this is that you could use larger machines for the job than the n1-standard-1 machines, thus amortizing the cost of the download across fewer machines that could potentially process more videos at once if the processing is coded correctly.
A: One solution is to issue the pip install commands through the setup.py option listed for Non-Python Dependencies.
Doing this will download the manylinux wheel instead of the source distribution that the requirements file processing will stage.
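A sketch of that setup.py approach, modeled on the custom-commands pattern from the Dataflow documentation; the package name and the opencv-python dependency are assumed placeholders to replace with your real values:

```python
import subprocess

import setuptools
from setuptools.command.build_py import build_py

# Shell commands run on each Dataflow worker before the package is built.
# The opencv dependency below is an assumed placeholder.
CUSTOM_COMMANDS = [
    ["pip", "install", "opencv-python"],
]

class CustomBuild(build_py):
    """Run the custom install commands, then the normal build step."""
    def run(self):
        for command in CUSTOM_COMMANDS:
            subprocess.check_call(command)
        build_py.run(self)

SETUP_KWARGS = dict(
    name="video-pipeline",  # hypothetical package name
    version="0.1.0",
    packages=setuptools.find_packages(),
    cmdclass={"build_py": CustomBuild},
)
```

In a real setup.py the file would end with setuptools.setup(**SETUP_KWARGS), and the pipeline would be launched with --setup_file=./setup.py so Dataflow stages the file and runs these commands on every worker.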
| stackoverflow | {
"language": "en",
"length": 279,
"provenance": "stackexchange_0000F.jsonl.gz:910038",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683250"
} |
cca116153fbe3f1bc99a2e594ff5f1ed8e51461b | Stackoverflow Stackexchange
Q: Firebase admin SDK node - wildcard usage in reference Is it possible to listen for child_added in the admin SDK while using a wildcard in the ref? In cloud functions, I could use "{random}" but this does not work in Node.js. example:
var refHighscore = db.ref("highscores/classic/alltime/{score}")
A: No. This is a unique feature of Cloud Functions, as a trigger is registered directly with the Realtime Database itself. The Admin SDK functions just like the Client SDK -- no wildcards.
| Q: Firebase admin SDK node - wildcard usage in reference Is it possible to listen for child_added in the admin SDK while using a wildcard in the ref? In cloud functions, I could use "{random}" but this does not work in Node.js. example:
var refHighscore = db.ref("highscores/classic/alltime/{score}")
A: No. This is a unique feature of Cloud Functions, as a trigger is registered directly with the Realtime Database itself. The Admin SDK functions just like the Client SDK -- no wildcards.
| stackoverflow | {
"language": "en",
"length": 80,
"provenance": "stackexchange_0000F.jsonl.gz:910053",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683287"
} |
948d244c58ff1a9afab71f4d3327edeb85957a9f | Stackoverflow Stackexchange
Q: Google Play Store installs/uninstalls/sales API I'm trying to find an API to fetch the number of installs, uninstalls and sales for my app on Google Play Store, with filtering by date, but I haven't found anything like that, only for iOS. Does anybody know one?
| Q: Google Play Store installs/uninstalls/sales API I'm trying to find an API to fetch the number of installs, uninstalls and sales for my app on Google Play Store, with filtering by date, but I haven't found anything like that, only for iOS. Does anybody know one?
| stackoverflow | {
"language": "en",
"length": 46,
"provenance": "stackexchange_0000F.jsonl.gz:910056",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683291"
} |
9b5eb0c66d479a650fa7cab3d8d0ba1372409343 | Stackoverflow Stackexchange
Q: javascript find an id that contains partial text Environment: Just JavaScript
Is there a way to get an element that contains partial text?
<h1 id="test_123_abc">hello world</h1>
In this example, can I get the element if all I had was the test_123 part?
A: Since you can't use jQuery, querySelectorAll is a decent way to go
var matchedEle = document.querySelectorAll("[id*='test_123']")
| Q: javascript find an id that contains partial text Environment: Just JavaScript
Is there a way to get an element that contains partial text?
<h1 id="test_123_abc">hello world</h1>
In this example, can I get the element if all I had was the test_123 part?
A: Since you can't use jQuery, querySelectorAll is a decent way to go
var matchedEle = document.querySelectorAll("[id*='test_123']")
A: querySelectorAll with starts with
var elems = document.querySelectorAll("[id^='test_123']")
console.log(elems.length);
<h1 id="test_123_abc">hello world</h1>
<h1 id="test_123_def">hello world</h1>
<h1 id="test_123_ghi">hello world</h1>
A: You can achieve it (without using jQuery) by using querySelectorAll.
var el = document.querySelectorAll("[id*='test_123']");
You can get a clear example of it by going through the following link:
Find all elements whose id begins with a common string
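Note the difference between the two selector forms in these answers: [id*='test_123'] matches ids that contain the substring, while [id^='test_123'] matches ids that start with it. A small DOM-free sketch of the same matching logic on plain strings:

```javascript
// [id*='test_123'] (contains) vs [id^='test_123'] (starts with),
// expressed as string tests over a list of sample ids:
const ids = ["test_123_abc", "test_123_def", "other_test_123", "unrelated"];

const containsMatches = ids.filter(id => id.includes("test_123"));
const startsWithMatches = ids.filter(id => id.startsWith("test_123"));

console.log(containsMatches);   // three ids contain the substring
console.log(startsWithMatches); // only the first two start with it
```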
| stackoverflow | {
"language": "en",
"length": 118,
"provenance": "stackexchange_0000F.jsonl.gz:910071",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683329"
} |
6ac62fbb752bf51b63a0b6b5c57565d209155f88 | Stackoverflow Stackexchange
Q: Material UI Card Height I'm sure I'm missing something easy here but I can't figure this out for the life of me.
*
*I am using Material-UI card components in React to display
albums with track lists. The cards are expandable and are filling
100% of their parent container so that when one card is expanded
they all expand in height.
*How can I stop the other cards from expanding with their parent
element and get them to stay at just enough height to show the
content?
Only the expanded card should change height.
Here is a screenshot of what I mean: material-ui cards
A: You can apply styles to card.
var cardStyle = {
display: 'block',
width: '30vw',
transitionDuration: '0.3s',
height: '45vw'
}
And in your Card you can apply the above styling like this:
<Card style={cardStyle}>
<CardHeader
title="URL Avatar"
subtitle="Subtitle"
avatar="https://placeimg.com/800/450/nature"
/>
| Q: Material UI Card Height I'm sure I'm missing something easy here but I can't figure this out for the life of me.
*
*I am using Material-UI card components in React to display
albums with track lists. The cards are expandable and are filling
100% of their parent container so that when one card is expanded
they all expand in height.
*How can I stop the other cards from expanding with their parent
element and get them to stay at just enough height to show the
content?
Only the expanded card should change height.
Here is a screenshot of what I mean: material-ui cards
A: You can apply styles to card.
var cardStyle = {
display: 'block',
width: '30vw',
transitionDuration: '0.3s',
height: '45vw'
}
And in your Card you can apply the above styling like this:
<Card style={cardStyle}>
<CardHeader
title="URL Avatar"
subtitle="Subtitle"
avatar="https://placeimg.com/800/450/nature"
/>
| stackoverflow | {
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:910078",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683340"
} |
67f10f2241e8ff4915becded0c4c3917759d0953 | Stackoverflow Stackexchange
Q: how to Change font size, Without changing the size of button in CSS In the following code I want to change just inner font size not the button size.
<html>
<head>
<style>
.sm {
font-size:x-small;
}
</style>
</head>
<body>
<button class="sm">submit</button>
</body>
</html>
A: To change the text size without changing the button size, you would need to fix the size of the button. This can be done using height and width in CSS. That way you can change the font-size without having it affect the button size.
Take a look at my code below. As you can see, with the height and width changed, the button is now a fixed size. This is proven by the text being larger than the button.
CSS
button {
font-size: 30px;
height: 60px;
width: 60px;
}
HTML
<button>
Hello
</button>
| Q: how to Change font size, Without changing the size of button in CSS In the following code I want to change just inner font size not the button size.
<html>
<head>
<style>
.sm {
font-size:x-small;
}
</style>
</head>
<body>
<button class="sm">submit</button>
</body>
</html>
A: To change the text size without changing the button size, you would need to fix the size of the button. This can be done using height and width in CSS. That way you can change the font-size without having it affect the button size.
Take a look at my code below. As you can see, with the height and width changed, the button is now a fixed size. This is proven by the text being larger than the button.
CSS
button {
font-size: 30px;
height: 60px;
width: 60px;
}
HTML
<button>
Hello
</button>
A: You can also use a span tag, which worked quite well for me.
<button><span style="font-size:10px;">ClickMe</span></button>
A: Change your style to this
.sm
{
font-size:20px;
height:30px;
width: 120px;
}
hope this helps !
A: Another method to achieve this would be to take advantage of the 3rd dimension.
This method does not require you to fix the size of your button and allows it to still adapt to the content size. It would also scale any svg or other content you put in the button to provide a nice hover pop effect.
HTML
<button>
<div>
Button Text
</div>
</button>
CSS
button div {
-webkit-transition: -webkit-transform 0.14s ease;
}
button:hover div {
-webkit-transform: perspective(100px) scale(1.05);
}
button:active div {
-webkit-transform: perspective(100px) scale(1.08);
}
I am not sure this is what you're trying to achieve, but I think it's a neat solution and I thought I'd share it in case anyone else wants to try it.
| stackoverflow | {
"language": "en",
"length": 291,
"provenance": "stackexchange_0000F.jsonl.gz:910085",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683370"
} |
6a9fb40d40e4c0030809894f11b44fdce142fa04 | Stackoverflow Stackexchange
Q: iOS App using iCloud storage is adding .icloud file extension to my documents In my iOS (swift 3.0) mobile application, the device creating a document to store in iCloud is able to load and manipulate the file again. Once iCloud transfers the file(s) to another device, they can no longer be opened, and have a "." prepended to the original file name AND .icloud appended to the end.
My document structure is as follows:
Project_Name.spp (directory file wrapper) holding a project.data file and additional directory file wrappers Page_Name.spg containing page.meta, page.plist, and screenshot.jpg
Except for the root directory file wrapper, all files are renamed as similar to: .project.data.icloud
This was not happening previously so I'm not certain what code update would have created this behavior. Any thoughts would be helpful.
A: The files are in the process of being downloaded by the iCloud sync process. Did you call startDownloadingUbiquitousItem on the parent directory?
| Q: iOS App using iCloud storage is adding .icloud file extension to my documents In my iOS (swift 3.0) mobile application, the device creating a document to store in iCloud is able to load and manipulate the file again. Once iCloud transfers the file(s) to another device, they can no longer be opened, and have a "." prepended to the original file name AND .icloud appended to the end.
My document structure is as follows:
Project_Name.spp (directory file wrapper) holding a project.data file and additional directory file wrappers Page_Name.spg containing page.meta, page.plist, and screenshot.jpg
Except for the root directory file wrapper, all files are renamed as similar to: .project.data.icloud
This was not happening previously so I'm not certain what code update would have created this behavior. Any thoughts would be helpful.
A: The files are in the process of being downloaded by the iCloud sync process. Did you call startDownloadingUbiquitousItem on the parent directory?
| stackoverflow | {
"language": "en",
"length": 155,
"provenance": "stackexchange_0000F.jsonl.gz:910094",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683400"
} |
d90d3132a140480367b3b57a602168ee6495da97 | Stackoverflow Stackexchange
Q: Converting a string-looking list into a list in Python The HTTP response header for 'packages_list' returns the following, which is a list-looking string. How do I convert this to an actual list? I have tried typecasting the string as a list, which didn't work. I am not keen on doing find-and-replace or strip. Once I have the list I am creating a Windows Forms dialog with a button for each of the items in the list. Any help is appreciated
I am using IronPython 2.6 (yes, I know it's old but can't move away for backward compatibility reasons)
['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']
httpConn = httplib.HTTPConnection(base_server_url)
httpConn.request("POST", urlparser.path, params)
response = httpConn.getresponse()
headers = dict(response.getheaders())
print headers['packages_list']
A: The simplest approach, IMHO, would be to use literal_eval:
>>> s = "['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']"
>>> s
"['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']"
>>> from ast import literal_eval
>>> literal_eval(s)
['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']
| Q: Converting a string-looking list into a list in Python The HTTP response header for 'packages_list' returns the following, which is a list-looking string. How do I convert this to an actual list? I have tried typecasting the string as a list, which didn't work. I am not keen on doing find-and-replace or strip. Once I have the list I am creating a Windows Forms dialog with a button for each of the items in the list. Any help is appreciated
I am using IronPython 2.6 (yes, I know it's old but can't move away for backward compatibility reasons)
['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']
httpConn = httplib.HTTPConnection(base_server_url)
httpConn.request("POST", urlparser.path, params)
response = httpConn.getresponse()
headers = dict(response.getheaders())
print headers['packages_list']
A: The simplest approach, IMHO, would be to use literal_eval:
>>> s = "['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']"
>>> s
"['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']"
>>> from ast import literal_eval
>>> literal_eval(s)
['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']
A: You can use ast.literal_eval to check whether the string is a valid Python literal and parse it
>>> import ast
>>> s = "['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']"
>>> ast.literal_eval(s)
['Admin', 'MMX_G10_Asia', 'MMX_G10_London', 'MMX_G10_Readonly', 'MMX_Credit_Readonly', 'MMX_Govies_ReadOnly']
A: Another option is to convert this string to JSON format and then read it in:
import json
s = headers['packages_list'].replace("'", '"')
result = json.loads(s)
A: Ugly, but I am going with the below. Thanks again everyone for the help!
headers = dict(response.getheaders())
print headers['packages_list']
result = headers['packages_list'].replace("'",'')
result = result.replace("[","")
result = result.replace("]", "")
print result
package_list = result.split(",")
print "the 2nd item in teh list is ", package_list[1]
| stackoverflow | {
"language": "en",
"length": 266,
"provenance": "stackexchange_0000F.jsonl.gz:910110",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683457"
} |
5e6e0076ad378ada9dfc7f0be944211bed9a3f3f | Stackoverflow Stackexchange
Q: Best way to store images for Google Cloud Machine Learning project? I am running a Machine Learning project using Google Cloud Platform with Tensorflow and Keras. I have about 30,000 PNG images in my dataset. When I run it locally, Keras has great utilities to load images but Google Cloud Services needs to use certain libraries such as tensorflow.file_io (see: Load numpy array in google-cloud-ml job) in order to read in files from a GC bucket.
What is the best way to load the images from a Google Cloud Storage bucket? Right now I am saving them as bytes and reading them in from one file, but it would be great to be able to load the images directly from the GC bucket.
Thanks,
| Q: Best way to store images for Google Cloud Machine Learning project? I am running a Machine Learning project using Google Cloud Platform with Tensorflow and Keras. I have about 30,000 PNG images in my dataset. When I run it locally, Keras has great utilities to load images but Google Cloud Services needs to use certain libraries such as tensorflow.file_io (see: Load numpy array in google-cloud-ml job) in order to read in files from a GC bucket.
What is the best way to load the images from a Google Cloud Storage bucket? Right now I am saving them as bytes and reading them in from one file, but it would be great to be able to load the images directly from the GC bucket.
Thanks,
| stackoverflow | {
"language": "en",
"length": 125,
"provenance": "stackexchange_0000F.jsonl.gz:910123",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683492"
} |
64450909fce7d32d65eceb3b2dd275164a9e8b24 | Stackoverflow Stackexchange
Q: JSON Error when create react app Trying to follow tutorial on react (Here: https://hackernoon.com/simple-react-development-in-2017-113bd563691f), but I keep getting JSON error "Error parsing JSON. unexpected end of JSON input." I'm using yarn. How do I fix the JSON input?
A: I faced the same issue and this solved it:
try the below command in your terminal
npm cache clean --force
solution ref
| Q: JSON Error when create react app Trying to follow tutorial on react (Here: https://hackernoon.com/simple-react-development-in-2017-113bd563691f), but I keep getting JSON error "Error parsing JSON. unexpected end of JSON input." I'm using yarn. How do I fix the JSON input?
A: I faced the same issue and this solved it:
try the below command in your terminal
npm cache clean --force
solution ref
A: For me the answer given by @M.abosalem didn't work but
yarn cache clean
did the trick
| stackoverflow | {
"language": "en",
"length": 81,
"provenance": "stackexchange_0000F.jsonl.gz:910141",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683555"
} |
bc1808e275130886ce3917b19bed4755baee178f | Stackoverflow Stackexchange
Q: cannot log to console using scala-logging with logback.xml configuration on amazon-emr I am using scala-logging with a logback.xml configuration file to send log messages to the console, but they do not appear. My code is running on an Amazon-EMR cluster and is launched using spark-submit.
My build.sbt file contains the dependencies:
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.1.7"
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.5.0"
My src/main/resources/logback.xml contains:
<configuration>
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<root level="debug">
<appender-ref ref="console"/>
</root>
</configuration>
I am logging from a method in a class that resembles:
import com.typesafe.scalalogging.LazyLogging
class MyClass() extends LazyLogging {
def myMethod() = {
logger.debug("debug logging test")
logger.info("info logging test")
logger.warn("warn logging test")
logger.error("error logging test")
println("This message appears in console")
}
}
None of the messages at any log level appear. Note that when I use the println method, the messages do appear in the console/stdout.
| Q: cannot log to console using scala-logging with logback.xml configuration on amazon-emr I am using scala-logging with a logback.xml configuration file to send log messages to the console, but they do not appear. My code is running on an Amazon-EMR cluster and is launched using spark-submit.
My build.sbt file contains the dependencies:
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.1.7"
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.5.0"
My src/main/resources/logback.xml contains:
<configuration>
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>
<root level="debug">
<appender-ref ref="console"/>
</root>
</configuration>
I am logging from a method in a class that resembles:
import com.typesafe.scalalogging.LazyLogging
class MyClass() extends LazyLogging {
def myMethod() = {
logger.debug("debug logging test")
logger.info("info logging test")
logger.warn("warn logging test")
logger.error("error logging test")
println("This message appears in console")
}
}
None of the messages at any log level appear. Note that when I use the println method, the messages do appear in the console/stdout.
| stackoverflow | {
"language": "en",
"length": 153,
"provenance": "stackexchange_0000F.jsonl.gz:910171",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683645"
} |
9fa797fc93fea6764bb3425b0bb1cf8c908865ae | Stackoverflow Stackexchange
Q: Equating symbolic coefficients I would like to find the particular solution y_p of the ODE y'' - y' - 2y = 4x^2
I made the following script:
syms x A0 A1 A2
ypa = A2*x^2+A1*x+A0; % y_p assume
cyp = diff(ypa,2) - diff(ypa) - 2*ypa % according to ODE
P1 = 4*x^2; P2 = cyp ; % Equating P1 and P2
C = coeffs(P1 - P2,x);
A0 = solve(C(1),A0)
A1 = solve(C(2),A1)
A2 = solve(C(3),A2)
I got the correct answer for A2 = -2, but not for A0 (should be -3) or A1 (should be 2). How can I get them automatically?
P.S I'm using MATLAB R2013a.
A: Instead of calling solve 3 times, once on each equation of C, you should call it once on the entire system of equations so that the proper substitutions are done to give you a numeric result for each variable:
>> [A0, A1, A2] = solve(C)
A0 =
-3
A1 =
2
A2 =
-2
| Q: Equating symbolic coefficients I would like to find the particular solution y_p of the ODE y'' - y' - 2y = 4x^2
I made the following script:
syms x A0 A1 A2
ypa = A2*x^2+A1*x+A0; % y_p assume
cyp = diff(ypa,2) - diff(ypa) - 2*ypa % according to ODE
P1 = 4*x^2; P2 = cyp ; % Equating P1 and P2
C = coeffs(P1 - P2,x);
A0 = solve(C(1),A0)
A1 = solve(C(2),A1)
A2 = solve(C(3),A2)
I got the correct answer for A2 = -2, but not for A0 (should be -3) or A1 (should be 2). How can I get them automatically?
P.S I'm using MATLAB R2013a.
A: Instead of calling solve 3 times, once on each equation of C, you should call it once on the entire system of equations so that the proper substitutions are done to give you a numeric result for each variable:
>> [A0, A1, A2] = solve(C)
A0 =
-3
A1 =
2
A2 =
-2
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:910174",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683663"
} |
4fde8f8346778b85aaf92c66e01737c206765a5a | Stackoverflow Stackexchange
Q: reserve array memory in advance in Julia How can we reserve memory (or allocate memory without initialization) in Julia? In C++, a common pattern is to call reserve before calling push_back several times to avoid calling malloc more than once. Is there an equivalent in Julia?
A: I think you are looking for sizehint!
help?> sizehint!
search: sizehint!
sizehint!(s, n)
Suggest that collection s reserve capacity for at least n elements. This can
improve performance.
| Q: reserve array memory in advance in Julia How can we reserve memory (or allocate memory without initialization) in Julia? In C++, a common pattern is to call reserve before calling push_back several times to avoid calling malloc more than once. Is there an equivalent in Julia?
A: I think you are looking for sizehint!
help?> sizehint!
search: sizehint!
sizehint!(s, n)
Suggest that collection s reserve capacity for at least n elements. This can
improve performance.
| stackoverflow | {
"language": "en",
"length": 79,
"provenance": "stackexchange_0000F.jsonl.gz:910183",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683685"
} |
95c3d8fa1a88a7eb9f816ec537e3cb4fbad77bb9 | Stackoverflow Stackexchange
Q: Setting the prompt in Beeline When connecting to beeline, my prompt is some hefty truncated version of the JDBC url:
0: jdbc:hive2//fully.qualified.host.na
Which takes up an annoying amount of real estate.
I tried set hive.cli.prompt=foo>>, and get an error that that property is not in the list of params that are allowed to be modified at runtime.
Is there no way to set the prompt to a custom value?
A: EDIT:
For hive, you can set hive.cli.prompt; for beeline it is hardcoded here:
https://github.com/apache/hive/blob/477f541844db3ea5eaee8746033bf80cd48b7f8c/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1769-L1775
| Q: Setting the prompt in Beeline When connecting to beeline, my prompt is some hefty truncated version of the JDBC url:
0: jdbc:hive2//fully.qualified.host.na
Which takes up an annoying amount of real estate.
I tried set hive.cli.prompt=foo>>, and get an error that that property is not in the list of params that are allowed to be modified at runtime.
Is there no way to set the prompt to a custom value?
A: EDIT:
For hive, you can set hive.cli.prompt; for beeline it is hardcoded here:
https://github.com/apache/hive/blob/477f541844db3ea5eaee8746033bf80cd48b7f8c/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1769-L1775
| stackoverflow | {
"language": "en",
"length": 85,
"provenance": "stackexchange_0000F.jsonl.gz:910202",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683735"
} |
f34b455c297bdb10048cdb97c170a4ec9faeb284 | Stackoverflow Stackexchange
Q: Have a list of hours between two dates in python I have two times and I want to make a list of all the hours between them using the same format in Python
from= '2016-12-02T11:00:00.000Z'
to= '2017-06-06T07:00:00.000Z'
hours=to-from
so the result will be something like this
2016-12-02T11:00:00.000Z
2016-12-02T12:00:00.000Z
2016-12-02T13:00:00.000Z
..... and so on
How can I do this, and what kind of library should I use?
A: A simpler solution using the standard library's datetime package:
from datetime import datetime, timedelta
DATE_TIME_STRING_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
from_date_time = datetime.strptime('2016-12-02T11:00:00.000Z',
DATE_TIME_STRING_FORMAT)
to_date_time = datetime.strptime('2017-06-06T07:00:00.000Z',
DATE_TIME_STRING_FORMAT)
date_times = [from_date_time.strftime(DATE_TIME_STRING_FORMAT)]
date_time = from_date_time
while date_time < to_date_time:
date_time += timedelta(hours=1)
date_times.append(date_time.strftime(DATE_TIME_STRING_FORMAT))
will give us
>>>date_times
['2016-12-02T11:00:00.000000Z',
'2016-12-02T12:00:00.000000Z',
'2016-12-02T13:00:00.000000Z',
'2016-12-02T14:00:00.000000Z',
'2016-12-02T15:00:00.000000Z',
'2016-12-02T16:00:00.000000Z',
'2016-12-02T17:00:00.000000Z',
'2016-12-02T18:00:00.000000Z',
'2016-12-02T19:00:00.000000Z',
'2016-12-02T20:00:00.000000Z',
...]
| Q: Have a list of hours between two dates in python I have two times and I want to make a list of all the hours between them using the same format in Python
from= '2016-12-02T11:00:00.000Z'
to= '2017-06-06T07:00:00.000Z'
hours=to-from
so the result will be something like this
2016-12-02T11:00:00.000Z
2016-12-02T12:00:00.000Z
2016-12-02T13:00:00.000Z
..... and so on
How can I do this, and what kind of library should I use?
A: A simpler solution using the standard library's datetime package:
from datetime import datetime, timedelta
DATE_TIME_STRING_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
from_date_time = datetime.strptime('2016-12-02T11:00:00.000Z',
DATE_TIME_STRING_FORMAT)
to_date_time = datetime.strptime('2017-06-06T07:00:00.000Z',
DATE_TIME_STRING_FORMAT)
date_times = [from_date_time.strftime(DATE_TIME_STRING_FORMAT)]
date_time = from_date_time
while date_time < to_date_time:
date_time += timedelta(hours=1)
date_times.append(date_time.strftime(DATE_TIME_STRING_FORMAT))
will give us
>>>date_times
['2016-12-02T11:00:00.000000Z',
'2016-12-02T12:00:00.000000Z',
'2016-12-02T13:00:00.000000Z',
'2016-12-02T14:00:00.000000Z',
'2016-12-02T15:00:00.000000Z',
'2016-12-02T16:00:00.000000Z',
'2016-12-02T17:00:00.000000Z',
'2016-12-02T18:00:00.000000Z',
'2016-12-02T19:00:00.000000Z',
'2016-12-02T20:00:00.000000Z',
...]
A: If possible I would recommend using pandas.
import pandas
time_range = pandas.date_range('2016-12-02T11:00:00.000Z', '2017-06-06T07:00:00.000Z', freq='H')
If you need strings then use the following:
timestamps = [str(x) + 'Z' for x in time_range]
# Output
# ['2016-12-02 11:00:00+00:00Z',
# '2016-12-02 12:00:00+00:00Z',
# '2016-12-02 13:00:00+00:00Z',
# '2016-12-02 14:00:00+00:00Z',
# '2016-12-02 15:00:00+00:00Z',
# '2016-12-02 16:00:00+00:00Z',
# ...]
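The same result can also be built in one pass with a list comprehension once the number of whole hours is known. A minimal stdlib-only sketch (the shorter date range is just for illustration):

```python
from datetime import datetime, timedelta

FMT = '%Y-%m-%dT%H:%M:%S.%fZ'
start = datetime.strptime('2016-12-02T11:00:00.000Z', FMT)
end = datetime.strptime('2016-12-02T14:00:00.000Z', FMT)

# Number of whole hours between the two timestamps
n_hours = int((end - start).total_seconds() // 3600)

# One timestamp string per hour, both endpoints included
hours = [(start + timedelta(hours=h)).strftime(FMT) for h in range(n_hours + 1)]

print(hours[0])    # 2016-12-02T11:00:00.000000Z
print(len(hours))  # 4
```

As in the answers above, %f prints microseconds with six digits, so the output contains .000000Z where the input had .000Z.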
| stackoverflow | {
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:910220",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683774"
} |
0bf8216640748f1e1e06ea23043ae57a9b679100 | Stackoverflow Stackexchange
Q: Why would one want to use string.Format with only one parameter? I encountered the following line in a project's code:
var result = string.Format(source);
with variable 'source' being a string
I can't understand what useful this line is doing. As I always thought, we need at least two parameters for string.Format method to have some useful output.
ReSharper is not highlighting this as a something redundant so it seems that this line might have some purpose which I can't grasp at the moment. (Or maybe ReSharper just doesn't handle this case specifically)
Why would one want to use string.Format with only one parameter?
A: Perhaps the overloaded function
public static string Format(string format, params object[] args);
is allowing that code to compile. I can't imagine that string.Format(source) would be helping in any meaningful way.
| Q: Why would one want to use string.Format with only one parameter? I encountered the following line in a project's code:
var result = string.Format(source);
with variable 'source' being a string
I can't understand what useful this line is doing. As I always thought, we need at least two parameters for string.Format method to have some useful output.
ReSharper is not highlighting this as a something redundant so it seems that this line might have some purpose which I can't grasp at the moment. (Or maybe ReSharper just doesn't handle this case specifically)
Why would one want to use string.Format with only one parameter?
A: Perhaps the overloaded function
public static string Format(string format, params object[] args);
is allowing that code to compile. I can't imagine that string.Format(source) would be helping in any meaningful way.
| stackoverflow | {
"language": "en",
"length": 136,
"provenance": "stackexchange_0000F.jsonl.gz:910223",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683784"
} |
d979643845c9dc7687f07d7863334f0a71325453 | Stackoverflow Stackexchange
Q: Using spread notation in Array.reduce I am trying to get the es6 syntax working for this code fragment.
let checkList = [1, 2].map(i => "Check " + i)
let checks = checkList
// .reduce((acc, check) => Object.assign(acc, {[check]: {valid: false}}), {})
.reduce((acc, check) => {...acc, {[check]: {valid: false}}}, {})
console.log(checks)
The output if i use the commented line in https://babeljs.io is as below and which is what i want to get using the new syntax.
Object {
"Check 1": Object {
"valid": false
},
"Check 2": Object {
"valid": false
}
}
I am not sure if there is a syntax error in this code. I tried selecting all the presets in babeljs but it doesn't compile properly.
A: Object spread is stage 4 proposal and isn't part of existing specifications. Stage 3 preset should be enabled in Babel, namely, transform-object-rest-spread.
There are syntax errors in the code above that will prevent it from being compiled properly even with required presets.
It should be
let checks = checkList
.reduce((acc, check) => ({...acc, [check]: {valid: false}}), {});
| Q: Using spread notation in Array.reduce I am trying to get the es6 syntax working for this code fragment.
let checkList = [1, 2].map(i => "Check " + i)
let checks = checkList
// .reduce((acc, check) => Object.assign(acc, {[check]: {valid: false}}), {})
.reduce((acc, check) => {...acc, {[check]: {valid: false}}}, {})
console.log(checks)
The output if i use the commented line in https://babeljs.io is as below and which is what i want to get using the new syntax.
Object {
"Check 1": Object {
"valid": false
},
"Check 2": Object {
"valid": false
}
}
I am not sure if there is a syntax error in this code. I tried selecting all the presets in babeljs but it doesn't compile properly.
A: Object spread is stage 4 proposal and isn't part of existing specifications. Stage 3 preset should be enabled in Babel, namely, transform-object-rest-spread.
There are syntax errors in the code above that will prevent it from being compiled properly even with required presets.
It should be
let checks = checkList
.reduce((acc, check) => ({...acc, [check]: {valid: false}}), {});
A: First of all you don't have wrap the properties in an extra object (unless you also want to use the spread operator on that).
So {...acc, {[check]: {valid: false}}} can become {...acc, [check]: {valid: false}}
This means you're adding an object to the accumulator. The key of this object is the name you assigned it (Check[n]) and the values are the ones you set ({valid...}).
Secondly, an arrow function whose body starts with a { is parsed as a block, not as an object literal. So you should either use an explicit return on a new line:
let checks = checkList.reduce((acc, check) => {
return {...acc, [check]: {valid: false}}
}, {})
Or wrap it in extra parentheses:
let checks = checkList.reduce((acc, check) => ({...acc, [check]: {valid: false}}) , {})
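Putting the corrected line together, here is a small runnable sketch (Node or a browser console); wrapping the object literal in parentheses is the key fix:

```javascript
const checkList = [1, 2].map(i => "Check " + i);

// Parentheses around the object literal make it an expression body,
// so the arrow function returns it instead of parsing { } as a block.
const checks = checkList.reduce(
  (acc, check) => ({ ...acc, [check]: { valid: false } }),
  {}
);

console.log(JSON.stringify(checks));
// {"Check 1":{"valid":false},"Check 2":{"valid":false}}
```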
| stackoverflow | {
"language": "en",
"length": 302,
"provenance": "stackexchange_0000F.jsonl.gz:910225",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683789"
} |
f76ab877e9459ae3eb6065a74a5c5e12269f743e | Stackoverflow Stackexchange
Q: Android animated button size change from both sides Android animation
How to change button size from both sides (Left and Right)
in the same time
As this image
I tried this, but it doesn't work the way I want:
public void scaleView(View v, float startScale, float endScale) {
Animation anim = new ScaleAnimation(
startScale, endScale, // Start and end values for the X axis scaling
1f, 1f, // Start and end values for the Y axis scaling
Animation.RELATIVE_TO_SELF, 0f, // Pivot point of X scaling
Animation.RELATIVE_TO_SELF, 1f); // Pivot point of Y scaling
anim.setFillAfter(true); // Needed to keep the result of the animation
anim.setDuration(3000);
v.startAnimation(anim);
}
A: Change pivotXValue to 0.5
public void scaleView(View v, float startScale, float endScale) {
Animation anim = new ScaleAnimation(
startScale, endScale, // Start and end values for the X axis scaling
1f, 1f, // Start and end values for the Y axis scaling
Animation.RELATIVE_TO_SELF, 0.5f, // Pivot point of X scaling
Animation.RELATIVE_TO_SELF, 1f); // Pivot point of Y scaling
anim.setFillAfter(true); // Needed to keep the result of the animation
anim.setDuration(3000);
v.startAnimation(anim);
}
| Q: Android animated button size change from both sides Android animation
How to change button size from both sides (Left and Right)
in the same time
As this image
I tried this, but it doesn't work the way I want:
public void scaleView(View v, float startScale, float endScale) {
Animation anim = new ScaleAnimation(
startScale, endScale, // Start and end values for the X axis scaling
1f, 1f, // Start and end values for the Y axis scaling
Animation.RELATIVE_TO_SELF, 0f, // Pivot point of X scaling
Animation.RELATIVE_TO_SELF, 1f); // Pivot point of Y scaling
anim.setFillAfter(true); // Needed to keep the result of the animation
anim.setDuration(3000);
v.startAnimation(anim);
}
A: Change pivotXValue to 0.5
public void scaleView(View v, float startScale, float endScale) {
Animation anim = new ScaleAnimation(
startScale, endScale, // Start and end values for the X axis scaling
1f, 1f, // Start and end values for the Y axis scaling
Animation.RELATIVE_TO_SELF, 0.5f, // Pivot point of X scaling
Animation.RELATIVE_TO_SELF, 1f); // Pivot point of Y scaling
anim.setFillAfter(true); // Needed to keep the result of the animation
anim.setDuration(3000);
v.startAnimation(anim);
}
A: Did you try this:
v.animate().setDuration(3000).scaleX(endScale);
| stackoverflow | {
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:910227",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683791"
} |
2d730f148f401b37569860ef6986d0e9011a74f3 | Stackoverflow Stackexchange
Q: How can you clone a branch from the original GitHub project to your fork? I forked a GitHub project. I want to create a new branch, that is a clone of the original project's master branch (the master branch has new commits from when I last forked).
How can I do this?
A: Fetch from your upstream, check out that branch, then force-push it to that branch on your fork.
git fetch upstream
git checkout <target branch>
git push -f origin <target branch>
Disclaimer: I haven't tested this.
| Q: How can you clone a branch from the original GitHub project to your fork? I forked a GitHub project. I want to create a new branch, that is a clone of the original project's master branch (the master branch has new commits from when I last forked).
How can I do this?
A: Fetch from your upstream, check out that branch, then force-push it to that branch on your fork.
git fetch upstream
git checkout <target branch>
git push -f origin <target branch>
Disclaimer: I haven't tested this.
A: First you need to configure a remote for the original repo.
$ git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPOSITORY.git
$ git fetch upstream
Now you can use any git command with any branch from the original repo.
$ git checkout master
$ git pull upstream master
Of course, you can checkout any other branch than master.
$ git checkout my-radical-new-feature
$ git pull upstream master
Typically I pull directly to my local master branch as shown here and then merge the local master into other branches where I am working.
$ git checkout master
$ git pull upstream master
$ git checkout my-radical-new-feature
$ git merge master
See the GitHub docs for more details
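The whole flow can be rehearsed locally with two throwaway repositories. This is a hedged sketch, not GitHub-specific: the paths and commit messages are made up, and the rev-parse call avoids hard-coding master versus main as the default branch name:

```shell
set -e
tmp=$(mktemp -d)

# Stand in for the original GitHub project
git init -q "$tmp/original"
git -C "$tmp/original" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial upstream commit"

# Stand in for your fork's local clone
git clone -q "$tmp/original" "$tmp/fork"
git -C "$tmp/fork" remote add upstream "$tmp/original"

# Upstream moves ahead after you forked
git -C "$tmp/original" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "new upstream commit"

# Bring the fork's default branch up to date with upstream
branch=$(git -C "$tmp/fork" rev-parse --abbrev-ref HEAD)
git -C "$tmp/fork" fetch -q upstream
git -C "$tmp/fork" merge -q "upstream/$branch"   # fast-forward here

git -C "$tmp/fork" log --oneline
```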
| stackoverflow | {
"language": "en",
"length": 203,
"provenance": "stackexchange_0000F.jsonl.gz:910243",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683831"
} |
215e14fb4e161afbbf9f9f2da9ccfab089965088 | Stackoverflow Stackexchange
Q: Split a list from some element and getting the item values into another list in Python 2.7 This is my list:
l = ['today','is','a','holiday','but','nothing','to','do']
I wish to put the items after 'but' into another list such as:
another_list = ['nothing','to','do']
I tried the following way:
l = ['today','is','a','holiday','but','nothing','to','do']
for element in l:
parts = element.split('but')
But it does not provide the output I expected.
A: You are calling split on individual list elements, not on the list itself, so a split isn't going to work here; there is nothing to split. Instead, find the index of the word and slice from there. Check out https://www.tutorialspoint.com/python/python_lists.htm to learn more about Python lists.
l = ['today','is','a','holiday','but','nothing','to','do']
# get the index of but
i = l.index("but") # index() returns the index of the first match, or raises ValueError if the value is not found
# print everything after "but" using slice logic.
print l[i+1:]
| Q: Split a list from some element and getting the item values into another list in Python 2.7 This is my list:
l = ['today','is','a','holiday','but','nothing','to','do']
I wish to put the items after 'but' into another list such as:
another_list = ['nothing','to','do']
I tried the following way:
l = ['today','is','a','holiday','but','nothing','to','do']
for element in l:
parts = element.split('but')
But it does not provide the output I expected.
A: You are calling split on individual list elements, not on the list itself, so a split isn't going to work here; there is nothing to split. Instead, find the index of the word and slice from there. Check out https://www.tutorialspoint.com/python/python_lists.htm to learn more about Python lists.
l = ['today','is','a','holiday','but','nothing','to','do']
# get the index of but
i = l.index("but") # index() returns the index of the first match, or raises ValueError if the value is not found
# print everything after "but" using slice logic.
print l[i+1:]
A: You can use index:
l[l.index('but')+1:]
>>> ['nothing', 'to', 'do']
A: Join the list, partition it, and then re-split it.
' '.join(l).partition('but')[-1].split() # ['nothing', 'to', 'do']
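One caveat with the index-based answers above: list.index raises ValueError when the word is absent. A small guarded helper (the name items_after is made up) returns an empty list instead:

```python
def items_after(items, token):
    """Return everything after the first occurrence of token, or [] if absent."""
    try:
        return items[items.index(token) + 1:]
    except ValueError:
        return []

l = ['today', 'is', 'a', 'holiday', 'but', 'nothing', 'to', 'do']
print(items_after(l, 'but'))      # ['nothing', 'to', 'do']
print(items_after(l, 'missing'))  # []
```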
| stackoverflow | {
"language": "en",
"length": 175,
"provenance": "stackexchange_0000F.jsonl.gz:910272",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683916"
} |
3cf3c3a90f53b3871906c04f013c16db271d3f69 | Stackoverflow Stackexchange
Q: I have a PopupBox that I want to change the icon for. Currently it defaults to DotsVertical and I would like to have it as a DotsHorizontal I am using MaterialDesignInXAML for a WPF application. I have a PopupBox that I want to change the icon for. Currently it defaults to DotsVertical and I would like to have it as a DotsHorizontal.
I tried the following with no luck.
<materialDesign:PopupBox PlacementMode="BottomAndAlignRightEdges" StaysOpen="False">
<materialDesign:PopupBox.Content>
<materialDesign:PackIcon Kind="DotsHorizontal" />
</materialDesign:PopupBox.Content>
<StackPanel>
<TextBlock Text="Test1" />
<TextBlock Text="Test2" />
<TextBlock Text="Test3" />
</StackPanel>
</materialDesign:PopupBox>
Thanks in advance!
A: In order to change the Icon and preserve the current style use this:
<materialDesign:PopupBox PlacementMode="BottomAndAlignRightEdges" StaysOpen="False">
<materialDesign:PopupBox.ToggleContent>
<materialDesign:PackIcon Kind="DotsHorizontal"
Foreground="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType=materialDesign:PopupBox}, Path=Foreground}" />
</materialDesign:PopupBox.ToggleContent>
<StackPanel>
<TextBlock Text="Test1" />
<TextBlock Text="Test2" />
<TextBlock Text="Test3" />
</StackPanel>
</materialDesign:PopupBox>
| Q: I have a PopupBox that I want to change the icon for. Currently it defaults to DotsVertical and I would like to have it as a DotsHorizontal I am using MaterialDesignInXAML for a WPF application. I have a PopupBox that I want to change the icon for. Currently it defaults to DotsVertical and I would like to have it as a DotsHorizontal.
I tried the following with no luck.
<materialDesign:PopupBox PlacementMode="BottomAndAlignRightEdges" StaysOpen="False">
<materialDesign:PopupBox.Content>
<materialDesign:PackIcon Kind="DotsHorizontal" />
</materialDesign:PopupBox.Content>
<StackPanel>
<TextBlock Text="Test1" />
<TextBlock Text="Test2" />
<TextBlock Text="Test3" />
</StackPanel>
</materialDesign:PopupBox>
Thanks in advance!
A: In order to change the Icon and preserve the current style use this:
<materialDesign:PopupBox PlacementMode="BottomAndAlignRightEdges" StaysOpen="False">
<materialDesign:PopupBox.ToggleContent>
<materialDesign:PackIcon Kind="DotsHorizontal"
Foreground="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType=materialDesign:PopupBox}, Path=Foreground}" />
</materialDesign:PopupBox.ToggleContent>
<StackPanel>
<TextBlock Text="Test1" />
<TextBlock Text="Test2" />
<TextBlock Text="Test3" />
</StackPanel>
</materialDesign:PopupBox>
A: Figured it out and will leave an answer here in case anyone else comes across this issue. There is a property called ToggleContent
<materialDesign:PopupBox PlacementMode="BottomAndAlignRightEdges" StaysOpen="False">
<materialDesign:PopupBox.ToggleContent>
<materialDesign:PackIcon Kind="DotsHorizontal" />
</materialDesign:PopupBox.ToggleContent>
<StackPanel>
<TextBlock Text="Test1" />
<TextBlock Text="Test2" />
<TextBlock Text="Test3" />
</StackPanel>
</materialDesign:PopupBox>
| stackoverflow | {
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:910280",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44683929"
} |
f76850775bfe3eb07e0946be48f8bd39936f6212 | Stackoverflow Stackexchange
Q: Remove duplicate words from cells in R I have a 2-column data frame, where the first column is a number, and the second column contains a list of research categories. A reduced version of my data:
aa <- data.frame(a=c(1:4),b=c("Fisheries, Fisheries, Geography, Marine Biology",
"Fisheries", "Marine Biology, Marine Biology, Fisheries, Zoology", "Geography"))
I want to convert column b into a unique list of elements, i.e., remove the duplicates, so that the end result is
a b
1 Fisheries, Geography, Marine Biology
2 Fisheries
3 Marine Biology, Fisheries, Zoology
4 Geography
I am able to do this for individual elements of the list, for example, using unique(unlist(strsplit(aa[1]))) BUT only on individual elements, not the entire column (otherwise it returns a single unique list for the entire column). I can’t figure out how to do this for the entire list, one element at a time. Maybe with lapply, writing my own function around unique(unlist(strsplit()))?
Many thanks!
A: This should work for you.
aa <- data.frame(a=c(1:4),b=c("Fisheries, Fisheries, Geography, Marine Biology",
"Fisheries", "Marine Biology, Marine Biology, Fisheries, Zoology", "Geography"))
library(stringr) # str_split() comes from the stringr package
aa$b <- sapply(aa$b, function(x) paste(unique(unlist(str_split(x, ", "))), collapse = ", "))
| Q: Remove duplicate words from cells in R I have a 2-column data frame, where the first column is a number, and the second column contains a list of research categories. A reduced version of my data:
aa <- data.frame(a=c(1:4),b=c("Fisheries, Fisheries, Geography, Marine Biology",
"Fisheries", "Marine Biology, Marine Biology, Fisheries, Zoology", "Geography"))
I want to convert column b into a unique list of elements, i.e., remove the duplicates, so that the end result is
a b
1 Fisheries, Geography, Marine Biology
2 Fisheries
3 Marine Biology, Fisheries, Zoology
4 Geography
I am able to do this for individual elements of the list, for example, using unique(unlist(strsplit(aa[1]))) BUT only on individual elements, not the entire column (otherwise it returns a single unique list for the entire column). I can’t figure out how to do this for the entire list, one element at a time. Maybe with lapply, writing my own function around unique(unlist(strsplit()))?
Many thanks!
A: This should work for you.
aa <- data.frame(a=c(1:4),b=c("Fisheries, Fisheries, Geography, Marine Biology",
"Fisheries", "Marine Biology, Marine Biology, Fisheries, Zoology", "Geography"))
library(stringr) # str_split() comes from the stringr package
aa$b <- sapply(aa$b, function(x) paste(unique(unlist(str_split(x, ", "))), collapse = ", "))
| stackoverflow | {
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:910320",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684052"
} |
22b558a83a2fce418791a7f7ea65da0a1b312bf9 | Stackoverflow Stackexchange
Q: Jquery Check if element's title contains specific text I have a list like this
<ol class="exampleList">
<li title="Example someTitle">...</li>
<li title="someTitle2">...</li>
<li title="someTitle3">...</li>
<li title="someTitle4">...</li>
</ol>
I want to check if an li has a title containing "Example" and, if so, add a class to it. The result should be like this
<ol class="exampleList">
<li title="Example someTitle" class="Added">...</li>
<li title="someTitle2">...</li>
<li title="someTitle3">...</li>
<li title="someTitle4">...</li>
</ol>
Please help me. And sorry for my bad English.
A: You can use the attribute contains selector (*=)
$("li[title*=Example]").addClass("someClass");
https://api.jquery.com/attribute-contains-selector/
| Q: Jquery Check if element's title contains specific text I have a list like this
<ol class="exampleList">
<li title="Example someTitle">...</li>
<li title="someTitle2">...</li>
<li title="someTitle3">...</li>
<li title="someTitle4">...</li>
</ol>
I want to check if an li has a title containing "Example" and, if so, add a class to it. The result should be like this
<ol class="exampleList">
<li title="Example someTitle" class="Added">...</li>
<li title="someTitle2">...</li>
<li title="someTitle3">...</li>
<li title="someTitle4">...</li>
</ol>
Please help me. And sorry for my bad English.
A: You can use the attribute contains selector (*=)
$("li[title*=Example]").addClass("someClass");
https://api.jquery.com/attribute-contains-selector/
| stackoverflow | {
"language": "en",
"length": 82,
"provenance": "stackexchange_0000F.jsonl.gz:910343",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684117"
} |
1a42a1fa5ff84cf6f84ac03fe93a17c76cdb642a | Stackoverflow Stackexchange
Q: iOS - Change the height of text field inside a UIStackView I have this layout
And I want to resize the height of every text field inside, but I can't do it in the inspector. Is it something about constraints?
Can someone help me with this thing?
Here is my tree of components
A: yes you will need to set height constraints for the stuff that is in your stack view. Have a look at this:How to set height of containers in stack view?
| stackoverflow | {
"language": "en",
"length": 83,
"provenance": "stackexchange_0000F.jsonl.gz:910347",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684130"
} |
25b1b3816899500ae2e48d5877e3f74418750d63 | Stackoverflow Stackexchange
Q: Is there a way to get F#'s constructor-parameters-are-automatically-immutable-private-members feature in C#? In F#, the constructor parameters are automatically saved as immutable fields in the class without having to write the declaration of the field or the copying of the constructor parameter to the field in the constructor.
Is there any way to mimic this functionality in C#? Is there a reason why a future version of C# couldn't have this feature (i.e. is there some fundamental limitation of the existing language that conflicts with this feature)?
A: The C# equivalent will be records. These are currently earmarked for C# 8, but there's no guarantees (especially as they were proposed for C# 6 & 7 too and were delayed in both cases).
| stackoverflow | {
"language": "en",
"length": 122,
"provenance": "stackexchange_0000F.jsonl.gz:910370",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684203"
} |
56182aad6d4f47e0bf6774ab3fc29c8871171896 | Stackoverflow Stackexchange
Q: Can't edit LaunchScreen.storyboard on Visual Studio I have a Xamarin.Forms app, and am trying to publish the iOS app in it. I'm following the Xamarin tutorial here for the launch screen.
In steps 6 and 7 it assumes there is some View on the screen. I don't see one. This is what I have:
When I try to drag an Image View as mentioned in the tutorial - I get a "do not enter" symbol on the Image View. (I assume, because I first need the default View there.)
What now? (I'm using VS2017 on Windows 10 Pro.)
A: Just add ViewController and go from there
A: Drag and drop ViewController to LaunchScreen.storyboard
Example here:
| stackoverflow | {
"language": "en",
"length": 116,
"provenance": "stackexchange_0000F.jsonl.gz:910371",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684206"
} |
e21a5115942aaf16c97a148492368f40d28cc407 | Stackoverflow Stackexchange
Q: Game Center Leaderboard not appearing on production version of game? My game went on the app store a few days ago and I cannot see my leaderboard at all. It just says "no data available"
However, when I build my game directly to my phone using an ad hoc profile I'm seeing my beta player's scores..
My leaderboard is in the not live status. I'm not sure if that matters or how to change that. I have other games with a "not live" leaderboard that are working on production..
Should I delete and remake the leaderboard now that my game is actually up on the app store?
I cleared my test data.. no luck.
Do I just need to wait? It's confusing.
I've called Apple support and they're "working on it", but I feel like I'm stuck in a black hole now. Is there any way I can fix this myself?
A: I'm not sure if this section existed before. But I needed to explicitly add my created leaderboards to my release.
| stackoverflow | {
"language": "en",
"length": 171,
"provenance": "stackexchange_0000F.jsonl.gz:910372",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684208"
} |
56d7198068ad9ca99b0ebf04a36c76ba28a142cd | Stackoverflow Stackexchange
Q: Spring Oauth2RestTemplate error "access_denied" I need to consume an OAuth2 REST service with the ClientCredential grant.
I'm using Spring Security and Spring OAuth2.
To get the access token I need to call the token URI, passing it a clientId and a password.
Basically I need to send a POST with this body
{"clientId":"demo",
"password": "demo_password"
}
and I should get something like that in the response
{
"expiresIn": 3600,
"accessToken": "EF2I5xhL2GU9pAwK",
"statusCode": 200,
"refreshToken": "72BIcYWYhPjuPDGb"
}
I was trying to configure OAuth2RestTemplate in this way
@Configuration
@EnableOAuth2Client
public class RestTemplateConf {
@Value("${ApiClient}")
private String oAuth2ClientId;
@Value("${ApiSecret}")
private String oAuth2ClientSecret;
@Value("${ApiUrl}")
private String accessTokenUri;
@Bean
public OAuth2RestTemplate oAuthRestTemplate() {
ClientCredentialsResourceDetails resourceDetails = new ClientCredentialsResourceDetails();
resourceDetails.setClientId(oAuth2ClientId);
resourceDetails.setClientSecret(oAuth2ClientSecret);
resourceDetails.setAccessTokenUri(accessTokenUri);
resourceDetails.setTokenName("accessToken");
OAuth2RestTemplate restTemplate = new OAuth2RestTemplate(resourceDetails, new DefaultOAuth2ClientContext());
return restTemplate;
}
}
but I always get
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is error="access_denied", error_description="Error requesting access token."] with root cause
org.springframework.web.client.HttpServerErrorException: 500 Internal Server Error
If I make a POST call to the tokenUri with Postman, for instance, I get the token correctly...
| stackoverflow | {
"language": "en",
"length": 181,
"provenance": "stackexchange_0000F.jsonl.gz:910375",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684212"
} |
2543b2eb1ccda72b85a5f0f289ef5fdbfc94d551 | Stackoverflow Stackexchange
Q: Python webkit2 can't open links in Gmail or Google inbox I'm creating a Gtk+ app with Python that contains a WebKit2 webview. Anytime an outgoing link in GMail or Google Inbox is clicked a popup is spawned by javascript saying:
"Grrr! A popup blocker may be preventing the application from opening the page. If you have a popup blocker, try disabling it to open the window."
I've enabled the "javascript-can-open-windows-automatically" setting for the webview but it still doesn't work.
If I right-click on any of the outgoing links, and select "open link" or "open in new window", the links will open.
I've also created a handler for navigation requests using the "decide-policy" and "create" signals, in the hope of retrieving the URI; however, in the request object the URI is blank.
This seems to be a problem with webview and gmail/inbox in general as discussed here. The previous question hasn't been solved, therefore I'm asking again but in the context of Python.
| stackoverflow | {
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:910383",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684251"
} |
c10e96e540fae3eb06a0ea562b3d3bcd13b4695d | Stackoverflow Stackexchange
Q: MBProgressHUD not show When I write like this, it works as expected:
override func viewDidLoad() {
super.viewDidLoad()
MBProgressHUD.showAdded(to: navigationController!.view, animated: true)
}
However, when I put it in DispatchQueue.main block, the hud doesn't show:
override func viewDidLoad() {
super.viewDidLoad()
DispatchQueue.main.async {
MBProgressHUD.showAdded(to: self.navigationController!.view, animated: true)
}
}
I debug the view hierarchy, and there is a layout issue:
"Position and size are ambiguous for MBProgressHUD"
The navigationController is a childViewController of fatherViewController and the container view is set by auto layout in fatherViewController's view.
Is that causing the issue?
A: I have built a demo to test it, but the problem didn't occur. So I rechecked my code and found I had put some work in DispatchQueue.main.async which blocked the UI and caused the problem.
A: I performed test especially for you and following code works for me:
override func viewDidLoad() {
super.viewDidLoad()
DispatchQueue.main.async {
MBProgressHUD.showAdded(to: self.navigationController!.view, animated: true)
}
}
so the problem is somewhere else located
A: The root UIView geometry is not being calculated(viewDidLayoutSubviews is not called) at the point of viewDidLoad. I'd recommend putting your code into viewDidAppear.
Also, no need to explicitly call it inside DispatchAsync.main, as other mentioned it is being called in the UI thread unless you want to call it from the background thread.
A: Actually MBProgressHUD works on UI main thread and you are trying to call it in background thread.
Try this:-
override func viewDidLoad() {
    super.viewDidLoad()
    DispatchQueue.main.async {
        self.start()
    }
}

func start() {
    MBProgressHUD.showAdded(to: self.navigationController!.view, animated: true)
}
| stackoverflow | {
"language": "en",
"length": 247,
"provenance": "stackexchange_0000F.jsonl.gz:910428",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684397"
} |
e33fe0b29d70740522450cd5b45c86c4630eb92c | Stackoverflow Stackexchange
Q: Is the constructor still needed in React with autobinding and property initializers I am refactoring an es6 class based React component that uses the normal constructor, and then binds methods, and defines state/attributes within that constructor. Something like this:
class MySpecialComponent extends React.Component {
constructor(props) {
super(props)
this.state = { thing: true }
this.myMethod = this.myMethod.bind(this)
this.myAttribute = { amazing: false }
}
myMethod(e) {
this.setState({ thing: e.target.value })
}
}
I want to refactor this so that I am autobinding the functions, and using property initializers for the state and attributes. Now my code looks something like this:
class MySpecialComponent extends React.Component {
state = { thing: true }
myAttribute = { amazing: false }
myMethod = (e) => {
this.setState({ thing: e.target.value })
}
}
My question is, do I still need the constructor? Or are the props also autobound? I would have expected to still need the constructor and included super(props), but my code seems to be working and I'm confused.
Thanks
A: You don't need an explicitly defined constructor unless you need to reference the props in your initial state object.
A: You don't need to define a constructor explicitly and then call super(props). You can access the props as in the example below, i.e. 'prop1'.
class MySpecialComponent extends React.Component {
state = {
thing: true ,
prop1:this.props.prop1
}
myAttribute = { amazing: false }
myMethod = (e) => {
this.setState({ thing: e.target.value })
}
render(){
console.log(this.state.prop1);
return(
<div>Hi</div>
);
}
}
ReactDOM.render(<MySpecialComponent prop1={1}/> , mountNode);
A: From my understanding, you don't need to type out a constructor at all when using class properties (as in your second code example). The accepted answer states that you do need one if you "need to reference the props in your initial state object," but if you're using said class properties, then you're probably using Babel to transpile it, in which case a constructor is used, it's just being done behind the scenes. Because of this, you don't need to add a constructor yourself, even if you are using props in state.
See this aricle for better examples and a better explanation.
| stackoverflow | {
"language": "en",
"length": 354,
"provenance": "stackexchange_0000F.jsonl.gz:910451",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684481"
} |
f60fbe3fd514bb1e6010e2f2e0514c360ee5a924 | Stackoverflow Stackexchange
Q: SQL Server: CREATE FUNCTION with declare variables inside I would like to create a function in SQL Server.
In this function, I need to define some variables and then use it in the SELECT.
SQL looks like below:
CREATE FUNCTION [dbo].[MyFussnction]
(
@path [nvarchar](10)
)
RETURNS TABLE
BEGIN
DECLARE @xx varchar(50);
SET @xx = 'Windows%';
RETURN
SELECT * FROM MyTable WHERE DataPath LIKE @path AND XX LIKE @xx;
END
But, it is not able to be created and the error says:
Msg 102, Level 15, State 31, Procedure MyFussnction, Line 12 [Batch Start Line 0]
Incorrect syntax near 'BEGIN'.
A: You need to define columns of table to return, then you can use declare, something like below
CREATE FUNCTION [dbo].[MyFussnction] (
@path [nvarchar](10)
)
RETURNS @Mytable TABLE
(
ID int PRIMARY KEY NOT NULL
-- define other columns
)
AS
BEGIN
DECLARE @xx varchar(50);
SET @xx = 'Windows%';
Insert into @Mytable
SELECT Id FROM MyTable WHERE DataPath LIKE @path AND XX LIKE @xx;
RETURN;
END
| stackoverflow | {
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:910468",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684527"
} |
d67af615f9195d922df0fd13e7925fe5aedaabc7 | Stackoverflow Stackexchange
Q: GLoVe word embedding for special words in seq2seq I am training a seq2seq model in Tensorflow. I want to use GLoVe as word embedding for words in my sequences. In seq2seq, I am using some of tags like EOS(End of Sequence), GO(start of sequence for decoder), PAD(for padding sequence), as words in my sequences.
But GLoVe won't be having embedding for these tags.
So, how should I represent these tags ?
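A: One common approach (a sketch, not the only option): build the embedding matrix by reserving rows for the special tags before the GloVe rows. PAD is typically all zeros (and masked out of the loss), while GO/EOS/UNK get small randomly initialized vectors that the seq2seq model trains along with the rest; this matrix would then initialize the TensorFlow embedding variable. The tag names and the init range below are illustrative assumptions.

```python
import random

def build_embedding_matrix(glove, dim, specials=("PAD", "GO", "EOS", "UNK")):
    """Build (word2id, matrix) with rows for special tags plus GloVe words.

    `glove` maps word -> vector (length `dim`).  PAD gets an all-zero row;
    the other special tags get small random rows meant to be trained.
    """
    rng = random.Random(0)  # fixed seed so the layout is reproducible
    word2id, rows = {}, []
    for tag in specials:
        word2id[tag] = len(rows)
        if tag == "PAD":
            rows.append([0.0] * dim)  # zero vector, usually masked in the loss
        else:
            rows.append([rng.uniform(-0.05, 0.05) for _ in range(dim)])
    for word, vec in glove.items():
        word2id[word] = len(rows)
        rows.append(list(vec))
    return word2id, rows
```

The resulting matrix can be passed as the initial value of the embedding variable, optionally keeping the GloVe rows frozen while the special-tag rows stay trainable.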
| stackoverflow | {
"language": "en",
"length": 72,
"provenance": "stackexchange_0000F.jsonl.gz:910552",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684721"
} |
8fc5824fc60f3100b6272c01319e5082415d9df8 | Stackoverflow Stackexchange
Q: how to type sudo password when using subprocess.call? I defined a function that switches my proxy settings every now and then.
The problem is that I want it to run in a loop without manual intervention. But when I execute the program with sudo it gets called the first time and runs smoothly; the second time it asks me for my sudo password. Here is the bit of code:
def ProxySetting(Proxy):
print "ProxyStetting(Proxy)"
call("networksetup -setwebproxy 'Wi-Fi' %s" "on" % Proxy, shell = True)
call("networksetup -setsecurewebproxy 'Wi-Fi' %s" "on" % Proxy, shell = True)
call("networksetup -setftpproxy 'Wi-Fi' %s" "on" %Proxy , shell=True)
I could use threading but I'm sure there is a way of doing it that won't cause problems. How can I hard-code my sudo password so that it runs at the beginning of the function?
A: Another method of passing your password to a shell command through python that wouldn't involve it showing up in any command history or ps output is:
p = subprocess.Popen(['sudo', self.resubscribe_script], stdin=subprocess.PIPE)
p.communicate('{}\n'.format(self.sudo_password))
Note that using communicate will only allow one input to be given to stdin; there are other methods for getting a reusable input.
A: Here you can execute a command sudo without interactive prompt asking you to type your password :
from subprocess import call
pwd='my password'
cmd='ls'
call('echo {} | sudo -S {}'.format(pwd, cmd), shell=True)
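A: A safer variant of the same idea, sketched in Python 3 (helper names are my own): pass the command as an argument list and feed the password to `sudo -S` via stdin, which avoids shell quoting and keeps the password out of `ps` output and shell history, unlike the `echo password | sudo` form above.

```python
import subprocess

def sudo_argv(cmd):
    """Prefix an argument list with sudo flags: -S reads the password
    from stdin, -k drops cached credentials so -S is always consulted."""
    return ["sudo", "-S", "-k"] + list(cmd)

def run_with_sudo(cmd, password):
    """Run `cmd` (an argument list, e.g. ["networksetup", ...]) under sudo,
    sending the password over stdin instead of embedding it in the command."""
    proc = subprocess.Popen(
        sudo_argv(cmd),
        stdin=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    proc.communicate((password + "\n").encode())
    return proc.returncode
```

For the original use case, each `networksetup ...` call would be passed as a list, e.g. `run_with_sudo(["networksetup", "-setwebproxy", "Wi-Fi", proxy, "on"], pwd)`. Hard-coding the password still leaves it readable in the source, so loading it from a restricted-permission file is preferable.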
| stackoverflow | {
"language": "en",
"length": 225,
"provenance": "stackexchange_0000F.jsonl.gz:910565",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684764"
} |
5dfc89807bdce33cfec083503f53119526733e29 | Stackoverflow Stackexchange
Q: How to add base URI in RestTemplate Is there any other way to initialize RestTemplate with base URI other than
extending RestTemplate and overriding the execute method? Currently I have the code like below. Thanks
class CustomRestTemplate extends RestTemplate {
String baseUrl
@Override
protected T doExecute(URI url, HttpMethod method, RequestCallback requestCallback, ResponseExtractor responseExtractor) throws RestClientException {
return super.doExecute(new URI(baseUrl + url.toString()), method, requestCallback, responseExtractor)
}
A: Spring 5.0:
This sends a GET request to http://localhost:8080/myservice
RestTemplate restTemplate = new RestTemplate();
restTemplate.setUriTemplateHandler(new DefaultUriBuilderFactory("http://localhost:8080"));
restTemplate.getForObject("/myservice", String.class);
A: If you are using Spring Boot, you can use org.springframework.boot.web.client.RestTemplateBuilder.rootUri(baseUrl).build()
A: You can create your custom DefaultUriTemplateHandler
DefaultUriTemplateHandler defaultUriTemplateHandler = new DefaultUriTemplateHandler();
defaultUriTemplateHandler.setBaseUrl(url);
And then add it to restTemplate
return new RestTemplateBuilder()
.uriTemplateHandler(defaultUriTemplateHandler)
.build();
A: Spring's RestTemplate (version 4.2.0.RELEASE) support a method named setUriTemplateHandler. If this is never set, it contains a DefaultUriTemplateHandler
DefaultUriTemplateHandler supports a method named 'setBaseUrl`
So, you can set the base URL there.
A: AFAIK there is no way other than what you have listed above
| stackoverflow | {
"language": "en",
"length": 164,
"provenance": "stackexchange_0000F.jsonl.gz:910584",
"question_score": "27",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684823"
} |
e5d4053088c824338ccfb12573d41a67f43b2123 | Stackoverflow Stackexchange
Q: error: zlib library and headers are required R on HPC System:
Red Hat Enterprise Linux Server release 6.5 (Santiago)
I’ve installed zlib 1.2.11 on the home folder of a Red Hat HPC as part of the process for installing R base 3.4.0.
I get this error even after successful install of zlib
checking for inflateInit2_ in -lz... no
checking whether zlib support suffices... configure: error: zlib library and headers are required
I’ve checked R documentation and configure file for the issue of R requiring versions newer than 1.2.6 but not lexicographically recognizing 1.2.11 as >1.2.6, and that particular bug was patched in R 3.4.
I've reviewed this question posted previously and the response is not relevant due to R 3.4 resolving that issue.
Any suggestion and/or input would be much appreciated.
| stackoverflow | {
"language": "en",
"length": 132,
"provenance": "stackexchange_0000F.jsonl.gz:910593",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684848"
} |
8c6f3dc114cf7e4f380a690fb62ad23585c68e95 | Stackoverflow Stackexchange
Q: Access to store.dispatch in a saga for use with react router redux I'm currently refactoring a react + redux + saga + react-router 3 application to use the new react-router 4 due to breaking changes. Before I would use browserHistory to direct to an appropriate path based on the results from a saga. Due to react-router 4 changes, I can't use browserHistory any longer.
Now I've incorporated react-router-redux to essentially do what browserHistory did. The problem is that react-router-redux only works within a store.dispatch, e.g. store.dispatch(push('/')). I can't seem to find a way to access either the store or its dispatch function inside my sagas. Any ideas on how to access store.dispatch within a saga? I know you can pass arguments in the root saga but I don't know how to retrieve them in my actual sagas.
A: Use redux-saga's put effect, which dispatches redux actions to the store - docs.
import { call, put } from 'redux-saga/effects'
// ...
function* fetchProducts() {
const products = yield call(Api.fetch, '/products')
// create and yield a dispatch Effect
yield put({ type: 'PRODUCTS_RECEIVED', products })
}
| stackoverflow | {
"language": "en",
"length": 184,
"provenance": "stackexchange_0000F.jsonl.gz:910615",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44684910"
} |
0307df8dcf647032c5c4bea4aae55258a3ca0314 | Stackoverflow Stackexchange
Q: Boundary points from a set of coordinates I have a set of lat,long points, and from these points I'd like to extract the ones that form the boundary. I've used convexhull, but for my purpose it is not enough, as convexhull just returns the most distant points that form the polygon in which all the points fit; I need ALL the points that form the perimeter, something like the image I've attached. What could I do? Is there some kind of package ready to use instead of implementing a spatial algorithm myself?
Thanks
A: You can use a package that computes concave hulls (alpha shapes). Here is an example:
import alphashape
import matplotlib.pyplot as plt
points = ...  # put your points here (can be a list or array)
alpha = 0.95 * alphashape.optimizealpha(points)
hull = alphashape.alphashape(points, alpha)
hull_pts = hull.exterior.coords.xy
fig, ax = plt.subplots()
ax.scatter(hull_pts[0], hull_pts[1], color='red')
A: Assuming that you have all points (latitudes and longitudes) in two lists LATS, LONGS respectively, this python snippet can do the trick. hullPoints will have the set of points that can draw the convex hull.
import numpy as np
from scipy.spatial import ConvexHull
allPoints=np.column_stack((LATS,LONGS))
hullPoints = ConvexHull(allPoints)
A: Use Concave hull (Alpha shape) instead.
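For contrast with the alpha-shape suggestions above: the plain convex hull that the question found insufficient needs no third-party package at all. A stdlib-only sketch of Andrew's monotone chain algorithm, offered as a baseline (the function name and sample points are illustrative):

```python
def cross(o, a, b):
    # z-component of the cross product OA x OB; > 0 means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
# [(0, 0), (1, 0), (1, 1), (0, 1)]  (the interior point is dropped)
```

An alpha shape, by contrast, also keeps perimeter points along concave stretches of the outline, which is what the question actually asks for.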
| stackoverflow | {
"language": "en",
"length": 193,
"provenance": "stackexchange_0000F.jsonl.gz:910656",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685052"
} |
6cfe54cc854af536ba2d991397998d237326292a | Stackoverflow Stackexchange
Q: Can I combine {% include %} with custom filter in django 1.11? First I have created custom filter that makes autoindentation:
templatetags/indent.py
from django import template
from django.template.defaultfilters import stringfilter
register = template.Library()
@register.filter(name='indent')
@stringfilter
def indent(value, arg):
import re
regex = re.compile('^', re.M)
return re.sub(regex, ' ' * int(arg), value)
and I want to include subtemplate that will be indented by certain number of spaces (eg. 8):
Is this allowed in django's templates:
{% load indent %}
{% include "footer.html"|indent:"8" %}
A: I think you can do it by using the filter tag:
{% load indent %}
{% filter indent:"8" %}
{% include "footer.html" %}
{% endfilter %}
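Since the filter body is plain re.sub, its behaviour can be sanity-checked outside Django entirely; a minimal sketch (no template machinery involved):

```python
import re

def indent(value, arg):
    # re.M makes '^' match at the start of every line,
    # so ' ' * int(arg) is prepended to each line.
    return re.sub(re.compile('^', re.M), ' ' * int(arg), value)

print(repr(indent('<footer>\n</footer>', '8')))
# '        <footer>\n        </footer>'
```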
| stackoverflow | {
"language": "en",
"length": 109,
"provenance": "stackexchange_0000F.jsonl.gz:910682",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685126"
} |
a6aade42d9b3dda6dd110edaa1ad7f85d4b1cb32 | Stackoverflow Stackexchange
Q: Ansible password_hash with variable I am writing a simple task to create a user. As part of this task, I want to read the password from defaults/main.yml
defaults/main.yml
test_user: testuser
test_group: testgroup
test_user_password: somepassword
my tasks file is as below
- name: "Creating Group for testuser"
group:
name: "{{ test_group }}"
state: present
- name: "Creating testuser"
user:
name: "{{ test_user }}"
password: "{{ [test_user_password] | password_hash('sha512') }}"
shell: /bin/ksh
group: "{{ test_group }}"
update_password: on_create
This gives me an unexpected templating error. How can I read the password from main.yml and use it inside the password filter?
A: In the Creating testuser task remove the square brackets around test_user_password. When a variable is referenced in Ansible it has to be enclosed with {{}}.
- hosts: localhost
remote_user: user
become: yes
vars:
test_user: testuser
test_group: testgroup
test_user_password: somepassword
tasks:
- name: Creating Group for testuser
group:
name: "{{ test_group }}"
state: present
- name: Creating testuser
user:
name: "{{ test_user }}"
password: "{{ test_user_password | password_hash('sha512') }}"
shell: /bin/bash
group: "{{ test_group }}"
update_password: on_create
A: Take into account that the given task will always be marked as "changed", meaning that it is not idempotent. To avoid this behaviour you can add a salt as the second parameter to the password_hash function, like this:
- name: Creating testuser
user:
name: "{{ test_user }}"
password: "{{ test_user_password | password_hash('sha512', test_user_salt) }}"
shell: /bin/bash
group: "{{ test_group }}"
update_password: on_create
| stackoverflow | {
"language": "en",
"length": 236,
"provenance": "stackexchange_0000F.jsonl.gz:910689",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685150"
} |
d0898c8d41dd87537f100622d5eacdc900111039 | Stackoverflow Stackexchange
Q: Error installing and using laravel mix I want to install Mix in laravel . I run "npm install --no-bin-links" in my IDE terminal or in CMD but I get this error: (my laravel version is 5.4.27)
D:\wamp64\www\laravelProject>npm install --no-bin-links
npm WARN deprecated node-uuid@1.4.8: Use uuid module instead
npm ERR! Windows_NT 6.3.9600
npm ERR! argv "D:\Program Files\nodejs\node.exe" "D:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js" "install" "--no-bin-links"
npm ERR! node v6.11.0
npm ERR! npm v3.10.10
npm ERR! Maximum call stack size exceeded
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR! https://github.com/npm/npm/issues
npm ERR! Please include the following file with any support request:
npm ERR! D:\wamp64\www\laravelProject\npm-debug.log
A: I found it!
I removed node_modules, then removed package-lock.json and ran the install again.
A: The following command helped me too:
npm cache clean --force
Also, you might need to add sudo on Linux.
| stackoverflow | {
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:910700",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685191"
} |
2025f0725f6a186a52afa0dc7d2caefa9e0e8fbc | Stackoverflow Stackexchange
Q: Will go test code only referenced in test files be compiled into the binary? I am wondering what code will be compiled into the go binary if you compile a binary using go build ./... . This will compile a binary that has a cli program. For this cli program, I have test code and non test code. I currently have several flavours of test code:
*
*foo_test.go in package foo_test
*foo_internal_test.go in package foo
*testutil.go in package testutil that provides test utility functions
No test code is actually referenced in the non test code. The testutil functions are only imported in the test files.
If the test code is in fact compiled into the binary, how much of a problem is this?
A: I believe that if you have an init() function in an otherwise unreachable file, it will still be linked into the executable.
_test.go files would still be excluded.
This bit us when we had some test helper code that was not in _test files. One file had an init() function which ran at executable startup.
A: A go binary only includes code reachable from its main() entry point. For test binaries main() is the test runner.
As to "how much of a problem" it is if it were included... none. It would increase the binary size and compilation time somewhat but otherwise have no impact - code that isn't executed, by definition, does nothing.
| stackoverflow | {
"language": "en",
"length": 239,
"provenance": "stackexchange_0000F.jsonl.gz:910704",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685206"
} |
0ada06c1d3a029dc7b1434db1dbe6dcc99957806 | Stackoverflow Stackexchange
Q: How can i sort semantic versions in pandas? I have a list of software releases as versions. The software follows the semantic version specification, meaning there is a major version, a minor version and patch versions:
*
*0.1
*0.2
*0.2.1
*0.3
*...
*0.10
*0.10.1
Is there a way in pandas to sort these versions so that 0.2 is bigger than 0.1 but smaller than 0.10?
A: You can use the standard distutils for this!
from distutils.version import StrictVersion
versions = ['0.1', '0.10', '0.2.1', '0.2', '0.10.1']
versions.sort(key=StrictVersion)
Now it's sorted like this: ['0.1', '0.2', '0.2.1', '0.10', '0.10.1']
Source
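One caveat to the approach above: distutils (and with it StrictVersion) is deprecated and removed in Python 3.12. For purely numeric versions like these (no pre-release tags; that restriction is an assumption of this sketch), an integer-tuple key gives the same ordering with no imports:

```python
def version_key(v):
    # '0.2.1' -> (0, 2, 1); tuples compare element-wise, and a shorter
    # tuple with an equal prefix sorts first, so '0.2' < '0.2.1' < '0.10'.
    return tuple(int(part) for part in v.split('.'))

versions = ['0.1', '0.10', '0.2.1', '0.2', '0.10.1']
print(sorted(versions, key=version_key))
# ['0.1', '0.2', '0.2.1', '0.10', '0.10.1']
```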
A: A pandas solution using sorted with the StrictVersion key, assigning the result back to the column:
print (df)
ver
0 0.1
1 0.2
2 0.10
3 0.2.1
4 0.3
5 0.10.1
from distutils.version import StrictVersion
df['ver'] = sorted(df['ver'], key=StrictVersion)
print (df)
ver
0 0.1
1 0.2
2 0.2.1
3 0.3
4 0.10
5 0.10.1
EDIT:
For sort index is possible use reindex:
print (df)
a b
ver
0.1 1 q
0.2 2 w
0.10 3 e
0.2.1 4 r
0.3 5 t
0.10.1 6 y
from distutils.version import StrictVersion
df = df.reindex(index=pd.Index(sorted(df.index, key=StrictVersion)))
print (df)
a b
0.1 1 q
0.2 2 w
0.2.1 4 r
0.3 5 t
0.10 3 e
0.10.1 6 y
A: Those work fine if your values are unique, but here is the best solution that I've found for columns of semantic values that might have duplication.
import pandas as pd
from distutils.version import StrictVersion
unique_sorted_versions = sorted(set(df['Version']), key=StrictVersion)
groups = [df[df['Version'].isin([version])]
for version in unique_sorted_versions]
new_df = pd.concat(groups)
A: I came across this problem too; after googling a lot (the first page I found was this SO question :D), I suppose my solution is worth mentioning.
So for now there are two sort functions in pandas, sort_values and sort_index, and neither of them has a key parameter for us to pass a custom sort function to. See this github issue.
jezrael's answer is very helpful and I'll build my solution based on that.
df['ver'] = sorted(df['ver'], key=StrictVersion) is useful only if the version column is the only column in the DataFrame; otherwise we need to sort the other columns along with the version column.
jezrael reindexes the DataFrame, because the wanted index order can be obtained with the builtin sorted function, which does have a key parameter.
But, what if the version is not the index and I don't want to set_index('ver')?
We can use apply to map the original version string to a StrictVersion object, then sort_values will sort in the wanted order:
from distutils.version import StrictVersion
df['ver'] = df['ver'].apply(StrictVersion)
df.sort_values(by='ver')
A: You can come up with something like this:
for module, versions in result.items():
result[module] = sorted(
versions, key=lambda x: mixutil.SemVersion(x.version), reverse=True
)
| stackoverflow | {
"language": "en",
"length": 454,
"provenance": "stackexchange_0000F.jsonl.gz:910708",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685219"
} |
6959f67ed2bba200a5e5c21135940f9959100b21 | Stackoverflow Stackexchange
Q: React Native - Can't place icon on top of image I'm trying to place an icon over an image, but all it's doing is pushing the image down the page.
Here's what it currently looks like:
I'm trying to place the blue back arrow on top of that image as you can see.
Here's what my markup looks like:
<ScrollView style={ styles.container }>
<View style={ styles.coverImageContainer }>
<Image
style={ styles.coverImage }
source={ require('./img.jpg') }
>
<View>
<Ionicons name="ios-arrow-back" color="#4F8EF7" size={25} />
</View>
</Image>
</View>
...
And here are my styles:
container: {
backgroundColor: '#f9f9f9',
flex: 1,
},
coverImageContainer: {
backgroundColor: '#000',
},
coverImage: {
width: null,
height: 170,
resizeMode: 'cover',
opacity: 0.7,
flex: 1,
},
What am I doing wrong?
If I get rid of the icon, the image displays how I want it to, but I would like the back button icon on top of it too. Here's what it looks like without the icon:
A: Position Icon component absolutely.
<Ionicons name="ios-arrow-back" color="#4F8EF7" size={25} style={{ position: 'absolute', top: 30, left: 10 }} />
A: The StatusBar is always visible, even if you use position:'absolute'; zIndex: 99999 on the back button. There are 2 ways:
*
*Remove the status bar by adding <StatusBar hidden={true}/> inside render
*Add marginTop: 22 to move the arrow a little downward
| stackoverflow | {
"language": "en",
"length": 215,
"provenance": "stackexchange_0000F.jsonl.gz:910710",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685227"
} |
faaa840304ad17badc77deeaa2e7f6bb0ceb678c | Stackoverflow Stackexchange
Q: How do I access a specific property from a local storage object? (Angular 2/angular-2-local-storage) So I'm storing an object into local storage using angular-2-local-storage as such:
this.localStorage.set('myObject', {'prop1': "string", 'prop2': "string"});
Then I'm trying to get a specific property of that object as such:
this.localStorage.get('myObject').prop1;
But doing this gives me the error Property 'prop1' does not exist on type '{}'
I've also tried storing the object in a local variable, and then trying to access the property as such, but it gives me the same error.
var myObject = this.localStorage.get('myObject');
var myProperty = myObject.prop1;
What am I doing wrong, and how do I access the data?
A: I'm using localStorage.setItem / localStorage.getItem with Angular2, so it may not be the same behavior. But have you tried stringifying and parsing your object?
In my case :
localStorage.setItem('myObject', JSON.stringify({'prop1': "string", 'prop2': "string"}));
let temp = JSON.parse(localStorage.getItem('myObject'));
temp.prop1; // gives "string"
In your case, something like :
localStorage.set('myObject', JSON.stringify({'prop1': "string", 'prop2': "string"}));
let temp = JSON.parse(localStorage.get('myObject'));
temp.prop1;
Hope it can help!
A: You must save the object as a string
localStorage.setItem('myObjectName', JSON.stringify(this.myObject));
and then obtain the string and convert it to an object
this.myObject = JSON.parse(localStorage.getItem('myObjectName'));
A: Can you provide more details about how you are configuring you angular2 local storage. I checked the source code and its doing the parsing and stringifing before you get or set respectively.I have not tested it but you should be able to do the following:
import { LocalStorageModule } from 'angular-2-local-storage';
@NgModule({
imports: [
LocalStorageModule.withConfig({
prefix: 'my-app',
storageType: 'localStorage'
})
],
declarations: [
..
],
providers: [
..
],
bootstrap: [AppComponent]
})
export class AppModule {}
and then in you component you can inject this service as following
constructor (
private localStorageService: LocalStorageService
) {
}
ngOnInit() {
this.localStorageService.set(key, value);
console.log(this.localStorageService.get(key));
}
You don't have to parse or stringify anything. Hope this will help.
AGAIN: I have not tested it; this code is entirely based on the angular-2-local-storage documentation.
| stackoverflow | {
"language": "en",
"length": 325,
"provenance": "stackexchange_0000F.jsonl.gz:910724",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685261"
} |
2a5308d3cf39fff1c6eb5bebcf28b428f83daf74 | Stackoverflow Stackexchange
Q: Get package name and parametrized type from a field element - Annotation Processor How can I get package name, generic type and Parametrized type from a type from a field element in Annotation processor?
Say, if Element.asType returns java.util.List<String>, I want to get
*
*Package name java.util
*Generic type List<E> or raw type List (preferably raw type)
*Actual Type String
Is there any method in element utils, type utils?
A: Getting the package java.util:
Element e = processingEnv.getTypeUtils().asElement(type);
PackageElement pkg = processingEnv.getElementUtils().getPackageOf(e);
Getting the raw type List:
TypeMirror raw = processingEnv.getTypeUtils().erasure(type);
Getting the type arguments e.g. String:
if (type.getKind() == TypeKind.DECLARED) {
List<? extends TypeMirror> args =
((DeclaredType) type).getTypeArguments();
args.forEach(t -> {/*...*/});
}
See: Types.asElement, Elements.getPackageOf, Types.erasure and DeclaredType.getTypeArguments.
| stackoverflow | {
"language": "en",
"length": 120,
"provenance": "stackexchange_0000F.jsonl.gz:910727",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685279"
} |
a8a80426c0441cfd391676b6299dba4498ef89f9 | Stackoverflow Stackexchange
Q: How can I increase the margins for the R function, pheatmap? I am plotting a heatmap using pheatmap (Documentation). I am plotting a matrix in a fairly straightforward way:
pheatmap(mat, annotation_col=df, labels_col=rld$Infection_Line, fontsize_row=5, fontsize_col=7)
The bottom of my plot is getting cut off so that I can't see the column names at the bottom. It looks like this:
I have tried to increase the margins using par() and oma(), as well as cexRow=...
I need to make it so that I can see these long column names without reducing my plot size. I just want to stretch the margin at the bottom down. Does anyone know how to do this?
Thanks in advance.
| stackoverflow | {
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:910733",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685299"
} |
78875ef22a3dc5aaed6469ba56c15648abb82e7a | Stackoverflow Stackexchange
Q: react native "attempt to set value to an immutable object" I'm creating a draggable box which I can drag anywhere on the screen, but I'm getting this error: "You attempted to set the key _value on an object that is meant to be immutable and has been frozen". Can anyone tell me what I am doing wrong?
My Code:
import React, { Component } from 'react'
import {
AppRegistry,
StyleSheet,
Text,
Button,
ScrollView,
Dimensions,
PanResponder,
Animated,
View
} from 'react-native'
import { StackNavigator } from 'react-navigation'
export default class Home extends Component{
componentWillMount(){
this.animatedValue = new Animated.ValueXY();
this.panResponder = PanResponder.create({
onStartShouldSetPanResponder: (evt, gestureState) => true,
onMoveShouldSetPanResponder: (evt, gestureState) => true,
onPanResponderGrant: (e, gestureState) => {
},
onPanResponderMove:Animated.event([
null,{dx: this.animatedValue.x , dy:this.animatedValue.y}
]),
onPanResponderRelease: (e, gestureState) => {
},
})
}
render(){
const animatedStyle = {
transform:this.animatedValue.getTranslateTransform()
}
return(
<View style={styles.container}>
<Animated.View style={[styles.box ,animatedStyle]} {...this.panResponder.panHandlers}>
<Text>Home</Text>
</Animated.View>
</View>
)
}
}
var styles = StyleSheet.create({
container: {
flex: 1,
marginLeft: 10,
marginRight: 10,
alignItems: 'stretch',
justifyContent: 'center',
},
box:{
height:90,
width:90,
textAlign:'center'
}
});
A: In my case I got this error because I forgot to change the View into Animated.View.
A: Try this out. This will solve your issue.
You need to initialize animatedValue in the state object to make it work.
constructor(props) {
super(props);
this.state = {
animatedValue: new Animated.ValueXY()
}
}
onPanResponderMove:Animated.event([
null,{dx: this.state.animatedValue.x , dy:this.state.animatedValue.y}
]),
| stackoverflow | {
"language": "en",
"length": 232,
"provenance": "stackexchange_0000F.jsonl.gz:910739",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44685328"
} |