doc_5600
|
but it's not returning some of the applications; the application I need is a 64-bit one.
OS: Windows 7
Note: When I use the command it returns the applications that are under the [Wow6432Node] folder, but my application is not present under this folder. It's present under the [HKLM/SOFTWARE] location.
Please help me solve this problem.
A: This behaviour is due to the registry redirector. You are running the 32-bit version of REG, presumably because the process that invokes it is a 32-bit process, and so the 32-bit version of REG reads the 32-bit view of the registry by default.
You should use the /reg:64 switch to force REG to use the 64-bit view of the registry, as described here: MS-KB-948698.
If you are doing this from a program, then it's better to use the registry API to read entries than to shell out to the REG tool.
| |
doc_5601
|
values = [
{ id: 10, name: 'axel', desc: "it's me" },
{ id: 10, name: 'axel', desc: 'not me' },
{ id: 11, name: 'payton', desc: "hello I'm payton" },
{ id: 10, name: 'axel', desc: "it's me" }
];
the rows that should be highlighted are ...
values = [
{ id: 10, name: 'axel', desc: "it's me" },
{ id: 10, name: 'axel', desc: "it's me" }
];
There may be new objects inserted into the array without an id but I would still like to remove the objects with the same name and description.
A: I don't know ag-grid, but if you want to find the duplicated objects you can do this:
const values = [
{ id: 10, name: 'axel', desc: "it's me" },
{ id: 10, name: 'axel', desc: 'not me' },
{ id: 11, name: 'payton', desc: "hello I'm payton"},
{ id: 10, name: 'axel', desc: "it's me" }
];
const findDuplicated = data => Object.values(data.reduce((res, item) => {
const key = JSON.stringify(item)
return {
...res,
[key]: [...(res[key] || []), item]
}
}, {})).filter(item => item.length > 1).flat()
console.log(findDuplicated(values))
console.log(findDuplicated(values.map(({name, desc}) => ({name, desc}))))
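If the goal is instead to remove the later duplicates (keeping the first occurrence), here is a small sketch keyed on name and desc only, so that objects without an id are still caught; removeDuplicates is a hypothetical helper, not an ag-grid API:

```javascript
// Drop later duplicates, keeping the first occurrence, keyed on name + desc.
const removeDuplicates = data => {
  const seen = new Set();
  return data.filter(item => {
    const key = JSON.stringify([item.name, item.desc]);
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
};

const sample = [
  { id: 10, name: 'axel', desc: "it's me" },
  { id: 10, name: 'axel', desc: 'not me' },
  { name: 'axel', desc: "it's me" } // no id, still detected as a duplicate
];
console.log(removeDuplicates(sample)); // keeps only the first two objects
```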
| |
doc_5602
|
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical" android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout android:orientation="horizontal"
android:layout_width="match_parent"
android:layout_height="match_parent">
<SeekBar android:id="@+id/seek"
android:layout_width="300dip"
android:layout_height="wrap_content"
android:progress="50"/>
...and a few more elements here.
</LinearLayout>
<CheckBox android:id="@+id/CheckBox"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:text="somestring" />
</LinearLayout>
A: The LinearLayout before the CheckBox has its height set to MATCH_PARENT (so it fills all of the parent's height), and the parent LinearLayout of both has its orientation set to vertical, so the CheckBox is pushed off the screen.
Set the height of the LinearLayout containing the SeekBar to WRAP_CONTENT, for example.
A: Change your LinearLayout definition; it cannot be android:layout_height="match_parent":
<LinearLayout android:orientation="horizontal"
android:layout_width="match_parent"
android:layout_height="0px"
android:layout_weight="1">
If you set match_parent on the LinearLayout, it consumes all of the available space, so wrap_content is better.
But if you set it as written above, the CheckBox will be at the bottom of the page and the LinearLayout will consume the remaining part of the screen.
A: You need to set your inner LinearLayout's height and width to wrap_content.
Try this.
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical" >
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="horizontal" >
<SeekBar
android:id="@+id/seek"
android:layout_width="300dip"
android:layout_height="wrap_content"
android:progress="50" />
</LinearLayout>
<CheckBox
android:id="@+id/CheckBox"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="somestring" />
</LinearLayout>
A: Try this:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:orientation="vertical"
android:weightSum="10" >
<LinearLayout
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:layout_weight="9"
android:orientation="horizontal" >
<SeekBar
android:id="@+id/seek"
android:layout_width="300dip"
android:layout_height="wrap_content"
android:progress="50" />
</LinearLayout>
<CheckBox
android:id="@+id/CheckBox"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_weight="1"
android:text="somestring" />
</LinearLayout>
A: In your layout, choose one element that you want to be flexible in size, say height-wise. Then set its height="0dp" and weight="1". Set the heights of the rest of the elements to height="wrap_content".
Also, you may have given fixed sizes of some dp to your other elements, so that their total height runs out of available screen space, or there are so many elements that they go beyond your screen height. In that case, wrap your layout in a ScrollView.
| |
doc_5603
|
I've looked at the plt documentation and some examples. I think this related question and this one come close, but no luck. Here's what I have so far.
import numpy as np
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt
pepe_calories = np.array([361, 291, 263, 284, 311, 284, 282, 228, 328, 263, 354, 302, 293,
254, 297, 281, 307, 281, 262, 302, 244, 259, 273, 299, 278, 257,
296, 237, 276, 280, 291, 278, 251, 313, 314, 323, 333, 270, 317,
321, 307, 256, 301, 264, 221, 251, 307, 283, 300, 292, 344, 239,
288, 356, 224, 246, 196, 202, 314, 301, 336, 294, 237, 284, 311,
257, 255, 287, 243, 267, 253, 257, 320, 295, 295, 271, 322, 343,
313, 293, 298, 272, 267, 257, 334, 276, 337, 325, 261, 344, 298,
253, 302, 318, 289, 302, 291, 343, 310, 241])
modern_calories = np.array([310, 315, 303, 360, 339, 416, 278, 326, 316, 314, 333, 317, 357,
304, 363, 387, 279, 350, 367, 321, 366, 311, 308, 303, 299, 363,
335, 357, 392, 321, 361, 285, 321, 290, 392, 341, 331, 338, 326,
314, 327, 320, 293, 333, 297, 315, 365, 408, 352, 359, 312, 300,
263, 358, 345, 360, 336, 378, 315, 354, 318, 300, 372, 305, 336,
286, 296, 413, 383, 328, 418, 388, 416, 371, 313, 321, 321, 317,
402, 290, 328, 344, 330, 319, 309, 327, 351, 324, 278, 369, 416,
359, 381, 324, 306, 350, 385, 335, 395, 308])
ax = sns.distplot(pepe_calories, fit_kws={"color":"blue"}, kde=False,
fit=stats.norm, hist=None, label="Pepe's");
ax = sns.distplot(modern_calories, fit_kws={"color":"orange"}, kde=False,
fit=stats.norm, hist=None, label="Modern");
# Get the two lines from the axes to generate shading
l1 = ax.lines[0]
l2 = ax.lines[1]
# Get the xy data from the lines so that we can shade
x1 = l1.get_xydata()[:,0]
y1 = l1.get_xydata()[:,1]
x2 = l2.get_xydata()[:,0]
y2 = l2.get_xydata()[:,1]
x2min = np.min(x2)
x1max = np.max(x1)
ax.fill_between(x1,y1, where = ((x1 > x2min) & (x1 < x1max)), color="red", alpha=0.3)
#> <matplotlib.collections.PolyCollection at 0x1a200510b8>
plt.legend()
#> <matplotlib.legend.Legend at 0x1a1ff2e390>
plt.show()
Any ideas?
Created on 2018-12-01 by the reprexpy package
import reprexpy
print(reprexpy.SessionInfo())
#> Session info --------------------------------------------------------------------
#> Platform: Darwin-18.2.0-x86_64-i386-64bit (64-bit)
#> Python: 3.6
#> Date: 2018-12-01
#> Packages ------------------------------------------------------------------------
#> matplotlib==2.1.2
#> numpy==1.15.4
#> reprexpy==0.1.1
#> scipy==1.1.0
#> seaborn==0.9.0
A: While gathering the pdf data from get_xydata is clever, you are now at the mercy of matplotlib's rendering / segmentation algorithm. Having x1 and x2 span different ranges also makes comparing y1 and y2 difficult.
You can avoid these problems by fitting the normals yourself instead of
letting sns.distplot do it. Then you have more control over the values you are
looking for.
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
norm = stats.norm
pepe_calories = np.array([361, 291, 263, 284, 311, 284, 282, 228, 328, 263, 354, 302, 293,
254, 297, 281, 307, 281, 262, 302, 244, 259, 273, 299, 278, 257,
296, 237, 276, 280, 291, 278, 251, 313, 314, 323, 333, 270, 317,
321, 307, 256, 301, 264, 221, 251, 307, 283, 300, 292, 344, 239,
288, 356, 224, 246, 196, 202, 314, 301, 336, 294, 237, 284, 311,
257, 255, 287, 243, 267, 253, 257, 320, 295, 295, 271, 322, 343,
313, 293, 298, 272, 267, 257, 334, 276, 337, 325, 261, 344, 298,
253, 302, 318, 289, 302, 291, 343, 310, 241])
modern_calories = np.array([310, 315, 303, 360, 339, 416, 278, 326, 316, 314, 333, 317, 357,
304, 363, 387, 279, 350, 367, 321, 366, 311, 308, 303, 299, 363,
335, 357, 392, 321, 361, 285, 321, 290, 392, 341, 331, 338, 326,
314, 327, 320, 293, 333, 297, 315, 365, 408, 352, 359, 312, 300,
263, 358, 345, 360, 336, 378, 315, 354, 318, 300, 372, 305, 336,
286, 296, 413, 383, 328, 418, 388, 416, 371, 313, 321, 321, 317,
402, 290, 328, 344, 330, 319, 309, 327, 351, 324, 278, 369, 416,
359, 381, 324, 306, 350, 385, 335, 395, 308])
pepe_params = norm.fit(pepe_calories)
modern_params = norm.fit(modern_calories)
xmin = min(pepe_calories.min(), modern_calories.min())
xmax = max(pepe_calories.max(), modern_calories.max())
x = np.linspace(xmin, xmax, 100)
pepe_pdf = norm(*pepe_params).pdf(x)
modern_pdf = norm(*modern_params).pdf(x)
y = np.minimum(modern_pdf, pepe_pdf)
fig, ax = plt.subplots()
ax.plot(x, pepe_pdf, label="Pepe's", color='blue')
ax.plot(x, modern_pdf, label="Modern", color='orange')
ax.fill_between(x, y, color='red', alpha=0.3)
plt.legend()
plt.show()
If, let's say, sns.distplot (or some other plotting function) made a plot that you did not want to have to reproduce, then you could use the data from get_xydata this way:
import numpy as np
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt
pepe_calories = np.array([361, 291, 263, 284, 311, 284, 282, 228, 328, 263, 354, 302, 293,
254, 297, 281, 307, 281, 262, 302, 244, 259, 273, 299, 278, 257,
296, 237, 276, 280, 291, 278, 251, 313, 314, 323, 333, 270, 317,
321, 307, 256, 301, 264, 221, 251, 307, 283, 300, 292, 344, 239,
288, 356, 224, 246, 196, 202, 314, 301, 336, 294, 237, 284, 311,
257, 255, 287, 243, 267, 253, 257, 320, 295, 295, 271, 322, 343,
313, 293, 298, 272, 267, 257, 334, 276, 337, 325, 261, 344, 298,
253, 302, 318, 289, 302, 291, 343, 310, 241])
modern_calories = np.array([310, 315, 303, 360, 339, 416, 278, 326, 316, 314, 333, 317, 357,
304, 363, 387, 279, 350, 367, 321, 366, 311, 308, 303, 299, 363,
335, 357, 392, 321, 361, 285, 321, 290, 392, 341, 331, 338, 326,
314, 327, 320, 293, 333, 297, 315, 365, 408, 352, 359, 312, 300,
263, 358, 345, 360, 336, 378, 315, 354, 318, 300, 372, 305, 336,
286, 296, 413, 383, 328, 418, 388, 416, 371, 313, 321, 321, 317,
402, 290, 328, 344, 330, 319, 309, 327, 351, 324, 278, 369, 416,
359, 381, 324, 306, 350, 385, 335, 395, 308])
ax = sns.distplot(pepe_calories, fit_kws={"color":"blue"}, kde=False,
fit=stats.norm, hist=None, label="Pepe's");
ax = sns.distplot(modern_calories, fit_kws={"color":"orange"}, kde=False,
fit=stats.norm, hist=None, label="Modern");
# Get the two lines from the axes to generate shading
l1 = ax.lines[0]
l2 = ax.lines[1]
# Get the xy data from the lines so that we can shade
x1, y1 = l1.get_xydata().T
x2, y2 = l2.get_xydata().T
xmin = max(x1.min(), x2.min())
xmax = min(x1.max(), x2.max())
x = np.linspace(xmin, xmax, 100)
y1 = np.interp(x, x1, y1)
y2 = np.interp(x, x2, y2)
y = np.minimum(y1, y2)
ax.fill_between(x, y, color="red", alpha=0.3)
plt.legend()
plt.show()
A: Not using seaborn in cases where you want full control over the resulting plot is often a useful strategy. So just calculate the fits yourself, plot them, and fill between the curves up to the point where they cross each other.
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
pepe_calories = np.array(...)
modern_calories = np.array(...)
x = np.linspace(150,470,1000)
y1 = stats.norm.pdf(x, *stats.norm.fit(pepe_calories))
y2 = stats.norm.pdf(x, *stats.norm.fit(modern_calories))
cross = x[y1-y2 <= 0][0]
fig, ax = plt.subplots()
ax.fill_between(x,y1,y2, where=(x<=cross), color="red", alpha=0.3)
ax.plot(x,y1, label="Pepe's")
ax.plot(x,y2, label="Modern")
ax.legend()
plt.show()
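Once the fitted pdfs are computed explicitly, the shaded overlap can also be quantified: it is the integral of the pointwise minimum of the two densities. Here is a minimal sketch with stand-in parameters (the means and standard deviations below are illustrative, not the fitted values):

```python
import numpy as np
import scipy.stats as stats

# Stand-in normals; in practice use norm(*pepe_params) and norm(*modern_params).
a = stats.norm(290, 35)
b = stats.norm(335, 38)

x = np.linspace(150, 470, 2000)
dx = x[1] - x[0]
# Riemann-sum approximation of the integral of min(pdf_a, pdf_b);
# the result lies between 0 (disjoint) and 1 (identical densities).
overlap = np.minimum(a.pdf(x), b.pdf(x)).sum() * dx
print(round(overlap, 3))
```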
| |
doc_5604
|
$1,234.56
for US dollars for the United States region, they can enter this in another:
€ 1.234,56
for euros using Germany as the region.
Regardless what the currency is and what region the amount is formatted in, the value internally should be returned in the same manner, which in this case would be:
1234.56
I could just use an input field with a type set to "text" but then I would have to write code to check for the currency symbol and how the value is formatted. But it's possible that the user doesn't enter a currency symbol. Also, I don't see a clear way of distinguishing regional numbers without the currency symbol.
Is there any solution that lets the user select the currency symbol and manages the value entered to provide a consistent internal value?
A: If there is a way to achieve this using just a specifically typed HTML input, I don't know about it; the closest would be type=number, though that doesn't allow for formatted numbers.
For a text type with code checking the input, if you can assume the numbers are always written with Latin digits, you can use a rather naive approach based on regular expressions.
function normalize(currency) {
const naive = /(?:(-)\s*)?(?:[^0-9]+\s*)?([0-9]{1,3}(?:[ ,\.]?[0-9]{3})*)(?:[,\.]([0-9]+))?(?:\s*[^0-9-]+)?(?:\s*(-))?/;
const [,mina,base,fraction,minb] = currency.match(naive)||[];
const min = mina || minb ? '-' : '';
return base ? Number(`${min}${base.replace(/[^\d]+/g, '')}.${fraction||0}`) : '';
}
const output = document.querySelector('output');
document.querySelector('input[name=currency]')
.addEventListener('input', (event) => {
const { value } = event.target;
output.value = normalize(value);
});
<label>
amount <input type=text name=currency>
</label>
<div>
The normalized value: <output></output>
</div>
The expression is built around the following concepts:
* (?:(-)\s*)?(?:[^0-9]+\s*)?: a minus symbol may come before the number, optionally followed by a currency symbol
* ([0-9]{1,3}(?:[ ,\.]?[0-9]{3})*): a base number which may contain grouping symbols between groups of three digits
* (?:[,\.]([0-9]+))?: an optional fraction, introduced by either a comma or a period
* (?:\s*[^0-9-]+)?(?:\s*(-))?: a minus symbol may come after the number, optionally preceded by a currency symbol
This works for a lot of use cases, though I consider it naive mostly because it does not verify that the "thousands separator" is not also used as the fraction separator.
a more thorough example
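A different, equally naive heuristic is to treat the last . or , as the decimal separator only when at most two digits follow it. The parseAmount helper below is a hypothetical sketch for illustration, not a drop-in replacement:

```javascript
// Hypothetical sketch: normalize "$1,234.56" and "€ 1.234,56" to 1234.56.
// Assumes Latin digits and at most two decimal places.
function parseAmount(input) {
  const negative = /-/.test(input);
  // Keep only digits and candidate separators.
  const cleaned = input.replace(/[^\d.,]/g, '');
  const lastSep = Math.max(cleaned.lastIndexOf('.'), cleaned.lastIndexOf(','));
  let integer = cleaned;
  let fraction = '';
  // The last separator counts as decimal only if <= 2 digits follow it;
  // otherwise it is treated as a thousands separator.
  if (lastSep > -1 && cleaned.length - lastSep - 1 <= 2) {
    integer = cleaned.slice(0, lastSep);
    fraction = cleaned.slice(lastSep + 1);
  }
  const value = Number(integer.replace(/[.,]/g, '') + (fraction ? '.' + fraction : ''));
  return negative ? -value : value;
}

console.log(parseAmount('$1,234.56'));  // 1234.56
console.log(parseAmount('€ 1.234,56')); // 1234.56
```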
| |
doc_5605
|
type=timestamp, Default=CURRENT_TIMESTAMP, extra=on update CURRENT_TIMESTAMP
And which type of variable do I need to define in the Java class?
desc dummy;
+------------+--------------+------+-----+-------------------+-----------------------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+-------------------+-----------------------------+
| timestamp | timestamp | NO | | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+------------+--------------+------+-----+-------------------+-----------------------------+
| |
doc_5606
|
var deferred = $q.defer();
//...make use of the deferred
return deferred.promise;
I want to add a shortcut to the top of the function that will bypass the async activity and effectively return a resolved promise immediately. How should I do this?
e.g. would this be idiomatic?
if (shouldShortcut) {
return $q.when(true);
}
A: You can just resolve the deferred immediately, but still return its promise:
if (shouldShortcut) {
deferred.resolve();
return deferred.promise;
}
A: Edit: I see now that you are talking about $q and not Q. See below the break for my previous answer as it relates to the Q library.
Based on Benjamin Gruenbaum's comment below (which he has since deleted), $q.when() is a fine way to do this:
var resolvedPromise = $q.when();
You can resolve with a particular value by passing that into when():
var resolvedPromise = $q.when("all good");
There's no need to involve deferreds here. In fact, I would suggest limiting your use of deferreds, as they will most likely soon be passé in favor of the revealing constructor pattern that is used in ES6.
(previous answer)
The Q library provides a method for doing this that is consistent with the ES6 promises standard:
Q.Promise.resolve();
this produces a resolved promise.
If you want to resolve it with a particular value, you can pass in that value:
Q.Promise.resolve("all good"); // promise resolved with the value "all good"
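For completeness, native ES6 promises offer the same shortcut without any library; Promise.resolve() is the standard counterpart of $q.when():

```javascript
// An immediately resolved native promise; no deferred involved.
const resolved = Promise.resolve("all good");
resolved.then(value => console.log(value)); // logs "all good"
```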
| |
doc_5607
|
I have a table a:
id(long)
referenceIdentifier(uuid)
isLatestversion(boolean)
and a table b:
id(long)
someArbitraryAttachedInfo(String)
referenceToTableA(uuid)
Table a will have multiple versions of the same object: different ids, but the same referenceIdentifier. There is only one row in a with the isLatestversion flag set to true for any given referenceIdentifier (we determine the latest version from a revision log).
We want table b to hold the uuid so we can set it once and not have to track and update it whenever a newer version of a is available, and so we can see all relevant attachments when viewing an older version.
How can I model this in Hibernate?
When I use a @ManyToOne with a @JoinColumn and @Where, it generates a unique constraint on table a's uuid.
When I use @JoinFormula, it generates a bytea field instead of the referenced column's type.
Thank you for any help or pointers you can provide.
| |
doc_5608
|
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
image_dir = os.path.join(BASE_DIR, "Images")
for root, dirs, files in os.walk(image_dir):
for file in files:
if file.endswith(".png") or file.endswith(".jpg"):
path = os.path.join(root, file)
print(path)
So, when I try to print the paths from the command prompt, it gives me nothing; it just exits. It's supposed to print all the pictures in the Images folder. So what seems to be the problem?
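Two common causes can be checked with the sketch below, which uses a temporary directory standing in for the real layout (a debugging sketch, not the original script): os.walk() silently yields nothing when the directory does not exist, and .endswith(".png") misses upper-case extensions like .PNG.

```python
import os
import tempfile

# Sketch: reproduce the walk against a known-good layout.
with tempfile.TemporaryDirectory() as base:
    image_dir = os.path.join(base, "Images")
    os.makedirs(image_dir)
    open(os.path.join(image_dir, "cat.PNG"), "w").close()

    # os.walk() raises no error for a missing path, it just yields nothing,
    # so confirm the directory exists first.
    assert os.path.isdir(image_dir)

    found = []
    for root, dirs, files in os.walk(image_dir):
        for name in files:
            # lower() catches .PNG/.JPG as well as .png/.jpg
            if name.lower().endswith((".png", ".jpg")):
                found.append(os.path.join(root, name))
    print(found)
```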
| |
doc_5609
|
<tbody>
<tr key={key}>
<td>
<TextField
id="FirstName"
label="First Name"
type="name"
value={this.state.fname}
/>
</td>
<td>
<TextField
id="LastName"
label="Last Name"
type="name"
value={this.state.lastname}
/>
</td>
</tr>
Add icon
A: Here is the solution,
Firstly To render it on a browser set the default values to your state variables.
constructor(props){
super(props);
this.state = {
fname : 'Foo ',
lastname : 'Bar',
}
}
Now your code will get the value from state and render it in the DOM as you wanted. Use **defaultValue** in place of **value** on the input: the value attribute is for when you set values via onChange, but since you have already declared the values in state above, use **defaultValue**.
Like I have done below:
<TextField
id="FirstName"
label="First Name"
type="name"
defaultValue={this.state.fname}
/>
| |
doc_5610
|
3, 0, 1, 244
four comma separated values.
how I can translate the id to a name?
is there a dictionary out there?
Thanks!
A: What you could do is use a script processor to split the values in the network.application_id field and then do a lookup to append the values accordingly. This is definitely an option, but not necessarily the only option. Feel free to accept this answer if it works for you.
| |
doc_5611
|
I think the error is something todo with the xml layouts
database details:
public static final String question = "quest";
private static final String DATABASE_NAME = "QuestionDB";
private static final String DATABASE_TABLE = "questions";
DBAdapter da = new DBAdapter(this);
// open the database connection
da.open();
// assign to the cursor the ability to loop through the records.
Cursor c = da.getAllContacts();
the method getAllContacts(); is based on this code below
public Cursor getAllContacts()
{
return db.query(DATABASE_TABLE, new String[] {KEY_ROWID,
question, possibleAnsOne,possibleAnsTwo, possibleAnsThree,realQuestion}, null, null, null, null, null);
}
startManagingCursor(c);
// move to the first element
c.moveToFirst();
String[] from = new String[]{"quest"};
int[] to = new int[] { R.id.name_entry };
SimpleCursorAdapter sca = new SimpleCursorAdapter(this, R.layout.layout_list_example, c, from, to);
setListAdapter(sca);
Here are the xml files layout_list_example.xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="wrap_content" >
<ListView
android:id="@android:id/android:list"
android:layout_width="fill_parent"
android:layout_height="wrap_content" />
</LinearLayout>
list_example_entry.xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="wrap_content" >
<TextView
android:id="@+id/name_entry"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textSize="28dip" />
<TextView
android:id="@+id/number_entry"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textSize="28dip" />
</LinearLayout>
Setting the content view removed that problem; I will remember that for the future. I had to change the column back to quest as it said no such column existed, and when I ran the activity I got a black screen with lines across it; I guess a ListView with no data.
Here is the error log
05-12 19:52:22.570: E/AndroidRuntime(534): FATAL EXCEPTION: main
05-12 19:52:22.570: E/AndroidRuntime(534): java.lang.RuntimeException: Unable to start activity ComponentInfo{alex.android.basic.databse.questions/alex.android.basic.databse.questions.Result}: java.lang.IllegalArgumentException: column 'question' does not exist
05-12 19:52:22.570: E/AndroidRuntime(534): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1956)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1981)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.app.ActivityThread.access$600(ActivityThread.java:123)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1147)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.os.Handler.dispatchMessage(Handler.java:99)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.os.Looper.loop(Looper.java:137)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.app.ActivityThread.main(ActivityThread.java:4424)
05-12 19:52:22.570: E/AndroidRuntime(534): at java.lang.reflect.Method.invokeNative(Native Method)
05-12 19:52:22.570: E/AndroidRuntime(534): at java.lang.reflect.Method.invoke(Method.java:511)
05-12 19:52:22.570: E/AndroidRuntime(534): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
05-12 19:52:22.570: E/AndroidRuntime(534): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)
05-12 19:52:22.570: E/AndroidRuntime(534): at dalvik.system.NativeStart.main(Native Method)
05-12 19:52:22.570: E/AndroidRuntime(534): Caused by: java.lang.IllegalArgumentException: column 'question' does not exist
05-12 19:52:22.570: E/AndroidRuntime(534): at android.database.AbstractCursor.getColumnIndexOrThrow(AbstractCursor.java:267)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.widget.SimpleCursorAdapter.findColumns(SimpleCursorAdapter.java:332)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.widget.SimpleCursorAdapter.<init>(SimpleCursorAdapter.java:81)
05-12 19:52:22.570: E/AndroidRuntime(534): at alex.android.basic.databse.questions.Result.onCreate(Result.java:37)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.app.Activity.performCreate(Activity.java:4465)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1049)
05-12 19:52:22.570: E/AndroidRuntime(534): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1920)
A: This is incorrect: String[] from = new String[]{"quest"};
That String[] needs to contain the name of one of the columns in your cursor, as it is used to map data from your cursor to the ListView.
I think you want this:
String[] from = new String[]{"question"};
EDIT
If it can't find that, then I would guess that you need to set the content view for the activity.
setContentView(R.layout.layout_list_example);
EDIT 2
I think we've been staring at it without seeing it... in your query, enclose each item in quotes except for the KEY_ one. "question", etc.
A: If you are using ListActivity, your layout XML should have a ListView with the id android:id="@android:id/list"
For ex:
<ListView
android:id="@android:id/list"
android:layout_width="match_parent"
android:layout_height="wrap_content" >
</ListView>
So changing android:id="@android:id/android:list" in your layout_list_example.xml to android:id="@android:id/list" should fix your problem.
Hope this helps
| |
doc_5612
|
Created a spreadsheet which collects data from 7 separate workbooks. The workbooks communicate back and forth as follows.
The Master Workbook reports 'Jobs' from its own column A onto 7 respective workcenters, into their column A, using VLOOKUP.
The workcenters then report updates in their columns B through E, based upon their progress for the respective job in column A. This data makes its way back to the Master Workbook using a nested IF statement and VLOOKUP.
When deleting a 'Job' row from the Master sheet, the job is deleted from the workcenters, but the data tied to the job remains. This causes the workcenters to populate inaccurate data based upon the job in column A.
Hope this isn't too confusing...
Thanks in advance for the help!
| |
doc_5613
|
//Open xml file
XmlResourceParser _xml = res.getXml(R.xml.animals_records);
try
{
//Check for end of document
int eventType = _xml.getEventType();
while (eventType != XmlPullParser.END_DOCUMENT) {
//Search for record tags
if ((eventType == XmlPullParser.START_TAG) &&(_xml.getName().equals("record"))){
//Record tag found, now get values and insert record
String _Title = _xml.getAttributeValue(null, TITLE);
String _Color = _xml.getAttributeValue(null, COLOR, 0);
_Values.put(TITLE, _Title);
_Values.put(COLOR, _Color);
db.insert(TABLENAME1, null, _Values);
}
if ((eventType == XmlPullParser.START_TAG) &&(_xml.getName().equals("trees"))){
//Record tag found, now get values and insert record
String _Title = _xml.getAttributeValue(null, FAMILY);
String _Color = _xml.getAttributeValue(null, SPECIES, 0);
_Values.put(FAMILY, _Title);
_Values.put(SPECIES, _Color);
db.insert(TABLENAME2, null, _Values);
}
eventType = _xml.next();
}
}
With an XML that looks something like this:
<animals>
<record title="Dog" color="Brown" />
<record title="Cat" color="Gray" />
<record title="Rabbit" color="White" />
<record title="Spider" color="Black" />
<trees family="Hardwood" species="Oak" />
<trees family="Soft" color="Pine" />
</animals>
Any ideas? Obvious faults?
| |
doc_5614
|
Is it possible to avoid it and make Travis reuse what it downloaded the first time ?
I probably made some mistakes in my .travis.yml file, here is a copy of it
language: android
android:
components:
# Uncomment the lines below if you want to
# use the latest revision of Android SDK Tools
- platform-tools
- tools
# The BuildTools version used by your project
- build-tools-23.0.2
# The SDK version used to compile your project
- android-23
# Additional components
- extra-android-support
- extra-google-google_play_services
- extra-google-m2repository
- extra-android-m2repository
- addon-google_apis-google-19
# Specify at least one system image,
# if you need to run emulator(s) during your tests
# - sys-img-armeabi-v7a-android-19
# - sys-img-x86-android-17
script:
- ./gradlew check
- ./gradlew test --continue
# - ./gradlew build connectedCheck
A:
Why does Travis CI download everything each time it builds?
We talked about this here: Travis CI downloads the cache from S3, so there is no significant speed improvement from caching big files like the Android SDK.
How to include the Android SDK in the Travis CI cache? (not recommended)
The relevant bits are here; you can add any file to the cache as long as you know its path:
cache:
directories:
- ${TRAVIS_BUILD_DIR}/gradle/caches/
- ${TRAVIS_BUILD_DIR}/gradle/wrapper/dists/
- ${TRAVIS_BUILD_DIR}/android-sdk/extras/ # please don't include sys-images
I did it time ago, you can see the code in the link shared by @nicolas-f:
language: android
jdk: oraclejdk8
env:
global:
- GRADLE_USER_HOME=${TRAVIS_BUILD_DIR}/gradle
- ANDROID_HOME=${TRAVIS_BUILD_DIR}/android-sdk
- SDK=${TRAVIS_BUILD_DIR}/android-sdk
- PATH=${GRADLE_USER_HOME}/bin/:${SDK}/:${SDK}/tools/:${SDK}/platform-tools/:${PATH}
before_install:
- export OLD_SDK=/usr/local/android-sdk-24.0.2;
mkdir -p ${SDK};
cp -u -R ${OLD_SDK}/platforms ${SDK}/platforms;
cp -u -R ${OLD_SDK}/system-images ${SDK}/system-images;
cp -u -R ${OLD_SDK}/tools ${SDK}/tools
cache:
apt: true
directories:
- ${TRAVIS_BUILD_DIR}/gradle/caches/
- ${TRAVIS_BUILD_DIR}/gradle/wrapper/dists/
- ${TRAVIS_BUILD_DIR}/android-sdk/extras/
android:
components:
# Update Android SDK Tools
- tools
- platform-tools
- build-tools-23.0.2
- android-23
- add-on
- extra
script:
- ./gradlew check
Use ls to confirm that the SDK path has not changed again.
It is no longer necessary to move the SDK, and you may need to update other components (perhaps add another tools entry after platform-tools); this code is outdated.
| |
doc_5615
|
Context
Here's the database schema for this piece of the problem:
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
email CITEXT NOT NULL UNIQUE,
password TEXT NOT NULL,
name TEXT NOT NULL,
created_at DATE NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS teams (
id TEXT PRIMARY KEY,
email CITEXT NOT NULL,
name TEXT NOT NULL,
created_at DATE NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS memberships (
id TEXT PRIMARY KEY,
"user" TEXT NOT NULL REFERENCES users(id) ON UPDATE CASCADE ON DELETE CASCADE,
team TEXT NOT NULL REFERENCES teams(id) ON UPDATE CASCADE ON DELETE CASCADE,
role TEXT NOT NULL,
created_at DATE NOT NULL DEFAULT CURRENT_TIMESTAMP,
UNIQUE("user", team)
);
And the API endpoint in question is GET /users/:user/teams, which returns all of the teams a user is a member of. Here's what the controller for that route looks like:
(Note: all of this is Javascript, but it's been sort of pseudocode'd for clarity.)
async getTeams(currentId, userId) {
await exists(userId)
await canFindTeams(currentUser, userId)
let teams = await findTeams(userId)
let maskedTeams = await maskTeams(currentUser, teams)
return maskedTeams
}
Those four asynchronous functions are the core logical steps that need to happen for the authorization to be "complete". Here's what each of those functions roughly looks like:
async exists(userId) {
let user = await query(`
SELECT id
FROM users
WHERE id = $[userId]
`)
if (!user) throw new Error('user_not_found')
return user
}
exists simply checks whether a user by that userId even exists in the database, and throws the proper error code if not.
query is just pseudocode for running a SQL query with escaped variables.
async canFindTeams(currentUser, userId) {
if (currentUser.id == userId) return
let isTeammate = await query(`
SELECT role
FROM memberships
WHERE "user" = $[currentUser.id]
AND team IN (
SELECT team
FROM memberships
WHERE "user" = $[userId]
)
`)
if (!isTeammate) throw new Error('team_find_unauthorized')
}
canFindTeams ensures that either the current user is the one making the request, or that the current user is a teammate of the user in question. Anyone else should not be authorized to find the user in question. In my real implementation, it's actually done with roles that have associated actions, so that a teammate can teams.read but can't teams.admin unless they are an owner. But I simplified that for this example.
async findTeams(userId) {
return await query(`
SELECT
teams.id,
teams.email,
teams.name,
teams.created_at
FROM teams
LEFT JOIN memberships ON teams.id = memberships.team
LEFT JOIN users ON users.id = memberships.user
WHERE users.id = $[userId]
ORDER BY
memberships.created_at DESC,
teams.id
`)
}
findTeams will actually query the database for the teams objects.
async maskTeams(currentUser, teams) {
let memberships = await query(`
SELECT team
FROM memberships
WHERE "user" = $[currentUser.id]
`)
let teamIds = memberships.map(membership => membership.team)
let maskedTeams = teams.filter(team => teamIds.includes(team.id))
return maskedTeams
}
maskTeams will return only the teams that a given user should see. This is needed because a user should be able to see all of their teams, but teammates should only be able to see their teams in common, so as to not leak information.
Problems
One of the requirements that led me to break it up like this is that I need a way to throw those specific error codes, so that the errors returned to API clients are helpful. For example, the exists function runs before the canFindTeams function so that not everything errors with a 403 Unauthorized.
Another, which isn't well communicated here in pseudocode, is that the currentUser can actually be an app (a third-party client), a team (an access token that pertains to the team itself), or a user (the common case). This requirement makes it difficult to implement canFindTeams or maskTeams as single SQL statements, since the logic has to be forked three ways... In my implementation, both functions are actually switch statements around the logic for handling all three cases: that the requester is an app, a team, or a user.
But even given those constraints, this feels like a lot of extra code to write to ensure all of these authorization requirements. I'm worried about performance, code maintainability, and also about the fact that these queries aren't all run in a single transaction.
Questions
* Do the extra queries meaningfully affect performance?
* Can they be combined into fewer queries easily?
* Is there a better design for the authorization that simplifies this?
* Does not using transactions pose problems?
* Anything else you'd change?
Thanks!
A: I made it a function and simplified the tables to make it easier to test. SQL Fiddle. I'm making assumptions, since some of the rules are embedded in the JavaScript pseudocode, which I do not quite understand.
create or replace function visible_teams (
_user_id int, _current_user_id int
) returns table (
current_user_role int,
team_id int,
team_email text,
team_name text,
team_created_at date
) as $$
select
m0.role,
m0.team,
t.email,
t.name,
t.created_at
from
memberships m0
left join
memberships m1 on m0.team = m1.team and m1.user = _user_id
inner join
teams t on t.id = m0.team
where m0.user = _current_user_id
union
select null, null, null, null, null
where not exists (select 1 from users where id = _user_id)
order by role nulls first
;
$$ language sql;
Returns all of the current user's teams plus the teams in common with the user:
select * from visible_teams(3, 1);
current_user_role | team_id | team_email | team_name | team_created_at
-------------------+---------+------------+-----------+-----------------
1 | 1 | email_1 | team_1 | 2016-03-13
1 | 3 | email_3 | team_3 | 2016-03-13
2 | 2 | email_2 | team_2 | 2016-03-13
(3 rows)
When the user does not exist it returns the first line containing nulls plus all current user's teams:
select * from visible_teams(5, 1);
current_user_role | team_id | team_email | team_name | team_created_at
-------------------+---------+------------+-----------+-----------------
| | | |
1 | 1 | email_1 | team_1 | 2016-03-13
1 | 3 | email_3 | team_3 | 2016-03-13
2 | 2 | email_2 | team_2 | 2016-03-13
(4 rows)
When the current user does not exist then an empty set:
select * from visible_teams(1, 5);
current_user_role | team_id | team_email | team_name | team_created_at
-------------------+---------+------------+-----------+-----------------
(0 rows)
A: Your intent/requirement to reflect details about the failure back to the user by showing different errors is a major reason for not joining the queries into fewer ones.
For answering your explicit questions:
Do the extra queries meaningfully affect performance?
This really depends on the number of rows in the tables. For performance, you should go and measure the timings of the queries; this really can't be judged from the queries alone. Usually queries with "column = VALUE" predicates have a good chance of performing OK, given the table is small or there is a proper index in place.
Can they be combined into fewer queries easily?
Given the queries you showed, combining would be possible. This will likely lose the distinction of the actual cause of the auth failure (or add extra complexity to the query). However, you already stated the real queries are likely a bit more complex.
Combining several tables and (supposedly) lots of alternatives (the ORs and UNIONs needed to cover the variants) might cause the query optimizer to no longer find a good plan.
So, as you are concerned with performance, combining the queries might have a negative effect on overall performance (subject to measurement, as usual). Overall performance could also suffer because you would then have fewer queries running in parallel (which is only a benefit as long as the number of parallel requests really is low).
Is there a better design for the authorization that simplifies this?
This can't be answered based on the few criteria presented that led to this design. We would need input about what needs to be achieved and what the musts and shoulds of the security strategy are. In some cases, e.g., you might get by with using row-level security, available in PostgreSQL as of version 9.5.
Does not using transactions pose problems?
Yes, not having transactions could lead to inconsistent decision results as soon as there are changes to your authorization tables while queries are being executed. E.g. consider a user being removed and the canFindTeams check completing before the exists query, or similar race conditions.
Those effects need not necessarily be harmful, but they definitely exist.
For getting a clearer picture on this, please consider the possible modifications (insert, delete, update) on the auth tables and their effect on your auth queries (and do not assume the queries are executed in order; you are running async!) and on the final decision returned to the user. If none of these results exhibits a risk, then you may stick with not using transactions.
Otherwise using transactions is strongly recommended.
Anything else you'd change?
From a security perspective, giving details about a failure is a bad thing to do.
So you should really always return a "not authorized" on failure, or just return an empty result (and only log the detailed result of the checks for analysis or debugging).
A: I might be (and probably am) oversimplifying this, but let's start with a simplified clarification. You want information for a specific user, and whatever teams they MAY be affiliated with. By starting with a given user, you will ALWAYS get at least the user components if it is a valid user in question. Only IF there is a membership record and a corresponding team will you get all the team information that this one person is directly associated with. If this query returns NO records, then the user ID is invalid to begin with, and you can respond accordingly with 0 records.
SELECT
u.id as userid,
u.email,
u.password,
u.name,
u.created_at,
m.id as memberid,
m.team as teamid,
m.role,
m.created_at as membercreated,
t.email as teamEmail,
t.name as teamName,
t.created_at as teamCreated
from
users u
LEFT JOIN memberships m
ON u.id = m.user
LEFT JOIN teams t
ON m.team = t.id
where
u.id = UserIDYouAreInterestedIn
So this is going from the user to the membership to the teams that one person IS directly associated with and has no bearing on another person. I was not seeing where this "other person" reference was coming from which restricts showing details only for the common teams. So, until further clarification, I will expand this answer and take it another level down to get all memberships of another user and they share the same team... Basically by reversing the nesting of tables on common membership / team back to the user table.
SELECT
u.id as userid,
u.email,
u.password,
u.name,
u.created_at,
m.id as memberid,
m.team as teamid,
m.role,
m.created_at as membercreated,
t.email as teamEmail,
t.name as teamName,
t.created_at as teamCreated,
u2.name as OtherTeamMate,
u2.email as TeamMateEMail
from
users u
LEFT JOIN memberships m
ON u.id = m.user
LEFT JOIN teams t
ON m.team = t.id
LEFT JOIN memberships m2
on m.team = m2.team
AND m2.user = IDOfSomeOtherUser
LEFT JOIN users u2
on m2.user = u2.id
where
u.id = UserIDYouAreInterestedIn
I hope this makes sense, and let me clarify the re-join to memberships as m2. If person "A" has a membership to teams "X", "Y" and "Z", then I want to join the memberships table by the SAME TEAM -- AND Some Other Person ID. IF one such entry DOES exist, go to the user's table again (alias u2) and grab the teammate's name and email.
If there are 50 teams available, but person "A" is only applicable to 3 teams, then it is only looking for other POSSIBLE members of those 3 teams AND the user on the secondary (m2 alias) membership table is that "other" person's ID.
A: I wanted to summarize a few things after having thought about the problem some more and implemented a solution... @rpy's answer helped a lot, read that first!
There are a few things that are inherent to the authorization code and the database querying code that allow for a better, more future-proof design that lets you get rid of two of those queries.
404's not 403's
The first problem, which @rpy alluded to, is that for security purposes, you don't want to show users who aren't authorized to find an object a 403 response, since it leaks information. Instead, all errors like 403: user_find_unauthorized that are thrown from the code should be remapped (however you want to make that happen) to 404: user_not_found.
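As an illustration, here is a minimal sketch of that remapping in the same JavaScript style as the rest of the pseudocode. The toApiError helper and the team_not_found code are hypothetical, not part of the original implementation:

```javascript
// Hypothetical remapping table: authorization failures surface as 404s so
// an unauthorized client cannot tell "exists but forbidden" from "missing".
const remap = {
  user_find_unauthorized: 'user_not_found',
  team_find_unauthorized: 'team_not_found',
};

function toApiError(err) {
  const code = remap[err.message] || err.message;
  // Anything remapped to *_not_found becomes a 404; the rest stays generic.
  const status = code.endsWith('_not_found') ? 404 : 500;
  return { status, code };
}

console.log(toApiError(new Error('user_find_unauthorized')));
// → { status: 404, code: 'user_not_found' }
```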
With that in place, it's also pretty easy to change the authorization code to not fail when a user object doesn't exist in the first place. (Actually, in my case my authorization code was already structured this way).
That lets you get rid of the exists check—one query down.
Think About Pagination
The second problem is a future problem: what will happen when you decide to add pagination to your API later? With my example code, pagination would be very hard to implement since "querying" and "masking" were separate, such that doing things like LIMIT 10 becomes near impossible to do correctly.
For this reason, although the masking code might get complex, you have to include it in your original find query, to allow for pagination LIMIT and ORDER BY clauses.
One more query down.
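For the common case where the requester is a user, such a combined query might look roughly like this. This is a sketch only: it ignores the app/team requester variants from the original problem, and the $[limit]/$[offset] placeholders are made up.

```sql
-- Sketch: teams of $[userId], already masked to what $[currentUser.id]
-- may see, with pagination folded into the same statement.
SELECT
  teams.id,
  teams.email,
  teams.name,
  teams.created_at
FROM teams
JOIN memberships target
  ON target.team = teams.id
 AND target."user" = $[userId]
WHERE $[currentUser.id] = $[userId]
   OR EXISTS (
        SELECT 1
        FROM memberships mine
        WHERE mine.team = teams.id
          AND mine."user" = $[currentUser.id]
      )
ORDER BY target.created_at DESC, teams.id
LIMIT $[limit] OFFSET $[offset];
```

Because the masking is inside the statement, LIMIT and OFFSET now operate on the already-masked rows, which is what pagination needs.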
2 is Better than 1
After all of that, I don't think I'd want to combine the last two queries into a single query, because the separation of concerns between them is very useful. Not only that, but if someone isn't authorized to access an object, the current setup will fail fast without the chance that it negatively impacts database load by having to do unnecessary work.
With all of that you'd end up with something along the lines of:
async getTeams(currentUser, userId) {
await can(['users.find', 'teams.find'], currentUser, userId)
let teams = await findTeams(currentUser, userId)
return teams
}
can will perform the authorization, and by providing users.find in addition to teams.find it will ensure that unauthorized lookups return 404s.
findTeams will perform the lookups, and by passing it currentUser it can also incorporate the necessary masking logic.
Hope that all helps anyone else who's wondering about this!
| |
doc_5616
|
A: Solved by adding foreign key constraint at table B
| |
doc_5617
|
MediaControllerCompat mediaController = new MediaControllerCompat(this, token);
MediaControllerCompat.setMediaController(this, mediaController);
The token is acquire from MediaSession.
All the times that back button is pressed a leak is detected. I don't have any callback/listener registered to MediaControllerCompat. I already tried set MediaController to null on activity's onDestroy() method, no success.
MediaControllerCompat.setMediaController(this, null);
Follow bellow the LeakCanary log.
D/LeakCanary: * com.me.PlaybackFullscreenActivity has leaked:
D/LeakCanary: * GC ROOT android.os.ResultReceiver$MyResultReceiver.this$0
D/LeakCanary: * references android.support.v4.media.session.MediaControllerCompat$MediaControllerImplApi21$1.this$0 (anonymous subclass of android.os.ResultReceiver)
D/LeakCanary: * references android.support.v4.media.session.MediaControllerCompat$MediaControllerImplApi23.mControllerObj
D/LeakCanary: * references android.media.session.MediaController.mContext
D/LeakCanary: * leaks com.me.ui.playback.PlaybackFullscreenActivity instance
D/LeakCanary: * Retaining: 54 KB.
D/LeakCanary: * Reference Key: 004ed9cd-c668-4d23-9ee6-cecad1b980a5
D/LeakCanary: * Device: unknown Android Android SDK built for x86_64 sdk_google_phone_x86_64
D/LeakCanary: * Android Version: 7.1 API: 25 LeakCanary: 1.5 00f37f5
D/LeakCanary: * Durations: watch=5018ms, gc=115ms, heap dump=1936ms, analysis=6011ms
D/LeakCanary: * Details:
D/LeakCanary: * Instance of android.os.ResultReceiver$MyResultReceiver
D/LeakCanary: | this$0 = android.support.v4.media.session.MediaControllerCompat$MediaControllerImplApi21$1@322318080 (0x13362f00)
D/LeakCanary: | mDescriptor = java.lang.String@1887101392 (0x707ae1d0)
D/LeakCanary: | mObject = -813433536
D/LeakCanary: | mOwner = android.os.ResultReceiver$MyResultReceiver@322318176 (0x13362f60)
D/LeakCanary: | shadow$_klass_ = android.os.ResultReceiver$MyResultReceiver
D/LeakCanary: | shadow$_monitor_ = 0
D/LeakCanary: * Instance of android.support.v4.media.session.MediaControllerCompat$MediaControllerImplApi21$1
D/LeakCanary: | this$0 = android.support.v4.media.session.MediaControllerCompat$MediaControllerImplApi23@322317952 (0x13362e80)
D/LeakCanary: | mHandler = android.os.Handler@322318144 (0x13362f40)
D/LeakCanary: | mLocal = true
D/LeakCanary: | mReceiver = android.os.ResultReceiver$MyResultReceiver@322318176 (0x13362f60)
D/LeakCanary: | shadow$_klass_ = android.support.v4.media.session.MediaControllerCompat$MediaControllerImplApi21$1
D/LeakCanary: | shadow$_monitor_ = 0
D/LeakCanary: * Instance of android.support.v4.media.session.MediaControllerCompat$MediaControllerImplApi23
D/LeakCanary: | mCallbackMap = java.util.HashMap@322587040 (0x133a49a0)
D/LeakCanary: | mControllerObj = android.media.session.MediaController@322587088 (0x133a49d0)
D/LeakCanary: | mExtraBinder = android.support.v4.media.session.MediaSessionCompat$MediaSessionImplApi21$ExtraSession@319823424 (0x13101e40)
D/LeakCanary: | mPendingCallbacks = null
D/LeakCanary: | shadow$_klass_ = android.support.v4.media.session.MediaControllerCompat$MediaControllerImplApi23
D/LeakCanary: | shadow$_monitor_ = 0
D/LeakCanary: * Instance of android.media.session.MediaController
D/LeakCanary: | static MSG_UPDATE_EXTRAS = 7
D/LeakCanary: | static MSG_DESTROYED = 8
D/LeakCanary: | static MSG_UPDATE_VOLUME = 4
D/LeakCanary: | static MSG_UPDATE_QUEUE_TITLE = 6
D/LeakCanary: | static MSG_UPDATE_PLAYBACK_STATE = 2
D/LeakCanary: | static $staticOverhead = byte[72]@317243393 (0x12e8c001)
D/LeakCanary: | static MSG_UPDATE_QUEUE = 5
D/LeakCanary: | static MSG_EVENT = 1
D/LeakCanary: | static TAG = java.lang.String@1886292312 (0x706e8958)
D/LeakCanary: | static MSG_UPDATE_METADATA = 3
D/LeakCanary: | mCallbacks = java.util.ArrayList@322318048 (0x13362ee0)
D/LeakCanary: | mCbRegistered = false
D/LeakCanary: | mCbStub = android.media.session.MediaController$CallbackStub@322317984 (0x13362ea0)
D/LeakCanary: | mContext = com.me.ui.playback.PlaybackFullscreenActivity@322837504 (0x133e1c00)
D/LeakCanary: | mLock = java.lang.Object@319489728 (0x130b06c0)
D/LeakCanary: | mPackageName = null
D/LeakCanary: | mSessionBinder = android.media.session.ISessionController$Stub$Proxy@319491728 (0x130b0e90)
D/LeakCanary: | mTag = null
D/LeakCanary: | mToken = android.media.session.MediaSession$Token@319489760 (0x130b06e0)
D/LeakCanary: | mTransportControls = android.media.session.MediaController$TransportControls@319489744 (0x130b06d0)
D/LeakCanary: | shadow$_klass_ = android.media.session.MediaController
D/LeakCanary: | shadow$_monitor_ = 0
D/LeakCanary: * Instance of com.me.ui.playback.PlaybackFullscreenActivity
D/LeakCanary: | static $staticOverhead = byte[16]@317706241 (0x12efd001)
D/LeakCanary: | static serialVersionUID = 0
D/LeakCanary: | static $change = null
D/LeakCanary: | mToolbar = android.support.v7.widget.Toolbar@321094656 (0x13238400)
D/LeakCanary: | playbackFragment = com.me.ui.playback.PlaybackFragment@318524080 (0x12fc4ab0)
D/LeakCanary: | mDelegate = android.support.v7.app.AppCompatDelegateImplV23@320052000 (0x13139b20)
D/LeakCanary: | mEatKeyUpEvent = false
D/LeakCanary: | mResources = null
D/LeakCanary: | mThemeId = 2131427393
D/LeakCanary: | mCreated = true
D/LeakCanary: | mFragments = android.support.v4.app.FragmentController@323740768 (0x134be460)
D/LeakCanary: | mHandler = android.support.v4.app.FragmentActivity$1@323839264 (0x134d6520)
D/LeakCanary: | mNextCandidateRequestIndex = 0
D/LeakCanary: | mOptionsMenuInvalidated = false
D/LeakCanary: | mPendingFragmentActivityResults = android.support.v4.util.SparseArrayCompat@323840160 (0x134d68a0)
D/LeakCanary: | mReallyStopped = true
D/LeakCanary: | mRequestedPermissionsFromFragment = false
D/LeakCanary: | mResumed = false
D/LeakCanary: | mRetaining = false
D/LeakCanary: | mStopped = true
D/LeakCanary: | mStartedActivityFromFragment = false
D/LeakCanary: | mStartedIntentSenderFromFragment = false
D/LeakCanary: | mExtraDataMap = android.support.v4.util.SimpleArrayMap@323839232 (0x134d6500)
D/LeakCanary: | mActionBar = null
D/LeakCanary: | mActionModeTypeStarting = 0
D/LeakCanary: | mActivityInfo = android.content.pm.ActivityInfo@319807616 (0x130fe080)
D/LeakCanary: | mActivityTransitionState = android.app.ActivityTransitionState@323795264 (0x134cb940)
D/LeakCanary: | mApplication = com.me.MainApplication@314898704 (0x12c4f910)
D/LeakCanary: | mCalled = true
D/LeakCanary: | mChangeCanvasToTranslucent = false
D/LeakCanary: | mChangingConfigurations = false
D/LeakCanary: | mComponent = android.content.ComponentName@323825776 (0x134d3070)
D/LeakCanary: | mConfigChangeFlags = 0
D/LeakCanary: | mCurrentConfig = android.content.res.Configuration@323855456 (0x134da460)
D/LeakCanary: | mDecor = null
D/LeakCanary: | mDefaultKeyMode = 0
D/LeakCanary: | mDefaultKeySsb = null
D/LeakCanary: | mDestroyed = true
D/LeakCanary: | mDoReportFullyDrawn = false
D/LeakCanary: | mEmbeddedID = null
D/LeakCanary: | mEnableDefaultActionBarUp = false
D/LeakCanary: | mEnterTransitionListener = android.app.SharedElementCallback$1@1888376616 (0x708e5728)
D/LeakCanary: | mExitTransitionListener = android.app.SharedElementCallback$1@1888376616 (0x708e5728)
D/LeakCanary: | mFinished = true
D/LeakCanary: | mFragments = android.app.FragmentController@323740720 (0x134be430)
D/LeakCanary: | mHandler = android.os.Handler@323839136 (0x134d64a0)
D/LeakCanary: | mIdent = 169286722
D/LeakCanary: | mInstanceTracker = android.os.StrictMode$InstanceTracker@323740736 (0x134be440)
D/LeakCanary: | mInstrumentation = android.app.Instrumentation@315044816 (0x12c733d0)
D/LeakCanary: | mIntent = android.content.Intent@323821632 (0x134d2040)
D/LeakCanary: | mLastNonConfigurationInstances = null
D/LeakCanary: | mMainThread = android.app.ActivityThread@314791872 (0x12c357c0)
D/LeakCanary: | mManagedCursors = java.util.ArrayList@323839168 (0x134d64c0)
D/LeakCanary: | mManagedDialogs = null
D/LeakCanary: | mMenuInflater = null
D/LeakCanary: | mParent = null
D/LeakCanary: | mReferrer = java.lang.String@323822208 (0x134d2280)
D/LeakCanary: | mResultCode = 0
D/LeakCanary: | mResultData = null
D/LeakCanary: | mResumed = false
D/LeakCanary: | mSearchEvent = null
D/LeakCanary: | mSearchManager = null
D/LeakCanary: | mStartedActivity = false
D/LeakCanary: | mStopped = true
D/LeakCanary: | mTemporaryPause = false
D/LeakCanary: | mTitle = java.lang.String@314691776 (0x12c1d0c0)
D/LeakCanary: | mTitleColor = 0
D/LeakCanary: | mTitleReady = true
D/LeakCanary: | mToken = android.os.BinderProxy@323829824 (0x134d4040)
D/LeakCanary: | mTranslucentCallback = null
D/LeakCanary: | mUiThread = java.lang.Thread@1955762776 (0x74929258)
D/LeakCanary: | mVisibleBehind = false
D/LeakCanary: | mVisibleFromClient = true
D/LeakCanary: | mVisibleFromServer = true
D/LeakCanary: | mVoiceInteractor = null
D/LeakCanary: | mWindow = com.android.internal.policy.PhoneWindow@317655136 (0x12ef0860)
D/LeakCanary: | mWindowAdded = true
D/LeakCanary: | mWindowManager = android.view.WindowManagerImpl@323839680 (0x134d66c0)
D/LeakCanary: | mInflater = com.android.internal.policy.PhoneLayoutInflater@323776416 (0x134c6fa0)
D/LeakCanary: | mOverrideConfiguration = null
D/LeakCanary: | mResources = android.content.res.Resources@315044736 (0x12c73380)
D/LeakCanary: | mTheme = android.content.res.Resources$Theme@323839712 (0x134d66e0)
D/LeakCanary: | mThemeResource = 2131427393
D/LeakCanary: | mBase = android.app.ContextImpl@319796480 (0x130fb500)
D/LeakCanary: | shadow$_klass_ = com.me.ui.playback.PlaybackFullscreenActivity
D/LeakCanary: | shadow$_monitor_ = 1293121552
D/LeakCanary: * Excluded Refs:
D/LeakCanary: | Field: android.view.inputmethod.InputMethodManager.mNextServedView
D/LeakCanary: | Field: android.view.inputmethod.InputMethodManager.mServedView
D/LeakCanary: | Field: android.view.inputmethod.InputMethodManager.mServedInputConnection
D/LeakCanary: | Field: android.view.inputmethod.InputMethodManager.mCurRootView
D/LeakCanary: | Field: android.os.UserManager.mContext
D/LeakCanary: | Field: android.net.ConnectivityManager.sInstance
D/LeakCanary: | Field: android.view.Choreographer$FrameDisplayEventReceiver.mMessageQueue (always)
D/LeakCanary: | Thread:FinalizerWatchdogDaemon (always)
D/LeakCanary: | Thread:main (always)
D/LeakCanary: | Thread:LeakCanary-Heap-Dump (always)
D/LeakCanary: | Class:java.lang.ref.WeakReference (always)
D/LeakCanary: | Class:java.lang.ref.SoftReference (always)
D/LeakCanary: | Class:java.lang.ref.PhantomReference (always)
D/LeakCanary: | Class:java.lang.ref.Finalizer (always)
D/LeakCanary: | Class:java.lang.ref.FinalizerReference (always)
Can anybody help me?
Thanks in advance.
A: This leak was fixed and released in 25.2.0 support library.
Font: issuetracker
A: MediaControllerCompat.setMediaController() instantiates controllerObj. Then this object is used to perform setMediaController(activity, controllerObj). After this is performed, I see nothing that would prevent controllerObj from being leaked. In other words, it seems that one should take care of nulling out that object on one's own:
MediaSessionCompat mediaSessionCompat = ...;
MediaController mediaController =
(MediaController) mediaSessionCompat.getController().getMediaController();
// explicitly nulling out MediaController
mediaController = null;
Note, that performing MediaControllerCompat.setMediaController(this, null) would not make previously set object to be nulled out, rather it would just update current instance with the new one. But controllerObj keeps a hard reference to the hosting activity and no one had taken care of nulling it out.
| |
doc_5618
|
My PHP page is:
$user = array();
$user["image"] = base64_encode($result["image"]);
// success
$response["success"] = 1;
// user node
$response["image_table"] = array();
array_push($response["image_table"], $user);
When I use that array in my app, I use this:
if (success == 1) {
    address = json.getJSONArray(TAG_IMAGE_TABLE);
    for (int i = 0; i < address.length(); i++) {
        JSONObject c = address.getJSONObject(i);
        image = c.getString(TAG_IMAGE);
    }
}
It gives me a result like:
json response: {"success":1,"image_table": [{"image":"iVBORw0KGgoAAA...................."
When I use this image in my ImageView, I use this:
ImageView ivProperty = ((ImageView) myContentView.findViewById(R.id.image_property));
byte[] decodedString;
try {
decodedString = Base64.decode(image, Base64.URL_SAFE);
Bitmap decodedByte = BitmapFactory.decodeByteArray(decodedString, 0, decodedString.length);
ivProperty.setImageBitmap(decodedByte);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
But it gives me a NullPointerException.
My logcat values are:
03-27 10:10:44.355: E/AndroidRuntime(2391): java.lang.NullPointerException: Input string was null.
03-27 10:10:44.355: E/AndroidRuntime(2391): at com.big_property_info.Base64.decode(Base64.java:1242)
03-27 10:10:44.355: E/AndroidRuntime(2391): at com.big_property_info.MainActivityMap$GeocoderTask$2.getInfoContents(MainActivityMap.java:314)
How can I solve that NullPointerException when receiving the image string?
A: You can use Picasso for load images easily. For example:
Picasso.with(getActivity()).load(url).into(imageView);
A: Check this. Its image downloader library and easy to implement.
DisplayImageOptions imageOptions;
ImageLoader imageLoader;
imageOptions = new DisplayImageOptions.Builder().showImageForEmptyUri(R.drawable
.logo_image).showImageOnFail(R.drawable.logo_image).cacheInMemory(true)
.cacheOnDisk(true)
.build();
imageLoader = ImageLoader.getInstance();
imageLoader.init(ImageLoaderConfiguration.createDefault(getActivity()));
imageLoader.displayImage(uri, imageView, imageOptions);
//Where uri is url of imageview stored in server. imageview is Imageview in which you want to show image.
Check out link for document in github.
A: You need to check your string whether it is null or not.
ImageView ivProperty = ((ImageView) myContentView.findViewById(R.id.image_property));
byte[] decodedString;
try {
if (image!=null&&!image.equalsIgnoreCase("")) {
decodedString = Base64.decode(image, Base64.URL_SAFE);
Bitmap decodedByte = BitmapFactory.decodeByteArray(decodedString, 0, decodedString.length);
ivProperty.setImageBitmap(decodedByte);
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
A: The problem may be caused by the character set used for encoding and decoding in PHP and Android.
Use the same character set at both ends for encoding and decoding the image data.
refer this link to resolve your problem
https://stackoverflow.com/a/15156991/4985541
A: If you are getting image in Base64 string you can decode it like this,
byte[] data = Base64.decode(base64, Base64.DEFAULT);
String text = new String(data, "UTF-8");
Or you can get the link also from server and use below code to show it on image view.
if (imageUrl != null && isImageUrl) {
Picasso.with(getApplicationContext()).load(Constants.IMG_URL + imageUrl).resize(150, 100).centerInside().into(ivActionHome);
}
| |
doc_5619
|
I want to trigger the "test" workflow at the end of the "nuget" workflow, but it doesn't work.
The test file has the on: workflow_run event, but it is not triggered.
Here are my files :
nuget.yml :
name: nuget
on:
workflow_dispatch:
push:
branches: "**"
pull_request:
branches: "**"
jobs:
get_nuget:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: nuget/setup-nuget@v1
with:
nuget-version: 'latest'
- uses: actions/cache@v1
id: cache
with:
path: ~/.nuget/packages
key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
- name: NuGet Restore
if: steps.cache.outputs.cache-hit != 'true'
run: nuget restore gremy.ovh.sln --no-restore
test.yml
name: test
on:
workflow_run:
workflows: [nuget]
types:
- completed
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup .NET
uses: actions/setup-dotnet@v2
with:
dotnet-version: 6.0.x
- name: Restore dependencies
run: dotnet restore --locked-mode
- name: Build
run: dotnet build --no-restore
- name: Test
run: dotnet test --no-build --verbosity normal
| |
doc_5620
|
The limitation is that I'm using MySql 5.5.36.
Also, let's assume that we are talking about millions of tables that I'm using, and that using the MySQL information schema is not going to happen.
What I would like to know is there an easy way to get table names used?
EXPLAIN is obviously good for SELECT statements, but since it's MySQL 5.5.36 I can't use it on REPLACE, UPDATE, INSERT, etc.
PDOStatement::getColumnMeta might help us get a table name, but it won't work with queries that don't return a result set.
Some kind of regexp for this might be possible, but I very much doubt that is a good solution; my queries are big, have multiple JOINs, etc., so the regexp would be very complicated and would probably fail a fair percentage of the time.
Any other ideas?
| |
doc_5621
|
Array
(
[0] => Array
(
'name' => 'A'
'Date' => 12-10-1990
'Grade' => '20D'
'Level' => 'A10'
)
[1] => Array
(
'name' => 'C'
'Date' => 10-10-1990
'Grade' => '10C'
'Level' => 'C10'
)
[2] => Array
(
'name' => 'B'
'Date' => 12-11-1995
'Grade' => '13E'
'Level' => 'A8'
)
)
Does anybody know how to do this (paging, sorting)? Example display in an HTML table:
Name (asc/desc) | Date (asc/desc)| Grade (asc/desc)| Level(asc/desc)
Thanks
A: from http://php.net/manual/en/function.array-multisort.php
<?php
$data[] = array('volume' => 67, 'edition' => 2);
$data[] = array('volume' => 86, 'edition' => 1);
$data[] = array('volume' => 85, 'edition' => 6);
$data[] = array('volume' => 98, 'edition' => 2);
$data[] = array('volume' => 86, 'edition' => 6);
$data[] = array('volume' => 67, 'edition' => 7);
?>
In this example, we will order by volume descending, edition ascending.
We have an array of rows, but array_multisort() requires an array of columns, so we use the below code to obtain the columns, then perform the sorting.
<?php
// Obtain a list of columns
foreach ($data as $key => $row) {
$volume[$key] = $row['volume'];
$edition[$key] = $row['edition'];
}
// Sort the data with volume descending, edition ascending
// Add $data as the last parameter, to sort by the common key
array_multisort($volume, SORT_DESC, $edition, SORT_ASC, $data);
?>
The dataset is now sorted, and will look like this:
volume | edition
-------+--------
98 | 2
86 | 1
86 | 6
85 | 6
67 | 2
67 | 7
A: you could use the usort function to sort your array.
Imagine that you want to order by 'name':
usort($array, 'cmpname'); // sorts $array in place; usort() returns a boolean, not the sorted array
function cmpname($arr1,$arr2){
$nameA=$arr1['name'];
$nameB=$arr2['name'];
if ($nameA == $nameB) {
return 0;
}
return ($nameA > $nameB) ? +1 : -1;
}
And then you can do the pagination returning the desired number of items using the function array_slice.
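A small sketch of that pagination step, after the usort() call above (the page number and page size are made-up values):

```php
<?php
// $array has already been sorted in place with usort() as above.
$pageSize = 10;
$page = 2; // 1-based page number
$offset = ($page - 1) * $pageSize;
// Take one "page" of items out of the sorted array.
$pageItems = array_slice($array, $offset, $pageSize);
?>
```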
| |
doc_5622
|
The issue is that, for one of my projects, the file ends up being very big (think 50M instead of a few k).
How can I investigate why it is so big?
The commands line:
git bundle create someFile.bundle myBranch origin/myBranch
A: To store just a few commits, you must know at least one commit that is present in the repository where you want to unbundle the bundle. origin/master is a good candidate if the repositories have been cloned from the same upstream, but any commit object ID (SHA-1) that is present in both repositories will do. Then the command is:
git bundle create someFile.bundle origin/master..myBranch
or (as your post indicates)
git bundle create someFile.bundle origin/myBranch..myBranch
To repeat: the destination must have the exact commit that origin/myBranch points to in your repository; it is not enough that they have some commit named origin/myBranch.
Explanation:
With your command, you have actually packaged the entire history leading up to myBranch. The trick to reduce the size is to package just the objects that the other side does not have. For this reason, you pick a commit that is present in the destination, and then you can reduce the history by passing a commit range that ends in your branch tip: their/commit..myBranch.
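A self-contained sketch of the difference, runnable in a scratch directory. The repo, the branch, and the origin-master branch (a local stand-in for origin/master, since there is no real remote here) are all made up for the demo:

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "base"   # pretend the destination has this
git branch -f origin-master             # local stand-in for origin/master
git checkout -q -b myBranch
git commit -q --allow-empty -m "new work on myBranch"

# Full history: everything reachable from myBranch
git bundle create full.bundle myBranch
# Incremental: only what origin-master is missing
git bundle create small.bundle origin-master..myBranch
git bundle verify small.bundle          # lists the required prerequisite commit
```

With a real history, full.bundle carries every object back to the root commit, while small.bundle carries only the new commits, which is where the size difference comes from.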
| |
doc_5623
|
I tried the following based on other answers from SO:
* Reinstalled gradle (using brew)
* Removed Android Studio and everything associated (except the SDK), including the caches and logs
* Manually downloaded the gradle-2.6 zip and placed it inside .gradle/wrapper/dists
* I even tried deleting a bunch of files from the plugins directory. Didn't work.
From my event log :
14:14:19 Throwable: Null child action in group Toolbar Run Actions () of class class com.intellij.openapi.actionSystem.DefaultActionGroup, id=GradleKiller.KillGradle
14:14:20 Gradle sync started
14:14:21 Gradle sync failed: Could not create an instance of Tooling API implementation using the specified Gradle distribution 'https://services.gradle.org/distributions/gradle-2.6-all.zip'.
Consult IDE log for more details (Help | Show Log)
14:14:22 Throwable: Null child action in group Toolbar Run Actions () of class class com.intellij.openapi.actionSystem.DefaultActionGroup, id=GradleKiller.KillGradle
14:14:22 Unregistered VCS root detected
The directory /Users/anuj/projects/makerville/android is under Git, but is not registered in the Settings.
Add root Configure Ignore
14:14:27 Throwable: Null child action in group Toolbar Run Actions () of class class com.intellij.openapi.actionSystem.DefaultActionGroup, id=GradleKiller.KillGradle
14:14:32 Throwable: Null child action in group Toolbar Run Actions () of class class com.intellij.openapi.actionSystem.DefaultActionGroup, id=GradleKiller.KillGradle
Other details :
Android Studio 1.3.1
JRE 1.6.0
build.gradle looks like
| |
doc_5624
|
The code below is what I have so far to create the search icon.
But not sure where to go from here, I can't seem to find anything that can help me figure this out. Help would be very much appreciated.
Many thanks
https://i.stack.imgur.com/OXeT2.png
https://i.stack.imgur.com/FskZS.png
https://i.stack.imgur.com/m3Xto.png
private func setupRightNavItems() {
let searchButton = UIButton(type: .system)
searchButton.setImage(#imageLiteral(resourceName: "search_icon").withRenderingMode(.alwaysOriginal), for: .normal)
searchButton.frame = CGRect(x: 0, y: 0, width: 25, height: 25)
navigationItem.rightBarButtonItem = UIBarButtonItem(customView: searchButton)
}
A: Take a look at UISearchController, you can init it with searchResultsController, which is your second screen. Then when user taps the search button, you present UISearchController.
A: *
*Create a new viewController and put the functionality of images 2 and 3 in this viewController. Present this viewController when you press the search button from the first viewController.
*Or you can make a UIView on the image-1 viewController and put a search bar and table view in this view. Just show and hide this UIView when you press the search button.
| |
doc_5625
|
Sub AddHeader()
Range("CA1").Formula = "Stay Date"
End Sub
=====================================
Sub CellCopy()
Range("H2:H4000").Copy Range("CA2")
End Sub
=====================================
Sub CopyData()
Dim xRow As Long
Dim VInSertNum As Variant
xRow = 1
Application.ScreenUpdating = False
Do While (Cells(xRow, "A") <> "")
VInSertNum = Cells(xRow, "P")
If ((VInSertNum > 1) And IsNumeric(VInSertNum)) Then
Range(Cells(xRow, "A"), Cells(xRow, "BZ")).Copy
Range(Cells(xRow + 1, "A"), Cells(xRow + VInSertNum - 1, "BZ")).Select
Selection.Insert Shift:=xlDown
xRow = xRow + VInSertNum - 1
End If
xRow = xRow + 1
Loop
Application.ScreenUpdating = False
End Sub
=====================================
Sub RunAllMacros()
AddHeader
CellCopy
CopyData
End Sub
A: Your question isn't entirely clear but if I understand correctly, you want to:
*
*Repeat each row in your worksheet n times (where n is read from the worksheet itself and each row has its own n value).
*There are certain columns you want to exclude from being repeated.
I would add that:
*
*It might be better to loop in reverse order (so that row insertions do not affect the iterator/variable keeping track of loop progress).
*Have you considered copying the entire row (Range.EntireRow) and then using Range.Clear to clear those columns which you didn't want repeated?
*It's always good to include an example of input and expected output. Otherwise, it's difficult for the responder to verify their own answer.
The code below:
Option Explicit
Private Sub AddHeader(ByVal someSheet As Worksheet)
someSheet.Range("CA1").Formula = "Stay Date"
End Sub
Private Sub CellCopy(ByVal someSheet As Worksheet)
someSheet.Range("H2:H4000").Copy someSheet.Range("CA2")
End Sub
Private Sub RunAllMacros()
Dim sheetToModify As Worksheet
Set sheetToModify = ActiveSheet ' Better to replace with something like ThisWorkbook.Worksheets("Sheet1")
AddHeader sheetToModify
CellCopy sheetToModify
CopyData sheetToModify
End Sub
Private Sub CopyData(ByVal someSheet As Worksheet)
Dim lastRow As Long
lastRow = someSheet.Cells(someSheet.Rows.Count, "A").End(xlUp).Row
Dim rowIndex As Long
For rowIndex = lastRow To 2 Step -1 ' Presume you want to skip headers?
Dim numberOfTimesToRepeatRow As Variant
numberOfTimesToRepeatRow = someSheet.Cells(rowIndex, "P") ' Will need to -1 as count includes the row being copied.
If IsGreaterThanOne(numberOfTimesToRepeatRow) Then
With someSheet.Range("A" & rowIndex, "CA" & rowIndex)
.Copy
.Offset(1).Resize(numberOfTimesToRepeatRow - 1).Insert Shift:=xlDown
' Have to repeat/re-evaluate (cannot use With or
' object reference since rows have been inserted)
.Offset(1).Resize(numberOfTimesToRepeatRow - 1).Columns("CA").Clear
End With
End If
Next rowIndex
Application.CutCopyMode = False
End Sub
Private Function IsGreaterThanOne(ByVal someValue As Variant) As Boolean
' Dedicated function to reduce indentation in caller.
' Returns True if value is numeric AND greater than 1 (else
' False).
' Separate IF statements since no short-circuit
' evaluation -- meaning non-numeric values could otherwise
' cause type mismatch error.
If IsNumeric(someValue) Then
If someValue > 1 Then
IsGreaterThanOne = True
End If
End If
End Function
The code above keeps the value in column CA for only the original rows -- and not for the newly inserted rows. In the other words, there are blanks in column CA of the newly inserted rows.
Hope that makes sense and gives you some idea on how to achieve this. If I've misunderstood, you can let me know.
| |
doc_5626
|
Currently, my point data are being pulled from MYSQL and converted into GeoJson using GeoPHP. The map.
I would like to know if there is a way to use MarkerCluster plugin with my GeoJson file, called mysql_points_geojson.php in code below:
// Bike Racks
var bikeRacksIcon = L.icon({
iconUrl: 'bicycleparking.png',
iconSize: [24, 28],
iconAnchor: [12, 28],
popupAnchor: [0, -25]
});
bikeRacks = new L.geoJson(null, {
pointToLayer: function (feature, latlng) {
return L.marker(latlng, {
icon: bikeRacksIcon,
title: feature.properties.city
});
},
onEachFeature: function (feature, layer) {
if (feature.properties) {
var content = '<table border="0" style="border-collapse:collapse;" cellpadding="2">' +
'<tr>' + '<th>City</th>' + '<td>' + feature.properties.city + '</td>' + '</tr>' +
'<tr>' + '<th>Country</th>' + '<td>' + feature.properties.country + '</td>' + '</tr>' +
'<table>';
layer.bindPopup(content);
}
}
});
$.getJSON("mysql_points_geojson.php", function (data) {
bikeRacks.addData(data);
}).complete(function () {
map.fitBounds(bikeRacks.getBounds());
});
A: Your layer bikeRacks can either be a L.MarkerClusterGroup or a L.geoJson layer.
A solution could be to create your own custom geojson layer that supports clustering.
I think it would be far easier to forget about L.geojson layer and parse the "mysql_points_geojson.php" data structure yourself (you can take ideas from https://github.com/Leaflet/Leaflet/blob/master/src/layer/GeoJSON.js)
Furthermore, I think it would be even easier to forget about geojson and see if the server can send back a simple array of points (easier to parse).
For me the code should be like that ...
var bikeRacks = new L.MarkerClusterGroup({});
$.getJSON("mysql_points_geojson.php", function (data) {
// iterate on data to find the points
// create a marker for each point
bikeRacks.addLayer(marker);
}).complete(function () {
map.fitBounds(bikeRacks.getBounds());
});
A: Even if it's an old post, here is how I did it. I used clusterMarker plugin.
var promise = $.getJSON("yourFile.json");
/* Instanciate here your clusters */
var clusters = L.markerClusterGroup({
spiderfyOnMaxZoom: false,
showCoverageOnHover: false,
zoomToBoundsOnClick: true
});
promise.then(function(data) {
Inside this function, through click actions or whatever, you add your markers to the clusters.
myMarker.addTo(clusters);
And finally, at the end, you add the clusters:
clusters.addTo(map);
| |
doc_5627
|
The error:
When I visit the page that contains the debugging information, it says that the uploaded file exceeds the maximum file size allowed by the server (40 MB), although I uploaded a 100 KB image.
In the content type of my agenda, both my image fields have the 40 MB limit set up, and likewise upload_max_filesize in my php.ini is set to 40 MB.
I have confirmed that this error can be reproduced on different environments (Windows + Xampp and Linux + Nginx)
I ran out of ideas, so if anyone has any ideas on how to fix this, I would appreciate it ^^.
A: In your Nginx server config try something like:
server {
    client_max_body_size 40M;
}
| |
doc_5628
|
" Unexpected token, expected : "
Why is that?
This is my first code:
const GetUp = (num) => {
for (let i = 1; i <= num; i++) {
if (i % 3 === 0) {
console.log('Get')
}
if (i % 5 === 0) {
console.log('Up')
}
if (i % 3 === 0 && i % 5 === 0) {
console.log('GetUp')
} else {
console.log(i)
}
}
}
GetUp(200)
This is my recent code:
const SetRuc = (num) => {
for (let i = 1; i <= num; i++) {
(i % 3 === 0) ? console.log('Set')
(i % 5 === 0) ? console.log('Ruc')
(i % 3 === 0 && i % 5 === 0) ? console.log('SetRuc') : console.log(i)
}
}
SetRuc(100)
A: Use && as a shorthand for if without else.
Add semicolons (;) so the parser knows where each instruction ends; otherwise it will evaluate the three lines as one expression.
const SetRuc = (num) => {
for (let i = 1; i <= num; i++) {
(i % 3 === 0) && console.log('Set');
(i % 5 === 0) && console.log('Ruc');
(i % 3 === 0 && i % 5 === 0) ? console.log('SetRuc') : console.log(i);
}
}
SetRuc(100)
A: EG this:
(i % 3 === 0) ? console.log('Set')
provides no : option for the ?. If you don't want anything to happen in the event that the ? check is false, you can simply provide an empty object, or undefined:
(i % 3 === 0) ? console.log('Set') : {}
A: If you don't want to do anything in case of false result in ternary operator. you can just say something like statement ? 'expression' : null
just mention null in there. Something like
const SetRuc = (num) => {
for (let i = 1; i <= num; i++) {
(i % 3 === 0) ? console.log('Set') : null;
(i % 5 === 0) ? console.log('Ruc') : null;
(i % 3 === 0 && i % 5 === 0) ? console.log('SetRuc') : console.log(i);
}
}
SetRuc(100)
A: You misuse the ternary operator, the syntax is:
condition ? expr1 : expr2
Meaning that expr1 will be evaluated if the condition is true, otherwise expr2 will.
So you may want this:
const SetRuc = (num) => {
for (let i = 1; i <= num; i++) {
(i % 3 === 0) ? console.log('Set') :
(i % 5 === 0) ? console.log('Ruc') :
(i % 3 === 0 && i % 5 === 0) ? console.log('SetRuc') : console.log(i)
}
}
SetRuc(100)
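The same rule holds in other languages: a conditional expression must supply both branches. A Python sketch of the equivalent logic, purely for illustration (function and variable names invented here):

```python
def set_ruc(num):
    out = []
    for i in range(1, num + 1):
        # Short-circuit 'and' plays the role of a one-armed conditional,
        # just like JS's 'cond && expr'.
        (i % 3 == 0) and out.append('Set')
        (i % 5 == 0) and out.append('Ruc')
        # Python's conditional expression, like the JS ternary, requires
        # both branches: "x if cond" alone is a syntax error.
        out.append('SetRuc' if i % 3 == 0 and i % 5 == 0 else str(i))
    return out

print(set_ruc(15))
```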
A: const SetRuc = (num) => {
for (let i = 1; i <= num; i++) {
(i % 3 === 0) ? console.log('Set') :
(i % 5 === 0) ? console.log('Ruc') :
(i % 3 === 0 && i % 5 === 0) ? console.log('SetRuc') : console.log(i)
}
}
SetRuc(100)
You missed the : after console.log('Set') and console.log('Ruc').
| |
doc_5629
|
I have already created the index for the query of whereArrayContains and orderBy. So it works perfectly when I fetch the first 10 records.
But when I would like to fetch the next 10 result, then I get back the first 10.
override suspend fun getShoppingLists(
currentUser: User, coroutineScope: CoroutineScope
) = flow<Resource<MutableList<ShoppingList>>> {
emit(Resource.Loading(true))
val result = if(lastShoppingListResult == null) {
shoppingListsCollectionReference
.whereArrayContains(FRIENDSSHAREDWITH, currentUser.id)
.orderBy(DUEDATE, Query.Direction.DESCENDING)
.limit(LIMIT_10)
.get()
.await()
} else {
shoppingListsCollectionReference
.whereArrayContains(FRIENDSSHAREDWITH, currentUser.id)
.orderBy(DUEDATE, Query.Direction.DESCENDING)
.startAfter(lastShoppingListResult)
.limit(LIMIT_10)
.get()
.await()
}
val documentsList = result.documents
if (documentsList.size > 0) {
lastShoppingListResult = documentsList[documentsList.size - 1]
}
val listOfShoppingList = mutableListOf<ShoppingList>()
for (document in documentsList) {
val shoppingList = document?.toObject(ShoppingList::class.java)
shoppingList?.let {
listOfShoppingList.add(shoppingList)
}
}
emit(Resource.Success(listOfShoppingList))
}.catch { exception ->
exception.message?.let { message ->
emit(Resource.Error(message))
}
}.flowOn(Dispatchers.IO)
A: Finally I have found the solution.
I was passing the lastShoppingListResult variable as a nullable object into the .startAfter() method.
I had to cast it to a non-nullable DocumentSnapshot; with that change the request works perfectly.
...
.startAfter(lastShoppingListResult as DocumentSnapshot)
...
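The startAfter pattern above is plain cursor-based pagination; here is a minimal, Firestore-free Python sketch of the idea (function names and data invented for illustration):

```python
def fetch_page(sorted_items, page_size, last_seen=None):
    """Return the next page after the cursor 'last_seen' (exclusive),
    plus the new cursor -- the startAfter idea without Firestore."""
    start = 0 if last_seen is None else sorted_items.index(last_seen) + 1
    page = sorted_items[start:start + page_size]
    next_cursor = page[-1] if page else None
    return page, next_cursor

items = list(range(25))
page1, cur = fetch_page(items, 10)        # first page: no cursor yet
page2, cur = fetch_page(items, 10, cur)   # next page starts *after* the cursor
```

The key point mirrors the fix in the answer: the cursor must be the actual last item of the previous page, not a null placeholder.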
| |
doc_5630
|
However it's working fine with Chrome or Mozilla
Code:
{
var optgroup_ids = optgroupids.split('%')[0].replace('_span', '');
var optgroup_id = optgroup_ids.substr(0, optgroup_ids.lastIndexOf(optgroup_ids,'_'));
error = true;
mprint("Error in optgroup condition for optgroup id: " + optgroup_id + ": " + e.message, "red");
}
I tried adding the below code snippet at top of Script as polyfill
if (!('lastIndexOf' in Array.prototype)) {
Array.prototype.lastIndexOf= function(find, i /*opt*/) {
if (i===undefined) i= this.length-1;
if (i<0) i+= this.length;
if (i>this.length-1) i= this.length-1;
for (i++; i-->0;) /* i++ because from-argument is sadly inclusive */
if (i in this && this[i]===find)
return i;
return -1;
};
}
No Luck! Need help on how to add polyfills.
| |
doc_5631
|
{
"startTime" : NumberLong("1483542955570"),
"startDate" : ISODate("2017-01-04T15:15:55.570Z"),
"endTime" : NumberLong("1483542955570"),
"endDate" : ISODate("2017-01-04T15:15:55.570Z")
}
While mapping this back to a Java POJO, I am trying the below code.
public <T> T getPOJOFromMongoDocument(Document resourceDocument, Class<T> clazz) {
String serialize = JSON.serialize(resourceDocument);
return objectMapper.readValue(serialize,
clazz);
}
serialize has the date fields returned as following
"startDate" : { "$date" : "2017-01-04T15:15:55.570Z"}
Due to $date, Jackson ObjectMapper returns the below exception during parsing:
java.lang.RuntimeException: Error parsing mongoDoc to Pojo : errorMessage : {Can not deserialize instance of java.util.Date out of START_OBJECT token at [Source: {
"startTime": 1483542955570,
"startDate": {
"$date": "2017-01-04T15:15:55.570Z"
},
"endTime": 1483542955570,
"endDate": {
"$date": "2017-01-04T15:15:55.570Z"
}}; line: 1, column: 381] (through reference chain: com.gofynd.engine.mongo.models.RuleWithDataVO["validity"]->com.gofynd.engine.mongo.models.ValidityVO["startDate"])}
Is there way to solve this without using an ODM?
A: When deserializing to Date Jackson expects a String like "2017-01-04T15:15:55.570Z". Instead, it sees the start of another object (the { char) inside the JSON hence the exception.
Consider specifying your Pojo class and another MongoDate class similar to this:
class MongoDate {
@JsonProperty("$date")
Date date;
}
class Pojo {
long startTime;
long endTime;
MongoDate startDate;
MongoDate endDate;
}
Alternatively if you can't / don't want to add a MongoDate class you can introduce a custom deserializer for Date fields. In that case Pojo:
class Pojo {
long startTime;
long endTime;
@JsonDeserialize(using = MongoDateConverter.class)
Date startDate;
@JsonDeserialize(using = MongoDateConverter.class)
Date endDate;
}
And the deserializer would look like this:
class MongoDateConverter extends JsonDeserializer<Date> {
private static final SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
@Override
public Date deserialize(JsonParser jp, DeserializationContext ctxt) throws IOException {
JsonNode node = jp.readValueAsTree();
try {
return formatter.parse(node.get("$date").asText());
} catch (ParseException e) {
return null;
}
}
}
A: I went back and tried a similar approach of using a deserializer.
Here's the code:
public class MongoDateDeserializer extends JsonDeserializer<Date> {
@Override
public Date deserialize(JsonParser jsonParser, DeserializationContext deserializationContext)
throws IOException {
ObjectCodec oc = jsonParser.getCodec();
JsonNode node = oc.readTree(jsonParser);
String dateValue = node.get("$date")
.asText();
DateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
Date date = null;
try {
date = df.parse(dateValue);
} catch (ParseException e) {
e.printStackTrace();
}
return date;
}}
Changes in VO are are as below:
@JsonDeserialize(using = MongoDateDeserializer.class)
private Date startDate;
@JsonDeserialize(using = MongoDateDeserializer.class)
private Date endDate;
This has worked successfully.
However, it would be better if MongoDB's JSON.serialize returned normalized JSON. Hopefully in the future.
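For comparison, the same "$date" unwrapping can be done generically with a JSON object hook; here is a Python sketch (the code above is Java/Jackson, so this is purely illustrative):

```python
import json
from datetime import datetime

def mongo_hook(obj):
    # Unwrap MongoDB extended-JSON dates: {"$date": "2017-01-04T15:15:55.570Z"}
    if set(obj) == {"$date"}:
        # %z accepts the trailing 'Z' on Python 3.7+
        return datetime.strptime(obj["$date"], "%Y-%m-%dT%H:%M:%S.%f%z")
    return obj

doc = '{"startTime": 1483542955570, "startDate": {"$date": "2017-01-04T15:15:55.570Z"}}'
parsed = json.loads(doc, object_hook=mongo_hook)
print(parsed["startDate"])
```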
| |
doc_5632
|
Table(attribute1, attribute2...);
---------------------------------
Users(iduser, username)
Link(idlink, title, userid)
Comment(idcomment, content, linkid, userid)
How to select:
Link title, with corresponding username and number of comments?
I'm currently doing like this:
Q1-Select links (SELECT * FROM `links`)
Q2-Extract usernames from previous query(Q1) - (SELECT username FROM `user` WHERE iduser=Q1.userid
Q3-Extract number of comments from Q1 by id (SELECT COUNT(*) as comments FROM `comment` WHERE linkid='Q1.idlink')
I believe we can do this in much more optimized way. I got idea how to get Link with corresponding username but I got stuck when I need to count comments.
A: SELECT iduser, username, Link.title, COUNT(idcomment)
FROM Users
LEFT JOIN Link ON (iduser = userid)
LEFT JOIN Comment ON (linkid = idlink)
GROUP BY iduser, idlink
Note that your Comment table is somewhat badly designed - the 'userid' field is not necessary, and can actually lead to a situation where you've got a cross-linked record, e.g. a Comment belonging to user A could be linked to a Link record belonging to user B.
A: It is good practice to get into the habit of putting the fields you want into both the SELECT and GROUP BY clauses, that way it won't come as such a shock when you have to use an RDBMS that insists on it.
SELECT
`l`.`idlink`,
`l`.`title`,
`u`.`username`,
COUNT(`c`.`idcomment`) AS `comment_count`
FROM `links` `l`
INNER JOIN `users` `u`
ON `l`.`userid` = `u`.`iduser`
LEFT JOIN `comments` `c`
ON `l`.`idlink` = `c`.`linkid`
GROUP BY
`l`.`idlink`,
`l`.`title`,
`u`.`username`
A: SELECT
l.idlink
, l.title
, l.userid
, u.iduser
, u.username
, c.idcomment
, c.content
FROM Link AS l
JOIN Users AS u ON u.iduser=l.userid
JOIN Comment AS c ON c.linkid=l.idlink
| |
doc_5633
|
But I want to calculate the size of the browser area available to me and divide that up into 80x60 pixel blocks, so different displays would have a different number of GridLayout cells. i.e. the larger your display the more cells you have in your grid.
The problem I'm having is that at the init() time, the WebBrowser information that I can get the width and height from, isn't available to me (and it says so in the API). I've tried the example code but 'attach' is still being called from init, effectively.
I could use some sort of listener (not sure which) to do my grid setup and adding controls to all that but that sounds really messy and cumbersome to me.
So, the questions:
*
*What Listener would be appropriate?
*Isn't there just a simple way of working out what available screen space there is?
A: Alternative 1:
WebBrowser browser = ((WebApplicationContext) getApplication().getContext()).getBrowser();
int width = browser.getScreenWidth();
int height = browser.getScreenHeight();
Best Way:
getApplication().getMainWindow().addListener(new ResizeListener() {
@Override
public void windowResized(ResizeEvent e) {
System.out.printf("window info : %sx%s\n",
e.getWindow().getWidth(),
e.getWindow().getHeight());
}
});
A: Make your Application implement HttpServletRequestListener, and implement the callback like so:
public void onRequestEnd(HttpServletRequest request,
HttpServletResponse response) {
WebBrowser browser = ((WebApplicationContext) getContext()).getBrowser();
int width = browser.getScreenWidth();
int height = browser.getScreenHeight();
if(width==0 || width==-1) return;
if(height==0 || height==-1) return;
System.out.println("Your browser window is "+width+" by "+height);
}
| |
doc_5634
|
let arr = obj.map(e => {
let { dateTime, averageJitterInMs } = e;
return [dateTime, +averageJitterInMs];
});
The source date example is stored as UTC time: 2018-10-14T17:19:53.2596293
I've tried:
let arr = obj.map(e => {
let { dateTime, averageJitterInMs } = e;
return [function() { return new Date(dateTime) }, +averageJitterInMs];
});
this gives me zeros for date...
Also tried:
let arr = obj.map(e => {
let { dateTime, averageJitterInMs } = e;
return [function() { new Date(dateTime) }, +averageJitterInMs];
});
this also gives me zeros...
Also tried:
let arr = obj.map(e => {
let { dateTime, averageJitterInMs } = e;
return [Date.parse(dateTime), +averageJitterInMs];
});
this gives me epoch time I believe which isn't what I'm after.
Obviously my syntax for using an anonymous function is incorrect among other things. Just looking for a little help on the proper way to do this or if it can be done inside this let block.
Also, given the date format, do I need to perform additional action on it to interpret the 'time' value in the string? I have the option of formatting the string as I wish from the back-end before it hits the JavaScript (i.e. through my C# code). So if it makes sense to do it there so it's easier for JavaScript to parse it, no problem.
A: You could just use the Date constructor:
return [new Date(dateTime), +averageJitterInMs]
[I could change the format] if it makes sense to do it, so that it's easier for JavaScript to parse it
I would just pass milliseconds since 1970 as that is the only thing Date() parses reliably
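Producing epoch milliseconds on the backend is straightforward; the backend in the question is C#, so this Python sketch only illustrates the conversion (the trimming of the 7-digit fractional part is an assumption based on the sample timestamp):

```python
from datetime import datetime, timezone

def to_epoch_ms(iso_utc):
    # The source format has 7 fractional digits; trim to microseconds (6)
    # and treat the naive timestamp as UTC, as the question states.
    head, frac = iso_utc.split(".")
    dt = datetime.strptime(head + "." + frac[:6], "%Y-%m-%dT%H:%M:%S.%f")
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)

print(to_epoch_ms("2018-10-14T17:19:53.2596293"))
```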
| |
doc_5635
|
As I have 15 choices of discrete diameter sizes which are [2,4,6,8,12,16,20,24,30,36,40,42,50,60,80] that can be used for any of the six pipelines that I have in the system, the list of possible solutions becomes 15^6 which is equal to 11,390,625
To solve the problem, I am using Mixed-Integer Linear Programming using Pulp package. I am able to find the solution for the combination of same diameters (e.g. [2,2,2,2,2,2] or [4,4,4,4,4,4]) but what I need is to go through all combinations (e.g. [2,4,2,2,4,2] or [4,2,4,2,4,2] to find the minimum. I attempted to do this but the process is taking a very long time to go through all combinations. Is there a faster way to do this ?
Note that I cannot calculate the pressure drop for each pipeline independently, as the choice of diameter affects the total pressure drop in the system. Therefore, at any time, I need to calculate the pressure drop of each combination in the system.
I also need to constrain the problem such that rate / pipeline cross-sectional area > 2.
Your help is much appreciated.
The first attempt for my code is the following:
from pulp import *
import random
import itertools
import numpy
rate = 5000
numberOfPipelines = 15
def pressure(diameter):
diameterList = numpy.tile(diameter,numberOfPipelines)
pressure = 0.0
for pipeline in range(numberOfPipelines):
pressure += rate/diameterList[pipeline]
return pressure
diameterList = [2,4,6,8,12,16,20,24,30,36,40,42,50,60,80]
pipelineIds = range(0,numberOfPipelines)
pipelinePressures = {}
for diameter in diameterList:
pressures = []
for pipeline in range(numberOfPipelines):
pressures.append(pressure(diameter))
pressureList = dict(zip(pipelineIds,pressures))
pipelinePressures[diameter] = pressureList
print 'pipepressure', pipelinePressures
prob = LpProblem("Warehouse Allocation",LpMinimize)
use_diameter = LpVariable.dicts("UseDiameter", diameterList, cat=LpBinary)
use_pipeline = LpVariable.dicts("UsePipeline", [(i,j) for i in pipelineIds for j in diameterList], cat = LpBinary)
## Objective Function:
prob += lpSum(pipelinePressures[j][i] * use_pipeline[(i,j)] for i in pipelineIds for j in diameterList)
## At least each pipeline must be connected to a diameter:
for i in pipelineIds:
prob += lpSum(use_pipeline[(i,j)] for j in diameterList) ==1
## The diameter is activiated if at least one pipelines is assigned to it:
for j in diameterList:
for i in pipelineIds:
prob += use_diameter[j] >= lpSum(use_pipeline[(i,j)])
## run the solution
prob.solve()
print("Status:", LpStatus[prob.status])
for i in diameterList:
if use_diameter[i].varValue> pressureTest:
print("Diameter Size",i)
for v in prob.variables():
print(v.name,"=",v.varValue)
This what I did for the combination part which took really long time.
xList = np.array(list(itertools.product(diameterList,repeat = numberOfPipelines)))
print len(xList)
for combination in xList:
pressures = []
for pipeline in range(numberOfPipelines):
pressures.append(pressure(combination))
pressureList = dict(zip(pipelineIds,pressures))
pipelinePressures[combination] = pressureList
print 'pipelinePressures',pipelinePressures
A: I would iterate through all combinations; I think you would otherwise run into memory problems trying to model ALL combinations in a MIP.
If you iterate through the problems, perhaps using the multiprocessing library to use all cores, it shouldn't take long. Just remember to hold only the best combination found so far, and not to generate all combinations at once and then evaluate them.
If the problem gets bigger you should consider Dynamic Programming Algorithms or use pulp with column generation.
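The "keep only the best so far" loop looks like this in outline. This sketch uses a made-up cost model and feasibility bound, not the asker's actual hydraulics, and trims the diameter list for brevity:

```python
from itertools import product

diameters = [2, 4, 6, 8, 12]   # trimmed choice list for the sketch
n_pipes = 3
rate = 5000.0

def pressure_drop(combo):
    # stand-in for the real system-wide pressure calculation
    return sum(rate / d for d in combo)

def cost(combo):
    return sum(d * d for d in combo)  # hypothetical objective

best = None
# product() is a generator: combinations are evaluated one at a time,
# so memory stays constant even for 15**6 candidates.
for combo in product(diameters, repeat=n_pipes):
    if pressure_drop(combo) > 5000:   # assumed feasibility bound
        continue
    c = cost(combo)
    if best is None or c < best[0]:
        best = (c, combo)

print(best)
```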
| |
doc_5636
|
CREATE TABLE word(
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
word TEXT NOT NULL,
count INT NOT NULL,
ratio NUMERIC(10, 3) NOT NULL,
percent_of_total NUMERIC(10, 3) NOT NULL,
daily_count_id UUID REFERENCES daily_count(id)
);
I then try to insert with this statement:
INSERT INTO word (word, count, ratio, percent_of_total, daily_count_id)
VALUES ('test', 5, 5/214, 5*100/214,
(SELECT id from daily_count WHERE day_of_count = CURRENT_DATE+1));
It works and inserts the row, but when I select it from the table the numeric values have been rounded, like the following:
67035a35-e5df-495b-95d5-cb3b4041c7b4 test 5 0.000 2.000 91858e7a-3440-4959-9074-9d197d6c97fc
The values 2.000 and 0.000 are rounded but I need them to be the precise value.
I'm using the DataGrip IDE but I do not think it has anything to do with it.
A: 5/214 will be evaluated as integer division, i.e. the result will be an integer.
If you want floating point division, you can simply do 5.0/214.
(Or use cast, e.g cast(5 as NUMERIC(10, 3)) / 214.)
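The same pitfall exists in most languages with integer types; a quick Python illustration (Python's / always does true division, so // stands in here for SQL's integer division):

```python
ratio_int = 5 // 214        # integer (floor) division, like SQL's 5/214 -> 0
ratio = 5.0 / 214           # true division once one operand is non-integer
percent = 5 * 100.0 / 214

print(ratio_int, round(ratio, 3), round(percent, 3))
```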
| |
doc_5637
|
How should I create a kind of util class with code for saving/loading data?
this is the code for history view:
public class geschiedenis extends AppCompatActivity {
public static final Object EXTRA_MESSAGE = "com.example.Gezondheidzlogin.MESSAGE";
//variables
String jaartal,diagnose,behandeling, titel;
EditText injaartaltext,indiagnosetext,inbehandelingtext;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_inputgeschiedenis);
Intent intent3 = getIntent();
}
public void savedata (View view) {
//laad gegevens in in object voor te versturen naar verwerking
String jaartal;
EditText editText = (EditText) findViewById(R.id.injaartaltext);
jaartal = editText.getText().toString() + ",";
//intent.putExtra(EXTRA_MESSAGE, jaartal);
String diagnose;
editText = (EditText) findViewById(R.id.indiagnosetext);
diagnose = editText.getText().toString()+",";
//intent.putExtra(EXTRA_MESSAGE, diagnose);
editText = (EditText) findViewById(R.id.inbehandelingtext);
behandeling = editText.getText().toString();
String result = "GESCHIEDENIS" + jaartal +diagnose + behandeling;
Intent intent13 = new Intent(this, activity_processing.class);
intent13.putExtra((String) EXTRA_MESSAGE, result);
startActivity(intent13);
}
public void startexit (View view) {
Intent intent100 = new Intent(this, startexit.class);
startActivity(intent100);
finish();
System.exit(0);
}
public void startmenu(View view) {
Intent intent = new Intent(this, MainActivity.class);
startActivity(intent);
}
}
thx for help, Rob
| |
doc_5638
|
./prog -a "(1, 2, 3)(4, 5)(6, 7, 8)" filename
Is it possible to parse this string using flex/bison without writing it to a file and parsing that file?
A: See this question String input to flex lexer
A: I think you can achieve something like that (I did a similar thing) by using fmemopen to create a stream from a char*and then replace that to stdin
Something like that (not sure if it's fully functional since I'm actually trying to remember available syscalls but it would be something similar to this)
char* args = "(1,2,3)(4,5)(6,7,8)";
FILE *newstdin = fmemopen (args, strlen (args), "r");
FILE *oldstdin = stdin; /* keep the original stream pointer so we can restore it */
stdin = newstdin;
// do parsing
stdin = oldstdin;
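The stdin-swapping trick translates to other runtimes too; here it is sketched in Python rather than C, purely to show the concept:

```python
import io
import sys

args = "(1, 2, 3)(4, 5)(6, 7, 8)"

old_stdin = sys.stdin          # keep the real stdin so we can restore it
sys.stdin = io.StringIO(args)  # anything reading stdin now sees the string

parsed = sys.stdin.read()      # a real parser would consume tokens here

sys.stdin = old_stdin          # restore
print(parsed)
```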
A: Here is a complete flex example.
%%
<<EOF>> return 0;
. return 1;
%%
int yywrap()
{
return (1);
}
int main(int argc, const char* const argv[])
{
YY_BUFFER_STATE bufferState = yy_scan_string("abcdef");
// This is a flex source. For yacc/bison use yyparse() here ...
int token;
do {
token = yylex();
} while (token != 0);
// Do not forget to tell flex to clean up after itself. Lest
// ye leak memory.
yy_delete_buffer(bufferState);
return (EXIT_SUCCESS);
}
A: another example. this one redefines the YY_INPUT macro:
%{
int myinput (char *buf, int buflen);
char *string;
int offset;
#define YY_INPUT(buf, result, buflen) (result = myinput(buf, buflen));
%}
%%
[0-9]+ {printf("a number! %s\n", yytext);}
. ;
%%
int main () {
string = "(1, 2, 3)(4, 5)(6, 7, 8)";
yylex();
}
int myinput (char *buf, int buflen) {
int i;
for (i = 0; i < buflen; i++) {
buf[i] = string[offset + i];
if (!buf[i]) {
break;
}
}
offset += i;
return i;
}
A: The answer is "Yes". See the O'Reilly publication called "lex & yacc", 2nd Edition by Doug Brown, John Levine, Tony Mason. Refer to Chapter 6, the section "Input from Strings".
I also just noticed that there are some good instructions in the section "Input from Strings", Chapter 5 of "flex and bison", by John Levine. Look out for routines yy_scan_bytes(char *bytes, int len), yy_scan_string("string"), and yy_scan_buffer(char *base, yy_size_t size). I have not scanned from strings myself, but will be trying it soon.
| |
doc_5639
|
@Override
public boolean onCreateOptionsMenu(Menu menu) {
MenuInflater inflater = getSupportMenuInflater();
inflater.inflate(R.menu.mainmenu, menu);
return true;
}
The XML file I'm trying to inflate (res/menu/mainmenu):
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android" >
<item
android:id="@+id/menu_item"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:actionLayout="@string/filter_menuitem"
android:showAsAction="ifRoom"
android:title="@string/filter_menuitem"
/>
</menu>
Red output from LogCat
10-31 13:42:06.500: E/AndroidRuntime(11629): FATAL EXCEPTION: main
10-31 13:42:06.500: E/AndroidRuntime(11629):android.content.res.Resources$NotFoundException: File Filter from xml type layout resource ID #0x7f090012
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.content.res.Resources.loadXmlResourceParser(Resources.java:2190)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.content.res.Resources.loadXmlResourceParser(Resources.java:2145)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.content.res.Resources.getLayout(Resources.java:872)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.view.LayoutInflater.inflate(LayoutInflater.java:394)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.android.internal.view.menu.MenuItemImpl.setActionView(MenuItemImpl.java:566)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.internal.view.menu.MenuItemWrapper.setActionView(MenuItemWrapper.java:230)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.view.MenuInflater$MenuState.setItem(MenuInflater.java:454)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.view.MenuInflater$MenuState.addItem(MenuInflater.java:468)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.view.MenuInflater.parseMenu(MenuInflater.java:190)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.view.MenuInflater.inflate(MenuInflater.java:112)
10-31 13:42:06.500: E/AndroidRuntime(11629): at edu.calpoly.android.lab3.AdvancedJokeList.onCreateOptionsMenu(AdvancedJokeList.java:112)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.app.SherlockActivity.onCreatePanelMenu(SherlockActivity.java:184)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.ActionBarSherlock.callbackCreateOptionsMenu(ActionBarSherlock.java:560)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.internal.ActionBarSherlockNative.dispatchCreateOptionsMenu(ActionBarSherlockNative.java:64)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.actionbarsherlock.app.SherlockActivity.onCreateOptionsMenu(SherlockActivity.java:149)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.app.Activity.onCreatePanelMenu(Activity.java:2449)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.android.internal.policy.impl.PhoneWindow.preparePanel(PhoneWindow.java:418)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.android.internal.policy.impl.PhoneWindow.invalidatePanelMenu(PhoneWindow.java:769)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.android.internal.policy.impl.PhoneWindow$1.run(PhoneWindow.java:3015)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.os.Handler.handleCallback(Handler.java:605)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.os.Handler.dispatchMessage(Handler.java:92)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.os.Looper.loop(Looper.java:137)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.app.ActivityThread.main(ActivityThread.java:4519)
10-31 13:42:06.500: E/AndroidRuntime(11629): at java.lang.reflect.Method.invokeNative(Native Method)
10-31 13:42:06.500: E/AndroidRuntime(11629): at java.lang.reflect.Method.invoke(Method.java:511)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:995)
10-31 13:42:06.500: E/AndroidRuntime(11629): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:762)
10-31 13:42:06.500: E/AndroidRuntime(11629): at dalvik.system.NativeStart.main(Native Method)
10-31 13:42:06.500: E/AndroidRuntime(11629): Caused by: java.io.FileNotFoundException: Filter
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.content.res.AssetManager.openXmlAssetNative(Native Method)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.content.res.AssetManager.openXmlBlockAsset(AssetManager.java:487)
10-31 13:42:06.500: E/AndroidRuntime(11629): at android.content.res.Resources.loadXmlResourceParser(Resources.java:2172)
10-31 13:42:06.500: E/AndroidRuntime(11629): ... 27 more
And my import statements, if it helps
import android.content.Context;
import android.os.Bundle;
import android.util.Log;
import android.view.KeyEvent;
import android.view.View;
import android.view.View.OnClickListener;
import android.view.View.OnKeyListener;
import android.view.inputmethod.InputMethodManager;
import android.widget.*;
import com.actionbarsherlock.ActionBarSherlock;
import com.actionbarsherlock.app.SherlockActivity;
import com.actionbarsherlock.view.MenuInflater;
import com.actionbarsherlock.view.Menu;
import com.actionbarsherlock.view.MenuItem;
Any feedback as to why this exception is being thrown?
| |
doc_5640
|
I am getting a StreamNotFound error. How can I resolve this issue?
A: The Flash Player does not support Speex codec and does not support adding codecs not built-into the player.
A: While it would be difficult, you could implement the speex codec in Actionscript and use the Sound object's sampleData event to play it.
It would likely be far easier to convert speex to mp3 server-side.
| |
doc_5641
|
$stmt = $conn->prepare("SELECT * FROM `users` WHERE user LIKE ? ");
$stmt->bind_param("s", $filtered_form['user']);
$stmt->execute();
$stmt->store_result();
if ($stmt->num_rows > 0) {
$stmt->bind_result($id, $user, $pass, $first, $last, $type, $email);
$stmt->fetch();
$stmt->close();
}
if ($pass === $filtered_form['pass']) {
$_SESSION['id'] = $id;
$_SESSION['user'] = $user;
$_SESSION['first'] = $first;
$_SESSION['last'] = $last;
$_SESSION['email'] = $email;
$_SESSION['type'] = $type;
header("Location:index.php");
exit;
} else {
return "Incorrect password";
}
however Visual Studio says there is a problem that the variables $id, $user, $pass, $first, $last, $type, $email are not defined. I added the variables like this:
$stmt = $conn->prepare("SELECT * FROM `users` WHERE user LIKE ? ");
$stmt->bind_param("s", $filtered_form['user']);
$stmt->execute();
$stmt->store_result();
if ($stmt->num_rows > 0) {
$id = "";
$user = "";
$pass = "";
$first = "";
$last = "";
$type = "";
$email = "";
$stmt->bind_result($id, $user, $pass, $first, $last, $type, $email);
$stmt->fetch();
$stmt->close();
}
if ($pass === $filtered_form['pass']) {
$_SESSION['id'] = $id;
$_SESSION['user'] = $user;
$_SESSION['first'] = $first;
$_SESSION['last'] = $last;
$_SESSION['email'] = $email;
$_SESSION['type'] = $type;
header("Location:index.php");
exit;
} else {
return "Incorrect password";
}
And the problem goes away. Upon reviewing the PHP documentation, I cant find examples where the variables must be defined first, yet visual studio still shows it as an error. Any idea why this is?
A: Nope, it is not necessary when variables are passed by reference, which is the case here. So it is Visual Studio that is wrong here.
However, you are using obsoleted techniques here, and can get rid of these false positive warnings and reduce the amount of code at once:
$stmt = $conn->prepare("SELECT * FROM `users` WHERE user = ? ");
$stmt->bind_param("s", $filtered_form['user']);
$stmt->execute();
$row = $stmt->get_result()->fetch_assoc();
if ($row and password_verify($filtered_form['pass'], $row['pass'])) {
$_SESSION['user'] = $row;
header("Location:index.php");
exit;
} else {
return "Incorrect password";
}
as you can see, get_result() gives you a much better result (pun not intended) than store_result(), letting you store the user information in a single variable, so it won't litter the $_SESSION array.
And num_rows proves to be completely useless (as it almost always does).
An important note: you should never, ever store passwords in plain text. Always store a hashed password instead.
| |
doc_5642
|
if(isset($_POST['download']))
{
include_once("dbconnect.php");
$records = mysqli_query($con,"select * from curricular");
$delimiter = "\t";
$filename = "Student-Data_".date('Y-m-d').".xls";
$f = fopen('php://memory','w');
$fields = array("ID","NAME");
fputcsv($f,$fields,$delimiter);
while($data = mysqli_fetch_array($records))
{
$output=array($data['ID'],$data['name']);
fputcsv($f,$output,$delimiter);
}
fseek($f,0);
header('Content-type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
header('Content-Disposition: attachment; filename="' . $filename . '";');
//header('Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
//header('Content-Disposition: attachment; filename="' . $filename . '";');
fpassthru($f);
exit;
}
?>
I want to transfer data from MySQL to the Excel but I am not able to set the column and row width of the excel being exported. Please Help.
| |
doc_5643
|
I am not sure how to do this. So any pointers will be appreciated.
Thanks
Rahul
A: You can list a directory using either the opendir(3) and readdir(3) functions or the FSOpenIterator and FSGetCatalogInfoBulk functions from the Core Services File Manager.
A: The easiest solution is to get the Applications dir and then use the Qt helpers to iterate over it - i.e QDir, and finding bundles as directories whose names end in '.app'. Here's some code to get a QDir from a folder reference type - there are many similar constants, to get the desktop/trash/library folders. The 'domain' value is important - for many folders (eg, Library) there's a per-user version as well as global and network versions. FileVault can complicate things further.
The documentation on FSFindFolder should make things clearer, and there's examples all over the web.
static QString getFullPath(const FSRef &ref); // forward declaration, defined below

static QDir applicationsDir()
{
short domain = kOnAppropriateDisk;
FSRef ref;
OSErr err = FSFindFolder(domain, kApplicationsFolderType, false, &ref);
if (err) {
return QDir();
}
return QDir(getFullPath(ref));
}
/*
Constructs a full unicode path from a FSRef.
*/
static QString getFullPath(const FSRef &ref)
{
QByteArray ba(2048, 0);
if (FSRefMakePath(&ref, reinterpret_cast<UInt8 *>(ba.data()), ba.size()) == noErr)
return QString::fromUtf8(ba).normalized(QString::NormalizationForm_C);
return QString();
}
| |
doc_5644
|
b = str(raw_input('please enter a book '))
searchfile = open("txt.txt", "r")
for line in searchfile:
if b in line:
print line
break
else:
print 'Please try again'
This works for what I want to do, but I was wanting to improve on it by repeating the loop if it goes to the else statement. I have tried running it through a while loop but then it says 'line' is not defined, any help would be appreciated.
A: Assuming you want to repeat the search continually until something is found, you can just enclose the search in a while loop guarded by a flag variable:
with open("txt.txt") as searchfile:
found = False
while not found:
b=str(raw_input('please enter a book '))
if b == '':
break # allow the search-loop to quit on no input
for line in searchfile:
if b in line:
print line
found = True
break
else:
print 'Please try again'
searchfile.seek(0) # reset file to the beginning for next search
A: Try this:
searchfile = open("txt.txt", "r")
content = searchfile.readlines()
found = False
while not found:
b = raw_input('Please enter a book ')
for line in content:
if b in line:
print line
found = True
break
else:
print 'Please try again'
searchfile.close()
You load the content in a list and use a boolean flag to control if you already found the book in the file. When you find it, you're done and can close the file.
| |
doc_5645
|
I have the below code, which works exactly as I want. It takes domains with lots of subdomains & normalizes them to just the hostname + TLD.
I can't find any vectorization examples using if-else statements.
import pandas as pd
import time
#import file into dataframe
start = time.time()
path = "Desktop/dom1.csv"
df = pd.read_csv(path, delimiter=',', header='infer', encoding = "ISO-8859-1")
#strip out all ---- values
df2 = df[((df['domain'] != '----'))]
#extract only 2 columns from dataframe
df3 = df2[['domain', 'web.optimisedsize']]
#define tld and cdn lookup lists
tld = ['co.uk', 'com', 'org', 'gov.uk', 'co', 'net', 'news', 'it', 'in' 'es', 'tw', 'pe', 'io', 'ca', 'cat', 'com.au',
'com.ar', 'com.mt', 'com.co', 'ws', 'to', 'es', 'de', 'us', 'br', 'im', 'gr', 'cc', 'cn', 'org.uk', 'me', 'ovh', 'be',
'tv', 'tech', '..', 'life', 'com.mx', 'pl', 'uk', 'ru', 'cz', 'st', 'info', 'mobi', 'today', 'eu', 'fi', 'jp', 'life',
'1', '2', '3', '4', '5', '6', '7', '8', '9', '0', 'earth', 'ninja', 'ie', 'im', 'ai', 'at', 'ch', 'ly', 'market', 'click',
'fr', 'nl', 'se']
cdns = ['akamai', 'maxcdn', 'cloudflare']
#iterate through each row of the datafrme and split each domain at the dot
for row in df2.itertuples():
index = df3.domain.str.split('.').tolist()
cleandomain = []
#iterate through each of the split domains
for x in index:
#if it isn't a string, then print the value directly in the cleandomain list
if not isinstance(x, str):
cleandomain.append(str(x))
#if it's a string that encapsulates numbers, then it's an IP
elif str(x)[-1].isnumeric():
try:
cleandomain.append(str(x[0])+'.'+str(x[1])+'.*.*')
except IndexError:
cleandomain.append(str(x))
#if its in the CDN list, take a subdomain as well
elif len(x) > 3 and str(x[len(x)-2]).rstrip() in cdns:
try:
cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+str(x[len(x)-1]))
except IndexError:
cleandomain.append(str(x))
elif len(x) > 3 and str(x[len(x)-3]).rstrip() in cdns:
try:
cleandomain.append(str(x[len(x)-4])+'.'+str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
except IndexError:
cleandomain.append(str(x))
#if its in the TLD list, do this
elif len(x) > 2 and str(x[len(x)-2]).rstrip()+'.'+ str(x[len(x)-1]).rstrip() in tld:
try:
cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
except IndexError:
cleandomain.append(str(x))
elif len(x) > 2 and str(x[len(x)-1]) in tld:
try:
cleandomain.append(str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
except IndexError:
cleandomain.append(str(x))
#if its not in the TLD list, do this
else:
cleandomain.append(str(x))
#add the column to the dataframe
df3['newdomain2']=cleandomain
se = pd.Series(cleandomain)
df3['newdomain2'] = se.values
#select only the new domain column & usage
df4 = df3[['newdomain2', 'web.optimisedsize']]
#group by
df5 = df4.groupby(['newdomain2'])[['web.optimisedsize']].sum()
#sort
df6 = df5.sort_values(['web.optimisedsize'], ascending=["true"])
end = time.time()
print(df6)
print(end-start)
My input is this DF:
In [4]: df
Out[4]:
Domain Use
0 graph.facebook.com 4242
1 news.bbc.co.uk 23423
2 news.more.news.bbc.co.uk 234432
3 profile.username.co 235523
4 offers.o2.co.uk 235523
5 subdomain.pyspark.org 2325
6 uds.data.domain.net 23523
7 domain.akamai.net 23532
8 333.333.333.333 3432324
During processing, the index splits it into this:
[['graph', 'facebook', 'com'], ['news', 'bbc' .....
I then append the new domain to the original dataframe as a new column. This then gets grouped by + summed to create the final dataframe.
In [10]: df
Out[10]:
Domain Use newdomain
0 graph.facebook.com 4242 facebook.com
1 news.bbc.co.uk 23423 bbc.co.uk
2 news.more.news.bbc.co.uk 234432 bbc.co.uk
3 profile.username.co 235523 username.co
A: One of the problems is that you execute index = df3.domain.str.split('.').tolist() in every iteration. When I put this line outside of the loop, the calculation was twice as fast: 587 ms vs 1.1 s.
I also think that your code is wrong. You do not use the row variable and use index instead. And when you iterate over index, each element is always a list, so if not isinstance(x, str) is always True. (You can see it in the line_profiler output below.)
String operations are generally not vectorizable. Even the .str notation is in reality a python loop.
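Since the classification itself stays in Python either way, the practical win is computing the split once and classifying each domain in a single pass. Here is a minimal pure-Python sketch of that idea, with a trimmed-down TLD list and made-up sample domains:

```python
tld = {"co.uk", "com", "org", "net", "co"}          # trimmed-down TLD set

def normalize(parts):
    """Collapse one pre-split domain to hostname + TLD (simplified rules)."""
    if parts[-1].isdigit():                          # all-numeric tail -> IP
        return f"{parts[0]}.{parts[1]}.*.*"
    if len(parts) > 2 and ".".join(parts[-2:]) in tld:
        return ".".join(parts[-3:])                  # e.g. bbc.co.uk
    if len(parts) > 1 and parts[-1] in tld:
        return ".".join(parts[-2:])                  # e.g. facebook.com
    return ".".join(parts)

domains = ["graph.facebook.com", "news.more.news.bbc.co.uk", "333.333.333.333"]
index = [d.split(".") for d in domains]              # split once, outside any loop
cleandomain = [normalize(p) for p in index]          # single pass over the data
print(cleandomain)   # ['facebook.com', 'bbc.co.uk', '333.333.*.*']
```

This keeps all the conditional logic, but touches each domain once instead of once per DataFrame row.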
And here is the output of the line_profiler tool in a Jupyter notebook:
Initialization (f is a function wrapped around the code):
%load_ext line_profiler
%lprun -f f f(df2, df3)
Output:
Total time: 1.82219 s
File: <ipython-input-8-79f01a353d31>
Function: f at line 1
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1 def f(df2,df3):
2 1 8093.0 8093.0 0.2 index = df3.Domain.str.split('.').tolist()
3 #iterate through each row of the datafrme and split each domain at the dot
4 901 11775.0 13.1 0.2 for row in df2.itertuples():
5
6 900 26241.0 29.2 0.5 cleandomain = []
7 #iterate through each of the split domains
8 810900 971082.0 1.2 18.8 for x in index:
9 #if it isn't a string, then print the value directly in the cleandomain list
10 810000 1331253.0 1.6 25.8 if not isinstance(x, str):
11 810000 2819163.0 3.5 54.6 cleandomain.append(str(x))
12 #if it's a string that encapsulates numbers, then it's an IP
13 elif str(x)[-1].isnumeric():
14 try:
15 cleandomain.append(str(x[0])+'.'+str(x[1])+'.*.*')
16 except IndexError:
17 cleandomain.append(str(x))
18 #if its in the CDN list, take a subdomain as well
19 elif len(x) > 3 and str(x[len(x)-2]).rstrip() in cdns:
20 try:
21 cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+str(x[len(x)-1]))
22 except IndexError:
23 cleandomain.append(str(x))
24 elif len(x) > 3 and str(x[len(x)-3]).rstrip() in cdns:
25 try:
26 cleandomain.append(str(x[len(x)-4])+'.'+str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
27 except IndexError:
28 cleandomain.append(str(x))
29 #if its in the TLD list, do this
30 elif len(x) > 2 and str(x[len(x)-2]).rstrip()+'.'+ str(x[len(x)-1]).rstrip() in tld:
31 try:
32 cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
33 except IndexError:
34 cleandomain.append(str(x))
35 elif len(x) > 2 and str(x[len(x)-1]) in tld:
36 try:
37 cleandomain.append(str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
38 except IndexError:
39 cleandomain.append(str(x))
40 #if its not in the TLD list, do this
41 else:
42 cleandomain.append(str(x))
My code:
Data preparation:
from io import StringIO
import pandas as pd
#import file into dataframe
TESTDATA=StringIO("""Domain,Use
graph.facebook.com, 4242
news.bbc.co.uk, 23423
news.more.news.bbc.co.uk, 234432
profile.username.co, 235523
offers.o2.co.uk, 235523
subdomain.pyspark.org, 2325
uds.data.domain.net, 23523
domain.akamai.net, 23532
333.333.333.333,3432324
""")
df=pd.read_csv(TESTDATA)
df["Domain"] = df.Domain.str.strip()
df = pd.concat([df]*100)
df2 = df
#extract only 2 columns from dataframe
df3 = df2
#define tld and cdn lookup lists
tld = ['co.uk', 'com', 'org', 'gov.uk', 'co', 'net', 'news', 'it', 'in' 'es', 'tw', 'pe', 'io', 'ca', 'cat', 'com.au',
'com.ar', 'com.mt', 'com.co', 'ws', 'to', 'es', 'de', 'us', 'br', 'im', 'gr', 'cc', 'cn', 'org.uk', 'me', 'ovh', 'be',
'tv', 'tech', '..', 'life', 'com.mx', 'pl', 'uk', 'ru', 'cz', 'st', 'info', 'mobi', 'today', 'eu', 'fi', 'jp', 'life',
'1', '2', '3', '4', '5', '6', '7', '8', '9', '0', 'earth', 'ninja', 'ie', 'im', 'ai', 'at', 'ch', 'ly', 'market', 'click',
'fr', 'nl', 'se']
cdns = ['akamai', 'maxcdn', 'cloudflare']
Timing in jupyter notebook:
%%timeit
index = df3.Domain.str.split('.').tolist()
#iterate through each row of the datafrme and split each domain at the dot
for row in df2.itertuples():
cleandomain = []
#iterate through each of the split domains
for x in index:
#if it isn't a string, then print the value directly in the cleandomain list
if not isinstance(x, str):
cleandomain.append(str(x))
#if it's a string that encapsulates numbers, then it's an IP
elif str(x)[-1].isnumeric():
try:
cleandomain.append(str(x[0])+'.'+str(x[1])+'.*.*')
except IndexError:
cleandomain.append(str(x))
#if its in the CDN list, take a subdomain as well
elif len(x) > 3 and str(x[len(x)-2]).rstrip() in cdns:
try:
cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+str(x[len(x)-1]))
except IndexError:
cleandomain.append(str(x))
elif len(x) > 3 and str(x[len(x)-3]).rstrip() in cdns:
try:
cleandomain.append(str(x[len(x)-4])+'.'+str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
except IndexError:
cleandomain.append(str(x))
#if its in the TLD list, do this
elif len(x) > 2 and str(x[len(x)-2]).rstrip()+'.'+ str(x[len(x)-1]).rstrip() in tld:
try:
cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
except IndexError:
cleandomain.append(str(x))
elif len(x) > 2 and str(x[len(x)-1]) in tld:
try:
cleandomain.append(str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
except IndexError:
cleandomain.append(str(x))
#if its not in the TLD list, do this
else:
cleandomain.append(str(x))
| |
doc_5646
|
We use different library functions and the STL in C++ coding, and there is beautiful documentation of the STL, including complexities.
I want to know about the complexities of the different built-in generic collection methods (e.g. the complexity of java.util.Arrays.sort()) in Java. Is there any proper documentation about these complexities in Java, all together?
Thanks in advance.
A: Please read the official Oracle documentation carefully, for example this quote from (https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(byte[]) ) -
Implementation note: The sorting algorithm is a Dual-Pivot Quicksort by Vladimir Yaroslavskiy, Jon Bentley, and Joshua Bloch. This algorithm offers O(n log(n)) performance on many data sets that cause other quicksorts to degrade to quadratic performance, and is typically faster than traditional (one-pivot) Quicksort implementations.
As you can see, O(n log(n)) is specified.
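For example, the primitive overload that note documents can be exercised directly (a trivial sketch):

```java
import java.util.Arrays;

public class Main {
    public static void main(String[] args) {
        int[] data = {5, 3, 1, 4, 2};

        // Primitive overload: Dual-Pivot Quicksort, O(n log n) on typical inputs.
        Arrays.sort(data);

        System.out.println(Arrays.toString(data));  // [1, 2, 3, 4, 5]
    }
}
```

Note that the Object[] overloads are documented separately: they use a stable, adapted merge sort (TimSort), also O(n log n), because stability matters for reference types.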
| |
doc_5647
|
In adapter I call Music(values).playButton()
In a fragment I want to call Music().stop() to stop the sound, but this class requires parameters that are not in the fragment
How can I call this method in a fragment?
Class Music
class Music(var button: Button, val context: Context, val resources: Resources, val Id: Int, var buttonArray: MutableList<Button>, val mpArray: MutableList<MediaPlayer>) {
fun play() : MediaPlayer {
if (buttonArray.size >= 2) {
buttonArray.removeFirst()
}
if (mpArray.size >= 2) {
mpArray.removeFirst()
}
mpArray.add(MediaPlayer.create(context, Id))
buttonArray.add(button)
if (mpArray.size == 1 && !mpArray[0].isPlaying) {
mpArray[0].start()
return mpArray[0]
} else if (mpArray.size == 2 && buttonArray[0] == buttonArray[1]) {
if (mpArray[0].isPlaying) {
mpArray[0].pause()
mpArray[0].reset()
mpArray[0].release()
} else if (!mpArray[1].isPlaying) {
mpArray[1].start()
return mpArray[1]
}
}
if (mpArray.size == 2 && buttonArray[0] != buttonArray[1]) {
if (mpArray[0].isPlaying) {
mpArray[0].pause()
mpArray[0].reset()
mpArray[0].release()
mpArray[1].start()
} else if (!mpArray[0].isPlaying) {
mpArray[1].start()
return mpArray[1]
}
}
return MediaPlayer.create(context, Id)
}
fun stop() {
if (play().isPlaying) {
play().stop()
play().release()
}
    }
}
Adapter
class Adapters(private val cards: List<Card>, val resources: Resources, val context: Context, private val musicPlayerListener: MusicPlayerListener) : RecyclerView.Adapter<RecyclerView.ViewHolder>() {
private var buttonArray = mutableListOf<Button>()
private var mpArray = mutableListOf<MediaPlayer>()
private lateinit var music: Music
fun playMusic() {
musicPlayerListener.onMusicPlay(music.play())
}
override fun getItemViewType(position: Int): Int = when(cards[position]) {
is Card.AudioButton -> 11
else -> throw IllegalArgumentException("Error")
}
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RecyclerView.ViewHolder {
return object : RecyclerView.ViewHolder(
when(viewType) {
11 -> LayoutInflater.from(parent.context).inflate(R.layout.in_lesson_audio_button, parent, false)
else -> throw IllegalArgumentException("Error")
}) {
}
}
override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
when(val card = cards[position]) {
is Card.AudioButton -> {
val button = holder.itemView.findViewById<Button>(R.id.in_lesson_audio_button)
button.setOnClickListener {
Music(button, context, resources, getAudioId(card.audioButton), buttonArray, mpArray).play()
}
}
}
}
override fun getItemCount() = cards.size
private fun getAudioId(audioElement: String): Int = resources.getIdentifier(audioElement, "raw", context.packageName)
}
Fragment
class Fragment : Fragment(), MusicPlayerListener {
private val args: DetailsFragmentArgs by navArgs()
var mMediaPlayer: MediaPlayer? = null
override fun onCreateView(
inflater: LayoutInflater, container: ViewGroup?,
savedInstanceState: Bundle?,
): View? {
val binding = inflater.inflate(R.layout.fragment, container, false)
val recyclerView = binding.findViewById<RecyclerView>(R.id.list_view)
recyclerView.layoutManager = LinearLayoutManager(context)
recyclerView.adapter = Adapters(parseLesson(resources,R.xml.text), resources, requireContext() )
return binding
}
override fun onPause() {
super.onPause()
mMediaPlayer?.stop()
}
override fun onMusicPlay(mediaPlayer: MediaPlayer) {
mMediaPlayer = mediaPlayer
}
}
A: From the adapter when you call Music.play(), it will return an instance of MediaPlayer, you should send this back to fragment and keep it in fragment as a local variable.
Then from fragment, use that instance of MediaPlayer to call stop() on it.
class ExampleAdapter(private val musicPlayerListener: MusicPlayerListener) {
private lateinit var music: Music
fun playMusic() {
musicPlayerListener.onMusicPlay(music.play())
}
}
class ExampleFragment : MusicPlayerListener {
var mMediaPlayer: MediaPlayer? = null
override fun onMusicPlay(mediaPlayer: MediaPlayer) {
mMediaPlayer = mediaPlayer
}
override fun onPause() {
super.onPause()
mMediaPlayer?.stop()
}
}
interface MusicPlayerListener {
fun onMusicPlay(mediaPlayer: MediaPlayer)
}
| |
doc_5648
|
Can anyone help out? Also:
sudo service mysql start
Job for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
| |
doc_5649
|
If I use the following Draw method (without any rotation and origin specified) the the object is drawn at the correct/expected place:
spriteBatch.Draw(myTexture, destinationRectangle, null, Color.White, 0.0f, Vector2.Zero, SpriteEffects.None, 0);
However, if I use the origin and rotation as shown below, the object rotates around its center, but it floats above the expected place (by around 20 pixels):
Vector2 origin = new Vector2(myTexture.Width / 2 , myTexture.Height / 2 );
spriteBatch.Draw(myTexture, destinationRectangle, null, Color.White, ballRotation, origin, SpriteEffects.None, 0);
Even if I set the ballRotation to 0 the object is still drawn above the expected place
spriteBatch.Draw(myTexture, destinationRectangle, null, Color.White, 0.0f, origin, SpriteEffects.None, 0);
It seems that just by setting the origin, the placement of the object changes.
Can somebody tell me how to use the origin parameter correctly?
Solution:
Davor's response made the usage of origin clear.
The following change was required in the code to make it work:
Vector2 origin = new Vector2(myTexture.Width / 2 , myTexture.Height / 2 );
destinationRectangle.X += destinationRectangle.Width/2;
destinationRectangle.Y += destinationRectangle.Height / 2;
spriteBatch.Draw(myTexture, destinationRectangle, null, Color.White, ballRotation, origin, SpriteEffects.None, 0);
A: This is the correct use of origin, but your position now refers to the sprite's center rather than its top-left corner, so the sprite is drawn offset by width/2 and height/2 from the position it had before you set the origin.
So if your texture is 20x20, you need to offset X by 10 (width/2) and Y by 10 (height/2), as the solution above does by adding them to the destination rectangle, and you will have the original position.
| |
doc_5650
|
DECLARE @StartDate DATETIME, @EndDate DATETIME
SET @StartDate = DATEADD(mm, DATEDIFF(mm, 0, getdate()) - 1, 0)
SET @EndDate = DATEADD(mm, 1, @StartDate)
SELECT
dbo.General_Ledger_Detail.Accounting_ID,
dbo.General_Ledger_Detail.Cost_Centre,
dbo.General_Ledger_Detail.Product_ID,
dbo.General_Ledger_Detail.Accounted_Amount AS Amount,
dbo.General_Ledger_Detail.Account_Name,
dbo.General_Ledger_Detail.Accounting_Date,
dbo.Account_Codes_Sales_OPEX$.[Opex Type],
dbo.LogSolOpexCC.Logistic_Solutions_Type
FROM
dbo.General_Ledger_Detail
INNER JOIN dbo.Account_Codes_Sales_OPEX$
ON dbo.General_Ledger_Detail.Accounting_ID =
dbo.Account_Codes_Sales_OPEX$.[Account Code]
INNER JOIN dbo.LogSolOpexCC
ON dbo.General_Ledger_Detail.Cost_Centre = dbo.LogSolOpexCC.Cost_Centre
GROUP BY dbo.General_Ledger_Detail.Accounting_ID,
dbo.General_Ledger_Detail.Cost_Centre,
dbo.General_Ledger_Detail.Product_ID,
dbo.General_Ledger_Detail.Accounted_Amount,
dbo.General_Ledger_Detail.Account_Name,
dbo.General_Ledger_Detail.Accounting_Date,
dbo.Account_Codes_Sales_OPEX$.[Opex Type],
dbo.LogSolOpexCC.Logistic_Solutions_Type
HAVING (dbo.General_Ledger_Detail.Accounting_Date BETWEEN @startdate AND @enddate)
A: You cannot pass parameters to SQL Server views. https://www.mssqltips.com/sqlservertip/5147/limitations-when-working-with-sql-server-views/
A: SELECT dbo.General_Ledger_Detail.Accounting_ID, dbo.General_Ledger_Detail.Cost_Centre, dbo.General_Ledger_Detail.Product_ID,
dbo.General_Ledger_Detail.Accounted_Amount AS Amount, dbo.General_Ledger_Detail.Account_Name, dbo.General_Ledger_Detail.Accounting_Date,
dbo.Account_Codes_Sales_OPEX$.[Opex Type], dbo.LogSolOpexCC.Logistic_Solutions_Type
FROM dbo.General_Ledger_Detail INNER JOIN
dbo.Account_Codes_Sales_OPEX$ ON dbo.General_Ledger_Detail.Accounting_ID = dbo.Account_Codes_Sales_OPEX$.[Account Code] INNER JOIN
dbo.LogSolOpexCC ON dbo.General_Ledger_Detail.Cost_Centre = dbo.LogSolOpexCC.Cost_Centre
GROUP BY dbo.General_Ledger_Detail.Accounting_ID, dbo.General_Ledger_Detail.Cost_Centre, dbo.General_Ledger_Detail.Product_ID,
dbo.General_Ledger_Detail.Accounted_Amount, dbo.General_Ledger_Detail.Account_Name, dbo.General_Ledger_Detail.Accounting_Date,
dbo.Account_Codes_Sales_OPEX$.[Opex Type], dbo.LogSolOpexCC.Logistic_Solutions_Type
HAVING (dbo.General_Ledger_Detail.Accounting_Date BETWEEN DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0) AND DATEADD(mm, 1, GETDATE()))
| |
doc_5651
|
org.hibernate.QueryException: could not resolve property: line_type
In my Oracle Database the field is named "LINE_TYPE" and is VARCHAR2(20 BYTE).
Here is the code from my entity class (which I reverse engineered from my Oracle database):
private String lineType;
@Column(name="LINE_TYPE")
@Size(max = 20, message = "Line Type has a max size of 20 characters.")
public String getLineType() {
return this.lineType;
}
public void setLineType(String lineType) {
this.lineType = lineType;
}
Also in my tableCriteria I have the getters and setters:
public String getLineType() {
return lineType;
}
public void setLineType(String lineType) {
this.lineType = lineType;
}
The last time I had this error it was through a spelling mistake or case sensitivity but I have double and triple checked and that is not the case here.
I have debugged the entity that the REST service is receiving in NetBeans and I can see that it is receiving the data. So why can't it be resolved?
Anyone see anything I don't?
| |
doc_5652
|
Is there a way to autorun a macro that clicks a button on one of my sheets?
A: Solved. Was calling a script from another sheet.
| |
doc_5653
|
def find_average(numbers):
c = sum(numbers)
for number in numbers:
d = c / number
return d
pass
Could someone explain why this doesn't work?
A: The for loop isn't necessary, since you're already using sum(). Once you've summed all the elements, the only thing you need to do is divide the sum by the number of entries in numbers:
def find_average(numbers):
if not numbers:
return 0
return sum(numbers) / len(numbers)
A: def find_average(numbers):
c = sum(numbers)
if not numbers: # to avoid the division by zero
return 0
return c / len(numbers)
print(find_average([1, 2, 3]))
When you return, it stops the function, so you were only dividing the sum by the first element. To get the number of elements in a list you can use the built-in len function.
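To make the failure mode concrete, here is a small runnable sketch contrasting the early-return behavior with the corrected version (function names chosen for illustration):

```python
def find_average_buggy(numbers):
    c = sum(numbers)
    for number in numbers:
        d = c / number
        return d  # returns on the first iteration: sum / first element


def find_average(numbers):
    if not numbers:  # avoid division by zero on an empty list
        return 0
    return sum(numbers) / len(numbers)


print(find_average_buggy([1, 2, 3]))  # 6.0, not the average
print(find_average([1, 2, 3]))        # 2.0
```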
| |
doc_5654
|
Each player has an integer that represent the number of life points he has left
like so
class Player
{
public int Life;
....
}
I need to alert the Game class every time a player's life is changed. The Game class looks like this:
class Game
{
public void OnPlayerLifeChange();
....
}
I want to activate the function OnPlayerLifeChange whenever the Life member is changed. How can I do it?
Again, note that I can't change the Player class and make Life a property with an event.
A: The person that wrote the Player class needs to add a public method to check for any updates to the Life variable. Then you just need to make use of that method.
If the person did not write such a method, then you cannot access it (that's why encapsulation is important, otherwise anyone can access something that isn't meant to be accessed)
A: One common way to do this is to implement the INotifyPropertyChanged interface in the Player class, change Life from a field to a property and raise the PropertyChanged event from the setter.
class Player : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private int _life;
    public int Life
    {
        get { return _life; }
        set { _life = value; OnPropertyChanged("Life"); }
    }

    protected void OnPropertyChanged(string name)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }
    ....
}
Game can then subscribe to the PropertyChanged events of all players and react accordingly.
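The notify-on-set idea is not C#-specific; here is a minimal, hypothetical Python sketch (all names invented) where the setter invokes a callback supplied by the game:

```python
class Player:
    def __init__(self, on_life_change):
        self._life = 100
        self._on_life_change = on_life_change  # callback supplied by the Game

    @property
    def life(self):
        return self._life

    @life.setter
    def life(self, value):
        self._life = value
        self._on_life_change(self)  # notify on every change, as the C# setter does


changes = []
player = Player(on_life_change=lambda p: changes.append(p.life))
player.life = 90
player.life = 75
print(changes)  # [90, 75]
```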
A: Without access to the Player class, one option would be to just check the values of the Life variable for all the players on a set interval.
You would need to keep local variables within your Game class to keep track of what the Life variables were previously set to, but whenever you noticed that one of the Life values had changed you could execute whatever code you needed to within the Game class, which would give you basically the same behavior as an event handler (although probably not as efficient).
using System.Collections.Generic;
using System.Timers;

class Game {
    List<Player> playerList;
    List<int> lifeValues;
    Timer lifeCheckTimer;

    Game() {
        playerList = new List<Player>();
        //add all players that have been instantiated to the above list here
        lifeValues = new List<int>();
        //add all the player.Life values to the above list here
        //these will need to be added in the same order
        lifeCheckTimer = new Timer();
        lifeCheckTimer.Elapsed += new ElapsedEventHandler(lifeCheckElapsed);
        //you can change the 500 (0.5 seconds) below to whatever interval you want to
        //check for a change in players' life values (in milliseconds)
        lifeCheckTimer.Interval = 500;
        lifeCheckTimer.Enabled = true;
    }

    //instance method (not static) so it can read playerList and lifeValues
    private void lifeCheckElapsed(object source, ElapsedEventArgs e)
    {
        for (int i = 0; i < playerList.Count; i++) {
            if (playerList[i].Life != lifeValues[i])
                OnPlayerLifeChange();
            lifeValues[i] = playerList[i].Life;
        }
    }
}
| |
doc_5655
|
A: There are a variety of sources. The US Postal Service is the authority on this. Unfortunately you have to buy the database. You can get a FREE list of zips in a city from USPS through an online interactive form, one city at a time!
http://mobile.usps.com/iphone/iphoneFindZip.aspx
There are other FREE services. These are based on the US Census 2010 ZCTAs. These approximate zip code boundaries for census purposes, but are not a 100% match. Maxmind does not provide a free database anymore. Geonames does. You would have to read up on how complete and accurate it is.
http://download.geonames.org/export/zip/
I've worked in the past with these things.
[ADDED] To anybody interested, I made a (FREE) CSV file of approximately all USPS postal codes with their corresponding cities and lat/lng. As many would understand, the USPS dataset is copyrighted and the free ZCTA data from the US Census is an approximation.
I decided to use the following method to reverse-build nearly the entire set. I got the US Department of Education's NCES dataset on all schools in the United States (103K schools), and figured that 99% of postal codes would have at least one school. I used the address information to build a unique set of postal codes to cities. Statistics-wise, there are approximately 41.5K unique postal codes. My list has 43K - assuming some postal codes have multiple entries due to being in more than one city. Also note, the lat/lng is NOT an area centroid. It is simply the lat/lng of some school within the postal code. Have fun with my free dataset!
Format: State, City, ZipCode, Latitude, Longitude
http://www.opengeocode.org/download/cityzip.zip
A: i work here boundaries-io.com
I think this is what you need it uses US Census as repository: US Zipcode Boundaries API:
https://boundaries-io.com
Above API shows US Boundaries(GeoJson) by zipcode,city, and state.
You should use the API programmatically to handle large results.
for example:
A: The concept is called reverse geocoding.
Here you can get the full address, along with the postal code:
places.getDetails( request_details, function(results_details, status){
// Check if the Service is OK
if (status == google.maps.places.PlacesServiceStatus.OK) {
places_postal = results_details.address_components
places_phone = results_details.formatted_phone_number
places_phone_int = results_details.international_phone_number
places_format_address = results_details.formatted_address
places_google_url = results_details.url
places_website = results_details.website
places_rating = results_details.rating
for (var i = 0; i < places_postal.length; i++ ) {
if (places_postal[i].types == "postal_code"){
console.log(places_postal[i].long_name)
}
}
}
});
Example directly from Google
and for retrieving Zip-code
var address = results[0].address_components;
var zipcode = address[address.length - 1].long_name;
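For readers doing this server-side, a hedged sketch of the same postal-code lookup in Python, assuming address components shaped like Google's (a list of objects with types and long_name; the sample data is invented):

```python
# Hypothetical shape of Google-style address components.
address_components = [
    {"types": ["street_number"], "long_name": "1600"},
    {"types": ["route"], "long_name": "Amphitheatre Pkwy"},
    {"types": ["postal_code"], "long_name": "94043"},
]


def extract_postal_code(components):
    # Return the long_name of the first component tagged "postal_code".
    for comp in components:
        if "postal_code" in comp["types"]:
            return comp["long_name"]
    return None


print(extract_postal_code(address_components))  # 94043
```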
| |
doc_5656
|
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEArL7/qAft6XBmEP9JkadhsPYydS7V+wsOLQPpJbtRmvs7rDUG
5hagEjhMKolSksAL8Gh4ZR84iFnATv81xLzoKBbWtHfMmgOohrXJy3Xw1kRrJemh
ZLmoJDbFVyDiCXXIDpfLDxm/9jBFn+hUuESFXMIBpbjhFJ0t12HFqXwFmKVfkNbq
JhwKuq3AEylN8dWn3oQNz4rq2ZCZiqjNBo0X4hny0GlBGvPHADS9Fe8DX/yN8Ggj
IM7MvQeVi3uiZ0u1qhiK7wcaPoTEjXJH4NhbmzZjGRQ/2VznbeXCYdzWzfAHDWjS
ADT6895geYYHTAJi1v7qtBRP2sihpdBhIpihgwIDAQABAoIBADOWxtrzo0V338Nr
uhjZl/81R1RfrF/QqWcgJ9yw2GokZWnEXE8SqrNGRNjfMd3JpMcjK/FnJYby5s+w
v+oFUH/Ick5rCJtmREoWuDEfA9G5lRY5c42VNHW8NasTku2oUxqokmfsFLv9Jo3e
4I43SGyvM7a+Q9nYJvyPomw/MZyoKKUJr7Poa1lYAqFigIWCbU2C0c9sHhsVNQJZ
+t69y9DiNTX7VDRhj8UQ07H0qs8nG06bFjt411Z/jdsKvh59ucLGARHYS0t4OGcr
CkIRUYI1xPF2UPCnCB7EJoeUbJPxtGt9Qb1yrV8U6K2WtezO/Suld7u2u/lX/aey
urkUwAECgYEA47oeArcPVttOJjFRL+YX6g7ixfRblh5SaPxVB19gqN404KrWqPWD
JpdageERr/TprtSXw75B5YZzdE1HjS811RN0gwS7d47uYu41XB/glH0E1u23w5CN
ldVoUKRG5JrK/ebXzUaXTPSelPjDXuucGoNN5X2K7vWBf31qzwKGGAECgYEAwjFp
/w+4vURb1Zzsp2/lDI2Md2Kq49YKIWOYZkPPUtUK0Xoqm0oKZF1Vl04T8ANnSIKk
n1aiNnxwmaYaOMfB2UHVbDbE2F5yLYUIelzVaMqzanxPN5oq0NBCW8UUxE+GPan4
syz5rEzBz/hENR9oFnvxbxewJsR5UjD43wmiWYMCgYEAu/xj0bH0A6s9s+F6N6Ql
kZ2ALhEtmZqmROwn9NITJNNpqxzb3tXs0eqXWCfHRg1S6nOsZHWmSCbZH+S7cBzM
v3wz7gP2DRf8ScaCXe4iofEiEZpi3Bl0B4AHgKpbq1LsxvPMqTPgqjI0xp0kCjNM
xcYmg49DJUedAvUxOnnG4AECgYEAiPXs+jWOZ/6kfn5U8qqac0YKAdGXEWXOc0oZ
HFdLC/Kx1JhDII8R0UN6sGIi8a6U07FAhhjGA4O0rslVySIp+B7UdaQTJT9HbA9d
sV90LJp5++p8vIyBEhEwHCVdxi8IUMlmXIil9v2T3CgPgyAJe4Ii/+VHGbCMmIlt
nXDgDh0CgYEAlmko3ujnfoVwV3f92JgetIZx5IMe5rylJ5FjxyQ3dD3UEaEQgxoz
81j23ZSFVe4Mg8PzyxFgLgEZN7TVj/sjELqpRlRhZUu91io9FjGuU6XZNgfjRAhH
RbgFj9mnr8TV9kETuxXpoGaMD/7MVvetg8Qr1nxpi7m29Ao5L5R5h7g=
-----END RSA PRIVATE KEY-----
I can generate it's SHA256 fingerprint like so using openssl:
$ openssl rsa -in rsa_private_test.pem -pubout -outform DER | openssl sha256 -binary | openssl base64
writing RSA key
8T0BsSCXlbxqFGekWsIuGhj6/ca/6VpLjDqzT4X3TBQ=
How can I do the same in Ruby? My assumption is that both approaches, i.e. OpenSSL in Ruby and the openssl CLI, should yield the same result.
A: Sorry, I don't have the permission to comment directly on your answer.
I faced the same problem, and I also tried your answer but couldn't get the expected result. May I know how you got the rsa_public? Thank you.
Here is my ruby code:
pkey = OpenSSL::PKey::RSA.new(File.read('rsa_private_test.pem'))
sha256 = OpenSSL::Digest::SHA256.new
digest = sha256.digest(pkey.public_key.to_der)
puts Base64.encode64(digest)
A: You could achieve it by OpenSSL::Digest:
require 'openssl'
pem = OpenSSL::PKey::RSA.new(2048)
fingerprint = OpenSSL::Digest::SHA256.new(pem.to_der).to_s
# => "9a7c94fd90bf88d0b52aa48739b552bc6afc39e4a7d6949aa0ad1c110852906d"
A: So, from this other post and @Topaco's comment here I figured out what I was doing wrong.
Part of the problem is that .hexdigest always returns text, while the second stage of the openssl pipeline needs binary input. And you should take the DER of the public key, since the first part of the openssl command outputs the public key.
You can see it's working now ...
pry(main)> sha256 = OpenSSL::Digest::SHA256.new
=> #<OpenSSL::Digest::SHA256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855>
pry(main)> digest = sha256.digest(rsa_public.to_der)
=> "\xF1=\x01\xB1 \x97\x95\xBCj\x14g\xA4Z\xC2.\x1A\x18\xFA\xFD\xC6\xBF\xE9ZK\x8C:\xB3O\x85\xF7L\x14"
pry(main)> Base64.encode64(digest)
=> "8T0BsSCXlbxqFGekWsIuGhj6/ca/6VpLjDqzT4X3TBQ=\n"
As opposed to what I was getting before ...
pry(main)> digest = OpenSSL::Digest::SHA256.hexdigest(rsa_public.to_der).to_s
=> "f13d01b1209795bc6a1467a45ac22e1a18fafdc6bfe95a4b8c3ab34f85f74c14"
pry(main)> Base64.encode64(digest)
=> "ZjEzZDAxYjEyMDk3OTViYzZhMTQ2N2E0NWFjMjJlMWExOGZhZmRjNmJmZTk1\nYTRiOGMzYWIzNGY4NWY3NGMxNA==\n"
| |
doc_5657
|
import tweepy
CONSUMER_KEY = 'private'
CONSUMER_SECRET = 'private'
ACCESS_KEY = 'private'
ACCESS_SECRET = 'private'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
dms = api.list_direct_messages(5)
print(len(dms))
for i in range(len(dms)):
api.destroy_direct_message(dms[i]._json['id'])
When I run it, print(len(dms)) prints 0, showing that it didn't get the list at all.
A: The count parameter for the GET direct_messages/events/list Twitter API endpoint that API.list_direct_messages uses is the "max number of events to be returned".
It is not guaranteed to return that many.
You should use Cursor instead, to iterate over 5 direct messages.
Also, you can access the id directly as an attribute of DirectMessage objects.
| |
doc_5658
|
Periodically, I have to refresh it 5 or 6 times to get the map to display properly. Otherwise, I get one marker in the upper left corner and the map is greyed out. I don't even get an error message so I'm not sure what to troubleshoot.
I suspect it may have something to do with it triggering the onMapReady(event) as the map is being generated, but I don't know how to set up some type of timed listener in Flex. (I have Googled this but have only found instances in JavaScript or jQuery to do so.)
Can someone give me a tip on how to check for the maps being in an idle state using ActionScript/Flex? I think if I can get that part, I can remove the mapevent_mapready="onMapReady(event)" and just make the onMapReady(event) function run when the map is idle...or something like that. I've seen it for JavaScript but of course that won't directly work in AS and I don't know enough of either to make a proper translation.
Here's my code:
public function onMapReady(event:MapEvent):void {
var dojoGeo:Array = geoSchoolInfoAry;
var md:MarkerData = new MarkerData(dojoGeo[0],dojoGeo[1]);
var latlng:LatLng = new LatLng(md.lat,md.lng);
var markerOptions:MarkerOptions = new MarkerOptions();
markerOptions.icon = new dojoIcon();
var dojoMarker:Marker = new Marker(latlng,markerOptions);
map.addOverlay(dojoMarker);
var markerOptions2:MarkerOptions = new MarkerOptions();
markerOptions2.icon = new studentIcon();
var studentMarker:Marker = new Marker(latlng,markerOptions2);
map.addOverlay(studentMarker);
map.setCenter(new LatLng(dojoGeo[0],dojoGeo[1]), 11, MapType.NORMAL_MAP_TYPE);
map.addControl(new ZoomControl());
map.addControl(new MapTypeControl());
map.addControl(new ScaleControl());
addSchoolMarker(md,dojoMarker);
addStudentsToMap();
}
And
<mx:HBox width="100%" height="100%">
<maps:Map xmlns:maps="com.google.maps.*" id="map" key="map key here" mapevent_mapready="onMapReady(event)" width="90%" height="90%" sensor="false" />
</mx:HBox>
Thanks for any help!
UPDATE: Using Firebug Developer tools, I discovered 2 errors. A CrossDomain.xml - Aborted and 404 Not Found error. Not exactly sure how to proceed with this since I don't know if these are just symptoms of whatever is causing the problem in the first place.
A: Well, I have to thank my co-worker for finding a solution.
The problem was a timing issue, with the API being called at the same time as the functions were running with the data to populate the map. So he suggested that the onMapReady logic shouldn't run until the other 2 functions were completed.
Hope it can help someone who runs into a similar issue.
Here's the code:
public function getGeoInfo_Handler(results):void
{
try
{
geoSchoolInfoAry = (results.getSchoolInfo.split(","));
geoInfoAC = new ArrayCollection(results.getInfo.split(";"));
remoteCallComplete=true;
loadMap();
}
catch (error:Error)
{
FlexException.errorHandler(error, "StudentPopMapModuleCode:getGeoInfoSchool_Handler");
}
}
public function onMapReady(event:MapEvent = null):void
{
mapReady=true;
loadMap();
}
private function loadMap():void
{
if(mapReady && remoteCallComplete)
{
var dojoGeo:Array = geoSchoolInfoAry;
var md:MarkerData = new MarkerData(dojoGeo[0], dojoGeo[1]);
var latlng:LatLng = new LatLng(md.lat, md.lng);
map.setCenter(new LatLng(dojoGeo[0], dojoGeo[1]), 11, MapType.NORMAL_MAP_TYPE);
map.addControl(new ZoomControl());
map.addControl(new MapTypeControl());
map.addControl(new ScaleControl());
var markerOptions:MarkerOptions = new MarkerOptions();
markerOptions.icon = new dojoIcon();
var dojoMarker:Marker = new Marker(latlng, markerOptions);
map.addOverlay(dojoMarker);
var markerOptions2:MarkerOptions = new MarkerOptions();
markerOptions2.icon = new studentIcon();
var studentMarker:Marker = new Marker(latlng, markerOptions2);
map.addOverlay(studentMarker);
addSchoolMarker(md, dojoMarker);
addStudentsToMap();
}
}
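The underlying fix is a simple rendezvous on two flags: the load runs only once both asynchronous callbacks have fired, in either order. A minimal Python sketch of the same pattern (all names invented):

```python
class MapLoader:
    """Run the load only after both async callbacks have fired,
    mirroring the mapReady/remoteCallComplete flags above."""

    def __init__(self):
        self.map_ready = False
        self.data_ready = False
        self.loaded = 0

    def on_map_ready(self):
        self.map_ready = True
        self._load_map()

    def on_data_received(self):
        self.data_ready = True
        self._load_map()

    def _load_map(self):
        if self.map_ready and self.data_ready:
            self.loaded += 1  # place markers, center the map, etc.


loader = MapLoader()
loader.on_data_received()  # arrives first: nothing happens yet
loader.on_map_ready()      # second callback triggers the actual load
print(loader.loaded)  # 1
```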
| |
doc_5659
|
...unsafe to get as much throughput as can be achieved.
The issue is that when calling the function and iterating over the result,
it shows different characters, but within the scope of GetSomeTs() it's fine.
Just before the return I test one of the elements and it prints the correct value.
this is the testing struct.
public unsafe struct T1
{
public char* block = stackalloc char[5]; // <-- will not compile, so the work is done in a local variable inside a method
}
public unsafe struct T1
{
public char* block;
}
static unsafe T1[] GetSomeTs(int ArrSz)
{
char[] SomeValChars = { 'a', 'b', 'c', 'd', 'e' };
T1[] RtT1Arr = new T1[ArrSz];
for (int i = 0; i < RtT1Arr.Length; i++)
{
char* tmpCap = stackalloc char[5];
for (int l = 0; l < 5; l++)
{
SomeValChars[4] = i.ToString()[0];
tmpCap[l] = SomeValChars[l];
}
RtT1Arr[i].block = tmpCap;//try 1
//arr[i].block = &tmpCap[0];//try 2
}
// here its fine
Console.WriteLine("{0}", new string(RtT1Arr[1].block));
return RtT1Arr;
}
but using it anywhere else prints garbage.
void Main()
{
T1[] tstT1 = GetSomeTs(10);
for (int i = 0; i < 10; i++)
{
Console.WriteLine("{0}", new string(tstT1[i].block));//,0,5, Encoding.Default));
}
}
A: When you allocate memory with stackalloc that memory only exists until the function returns in which you have allocated it. You are returning a pointer to memory that is no longer allowed to be accessed.
Hard to recommend a fix because it's unclear what you want to achieve. Probably, you should just use a managed char[].
Encoding.Default.GetBytes is pretty slow so that's likely to be your hotspot anyway and the rest is less important. i.ToString() also is quite slow and produces garbage. If you are after perf then stop creating unneeded objects all the time such as SomeValChars. Create it once and reuse.
| |
doc_5660
|
Demonstration code is:
import java.awt.BorderLayout;
import java.awt.Frame;
import java.awt.Menu;
import java.awt.MenuBar;
import java.awt.MenuItem;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
public class Main extends Frame {
public Main() {
this.addWindowListener(new WindowAdapter() {
@Override
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
Menu mn = new Menu("File");
MenuItem mi = new MenuItem("It is very very long text");
mn.add(mi);
mi = new MenuItem("long");
mn.add(mi);
MenuBar mb = new MenuBar();
mb.add(mn);
this.setLayout(new BorderLayout());
this.setMenuBar(mb);
this.setSize(200,200);
this.setVisible(true);
}
public static void main(String[] args) {
new Main();
}
}
| |
doc_5661
|
[screenshot: the list]
How do I filter the list such that when I type “Reject”, it will only show items with Status “Reject”?
Here is the code I tried (from the documentation https://developer.microsoft.com/en-us/fabric#/components/detailslist):
private _onChangeText = (text: any) => {
this.setState({ items: text ? this.state.items.filter(i =>
i.Status.indexOf(text) > -1) : this.state.items });
}
<TextField
label="Filter by name:"
onChanged={this._onChangeText}
/>
Thanks!
A: Here's a Codepen where I'm filtering a collection of items to see if the text is present in any of the item's values (case-insensitive). It is similar to the documentation example you linked in your original question. I hope that helps!
let COLUMNS = [
{
key: "name",
name: "Name",
fieldName: "Name",
minWidth: 20,
maxWidth: 300,
},
{
key: "status",
name: "Status",
fieldName: 'Status',
minWidth: 20,
maxWidth: 300
}
];
const ITEMS = [
{
Name: 'xyz',
Status: 'Approve'
},
{
Name: 'abc',
Status: 'Approve'
},
{
Name: 'mno',
Status: 'Reject'
},
{
Name: 'pqr',
Status: 'Reject'
}
]
const includesText = (i, text): boolean => {
return Object.values(i).some((txt) => txt.toLowerCase().indexOf(text.toLowerCase()) > -1);
}
const filter = (text: string): any[] => {
return ITEMS.filter(i => includesText(i, text)) || ITEMS;
}
class Content extends React.Component {
constructor(props: any) {
super(props);
this.state = {
items: ITEMS
}
}
private _onChange(ev: React.FormEvent<HTMLInputElement | HTMLTextAreaElement>, text: string) {
let items = filter(text);
this.setState({ items: items });
}
public render() {
const { items } = this.state;
return (
<Fabric.Fabric>
<Fabric.TextField label="Filter" onChange={this._onChange.bind(this)} />
<Fabric.DetailsList
items={ items }
columns={ COLUMNS }
/>
</Fabric.Fabric>
);
}
}
ReactDOM.render(
<Content />,
document.getElementById('content')
);
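The filtering predicate itself is language-agnostic: keep an item if any of its field values contains the search text, case-insensitively, and fall back to the full list when the text is empty. A minimal Python sketch of that logic (data invented):

```python
items = [
    {"Name": "xyz", "Status": "Approve"},
    {"Name": "mno", "Status": "Reject"},
    {"Name": "pqr", "Status": "Reject"},
]


def includes_text(item, text):
    # True if any field value contains the text, ignoring case.
    return any(text.lower() in str(v).lower() for v in item.values())


def filter_items(items, text):
    # Empty search text means "show everything".
    return [i for i in items if includes_text(i, text)] if text else items


print(filter_items(items, "reject"))  # the two Reject rows
```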
| |
doc_5662
|
#include<stdio.h>
void main(){
int n,i;
int table[10];
for(i=1;i<=10;i++){
scanf("%d",table[i]);
}
for(i=1;i<=n;i++){
printf("\n%d",table[i]);
}
getchar();
getchar();
return ;
}
A: When you declare an array of size N, the elements are indexed from 0 to N - 1. From there, you need to pass the address of your variable to scanf, not the variable itself. Since you are using an array, this becomes very simple.
for (int i = 0; i < 10; ++i)
scanf("%d", table + i);
A: You should use
scanf("%d", &table[i]);
Additionally the loop should start with 0, because indexes in C start from 0..N-1.
for(i = 0; i < 10; i++)
scanf("%d", &table[i]);
A: In C, array indexing starts from 0.
Change for loop to
for(i = 0; i < 10; i++) {...}
and
scanf("%d",table[i]);
to
scanf("%d", &table[i]);
(note the & operator before the variable table[i])
| |
doc_5663
|
catch (Exception ex)
{
lblMessage.Text = ex.Message;
}
Is there any way I can get this 547 code in my C# code? Something like
ex.something (which gives me the error message's code).
A: Try This.
try
{
}
catch(SqlException ex)
{
lblMessage.Text = ex.Message;
}
A: Multiple catches can be used:
try
{
}
catch(SqlException sqlex)
{
if(sqlex.Number ==547)
{
//code
}
}
catch(Exception ex)
{
lblMessage.Text = ex.Message;
}
A: You can try using Elmah library (Error Logging Modules And Handlers)
Here is a step by step tutorial on how to use it: http://www.asp.net/web-forms/tutorials/deployment/deploying-web-site-projects/logging-error-details-with-elmah-cs
A: For more details http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlexception.aspx
try
{
...
...
}
catch(SqlException ex)
{
lblMessage.Text = ex.Message;
}
| |
doc_5664
|
Whenever someone starts typing the timer starts. When the person stops typing for a restroom break, phone call, etc. the timer pauses and continues where it left off when the person starts to type again. Once the person is done typing they hit the submit button and the time gets recorded.
Someone had suggested a JS OnFocus but that won't stop the timer when someone leaves their desk for a minute or two. Someone else also suggested a time out limit but what happens if it times out and the person returns to finish their typing.
Any suggestions on the coding or where to look would be appreciated.
JJ
A: I think your title is a little misleading although your post is tagged correctly. None of this (unless you use AJAX) will be in PHP. Your best be would probably to use timeouts with keyup or keydown events -- restart the timeout timer every time the event is fired. If the time runs out, you stop the global timer that keeps track of everything. I'm not sure I fully understand
Someone else also suggested a time out limit but what happens if it times out and the
person returns to finish their typing.
Isn't that the idea? They would take some sort of break during which the timer would pause and then resume when they return? If that is the case, it will be very difficult to determine what a "break" is versus what a "finished" is. In that case you would need to define some constant timeout after which it is assumed they are done. Otherwise, you will probably need some sort of "I'm done" button or event that either is clicked or fired based on some user action.
Edit in response to your comment: Basically, you have a couple of options. You will need at least one AJAX call if you want to store the time data on the server. This will fire when the "Done" button gets clicked. You will need JS handlers to handle the keydown/keyup events. These can either trigger AJAX calls which in turn start/stop timers on the server written in PHP. Alternatively you could have these start/stop timers on the client implemented in JS. Basically, the way I see it, (this is pseudocode, not actual JS)
function keyDownHandler(){
resumeGlobalTimer();
restartTimeOutTimer();
}
function timeout(){
pauseGlobalTimer();
}
function resumeGlobalTimer(){
if (globalTimerIsRunning)
//global timer wasn't paused, do nothing
else{
someTimer.resume();
globalTimerIsRunning = true;
}
}
So basically, when a key is pressed, you start a timer that waits to see if another key is pressed within the time limit and it also calls a function that resumes the global timer. If a key is pressed within the time limit, that timeout timer is reset to the starting value and starts ticking down again. If it times out, it assumes the person walked away and pauses the global timer. The resume function checks to make sure the timer isn't already running (ie, not paused) and then attempts to resume it.
At the end of everything, an AJAX call will upload the final global value to PHP.
Hope this helps!
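The global timer described above is just a pausable stopwatch. Here is a hypothetical Python sketch with an injectable clock function, so the pause/resume arithmetic is easy to verify deterministically:

```python
class Stopwatch:
    """Pausable timer; the clock function is injected so behavior is testable."""

    def __init__(self, clock):
        self._clock = clock
        self._elapsed = 0.0
        self._started_at = None  # None means paused

    def resume(self):
        if self._started_at is None:  # ignore resume while already running
            self._started_at = self._clock()

    def pause(self):
        if self._started_at is not None:
            self._elapsed += self._clock() - self._started_at
            self._started_at = None

    def elapsed(self):
        running = self._clock() - self._started_at if self._started_at is not None else 0.0
        return self._elapsed + running


now = [0.0]
sw = Stopwatch(clock=lambda: now[0])
sw.resume(); now[0] = 5.0   # 5 seconds of typing
sw.pause();  now[0] = 65.0  # a 60-second break is not counted
sw.resume(); now[0] = 68.0  # 3 more seconds of typing
print(sw.elapsed())  # 8.0
```

In the real page, `resume()` would be driven by keydown events and `pause()` by the inactivity timeout.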
| |
doc_5665
|
On load of the scene from Reality Composer, I'm storing the initial transform of the ball here
Experience.loadTestSphereAsync { (result) in
switch result {
case .success(let anchor):
anchor.generateCollisionShapes(recursive: true)
self.testSphereAnchor = anchor
self.ball?.physicsBody?.mode = .static
self.arView.scene.anchors.append(self.testSphereAnchor)
self.ballStartTransform = self.testSphereAnchor.ball?.transform
self.targetStartTransform = self.testSphereAnchor.target?.transform
self.addTapGestures()
case .failure(let error):
fatalError(error.localizedDescription)
}
}
To move the ball back to its original position, I'm doing this
func didPressRestartButton() {
ball?.clearForcesAndTorques()
ball?.physicsBody?.mode = .static
ball?.move(to: ballStartTransform!, relativeTo: nil, duration: 0.3)
}
Instead of returning to the original position, the ball just freezes where it is.
A: Set the physicsBody mode instead to .kinematic
https://developer.apple.com/documentation/realitykit/physicsbodymode/kinematic
| |
doc_5666
|
[screenshots: the CSS issue and the solution]
A: While using url rewriting, you have to provide absolute path to the css file.
According to your snapshot, the css file path should be like:
http://localhost/freefootball(sohail)/bower_components/bootstrap/dist/css/bootstrap.min.css
| |
doc_5667
|
In this Spring Batch application, the exception is not stopping the scheduler jobs, but it is causing issues during database operations.
Please help me to find the root cause of this issue.
2019/08/08 19:55:38,223 INFO - JobExecutionListener in before JOB NOT RUNNING10077830 7
Aug 8, 2019 7:55:38 PM com.microsoft.sqlserver.jdbc.TDSChannel enableSSL
INFO: java.security path: /usr/jdk/instances/jdk1.6.0_30/jre/lib/security
Security providers: [SunPKCS11-Solaris version 1.6, SUN version 1.6, SunRsaSign version 1.5, SunJSSE version 1.6, SunJCE version 1.6, SunJGSS version 1.0, SunSASL version 1.5, XMLDSig version 1.0, SunPCSC version 1.6]
SSLContext provider info: Sun JSSE provider(PKCS12, SunX509 key/trust factories, SSLv3, TLSv1)
SSLContext provider services:
[SunJSSE: KeyFactory.RSA -> sun.security.rsa.RSAKeyFactory
aliases: [1.2.840.113549.1.1, OID.1.2.840.113549.1.1]
, SunJSSE: KeyPairGenerator.RSA -> sun.security.rsa.RSAKeyPairGenerator
aliases: [1.2.840.113549.1.1, OID.1.2.840.113549.1.1]
, SunJSSE: Signature.MD2withRSA -> sun.security.rsa.RSASignature$MD2withRSA
aliases: [1.2.840.113549.1.1.2, OID.1.2.840.113549.1.1.2]
, SunJSSE: Signature.MD5withRSA -> sun.security.rsa.RSASignature$MD5withRSA
aliases: [1.2.840.113549.1.1.4, OID.1.2.840.113549.1.1.4]
, SunJSSE: Signature.SHA1withRSA -> sun.security.rsa.RSASignature$SHA1withRSA
aliases: [1.2.840.113549.1.1.5, OID.1.2.840.113549.1.1.5, 1.3.14.3.2.29, OID.1.3.14.3.2.29]
, SunJSSE: Signature.MD5andSHA1withRSA -> com.sun.net.ssl.internal.ssl.RSASignature
, SunJSSE: KeyManagerFactory.SunX509 -> com.sun.net.ssl.internal.ssl.KeyManagerFactoryImpl$SunX509
, SunJSSE: KeyManagerFactory.NewSunX509 -> com.sun.net.ssl.internal.ssl.KeyManagerFactoryImpl$X509
, SunJSSE: TrustManagerFactory.SunX509 -> com.sun.net.ssl.internal.ssl.TrustManagerFactoryImpl$SimpleFactory
, SunJSSE: TrustManagerFactory.PKIX -> com.sun.net.ssl.internal.ssl.TrustManagerFactoryImpl$PKIXFactory
aliases: [SunPKIX, X509, X.509]
, SunJSSE: SSLContext.SSL -> com.sun.net.ssl.internal.ssl.SSLContextImpl
, SunJSSE: SSLContext.SSLv3 -> com.sun.net.ssl.internal.ssl.SSLContextImpl
, SunJSSE: SSLContext.TLS -> com.sun.net.ssl.internal.ssl.SSLContextImpl
, SunJSSE: SSLContext.TLSv1 -> com.sun.net.ssl.internal.ssl.SSLContextImpl
, SunJSSE: SSLContext.Default -> com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl
, SunJSSE: KeyStore.PKCS12 -> com.sun.net.ssl.internal.pkcs12.PKCS12KeyStore
]
java.ext.dirs: /usr/jdk/instances/jdk1.6.0_30/jre/lib/ext:/usr/jdk/packages/lib/ext
2019/08/08 19:55:38,264 ERROR - Encountered fatal error executing job
org.springframework.batch.core.JobExecutionException: Flow execution ended unexpectedly
at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:141)
at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:301)
at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:134)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:49)
at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:127)
at com.cbkonnect.h2h.batch.quartz.JobLauncherDetails.executeInternal(JobLauncherDetails.java:51)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:113)
at org.quartz.core.JobRunShell.run(JobRunShell.java:223)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
Caused by: org.springframework.batch.core.job.flow.FlowExecutionException: Ended flow=FILE_RECIEVE_ACK at state=FILE_RECIEVE_ACK.FILE_RECIEVE_ACK_STEP1 with exception
at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:161)
at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:131)
at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:135)
... 8 more
Caused by: org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "SQL Server returned an incomplete response. The connection has been closed. ClientConnectionId:0b74d397-671c-44fb-9c9e-18f6b2e6fbb6".
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:241)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:372)
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:417)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:255)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy14.getStepExecutionCount(Unknown Source)
at org.springframework.batch.core.job.flow.JobFlowExecutor.isStepRestart(JobFlowExecutor.java:82)
at org.springframework.batch.core.job.flow.JobFlowExecutor.executeStep(JobFlowExecutor.java:63)
at org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:60)
at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:152)
... 10 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "SQL Server returned an incomplete response. The connection has been closed. ClientConnectionId:0b74d397-671c-44fb-9c9e-18f6b2e6fbb6".
at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:1667)
at com.microsoft.sqlserver.jdbc.TDSChannel.enableSSL(IOBuffer.java:1668)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1323)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:991)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:827)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:1012)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:154)
at com.cbkonnect.jdbc.datasource.CbkonnectDataSource.getConnectionFromDriverManager(CbkonnectDataSource.java:210)
at com.cbkonnect.jdbc.datasource.CbkonnectDataSource.getConnectionFromDriver(CbkonnectDataSource.java:182)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnectionFromDriver(AbstractDriverBasedDataSource.java:153)
at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnection(AbstractDriverBasedDataSource.java:119)
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:203)
... 21 more
Caused by: java.io.IOException: SQL Server returned an incomplete response. The connection has been closed. ClientConnectionId:0b74d397-671c-44fb-9c9e-18f6b2e6fbb6
at com.microsoft.sqlserver.jdbc.TDSChannel$SSLHandshakeInputStream.ensureSSLPayload(IOBuffer.java:651)
at com.microsoft.sqlserver.jdbc.TDSChannel$SSLHandshakeInputStream.readInternal(IOBuffer.java:708)
at com.microsoft.sqlserver.jdbc.TDSChannel$SSLHandshakeInputStream.read(IOBuffer.java:700)
at com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream.readInternal(IOBuffer.java:895)
at com.microsoft.sqlserver.jdbc.TDSChannel$ProxyInputStream.read(IOBuffer.java:883)
at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:830)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1170)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1197)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1181)
at com.microsoft.sqlserver.jdbc.TDSChannel.enableSSL(IOBuffer.java:1618)
... 32 more
2019/08/08 19:55:38,277 INFO - JobExecutionListener after JOB
| |
doc_5668
|
I want to find a shorter and more elegant way than a double nested iff:
.....
|extend name= iff(isempty(NAME1) == false, NAME1, iff(isempty(NAME2) == false, NAME2, NAME3))
A: Assuming you meant isnotempty() and not isempty(), you could use the coalesce() function: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/coalescefunction
for example - the value under the column d in the result of the following query is 'hello'
print a = '', b = '', c = 'hello'
| project d = coalesce(a, b, c)
| |
doc_5669
|
As far as I understand from the Z3 tutorial by de Moura and Bjørner, it is not possible to "nest recursive data-type definitions inside other types, such as arrays".
So, suppose I have the following OCaml type:
type value =
| Num of float
| String of string
| List of value list
Ideally, I would like to encode this type in Z3 using the built-in Z3List type, but I think this is not possible, because Z3 does not support mutual recursion between recursive data-types and the other types. Can someone confirm that this is the case?
If it is so, I guess the only possible way is for me to define my own type for a list of values, say my_list, and for the types my_list and value to be mutually recursive. In OCaml:
type value =
| Num of float
| String of string
| List of my_list
and my_list =
| Cons of value * my_list
| Nil
But this means that I will not be able to leverage the built-in reasoning infrastructure that Z3 supports for Z3Lists. Is there a better way to do this?
A: It is correct that you would have to use the flattened version with my_list.
The good news is that the built-in reasoning on lists in Z3 uses just the same mechanism as other data-types so you would get the same reasoning support with the flat data-type declaration.
| |
doc_5670
|
*
*A table tA with an "ID" and a "Description" columns
*"Description" is a string column.
*The contents of the table are:
ID || Description
1 || "String1"
2 || "String2"
3 || "String3"
If I execute the following SQL query:
"SELECT ID FROM tA WHERE Description = 'String2'" it returns 2 (as expected)
But:
If I execute the following SQL query:
"SELECT ID FROM tA WHERE Description = 'String2 '" (trailing withespaces) it also returns 2! (as it is an exact comparison, it should return NONE)
If I execute the following SQL query:
"SELECT ID FROM tA WHERE Description = ' String2'" (leading withespaces) it returns NONE (as expected)
Do you know what is the reason of this difference in behaviour?
Thanks in advance.
A: You need to use "%EXACT" around your column name. This should return no records: "SELECT ID FROM tA WHERE %EXACT(Description) = 'String2 '"
| |
doc_5671
|
from collections import deque
d = deque()
for _ in range(int(input())):
method, *n = input().split()
getattr(d, method)(*n)
print(*d)
and
from collections import deque
d = deque()
for _ in range(int(input())):
method, *n = input().split()
d.method(*n)
print(*d)
A: getattr(...) will get a named attribute from an object; getattr(x, 'y') is equivalent to x.y.
Whereas d.method(*n) will try to look up an attribute literally named method on the deque object, resulting in AttributeError: 'collections.deque' object has no attribute 'method'
>>> from collections import deque
>>> d = deque()
>>> dir(d) # removed dunder methods for readability
[
"append",
"appendleft",
"clear",
"copy",
"count",
"extend",
"extendleft",
"index",
"insert",
"maxlen",
"pop",
"popleft",
"remove",
"reverse",
"rotate",
]
>>> method = "insert"
>>> d.method
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'collections.deque' object has no attribute 'method'
>>> insert_method = getattr(d, method)
>>> insert_method
<built-in method insert of collections.deque object at 0x000001E5638AC0>
>>> help(insert_method)
Help on built-in function insert:
insert(...) method of collections.deque instance
D.insert(index, object) -- insert object before index
>>> insert_method(0, 1)
>>> d
deque([1])
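Putting it together, the getattr-based dispatch from the first snippet can be exercised without stdin; the command list below is just a hypothetical stand-in for the input() loop in the question:

```python
from collections import deque

def run(commands):
    # Dispatch each command name to the matching deque method via getattr;
    # any remaining tokens are passed through as (string) arguments.
    d = deque()
    for line in commands:
        method, *args = line.split()
        getattr(d, method)(*args)
    return list(d)

print(run(["append 1", "appendleft 2", "append 3", "pop"]))  # ['2', '1']
```

Note that the arguments stay strings here, exactly as they do in the original loop over input().split().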
| |
doc_5672
|
Here is my gulp task file:
var gulp = require('gulp');
var ts = require('gulp-typescript');
var merge2 = require('merge2');
var config = require('../gulp.config');
var concat = require('gulp-concat');
var debug = require('gulp-debug');
var path = require('path');
var filter = require('gulp-filter');
var tsproj = ts.createProject(config.files.tsconfig);
gulp.task('typescript:src', function(){
var testFilter = filter(['src/**/*.spec.ts'], {restore: true, passthrough: false});
var compileStream = gulp
.src(['src/**/*.ts', '!src/client/jspm_packages/**'])
.pipe(ts(tsproj));
var tsd = compileStream
.dts
.pipe(concat(path.basename(config.files.srcDefinitions)))
.pipe(gulp.dest(config.paths.typings));
var js = compileStream
.js
.pipe(testFilter)
.pipe(gulp.dest('dist'));
var tests = testFilter.restore.pipe(gulp.dest('test'));
return merge2([tsd, js, tests]);
});
Does anyone have any idea what I am doing wrong?
--Thanks
A: I think you can archive what you want by doing so:
//Copy specs
gulp.src(['dest/**/*.spec.js'])
.pipe(gulp.dest('dest/test'));
//Delete
del.sync(['dest/**/*.spec.js']);
Where 'dest' - is your destination output folder for js compiled files.
As a side note - I do not really follow why you would want to do that. By moving the resulting js files you will most likely break imports, if any, between them and the rest of the application. For example, if you have something like this in your spec.ts file:
import {F} from '../core/Commons';
it will not work after copying it into 'test' subdirectory as the path '../core/Commons' will become invalid.
Instead, you should have all your spec files in the 'test' folder from the beginning, and the ts compiler will do the job of putting the js output into the 'test' directory in the destination folder.
Hope this will help.
| |
doc_5673
|
group | attr1 | attr2 | time1 | time2
--------------------------------------------
1 | 1 | 7 | 1 | 2
1 | 4 | 4 | 4 | 7
1 | 3 | 3 | 6 | 9
2 | 2 | 2 | 2 | 5
2 | 2 | 5 | 3 | 6
2 | 1 | 6 | 4 | 7
2 | 4 | 2 | 5 | 8
3 | 6 | 7 | 6 | 10
What I would like to do is the following:
*
*Group by group
*For every group data frame:
2.1. Apply expanding window on the whole dataframe (all columns)
2.2. For every 'expanding' dataframe
2.2.1. Filter the 'expanding' dataframe using time1 & time2 columns (e.g. `df[df[time1]<df[time2]]`)
2.2.2. Perform various aggregations (ideally using `.agg` with `dict` argument, as there are many different aggregations for many columns)
*The output has basically the same number of rows as the input
My problems are:
*
*I don't see a way to specify 'expanding grouping column'. If that was possible, then I could do something like:
def func(group_df, agg_dict):
group_df_filtered = filter the dataframe on time columns
return group_df_filtered.agg(agg_dict)
df.groupby(['group', expanding(1)]).apply(func, agg_dict=agg_dict)
*I don't see a way to perform expanding operation on the whole dataframe. If that was possible, I could do:
def func(group_df, agg_dict):
for col, funcs in agg_dict:
agg_dict[col] = [lambda df: f(df[df[time1]<df[time2]]) for f in funcs]
return group_df.expanding(1).agg(agg_dict)
df.groupby('group').apply(func, agg_dict=agg_dict)
I found a workaround that works similarly to the second approach, except that I pass whole columns to the func and do subsetting (as I have the whole column instead of just the expanding part) and filtering inside the function, but it's terribly slow, mostly due to the fact that I'm wrapping functions together and have a lot of custom code.
Is there a nice and, most importantly, fast way to achieve the functionality I need? I guess it would require as little pure Python code as possible to work relatively fast (one of the reasons I use agg with a dict instead of e.g. applying row by row or something similar, which would kill performance; the other reason is that I have multiple functions for different columns, so implementing that by hand every time would be way too verbose).
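For concreteness, here is a minimal (and deliberately slow) sketch of the per-group expanding filter-then-aggregate described above; the data frame values and the agg_dict are just illustrative assumptions, and the plain Python loop shows the intended semantics rather than the fast solution being asked for:

```python
import pandas as pd

df = pd.DataFrame({
    "group": [1, 1, 1, 2, 2, 2, 2, 3],
    "attr1": [1, 4, 3, 2, 2, 1, 4, 6],
    "attr2": [7, 4, 3, 2, 5, 6, 2, 7],
    "time1": [1, 4, 6, 2, 3, 4, 5, 6],
    "time2": [2, 7, 9, 5, 6, 7, 8, 10],
})

def expanding_filtered_agg(g, agg_dict):
    # For every expanding prefix of the group, keep rows with time1 < time2
    # and aggregate the remainder; one output row per input row.
    out = []
    for i in range(1, len(g) + 1):
        window = g.iloc[:i]
        window = window[window["time1"] < window["time2"]]
        out.append(window.agg(agg_dict))
    return pd.DataFrame(out, index=g.index)

agg_dict = {"attr1": "sum", "attr2": "max"}
result = df.groupby("group", group_keys=False).apply(expanding_filtered_agg, agg_dict=agg_dict)
print(result)
```

The output has exactly one row per input row, as required; the quadratic inner loop is the part a fast answer would need to replace.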
| |
doc_5674
|
CString Str = _T("Really cool string");
TCHAR szBuffer[32];
_stprintf(szBuffer, _T("Here it is: %s"), Str);
I haven't figured out how this magic works with a standard CString, since CString::FormatString just passes the variable argument list through to _vswprintf and _swprintf. But whatever it does is missing in my derived class.
The operator (LPCTSTR) is inherited as expected, and works when explicitly called.
Any ideas?
A: Your assumption is wrong: There is no implicit conversion to LPCTSTR when a CString object is passed to a printf-style function. The compiler has no way of knowing that that's what you want - it's not going to parse the format string to deduce type information.
Instead, your CString object is passed to printf as is. The magic here is that the CString authors anticipated wrong assumptions about when and how the implicit cast operator is invoked, and modeled CString to be layout-compatible with C-strings. To do so, CString contains a single LPTSTR pointer and no v-table. Now if a CString object is passed to a printf-style function, only this pointer will wind up as a parameter, and everything appears to work. Note that appearing to work is a valid form of undefined behavior; this is undefined behavior.
If you're wondering where CString stores the remaining information (current size, capacity, etc.), it resides in memory just ahead of the character buffer. This way all the information is available through a single pointer:
CStringData* GetData() const throw() {
return( reinterpret_cast< CStringData* >( m_pszData )-1 );
}
Now to solve your real problem: Don't rely on undefined behavior, use explicit casts where necessary and nobody will get hurt:
_stprintf(szBuffer, _T("Here it is: %s"), static_cast<LPCTSTR>(Str));
As an alternative to the cast you can call CString::GetString():
_stprintf(szBuffer, _T("Here it is: %s"), Str.GetString());
Also keep in mind: Deriving from a class that does not provide a virtual destructor is a resource leak waiting to happen. In other words: Do not derive from CString.
A: When you derive from a class, not all operators are inherited. Google the topic of operator inheritance in C++. In your derived class, you may need to implement the operators and simply forward to the base class.
| |
doc_5675
|
int** Solve(){
int array[6][6];
//some determining of array's elements...
int **ptr=calloc(4,sizeof(int*));
for(int i=0;i<4;i++){
for(int j=0;j<4;j++){
ptr[i][j]=array[i+1][j+1];
}
}
return ptr;
}
A: I would add a comment under your question, but I lack reputation.
int r = 3, c = 4, i, j, count;
int **arr = (int **)malloc(r * sizeof(int *));
for (i=0; i<r; i++)
arr[i] = (int *)malloc(c * sizeof(int));
That's what 2D array allocation must look like.
The difference between malloc and calloc is that calloc zero-initializes allocated memory, so they are interchangeable in your case.
| |
doc_5676
|
I have reproduced my code below
# Reproduced Data (my dataframe which I'm estimating my VAR with is called "debt_arg")
structure(c(43.0909967382336, 26.9790446803372, 16.967092620984,
29.5051405616314, 4.17318850227895, -4.84876355707382, -24.0607156164264,
-17.9926676757792, -26.8046197351318, -39.2565717944843, -39.308523853837,
-46.8804759131896, -89.3024279725422, -101.794380031895, -106.046332091248,
-121.2682841506, -95.2702362099528, -62.4721882693054, -53.5841403286581,
-38.7460923880107, -22.6580444473633, -42.4199965067158, -1.6919485660685,
46.4060993745788, 27.4541473152261, 172.152195255873, 181.360243196521,
176.898291137168, 309.076339077816, 336.794387018463, 196.02243495911,
395.660482899758, 464.988530840405, 494.556578781053, 396.9146267217,
505.832674662347, 524.940722602995, 519.918770543642, 420.396818484289,
349.224866424937, 186.772914365584, 184.720962306232, 40.1990102468787,
87.1570581875262, 42.2151061281736, -182.786845931179, -248.408797990532,
-223.860750049884, -246.332702109237, -160.484654168589, -130.956606227942,
-53.8285582872948, -69.6505103466475, -113.822462406, -119.124414465353,
-71.5863665247052, -105.228318584058, -64.0802706434106, -111.022222702763,
-40.9141747621159, -34.1361268214685, -92.7280788808212, -6.2500309401737,
37.8680170004736, 60.686064941121, 101.094112881769, 103.382160822416,
-74.1797912369368, -99.1317432962894, -228.933695355642, -534.125647414995,
-654.007599474347, -688.0995515337, -617.091503593053, -501.913455652405,
-462.655407711758, -405.577359771111, -539.759311830463, -464.641263889816,
-370.263215949168, -338.035168008521, -361.017120067874, -495.649072127226,
-418.951024186579, -349.762976245931, -393.074928305284, -331.696880364637,
-305.778832423989, -232.140784483342, -150.822736542695, -116.134688602047,
-30.4666406614001, -5.22859272075266, 86.9494552198946, 34.457503160542,
108.705551101189, 158.723599041837, 141.341646982484, 55.3496949231317,
-67.5222571362212, 72.7557908044264, 129.453838745074, 106.911886685721,
222.349934626368, 289.877982567016, 368.006030507663, 454.244078448311,
670.382126388958, 506.110174329605, 573.968222270253, 755.1862702109,
363.624318151547, 664.462366092195, 414955.426174104, 410044.891543103,
403034.35691006, 412023.822277016, 422013.287643972, 418102.753010928,
435092.218377884, 438381.68374484, 432171.149111795, 445760.614478752,
450850.079845708, 445939.545212663, 432029.01057962, 418218.475946575,
394307.941313531, 372097.406680487, 346586.872047443, 319376.337414399,
287765.802781355, 250555.268148311, 228444.733515267, 192634.198882223,
142523.664249179, 114613.129616135, 73902.594983091, 52092.060350047,
28781.5257170029, 13770.9910839589, -7139.54354908504, -31950.0781821292,
-44460.6128151731, -55771.1474482173, -75681.6820812612, -101592.216714305,
-120802.751347349, -125313.285980393, -155323.820613438, -190834.355246481,
-219044.889879525, -244455.42451257, -247865.959145613, -249076.493778658,
-245687.028411702, -244097.563044746, -271308.09767779, -282518.632310834,
-301029.166943878, -301139.701576922, -301850.236209966, -287760.77084301,
-300871.305476054, -316081.840109098, -312692.374742142, -328102.909375186,
-330013.44400823, -348623.978641274, -380734.513274318, -393545.047907362,
-409355.582540406, -429866.11717345, -451376.651806494, -483687.186439538,
-509197.721072583, -531308.255705626, -548218.79033867, -577929.324971715,
-593639.859604758, -618950.394237803, -628560.928870847, -622271.463503891,
-592781.998136935, -507492.532769979, -501703.067403023, -474413.602036067,
-482224.136669111, -509334.671302155, -507745.205935199, -506355.740568243,
-473266.275201287, -424176.809834331, -399587.344467375, -322497.879100419,
-207608.413733463, -169418.948366507, -145829.482999551, -99540.0176325953,
-18950.5522656394, 8438.91310131643, 27528.3784682727, 34117.8438352286,
87907.3092021844, 137896.77456914, 196686.239936097, 236275.705303052,
240065.170670008, 288854.636036964, 325344.10140392, 340733.566770876,
296223.032137832, 353212.497504788, 431101.962871744, 455291.4282387,
519680.893605656, 551470.358972612, 591359.824339568, 664649.289706524,
697038.75507348, 720028.220440436, 703717.685807392, 697007.151174348,
717496.616541304, 716186.08190826, 728275.547275216, 1538.70118346597,
2398.42391964878, 2392.09965578383, 4133.94339191895, 3777.72612805406,
4867.33386418917, 4108.55160032429, 5550.94433645941, 4634.90307259452,
5278.04380872964, 5088.51254486475, 7486.71828099987, 6869.96001713498,
6070.3357532701, 4563.65148940522, 6128.08522554033, 2788.93596167545,
2682.28969781056, 2229.84943394568, 3741.65917008079, 2630.38390621591,
4082.16364235103, 2198.15237848614, 4896.45411462126, 4157.35585075637,
4921.13358689149, 4820.0463230266, 6147.74805916172, 4105.90179529683,
5193.46153143195, 3666.34326756707, 4597.95400370218, 1570.8447398373,
2221.92547597241, 166.215212107527, 1763.06994824264, -1176.90431562224,
-961.478579487128, -1735.65784335201, -1407.9491072169, -5683.92737108178,
-5080.73563494666, -8825.40289881155, -6968.02316267644, -17166.0654265413,
-20479.0666904062, -20975.0449542711, -19596.572218136, -18925.8824820009,
-16613.6967458657, -16307.3730097306, -14033.8542735955, -13841.2785374604,
-12716.7748013253, -13487.4250651902, -11195.5563290551, -12067.9225929199,
-10807.5008567848, -10885.5221206497, -7810.93638451459, -10139.4426483795,
-8558.90691224435, -8606.53717610924, -5071.93643997413, -6154.21470383901,
-3857.6179677039, -5539.78023156878, 47.5015045663386, -3501.44675929855,
-156.083023163432, 610.628712971687, 761.724449106798, -6687.22581475809,
-6513.97907862297, -6552.17734248786, -671.552606352736, -4200.09687021763,
-2435.65713408251, -933.942397947387, 10092.1733381877, 5412.08207432284,
7628.10081045795, 7935.10854659307, 19198.7752827282, 11920.7060188633,
17229.0047549984, 16831.0094911335, 31239.1312272686, 19334.4489634038,
20324.2716995389, 20205.076435674, 26013.2221718091, 3328.73490794423,
5991.50064407934, 6570.13238021445, 20323.7811163496, 11933.7938524847,
15170.6605886198, 16821.4173247549, 13656.38206089, -8733.33720297486,
-4910.67546683973, -7102.64173070462, 6088.80900543051, 663.179741565611,
1739.09747770072, 216.726213835747, 8738.72494997097, -3065.35931389393,
-12117.1165777588, -29574.4198416237, -23324.9971054886, -32336.0323693535,
270.48, 265.24, 652.73, 800.62, 747.79, 660.42, 419.8, 426.33,
420.66, 412.91, 487.09, 581.92, 558.75, 519.1, 604.55, 460.49,
382.46, 405.58, 431.74, 518.96, 509.36, 607.4, 558.86, 649.37,
706.08, 809.41, 822.47, 687.5, 709.97, 550.44, 380.28, 430.06,
419.78, 498.72, 534.4, 550.47, 569.24, 496.9, 475.42, 416.77,
443.81, 402.25, 243.55, 295.39, 436.13, 350.65, 395.19, 524.95,
566.46, 765.61, 827.69, 1071.96, 1201.66, 945.45, 1142.5, 1375.37,
1400.42, 1367.41, 1694.83, 1543.31, 1800.58, 1711.09, 1637.27,
2090.46, 2102.78, 2190.87, 2187.97, 2151.73, 2103.72, 2107.87,
1598.17, 1079.66, 1125.95, 1587.97, 2075.14, 2320.73, 2373.71,
2185.01, 2643.42, 3523.59, 3388.03, 3360.64, 2463.63, 2462.63,
2683.99, 2346.68, 2451.73, 2854.29, 3380.78, 2976.27, 4783.77,
5391.03, 6373.82, 7887.33, 12548.99, 8597.02, 10837.23, 11656.81,
9814.62, 11675.18, 12992.43, 14683.49, 16675.68, 16917.86, 20265.32,
21912.63, 26078.29, 30065.61, 31108.86, 26037.01, 33461.77, 30292.55,
33466.03, 3.26e+08, 2.1e+07, 1.1e+07, 7.869e+09, 862806000, 538167500,
476328500, 157505500, 731948111.111111, 24350957088.2353, 999682500,
4203927088.23529, 1806496171.11111, 164884528.235294, 2178749500,
2576536163.86555, -1104639488.88889, 1183937108.57143, 1323054635.48522,
2240739607.7745, 3231203531.20524, 2224656303.03667, 2799743875.36837,
2852285791.5461, 4248938759.38433, 1762499191.62355, 2765832512.19452,
1498329486.02275, 3436536228.02919, 4467813458.46352, 2844847427.10581,
153779772.547889, 768245951.027719, 2874603185.30815, 84905815.7334787,
2265403119.14626, 2444297843.45995, 1369023915.06914, -253538732.407444,
-1664050535.17686, -2690301300.23777, -1596956359.91031, -2925302741.22981,
-2533781832.48488, -2258880000, -1220300000, 142670000, -1664830000,
-2036900000, -2134950000, -497270000, -3059160000, -3754980000,
-1132040000, -2212520000, -2153150000, -3101240000, -728950000,
991330000, 1156170000, 2753260645.58738, 1910464819.74444, 852896475.94148,
2277112718.5433, 2022087685.81889, 5273568557.49061, -1502521294.35984,
1063441737.14022, -226518951.379118, -1253194355.76109, -2177171727.73634,
-1535518403.57232, 96147441.6142862, -296349372.551493, -627558644.109443,
-467930782.724932, 405257699.151905, 144634171.908078, 337338547.911262,
1859566805.68781, 796934898.126637, 286707726.576741, -236299720.277408,
-887533002.579341, 1553120015.26578, 27003236.9796999, -1454448585.92745,
124239641.731611, 168476951.081102, -568202127.603569, -253500911.561899,
1240493061.28016, 1162373441.80884, -4349136882.03693, 540850191.216397,
63371068.2983998, 1134211263.74247, 1367030943.20801, 618885142.028415,
-2829317497.3944, 4728966886.53824, 12590576935.5312, 6879440409.39944,
10868932518.4022, 15553076839.5202, 4374233548.15403, 6179975808.64763,
12303491605.5898, 16348265006.8077, -1264432453.55569, -1071543953.94089,
-728949104.179213, 519284160.862738, 0, 0, 0, 0, 321395635.417275,
-27163751.5253196, 578208608.58746, 341330056.878433, 253612251.627469,
407474140.566744, 3485218027.37565, 1524948883.60027, 1095063503.93567,
-243667941.737244, 2303448152.30374, 1065446917.68978, 637179622.619144,
414989651.653276, 57287900.1825424, 442961505.748743, 614615523.155122,
474062261.101183, -528254493.861979, 306917257.290977, 466868012.120676,
91074750.1073804, 1873300034.41525, -111963691.48737, 562133833.756524,
-499826456.669203, -65715454.514262, -206227359.32734, -95642879.0800113,
-10545930096.3757, -309843542.700806, 178698848.75935, 281636981.229176,
-94039364.237304, -3456799000.38897, 42016298.5163033, 29986452.0184,
-34745156.95985, -69089531.8771083, 104975191.0695, 3880000,
18160000, -1.2e+07, -125920000, 49020000, 19960000, -67760000,
64130000, -60730000, 32580000, -11850000, -46150000, -3340000,
-34610000, -67770000, 57620000, 247065572.305654, 110723963.408605,
132149695.892849, 216716497.314816, 549906355.874947, 594931451.444728,
228472127.27772, 411625076.952925, -92909673.1325036, -202873018.788848,
-58179792.5150349, -176659291.195054, -73100388.2223938, 10404172.6882247,
-54286803.4483821, -94934845.5623686, -215442207.120147, -82511407.2007654,
95972418.060002, 130964874.692201, -51017265.9840873, 85839986.5801663,
126620883.760985, -369639265.12419, -29215440.7294261, -11121692.0314486,
-115981397.706632, 21410812.1712994, 79654137.3840613, 27308434.4325477,
-4229545.44216412, 239144758.016801, 14686931.9932968, 231275172.573391,
66295018.1798717, -94357525.4859095, 41697675.4063964, 22833247.3850187,
-44701305.1208689, 219281935.220203, 29548818.0680997, 38598506.304463,
417521774.71664, 499547569.881022, 50027913.0234979, 852409184.207933,
1596119677.49392, 514743198.781707, 102026577.988947, -292454396.807919,
-130885581.879457, -187033564.604513, 195110655.737268), .Dim = c(113L,
6L), .Dimnames = list(NULL, c("sp", "m1_us", "m1_arg", "eq_arg",
"pfdebt_arg", "pfequity_arg")), .Tsp = c(1991, 2019, 4), class = c("mts",
"ts", "matrix"))
# Estimate VAR Model
var.est.debt_arg <- VAR(debt_arg,p=1,type="both",season=NULL)
summary(var.est.debt_arg)
# Impulse Response Functions
debtarg_1 <- irf(var.est.debt_arg,response="pfdebt_arg",impulse="sp",n.ahead=40,ortho=TRUE,boot=TRUE)
plot(debtarg_1) # response of pfdebt to s&p shock
debtarg_2 <- irf(var.est.debt_arg,response="pfdebt_arg",impulse="m1_us",n.ahead=40,ortho=TRUE,boot=TRUE)
plot(debtarg_2) # response of pfdebt to us M1 Shock
debtarg_3 <- irf(var.est.debt_arg,response="pfdebt_arg",impulse="m1_arg",n.ahead=40,ortho=TRUE,boot=TRUE)
plot(debtarg_3) # response of pfdebt to a domestic m1 shock
debtarg_4 <- irf(var.est.debt_arg,response="pfdebt_arg",impulse="eq_arg",n.ahead=40,ortho=TRUE,boot=TRUE)
plot(debtarg_4) # response of pfdebt to equity market price shock
debtarg_5 <- irf(var.est.debt_arg,response="pfdebt_arg",impulse="pfequity_arg",n.ahead=40,ortho=TRUE,boot=TRUE)
plot(debtarg_5) # response of pfdebt to pfequity shocks
I would like to plot these 5 IRFs using ggplot2 and gridExtra. I'm looking to plot each IRF in grid format, one per row.
Thanks!
| |
doc_5677
|
<script type="text/javascript" language="javascript">
$(document).ready(function() {
SubmitClick = function() {
if ($("#<%= fuFile.ClientID %>").val() == "") {
$("#error").html("File is required");
return false;
}
}
});
</script>
<asp:FileUpload ID="fuFile" runat="server" />
<asp:Button ID="btnSubmit" runat="server" Text="Submit"
OnClientClick="SubmitClick()" UseSubmitBehavior="false"
OnClick="btnSubmit_Click" />
<span id="error"></span>
I thought that setting UseSubmitBehavior="false" and returning false in the javascript function would work, but it doesn't. My error message gets displayed for a second before the server side code runs.
What am I doing wrong here?
A: If you are using JQuery:
$btn.on("click", function (e) {
    e.preventDefault();
});
A: What worked for me was:
Remove OnClientClick="SubmitClick()" and set UseSubmitBehavior back to true.
Then change your $(document).ready() method to
$(document).ready(function () {
$('#btnSubmit').click(function () {
if ($("#<%= fuFile.ClientID %>").val() == "") {
$("#error").html("File is required");
return false;
}
return true;
});
});
A: Typically, to cancel a client click you would add return false;
You can update your code to do this and it should work: OnClientClick="return SubmitClick();"
A: You need to return the result of the validation function in the OnClientClick event so the submit behavior can be cancelled.
try this
OnClientClick="return SubmitClick()"
| |
doc_5678
|
since CM and LineageOS do not support my specific S5 model (G900H), I added the missing folders (device, kernel, hardware etc.) from fevax's repositories (https://github.com/Fevax).
After fixing some compile errors and adding all of fevax's needed repositories, I successfully built a CM ROM zip file using the 'brunch k3gxx' command.
However, as I said, flashing it (after a factory reset) using TWRP recovery goes into a bootloop. I also tried flashing only the recovery/boot/system partitions using heimdall, but it also leads to a bootloop.
I thought that maybe the problem was the toolchain but after a long research I found out that it was probably ok (prebuilts/gcc/linux-x86/arm/arm-eabi-4.8/bin/arm-eabi-).
In addition, fevax has published his own ROM for this exact model, and I flashed it successfully and it works fine. I tried to contact him (Email, xda messages etc.) but he hasn't answered me yet. I want to make my own custom ROM, so his ROM only proves that it is possible to create a custom CM ROM for this particular S5 model.
Currently I have no idea how to even understand what causes the bootloop, the 'brunch k3gxx' log does not show any unusual warnings or errors.
So, is there any way to debug a bootloop on a Samsung Galaxy S5? Or any other way to get information about why that bootloop occurs?
| |
doc_5679
|
@Entity
@Table(name = "exercises")
public class ExerciseEntity {
@Id
private Long id;
private String nameOfExercise;
private Integer caloriesBurnedForHour;
@Column(columnDefinition = "TEXT")
private String bonusInfo;
@ManyToMany(mappedBy = "exerciseEntityList",fetch = FetchType.EAGER)
private List<PersonalTrainingProgram> personalTrainingPrograms;
public ExerciseEntity() {
}
@Entity
public class PersonalTrainingProgram {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@ManyToMany
@JoinTable()
private List<ExerciseEntity> exerciseEntityList;
@ManyToOne
private FoodProgram foodProgram;
@ManyToOne
private AppUser appUser;
public PersonalTrainingProgram() {
}
And here is the method,
@Override
public List<ExerciseEntity> findExercisesBySpecificId(List<ExerciseDto> exerciseDto) {
List<ExerciseEntity> listEntity= new ArrayList<>();
for (int i = 0; i <exerciseDto.size() ; i++) {
ExerciseEntity byId = exercisesRepository.getById(exerciseDto.get(i).getId());
listEntity.add(byId);
}
return listEntity;
}
@Override
public void addItToDatabase(List<ExerciseDto> exerciseDto, FoodProgramDto
foodProgramDto,String username) {
PersonalTrainingProgram personalTrainingProgram = new PersonalTrainingProgram();
personalTrainingProgram.setExerciseEntityList(findExercisesBySpecificId(exerciseDto));
personalTrainingProgram.setFoodProgram(foodProgramService
.findFoodProgramById(foodProgramDto));
personalTrainingProgram.setAppUser(appUserRepository.findByUsername(username).get());
personalTrainingRepository.save(personalTrainingProgram);
}
After the exercise line in debug mode I get this "{ExerciseEntity$HibernateProxy$B...}"
I think it could possibly be from the relationship, but everything seems fine to me. Any help would be appreciated.
Repository is standard repository, extending JPA
A: You must use findById that returns an Optional.
getById returns not an entity but a reference.
From the documentation:
Returns a reference to the entity with the given identifier. Depending
on how the JPA persistence provider is implemented this is very likely
to always return an instance and throw an EntityNotFoundException on
first access. Some of them will reject invalid identifiers
immediately.
So it's always a proxy that may not be initialized, which causes the problem.
| |
doc_5680
|
I have the following function I use for updating the line numbers text view:
String lineDelimiter = "\n";
public void updateLineNumbers(){
int lines = textBox.getLineCount();
lineNums.setText(1 + lineDelimiter);
for(int i = 2; i < lines; i++){
lineNums.append(i + lineDelimiter);
}
}
All this is fine, but the problem is when you have a document with, say, 200-odd lines, you start to notice a little delay when adding lines. Is this because Android TextView's setText/append methods are a little slow? Or is it the concatenation that's causing the delay?
I've also made a similar function that appends a line number when the user adds a line, and vice versa, as opposed to clearing the TextView and adding each line number again like the function above does. But this function still lags the app when the user adds/removes line(s).
How can I stop this? I can't think of what to do and it's stressing me out because it's lagging my app and rendering it unusable for large files! :(
Thanks for looking!
SOLUTION
I've found a way to have fast line numbers, which is to use a custom TextView with onDraw(Canvas canvas) overriden and to draw them that way which results in lag-free line numbers :).
A:
Is this because Android TextView's setText/append methods are a little slow? Or is it the concatenation that's causing the delay?
Use Traceview and find out.
Off the cuff, I would imagine that calling append() a whole bunch of times on a TextView will be vastly slower than calling append() a bunch of times on a StringBuilder, then calling setText() once on the TextView.
How can I stop this?
Don't handle line numbers that way. For example, put a TextView to the left of the EditText, and put your line numbers in the TextView, one per line.
| |
doc_5681
|
with open(file) as fin:
rows = ( line.split() for line in fin )
d = { row[0]:row[1:] for row in rows }
For a tab delimited input, file. And it works fine on my personal machine, but when I move it over to a shared computing cluster, it doesn't like the 3rd line, d = { row[0]:row[1:] for row in rows }. I've been trying to figure out why.
The only thing I could come up with was a difference in versions of Python. I'm running 2.7.3 and the shared cluster runs 2.6.1, but that doesn't seem totally reasonable - did I miss something totally obvious? I appreciate any advice y'all have.
Here is the text of the error (in Python 2.6.1 on cluster),
File "Alphabet.py", line 22
d = { row[0]:row[1:] for row in rows }
^
SyntaxError: invalid syntax
A: dict comprehension is new in Python 2.7, see PEP 274 http://www.python.org/dev/peps/pep-0274/
From that:
>>> dict([(i, chr(65+i)) for i in range(4)])
is semantically equivalent to
>>> {i : chr(65+i) for i in range(4)}
If you need compatibility with Python before 2.7, use the first version.
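Applied to the snippet from the question, the pre-2.7 form looks like this (sketched with an in-memory list standing in for the open file):

```python
# dict() over a generator expression works on Python 2.6,
# where {k: v for ...} is a syntax error.
lines = ["a\t1\t2", "b\t3\t4"]           # stands in for the file's rows
rows = (line.split() for line in lines)
d = dict((row[0], row[1:]) for row in rows)
print(d)  # {'a': ['1', '2'], 'b': ['3', '4']}
```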
| |
doc_5682
|
data: [Solaergy, 3255, Solagy, 3635, Soly, 36235, Solar energy, 54128, Solar energy, 54665, Solar energy, 563265]
Now I want to split data into two arrays, title and isbn (of books):
String[] titles = new String[data.length];
String[] isbnS = new String[data.length];
for (int i = 0; i < data.length; i += 2) {
titles[i] = data[i];
isbnS[i] = data[i + 1];
}
System.out.println("titles: " + Arrays.toString(titles));
System.out.println("isbnS: " + Arrays.toString(isbnS));
My problem is that there is a null value after each value in both arrays:
titles: [Solaergy, null, Solagy, null, Soly, null, Solar energy, null, Solar energy, null, Solar energy, null]
isbnS: [3255, null, 3635, null, 36235, null, 54128, null, 54665, null, 563265, null]
I want to be like this:
titles: [Solaergy, Solagy, Soly, Solar energy, Solar energy, Solar energy]
isbnS: [3255, 3635, 36235, 54128, 54665, 563265]
A: You got the indices wrong :
String[] titles = new String[data.length/2];
String[] isbnS = new String[data.length/2];
int count = 0;
for (int i = 0; i < data.length; i += 2) {
titles[count] = data[i];
isbnS[count] = data[i + 1];
count++;
}
A: for (int i = 0, j=0; i < data.length; i += 2, j++) {
titles[j] = data[i];
isbnS[j] = data[i + 1];
}
A: I think what you're trying to do is put one element in one array and the next one in the other.
However, you're trying to store integers in a string array.
This is what I would do:
String[] titles = new String[data.length/2];
int[] isbnS = new int[data.length/2];
int j=0;
for(int i=0; i<data.length; i+=2)
{
titles[j] = data[i];
    isbnS[j++] = Integer.parseInt(data[i+1]); // parse the ISBN string into an int
}
| |
doc_5683
|
Is there a way to save the last id from an incremental import, maybe locally, and upload it to S3 via a cron job?
My first idea is, when I create the job, to just send a request to Redshift (where my data is stored) and get the last id or last_modified via a bash script.
Another idea is to get the output of sqoop job --show $jobid, filter the last_id parameter, and use it to create the job again.
But I don't know if Sqoop offers a way to do this more easily.
A: As per the Sqoop docs,
If an incremental import is run from the command line, the value which should be specified as --last-value in a subsequent incremental import will be printed to the screen for your reference. If an incremental import is run from a saved job, this value will be retained in the saved job. Subsequent runs of sqoop job --exec someIncrementalJob will continue to import only newer rows than those previously imported.
So, you need to store nothing. Sqoop's metastore will take care of saving last value and avail for next incremental import job.
Example,
sqoop job \
--create new_job \
-- \
import \
--connect jdbc:mysql://localhost/testdb \
--username xxxx \
--password xxxx \
--table employee \
--incremental append \
--check-column id \
--last-value 0
And start this job with the --exec parameter:
sqoop job --exec new_job
A: Solution
I changed the file sqoop-site.xml and added the endpoint of my MySQL instance.
Steps
*
*Create the MySQL instance and run these queries:
CREATE TABLE SQOOP_ROOT (version INT, propname VARCHAR(128) NOT NULL, propval VARCHAR(256), CONSTRAINT SQOOP_ROOT_unq UNIQUE (version, propname));
INSERT INTO SQOOP_ROOT VALUES(NULL, 'sqoop.hsqldb.job.storage.version', '0');
*Change the original sqoop-site.xml adding your MySQL endpoint, user and password.
<property>
<name>sqoop.metastore.client.enable.autoconnect</name>
<value>true</value>
<description>If true, Sqoop will connect to a local metastore
for job management when no other metastore arguments are
provided.
</description>
</property>
<!--
The auto-connect metastore is stored in ~/.sqoop/. Uncomment
these next arguments to control the auto-connect process with
greater precision.
-->
<property>
<name>sqoop.metastore.client.autoconnect.url</name>
<value>jdbc:mysql://your-mysql-instance-endpoint:3306/database</value>
<description>The connect string to use when connecting to a
job-management metastore. If unspecified, uses ~/.sqoop/.
You can specify a different path here.
</description>
</property>
<property>
<name>sqoop.metastore.client.autoconnect.username</name>
<value>${sqoop-user}</value>
<description>The username to bind to the metastore.
</description>
</property>
<property>
<name>sqoop.metastore.client.autoconnect.password</name>
<value>${sqoop-pass}</value>
<description>The password to bind to the metastore.
</description>
</property>
When you execute the command sqoop job --list for the first time, it will return no jobs. But after creating the jobs, even if you shut down the EMR cluster, you don't lose the Sqoop metadata of the existing jobs.
In EMR, we can use the Bootstrap Action to automate this operation in cluster creation.
| |
doc_5684
|
Error: <spyOn> : push() method does not exist
Usage: spyOn(<object>, <methodName>)
exp.ts :
import { Component } from '@angular/core';
import { NavController } from 'ionic-angular';
import { Injectable } from '@angular/core' ;
import { HttpClient, HttpResponse, HttpErrorResponse, HttpHeaders } from '@angular/common/http' ;
import { SubmittingHc } from '../submitting-conditions/submitting-conditions';
@Injectable()
@Component({
selector: 'page-hc-entry',
templateUrl: 'hc-entry.html'
})
export class HcEntryPage {
private hc = {
id: null,
memberid: null,
one:null,
two:null
};
constructor(public navCtrl: NavController) { }
goToSubmittingConditions(){
this.navCtrl.push(SubmittingHc, this.hc);
}
}
exp.spec.ts :
import { async, ComponentFixture, TestBed, inject } from '@angular/core/testing';
import { By } from '@angular/platform-browser';
import { IonicModule, Platform, NavController} from 'ionic-angular/index';
import { StatusBar } from '@ionic-native/status-bar';
import { SplashScreen } from '@ionic-native/splash-screen';
import { HcEntryPage } from './hc-entry';
import { SubmittingHc } from '../submitting-conditions/submitting-conditions';
describe('hc component', () => {
let comp: HcEntryPage;
let fixture: ComponentFixture<HcEntryPage>;
beforeEach(async(()=>{
TestBed.configureTestingModule({
declarations:[HcEntryPage],
imports:[
IonicModule.forRoot(HcEntryPage)
],
providers:[
NavController,
SubmittingHc
]
});
}));
beforeEach(()=>{
fixture = TestBed.createComponent(HcEntryPage);
comp = fixture.componentInstance;
});
it('should create component', () => expect(comp).toBeDefined());
it('should navigate to submitting condition page', () =>{
spyOn(comp.navCtrl, 'push').and.stub();
comp.goToSubmittingConditions();
expect(comp.navCtrl.push).toHaveBeenCalledWith(SubmittingHc);
});
});
I tried the code below, but it gives the same error:
it('should be able to launch SubmittingHc page', () => {
let navCtrl = fixture.debugElement.injector.get(NavController);
spyOn(navCtrl, 'push');
  comp.goToSubmittingConditions();
expect(navCtrl.push).toHaveBeenCalledWith(SubmittingHc);
});
A: Try making a file "project/test-config/mocks-ionic.ts" containing:
export class NavMock {
public pop(): any {
return new Promise(function(resolve: Function): void {
resolve();
});
}
public push(): any {
return new Promise(function(resolve: Function): void {
resolve();
});
}
public getActive(): any {
return {
'instance': {
'model': 'something',
},
};
}
public setRoot(): any {
return true;
}
public registerChildNav(nav: any): void {
return ;
}
}
Now import NavMock and redefine providers like:
import {
NavMock
} from '../test-config/mocks-ionic'
providers: [
{ provide: NavController, useClass: NavMock}
]
Example test case:
it('should be able to launch wishlist page', () => {
let navCtrl = fixture.debugElement.injector.get(NavController);
spyOn(navCtrl, 'push');
de = fixture.debugElement.query(By.css('ion-buttons button'));
de.triggerEventHandler('click', null);
expect(navCtrl.push).toHaveBeenCalledWith(WishlistPage);
});
Refer to this project for more help: https://github.com/ionic-team/ionic-unit-testing-example
| |
doc_5685
|
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
module Test where
import Control.Monad.Reader
import Data.Functor.Identity
class Monad m => TestMonad m where
test :: Int -> m Int
data TestData m = TestData {
_test :: Int -> m Int
}
instance (Monad m, MonadReader (TestData m) m) => TestMonad m where
test n = ask >>= \(TestData f) -> f n
env :: TestData Identity
env = TestData {
_test = return
}
If I want to run this example
runReader (test 5) env
I receive this error:
*Test Control.Monad.Reader> runReader (test 5) env
<interactive>:5:12: error:
• Couldn't match type ‘Identity’
with ‘ReaderT (TestData Identity) Identity’
arising from a functional dependency between:
constraint ‘MonadReader
(TestData (ReaderT (TestData Identity) Identity))
(ReaderT (TestData Identity) Identity)’
arising from a use of ‘test’
instance ‘MonadReader r (ReaderT r m)’ at <no location info>
• In the first argument of ‘runReader’, namely ‘(test 5)’
In the expression: runReader (test 5) env
In an equation for ‘it’: it = runReader (test 5) env
*Test Control.Monad.Reader>
What am I doing incorrectly, and how do I fix it?
Update: I found my error. I have the types env :: TestData Identity and test 5 :: ReaderT (TestData Identity) Identity. But my instance covers only the case test 5 :: ReaderT (TestData m) Identity where m ~ ReaderT (TestData m) Identity.
The correct instance is:
instance TestMonad (Reader (TestData Identity)) where
test n = do
(TestData f) <- ask
return $ runIdentity $ f n
| |
doc_5686
|
So what I would like to do is remove the respective "li span" if the data returned from the server and assigned to it is empty, so I can calculate the right sum.
Here is how i assign data to my list:
$.get(url, function (data) {
$.each(data, function (i, info) {
$("#accordion li").text(function (y, name) {
if ($(this).is(":contains(" + info.cName + ")")) {
if ($(this).is(":has(span)")) {
$(this).children().replaceWith('<span class = "cCount">' + info.count + '</span>');
$(this).children().fadeOut(10000);
} else {
$(this).append('<span class = "cCount">' + info.count + '</span>');
}
}
countCont($(this).parent().attr('id'));
});
});
});
A: After you fade it out, remove it from the DOM using .remove()
//...
$("#accordion li").text(function (y, name) {
    var $this = $(this); // also store this, don't select it 5 times
if ($this.is(":contains(" + info.cName + ")")) {
if ($this.is(":has(span)")) {
$this.children().replaceWith('<span class = "cCount">' + info.count + '</span>');
$this.children().fadeOut(10000);
$this.remove();
} else {
$this.append('<span class = "cCount">' + info.count + '</span>');
}
}
countCont($this.parent().attr('id'));
});
Edit: In response to the comment, this will remove the span if it exists then add a new one on:
//...
var $this = $(this); // also store this, don't select it 5 times
if ($this.is(":contains(" + info.cName + ")")) {
if ($this.is(":has(span)")) {
//remove the span
var $span = $('span', this);
$span.fadeOut(10000);
$span.remove();
}
//always append new span
$this.append('<span class = "cCount">' + info.count + '</span>');
}
| |
doc_5687
| ||
doc_5688
|
<Window x:Class="WpfApplication9.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="MainWindow" Height="350" Width="525">
<Window.Resources>
<Style TargetType="Button">
<Setter Property="Content">
<Setter.Value>
<Grid>
<TextBlock Text="help"></TextBlock>
</Grid>
</Setter.Value>
</Setter>
</Style>
</Window.Resources>
<Grid>
<Grid.RowDefinitions>
<RowDefinition></RowDefinition>
<RowDefinition></RowDefinition>
<RowDefinition></RowDefinition>
<RowDefinition></RowDefinition>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition></ColumnDefinition>
<ColumnDefinition></ColumnDefinition>
<ColumnDefinition></ColumnDefinition>
</Grid.ColumnDefinitions>
<Button Grid.Row="1" Grid.Column="0"></Button>
<Button Grid.Row="1" Grid.Column="1"></Button>
</Grid>
A: You cannot (or at least should not) use a UI element in a Setter, because a UI element can only exist in one place in a visual tree. That is to say: a UI element may only have one parent element. Try setting the content to some non-UI value, like a simple string, and let WPF apply a data template for you:
<Setter Property="Content" Value="help" />
If you want to specify complex UI content, set a ContentTemplate instead; that will allow you to use a DataTemplate to build up a common visual tree.
Note, however, that it is unusual to set Content on a button setter; the content typically varies from button to button, whereas styles are meant to set property values that should be common across control instances.
A: A Style is shared, so there is only one instance of the Grid, and since a Visual can have only one parent, it will be visible only in the last place where you use it. You can disable sharing for the Style:
<Style ... x:Shared="False">
x:Shared Attribute
When set to false, modifies WPF resource-retrieval behavior so that requests for the attributed resource create a new instance for each request instead of sharing the same instance for all requests.
| |
doc_5689
|
})
observeEvent(input$update,{
shiny::validate(
need(nrow(train_df) > 0, "No points found for these filters!")
)
})
My validate statement works but the error message does not show up in the app. Any help would be appreciated.
| |
doc_5690
|
I have different firms, from different industries & countries and from different years. Below is just a small example. I would like to replace the missing values (in the column industry or country) with the existing values in the column, if they come from the same firm.
For example, firm 123 is missing its industry in year 2. I have the industry in which the firm belongs to from the previous (or sometimes succeeding) years but do not know how to add it.
Another example: I have the firm 444 which is missing its country in year 3. I do have its country from the previous years but do not know how to transfer / copy it to the 3rd year:
firm  year  industry  country
123   1     1         usa
123   2     1         usa
123   3     .         usa
333   1     2         usa
333   2     .         usa
444   1     .         fr
444   2     2         fr
444   3     2         .
I looked up on stata/help and on the internet. All I could find was the replace command, but it only replaced equal numbers.
I think it will be something with:
replace industry=(probably something dependent on the firm (and maybe year)) if industry==.
replace country=(probably something dependent on the firm (and maybe year)) if country==.
I am not sure about the country replacement, because the observations are not numbers. I think I will need to generate a new numeric variable for the country replacement.
Thanks a lot!
A: Take a look at replacing missing values with neighboring values FAQ and at user-written xfill. The latter is useful for filling in static variables. It replaces missing values in a cluster with the unique non-missing value within that cluster.
A: For this particular example where the industry variable is the same within firm, you could also write
levelsof firm, local(F)
foreach f of local F{
sum industry if firm==`f'
replace industry=r(mean) if firm==`f' & industry==.
}
This code creates a local variable F that is a list of all different firms. It then summarizes the industry for each firm. Since the industry number will always be the same for observations from the same firm, the mean value will just be the industry number. The code then replaces the value of industry to be this number for all observations within that firm (you could omit the "& industry==." part of the code here and it would still work the same way).
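For illustration only (this is Python, not Stata), the same two ideas — find the unique non-missing value per firm, then fill the gaps — can be sketched with plain data structures:

```python
# Fill missing (None) fields with the unique non-missing value seen
# for the same firm -- a plain-Python sketch of what the Stata loop
# (or xfill) does within each cluster.
rows = [
    {"firm": 123, "industry": 1},
    {"firm": 123, "industry": None},
    {"firm": 444, "industry": None},
    {"firm": 444, "industry": 2},
]

known = {}
for r in rows:                      # first pass: collect known values per firm
    if r["industry"] is not None:
        known[r["firm"]] = r["industry"]

for r in rows:                      # second pass: fill the gaps
    if r["industry"] is None:
        r["industry"] = known.get(r["firm"])

print([r["industry"] for r in rows])  # [1, 1, 2, 2]
```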
| |
doc_5691
|
Msg 156, Level 15, State 1, Procedure AdminReport, Line 3
Incorrect syntax near the keyword 'BEGIN'
Also my customers.firstname could not be bound.
CREATE VIEW [dbo].[AdminReport]
AS
BEGIN
SELECT
b.bookingID,
b.totalCost,
b.bookingDate,
b.paymentConfirmation,
c.customersID,
customers.firstname,
c.surname,
c.contactNum,
paymentConfirmation
FROM
booking b
INNER JOIN customers c
ON b.customerID= c.customersID
Where
paymentConfirmation = 'False'
ORDER BY
bookingDate ASC
END
GO
Could someone help please! Thanks.
A: Your customers.firstname cannot be bound because you aliased the table as "c", so use c.firstname.
Is paymentConfirmation from b? If so, you might as well qualify it in the query to keep it consistent. Run the SELECT statement by itself and see if it gives you an error.
A: Just remove the BEGIN and END. They're not needed in CREATE VIEW syntax.
See: http://www.w3schools.com/sql/sql_view.asp
| |
doc_5692
|
<button onclick="playAudio()">Play</button>
But if the source url is incorrect, it prints in the console:
Uncaught (in promise) NotSupportedError: failed to load because no supported source was found
var myAudio = new Audio();
myAudio.src = "myAudioSrc (incorrect url)";
function playAudio(){
myAudio.play();
//prints error
}
How to prevent this error message in console if the source url is incorrect?
A: The error is generated when you call .play(), not when you assign the invalid src. To suppress it, as the error indicates, you need to catch the error thrown by the promise that .play() returns:
var myAudio = new Audio();
myAudio.src = "myAudioSrc (incorrect url)";
console.log('src has been assigned');
myAudio.play()
.catch(() => void 0);
Because embedded snippets wouldn't show the error anyway, here's an example on JSFiddle to illustrate the difference: https://jsfiddle.net/smjy5b9u/
A: You can simply catch the promise returned (if any), even though this message is, IMO, just Chrome's implementation being too verbose.
var p = new Audio('foo').play();
// check we actually have a Promise (older browser may not return this)
if(p)
p.catch(function(e){/*silent*/});
| |
doc_5693
|
When I was programming in C I used to check for a return value of NULL to indicate that an invalid call had been made. However this does not seem to work in Python. I have also checked for not t and -1. What I am trying to do at this stage is check that the user has not put in an invalid datetime without checking each field individually for errors like 31/02/2014, etc.
How can I check the return value?
A: Python uses exceptions to communicate incorrect inputs. If time.strptime() doesn't raise an exception, the return value is correct and you don't have to validate it yourself.
If an exception is raised, you can catch it with a try...except statement:
try:
t = time.strptime(userinput, format)
except ValueError:
# exception raised, not valid input, give feedback to user
Demo:
>>> import time
>>> time.strptime('01/02/2014 10:42:36', '%d/%m/%Y %H:%M:%S')
time.struct_time(tm_year=2014, tm_mon=2, tm_mday=1, tm_hour=10, tm_min=42, tm_sec=36, tm_wday=5, tm_yday=32, tm_isdst=-1)
>>> time.strptime('31/02/2014 10:42:36', '%d/%m/%Y %H:%M:%S')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python3.4/_strptime.py", line 494, in _strptime_time
tt = _strptime(data_string, format)[0]
File "/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python3.4/_strptime.py", line 465, in _strptime
datetime_date(year, 1, 1).toordinal() + 1
ValueError: day is out of range for month
Note that the exception was thrown here because 31 falls outside the range valid days for February.
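A small helper (the name is my own) that turns this try/except pattern into a reusable validity check:

```python
import time

def is_valid_datetime(text, fmt="%d/%m/%Y %H:%M:%S"):
    """Return True if text parses under fmt, False otherwise."""
    try:
        time.strptime(text, fmt)
        return True
    except ValueError:
        return False

print(is_valid_datetime("01/02/2014 10:42:36"))  # True
print(is_valid_datetime("31/02/2014 10:42:36"))  # False: Feb has no 31st
```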
| |
doc_5694
|
So this is my canvas at the moment:
And I want the user to change the stroke color by using this:
A: Based on the picture you posted, it looks like you want a solution like markE posted in his answer. Although, in the text you mention type="color". If you want to use this input, you can check this working jsfiddle. In this second case, just remember that a lot of browsers do not support it yet. Click here if you want to see a list of browsers that support it.
Below I will try to detail what changes I did to your jsfiddle.
Firstly, you need to set a callback on the color input. This means that when the value of the input changes, it calls the function change. The change function gets the value of the input and stores it in a global variable called color.
var color = "rgb(255,0,0)";
function change(e){
color = this.value;
}
document.getElementById("color").onchange = change;
The other change was inside your draw function. Before drawing, it reads the current value of the variable color. This way, the next time you change the color, the stroke color is updated.
ctx.strokeStyle = color;
With those changes, if in the future you decide to use another tool to get the color (for example, you can check the browser to see if it support the input="color" and use a different color picker in this case), you just need to set the new color in the variable color.
A: Here's a simple example of a color picker on a "tools" canvas used to set the current color (fill/stroke) on the drawing canvas:
Javascript paint Canvas, need help re-locating buttons?
For your "color wheel" picker, you would paint your wheel on the tools canvas and then use context.getImageData to grab the pixel color data under the mouse cursor.
var imgData=ctx.getImageData(mouseX,mouseY,1,1);
var data=imgData.data;
return("rgba("+data[0]+","+data[1]+","+data[2]+","+data[3]+")");
After the user picks their color on the tools canvas, you can use context.strokeStyle and context.fillStyle to make those colors active on the drawing canvas.
A: All you need to do is get the value of the color input and set the strokeStyle to that.
Live Demo
var points=new Array(),
colorInput = document.getElementById("color");
function start(e){
var mouseX = e.pageX - canvas.offsetLeft;
var mouseY = e.pageY - canvas.offsetTop;
paint = true;
ctx.beginPath();
ctx.moveTo(mouseX,mouseY);
points[points.length]=[mouseX,mouseY];
};
function draw(e){
if(paint){
var mouseX = e.pageX - canvas.offsetLeft;
var mouseY = e.pageY - canvas.offsetTop;
    ctx.lineTo(mouseX,mouseY);
    // use the current value of the color input before stroking
    ctx.strokeStyle = colorInput.value;
    ctx.lineJoin = ctx.lineCap = 'round';
    ctx.stroke();
points[points.length]=[mouseX,mouseY];
}
}
function stop(e){
paint=false;
var s=JSON.stringify(points);
localStorage['lines']=s;
}
var paint=false;
var canvas = document.getElementById('myCanvas');
var ctx=canvas.getContext("2d");
canvas.addEventListener('mousedown',start);
canvas.addEventListener('mousemove',draw);
canvas.addEventListener('mouseup',stop);
| |
doc_5695
|
Everything goes fine for the AP with DHCP and 1 other AP, but whenever I add the third AP, I can't connect to the wifi, it just keeps on authenticating.
I tried looking up the problem, of course; what I found is that it is probably due to different security settings on the different APs. However, I can't seem to find where/how to match these settings to resolve the issue.
Could anyone help me out?
The router that has the problem is the TP-Link Archer C7. The one with DHCP enabled is the Netgear WAC104 and the AP that does work is the TP-link AP200.
A: In my case (one root AP and one bridged; I couldn't get around an authentication error on Windows 10 and Android when trying to connect to the bridged AP), what partially worked was going to the Android settings - WiFi - WPS Push Button, and when prompted to push the WPS button on the AP I did so, and voilà: Android suddenly connected, and Windows 10 as well (I did nothing additional there). Just to clarify - the AP is a TP-Link. What didn't work was adding the MAC address as allowed in MAC filtering. There didn't seem to be any security differences, since I use almost all settings at their defaults and the two APs were the same model and firmware. It only worked partially: after a few minutes I tried to connect to the bridged AP again and had to repeat the WPS steps in order for it to work.
| |
doc_5696
|
public void some(Object a){
    Map<?, ?> map = (Map<?, ?>) a; // converting an unknown object to a map
}
I expected the RHS to have an unchecked warning.
While this code has a warning:
public void some(Object a){
    Map<Object, Object> map = (Map<Object, Object>) a;
    // converting an unknown object to Map<Object,Object>
}
Also, for below case there is no warning:
String str = (String) request.getAttribute("asd"); // returns Object
Does this mean that unchecked warnings came with generics? There were no such warnings before introduction of generics in Java?
A: Yes, the unchecked warning is only relevant to generic types.
What it means is: this cast from Object to Map<T1, T2> might succeed because the object is indeed a Map, but the runtime has no way, due to type erasure, to check that it's a Map<T1, T2>. It might very well be a Map<T3, T4>. So you might very well break the type-safety of the map by putting T1, T2 elements inside, or get a ClassCastException when trying to read values from the map.
You have no warning for the first cast because you're casting to a Map<?, ?>, which means that the key and the value type is unknown, which is true. You won't be able to perform a type-unsafe operation on such a map without additional casts: you can't add anything to such a map, and the only thing you can get out of it is instances of Object.
A: You get no "unchecked" warning because the cast is completely "checked" -- a cast to Map<?,?> only needs to ensure that the object is a Map (and nothing else), and that is completely checkable at runtime. In other words, Map<?,?> is a reifiable type.
| |
doc_5697
|
Recently I was working on S3 and playing a bit with S3 object lock. So I enabled S3 object lock for a specific object with governance mode along with legal hold. Now when I tried to overwrite the object with the same file using the following CLI command:
aws s3 cp /Users/John/Desktop/112133.jpg s3://my-buck/112133.jpg
Interestingly, it succeeded, and I checked in the console that the new file was uploaded and marked as the latest version. Now, I read this in the AWS docs:
Bypassing governance mode doesn't affect an object version's legal
hold status. If an object version has a legal hold enabled, the legal
hold remains in force and prevents requests to overwrite or delete the
object version.
Now my question is: how did it get overwritten when this CLI command was used to overwrite the file? I also tried to re-upload the same file in the console, and that worked too.
Moreover, I uploaded another file and enabled object lock with compliance mode, and it also got overwritten. But deletion doesn't work in either case, as expected.
Did I understand something wrong about the whole S3 object lock thing? Any help will be appreciated.
A: To quote the Object Lock documentation:
Object Lock works only in versioned buckets, and retention periods and
legal holds apply to individual object versions. When you lock an
object version, Amazon S3 stores the lock information in the metadata
for that object version. Placing a retention period or legal hold on
an object protects only the version specified in the request. It
doesn't prevent new versions of the object from being created.
| |
doc_5698
| ||
doc_5699
|
https://github.com/GoogleCloudPlatform/appengine-try-java
I am able to deploy it in the AppEngine standard environment, but I am not able to pass my custom app.yaml configuration file in which I define some environment variables. For that I created app.yaml in /src/main/appengine/.
Is there way to configure this directly in the pom.xml file?
I tried to run:
mvn clean appengine:deploy -Dapp.deploy.appEngineDirectory=src/main/appengine/
However that doesn't make any difference and I when I see:
[INFO] GCLOUD: Services to deploy:
[INFO] GCLOUD:
[INFO] GCLOUD: descriptor: [/target/appengine-staging/app.yaml]
When I open this app.yaml, my configuration is not included; the file I wrote was ignored.
A: The Java standard environment by default uses the appengine-web.xml file, not the app.yaml file. And yes, it's possible to set environment variables in it. From the appengine-web.xml Reference:
Optional. The appengine-web.xml file can define environment variables
that are set when the application is running.
<env-variables>
<env-var name="DEFAULT_ENCODING" value="UTF-8" />
</env-variables>
The app you downloaded has the file in src/main/webapp/WEB-INF/appengine-web.xml
|