3b6bfd4c09ecf5686225a7fec6cfbba96ac1b2b9
Q: Reading python documentation in the terminal? Is there a way to install the python documentation that would make it available as if it was a manpage? (I know you can download the source files for the documentation and read them in vim, using less or whatever, but I was thinking about something a bit less manual. Don't want to roll my own.) A: I don't know if this is what you wanted, but everything you can do in IDLE you can do on the command line. Example: C:\>python >>> help(print) >>> help(plt.plot) This way you can access the documentation. A: I don't know if that's exactly what you are looking for, but the Python interactive console offers a help command. You can use it in the following manner. >>> help() Welcome to Python 3.6's help utility! If this is your first time using Python, you should definitely check out the tutorial on the Internet at http://docs.python.org/3.6/tutorial/. Enter the name of any module, keyword, or topic to get help on writing Python programs and using Python modules. To quit this help utility and return to the interpreter, just type "quit". To get a list of available modules, keywords, symbols, or topics, type "modules", "keywords", "symbols", or "topics". Each module also comes with a one-line summary of what it does; to list the modules whose name or summary contain a given string such as "spam", type "modules spam". help> list This will output the whole documentation for all of the list methods. A: You can use help(class-name/method-name/anything), but you can also use __doc__. A special __doc__ docstring is attached to every class and method. For example, look at what I typed into my interpreter. >>> print(str.__doc__) str(object='') -> str str(bytes_or_buffer[, encoding[, errors]]) -> str Create a new string object from the given object. If encoding or errors is specified, then the object must expose a data buffer that will be decoded using the given encoding and error handler. 
Otherwise, returns the result of object.__str__() (if defined) or repr(object). encoding defaults to sys.getdefaultencoding(). errors defaults to 'strict'. >>> print(int.__doc__) int(x=0) -> integer int(x, base=10) -> integer Convert a number or string to an integer, or return 0 if no arguments are given. If x is a number, return x.__int__(). For floating point numbers, this truncates towards zero. If x is not a number or if base is given, then x must be a string, bytes, or bytearray instance representing an integer literal in the given base. The literal can be preceded by '+' or '-' and be surrounded by whitespace. The base defaults to 10. Valid bases are 0 and 2-36. Base 0 means to interpret the base from the string as an integer literal. >>> int('0b100', base=0) 4 It even works for modules. >>> import math >>> math.__doc__ 'This module is always available. It provides access to the\nmathematical functions defined by the C standard.' >>> math.ceil.__doc__ 'ceil(x)\n\nReturn the ceiling of x as an Integral.\nThis is the smallest integer >= x.' >>> Since every class has a __doc__ docstring attached to it, you can access it using class_name.__doc__ >>> print(ord.__doc__) Return the Unicode code point for a one-character string. A: You can use the built-in function help(). In a terminal, start the Python REPL with python, then type help(). Now, for example, if you wish to get help for a class, say IPv4Network in the ipaddress package, you just have to specify the fully qualified path for IPv4Network, i.e. ipaddress.IPv4Network. EXAMPLE: $ python3 Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) [GCC 7.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> help() Welcome to Python 3.6's help utility! help> ipaddress.IPv4Network Help on class IPv4Network in ipaddress: ipaddress.IPv4Network = class IPv4Network(_BaseV4, _BaseNetwork) | This class represents and manipulates 32-bit IPv4 network addresses. 
| | Attributes: [examples for IPv4Network('192.0.2.0/27')] | .network_address: IPv4Address('192.0.2.0') | .hostmask: IPv4Address('0.0.0.31') | .broadcast_address: IPv4Address('192.0.2.32') | .netmask: IPv4Address('255.255.255.224') | .prefixlen: 27 | | Method resolution order: | IPv4Network | _BaseV4 | _BaseNetwork | _IPAddressBase | builtins.object | | Methods defined here: | | __init__(self, address, strict=True) | Instantiate a new IPv4 network object. | | Args: | address: A string or integer representing the IP [& network]. | '192.0.2.0/24' | '192.0.2.0/255.255.255.0' | '192.0.0.2/0.0.0.255' | are all functionally the same in IPv4. Similarly, | '192.0.2.1' and so on A: It's not an exact copy of the documentation, but there's the builtin help() function. In an interactive python session, you just call help(whatever_you_want_to_read_about), for example: >>> help(all) Help on built-in function all in module builtins: all(...) all(iterable) -> bool Return True if bool(x) is True for all values x in the iterable. If the iterable is empty, return True. Alternatively, you can start an interactive help session like this: C:\Users\Rawing>python -c "help()" Welcome to Python 3.4! This is the interactive help utility. help> And then just type the function/class/module you want to know about: help> all Help on built-in function all in module builtins: all(...) all(iterable) -> bool Return True if bool(x) is True for all values x in the iterable. If the iterable is empty, return True. A: On Debian (and derived distributions, like Ubuntu) install the pydoc package. Then you can use the pydoc whatever command. A: I will answer since I'm not satisfied with the accepted answer, probably because I don't use IDLE. Note that I use Ubuntu (terminal); it probably works the same on other OSs. Is there a way to install that? No need. I found that it comes by default. How to access it? Use the help() command in the Python shell. * *In the shell, type the command help(). 
*Now that you're in the help utility, enter anything whose documentation you want to read. Press q to quit the documentation and type quit to quit the help utility.
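Beyond the interactive help() shown above, the same machinery is exposed by the standard-library pydoc module, which can render documentation as plain text from a script (a minimal sketch; `len` is just an arbitrary example target):

```python
import pydoc

# Render the same text that help(len) would page through, but as a plain
# string, so it can be printed, grepped, or piped from a script.
doc_text = pydoc.render_doc(len, renderer=pydoc.plaintext)
print(doc_text.splitlines()[0])
# -> Python Library Documentation: built-in function len in module builtins
```

From a shell, the equivalent one-liner is python -m pydoc len, which pages the same text, and python -m pydoc -k spam searches module summaries the way "modules spam" does inside help().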
stackoverflow
{ "language": "en", "length": 961, "provenance": "stackexchange_0000F.jsonl.gz:914446", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697036" }
a3ef6dbdd56c9c3bdbf64f90c695af284dbf0cfd
Q: Twig - ternary conditional operator it's gonna sound very stupid, but I don't know how to write this ternary condition within my twig template. {% for post in posts %} <div class="news_text {{ loop.index is odd ? left : right }}"> {{ post.content }} </div> {% endfor %} Can anyone tell me what the correct syntax would be, please? :-) A: You might try doing this the other way. Create an if and, according to the result, set output to either left or right. {% for post in posts %} {% set output = "right" %} {% if loop.index is odd %} {% set output = "left" %} {% endif %} <div class="news_text {{ output }}"> {% endfor %} But if you want to do it your way, try: {% for post in posts %} <div class="news_text {{ loop.index is odd ? "left" : "right" }}"> {% endfor %} A: Instead of using if/else, there actually is a ternary operator in Twig. Ternary Operator / Conditional operator {{ (isTrue) ? 'true' : 'false' }} Short-hand syntax is also supported: {{ foo ?: 'no' }} is the same as {{ foo ? foo : 'no' }} {{ foo ? 'yes' }} is the same as {{ foo ? 'yes' : '' }} Official Docs
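Not mentioned in the answers above, but worth a sketch: Twig also ships a built-in cycle() function that alternates values by position, which sidesteps the ternary entirely (assuming the same posts variable as in the question):

```twig
{% for post in posts %}
  {# cycle(list, position) returns list[position % length],
     so consecutive loop indexes alternate between the values #}
  <div class="news_text {{ cycle(['right', 'left'], loop.index) }}">
    {{ post.content }}
  </div>
{% endfor %}
```

With loop.index starting at 1, odd indexes land on 'left' here, matching the behavior of the ternary version.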
stackoverflow
{ "language": "en", "length": 214, "provenance": "stackexchange_0000F.jsonl.gz:914470", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697143" }
ea72d0b93454038d83af143f086ad493af976ad3
Q: How to give formControlName to a FormArray object - Angular 2 (ReactiveFormsModule) In my application I made a form using Reactive Forms. In my app there is a button, Add new Fields; upon clicking this button new fields are added. I am able to add new fields but I am not able to assign a formControlName. Could anyone please show me the right way to add formControlName to these dynamically added fields? Here is the Plnkr for this. A: You have a FormArray of FormGroups, so use formArrayName with a loop of formGroupName, each with its formControlNames (itemDetail, quantity, rate...) <table formArrayName="salesList"> <tr> <th>Item Detail</th> <th>Quantity</th> <th>Rate</th> <th>Tax</th> <th>Amount</th> </tr> <!--Input controls --> <tr *ngFor="let saleList of salesListArray.controls;let i = index" [formGroupName]="i"> <td> <div class="col-sm-6"> <input class="form-control" type="text" placeholder="Item Detail" formControlName = "itemDetail"/> </div> </td> <td> <div class="col-sm-6"> <input class="form-control" type="text" placeholder="0" formControlName = "quantity" /> </div> </td> <td> <div class="col-sm-6"> <input class="form-control" type="text" placeholder="0.00" formControlName = "rate"/> </div> </td> <td> <div class="col-sm-6"> <input class="form-control" type="text" placeholder="Select a tax" formControlName = "tax"/> </div> </td> <td> <div class="col-sm-6"> <span>{{saleList.amount}}</span> </div> </td> </tr> </table> Fixed Plunker See also * *https://scotch.io/tutorials/how-to-build-nested-model-driven-forms-in-angular-2 *https://angular.io/guide/reactive-forms#display-the-formarray
stackoverflow
{ "language": "en", "length": 189, "provenance": "stackexchange_0000F.jsonl.gz:914507", "question_score": "14", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697231" }
aea7192f864bfcd680f9e014b93f56e39f3286dd
Q: Custom validation using AddModelError for child viewmodel I'm having to do some custom validation on a child viewmodel, adding any issues to the ModelState using ModelState.AddModelError(). However I'm presented with an issue trying to get the correct property key. My parent viewmodel contains N ChildViewModels, where I'm validating a child property: public class ParentViewModel { ... public List<ChildViewModel> Children = new List<ChildViewModel>(); } public class ChildViewModel { public int Id { get; set; } ... public int PropertyToValidate { get; set; } } In the view, I iterate over the ChildViewModel collection and display a partial view for each: @foreach(var child in Model.Children) Html.RenderPartial("EditorTemplates/ChildViewModel", child); This results in the property controls and validation placeholders of each child being rendered as follows, with a unique Name prefix: <input type="number" name="Children[83d5b826-f8f0-47fe-8985-c45debd1d846].PropertyToValidate"> <span class="field-validation-valid" data-valmsg-for="Children[83d5b826-f8f0-47fe-8985-c45debd1d846].PropertyToValidate" data-valmsg-replace="true"></span> My question is, how do I get Children[83d5b826-f8f0-47fe-8985-c45debd1d846].PropertyToValidate to use as a key when adding to the ModelStateDictionary? In my controller I have a collection of invalid ChildViewModel and I've tried adding them to the ModelStateDictionary as follows: invalidChildViewModels.ForEach(x => ModelState.AddModelError( ExpressionHelper.GetExpressionText(x), "My custom validation error message")); But this results in a blank key being added and as such the validation message doesn't get displayed.
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:914531", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697329" }
0e5b806b42dfa3d05266ae45d5849eaf8ee3c92d
Q: bash command to redirect stdin in while loop (using sed) I am trying to feed only lines 1 to 1000 of a file (output.txt) to a while loop's stdin. I have tried something like this: #!/bin/bash while read -r line; do echo "$line" done < (sed -n 1,1000p data/output.txt) A: Just tried: #!/bin/bash while read -r line; do echo "$line" done < <(sed -n 1,1000p data/output.txt) Adding another angle bracket "<" did the trick... If someone can explain, that would be interesting. Thanks A: The part <( ) is called process substitution; it can replace a filename in a command. FIFOs can also be used to do the same thing. mkfifo myfifo sed -n 1,1000p data/output.txt > myfifo & while read -r line; do echo "$line" done < myfifo A: You seem to want to pipe the output from one command to another. If so, use a pipe: sed -n 1,1000p data/output.txt | while read -r line; do echo "$line"; done Or, using the right tool for the right job: head -1000 data/output.txt | while read -r ; do something; done
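A brief sketch of why the extra angle bracket matters (bash-specific, not POSIX sh): <(cmd) is process substitution and expands to a readable file name such as /dev/fd/63, while the outer < is ordinary input redirection onto that name. Unlike piping into while, the loop runs in the current shell, so variables set inside it survive:

```shell
#!/usr/bin/env bash
# <(cmd) expands to a /dev/fd path; the outer < redirects the loop's
# stdin to it. Because no pipe is involved, the while body runs in the
# current shell and $count keeps its value after the loop.
count=0
while read -r line; do
  count=$((count + 1))
done < <(seq 1 1000 | sed -n '1,3p')
echo "read $count lines"
# prints: read 3 lines
```

The pipe variant (cmd | while ...) would run the loop in a subshell, and count would still be 0 afterwards, which is the main practical reason to prefer done < <(cmd).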
stackoverflow
{ "language": "en", "length": 182, "provenance": "stackexchange_0000F.jsonl.gz:914538", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697346" }
839fdf0de143e57e4cca7f1d7468a84f86ad5fc6
Q: How to populate Android Room database table on first run? In SQLiteOpenHelper there is an onCreate(SQLiteDatabase ...) method which I used to populate database tables with some initial data. Is there a way to insert some data into a Room database table on the first app run? A: You can populate tables after creating the database; make sure the operation runs on a separate thread. You can follow the classes below to pre-populate the tables the first time. AppDatabase.kt @Database(entities = [User::class], version = 1, exportSchema = false) abstract class AppDatabase : RoomDatabase() { abstract fun userDao(): UserDao companion object { // For Singleton instantiation @Volatile private var instance: AppDatabase? = null fun getInstance(context: Context): AppDatabase { return instance ?: synchronized(this) { instance ?: buildDatabase(context).also { instance = it } } } private fun buildDatabase(context: Context): AppDatabase { return Room.databaseBuilder(context, AppDatabase::class.java, DATABASE_NAME) .addCallback(object : RoomDatabase.Callback() { override fun onCreate(db: SupportSQLiteDatabase) { super.onCreate(db) //pre-populate data Executors.newSingleThreadExecutor().execute { instance?.let { it.userDao().insertUsers(DataGenerator.getUsers()) } } } }) .build() } } } DataGenerator.kt class DataGenerator { companion object { fun getUsers(): List<User>{ return listOf( User(1, "Noman"), User(2, "Aayan"), User(3, "Tariqul") ) } } }
A: I tried a number of ways to do this, each to no avail. First, I tried adding a Migration implementation to Room using the 'addMigrations' method, but found that it only runs during a database upgrade, not on creation. Then, I tried passing a SQLiteOpenHelper implementation to Room using the 'openHelperFactory' method. But after creating a bunch of classes in order to get around Room's package-level access modifiers, I abandoned the effort. I also tried subclassing Room's FrameworkSQLiteOpenHelperFactory but, again, the package-level access modifier of its constructor didn't support this. Finally, I created an IntentService to populate the data and invoked it from the onCreate method of my Application subclass. The approach works, but a better solution should be the upcoming fix to the tracker issue mentioned by Sinigami elsewhere on this page. Darryl [Added July 19, 2017] The issue looks as though it's resolved in Room 1.0.0 Alpha 5. This release added a callback to RoomDatabase that lets you execute code when the database is first created. Take a look at: https://developer.android.com/reference/android/arch/persistence/room/RoomDatabase.Callback.html A: @Provides @Singleton LocalDatabase provideLocalDatabase(@DatabaseInfo String dbName, Context context) { return Room.databaseBuilder(context, LocalDatabase.class, dbName) .addCallback(new RoomDatabase.Callback() { @Override public void onCreate(@NonNull SupportSQLiteDatabase db) { super.onCreate(db); db.execSQL("INSERT INTO id_generator VALUES(1, 1, 1);"); } }) // .addMigrations(LocalDatabase.MIGRATION_1_2) .build(); } A: I tried to use the RoomDatabase.Callback as suggested by Arnav Rao, but to use a callback you cannot use the DAO, as the callback is created before the database has been built. You could use db.insert and content values, but I didn't think that would have been correct. So after looking into it a bit more - it took me ages lol - I actually found the answer when going through the samples provided by Google. 
https://github.com/googlesamples/android-architecture-components/blob/master/PersistenceContentProviderSample/app/src/main/java/com/example/android/contentprovidersample/data/SampleDatabase.java See line 52 and the method on line 71 - in there you can see that after the build of the database instance, the next line calls a method which checks if there are any records in the database (using the DAO), and then, if it’s empty, it inserts the initial data (again using the DAO). Hope this helps anyone else who was stuck :) A: There are 3 ways of pre-populating the db. The first 2 are copying from assets and from a file, which is described here. The third way is programmatic, after db creation: Room.databaseBuilder(context, Database::class.java, "app.db") // ... // 1 .createFromAsset(...) // 2 .createFromFile(...) // 3 .addCallback(DatabaseCallback()) .build() Here is the manual filling: class DatabaseCallback : RoomDatabase.Callback() { override fun onCreate(db: SupportSQLiteDatabase) = db.run { // Notice non-ui thread is here beginTransaction() try { execSQL(...) insert(...) update(...) delete(...) setTransactionSuccessful() } finally { endTransaction() } } } A: Updated You can do this in 3 ways (important: check this for migration details): 1- Populate your database from an exported asset schema Room.databaseBuilder(appContext, AppDatabase.class, "Sample.db") .createFromAsset("database/myapp.db") .build(); 2- Populate your database from a file Room.databaseBuilder(appContext, AppDatabase.class, "Sample.db") .createFromFile(new File("mypath")) .build(); 3- You can run scripts after the database is created, or every time the database is opened, using RoomDatabase.Callback; this class is available in the latest version of the Room library. You need to implement the onCreate and onOpen methods of RoomDatabase.Callback and add it to RoomDatabase.Builder as shown below. 
yourDatabase = Room.databaseBuilder(context, YourDatabase.class, "your db") .addCallback(rdc) .build(); RoomDatabase.Callback rdc = new RoomDatabase.Callback() { public void onCreate (SupportSQLiteDatabase db) { // do something after database has been created } public void onOpen (SupportSQLiteDatabase db) { // do something every time database is open } }; Reference You can use Room DAO itself in the RoomDatabase.Callback methods to fill the database. For complete examples see Pagination and Room example RoomDatabase.Callback dbCallback = new RoomDatabase.Callback() { public void onCreate(SupportSQLiteDatabase db) { Executors.newSingleThreadScheduledExecutor().execute(new Runnable() { @Override public void run() { getYourDB(ctx).yourDAO().insertData(yourDataList); } }); } }; A: I was struggling with this topic too and this solution worked for me: // build.gradle def room_version = "2.2.5" // Room implementation "androidx.room:room-runtime:$room_version" kapt "androidx.room:room-compiler:$room_version" implementation "androidx.room:room-ktx:$room_version" In your App class: // virtually create the db val db = Room.databaseBuilder( appContext, AppDatabase::class.java, Res.getString(R.string.dbname) ).createFromAsset(Res.getString(R.string.source_db_name)).build() // first call to db really creates the db on filesystem db.query("SELECT * FROM " + Room.MASTER_TABLE_NAME, null)
stackoverflow
{ "language": "en", "length": 869, "provenance": "stackexchange_0000F.jsonl.gz:914556", "question_score": "61", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697418" }
8eeb6930aef2daf2964e0c633910ed6a0b3ccddf
Q: Back to app button handler After programmatically sending the user to the settings screen, there's a back-to-app button in the upper left corner: Tapping this button makes the user go back to my app. However, at that point the Application calls its delegate with the same methods that are called when we get back from the background: applicationWillEnterForeground and applicationDidBecomeActive. Meanwhile, I need to distinguish whether the user came back to the app by tapping this particular "back-to-app" button, or by simply entering the app after sending it to the background in any other way. Is this somehow possible? A: I believe there is no way to distinguish by default. My suggestion is, if you are focusing on a particular settings entry change, just compare the new setting's value with the old one in applicationDidBecomeActive. If there is a change, then you can distinguish the flow. However, if there is no change, you can't. A: Do you develop two apps that you want to connect that way? There are far more ways to leave your app than the one you described: * *User taps home button once *User taps home button twice *User presses power button while app is still in the foreground *On 3D-touch enabled devices user does a 3D touch on the leading edge. *User uses the "Back to the app" button you described *User gets a notification and peek-and-pops it *User goes to another app from a notification *User opens notification center and does an action there *User opens control center and does some action there *User uses sharing functionality or a hyperlink inside your app that can trigger other apps. I may have missed something, but I created this list to show that distinguishing between these actions can be very hard. Even if you handle one of these actions, that does not necessarily mean you handle all the others. It would help if you told us more about the use case you have or the problem you're trying to solve. 
A: I would suggest a generic solution for solving similar problems: detecting the different launch options (how our app got into the Active (Running) state). Swift 2.3 In AppDelegate func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool { if let options = launchOptions{ print(options.description) //These options will show the difference between launching from the background and from pressing the back button if (launchOptions?[UIApplicationLaunchOptionsRemoteNotificationKey] != nil) { //this indicates the app was launched by tapping a push notification }else if (launchOptions?[UIApplicationLaunchOptionsLocalNotificationKey] != nil) { //This indicates the app was launched by tapping a local notification }else if (launchOptions?[UIApplicationLaunchOptionsSourceApplicationKey] != nil){ //This indicates the app was launched from a valid source, e.g. on tapping Open App from the App Store when your app is installed, or directly from the home screen } } return true } Reference: Apple docs list all the available launch options which can be detected https://developer.apple.com/documentation/uikit/uiapplicationdelegate/launch_options_keys Use the power of delegate protocol methods by adding observers https://developer.apple.com/documentation/uikit/uiapplicationdelegate Swift 3 Equivalent: //adding observer NotificationCenter.default.addObserver(self, selector: #selector(applicationDidBecomeActive), name: .UIApplicationDidBecomeActive, object: nil) //removing observer NotificationCenter.default.removeObserver(self, name: .UIApplicationDidBecomeActive, object: nil) // callback func applicationDidBecomeActive() { // handle event } Similar questions on Stack Overflow which may help you out: Detect when "back to app" is pressed How to detect user returned to your app in iOS 9 new back link feature? Detect if the app was launched/opened from a push notification Checking launchOptions in Swift 3
stackoverflow
{ "language": "en", "length": 550, "provenance": "stackexchange_0000F.jsonl.gz:914605", "question_score": "12", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697577" }
27553304962ecfb5353c6e76d2392e57b6d30860
Q: Using ggplot's facet_wrap with autocorrelation plot I want to create a ggplot figure of autocorrelations for different subgroups of my data. Using the forecast package, I manage to produce a ggplot figure for the whole sample like this: library(tidyverse) library(forecast) df <- data.frame(val = runif(100), key = c(rep('a', 50), key = rep('b', 50))) ggAcf(df$val) Which produces: But now I'm trying the following to produce the facets and it doesn't work: ggplot(df) + ggAcf(aes(val)) + facet_wrap(~key) Any ideas? A: A possible solution building out the acf values and plot manually. library(tidyverse) library(forecast) df <- data.frame(val = runif(100), key = c(rep('a', 50), key = rep('b', 50))) df_acf <- df %>% group_by(key) %>% summarise(list_acf=list(acf(val, plot=FALSE))) %>% mutate(acf_vals=purrr::map(list_acf, ~as.numeric(.x$acf))) %>% select(-list_acf) %>% unnest() %>% group_by(key) %>% mutate(lag=row_number() - 1) df_ci <- df %>% group_by(key) %>% summarise(ci = qnorm((1 + 0.95)/2)/sqrt(n())) ggplot(df_acf, aes(x=lag, y=acf_vals)) + geom_bar(stat="identity", width=.05) + geom_hline(yintercept = 0) + geom_hline(data = df_ci, aes(yintercept = -ci), color="blue", linetype="dotted") + geom_hline(data = df_ci, aes(yintercept = ci), color="blue", linetype="dotted") + labs(x="Lag", y="ACF") + facet_wrap(~key) A: library(forecast) df <- data.frame(val = runif(100), key = c(rep('a', 50), key = rep('b', 50))) a = subset(df, key == "a") ap = ggAcf(a$val) b = subset(df, key == "b") bp = ggAcf(b$val) library(grid) grid.newpage() pushViewport(viewport(layout=grid.layout(1,2))) print(ap, vp=viewport(layout.pos.row = 1, layout.pos.col = 1)) print(bp, vp=viewport(layout.pos.row = 1, layout.pos.col = 2)) Or: grid.newpage() pushViewport(viewport(layout=grid.layout(1,2))) print(ap, vp=viewport(layout.pos.row = 1, layout.pos.col = 1)) print(bp, vp=viewport(layout.pos.row = 1, layout.pos.col = 2)) A: Adam Spannbauer's answer is excellent and the output is very similar to that of 
forecast::ggAcf, with possibly just the dotted confidence limits lines differing from the dashed ones produced by ggAcf (something very easy to fix if desired). A quick and perhaps simpler alternative is to use ggfortify::autoplot with a list for your different facet values as in the example below: # Load ggfortify require(ggfortify) # Create sample data frame df <- data.frame(val = runif(100), key = c(rep('a', 50), key = rep('b', 50))) # Create list with ACF objects for different key values acf.key <- list() for (i in 1:length(unique(df$key))) { acf.key[[i]] <- acf(df$val[df$key==unique(df$key)[[i]]]) } # Plot using ggfortify::autoplot autoplot(acf.key, ncol=2) Unfortunately it doesn't seem to be possible to get the banners with the facet titles above the plots as in standard ggplot, so the final result is not as polished as that of the answer above. I was also unable to remove the y-axis label of the right-hand-side plot while keeping the label of the left-hand-side plot.
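The split-compute-plot approach shared by these answers is language-agnostic. As a cross-check, here is a small pure-Python sketch (the helper names are mine, not from any of the answers) that computes per-group sample autocorrelations using the usual biased estimator, the same shape of result the grouped R code produces:

```python
def acf(values, nlags):
    """Sample autocorrelation for lags 0..nlags (biased estimator, divides by n)."""
    n = len(values)
    mean = sum(values) / n
    c0 = sum((v - mean) ** 2 for v in values) / n
    out = []
    for lag in range(nlags + 1):
        ck = sum((values[t] - mean) * (values[t + lag] - mean)
                 for t in range(n - lag)) / n
        out.append(ck / c0)
    return out

def grouped_acf(rows, nlags=3):
    """rows: iterable of (key, val) pairs; returns {key: [acf at lag 0..nlags]}."""
    groups = {}
    for key, val in rows:
        groups.setdefault(key, []).append(val)
    return {key: acf(vals, nlags) for key, vals in groups.items()}

rows = [("a", v) for v in [1.0, 2.0, 1.5, 2.5, 1.2, 2.2]] + \
       [("b", v) for v in [3.0, 3.1, 2.9, 3.2, 3.0, 2.8]]
result = grouped_acf(rows, nlags=2)
print(result["a"][0])  # lag-0 autocorrelation is always 1.0
```

Each group's list can then be plotted as one facet, which is exactly what the `group_by`/`unnest` pipeline above feeds to `facet_wrap`.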
stackoverflow
{ "language": "en", "length": 406, "provenance": "stackexchange_0000F.jsonl.gz:914610", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697596" }
cec5407be38e6b5725f91eb6a433f05aaa42e758
Stackoverflow Stackexchange Q: Custom order of display of completions - monaco I am referring to the completion-provider-example for monaco. I noticed that the completions are defined in this order: lodash, express, mkdirp but the suggestions in the editor are listed alphabetically. I would like to customize this behaviour. Is this possible? I have looked at this pull request, but can't get it wired up. Any help is appreciated! A: In the example that you link to, simply add the sortText key to each completion item. This value is used for determining the order of the items in the completion box. Modification to linked example: return [ { label: '"lodash"', kind: monaco.languages.CompletionItemKind.Function, documentation: "The Lodash library exported as Node.js modules.", insertText: '"lodash": "*"', sortText: 'a' }, { label: '"express"', kind: monaco.languages.CompletionItemKind.Function, documentation: "Fast, unopinionated, minimalist web framework", insertText: '"express": "*"', sortText: 'b' }, { label: '"mkdirp"', kind: monaco.languages.CompletionItemKind.Function, documentation: "Recursively mkdir, like <code>mkdir -p</code>", insertText: '"mkdirp": "*"', sortText: 'c' } ]; The sortText values 'a', 'b', 'c' now determine the order of the suggestions.
Q: Custom order of display of completions - monaco I am referring to the completion-provider-example for monaco. I noticed that the completions are defined in this order: lodash, express, mkdirp but the suggestions in the editor are listed alphabetically. I would like to customize this behaviour. Is this possible? I have looked at this pull request, but can't get it wired up. Any help is appreciated! A: In the example that you link to, simply add the sortText key to each completion item. This value is used for determining the order of the items in the completion box. Modification to linked example: return [ { label: '"lodash"', kind: monaco.languages.CompletionItemKind.Function, documentation: "The Lodash library exported as Node.js modules.", insertText: '"lodash": "*"', sortText: 'a' }, { label: '"express"', kind: monaco.languages.CompletionItemKind.Function, documentation: "Fast, unopinionated, minimalist web framework", insertText: '"express": "*"', sortText: 'b' }, { label: '"mkdirp"', kind: monaco.languages.CompletionItemKind.Function, documentation: "Recursively mkdir, like <code>mkdir -p</code>", insertText: '"mkdirp": "*"', sortText: 'c' } ]; The sortText values 'a', 'b', 'c' now determine the order of the suggestions.
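The effect of sortText can be modeled outside the editor. A minimal Python sketch of how a completion widget might order the suggestion list (the fall-back-to-label rule when sortText is missing is my assumption, mirroring the usual completion-widget convention):

```python
items = [
    {"label": '"lodash"',  "sortText": "a"},
    {"label": '"express"', "sortText": "b"},
    {"label": '"mkdirp"',  "sortText": "c"},
]

# The widget sorts by sortText (falling back to label), not by definition order,
# so even a shuffled input comes out in the author's intended order:
shuffled = [items[2], items[0], items[1]]
displayed = sorted(shuffled, key=lambda it: it.get("sortText", it["label"]))
print([it["label"] for it in displayed])  # ['"lodash"', '"express"', '"mkdirp"']
```

This is why assigning 'a', 'b', 'c' as sortText values in the answer above pins lodash, express, mkdirp in that order regardless of their alphabetical labels.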
stackoverflow
{ "language": "en", "length": 171, "provenance": "stackexchange_0000F.jsonl.gz:914624", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697646" }
e68d02ed4f5fc96bb861c28cce6b082e6309279c
Stackoverflow Stackexchange Q: CloudFormation nested stack name I need to set a nested stack name explicitly in a CloudFormation template, but don't see such an option in the AWS documentation. Is there a way to achieve this? I can specify a stack name when running a parent stack, but all nested stacks get a randomly generated stack name based on the resource name, like: VPC: Type: AWS::CloudFormation::Stack Properties: TemplateURL: https://s3-eu-west-1.amazonaws.com/cf-templates-wtmg/vpc.yaml Parameters: EnvironmentName: !Ref AWS::StackName Which will generate a nested stack name in the form parent_stack_name-VPC-random_hash. A: Yes. I was looking for the same thing also but currently it's not available. I think the reason you want a specific stack name is to use it for output referral? What you can do/I did was: 1) For those in the same parent stack, you need to output from the nested stack and then refer directly from the stack like !GetAtt NestedStack1.Outputs.Output1 2) For those which are outside the parent stack, you will need to output twice. Once in the nested stack and once in the parent stack. Then you can refer to the parent stack output. Hope this will help.
Q: CloudFormation nested stack name I need to set a nested stack name explicitly in a CloudFormation template, but don't see such an option in the AWS documentation. Is there a way to achieve this? I can specify a stack name when running a parent stack, but all nested stacks get a randomly generated stack name based on the resource name, like: VPC: Type: AWS::CloudFormation::Stack Properties: TemplateURL: https://s3-eu-west-1.amazonaws.com/cf-templates-wtmg/vpc.yaml Parameters: EnvironmentName: !Ref AWS::StackName Which will generate a nested stack name in the form parent_stack_name-VPC-random_hash. A: Yes. I was looking for the same thing also but currently it's not available. I think the reason you want a specific stack name is to use it for output referral? What you can do/I did was: 1) For those in the same parent stack, you need to output from the nested stack and then refer directly from the stack like !GetAtt NestedStack1.Outputs.Output1 2) For those which are outside the parent stack, you will need to output twice. Once in the nested stack and once in the parent stack. Then you can refer to the parent stack output. Hope this will help. A: I ran into the same thing just today. From the official AWS documentation, supporting the original answer to this question: You can add output values from a nested stack within the containing template. You use the GetAtt function with the nested stack's logical name and the name of the output value in the nested stack in the format Outputs.NestedStackOutputName https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html Looks like we still cannot reference the stack name more easily. The first answer to the question on this page still stands.
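For illustration only, the generated name described above ("parent_stack_name-VPC-random_hash") has a shape that is easy to mimic. This Python sketch is hypothetical: the real suffix is chosen by AWS at stack-creation time and cannot be set by the template author, which is the whole point of the question:

```python
import secrets

def generated_nested_stack_name(parent_stack, logical_id):
    # Hypothetical: only mimics the "parent-LogicalId-RANDOMHASH" shape;
    # the actual suffix is generated by CloudFormation, not by us.
    suffix = secrets.token_hex(6).upper()
    return f"{parent_stack}-{logical_id}-{suffix}"

name = generated_nested_stack_name("prod-network", "VPC")
print(name)  # e.g. prod-network-VPC-1A2B3C4D5E6F
```

Because the suffix is unpredictable, cross-stack references have to go through outputs (the `!GetAtt NestedStack1.Outputs.Output1` pattern) rather than through a hardcoded stack name.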
stackoverflow
{ "language": "en", "length": 263, "provenance": "stackexchange_0000F.jsonl.gz:914653", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697719" }
138881124d90aed3ca85b15ea155faa84dbfe644
Stackoverflow Stackexchange Q: How to access laravel valet project from local network? I built a laravel project in my desktop folder and am running it using valet on project-name.dev. How can I access it from the local network? My IP address is 192.168.1.5, I'm using a Mac, and I tried the code below but it gives me an error in my project. Is there a different solution? php artisan serve --host 192.168.1.5 --port 80 Any help will be appreciated!! A: You can do that by using --host=0.0.0.0. By using 0.0.0.0, you don't need to hardcode your IP address. It will automatically point to your IP address, even if it changes at some point, maybe when you connect to a different network. So you type php artisan serve --host=0.0.0.0 --port=80 You can now access your app using your browser or on another computer in the same network. http://192.168.1.5:80
Q: How to access laravel valet project from local network? I built a laravel project in my desktop folder and am running it using valet on project-name.dev. How can I access it from the local network? My IP address is 192.168.1.5, I'm using a Mac, and I tried the code below but it gives me an error in my project. Is there a different solution? php artisan serve --host 192.168.1.5 --port 80 Any help will be appreciated!! A: You can do that by using --host=0.0.0.0. By using 0.0.0.0, you don't need to hardcode your IP address. It will automatically point to your IP address, even if it changes at some point, maybe when you connect to a different network. So you type php artisan serve --host=0.0.0.0 --port=80 You can now access your app using your browser or on another computer in the same network. http://192.168.1.5:80 A: You can follow the steps below * *First make the valet site unsecure using valet unsecure sitename *Run valet share A publicly accessible URL will be inserted into your clipboard and is ready to paste directly into your browser. That's it. Please note valet share does not currently support sharing sites that have been secured using the valet secure command. https://laravel.com/docs/5.6/valet#sharing-sites A: You forgot the equals sign in your command. It should work like this: php artisan serve --port=80 --host=192.168.1.5 To connect any of your local network devices to your Mac you can type into their browser's address bar: 192.168.1.5:80
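The reason --host=0.0.0.0 works is not Laravel-specific: 0.0.0.0 tells the OS to bind the listening socket on all network interfaces. A minimal Python illustration of the same idea:

```python
import socket

# Binding to 0.0.0.0 makes the socket reachable on every interface,
# so other machines on the LAN can connect without hardcoding this
# machine's IP address into the server command.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))        # port 0 lets the OS pick a free port
server.listen(1)
host, port = server.getsockname()
print(f"listening on {host}:{port}")
server.close()
```

Binding to 127.0.0.1 instead would restrict connections to the local machine, which is why a development server bound to loopback is not reachable from other devices on the network.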
stackoverflow
{ "language": "en", "length": 245, "provenance": "stackexchange_0000F.jsonl.gz:914666", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697753" }
d7ae9670fc517988092bed7fed33cccd2f78e8ad
Stackoverflow Stackexchange Q: Maven Java home configuration I installed the JDK and set up Maven. Calling mvn -version returns: The JAVA_HOME environment variable is not defined correctly This environment variable is needed to run this program NB: JAVA_HOME should point to a JDK not a JRE The $JAVA_HOME variable is set to C:\Program Files\Java\jdk1.8.0_131\bin in system variables. Echoing %JAVA_HOME% returns the path C:\Program Files\Java\jdk1.8.0_131\bin. Where is the problem? A: The question is about Windows but I came here trying to solve the problem on Ubuntu. I faced a similar problem. I configured $JAVA_HOME in /etc/environment like $JAVA_HOME=PATH_TO_JDK for example $JAVA_HOME=/home/max/jdk1.8.0_144 Careful with * *Whitespace after the path declaration $JAVA_HOME=/home/max/jdk1.8.0_144[[_NO_WHITE_SPACE_AFTER_DECLARATION]] *Don't wrap the path in double quotes $JAVA_HOME="/home/max/jdk1.8.0_144" *Don't append /bin, e.g. $JAVA_HOME=/home/max/jdk1.8.0_144/bin <- This is wrong
Q: Maven Java home configuration I installed the JDK and set up Maven. Calling mvn -version returns: The JAVA_HOME environment variable is not defined correctly This environment variable is needed to run this program NB: JAVA_HOME should point to a JDK not a JRE The $JAVA_HOME variable is set to C:\Program Files\Java\jdk1.8.0_131\bin in system variables. Echoing %JAVA_HOME% returns the path C:\Program Files\Java\jdk1.8.0_131\bin. Where is the problem? A: The question is about Windows but I came here trying to solve the problem on Ubuntu. I faced a similar problem. I configured $JAVA_HOME in /etc/environment like $JAVA_HOME=PATH_TO_JDK for example $JAVA_HOME=/home/max/jdk1.8.0_144 Careful with * *Whitespace after the path declaration $JAVA_HOME=/home/max/jdk1.8.0_144[[_NO_WHITE_SPACE_AFTER_DECLARATION]] *Don't wrap the path in double quotes $JAVA_HOME="/home/max/jdk1.8.0_144" *Don't append /bin, e.g. $JAVA_HOME=/home/max/jdk1.8.0_144/bin <- This is wrong A: Yes, the original question is about pure Windows, but for those who came here wondering about the Windows Subsystem for Linux (WSL): I stumbled across this while trying to set up my WSL to use the Windows OpenJDK binaries, though after a while I gave up on that idea. I installed the JDK with 'sudo apt install ...' and then set the WSL Java home from the installed path: root@mypc://# java -version openjdk version "1.8.0_265" OpenJDK Runtime Environment (build 1.8.0_265-8u265-b01-0+deb9u1-b01) OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode) root@mypc://# which java /usr/bin/java root@mypc://# realpath /usr/bin/java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java use your realpath instead. 
root@mypc://# export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 root@mypc://# mvn -v Apache Maven 3.6.2 (40f52333136460af0dc0d7232c0dc0bcf0d9e117; 2019-08-27T15:06:16Z) Maven home: /mnt/c/javaDir/mvn/apache-maven-3.6.2 Java version: 1.8.0_265, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-8-openjdk-amd64/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "4.4.0-43-microsoft", arch: "amd64", family: "unix" Add export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 to ~/.bash_profile so it is set every time the Linux subsystem is launched.
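The three pitfalls listed above (trailing whitespace, surrounding quotes, a /bin suffix) are mechanical enough to check in code. A small, hypothetical validator sketch:

```python
def check_java_home(value):
    """Return a list of problems with a JAVA_HOME value, based on the
    pitfalls described above (illustrative sketch only)."""
    problems = []
    if value != value.strip():
        problems.append("leading/trailing whitespace")
    if value.startswith('"') or value.endswith('"'):
        problems.append("wrapped in double quotes")
    # Normalize separators so the check works for Windows and Unix paths.
    last = value.replace("\\", "/").rstrip("/").split("/")[-1]
    if last == "bin":
        problems.append("points at bin/ instead of the JDK root")
    return problems

print(check_java_home(r"C:\Program Files\Java\jdk1.8.0_131\bin"))
print(check_java_home("/home/max/jdk1.8.0_144"))  # []
```

The first call flags exactly the mistake in the original question: JAVA_HOME must point to the JDK root, not its bin directory.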
stackoverflow
{ "language": "en", "length": 300, "provenance": "stackexchange_0000F.jsonl.gz:914682", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697803" }
b5e39c4b70fe9d725da69b23c422a3962f4467a6
Stackoverflow Stackexchange Q: How to get current shift by current Time from Mysql database Here is my table: shift_master shift(varchar) | start_time(varchar) | end_time(varchar) Morning | 06:00 AM | 02:00 PM AfterNoon | 02:00 PM | 10:00 PM Evening | 10:00 PM | 06:00 AM I need one query to which I will pass the current time and it will return the shift which is currently going on. I really appreciate your help. A: First, change your datatype for start_time and end_time to timestamp and enter the time in its format, and then SELECT shift FROM shift_master WHERE start_time <= NOW() AND end_time > NOW(); It's difficult to compare the time stored as varchar with the current time. If possible, do as described above; it will simplify your work.
Q: How to get current shift by current Time from Mysql database Here is my table: shift_master shift(varchar) | start_time(varchar) | end_time(varchar) Morning | 06:00 AM | 02:00 PM AfterNoon | 02:00 PM | 10:00 PM Evening | 10:00 PM | 06:00 AM I need one query to which I will pass the current time and it will return the shift which is currently going on. I really appreciate your help. A: First, change your datatype for start_time and end_time to timestamp and enter the time in its format, and then SELECT shift FROM shift_master WHERE start_time <= NOW() AND end_time > NOW(); It's difficult to compare the time stored as varchar with the current time. If possible, do as described above; it will simplify your work. A: Try to use this SELECT shift FROM shift_master WHERE TIME(NOW()) BETWEEN TIME(STR_TO_DATE(start_time,'%h:%i %p')) AND TIME(STR_TO_DATE(end_time,'%h:%i %p')) A: Try this query set @now_time = UNIX_TIMESTAMP( now() ) % 86400; SELECT shift FROM ( SELECT shift, UNIX_TIMESTAMP( STR_TO_DATE(start_time,'%h:%i %p') ) % 86400 as start, UNIX_TIMESTAMP( STR_TO_DATE(end_time,'%h:%i %p') ) % 86400 as end FROM shift_master ) M WHERE ( start < end AND @now_time BETWEEN start AND end - 1 ) OR ( start > end AND ( @now_time > start or @now_time < end ) ) A: SELECT shift FROM shift_master WHERE (start_time < end_time and (start_time <= time(now()) and end_time > time(now()))) OR (start_time > end_time and (start_time <= time(now()) or end_time > time(now())))
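The tricky case the later answers wrestle with is the Evening shift, whose end time is earlier than its start time because it spans midnight. The branch logic (ordinary range check when start < end, wrap-around check otherwise) is the same in any language; a plain-Python sketch:

```python
from datetime import time

SHIFTS = [
    ("Morning",   time(6, 0),  time(14, 0)),
    ("AfterNoon", time(14, 0), time(22, 0)),
    ("Evening",   time(22, 0), time(6, 0)),   # wraps past midnight
]

def current_shift(now):
    for name, start, end in SHIFTS:
        if start < end:
            # Normal shift contained within one calendar day.
            if start <= now < end:
                return name
        else:
            # Overnight shift: matches late evening OR early morning.
            if now >= start or now < end:
                return name
    return None

print(current_shift(time(23, 30)))  # Evening
```

This mirrors the SQL in the last two answers: the `start_time < end_time` branch handles day shifts and the `start_time > end_time` branch handles the overnight one.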
stackoverflow
{ "language": "en", "length": 242, "provenance": "stackexchange_0000F.jsonl.gz:914683", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697811" }
3a4afd6e30e4aff6e540ad31fece202ad64fadde
Stackoverflow Stackexchange Q: Can you completely disable CORS support in Spring? As described in CORS preflight request fails due to a standard header, if you send requests to OPTIONS endpoints with the Origin and Access-Control-Request-Method headers set, then they get intercepted by the Spring framework, and your method does not get executed. The accepted solution is to use @CrossOrigin annotations to stop Spring returning a 403. However, I am generating my API code with Swagger Codegen and so I just want to disable this and implement my OPTIONS responses manually. So can you disable the CORS interception in Spring? A: Try this one if you have at least Java 8: @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http.cors().configurationSource(request -> new CorsConfiguration().applyPermitDefaultValues()); } }
Q: Can you completely disable CORS support in Spring? As described in CORS preflight request fails due to a standard header, if you send requests to OPTIONS endpoints with the Origin and Access-Control-Request-Method headers set, then they get intercepted by the Spring framework, and your method does not get executed. The accepted solution is to use @CrossOrigin annotations to stop Spring returning a 403. However, I am generating my API code with Swagger Codegen and so I just want to disable this and implement my OPTIONS responses manually. So can you disable the CORS interception in Spring? A: Try this one if you have at least Java 8: @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http.cors().configurationSource(request -> new CorsConfiguration().applyPermitDefaultValues()); } } A: Previous answers are almost all about ENABLING CORS; this worked for me to disable it. @Configuration public class MyConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http.cors().and().csrf().disable(); } @Bean public WebMvcConfigurer corsConfigurer() { return new WebMvcConfigurer() { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**").allowedMethods("*"); } }; } } A: For newer versions of spring boot: @Configuration public class WebConfiguration implements WebMvcConfigurer { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**").allowedMethods("*"); } } The Kotlin way @Configuration class WebConfiguration : WebMvcConfigurer { override fun addCorsMappings(registry: CorsRegistry) { registry.addMapping("/**").allowedMethods("*") } } A: From their documentation: If you are using Spring Web MVC @Configuration @EnableWebMvc public class WebConfig extends WebMvcConfigurerAdapter { @Override public void addCorsMappings(CorsRegistry registry) { 
registry.addMapping("/**") .allowedMethods("HEAD", "GET", "PUT", "POST", "DELETE", "PATCH"); } } If you are using Spring Boot: @Configuration public class MyConfiguration { @Bean public WebMvcConfigurer corsConfigurer() { return new WebMvcConfigurerAdapter() { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**") .allowedMethods("HEAD", "GET", "PUT", "POST", "DELETE", "PATCH"); } }; } } Yuriy Yunikov's answer is correct as well, but I don't like the "custom" filter. In case Spring Web Security causes you trouble, check this SO answer. A: Try adding the following filter (you can customize it for your own needs and supported methods): @Component public class CorsFilter extends OncePerRequestFilter { @Override protected void doFilterInternal(final HttpServletRequest request, final HttpServletResponse response, final FilterChain filterChain) throws ServletException, IOException { response.addHeader("Access-Control-Allow-Origin", "*"); response.addHeader("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT, PATCH, HEAD"); response.addHeader("Access-Control-Allow-Headers", "Origin, Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers"); response.addHeader("Access-Control-Expose-Headers", "Access-Control-Allow-Origin, Access-Control-Allow-Credentials"); response.addHeader("Access-Control-Allow-Credentials", "true"); response.addIntHeader("Access-Control-Max-Age", 10); filterChain.doFilter(request, response); } } A: None of the above worked for me. Here is how I did it for Spring-Boot 2.6.7 and Java 18. 
(I know I will have to look this up myself the next time I have to set up a spring backend again): @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Override public void configure(HttpSecurity http) throws Exception { http.cors().and().csrf().disable(); } @Bean public CorsFilter corsFilter() { final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); final CorsConfiguration config = new CorsConfiguration(); config.addAllowedOrigin("*"); config.addAllowedHeader("*"); config.addAllowedMethod("*"); source.registerCorsConfiguration("/**", config); return new CorsFilter(source); } } A: Spring MVC * *https://docs.spring.io/spring-framework/docs/5.3.13/reference/html/web.html#mvc-cors-global-java @Configuration(proxyBeanMethods = false) @EnableWebMvc public class WebConfig implements WebMvcConfigurer { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**").allowedMethods("*").allowedHeaders("*"); } } Spring Boot * *https://docs.spring.io/spring-boot/docs/2.5.7/reference/htmlsingle/#features.developing-web-applications.spring-mvc.cors @Configuration(proxyBeanMethods = false) public class MyConfiguration { @Bean public WebMvcConfigurer corsConfigurer() { return new WebMvcConfigurer() { @Override public void addCorsMappings(final CorsRegistry registry) { registry.addMapping("/**").allowedMethods("*").allowedHeaders("*"); } }; } } Spring Security ( with Spring MVC or Spring Boot) If using Spring Security, set following configuration additionally: * *https://docs.spring.io/spring-security/site/docs/5.5.3/reference/html5/#cors @Configuration(proxyBeanMethods = false) public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(final HttpSecurity http) throws Exception { // ... 
// see also: https://docs.spring.io/spring-security/site/docs/5.5.3/reference/html5/#csrf-when http.csrf().disable(); // if Spring MVC is on classpath and no CorsConfigurationSource is provided, // Spring Security will use CORS configuration provided to Spring MVC http.cors(Customizer.withDefaults()); } } A: I use Spring Security in my Spring Boot application and enable access from specific domains (or from all domains). My WebSecurityConfig: @Configuration @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { // ... @Override protected void configure(HttpSecurity http) throws Exception { // add http.cors() http.cors().and().csrf().disable().authorizeRequests() .antMatchers("/get/**").permitAll() .antMatchers("/update/**").hasRole("ADMIN") .anyRequest().authenticated() .and() .httpBasic(); // Authenticate users with HTTP basic authentication // REST is stateless http.sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS); }
From Angular I run requests to backend: export class HttpService { username = '..'; password = '..'; host = environment.api; uriUpdateTank = '/update/tank'; headers: HttpHeaders = new HttpHeaders({ 'Content-Type': 'application/json', Authorization: 'Basic ' + btoa(this.username + ':' + this.password) }); constructor(private http: HttpClient) { } onInsertTank(tank: Tank) { return this.http.put(this.host + this.uriUpdateTank, tank, { headers: this.headers }) .pipe( catchError(this.handleError) ); } ... } Old version. In my Spring Boot application no other ways worked then this: import org.springframework.core.Ordered; import org.springframework.core.annotation.Order; import org.springframework.stereotype.Component; import javax.servlet.*; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; @Component @Order(Ordered.HIGHEST_PRECEDENCE) public class RequestFilter implements Filter { public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) { HttpServletRequest request = (HttpServletRequest) req; HttpServletResponse response = (HttpServletResponse) res; response.setHeader("Access-control-Allow-Origin", "*"); response.setHeader("Access-Control-Allow-Methods", "POST, PUT, GET, OPTIONS, DELETE"); response.setHeader("Access-Control-Allow-Headers", "x-requested-with, x-auth-token"); response.setHeader("Access-Control-Max-Age", "3600"); response.setHeader("Access-Control-Allow-Credentials", "true"); if (!(request.getMethod().equalsIgnoreCase("OPTIONS"))) { try { chain.doFilter(req, res); } catch (Exception ex) { ex.printStackTrace(); } } else { System.out.println("Pre-flight"); response.setHeader("Access-Control-Allowed-Methods", "POST, GET, DELETE"); response.setHeader("Access-Control-Max-Age", "3600"); response.setHeader("Access-Control-Allow-Headers", "authorization, content-type,x-auth-token, " + "access-control-request-headers, access-control-request-method, accept, origin, authorization, x-requested-with"); 
response.setStatus(HttpServletResponse.SC_OK); } } public void init(FilterConfig filterConfig) { } public void destroy() { } } A: I use spring boot and this is solved my problem. I am using React for front-end. import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.web.servlet.config.annotation.CorsRegistry; import org.springframework.web.servlet.config.annotation.WebMvcConfigurer; import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter; @Configuration public class CorsConfig extends WebMvcConfigurerAdapter { @Bean public WebMvcConfigurer corsConfigurer() { return new WebMvcConfigurerAdapter() { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**") .allowedMethods("HEAD", "GET", "PUT", "POST", "DELETE", "PATCH"); } }; } }
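Stripped of Spring specifics, every configuration above answers the same question per request: which CORS response headers, if any, to emit for a given Origin. A simplified, framework-free sketch of that decision (header values here are illustrative, not Spring's exact defaults):

```python
def preflight_headers(origin, allowed_origins=("*",),
                      allowed_methods=("GET", "POST", "PUT", "DELETE", "PATCH")):
    """Decide the CORS headers for a preflight from `origin`.
    Returns None when the origin is not allowed (no CORS headers added,
    so the browser blocks the cross-origin call)."""
    if "*" not in allowed_origins and origin not in allowed_origins:
        return None
    return {
        "Access-Control-Allow-Origin": "*" if "*" in allowed_origins else origin,
        "Access-Control-Allow-Methods": ", ".join(allowed_methods),
        "Access-Control-Max-Age": "3600",
    }

print(preflight_headers("https://www.yourdomain.com"))
```

The "disable CORS checks" answers correspond to the wildcard branch (every origin gets permissive headers), while the whitelist-based configurations correspond to the echo-the-origin branch.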
stackoverflow
{ "language": "en", "length": 972, "provenance": "stackexchange_0000F.jsonl.gz:914712", "question_score": "47", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697883" }
6be2ffca474b90e511a5a6b00927b02bf6381809
Stackoverflow Stackexchange Q: Why a constant escapes to heap in golang? Here is my golang code to set a key C to value 1 in redis It uses third party code in order to connect to redis. package main import ( "github.com/garyburd/redigo/redis" ) func main() { c, err := redis.Dial("tcp", ":6379") _, err = c.Do("SET", "C", 1) if err != nil { panic(err) } } On doing go build -gcflags "-m -m", it reports ./del.go:41: 1 escapes to heap ./del.go:41: from c.Do("SET", "C", 1) (parameter to indirect call) at ./del.go:41 Why does 1 escape to the heap ? It is a simple constant whose complete information is known at compile time and it takes just 64bits on a 64bit system. Why should it be stored in heap memory? A: The signature of Do is: Do(commandName string, args ...interface{}) (reply interface{}, err error) Because args is a variadic (slice of) interface{} it's heap allocated. I believe there could be some optimisations in the pipeline for simple cases like this in future versions of Go: https://github.com/golang/go/issues/15528
Q: Why a constant escapes to heap in golang? Here is my golang code to set a key C to value 1 in redis It uses third party code in order to connect to redis. package main import ( "github.com/garyburd/redigo/redis" ) func main() { c, err := redis.Dial("tcp", ":6379") _, err = c.Do("SET", "C", 1) if err != nil { panic(err) } } On doing go build -gcflags "-m -m", it reports ./del.go:41: 1 escapes to heap ./del.go:41: from c.Do("SET", "C", 1) (parameter to indirect call) at ./del.go:41 Why does 1 escape to the heap ? It is a simple constant whose complete information is known at compile time and it takes just 64bits on a 64bit system. Why should it be stored in heap memory? A: The signature of Do is: Do(commandName string, args ...interface{}) (reply interface{}, err error) Because args is a variadic (slice of) interface{} it's heap allocated. I believe there could be some optimisations in the pipeline for simple cases like this in future versions of Go: https://github.com/golang/go/issues/15528
stackoverflow
{ "language": "en", "length": 172, "provenance": "stackexchange_0000F.jsonl.gz:914720", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697906" }
3c84f7571450e166173a2b8f1cb0ece16b9787f1
Stackoverflow Stackexchange Q: How to leave project? I can not find a way to leave the project, the owner of which is not me. Any attempts to do this through the IAM lead to nothing. I want the project to stop showing up in my account. A: A member of the project with the permission resourcemanager.projects.setIamPolicy is required to add/remove/update IAM policies on the project. The IAM role roles/owner, contains this permission. The member with this permission is the only member that will be able to remove you from the project typically this is the "Project Owner". Unless your membership is being granted via a Google Group / Google Apps domain that you belong to. If your Google account is added directly, the "Project Owner" can remove you by: * *Accessing the project *Navigating to "IAM & Admin" *Selecting your Google account email from the Members list *Selecting "Remove" If your access is granted via a Google Group / Google Apps domain, you will need to remove yourself from those entities. Currently, the project will continue to appear in your Project Selection window until you are removed.
Q: How to leave a project? I cannot find a way to leave a project that I am not the owner of. Any attempts to do this through IAM lead to nothing. I want the project to stop showing up in my account. A: A member of the project with the permission resourcemanager.projects.setIamPolicy is required to add/remove/update IAM policies on the project. The IAM role roles/owner contains this permission. A member with this permission is the only one able to remove you from the project; typically this is the "Project Owner", unless your membership is granted via a Google Group / Google Apps domain that you belong to. If your Google account is added directly, the "Project Owner" can remove you by: * *Accessing the project *Navigating to "IAM & Admin" *Selecting your Google account email from the Members list *Selecting "Remove" If your access is granted via a Google Group / Google Apps domain, you will need to remove yourself from those entities. Currently, the project will continue to appear in your Project Selection window until you are removed.
stackoverflow
{ "language": "en", "length": 185, "provenance": "stackexchange_0000F.jsonl.gz:914725", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44697921" }
ec167cc7e83d4833eeaed59aaa3329b70b13a055
Stackoverflow Stackexchange Q: How to store MongoDB data with docker-compose I have this docker-compose: version: "2" services: api: build: . ports: - "3007:3007" links: - mongo mongo: image: mongo volumes: - /data/mongodb/db:/data/db ports: - "27017:27017" The volumes, /data/mongodb/db:/data/db, is the first part (/data/mongodb/db) where the data is stored inside the image and the second part (/data/db) where it's stored locally? It works on production (ubuntu) but when i run it on my dev-machine (mac) I get: ERROR: for mongo Cannot start service mongo: error while creating mount source path '/data/mongodb/db': mkdir /data/mongodb: permission denied Even if I run it as sudo. I've added the /data directory in the "File Sharing"-section in the docker-program on the mac. Is the idea to use the same docker-compose on both production and development? How do I solve this issue? A: Actually it's the other way around (HOST:CONTAINER), /data/mongodb/db is on your host machine and /data/db is in the container. You have added the /data in the shared folders of your dev machine but you haven't created /data/mongodb/db, that's why you get a permission denied error. Docker doesn't have the rights to create folders.
Q: How to store MongoDB data with docker-compose I have this docker-compose: version: "2" services: api: build: . ports: - "3007:3007" links: - mongo mongo: image: mongo volumes: - /data/mongodb/db:/data/db ports: - "27017:27017" The volumes, /data/mongodb/db:/data/db, is the first part (/data/mongodb/db) where the data is stored inside the image and the second part (/data/db) where it's stored locally? It works on production (ubuntu) but when i run it on my dev-machine (mac) I get: ERROR: for mongo Cannot start service mongo: error while creating mount source path '/data/mongodb/db': mkdir /data/mongodb: permission denied Even if I run it as sudo. I've added the /data directory in the "File Sharing"-section in the docker-program on the mac. Is the idea to use the same docker-compose on both production and development? How do I solve this issue? A: Actually it's the other way around (HOST:CONTAINER), /data/mongodb/db is on your host machine and /data/db is in the container. You have added the /data in the shared folders of your dev machine but you haven't created /data/mongodb/db, that's why you get a permission denied error. Docker doesn't have the rights to create folders. A: I get the impression you need to learn a little bit more about the fundamentals of Docker to fully understand what you are doing. There are a lot of potential pitfalls running Docker in production, and my recommendation is to learn the basics really well so you know how to handle them. Here is what the documentation says about volumes: [...] specify a path on the host machine (HOST:CONTAINER) So you have it the wrong way around. The first part is the past on the host, e.g. your local machine, and the second is where the volume is mounted within the container. Regarding your last question, have a look at this article: Using Compose in production. A: Since Docker-Compose syntax version 3.2, you can use a long syntax of the volume property to specify the type of volume. 
This allows you to create a "bind" volume, which effectively links a folder on your host to a folder in the container. Here is an example: version : "3.2" services: mongo: container_name: mongo image: mongo volumes: - type: bind source: /data target: /data/db ports: - "42421:27017" source is the folder on your host and target is the folder in your container. More information available here: https://docs.docker.com/compose/compose-file/#long-syntax
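For completeness, a named volume sidesteps host-path permission problems like the one in the question entirely, because Docker manages the storage location itself. A minimal sketch (the volume name mongodata is arbitrary):

```yaml
version: "2"
services:
  mongo:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - mongodata:/data/db   # named volume; Docker manages the host-side path

volumes:
  mongodata:
```

Named volumes behave the same on Linux and macOS, which also avoids the dev/production difference described in the question.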
stackoverflow
{ "language": "en", "length": 393, "provenance": "stackexchange_0000F.jsonl.gz:914751", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698004" }
1399b8b56ef5c311d630ee0ed545bf722d6ba047
Stackoverflow Stackexchange Q: (A-Frame) local glTF won't load; Cannot read property 'slice' of undefined I took the code from the A-Frame School in which a glTF model is loaded. Then I downloaded the Sample Models from Khronos, this box, and tried to load it, but I get this error (several times): GLTFLoader.js:979 Uncaught (in promise) TypeError: Cannot read property 'slice' of undefined at GLTFLoader.js:979 at i (GLTFLoader.js:570) at GLTFLoader.js:975 at <anonymous> I can load .obj models and have tried several versions of the model, but I always get the error. The sample code does work locally, meaning it loads the model correctly when getting it from the A-Frame CDN. Here's the code for completeness: <!DOCTYPE html> <html> <head> <title>glTF Model</title> <meta name="description" content="glTF Model"> <script src="https://rawgit.com/aframevr/aframe/b395ea0/dist/aframe-master.min.js"></script> </head> <body> <a-scene> <a-assets> <a-asset-item id="boxModel" src="Box.gltf"></a-asset-item> </a-assets> <a-gltf-model src="#boxModel"></a-gltf-model> </a-scene> </body> </html> A: Replace the A-Frame version with this: <script src="https://aframe.io/releases/0.7.1/aframe.min.js"></script>
Q: (A-Frame) local glTF won't load; Cannot read property 'slice' of undefined I took the code from the A-Frame School in which a glTF model is loaded. Then I downloaded the Sample Models from Khronos, this box, and tried to load it, but I get this error (several times): GLTFLoader.js:979 Uncaught (in promise) TypeError: Cannot read property 'slice' of undefined at GLTFLoader.js:979 at i (GLTFLoader.js:570) at GLTFLoader.js:975 at <anonymous> I can load .obj models and have tried several versions of the model, but I always get the error. The sample code does work locally, meaning it loads the model correctly when getting it from the A-Frame CDN. Here's the code for completeness: <!DOCTYPE html> <html> <head> <title>glTF Model</title> <meta name="description" content="glTF Model"> <script src="https://rawgit.com/aframevr/aframe/b395ea0/dist/aframe-master.min.js"></script> </head> <body> <a-scene> <a-assets> <a-asset-item id="boxModel" src="Box.gltf"></a-asset-item> </a-assets> <a-gltf-model src="#boxModel"></a-gltf-model> </a-scene> </body> </html> A: Replace the A-Frame version with this: <script src="https://aframe.io/releases/0.7.1/aframe.min.js"></script> A: Those models are in the 2.0 folder, which means you need a glTF v2.0 loader. A-Frame 0.5.0/0.6.0 supports glTF v1; glTF v2 will be supported in A-Frame 0.7.0, but in the meantime you can use gltf-model-next from Don McCurdy: https://github.com/donmccurdy/aframe-extras/blob/master/src/loaders/gltf-model-next.js Or you can grab different models from the 1.0 folder instead: https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/1.0
stackoverflow
{ "language": "en", "length": 193, "provenance": "stackexchange_0000F.jsonl.gz:914807", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698203" }
2ed820dcf1160557b41a402e33d52080e33dda65
Stackoverflow Stackexchange Q: How to completely uninstall kubernetes I installed kubernetes cluster using kubeadm following this guide. After some period of time, I decided to reinstall K8s but run into troubles with removing all related files and not finding any docs on official site how to remove cluster installed via kubeadm. Did somebody meet the same problems and know the proper way of removing all files and dependencies? Thank you in advance. For more information, I removed kubeadm, kubectl and kubelet using apt-get purge/remove but when I started installing the cluster again I got next errors: [preflight] Some fatal errors occurred: Port 6443 is in use Port 10251 is in use Port 10252 is in use /etc/kubernetes/manifests is not empty /var/lib/kubelet is not empty Port 2379 is in use /var/lib/etcd is not empty A: use kubeadm reset command. this will un-configure the kubernetes cluster.
Q: How to completely uninstall kubernetes I installed kubernetes cluster using kubeadm following this guide. After some period of time, I decided to reinstall K8s but run into troubles with removing all related files and not finding any docs on official site how to remove cluster installed via kubeadm. Did somebody meet the same problems and know the proper way of removing all files and dependencies? Thank you in advance. For more information, I removed kubeadm, kubectl and kubelet using apt-get purge/remove but when I started installing the cluster again I got next errors: [preflight] Some fatal errors occurred: Port 6443 is in use Port 10251 is in use Port 10252 is in use /etc/kubernetes/manifests is not empty /var/lib/kubelet is not empty Port 2379 is in use /var/lib/etcd is not empty A: use kubeadm reset command. this will un-configure the kubernetes cluster. A: I use the following scripts to completely uninstall an existing Kubernetes cluster and its running docker containers sudo kubeadm reset sudo apt purge kubectl kubeadm kubelet kubernetes-cni -y sudo apt autoremove sudo rm -fr /etc/kubernetes/; sudo rm -fr ~/.kube/; sudo rm -fr /var/lib/etcd; sudo rm -rf /var/lib/cni/ sudo systemctl daemon-reload sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X # remove all running docker containers docker rm -f `docker ps -a | grep "k8s_" | awk '{print $1}'` A: If wanting to make it easily repeatable, it would make sense to make this into a script. 
This is assuming you are using a Debian based OS: #!/bin/sh # Kube Admin Reset kubeadm reset # Remove all packages related to Kubernetes apt remove -y kubeadm kubectl kubelet kubernetes-cni apt purge -y kube* # Remove docker containers/ images ( optional if using docker) docker image prune -a systemctl restart docker apt purge -y docker-engine docker docker.io docker-ce docker-ce-cli containerd containerd.io runc --allow-change-held-packages # Remove parts apt autoremove -y # Remove all folder associated to kubernetes, etcd, and docker rm -rf ~/.kube rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/lib/etcd2/ /var/run/kubernetes ~/.kube/* rm -rf /var/lib/docker /etc/docker /var/run/docker.sock rm -f /etc/apparmor.d/docker /etc/systemd/system/etcd* # Delete docker group (optional) groupdel docker # Clear the iptables iptables -F && iptables -X iptables -t nat -F && iptables -t nat -X iptables -t raw -F && iptables -t raw -X iptables -t mangle -F && iptables -t mangle -X NOTE: This will destroy everything related to Kubernetes, etcd, and docker on the Node/server this command is run against! A: If you are clearing the cluster so that you can start again, then, in addition to what @rib47 said, I also do the following to ensure my systems are in a state ready for kubeadm init again: kubeadm reset -f rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/* iptables -F && iptables -X iptables -t nat -F && iptables -t nat -X iptables -t raw -F && iptables -t raw -X iptables -t mangle -F && iptables -t mangle -X systemctl restart docker You then need to re-install docker.io, kubeadm, kubectl, and kubelet to make sure they are at the latest versions for your distribution before you re-initialize the cluster. EDIT: Discovered that calico adds firewall rules to the raw table so that needs clearing out as well. 
A: In my "Ubuntu 16.04", I use next steps to completely remove and clean Kubernetes (installed with "apt-get"): kubeadm reset sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube* sudo apt-get autoremove sudo rm -rf ~/.kube And restart the computer. A: kubeadm reset /*On Debian base Operating systems you can use the following command.*/ # on debian base sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube* /*On CentOs distribution systems you can use the following command.*/ #on centos base sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube* # on debian base sudo apt-get autoremove #on centos base sudo yum autoremove /For all/ sudo rm -rf ~/.kube A: The guide you linked now has a Tear Down section: Talking to the master with the appropriate credentials, run: kubectl drain <node name> --delete-local-data --force --ignore-daemonsets kubectl delete node <node name> Then, on the node being removed, reset all kubeadm installed state: kubeadm reset
stackoverflow
{ "language": "en", "length": 693, "provenance": "stackexchange_0000F.jsonl.gz:914834", "question_score": "82", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698283" }
3a20d94c7453a7fc1c975b78f0cbc9aa42aa105e
Stackoverflow Stackexchange Q: Can I disable select mode in a PowerShell script? PowerShell default is that when you click inside the PowerShell console, PowerShell goes into "select mode" and pauses the script until you hit space, enter or escape. I have a Script with an infinite-loop while ($true) {} which should always run, how can I tell PowerShell to not stop the script when someone accidentally clicks into the PowerShell window? A: In Windows PowerShell, this can be achieved by: * *Right click on PowerShell window icon; *Select "Properties"; *Disable "Edit Options" > "QuickEdit Mode".
Q: Can I disable select mode in a PowerShell script? PowerShell default is that when you click inside the PowerShell console, PowerShell goes into "select mode" and pauses the script until you hit space, enter or escape. I have a Script with an infinite-loop while ($true) {} which should always run, how can I tell PowerShell to not stop the script when someone accidentally clicks into the PowerShell window? A: In Windows PowerShell, this can be achieved by: * *Right click on PowerShell window icon; *Select "Properties"; *Disable "Edit Options" > "QuickEdit Mode".
stackoverflow
{ "language": "en", "length": 93, "provenance": "stackexchange_0000F.jsonl.gz:914835", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698285" }
bf7869c71adcd17313c30014ba5a825e8a89054a
Stackoverflow Stackexchange Q: How to select multiple XML tags as XElement by attribute value? How do I select multiple XML tags as XElements, filtering on a shared attribute? Given the XML below, I want to select the tags that have action="true": <root> <first action="true"> <path>E:\Myfolder</path> </first> <second> <path>C:\Users\</path> </second> <third action="true"> <name>Mytasks</name> </third> </root> and the output should be like this: <first action="true"> <path>E:\Myfolder</path> </first> <third action="true"> <name>Mytasks</name> </third> Can anybody please help me? I used FirstOrDefault(), but I am getting only one record among all of them. A: Try this (note the explicit types, and that the string comparison is == rather than a lowercase equals(), which does not exist in C#; casting the possibly-missing attribute to string also avoids a NullReferenceException): XDocument xd = XDocument.Load("XML FILE PATH"); XElement xe = xd.Root; IEnumerable<XElement> oColl = from x in xe.Descendants() where (string)x.Attribute("action") == "true" select x;
Q: How to select multiple XML tags as XElement by attribute value? How do I select multiple XML tags as XElements, filtering on a shared attribute? Given the XML below, I want to select the tags that have action="true": <root> <first action="true"> <path>E:\Myfolder</path> </first> <second> <path>C:\Users\</path> </second> <third action="true"> <name>Mytasks</name> </third> </root> and the output should be like this: <first action="true"> <path>E:\Myfolder</path> </first> <third action="true"> <name>Mytasks</name> </third> Can anybody please help me? I used FirstOrDefault(), but I am getting only one record among all of them. A: Try this (note the explicit types, and that the string comparison is == rather than a lowercase equals(), which does not exist in C#; casting the possibly-missing attribute to string also avoids a NullReferenceException): XDocument xd = XDocument.Load("XML FILE PATH"); XElement xe = xd.Root; IEnumerable<XElement> oColl = from x in xe.Descendants() where (string)x.Attribute("action") == "true" select x; A: Try this (jQuery): $(path).find('root').find('[action="true"]')
stackoverflow
{ "language": "en", "length": 108, "provenance": "stackexchange_0000F.jsonl.gz:914838", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698291" }
b35626d1d8ffff76530a368d1a6e66452ba8a4b4
Stackoverflow Stackexchange Q: Electron JS and TypeScript - Using TS-Node with main process How would you adjust the following script to allow electron main process to use Typescript with ts-node? "scripts": { "shell": "cross-env NODE_ENV=development electron ts-node ./app/main.ts" } A: cross-env NODE_ENV=development electron -r ts-node/register ./app/main.ts https://github.com/TypeStrong/ts-node#programmatic You can require ts-node and register the loader for future requires by using require('ts-node').register({ /* options */ }). You can also use file shortcuts - node -r ts-node/register or node -r ts-node/register/transpile-only - depending on your preferences.
Q: Electron JS and TypeScript - Using TS-Node with main process How would you adjust the following script to allow electron main process to use Typescript with ts-node? "scripts": { "shell": "cross-env NODE_ENV=development electron ts-node ./app/main.ts" } A: cross-env NODE_ENV=development electron -r ts-node/register ./app/main.ts https://github.com/TypeStrong/ts-node#programmatic You can require ts-node and register the loader for future requires by using require('ts-node').register({ /* options */ }). You can also use file shortcuts - node -r ts-node/register or node -r ts-node/register/transpile-only - depending on your preferences. A: While using electron -r ts-node/register works for simple cases, you can also have your script point to a bootscript JavaScript file that just does this: require('ts-node').register() require('./app/main') The benefit of doing this is that you can specify ts-node options. Necessary for e.g. monorepos, where you might to something like require('ts-node').register({ project: './app/tsconfig.json' }). See the ts-node docs for options that can be specified: https://typestrong.org/ts-node/api/interfaces/RegisterOptions.html
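Putting the two answers above together, the scripts section could look like this; the shell:boot entry and the boot.js filename are just illustrative names for the bootscript approach, not part of any convention:

```json
{
  "scripts": {
    "shell": "cross-env NODE_ENV=development electron -r ts-node/register ./app/main.ts",
    "shell:boot": "cross-env NODE_ENV=development electron ./boot.js"
  }
}
```

The first script suits simple setups; the second is useful when ts-node needs options such as a specific tsconfig, as described above.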
stackoverflow
{ "language": "en", "length": 147, "provenance": "stackexchange_0000F.jsonl.gz:914877", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698418" }
870e068efa460c9d59408c5994ec3dd802e9b306
Stackoverflow Stackexchange Q: use both beforeAction() and behaviors() methods in a controller in Yii2 I want to use both the beforeAction() and behaviors() methods in my controller. If I add the beforeAction() method to my code, then the behaviors() method stops working. And if I remove the beforeAction() method, then the behaviors() method works. I don't want to remove beforeAction(), as it is used to disable the CSRF token for AJAX calls. public function beforeAction($action) { if($action->id =='ignore' || $action->id =='accept') { $this->enableCsrfValidation = false; } return true; } And I want to use the behaviors() method for authentication. public function behaviors() { return [ 'access' => [ 'class' => AccessControl::className(), 'only' => ['create','index','update','change','view','page','active','list'], 'rules' => [ [ 'actions' => ['create','index','update','change','view','page','active','list'], 'allow' => true, 'roles' => ['@'], 'matchCallback' => function ($rule, $action) { echo "string"; die; }, ], ], 'denyCallback' => function ($rule, $action) { return $this->redirect(Yii::$app->request->baseUrl); } ], ]; } Is there any way to use both methods in the same controller? A: public function beforeAction($action) { if($action->id =='ignore' || $action->id =='accept') { $this->enableCsrfValidation = false; } //return true; return parent::beforeAction($action); } You need to return the parent beforeAction(); returning true directly skips the parent implementation that triggers the beforeAction event, so attached filters such as AccessControl never run.
Q: use both beforeAction() and behaviors() methods in a controller in Yii2 I want to use both the beforeAction() and behaviors() methods in my controller. If I add the beforeAction() method to my code, then the behaviors() method stops working. And if I remove the beforeAction() method, then the behaviors() method works. I don't want to remove beforeAction(), as it is used to disable the CSRF token for AJAX calls. public function beforeAction($action) { if($action->id =='ignore' || $action->id =='accept') { $this->enableCsrfValidation = false; } return true; } And I want to use the behaviors() method for authentication. public function behaviors() { return [ 'access' => [ 'class' => AccessControl::className(), 'only' => ['create','index','update','change','view','page','active','list'], 'rules' => [ [ 'actions' => ['create','index','update','change','view','page','active','list'], 'allow' => true, 'roles' => ['@'], 'matchCallback' => function ($rule, $action) { echo "string"; die; }, ], ], 'denyCallback' => function ($rule, $action) { return $this->redirect(Yii::$app->request->baseUrl); } ], ]; } Is there any way to use both methods in the same controller? A: public function beforeAction($action) { if($action->id =='ignore' || $action->id =='accept') { $this->enableCsrfValidation = false; } //return true; return parent::beforeAction($action); } You need to return the parent beforeAction(); returning true directly skips the parent implementation that triggers the beforeAction event, so attached filters such as AccessControl never run.
stackoverflow
{ "language": "en", "length": 181, "provenance": "stackexchange_0000F.jsonl.gz:914878", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698419" }
636cd9ca52ed69bd45340d549443fc08e90c3b19
Stackoverflow Stackexchange Q: Map JSON to List<Map<String, Object>> I have JSON of the format [{ "id" : "a01", "name" : "random1", "val" : "random2" }, { "id" : "a03", "name" : "random3", "val" : "random4" }] I need to map it to a List holding Map objects. How do I achieve it? Even if I am only able to convert this JSON to a List of Strings of the form { "id" : "a01", "name" : "random1", "val" : "random2" } I have a method to convert each individual String to a Map. A: You will need to pass a TypeReference to readValue with the desired result type: ObjectMapper mapper = new ObjectMapper(); List<Map<String, Object>> data = mapper.readValue(json, new TypeReference<List<Map<String, Object>>>(){});
Q: Map JSON to List<Map<String, Object>> I have JSON of the format [{ "id" : "a01", "name" : "random1", "val" : "random2" }, { "id" : "a03", "name" : "random3", "val" : "random4" }] I need to map it to a List holding Map objects. How do I achieve it? Even if I am only able to convert this JSON to a List of Strings of the form { "id" : "a01", "name" : "random1", "val" : "random2" } I have a method to convert each individual String to a Map. A: You will need to pass a TypeReference to readValue with the desired result type: ObjectMapper mapper = new ObjectMapper(); List<Map<String, Object>> data = mapper.readValue(json, new TypeReference<List<Map<String, Object>>>(){}); A: Use Gson with the specified type to convert to a list of maps: Gson gson = new Gson(); Type resultType = new TypeToken<List<Map<String, Object>>>(){}.getType(); List<Map<String, Object>> result = gson.fromJson(json, resultType); A: private List<Map<String, Object>> map_string; JSON you need to pass: { "map_formula" : [ { "A+" : "if(price<400),\"40000\",0", "B" : "", "c" : "", "d" : "", "e" : "" }, { "poor" : "value for poor", "good" : "300", "average" : "300", "excellent" : "300" } ] }
stackoverflow
{ "language": "en", "length": 198, "provenance": "stackexchange_0000F.jsonl.gz:914883", "question_score": "26", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698437" }
b6f5d6c2d4378052636655f69e664c183fb32a7e
Stackoverflow Stackexchange Q: Firebase UI auth and my own server side back end db I am a mobile app developer and happened to come across firebaseUI. I am trying to build an application which communicates with a webservice which provide some features. As a starting point to create a webservice (java webservice) I was trying to use FirebaseUI for authentication. When I started doing a sample project in firebase, I could see that the user data get saved in the Firebase's real time db. Is there any way to plug in our webservice code to interact with our own db instead of firebase db ? If there is a way, what extra things I have to do from my side to create a complete login-sign up- forgot password flow(or what features provided by firebse UI I have to rewrite)? Another related question: If I adopt this firebase UI - webserver flow, can I continue using the free firebase plan as I am not using the real time database or storage now?
Q: Firebase UI auth and my own server side back end db I am a mobile app developer and happened to come across firebaseUI. I am trying to build an application which communicates with a webservice which provide some features. As a starting point to create a webservice (java webservice) I was trying to use FirebaseUI for authentication. When I started doing a sample project in firebase, I could see that the user data get saved in the Firebase's real time db. Is there any way to plug in our webservice code to interact with our own db instead of firebase db ? If there is a way, what extra things I have to do from my side to create a complete login-sign up- forgot password flow(or what features provided by firebse UI I have to rewrite)? Another related question: If I adopt this firebase UI - webserver flow, can I continue using the free firebase plan as I am not using the real time database or storage now?
stackoverflow
{ "language": "en", "length": 169, "provenance": "stackexchange_0000F.jsonl.gz:914907", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698523" }
85317fabd657a3e84e2ddec8d5d271f640184474
Stackoverflow Stackexchange Q: Match everything between but exclude words - notepad++ Input: start some T1 random T2 text T3 end should result in: start T1 T2 T3 end I tried using >(?<=start)[\S\s]*?(?=end) to match everything between start and end and exclude T1 T2 T3 with: ^(?!T\d) Is it possible to combine them into a single regex that can be pasted into notepad++ for people not familiar with writing code to do it in several passes? A: You could use this regular expression:        Find: ^(?!T\d|start).*\R(?=(^(?!start$).*\R)*end$)        Replace: (empty)        . matches newlines: No Click "Replace All" These assumptions are made: * *The start and end delimiters should each be the only text on their lines (so not ---start or start ///), *They should appear in pairs in the correct order (so first start and then end) *They should not be nested, so after a start cannot come another start before you have an end. The look-ahead makes this a rather inefficient regular expression, as with each match it needs to check again the text that follows until the next end.
Q: Match everything between but exclude words - notepad++ Input: start some T1 random T2 text T3 end should result in: start T1 T2 T3 end I tried using >(?<=start)[\S\s]*?(?=end) to match everything between start and end and exclude T1 T2 T3 with: ^(?!T\d) Is it possible to combine them into a single regex that can be pasted into notepad++ for people not familiar with writing code to do it in several passes? A: You could use this regular expression:        Find: ^(?!T\d|start).*\R(?=(^(?!start$).*\R)*end$)        Replace: (empty)        . matches newlines: No Click "Replace All" These assumptions are made: * *The start and end delimiters should each be the only text on their lines (so not ---start or start ///), *They should appear in pairs in the correct order (so first start and then end) *They should not be nested, so after a start cannot come another start before you have an end. The look-ahead makes this a rather inefficient regular expression, as with each match it needs to check again the text that follows until the next end.
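The pattern can also be checked outside Notepad++. Below is a sketch in Python; note that Python's re module has no \R shorthand (that is a feature of the Boost engine Notepad++ uses), so it is replaced with \r?\n:

```python
import re

text = "start\nsome\nT1\nrandom\nT2\ntext\nT3\nend\n"

# Port of the Notepad++ find pattern: delete any line between start and
# end that is not "start" itself and does not begin with T plus a digit.
pattern = re.compile(
    r"^(?!T\d|start).*\r?\n(?=(?:^(?!start$).*\r?\n)*end$)",
    re.MULTILINE,
)

print(pattern.sub("", text))
# start
# T1
# T2
# T3
# end
```

The same caveat as above applies: this assumes start/end pairs are on their own lines, appear in order, and are not nested.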
stackoverflow
{ "language": "en", "length": 175, "provenance": "stackexchange_0000F.jsonl.gz:914937", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698623" }
14bc1ff461d00e00a6763ce3b606d4b8346f7d1c
Stackoverflow Stackexchange Q: Why do I get Error: Could not find or load main class .jar when I run docker image I have written my docker file as below: From java:8 EXPOSE 8081 ADD /target/Demo-0.0.1-SNAPSHOT.jar Demo.jar ENTRYPOINT ["java",".jar","Demo.jar"] ("Demo" is my project name. It creates a Spring boot application.) I am using a Linux machine. A: Make sure you have mentioned "-jar" in the ENTRYPOINT ["java","-jar","Demo.jar"]. you can try to execute the jar using normal java command( java -jar target/Demo-0.0.1-SNAPSHOT.jar ) to make sure the jar builds properly. FROM java:8 ADD target/Demo-0.0.1-SNAPSHOT.jar Demo.jar EXPOSE 8081 ENTRYPOINT ["java","-jar","Demo.jar"]
Q: Why do I get Error: Could not find or load main class .jar when I run docker image I have written my docker file as below: From java:8 EXPOSE 8081 ADD /target/Demo-0.0.1-SNAPSHOT.jar Demo.jar ENTRYPOINT ["java",".jar","Demo.jar"] ("Demo" is my project name. It creates a Spring boot application.) I am using a Linux machine. A: Make sure you have mentioned "-jar" in the ENTRYPOINT ["java","-jar","Demo.jar"]. you can try to execute the jar using normal java command( java -jar target/Demo-0.0.1-SNAPSHOT.jar ) to make sure the jar builds properly. FROM java:8 ADD target/Demo-0.0.1-SNAPSHOT.jar Demo.jar EXPOSE 8081 ENTRYPOINT ["java","-jar","Demo.jar"] A: You might have a typo: ENTRYPOINT ["java","-jar","Demo.jar"]
stackoverflow
{ "language": "en", "length": 103, "provenance": "stackexchange_0000F.jsonl.gz:914944", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698651" }
18a4111ee394f01746076023093811c3f79693ee
Stackoverflow Stackexchange Q: Why does Gson's toJson return null when using a one-liner map? Somehow new Gson().toJson() returns null when I give it a one-liner map: Map map = new HashMap() {{ put("hei", "sann"); }}; new Gson().toJson(map); // returns null! All other implementations I can find work as expected: new JSONObject(map).toString(); // returns {"hei":"sann"} JsonOutput.toJson(map); // returns {"hei":"sann"} new ObjectMapper().writeValueAsString(map); // returns {"hei":"sann"} It works if I wrap the map: new Gson().toJson(new HashMap(map)); // returns {"hei":"sann"} A regular map works too: map = new HashMap(); map.put("hei", "sann"); new Gson().toJson(map); // returns {"hei":"sann"} Is it a bug or a feature? I've created a test project at https://github.com/henrik242/map2json Relevant Gson issue: https://github.com/google/gson/issues/1080
Q: Why does Gson's toJson return null when using a one-liner map? Somehow new Gson().toJson() returns null when I give it a one-liner map: Map map = new HashMap() {{ put("hei", "sann"); }}; new Gson().toJson(map); // returns null! All other implementations I can find work as expected: new JSONObject(map).toString(); // returns {"hei":"sann"} JsonOutput.toJson(map); // returns {"hei":"sann"} new ObjectMapper().writeValueAsString(map); // returns {"hei":"sann"} It works if I wrap the map: new Gson().toJson(new HashMap(map)); // returns {"hei":"sann"} A regular map works too: map = new HashMap(); map.put("hei", "sann"); new Gson().toJson(map); // returns {"hei":"sann"} Is it a bug or a feature? I've created a test project at https://github.com/henrik242/map2json Relevant Gson issue: https://github.com/google/gson/issues/1080
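The root cause is visible with plain reflection, no Gson required: double-brace initialization creates an anonymous subclass of HashMap, and Gson (at least in the versions discussed in the linked issue) excludes anonymous and local classes by default, so the map serializes as null. A minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class DoubleBraceDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<String, String>() {{
            put("hei", "sann");
        }};

        // The "one-liner" map is not a HashMap -- it is an instance of an
        // anonymous subclass of HashMap, which is what trips up Gson.
        System.out.println(map.getClass() == HashMap.class);   // false
        System.out.println(map.getClass().isAnonymousClass()); // true
        System.out.println(map instanceof HashMap);            // true
    }
}
```

Wrapping it (new HashMap<>(map)) or building a plain HashMap restores the expected behavior, as the question already demonstrates.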
stackoverflow
{ "language": "en", "length": 107, "provenance": "stackexchange_0000F.jsonl.gz:914981", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698766" }
c3ee82d409d0da6c4667a8789d154794dd030f8f
Stackoverflow Stackexchange Q: Highlight corner of table cell - html/css I have a table on my website and I want to highlight corners of some cells like in Excel. Is there a way to do this in CSS? I have already applied the Bootstrap style and a data table extension on my table. A: Use a linear-gradient td { padding: 1em 3em; border: 1px solid grey; background-image: linear-gradient(225deg, red, red 10px, transparent 10px, transparent); } <table> <tr> <td></td> <td></td> <td></td> </tr> </table> Or a Pseudo-element td { padding: 1em 3em; border: 1px solid grey; position: relative; } td::after { content: ''; position: absolute; top: 0; right: 0; width: 0; height: 0; border-width: 7.5px; border-style: solid; border-color: red red transparent transparent; } <table> <tr> <td></td> <td></td> <td></td> </tr> </table> A: As @Era suggested, you can use :before to build that triangle inside the cell with only CSS. For positioning you will need to set position: relative; on that cell; this will make every absolutely positioned item inside it be placed relative to the cell. Then with some borders you can easily build the red corner. table, tr, td{ border: 1px solid black; } table{ border-collapse: collapse; width: 300px; } td.corner{ position: relative; } td.corner:before{ content: ''; position: absolute; top: 0; right: 0; border-left: 5px solid transparent; border-top: 5px solid red; } <table> <tr> <td>a</td> <td>b</td> </tr> <tr> <td class="corner">c</td> <td>d</td> </tr> </table>
stackoverflow
{ "language": "en", "length": 227, "provenance": "stackexchange_0000F.jsonl.gz:914982", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698771" }
e52419b35a36b67525d85b76d6307317e3c50068
Stackoverflow Stackexchange Q: Xcode 9 - iOS 11: NSURLConnection - sendAsynchronousRequest fails I just downloaded the latest version of Xcode (9.0 beta (9M136h)). However, when I try to make a request to my server in the iOS 11 simulator (using NSURLConnection sendAsynchronousRequest), an error is received: NSURLSession/NSURLConnection HTTP load failed (kCFStreamErrorDomainSSL, -9807) NSURLConnection finished with error - code -1202 The NSError object contains the message - @"NSLocalizedDescription" : @"The certificate for this server is invalid. You might be connecting to a server that is pretending to be “***” which could put your confidential information at risk." The plist contains: <key>NSAppTransportSecurity</key> <dict> <key>NSAllowsArbitraryLoads</key> <true/> </dict> so it is not the problem in this case (I guess) Needless to say, it works in iOS 10/9/8. Any suggestions? Thanks in advance! A: You need to allow your application to run HTTP (no S) connections. By default, Apple only allows HTTPS: * *go to your info.plist *then press the plus icon on any of them *Search for "App Transport Security Settings" *click the little arrow to the left and find "Allow arbitrary loads"; by default it is set to "NO", change it to "YES" A: For all of you who get this error in iOS 11, please make sure you're working against a valid (secure) certificate on your server. In our case, the certificate wasn't strict enough. Once our server guy installed a new valid certificate, the problem was gone. One way to check whether the certificate is valid is to paste the problematic link into the browser. As a result, you might see that the connection is not secure: A: Since you've got an invalid certificate error, I'll make the following suggestion based on my personal security practice. If you're still in your servicing terms with your CA, ask them to issue a new valid certificate for you. Check your Keychain settings and make sure no CA cert is missing. 
Alternatively, you can issue your own self-signed certificate for testing purposes, and add it to your local Keychain as a trust anchor. A search for "how to create a self-signed x509 certificate" will return something you might find useful.
stackoverflow
{ "language": "en", "length": 350, "provenance": "stackexchange_0000F.jsonl.gz:914987", "question_score": "17", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698788" }
ad1b9336560d6cb24518bf0f6f4522efdb27c5da
Stackoverflow Stackexchange Q: JavaScript library to read doc and docx on client I am searching for a JavaScript library which can read .doc and .docx files. The focus is only on the text content. I am not interested in pictures, formulas or other special structures in MS Word files. It would be great if the library worked with the JavaScript FileReader as shown in the code below. function readExcel(currfile) { var reader = new FileReader(); reader.onload = (function (_file) { return function (e) { //here should the magic happen }; })(currfile); reader.onabort = function (e) { alert('File read canceled'); }; reader.readAsBinaryString(currfile); } I searched through the internet, but I could not find what I was looking for. A: You can use docxtemplater for this (even if normally it is used for templating, it can also just get the text of the document): var zip = new JSZip(content); var doc=new Docxtemplater().loadZip(zip) var text= doc.getFullText(); console.log(text); See the Doc for installation information (I'm the maintainer of this project) However, it only handles docx, not doc A: Now you can extract the text content from doc/docx without installing external dependencies. 
You can use the node library called any-text Currently, it supports a number of file extensions like PDF, XLSX, XLS, CSV etc Usage is very simple: * *Install the library as a dependency (/dev-dependency) npm i -D any-text * *Make use of the getText method to read the text content var reader = require('any-text'); reader.getText(`path-to-file`).then(function (data) { console.log(data); }); * *You can also use the async/await notation var reader = require('any-text'); const text = await reader.getText(`path-to-file`); console.log(text); Sample Test var reader = require('any-text'); const chai = require('chai'); const expect = chai.expect; describe('file reader checks', () => { it('check docx file content', async () => { expect( await reader.getText(`${process.cwd()}/test/files/dummy.doc`) ).to.contains('Lorem ipsum'); }); }); I hope it will help!
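For docx specifically, the core trick behind such extractors is small: a .docx file is just a ZIP archive whose body text lives in word/document.xml. Here is a stdlib-only Java sketch of that idea (the class and method names are my own, not from any library above); binary .doc files use a proprietary format and genuinely need a real parser.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class DocxText {
    // A .docx is a ZIP archive; the body text is stored in word/document.xml.
    // This pulls that entry out and crudely strips the XML tags, which is
    // enough to recover the plain text content.
    static String extractText(InputStream docx) throws IOException {
        try (ZipInputStream zip = new ZipInputStream(docx)) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                if (entry.getName().equals("word/document.xml")) {
                    String xml = new String(zip.readAllBytes(), StandardCharsets.UTF_8);
                    // crude: drop tags, keep character data
                    return xml.replaceAll("<[^>]+>", "");
                }
            }
        }
        return "";
    }
}
```

A real extractor would also insert spaces between paragraph runs; this sketch only shows why unzipping plus tag-stripping gets you most of the way.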
stackoverflow
{ "language": "en", "length": 304, "provenance": "stackexchange_0000F.jsonl.gz:915020", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698896" }
27a27d565041f0feb069ca523f600e527a4f7218
Stackoverflow Stackexchange Q: Neo4j delete graph out of memory I'm using Neo4j on a Linux machine with 16G memory and I'm trying to delete the whole graph. It has 11353056 relationships vs 19900 nodes. When I run MATCH (n) DETACH DELETE n, after loading for a while I get the out of memory error. How can I delete the graph? Should I proceed by deleting the relationships first and then delete the nodes to prevent that problem? A: Do like this to remove records with a limitation: MATCH (n) WITH n LIMIT 10000 DETACH DELETE n RETURN count(*); If you want to remove everything like property keys, stop the Neo4j service and remove everything from data/graph.db A: Instead of using Cypher to delete the whole graph, you can stop Neo4j and delete the data/graph.db folder. Afterwards restart Neo4j. Another suggestion is to run your deletion query with a limit, repeating it until no more records exist. For example: MATCH (n) WITH n LIMIT 5000 DETACH DELETE n A: Install the APOC plugin and use this APOC command; it will do the delete in batches of whatever you set batchSize to. CALL apoc.periodic.iterate('MATCH (n) RETURN n', 'DETACH DELETE n', {batchSize:1000}) A: You can delete the graph in multiple steps by using LIMIT, e.g. MATCH (n) WITH n LIMIT 100 DETACH DELETE n
stackoverflow
{ "language": "en", "length": 220, "provenance": "stackexchange_0000F.jsonl.gz:915032", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698936" }
c687f7ad612b72ab09f639eb38f65ef296011e54
Stackoverflow Stackexchange Q: Requesting blob images and transforming to base64 with fetch API I have some images that will be displayed in a React app. I perform a GET request to a server, which returns images in BLOB format. Then I transform these images to base64. Finally, I'm setting these base64 strings inside the src attribute of an image tag. Recently I've started using the Fetch API. I was wondering if there is a way to do the transforming in 'one' go. Below is an example to explain my idea so far and whether this is even possible with the Fetch API. I haven't found anything online yet. let reader = new window.FileReader(); fetch('http://localhost:3000/whatever') .then(response => response.blob()) .then(myBlob => reader.readAsDataURL(myBlob)) .then(myBase64 => { imagesString = myBase64 }).catch(error => { //Lalala }) A: Thanks to @GetFree, here's the async/await version of it, with promise error handling: const imageUrlToBase64 = async url => { const response = await fetch(url); const blob = await response.blob(); return new Promise((onSuccess, onError) => { try { const reader = new FileReader() ; reader.onload = function(){ onSuccess(this.result) } ; reader.readAsDataURL(blob) ; } catch(e) { onError(e); } }); }; Usage: const base64 = await imageUrlToBase64('https://via.placeholder.com/150'); A: The return value of FileReader.readAsDataURL is not a promise. You have to do it the old way. 
fetch('http://localhost:3000/whatever') .then( response => response.blob() ) .then( blob =>{ var reader = new FileReader() ; reader.onload = function(){ console.log(this.result) } ; // <--- `this.result` contains a base64 data URI reader.readAsDataURL(blob) ; }) ; General purpose function: function urlContentToDataUri(url){ return fetch(url) .then( response => response.blob() ) .then( blob => new Promise( callback =>{ let reader = new FileReader() ; reader.onload = function(){ callback(this.result) } ; reader.readAsDataURL(blob) ; }) ) ; } //Usage example: urlContentToDataUri('http://example.com').then( dataUri => console.log(dataUri) ) ; //Usage example using await: let dataUri = await urlContentToDataUri('http://example.com') ; console.log(dataUri) ; A: If somebody needs to do it in Node.js: const fetch = require('cross-fetch'); const response = await fetch(url); const base64_body = (await response.buffer()).toString('base64');
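The data URI that FileReader.readAsDataURL produces is just "data:<mime>;base64,<encoded bytes>", so the same transform can be sketched server-side in plain Java (class and method names here are illustrative, not from any library mentioned above):

```java
import java.util.Base64;

public class DataUri {
    // Builds a data URI (the same shape of string that
    // FileReader.readAsDataURL yields) from raw bytes and a MIME type.
    static String toDataUri(byte[] bytes, String mimeType) {
        return "data:" + mimeType + ";base64,"
                + Base64.getEncoder().encodeToString(bytes);
    }

    public static void main(String[] args) {
        // First four bytes of a PNG header, as a stand-in for blob data.
        byte[] fakeImage = {(byte) 0x89, 'P', 'N', 'G'};
        System.out.println(toDataUri(fakeImage, "image/png"));
    }
}
```

Such a string can be dropped straight into an <img src="..."> attribute, just like the browser-side result.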
stackoverflow
{ "language": "en", "length": 325, "provenance": "stackexchange_0000F.jsonl.gz:915039", "question_score": "22", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44698967" }
b8baea7b2d336da0af1aebb35e98a3c4e3ac3595
Stackoverflow Stackexchange Q: How to fast delete many files I have a folder in Windows Server with subfolders and ≈50000 files. When I click the right mouse button and choose delete (or shift+delete) – all files are deleted in 10-20 seconds. When I delete files using code – 1500-4000 seconds. Delete large number of files – doesn't work for me. My code: string folderPath = @"C://myFolder"; DirectoryInfo folderInfo = new DirectoryInfo(folderPath); folderInfo.Delete(true); // true - recursive, with sub-folders How to delete files faster? A: A much faster way to delete files is to use the Windows functions instead of the .NET ones. You will need to first import the function: [DllImport("kernel32.dll", SetLastError = true)] [return: MarshalAs(UnmanagedType.Bool)] static extern bool DeleteFile(string lpFileName); And then you can do this: var files = Directory.EnumerateFiles(path, "*", SearchOption.AllDirectories); foreach (string file in files) { DeleteFile(file); } Once the files are deleted, which is the slowest part by using the managed APIs, you can call Directory.Delete(path, true) to delete the empty folders. A: Since the question is actually about deleting network shared folders and it's stated that the explorer based delete is much faster than the C# internal delete mechanism, it might help to just invoke a windows shell based delete. ProcessStartInfo Info = new ProcessStartInfo(); Info.Arguments = "/C rd /s /q \"<your-path>\""; Info.WindowStyle = ProcessWindowStyle.Hidden; Info.CreateNoWindow = true; Info.FileName = "cmd.exe"; Process.Start(Info); Of course, you have to replace <your-path>. However, I don't have the infrastructure and files available to test the performance myself right now. A: Not quite sure why the method DirectoryInfo.Delete() takes too much time when deleting folders that have a lot of files and sub-folders. I suspect that the method may also do quite a few things that are unnecessary. 
I write a small class to to use Win API without doing too many unnecessary things to test my idea. It takes about 40 seconds to delete a folder that have 50,000 files and sub-folders. So, hope it helps. I use this PowerScript to generate the testing files. $folder = "d:\test1"; For ($i=0; $i -lt 50000; $i++) { New-Item -Path $folder -Name "test$i.txt" -ItemType "file" -Value $i.ToString(); } The following is the code in C#. using System; using System.Collections.Generic; // using System.Runtime.InteropServices; using System.IO; // namespace TestFileDelete { class FileDelete { [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] struct WIN32_FIND_DATAW { public FileAttributes dwFileAttributes; public System.Runtime.InteropServices.ComTypes.FILETIME ftCreationTime; public System.Runtime.InteropServices.ComTypes.FILETIME ftLastAccessTime; public System.Runtime.InteropServices.ComTypes.FILETIME ftLastWriteTime; public UInt32 nFileSizeHigh; // DWORD public UInt32 nFileSizeLow; // DWORD public UInt32 dwReserved0; // DWORD public UInt32 dwReserved1; // DWORD [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)] public String cFileName; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 14)] public String cAlternateFileName; }; static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1); [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] private static extern IntPtr FindFirstFileW(String lpFileName, out WIN32_FIND_DATAW lpFindFileData); [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] private static extern Boolean FindNextFileW(IntPtr hFindFile, out WIN32_FIND_DATAW lpFindFileData); [DllImport("kernel32.dll")] private static extern Boolean FindClose(IntPtr handle); [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] public static extern Boolean DeleteFileW(String lpFileName); // Deletes an existing file [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] private 
static extern Boolean RemoveDirectoryW(String lpPathName); // Deletes an existing empty directory // This method check to see if the given folder is empty or not. public static Boolean IsEmptyFolder(String folder) { Boolean res = true; if (folder == null && folder.Length == 0) { throw new Exception(folder + "is invalid"); } WIN32_FIND_DATAW findFileData; String searchFiles = folder + @"\*.*"; IntPtr searchHandle = FindFirstFileW(searchFiles, out findFileData); if (searchHandle == INVALID_HANDLE_VALUE) { throw new Exception("Cannot check folder " + folder); } do { if ((findFileData.dwFileAttributes & FileAttributes.Directory) == FileAttributes.Directory) { // found a sub folder if (findFileData.cFileName != "." && findFileData.cFileName != "..") { res = false; break; } } // if ((findFileData.dwFileAttributes & FileAttributes.Directory) == FileAttributes.Directory) else { // found a file res = false; break; } } while (FindNextFileW(searchHandle, out findFileData)); FindClose(searchHandle); return res; } // public static Boolean IsEmptyFolder(String folder) // This method deletes the given folder public static Boolean DeleteFolder(String folder) { Boolean res = true; // keep non-empty folders to delete later (after we delete everything inside) Stack<String> nonEmptyFolder = new Stack<String>(); String currentFolder = folder; do { Boolean isEmpty = false; try { isEmpty = IsEmptyFolder(currentFolder); } catch (Exception ex) { // Something wrong res = false; break; } if (!isEmpty) { nonEmptyFolder.Push(currentFolder); WIN32_FIND_DATAW findFileData; IntPtr searchHandle = FindFirstFileW(currentFolder + @"\*.*", out findFileData); if (searchHandle != INVALID_HANDLE_VALUE) { do { // for each folder, find all of its sub folders and files String foundPath = currentFolder + @"\" + findFileData.cFileName; if ((findFileData.dwFileAttributes & FileAttributes.Directory) == FileAttributes.Directory) { // found a sub folder if (findFileData.cFileName != "." 
&& findFileData.cFileName != "..") { if (IsEmptyFolder(foundPath)) { // found an empty folder, delete it if (!(res = RemoveDirectoryW(foundPath))) { Int32 error = Marshal.GetLastWin32Error(); break; } } else { // found a non-empty folder nonEmptyFolder.Push(foundPath); } } // if (findFileData.cFileName != "." && findFileData.cFileName != "..") } // if ((findFileData.dwFileAttributes & FileAttributes.Directory) == FileAttributes.Directory) else { // found a file, delete it if (!(res = DeleteFileW(foundPath))) { Int32 error = Marshal.GetLastWin32Error(); break; } } } while (FindNextFileW(searchHandle, out findFileData)); FindClose(searchHandle); } // if (searchHandle != INVALID_HANDLE_VALUE) }// if (!IsEmptyFolder(folder)) else { if (!(res = RemoveDirectoryW(currentFolder))) { Int32 error = Marshal.GetLastWin32Error(); break; } } if (nonEmptyFolder.Count > 0) { currentFolder = nonEmptyFolder.Pop(); } else { currentFolder = null; } } while (currentFolder != null && res); return res; } // public static Boolean DeleteFolder(String folder) }; class Program { static void Main(string[] args) { DateTime t1 = DateTime.Now; try { Boolean b = FileDelete.DeleteFolder(@"d:\test1"); } catch (Exception ex) { Console.WriteLine(ex.Message); } DateTime t2 = DateTime.Now; TimeSpan ts = t2 - t1; Console.WriteLine(ts.Seconds); } } }
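The long Win32 example above boils down to one pattern: delete children before parents, so every directory is empty by the time it is removed. For comparison, here is a minimal sketch of that same bottom-up traversal using Java's NIO (unrelated to the C# code, just the pattern):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class DeleteTree {
    // Walks the tree depth-first and deletes in reverse (deepest-first)
    // order, so every directory is empty by the time it is removed.
    static void deleteRecursively(Path root) throws IOException {
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }
}
```

Like the managed .NET APIs, this still pays one syscall per entry; the speed tricks in the answers above come from skipping the extra per-file bookkeeping, not from a different traversal order.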
stackoverflow
{ "language": "en", "length": 936, "provenance": "stackexchange_0000F.jsonl.gz:915116", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699238" }
445cbd1e749c55497b8be23aa717c2e93ac4135c
Stackoverflow Stackexchange Q: Auto instrumentation like Spring Cloud Sleuth in Node.js While the Zipkin SDK is available for Node.js, I'm looking for auto-instrumentation like Spring Cloud Sleuth in a Node.js app. Is there a module or framework for it in Node.js? What I mean by auto-instrumentation above is that in Java I don't have to write code to instrument servlets/filters/rest clients with Zipkin. Sleuth automatically does that, while Zipkin instrumentation seems manual in Node.js. A: You can use the node-sleuth package for this: https://www.npmjs.com/package/node-sleuth?activeTab=explore
stackoverflow
{ "language": "en", "length": 80, "provenance": "stackexchange_0000F.jsonl.gz:915124", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699269" }
f1108de4c55d99ab36e09aca5c0421056720a2f2
Stackoverflow Stackexchange Q: How to remove Azure On-Premises Data Gateway connection gateway installations? The Azure On-Premises Data Gateway got installed with the wrong Azure region settings on a Virtual Machine in Azure. After uninstalling the On-Premises Data Gateway, the original connection gateway installation is not removed and a new one with the same name cannot be created anymore. I even tried deleting the Virtual Machine to see if they would disappear. This had no impact. When creating the on-premises data gateway in the Azure Portal I can still select the old installation names. Also the following request returns those installations: https://management.azure.com/subscriptions/{subscriptionId}/Microsoft.Web/locations/northeurope/connectionGatewayInstallations?api-version=2015-08-01-preview I tried removing them with a DELETE http request to the connectionGatewayInstallations endpoint, but that returned a 400 Bad Request: https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Web/locations/northeurope/connectionGatewayInstallations/{connectionGatewayInstallationId}?api-version=2015-08-01-preview { "error": { "code": "DisallowedResourceOperation", "message": "The operation 'delete' on resource type 'locations/connectionGatewayInstallations' is disallowed." } } Anyone having any ideas on how to delete those connectionGatewayInstallations? A: Looks like this issue resolved itself. The endpoint URI for the connection gateway installation has changed in the API and the old connections are gone now.
stackoverflow
{ "language": "en", "length": 174, "provenance": "stackexchange_0000F.jsonl.gz:915143", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699327" }
18c7883b7d1ec05d4c2ff2c0a4bee8ededd9f1ec
Stackoverflow Stackexchange Q: Modified ROUND UP function in Oracle I have a peculiar requirement where I want to round up numbers in the fashion below - 1.14 to 1.5 1.6 to 2 0.8 to 1 7.5 to 7.5 that is, to the nearest multiple of 0.5. Is there a way to achieve this? Best Regards A: If you want to round up to the nearest 0.5: Test Data: CREATE TABLE your_table ( value ) AS SELECT 1.14 FROM DUAL UNION ALL SELECT 1.6 FROM DUAL UNION ALL SELECT 0.8 FROM DUAL UNION ALL SELECT 7.5 FROM DUAL; Query: SELECT value, CEIL( value * 2 ) / 2 AS rounded_value FROM your_table Output: VALUE ROUNDED_VALUE ----- ------------- 1.14 1.5 1.6 2 0.8 1 7.5 7.5 If you want to round away from zero to the nearest 0.5 (not the same thing for negative numbers), you can use: SIGN(value) * CEIL(ABS(value)*2) / 2 A: Here is one way: select floor( val * 2 + 1) / 2
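The CEIL(value*2)/2 trick is not Oracle-specific — it translates directly to any language with a ceiling function. A small Java sketch of both variants from the accepted answer (the method names are mine):

```java
public class RoundHalf {
    // Round up to the next multiple of 0.5: CEIL(value * 2) / 2.
    static double roundUpToHalf(double v) {
        return Math.ceil(v * 2) / 2;
    }

    // Round away from zero to the nearest 0.5:
    // SIGN(value) * CEIL(ABS(value) * 2) / 2.
    static double roundAwayFromZeroToHalf(double v) {
        return Math.signum(v) * Math.ceil(Math.abs(v) * 2) / 2;
    }

    public static void main(String[] args) {
        double[] samples = {1.14, 1.6, 0.8, 7.5};
        for (double v : samples) {
            System.out.println(v + " -> " + roundUpToHalf(v));
        }
    }
}
```

Note the two variants differ only for negative inputs: roundUpToHalf(-1.14) gives -1.0 (toward positive infinity), while roundAwayFromZeroToHalf(-1.14) gives -1.5, matching the SIGN/ABS formula.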
stackoverflow
{ "language": "en", "length": 160, "provenance": "stackexchange_0000F.jsonl.gz:915174", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699418" }
9def4978e9a227270ecba739280f399759756b99
Stackoverflow Stackexchange Q: Truncate partitioned table - PostgreSQL I have a master table called fact_sgw and have a child table called fact_sgw_2016_06 which is just data for June 2016. Could I truncate the child table without losing any data from the master table (fact_sgw)? A: TRUNCATE TABLE <child_table_name> works fine on Postgres 14.
stackoverflow
{ "language": "en", "length": 51, "provenance": "stackexchange_0000F.jsonl.gz:915175", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699421" }
64480ce0de18bebafe09c5b1d7f26b0ba03f9c13
Stackoverflow Stackexchange Q: Keys must be non-empty strings and can't contain ".", "#", "$", "/", "[", or "]" I am trying to write with admin permissions to two places at once in Firebase Functions. Getting this strange error: Error: Firebase.set failed: First argument contains an invalid key (/Auction/TD6MKEhS/-Kn9cMUPkk) My code is: var newPostRef = admin.database().ref().child("History/" + this.customeruid + '/' + this.quoteid + '/' + this.banuid).push(); var newPostKey = newPostRef.key; var updatedBidQuote = {}; // Create the data we want to update let postData = { creationBidDate: admin.database.ServerValue.TIMESTAMP, customeruid: this.banuid, quoteCompanyCreatoruid: this.customeruid, Amount: this.banBid }; updatedBidQuote['/Auction/' + this.customeruid + '/' + this.quoteid] = postData; updatedBidQuote['/History/' + this.customeruid + '/' + this.quoteid + '/' + this.banuid + '/' + newPostKey] = postData; return admin.database().ref().set(updatedBidQuote); I checked the postData object and it didn't have any "." keys or strange values. A: You can only pass full paths into update, so the last line should be: return admin.database().ref().update(updatedBidQuote)
Q: Keys must be non-empty strings and can't contain ".", "#", "$", "/", "[", or "]" I am trying to write with admin permissions to two places at once in Firebase Functions. Getting this strange error: Error: Firebase.set failed: First argument contains an invalid key (/Auction/TD6MKEhS/-Kn9cMUPkk) My code is: var newPostRef = admin.database().ref().child("History/" + this.customeruid + '/' + this.quoteid + '/' + this.banuid).push(); var newPostKey = newPostRef.key; var updatedBidQuote = {}; // Create the data we want to update let postData = { creationBidDate: admin.database.ServerValue.TIMESTAMP, customeruid: this.banuid, quoteCompanyCreatoruid: this.customeruid, Amount: this.banBid }; updatedBidQuote['/Auction/' + this.customeruid + '/' + this.quoteid] = postData; updatedBidQuote['/History/' + this.customeruid + '/' + this.quoteid + '/' + this.banuid + '/' + newPostKey] = postData; return admin.database().ref().set(updatedBidQuote); I checked the postData object and it didn't have any "." keys or strange values. A: You can only pass full paths into update, so the last line should be: return admin.database().ref().update(updatedBidQuote) A: For me, I had unnecessary curly brackets. I changed from return admin.database().ref().update( {updateObj} ); to return admin.database().ref().update(updateObj);
stackoverflow
{ "language": "en", "length": 168, "provenance": "stackexchange_0000F.jsonl.gz:915179", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699426" }
4445bd49f97655622ecca94be3620b2db53dc863
Stackoverflow Stackexchange Q: JpaRepository: Fetch specific lazy collections If I have an Entity Person with some lazy collections (Cars, Bills, Friends, ...) and want to write a JpaRepository method that gives me all persons including eagerly fetched Cars, is this possible? I know that one can do this on single objects, but is this somehow possible with collections of persons? A: Yes, there is a very convenient @EntityGraph annotation provided by Spring Data JPA. It can be used to fine-tune the entity graph used by the query. Every JPA query uses an implicit entity graph that specifies which elements are fetched eagerly or lazily, depending on the relations' fetch-type settings. If you want a specific relation to be eagerly fetched, you need to specify it in the entity graph. @Repository public interface PersonRepository extends CrudRepository<Person, Long> { @EntityGraph(attributePaths = { "cars" }) Person getByName(String name); } Spring Data JPA documentation on entity graphs
Q: JpaRepository: Fetch specific lazy collections If I have an Entity Person with some lazy collections (Cars, Bills, Friends, ...) and want to write a JpaRepository method that gives me all persons including eagerly fetched Cars, is this possible? I know that one can do this on single objects, but is this somehow possible with collections of persons? A: Yes, there is a very convenient @EntityGraph annotation provided by Spring Data JPA. It can be used to fine-tune the entity graph used by the query. Every JPA query uses an implicit entity graph that specifies which elements are fetched eagerly or lazily, depending on the relations' fetch-type settings. If you want a specific relation to be eagerly fetched, you need to specify it in the entity graph. @Repository public interface PersonRepository extends CrudRepository<Person, Long> { @EntityGraph(attributePaths = { "cars" }) Person getByName(String name); } Spring Data JPA documentation on entity graphs A: Use the following JPA query to get data from both tables; here a "join fetch" is used to fetch the cars. A "fetch" join allows associations or collections of values to be initialized along with their parent objects using a single select. This is particularly useful in the case of a collection. It effectively overrides the outer join and lazy declarations of the mapping file for associations and collections. See this for more explanation on join fetch. Use "join fetch" to fetch objects eagerly. public interface CustomRepository extends JpaRepository<Person, Long> { @Query("select person from PersonModel as person left join fetch person.cars as cars") public PersonModel getPersons(); }
stackoverflow
{ "language": "en", "length": 254, "provenance": "stackexchange_0000F.jsonl.gz:915184", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699436" }
4ebb43fb959dbd53b911de0a1aaa7a8f39ddf157
Stackoverflow Stackexchange Q: How to use multiple ng-content in the same component in Angular 2? I would like to display different templates in my component. Only one will show. If hasURL is true, I want to show the <a></a>. If hasURL is false, I want to show the <button></button>. The problem: if hasURL is false, the component shows the button, but the ng-content is empty, because the content was already projected into the first <a></a>. Is there a way to solve that, please? <a class="bouton" href="{{ href }}" *ngIf="hasURL"> <ng-content> </ng-content> </a> <button class="bouton" *ngIf="!hasURL"> <ng-content> </ng-content> </button> A: You can wrap ng-content in ng-template and use ngTemplateOutlet <a class="bouton" href="{{ href }}" *ngIf="hasURL"> <ng-container *ngTemplateOutlet="contentTpl"></ng-container> </a> <button class="bouton" *ngIf="!hasURL"> <ng-container *ngTemplateOutlet="contentTpl"></ng-container> </button> <ng-template #contentTpl><ng-content></ng-content></ng-template> Plunker Example See also * *How to conditionally wrap a div around ng-content Angular 9 demo
Q: How to use multiple ng-content in the same component in Angular 2? I would like to display different templates in my component. Only one will show. If hasURL is true, I want to show the <a></a>. If hasURL is false, I want to show the <button></button>. The problem: if hasURL is false, the component shows the button, but the ng-content is empty, because the content was already projected into the first <a></a>. Is there a way to solve that, please? <a class="bouton" href="{{ href }}" *ngIf="hasURL"> <ng-content> </ng-content> </a> <button class="bouton" *ngIf="!hasURL"> <ng-content> </ng-content> </button> A: You can wrap ng-content in ng-template and use ngTemplateOutlet <a class="bouton" href="{{ href }}" *ngIf="hasURL"> <ng-container *ngTemplateOutlet="contentTpl"></ng-container> </a> <button class="bouton" *ngIf="!hasURL"> <ng-container *ngTemplateOutlet="contentTpl"></ng-container> </button> <ng-template #contentTpl><ng-content></ng-content></ng-template> Plunker Example See also * *How to conditionally wrap a div around ng-content Angular 9 demo
stackoverflow
{ "language": "en", "length": 136, "provenance": "stackexchange_0000F.jsonl.gz:915193", "question_score": "65", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699469" }
5e3945197a210d8f3cf5ad1c2433dcd7cdc9efe2
Stackoverflow Stackexchange Q: str_ireplace() does not catch uppercase in Cyrillic script I am using this function to color all the matches in my search. It works on lowercase words in Cyrillic script, like "search", but not with "Search". The function: public function highlight($text='', $word='') { if(strlen($text) > 0 && strlen($word) > 0) { return (str_ireplace($word, "<span class='highlights'><strong>" . $word . "</strong></span>", $text)); } return $text; } I've been thinking about it, but I don't know how I should change it to make it work. Can you give me advice? Thank you in advance!
Q: str_ireplace() does not catch uppercase in Cyrillic script I am using this function to color all the matches in my search. It works on lowercase words in Cyrillic script, like "search", but not with "Search". The function: public function highlight($text='', $word='') { if(strlen($text) > 0 && strlen($word) > 0) { return (str_ireplace($word, "<span class='highlights'><strong>" . $word . "</strong></span>", $text)); } return $text; } I've been thinking about it, but I don't know how I should change it to make it work. Can you give me advice? Thank you in advance!
stackoverflow
{ "language": "en", "length": 91, "provenance": "stackexchange_0000F.jsonl.gz:915204", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699521" }
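The root cause is that PHP's str_ireplace is byte-oriented and only case-folds ASCII, so the Cyrillic uppercase form is missed; a common fix on the PHP side is preg_replace with the i and u modifiers (or the mb_* functions). For illustration, here is the same case-insensitive highlight sketched in Python, whose re module performs Unicode-aware case-insensitive matching on str by default (function name mirrors the question's):

```python
import re

def highlight(text: str, word: str) -> str:
    """Wrap every case-insensitive match of `word` in highlight markup.

    Unlike PHP's str_ireplace, re.IGNORECASE on Python 3 strings also
    matches Cyrillic case pairs such as 'п' and 'П'.
    """
    if not text or not word:
        return text
    pattern = re.compile(re.escape(word), re.IGNORECASE)
    return pattern.sub(
        lambda m: "<span class='highlights'><strong>" + m.group(0) + "</strong></span>",
        text,
    )

print(highlight("Поиск и поиск", "поиск"))
```

Using m.group(0) instead of the search word keeps the original casing of each match in the output, which str_ireplace also does not do.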
1080322b0e1939a88c44a38a09fbf726dc49eab6
Stackoverflow Stackexchange Q: Node: check latest version of package programmatically I'd like my Node package (published on npm) to alert the user when a new version is available. How can I check programmatically for the latest version of a published package and compare it to the current one? Thanks A: You can combine the npmview (for getting the remote version) and semver (for comparing versions) packages to do this: const npmview = require('npmview'); const semver = require('semver'); // get local package name and version from package.json (or wherever) const pkgName = require('./package.json').name; const pkgVersion = require('./package.json').version; // get latest version on npm npmview(pkgName, function(err, version, moduleInfo) { // compare to local version if(semver.gt(version, pkgVersion)) { // remote version on npm is newer than current version } });
Q: Node: check latest version of package programmatically I'd like my Node package (published on npm) to alert the user when a new version is available. How can I check programmatically for the latest version of a published package and compare it to the current one? Thanks A: You can combine the npmview (for getting the remote version) and semver (for comparing versions) packages to do this: const npmview = require('npmview'); const semver = require('semver'); // get local package name and version from package.json (or wherever) const pkgName = require('./package.json').name; const pkgVersion = require('./package.json').version; // get latest version on npm npmview(pkgName, function(err, version, moduleInfo) { // compare to local version if(semver.gt(version, pkgVersion)) { // remote version on npm is newer than current version } });
stackoverflow
{ "language": "en", "length": 124, "provenance": "stackexchange_0000F.jsonl.gz:915225", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699595" }
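The comparison semver.gt performs is numeric, field by field, not lexicographic. A minimal Python sketch of that core idea for plain MAJOR.MINOR.PATCH strings (my own helper names; real semver additionally handles pre-release tags like 1.0.0-beta, which this does not):

```python
def parse_version(version: str) -> tuple:
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints so it compares numerically."""
    return tuple(int(part) for part in version.split("."))

def is_newer(remote: str, local: str) -> bool:
    """True when the remote version is strictly greater than the local one."""
    return parse_version(remote) > parse_version(local)

# String comparison would wrongly say "1.10.0" < "1.9.3"; tuples compare correctly.
print(is_newer("1.10.0", "1.9.3"))  # True
```

The 1.10.0 vs 1.9.3 case is exactly why a dedicated comparison is needed instead of comparing the raw strings.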
5cf18c0447cec18c741a039404517860214de31a
Stackoverflow Stackexchange Q: How to save a file to a specific directory in Python? Currently, I am using this code to save a downloaded file, but it places the file in the same folder the script is run from. r = requests.get(url) with open('file_name.pdf', 'wb') as f: f.write(r.content) How would I save the downloaded file to another directory of my choice? A: Or if in Linux, try: # To save to an absolute path. r = requests.get(url) with open('/path/I/want/to/save/file/to/file_name.pdf', 'wb') as f: f.write(r.content) # To save to a relative path. r = requests.get(url) with open('folder1/folder2/file_name.pdf', 'wb') as f: f.write(r.content) See the open() function docs for more details.
Q: How to save a file to a specific directory in Python? Currently, I am using this code to save a downloaded file, but it places the file in the same folder the script is run from. r = requests.get(url) with open('file_name.pdf', 'wb') as f: f.write(r.content) How would I save the downloaded file to another directory of my choice? A: Or if in Linux, try: # To save to an absolute path. r = requests.get(url) with open('/path/I/want/to/save/file/to/file_name.pdf', 'wb') as f: f.write(r.content) # To save to a relative path. r = requests.get(url) with open('folder1/folder2/file_name.pdf', 'wb') as f: f.write(r.content) See the open() function docs for more details. A: As long as you have access to the directory, you can simply change your 'file_name.pdf' to '/path_to_directory_you_want_to_save/file_name.pdf' and that should do what you want. A: You can just give open a full file path or a relative file path r = requests.get(url) with open(r'C:\path\to\save\file_name.pdf', 'wb') as f: f.write(r.content) A: Here is a quicker solution: r = requests.get(url) open('/path/to/directory/file_name.pdf', 'wb').write(r.content)
stackoverflow
{ "language": "en", "length": 165, "provenance": "stackexchange_0000F.jsonl.gz:915261", "question_score": "38", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699682" }
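One detail the answers above skip: open() raises FileNotFoundError if the target directory does not exist yet. Creating it first with os.makedirs(..., exist_ok=True) makes the save robust. A small sketch (the helper name is my own):

```python
import os

def save_bytes(content: bytes, directory: str, filename: str) -> str:
    """Write raw bytes under `directory`, creating the directory first if needed."""
    os.makedirs(directory, exist_ok=True)  # no error if the directory already exists
    path = os.path.join(directory, filename)
    with open(path, "wb") as f:
        f.write(content)
    return path

# With requests, assuming `url` is defined:
#   r = requests.get(url)
#   save_bytes(r.content, "/path/I/want", "file_name.pdf")
```

os.path.join also spares you from hard-coding the platform's path separator.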
6f0f5d80c75f36c70d1310345ce116f0b61b535c
Stackoverflow Stackexchange Q: R ifelse - unexpected change in vector order I am getting an unexpected result after running: test = c(rep(FALSE, 2), rep(TRUE, 6)) ifelse(test, c(1:8)[test], 1) [1] 1 1 5 6 7 8 3 4 I would have expected 1 1 3 4 5 6 7 8, but the values of yes in ifelse(test, yes, no) come out reordered. Maybe I need more coffee, but I would appreciate it if anyone could explain the logic behind this result. A: The lengths of the vectors in ifelse should be the same. In the OP's code, the second argument is subsetted again, while the third argument 1 gets recycled (which is fine) ifelse(test, 1:8, 1) #[1] 1 1 3 4 5 6 7 8 It is explained in the documentation of ?ifelse If yes or no are too short, their elements are recycled. yes will be evaluated if and only if any element of test is true, and analogously for no. Here, 'yes' and 'no' denote the general arguments in ifelse(test, yes, no)
Q: R ifelse - unexpected change in vector order I am getting an unexpected result after running: test = c(rep(FALSE, 2), rep(TRUE, 6)) ifelse(test, c(1:8)[test], 1) [1] 1 1 5 6 7 8 3 4 I would have expected 1 1 3 4 5 6 7 8, but the values of yes in ifelse(test, yes, no) come out reordered. Maybe I need more coffee, but I would appreciate it if anyone could explain the logic behind this result. A: The lengths of the vectors in ifelse should be the same. In the OP's code, the second argument is subsetted again, while the third argument 1 gets recycled (which is fine) ifelse(test, 1:8, 1) #[1] 1 1 3 4 5 6 7 8 It is explained in the documentation of ?ifelse If yes or no are too short, their elements are recycled. yes will be evaluated if and only if any element of test is true, and analogously for no. Here, 'yes' and 'no' denote the general arguments in ifelse(test, yes, no)
stackoverflow
{ "language": "en", "length": 171, "provenance": "stackexchange_0000F.jsonl.gz:915298", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699799" }
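The surprising output follows mechanically from the recycling rule quoted above: yes is recycled to the length of test and then picked positionally, so a pre-subsetted yes lands in the wrong slots. The same logic can be reproduced in Python (my own helper, mimicking ifelse's recycling) to show both the buggy and the correct call:

```python
from itertools import cycle, islice

def r_ifelse(test, yes, no):
    """Mimic R's ifelse(): recycle `yes`/`no` to len(test), then pick positionally."""
    yes_r = list(islice(cycle(yes), len(test)))
    no_r = list(islice(cycle(no), len(test)))
    return [y if t else n for t, y, n in zip(test, yes_r, no_r)]

test = [False, False, True, True, True, True, True, True]
full = list(range(1, 9))                       # 1:8
subset = [v for v, t in zip(full, test) if t]  # c(1:8)[test] -> [3, 4, 5, 6, 7, 8]

print(r_ifelse(test, subset, [1]))  # [1, 1, 5, 6, 7, 8, 3, 4]  (the question's output)
print(r_ifelse(test, full, [1]))    # [1, 1, 3, 4, 5, 6, 7, 8]  (the expected output)
```

The six-element subset recycles to [3, 4, 5, 6, 7, 8, 3, 4] over eight positions, and positions 3-8 of that recycled vector are exactly 5 6 7 8 3 4.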
d8e47298bae1afe5c80c8357030f521df176edf5
Stackoverflow Stackexchange Q: How can I check the first letter of each item in an array? I'm building a pig latin translator and I can't figure out how to identify the first letter of the entered words. I've converted the input to an array with each item being a new word, but how do I select each first letter of each item to determine if it's a consonant/vowel/etc.? A: a = ['This', 'is', 'a', 'sentence'] for word in a: print(word[0]) Output: T i a s
Q: How can I check the first letter of each item in an array? I'm building a pig latin translator and I can't figure out how to identify the first letter of the entered words. I've converted the input to an array with each item being a new word, but how do I select each first letter of each item to determine if it's a consonant/vowel/etc.? A: a = ['This', 'is', 'a', 'sentence'] for word in a: print(word[0]) Output: T i a s A: words = ['apple', 'bike', 'cow'] Use a list comprehension, that is, build a list from the contents of another: firsts = [w[0] for w in words] firsts Output ['a','b','c'] A: Using a list comprehension, checking that each word is not empty: a = ['This', 'is', '', 'sentence'] [w[0] for w in a if w] Output: ['T', 'i', 's']
stackoverflow
{ "language": "en", "length": 141, "provenance": "stackexchange_0000F.jsonl.gz:915332", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699896" }
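Since the first-letter check feeds a pig latin translator, here is how the `word[0]` vowel test slots into a full translation step. Pig latin conventions vary, so this sketch assumes the common scheme: vowel-initial words get "way", otherwise the leading consonant cluster moves to the end followed by "ay" (the function name is my own):

```python
VOWELS = set("aeiouAEIOU")

def to_pig_latin(word: str) -> str:
    """Translate one word: vowel-initial -> +'way'; else rotate consonants + 'ay'."""
    if word[0] in VOWELS:
        return word + "way"
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[i:] + word[:i] + "ay"
    return word + "ay"  # word with no vowels at all

print([to_pig_latin(w) for w in ["This", "is", "a", "sentence"]])
```

Checking membership in a set of both cases sidesteps a separate lower() call on every word.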
542955e34d2a3f3fedb5eaa82a6c0e5325cb7c5e
Stackoverflow Stackexchange Q: MaxListenersExceededWarning - Loopback I am getting the following error: (node:18591) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 wakeup listeners added. Use emitter.setMaxListeners() to increase limit. after a script for sending push notifications is executed. I am using the "node-gcm" and "apn" npm modules for sending Android and iOS push notifications, respectively. The code I am using to send notifications is: Android: async.each(tokenBatches, function (batch) { // Assuming you already set up the sender and message sender.send(message, {registrationIds: batch}, function (err, result) { // Push failed? if (err) { // Stops executing other batches console.log(err); } console.log(result); }); }); Here, device tokens are passed as batches of 1000 tokens. iOS: provider.send(notification, iosTokens).then((response) => { console.log(response); }); Here, all tokens are sent inside the iosTokens array. These two scripts run in parallel. What could be wrong in this code? I saw some solutions asking to set max listeners, but I am not getting it right. Is there any way to fix the memory-leak error? Any help would be appreciated! Thanks in advance.
Q: MaxListenersExceededWarning - Loopback I am getting the following error: (node:18591) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 wakeup listeners added. Use emitter.setMaxListeners() to increase limit. after a script for sending push notifications is executed. I am using the "node-gcm" and "apn" npm modules for sending Android and iOS push notifications, respectively. The code I am using to send notifications is: Android: async.each(tokenBatches, function (batch) { // Assuming you already set up the sender and message sender.send(message, {registrationIds: batch}, function (err, result) { // Push failed? if (err) { // Stops executing other batches console.log(err); } console.log(result); }); }); Here, device tokens are passed as batches of 1000 tokens. iOS: provider.send(notification, iosTokens).then((response) => { console.log(response); }); Here, all tokens are sent inside the iosTokens array. These two scripts run in parallel. What could be wrong in this code? I saw some solutions asking to set max listeners, but I am not getting it right. Is there any way to fix the memory-leak error? Any help would be appreciated! Thanks in advance.
stackoverflow
{ "language": "en", "length": 173, "provenance": "stackexchange_0000F.jsonl.gz:915333", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44699906" }