qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---|
1,256,905 | I have a server (Ubuntu) with two public IPs: the first one is dynamic and the second static.
Software running on the server connects to a public internet endpoint that sits behind a firewall which whitelists IPs, so it needs to use the static IP for outbound traffic to that endpoint for things to work.
The server is configured so the static IP is on a loopback interface, set up like this:
```
$ sudo vim /etc/network/interfaces.d/eip.cfg
auto lo:1
iface lo:1 inet static
address <static_ip>/32
post-up ip route replace default via <dynamic_ip> src <static_ip>
```
After network restart, it's possible to see the static IP like so:
```
$ curl ipinfo.io/ip
<static_ip>
```
Hence, all works well and it's possible to curl the other endpoint. All good.
**PROBLEM**
The software uses Docker and thus sits inside a container that even uses a custom network.
I have already tested to use `--network=host` and that works well because this puts the container on the host's network (where it's working).
The issue is that the container must be on the custom network, because it connects to other internal services on that network.
I understand that Docker uses MASQUERADE and that it could be possible to switch that off and configure iptables manually as an option. However, I wonder what other options are available and what is the option that could be recommended.
I'm open to suggestions - maybe someone has done this already! Thanks! | 2017/10/06 | [
"https://superuser.com/questions/1256905",
"https://superuser.com",
"https://superuser.com/users/63102/"
] | The packets generated in the container are being forwarded via the Docker host's interface rather than being generated by the host. The `src` hint added to the default route does not take effect as the packets already have a source address. I believe iptables masq only chooses the source IP address based on the outbound interface, so I think you are stuck with inserting a new rule before the default MASQ rule ([like the serverfault answer](https://serverfault.com/q/686035/125521)).
```
iptables -t nat -I POSTROUTING -p all -s <docker_network>/24 \
-j SNAT --to-source <static_ip>
```
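As a hedged aside (not part of the original answer; `my_custom_net` is a placeholder network name): the subnet to use in the `-s` match can be read straight from Docker, and the rule ordering can be checked afterwards:
```
# print the subnet of the user-defined network (placeholder name)
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' my_custom_net

# list the nat POSTROUTING chain with rule numbers to confirm the SNAT rule
# sits above Docker's MASQUERADE rule
iptables -t nat -L POSTROUTING -n -v --line-numbers
```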
Unless you are familiar with Docker's use of iptables, switching it off completely may cause more issues. | The easiest solution is to use the `--outgoing-ip` command option with `docker run` [commands](https://www.aquasec.com/wiki/display/containers/Docker+CLI+Commands). This example represents a container with an outgoing IPv4 address. For IPv6, use `--outgoing-ip6` instead.
An alternative method is to define this in your network definition and add some *iptables* rules; you'll have something like this:
```
# docker network create <NETWORK NAME> --subnet=192.168.10.0/24 --gateway=192.168.10.1
# iptables -t nat -I POSTROUTING -s 192.168.10.0/24 -j SNAT --to-source OUTGOING_IP
```
Docker will have POSTROUTING rules.
Then connect your containers to the custom network you created. All traffic from these containers will go out through the specified IP address.
```
# docker network connect <NETWORK-NAME> <CONTAINER>
``` |
47,333,319 | I am trying to add form validation using JavaScript. When I add
```
document.getElementById("one").setAttribute("ng-click", "insertData()");
```
to the `validateForm` function, it does not work after I press the button.
When I move that line out of the `validateForm` function, it works but does not perform form validation.
Here is my code:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.4/angular.min.js"></script>
</head>
<body>
<div class="container">
<div ng-app="myapp" ng-controller = "controller">
<div class="row">
<div id="divID"> </div>
<div class="col-sm-12 col-md-10 col-md-offset-1 ">
<div class="form-group">
<div class="input-group">
<span class="input-group-addon"><i class="glyphicon glyphicon-user"></i></span>
<input class="form-control" id = "k" placeholder="Name" ng-model = "username" name="user_name" type="text" autofocus>
</div>
</div>
<div class="form-group">
<div class="input-group">
<span class="input-group-addon">
<i class="glyphicon glyphicon-user"></i></span>
<input class="form-control" placeholder="NIC" ng-model = "nic" name="NIC" type="text" value="">
</div>
</div>
<div class="form-group">
<div class="input-group">
<span class="input-group-addon"> @</span>
<input class="form-control" placeholder="Email" ng-model = "Email" name="email" type="email" value="">
</div>
</div>
<div class="form-group">
<div class="input-group">
<span class="input-group-addon">
<i class="glyphicon glyphicon-lock"></i></span>
<input class="form-control" placeholder="Password" ng-model = "password" name="Password" type="password" value="">
</div>
</div>
<div class="form-group">
<div class="input-group">
<span class="input-group-addon">
<i class="glyphicon glyphicon-lock"></i></span>
<input class="form-control" placeholder="Confierm Password" ng-model = "Conf_password" name="Conf_password" type="password" value="">
</div>
</div>
<div class="form-group">
<input type="submit" id = 'one' onclick="validateForm()" class="btn btn-lg btn-primary btn-block" value="Sign Up">
</div>
</div>
</div>
</div>
</div>
</body>
</html>
<Script>
function validateForm()
{
var x = document.getElementById("k").value;
if (x == "")
{
document.getElementById("k").setAttribute("placeholder", "Insert Name");
}
else
{
//document.getElementById("one").removeAttribute("onclick");
document.getElementById("one").setAttribute("ng-click", "insertData()");
}
}
// when add this part here code working fine ---- document.getElementById("one").setAttribute("ng-click", "insertData()");
var app = angular.module("myapp", []);
app.controller("controller",function($scope,$http){
$scope.insertData = function()
{
$http.post(
"Signup.php",
{'username':$scope.username,'nic':$scope.nic,'email':$scope.Email,'password':$scope.password,'Conf_password':$scope.Conf_password }
).then(function(data){
var result = angular.toJson(data.data);
var myEl = angular.element( document.querySelector( '#divID' ) );
if(result.replace(/^"|"$/g, '') == 1)
{
myEl.replaceWith("<div class='alert alert-success alert-dismissable fade in'><a href='#' class='close' data-dismiss='alert' aria-label='close'>×</a><strong>Success!</strong>You have sucessfully registerd. Please login</div>");
}
else
{
myEl.replaceWith(result.replace(/^"|"$/g, ''));
}
$scope.username = null;
});
}
});
</script>
``` | 2017/11/16 | [
"https://Stackoverflow.com/questions/47333319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8683121/"
] | You could do it this way. Change your class to this definition. In your controller `coreBarCode.json` will have the JSON, which you can then work with as needed:
```
public class CoreBarCodeDTO
{
private string _json;
public string json { get { return _json; }
set {
string decoded = HttpUtility.UrlDecode(value);
_json = decoded;
}
}
}
```
>
> Update
>
>
>
```
[HttpPost]
public async Task<IHttpActionResult> Get([FromBody] CoreBarCodeDTO coreBarCode)
{
string Bar_Code = coreBarCode.json;
//work with the JSON here, with Newtonsoft for example
var obj = JObject.Parse(Bar_Code);
// obj["MouseSampleBarcode"] now = "MOS81"
}
``` | As @Lokki mentioned in his comment, the GET verb does not have a body; you need to change that to POST or PUT (depending on whether you are creating/searching or updating), so your code would look like this:
```
[HttpPost("/")]
public async Task<IHttpActionResult> Get([FromBody] CoreBarCodeDTO.RootObject coreBarCode)
{
string Bar_Code = coreBarCode.MouseSampleBarcode.ToString();
``` |
47,333,319 | I am trying to add form validation using JavaScript. When I add
```
document.getElementById("one").setAttribute("ng-click", "insertData()");
```
to the `validateForm` function, it does not work after I press the button.
When I move that line out of the `validateForm` function, it works but does not perform form validation.
Here is my code:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.4/angular.min.js"></script>
</head>
<body>
<div class="container">
<div ng-app="myapp" ng-controller = "controller">
<div class="row">
<div id="divID"> </div>
<div class="col-sm-12 col-md-10 col-md-offset-1 ">
<div class="form-group">
<div class="input-group">
<span class="input-group-addon"><i class="glyphicon glyphicon-user"></i></span>
<input class="form-control" id = "k" placeholder="Name" ng-model = "username" name="user_name" type="text" autofocus>
</div>
</div>
<div class="form-group">
<div class="input-group">
<span class="input-group-addon">
<i class="glyphicon glyphicon-user"></i></span>
<input class="form-control" placeholder="NIC" ng-model = "nic" name="NIC" type="text" value="">
</div>
</div>
<div class="form-group">
<div class="input-group">
<span class="input-group-addon"> @</span>
<input class="form-control" placeholder="Email" ng-model = "Email" name="email" type="email" value="">
</div>
</div>
<div class="form-group">
<div class="input-group">
<span class="input-group-addon">
<i class="glyphicon glyphicon-lock"></i></span>
<input class="form-control" placeholder="Password" ng-model = "password" name="Password" type="password" value="">
</div>
</div>
<div class="form-group">
<div class="input-group">
<span class="input-group-addon">
<i class="glyphicon glyphicon-lock"></i></span>
<input class="form-control" placeholder="Confierm Password" ng-model = "Conf_password" name="Conf_password" type="password" value="">
</div>
</div>
<div class="form-group">
<input type="submit" id = 'one' onclick="validateForm()" class="btn btn-lg btn-primary btn-block" value="Sign Up">
</div>
</div>
</div>
</div>
</div>
</body>
</html>
<Script>
function validateForm()
{
var x = document.getElementById("k").value;
if (x == "")
{
document.getElementById("k").setAttribute("placeholder", "Insert Name");
}
else
{
//document.getElementById("one").removeAttribute("onclick");
document.getElementById("one").setAttribute("ng-click", "insertData()");
}
}
// when add this part here code working fine ---- document.getElementById("one").setAttribute("ng-click", "insertData()");
var app = angular.module("myapp", []);
app.controller("controller",function($scope,$http){
$scope.insertData = function()
{
$http.post(
"Signup.php",
{'username':$scope.username,'nic':$scope.nic,'email':$scope.Email,'password':$scope.password,'Conf_password':$scope.Conf_password }
).then(function(data){
var result = angular.toJson(data.data);
var myEl = angular.element( document.querySelector( '#divID' ) );
if(result.replace(/^"|"$/g, '') == 1)
{
myEl.replaceWith("<div class='alert alert-success alert-dismissable fade in'><a href='#' class='close' data-dismiss='alert' aria-label='close'>×</a><strong>Success!</strong>You have sucessfully registerd. Please login</div>");
}
else
{
myEl.replaceWith(result.replace(/^"|"$/g, ''));
}
$scope.username = null;
});
}
});
</script>
``` | 2017/11/16 | [
"https://Stackoverflow.com/questions/47333319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8683121/"
] | You could do it this way. Change your class to this definition. In your controller `coreBarCode.json` will have the JSON, which you can then work with as needed:
```
public class CoreBarCodeDTO
{
private string _json;
public string json { get { return _json; }
set {
string decoded = HttpUtility.UrlDecode(value);
_json = decoded;
}
}
}
```
>
> Update
>
>
>
```
[HttpPost]
public async Task<IHttpActionResult> Get([FromBody] CoreBarCodeDTO coreBarCode)
{
string Bar_Code = coreBarCode.json;
//work with the JSON here, with Newtonsoft for example
var obj = JObject.Parse(Bar_Code);
// obj["MouseSampleBarcode"] now = "MOS81"
}
``` | So, as I said: GET doesn't have a body.
Follow @KinSlayerUY answer.
```
[HttpPost("/")]
public async Task<IHttpActionResult> Post([FromBody] CoreBarCodeDTO.RootObject coreBarCode)
{
string Bar_Code = coreBarCode.MouseSampleBarcode.ToString();
...
}
```
If you need to use GET, remove the `[FromBody]` attribute and send the data as individual parameters:
```
[HttpGet("/")]
public async Task<IHttpActionResult> Get(string mouseSampleBarcode)
{
var rootObject = new CoreBarCodeDTO.RootObject
{
MouseSampleBarcode = mouseSampleBarcode
}
...
}
``` |
55,871,586 | I'm trying to make a paint-program in JavaScript, and I want to include an undo-function (not an eraser). How do I add all the events to an array, and then make it possible to delete them one by one?
I have a dropdown list of tools (only four of them are working so far). I've added an undo button with an id. I've tried for hours (days in fact) to find out how to do this. I have found some examples, and I think I'll have to use both push and an empty array to get further?
This is the code for the tool-selection and the button
```
<label>
Object type:
<select id="selectTool">
<option value="line">Linje</option>
<option value="pencil">Blyant</option>
<option value="rect">Rektangel</option>
<option value="circle">Sirkel</option>
<option value="oval">Oval</option>
<option value="polygon">Polygon</option>
</select>
Shape drawn:
<select id="shapeDrawn">
<option value=""></option>
</select>
<input type="button" id="cmbDelete" value="Undo last action">
</label>
```
The undo function could maybe be something like this, but this function
```
var shapes = [];
shapes.push(newShape);
function cmbDeleteClick(){
if(shapes.length > 0){
var selectedShapeIndex = selectShape.selectedIndex;
shapes.splice(selectedShapeIndex,1);
selectShape.options.remove(selectedShapeIndex);
selectShape.selectedIndex = selectShape.options.length - 1;
}
cmbDelete = document.getElementById("cmbDelete");
cmbDelete.addEventListener("click",cmbDeleteClick, false);
fillSelectShapeTypes();
drawCanvas();
}
```
Ideally, everything that gets painted on the canvas is added to a dropdown menu, and it can be removed (undone) by clicking a button. Here is the "working" version of the code [JS Bin](https://jsbin.com/vasivug/edit?html,output) | 2019/04/26 | [
"https://Stackoverflow.com/questions/55871586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10572128/"
] | Unfortunately, the call stack isn't available with T-SQL error handling. Consider upvoting [this feature request](https://feedback.azure.com/forums/908035-sql-server/suggestions/32891353-provide-function-to-retrieve-the-entire-call-stack) to facilitate capturing T-SQL stack details.
The example below uses a nested TRY/CATCH to raise a user-defined error (message number 50000) when the inner script errors, capturing the available details along with context description ("inner script"). When errors occur in the outer script, the original error is simply re-thrown. Absence of a context and a system error number indicates the outermost script erred, although you could build and raise a user-defined error there instead, including the outer script context description.
```
BEGIN TRY
BEGIN TRY
EXECUTE sp_executesql @stmt = N'SELECT 1/0;';
END TRY
BEGIN CATCH
DECLARE
@ErrorNumber int
,@ErrorMessage nvarchar(2048)
,@ErrorSeverity int
,@ErrorState int
,@ErrorLine int;
SELECT
@ErrorNumber =ERROR_NUMBER()
,@ErrorMessage =ERROR_MESSAGE()
,@ErrorSeverity = ERROR_SEVERITY()
,@ErrorState =ERROR_STATE()
,@ErrorLine =ERROR_LINE();
RAISERROR('Error %d caught in inner script at line %d: %s'
,@ErrorSeverity
,@ErrorState
,@ErrorNumber
,@ErrorLine
,@ErrorMessage);
END CATCH;
END TRY
BEGIN CATCH
THROW;
END CATCH;
GO
``` | you can use `sp_executeSQL`'s [OUTPUT parameter](https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-executesql-transact-sql?view=sql-server-2017#c-using-the-output-parameter):
```
DECLARE @ErrorLine NVARCHAR(32)
DECLARE @Params NVARCHAR(150) = '@Return INT OUTPUT'
DECLARE @SQL NVARCHAR(MAX) = ''
SET @SQL = @SQL + ' '
SET @SQL = @SQL + ' BEGIN TRY '
SET @SQL = @SQL + ' SELECT 100 AS First '
SET @SQL = @SQL + ' SELECT 1/0 AS Second '
SET @SQL = @SQL + ' END TRY '
SET @SQL = @SQL + ' BEGIN CATCH '
SET @SQL = @SQL + ' SELECT @Return = ERROR_LINE() '
SET @SQL = @SQL + ' END CATCH '
SET @SQL = @SQL + ' '
EXEC sp_executeSQL @SQL, @Params, @Return = @ErrorLine OUTPUT
SELECT @ErrorLine
```
This code will show `@ErrorLine` = 1 regardless of where the error is because, technically, the entire dynamic SQL is on a single line, which makes the whole thing more complex - but you get the idea...
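A hedged variation on the same idea (my own sketch, not from the answer above): if the dynamic SQL is built with explicit line breaks, `ERROR_LINE()` inside the dynamic batch becomes meaningful again:
```
DECLARE @ErrorLine NVARCHAR(32)
DECLARE @Params NVARCHAR(150) = N'@Return INT OUTPUT'
DECLARE @CRLF NCHAR(2) = CHAR(13) + CHAR(10)
DECLARE @SQL NVARCHAR(MAX) =
      N'BEGIN TRY'                         + @CRLF
    + N'    SELECT 100 AS First'           + @CRLF
    + N'    SELECT 1/0 AS Second'          + @CRLF
    + N'END TRY'                           + @CRLF
    + N'BEGIN CATCH'                       + @CRLF
    + N'    SELECT @Return = ERROR_LINE()' + @CRLF
    + N'END CATCH'
EXEC sp_executesql @SQL, @Params, @Return = @ErrorLine OUTPUT
SELECT @ErrorLine -- reports the failing line within the dynamic batch (3 here)
```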
EDIT: if `@ErrorLine` is `NULL`, there's no error in `sp_executeSQL`. |
55,871,586 | I'm trying to make a paint-program in JavaScript, and I want to include an undo-function (not an eraser). How do I add all the events to an array, and then make it possible to delete them one by one?
I have a dropdown list of tools (only four of them are working so far). I've added an undo button with an id. I've tried for hours (days in fact) to find out how to do this. I have found some examples, and I think I'll have to use both push and an empty array to get further?
This is the code for the tool-selection and the button
```
<label>
Object type:
<select id="selectTool">
<option value="line">Linje</option>
<option value="pencil">Blyant</option>
<option value="rect">Rektangel</option>
<option value="circle">Sirkel</option>
<option value="oval">Oval</option>
<option value="polygon">Polygon</option>
</select>
Shape drawn:
<select id="shapeDrawn">
<option value=""></option>
</select>
<input type="button" id="cmbDelete" value="Undo last action">
</label>
```
The undo function could maybe be something like this, but this function
```
var shapes = [];
shapes.push(newShape);
function cmbDeleteClick(){
if(shapes.length > 0){
var selectedShapeIndex = selectShape.selectedIndex;
shapes.splice(selectedShapeIndex,1);
selectShape.options.remove(selectedShapeIndex);
selectShape.selectedIndex = selectShape.options.length - 1;
}
cmbDelete = document.getElementById("cmbDelete");
cmbDelete.addEventListener("click",cmbDeleteClick, false);
fillSelectShapeTypes();
drawCanvas();
}
```
Ideally, everything that gets painted on the canvas is added to a dropdown menu, and it can be removed (undone) by clicking a button. Here is the "working" version of the code [JS Bin](https://jsbin.com/vasivug/edit?html,output) | 2019/04/26 | [
"https://Stackoverflow.com/questions/55871586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10572128/"
] | Unfortunately, the call stack isn't available with T-SQL error handling. Consider upvoting [this feature request](https://feedback.azure.com/forums/908035-sql-server/suggestions/32891353-provide-function-to-retrieve-the-entire-call-stack) to facilitate capturing T-SQL stack details.
The example below uses a nested TRY/CATCH to raise a user-defined error (message number 50000) when the inner script errors, capturing the available details along with context description ("inner script"). When errors occur in the outer script, the original error is simply re-thrown. Absence of a context and a system error number indicates the outermost script erred, although you could build and raise a user-defined error there instead, including the outer script context description.
```
BEGIN TRY
BEGIN TRY
EXECUTE sp_executesql @stmt = N'SELECT 1/0;';
END TRY
BEGIN CATCH
DECLARE
@ErrorNumber int
,@ErrorMessage nvarchar(2048)
,@ErrorSeverity int
,@ErrorState int
,@ErrorLine int;
SELECT
@ErrorNumber =ERROR_NUMBER()
,@ErrorMessage =ERROR_MESSAGE()
,@ErrorSeverity = ERROR_SEVERITY()
,@ErrorState =ERROR_STATE()
,@ErrorLine =ERROR_LINE();
RAISERROR('Error %d caught in inner script at line %d: %s'
,@ErrorSeverity
,@ErrorState
,@ErrorNumber
,@ErrorLine
,@ErrorMessage);
END CATCH;
END TRY
BEGIN CATCH
THROW;
END CATCH;
GO
``` | Here is the solution that returns the line number. We are in a SPROC that takes parameters. Essentially, as a tSQL developer, you will need to guess where issues will occur, normally around arguments to formal parameter inputs.
```
-- Preamble
CREATE PROCEDURE [Meta].[ValidateTable]
@DatabaseNameInput VARCHAR(100), -- = 'DatabaseNameInput',
@TableNameInput VARCHAR(100), -- = 'TableNameInput',
@SchemaNameInput VARCHAR(100) -- = 'SchemaNameInput'
AS
BEGIN
DECLARE @crlf CHAR(2) = CHAR(13) + CHAR(10),
-----------Database Validity------------------
@IsDatabaseValid BIT,
@DatabaseNumber INTEGER,
@DatabaseNamePredicate NVARCHAR(100),
@CurrentExecutingContext NVARCHAR(40),
@DatabaseValidityExecutable NVARCHAR(100),
@DatabaseParameterString NVARCHAR(50),
-----------Table Validity------------------
@TableObjectIdentity INTEGER,
@TableString NVARCHAR(500),
@TableParameterString NVARCHAR(50),
@TableValidityExecutable NVARCHAR(200),
-----------Error Handling------------------
@ErrorState INTEGER = 0,
@ErrorNumber INTEGER = 0,
@ErrorSeverity INTEGER = 0,
@MyErrorMessage NVARCHAR(150),
@SetMessageText NVARCHAR(1024) = 'No Error Message Text for sys.messages.',
@ErrorDescription NVARCHAR(1024) = 'No error description was given.';
-- Be aware of SQL Injection Risk with no semi-colons at the line tails
SET @TableString = 'N' + '''' + @DatabaseNameInput + '.' + @SchemaNameInput + '.' + @TableNameInput + '''';
SET @DatabaseParameterString = N'@DatabaseNumber INTEGER OUTPUT ';
SET @TableParameterString = N'@TableObjectIdentity INTEGER OUTPUT';
-- Phase 0.0, testing for database existence.
PRINT 'Database Validity Executable: ' + @DatabaseValidityExecutable;
EXECUTE sp_executesql @DatabaseValidityExecutable, @DatabaseParameterString, @DatabaseNumber = @DatabaseNumber OUTPUT;
IF @DatabaseNumber IS NULL
BEGIN
SET @MyErrorMessage = 'The @DatabaseNameInput parameter: "%s" specified by the caller does not exist on this SQL Server - ' + @@SERVERNAME;
EXECUTE sys.sp_addmessage @msgnum = 59802, @severity = 16, @msgtext = @MyErrorMessage, @replace = 'replace', @lang = 'us_english';
RAISERROR(59802, 15, 1, @DatabaseNamePredicate);
END;
-- Phase 0.1, testing for table existence.
PRINT 'Table Validity Executable: ' + @TableValidityExecutable;
EXECUTE sp_executesql @TableValidityExecutable, @TableParameterString, @TableObjectIdentity = @TableObjectIdentity OUTPUT;
IF @TableObjectIdentity IS NULL
BEGIN
SET @MyErrorMessage = 'The @TableNameInput parameter: "%s" specified by the caller does not exist in this database - ' + DB_NAME() +';';
EXECUTE sys.sp_addmessage @msgnum = 59803, @severity = 16, @msgtext = @MyErrorMessage, @replace = 'replace', @lang = 'us_english';
RAISERROR(59803, 15, 1, @TableString);
END;
``` |
52,528,984 | I have recently added a JavaScript function to my page that switches the background image of my page at intervals of 5 seconds. There is an input form in that page as well, and when I click the Submit button, the switching stops and goes back to the first image, and the switching starts all over again.
How do I keep the background image continuously switching regardless of what I do in that page?
I think I can fix this with localStorage property, but I am not sure how to implement that in this particular code, any ideas?
```
<body class="main">
<div class ="up">
<img src='images/usa.jpg' id="circle"/>
</div>
<script>
var image_tracker = 'usa';
function change(){
var image = document.getElementById('circle');
if(image_tracker=='usa'){
image.src = 'images/O_Square_P1.jpg';
image_tracker = 'uthant';
}
else if(image_tracker=='uthant'){
image.src = 'images/U_Thant_PIC_3.jpg';
image_tracker = 'oasis';
}
else if(image_tracker=='oasis'){
image.src = 'images/usa.jpg';
image_tracker= 'usa';
}
}
var timer = setInterval('change();',5000);
</script>
</body>
``` | 2018/09/27 | [
"https://Stackoverflow.com/questions/52528984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10357600/"
] | Maybe [byexample](https://byexamples.github.io/byexample/) is what you are looking for.
It is a tool to run the snippets of code (aka examples) in a text file and check their outputs. It is like Python's doctests but it works for [Javascript](https://byexamples.github.io/byexample/languages/javascript), Ruby, Python and others (even for C and C++).
The Javascript examples can be written in a README.md like:
```
```javascript
1 + 2
out:
3
```
```
or like:
```
```javascript
> 1 + 2
3
```
```
Then, you run them from the command line:
```
$ byexample -l javascript README.md
[PASS] Pass: 2 Fail: 0 Skip: 0
```
And that's it. The full documentation of the tool can be found [here](https://byexamples.github.io/byexample/) and [here](https://byexamples.github.io/byexample/overview/usage), and the particular comments and limitations for Javascript is [here](https://byexamples.github.io/byexample/languages/javascript).
*Disclaimer:* I'm the author of [byexample](https://byexamples.github.io/byexample/) and I created it for the *same reason* that **rmharrison** wrote in his question.
Like him, my documentation was "out of sync" from time to time and the only way to notice was to run the examples by hand. For that reason I created this tool to automatically check and validate the docs.
It is really useful to me; I really hope that it would be useful to others. | This possibly should be achieved in the opposite way. Examples should exist as files that can be linted and tested. Their contents can be injected into README.md on documentation build with any template engine.
E.g. a custom `includeJs` helper function can be defined to render
```
{{ includeJs('foo.js') }}
```
to respective Markdown:
```
**foo.js**
```javascript
/* foo.js contents */
```
```
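A minimal sketch of such a helper (hypothetical, assuming Node.js and plain string substitution rather than a real template engine):
```js
// hypothetical build step: expand {{ includeJs('...') }} markers in a template
const fs = require('fs');

const fence = '`'.repeat(3); // avoids typing a literal triple backtick here

function includeJs(file) {
  const code = fs.readFileSync(file, 'utf8').trimEnd();
  return `**${file}**\n${fence}javascript\n${code}\n${fence}`;
}

const template = fs.readFileSync('README.template.md', 'utf8');
const readme = template.replace(
  /\{\{\s*includeJs\('(.+?)'\)\s*\}\}/g,
  (match, file) => includeJs(file)
);
fs.writeFileSync('README.md', readme);
```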
Depending on how much the snippets have in common, the documentation could possibly be parsed first to uniformly generate files from existing snippets.
E.g.
```
```
# $ node
> const say = require('say')
> say.nancat('grumpy is best')
'grumpy is best'
```
```
could be transformed into
```
// grumpy-is-best.js
const say = require('say')
say.nancat('grumpy is best')
``` |
52,528,984 | I have recently added a JavaScript function to my page that switches the background image of my page at intervals of 5 seconds. There is an input form in that page as well, and when I click the Submit button, the switching stops and goes back to the first image, and the switching starts all over again.
How do I keep the background image continuously switching regardless of what I do in that page?
I think I can fix this with localStorage property, but I am not sure how to implement that in this particular code, any ideas?
```
<body class="main">
<div class ="up">
<img src='images/usa.jpg' id="circle"/>
</div>
<script>
var image_tracker = 'usa';
function change(){
var image = document.getElementById('circle');
if(image_tracker=='usa'){
image.src = 'images/O_Square_P1.jpg';
image_tracker = 'uthant';
}
else if(image_tracker=='uthant'){
image.src = 'images/U_Thant_PIC_3.jpg';
image_tracker = 'oasis';
}
else if(image_tracker=='oasis'){
image.src = 'images/usa.jpg';
image_tracker= 'usa';
}
}
var timer = setInterval('change();',5000);
</script>
</body>
``` | 2018/09/27 | [
"https://Stackoverflow.com/questions/52528984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10357600/"
] | This possibly should be achieved in the opposite way. Examples should exist as files that can be linted and tested. Their contents can be injected into README.md on documentation build with any template engine.
E.g. a custom `includeJs` helper function can be defined to render
```
{{ includeJs('foo.js') }}
```
to respective Markdown:
```
**foo.js**
```javascript
/* foo.js contents */
```
```
Depending on how much the snippets have in common, the documentation could possibly be parsed first to uniformly generate files from existing snippets.
E.g.
```
```
# $ node
> const say = require('say')
> say.nancat('grumpy is best')
'grumpy is best'
```
```
could be transformed into
```
// grumpy-is-best.js
const say = require('say')
say.nancat('grumpy is best')
``` | For Python, there is [exdown](https://github.com/nschloe/exdown), a small helper tool of mine. Install with
```
pip install exdown
```
and test your snippets with
```py
import exdown
import pytest
@pytest.mark.parametrize("string, lineno", exdown.extract("README.md"))
def test_readme(string, lineno):
exec(string)
``` |
52,528,984 | I have recently added a JavaScript function to my page that switches the background image of my page at intervals of 5 seconds. There is an input form in that page as well, and when I click the Submit button, the switching stops and goes back to the first image, and the switching starts all over again.
How do I keep the background image continuously switching regardless of what I do in that page?
I think I can fix this with localStorage property, but I am not sure how to implement that in this particular code, any ideas?
```
<body class="main">
<div class ="up">
<img src='images/usa.jpg' id="circle"/>
</div>
<script>
var image_tracker = 'usa';
function change(){
var image = document.getElementById('circle');
if(image_tracker=='usa'){
image.src = 'images/O_Square_P1.jpg';
image_tracker = 'uthant';
}
else if(image_tracker=='uthant'){
image.src = 'images/U_Thant_PIC_3.jpg';
image_tracker = 'oasis';
}
else if(image_tracker=='oasis'){
image.src = 'images/usa.jpg';
image_tracker= 'usa';
}
}
var timer = setInterval('change();',5000);
</script>
</body>
``` | 2018/09/27 | [
"https://Stackoverflow.com/questions/52528984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10357600/"
] | Maybe [byexample](https://byexamples.github.io/byexample/) is what you are looking for.
It is a tool to run the snippets of code (aka examples) in a text file and check their outputs. It is like Python's doctests but it works for [Javascript](https://byexamples.github.io/byexample/languages/javascript), Ruby, Python and others (even for C and C++).
The Javascript examples can be written in a README.md like:
```
```javascript
1 + 2
out:
3
```
```
or like:
```
```javascript
> 1 + 2
3
```
```
Then, you run them from the command line:
```
$ byexample -l javascript README.md
[PASS] Pass: 2 Fail: 0 Skip: 0
```
And that's it. The full documentation of the tool can be found [here](https://byexamples.github.io/byexample/) and [here](https://byexamples.github.io/byexample/overview/usage), and the particular comments and limitations for Javascript is [here](https://byexamples.github.io/byexample/languages/javascript).
*Disclaimer:* I'm the author of [byexample](https://byexamples.github.io/byexample/) and I created it for the *same reason* that **rmharrison** wrote in his question.
Like him, my documentation was "out of sync" from time to time and the only way to notice was to run the examples by hand. For that reason I created this tool to automatically check and validate the docs.
It is really useful to me; I really hope that it would be useful to others. | Try [markdown-doctest](https://github.com/Widdershin/markdown-doctest):
```
npm install markdown-doctest
```
Insert this in your markdown file (i.e. README.md):
```
```js
var a = 5;
var b = 10;
console.log(a + c);
```
```
And run `markdown-doctest`:
```
$ markdown-doctest
x..
Failed - README.md:32:17
evalmachine.<anonymous>:7
console.log(a + c);
^
ReferenceError: c is not defined
``` |
52,528,984 | I have recently added a JavaScript function to my page that switches the background image of my page at intervals of 5 seconds. There is an input form in that page as well, and when I click the Submit button, the switching stops and goes back to the first image, and the switching starts all over again.
How do I keep the background image continuously switching regardless of what I do in that page?
I think I can fix this with localStorage property, but I am not sure how to implement that in this particular code, any ideas?
```
<body class="main">
<div class ="up">
<img src='images/usa.jpg' id="circle"/>
</div>
<script>
var image_tracker = 'usa';
function change(){
var image = document.getElementById('circle');
if(image_tracker=='usa'){
image.src = 'images/O_Square_P1.jpg';
image_tracker = 'uthant';
}
else if(image_tracker=='uthant'){
image.src = 'images/U_Thant_PIC_3.jpg';
image_tracker = 'oasis';
}
else if(image_tracker=='oasis'){
image.src = 'images/usa.jpg';
image_tracker= 'usa';
}
}
var timer = setInterval('change();',5000);
</script>
</body>
``` | 2018/09/27 | [
"https://Stackoverflow.com/questions/52528984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10357600/"
] | Maybe [byexample](https://byexamples.github.io/byexample/) is what you are looking for.
It is a tool to run the snippets of code (aka examples) in a text file and check their outputs. It is like Python's doctests but it works for [Javascript](https://byexamples.github.io/byexample/languages/javascript), Ruby, Python and others (even for C and C++).
The Javascript examples can be written in a README.md like:
```
```javascript
1 + 2
out:
3
```
```
or like:
```
```javascript
> 1 + 2
3
```
```
Then, you run them from the command line:
```
$ byexample -l javascript README.md
[PASS] Pass: 2 Fail: 0 Skip: 0
```
And that's it. The full documentation of the tool can be found [here](https://byexamples.github.io/byexample/) and [here](https://byexamples.github.io/byexample/overview/usage), and the particular comments and limitations for Javascript is [here](https://byexamples.github.io/byexample/languages/javascript).
*Disclaimer:* I'm the author of [byexample](https://byexamples.github.io/byexample/) and I created it for the *same reason* that **rmharrison** wrote in his question.
Like him, my documentation was "out of sync" from time to time and the only way to notice was to run the examples by hand. For that reason I created this tool to automatically check and validate the docs.
It is really useful to me; I really hope that it would be useful to others. | For Python, there is [exdown](https://github.com/nschloe/exdown), a small helper tool of mine. Install with
```
pip install exdown
```
and test your snippets with
```py
import exdown
import pytest
@pytest.mark.parametrize("string, lineno", exdown.extract("README.md"))
def test_readme(string, lineno):
exec(string)
``` |
52,528,984 | I have recently added a JavaScript function to my page that switches the background image of my page at intervals of 5 seconds. There is an input form in that page as well, and when I click the Submit button, the switching stops and goes back to the first image, and the switching starts all over again.
How do I keep the background image continuously switching regardless of what I do in that page?
I think I can fix this with localStorage property, but I am not sure how to implement that in this particular code, any ideas?
```
<body class="main">
<div class ="up">
<img src='images/usa.jpg' id="circle"/>
</div>
<script>
var image_tracker = 'usa';
function change(){
var image = document.getElementById('circle');
if(image_tracker=='usa'){
image.src = 'images/O_Square_P1.jpg';
image_tracker = 'uthant';
}
else if(image_tracker=='uthant'){
image.src = 'images/U_Thant_PIC_3.jpg';
image_tracker = 'oasis';
}
else if(image_tracker=='oasis'){
image.src = 'images/usa.jpg';
image_tracker= 'usa';
}
}
var timer = setInterval('change();',5000);
</script>
</body>
``` | 2018/09/27 | [
"https://Stackoverflow.com/questions/52528984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10357600/"
] | Try [markdown-doctest](https://github.com/Widdershin/markdown-doctest):
```
npm install markdown-doctest
```
Insert this in your markdown file (i.e. README.md):
```
```js
var a = 5;
var b = 10;
console.log(a + c);
```
```
And run `markdown-doctest`:
```
$ markdown-doctest
x..
Failed - README.md:32:17
evalmachine.<anonymous>:7
console.log(a + c);
^
ReferenceError: c is not defined
``` | For Python, there is [exdown](https://github.com/nschloe/exdown), a small helper tool of mine. Install with
```
pip install exdown
```
and test your snippets with
```py
import exdown
import pytest
@pytest.mark.parametrize("string, lineno", exdown.extract("README.md"))
def test_readme(string, lineno):
exec(string)
``` |
67,431,995 | I'm new to JavaScript. I need help getting on with my code. Right now, the font and color in `label_1` and `label_2` are changing. I want only the `h2` under the div class `task-box` to change color and font. How should I proceed?
```js
function swap_color_and_font() {
document.getElementById("label_1").style.color = "red";
document.getElementById("label_2").style.fontFamily = "courier ";
}
```
```html
<div class="task_box">
<h2>Change font and color</h2>
<p> Lorem ipsum dolor sit amet </p>
<p>"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud"Lorem ipsum dolor sit amet, consectetur adipiscing elit. </p>
<form>
<fieldset>
<label for="font_color" id="label_1" </label><br/>
<input type="text" id="font_color" value="" /><br/>
<label for="font_color" id="label_2" </label><br/>
<input type="text" id="font_family" value="" /><br/>
<input onclick="swap_color_and_font()" value="Change font and color" type="button" />
</fieldset>
</form>
``` | 2021/05/07 | [
"https://Stackoverflow.com/questions/67431995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | There is an HTML and JS comprehension mistake here.
First, use [document.querySelector](https://developer.mozilla.org/fr/docs/Web/API/Document/querySelector) instead of [document.getElementById](https://developer.mozilla.org/fr/docs/Web/API/Document/getElementById) because it's much easier to use and more maintainable.
`document.querySelector()` takes a string parameter, which is a CSS-style selector (the same syntax CSS uses).
So, since we want to change the h2 tag here, pass "h2" as the parameter.
*Maybe it would be better to use an HTML id (refer to tutorials).*
And by the way, don't use `<br>` tags like this; prefer CSS instead (refer to tutorials).
```js
// Here your javascript code should be in script tags if it's not in an other document
function swap_color_and_font() {
document.querySelector("h2").style.color = "red";
document.querySelector("h2").style.fontFamily = "courier ";
}
```
```html
<div class="task_box">
<h2>Change font and color</h2>
<p> Lorem ipsum dolor sit amet </p>
<p>"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud"Lorem ipsum dolor sit amet, consectetur adipiscing elit. </p>
<form>
<!-- apply css rules on fieldset like : display: flex; flex-direction: column; -->
<fieldset>
<label for="font_color">Font color</label><br/>
<input type="text" id="font_color" value="" /><br/>
<label for="font_family">Font</label><br/>
<input type="text" id="font_family" value="" /><br/>
<input onclick="swap_color_and_font()" value="Change font and color" type="button" />
</fieldset>
</form>
``` | There are some broken tags in your HTML and your Javascript isn't in script tags.
You can add an 'id' to your h2 element and select it with 'getElementById', or use the 'querySelector' function. Make sure to properly close your 'task\_box' div.
Here is a fixed example:
```
<!DOCTYPE html>
<html>
<head>
<title>Stackoverflow Example</title>
</head>
<body>
<div class="task_box"> <!-- Start of div class with classname 'task_box' -->
<h2 id="h2-with-id">Change font and color</h2>
<p> Lorem ipsum dolor sit amet </p>
<p>"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud"Lorem ipsum dolor sit amet, consectetur adipiscing elit. </p>
<form>
<fieldset>
<!-- Make sure to properly define the label tags -->
<label for="font_color" id="label_1">Label 1</label><br/>
<input type="text" id="font_color" value=""/><br/>
<label for="font_color" id="label_2">Label 2</label><br/>
<input type="text" id="font_family" value=""/><br/>
<input onclick="swap_color_and_font()" value="Change font and color" type="button"/>
</fieldset>
</form>
</div> <!-- End of div class with classname 'task_box' -->
<div class="task_box">
<h2>Second H2</h2>
</div>
<!-- Create a Script tag to add Javascript code in document -->
<script>
function swap_color_and_font() {
// Get Element By ID
document.getElementById("h2-with-id").style.color="red";
// Get Element by Query Selection (Select all h2 elements within the class 'task_box')
document.querySelector(".task_box h2").style.fontFamily="courier";
// Loop trough all h2 elements that are in classes 'task_box'
var allH2Elements = document.querySelectorAll(".task_box h2");
allH2Elements.forEach(function(h2Element) {
h2Element.style.color="red";
h2Element.style.fontFamily="courier";
});
}
</script>
</body>
</html>
``` |
18,665,748 | Basically, I would like to show `#pagebox` when #about is clicked. At the minute `#about` is a link, I don't know if this affects anything.
The CSS code for `#pagebox` (`#about` doesn't have any)
```
#pagebox{
z-index:99;
background-color:white;
position:fixed;
width:700px;
margin-left:-350px;
margin-top:-300px;
top:50%;
left:50%;
height:600px;
visibility:hidden;
}
```
and this is the script I used (from another question's answer):
```
<script language="JavaScript" type="text/javascript">
$(document).ready(function(){
$('#about').on('click', function(){
$('#pagebox').show();
});
</script>
```
By the way, I don't know if this makes any difference but I'm coding on Tumblr. Thank you all for your help. | 2013/09/06 | [
"https://Stackoverflow.com/questions/18665748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2403194/"
] | You must use display:none; not visibility:hidden; here's a [Fiddle](http://jsfiddle.net/mdesdev/mMkNU/)
**CSS**
```
#pagebox{
z-index:99;
background-color:white;
position:fixed;
width:700px;
margin-left:-350px;
margin-top:-300px;
top:50%;
left:50%;
height:600px;
display: none;
}
```
**jQuery**
```
$(function() {
$('#about').on('click', function(){
$('#pagebox').show('slow');
});
});
```
or, if you want an on/off switch:
```
$(function() {
$('#about').on('click', function(){
$('#pagebox').toggle('slow');
});
});
``` | OK, I fixed your HTML5 errors and duplicate ids, added alt attributes to the images, added the charset, fixed the doctype, closed `<head>`, and closed a bunch of unclosed links, but you still have some inline CSS - work on that. Please contact me at milan@mdesdev.net; the source is a little too big to post here. |
10,594,900 | What is the difference between a REST API and a normal API (which prints a JSON response)? | 2012/05/15 | [
"https://Stackoverflow.com/questions/10594900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1355857/"
] | There is no difference at all. REST describes a way of interacting with an HTTP server, not what the server should return in response. Most web apps interact with the server side by POST or GET requests, with any additional information needed to fulfil the request in a form submission for POST or the query string for GET. So if you want to delete something from the server, such an app typically does a POST with a form that contains data specifying the resource along with an instruction to delete it.
However, HTTP implements methods (also known as verbs) other than GET or POST. It also implements, amongst others, HEAD (return the same headers you would have done for a GET, but with no response body), PUT (Take the request body and store its content at whatever URL the PUT request was made to), and DELETE (Delete whatever resource exists at the specified URL). A REST interface simply makes use of these additional verbs to convey the meaning of the request to the server.
Browsers typically only support GET and POST for "normal" (non-XHR) requests, but tools like Curl can issue the full set of HTTP verbs. You can also use additional verbs with XHR-based techniques such as AJAX.
You will still have to provide a traditional non-REST API for browsers to use, unless you're making javascript and XHR support a requirement for using your app. | REST mostly just refers to using the HTTP protocol the way it was intended. Use the `GET` HTTP method on a URL to retrieve information, possibly in different formats based on HTTP `Accept` headers. Use the `POST` HTTP method to create new items on the server, `PUT` to edit existing items, `DELETE` to delete them. Make the API idempotent, i.e. repeating the same query with the same information should yield the same result. Structure your URLs in a hierarchical manner etc.
REST is just a guiding principle for how to use URLs and the HTTP protocol to structure an API. It says nothing about return formats, which may just as well be JSON.
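As a rough illustration (the URLs below are made up, not taken from either answer), the same "user" resource handled REST-style versus a verb-in-the-URL, RPC-style API:
```
REST-style                     non-REST / RPC-style
GET    /users/42               POST /api?action=getUser&id=42
POST   /users                  POST /api?action=createUser
PUT    /users/42               POST /api?action=updateUser&id=42
DELETE /users/42               POST /api?action=deleteUser&id=42
```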
That is opposed to, for example, APIs that send binary or XML messages to a designated port, not using differences in HTTP methods or URLs at all. |
30,112,178 | I am trying to upload a file to JIRA via its REST API using the python lib found here: [jira python documentation](http://jira-python.readthedocs.org/en/latest/)
It seems pretty straightforward. I wrote a method that allows me to pass an issue and attach a file to it, and one that lets me retrieve an issue from JIRA.
```
from jira.client import JIRA
class JIRAReport (object):
def attach(self,issue):
print 'Attaching... '
attachment = self.jira.add_attachment(issue, attachment=self.reportpath, filename='Report.xlsx')
print 'Success!'
def getissue(self):
if not self.issue == None:
return self.jira.issue(self.issue)
return None
```
then in my main script I am getting the issue and attaching the file to an issue I retrieved from JIRA
```
report = JiraReport()
report.issue = 'ProjectKey-1'
report.reportpath = '../report_upload/tmp/' + filename
issue = report.getissue()
if not issue == None:
report.attach(issue)
else:
print "No Issue with Key Found"
```
I am able to get the issue/create issues if needed but when using the `self.jira.add_attachment()` method I am getting 405 Method Not Allowed.
The file exists and is able to be opened.
Here is the add\_attachment() method from the [source](https://bitbucket.org/bspeakmon/jira-python/src/6897f63cf459eeb98c80638273ffa3ba8f99bd70/jira/client.py?at=master#cl-323) code:
```
def add_attachment(self, issue, attachment, filename=None):
"""
Attach an attachment to an issue and returns a Resource for it.
The client will *not* attempt to open or validate the attachment; it expects a file-like object to be ready
for its use. The user is still responsible for tidying up (e.g., closing the file, killing the socket, etc.)
:param issue: the issue to attach the attachment to
:param attachment: file-like object to attach to the issue, also works if it is a string with the filename.
:param filename: optional name for the attached file. If omitted, the file object's ``name`` attribute
is used. If you aquired the file-like object by any other method than ``open()``, make sure
that a name is specified in one way or the other.
:rtype: an Attachment Resource
"""
if isinstance(attachment, string_types):
attachment = open(attachment, "rb")
# TODO: Support attaching multiple files at once?
url = self._get_url('issue/' + str(issue) + '/attachments')
fname = filename
if not fname:
fname = os.path.basename(attachment.name)
content_type = mimetypes.guess_type(fname)[0]
if not content_type:
content_type = 'application/octet-stream'
files = {
'file': (fname, attachment, content_type)
}
r = self._session.post(url, files=files, headers=self._options['headers'])
raise_on_error(r)
attachment = Attachment(self._options, self._session, json.loads(r.text)[0])
return attachment
``` | 2015/05/07 | [
"https://Stackoverflow.com/questions/30112178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2517452/"
] | Try the code below:
```
UPDATE `{%TABLE_PREFIX%}usermeta` SET `meta_key` = replace(`meta_key`, '{%OLD_TABLE_PREFIX%}', '{%NEW_TABLE_PREFIX%}');
UPDATE `{%TABLE_PREFIX%}options` SET `option_name` = replace(`option_name`, '{%OLD_TABLE_PREFIX%}', '{%NEW_TABLE_PREFIX%}');
```
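For illustration only (these prefix values are made up): assuming the tables themselves already carry the lowercase prefix `wp_` while the stored keys still carry the old `WP_` prefix, the placeholders above would be filled in like this:
```
UPDATE `wp_usermeta` SET `meta_key` = replace(`meta_key`, 'WP_', 'wp_');
UPDATE `wp_options` SET `option_name` = replace(`option_name`, 'WP_', 'wp_');
```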
For further detail
[You do not have sufficient permissions to access this page without any change](https://stackoverflow.com/questions/13815461/you-do-not-have-sufficient-permissions-to-access-this-page-without-any-change) | Remove the debug code and make sure all the folder permissions are set to 755 and all the file permissions are set to 644. If they are all correct and you're still getting the error, check your .htaccess file for any lockouts.
Alternatively, if neither of those options work, I would suggest [disabling the plugins](https://www.ostraining.com/blog/wordpress/disable-a-wordpress-plugin/) on the site and try to gain access again. |
30,112,178 | I am trying to upload a file to JIRA via its REST API using the python lib found here: [jira python documentation](http://jira-python.readthedocs.org/en/latest/)
It seems pretty straightforward. I wrote a method that allows me to pass an issue and attach a file to it, and one that lets me retrieve an issue from JIRA.
```
from jira.client import JIRA
class JIRAReport (object):
def attach(self,issue):
print 'Attaching... '
attachment = self.jira.add_attachment(issue, attachment=self.reportpath, filename='Report.xlsx')
print 'Success!'
def getissue(self):
if not self.issue == None:
return self.jira.issue(self.issue)
return None
```
then in my main script I am getting the issue and attaching the file to an issue I retrieved from JIRA
```
report = JiraReport()
report.issue = 'ProjectKey-1'
report.reportpath = '../report_upload/tmp/' + filename
issue = report.getissue()
if not issue == None:
report.attach(issue)
else:
print "No Issue with Key Found"
```
I am able to get the issue/create issues if needed but when using the `self.jira.add_attachment()` method I am getting 405 Method Not Allowed.
The file exists and is able to be opened.
Here is the add\_attachment() method from the [source](https://bitbucket.org/bspeakmon/jira-python/src/6897f63cf459eeb98c80638273ffa3ba8f99bd70/jira/client.py?at=master#cl-323) code:
```
def add_attachment(self, issue, attachment, filename=None):
"""
Attach an attachment to an issue and returns a Resource for it.
The client will *not* attempt to open or validate the attachment; it expects a file-like object to be ready
for its use. The user is still responsible for tidying up (e.g., closing the file, killing the socket, etc.)
:param issue: the issue to attach the attachment to
:param attachment: file-like object to attach to the issue, also works if it is a string with the filename.
:param filename: optional name for the attached file. If omitted, the file object's ``name`` attribute
is used. If you aquired the file-like object by any other method than ``open()``, make sure
that a name is specified in one way or the other.
:rtype: an Attachment Resource
"""
if isinstance(attachment, string_types):
attachment = open(attachment, "rb")
# TODO: Support attaching multiple files at once?
url = self._get_url('issue/' + str(issue) + '/attachments')
fname = filename
if not fname:
fname = os.path.basename(attachment.name)
content_type = mimetypes.guess_type(fname)[0]
if not content_type:
content_type = 'application/octet-stream'
files = {
'file': (fname, attachment, content_type)
}
r = self._session.post(url, files=files, headers=self._options['headers'])
raise_on_error(r)
attachment = Attachment(self._options, self._session, json.loads(r.text)[0])
return attachment
``` | 2015/05/07 | [
"https://Stackoverflow.com/questions/30112178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2517452/"
] | Here's the exact fix for this error. Go to
>
> PhpMyAdmin > [your database] > prefix\_usermeta
>
>
>
Now search for `prefix_user_level` and `prefix_capabilities`
rename both of these to match your database prefix *(the same, but in lowercase letters, probably)*.
### Why it happens:
While installing WordPress, you mistakenly set your prefix in all caps, or with at least one capital letter. However, when you export your database, everything is automatically turned into lowercase except what is inside the tables. So you either need to manually change those caps to lowercase or change your prefix back to caps. | Remove the debug code and make sure all the folder permissions are set to 755 and all the file permissions are set to 644. If they are all correct and you're still getting the error, check your .htaccess file for any lockouts.
Alternatively, if neither of those options work, I would suggest [disabling the plugins](https://www.ostraining.com/blog/wordpress/disable-a-wordpress-plugin/) on the site and try to gain access again. |
30,112,178 | I am trying to upload a file to JIRA via its REST API using the python lib found here: [jira python documentation](http://jira-python.readthedocs.org/en/latest/)
It seems pretty straightforward. I wrote a method that allows me to pass an issue and attach a file to it, and one that lets me retrieve an issue from JIRA.
```
from jira.client import JIRA
class JIRAReport (object):
def attach(self,issue):
print 'Attaching... '
attachment = self.jira.add_attachment(issue, attachment=self.reportpath, filename='Report.xlsx')
print 'Success!'
def getissue(self):
if not self.issue == None:
return self.jira.issue(self.issue)
return None
```
then in my main script I am getting the issue and attaching the file to an issue I retrieved from JIRA
```
report = JiraReport()
report.issue = 'ProjectKey-1'
report.reportpath = '../report_upload/tmp/' + filename
issue = report.getissue()
if not issue == None:
report.attach(issue)
else:
print "No Issue with Key Found"
```
I am able to get the issue/create issues if needed but when using the `self.jira.add_attachment()` method I am getting 405 Method Not Allowed.
The file exists and is able to be opened.
Here is the add\_attachment() method from the [source](https://bitbucket.org/bspeakmon/jira-python/src/6897f63cf459eeb98c80638273ffa3ba8f99bd70/jira/client.py?at=master#cl-323) code:
```
def add_attachment(self, issue, attachment, filename=None):
"""
Attach an attachment to an issue and returns a Resource for it.
The client will *not* attempt to open or validate the attachment; it expects a file-like object to be ready
for its use. The user is still responsible for tidying up (e.g., closing the file, killing the socket, etc.)
:param issue: the issue to attach the attachment to
:param attachment: file-like object to attach to the issue, also works if it is a string with the filename.
:param filename: optional name for the attached file. If omitted, the file object's ``name`` attribute
is used. If you aquired the file-like object by any other method than ``open()``, make sure
that a name is specified in one way or the other.
:rtype: an Attachment Resource
"""
if isinstance(attachment, string_types):
attachment = open(attachment, "rb")
# TODO: Support attaching multiple files at once?
url = self._get_url('issue/' + str(issue) + '/attachments')
fname = filename
if not fname:
fname = os.path.basename(attachment.name)
content_type = mimetypes.guess_type(fname)[0]
if not content_type:
content_type = 'application/octet-stream'
files = {
'file': (fname, attachment, content_type)
}
r = self._session.post(url, files=files, headers=self._options['headers'])
raise_on_error(r)
attachment = Attachment(self._options, self._session, json.loads(r.text)[0])
return attachment
``` | 2015/05/07 | [
"https://Stackoverflow.com/questions/30112178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2517452/"
] | If its not a prefix issue, then try resetting the roles for WordPress to its defaults, I used the plugin 'Capability Manager Enhanced' to do just that and it worked like a charm :) | Remove the debug code and make sure all the folder permissions are set to 755 and all the file permissions are set to 644. If they are all correct and you're still getting the error, check your .htaccess file for any lockouts.
Alternatively, if neither of those options work, I would suggest [disabling the plugins](https://www.ostraining.com/blog/wordpress/disable-a-wordpress-plugin/) on the site and try to gain access again. |
30,112,178 | I am trying to upload a file to JIRA via its REST API using the python lib found here: [jira python documentation](http://jira-python.readthedocs.org/en/latest/)
It seems pretty straightforward. I wrote a method that allows me to pass an issue and attach a file to it, and one that lets me retrieve an issue from JIRA.
```
from jira.client import JIRA

class JIRAReport (object):
    def attach(self, issue):
        print 'Attaching... '
        attachment = self.jira.add_attachment(issue, attachment=self.reportpath, filename='Report.xlsx')
        print 'Success!'

    def getissue(self):
        if not self.issue == None:
            return self.jira.issue(self.issue)
        return None
```
Then in my main script I am getting the issue and attaching the file to an issue I retrieved from JIRA:
```
report = JiraReport()
report.issue = 'ProjectKey-1'
report.reportpath = '../report_upload/tmp/' + filename
issue = report.getissue()
if not issue == None:
    report.attach(issue)
else:
    print "No Issue with Key Found"
```
I am able to get the issue/create issues if needed but when using the `self.jira.add_attachment()` method I am getting 405 Method Not Allowed.
The file exists and is able to be opened.
Here is the add\_attachment() method from the [source](https://bitbucket.org/bspeakmon/jira-python/src/6897f63cf459eeb98c80638273ffa3ba8f99bd70/jira/client.py?at=master#cl-323) code:
```
def add_attachment(self, issue, attachment, filename=None):
    """
    Attach an attachment to an issue and returns a Resource for it.
    The client will *not* attempt to open or validate the attachment; it expects a file-like object to be ready
    for its use. The user is still responsible for tidying up (e.g., closing the file, killing the socket, etc.)
    :param issue: the issue to attach the attachment to
    :param attachment: file-like object to attach to the issue, also works if it is a string with the filename.
    :param filename: optional name for the attached file. If omitted, the file object's ``name`` attribute
        is used. If you aquired the file-like object by any other method than ``open()``, make sure
        that a name is specified in one way or the other.
    :rtype: an Attachment Resource
    """
    if isinstance(attachment, string_types):
        attachment = open(attachment, "rb")
    # TODO: Support attaching multiple files at once?
    url = self._get_url('issue/' + str(issue) + '/attachments')
    fname = filename
    if not fname:
        fname = os.path.basename(attachment.name)
    content_type = mimetypes.guess_type(fname)[0]
    if not content_type:
        content_type = 'application/octet-stream'
    files = {
        'file': (fname, attachment, content_type)
    }
    r = self._session.post(url, files=files, headers=self._options['headers'])
    raise_on_error(r)
    attachment = Attachment(self._options, self._session, json.loads(r.text)[0])
    return attachment
``` | 2015/05/07 | [
"https://Stackoverflow.com/questions/30112178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2517452/"
] | Try below code
```
UPDATE `{%TABLE_PREFIX%}usermeta` SET `meta_key` = replace(`meta_key`, '{%OLD_TABLE_PREFIX%}', '{%NEW_TABLE_PREFIX%}');
UPDATE `{%TABLE_PREFIX%}options` SET `option_name` = replace(`option_name`, '{%OLD_TABLE_PREFIX%}', '{%NEW_TABLE_PREFIX%}');
```
For further detail
[You do not have sufficient permissions to access this page without any change](https://stackoverflow.com/questions/13815461/you-do-not-have-sufficient-permissions-to-access-this-page-without-any-change) | Here's the exact fix for this error. Go to
>
> PhpMyAdmin > [your database] > prefix\_usermeta
>
>
>
Now search for `prefix_user_level` and `prefix_capabilities`, and rename both so that they match your database prefix *(most likely the same name in lowercase letters)*.
### Why it happens:
While installing WordPress you have mistakenly set your prefix in all caps, or with at least one capital letter. However, when you export your database, everything is automatically turned into lowercase except the values stored inside the tables. So you either need to manually change those capitalized keys to lowercase, or change your prefix back to caps. |
30,112,178 | I am trying to upload a file to JIRA via its REST API using the python lib found here: [jira python documentation](http://jira-python.readthedocs.org/en/latest/)
It seems pretty straightforward: I wrote a method that allows me to pass an issue and then attaches a file to it, and one that lets me retrieve an issue from JIRA.
```
from jira.client import JIRA

class JIRAReport (object):
    def attach(self, issue):
        print 'Attaching... '
        attachment = self.jira.add_attachment(issue, attachment=self.reportpath, filename='Report.xlsx')
        print 'Success!'

    def getissue(self):
        if not self.issue == None:
            return self.jira.issue(self.issue)
        return None
```
Then in my main script I am getting the issue and attaching the file to an issue I retrieved from JIRA:
```
report = JiraReport()
report.issue = 'ProjectKey-1'
report.reportpath = '../report_upload/tmp/' + filename
issue = report.getissue()
if not issue == None:
    report.attach(issue)
else:
    print "No Issue with Key Found"
```
I am able to get the issue/create issues if needed but when using the `self.jira.add_attachment()` method I am getting 405 Method Not Allowed.
The file exists and is able to be opened.
Here is the add\_attachment() method from the [source](https://bitbucket.org/bspeakmon/jira-python/src/6897f63cf459eeb98c80638273ffa3ba8f99bd70/jira/client.py?at=master#cl-323) code:
```
def add_attachment(self, issue, attachment, filename=None):
    """
    Attach an attachment to an issue and returns a Resource for it.
    The client will *not* attempt to open or validate the attachment; it expects a file-like object to be ready
    for its use. The user is still responsible for tidying up (e.g., closing the file, killing the socket, etc.)
    :param issue: the issue to attach the attachment to
    :param attachment: file-like object to attach to the issue, also works if it is a string with the filename.
    :param filename: optional name for the attached file. If omitted, the file object's ``name`` attribute
        is used. If you aquired the file-like object by any other method than ``open()``, make sure
        that a name is specified in one way or the other.
    :rtype: an Attachment Resource
    """
    if isinstance(attachment, string_types):
        attachment = open(attachment, "rb")
    # TODO: Support attaching multiple files at once?
    url = self._get_url('issue/' + str(issue) + '/attachments')
    fname = filename
    if not fname:
        fname = os.path.basename(attachment.name)
    content_type = mimetypes.guess_type(fname)[0]
    if not content_type:
        content_type = 'application/octet-stream'
    files = {
        'file': (fname, attachment, content_type)
    }
    r = self._session.post(url, files=files, headers=self._options['headers'])
    raise_on_error(r)
    attachment = Attachment(self._options, self._session, json.loads(r.text)[0])
    return attachment
``` | 2015/05/07 | [
"https://Stackoverflow.com/questions/30112178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2517452/"
] | Try below code
```
UPDATE `{%TABLE_PREFIX%}usermeta` SET `meta_key` = replace(`meta_key`, '{%OLD_TABLE_PREFIX%}', '{%NEW_TABLE_PREFIX%}');
UPDATE `{%TABLE_PREFIX%}options` SET `option_name` = replace(`option_name`, '{%OLD_TABLE_PREFIX%}', '{%NEW_TABLE_PREFIX%}');
```
For further detail
[You do not have sufficient permissions to access this page without any change](https://stackoverflow.com/questions/13815461/you-do-not-have-sufficient-permissions-to-access-this-page-without-any-change) | If it's not a prefix issue, then try resetting the roles for WordPress to its defaults; I used the plugin 'Capability Manager Enhanced' to do just that and it worked like a charm :) |
38,962,174 | ```js
var myset = new Set();
myset.add({ key: 123, value: 100 });
var has = myset.has({ key: 123, value: 100 });
console.log(has); // false
var obj = {
key: 456,
value: 200
};
myset.add(obj);
has = myset.has(obj);
console.log(has); // true
has = myset.has(x => x.key === 123);
console.log(has); // false
```
The problem in this case: I just add `{ key: 123, value: 100 }` to `myset`, why doesn't it contain `{ key: 123, value: 100 }`?
Another case, if I use `obj` instead of `{ key: 123, value: 100 }`, it would return `true`.
[Set.prototype.has()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set/has) says:
>
> The has() method returns a boolean indicating whether an element with the specified value exists in a Set object or not.
>
>
>
But that doesn't explain what the `specified value` actually is.
Clearly, in this case `{ key: 123, value: 100 }` and `{ key: 123, value: 100 }` look the same, and yet I'm getting `false`. So what does `specified` mean here?
And the second question: why don't they support a `predicate` in the `has()` method?
In my example, it's harder to search if I use `for...of...`:
```
for (let obj of myset) {
if (obj.key === 123) return true;
}
```
While it could be a one-liner with a predicate:
```
has = myset.has(x => x.key === 123)
```
So, should it be improved for future? | 2016/08/15 | [
"https://Stackoverflow.com/questions/38962174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10550549/"
] | [`Set.prototype.has`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set/has) doesn't find the object because it tests using [value equality](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set#Value_equality), meaning:
```
{ key: 123 } !== { key: 123 } // true
```
If you want to be able to find an item based on a predicate, you will have to add that function manually. The function probably doesn't exist already because a `Set` is only efficient when you need fast lookups without iteration. If you want to iterate values, use an array instead.
Here's how you could implement it just as [`Array.prototype.find`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find):
```
Set.prototype.find = function () {
return Array.prototype.find.apply([...this], arguments);
};
``` | `{ key: 123, value: 100 } === { key: 123, value: 100 }` is `false` because JavaScript compares objects by reference, not by their contents. Each object literal creates a new object; they may hold the same values, but they are still different objects that just happen to look alike.
```
var a = {};
var b = a;
a === b; // true
```
In this example you get `true` because now you are comparing the same object. You can tell that `a` and `b` are the same object because changes in `a` are reflected in `b`.
```
a.x = 1;
b.x === 1; // true
```
With `myset.has(x => x.key === 123)` you are asking whether the set contains this new lambda that you just created. It would be nice if `has` used your lambda to check the elements of the set, but unfortunately the method does not perform that kind of check. |
41,187,073 | ```
{
"Actor": {
"knownlanguages": [
"English"
]
}
}
```
This JSON is stored in a MySQL column of type JSON named `data`.
My question is how to check whether `knownlanguages` key contains value `English` from JSON datatype of MySQL using query? | 2016/12/16 | [
"https://Stackoverflow.com/questions/41187073",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6355132/"
] | Easy:
```
SELECT * FROM table WHERE JSON_CONTAINS(json, '"English"', "$.Actor.knownlanguages")
```
or (depending on what you have to do):
```
SELECT JSON_CONTAINS(json, '"English"', "$.Actor.knownlanguages") FROM table
```
reference: <https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html> | you can use the `like` keyword with the `%` wildcard
```
SELECT * FROM table where data like "%English%"
```
**NOTE:**
this won't search inside the JSON field `knownlanguages`; it matches against the whole `data` column |
11,834,048 | I am trying to make a page with CSS with the little bit of skill I have been learning. I have the title at the top of the page and it looks good, but when you resize the window the title overlaps the image border I have. I just found out that it doesn't display well in Firefox. How could I fix this? ![enter image description here](https://i.stack.imgur.com/LGpAM.jpg)
```
<!DOCTYPE HTML PUBLIC "-//W3C//DTM HTML 4.0 Transitional//EN"
"http://www.w3.org/TR/REC-html40/loose.dtd">
<html>
<head>
<title>This is my website</title>
</head>
<style type="text/css">
body{
background-image:url('bg-body.jpg');
background-repeat: repeat;
width:967px;
}
h1 {
font-size: 40px;
position:absolute;
top: 47px;
left: 27%;
color:red;
}
.venus {
position:absolute;
top: 160px;
left: 44%;
}
#menu {
position:absolute;
top: 400px;
left: 130px;
color:blue;
height:500px;
width:290px;
background-color:#DCE1CA;
border-top: 12px outset silver;
border-left: 12px outset silver;
border-right: 5px outset silver;
border-bottom: 5px outset silver;
}
li {
line-height:200%;
font-size: large;
list-style-type:none;
list-style-image:none;
}
#textarea {
position:absolute;
top: 400px;
left: 425px;
color:blue;
text-align:left;
height:500px;
width:1149px;
background-color:#DCE1CA;
border-top: 12px outset silver;
border-left: 12px outset silver;
border-right: 5px outset silver;
border-bottom: 5px outset silver;
}
</style>
<body>
<img src="sidebar.png" width="100%" height="2000px" class="sidebar" />
<h1>Welcome to my HTML/CSS/JavaScript Page</h1>
<img src="venus.jpg" width="200px" height="200px" class="venus" />
<div id="menu">
<ul>
<li>Mix of CSS & JS</li>
<li>Your Full Name</li>
<li>How Many Apples</li>
<li>Many Questions</li>
<li>Background Color</li>
<li>My Family Event</li>
<li>Images, Images</li>
</ul>
</div>
<div id="textarea">
Here is my story: I have been learning JavaScript and on the side I have been learning some CSS/HTML.
</div>
</body>
</html>
```
![enter image description here](https://i.stack.imgur.com/6bhqK.jpg)
![enter image description here](https://i.stack.imgur.com/JApN4.jpg) | 2012/08/06 | [
"https://Stackoverflow.com/questions/11834048",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1524354/"
] | I would advise you to set your 'sidebar.png' as a background image, rather than an actual `<img>` in the HTML. `<img>` in the HTML should only be used if it's actual content, like your venus.jpg. The background is considered styling and not content.
That being said, I would try to do something like this:
[Fiddle](http://jsfiddle.net/GS5VJ/)
I've put most of the explanation in the CSS code, trying to make things clear. Perhaps it's a bit advanced, but since you say you are learning CSS... ;-)
The advantage of working with this kind of 'liquid' design is that the corner images will not be distorted, as they are currently. Also, your problem of the title running over the image will never occur again, which was ultimately the goal. | Try setting the width to 100% instead of a fixed width, then set the background-position to fixed. See if that helps. |
11,834,048 | I am trying to make a page with CSS with the little bit of skill I have been learning. I have the title at the top of the page and it looks good, but when you resize the window the title overlaps the image border I have. I just found out that it doesn't display well in Firefox. How could I fix this? ![enter image description here](https://i.stack.imgur.com/LGpAM.jpg)
```
<!DOCTYPE HTML PUBLIC "-//W3C//DTM HTML 4.0 Transitional//EN"
"http://www.w3.org/TR/REC-html40/loose.dtd">
<html>
<head>
<title>This is my website</title>
</head>
<style type="text/css">
body{
background-image:url('bg-body.jpg');
background-repeat: repeat;
width:967px;
}
h1 {
font-size: 40px;
position:absolute;
top: 47px;
left: 27%;
color:red;
}
.venus {
position:absolute;
top: 160px;
left: 44%;
}
#menu {
position:absolute;
top: 400px;
left: 130px;
color:blue;
height:500px;
width:290px;
background-color:#DCE1CA;
border-top: 12px outset silver;
border-left: 12px outset silver;
border-right: 5px outset silver;
border-bottom: 5px outset silver;
}
li {
line-height:200%;
font-size: large;
list-style-type:none;
list-style-image:none;
}
#textarea {
position:absolute;
top: 400px;
left: 425px;
color:blue;
text-align:left;
height:500px;
width:1149px;
background-color:#DCE1CA;
border-top: 12px outset silver;
border-left: 12px outset silver;
border-right: 5px outset silver;
border-bottom: 5px outset silver;
}
</style>
<body>
<img src="sidebar.png" width="100%" height="2000px" class="sidebar" />
<h1>Welcome to my HTML/CSS/JavaScript Page</h1>
<img src="venus.jpg" width="200px" height="200px" class="venus" />
<div id="menu">
<ul>
<li>Mix of CSS & JS</li>
<li>Your Full Name</li>
<li>How Many Apples</li>
<li>Many Questions</li>
<li>Background Color</li>
<li>My Family Event</li>
<li>Images, Images</li>
</ul>
</div>
<div id="textarea">
Here is my story: I have been learning JavaScript and on the side I have been learning some CSS/HTML.
</div>
</body>
</html>
```
![enter image description here](https://i.stack.imgur.com/6bhqK.jpg)
![enter image description here](https://i.stack.imgur.com/JApN4.jpg) | 2012/08/06 | [
"https://Stackoverflow.com/questions/11834048",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1524354/"
] | I would advise you to set your 'sidebar.png' as a background image, rather than an actual `<img>` in the HTML. `<img>` in the HTML should only be used if it's actual content, like your venus.jpg. The background is considered styling and not content.
That being said, I would try to do something like this:
[Fiddle](http://jsfiddle.net/GS5VJ/)
I've put most of the explanation in the CSS code, trying to make things clear. Perhaps it's a bit advanced, but since you say you are learning CSS... ;-)
The advantage of working with this kind of 'liquid' design is that the corner images will not be distorted, as they are currently. Also, your problem of the title running over the image will never occur again, which was ultimately the goal. | If you're worried about what your page will look like when you resize it, you should be setting your heights, widths, and positions by percentage, not pixels. Setting those attributes by percentage scales them when the page resizes.
Ex: `background-size: 100%;` |
49,213,876 | I'm using different Docker containers for API and front-end.
```
frontend:
image: <my_frontend_image>
ports:
- "3000:3000"
api:
restart: always
build: ./<my_api_folder>
ports:
- "9002:9002"
```
On API side I have the endpoint for sending emails:
```
/emails/custom
```
On frontend side I'm trying to send a request to this endpoint:
```
$.ajax({
url: "http://api:9002/api/emails/custom",
dataType: "json",
type: "POST"
...
```
But it doesn't work. It looks like it sends a request to FrontEnd container again.
What is wrong here?
UPDATE:
Maybe this issue is somehow related to my nginx configurations too:
```
location / {
# Define the location of the proxy server to send the request to
# Web it's a name of docker container with frontend.
proxy_pass http://frontend:3000;
# Redefine the header fields that NGINX sends to the upstream server
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Define the maximum file size on file uploads
client_max_body_size 5M;
}
# Setup communication with API container.
location /api {
rewrite "^/api/(.*)$" /$1 break;
proxy_pass http://api:9002;
proxy_redirect off;
proxy_set_header Host $host;
}
``` | 2018/03/10 | [
"https://Stackoverflow.com/questions/49213876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7335432/"
] | You are trying to access the api service externally from the browser, not from within the frontend container, which is where the api hostname is accessible. Port forwarding is done by Docker, so externally you need to hit the host, e.g. `http://localhost:9002` (if you're using Docker for Mac, for example).
A similar question/answer can be found [here](https://stackoverflow.com/questions/48845838/docker-connect-nodejs-container-with-apache-container/48846801#48846801) | Well, the problem was with a final `/` in JS URL.
It's fixed just by replacing:
```
url: "http://api:9002/api/emails/custom",
```
with
```
url: "http://api:9002/api/emails/custom/",
```
Another thing that might be improved - hostname, as @Marko mentioned. |
49,213,876 | I'm using different Docker containers for API and front-end.
```
frontend:
image: <my_frontend_image>
ports:
- "3000:3000"
api:
restart: always
build: ./<my_api_folder>
ports:
- "9002:9002"
```
On API side I have the endpoint for sending emails:
```
/emails/custom
```
On frontend side I'm trying to send a request to this endpoint:
```
$.ajax({
url: "http://api:9002/api/emails/custom",
dataType: "json",
type: "POST"
...
```
But it doesn't work. It looks like it sends a request to FrontEnd container again.
What is wrong here?
UPDATE:
Maybe this issue is somehow related to my nginx configurations too:
```
location / {
# Define the location of the proxy server to send the request to
# Web it's a name of docker container with frontend.
proxy_pass http://frontend:3000;
# Redefine the header fields that NGINX sends to the upstream server
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Define the maximum file size on file uploads
client_max_body_size 5M;
}
# Setup communication with API container.
location /api {
rewrite "^/api/(.*)$" /$1 break;
proxy_pass http://api:9002;
proxy_redirect off;
proxy_set_header Host $host;
}
``` | 2018/03/10 | [
"https://Stackoverflow.com/questions/49213876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7335432/"
] | You are trying to access the api service externally from the browser, not from within the frontend container, which is where the api hostname is accessible. Port forwarding is done by Docker, so externally you need to hit the host, e.g. `http://localhost:9002` (if you're using Docker for Mac, for example).
A similar question/answer can be found [here](https://stackoverflow.com/questions/48845838/docker-connect-nodejs-container-with-apache-container/48846801#48846801) | var url = document.location.protocol + "//" + document.location.hostname + ":9002";
This works for me as well.
But I am curious to know if we can make it work with container name. |
49,213,876 | I'm using different Docker containers for API and front-end.
```
frontend:
image: <my_frontend_image>
ports:
- "3000:3000"
api:
restart: always
build: ./<my_api_folder>
ports:
- "9002:9002"
```
On API side I have the endpoint for sending emails:
```
/emails/custom
```
On frontend side I'm trying to send a request to this endpoint:
```
$.ajax({
url: "http://api:9002/api/emails/custom",
dataType: "json",
type: "POST"
...
```
But it doesn't work. It looks like it sends a request to FrontEnd container again.
What is wrong here?
UPDATE:
Maybe this issue is somehow related to my nginx configurations too:
```
location / {
# Define the location of the proxy server to send the request to
# Web it's a name of docker container with frontend.
proxy_pass http://frontend:3000;
# Redefine the header fields that NGINX sends to the upstream server
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Define the maximum file size on file uploads
client_max_body_size 5M;
}
# Setup communication with API container.
location /api {
rewrite "^/api/(.*)$" /$1 break;
proxy_pass http://api:9002;
proxy_redirect off;
proxy_set_header Host $host;
}
``` | 2018/03/10 | [
"https://Stackoverflow.com/questions/49213876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7335432/"
] | I have looked into your problem more carefully. You can't do what you are trying to do, because the browser always uses the local environment to make ajax requests, and in that environment "api" does not exist; the browser looks at its own hosts file, can't find it, and the request finally fails.
**You need to update your frontend code to make requests based on where it is.**
```
var url = document.location.protocol + "//" + document.location.hostname + ":9002";
var api = url + "/api/emails/custom"
$.ajax({
url: api,
dataType: "json",
type: "get",
});
}
```
You can also correctly resolve "api" to an IP address in your **local hosts** file, but I think this is the more sensible solution. I also made a working version [here](https://github.com/ultrox/ajaxdocker):
**NOTE:** I made change from POST to GET request. | Well, the problem was with a final `/` in JS URL.
It's fixed just by replacing:
```
url: "http://api:9002/api/emails/custom",
```
with
```
url: "http://api:9002/api/emails/custom/",
```
Another thing that might be improved - hostname, as @Marko mentioned. |
49,213,876 | I'm using different Docker containers for API and front-end.
```
frontend:
image: <my_frontend_image>
ports:
- "3000:3000"
api:
restart: always
build: ./<my_api_folder>
ports:
- "9002:9002"
```
On API side I have the endpoint for sending emails:
```
/emails/custom
```
On frontend side I'm trying to send a request to this endpoint:
```
$.ajax({
url: "http://api:9002/api/emails/custom",
dataType: "json",
type: "POST"
...
```
But it doesn't work. It looks like it sends a request to FrontEnd container again.
What is wrong here?
UPDATE:
Maybe this issue is somehow related to my nginx configurations too:
```
location / {
# Define the location of the proxy server to send the request to
# Web it's a name of docker container with frontend.
proxy_pass http://frontend:3000;
# Redefine the header fields that NGINX sends to the upstream server
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Define the maximum file size on file uploads
client_max_body_size 5M;
}
# Setup communication with API container.
location /api {
rewrite "^/api/(.*)$" /$1 break;
proxy_pass http://api:9002;
proxy_redirect off;
proxy_set_header Host $host;
}
``` | 2018/03/10 | [
"https://Stackoverflow.com/questions/49213876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7335432/"
] | I have look into your problem more carefully. You can't do what you tryint to do, because browser really always use local env to make ajax requests, and in that env "api" dose not exist, it look at its own hosts file and can't find it, and finally fail.
**You need to update your frontend code to makes requrest based on where it is.**
```
var url = document.location.protocol + "//" + document.location.hostname + ":9002";
var api = url + "/api/emails/custom"
$.ajax({
url: api,
dataType: "json",
type: "get",
});
}
```
You can also correctly resolve "api" to ip address in your **local hosts** file, but I think this is more sensible solution. I also made working version [here](https://github.com/ultrox/ajaxdocker):
**NOTE:** I made change from POST to GET request. | var url = document.location.protocol + "//" + document.location.hostname + ":9002";
This works for me as well.
But I am curious to know if we can make it work with container name. |
12,806,529 | I'm working on an existing .Net winforms application that has traditionally had crystal reports run against a LAN SQL Server database. I'm trying to get it working with an Azure cloud database.
The existing app uses the server name given by the user at login time to connect and set each table's location etc within the report so the report can be run against any database server rather than only work against the database the report was designed against. Hopefully this is a familiar concept for those who have used Crystal Reports in a windows app before as this is the second company I have worked for that has code similar to the following which loops through each table and sub report in the report to make sure they are all pointing at the specified server:
```
For Each crTable In crTables
CrtableLogoninfo = crTable.LogOnInfo
CrtableLogoninfo.ConnectionInfo = App._CrConnectionInfo
crTable.ApplyLogOnInfo(CrtableLogoninfo)
crTable.Location = App.DBName & ".dbo." & crTable.Location.Substring(crTable.Location.LastIndexOf(".") + 1)
Next
crSections = CrReportDocument.ReportDefinition.Sections
For Each crSection In crSections
crReportObjects = crSection.ReportObjects
For Each crReportObject In crReportObjects
If crReportObject.Kind = ReportObjectKind.SubreportObject Then
crSubreportObject = CType(crReportObject, SubreportObject)
subRepDoc = crSubreportObject.OpenSubreport(crSubreportObject.SubreportName)
crTables = subRepDoc.Database.Tables
For Each crTable In crTables
CrtableLogoninfo = crTable.LogOnInfo
CrtableLogoninfo.ConnectionInfo = App._CrConnectionInfo
crTable.ApplyLogOnInfo(CrtableLogoninfo)
crTable.Location = App.DBName & ".dbo." & crTable.Location.Substring(crTable.Location.LastIndexOf(".") + 1)
Next
End If
Next
Next
```
This has been working for years against various LAN databases but doesn't seem to work with Azure. Crystal at runtime decides that the parameters that have been set by code need to be prompted for from the user, which seems to indicate something has gone wrong with the connection. If you enter parameter values it then errors complaining certain fields do not exist.
I've played about with various calls to \_CrConnectionInfo.LogonProperties.Set("Connection String", "SomeConnectionString") immediately before the code above but at best it proceeds to a point where it says "Operation not yet implemented".
I tried from Crystal Reports directly to connect to the cloud and got that working using the SQL Server Client 11 connection type. In the application when I create an SQL client connection via a call to `_CrConnectionInfo.LogonProperties.Set("Connection String", ""Provider=SQLNCLI;Server=tcp:theAzureDBInstanceName;Password=thePassword;Persist Security Info=True;User ID=thelogin;Initial Catalog=theDBName;Encrypt=yes;")")` or some similar variation it still doesn't work.
Does anyone know how to get Crystal to connect to an Azure database directly from a .Net app? If you have code suggestions either VB.Net or C# samples are fine. | 2012/10/09 | [
"https://Stackoverflow.com/questions/12806529",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/463967/"
] | Well, join players and games, count, group by player, order by count desc (greatest first), and limit to one if you want the first.
```
SELECT p.NID, p.name, COUNT(*)
FROM Players p
INNER JOIN Games g ON g.PID= p.NID
GROUP BY p.NID, p.name
ORDER BY COUNT(*) DESC
LIMIT 1;
``` | I'm guessing there's no counting to be done, just a simple integer value in the Games table indicating the number of games for a player ID...
```
SELECT pl.name FROM Players pl join Games g ON pl.NID=g.GID
ORDER BY g.games DESC
limit 1;
``` |
1,902,494 | Should disabling the back button, backspace/delete key be done in the FLEX App or in JavaScript?
Any suggested solutions?
Thanks! | 2009/12/14 | [
"https://Stackoverflow.com/questions/1902494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/149080/"
] | IMHO you should never take native browser control away from a user. While there are a few users that are going to inadvertently take themselves out of the app, the majority of users are going to be frustrated over not being able to perform an expected functionality (maybe not if you remove backspace, but definitely if you take away the browser back button). Your overall frustration is going to be higher if you remove this functionality than if you leave it in default.
A better option is to open your app in a new tab or window. This takes away the need to remove any functionality and accomplishes your goal in a way that is acceptable to almost all users. | I'd recommend doing it in JavaScript, but there are difficulties with either solution. Different hardware platforms and browsers interpret keyboard signals differently, and I've had trouble capturing them with absolute certainty in Flash before. Just Delete should be okay from either, though JavaScript might give you more flexibility to tweak this based on the user's browser/platform if you needed to. |
18,036,270 | I'm going through the CoderByte exercises and I came across the following problem:
*>Using the JavaScript language, have the function LetterChanges(str) take the str parameter being passed and modify it using the following algorithm. Replace every letter in the string with the letter following it in the alphabet (ie. c becomes d, z becomes a). Then capitalize every vowel in this new string (a, e, i, o, u) and finally return this modified string.*
I wrote it out in JSBin and it worked fine (even te, but in CoderByte it didn't. I want to ask the community if what I wrote is correct and it's an issue on CoderByte, or if my code is wrong and the issue is with JSBin.
The code is as follows:
```
function LetterChanges(str) {
var iLetters = str.split('');
var newStr = [];
for (var i = 0; i < str.length; i++) {
if (/[a-y]/ig.test(iLetters[i])) {
newStr[i] = String.fromCharCode(iLetters[i].charCodeAt(0) + 1);
if (/[aeiou]/ig.test(newStr[i])) {
newStr[i] = newStr[i].toUpperCase();
}
} else if (/[z]/ig.test(iLetters[i])) {
newStr[i] = "A";
} else if (/[^A-Z]/ig.test(iLetters[i])) {
newStr[i] = iLetters[i];
}
}
return newStr.join('');
}
``` | 2013/08/03 | [
"https://Stackoverflow.com/questions/18036270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2649213/"
] | Seems like a bug on their back-end JS runner indeed. As you've stated, your code runs fine and should be accepted. Worth reporting it to their support imo.
Here's an alternative solution [specifying a function as second parameter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace#Specifying_a_function_as_a_parameter) to `.replace()`:
```
function LetterChanges(str) {
return str.replace(/[a-z]/ig, function(c) {
return c.toUpperCase() === 'Z' ? 'A' : String.fromCharCode(c.charCodeAt(0) + 1);
}).replace(/[aeiou]/g, function(c) {
return c.toUpperCase();
});
}
``` | Your code worked just fine for me on [jsfiddle](http://jsfiddle.net/Xotic750/PsKff/) when compared against the following alternative
On CoderByte, your code failed but the following worked. Seems to be a problem on their site.
```
function letterChanges(str) {
var newString = "",
code,
length,
index;
for (index = 0, length = str.length; index < length; index += 1) {
code = str.charCodeAt(index);
switch (code) {
case 90:
code = 65;
break;
case 122:
code = 97
break;
default:
if ((code >= 65 && code < 90) || (code >= 97 && code < 122)) {
code += 1;
}
}
newString += String.fromCharCode(code);
}
return newString.replace(/[aeiou]/g, function (character) {
return character.toUpperCase();
});
}
console.log(LetterChanges("Then capitalize every vowel in this new string (a, e, i, o, u) and finally return this modified string."));
console.log(letterChanges("Then capitalize every vowel in this new string (a, e, i, o, u) and finally return this modified string."));
```
Output
```
UIfO dbqjUbmjAf fwfsz wpxfm jO UIjt Ofx tUsjOh (b, f, j, p, v) bOE gjObmmz sfUvsO UIjt npEjgjfE tUsjOh. fiddle.jshell.net/:70
UIfO dbqjUbmjAf fwfsz wpxfm jO UIjt Ofx tUsjOh (b, f, j, p, v) bOE gjObmmz sfUvsO UIjt npEjgjfE tUsjOh.
``` |
18,036,270 | I'm going through the CoderByte exercises and I came across the following problem:
*>Using the JavaScript language, have the function LetterChanges(str) take the str parameter being passed and modify it using the following algorithm. Replace every letter in the string with the letter following it in the alphabet (ie. c becomes d, z becomes a). Then capitalize every vowel in this new string (a, e, i, o, u) and finally return this modified string.*
I wrote it out in JSBin and it worked fine (even te, but in CoderByte it didn't. I want to ask the community if what I wrote is correct and it's an issue on CoderByte, or if my code is wrong and the issue is with JSBin.
The code is as follows:
```
function LetterChanges(str) {
var iLetters = str.split('');
var newStr = [];
for (var i = 0; i < str.length; i++) {
if (/[a-y]/ig.test(iLetters[i])) {
newStr[i] = String.fromCharCode(iLetters[i].charCodeAt(0) + 1);
if (/[aeiou]/ig.test(newStr[i])) {
newStr[i] = newStr[i].toUpperCase();
}
} else if (/[z]/ig.test(iLetters[i])) {
newStr[i] = "A";
} else if (/[^A-Z]/ig.test(iLetters[i])) {
newStr[i] = iLetters[i];
}
}
return newStr.join('');
}
``` | 2013/08/03 | [
"https://Stackoverflow.com/questions/18036270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2649213/"
] | Seems like a bug on their back-end JS runner indeed. As you've stated, your code runs fine and should be accepted. Worth reporting it to their support imo.
Here's an alternative solution [specifying a function as second parameter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace#Specifying_a_function_as_a_parameter) to `.replace()`:
```
function LetterChanges(str) {
return str.replace(/[a-z]/ig, function(c) {
return c.toUpperCase() === 'Z' ? 'A' : String.fromCharCode(c.charCodeAt(0) + 1);
}).replace(/[aeiou]/g, function(c) {
return c.toUpperCase();
});
}
``` | Just another solution, building on the answer from @Fabrício Matté.
A bit of explanation: the regex `/[a-z]/` picks up the letters from a to z, and each match is replaced by adding one to its ASCII code via `String.fromCharCode(Estr.charCodeAt(0)+1)`; the rest is a matter of finding the vowels with the regex `[aeiou]` and returning them capitalized.
```
function LetterChanges(str) {
return str.replace(/[a-z]/ig, function(Estr) {
return String.fromCharCode(Estr.charCodeAt(0)+1);
}).replace(/[aeiou]/ig, function(readyStr) {
return readyStr.toUpperCase();
})
}
``` |
55,031,745 | I've enabled CORS successfully in development. My Golang back end communicates well with my Angular front end on my local machine. However, I can't figure out how to enable CORS in production (Ubuntu on DigitalOcean). I get this on Firefox:
"Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at <http://localhost:12345/anteroom>. (Reason: CORS request did not succeed)."
I'm running the Golang back end with a systemd unit and serving it at localhost:12345.
I'm running the Angular front end as a build (built with `--prod` flag) using PM2 with angular-http-server, and serving it out of port 8080. This port is behind a firewall. I use Nginx to handle HTTPS traffic for this front end. It listens on port 80 and passes (`proxy_pass`) requests to it at port 8080. The landing page (which requires only a GET request) loads ok in the browser, so this setup seems feasible.
The versions I'm working with: Ubuntu 16.04, PM2 3.3.1, Angular CLI 7.3.4, angular-http-server 1.8.1.
The problem happens when the front end tries to POST JSON data to the back end (localhost:12345/anteroom, as seen in the message above).
I've read that CORS is a server-side issue. So, I've tried enabling it wherever I have a server, that is, in the back end, Nginx, and angular-http-server.
It's enabled in my Golang code:
```
func anteroom(res http.ResponseWriter, req *http.Request) {
res.Header().Set("Access-Control-Allow-Origin", "*")
res.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS")
res.Header().Set("Access-Control-Allow-Headers", "Content-Type")
res.Header().Set("Content-Type", "application/json")
...
}
func main() {
...
# Using Gorilla mux router.
router := mux.NewRouter()
router.HandleFunc("/anteroom", anteroom).Methods("POST", "OPTIONS")
}
```
This successfully enables CORS in development, where serving Golang is just opening its built binary and Angular is served with `ng serve`.
The above isn't enough in production. So, I've tried enabling it with angular-http-server. Note the `--cors` flag at the end:
```
pm2 start $(which angular-http-server) --name app -- --path /PATH/TO/DIST -p 8080 --cors
```
I've also tried enabling it in the Nginx file pertaining to the Angular front end build (adapted from [here](https://enable-cors.org/server_nginx.html)):
```
location / {
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Content-Type';
add_header 'Content-Type' 'application/json';
return 204;
}
if ($request_method = 'POST') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Content-Type';
add_header 'Content-Type' 'application/json';
}
if ($request_method = 'GET') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Content-Type';
}
proxy_pass http://localhost:8080;
}
}
```
I've looked at the documentation for PM2, angular-http-server, Nginx, and a bunch of other things and I don't know what I'm missing. Let me know? Thanks. | 2019/03/06 | [
"https://Stackoverflow.com/questions/55031745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1291530/"
] | Thanks to Ravinder Payal, I used tcpdump to look at the headers going back and forth. To cut a very long story short, at some point it made me realise that I'd set the front end to communicate with "localhost". Obviously, that meant whichever client browser using the front end would be looking for it on its own local machine.
To solve this, I set up separate [application environments](https://github.com/angular/angular-cli/wiki/stories-application-environments) for my angular front end. This allows the front end to communicate with localhost in staging and with my back end domain in production. | ```
func anteroom(res http.ResponseWriter, req *http.Request) {
res.Header().Set("Access-Control-Allow-Origin", "*")
res.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS")
res.Header().Set("Access-Control-Allow-Headers", "Content-Type")
res.Header().Set("Content-Type", "application/json")
...
}
func main() {
...
# Using Gorilla mux router.
router := mux.NewRouter()
router.HandleFunc("/anteroom", anteroom).Methods("POST", "OPTIONS")
}
```
GET method is missing in the code.
Change this line `res.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS")` to `res.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")` |
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them every 2nd gen cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | I believe one option would be to completely disable garbage collection and then manually collect at the end of a request as suggested here: [How does the Garbage Collection mechanism work?](https://stackoverflow.com/questions/774357/garbage-collection)
I imagine that you could disable the GC in your `settings.py` file.
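For illustration only (my sketch, not from the linked answer; exactly where you put it is an assumption), switching automatic collection off at startup could look like this:
```
# e.g. at the bottom of settings.py: stop automatic generational collections
# and rely on explicit gc.collect() calls elsewhere (such as the middleware below).
import gc

gc.disable()
# Less drastic alternative: keep the cheap gen0/gen1 sweeps but effectively
# silence gen2 by giving it a huge threshold:
# gc.set_threshold(700, 10, 10000000)
```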
If you want to run GarbageCollection on every request I would suggest developing some Middleware that does it in the [process response](http://docs.djangoproject.com/en/1.2/topics/http/middleware/#process-response) method:
```
import gc

class GCMiddleware(object):
    def process_response(self, request, response):
        gc.collect()
        return response
``` | >
> My view ends with return HttpResponse(), AFTER which I would like to run a gen 2 GC sweep.
>
>
>
```
// turn off GC
// do stuff
resp = HttpResponse()
// turn on GC
return resp
```
I'm not sure, but instead of `//turn on GC` you might be able to `// spawn thread to turn on GC in 0.1 sec`.
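A rough sketch of that idea (my addition; the view name and the 0.1 second delay are just placeholders, and `threading.Timer` is one way to do the delayed re-enable):
```
import gc
import threading

from django.http import HttpResponse

def my_view(request):
    gc.disable()                              # keep the collector out of the hot path
    resp = HttpResponse()                     # ... build the real response here ...
    threading.Timer(0.1, gc.enable).start()   # re-enable GC ~0.1s later, off the request path
    return resp
```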
In order to make sure that GC doesn't happen until after the request is processed, if the thread spawning doesn't work, you would need to modify django itself or use some sort of django hook, as dcurtis suggested.
If you're dealing with performance-critical code, you might also want to consider using a manual memory management language like C/C++ for that part, and using Python simply to invoke/query it. |
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them every 2nd gen cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | I believe one option would be to completely disable garbage collection and then manually collect at the end of a request as suggested here: [How does the Garbage Collection mechanism work?](https://stackoverflow.com/questions/774357/garbage-collection)
I imagine that you could disable the GC in your `settings.py` file.
If you want to run GarbageCollection on every request I would suggest developing some Middleware that does it in the [process response](http://docs.djangoproject.com/en/1.2/topics/http/middleware/#process-response) method:
```
import gc

class GCMiddleware(object):
    def process_response(self, request, response):
        gc.collect()
        return response
``` | An alternative might be to disable GC altogether, and configure mod\_wsgi (or whatever you're using) to kill and restart processes more frequently. |
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them every 2nd gen cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | We did something like this for gunicorn. Depending on what wsgi server you use, you need to find the right hooks for AFTER the response, not before. Django has a `request_finished` signal but that signal is still pre response.
For gunicorn, in the config you need to define 2 methods like so:
```
def pre_request(worker, req):
    # disable gc until end of request
    gc.disable()

def post_request(worker, req, environ, resp):
    # enable gc after a request
    gc.enable()
```
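A small variant of the same hooks (my addition, not part of the original answer) also forces a full sweep while the worker is between requests:
```
def post_request(worker, req, environ, resp):
    # enable gc after a request and run a full collection right away,
    # while this worker is not serving anything
    gc.enable()
    gc.collect()
```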
The `post_request` here runs after the http response has been delivered, and so is a very good time for garbage collection. | I believe one option would be to completely disable garbage collection and then manually collect at the end of a request as suggested here: [How does the Garbage Collection mechanism work?](https://stackoverflow.com/questions/774357/garbage-collection)
I imagine that you could disable the GC in your `settings.py` file.
If you want to run GarbageCollection on every request I would suggest developing some Middleware that does it in the [process response](http://docs.djangoproject.com/en/1.2/topics/http/middleware/#process-response) method:
```
import gc

class GCMiddleware(object):
    def process_response(self, request, response):
        gc.collect()
        return response
``` |
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them every 2nd gen cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | I believe one option would be to completely disable garbage collection and then manually collect at the end of a request as suggested here: [How does the Garbage Collection mechanism work?](https://stackoverflow.com/questions/774357/garbage-collection)
I imagine that you could disable the GC in your `settings.py` file.
If you want to run GarbageCollection on every request I would suggest developing some Middleware that does it in the [process response](http://docs.djangoproject.com/en/1.2/topics/http/middleware/#process-response) method:
```
import gc

class GCMiddleware(object):
    def process_response(self, request, response):
        gc.collect()
        return response
``` | Building on the approach from @milkypostman you can use gevent. You want one call to garbage collection *per request*, but the problem with the @milkypostman suggestion is that the call to gc.collect() will still block the returning of the request. Gevent lets us return immediately and have the GC run proceed *after* the response has been returned.
First in your wsgi file be sure to monkey patch all with gevent magic stuff and disable garbage collection. You can set `gc.disable()` but some libraries have context managers that turn it on after disabling it (messagepack for instance), so the 0 threshold is more sticky.
```
import gc
from gevent import monkey
# Disable garbage collection runs
gc.set_threshold(0)
# Apply gevent monkey magic
monkey.patch_all()
```
Then create some middleware for Django like this:
```
from gc import collect
import gevent

class BaseMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

class GcCollectMiddleware(BaseMiddleware):
    """Middleware which performs a non-blocking gc.collect()"""

    def __call__(self, request):
        response = self.get_response(request)
        gevent.spawn(collect)
        return response
```
The main difference here versus the previously suggested approach is that `gc.collect()` is wrapped in `gevent.spawn`, which will not block returning the `HttpResponse`, and your users will get a snappier response! |
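For completeness (my addition, and the module path is only a placeholder), the middleware still has to be registered in Django's settings:
```
# settings.py -- "myapp.middleware" stands in for wherever GcCollectMiddleware lives
MIDDLEWARE = [
    # ... the usual Django middleware entries ...
    "myapp.middleware.GcCollectMiddleware",
]
```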
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them every 2nd gen cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | An alternative might be to disable GC altogether, and configure mod\_wsgi (or whatever you're using) to kill and restart processes more frequently. | >
> My view ends with return HttpResponse(), AFTER which I would like to run a gen 2 GC sweep.
>
>
>
```
// turn off GC
// do stuff
resp = HttpResponse()
// turn on GC
return resp
```
I'm not sure, but instead of `//turn on GC` you might be able to `// spawn thread to turn on GC in 0.1 sec`.
In order to make sure that GC doesn't happen until after the request is processed, if the thread spawning doesn't work, you would need to modify django itself or use some sort of django hook, as dcurtis suggested.
If you're dealing with performance-critical code, you might also want to consider using a manual memory management language like C/C++ for that part, and using Python simply to invoke/query it. |
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them every 2nd gen cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | We did something like this for gunicorn. Depending on what wsgi server you use, you need to find the right hooks for AFTER the response, not before. Django has a `request_finished` signal but that signal is still pre response.
For gunicorn, in the config you need to define 2 methods like so:
```
def pre_request(worker, req):
    # disable gc until end of request
    gc.disable()

def post_request(worker, req, environ, resp):
    # enable gc after a request
    gc.enable()
```
The `post_request` here runs after the http response has been delivered, and so is a very good time for garbage collection. | >
> My view ends with return HttpResponse(), AFTER which I would like to run a gen 2 GC sweep.
>
>
>
```
# turn off GC
# do stuff
resp = HttpResponse()
# turn on GC
return resp
```
I'm not sure, but instead of `# turn on GC` you might be able to `# spawn a thread to turn on GC in 0.1 sec`.
In order to make sure that GC doesn't happen until after the request is processed, if the thread spawning doesn't work, you would need to modify django itself or use some sort of django hook, as dcurtis suggested.
If you're dealing with performance-critical code, you might also want to consider using a manual memory management language like C/C++ for that part, and using Python simply to invoke/query it. |
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them in every gen 2 cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | We did something like this for gunicorn. Depending on what wsgi server you use, you need to find the right hooks for AFTER the response, not before. Django has a `request_finished` signal but that signal is still pre response.
For gunicorn, in the config you need to define 2 methods like so:
```
import gc

def pre_request(worker, req):
    # disable gc until end of request
    gc.disable()

def post_request(worker, req, environ, resp):
    # enable gc after a request
    gc.enable()
```
The `post_request` here runs after the http response has been delivered, and so is a very good time for garbage collection. | An alternative might be to disable GC altogether, and configure mod_wsgi (or whatever you're using) to kill and restart processes more frequently. |
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them in every gen 2 cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | An alternative might be to disable GC altogether, and configure mod_wsgi (or whatever you're using) to kill and restart processes more frequently. | Building on the approach from @milkypostman you can use gevent. You want one call to garbage collection *per request*, but the problem with the @milkypostman suggestion is that the call to gc.collect() will still block the returning of the request. Gevent lets us return immediately and have the GC run proceed *after* the request handler has returned.
First in your wsgi file be sure to monkey patch all with gevent magic stuff and disable garbage collection. You can set `gc.disable()` but some libraries have context managers that turn it on after disabling it (messagepack for instance), so the 0 threshold is more sticky.
```
import gc
from gevent import monkey
# Disable garbage collection runs
gc.set_threshold(0)
# Apply gevent monkey magic
monkey.patch_all()
```
Then create some middleware for Django like this:
```
from gc import collect
import gevent
class BaseMiddleware:
def __init__(self, get_response):
self.get_response = get_response
class GcCollectMiddleware(BaseMiddleware):
"""Middleware which performs a non-blocking gc.collect()"""
def __call__(self, request):
response = self.get_response(request)
gevent.spawn(collect)
return response
```
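To activate it, register the middleware like any other in your settings (the dotted path below is a placeholder for wherever you keep the class):

```
# settings.py
MIDDLEWARE = [
    # ... the usual Django middleware ...
    'myproject.middleware.GcCollectMiddleware',
]
```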
You'll see the main difference here vs the previously suggested approach is that `gc.collect()` is wrapped in `gevent.spawn` which will not block returning the `HttpResponse` and your users will get a snappier response! |
4,594,522 | After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
**This takes 2 seconds!**
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles **between** user API requests. But how do I do that?
My view ends with `return HttpResponse()`, **AFTER** which I would like to run a gen 2 GC sweep.
How do I do that? Does this approach even make sense?
Can I mark the objects that NEVER need to be garbage collected so the GC will not test them in every gen 2 cycle?
How can I configure the GC to run full sweeps when the Django server is relatively idle?
Python 2.6.6 on multiple platforms (Windows / Linux). | 2011/01/04 | [
"https://Stackoverflow.com/questions/4594522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78234/"
] | We did something like this for gunicorn. Depending on what wsgi server you use, you need to find the right hooks for AFTER the response, not before. Django has a `request_finished` signal but that signal is still pre response.
For gunicorn, in the config you need to define 2 methods like so:
```
import gc

def pre_request(worker, req):
    # disable gc until end of request
    gc.disable()

def post_request(worker, req, environ, resp):
    # enable gc after a request
    gc.enable()
```
The `post_request` here runs after the http response has been delivered, and so is a very good time for garbage collection. | Building on the approach from @milkypostman you can use gevent. You want one call to garbage collection *per request*, but the problem with the @milkypostman suggestion is that the call to gc.collect() will still block the returning of the request. Gevent lets us return immediately and have the GC run proceed *after* the request handler has returned.
First in your wsgi file be sure to monkey patch all with gevent magic stuff and disable garbage collection. You can set `gc.disable()` but some libraries have context managers that turn it on after disabling it (messagepack for instance), so the 0 threshold is more sticky.
```
import gc
from gevent import monkey
# Disable garbage collection runs
gc.set_threshold(0)
# Apply gevent monkey magic
monkey.patch_all()
```
Then create some middleware for Django like this:
```
from gc import collect
import gevent
class BaseMiddleware:
def __init__(self, get_response):
self.get_response = get_response
class GcCollectMiddleware(BaseMiddleware):
"""Middleware which performs a non-blocking gc.collect()"""
def __call__(self, request):
response = self.get_response(request)
gevent.spawn(collect)
return response
```
You'll see the main difference here vs the previously suggested approach is that `gc.collect()` is wrapped in `gevent.spawn` which will not block returning the `HttpResponse` and your users will get a snappier response! |
61,644,355 | I am trying to build a small NestJS application into which I want to integrate a task scheduler. One of the first tasks of this scheduler will be to update data in the database. This task service will use a UserService like the one below:
```
import {
Injectable,
Inject,
UnprocessableEntityException,
HttpStatus,
} from '@nestjs/common';
import { Repository } from 'typeorm';
import * as bcrypt from 'bcryptjs';
import { User, UserStatus } from './user.entity';
import { LoggerService } from '../../../logger/logger.service';
import { HammerErrors } from 'src/error/hammer.errors';
import * as Config from '../../../config/global.env';
@Injectable()
export class UserService {
private readonly _logger = new LoggerService(UserService.name);
constructor(
@Inject('USER_REPOSITORY')
private _userRepository: Repository<User>,
) {}
....
async reactiveBlockedUsers() {
this._logger.setMethod(this.reactiveBlockedUsers.name);
this._logger.log(`Update blocked users`);
await this._userRepository
.createQueryBuilder('hm_users')
.update(User)
.set({
nextValidDate: null,
})
.where(
`hm_users.nextValidDate IS NOT NULL hm_users.nextValidDate < utc_timestamp()`,
)
.execute();
return null;
  }
}
```
My task service is as follows:
```
import { LoggerService } from '../../../logger/logger.service';
import { UserService } from '../../providers/user/user.service';
import { Injectable } from '@nestjs/common';
import { Cron, CronExpression } from '@nestjs/schedule';
@Injectable()
export class TasksService {
private _logger = new LoggerService(TasksService.name);
constructor(private _userService: UserService) {}
@Cron(CronExpression.EVERY_MINUTE)
async reactiveUsers() {
this._logger.setMethod(this.reactiveUsers.name);
this._logger.log(`Reactive blocked users`);
try {
await this._userService.reactiveBlockedUsers();
} catch (error) {
this._logger.log(error);
}
}
@Cron(CronExpression.EVERY_MINUTE)
startBatchJobs() {
this._logger.setMethod(this.startBatchJobs.name);
this._logger.log(`Start batch job service`);
}
triggerNotifications() {}
}
```
The task module is the following :
```
import { Module } from '@nestjs/common';
import { TasksService } from './tasks.service';
import { UserService } from 'src/shared/providers/user/user.service';
import { Repository } from 'typeorm';
@Module({
providers: [
TasksService,
UserService,
{
provide: 'USER_REPOSITORY',
useClass: Repository,
},
],
})
export class TasksModule {}
```
When the process runs, I get the following error:
[INFO] 06/05/2020 19:57:00.024 [TasksService.reactiveUsers] TypeError: Cannot read property 'createQueryBuilder' of undefined
Or :
(node:13668) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'createQueryBuilder' of undefined
at Repository.createQueryBuilder (D:\dev\hammer\hammer-server\node_modules\typeorm\repository\Repository.js:17:29)
at UserService.reactiveBlockedUsers (D:\dev\hammer\hammer-server\dist\src\shared\providers\user\user.service.js:183:14)
at TasksService.reactiveUsers (D:\dev\hammer\hammer-server\dist\src\shared\services\tasks\tasks.service.js:25:33)
at CronJob.fireOnTick (D:\dev\hammer\hammer-server\node_modules\cron\lib\cron.js:562:23)
at Timeout.callbackWrapper [as _onTimeout] (D:\dev\hammer\hammer-server\node_modules\cron\lib\cron.js:629:10)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)
(node:13668) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see <https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode>). (rejection id: 2)
(node:13668) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
when I remove the try/catch.
For me it looks like a problem with providing the Repository that is injected into UserService, but I don't understand how that works or how to solve this problem.
If somebody has an idea, I would be interested.
Thanks for your help. | 2020/05/06 | [
"https://Stackoverflow.com/questions/61644355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13299466/"
] | `Repository` is an abstract class in TypeORM and cannot be instantiated directly. Nest does what it can around this, but ultimately it will provide an `undefined` and thus, calling `createQueryBuilder` will give you a problem. As you are using TypeORM, is there a reason you aren't using the `@nestjs/typeorm` package? It manages creating the repository classes for you and makes things much easier. If you are gung-ho about not using the package, you can always create your own repository class, `UserRepository extends Repository`, decorated with the proper metadata and all, and then use that class instead of `Repository`. | Thank you for the information; I did not know about this existing package. I will try to implement your proposal. I am a novice in these technologies and I discover something new day by day. Since the beginning of March I have learned HTML, CSS, JavaScript, TypeScript, NestJS and Angular, and now I am trying to implement a little project with all these technologies. I think I have many holes in my knowledge, and I appreciate your answer giving me this information. |
44,232,422 | I am new to creating a full stack app with MERN (using React instead of Angular, as I am familiar with React). I've been looking at tutorials to learn how to separate out my server-side code (Express/Mongo), as I initially had my Express routes, MongoDB connection, and API requests defined in my server.js file (just to get something working).
Currently, my folder structure and the way I define my routes and db is as follows:
1. routes.js includes all routes I've defined in my routes folder and
exports a routes function to be used in server.js
2. Use express.Router to define specific routes for a model (i.e user) inside routes folder. I also include MongoDB model here to perform any necessary actions (find, insert, etc)
3. Define mongo schema in userModel.js
At this point, I'm not sure where to connect my MongoDB. Before, I connected to the DB in server.js, but if I want to use my models to query against my DB, do I define my connection inside every route file where I use a model? Is there a way for me to only call mongoose.connect once and ensure I'm always connected to my DB?
```
// Connect to mongodb
mongoose.connect(process.env.MONGOLAB_URI || db_url)
```
userRoutes.js
```
-root folder
-public
-src
-server
-db
-models
-userModel.js
-routes
-userRoutes.js
-routes.js
-server.js
``` | 2017/05/28 | [
"https://Stackoverflow.com/questions/44232422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3059274/"
] | Add the mongoose connection in a separate file. Then link that connection using `let schema = require('_path_to_file_')` wherever you need to link to the database. | I've often seen the following dir structure w.r.t full-stack JS apps
* root
+ client
+ server
+ common
Likewise I've seen a modified [FHS](https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard) approach implemented similar to the following
* root
+ src (contains the react/redux dirs - actions, comp, cont, reducers, store, etc.)
+ bin (compiled dist)
+ etc ("Editable Text Configuration" or "Extended Tool Chest")
+ public
+ package.json, readme, etc. |
44,232,422 | I am new to creating a full stack app with MERN (using React instead of Angular, as I am familiar with React). I've been looking at tutorials to learn how to separate out my server-side code (Express/Mongo), as I initially had my Express routes, MongoDB connection, and API requests defined in my server.js file (just to get something working).
Currently, my folder structure and the way I define my routes and db is as follows:
1. routes.js includes all routes I've defined in my routes folder and
exports a routes function to be used in server.js
2. Use express.Router to define specific routes for a model (i.e user) inside routes folder. I also include MongoDB model here to perform any necessary actions (find, insert, etc)
3. Define mongo schema in userModel.js
At this point, I'm not sure where to connect my MongoDB. Before, I connected to the DB in server.js, but if I want to use my models to query against my DB, do I define my connection inside every route file where I use a model? Is there a way for me to only call mongoose.connect once and ensure I'm always connected to my DB?
```
// Connect to mongodb
mongoose.connect(process.env.MONGOLAB_URI || db_url)
```
userRoutes.js
```
-root folder
-public
-src
-server
-db
-models
-userModel.js
-routes
-userRoutes.js
-routes.js
-server.js
``` | 2017/05/28 | [
"https://Stackoverflow.com/questions/44232422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3059274/"
] | Add the mongoose connection in a separate file. Then link that connection using `let schema = require('_path_to_file_')` wherever you need to link to the database. | Folder structure to build an API using MongoDB, Express, and Node.js:
* controllers
  + user.controller.js
* models
  + user.model.js
* routes
  + user.route.js
* config
  + database.js
* server.js
* package.json
```js
//database.js
module.exports = {
url:'mongodb://localhost:27017/db_name'
}
``` |
44,232,422 | I am new to creating a full stack app with MERN (using React instead of Angular, as I am familiar with React). I've been looking at tutorials to learn how to separate out my server-side code (Express/Mongo), as I initially had my Express routes, MongoDB connection, and API requests defined in my server.js file (just to get something working).
Currently, my folder structure and the way I define my routes and db is as follows:
1. routes.js includes all routes I've defined in my routes folder and
exports a routes function to be used in server.js
2. Use express.Router to define specific routes for a model (i.e user) inside routes folder. I also include MongoDB model here to perform any necessary actions (find, insert, etc)
3. Define mongo schema in userModel.js
At this point, I'm not sure where to connect my MongoDB. Before, I connected to the DB in server.js, but if I want to use my models to query against my DB, do I define my connection inside every route file where I use a model? Is there a way for me to only call mongoose.connect once and ensure I'm always connected to my DB?
```
// Connect to mongodb
mongoose.connect(process.env.MONGOLAB_URI || db_url)
```
userRoutes.js
```
-root folder
-public
-src
-server
-db
-models
-userModel.js
-routes
-userRoutes.js
-routes.js
-server.js
``` | 2017/05/28 | [
"https://Stackoverflow.com/questions/44232422",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3059274/"
] | I've often seen the following dir structure w.r.t full-stack JS apps
* root
+ client
+ server
+ common
Likewise I've seen a modified [FHS](https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard) approach implemented similar to the following
* root
+ src (contains the react/redux dirs - actions, comp, cont, reducers, store, etc.)
+ bin (compiled dist)
+ etc ("Editable Text Configuration" or "Extended Tool Chest")
+ public
  + package.json, readme, etc. | Folder structure to build an API using MongoDB, Express, and Node.js:
* controllers
  + user.controller.js
* models
  + user.model.js
* routes
  + user.route.js
* config
  + database.js
* server.js
* package.json
```js
//database.js
module.exports = {
url:'mongodb://localhost:27017/db_name'
}
``` |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | Originally, callbacks were used for asynchronous operations (e.g., in the [XMLHttpRequest API](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest)). Now promise-based APIs like the browser's [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) have become the default solution and the nicer [`async/await`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) syntax is supported by all modern browsers and on Node.js (server side).
A common scenario - fetching JSON data from the server - can look like this:
```
async function fetchResource(url) {
const res = await fetch(url);
if (!res.ok) {
throw new Error(res.statusText);
}
return res.json();
}
```
To use it in another function:
```
async function doSomething() {
try {
const data = await fetchResource("https://example.test/resource/1");
// ...
} catch (e) {
// Handle error
    // ...
}
}
```
If you design a modern API, it is strongly recommended to prefer promise-based style over callbacks. If you inherited an API that relies on callbacks, it is possible to wrap it as a promise:
```
function sleep(timeout) {
return new Promise((resolve) => {
setTimeout(() => {
resolve();
}, timeout);
});
}
async function fetchAfterTwoSeconds(url) {
await sleep(2000);
return fetchResource(url);
}
```
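The same hand-wrapping pattern applies to the question's `fs.readFile` example; here is a sketch (my addition, not part of the original answer):

```
const fs = require("fs");

function readFilePromised(path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, "utf8", (err, data) => {
      if (err) reject(err);   // reject the promise on failure
      else resolve(data);     // fulfil it with the file contents
    });
  });
}

// usage inside an async function: const text = await readFilePromised("path/to/file");
```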
In Node.js, which historically relied exclusively on callbacks, that technique is so common that they added a helper function called [`util.promisify`](https://nodejs.org/api/util.html#util_util_promisify_original). | This is working code from my job; I tried to search Stack Overflow for it but found nothing.
It works this way:
1. Declare the Ajax call function:
```
function get_LegalDataPOST_II(data, EndPoint) {
    var a_params = {
sql_h: '',
sql_txt: data
}
var data = JSON.stringify(a_params);
if (EndPoint == undefined) {
EndPoint = 'DB';
}
var sFormData = new FormData();
sFormData.append("EndPoint", EndPoint);
sFormData.append("Data", data);
sFormData.append("ByUser", get_ul_Legal());
return $.ajax({
url: "home/xxxxPOST/",
type: "POST",
contentType: false,
processData: false,
data: sFormData,
success: function (data) { },
error: function () {}
});
}
```
2. Get the JSON data:
```
var sql = " select * from foo ";
get_LegalDataPOST_II(sql).done(function (data) {
var data = JSON.parse(data);
$(data).each(function () {
});
});
```
This uses the POST method; if you want the POST-handling code, leave a comment. |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | Using Promise
=============
The cleanest way to answer this question is to use a `Promise`.
```
function ajax(method, url, params) {
return new Promise(function(resolve, reject) {
var xhr = new XMLHttpRequest();
xhr.onload = function() {
resolve(this.responseText);
};
xhr.onerror = reject;
xhr.open(method, url);
xhr.send(params);
});
}
```
Usage
-----
```
ajax("GET", "/test", "acrive=1").then(function(result) {
// Code depending on result
})
.catch(function() {
// An error occurred
});
```
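Because `ajax` returns a standard promise, it can also be consumed with `async`/`await` in modern environments (a sketch building on the helper above):

```
async function main() {
  try {
    const result = await ajax("GET", "/test", "active=1");
    // Code depending on result
  } catch (err) {
    // An error occurred
  }
}
```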
---
But wait...!
============
There is a problem with using promises!
Why should we use our own custom Promise?
-----------------------------------------
I was using this solution for a while until I figured out there is an error in old browsers:
>
> Uncaught ReferenceError: Promise is not defined
>
>
>
So I decided to implement my own Promise class for **ES3 to below** JavaScript compilers if it's not defined. Just add this code before your main code and then safely use Promise!
```
if(typeof Promise === "undefined"){
function _classCallCheck(instance, Constructor) {
if (!(instance instanceof Constructor)) {
throw new TypeError("Cannot call a class as a function");
}
}
var Promise = function () {
function Promise(main) {
var _this = this;
_classCallCheck(this, Promise);
this.value = undefined;
this.callbacks = [];
var resolve = function resolve(resolveValue) {
_this.value = resolveValue;
_this.triggerCallbacks();
};
var reject = function reject(rejectValue) {
_this.value = rejectValue;
_this.triggerCallbacks();
};
main(resolve, reject);
}
Promise.prototype.then = function then(cb) {
var _this2 = this;
var next = new Promise(function (resolve) {
_this2.callbacks.push(function (x) {
return resolve(cb(x));
});
});
return next;
};
Promise.prototype.catch = function catch_(cb) {
var _this2 = this;
var next = new Promise(function (reject) {
_this2.callbacks.push(function (x) {
return reject(cb(x));
});
});
return next;
};
Promise.prototype.triggerCallbacks = function triggerCallbacks() {
var _this3 = this;
this.callbacks.forEach(function (cb) {
cb(_this3.value);
});
};
return Promise;
}();
}
``` | This is working code from my job; I tried to search Stack Overflow for it but found nothing.
It works this way:
1. Declare the Ajax call function:
```
function get_LegalDataPOST_II(data, EndPoint) {
    var a_params = {
sql_h: '',
sql_txt: data
}
var data = JSON.stringify(a_params);
if (EndPoint == undefined) {
EndPoint = 'DB';
}
var sFormData = new FormData();
sFormData.append("EndPoint", EndPoint);
sFormData.append("Data", data);
sFormData.append("ByUser", get_ul_Legal());
return $.ajax({
url: "home/xxxxPOST/",
type: "POST",
contentType: false,
processData: false,
data: sFormData,
success: function (data) { },
error: function () {}
});
}
```
2. Get the JSON data:
```
var sql = " select * from foo ";
get_LegalDataPOST_II(sql).done(function (data) {
var data = JSON.parse(data);
$(data).each(function () {
});
});
```
This uses the POST method; if you want the POST-handling code, leave a comment. |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | Another approach to return a value from an asynchronous function is to pass in an object that will store the result of the asynchronous operation.
Here is an example of the same:
```
var async = require("async");
// This wires up result back to the caller
var result = {};
var asyncTasks = [];
asyncTasks.push(function(_callback){
// some asynchronous operation
$.ajax({
url: '...',
success: function(response) {
result.response = response;
_callback();
}
});
});
async.parallel(asyncTasks, function(){
// result is available after performing asynchronous operation
console.log(result)
console.log('Done');
});
```
I am using the `result` object to store the value during the asynchronous operation. This allows the result to be available even after the asynchronous job has finished.
I use this approach a lot. I would be interested to know how well this approach works where wiring the result back through consecutive modules is involved. | Using Promise
=============
The cleanest way to answer this question is to use a `Promise`.
```
function ajax(method, url, params) {
return new Promise(function(resolve, reject) {
var xhr = new XMLHttpRequest();
xhr.onload = function() {
resolve(this.responseText);
};
xhr.onerror = reject;
xhr.open(method, url);
xhr.send(params);
});
}
```
Usage
-----
```
ajax("GET", "/test", "acrive=1").then(function(result) {
// Code depending on result
})
.catch(function() {
// An error occurred
});
```
---
But wait...!
============
There is a problem with using promises!
Why should we use our own custom Promise?
-----------------------------------------
I was using this solution for a while until I figured out there is an error in old browsers:
>
> Uncaught ReferenceError: Promise is not defined
>
>
>
So I decided to implement my own Promise class for **ES3 to below** JavaScript compilers if it's not defined. Just add this code before your main code and then safely use Promise!
```
if(typeof Promise === "undefined"){
function _classCallCheck(instance, Constructor) {
if (!(instance instanceof Constructor)) {
throw new TypeError("Cannot call a class as a function");
}
}
var Promise = function () {
function Promise(main) {
var _this = this;
_classCallCheck(this, Promise);
this.value = undefined;
this.callbacks = [];
var resolve = function resolve(resolveValue) {
_this.value = resolveValue;
_this.triggerCallbacks();
};
var reject = function reject(rejectValue) {
_this.value = rejectValue;
_this.triggerCallbacks();
};
main(resolve, reject);
}
Promise.prototype.then = function then(cb) {
var _this2 = this;
var next = new Promise(function (resolve) {
_this2.callbacks.push(function (x) {
return resolve(cb(x));
});
});
return next;
};
Promise.prototype.catch = function catch_(cb) {
var _this2 = this;
var next = new Promise(function (reject) {
_this2.callbacks.push(function (x) {
return reject(cb(x));
});
});
return next;
};
Promise.prototype.triggerCallbacks = function triggerCallbacks() {
var _this3 = this;
this.callbacks.forEach(function (cb) {
cb(_this3.value);
});
};
return Promise;
}();
}
``` |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | **Have a look at this example:**
```
var app = angular.module('plunker', []);
app.controller('MainCtrl', function($scope,$http) {
var getJoke = function(){
return $http.get('http://api.icndb.com/jokes/random').then(function(res){
return res.data.value;
});
}
getJoke().then(function(res) {
console.log(res.joke);
});
});
```
As you can see `getJoke` is **returning a** resolved **promise** (it is resolved when returning `res.data.value`). So you wait until the **$http.get** request is completed and then **console.log(res.joke)** is executed (as a normal asynchronous flow).
**This is the plnkr:**
<http://embed.plnkr.co/XlNR7HpCaIhJxskMJfSg/>
**ES6 way (async - await)**
```
(function(){
async function getJoke(){
let response = await fetch('http://api.icndb.com/jokes/random');
let data = await response.json();
return data.value;
}
getJoke().then((joke) => {
console.log(joke);
});
})();
``` | Here is an example that works:
```js
const validateName = async userName => {
const url = "https://jsonplaceholder.typicode.com/todos/1";
try {
const response = await axios.get(url);
return response.data
} catch (err) {
return false;
}
};
validateName("user")
.then(data => console.log(data))
.catch(reason => console.log(reason.message))
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
```html
<script src=
"https://cdnjs.cloudflare.com/ajax/libs/axios/0.21.1/axios.min.js"></script>
``` |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | ### 1. A first stumbling step
As for many others, my encounter with asynchronous calls was puzzling at
first.
I don't remember the details, but I may have tried something like:
```js
let result;
$.ajax({
url: 'https://jsonplaceholder.typicode.com/todos/1',
success: function (response) {
console.log('\nInside $.ajax:');
console.log(response);
result = response;
}
});
console.log('Finally, the result: ' + result);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
```html
<script src=
"https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
```
*Whoops!* The output of the line
`console.log('Finally, the result: ' + result);`
which I thought would be printed *last*, is actually printed *before* the
other output! – And it doesn't contain the result: it just prints `undefined`.¹
How come?
**A helpful insight**
I distinctly remember my first *aha!* moment on how to understand asynchronous
calls.
It was [this comment](https://stackoverflow.com/q/44298196#comment75602602_44298196) saying:
*you actually don't want to get the data **out** of a callback;
you want to get your data-needing action **into** the callback!*²
This is obvious in the example above.
But is it still possible to write code *after* the asynchronous call that
deals with the response once it has completed?
### 2. Plain JavaScript and a callback function
The answer is *yes!* – It is possible.
One alternative is the use of a *callback* function in a continuation-passing
style:³
```js
const url = 'https://jsonplaceholder.typicode.com/todos/2';
function asynchronousCall (callback) {
const request = new XMLHttpRequest();
request.open('GET', url);
request.send();
request.onload = function () {
if (request.readyState === request.DONE) {
console.log('The request is done. Now calling back.');
callback(request.responseText);
}
};
}
asynchronousCall(function (result) {
console.log('This is the start of the callback function. Result:');
console.log(result);
console.log('The callback function finishes on this line. THE END!');
});
console.log('LAST in the code, but executed FIRST!');
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Note how the function `asynchronousCall` is `void`. It returns nothing.
Instead, by calling `asynchronousCall` with an anonymous callback function
(`asynchronousCall(function (result) {...`), this function executes the
desired actions on the result, but only *after* the request has completed –
when the `responseText` is available.
Running the above snippet shows how I will probably not want to write any code
*after* the asynchronous call (such as the line
`LAST in the code, but executed FIRST!`).
*Why?* – Because such code will
happen *before* the asynchronous call delivers any response data.
Doing so is bound to cause confusion when comparing the *code* with the *output*.
### 3. Promise with `.then()` – or `async`/`await`
The `.then()` construct was introduced in the *ECMA-262 6th Edition in June
2015*, and the `async`/`await` construct was introduced in the *ECMA-262
8th Edition in June 2017*.
The code below is still plain JavaScript, replacing the old-school
*XMLHttpRequest* with *Fetch*.⁴
```js
fetch('http://api.icndb.com/jokes/random')
.then(response => response.json())
.then(responseBody => {
console.log('.then() - the response body:');
console.log(JSON.stringify(responseBody) + '\n\n');
});
async function receiveAndAwaitPromise () {
const responseBody =
(await fetch('http://api.icndb.com/jokes/random')).json();
console.log('async/await:');
console.log(JSON.stringify(await responseBody) + '\n\n');
}
receiveAndAwaitPromise();
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
A word of warning is warranted if you decide to go with the `async`/`await`
construct. Note in the above snippet how `await` is needed in *two* places.
If forgotten in the first place, there will be no output. If forgotten in the
second place, the only output will be the empty object, `{}`
(or `[object Object]` or `[object Promise]`).
Forgetting the `async` prefix of the function is maybe the worst of all – the
output will be `"SyntaxError: missing ) in parenthetical"` – no mentioning of
the *missing* `async` keyword.
### 4. Promise.all – array of URLs⁵
Suppose we need to request a whole bunch of URLs.
I could send one request, wait till it responds, then send the next request,
wait till *it* responds, and so on ...
Aargh! – That could take a loong time. Wouldn't it be better if I could send
them *all* at once, and then wait no longer than it takes for the slowest
response to arrive?
As a simplified example, I will use:
```js
urls = ['https://jsonplaceholder.typicode.com/todos/2',
'https://jsonplaceholder.typicode.com/todos/3']
```
The JSONs of the two URLs:
```json
{"userId":1,"id":2,"title":"quis ut nam facilis et officia qui",
"completed":false}
{"userId":1,"id":3,"title":"fugiat veniam minus","completed":false}
```
The goal is to get an array of objects, where each object contains the `title`
value from the corresponding URL.
To make it a little more interesting, I will assume that there is already an
array of *names* that I want the array of URL results (the *titles*) to be
merged with:
```js
namesonly = ['two', 'three']
```
The desired output is a mashup combining `namesonly` and `urls` into an
*array of objects*:
```js
[{"name":"two","loremipsum":"quis ut nam facilis et officia qui"},
{"name":"three","loremipsum":"fugiat veniam minus"}]
```
where I have changed the name of `title` to `loremipsum`.
```js
const namesonly = ['two','three'];
const urls = ['https://jsonplaceholder.typicode.com/todos/2',
'https://jsonplaceholder.typicode.com/todos/3'];
Promise.all(urls.map(url => fetch(url)
.then(response => response.json())
.then(responseBody => responseBody.title)))
.then(titles => {
const names = namesonly.map(value => ({ name: value }));
console.log('names: ' + JSON.stringify(names));
const latins = titles.map(value => ({ loremipsum: value }));
console.log('latins:\n' + JSON.stringify(latins));
const result =
names.map((item, i) => Object.assign({}, item, latins[i]));
console.log('result:\n' + JSON.stringify(result));
});
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
All the above examples are short and succinctly convey how asynchronous calls
may be used on toyish APIs.
Using small APIs works well to explain concepts and working code, but the
examples might feel like a bit of a dry run.
The next section will show a more realistic example on how APIs may be
combined to create a more interesting output.
### 5. How to visualize a mashup in Postman⁶
[The MusicBrainz API](http://musicbrainz.org/doc/Development/XML_Web_Service/Version_2)
has information about artists and music bands.
An example – a request for the British rock band *Coldplay* is:
<http://musicbrainz.org/ws/2/artist/cc197bad-dc9c-440d-a5b5-d52ba2e14234?&fmt=json&inc=url-rels+release-groups>.
The JSON response contains – among other things – the 25 earliest album titles
by the band.
This information is in the `release-groups` array.
The start of this array, including its first object is:
```json
...
"release-groups": [
{
"id": "1dc4c347-a1db-32aa-b14f-bc9cc507b843",
"secondary-type-ids": [],
"first-release-date": "2000-07-10",
"primary-type-id": "f529b476-6e62-324f-b0aa-1f3e33d313fc",
"disambiguation": "",
"secondary-types": [],
"title": "Parachutes",
"primary-type": "Album"
},
...
```
This JSON snippet shows that the first album by Coldplay is *Parachutes*.
It also gives an `id`, in this case `1dc4c347-a1db-32aa-b14f-bc9cc507b843`,
which is a unique identifier of the album.
This identifier can be used to make a lookup in [the Cover Art Archive API](https://wiki.musicbrainz.org/Cover_Art_Archive/API):
<http://coverartarchive.org/release-group/1dc4c347-a1db-32aa-b14f-bc9cc507b843>.⁷
For each album, the JSON response contains some images, one of which is the
front cover of the album.
The first few lines of the response to the above request:
```json
{
"images": [
{
"approved": true,
"back": false,
"comment": "",
"edit": 22132705,
"front": true,
"id": 4086974851,
"image": "http://coverartarchive.org/release/435fc965-9121-461e-b8da-d9b505c9dc9b/4086974851.jpg",
"thumbnails": {
"250": "http://coverartarchive.org/release/435fc965-9121-461e-b8da-d9b505c9dc9b/4086974851-250.jpg",
"500": "http://coverartarchive.org/release/435fc965-9121-461e-b8da-d9b505c9dc9b/4086974851-500.jpg",
"1200": "http://coverartarchive.org/release/435fc965-9121-461e-b8da-d9b505c9dc9b/4086974851-1200.jpg",
"large": "http://coverartarchive.org/release/435fc965-9121-461e-b8da-d9b505c9dc9b/4086974851-500.jpg",
 ==>  "small": "http://coverartarchive.org/release/435fc965-9121-461e-b8da-d9b505c9dc9b/4086974851-250.jpg"
},
...
```
Of interest here is the line
`"small": "http://coverartarchive.org/release/435fc965-9121-461e-b8da-d9b505c9dc9b/4086974851-250.jpg"`.
That URL is a direct link to the front cover of the *Parachutes* album.
**The code to create and visualize the mashup**
The overall task is to use Postman to visualize all the album titles and front
covers of a music band.
How to write code to achieve this has already been described in quite some
detail in [an answer](https://stackoverflow.com/a/67824483) to the question
*How can I visualize an API mashup in Postman?* – Therefore I will avoid
lengthy discussions here and just present the code and a screenshot of the
result:
```js
const lock = setTimeout(() => {}, 43210);
const albumsArray = [];
const urlsArray = [];
const urlOuter = 'https://musicbrainz.org/ws/2/artist/' +
pm.collectionVariables.get('MBID') + '?fmt=json&inc=url-rels+release-groups';
pm.sendRequest(urlOuter, (_, responseO) => {
const bandName = responseO.json().name;
const albums = responseO.json()['release-groups'];
for (const item of albums) {
albumsArray.push(item.title);
urlsArray.push('https://coverartarchive.org/release-group/' + item.id);
}
albumsArray.length = urlsArray.length = 15;
const images = [];
let countDown = urlsArray.length;
urlsArray.forEach((url, index) => {
asynchronousCall(url, imageURL => {
images[index] = imageURL;
if (--countDown === 0) { // Callback for ALL starts on next line.
clearTimeout(lock); // Unlock the timeout.
const albumTitles = albumsArray.map(value => ({ title: value }));
const albumImages = images.map(value => ({ image: value }));
const albumsAndImages = albumTitles.map(
(item, i) => Object.assign({}, item, albumImages[i]));
const template = `<table>
<tr><th>` + bandName + `</th></tr>
{{#each responseI}}
<tr><td>{{title}}<br><img src="{{image}}"></td></tr>
{{/each}}
</table>`;
pm.visualizer.set(template, { responseI: albumsAndImages });
}
});
});
function asynchronousCall (url, callback) {
pm.sendRequest(url, (_, responseI) => {
callback(responseI.json().images.find(obj => obj.front === true)
.thumbnails.small); // Individual callback.
});
}
});
```
**The result and documentation**
[![Result and documentation in Postman](https://i.imgur.com/NtwLtvM.png "Result and documentation in Postman")](https://i.imgur.com/NtwLtvM.png)
**How to download and run the Postman Collection**
Running the Postman Collection should be straightforward.
1. Download and save
2. In Postman, *`Ctrl` + `O` > Upload Files >
`MusicBands.pm_coll.json` > Import*.
You should now see `MusicBands` among your collections in Postman.
3. *Collections > `MusicBands` > `DummyRequest` > **Send***.⁸
4. In the Postman Response Body, click *Visualize*.
5. You should now be able to scroll 15 albums as indicated by the
screenshot above.
### References
* [How do I return the response from an asynchronous call?](https://stackoverflow.com/q/14220321)
* [Some questions and answers about asynchronous calls](https://stackoverflow.com/a/67662000)
* [Using plain JavaScript and a callback function](https://stackoverflow.com/a/16825593)
* [Continuation-passing style](https://en.wikipedia.org/wiki/Continuation-passing_style)
* [XMLHttpRequest: onload vs. onreadystatechange](https://stackoverflow.com/a/19247992)
* [XMLHttpRequest.responseText](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/responseText)
* [An example demonstrating `async`/`await`](https://stackoverflow.com/a/48415961)
* [Fetch](https://github.github.io/fetch/)
* [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
* [The XMLHttpRequest Standard](https://xhr.spec.whatwg.org/)
* [The Fetch Standard](https://fetch.spec.whatwg.org/)
* [The Web Hypertext Application Technology Working Group (WHATWG)](https://whatwg.org/news/start)
* [Links to ECMA specifications](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Language_Resources)
* [Convert an array of values to an array of objects](https://stackoverflow.com/a/49602789)
* [How can I fetch an array of URLs with Promise.all?](https://stackoverflow.com/a/67671337)
* [Documentation of the MusicBrainz API](http://musicbrainz.org/doc/Development/XML_Web_Service/Version_2)
* [Documentation of the Cover Art Archive API](https://wiki.musicbrainz.org/Cover_Art_Archive/API)
* [How can I visualize an API mashup in Postman?](https://stackoverflow.com/a/67824483)
---
1 Expressed by the original poster as: *they all return
`undefined`*.
2 If you think asynchronous calls are confusing, consider having a
look at [some questions and answers about asynchronous calls](https://stackoverflow.com/a/67662000) to see if that helps.
3 The name `XMLHttpRequest` is as misleading as the *X* in
*AJAX* – these days the data format of Web APIs is ubiquitously JSON, not XML.
4 [Fetch](https://github.github.io/fetch/)
returns a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise).
I was surprised to learn that neither *XMLHttpRequest* nor *Fetch* are part of
the *ECMAScript* standard.
The reason JavaScript can access them here is because the web browser provides
them.
[The Fetch Standard](https://fetch.spec.whatwg.org/) and
[the XMLHttpRequest Standard](https://xhr.spec.whatwg.org/) are both upheld by
[the Web Hypertext Application Technology Working Group (WHATWG)](https://whatwg.org/news/start) that was formed in June 2004.
5 This section borrows a lot from
[How can I fetch an array of URLs with Promise.all?](https://stackoverflow.com/a/67671337).
6 This section relies heavily on
[How can I visualize an API mashup in Postman?](https://stackoverflow.com/a/67824483).
7 This URL is automatically redirected to:
<https://ia800503.us.archive.org/29/items/mbid-435fc965-9121-461e-b8da-d9b505c9dc9b/index.json>.
8 If you get an error,
*Something went wrong while running your scripts*,
try hitting **Send** again. | use `async` & `await`
Sample Code:
```
// assuming a promise-returning HTTP helper such as fetch
const getData = async () => {
  const res = await fetch('https://getdata.com');
  return res.json();
};
``` |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | If you're using promises, this answer is for you.
-------------------------------------------------
This means AngularJS, jQuery (with deferred), native [XHR](https://en.wikipedia.org/wiki/XMLHttpRequest)'s replacement (fetch), [Ember.js](https://en.wikipedia.org/wiki/Ember.js), [Backbone.js](https://en.wikipedia.org/wiki/Backbone.js)'s save or any [Node.js](https://en.wikipedia.org/wiki/Node.js) library that returns promises.
Your code should be something along the lines of this:
```
function foo() {
var data;
// Or $.get(...).then, or request(...).then, or query(...).then
fetch("/echo/json").then(function(response){
data = response.json();
});
return data;
}
var result = foo(); // 'result' is always undefined no matter what.
```
[Felix Kling did a fine job](https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call/14220323#14220323) writing an answer for people using jQuery with callbacks for Ajax. I have an answer for native XHR. This answer is for generic usage of promises either on the frontend or backend.
---
The core issue
--------------
The JavaScript concurrency model in the browser and on the server with Node.js/io.js is *asynchronous* and *reactive*.
Whenever you call a method that returns a promise, the `then` handlers are *always* executed asynchronously - that is, **after** the code below them that is not in a `.then` handler.
This means when you're returning `data` the `then` handler you've defined did not execute yet. This in turn means that the value you're returning has not been set to the correct value in time.
Here is a simple analogy for the issue:
```js
function getFive(){
var data;
setTimeout(function(){ // Set a timer for one second in the future
data = 5; // After a second, do this
}, 1000);
return data;
}
document.body.innerHTML = getFive(); // `undefined` here and not 5
```
The value of `data` is `undefined` since the `data = 5` part has not executed yet. It will likely execute in a second, but by that time it is irrelevant to the returned value.
Since the operation did not happen yet (Ajax, server call, I/O, and timer) you're returning the value before the request got the chance to tell your code what that value is.
One possible solution to this problem is to code *re-actively*, telling your program what to do when the calculation completed. Promises actively enable this by being temporal (time-sensitive) in nature.
### Quick recap on promises
A Promise is a *value over time*. Promises have state. They start as pending with no value and can settle to:
* **fulfilled** meaning that the computation completed successfully.
* **rejected** meaning that the computation failed.
A promise can only change states *once* after which it will always stay at the same state forever. You can attach `then` handlers to promises to extract their value and handle errors. `then` handlers allow [chaining](https://stackoverflow.com/questions/22539815/arent-promises-just-callbacks) of calls. Promises are created by [using APIs that return them](https://stackoverflow.com/questions/22519784/how-do-i-convert-an-existing-callback-api-to-promises). For example, the more modern Ajax replacement `fetch` or jQuery's `$.get` return promises.
When we call `.then` on a promise and *return* something from it - we get a promise for *the processed value*. If we return another promise we'll get amazing things, but let's hold our horses.
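A tiny illustrative sketch of that chaining (not tied to any particular API):

```
Promise.resolve(2)
  .then(function (value) { return value * 3; })    // a promise for 6
  .then(function (value) { return value + 1; })    // a promise for 7
  .then(function (value) { console.log(value); }); // logs 7, asynchronously
```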
### With promises
Let's see how we can solve the above issue with promises. First, let's demonstrate our understanding of promise states from above by using the [Promise constructor](https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules/Promise.jsm/Promise) for creating a delay function:
```
function delay(ms){ // Takes amount of milliseconds
// Returns a new promise
return new Promise(function(resolve, reject){
setTimeout(function(){ // When the time is up,
resolve(); // change the promise to the fulfilled state
}, ms);
});
}
```
Now, after we [converted setTimeout](http://stackoverflow.com/questions/22519784/how-do-i-convert-an-existing-callback-api-to-promises) to use promises, we can use `then` to make it count:
```js
function delay(ms){ // Takes amount of milliseconds
// Returns a new promise
return new Promise(function(resolve, reject){
setTimeout(function(){ // When the time is up,
resolve(); // change the promise to the fulfilled state
}, ms);
});
}
function getFive(){
// We're RETURNING the promise. Remember, a promise is a wrapper over our value
return delay(100).then(function(){ // When the promise is ready,
return 5; // return the value 5. Promises are all about return values
})
}
// We _have_ to wrap it like this in the call site, and we can't access the plain value
getFive().then(function(five){
document.body.innerHTML = five;
});
```
Basically, instead of returning a *value* which we can't do because of the concurrency model - we're returning a *wrapper* for a value that we can *unwrap* with `then`. It's like a box you can open with `then`.
### Applying this
This stands the same for your original API call, you can:
```
function foo() {
// RETURN the promise
return fetch("/echo/json").then(function(response){
return response.json(); // Process it inside the `then`
});
}
foo().then(function(response){
// Access the value inside the `then`
})
```
So this works just as well. We've learned we can't return values from already asynchronous calls, but we can use promises and chain them to perform processing. We now know how to return the response from an asynchronous call.
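As a side note, rejections travel down the same chain, so a single handler at the end can deal with errors from any step - a small sketch reusing the `foo` above:

```js
foo().then(function(response){
    // Work with the parsed response here
}).catch(function(err){
    // A failed request or an error thrown in a `then` handler above lands here
    console.error(err);
});
```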
ES2015 (ES6)
------------
ES6 introduces [generators](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function*) which are functions that can return in the middle and then resume the point they were at. This is typically useful for sequences, for example:
```
function* foo(){ // Notice the star. This is ES6, so new browsers, Node.js, and io.js only
yield 1;
yield 2;
while(true) yield 3;
}
```
This is a function that returns an *iterator* over the sequence `1,2,3,3,3,3,...` which can be iterated. While this is interesting on its own and opens up a lot of possibilities, there is one particularly interesting case.
If the sequence we're producing is a sequence of actions rather than numbers - we can pause the function whenever an action is yielded and wait for it before we resume the function. So instead of a sequence of numbers, we need a sequence of *future* values - that is: promises.
This is a somewhat tricky, but very powerful trick that lets us write asynchronous code in a synchronous manner. There are several "runners" that do this for you. Writing one is only a few lines of code, but it is beyond the scope of this answer. I'll be using Bluebird's `Promise.coroutine` here, but there are other wrappers like `co` or `Q.async`.
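Still, for the curious, a rough sketch of such a runner might look like this (illustrative only and without error handling - not Bluebird's actual implementation):

```
function coroutine(generatorFn){
    return function(){
        var gen = generatorFn.apply(this, arguments);
        function step(value){
            var result = gen.next(value);                    // Resume the generator with the last resolved value
            if (result.done) return Promise.resolve(result.value);
            return Promise.resolve(result.value).then(step); // Wait for the yielded promise, then continue
        }
        return step(undefined);
    };
}
```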
```
var foo = coroutine(function*(){
var data = yield fetch("/echo/json"); // Notice the yield
// The code here only executes _after_ the request is done
return data.json(); // 'data' is defined
});
```
This method returns a promise itself, which we can consume from other coroutines. For example:
```
var main = coroutine(function*(){
var bar = yield foo(); // Wait our earlier coroutine. It returns a promise
// The server call is done here, and the code below executes when done
var baz = yield fetch("/api/users/" + bar.userid); // Depends on foo's result
console.log(baz); // Runs after both requests are done
});
main();
```
ES2016 (ES7)
------------
In ES7, this is further standardized. There are several proposals right now, but in all of them you can `await` a promise. This is just "sugar" (nicer syntax) for the ES6 proposal above, adding the `async` and `await` keywords. This makes the above example:
```
async function foo(){
var data = await fetch("/echo/json"); // Notice the await
// code here only executes _after_ the request is done
return data.json(); // 'data' is defined
}
```
It still returns a promise just the same :) | We find ourselves in a universe which appears to progress along a dimension we call "time". We don't really understand what time is, but we have developed abstractions and vocabulary that let us reason and talk about it: "past", "present", "future", "before", "after".
The computer systems we build--more and more--have time as an important dimension. Certain things are set up to happen in the future. Then other things need to happen after those first things eventually occur. This is the basic notion called "asynchronicity". In our increasingly networked world, the most common case of asynchronicity is waiting for some remote system to respond to some request.
Consider an example. You call the milkman and order some milk. When it comes, you want to put it in your coffee. You can't put the milk in your coffee right now, because it is not here yet. You have to wait for it to come before putting it in your coffee. In other words, the following won't work:
```
var milk = order_milk();
put_in_coffee(milk);
```
Because JavaScript has no way to know that it needs to **wait** for `order_milk` to finish before it executes `put_in_coffee`. In other words, it does not know that `order_milk` is **asynchronous**--something that is not going to result in milk until some future time. JavaScript, and other imperative languages, execute one statement after another without waiting.
The classic JavaScript approach to this problem, taking advantage of the fact that JavaScript supports functions as first-class objects which can be passed around, is to pass a function as a parameter to the asynchronous request, which it will then invoke when it has completed its task sometime in the future. That is the "callback" approach. It looks like this:
```
order_milk(put_in_coffee);
```
`order_milk` kicks off, orders the milk, then, when and only when it arrives, it invokes `put_in_coffee`.
The problem with this callback approach is that it pollutes the normal semantics of a function reporting its result with `return`; instead, functions must report their results by calling a callback given as a parameter. Also, this approach can rapidly become unwieldy when dealing with longer sequences of events. For example, let's say that I want to wait for the milk to be put in the coffee, and then and only then perform a third step, namely drinking the coffee. I end up needing to write something like this:
```
order_milk(function(milk) { put_in_coffee(milk, drink_coffee); });
```
where I am passing to `put_in_coffee` both the milk to put in it, and also the action (`drink_coffee`) to execute once the milk has been put in. Such code becomes hard to write, and read, and debug.
In this case, we could rewrite the code in the question as:
```
var answer;
$.ajax('/foo.json') . done(function(response) {
callback(response.data);
});
function callback(data) {
console.log(data);
}
```
### Enter promises
This was the motivation for the notion of a "promise", which is a particular type of value which represents a **future** or **asynchronous** outcome of some sort. It can represent something that already happened, or that is going to happen in the future, or might never happen at all. Promises have a single method, named `then`, to which you pass an action to be executed when the outcome the promise represents has been realized.
In the case of our milk and coffee, we design `order_milk` to return a promise for the milk arriving, then specify `put_in_coffee` as a `then` action, as follows:
```
order_milk() . then(put_in_coffee)
```
One advantage of this is that we can string these together to create sequences of future occurrences ("chaining"):
```
order_milk() . then(put_in_coffee) . then(drink_coffee)
```
Let's apply promises to your particular problem. We will wrap our request logic inside a function, which returns a promise:
```
function get_data() {
return $.ajax('/foo.json');
}
```
Actually, all we've done is added a `return` to the call to `$.ajax`. This works because jQuery's `$.ajax` already returns a kind of promise-like thing. (In practice, without getting into details, we would prefer to wrap this call so as to return a real promise, or use some alternative to `$.ajax` that does so.) Now, if we want to load the file and wait for it to finish and then do something, we can simply say
```
get_data() . then(do_something)
```
for instance,
```
get_data() .
then(function(data) { console.log(data); });
```
When using promises, we end up passing lots of functions into `then`, so it's often helpful to use the more compact ES6-style arrow functions:
```
get_data() .
then(data => console.log(data));
```
### The `async` keyword
But there's still something vaguely dissatisfying about having to write code one way if synchronous and a quite different way if asynchronous. For synchronous, we write
```
a();
b();
```
but if `a` is asynchronous, with promises we have to write
```
a() . then(b);
```
Above, we said, "JavaScript has no way to know that it needs to **wait** for the first call to finish before it executes the second". Wouldn't it be nice if there **was** some way to tell JavaScript that? It turns out that there is--the `await` keyword, used inside a special type of function called an "async" function. This feature is part of the upcoming version of ECMAScript (ES), but it is already available in transpilers such as [Babel](https://en.wikipedia.org/wiki/Babel_(transcompiler)) given the right presets. This allows us to simply write
```
async function morning_routine() {
var milk = await order_milk();
var coffee = await put_in_coffee(milk);
await drink(coffee);
}
```
In your case, you would be able to write something like
```
async function foo() {
var data = await get_data();
console.log(data);
}
``` |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | If you're *not* using jQuery in your code, this answer is for you
=================================================================
Your code should be something along the lines of this:
```
function foo() {
var httpRequest = new XMLHttpRequest();
httpRequest.open('GET', "/echo/json");
httpRequest.send();
return httpRequest.responseText;
}
var result = foo(); // Always ends up being 'undefined'
```
[Felix Kling did a fine job](https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call/14220323#14220323) writing an answer for people using jQuery for AJAX, but I've decided to provide an alternative for people who aren't.
([Note, for those using the new `fetch` API, Angular or promises I've added another answer below](https://stackoverflow.com/a/30180679/1348195))
---
What you're facing
==================
This is a short summary of "Explanation of the problem" from the other answer, if you're not sure after reading this, read that.
The **A** in AJAX stands for **asynchronous**. That means sending the request (or rather receiving the response) is taken out of the normal execution flow. In your example, [`.send`](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest#send%28%29) returns immediately and the next statement, `return result;`, is executed before the function you passed as `success` callback was even called.
This means when you're returning, the listener you've defined did not execute yet, which means the value you're returning has not been defined.
Here is a simple analogy:
```
function getFive(){
var a;
setTimeout(function(){
a=5;
},10);
return a;
}
```
[(Fiddle)](http://jsfiddle.net/7RK3k/)
The value of `a` returned is `undefined` since the `a=5` part has not executed yet. AJAX acts like this: you're returning the value before the server has had the chance to tell your browser what that value is.
One possible solution to this problem is to code *re-actively* , telling your program what to do when the calculation completed.
```
function onComplete(a){ // When the code completes, do this
alert(a);
}
function getFive(whenDone){
var a;
setTimeout(function(){
a=5;
whenDone(a);
},10);
}
```
This is called [CPS](http://en.wikipedia.org/wiki/Continuation-passing_style). Basically, we're passing `getFive` an action to perform when it completes, we're telling our code how to react when an event completes (like our AJAX call, or in this case the timeout).
Usage would be:
```
getFive(onComplete);
```
Which should alert "5" to the screen. [(Fiddle)](http://jsfiddle.net/PAjZR/).
Possible solutions
==================
There are basically two ways how to solve this:
1. Make the AJAX call synchronous (let’s call it SJAX).
2. Restructure your code to work properly with callbacks.
1. Synchronous AJAX - Don't do it!!
-----------------------------------
As for synchronous AJAX, **don't do it!** Felix's answer raises some compelling arguments about why it's a bad idea. To sum it up, it'll freeze the user's browser until the server returns the response and create a very bad user experience. Here is another short summary taken from MDN on why:
>
> XMLHttpRequest supports both synchronous and asynchronous communications. In general, however, asynchronous requests should be preferred to synchronous requests for performance reasons.
>
>
> In short, synchronous requests block the execution of code... ...this can cause serious issues...
>
>
>
If you *have* to do it, you can pass a flag. [Here is how](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests#Synchronous_request):
```
var request = new XMLHttpRequest();
request.open('GET', 'yourURL', false); // `false` makes the request synchronous
request.send(null);
if (request.status === 200) {// That's HTTP for 'ok'
console.log(request.responseText);
}
```
2. Restructure code
-------------------
Let your function accept a callback. In the example code `foo` can be made to accept a callback. We'll be telling our code how to *react* when `foo` completes.
So:
```
var result = foo();
// Code that depends on `result` goes here
```
Becomes:
```
foo(function(result) {
// Code that depends on `result`
});
```
Here we passed an anonymous function, but we could just as easily pass a reference to an existing function, making it look like:
```
function myHandler(result) {
// Code that depends on `result`
}
foo(myHandler);
```
For more details on how this sort of callback design is done, check Felix's answer.
Now, let's define foo itself to act accordingly
```
function foo(callback) {
var httpRequest = new XMLHttpRequest();
httpRequest.onload = function(){ // When the request is loaded
callback(httpRequest.responseText);// We're calling our method
};
httpRequest.open('GET', "/echo/json");
httpRequest.send();
}
```
[(fiddle)](http://jsfiddle.net/DAcWT/)
We have now made our *foo* function accept an action to run when the AJAX completes successfully. We can extend this further by checking if the response status is not 200 and acting accordingly (for example, by creating a fail handler), as sketched below. Effectively, this solves our issue.
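For example, a sketch of such an extension (same endpoint as above; how you react to a failure is up to you):

```
function foo(onSuccess, onError) {
    var httpRequest = new XMLHttpRequest();
    httpRequest.onload = function(){
        if (httpRequest.status === 200) {
            onSuccess(httpRequest.responseText); // Hand the response body to the success callback
        } else {
            onError(httpRequest.status);         // Report the failing HTTP status to the fail handler
        }
    };
    httpRequest.open('GET', "/echo/json");
    httpRequest.send();
}
```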
If you're still having a hard time understanding this, [read the AJAX getting started guide](https://developer.mozilla.org/en-US/docs/AJAX/Getting_Started) at MDN. | If you're using promises, this answer is for you.
-------------------------------------------------
This means AngularJS, jQuery (with deferred), native [XHR](https://en.wikipedia.org/wiki/XMLHttpRequest)'s replacement (fetch), [Ember.js](https://en.wikipedia.org/wiki/Ember.js), [Backbone.js](https://en.wikipedia.org/wiki/Backbone.js)'s save or any [Node.js](https://en.wikipedia.org/wiki/Node.js) library that returns promises.
Your code should be something along the lines of this:
```
function foo() {
var data;
// Or $.get(...).then, or request(...).then, or query(...).then
fetch("/echo/json").then(function(response){
data = response.json();
});
return data;
}
var result = foo(); // 'result' is always undefined no matter what.
```
[Felix Kling did a fine job](https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call/14220323#14220323) writing an answer for people using jQuery with callbacks for Ajax. I have an answer for native XHR. This answer is for generic usage of promises either on the frontend or backend.
---
The core issue
--------------
The JavaScript concurrency model in the browser and on the server with Node.js/io.js is *asynchronous* and *reactive*.
Whenever you call a method that returns a promise, the `then` handlers are *always* executed asynchronously - that is, **after** the code below them that is not in a `.then` handler.
This means when you're returning `data` the `then` handler you've defined did not execute yet. This in turn means that the value you're returning has not been set to the correct value in time.
Here is a simple analogy for the issue:
```js
function getFive(){
var data;
setTimeout(function(){ // Set a timer for one second in the future
data = 5; // After a second, do this
}, 1000);
return data;
}
document.body.innerHTML = getFive(); // `undefined` here and not 5
```
The value of `data` is `undefined` since the `data = 5` part has not executed yet. It will likely execute in a second, but by that time it is irrelevant to the returned value.
Since the operation did not happen yet (Ajax, server call, I/O, and timer) you're returning the value before the request got the chance to tell your code what that value is.
One possible solution to this problem is to code *re-actively*, telling your program what to do when the calculation completed. Promises actively enable this by being temporal (time-sensitive) in nature.
### Quick recap on promises
A Promise is a *value over time*. Promises have state. They start as pending with no value and can settle to:
* **fulfilled** meaning that the computation completed successfully.
* **rejected** meaning that the computation failed.
A promise can only change states *once* after which it will always stay at the same state forever. You can attach `then` handlers to promises to extract their value and handle errors. `then` handlers allow [chaining](https://stackoverflow.com/questions/22539815/arent-promises-just-callbacks) of calls. Promises are created by [using APIs that return them](https://stackoverflow.com/questions/22519784/how-do-i-convert-an-existing-callback-api-to-promises). For example, the more modern Ajax replacement `fetch` or jQuery's `$.get` return promises.
When we call `.then` on a promise and *return* something from it - we get a promise for *the processed value*. If we return another promise we'll get amazing things, but let's hold our horses.
### With promises
Let's see how we can solve the above issue with promises. First, let's demonstrate our understanding of promise states from above by using the [Promise constructor](https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules/Promise.jsm/Promise) for creating a delay function:
```
function delay(ms){ // Takes amount of milliseconds
// Returns a new promise
return new Promise(function(resolve, reject){
setTimeout(function(){ // When the time is up,
resolve(); // change the promise to the fulfilled state
}, ms);
});
}
```
Now, after we [converted setTimeout](http://stackoverflow.com/questions/22519784/how-do-i-convert-an-existing-callback-api-to-promises) to use promises, we can use `then` to make it count:
```js
function delay(ms){ // Takes amount of milliseconds
// Returns a new promise
return new Promise(function(resolve, reject){
setTimeout(function(){ // When the time is up,
resolve(); // change the promise to the fulfilled state
}, ms);
});
}
function getFive(){
// We're RETURNING the promise. Remember, a promise is a wrapper over our value
return delay(100).then(function(){ // When the promise is ready,
return 5; // return the value 5. Promises are all about return values
})
}
// We _have_ to wrap it like this in the call site, and we can't access the plain value
getFive().then(function(five){
document.body.innerHTML = five;
});
```
Basically, instead of returning a *value* which we can't do because of the concurrency model - we're returning a *wrapper* for a value that we can *unwrap* with `then`. It's like a box you can open with `then`.
### Applying this
This stands the same for your original API call, you can:
```
function foo() {
// RETURN the promise
return fetch("/echo/json").then(function(response){
return response.json(); // Process it inside the `then`
});
}
foo().then(function(response){
// Access the value inside the `then`
})
```
So this works just as well. We've learned we can't return values from already asynchronous calls, but we can use promises and chain them to perform processing. We now know how to return the response from an asynchronous call.
ES2015 (ES6)
------------
ES6 introduces [generators](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function*) which are functions that can return in the middle and then resume the point they were at. This is typically useful for sequences, for example:
```
function* foo(){ // Notice the star. This is ES6, so new browsers, Node.js, and io.js only
yield 1;
yield 2;
while(true) yield 3;
}
```
This is a function that returns an *iterator* over the sequence `1,2,3,3,3,3,...` which can be iterated. While this is interesting on its own and opens up a lot of possibilities, there is one particularly interesting case.
If the sequence we're producing is a sequence of actions rather than numbers - we can pause the function whenever an action is yielded and wait for it before we resume the function. So instead of a sequence of numbers, we need a sequence of *future* values - that is: promises.
This is a somewhat tricky, but very powerful trick that lets us write asynchronous code in a synchronous manner. There are several "runners" that do this for you. Writing one is only a few lines of code, but it is beyond the scope of this answer. I'll be using Bluebird's `Promise.coroutine` here, but there are other wrappers like `co` or `Q.async`.
```
var foo = coroutine(function*(){
var data = yield fetch("/echo/json"); // Notice the yield
// The code here only executes _after_ the request is done
return data.json(); // 'data' is defined
});
```
This method returns a promise itself, which we can consume from other coroutines. For example:
```
var main = coroutine(function*(){
var bar = yield foo(); // Wait our earlier coroutine. It returns a promise
// The server call is done here, and the code below executes when done
var baz = yield fetch("/api/users/" + bar.userid); // Depends on foo's result
console.log(baz); // Runs after both requests are done
});
main();
```
ES2016 (ES7)
------------
In ES7, this is further standardized. There are several proposals right now, but in all of them you can `await` a promise. This is just "sugar" (nicer syntax) for the ES6 proposal above, adding the `async` and `await` keywords. This makes the above example:
```
async function foo(){
var data = await fetch("/echo/json"); // Notice the await
// code here only executes _after_ the request is done
return data.json(); // 'data' is defined
}
```
It still returns a promise just the same :) |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | ECMAScript 6 has 'generators' which allow you to easily program in an asynchronous style.
```
function* myGenerator() {
const callback = yield;
let [response] = yield $.ajax("https://stackoverflow.com", {complete: callback});
console.log("response is:", response);
// examples of other things you can do
yield setTimeout(callback, 1000);
console.log("it delayed for 1000ms");
while (response.statusText === "error") {
[response] = yield* anotherGenerator();
}
}
```
To run the above code you do this:
```
const gen = myGenerator(); // Create generator
gen.next(); // Start it
gen.next((...args) => gen.next([...args])); // Set its callback function
```
If you need to target browsers that don't support ES6 you can run the code through Babel or closure-compiler to generate ECMAScript 5.
The callback `...args` are wrapped in an array and destructured when you read them so that the pattern can cope with callbacks that have multiple arguments. For example with [node fs](https://nodejs.org/api/fs.html):
```
const [err, data] = yield fs.readFile(filePath, "utf-8", callback);
``` | Use a `callback()` function inside the `foo()` success.
Try it in this way. It is simple and easy to understand.
```
var lat = "";
var lon = "";
function callback(data) {
lat = data.lat;
lon = data.lon;
}
function getLoc() {
var url = "http://ip-api.com/json"
$.getJSON(url, function(data) {
callback(data);
});
}
getLoc();
``` |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | **Short answer**: Your `foo()` method returns immediately, while the `$.ajax()` call executes asynchronously *after the function returns*. The problem is then how or where to store the results retrieved by the async call once it returns.
Several solutions have been given in this thread. Perhaps the easiest way is to pass an object to the `foo()` method, and to store the results in a member of that object after the async call completes.
```
function foo(result) {
$.ajax({
url: '...',
success: function(response) {
result.response = response; // Store the async result
}
});
}
var result = { response: null }; // Object to hold the async result
foo(result); // Returns before the async completes
```
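A small usage sketch of that caveat (the `#show` button is hypothetical): since the request is still asynchronous, `result.response` is only populated once the success callback has actually run.

```
var result = { response: null }; // Object to hold the async result
foo(result);                     // Kicks off the Ajax call and returns immediately

// Later, for example in some user-triggered handler:
$('#show').on('click', function() {
    console.log(result.response); // Non-null only if the Ajax call has finished by now
});
```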
Note that the call to `foo()` will still return nothing useful. However, the result of the async call will now be stored in `result.response`. | Originally, callbacks were used for asynchronous operations (e.g., in the [XMLHttpRequest API](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest)). Now promise-based APIs like the browser's [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) have become the default solution and the nicer [`async/await`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) syntax is supported by all modern browsers and on Node.js (server side).
A common scenario - fetching JSON data from the server - can look like this:
```
async function fetchResource(url) {
const res = await fetch(url);
if (!res.ok) {
throw new Error(res.statusText);
}
return res.json();
}
```
To use it in another function:
```
async function doSomething() {
try {
const data = await fetchResource("https://example.test/resource/1");
// ...
} catch (e) {
// Handle error
...
}
}
```
If you design a modern API, it is strongly recommended to prefer promise-based style over callbacks. If you inherited an API that relies on callbacks, it is possible to wrap it as a promise:
```
function sleep(timeout) {
return new Promise((resolve) => {
setTimeout(() => {
resolve();
}, timeout);
});
}
async function fetchAfterTwoSeconds(url) {
await sleep(2000);
return fetchResource(url);
}
```
In Node.js, which historically relied exclusively on callbacks, that technique is so common that they added a helper function called [`util.promisify`](https://nodejs.org/api/util.html#util_util_promisify_original). |
14,220,321 | How do I return the response/result from a function `foo` that makes an asynchronous request?
I am trying to return the value from the callback, as well as assigning the result to a local variable inside the function and returning that one, but none of those ways actually return the response — they all return `undefined` or whatever the initial value of the variable `result` is.
**Example of an asynchronous function that accepts a callback** (using jQuery's `ajax` function):
```
function foo() {
var result;
$.ajax({
url: '...',
success: function(response) {
result = response;
// return response; // <- I tried that one as well
}
});
return result; // It always returns `undefined`
}
```
**Example using Node.js:**
```
function foo() {
var result;
fs.readFile("path/to/file", function(err, data) {
result = data;
// return data; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
```
**Example using the `then` block of a promise:**
```
function foo() {
var result;
fetch(url).then(function(response) {
result = response;
// return response; // <- I tried that one as well
});
return result; // It always returns `undefined`
}
``` | 2013/01/08 | [
"https://Stackoverflow.com/questions/14220321",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218196/"
] | The simplest solution is to create a JavaScript function and call it for the Ajax `success` callback.
```
function callServerAsync(){
$.ajax({
url: '...',
success: function(response) {
successCallback(response);
}
});
}
function successCallback(responseObj){
// Do something like read the response and show data
alert(JSON.stringify(responseObj)); // Only applicable to a JSON response
}
function foo(callback) {
$.ajax({
url: '...',
success: function(response) {
return callback(null, response);
}
});
}
foo(function(err, result){
if (!err)
console.log(result);
});
``` | >
> JavaScript is single threaded.
>
>
>
The browser can be divided into three parts:
1. Event Loop
2. Web API
3. Event Queue
The event loop runs forever, i.e., it is a kind of infinite loop. The event queue is where all your functions are pushed on some event (example: click).
Functions are taken out of the queue one by one and put into the event loop, which executes each function and prepares itself for the next one once the current one has finished. This means execution of one function doesn't start until the function before it in the queue has been executed in the event loop.
Now let us imagine we pushed two functions into the queue. One gets data from the server and the other uses that data. We pushed the serverRequest() function into the queue first and then the utiliseData() function. The serverRequest function goes into the event loop and makes a call to the server; since we never know how long it will take to get data from the server, this process is expected to take time, so it would keep our event loop busy and thus hang our page.
That's where the Web APIs come into play. They take this function out of the event loop and deal with the server, freeing up the event loop so that we can execute the next function from the queue.
The next function in the queue is utiliseData(), which goes into the loop, but because no data is available yet, it is wasted, and execution continues through the remaining functions until the end of the queue. (This is called async calling, i.e., we can do something else until we get the data.)
Suppose our serverRequest() function had a return statement in its code. When we get data back from the server, the Web API will push it onto the queue, at the end of the queue.
As it gets pushed to the end of the queue, we cannot use its data, as there isn't any function left in our queue to use it. **Thus it is not possible to return something from the async call.**
Thus the *solution* to this is *callback* or *promise*.
*An image from [one of the answers here](https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call/38898933#38898933) correctly explains callback use:*
We give our function (the function that uses the data returned from the server) to the function calling the server.
[![Callback](https://i.stack.imgur.com/UCJgN.png)](https://i.stack.imgur.com/UCJgN.png)
```
function doAjax(callbackFunc, method, url) {
var xmlHttpReq = new XMLHttpRequest();
xmlHttpReq.open(method, url);
xmlHttpReq.onreadystatechange = function() {
if (xmlHttpReq.readyState == 4 && xmlHttpReq.status == 200) {
callbackFunc(xmlHttpReq.responseText);
}
}
xmlHttpReq.send(null);
}
```
In my *code* it is called as:
```
function loadMyJson(categoryValue){
if(categoryValue === "veg")
doAjax(print, "GET", "http://localhost:3004/vegetables");
else if(categoryValue === "fruits")
doAjax(print, "GET", "http://localhost:3004/fruits");
else
console.log("Data not found");
}
```
[JavaScript.info callback](https://javascript.info/callbacks) |
65,926,134 | Javascript
>
> var x = y || z (logical-or operator)
>
>
>
```
x = y if y is not falsy, otherwise z
```
**What's the SASS/SCSS equivalent of Javascript's "var x = y || z"?** | 2021/01/27 | [
"https://Stackoverflow.com/questions/65926134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1219368/"
] | `$x: if(variable-exists("y"), $y, $z);`
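For example, a small sketch of how this could be used (assuming `$z` holds the fallback; this relies on Sass's `if()` not evaluating the branch it does not return):

```
$z: #333; // Fallback value
// $y may or may not have been declared elsewhere
$x: if(variable-exists("y"), $y, $z);

.title {
  color: $x;
}
```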
[Docs for variable-exists](https://sass-lang.com/documentation/modules/meta#variable-exists) | The `!default` flag will have this assignment skipped if the variable already exists.
For example:
```
$var: foo;
$var: bar !default;
@debug $var; // foo
```
And:
```
$var: bar !default;
@debug $var; // bar
```
**Update:** since you clarified your question, this might be a better approach:
```
@function return($x, $y) {
@return if(($x != false), $x, $y);
}
@debug return("foo", "bar"); // "foo"
@debug return(false, "bar"); // "bar"
``` |
65,926,134 | Javascript
>
> var x = y || z (logical-or operator)
>
>
>
```
x = y if y is not falsy, otherwise z
```
**What's the SASS/SCSS equivalent of Javascript's "var x = y || z"?** | 2021/01/27 | [
"https://Stackoverflow.com/questions/65926134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1219368/"
] | `$x: if(variable-exists("y"), $y, $z);`
[Docs for variable-exists](https://sass-lang.com/documentation/modules/meta#variable-exists) | Double-pipe Operator
====================
This construct is called "double-pipe operator" in JavaScript, and helps a lot in writing readable and clean code.
------------------------------------------------------------------------------------------------------------------
But I don't think SASS has something like it, sadly. The closest you can get to it is by using default values with the `!default` flag or the @if and @else flow controls. |
65,926,134 | Javascript
>
> var x = y || z (logical-or operator)
>
>
>
```
x = y if y is not falsy, otherwise z
```
**What's the SASS/SCSS equivalent of Javascript's "var x = y || z"?** | 2021/01/27 | [
"https://Stackoverflow.com/questions/65926134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1219368/"
] | The `!default` flag will have this assignment skipped if the variable already exists.
For example:
```
$var: foo;
$var: bar !default;
@debug $var; // foo
```
And:
```
$var: bar !default;
@debug $var; // bar
```
**Update:** since you clarified your question, this might be a better approach:
```
@function return($x, $y) {
@return if(($x != false), $x, $y);
}
@debug return("foo", "bar"); // "foo"
@debug return(false, "bar"); // "bar"
``` | Double-pipe Operator
====================
This construct is called "double-pipe operator" in JavaScript, and helps a lot in writing readable and clean code.
------------------------------------------------------------------------------------------------------------------
But I don't think SASS has something like it, sadly. The closest you can get to it is by using default values with the `!default` flag or the @if and @else flow controls. |
19,722,578 | In my Ruby on Rails application i am using coffee script with handlebar js,
I am getting the json from ENV.tagList in the coffeescript,
the JSON looks like this:
```
[{"id":13,"name":"ruby"},{"id":6,"name":"yahoo"},{"id":12,"name":"Mysql"},
{"id":14,"name":"text"},{"id":7,"name":"google"},{"id":8,"name":"Test"},
{"id":3,"name":"normandy"}]
```
In handlebar i want to display each name value as a button. | 2013/11/01 | [
"https://Stackoverflow.com/questions/19722578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2182638/"
] | Just replace the joined table with a subquery that computes the sum for each `payment_borrowerid`:
```
SELECT bp.project_id,bp.project_name,bp.project_costing,bp.project_borrower_id,
bp.member_userid,bp.project_staus,pb.SUM_payment_amount as total
FROM borrower_project_master as bp
INNER JOIN
( select payment_borrowerid,SUM(payment_amount) as SUM_payment_amount
FROM
payment_invest_master
GROUP BY payment_borrowerid
)
as pb ON bp.project_borrower_id=pb.payment_borrowerid
WHERE (
(pb.SUM_payment_amount/bp.project_costing)*100 < 100
AND bp.project_staus='Y'
)
ORDER BY RAND() LIMIT 0,3
``` | Try to use HAVING instead of WHERE. |
32,076,664 | I have an image with class=img-responsive and at the smaller windows it slides up and gets smaller than i want. I need to have a proper size of that image at smaller screens like mobile phones, ipads.
Here is the screenshots:
At normal size:
<http://s27.postimg.org/4yhdspz4z/Screen_Shot_2015_08_18_at_18_09_51.png>
At smaller screen:
<http://s9.postimg.org/hzu7hdsan/Screen_Shot_2015_08_18_at_18_10_02.png>
And here is my code:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
<title>Ajans 1000</title>
<!-- Bootstrap -->
<link href="css/bootstrap.min.css" rel="stylesheet">
<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
<style>
.navbar-default {
background-color: #da7600;
}
.navbar-default .navbar-nav > li > a:hover, .navbar-default .navbar-nav > li > a:focus {
color: white; /*Sets the text hover color on navbar*/
}
.navbar-default .navbar-nav > .active > a, .navbar-default .navbar-nav > .active >
a:hover, .navbar-default .navbar-nav > .active > a:focus {
color: white; /*BACKGROUND color for active*/
background-color: #3fbcd7;
border-color: #3fbcd7;
}
.nav > li > a:hover,
.nav > li > a:focus {
text-decoration: none;
background-color: #3fbcd7; /*Change rollover cell color here*/
}
.navbar-default .navbar-nav > li > a {
color: white; /*Change active text color here*/
}
.navbar-default .navbar-header .navbar-toggle:hover, .navbar-default .navbar-header .navbar-toggle:focus {
background-color:#3fbcd7;
}
.navbar-default .navbar-toggle .icon-bar {
background-color: white;
}
#firstContainer {
background-image:url(bg_yaziylabin.png);
background-size:cover;
width:100%;
background-position:center;
margin-top:10px;
}
#secondContainer {
background-image:url(yaziylabin_part2.png);
background-size:center;
width:100%;
background-position:center;
background-repeat:no-repeat;
}
.headerImage {
margin-top:50px;
}
.logoImage {
margin-top:-10px;
}
</style>
</head>
<body data-spy="scroll" data-target=".navbar-collapse">
<div class="navbar navbar-default navbar-fixed-top">
<div class="container">
<div class="navbar-header">
<a class="navbar-brand" href="http://www.yaziylabin.com">
<img style="max-height:25px; margin-top:-3px; padding-right:10px;" src="yaziylabin_logo.png" />
</a>
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse" />
<span class="sr-only"> Toggle Navigation </span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</div>
<div class="collapse navbar-collapse">
<ul class="nav navbar-nav">
<li class="active"> <a href="#secondContainer"> Biz Kimiz, Ne Yaparız ? </a> </li>
<li> <a href="#thirdContainer"> Yardımcı Olabileceğimiz Konular </a> </li>
<li> <a href="#fourthContainer"> Referanslarımız </a> </li>
<li> <a href=""> İletişim </a> </li>
</ul>
</div>
</div>
</div>
<div class="container" id="firstContainer">
<div class="row">
<img class="logoImage img-responsive center-block" src="yaziylabin_yazi.png" />
</div>
</div>
<div class="container" id="secondContainer">
<div class="row">
</div>
</div>
<!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<!-- Include all compiled plugins (below), or include individual files as needed -->
<script src="js/bootstrap.min.js"></script>
<script>
$("#topContainer").css("min-height", $(window).height());
$("#firstContainer").css("min-height", 0.55*($(window).height()));
$("#secondContainer").css("min-height", $(window).height());
$("#thirdContainer").css("min-height", $(window).height());
$("#fourthContainer").css("min-height", $(window).height());
</script>
</body>
</html>
``` | 2015/08/18 | [
"https://Stackoverflow.com/questions/32076664",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3248583/"
] | You should only need this rule, [per the documentation](http://git-scm.com/docs/gitignore):
```
**/build
```
This will exclude `build` folders globally.
If any of those folders had been added prior, you'd have to remove it from Git via `git rm`.
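For instance, something along these lines untracks an already-committed build folder while keeping it on disk (the path is just an example):

```
git rm -r --cached build/
git commit -m "Stop tracking build output"
```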
What's actually *wrong* with your `.gitignore`:
* `/build` and `build/` are equivalent; they will match a top-level `build/` folder.
* `/*/build` and `/*/*/build` wouldn't match anything. | I believe it's because three of your ignore lines are from the root directory:
```
/build
/*/build/
/*/*/build/
```
when they should be from the current directory:
```
./build
./*/build/
./*/*/build/
``` |
32,076,664 | I have an image with class=img-responsive and at the smaller windows it slides up and gets smaller than i want. I need to have a proper size of that image at smaller screens like mobile phones, ipads.
Here is the screenshots:
At normal size:
<http://s27.postimg.org/4yhdspz4z/Screen_Shot_2015_08_18_at_18_09_51.png>
At smaller screen:
<http://s9.postimg.org/hzu7hdsan/Screen_Shot_2015_08_18_at_18_10_02.png>
And here is my code:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
<title>Ajans 1000</title>
<!-- Bootstrap -->
<link href="css/bootstrap.min.css" rel="stylesheet">
<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
<style>
.navbar-default {
background-color: #da7600;
}
.navbar-default .navbar-nav > li > a:hover, .navbar-default .navbar-nav > li > a:focus {
color: white; /*Sets the text hover color on navbar*/
}
.navbar-default .navbar-nav > .active > a, .navbar-default .navbar-nav > .active >
a:hover, .navbar-default .navbar-nav > .active > a:focus {
color: white; /*BACKGROUND color for active*/
background-color: #3fbcd7;
border-color: #3fbcd7;
}
.nav > li > a:hover,
.nav > li > a:focus {
text-decoration: none;
background-color: #3fbcd7; /*Change rollover cell color here*/
}
.navbar-default .navbar-nav > li > a {
color: white; /*Change active text color here*/
}
.navbar-default .navbar-header .navbar-toggle:hover, .navbar-default .navbar-header .navbar-toggle:focus {
background-color:#3fbcd7;
}
.navbar-default .navbar-toggle .icon-bar {
background-color: white;
}
#firstContainer {
background-image:url(bg_yaziylabin.png);
background-size:cover;
width:100%;
background-position:center;
margin-top:10px;
}
#secondContainer {
background-image:url(yaziylabin_part2.png);
background-size:center;
width:100%;
background-position:center;
background-repeat:no-repeat;
}
.headerImage {
margin-top:50px;
}
.logoImage {
margin-top:-10px;
}
</style>
</head>
<body data-spy="scroll" data-target=".navbar-collapse">
<div class="navbar navbar-default navbar-fixed-top">
<div class="container">
<div class="navbar-header">
<a class="navbar-brand" href="http://www.yaziylabin.com">
<img style="max-height:25px; margin-top:-3px; padding-right:10px;" src="yaziylabin_logo.png" />
</a>
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse" />
<span class="sr-only"> Toggle Navigation </span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</div>
<div class="collapse navbar-collapse">
<ul class="nav navbar-nav">
<li class="active"> <a href="#secondContainer"> Biz Kimiz, Ne Yaparız ? </a> </li>
<li> <a href="#thirdContainer"> Yardımcı Olabileceğimiz Konular </a> </li>
<li> <a href="#fourthContainer"> Referanslarımız </a> </li>
<li> <a href=""> İletişim </a> </li>
</ul>
</div>
</div>
</div>
<div class="container" id="firstContainer">
<div class="row">
<img class="logoImage img-responsive center-block" src="yaziylabin_yazi.png" />
</div>
</div>
<div class="container" id="secondContainer">
<div class="row">
</div>
</div>
<!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<!-- Include all compiled plugins (below), or include individual files as needed -->
<script src="js/bootstrap.min.js"></script>
<script>
$("#topContainer").css("min-height", $(window).height());
$("#firstContainer").css("min-height", 0.55*($(window).height()));
$("#secondContainer").css("min-height", $(window).height());
$("#thirdContainer").css("min-height", $(window).height());
$("#fourthContainer").css("min-height", $(window).height());
</script>
</body>
</html>
``` | 2015/08/18 | [
"https://Stackoverflow.com/questions/32076664",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3248583/"
] | You should only need this rule, [per the documentation](http://git-scm.com/docs/gitignore):
```
**/build
```
This will exclude `build` folders globally.
If any of those folders had been added prior, you'd have to remove it from Git via `git rm`.
What's actually *wrong* with your `.gitignore`:
* `/build` and `build/` are equivalent; they will match a top-level `build/` folder.
* `/*/build` and `/*/*/build` wouldn't match anything. | try using
```
build/
**/build/
```
in the .gitignore
If the files are already tracked use `git rm --cached myfile` to remove them from version control but keep them in local repo. Once removed from version control they should not get detected on change. |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing of `gcloud preview app deploy` the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Deploying to the same version got me from 6 minutes to 3 minutes in subsequent deploys.
Example:
```
$ gcloud app deploy app.yaml --version=test
``` | Just fire this command from the root directory that contains app.yaml.
From a shell, go to the directory containing app.yaml and then run gcloud app deploy.
It will be uploaded within a few seconds. |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing of `gcloud preview app deploy` the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Yes, that is totally normal. Most of the deployment steps happen away from your computer and are independent of your codebase size, so there's not a lot you can do to speed up the process.
Various steps that are involved in deploying an app on App Engine can be categorized as follows:
1. Gather info from app.yaml to understand overall deployment
2. Collect code and use the docker image specified in app.yaml to build a docker image with your code
3. Provision Compute Instances, networking/firewall rules, install docker related tools on instance, push docker image to instance and start it
4. Make sure all deployments were successful, start health-checks and if required, transfer/balance out the load.
The step that takes the most time is the last one, where it does all the necessary checks to make sure the deployment was successful and starts ingesting traffic. Depending upon your code size (uploading code to create the container) and requirements for resources (provisioning custom resources), steps 2 and 3 might take a bit more time.
If you do an analysis, you will find that about 70% of the time is consumed by the last step, which we have the least visibility into, yet it is the essential process that gives App Engine the ability to do all the heavy lifting. | As suggested above by @ludo, you could use Google App Engine Standard instead of Flex in the meantime, which takes approximately ~30-50 seconds after the first deployment.
You can test GAE Standard by running this tutorial, which doesn't require a billing account:
<https://codelabs.developers.google.com/codelabs/cloud-app-engine-springboot/index.html#0>
And agreed, this doesn't address GAE Flex, but it gives some options to accelerate during development. |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing of `gcloud preview app deploy` the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Deploying to the same version got me from 6 minutes to 3 minutes in subsequent deploys.
Example:
```
$ gcloud app deploy app.yaml --version=test
``` | Make sure you check what is in the zip it's uploading (it tells you the location of this on deploy), and make sure your yaml skip\_files is set to include things like your .git directory if you have one, and node\_modules |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing of `gcloud preview app deploy` the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Yes, that is totally normal. Most of the deployment steps happen away from your computer and are independent of your codebase size, so there's not a lot you can do to speed up the process.
Various steps that are involved in deploying an app on App Engine can be categorized as follows:
1. Gather info from app.yaml to understand overall deployment
2. Collect code and use the docker image specified in app.yaml to build a docker image with your code
3. Provision Compute Instances, networking/firewall rules, install docker related tools on instance, push docker image to instance and start it
4. Make sure all deployments were successful, start health-checks and if required, transfer/balance out the load.
The step that takes the most time is the last one, where it does all the necessary checks to make sure the deployment was successful and starts ingesting traffic. Depending upon your code size (uploading code to create the container) and requirements for resources (provisioning custom resources), steps 2 and 3 might take a bit more time.
If you do an analysis you will find that about 70% of time is consumed in last step, where we have least visibility into, yet the essential process which gives app-engine the ability to do all the heavy lifting. | Make sure you check what is in the zip it's uploading (it tells you the location of this on deploy), and make sure your yaml skip\_files is set to include things like your .git directory if you have one, and node\_modules |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing of `gcloud preview app deploy` the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Make sure you check what is in the zip it's uploading (it tells you the location of this on deploy), and make sure your yaml skip\_files is set to include things like your .git directory if you have one, and node\_modules | Just fire this command from root directory of app.yaml
From a shell, go to the directory containing app.yaml and then run gcloud app deploy.
It will be uploaded within a few seconds. |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing off `gcloud preview app deploy`, the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Yes, that is totally normal. Most of the deployment steps happen away from your computer and are independent of your codebase size, so there's not a lot you can do to speed up the process.
Various steps that are involved in deploying an app on App Engine can be categorized as follows:
1. Gather info from app.yaml to understand overall deployment
2. Collect code and use the docker image specified in app.yaml to build a docker image with your code
3. Provision Compute Instances, networking/firewall rules, install docker related tools on instance, push docker image to instance and start it
4. Make sure all deployments were successful, start health-checks and if required, transfer/balance out the load.
The process that takes most of the time is the last part, where it does all the necessary checks to make sure the deployment was successful and starts ingesting traffic. Depending upon your code size (uploading code to create the container) and requirements for resources (provisioning custom resources), steps 2 and 3 might take a bit more time.
If you do an analysis you will find that about 70% of the time is consumed in the last step, which we have the least visibility into, yet it is the essential process that gives App Engine the ability to do all the heavy lifting. | Note that the subsequent deploys should be much faster than 8 mins. It's usually 1 minute or less in my tests with Node.js on App Engine Flex. |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing off `gcloud preview app deploy`, the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Yes, that is totally normal. Most of the deployment steps happen away from your computer and are independent of your codebase size, so there's not a lot you can do to speed up the process.
Various steps that are involved in deploying an app on App Engine can be categorized as follows:
1. Gather info from app.yaml to understand overall deployment
2. Collect code and use the docker image specified in app.yaml to build a docker image with your code
3. Provision Compute Instances, networking/firewall rules, install docker related tools on instance, push docker image to instance and start it
4. Make sure all deployments were successful, start health-checks and if required, transfer/balance out the load.
The process that takes most of the time is the last part, where it does all the necessary checks to make sure the deployment was successful and starts ingesting traffic. Depending upon your code size (uploading code to create the container) and requirements for resources (provisioning custom resources), steps 2 and 3 might take a bit more time.
If you do an analysis you will find that about 70% of the time is consumed in the last step, which we have the least visibility into, yet it is the essential process that gives App Engine the ability to do all the heavy lifting. | Just fire this command from root directory of app.yaml
From a shell, go to the directory that contains app.yaml, then run `gcloud app deploy`.
It will be uploaded within a few seconds. |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing off `gcloud preview app deploy`, the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Yes, that is totally normal. Most of the deployment steps happen away from your computer and are independent of your codebase size, so there's not a lot you can do to speed up the process.
Various steps that are involved in deploying an app on App Engine can be categorized as follows:
1. Gather info from app.yaml to understand overall deployment
2. Collect code and use the docker image specified in app.yaml to build a docker image with your code
3. Provision Compute Instances, networking/firewall rules, install docker related tools on instance, push docker image to instance and start it
4. Make sure all deployments were successful, start health-checks and if required, transfer/balance out the load.
The process that takes most of the time is the last part, where it does all the necessary checks to make sure the deployment was successful and starts ingesting traffic. Depending upon your code size (uploading code to create the container) and requirements for resources (provisioning custom resources), steps 2 and 3 might take a bit more time.
If you do an analysis you will find that about 70% of the time is consumed in the last step, which we have the least visibility into, yet it is the essential process that gives App Engine the ability to do all the heavy lifting. | Deploying to the same version got me from 6 minutes to 3 minutes in subsequent deploys.
Example:
```
$ gcloud app deploy app.yaml --version=test
``` |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing off `gcloud preview app deploy`, the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Deploying to the same version got me from 6 minutes to 3 minutes in subsequent deploys.
Example:
```
$ gcloud app deploy app.yaml --version=test
``` | Note that the subsequent deploys should be much faster than 8 mins. It's usually 1 minute or less in my tests with Node.js on App Engine Flex. |
37,683,120 | Trying out new flexible app engine runtime. In this case a custom Ruby on Rails runtime based on the google provided ruby runtime.
When firing off `gcloud preview app deploy`, the whole process takes ~8 minutes, most of which is "updating service". Is this normal? And more importantly, how can I speed it up?
Regards,
Ward | 2016/06/07 | [
"https://Stackoverflow.com/questions/37683120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/127401/"
] | Deploying to the same version got me from 6 minutes to 3 minutes in subsequent deploys.
Example:
```
$ gcloud app deploy app.yaml --version=test
``` | As suggested above by @ludo, you could in the meantime use Google App Engine Standard instead of Flex, which takes approximately 30-50 seconds after the first deployment.
You can test GAE Standard by running this tutorial, which doesn't require a billing account:
<https://codelabs.developers.google.com/codelabs/cloud-app-engine-springboot/index.html#0>
And agreed, this doesn't address GAE Flex, but it gives some options to accelerate development. |
38,378,615 | I have two tables, one is a table with just the table headers, the other table contains all the table data. Both tables are inside their own separate divs. I'm trying to make it so that scrolling horizontally on the table data div will trigger an event in JavaScript that will scroll the table header div at the same rate. I know I could get rid of the divs and just have one table with sticky headers, but I want to try to do it this way. Here's a simplified version of code that I thought would work:
**HTML:**
```
<div id = "div1">
<table id = "stickyheaders" class = "table table-condensed table-striped small">
<thead><tr>
<th>header1</th>
<th>header2</th>
<th>header3</th>
<th>header4</th>
<th>header5</th>
<th>header6</th>
<th>header7</th>
<th>header8</th>
<th>header9</th>
<th>header10</th>
</tr></thead>
</table>
</div>
<div id = "div2">
<table id = "tablebody" class = "table table-condensed table-striped small">
<tbody>
<tr>
<td>data1</td>
<td>data2</td>
<td>data3</td>
<td>data4</td>
<td>data5</td>
<td>data6</td>
<td>data7</td>
<td>data8</td>
<td>data9</td>
<td>data10</td>
</tr>
</tbody>
</table>
</div>
```
**JavaScript:**
```
$(document).ready(function() {
$('#div2').on('scroll', function () {
$('#').scrollLeft($(this).scrollLeft());
});
} )();
```
And here's the [fiddle](https://jsfiddle.net/9mo2rydf/1/)
Am I missing something stupid here? Thanks in advance for your help. I know this is similar to another question asked here, but that one doesn't have an answer and didn't really help me out. | 2016/07/14 | [
"https://Stackoverflow.com/questions/38378615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3835034/"
] | You are missing the core scrolling stuff. Replace the `$('#')` with the right `id` and remove the `()` at the end. And yea, add jQuery:
```js
$(document).ready(function() {
$('#div2').on('scroll', function() {
$('#div1').scrollLeft($(this).scrollLeft());
});
});
```
```css
#div1 {
width: 50%;
height: 40px;
padding: 10px;
border: 1px solid #c0c0c0;
border-radius: 5px;
overflow-y: auto;
overflow-x: auto;
label {
display: block;
}
tr:after {
content: ' ';
display: block;
visibility: auto;
clear: both;
}
}
#div2 {
width: 50%;
height: 50px;
padding: 10px;
border: 1px solid #c0c0c0;
border-radius: 5px;
overflow-y: auto;
overflow-x: auto;
label {
display: block;
}
tr:after {
content: ' ';
display: block;
visibility: auto;
clear: both;
}
}
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="div1">
<table id="stickyheaders" class="table table-condensed table-striped small">
<thead>
<tr>
<th>header1</th>
<th>header2</th>
<th>header3</th>
<th>header4</th>
<th>header5</th>
<th>header6</th>
<th>header7</th>
<th>header8</th>
<th>header9</th>
<th>header10</th>
</tr>
</thead>
</table>
</div>
<div id="div2">
<table id="tablebody" class="table table-condensed table-striped small">
<tbody>
<tr>
<td>data1</td>
<td>data2</td>
<td>data3</td>
<td>data4</td>
<td>data5</td>
<td>data6</td>
<td>data7</td>
<td>data8</td>
<td>data9</td>
<td>data10</td>
</tr>
</tbody>
</table>
</div>
```
Scrolling on the bottom div will scroll the top one. Add jQuery to the jsFiddle.
**Fiddle: <https://jsfiddle.net/2xt1p8t7/>** | I came across this while trying to answer my own question.
Slightly expanded solution. I was trying to link two divs in a similar manner, so that one scrolled with the other. Here is the initial code (both divs had a class called joinscroll).
```
$('.joinscroll').on('scroll touchmove mousemove', function(e){
if ($(this).attr("id") == "topdiv") { $('#bottomdiv').scrollLeft($(this).scrollLeft()); }
if ($(this).attr("id") == "bottomdiv") { $('#topdiv').scrollLeft($(this).scrollLeft()); }
})
```
The problem that I had was that during scrolling, the scroll on the div being scrolled by the function was being detected by the browser, which was causing the function to be executed for that div's scroll. This caused really jerky scrolling, basically because there was a feedback loop.
I tried tricks with preventDefault and stopPropagation, but they didn't work.
In the end, the simplest solution was to detect which div the mouse was hovering over and use that to suppress the other function:
```
$('.joinscroll').on('scroll touchmove mousemove', function(e){
if ($(this).is(':hover')) {
if ($(this).attr("id") == "topdiv") { $('#bottomdiv').scrollLeft($(this).scrollLeft()); }
if ($(this).attr("id") == "bottomdiv") { $('#topdiv').scrollLeft($(this).scrollLeft()); }
}
})
```
Hope this helps somebody. |
56,927,609 | I have my react app set up with create-react-app and I was trying to run it with Docker container and Docker compose.
However, I got the following error when I ran it with Docker compose.
```
web_1 | Could not find a required file.
web_1 | Name: index.html
web_1 | Searched in: /usr/src/app/web_client/public
```
I am using Windows 10 and Docker quickstart terminal
Here is my folder structure:
```
vocabulary-app
|
web_client
|
node_modules/
public/
src/
package.json
package-lock.json
Dockerfile
yarn.lock
docker-compose.yml
```
Here is the content of `docker-compose.yml`
```
### Client SERVER ###
web:
build: ./web_client
environment:
- REACT_APP_PORT=80
expose:
- 80
ports:
- "80:80"
volumes:
- ./web_client/src:/usr/src/app/web_client/src
- ./web_client/public:/usr/src/app/web_client/public
links:
- server
command: npm start
```
Here is the `Dockerfile`
```
FROM node:9.0.0
RUN mkdir -p /usr/src/app/web_client
WORKDIR /usr/src/app/web_client
COPY . .
RUN rm -Rf node_modules
RUN npm install
CMD npm start
```
I also tried to explore the file system in docker and got the following result:
```
$ docker run -t -i vocabularyapp_web /bin/bash
root@2c099746ebab:/usr/src/app/web_client# ls
Dockerfile node_modules package.json src
README.md package-lock.json public yarn.lock
root@2c099746ebab:/usr/src/app/web_client# cd public/
root@2c099746ebab:/usr/src/app/web_client/public# ls
favicon.ico index.html manifest.json
```
This one basically means that the `index.html` file is there, so I got more confused about the error message.
Does someone have a solution to this? | 2019/07/08 | [
"https://Stackoverflow.com/questions/56927609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9633315/"
] | I don't have a docker-compose file, but I was able to get a similar react app to run with the docker command
`docker run -v ${PWD}:/app -v /app/node_modules -p 3001:3000 --rm app-name:dev`
I was then able to access my react app from `localhost:3001` | I did an explicit COPY ./public/index.html and this problem went away
but a similar error was generated for index.js and index.css.
Doing the same for these fixed it. |
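For illustration, a hedged Dockerfile excerpt of what those explicit COPY lines might look like — the paths are assumptions based on the create-react-app layout shown in the question, not the answerer's actual file:
```
# hypothetical excerpt: copy the CRA entry files explicitly when the broad COPY misses them
COPY public/index.html ./public/index.html
COPY src/index.js ./src/index.js
COPY src/index.css ./src/index.css
```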
56,927,609 | I have my react app set up with create-react-app and I was trying to run it with Docker container and Docker compose.
However, I got the following error when I ran it with Docker compose.
```
web_1 | Could not find a required file.
web_1 | Name: index.html
web_1 | Searched in: /usr/src/app/web_client/public
```
I am using Windows 10 and Docker quickstart terminal
Here is my folder structure:
```
vocabulary-app
|
web_client
|
node_modules/
public/
src/
package.json
package-lock.json
Dockerfile
yarn.lock
docker-compose.yml
```
Here is the content of `docker-compose.yml`
```
### Client SERVER ###
web:
build: ./web_client
environment:
- REACT_APP_PORT=80
expose:
- 80
ports:
- "80:80"
volumes:
- ./web_client/src:/usr/src/app/web_client/src
- ./web_client/public:/usr/src/app/web_client/public
links:
- server
command: npm start
```
Here is the `Dockerfile`
```
FROM node:9.0.0
RUN mkdir -p /usr/src/app/web_client
WORKDIR /usr/src/app/web_client
COPY . .
RUN rm -Rf node_modules
RUN npm install
CMD npm start
```
I also tried to explore the file system in docker and got the following result:
```
$ docker run -t -i vocabularyapp_web /bin/bash
root@2c099746ebab:/usr/src/app/web_client# ls
Dockerfile node_modules package.json src
README.md package-lock.json public yarn.lock
root@2c099746ebab:/usr/src/app/web_client# cd public/
root@2c099746ebab:/usr/src/app/web_client/public# ls
favicon.ico index.html manifest.json
```
This one basically means that the `index.html` file is there, so I got more confused about the error message.
Does someone have a solution to this? | 2019/07/08 | [
"https://Stackoverflow.com/questions/56927609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9633315/"
] | I had this same issue when deploying a React App using Docker on an Ubuntu 18.04 server.
The issue was that I missed the step of copying the contents of the entire previous build to the working directory (`/app`) of the application.
**Here's how I solved it**:
Add this below as a step after the `RUN npm install` and before the `RUN npm run build` steps:
```
COPY . ./
```
**My Dockerfile**:
```
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
```
**My docker-compose file**:
```
version: "3"
services:
web:
image: my_app-frontend
build:
context: .
dockerfile: Dockerfile
environment:
REACT_APP_API_URL: ${REACT_APP_API_URL}
expose:
- "3000"
restart: always
volumes:
- .:/app
```
**Nginx for serving static files**:
Create a directory called `nginx` and then under the directory create a file called `default.conf`. The file will have the following configuration:
```
server {
listen 80;
add_header Cache-Control no-cache;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
expires -1;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
```
**My .env**
Create a `.env` file where you will store your environment variables:
```
REACT_APP_API_URL=https://my_api
```
That's it.
**I hope this helps** | I don't have a docker-compose file, but I was able to get a similar react app to run with the docker command
`docker run -v ${PWD}:/app -v /app/node_modules -p 3001:3000 --rm app-name:dev`
I was then able to access my react app from `localhost:3001` |
56,927,609 | I have my react app set up with create-react-app and I was trying to run it with Docker container and Docker compose.
However, I got the following error when I ran it with Docker compose.
```
web_1 | Could not find a required file.
web_1 | Name: index.html
web_1 | Searched in: /usr/src/app/web_client/public
```
I am using Windows 10 and Docker quickstart terminal
Here is my folder structure:
```
vocabulary-app
|
web_client
|
node_modules/
public/
src/
package.json
package-lock.json
Dockerfile
yarn.lock
docker-compose.yml
```
Here is the content of `docker-compose.yml`
```
### Client SERVER ###
web:
build: ./web_client
environment:
- REACT_APP_PORT=80
expose:
- 80
ports:
- "80:80"
volumes:
- ./web_client/src:/usr/src/app/web_client/src
- ./web_client/public:/usr/src/app/web_client/public
links:
- server
command: npm start
```
Here is the `Dockerfile`
```
FROM node:9.0.0
RUN mkdir -p /usr/src/app/web_client
WORKDIR /usr/src/app/web_client
COPY . .
RUN rm -Rf node_modules
RUN npm install
CMD npm start
```
I also tried to explore the file system in docker and got the following result:
```
$ docker run -t -i vocabularyapp_web /bin/bash
root@2c099746ebab:/usr/src/app/web_client# ls
Dockerfile node_modules package.json src
README.md package-lock.json public yarn.lock
root@2c099746ebab:/usr/src/app/web_client# cd public/
root@2c099746ebab:/usr/src/app/web_client/public# ls
favicon.ico index.html manifest.json
```
This one basically means that the `index.html` file is there, so I got more confused about the error message.
Does someone have a solution to this? | 2019/07/08 | [
"https://Stackoverflow.com/questions/56927609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9633315/"
] | I had this same issue when deploying a React App using Docker on an Ubuntu 18.04 server.
The issue was that I missed the step of copying the contents of the entire previous build to the working directory (`/app`) of the application.
**Here's how I solved it**:
Add this below as a step after the `RUN npm install` and before the `RUN npm run build` steps:
```
COPY . ./
```
**My Dockerfile**:
```
# Set base image
FROM node:latest AS builder
# Set working directory
WORKDIR /app
# Copy package.json and install packages
COPY package.json .
RUN npm install
# Copy other project files and build
COPY . ./
RUN npm run build
# Set nginx image
FROM nginx:latest
# Nginx config
RUN rm -rf /etc/nginx/conf.d/default.conf
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# Static build
COPY --from=builder /app/public /usr/share/nginx/html
# Set working directory
WORKDIR /usr/share/nginx/html
# Start Nginx server
CMD ["/bin/bash", "-c", "nginx -g \"daemon off;\""]
```
**My docker-compose file**:
```
version: "3"
services:
web:
image: my_app-frontend
build:
context: .
dockerfile: Dockerfile
environment:
REACT_APP_API_URL: ${REACT_APP_API_URL}
expose:
- "3000"
restart: always
volumes:
- .:/app
```
**Nginx for serving static files**:
Create a directory called `nginx` and then under the directory create a file called `default.conf`. The file will have the following configuration:
```
server {
listen 80;
add_header Cache-Control no-cache;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
expires -1;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
```
**My .env**
Create a `.env` file where you will store your environment variables:
```
REACT_APP_API_URL=https://my_api
```
That's it.
**I hope this helps** | I did an explicit COPY ./public/index.html and this problem went away
but a similar error was generated for index.js and index.css.
Doing the same for these fixed it. |
29,606,787 | Here is my plunker:
<http://plnkr.co/edit/8SOkJeAG4Tctp4zLoAJ0?p=preview>
```
var iframe = document.getElementById('myIframe');
var iframediv = iframe.contentWindow.document; // canvas goes here
iframediv.body.innerHTML += '<canvas id="stage" width="360" height="180"></canvas>';
```
Is the code where I am getting the cannot read property error.
I'm attempting to create a canvas in an iframe, the iframe is located on a page which is loaded as an ionic menu.
I want the code which controls the data in the canvas to be contained in a separate js file for easy readability and re-use the code if required.
The problem seems to be that I'm declaring my JavaScript file in my index.html, but the iframe I want to use is in my home.html file, so the JavaScript file can't find it?
How should I declare my iframe and javascript files so that it can read the properties of my iframe in a nested javascript file? | 2015/04/13 | [
"https://Stackoverflow.com/questions/29606787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4528707/"
] | The line
```
strcat(*final_buff,token_arr);/*concatinate the token to buffer*/
```
will be a problem in the first iteration of the loop.
You are also assuming that the length of the string will never need more than 1 space. You can remove that assumption by executing
```
sprintf(temp,"%d",strlen(token_arr));
```
early in the loop and using `strlen(temp)` to compute the length required for `*final_buff`.
I suggest the following update to the `while` loop:
```
while (token_arr != NULL)
{
printf("tokens--%s\n",token_arr);
sprintf(temp,"%d",strlen(token_arr));
// +3 -> two spaces and the terminating null character.
length = length + strlen(token_arr) + strlen(temp) + 3;
if ( *final_buff == NULL )
{
// No need to use length*sizeof(char). sizeof(char) is
// guaranteed to be 1
*final_buff = malloc(length);
(*final_buff)[0] = '\0';
}
else
{
*final_buff = realloc(*final_buff,(length));/*allocate memory for the buffer*/
}
if (NULL == *final_buff)
{
printf("token memory allocation error\n");
exit(1);
}
strcat(*final_buff,token_arr); /*concatenate the token to the buffer*/
strcat(*final_buff," ");
strcat(*final_buff,temp); /*concatenate the buffer with the string length */
strcat(*final_buff," ");
token_arr = strtok(NULL , ",");/*read next token */
}
``` | Use `snprintf()` if you want to convert a value to a string and need to make sure you don't overflow a buffer.
<http://man7.org/linux/man-pages/man3/printf.3.html>
>
> **Return value**
>
>
> Upon successful return, these functions return the number of
> characters printed (excluding the null byte used to end output to
> strings).
>
>
> The functions snprintf() and vsnprintf() do not write more than size
> bytes (including the terminating null byte ('\0')). If the output was
> truncated due to this limit then the return value is the number of
> characters (excluding the terminating null byte) which would have been
> written to the final string if enough space had been available. Thus,
> a return value of size or more means that the output was truncated.
> (See also below under NOTES.)
>
>
> If an output error is encountered, a negative value is returned.
>
>
> |
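A small, self-contained sketch of that pattern — converting an integer to a string with `snprintf` and checking the return value for truncation (the buffer size here is chosen only for illustration):
```
#include <stdio.h>

int main(void)
{
    char temp[12];   /* illustration only: large enough for a 32-bit int and the null byte */
    int value = 123456;
    int n = snprintf(temp, sizeof temp, "%d", value);
    if (n < 0)
        fprintf(stderr, "encoding error\n");
    else if ((size_t)n >= sizeof temp)
        fprintf(stderr, "truncated: needed %d characters\n", n);
    else
        printf("converted \"%s\" (%d characters)\n", temp, n);
    return 0;
}
```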
29,606,787 | Here is my plunker:
<http://plnkr.co/edit/8SOkJeAG4Tctp4zLoAJ0?p=preview>
```
var iframe = document.getElementById('myIframe');
var iframediv = iframe.contentWindow.document; // canvas goes here
iframediv.body.innerHTML += '<canvas id="stage" width="360" height="180"></canvas>';
```
Is the code where I am getting the cannot read property error.
I'm attempting to create a canvas in an iframe, the iframe is located on a page which is loaded as an ionic menu.
I want the code which controls the data in the canvas to be contained in a separate js file for easy readability and re-use the code if required.
The problem seems to be that I'm declaring my JavaScript file in my index.html, but the iframe I want to use is in my home.html file, so the JavaScript file can't find it?
How should I declare my iframe and javascript files so that it can read the properties of my iframe in a nested javascript file? | 2015/04/13 | [
"https://Stackoverflow.com/questions/29606787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4528707/"
] | `*final_buff` should *always* be null-terminated.
When it is allocated for the first time with `realloc`, it may not be null-terminated. You are writing with `strcat` to it, which requires the buffer to be already null-terminated.
In `main`, you could write
```
char *final_buff = malloc(sizeof(char)); // allocate 1 char
// todo: error checking
*final_buff = 0; // null-terminate
data_token(databuf,&final_buff);
``` | Use `snprintf()` if you want to convert a value to a string and need to make sure you don't overflow a buffer.
<http://man7.org/linux/man-pages/man3/printf.3.html>
>
> **Return value**
>
>
> Upon successful return, these functions return the number of
> characters printed (excluding the null byte used to end output to
> strings).
>
>
> The functions snprintf() and vsnprintf() do not write more than size
> bytes (including the terminating null byte ('\0')). If the output was
> truncated due to this limit then the return value is the number of
> characters (excluding the terminating null byte) which would have been
> written to the final string if enough space had been available. Thus,
> a return value of size or more means that the output was truncated.
> (See also below under NOTES.)
>
>
> If an output error is encountered, a negative value is returned.
>
>
> |
52,326,971 | How can I run **`SELECT DISTINCT field_name from table;`** SQL query in Django as `raw sql` ?
When I try to use `Table.objects.raw("""SELECT DISTINCT field_name from table""")`, I got an exception as
>
> InvalidQuery: Raw query must include the primary key
>
>
> | 2018/09/14 | [
"https://Stackoverflow.com/questions/52326971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8283848/"
] | If you don't need the model instances (which are useless if you want a single field), you can as well just use a plain db-api cursor:
```
from django.db import connection
cursor = connection.cursor()
cursor.execute("select distinct field from table")
for row in cursor:
print(row[0])
```
But for your example use case you don't need SQL at all - the orm has `values` and `values_list` querysets and a `distinct()` modifier too:
```
queryset = YourModel.objects.values_list("field", flat=True).order_by("field").distinct()
print(str(queryset.query))
# > 'SELECT DISTINCT `table`.`field` FROM `table` ORDER BY `table`.`field` ASC'
for title in queryset:
print(title)
```
NB :
1/ since we want a single field, I use the `flat=True` argument to avoid getting a list of tuples
2/ I explicitly set the ordering on the field, otherwise a default ordering possibly defined in the model's Meta could force that ordering field to be part of the generated query too.
```
select field_name, max(id)
from table_name
group by field_name;
``` |
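To tie that back to the original error: if you still want to go through `raw()`, aliasing the aggregated id as the primary-key column should satisfy it. A hedged sketch with placeholder model and table names:
```
# hypothetical model/table names - adjust to the real ones
rows = Table.objects.raw(
    "SELECT field_name, MAX(id) AS id FROM table_name GROUP BY field_name"
)
for row in rows:
    print(row.field_name)
```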
5,911,602 | I am trying to switch my application to use multilingual-ng; unfortunately, there is very little documentation and few FAQs online. I hope someone will be able to tell me what is going wrong with what I'm doing,
following is my model
```
class Main(models.Model):
""" Main Class for all categories """
slug = models.SlugField()
is_active = models.BooleanField(default=True)
site = models.ForeignKey(Site)
parent = models.ForeignKey('self', blank=True, null=True)
class Translation(TranslationModel):
title = models.CharField(max_length=100)
label = models.CharField(max_length=100, blank=True, null=True)
description = models.TextField(blank=True, null=True)
disclaimer = models.TextField(blank=True, null=True)
class Meta:
unique_together = (("slug", "parent"))
def __unicode__(self):
return self.title if self.title is not None else _("No translation")
```
and following is my admin.py
```
class MainAdmin(MultilingualModelAdmin):
''' Multilingual interface for Main category '''
class ListAdmin(MultilingualModelAdmin):
''' Multilingual interface for Main category '''
admin.site.register(Main, MainAdmin)
admin.site.register(List, ListAdmin)
```
When I access my admin panel, I can see the model, list items, and add new items, but when I try to edit an existing item or delete one I get the following error
```
Environment:
Request Method: GET
Request URL: http://mazban.com/admin/category/main/1/
Django Version: 1.3
Python Version: 2.6.1
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'django.contrib.admindocs',
'compressor',
'django.contrib.gis',
'multilingual',
'mazban.lib.apps.core',
'mazban.lib.apps.gis',
'mazban.apps.global',
'mazban.apps.listing',
'mazban.apps.listing.post',
'mazban.apps.listing.home',
'mazban.apps.listing.engine',
'mazban.apps.listing.category']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.locale.LocaleMiddleware',
'mazban.lib.MiddleWare.custom.RequestIsMobile')
Traceback:
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/contrib/admin/options.py" in wrapper
307. return self.admin_site.admin_view(view)(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/utils/decorators.py" in _wrapped_view
93. response = view_func(request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
79. response = view_func(request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/contrib/admin/sites.py" in inner
197. return view(request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/multilingual/admin.py" in wrapped
31. resp = func(cls, request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/multilingual/admin.py" in change_view
277. return super(MultilingualModelAdmin, self).change_view(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/utils/decorators.py" in _wrapper
28. return bound_func(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/utils/decorators.py" in _wrapped_view
93. response = view_func(request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/utils/decorators.py" in bound_func
24. return func(self, *args2, **kwargs2)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/transaction.py" in inner
217. res = func(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/contrib/admin/options.py" in change_view
947. obj = self.get_object(request, unquote(object_id))
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/contrib/admin/options.py" in get_object
451. return queryset.get(pk=object_id)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/models/query.py" in get
341. clone = self.filter(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/models/query.py" in filter
550. return self._filter_or_exclude(False, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/models/query.py" in _filter_or_exclude
568. clone.query.add_q(Q(*args, **kwargs))
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/models/sql/query.py" in add_q
1172. can_reuse=used_aliases, force_having=force_having)
Exception Type: TypeError at /admin/category/main/1/
Exception Value: add_filter() got an unexpected keyword argument 'force_having'
``` | 2011/05/06 | [
"https://Stackoverflow.com/questions/5911602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/145277/"
] | Don't use django-multilingual-ng, as it is not supported anymore and will bring you many headaches. The author of the django-multilingual-ng started a new promising project, named [django-nani](https://github.com/ojii/django-nani). It should be reliable and Django 1.3 compatible.
As for me, this problem didn't show on Django 1.2.4, so you might want to move back to that version, once you go through the [Django 1.2.5 release notes](http://docs.djangoproject.com/en/1.2/releases/1.2.5/). | I've got the same problem, upgrading from 1.2.4 to the new security releases in 1.2.7. Ng is already in use and can't be swapped out, even though support for it has been dropped. Just the world we live in. I can't find any documentation on `force_having`s role in the django query system.
Glad they're working on a new system though. If anyone has any knowledge on `force_having` it would be greatly appreciated. |
5,911,602 | I am trying to switch my application to use multilingual-ng; unfortunately, there is very little documentation and few FAQs online. I hope someone will be able to tell me what is going wrong with what I'm doing,
following is my model
```
class Main(models.Model):
""" Main Class for all categories """
slug = models.SlugField()
is_active = models.BooleanField(default=True)
site = models.ForeignKey(Site)
parent = models.ForeignKey('self', blank=True, null=True)
class Translation(TranslationModel):
title = models.CharField(max_length=100)
label = models.CharField(max_length=100, blank=True, null=True)
description = models.TextField(blank=True, null=True)
disclaimer = models.TextField(blank=True, null=True)
class Meta:
unique_together = (("slug", "parent"))
def __unicode__(self):
return self.title if self.title is not None else _("No translation")
```
and following is my admin.py
```
class MainAdmin(MultilingualModelAdmin):
''' Multilingual interface for Main category '''
class ListAdmin(MultilingualModelAdmin):
''' Multilingual interface for Main category '''
admin.site.register(Main, MainAdmin)
admin.site.register(List, ListAdmin)
```
When I access my admin panel, I can see the model, list items, and add new items, but when I try to edit an existing item or delete one I get the following error
```
Environment:
Request Method: GET
Request URL: http://mazban.com/admin/category/main/1/
Django Version: 1.3
Python Version: 2.6.1
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'django.contrib.admindocs',
'compressor',
'django.contrib.gis',
'multilingual',
'mazban.lib.apps.core',
'mazban.lib.apps.gis',
'mazban.apps.global',
'mazban.apps.listing',
'mazban.apps.listing.post',
'mazban.apps.listing.home',
'mazban.apps.listing.engine',
'mazban.apps.listing.category']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.locale.LocaleMiddleware',
'mazban.lib.MiddleWare.custom.RequestIsMobile')
Traceback:
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/contrib/admin/options.py" in wrapper
307. return self.admin_site.admin_view(view)(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/utils/decorators.py" in _wrapped_view
93. response = view_func(request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
79. response = view_func(request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/contrib/admin/sites.py" in inner
197. return view(request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/multilingual/admin.py" in wrapped
31. resp = func(cls, request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/multilingual/admin.py" in change_view
277. return super(MultilingualModelAdmin, self).change_view(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/utils/decorators.py" in _wrapper
28. return bound_func(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/utils/decorators.py" in _wrapped_view
93. response = view_func(request, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/utils/decorators.py" in bound_func
24. return func(self, *args2, **kwargs2)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/transaction.py" in inner
217. res = func(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/contrib/admin/options.py" in change_view
947. obj = self.get_object(request, unquote(object_id))
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/contrib/admin/options.py" in get_object
451. return queryset.get(pk=object_id)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/models/query.py" in get
341. clone = self.filter(*args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/models/query.py" in filter
550. return self._filter_or_exclude(False, *args, **kwargs)
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/models/query.py" in _filter_or_exclude
568. clone.query.add_q(Q(*args, **kwargs))
File "/users/mo/Projects/python-envs/mazban/lib/python2.6/site-packages/django/db/models/sql/query.py" in add_q
1172. can_reuse=used_aliases, force_having=force_having)
Exception Type: TypeError at /admin/category/main/1/
Exception Value: add_filter() got an unexpected keyword argument 'force_having'
``` | 2011/05/06 | [
"https://Stackoverflow.com/questions/5911602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/145277/"
] | I installed from latest revision and the error disapeared:
```
$ pip install git+https://github.com/ojii/django-multilingual-ng.git
```
Although the error is gone using this release, it still says it is unsupported. I am heavily inclined to roll back to Django 1.2.4, but I am still trying to figure this out.
As mentioned, the [django-nani](https://github.com/ojii/django-nani) project is promising, but it is still in alpha stages. I couldn't find a way to work with any type of model relationship as of today's revision. They [will be working on it soon](https://github.com/chrisglass/django-nani/commit/35ede9ea954e691799c43ab47b413f6e7f5857c2#commitcomment-437811). | I've got the same problem, upgrading from 1.2.4 to the new security releases in 1.2.7. Ng is already in use and can't be swapped out, even though support for it has been dropped. Just the world we live in. I can't find any documentation on `force_having`s role in the django query system.
Glad they're working on a new system though. If anyone has any knowledge on `force_having` it would be greatly appreciated. |
39,590,579 | I am following some guide to build an executable jar.
But I am having a problem: a Java Exception occurred.
I tried to run it in cmd.
java.lang.ClassNotFoundException: lc.kra.system.keyboard.GlobalKeyboardHook
I am using 4 external libraries.
mindrot jbcrypt,json simple, geoip2, and keyboard and mouse hook
My Jar file directories are,
[![enter image description here](https://i.stack.imgur.com/BuBaB.png)](https://i.stack.imgur.com/BuBaB.png)
here is my imports,
```
package timer_app;
...
import org.mindrot.jbcrypt.BCrypt;
import lc.kra.system.keyboard.GlobalKeyboardHook;
import lc.kra.system.keyboard.event.GlobalKeyAdapter;
import lc.kra.system.keyboard.event.GlobalKeyEvent;
import lc.kra.system.mouse.GlobalMouseHook;
import lc.kra.system.mouse.event.GlobalMouseAdapter;
import lc.kra.system.mouse.event.GlobalMouseEvent;
import org.json.simple.JSONAware;
import org.json.simple.parser.JSONParser;
import org.json.simple.*;
import java.nio.file.StandardCopyOption;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
```
and here is my Manifest file,
```
Manifest-Version: 1.0
Created-By: 1.8.0_101 (Oracle Corporation)
Main-Class: timer_app.Timer
Class-Path: lib\lib1.jar lib\geoip2-2.8.0-rc1.jar lib\system-hook-2.5.jar
```
Added the Create Jar tool in JCreator.
[![enter image description here](https://i.stack.imgur.com/d2pIL.png)](https://i.stack.imgur.com/d2pIL.png) | 2016/09/20 | [
"https://Stackoverflow.com/questions/39590579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5399555/"
] | ```
$ cat ip.txt
S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
52416.0 52416.0 178.1 0.0 419456.0 261407.3 524288.0 30663.4 50272.0 30745.7 898 10.393 6 0.243 10.636
$ sed -E 's/\s+/\n/g' ip.txt | pr -2ts' : '
S0C : 52416.0
S1C : 52416.0
S0U : 178.1
S1U : 0.0
EC : 419456.0
EU : 261407.3
OC : 524288.0
OU : 30663.4
PC : 50272.0
PU : 30745.7
YGC : 898
YGCT : 10.393
FGC : 6
FGCT : 0.243
GCT : 10.636
```
Using `sed` to replace white-spaces with newline, then using `pr` to style the output | `tr` twice and `pr` for output control:
```
$ tr -s \ '\n' file | pr -2 -t -s" : "
S0C : 52416.0
S1C : 52416.0
S0U : 178.1
S1U : 0.0
EC : 419456.0
EU : 261407.3
OC : 524288.0
OU : 30663.4
PC : 50272.0
PU : 30745.7
YGC : 898
YGCT : 10.393
FGC : 6
FGCT : 0.243
GCT : 10.636
```
Or `awk` and `pr`:
```
$ awk '$1=$1' OFS="\n" file |pr -2 -t -s" : "
```
ie. `$1=$1` rebuilds the record and `OFS="\n"` changes the output field separator to newline. `pr` makes sweet columns. |
39,590,579 | I am following some guide to build an executable jar.
But I am having a problem: a Java Exception occurred.
I tried to run it in cmd.
java.lang.ClassNotFoundException: lc.kra.system.keyboard.GlobalKeyboardHook
I am using 4 external libraries.
mindrot jbcrypt,json simple, geoip2, and keyboard and mouse hook
My Jar file directories are,
[![enter image description here](https://i.stack.imgur.com/BuBaB.png)](https://i.stack.imgur.com/BuBaB.png)
here is my imports,
```
package timer_app;
...
import org.mindrot.jbcrypt.BCrypt;
import lc.kra.system.keyboard.GlobalKeyboardHook;
import lc.kra.system.keyboard.event.GlobalKeyAdapter;
import lc.kra.system.keyboard.event.GlobalKeyEvent;
import lc.kra.system.mouse.GlobalMouseHook;
import lc.kra.system.mouse.event.GlobalMouseAdapter;
import lc.kra.system.mouse.event.GlobalMouseEvent;
import org.json.simple.JSONAware;
import org.json.simple.parser.JSONParser;
import org.json.simple.*;
import java.nio.file.StandardCopyOption;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
```
and here is my Manifest file,
```
Manifest-Version: 1.0
Created-By: 1.8.0_101 (Oracle Corporation)
Main-Class: timer_app.Timer
Class-Path: lib\lib1.jar lib\geoip2-2.8.0-rc1.jar lib\system-hook-2.5.jar
```
Added the Create Jar tool in JCreator.
[![enter image description here](https://i.stack.imgur.com/d2pIL.png)](https://i.stack.imgur.com/d2pIL.png) | 2016/09/20 | [
"https://Stackoverflow.com/questions/39590579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5399555/"
] | If the columns match up perfectly, you could do something like this with awk:
```
awk -v OFS=' : ' '{ for (i = 1; i <= NF; ++i) if (NR == 1) key[i] = $i; else val[i] = $i }
END { for (i = 1; i <= NF; ++i) print key[i], val[i] }' file
```
Loop through the fields on each line, assigning keys and values to separate arrays. After reading both lines, print the combined output.
`print` inserts the Output Field Separator `OFS` between each field, so set it to `' : '` to get your desired output. | `tr` twice and `pr` for output control:
```
$ tr -s \ '\n' file | pr -2 -t -s" : "
S0C : 52416.0
S1C : 52416.0
S0U : 178.1
S1U : 0.0
EC : 419456.0
EU : 261407.3
OC : 524288.0
OU : 30663.4
PC : 50272.0
PU : 30745.7
YGC : 898
YGCT : 10.393
FGC : 6
FGCT : 0.243
GCT : 10.636
```
Or `awk` and `pr`:
```
$ awk '$1=$1' OFS="\n" file |pr -2 -t -s" : "
```
ie. `$1=$1` rebuilds the record and `OFS="\n"` changes the output field separator to newline. `pr` makes sweet columns. |
39,590,579 | I am following some guide to build an executable jar.
But I am having a problem: a Java Exception occurred.
I tried to run it in cmd.
java.lang.ClassNotFoundException: lc.kra.system.keyboard.GlobalKeyboardHook
I am using 4 external libraries.
mindrot jbcrypt,json simple, geoip2, and keyboard and mouse hook
My Jar file directories are,
[![enter image description here](https://i.stack.imgur.com/BuBaB.png)](https://i.stack.imgur.com/BuBaB.png)
here is my imports,
```
package timer_app;
...
import org.mindrot.jbcrypt.BCrypt;
import lc.kra.system.keyboard.GlobalKeyboardHook;
import lc.kra.system.keyboard.event.GlobalKeyAdapter;
import lc.kra.system.keyboard.event.GlobalKeyEvent;
import lc.kra.system.mouse.GlobalMouseHook;
import lc.kra.system.mouse.event.GlobalMouseAdapter;
import lc.kra.system.mouse.event.GlobalMouseEvent;
import org.json.simple.JSONAware;
import org.json.simple.parser.JSONParser;
import org.json.simple.*;
import java.nio.file.StandardCopyOption;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
```
and here is my Manifest file,
```
Manifest-Version: 1.0
Created-By: 1.8.0_101 (Oracle Corporation)
Main-Class: timer_app.Timer
Class-Path: lib\lib1.jar lib\geoip2-2.8.0-rc1.jar lib\system-hook-2.5.jar
```
Added the Create Jar tool in JCreator.
[![enter image description here](https://i.stack.imgur.com/d2pIL.png)](https://i.stack.imgur.com/d2pIL.png) | 2016/09/20 | [
"https://Stackoverflow.com/questions/39590579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5399555/"
] | ```
awk -v OFS=: 'NR==1 {for(i=1;i<=NF;i++) a[i]=$i;next} {for(i=1;i<=NF;i++) print a[i],$i}' infile
S0C:52416.0
S1C:52416.0
S0U:178.1
S1U:0.0
EC:419456.0
EU:261407.3
OC:524288.0
OU:30663.4
PC:50272.0
PU:30745.7
YGC:898
YGCT:10.393
FGC:6
FGCT:0.243
GCT:10.636
``` | `tr` twice and `pr` for output control:
```
$ tr -s \ '\n' file | pr -2 -t -s" : "
S0C : 52416.0
S1C : 52416.0
S0U : 178.1
S1U : 0.0
EC : 419456.0
EU : 261407.3
OC : 524288.0
OU : 30663.4
PC : 50272.0
PU : 30745.7
YGC : 898
YGCT : 10.393
FGC : 6
FGCT : 0.243
GCT : 10.636
```
Or `awk` and `pr`:
```
$ awk '$1=$1' OFS="\n" file |pr -2 -t -s" : "
```
ie. `$1=$1` rebuilds the record and `OFS="\n"` changes the output field separator to newline. `pr` makes sweet columns. |
72,716,572 | I'm pretty new to using XSL/XSLT to perform XML transformations, and have a scenario I'm looking for some help with.
The TLDR; summation of the problem. I am working on a C# solution to escape certain characters in MathML, specifically in `<mtext>` nodes. Characters include, but are not necessarily limited to `{`, `}`, `[`, and `]`, where they would need to be updated to `\{`, `\}`, `\[`, and `\]` respectively. Seeing some of the interesting things people have done with XSLT transformation, I figured I would give that a shot.
For reference, here's a sample block of MathML:
```
<math style='font-family:Times New Roman' xmlns='http://www.w3.org/1998/Math/MathML'>
<mstyle mathsize='15px'>
<mrow>
<mtext>4 ___ {</mtext>
<mtext mathvariant='italic'>x</mtext>
<mtext>: </mtext>
<mtext mathvariant='italic'>x</mtext>
<mtext> is a natural number greater than 4}</mtext>
</mrow>
</mstyle>
</math>
```
Fiddling around, I have found that using this XSL, I can print out the contents of each `<mtext>` element:
```
<?xml version='1.0' encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes"/>
<xsl:template match="node()|@*">
<xsl:copy>
<xsl:apply-templates select="node()|@*" />
</xsl:copy>
</xsl:template>
<xsl:template match="/">
<xsl:for-each select="//*[local-name()='mtext']">
<xsl:variable name="myMTextVal" select="text()" />
<xsl:message terminate="no">
<xsl:value-of select="$myMTextVal"/>
</xsl:message>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
```
My first thought, which quickly seemed to be an incorrect road to go down, was to use a `translate()` in the for-each loop as an XSL 1.0 version of XSL 2.0's `replace()`:
```
<!-- Outside of the looping template -->
<xsl:param name="braceOpen" select="'{'" />
<xsl:param name="braceOpenReplace" select="'\{'" />
<!-- In the loop itself -->
<xsl:value-of select="translate(//*[local-name()='mtext']/text(), $braceOpen, $braceOpenReplace)"/>
```
The problem with translate()'s limitation to 1:1 character replacement quickly became apparent when the first mtext's content started to display as "4 \_\_\_ \" rather than "4 \_\_\_ \{".
So digging some more, I ran across these threads:
[XSLT string replace](https://stackoverflow.com/questions/3067113/xslt-string-replace)
[XSLT Replace function not found](https://stackoverflow.com/questions/1069092/xslt-replace-function-not-found)
both of which offered an alternative solution in lieu of `replace()`. So I set up a test of:
```
<xsl:template name="ProcessMathText">
<xsl:param name="text"/>
<xsl:param name="replace"/>
<xsl:param name="by"/>
<xsl:choose>
<xsl:when test="contains($text,$replace)">
<xsl:value-of select="substring-before($text,$replace)"/>
<xsl:value-of select="$by"/>
<xsl:call-template name="ProcessMathText">
<xsl:with-param name="text" select="substring-after($text,$replace)"/>
<xsl:with-param name="replace" select="$replace"/>
<xsl:with-param name="by" select="$by"/>
</xsl:call-template>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="$text"/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
```
and placed this within the `for-each` block:
```
<xsl:otherwise>
<xsl:variable name="mTextText" select="Text" />
<xsl:call-template name="ProcessMathText">
<xsl:with-param name="text" select="$mTextText"/>
<xsl:with-param name="replace" select="'{'"/>
<xsl:with-param name="by" select="'\{'"/>
</xsl:call-template>
</xsl:otherwise>
```
However, that began to throw "'xsl:otherwise' cannot be a child of the 'xsl:for-each' element." errors. Ultimately, I'm not 100% sure how to "invoke" the `<xsl:otherwise>` content as stated in the links above without it being within the `for-each` block, which I'm kind of wired to do based on my history with AS, JS, Python, and C#, so I was hoping someone might be able to help me out, or point me in a direction that might yield results rather than me just banging my head against a wall.
One other possible issue I have noticed on the output... It looks like the transformation results in losing the HTML entity characters such as &nbsp;, and having them replaced with " ", which is something I do not want, as that could cause some annoying headaches down the line. Is there a way to maintain the structure, and only replace specific content, without accidentally replacing or in a sense "rendering" HTML entities?
Thanks in advance for your help! | 2022/06/22 | [
"https://Stackoverflow.com/questions/72716572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6416442/"
] | Because optional parameters are by default `null` unless otherwise specified in the constructor definition. If you want to declare another default value, you need to use this syntax:
```dart
void main() {
var house = Home(pr1: 4, pr3: 5);
house.printDate();
// The pr1: 4
// The pr2: str1
// The pr3: 5
}
class Home {
int pr1=1;
String? pr2;
int pr3=3;
Home({required this.pr1, this.pr2 = 'str1', required this.pr3});
void printDate() {
print('The pr1: $pr1');
print('The pr2: $pr2');
print('The pr3: $pr3');
}
}
``` | I refactored your code for readability; this first solution uses an if-else statement
```
class Home {
//initialize the variables using the final keyword
final int? pr1;
final String? pr2;
final int? pr3;
//set the constructors
Home({required this.pr1, this.pr2, required this.pr3});
void printDate(){
//using the pr1 value
print('The pr1: $pr1');
//performing a runtime assertion for dev
//assert(pr2 == null); //if the value for pr2 is null, it runs smoothly
//using an if-else statement, I can tell what shows as the outcome
if (pr2 == null){
print('The pr2: str1');
}else{print('The pr2: $pr2');}//I would rewrite this as a ternary operator (sketched below)
//using the pr3 value
print('The pr3: $pr3');
}
}
//the app starts to run from here
void main(){
var house = Home(pr1:5,pr3:6,);
house.printDate();
}
```
From the above, you can see that when pr2 is null, it displays the text 'str1', and if it is not null, it displays the value of pr2.
Hope this helps a lot. |
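As a follow-up to the ternary comment in the code above, a minimal Dart sketch of that rewrite — a drop-in body for printDate, using the same placeholder field names:
```
// hypothetical rewrite of the if-else block as a single ternary expression
void printDate() {
  print('The pr1: $pr1');
  print(pr2 == null ? 'The pr2: str1' : 'The pr2: $pr2');
  print('The pr3: $pr3');
}
```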
72,716,572 | I'm pretty new to using XSL/XSLT to perform XML transformations, and have a scenario I'm looking for some help with.
The TLDR; summation of the problem. I am working on a C# solution to escape certain characters in MathML, specifically in `<mtext>` nodes. Characters include, but are not necessarily limited to `{`, `}`, `[`, and `]`, where they would need to be updated to `\{`, `\}`, `\[`, and `\]` respectively. Seeing some of the interesting things people have done with XSLT transformation, I figured I would give that a shot.
For reference, here's a sample block of MathML:
```
<math style='font-family:Times New Roman' xmlns='http://www.w3.org/1998/Math/MathML'>
<mstyle mathsize='15px'>
<mrow>
<mtext>4 ___ {</mtext>
<mtext mathvariant='italic'>x</mtext>
<mtext>: </mtext>
<mtext mathvariant='italic'>x</mtext>
<mtext> is a natural number greater than 4}</mtext>
</mrow>
</mstyle>
</math>
```
Fiddling around, I have found that using this XSL, I can print out the contents of each `<mtext>` element:
```
<?xml version='1.0' encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes"/>
<xsl:template match="node()|@*">
<xsl:copy>
<xsl:apply-templates select="node()|@*" />
</xsl:copy>
</xsl:template>
<xsl:template match="/">
<xsl:for-each select="//*[local-name()='mtext']">
<xsl:variable name="myMTextVal" select="text()" />
<xsl:message terminate="no">
<xsl:value-of select="$myMTextVal"/>
</xsl:message>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
```
My first thought, which quickly seemed to be an incorrect road to go down, was to use a `translate()` in the for-each loop as an XSL 1.0 version of XSL 2.0's `replace()`:
```
<!-- Outside of the looping template -->
<xsl:param name="braceOpen" select="'{'" />
<xsl:param name="braceOpenReplace" select="'\{'" />
<!-- In the loop itself -->
<xsl:value-of select="translate(//*[local-name()='mtext']/text(), $braceOpen, $braceOpenReplace)"/>
```
The limitation of `translate()` to strict 1:1 character replacement quickly became apparent when the first mtext's content started to display as "4 ___ \" rather than "4 ___ \{".
So digging some more, I ran across these threads:
[XSLT string replace](https://stackoverflow.com/questions/3067113/xslt-string-replace)
[XSLT Replace function not found](https://stackoverflow.com/questions/1069092/xslt-replace-function-not-found)
both of which offered an alternative solution in lieu of `replace()`. So I set up a test of:
```
<xsl:template name="ProcessMathText">
  <xsl:param name="text"/>
  <xsl:param name="replace"/>
  <xsl:param name="by"/>
  <xsl:choose>
    <xsl:when test="contains($text,$replace)">
      <xsl:value-of select="substring-before($text,$replace)"/>
      <xsl:value-of select="$by"/>
      <xsl:call-template name="ProcessMathText">
        <xsl:with-param name="text" select="substring-after($text,$replace)"/>
        <xsl:with-param name="replace" select="$replace"/>
        <xsl:with-param name="by" select="$by"/>
      </xsl:call-template>
    </xsl:when>
    <xsl:otherwise>
      <xsl:value-of select="$text"/>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```
and placed this within the `for-each` block:
```
<xsl:otherwise>
  <xsl:variable name="mTextText" select="Text" />
  <xsl:call-template name="ProcessMathText">
    <xsl:with-param name="text" select="$mTextText"/>
    <xsl:with-param name="replace" select="'{'"/>
    <xsl:with-param name="by" select="'\{'"/>
  </xsl:call-template>
</xsl:otherwise>
```
However, that began to throw "'xsl:otherwise' cannot be a child of the 'xsl:for-each' element." errors. Ultimately, I'm not 100% sure how to "invoke" the `<xsl:otherwise>` content as stated in the links above without it being within the `for-each` block, which I'm kind of wired to do based on my history with AS, JS, Python, and C#, so I was hoping someone might be able to help me out, or point me in a direction that might yield results rather than me just banging my head against a wall.
One other possible issue I have noticed in the output: it looks like the transformation loses HTML entity characters such as `&nbsp;`, replacing them with a plain " " space, which is something I do not want, as that could cause some annoying headaches down the line. Is there a way to maintain the structure, and only replace specific content, without accidentally replacing or in a sense "rendering" HTML entities?
Thanks in advance for your help! | 2022/06/22 | [
"https://Stackoverflow.com/questions/72716572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6416442/"
] | Because optional parameters are by default `null` unless otherwise specified in the constructor definition. If you want to declare another default value, you need to use this syntax:
```dart
void main() {
  var house = Home(pr1: 4, pr3: 5);
  house.printDate();
  // The pr1: 4
  // The pr2: str1
  // The pr3: 5
}

class Home {
  int pr1 = 1;
  String? pr2;
  int pr3 = 3;

  Home({required this.pr1, this.pr2 = 'str1', required this.pr3});

  void printDate() {
    print('The pr1: $pr1');
    print('The pr2: $pr2');
    print('The pr3: $pr3');
  }
}
``` | The second way would be to use a ternary operator to check for null.
```
class Home {
  // Initialize the variables using the final keyword
  final int? pr1;
  final String? pr2;
  final int? pr3;

  // Set up the constructor
  Home({required this.pr1, this.pr2, required this.pr3});

  void printDate() {
    // Using the pr1 value
    print('The pr1: $pr1');
    // Check pr2 for null using a ternary expression
    pr2 == null ? print('The pr2: str1') : print('The pr2: $pr2');
    // Using the pr3 value
    print('The pr3: $pr3');
  }
}

// The app starts to run from here
void main() {
  var house = Home(
    pr1: 5,
    pr3: 6,
  );
  house.printDate();
}
```
What does ternary mean? Simply put, think of it as a logic condition, a form of if-else written in a shorter way. The way to express it is:
```
condition ? expression1 : expression2;
```
If the condition is true, expression 1 runs; if the condition is false, expression 2 runs.
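As a side note (this snippet is my own addition, not part of the answer's original code), Dart's null-coalescing operator `??` expresses the same fallback even more compactly:
```dart
// Falls back to 'str1' when pr2 is null, same as the ternary above.
print('The pr2: ${pr2 ?? "str1"}');
```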
To answer your question: since Dart 2.12 the language is null safe, so it is not a safe practice to pass potentially null variables to constructors without handling them. I would be glad to explain further if need be.
62,142,482 | Here's the T-SQL I try to run:
```
CREATE EXTERNAL DATA SOURCE mySource WITH
(TYPE = BLOB_STORAGE, LOCATION = 'https://myContainer.blob.core.windows.net', CREDENTIAL = myCredential)
```
Here's the error I get:
```
Msg 105057, Level 16, State 1, Line 6
CREATE EXTERNAL DATA SOURCE statement failed because the value for the 'TYPE' option is invalid. Change the value for the 'TYPE' option and try again.
```
I've Googled for "Msg 105057" and I get nothing. My goal is to use the OPENROWSET function to bulk insert a JSON file from my Azure Storage account into my Azure Data Warehouse. | 2020/06/01 | [
"https://Stackoverflow.com/questions/62142482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1803823/"
] | It does support Blob Storage, but the TYPE needs to be 'HADOOP' and the location needs to use "wasbs" instead of "https":
```
CREATE EXTERNAL DATA SOURCE mySource WITH
(TYPE = HADOOP, LOCATION = 'wasbs://myContainer.blob.core.windows.net', CREDENTIAL = myCredential)
``` | The problem is that Azure SQL Data Warehouse (aka Azure Synapse Analytics) does not support the BLOB\_STORAGE type or the OPENROWSET function. This seems counter-intuitive, but it's not the 1st time Azure has disappointed me.
<https://learn.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-ver15#syntax>
<https://learn.microsoft.com/en-us/sql/relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage?view=sql-server-ver15>
Note that 'Azure Synapse Analytics (SQL DW)' is not listed under 'Applies to' in these documents.
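If the underlying goal is still to get JSON from Blob Storage into a dedicated SQL pool, one common workaround is a PolyBase external table over a HADOOP-type data source (as in the other answer). The sketch below is illustrative only — the data source, folder, table and format names are assumptions, and it reads each line of the file as plain text rather than parsing the JSON:
```
-- Illustrative sketch only; mySource, /myFolder/ and the object names are assumptions.
CREATE EXTERNAL FILE FORMAT RawTextFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = '0x0b')  -- a terminator that should not occur, so each line stays whole
);

CREATE EXTERNAL TABLE dbo.RawJsonLines (
    jsonLine NVARCHAR(4000)
)
WITH (
    LOCATION = '/myFolder/',
    DATA_SOURCE = mySource,        -- the HADOOP/wasbs data source
    FILE_FORMAT = RawTextFormat
);

-- Each row now holds one line of the JSON file as text.
SELECT TOP 10 jsonLine FROM dbo.RawJsonLines;
```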
58,842,636 | Is there a way to set log level in tf serving via docker? I see these params but do not see anything about logging there
```
--port=8500 int32 Port to listen on for gRPC API
--grpc_socket_path="" string If non-empty, listen to a UNIX socket for gRPC API on the given path. Can be either relative or absolute path.
--rest_api_port=0 int32 Port to listen on for HTTP/REST API. If set to zero HTTP/REST API will not be exported. This port must be different than the one specified in --port.
--rest_api_num_threads=48 int32 Number of threads for HTTP/REST API processing. If not set, will be auto set based on number of CPUs.
--rest_api_timeout_in_ms=30000 int32 Timeout for HTTP/REST API calls.
--enable_batching=false bool enable batching
--allow_version_labels_for_unavailable_models=false bool If true, allows assigning unused version labels to models that are not available yet.
--batching_parameters_file="" string If non-empty, read an ascii BatchingParameters protobuf from the supplied file name and use the contained values instead of the defaults.
--model_config_file="" string If non-empty, read an ascii ModelServerConfig protobuf from the supplied file name, and serve the models in that file. This config file can be used to specify multiple models to serve and other advanced parameters including non-default version policy. (If used, --model_name, --model_base_path are ignored.)
--model_config_file_poll_wait_seconds=0 int32 Interval in seconds between each poll of the filesystemfor model_config_file. If unset or set to zero, poll will be done exactly once and not periodically. Setting this to negative is reserved for testing purposes only.
--model_name="default" string name of model (ignored if --model_config_file flag is set)
--model_base_path="" string path to export (ignored if --model_config_file flag is set, otherwise required)
--max_num_load_retries=5 int32 maximum number of times it retries loading a model after the first failure, before giving up. If set to 0, a load is attempted only once. Default: 5
--load_retry_interval_micros=60000000 int64 The interval, in microseconds, between each servable load retry. If set negative, it doesn't wait. Default: 1 minute
--file_system_poll_wait_seconds=1 int32 Interval in seconds between each poll of the filesystem for new model version. If set to zero poll will be exactly done once and not periodically. Setting this to negative value will disable polling entirely causing ModelServer to indefinitely wait for a new model at startup. Negative values are reserved for testing purposes only.
--flush_filesystem_caches=true bool If true (the default), filesystem caches will be flushed after the initial load of all servables, and after each subsequent individual servable reload (if the number of load threads is 1). This reduces memory consumption of the model server, at the potential cost of cache misses if model files are accessed after servables are loaded.
--tensorflow_session_parallelism=0 int64 Number of threads to use for running a Tensorflow session. Auto-configured by default.Note that this option is ignored if --platform_config_file is non-empty.
--tensorflow_intra_op_parallelism=0 int64 Number of threads to use to parallelize the executionof an individual op. Auto-configured by default.Note that this option is ignored if --platform_config_file is non-empty.
--tensorflow_inter_op_parallelism=0 int64 Controls the number of operators that can be executed simultaneously. Auto-configured by default.Note that this option is ignored if --platform_config_file is non-empty.
--ssl_config_file="" string If non-empty, read an ascii SSLConfig protobuf from the supplied file name and set up a secure gRPC channel
--platform_config_file="" string If non-empty, read an ascii PlatformConfigMap protobuf from the supplied file name, and use that platform config instead of the Tensorflow platform. (If used, --enable_batching is ignored.)
--per_process_gpu_memory_fraction=0.000000 float Fraction that each process occupies of the GPU memory space the value is between 0.0 and 1.0 (with 0.0 as the default) If 1.0, the server will allocate all the memory when the server starts, If 0.0, Tensorflow will automatically select a value.
--saved_model_tags="serve" string Comma-separated set of tags corresponding to the meta graph def to load from SavedModel.
--grpc_channel_arguments="" string A comma separated list of arguments to be passed to the grpc server. (e.g. grpc.max_connection_age_ms=2000)
--enable_model_warmup=true bool Enables model warmup, which triggers lazy initializations (such as TF optimizations) at load time, to reduce first request latency.
--version=false bool Display version
--monitoring_config_file="" string If non-empty, read an ascii MonitoringConfig protobuf from the supplied file name
--remove_unused_fields_from_bundle_metagraph=true bool Removes unused fields from MetaGraphDef proto message to save memory.
--use_tflite_model=false bool EXPERIMENTAL; CAN BE REMOVED ANYTIME! Load and use TensorFlow Lite model from `model.tflite` file in SavedModel directory instead of the TensorFlow model from `saved_model.pb` file.
```
There are some related SO questions, like [this one](https://stackoverflow.com/questions/46359852/logging-requests-being-served-by-tensorflow-serving-model) but they talk about serving via bazel.
Specifically, I want to log the incoming requests. | 2019/11/13 | [
"https://Stackoverflow.com/questions/58842636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8321467/"
] | I'm not sure this is going to give you exactly what you want, but I have had luck getting more verbose logging out of TensorFlow Serving by setting the environment variable `TF_CPP_MIN_VLOG_LEVEL`, where the bigger the value, the more verbose the logging.
E.g., `TF_CPP_MIN_VLOG_LEVEL=4` will be extremely verbose. | You can set the VLOG level per module with e.g. `TF_CPP_VMODULE=http_server=1` (to set VLOG level 1 for `http_server.cc`). This is useful because even just `TF_CPP_MIN_VLOG_LEVEL=1` (applying to all modules) is really quite verbose. |
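Since the question is about Docker: either variable can simply be passed with `-e` on the container. A minimal sketch (the model name and host path are placeholders, assuming the standard `tensorflow/serving` image):
```
docker run -p 8501:8501 \
  -e TF_CPP_VMODULE=http_server=1 \
  -e MODEL_NAME=my_model \
  -v /path/to/my_model:/models/my_model \
  tensorflow/serving
```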
58,842,636 | Is there a way to set log level in tf serving via docker? I see these params but do not see anything about logging there
```
--port=8500 int32 Port to listen on for gRPC API
--grpc_socket_path="" string If non-empty, listen to a UNIX socket for gRPC API on the given path. Can be either relative or absolute path.
--rest_api_port=0 int32 Port to listen on for HTTP/REST API. If set to zero HTTP/REST API will not be exported. This port must be different than the one specified in --port.
--rest_api_num_threads=48 int32 Number of threads for HTTP/REST API processing. If not set, will be auto set based on number of CPUs.
--rest_api_timeout_in_ms=30000 int32 Timeout for HTTP/REST API calls.
--enable_batching=false bool enable batching
--allow_version_labels_for_unavailable_models=false bool If true, allows assigning unused version labels to models that are not available yet.
--batching_parameters_file="" string If non-empty, read an ascii BatchingParameters protobuf from the supplied file name and use the contained values instead of the defaults.
--model_config_file="" string If non-empty, read an ascii ModelServerConfig protobuf from the supplied file name, and serve the models in that file. This config file can be used to specify multiple models to serve and other advanced parameters including non-default version policy. (If used, --model_name, --model_base_path are ignored.)
--model_config_file_poll_wait_seconds=0 int32 Interval in seconds between each poll of the filesystemfor model_config_file. If unset or set to zero, poll will be done exactly once and not periodically. Setting this to negative is reserved for testing purposes only.
--model_name="default" string name of model (ignored if --model_config_file flag is set)
--model_base_path="" string path to export (ignored if --model_config_file flag is set, otherwise required)
--max_num_load_retries=5 int32 maximum number of times it retries loading a model after the first failure, before giving up. If set to 0, a load is attempted only once. Default: 5
--load_retry_interval_micros=60000000 int64 The interval, in microseconds, between each servable load retry. If set negative, it doesn't wait. Default: 1 minute
--file_system_poll_wait_seconds=1 int32 Interval in seconds between each poll of the filesystem for new model version. If set to zero poll will be exactly done once and not periodically. Setting this to negative value will disable polling entirely causing ModelServer to indefinitely wait for a new model at startup. Negative values are reserved for testing purposes only.
--flush_filesystem_caches=true bool If true (the default), filesystem caches will be flushed after the initial load of all servables, and after each subsequent individual servable reload (if the number of load threads is 1). This reduces memory consumption of the model server, at the potential cost of cache misses if model files are accessed after servables are loaded.
--tensorflow_session_parallelism=0 int64 Number of threads to use for running a Tensorflow session. Auto-configured by default.Note that this option is ignored if --platform_config_file is non-empty.
--tensorflow_intra_op_parallelism=0 int64 Number of threads to use to parallelize the executionof an individual op. Auto-configured by default.Note that this option is ignored if --platform_config_file is non-empty.
--tensorflow_inter_op_parallelism=0 int64 Controls the number of operators that can be executed simultaneously. Auto-configured by default.Note that this option is ignored if --platform_config_file is non-empty.
--ssl_config_file="" string If non-empty, read an ascii SSLConfig protobuf from the supplied file name and set up a secure gRPC channel
--platform_config_file="" string If non-empty, read an ascii PlatformConfigMap protobuf from the supplied file name, and use that platform config instead of the Tensorflow platform. (If used, --enable_batching is ignored.)
--per_process_gpu_memory_fraction=0.000000 float Fraction that each process occupies of the GPU memory space the value is between 0.0 and 1.0 (with 0.0 as the default) If 1.0, the server will allocate all the memory when the server starts, If 0.0, Tensorflow will automatically select a value.
--saved_model_tags="serve" string Comma-separated set of tags corresponding to the meta graph def to load from SavedModel.
--grpc_channel_arguments="" string A comma separated list of arguments to be passed to the grpc server. (e.g. grpc.max_connection_age_ms=2000)
--enable_model_warmup=true bool Enables model warmup, which triggers lazy initializations (such as TF optimizations) at load time, to reduce first request latency.
--version=false bool Display version
--monitoring_config_file="" string If non-empty, read an ascii MonitoringConfig protobuf from the supplied file name
--remove_unused_fields_from_bundle_metagraph=true bool Removes unused fields from MetaGraphDef proto message to save memory.
--use_tflite_model=false bool EXPERIMENTAL; CAN BE REMOVED ANYTIME! Load and use TensorFlow Lite model from `model.tflite` file in SavedModel directory instead of the TensorFlow model from `saved_model.pb` file.
```
There are some related SO questions, like [this one](https://stackoverflow.com/questions/46359852/logging-requests-being-served-by-tensorflow-serving-model) but they talk about serving via bazel.
Specifically, I want to log the incoming requests. | 2019/11/13 | [
"https://Stackoverflow.com/questions/58842636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8321467/"
] | I'm not sure this is going to give you exactly what you want, but I have had luck getting more verbose logging out of TensorFlow Serving by setting the environment variable `TF_CPP_MIN_VLOG_LEVEL`, where the bigger the value, the more verbose the logging.
E.g., `TF_CPP_MIN_VLOG_LEVEL=4` will be extremely verbose. | This can help you:
```
docker ... -e TF_CPP_MIN_VLOG_LEVEL=4 ...
```
Add `-e TF_CPP_MIN_VLOG_LEVEL=4` or `-e TF_CPP_VMODULE=http_server=1` to the docker command.
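If the server is started through docker-compose instead, the same variables can go under `environment:`. A rough sketch (the image, ports and paths are assumptions):
```yaml
services:
  tf-serving:
    image: tensorflow/serving
    ports:
      - "8501:8501"
    environment:
      - MODEL_NAME=my_model
      - TF_CPP_VMODULE=http_server=1
    volumes:
      - ./my_model:/models/my_model
```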
38,446,391 | I wrote a Java application that takes an argument to set a salt key for JWT tokens. Is there a way for me to pass these command-line arguments in Docker Compose?
```
java -Djava.security.egd=file:/dev/./urandom -jar /user-profile-api.jar --key=blah
```
And to run the docker image you just
```
docker run -p 8080:8080 docker_image --key=blah
``` | 2016/07/18 | [
"https://Stackoverflow.com/questions/38446391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5294769/"
] | The three languages you refer to are the 'languages of the web' - almost all websites nowadays use all three to craft any number of different outcomes (HTML for structure, CSS for style and Javascript for interactivity/actions, etc).
All modern smartphones will be able to read all three of these languages through their web browser, usually extremely well.
If you're happy with your current website when accessed through a desktop/laptop, but it lacks finesse when accessed via a mobile platform, it's usually a very simple job to make it "[responsive](https://stackoverflow.com/questions/33866454/how-to-make-a-responsive-website-using-html-css-and-javascript)" (i.e. so it renders in a different order/with different font sizes etc) for mobile.
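For example, a viewport meta tag plus a single media query already goes a long way (a generic sketch — the class names and breakpoint are illustrative, not taken from your site):
```html
<!-- Let the page scale to the device width -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* Default (desktop) layout: two columns side by side */
  .sidebar { float: left; width: 30%; }
  .content { float: left; width: 70%; }

  /* On narrow screens, stack the columns and enlarge the text */
  @media (max-width: 600px) {
    .sidebar, .content { float: none; width: 100%; }
    body { font-size: 1.1em; }
  }
</style>
```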
If, however, you're looking to design a totally new website both for browser AND mobile, you should probably look to build it mobile-first from the ground up, using a modern CSS framework like Blaz has just suggested. | All standard smartphone web browsers should only support HTML5, CSS3 and JavaScript.
If you are trying to design your website to be compatible with mobile devices I suggest you take a look at some [Responsive CSS Frameworks for Web Design](http://www.awwwards.com/what-are-frameworks-22-best-responsive-css-frameworks-for-web-design.html). Some of the most popular ones right now are [Bootstrap](http://getbootstrap.com/), [Foundation](http://foundation.zurb.com/) and [Semantic UI](http://semantic-ui.com/). |
72,227,397 | I want to verify my assumptions about Time and Space complexity of two different implementations of valid palindrome functions in JavaScript.
In the first implementation we are using a helper function and just pass pointers
```js
const isPalindrome = str => {
return isPalindromeHelper(str, 0, str.length - 1);
}
const isPalindromeHelper = (str, start, end) => {
if (end - start < 1) return true;
return str[start] === str[end] && isPalindromeHelper(str, start + 1, end - 1)
}
```
In this case I am assuming that the time complexity is O(N) and the space complexity is O(N) as well.
However, let's say that instead of passing pointers we are slicing the string each time. And let's say that `slice` is an O(n) operation.
```js
const isPalindrome = str => {
if (str.length <= 1) return true;
if (str[0] !== str[str.length - 1]) return false;
return isPalindrome(str.slice(1, str.length - 1));
}
```
Would this push both Time and Space complexity to O(N^2) if slice was O(N) operation? Time because of time complexity of slice and Space would increase since we are constantly creating new strings? | 2022/05/13 | [
"https://Stackoverflow.com/questions/72227397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8110650/"
] | >
> Would this push both Time and Space complexity to O(N^2) if slice was O(N) operation? Time because of time complexity of `slice`...
>
>
>
Yes, **if** we assume `slice` has a time complexity of O(n), then we have O((n−1) + (n−2) + (n−3) + ... + 1), which is O(n²).
>
> ...and Space would increase since we are constantly creating new strings?
>
>
>
Again, **if** we assume `slice` allocates new memory for the slice, then we have (with the same formula) a space usage of O(n²).
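To make that quadratic growth concrete, here is a rough illustration (my own sketch, not a benchmark) of how many characters the slicing version copies in total, assuming each `slice` copies its result:
```js
// Each recursive call slices 2 characters off, so it copies len - 2 characters.
const countCopiedChars = (n) => {
  let copied = 0;
  for (let len = n; len > 1; len -= 2) copied += len - 2;
  return copied;
};

console.log(countCopiedChars(10));  // 20
console.log(countCopiedChars(100)); // 2450 — grows roughly like n²/4
```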
### However...
As strings are immutable, a JavaScript engine may benefit from memory that already has the string, and not really allocate new memory for slices. This is called [string interning](https://en.wikipedia.org/wiki/String_interning). This could bring both time and space complexity back to O(n). However, since there is no *requirement* in the EcmaScript language specification to actually do this, we have no guarantee. | Both of them are recursive operations and run through the whole length of the string (n) `n/2` times since they start at `0` and at `n - 1` and they run until they meet at `n/2`.
In the O notation, this would mean both are `O(n)` since you can ignore the constant `2`. |
30,788,527 | I am creating an internal CMS for work, and it is important that all pages be mobile-friendly. When you view a page with CKEditor 4.4.7 installed from a phone, the editor shows up as a normal textarea and none of the HTML or text within it is properly formatted.
I can request a desktop version using my phone's browser, and sometimes this will work. It seems to be pretty hit-or-miss across different phones. I don't believe it has anything to do with enabling JavaScript.
Sorry for the lack of technical details - has anyone had any experience with this before?
Thanks.
EDIT ---
Found it. I'm currently invoking the editor by creating a normal text area and then adding this javascript which replaces it given an ID:
```
<script type="text/javascript">
$( document ).ready( function() {
$( 'textarea#SomeIDHere' ).ckeditor();
} );
</script>
```
Well, after some extensive digging, I found that you can automatically mark whatever browser it's loaded into as "Compatible", even though it's not necessarily safe or true, simply by altering the code as such:
```
<script type="text/javascript">
CKEDITOR.env.isCompatible = true;
$( document ).ready( function() {
$( 'textarea#SomeIDHere' ).ckeditor();
} );
</script>
```
I edited this just in case anyone ever comes across the same issue. Not sure how to close the question though. I'm too new and dumb. | 2015/06/11 | [
"https://Stackoverflow.com/questions/30788527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5000465/"
] | In general, CKEditor *is* compatible with iOS and Chrome on Android. If it does not show up in these environments, this is most often an issue with the environment detection mechanism being misled by the browser user agent string.
Up till version 4.4.7 (actually, 4.4.8, but this one hasn't been released yet) CKEditor is only loaded on whitelisted environments (as defined in the `env.js` file). The original purpose was to block CKEditor from appearing in environments where it's not supported. However, the browser detection mechanism is not perfect, especially on mobile devices where browser vendors tend to spoof user agent strings, causing issues that you described.
You can, however, enable CKEditor in unsupported environments (at your own risk) by changing the [CKEDITOR.env.isCompatible](http://docs.ckeditor.com/#!/api/CKEDITOR.env-property-isCompatible) flag to `true`, which causes CKEditor to load in all environments, including the unsupported ones. Note, however, that this has one drawback: it not only enables CKEditor in modern mobile devices, but also tries to load it in old Internet Explorer versions (6&7) where it no longer works (which may cause some level of user frustration). And thus, when using this solution, it is recommended to still blacklist old IEs, like this:
```
// Enable CKEditor in all environments except IE7 and below.
if ( !CKEDITOR.env.ie || CKEDITOR.env.version > 7 )
CKEDITOR.env.isCompatible = true;
```
You can read more about it in the [Enabling CKEditor in Unsupported Environments](http://docs.ckeditor.com/#!/guide/dev_unsupported_environments) article.
An important note: **This mechanism is just about to change in CKEditor 4.5**, the next major release that is due very soon. Ticket [#13316](https://dev.ckeditor.com/ticket/13316) changes `CKEDITOR.env.isCompatible` from a whitelist to a blacklist, which will hopefully help resolve issues like this one. | Here is a solution for you: just update your ckeditor.js file to the latest ckeditor.js file.
[Here is a link to the latest ckeditor.js](https://cdn.ckeditor.com/4.5.10/standard/ckeditor.js)
Here is a screenshot:
[![enter image description here](https://i.stack.imgur.com/NxdyS.jpg)](https://i.stack.imgur.com/NxdyS.jpg) |
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | Yes, you can do this easily with the new GraphQL API.
Check out the explorer: <https://developer.github.com/v4/explorer/>
There you can see the contributions collection which is an edge of the user. You can get all of the information necessary to rebuild the calendar.
I've included a full example, and the explorer documentation can guide you even further.
Specifically, to answer your question, `query.user.contributionsCollection.contributionCalendar.totalContributions` is what you are looking for.
Go ahead and copy/paste the following into the explorer and you will see my contribution history for the last year
```
query {
user(login: "qhenkart") {
email
createdAt
contributionsCollection(from: "2019-09-28T23:05:23Z", to: "2020-09-28T23:05:23Z") {
contributionCalendar {
totalContributions
weeks {
contributionDays {
weekday
date
contributionCount
color
}
}
months {
name
year
firstDay
totalWeeks
}
}
}
}
}
``` | If you would prefer a `npm` package, you can try this.
<https://github.com/SammyRobensParadise/github-contributions-counter#readme>
You can get all-time contributions or contributions broken down by year.
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | To load the SVG with all contributions, you can use this code in your HTML page:
```
<img src="https://ghchart.rshah.org/username" alt="Name Your Github chart">
```
[![enter image description here](https://i.stack.imgur.com/3qJrn.png)](https://i.stack.imgur.com/3qJrn.png)
To customize the color, you can do this:
```
<img src="https://ghchart.rshah.org/HEXCOLORCODE/username" alt="Name Your Github chart">
```
HEXCOLORCODE = **17A2B8**
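So with that color filled in (and `username` replaced by a real GitHub username), the tag becomes:
```
<img src="https://ghchart.rshah.org/17A2B8/username" alt="Name Your Github chart">
```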
[![enter image description here](https://i.stack.imgur.com/RF6XZ.png)](https://i.stack.imgur.com/RF6XZ.png) | I believe you can see count of contributions in a timeframe as well as other individual contributor analytics within Code Climate’s Velocity git analytics, which you may request access to here: <https://go.codeclimate.com/velocity-free-for-teams> |
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | Answer for 2019: use the [GitHub API V4](https://developer.github.com/v4/object/contributionscollection/).
First, go to GitHub to apply for a token: <https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line>. In step 7, under scopes, select only `read:user`.
**cUrl**
```
curl -H "Authorization: bearer token" -X POST -d '{"query":"query {\n user(login: \"MeiK2333\") {\n name\n contributionsCollection {\n contributionCalendar {\n colors\n totalContributions\n weeks {\n contributionDays {\n color\n contributionCount\n date\n weekday\n }\n firstDay\n }\n }\n }\n }\n}"}' https://api.github.com/graphql
```
**JavaScript**
```js
async function getContributions(token, username) {
const headers = {
'Authorization': `bearer ${token}`,
}
const body = {
"query": `query {
user(login: "${username}") {
name
contributionsCollection {
contributionCalendar {
colors
totalContributions
weeks {
contributionDays {
color
contributionCount
date
weekday
}
firstDay
}
}
}
}
}`
}
const response = await fetch('https://api.github.com/graphql', { method: 'POST', body: JSON.stringify(body), headers: headers })
const data = await response.json()
return data
}
const data = await getContributions('token', 'MeiK2333')
console.log(data)
``` | You can use the [github events api](https://developer.github.com/v3/activity/events/) for that:
Example (node.js)
-----------------
```js
const got = require('got')
async function getEvents(username) {
const events = []
let page = 1
do {
const url = `https://api.github.com/users/${username}/events?page=${page}`
var { body } = await got(url, {
json: true
})
page++
events.push(...body)
} while (body.length) // keep paging until an empty page is returned
return events
}
(async () => {
const events = await getEvents('handtrix')
console.log('Overall Events', events.length)
console.log('PullRequests', events.filter(event => event.type === 'PullRequestEvent').length)
console.log('Forks', events.filter(event => event.type === 'ForkEvent').length)
console.log('Issues', events.filter(event => event.type === 'IssuesEvent').length)
console.log('Reviews', events.filter(event => event.type === 'PullRequestReviewEvent').length)
})()
```
Example (javascript)
--------------------
```js
async function getEvents(username) {
const events = []
let page = 1
do {
const url = `https://api.github.com/users/${username}/events?page=${page}`
var body = await fetch(url).then(res => res.json())
page++
events.push(...body)
} while (body.length) // keep paging until an empty page is returned
return events
}
(async () => {
const events = await getEvents('handtrix')
console.log('Overall Events', events.length)
console.log('PullRequests', events.filter(event => event.type === 'PullRequestEvent').length)
console.log('Forks', events.filter(event => event.type === 'ForkEvent').length)
console.log('Issues', events.filter(event => event.type === 'IssuesEvent').length)
console.log('Reviews', events.filter(event => event.type === 'PullRequestReviewEvent').length)
})()
```
Documentation
-------------
* <https://developer.github.com/v3/activity/events/>
* <https://developer.github.com/v3/activity/events/types/>
* <https://www.npmjs.com/package/got> |
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | I believe you can see count of contributions in a timeframe as well as other individual contributor analytics within Code Climate’s Velocity git analytics, which you may request access to here: <https://go.codeclimate.com/velocity-free-for-teams> | If you would prefer a `npm` package, you can try this.
<https://github.com/SammyRobensParadise/github-contributions-counter#readme>
You can get all-time contributions or contributions broken down by year.
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | Answer for 2019: use the [GitHub API V4](https://developer.github.com/v4/object/contributionscollection/).
First, go to GitHub to apply for a token: <https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line>. In step 7, under scopes, select only `read:user`.
**cUrl**
```
curl -H "Authorization: bearer token" -X POST -d '{"query":"query {\n user(login: \"MeiK2333\") {\n name\n contributionsCollection {\n contributionCalendar {\n colors\n totalContributions\n weeks {\n contributionDays {\n color\n contributionCount\n date\n weekday\n }\n firstDay\n }\n }\n }\n }\n}"}' https://api.github.com/graphql
```
**JavaScript**
```js
async function getContributions(token, username) {
const headers = {
'Authorization': `bearer ${token}`,
}
const body = {
"query": `query {
user(login: "${username}") {
name
contributionsCollection {
contributionCalendar {
colors
totalContributions
weeks {
contributionDays {
color
contributionCount
date
weekday
}
firstDay
}
}
}
}
}`
}
const response = await fetch('https://api.github.com/graphql', { method: 'POST', body: JSON.stringify(body), headers: headers })
const data = await response.json()
return data
}
const data = await getContributions('token', 'MeiK2333')
console.log(data)
``` | You could use this function to extract the contributions from the last year (client):
```
function getContributions(){
const svgGraph = document.getElementsByClassName('js-calendar-graph')[0];
const daysRects = svgGraph.getElementsByClassName('day');
const days = [];
for (let d of daysRects){
days.push({
date: d.getAttribute('data-date'),
count: d.getAttribute('data-count')
});
}
return days;
}
```
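As a small usage sketch (my addition, not part of the original answer): the array returned above can be totalled directly, e.g. when run in the browser console on a profile page:
```
// Sum the per-day counts returned by getContributions()
const days = getContributions();
const total = days.reduce((sum, day) => sum + Number(day.count), 0);
console.log(`${total} contributions in the last year`);
```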
I've also written a small node module which can 'extract' the contributions
[@simonwep/github-contributions](https://www.npmjs.com/package/@simonwep/github-contributions)
Maybe this will help you (even if I'm 4 years too late).
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | You can use the [github events api](https://developer.github.com/v3/activity/events/) for that:
Example (node.js)
-----------------
```js
const got = require('got')
async function getEvents(username) {
const events = []
let page = 1
do {
const url = `https://api.github.com/users/${username}/events?page=${page}`
var { body } = await got(url, {
json: true
})
page++
events.push(...body)
} while (body.length) // keep paging until an empty page is returned
return events
}
(async () => {
const events = await getEvents('handtrix')
console.log('Overall Events', events.length)
console.log('PullRequests', events.filter(event => event.type === 'PullRequestEvent').length)
console.log('Forks', events.filter(event => event.type === 'ForkEvent').length)
console.log('Issues', events.filter(event => event.type === 'IssuesEvent').length)
console.log('Reviews', events.filter(event => event.type === 'PullRequestReviewEvent').length)
})()
```
Example (javascript)
--------------------
```js
async function getEvents(username) {
const events = []
let page = 1
do {
const url = `https://api.github.com/users/${username}/events?page=${page}`
var body = await fetch(url).then(res => res.json())
page++
events.push(...body)
} while (body.length) // keep paging until an empty page is returned
return events
}
(async () => {
const events = await getEvents('handtrix')
console.log('Overall Events', events.length)
console.log('PullRequests', events.filter(event => event.type === 'PullRequestEvent').length)
console.log('Forks', events.filter(event => event.type === 'ForkEvent').length)
console.log('Issues', events.filter(event => event.type === 'IssuesEvent').length)
console.log('Reviews', events.filter(event => event.type === 'PullRequestReviewEvent').length)
})()
```
Documentation
-------------
* <https://developer.github.com/v3/activity/events/>
* <https://developer.github.com/v3/activity/events/types/>
* <https://www.npmjs.com/package/got> | You could use this function to extract the contributions from the last year (client):
```
function getContributions(){
const svgGraph = document.getElementsByClassName('js-calendar-graph')[0];
const daysRects = svgGraph.getElementsByClassName('day');
const days = [];
for (let d of daysRects){
days.push({
date: d.getAttribute('data-date'),
count: d.getAttribute('data-count')
});
}
return days;
}
```
I've also written a small node module which can 'extract' the contributions
[@simonwep/github-contributions](https://www.npmjs.com/package/@simonwep/github-contributions)
Maybe this will help you (even if I'm 4 years too late).
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | To load the SVG with all contributions, you can use this code in your HTML page:
```
<img src="https://ghchart.rshah.org/username" alt="Name Your Github chart">
```
[![enter image description here](https://i.stack.imgur.com/3qJrn.png)](https://i.stack.imgur.com/3qJrn.png)
To customize the color, you can do this:
```
<img src="https://ghchart.rshah.org/HEXCOLORCODE/username" alt="Name Your Github chart">
```
HEXCOLORCODE = **17A2B8**
[![enter image description here](https://i.stack.imgur.com/RF6XZ.png)](https://i.stack.imgur.com/RF6XZ.png) | You can use the [github events api](https://developer.github.com/v3/activity/events/) for that:
Example (node.js)
-----------------
```js
const got = require('got')
async function getEvents(username) {
const events = []
let page = 1
do {
const url = `https://api.github.com/users/${username}/events?page=${page}`
var { body } = await got(url, {
json: true
})
page++
events.push(...body)
} while (body.length) // keep paging until an empty page is returned
return events
}
(async () => {
const events = await getEvents('handtrix')
console.log('Overall Events', events.length)
console.log('PullRequests', events.filter(event => event.type === 'PullRequestEvent').length)
console.log('Forks', events.filter(event => event.type === 'ForkEvent').length)
console.log('Issues', events.filter(event => event.type === 'IssuesEvent').length)
console.log('Reviews', events.filter(event => event.type === 'PullRequestReviewEvent').length)
})()
```
Example (javascript)
--------------------
```js
async function getEvents(username) {
const events = []
let page = 1
do {
const url = `https://api.github.com/users/${username}/events?page=${page}`
var body = await fetch(url).then(res => res.json())
page++
events.push(...body)
} while (body.length) // keep paging until an empty page is returned
return events
}
(async () => {
const events = await getEvents('handtrix')
console.log('Overall Events', events.length)
console.log('PullRequests', events.filter(event => event.type === 'PullRequestEvent').length)
console.log('Forks', events.filter(event => event.type === 'ForkEvent').length)
console.log('Issues', events.filter(event => event.type === 'IssuesEvent').length)
console.log('Reviews', events.filter(event => event.type === 'PullRequestReviewEvent').length)
})()
```
Documentation
-------------
* <https://developer.github.com/v3/activity/events/>
* <https://developer.github.com/v3/activity/events/types/>
* <https://www.npmjs.com/package/got> |
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | Answer for 2019: use the [GitHub API V4](https://developer.github.com/v4/object/contributionscollection/).
First, go to GitHub to apply for a token: <https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line>. In step 7, under scopes, select only `read:user`.
**cUrl**
```
curl -H "Authorization: bearer token" -X POST -d '{"query":"query {\n user(login: \"MeiK2333\") {\n name\n contributionsCollection {\n contributionCalendar {\n colors\n totalContributions\n weeks {\n contributionDays {\n color\n contributionCount\n date\n weekday\n }\n firstDay\n }\n }\n }\n }\n}"}' https://api.github.com/graphql
```
**JavaScript**
```js
async function getContributions(token, username) {
const headers = {
'Authorization': `bearer ${token}`,
}
const body = {
"query": `query {
user(login: "${username}") {
name
contributionsCollection {
contributionCalendar {
colors
totalContributions
weeks {
contributionDays {
color
contributionCount
date
weekday
}
firstDay
}
}
}
}
}`
}
const response = await fetch('https://api.github.com/graphql', { method: 'POST', body: JSON.stringify(body), headers: headers })
const data = await response.json()
return data
}
const data = await getContributions('token', 'MeiK2333')
console.log(data)
``` | To load the SVG with all contributions, you can use this code in your HTML page:
```
<img src="https://ghchart.rshah.org/username" alt="Name Your Github chart">
```
[![enter image description here](https://i.stack.imgur.com/3qJrn.png)](https://i.stack.imgur.com/3qJrn.png)
To customize the color, you can do this:
```
<img src="https://ghchart.rshah.org/HEXCOLORCODE/username" alt="Name Your Github chart">
```
HEXCOLORCODE = **17A2B8**
[![enter image description here](https://i.stack.imgur.com/RF6XZ.png)](https://i.stack.imgur.com/RF6XZ.png) |
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | You can use the [github events api](https://developer.github.com/v3/activity/events/) for that:
Example (node.js)
-----------------
```js
const got = require('got')
async function getEvents(username) {
const events = []
let page = 1
do {
const url = `https://api.github.com/users/${username}/events?page=${page}`
var { body } = await got(url, {
json: true
})
page++
events.push(...body)
} while (body.length) // keep paging until an empty page is returned
return events
}
(async () => {
const events = await getEvents('handtrix')
console.log('Overall Events', events.length)
console.log('PullRequests', events.filter(event => event.type === 'PullRequestEvent').length)
console.log('Forks', events.filter(event => event.type === 'ForkEvent').length)
console.log('Issues', events.filter(event => event.type === 'IssuesEvent').length)
console.log('Reviews', events.filter(event => event.type === 'PullRequestReviewEvent').length)
})()
```
Example (javascript)
--------------------
```js
async function getEvents(username) {
const events = []
let page = 1
do {
const url = `https://api.github.com/users/${username}/events?page=${page}`
var body = await fetch(url).then(res => res.json())
page++
events.push(...body)
} while (body.length) // keep paging until an empty page is returned
return events
}
(async () => {
const events = await getEvents('handtrix')
console.log('Overall Events', events.length)
console.log('PullRequests', events.filter(event => event.type === 'PullRequestEvent').length)
console.log('Forks', events.filter(event => event.type === 'ForkEvent').length)
console.log('Issues', events.filter(event => event.type === 'IssuesEvent').length)
console.log('Reviews', events.filter(event => event.type === 'PullRequestReviewEvent').length)
})()
```
Documentation
-------------
* <https://developer.github.com/v3/activity/events/>
* <https://developer.github.com/v3/activity/events/types/>
* <https://www.npmjs.com/package/got> | If you would prefer a `npm` package, you can try this.
<https://github.com/SammyRobensParadise/github-contributions-counter#readme>
You can get all-time contributions or contributions broken down by year.
18,262,288 | I am trying to extract the below info for any user from GitHub.
![Example user contributions](https://i.stack.imgur.com/IGsub.png)
Is there a way/API exposed in [GitHub REST API](http://developer.github.com/v3/) where we can get this information directly? | 2013/08/15 | [
"https://Stackoverflow.com/questions/18262288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/662425/"
] | Answer for 2019: use the [GitHub API V4](https://developer.github.com/v4/object/contributionscollection/).
First, go to GitHub to apply for a token: <https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line>. In step 7, under scopes, select only `read:user`.
**cUrl**
```
curl -H "Authorization: bearer token" -X POST -d '{"query":"query {\n user(login: \"MeiK2333\") {\n name\n contributionsCollection {\n contributionCalendar {\n colors\n totalContributions\n weeks {\n contributionDays {\n color\n contributionCount\n date\n weekday\n }\n firstDay\n }\n }\n }\n }\n}"}' https://api.github.com/graphql
```
**JavaScript**
```js
async function getContributions(token, username) {
const headers = {
'Authorization': `bearer ${token}`,
}
const body = {
"query": `query {
user(login: "${username}") {
name
contributionsCollection {
contributionCalendar {
colors
totalContributions
weeks {
contributionDays {
color
contributionCount
date
weekday
}
firstDay
}
}
}
}
}`
}
const response = await fetch('https://api.github.com/graphql', { method: 'POST', body: JSON.stringify(body), headers: headers })
const data = await response.json()
return data
}
const data = await getContributions('token', 'MeiK2333')
console.log(data)
``` | I believe you can see count of contributions in a timeframe as well as other individual contributor analytics within Code Climate’s Velocity git analytics, which you may request access to here: <https://go.codeclimate.com/velocity-free-for-teams> |