How can I validate the password and userID inputs on login.jsp against
my User table?
You can use the spring-security module, which is very powerful for authenticating and authorizing user requests (as in your web application); you can find an example here.
The spring-security module provides various ways to configure user details (in-memory, database, LDAP, etc.), but for your case you need to go for JDBC authentication using AuthenticationManagerBuilder.jdbcAuthentication().
The approach is to provide a configuration class extending WebSecurityConfigurerAdapter, defining a configAuthentication() method (for the AuthenticationManagerBuilder) and overriding its configure() method.
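A rough sketch of such a configuration class (the table and column names in the queries are assumptions; adapt them to your User table):
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private DataSource dataSource; // points at the database holding your User table

    @Autowired
    public void configAuthentication(AuthenticationManagerBuilder auth) throws Exception {
        // JDBC authentication: Spring Security runs these queries for you
        auth.jdbcAuthentication()
            .dataSource(dataSource)
            .usersByUsernameQuery(
                "select username, password, enabled from users where username = ?")
            .authoritiesByUsernameQuery(
                "select username, role from user_roles where username = ?");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // send unauthenticated requests to your login.jsp
        http.authorizeRequests().anyRequest().authenticated()
            .and().formLogin().loginPage("/login.jsp").permitAll();
    }
}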
Do I have to use JDBC Resultset or there is some other better way to
do validate the user inputs?
No, you don't need to handle a JDBC ResultSet directly. With spring-security, you just need to provide the DataSource (database access details) and an SQL query like select username,password from users where username=?.
You can refer here for configuring JDBC authentication. |
According to my knowledge and my findings on this, both POST /person/{id} and POST /cool.person/{id} are correct.
I think the issue is in your endpoint: your endpoint is not giving you permission to add another entry via a POST method.
I followed your path and I couldn't reproduce your situation, but I found that you need permission from the endpoint to put another entry there.
I'll attach my synapse file and a snapshot of the response
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns="http://ws.apache.org/ns/synapse"
name="admin--UrlTest"
context="/paternType/1.0"
version="1.0"
version-type="context">
<resource methods="POST" url-mapping="/persons.list" faultSequence="fault">
<inSequence>
<property name="api.ut.backendRequestTime"
expression="get-property('SYSTEM_TIME')"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--UrlTest_APIproductionEndpoint_0">
<http uri-template="http://jsonplaceholder.typicode.com/posts?"/>
<property name="ENDPOINT_ADDRESS"
value="http://jsonplaceholder.typicode.com/posts?"/>
</endpoint>
</send>
</then>
<else>
<send>
<endpoint name="admin--UrlTest_APIsandboxEndpoint_0">
<http uri-template="http://jsonplaceholder.typicode.com/posts?"/>
<property name="ENDPOINT_ADDRESS"
value="http://jsonplaceholder.typicode.com/posts?"/>
</endpoint>
</send>
</else>
</filter>
</inSequence>
<outSequence>
<class name="org.wso2.carbon.apimgt.usage.publisher.APIMgtResponseHandler"/>
<send/>
</outSequence>
</resource>
<resource methods="POST" url-mapping="/persons" faultSequence="fault">
<inSequence>
<property name="api.ut.backendRequestTime"
expression="get-property('SYSTEM_TIME')"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--UrlTest_APIproductionEndpoint_1">
<http uri-template="http://jsonplaceholder.typicode.com/posts?"/>
<property name="ENDPOINT_ADDRESS"
value="http://jsonplaceholder.typicode.com/posts?"/>
</endpoint>
</send>
</then>
<else>
<send>
<endpoint name="admin--UrlTest_APIsandboxEndpoint_1">
<http uri-template="http://jsonplaceholder.typicode.com/posts?"/>
<property name="ENDPOINT_ADDRESS"
value="http://jsonplaceholder.typicode.com/posts?"/>
</endpoint>
</send>
</else>
</filter>
</inSequence>
<outSequence>
<class name="org.wso2.carbon.apimgt.usage.publisher.APIMgtResponseHandler"/>
<send/>
</outSequence>
</resource>
<resource methods="GET" url-mapping="/personlist" faultSequence="fault">
<inSequence>
<property name="api.ut.backendRequestTime"
expression="get-property('SYSTEM_TIME')"/>
<filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
<then>
<send>
<endpoint name="admin--UrlTest_APIproductionEndpoint_2">
<http uri-template="http://jsonplaceholder.typicode.com/posts?"/>
<property name="ENDPOINT_ADDRESS"
value="http://jsonplaceholder.typicode.com/posts?"/>
</endpoint>
</send>
</then>
<else>
<send>
<endpoint name="admin--UrlTest_APIsandboxEndpoint_2">
<http uri-template="http://jsonplaceholder.typicode.com/posts?"/>
<property name="ENDPOINT_ADDRESS"
value="http://jsonplaceholder.typicode.com/posts?"/>
</endpoint>
</send>
</else>
</filter>
</inSequence>
<outSequence>
<class name="org.wso2.carbon.apimgt.usage.publisher.APIMgtResponseHandler"/>
<send/>
</outSequence>
</resource>
<handlers>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.common.APIMgtLatencyStatsHandler"/>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.security.CORSRequestHandler">
<property name="apiImplementationType" value="ENDPOINT"/>
</handler>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler"/>
<handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler"/>
<handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler">
<property name="configKey" value="gov:/apimgt/statistics/ga-config.xml"/>
</handler>
<handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>
</handlers>
</api>
Jeewana. |
There isn't an official example for doing exactly this, but it's altogether possible.
If you wish to base the logic around trying to authenticate with a new device (i.e. only 5 devices can stream music for this account), Cognito includes a newDeviceUsed boolean in the input your Lambda hook will get (see docs).
On top of that, you'd need to have some credentials in your Lambda hook with the authority to call AdminListDevices. Based on whatever your logic dictates (perhaps if newDeviceUsed is true), you'd then call that API. It's worth noting that AdminListDevices will return both remembered and not-remembered devices, so you might want to adjust your logic as needed.
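A rough sketch of that idea as a Post Authentication trigger (Node.js, AWS SDK v2; the 5-device limit and the error message are illustrative assumptions):
// Post Authentication Lambda trigger (sketch)
const AWS = require('aws-sdk');
const cognito = new AWS.CognitoIdentityServiceProvider();

exports.handler = async (event) => {
    if (event.request.newDeviceUsed) {
        const resp = await cognito.adminListDevices({
            UserPoolId: event.userPoolId,
            Username: event.userName,
            Limit: 60
        }).promise();
        // Devices contains both remembered and not-remembered entries; filter as needed
        const deviceCount = (resp.Devices || []).length;
        if (deviceCount > 5) {
            throw new Error('Too many devices registered for this account');
        }
    }
    return event; // returning the event lets the authentication flow continue
};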
Does that make sense?
EDIT:
More details on how Lambda handles credentials are available in their docs. How exactly you want to call adminListDevices will vary quite a bit based on your logic and language of choice, but with the credentials having the power to do so, it should just be a normal call. See how SES is called in the Cognito developer guide examples. |
If you mean reading the secondary location, I think Azure ARM management could help us do this. I have tried it locally and got the following result:
[Update]
Here is my test code:
public async Task<string> GetToken()
{
AuthenticationResult result = null;
string test;
AuthenticationContext authContext = new AuthenticationContext(adInfo.AuthUrl + adInfo.Telnant);
ClientCredential cc = new ClientCredential(adInfo.ClientId, adInfo.ClientSecret);
try
{
result = await authContext.AcquireTokenAsync(adInfo.Resource,cc);
test = result.AccessToken;
return test;
}
catch (AdalException ex)
{
return ex.Message;
}
}
public async Task GetSQLInfo()
{
string token = await GetToken();
var sqlclient = new SqlManagementClient(new TokenCloudCredentials(adApplication.Subscription, token));
var data = await sqlclient.Databases.GetAsync("jatestgroup", "jaserver", "jasql");
}
Here are my classes for adInfo and adApplication:
public class AdInfo
{
[JsonProperty(PropertyName = "clientid")]
public string ClientId { get; set; }
[JsonProperty(PropertyName = "clientsecret")]
public string ClientSecret { get; set; }
[JsonProperty(PropertyName = "returnurl")]
public string ReturnUrl { get; set; }
[JsonProperty(PropertyName = "telnantid")]
public string Telnant { get; set; }
[JsonProperty(PropertyName = "authurl")]
public string AuthUrl { get; set; }
[JsonProperty(PropertyName = "resource")]
public string Resource { get; set; }
}
public class AdApplication
{
[JsonProperty(PropertyName = "ARMTemplate")]
public AdInfo Application { get; set; }
[JsonProperty(PropertyName = "subscription")]
public string Subscription { get; set; }
}
My Json Settings:
{
"ARMTemplate": {
"clientid": "****",
"clientsecret": "****",
"returnurl": "http://localhost:20190",
"telnantid": "****",
"authurl": "https://login.microsoftonline.com/",
"resource": "https://management.core.windows.net/"
},
"subscription": "***"
}
Since this issue is more related to the auth failure, I would suggest you create a new thread and give us more info if my code does not help you. |
I was able to solve this issue by reading the documentation for the class _RSAobj (https://www.dlitz.net/software/pycrypto/api/current/Crypto.PublicKey.RSA._RSAobj-class.html#encrypt)
Attention: this function performs the plain, primitive RSA encryption
(textbook). In real applications, you always need to use proper
cryptographic padding, and you should not directly encrypt data with
this method. Failure to do so may lead to security vulnerabilities. It
is recommended to use modules Crypto.Cipher.PKCS1_OAEP or
Crypto.Cipher.PKCS1_v1_5 instead.
Looks like cryptographic padding is not used by default in the RSA module for Python, hence the difference.
Modified Python Script:
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_v1_5
import sys, time, signal, socket, requests, re, base64
pubkey = "9B596422997705A8805F25232C252B72C6B68752446A30BF9117783FE094F8559CA4A7AA5DBECAEC163596E96CD9577BDF232EF2F45DC474458BDA8EC272311924B8A176896E690135323D16800CFB9661352737FEDA9FB8DD44B6025EA8037CBA136DAE2DC0061B2C66A6F564E2C9E9579DAFD9BFF09ACF3B6E39BF363370F4A8AD1F88E3AE9A7F2C1B1C75AC00EAE308F57EC9FBDA244FC8B0D1234677A6BEE228FEE00BF653E8E010D0E59A30D0A1194298E052399A62E6FBD490CF03E16A1166F901E996D53DE776169B4EE8705E1512CCB69F8086C66213667A070A65DA28AF18FC8FC01D37158706A0B952D0963A9A4E4A6451758710C9ADD1245AB057389535AB0FA1D363A29ED8AE797D1EE958352E55D4AD4565C826E9EF12FA53AE443418FD704091039E190690FD55BF32E7E8C7D7668B8F0550C5E650C7D021F63A5055B7C1AEE6A669079494C4B964C6EA7D131FA1662CF5F5C83721D6F218038262E9DDFE236015EE331A8556D934F405B4203359EE055EA42BE831614919539A183C1C6AD8002E7E58E0C2BCA8462ADBF3916C62857F8099E57C45D85042E99A56630DF545D10DD338410D294E968A5640F11C7485651B246C5E7CA028A5368A0A74E040B08DF84C8676E568FC12266D54BA716672B05E0AA4EE40C64B358567C18791FD29ABA19EACA4142E2856C6E1988E2028703B3E283FA12C8E492FDB"
foobar = "foobar"
pubkey_int = long(pubkey,16)
pub_exp = 65537L
pubkey_obj = RSA.construct((pubkey_int, pub_exp))
cipher = PKCS1_v1_5.new(pubkey_obj)
encrypted_data = cipher.encrypt(foobar)
encrypted_data_b64 = base64.b64encode(encrypted_data)
print encrypted_data_b64
|
Please consider the following:
I guess your main concern should be sending batch emails. If you don't do it right you might have problems. To avoid those problems, make sure to follow the Bulk Senders Guidelines here: https://support.google.com/a/answer/81126
Another factor to take into account is email authentication. When using SMTP, make sure all email sent will pass SPF and DKIM to prevent being marked as spam or, worse, getting emails rejected. If you use the Gmail API, then all you need to do is make sure you set up SPF by following the steps here https://support.google.com/a/answer/178723?hl=en and DKIM by following the steps here https://support.google.com/a/answer/174126?hl=en
As for the Gmail API quotas, you can use 1,000,000,000 units per day, so I don't think that will be a problem.
The benefit of using SMTP is that you can use SMTP relay (https://support.google.com/a/answer/2956491), which gives you a much higher limit when sending emails, in case the Bulk Senders Guidelines won't work for you.
In summary, if all you are looking for is to send batch emails, then I guess going with SMTP is easier. Hope this helps! |
This enhances security by making the values of these "secrets" (and other password/security/token related values) only known within the production environment, as opposed to anyplace the repository is checked out. Each environment (for example, local development, development server, staging/demo server, production environment, etc) should have its own separate configuration of authentication or security parameters, if possible, and these should be configured within the local environment, and not stored in the application's code itself.
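For example, a minimal sketch (variable names are illustrative) of reading such values from the environment rather than from anything committed to the repository:
import os

# Fails fast if the deployment environment hasn't provided the value,
# instead of silently falling back to something baked into the repository.
DATABASE_PASSWORD = os.environ["DATABASE_PASSWORD"]
API_SIGNING_KEY = os.environ["API_SIGNING_KEY"]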
Even if your repository is private, it is still not safe. Developers come and go. Operations people come and go. Project management comes and goes. You might have some third-party contractors with repo access. A big enough project tends to have lots of actors working on it over time, and it isn't necessary for any of them to know the production configuration values (except perhaps the ops folk). Even with a private repository, there is still a security risk in exposing secret keys/values to anyone with repo access. |
Your problem with using -like is a missing wildcard.
I guess the Outbound folder shall contain the modified csv?
These endless command lines aren't necessary, either in a script or in the shell.
To me this is much more readable (and it works):
Function getTag ([string]$vuln){
if ($vuln -like "adobe-apsb-*")
{return "Adobe Flash"}
if ($vuln -like "jre-*")
{return "Java"}
else
{return "No vulnerability code available"}
}
$in = "D:\Splunk\Inbound"
$out = "D:\Splunk\Outbound\"
Get-ChildItem $in -Filter *.csv |
ForEach-Object{
Import-Csv $_.FullName |
Select-Object *,@{
Name='Tag';
Expression={getTag $_.'Vulnerability ID'}
} | Export-Csv -path $($out+$_.Name) -notype
}
> tree . /f
│
├───Inbound
│ test.csv
│
└───Outbound
test.csv
> gc .\Inbound\test.csv
"A","Vulnerability ID","C"
0,"adobe-apsb-blah",2
5,"jre-trash",9
> gc .\Outbound\test.csv
"A","Vulnerability ID","C","Tag"
"0","adobe-apsb-blah","2","Adobe Flash"
"5","jre-trash","9","Java"
|
Oh silly willy.
You're getting pounded with downvotes because it's a simple question and answer.
If you're doing this in a controller, all you're doing is making a string in c#.
A string doesn't care about that plus sign you have. It thinks it's part of the string.
Also, single quote up there has no matching quote and is part of the string.
There's multiple ways to fix this. You can go basic, formatted, or interpolation.
You tried basic, which would correctly be:
"<a href=\"http://localhost:53008/authentication/confirmhire?Cid=" + confirmationId + "\">Here</a>";
Notice it's broken up.
A better way would be a formatted string as such:
string.Format("<a href=\"http://localhost:53008/authentication/confirmhire?Cid={0}\">Here</a>", confirmationId);
And even better interpolation! :D
$"<a href='http://localhost:53008/authentication/confirmhire?Cid={confirmationId}'>Here</a>"
Choose your poison; all are correct, though the last one is technically the best. |
You can try this:
var pfx = '/var/lib/openshift/555dd1415973ca1660000085/app-root/runtime/repo/pfx.p12';
var deviceId = "<38873D3B 0D61A965 C1323D6C 0A9F2866 D1BB50A3 64F199E5 483862A6 7F02049C>"; // <----- THIS HERE
var apnagent = require('apnagent')
var agent = new apnagent.Agent(deviceId);
agent.set('pfx file', pfx);
// our credentials were for development
agent.enable('sandbox');
console.log('LOG1');
console.log(agent);
agent.connect(function (err) {
console.log('LOG2');
// gracefully handle auth problems
if (err && err.name === 'GatewayAuthorizationError') {
console.log('Authentication Error: %s', err.message);
process.exit(1);
}
// handle any other err (not likely)
else if (err) {
throw err;
}
// it worked!
var env = agent.enabled('sandbox')
? 'sandbox'
: 'production';
console.log('apnagent [%s] gateway connected', env);
});
The explanation is this: module.exports is the common module export syntax. This way you avoid exporting a module and importing it, by manually hardcoding the deviceId in your code. |
TL;DR
Consider what would happen if your username on GitHub was foo while your work email address was bar@example.com. If Git or GitHub enforced a direct mapping between identity and email address, how would you expect Git to handle this portably and reliably?
How about if your name was John Q. Public, your username on localhost was john-public, and your GitHub account was jpublic? How should Git handle these differences across systems?
Git can't, so Git doesn't. Instead, Git treats commit data and authentication as separate things.
Don't Confuse Commit Data with Credentials
Data stored in Git commit objects and the credentials you're presenting to GitHub aren't the same things at all. You're thinking that your username or email address are your identity in Git, but they actually have nothing at all to do with authentication or authorization within Git or GitHub. The credentials you're presenting to GitHub are your GitHub username and password, or your GitHub username and SSH key, and any relationship to your local username or email address is purely coincidental.
If you ever use Git on an NFS-mounted share, work for different companies over the life of a project, work for more than one company at a time, or need to keep work and non-work projects logically separated, you learn to appreciate that Git's email attribution mechanisms are both flexible and portable.
Remember that Git is a content tracker, not an authentication system. Most of the authentication you do with third parties like GitHub is actually handled outside of Git using the SSH or HTTPS protocols, neither of which cares about fields in your commit objects.
Username and Email Address Aren't Identities
One of them is that I need to explicitly configure user.name and user.email parameters (or whatever the proper term is, it doesn't seem to be mentioned in the docs, like many other things). Maybe it makes sense because I don't need to provide credentials when I commit locally. But it actually does ask for my credentials when I push my changes and just accepts whatever user.name value was set without checking whether it matches my login. Then GitHub shows my changes under someone else's name, which is very confusing.
You're conflating many very different issues. Some of the more obvious ones are listed below, but there are certainly others.
Git commits track GIT_AUTHOR_NAME and GIT_COMMITTER_NAME as part of the commit object. The committer and the author aren't necessarily the same, and being able to apply patches to the code base on behalf of someone else is considered a design feature.
GIT_AUTHOR_EMAIL and GIT_COMMITTER_EMAIL can vary from system to system, and even from project to project since Git supports per-project configuration files. This email information is attached to the commit and may be used by git-format-patch, but it doesn't intrinsically have anything to do with SSH or HTTP(S) authentication.
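For example (values are illustrative), this identity information is just local configuration and can differ per machine or per repository:
# global identity used by default on this machine
git config --global user.name "John Q. Public"
git config --global user.email "bar@example.com"

# override the email for a single project only
cd ~/src/some-project
git config user.email "jpublic@users.noreply.github.com"
None of these values are checked when you push; authentication to GitHub happens separately over SSH or HTTPS.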
GitHub assigns changes to users based on email addresses. However, this is a user-facing implementation decision by GitHub; Git itself doesn't conflate commit objects with authentication. At the command line, you can work a lot of magic with ~/.mailmap.
GitHub allows you to add multiple email addresses to your account for tracking commits that belong to you, and also allows you to use a private address if you like.
GitHub uses a variety of authentication mechanisms, but in general users push or pull using SSH or HTTPS. You use a username and SSH key for the former, and a username and password for the latter. Usernames need not match on both the local and remote systems.
Other authentication mechanisms like SMTP have their own configuration values separate from Git's user.name or user.email.
In general, Git's decision to keep authentication separate from author or committer details is a good one for portability. You can have different usernames or email addresses on different systems or projects, and your identity information is relatively portable when kept in ~/.gitconfig, $GIT_DIR/.git/config, or the appropriate environment variables. |
You can log in using 2 ways.
By using credentials within your code.
By keeping your credentials in a file.
By using credentials within your code:
// credentials object identifying user for authentication
// user must have AWSConnector and AmazonS3FullAccess for
// this example to work
AWSCredentials credentials = new BasicAWSCredentials("YourAccessKeyID", "YourSecretAccessKey");
// create a client connection based on credentials
AmazonS3 s3client = new AmazonS3Client(credentials);
Keep your credentials in a file:
/*
* Create your credentials file at ~/.aws/credentials (C:\Users\USER_NAME\.aws\credentials for Windows users)
* and save the following lines after replacing the underlined values with your own.
*
* [default]
* aws_access_key_id = YOUR_ACCESS_KEY_ID
* aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
*/
AWSCredentials credentials = new ProfileCredentialsProvider().getCredentials();
AmazonS3 s3 = new AmazonS3Client(credentials);
For crud operation, you can go through this tutorial: https://github.com/aws/aws-sdk-java/blob/master/src/samples/AmazonS3/S3Sample.java
// Create a bucket
System.out.println("Creating bucket " + bucketName + "\n");
s3.createBucket(bucketName);
/*
* List the buckets in your account
*/
System.out.println("Listing buckets");
for (Bucket bucket : s3.listBuckets()) {
System.out.println(" - " + bucket.getName());
}
/*
* Delete an object - Unless versioning has been turned on for your bucket,
* there is no way to undelete an object, so use caution when deleting objects.
*/
System.out.println("Deleting an object\n");
s3.deleteObject(bucketName, key);
/*
* Delete a bucket - A bucket must be completely empty before it can be
* deleted, so remember to delete any objects from your buckets before
* you try to delete them.
*/
System.out.println("Deleting bucket " + bucketName + "\n");
s3.deleteBucket(bucketName);
For giving permissions in a specific bucket, please check the picture:
Add Permission:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::riz-bucket001/*"
}
]
}
N.B.: This bucket policy makes everything in the bucket publicly readable, so be careful with it. If you use it for study purposes, that's OK, but don't use it for business purposes.
Thanks a lot Michael - sqlbot
You can check more policies here as your necessity: Specifying Permissions in a Policy |
Your implementation is for authorization, not authentication. I think that instead of creating an authorization policy, writing custom authentication middleware would be the right way to go for your case.
First see how to implement custom authentication: Simple token based authentication/authorization in asp.net core for Mongodb datastore
To implement the above for your case, HandleAuthenticateAsync should look something like this:
protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
{
AuthenticateResult result = null;
var principal = GetPrincipalFromSession();
if(principal != null)
{
result = AuthenticateResult.Success(new AuthenticationTicket(principal,
new AuthenticationProperties(), Options.AuthenticationScheme));
}
else
{
result = AuthenticateResult.Skip();
}
return result;
}
Update based on comment:
protected override async Task<bool> HandleUnauthorizedAsync(ChallengeContext context)
{
Response.Redirect(Options.LoginPath);// you need to define LoginPath
return true;
}
Also, you should store the principal in the session when the user signs in. |
Polling is indeed not supported in MobileFirst Foundation 8.0.
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/upgrading/migrating-push-notifications/
I don't have any official alternative, but since polling is the checking of some backend for new content and, if there is any, having a notification dispatched, you can still create a service of your own to check your backend for a new "record" or otherwise new content, and if there is, construct a JSON payload for that notification and send it.
In v8.0 you have multiple REST endpoints you could use together with Confidential clients to send it.
http://www.ibm.com/support/knowledgecenter/SSHS8R_8.0.0/com.ibm.worklight.apiref.doc/rest_runtime/c_restapi_runtime.html
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/authentication-and-security/confidential-clients/
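As an illustration only, here is a rough Node.js sketch of such a service; the endpoint path, payload shape, and token handling are assumptions that must be checked against the REST API reference and the confidential-clients tutorial linked above:
// check-and-notify.js (sketch; requires Node 18+ for the global fetch)
const SERVER = 'https://your-mfp-server:9080';      // placeholder
const APP_ID = 'com.example.myapp';                 // placeholder

async function backendHasNewRecord() {
    // your own logic: query your backend and return true when new content exists
    return false;
}

async function sendNotification(accessToken, text) {
    // endpoint and payload shape to be verified against the push REST API docs
    await fetch(`${SERVER}/imfpush/v1/apps/${APP_ID}/messages`, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${accessToken}`
        },
        body: JSON.stringify({ message: { alert: text } })
    });
}

setInterval(async () => {
    if (await backendHasNewRecord()) {
        // the access token would be obtained via a confidential client (see tutorial above)
        await sendNotification(process.env.MFP_ACCESS_TOKEN, 'New record available');
    }
}, 60 * 1000);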
You can also take a look at the following way of constructing a mechanism to send notifications using Node.js: https://mobilefirstplatform.ibmcloud.com/blog/2016/10/18/using-mff-8-push-service-rest-api-in-a-nodejs-based-server/ |
You are confusing two separate concepts, viz:
Authentication
Provisioning
ADFS only does the former. You need an Identity Manager (IM) to do the latter.
So:
User should be able to authenticate once, and based on the credentials user would be redirected to the required application
I'll concentrate on the Microsoft world. ADFS and Azure AD can both do this. User --> application --> IDP - authenticates --> back to application
A central management console should be available for administrators to simplify role assignments as well as grant access to the various applications as required
IM functionality. AAD could do the group assignment but doesn't really have workflows. You can use "Active Directory Users Control" in Windows Server to manually edit AD attributes for use by ADFS.
Users can register for certain applications but certain sensitive applications require administrator approval before the user can successfully log in.
IM - needs workflows
This SSO should also secure an API with some sort of permission logic e.g. only supervisor roles can delete a record
This is both. ADFS 4.0 (Server 2016) can protect web API as can AAD. Deleting roles is IM as above.
Users should be able to register with OAuth providers such as Facebook, Twitter, Google & Windows live.
Microsoft has limited social interaction. You can add some social providers using AAD. I use Auth0 and federate as it has tons of social providers. Azure B2C may be of use here.
The SSO provider should be simple to implement into multitude of platforms such as Windows Apps, Web Apps, Mobile & services
For web apps, you can use SAML, WS-Fed, OpenID Connect & OAuth.
For Windows Apps, you can use OpenID Connect & OAuth.
For Mobile & services, you can use OpenID Connect & OAuth. (Note there are four flows to cater for different scenarios).
ADFS 4.0 (Server 2016) and AAD can support all the above. |
When you authenticate a user in Devise, it takes the plaintext password, combines it with a pepper, and passes it through bcrypt. If the encrypted password matches the value in the DB, the record is returned.
The pepper is based on your rails app secret. So unless the two apps have the exact same secret then authentication will fail even if you have the correct encrypted password. Remember that the input to bcrypt has to be identical to give the same result. That means the same plaintext, salt, pepper and number of stretches.
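Roughly what happens under the hood, as a simplified sketch (not Devise's actual code; the pepper values are made up):
require 'bcrypt'

pepper   = 'app-one-secret-pepper'   # derived from app one's secret
password = 's3cret'

stored = BCrypt::Password.create("#{password}#{pepper}")   # what ends up in the DB

BCrypt::Password.new(stored).is_password?("#{password}#{pepper}")
# => true  (same plaintext, same pepper)
BCrypt::Password.new(stored).is_password?("#{password}app-two-secret-pepper")
# => false (same plaintext, different pepper -- authentication fails)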
You're correct in that sharing the passwords back and forth is not a solid solution. You are also correct in that using UUIDs instead of a numerical auto-incrementing id is part of the solution.
What you can do is implement your own authentication provider. The Doorkeeper gem makes it pretty easy to setup your own OAuth provider (or you can use Auth0 if using an external service is acceptable).
The web app would use Devise + OmniAuth to authenticate users against the authentication provider and use the credentials returned to identify the user in the web application.
For the API application I would use Knock for JWT authentication, where the API server proxies to your authentication server via the oauth gem.
However, you should consider at this point whether your web and API apps really should be running on separate DBs. Keeping writes in sync across both DBs can be quite the task, and you should ask yourself if you really need it at this stage. |
JOSE is a framework, not a standard. JSON Web Signature (JWS) is a standard defined in RFC 7515, and JSON Web Token (JWT) is a compact token format using a JWS signature, defined in RFC 7519.
Is there any standard or tool (at least for java/javascript) to guaranty that conversation of JSON to string always represent in unique format?
Yes, JWS defines that JWS Payload is encoded as BASE64URL(UTF8(JWS Payload))
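For reference, the JWS Compact Serialization defined in RFC 7515 fixes the exact bytes that get signed:
BASE64URL(UTF8(JWS Protected Header)) || '.' || BASE64URL(JWS Payload) || '.' || BASE64URL(JWS Signature)
so the payload travels as an opaque base64url string and no JSON canonicalization is required.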
Is it possible to omit the second part of JWT which contains JSON data and create in using an arbitrary JSON creator?
You can omit the second part of the JWT (the payload), but then it won't be a JWT. I think you do not need JWT (the purpose of JWT is to exchange authentication tokens) but rather to apply a digital signature to your document, and JWS is suitable for this.
But, with several signers you will need an additional layer of digital signature capabilities. For example to include the signer's identity, relate the content signed with the signer or set the order of signatures.
Unfortunately there is no standard to do this, like XAdES, PAdES or CAdES for XML, PDF and binary documents. |
What Fabio has suggested is valid and should be working. You might have other issues preventing it from functioning.
Can you check it by simplifying App.authenticate() like this?
Just to rule out possible errors in the underlying layer.
authenticate() {
log.info("login successful");
}
Another guess:
Is ./services/log a static object? Assuming it is not, injection might be missing for it.
Since you are using TypeScript, autoinject might help you to avoid similar pitfalls.
import {autoinject, computedFrom} from "aurelia-framework";
import {AuthService, AuthenticateStep} from 'aurelia-authentication';
import {log} from "./services/log";
@autoinject()
export class App {
authService: AuthService;
logger: log;
constructor(auth: AuthService, logger: log) {
this.authService = auth;
this.logger = logger;
}
}
What is the proper setup to get a method in the App VM to bind in a subview?
I know 3 possible solutions to achieve that (there may be more). I've created a gist showing these in action.
https://gist.run/?id=b9e8fee11e338e08bc5da7d4df68e2db
Use the dropdown to switch between navbar implementations. :)
1. HTML Only Element + bindables
This is your current scenario. See nav-bar-function.html in the demo.
2. <compose> + inheritance
Composition can be useful for some dynamic scenarios, but try not to overuse it. [Blog post]
When no model is provided, composed element inherits parent's viewmodel context.
Normally I would not recommend using it in your case. However, considering your issues with Solution 1., you could use this option for debug purposes. If you get the same error with <compose> as well, you may have a problem with App.authenticate() function itself.
Try it out in your solution by replacing
<nav-bar router.bind="router" authenticate.call="authenticate()"></nav-bar>
with
<compose view="./nav-bar.html"></compose>
This way, nav-bar.html behaves as a part of App component. See nav-bar-compose.html in the demo.
3. Custom Element + EventAggregator
You can use pub/sub communication between components* to avoid tight-coupling. Related SO answer: [Accessing Custom Component Data/Methods], and [Docs]
*components: custom elements with viewmodel classes
I hope it will help! :) |
I'd like to know how to return a real token, which can be used to authenticate a user instead of sending the username:password every time.
Symfony Guard Component
Guard aims at simplifying the authentication subsystem.
Before Guard, setting up custom authentication was a lot more work. You needed to create several parts/classes and make them work together. It's flexible, you can create any authentication system you want, but it needs some effort. With Guard, it becomes a lot easier, while maintaining all flexibility.
This is not the component you're looking for.
Symfony Security Token
When reading the word "token" in documentation about the Guard Component, what's referred to is an implementation of the TokenInterface. Symfony uses these implementations to keep track of the authentication state. These implementations never leave your application, it's an internal thing.
This is not the token you're looking for.
JSON Web Token
The "token" you're talking about is some pease of information a client can use to authenticate with. This can be a random string like the "access token" of OAuth 2.0 protocol, or a self-contained and signed set of information, like JSON Web Tokens (JWT).
IMHO JWT would be the most future-proof token at the moment. The Anatomy of a JSON Web Token is a good read to get familiar with JWT.
There are several bundles out there that can easily integrate JWT into your Symfony project. LexikJWTAuthenticationBundle is the most popular one right now. I suggest you have a look :) |
Goal 1
Override the has_perm method of the ModelBackend class, from my backends.py file:
import logging
from difflib import get_close_matches
from django.conf import settings
from django.contrib.auth.backends import ModelBackend
class ModelBackendHelper(ModelBackend):
def has_perm(self, user_obj, perm, obj=None):
if not user_obj.is_active:
return False
else:
obj_perms = self.get_all_permissions(user_obj, obj)
allowed = perm in obj_perms
if not allowed:
if settings.DEBUG:
similar = get_close_matches(perm, obj_perms)
if similar:
logging.warn("{0} not found, but is similar to: {1}".format(perm, ','.join(similar)))
return allowed
How it works:
It has the same logic as has_perm, but if settings.DEBUG is True and similar versions of perm are found, it outputs a warning log message of level WARN:
WARNING:root:myapp.view_tasks not found, but is similar to: myapp.view_task
Change the value of AUTHENTICATION_BACKENDS in settings.py:
AUTHENTICATION_BACKENDS = ['myapp.backends.ModelBackendHelper']
This can be used both in production and development environments, but personally I wouldn't include it in a production site; I hope that by the time everything goes to production, permissions are consolidated.
Goal 2
Permissions belong to models and to keep this DRY I'm reusing the permissions defined in Meta:
from django.db import models
class Task(models.Model):
name = models.CharField(max_length=30)
description = models.TextField()
class Meta:
permissions = (
("view_task", "Can see available tasks"),
)
def _get_perm(model, perm):
for p in model._meta.permissions:
if p[0] == perm:
return p[0]
err = "Permission '{0}' not found in model {1}".format(perm, model)
raise Exception(err)
def format_perm(perm):
return '{0}.{1}'.format(__package__, perm)
def get_perm(model, type):
return format_perm(_get_perm(model, type))
PERM_APP_VIEW_TASK = get_perm(Task, "view_task")
Permissions can be accessed with get_perm or with a shortcut PERM_APP_VIEW_TASK:
models.PERM_APP_VIEW_TASK
# or
get_perm(Task, "view_task")
# or
format_perm(_get_perm(Task, "view_task"))
In case of searching for a missing permission get_perm will raise an Exception:
PERM_APP_VIEW_TASK = get_perm(Task, "add_task")
Message:
Exception: Permission 'add_task' not found in model <class 'myapp.models.Task'>
|
This is a really silly way of getting a random number. Its behavior is extremely dependent on the platform and on what else the application code is doing. For example, if you call it in a loop with no other memory allocation going on, then on many systems malloc will keep returning the same block over and over because free returns it to the front of the free list, so p will be the same each time. On many others, you'll get a cycle eventually as the heap grows to a certain size and then the allocator cycles through a number of slots.
This particular possibility is evidently what's happening in your experiment: you get rand==0 in 70/256 cases and rand==1 and rand==2 in 93/256 cases (apart, presumably, from the first few allocations where the cycle has not been reached yet and the last, partial cycle). I'm not familiar enough with malloc implementations to explain this particular distribution; possibly the control structures used by malloc have a size that's a multiple of 3 which would create a bias modulo 3.
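For concreteness, the "generator" under discussion boils down to something like this sketch (assuming the value is the returned pointer reduced modulo 3; the output is entirely allocator-dependent):
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    /* With no other allocation activity, many allocators hand the freed
       block straight back, so the "random" value repeats. */
    for (int i = 0; i < 8; i++) {
        void *p = malloc(16);
        printf("%p -> %d\n", p, (int)((uintptr_t)p % 3));
        free(p);
    }
    return 0;
}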
Modulo a power of 2, you'd get a bias on every realistic platform as the return values are guaranteed to be correctly aligned for any type and most platforms have alignment constraints on powers of 2. Or, in other words, a few of the lowest-order bits of p are always 0.
rand is standard C and, bad as it is on most platforms (don't even think of using it for anything related to security, and even for probabilistic algorithms it often isn't good enough), it would rarely be worse than malloc output.
There are probably a few platforms out there where the return value of malloc is randomized to make exploitation of bugs such as buffer overflows and use-after-free harder (because the attacker can't predict where interesting objects will be, in the same vein as ASLR), but such platforms would have a better random generator API anyway. |
RESTful APIs work on something called a handshake, so where security is concerned, we must ensure we are using SSL encryption for the data transfer.
When we talk about an API Key we're really saying a random but unique string (token) for that single user to gather data across domains (CORS).
Assuming verifyAPIKey() is robust
This isn't a pre-defined function in PHP. To build a 'robust' infrastructure for API tokens, you would need a set of steps like the following:
Client requesting data uses a unique token and return URI ->
Client redirects user to your sandbox with this information ->
User allows client access and Client receives a separate unique token ->
Client creates a request with that token to an endpoint to then get the data
Really, the hack here would be an injection when you're querying your database for those tokens. However, not to worry, because SO has plenty of resources on securing queries.
A simple example of this context would look something like this:
class Endpoint {
private $Token;
public function setToken($token) {
$this->Token = $token;
return $this;
}
public function isToken() {
$stmt = (new Database)->Prepare('SQL HERE');
$stmt->execute([$this->Token]);
return empty($stmt->fetch()[0]) ? false : true;
}
}
$e = new Endpoint();
$e->setToken($_POST['token']); // change to an MVC design rather than post methods
if($e->isToken()) {
// API logic (ie, return action related to that token)
}
Or see a working example of this. |
If you're using Laravel as an API, you'll likely want to do token-based authentication. The main idea is to use the 'api' guard (which uses the TokenGuard class), instead of the default 'web' guard (you can change this in congif/auth.php). This way Laravel will look for a Bearer token in the Authorization header in each request.
This is a decent walk through to get started, though I'm sure there are many other good blog posts and tutorials out there.
EDIT:
Not sure if this is helpful, but here's an example using an automated test. In my ModelFactory, I have a user model with an api_token attribute which is set to a random (and unique) string of 60 characters. This is a simple test to make sure I can retrieve a list of users:
/** @test */
public function user_index_endpoint_returns_list_of_users()
{
$user = factory(User::class)->create();
$this->get('/users', array(
'Accept' => 'application/json',
'Authorization' => 'Bearer ' . $user->api_token
))
->assertResponseStatus(200)
->seeJsonStructure([
'users'
]);
}
At the top of my config/auth.php file, I've set my default guard to 'api':
'defaults' => [
'guard' => 'api',
'passwords' => 'users',
],
Finally, in my routes file I've set up my users route to use the 'auth' middleware.
Route::group(['middleware' => 'auth'], function() {
Route::get('/users', 'UsersController@index');
});
Because I've defined the 'api' middleware as my default, behind the scenes Laravel will use the TokenGuard class to try to authenticate my request. To do so, it will look for a bearer token in my request and compare that against the users table. |
I think this can be done in a simpler way using the error handling operator onErrorResumeNext(), which passes control to another Observable rather than invoking onError() if the source Observable encounters an error.
Let's assume your 2 observables are:
// Assuming authenticate returns an exception of typeA if it fails
Observable<A> authenticate;
// returns an exception of typeB if refresh fails
Observable<B> refresh;
Now let's create another observable using concatMap.
// Probably you want to emit token from refresh observable. If not,
// then I hope you will get the idea how it's done.
Observable<A> refreshThenAuthenticate = refresh.concatMap(new Func1<B, Observable<A>>() {
@Override
public Observable<A> call(B b) {
// create an authentication observable using the token
return createAuthenticateObservable(b.getToken());
}
});
// Above can be written in much simpler form if you use Java 8 lambdas.
/* first authenticate will be executed, if it fails then control will
* go to refreshThenAuthenticate.
*/
authenticate
.onErrorResumeNext(refreshThenAuthenticate)
.subscribe(new Observer<A>() {
@Override
public void onCompleted() {
// login successful
}
@Override
public void onError(Throwable e) {
// if e of typeA show login dialog
// else do something else
}
@Override
public void onNext(A a) {
}
});
|
Session management is a trait most commonly associated with web applications, so I'll assume that's what App 1 and App 2 are. You may find this article (Single Sign-On for Regular Web Apps) an interesting read, in particular the section on session management.
When talking about managing sessions, there are typically three layers of sessions we need to consider:
Application Session
Auth0 (Federation Provider)1 session
Identity Provider session
1 This would be applicable to you if you planned on having your authentication server further delegate the authentication to additional identity provider like Google or Facebook.
Personally, I would not use the ID token as the session identifier and instead use a shorter ID and keep the session state server-side.
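A minimal sketch of that approach, assuming an Express application with express-session (names and routes are illustrative, and validateIdToken is a hypothetical helper):
const express = require('express');
const session = require('express-session');

const app = express();

// The cookie only carries a short, opaque session id;
// the session data itself stays server-side.
app.use(session({
    secret: 'replace-with-a-real-secret',
    resave: false,
    saveUninitialized: false
}));

app.get('/callback', (req, res) => {
    // After validating the ID token returned by the authentication server,
    // keep only what the app needs in the server-side session.
    const claims = validateIdToken(req.query.id_token); // hypothetical helper
    req.session.user = { sub: claims.sub, name: claims.name };
    res.redirect('/');
});

app.listen(3000);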
However, the ID token is meant to be provided to client applications as a way to supply them with information about an authentication operation. Here, client application refers to the role of the application and not its deployment characteristics, so you can have client applications that live only in the server world or outside of it on end-user devices/computers.
The previous implies that having ID tokens cross the server-side boundary is completely okay and that your intentions of using it as the session cookie value is fine.
Do have in mind that both cookies and ID tokens have the notion of expiration so having the token inside a cookie may be kind of confusing. You either need to keep expiration in sync (duplication) or ignore one and make sure that everyone knows which one is being ignored (everyone might even mean you three months from now). |
Best use if/else to make clear what branches will execute and which won't:
ad.authenticate(username, password).then((success) => {
if (!success) {
res.status(401).send(); // authentication failed
} else {
return User.findOne({ username }).exec().then(user => {
if (!user) {
res.status(403).send(); // unauthorized, no account in DB
} else if (user.displayName) {
res.status(201).json(user); // all good, return user details
} else {
// fetch user details from the AD
return ad.getUserDetails(username, password).then(details => {
// update user object with the response details and save
// ...
return user.save();
}).then(update => {
res.status(201).json(update); // all good, return user object
});
}
});
}
}).then(() => next(), err => next(err));
The nesting of then calls is quite necessary for conditional evaluation, you cannot chain them linearly and "break out" in the middle (other than by throwing exceptions, which is really ugly).
If you don't like all those then callbacks, you can use async/await syntax (possibly with a transpiler - or use Bluebird's Promise.coroutine to emulate it with generator syntax). Your whole code then becomes
router.post('/login', async (req, res, next) => {
try {
// authenticate
const { username, password } = req.body;
const success = await ad.authenticate(username, password);
if (!success) {
res.status(401).send(); // authentication failed
} else {
const user = await User.findOne({ username }).exec();
if (!user) {
res.status(403).send(); // unauthorized, no account in DB
} else if (user.displayName) {
res.status(201).json(user); // all good, return user details
} else {
// fetch user details from the AD
const details = await ad.getUserDetails(username, password);
// update user object with the response details and save
// ...
const update = await user.save();
res.status(201).json(update); // all good, return user object
}
}
next(); // let's hope this doesn't throw
} catch(err) {
next(err);
}
});
|
First of all, in order to ensure the app's security, I suggest you move the authentication method to a before_action in ApplicationController and skip it for the public actions.
If you are using devise gem you can add:
class ApplicationController < ActionController::Base
protect_from_forgery with: :exception
before_action :authenticate_user!
end
Using this your app is secure by default.
For the comment update I recommend you to follow this approach:
Make your comment creation form an AJAX form by adding the remote option, and move it to a partial:
_form.html.erb
<%= form_for comment, remote: true do |form|%>
<%= form.hidden_field :review_id, value: review_id %>
<%=form.text_field :content%>
<%=form.submit :save%>
<% end %>
Create comments list partial too:
_comments.html.erb
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<% comments.each do |comment| %>
<tr>
<td>
<%= image_tag comment.user.avatar.url %>
</td>
<td>
<%= comment.content %>
</td>
</tr>
<% end %>
</tbody>
</table>
So your review page should be something like this:
reviews/show.html.erb
<div class="review-photo">
<%= @review.photo %>
</div>
<div class="review-content">
<%= @review.content %>
</div>
<section class="comments">
<div class="new-comment-container">
<%= render 'my_new_comment_form_path', comment: Comment.new, review_id: @review.id %>
</div>
<div class="comments-container">
<%= render 'my_comments_partial_path', comments: @review.comments %>
</div>
</section>
Update your comments controller in order to respond to AJAX on the create action:
def create
  @review = Review.find(params[:review_id])
  @review.comments << Comment.create(comment_params)
  respond_to do |format|
    format.js
  end
end
private
def comment_params
params
.require(:comment)
.permit(:content)
.merge(user_id: current_user.id)
end
This code has a little refactor to ensure the commenter is the current user.
Then you should create a create.js.erb file to respond to the create action; in this file you replace the old list and the old form with the new ones:
comments/create.js.erb
$('.new-comment-container').html("<%= j render 'my_new_comment_form_path', comment: Comment.new, review_id: @review.id%>")
$('.comments-container').html("<%= j render 'my_comments_partial_path', comments: @review.comments%>")
I think this is a clean way to work with AJAX forms in rails. |
Try changing your JSON tree to this :-
Users:
- uid
- email
tests
- noOfTotalTest : 4 // Lets say 4
- id
- title
- user_uid
- index // Just index of the post
Now use this code block:-
FIRDatabase.database().reference().child("tests/noOfTotalTest").observeSingleEvent(of: .value, with: {(Snap) in
let totalNoOfTest = Snap.value as! Int
print(totalNoOfTest)
let randNum = Int(arc4random_uniform(UInt32(totalNoOfTest))) + 1
print(randNum)
FIRDatabase.database().reference().child("tests").queryOrdered(byChild: "index").queryEqual(toValue: randNum).observeSingleEvent(of: .value, with: {(Snapshot) in
print(Snapshot.value!)
})
})
Now whenever you post a test to your database you've got to do these things:-
Query for the total no of tests in the DB, noOfTotalTest
Once retrieved, increment it by 1, update noOfTotalTest, and include the new index with the other test details when you set them in your DB
And the process carries on....
PS:- For just making the post / SAVING:-
FIRDatabase.database().reference().child("tests/noOfTotalTest").observeSingleEvent(of: .value, with: {(Snap) in
if Snap.exists(){
// This is not the first post
let totalNoOfTest = Snap.value as! Int
FIRDatabase.database().reference().child("tests").childByAutoId().setValue(["userID" : UID, "details" : Details, "index" : totalNoOfTest + 1])
FIRDatabase.database().reference().child("tests/noOfTotalTest").setValue(totalNoOfTest + 1)
} else {
// This is your first post
FIRDatabase.database().reference().child("tests").childByAutoId().setValue(["userID" : UID, "details" : Details, "index" : 1])
FIRDatabase.database().reference().child("tests/noOfTotalTest").setValue(1)
}
})
To extend this so that you are able to delete, you can save the indexes that are active in the node you need to randomise.
For that add this to your JSON tree:-
active_Indexes :{
12 : true,
09 : true,
198 : true,
11: true,
103 : true,
}
Now what you need is to store these indexes in a dictionary, then randomise over its elements:-
var localIndexDirectory = [Int : Bool]()
//Listen to any changes to the database, and update your local index directory accordingly
override func viewDidLoad() {
super.viewDidLoad()
FIRDatabase.database().reference().child("active_Indexes").observe(.childRemoved, with: {(Snap) in
print(Snap.value)
let keyToBeChanged = Int(Snap.key)!
self.localIndexDirectory.removeValue(forKey: keyToBeChanged)
print(self.localIndexDirectory)
})
FIRDatabase.database().reference().child("active_Indexes").observe(.childAdded, with: {(Snap) in
print(Snap)
let keyToBeChanged = Int(Snap.key)!
self.localIndexDirectory.updateValue(true, forKey: keyToBeChanged)
print(self.localIndexDirectory)
})
}
This will keep your directory updated with the indexes available in your database (since .observe is a continuous listener on your network thread); then all you need to do is randomise over those indexes at that particular time and query the test at that particular index. But to activate the deleting function in your app you also need to modify your saving function, i.e. whenever you save a new node to your database, make sure you also append that particular index to the active_Indexes node in your DB.
PPS:- You would also need to update your security rules for dealing with different authentication states. |
A PHP script will likely move files around whose name is determined at runtime (the name of a temporary file that has just been uploaded). The check is meant to ensure that poorly-written scripts don't expose system files, or files containing authentication secrets.
Suppose I write a script that lets you upload an image to my server, enhance it by embedding some super-cute cat gifs that I provide, and download it again. To keep track of which image you are working on, I embed the file name in the request URLs for my edit buttons:
http://example.com/add-kitty.php?img=ato3508.png&add=kitty31.gif
Or maybe I embed the same information in a cookie or POST data, wrongly thinking that this makes it more secure. Then some moderately enterprising script kiddie comes by and tries this (or the POST/cookie equivalent):
http://example.com/add-kitty.php?img=%2Fetc%2Fpasswd&add=kitty31.gif
See it? That's the path /etc/passwd, url-encoded. Oops! You may have just made your /etc/passwd file available for download, with a little kitty noise in the middle.
Obviously this is not a full working exploit for anything, but I trust you get the idea: The move_uploaded_file function makes the extra check to protect you from attacks that will inject an unintended filename into your code. |
Any URL can be used with any HTTP verb, including GET, POST, PUT, DELETE, and others. Therefore there is no way to infer a verb given only the URL.
Even if you could figure out when the verb is a POST, the URL is not sufficient to recreate the page, because the POST requires information held in the body (not the URL) of the request.
It sounds like your application offers sort of a bookmark feature that is held server side. If the intention is to allow the user to pick up where they left off, you have a few options.
Instead of storing the URL only, store the full HTTP request, including the verb and the form body. If any of the values are session-specific (e.g. a session cookie), then you will need to be able to substitute new values for those somehow. Be very careful with this as there could be unforeseen consequences, e.g. if the POST operation did something that cannot be repeated. Also, it is a very bad idea to store the POST request of anything involved in authentication, such as the login page.
Instead of bringing the user back to the target of the POST, bring the user back to the page that originated the POST. One way to do this is to program all of the controllers that handle a POST to validate the form variables; if they are completely missing, redirect to the page which the user was supposed to fill in.
Consider letting the user manage his or her own bookmarks, using browser features instead of server logic.
|
There are some security vulnerabilities in your code, and they are serious enough to warrant an answer, even though this is not quite what you were asking. Hopefully someone else will address that. If this system is live on the web then it would be worth considering taking it down until it is resolved, so that your users' data is not put at risk.
You have two security issues that need to be dealt with promptly. I can try to outline them but if your data is of at least moderate value then I recommend you engage the services of a contractor who can come in and fix them. He or she will probably want to change your character set to UTF-8 anyway!
SQL injection
Consider this query:
SELECT user_id FROM tbl_user WHERE email = '$sUserName'
AND BINARY password = '$sPassword'
AND user_group = $iGROUP;
The assumption is that you have values like this:
$sUserName = "halfer@example.com";
$sPassword = "p@ssw0rd";
$iGROUP = 1;
OK, so let's try something, which is usually trivial to do with web forms:
$sUserName = "halfer@example.com' OR user_id = 1 --";
$sPassword = "p@ssw0rd";
$iGROUP = 1;
That will create a query that looks like this:
SELECT user_id FROM tbl_user WHERE email = 'halfer@example.com' OR user_id = 1 --'
AND BINARY password = 'p@ssw0rd'
AND user_group = 1;
Since we have used the comment device -- that boils down to this:
SELECT user_id FROM tbl_user WHERE email = 'halfer@example.com' OR user_id = 1;
Now, you probably don't have halfer@example.com in there, but you almost certainly do have a user with an ID of 1, and moreover they are likely to be marked as an approved account. If they are not, an attacker could keep trying with a few numbers, and would be likely to get a result within a few minutes.
The solution to this problem is very well documented.
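The standard fix is a parameterised query; a minimal sketch with PDO (assuming $pdo is an existing PDO connection) would be:
// Placeholders keep user input out of the SQL text entirely
$stmt = $pdo->prepare(
    'SELECT user_id FROM tbl_user
     WHERE email = ? AND BINARY password = ? AND user_group = ?'
);
$stmt->execute([$sUserName, $sPassword, $iGROUP]);
$userId = $stmt->fetchColumn();   // false when no row matches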
Plaintext passwords
Let's say you fix the above vulnerability, but that another unknown one exists, and your database gets stolen anyway. Your attacker will have all your user records (perhaps enough to conduct identity fraud) and they will certainly have lots of email addresses and passwords.
So they will feed them into an automated script to try those credentials on Facebook, GMail, eBay, Hotmail, and perhaps some banking sites too. They will do this because it is well known that most people re-use email/password combinations, even though experts have told them not to, ad nauseam.
Thus if you store passwords in plain text you have aided an attacker in a secondary attack, and can share some of the blame when your user suffers fraud or financial loss. (I don't know if anyone has been held legally culpable for a secondary attack - it would be very interesting to look into).
Your solution here is password hashing. This is a one-way algorithm that is very difficult to reverse, and slows re-use attacks down considerably. |
scanf_s() is not described by the C99 Standard (or previous ones).
If you want to use a compiler that targets C99 (or previous) use scanf().
For the C11 Standard (where it appears in the optional Annex K), scanf_s() is harder to use than scanf(), in exchange for improved security against buffer overflows.
So what is basically different between two ways?
scanf_s() is a more secure version of scanf(). After the argument specifying the destination, the size of the destination must be provided. The function checks that the input fits within the specified size before copying it, to ensure that there are no overwrites and malicious code isn't run. This size argument must be passed in the case of scanf_s().
And if scanf's width specification is blocking buffer overflow, why do we call original scanf "unsafe"?
The format specifiers that can be used with the scanf functions support explicit field width settings, which limit the maximum size of the input and prevent buffer overflow. But these scanf() features are awkward to use, since the field width has to be embedded into the format string (there's no way to pass it through a variadic argument, as can be done in printf). scanf is indeed rather poorly designed in that regard. But nevertheless, any claims that scanf is somehow hopelessly broken with regard to string-buffer-overflow safety are completely bogus and usually made by lazy programmers.
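To make the difference concrete, a small sketch (the scanf_s call assumes an Annex K / Microsoft-style implementation and is shown commented out):
#include <stdio.h>

int main(void) {
    char name[16];

    /* scanf: the limit (15 chars + terminating NUL) lives inside the format string */
    if (scanf("%15s", name) == 1)
        printf("%s\n", name);

    /* scanf_s: the buffer size is passed as an extra argument after the pointer */
    /* if (scanf_s("%s", name, (rsize_t)sizeof name) == 1) printf("%s\n", name); */

    return 0;
}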
The real problem with scanf() has a completely different nature, even though it is also about overflow. When scanf function is used for converting decimal representations of numbers into values of arithmetic types, it provides no protection from arithmetic overflow. If overflow happens, scanf produces undefined behavior. For this reason, the only proper way to perform the conversion in C standard library is functions from strto family.
So, to summarize the above, the problem with scanf is that it is difficult to use properly and safely with string buffers. And it is impossible to use safely for arithmetic input. The latter is the real problem. The former is just an inconvenience.
scanf_s solves the buffer-overflow problem for character arrays by passing the destination buffer size through a variadic argument, much as width values can be passed in printf (with scanf() the field width has to be embedded into the format string). Also, that size argument is mandatory in scanf_s, whereas the field width is optional in scanf.
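To make the difference concrete, a small sketch of both calls reading at most 19 characters into a 20-byte buffer (scanf_s is only available where Annex K or the Microsoft CRT provides it):
#define __STDC_WANT_LIB_EXT1__ 1   /* ask for Annex K functions where supported */
#include <stdio.h>

int main(void)
{
    char name[20];

    /* scanf: the field width has to be baked into the format string */
    if (scanf("%19s", name) == 1)
        printf("scanf read: %s\n", name);

    /* scanf_s: the destination size is passed as an extra argument after the pointer.
       Microsoft's CRT documents that argument as unsigned; strictly conforming
       Annex K code passes an rsize_t instead. */
    if (scanf_s("%19s", name, (unsigned)sizeof name) == 1)
        printf("scanf_s read: %s\n", name);

    return 0;
}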
This is a rather broad question and can (most likely) lead to opinionated answers, but I'll try to remain on the track.
These tools serve different purpose, but I see 3 cases where they can be used together. There might be other cases, but your mileage may vary.
Express is used to serve Ember assets through views
This is the case where you use Express as a backend server. If you need to serve the Ember application using some server logic, this is one way to go.
For example, you may want to have the authentication part done by Express (e.g. not by Ember), let's say to restrict access to some parts of the Express application: assets, pages, or your Ember application.
In that case you will end up rendering your dynamic templates with a script tag whose src points to your static Ember build.
Any other combination of language/server works here; choosing Express is only a matter of taste.
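For illustration, a minimal sketch of that first case (file names and port are assumptions): Express serves the built Ember assets and falls back to index.html so Ember's router can take over.
const express = require('express');
const path = require('path');
const app = express();

// Serve the compiled Ember assets (ember build writes to dist/ by default)
app.use(express.static(path.join(__dirname, 'dist')));

// Any server-side logic (authentication checks, etc.) would go here, before the fallback

// Fall back to index.html so Ember's client-side router handles everything else
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});

app.listen(3000);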
Your Ember application consumes an API
Again, I assume you write your API using JavaScript, and you want to serve it using Express. This is also a matter of personal taste.
You want to use server-side rendering for your Ember application
This is the only case where you will have to use Express, whether you want it or not.
Server-side rendering for Ember application is done with fastboot, which is used with ember-cli-fastboot, which has a dependency (not a devDependency) to Express (see the ember-cli-fastboot package.json file).
If you don't know about Fastboot, I suggest you watch this introduction video.
Conclusion
You don't need to mix these tools, but there are various ways of using Ember in conjunction with Express.
The only case where you have to use Express is if you use ember-cli-fastboot. |
I had exactly the same problem as you, I wanted to authenticate a user from a single page application, calling the API located on an other server.
The official auth0 example is a classic Express web application that does authentication and renders html page, but it's not a SPA connected to an API hosted on an other domain.
Let's break up what happens when the user authenticates in this example:
The user makes a request calling /auth/auth0 route
The user is automatically redirected to the Auth0 authentication process (Auth0 login form to choose the provider and then the provider login screen)
The user is redirected to /auth/success route
/auth/success route redirects to the static html page public/success.html, also sending a jwt-token cookie that contains the user's token
Client-side, when public/success.html loads, Feathers client authenticate() method reads the token from the cookie and saves it in the local storage.
From then on, the Feathers client authenticates the user by reading the token from local storage.
I tried to adapt this scenario to a single-page application architecture, implementing the following process:
From the SPA, call the authentication API with an origin query string parameter that contains the SPA URL. For example: http://my-api.com/auth/auth0?origin=http://my-spa.com
Server-side, in the /auth/auth0 route handler, create a cookie to store that URL
After a successful login, read that cookie to redirect the user back to the SPA, sending the JWT token in a cookie.
But the last step didn't work because you can't set a cookie on a given domain (the API server domain) and redirect the user to an other domain! (more on this here on Stackoverflow)
So actually I solved the problem by:
server-side: sending the token back to the client using the URL hash.
client-side: creating a new html page that reads the token from the URL hash
Server-side code:
// Add a middleware to write in a cookie where the user comes from
// This cookie will be used later to redirect the user to the SPA
app.get('/auth/auth0', (req, res, next) => {
const { origin } = req.query
if (origin) {
res.cookie(WEB_CLIENT_COOKIE, origin)
} else {
res.clearCookie(WEB_CLIENT_COOKIE)
}
next()
})
// Route called after a successful login
// Redirect the user to the single-page application "forwarding" the auth token
app.get('/auth/success', (req, res) => {
const origin = req.cookies[WEB_CLIENT_COOKIE]
if (origin) {
// if there is a cookie that contains the URL source, redirect the user to this URL
// and send the user's token in the URL hash
const token = req.cookies['feathers-jwt']
const redirectUrl = `${origin}/auth0.html#${token}`
res.redirect(redirectUrl)
} else {
// otherwise send the static page on the same domain.
res.sendFile(path.resolve(process.cwd(), 'public', 'success.html'))
}
})
Client-side, auth0.html page in the SPA
In the SPA, I created a new html page I called auth0.html that does 3 things:
it reads the token from the hash
it saves it in the local storage (to mimic what the Feathers client does)
it redirects the user to the SPA main page index.html
html code:
<html>
<body>
<script>
function init() {
const token = getToken()
if (!token) {
console.error('No auth token found in the URL hash!')
return
}
// Save the token in the local storage
window.localStorage.setItem('feathers-jwt', token)
// Redirect to the single-page application
window.location.href = '/'
}
// Read the token from the URL hash
function getToken() {
const hash = self.location.hash
const array = /#(.*)/.exec(hash)
if (!array) return
return array[1]
}
init()
</script>
</body>
</html>
And now in the SPA I can use the Feathers client, reading the token from the local storage when the app starts.
Let me know if it makes sense, thank you! |
You can use docusign's latest API using nuget package manager "DocuSign.eSign.Api".
I did it using C#:
//First, prepare the Recipients:
Recipients recpnts = new Recipients
{
//CurrentRoutingOrder = "1", // Optional.
Signers = new List<Signer>()
{
new Signer
{
RecipientId = "1",
RoleName = "Prospect",
Email = "ert@gmail.com",
Name = "Shyam",
},
}
};
// Call the EnvelopesApi class, which has the UpdateRecipients method
EnvelopesApi epi = new EnvelopesApi();
var envelopeId ="62501f05-4669-4452-ba14-c837a7696e04";
var accountId = GetAccountId();
// The following Line is responsible for Resend Envelopes.
RecipientsUpdateSummary recSummary = epi.UpdateRecipients(accountId, envelopeId , recpnts);
// Get Status or Error Details.
var summary = recSummary.RecipientUpdateResults.ToList();
var errors = summary.Select(rs => rs.ErrorDetails).ToList();
// Method to get your Docusign Account Details and Authorize it.
private static string GetAccountId()
{
string username = "Account Email Address";
string password = "Account Password";
string integratorKey = "Your Account Integrator Key";
// your account Integrator Key (found on Preferences -> API page)
ApiClient apiClient = new ApiClient("https://demo.docusign.net/restapi");
Configuration.Default.ApiClient = apiClient;
// configure 'X-DocuSign-Authentication' header
string authHeader = "{\"Username\":\"" + username + "\", \"Password\":\"" + password + "\", \"IntegratorKey\":\"" + integratorKey + "\"}";
Configuration.Default.AddDefaultHeader("X-DocuSign-Authentication", authHeader);
// we will retrieve this from the login API call
string accountId = null;
/////////////////////////////////////////////////////////////////
// STEP 1: LOGIN API
/////////////////////////////////////////////////////////////////
// login call is available in the authentication api
AuthenticationApi authApi = new AuthenticationApi();
LoginInformation loginInfo = authApi.Login();
accountId = loginInfo.LoginAccounts[0].AccountId;
return accountId;
}
|
Security is only done on server-side. Your application, including the html, styles and javascript is available for everyone in the browser. You will get all the sensitive data via ajax calls from the server. During this API call you can check credentials or verify an authentication token from the header. Normally you would do something of the following:
Sessions
Check username/password and create a session with the id of the user, which will be checked on every API call. This is not stateless, and you have to avoid it if you want to create a RESTful API.
Tokens
Check username/password and save a randomly created token in your database, which you send back to the client. The client will send this token with every request and you will check your database to see if this request is authorized. This is stateless, but requires a query on every request just for authentication.
JWT
Check username/password and send a signed JSON object (a JWT) to the client, which only your server can create and verify using its server-side secret. The client will send this token with every request and you can authenticate the request just by verifying the token's signature. This is stateless and does not require you to save the tokens.
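As an illustration of the JWT option only, a minimal Express sketch using the jsonwebtoken package (the route names, secret handling and the fake user lookup are assumptions, not part of any particular framework):
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());

const SECRET = process.env.JWT_SECRET || 'change-me'; // server-side secret, never shipped to the browser

// Issue a token after checking username/password against your user store
app.post('/login', (req, res) => {
  const user = { id: 42 }; // stand-in for the record found after verifying credentials
  const token = jwt.sign({ sub: user.id }, SECRET, { expiresIn: '1h' });
  res.json({ token });
});

// Verify the token on every API call; no session or database lookup needed
app.get('/api/private', (req, res) => {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    const payload = jwt.verify(token, SECRET); // throws if tampered with or expired
    res.json({ userId: payload.sub });
  } catch (err) {
    res.status(401).json({ error: 'invalid token' });
  }
});

app.listen(3000);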
A parent class should not know about its own children. The Reflection API and related functions are not a good choice for implementing high-level logic.
In your case, you can use something like the Strategy pattern.
First, we declare generic interface of authentication method:
/**
* Common authentication interface.
*/
interface AuthStrategyInterface
{
public function authUser($uid, $pw);
}
Next, we add a some custom implementations of this interface:
/**
* Firsts implementation.
*/
class FooAuthStrategy implements AuthStrategyInterface
{
public function authUser($uid, $pw)
{
return true;
}
}
/**
* Second implementation.
*/
class BarAuthStrategy implements AuthStrategyInterface
{
public function authUser($uid, $pw)
{
return false;
}
}
Then we create yet another implementation that holds a collection of specific strategies.
Its authUser() method in turn passes authentication parameters to every inner strategy until one returns true.
/**
* Collection of nested strategies.
*/
class CompositeAuthStrategy implements AuthStrategyInterface
{
private $authStrategies = [];
public function addStrategy(AuthStrategyInterface $strategy)
{
$this->authStrategies[] = $strategy;
}
public function authUser($uid, $pw)
{
foreach ($this->authStrategies as $strategy) {
if ($strategy->authUser($uid, $pw)) {
return true;
}
}
return false;
}
}
It's not the only way to solve your problem, but just an example. |
My entire team seems to manage their local branches based on being able to visually see in the commit graph that it's been merged,
This seems to be the actual problem right here. If you're already using a GitHub pull-request workflow, the status of the pull request should be the authoritative answer to this question. Has the PR been accepted? Good! Branch is merged, move on with your life.
One option you may want to consider is adopting (and possibly enforcing) a workflow in which you only accept single-commit pull requests. Let people do whatever the heck they want in their local repositories, but when submitting a pull request they squash the appropriate commits together before submitting (and make liberal use of a rebase workflow locally to update the pull request if they need to make changes).
This has the advantage that the "visual inspection" method will continue to work, since you will no longer be synthesizing commits on GitHub.
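For example, assuming the feature branch was cut from origin/master, the local squash before opening (or force-updating) the pull request could look like this:
$ git checkout my-feature-branch
$ git rebase -i origin/master        # mark every commit after the first as "squash"
$ git push --force-with-lease origin my-feature-branch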
Update
I've put together a small tool that uses the GitHub API to determine if pull requests associated with a given commit sha have been closed. Drop the git-is-merged script into your $PATH somewhere, and then you can do this:
$ git checkout my-feature-branch
$ git is-merged
All pull requests for 219e0f04a44053633abc947ce2b9d700156de978 are closed.
Or:
$ git is-merged my-feature-branch
All pull requests for 219e0f04a44053633abc947ce2b9d700156de978 are closed.
The script returns status text and exit codes for:
No pull requests exist
All pull requests are closed
Some pull requests are closed
All pull requests are open
For squashed commits, you can use any of the commit shas that were part of the original pull request or the commit sha of the squashed commit.
As written this tool will only work with public repositories, but it's using the PyGithub module which does support authentication. |
It turned out the Name claim was also the e-mail address, so I had to intercept the Notifications to pass in the Name claim as the Email claim:
adOptions.Notifications = new OpenIdConnectAuthenticationNotifications()
{
SecurityTokenValidated = async n =>
{
var id = n.AuthenticationTicket.Identity;
var nid = new ClaimsIdentity(
id.AuthenticationType,
System.Security.Claims.ClaimTypes.GivenName,
System.Security.Claims.ClaimTypes.Role);
//Here I added the Name as E-mail claim
nid.AddClaim(new Claim(ClaimTypes.Email, id.Name));
nid.AddClaim(id.FindFirst(System.Security.Claims.ClaimTypes.NameIdentifier));
nid.AddClaim(id.FindFirst(System.Security.Claims.ClaimTypes.GivenName));
nid.AddClaim(id.FindFirst(System.Security.Claims.ClaimTypes.Name));
n.AuthenticationTicket = new AuthenticationTicket(nid, n.AuthenticationTicket.Properties);
}
};
|
Hmm, let's see:
Telnet is simpler (as others have noted already);
Telnet is obviously faster, as the protocol is much more trivial and there is no key exchange and no encryption involved;
Telnet is less vulnerable
Wait, stop, WHAT !?
Well, yes, telnet protocol is plain-text, so you can just sniff the connection and now you know the password and everything else.
And that is a well known fact indeed.
How hard would it be to actually sniff the particular telnet session,
depends on the network setup and a bunch of other things, and might range from being completely trivial to extremely hard to do.
But aside from the (obvious) lack of encryption, when it comes to the protocol and the service implementation(s) itself, which one is less vulnerable overall?
Let's take a look in the CVE database:
Telnet: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=telnet
There were 5 vulnerabilities registered in 2016;
3 of them are just "hardcoded credentials", which is more of a vendor error than a real service implementation or protocol flaw.
Now, SSH: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=ssh
25 (!) vulnerabilities (year 2016), ranging from the "hard-coded" credentials to allowing the selection of low-security encryption algorithms, issues which allow for denial-of-service attacks or reading the private keys from the remote process memory and so on.
So there were obviously many more SSH related vulnerabilities than Telnet vulnerabilities detected in 2016, and Telnet is a winner here.
That is actually pretty logical, given that SSH is a much more complex protocol, and a typical SSH implementation will have many more features, like X11 forwarding, file transfer, tunnels etc., requiring more complex code and presenting a much wider "attack surface".
Please take the above with a grain of salt: Telnet is still plain-text and it is widely regarded as an outdated protocol, so you definitely have to use a decent SSH implementation instead.
Just make sure that it is configured properly (e.g. switch off features you are not going to use), and keep it up to date at all times.
At the same time, you have to remember that sometimes "obvious things" are not always that "obvious" when you look at them from a slightly different angle,
and that is the point of this post. |
1) Whether a contactless Java Card is allowed for banking depends on whether it has the corresponding type approval. Which type approvals are needed depends on the bank and the banking application. Typical type approvals that are needed are Common Criteria and the Master Card type approval. Common Criteria will not only be applied to various elements of the card itself (like the chip, the OS and the application), but also to the development organization (building security, IT security etc.).
2) Yes, a single Java Card supports multiple applications. For example, a modern SIM card like the G&D SkySIM CX (which even simultaneously supports NFC (ISO 14443 via SWP) besides ISO 7816) hosts a variety of applications at the same time: the GlobalPlatform Card Manager application, the ETSI GSM/UICC application, MIFARE, MasterCard, Visa, several applications from the telco provider, and even a Smart Card Web Server. Java Card smart cards that support multiple applications usually implement the GlobalPlatform specification to manage the life cycle of the card as well as the life cycle of those applications (authentication / authorization, loading, installation, memory allocation / quota, selection, deselection, termination).
3) The protocol consists of multiple layers, most of which are public. The lower layers are public: ISO 7816, which describes the "packets" (APDUs) and general characteristics of smart cards; ISO 14443, which describes the characteristics of contactless cards; SWP, which describes how a contact-based card's ISO 7816 C6 pin is used to delegate ISO 14443 to a contactless frontend; and GlobalPlatform and Java Card themselves. I'm not sure, however, how public the specifications of Visa and MasterCard are.
4) For such a simulation, you need to simulate the terminal-side as well. Your applet needs to implement the Application side of the Visa specification, and your terminal needs to run a smart card client application that triggers these commands. This is, however, the normal way of testing basically all smart card applications.
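For orientation only, the card side of such a simulation starts from a plain Java Card applet skeleton like the one below; the instruction byte and anything payment-specific are placeholders, not taken from any real Visa or MasterCard specification.
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

public class PaymentSimApplet extends Applet {

    private static final byte INS_DUMMY = (byte) 0x20; // placeholder instruction

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new PaymentSimApplet().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return; // nothing extra to do on SELECT in this sketch
        }
        byte[] buffer = apdu.getBuffer();
        switch (buffer[ISO7816.OFFSET_INS]) {
            case INS_DUMMY:
                // a real applet would implement the card side of the payment spec here
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}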
If with "actual payment terminal" you mean a real payment terminal, you will not be able to run your simulation with that because you would not have the required secrets (keys etc.). The actual payment terminal would first recognize your Visa applet, but then reject it because it lacks the correct keys. Testing is always done with special test keys, and real keys are usually not available during development. Depending on the application, either the real keys might get inserted during the personalization of the application, or the application is generated with card-specific keys which have to register with a background server. For details, you'd have to consult the corresponding specification. |
Create this provider class
public class QueryStringOAuthBearerProvider : OAuthBearerAuthenticationProvider
{
public override Task RequestToken(OAuthRequestTokenContext context)
{
var value = context.Request.Query.Get("access_token");
if (!string.IsNullOrEmpty(value))
{
context.Token = value;
}
return Task.FromResult<object>(null);
}
}
Use it in Startup.cs
OAuthOptions = new OAuthAuthorizationServerOptions
{
TokenEndpointPath = new PathString("/Token"),
Provider = new ApplicationOAuthProvider(PublicClientId),
AuthorizeEndpointPath = new PathString("/api/Account/ExternalLogin"),
AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
// In production mode set AllowInsecureHttp = false
AllowInsecureHttp = true
};
// Enable the application to use bearer tokens to authenticate users
//app.UseOAuthBearerTokens(OAuthOptions); // old line
app.UseOAuthAuthorizationServer(OAuthOptions); // new line
// Enable the application to retrieve tokens from query string to authenticate users
app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions()
{
Provider = new QueryStringOAuthBearerProvider()
});
Now the token will be read from a URL of the form "..../?access_token=xxxxxxx" and validated.
This is the process I have used in my current project. When a user logs in, I take the token and store in localStorage. Then every time a user goes to any route, I wrap the component that the route serves in a hoc. Here is the code for the HOC that checks for token.
export function requireAuthentication(Component) {
class AuthenticatedComponent extends React.Component {
componentWillMount () {
this.checkAuth(this.props.user.isAuthenticated);
}
componentWillReceiveProps (nextProps) {
this.checkAuth(nextProps.user.isAuthenticated);
}
checkAuth (isAuthenticated) {
if (!isAuthenticated) {
let redirectAfterLogin = this.props.location.pathname;
browserHistory.push(`/login?next=${redirectAfterLogin}`);
}
}
render () {
return (
<div>
{this.props.user.isAuthenticated === true
? <Component {...this.props}/>
: null
}
</div>
)
}
}
const mapStateToProps = (state) => ({
user: state.user
});
return connect(mapStateToProps)(AuthenticatedComponent);
}
Then in my index.js I wrap each protected route with this HOC like so:
<Route path='/protected' component={requireAuthentication(ProtectedComponent)} />
This is how the user reducer looks.
export default function userReducer(state = {}, action) {
switch(action.type) {
case types.USER_LOGIN_SUCCESS:
return {...action.user, isAuthenticated: true};
default:
return state;
}
}
action.user contains the token. The token can come either from the API when a user first logs in, or from localStorage if the user is already logged in.
It is really a bad idea to have different login pages for Spring Security, because that is not what it is designed for. You are going to run into trouble defining which authentication entry point to use, and will need a lot of boilerplate. Given your other requirements, I would propose the following:
use one single login page with one single AuthenticationManager (the default ProviderManager will be fine)
configure it with two different AuthenticationProviders, both being DaoAuthenticationProvider, each pointing on one of you 2 user tables
configure those provider to automatically set different roles, ROLE_ADMIN for the former that will process admins, ROLE_USER for the latter
That way you fully rely on SpringSecurity infrastructure with as little modifications as possible to meet your requirements.
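For illustration, a sketch of that wiring (bean names are assumptions; the two UserDetailsService implementations, which read the admin and user tables and attach ROLE_ADMIN / ROLE_USER authorities respectively, are assumed to exist elsewhere):
@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    @Qualifier("adminDetailsService")
    private UserDetailsService adminDetailsService; // backed by the admin table

    @Autowired
    @Qualifier("userDetailsService")
    private UserDetailsService userDetailsService;  // backed by the user table

    @Override
    protected void configure(AuthenticationManagerBuilder auth) {
        DaoAuthenticationProvider adminProvider = new DaoAuthenticationProvider();
        adminProvider.setUserDetailsService(adminDetailsService);
        adminProvider.setPasswordEncoder(new BCryptPasswordEncoder());

        DaoAuthenticationProvider userProvider = new DaoAuthenticationProvider();
        userProvider.setUserDetailsService(userDetailsService);
        userProvider.setPasswordEncoder(new BCryptPasswordEncoder());

        // The default ProviderManager asks each provider in turn until one succeeds
        auth.authenticationProvider(adminProvider)
            .authenticationProvider(userProvider);
    }
}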
But you should also wonder whether you really need different databases. At least for the major providers (Oracle, PostgreSQL, MariaDB, ...) there is no harm in having many null columns in a record. IMHO you should do a serious analysis to compare both ways, either one single table (much simpler to set up for Spring Security) or two tables.
It is not clear if you are looking for an example to connect to the Datastore Emulator (for development) or the real Datastore on Google Cloud Platform. It looks like your primary objective is to debug your code from an IDE, which you can do either way.
For connecting to the Datastore Emulator - see the below post:
Google Datastore Emulator using Java (Not using GAE)
For connecting to the Datastore on GCP -
If you have not run the gcloud init command, run it and follow the on-screen instructions to set up the default project and auth credentials. Then you can use the below code to access the real Datastore:
Datastore datastore = DatastoreOptions.getDefaultInstance().getService();
Another option is to set the project ID and auth credentials within your code using the DatastoreOptions.Builder. You need to get/download the JSON credentials from your Google Cloud Console.
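A short sketch of that approach with a recent google-cloud-datastore client (the project ID and key file path are placeholders):
// The stream calls throw IOException, so wrap or declare accordingly
Datastore datastore = DatastoreOptions.newBuilder()
    .setProjectId("my-project-id")
    .setCredentials(ServiceAccountCredentials.fromStream(
        new FileInputStream("/path/to/service-account-key.json")))
    .build()
    .getService();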
See the below links for more info/sample code:
https://github.com/GoogleCloudPlatform/google-cloud-java#specifying-a-project-id
https://github.com/GoogleCloudPlatform/google-cloud-java#authentication |
I solved the same error like this:
using UnityEngine;
using System.Collections;
using UnityEditor.Callbacks;
using UnityEditor;
using System;
using UnityEditor.iOS.Xcode;
using System.IO;
public class AutoIncrement : MonoBehaviour {
[PostProcessBuild]
public static void ChangeXcodePlist(BuildTarget buildTarget, string pathToBuiltProject)
{
if (buildTarget == BuildTarget.iOS)
{
// Get plist
string plistPath = pathToBuiltProject + "/Info.plist";
var plist = new PlistDocument();
plist.ReadFromString(File.ReadAllText(plistPath));
// Get root
var rootDict = plist.root;
// Change value of NSCameraUsageDescription in Xcode plist
var buildKey = "NSCameraUsageDescription";
rootDict.SetString(buildKey, "Taking screenshots");
var buildKey2 = "ITSAppUsesNonExemptEncryption";
rootDict.SetString(buildKey2, "false");
// Write to file
File.WriteAllText(plistPath, plist.WriteToString());
}
}
// Use this for initialization
void Start () {
}
// Update is called once per frame
void Update () {
}
[PostProcessBuild]
public static void OnPostprocessBuild(BuildTarget target, string pathToBuiltProject)
{
//A new build has happened so lets increase our version number
BumpBundleVersion();
}
// Bump version number in PlayerSettings.bundleVersion
private static void BumpBundleVersion()
{
float versionFloat;
if (float.TryParse(PlayerSettings.bundleVersion, out versionFloat))
{
versionFloat += 0.01f;
PlayerSettings.bundleVersion = versionFloat.ToString();
}
}
[MenuItem("Leman/Build iOS Development", false, 10)]
public static void CustomBuild()
{
BumpBundleVersion();
var levels= new String[] { "Assets\\ShootTheBall\\Scenes\\MainScene.unity" };
BuildPipeline.BuildPlayer(levels,
"iOS", BuildTarget.iOS, BuildOptions.Development);
}
}
|
By default an observable is not multicast-enabled. This means that multiple subscriptions will each generate their own stream. Since you do not want to have to re-fetch the authentication credentials per subscription, you need to make your original stream multicast-compatible. This is done using, for example, the share() operator, which makes the stream hot upon the first subscription and internally keeps a refcount of all following subscriptions.
Since subscriptions which are late to the party will not automatically get the previously emitted values once the stream has become hot, we need to build in functionality to replay emitted values.
Both of these requirements are combined in the shareReplay() operator, which lets you multicast the latest n values.
So your code would look like this:
const authStream = this.authService.auth$.shareReplay(1);
// component 1
const myAuthDisposable = authStream.subscribe(auth => this.auth = auth)
// component 2
const myAuthDisposable = authStream.subscribe(auth => this.auth = auth)
|
In the example you gave: primary/instance@REALM
primary = service name (e.g. HTTP running on a target server)
instance = FQDN (typically) which needs to be in DNS - it would be
the FQDN of the server that the "primary" (service) runs on
REALM = typically written in upper-case (though not mandatory) - this often (though not always) matches the DNS domain name of the environment in which Kerberos authentication is to occur. It is a collection of computers sharing a common namespace and Kerberos database.
Example of an SPN: HTTP/server1.acme.com@ACME.COM. In this example, it could be shortened to just HTTP/server1.acme.com assuming DNS is set right in the machine's environment.
For your example, foo.bar.com, the realm would likely be FOO.BAR.COM. It doesn't have to be though. You can definitely have a DNS FQDN of foo.bar.com existing in a Kerberos realm of another name, but that realm name would have to be fully-qualified; you can't just have it as "REALM1". Kerberos relies heavily on DNS. I suppose it is technically possible to have a non-fully-qualified Kerberos realm name, though I have never seen it done in practice - you would just be asking for major trouble.
For your 3 services talking to each other: yes, each one of them would have to have its own SPN; they have to be delineated separately in the Kerberos database, otherwise how would clients find them? Three distinct services in this case would each need their own keytab file, but each keytab would not have the principals for the other services.
Don't use the word "principal" on its own like that. A principal is a security object which may have an SPN, or it may not - it depends. There are different types of security principals: users, for example, have UPNs instead; services have SPNs; computers are the third category.
Suggest you read up more here if you are in a Microsoft Active Directory environment, the most popular Kerberos implementation today: http://social.technet.microsoft.com/wiki/contents/articles/4209.kerberos-survival-guide.aspx
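As a concrete illustration in an Active Directory environment, the SPN from the HTTP example above would be registered on (and listed for) the account the service runs under using setspn; the service account name here is just a placeholder:
setspn -S HTTP/server1.acme.com ACME\svc-webapp
setspn -L ACME\svc-webapp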
Here is my example that might help someone (Alamofire 4.0, Swift 3, xCode 8)
import Alamofire
class NetworkConnection {
let developmentDomain = Config.developmentDomain // "api.myappdev.com"
let productionDomain = Config.productionDomain // "api.myappprod.com"
let certificateFilename = Config.certificateFilename // "godaddy"
let certificateExtension = Config.certificateExtension // "der"
let useSSL = true
var manager: SessionManager!
var serverTrustPolicies: [String : ServerTrustPolicy] = [String:ServerTrustPolicy]()
static let sharedManager = NetworkConnection()
init(){
if useSSL {
manager = initSafeManager()
} else {
manager = initUnsafeManager()
}
}
//USED FOR SITES WITH CERTIFICATE, OTHERWISE .DisableEvaluation
func initSafeManager() -> SessionManager {
setServerTrustPolicies()
manager = SessionManager(configuration: URLSessionConfiguration.default, delegate: SessionDelegate(), serverTrustPolicyManager: ServerTrustPolicyManager(policies: serverTrustPolicies))
return manager
}
//USED FOR SITES WITHOUT CERTIFICATE, DOESN'T CHECK FOR CERTIFICATE
func initUnsafeManager() -> SessionManager {
manager = Alamofire.SessionManager.default
manager.delegate.sessionDidReceiveChallenge = { session, challenge in
var disposition: URLSession.AuthChallengeDisposition = .performDefaultHandling
var credential: URLCredential?
if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust {
disposition = URLSession.AuthChallengeDisposition.useCredential
credential = URLCredential(trust: challenge.protectionSpace.serverTrust!) //URLCredential(forTrust: challenge.protectionSpace.serverTrust!)
} else {
if challenge.previousFailureCount > 0 {
disposition = .cancelAuthenticationChallenge
} else {
credential = self.manager.session.configuration.urlCredentialStorage?.defaultCredential(for: challenge.protectionSpace)
if credential != nil {
disposition = .useCredential
}
}
}
return (disposition, credential)
}
return manager
}
func setServerTrustPolicies() {
let pathToCert = Bundle.main.path(forResource: certificateFilename, ofType: certificateExtension)
let localCertificate:Data = try! Data(contentsOf: URL(fileURLWithPath: pathToCert!))
let serverTrustPolicies: [String: ServerTrustPolicy] = [
productionDomain: .pinCertificates(
certificates: [SecCertificateCreateWithData(nil, localCertificate as CFData)!],
validateCertificateChain: true,
validateHost: true
),
developmentDomain: .disableEvaluation
]
self.serverTrustPolicies = serverTrustPolicies
}
static func addAuthorizationHeader (_ token: String, tokenType: String) -> [String : String] {
let headers = [
"Authorization": tokenType + " " + token
]
return headers
}
}
Add the following to your Info.plist:
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
<key>NSExceptionDomains</key>
<dict>
<key>api.myappdev.com</key>
<dict>
<key>NSExceptionAllowsInsecureHTTPLoads</key>
<true/>
<key>NSExceptionRequiresForwardSecrecy</key>
<false/>
<key>NSIncludesSubdomains</key>
<true/>
<key>NSRequiresCertificateTransparency</key>
<false/>
<key>NSTemporaryExceptionMinimumTLSVersion</key>
<string>TLSv1.2</string>
</dict>
</dict>
</dict>
And here is an example of making a request:
import Alamofire
class ActionUserUpdate {
let url = "https://api.myappdev.com/v1/"
let manager = NetworkConnection.sharedManager.manager
func updateUser(_ token: String, tokenType: String, expiresIn: Int, params: [String : String]) {
let headers = NetworkConnection.addAuthorizationHeader(token, tokenType: tokenType)
manager?.request(url, method: .put, parameters: params, encoding: JSONEncoding.default, headers: headers).responseJSON { response in
print(response.description)
print(response.debugDescription)
print(response.request) // original URL request
print(response.response) // URL response
print(response.data) // server data
print(response.result) // result of response serialization
}
}
}
|
So, I have contemplated this in great length and decided this:
Yes it's ok to have those digits on your front end.
Of course - for many reasons as follows.
First:
Question:
Am I right to be concerned that another site could hijack my user pool, potentially tricking users into signing up to it?
Response:
If I were to take your UserPoolID and ClientID - could I "hijack" your application?
Answer:
Not exactly...maybe kinda sorta but why...
The level of "tenancy" or "permission" you give to a client is entirely up to you and your IAM Roles. Lets say we don't consider my second and more relavant reason, yet - (origin checks).
If I steal your access keys and misuse your app/brand/whatever, I am simply driving clients to your site. I cannot gain access to your client list, data, logs, records, etc. IF you set your authenticated user permissions to not allow that.
Locking down your "Admin Level" permissions to client lists, pool info, data, etc.
Example (added to your Statement section):
{
"Effect": "Deny",
"Action": [
"cognito-identity:CreateIdentityPool",
"cognito-identity:DeleteIdentityPool",
"cognito-identity:DeleteIdentities",
"cognito-identity:DescribeIdentity",
"cognito-identity:DescribeIdentityPool",
"cognito-identity:GetIdentityPoolRoles",
"cognito-identity:ListIdentities",
"cognito-identity:ListIdentityPools",
"cognito-identity:LookupDeveloperIdentity",
"cognito-identity:MergeDeveloperIdentities",
"cognito-identity:SetIdentityPoolRoles",
"cognito-identity:UnlinkDeveloperIdentity",
"cognito-identity:UpdateIdentityPool"
],
"Resource": [
"arn:aws:cognito-identity:us-east-1:ACCOUNT_DIGITS:identitypool/us-east-1:PoolID_NUMBERS"
]
}
Or simply the opposite:
{
"Effect": "Allow",
"Action": [
"cognito-identity:GetOpenIdTokenForDeveloperIdentity"
],
"Resource": "arn:aws:cognito-identity:us-east-1:ACCOUNT_DIGITS:identitypool/us-east-1:NUMBERS-NUMBERS-PoolID"
}
Only need the "cognito-identity:GetOpenIdTokenForDeveloperIdentity" part.
Locking down your "User Level" permissions to stuff
Example:
{
"Effect": "Allow",
"Action": [ "s3:PutObject", "s3:GetObject" ],
"Resource": [
"arn:aws:s3:::[bucket]/[folder]/${cognito-identity.amazonaws.com:sub}/*"
]
}
As an obvious rule of thumb - only give users permission to what they need. Lock down all the crap you can possibly lock down and use the policy simulator.
Conclusion to reason one:
You can lock down all the things that would expose your client base and make it pointless for someone to 'hijack' your site.
Counter argument:
Ya, but what IF
Here is a doc that might help for IAM stuff
And some more
Second:
Question:
The pre-authentication Lambda payload doesn't seem to include any request origin data, so I guess that isn't the way to do it.
Response:
Hmm.
Answer:
Yes it does include request origin data - IF one sets it up.
Question:
I'm concerned that the AWS Cognito User Pools Javascript API doesn't seem to care which website requests are coming from
Answer:
For this - you're correct. If you are using static served files with user pools triggers - there is little done to check origin.
So - if you really want to - you can set all this up using API Gateway to Lambda.
This will remove direct interaction with User Pools from the client side and put it on the back end.
Preface:
Steps to set up:
Go into User Pools and set up a pool
Add a Cognito identity pool
Go into Lambda and hook up a function with an API Gateway Trigger Event
put in your code - this is a "login" example:
const
AWS = require( 'aws-sdk' ),
UserPool = new AWS.CognitoIdentityServiceProvider();
exports.handler = ( event, context, callback ) => {
console.log( event );
const params = {
AuthFlow: 'CUSTOM_AUTH',
ClientId: 'numbers',
AuthParameters: {
USERNAME: event.email,
PASSWORD: event.password
}
};
UserPool.initiateAuth( params, ( err, data ) => {
callback( err, data );
} );
};
In the above - yes you can do:
UserPool.initiateAuth( params, callback );
Instead of:
UserPool.initiateAuth( params, ( err, data ) => {
callback( err, data );
} );
But this throws weird errors - there is an issue open on GitHub about it already.
Go to the trigger event from API Gateway
Click on your method and go into the section Integration Request
At the bottom you'll see Body Mapping Templates
Add a new one and put in application/json
You should see the sample template come up that follows:
This is the Apache Velocity Template Language - different from the JSON Schema language used by the other mapping templates:
#set($allParams = $input.params())
{
"body-json" : $input.json('$'),
"params" : {
#foreach($type in $allParams.keySet())
#set($params = $allParams.get($type))
"$type" : {
#foreach($paramName in $params.keySet())
"$paramName" : "$util.escapeJavaScript($params.get($paramName))"
#if($foreach.hasNext),#end
#end
}
#if($foreach.hasNext),#end
#end
},
"stage-variables" : {
#foreach($key in $stageVariables.keySet())
"$key" : "$util.escapeJavaScript($stageVariables.get($key))"
#if($foreach.hasNext),#end
#end
},
"context" : {
"account-id" : "$context.identity.accountId",
"api-id" : "$context.apiId",
"api-key" : "$context.identity.apiKey",
"authorizer-principal-id" : "$context.authorizer.principalId",
"caller" : "$context.identity.caller",
"cognito-authentication-provider" : "$context.identity.cognitoAuthenticationProvider",
"cognito-authentication-type" : "$context.identity.cognitoAuthenticationType",
"cognito-identity-id" : "$context.identity.cognitoIdentityId",
"cognito-identity-pool-id" : "$context.identity.cognitoIdentityPoolId",
"http-method" : "$context.httpMethod",
"stage" : "$context.stage",
"source-ip" : "$context.identity.sourceIp",
"user" : "$context.identity.user",
"user-agent" : "$context.identity.userAgent",
"user-arn" : "$context.identity.userArn",
"request-id" : "$context.requestId",
"resource-id" : "$context.resourceId",
"resource-path" : "$context.resourcePath"
}
}
With this, you can get the source-ip, cognito information, etc.
This method is a secure way of locking down origin. You can check origin either with an if check in Lambda or with an IAM condition - blocking all requests from other origins.
While there are ways to block the thread until the asynchronous request finishes, a simpler and more resource-effective solution has been available since Spring 3.2.
You can use DeferredResult<T> as your return type to enable asynchronous processing. This allows the servlet container to reuse the HTTP worker thread right away, while sparing you the headache of forcefully serializing a chain of asynchronous requests.
By filling out the comments, your code would look like this:
@RequestMapping(value = "/users", method = RequestMethod.DELETE)
public DeferredResult<String> deleteUser(@RequestHeader("Authentication") String token) {
final DeferredResult<String> result = new DeferredResult<>();
FirebaseUtil.getUid(token, new OnSuccessListener<FirebaseToken>() {
@Override
public void onSuccess(FirebaseToken decodedToken) {
String uid = decodedToken.getUid();
User userToDelete = userDao.get(uid);
userDao.delete(uid);
clearUserAccounts(userToDelete);
result.setResult(uid + " was deleted");
}
}, new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
result.setErrorResult(e);
}
});
return result;
}
|
func pbkdf2(hash :CCPBKDFAlgorithm, password: String, salt: [UInt8], keyCount: Int, rounds: UInt32!) -> [UInt8]! {
let derivedKey = [UInt8](count:keyCount, repeatedValue:0)
let passwordData = password.dataUsingEncoding(NSUTF8StringEncoding)!
let derivationStatus = CCKeyDerivationPBKDF(
CCPBKDFAlgorithm(kCCPBKDF2),
UnsafePointer<Int8>(passwordData.bytes), passwordData.length,
UnsafePointer<UInt8>(salt), salt.count,
CCPseudoRandomAlgorithm(hash),
rounds,
UnsafeMutablePointer<UInt8>(derivedKey),
derivedKey.count)
if (derivationStatus != 0) {
print("Error: \(derivationStatus)")
return nil;
}
return derivedKey
}
hash is the hash type such as kCCPRFHmacAlgSHA1, kCCPRFHmacAlgSHA256, kCCPRFHmacAlgSHA512.
Example from sunsetted documentation section:
Password Based Key Derivation 2 (Swift 3+)
Password Based Key Derivation can be used both for deriving an encryption key from password text and saving a password for authentication purposes.
There are several hash algorithms that can be used including SHA1, SHA256, SHA512 which are provided by this example code.
The rounds parameter is used to make the calculation slow so that an attacker will have to spend substantial time on each attempt. Typical delay values fall in the 100 ms to 500 ms range; shorter values can be used if the performance cost is unacceptable.
This example requires Common Crypto
It is necessary to have a bridging header to the project:
#import <CommonCrypto/CommonCrypto.h>
Add the Security.framework to the project.
Parameters:
password password String
salt salt Data
keyByteCount number of key bytes to generate
rounds Iteration rounds
returns Derived key
func pbkdf2SHA1(password: String, salt: Data, keyByteCount: Int, rounds: Int) -> Data? {
return pbkdf2(hash:CCPBKDFAlgorithm(kCCPRFHmacAlgSHA1), password:password, salt:salt, keyByteCount:keyByteCount, rounds:rounds)
}
func pbkdf2SHA256(password: String, salt: Data, keyByteCount: Int, rounds: Int) -> Data? {
return pbkdf2(hash:CCPBKDFAlgorithm(kCCPRFHmacAlgSHA256), password:password, salt:salt, keyByteCount:keyByteCount, rounds:rounds)
}
func pbkdf2SHA512(password: String, salt: Data, keyByteCount: Int, rounds: Int) -> Data? {
return pbkdf2(hash:CCPBKDFAlgorithm(kCCPRFHmacAlgSHA512), password:password, salt:salt, keyByteCount:keyByteCount, rounds:rounds)
}
func pbkdf2(hash :CCPBKDFAlgorithm, password: String, salt: Data, keyByteCount: Int, rounds: Int) -> Data? {
let passwordData = password.data(using:String.Encoding.utf8)!
var derivedKeyData = Data(repeating:0, count:keyByteCount)
let derivationStatus = derivedKeyData.withUnsafeMutableBytes {derivedKeyBytes in
salt.withUnsafeBytes { saltBytes in
CCKeyDerivationPBKDF(
CCPBKDFAlgorithm(kCCPBKDF2),
password, passwordData.count,
saltBytes, salt.count,
hash,
UInt32(rounds),
derivedKeyBytes, derivedKeyData.count)
}
}
if (derivationStatus != 0) {
print("Error: \(derivationStatus)")
return nil;
}
return derivedKeyData
}
Example usage:
let password = "password"
//let salt = "saltData".data(using: String.Encoding.utf8)!
let salt = Data(bytes: [0x73, 0x61, 0x6c, 0x74, 0x44, 0x61, 0x74, 0x61])
let keyByteCount = 16
let rounds = 100000
let derivedKey = pbkdf2SHA1(password:password, salt:salt, keyByteCount:keyByteCount, rounds:rounds)
print("derivedKey (SHA1): \(derivedKey! as NSData)")
Example Output:
derivedKey (SHA1): <6b9d4fa3 0385d128 f6d196ee 3f1d6dbf>
Password Based Key Derivation Calibration
This example requires Common Crypto
It is necessary to have a bridging header to the project:
#import <CommonCrypto/CommonCrypto.h>
Add the Security.framework to the project.
Determine the number of PRF rounds to use for a specific delay on the current platform.
Several parameters are defaulted to representative values that should not materially affect the round count.
password Sample password.
salt Sample salt.
msec Targeted duration we want to achieve for a key derivation.
returns The number of iterations to use for the desired processing time.
Password Based Key Derivation Calibration (Swift 3)
func pbkdf2SHA1Calibrate(password: String, salt: Data, msec: Int) -> UInt32 {
let actualRoundCount: UInt32 = CCCalibratePBKDF(
CCPBKDFAlgorithm(kCCPBKDF2),
password.utf8.count,
salt.count,
CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA1),
kCCKeySizeAES256,
UInt32(msec));
return actualRoundCount
}
Example usage:
let saltData = Data(bytes: [0x73, 0x61, 0x6c, 0x74, 0x44, 0x61, 0x74, 0x61])
let passwordString = "password"
let delayMsec = 100
let rounds = pbkdf2SHA1Calibrate(password:passwordString, salt:saltData, msec:delayMsec)
print("For \(delayMsec) msec delay, rounds: \(rounds)")
Example Output:
For 100 msec delay, rounds: 93457
Password Based Key Derivation Calibration (Swift 2.3)
func pbkdf2SHA1Calibrate(password:String, salt:[UInt8], msec:Int) -> UInt32 {
let actualRoundCount: UInt32 = CCCalibratePBKDF(
CCPBKDFAlgorithm(kCCPBKDF2),
password.utf8.count,
salt.count,
CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA1),
kCCKeySizeAES256,
UInt32(msec));
return actualRoundCount
}
Example usage:
let saltData = [UInt8]([0x73, 0x61, 0x6c, 0x74, 0x44, 0x61, 0x74, 0x61])
let passwordString = "password"
let delayMsec = 100
let rounds = pbkdf2SHA1Calibrate(passwordString, salt:saltData, msec:delayMsec)
print("For \(delayMsec) msec delay, rounds: \(rounds)")
|
As the question is very broad, the answers below are kept to the point and do not go into depth on each subject.
Upgrade from CRM 20XX to 20YY
Although you could do an in-place upgrade (existing CRM Server + existing SQL database), which in essence is almost as if you were applying a cumulative update, or provision a new server and use the existing SQL server, the best and the Microsoft's recommended way to upgrade is to go about doing a Migration Upgrade (new CRM server + new SQL server).
Steps to migrate (short story):
Provision a new CRM instance with a new SQL Server instance and an SSRS instance (if applicable)
Apply any product updates/roll-ups/cumulative updates
Back up the existing CRM database
Restore the database to the new SQL Server instance provisioned
Use the Deployment Manager and start the Import Organization process pointing to the restored database, which will start the upgrade process
The long story would involve upgrading plugins with the latest version of the SDK (which means un-registering them, upgrading them and re-registering all the plugins and steps), setting up authentication, SPNs etc. I would recommend giving the article linked above a good read.
Note that the upgrade has to be incremental (e.g. 2011 - 2013 - 2015 - 2016, applicable CUs in between if any).
Best way to migrate data from 20XX to 20YY CRM
You do not need to migrate data from 20XX to 20YY if you go by the migration-upgrade route, or for that matter any supported upgrade path. There is a common assumption that an upgrade necessarily requires a data migration, but in reality it does not. Unless you are moving data from another system or changing/cleaning up your existing CRM data structure (consolidating entities, moving notes around etc.), you most probably do not need any migration.
Assuming that you need to do one of the above, some of the most used integration tools are Scribe for Microsoft Dynamics CRM and KingswaySoft for Dynamics CRM. My favourite is KingswaySoft, both for being easily extendable and for its pricing model (you could essentially buy a 3-month license and get your migration done, as migrations are a one-time operation).
Difference between CRM On-Prem and CRM Online
Apart from the cloud and licensing-model differences between the two, there are still some features which are online-exclusive (at least for the moment, or until the next on-prem update).
Choosing between the two, in my experience working with clients both online and on-prem, essentially boils down to:
Upfront cost, on-going maintenance.
Existing infrastructure (if a company is already on the cloud with Office 365, they would most probably end up going with CRM Online).
Control over upgrades and databases. On-Prem customers are usually the ones who like more control over the databases, servers, and when to update/upgrade.
Features that are online-only (although Microsoft does roll most of them out to on-premise installations too, they tend to come much slower than they do for online instances, usually 3-6 months). Some features such as inside view and social listening are online-exclusive for now.
Dynamics 365:
Dynamics 365 is a combination of ERP (GP, NAV, AX), Dynamics CRM and some integration/extension tools like Parature. From a CRM standpoint it is going to be no different from CRM 2016 Online; just the back-end data structure will probably move to more of a one-size-fits-all model. Although more details are yet to come out, from a functionality point of view it won't be a whole new product. They might come up with a couple of new features, as they do with every major release, but the CRM we know is still going to be predominantly the same.
I created a sample project at GitHub called AspNetMvcActiveDirectoryOwin. You can fork it.
There are a few steps you will want to follow -
First of all, you want to authenticate with Active Directory.
public class ActiveDirectoryService : IActiveDirectoryService
{
public bool ValidateCredentials(string domain, string userName, string password)
{
using (var context = new PrincipalContext(ContextType.Domain, domain))
{
return context.ValidateCredentials(userName, password);
}
}
public User GetUser(string domain, string userName)
{
User result = null;
using (var context = new PrincipalContext(ContextType.Domain, domain))
{
var user = UserPrincipal.FindByIdentity(context, userName);
if (user != null)
{
result = new User
{
UserName = userName,
FirstName = user.GivenName,
LastName = user.Surname
};
}
}
return result;
}
}
Second, you want to create claims which will be used in Owin Middleware.
public class OwinAuthenticationService : IAuthenticationService
{
private readonly HttpContextBase _context;
private const string AuthenticationType = "ApplicationCookie";
public OwinAuthenticationService(HttpContextBase context)
{
_context = context;
}
public void SignIn(User user)
{
IList<Claim> claims = new List<Claim>
{
new Claim(ClaimTypes.Name, user.UserName),
new Claim(ClaimTypes.GivenName, user.FirstName),
new Claim(ClaimTypes.Surname, user.LastName),
};
ClaimsIdentity identity = new ClaimsIdentity(claims, AuthenticationType);
IOwinContext context = _context.Request.GetOwinContext();
IAuthenticationManager authenticationManager = context.Authentication;
authenticationManager.SignIn(identity);
}
public void SignOut()
{
IOwinContext context = _context.Request.GetOwinContext();
IAuthenticationManager authenticationManager = context.Authentication;
authenticationManager.SignOut(AuthenticationType);
}
}
|
C# WebApi PDF download all working with Angular JS Authentication
Web Api Controller
[HttpGet]
[Authorize]
[Route("OpenFile/{QRFileId}")]
public HttpResponseMessage OpenFile(int QRFileId)
{
QRFileRepository _repo = new QRFileRepository();
var QRFile = _repo.GetQRFileById(QRFileId);
if (QRFile == null)
return new HttpResponseMessage(HttpStatusCode.BadRequest);
string path = ConfigurationManager.AppSettings["QRFolder"] + QRFile.QRId + @"\" + QRFile.FileName;
if (!File.Exists(path))
return new HttpResponseMessage(HttpStatusCode.BadRequest);
HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK);
//response.Content = new StreamContent(new FileStream(localFilePath, FileMode.Open, FileAccess.Read));
Byte[] bytes = File.ReadAllBytes(path);
//String file = Convert.ToBase64String(bytes);
response.Content = new ByteArrayContent(bytes);
response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
response.Content.Headers.ContentDisposition.FileName = QRFile.FileName;
return response;
}
Angular JS Service
this.getPDF = function (apiUrl) {
var headers = {};
headers.Authorization = 'Bearer ' + sessionStorage.tokenKey;
var deferred = $q.defer();
$http.get(
hostApiUrl + apiUrl,
{
responseType: 'arraybuffer',
headers: headers
})
.success(function (result, status, headers) {
deferred.resolve(result);;
})
.error(function (data, status) {
console.log("Request failed with status: " + status);
});
return deferred.promise;
}
this.getPDF2 = function (apiUrl) {
var promise = $http({
method: 'GET',
url: hostApiUrl + apiUrl,
headers: { 'Authorization': 'Bearer ' + sessionStorage.tokenKey },
responseType: 'arraybuffer'
});
promise.success(function (data) {
return data;
}).error(function (data, status) {
console.log("Request failed with status: " + status);
});
return promise;
}
Either one will do
Angular JS Controller calling the service
vm.open3 = function () {
var downloadedData = crudService.getPDF('ClientQRDetails/openfile/29');
downloadedData.then(function (result) {
var file = new Blob([result], { type: 'application/pdf;base64' });
var fileURL = window.URL.createObjectURL(file);
var seconds = new Date().getTime() / 1000;
var fileName = "cert" + parseInt(seconds) + ".pdf";
var a = document.createElement("a");
document.body.appendChild(a);
a.style = "display: none";
a.href = fileURL;
a.download = fileName;
a.click();
});
};
And lastly, the HTML page
<a class="btn btn-primary" ng-click="vm.open3()">FILE Http with crud service (3 getPDF)</a>
This will be refactored; I'm just sharing the code now. Hope it helps someone, as it took me a while to get this working.
Another way to manually generate an OAuth2 access token is to use an instance of AuthorizationServerTokenServices:
@Autowired
private AuthorizationServerEndpointsConfiguration configuration;
@Override
public String generateOAuth2AccessToken(User user, List<Role> roles, List<String> scopes) {
Map<String, String> requestParameters = new HashMap<String, String>();
Map<String, Serializable> extensionProperties = new HashMap<String, Serializable>();
boolean approved = true;
Set<String> responseTypes = new HashSet<String>();
responseTypes.add("code");
// Authorities
List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>();
for(Role role: roles)
authorities.add(new SimpleGrantedAuthority("ROLE_" + role.getName()));
OAuth2Request oauth2Request = new OAuth2Request(requestParameters, "clientIdTest", authorities, approved, new HashSet<String>(scopes), new HashSet<String>(Arrays.asList("resourceIdTest")), null, responseTypes, extensionProperties);
UsernamePasswordAuthenticationToken authenticationToken = new UsernamePasswordAuthenticationToken(user.getUsername(), "N/A", authorities);
OAuth2Authentication auth = new OAuth2Authentication(oauth2Request, authenticationToken);
AuthorizationServerTokenServices tokenService = configuration.getEndpointsConfigurer().getTokenServices();
OAuth2AccessToken token = tokenService.createAccessToken(auth);
return token.getValue();
}
|
That Docker commands hanging bug happened after I deleted a container.
The daemon dockerd was in an abnormal state: it couldn't be started (sudo service docker start) after having been stopped (service docker stop).
# sudo service docker start
Redirecting to /bin/systemctl start docker.service
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
# journalctl -xe
kernel: device-mapper: ioctl: unable to remove open device docker-253:0-19468577-d6f74dd67f106d6bfa483df4ee534dd9545dc8ca
...
systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start Docker Application Container Engine.
systemd[1]: Unit docker.service entered failed state.
systemd[1]: docker.service failed.
polkitd[896]: Unregistered Authentication Agent for unix-process:22551:34177094 (system bus name :1.290, object path /org
kernel: dev_remove: 41 callbacks suppressed
kernel: device-mapper: ioctl: unable to remove open device docker-253:0-19468577-fc63401af903e22d05a4518e02504527f0d7883f9d997d7d97fdfe72ba789863
...
dockerd[22566]: time="2016-11-28T10:18:09.840268573+01:00" level=fatal msg="Error starting daemon: timeout"
systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start Docker Application Container Engine.
Moreover, many zombie Docker processes could be observed using ps -eax | grep docker (presence of a "Z" in the "STAT" column), for example docker-proxies.
After rebooting the server and restarting Docker, the zombie processes disappeared and Docker commands were working again. |
1. The result array for a WRITE command is a single byte of value 10. What does this mean and what other values should I look out for?
The value 10 (Ah in hexadecimal or 1010b in binary representation) is an explicit ACK, an acknowledgement returned when a command that returns no data succeeds.
The possible values are actual data, ACK, passive ACK, or NACK. These are defined by the NFC Forum Digital Protocol specification and by the NFC Forum Type 2 Tag Operation specification.
If the command is expected to return actual data on success, the data is returned instead of an explicit ACK value.
ACK is defined as a 4-bit short frame (see NFC Forum Digital Protocol specification and ISO/IEC 14443-3 for further details) with the value 1010b (Ah).
A passive ACK is defined as the tag not sending a response at all within a certain timeout.
NACK is defined as a 4-bit short frame with the value 0x0xb (where x is either 0 or 1).
The NTAG213/215/216 product data sheet is a bit more specific on possible NACK values:
0000b (0h) indicates an invalid command argument.
0001b (1h) indicates a parity or CRC error.
0100b (4h) indicates an invalid authentication counter overflow.
0101b (5h) indicates an EEPROM write error.
In addition to the above, the NFC stack implementations on some devices do not properly propagate NACK responses to the app. Instead they either throw a TagLostException or return null. Similarly, you might(?) get a TagLostException indicating a passive ACK.
Thus, you would typically check the result of the transceive method for the following (unless you send a command that is expected to result in a passive ACK):
try {
response = nfca.transceive(command);
if (response == null) {
// either communication to the tag was lost or a NACK was received
} else if ((response.length == 1) && ((response[0] & 0x00A) != 0x00A)) {
// NACK response according to Digital Protocol/T2TOP
} else {
// success: response contains ACK or actual data
}
} catch (TagLostException e) {
// either communication to the tag was lost or a NACK was received
}
2. I expected the READ method to return 4 bytes of user data (i.e. the 4 bytes that correspond to my pageNum), but it returned 16 bytes. Why is that the case?
The READ command is defined to return 4 blocks of data starting with the specified block number (in the NFC Forum Type 2 Tag Operation specification). Thus, if you send a READ command for block 4, you get the data of blocks 4, 5, 6, and 7.
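For illustration, a READ of block 4 might look like the sketch below (0x30 is the Type 2 Tag READ command; nfca is the connected NfcA instance from your code, and the block number is just an example):
byte[] response = nfca.transceive(new byte[] {
        (byte) 0x30,  // READ command (Type 2 Tag operation)
        (byte) 0x04   // start block; blocks 4, 5, 6 and 7 come back (16 bytes)
});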
3. Is it good practise to check nfcA.isConnected() before calling nfcA.connect() and, if so, is there likely to be any significant performance penalty in doing so?
If you receive the Tag handle directly from the NFC system service (through an NFC intent) the tag won't be connected. So unless you use the Tag handle before calling nfca.connect(), I don't see why you would want to call nfca.isConnected() before. However, calling that method before connecting has barely any performance overhead since calling isConnected() on a closed tag technology object will be handled by the framework API without calling into the NFC system service. Hence, it's not much more overhead than a simple if over a boolean member variable of the NfcA object.
4. Is it better to call nfcA.setTimeout() before or after nfcA.connect()?
I'm not sure about that one. However, the transceive timeout is typically reset on disconnecting the tag technology.
5. For my NTAG213 tags nfcA.getMaxTransceiveLength() returns 253. Does that really mean I can write up to 251 bytes of user data (plus the 2 other bytes) in one go and, if so, is that advisable or is it better to write each page (4 bytes) with separate nfcA.transceive() calls?
No, you can only write one block at a time. This is limited by the WRITE command of the NTAG213, which only supports one block as data input.
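So writing more than 4 bytes of user data means issuing one transceive per block. A minimal sketch for a single block (blockNum and data are placeholders for your page number and its 4 bytes of data):
byte[] ack = nfcA.transceive(new byte[] {
        (byte) 0xA2,                // WRITE command (one 4-byte block per call)
        (byte) (blockNum & 0x0ff),  // block number to write
        data[0], data[1], data[2], data[3]
});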
However, a transceive buffer size of 253 allows you to use the FAST_READ command to read multiple blocks (up to 62, so up to 45 for the NTAG213) at a time:
int firstBlockNum = 0;
int lastBlockNum = 42;
byte[] result = nfcA.transceive(new byte[] {
(byte)0x3A, // FAST_READ
(byte)(firstBlockNum & 0x0ff),
(byte)(lastBlockNum & 0x0ff),
});
|
I was struggling with the same problem. Following are the key points I followed.
In Jenkins pipeline job,
Under Build Triggers, check 'Trigger builds remotely (e.g., from scripts)' and fill in the 'Authentication Token' with some random and unique token.
In BitBucket repository,
Go to Settings > Services
Select 'Jenkins' from the drop down and 'Add service'.
Check 'Csrf Enabled'
Endpoint : http://username:apitoken@yourjenkinsurl.com/
You can find username and apitoken at Jenkins home > People
Select the user and click on configure. Under 'API Token' click on the 'Show API Token' button and you see the username and apitoken
Module name : This is optional. It can be any particular file or folder which is to be watched.
Project name : The project name in Jenkins.
If the job is in some folder structure, say I have 'MyTestFolder/MyTestPipelineJob', Project name to be mentioned is 'MyTestFolder/job/MyTestPipelineJob'
Token : 'Authentication Token' created in Jenkins job.
You are ready to go!!
I referred to http://felixleong.com/blog/2012/02/hooking-bitbucket-up-with-jenkins/ and some of my instincts. :)
Our solution was to replicate the IdentityServer3's partial login: use a custom cookie to persist data between steps.
First, we need to register our custom cookie authentication (at Startup.Configure)
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationScheme = "my-partial",
AutomaticAuthenticate = false,
AutomaticChallenge = false
});
The first step/entry point of the login workflow should be mapped to GET /account/login (as of IdentityServer4 1.0.0-rc2).
In the second step, after the credentials are sent and verified, we persist the username (and optionally any other data) into a cookie.
Code:
var claims = new []
{
new Claim("my-user", username),
new Claim("some-attribute", someAttribute)
};
await HttpContext.Authentication
.SignInAsync("my-partial", new ClaimsPrincipal(new ClaimsIdentity(claims)));
Important: avoid using POST /account/login as a second step. Because regardless of your result, IdentityServer's middleware will redirect you back to the authorization endpoint (as of RC2). Just pick any other path.
At your last step, the key parts are:
we read the persisted data from the cookie
remove the partial cookie
sign in the "real" user
redirect to returnUrl (this was added to the first step as a query parameter. Don't forget to send it along)
In code
var partialUser = await HttpContext.Authentication.AuthenticateAsync("my-partial");
var username = partialUser?.Claims.FirstOrDefault(c => c.Type == "my-user")?.Value;
var claims = new [] { /* Your custom claims */};
await HttpContext.Authentication
.SignOutAsync("my-partial");
await HttpContext.Authentication
.SignInAsync(username, username, claims);
return Redirect(returnUrl);
In addition, you might want to validate inputs, for example return to the first step, if there is no partial cookie, etc. |
Since the Firebase back-end services are hosted in the cloud, they are by nature accessible by anyone. There is no way to limit their access to only people that are using the code that you write. Any developer can download the SDK, rewrite your code and use that to access the same back-end services.
That's why you secure access to Firebase data (whether structured data in the database or files in storage) through user-based security. Making your users sign in to the app, means that you can identify who is accessing the data. Once you've authenticated the users, you can use Firebase's security rules (for database or storage) to ensure they can only access the data they're authorized for. They may still be using other code, but you'll at least know who they are and be assured that they can only access the data in ways you authorized.
You can get the best of both worlds (requiring users to be authenticated, without requiring them to log-in) by using anonymous authentication. Just keep in mind that there too, any developer can download the Firebase SDK and authenticate the user anonymously.
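For illustration only, here is a minimal Android-style sketch of anonymous sign-in (the class and method around it are hypothetical; the FirebaseAuth calls are the standard SDK ones):
import com.google.firebase.auth.FirebaseAuth;

public class AnonymousSignIn {
    public static void signIn() {
        FirebaseAuth auth = FirebaseAuth.getInstance();
        auth.signInAnonymously().addOnCompleteListener(task -> {
            if (task.isSuccessful() && auth.getCurrentUser() != null) {
                // auth.uid is now available to your security rules,
                // so rules like ".read": "auth != null" apply to this user
                String uid = auth.getCurrentUser().getUid();
            } else {
                // sign-in failed; data protected by rules stays inaccessible
            }
        });
    }
}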
For an older discussion on the topic (for the database, but it applies equally to storage), see How to prevent other access to my firebase |
Since it's a Console Application you need to write code in the Main method to wait, so the program does not close immediately after running.
static void Main()
{
FileSystemWatcher watcher = new FileSystemWatcher();
string filePath = ConfigurationManager.AppSettings["documentPath"];
watcher.Path = filePath;
watcher.EnableRaisingEvents = true;
watcher.NotifyFilter = NotifyFilters.FileName;
watcher.Filter = "*.*";
watcher.Created += new FileSystemEventHandler(OnChanged);
// wait - not to end
new System.Threading.AutoResetEvent(false).WaitOne();
}
Your code only tracks changes in the root folder, if you wanted to watch the subfolders you need to set IncludeSubdirectories=true for your watcher object.
static void Main(string[] args)
{
FileSystemWatcher watcher = new FileSystemWatcher();
string filePath = @"d:\watchDir";
watcher.Path = filePath;
watcher.EnableRaisingEvents = true;
watcher.NotifyFilter = NotifyFilters.FileName;
watcher.Filter = "*.*";
// will track changes in sub-folders as well
watcher.IncludeSubdirectories = true;
watcher.Created += new FileSystemEventHandler(OnChanged);
new System.Threading.AutoResetEvent(false).WaitOne();
}
You must also be aware of buffer overflow. From the MSDN FileSystemWatcher documentation:
The Windows operating system notifies your component of file changes
in a buffer created by the FileSystemWatcher. If there are many
changes in a short time, the buffer can overflow. This causes the
component to lose track of changes in the directory, and it will only
provide blanket notification. Increasing the size of the buffer with
the InternalBufferSize property is expensive, as it comes from
non-paged memory that cannot be swapped out to disk, so keep the
buffer as small as possible, yet large enough to not miss any file change events.
To avoid a buffer overflow, use the NotifyFilter and
IncludeSubdirectories properties so you can filter out unwanted change
notifications.
|
I struggled with the same problem for a few days before arriving at a solution. In answer to your question: yes, you should be able to get the e-mail address back in your claims as long as you:
Include the profile or email scope in your request, and
Configure your application in the Azure Portal Active Directory section to include Sign in and read user profile under Delegated Permissions.
Note that the e-mail address may not be returned in an email claim: in my case (once I got it working) it's coming back in a name claim.
However, not getting the e-mail address back at all could be caused by one of the following issues:
No e-mail address associated with the Azure AD account
As per this guide to Scopes, permissions, and consent in the Azure Active Directory v2.0 endpoint, even if you include the email scope you may not get an e-mail address back:
The email claim is included in a token only if an email address is associated with the user account, which is not always the case. If it uses the email scope, your app should be prepared to handle a case in which the email claim does not exist in the token.
If you're getting other profile-related claims back (like given_name and family_name), this might be the problem.
Claims discarded by middleware
This was the cause for me. I wasn't getting any profile-related claims back (first name, last name, username, e-mail, etc.).
In my case, the identity-handling stack looks like this:
IdentityServer3
IdentityServer3.AspNetIdentity
A custom Couchbase storage provider based on couchbase-aspnet-identity
The problem was in the IdentityServer3.AspNetIdentity AspNetIdentityUserService class: the InstantiateNewUserFromExternalProviderAsync() method looks like this:
protected virtual Task<TUser> InstantiateNewUserFromExternalProviderAsync(
string provider,
string providerId,
IEnumerable<Claim> claims)
{
var user = new TUser() { UserName = Guid.NewGuid().ToString("N") };
return Task.FromResult(user);
}
Note it passes in a claims collection then ignores it. My solution was to create a class derived from this and override the method to something like this:
protected override Task<TUser> InstantiateNewUserFromExternalProviderAsync(
string provider,
string providerId,
IEnumerable<Claim> claims)
{
var user = new TUser
{
UserName = Guid.NewGuid().ToString("N"),
Claims = claims
};
return Task.FromResult(user);
}
I don't know exactly what middleware components you're using, but it's easy to see the raw claims returned from your external provider; that'll at least tell you they're coming back OK and that the problem is somewhere in your middleware. Just add a Notifications property to your OpenIdConnectAuthenticationOptions object, like this:
// Configure Azure AD as a provider
var azureAdOptions = new OpenIdConnectAuthenticationOptions
{
AuthenticationType = Constants.Azure.AuthenticationType,
Caption = Resources.AzureSignInCaption,
Scope = Constants.Azure.Scopes,
ClientId = Config.Azure.ClientId,
Authority = Constants.Azure.AuthenticationRootUri,
PostLogoutRedirectUri = Config.Identity.RedirectUri,
RedirectUri = Config.Azure.PostSignInRedirectUri,
AuthenticationMode = AuthenticationMode.Passive,
TokenValidationParameters = new TokenValidationParameters
{
ValidateIssuer = false
},
Notifications = new OpenIdConnectAuthenticationNotifications
{
AuthorizationCodeReceived = context =>
{
// Log all the claims returned by Azure AD
var claims = context.AuthenticationTicket.Identity.Claims;
foreach (var claim in claims)
{
Log.Debug("{0} = {1}", claim.Type, claim.Value);
}
return Task.FromResult(0);
}
},
SignInAsAuthenticationType = signInAsType // this MUST come after TokenValidationParameters
};
app.UseOpenIdConnectAuthentication(azureAdOptions);
See also
This article by Scott Brady contains a section on Claims Transformation which may be useful if neither of the above fixes it.
This discussion on the IdentityServer3 GitHub account was a huge help to me, especially this response.
|
TL;DR a web client uses CONNECT only when it knows it talks to a proxy and the final URI begins with https://.
When a browser says:
CONNECT www.google.com:443 HTTP/1.1
it means:
Hi proxy, please open a raw TCP connection to google; any following
bytes I write, you just repeat over that connection without any
interpretation. Oh, and one more thing. Do that only if you talk to
Google directly, but if you use another proxy yourself, instead you
just tell them the same CONNECT.
Note how this says nothing about TLS (https). In fact CONNECT is orthogonal to TLS; you can have one without the other, or you can have both of them.
That being said, the intent of CONNECT is to allow an end-to-end encrypted TLS session, so the data is unreadable to a proxy (or a whole proxy chain). It works even if a proxy doesn't understand TLS at all, because CONNECT can be issued inside plain HTTP and requires nothing more from the proxy than copying raw bytes around.
But the connection to the first proxy can be TLS (https) although it means a double encryption of traffic between you and the first proxy.
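To make the mechanics concrete, here is a rough Java sketch of what a client does when it knows it talks to a proxy (the proxy host and port are hypothetical): open a plain TCP connection, issue CONNECT, check for a 2xx answer, and only then start TLS with the final server over that same socket.
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ConnectThroughProxy {
    public static void main(String[] args) throws Exception {
        // Plain TCP connection to the (hypothetical) proxy.
        Socket proxy = new Socket("proxy.example.com", 3128);
        OutputStream out = proxy.getOutputStream();
        out.write(("CONNECT www.google.com:443 HTTP/1.1\r\n"
                + "Host: www.google.com:443\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
        out.flush();

        // The proxy answers with something like "HTTP/1.1 200 Connection established".
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(proxy.getInputStream(), StandardCharsets.US_ASCII));
        String status = reader.readLine();
        if (status == null || !status.contains(" 200 ")) {
            throw new IllegalStateException("Tunnel not established: " + status);
        }
        String line;
        while ((line = reader.readLine()) != null && !line.isEmpty()) {
            // skip the rest of the proxy's response headers
        }

        // Only now start TLS, end-to-end with the target; the proxy just copies bytes.
        SSLSocket tls = (SSLSocket) ((SSLSocketFactory) SSLSocketFactory.getDefault())
                .createSocket(proxy, "www.google.com", 443, true);
        tls.startHandshake();
        // ... speak HTTP over 'tls' as if connected directly ...
    }
}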
Obviously, it makes no sense to CONNECT when talking directly to the final server. You just start talking TLS and then issue HTTP GET. The end servers normally disable CONNECT altogether.
To a proxy, CONNECT support adds security risks. Any data can be passed through CONNECT, even an ssh hacking attempt against a server on 192.168.1.*, even SMTP sending spam. The outside world sees these attacks as regular TCP connections initiated by the proxy. It doesn't care about the reason and cannot check whether an HTTP CONNECT is to blame. Hence it's up to proxies to secure themselves against misuse.
At first glance I see three issues here:
First: you are not using the same mode: in Java you have AES/ECB/PKCS5Padding whereas your PHP uses AES-128-CBC.
Second: you probably aren't using the same IV's in the Java and PHP code (IV's are irrelevant for ECB, but once you switch your java to CBC you will need it):
You have $iv = '1010101010101010' (which is then passed to openssl) in your PHP but nothing like that in your Java.
At the very least, you will probably need something like that in your Java part as well:
cipher.init(Cipher.DECRYPT_MODE/ENCRYPT_MODE, secretKey, new IvParameterSpec(iv))
with iv being a byte[] containing your IV bytes.
Third: once the problems above are addressed, padding may be the next breaking thing: your java cipher specification mentions PKCS5Padding. You need to make sure that both of your counterparts use the same.
Edit: Fourth: one more issue is the way you derive the key bits to be used. In Java you take the first 16 bytes of a SHA-1 hash, and in PHP you just pass $key to openssl, which might derive the encryption key in a different way.
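Putting the mode, IV and padding points together on the Java side, a minimal sketch could look like this (the 16-byte key is a hypothetical placeholder; the IV is the literal '1010101010101010' string from the PHP snippet, and the PHP side must be given the same raw 16 key bytes for the outputs to match):
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AesCbcSketch {
    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8); // hypothetical 16-byte key
        byte[] iv  = "1010101010101010".getBytes(StandardCharsets.UTF_8); // same literal IV as the PHP code

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));

        byte[] cipherText = cipher.doFinal("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(Base64.getEncoder().encodeToString(cipherText));
    }
}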
When building cryptography-related tools using block ciphers, it's always nice to revisit classics like Block cipher mode of operation and Padding on Wikipedia, to get a sense of what is going on under the hood. |
Have you tried using root_url instead of root_path in your SessionsController?
HTTP requires a fully qualified URL when doing a 302 redirect. The _url method provides an absolute path, including protocol and server name. The _path method provides a relative path, while assuming the same server and protocol as the current URL.
Try switching to root_url and let me know if anything changes!
Edit: The reason I suggest using an absolute path is because you mentioned the redirect was working fine with just one table, before you added another table and nested routes.
Edit: One other thing I noticed is the authentication block below is not placed within the sessions#create route, as you would expect.
if @user && @user.authenticate(password)
session[:user_id] = @user.id
redirect_to root_path, notice: "Logged in successfully"
else
redirect_to login_path, alert: "Invalid Username/Password combination"
end
Is there a reason it exists outside of a method? Try moving it back inside the create route. |
If you are using Google Sign-In for authentication, there is no out of the box support for forcing your user to authenticate with Google every time they use your app.
This makes sense, because the user is still authed with Google on your phone. A login system only authenticates the user; it doesn't inherently protect data stored on the device. As long as Google has a valid access token, the user won't have to type a username and password again (and simply clicking "login with Google" again doesn't really provide extra protection here).
If your primary concern is blocking access to users who have left the company, you should be covered if you are using Google Apps for your company. If you disable the user's account, their access tokens should become invalid. Google Apps admins can also manually revoke access to specific apps for specific users.
If you don't use Google Apps (e.g. your users are using @gmail.com accounts or accounts from a domain outside of your control), you might want to consider implementing a list of users allowed to access the application, and verify the current user has access by checking that list via an API call on launch.
If the goal is really protecting the confidential information in the application, you might want to take an approach similar to Android Pay in which you require your user to set and enter a PIN number to access the application. As an added benefit, you can then use that PIN to encrypt any confidential data you are storing locally. |
At present, Azure AD doesn't support passing a custom query string.
As a workaround, you may use the state parameter. For more about the parameters supported by Azure AD, you can refer to the links below:
Authorize access to web applications using OAuth 2.0 and Azure Active Directory
Authorize access to web applications using OpenID Connect and Azure Active Directory
Update
app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
Authority = "",
ClientId = "",
ClientSecret= "",
Scope = "",
RedirectUri = "",
TokenValidationParameters = new TokenValidationParameters
{
ValidateIssuer = false
},
Notifications=new OpenIdConnectAuthenticationNotifications
{
RedirectToIdentityProvider= onRedirectToIdentityProvider,
MessageReceived= onMessageReceived
}
});
private Task onMessageReceived(MessageReceivedNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> notification)
{
string mycustomparameter;
var protectedState = notification.ProtocolMessage.State.Split('=')[1];
var state = notification.Options.StateDataFormat.Unprotect(protectedState);
state.Dictionary.TryGetValue("mycustomparameter", out mycustomparameter);
return Task.FromResult(0);
}
private Task onRedirectToIdentityProvider(RedirectToIdentityProviderNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> notification)
{
var stateQueryString = notification.ProtocolMessage.State.Split('=');
var protectedState = stateQueryString[1];
var state = notification.Options.StateDataFormat.Unprotect(protectedState);
state.Dictionary.Add("mycustomparameter", "myvalue");
notification.ProtocolMessage.State = stateQueryString[0] + "=" + notification.Options.StateDataFormat.Protect(state);
return Task.FromResult(0);
}
|
One major risk is that any single cross-site scripting vulnerability in your application could be used to steal the token from the cookie, because it's not httpOnly (while I understand why that is the case). XSS in a javascript-heavy application like an SPA is very common and hard to avoid.
Also you're saying the token is kept in the cookie so that after closing the browser, the user is still logged in. On the one hand, that's bad practice, a user closing the browser probably expects being logged out. On the other hand, this means the cookie is persisted to disk, so it is much easier for an attacker to steal it from the client.
Another thing that comes to mind is cross-site request forgery (CSRF), but if I understand correctly, authentication is actually based on the Authorization header, where the token is copied into each request. If that's the case, CSRF is not an issue for you (but it would be, if sending the token in the cookie were enough).
So at the very least, I think you should
not use a persisted cookie for the token
try to minimize the chance of XSS (eg. by automatically scanning your code, but that will never be 100%, also by carefully choosing secure by default technologies)
make sure authentication is based on the Authorization header and not the cookie
Still mainly because of the XSS risk, I would probably not recommend doing it this way in a security-critical application. |
As briefly explained in the comments section, the methodology to use is CORS.
This is the idea that we re-direct the request so it becomes an internal request and the server then allows the request.
Let's say Site A is the domain we're contacting to login, a request from Site B is not necessary to start with. Instead, we redirect the OP to Site A's authentication portal. Once OP authenticates, Site A will redirect to Site B with an authentication code (usually as a GET request in the URI) like this:
www.sitea.com/someController/someAction?someUrl=www.siteb.com&scope=user.Profile
www.siteb.com/someController/someAction?someAuth=someCode
Notice that in the Site A redirect, we also include some GET parameters in the URI that contain the scope of information we're requesting and, furthermore, the site return URI, which you can validate in Site A.
Once Site B receives an authentication, the handshake can be created. A cURL request can be sent to Site A containing the Authentication and a response about the user will be returned.
www.siteA.com/someController/someAction?someAuth=someCode
Which would return something similar to a JSON encoded response like this:
{
    "username": "someString",
    "recentlyVisited": [
        "someDate",
        "someDate"
    ],
    "email": "someString"
}
|
I would recommend for this case to expose your access to the binary content (e.g. the photo) via a JavaAdapter, which would give you more freedom on the types of payload you can submit and return from the MFP server.
One way would be to handle the picture as a Base64 and send it inside a JSON. The same process can be used to read photos.
@PUT
@Path("/addPhoto")
@Produces("application/json")
@Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_FORM_URLENCODED})
public String receivePhoto(@FormParam(value="photoId") String photoId, @FormParam(value="data") String photoData){
    // Decode the Base64 payload back into raw bytes, then convert/store it in the format of choice.
    byte[] rawPhoto = java.util.Base64.getDecoder().decode(photoData);
    // ... persist rawPhoto somewhere ...
    return "{\"photoId\":\"" + photoId + "\",\"size\":" + rawPhoto.length + "}";
}
The key point here is to work with the annotations @Produces and @Consumes to explore the best way to handle your binary data, whether in a more compact format (like raw png/jpeg) or as Base64 that can be used inside of a JSON response.
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/adapters/java-adapters/java-http-adapter/
Depending on how much of your load is binary, it would be worth checking whether the use of "confidential clients" or "protecting external resources" would be more appropriate for your use case. Then, instead of creating an adapter in MFP, you can use MobileFirst to help with the security setup while a 3rd party layer handles the binary data manipulation.
confidential clients: https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/authentication-and-security/confidential-clients/
protecting-external-resources: https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/authentication-and-security/protecting-external-resources/
In short, the possible alternatives are:
Use of binaries encoded as Base64
Explore JavaAdapters to handle custom payloads
3rd party protected access by exploring "confidential clients" or "protecting external resources" alternatives.
Hope this information helps. |
Well firstly, SHA-256 is not an encryption algorithm.
Digital signatures can use SHA256 as digest algorithm, yes. As you can see in the source of the QuerySignatureUtil, the actual algorithm is now configurable and can take lots of different values. The configuration retrieval is done with the SystemPropertiesManager calls in your snippet, and the config can come from two places:
For fedlet: the properties should be defined in FederationConfig.properties.
For the OpenAM server, the settings can be found under the Common Federation Configuration in the Global settings.
If you want to take a look at the digital signature implementation, then there are two classes of interest:
FMSigProvider: this class deals with proper XML signatures, all the digital signatures will be part of the XML document as per xmldsig spec.
QuerySignatureUtil: this class mainly deals with querystring signing, which has different set of rules than regular XML signatures. In this case the signature will not be part of the signed XML document, instead the signature will be put on the query string. The SAML binding spec that describes the HTTP-Redirect binding discusses this in more details.
If you want to control the DigestMethod value within the digital signature, then you need to have a look at OPENAM-7778, that was implemented in 13.5.0.
If you want to encrypt SAML messages using 256 bit encryption algorithms, then you will need to install the JCE jurisdiction files, after that, you should be able to configure http://www.w3.org/2001/04/xmlenc#aes256-cbc as XML encryption algorithm. |
You're not assigning your db connection within a usable scope. You are returning it from your dbConnect() function, you're just not doing anything with the returned value.
dbConnect('sessions'); should be $cxn = dbConnect('sessions');
You would be alerted to the cause of the problem if you used mysqli_error() to let MySQL tell you what it didn't like.
Finally, you should be using bound parameters in your query instead of injecting user-provided data directly. Search something like "mysqli bound parameters" to learn how to do this. What you have now is open to attack.
For your INSERT statement, use bound parameters:
$sql = "INSERT INTO sessions.users (userid, password, fullname, email, notes) VALUES (?, ?, ?, ?, ?)";
$stmt = mysqli_prepare($sql);
mysqli_stmt_bind_param($stmt, 'issss', $newID, PASSWORD($newpass), $fullName, $email, $notes);
if(!mysqli_stmt_execute($stmt))
{
// use this for debugging, but do NOT leave it in production code
die(mysqli_error($cxn));
}
mysqli_stmt_close($stmt);
Above is untested so it's possible you'll have to make small adjustments. It should give you a pretty good idea of what to do at least.
Also, make sure that PASSWORD() is really what you want to be using. I have a feeling it's not, but I don't want to assume. |
JavaScript is well known for offering a multitude of ways to accomplish the same result.
Based on your code snippets it's very subjective to say which is more practical. Both of them are fairly simple, just to prove the concept (you wouldn't normally have a function that only returns a string).
To answer your comment, let's try to imagine a more practical situation.
var isAuth = true; // lets imagine this comes as result of non trivial authentication process
function customGreeting(isAuthenticated) {
return isAuthenticated ? "Hello, welcome back" : "Please Sign in to continue";
}
document.getElementById('demo').innerHTML = customGreeting(isAuth);
The code above uses the approach from the first snippet. Some might say it is more 'elegant' than using the second approach (below).
var isAuth = true; // lets imagine this comes as result of non trivial authentication process
function customGreeting(isAuthenticated) {
if(isAuthenticated){
document.getElementById("demo").innerHTML = "Hello, welcome back";
}else{
document.getElementById("demo").innerHTML = "Please Sign in to continue";
}
}
customGreeting(isAuth);
If we set the functional programming approach aside (which is out of scope for this discussion), it's very hard to argue which approach is better.
Currently Traffic Manager can't probe URIs behind authentication walls. That goes for Basic HTTP authentication as well.
i.e. If you're responding with a redirect,
HTTP/1.1 302 Found
Location: https://token.service
Traffic Manager will mark your endpoint Unhealthy since it expects a 200 OK back.
You'll need a page/controller/route/whathaveyou that doesn't require authentication, and returns 200 OK back to the Traffic Manager probe.
e.g. http://example.com/health
Put all your health logic in there - you could for example check if your database and Redis cache is healthy, and then return 200 OK, else return 5xx.
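As a rough, stack-agnostic illustration using the JDK's built-in HTTP server (the two check methods are placeholders for your own database/Redis probes):
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            boolean healthy = checkDatabase() && checkCache();
            byte[] body = (healthy ? "OK" : "UNHEALTHY").getBytes(StandardCharsets.UTF_8);
            // Traffic Manager only treats 200 as healthy, so return 5xx on failure.
            exchange.sendResponseHeaders(healthy ? 200 : 503, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }

    private static boolean checkDatabase() { return true; } // placeholder for a real probe
    private static boolean checkCache()    { return true; } // placeholder for a real probe
}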
From https://azure.microsoft.com/en-gb/documentation/articles/traffic-manager-monitoring/:
Note:
Traffic Manager only considers an endpoint to be online if the return message is 200 OK. An endpoint is unhealthy when any of the following events occur:
A non-200 response is received (including a different 2xx code, or a 301/302 redirect)
Request for client authentication
Timeout (the timeout threshold is 10 seconds)
Unable to connect
|
you can use .duplicated(keep=False) method:
In [138]: df['API Name'].duplicated(keep=False)
Out[138]:
0 False
1 True
2 False
3 True
Name: API Name, dtype: bool
In [139]: df[df['API Name'].duplicated(keep=False)]
Out[139]:
API Name Test Result Risk Rating Vulnerability Category
1 https://api-test1.com PASS MEDIUM Authentication Test
3 https://api-test1.com FAIL CRITICAL Configuration Management
UPDATE: you don't need such variables (api1, api2, etc.) as you can always easily access your data in the DataFrame:
In [152]: apis = df['API Name'].unique()
In [153]: apis
Out[153]: array(['https://api-test.com', 'https://api-test1.com', 'https://api-test2.com'], dtype=object)
In [154]: for api in apis:
...: print(df.loc[df['API Name'] == api])
...:
API Name Test Result Risk Rating Vulnerability Category
0 https://api-test.com FAIL LOW Information Gathering
API Name Test Result Risk Rating Vulnerability Category
1 https://api-test1.com PASS MEDIUM Authentication Test
3 https://api-test1.com FAIL CRITICAL Configuration Management
API Name Test Result Risk Rating Vulnerability Category
2 https://api-test2.com SKIP HIGH Web Service
|
What is a Service Principal Name?
The SPN represents the service entry point into your SQL server for clients to find (using DNS) when they will be using Kerberos authentication.
SPNs are written as a service class followed by the fully-qualified DNS name of the host the service is running on (and optionally appended with the Kerberos realm name). For example, if your SQL server were named 'sqlserver1' and your AD domain name was 'acme.com', the SPN would be written as: MSSQLSvc/sqlserver1.acme.com.
The SPN itself is found inside the Kerberos database, and clients during the authentication process reach out to DNS to find the IP target host and the Kerberos database (KDC) holding the service principal, grab a Kerberos service ticket from the KDC and use that to single sign-on authenticate to the server running on the target service named in the SPN.
Configuring SPNs
In AD, in the properties of the computer object representing your SQL server, you will add the SPN, and optionally configure Kerberos delegation for that service. You could optionally add the SPN to a user account running the SQL service in AD instead.
In your scenario, Kerberos should actually be the primary authentication method, with NTLM used only as a fallback. If you set up DNS, AD, Kerberos delegation and the target server correctly, you should never have to fall back to NTLM. With SharePoint, you would use Kerberos to SSO into SharePoint, and then you could optionally allow Kerberos delegation for that same user account to be able to run SQL statements on the SQL DB server as themselves.
None of this is for the faint of heart, and I have actually not set up this precise scenario myself; I just know the underlying concepts. Instead, my experience is mainly setting up Kerberos SSO with Active Directory authentication for web applications running on Linux platforms. But you asked what an SPN was for and that's what I've answered.
Further Reading
I googled and found this link for you for actually setting up your scenario, it talks about configuring SharePoint with Active Directory with SQL server using Kerberos delegation: Plan for Kerberos authentication in SharePoint 2013 |
I might be missing something, but what Google requires and what's also specified by OAuth2 is that when refreshing a token from a confidential client application the client must authenticate itself.
The most common type of credentials being used for confidential clients are a client identifier alongside a client secret. This information is issued to a client application and is unrelated to the end-user.
By requiring client authentication the authorization server can be sure the request comes from a specific client and adjust its response accordingly. For example, an authorization server can decide that certain permissions - scopes - can only be requested from confidential clients.
The argument around reducing the number of times the client secret needs to be sent over the wire is a non-issue. OAuth2 mandates that the communication happens over TLS and if you have issues with sending the secret then you would also have issues with sending bearer access tokens.
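For reference, a confidential client's refresh token request typically authenticates with HTTP Basic credentials over TLS. A rough sketch with plain HttpURLConnection (the endpoint, client id/secret and token values are hypothetical placeholders):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RefreshTokenCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; the token endpoint and credentials come from your provider.
        String clientId = "my-client";
        String clientSecret = "my-secret";
        String refreshToken = "stored-refresh-token";

        URL tokenEndpoint = new URL("https://auth.example.com/oauth2/token");
        HttpURLConnection conn = (HttpURLConnection) tokenEndpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);

        // Client authentication: Basic base64(client_id:client_secret), sent over TLS.
        String basic = Base64.getEncoder()
                .encodeToString((clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + basic);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        String body = "grant_type=refresh_token&refresh_token=" + refreshToken;
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Token endpoint responded with HTTP " + conn.getResponseCode());
    }
}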
In conclusion: sometimes doing things exactly according to spec without questioning the overall context can lead to vulnerabilities:
... some libraries treated tokens signed with the none algorithm as a valid token with a verified signature. The result? Anyone can create their own "signed" tokens with whatever payload they want, allowing arbitrary account access on some systems.
(source: Critical vulnerabilities in JSON Web Token libraries)
Some libraries treated the none algorithm according to spec, but ignored the usage context; if the developer passed a key to verify a signature it most likely did not want to treat unsigned tokens as valid.
However, passing the secret on the refresh token request is not one of these situations so don't worry about it. |
Try to use
drv <- RSQLServer::SQLServer()
instead of
drv <- dbDriver("SqlServer")
You must have downloaded and installed the jTDS driver.
For Windows authentication you have to install a DLL too:
If you intend to use integrated security (Windows Authentication) to
authenticate your server session, you will need to download jTDS and
copy the native single sign on library (ntlmauth.dll) to any
location on your system’s PATH (e.g. Sys.getenv("PATH") ).
Source: https://cran.r-project.org/web/packages/RSQLServer/RSQLServer.pdf
Your JDBC connection string looks strange, please make sure your JDBC connection string is correct.
If you are using the jTDS driver the connection string syntax is
different from the JDBC driver of Microsoft
The jTDS syntax is specified here:
http://jtds.sourceforge.net/faq.html#urlFormat
jdbc:jtds:<server_type>://<server>[:<port>][/<database>][;<property>=<value>[;...]]
where <server_type> is "sqlserver".
The Microsoft JDBC syntax is specified here but I think it does not work because RSQLServer is based on the cross-platform jTDS JDBC driver
https://msdn.microsoft.com/en-us/library/ms378428(v=sql.110).aspx
Example:
jdbc:sqlserver://localhost;databaseName=AdventureWorks;integratedSecurity=true;
Replace the "localhost" part with the IP address or server name like "myServer.honey.moon.com", in case of a non-standard IP port (not 1433) of the instance use "localhost:1234".
You can figure out the IP port by looking at the connection string you use to connect to the database via SQL Server Management Studio! |
It is possible to update AbstractInboundFileSynchronizer to recognize updated files, but it is brittle and you run into other issues.
Update 13/Nov/2016: Found out how to get modification timestamps in seconds.
The main problem with updating the AbstractInboundFileSynchronizer is that it has setter-methods but no (protected) getter-methods. If, in the future, the setter-methods do something smart, the updated version presented here will break.
The main issue with updating files in the local directory is concurrency: if you are processing a local file at the same time that an update is being received, you can run into all sorts of trouble. The easy way out is to move the local file to a (temporary) processing directory so that an update can be received as a new file which in turn removes the need to update AbstractInboundFileSynchronizer. See also Camel's timestamp remarks.
By default FTP servers provide modification timestamps in minutes. For testing I updated the FTP client to use the MLSD command which provides modification timestamps in seconds (and milliseconds if you are lucky), but not all FTP servers support this.
As mentioned on the Spring FTP reference the local file filter needs to be a FileSystemPersistentAcceptOnceFileListFilter to ensure local files are picked up when the modification timestamp changes.
Below my version of the updated AbstractInboundFileSynchronizer, followed by some test classes I used.
public class FtpUpdatingFileSynchronizer extends FtpInboundFileSynchronizer {
protected final Log logger = LogFactory.getLog(this.getClass());
private volatile Expression localFilenameGeneratorExpression;
private volatile EvaluationContext evaluationContext;
private volatile boolean deleteRemoteFiles;
private volatile String remoteFileSeparator = "/";
private volatile boolean preserveTimestamp;
public FtpUpdatingFileSynchronizer(SessionFactory<FTPFile> sessionFactory) {
super(sessionFactory);
setPreserveTimestamp(true);
}
@Override
public void setLocalFilenameGeneratorExpression(Expression localFilenameGeneratorExpression) {
super.setLocalFilenameGeneratorExpression(localFilenameGeneratorExpression);
this.localFilenameGeneratorExpression = localFilenameGeneratorExpression;
}
@Override
public void setIntegrationEvaluationContext(EvaluationContext evaluationContext) {
super.setIntegrationEvaluationContext(evaluationContext);
this.evaluationContext = evaluationContext;
}
@Override
public void setDeleteRemoteFiles(boolean deleteRemoteFiles) {
super.setDeleteRemoteFiles(deleteRemoteFiles);
this.deleteRemoteFiles = deleteRemoteFiles;
}
@Override
public void setRemoteFileSeparator(String remoteFileSeparator) {
super.setRemoteFileSeparator(remoteFileSeparator);
this.remoteFileSeparator = remoteFileSeparator;
}
@Override
public void setPreserveTimestamp(boolean preserveTimestamp) {
// updated
Assert.isTrue(preserveTimestamp, "for updating timestamps must be preserved");
super.setPreserveTimestamp(preserveTimestamp);
this.preserveTimestamp = preserveTimestamp;
}
@Override
protected void copyFileToLocalDirectory(String remoteDirectoryPath, FTPFile remoteFile, File localDirectory,
Session<FTPFile> session) throws IOException {
String remoteFileName = this.getFilename(remoteFile);
String localFileName = this.generateLocalFileName(remoteFileName);
String remoteFilePath = (remoteDirectoryPath != null
? (remoteDirectoryPath + this.remoteFileSeparator + remoteFileName)
: remoteFileName);
if (!this.isFile(remoteFile)) {
if (this.logger.isDebugEnabled()) {
this.logger.debug("cannot copy, not a file: " + remoteFilePath);
}
return;
}
// start update
File localFile = new File(localDirectory, localFileName);
boolean update = false;
if (localFile.exists()) {
if (this.getModified(remoteFile) > localFile.lastModified()) {
this.logger.info("Updating local file " + localFile);
update = true;
} else {
this.logger.info("File already exists: " + localFile);
return;
}
}
// end update
String tempFileName = localFile.getAbsolutePath() + this.getTemporaryFileSuffix();
File tempFile = new File(tempFileName);
OutputStream outputStream = new BufferedOutputStream(new FileOutputStream(tempFile));
try {
session.read(remoteFilePath, outputStream);
} catch (Exception e) {
if (e instanceof RuntimeException) {
throw (RuntimeException) e;
}
else {
throw new MessagingException("Failure occurred while copying from remote to local directory", e);
}
} finally {
try {
outputStream.close();
}
catch (Exception ignored2) {
}
}
// updated
if (update && !localFile.delete()) {
throw new MessagingException("Unable to delete local file [" + localFile + "] for update.");
}
if (tempFile.renameTo(localFile)) {
if (this.deleteRemoteFiles) {
session.remove(remoteFilePath);
if (this.logger.isDebugEnabled()) {
this.logger.debug("deleted " + remoteFilePath);
}
}
// updated
this.logger.info("Stored file locally: " + localFile);
} else {
// updated
throw new MessagingException("Unable to rename temporary file [" + tempFile + "] to [" + localFile + "]");
}
if (this.preserveTimestamp) {
localFile.setLastModified(getModified(remoteFile));
}
}
private String generateLocalFileName(String remoteFileName) {
if (this.localFilenameGeneratorExpression != null) {
return this.localFilenameGeneratorExpression.getValue(this.evaluationContext, remoteFileName, String.class);
}
return remoteFileName;
}
}
Following some of the test classes I used.
I used dependencies org.springframework.integration:spring-integration-ftp:4.3.5.RELEASE and org.apache.ftpserver:ftpserver-core:1.0.6 (plus the usual logging and testing dependencies).
public class TestFtpSync {
static final Logger log = LoggerFactory.getLogger(TestFtpSync.class);
static final String FTP_ROOT_DIR = "target" + File.separator + "ftproot";
// org.apache.ftpserver:ftpserver-core:1.0.6
static FtpServer server;
@BeforeClass
public static void startServer() throws FtpException {
File ftpRoot = new File (FTP_ROOT_DIR);
ftpRoot.mkdirs();
TestUserManager userManager = new TestUserManager(ftpRoot.getAbsolutePath());
FtpServerFactory serverFactory = new FtpServerFactory();
serverFactory.setUserManager(userManager);
ListenerFactory factory = new ListenerFactory();
factory.setPort(4444);
serverFactory.addListener("default", factory.createListener());
server = serverFactory.createServer();
server.start();
}
@AfterClass
public static void stopServer() {
if (server != null) {
server.stop();
}
}
File ftpFile = Paths.get(FTP_ROOT_DIR, "test1.txt").toFile();
File ftpFile2 = Paths.get(FTP_ROOT_DIR, "test2.txt").toFile();
@Test
public void syncDir() {
// org.springframework.integration:spring-integration-ftp:4.3.5.RELEASE
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
try {
ctx.register(FtpSyncConf.class);
ctx.refresh();
PollableChannel msgChannel = ctx.getBean("inputChannel", PollableChannel.class);
for (int j = 0; j < 2; j++) {
for (int i = 0; i < 2; i++) {
storeFtpFile();
}
for (int i = 0; i < 4; i++) {
fetchMessage(msgChannel);
}
}
} catch (Exception e) {
throw new AssertionError("FTP test failed.", e);
} finally {
ctx.close();
cleanup();
}
}
boolean tswitch = true;
void storeFtpFile() throws IOException, InterruptedException {
File f = (tswitch ? ftpFile : ftpFile2);
tswitch = !tswitch;
log.info("Writing message " + f.getName());
Files.write(f.toPath(), ("Hello " + System.currentTimeMillis()).getBytes());
}
Message<?> fetchMessage(PollableChannel msgChannel) {
log.info("Fetching message.");
Message<?> msg = msgChannel.receive(1000L);
if (msg == null) {
log.info("No message.");
} else {
log.info("Have a message: " + msg);
}
return msg;
}
void cleanup() {
delFile(ftpFile);
delFile(ftpFile2);
File d = new File(FtpSyncConf.LOCAL_DIR);
if (d.isDirectory()) {
for (File f : d.listFiles()) {
delFile(f);
}
}
log.info("Finished cleanup");
}
void delFile(File f) {
if (f.isFile()) {
if (f.delete()) {
log.info("Deleted " + f);
} else {
log.error("Cannot delete file " + f);
}
}
}
}
public class MlistFtpSessionFactory extends AbstractFtpSessionFactory<MlistFtpClient> {
@Override
protected MlistFtpClient createClientInstance() {
return new MlistFtpClient();
}
}
public class MlistFtpClient extends FTPClient {
@Override
public FTPFile[] listFiles(String pathname) throws IOException {
return super.mlistDir(pathname);
}
}
@EnableIntegration
@Configuration
public class FtpSyncConf {
private static final Logger log = LoggerFactory.getLogger(FtpSyncConf.class);
public static final String LOCAL_DIR = "/tmp/received";
@Bean(name = "ftpMetaData")
public ConcurrentMetadataStore ftpMetaData() {
return new SimpleMetadataStore();
}
@Bean(name = "localMetaData")
public ConcurrentMetadataStore localMetaData() {
return new SimpleMetadataStore();
}
@Bean(name = "ftpFileSyncer")
public FtpUpdatingFileSynchronizer ftpFileSyncer(
@Qualifier("ftpMetaData") ConcurrentMetadataStore metadataStore) {
MlistFtpSessionFactory ftpSessionFactory = new MlistFtpSessionFactory();
ftpSessionFactory.setHost("localhost");
ftpSessionFactory.setPort(4444);
ftpSessionFactory.setUsername("demo");
ftpSessionFactory.setPassword("demo");
FtpPersistentAcceptOnceFileListFilter fileFilter = new FtpPersistentAcceptOnceFileListFilter(metadataStore, "ftp");
fileFilter.setFlushOnUpdate(true);
FtpUpdatingFileSynchronizer ftpFileSync = new FtpUpdatingFileSynchronizer(ftpSessionFactory);
ftpFileSync.setFilter(fileFilter);
// ftpFileSync.setDeleteRemoteFiles(true);
return ftpFileSync;
}
@Bean(name = "syncFtp")
@InboundChannelAdapter(value = "inputChannel", poller = @Poller(fixedDelay = "500", maxMessagesPerPoll = "1"))
public MessageSource<File> syncChannel(
@Qualifier("localMetaData") ConcurrentMetadataStore metadataStore,
@Qualifier("ftpFileSyncer") FtpUpdatingFileSynchronizer ftpFileSync) throws Exception {
FtpInboundFileSynchronizingMessageSource messageSource = new FtpInboundFileSynchronizingMessageSource(ftpFileSync);
File receiveDir = new File(LOCAL_DIR);
receiveDir.mkdirs();
messageSource.setLocalDirectory(receiveDir);
messageSource.setLocalFilter(new FileSystemPersistentAcceptOnceFileListFilter(metadataStore, "local"));
log.info("Message source bean created.");
return messageSource;
}
@Bean(name = "inputChannel")
public PollableChannel inputChannel() {
QueueChannel channel = new QueueChannel();
log.info("Message channel bean created.");
return channel;
}
}
/**
* Copied from https://github.com/spring-projects/spring-integration-samples/tree/master/basic/ftp/src/test/java/org/springframework/integration/samples/ftp/support
* @author Gunnar Hillert
*
*/
public class TestUserManager extends AbstractUserManager {
private BaseUser testUser;
private BaseUser anonUser;
private static final String TEST_USERNAME = "demo";
private static final String TEST_PASSWORD = "demo";
public TestUserManager(String homeDirectory) {
super("admin", new ClearTextPasswordEncryptor());
testUser = new BaseUser();
testUser.setAuthorities(Arrays.asList(new Authority[] {new ConcurrentLoginPermission(1, 1), new WritePermission()}));
testUser.setEnabled(true);
testUser.setHomeDirectory(homeDirectory);
testUser.setMaxIdleTime(10000);
testUser.setName(TEST_USERNAME);
testUser.setPassword(TEST_PASSWORD);
anonUser = new BaseUser(testUser);
anonUser.setName("anonymous");
}
public User getUserByName(String username) throws FtpException {
if(TEST_USERNAME.equals(username)) {
return testUser;
} else if(anonUser.getName().equals(username)) {
return anonUser;
}
return null;
}
public String[] getAllUserNames() throws FtpException {
return new String[] {TEST_USERNAME, anonUser.getName()};
}
public void delete(String username) throws FtpException {
throw new UnsupportedOperationException("Deleting of FTP Users is not supported.");
}
public void save(User user) throws FtpException {
throw new UnsupportedOperationException("Saving of FTP Users is not supported.");
}
public boolean doesExist(String username) throws FtpException {
return (TEST_USERNAME.equals(username) || anonUser.getName().equals(username)) ? true : false;
}
public User authenticate(Authentication authentication) throws AuthenticationFailedException {
if(UsernamePasswordAuthentication.class.isAssignableFrom(authentication.getClass())) {
UsernamePasswordAuthentication upAuth = (UsernamePasswordAuthentication) authentication;
if(TEST_USERNAME.equals(upAuth.getUsername()) && TEST_PASSWORD.equals(upAuth.getPassword())) {
return testUser;
}
if(anonUser.getName().equals(upAuth.getUsername())) {
return anonUser;
}
} else if(AnonymousAuthentication.class.isAssignableFrom(authentication.getClass())) {
return anonUser;
}
return null;
}
}
Update 15/Nov/2016: Note on xml-configuration.
The xml-element inbound-channel-adapter is directly linked to the FtpInboundFileSynchronizer via org.springframework.integration.ftp.config.FtpInboundChannelAdapterParser via FtpNamespaceHandler via spring-integration-ftp-4.3.5.RELEASE.jar!/META-INF/spring.handlers.
Following the xml-custom reference guide, specifying a custom FtpNamespaceHandler in a local META-INF/spring.handlers file should allow you to use the FtpUpdatingFileSynchronizer instead of the FtpInboundFileSynchronizer. It did not work for me with unit-tests though and a proper solution would probably involve creating extra/modified xsd-files so that the regular inbound-channel-adapter is using the regular FtpInboundFileSynchronizer and a special inbound-updating-channel-adapter is using the FtpUpdatingFileSynchronizer. Doing this properly is a bit out of scope for this answer.
A quick hack can get you started though. You can overwrite the default FtpNamespaceHandler by creating package org.springframework.integration.ftp.config and class FtpNamespaceHandler in your local project. Contents shown below:
package org.springframework.integration.ftp.config;
public class FtpNamespaceHandler extends org.springframework.integration.config.xml.AbstractIntegrationNamespaceHandler {
@Override
public void init() {
System.out.println("Initializing FTP updating file synchronizer.");
// one updated line below, rest copied from original FtpNamespaceHandler
registerBeanDefinitionParser("inbound-channel-adapter", new MyFtpInboundChannelAdapterParser());
registerBeanDefinitionParser("inbound-streaming-channel-adapter",
new FtpStreamingInboundChannelAdapterParser());
registerBeanDefinitionParser("outbound-channel-adapter", new FtpOutboundChannelAdapterParser());
registerBeanDefinitionParser("outbound-gateway", new FtpOutboundGatewayParser());
}
}
package org.springframework.integration.ftp.config;
import org.springframework.integration.file.remote.synchronizer.InboundFileSynchronizer;
import org.springframework.integration.ftp.config.FtpInboundChannelAdapterParser;
public class MyFtpInboundChannelAdapterParser extends FtpInboundChannelAdapterParser {
@Override
protected Class<? extends InboundFileSynchronizer> getInboundFileSynchronizerClass() {
System.out.println("Returning updating file synchronizer.");
return FtpUpdatingFileSynchronizer.class;
}
}
Also add preserve-timestamp="true" to the xml-file to prevent the new IllegalArgumentException: for updating timestamps must be preserved. |
The error "403 forbidden" can be returned if the service is turned off inside the admin console or if the user which you are trying to create a circle for, has not created a Google Plus profile. Here is a sample of an implementation with the Google PHP Client Library version 2.0.3 but your code should also work.
<?php
session_start();
//INCLUDE PHP CLIENT LIBRARY
require_once "google-api-php-client-2.0.3/vendor/autoload.php";
$client = new Google_Client();
$client->setAuthConfig("client_credentials.json");
$client->setRedirectUri('http://' . $_SERVER['HTTP_HOST'] . '/createCircle.php');
$client->addScope(array(
"https://www.googleapis.com/auth/plus.circles.write",
"https://www.googleapis.com/auth/plus.me")
);
if (isset($_SESSION['access_token']) && $_SESSION['access_token']) {
$client->setAccessToken($_SESSION['access_token']);
$service = new Google_Service_PlusDomains($client);
$circle = new Google_Service_PlusDomains_Circle(array(
'displayName' => 'VIP Circle',
'description' => 'Best of the best'
)
);
$userId = 'me';
$newcircle = $service->circles->insert($userId, $circle);
echo "Circle created: ".$newcircle->id." - ".$newcircle->selfLink;
} else {
if (!isset($_GET['code'])) {
$auth_url = $client->createAuthUrl();
header('Location: ' . filter_var($auth_url, FILTER_SANITIZE_URL));
} else {
$client->authenticate($_GET['code']);
$_SESSION['access_token'] = $client->getAccessToken();
$redirect_uri = 'http://' . $_SERVER['HTTP_HOST'] . '/createCircle.php';
header('Location: ' . filter_var($redirect_uri, FILTER_SANITIZE_URL));
}
}
?>
Make sure to review the following references: https://developers.google.com/+/domains/authentication/scopes
https://developers.google.com/+/domains/authentication/
https://support.google.com/a/answer/1631746?hl=en
I hope this helps! |
The second link seems like what I am looking for, but I am a bit confused, what is the difference with the service OpenIdDict-Core that is mentioned in the description?
AspNet.Security.OpenIdConnect.Server (codenamed ASOS) is the equivalent of OWIN/Katana's OAuthAuthorizationServerMiddleware in the ASP.NET Core world: it's a low-level OpenID Connect framework that can be used to implement your own server, using the same events-based approach as the rest of the ASP.NET Core Security middleware.
ASOS provides all the primitives you need (e.g OpenIdConnectRequest and OpenIdConnectResponse) and handles most of the protocol details for you (e.g request validation, or token generation), but it's up to you to implement things like client authentication or user authentication.
ASOS is not for everyone: it has been specifically designed to offer a low-level, protocol-first experience and to be as flexible as possible: if you're not comfortable at all with how OAuth2/OpenID Connect work in general, then it's likely not for you.
For more information about ASOS, you can read this blog posts series: http://kevinchalet.com/2016/07/13/creating-your-own-openid-connect-server-with-asos-introduction.
OpenIddict is an OpenID Connect server library that is based on ASOS: it handles things like client authentication or token revocation for you and provides the interfaces you need for that (it also comes with default EF-based stores).
Unlike ASOS, it's an opinionated server whose main objective is to encourage you to do the right thing by rejecting everything that is not considered as "safe" from a security perspective (e.g it will reject authorization requests containing response_type=token if the client is a confidential client).
The idea behind OpenIddict is that all you have to implement is user authentication, which can be done using ASP.NET Core Identity in your own MVC controller. Everything else is considered "as dangerous" and deliberately hidden and handled by OpenIddict.
If you want to learn more about OpenIddict, I'd recommend reading this blog post: https://blogs.msdn.microsoft.com/webdev/2016/10/27/bearer-token-authentication-in-asp-net-core/ |
Not sure if this is helpful, but you can access a SQL Server database with PHP.
Loading the Driver
You can download the SQL Server Driver for PHP at the Microsoft Download Center. Included in the download are two .dll files: php_sqlsrv.dll and php_sqlsrv_ts.dll.
Configuring the Driver
The SQL Server Driver for PHP has three configuration options:
LogSubsystems:
Use this option to turn the logging of subsystems on or off. The default setting is SQLSRV_LOG_SYSTEM_OFF (logging is turned off by default).
LogSeverity:
Use this option to specify what to log after logging has been turned on. The default setting is SQLSRV_LOG_SEVERITY_ERROR (only errors are logged by default after logging has been turned on).
WarningsReturnAsErrors:
By default, the SQL Server Driver for PHP treats warnings generated by sqlsrv functions as errors. Use the WarningsReturnAsErrors option to change this behavior. The default setting for this option is true.
Creating a Connection
The sqlsrv_connect function is used to establish a connection to the server.
$serverName = "(local)";
$connectionOptions = array("Database"=>"DBNAME");
/* Connect using Windows Authentication. */
$conn = sqlsrv_connect( $serverName, $connectionOptions);
if( $conn === false )
{ die( FormatErrors( sqlsrv_errors() ) ); }
By default, the sqlsrv_connect function uses Windows Authentication to establish a connection.
The sqlsrv_connect function accepts two parameters: $serverName and $connectionOptions (optional).
$serverName – This required parameter is used to specify the name of the server to which you want to connect. In the code above, a connection is established to the local server. This parameter can also be used to specify a SQL Server instance or a port number.
For example:
$serverName = "myServer\instanceName";
-or-
$serverName = "myServer, 1433";
$connectionOptions - This optional parameter is an array of key-value pairs that set options on the connection. In the code above, the database is set to DBNAME for the connection. Other options include ConnectionPooling, Encrypt, UID, and PWD.
Note: The UID and PWD options must be set in the $connectionOptions parameter to log into the server with SQL Server Authentication.
Go through below link for more details:
Accessing SQL Server Databases with PHP |
import java.security.Key;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;
public class AdvanceEncryptionSecurity {
private static final String ALGORITHM = "AES";
private static final int ITERATIONS = 2;
private static final byte[] keyValue = new byte[] { 'P', 'R', 'S', 'a', 'n', 'd', 'A', 'P', 'F', 'A', 'A', 'l', 'l', 'i', 'e', 'd' };
private static String salt = "prs and pfa";
public static String encrypt(String value) throws Exception {
Key key = generateKey();
Cipher c = Cipher.getInstance(ALGORITHM); // "AES" alone defaults to AES/ECB/PKCS5Padding, which is weak; prefer an explicit mode such as AES/GCM/NoPadding in production
c.init(Cipher.ENCRYPT_MODE, key);
String valueToEnc = null;
String eValue = value;
for (int i = 0; i < ITERATIONS; i++) {
valueToEnc = salt + eValue;
byte[] encValue = c.doFinal(valueToEnc.getBytes());
eValue = Base64.getEncoder().encodeToString(encValue);
}
return eValue;
}
public static String decrypt(String value) throws Exception {
Key key = generateKey();
Cipher c = Cipher.getInstance(ALGORITHM);
c.init(Cipher.DECRYPT_MODE, key);
String dValue = null;
String valueToDecrypt = value;
for (int i = 0; i < ITERATIONS; i++) {
byte[] decordedValue = Base64.getDecoder().decode(valueToDecrypt);
byte[] decValue = c.doFinal(decordedValue);
dValue = new String(decValue).substring(salt.length());
valueToDecrypt = dValue;
}
return dValue;
}
private static Key generateKey() throws Exception {
Key key = new SecretKeySpec(keyValue, ALGORITHM);
return key;
}
}
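As a quick sanity check (a minimal sketch; you could drop this main method into the class above), encrypting and then decrypting should round-trip the original text:
public static void main(String[] args) throws Exception {
    String encrypted = AdvanceEncryptionSecurity.encrypt("my secret");
    String decrypted = AdvanceEncryptionSecurity.decrypt(encrypted);
    System.out.println(encrypted); // Base64-encoded ciphertext
    System.out.println(decrypted); // prints "my secret"
}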
Try this code. It can encrypt and decrypt. I'm assuming you know how to code in Java. |
(1) If you are using basic authentication, you can add a basic authentication like so:
app.factory('mySamples', ['$http', function($http) {
var credentials = btoa(username + ':' + authtoken); // username/authtoken come from your own login flow
var authorization = {
'Authorization': 'Basic ' + credentials
};
return $http({method: 'GET', url: 'website/api/samples/', headers: authorization });
}]);
If all of your $http calls use this same auth, you can use $http.defaults.headers.common.Authorization
(2) I’m pretty sure nginx does not provide session handling—you’ll have to do that in django.
(3) I don’t typically use sessions with REST since REST is usually done stateless.
OP asked how to enable CORS. Enabling CORS is usually done on the server-side. You can do it in nginx by adding:
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
To the appropriate location block. You can also do this on the Django side with middleware packages like django-cors-middleware (https://pypi.python.org/pypi/django-cors-middleware) or django-cors-headers (https://pypi.python.org/pypi/django-cors-headers). |
You should use
key = cipher.random_key
instead of an RSA key.
I have used it in the following way for my purpose:
Generate a random cipher key
Encrypt the data with AES using that key
Before sending, encrypt the key itself with the RSA public key
At the receiver end
Decrypt the cipher key with the RSA private key
Decrypt the data with the recovered cipher key
Note: We cannot encrypt large data directly with the RSA private/public key technique, which is why this hybrid approach is used.
Super secured Example
# At sender side
public_key_file = 'public.pem'
message = 'Hey vishh you are awesome!!'
cipher = OpenSSL::Cipher::AES.new(128, :CBC)
cipher.encrypt
aes_key = cipher.random_key
aes_iv = cipher.random_iv # the IV is not secret, but it must travel with the ciphertext
encrypted_data = cipher.update(message) + cipher.final
# encrypted_data is ready to travel
rsa = OpenSSL::PKey::RSA.new(File.read(public_key_file))
rsa_cypher_key = rsa.public_encrypt(aes_key)
# rsa_cypher_key is ready to travel
# sending this data in encoded format is a good idea
encrypted_data = Base64.encode64(encrypted_data)
rsa_cypher_key = Base64.encode64(rsa_cypher_key)
# ====> encrypted_data + rsa_cypher_key ====> travelling (the IV can travel alongside them)
# At receiver side
encrypted_data = Base64.decode64(encrypted_data)
rsa_cypher_key = Base64.decode64(rsa_cypher_key) # decode the data
private_key_file = 'private.pem'
# Decrypt the cypher key with private key
private_key = OpenSSL::PKey::RSA.new(File.read(private_key_file))
aes_key = private_key.private_decrypt(rsa_cypher_key)
decipher = OpenSSL::Cipher::AES.new(128, :CBC)
decipher.decrypt
decipher.key = aes_key
decipher.iv = aes_iv # same IV that was used for encryption
message = decipher.update(encrypted_data) + decipher.final
p message
# => "Hey vishh you are awesome!!"
|
Maybe I am too late to answer this question, but I was facing this issue for a couple of days and couldn't find any solid solution online. I have found a solution, hence I am sharing it.
//Steps to make sqlite database authenticated
download the sqlite3 amalgamation zip file
unzip the file. The file should contain shell.c, sqlite3.c, sqlite3.h, sqlite3ext.h
get the user-authentication extension sources, userauth.c and sqlite3userauth.h (they live in the ext/userauth directory of the SQLite source tree)
3a. Open userauth.c and copy the entire code and paste it at the end of your sqlite3.c file.
3b. Open sqlite3userauth.h and copy the entire code and paste it at the end of your sqlite3.h file.
create an executable for running commands in a shell
command: gcc -o sqlite3Exe shell.c sqlite3.c -DSQLITE_USER_AUTHENTICATION -ldl -lpthread
4a. You'll get an error "no such file sqlite3userauth.h" in your shell.c file:
solution: go to that file and comment out the #include line (this is because you've already included the necessary code when you copied sqlite3userauth.h into sqlite3.h)
4b. Test your output file by running ./sqlite3Exe (this is the name you've given to the output file generated in the previous step). You'll get the sqlite console.
4c. Create a database and turn on the authentication flag:
command1: .open dbname.db
command2: .auth on
command3: .exit//command 3 is optional
Building the library
5a: creating object file
//Compiling sqlite3.c to create object after appending our new code
command: gcc -o sqlite3.o -c sqlite3.c -DSQLITE_USER_AUTHENTICATION
With this command, we generate an object file which we can use to compile our C files.
Create a C file to protect your database by adding a user:
//authUser.c
#include "stdio.h"
#include "stdlib.h"
#include "sqlite3.h"
int main(int argc,char * argv[]){
int a = 10;
int rtn, rtn2;
sqlite3 *db;
char *sql, *zErMsg;
rtn = sqlite3_open("dbname.db", &db);
rtn = sqlite3_user_add(db,"username","password",2, 1);//last-but-one param is the number of password bytes (here only the first 2 bytes of "password" are used), last param is whether the user is admin or not
if(rtn){
fprintf(stderr, "Can't open database: %s\n", sqlite3_errmsg(db));
return(0);
}else{
fprintf(stderr, "Protected database successfully\n");
}
sqlite3_close(db);
return 0;
}
Compiling the program
//Compiling the program
command1: gcc authUser.c sqlite3.o -lpthread -ldl
command2: ./a.out //Output:protected database successfully
create c file to create table if the user is authenticated
//createTable.c
#include "stdio.h"
#include "stdlib.h"
#include "sqlite3.h"
static int callback(void *NotUsed, int argc, char **argv, char **azColName){
int i;
for(i=0; i < argc; i++){
printf("%s = %s\n", azColName[i], argv[i] ? argv[i] : "NULL");
}
printf("\n");
return 0;
}
int main(int argc,char * argv[]){
int a = 10;
int rtn, rtn2;
sqlite3 *db;
char *sql, *zErMsg;
rtn = sqlite3_open("dbname.db", &db);
rtn = sqlite3_user_authenticate(db, "username","password",2); //must match the username, password and byte count passed to sqlite3_user_add
if(rtn){
fprintf(stderr, "Can't open database: %s\n", sqlite3_errmsg(db));
return(0);
}else{
fprintf(stderr, "Opened database successfully\n");
}
sql = "create table newtable(id int not null primary key, name varchar(100) not null)";
//sql = "insert into newtable values(5, 'ishwar')";
rtn = sqlite3_exec(db, sql, callback, 0, &zErMsg);
if(rtn != SQLITE_OK){
sqlite3_free(zErMsg);
}else{
fprintf(stdout, "Table created successfully \n");
//fprintf(stdout, "inserted successfully \n");
}
sqlite3_close(db);
return 0;
}
compiling the program
//Compiling the program
command1: gcc createTable.c sqlite3.o -lpthread -ldl
command2: ./a.out //Output:Table created successfully
Create c file to add values in table
From the previous code, you can see two sql variables and two fprintf calls inside the else block. Now uncomment the commented lines, comment the other ones, and run the same commands as above.
output: Inserted successfully
And you're done. Try experimenting with the code: if you change the values passed to sqlite3_user_authenticate, you won't be able to do these operations; at most you may be able to open the database (when you comment out the sqlite3_user_authenticate call), nothing else.
Testing it with shell
Run the command: ./sqlite3Exe (the output file we created in step 4)
command1: .open dbname.db
command2: .tables //you should get an error (user_auth), since this shell session has not authenticated
Thank you (please feel free to mail me in case of any problem: ishwar.rimal@gmail.com) |
If you're not constrained to use a specific web framework, feel free to try the following filter based implementation for jersey. Note that you still need to add a fair amount of custom code for handling the logic of "Collective authentication" as jersey only provides the basic tools required for this, and it doesn't explicitly implement the whole concept. Here's how you could do it, on a high level:
class AuthorizationProvider {
public boolean authenticate(ContainerRequestContext requestContext) {
// Here you would need to query your database to get the Collection of Users belonging
// to the "Collective" Role. You would then check if they are all logged in.
// A really abstract version would look like this, assuming you've already queried the DB
// and have a reference to the above mentioned Collection.
if (collectiveUsers.size() == collectiveUsers.stream().filter(User::isLoggedIn).count()) {
return true;
}
return false;
}
}
class AuthorizationRequestFilter implements ContainerRequestFilter {
private final AuthorizationProvider authorizationProvider; // injected via a constructor or your DI framework
@Override
public void filter(ContainerRequestContext requestContext) {
if (authorizationProvider.authenticate(requestContext)) {
// serve whatever it is you want to serve if all required users are logged in
} else {
// otherwise reject the request
requestContext.abortWith(Response
.status(Response.Status.UNAUTHORIZED)
.entity("Resource available only after collective login")
.build());
}
}
}
@ApplicationPath("/")
class MyApplication extends ResourceConfig {
public MyApplication() {
// Register the filter
register(AuthorizationRequestFilter.class);
}
}
Apart from this, you would also need to handle the Login part.
You would assign these specific users the Collective role, and you would mark them as logged in, whenever they successfully pass through login authentication.
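As a rough sketch of that login step (the UserStore type and its methods are hypothetical placeholders for your own persistence layer), a JAX-RS login resource could verify the credentials and persist the logged-in flag that the filter later counts:
import javax.ws.rs.FormParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/login")
class LoginResource {

    private final UserStore userStore; // hypothetical DAO wrapping your database

    LoginResource(UserStore userStore) {
        this.userStore = userStore;
    }

    @POST
    public Response login(@FormParam("username") String username,
                          @FormParam("password") String password) {
        // reject the request if the credentials don't match what is stored
        if (!userStore.checkCredentials(username, password)) {
            return Response.status(Response.Status.UNAUTHORIZED).build();
        }
        // persist the state the AuthorizationProvider checks for the Collective role
        userStore.markLoggedIn(username, true);
        return Response.ok().build();
    }
}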
If all the above conditions are met, you should be able to successfully serve your "Collective only" page, only when all "Collective" users are logged in.
This also covers the part where if either one of these users logs out, you store the state in your database (mark the Collective user with isLoggedIn = false). So from this point on, whenever somebody requests the page, it will return Unauthorized.
Conversely, you can also attempt to implement SSE (Server sent events) to actively update the frontend part, if somebody logs out. With this, the page will actively be disabled even if somebody has already managed to get it previously.
For reference, see the container request filter source and example in the Jersey docs. |
User.Identity.IsAuthenticated looks at the authentication cookie from the client to determine if the user is logged in or not. Since the authentication cookie is not present when you are POSTing to your login method, it will always return false. Additionally, why perform the check right after you logged the user in? The check should actually be performed on the login GET method.
public ActionResult Login(string returnUrl)
{
if (User.Identity.IsAuthenticated)
{
//already logged in - no need to allow login again!!
return RedirectToAction("Index", "Home");
}
ViewBag.ReturnUrl = returnUrl;
return View();
}
[AllowAnonymous]
[HttpPost]
public ActionResult Login(UserProfile register)
{
//check your model state!
if(!ModelState.IsValid) return View();
//this method returns some result letting you know if the user
//logged in successfully or not. You need to check that.
//Additionally, this method sets the Auth cookie so you can
//do you IsAuthenticated call anywhere else in the system
var loginResult = WebSecurity.Login(register.UserName, register.password, true);
//login failed, display the login view again or go wherever you need to go
if(!loginResult) return View();
//Good to go, user is authenticated - redirect to where need to go
return RedirectToAction("Index", "Home");
}
Here is the MSDN for the WebSecurity.Login method |
You can use a session or a serialized file to keep track of the uploaded files.
Once PHP imports the file into the system, you can start recording it in the file or session:
//File serialized, for the first file
if($_FILES['file']['error'] == 0){ //proceed only if the upload succeeded
$tmpF = sys_get_temp_dir().'/reportUpload.txt';
if (!file_exists($tmpF)) {
$count = 1;
$queue[] = ['id'=>$count,'nameFile'=>$_FILES['file']['name'],'status'=>0];
$tmp = fopen($tmpF,'w');
fwrite($tmp, serialize($queue));
fclose($tmp);
}else{
//For the second file on.
$file = fopen($tmpF,'r');
$queue = unserialize(fgets($file));
fclose($file);
$last = count($queue);
$count = $queue[$last-1]['id']+1; //last element is at index $last-1
$queue[] = ['id'=>$count,'nameFile'=>$_FILES['file']['name'],'status'=>0];
$tmp = fopen($tmpF,'w');
fwrite($tmp, serialize($queue));
fclose($tmp);
}
}
//Starting encryption
$tmpU = sys_get_temp_dir().'/reportUploadExecution.txt';
if(!file_exists($tmpU)){
$tmpF = sys_get_temp_dir().'/reportUpload.txt';
$file = fopen($tmpF,'r');
$queue = unserialize(fgets($file));
fclose($file);
$line = 0;
while($line < count($queue)){ //iterate over the queued files
if($queue[$line]['status']==FALSE){
//Mark the file currently being processed
$execution = ['id'=>$line,'nameFile'=>$queue[$line]['nameFile']];
$tmp1 = fopen($tmpU,'w');
fwrite($tmp1, serialize($execution));
fclose($tmp1);
// Your code here
// If encryption ended successfully set line "status" = 1
unlink($tmpU);
}
$line++;
}
}
It's just an idea; if you post part of your code, it will be easier to help you. |
In the latest version of DropWizard you will find it possible to do both authentication and authorization. The former, in a nutshell, instructs DropWizard to ask a user for credentials (for example with basic authentication) when she tries to access a resource, or to perform some other identity check. The latter allows one to grant a user access to various resources based on the user's roles.
There are various possibilities for how you can store user data and roles. Examples include a database (which you mentioned), an LDAP server and a third-party identity management system.
If you are interested in Basic Authentication, you can take a look at my example here. A database is used to store user's credentials. Also, here is my little bit dated tutorial on DropWizard authentication. The code for the latest version is in the aforementioned example application.
To implement authentication only, you can skip adding roles and registering an Authorizer. To add authorization you can add a roles collection to your User entity and use annotations such as @RolesAllowed and @PermitAll along with an Authorizer implementation to grant/deny access to your resources.
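As a minimal sketch (ExampleAuthorizer and AdminResource are made-up names, and I'm assuming a User principal class implementing java.security.Principal with a getRoles() accessor), an Authorizer plus an annotated resource could look roughly like this:
import io.dropwizard.auth.Auth;
import io.dropwizard.auth.Authorizer;
import javax.annotation.security.RolesAllowed;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

class ExampleAuthorizer implements Authorizer<User> {
    @Override
    public boolean authorize(User user, String role) {
        // grant access when the authenticated principal carries the requested role
        return user.getRoles() != null && user.getRoles().contains(role);
    }
}

@Path("/admin")
class AdminResource {
    @GET
    @RolesAllowed("ADMIN") // only principals the Authorizer approves for ADMIN get here
    public String secret(@Auth User user) {
        return "only admins can see this";
    }
}
The Authorizer is registered together with your auth filter; see the linked example and docs for that registration boilerplate.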
A link to DropWizard authentication docs is here.
Please feel free to ask more questions in comments and good luck. |
Storing sensitive information in NSUserDefaults is not recommended. You should use Keychain services, instead:
By making a single call to this API, an app can store small bits of secret information on a keychain, from which the app can later retrieve the information—also with a single call. The keychain secures data by encrypting it before storing it in the file system, relieving you of the need to implement complicated encryption algorithms.
Now, the iOS Keychain provides a pretty low-level API, so it is usually better to use a higher-level wrapper, such as those provided by Locksmith, or keychain-swift.
For example, using the latter, storing to and reading from the keychain is as simple as doing (after the required setup):
let keychain = KeychainSwift()
keychain.set("hello world", forKey: "my key")
keychain.get("my key")
If you prefer, you can go the direct way and use the sample provided by Apple at the above link.
EDIT:
As to code structure, you could create a class to encapsulate the token and any other information required with each request. This class would have, e.g., an init method taking the token; and a method called createNews with the following simplified signature:
func createNews(news:NewsEntity) {
...
Depending on your style preferences, this could be a singleton reading your token from the keychain (or NSUserDefaults, although that would not be advisable). |
You should actually use built in Spring Boot MongoDb Starter features and related auto configuration through application properties. Custom host, port, passwords etc. can and should be set via dedicated Spring Boot MongoDB Properties:
spring.data.mongodb.authentication-database= # Authentication database name.
spring.data.mongodb.database=test # Database name.
spring.data.mongodb.field-naming-strategy= # Fully qualified name of the FieldNamingStrategy to use.
spring.data.mongodb.grid-fs-database= # GridFS database name.
spring.data.mongodb.host=localhost # Mongo server host.
spring.data.mongodb.password= # Login password of the mongo server.
spring.data.mongodb.port=27017 # Mongo server port.
spring.data.mongodb.repositories.enabled=true # Enable Mongo repositories.
spring.data.mongodb.uri=mongodb://localhost/test # Mongo database URI. When set, host and port are ignored.
spring.data.mongodb.username= # Login user of the mongo server.
And link to the full list of supported properties is here. |
I understand what you are trying to do. I have done exactly that. The key is to create a static class in your DAL that uses the IServiceCollection, then in there you add your context. Here's mine and it works a treat. My front end doesn't even know about Entity Framework, and neither does my business layer:
public static IServiceCollection RegisterRepositoryServices(this IServiceCollection services)
{
services.AddIdentity<ApplicationUser, IdentityRole<int>>(
config => { config.User.RequireUniqueEmail = true;
config.Cookies.ApplicationCookie.LoginPath = "/Account/Login";
config.Cookies.ApplicationCookie.AuthenticationScheme = "Cookie";
config.Cookies.ApplicationCookie.AutomaticAuthenticate = false;
config.Cookies.ApplicationCookie.Events = new CookieAuthenticationEvents()
{
OnRedirectToLogin = async ctx =>
{
if (ctx.Request.Path.StartsWithSegments("/visualjobs") && ctx.Response.StatusCode == 200)
{
ctx.Response.StatusCode = 401;
}
else
{
ctx.Response.Redirect(ctx.RedirectUri);
}
await Task.Yield();
}
};
}).AddEntityFrameworkStores<VisualJobsDbContext, int>()
.AddDefaultTokenProviders();
services.AddEntityFramework().AddDbContext<VisualJobsDbContext>();
services.AddScoped<IRecruiterRepository, RecruiterRepository>();
services.AddSingleton<IAccountRepository, AccountRepository>();
return services;
}
then in my service layer I have another static class. My service layer has a reference to the repository layer and I register the repository services here (bootstrapping the repository into the service layer), like so and then I do the same again in the UI:
Service layer code:
public static class ServiceCollectionExtensions
{
public static IServiceCollection RegisterServices(this IServiceCollection services)
{
services.RegisterRepositoryServices();
services.AddScoped<IRecruiterService, RecruiterService>();
services.AddSingleton<IAccountService, AccountService>();
return services;
}
}
The Magic in the Repository Layer:
public partial class VisualJobsDbContext : IdentityDbContext<ApplicationUser, IdentityRole<int>, int>
{
private IConfigurationRoot _config;
public VisualJobsDbContext() { }
public VisualJobsDbContext(IConfigurationRoot config, DbContextOptions<VisualJobsDbContext> options) : base(options)
{
_config = config;
}
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
base.OnConfiguring(optionsBuilder);
optionsBuilder.UseSqlServer(@_config["ConnectionStrings:VisualJobsContextConnection"]);
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{....
|
Your MVC application is configured to redirect to an absolute http URL rather than a relative URL when the user needs to sign in.
For new MVC applications that are based on the Owin middleware, this is configured in App_Start/Startup.Auth.cs.
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
LoginPath = new PathString("/Account/Login"),
Provider = new CookieAuthenticationProvider
{
// Enables the application to validate the security stamp when the user logs in.
// This is a security feature which is used when you change a password or add an external login to your account.
OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<ApplicationUserManager, ApplicationUser>(
validateInterval: TimeSpan.FromMinutes(30),
regenerateIdentity: (manager, user) => user.GenerateUserIdentityAsync(manager))
}
});
and add the following after the OnValidateIdentity property:
OnApplyRedirect = ApplyRedirect
Then, later in the class, add the following function:
private static void ApplyRedirect(CookieApplyRedirectContext context)
{
Uri absoluteUri;
if (Uri.TryCreate(context.RedirectUri, UriKind.Absolute, out absoluteUri))
{
context.Response.Redirect(absoluteUri.PathAndQuery);
return;
}
context.Response.Redirect(context.RedirectUri);
}
Basically, this is converting the absolute URL to a relative URL. The relative URL then is passed back to the browser. Since the redirect is relative, it should maintain the https URL. |
Yes, what you see here is an example of undefined behavior.
Writing beyond the end of allocated array (aka buffer overflow) is a good example of undefined behavior: it will often appear to "work normally", while other times it will crash (e.g. "Segmentation fault").
A low-level explanation: there are control structures in memory that are situated some distance from your allocated objects. If your program does a big buffer overflow, there is more chance it will damage these control structures, while for more modest overflows it will damage some unused data (e.g. padding). In any case, however, buffer overflows invoke undefined behavior.
The "struct hack" in your first form also invokes undefined behavior (as indicated by the warning), but of a special kind - it's almost guaranteed that it would always work normally, in most compilers. However, it's still undefined behavior, so not recommended to use. In order to sanction its use, the C committee invented this "flexible array member" syntax (your second syntax), which is guaranteed to work.
Just to make it clear - assignment to an element of an array never allocates space for that element (not in C, at least). In C, when assigning to an element, it should already be allocated, even if the array is "flexible". Your code should know how much to allocate when it allocates memory. If you don't know how much to allocate, use one of the following techniques:
Allocate an upper bound:
struct myStruct{
int data;
int array[100]; // you will never need more than 100 numbers
};
Use realloc
Use a linked list (or any other sophisticated data structure)
|
It is doable but tricky.
Cassandra knows which nodes are part of a cluster based on the cluster name. If your cluster name is not the same for both cluster, first step would be to rename your clusters to have the same name.
The second step is to take one cluster as the parent cluster where you will join the other nodes to it. Let's call this the parent cluster and the other one the joining cluster. In this step, define the keyspace and column families that exist in the joining cluster to be the same as the parent cluster. At this stage, your parent cluster has the keyspace definition but no data from the joining cluster. On the other hand, in the joining cluster, you will have to define the keyspace that exists on parent cluster the same way.
Your nodes in both clusters have to have public interfaces in order to be able to communicate. I am not sure how this is done on Google Cloud, but I am sure you can give public interfaces to your instances in both accounts. Then you treat these two clusters as two different datacenters in Cassandra notion, and once all the machines can access Cassandra ports on each other, change cassandra.yaml on each cluster and add other cluster's nodes to it. If you are using property file snitch to manage your replication, you need to update that as well so that it recognizes all nodes and their location.
Finally, do a rolling restart and alter the keyspace replication factors to replicate the way you want.
Updates:
Adding clarification for Daniel Compton's point: when the public interfaces are enabled, you need to properly set up encryption for replication between those public interfaces, as well as restrict access to them to only the IPs of your Cassandra nodes.
Renaming the cluster is possible and I have exercised this whole process once before.
To rename cluster, change the cluster name in cassandra.yaml. Then change system.local table on each node to reflect that change and do a rolling restart. details of renaming the cluster can be found here:
cassandra - Saved cluster name Test Cluster != configured name |
Currently the @AuthorizedFeignClient annotation is only available for microservice applications using UAA as the authentication type, but not for gateways or the UAA server itself!
I guess you were looking for the annotation in the gateway, or in the UAA server.
Why is this like this? For the gateway it is because the gateway already has a couple of responsibilities, so building composite logic in there is not a good idea.
If you generate a microservice (not a gateway, not a UAA server), you should have the client package in your Java root with this annotation, as well as some more configuration (Feign client config, load-balanced resource details...).
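For reference, a client interface using that annotation typically looks something like the sketch below ("foo" as the target service name and FooDTO are placeholders; the @AuthorizedFeignClient annotation itself comes from the generated client package of your microservice):
import java.util.List;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@AuthorizedFeignClient(name = "foo")
public interface FooClient {

    // calls GET /api/foos on the "foo" microservice; the annotation takes care of attaching the OAuth2 token
    @RequestMapping(method = RequestMethod.GET, value = "/api/foos")
    List<FooDTO> getFoos();
}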
You can copy those to your gateway to make it work there.
You can copy them to the UAA, too. More than that, this will even work, but with one weird quirk: when the UAA asks service "foo" for some data, it will first ask the UAA (itself) for client-credentials authentication, like performing a query to itself, while it could just grant the access itself. There is no elegant way to do it, and I didn't want to keep it in this uncool way in JHipster, so the annotation is for microservices only. |
Even though both algorithms make use of SHA-256, they are fundamentally different:
RS256 (RSASSA-PKCS1-v1_5 using SHA-256) relies on generating a digital signature with a specific private key.
HS256 (HMAC using SHA-256) relies on a shared secret plus the cryptographic hash function (SHA-256) to generate a message authentication code (MAC).
Validating tokens issued with each of the previous algorithms implies that for RS256 the entity doing the validation knows the public key associated with the private key used for signing, while for HS256 it implies that the entity knows the shared secret.
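To make the HS256 side concrete, here is a minimal, hand-rolled verification sketch (illustration only; in practice you would rely on a maintained JWT library). It recomputes the HMAC over header.payload with the shared secret and compares it against the signature carried in the token:
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class Hs256Verifier {
    public static boolean verify(String token, byte[] sharedSecret) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 3) {
            return false; // not a signed JWT in compact form
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
        // the MAC is computed over base64url(header) + "." + base64url(payload)
        byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));
        byte[] provided = Base64.getUrlDecoder().decode(parts[2]);
        // constant-time comparison to avoid leaking information through timing
        return MessageDigest.isEqual(expected, provided);
    }
}
Anyone holding sharedSecret can produce a valid signature the same way, which is exactly why HS256 is unsuitable when verification happens in an untrusted environment.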
Choosing between one versus the other is then usually motivated by the characteristics of the applications that will validate the issued tokens.
If you want to validate a token on a browser-based application, the use of HS256 is automatically ruled out because that would imply you would have to include the shared secret in a place anyone would have access, making it completely useless because now anyone with access to the code could issue their own signed tokens.
In conclusion, if token validation is done on a controlled environment (server-side) you may go with HS256 because it's simpler to get started. However, if token validation is done on hostile environment you need to go with an algorithm based on asymmetric cryptography; in this case that would be RS256. |
I believe the issue is the double router-outlet as the comments above suggest. I did something like this in my project.
This was my solution:
I first created a service to determine the state of the login (I realize that an event emitter is not good practice; I'm working on changing this to a BehaviorSubject):
GlobalEventsManager
import { EventEmitter, Injectable } from '@angular/core';
@Injectable()
export class GlobalEventsManager {
public showNavBar: EventEmitter<boolean> = new EventEmitter<boolean>();
}
I used an authguard for my routes that checks that the login token is not expired. I used the angular2-jwt library for this.
AuthGuard
import { Injectable } from '@angular/core';
import { Router, CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router';
import { tokenNotExpired } from 'angular2-jwt';
import { GlobalEventsManager } from './GlobalEventsManager';
@Injectable()
export class AuthGuard implements CanActivate {
constructor(private router: Router, private globaleventsManager: GlobalEventsManager) {}
canActivate(next: ActivatedRouteSnapshot,
state: RouterStateSnapshot) {
if (tokenNotExpired('currentUser')) {
this.globaleventsManager.showNavBar.emit(true);
return true;
} else {
localStorage.removeItem('currentUser');
this.router.navigate(['/login']);
return false;
}
}
}
When a user logs in using my login component, I set the GlobalEventsManager to true
Login
constructor(
private router: Router,
private authenticationService: AuthenticationService,
private globaleventsManager: GlobalEventsManager) { }
ngOnInit() {
// reset login status
this.authenticationService.logout();
this.globaleventsManager.showNavBar.emit(false);
}
login() {
    this.loading = true;
    this.authenticationService.login(this.model.username, this.model.password)
        .subscribe( (result) => {
            this.globaleventsManager.showNavBar.emit(true);
            this.router.navigate(['/']);
        });
}
In my navbar I subscribe to the GlobalEventsManager and set a boolean property to the result:
Navbar
import { Component } from '@angular/core';
import { Router } from '@angular/router';
import { NavActiveService } from '../../../services/navactive.service';
import { GlobalEventsManager } from '../../../services/GlobalEventsManager';
@Component({
moduleId: module.id,
selector: 'my-navbar',
templateUrl: 'navbar.component.html',
styleUrls:['navbar.component.css'],
})
export class NavComponent {
showNavBar: boolean = true;
constructor(private router: Router,
private globalEventsManager: GlobalEventsManager){
this.globalEventsManager.showNavBar.subscribe((mode:boolean)=>{
this.showNavBar = mode;
});
}
}
In my navbar HTML I can now create two navbars. One navbar simply has a login nav, and when my authguard notices that a token is expired or a user is not logged in, it displays the navbar with just login as an option. Once logged in, the GlobalEventsManager value changes and my logged-in navbar is displayed:
Navbar HTML
<div *ngIf="showNavBar">
//My logged in navbar
</div>
<div *ngIf="!showNavBar">
//navbar that just displays login as an option
</div>
|