Sunday, December 30, 2012

Risk-Based Access Control Part Two: Client Side and Administration of VIP

As promised, a follow-up post on managing risk with VIP.  The first part covered the web services call to VIP User Management.  Here we'll cover the fingerprinting of the device.  Simply add a little JavaScript to your pre-risk-analysis page (typically a login page):


<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Login</title>
<script type="text/javascript" src="https://vipuserservices.verisign.com/vipuserservices/static/v_1_0/scripts/iadfp.js"></script>
</head>


A good client-side developer might make the SOAP call into VIP User Services directly from here.  I used a server-side intermediary to make the web service call, so I posted the fingerprint data as part of the form:


<input type="hidden" id="deviceFingerprint" name="deviceFingerprint" value="" />
<input type="submit" onclick="document.getElementById('deviceFingerprint').value=IaDfp.readFingerprint();return true;" value="Sign-In" />
 

I'll cover what I did with the return values, feeding them into Symantec O3 in Part III.  Let's go over the management side here in the meantime.  VIP Intelligent Auth is managed from the same VIP console used for the one-time-pin (OTP) and credential management.

Although Symantec doesn't offer the most knobs and dials among risk vendors, one could argue that the simplicity of its black-box system is sufficient for most risk administrators: a simple slider determines the threshold at which a request is deemed "risky".


You can also manage whitelisted and blacklisted IP addresses:

As well as countries on your bad-boy list:


And that's about it.  Symantec keeps most of its risk algorithms black boxed.  Some testing will help you to understand the typical values for risk score (0-100) and how you can factor that into your authorization policies.  
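To make that testing actionable, here's a minimal sketch of folding the returned 0-100 score into an authorization verdict.  The threshold values are hypothetical and would be tuned from your own observations, not anything Symantec prescribes:

```java
public class RiskPolicy {

    enum Verdict { PERMIT, STEP_UP, DENY }

    // Hypothetical thresholds; tune after observing typical VIP scores
    static final int STEP_UP_THRESHOLD = 60; // require an OTP above this
    static final int DENY_THRESHOLD    = 90; // refuse the request above this

    static Verdict evaluate(int riskScore) {
        if (riskScore >= DENY_THRESHOLD)    return Verdict.DENY;
        if (riskScore >= STEP_UP_THRESHOLD) return Verdict.STEP_UP;
        return Verdict.PERMIT;
    }

    public static void main(String[] args) {
        // A mid-range score passes without a step-up challenge
        System.out.println(evaluate(51)); // prints PERMIT
    }
}
```

A fail-closed variant would also map evaluation errors into STEP_UP or DENY rather than letting them default to PERMIT.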

Monday, November 26, 2012

Risk-Based Access Control: Part One

A couple of years ago, I published an approach to doing risk-based access control using Oracle Adaptive Access Management (OAAM).

http://fusionsecurity.blogspot.com/2011/01/risky-business.html

More recently, I've had a chance to play with Symantec's VIP (Identity Protection) user services, best known for its two-factor one-time-pin (OTP) service.  VIP also includes a risk component that can collect footprint information about the client and return a risk score back to the PEP or PDP for enforcement.  VIP user services is divided into a few different areas:


  • Query Services: Provides information on the end user, including when the credential was last bound to the user and when it last authenticated.
  • Management Services: CRUD operations on users, adding credentials to those users
  • Authentication Services: Validate OTPs and evaluate risk
These are SOAP-based services, and the WSDLs are available for download from the VIP Management Console.  I used Axis2 to convert the WSDL to Java stubs and connect to the service.  Here is a snippet:


RiskScoreType riskScore = null;

// Build the risk evaluation request from context gathered at login time
EvaluateRiskRequest riskRequest = new EvaluateRiskRequest();
EvaluateRiskRequestType riskType = new EvaluateRiskRequestType();

IpAddressType remoteIpAddress = new IpAddressType();
remoteIpAddress.setIpAddressType(ipAddress);

RequestIdType myRequestId = new RequestIdType();
myRequestId.setRequestIdType(requestId);

UserIdType myUserIdType = new UserIdType();
myUserIdType.setUserIdType(user);

UserAgentType myUserAgentType = new UserAgentType();
myUserAgentType.setUserAgentType(userAgent);

// The device fingerprint posted from the login page
IAAuthDataType myIAAuthDataType = new IAAuthDataType();
myIAAuthDataType.setIAAuthDataType(fingerprint);

riskType.setIp(remoteIpAddress);
riskType.setRequestId(myRequestId);
riskType.setUserId(myUserIdType);
riskType.setUserAgent(myUserAgentType);
riskType.setIAAuthData(myIAAuthDataType);
riskRequest.setEvaluateRiskRequest(riskType);

Boolean isRisky = true;
try {
    EvaluateRiskResponse response = authServiceStub.evaluateRisk(riskRequest);
    System.out.println("Status: " + response.getEvaluateRiskResponse().getStatus());
    isRisky = response.getEvaluateRiskResponse().getRisky();
    System.out.println("Risky? " + isRisky);
    System.out.println("Policy Version: " + response.getEvaluateRiskResponse().getPolicyVersion());
    System.out.println("Risk Reason: " + response.getEvaluateRiskResponse().getRiskReason());
    riskScore = response.getEvaluateRiskResponse().getRiskScore();
} catch (Exception e) {
    // Fail closed: isRisky stays true if the evaluation errors out
    e.printStackTrace();
}
The response I get back is something like:

Risky? false
Policy Version: 1.0
Risk Reason: Device recognition, Device Reputation
Risk Score: 51


The risk score is based on configurable settings on the VIP management side.  I'll discuss the VIP policy side in the next part of this series.


Monday, November 19, 2012

WWBD, My First Jailbreak, MAM Complement to IAM

At CSA Congress earlier this month, I told the story about this creepy dude sitting next to me on the plane down to Orlando.  I don't know if it was the bad Scottish accent, the strange dental work, or the feeling that he was shoulder-surfing me while I was working on the iPad.  I got a picture of him leaving the airport...


Yea, turns out that was MY iPad he got off with while I was in the bathroom on the plane.  I called my company and they were able to wipe the device with MDM.  I have to wonder, though: how long would it take any good super-villain to get the data they wanted off a device?  I had to assume he had the device password, since he probably saw me punch it in on the plane.

I needed to present again off my backup iPad, a first-generation model.  It doesn't support mirroring, so I would need to jailbreak it anyway in order to hack the device to support display to VGA.  It took me all of 10 minutes to do so.  I thought back to the incident on the plane.  What would Bane do?  He probably had the device broken and all of the data pulled off before it was wiped.  Then I had to ask myself: what data did those mobile apps keep on the device?  It's not always obvious to me what apps are storing on the file system.  I do know that developers make it easy to access applications after you've signed in once, so you don't have to do it again.  How could I have mitigated this risk once my device was compromised?

I've found that Mobile Application Management (MAM) paired with a robust access management solution can help in a lot of ways.   With the MAM solutions I've evaluated, the theme is essentially a corporate app store.  Applications downloaded through the MAM have been vetted through the company and thus have some degree of governance.  MAM can apply application specific policy for those apps that come from the MAM app store.  Some of these policies are very MDMish in nature (prevent access when device is jailbroken, or prevent apps from storing data locally for instance), but specific to the app.  This is nice on the BYOD front, as the company can protect its interests, without wiping out someone's personal data that happens to be co-located with the corporate apps.  It also helps with the Bane scenario, because for the apps that matter to the company, data is always remote from the device.

The more interesting part to me, though, is the authentication aspect.  The ability to require authentication per application provides a consistent authentication experience, linking the MAM with the corporate access management system.  Through a web view and SAML approach, one can authenticate to the app, propagate identity to the service provider, and get SSO within the device.  Once you're linked in with the IAM, then you can start applying context-based authorization to the scenario.  How did the user authenticate?  What device?  Where was the user when he/she requested the app?  What is the historical profile of the user accessing this application?   What is the environmental risk condition presently?  What time of day was it accessed?  How sensitive is the application or content being accessed?  Using a risk engine, these factors can be leveraged to generate a verdict that might trigger a 2nd factor to be required before accessing the cloud resources.  Bane might have device access, but he isn't going to get to the more sensitive apps.
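The factors above might be combined roughly like this sketch; the weights and threshold are invented for illustration and don't reflect any particular risk engine:

```java
public class ContextRisk {

    // Invented weights for the contextual signals discussed above
    static int score(boolean knownDevice, boolean unusualLocation,
                     boolean offHours, int appSensitivity /* 1..5 */) {
        int s = 0;
        if (!knownDevice)    s += 30; // unrecognized device
        if (unusualLocation) s += 25; // outside the user's historical profile
        if (offHours)        s += 10; // odd time of day
        s += appSensitivity * 7;      // sensitive apps raise the bar
        return Math.min(s, 100);
    }

    // Hypothetical threshold at which a 2nd factor is demanded
    static boolean requiresSecondFactor(int score) {
        return score >= 50;
    }

    public static void main(String[] args) {
        // Stolen device, wrong location, 3am, most sensitive app
        int baneScore = score(false, true, true, 5);
        System.out.println(baneScore + " -> step-up? " + requiresSecondFactor(baneScore));
    }
}
```

The point isn't the arithmetic; it's that the verdict is computed per request from live context, so Bane holding the device password still can't reach the sensitive apps.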




Wednesday, October 31, 2012

Catch me at CSA Congress in Orlando November 7th

I'm going to be speaking at the Cloud Security Alliance (CSA) Congress in Orlando on Nov 7th.  Here is the brochure for the conference:

http://www.misti.com/PDF/174/20920/CSA12%20Bro_S.pdf

The topic is "Enterprise Insecurity...Mobile Devices Collide with the Cloud."  I will be touching on Mobile Application vs. Mobile Device Management (MDM), limitations with the current pure SAML standard when dealing with mobile devices and cloud, preventing side-door access and other timely topics.


If you have some time to stop by and say hi, please do.

Monday, October 29, 2012

Preventing Data Loss to the Wild

I recently got a preview of a vendor's implementation of policy-based access control integrated with a Data Loss Prevention (DLP) and encryption solution, providing a very compelling story around protecting files posted to cloud service providers like DropBox and SalesForce, as well as internal content management like SharePoint.  This is especially relevant to existing customers of this vendor's DLP solution, or to prospective customers of both an SSO/WAM/Federation/Access solution and DLP.



Does that picture give you pause?  The vendor had a number of modes including:

- Encrypt files when being uploaded to these cloud service providers
- Passive DLP monitoring
- DLP classification based on or in addition to encryption

What do these modes mean?  Let's start with encryption in the context of SalesForce.  Say you upload a file in the encrypt-only mode.  If someone outside the organization gets ahold of the file through some means, without the key they would be unable to read it.  The vendor has a hosted and managed PKI service, so there will be a very light footprint for entry.
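To illustrate what encrypt-on-upload buys you, here is a generic javax.crypto sketch.  This is not the vendor's implementation, and the key is generated locally here, whereas the vendor would draw it from its hosted PKI service:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class UploadEncryptor {

    // Encrypt or decrypt with the given AES key, depending on mode
    static byte[] crypt(int mode, SecretKey key, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(mode, key);
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        // Illustration only: a real deployment sources keys from managed PKI
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        byte[] plaintext = "quarterly forecast".getBytes("UTF-8");
        byte[] ciphertext = crypt(Cipher.ENCRYPT_MODE, key, plaintext);
        // ciphertext is what actually lands at the cloud service provider;
        // without the key, anyone who obtains the file cannot read it

        byte[] roundTrip = crypt(Cipher.DECRYPT_MODE, key, ciphertext);
        System.out.println(new String(roundTrip, "UTF-8")); // prints "quarterly forecast"
    }
}
```

Because the key never travels with the file, a leaked copy at the SP is just noise to an outsider.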

With DLP, policies can be applied within the policy-based access control system.  DLP renders a verdict that contains a score, and policies keyed to that score can block or redact content.  Typically an organization would start out in passive mode to monitor activity, giving it a way to tune the policies without adversely affecting operations.

If you're evaluating a SSO & access control solution, you should consider this solution in your evaluation criteria if there's content that you worry about leaving the organization.  The vendor indicated that it would be available by end of 2012 calendar year.

Thursday, October 18, 2012

How do I Prevent Side-Door Access?

Despite all of the best laid plans of an organization, it seems like lines-of-business and individuals are still going to the internet to leverage services that put the organization at risk.  "This file's too large for email, I'm just going to throw it up on my personal Box account."


That's just the accidental scenario.  There's the more nefarious 'Kinko's run', where the terminated employee heads to the local internet shop to download the contact list from online CRM that hasn't been de-provisioned.   Like most solutions, there's a people/product/process side to making things easier for employees, while keeping employers out of trouble.

From the product side, many cloud identity providers can leverage SAML to prevent side-door access to cloud service providers.  The Service Provider always redirects back to the enterprise for authentication.  Assuming the employee has been removed from Active Directory or whatever mechanism the Identity Provider uses, access is cut off.

For many of the SaaS Service Providers that don't support SAML yet, some cloud identity solutions provide Form POST SSO.  This Form POST can be paired with a provisioning system to prevent side-door access using a technique called 'password cloaking'.  Essentially, the user's password at the service provider is unknown to the user.  It is periodically reset by the provisioning tool to ensure that it stays in sync between the Service Provider and the credential vault the Identity Provider uses for SSO.


Users authenticate to the Identity Provider with their enterprise credentials and get SSO to the Service Provider without knowing the password to that account.  This approach isn't without its problems.  Reset password wizards at the service provider can be used to circumvent the cloaking mechanism.   This is where the people and process come in.
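The rotation mechanics might look like the sketch below, with the service-provider reset and vault-store calls left as commented placeholders since those APIs vary by product:

```java
import java.security.SecureRandom;

public class PasswordCloaker {

    private static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%";
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a password the user never sees and could not guess
    static String randomPassword(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    // Periodic rotation: reset at the SP, then update the IdP's credential
    // vault so Form POST SSO keeps working. Both calls are placeholders for
    // whatever admin APIs your provisioning tool and vault expose.
    static void rotate(String spAccount) {
        String newPassword = randomPassword(24);
        // serviceProvider.adminResetPassword(spAccount, newPassword);
        // credentialVault.store(spAccount, newPassword);
    }
}
```

Rotating on a schedule also limits the damage window if the SP-side reset wizard is ever used to circumvent the cloak.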

If users can be given a better experience getting access to shared file services like Box through an enterprise account governed by a Cloud Identity solution, most would opt for that approach, knowing that they're being good citizens and that it's easier anyway.  Proper education about available services will build the critical mass necessary for adoption.

Tuesday, October 16, 2012

Just-In-Time Versus Asynchronous Provisioning

There are multiple options for getting identities into and out of external service providers (SP).  This is often dependent on what the SP supports.  Some SPs support Just-in-Time (JIT) provisioning, which means the service provider will consume SAML attributes upon login and either create the user if he/she doesn't already exist in the SP's user store, or update the local account if any of the attributes differ from what is sent in the assertion.  This is secured by the inherent trust of the SAML authentication.  The main downside of JIT provisioning is that you typically cannot de-provision a user.

Asynchronous provisioning utilizes some kind of scheduled job or user interactive method of synchronizing identities from one or more sources to the service provider.  This typically involves a polling of the authoritative user store and pushing any adds, deletes, inactivates, changes, etc. into the target.  The target service provider must provide some kind of provisioning service in order for this to work.   Many of the provisioning vendors have implemented connectors/adapters for SPs like Google, WebEx and SalesForce using their respective APIs.  Each of these are one-off implementations and most SPs don't provide any kind of public provisioning hooks today.

An OASIS standard called SPML for provisioning was created over 10 years ago, but it has lagged in adoption (so much so that I won't bother to even spell out the acronym).  Its failure was likely due to its lack of granularity for updating profiles and the general complexity of implementing a SOAP-based service.  Simple Cloud Identity Management (SCIM) is a likely successor to SPML.  There's plenty of information on SCIM at http://www.simplecloud.info; its use of REST, and the timely need for it given the proliferation of cloud service providers, will drive higher adoption.
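To show how light the REST approach is, here's a sketch that assembles a minimal SCIM 1.x user-creation payload.  The schema URN follows the early SCIM core spec; the endpoint path and attribute choices are illustrative, so check your provider's docs:

```java
public class ScimUserCreate {

    // Build a minimal SCIM 1.x user payload by hand; a real client would
    // use a JSON library and POST this to the SP's /Users endpoint
    static String createUserJson(String userName, String givenName, String familyName) {
        return "{"
            + "\"schemas\":[\"urn:scim:schemas:core:1.0\"],"
            + "\"userName\":\"" + userName + "\","
            + "\"name\":{\"givenName\":\"" + givenName + "\","
            + "\"familyName\":\"" + familyName + "\"}"
            + "}";
    }

    public static void main(String[] args) {
        // POST this body with Content-Type: application/json to the SP's
        // Users endpoint (the exact path varies by provider)
        System.out.println(createUserJson("eeaston", "Ed", "Easton"));
    }
}
```

Compare that to hand-rolling a SOAP envelope against an SPML WSDL and the adoption argument more or less makes itself.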

This matters because the natural progression of these cloud providers will be the aggregation of these services into packaged offerings.  Identity propagation will be key to enabling these aggregations, and you can't propagate identity easily without having an account at the SP.  Thus, end-to-end user lifecycle management will require that accounts are provisioned and de-provisioned at the SPs by that 3rd-party aggregator.  SCIM enablement will be a big land grab for provisioning vendors going forward.

Tuesday, October 2, 2012

Example of Context-Based Authorization Reducing Number of Policies

Access control requirements have morphed from URL to group-based policy to an attribute/risk/role-based object and data model.   Objects in portals no longer have unique URLs (WebParts in Sharepoint for instance).  Policy enforcement of web services, REST and data are becoming more prevalent as requirements.  A platform that only does HTTP cannot be centralized if it doesn’t have a story for these other protocols.  

Trying to model authorization with a traditional Web Access Management (WAM) product is tedious, especially if it can't accept contextual attributes from the calling component or Policy Enforcement Point (PEP).

Let's say you have a financial institution that has a number of reports: Summary and Detailed, each with regional (East and West) and content (Stocks and Bonds) variants.  2x2x2 = 8 different reports.  Let's pretend we have the following actors:


Name           Type             Level   Region  Qualified
-------------  ---------------  ------  ------  ---------------
Charlie Gild   Customer         Gold    -       -
Chad Silva     Customer         Silver  -       -
Ed Easton      Employee-Broker  N/A     East    Stocks
Ender Wiggins  Employee-Broker  N/A     West    [Stocks, Bonds]
Paul           Partner          N/A     -       -

The institution has the following policies:



  • Employee-Brokers can only view account reports within their own region and what they are qualified to trade (stocks, bonds, or both)
  • Gold-level customers can view detailed and summary account info and can transfer funds up to 20K
  • Silver customers can only see summary account info and can transfer funds up to 10K

In order to implement this with a URL-structure today, one would need a large number of policies:

PERMIT:
/summary/bonds/east -> [employees.region=east AND employees.qualified=bonds | both ] OR partners OR customers
/summary/bonds/west -> [ employees.region=west AND employees.qualified=bonds | both ] OR partners OR customers
/summary/stocks/east -> ...
/summary/stocks/west -> ...
/detailed/bonds/east -> [ employees.region=east  AND employees.qualified=bonds | both ] OR customers.level=gold
/detailed/bonds/west -> ...
/detailed/stocks/east -> …
...


Contrast this with context aware authorization:

PERMIT (view, Employees, Reports) if Report.Type IN [Subject.Qualified] AND Report.Region = Subject.Region
PERMIT (view, Customers, Detailed) if Subject.Level=gold
PERMIT (view, [Customers, Partners] , Summary)
PERMIT (transfer, Customers, Funds) if amount < 10K
PERMIT (transfer, Customers, Funds) if Customer.level = gold AND amount < 20K

Context-based policies are much more dynamic and easier to manage.  These policies assume that the PDP can:


1) Accept context from the PEP that can be used in the policy.  The Report.Region is an example of the context attribute which is known by the content management system that holds the report as metadata.

2) Lookup attributes at decision time from a PIP.  Customer.level is an example of a persisted entitlement for the Customer.

3) Evaluate a hierarchy of resources.  Detailed and Summary would inherit from 'Reports' in the first policy above.

Let's put it to the test.  Say Ed Easton requests a detailed east-region bonds report.  The first policy would be the most applicable.  The last part of the constraint would return true, as the Report.Region (East) matches what Ed has in the directory (East).  However, the Report.Type (Bonds) is not in the list of instruments he is qualified for.  Ed is denied access to that report.
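The worked example above can be sketched in a few lines; the names are illustrative and not from any particular PDP's API:

```java
import java.util.Arrays;
import java.util.List;

public class ReportPolicy {

    // The first PERMIT rule above: Report.Type IN [Subject.Qualified]
    // AND Report.Region = Subject.Region
    static boolean brokerCanView(String brokerRegion, List<String> qualified,
                                 String reportRegion, String reportType) {
        return qualified.contains(reportType) && brokerRegion.equals(reportRegion);
    }

    public static void main(String[] args) {
        // Ed Easton: East region, qualified for Stocks only.
        // Region matches, but Bonds isn't in his qualified list: deny.
        System.out.println(brokerCanView("East", Arrays.asList("Stocks"),
                                         "East", "Bonds")); // prints false

        // Ender Wiggins: West region, qualified for both: permit.
        System.out.println(brokerCanView("West", Arrays.asList("Stocks", "Bonds"),
                                         "West", "Bonds")); // prints true
    }
}
```

One rule covers all eight report variants because the report's region and type arrive as context instead of being baked into eight URLs.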

These examples are a little contrived, but hopefully they help illustrate the point.

Monday, October 1, 2012

Comparing OES 10g with OES 11gR2

Oracle Entitlements Server has been through more name changes and re-writes than any other product in history.  Okay, that's probably a reach, but it's had its fair share.  The most recent release is OES 11gR2 (11.1.2).  Most implementations of OES are version 10g, which is based on the BEA implementation.  Oracle set out with some ambitious goals for OES and has met a number of milestones in this rev.  This post outlines some of the differences.  Documentation home for 11gR2 is here.

Conceptually, OES 11gR2 is the same: you have a standalone Policy Decision Point (PDP) that handles authorization requests from web service and RMI clients (language agnostic), and a Java-based embedded PDP/PEP (the 'E' being Enforcement) API for any Java container.  Both models can take context in the form of attributes from applications and use those context values in policy decisions.  Both models can pull attributes from Policy Information Points (PIPs) like databases and LDAP, usually querying on Subject.

Implementation of this policy engine differs significantly, however.  Some of my favorite improvements:
  • Constraint editor: Where in 10g you had a free-form text box and had to know the expression language and its functions, in 11gR2 you have an editor that gives you list operations, drop-down boxes for type-specific expressions, and lists of available attributes.
  • Policies can be written directly against external groups and users:  In 10g, you basically had to replicate users and groups in the OES 10g Identity Store in order to compose policy against these entities.  In 11gR2, you can browse external directories and write policy against external users & groups.
  • Improved runtime and management APIs: The PEP API in 11gR2 has been greatly simplified, without sacrificing functionality.  The BLM API in 10g was simply a mess, versus the 11gR2 Management API, which is intuitive.
Some potential reasons for not moving to 11gR2:

  • The 10g Policy Administration Point (PAP) ran on Tomcat (though it wasn't a pure Java EE solution) as well as WebLogic, whereas 11gR2 runs only on WebLogic today.  You could also use a number of non-Oracle databases, whereas 11gR2 supports only Derby and Oracle.
  • The WebLogic authorization story changed significantly in 11gR2.  Instead of leveraging the Authorization and Role Mapping providers in WebLogic, OES plugs in at the JPS/OPSS layer.  For those using OES for WebLogic container authorization, this could be a challenge (not that it was frictionless before).  Long-term, the 11gR2 solution is better, with auto-deployment of applications into OPSS automatically being managed by OES when they share the same policy store.  
Oracle has definitely left its fingerprint on this revision of OES, with probably little of the CrossLogix & BEA codebase remaining.  Where some other Oracle ADF consoles can be pretty clunky, the Authorization Policy Management (APM) console for OES is fairly good, IMO.

While the initial 11gR1 (11.1.1.5) release had some missing pieces from 10g such as Policy Simulation and SharePoint support, the 11gR2 release has tidied up those deficiencies.  My wish list going forward is better tooling for development, especially in the area of data security (handling multiple predicates in an obligation on the client side).