Wednesday, July 27, 2016

Sylvester McMonkey McBean's Role Mining Machine



Many vendors promise that they have the best tool to perform role mining and consolidation and fix all that is wrong with your RBAC approach.  I think this pitch is akin to Sylvester McMonkey McBean's promise to the Sneetches in the Dr. Seuss classic:


Each role now has a star upon thars!  For those unfamiliar with The Sneetches by Dr. Seuss, Mr. McBean makes a tidy profit taking stars on and off Sneetches as the perception of whether the star is good waxes and wanes.  Enterprises often trade in their inefficient RBAC models for a newer model that, like a new automobile, starts to depreciate quickly.  A couple of years later, they are back at McMonkey McBean's table.

One of the tools a recent client was considering was an older identity analytics product that featured the ability to generate a confusion matrix as part of its role consolidation.  I found that feature rather appropriate for the output of this particular McMonkey McBean Role Machine.  Rather than visualizing the performance of the algorithm, it should be used to measure the business' reaction:


As Alessandro Colantonio writes, "Automatically elicited roles often have no connection to business practice."  Role mining is as much art as it is technology, and crafting a role model that works for the business requires iteration, buy-in, flexibility, and visualization.  The assertion that a tool can spit out a new role model that can be put into practice and maintained over time is a fallacy.  Companies are trying to address the symptoms (too many roles, hard-to-understand entitlements) rather than the root cause, which I think is primarily that, beyond the most rudimentary scenarios, RBAC cannot properly model access without running into a role-permutation problem.

Context-driven and attribute-based (ABAC) models, in conjunction with RBAC, offer a modern approach that can limit role explosion and eliminate the need to restructure your role and entitlement model frequently.  A simple example from a recent client: they modeled seniority as separate roles, Analyst and Senior Analyst, for instance, with the Senior Analyst holding additional entitlements.  Seniority is contextual, as it could be based not just on years of experience in an area, but on the amount of training and how recently that training was conducted.  A policy model that leverages one logical role (Analyst), with the delta set of entitlements driven by policy governed by data in the LMS (training) system, for example, would be a more dynamic policy model and would reduce the number of roles being managed.
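To make the Analyst example concrete, here is a minimal sketch of that policy in Java.  Everything here is an assumption for illustration: the entitlement names, the 5-year and 2-year thresholds, and the `TrainingRecord` attributes standing in for what an LMS would supply.  The point is that there is one logical role, and the senior-level delta is computed by policy rather than modeled as a second role.

```java
import java.time.LocalDate;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: one logical "Analyst" role, with the senior-level
// entitlement delta granted by policy over LMS (training) attributes
// rather than by a separate "Senior Analyst" role.
public class AnalystPolicy {

    static final Set<String> BASE = Set.of("read:cases", "write:notes");
    static final Set<String> SENIOR_DELTA = Set.of("approve:cases", "read:audit");

    // Attributes we assume the LMS can supply for a user
    public record TrainingRecord(int yearsInArea, LocalDate lastAdvancedTraining) {}

    // Illustrative policy: senior entitlements require 5+ years in the area
    // AND advanced training refreshed within the last 2 years.
    public static Set<String> entitlementsFor(TrainingRecord t, LocalDate today) {
        Set<String> grants = new HashSet<>(BASE);
        boolean trainingCurrent = t.lastAdvancedTraining() != null
                && !t.lastAdvancedTraining().isBefore(today.minusYears(2));
        if (t.yearsInArea() >= 5 && trainingCurrent) {
            grants.addAll(SENIOR_DELTA);
        }
        return grants;
    }
}
```

When an analyst's training lapses, the senior delta disappears on the next evaluation; nobody has to move the user between roles.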

Security administrators understand giving someone access by putting them in an Active Directory group.  However, until the executives realize they are in a spiral of filling McMonkey McBean's coffers and invest in the software and experience for proper policy modeling, they will continue to re-learn the lesson that the Sneetches unfortunately did not.

Sunday, December 30, 2012

Risk-Based Access Control Part Two: Client Side and Administration of VIP

As promised, a follow-up post on managing risk with VIP.  The first part covered the web services call to VIP User Management.  Here we'll cover the fingerprinting of the device.  Simply add a little JavaScript to your pre-risk-analysis page (typically a login page):


<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Login</title>
<script type="text/javascript" src="https://vipuserservices.verisign.com/vipuserservices/static/v_1_0/scripts/iadfp.js"></script>
</head>


A good client-side developer might make the SOAP call into VIP User Services directly from here.  I used a server-side intermediary to make the web service call, so I posted the fingerprint data as part of the form:


<input type="submit" onclick="document.getElementById('deviceFingerprint').value=IaDfp.readFingerprint();return true;" value="Sign-In" />
 

I'll cover what I did with the return values, feeding them into Symantec O3, in Part III.  Let's go over the management side here in the meantime.  VIP Intelligent Auth is managed from the same VIP console used for one-time password (OTP) and credential management.

Although Symantec doesn't feature the most knobs and dials among risk vendors, one could argue that the simplicity of the black-box system is sufficient for most risk administrators.  A simple slider determines the threshold of risk before a request is deemed "risky".


You can also manage whitelisted and blacklisted IP addresses:

As well as countries on your bad-boy list:


And that's about it.  Symantec keeps most of its risk algorithms black-boxed.  Some testing will help you understand the typical values for the risk score (0-100) and how you can factor it into your authorization policies.
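As a sketch of what "factoring the score into your authorization policies" might look like, here is a hypothetical mapping of the 0-100 score to a decision.  The threshold values are my assumptions, not anything Symantec ships; the slider in the console plays a similar role on the VIP side.

```java
// Hypothetical sketch: folding VIP's 0-100 risk score into an
// authorization decision at the PEP/PDP. Thresholds are illustrative.
public class RiskPolicy {

    public enum Decision { ALLOW, STEP_UP, DENY }

    // Below the low-water mark, let the session through; in the middle,
    // challenge with a second factor (e.g. a VIP OTP); above the high
    // mark, deny outright.
    public static Decision decide(int riskScore) {
        if (riskScore < 40) return Decision.ALLOW;
        if (riskScore < 70) return Decision.STEP_UP;
        return Decision.DENY;
    }
}
```

With these assumed thresholds, the score of 51 in my sample response above would trigger a step-up challenge rather than an outright denial.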

Monday, November 26, 2012

Risk-Based Access Control: Part One

A couple of years ago, I published an approach to doing risk-based access control using Oracle Adaptive Access Management (OAAM).

http://fusionsecurity.blogspot.com/2011/01/risky-business.html

More recently, I've had a chance to play with Symantec's VIP (Validation and ID Protection) user services, which is best known for its two-factor one-time password (OTP) service.  VIP also includes a risk component that can collect fingerprint information about the client and return a risk score back to the PEP or PDP for enforcement.  VIP user services is divided into a few different areas:


  • Query Services: Provides information on the end user, when the credential was last bound to the user, and when the credential was last authenticated
  • Management Services: CRUD operations on users, adding credentials to those users
  • Authentication Services: Validate OTPs and evaluate risk

These are SOAP-based services, and the WSDLs are available for download from the VIP Management Console.  I used Axis2 to convert the WSDL to Java stubs and connect to the service.  Here is a snippet:


// Evaluate risk for this request via the VIP Authentication Service stub
RiskScoreType riskScore = null;
EvaluateRiskRequest riskRequest = new EvaluateRiskRequest();
EvaluateRiskRequestType riskType = new EvaluateRiskRequestType();

// Wrap the caller's IP address, request id, user id, user agent, and the
// device fingerprint collected by iadfp.js on the login page
IpAddressType remoteIpAddress = new IpAddressType();
remoteIpAddress.setIpAddressType(ipAddress);
RequestIdType myRequestId = new RequestIdType();
myRequestId.setRequestIdType(requestId);
UserIdType myUserIdType = new UserIdType();
myUserIdType.setUserIdType(user);
UserAgentType myUserAgentType = new UserAgentType();
myUserAgentType.setUserAgentType(userAgent);
IAAuthDataType myIAAuthDataType = new IAAuthDataType();
myIAAuthDataType.setIAAuthDataType(fingerprint);

riskType.setIp(remoteIpAddress);
riskType.setRequestId(myRequestId);
riskType.setUserId(myUserIdType);
riskType.setUserAgent(myUserAgentType);
riskType.setIAAuthData(myIAAuthDataType);
riskRequest.setEvaluateRiskRequest(riskType);

boolean isRisky = true;
try {
    EvaluateRiskResponse response = authServiceStub.evaluateRisk(riskRequest);
    System.out.println("Status: " + response.getEvaluateRiskResponse().getStatus());
    isRisky = response.getEvaluateRiskResponse().getRisky();
    System.out.println("Risky? " + isRisky);
    System.out.println("Policy Version: " + response.getEvaluateRiskResponse().getPolicyVersion());
    System.out.println("Risk Reason: " + response.getEvaluateRiskResponse().getRiskReason());
    riskScore = response.getEvaluateRiskResponse().getRiskScore();
} catch (java.rmi.RemoteException e) {
    // Fail closed: if the service can't be reached, treat the request as risky
    e.printStackTrace();
}
The response I get back is something like:

Risky? false
Policy Version: 1.0
Risk Reason: Device recognition, Device Reputation
Risk Score: 51


The risk score is based on configurable settings on the VIP management side.  I'll discuss the VIP policy side in the next part of this series.


Monday, November 19, 2012

WWBD, My First Jailbreak, MAM Complement to IAM

At CSA Congress earlier this month, I told the story about this creepy dude sitting next to me on the plane down to Orlando.  I don't know if it was the bad Scottish accent, the strange dental work, or the feeling that he was shoulder-surfing me while I was working on the iPad.  I got a picture of him leaving the airport...


Yeah, turns out that was MY iPad he got off with while I was in the bathroom on the plane.  I called my company, and they were able to wipe the device with MDM.  I have to wonder, though: how long would it take any good super-villain to get the data they wanted off a device?  I had to assume he had the device password, since he probably saw me punch it in on the plane.

I needed to present again off my backup iPad, a first-generation model.  It doesn't support mirroring, so I would need to jailbreak it anyway in order to hack the device to support display to VGA.  It took me all of 10 minutes to do so.  I thought back to the incident on the plane.  What would Bane do?  He probably had the device broken and all of the data pulled off before it was wiped.  Then I had to ask myself, what data did those mobile apps keep on the device?  It's not always obvious to me what apps are storing on the file system.  I do know that developers make it easy to access applications after you've signed in once, so you don't have to sign in again.  How could I have mitigated this risk once my device was compromised?

I've found that Mobile Application Management (MAM) paired with a robust access management solution can help in a lot of ways.  With the MAM solutions I've evaluated, the theme is essentially a corporate app store.  Applications downloaded through the MAM have been vetted by the company and thus have some degree of governance.  MAM can apply application-specific policy for those apps that come from the MAM app store.  Some of these policies are very MDM-ish in nature (preventing access when the device is jailbroken, or preventing apps from storing data locally, for instance), but specific to the app.  This is nice on the BYOD front, as the company can protect its interests without wiping out someone's personal data that happens to be co-located with the corporate apps.  It also helps with the Bane scenario, because for the apps that matter to the company, data is always remote from the device.

The more interesting part to me, though, is the authentication aspect.  The ability to require authentication per application provides a consistent authentication experience, linking the MAM with the corporate access management system.  Through a web view and SAML approach, one can authenticate to the app, propagate identity to the service provider, and get SSO within the device.  Once you're linked with the IAM, you can start applying context-based authorization to the scenario.  How did the user authenticate?  What device?  Where was the user when he or she requested the app?  What is the historical profile of the user accessing this application?  What is the environmental risk condition presently?  What time of day was it accessed?  How sensitive is the application or content being accessed?  Using a risk engine, these factors can be leveraged to generate a verdict that might trigger a second factor to be required before accessing the cloud resources.  Bane might have device access, but he isn't going to get to the more sensitive apps.
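A crude sketch of combining contextual signals like those into a verdict might look like the following.  A real risk engine weights and learns these factors; the signals, weights, and threshold here are purely my illustrative assumptions.

```java
// Hypothetical sketch of a context-based step-up decision. The signals,
// weights, and threshold are illustrative, not any vendor's engine.
public class ContextRisk {

    public record Context(boolean managedDevice, boolean knownLocation,
                          boolean sensitiveApp, boolean usedSecondFactor) {}

    // Crude additive scoring over the contextual signals
    public static int score(Context c) {
        int s = 0;
        if (!c.managedDevice()) s += 30;  // unmanaged/BYOD device
        if (!c.knownLocation()) s += 25;  // unfamiliar location
        if (c.sensitiveApp())   s += 25;  // sensitive app or content
        return s;
    }

    // Require a second factor when the score is high and the user
    // hasn't already presented one
    public static boolean requireStepUp(Context c) {
        return score(c) >= 50 && !c.usedSecondFactor();
    }
}
```

In the Bane scenario, an unmanaged device in an unfamiliar location asking for a sensitive app scores well above the threshold, so the stolen device alone isn't enough.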




Wednesday, October 31, 2012

Catch me at CSA Congress in Orlando November 7th

I'm going to be speaking at the Cloud Security Alliance (CSA) Congress in Orlando on Nov 7th.  Here is the brochure for the conference:

http://www.misti.com/PDF/174/20920/CSA12%20Bro_S.pdf

The topic is "Enterprise Insecurity...Mobile Devices Collide with the Cloud."  I will be touching on Mobile Application vs. Mobile Device Management (MDM), limitations with the current pure SAML standard when dealing with mobile devices and cloud, preventing side-door access and other timely topics.


If you have some time to stop by and say hi, please do.

Monday, October 29, 2012

Preventing Data Loss to the Wild

I recently got a preview of a vendor's implementation of policy-based access control integrated with a Data Loss Prevention (DLP) and encryption solution, providing a very compelling story around the protection of files being posted to cloud service providers like DropBox and SalesForce, as well as internal content management like SharePoint.  This is especially relevant to existing customers of this vendor's DLP solution, or prospective customers of both an SSO/WAM/Federation/Access solution and a DLP solution.



Does that picture give you pause?  The vendor had a number of modes including:

- Encrypting files as they are uploaded to these cloud service providers
- Passive DLP monitoring
- DLP classification based on, or in addition to, encryption

What do these modes mean?  Let's start with encryption in the context of SalesForce.  Say you upload a file in encrypt-only mode.  If someone outside the organization gets ahold of the file through some means, without the key they would be unable to read it.  The vendor has a hosted and managed PKI service, so there is a very light footprint for entry.

With DLP, policies can be applied within the policy-based access control system.  DLP renders a verdict that contains a DLP score, and policies applied based on that score can, for instance, block or redact content.  Typically an organization would start out in passive mode to monitor activity, giving it a way to tune the policies without adversely affecting operations.
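A minimal sketch of acting on such a verdict at upload time might look like this.  The score bands, action names, and passive-mode behavior are my assumptions for illustration, not the vendor's actual policy model.

```java
// Hypothetical sketch: turning a DLP verdict score into an enforcement
// action at upload time. Score bands and actions are illustrative.
public class DlpGate {

    public enum Action { ALLOW, REDACT, BLOCK }

    public static Action onUpload(int dlpScore, boolean passiveMode) {
        if (passiveMode) return Action.ALLOW;  // monitor only: log the verdict, let it pass
        if (dlpScore >= 80) return Action.BLOCK;
        if (dlpScore >= 40) return Action.REDACT;
        return Action.ALLOW;
    }
}
```

The passive flag captures the tuning phase: the same verdicts are rendered and logged, but nothing is blocked until the policies have been dialed in.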

If you're evaluating an SSO and access control solution, you should include this capability in your evaluation criteria if there's content you worry about leaving the organization.  The vendor indicated that it would be available by the end of calendar year 2012.

Thursday, October 18, 2012

How do I Prevent Side-Door Access?

Despite the best-laid plans of an organization, it seems like lines of business and individuals are still going to the internet to leverage services that put the organization at risk.  "This file's too large for email; I'm just going to throw it up on my personal Box account."


That's just the accidental scenario.  There's also the more nefarious 'Kinko's run', where a terminated employee heads to the local internet shop to download the contact list from an online CRM account that hasn't been de-provisioned.  Like most solutions, there's a people/product/process side to making things easier for employees while keeping employers out of trouble.

From the product side, many cloud identity providers can leverage SAML to prevent side-door access to cloud service providers.  The Service Provider always redirects back to the enterprise for authentication.  Assuming the employee has been removed from Active Directory or whatever mechanism the Identity Provider uses, access is cut off.

For the many SaaS Service Providers that don't support SAML yet, some cloud identity solutions provide Form POST SSO.  This Form POST can be paired with a provisioning system to prevent side-door access using a technique called 'password cloaking'.  Essentially, the user's password at the service provider is unknown to the user.  It is periodically reset by the provisioning tool to ensure that it stays in sync between the Service Provider and the Identity Provider's credential vault used for SSO.


Users authenticate to the Identity Provider with their enterprise credentials and get SSO to the Service Provider without knowing the password to that account.  This approach isn't without its problems: password-reset wizards at the service provider can be used to circumvent the cloaking mechanism.  This is where the people and process come in.
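The rotation behind password cloaking can be sketched as follows.  The `Connector` and `Vault` interfaces are hypothetical stand-ins for a real provisioning connector and the Identity Provider's credential vault; the password length and alphabet are assumptions.

```java
import java.security.SecureRandom;

// Hypothetical sketch of the 'password cloaking' rotation: the
// provisioning tool periodically sets a random password at the Service
// Provider and stores the same value in the Identity Provider's
// credential vault. The user never sees it.
public class PasswordCloaker {

    private static final String ALPHABET =
            "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789!@#$%";
    private static final SecureRandom RNG = new SecureRandom();

    public static String randomPassword(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    // Called on a schedule: reset at the SP first, then update the IdP
    // vault so Form POST SSO keeps working with the new value.
    public static void rotate(Connector serviceProvider, Vault vault, String user) {
        String newPassword = randomPassword(24);
        serviceProvider.resetPassword(user, newPassword);
        vault.store(user, newPassword);
    }

    // Stand-ins for a real provisioning connector and credential vault
    public interface Connector { void resetPassword(String user, String password); }
    public interface Vault { void store(String user, String password); }
}
```

Because the SP password is rotated from the enterprise side, a terminated user who is cut off at the Identity Provider has no credential of their own to fall back on.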

If users can be given a better experience getting access to shared file services like Box through an enterprise account governed by a Cloud Identity solution, most would opt for that approach, knowing that they're being good citizens and it's easier anyway.  Proper education about available services will build the critical mass necessary for adoption.