The DIMACS Workshop on Trust Management in Networks was held in South Plainfield, NJ, from September 30 to October 2, 1996.
Trust management combines authentication and access control. The conference organizers argue that these two standard concepts belong together: applications don't want to know the name of the remote principal. Applications just want to know whether the remote principal is OK, authorized, trusted.
There are no proceedings. Abstracts can be found at http://jya.com/dimacs.txt
Four general security systems were debated at the conference. I will
describe them, then review the big debates. They are:
a) Simple Distributed Security Infrastructure (SDSI)
b) PolicyMaker
c) Simple Public Key Infrastructure (SPKI)
d) X.509v3
Steve Kent (BBN) presented.
The central concern of X.509 is authentication, not access control. X.509 is an ISO standard, part of the X.500 Directory series. A principal has a global name, like:
C=US, O=The Open Group, OU=RI, CN=Clifford Kahn
X.509 certificates bind public keys to names. The X.509 standard does not say what trust to extend to each introducer, or "certification authority". But in the culture surrounding X.509, there is much reliance on a few well-known and trusted introducers--global certification authorities.
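The chain-of-introducers idea behind X.509 certification paths can be sketched roughly as follows. This is a toy illustration only: real X.509 processing involves ASN.1 parsing and actual public-key signature checks, and the names here are made up.

```python
# Toy sketch of certification-path checking. Each "certificate" binds a
# subject name to a key and names its issuer; signature verification is
# simulated by requiring each cert's issuer to be the previous subject
# (or a trusted root at the start of the chain).

def valid_chain(chain, trusted_roots):
    """Is each certificate vouched for by a trusted introducer?"""
    acceptable_issuers = set(trusted_roots)
    for cert in chain:
        if cert["issuer"] not in acceptable_issuers:
            return False
        # Only this certificate's subject may vouch for the next one.
        acceptable_issuers = {cert["subject"]}
    return True

# Hypothetical two-step path from a root CA down to an end user.
chain = [
    {"issuer": "RootCA", "subject": "OpenGroupCA"},
    {"issuer": "OpenGroupCA", "subject": "CN=Clifford Kahn"},
]
```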
[Another model with some good properties is roughly: trust your own ancestors and the other principal's ancestors as introducers.]
X.509 is widely deployed: Netscape, Internet Explorer, Secure Electronic Transactions (SET), and so on. X.509v3 provides for extensions, which can be used for authorization purposes. A certificate could, for example, designate someone as an officer of a corporation.
SDSI is a proposal by Butler Lampson (Microsoft) and Ron Rivest (MIT). Lampson presented.
SDSI provides for both authentication and access control. A group can consist of any set of principals--one from this company, three from that university. Groups can be defined as intersections and/or unions of other groups.
Principals have names, generally. (Anonymous access is also possible.) Each principal defines its own name space for principals it knows about. This name space can contain symbolic links to other people's name spaces. The consensus at the conference was that if principals were going to have names, SDSI had the right approach to names.
A principal can certify only names in its own name space. This is similar to the "only trust ancestors" rule.
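The two SDSI ideas described above, linked local name spaces and groups built from unions and intersections, can be sketched like this. The class names and structure are illustrative assumptions, not the SDSI specification.

```python
# Hypothetical sketch of SDSI-style linked name spaces and groups.

class Principal:
    """A principal with its own local name space."""
    def __init__(self, name):
        self.name = name
        self.namespace = {}  # local name -> Principal

    def define(self, local_name, principal):
        self.namespace[local_name] = principal

    def resolve(self, path):
        """Follow a dotted name through linked name spaces,
        e.g. alice.resolve("bob.carol") means Alice's Bob's Carol."""
        node = self
        for part in path.split("."):
            node = node.namespace[part]
        return node

class Group:
    """A group: an explicit member set, and/or a union or
    intersection of other groups."""
    def __init__(self, members=(), union=(), intersection=()):
        self.members = set(members)
        self.union = list(union)
        self.intersection = list(intersection)

    def contains(self, principal):
        if principal in self.members:
            return True
        if any(g.contains(principal) for g in self.union):
            return True
        if self.intersection and all(
                g.contains(principal) for g in self.intersection):
            return True
        return False
```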
SDSI certificates have meanings expressible in English, a valuable property: humans can know what they are signing.
In SDSI each certificate states its own revocation policy.
SDSI is "an artful combination of well established ideas," says Lampson.
Two implementations are due later this year.
PolicyMaker is work of Matt Blaze, Joan Feigenbaum, and Jack Lacy (AT&T Laboratories). Blaze presented.
A certificate contains an executable program, a boolean function OK(A). A is an app-specific description of the action requested. If the keys match, the function is called. The function must be in a safe language. Right now that language is awkward, a safe version of awk.
The function can impose authentication rules. Examples:
- There can be only k introducers in a chain.
- The principal must be vouched for by at least two independent
chains of introducers.
The function can make application-specific checks. Example: Yes if it's under $500.
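The policy-as-program idea above might look something like the following. This is only a sketch in Python; real PolicyMaker policies are written in its safe language (the restricted awk mentioned above), and the action-description fields here are invented for illustration.

```python
# Illustrative sketch of a PolicyMaker-style boolean policy OK(A),
# where A is an app-specific description of the requested action.
# Field names ("type", "amount_usd", "introducer_chain") are assumptions.

def ok(action):
    """Approve small purchases vouched for by a short introducer chain."""
    if action.get("type") != "purchase":
        return False
    # Application-specific check: yes only if it's under $500.
    if action.get("amount_usd", 0) >= 500:
        return False
    # Authentication rule: at most k = 3 introducers in the chain.
    if len(action.get("introducer_chain", [])) > 3:
        return False
    return True
```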
How real users control access to their resources with PolicyMaker is "an open question".
PolicyMaker is available and is being used by researchers, who love its flexibility.
SPKI is a draft IETF standard, being led by Carl Ellison (CyberCash). Ellison presented.
SPKI has no names, in general. A certificate declares that a key's
holder is allowed to do a particular thing. Examples:
- telnet into a particular account on a particular host
- read a certain directory and everything in it
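The key-centric authorization above can be sketched as a simple check of whether a key holds a permission covering the request. This is a toy model: real SPKI certificates are signed S-expressions with delegation and validity fields, and the dictionary format here is an assumption.

```python
# Hypothetical sketch: SPKI binds permissions directly to keys, not names.
# A "certificate" here is just (key, permission).

def authorizes(cert, key, request):
    """Does this certificate let `key` perform `request`?"""
    cert_key, perm = cert
    if cert_key != key:
        return False
    if perm["op"] != request["op"]:
        return False
    # A directory permission covers everything under that directory:
    # "read /docs/" covers "read /docs/reports/q1.txt".
    return request["path"].startswith(perm["path"])

# Example: this key may read a certain directory and everything in it.
cert = ("key-abc123", {"op": "read", "path": "/docs/"})
```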
The big debates:
- Should we identify a principal by name or by key?
- Should certificates be programs?
- Should we use X.509 or not?
Claim: An application doesn't want to know the name of the requestor. An application wants to know whether the requestor is allowed to do what it is asking to do. The security subsystem should answer that question. Names are beside the point. (This is the position of Blaze, Feigenbaum, and Lacy, among others.)
What of security management? When granting someone access, should we identify them by name? Names are ambiguous. To resolve the ambiguity, you have to go out of band. If you're going out of band anyway, why not transmit a key out of band? So runs the argument.
Ellison's view: before granting someone access, you should authenticate them in person and get their public key. Ellison works for CyberCash. It was observed that even credit card companies don't do what he recommends, much less publishers granting access to a document, etc.
Steve Kent's view: when you use account numbers as names, there is no ambiguity. Forget having a single name for each person. "Let a thousand CAs bloom."
Lampson's view: you need names for auditability. People need to be able to check whether the access controls are right.
Lampson: "The whole point of the security framework is to abstract drastically down from the full complexity of the application." With PolicyMaker, nobody will understand the security system. Specialized checks should be in the application, not in the certificate.
Q: When a resource owner wants to grant someone access in PolicyMaker,
what does he/she do? Write a program?
A (Blaze): It's an open question.
- Template policies?
- A GUI for building these programs graphically?
And: "I'm not concerned with whether the average COBOL programmer can
produce certificates." Much less with whether the end user can do
so, apparently.
Anyone who thinks principal names are useless thinks X.509 is useless.
That aside, X.509 notation is really ugly, hard to generate and decode. On the other hand, X.509 is widely deployed. In practice its cumbersomeness can be hidden from users and mostly from programmers. There is a big problem only if (as Ellison observes) one is trying to process X.509 certificates in a very small processor, such as a smart card. SPKI is designed to accommodate such small implementations.
I don't cover every talk, just a selection.
Ed Felten (Princeton University)
What can the user do about evil executables? Three alternatives:
1. Shrink wrap and trust.
2. Limit the app and don't trust. (Java's approach.)
3. Like 1, but delegate the trustworthiness decision to someone: an
ISP, a corporate DP office.
The warnings that web browsers emit about downloading apps amount to the "shrink wrap and trust" model: they ask the user to decide whether the source is reliable.
The warnings are invisible to most people, "like a fly they swatted". Ed ran an experiment and confirmed this intuition.
Approach 2 (Java's) can be made better if we empower applets more. For example, let them store an initialization file, but restrict its name (\AUTOEXEC.BAT should be rejected).
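Felten's example of a restricted initialization file might be implemented along these lines. The sandbox directory and function names are assumptions for illustration, not anything a browser of the time actually did.

```python
# Illustrative sketch: let an applet store an initialization file, but
# restrict the name so system files like \AUTOEXEC.BAT are rejected.
# The sandbox location is a made-up assumption.

import os.path

SANDBOX = "/applet-data"

def allowed_init_file(applet_id, requested_name):
    """Allow only a bare filename, confined to the applet's own
    sandbox directory; return None if the name is rejected."""
    # Normalize backslashes, then reject anything with a path component
    # (absolute paths, drive letters, "..", subdirectories).
    base = os.path.basename(requested_name.replace("\\", "/"))
    if base != requested_name or base in ("", ".", ".."):
        return None
    return os.path.join(SANDBOX, applet_id, base)
```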
Felten believes in a hybrid of these approaches.
WWW Consortium people described a system whereby rating agencies can rate Web pages for safety, much as they now rate them for pornography and violence. Users can decide which rating agencies they want to believe, perhaps even pay for the service.
The porno and violence ratings are not generally signed today; it hasn't been necessary. Safety ratings would have to be signed, which means more infrastructure.
Stuart Haber (Bellcore)
The obvious (after a bit of thought) way to do secure time stamps is to take a secure hash of a document and send it to a time-stamping server. The server then signs a certificate saying it witnessed the hash at a particular time. But the server has to be trusted.
There's a way to do secure time stamps without adding a trusted server. A company publishes a secure hash each week in the New York Times. The point is that the hash is widely witnessed. Anyone can verify the hashes.
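The core of both schemes is that only a one-way hash of the document is ever submitted or published, so the witness never sees the document itself. A minimal sketch, using SHA-256 purely for illustration (systems of the period used other hash functions):

```python
# Sketch of the hash-based time-stamping idea: commit to a document by
# its secure hash; anyone can later verify the document against the
# published (or server-signed) hash.

import hashlib

def commitment(document: bytes) -> str:
    """What you send to the time-stamping service: a one-way hash."""
    return hashlib.sha256(document).hexdigest()

def verify(document: bytes, published_hash: str) -> bool:
    """Anyone can check the document against the widely witnessed hash."""
    return commitment(document) == published_hash
```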
Samuel I. Schaen (Mitre)
The Federal government is moving to use the Web for emergency response, e.g., to natural disasters. This poses a whole set of problems, like keeping the web servers from being swamped by the general public and reserving their bandwidth for emergency workers. There is also a great need both for access control, to prevent fraud and the like, and for flexibility, so emergency workers can get the job done.
Mark Lomas (Cambridge University)
Lomas showed how to tighten up protocols so that misbehaving certification authorities would be easier to catch. For example, the certification authority and the revocation authority would be separated.
Vipin Swarup (MITRE)
Swarup seemed to have a sophisticated and appropriate security model for mobile agents. He also presented ideas about how to achieve safety--how to keep a subverted agent from penetrating its new host--when an agent moved its execution context from one host to another. These ideas involved validation functions, but it was not clear whether the state of a general program could be validated.
Dennis Branstad (TIS)
The user will be able to set criteria for releasing information, such
as medical records:
- to certain people
- to people with certain roles (their doctor)
- upon certain events or conditions
Policies can have multiple authors (me, my doctor, my insurance company) and can have conflicting parts. There are priority rules.
He wants to automatically analyze policies for completeness (does it make a decision in every case?) and consistency (can it make conflicting decisions, with no basis for picking one?).
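Over a small finite space of cases, the two analyses can be sketched mechanically: enumerate every case, flag cases no rule decides (incompleteness) and cases that get conflicting decisions with no priority rule to break the tie (inconsistency). The rule format and the medical-records cases below are invented for illustration, not Branstad's actual system.

```python
# Hypothetical sketch of completeness and consistency checking for a
# multi-author release policy, over a tiny enumerated case space.

from itertools import product

# Each rule is (predicate, decision); different authors (me, my doctor,
# my insurance company) contribute rules that may conflict.
rules = [
    (lambda c: c["requester"] == "my-doctor", "allow"),
    (lambda c: c["record"] == "psychiatric", "deny"),
]

cases = [dict(zip(("requester", "record"), combo))
         for combo in product(["my-doctor", "insurer"],
                              ["general", "psychiatric"])]

def decisions(case):
    """All decisions the rules produce for one case."""
    return {decision for pred, decision in rules if pred(case)}

incomplete = [c for c in cases if not decisions(c)]        # no rule fires
conflicting = [c for c in cases if len(decisions(c)) > 1]  # allow vs deny
```

Here the policy is incomplete (an insurer asking for a general record gets no decision) and inconsistent (my doctor asking for a psychiatric record gets both "allow" and "deny"), which is exactly what the priority rules must resolve.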
This is in an early prototype stage.