Anonymity As A Security Property

02 Nov 2017

The rise of user-centric identity management amplifies the need for a combination of strong security and privacy protection. Anonymous credential systems are one of the most promising answers to this need. They allow users to selectively prove statements about their identity attributes while keeping the corresponding data hidden.

4.1 Introduction

As information becomes more accessible, protecting the privacy of individuals becomes more challenging. Solving this problem requires a system that lets an individual control the dissemination of his personal information. The most influential idea for such a system was introduced by Chaum [3] and is referred to as an anonymous credential system, also called a pseudonym system.

An anonymous credential system [3, 4, 6, 7, 8] consists of users and organizations. Users are known to organizations only by pseudonyms, and different pseudonyms of the same user cannot be linked. However, a user can prove to one organization that he possesses a credential issued by another organization, without revealing anything beyond that fact about the credential. Credentials can be for one-time use (these are called one-show credentials) or for unlimited use (these are called multiple-show credentials). A user possessing a multiple-show credential can demonstrate it an arbitrary number of times, and these demonstrations cannot be linked to each other.

It was Chaum [3] who introduced pseudonym systems as a way to allow a user to work effectively but anonymously with multiple organizations, with each organization identifying the user by a different pseudonym, or nym. These nyms are unlinkable: the databases of two organizations cannot be combined to build up a profile of the user. For example, Bob's doctor, who knows Bob by one nym, may issue a credential asserting his good health, and Bob can show this credential to his insurance company, which knows him by another nym.

4.2 Definitions

A few definitions taken from [51] are in order at this point.

Privacy refers to the ability of an individual to control the distribution of information about himself. Note that this does not necessarily mean that personal information never gets revealed to anyone; rather, a system that respects privacy allows an individual to select what information about him is revealed, and to whom. This personal information may be any of a large number of things, including reading habits, shopping habits, nationality, email or IP address, physical address, or, of course, identity.

Anonymity and pseudonymity are two forms of privacy of identity (though often, in common usage, they are conflated and are both referred to as simply "anonymity"). A system that offers anonymity is one in which the user gets to control who learns his identity (or other verinym ("true name")). In particular, it is the case that his identity is not automatically inserted in headers (or is easily derived from same), and also that it is difficult, if not impossible, for an adversary to "break" the system, and discover the user’s identity against his wishes.

The distinction between anonymity and pseudonymity is that in the latter, the user maintains one or more persistent personae (pseudonyms, or nyms) that are not connected to the user’s physical identity. People with whom the user interacts using a given nym can be assured that, although they do not know the physical identity behind the nym, it is in fact the same person each time. With anonymity, on the other hand, there is no such persistent identifier, and systems that provide strong (or unlinkable) anonymity leave no inherent way to tell whether any given message or transaction was performed by the same person as any other.

Anonymity is defined as the state of not being identifiable within a set of subjects, the anonymity set. The anonymity set is the set of all possible subjects. With respect to acting entities, the anonymity set consists of the subjects who might cause an action. With respect to addressees, the anonymity set consists of the subjects who might be addressed. The two anonymity sets may be the same, or they may overlap, and they may vary over time. Thus anonymity is about keeping the details that identify one entity from being revealed to others.

Forward secrecy refers to an adversary's inability to recover security-critical information after the fact: for example, to retrieve the true name of the sender of a controversial message after the message is sent. Providers of anonymity services should take care to ensure forward secrecy, for instance by keeping no logs. We currently see fairly regular affirmations in the legal system that if a provider of a service does keep a log of the identity of the users of the service, then he can be compelled to turn it over to satisfy even the most trivial of legal requests, such as a civil subpoena. This compulsion is often used by companies to track down people who criticize them on Internet message boards: the company files suit against the unknown poster, claiming libel or defamation, and uses the existence of the suit to force the company hosting the message board to reveal the identity of the poster. The suit is then dropped, and the company pursues its own action against the now identified speaker (for example, by firing him if he is an employee). Therefore, to protect the privacy of one's users, the operator of such a service must ensure that he has no logs to turn over, and no way to go "back in time" and reveal information about a past transaction.

4.3 Nymity

People engage in numerous forms of transactions every day. Some of these transactions are communicative; for example, sending a letter (or email) to a friend, reading a newspaper, posting to a newsgroup, or using an online chat room. Some are commercial; for example, buying a newspaper, selling stock, or trading baseball cards. In each case, the participants in the transaction exchange some content: information in the case of communicative transactions, or value in the case of commercial transactions. But the transactions also involve the exchange (among the participants) or revelation (to outsiders) of meta-content: information about the participants, or about the transaction itself. Examples of meta-content include the date and time of the transaction, the values of the items exchanged, the physical location at which the transaction was conducted, or information about the identities of the participants.

The nymity of a transaction, following [51], is defined to be the amount of information about the identity of the participants that is revealed. Note that transactions often have different nymity levels for different participants, and the nymity level with respect to a participant may differ from that with respect to an outside observer; for example, the person with whom an individual is corresponding may know his identity, while that information is hidden from third-party eavesdroppers. This section catalogues various useful nymity levels that occur in common types of transactions, and notes certain properties of these levels.

4.3.1 The levels of nymity

The amount of identity one chooses to, or is required to, reveal in a transaction (be it a commercial transaction or a communication) is variable, and depends on the particular situation. Based on these different amounts of revealed identity, we arrange the levels in a table which we call the "Nymity Table".

Level of nymity            Identity
-------------------------------------------------------------
Verinymity                 Government ID; Social Security Number;
                           Credit card number; Address
Persistent Pseudonymity    Pen names; Nym servers
Linkable Anonymity         Prepaid phone cards; Frequent-purchase cards
Unlinkable Anonymity       Cash payments; Anonymous remailers

Table 4.1: The Nymity Table

4.3.1.1 The high end: verinymity

A verinym is a True Name [75]. But what do we mean by that? We could mean the name printed on a government-issued birth certificate, driver's license, or passport, but not necessarily. By "verinym" or "True Name" we also refer to any piece of identifying information that can differentiate an individual from a crowd of potential candidates. For example, a credit card number is a verinym. So can a telephone number be, or a street address. In the online world, an email address or an IP address can also be considered a verinym. The idea is that if user x knows user y as one of the people in a set of potential candidates, then once user x obtains a verinym for user y, user x can single user y out. Clearly, some attributes may or may not be verinyms, depending on the particular candidate set that user x has in mind. For example, if the candidate set is rather small, then simply knowing that user y works in Washington, DC may be sufficient to single user y out; but that same piece of information is not sufficient if the candidate set is, say, the set of US Federal Government employees. Transactions in which a verinym is revealed are said to provide verinymity. This forms the high end of the Nymity Table.

Verinyms have two important properties:

Linkability: Any verinymous transaction that an individual performs can be linked back to him, and thus to any other verinymous transaction he has carried out. Verinymous transactions therefore inherently contribute to the dossier effect: they make it possible, if not easy, to construct a large dossier on an individual by cross-referencing large numbers of databases, each indexed by one of the individual's verinyms.

Permanence: Verinyms are, for the most part, hard to change; and generally, even if one does change them, there is often a record of the change (thus linking one's old name to one's new one).

These two properties are what make identity theft so problematic: if an imposter uses one of an individual's verinyms and sullies it (say, by giving it a bad name or a bad credit record), it is quite difficult to get the situation cleaned up, since the individual can't change the verinym in use (permanence), and can't separate the transactions made by him from the ones made by the imposter (linkability). Companies such as Verisign want to bring verinymity to the Internet in a strong way by issuing Digital Passports that will probably tie one's online activities to a real-life verinym, such as the name on one's driver's license.

4.3.1.2 The low end: unlinkable anonymity

In contrast, at the extreme low end of the Nymity Table are transactions that reveal no information at all about the identity of the participant. We say that transactions of this type provide unlinkable anonymity. We use the term "unlinkable" to mean not only that no information about a user's identity is revealed, but also that there is no way to tell whether or not the user is the same person that performed some given prior transaction.

The most common example in the physical world is paying for groceries with cash. No information about identity is revealed to the merchant, nor is the merchant able to determine which of the cash transactions over the course of a month are by the same person. Technologies such as type I anonymous remailers provide unlinkable anonymity for Internet email; there is no way to tell if two remailer messages are from the same person or not.

4.3.1.3 The middle: linkable anonymity, persistent pseudonymity

As usual, the most interesting elements of the table are not at the extremes, but rather in the middle. Above unlinkable anonymity in the Nymity Table is the situation where a transaction does not reveal information about the identity of the participant, yet different transactions made by the same participant can, at least in some cases, be linked together. A simple example is when one purchases a prepaid phone card (using cash). Neither the merchant nor the phone company generally learns the identity of the purchaser; however, the phone company can link together all calls made with the same card. This is linkable anonymity. Similarly, many grocery stores offer frequent-purchase cards; users sign up for one of these, often using an obviously fake name (it turns out the stores don't care about the name), and then all purchases the user makes can be linked to the card, and thus to each other, but not to the user's identity.

When some authors or columnists publish their works, they do so under pen names. The True Name of the author is not revealed, but it is assumed that when another work appears under the same pen name, it was written by the same author or group of authors. "Bourbaki" and "Dear Abby" are two well-known pen names that were used by a group of people, as opposed to a single person. In the digital world, we can in fact arrange something slightly stronger: unforgeability. That is, only the original person or cooperating group behind the pen name can use it; in that way, we assure that if some given name appears to have authored a number of works, we can be certain that the original author was responsible for them. This is called persistent pseudonymity, and sits between linkable anonymity and verinymity in the Nymity table.
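The unforgeability described above can be sketched with a toy one-time signature built only from a hash function (a Lamport signature). This is an illustrative stand-in for the digital signatures a real nym holder would use; the names, and the limitation that each key may sign only a single message, are part of the sketch, not of any deployed nym system.

```python
# Toy unforgeable pseudonym: the nym publishes a Lamport public key once;
# only the holder of the private key can sign a message under that nym.
import hashlib
import os


def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def keygen():
    """One-time key: 256 pairs of random secrets; public key is their hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk


def sign(sk, message: bytes):
    """Reveal one secret per bit of the message digest."""
    digest = _h(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]


def verify(pk, message: bytes, sig) -> bool:
    digest = _h(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(_h(sig[i]) == pk[i][bits[i]] for i in range(256))


# Readers of the nym's posts can check that each post came from the same
# key holder, without learning who that holder is.
sk, pk = keygen()
post = b"Posted under nym1: persistent pseudonymity in action."
sig = sign(sk, post)
assert verify(pk, post, sig)
assert not verify(pk, b"forged post", sig)
```

Because each Lamport key signs only one message safely, a real nym would use an ordinary many-time signature scheme; the point here is only that signature verification, not identity, is what makes the pseudonym persistent.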

In the online world, nym servers (see section 4.4) provide for persistent pseudonymity: two posts from [email protected] are assured to have come from the same person. Because of this assurance of origin, persistent pseudonymity (sometimes simply referred to as "pseudonymity"; the persistence is implied) provides for what is known as reputation capital. If one performs transactions with a certain pseudonym (or "nym") repeatedly, be they commercial or communicative, that nym will gain a reputation in the eyes of the individual, either positive or negative. For instance, a user might come to believe that the nym pays promptly when purchasing things online; or that the goods it advertises in an online auction are generally of poor quality; or that it is very knowledgeable in the field of recreational sailing; or that it spouts off loudly about quantum mechanics but knows nothing at all about it; or even all of the above. The user may form an idea of the kinds of commercial transactions he would be willing to perform with this nym, of the kinds of communications he is willing to undertake with it, and of the kinds of things he will believe if it says them. This is that nym's reputation with the user, and that reputation is useful, even though the user may know nothing at all about the person behind the nym. This is in fact one of the most common levels of nymity on the Internet today.

This level of nymity shares the property of linkability with verinymity: anything posted under a given nym can be linked to anything else so posted. However, a single person may have multiple pseudonyms, and this is where the usefulness of this level of nymity is most apparent. It is not generally possible to link transactions made under one pseudonym to those made under a different one. That is, given transactions from two different pseudonyms, it is usually very difficult, if not impossible, to tell whether the transactions were performed by (and therefore the pseudonyms represent) the same person or not. This lack of permanence allows for a number of useful features; for example, a user might use one pseudonym on a resume web site while looking for a new job (so that his current employer cannot tell he is thinking of leaving), and a different pseudonym on a singles web site while looking for a date. There is no good reason these two pseudonyms should be able to be tied together. A user might also use a pseudonym when he is young, so that when he is looking for a job twenty years later, his teenage posts to Usenet don't come back to trouble him. In this way, pseudonymity defeats the ability of others to compile dossiers on an individual in the manner described earlier. The ability of nyms to acquire reputation, and to defeat the dossier effect, suggests that persistent pseudonymity is the desired nymity level at which to aim our system.

4.3.2 Properties of the Nymity Table

One of the fundamental properties of the Nymity Table is that, given any transaction that normally occupies a certain position in the table, it is extremely easy to change the transaction to have a higher position in the table (closer to verinymity): the participant merely has to agree to provide more information about himself. This is the situation, for example, where a consumer volunteers to use a frequent-purchase card at a grocery store, allowing the merchant to link together all of the purchases he ever makes. Assuming the merchant does not require some proof of identity to obtain the card, this action moves the position in the Nymity Table from "unlinkable anonymity" to "linkable anonymity". If the merchant does require a True Name in order to issue the card, the transaction gets pushed all the way to "verinymity", a pretty steep price for groceries.

Similarly, a poster using an anonymous remailer can sign his messages with the PGP key of a pseudonym, in order to turn unlinkable anonymity into pseudonymity. Or a user of a pseudonym can "unmask" himself by voluntarily revealing his own identity. In all cases, simply revealing more information at a higher layer (in the ISO-OSI sense) of the protocol stack is sufficient to increase the nymity of the transaction. On the other hand, it is extremely difficult to move a transaction down the table (towards unlinkable anonymity). If the only method of payment one has available is a credit card, for example (payments over the Internet currently fall into this category, to a first approximation), it is challenging to find a way to buy something without revealing that verinym. If any layer of one's protocol stack has a high level of nymity associated with it, it is difficult to build a protocol on top of it that has a lower level of nymity. For example, anonymous electronic cash protocols inherently have a very low level of nymity, but if one tries to use them over ordinary TCP/IP, then one's IP address (a verinym, or close to it) is necessarily revealed, and the point of the anonymous application-layer protocol is lost. In this sense, the Nymity Table bears some resemblance to a ratchet: it is easy to move an existing protocol up, but hard to move it down.

For this reason, when we design new protocols, at all layers of the stack, we should design them to have as low a nymity level as possible. If we do this, protocols built on top of them will not be forced to have high nymity. Also, even if we want a protocol to have high nymity, it is best to design it with low nymity and then simply provide the additional identity information at a higher layer. The reason is that this lets us easily change our minds later; although we may want some new system to provide verinymity or persistent pseudonymity, we may later decide that some (perhaps limited) uses of lower levels of nymity are desired. To avoid a complete redesign, we should design the system with a low nymity level from the start, add the additional nymity by simply supplying the additional information, and then just remove this added nymity if desired at a later time. We further note that this added nymity can be of whatever strength is required by the application; for example, a "screen name" could be merely sent as an additional field in a chat protocol, whereas an application requiring unforgeability would likely require some sort of digital signature, or other authenticity check.
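The ratchet principle above can be sketched in a few lines: keep the base message format identity-free, and bolt any nymity on at a higher layer so it can be removed again. `Envelope` and `with_screen_name` are invented names for illustration, not any real chat protocol.

```python
# Low-nymity base layer plus an optional higher-layer screen name.
from dataclasses import dataclass, field


@dataclass
class Envelope:
    """Base-layer message: carries content only, no sender information."""
    payload: str
    headers: dict = field(default_factory=dict)


def with_screen_name(msg: Envelope, name: str) -> Envelope:
    """Higher-layer add-on: attach a (forgeable) screen name. Dropping this
    call returns the protocol to unlinkable anonymity with no redesign."""
    msg.headers["screen-name"] = name
    return msg


anon = Envelope("hello")                              # low nymity by default
named = with_screen_name(Envelope("hello"), "alice")  # nymity added on top
assert anon.headers == {}
assert named.headers == {"screen-name": "alice"}
```

An application needing unforgeable screen names would replace the plain header with a signature, as the text notes; the layering stays the same.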

4.4 Nym Servers

Nym servers [65] store an anonymous reply block, and map it to a pseudonymous email address. When a message is received for this address it is not stored, but immediately forwarded anonymously using the reply block to the owner of the pseudonym. In other words, Nym Servers act as a gateway between the world of conventional email and the world of anonymous remailers. Since they hold no identifying information, and are simply using anonymous reply blocks for routing purposes, they do not require users to trust them in order to safeguard their anonymity. Over the years, special software has been developed to support complex operations such as encoding anonymous mail to go through many remailers, and managing Nym Server accounts.

Nym servers are also associated with pseudonymous communications. Since the pseudonymous identity of a user is relatively persistent, it is possible to implement reputation systems, or other abuse prevention measures. For example, a nym user might at first only be allowed to send out a small quantity of email messages, a quota that increases over time as long as abuse reports are not received by the nym server operator. Nym servers and pseudonymous communications offer some hope of combining anonymity and accountability. At the same time, it is questionable how long the true identity of a pseudonymous user can be hidden. If all messages sent by a user are linked together by the same pseudonym, one can try to apply author identification techniques to uncover the real identity of the user. Rao et al. [66], in their paper entitled "Can Pseudonymity Really Guarantee Privacy?", show that the frequency of function words in the English language can be used in the long term to identify users. A similar analysis could be performed using the sets of correspondents of each nym, to extract information about the user.
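A minimal sketch of the function-word analysis mentioned above: build a relative-frequency profile of common function words for each nym's text, then compare the profiles with cosine similarity. The word list, sample texts, and interpretation threshold are illustrative assumptions, not the actual method of Rao et al.

```python
# Toy stylometric linking of two pseudonymous texts via function words.
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "for", "it", "with", "as", "his", "on", "be"]


def profile(text: str) -> list:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]


def cosine(u, v) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


nym_a = "the cat sat on the mat and it was happy with the sun"
nym_b = "the dog lay on the rug and it was glad of the warmth"
# High similarity suggests (but does not prove) a common author.
print(cosine(profile(nym_a), profile(nym_b)))
```

In practice such an attack needs far more text per nym, but the principle is the same: content words differ, while function-word habits persist across pseudonyms.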

4.5 Anonymity as a Security Property

Anonymity allows actors to hide their relation to particular actions and outcomes. Since anonymous communication is the main subject of this work, our objective will be to hide the correspondence between senders and the messages they send, a property we will call sender anonymity, or between receivers and the messages they receive, namely recipient anonymity. It is possible for a channel to offer full bidirectional anonymity, allowing anonymous senders to converse with anonymous receivers. Anonymous communications are studied in the context of computer security because they take place in an adversarial setting: the actor attempts to protect his anonymity vis-à-vis other parties that try to uncover the hidden links. This information has some value for those performing surveillance, and its revelation would entail some cost for the subject.

An example from the commercial world where identification is desirable to a seller of goods is described by Odlyzko [9]. By linking together all the previous purchases of a buyer, the seller can infer how willing the buyer is to pay for particular products and can price discriminate by charging as much as possible. In this case the value of performing surveillance can be defined in monetary terms. Another example, not involving monetary value and cost, is surveillance against a suspected terrorist cell. By uncovering the identities of the participants, investigators can map their social network through drawing 'friendship trees'. As a result, they are able to evaluate the level of threat by measuring the size of the network; they can get warning of an imminent action by analyzing the intensity of the traffic; and they can extract information about the command structure by examining the centrality of different participants. In this case, value is extracted by reducing uncertainty and by reducing the cost of neutralizing the organization if necessary. As well as the value and cost of the information extracted, there are costs associated with performing surveillance. An extensive network of CCTV cameras is expensive, and so are the schemes proposed for blanket traffic data retention. Additionally, the cost of analysis and dissemination of intelligence might actually be dominant [10]. Similarly, counter-surveillance and anonymity also have costs, associated with designing, operating, maintaining and assessing these systems. Furthermore, counter-surveillance technologies impose a huge opportunity cost: they inhibit parties under surveillance from using conventional, efficient, but usually unprotected technologies. They might even prevent parties from communicating at all if appropriate countermeasures are not available against the perceived surveillance threats. The element of perceived threat is important, since the capabilities of adversaries are often not directly observable.

This uncertainty pushes up the cost of counter-surveillance measures, such as anonymous communication technologies. In other words, paranoia by itself entails significant costs. It is important to note that anonymity of communications is a security property orthogonal to the secrecy of communications. A typical example illustrating this is an anonymous letter sent to a newspaper: the identity of the sender of the letter is not revealed, although the content is public. One can use conventional encryption techniques, in addition to anonymous communications, to protect the confidentiality of messages' contents.

4.6 Anonymous Credential System

Credential systems allow Subjects to prove possession of attributes to interested parties. In a sound credential system, Subjects first need to obtain a structure termed a credential from an entity termed the credential Issuer. Some well-defined set of attributes, along with their values, is encoded by the Issuer into the credential, and this set is passed on, or 'granted', to the Subject. Only after going through this process can the Subject prove possession of those (and only those) attributes which are encoded in the credential. The interested party, during this later process, is said to 'verify the credential' and is therefore called a Verifier. Subjects are typically human users; Issuers are typically well-known organizations having control over the attributes to be encoded into the credentials they issue; and Verifiers are typically the service providers that perform access control based on attributes.

An anonymous credential is a vector of attributes certified by a trusted certification authority. It can be verified by anybody, and the holder (the "prover") can selectively disclose its components. For example, she may choose to disclose only the fact that she holds a valid driver's license, but not her age.
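Selective disclosure can be illustrated with per-attribute hash commitments, a toy stand-in for the zero-knowledge proofs that real anonymous credential systems use. The attribute names and the `commit` helper are assumptions for this sketch; in particular, a real system would also have the issuer sign the commitments and hide even which attribute indices are opened.

```python
# Toy selective disclosure: commit to each attribute separately, then open
# only the commitment for the attribute being disclosed.
import hashlib
import os


def commit(name: str, value: str, nonce: bytes) -> str:
    return hashlib.sha256(nonce + name.encode() + b"=" + value.encode()).hexdigest()


attrs = {"license-valid": "yes", "age": "34"}
nonces = {k: os.urandom(16) for k in attrs}
commitments = {k: commit(k, v, nonces[k]) for k, v in attrs.items()}
# (An issuer would sign `commitments` here.)

# The holder reveals only one attribute, with its opening nonce.
name, value, nonce = "license-valid", attrs["license-valid"], nonces["license-valid"]

# The verifier checks the opened commitment; "age" remains hidden.
assert commit(name, value, nonce) == commitments[name]
```

The random nonce is what keeps the unopened commitment from leaking its value through a brute-force guess over small domains such as ages.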

4.6.1 Basic desirable properties

Forging a credential for a user should be impossible, even if users and other organizations launch an adaptive attack on the issuing organization. Each pseudonym and credential must belong to some well-defined user [8]. Systems having this quality are said to have consistency of credentials. Organizations are autonomous entities, and hence it is desirable that they be separable, i.e., that they have the ability to choose their keys themselves and independently of other entities, in order to ensure the security of these keys and to facilitate the system's key management. Also desired is user privacy, which means that, apart from the fact of the user's ownership of some set of credentials, an organization cannot find out anything about a user, even if it cooperates with other organizations. In particular, two pseudonyms belonging to the same user [7, 3, 4, 6, 5, 8] cannot be linked. Besides the requirement that the system be based on efficient protocols, it is also required that each interaction involve as few entities as possible, and that the rounds and amount of communication be minimal. In particular, if a user has a multiple-show credential from some organization, then the user should be able to demonstrate it without getting the organization to reissue credentials each time.

4.6.2 Additional desirable properties

It is important to discourage users from sharing their credentials and pseudonyms with other users. One established approach is PKI-assured non-transferability: sharing a credential necessarily means sharing a particular, highly valuable secret key from outside the system (for example, the secret key that gives access to the user's bank account) [8, 69, 70].

Another desirable feature is the ability to discover a user's identity when that user's transactions are illegal (this feature is optional and is called global anonymity revocation), or to reveal a user's pseudonym with an issuing organization in case the user misuses a credential (this feature is also optional and is called local anonymity revocation). One-show credentials can also be beneficial, i.e., credentials that can be used only once; these should incorporate an off-line double-spending test. Encoding attributes, such as expiration dates, into a credential is also desirable.

4.6.3 Model

Anonymous credential systems [67, 68, 3, 4, and 8] authorize anonymous yet authenticated and accountable transactions between users and service providers. The basic primitives provided by these systems allow for establishing pseudonyms and issuing, showing, verifying and deanonymizing credentials. All credentials and pseudonyms of a user are generated from a user master secret SU. The anonymous credential infrastructure is a privacy-enhanced pseudonymous PKI which implements zero-knowledge protocols. A user U can establish a pseudonym NI with an issuing organization OI. U can also obtain a credential C signed by OI certifying certain attributes. Later on, U may prove these attributes to a verifying organization OV. In this system, the user may choose which attributes to prove to OV (note that proving an attribute does not necessarily imply showing its value; for example, a user may prove he is older than 18 without actually showing his age). Multiple credential shows are unlinkable. If a user is registered with several organizations, the pseudonyms established are not linkable. The anonymous credential system provides protocols that allow the user to prove ownership of multiple credentials. These systems also implement optional revocation of anonymity for accountability purposes. In this case, the user must provide a verifiable encryption of his pseudonym or identity: he must encrypt the information with the public key of a trusted entity OD and prove that it can be decrypted by OD from the protocol transcript.
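The derivation of per-organization pseudonyms from the user master secret SU can be sketched with an HMAC keyed by SU: to each organization the nym looks random, so nyms held at different organizations cannot be linked without SU. This only illustrates the unlinkability idea; real systems prove knowledge of the secret in zero knowledge rather than revealing deterministic HMAC outputs.

```python
# Toy pseudonym derivation from a user master secret S_U.
import hashlib
import hmac
import os

S_U = os.urandom(32)  # user master secret


def nym_for(org: str) -> str:
    """Derive a stable, organization-specific pseudonym from S_U."""
    return hmac.new(S_U, org.encode(), hashlib.sha256).hexdigest()[:16]


n_issuer = nym_for("O_I")      # pseudonym established with the issuer
n_verifier = nym_for("O_V")    # pseudonym established with the verifier
assert n_issuer != n_verifier  # distinct nyms: unlinkable without S_U
assert n_issuer == nym_for("O_I")  # but stable per organization
```

Stability per organization is what lets the issuer bind credentials to NI, while the PRF property of HMAC is what keeps NI and the verifier-side nym unlinkable to colluding organizations.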

Many anonymous credential systems have been proposed in the literature, each with its own particular set of entities, properties, underlying problems and assumptions. In this section we present the model of anonymous credential systems on which our work is based. In order to be consistent with the majority of existing schemes, it is intended to be as general as possible. We consider an anonymous credential system to involve four types of player: issuers, subjects, intermediate issuers and verifiers. We refer to issuers and verifiers, collectively, as 'organizations'. In order to interact with an organization, a subject is assumed to establish at least one pseudonym with that organization. These pseudonyms are assumed to bear no connection to the identity of the subject to whom they belong. We further assume that pseudonyms are unlinkable. Subjects may obtain credentials, i.e. structures that encode a well-defined, finite set of attributes together with their values, from issuers. They may subsequently show those credentials to verifiers, i.e. convince them that they possess (possibly a subset of) the encoded attributes. A credential is issued under a pseudonym that the subject has established with its issuer, and it is shown under the pseudonym that the subject has established with the relevant verifier.

It is assumed that the anonymous credential system is sound, i.e. it offers pseudonym owner protection: credentials issued under a given pseudonym can be shown only by the subject which established that pseudonym. Soundness also implies credential unforgeability: the only way subjects may prove possession of a credential is to have obtained it from a legitimate issuer. In some applications, it is required that the system offers the stronger credential non-transferability property. This property guarantees that a subject cannot prove possession of a credential that was not issued to it, even if the subject colludes with other subject(s) that may have (legitimately) obtained such a credential. That is, a system that offers non-transferability eliminates credential sharing, whereas a system that offers only unforgeability does not. (Of course, the degree of protection against credential sharing is always limited, since if one subject gives all its secrets to another subject, then the latter subject will always be able to impersonate the former and use its credentials.) We require that credentials are bound to the subject to which they have been issued. Therefore it is assumed either that in practice subjects do not share their credentials or that the system offers non-transferability. Further, it is assumed that the system protects privacy properly, in that a subject's transactions with organizations do not compromise the unlinkability of its pseudonyms. We note, however, that this unlinkability can only be guaranteed up to a certain point, as credential types potentially reveal links between pseudonyms. The type of a credential is defined as the collection of attribute values that are encoded into the credential.

To see how credential types can be exploited to link a subject's pseudonyms, consider the following trivial scenario. At time τ, a credential of type t is shown under the pseudonym p. Suppose, however, that up to time τ only one credential of type t has been issued, and this was done under pseudonym pꞌ. It follows, under the assumption that credentials are bound to subjects, that the two pseudonyms p and pꞌ belong to the same subject; colluding organizations can successfully link those two pseudonyms. We note that, as part of credential showing, some anonymous credential systems allow subjects to reveal only a subset of the encoded attributes. For these systems, it is tempting to define the type of a credential as the collection of attributes that is revealed to the verifier during showing. However, we restrict our attention to the scenario where the verifier, rather than the subject, selects the attributes to be revealed during credential showing. This is, as far as our analysis is concerned, equivalent to the case where only the required set of attributes is encoded into a credential in the first place. This scenario is also likely to be valid for the case where verifiers perform attribute-based access control.
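The linking attack above can be simulated in a few lines of Python. The event tuples and the function name `link_by_type` are hypothetical constructions of ours: each issuance is recorded as (time, type, pseudonym), and colluding organizations link a showing back to an issuing pseudonym whenever exactly one credential of the shown type existed at showing time.

```python
# Issuance events known to the colluding organizations:
# (time, credential_type, pseudonym under which it was issued).
issued = [(1, "t", "p_prime"), (2, "u", "q")]
# A showing event: at time 3, a type-"t" credential is shown under "p".
shown = (3, "t", "p")

def link_by_type(issued, shown):
    """If exactly one credential of the shown type was issued before
    the showing, then (credentials being bound to subjects) the issuing
    pseudonym and the showing pseudonym belong to the same subject."""
    t_show, ctype, p_show = shown
    candidates = [p for (t, c, p) in issued if c == ctype and t < t_show]
    if len(candidates) == 1:
        return (candidates[0], p_show)  # the linked pair of pseudonyms
    return None  # type is ambiguous; no link can be inferred

print(link_by_type(issued, shown))  # ('p_prime', 'p')
```

As soon as a second credential of type t is issued to someone else, the same query returns `None`: the anonymity set for that type has grown beyond one, which is exactly why rare credential types are dangerous.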

Fig. 4.1: Reference model

4.6.4 Attributes

We denote an attribute Ai as the tuple consisting of a name, a value and a type, that is Ai = (ni, vi, ti). The name must be unique within its scope (e.g., a credential structure), which allows us to refer to the attribute using that name and scope. The value is the content of the attribute, encoded as defined by the type.

Attributes {
    Attribute {FirstName, known, type:string}
    Attribute {LastName, known, type:string}
    Attribute {Sex, known, type:string}
    Attribute {Nationality, known, type:string}
    Attribute {CivilStatus, known, type:enum} {Married, Never Married, Widowed, Legally Separated, Divorced}
    Attribute {PassportNumber, known, type:string}
    Attribute {DateofBirth, known, type:date 1900s}
    Attribute {Profession, known, type:enum} {Student, Scientist, Doctor, Engineer, Lawyer, Teacher, others}
    Attribute {AcademicDegree, known, type:enum} {B.Tech, M.Tech, Ph.D, MCA, M.Sc, M.S., M.D., others}
    Attribute {MinorityStatus, known, type:enum} {Blind, Deaf, Hearing impaired, Physically impaired, None}
}

AttributeOrder {FirstName, LastName, Sex, Nationality, CivilStatus, PassportNumber, DateofBirth, Profession, AcademicDegree, MinorityStatus}
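The attribute structure above can be mirrored in code. The following Python sketch is purely illustrative (the `Attribute` dataclass, the sample passport values and the enum set are our own inventions); it shows the (name, value, type) tuple from the definition of Ai, and checks the two constraints the text imposes: names are unique within the structure's scope, and enum values come from the declared set.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Attribute:
    name: str       # unique within its scope (the credential structure)
    value: object   # content, encoded as defined by the type
    type: str       # "string", "enum", "date", ...

# Declared value set for the CivilStatus enum attribute.
CIVIL_STATUS = {"Married", "Never Married", "Widowed",
                "Legally Separated", "Divorced"}

# A partial passport credential structure with hypothetical values.
passport = [
    Attribute("FirstName", "Alice", "string"),
    Attribute("CivilStatus", "Never Married", "enum"),
    Attribute("DateofBirth", date(1990, 5, 17), "date"),
]

# Names must be unique within the scope, so a (scope, name) pair
# identifies an attribute unambiguously.
names = [a.name for a in passport]
assert len(names) == len(set(names))
# Enum values must come from the declared set.
assert passport[1].value in CIVIL_STATUS
```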

4.6.5 Credentials

Credentials come in two forms: one-show and multi-show. The owner of a credential knows all of its attribute values, but the issuer or the verifier may not know certain values.

4.7 Revocation of Anonymous Credentials

Strong anonymous communication schemes make it practically impossible to identify the initiator later on; we say that anonymity is unconditional. Revocability, however, is a required feature of a system for anonymous communication. In this section we explain what revocation is, discuss the difficulties of implementing revocation for anonymous communication, and raise some issues regarding trust and revocation. Finally, we list the requirements for a solution for revocable anonymous access to the Internet.

4.7.1 Concept of Revocation

In an unconditionally anonymous system, it is impossible under any circumstances to find out the identity behind a particular transaction. In contrast, a revocable anonymous system provides a backdoor through which an identity can be traced back. Revocation should be governed by rules: revocation should only be technically possible when a dedicated trusted party cooperates; this trustee should not be involved in the anonymity service itself; and upon revocation, only the identity of the particular target should be revealed, while all other transactions and users remain anonymous. In our case of anonymous access to a service provider, revocation could make sense in a number of situations: for example, tracing a user who uploaded or downloaded illegal content on a particular web server, tracing a hacker who broke into a particular host, or tracing users who communicated with a suspicious party. A revocable anonymity service can be intended for users throughout the whole Internet, or it might be developed for users within a certain organization or ISP. Revocation is, in essence, the ability to trace back.

4.7.2 Difficulties

For anonymous communication, it is not immediately clear which data is relevant for revocation: data packets will probably not contain any useful information (e.g., in onion routing, only the onions contain routing information). Anonymous communication also typically involves many more entities, and these are more distributed. For revocation of communication, neither initiator nor responder is involved (nor interested!) in revocation. It seems very difficult, if not impossible, to force an initiator not to use other, unconditionally anonymous channels instead of the revocable one. Does this mean that a system for revocable anonymous communication needs to be ‘hardwired’ into computer systems? No: it does not seem practically possible to control an initiator's or responder's computing platform (e.g., by storing serial numbers and corresponding cryptographic keys in tamper-resistant network interface cards). It is more realistic to assume that the infrastructure between initiator and responder (e.g., the routers) can be controlled. Note that it is always possible to use a computer at, for example, a cybercafé to access the Internet anonymously; if revocation of the anonymous communication merely yields the IP address of that computer, it may give no clue about the identity of the initiator. Thus, a system for revocable anonymous access to the Internet should be carefully designed to avoid the situation in which only well-behaved users are revocable, while users with malicious intentions can circumvent the system anyway.

4.7.3 Trust and Revocable anonymous communication

Just as in any other security system, trust is a crucial issue in a system for revocable anonymous communication. Who should be trusted, to what extent, and for what purpose, are questions that must be answered. Depending on the actual system for anonymous communication, there might be one or more entities that know the relationship between an initiator and a responder during communication (e.g., if there is only one intermediate mix). These entities are trusted not to reveal or log this relationship. Obviously, the party dedicated to revocation should be trusted not to misuse its powers; this party should not have access to data records whose anonymity must not be revoked. In both Crowds-like and Onion Routing-like systems, applications access the Internet anonymously through a proxy. This proxy knows the initiator's as well as the responder's identity. For individual use, this proxy is under the control of the initiator. Both alternatives should be possible in a revocable system. In an open environment (e.g., the Internet), it does not seem realistic to have a single party that can revoke anonymity on its own. It is better to distribute the capability of revocation among several parties, preferably with a threshold scheme (i.e., a certain minimum number of these parties need to cooperate in order to revoke). Instead of having separate trusted parties for the revocation purpose only, it might be interesting, in the limit, to distribute the capability of revocation among the anonymous service providers themselves. Upon request of the police, a judge, or the government, the community of providers can then decide to cooperate. Note, however, that this contradicts the requirement that the trustee should not be involved in the actual anonymity service (see next section).
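The threshold scheme mentioned above is typically realized with Shamir secret sharing, which the following self-contained Python sketch illustrates. Here a revocation key is split among five trustees so that any three can reconstruct it; the field prime, function names, and the 3-of-5 parameters are our own illustrative choices, and a production scheme would of course use a vetted library rather than this toy code.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def share(secret, k, n):
    """Split `secret` into n shares such that any k of them suffice to
    reconstruct it (Shamir): evaluate a random degree-(k-1) polynomial
    with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

# Revocation capability split among 5 trustees; any 3 can revoke,
# but no 2 learn anything about the key.
key = secrets.randbelow(P)
shares = share(key, 3, 5)
assert reconstruct(shares[:3]) == key
assert reconstruct(shares[2:]) == key
```

The appeal for revocation is that no single trustee, and no coalition smaller than the threshold, can trace a user on its own, which directly enforces the "cooperation of several parties" rule stated above.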

4.7.4 Requirements

Based on the observations and discussions made up to this point, we list here the most important requirements for a system for revocable anonymous access to the Internet. The system should provide anonymous access to the Internet at the IP layer; this means that any application is anonymized, not only, for example, HTTP. Revocation should be provided according to the rules: the cooperation of a dedicated trusted party should be required for revocation; this party should not be involved in the actual anonymity service; and the trusted party should only be able to revoke the anonymity of the suspected communication and no other. Revocation should lead to the identity of the initiator. This identity should be bound to a user, and should not just be the IP address of the originating host. It should not be possible to take over some user's identity or hijack an existing anonymous connection. Revocation should never lead to the identity of a well-behaved initiator. The system should be designed in such a way that only the initiator needs to know about the infrastructure and is required to install the necessary (software) interface to use it. The responder may of course be aware that it receives (some of its) communication through this infrastructure, but does not need to install any special software or hardware. Information needed for revocation should be stored in a central place; this centralized storage should not affect the strong anonymity properties of the service itself.

For any revocation scheme, the following requirements are taken from [76]:

(R1: Anonymity) The anonymity of the anonymous certificate holder must be preserved by the revocation scheme. The fact that the certificate is to be revoked is not in itself justification to reveal the holder's real-world identity, to allow linkage of his certificates, or to release information that may later lead to his identification.

(R2: Authorisation) To prevent denial-of-service attacks, the revocation scheme must ensure that the necessary actions can be carried out by authorized parties only, on receipt of proper authorization tokens.

(R3: Accountability) The revocation scheme should provide accountability for each party's actions. By accountability, we mean that the scheme authorizes an entity, such as a Legal Authority, to collect, interpret and collate evidence of the actions of the parties involved from an audit trail. In this way, each party is motivated to comply with the protocol, as any deviation may be detected and attributed to it. To hold parties accountable, the evidence shown must have the non-repudiation property.
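A common building block for such an audit trail is a hash chain, sketched below in Python with illustrative names (`append_entry`, `verify`) of our own. Each record commits to the hash of its predecessor, so tampering with any earlier record invalidates every later one. Note that a hash chain alone provides tamper evidence, not full non-repudiation (R4); the latter additionally requires each record to carry the acting party's digital signature, omitted here for brevity.

```python
import hashlib
import json

def _digest(actor, action, prev):
    payload = json.dumps({"actor": actor, "action": action, "prev": prev},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_entry(trail, actor, action):
    """Append an action record chained to the previous entry's hash."""
    prev = trail[-1]["hash"] if trail else "0" * 64
    trail.append({"actor": actor, "action": action, "prev": prev,
                  "hash": _digest(actor, action, prev)})

def verify(trail):
    """Recompute the chain; any modified or reordered record fails."""
    prev = "0" * 64
    for rec in trail:
        if rec["prev"] != prev or rec["hash"] != _digest(
                rec["actor"], rec["action"], rec["prev"]):
            return False
        prev = rec["hash"]
    return True

trail = []
append_entry(trail, "LegalAuthority", "authorize revocation of cert 42")
append_entry(trail, "Trustee-1", "submit revocation share")
assert verify(trail)

trail[0]["action"] = "authorize revocation of cert 99"  # tamper
assert not verify(trail)
```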

(R4: Non-repudiation) The property of non-repudiation is important in order to hold a party accountable for its actions: when presented with the evidence, no party can falsely deny its actions or claim that such actions were performed by another party.

(R5: Notification) Notification of revocation is a feature to be provided to the user as a courtesy. This notification may take the form of a Revocation Acknowledgement to the user, if the user requested the revocation, or a Revocation Notice to the user, if the revocation was invoked by the CA or a Legal Authority.

4.8 Conclusion

In this chapter we have given an overview of anonymous credential systems, their properties, our reference model and its features.
