Trusted system

In the security engineering subspecialty of computer science, a trusted system is a system that is relied upon to a specified extent to enforce a specified security policy. As such, a trusted system is one whose failure may break that security policy.

A different usage of the term can be found in time management, specifically in the Getting Things Done (GTD) methodology.

Trusted systems in classified information

Trusted systems are used for the processing, storage and retrieval of sensitive or classified information.

Central to the concept of U.S. Department of Defense-style "trusted systems" is the notion of a "reference monitor", an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is (a) tamperproof, (b) always invoked, and (c) small enough to be subject to independent testing, the completeness of which can be assured. The U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or Orange Book, defined a set of "evaluation classes" describing the features and assurances that the user could expect from a trusted system.
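
To make the reference monitor's role concrete, the following is a minimal sketch (in Python, with illustrative names; no particular evaluated system's interface is implied) of a single mediation point through which every access request must pass. Properties (a) and (c) above would be met by keeping such a component tamperproof and small enough to analyze:

    from typing import Callable

    class ReferenceMonitor:
        """A single chokepoint that decides every access request."""

        def __init__(self, policy: Callable[[str, str, str], bool]):
            # policy(subject, obj, mode) -> True if the access is permitted.
            self._policy = policy

        def check(self, subject: str, obj: str, mode: str) -> bool:
            # Property (b), "always invoked": in a real system this mediation
            # is enforced by hardware and the kernel, not by caller discipline
            # as in this sketch.
            return self._policy(subject, obj, mode)

    # A trivial policy table standing in for the real security policy.
    ACL = {("alice", "payroll.db", "read")}
    monitor = ReferenceMonitor(lambda s, o, m: (s, o, m) in ACL)
    assert monitor.check("alice", "payroll.db", "read")
    assert not monitor.check("bob", "payroll.db", "write")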

The highest levels of assurance were guaranteed by significant system engineering directed toward minimization of the size of the trusted computing base, or TCB, defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy.

Because failure of the TCB breaks the trusted system, higher assurance is provided by the minimization of the TCB. An inherent engineering conflict arises in higher-assurance systems in that, the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB. This may lead to some philosophical arguments about the nature of trust, based on the notion that a "trustworthy" implementation may not necessarily be a "correct" implementation from the perspective of users' expectations.

In stark contrast to the TCSEC's precisely defined hierarchy of six evaluation classes, the more recently introduced Common Criteria (CC)—which derive from an uneasy meld of more or less technically mature standards from various NATO countries—provide a more tenuous spectrum of seven "evaluation assurance levels" (EAL1 through EAL7) that intermix features and assurances in an arguably non-hierarchical manner and lack the philosophic precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support—even encourage—a flippant intermixture of security requirements culled from a variety of predefined "protection profiles." While a very strong case can be made that even the more seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.

The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, working under the close technical guidance and economic sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Air Force Electronic Systems Division (Hanscom Field, Bedford, MA), devised what is known as the Bell-LaPadula model, in which a more or less trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data, such as files, disks, printers) and subjects (active entities—perhaps users, or system processes or threads operating on behalf of those users—that cause information to flow among objects). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows.

At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set, characterizable as a directed acyclic graph, in which the relationship between any two vertices is either "dominates," "is dominated by," or neither.) She defined a generalized notion of "labels"—corresponding more or less to the full security markings one encounters on classified military documents, e.g., TOP SECRET WNINTEL TK DUMBO—that are attached to entities.

Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report, entitled "Secure Computer System: Unified Exposition and Multics Interpretation," whereby labels attached to objects represent the sensitivity of the data contained within the object (though there can be, and often is, a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself), while labels attached to subjects represent the trustworthiness of the user executing the subject. The concepts are unified by two properties: the "simple security property" (a subject can only read from an object that it "dominates"; "is greater than" is a close enough, albeit mathematically imprecise, interpretation) and the "confinement property," or "*-property" (a subject can only write to an object that dominates it). These properties are loosely referred to as "no read up" and "no write down," respectively. Jointly enforced, they ensure that information cannot flow "downhill" to a repository whence insufficiently trustworthy recipients might discover it. By extension, assuming that the labels assigned to subjects truly represent their trustworthiness, the no-read-up and no-write-down rules rigidly enforced by the reference monitor are provably sufficient to constrain Trojan horses, one of the most general classes of attack (viz., the popularly reported worms and viruses are specializations of the Trojan horse concept).
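
The following is a hedged sketch (in Python; the particular levels, categories, and function names are illustrative assumptions, not drawn from the Bell-LaPadula report) of how labels, the dominance relation, and the two properties fit together:

    # Labels are (classification level, set of categories); "dominates" is the
    # lattice partial order: higher-or-equal level AND a superset of categories.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def dominates(a, b):
        level_a, cats_a = a
        level_b, cats_b = b
        return LEVELS[level_a] >= LEVELS[level_b] and cats_a >= cats_b

    def may_read(subject_label, object_label):
        # Simple security property ("no read up"): subject dominates object.
        return dominates(subject_label, object_label)

    def may_write(subject_label, object_label):
        # *-property ("no write down"): object dominates subject.
        return dominates(object_label, subject_label)

    analyst = ("SECRET", {"WNINTEL"})
    report = ("CONFIDENTIAL", set())
    assert may_read(analyst, report)        # reading down is permitted
    assert not may_write(analyst, report)   # writing down is forbidden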

The Bell-LaPadula model technically enforces only "confidentiality," or "secrecy," controls, i.e., it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose it inappropriately. The dual problem of "integrity," i.e., the problem of the accuracy (even provenance) of objects and the attendant trustworthiness of subjects not to modify or destroy it inappropriately, is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark-Wilson model and Shockley and Schell's program integrity model.
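
As a sketch of that duality, the Biba rules simply invert the direction of the checks in the previous example, reading integrity labels instead of sensitivity labels (again illustrative, and assuming the same label representation and dominates() helper as in the Bell-LaPadula sketch above):

    def biba_may_read(subject_label, object_label):
        # "No read down": the object's integrity must dominate the subject's.
        return dominates(object_label, subject_label)

    def biba_may_write(subject_label, object_label):
        # "No write up": the subject's integrity must dominate the object's.
        return dominates(subject_label, object_label)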

An important feature of the class of security controls described supra, termed mandatory access controls, or MAC, is that they are entirely beyond the control of any user: the TCB automatically attaches labels to any subjects executed on behalf of users; to files created, deleted, read, or written by users; and so forth. In contrast, an additional class of controls, termed discretionary access controls, or DAC, are under the direct control of system users. Protection mechanisms such as permission bits (supported by UNIX since the late 1960s and—in a more flexible and powerful form—by Multics since earlier still) and access control lists (ACLs) are familiar examples of discretionary access controls.
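
For contrast with the mandatory controls above, the following sketch checks UNIX-style permission bits, a discretionary control: the bits are chosen by the file's owner, so protection is at the user's discretion (the function and the example mode are illustrative assumptions):

    import stat

    def may_read_file(mode: int, is_owner: bool, in_group: bool) -> bool:
        # mode is the st_mode value from os.stat(); pick the relevant read bit.
        if is_owner:
            return bool(mode & stat.S_IRUSR)
        if in_group:
            return bool(mode & stat.S_IRGRP)
        return bool(mode & stat.S_IROTH)

    # Mode 0o640 grants read to the owner and group, but not to others.
    assert may_read_file(0o640, is_owner=True, in_group=False)
    assert not may_read_file(0o640, is_owner=False, in_group=False)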

The behavior of a trusted system is often characterized in terms of a mathematical model—which may be more or less rigorous, depending upon applicable operational and administrative constraints—that takes the form of a finite state machine (FSM) with state criteria; state transition constraints; a set of "operations" (each usually, but not necessarily, corresponding to a single state transition); and a descriptive top-level specification, or DTLS, entailing a user-perceptible interface (e.g., an API, a set of system calls in UNIX parlance, or system exits in mainframe parlance), each element of which engenders one or more model operations.
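
A deliberately tiny illustration of this modelling style follows (states, a transition constraint, and named operations onto which a DTLS-level interface would map; all names are invented for the example and not taken from any evaluated system):

    SECURE_STATES = {"locked", "session_active"}

    TRANSITIONS = {
        ("locked", "authenticate"): "session_active",
        ("session_active", "logout"): "locked",
    }

    def step(state: str, operation: str) -> str:
        """Apply one model operation; reject transitions the constraints forbid."""
        next_state = TRANSITIONS.get((state, operation))
        if next_state is None or next_state not in SECURE_STATES:
            raise ValueError(f"operation {operation!r} not permitted in state {state!r}")
        return next_state

    state = "locked"
    state = step(state, "authenticate")   # one DTLS-level call maps onto one model operation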

Trusted systems in trusted computing

Trust is used by the Trusted Computing Group mainly in the sense of authorization ("a trusted user is a user authorized to do X").

Trusted systems in policy analysis

Trusted systems in the context of national or homeland security, law enforcement, or social control policy are systems in which some conditional prediction about the behavior of people or objects within the system has been determined prior to authorizing access to system resources.[1]

For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and the use of credit or identity scoring systems in financial and anti-fraud applications; in general, they include any system (i) in which probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or allocating resources against likely threats (including their use in the design of system constraints to control behavior within the system), or (ii) in which deviation analysis or systems surveillance is used to ensure that behavior within the system complies with expected or authorized parameters.

The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice based on accountability for deviant actions after they occur (see Cesare Beccaria, On Crimes and Punishments, 1764) to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints (see Michel Foucault, Discipline and Punish, 1975; Alan Sheridan, tr., 1977, 1995). In this emergent model, "security" is geared not towards policing but towards risk management through surveillance, exchange of information, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty, and to a broader philosophical debate about appropriate methods of social governance.

Trusted systems in information theory

Trusted systems in the context of information theory are based on Ed Gerck's definition of trust: "Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel."[2]

In information theory, information has nothing to do with knowledge or meaning. In this context, information is simply that which is transferred from a source to a destination, using a communication channel. If, before transmission, the information is already available at the destination, then the transfer conveys zero information. Information received by a party is that which the party does not expect, as measured by the party's uncertainty as to what the message will be.
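
A small numeric illustration of this point (Shannon entropy in bits; the probabilities are made up for the example):

    from math import log2

    def entropy(probabilities):
        """Shannon entropy in bits of a discrete distribution."""
        return sum(-p * log2(p) for p in probabilities if p > 0)

    # A message the destination already knows (one outcome with probability 1)
    # carries zero information; a uniform four-way uncertainty carries 2 bits.
    print(entropy([1.0]))                     # 0.0
    print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0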

Likewise, trust as defined by Gerck has nothing to do with friendship, acquaintance, employer-employee relationships, loyalty, betrayal, or other overly variable concepts. Nor is trust taken in a purely subjective sense, or as a feeling or something purely personal or psychological; trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), so that all of the necessarily different subjective and intersubjective realizations of trust in each subsystem (people and machines) may coexist.[3]

Taken together in the model of information theory, information is what you do not expect and trust is what you know. Linking both concepts, trust is seen as qualified reliance on received information. In terms of trusted systems, an assertion of trust cannot be based on the record itself, but must rest on information from other information channels.[4]

An introduction to the calculus of trust (for example, "If I connect two trusted systems, are they more or less trusted when taken together?") is given in [3].

The IBM Federal Software Group[5] has suggested that Gerck's definition[2] provides the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered[5] to be requisite for achieving the desired collaborative, service-oriented architecture vision.

References

1. The concept of trusted systems described here is discussed in K. A. Taipale, "[http://doi.ieeecomputersociety.org/10.1109/MIS.2005.89 The Trusted Systems Problem: Security Envelopes, Statistical Threat Analysis, and the Presumption of Innocence]," Homeland Security - Trends and Controversies, IEEE Intelligent Systems, Vol. 20, No. 5, pp. 80-83 (Sept./Oct. 2005).
2. "Trust Points," in Digital Certificates: Applied Internet Security by J. Feghhi, J. Feghhi and P. Williams, Addison-Wesley, ISBN 0-201-30980-7, 1998; [http://mcwg.org/mcg-mirror/trustdef.htm Toward Real-World Models of Trust: Reliance on Received Information].
3. "[http://nma.com/papers/it-trust-part1.pdf Trust as Qualified Reliance on Information, Part I]," The COOK Report on Internet, Volume X, No. 10, January 2002, ISSN 1071-6327.
4. [http://pages.ca.inter.net/~euclid1/call.html John D. Gregory], Electronic Legal Records: Pretty Good Authentication?
5. [http://issaa.org/documents/NCEStrustframework.doc Christopher Daly], A Trust Framework for the DoD Network-Centric Enterprise Services (NCES) Environment, IBM Corp., 2004.

External links

The [http://trusted-systems.info/ Trusted Systems Project], a part of the Global Information Society Project ([http://global-info-society.org/ GISP]), a joint research project of the World Policy Institute ([http://worldpolicy.org/ WPI]) and the Center for Advanced Studies in Science & Technology Policy ([http://advancedstudies.org/ CAS]).

"See also:"
*computer security
*secure computing
*trusted computing

