With a digital signature scheme, the central office can arrange beforehand to have a public key on file whose private key is known only to the branch office. The branch office can later sign a message and the central office can use the public key to verify the signed message was not a forgery before acting on it. A forger who doesn't know the sender's private key can't sign a different message, or even change a single digit in an existing message without making the recipient's signature verification fail.
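As a concrete sketch of this workflow, signing and verification might look like the following in Python. The choice of the Ed25519 scheme, the third-party cryptography package, and the sample message are illustrative assumptions, not details of the bank scenario above.

```python
# Minimal sketch of the sign/verify workflow using Ed25519
# (requires the third-party "cryptography" package).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The branch office generates a key pair; the public key is placed
# on file at the central office beforehand.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"Transfer $100 to account 1234"   # illustrative message
signature = private_key.sign(message)

# The central office verifies the signature before acting on the message.
try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("forgery detected")

# Changing even a single digit makes verification fail.
tampered = b"Transfer $900 to account 1234"
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("tampered message rejected")
```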
Encryption can hide the content of the message from an eavesdropper, but encryption on its own may not let the recipient verify the message's authenticity, or even detect selective modifications like changing a digit—if the bank's offices simply encrypted the messages they exchange, they could still be vulnerable to forgery. In other applications, such as software updates, the messages are not secret—when a software author publishes a patch for all existing installations of the software to apply, the patch itself is not secret, but computers running the software must verify the authenticity of the patch before applying it, lest they become victims of malware.
Replays. A digital signature scheme on its own does not prevent a valid signed message from being recorded and then maliciously reused in a replay attack. For example, the branch office may legitimately request that a bank transfer be issued once in a signed message. If the bank doesn't use a system of transaction ids in their messages to detect which transfers have already happened, someone could illegitimately reuse the same signed message many times to drain an account.
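A minimal sketch of such a countermeasure, assuming a hypothetical JSON message format with a transaction_id field and an Ed25519-style key object as in the previous sketch; the verifier checks the signature first and then refuses any id it has already processed:

```python
# Hypothetical replay check: the verifier remembers which transaction
# ids it has already honoured and rejects repeats.
import json

processed_ids = set()

def accept_transfer(signed_payload: bytes, signature: bytes, public_key) -> bool:
    """Verify the signature, then reject transaction ids seen before."""
    public_key.verify(signature, signed_payload)   # raises on forgery
    request = json.loads(signed_payload)
    tx_id = request["transaction_id"]              # assumed field name
    if tx_id in processed_ids:
        return False                               # replayed message
    processed_ids.add(tx_id)
    return True
```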
Uniqueness and malleability of signatures. A signature itself cannot be used to uniquely identify the message it signs—in some signature schemes, every message has a large number of possible valid signatures from the same signer, and it may be easy, even without knowledge of the private key, to transform one valid signature into another. If a bank-like system such as a Bitcoin exchange misuses signatures as transaction ids in an attempt to detect replays, this malleability can be exploited to replay transactions.
Authenticating a public key. Prior knowledge of a public key can be used to verify authenticity of a signed message, but not the other way around—prior knowledge of a signed message cannot be used to verify authenticity of a public key. In some signature schemes, given a signed message, it is easy to construct a public key under which the signed message will pass verification, even without knowledge of the private key that was used to make the signed message in the first place.
Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property, an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key alone does not enable a fraudulent party to forge a valid signature.
Note that these properties (authentication, non-repudiation, and so on) rely on the secret key not having been revoked prior to its usage. Public revocation of a key pair is a required ability; otherwise, leaked secret keys would continue to implicate the claimed owner of the key pair. Checking revocation status requires an "online" check; e.g., checking a certificate revocation list or via the Online Certificate Status Protocol. Very roughly, this is analogous to a vendor who receives credit cards first checking online with the credit-card issuer to find whether a given card has been reported lost or stolen. Of course, with stolen key pairs, the theft is often discovered only after the secret key's use, e.g., to sign a bogus certificate for espionage purposes.
In their foundational paper, Goldwasser, Micali, and Rivest lay out a hierarchy of attack models against digital signatures:
They also describe a hierarchy of attack results:
The strongest notion of security, therefore, is security against existential forgery under an adaptive chosen message attack.
All public key / private key cryptosystems depend entirely on keeping the private key secret. A private key can be stored on a user's computer, and protected by a local password, but this has two disadvantages:
A more secure alternative is to store the private key on a smart card. Many smart cards are designed to be tamper-resistant. In a typical digital signature implementation, the hash calculated from the document is sent to the smart card, whose CPU signs the hash using the stored private key of the user, and then returns the signed hash. Typically, a user must activate their smart card by entering a personal identification number or PIN code. It can be arranged that the private key never leaves the smart card, although this is not always implemented. If the smart card is stolen, the thief will still need the PIN code to generate a digital signature. This reduces the security of the scheme to that of the PIN system, although it still requires an attacker to possess the card. A mitigating factor is that private keys, if generated and stored on smart cards, are usually regarded as difficult to copy, and are assumed to exist in exactly one copy. Thus, the loss of the smart card may be detected by the owner and the corresponding certificate can be immediately revoked. Private keys that are protected by software only may be easier to copy, and such compromises are far more difficult to detect.
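The division of labour described above can be sketched as follows. There is no real smart card here, and the RSA/SHA-256 choices are illustrative assumptions; the point is that only the document's digest, not the document itself, is handed to the signing step, which is the part a card would perform.

```python
# Sketch of the host/card division of labour using the third-party
# "cryptography" package: the host hashes the document, and only the
# digest is passed to the signer (the role a smart card would play).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

document = b"... contract text ..."
digest = hashes.Hash(hashes.SHA256())
digest.update(document)
document_hash = digest.finalize()                 # computed on the host

signature = private_key.sign(                     # performed "on the card"
    document_hash,
    padding.PKCS1v15(),
    utils.Prehashed(hashes.SHA256()),
)

private_key.public_key().verify(                  # any verifier can check
    signature,
    document_hash,
    padding.PKCS1v15(),
    utils.Prehashed(hashes.SHA256()),
)
```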
Entering a PIN code to activate the smart card commonly requires a numeric keypad. Some card readers have their own numeric keypad. This is safer than using a card reader integrated into a PC, and then entering the PIN using that computer's keyboard. Readers with a numeric keypad are meant to circumvent the eavesdropping threat where the computer might be running a keystroke logger, potentially compromising the PIN code. Specialized card readers are also less vulnerable to tampering with their software or hardware and are often EAL3 certified.
Smart card design is an active field, and there are smart card schemes which are intended to avoid these particular problems, despite having few security proofs so far.
One of the main differences between a digital signature and a written signature is that the user does not "see" what they sign. The user application presents a hash code to be signed by the digital signing algorithm using the private key. An attacker who gains control of the user's PC can possibly replace the user application with a foreign substitute, in effect replacing the user's own communications with those of the attacker. This could allow a malicious application to trick a user into signing any document by displaying the user's original on-screen, but presenting the attacker's own documents to the signing application.
To protect against this scenario, an authentication system can be set up between the user's application and the signing application. The general idea is to provide some means for both the user application and signing application to verify each other's integrity. For example, the signing application may require all requests to come from digitally signed binaries.
One of the main differences between a cloud-based digital signature service and a locally provided one is risk. Many risk-averse companies, including governments, financial and medical institutions, and payment processors, require more secure standards, like FIPS 140-2 level 3 and FIPS 201 certification, to ensure the signature is validated and secure.
Technically speaking, a digital signature applies to a string of bits, whereas humans and applications "believe" that they sign the semantic interpretation of those bits. In order to be semantically interpreted, the bit string must be transformed into a form that is meaningful for humans and applications, and this is done through a combination of hardware and software based processes on a computer system. The problem is that the semantic interpretation of bits can change as a function of the processes used to transform the bits into semantic content. It is relatively easy to change the interpretation of a digital document by implementing changes on the computer system where the document is being processed. From a semantic perspective this creates uncertainty about what exactly has been signed. WYSIWYS (What You See Is What You Sign) means that the semantic interpretation of a signed message cannot be changed. In particular this also means that a message cannot contain hidden information that the signer is unaware of, and that can be revealed after the signature has been applied. WYSIWYS is a requirement for the validity of digital signatures, but this requirement is difficult to guarantee because of the increasing complexity of modern computer systems. The term WYSIWYS was coined by Peter Landrock and Torben Pedersen to describe some of the principles in delivering secure and legally binding digital signatures for Pan-European projects.
An ink signature could be replicated from one document to another by copying the image manually or digitally, but producing credible signature copies that can resist some scrutiny requires significant manual or technical skill, and producing ink signature copies that resist professional scrutiny is very difficult.
Digital signatures cryptographically bind an electronic identity to an electronic document and the digital signature cannot be copied to another document. Paper contracts sometimes have the ink signature block on the last page, and the previous pages may be replaced after a signature is applied. Digital signatures can be applied to an entire document, such that the digital signature on the last page will indicate tampering if any data on any of the pages have been altered, but this can also be achieved by signing with ink and numbering all pages of the contract.
RSA
DSA
ECDSA
EdDSA
RSA with SHA
ECDSA with SHA
ElGamal signature scheme as the predecessor to DSA, and variants Schnorr signature and Pointcheval–Stern signature algorithm
Rabin signature algorithm
Pairing-based schemes such as BLS
NTRUSign is an example of a digital signature scheme based on hard lattice problems
Undeniable signatures
Aggregate signature – a signature scheme that supports aggregation: Given n signatures on n messages from n users, it is possible to aggregate all these signatures into a single signature whose size is constant in the number of users. This single signature will convince the verifier that the n users did indeed sign the n original messages. A scheme by Mihir Bellare and Gregory Neven may be used with Bitcoin.
Signatures with efficient protocols – signature schemes that facilitate efficient cryptographic protocols such as zero-knowledge proofs or secure computation.
Most digital signature schemes share the following goals regardless of cryptographic theory or legal provision:
Only if all of these conditions are met will a digital signature actually be any evidence of who sent the message, and therefore of their assent to its contents. Legal enactments cannot change this reality of the existing engineering possibilities, though some such enactments have not reflected this actuality.
Legislatures, being importuned by businesses expecting to profit from operating a PKI, or by the technological avant-garde advocating new solutions to old problems, have enacted statutes and/or regulations in many jurisdictions authorizing, endorsing, encouraging, or permitting digital signatures and providing for their legal effect. The first appears to have been in Utah in the United States, followed closely by the states Massachusetts and California. Other countries have also passed statutes or issued regulations in this area as well and the UN has had an active model law project for some time. These enactments vary from place to place, have typically embodied expectations at variance with the state of the underlying cryptographic engineering, and have had the net effect of confusing potential users and specifiers, nearly all of whom are not cryptographically knowledgeable.
Adoption of technical standards for digital signatures has lagged behind much of the legislation, delaying a more or less unified engineering position on interoperability, algorithm choice, key lengths, and so on that the engineering is attempting to provide.
Some industries have established common interoperability standards for the use of digital signatures between members of the industry and with regulators. These include the Automotive Network Exchange for the automobile industry and the SAFE-BioPharma Association for the healthcare industry.
In several countries, a digital signature has a status somewhat like that of a traditional pen and paper signature, as in the 1999 EU digital signature directive and 2014 EU follow-on legislation. Generally, these provisions mean that anything digitally signed legally binds the signer of the document to the terms therein. For that reason, it is often thought best to use separate key pairs for encrypting and signing. Using the encryption key pair, a person can engage in an encrypted conversation, but the encryption does not legally sign every message he or she sends. Only when both parties come to an agreement do they sign a contract with their signing keys, and only then are they legally bound by the terms of a specific document. After signing, the document can be sent over the encrypted link. If a signing key is lost or compromised, it can be revoked to mitigate the impact on future transactions. If an encryption key is lost, a backup or key escrow should be utilized to continue viewing encrypted content. Signing keys should never be backed up or escrowed unless the backup destination is securely encrypted.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid secrecy in communication. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key cryptography. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived in Egypt around 1900 BC. Symbol replacement encryption is "non-standard," which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar cipher, a system in which a letter in normal text is shifted a fixed number of positions down the alphabet to get the encoded letter. A message encoded with this type of encryption could be decoded by anyone who knew the fixed shift.
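A toy implementation of the Caesar cipher illustrates the idea; the shift of 3 and the sample message are arbitrary choices:

```python
# Toy Caesar cipher: shift each letter a fixed number of positions.
def caesar(text: str, shift: int) -> str:
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)   # leave spaces and punctuation alone
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN", 3)   # 'DWWDFN DW GDZQ'
plaintext = caesar(ciphertext, -3)         # knowing the shift undoes it
```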
Around 800 AD, Arab mathematician Al-Kindi developed the technique of frequency analysis – which was an attempt to systematically crack Caesar ciphers. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift. This technique was rendered ineffective after the creation of the polyalphabetic cipher by Leon Battista Alberti in 1465, which incorporated different sets of languages. In order for frequency analysis to be useful, the person trying to decrypt the message would need to know which language the sender chose.
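A rough sketch of the idea: assuming the plaintext is English, guess that the most frequent ciphertext letter corresponds to 'e' and recover the shift from that guess. This is a simplification of full frequency analysis, which compares whole letter-frequency distributions rather than a single letter.

```python
# Sketch: guess a Caesar shift from letter frequencies.
from collections import Counter

def guess_shift(ciphertext: str) -> int:
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    most_common_letter, _ = Counter(letters).most_common(1)[0]
    # Assume the most common ciphertext letter stands for plaintext 'e'.
    return (ord(most_common_letter) - ord('e')) % 26
```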
Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages in order to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was conceived as a spool that could jumble an English message of up to 36 characters. The message could be decrypted by plugging the jumbled message into a receiver with an identical cipher.
A similar device to the Jefferson Disk, the M-94, was developed independently in 1917 by US Army Major Joseph Mauborgne. This device was used in U.S. military communications until 1942.
In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent eavesdropping attacks. One of the first "modern" cipher suites, DES, used a 56-bit key with 72,057,594,037,927,936 possibilities; it was cracked in 22 hours and 15 minutes by the EFF's DES cracker in 1999, which used a brute-force method. Modern encryption standards often use stronger key sizes, often 256 bits, as in AES-256, Twofish, ChaCha20-Poly1305, and Serpent. Cipher suites using a 128-bit or larger key, such as AES, cannot practically be brute-forced, because the total number of keys is about 3.4 × 10^38. The most likely option for cracking ciphers with large key sizes is to find vulnerabilities in the cipher itself, such as inherent biases and backdoors. For example, RC4, a stream cipher, was broken due to inherent biases and vulnerabilities in the cipher.
In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key.
Many complex cryptographic algorithms often use simple modular arithmetic in their implementations.
In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine utilized a new symmetric-key each day for encoding and decoding messages.
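A minimal symmetric-key example, using Fernet from the third-party cryptography package as an illustrative choice; the single shared key is used both to encrypt and to decrypt:

```python
# Symmetric-key encryption sketch: both parties must hold the same key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # this one secret must reach both parties
f = Fernet(key)

token = f.encrypt(b"meet at dawn")   # sender encrypts with the shared key
plaintext = f.decrypt(token)         # receiver decrypts with the same key
assert plaintext == b"meet at dawn"
```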
In public-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; before then, all encryption schemes were symmetric-key. Although published later, the work of Diffie and Hellman appeared in a journal with a large readership, and the value of the methodology was explicitly described. The method became known as the Diffie-Hellman key exchange.
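A toy Diffie-Hellman exchange with deliberately tiny numbers shows the mechanics; real deployments use primes that are thousands of bits long, and the specific values below are arbitrary:

```python
# Toy Diffie-Hellman key exchange (insecure parameter sizes, for illustration).
p, g = 23, 5            # public modulus and generator, agreed in the open

a = 6                   # Alice's secret exponent
b = 15                  # Bob's secret exponent

A = pow(g, a, p)        # Alice sends g^a mod p
B = pow(g, b, p)        # Bob sends g^b mod p

shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p
assert shared_alice == shared_bob   # both arrive at the same shared secret
```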
RSA is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
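A toy version of RSA key generation and use, with the small textbook primes 61 and 53 rather than the very large primes used in practice:

```python
# Toy RSA with tiny primes to show how the keys are derived
# (requires Python 3.8+ for the modular inverse via pow).
p, q = 61, 53
n = p * q                      # 3233, part of both keys
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: e*d ≡ 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)       # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)     # decrypt with the private key (d, n)
assert recovered == message
```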
A publicly available public-key encryption application called Pretty Good Privacy was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated.
Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in transit, and 53% utilized encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices. In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering, are another somewhat different example of using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks, mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
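A minimal illustration of crypto-shredding, again using Fernet from the cryptography package as an illustrative choice: once the key is discarded, the stored ciphertext is effectively erased even though its bytes remain on the medium.

```python
# Crypto-shredding sketch: encrypt data, then "erase" it by deleting the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
stored_blob = Fernet(key).encrypt(b"customer records ...")

key = None   # destroying the key is the (near-instant) erasure step
# The blob that remains on disk can no longer be decrypted by anyone.
```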
Encryption is used in the 21st century to protect digital data and information systems. As computing power increased over the years, encryption technology has only become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method. For example, the original DES standard used a 56-bit key, meaning it had 2^56 possible keys. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute force attacks.
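The arithmetic behind this can be sketched as follows; the rate of one trillion guesses per second is an illustrative assumption, not a measured figure:

```python
# Comparing key-space sizes, assuming a hypothetical attacker testing
# one trillion (1e12) keys per second.
des_keys = 2 ** 56        # 72,057,594,037,927,936 possible keys
aes128_keys = 2 ** 128    # about 3.4e38 possible keys

seconds_per_year = 60 * 60 * 24 * 365
des_years = des_keys / 2 / 1e12 / seconds_per_year      # a few hours on average
aes_years = aes128_keys / 2 / 1e12 / seconds_per_year   # ~5e18 years on average

print(f"DES:     {des_years:.6f} years")
print(f"AES-128: {aes_years:.3e} years")
```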
Quantum computing utilizes properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers. This computing power presents a challenge to today's encryption technology. For example, RSA encryption utilizes the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private key requires this semiprime number to be factored, which can take a very long time to do with modern computers. It would take a supercomputer anywhere from weeks to months to factor this number. However, quantum computing can use quantum algorithms to factor this semiprime number in the same amount of time it takes for normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing.
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and exists only as experimental devices rather than general-purpose computers. Furthermore, quantum computing advancements can also be utilized in favor of encryption. The National Security Agency is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.
Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service. Homomorphic encryption and secure multi-party computation are emerging techniques to compute on encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs.
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active defense data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.
The question of balancing the need for national security with the right to privacy has been debated for years, since encryption has become critical in today's digital society. The modern encryption debate started around the 1990s, when the US government tried to ban cryptography because, in its view, it would threaten national security. The debate is polarized around two opposing views: those who see strong encryption as a problem because it makes it easier for criminals to hide their illegal acts online, and those who argue that encryption keeps digital communications safe. The debate heated up in 2014, when big technology companies like Apple and Google set encryption by default in their devices. This was the start of a series of controversies that pits governments, companies, and internet users against one another.
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code or a digital signature, which is usually computed with a hashing algorithm or a PGP signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for example traffic analysis, TEMPEST, or Trojan horse.
Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.
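A minimal sketch of this principle in an encrypt-then-MAC layout: the MAC is computed over the ciphertext on the device that creates the message, so any later modification along the path is detected at the receiving end. The key handling and message framing below are illustrative assumptions, not a production design.

```python
# Encrypt-then-MAC sketch using the standard library's hmac module.
import hashlib
import hmac
import os

mac_key = os.urandom(32)   # shared integrity key (illustrative)

def protect(ciphertext: bytes) -> bytes:
    """Append an HMAC tag computed over the ciphertext at creation time."""
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def check(message: bytes) -> bytes:
    """Verify the tag before the ciphertext is trusted or decrypted."""
    ciphertext, tag = message[:-32], message[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext was modified in transit")
    return ciphertext
```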
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's length is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages.
Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the ciphertext leaks no metadata about its cleartext's content, and leaks only an asymptotically minimal amount of information via its length.
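A minimal sketch of deterministic length padding, assuming an arbitrary 256-byte bucket size: every plaintext is padded up to the next bucket boundary before encryption, so the ciphertext length reveals only the bucket, not the exact message length.

```python
# Deterministic length padding (ISO/IEC 7816-4 style: 0x80 then zeros).
BUCKET = 256  # bytes; illustrative choice

def pad(plaintext: bytes) -> bytes:
    padded = plaintext + b"\x80"              # unambiguous end-of-message marker
    padded += b"\x00" * (-len(padded) % BUCKET)
    return padded

def unpad(padded: bytes) -> bytes:
    return padded[: padded.rindex(b"\x80")]   # strip marker and trailing zeros

assert unpad(pad(b"hello")) == b"hello"
assert len(pad(b"hello")) == BUCKET           # only the bucket size is visible
```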
Read/write conflicts, commonly termed interlocking, that arise when the same shared memory location is accessed simultaneously are resolved by one of the following strategies:
Here, E and C stand for 'exclusive' and 'concurrent' respectively. The read causes no discrepancies while the concurrent write is further defined as:
Several simplifying assumptions are made while considering the development of algorithms for PRAM. They are:
These kinds of algorithms are useful for understanding the exploitation of concurrency, dividing the original problem into similar sub-problems and solving them in parallel. The introduction of the formal 'P-RAM' model in Wyllie's 1979 thesis had the aim of quantifying analysis of parallel algorithms in a way analogous to the Turing Machine. The analysis focused on a MIMD model of programming using a CREW model but showed that many variants, including implementing a CRCW model and implementing on an SIMD machine, were possible with only constant overhead.
PRAM algorithms cannot be parallelized on the combination of a CPU and dynamic random-access memory (DRAM), because DRAM does not allow concurrent access to a single bank; but they can be implemented in hardware or as reads/writes to the internal static random-access memory blocks of a field-programmable gate array (FPGA), using a CRCW algorithm.
However, the test for practical relevance of PRAM algorithms depends on whether their cost model provides an effective abstraction of some computer; the structure of that computer can be quite different from the abstract model. The knowledge of the layers of software and hardware that need to be inserted is beyond the scope of this article. But articles such as Vishkin demonstrate how a PRAM-like abstraction can be supported by the explicit multi-threading paradigm, and articles such as Caragea & Vishkin demonstrate that a PRAM algorithm for the maximum flow problem can provide strong speedups relative to the fastest serial program for the same problem. The article by Ghanim, Vishkin & Barua demonstrated that PRAM algorithms as-is can achieve competitive performance even without any additional effort to cast them as multi-threaded programs on XMT.
This is an example of SystemVerilog code which finds the maximum value in the array in only 2 clock cycles. It compares all the combinations of the elements in the array at the first clock, and merges the result at the second clock. It uses CRCW memory; m <= 1 and maxNo <= data are written concurrently. The concurrency causes no conflicts because the algorithm guarantees that the same value is written to the same memory. This code can be run on FPGA hardware.
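The SystemVerilog source is not reproduced here, but the same two-step idea can be simulated sequentially in Python as a sketch: in the "first clock" every pair of elements is compared and each element that loses a comparison marks itself as not the maximum; in the "second clock" the unmarked element is read out. The concurrent writes are harmless because every write to a given flag stores the same value.

```python
# Sequential simulation of the 2-step CRCW maximum algorithm.
def crcw_max(data):
    n = len(data)
    is_not_max = [False] * n        # shared memory, initially cleared

    # "First clock": all pairs are compared (in parallel on a PRAM);
    # the loser of each comparison writes True to its own flag.  Many
    # comparisons may write the same flag concurrently, but they all
    # write the same value, so the concurrent writes do not conflict.
    for i in range(n):
        for j in range(n):
            if data[i] < data[j]:
                is_not_max[i] = True

    # "Second clock": the element whose flag is still False is the maximum.
    for i in range(n):
        if not is_not_max[i]:
            return data[i]

print(crcw_max([7, 3, 9, 1]))   # 9
```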
While the variety of existing process calculi is very large, there are several features that all process calculi have in common:
To define a process calculus, one starts with a set of names whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow:
Parallel composition of two processes P and Q, usually written P | Q, is the key primitive distinguishing the process calculi from sequential models of computation. Parallel composition allows computation in P and Q to proceed simultaneously and independently. But it also allows interaction, that is, synchronisation and flow of information from P to Q on a channel shared by both. Crucially, an agent or process can be connected to more than one channel at a time.
Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message waits until another agent has received the message. Asynchronous channels do not require any such synchronization. In some process calculi channels themselves can be sent in messages through channels, allowing the topology of process interconnections to change. Some process calculi also allow channels to be created during the execution of a computation.
Interaction can be a directed flow of information. That is, input and output can be distinguished as dual interaction primitives. Process calculi that make such distinctions typically define an input operator (e.g. x(v)) and an output operator (e.g. x⟨y⟩), both of which name an interaction point that is used to synchronise with a dual interaction primitive.
Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will specify the data to be sent. In x⟨y⟩, this data is y. Similarly, if an input expects to receive data, one or more bound variables will act as place-holders to be substituted by data, when it arrives. In x(v), v plays that role. The choice of the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process calculi.
Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as: first receive some data on x and then send that data on y. Sequential composition can be used for such purposes. It is well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated with input or output, or both. For example, the process x(v)·P will wait for an input on x. Only when this input has occurred will the process P be activated, with the received data through x substituted for the identifier v.
The key operational reduction rule, containing the computational essence of process calculi, can be given solely in terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the calculi, but the essence remains roughly the same. The reduction rule is:
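The rule is not reproduced above; in π-calculus-style notation it is commonly written along the following lines (the exact syntax varies between calculi):

x⟨y⟩·P | x(v)·Q → P | Q[y/v]

where Q[y/v] denotes Q with the datum y substituted for the bound variable v.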
The interpretation of this reduction rule is:
The class of processes that P is allowed to range over as the continuation of the output operation substantially influences the properties of the calculus.
Processes do not limit the number of connections that can be made at a given interaction point. But interaction points allow interference. For the synthesis of compact, minimal and compositional systems, the ability to restrict interference is crucial. Hiding operations allow control of the connections made between interaction points when composing agents in parallel. Hiding can be denoted in a variety of ways. For example, in the π-calculus the hiding of a name x in P can be expressed as (νx)P, while in CSP it might be written as P \ {x}.
The operations presented so far describe only finite interaction and are consequently insufficient for full computability, which includes non-terminating behaviour. Recursion and replication are operations that allow finite descriptions of infinite behaviour. Recursion is well known from the sequential world. Replication !P can be understood as abbreviating the parallel composition of a countably infinite number of P processes:
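The intended expansion is commonly written as:

!P = P | P | P | ⋯

or equivalently, !P behaves like P | !P.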
Process calculi generally also include a null process which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on top of which more interesting processes can be generated.
Process algebra has been studied for discrete time and continuous time.
In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a computable function, with μ-recursive functions, Turing machines and the lambda calculus possibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports the Church-Turing thesis. Another shared feature is more rarely commented on: they all are most readily understood as models of sequential computation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in 1973 emerged from this line of inquiry.
Research on process calculi began in earnest with Robin Milner's seminal work on the Calculus of Communicating Systems during the period from 1973 to 1980. C.A.R. Hoare's Communicating Sequential Processes first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s. There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982 Jan Bergstra and Jan Willem Klop began work on what came to be known as the Algebra of Communicating Processes, and introduced the term process algebra to describe their work. CCS, CSP, and ACP constitute the three major branches of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi.
Various process calculi have been studied and not all of them fit the paradigm sketched here. The most prominent example may be the ambient calculus. This is to be expected as process calculi are an active field of study. Currently research on process calculi focuses on the following problems.
The ideas behind process algebra have given rise to several tools including:
CADP
Concurrency Workbench
mCRL2 toolset
The history monoid is the free object that is generically able to represent the histories of individual communicating processes. A process calculus is then a formal language imposed on a history monoid in a consistent fashion. That is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed state transitions. Thus, a process calculus is to a history monoid what a formal language is to a free monoid.
The use of channels for communication is one of the features distinguishing the process calculi from other models of concurrency, such as Petri nets and the actor model. One of the fundamental motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making it easier to reason about processes algebraically.
Like industry standards such as UML activity diagrams, Business Process Model and Notation, and event-driven process chains, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.
The German computer scientist Carl Adam Petri, after whom such structures are named, analyzed Petri nets extensively in his 1962 dissertation Kommunikation mit Automaten.
A Petri net consists of places, transitions, and arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the input places of the transition; the places to which arcs run from a transition are called the output places of the transition.
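A minimal sketch of these definitions as a data structure, together with the standard firing rule (a transition is enabled when every input place holds at least one token; firing consumes one token from each input place and produces one in each output place). The example net and its place names are made up for illustration.

```python
# Minimal Petri net sketch: places hold tokens, transitions move them.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)        # place name -> token count
        self.transitions = transitions      # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:                    # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:                   # produce one token per output place
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet(
    marking={"wait": 1, "free": 1},
    transitions={"start": (["wait", "free"], ["busy"]),
                 "stop": (["busy"], ["free"])},
)
net.fire("start")   # consumes tokens from 'wait' and 'free', marks 'busy'
```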