Graphically, places in a Petri net may contain a discrete number of marks called tokens. Any distribution of tokens over the places will represent a configuration of the net called a marking. In an abstract sense relating to a Petri net diagram, a transition of a Petri net may fire if it is enabled, i.e. there are sufficient tokens in all of its input places; when the transition fires, it consumes the required input tokens, and creates tokens in its output places. A firing is atomic, i.e. a single non-interruptible step.
Unless an execution policy is defined, the execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, they will fire in any order.
Since firing is nondeterministic, and multiple tokens may be present anywhere in the net, Petri nets are well suited for modeling the concurrent behavior of distributed systems.
Petri nets are state-transition systems that extend a class of nets called elementary nets.
Definition 1. A net is a tuple N = (P, T, F) where P and T are disjoint finite sets of places and transitions, respectively, and F ⊆ (P × T) ∪ (T × P) is a set of directed arcs called the flow relation.
Definition 2. Given a net N = (P, T, F), a configuration is a set C such that C ⊆ P.
Definition 3. An elementary net is a net of the form EN = (N, C), where N = (P, T, F) is a net and C is a configuration of N.
Definition 4. A Petri net is a net of the form PN = (N, M, W), which extends the elementary net so that N = (P, T, F) is a net, M : P → Z is a place multiset assigning each place a number of tokens, and W : F → Z is an arc multiset assigning each arc a multiplicity, where Z is a countable set.
If a Petri net is equivalent to an elementary net, then Z can be the countable set {0,1} and those elements in P that map to 1 under M form a configuration. Similarly, if a Petri net is not an elementary net, then the multiset M can be interpreted as representing a non-singleton set of configurations. In this respect, M extends the concept of configuration for elementary nets to Petri nets.
In the diagram of a Petri net, places are conventionally depicted with circles, transitions with long narrow rectangles, and arcs as one-way arrows that show connections of places to transitions or transitions to places. If the diagram were of an elementary net, then those places in a configuration would be conventionally depicted as circles, where each circle encompasses a single dot called a token. In the given diagram of a Petri net, the place circles may encompass more than one token to show the number of times a place appears in a configuration. The configuration of tokens distributed over an entire Petri net diagram is called a marking.
In the top figure, the place p1 is an input place of transition t, whereas the place p2 is an output place of the same transition. Let PN0 be a Petri net with a marking configured M0, and PN1 be a Petri net with a marking configured M1. The configuration of PN0 enables transition t because all of its input places hold a number of tokens equal to or greater than the multiplicities on their respective arcs to t. A transition fires only when it is enabled. In this example, the firing of transition t generates a map that has the marking configured M1 in the image of M0 and results in Petri net PN1, seen in the bottom figure. In the diagram, the firing rule for a transition can be characterised by subtracting a number of tokens from its input places equal to the multiplicity of the respective input arcs and accumulating a new number of tokens at the output places equal to the multiplicity of the respective output arcs.
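The firing rule described above can be sketched in a few lines of code. This is a minimal illustrative sketch, not a standard library API; the place and transition names mirror the figure's example, and the dictionary encoding of arc weights is an assumption made here for clarity.

```python
# Minimal sketch of the Petri net firing rule (illustrative, not a library).
from collections import Counter

class PetriNet:
    def __init__(self, arcs_in, arcs_out, marking):
        # arcs_in[t]:  dict place -> weight of the input arcs into transition t
        # arcs_out[t]: dict place -> weight of the output arcs out of t
        self.arcs_in = arcs_in
        self.arcs_out = arcs_out
        self.marking = Counter(marking)

    def enabled(self, t):
        # t is enabled iff every input place holds at least as many tokens
        # as the multiplicity of its arc to t
        return all(self.marking[p] >= w for p, w in self.arcs_in[t].items())

    def fire(self, t):
        # atomic firing: consume input tokens, then produce output tokens
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        for p, w in self.arcs_in[t].items():
            self.marking[p] -= w
        for p, w in self.arcs_out[t].items():
            self.marking[p] += w

# The example from the figure: p1 --(weight 1)--> t --(weight 1)--> p2
net = PetriNet(arcs_in={"t": {"p1": 1}},
               arcs_out={"t": {"p2": 1}},
               marking={"p1": 1})
net.fire("t")
print(dict(net.marking))  # {'p1': 0, 'p2': 1}
```

After firing, t is no longer enabled, since p1 holds no tokens.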
Remark 1. The precise meaning of "equal to or greater" will depend on the precise algebraic properties of addition being applied on Z in the firing rule, where subtle variations on the algebraic properties can lead to other classes of Petri nets; for example, algebraic Petri nets.
The following formal definition is loosely based on one standard presentation; many alternative definitions exist.
A Petri net graph is a 3-tuple (S, T, W), where S and T are disjoint finite sets of places and transitions, and W : (S × T) ∪ (T × S) → ℕ is a multiset of arcs, assigning each arc a non-negative integer multiplicity.
The flow relation is the set of arcs: F = {(x, y) ∣ W(x, y) > 0}. In many textbooks, arcs can only have multiplicity 1. These texts often define Petri nets using F instead of W. When using this convention, a Petri net graph is a bipartite directed graph (S ∪ T, F) with node partitions S and T.
The preset of a transition t is the set of its input places: •t = {s ∈ S ∣ W(s, t) > 0}; its postset is the set of its output places: t• = {s ∈ S ∣ W(t, s) > 0}. Definitions of pre- and postsets of places are analogous.
A marking of a Petri net is a multiset of its places, i.e., a mapping M : S → ℕ. We say the marking assigns to each place a number of tokens.
A Petri net is a 4-tuple (S, T, W, M0), where (S, T, W) is a Petri net graph and M0 : S → ℕ is the initial marking.
In words
We are generally interested in what may happen when transitions continually fire in arbitrary order.
We say that a marking M′ is reachable from a marking M in one step if M →G M′; we say that it is reachable from M if M →G* M′, where →G* is the reflexive transitive closure of →G; that is, if it is reachable in 0 or more steps.
For a Petri net N = (S, T, W, M0), we are interested in the firings that can be performed starting with the initial marking M0. Its set of reachable markings is the set R ≝ { M′ ∣ M0 →* M′ }.
The reachability graph of N is the transition relation →G restricted to its reachable markings R. It is the state space of the net.
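The set R can be computed by a breadth-first search over markings, generating successors with the firing rule. The following sketch is illustrative: it assumes a bounded net (so that R is finite and the search terminates), and the example net is a hypothetical two-place net.

```python
# Sketch: enumerate the reachable markings R by BFS over the reachability
# graph. Terminates only for bounded nets (finite R).
from collections import deque

def reachable_markings(places, arcs_in, arcs_out, m0):
    """Return all markings reachable from m0.
    Markings are tuples of token counts, indexed like `places`."""
    idx = {p: i for i, p in enumerate(places)}
    seen = {m0}
    queue = deque([m0])
    while queue:
        m = queue.popleft()
        for t in arcs_in:
            # t is enabled iff every input place holds enough tokens
            if all(m[idx[p]] >= w for p, w in arcs_in[t].items()):
                m2 = list(m)
                for p, w in arcs_in[t].items():
                    m2[idx[p]] -= w
                for p, w in arcs_out[t].items():
                    m2[idx[p]] += w
                m2 = tuple(m2)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen

# Two places and one transition moving a token from p1 to p2,
# started with two tokens on p1
R = reachable_markings(["p1", "p2"],
                       arcs_in={"t": {"p1": 1}},
                       arcs_out={"t": {"p2": 1}},
                       m0=(2, 0))
print(sorted(R))  # [(0, 2), (1, 1), (2, 0)]
```

For an unbounded net the same loop runs forever, which is exactly why deciding reachability requires more than naive graph walking.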
A firing sequence for a Petri net with graph G and initial marking M0 is a sequence of transitions σ = ⟨t1 ⋯ tn⟩ such that M0 →G,t1 M1 ∧ ⋯ ∧ Mn−1 →G,tn Mn. The set of firing sequences is denoted as L.
A common variation is to disallow arc multiplicities and replace the bag of arcs W with a simple set, called the flow relation, F ⊆ (S × T) ∪ (T × S). This does not limit expressive power as both can represent each other.
Another common variation, e.g. in Desel and Juhás, is to allow capacities to be defined on places. This is discussed under extensions below.
The markings of a Petri net (S, T, W, M0) can be regarded as vectors of non-negative integers of length |S|.
Its transition relation can then be described as a pair of |S|-by-|T| matrices: W−, recording the weights of the arcs from places to transitions, and W+, recording the weights of the arcs from transitions to places. Then their difference, the incidence matrix WT = W+ − W−, describes the reachable markings: for a sequence of transitions w, write o(w) for the vector that maps each transition to its number of occurrences in w; then M′ is reachable from M by firing w only if M′ = M + WT · o(w).
It must be required that w is a firing sequence; allowing arbitrary sequences of transitions will generally produce a larger set.
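The vector view of markings gives a simple computational check: summing, over the transitions fired, the difference between output-arc and input-arc weights yields the new marking. The following sketch illustrates this state equation on a hypothetical one-transition net; the names and weights are assumptions for illustration.

```python
# Sketch of the state equation M' = M + (W+ − W−) · o(w),
# computed componentwise over an illustrative net.

places = ["p1", "p2"]
W_minus = {("p1", "t"): 1}   # input-arc weights  W−(s, t)
W_plus = {("p2", "t"): 1}    # output-arc weights W+(s, t)

def state_equation(m, occurrences):
    """Apply M' = M + (W+ − W−) · o(w) to marking m.
    `occurrences` maps each transition to its count in the sequence w."""
    m_new = dict(m)
    for t, count in occurrences.items():
        for s in places:
            c = W_plus.get((s, t), 0) - W_minus.get((s, t), 0)
            m_new[s] += c * count
    return m_new

m0 = {"p1": 2, "p2": 0}
# firing w = <t, t> gives the occurrence vector o(w) = {t: 2}
print(state_equation(m0, {"t": 2}))  # {'p1': 0, 'p2': 2}
```

Note that the equation is only a necessary condition: a marking satisfying it for some occurrence vector need not be reachable, since the counted transitions might not form a valid firing sequence.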
Meseguer and Montanari considered a kind of symmetric monoidal category known as Petri categories.
One thing that makes Petri nets interesting is that they provide a balance between modeling power and analyzability: many things one would like to know about concurrent systems can be automatically determined for Petri nets, although some of those things are very expensive to determine in the general case. Several subclasses of Petri nets have been studied that can still model interesting classes of concurrent systems, while these determinations become easier.
An overview of such decision problems, with decidability and complexity results for Petri nets and some subclasses, can be found in Esparza and Nielsen.
It is a matter of walking the reachability graph defined above, until either the requested marking is reached or it can no longer be found. This is harder than it may seem at first: the reachability graph is generally infinite, and it is not easy to determine when it is safe to stop.
In fact, this problem was shown to be EXPSPACE-hard years before it was shown to be decidable at all. Papers continue to be published on how to do it efficiently. In 2018, Czerwiński et al. improved the lower bound and showed that the problem is not ELEMENTARY. In 2021, this problem was shown to be non-primitive-recursive, independently by Jerome Leroux and by Wojciech Czerwiński and Łukasz Orlikowski. These results thus close the long-standing complexity gap.
While reachability seems to be a good tool to find erroneous states, for practical problems the constructed graph usually has far too many states to calculate. To alleviate this problem, linear temporal logic is usually used in conjunction with the tableau method to prove that such states cannot be reached. Linear temporal logic uses the semi-decision technique to find if indeed a state can be reached, by finding a set of necessary conditions for the state to be reached then proving that those conditions cannot be satisfied.
A place in a Petri net is called k-bounded if it does not contain more than k tokens in any reachable marking, including the initial marking; it is said to be safe if it is 1-bounded; it is bounded if it is k-bounded for some k.
A Petri net is called k-bounded, safe, or bounded when all of its places are. A Petri net is called structurally bounded if it is bounded for every possible initial marking.
A Petri net is bounded if and only if its reachability graph is finite.
Boundedness is decidable by looking at coverability, by constructing the Karp–Miller tree.
It can be useful to explicitly impose a bound on places in a given net. This can be used to model limited system resources.
For example, if in the net N, both places are assigned capacity 2, we obtain a Petri net with place capacities, say N2; its reachability graph is displayed on the right.
Alternatively, places can be made bounded by extending the net. To be exact, a place can be made k-bounded by adding a "counter-place" with flow opposite to that of the place, and adding tokens to make the total in both places k.
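The counter-place construction can be sketched as follows. This is an illustrative example, assuming a hypothetical place p with bound k = 2; the complement place p_bar has arcs opposite to those of p, so the total M(p) + M(p_bar) = k is invariant under firing.

```python
# Sketch of making a place k-bounded via a complement ("counter") place.
from collections import Counter

k = 2
marking = Counter({"p": 0, "p_bar": k})   # invariant: p + p_bar == k

def fire_produce():
    """Transition adding a token to p; p_bar acts as its input place."""
    if marking["p_bar"] >= 1:             # blocked once p already holds k tokens
        marking["p_bar"] -= 1
        marking["p"] += 1
        return True
    return False

def fire_consume():
    """Transition removing a token from p; p_bar is its output place."""
    if marking["p"] >= 1:
        marking["p"] -= 1
        marking["p_bar"] += 1
        return True
    return False

print(fire_produce(), fire_produce())  # True True
print(fire_produce())                  # False: p is full, the bound holds
assert marking["p"] + marking["p_bar"] == k
```

The producing transition is disabled exactly when p holds k tokens, which is how the extended net enforces the capacity without any explicit capacity annotation.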
As well as for discrete events, there are Petri nets for continuous and hybrid discrete-continuous processes that are useful in discrete, continuous and hybrid control theory, and related to discrete, continuous and hybrid automata.
There are many extensions to Petri nets. Some of them are completely backwards-compatible with the original Petri net; some add properties that cannot be modelled in the original Petri net formalism. Although backwards-compatible models do not extend the computational power of Petri nets, they may have more succinct representations and may be more convenient for modeling. Extensions that cannot be transformed into Petri nets are sometimes very powerful, but usually lack the full range of mathematical tools available to analyse ordinary Petri nets.
The term high-level Petri net is used for many Petri net formalisms that extend the basic P/T net formalism; this includes coloured Petri nets, hierarchical Petri nets such as Nets within Nets, and all other extensions sketched in this section. The term is also used specifically for the type of coloured nets supported by CPN Tools.
A short list of possible extensions follows:
There are many more extensions to Petri nets. However, it is important to keep in mind that as the complexity of the net increases in terms of extended properties, it becomes harder to use standard tools to evaluate certain properties of the net. For this reason, it is a good idea to use the simplest net type possible for a given modelling task.
Instead of extending the Petri net formalism, we can also look at restricting it, and look at particular types of Petri nets, obtained by restricting the syntax in a particular way. Ordinary Petri nets are the nets where all arc weights are 1. Restricting further, the following types of ordinary Petri nets are commonly used and studied:
Workflow nets (WF-nets) are a subclass of Petri nets intended to model the workflow of process activities. The WF-net transitions are assigned to tasks or activities, and places are assigned to the pre/post conditions. WF-nets have additional structural and operational requirements, mainly the addition of a single input place with no previous transitions and a single output place with no following transitions. Accordingly, start and termination markings can be defined that represent the process status.
A WF-net has the soundness property if a process with a start marking of k tokens in its source place can reach the termination marking with k tokens in its sink place and, additionally, all the transitions in the process could fire. A general sound (G-sound) WF-net is defined as being k-sound for every k > 0.
A directed path in the Petri net is defined as the sequence of nodes linked by the directed arcs. An elementary path includes every node in the sequence only once.
A well-handled Petri net is a net in which there are no fully distinct elementary paths between a place and a transition, i.e., if there are two paths between the pair of nodes then these paths share a node. An acyclic well-handled WF-net is sound.
An extended WF-net is a Petri net composed of a WF-net with an additional transition t. The sink place is connected as the input place of transition t and the source place as its output place. Firing of the transition causes iteration of the process.
A WRI WF-net is an extended acyclic well-handled WF-net. A WRI WF-net can be built as a composition of nets, i.e., replacing a transition within a WRI WF-net with a subnet that is itself a WRI WF-net yields a WRI WF-net. WRI WF-nets are G-sound; therefore, by using only WRI WF-net building blocks, one can obtain WF-nets that are G-sound by construction.
The design structure matrix (DSM) can model process relations and be utilized for process planning. DSM-nets are realizations of DSM-based plans as workflow processes in Petri nets, and are equivalent to WRI WF-nets. The DSM-net construction process ensures the soundness property of the resulting net.
Other ways of modelling concurrent computation have been proposed, including vector addition systems, communicating finite-state machines, Kahn process networks, process algebra, the actor model, and trace theory. Different models provide tradeoffs of concepts such as compositionality, modularity, and locality.
An approach to relating some of these models of concurrency is proposed in the chapter by Winskel and Nielsen.
The components of a distributed system communicate and coordinate their actions by passing messages to one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications.
A computer program that runs within a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues.
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing.
The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing.
While there is no single definition of a distributed system, the following defining properties are commonly used:
A distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.
Other typical properties of distributed systems include the following:
Distributed systems are groups of networked computers which share a common goal for their work. The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:
The figures illustrate the difference between distributed and parallel systems. The first figure is a schematic view of a typical distributed system; the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. The second figure shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. The third figure shows a parallel system in which each processor has direct access to a shared memory.
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems. Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.
The use of concurrent processes which communicate through message-passing has its roots in operating system architectures studied in the 1960s. The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s.
ARPANET, one of the predecessors of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET, and it is probably the earliest example of a large-scale distributed application. In addition to ARPANET , other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems.
The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, the Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its counterpart, the International Symposium on Distributed Computing (DISC), was first held in Ottawa in 1985 as the International Workshop on Distributed Algorithms on Graphs.
Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.
Whether these CPUs share resources or not determines a first distinction between three types of architecture:
Shared memory
Shared disk
Shared nothing.
Distributed programming typically falls into one of several basic architectures: client–server, three-tier, n-tier, or peer-to-peer; or categories: loose coupling, or tight coupling.
Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a main/sub relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database. Database-centric architecture in particular provides relational processing analytics in a schematic architecture allowing for live environment relay. This enables distributed computing functions both within and beyond the parameters of a networked database.
Reasons for using distributed systems and distributed computing may include:
Examples of distributed systems and applications of distributed computing include the following:
Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions.
Theoretical computer science seeks to understand which computational problems can be solved by using a computer and how efficiently. Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random-access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm.
The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer?
The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer.
Three viewpoints are commonly used:
In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example.
Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches:
While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm.
Moreover, a parallel algorithm can be implemented either in a parallel system or in a distributed system. The traditional boundary between parallel and distributed algorithms does not lie in the same place as the boundary between parallel and distributed systems.
In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel. If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC. The class NC can be defined equally well by using the PRAM formalism or Boolean circuits: PRAM machines can simulate Boolean circuits efficiently and vice versa.
In the analysis of distributed algorithms, more attention is usually paid to communication operations than computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in a lockstep fashion. This model is commonly known as the LOCAL model. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbours, (2) perform arbitrary local computation, and (3) send new messages to their neighbours. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task.
This complexity measure is closely related to the diameter of the network. Let D be the diameter of the network. On the one hand, any computable problem can be solved trivially in a synchronous distributed system in approximately 2D communication rounds: simply gather all information in one location (D rounds), solve the problem, and inform each node about the solution (D rounds).
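The "gather everything" half of this argument can be illustrated by simulating synchronous flooding, which spreads one node's information to every node within D rounds. The simulation below is a sketch; the path graph and node labels are illustrative assumptions.

```python
# Sketch: synchronous flooding reaches all nodes within D rounds
# (D = network diameter). One round = every informed node tells its neighbours.

def flooding_rounds(adj, source):
    """Simulate synchronous rounds until every node knows `source`'s info;
    return the number of rounds taken."""
    knows = {source}
    rounds = 0
    while len(knows) < len(adj):
        # one communication round: each informed node informs its neighbours
        knows = knows | {v for u in knows for v in adj[u]}
        rounds += 1
    return rounds

# A path graph 0-1-2-3 has diameter D = 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(flooding_rounds(adj, 0))  # 3: the distance from node 0 to node 3
```

Running the same flood from the chosen central location back outwards accounts for the second D rounds in the 2D bound.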
On the other hand, if the running time of the algorithm is much smaller than D communication rounds, then the nodes in the network must produce their output without having the possibility of obtaining information about distant parts of the network. In other words, the nodes must make globally consistent decisions based on information that is available in their local D-neighbourhood. Many distributed algorithms are known with running time much smaller than D rounds, and understanding which problems can be solved by such algorithms is one of the central research questions of the field. Typically an algorithm which solves a problem in polylogarithmic time in the network size is considered efficient in this model.
Another commonly used measure is the total number of bits transmitted in the network. The features of this concept are typically captured with the CONGEST model, which is defined similarly to the LOCAL model, but where single messages can only contain B bits.
Traditional computational problems take the perspective that the user asks a question, a computer processes the question, then produces an answer and stops. However, there are also problems where the system is required not to stop, including the dining philosophers problem and other similar mutual exclusion problems. In these problems, the distributed system is supposed to continuously coordinate the use of shared resources so that no conflicts or deadlocks occur.
There are also fundamental challenges that are unique to distributed computing, for example those related to fault-tolerance. Examples of related problems include consensus problems, Byzantine fault tolerance, and self-stabilisation.
Much research is also focused on understanding the asynchronous nature of distributed systems:
Coordinator election is the process of designating a single process as the organizer of some task distributed among several computers. Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator.
The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the coordinator.
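This identity-comparison scheme can be sketched as a synchronous simulation in which every node repeatedly adopts the largest identity it has seen; after a number of rounds equal to the network diameter, all nodes agree on the maximum-identity node. The network, identities, and known diameter below are illustrative assumptions, and real election algorithms (e.g. on rings or unknown topologies) are considerably more subtle.

```python
# Sketch: symmetry breaking by comparable identities in a synchronous network.
# Each round, every node adopts the largest candidate id among itself and
# its neighbours; after `diameter` rounds all nodes name the same coordinator.

def elect_coordinator(adj, diameter):
    """adj maps node id -> list of neighbour ids; returns each node's choice."""
    leader = {v: v for v in adj}          # initially each node nominates itself
    for _ in range(diameter):
        # one round: exchange current candidates with neighbours
        leader = {v: max([leader[v]] + [leader[u] for u in adj[v]])
                  for v in adj}
    return leader

# A path network 1-2-3-7 with diameter 3; node 7 has the highest identity
adj = {1: [2], 2: [1, 3], 3: [2, 7], 7: [3]}
print(elect_coordinator(adj, diameter=3))  # every node elects 7
```

The simulation also shows why comparable identities matter: with identical identities, every node would compute the same value as every other and the symmetry could never be broken by this rule.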
The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost.
Coordinator election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing.