Excerpt

Heuristics to Optimize the Variable Ordering in Binary Decision Diagrams (BDDs)

Marvin Caspar

Chair of Software Engineering: Dependability, TU Kaiserslautern

**Abstract.** The arrangement of variables in a Binary Decision Diagram (BDD) determines the size of the BDD after applying reduction rules and thus plays a decisive role in finding a compact representation of the Boolean function. Since even a few variables already admit many possible arrangements, and techniques for finding the optimal solution require too much time, heuristics are needed. The main focus is on known approaches and especially on the dynamic method of the sifting algorithm. In this paper, two fault trees are examined, for which five heuristics are applied to evaluate and compare their effectiveness.

**Keywords:** Binary Decision Diagram (BDD), Variable Ordering, Heuristic, Fault Tree, Sifting

## 1 Introduction

Technical systems become more and more complicated over time, so safety analysis is an indispensable process nowadays. Safety of a system means the absence of unacceptable risks. One of the main tasks in ensuring safety is identifying hazards and accidents caused by uncontrollable influences and studying failures of specific components. Fault tree analysis (FTA) is one of the principal analysis techniques and is undisputedly the essential method for risk assessment in the avionics industry [12]. Fault trees were initially developed in 1962 by Bell Labs, an industrial research and scientific development company in New Jersey [7]. A fault tree gives an overview of events, combined with Boolean operators, that cause an undesired top event, which can be represented as a Boolean function. Depending on how many variables are involved, complex Boolean functions can arise. There are various means to analyze the top event, for example Minimal Cut Sets and Binary Decision Diagrams (BDDs); in this paper, only the second method is used.

Ordered Binary Decision Diagrams (OBDDs) are a compact way of representing Boolean functions and are often used for fault tree analysis. Further applications of the BDD structure can mainly be found in verification, test pattern generation, synthesis and analysis of combinational and sequential circuits, or timing analysis [11]. Given a specific variable order, the OBDD can be reduced to a distinct ROBDD. The size of the ROBDD depends on the variable order and can vary enormously. To keep the size of a BDD small, a good variable order must be found. More specifically, it is essential to identify structures and, if necessary, to determine and group variables with a high degree of influence. Since the number of possible variable arrangements can become very large for complex trees, and an algorithm for finding the optimal solution takes too much time, other means for finding a good solution are needed.
In this paper, some heuristics as proposed by various authors are presented for this kind of problem. After that, two fault trees will be analyzed with these heuristics to check how well these methods work and how they proceed.

## 2 Fundamentals

**Fault Tree Analysis (FTA)** is often used to calculate the reliability of complex systems and to evaluate the possibility of an accident [15]. The method starts with a single unwanted event at the top of the fault tree, the so-called top event. A fault tree is created in a top-down process, down to the individual failure states of the components. The leaves are referred to as variables (represented by a circle) and are mainly linked logically via AND gates (straight baseline) or OR gates (curved baseline). Figure 1 illustrates a typical fault tree.

[Figure not included in this excerpt.]

Figure 1: A fault tree.

The top event can be expressed as the Boolean function f = (x1 ∧ x2 ∧ x3) ∨ x4. In the following, we introduce some definitions from Ebendt [8].

Definition:

Let B := {0,1} and n ∈ ℕ. All variables in Xn = {x1, x2, ..., xn} take values in B. A mapping

f : B^n → B, (x1, x2, ..., xn) ↦ f(x1, x2, ..., xn)

is called a **Boolean function.**

Definition:

An **Ordered Binary Decision Diagram (OBDD)** is a pair (π, *G*), where π declares the variable ordering and **G** is a finite directed acyclic graph **(DAG)** G = (V, E) with exactly one root node such that:

- Each node in V is either a non-terminal node or one of the terminal nodes 0 and 1.

- Non-terminal nodes v are labeled with a variable in Xn, denoted var(v), and have two successors in V, denoted left(v) and right(v).

- The variables on each path from the root node to a terminal node are encountered at most once and always in the same order. Furthermore, the order defines levels, beginning with the root.

Definition:

The total number of non-terminal nodes in an OBDD is called its **size**; for n variables, a complete (unreduced) OBDD has 2^n − 1 non-terminal nodes.

A high number of non-terminal nodes increases the required memory enormously. Therefore, we need a more compact form of representation. The idea is to reduce the OBDD: possibly not all variables need to be considered to tell whether the function becomes true or not. Therefore, we introduce two reduction operations.

**Deletion Rule:** Let v ∈ V be a node with left(v) = right(v) = v′. Remove v and redirect all edges leading to v to v′.

**Merging Rule:** Let v, v′ ∈ V be nodes with var(v) = var(v′), left(v) = left(v′) and right(v) = right(v′). Then remove v and redirect all edges leading to v to v′.

Definition:

An OBDD is called a **Reduced Ordered Binary Decision Diagram (ROBDD)** when there is no node left to which the deletion or merging rule can be applied. There are several ways to transform an OBDD into an ROBDD. Jung et al. [2] use the coherent BDD algorithm and the conventional BDD algorithm, which are based on the Shannon decomposition. In the following, we assume that all BDDs are ordered and reduced, and use the term BDD synonymously with ROBDD.
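As an illustration, both reduction rules can be applied in a single bottom-up traversal. The following is a minimal Python sketch, assuming nodes are represented as `(var, low, high)` tuples with `0` and `1` as terminals; the name `reduce_bdd` and this representation are illustrative, not part of the paper's tooling.

```python
def reduce_bdd(root):
    """Bottom-up reduction: returns a canonical node for `root`.

    A node is a tuple (var, low, high); terminals are 0 and 1.
    """
    unique = {}  # merging rule: keep one node per (var, low, high) triple

    def walk(node):
        if node in (0, 1):               # terminal node
            return node
        var, low, high = node
        low, high = walk(low), walk(high)
        if low == high:                  # deletion rule: redundant test
            return low
        key = (var, low, high)
        return unique.setdefault(key, key)  # merging rule

    return walk(root)

# A redundant x3 test below x2 gets deleted:
bdd = ("x1", 0, ("x2", 0, ("x3", 1, 1)))
print(reduce_bdd(bdd))  # ('x1', 0, ('x2', 0, 1))
```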

The calculations in Chapter 4 that derive a BDD from the Boolean functions are based on a logic tool developed by the Embedded Systems Group, Department of Computer Science, University of Kaiserslautern (https://es.cs.uni-kl.de/tools/teaching/PropLogic.html). The tool provides the appropriate ROBDD for the various variable arrangements.

Definition:

Let I ⊆ Xn be a set of Boolean variables. Then Π(I) is the set of all possible variable orderings:

Π(I) = {π : {1, 2, ..., |I|} → I | {π(1), π(2), ..., π(|I|)} = I}

Such a π ∈ Π(I) is called a **variable ordering.**

Furthermore, the natural ordering is given by π(i) = xi for all i.

For better clarity, we can write x1 < x2 < ... < xn for a variable ordering, where x1 is placed at the root of the graph.

It is evident that there are n! possibilities for ordering the n variables: given n slots, the first variable has n possible positions, the second only n − 1, and so on, until the last has just one position left.

A brute-force search is in O(n! · 2^n), because there are n! variable orderings, and each of them can yield a diagram with up to 2^n − 1 non-terminal nodes.

Friedman and Supowit [3] present an algorithm that finds an optimal ordering in O(n^2 · 3^n). The idea is to exclude many of the n! variable orders, based on the fact that knowledge about optimal orders of subsets of k variables can be reused when one more variable is added.

There are functions of 2n variables whose BDDs range from 2n + 2 up to 2^(n+1) nodes depending only on the variable ordering [10]. For small n the sizes are still close together, but for n = 20 one ordering yields a BDD with 42 nodes, while another yields one with around two million (2^21) nodes.

Grumberg et al. [13] pointed out that finding an optimal variable arrangement is an NP-complete problem. NP-complete problems are those in NP (the class of problems solvable by non-deterministic Turing machines in polynomial time) to which every other problem in NP can be reduced. These considerations show how essential the variable ordering becomes when working with BDDs. In the following section, we discuss some heuristics dealing with this problem.

## 3 Heuristics for variable ordering

Now five useful heuristics are introduced and will later be applied to two fault trees. The first four of them are static, while the fifth is dynamic. Dynamic means that the variable ordering is not fixed immediately, but changes over several iterations.

**Heuristic Hl: Depth-First Left-Most**

This is the most commonly used heuristic. Starting at the top, a path is fully deepened before branching paths are followed. Therefore, the node on the highest level will be visited first. When applied to the fault tree in Figure 1, the result is x1 < x2 < x3 < x4.

A natural variation of this approach is to traverse the tree with a breadth-first search instead, which is done in the Breadth-First Left-Most heuristic. However, studies in [4] have shown that depth-first search is usually the better method.
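As a sketch, H1 amounts to a plain depth-first traversal that records each basic event on first sight. The tuple representation of the fault tree (gates as `("AND"/"OR", child, ...)`, leaves as variable names) and the name `h1_order` are illustrative assumptions, not from the paper.

```python
def h1_order(node, order=None):
    """Collect variables in depth-first, left-most visiting order."""
    if order is None:
        order = []
    if isinstance(node, str):         # basic event (leaf)
        if node not in order:         # record each variable only once
            order.append(node)
    else:                             # gate: recurse into children, left to right
        for child in node[1:]:
            h1_order(child, order)
    return order

# The fault tree of Figure 1: f = (x1 AND x2 AND x3) OR x4
tree = ("OR", ("AND", "x1", "x2", "x3"), "x4")
print(h1_order(tree))  # ['x1', 'x2', 'x3', 'x4']
```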

**Heuristic H2: Weighted Depth-First Left-Most**

Mo et al. [14] use this approach by assigning each gate and node a weight. First, each basic event xj gets the weight 1. For each gate, the number of variables it links is calculated, and this number becomes the weight of that gate. Usually, gates are designated with the variable gi. Expressed mathematically, this yields the following:

W(xj) = 1,    W(gi) = Σ_{c ∈ C(gi)} W(c)

where W(gi) is the weight of gate gi and C(gi) stands for the direct child nodes xj or direct child gates gj of gate gi.

Starting at the root level, the algorithm follows the path that takes the gates with the lowest weights. The variables within the gates are recorded according to the principle of H1.

It is obvious that the root gate carries the highest weight, and this number indicates how many variable occurrences the Boolean function has. Logically, this number is always greater than or equal to the number of distinct variables involved.
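Under the same illustrative tuple representation as in the H1 sketch, H2 can be sketched as follows; `weight` implements W(xj) = 1 and the sum over direct children, and ties keep their left-most order as in H1. The names are assumptions for illustration.

```python
def weight(node):
    if isinstance(node, str):                  # basic event: weight 1
        return 1
    return sum(weight(c) for c in node[1:])    # gate: sum of children's weights

def h2_order(node, order=None):
    if order is None:
        order = []
    if isinstance(node, str):
        if node not in order:
            order.append(node)
        return order
    # descend into the children with the lowest weight first; Python's
    # sort is stable, so ties keep their left-most (H1) order
    for child in sorted(node[1:], key=weight):
        h2_order(child, order)
    return order

tree = ("OR", ("AND", "x1", "x2", "x3"), "x4")
print(weight(tree))    # 4: the root weight equals the number of leaf occurrences
print(h2_order(tree))  # ['x4', 'x1', 'x2', 'x3']
```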

**Heuristic H3: Repeated-Event-Priority Depth-First Left-Most**

Mo et al. [14] introduced this approach. First, all events are assigned their number of occurrences. Then a standard Depth-First Left-Most search is used to arrange the events according to their number of repetitions. This is done in descending order, hence those with the highest value come first. In the special case that all variables occur exactly once, H3 reduces to H1. The idea behind this concept is to place the variables with a high impact first.
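A minimal sketch of H3, again under the illustrative tuple representation used above: count occurrences during a depth-first left-most walk, then sort by count in descending order, with ties broken by the H1 visiting position (a stable sort gives this for free).

```python
def h3_order(tree):
    counts, dfl = {}, []           # occurrence counts, H1 visiting order

    def walk(node):
        if isinstance(node, str):  # basic event
            counts[node] = counts.get(node, 0) + 1
            if node not in dfl:
                dfl.append(node)
        else:                      # gate
            for child in node[1:]:
                walk(child)

    walk(tree)
    # stable sort: descending occurrence count, ties keep H1 position
    return sorted(dfl, key=lambda v: -counts[v])

# x2 occurs twice, so it is placed first:
tree = ("OR", ("AND", "x1", "x2"), ("AND", "x2", "x3"))
print(h3_order(tree))  # ['x2', 'x1', 'x3']
```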

**Heuristic H4: Fanout Gate Depth-First Left-Most**

The origin of this heuristic can be found in Fujita et al. [16], and it was slightly modified by Bouissou et al. [1]. The gates are sorted according to their number of references (i.e., number of fanouts). For the sake of clarity, the gates are numbered consecutively by a Depth-First Left-Most search, and the algorithm picks gates with larger fanout first. The exact order of the variables results from the arrangement of the variables at the gates. Consequently, for a meaningful outcome of the heuristic, it is assumed that some gates are referenced more than once. For the same fanout, gates with a lower index are preferred.
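A hedged sketch of H4, assuming the fault tree is given as a dictionary from gate names to `(operator, children)` pairs so that shared gates (fanout > 1) can be expressed; all names and the representation are illustrative.

```python
def h4_order(gates, root):
    # number the gates by a depth-first left-most search and count fanouts
    index, fanout = {}, {g: 0 for g in gates}

    def walk(name):
        if name in index:           # gate already numbered
            return
        index[name] = len(index)
        for child in gates[name][1]:
            if child in gates:      # a gate reference: count the fanout
                fanout[child] += 1
                walk(child)

    walk(root)
    order = []
    # larger fanout first; on a tie, the lower DFS index wins
    for g in sorted(index, key=lambda g: (-fanout[g], index[g])):
        for child in gates[g][1]:
            if child not in gates and child not in order:
                order.append(child)
    return order

gates = {
    "g0": ("OR", ["g1", "g2"]),
    "g1": ("AND", ["x1", "g3"]),
    "g2": ("AND", ["x2", "g3"]),
    "g3": ("OR", ["x3", "x4"]),   # g3 is referenced twice (fanout 2)
}
print(h4_order(gates, "g0"))  # ['x3', 'x4', 'x1', 'x2']
```

Since `g3` has the largest fanout, its variables x3 and x4 come first; the remaining gates follow in their depth-first numbering.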

**Heuristic H5: Sifting**

The sifting algorithm was first introduced by Rudell [6]. It tries to find the optimal position for one variable at a time, while all other variable positions remain fixed.

For this purpose, the optimal position for each variable is determined by brute-force enumeration. In each pass, the variable is swapped with its successor until it returns to its original position (sifting through all possible positions). From all n possibilities of the first pass, the one that leads to the smallest BDD is taken. The position of this variable will not be changed in the following passes (e.g., if variable x1 remains at position 3, no other variable can come to position 3 in any further step). Consequently, in the next pass, there is one possibility less for the next variable. Logically, the algorithm then evaluates n + (n − 1) + ... + 1 = n(n + 1)/2 orderings in total (Gauss sum). In each pass, the not-yet-placed variable with the highest number of occurrences is chosen and, in case of a tie, the one with the lower index.

It is possible that the BDD size first increases strongly after some swaps and then decreases again, as can be seen in Chapter 4. For the algorithm, however, only the one position per pass that creates the smallest BDD is of interest. If several orderings yield the same smallest size, the one that was created first by the swaps is used for the next pass.
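The procedure can be sketched as follows. This is a deliberately simplified variant, not Rudell's full implementation: it sifts the variable at the first non-frozen position and freezes one position per pass from left to right, and `bdd_size` is an assumed callback that builds the BDD for a given ordering and returns its node count (the role played by the logic tool in Chapter 4).

```python
def sift(order, bdd_size):
    """Simplified sifting: per pass, swap the first non-frozen variable
    through all later positions, keep the smallest resulting BDD, and
    freeze one position."""
    order = list(order)
    frozen = 0                       # positions left of `frozen` stay fixed
    while frozen < len(order):
        candidates = [list(order)]   # the unchanged ordering is a candidate too
        work = list(order)
        for i in range(frozen, len(work) - 1):
            work[i], work[i + 1] = work[i + 1], work[i]   # one swap step
            candidates.append(list(work))
        order = min(candidates, key=bdd_size)             # smallest BDD wins
        frozen += 1
    return order

# hypothetical size oracle: pretend this ordering gives the smallest BDD
best = ["x2", "x3", "x1", "x4"]
size = lambda order: 5 if order == best else 8
print(sift(["x1", "x2", "x3", "x4"], size))  # ['x2', 'x3', 'x1', 'x4']
```

In practice `bdd_size` is the expensive step, which is why real implementations swap adjacent levels inside the BDD instead of rebuilding it for every candidate ordering.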

## 4 Exemplary Application

In the following, a fault tree with considerable depth is analyzed. For a better overview, the events are only numbered, without textual descriptions.

[Figure not included in this excerpt.]

Figure 2: Fault tree designed for evaluation purposes.

**[...]**

Marvin Caspar (Author), 2019, Heuristics to Optimize the Variable Ordering in Binary Decision Diagrams, Munich, GRIN Verlag, https://www.grin.com/document/1064133

