I learned a little about binary decision diagrams while trying to find out how FPGA synthesis works. The benefit of this form is that it maps more easily onto a real-life circuit (by comparing a subgraph with the graphs of existing circuit components), and it strikes a good balance between the costs of the $\times$ and $+$ operations.

I'll start directly with the algorithms on OBDDs, since the definitions of BDD and OBDD are simple and intuitive, but I'll still include the code that implements the data structure.

You can view the code of a working implementation here, and there's also a demo of the code (ported to the browser via WASM) at the end of this article.

### Convert from formulas

Though the book doesn't mention this algorithm, I still want to cover it, because in real life we usually get logic formulas rather than BDDs as user input.

So suppose we already have a (parsed) logic formula $f$ and the list $v$ of all atomic variables in it. Then a recursive algorithm converts it into a BDD:

- if $v$ is empty, then all variables must already have assigned values, so we can simply evaluate $f$, wrap the resulting value in a terminal node, and return it.
- otherwise, we take the first variable $v_1$ from $v$ and generate a node $n$ for it. Substituting $true$ and $false$ for $v_1$ gives two subformulas $f_1$ and $f_0$, on which we recurse to get the roots $b_1$ and $b_0$ of their BDDs. Then we connect $n$ to $b_1$ and $b_0$ and return $n$.

In pseudo code:

```rust
fn from_formula(formula: &Expression) -> BDD {
    fn from_formula_recursive(
        formula: &Expression,
        tree: &mut Graph,
        known_values: &mut Map<Atom, bool>,
        remain_values: &[Atom],
    ) -> NodeRef {
        if remain_values.is_empty() {
            // all variables are assigned: evaluate and wrap in a terminal node
            tree.new_terminal_node(formula.evaluate(known_values))
        } else {
            let (consider_variable, remain_values) = remain_values.split_first().unwrap();
            known_values.insert(consider_variable, false);
            let b_0 = from_formula_recursive(formula, tree, known_values, remain_values);
            known_values.insert(consider_variable, true);
            let b_1 = from_formula_recursive(formula, tree, known_values, remain_values);
            // create a decision node for the variable and connect its children
            tree.new_node(consider_variable, b_0, b_1)
        }
    }

    let mut tree = Graph::new();
    let mut known_values = Map::new();
    let remain_values = formula.all_atoms();
    let root = from_formula_recursive(formula, &mut tree, &mut known_values, &remain_values);
    BDD { root, tree }
}
```


This algorithm is essentially a recursive Shannon expansion, which we'll meet again in `apply`.
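As a quick sanity check of that identity, the snippet below verifies $f = (x \wedge f|_{x=true}) \vee (\neg x \wedge f|_{x=false})$ by brute force on a small formula of my own choosing (not one from the book):

```rust
// Check the Shannon expansion f = (x & f[x:=true]) | (!x & f[x:=false])
// by brute force, for the sample formula f(x, y, z) = (x & y) | z.
fn shannon_holds() -> bool {
    let f = |x: bool, y: bool, z: bool| (x && y) || z;
    let mut ok = true;
    for x in [false, true] {
        for y in [false, true] {
            for z in [false, true] {
                let f1 = f(true, y, z); // cofactor with x fixed to true
                let f0 = f(false, y, z); // cofactor with x fixed to false
                ok &= f(x, y, z) == ((x && f1) || (!x && f0));
            }
        }
    }
    ok
}

fn main() {
    assert!(shannon_holds());
    println!("Shannon expansion holds for all 8 assignments");
}
```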

#### Complexity

Each time the formula's variable count increases by 1, each call makes 2 recursive calls with 1 variable fewer.

Presuming the graph is implemented efficiently enough, we have $T(n)=2T(n-1)+O(1)$, which leads to $O(2^n)$, where $n$ is the variable count.
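The recurrence can also be checked numerically. This toy counter (not part of the actual implementation) counts the calls made by the naive recursion, which form a full binary tree of $2^{n+1}-1$ calls:

```rust
// Count the calls made by the naive conversion: a call with k variables
// left makes two recursive calls with k - 1 variables, so the total is
// 2^(n+1) - 1, i.e. O(2^n).
fn calls(vars_left: u32) -> u64 {
    if vars_left == 0 {
        1 // leaf call: wrap the fully evaluated formula in a terminal node
    } else {
        1 + calls(vars_left - 1) + calls(vars_left - 1)
    }
}

fn main() {
    for n in 0..10 {
        assert_eq!(calls(n), (1u64 << (n + 1)) - 1);
    }
    println!("calls(n) = 2^(n+1) - 1");
}
```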

### Reduce

> The algorithm reduce now traverses $B$ layer by layer in a bottom-up fashion, beginning with the terminal nodes. In traversing $B$, it assigns an integer label $id(n)$ to each node $n$ of $B$, in such a way that the subOBDDs with root nodes $n$ and $m$ denote the same boolean function if, and only if, $id(n)$ equals $id(m)$.

I'll write the pseudo code here, with a little more detail:

```rust
fn reduce(bdd: &BDD) -> BDD {
    // first we need to aggregate the terminal nodes
    let true_nodes = bdd.tree.all_nodes().filter(|n| n.value == true);
    let false_nodes = bdd.tree.all_nodes().filter(|n| n.value == false);
    if true_nodes.empty() && false_nodes.empty() {
        return BDD::Empty;
    } else if true_nodes.empty() {
        return BDD::B0;
    } else if false_nodes.empty() {
        return BDD::B1;
    }
    let mut new_tree = Graph::new();
    // map from "node in old graph" to "node in new graph"
    let mut node_map = Map::new();
    true_nodes.for_each(|n| node_map.insert(n, new_tree.new_terminal_node(true)));
    false_nodes.for_each(|n| node_map.insert(n, new_tree.new_terminal_node(false)));
    // Since a node with a smaller variable may depend on nodes with larger
    // variables, we use a BinaryHeap here, so larger variables are handled first
    let mut nodes_to_consider: BinaryHeap<_, |it| it.variable()> = true_nodes.chain(false_nodes)
        .flat_map(|it| it.parents())
        .collect();
    while let Some(node_to_consider) = nodes_to_consider.pop() {
        if node_map.contains_key(node_to_consider) {
            continue;
        }
        let false_child_in_old_tree = node_to_consider.child(false);
        let false_child_in_new_tree = node_map.get(false_child_in_old_tree);
        let true_child_in_old_tree = node_to_consider.child(true);
        let true_child_in_new_tree = node_map.get(true_child_in_old_tree);
        if false_child_in_new_tree == true_child_in_new_tree {
            // this node doesn't make any difference
            node_map.insert(node_to_consider, false_child_in_new_tree);
        } else if let Some(equiv_node) = false_child_in_new_tree.parents_on_side(false)
            .intersection(true_child_in_new_tree.parents_on_side(true)).first()
        {
            // we can keep a "(variable, false_child, true_child) -> node" map
            // to speed this up

            // this node is equivalent to an existing node
            node_map.insert(node_to_consider, equiv_node);
        } else {
            // this node is unique: create it in the new graph
            let new_node = new_tree.new_node(
                node_to_consider.variable(),
                false_child_in_new_tree,
                true_child_in_new_tree,
            );
            node_map.insert(node_to_consider, new_node);
        }
        nodes_to_consider.extend(node_to_consider.parents());
    }
    BDD { root: node_map.get(bdd.root), tree: new_tree }
}
```


#### Complexity

This algorithm considers each node once, and for BinaryHeap, insertion and popping the largest element have $O(\log n)$ time complexity, so the overall complexity is $O(n\log n)$, where $n$ is the original BDD's node count.
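To make the labeling idea concrete, here is a minimal, self-contained sketch of one common way to compute the ids: walk bottom-up and key unique nodes on (variable, low id, high id), collapsing nodes whose two children coincide. The arena layout and the names here are mine, not the article's implementation:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
enum Node {
    Leaf(bool),
    // (variable index, low child, high child) as arena indices
    Inner(u32, usize, usize),
}

// Assign each node an id such that two nodes share an id
// iff they denote the same boolean function.
// Assumes the arena is topologically ordered: children before parents.
fn reduce_ids(arena: &[Node]) -> Vec<usize> {
    let mut ids = vec![0usize; arena.len()];
    let mut unique: HashMap<(u32, usize, usize), usize> = HashMap::new();
    let mut next_id = 2; // ids 0 and 1 are reserved for the terminals
    for (i, node) in arena.iter().enumerate() {
        ids[i] = match *node {
            Node::Leaf(b) => b as usize,
            Node::Inner(var, lo, hi) => {
                let (lo_id, hi_id) = (ids[lo], ids[hi]);
                if lo_id == hi_id {
                    lo_id // redundant test: both branches are the same function
                } else {
                    *unique.entry((var, lo_id, hi_id)).or_insert_with(|| {
                        next_id += 1;
                        next_id - 1
                    })
                }
            }
        };
    }
    ids
}

fn main() {
    // Full tree for f(x1, x2) = x1: four leaves, two x2 nodes, one x1 node.
    let arena = [
        Node::Leaf(false),    // 0
        Node::Leaf(false),    // 1
        Node::Leaf(true),     // 2
        Node::Leaf(true),     // 3
        Node::Inner(2, 0, 1), // 4: x2 node below x1 = false
        Node::Inner(2, 2, 3), // 5: x2 node below x1 = true
        Node::Inner(1, 4, 5), // 6: root x1 node
    ];
    let ids = reduce_ids(&arena);
    // both x2 nodes collapse into their (identical) terminal children...
    assert_eq!(ids[4], 0);
    assert_eq!(ids[5], 1);
    // ...so the reduced BDD is just the x1 node over the two terminals
    assert_eq!(ids[6], 2);
    println!("reduced ids: {:?}", ids);
}
```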

### Apply

We can view this operation as an abstraction over the $\text{\&}$ and $|$ operations on boolean functions, i.e. applying a `fn (bool, bool) -> bool` to two boolean functions.

Basically, we split on the true and false cases of each variable in turn, and apply the function to the remaining cofactors.

Pseudo code:

```rust
pub fn apply(lhs: &BDD, rhs: &BDD, f: fn(bool, bool) -> bool) -> BDD {
    fn apply_recursive(
        result_tree: &mut Graph,
        lhs: &BDD,
        rhs: &BDD,
        f: fn(bool, bool) -> bool,
        lhs_node: NodeRef,
        rhs_node: NodeRef,
    ) -> NodeRef {
        // these three are helper functions;
        // read the large match at the bottom first

        fn explore_left(lhs_variable: Atom) -> NodeRef {
            // solve the problem when lhs_variable is false
            let next_lhs_node = lhs_node.child(false);
            let child_0 = apply_recursive(result_tree, lhs, rhs, f, next_lhs_node, rhs_node);

            // solve the problem when lhs_variable is true
            let next_lhs_node = lhs_node.child(true);
            let child_1 = apply_recursive(result_tree, lhs, rhs, f, next_lhs_node, rhs_node);

            result_tree.new_node(lhs_variable, child_0, child_1)
        }

        fn explore_right(rhs_variable: Atom) -> NodeRef {
            // symmetric to explore_left
            // omitted
        }

        fn explore_both(variable: Atom) -> NodeRef {
            // solve the problem when variable is false
            let next_lhs_node = lhs_node.child(false);
            let next_rhs_node = rhs_node.child(false);
            let child_0 = apply_recursive(result_tree, lhs, rhs, f, next_lhs_node, next_rhs_node);

            // solve the problem when variable is true
            let next_lhs_node = lhs_node.child(true);
            let next_rhs_node = rhs_node.child(true);
            let child_1 = apply_recursive(result_tree, lhs, rhs, f, next_lhs_node, next_rhs_node);

            result_tree.new_node(variable, child_0, child_1)
        }

        match (lhs_node.value(), rhs_node.value()) {
            // both nodes are terminal: just apply f to the two values
            (Value(lhs_value), Value(rhs_value)) => {
                result_tree.new_terminal_node(f(lhs_value, rhs_value))
            }
            // the variable only exists in lhs (or comes earlier in the
            // ordering), so we can just dig in on the left side
            (Variable(lhs_variable), Value(_)) => explore_left(lhs_variable),
            (Variable(lhs_variable), Variable(rhs_variable)) if lhs_variable < rhs_variable => {
                explore_left(lhs_variable)
            }
            // symmetric to the above
            (Value(_), Variable(rhs_variable)) => explore_right(rhs_variable),
            (Variable(lhs_variable), Variable(rhs_variable)) if lhs_variable > rhs_variable => {
                explore_right(rhs_variable)
            }
            // both roots test the same variable, so we explore both sides
            (Variable(variable), _) => explore_both(variable),
        }
    }

    let mut new_graph = Graph::new();
    let new_root = apply_recursive(&mut new_graph, lhs, rhs, f, lhs.root(), rhs.root());
    BDD { root: new_root, tree: new_graph }
}
```


Memoization can be added to `apply_recursive` to speed up the algorithm.

#### Complexity

Assume there are $m$ nodes in lhs and $n$ in rhs. Each recursive call splits the problem into two sub-problems, each with at least one node fewer on one side (suppose explore_both happens every time), i.e. $T(m,n) = 2T(m-1,n-1) + O(1)$, which is exponential. However, many of these sub-problems are duplicated: with memoization we only need to consider each pair of (lhs_node, rhs_node) once, so the complexity is actually $O(mn)$.
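The effect of memoization can be seen on a toy model of apply's recursion pattern (each call shrinks one side by one node, as in explore_left / explore_right; this stand-in ignores real BDD structure and is not the article's implementation): the naive call count explodes combinatorially, while a cache keyed on the (lhs_node, rhs_node) pair solves each pair only once:

```rust
use std::collections::HashMap;

// Toy model: a call on sizes (m, n) recurses on (m - 1, n) and (m, n - 1).
fn naive_calls(m: u32, n: u32) -> u64 {
    if m == 0 || n == 0 {
        return 1;
    }
    1 + naive_calls(m - 1, n) + naive_calls(m, n - 1)
}

// Same recursion with a cache keyed on the pair, as memoized apply would use.
fn memo_calls(m: u32, n: u32, seen: &mut HashMap<(u32, u32), u64>) -> u64 {
    if let Some(&v) = seen.get(&(m, n)) {
        return v;
    }
    let v = if m == 0 || n == 0 {
        1
    } else {
        1 + memo_calls(m - 1, n, seen) + memo_calls(m, n - 1, seen)
    };
    seen.insert((m, n), v);
    v
}

fn main() {
    let (m, n) = (10u32, 10u32);
    let mut seen = HashMap::new();
    memo_calls(m, n, &mut seen);
    println!("naive calls: {}", naive_calls(m, n)); // combinatorially large
    // with the cache, every reachable (lhs, rhs) pair is computed exactly
    // once: all (i, j) with i <= m, j <= n except (0, 0), which the
    // recursion never reaches
    assert_eq!(seen.len(), ((m + 1) * (n + 1) - 1) as usize);
}
```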

### Restrict

Restrict is an operation that gives a new BDD, the same as the original except that one of the variables is forced to a given value.

This is quite easy to implement: redirect each parent of every node for that variable directly to the node's child on the chosen side, then delete the node.

```rust
pub fn restrict(bdd: &BDD, variable: Atom, variable_value: bool) -> BDD {
    let mut result = bdd.clone();
    let variable_nodes = result.tree.nodes_with_variable(variable);
    for node in variable_nodes {
        // the child the node's parents should point at from now on
        let redirect_to = node.child(variable_value);
        for edge in node.incoming_edges() {
            // reconnect the parent directly to the surviving child
            result.tree.add_edge(edge.source(), redirect_to, edge.side());
            result.tree.remove_edge(edge);
        }
        result.tree.remove_node(node);
    }
    result
}
```


#### Complexity

If the data structure is efficient enough, this operation's complexity is $O(n\cdot e_{avg})$, where $n$ is the number of nodes for this variable and $e_{avg}$ is the average incoming edge count of these nodes.

However, we always need to run reduce once after restrict, which is usually more expensive than restrict itself.

### Exists

$\exists x.f$ means "whether $f$ can be made true by setting $x$ to either true or false":

```
exists(bdd, x) = restrict(bdd, x, false) | restrict(bdd, x, true)
```


So a naive method is to simply calculate `restrict(bdd, x, false)` and `restrict(bdd, x, true)`, then apply $|$ to the results.
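That identity can be checked by brute force on a sample formula of my own: eliminating $x$ from $(x \wedge y) \vee (\neg x \wedge z)$ should give $y \vee z$:

```rust
// Check ∃x.f = f[x := false] | f[x := true] on a sample formula:
// for f(x, y, z) = (x & y) | (!x & z), eliminating x should give y | z.
fn exists_matches() -> bool {
    let f = |x: bool, y: bool, z: bool| (x && y) || (!x && z);
    let mut ok = true;
    for y in [false, true] {
        for z in [false, true] {
            let exists_x = f(false, y, z) || f(true, y, z);
            ok &= exists_x == (y || z);
        }
    }
    ok
}

fn main() {
    assert!(exists_matches());
    println!("∃x.((x & y) | (!x & z)) = y | z");
}
```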

A possible optimization: since the structure "before" x in the two BDDs is identical, we only need to compute restrict(bdd rooted at x, x, false) | restrict(bdd rooted at x, x, true), and restrict(bdd rooted at x, x, value) is simply the corresponding child of the node for x.

So the algorithm is:

```rust
fn exists(bdd: &BDD, variable: Atom) -> BDD {
    let mut result = bdd.clone();
    let variable_nodes = result.tree.nodes_with_variable(variable);
    for node in variable_nodes {
        let lhs = result.subtree(node.child(false));
        let rhs = result.subtree(node.child(true));
        result.replace_subtree(result.subtree(node), apply(&lhs, &rhs, |l, r| l | r));
    }
    result
}
```


*(Interactive demo: the lhs BDD, the rhs BDD, and the result are rendered here in the browser version of this article.)*