

Fast Boolean Factoring with Multi-Objective Goals

A.I. Reis, Nangate Inc., Sunnyvale, CA, USA 94089-1321, are@nangate.com

A.B. Rasmussen, Nangate Inc., Sunnyvale, CA, USA 94089-1321, [email protected]

L.S. Rosa Jr., PGMICRO, Instituto de Informatica, UFRGS, [email protected]

R.P. Ribas, PGMICRO, Instituto de Informatica, UFRGS, [email protected]

ABSTRACT

This paper introduces a fast algorithm for Boolean factoring. The

proposed approach is based on a novel synthesis paradigm,

functional composition, which performs synthesis by associating

simpler sub-functions. The method constructively controls

characteristics of final and intermediate functions, allowing the

adoption of secondary criteria other than the number of literals for

optimization. This multi-objective factoring algorithm presents

interesting features and advantages when compared to previous

related work.

Categories and Subject Descriptors

7.1 [High-level Synthesis, Logic synthesis and Circuit

Optimization]: Combinational, sequential, and asynchronous

logic synthesis.

General Terms

Algorithms, Measurement, Performance, Design,

Experimentation, Theory.

Keywords

Factoring, Functional Composition, Exact Factoring, Boolean

Factoring, read-once formulas, unate functions, binate functions.

1. INTRODUCTION

Factoring is a core procedure in logic synthesis. Yet, according to

Hachtel and Somenzi [9], the only known optimality result for

factoring (until 1996) is the one presented by Lawler in 1964 [10].

Heuristic techniques have been proposed for factoring that

achieved high commercial success. This includes the quick_factor

(QF) and good_factor (GF) algorithms available in the SIS tool [15].

Recently, factoring methods that produce exact results for read-

once factored forms have been proposed [7] and improved [8].

However, the IROF algorithm [7,8] fails for functions that cannot

be represented by read-once formulas. The Xfactor algorithm

[6,11] is exact for read-once forms and produces good heuristic

solutions for functions not included in this category.

Most of these algorithms [6,7,8,11,15] take as input a sum-of-

products (SOP) or a product-of-sums (POS). As SOP/POS forms

are completely specified, the don’t cares are not treated during the

factoring but while generating the SOP/POS. Thus, the whole

process is not exact. Lawler’s algorithm [10] starts from a

functional description and considers don’t cares, but it is too slow

to complete for all functions of 4 variables.

A method for exact factoring based on Quantified Boolean

Satisfiability (QBF) [1] was proposed by Yoshida et al. [19]. The

Exact_Factor [19] algorithm constructs a special data structure

called eXchange Binary (XB) tree, which encodes all equations

with a given number N of literals. The XB-tree contains three

different classes of configurable nodes: internal (or operator),

exchanger and leaf (or literal). All classes of nodes can be

configured through configuration variables. The Exact_Factor

algorithm derives a QBF formula representing the XB-tree and

then compares it to the function to be factored by using a miter [2]

structure. If the QBF formula for the miter is satisfiable, the

assignment of the configuration variables is computed and a

factored form with N literals is derived. The exactness of the

algorithm derives from the fact that it first searches for a read-once

formula and then increases the number of literals by one until a

satisfiable QBF formula is obtained.

The development of new multi-level optimization algorithms has

recently shifted towards the use of And-Inverter-Graphs (AIGs)

[12]. Especially important for these approaches is the concept of

K-cuts, which allows working with sub-functions of at most K

inputs [5]. The use of factor-cuts [4] allows the efficient

enumeration of cuts with up to 16 inputs. In this kind of approach,

the minimum implementation of small functions has increased

importance [13]. Notice that the criteria for optimality may

include secondary criteria like logic depth [17]. In this context, an

exact algorithm that is able to: (1) minimize factored forms taking

into account multi-objective goals, (2) generate more than one

alternative minimum solution, and (3) start from a functional

description, possibly incompletely specified, is highly desirable. As

shown in Table 1, all these characteristics are present in the

algorithm introduced in this paper. When compared to Lawler’s

[10] and Exact_Factor [19] algorithms, the approach proposed

here is more time-efficient and it has the characteristics (1) and

(2) above as additional advantages. Both the method presented

here and the Exact_Factor approach have features which provide

optimized run-time for read-once formulas, when compared to

Lawler’s method.

This paper is organized as follows. Section 2 presents basic

concepts, including the need for multi-objective optimization

goals. The proposed algorithm is described in Section 3, detailing

timing optimizations for general, symmetric, unate and read-once

functions. Results and comparisons to other methods are

presented in Section 4. The final section discusses the conclusions

of this paper.


Table 1. Comparing the properties of previous methods to the proposed approach.

Method      | Start point | Exactness                   | Functions treated | Optimized for read-once | Incompletely specified functions | More than one solution | Multi-objective

[10]        | Functional  | Exact                       | All               | No  | Yes | No  | No

QF/GF [15]  | SOP/POS     | Heuristic                   | All               | No  | No  | No  | No

[6,11]      | SOP/POS     | Exact for read-once         | All               | Yes | No  | No  | No

[7,8]       | SOP/POS     | Exact for read-once         | Only read-once    | Yes | No  | No  | No

[19]        | Functional  | Exact                       | All               | Yes | Yes | No  | No

This paper  | Functional  | Exact (and multi-objective) | All               | Yes | Yes | Yes | Yes

2. BASIC CONCEPTS

A literal is an instance of a variable (positive literal) or its

complement (negative literal). A Boolean function is said to be

positive (negative) unate with respect to a given variable when a

prime irredundant SOP contains only positive (negative) literals

of the variable. A Boolean function is said to be binate with

respect to a given variable when positive and negative literals of

the variable appear in a minimum SOP.

Example 1: Consider the minimum SOP presented by Equation

(1). The function is binate with respect to variable a, positive

unate with respect to b and negative unate with respect to c. ■

f = a·b + !a·!c (1)

A factored form is read-once when each variable contributes with

at most one literal. Equation (2) is a read-once form. Equation (1)

is not a read-once form, since variable a is read twice.

o = a·(b + c·d·e) + f (2)

The cofactors of a Boolean function f with respect to a given

variable are obtained by setting the variable to one (positive

cofactor) or to zero (negative cofactor), eliminating then the

variable from the function. A cube cofactor is obtained by setting

more than one variable to specific values (zero or one).

Example 2: Consider Equation (1). The positive cofactor with

respect to variable a is f(a) = b. The negative cofactor with

respect to variable a is f(!a) = !c. Examples of cube cofactors of f

include f(!c·!b) = !a, f(b·!c) = 1, f(!b·c) = 0 and f(b·c) = a. ■
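The cofactor computations in Example 2 can be checked mechanically. The sketch below is my own illustration (not the authors' code): it encodes functions of three variables as 8-bit truth tables, using the MSB-first bit-vector convention that Section 3.7 also uses (a = 11110000, b = 11001100, c = 10101010), and reconstructs Equation (1) as f = a·b + !a·!c from the polarities stated in Example 1.

```python
# Truth tables are 8-bit integers; bit 7 is the row a=1, b=1, c=1.
FULL = 0xFF
A, B, C = 0xF0, 0xCC, 0xAA          # variable truth tables
STRIDE = {A: 4, B: 2, C: 1}         # distance between paired rows

def cofactor(f, var, value):
    """Set a variable to 0/1 and mirror the kept half onto the other
    half, so the result no longer depends on that variable."""
    s = STRIDE[var]
    if value:
        kept = f & var
        return kept | (kept >> s)
    kept = f & ~var & FULL
    return (kept | (kept << s)) & FULL

f = (A & B) | (~A & ~C & FULL)      # f = a*b + !a*!c  (Equation 1)

assert cofactor(f, A, 1) == B                     # f(a)   = b
assert cofactor(f, A, 0) == ~C & FULL             # f(!a)  = !c
assert cofactor(cofactor(f, B, 0), C, 0) == ~A & FULL  # f(!b*!c) = !a
assert cofactor(cofactor(f, B, 1), C, 0) == FULL  # f(b*!c) = 1
assert cofactor(cofactor(f, B, 0), C, 1) == 0     # f(!b*c) = 0
assert cofactor(cofactor(f, B, 1), C, 1) == A     # f(b*c)  = a
```

The same bit-mirroring trick generalizes to any number of variables; only the mask/stride table changes.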

A factored form can be implemented as a complementary

series/parallel CMOS transistor network. This process is

straightforward; further details are described by Weste and Harris

[18]. The number of series transistors in one CMOS plane is the

number of parallel transistors in the other CMOS plane. The

number of series transistors affects the performance of the final

cell and it can vary for factored forms representing the same

Boolean function. Example 3 illustrates the motivation for the

proposed algorithm.

Example 3: Consider the equations shown in Table 2, which

represent the same Boolean function. Equation (3) has minimum

literal cost (L). The number of series (S) switches is the same for

all equations. Equation (5) has minimum parallel (P) cost. ■

Table 2. Literal, series and parallel costs of factored forms.

Eq#  Equation                                   L   S  P

(3)  f = b·(d+c+a) + (d+c)·(a + b·c)            9   3  5

(4)  f = c·(b+a) + c·a·(d + (d+b)·(b+a))        10  3  5

(5)  f = c·(b+a) + c·(a+d)·(d+b + b·a)          10  3  4

(6)  f = b·(d+c+a) + (b·c+a)·(a+d+c)            10  3  6

Consequently, it is necessary to develop algorithms able to deal

with multi-objective design goals, which consider topological

properties (like the number of series and parallel switches) while

reducing the number of literals.

3. PROPOSED ALGORITHM

The proposed algorithm uses logic functions that are represented

as a pair of {functionality, implementation}. The functionality is

either a BDD node or a truth table (coded as bit strings on top of

integers). The implementation is either the root of an operator tree

or a string representing a factored form. The implementation can

also have associated data about number of literals, logic depth,

number of transistors, series/parallel properties, etc. These data

are necessary to perform synthesis with multi-objective goals.

 1  vector<factoredForms> optimizeEquation()
 2  {
 3      if (target_function == constant)
 4          return constant;
 5      compute_allowed_subfunctions();
 6      bool sf = false;  // solutions found
 7      sf = create_literals();
 8      if (sf)
 9          return solutions;
10      else for (int i = 2; i < maxLit; i++) {
11          sf = fillBucket(i);
12          if (sf)
13              return solutions;
14      }
15      return "no solutions found";
16  }

Figure 1. Pseudo-code for the optimization algorithm.


The pseudo-code of the algorithm proposed herein is shown in

Fig.1. The first step is to verify if the target function is constant

(lines 3-4). The second step is the computation of allowed sub-

functions (line 5), which is described in Section 3.1. The

algorithm is constructive and it starts from simple functions

representing the literals. The call to create_literals() in

line 7 will produce the pairs {functionality, implementation} for

functions that can be represented by a single literal. The next step

is to verify if the target function can be represented by a single

literal, which is done by checking the flag sf on lines 8-9. The

intermediate functions are kept in buckets indexed by number of

literals. If the target function cannot be implemented as a single

literal, the algorithm starts a loop (lines 10-14) where the buckets

are filled with increasing number of literals. The fillBucket

method fills a bucket with functions having i literals. The method

combines pairs of functions with fewer literals than i to get the

functions with i literals. For instance, to fill the bucket i=4 the

buckets to be combined are {j=1, k=3} and {j=2, k=2}. The

resulting combinations will have 4 literals, as j+k=4. The

combination of two buckets makes all the AND and OR

combinations with one element from each bucket. In the final step

of the method, when the target function cannot be implemented

with maxLit or fewer literals, the algorithm finishes without

returning a solution (line 15).
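The bucket-pairing rule described above (combine buckets whose literal counts sum to i) can be sketched as a small helper. This is an illustrative function of my own, not part of the paper's implementation:

```python
def bucket_pairs(i):
    # Bucket index pairs {j, k} with j + k = i and j <= k. Combining an
    # expression with j literals and one with k literals by a single
    # AND or OR yields an expression with i literals.
    return [(j, i - j) for j in range(1, i // 2 + 1)]

# To fill bucket i=4, the buckets to be combined are {1,3} and {2,2}.
assert bucket_pairs(4) == [(1, 3), (2, 2)]
assert bucket_pairs(5) == [(1, 4), (2, 3)]
```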

The next sub-sections describe further details of the algorithm. Hash

tables for allowed and already looked sub-functions are described

in Sections 3.1 and 3.2, respectively. Section 3.3 discusses the

role of larger and smaller sub-functions. Optimizations for unate

and symmetric variables are discussed in Sections 3.4 and 3.5,

respectively. Section 3.6 discusses the special case of read-once

formulas, whose solution is highly optimized. A complete

example is presented in section 3.7.

3.1 Allowed Sub-Functions Hash Table

The large number of intermediate sub-functions created by

exhaustive combination increases the execution time of the

algorithm. To optimize performance, a hash table of allowed sub-

functions is maintained. Sub-functions that are not present in the

allowed sub-functions hash table are discarded, unless they are

greater or smaller than the target function (as described in Section

3.3). The use of a hash table of allowed sub-functions helps to

control the execution time of the algorithm. The algorithm can be

more or less exact according to the number of allowed sub-

functions. Empirically we have found that the set of all cube

cofactors of the target function are very good intermediate

functions. The intuition behind this is that by setting variables to

zero and one in an optimized factored form it is possible to obtain

sub-expressions of the formula. However, the cube cofactors

alone are not sufficient to obtain an exact solution, and for some

functions this set of allowed sub-functions is not even able to

produce a solution at all. So, several effort levels have been

implemented in the algorithm. These levels are described in the following.

(1) Effort low: only the cube cofactors of the target function are included in the allowed sub-functions.

(2) Effort feasible: in addition to the cube cofactors, the products of the cube cofactors and each of the variables set in the cube cofactors are also added.

(3) Effort medium: all the functions resulting from products and sums of the cube cofactors are also allowed.

(4) Effort high: the products and sums of all the functions in effort medium are also allowed.

(5) Effort exact: all functions are allowed.

It has been empirically observed that, for all functions for which

efforts high and exact completed, the result was not superior to

that of effort medium. This strongly suggests that effort medium is

sufficient to produce exact results.
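The effort-low allowed set (all non-constant cube cofactors) can be enumerated directly on truth tables. The sketch below is my own illustration under the 3-variable, 8-bit encoding used in Section 3.7; applied to the function of Example 8 (bit vector 11000101) it yields exactly the eight non-constant cube cofactors listed there:

```python
FULL = 0xFF
VARS = [(0xF0, 4), (0xCC, 2), (0xAA, 1)]   # (mask, stride) for a, b, c

def cofactor(f, mask, stride, value):
    # Set one variable to 0/1 and mirror the kept half of the table.
    kept = f & (mask if value else ~mask & FULL)
    return (kept | (kept >> stride if value else kept << stride)) & FULL

def cube_cofactors(f):
    """Effort low: every distinct non-constant cube cofactor of f."""
    found = set()
    def rec(g, start):
        for i in range(start, len(VARS)):
            for v in (0, 1):
                h = cofactor(g, *VARS[i], v)
                if h not in (0, FULL) and h not in found:
                    found.add(h)
                    rec(h, i + 1)   # set further variables in order
    rec(f, 0)
    return found

allowed = cube_cofactors(0xC5)             # f = 11000101 (Example 8)
assert len(allowed) == 8                   # eight non-constant cofactors
assert 0xF0 in allowed and 0xCC in allowed # literals a and b are among them
```

Constant and duplicated cofactors are discarded on the fly, which is the reduction remarked upon in Section 3.7.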

3.2 Already-Looked Hash Table

The already-looked hash table stores the functions that have already

been introduced. These functions have been produced with a smaller

or equal number of literals and do not need to be introduced

twice. This speeds up execution time.

3.3 Larger and Smaller Functions

Two Boolean functions can be compared and classified according

to their relative order, which can be: equal, larger, smaller or not-

comparable. It is said that f1 is larger (smaller) than f2 when the

on-set of f1 is a superset (subset) of the on-set of f2. Two

functions are equal when they have equal on-set and off-set. They

are not-comparable when the on-sets are not contained by each

other. Larger and smaller sub-functions with respect to the target

function are always accepted as they are useful for composing

products and sums representing the target function [10]. Larger

and smaller functions are kept in separate buckets and checked for

primality before insertion.
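The relative-order test reduces to subset checks on the on-set masks. A minimal sketch (my own, assuming the 8-bit truth-table encoding used elsewhere in this paper's examples):

```python
FULL = 0xFF

def compare(f1, f2):
    """Relative order of two completely specified functions given as
    8-bit on-set masks: equal, larger, smaller or not-comparable."""
    if f1 == f2:
        return 'equal'
    if f2 & ~f1 & FULL == 0:
        return 'larger'          # on-set of f1 is a superset of f2's
    if f1 & ~f2 & FULL == 0:
        return 'smaller'
    return 'not-comparable'

A, B = 0xF0, 0xCC                       # variables a and b
assert compare(A, A & B) == 'larger'    # a  is larger than  a*b
assert compare(A & B, A | B) == 'smaller'
assert compare(A, B) == 'not-comparable'
```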

Notice that don’t cares can be treated by terminating the algorithm

when a function in between the on-set and the off-set of the

incompletely specified target function is found.

3.4 Unate Variables

Unateness and binateness can be detected at the functional level,

before any equation is produced. This can be verified by

comparing the positive and negative cofactors of the function with

respect to each variable. A function f is positive (negative) unate

on variable x when f(x) is larger (smaller) than f(!x). The

function does not depend on x if f(x) = f(!x). When f(x) and

f(!x) are not comparable, the function is binate on x.
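This polarity test is a pair of cofactor comparisons. The sketch below is my own illustration; applied to the function reconstructed from Example 1 (f = a·b + !a·!c) it reproduces the classification stated there:

```python
FULL = 0xFF
VARS = {'a': (0xF0, 4), 'b': (0xCC, 2), 'c': (0xAA, 1)}

def cofactor(f, mask, stride, value):
    kept = f & (mask if value else ~mask & FULL)
    return (kept | (kept >> stride if value else kept << stride)) & FULL

def polarity(f, var):
    mask, stride = VARS[var]
    pos = cofactor(f, mask, stride, 1)   # f(x)
    neg = cofactor(f, mask, stride, 0)   # f(!x)
    if pos == neg:
        return 'independent'
    if neg & ~pos & FULL == 0:
        return 'positive unate'          # f(x) is larger than f(!x)
    if pos & ~neg & FULL == 0:
        return 'negative unate'          # f(x) is smaller than f(!x)
    return 'binate'

f = 0xC5                                 # f = a*b + !a*!c
assert polarity(f, 'a') == 'binate'
assert polarity(f, 'b') == 'positive unate'
assert polarity(f, 'c') == 'negative unate'
```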

Lemma 1: If a variable is positive (negative) unate, any factored

form containing a negative (positive) literal of that variable is not

optimized. ■

By using Lemma 1, only the literals with the right polarity are

created by the function create_literals. This reduces the

execution time by avoiding the creation of non-optimal factored

forms containing the literals in the wrong polarity.

3.5 Symmetric Variables

Two variables are symmetric when they can be interchanged

without modifying the logic function.

Example 4: Consider Equation (7). Exchanging variables a and b

results in the same function. So they are symmetric. Exchanging

variables a and c does not yield the same function. So they are not

symmetric. ■

f = a·b + c = b·a + c ≠ c·b + a (7)

Symmetry can also be detected at functional level, before any

equation is produced. This can be detected functionally, by

comparing the cube cofactors of the two variables involved. Two

variables x and y are symmetric in function f when

f(x·!y) = f(!x·y). Two variables x and y are anti-symmetric in

function f when f(x·y) = f(!x·!y). The fact that two variables are

anti-symmetric means that the function does not change if

the variables are inverted and exchanged with each other. The

information about symmetry and anti-symmetry can be used to

greatly reduce the number of cube cofactors needed as allowed

sub-functions. Basically, instead of computing all cube cofactors

of the function, the symmetric and anti-symmetric variables are

grouped and the cube cofactors are computed by first setting all

the variables inside a group before picking a variable from another

group.

Example 5: Consider a function f of variables {a, b, c, d}.

Suppose that a is symmetric to c and that b is anti-symmetric to d.

Then the set {a, b, c, d} is divided into two sub-sets, (a, c) and

(b, d), and the sub-sets impose an order for computing the cube

cofactors. This way, the cofactors f(a·b) and f(a·d) are not

considered, while f(a·c·b) and f(a·c) are allowed. ■
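The symmetry and anti-symmetry conditions above can be checked directly on cube cofactors. A sketch of my own (same 8-bit encoding as before), verified on Equation (7), f = a·b + c:

```python
FULL = 0xFF
VARS = {'a': (0xF0, 4), 'b': (0xCC, 2), 'c': (0xAA, 1)}

def cofactor(f, mask, stride, value):
    kept = f & (mask if value else ~mask & FULL)
    return (kept | (kept >> stride if value else kept << stride)) & FULL

def cube(f, assign):
    # Cube cofactor: apply several single-variable cofactors in turn.
    for var, val in assign.items():
        f = cofactor(f, *VARS[var], val)
    return f

def symmetric(f, x, y):        # f(x*!y) == f(!x*y)
    return cube(f, {x: 1, y: 0}) == cube(f, {x: 0, y: 1})

def anti_symmetric(f, x, y):   # f(x*y) == f(!x*!y)
    return cube(f, {x: 1, y: 1}) == cube(f, {x: 0, y: 0})

f = (0xF0 & 0xCC) | 0xAA       # f = a*b + c  (Equation 7)
assert symmetric(f, 'a', 'b')        # a and b are interchangeable
assert not symmetric(f, 'a', 'c')    # a and c are not
```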

The information about unateness, derived as described in Section

3.4, can be used to detect variables that are not symmetric, since

symmetric variables must have the same polarity properties.

Example 6: Recall Example 1; no two variables in Example 1 can be

symmetric to each other, as their unate/binate (polarity) properties

are different. ■

3.6 Read-Once Formulas

This section proves that effort low produces exact results when

the target function can be represented through a read-once

formula.

Theorem 1: If a function f can be represented through a read once

formula, all the partial sub-equations correspond to functions that

are cube cofactors of f.

Proof: As each variable appears as a single literal, they can all be

independently set to non-controlling values, which makes only

one literal disappear at a time. This way, any sub-equation (or

sub-set) of f can be obtained by assigning non-controlling values

to the variables to be eliminated. This is the definition of cube

cofactors. ■

Corollary: If the target function in our algorithm is a read-once

function, the exact solution is obtained with effort low, as the

cube cofactors are sufficient to obtain the read-once formula. ■

Example 7: Consider Equation (8). Any sub-equation

corresponds to a cube cofactor. For instance,

o(e·!f) = a·(b + c·d) and o(!c·!e·!f) = a·b. ■

o = a·(b + c·d·e) + f (8)

Notice that, for computing the cube cofactors, the read-once

expression is not necessary. This can be done at the functional level,

before starting to compute the read-once equation.

3.7 A Complete Example

In this section, we provide a complete example for the algorithm,

discussing how the aspects described before are taken into

account in the execution of the method. In our example we

consider that the pairs {functionality, implementation} are

described as {bit_vector, string}.

Example 8: Find a minimum implementation for Boolean

function f represented by the bit vector 11000101. Consider that it

depends on three variables a, b, c, represented by bit vectors

a=11110000, b=11001100 and c=10101010.

The first step is to compute the allowed sub-functions. This step first

computes the unateness and symmetry information for the variables.

Variable a is binate, as f(a) = 11001100 and f(!a) = 01010101

are of non-comparable order. Variable b is positive unate, as

f(b) = 11110101 is larger than f(!b) = 00000101. Variable c is

negative unate, as f(c) = 11000000 is smaller than f(!c) =

11001111. No variable is symmetric, so symmetry information is

not used to reduce the computation of cube cofactors. The

computation of the cube cofactors will result in eight different non-

constant functions: f(a) = 11001100, f(!a) = 01010101,

f(b) = 11110101, f(!b) = 00000101, f(c) = 11000000, f(!c) =

11001111, f(b·c) = 11110000 and f(!b·!c) = 00001111. It is

important to make two observations: (1) the total number of cube

cofactors is greatly reduced, since some cube cofactors are equal

and some are constant; and (2) the list of cube cofactors already

contains the literals in the right polarities.

At this point, depending on the effort level, combinations could

be done between every pair of allowed sub-functions. The effort

medium would perform all the ANDs and ORs of the allowed sub-

functions resulting from cube cofactors. The effort high would

compute all the AND and OR operations of the allowed sub-

functions in effort medium. Suppose that for this example effort

low is adopted. Then the set of eight non-constant cube cofactors

listed above corresponds to the set of allowed sub-functions.

Next step is the creation of the representations of the literals. This

will create the following {bit_vector, string} pairs and insert them

in the bucket for the 1-literal formulas: {11110000, a},

{00001111, !a}, {11001100, b} and {01010101, !c}.

Once the 1-literal bucket is filled, the combination part starts, by

producing the 2-literal combinations: {11110101, (a)+(!c)},

{00000101, (!a)*(!c)}, {11000000, (a)*(b)} and {11001111,

(!a)+(b)}. Other 2-literal combinations are produced and not

accepted, as they are not in the allowed sub-function hash table or

they have already been produced with fewer literals. No 3-literal

combination is accepted. The 4-literal combinations produce two

minimum solutions: {11000101, ((a)*(b))+((!a)*(!c))} and

{11000101, ((!a)+(b))*((a)+(!c))}. Both are accepted as they are

equal to the target function. ■
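The whole run of Example 8 can be reproduced with a compact sketch. The code below is my own assumption-laden illustration of the effort-low flow (names and data structures are mine, not the authors'): it builds the allowed set from cube cofactors, creates literals in the right polarity only (Lemma 1), fills buckets by literal count, and stops at the first bucket containing the target.

```python
from itertools import combinations_with_replacement, product

FULL = 0xFF
VARS = [('a', 0xF0, 4), ('b', 0xCC, 2), ('c', 0xAA, 1)]

def cofactor(f, mask, stride, value):
    kept = f & (mask if value else ~mask & FULL)
    return (kept | (kept >> stride if value else kept << stride)) & FULL

def cube_cofactors(f):
    found = set()
    def rec(g, start):
        for i in range(start, len(VARS)):
            _, m, s = VARS[i]
            for v in (0, 1):
                h = cofactor(g, m, s, v)
                if h not in (0, FULL) and h not in found:
                    found.add(h)
                    rec(h, i + 1)
    rec(f, 0)
    return found

def optimize(target, max_lit=8):
    allowed = cube_cofactors(target)            # effort low
    bucket = {1: []}
    for name, m, s in VARS:                     # Lemma 1: right polarity only
        pos, neg = cofactor(target, m, s, 1), cofactor(target, m, s, 0)
        if pos & ~neg & FULL:                   # not negative unate
            bucket[1].append((m, name))
        if neg & ~pos & FULL:                   # not positive unate
            bucket[1].append((~m & FULL, '!' + name))
    looked = {f for f, _ in bucket[1]}
    for i in range(2, max_lit + 1):
        new, sols = {}, []
        for j in range(1, i // 2 + 1):          # buckets with j + k = i
            k = i - j
            pairs = (combinations_with_replacement(bucket[j], 2) if j == k
                     else product(bucket[j], bucket.get(k, [])))
            for (f1, e1), (f2, e2) in pairs:
                for g, op in ((f1 & f2, '*'), (f1 | f2, '+')):
                    expr = f'({e1}){op}({e2})'
                    if g == target:
                        sols.append(expr)
                    elif g in allowed and g not in looked and g not in new:
                        new[g] = expr
        bucket[i] = list(new.items())
        looked |= set(new)
        if sols:
            return i, sols
    return None, []

n, sols = optimize(0xC5)                        # f = 11000101
assert n == 4                                   # two minimum 4-literal forms
assert sols == ['((a)*(b))+((!a)*(!c))', '((a)+(!c))*((!a)+(b))']
```

The run matches the text: the 1-literal bucket holds a, !a, b and !c; the 2-literal bucket holds (a)*(b), (a)+(!c), (!a)+(b) and (!a)*(!c); the 3-literal bucket is empty; and two 4-literal solutions are found.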

4. RESULTS

This section discusses the results of the proposed method. Table 3

presents the resulting equations from the method. The first

column presents the numbers to identify the equations. The

second column presents the references from which the equations

were obtained. The third column presents the factored forms

resulting from the method presented here. Minimum forms were

obtained for all equations, and these minimum forms are the

ones shown in the table. The fourth and fifth

columns show the execution time (in seconds) for effort low to


obtain one (EL1S) or six (EL6S) solutions. The effort low did not

find minimum solutions for all equations. In four cases, no

solution was found. These are Equations (9), (14), (23) and (24),

which are marked with a NF for low effort. In one case, a non-

optimal solution was found (with 18 literals instead of the

minimum 14). This is Equation (15); which is indicated with a

NO label. The sixth and seventh columns show the execution time

(in seconds) for effort medium. The execution times are larger,

but this effort level is able to produce minimum factored forms for

all functions.

Section 3.6 proved that effort low would produce exact results for

functions in which a read-once formula exists. This can be

verified in Table 3. However, several equations which are not

read-once formulas were also found. Consider Equation (10),

where variables b and d contribute with two literals each:

acbddbf ⋅⋅++⋅= )( . This version of the equation is found only

with effort medium, because the sub-expression cbd ⋅+ is not a

cube cofactor of the function. However, the equivalent equation

)( badcbaf +⋅+⋅⋅= is obtained with effort low as it is

composed of two cube cofactors which are read-once:

cbadf ⋅⋅=)( and )()( badcf +⋅= . Notice that the effort level

medium exploits a larger space of solutions, as the sums and the

products of the cube cofactors are inserted in the hash table of

allowed sub-functions.

Before considering the execution time of the approach proposed

here, it is necessary to consider some details of the most relevant

prior approaches [11, 19]. The Xfactor [11] algorithm runs in the

order of milliseconds. However, it starts from an already existing

SOP and according to the authors (see [8]) the algorithm requires

a pre-processing that can be exponential in the size of the original

input if a BDD or truth table is used instead. The execution times

reported for the proposed algorithm include reading a file on disk,

constructing the BDD, applying the algorithm and rewriting the

results to disk. Thus, the execution time is of the same order, but

the fact that we start from a BDD makes the proposed algorithm

very interesting from an execution time point-of-view.

Additionally, the method introduced here has the advantages

already presented in Table 1, including a much better control of

the characteristics of the final factored form. The Exact_Factor

[19] algorithm starts from a functional description (BDD or truth

table). However, the execution times are of the order of seconds,

instead of milliseconds (for a similar machine). For instance,

Exact_Factor reports an execution time of 466.3sec to solve

Equation (24), while the proposed algorithm takes only 3.172sec.

This is a speed-up of two orders of magnitude. The worst case

reported in Table 3 is 85sec for Equation (15), with 14 literals,

and Exact_Factor reports equations up to a maximum of 12

literals, claiming a worst case time of 10 minutes.

Table 3. Execution times for factorization examples from previous works (times in seconds).

Eq#  Source     Equation                                        Time EL1S   Time EL6S   Time EM1S  Time EM6S

(9)  [10]       f = c·!a·!b + !c·(a+b)                          0.004 (NF)  0.004 (NF)  0.012      0.012

(10) [10]       f = b·d + (d + b·c)·a                           0.004       0.005       0.024      0.027

(11) [10]       f = (e+c)·(a+b) + e·(c+b+d)                     0.037       0.039       0.197      0.199

(12) [10]       f = (b+d)·(d + b·c)·a                           0.005       0.007       0.025      0.027

(13) [3]        f = (e+f)·(g + a·(b+c+d))                       0.127       0.129       1.149      1.526

(14) [3]        f = a·b + c·d + (!b+!d)·(!a+!c)                 0.027 (NF)  0.027 (NF)  0.184      0.232

(15) [3]        o = b·f + g·(d+a+e) + c·(a+b) + (c+d)·(a+b+f)   2.486 (NO)  2.494 (NO)  84         85

(16) [3]        f = e + (a+b)·(c+d)                             0.007       0.007       0.052      0.056

(17) [3,11,16]  o = c + d + a·b·(f+g) + !a·e                    0.321       0.322       4.289      7.432

(18) [3]        f = (a+b)·(c + a·e)                             0.006       0.006       0.023      0.026

(19) [3]        f = a·b + c·(a+e)                               0.008       0.008       0.021      0.025

(20) [3]        f = a·e + (!a+d)·(c + !a·b)                     0.021       0.022       0.143      0.178

(21) [11]       f = d + e·(g + h·(a+b)) + (e+h)·c               0.561       0.564       6.499      7.665

(22) [11]       f = (e+a)·d·(c + a·b)                           0.014       0.014       0.072      0.086

(23) [11]       o = (a·b + c·d + e·f)·(!a·!b + !c·!d + !e·!f)   0.269 (NF)  0.269 (NF)  2.347      3.215

(24) [19]       o = e + (a·b + c·d)·(!a·!b + !c·!d + e·f)       1.171 (NF)  1.171 (NF)  3.172      4.260


Table 4. Example of multi-objective factorization.

Eq#  Equation                                                                    L   S  P  Time (sec)

(25) o = e·f + (c·d + a·b + (c+d)·(a+b))·(e+f) + (c·d + a·b)·(c+d)·(a+b)        20  4  7  7.98

(26) o = e·f + c·d·(a+b) + a·b·(c+d) + (e+f)·(c·d + a·b + (c+d)·(a+b))          20  3  9  7.32

(27) o = (e + f + (c·d + a·b)·(c+d)·(a+b))·(e·f + (c + d + a·b)·(a + b + c·d))  22  8  4  8.23

Table 4 presents an example of multi-objective factorization.

Equation (25) is the minimum literal count equation obtained

when only literals are minimized. Equation (26) is obtained when

the algorithm is required to minimize literals and obtain the

minimum possible number of switches in series (which is 3). This

minimum number is pre-computed according to [14] and used as a

parameter by the algorithm. Sub-functions not respecting the

lower bounds in [14] are discarded. Equation (27) is obtained

when the algorithm is required to minimize literals and obtain the

minimum possible number of switches in parallel (which is 4). In

this case the number of literals is increased by two (from 20 to

22). Notice that, as the data of sub-functions is always known

during the execution of the algorithm, the algorithm proposed can

be modified to accept only sub-functions with certain

characteristics. The approach can consider any secondary criteria

that can be computed in a monotonically increasing way, so that

the solutions are generated in the right order. Additionally, the

new costs must be easily obtainable for a combination of the sub-

functions, so that the method is fast. In the case of literals, this can

be done by simple addition. These requirements make it possible to

control not only the number of series and parallel switches, but also

the logic depth (per input variable) and the sub-function support size. These

features will be implemented as future work.
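The monotonic-cost requirement above can be made concrete with a small sketch. The record type and helper below are my own illustration (not the paper's data structures): literal costs combine by simple addition, and logic depth by a max-plus-one rule, both monotonically increasing under combination.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Impl:
    """A hypothetical implementation record: expression plus costs."""
    expr: str
    literals: int
    depth: int

def combine(x: Impl, y: Impl, op: str) -> Impl:
    # Literal counts add; depth grows by one level. Both costs are
    # monotonically increasing, so solutions come out in cost order.
    return Impl(f'({x.expr}){op}({y.expr})',
                x.literals + y.literals,
                max(x.depth, y.depth) + 1)

a, b, c = Impl('a', 1, 0), Impl('b', 1, 0), Impl('c', 1, 0)
ab = combine(a, b, '*')
abc = combine(ab, c, '+')
assert (ab.literals, ab.depth) == (2, 1)
assert (abc.literals, abc.depth) == (3, 2)
```

A sub-function whose running cost already exceeds a bound (for instance, the series-switch lower bound of [14]) can be discarded the moment it is created.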

Table 5 shows the number of literals for some functions where our

algorithm produces a better literal count than quick_factor and

good_factor.

Table 5. Number of literals of the proposed approach compared to quick_factor (QF) and good_factor (GF).

Eq#   QF [15]  GF [15]  This paper

(17)  12       11       8

(18)  6        6        5

(20)  8        7        7

(21)  13       11       9

(22)  7        7        6

(23)  20       20       12

(24)  19       18       11

(25)  28       21       20

(25) 28 21 20

5. CONCLUSIONS

This paper has proposed the first multi-objective exact factoring

algorithm. The algorithm is based on a new synthesis paradigm

(functional composition) and it is very fast compared to other

approaches, providing speedups of two orders of magnitude for

exact results. This characteristic makes it a useful piece for

approaches based on restructuring small portions of logic, like

[13] and [17]. The algorithm has the ability to take secondary

criteria (like series and parallel number of transistors, or support

size) into account, while generating several alternative solutions.

These are unique characteristics which are very useful in the

context of local optimizations.

6. REFERENCES

[1] Benedetti, M. 2005. sKizzo: a suite to evaluate and certify

QBFs. 20th CADE, LNCS vol. 3632, 369-376.

[2] Brand, D. 1993. Verification of large synthesized designs.

ICCAD 93. IEEE, Los Alamitos, CA, 534-537.

[3] Brayton, R. K. 1987. Factoring logic functions. IBM J. Res.

Dev. 31, 2 (Mar. 1987), 187-198.

[4] Chatterjee, S., Mishchenko, A., and Brayton, R. 2006. Factor

cuts. ICCAD '06. ACM, New York, NY, 143-150.

[5] Cong, J., Wu, C., and Ding, Y. 1999. Cut ranking and

pruning: enabling a general and efficient FPGA mapping

solution. FPGA '99. ACM, New York, NY, 29-35.

[6] Golumbic, M. C. and Mintz, A. 1999. Factoring logic

functions using graph partitioning. ICCAD '99. IEEE Press,

Piscataway, NJ, 195-199.

[7] Golumbic, M. C., Mintz, A., and Rotics, U. 2001. Factoring

and recognition of read-once functions using cographs and

normality. DAC '01. ACM, New York, NY, 109-114.

[8] Golumbic, M. C., Mintz, A., and Rotics, U. 2008. An

improvement on the complexity of factoring read-once

Boolean functions. Discrete Appl. Math. 156, 10 (May.

2008), 1633-1636.

[9] Hachtel, G. D. and Somenzi, F. 2000 Logic Synthesis and

Verification Algorithms. 1st. Kluwer Academic Publishers.

[10] Lawler, E. L. 1964. An Approach to Multilevel Boolean

Minimization. J. ACM 11, 3 (Jul. 1964), 283-295.

[11] Mintz, A. and Golumbic, M. C. 2005. Factoring boolean

functions using graph partitioning. Discrete Appl. Math. 149,

1-3 (Aug. 2005), 131-153.


[12] Mishchenko, A., Chatterjee, S., and Brayton, R. 2006. DAG-

aware AIG rewriting a fresh look at combinational logic

synthesis. DAC '06. ACM, New York, NY, 532-535.

[13] Mishchenko, A., Brayton, R., and Chatterjee, S. 2008.

Boolean factoring and decomposition of logic networks.

ICCAD 2008. IEEE, 38-44.

[14] Schneider, F. R., Ribas, R. P., Sapatnekar, S. S., and Reis, A.

I. 2005. Exact lower bound for the number of switches in

series to implement a combinational logic cell. ICCD. IEEE

Computer Society, Washington, DC, 357-362.

[15] Sentovich, E., Singh, K., Lavagno, L., Moon, C., Murgai, R.,

Saldanha, A., Savoj, H., Stephan, P., Brayton, R., and

Sangiovanni-Vincentelli, A. 1992. SIS: A system for

sequential circuit synthesis. Tech. Rep. UCB/ERL M92/41.

UC Berkeley, Berkeley.

[16] Stanion, T. and Sechen, C. 1994. Boolean division and

factorization using binary decision diagrams. IEEE TCAD,

vol. 13, no. 9, 1179-1184.

[17] Werber, J., Rautenbach, D., and Szegedy, C. 2007. Timing

optimization by restructuring long combinatorial paths.

ICCAD '07. IEEE Press, Piscataway, NJ, 536-543.

[18] Weste, N.H.E. and Harris, D. 2005. Section 6.2.1: Static

CMOS. In: CMOS VLSI design, Addison Wesley, 321-327.

[19] Yoshida, H., Ikeda, M., and Asada, K. 2006. Exact Minimum

Logic Factoring via Quantified Boolean Satisfiability.

ICECS '06. 1065-1068.