An Introduction to Digital Logic

This section gives a brief overview of digital electronics. It contains enough information for you to complete the labs for this course, but it is not meant to be all-inclusive. If you need more information, there are many books in the library that cover digital logic.

Up until now the labs have dealt with electricity in its analog form, where a quantity is described by the amount of voltage, current, or charge, expressed as a real number. However, a large proportion of electronic equipment, including computers, uses digital electronics, where the quantities (usually voltage) are described by two states: on and off. These two states can also be represented by true and false, or 1 and 0, and in most physical systems they are represented by voltages near 5V and 0V. While the restriction to two states seems limiting, it makes many things easier because problems due to noise are minimized: it is generally very easy to reliably distinguish between logic 1 and logic 0.

Since many quantities cannot be represented by only two states, more than one binary digit can be used to represent a number. For example, the number 25₁₀ (twenty-five, base 10) can be represented by the binary number 11001₂. It is easy to convert back and forth between binary and decimal by remembering that each digit in a binary number corresponds to a power of 2, just as each digit in a decimal number corresponds to a power of 10. Using the previous example:

  10¹ 10⁰        2⁴  2³  2²  2¹  2⁰
 (10) (1)       (16) (8) (4) (2) (1)
   2   5    =     1   1   0   0   1

 2*10 + 5*1  =  1*16 + 1*8 + 0*4 + 0*2 + 1*1  =  25

In general an n-digit binary number can represent numbers from 0 to 2ⁿ-1. For instance, a byte is 8 bits and can represent numbers from 0 to 255 (2⁸-1).
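These conversions are easy to check with a few lines of Python, shown here purely as an illustration (the labs themselves do not require any programming):

```python
# Convert between decimal and binary using Python built-ins.
n = 25
binary = format(n, "b")      # decimal -> binary string
print(binary)                # "11001"
print(int("11001", 2))       # binary string -> decimal: 25

# An n-bit number ranges from 0 to 2**n - 1.
bits = 8
print(2**bits - 1)           # 255
```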



Combinatorial Logic

Another advantage of digital electronics is the ability to express signals in terms of logic equations using standard terms from logic: and, or, and not. These functions can be represented by truth tables as shown below with A and B as inputs and C as output.

     and              or              not
A B | C          A B | C          A | C
0 0 | 0          0 0 | 0          0 | 1
0 1 | 0          0 1 | 1          1 | 0
1 0 | 0          1 0 | 1
1 1 | 1          1 1 | 1
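The same truth tables can be generated with Python's bitwise operators, which is a quick way to check your work (an illustrative sketch, not part of the lab):

```python
from itertools import product

# Print truth tables for and, or, and not using 0/1 values.
print("A B | A and B | A or B")
for a, b in product((0, 1), repeat=2):
    print(f"{a} {b} |    {a & b}    |   {a | b}")

print("A | not A")
for a in (0, 1):
    print(f"{a} |   {1 - a}")    # not of a 0/1 value is 1 - value
```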

These logic functions can be represented using a shorthand notation: and is represented by . or &, or is represented by + or #, and not is represented by ~ or ! (there are also other conventions; the most common is to put a bar over the variable). Thus the equation "D equals A and B or not C" can be written as

D = A.B + ˜C     or     D = A & B # !C

Obviously this equation has different meanings depending on whether the and or the or is performed first, and parentheses can be used in the normal way to remove the ambiguity:

D = (A.B) + ˜C

Other common functions are nand and nor. The nand function is an and followed by a not; nor is an or followed by a not. The symbols used in schematics for these functions are given below:

[Figure: schematic symbols for the and, or, not, nand, and nor gates]

Logic equations, like any other, can get complicated quickly. To simplify logic equations, a system called Boolean algebra (after the mathematician George Boole) was developed. A short selection of its theorems is listed below.

(1) A.0 = 0      (6) A+1 = 1       (11) (A.B).C = A.(B.C)
(2) A.1 = A      (7) A+!A = 1      (12) A.(B+C) = A.B + A.C
(3) A.!A = 0     (8) A+A = A       (13) A+(B.C) = (A+B).(A+C)
(4) A+0 = A      (9) A+B = B+A     (14) !(A+B) = !A.!B
(5) A.A = A      (10) (A+B)+C = A+(B+C)   (15) !(A.B) = !A+!B

Some of these rules are quite obvious. For example, if we write out the truth table for rule (1) we get:

A | A.0
0 |  0
1 |  0

Some of the other rules are not so obvious. For example, rule (14) yields the truth table shown below:

A B | !A !B | A+B | !(A+B) | !A.!B
0 0 |  1  1 |  0  |   1    |   1
0 1 |  1  0 |  1  |   0    |   0
1 0 |  0  1 |  1  |   0    |   0
1 1 |  0  0 |  1  |   0    |   0

The truth table shows that rule 14, !(A+B) = !A.!B, is correct.
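Rules (14) and (15), known as De Morgan's laws, can also be checked exhaustively by machine; the short Python sketch below is again purely illustrative:

```python
from itertools import product

def NOT(x):
    return 1 - x   # not of a 0/1 value

# Exhaustively verify De Morgan's laws over all input combinations.
for a, b in product((0, 1), repeat=2):
    assert NOT(a | b) == NOT(a) & NOT(b)   # rule 14: !(A+B) = !A.!B
    assert NOT(a & b) == NOT(a) | NOT(b)   # rule 15: !(A.B) = !A+!B

print("De Morgan's laws hold for all inputs")
```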

These theorems can be used to simplify equations. For example if we start off with the expression 
                                  D = (A.B+(A+C.˜C)).A+B, 
we can apply the rules in turn to simplify it.

D = (A.B+(A+C.˜C)).A+B     apply 3
D = (A.B+(A+0)).A+B        apply 4
D = (A.B+A).A+B            apply 2
D = (A.B+A.1).A+B          apply 12
D = (A.(B+1)).A+B          apply 6
D = (A.1).A+B              apply 2
D = A.A+B                  apply 5
D = A+B
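As a sanity check, this equivalence can be verified by brute force. The Python sketch below (illustrative only) evaluates both expressions for every input combination:

```python
from itertools import product

def NOT(x):
    return 1 - x

# Check that (A.B + (A + C.~C)).A + B equals A + B for every input.
# Note: in Python, & binds more tightly than |, matching and-before-or.
for a, b, c in product((0, 1), repeat=3):
    original = ((a & b) | (a | (c & NOT(c)))) & a | b
    simplified = a | b
    assert original == simplified

print("simplification verified")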

As with the algebra you learned in elementary school, this kind of simplification quickly gets tedious and messy. Luckily there is a graphical shortcut for logic minimization called a Karnaugh map. This introduction will only cover Karnaugh maps with up to four variables; the technique can be generalized to larger systems, though those are usually simplified by computer. Consider the equation given above, shown below in the form of a truth table and a three-variable Karnaugh map:

[Figure: truth table and three-variable Karnaugh map for the equation above]

One way to get a solution is simply to write an expression for each true result in the table.  For example, the lower-leftmost true result from the Karnaugh map represents the case where A=1 and B=0 and C=0 and it can be written as A.˜B.˜C.  The true result next to it can be written as A.˜B.C.  Now to develop a logic expression, we would just or together all of these terms.  Thus our result (including an expression for each true term -- 6 in all) is:

A.˜B.˜C  +  A.˜B.C  +  A.B.C  +  A.B.˜C  +  ˜A.B.C  +  ˜A.B.˜C

This is called the sum-of-products form. Although this expression is correct, it is unwieldy. We could use the theorems of Boolean algebra to simplify it, but this is often difficult and does not guarantee a minimal result. Instead, we can use a visual technique based on Karnaugh maps to develop a minimal sum-of-products solution.
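The sum-of-products construction itself is mechanical: or together one anded term per true row of the truth table. The Python sketch below illustrates this (the names f and sop are just for illustration):

```python
from itertools import product

def NOT(x):
    return 1 - x

# The target function from the example above: D = A + B.
def f(a, b, c):
    return a | b

# Build the sum-of-products form: or together one and-term per true row.
def sop(a, b, c):
    result = 0
    for ra, rb, rc in product((0, 1), repeat=3):
        if f(ra, rb, rc):
            # A true row contributes one product (minterm):
            # use the variable if the row has a 1, its complement if a 0.
            term = (a if ra else NOT(a)) & (b if rb else NOT(b)) & (c if rc else NOT(c))
            result |= term
    return result

# The unsimplified sum-of-products form agrees with f on every input.
assert all(sop(a, b, c) == f(a, b, c) for a, b, c in product((0, 1), repeat=3))
print("sum-of-products form matches")
```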

To get the simplified equation one takes the table and encircles as many 1's as possible in rectangular groups that have 1, 2, 4, or 8 elements (i.e., any power of 2). The idea is to make the groupings as large as possible. For the example above this can be accomplished with 2 groupings:

[Figure: the Karnaugh map with its ones circled in two groups, one red and one blue]

If you examine these groupings carefully you can see that the red grouping has no dependence on the variables A or C, and is totally described by the statement B=1. The blue group on the other hand has no dependence on B or C and is described by the statement A=1. Therefore to include the elements from both groups we can use the equation A+B. If you had used smaller groups you would have obtained an equivalent, though more complicated, expression. Try it. This graphical method is clearly easier than the technique used earlier that employed algebraic simplifications.

You should examine the map shown above and convince yourself that any grouping of a single element needs all three variables to describe it. For instance, the uppermost "1" on the right hand side is described by ˜A.B.˜C. A grouping of two gets rid of the dependence on one of the variables (the two rightmost ones have no dependence on A and are given by B.˜C). A group of four, as you have seen, depends on only one variable. Therefore by choosing the smallest number of groups (i.e., the largest groups), you will come up with the minimal equation describing those groups. The result obtained with the Karnaugh map is called the minimal sum-of-products form because it generates the smallest number of product (anded) terms.

Also, if you look at the table again, you can convince yourself that it is possible to "wrap around" the ends of the table, as shown:

[Figure: Karnaugh map groupings that wrap around the edges of the map]

The two groups are represented by B.C (the red group) and A.˜C (the blue group, which wraps around).

This technique also works for two variables (trivial), four variables (shown below), and even more (though this gets complicated and will not be described here). A typical four-variable map and its groupings are shown here:

[Figure: a typical four-variable Karnaugh map and its groupings]

which simplifies to a minimal sum of products with one term per grouping. Prove it to yourself.


Months in school

[Figure: the "months in school" problem]

Truth Table and Karnaugh map

[Figure: truth table and Karnaugh map for the months-in-school example]

Alternate solution (others possible)

[Figure: an alternate grouping of the Karnaugh map]


Sequential Logic

The digital logic described thus far is called combinatorial logic because the output depends solely upon the presently existing combination of the inputs; past values of the inputs do not matter. Sequential logic deals with the issue of time dependence and can get much more complicated than combinatorial logic -- much in the same way that differential equations are more difficult than algebraic equations. The fundamental building block of sequential circuits is the flip-flop (which flips and flops between logic states), of which there are several types. The simplest is the R-S, or Set-Reset, flip-flop, which is made up of two gates:

[Figure: an R-S flip-flop made from two cross-coupled gates, with its black-box symbol at right]

Note that flip flops are often quite complicated at the gate level and are frequently represented by a "black box" with inputs and outputs as shown at right.

Let's see how this device operates by examining the four possible inputs. If S,R = 1,0 then Q=1 and ˜Q=0; that is, S=1 sets Q. If S,R = 0,1 then Q=0 and ˜Q=1; R=1 resets Q. If S,R = 1,1 then Q=0 and ˜Q=0, a result that doesn't seem to make sense, but we will deal with this shortly. If S,R = 0,0 then you can convince yourself that the outputs are indeterminate -- you cannot figure out what they are from the inputs alone. This is where the time dependence of the circuit becomes important: if S,R goes from 1,0 to 0,0 then Q=1 and ˜Q=0; if S,R goes from 0,1 to 0,0 then Q=0 and ˜Q=1; but if S,R goes from 1,1 to 0,0 then Q and ˜Q are still indeterminate. We therefore call 1,1 the disallowed state and design our circuits so that this state is never used. The 0,0 state is called the hold state, since the outputs hold their previous values. The truth table for this circuit is given as:

Sn Rn | Qn
 0  0 | Qn-1   (hold)
 0  1 | 0      (reset)
 1  0 | 1      (set)
 1  1 | disallowed
Note that the variables have a subscript associated with them. This subscript denotes time, so that if S,R=0,0 at time n then the output Q retains the value it had at time n-1.
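This behavior can be modeled with a short Python function (an illustrative sketch; here the disallowed state simply raises an error):

```python
# A minimal model of the R-S flip-flop truth table, with S,R = 0,0 as hold.
def rs_flipflop(s, r, q_prev):
    if (s, r) == (1, 0):
        return 1          # set
    if (s, r) == (0, 1):
        return 0          # reset
    if (s, r) == (0, 0):
        return q_prev     # hold: keep the value from time n-1
    raise ValueError("S=R=1 is the disallowed state")

q = 0
q = rs_flipflop(1, 0, q)   # set   -> Q = 1
q = rs_flipflop(0, 0, q)   # hold  -> Q = 1
q = rs_flipflop(0, 1, q)   # reset -> Q = 0
print(q)                   # 0
```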

There are several other types of flip-flops, but the most popular are the D and J-K flip-flops, which have the following truth tables:

   D flip-flop               J-K flip-flop

 Dn-1 | Qn              Jn-1 Kn-1 | Qn
   0  | 0                 0    0  | Qn-1    (hold)
   1  | 1                 0    1  | 0       (reset)
                          1    0  | 1       (set)
                          1    1  | ˜Qn-1   (toggle)
and the circuit symbols:

[Figure: circuit symbols for the D and J-K flip-flops]

Note that these flip-flops have another input called the "clock". The transition from time n-1 to time n occurs at some transition of the clock: usually when the clock goes from low to high (the rising edge) or from high to low (the falling edge), but some devices are instead level sensitive (for instance, the output may follow the input while the clock is high and hold its last value while the clock is low). In addition to the clock there are sometimes reset inputs to clear the output to logic 0, and preset inputs to set the output to logic 1.
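A falling-edge-triggered J-K flip-flop can be modeled behaviorally in Python. The class below is an illustrative sketch, not a gate-level model:

```python
# A behavioral model of a falling-edge-triggered J-K flip-flop.
class JKFlipFlop:
    def __init__(self):
        self.q = 0
        self.last_clock = 0

    def tick(self, clock, j, k):
        # The output changes only on a falling edge (clock goes 1 -> 0).
        if self.last_clock == 1 and clock == 0:
            if (j, k) == (0, 1):
                self.q = 0              # reset
            elif (j, k) == (1, 0):
                self.q = 1              # set
            elif (j, k) == (1, 1):
                self.q = 1 - self.q     # toggle
            # (0, 0): hold the previous value
        self.last_clock = clock
        return self.q

ff = JKFlipFlop()
for clk in (1, 0, 1, 0):   # two falling edges with J = K = 1
    ff.tick(clk, 1, 1)
print(ff.q)                # toggled twice: back to 0
```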

To further understand these devices consider the circuit shown below (actually a simplified portion of an integrated circuit, the 7493):

Note: the inverted outputs of the flip-flops aren't shown since we aren't using them.

[Figure: three flip-flops cascaded in series, a simplified portion of the 7493]

Assume that the flip-flops are falling-edge triggered and that the outputs are initially all 0. If a series of clock pulses is fed into the Clock input, you should be able to convince yourself that the outputs are given by:

[Figure: timing diagram for Clock, Qa, Qb, and Qc]

Thus Qa is half the frequency of the clock, Qb half that of Qa, and Qc half that of Qb, making Qc 1/8 the frequency of the clock. Also, if you use Qa through Qc to represent a number (with Qa as the least significant bit), the output of this circuit cycles repetitively through eight states representing the numbers 0₁₀ to 7₁₀ (000₂ to 111₂). This circuit is therefore known as a divide-by-eight counter. The actual 7493 is a little more complicated and includes a reset signal so that you can reset the counter to 0 at any time. We will use counters in the next lab.
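The counting behavior can be imitated with a small Python sketch that toggles each stage when the previous stage's output falls (a behavioral model of the counting, not the actual 7493 internals):

```python
# A sketch of the divide-by-eight ripple counter: three toggle stages,
# each clocked by the falling edge of the previous stage's output.
def ripple_counter(pulses):
    q = [0, 0, 0]            # Qa, Qb, Qc
    counts = []
    for _ in range(pulses):
        # Each falling clock edge toggles Qa; a falling edge on Qa
        # toggles Qb, and a falling edge on Qb toggles Qc.
        for i in range(3):
            q[i] = 1 - q[i]
            if q[i] == 1:    # this stage rose, so no falling edge ripples on
                break
        counts.append(q[2] * 4 + q[1] * 2 + q[0])
    return counts

print(ripple_counter(9))   # [1, 2, 3, 4, 5, 6, 7, 0, 1]
```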

Thus ends the introduction to digital logic.
