Monday, October 31, 2011

macro processor


Macro Processor
Macro Instruction
  • A macro instruction (macro) is simply a notational convenience for the programmer.
  • A macro represents a commonly used group of statements in the source program.
  • The macro processor replaces each macro instruction with the corresponding group of source statements.
    • This operation is called “expanding the macro”.
  • Using macros allows a programmer to write a shorthand version of a program.
  • For example, before calling a subroutine, the contents of all registers may need to be stored. This routine work can be done using a macro.
Machine Independent
  • The functions of a macro processor essentially involve the substitution of one group of lines for another. Normally, the processor performs no analysis of the text it handles.
  • The meaning of these statements is of no concern during macro expansion.
  • Therefore, the design of a macro processor is generally machine independent.
  • Macros are mostly used in assembler language programming. However, they can also be used in high-level programming languages such as C or C++.
Basic Functions
  • Macro definition
    • The two directives MACRO and MEND are used in macro definition.
    • The macro’s name appears before the MACRO directive.
    • The macro’s parameters appear after the MACRO directive.
    • Each parameter begins with ‘&’.
    • Between MACRO and MEND is the body of the macro. These are the statements that will be generated as the expansion of the macro.
  • Macro expansion (or invocation)
    • Gives the name of the macro to be expanded and the arguments to be used in expanding it.
    • Each macro invocation statement will be expanded into the statements that form the body of the macro, with arguments from the macro invocation substituted for the parameters in the macro prototype.
    • The arguments and parameters are associated with one another according to their positions (as in the sketch after this list).
      • The first argument corresponds to the first parameter, and so on.
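The following is a minimal sketch (in Python, not part of the original slides) of how positional macro expansion works. The macro name, parameter list, and body here are simplified stand-ins; a real macro processor would of course operate on assembler source files.

    # Minimal sketch of macro definition and positional expansion.
    macros = {}  # macro name -> (parameter list, body lines)

    def define(name, params, body):
        """Record a macro definition (everything between MACRO and MEND)."""
        macros[name] = (params, body)

    def expand(name, args):
        """Substitute each argument for the parameter in the same position."""
        params, body = macros[name]
        expanded = []
        for line in body:
            for param, arg in zip(params, args):
                line = line.replace(param, arg)
            expanded.append(line)
        return expanded

    define("SAVEREGS", ["&BASE"], ["STA  &BASE", "STX  &BASE+3", "STL  &BASE+6"])
    print("\n".join(expand("SAVEREGS", ["REGSAV"])))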
Macro Program Example
Macro definition
Avoid the use of labels in a macro
Macro invocations
Expanded Macro Program
Retain Labels on Expanded Macro
  • The label on the macro invocation statement CLOOP has been retained as a label on the first statement generated in the macro expansion.
  • This allows the programmer to use a macro instruction in exactly the same way as an assembler language mnemonic.
Differences between Macro and Subroutine
  • After macro processing, the expanded file can be used as input to the assembler.
  • The statements generated from the macro expansions will be assembled exactly as though they had been written directly by the programmer.
  • The differences between macro invocation and subroutine call:
    • The statements that form the body of the macro are generated each time a macro is expanded.
    • Statements in a subroutine appear only once, regardless of how many times the subroutine is called.
Avoid Use of Labels in a Macro
  • In the RDBUFF and WRBUFF macros, many program-counter-relative addressing instructions are used to avoid the use of labels in a macro.
    • For example, JLT *-19
  • This avoids generating duplicate labels when the same macro is expanded multiple times at different places in the program (duplicate labels would be treated as an error by the assembler).
  • Later on, we will present a method that allows a programmer to use labels in a macro definition.
Two-Pass Macro Processor
  • Like an assembler or a loader, we can design a two-pass macro processor in which all macro definitions are processed during the first pass, and all macro invocation statements are expanded during the second pass.
  • However, such a macro processor cannot allow the body of one macro instruction to contain definitions of other macros.
    • This is because all macros would have to be defined during the first pass, before any macro invocations were expanded.
Macro Containing Macro Example
  • MACROS contains the definitions of RDBUFF and WRBUFF written in SIC instructions.
  • MACROX contains the definitions of RDBUFF and WRBUFF written in SIC/XE instructions.
  • A program that is to run on a SIC system could invoke MACROS, whereas a program to be run on SIC/XE could invoke MACROX.
  • Defining MACROS or MACROX does not define RDBUFF and WRBUFF. These definitions are processed only when an invocation of MACROS or MACROX is expanded.
One-Pass Macro Processor
  • A one-pass macro processor that alternates between macro definition and macro expansion is able to handle “macros within macros”.
  • However, because of the one-pass structure, the definition of a macro must appear in the source program before any statements that invoke that macro.
    • This restriction is reasonable.
Data Structures
  • DEFTAB
    • Stores the definition statements of macros.
    • Comment lines are omitted.
    • References to the macro instruction parameters are converted to a positional notation for efficiency in substituting arguments.
  • NAMTAB
    • Stores macro names; serves as an index into DEFTAB.
    • Contains pointers to the beginning and end of each definition.
  • ARGTAB
    • Used during the expansion of macro invocations.
    • When a macro invocation statement is encountered, the arguments are stored in this table according to their position in the argument list (see the sketch below).
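A rough Python sketch of how these three tables fit together is shown below. The table shapes and the ?n positional notation follow the description above, but the code itself is illustrative rather than an actual macro processor.

    DEFTAB = []   # definition lines of all macros; parameters stored as ?1, ?2, ...
    NAMTAB = {}   # macro name -> (start, end) indices into DEFTAB
    ARGTAB = []   # arguments of the current invocation, stored by position

    def store_definition(name, lines):
        """Enter a macro definition into DEFTAB and index it in NAMTAB."""
        start = len(DEFTAB)
        DEFTAB.extend(lines)                 # comment lines assumed already omitted
        NAMTAB[name] = (start, len(DEFTAB) - 1)

    def expand_invocation(name, args):
        """Expand an invocation by substituting ARGTAB entries for ?n."""
        ARGTAB[:] = args
        start, end = NAMTAB[name]
        for line in DEFTAB[start:end + 1]:
            for n, arg in enumerate(ARGTAB, start=1):
                line = line.replace("?" + str(n), arg)
            yield line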
Data Structures Snapshot
Algorithm
  • Procedure DEFINE
    • Called when the beginning of a macro definition is recognized; makes the appropriate entries in DEFTAB and NAMTAB.
  • Procedure EXPAND
    • Called to set up the argument values in ARGTAB and expand a macro invocation statement.
  • Procedure GETLINE
    • Gets the next line to be processed.
Handling Macros within Macros
  • When a macro definition is being entered into DEFTAB, the normal approach is to continue until an MEND directive is reached.
  • This does not work for macros within macros, because the first MEND encountered (belonging to the inner macro) would prematurely end the definition of the outer macro.
  • To solve this problem, a counter LEVEL is used to keep track of the level of macro definitions. An MEND ends the definition of the macro currently being processed only when LEVEL returns to 0 (see the sketch below).
    • This is very much like matching left and right parentheses when scanning an arithmetic expression.
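A small sketch of the LEVEL idea (Python; getline() is an assumed helper that returns the next source line as a list of tokens):

    def enter_definition(getline, deftab):
        """Copy a macro body into DEFTAB, tracking nested definitions."""
        level = 1                    # the outer MACRO directive was just read
        while level > 0:
            tokens = getline()
            deftab.append(tokens)
            if "MACRO" in tokens:
                level += 1           # an inner macro definition begins
            elif "MEND" in tokens:
                level -= 1           # this MEND closes the innermost MACRO
        # when level drops back to 0, the MEND belongs to the outer macro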
Algorithm Pseudo Code
Machine Independent Features
Concatenation of Macro Parameters
  • Most macro processors allow parameters to be concatenated with other character strings.
  • E.g., to generate the variables XA1, XA2, XA3, … or XB1, XB2, XB3 flexibly, “A” or “B” can be passed as an argument. We just need to concatenate “X”, the argument, and “1”, “2”, “3”, … together.
Concatenation Problem
  • Suppose the parameter to such a macro instruction is named &ID. The body of the macro definition may contain a statement like LDA X&ID1, in which &ID is concatenated after the string “X” and before the string “1”.
  • The problem is that the end of the parameter is not marked. Thus X&ID1 may mean “X” + &ID + “1” or “X” + &ID1.
  • To avoid this ambiguity, a special concatenation operator -> is used. The new form becomes X&ID->1. Of course, -> does not appear in the macro expansion (see the sketch below).
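A sketch of how substitution might honour the concatenation operator (Python; the parameter table and syntax are simplified):

    def substitute(line, params):
        """Replace &NAME parameters, treating -> as an end-of-name marker
        that is consumed during substitution."""
        for name, value in params.items():            # e.g. {"&ID": "B"}
            line = line.replace(name + "->", value)   # X&ID->1 becomes XB1
            line = line.replace(name, value)          # plain occurrences of &ID
        return line

    print(substitute("LDA   X&ID->1", {"&ID": "B"}))  # prints: LDA   XB1

One caveat worth noting: if one parameter name is a prefix of another, the longer names would have to be substituted first; the sketch ignores that detail.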
Concatenation Example
Generation of Unique Labels
  • We saw previously that, without special processing, labels used in a macro definition lead to the “duplicate labels” problem if the macro is invoked multiple times.
  • To generate unique labels for each macro invocation, when writing a macro definition we must begin each label with $.
  • During macro expansion, the $ will be replaced with $xx, where xx is a two-character alphanumeric counter of the number of macro instructions expanded.
    • xx starts from AA, then AB, AC, … (see the sketch below).
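A sketch of generating the $xx counter and rewriting labels (Python, illustrative):

    import string

    counter = 0   # number of macro expansions performed so far

    def rewrite_labels(body_lines):
        """Replace the $ prefix with $AA, $AB, ... for this expansion."""
        global counter
        up = string.ascii_uppercase
        xx = up[counter // 26] + up[counter % 26]    # AA, AB, ..., AZ, BA, ...
        counter += 1
        return [line.replace("$", "$" + xx) for line in body_lines]

    # The first expansion turns $LOOP into $AALOOP everywhere it appears:
    print(rewrite_labels(["$LOOP   TD   =X'F1'", "        JEQ  $LOOP"]))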
Unique Labels Macro Definition
Unique Labels Macro Expansion
Conditional Macro Expansion
  • So far, when a macro instruction is invoked, the same sequence of statements is used to expand the macro.
  • Here, we allow conditional assembly to be used.
    • Depending on the arguments supplied in the macro invocation, the sequence of statements generated for a macro expansion can be modified.
  • Conditional macro expansion can be very useful. It can generate code tailored to a particular application.
Conditional Macro Example
  • In the following example, the values of the &EOR and &MAXLTH parameters are used to determine which parts of a macro definition need to be generated.
  • Some macro-time control structures are introduced for doing conditional macro expansion:
    • IF-ELSE-ENDIF
    • WHILE-ENDW
  • Macro-time variables can also be used to store values used by these macro-time control structures (see the sketch below).
    • Used to store the result of evaluating a Boolean expression.
    • A variable that starts with & but is not defined in the parameter list is treated as a macro-time variable.
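The sketch below shows one way a macro processor might honour IF-ELSE-ENDIF while copying body lines during expansion (Python; eval_condition is a hypothetical helper, and real macro-time expressions are richer than this):

    def eval_condition(expr, env):
        """Hypothetical helper: look up a macro-time variable's truth value."""
        return bool(env.get(expr))

    def expand_conditional(body, env):
        """Keep or skip body lines according to IF/ELSE/ENDIF nesting."""
        out, keep, stack = [], True, []
        for line in body:
            tokens = line.split()
            op = tokens[0] if tokens else ""
            if op == "IF":
                stack.append(keep)
                keep = keep and eval_condition(tokens[1], env)
            elif op == "ELSE":
                keep = stack[-1] and not keep
            elif op == "ENDIF":
                keep = stack.pop()
            elif keep:
                out.append(line)
        return out

    body = ["IF &EOR", "   TD =X'&INDEV'", "ELSE", "   RD =X'&INDEV'", "ENDIF"]
    print(expand_conditional(body, {"&EOR": "04"}))   # keeps the TD branch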
Conditional macro control structure
Macro-time variable
Conditional macro expansion 1
Conditional macro expansion 2
Conditional macro expansion 3
Conditional Macro Implementation
  • The macro processor maintains a symbol table that contains the values of all macro-time variables used.
  • Entries in this table are made or modified when SET statements are processed.
  • When an IF statement is encountered during the expansion of a macro, the specified Boolean expression is evaluated.
    • If the value of this expression is TRUE, the macro processor continues to process lines until it encounters the next ELSE or ENDIF.
      • If ELSE is encountered, the macro processor skips to ENDIF.
    • Otherwise, the macro processor skips to ELSE and continues to process lines until it reaches ENDIF.
Conditional Macro Example
Macro processor function
Conditional Macro Expansion vs. Conditional Jump Instructions
  • The testing of Boolean expressions in IF statements occurs at the time macros are expanded.
  • By the time the program is assembled, all such decisions have been made.
  • There is only one resulting sequence of source statements; it does not change during program execution.
  • In contrast, the COMPR instruction tests data values during program execution, so the sequence of statements executed may differ from one program execution to another.
Keyword Macro Parameters
  • So far, all macro instructions have used positional parameters.
    • If an argument is to be omitted, the macro invocation statement must contain a null argument to maintain the correct argument positions.
    • E.g., GENER ,,DIRECT,,,,,,3.
  • If keyword parameters are used, each argument value is written with a keyword that names the corresponding parameter.
    • Arguments can thus appear in any order.
    • Null arguments no longer need to be used.
    • E.g., GENER TYPE=DIRECT, CHANNEL=3
  • The keyword parameter method can make a program easier to read than the positional method (see the sketch below).
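A sketch of binding keyword arguments over default values (Python; the defaults shown are illustrative, loosely modelled on the RDBUFF example):

    def bind_keyword_args(defaults, invocation_args):
        """Start from the prototype's default values, then override any
        parameter that the invocation names explicitly."""
        bound = dict(defaults)
        for arg in invocation_args:
            key, _, value = arg.partition("=")
            bound["&" + key.strip()] = value.strip()
        return bound

    defaults = {"&INDEV": "F1", "&BUFADR": "BUFFER", "&EOR": "04"}
    print(bind_keyword_args(defaults, ["INDEV=F3", "EOR=05"]))
    # {'&INDEV': 'F3', '&BUFADR': 'BUFFER', '&EOR': '05'}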
Keyword Macro Example
Can specify default values
Keyword parameters
Design Options
Recursive Macro Expansion
  • If we want to allow a macro to be invoked within a macro definition, the macro processor implementation presented so far cannot be used.
  • This is because the EXPAND routine would be called recursively, but the variables it uses (e.g., EXPANDING) are not saved across these calls.
  • This problem is easy to solve if we use a programming language that supports recursive functions (e.g., C or C++), as in the sketch below.
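A sketch of recursive expansion in Python, where each recursive call naturally keeps its own local state (macro bodies and token handling are simplified, and labels are ignored for brevity):

    def expand(name, args, macros):
        """Expand a macro body, recursing into inner macro invocations."""
        params, body = macros[name]
        out = []
        for line in body:
            for param, arg in zip(params, args):
                line = line.replace(param, arg)
            tokens = line.split()
            if tokens and tokens[0] in macros:          # e.g. RDCHAR in RDBUFF
                inner_args = tokens[1].split(",") if len(tokens) > 1 else []
                out.extend(expand(tokens[0], inner_args, macros))
            else:
                out.append(line)
        return out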
Recursive Macro Example
For ease of implementation, we require that the RDCHAR macro be defined before it is used in the RDBUFF macro. This requirement is very reasonable.

Sunday, October 30, 2011

artificial neural network



Artificial Neural Networks for Beginners

Carlos Gershenson

C.Gershenson@sussex.ac.uk

1. Introduction

The scope of this teaching package is to give a brief introduction to Artificial Neural Networks (ANNs) for people who have no previous knowledge of them. We first give a brief introduction to models of networks, and then describe ANNs in general terms. As an application, we explain the backpropagation algorithm, since it is widely used and many other algorithms are derived from it.

The reader should know algebra and the handling of functions and vectors. Differential calculus is recommended, but not necessary. The contents of this package should be understandable to people with a high school education. It will be useful for people who are just curious about what ANNs are, or for people who want to become familiar with them, so that when they study them more fully they will already have clear notions of ANNs. Also, people who only want to apply the backpropagation algorithm without a detailed and formal explanation of it will find this material useful. This work should not be seen as “Nets for dummies”, but of course it is not a treatise. Much of the formality is skipped for the sake of simplicity. Detailed explanations and demonstrations can be found in the referred readings. The included exercises complement the understanding of the theory. The on-line resources are highly recommended for extending this brief introduction.

2. Networks

One efficient way of solving complex problems is to follow the maxim “divide and conquer”. A complex system may be decomposed into simpler elements in order to understand it. Simple elements may also be gathered to produce a complex system (Bar-Yam, 1997). Networks are one approach for achieving this. There are a large number of different types of networks, but they are all characterized by the same components: a set of nodes, and connections between nodes.

The nodes can be seen as computational units. They receive inputs, and process them to obtain an output. This processing might be very simple (such as summing the inputs), or quite complex (a node might contain another network...)

The connections determine the information flow between nodes. They can be unidirectional, when the information flows only in one sense, and bidirectional, when the information flows in either sense.

The interactions of nodes through the connections lead to a global behaviour of the network which cannot be observed in its individual elements. This global behaviour is said to be emergent. It means that the abilities of the network supersede those of its elements, making networks a very powerful tool.



Networks are used to model a wide range of phenomena in physics, computer science, biochemistry, ethology, mathematics, sociology, economics, telecommunications, and many other areas. This is because many systems can be seen as a network: proteins, computers, communities, etc. Which other systems could you see as a network? Why?

3. Artificial neural networks

One type of network sees the nodes as ‘artificial neurons’. These are called artificial neural networks (ANNs). An artificial neuron is a computational model inspired by natural neurons. Natural neurons receive signals through synapses located on the dendrites or membrane of the neuron. When the signals received are strong enough (they surpass a certain threshold), the neuron is activated and emits a signal through the axon. This signal might be sent to another synapse, and might activate other neurons.

Figure 1. Natural neurons (artist’s conception).

The complexity of real neurons is highly abstracted when modelling artificial neurons. These basically consist of inputs (like synapses), which are multiplied by weights (the strength of the respective signals), and then computed by a mathematical function which determines the activation of the neuron. Another function (which may be the identity) computes the output of the artificial neuron (sometimes depending on a certain threshold). ANNs combine artificial neurons in order to process information. (A minimal code sketch follows Figure 2 below.)

Figure 2. An artificial neuron
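As a concrete illustration of the neuron just described, the sketch below (Python, not from the original text) computes a weighted sum of the inputs and applies a simple threshold output function:

    def artificial_neuron(inputs, weights, threshold=0.0):
        """Weighted sum of the inputs, then a step output function."""
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1.0 if activation > threshold else 0.0

    print(artificial_neuron([1.0, 0.5], [0.8, -0.3]))  # 0.8 - 0.15 > 0, so 1.0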



The higher the weight of an artificial neuron, the stronger the influence of the input that is multiplied by it. Weights can also be negative, so we can say that the signal is inhibited by a negative weight. Depending on the weights, the computation of the neuron will be different. By adjusting the weights of an artificial neuron we can obtain the output we want for specific inputs. But when we have an ANN of hundreds or thousands of neurons, it would be quite complicated to find all the necessary weights by hand. Instead, we can find algorithms which adjust the weights of the ANN in order to obtain the desired output from the network. This process of adjusting the weights is called learning or training.

The number of types of ANNs and their uses is very high. Since the first neural model by McCulloch and Pitts (1943), hundreds of different models considered as ANNs have been developed. The differences between them might be in the functions, the accepted values, the topology, the learning algorithms, etc. There are also many hybrid models where each neuron has more properties than the ones we are reviewing here. For reasons of space, we will present only an ANN which learns its weights using the backpropagation algorithm (Rumelhart and McClelland, 1986), since it is one of the most common models used in ANNs, and many others are based on it.

Since the function of ANNs is to process information, they are used mainly in fields related to it. There are a wide variety of ANNs that are used to model real neural networks and to study behaviour and control in animals and machines, but there are also ANNs which are used for engineering purposes, such as pattern recognition, forecasting, and data compression.

3.1. Exercise

This exercise is to become familiar with artificial neural network concepts. Build a network consisting of four artificial neurons. Two neurons receive inputs to the network, and the other two give outputs from the network.

[Figure: the four-neuron exercise network, with two input neurons, two output neurons, and weighted arrows connecting them]
There are weights assigned to each arrow, which represent information flow. These weights are multiplied by the values which go through each arrow, to give more or less strength to the signal which they transmit. The neurons of this network just sum their inputs. Since the input neurons have only one input, their output will be the input they received multiplied by a weight. What happens if this weight is negative? What happens if this weight is zero?

The neurons on the output layer receive the outputs of both input neurons, multiplied by their respective weights, and sum them. They give an output which is multiplied by another weight.

Now, set all the weights to be equal to one. This means that the information will flow unaffected. Compute the outputs of the network for the following inputs: (1,1), (1,0), (0,1), (0,0), (-1,1), (-1,-1). (A small code sketch of this computation is given below.)
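Since the figure is not reproduced here, the sketch below (Python) encodes one plausible reading of the described topology: two input neurons with one weight each, two output neurons that sum both input-neuron outputs, and one final weight per output.

    def network(x1, x2, w):
        """w holds the eight weights of the four-neuron exercise network."""
        i1 = x1 * w["in1"]                      # input neurons: one input each
        i2 = x2 * w["in2"]
        o1 = (i1 * w["a1"] + i2 * w["a2"]) * w["out1"]   # output neurons sum
        o2 = (i1 * w["b1"] + i2 * w["b2"]) * w["out2"]   # both, then one weight
        return o1, o2

    ones = {k: 1.0 for k in ["in1", "in2", "a1", "a2", "b1", "b2", "out1", "out2"]}
    for pair in [(1, 1), (1, 0), (0, 1), (0, 0), (-1, 1), (-1, -1)]:
        print(pair, "->", network(*pair, ones))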

Good. Now, choose weights among 0.5, 0, and -0.5, and set them randomly along the network. Compute the outputs for the same inputs as above. Change some weights and see how the behaviour of the networks changes. Which weights are more critical (if you change those weights, the outputs will change more dramatically)?

Now, suppose we want a network like the one we are working with, such that the outputs are the inputs in reverse order (e.g. (0.3,0.7)->(0.7,0.3)). Find weights that achieve this.

That was an easy one! Another easy network would be one where the outputs should be the double of the inputs.

Now, let’s set thresholds for the neurons. That is, if the previous output of the neuron (the weighted sum of the inputs) is greater than the threshold of the neuron, the output of the neuron will be one, and zero otherwise. Set thresholds on a couple of the networks already developed, and see how this affects their behaviour.

Now, suppose we have a network which will receive only zeroes and/or ones for inputs. Adjust the weights and thresholds of the neurons so that the output of the first output neuron will be the conjunction (AND) of the network inputs (one when both inputs are one, zero otherwise), and the output of the second output neuron will be the disjunction (OR) of the network inputs (zero when both inputs are zero, one otherwise). You can see that there is more than one network which will give the requested result.

Now, perhaps it is not so complicated to adjust the weights of such a small network, but the capabilities of such a network are also quite limited. If we needed a network of hundreds of neurons, how would we adjust the weights to obtain the desired output? There are methods for finding them, and we will now present the most common one.

4. The Backpropagation Algorithm

The backpropagation algorithm (Rumelhart and McClelland, 1986) is used in layered feed-forward ANNs. This means that the artificial neurons are organized in layers, and send their signals “forward”, and then the errors are propagated backwards. The network receives inputs by neurons in the input layer, and the output of the network is given by the neurons on an output layer. There may be one or more intermediate hidden layers. The backpropagation algorithm uses supervised learning, which means that we provide the algorithm with examples of the inputs and outputs we want the network to compute, and then the error (difference between actual and expected results) is calculated. The idea of the backpropagation algorithm is to reduce this error, until the ANN learns the training data. The training begins with random weights, and the goal is to adjust them so that the error will be minimal.



The activation function of the artificial neurons in ANNs implementing the backpropagation algorithm is a weighted sum (the sum of the inputs x_i multiplied by their respective weights w_{ji}):

A_j(\bar{x}, \bar{w}) = \sum_{i=0}^{n} x_i w_{ji}    (1)


We can see that the activation depends only on the inputs and the weights.

If the output function were the identity (output = activation), then the neuron would be called linear. But linear neurons have severe limitations. The most common output function is the sigmoidal function:

O_j(\bar{x}, \bar{w}) = \frac{1}{1 + e^{-A_j(\bar{x}, \bar{w})}}    (2)



The sigmoidal function is very close to one for large positive numbers, 0.5 at zero, and very close to zero for large negative numbers. This allows a smooth transition between the low and high output of the neuron (close to zero or close to one). We can see that the output depends only on the activation, which in turn depends on the values of the inputs and their respective weights.

Now, the goal of the training process is to obtain a desired output when certain inputs are given. Since the error is the difference between the actual and the desired output, the error depends on the weights, and we need to adjust the weights in order to minimize the error. We can define the error function for the output of each neuron:

E_j(\bar{x}, \bar{w}, d) = (O_j(\bar{x}, \bar{w}) - d_j)^2    (3)


We take the square of the difference between the output and the desired target because it will always be positive, and because it will be greater if the difference is big and smaller if the difference is small. The error of the network will simply be the sum of the errors of all the neurons in the output layer:

E(\bar{x}, \bar{w}, \bar{d}) = \sum_{j} (O_j(\bar{x}, \bar{w}) - d_j)^2    (4)


The backpropagation algorithm now calculates how the error depends on the output, inputs, and weights. After we find this, we can adjust the weights using the method of gradient descent:




\Delta w_{ji} = -\eta \frac{\partial E}{\partial w_{ji}}    (5)



This formula can be interpreted in the following way: the adjustment of each weight (\Delta w_{ji}) will be the negative of a constant eta (\eta) multiplied by the dependence of the previous weight on the error of the network, which is the derivative of E with respect to w_{ji}. The size of the adjustment will depend on \eta, and on the contribution of the weight to the error of the function. That is, if the weight contributes a lot to the error, the adjustment will be greater than if it contributes in a smaller amount. (5) is used until we find appropriate weights (until the error is minimal). If you do not know derivatives, don’t worry, you can see them for now as functions that we will replace right away with algebraic expressions. If you understand derivatives, derive the expressions yourself and compare your results with the ones presented here. If you are looking for a mathematical proof of the backpropagation algorithm, you are advised to check the suggested readings, since it is out of the scope of this material.

So, we “only” need to find the derivative of E with respect to w_{ji}. This is the goal of the backpropagation algorithm, since we need to achieve this backwards. First, we need to calculate how much the error depends on the output, which is the derivative of E with respect to O_j (from (3)).

\frac{\partial E}{\partial O_j} = 2(O_j - d_j)    (6)



And then, how much the output depends on the activation, which in turn depends on the weights (from (1) and (2)):

\frac{\partial O_j}{\partial w_{ji}} = \frac{\partial O_j}{\partial A_j} \cdot \frac{\partial A_j}{\partial w_{ji}} = O_j (1 - O_j) x_i    (7)




And we can see that (from (6) and (7)):


\frac{\partial E}{\partial w_{ji}} = \frac{\partial E}{\partial O_j} \cdot \frac{\partial O_j}{\partial w_{ji}} = 2(O_j - d_j)\, O_j (1 - O_j)\, x_i    (8)



And so, the adjustment to each weight will be (from (5) and (8)):

\Delta w_{ji} = -\eta \, 2(O_j - d_j)\, O_j (1 - O_j)\, x_i    (9)


We can use (9) as it is for training an ANN with two layers. Now, for training a network with one more layer we need to make some considerations. If we want to adjust the weights (let’s call them v_{ik}) of a previous layer, we first need to calculate how the error depends not on the weight, but on the input from the previous layer. This is easy: we would just need to replace x_i with w_{ji} in (7), (8), and (9). But we also need to see how the error of the network depends on the adjustment of v_{ik}. So:


\Delta v_{ik} = -\eta \frac{\partial E}{\partial v_{ik}} = -\eta \frac{\partial E}{\partial x_i} \cdot \frac{\partial x_i}{\partial v_{ik}}    (10)


Where:


\frac{\partial E}{\partial x_i} = \sum_{j} 2(O_j - d_j)\, O_j (1 - O_j)\, w_{ji}    (11)



And, assuming that there are inputs u_k into the neuron with weights v_{ik} (from (7)):

\frac{\partial x_i}{\partial v_{ik}} = x_i (1 - x_i)\, u_k, \quad \text{so} \quad \Delta v_{ik} = -\eta \left[ \sum_{j} 2(O_j - d_j)\, O_j (1 - O_j)\, w_{ji} \right] x_i (1 - x_i)\, u_k    (12)


If we want to add yet another layer, we can do the same, calculating how the error depends on the inputs and weights of the first layer. We should just be careful with the indexes, since each layer can have a different number of neurons, and we should not confuse them.

For practical reasons, ANNs implementing the backpropagation algorithm do not have too many layers, since the time for training the network grows exponentially. Also, there are refinements to the backpropagation algorithm which allow faster learning.

4.1. Exercise

If you know how to program, implement the backpropagation algorithm so that it will at least train the following network. If you can do a general implementation of the backpropagation algorithm, go ahead (for any number of neurons per layer, training sets, and even layers).

If you do not know how to program, but know how to use a mathematical assistant (such as Matlab or Mathematica), find weights which will suit the following network after defining functions which will ease your task.

If you do not have any computing experience, find the weights by hand.

The network for this exercise has three neurons in the input layer, two neurons in a hidden layer, and three neurons in the output layer. Usually networks are trained with large training sets, but for this exercise we will only use one training example. When the inputs are (1, 0.25, -0.5), the outputs should be (1, -1, 0). Remember to start with random weights. (A possible sketch of such an implementation is given below.)
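Below is one possible sketch of such an implementation (Python, using the update rules (9) and (11)-(12) derived above). Note one liberty taken: a sigmoidal output neuron can only produce values in (0, 1), so the exercise targets (1, -1, 0) are rescaled here to (1.0, 0.0, 0.5) purely so the demonstration can approach them.

    import math, random

    def sigmoid(a):
        return 1.0 / (1.0 + math.exp(-a))

    inputs, targets, eta = [1.0, 0.25, -0.5], [1.0, 0.0, 0.5], 0.5

    # random starting weights: V is input->hidden (2x3), W is hidden->output (3x2)
    V = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]

    for _ in range(10000):
        # forward pass
        h = [sigmoid(sum(V[i][k] * inputs[k] for k in range(3))) for i in range(2)]
        o = [sigmoid(sum(W[j][i] * h[i] for i in range(2))) for j in range(3)]
        # backward pass: delta terms from equations (9) and (11)-(12)
        d_o = [2 * (o[j] - targets[j]) * o[j] * (1 - o[j]) for j in range(3)]
        d_h = [sum(d_o[j] * W[j][i] for j in range(3)) * h[i] * (1 - h[i])
               for i in range(2)]
        for j in range(3):                         # adjust hidden->output weights
            for i in range(2):
                W[j][i] -= eta * d_o[j] * h[i]
        for i in range(2):                         # adjust input->hidden weights
            for k in range(3):
                V[i][k] -= eta * d_h[i] * inputs[k]

    print("outputs after training:", [round(x, 3) for x in o])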

Friday, October 28, 2011

digital signature




What is a Digital Signature?
An introduction to Digital Signatures, by David Youd





[Figure: Bob with his two keys, a Public Key and a Private Key]

Bob has been given two keys. One of Bob's keys is called a Public Key, the other is called a Private Key.

[Figure: Bob's co-workers Pat, Doug, and Susan. Anyone can get Bob's Public Key, but Bob keeps his Private Key to himself.]
Bob's Public key is available to anyone who needs it, but he keeps his Private Key to himself. Keys are used to encrypt information. Encrypting information means "scrambling it up", so that only a person with the appropriate key can make it readable again. Either one of Bob's two keys can encrypt data, and the other key can decrypt that data.
Susan (shown below) can encrypt a message using Bob's Public Key. Bob uses his Private Key to decrypt the message. Any of Bob's coworkers might have access to the message Susan encrypted, but without Bob's Private Key, the data is worthless.

"Hey Bob, how about lunch at Taco Bell. I hear they have free refills!"HNFmsEm6Un BejhhyCGKOK JUxhiygSBCEiC 0QYIh/Hn3xgiK BcyLK1UcYiY lxx2lCFHDC/A

HNFmsEm6Un BejhhyCGKOK JUxhiygSBCEiC 0QYIh/Hn3xgiK BcyLK1UcYiY lxx2lCFHDC/A"Hey Bob, how about lunch at Taco Bell. I hear they have free refills!"
With his private key and the right software, Bob can put digital signatures on documents and other data. A digital signature is a "stamp" Bob places on the data which is unique to Bob, and is very difficult to forge. In addition, the signature assures that any changes made to the data after it has been signed cannot go undetected.


To sign a document, Bob's software will crunch down the data into just a few lines by a process called "hashing". These few lines are called a message digest. (It is not possible to change a message digest back into the original data from which it was created.)
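For the curious, computing a message digest looks like this in practice (Python's standard hashlib; SHA-256 is used for illustration, not what software of this article's era would have used):

    import hashlib

    document = b"Important memo: lunch at noon, my treat. - Bob"
    digest = hashlib.sha256(document).hexdigest()
    print(digest)   # a fixed-size digest, regardless of the document's size
    # The process is one-way: the document cannot be recovered from the digest,
    # and changing even one byte of the document changes the digest completely.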


Bob's software then encrypts the message digest with his private key. The result is the digital signature.

Finally, Bob's software appends the digital signature to the document. All of the data that was hashed has been signed.

Bob now passes the document on to Pat.

First, Pat's software decrypts the signature (using Bob's public key) changing it back into a message digest. If this worked, then it proves that Bob signed the document, because only Bob has his private key. Pat's software then hashes the document data into a message digest. If the message digest is the same as the message digest created when the signature was decrypted, then Pat knows that the signed data has not been changed.
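The same sign-then-verify flow can be sketched with the third-party Python cryptography package (pip install cryptography). This mirrors the steps described above but is, of course, not the software the article refers to:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Bob's key pair
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    document = b"Hey Pat, the signed quarterly report is attached."

    # Bob: hash the document and encrypt the digest with his private key
    signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

    # Pat: check with Bob's public key; verify() raises an InvalidSignature
    # exception if the document or the signature was altered
    public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
    print("signature verified")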


Plot complication...

Doug (our disgruntled employee) wishes to deceive Pat. Doug makes sure that Pat receives a signed message and a public key that appears to belong to Bob. Unbeknownst to Pat, Doug deceitfully sent a key pair he created using Bob's name. Short of receiving Bob's public key from him in person, how can Pat be sure that Bob's public key is authentic?

It just so happens that Susan works at the company's certificate authority center. Susan can create a digital certificate for Bob simply by signing Bob's public key as well as some information about Bob.

Bob Info:
    Name
    Department
    Cubicle Number
Certificate Info:
    Expiration Date
    Serial Number
Bob's Public Key:
    (the key itself)

Now Bob's co-workers can check Bob's trusted certificate to make sure that his public key truly belongs to him. In fact, no one at Bob's company accepts a signature for which there does not exist a certificate generated by Susan. This gives Susan the power to revoke signatures if private keys are compromised, or no longer needed. There are even more widely accepted certificate authorities that certify Susan.
Let's say that Bob sends a signed document to Pat. To verify the signature on the document, Pat's software first uses Susan's (the certificate authority's) public key to check the signature on Bob's certificate. Successful decryption of the certificate proves that Susan created it. After the certificate is decrypted, Pat's software can check that Bob is in good standing with the certificate authority and that none of the certificate information concerning Bob's identity has been altered.
Pat's software then takes Bob's public key from the certificate and uses it to check Bob's signature. If Bob's public key decrypts the signature successfully, then Pat is assured that the signature was created using Bob's private key, for Susan has certified the matching public key. And of course, if the signature is valid, then we know that Doug didn't try to change the signed content.

Although these steps may sound complicated, they are all handled behind the scenes by Pat's user-friendly software. To verify a signature, Pat need only click on it.


(c) 1996, David Youd
Permission to change or distribute is at the discretion of the author