Microprocessor and Computer Architecture Concepts Quiz
Fundamental of Microprocessor
Fundamentals of Microprocessors (5 hrs)
A microprocessor is the heart of a computer system, capable of performing arithmetic, logic, control, and input/output operations. It is a small, integrated circuit (IC) that can execute a set of instructions stored in memory. Microprocessors are the key components in modern computing devices, including computers, smartphones, embedded systems, and many other electronic devices.
1. Introduction to Microprocessors
A microprocessor is a programmable device that interprets and executes instructions to perform tasks. It processes data and controls the flow of information between different components of the system.
- Definition: A microprocessor is a single integrated circuit (IC) that contains all the necessary components for performing the functions of a central processing unit (CPU).
- Components of a Microprocessor:
- Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations.
- Control Unit (CU): Directs the operations of the microprocessor by interpreting instructions.
- Registers: Temporary storage locations used to hold data and instructions.
- Bus Interface: Handles communication between the microprocessor and other components like memory and I/O devices.
Microprocessors have become the backbone of modern electronics, and their design and architecture are essential for understanding how computers work.
2. Microprocessor Systems with Bus Organization
A microprocessor system consists of a microprocessor, memory, and peripheral devices. These systems communicate with each other using a bus, a set of wires or signal paths that transfer data between components.
Bus Types:
- Data Bus: Transfers data between components.
- Address Bus: Carries the address of data in memory.
- Control Bus: Carries control signals that manage the timing and sequence of operations.
Bus Organization:
Microprocessor systems use a system bus to transfer information. The microprocessor interacts with memory and I/O devices through this bus.
- Memory Bus: The microprocessor communicates with the memory to fetch or store data.
- I/O Bus: Connects the microprocessor to external peripherals like keyboards, displays, etc.
The bus organization can be classified into:
- Single Bus System: A single shared bus carries the data, address, and control signals.
- Multiple Bus System: Different buses for data, address, and control signals.
3. Microprocessor Architecture and Operation
The architecture of a microprocessor defines how its internal components are organized and how they interact with each other to process instructions.
Basic Architecture:
- ALU (Arithmetic Logic Unit): Responsible for performing arithmetic and logical operations like addition, subtraction, AND, OR, etc.
- CU (Control Unit): Directs the flow of data by controlling the operations of the ALU, memory, and I/O devices.
- Registers: Temporary storage for data, addresses, and intermediate results.
- Clock: Generates timing signals to synchronize the operations of the microprocessor.
The operation of a microprocessor involves fetching instructions from memory, decoding them, executing the operation, and then writing back results to memory or I/O devices.
4. 8085 Microprocessor and Its Operation
The 8085 Microprocessor is an 8-bit microprocessor developed by Intel in 1976. It has 40 pins and operates on a single 5V power supply. It was widely used in early computers and embedded systems.
Key Features:
- 8-bit data bus (can process 8-bit data at a time).
- 16-bit address bus (can address up to 64 KB of memory).
- Clock speed of up to 3 MHz.
- 74 instructions and 246 opcodes.
- 5 types of machine cycles and 5 addressing modes.
Operation:
The 8085 performs basic operations in a sequence of steps called the instruction cycle. These steps include:
- Fetch: Fetch the instruction from memory.
- Decode: Decode the instruction to determine what operation to perform.
- Execute: Perform the operation (e.g., arithmetic, logical, data transfer).
- Write-back: Write the result back to memory or a register.
5. 8085 Instruction Cycle, Machine Cycle, T States
- Instruction Cycle: The time required to fetch and execute one instruction is known as the instruction cycle. An instruction cycle consists of one or more machine cycles, each made up of several T states.
- Machine Cycle: A machine cycle is a basic operation the microprocessor performs while executing an instruction, such as fetching an opcode or reading a byte from memory. The 8085 has 5 types of machine cycles:
  - Opcode Fetch Cycle
  - Memory Read Cycle
  - Memory Write Cycle
  - I/O Read Cycle
  - I/O Write Cycle
- T States: Each machine cycle is further divided into smaller time units called T states. Each T state corresponds to one clock cycle. For example, the Opcode Fetch Cycle requires 4 T states.
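As a worked timing illustration (using the standard 8085 timing figures), consider the two-byte instruction MVI A, 32H:
MVI A, 32H ; opcode fetch (4 T states) + memory read of the data byte (3 T states) = 7 T states
           ; at a 3 MHz clock one T state is about 0.33 µs, so the instruction takes roughly 2.3 µs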
6. Addressing Modes in 8085
Addressing modes define how the operands (data) of an instruction are specified. The 8085 supports several addressing modes:
- Immediate Addressing: The operand is specified in the instruction itself.
  - Example: MVI A, 32H (Load 32H into register A).
- Register Addressing: The operand is located in a register.
  - Example: MOV A, B (Copy the contents of register B to A).
- Direct Addressing: The operand is in a specific memory location, given by a 16-bit address in the instruction.
  - Example: LDA 2000H (Load the accumulator with data from memory location 2000H).
- Register Indirect Addressing: The operand's address is held in a register pair (usually HL), and the instruction refers to that pair.
  - Example: MOV A, M (Copy data from the memory location pointed to by the HL pair to register A).
- Implied (Implicit) Addressing: The operand is implied by the instruction and does not need to be specified.
  - Example: NOP (No operation).
7. Introduction to 8086 Microprocessor
The 8086 Microprocessor is a 16-bit processor developed by Intel in 1978. It is the first processor of the x86 architecture family, which has since been used in many modern processors. The 8086 is capable of handling 16-bit data and supports 20-bit addressing, allowing it to address up to 1 MB of memory.
Key Features:
- 16-bit data bus: Can process 16 bits of data at a time.
- 20-bit address bus: Can address up to 1MB (1024 KB) of memory.
- Clock speed: Typically operates at clock speeds of 5 MHz, 8 MHz, or 10 MHz.
- Segmentation: Supports memory segmentation for organizing and managing memory more efficiently (a worked address-calculation example follows this list).
- Instruction Set: Supports a larger and more complex instruction set compared to the 8085.
- Pipelined fetch: The bus interface unit prefetches upcoming instructions into an internal queue while the execution unit runs the current one, allowing faster processing.
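As a worked example of segmented addressing (the segment and offset values below are arbitrary), the 8086 forms a 20-bit physical address by shifting the 16-bit segment value left by four bits (multiplying it by 10H) and adding the 16-bit offset:
Physical address = Segment x 10H + Offset
                 = 2000H x 10H + 0350H
                 = 20000H + 0350H
                 = 20350H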
Operation:
The 8086 microprocessor operates in two modes:
- Minimum Mode: The processor operates on its own as the only processor in the system.
- Maximum Mode: The processor works with additional processors in a multi-processor system.
Summary:
- Microprocessor: A central processing unit on a single chip that executes instructions.
- Microprocessor System with Bus Organization: Uses buses for communication between the microprocessor, memory, and I/O devices.
- 8085 Microprocessor: An 8-bit microprocessor with a 16-bit address bus, capable of addressing up to 64KB of memory, with various addressing modes and instructions.
- Instruction Cycle, Machine Cycle, and T States: Defines the operations and timing of instructions in the 8085 microprocessor.
- 8086 Microprocessor: A 16-bit microprocessor with advanced features like memory segmentation and a larger instruction set, paving the way for modern computing architectures.
Understanding these basic concepts is essential for working with microprocessors, whether for embedded systems or advanced computing applications.
Introduction To Assembly Language Programming
Introduction to Assembly Language Programming (10 hrs)
Assembly language programming involves writing instructions in a symbolic form that directly corresponds to the machine code understood by a computer's microprocessor. It serves as a low-level programming language that is much closer to machine code compared to high-level programming languages. In this note, we will cover the basics of assembly language programming, the classification of instructions and addressing modes, 8085 instruction sets, and various practical aspects like assembling, executing, debugging programs, developing counters, time delay routines, and interfacing concepts.
1. Assembly Language Programming Basics
Assembly language is a human-readable representation of the binary instructions executed by the microprocessor. Each assembly language instruction corresponds to one machine language instruction.
Basic Components of Assembly Language:
- Mnemonics: Human-readable instruction representations (e.g., MOV, ADD, SUB) used instead of binary opcodes.
- Operands: The data or memory locations used by instructions (e.g., registers, memory addresses).
- Labels: Identifiers used to refer to specific locations in code, making it easier to manage control flow.
- Directives: Special instructions for the assembler, such as defining data or setting memory locations.
Assembly language programs are written for specific processors (e.g., 8085, 8086, etc.), and the instructions vary between different processors.
Benefits of Assembly Language:
- Control: Provides more direct control over hardware.
- Efficiency: Programs can be optimized for speed and size.
- Understanding of Hardware: Helps understand how the CPU and memory work at a low level.
Limitations:
- Complexity: Assembly language programs are more difficult to write and maintain than high-level languages.
- Portability: Assembly language programs are specific to one type of processor and cannot be easily transferred to different hardware.
2. Classification of Instructions and Addressing Modes
Classification of Instructions:
Instructions in assembly language can be classified into various types depending on the operation they perform:
- Data Transfer Instructions: These instructions move data from one location to another.
  - Example: MOV A, B (Move the contents of register B into register A).
- Arithmetic Instructions: These instructions perform arithmetic operations such as addition, subtraction, multiplication, and division.
  - Example: ADD A, B (Add the contents of register B to register A).
- Logical Instructions: Logical instructions perform logical operations like AND, OR, XOR, etc.
  - Example: AND A, B (Logical AND operation between A and B).
- Control Instructions: These instructions control the program flow, such as jumping to another location or halting the program.
  - Example: JMP (Jump to a specified address), HLT (Halt execution).
- Branching Instructions: These are conditional and unconditional branch instructions used for altering the flow of the program.
  - Example: JZ (Jump if zero), JC (Jump if carry).
- Stack Instructions: These instructions are used to work with the stack, such as pushing and popping data.
  - Example: PUSH (Push data onto the stack), POP (Pop data from the stack).
Addressing Modes:
Addressing modes specify how the operand of an instruction is accessed. The 8085 microprocessor supports several addressing modes:
- Immediate Addressing: The operand is directly specified in the instruction.
  - Example: MVI A, 32H (Move the immediate value 32H to register A).
- Register Addressing: The operand is in a register.
  - Example: MOV A, B (Move data from register B to register A).
- Direct Addressing: The operand is at a specific memory location.
  - Example: LDA 2050H (Load the accumulator with data from memory location 2050H).
- Register Indirect Addressing: The operand's address is supplied indirectly through a register pair that points to a memory location.
  - Example: MOV A, M (Move data from the memory location pointed to by the HL pair to register A).
3. 8085 Instruction Set
The 8085 microprocessor has a wide range of instructions that allow it to perform various operations, including data transfer, arithmetic, logic, branching, and I/O operations.
Common 8085 Instructions:
- MOV (Move): Transfers data between registers.
  - Example: MOV A, B (Move the contents of register B to register A).
- ADD (Addition): Adds the contents of a register or memory location to the accumulator.
  - Example: ADD B (Add the contents of register B to the accumulator).
- SUB (Subtraction): Subtracts the contents of a register or memory location from the accumulator.
  - Example: SUB B (Subtract the contents of register B from the accumulator).
- JMP (Jump): Alters the program flow by jumping to a specified address.
  - Example: JMP 2050H (Jump to address 2050H).
- HLT (Halt): Stops program execution.
  - Example: HLT (Halt execution).
Instruction Set Categories:
- Data transfer instructions (MOV, MVI, LXI, etc.).
- Arithmetic instructions (ADD, SUB, INR, DCR, etc.).
- Logical instructions (ANA, ORA, CMP, etc.).
- Branching instructions (JMP, CALL, RET, etc.).
- Control instructions (NOP, HLT).
4. Assembling, Executing, and Debugging the Programs
Assembling:
Assembly language programs need to be translated into machine code before execution. This is done using an assembler, a tool that converts the symbolic instructions into binary code. The assembler takes the source code and generates an object file, which can be loaded into memory for execution.
Steps to Assemble:
- Write the assembly language program using an editor.
- Assemble the code using the assembler to generate machine code.
- Load the machine code into the microprocessor's memory.
- Execute the program on the hardware or in a simulator.
Executing:
Once the program is assembled, it is loaded into memory and executed. In the case of the 8085 microprocessor, instructions are fetched, decoded, and executed sequentially, unless control instructions (like JMP) alter the flow.
Debugging:
Debugging involves finding and correcting errors in the program. Common tools for debugging include:
- Simulators: Allow you to simulate the execution of the program step-by-step.
- Breakpoints: Allow stopping the execution at specific points to examine values.
- Step-by-step Execution: Allows running the program instruction-by-instruction to monitor the program's behavior.
5. Developing Counters and Time Delay Routines
Counters and time delay routines are common in assembly language programming. These routines are typically used to control the timing and repetition of certain actions.
Developing Counters:
Counters are used to repeat an operation a certain number of times. A typical counter program could use the INR or DCR instructions to increment or decrement a register, along with conditional branching to repeat operations.
Example of a Simple Counter:
MVI C, 05H ; Load 5 into register C (loop counter)
LOOP:
CALL DELAY ; Call the time delay subroutine
DCR C ; Decrement the counter
JNZ LOOP ; Repeat until the counter reaches zero
HLT
Developing Time Delay Routines:
A time delay can be produced with a loop that does nothing useful for a known number of iterations. The length of the delay depends on the loop count and the clock speed of the microprocessor.
Example of a Simple Time Delay:
DELAY:
MVI B, 0FFH ; Load 255 into B (delay loop counter)
WAIT:
NOP ; No operation (consume time)
DCR B ; Decrement B
JNZ WAIT ; If B is not zero, jump back to WAIT
RET ; Return to the caller
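The length of such a delay can be estimated from the T-state counts of the instructions in the loop. As a rough estimate (assuming a 3 MHz clock, so one T state ≈ 0.33 µs, and the standard 8085 timings NOP = 4 T, DCR = 4 T, JNZ = 10 T when taken):
T states per pass ≈ 4 (NOP) + 4 (DCR) + 10 (JNZ) = 18
Total delay ≈ 255 passes x 18 T states x 0.33 µs ≈ 1.5 ms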
6. Interfacing Concepts
Interfacing in assembly language programming refers to the connection between the microprocessor and external devices like memory, sensors, displays, etc. The microprocessor can interact with external devices via input/output (I/O) operations.
Types of Interfacing:
- Memory Interfacing: Connecting RAM or ROM to the microprocessor.
- I/O Interfacing: Connecting external devices (like sensors, motors, displays) using I/O ports.
Interfacing concepts involve understanding how the microprocessor communicates with external hardware using the address, data, and control buses.
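A minimal 8085 sketch of I/O interfacing (the port addresses 01H and 02H are assumptions for an input and an output device):
IN 01H ; Read a byte from the input device at port 01H into the accumulator
CMA ; Process the data (here it is simply complemented)
OUT 02H ; Send the result to the output device at port 02H
HLT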
Summary:
- Assembly Language Basics: Assembly is a low-level language that uses mnemonics to represent machine instructions.
- Instruction Classification & Addressing Modes: Instructions are classified into categories like data transfer, arithmetic, logical, etc., with various addressing modes.
- 8085 Instruction Set: The 8085 provides data transfer, arithmetic, logical, branching, and control instructions (MOV, MVI, ADD, SUB, JMP, HLT, etc.).
- Assembling, Executing, and Debugging: Programs are assembled into machine code, loaded into memory, executed, and debugged with simulators, breakpoints, and step-by-step execution.
- Counters and Time Delays: Register-based loops with INR/DCR and conditional jumps implement counting and timing routines.
- Interfacing: The microprocessor communicates with memory and I/O devices through the address, data, and control buses.
Basic Computer Architecture
Basic Computer Architecture (4 hrs)
Computer architecture refers to the design and organization of a computer's fundamental components, including the processor, memory, storage, and communication systems. It provides the blueprint for how a computer functions and how its components interact to execute instructions and process data. In this note, we will explore the history of computer architecture, memory hierarchy, instruction codes, and other essential components like registers, bus systems, and control mechanisms.
1. Introduction to Computer Architecture
History of Computer Architecture:
The concept of computer architecture dates back to the early 20th century, when the first general-purpose computers were developed. Over time, computer systems evolved from mechanical devices to electronic systems, leading to the development of modern computer architecture.
- Early Computers: The first computers, like Charles Babbage's Analytical Engine, were designed for specific calculations. These machines used mechanical components and lacked the flexibility of modern electronic systems.
- Stored Program Concept (1940s): John von Neumann's architecture introduced the concept of storing program instructions in memory alongside data. This concept forms the foundation of most modern computers.
- Microprocessor Era (1970s and Beyond): The development of microprocessors in the 1970s led to the miniaturization of computers and the advent of personal computers, introducing new architecture concepts like RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing).
Overview of Computer Organization:
Computer organization refers to the physical structure and functional components of a computer system. It involves how the processor, memory, and input/output devices are interconnected.
Key components of computer organization:
- Central Processing Unit (CPU): The CPU performs the primary computational work, executing instructions, performing calculations, and managing data flow.
- Memory: Memory stores both program instructions and data for fast access by the CPU. Different types of memory include RAM, ROM, and cache.
- Input/Output Devices (I/O): These allow the computer to interact with the outside world (e.g., keyboards, displays, printers).
2. Memory Hierarchy and Cache
The memory hierarchy refers to the arrangement of different types of memory in a computer system, each with its own speed and capacity. The hierarchy ensures that the CPU has quick access to data and instructions, improving system performance.
Levels of Memory Hierarchy:
- Registers: Small, high-speed storage locations located within the CPU. They are the fastest form of memory but have limited capacity.
- Cache Memory: A small amount of very fast memory located close to the CPU, used to store frequently accessed data and instructions. It reduces the time needed to access data from slower main memory.
- L1 Cache: Located inside the CPU, it is the fastest and smallest cache.
- L2 Cache: Located outside the CPU but still close, it is larger and slower than L1 cache.
- Main Memory (RAM): The primary memory of the computer, used to store programs and data during execution. It is larger but slower compared to cache.
- Secondary Storage: Devices like hard drives and SSDs that provide long-term storage for data and programs. These are much slower than RAM but offer much higher storage capacity.
Cache Memory:
Cache memory is critical for improving the performance of a computer. When the CPU needs data, it first checks the cache memory before accessing slower memory like RAM. If the data is found in the cache, this is called a cache hit. If not, the data is fetched from the slower memory, and the cache is updated with the new data.
3. Organization of Hard Disk
A hard disk is a non-volatile storage device that uses magnetic storage to store data on rotating platters. It is used for long-term data storage and can hold large amounts of information.
Key Components of a Hard Disk:
- Platters: Circular disks coated with a magnetic material. Each platter can store data in the form of magnetic fields.
- Spindle: The motor that rotates the platters at high speed.
- Read/Write Heads: These heads move over the platters to read data from or write data to the disk.
- Tracks and Sectors: The surface of each platter is divided into concentric circles called tracks. Each track is further divided into smaller units called sectors, which store data.
Hard Disk Organization:
- Cylinders: A collection of tracks at the same position on each platter.
- Clusters: Groups of sectors that the operating system treats as a unit of storage.
- Seek Time: The time it takes for the read/write head to position itself over the correct track.
- Latency: The time it takes for the desired sector to rotate under the read/write head.
4. Instruction Codes and Stored Program Organization
Stored Program Organization:
In the stored program concept, both data and program instructions are stored in memory. This allows the CPU to fetch and execute instructions sequentially, making it much more flexible than earlier punched-card systems.
- Memory Location for Instructions: The instructions of a program are stored in the memory. The CPU fetches and decodes each instruction one at a time from memory.
- Program Counter (PC): The program counter keeps track of the memory address of the next instruction to be executed.
- Control Unit: The control unit manages the fetching and decoding of instructions and coordinates the operations of other components.
Indirect Addressing:
Indirect addressing refers to a mode where the memory location of the operand is stored in a register or memory location, rather than directly in the instruction.
- Example: If the instruction contains the address of a memory word that itself holds the address of the operand, the CPU first reads that word and then accesses the operand at the address it contains; this is indirect addressing.
5. Computer Registers
Registers are small, high-speed storage locations within the CPU. They hold data temporarily during processing. Registers can store intermediate results of calculations, memory addresses, or control information.
Types of Registers:
- Accumulator (A): Stores the result of arithmetic and logical operations.
- Program Counter (PC): Holds the memory address of the next instruction to be executed.
- Instruction Register (IR): Holds the current instruction being decoded and executed.
- Memory Address Register (MAR): Holds the address of the memory location being accessed.
- Memory Buffer Register (MBR): Holds data read from or written to memory.
6. Common Bus System
A common bus system is used to transfer data between various components like the CPU, memory, and I/O devices. It is a shared pathway for data transmission, and multiple components can use the bus to communicate with each other.
Components of a Bus:
- Data Bus: Carries the data between components.
- Address Bus: Carries the memory address from the CPU to memory or I/O devices.
- Control Bus: Carries control signals that manage the operations of the computer system, such as read or write commands.
7. Instruction Set
An instruction set is a collection of all the instructions that a microprocessor can execute. The instruction set defines the operations that can be performed by the CPU, including data manipulation, control, and input/output operations.
- Types of Instructions: Common instructions include data transfer (MOV), arithmetic (ADD), logic (AND), and control instructions (JMP).
- Instruction Format: Each instruction consists of an operation code (opcode) and the operand(s), which can be a register, memory address, or immediate value.
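As a concrete 8085 illustration of this format, the two-byte instruction MVI A, 32H assembles into the opcode byte followed by the immediate operand:
MVI A, 32H -> 3E 32 (opcode 3EH, immediate operand 32H)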
8. Timing and Control - Instruction Cycle
The instruction cycle refers to the steps the CPU goes through to fetch, decode, and execute instructions. It involves multiple machine cycles, and the control unit manages the timing and sequencing of these steps.
Instruction Cycle Steps:
- Fetch Cycle: The instruction is fetched from memory into the instruction register.
- Decode Cycle: The instruction is decoded to determine what action needs to be taken.
- Execute Cycle: The action specified by the instruction is carried out.
Machine Cycle:
A machine cycle consists of multiple states, including the fetch, decode, and execute phases, and each phase requires a certain number of clock cycles.
Control Unit:
The control unit coordinates the operations of the CPU, generating the control signals required for fetching, decoding, and executing instructions.
Summary:
- Computer Architecture: Refers to the design and organization of computer components like the CPU, memory, and I/O systems.
- Memory Hierarchy: Includes registers, cache memory, RAM, and secondary storage, each with different speeds and storage capacities.
- Hard Disk Organization: Involves platters, tracks, sectors, and clusters, used for long-term data storage.
- Instruction Set: A collection of instructions that the CPU can execute, defining the machine's capabilities.
- Instruction Cycle: Describes the process of fetching, decoding, and executing instructions, managed by the control unit.
Understanding computer architecture is fundamental for optimizing computer performance, developing hardware, and understanding how software interacts with hardware.
Microprogrammed Control
Microprogrammed Control (10 hrs)
Microprogrammed control is a method used in designing the control unit of a computer. It allows the control signals of the processor to be generated by executing a sequence of microinstructions stored in memory. The key advantage of this approach is the flexibility and ease of modifying the control logic without needing to physically rewire the system. In this note, we will explore the basic design of accumulator-based systems, the organization of the Arithmetic Logic Unit (ALU), control memory, microprogramming, and the design of control units.
1. Basic Computer Design of Accumulator
In a basic accumulator-based computer design, the accumulator (AC) serves as a temporary storage location for data that is being processed by the CPU. The accumulator is integral to performing arithmetic operations, and it interacts with other components like the Arithmetic Logic Unit (ALU) and control unit.
Control of Accumulator Register:
The accumulator register is directly controlled by the control unit. The control unit generates signals to load data into the accumulator, clear the accumulator, or transfer data between the accumulator and memory.
- Loading the accumulator: When data is loaded into the accumulator, the control unit generates a signal that connects the appropriate data bus to the accumulator.
- Clearing the accumulator: The control unit can issue a signal to reset the accumulator to zero.
- Transfer to/from memory: The control unit will enable a transfer of data between the accumulator and memory.
2. ALU Organization
The Arithmetic Logic Unit (ALU) is responsible for performing all arithmetic and logical operations in a computer. In a microprogrammed control system, the ALU organization is crucial because it determines how the CPU can perform operations like addition, subtraction, AND, OR, and comparison.
ALU Operations:
- Arithmetic Operations: These include addition, subtraction, multiplication, and division. The ALU performs these operations on the data stored in the registers or memory.
- Logical Operations: These include AND, OR, NOT, and XOR operations, typically used for bitwise manipulation and comparison.
Control of ALU:
- The ALU operation is controlled by microinstructions. Each microinstruction corresponds to a specific ALU operation. The control unit sets the appropriate control signals to determine which ALU operation should be performed.
- The ALU may also take inputs from registers (e.g., the accumulator) or memory and will output the result to a register or memory location.
3. Control Memory
Control memory is a specialized memory in a microprogrammed control unit that stores microprograms (sequences of microinstructions) to control the operation of the computer. Control memory can be either ROM (Read-Only Memory) or RAM (Random Access Memory), but ROM is typically used because the microprogram is fixed and must be retained when power is removed (non-volatile).
Control Memory Functions:
- Stores microinstructions that tell the control unit which signals to generate at each step of instruction execution.
- The address sequencing logic determines the sequence in which microinstructions are fetched and executed.
Address Sequencing:
The address sequencing logic generates the address for the next microinstruction to be fetched from control memory. This is typically done using a microprogram sequencer, which controls the flow of microinstructions based on conditions such as:
- Next address: The next address could be the next sequential address or the address of a jump instruction (conditional branching).
- Branching: Conditional branching is used to decide the next address based on the execution of specific conditions or flags (e.g., zero flag, carry flag).
4. Conditional Branching in Microprogramming
Conditional branching is a key concept in microprogramming, allowing the control unit to change the sequence of microinstructions based on the outcome of an operation.
- Conditional Branching: If a condition is met (e.g., a comparison between two values), the control unit can alter the normal sequence of microinstructions by jumping to a different microinstruction address.
- Example: If an arithmetic operation results in zero, the microprogram can branch to a different set of instructions to handle this specific condition, such as skipping further operations or setting a flag.
In microprogramming, the branching is typically controlled by the status flags, such as the Zero Flag, Carry Flag, and Sign Flag, which are updated after every operation.
5. Mapping of Instruction-Subroutines
In microprogramming, complex instructions (which are typically divided into multiple machine-level instructions) can be handled using subroutines. These are sets of microinstructions that can be reused and mapped to specific machine instructions.
- Instruction Mapping: Machine instructions are mapped to a sequence of microinstructions, which are fetched from control memory.
- Subroutines: A microprogram may call subroutines for repeated tasks (e.g., adding numbers, fetching data from memory, etc.).
- Microinstructions: Each machine instruction is mapped to a sequence of microinstructions that control the operations in the CPU. These microinstructions are stored in control memory.
6. Microprogram: Symbolic and Binary Microprogram
Microprogramming involves writing sequences of microinstructions that the control unit uses to generate control signals. These microinstructions can be written in different forms:
Symbolic Microprogram:
- A symbolic microprogram is written using mnemonics, making it easier to understand and modify. Each symbolic microinstruction corresponds to a control signal or a set of control signals.
  - Example: LOAD AC, R1 could symbolize the microinstruction that loads the accumulator with data from register R1.
Binary Microprogram:
- A binary microprogram is the machine-readable form of the microprogram. The symbolic instructions are converted into binary code that the control unit can execute.
  - Example: A symbolic microinstruction like LOAD AC, R1 would be converted into a specific binary value that represents the control signals to activate the appropriate pathways.
The microprogram is stored in control memory, and each instruction in the microprogram corresponds to a unique control word that tells the control unit what operations to perform.
7. Design of Control Unit
The control unit in a computer system is responsible for generating the necessary signals to manage the operation of the CPU. In microprogrammed control, the control unit generates these signals by executing a sequence of microinstructions stored in control memory.
Basic Requirements of Control Unit:
- The control unit must generate signals that coordinate the operation of other components, such as the ALU, registers, and memory.
- It must support sequential control, conditional branching, and the ability to load and execute microinstructions from control memory.
Structure of Control Unit:
- The control unit has two key parts: the microprogram sequencer and the control memory.
- The microprogram sequencer fetches microinstructions from control memory and generates the sequence of addresses for subsequent microinstructions based on the instruction cycle.
- The control memory stores the microprogram, which consists of the microinstructions needed to control the operation of the computer.
8. Micro Program Sequencer
The microprogram sequencer (or microinstruction sequencer) is responsible for managing the flow of microinstructions. It determines the sequence in which microinstructions are executed based on the instruction cycle and control signals.
Functions of the Microprogram Sequencer:
- Sequencing: It controls the flow of execution, ensuring that microinstructions are fetched in the correct order.
- Branching: It handles conditional jumps, enabling the control unit to change the flow of execution based on flags or conditions.
- Addressing: The sequencer generates the next address to fetch from control memory, either sequentially or through a branch.
The microprogram sequencer plays a key role in determining whether to fetch the next microinstruction or jump to a different part of the microprogram.
Summary
- Microprogrammed Control: A method of controlling the CPU by using sequences of microinstructions stored in control memory.
- Accumulator Design: The accumulator serves as a temporary data storage and interacts with the ALU and control unit.
- ALU Organization: The ALU performs arithmetic and logical operations and is controlled by microinstructions.
- Control Memory: Stores the microprogram and is accessed by the control unit to generate control signals.
- Conditional Branching: Allows the microprogram to change its execution flow based on conditions, like flags.
- Microprogram: Consists of symbolic or binary instructions that control the CPU's operations.
- Design of Control Unit: The control unit fetches and executes microinstructions from control memory, with the microprogram sequencer ensuring correct sequencing and branching.
Microprogramming provides flexibility in designing control units, as the control logic can be easily modified by changing the microprogram, offering a higher level of abstraction compared to hardwired control systems.
Central Processing Unit
Central Processing Unit (10 hrs)
The Central Processing Unit (CPU) is the brain of the computer where most processing takes place. It executes instructions from programs, manages data flow between memory and peripherals, and coordinates the operations of other components within the system. The design and functionality of the CPU are key to understanding how computers operate and how programs are executed.
In this note, we will cover the following topics in detail:
- General Register Organization
- Data Transfer and Manipulation
- Program Control
1. General Register Organization
The general register organization is the architecture that defines how the CPU stores and manipulates data. It includes various types of registers such as general-purpose registers, status registers, and specialized registers like the program counter and stack pointer.
Control Word:
A control word is a binary word that contains the control signals necessary to perform an operation. In the context of the CPU, the control word determines how the CPU interacts with memory and I/O devices. It specifies what actions should be taken, such as whether data should be read from or written to a specific register or memory location.
The control word is generally composed of:
- Opcode: Defines the operation to be performed.
- Operand: Specifies the registers or memory locations involved.
- Control Signals: Specifies other signals to manage timing and operations, like read/write control, and other logic that controls the operations in the CPU.
Stack Organization and Instructions:
The stack is a special area in memory used to store temporary data. It follows a Last-In, First-Out (LIFO) order for data access, meaning the most recently pushed item is the first one to be popped out.
Stack operations include:
- Push: Puts data onto the top of the stack.
- Pop: Removes data from the top of the stack.
A stack is primarily used to store return addresses during function calls, and local variables in certain programming models.
Stack Pointer (SP): This special-purpose register keeps track of the top of the stack. It automatically adjusts when data is pushed or popped.
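A minimal 8085 sketch of stack use (the stack-top address 3FFFH is an arbitrary assumption):
LXI SP, 3FFFH ; Initialize the stack pointer
LXI H, 1234H ; Put a 16-bit value in the HL pair
PUSH H ; Push HL onto the stack (SP decreases by 2)
POP D ; Pop the same word into the DE pair (SP increases by 2)
HLT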
Instruction Formats: Instruction formats define how machine instructions are structured. They specify the length of different parts of an instruction, such as the operation code (opcode) and operands.
An instruction format typically includes:
- Opcode: Specifies the operation to be performed (e.g., ADD, MOV).
- Operand(s): Specifies the data to be operated on, which can be in the form of a register, memory address, or immediate value.
- Addressing Mode: Specifies how the operands are accessed (e.g., direct, indirect, or register addressing).
Addressing Modes:
Addressing modes determine how the CPU interprets the operands in an instruction. Common addressing modes include:
- Immediate Addressing: The operand is directly provided in the instruction.
- Register Addressing: The operand is in a register.
- Direct Addressing: The operand is in a specific memory location, identified by the address field of the instruction.
- Indirect Addressing: The operand is at a memory location pointed to by a register or memory address.
2. Data Transfer and Manipulation
Data transfer and manipulation refer to the instructions used by the CPU to move and process data. These instructions allow the CPU to perform arithmetic, logical, and bitwise operations, as well as control the movement of data within the CPU and between memory and I/O devices.
Data Transfer Instructions:
Data transfer instructions are used to move data between registers, memory, and I/O devices.
- MOV: The MOV instruction is used to move data from one register to another, or between memory and a register.
  - Example: MOV A, B (Move the contents of register B to register A).
- LOAD/STORE: These instructions are used to load data from memory into registers (LOAD), or store data from registers into memory (STORE).
  - Example: LOAD A, 1000 (Load data from memory address 1000 into register A).
Data Manipulation Instructions:
Data manipulation instructions modify the contents of registers or memory.
- Arithmetic Instructions: These instructions are used to perform arithmetic operations on data stored in registers or memory.
  - ADD: Adds two operands.
  - SUB: Subtracts one operand from another.
  - MUL: Multiplies two operands.
  - DIV: Divides one operand by another.
  - Example: ADD A, B (Add the contents of register B to register A).
- Logical and Bit Manipulation Instructions: These instructions perform logical operations like AND, OR, XOR, NOT, and shifts on data at the bit level.
  - AND: Performs a bitwise AND operation.
  - OR: Performs a bitwise OR operation.
  - XOR: Performs a bitwise XOR operation.
  - NOT: Performs a bitwise NOT operation.
  - Example: AND A, B (Perform a bitwise AND between registers A and B).
Shift Instructions:
Shift instructions manipulate data by shifting the bits in a register to the left or right. They are useful for multiplying or dividing numbers by powers of two.
- Logical Shift Left (LSL): Shifts all bits to the left, with 0s filling in the vacated positions.
- Logical Shift Right (LSR): Shifts all bits to the right, with 0s filling in the vacated positions.
- Arithmetic Shift: Similar to logical shift but preserves the sign bit (for signed numbers).
- Rotate: Bits are rotated around the end of the register.
Example: SHL A, 1 (Shift the contents of register A left by one bit).
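The 8085 itself has no general shift instruction, only the rotate instructions (RLC, RRC, RAL, RAR); a logical left shift of the accumulator by one bit is usually done by adding the accumulator to itself:
MVI A, 05H ; A = 0000 0101
ADD A ; A = A + A = 0000 1010 (left shift by one; the old bit 7 goes to the Carry flag)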
3. Program Control
Program control refers to the instructions that alter the normal sequence of program execution. These instructions help manage the flow of control in a program by enabling conditional branching, function calls, and interrupt handling.
Status Bit Conditions:
Status bits (or flags) are set or cleared based on the outcome of various arithmetic or logical operations. Common status flags include:
- Zero Flag (Z): Set if the result of an operation is zero.
- Carry Flag (C): Set if a carry is generated in an arithmetic operation (e.g., overflow in addition).
- Sign Flag (S): Set if the result of an operation is negative (for signed numbers).
- Overflow Flag (O): Set if an arithmetic overflow occurs (i.e., the result exceeds the capacity of the register).
These flags are used by conditional branch instructions to control program flow.
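A small 8085 illustration of flags being set by an operation (the label DONE is an assumption):
MVI A, 0FFH ; A = FFH
ADI 01H ; A = 00H; both the Zero flag and the Carry flag are set
JZ DONE ; The branch is taken because the Zero flag is set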
Conditional Branch Instructions:
Conditional branch instructions allow the CPU to change the flow of execution based on certain conditions, such as the result of an arithmetic operation.
- BEQ (Branch if Equal): Branches if the Zero flag is set.
- BNE (Branch if Not Equal): Branches if the Zero flag is not set.
- BGT (Branch if Greater Than): Branches if the result of an operation is greater than zero.
- BLT (Branch if Less Than): Branches if the result of an operation is less than zero.
Example: BEQ label (Branch to the specified label if the Zero flag is set).
Subroutine Call and Return:
Subroutine calls are used to invoke a set of instructions that can be reused in different parts of a program. The CPU executes a subroutine when a call instruction is encountered and returns after the subroutine completes.
- CALL: The CALL instruction pushes the return address onto the stack and jumps to the subroutine's starting address.
  - Example: CALL subroutine1 (Call the subroutine subroutine1).
- RET: The RET instruction pops the return address from the stack and jumps back to the instruction following the CALL.
  - Example: RET (Return from the current subroutine).
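A minimal 8085 sketch of a subroutine call (the stack-top address and the label names are assumptions):
        LXI SP, 3FF0H ; Set up a stack for return addresses
        MVI A, 04H
        CALL DOUBLE ; Push the return address and jump to DOUBLE
        HLT ; Execution resumes here after RET, with A = 08H
DOUBLE: ADD A ; A = A + A
        RET ; Pop the return address and return to the caller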
Program Interrupt:
An interrupt is a mechanism that allows the CPU to temporarily halt the execution of the current program and transfer control to a special routine, called the interrupt service routine (ISR), in response to an external event or condition.
- Types of Interrupts:
- Hardware Interrupts: Generated by external devices (e.g., keyboard, mouse, or timer) to request CPU attention.
- Software Interrupts: Triggered by the software (e.g., for system calls or exceptions).
- Maskable Interrupts: Interrupts that can be ignored or "masked" by the CPU.
- Non-maskable Interrupts (NMI): Interrupts that cannot be ignored by the CPU and are used for critical operations like power failures.
Interrupts are handled by saving the current state of the program, executing the appropriate ISR, and then restoring the program’s state to resume execution.
Summary
- General Register Organization: Defines how registers in the CPU are organized and how they interact with the control unit, memory, and ALU.
- Data Transfer and Manipulation: Includes instructions for transferring and manipulating data in registers and memory, as well as performing arithmetic, logical, and bitwise operations.
- Program Control: Includes instructions that control program execution flow, such as conditional branching, subroutine calls, and interrupts.
The design and operation of the CPU's control logic, data transfer, and manipulation capabilities are fundamental to the performance and functionality of a computer system. These components enable the execution of complex tasks efficiently, with flexibility and control.
Pipeline, Vector Processing and Multiprocessors
Pipeline, Vector Processing, and Multiprocessors (6 hrs)
In modern computer architecture, techniques like parallel processing, pipelining, vector processing, and the use of multiprocessors play critical roles in enhancing computational performance. These techniques optimize the execution of instructions, especially for complex tasks involving large datasets or multiple operations. This note will provide an in-depth understanding of these concepts, focusing on parallel processing, instruction pipelining, vector processing, and multiprocessors.
1. Parallel Processing
Parallel processing refers to the simultaneous execution of multiple tasks or processes. The primary goal is to improve performance by dividing a task into smaller sub-tasks, which can be processed simultaneously, rather than sequentially. Parallel processing can be implemented at various levels:
- Bit-level parallelism: Processing multiple bits simultaneously.
- Instruction-level parallelism (ILP): Executing multiple instructions concurrently.
- Task-level parallelism: Running independent tasks or processes at the same time.
- Data-level parallelism: Performing operations on large datasets in parallel.
In parallel processing, tasks or instructions that are independent of each other can be executed at the same time, taking advantage of multiple CPU cores or processing units.
Types of Parallel Processing Systems:
- Single Instruction, Multiple Data (SIMD): A single instruction is applied to multiple pieces of data simultaneously.
- Multiple Instruction, Multiple Data (MIMD): Different instructions are executed on different data streams simultaneously.
2. Pipeline in Computer Architecture
Pipelining is a technique used in computer architecture to increase instruction throughput (the number of instructions executed in a given time). It is analogous to an assembly line in manufacturing, where different stages of an instruction are processed in parallel. Each stage completes a part of the instruction, and as one instruction moves from one stage to another, new instructions can be fetched, decoded, executed, etc.
Four-Segment Instruction Pipeline:
A typical instruction pipeline in a CPU can be divided into several stages. The four-segment pipeline is a simplified model used in many processors. The stages typically include:
- Fetch (IF): The instruction is fetched from memory.
- Decode (ID): The fetched instruction is decoded to determine the operation.
- Execute (EX): The operation is carried out, such as performing arithmetic or logical operations.
- Write-back (WB): The result is written back to a register or memory.
The main advantage of pipelining is that while one instruction is being executed, the next instruction can be fetched, decoded, and prepared for execution. This leads to efficient utilization of the CPU and faster execution of programs.
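As a rough, idealized illustration (assuming one clock cycle per stage and no stalls):
Without pipelining: n instructions x 4 cycles = 4n cycles
With the 4-stage pipeline: 4 + (n - 1) cycles
For n = 100 instructions: 400 cycles versus 103 cycles, close to a 4x speed-up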
Data Dependency:
Data dependency occurs when one instruction requires the result of a previous instruction before it can execute. There are three primary types of data dependencies:
- Read-after-write (RAW): The output of one instruction is needed as input for the next instruction (true dependency).
- Write-after-read (WAR): An instruction writes to a register that is read by a previous instruction.
- Write-after-write (WAW): Two instructions write to the same register, and the order of execution affects the result.
These dependencies can cause pipeline hazards, where instructions have to wait for previous ones to complete, reducing the benefits of pipelining.
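A two-instruction illustration of a read-after-write hazard, written in the generic two-operand style used elsewhere in these notes:
ADD A, B ; writes a new value into register A
SUB C, A ; reads A, so it must wait for the ADD to complete (RAW, a true dependency)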
Handling of Branch Instructions:
Branch instructions can significantly slow down a pipeline because they alter the flow of execution. For example, a conditional branch depends on a comparison, and the decision on which path to take is not known until the comparison is completed. This causes pipeline stalls because the instruction fetch stage cannot proceed until the branch decision is made.
There are several techniques to handle branch instructions:
- Branch prediction: Predicts the outcome of a branch instruction to keep the pipeline moving.
- Delayed branching: Delays the execution of the branch instruction until the pipeline is clear.
- Branch target buffers (BTB): Caches the target addresses of branches to quickly fetch instructions after a branch.
3. Vector Processing
Vector processing involves performing operations on entire vectors (arrays of data) in a single instruction. This is in contrast to scalar processing, where each operation is performed on a single element. Vector processing is highly effective for applications involving large amounts of data, such as scientific computing, simulations, and image processing.
Vector Operations:
In a vector processor, operations are applied to entire vectors or matrices at once, which is far more efficient than performing scalar operations one element at a time. Common vector operations include:
- Addition: Adding corresponding elements of two vectors.
- Subtraction: Subtracting corresponding elements.
- Dot Product: The sum of the products of corresponding elements of two vectors.
- Scalar Multiplication: Multiplying each element of a vector by a scalar value.
Vector Processor Design:
A vector processor contains specialized hardware that can execute these vector operations. The key components include:
- Vector registers: Large registers that store entire vectors.
- Vector functional units: Units capable of performing vector operations (e.g., vector adders, multipliers).
- Vector memory: Specialized memory that can read/write entire vectors at once.
4. Matrix Multiplication and Vector Processing
Matrix multiplication is a fundamental operation in many computational problems, such as in graphics processing, scientific simulations, and machine learning. Vector processors are particularly effective at performing matrix operations because they can process entire rows or columns of matrices in parallel.
For example, given two matrices, A and B, the multiplication result C can be computed efficiently using vector processing. Each row of matrix A can be multiplied by the corresponding column of matrix B, with the results being summed up to form the entries of the resulting matrix C.
Matrix Multiplication Example:
Suppose we have two matrices:
- Matrix A (size m x n)
- Matrix B (size n x p)
The resulting matrix C will have the size m x p, and the element C[i][j] is computed as the dot product of row i of matrix A and column j of matrix B: C[i][j] = A[i][1] x B[1][j] + A[i][2] x B[2][j] + ... + A[i][n] x B[n][j].
Vector processors can accelerate this operation by performing multiple dot products in parallel.
5. Multiprocessors and Parallelism
Multiprocessors are systems that use multiple processors (CPUs) to perform parallel computing. These processors can be connected in various configurations to share tasks and data:
Types of Multiprocessors:
- Shared-memory Multiprocessors: All processors share the same memory space. Communication is done through memory. These systems are easier to program but may face issues related to memory contention and synchronization.
- Distributed-memory Multiprocessors: Each processor has its own local memory, and communication is done through message-passing between processors. These systems scale well but require complex programming models.
Advantages of Multiprocessors:
- Increased Throughput: By distributing tasks among multiple processors, overall performance can be significantly improved.
- Fault Tolerance: If one processor fails, others can continue the operation, making the system more robust.
- Scalability: More processors can be added to handle more significant tasks or larger datasets.
Challenges in Multiprocessing:
- Synchronization: Coordinating data access between processors can be complex, especially in shared-memory systems.
- Load Balancing: Ensuring that all processors are equally loaded with tasks to prevent underutilization.
- Communication Overhead: Transferring data between processors, especially in distributed systems, can incur delays.
Summary
- Parallel Processing enables simultaneous task execution, speeding up computation.
- Pipelining improves throughput by breaking tasks into stages, though data dependency and branch instructions can introduce hazards.
- Vector Processing handles operations on entire datasets in parallel, accelerating scientific and matrix operations.
- Multiprocessors allow multiple processors to work together, increasing performance and reliability but introducing challenges such as synchronization and communication overhead.
Together, these techniques are essential in modern computing, helping to tackle computationally intensive problems more efficiently.
Laboratory Works
The following 8085 assembly language programs correspond to the laboratory works listed in the syllabus:
1. Multi-byte Addition & Subtraction, Multi-byte Decimal Addition & Subtraction
Multi-byte Addition (Hexadecimal Numbers)
; Program to add two multi-byte numbers
MVI C, 04H ; Byte count of each number (04H assumed here)
LXI H, 5000H ; HL points to the first number
LXI D, 6000H ; DE points to the second number (result is stored here)
STC
CMC ; Clear the carry before the first addition
LOOP_ADD:
LDAX D ; Load a byte of the second number
ADC M ; Add the corresponding byte of the first number with carry
STAX D ; Store the result byte back in memory
INX H ; Point to the next byte of the first number
INX D ; Point to the next byte of the second number
DCR C ; Decrement the byte counter
JNZ LOOP_ADD ; Continue until the complete number is added
HLT
Multi-byte Subtraction (Hexadecimal Numbers)
; Program to subtract multi-byte numbers
MVI C, 04H ; Byte count of each number (04H assumed here)
LXI H, 5000H ; HL points to the number to subtract (subtrahend)
LXI D, 6000H ; DE points to the minuend; the result overwrites it
STC
CMC ; Clear the borrow (carry) before the first subtraction
LOOP_SUB:
LDAX D ; Load a byte of the minuend
SBB M ; Subtract the corresponding subtrahend byte with borrow
STAX D ; Store the result byte back in memory
INX H ; Point to the next byte of the subtrahend
INX D ; Point to the next byte of the minuend
DCR C ; Decrement the byte counter
JNZ LOOP_SUB ; Continue until all bytes are subtracted
HLT
Multi-byte Decimal Addition (BCD Numbers)
; Program to add two multi-byte decimal numbers
MVI C, 04H ; Byte count of each number (04H assumed here)
LXI H, 5000H ; HL points to the first BCD number
LXI D, 6000H ; DE points to the second BCD number (result is stored here)
STC
CMC ; Clear the carry before the first addition
LOOP_BCD_ADD:
LDAX D ; Load a byte of the second number
ADC M ; Add the corresponding byte of the first number with carry
DAA ; Decimal-adjust the accumulator to keep the result in BCD
STAX D ; Store the BCD result byte
INX H
INX D
DCR C ; Decrement the byte counter
JNZ LOOP_BCD_ADD ; Continue until all bytes are added
HLT
Multi-byte Decimal Subtraction (BCD Numbers)
; Program to subtract multi-byte decimal numbers
MVI C, 04H ; Byte count of each number (04H assumed here)
LXI H, 5000H ; HL points to the number to subtract (subtrahend)
LXI D, 6000H ; DE points to the minuend; the result overwrites it
STC ; Set carry so the first byte uses the ten's complement
LOOP_BCD_SUB:
MVI A, 99H
ACI 00H ; A = 99H plus the incoming carry (handles the borrow between bytes)
SUB M ; Form the complement of the subtrahend byte
MOV B, A
LDAX D ; Load the minuend byte
ADD B ; Add the complemented subtrahend
DAA ; Decimal-adjust; carry = 1 means no borrow out of this byte
STAX D ; Store the BCD difference byte
INX H
INX D
DCR C ; Decrement the byte counter
JNZ LOOP_BCD_SUB ; Continue until all bytes are subtracted
HLT
2. Adder and Subtractor Circuit
An adder or subtractor circuit is normally realized in hardware; in the laboratory, assembly language code is used to drive its inputs and read its result through the I/O ports, handling the carry bit for addition and the borrow bit for subtraction. Below is a simple software representation in which the addition and subtraction are carried out in the accumulator.
; Adder Circuit
MVI A, 05H ; Load the first operand into A
MVI B, 03H ; Load the second operand into B
ADD B ; Add B to A (A = 05H + 03H = 08H)
MOV C, A ; Store the result in C (simulating the circuit output)
HLT
; Subtractor Circuit
MVI A, 05H ; Load the first operand into A
MVI B, 03H ; Load the second operand into B
SUB B ; Subtract B from A (A = 05H - 03H = 02H)
MOV C, A ; Store the result in C (simulating the circuit output)
HLT
3. Study of 8259 Programmable Interrupt Controller (PIC)
The 8259 PIC is used for interrupt management. It can handle interrupts from multiple sources. Below is an assembly program that demonstrates how to initialize the PIC and develop an interrupt service routine.
; Initialize 8259 PIC
MVI A, 11H ; Set control word for 8259
OUT 20H ; Send control word to 8259
MVI A, 0AH ; Set the interrupt vector address
OUT 21H ; Send vector address to 8259
; Enable interrupts and wait
EI ; Enable the 8085 interrupt system so requests from the PIC can be serviced
WAIT:
NOP ; No operation, just waiting for an interrupt
JMP WAIT ; Continue waiting for an interrupt
The interrupt service routine is typically set up in the interrupt vector, which will be executed when an interrupt occurs.
4. Keyboard/Display Controller - Keyboard Scan and Display Handling
In this program, a basic method of scanning a keyboard and handling a display (blinking and rolling) is demonstrated.
; Keyboard Scan and Display Handling
KEYBOARD_SCAN:
; Assuming port 1 is connected to the keyboard
IN 01H ; Read the pressed key from port 1 into the accumulator
CALL DISPLAY_KEY ; Call the routine that shows the key
JMP KEYBOARD_SCAN ; Keep scanning the keyboard
DISPLAY_KEY:
; Assuming port 2 is connected to the display
OUT 02H ; Send the key value in the accumulator to the display
RET
5. Parallel Data Transfer
This assembly program simulates parallel data transfer between two devices connected to two different ports:
; Parallel Data Transfer
MVI A, 01H ; Load data to be transferred
OUT 01H ; Send data to port 1
MVI A, 02H ; Load new data
OUT 02H ; Send data to port 2
MVI A, 03H ; Load new data
OUT 03H ; Send data to port 3
HLT
6. Study of Microcomputer Development System
This program gives a basic understanding of interacting with a microcomputer development system and testing some general I/O operations.
; Microcomputer Development System - I/O testing
MVI A, 55H ; Load data 55H into accumulator
OUT 01H ; Output data to port 1 (display, etc.)
MVI A, 0AAH ; Load data AAH into the accumulator
OUT 02H ; Output data to port 2
HLT
Conclusion
These examples cover essential assembly language programming for 8085, such as multi-byte addition and subtraction, interfacing with peripherals (keyboard, display), parallel data transfer, and handling interrupts. Each program simulates a basic interaction with hardware or demonstrates the functionality of key components like the 8259 Interrupt Controller, and data transfer mechanisms.
Syllabus
Course Description
This course is an introduction to microprocessor and computer architecture. It covers topics in both the physical design of the computer (organization) and the logical design of the computer (architecture).
Course Objectives
The course has the following specific objectives:
- To explain the microprocessor.
- To explain assembly language programming.
- To explain the overview of computer organization.
- To explain the principle of the CPU system.
- To explain the principle of the memory system.
- To explain the principle of data flow.
Unit Contents
1. Fundamental of Microprocessor : 5 hrs
Introduction to Microprocessors, Microprocessor systems with bus organization, Microprocessor architecture and operation, 8085 Microprocessor and its operation, 8085 instruction cycle, machine cycle, T states, Addressing modes in 8085, Introduction to 8086.
2. Introduction To Assembly Language Programming : 10 hrs
Assembly Language Programming Basics, Classification of Instructions and Addressing Modes, 8085 Instruction Sets, Assembling, Executing and Debugging the Programs, Developing Counters and Time Delay Routines, Interfacing Concepts.
3. Basic Computer Architecture : 4 hrs
Introduction: History of Computer architecture, Overview of computer organization, Memory Hierarchy and cache, Organization of hard disk.
Instruction Codes: Stored Program Organization - Indirect Address, Computer Registers, Common Bus System, Instruction Set, Timing and Control - Instruction Cycle.
4. Microprogrammed Control : 10 hrs
Basic Computer Design of Accumulator: Control of AC Register, ALU Organization; Control Memory - Address Sequencing; Conditional Branching, Mapping of Instruction - Subroutines; Micro Program: Symbolic Micro Program, Binary Micro Program; Design of Control Unit: Basic Requirements of Control Unit, Structure of Control Unit, Micro Program Sequencer.
5. Central Processing Unit : 10 hrs
General Register Organization: Control Word, Stack Organization and Instructions, Instruction Formats, Addressing Modes.
Data Transfer and Manipulation: Data Transfer Instructions, Data Manipulation Instructions, Arithmetic Instructions, Logical and Bit Manipulation Instructions, Shift Instructions.
Program Control: Status Bit Conditions, Conditional Branch Instructions, Subroutine Call and Return, Program Interrupt, Types of Interrupts.
6. Pipeline, Vector Processing and Multiprocessors : 6 hrs
Parallel Processing, Pipeline Examples: Four-Segment Instruction Pipeline, Data Dependency, Handling of Branch Instructions; Vector Processing: Vector Operations, Matrix Multiplication.
Laboratory Works
8085 Assembly Language program
1. Multi byte Addition & Subtraction, Multi byte decimal addition & subtraction.
2. Adder and subtractor circuit.
3. Study of 8259 programmable interrupt controller - Development of interrupt service routine.
4. Keyboard/display controller - Keyboard scan - blinking and rolling display.
5. Parallel data transfer.
6. Study of Microcomputer development system.