© 2004 Morgan Kaufmann Publishers
Lectures for 3rd Edition
Note: these lectures are often supplemented with other materials and with problems from the text worked out on the blackboard. You'll want to customize these lectures for your class. The student audience for these lectures has had exposure to logic design and attends a hands-on assembly language programming lab that does not follow a typical lecture format.
Chapter 1
Introduction
• This course is all about how computers work
• But what do we mean by a computer?
  – Different types: desktop, servers, embedded devices
  – Different uses: automobiles, graphics, finance, genomics…
  – Different manufacturers: Intel, Apple, IBM, Microsoft, Sun…
  – Different underlying technologies and different costs!
• Analogy: consider a course on "automotive vehicles"
  – Many similarities from vehicle to vehicle (e.g., wheels)
  – Huge differences from vehicle to vehicle (e.g., gas vs. electric)
• Best way to learn:
  – Focus on a specific instance and learn how it works
  – While learning general principles and historical perspectives
Why learn this stuff?
• You want to call yourself a "computer scientist"
• You want to build software people use (need performance)
• You need to make a purchasing decision or offer "expert" advice
• Both hardware and software affect performance:
  – The algorithm determines the number of source-level statements
  – The language/compiler/architecture determine the machine instructions (Chapters 2 and 3)
  – The processor/memory determine how fast instructions are executed (Chapters 5, 6, and 7)
• Assessing and Understanding Performance: Chapter 4
What is a computer?
• Components:
  – input (mouse, keyboard)
  – output (display, printer)
  – memory (disk drives, DRAM, SRAM, CD)
  – network
• Our primary focus: the processor (datapath and control)
  – implemented using millions of transistors
  – impossible to understand by looking at each transistor
  – We need...
Abstraction
• Delving into the depths reveals more information
• An abstraction omits unneeded detail, helps us cope with complexity
What are some of the details that appear in these familiar abstractions?
swap(int v[], int k)
{
  int temp;
  temp = v[k];
  v[k] = v[k+1];
  v[k+1] = temp;
}
swap:
  muli $2, $5, 4     # $2 = k * 4 (byte offset of v[k])
  add  $2, $4, $2    # $2 = address of v[k]
  lw   $15, 0($2)    # $15 = v[k]
  lw   $16, 4($2)    # $16 = v[k+1]
  sw   $16, 0($2)    # v[k] = old v[k+1]
  sw   $15, 4($2)    # v[k+1] = old v[k]
  jr   $31           # return to caller
00000000101000010000000000011000
00000000000110000001100000100001
10001100011000100000000000000000
10001100111100100000000000000100
10101100111100100000000000000000
10101100011000100000000000000100
00000011111000000000000000001000
High-level language program (in C)  →  Compiler  →  Assembly language program (for MIPS)  →  Assembler  →  Binary machine language program (for MIPS)
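As a small worked illustration (not part of the original figure), the last 32-bit word above can be decoded by slicing it into the standard MIPS R-format fields:

  # 00000011111000000000000000001000   (last word of the machine code above)
  # op     rs    rt    rd    shamt funct
  # 000000 11111 00000 00000 00000 001000
  # op = 0 with funct = 8 identifies the R-format jr; rs = 11111 (binary) = register 31
  jr $31    # the same "jr $31" that ends the assembly version of swap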
How do computers work?
• Need to understand abstractions such as:
  – Applications software
  – Systems software
  – Assembly language
  – Machine language
  – Architectural issues: i.e., caches, virtual memory, pipelining
  – Sequential logic, finite state machines
  – Combinational logic, arithmetic circuits
  – Boolean logic, 1s and 0s
  – Transistors used to build logic gates (CMOS)
  – Semiconductors/silicon used to build transistors
  – Properties of atoms, electrons, and quantum dynamics
• So much to learn!
Instruction Set Architecture
• A very important abstraction
  – interface between hardware and low-level software
  – standardizes instructions, machine-language bit patterns, etc.
  – advantage: different implementations of the same architecture
  – disadvantage: sometimes prevents adopting new innovations
True or false: binary compatibility is extraordinarily important.
• Modern instruction set architectures:
  – IA-32, PowerPC, MIPS, SPARC, ARM, and others
• ABI (application binary interface)
Historical Perspective
• ENIAC, built during World War II, was the first general-purpose computer
  – Used for computing artillery firing tables
  – 80 feet long by 8.5 feet high and several feet wide
  – Each of the twenty 10-digit registers was 2 feet long
  – Used 18,000 vacuum tubes
  – Performed 1,900 additions per second
• Since then, Moore's Law: transistor capacity doubles every 18–24 months
Pentium 4
Density growth
Density growth
Performance comparison
[Figure: CPU and memory performance versus year, log scale from 1 to 100,000]
Processor evolution
Growth in power consumption
Pentium 4
Chapter 2
Instructions:
• Language of the Machine
• We'll be working with the MIPS instruction set architecture
  – similar to other architectures developed since the 1980s
  – almost 100 million MIPS processors manufactured in 2002
  – used by NEC, Nintendo, Cisco, Silicon Graphics, Sony, …
[Figure: processors manufactured per year, 1998–2002 (scale 0 to 1,400), broken down by instruction set: ARM, IA-32, MIPS, Motorola 68K, PowerPC, Hitachi SH, SPARC, other]
MIPS arithmetic
• All instructions have 3 operands
• Operand order is fixed (destination first)
Example:
  C code:      a = b + c
  MIPS 'code': add a, b, c
  (we'll talk about registers in a bit)
"The natural number of operands for an operation like addition is three… requiring every instruction to have exactly three operands, no more and no less, conforms to the philosophy of keeping the hardware simple"
MIPS arithmetic
• Design Principle: simplicity favors regularity.
• Of course this complicates some things...
  C code:    a = b + c + d;
  MIPS code: add a, b, c
             add a, a, d
  (a register-level version of this example is sketched below)
• Operands must be registers; only 32 registers are provided
• Each register contains 32 bits
• Design Principle: smaller is faster. Why?
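A concrete sketch of the two-instruction sequence above, assuming (purely for illustration) that the compiler has placed a, b, c, and d in registers $s0, $s1, $s2, and $s3:

  # a = b + c + d, with a in $s0, b in $s1, c in $s2, d in $s3 (assumed placement)
  add $s0, $s1, $s2    # a = b + c
  add $s0, $s0, $s3    # a = a + d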
Registers vs. Memory
[Figure: processor (control and datapath), memory, input, output, and I/O]
• Arithmetic instruction operands must be registers
  – only 32 registers provided
• The compiler associates variables with registers
• What about programs with lots of variables? (see the spill/reload sketch below)
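When there are more variables than registers, the compiler keeps the extra ones in memory and moves them into registers only while they are being used (register "spilling"). A minimal sketch, assuming (for illustration only) that a spilled variable lives at offset 0 from the stack pointer $sp and that another operand is already in $t1:

  lw  $t0, 0($sp)      # reload the spilled variable from memory into a register
  add $t0, $t0, $t1    # arithmetic operands must be registers
  sw  $t0, 0($sp)      # store the updated value back to memory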
Memory Organization
• Viewed as a large, single-dimension array, with an address
• A memory address is an index into the array
• "Byte addressing" means that the index points to a byte of memory
[Figure: byte-addressed memory: addresses 0, 1, 2, 3, 4, 5, 6, ... each holding 8 bits of data]
Memory Organization
• Bytes are nice, but most data items use larger "words"
• For MIPS, a word is 32 bits or 4 bytes
• 2^32 bytes with byte addresses from 0 to 2^32 - 1
• 2^30 words with byte addresses 0, 4, 8, ..., 2^32 - 4
• Words are aligned
  i.e., what are the 2 least-significant bits of a word address? (worked out below)
[Figure: word-aligned memory: byte addresses 0, 4, 8, 12, ... each holding 32 bits of data]
Registers hold 32 bits of data
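To answer the alignment question with a quick worked example (addresses chosen purely for illustration): every word address is a multiple of 4, so its two least-significant bits are always 00.

  # Word-aligned byte addresses written in binary:
  #   word 0  ->  byte address  0  =  ...00000
  #   word 1  ->  byte address  4  =  ...00100
  #   word 2  ->  byte address  8  =  ...01000
  #   word 3  ->  byte address 12  =  ...01100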
Instructions
• Load and store instructions
• Example:
  C code:    A[12] = h + A[8];
  MIPS code: lw  $t0, 32($s3)    # $t0 = A[8]   (offset 8 words × 4 bytes = 32; base of A in $s3)
             add $t0, $s2, $t0   # $t0 = h + A[8]   (h in $s2)
             sw  $t0, 48($s3)    # A[12] = $t0  (offset 12 words × 4 bytes = 48)
• Can refer to registers by name (e.g., $t0, $s3) as well as by number