A1.1.1 Describe CPU components' functions and interactions (AO2)

A1.1.1_1 Units: ALU, CU

Arithmetic Logic Unit (ALU)

  • Performs arithmetic operations (e.g., addition, subtraction, multiplication, division)
  • Executes logical operations (e.g., AND, OR, NOT, comparisons)
  • Processes data from registers or memory
  • Produces results for storage or further processing

Control Unit (CU)

  • Directs operations of the CPU by decoding instructions from memory
  • Coordinates data flow between ALU, registers, and memory via control signals
  • Manages the fetch-decode-execute cycle
  • Ensures proper instruction execution
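
The ALU's role can be pictured as a lookup from operation codes to functions. A minimal Python sketch (the operation names and two-operand form are illustrative, not a real instruction set):

    ALU_OPS = {
        "ADD": lambda a, b: a + b,          # arithmetic operations
        "SUB": lambda a, b: a - b,
        "AND": lambda a, b: a & b,          # bitwise logical operations
        "OR":  lambda a, b: a | b,
    }

    def alu(op, a, b):
        """Apply the named operation to two operands, as the CU would direct."""
        return ALU_OPS[op](a, b)

    print(alu("ADD", 5, 3))                  # 8
    print(alu("AND", 0b1100, 0b1010))        # 8 (0b1000)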

A1.1.1_2 Registers: IR, PC, MAR, MDR, AC

Instruction Register (IR)

  • Holds the current instruction being executed
  • Fetched from memory

Program Counter (PC)

  • Stores the memory address of the next instruction
  • Increments after each fetch

Memory Address Register (MAR)

  • Contains the address of the memory location to be accessed
  • Specifies where data is read from or written to

Memory Data Register (MDR)

  • Holds data being transferred to/from memory
  • Acts as a buffer

Accumulator (AC)

  • Stores intermediate results of ALU operations
  • Used for temporary data storage

A1.1.1_3 Buses: address, data, control

Address Bus

  • Carries memory addresses from CPU to memory/I/O
  • Unidirectional, determining where data is read from or written to

Data Bus

  • Transfers actual data between CPU, memory, and peripherals
  • Bidirectional, allowing data to move to and from the CPU

Control Bus

  • Transmits control signals from the CU to coordinate operations
  • Carries signals like read/write commands, clock signals, and interrupts

A1.1.1_4 Processors: single core, multi-core, co-processors

Single Core Processor

  • Contains one processing unit, executing one instruction stream at a time
  • Suitable for basic tasks but limited by sequential processing

Multi-Core Processor

  • Integrates multiple cores on a single chip, enabling parallel task execution
  • Improves performance for multitasking and complex applications (e.g., Intel Core i7 with 4–8 cores)

Co-Processors

  • Specialized processors (e.g., GPU, FPU)
  • Handle specific tasks like floating-point calculations or graphics
  • Offload work from the main CPU to improve efficiency

A1.1.1_5 Diagrammatic representation of CPU components relationships

Key Components and Interactions

  • CPU comprises ALU, CU, registers (IR, PC, MAR, MDR, AC), and buses (address, data, control)
  • Fetch Phase: PC sends address to MAR; instruction fetched via data bus to MDR, then to IR
  • Decode Phase: CU interprets instruction in IR, sending control signals to ALU or registers
  • Execute Phase: ALU processes data from registers (e.g., AC), storing results back in registers or memory via MDR
  • Store Phase: Optional phase that writes the result of calculations to memory (using the MAR and MDR)
  • Buses: Address bus links MAR to memory; data bus transfers data to/from MDR; control bus coordinates operations

Diagram Elements

  • Show CPU with ALU, CU, and registers connected by buses
  • Arrows indicate data flow: PC → MAR → Memory → MDR → IR → CU → ALU
  • Highlight control signals from CU to all components for coordination

A1.1.2 Describe GPU role (AO2)

A1.1.2_1 GPU architecture for specific tasks and complex computations

Specialized Design

  • GPUs contain thousands of smaller cores (e.g., Nvidia RTX 4090 with 16,384 CUDA cores) optimized for parallel processing, unlike CPUs with fewer, general-purpose cores
  • Designed for simultaneous execution of multiple tasks, ideal for data-intensive computations

Parallel Processing

  • Handles large datasets concurrently, such as processing multiple pixels or matrix operations in parallel
  • Enables high-speed execution of repetitive tasks, like rendering 3D graphics or matrix multiplications in AI

High Throughput

  • Optimized for massive data processing, delivering fast results for tasks like real-time graphics rendering or scientific simulations
  • Uses high-bandwidth memory (e.g., GDDR6) to manage large data transfers efficiently

Video RAM (VRAM)

  • Dedicated high-speed memory stores large datasets, such as textures for graphics or training data for machine learning
  • Ensures rapid access to data, reducing bottlenecks during complex computations
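
As a loose analogy in Python (assuming NumPy is installed), the difference between per-element sequential work and one operation applied across a whole array mirrors the CPU-versus-GPU split; real GPU code dispatches the second form across thousands of cores:

    import numpy as np

    pixels = np.random.rand(1_000_000)            # e.g., brightness values for a frame

    # Sequential, CPU-style: one element at a time
    brightened_loop = [min(p * 1.2, 1.0) for p in pixels]

    # Vectorized: one operation over the whole array at once (the GPU-style idea)
    brightened_vec = np.minimum(pixels * 1.2, 1.0)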

A1.1.2_2 Applications: video games, AI, simulations, graphics rendering, machine learning

Video Games

  • Renders complex 3D scenes in real time, calculating lighting, textures, and pixel colors concurrently
  • Supports high frame rates and resolutions (e.g., 4K gaming) using parallel processing

AI/Machine Learning

  • Accelerates training and inference of neural networks by performing parallel matrix operations
  • Used in deep learning frameworks (e.g., TensorFlow, PyTorch) for tasks like image recognition or natural language processing

Simulations

  • Processes large-scale simulations, such as physics calculations or climate modeling, thanks to its high throughput
  • Enables real-time processing for applications like molecular dynamics or fluid simulations

Graphics Rendering

  • Supports real-time and offline rendering for high-resolution visuals in films, animations, and VR
  • Uses APIs like DirectX or OpenGL to manage rendering pipelines efficiently

Other Uses

  • Facilitates data analysis in scientific research (e.g., genomic sequencing) and cryptocurrency mining
  • Enhances performance in tasks requiring parallel computation, such as video editing or autonomous vehicle processing

A1.1.3 Explain CPU vs GPU differences (HL) (AO2)

A1.1.3_1 Design philosophies, usage scenarios

CPU Design Philosophy

  • General-purpose processor optimized for sequential task execution and complex control logic
  • Designed for low-latency operations, handling diverse tasks like operating system management and application logic

GPU Design Philosophy

  • Specialized for parallel processing, focusing on high-throughput computations for repetitive tasks
  • Built for data-intensive applications requiring simultaneous processing of large datasets

Usage Scenarios

  • CPU: Ideal for tasks requiring sequential processing, such as running operating systems, executing single-threaded applications, or managing I/O operations (e.g., database queries, file management)
  • GPU: Suited for parallelizable tasks like 3D rendering, machine learning training, scientific simulations, and video encoding/decoding

A1.1.3_2 Core architecture, processing power, memory access, power efficiency

Core Architecture

  • CPU: Few, powerful cores (e.g., 4–16 cores in modern CPUs like AMD Ryzen 9) optimized for complex tasks and high clock speeds
  • GPU: Thousands of smaller, simpler cores (e.g., Nvidia RTX 4080 with 9,728 CUDA cores) designed for parallel execution of simpler tasks

Processing Power

  • CPU: High single-threaded performance for tasks requiring sequential logic or branching
  • GPU: Superior parallel processing power, capable of executing thousands of threads simultaneously for data-heavy tasks

Memory Access

  • CPU: Uses fast cache (L1, L2, L3) with low latency for quick access to frequently used data; interacts with system RAM
  • GPU: Relies on high-bandwidth VRAM (e.g., GDDR6) for large data transfers, with higher latency but greater throughput for parallel tasks

Power Efficiency

  • CPU: Higher power consumption per core due to complex logic and higher clock speeds; optimized for efficiency in general tasks
  • GPU: Consumes more power overall due to many cores but efficient for parallel workloads; less efficient for sequential tasks

A1.1.3_3 CPU-GPU collaboration: task division, data sharing, execution coordination

Task Division

  • CPU: Manages overall system operations, task scheduling, and sequential logic (e.g., running the OS, handling user inputs)
  • GPU: Offloads parallel tasks from CPU, such as rendering graphics, matrix computations, or AI model training

Data Sharing & Execution

  • Data is transferred between CPU and GPU via system buses (e.g., PCIe), with CPU preparing data for GPU processing
  • GPU stores data in VRAM for fast access during parallel computations, with results sent back to CPU or memory
  • CPU issues instructions to GPU via APIs (e.g., CUDA, OpenCL), coordinating when and how GPU processes tasks
  • Synchronization ensures GPU completes parallel tasks before CPU proceeds with dependent operations
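
A minimal sketch of this division of labour using PyTorch (assuming it is installed; the matrix sizes are arbitrary): the CPU prepares data, transfers it to the GPU, triggers a parallel computation, and copies the result back.

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"

    a = torch.randn(1024, 1024)                # CPU prepares the data
    b = torch.randn(1024, 1024)
    a_gpu, b_gpu = a.to(device), b.to(device)  # transfer (e.g., over PCIe) into VRAM

    c_gpu = a_gpu @ b_gpu                      # parallel matrix multiply on the GPU
    c = c_gpu.cpu()                            # result copied back for the CPU to use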

A1.1.4 Explain primary memory types' purposes (AO2)

A1.1.4_1 Types: RAM, ROM, cache (L1, L2, L3), registers

Random Access Memory (RAM)

  • Volatile memory used to store data and instructions actively used by the CPU
  • Allows fast read/write access for temporary storage during program execution (e.g., 8–32 GB in modern PCs)

Read-Only Memory (ROM)

  • Non-volatile memory storing firmware or BIOS, which initializes hardware during boot-up
  • Data is permanent or semi-permanent, typically not modified during normal operation

Cache (L1, L2, L3)

  • High-speed, volatile memory located close to or within the CPU for frequently accessed data/instructions
  • L1 Cache: Smallest, fastest, per-core (e.g., 64 KB)
  • L2 Cache: Larger, slightly slower, often per-core or shared (e.g., 512 KB–2 MB)
  • L3 Cache: Largest, shared across cores, slower than L1/L2 (e.g., 8–32 MB in multi-core CPUs)

Registers

  • Smallest, fastest memory within the CPU (e.g., IR, PC, MAR, MDR, AC)
  • Store intermediate data during processing, directly accessible by ALU and CU

A1.1.4_2 CPU-memory interactions for performance optimization

Data Flow

  • CPU accesses registers for immediate data, cache for frequently used data, and RAM for larger datasets
  • Instructions and data move from RAM to cache, then to registers for processing, reducing access time

Memory Hierarchy & Optimization

  • Hierarchy: Registers → L1 → L2 → L3 → RAM, organized by speed, size, and proximity to the CPU
  • Hierarchy minimizes latency and maximizes performance by prioritizing faster memory for critical tasks
  • Optimization Techniques: Prefetching, cache coherence, memory management

A1.1.4_3 Cache miss, cache hit relevance

Cache Hit

  • Occurs when requested data/instructions are found in cache, enabling fast CPU access
  • Improves performance by reducing latency compared to fetching from RAM

Cache Miss

  • Occurs when requested data is not in cache, requiring slower access from RAM or secondary storage
  • Types: Compulsory (first access), capacity (cache full), conflict (data evicted due to mapping)

Relevance

  • High cache hit rates improve CPU performance by minimizing delays
  • Cache misses increase latency, impacting system efficiency, especially in data-intensive applications
  • Cache design (size, associativity) and algorithms (e.g., LRU for eviction) optimize hit/miss ratios
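
A minimal Python sketch of an LRU-evicting cache with hit/miss counters (the capacity and access pattern are invented for illustration):

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity, self.lines = capacity, OrderedDict()
            self.hits = self.misses = 0

        def access(self, address):
            if address in self.lines:
                self.hits += 1
                self.lines.move_to_end(address)     # mark as most recently used
            else:
                self.misses += 1
                if len(self.lines) >= self.capacity:
                    self.lines.popitem(last=False)  # evict least recently used
                self.lines[address] = True

    cache = LRUCache(capacity=2)
    for addr in [1, 2, 1, 3, 2]:
        cache.access(addr)
    print(cache.hits, cache.misses)  # 1 hit, 4 misses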

A1.1.5 Describe fetch-decode-execute cycle (AO2)

A1.1.5_1 CPU operations for single instruction execution

Fetch

  • CPU retrieves the next instruction from memory using the Program Counter (PC)
  • Instruction is copied from memory to the Memory Data Register (MDR) via the data bus, then placed in the Instruction Register (IR)
  • PC increments to point to the next instruction

Decode

  • Control Unit (CU) interprets the instruction in the IR
  • Determines the required actions
  • Identifies the operation (e.g., ADD, LOAD) and operands (e.g., registers or memory addresses) needed

Execute

  • CPU performs the instruction's operation, such as arithmetic (via ALU), data transfer, or branching
  • Results may be stored in registers (e.g., Accumulator) or memory, as directed by the instruction

A1.1.5_2 Memory-registers interaction via address, data, control buses

Address Bus

  • Carries the memory address from the PC to the Memory Address Register (MAR) during the fetch phase
  • Used to specify memory locations for data read/write during execution

Data Bus

  • Transfers the instruction from memory to the MDR during fetch
  • Moves data between memory and registers during execution
  • Bidirectional, allowing data to flow to and from the CPU

Control Bus

  • Carries control signals from the CU to coordinate fetch, decode, and execute phases
  • e.g., read/write signals, clock pulses
  • Ensures proper timing and synchronization between CPU, memory, and registers

Interaction Example

  • Fetch: PC → MAR → memory (via address bus); instruction → MDR → IR (via data bus)
  • Decode: CU interprets IR contents, sending control signals via control bus
  • Execute: ALU processes data from registers, with results sent to memory or registers via data bus
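
The whole cycle can be sketched in a few lines of Python. The two-field instructions and opcode names below are invented for illustration; real instruction sets encode them in binary:

    # Toy program: load 7 into the accumulator, add 5, stop
    memory = [("LOAD", 7), ("ADD", 5), ("HALT", 0)]
    pc, ac = 0, 0

    while True:
        mar = pc                    # fetch: PC -> MAR (address bus)
        mdr = memory[mar]           # memory -> MDR (data bus)
        ir = mdr                    # MDR -> IR
        pc += 1                     # PC points to the next instruction
        opcode, operand = ir        # decode: CU interprets the IR
        if opcode == "LOAD":        # execute
            ac = operand
        elif opcode == "ADD":
            ac += operand
        elif opcode == "HALT":
            break

    print(ac)  # 12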

A1.1.6 Describe pipelining in multi-core architectures (HL) (AO2)

A1.1.6_1 Instruction stages: fetch, decode, execute

Fetch

  • Retrieves the next instruction from memory using the Program Counter (PC)
  • Instruction is loaded into the Instruction Register (IR) via the Memory Data Register (MDR)

Decode

  • Control Unit (CU) interprets the instruction in the IR, identifying the operation and required operands
  • Prepares control signals for execution, such as selecting registers or ALU functions

Execute

  • Arithmetic Logic Unit (ALU) or other CPU components perform the instruction's operation
  • e.g., arithmetic, logical, or data transfer
  • Results are stored in registers or memory as needed

Pipelining Concept

  • Divides instruction processing into stages (fetch, decode, execute), allowing multiple instructions to be processed simultaneously at different stages
  • Each stage operates concurrently, increasing instruction throughput in a single core

A1.1.6_2 Write-back stages for performance improvement

Write-Back Stage

  • Final stage where the results of the execute stage are written to registers or memory
  • Ensures data (e.g., ALU results or loaded memory values) is stored for use in subsequent instructions

Performance Improvement

  • Write-back allows the CPU to complete an instruction and free resources for the next instruction in the pipeline
  • Reduces pipeline stalls by ensuring results are available without delaying subsequent stages
  • In multi-core systems, write-back ensures data consistency across cores, often using cache coherence protocols
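
A small Python sketch of an ideal four-stage pipeline (no hazards; the instruction names are placeholders) shows several instructions in flight at once:

    stages = ["Fetch", "Decode", "Execute", "Write-back"]
    instructions = ["I1", "I2", "I3"]

    # In cycle c, stage s holds instruction c - s (if one has reached it)
    for cycle in range(len(instructions) + len(stages) - 1):
        slots = []
        for s, stage in enumerate(stages):
            i = cycle - s
            slots.append(f"{stage}:{instructions[i] if 0 <= i < len(instructions) else '-'}")
        print(f"cycle {cycle + 1}:", "  ".join(slots))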

A1.1.6_3 Independent and parallel core operations

Independent Core Operations

  • Each core in a multi-core processor operates its own pipeline, processing separate instruction streams concurrently
  • Cores handle distinct tasks or threads, improving overall system performance
  • e.g., one core runs a game's graphics, another handles AI

Parallel Core Operations

  • Multi-core architectures allow simultaneous execution of multiple pipelines, leveraging parallelism for multitasking or multi-threaded applications
  • Cores share resources like L3 cache or memory controllers, but operate independently for their fetch-decode-execute cycles

Pipelining in Multi-Core

  • Pipelining within each core increases instruction throughput, while multiple cores enable parallel task execution
  • Challenges include pipeline hazards (e.g., data dependencies, branch mispredictions) and resource contention between cores, mitigated by advanced scheduling and cache management

A1.1.7 Describe secondary memory storage types (AO2)

A1.1.7_1 Internal: SSD, HDD, eMMCs

Solid State Drive (SSD)

  • Non-volatile storage using flash memory
  • Fast read/write speeds (e.g., 500–3500 MB/s for NVMe SSDs)
  • Durable, no moving parts, lower power consumption compared to HDDs
  • Used in laptops, desktops, and servers for operating systems and applications

Hard Disk Drive (HDD)

  • Non-volatile storage using spinning magnetic disks
  • Slower than SSDs (e.g., 100–200 MB/s)
  • Higher capacity at lower cost per GB, suitable for bulk data storage (e.g., 1–16 TB)
  • Common in desktops, servers, and archival systems

Embedded MultiMediaCard (eMMC)

  • Flash-based storage integrated into device motherboards
  • Commonly used in smartphones, tablets, and low-cost laptops
  • Slower than SSDs (e.g., 150–400 MB/s) but compact and cost-effective for embedded systems

A1.1.7_2 External: SSD, HDD, optical drives, flash drives, memory cards, NAS

External SSD

  • Portable flash-based storage connected via USB or Thunderbolt
  • High speeds (e.g., 500–2000 MB/s)
  • Used for backups, data transfer, or extending device storage

External HDD

  • Portable magnetic disk storage via USB
  • Larger capacities (e.g., 1–8 TB) but slower speeds than SSDs
  • Ideal for backups, media storage, or large file transfers

Optical Drives

  • Use laser technology to read/write data on CDs, DVDs, or Blu-ray discs
  • e.g., 4.7 GB for DVD, 25–50 GB for Blu-ray
  • Slower and less common, used for software distribution or archival media

Flash Drives

  • Small, portable USB-based flash storage (e.g., 16 GB–1 TB)
  • Moderate speeds (e.g., 100–400 MB/s)
  • Used for quick file transfers or temporary storage

Memory Cards

  • Compact flash storage (e.g., SD, microSD) for cameras, phones, and drones
  • e.g., 32 GB–1 TB
  • Vary in speed (e.g., 10–300 MB/s) based on class (e.g., UHS-I, UHS-II)

Network-Attached Storage (NAS)

  • Storage devices connected to a network, housing multiple HDDs or SSDs for shared access
  • Used for centralized data storage, backups, or media streaming in homes or businesses

A1.1.7_3 Usage scenarios for different drives

SSD (Internal/External)

  • OS boot drives, gaming, or applications requiring fast load times
  • External SSDs for professionals needing portable, high-speed storage (e.g., video editors)

HDD (Internal/External)

  • Bulk storage for large files (e.g., video archives, databases) where cost is a priority
  • External HDDs for home backups or media libraries

eMMC

  • Budget-friendly storage in low-cost devices like Chromebooks, smartphones, or IoT devices
  • Suitable for lightweight applications with moderate performance needs

Optical Drives

  • Legacy use for software installation, movie playback, or archival data
  • e.g., Blu-ray for 4K video

Flash Drives/Memory Cards

  • Temporary file transfers, photography, or small-scale storage in portable devices
  • Memory cards for high-speed needs in cameras (e.g., 4K video recording)

NAS

  • Centralized storage for home or office networks, supporting file sharing, backups, or media streaming
  • Ideal for collaborative environments or remote access to large datasets

A1.1.8 Describe compression (AO2)

A1.1.8_1 Lossy vs lossless compression

Lossy Compression

  • Reduces file size by permanently discarding less critical data, so the original is not fully recoverable
  • Suitable for multimedia (e.g., images, audio, video) where minor quality loss is acceptable
  • Example: JPEG for images (discards fine details), MP3 for audio (removes inaudible frequencies)
  • Advantage: Significantly smaller file sizes, ideal for streaming or storage-limited scenarios
  • Disadvantage: Reduced quality, unsuitable for data requiring exact reproduction

Lossless Compression

  • Reduces file size without losing any data, allowing full reconstruction of the original
  • Used for text, code, or data where accuracy is critical (e.g., ZIP files, PNG images)
  • Example: FLAC for audio, GIF for images, ZIP for documents
  • Advantage: Preserves all data, ensuring no quality loss
  • Disadvantage: Larger file sizes compared to lossy compression

A1.1.8_2 Run-length encoding, transform coding

Run-Length Encoding (RLE)

  • A lossless compression technique that replaces sequences of identical data with a single value and count
  • Example: String "AAAAA" compressed as "5A" to reduce size
  • Effective for data with repetitive patterns, such as simple images or text with long runs of identical characters
  • Limitation: Less effective for complex or non-repetitive data, where compression gains are minimal
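
A minimal RLE encoder in Python, matching the count-then-character form used in the example above:

    def rle_encode(text):
        out, i = [], 0
        while i < len(text):
            j = i
            while j < len(text) and text[j] == text[i]:
                j += 1                       # extend the current run
            out.append(f"{j - i}{text[i]}")  # count followed by the repeated character
            i = j
        return "".join(out)

    print(rle_encode("AAAAA"))    # 5A
    print(rle_encode("AAABBC"))   # 3A2B1C (short runs can even grow the output)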

Transform Coding

  • Used primarily in lossy compression
  • Transforms data into a different domain for efficient compression
  • Example: the Discrete Cosine Transform (DCT) in JPEG and MP3 converts spatial/audio data into frequency components
  • Removes less perceptible data (e.g., high-frequency components) to reduce size while maintaining acceptable quality
  • Advantage: High compression ratios for multimedia (e.g., video, images)
  • Limitation: Lossy nature makes it unsuitable for applications requiring exact data preservation
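
A minimal sketch of the transform-coding idea in Python (assuming NumPy and SciPy are installed; the signal values and threshold are invented): transform, discard weak components, invert, and accept a slightly inexact reconstruction.

    import numpy as np
    from scipy.fftpack import dct, idct

    signal = np.array([50.0, 52.0, 51.0, 53.0, 52.0, 50.0, 51.0, 52.0])
    coeffs = dct(signal, norm="ortho")     # to the frequency domain
    coeffs[np.abs(coeffs) < 1.0] = 0.0     # quantize: drop weak (mostly high-frequency) terms
    approx = idct(coeffs, norm="ortho")    # back to the original domain, lossily
    print(np.round(approx, 1))             # close to, but not exactly, the input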

A1.1.9 Describe cloud computing services (AO2)

A1.1.9_1 Services: SaaS, PaaS, IaaS

Software as a Service (SaaS)

  • Delivers software applications over the internet, managed by third-party providers
  • Users access via web browsers without installing or maintaining software
  • Examples: Google Workspace (Docs, Sheets), Microsoft 365, Dropbox
  • Purpose: Provides ready-to-use applications for end-users, focusing on ease of use and accessibility

Platform as a Service (PaaS)

  • Provides a platform for developers to build, deploy, and manage applications without handling the underlying infrastructure
  • Includes tools like operating systems, server software, and development frameworks
  • Examples: Google App Engine, Microsoft Azure App Services, Heroku
  • Purpose: Simplifies application development and deployment for developers

Infrastructure as a Service (IaaS)

  • Offers virtualized computing resources over the internet
  • e.g., servers, storage, networking
  • Users control operating systems and applications while providers manage hardware
  • Examples: Amazon Web Services (EC2, S3), Microsoft Azure VMs, Google Compute Engine
  • Purpose: Provides scalable, on-demand infrastructure for businesses and developers

A1.1.9_2 Differences in control, flexibility, resource management, availability

Control

  • SaaS: Minimal user control; provider manages the application, updates, and infrastructure
  • PaaS: Moderate control; users manage applications and data, while the provider handles OS and server maintenance
  • IaaS: High control; users manage OS, applications, and configurations, with the provider managing physical hardware

Flexibility

  • SaaS: Limited; users are restricted to the provider's application features and configurations
  • PaaS: Moderate; supports custom application development within the provider's framework
  • IaaS: High; users can configure virtual machines, storage, and networks to suit specific needs

Resource Management

  • SaaS: Fully managed by the provider, including updates, scaling, and maintenance
  • PaaS: Provider manages infrastructure and scaling; users focus on application development
  • IaaS: Users manage resource allocation (e.g., CPU, storage) while the provider ensures hardware availability

Availability

  • SaaS: High availability; accessible via the internet with minimal setup, ideal for non-technical users
  • PaaS: Available to developers with internet access, with the provider ensuring platform uptime
  • IaaS: Highly available, with scalable resources; requires technical expertise for setup and maintenance

A1.2.1 Describe data representation methods (AO2)

A1.2.1_1 Integers in binary, hexadecimal

Binary Representation

  • Integers stored as sequences of bits (0s and 1s), using fixed-length formats (e.g., 8-bit, 16-bit, 32-bit)
  • Positive integers: Straightforward binary (e.g., 5 = 00000101 in 8-bit)
  • Negative integers: Two's complement (invert bits and add 1)
  • e.g., -5 = 11111011 in 8-bit

Hexadecimal Representation

  • Integers represented using base-16 (0–9, A–F)
  • Each digit corresponds to 4 bits
  • Compact and human-readable compared to binary
  • e.g., 255 = FF in hexadecimal, 11111111 in binary
  • Commonly used in programming and memory addressing for brevity
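
These representations can be checked in Python, which exposes an n-bit two's-complement view by masking (e.g., with 0xFF for 8 bits):

    print(format(5, "08b"))            # 00000101
    print(format(-5 & 0xFF, "08b"))    # 11111011 (8-bit two's complement)
    print(format(255, "X"))            # FF
    print(format(255, "08b"))          # 11111111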

A1.2.1_2 Binary, hexadecimal to decimal conversion

Binary to Decimal

  • Each bit represents a power of 2, summed by position (right to left, starting at 2^0)
  • Example: 1101 = (1×2^3) + (1×2^2) + (0×2^1) + (1×2^0) = 8 + 4 + 0 + 1 = 13

Hexadecimal to Decimal

  • Each digit represents a power of 16, with A=10, B=11, ..., F=15
  • Example: 2F = (2×16^1) + (15×16^0) = 32 + 15 = 47

Process

  • Convert by multiplying each digit by its positional value and summing the results
  • Used in programming to interpret or debug numerical data stored in memory
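
Python's built-in int accepts an explicit base, so both conversions can be checked directly, or the positional sum can be written out as in the worked examples:

    print(int("1101", 2))    # 13 (binary -> decimal)
    print(int("2F", 16))     # 47 (hexadecimal -> decimal)

    # The positional sum spelled out for 1101:
    print(sum(int(bit) * 2**i for i, bit in enumerate(reversed("1101"))))  # 13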

A1.2.1_3 Binary to hexadecimal conversion

Conversion Process

  • Group binary digits into sets of 4 bits (nibbles), starting from the right
  • Pad with zeros if needed
  • Map each 4-bit group to its hexadecimal equivalent
  • e.g., 1010 = A, 1111 = F
  • Example: Binary 11010110 = (1101)(0110) = D6 in hexadecimal
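
The same grouping process as a short Python sketch (the helper name bin_to_hex is ours):

    def bin_to_hex(bits):
        bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad on the left to a multiple of 4
        return "".join(format(int(bits[i:i + 4], 2), "X")
                       for i in range(0, len(bits), 4))

    print(bin_to_hex("11010110"))  # D6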

Purpose

  • Simplifies representation of large binary numbers for programmers and system designers
  • Commonly used in memory dumps, debugging, or hardware interfacing

A1.2.2 Explain binary data storage (AO2)

A1.2.2_1 Binary encoding fundamentals, impact on storage/retrieval

Binary Encoding Basics

  • Data is stored as sequences of bits (0s and 1s), the smallest units of digital information
  • Bits are grouped into bytes (8 bits), which represent characters, numbers, or other data types using encoding standards like ASCII or Unicode
  • Encoding ensures consistent data representation across systems for accurate storage and retrieval

Impact on Storage

  • Binary's simplicity enables efficient storage in memory (e.g., RAM, SSD) and compatibility with hardware
  • Storage size depends on data type and encoding
  • e.g., 1 byte for ASCII character, 4 bytes for 32-bit integer

Impact on Retrieval

  • Retrieval requires decoding binary data back to its original form using the same encoding standard
  • Speed of retrieval varies by storage medium (e.g., RAM is faster than HDD)
  • Incorrect encoding/decoding can cause data corruption or misinterpretation

A1.2.2_2 Storage of integers, strings, characters, images, audio, video in binary

Integers

  • Stored as fixed-length binary numbers
  • e.g., 16-bit, 32-bit
  • Positive integers use standard binary
  • Negative integers use two's complement

Strings and Characters

  • Characters encoded using ASCII (e.g., 'A' = 01000001) or Unicode (16 bits or more) for multilingual support
  • Strings stored as contiguous sequences of character encodings
  • e.g., "Hi" = 01001000 01101001 in ASCII

Images

  • Stored as pixel data
  • Each pixel encoded as binary values for color
  • e.g., RGB: 24 bits for 8 bits per red, green, blue channel
  • Formats like JPEG use compression to reduce binary size, storing metadata for reconstruction

Audio

  • Stored as binary representations of sampled waveforms, with amplitude values captured at fixed intervals
  • e.g., 16-bit WAV audio stores each sample as a 2-byte binary value
  • MP3 uses lossy compression

Video

  • Stored as sequences of compressed image frames (e.g., H.264 in MP4) with synchronized audio
  • Includes metadata (e.g., resolution, frame rate) in binary to guide playback

A1.2.3 Describe logic gates purpose and use (AO2)

A1.2.3_1 Logic gates purpose

Fundamental Building Blocks

  • Logic gates are electronic circuits that perform basic logical operations (e.g., AND, OR, NOT) on binary inputs (0s and 1s)
  • Serve as the foundation for digital circuits in CPUs, memory, and other computer components

Purpose

  • Process binary data to enable computation, decision-making, and data manipulation in computer systems
  • Combine to form complex circuits for arithmetic, control, and memory operations

A1.2.3_2 Functions and applications in computer systems

Functions

  • Each gate performs a specific logical operation based on its input values, producing a single binary output
  • Enable binary decision-making (e.g., comparisons, data routing) and arithmetic operations

Applications

  • CPUs: Used in ALUs for arithmetic (e.g., addition via XOR and AND gates) and logical operations
  • Memory: Implement flip-flops and latches for data storage in registers and RAM
  • Control Units: Manage instruction execution through decision-making circuits
  • I/O Systems: Facilitate signal processing and data transfer in peripherals

A1.2.3_3 Role in binary computing

Binary Processing

  • Operate on binary inputs (0 = false, 1 = true) to produce binary outputs, aligning with the binary nature of digital systems
  • Enable manipulation of bits to perform computations, store data, or control hardware

Circuit Design

  • Combined to create complex circuits (e.g., adders, multiplexers) that process multi-bit data
  • Essential for implementing Boolean algebra in hardware, forming the basis of all digital operations

A1.2.3_4 Boolean operators: AND, OR, NOT, NAND, NOR, XOR, XNOR

AND

  • Outputs 1 only if all inputs are 1
  • e.g., A=1, B=1 → Output=1
  • Used for operations requiring multiple conditions to be true

OR

  • Outputs 1 if at least one input is 1
  • e.g., A=1, B=0 → Output=1
  • Used for operations where any condition triggers an action

NOT

  • Inverts the input
  • e.g., A=1 → Output=0
  • Used for negating signals or conditions

NAND

  • Outputs 0 only if all inputs are 1
  • Inverse of AND
  • Universal gate, capable of constructing any logic circuit

NOR

  • Outputs 1 only if all inputs are 0
  • Inverse of OR
  • Universal gate, used in memory and control circuits

XOR

  • Outputs 1 if exactly one input is 1
  • e.g., A=1, B=0 → Output=1
  • Used in arithmetic (e.g., addition) and error detection

XNOR

  • Outputs 1 if inputs are the same
  • e.g., A=1, B=1 → Output=1
  • Used in equality checks and digital comparators
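
All seven operators reduce to one-line functions on bits, a convenient way to check the rules above in Python (0/1 stand for false/true):

    AND  = lambda a, b: a & b
    OR   = lambda a, b: a | b
    NOT  = lambda a: 1 - a
    NAND = lambda a, b: NOT(AND(a, b))   # inverse of AND
    NOR  = lambda a, b: NOT(OR(a, b))    # inverse of OR
    XOR  = lambda a, b: a ^ b
    XNOR = lambda a, b: NOT(XOR(a, b))   # 1 when inputs are the same

    print(XOR(1, 0), XNOR(1, 1), NAND(1, 1))  # 1 1 0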

A1.2.4 Construct and analyze truth tables (AO3)

A1.2.4_1 Predict simple logic circuit outputs

Purpose

  • Truth tables list all possible input combinations and their corresponding outputs for a logic circuit
  • Used to predict the behavior of circuits built with logic gates (e.g., AND, OR, NOT)

Process

  • Identify inputs (e.g., A, B) and list all possible combinations (2^n for n inputs)
  • Apply the circuit's logic operations to determine the output for each input combination
  • Example: For an AND gate with inputs A, B, output is 1 only when A=1 and B=1

A1.2.4_2 Determine outputs from inputs

Method

  • For each input combination, apply the Boolean expression or gate function to calculate the output
  • Example: For circuit (A AND B) OR NOT C, compute step-by-step:
  • Evaluate A AND B
  • Evaluate NOT C
  • Combine results with OR to get the final output
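
The same step-by-step evaluation, enumerated over all 2^3 input combinations in Python for (A AND B) OR NOT C:

    from itertools import product

    print("A B C | out")
    for a, b, c in product((0, 1), repeat=3):
        out = (a & b) | (1 - c)      # (A AND B) OR NOT C
        print(a, b, c, "|", out)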

Application

  • Used to verify circuit behavior or debug incorrect outputs in digital systems

A1.2.4_3 Relate to Boolean expressions

Connection

  • Truth tables directly represent Boolean expressions, where each row corresponds to one evaluation of the expression
  • Example: Expression A AND B has a truth table where output is 1 only when A=1, B=1

Use

  • Helps translate Boolean expressions into circuit designs or verify equivalence between expressions
  • Facilitates understanding of complex logic by breaking it into input-output mappings

A1.2.4_4 Derive from logic diagrams for simplification

Derivation

  • Analyze a logic diagram to construct its truth table by tracing inputs through gates to the output
  • Example: A circuit with (A XOR B) AND C requires evaluating XOR first, then AND with C for each input combination

Simplification

  • Use truth table to identify redundant patterns or simplify the logic using Boolean algebra
  • e.g., A AND A = A
  • Simplified circuits reduce gate count, improving efficiency and cost

A1.2.4_5 Use Karnaugh maps, algebraic simplification

Karnaugh Maps (K-maps)

  • Graphical tool to simplify Boolean expressions by grouping 1s in a grid based on input combinations
  • Each cell represents a truth table row
  • Adjacent 1s are grouped to minimize terms
  • Example: For A AND B OR A AND NOT B, K-map groups terms to simplify to A

Algebraic Simplification

  • Apply Boolean algebra rules (e.g., distributive property, De Morgan's laws) to reduce complex expressions
  • Example: (A AND B) OR (A AND NOT B) simplifies to A, using the rule (X AND Y) OR (X AND NOT Y) = X

Purpose

  • Both methods reduce circuit complexity, minimizing gates and improving performance in hardware design
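
Both simplifications can be cross-checked in Python with SymPy (assuming it is installed); simplify_logic applies exactly these kinds of Boolean reductions:

    from sympy import symbols
    from sympy.logic import simplify_logic

    A, B = symbols("A B")
    print(simplify_logic((A & B) | (A & ~B)))  # A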

A1.2.5 Construct logic diagrams (AO3)

A1.2.5_1 Show logic gate connections

Purpose

  • Logic diagrams visually represent the connections between logic gates that implement a Boolean expression or function
  • Used to design and analyze digital circuits in hardware like CPUs or memory

Process

  • Identify the Boolean expression or truth table to be implemented
  • Select appropriate gates (e.g., AND, OR, NOT) and connect inputs to outputs to match the logic
  • Example: For expression (A AND B) OR C, connect A and B to an AND gate, then its output and C to an OR gate

A1.2.5_2 Use standard gate symbols: AND, OR, NOT, NAND, NOR, XOR, XNOR

Standard Symbols

  • AND: Semicircle with flat input side, multiple inputs, one output (1 if all inputs are 1)
  • OR: Curved shape with pointed output, multiple inputs, one output (1 if any input is 1)
  • NOT: Triangle with a small circle at the output, inverts the input
  • NAND: AND gate with a circle at the output, inverts AND result
  • NOR: OR gate with a circle at the output, inverts OR result
  • XOR: OR gate with an extra curved line at input, outputs 1 if exactly one input is 1
  • XNOR: XOR gate with a circle at the output, outputs 1 if inputs are the same

Usage

  • Standard symbols ensure universal understanding in circuit design and documentation
  • Used in tools like schematic editors or hardware description languages (e.g., VHDL)

A1.2.5_3 Process inputs to produce outputs

Input Processing

  • Inputs (binary 0 or 1) flow through gates, each performing its logical operation
  • Outputs of one gate may serve as inputs to another, creating a cascade of operations
  • Example: For (A OR B) AND NOT C, A and B feed an OR gate, its output and NOT C feed an AND gate

Output Generation

  • Final output reflects the combined effect of all gates based on the input values
  • Verified using truth tables or simulation to ensure correct logic implementation

A1.2.5_4 Combine gates for complex operations

Complex Circuits

  • Multiple gates are combined to implement complex Boolean expressions or functions
  • e.g., adders, multiplexers
  • Example: A half-adder uses XOR for sum (A XOR B) and AND for carry (A AND B)
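
The half-adder reduces to two operators; a quick Python check over all input pairs:

    def half_adder(a, b):
        return a ^ b, a & b          # (sum via XOR, carry via AND)

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, "->", half_adder(a, b))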

Design Approach

  • Break down complex expressions into simpler sub-expressions, assigning gates to each
  • Connect gates hierarchically to achieve the desired logic, ensuring correct input-output flow

A1.2.5_5 Simplify using Boolean algebra

Boolean Algebra Simplification

  • Apply rules like De Morgan's laws, the distributive property, or absorption laws to reduce gate count
  • Example: (A AND B) OR (A AND NOT B) simplifies to A, using the rule (X AND Y) OR (X AND NOT Y) = X

Benefits

  • Reduces the number of gates, lowering cost, power consumption, and circuit complexity
  • Improves performance by minimizing signal propagation delays

Process

  • Start with the Boolean expression or truth table
  • Apply simplification rules
  • Redesign the logic diagram
  • Verify simplified circuit using truth tables or Karnaugh maps to ensure equivalent functionality

A1.3.1 Describe the operating system's role (AO2)

A1.3.1_1 Abstract hardware complexities, manage resources

Abstract Hardware Complexities

  • Operating System (OS) acts as an intermediary between hardware and user applications, hiding low-level hardware details
  • Provides a user-friendly interface (e.g., GUI, CLI) to interact with hardware like CPU, memory, and storage
  • Example: Translates a file save command into disk operations without user needing to manage sectors

Manage Resources

  • Allocates and optimizes hardware resources (e.g., CPU time, memory, storage, I/O devices) among processes
  • Ensures efficient multitasking by scheduling processes and managing resource conflicts
  • Example: Windows or Linux allocates CPU time to multiple applications running concurrently

Key Functions

  • Facilitates communication between software and hardware (e.g., drivers for peripherals)
  • Maintains system stability by handling errors and resource contention
  • Supports application execution by providing necessary services like file management and memory allocation

A1.3.2 Describe operating system functions (AO2)

A1.3.2_1 Maintain system integrity, background operations

System Integrity

  • Ensures reliable operation by monitoring hardware and software for errors or crashes
  • Implements error handling (e.g., recovering from application crashes) and security measures to protect system stability
  • Example: Windows uses system logs to track and resolve errors; Linux uses kernel panic handling

Background Operations

  • Manages essential processes invisible to users, such as system updates, disk maintenance, or network monitoring
  • Runs services like task scheduling, logging, and power management to keep the system operational
  • Example: macOS runs background daemons for automatic backups (Time Machine) or software updates

A1.3.2_2 Manage memory, file system, devices, scheduling, security, accounting, GUI, virtualization, networking

Memory Management

  • Allocates and deallocates memory for processes, ensuring efficient use of RAM
  • Uses techniques like virtual memory to extend available memory via storage (e.g., paging, swapping)

File System Management

  • Organizes, stores, and retrieves data on storage devices using file systems (e.g., NTFS, ext4)
  • Manages file access, permissions, and directory structures

Device Management

  • Controls hardware devices (e.g., printers, keyboards) via drivers, facilitating communication with applications
  • Handles input/output operations and resource allocation for devices

Process Scheduling

  • Determines which processes run on the CPU and for how long, using algorithms like round-robin or priority scheduling
  • Ensures smooth multitasking and optimal CPU utilization

Security

  • Protects system and data through user authentication, access controls, and encryption
  • Example: Linux uses user permissions (e.g., chmod) to restrict file access

Accounting

  • Tracks resource usage (e.g., CPU time, disk space) for performance monitoring or billing in multi-user systems
  • Example: Cloud OS tracks usage for billing in services like AWS

Graphical User Interface (GUI)

  • Provides visual interfaces (e.g., Windows desktop, macOS Finder) for user interaction with applications and files
  • Simplifies system navigation compared to command-line interfaces

Virtualization

  • Enables running multiple OS instances or virtual machines on a single physical system
  • Example: VMware or Hyper-V allows Linux and Windows to run simultaneously

Networking

  • Manages network connections, protocols, and data transfer for internet or local network access
  • Example: OS handles TCP/IP for web browsing or file sharing over a LAN

A1.3.3 Compare scheduling approaches (AO3)

A1.3.3_1 Allocate CPU time for performance optimization

Purpose

  • Scheduling determines how the CPU allocates time to multiple processes or threads to maximize system performance
  • Aims to optimize throughput, minimize response time, and ensure fair resource distribution

Key Factors

  • Balances CPU utilization, process priority, and system responsiveness
  • Impacts system efficiency in multitasking environments (e.g., running multiple applications)

A1.3.3_2 Methods: first-come first-served, round robin, multilevel queue, priority scheduling

First-Come First-Served (FCFS)

  • Processes are executed in the order they arrive in the ready queue
  • Simple to implement, non-preemptive (process runs to completion)
  • Example: Batch processing systems where jobs are queued and processed sequentially

Round Robin (RR)

  • Each process is assigned a fixed time slice (quantum) and cycles through the queue
  • Preemptive, ensuring fair CPU time distribution for multitasking
  • Example: Used in time-sharing systems like Linux for interactive applications

Multilevel Queue

  • Divides processes into multiple queues based on type or priority
  • e.g., system processes, user processes
  • Each queue may use a different scheduling algorithm
  • Example: Operating systems like Windows use this for prioritizing foreground vs. background tasks

Priority Scheduling

  • Assigns CPU time based on process priority; higher-priority processes run first
  • Can be preemptive (interrupts lower-priority tasks) or non-preemptive
  • Example: Real-time systems prioritize critical tasks like sensor processing in embedded devices

Comparison of Scheduling Methods

  • FCFS — Advantages: Simple, fair for long processes; Disadvantages: Long waiting times, poor for interactive tasks; Use Case: Batch processing
  • Round Robin — Advantages: Fair, responsive for multitasking; Disadvantages: Overhead from context switching; Use Case: Time-sharing systems
  • Multilevel Queue — Advantages: Flexible, prioritizes diverse tasks; Disadvantages: Complex to manage, potential starvation of low-priority tasks; Use Case: Mixed workloads
  • Priority Scheduling — Advantages: Prioritizes critical tasks, efficient for real-time systems; Disadvantages: Starvation of low-priority tasks, complex priority management; Use Case: Real-time applications
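
A minimal round-robin simulation in Python (the process names, burst times, and quantum are invented) showing how each process gets a fixed slice per turn:

    from collections import deque

    def round_robin(bursts, quantum):
        queue, clock, finish = deque(bursts.items()), 0, {}
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)              # run for at most one quantum
            clock += run
            if remaining > run:
                queue.append((name, remaining - run))  # back of the queue
            else:
                finish[name] = clock                   # completion time
        return finish

    print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
    # {'P3': 5, 'P2': 8, 'P1': 9}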

A1.3.4 Evaluate polling vs interrupt handling (AO3)

A1.3.4_1 Consider event frequency, CPU overhead, power, predictability, latency, security

Description

  • Polling: The CPU periodically checks the status of a device or process to determine if it needs attention
  • Interrupt Handling: Devices or processes signal the CPU when an event occurs, pausing current tasks to handle it

Event Frequency

  • Polling: Efficient for high-frequency events where constant checking is justified (e.g., high-speed data transfers)
  • Interrupt Handling: Efficient for low-frequency or irregular events, as the CPU only responds when signaled

CPU Overhead

  • Polling: High, as the CPU continuously polls devices, consuming cycles even when no events occur
  • Interrupt Handling: Lower, as the CPU is freed for other tasks until an interrupt occurs, reducing wasted cycles

Power

  • Polling: Higher power consumption due to constant CPU activity; less efficient for battery-powered devices
  • Interrupt Handling: More power-efficient, especially in idle states, as the CPU can enter low-power modes until interrupted

Predictability

  • Polling: Highly predictable, as checks occur at fixed intervals, ensuring consistent response times
  • Interrupt Handling: Less predictable, as interrupt timing depends on external events, potentially causing variable response times

Latency

  • Polling: Higher latency, as events are only detected during polling cycles, potentially delaying response
  • Interrupt Handling: Lower latency, as interrupts are handled immediately, improving responsiveness

Security

  • Polling: Lower risk of unauthorized interrupts but vulnerable to denial-of-service if polling loops are exploited
  • Interrupt Handling: Requires careful management to prevent malicious interrupts (e.g., interrupt storms) but generally secure with proper controls
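
The two styles can be contrasted with a loose Python analogy using a threading.Event as the "device signal" (the timings are arbitrary):

    import threading, time

    signal = threading.Event()

    def interrupt_style():
        signal.wait()                # blocks using no CPU until signalled
        print("handled immediately on signal")

    threading.Thread(target=interrupt_style).start()

    # Polling would instead loop: while not signal.is_set(): time.sleep(0.01)
    # paying CPU cycles per check and up to 10 ms of extra latency.
    time.sleep(0.05)
    signal.set()                     # the "device" raises its signal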

A1.3.4_2 Scenarios: keyboard/mouse inputs, network, disk I/O, embedded/real-time systems

Keyboard/Mouse Inputs

  • Polling: Continuously checks input devices, suitable for high-frequency inputs (e.g., gaming mice with high polling rates)
  • Interrupt Handling: Preferred for typical use, as inputs are sporadic; interrupts signal keypresses or mouse movements, reducing CPU load

Network

  • Polling: Used in high-throughput network interfaces (e.g., servers with constant data streams) to minimize interrupt overhead
  • Interrupt Handling: Common for typical network traffic, where packets arrive irregularly, allowing efficient CPU use between events

Disk I/O

  • Polling: Suitable for high-speed disk operations (e.g., SSDs in data centers) where frequent checks ensure fast data transfers
  • Interrupt Handling: Preferred for standard disk operations, as I/O events (e.g., read/write completion) occur sporadically, saving CPU resources

Embedded/Real-Time Systems

  • Polling: Often used in simple embedded systems or real-time applications requiring predictable timing (e.g., industrial control systems)
  • Interrupt Handling: Preferred in complex embedded systems (e.g., automotive ECUs) for quick response to critical events like sensor triggers

A1.3.5 Explain OS role in multitasking, resource allocation (HL) (AO2)

A1.3.5_1 Challenges: task scheduling, resource contention, deadlock

Task Scheduling

  • Role: The operating system (OS) manages multiple processes or threads by scheduling CPU time, ensuring efficient multitasking
  • Mechanism: Uses algorithms (e.g., round-robin, priority scheduling) to allocate CPU time, balancing responsiveness and throughput
  • Challenge: Balancing fairness (all tasks get CPU time) with priority (critical tasks execute first) while minimizing overhead from context switching
  • Example: Linux schedules processes using the Completely Fair Scheduler (CFS) to distribute CPU time equitably

Resource Contention

  • Role: The OS allocates shared resources (e.g., CPU, memory, I/O devices) to multiple processes, preventing conflicts
  • Challenge: Multiple processes requesting the same resource (e.g., disk access) can cause delays or bottlenecks
  • Solution: The OS uses resource queues, locks, or semaphores to manage access and ensure orderly execution
  • Example: Windows uses memory management to allocate RAM to applications, preventing overuse by any single process

Deadlock

  • Role: The OS must prevent or resolve situations where processes block each other by holding resources while waiting for others
  • Challenge: Deadlocks occur when processes form a circular wait (e.g., Process A holds Resource 1 and waits for Resource 2, while Process B holds Resource 2 and waits for Resource 1)
  • Solution: The OS employs deadlock prevention (e.g., resource ordering), detection (monitoring resource allocation), or recovery (terminating or rolling back processes)
  • Example: Deadlock-avoidance algorithms such as the Banker's Algorithm check whether granting a resource request would leave the system in a safe state
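
A minimal Python sketch of the resource-ordering idea (the lock names and worker are invented): because every thread acquires the locks in the same global order, the circular wait described above cannot form.

    import threading

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def worker(name):
        with lock_a:                 # always A before B, in every thread
            with lock_b:
                print(name, "holds both resources")

    threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()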

A1.3.6 Describe control system components (HL) (AO2)

A1.3.6_1 Input, process, output, feedback (open/closed-loop)

Input

  • Data or signals received from sensors or user interfaces that initiate system actions
  • Example: Temperature reading from a thermostat sensor

Process

  • The control logic or algorithm that processes input data to determine the appropriate output
  • Typically executed by a controller (e.g., microcontroller or CPU) using predefined rules or computations

Output

  • Actions or signals generated by the system, sent to actuators or devices to achieve the desired effect
  • Example: Activating a heater to adjust room temperature

Feedback

  • Open-Loop: No feedback; system operates without monitoring output effects
  • Example: A washing machine running a fixed cycle without checking water temperature
  • Closed-Loop: Uses feedback to monitor output and adjust inputs for accuracy
  • Example: A thermostat adjusts heating based on continuous temperature feedback
  • Purpose: Feedback ensures system accuracy and adaptability, with closed-loop systems being more precise but complex

A1.3.6_2 Controller, sensors, actuators, transducers, control algorithm

Controller

  • The central component (e.g., microcontroller, PLC) that processes inputs and generates outputs based on a control algorithm
  • Example: A microcontroller in an autonomous vehicle processing sensor data to control steering

Sensors

  • Devices that measure environmental variables (e.g., temperature, pressure, light) and provide input data
  • Example: A motion sensor detecting movement for a security system

Actuators

  • Devices that perform physical actions based on controller output
  • e.g., motors, valves, heaters
  • Example: A servo motor adjusting a robotic arm's position

Transducers

  • Convert one form of energy to another
  • e.g., physical to electrical signals for sensors or actuators
  • Example: A thermocouple converts heat to an electrical signal for temperature sensing

Control Algorithm

  • The logic or program that governs how inputs are processed to produce outputs
  • Example: PID (Proportional-Integral-Derivative) algorithm in a thermostat to maintain stable temperature
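
A minimal PID step in Python (the gains, time step, and toy "plant" are invented, not tuned values):

    def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
        state["integral"] += error * dt                    # I: accumulated error
        derivative = (error - state["prev_error"]) / dt    # D: rate of change
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    state = {"integral": 0.0, "prev_error": 0.0}
    setpoint, temperature = 21.0, 18.0
    for _ in range(3):
        heater = pid_step(setpoint - temperature, state)   # P+I+D control output
        temperature += 0.1 * heater                        # toy model of the room
        print(round(temperature, 2))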

A1.3.7 Explain control systems in real-world applications (HL) (AO2)

A1.3.7_1 Examples: autonomous vehicles, thermostats, elevators, washing machines, traffic signals, irrigation, security systems, doors

Autonomous Vehicles

  • Function: Use sensors (e.g., LIDAR, cameras) to detect surroundings, with controllers processing the data to navigate, avoid obstacles, and control speed/steering
  • Control System: Closed-loop; feedback from sensors adjusts driving actions in real-time
  • Example: Tesla's Autopilot adjusts steering based on lane detection feedback

Thermostats

  • Function: Monitor room temperature via sensors and control heating/cooling systems to maintain a set point
  • Control System: Closed-loop; uses PID algorithms to minimize temperature deviations
  • Example: Nest Thermostat adjusts HVAC based on continuous temperature feedback

Elevators

  • Function: Respond to floor requests, using sensors to detect position and controllers to manage motor operation for smooth travel
  • Control System: Closed-loop; feedback ensures accurate floor alignment and door operation
  • Example: Otis elevators use sensors to stop precisely at requested floors

Washing Machines

  • Function: Control water levels, temperature, and cycle duration based on user settings and sensor inputs (e.g., load weight)
  • Control System: Open-loop for fixed cycles; closed-loop for adaptive features like water level adjustment
  • Example: Modern washers adjust spin speed based on load imbalance detection

Traffic Signals

  • Function: Manage vehicle and pedestrian flow using timers or sensors
  • e.g., vehicle detectors to adjust signal timing
  • Control System: Open-loop for fixed timing; closed-loop for adaptive signals based on traffic flow
  • Example: Smart traffic lights adjust green light duration based on real-time traffic data

Irrigation Systems

  • Function: Control water distribution based on soil moisture sensors or weather data to optimize irrigation
  • Control System: Closed-loop; adjusts water flow based on sensor feedback
  • Example: Smart sprinklers pause watering during rain, detected by moisture sensors

Security Systems

  • Function: Detect intrusions via sensors (e.g., motion, door/window sensors) and trigger alarms or notifications
  • Control System: Closed-loop; feedback from sensors activates responses like alerts or camera recording
  • Example: Ring security systems trigger alarms when motion is detected

Automatic Doors

  • Function: Use sensors (e.g., infrared, pressure) to detect approaching objects and control door opening/closing
  • Control System: Closed-loop; feedback ensures doors open only when needed and close safely
  • Example: Supermarket sliding doors open when motion sensors detect a person

A1.4.1 Evaluate interpreters vs compilers (AO3)

A1.4.1_1 Mechanics, use-cases

Interpreters

  • Mechanics: Execute code line-by-line, translating and running each statement in real-time without creating an intermediate file
  • Example: Python interpreter reads and executes Python scripts directly
  • Use-Cases: Rapid development, scripting, and testing; ideal for dynamic environments like web scripting or interactive shells
  • Example: JavaScript in browsers for dynamic web content

Compilers

  • Mechanics: Translate entire source code into machine code or bytecode before execution, producing an executable file
  • Example: A C++ compiler generates an executable (e.g., .exe on Windows) from source code
  • Use-Cases: Performance-critical applications, large-scale software, and systems requiring optimized execution
  • Example: C for operating system kernels or game engines

A1.4.1_2 Error detection, translation time, portability, JIT, bytecode

Error Detection

  • Interpreters: Detect errors during runtime, stopping at the first error encountered (e.g., syntax or runtime errors)
  • Advantage: Immediate feedback during development
  • Disadvantage: Errors in later code may go unnoticed until execution
  • Compilers: Detect errors during compilation (e.g., syntax, type errors), preventing execution until all errors are fixed
  • Advantage: Catches errors across the entire program before running
  • Disadvantage: Requires recompilation after fixes, slowing development

Translation Time

  • Interpreters: No separate translation phase; execution is immediate but slower due to real-time translation
  • Compilers: Require a separate compilation phase, which can be time-consuming but results in faster execution

Portability

  • Interpreters: Highly portable; code runs on any platform with the interpreter installed (e.g., Python scripts run on any OS with Python)
  • Compilers: Less portable; compiled executables are platform-specific unless targeting a virtual machine (e.g., Java bytecode)

Just-In-Time (JIT) Compilation

  • Combines interpreter and compiler traits, compiling code at runtime into machine code for faster execution
  • Example: Java's JVM uses JIT to compile bytecode dynamically, improving performance

Bytecode

  • Intermediate representation used by compilers (e.g., Java, Python) for portability across platforms
  • Executed by virtual machines (e.g., JVM, Python VM), which interpret or JIT-compile bytecode to machine code
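
CPython makes the bytecode idea easy to see: compile a function once, then inspect the instructions its virtual machine executes (implementations like PyPy JIT-compile the same kind of code):

    import dis

    def add(a, b):
        return a + b

    dis.dis(add)   # prints the stack-machine instructions the Python VM executes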

A1.4.1_3 Scenarios: rapid development, performance-critical, cross-platform

Rapid Development

  • Interpreters: Preferred for quick prototyping, scripting, or iterative development due to immediate execution and flexibility
  • Example: Python for data analysis scripts or web development with JavaScript
  • Compilers: Slower for development due to compilation but used when performance outweighs development speed

Performance-Critical

  • Interpreters: Less suitable due to slower runtime execution from line-by-line interpretation
  • Compilers: Preferred for applications requiring high performance, such as games or embedded systems, due to optimized machine code
  • Example: C++ for game engines like Unreal Engine

Cross-Platform

  • Interpreters: Excel in cross-platform scenarios, as code runs on any system with the interpreter
  • Example: Python scripts run identically on Windows, macOS, or Linux
  • Compilers: Support cross-platform via bytecode (e.g., Java) or recompilation for each platform, requiring more effort
  • Example: Java's "write once, run anywhere" using JVM

Summary Comparison

  • Mechanics — Interpreters: Line-by-line execution; Compilers: Full code translation to machine code
  • Error Detection — Interpreters: Runtime, immediate feedback; Compilers: Compile-time, comprehensive checks
  • Translation Time — Interpreters: None, immediate execution; Compilers: Separate compilation phase
  • Portability — Interpreters: High, runs on any platform with the interpreter; Compilers: Lower, platform-specific or via bytecode
  • Performance — Interpreters: Slower due to real-time translation; Compilers: Faster due to optimized machine code
  • Use-Cases — Interpreters: Rapid development, scripting; Compilers: Performance-critical, large systems