Memory Management: Segmentation and Paging

Processors handle memory management through two fundamental mechanisms: segmentation and paging.

Segmentation

Segmentation is a crucial mechanism that partitions the processor's addressable memory space, known as the linear address space, into protected address spaces referred to as segments. These segments serve to isolate individual code, data, and stack modules, enabling the concurrent execution of multiple programs or tasks on the same processor without causing interference.

In protected mode, segmentation is mandatory as there's no mode bit to disable it. The use of paging, however, is optional. Segmentation plays a pivotal role in ensuring program isolation and security by enforcing boundaries between segments. Each program or task can have its own set of segments, and the processor ensures that one program cannot disrupt the execution of another by writing into its segments. Additionally, segmentation allows for the typing of segments, restricting the operations that can be performed on them.

All segments within a system reside in the processor's linear address space. To locate a specific byte within a segment, a logical address, also known as a far pointer, is required. This logical address comprises a segment selector and an offset. The segment selector is a unique identifier for a segment and provides an index into a descriptor table, such as the global descriptor table (GDT), to access a data structure called a segment descriptor. Each segment has its own segment descriptor, specifying the segment's size, access rights, privilege level, type, and the base address within the linear address space.

To find a byte within a segment, the offset part of the logical address is added to the base address specified in the segment descriptor. The resulting sum forms a linear address within the processor's linear address space.

Paging

Paging divides the linear (virtual) address space into uniform-sized units known as pages, allowing the operating system to dynamically transfer these pages between memory and external storage devices, usually a disk, as required. Paging complements segmentation by providing a mechanism for implementing a demand-paged, virtual-memory system. It allows sections of a program's execution environment to be mapped into physical memory as needed. Paging can also be employed to isolate multiple tasks. Unlike segmentation, paging is optional in the IA-32 architecture.

These two memory management mechanisms, segmentation and paging, can be configured to support various systems, including simple single-program systems, multitasking systems, or multiprocessor systems that use shared memory.

Figure: IA-32 Memory Management (credit: Intel processor manual)

Let's delve into the inner workings of memory management, examining each component of this diagram in detail.

The Starting Point: Logical Address (Far Pointer)

Our journey into memory management begins with the logical address, represented in binary form. This logical address, often referred to as a "far pointer," embodies the essence of the memory request. It's the electronic equivalent of GPS coordinates, pointing to a specific location within the vast landscape of memory.

Mapping to Segment Selector and Offset

The logical address undergoes a crucial transformation upon entering the CPU's Memory Management Unit (MMU). It is dissected into two fundamental components: the segment selector and the offset. This division is akin to breaking down a destination into coordinates – the segment selector tells where to look, and the offset indicates how far to go within that location.
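To make that split concrete, here is a minimal sketch in C (not from the original text) of what a far pointer carries; the struct name, field names, and the example selector/offset values are purely illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* A protected-mode logical address ("far pointer"): a 16-bit segment
   selector plus a 32-bit offset. Names and values here are illustrative. */
struct far_pointer {
    uint16_t selector;  /* which segment descriptor to consult */
    uint32_t offset;    /* how far into that segment to go */
};

int main(void)
{
    struct far_pointer p = { .selector = 0x001B, .offset = 0x00401000 };
    printf("selector = %#06x, offset = %#010x\n", p.selector, p.offset);
    return 0;
}
```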

Segment Selector Mapped to GDT (Global Descriptor Table)

The segment selector serves as a key to unlock a wealth of information stored in the Global Descriptor Table (GDT). The GDT is a data structure residing in memory, which the CPU uses to gain insights into the system's memory layout. Think of it as consulting a comprehensive map that provides details about the memory's structure and access permissions.

GDT Contains Segment Descriptor

Within the GDT lies a treasure trove of segment descriptors. Each of these descriptors functions like a blueprint, detailing the characteristics and properties of a specific segment in memory. It's analogous to consulting architectural plans before navigating a complex structure. These descriptors contain critical information, including segment size, access rights, privilege level, segment type, and the base address of the segment.

Segment Descriptor and Offset Mapped to Segment Base Address

The segment descriptor, coupled with the offset, plays a vital role in guiding the CPU towards the base address of the selected segment. This process is equivalent to your GPS leading you to the starting point of your destination within a vast city. The base address is the entry point to the specific memory segment of interest.

Mapping to Linear Address Space

The segment's base address takes us into the realm of the linear address space. This is a contiguous, virtual memory address space that spans the entire range of addressable memory locations. Think of it as the canvas upon which electrons paint their memory tapestry. It's a seamless and continuous space, abstracted from the underlying physical memory hardware.

Linear Address Calculation (Offset + Base Address)

Within this linear address space, the CPU performs a simple arithmetic operation. It adds the offset (which is a measure of distance) to the base address. This operation calculates the precise memory location within the linear address space. It's akin to setting GPS coordinates within a vast geographical landscape, determining the exact point of interest.
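As a rough sketch of this arithmetic, the following C fragment (illustrative only, with a deliberately stripped-down descriptor holding just a base and a limit) computes a linear address and treats an out-of-limit offset as a fault, loosely mirroring the general-protection check the hardware performs.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stripped-down segment descriptor: real IA-32 descriptors also carry
   type, privilege-level, and granularity bits, omitted here. */
struct segment_desc {
    uint32_t base;   /* where the segment starts in the linear address space */
    uint32_t limit;  /* last valid offset within the segment */
};

/* base + offset, with a crude out-of-limit check standing in for #GP. */
static uint32_t to_linear(const struct segment_desc *d, uint32_t offset)
{
    if (offset > d->limit) {
        fprintf(stderr, "fault: offset %#x exceeds segment limit %#x\n",
                offset, d->limit);
        exit(EXIT_FAILURE);
    }
    return d->base + offset;
}

int main(void)
{
    struct segment_desc data = { .base = 0xC0000000u, .limit = 0x000FFFFFu };
    printf("linear = %#010x\n", to_linear(&data, 0x1000)); /* 0xC0001000 */
    return 0;
}
```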

Linear Address to Page Directory Entry

If the computer system employs paging, which is a memory management scheme commonly used in modern operating systems, the linear address is further utilized. The linear address, in this context, is employed to locate a specific entry within the Page Directory. The Page Directory serves as a roadmap, guiding the electron towards the destination.

Page Directory Entry to Page Table

The Page Directory entry functions as an index, pointing the way to a Page Table. In a sense, this is like navigating through a library and finding the correct bookshelf where the information you seek is stored. The Page Table contains essential information about how the linear address corresponds to a specific physical address. It's the intermediary step in our journey.

Offset Within Page Table Entry

Inside the Page Table entry, the offset plays a crucial role. It's like opening a book and finding the exact page number you're looking for. In this case, the offset tells us which specific page in physical memory corresponds to the desired virtual location. It narrows down the search, taking us one step closer to the final destination.

Page Table Entry to Physical Address

Finally, the Page Table entry provides the electron with the exact physical memory location where the data or instruction it's seeking resides. It's akin to opening the book to the right page and finding the desired information. This physical address is the ultimate goal of our memory management journey.
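To see how a linear address is carved up on the way to the Page Directory, the Page Table, and the final offset, here is a small sketch assuming the classic 32-bit, non-PAE scheme with 4 KB pages (10 bits of directory index, 10 bits of table index, 12 bits of page offset); the macro names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Classic 32-bit, non-PAE paging with 4 KB pages splits a linear address
   into a page-directory index, a page-table index, and a byte offset. */
#define PDE_INDEX(la)   (((la) >> 22) & 0x3FFu)  /* bits 31..22 */
#define PTE_INDEX(la)   (((la) >> 12) & 0x3FFu)  /* bits 21..12 */
#define PAGE_OFFSET(la) ((la) & 0xFFFu)          /* bits 11..0  */

int main(void)
{
    uint32_t linear = 0xC0001234u;  /* example linear address */
    printf("page-directory index = %u\n", PDE_INDEX(linear));
    printf("page-table index     = %u\n", PTE_INDEX(linear));
    printf("offset within page   = %#x\n", PAGE_OFFSET(linear));
    /* The physical address is the page frame found in the page-table
       entry, combined with the 12-bit offset. */
    return 0;
}
```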

In the absence of paging, the processor's linear address space is mapped directly onto the physical address space, much like someone who skips college and goes straight to applying for jobs, learning from books on their own. However, in today's multitasking computing environments, there's a significant challenge. These systems often define a linear address space that's far larger than what's practically feasible to store entirely in physical memory. It's like having a library catalog that lists thousands of books, but your physical library can only hold a fraction of them. To bridge this gap and create the illusion of a vast, continuous memory space, we rely on a clever concept known as "paging." Paging acts as the magician's wand, transforming this seemingly limited physical memory into a dynamic and expansive virtual memory environment.

Paging operates like a "virtual memory" wizard, where the immense linear address space is simulated using a combination of actual physical memory (RAM and ROM) and a storage space, akin to an archive or warehouse, often referred to as disk storage.

Here's how this magic trick works:

Breaking it into Pages: To start, the large linear address space is divided into smaller, manageable units called "pages." These pages are typically 4 kilobytes each. Think of these pages as building blocks, and the entire memory as a vast collection of LEGO pieces.

Storage Locations:

These pages can be found in different locations, each serving a unique purpose:

Physical Memory (RAM):

Some pages are kept in physical memory, which is the fastest storage location. Think of it as your working desk, where you keep the most relevant books for immediate access.

Read-Only Memory (ROM):

Certain pages, especially those containing essential system instructions and data, reside in ROM. These pages are like ancient scrolls, providing timeless knowledge.

Disk Storage:

Due to the constraints of physical memory size, not all pages can fit at once. So, some pages are stored in disk storage, such as a hard drive or SSD. This disk storage is like a vast archive that houses books you don't need right now.

Page Directory and Page Tables:

To manage this complex arrangement of pages, the operating system (or its core executive component) maintains a set of data structures:

Page Directory:

Think of the page directory as the master index of the entire virtual memory system. It provides an overview of where each page can be found, similar to a catalog in a library.

Page Tables:

Page tables are like indices specific to certain chapters within a book. Each page table corresponds to a particular page or range of pages, providing detailed information about their storage locations, whether in physical memory or on disk.
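For a feel of what one of these bookkeeping entries looks like, here is a simplified sketch of a 32-bit page-table entry (4 KB pages, no PAE). Only a few of the architecturally defined bits are shown, and the macro and function names are descriptive rather than official.

```c
#include <stdint.h>
#include <stdio.h>

/* A few flag bits of a 32-bit page-table entry (simplified). */
#define PTE_PRESENT    (1u << 0)    /* page is resident in physical memory */
#define PTE_WRITABLE   (1u << 1)    /* writes are allowed */
#define PTE_USER       (1u << 2)    /* user-mode (ring 3) access allowed */
#define PTE_ACCESSED   (1u << 5)    /* set by hardware when the page is read */
#define PTE_DIRTY      (1u << 6)    /* set by hardware when the page is written */
#define PTE_FRAME_MASK 0xFFFFF000u  /* bits 31..12: physical page frame */

/* Build an entry mapping one 4 KB page at the given physical address. */
static inline uint32_t make_pte(uint32_t phys, uint32_t flags)
{
    return (phys & PTE_FRAME_MASK) | flags;
}

int main(void)
{
    /* Example: a present, writable, kernel-only page at physical 0x00200000. */
    uint32_t pte = make_pte(0x00200000u, PTE_PRESENT | PTE_WRITABLE);
    printf("pte = %#010x\n", pte);
    return 0;
}
```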

Now, let's explore how this magical transformation happens when a program or task needs to access a specific memory address within the linear address space:

Linear Address Request:

When a program requires access to a particular memory address within the linear address space, it sends this request to the CPU's Memory Management Unit (MMU). Think of it as asking the librarian for a specific book.

Consulting Page Directory and Page Tables:

The MMU consults the page directory and page tables to perform address translation. It's like the librarian referring to the library catalog and shelf indices to find the exact book you're looking for.

Translation to Physical Address:

The MMU translates the linear address into a physical address. It's akin to the librarian leading you to the exact shelf where the book is located, providing the page's physical address in memory.

Now, here's where the real magic happens:

Handling Page Faults: Sometimes, the requested page is not currently present in physical memory. In this scenario, the processor raises what's called a "page-fault exception." Think of it as trying to access a book that's not on the library's shelves but stored away in the archives. When a page fault occurs:

Interrupting Execution:

The processor temporarily pauses the program's execution. It's like putting a bookmark in your reading when you can't find the page you're looking for.

Operating System Steps In:

The operating system or executive takes charge of the page fault situation. It's similar to the librarian coming to your aid when you can't locate a book.

Page Retrieval:

The OS or executive retrieves the required page from disk storage and loads it into physical memory. This process is akin to the librarian fetching the requested book from the archives and placing it on your desk.

Resuming Execution:

Once the necessary page is in physical memory, the program's execution resumes seamlessly. It's like picking up your reading right where you left off, now that you have the book you were searching for.
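Putting those steps together, here is a schematic, hypothetical outline of the page-fault flow in C; it is not runnable as-is, and every function named in it is a placeholder for real kernel machinery rather than an actual API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Placeholders standing in for real kernel facilities. */
bool  pte_present(uint32_t linear_addr);           /* is the page in RAM?   */
void *alloc_physical_frame(void);                  /* pick a free frame     */
void  read_page_from_disk(uint32_t la, void *f);   /* fetch from swap/file  */
void  map_page(uint32_t la, void *frame);          /* update the page table */
void  resume_faulting_instruction(void);           /* re-run the access     */

/* Invoked (via the OS) when the processor raises a page-fault exception. */
void handle_page_fault(uint32_t faulting_linear_address)
{
    if (!pte_present(faulting_linear_address)) {
        /* Execution is paused; the OS now works on the program's behalf. */
        void *frame = alloc_physical_frame();

        /* Retrieve the missing page from disk storage. */
        read_page_from_disk(faulting_linear_address, frame);

        /* Point the page-table entry at the newly loaded frame. */
        map_page(faulting_linear_address, frame);
    }

    /* Execution resumes; the access is retried and now succeeds. */
    resume_faulting_instruction();
}
```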

The beauty of paging, when implemented correctly within the operating system or its executive layer, is that the swapping of pages between physical memory and disk is entirely transparent to the correct execution of a program. It's as if the librarian's actions happen behind the scenes, and you can continue reading without interruption.

Even more impressively, programs originally designed for older 16-bit processors can seamlessly take advantage of paging when they run in virtual-8086 mode.

Basic Flat Model

The most straightforward memory model in a computer system is the basic "flat model." In this model, both the operating system and application programs have access to a seamless, unsegmented address space. This approach effectively conceals the underlying segmentation mechanism of the architecture, making it transparent to both system designers and application programmers.

When implementing a basic flat memory model using the IA-32 architecture, a minimum of two segment descriptors needs to be created: one for referencing a code segment and another for referencing a data segment. Intriguingly, these segments are both mapped to the entire linear address space, with identical base address values of 0 and segment limits of 4 gigabytes. By setting the segment limit to 4 gigabytes, the segmentation mechanism is cleverly designed to avoid generating exceptions for out-of-limit memory references, even in cases where no physical memory is present at a particular address.
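A minimal sketch of what those two flat-model descriptors might look like is shown below; the packing helper, the function names, and the example values follow the IA-32 descriptor format but are an illustration, not code from the original text.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack an 8-byte IA-32 segment descriptor from a base address, a 20-bit
   limit, an access byte, and four flag bits. */
uint64_t make_descriptor(uint32_t base, uint32_t limit,
                         uint8_t access, uint8_t flags)
{
    uint64_t d = 0;
    d |= (uint64_t)(limit & 0xFFFFu);               /* limit bits 15..0  */
    d |= (uint64_t)(base  & 0xFFFFFFu)      << 16;  /* base  bits 23..0  */
    d |= (uint64_t)access                   << 40;  /* access rights     */
    d |= (uint64_t)((limit >> 16) & 0xFu)   << 48;  /* limit bits 19..16 */
    d |= (uint64_t)(flags & 0xFu)           << 52;  /* G, D/B, L, AVL    */
    d |= (uint64_t)((base >> 24) & 0xFFu)   << 56;  /* base  bits 31..24 */
    return d;
}

/* Basic flat model: a null entry, then code and data segments with base 0
   and a 4 GB limit (limit 0xFFFFF counted in 4 KB units because the
   granularity flag is set). 0x9A/0x92 = present, DPL 0, code/data. */
void build_flat_gdt(uint64_t gdt[3])
{
    gdt[0] = 0;                                       /* mandatory null entry */
    gdt[1] = make_descriptor(0, 0xFFFFF, 0x9A, 0xC);  /* flat code segment    */
    gdt[2] = make_descriptor(0, 0xFFFFF, 0x92, 0xC);  /* flat data segment    */
}

int main(void)
{
    uint64_t gdt[3];
    build_flat_gdt(gdt);
    printf("code descriptor = %#018llx\n", (unsigned long long)gdt[1]);
    printf("data descriptor = %#018llx\n", (unsigned long long)gdt[2]);
    return 0;
}
```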

Interestingly, in the physical address space, read-only memory (ROM or EPROM) is typically situated at the highest address locations because the processor commences execution at the address FFFF_FFF0H. Conversely, random-access memory (RAM or DRAM) is positioned at the lowest address locations due to the initial base address for the DS data segment after reset initialization being set to 0. This arrangement adds a layer of efficiency and logic to the memory management strategy within the IA-32 architecture.

Segment Registers: In the IA-32 architecture, segment registers are an essential part of memory management. These registers include CS (Code Segment), DS (Data Segment), SS (Stack Segment), ES (Extra Segment), FS, and GS. Each of these registers serves a specific purpose in managing memory access.

In the x86 architecture, segment registers play a crucial role in memory management and segmentation. These registers define various segments of memory and control how memory is accessed and protected. Let's delve into each segment register:

  1. CS (Code Segment):

    • Role: The CS register points to the segment that contains the currently executing code. It determines the base address for fetching instructions.
    • Usage: CS is involved in instruction fetching, and it also influences the privilege level of code execution (ring 0 for kernel mode, ring 3 for user mode).
    • Protection: Code execution rights and privilege levels are enforced based on the code segment.
  2. DS (Data Segment):

    • Role: The DS register typically points to the segment that holds program data, such as variables and data structures.
    • Usage: DS is used for data access instructions. When you load or store data, DS specifies the base address.
    • Protection: Data access rights are enforced based on the data segment.
  3. ES (Extra Segment):

    • Role: ES is a general-purpose segment register with no predefined role. It can be used for various purposes as needed.
    • Usage: Programmers can map ES to different segments for specific data operations or for accessing extra data segments.
    • Customization: ES is flexible and can be customized to suit application requirements, like accessing thread-local storage.
  4. SS (Stack Segment):

    • Role: SS points to the segment used for stack operations. It defines the base address for the stack.
    • Usage: All stack-related instructions, like push and pop, use SS to determine the stack's location.
    • Protection: SS plays a crucial role in stack operations and helps maintain stack integrity.
  5. FS (Additional Data Segment):

    • Role: Similar to ES, FS is a general-purpose segment register. It doesn't have a predefined role and can be used for various purposes.
    • Usage: Programmers can map FS to different segments for specific data or thread-local storage.
    • Customization: FS offers flexibility and can be customized based on specific programming needs.
  6. GS (Additional Data Segment):

    • Role: GS is another general-purpose segment register without a predefined role.
    • Usage: Like FS, GS can be mapped to custom segments and used for specific data operations or addressing.
    • Flexibility: GS provides flexibility in memory management and can be adapted to application requirements.

Segment registers determine the base addresses for code, data, stack, and other segments, influencing how instructions access memory and ensuring that different segments do not interfere with each other. While CS, DS, and SS have well-defined roles, ES, FS, and GS are versatile and can be adapted for various purposes, making them valuable tools for system designers and programmers working on complex memory management tasks, such as multithreading or custom memory structures.

Code and Data Segment Descriptors: Segment descriptors are data structures that provide detailed information about a segment's properties and characteristics. In the context of the basic flat model, we primarily focus on the code and data segment descriptors. These descriptors contain crucial information such as:

  • Access Rights: This field specifies what actions are allowed in the segment. It includes permissions like read, write, execute, and privilege level information.

  • Limit: The limit defines the size of the segment. It determines the range of valid addresses within the segment.

  • Base Address: The base address indicates where the segment starts in the linear address space. It's the reference point from which memory addresses within the segment are calculated.

Mapping Segment Registers to Segment Descriptors: In the basic flat model, the segment registers (CS, DS, SS, etc.) are mapped to specific segment descriptors. This mapping establishes which descriptor governs the properties of a particular segment. Let's look at how this mapping works:

  1. CS (Code Segment): CS is typically mapped to the code segment descriptor. The code segment descriptor holds information about the code segment's access rights, limit, and base address. This ensures that when the processor fetches instructions for execution, it knows the boundaries and permissions associated with the code segment.

  2. DS (Data Segment): DS is mapped to the data segment descriptor. Similarly, the data segment descriptor defines how data in the segment can be accessed, its size limit, and where it starts in the linear address space. DS is commonly used for variables and data storage.

  3. SS (Stack Segment): SS is mapped to the stack segment descriptor. This descriptor specifies how the stack segment should be treated in terms of access control, size limits, and the starting point within the linear address space. The stack segment is crucial for managing function calls and local variables.

  4. ES (Extra Segment), FS, and GS: These additional segment registers, ES, FS, and GS, can be mapped to other segment descriptors if needed. They offer flexibility for data manipulation and specialized memory management tasks.

Access and Limit in Linear Address Space: Once the segment registers are mapped to their respective segment descriptors, the access rights and limit fields in those descriptors come into play when referencing memory within the linear address space.

  • Access Rights: The access rights field in the segment descriptor dictates what operations can be performed within that segment. For example, for the code segment (CS), it specifies whether code execution is allowed, while for the data segment (DS), it determines whether data can be read or written. These permissions help enforce security and access control.

  • Limit: The limit field defines the size of the segment, restricting memory access to addresses within this range. It ensures that memory references outside this range trigger exceptions, preventing unauthorized access or unintended operations.

Base Address in Linear Address Space: The base address field in the segment descriptor provides the starting point for the segment within the linear address space. When combined with an offset or address offset within the segment, it forms a complete memory address.

For example, if the base address for the data segment (DS) is set to 0xC0000000, and you want to access a variable located at offset 0x1000 within the data segment, the linear address would be 0xC0001000. This is because the base address (0xC0000000) is added to the offset (0x1000) to calculate the linear address.

Protected Flat Model

The protected flat model, a variation of the basic flat model, incorporates a crucial enhancement by setting segment limits to encompass only the range of addresses where physical memory actually exists. When an attempt is made to access memory beyond this range, a general-protection exception (#GP) is triggered. This model thus offers a rudimentary level of hardware protection against certain types of program errors.

Interestingly, this protected flat model can be enriched with additional complexity to enhance security further. For instance, to establish isolation between user and supervisor code and data through the paging mechanism, four distinct segments must be defined: code and data segments at privilege level 3 for user access, and code and data segments at privilege level 0 for supervisor access. Typically, these segments overlap one another and originate from address 0 in the linear address space. This flat segmentation model, combined with a straightforward paging structure, serves not only to safeguard the operating system from applications but also to safeguard applications from one another when a separate paging structure is added for each task or process. Remarkably, this design approach is employed by several popular multitasking operating systems, underlining its effectiveness in maintaining system stability and security.
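Continuing the earlier flat-model sketch, the four overlapping segments described above could be set up roughly as follows; make_descriptor() is the same hypothetical packing helper defined earlier, 0xFA/0xF2 are the DPL 3 counterparts of the ring-0 access bytes, and the 4 GB limits are kept for simplicity even though the protected flat model would shrink them to the range actually backed by physical memory.

```c
#include <stdint.h>

/* Defined in the earlier flat-model sketch. */
uint64_t make_descriptor(uint32_t base, uint32_t limit,
                         uint8_t access, uint8_t flags);

/* Protected flat model with user/supervisor separation: all four segments
   overlap and start at linear address 0. */
void build_protected_flat_gdt(uint64_t gdt[5])
{
    gdt[0] = 0;                                       /* null entry              */
    gdt[1] = make_descriptor(0, 0xFFFFF, 0x9A, 0xC);  /* supervisor code (DPL 0) */
    gdt[2] = make_descriptor(0, 0xFFFFF, 0x92, 0xC);  /* supervisor data (DPL 0) */
    gdt[3] = make_descriptor(0, 0xFFFFF, 0xFA, 0xC);  /* user code (DPL 3)       */
    gdt[4] = make_descriptor(0, 0xFFFFF, 0xF2, 0xC);  /* user data (DPL 3)       */
}
```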

Multi-Segment Memory Model

The multi-segment memory model takes full advantage of the segmentation mechanism in the x86 architecture to enforce hardware-based protection for code, data structures, programs, and tasks. In this model, each program or task is allocated its unique set of segment descriptors and associated segments. These segments can either be exclusively private to the assigned program or shared among multiple programs. Hardware controls access to all segments and the execution environments of individual programs running on the system.

Figure: Multi-Segment Model (from the Intel manual)

Here are some key aspects of the multi-segment model:

  1. Segmentation for Protection: One of the primary purposes of this model is to ensure robust security and protection. Each program or task operates within its own set of segments, isolating it from others. Access checks are applied rigorously to prevent unauthorized access and operations. For example, code segments can be designated as read-only, and the hardware can enforce this, preventing any writes into code segments. This segmentation-based protection enhances system security.

  2. Custom Segment Descriptors: In the multi-segment model, every program or task gets its table of segment descriptors. These descriptors contain crucial information about each segment, such as its base address, limit, and access rights. The access rights are particularly important as they determine the operations allowed within a segment. The hardware leverages this information to perform access control and validation.

  3. Private or Shared Segments: Programs or tasks have the flexibility to use segments that are entirely private to them or shared among multiple entities. This means that certain segments can be designed to be accessible and modifiable by specific programs, while others can be shared for common resources or code libraries. This flexibility enables efficient memory usage and resource sharing.

  4. Hardware-Enforced Protection Rings: Protection levels or rings can be established using the access rights information. This means that segments can be assigned different privilege levels, with lower levels having fewer privileges. For instance, operating system procedures can be placed in segments with higher privilege levels, safeguarding them from unauthorized access by application programs running at lower privilege levels.

  5. Access Checks: Access checks extend beyond merely verifying that an address falls within the segment's limit. They also involve verifying that the operations being performed within a segment are allowed. For instance, code segments can be marked as read-only, and any attempt to write data into them would be denied by the hardware.

In summary, the multi-segment model in the x86 architecture utilizes segmentation to create a robust and secure memory management system. It offers fine-grained control over memory access, enabling programs and tasks to operate within their isolated segments while also allowing for shared segments for efficient resource utilization. The access rights and privilege levels provided by segment descriptors, along with hardware-based protection mechanisms, ensure that memory is accessed and modified according to predefined rules, enhancing the reliability and security of the computing system.

Intro to 64-Bit Mode

In 64-bit mode, the x86 processor largely disables segmentation, resulting in a flat and continuous 64-bit linear-address space. This mode simplifies memory addressing by treating the segment base of CS (Code Segment), DS (Data Segment), ES (Extra Segment), and SS (Stack Segment) as zero, effectively making the linear address equal to the effective address for most operations. However, there are exceptions for the FS and GS segments.

Here's a breakdown of how segmentation works in 64-bit mode:

  1. Flat Linear Address Space: In this mode, the processor eliminates the complexity of segmentation by setting the segment base of CS, DS, ES, and SS to zero. As a result, when you calculate a linear address, it is essentially the same as the effective address. This simplifies memory management and addressing for most operations.

  2. FS and GS Segments: The FS and GS segments are exceptions to this flat model. These segment registers still hold valid segment base addresses. They can be used as additional base registers in linear address calculations. This feature is particularly useful for addressing local data and certain operating system data structures. (A minimal sketch of setting the FS base follows this list.)

  3. Segment Limit Checks: Unlike in earlier modes, the processor does not perform segment limit checks at runtime in 64-bit mode. In previous x86 modes, the hardware would check if an address falls within the segment's limit, providing a level of protection against memory access violations. However, in 64-bit mode, these checks are largely disabled, and the responsibility for ensuring memory access falls within valid bounds is shifted to software and the operating system.
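As a follow-up to the FS/GS exception in point 2 above, here is a hypothetical, kernel-side sketch of setting the FS base in 64-bit mode through the IA32_FS_BASE model-specific register; the MSR numbers come from the Intel SDM, the code must run at privilege level 0 (user programs would instead go through the OS, e.g. arch_prctl on Linux, or use WRFSBASE where enabled), and the helper names are made up for the example.

```c
#include <stdint.h>

/* MSR numbers for the FS and GS segment bases in 64-bit mode. */
#define IA32_FS_BASE 0xC0000100u
#define IA32_GS_BASE 0xC0000101u

/* WRMSR takes the MSR number in ECX and the 64-bit value split across
   EDX:EAX. The instruction is privileged (ring 0 only). */
static inline void wrmsr(uint32_t msr, uint64_t value)
{
    uint32_t lo = (uint32_t)value;
    uint32_t hi = (uint32_t)(value >> 32);
    __asm__ volatile("wrmsr" : : "c"(msr), "a"(lo), "d"(hi));
}

/* Hypothetical helper: point FS at a per-thread data block. */
static void set_fs_base(uint64_t base)
{
    wrmsr(IA32_FS_BASE, base);
}
```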

Within the system architecture of protected mode, the processor employs a two-stage process of address translation to reach a physical address: logical-address translation and linear address space paging.

Even in scenarios where segments are used minimally, each byte within the processor's address space is accessed via a logical address. This logical address comprises a 16-bit segment selector and a 32-bit offset, as depicted in Figure 3-5. The segment selector plays the role of identifying the specific segment in which the byte is situated, while the offset pinpoints the byte's location within the segment, relative to the segment's base address.

Subsequently, the processor undertakes the translation of each logical address into a linear address. A linear address is a 32-bit address located within the processor's linear address space. Similar to the physical address space, this linear address space is a seamless, unsegmented area encompassing a total of 4,294,967,296 bytes (2^32), spanning from 0 to FFFFFFFFH. Notably, the linear address space accommodates all the segments and system tables that are defined for a given system.

To effect the translation of a logical address into a linear address, the processor follows a three-step process:

  1. It uses the index in the segment selector to locate the segment descriptor corresponding to the segment in either the Global Descriptor Table (GDT) or the Local Descriptor Table (LDT) and then reads this descriptor into the processor. This step is needed only when a new segment selector is loaded into a segment register.

  2. The processor scrutinizes the segment descriptor, inspecting the access rights and the segment's range to ascertain that the segment is accessible and that the offset falls within the confines of the segment.

  3. To culminate the translation, the processor adds the base address of the segment (extracted from the segment descriptor) to the offset. This addition yields a linear address, pinpointing the precise location of the byte in memory.

This process ensures that logical addresses are successfully translated into corresponding linear addresses, facilitating the retrieval of data from the physical memory for program execution and data manipulation.

Segment selectors play a vital role in the memory management of a protected mode system. These 16-bit identifiers, as illustrated in Figure 3-6, serve as references to segment descriptors that define specific memory segments. A segment selector is composed of several components (a small decoding sketch follows the list below):

  1. Index (Bits 3 through 15): This field selects one of the 8,192 descriptors available in either the Global Descriptor Table (GDT) or the Local Descriptor Table (LDT). The processor locates the descriptor by multiplying the index value by 8 (the number of bytes in a segment descriptor) and adding the result to the base address of the corresponding descriptor table, which is stored in either the GDTR (Global Descriptor Table Register) or the LDTR (Local Descriptor Table Register), depending on whether the TI (Table Indicator) flag is cleared or set.

  2. TI (Table Indicator) Flag (Bit 2): This flag determines which descriptor table to use. When cleared, it selects the GDT; when set, it designates the LDT.

  3. Requested Privilege Level (RPL) (Bits 0 and 1): These bits specify the privilege level of the selector, ranging from 0 (most privileged) to 3 (least privileged). The relationship between the RPL, the Current Privilege Level (CPL) of the executing program or task, and the Descriptor Privilege Level (DPL) of the descriptor that the segment selector points to is detailed in Section 5.5, "Privilege Levels."
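Here is the small decoding sketch referred to above, assuming the bit layout just described; the macro names and the example selector value are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* RPL in bits 0..1, table indicator (TI) in bit 2, index in bits 3..15. */
#define SEL_RPL(sel)    ((sel) & 0x3u)
#define SEL_TI(sel)     (((sel) >> 2) & 0x1u)     /* 0 = GDT, 1 = LDT */
#define SEL_INDEX(sel)  (((sel) >> 3) & 0x1FFFu)

int main(void)
{
    uint16_t sel = 0x001B;  /* example: index 3, TI = 0 (GDT), RPL = 3 */
    printf("index = %u, table = %s, RPL = %u\n",
           SEL_INDEX(sel), SEL_TI(sel) ? "LDT" : "GDT", SEL_RPL(sel));
    /* The descriptor sits (index * 8) bytes from the descriptor-table base. */
    printf("descriptor offset = %u bytes\n", SEL_INDEX(sel) * 8);
    return 0;
}
```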

It's noteworthy that the first entry within the GDT remains unused by the processor. This particular entry serves as a "null segment selector." When a segment register, except for CS or SS registers, is loaded with a null selector, it doesn't generate an exception. However, if a segment register containing a null selector is used to access memory, an exception, specifically a general-protection exception (#GP), is triggered. A null selector proves valuable for initializing unused segment registers. Yet, loading the CS or SS register with a null segment selector results in the generation of a #GP exception.

While segment selectors are visible to application programs as part of a pointer variable, their values are typically assigned or modified by link editors or linking loaders, not by application programs. This division of responsibility helps maintain the integrity and security of the memory management system.

Moreover, segment registers in the processor are designed to facilitate efficient address translation and minimize coding complexity. There are six segment registers, each associated with a specific type of memory reference, including code, stack, and data. To enable program execution and data manipulation, at least the code-segment (CS), data-segment (DS), and stack-segment (SS) registers must be loaded with valid segment selectors. Additionally, three more data-segment registers (ES, FS, and GS) can be used to make additional data segments accessible to the currently executing program or task.

Notably, for a program to access a particular segment, the corresponding segment selector must be loaded into one of the segment registers. While a system may define numerous segments, only six can be readily available for immediate use. Other segments can be made accessible during program execution by loading their segment selectors into these registers.

Each segment register is composed of a "visible" part and a "hidden" part, sometimes referred to as a "descriptor cache" or a "shadow register." When a segment selector is loaded into the visible part of a segment register, the processor also populates the hidden part of the segment register with essential information from the segment descriptor pointed to by the segment selector. This information encompasses the base address, segment limit, and access control details. This caching of information in the segment register, both visible and hidden, enables the processor to perform address translation efficiently without requiring additional bus cycles to retrieve data from the segment descriptor. However, in systems where multiple processors share the same descriptor tables, it becomes the responsibility of software to reload the segment registers when the descriptor tables undergo modification. Failing to do so may result in the continued use of an outdated segment descriptor cached in a segment register after its memory-resident version has been altered.

Two types of load instructions are available for loading the segment registers:

  1. Direct Load Instructions: These instructions explicitly reference the segment registers and include operations like MOV, POP, LDS, LES, LSS, LGS, and LFS.

  2. Implied Load Instructions: These instructions, such as the far-pointer versions of the CALL, JMP, and RET instructions, the SYSENTER and SYSEXIT instructions, and the IRET, INT n, INTO, INT3, and INT1 instructions, modify the contents of the CS register (and sometimes other segment registers) as a byproduct of their operation.

The MOV instruction can also be employed to store the visible part of a segment register in a general-purpose register.
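For instance, the visible part of DS can be copied into a general-purpose register with a plain MOV; here is a tiny sketch using GCC/Clang inline assembly on x86. Reading a segment register is permitted from user mode, whereas loading one is only meaningful when a matching descriptor exists, so the store direction is the safer illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* MOV from a segment register: copies the visible selector part of DS
   into a general-purpose register. */
static uint16_t read_ds(void)
{
    uint16_t sel;
    __asm__ volatile("mov %%ds, %0" : "=r"(sel));
    return sel;
}

int main(void)
{
    printf("DS selector = %#06x\n", read_ds());
    return 0;
}
```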