Memory management is a critical aspect of operating system performance. This article covers memory management techniques including paging, swapping, compaction, and segmentation.
The operating system manages the computer's resources, controls application launches, and performs tasks such as data protection and system administration. One of the most heavily used resources it manages is memory: the storage area that holds the instructions and data the computer needs to run applications.
When applications or the operating system need more memory than is available, the system must swap some of the current contents of memory out to make room for the contents being requested. Different situations call for different memory management techniques.
Some cases call for the use of paging, while others may require an on-disk cache. Ultimately, choosing a memory management technique is a matter of optimizing performance for the available hardware and software. In this article, we will learn about these different techniques.
Table of contents
- What is memory management?
- Memory allocation schemes
- Static linking and dynamic linking
- Memory management techniques
What is memory management?
Memory management is the process of allocating, freeing, and reorganizing memory in a computer system to optimize the available memory or to make more memory available. It keeps track of every memory location (whether it is free or occupied).
The four main memory management techniques in modern computers are paging, swapping, segmentation, and compaction. Each has different trade-offs, and which one makes the most efficient use of system resources depends on the workload.
Memory allocation schemes
1. Contiguous memory management schemes:
Contiguous memory allocation means assigning continuous blocks of memory to a process. The best example of this is an array, whose elements occupy consecutive memory addresses.
2. Non-contiguous memory management schemes:
The program is divided into blocks (fixed size or variable size) and loaded at different portions of memory. That means program blocks are not stored adjacent to each other.
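To make the contiguous scheme concrete, here is a minimal sketch of a first-fit allocator over a list of free holes. This is an illustration only, not any real OS's allocator; the hole list and the `first_fit` function are invented for this example.

```python
# Minimal sketch of contiguous (first-fit) allocation. Memory holes are
# modeled as (start, size) pairs; real allocators are far more complex.
def first_fit(holes, request):
    """Return a start address for `request` units, or None if no
    single contiguous hole is large enough."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size == request:
                holes.pop(i)                       # hole consumed exactly
            else:
                holes[i] = (start + request, size - request)
            return start
    return None                                    # allocation fails

holes = [(0, 100), (300, 50)]
print(first_fit(holes, 40))   # allocates at address 0
print(holes)                  # remaining holes: [(40, 60), (300, 50)]
```

Note that the request must fit in a single hole: this restriction is exactly what leads to external fragmentation, discussed below.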
Static linking and dynamic linking
1. Static linking
Static linking is the process of incorporating the code of a library into a program so that it is linked into the executable at build time.
- It generally leads to faster program startup times since the library’s code is copied into the executable file.
- It makes it easier to debug programs since all symbols are resolved at compile time.
- It can make programs harder to update since all of the code for a given library is copied into each program.
- The executable file will be large, adding overhead when it is built and loaded.
- This can lead to increased disk usage and memory usage.
- It can lead to symbol clashes, where two different libraries define the same symbol.
2. Dynamic linking
In dynamic linking, the library is not copied into the program’s code. Instead, the program holds a reference to where the library is located; when the program is run, it loads the library from that location.
- Saves memory space.
- Low maintenance cost.
- Shared files are used.
- A page fault can occur when the shared code is not yet in memory; the program then loads the required module into memory.
Fragmentation
Fragmentation is the inability to use all of the available memory. It occurs when most free blocks are too small to satisfy a request, or when allocated blocks are larger than needed. There are two types of fragmentation.
1. External fragmentation
External fragmentation occurs when there is enough free memory in total, but it is not available as a single contiguous block large enough to hold the process that needs to be allocated. The memory has holes (free spaces) scattered across it, and the operating system cannot satisfy the request from any one of them. An analogous effect occurs on disk, where a file may have to be split into several pieces stored in different places. For example, a process needs 5 KB, but the free space is available as 2 KB, 2 KB, and 1 KB holes at different locations in memory, not as a contiguous 5 KB block. The solution is compaction.
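The 5 KB example above can be checked with a few lines. This is an illustrative sketch; the hole sizes are taken from the example, not from any real system.

```python
# External fragmentation: total free memory is sufficient, but no
# single contiguous hole can satisfy the request. Sizes in KB.
holes = [2, 2, 1]          # free holes scattered across memory
request = 5                # the process needs 5 KB

total_free = sum(holes)
fits_somewhere = any(hole >= request for hole in holes)

print(total_free)          # 5 -> enough memory in total
print(fits_somewhere)      # False -> but no contiguous 5 KB hole
```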
2. Internal fragmentation
Internal fragmentation occurs when the operating system allocates more memory than a process needs. This happens because blocks come in fixed sizes, and a process may not fit perfectly into one block; the leftover space inside the block is wasted. For example, if 2 KB is required to store a process and it is allocated a fixed 6 KB block, then 4 KB of space is wasted.
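The wasted space in the example above can be computed directly. This is a sketch using the numbers from the example; `block_size` and `needed` are stand-ins, not values from any real allocator.

```python
import math

# Internal fragmentation with fixed-size blocks: memory wasted inside
# the allocated block(s). Values from the 2 KB / 6 KB example above.
block_size = 6   # KB, size of the fixed allocation unit
needed = 2       # KB, what the process actually requires

blocks = math.ceil(needed / block_size)   # whole blocks allocated
wasted = blocks * block_size - needed     # space lost inside the blocks
print(wasted)                             # 4 KB wasted
```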
Memory management techniques
1. Swapping
When a process is to be executed, it is brought from secondary memory into RAM. But RAM has limited space, so processes have to be moved out of and back into RAM from time to time. This movement is called swapping. Its purpose is to free space for other processes; later, the swapped-out process is brought back into main memory.
Situations in which swapping takes place:
- Under the Round Robin algorithm, a process is preempted when its time quantum expires. The preempted process can be swapped out and a new process swapped in.
- When each process has an assigned priority, a low-priority process is swapped out and a higher-priority process is swapped in. After the higher-priority process finishes, the lower-priority process is swapped back in; this happens fast enough that users typically do not notice it.
- In the shortest remaining time first algorithm, when a newly arrived process in the ready queue has a shorter burst time, the currently executing process is preempted.
- When a process has to perform I/O operations, that process may be temporarily swapped out.
Swapping consists of two operations:
- Swap-in: moving a process from the hard disk back into RAM.
- Swap-out: moving a process from RAM out to the hard disk.
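The two operations can be sketched as a toy model. Everything here is invented for illustration: RAM is assumed to hold at most two processes, and the oldest resident process is evicted first.

```python
# Toy model of swap-in/swap-out between RAM and disk. Assumes RAM
# holds at most two processes and evicts the oldest one. Illustrative.
RAM_CAPACITY = 2
ram = []
disk = ["P1", "P2", "P3"]    # processes waiting in secondary memory

def swap_out(proc):
    """Move a process from RAM out to disk."""
    ram.remove(proc)
    disk.append(proc)

def swap_in(proc):
    """Move a process from disk into RAM, evicting if RAM is full."""
    if len(ram) >= RAM_CAPACITY:
        swap_out(ram[0])     # evict the oldest resident process
    disk.remove(proc)
    ram.append(proc)

swap_in("P1")
swap_in("P2")
swap_in("P3")                # RAM is full: P1 is swapped out first
print(ram)                   # ['P2', 'P3']
print(disk)                  # ['P1']
```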
2. Paging
Paging is a memory management technique in which a process's logical memory is divided into fixed-size blocks called pages, and main memory is divided into fixed-size blocks called frames. A frame has the same size as a page. Processes initially reside in secondary memory and are shifted to main memory (RAM) when required. Each process is divided into parts whose size equals the page size, and each page of a process is stored in one of the memory frames. Paging uses non-contiguous memory allocation: the pages of a process can be stored at different locations in main memory.
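A paging system translates a logical address into a physical one through a page table. Here is a minimal sketch; the page size, the page-table entries, and the `page_translate` function are all hypothetical values made up for this example.

```python
# Sketch of paging address translation. A logical address is split
# into (page number, offset); the page table maps pages to frames.
PAGE_SIZE = 1024                     # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number

def page_translate(logical_addr):
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page]         # a miss here would be a page fault
    return frame * PAGE_SIZE + offset

print(page_translate(1050))   # page 1, offset 26 -> frame 2 -> 2074
```

Because only the page table needs to know where each frame is, the frames themselves can sit anywhere in physical memory, which is what makes paging non-contiguous.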
3. Compaction
Compaction is a memory management technique in which the used blocks of memory in a running system are moved together, combining all of the free spaces into one contiguous region. This reduces the fragmentation problem and improves memory allocation efficiency. Forms of compaction are used by modern operating systems such as Windows, Linux, and macOS. The drawback is that moving memory contents requires a lot of CPU time.
By compacting memory, the operating system can reduce or eliminate fragmentation and make it easier for programs to allocate and use memory.
The compaction process usually consists of two steps:
- Moving all blocks that are in use together toward one end of memory.
- Combining the freed holes into one large contiguous free area.
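The steps above can be sketched on a toy model of memory, here a list of cells where `None` marks a free cell. This is an illustration only; real compaction must also update every pointer into the moved blocks.

```python
# Sketch of compaction: used cells slide to the low end of memory so
# the free space merges into one contiguous region at the other end.
def compact(memory):
    used = [cell for cell in memory if cell is not None]
    free_count = len(memory) - len(used)
    return used + [None] * free_count

memory = ["A", None, "B", None, None, "C"]
print(compact(memory))   # ['A', 'B', 'C', None, None, None]
```

After compaction, a request for three contiguous cells succeeds, even though before compaction no three free cells were adjacent.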
4. Segmentation
Segmentation is another memory management technique used by operating systems. The process is divided into segments of different sizes, which are then placed in main memory. Unlike paging, where the process is divided into fixed-size pages, here the program is divided into logical modules. The corresponding segments are loaded into main memory when the process is executed. Segments typically correspond to logical units of the program, such as its main function and utility functions.
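Address translation under segmentation uses a segment table of base/limit pairs rather than a page table. The sketch below is hypothetical: the segment numbers, bases, and limits are made up, and a real MMU would raise a hardware fault rather than a Python exception.

```python
# Sketch of segmentation address translation. Each segment has a base
# address and a limit (its length); an offset past the limit is an
# out-of-range access (a "segmentation fault").
segment_table = {
    0: {"base": 1000, "limit": 400},   # e.g. code segment
    1: {"base": 5000, "limit": 200},   # e.g. utility functions
}

def seg_translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segmentation fault: offset out of range")
    return entry["base"] + offset

print(seg_translate(0, 100))   # 1000 + 100 -> 1100
print(seg_translate(1, 50))    # 5000 + 50  -> 5050
```

The key contrast with paging is that segments are variable-sized logical units, so the limit check is essential: it is what catches accesses past the end of a segment.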
In response to the increased demand for memory on mobile devices, modern operating systems are able to increase the memory allocation for applications in a more controlled way. This approach ensures that the applications are only allowed to use the memory that is necessary for them, and that the memory is not consumed by the applications that are not actively in use.
In this article, we have covered the main memory management techniques. If you found it helpful, please share it with your friends.
What are the common problems with memory management in Windows systems?
Common problems include: the system does not have enough memory and swaps to disk more and more, which slows it down; a program uses too much memory and the system starts to crash; or a program accesses the wrong memory location, causing it to misbehave.
How to solve the memory management problems?
Check the system specifications to see if the system has enough memory. Reduce the size of programs that are running. Troubleshoot and fix errors that are causing the system to use too much memory.
What is External Fragmentation in OS?
External fragmentation occurs when the total available memory is sufficient for a process but cannot be allocated because the free memory is split into blocks that are individually too small; the program is larger than any available memory hole. External fragmentation generally occurs with dynamic or variable-size partitions. It can be solved using the compaction technique, and it can be prevented by paging or segmentation mechanisms.
What is main memory?
Main memory is like the brain of the computer. It stores the data and instructions required during processing, along with output results. Computer memory is a physical device capable of storing information temporarily or permanently. All currently executing programs reside in main memory. A computer's performance depends largely on its memory and CPU: with more main memory (RAM), the computer generally performs faster.
What is thrashing?
Thrashing occurs when a system spends a significant amount of time and resources continuously swapping pages between physical memory and disk, due to excessive page faults. It leads to a decrease in overall system performance as the majority of CPU cycles are wasted on page swapping rather than executing useful work.