Every time a program creates a variable, enough memory space for it must be allocated. Depending on the type of the variable, a different number of memory positions is allocated. For example, in Java and C++ an integer variable requires 4 contiguous memory positions, whilst a double-precision floating-point variable requires 8 contiguous memory positions. If your program creates an array of 10 integers, then 40 contiguous memory positions need to be allocated.
When the execution of your program finishes, the corresponding allocated memory positions are freed so that other programs running on the machine can use them.
With the continuous allocation and release of memory positions, a problem known as memory fragmentation can emerge: the available memory blocks are not contiguous, but are “interrupted” by used memory blocks. Therefore, a request from a program for a block of X contiguous memory positions might be rejected even if X positions are available, because they are not contiguous.
The team in charge of implementing an algorithm that searches for free memory positions has received 3 proposals that aim to search for available memory positions in a memory made of N elements. The aim of the algorithms is striking a balance between fast running time and low memory fragmentation. Assume that arriving requests ask for a number of memory positions in the range from 1 to M, both inclusive, where M is much lower than N. The proposals are:
Linear Search: When a request for X memory positions is received, scan the memory positions from left to right and find the first block with X or more available positions. As soon as such a block is found, return the position of its first element; if no such block exists, return -1. By way of example, assume the 16-byte memory is in the state shown in Figure 2 (this occupation state is the result of many past memory allocation and release processes).
If a new request for 1 memory position arrives, Linear Search would return the first available memory position found when scanning the memory from left to right, as that is the start of the first block of at least 1 available memory position. Notice that, in terms of fragmentation, selecting a position inside a smaller free block would be a better option, but this algorithm does not allow that selection. Notice also that the task of updating the state of the memory positions to mark them as ‘allocated’ is not part of the Linear Search algorithm.
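The Linear Search proposal can be sketched as follows. This is a minimal illustration, not the course's reference implementation: the function name and the representation of the memory as a list of booleans (True = free position, False = allocated) are assumptions.

```python
def linear_search(memory, x):
    """Return the start index of the first block of at least x free
    positions, scanning the memory from left to right, or -1 if none
    exists. `memory` is a list of booleans: True marks a free position."""
    run_start = 0   # start index of the current run of free positions
    run_length = 0  # length of the current run of free positions
    for i, free in enumerate(memory):
        if free:
            if run_length == 0:
                run_start = i
            run_length += 1
            # Stop as soon as the current block reaches x positions:
            # its start is the position to return.
            if run_length >= x:
                return run_start
        else:
            run_length = 0
    return -1
```

Because it stops at the first block that fits, the scan runs in O(N) time but may leave behind more fragmentation than a best-fit strategy would.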
DirectSearch: The memory is divided into M chunks, each made of N/M memory positions (remember that M is the maximum number of memory positions a program can request). Requests of 1 memory position are allocated in the first chunk; requests of 2 memory positions are allocated in the second chunk, and so on. Notice that the chunk size N/M must be at least M; otherwise, the chunk serving the largest requests would not be big enough. To speed up the search for an available block of memory positions in each chunk, an M-element array POS is kept. Assume the variables used to manage memory positions, such as the array POS, are stored in a reserved part of the memory, not allocated to programs. The first element of POS stores the position (in the memory) where the first available block of 1 memory position is located; the second element stores the position (in memory) where the first available block of 2 memory positions is located, and so on. If the i-th element of the array POS stores the value -1, it means that there are no blocks of (i+1) memory positions available in the corresponding chunk. When a request for X memory positions is received, the algorithm simply checks element (X-1) of the array POS. If it stores a value different from -1, there is space to allocate a block of X positions, and the value stored in POS[X-1] is returned. Otherwise, the value -1 is returned (request rejected). By way of example, assume M=4, with different programs requesting 1, 2, 3 or 4 memory positions. In that case, the memory would be divided into 4 chunks. Figure 3 shows a possible state of our 16-byte memory, divided into 4 chunks; the content of the array POS is also shown.
In the example of Figure 3, if a program requests 2 memory positions, the algorithm checks the value stored in POS[1]. Because this value (6) is different from -1, it means there is a block of 2 free memory positions starting at position 6 in the memory. So, the value 6 is returned. Notice that the task of actually allocating the found memory positions to the request and updating the array POS is not performed by DirectSearch; DirectSearch only finds a suitable block of free memory positions. Thus, another function allocates memory positions 6 and 7 (marking them as used) and updates the value of POS[1] to -1.
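The lookup itself can be sketched in a couple of lines; the whole point of DirectSearch is that, given the POS array, a request is answered in constant time. In the example values below, only POS[1] = 6 comes from the Figure 3 example; the other three entries are invented purely for illustration.

```python
def direct_search(pos, x):
    """DirectSearch lookup: POS[x-1] holds the start of the first free
    block of x positions in the x-th chunk, or -1 if none is free.
    Runs in O(1) time; another function maintains the POS array."""
    return pos[x - 1]

# Hypothetical POS state for M = 4 (only POS[1] = 6 is taken from the
# Figure 3 example; the other entries are made up for illustration).
POS = [1, 6, -1, 12]
```

The constant-time lookup is the trade-off against fragmentation: a chunk may be full even though plenty of memory is free elsewhere, so a request can be rejected that Linear Search would have satisfied.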
Exhaustive Search: When a request for X memory positions is received, the block of available positions that best fits the request is allocated: that is, the block with the lowest value of (A-X), where A is the number of available positions in that block. If such a block is found, return the position of its first element; if not, return -1. By way of example, consider the same state shown in Figure 2 and assume a request for 1 memory position arrives. In this case, Exhaustive Search would have two possible options: allocating the first position of a free block of 3 positions, with a value of (A-X) equal to (3-1)=2, or allocating memory position 13, a free block of a single position, with a value of (A-X) equal to (1-1)=0. Since the lowest value is the one corresponding to position 13, this value (13) is returned by the algorithm. Notice that the task of updating the state of the memory positions to mark them as ‘allocated’ is not part of the Exhaustive Search algorithm.
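A best-fit scan of this kind can be sketched as below, using the same assumed boolean-list memory representation as before (True = free position); the function name is likewise an assumption, not part of the original proposal.

```python
def exhaustive_search(memory, x):
    """Best-fit search: among all maximal runs of free positions with
    length A >= x, return the start index of the run minimising (A - x),
    or -1 if no run is long enough. `memory` is a list of booleans."""
    best_start = -1
    best_slack = None  # smallest (A - x) seen so far
    i, n = 0, len(memory)
    while i < n:
        if memory[i]:
            # Measure the whole maximal run of free positions.
            start = i
            while i < n and memory[i]:
                i += 1
            length = i - start
            if length >= x:
                slack = length - x
                if best_slack is None or slack < best_slack:
                    best_start, best_slack = start, slack
        else:
            i += 1
    return best_start
```

Unlike Linear Search, this always scans the entire memory (O(N) with no early exit), but by preferring the tightest-fitting block it keeps larger free blocks intact, reducing fragmentation.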
Ref: (2020) Algorithms and Data Structures 2, University of London.