Storage Management

Memory Management

Manages free memory and allocates memory to a process when it needs it

Done either by paging/swapping, or without it (plain contiguous allocation).


We use a buffer as temporary storage when transferring data between different places.
When the buffer fills up, we have to pick an entry to replace.

  • Choosing which to replace:
    • First in First Out
    • Least Recently Used
    • Clock
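
Of the three policies above, LRU is the easiest to sketch. Here's a minimal, illustrative Python version (the class and method names are made up for this example, not from any real OS):

```python
from collections import OrderedDict

class LRUBuffer:
    """Fixed-size buffer that evicts the least recently used entry.
    Illustrative sketch only."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = OrderedDict()  # key -> data, oldest entry first

    def access(self, key, data=None):
        if key in self.slots:
            self.slots.move_to_end(key)     # mark as most recently used
            return self.slots[key]
        if len(self.slots) >= self.capacity:
            self.slots.popitem(last=False)  # buffer full: evict oldest
        self.slots[key] = data
        return data

buf = LRUBuffer(2)
buf.access("a", 1)
buf.access("b", 2)
buf.access("a")         # touching "a" makes "b" the oldest
buf.access("c", 3)      # buffer full, so "b" gets evicted
print(list(buf.slots))  # ['a', 'c']
```

FIFO would be the same minus the `move_to_end` call (accesses don't refresh an entry's position), and Clock approximates LRU with a single reference bit per slot instead of a full ordering.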

Disk Performance

On magnetic disk or tape, performance is entirely dependent on seek and rotational delays (as well as the actual transfer-to-buffer time). Seek time is the slowest and most costly part.

Processing Order for IO Requests:

  • Random
    • WORST
  • FIFO
    • Fair, good for few processes but bad for many
  • Shortest Seek Time First
    • pretty good but can lead to starvation
  • Elevator - add requests into a sorted list. Service them towards one side of the disk, then reverse direction.
    • Better than FIFO, worse than SSTF
    • No starvation, but the reversal is terrible for sequential read requests
  • One way elevator - goes to one side, zooms back to start, goes again
    • Zooming = faster than read stop read stop
    • No starvation, better for sequential stuff
    • Reduces the max delay to read a sector
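
To make the difference concrete, here's a rough sketch (Python, invented function names) of SSTF and the elevator ordering, using the classic head-at-53 request queue as an example:

```python
def sstf(requests, head):
    """Shortest Seek Time First: repeatedly pick the nearest pending track."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def elevator(requests, head):
    """Elevator (SCAN), moving upward first: service all tracks above the
    head in order, then sweep back down through the rest."""
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    return up + down

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf(reqs, 53))      # [65, 67, 37, 14, 98, 122, 124, 183]
print(elevator(reqs, 53))  # [65, 67, 98, 122, 124, 183, 37, 14]
```

Note how SSTF zig-zags (and would starve track 183 if nearby requests kept arriving), while the elevator commits to one direction at a time. The one-way variant would jump from 183 straight back to the low end and sweep upward again.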

Methods of Storing Processes in Memory

(e.g. for embedded systems)
We don't need paging if we have enough spare memory for what is required. If there's only one process it's easy; otherwise we have to divide memory into sections, one for each process.

We run into problems if there are lots of processes - if we use equal-size chunks we get internal fragmentation, and can't run any process larger than a chunk. If we use dynamically sized chunks we get external fragmentation instead. We can reshuffle dynamic allocations at execution time if we need to (eh, effort).
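
A quick back-of-the-envelope illustration of the internal fragmentation cost (the chunk size and process sizes here are arbitrary, just for the arithmetic):

```python
# Fixed equal-size chunks: each process occupies a whole chunk,
# so anything it doesn't use is wasted (internal fragmentation).
CHUNK = 8 * 1024            # 8 KiB partitions (arbitrary example size)
procs = [5000, 7000, 3000]  # process sizes in bytes

wasted = sum(CHUNK - p for p in procs)
print(wasted)  # 9576 bytes lost inside the chunks
```

With dynamically sized chunks that waste disappears, but the holes left between freed allocations become external fragmentation instead.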

Dynamic Partition Allocation Strategies

  • Best-fit
    • Searches whole list, finds smallest fragment it'll fit in
    • Ends up with lots of small fragments that we can't use
  • First-fit
    • Starts at the beginning and finds the first free fragment it'll fit in
    • Allocates mostly at the beginning, not end
  • Next-fit
    • Starts from last allocated fragment and finds the first it'll fit in
    • Pretty even allocation, but doesn't save big chunks at the end
  • Worst-fit
    • Whimsical: allocates from the largest free fragment, hoping the leftover stays usable. Not significantly better than best-fit and still has to search the whole list.
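
A minimal sketch of the first two strategies over a free list of fragment sizes (Python, invented names; a real allocator would track addresses and split fragments too):

```python
def first_fit(free_list, size):
    """Index of the first free fragment the request fits in, or None."""
    for i, frag in enumerate(free_list):
        if frag >= size:
            return i
    return None

def best_fit(free_list, size):
    """Index of the smallest free fragment the request fits in, or None."""
    fits = [(frag, i) for i, frag in enumerate(free_list) if frag >= size]
    return min(fits)[1] if fits else None

free = [100, 500, 200, 300, 600]
print(first_fit(free, 212))  # 1 -- grabs the 500-byte fragment
print(best_fit(free, 212))   # 3 -- grabs the 300-byte fragment
```

Best-fit's choice leaves an 88-byte sliver behind, which is exactly how it ends up with lots of small unusable fragments; next-fit is first-fit with the starting index remembered between calls.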

Dynamic Partitioning Hardware Interaction

Wherever the process runs in memory, it needs at some point to know the addresses it can touch. We can tell it this at:

  • Compile/linker time (only if we know the run location, we have to recompile if it changes)
  • Load time (compiler generates relocatable code and the loader tells it the actual addresses)
  • Run time (compile time logical addresses are translated to physical ones by hardware)

For compaction, we need to be able to relocate running programs (so we need hardware support for run-time translation).

Run-time translation of logical addresses also gives us protection: any access below or above the allowed range raises an addressing error exception. We have to be able to modify the base and limit addresses (start of process and length of process in memory) whenever we context switch (different processes have different bounds) or load or relocate a process.
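
The base/limit check is tiny; here's a toy Python sketch of what the hardware does on every access (the class name and numbers are made up for illustration):

```python
class MMU:
    """Toy base/limit run-time translation: logical -> physical,
    with a bounds check. Illustrative only."""
    def __init__(self, base, limit):
        # These registers are reloaded on every context switch,
        # load, or relocation.
        self.base, self.limit = base, limit

    def translate(self, logical):
        if not (0 <= logical < self.limit):
            # Out of the process's range: addressing error exception
            raise MemoryError("addressing error")
        return self.base + logical

mmu = MMU(base=40000, limit=3000)
print(mmu.translate(100))  # 40100
# mmu.translate(5000) would raise the addressing error
```

Compaction then just means copying the process and rewriting `base`; the process's logical addresses never change.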

Of course this all only works if the memory is contiguous, and we don't have heaps of processes that are frequently inactive.