Allocation of dynamic memory in memory-constrained environments is known to be a hard problem. When memory can be allocated at runtime, the following problems present themselves in a fully-generalized system:
Provision of sufficient memory: the heap must be at least as big as the peak dynamic memory usage of the program, but that is impossible to predict for programs that allocate memory based on conditions that do not arise until runtime. Heap exhaustion can lead to unexpected program failure at allocation time.
Fragmentation of memory: a sub-problem of provision of sufficient memory, arising when repeated allocation and de-allocation of small memory segments cause available memory to be non-contiguous, limiting the maximum possible size for a new allocation.
Garbage management: whether explicitly or automatically, dynamic memory may need to be freed when no longer in use, but the definition of "in use" varies from program to program.
Performance overhead: dynamic memory allocation and de-allocation can take non-deterministic amounts of time depending on the current structure of the heap. This is particularly problematic for systems with hard realtime performance requirements, but even in non-realtime applications this can lead to degenerate performance that arises only after an application has been running for a long time.
In modern programming environments the above concerns are often mitigated by simply over-provisioning memory to the point that the problems arise only in unlikely edge cases. When memory is plentiful, one can often take a "best effort" approach to dynamic memory management, and when performance requirements are soft one can even resort to automatic memory management strategies like a compacting garbage collector, which trade CPU time for improved use of heap memory.
However, embedded systems remain problematic for the following reasons:
Embedded systems are often very memory-constrained compared to today's general-purpose computing hardware, with RAM capacities measured in kilobytes rather than gigabytes.
Embedded systems often have hard or soft realtime constraints placed on them, even if only the need to remain ready to respond to external interrupts in a timely fashion, which makes "stopping the world" for non-deterministic amounts of time undesirable or impossible.
Embedded systems can sometimes run for many years without restarting and will often have no way to signal to the user that memory is becoming tight or has been exhausted. A seemingly-random memory allocation failure several months into the runtime of an embedded system could therefore be at least inconvenient, and possibly even catastrophic.
For these reasons embedded software designers often choose to avoid dynamic memory allocations altogether. Sometimes that is not possible, and so designers will adopt various strategies to constrain dynamic memory usage so that it is predictable, such as providing a fixed memory pool to each subsystem and using fixed-sized allocations within those pools to avoid fragmentation.
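To make the fixed-pool strategy concrete, the following is a minimal C++ sketch of one possible approach: a pool of fixed-size blocks carved out of a statically-reserved buffer and threaded onto a free list, so that allocation and release can never fragment the buffer. The FixedBlockPool name and its interface are invented here purely for illustration.

#include <cstddef>

// All blocks in a pool are the same size, so releasing and re-allocating
// them can never fragment the underlying buffer.
template <std::size_t BlockSize, std::size_t BlockCount>
class FixedBlockPool {
public:
    FixedBlockPool() {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < BlockCount; ++i) {
            auto* node = reinterpret_cast<FreeNode*>(storage_ + i * BlockSize);
            node->next = free_list_;
            free_list_ = node;
        }
    }

    // Returns nullptr on exhaustion rather than failing unpredictably.
    void* allocate() {
        if (free_list_ == nullptr) return nullptr;
        FreeNode* node = free_list_;
        free_list_ = node->next;
        return node;
    }

    void release(void* block) {
        auto* node = static_cast<FreeNode*>(block);
        node->next = free_list_;
        free_list_ = node;
    }

private:
    struct FreeNode { FreeNode* next; };
    static_assert(BlockSize >= sizeof(FreeNode), "block must be able to hold a free-list node");

    alignas(std::max_align_t) unsigned char storage_[BlockSize * BlockCount];
    FreeNode* free_list_ = nullptr;
};

// Each subsystem receives its own pool with a budget fixed at build time.
static FixedBlockPool<64, 32> network_pool;   // 2 KiB reserved for the network stack
static FixedBlockPool<128, 16> logging_pool;  // 2 KiB reserved for the logger

Each pool's worst-case memory usage is known at compile time, which is exactly the predictability the designer is trying to recover.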
The tools available for such memory management schemes in C and C++ are limited, since both languages by default expect a single system allocator from which all callers obtain dynamic memory. Custom allocators are possible, but memory allocation is a cross-cutting concern and so such allocators can make it impossible to re-use third-party code that depends on the default allocator.
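As a concrete illustration of how allocators cut across interfaces in C++: a standard container's allocator is part of its type, so code compiled against the default allocator cannot accept a container backed by a custom one. The PoolAllocator type below is a stand-in, with its allocation functions reduced to trivial forwarding for brevity.

#include <cstddef>
#include <vector>

// Stand-in custom allocator; in a real system allocate() would draw from a
// subsystem's pool, but here it simply forwards to the global heap.
template <typename T>
struct PoolAllocator {
    using value_type = T;
    T* allocate(std::size_t n) { return static_cast<T*>(::operator new(n * sizeof(T))); }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

// Third-party code written against the default allocator:
int sum(const std::vector<int>& values);

void caller() {
    std::vector<int, PoolAllocator<int>> samples;  // pool-backed vector
    samples.push_back(42);
    // sum(samples);   // does not compile: the allocator is part of the
    //                 // vector's type, so this cannot be passed to code
    //                 // that expects a plain std::vector<int>.
}

Propagating the allocator as a template parameter through every interface is possible in principle, but it has to be done consistently across the whole codebase, which is rarely an option when third-party code is involved.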
The remainder of this article describes a hypothetical alternative model that aims to make memory management more explicit while allowing code written by different parties to inter-operate. This scheme is best adopted as a core language feature, and is thus planned to be implemented as the built-in dynamic memory management strategy in my work-in-progress embedded systems programming language, Alamatic.
Theory of Operation
The scheme proposed below treats memory as a dependency to be provided via the dependency injection pattern. In this pattern, resources required by a subsystem are provided by the caller rather than being acquired directly. This article therefore assumes some mechanism of passing dependencies into a subsystem but does not specify one.
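As a small before-and-after sketch of what this means in C++ terms (the class names here are invented for illustration): the first version hides an allocation inside its constructor, while the second receives its storage from the caller, so all memory it can ever use is visible at the point of construction.

#include <cstddef>
#include <cstdint>

// Without injection: the subsystem reaches for the global allocator itself,
// so its memory usage is invisible to the caller.
class HiddenLogBuffer {
public:
    HiddenLogBuffer() : data_(new std::uint8_t[256]) {}
    ~HiddenLogBuffer() { delete[] data_; }
private:
    std::uint8_t* data_;
};

// With injection: the caller supplies the storage, whether it came from a
// pool, the stack, or a static buffer.
class LogBuffer {
public:
    LogBuffer(std::uint8_t* storage, std::size_t size)
        : data_(storage), size_(size) {}
private:
    std::uint8_t* data_;
    std::size_t size_;
};

void example() {
    static std::uint8_t log_storage[256];          // reserved statically by the caller
    LogBuffer log(log_storage, sizeof log_storage);
}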
The scheme also aims to make ownership of memory explicit, preventing dynamic memory allocation from being buried inside leaf functions and instead allowing all possible heap usage to be discovered by observing the transfer of objects between functions. No automatic garbage collection is included, but it is made explicit which subsystem is responsible for freeing dynamically-allocated memory.
This scheme defines three types of dependency that may be provided to a subsystem, at different levels of granularity and responsibility:
A plain reference to an object or array of objects of a particular type: in this case, memory management is handled entirely by the caller and an already-allocated buffer is provided. In C this would just be a pointer, and is the simplest case. A subsystem accepting a plain reference may of course be provided a reference to a static- or stack-allocated object instead; allocation is a separate concern in this case.
Allocation object: represents ownership of an allocated, strongly-typed area of memory. This is a generic type and will henceforth be referred to via the C++-style notation Allocation<T>. This object provides the mechanism to free the allocation; passing it as a function parameter implies transfer of ownership of the allocation, and with it the responsibility to free the memory.
MemoryPool object: represents ownership of an area of memory from which allocations can be made. This object provides the mechanism to acquire Allocation<T> objects, and also provides a mechanism to sub-divide the area into nested MemoryPool objects to partition the memory space, as illustrated in the sketch following this list.
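Since Alamatic does not yet exist, the following C++-flavored interface sketch is only an approximation of how the three dependency types might be expressed; apart from the Allocation<T> and MemoryPool names above, every name and signature here is invented for illustration rather than taken from any specification.

#include <cstddef>
#include <cstdint>

struct Message;  // some payload type, for illustration

// Ownership of a single strongly-typed allocation. Move-only, so passing
// one around transfers ownership, and with it the duty to free the memory.
template <typename T>
class Allocation {
public:
    Allocation(Allocation&&) noexcept;
    Allocation& operator=(Allocation&&) noexcept;
    Allocation(const Allocation&) = delete;

    T* get();      // access the underlying object(s)
    void free();   // return the memory to the pool it came from
};

// Ownership of a region of memory from which allocations can be made.
class MemoryPool {
public:
    template <typename T>
    Allocation<T> allocate(std::size_t count = 1);  // acquire an owned allocation

    MemoryPool subdivide(std::size_t bytes);        // carve out a nested pool,
                                                    // partitioning the memory space
};

// The three levels of granularity, as they would appear in signatures:
void fill_samples(std::uint16_t* samples, std::size_t count); // plain reference: the
                                                              // caller manages the memory
void consume(Allocation<Message> message);                    // ownership transferred in;
                                                              // callee must free it
void run_subsystem(MemoryPool pool);                          // callee may allocate freely,
                                                              // but only within this pool

Passing a MemoryPool by value in the last signature mirrors the ownership-transfer rule described above: handing a pool to a subsystem also hands over the right to sub-divide it and allocate from it.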
Each of the above dependency types represents a different memory management capability. By following the dependency graph a system maintainer can track all places where dynamic memory may be allocated or freed, and the MemoryPool mechanism allows subdivision of the heap into a fixed memory map where desired.