A while ago I did a Webinar looking at C++ for embedded applications. It was well attended and well received, and there were lots of questions and comments, which is always very satisfying. I observed that a number of people were specifically interested in dynamic memory allocation in C and C++ and the challenges that it presents to embedded and real time programmers. So I developed a further Webinar looking specifically at dynamic memory allocation and fragmentation. Both Webinars were recorded and are available as archives to view on demand.
I was interested in investigating how to avoid dynamic memory [heap] fragmentation …
The issue comes about because the arbitrary allocation and deallocation of chunks of memory can result in something of a mess. The free space is not contiguous - i.e. it is fragmented - and an allocation can fail, even though enough free space is actually available.
For example, suppose there is 6K of free memory in total, but it is split into non-contiguous chunks, none larger than 3K; a request for a 4K block would fail.
An obvious solution would be to de-fragment the memory, but this is not possible, for two reasons:
- The application code uses pointers to the allocated memory; changing the address of allocated chunks would break that code. Moving allocations is possible in languages like Java and Visual Basic, where there are no direct pointers.
- Such “garbage collection” would be intrinsically non-deterministic, which would not be acceptable in a real time system.
The only realistic solution that I could propose is to use the block [partition] memory allocation provided by most real time operating systems. A series of partition pools is created, with partition sizes in a geometric series [e.g. 32, 64, 128, 256 bytes]. Each allocation is then made from the pool whose partitions are [just] large enough. This solves the problem, as a partition pool cannot become fragmented, and allocation from a pool is most likely to be highly deterministic.
But I had another idea: would creative use of a memory management unit [MMU] be another way to apparently de-fragment the heap? The concept is simple: the MMU manages the free space and maps it so that it always appears to be contiguous. Of course, when memory is allocated, the MMU would also need to maintain the apparent contiguity of the allocated chunk.
To be frank, my experience in the use of MMUs is limited. So my question is: would this actually work? If you have any insight, please comment or email me.