
What is "real time"?

The term “real time” is widely used nowadays. Although it is a technical term, it has found its way into everyday conversation. I might be heard to say “I do not watch much real-time TV”, meaning that I record programs to watch at my convenience. So, colloquially, real time means “immediate” or “occurring now”. How does this align with its precise meaning when we refer to a real-time operating system, for example? …

Looking up “real-time system” in a rather old computer dictionary yields:

“Any system in which the processing of data input to the system to obtain a result occurs virtually simultaneously with the event generating that data.”

It cites airline booking systems as an example. This is clearly not a useful definition for our needs.

Here is a better definition:

“A real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time at which the result is produced.

“If the timing constraints of the system are not met, system failure is said to have occurred.”

Another way of putting this definition is to say that a real time system is, above all, predictable. We tend to use the term deterministic.

So a deterministic operating system performs all its actions in a well-defined timeframe and enables a programmer to produce applications with the same characteristic. Real time does not mean fast – it means fast enough for the specific requirements of the application in hand.

Unfortunately, it is not quite so black and white. An OS can have a degree of determinism – it is a question of the variance between the time taken to do operations under different circumstances. So, a classic RTOS, like Nucleus, has a very low variance and is, hence, very deterministic. Linux, on the other hand, generally exhibits quite a high variance and may not normally be described as real time.

There is always the “brute force” approach to building a system, where you design with enough raw CPU power that the speed/variance of the OS hardly matters, as everything will be done in time. For some requirements, that might be a good solution, but, for many, such a profligate use of resources is not an option.

Tags: Linux, Nucleus Kernel, Nucleus, RTOS


About Colin Walls

I have over thirty years' experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars and the author of numerous technical articles and two books on embedded software, I am a member of the marketing team of the Mentor Graphics Embedded Systems Division and am based in the UK. Away from work, I have a wide range of interests, including photography and trying to point my two daughters in the right direction in life.


Comments (8)
The most real-time design I've done was a water-level control based on an FPGA. I did it on a dare by a college professor. It did the control loop math in about 3 clock cycles. After that I still have trouble thinking of any OS as real-time =P

3:05 AM Mar 3, 2010

At the other end of things, I remember someone making the case a few years ago that a payroll system was a real-time application. Its constraints were on a much longer timescale than was usually considered real-time, but if the checks weren't cut at the end of the pay period, you had a failure.

3:32 PM Mar 5, 2010

wheels: I guess that kind of supports my premise that "real time" is not the same as "real fast". In some ways a payroll system is real time, because paying too early is bad too.

Colin Walls
3:55 PM Mar 5, 2010

You cited my operating definition in your 9th paragraph: "fast enough." If you are driving a user interface, and must update a display in response to a human pushing buttons, any delay less than 100 msec is (to the client) indistinguishable from 0. If you are trying to protect an IGBT from catastrophic failure after it comes out of saturation, the maximum delay is more like 10 usec. There are 4 orders of magnitude between those two numbers, yet both are 'real-time.' Trying to iron out some numeric definition of the term is a waste of time. Gary Lynch Staco Energy Products

Gary Lynch
8:04 PM Mar 17, 2010

Absolutely Gary - you hit the nail on the head. But if the OS doesn't give you confidence that it is and always will be fast enough, you are in trouble [and it isn't real time].

Colin Walls
8:09 PM Mar 17, 2010

Your point about the FPGA is that the parallel operations inherent in the hardware put all the traditional OS, compile, single cpu, etc. timing to shame.

Karl Stevens
6:41 PM Mar 21, 2010

Designing with raw CPU power only solves the issue if a single CPU is a given. Multiple small CPUs dedicated to specific tasks are simpler and allow CPU power to be scaled.

Karl Stevens
6:47 PM Mar 21, 2010

Karl: Exactly. As multi-CPU designs are becoming the norm, this issue becomes more important.

Colin Walls
10:21 AM Mar 22, 2010
