Adapting PC Technology
for Internet Appliances

Dan Hildebrand
Senior Architect, R&D
QNX Software Systems Ltd.


The huge, rapidly evolving personal computer (PC) marketplace is generating a wealth of tools and platforms that manufacturers can use to create internet appliances: everything from web browsers to highly integrated embedded processors. Nonetheless, a problem remains. The operating-system environments for the desktop PC typically impose massive disk and memory overheads, making them inappropriate for consumer-priced products. The ideal solution? A standards-based realtime operating environment specifically designed for processor- and memory-constrained runtime systems.

This paper explores how such a solution, based on the QNX realtime OS and its Photon microGUI, makes 
it possible to develop low-cost, high-performance internet appliances that can track rapidly evolving internet services. Since PC hardware is evolving just as fast, this paper also explores issues related to product life cycle (configuration, scalability, obsolescence management) and discusses embedded PC hardware that addresses the longer-term needs of appliance manufacturers.


With the widespread availability of cost-effective 32-bit microprocessors, applications formerly found only on the desktop, including applications for accessing the Internet, can now be integrated into a variety of embedded systems and appliance-class devices. Examples of such devices include digital set-top boxes (STBs) for televisions, smart phones for the home or office, and intranet devices for industry-specific application areas, such as the point of sale. In this paper, we refer to all such internet-enabled devices as internet appliances.

Developers are already creating a variety of reference designs for internet appliances: everything from roll-your-own OS kernels running on low-cost embedded processors to desktop operating systems running on modified PC workstations, complete with MPEG hardware for live video. While these reference platforms point to where internet appliances are ultimately going, most internet connections (modems, cable companies, etc.) available to the consumer still lack the infrastructure and network bandwidth to deliver services such as "video on demand." Nonetheless, internet appliances can already access several viable information services, including the World Wide Web.

Internet services and technologies are evolving at breakneck speed. As a result, products that can't track these technologies have little chance to gain a foothold in the market. To keep up with consumer demand, both the hardware and software used in an internet appliance must be scalable, ideally through customer-applied upgrades. Furthermore, these components must be relatively cheap so that the appliance can be regularly replaced or exchanged when it inevitably becomes obsolete.

Within this turbulent marketplace, product developers are tasked with creating products that cost little, yet deliver high functionality. They also face exceptional time-to-market pressures. For example, consumer appliance manufacturers normally put significant effort into designing the product for "manufacturability," to minimize production costs. Given the inherently short life cycle of any appliance that provides internet access, this cost-reduction phase must be as brief as possible. Furthermore, the design must be able to accommodate quick updates without requiring a complete, and expensive, retooling.

Fortunately, manufacturers don't have to create the technology needed to track these rapidly changing functionality requirements. Instead, they can borrow that technology from a readily available source: The desktop PC.

Why Use PC Technology?

With its huge marketshare and thousands of vendors worldwide, the PC platform is the de facto standard across a wide spectrum of applications, offering a proven, "off the shelf" toolset. In fact, because much of the technology needed for internet appliances is actually created on and for the desktop PC, an internet appliance design that borrows from the PC will benefit from reduced engineering effort, and reduced risk.

For example, during early discussions of video-on-demand for television STBs, realtime decompression of MPEG video streams was identified as a significant performance problem, and high-speed RISC processors were routinely suggested as the solution. At the same time, the multimedia movement on the PC desktop started to generate a number of MPEG-capable video chipsets and CPU instruction-set extensions, such as MMX. Ultimately, the large demand for desktop multimedia, and economies of scale, created cost-effective dedicated silicon that addressed the MPEG performance issues, even when that silicon was paired with lower-end PC processors. The net effect: PC-derived technologies for live-video STBs.

Rich toolset

With various desktop PC technologies available for integration into the internet appliance, other advantages of the PC architecture come into play. For example, the desktop PC provides what is arguably the richest selection of operating systems, development tools, and peripherals for the embedded systems developer. Also, because a PC-derived appliance is architecturally equivalent to the desktop PC, the PC and the software available for it become natural prototyping tools for the internet-appliance developer. And there's another, even more important benefit: With internet and multimedia technologies initially coming to life on PCs, having the internet appliance architecturally track the desktop makes it easier to migrate those hardware and software technologies to the internet appliance.

Still, there are dangers to tracking PC technologies too closely. While it's relatively easy to create a "PC compatible" embedded system by pairing an x86 processor with a PC motherboard chipset, next-generation parts make most chipsets obsolete very quickly. In fact, for competitive reasons, motherboard chipset manufacturers are driven to generate new versions about every six months! Obviously, products based on these components will require frequent redesign just to maintain manufacturability. The manufacturer must be prepared to either negotiate long-term arrangements with component manufacturers or accelerate the redesign life cycle of the appliance to match that of desktop PC hardware (a feasible option since the internet appliance marketplace may require that product life cycles be accelerated compared to other consumer electronics). Nonetheless, there are other PC hardware options more closely tuned to the needs of appliance manufacturers.

Extended product life

Thanks to the rapid growth of the embedded x86 industry, a number of high-integration x86 processors have become available, including the AMD ÉlanSC310/400, the Cyrix MediaGX, the Intel386 EX, and the National Semiconductor NS486SXF. Since these processors are specifically targeted at the embedded marketplace, their manufacturers are more willing to make them available for the long term, much longer than for motherboard chipsets targeted at the desktop PC. Better yet, these processors integrate a variety of peripherals into the CPU, reducing the component count, and cost, of system designs.

Besides these processors, companion chips with long-term availability guarantees also exist. For example, the RadiSys R380EX chip, when paired with the Intel386 EX, provides a significant portion of the functionality required to build an embedded PC.

Off-the-shelf hardware resources

To give manufacturers a jump start, x86 chipmakers also offer evaluation boards and reference designs, including some targeted directly at the low-cost requirements of internet appliances. These include the AMD ÉlanSC310 reference design, the Intel EXPLR2 evaluation board, and the National Semiconductor Odin web-appliance reference design. Manufacturers can use these platforms "as is" for product evaluation, rapid prototyping, system development, and, in some cases, limited production runs. They can also readily "cut and paste" from these designs to create their own custom hardware.

Recognizing an opportunity, many PC-clone manufacturers have also made available STB-suitable cases, power supplies, infrared wireless keyboards, wireless mice, and so on. This hardware allows an STB manufacturer to bypass many initial tooling problems and configure a system around off-the-shelf hardware, much as a PC clone vendor does today. While this approach won't likely meet the price points required of a true consumer-class product, it certainly enables a developer to get to market quickly.

What about Software?

So far we've discussed the applicability of PC hardware, but the lion's share of an internet appliance's functionality is expressed through software. Fortunately, this software already exists on the desktop PC. Unfortunately, the operating systems used to host that software share a prodigious appetite for RAM, disk storage, and CPU cycles. Equipping an internet appliance with sufficient resources to run a desktop OS and GUI, including an internet browser, would push the hardware complement of the internet appliance into the range of the desktop PC, missing the consumer price point entirely.

Since the internet appliance is a purpose-specific device, not a general-purpose PC, it needn't carry the software overhead of a desktop OS designed to support generic desktop applications, such as a resource-intensive windowing system, binary compatibility with legacy applications, and so on. Instead, the appliance can run a much smaller, purpose-built OS, specifically designed to host internet appliance applications. With the greater efficiency and reduced memory/disk requirements of this OS, hardware costs can be trimmed and that elusive sub-$300 price point achieved. For STB applications, the resulting low unit cost can be "buried" within the monthly cable-service charge, removing the need for a consumer purchase in the first place.

Besides the basic multitasking capabilities needed to run a mix of processes, an internet appliance needs to provide a web browser, an email program, possibly a news reader, and other applications expected by the consumer, such as a channel guide. To host these applications, and to make the environment easy to use, the appliance needs a GUI. Given the precise timing requirements for managing the flow of video and audio data in some applications, it also requires realtime services of the OS.

Appliance-ready software

Conveniently, realtime OSs for embedded applications provide processor- and memory-efficient runtime environments suited to this application. Nonetheless, the same need to minimize engineering effort that encourages manufacturers to "borrow" from PC hardware standards should also move them in the direction of software API standards. If the API supported by the chosen OS matches the API used by the applications to be hosted on the appliance, the manufacturer can save significant development effort simply by porting those applications from the PC and other environments. Better still, a standard API enables the manufacturer to closely track the rapidly evolving technologies demanded by consumers.

For an example of the efficacy of a standards-based OS environment, consider the port of the Spyglass HTML 3.2 web browser to the QNX realtime OS. Just a single day was required for a "proof of concept" port of the X Window System version of Spyglass technology to QNX. Clearly, X is too resource intensive for use in an internet appliance, so the Spyglass port has since been adapted to QNX's Photon microGUI® windowing system (described later), and runs in roughly 400K of ROM or flash and 1M of RAM. This is an excellent example of how a standards-based environment can ease the porting of a popular internet technology to an internet appliance; Spyglass, after all, is the web-browser technology on which Microsoft based its own browser. Developing a web browser from scratch for a proprietary OS and a minimal graphics library would have required significantly more development effort, an effort that could arguably never end in an attempt to track evolving web-browsing technologies.

Realtime POSIX

Given that much of the existing internet software was created on Unix systems, it follows that a Unix or POSIX API is a good choice. Also, a review of the Java runtime-engine source code reveals that it favors an essentially POSIX-compliant OS with asynchronous I/O, generic thread support, a filesystem, networking support, and a windowing system. While POSIX environments have a reputation for being resource intensive (given their historical roots in Unix), a reading of the POSIX standards shows a careful definition of interface, but not of implementation. As a result, a microkernel architecture can be used to provide a POSIX API without the architectural "weight" of the Unix kernel.

For example, the QNX/Neutrino® realtime microkernel delivers virtually all of the basic POSIX OS services in about 32K of code. These include the core services of POSIX 1003.1a, 1003.1b (realtime), 1003.1c (threads), and 1003.1d (realtime extensions). In fact, the only kernel-class functionality the microkernel doesn't offer is the ability to create additional processes. That functionality is provided by the Neutrino Process Manager, Proc, which requires just another 32K of memory. The resulting 64K OS core delivers much of the OS functionality required of the Java runtime engine (networking, a filesystem, and a windowing system are external to the kernel).

Neutrino offers multiple levels of memory protection, from no protection to full process-to-process protection. In the no-protection model, application processes run as threads in one address space. In the process-to-process protection model, each process runs in a separate, MMU-protected address space (most embedded x86 processors have integrated MMUs). For Java applets downloaded through the web, this level of protection is unnecessary, since Java itself provides a secure runtime environment. But for components of the system not implemented in Java, memory protection maintains system reliability by providing "firewalls" between those components. The end result? An internet appliance can support both Java applets and high-performance processor-native applications, without any loss of reliability.

With the ability to adopt POSIX (and hence Unix) source code, the internet appliance manufacturer can track many emerging technologies, PC or otherwise, with minimal effort. But, of course, not all these technologies come from the Unix world; many come from the Windows world. As a result, QNX Software Systems is working with Award Software International on a port of the APIAccess toolkit, which allows developers to port Win32 source code to the QNX realtime OS.

Embedded GUI

To host the variety of graphical applications expected by the consumer, an internet appliance requires a windowing system. A conventional graphics library, while small enough, lacks the functionality required to host full-scale applications such as web browsers. On the other hand, a conventional desktop windowing system, which provides all the functionality needed, consumes too many resources to be cost-effective!

There is a way out of this dilemma. We've already seen how microkernel technology can help create a rich, yet memory-lean, OS environment. It can do the same for a windowing environment. For example, QNX's Photon microGUI, which is built around a "graphical" microkernel, is a scalable windowing system that can deliver the functionality of a high-end GUI in very little memory: roughly 500K as configured for an internet appliance.

A 2+4 configuration

To complete the functionality needed to build an internet appliance, the QNX/Photon runtime environment supports a minimal TCP/IP implementation occupying 50K and a filesystem for flash memory (or rotating disk). Both services are added in the form of processes managed by the OS microkernel. The total memory requirements for this environment, including OS, windowing system, networking, filesystem, HTML 3.2 web browser, email, internet news reader, and a personal information manager (scheduling, address list, etc.), add up to less than 2M of flash memory and 4M of RAM. This "2+4" memory configuration is obviously smaller than what a desktop OS requires to deliver similar functionality. But it's also significantly smaller than what a Java OS environment requires.

A demo of this software environment, which offers more functionality than the above configuration (and requires somewhat more memory), is available for download from QNX Software Systems.

Development and Prototyping Environment

Since the internet appliance can be an essentially PC-compatible platform, the developer can use a conventional desktop PC as a development and prototyping platform for the final product. The necessary peripheral hardware can be installed in the PC (e.g. a cable modem) and software development begun while the hardware team works in parallel. To more closely model the performance characteristics of the actual appliance, the developer can opt to use one of several evaluation boards provided by AMD, Intel, or National Semiconductor.


By borrowing hardware technology from the desktop PC world, and combining it with a suitable embedded software environment, the developer can readily derive an internet appliance design. And, as customer requirements increase, the developer can incorporate additional technologies from the ever-evolving PC world with a minimum of redesign. This mix of attributes allows a PC-derived internet appliance to achieve the hallmarks of a commercially successful consumer-electronics product: short time-to-market, low engineering cost, minimal risk, and an ability to support the latest features and technologies expected by consumers.

© QNX Software Systems Ltd. 1997
QNX, Neutrino, and Photon microGUI are registered trademarks of QNX Software Systems Ltd.
All other trademarks and registered trademarks belong to their respective owners.

Realtime Extensions to Windows NT

Are they right for your next realtime project?

Greg Bergsma
Senior Architect, R&D
QNX Software Systems Ltd.


With the recent introduction of realtime extensions to Windows NT, many realtime developers are starting to consider NT for their next project. It's easy to see why. Rather than connect realtime and desktop applications over a network, it appears that developers can now integrate both into a single system, while using a single API.

But is NT with realtime extensions really a solution for mission-critical realtime applications? Let's look at the capabilities we have come to expect from a realtime operating system (RTOS), such as determinism, reliability, low overhead, and source-code portability, and see how "realtime NT" compares.

Realtime Determinism

By definition, realtime applications are required to respond to external events within predictable time limits. This is especially true of "hard" realtime systems, where missed deadlines can have dire, or even disastrous, consequences.

A realtime system's ability to respond to external events within a specified time is known as determinism. To indicate how well an RTOS can support determinism, most vendors quote at least the following performance metrics (see Figure 1):

  • interrupt latency: the time from the start of the physical interrupt to the execution of the first instruction of the user-written interrupt service routine (ISR)
  • scheduling latency: the time from the execution of the last instruction of the user-written interrupt handler to the first instruction of the process made "ready" by that interrupt
  • context-switch time: the time from the execution of the last instruction of one user-level process to the first instruction of the next user-level process

Figure 1 - To provide an indication of an operating system's realtime determinism, most OS vendors refer to the following metrics: interrupt latency (t_il), scheduling latency (t_sl), and context-switch time (t_cs).

While these metrics don't provide a full indication of an RTOS's determinism, they can help you assess whether an RTOS can achieve the determinism and performance required for your realtime application. Just as important, they can help you compare the performance of a realtime NT extension to that of a native RTOS.

OS vendors tend to quote these metrics across a range of processors. For example, let's look at the figures for QNX, a realtime operating system used in a wide variety of realtime applications. Times are in microseconds:

[Table: QNX interrupt latency, scheduling latency, and context-switch times for the Pentium 200, Pentium 100, and 486 DX/33]
How do NT realtime extensions compare? The numbers vary, but the published figures for some extensions indicate performance numbers 10 to 15 times slower than the numbers in the above table.

Why are these extensions so much slower? One reason is that they rely on a high-frequency polling interrupt to hand control to the realtime subsystem. While determinism could be improved by increasing the polling rate, that increase consumes CPU cycles that your NT applications may need to achieve acceptable performance. The problem becomes worse in a networked application, since network cards not only require access to as many CPU cycles as possible but also impose their own high interrupt rate.

High Availability and Robustness

Determinism is important, but there are additional criteria for measuring a realtime system, such as high availability. Can the system's OS continue to run or at least recover rapidly if a software fault occurs? For that matter, can the OS continue to provide services even if a critical hardware component, such as a hard drive, fails?

Achieving high availability is a complex problem that requires a variety of features in the OS. For example, let's consider how the OS deals with software faults.

No matter how hard we try to write error-free code, a practical reality is that our realtime applications will contain undetected programming errors, such as stray pointers and out-of-bound array indices. Any of these can cause a software fault and, potentially, cause the system to crash. To detect such errors, you need an OS that supports the Memory Management Unit (MMU) found on most of today's 32-bit processors. If a memory-access violation occurs, the MMU will notify the OS, which in turn can abort the errant process at the offending instruction.

Some realtime extension products for NT provide memory protection for realtime processes; some do not. But even if an extension supports memory protection, you still have to ask whether it will let you implement a software watchdog.

What is a software watchdog? It's a process that is informed by the OS whenever a memory violation occurs. This process then makes an intelligent decision on how to recover from the fault.

Hardware vs. Software Watchdogs

To understand the importance of a software watchdog, let's look at what many existing systems use to recover from software faults: a hardware watchdog timer attached to the processor reset line. Typically, a component of the system software checks for system integrity, and then strobes the timer hardware to indicate that the system is "sane." If the hardware timer isn't strobed regularly, it expires and forces a processor reset. The good news is that the system recovers from the software or hardware lockup. The bad news is that the system must also completely restart, which defeats our goal of high system availability.

Compare this behavior to a software watchdog, which can intelligently choose from several, less drastic, recovery methods. Instead of always forcing a full reset, the software watchdog could:

  • simply restart that process without shutting down the rest of the system, or
  • abort any related processes, initialize the hardware to a "safe" state, and restart the related processes in a coordinated manner, or
  • if the failure is critical, perform a coordinated shutdown of the entire system and sound an audible alarm to notify the maintenance staff

The software watchdog lets you retain programmed control of the system, even though several processes within the control software may have failed. A hardware watchdog timer can still help you recover from hardware "latch-ups," but for software failures you now have much better control. Furthermore, by employing the "partial restart" approach, your system can survive intermittent software failures without experiencing any downtime.

An Obvious Choice

While performing a partial restart, your system can also collect information about the nature of the software failure. For example, if the system contains or has access to mass storage (flash memory, hard drive, a network link to another computer with a hard drive), the software watchdog can generate a chronologically archived sequence of process dump files. These dump files can then give you the information you need to engineer a "fix" before you experience similar failures.

A software watchdog not only decreases costly (or even dangerous) downtime, but also helps you avoid software faults in the future. For these reasons, you should make sure a realtime NT extension has the features required to let you implement a software watchdog.

Reducing Kernel Faults

Of course, programming errors don't occur only in application code. To support new hardware or system services, you may need to develop device drivers and other system-level services.

In traditional OS architectures, these components run as part of the kernel in kernel mode (see Figure 2). Code running in kernel mode runs without MMU protection. As a result, errant pointers or array subscripts in device drivers can cause kernel faults, which only a hardware reboot can remedy. The more code built into the kernel, the greater the likelihood of kernel faults. In Windows NT, these faults result in the "blue screen" crash.

In a microkernel OS like QNX, only the kernel (32K of code) and interrupt service routines (ISRs) run in kernel mode, drastically reducing the possibility of kernel faults (see Figure 3).

Figure 2 - Traditional OS architecture

Figure 3 - QNX Microkernel Architecture

The "Blue Screen" Crash

All vendors of realtime NT extensions have recognized the need to deal with blue screen crashes. As a result, some of these products can trap a kernel fault so that the realtime subsystem can choose to continue running or to close down gracefully. Still, the ability to continue running is a questionable benefit if you can't interact with the NT components of the system, such as the operator interface!

And there is a greater problem: some realtime extensions to NT can potentially contribute to kernel faults. These extensions are implemented directly in the kernel as an interrupt service routine (ISR) or in the Hardware Abstraction Layer (HAL). As a result, the whole realtime subsystem runs in kernel mode. So what happens if you have a stray pointer in your realtime application? You get a kernel fault: the blue screen crash.

Also, most realtime applications require custom device drivers. Since all NT device drivers reside in the kernel space, this only contributes to the fragility of the system.

Technical Support

The subject of software crashes raises another question: Whom do you call for support when you experience problems? Microsoft, or the vendor of your realtime extension? Before you invest in an extension, you need to determine who will assume the responsibility of providing you with technical support if your system experiences problems.

Access to Resources

To provide streamlined access to system resources (e.g. filesystems, devices, communications gateways), traditional RTOSs provide an API that is implemented either by system processes or by the kernel itself. A distributed RTOS, such as QNX, goes a step further and turns a network of computers into a single logical machine. As a result, a process running on any computer can, with appropriate privileges, access all resources on the network, including:

  • filesystems (hard drives, CD-ROM drives, etc.)
  • communications ports (serial, parallel, modems)
  • CPUs
  • communications gateways (e.g. TCP/IP)

This distributed approach can significantly enhance system availability: If a device fails on one machine, you can automatically restart a process to use a device, or even a filesystem, on another machine.

When evaluating a realtime NT extension, you need to determine whether it will let you access resources from both NT applications and realtime applications. For example, let's say your realtime subsystem requires high-performance access to the NT filesystem. Does the NT extension provide the functionality to let you do this? If so, how does it provide this access? Does it go through the HAL? If it does, you'll end up using the same mechanisms that make NT unsuitable for real time (i.e. you'll lose control over the priority of Deferred Procedure Calls initiated by an ISR). Also, what happens if Microsoft decides to make changes to the HAL? Will your realtime extension stop functioning?

All the above questions also apply to accessing communications gateways.

System Overhead

As I mentioned earlier, some realtime NT extensions implement realtime determinism by means of a high-frequency polling interrupt. This interrupt imposes a processing overhead even when no realtime work is to be done. The result? Fewer CPU cycles for non-realtime applications and increased latency. In comparison, most RTOSs are event-driven, responding to interrupts only as they occur.

As for memory overhead, most extensions simply increase NT's already large memory requirements. Most RTOSs, on the other hand, can fit easily into small, ROM-based embedded systems.

Conformance to Standard APIs

To protect their code investment, many developers strive to create applications that are portable across OS platforms. Industry standards such as the POSIX API have emerged to help developers achieve this goal - even NT offers a POSIX option. Nevertheless, the widespread success of Microsoft operating systems has created an additional, de facto standard: the Win32 API. Consequently, several RTOSs now support both POSIX and Win32.

Unfortunately, some NT realtime extensions support neither POSIX nor Win32. Instead, they use a proprietary API that defeats any goal you may have of achieving platform portability and vendor independence. Other extensions provide only a subset of the Win32 API, and may thus limit the functionality you can implement in your realtime subsystem.


Realtime extensions to NT offer a degree of realtime determinism that NT alone cannot provide. But having a degree of determinism is only a piece of the puzzle. A realtime environment must also be extremely reliable. It must be able to recover quickly from software faults, without downtime, and avoid kernel faults. For most applications, the environment should impose low CPU overhead and minimal memory requirements. And it should offer a portable API.

As we've seen, many realtime extensions to NT can't fulfill these requirements. Most RTOSs, on the other hand, offer established technologies that have been fine-tuned to the demands of the realtime marketplace. As a result, a "loosely coupled" approach still makes the most sense for most realtime applications: use NT for the desktop, an RTOS for the realtime control, and integrate the two systems via the various networking options now offered by RTOS vendors.

© QNX Software Systems Ltd. 1998
QNX, Neutrino, and Photon microGUI are registered trademarks of QNX Software Systems Ltd. All other trademarks and registered trademarks belong to their respective owners.