Jack Ganssle
http://www.embedded.com/1999/9910/9910sr.htm
Reports
of the demise of the ICE are premature. This powerful debugging tool is
alive and well, even if it may need more planning to use.
The in-circuit emulator (ICE) remains the
most powerful debugging tool available. Nothing else approaches an
ICE’s wide range of
troubleshooting resources; no other tool so completely addresses
problems stemming from hardware/software interaction. The emulator’s
birth was almost coincident with the start of the microprocessor age,
since micros live deeply buried in applications that simply don’t
resemble computers.
The goal of any debugging
tool is to provide visibility into a device’s internal operation.
Emulators do so by replacing the microprocessor with hardware that
replicates CPU operations, while mirroring
those operations to the user.
Emulators traditionally provide many debug features that are common to all debugging tools: breakpoints, single stepping, and ways to display and alter registers and memory. Like similar tools, they link your source code to the debugging session—you break on a line of C code, not hex, and data is shown in its appropriate type.
Unlike other debugging
tools though, an ICE includes features you’ll get only with a device
packed with hardware. Real-time trace, emulation
RAM, complex breakpoints, and other features assist in finding the more
difficult problems associated with real-time issues or complicated
interactions in the code.
Positioning
Once upon a time, emulators were
de rigueur
for any serious embedded development work, but because of costs
(both the purchase price of an emulator and the engineering costs to
design one), these tools have been repositioned in the market.
Managers want bigger, more
complex systems designed in less time and for less money. Escalating
salaries coupled with huge code sizes suggest that the cost of tools
should be irrelevant. Instead we see perplexing demands to do more with
less.
On top of this, chip
vendors are less tolerant of expensive development systems. At one
time, they partially funded ICE engineering; today that’s much less
common. Non-mainstream processors often just have no emulator support
at all.
We do need tools, however,
so
many chip companies now add debugging resources on-board their CPUs.
The BDM/JTAG debuggers currently flooding the market fill the tool void
by coupling the developer’s PC to the target system via a simple serial
bus tied to the processor itself.
The BDMs are crowding out
traditional emulators because they offer a reasonable set of features
for little cost. Bigger projects mean more programmers and it’s hard to
equip everyone with high-end tools, but companies don’t mind buying
inexpensive BDMs for every developer.
A BDM gives the user run-control over the target: you can start and stop the firmware, set software breakpoints, and examine/alter all CPU and board resources. Coupled with a source-level debugger, a BDM gives you an environment very much like that offered by Microsoft’s CodeView.
But the BDM has not
supplanted the ICE, nor is it likely to. Instead the emulator’s role is
shifting from being the general purpose debugger to the tool of choice
for dealing with difficult
problems. (Note, though, that in the eight-bit world, where there are
few BDM options, the ICE still reigns as the debugger of choice.)
Bringing up new hardware? An emulator
coupled with a logic analyzer or scope gives you precise control over
all of the system’s signals. Instead of troubleshooting memory logic by
trying to capture a system crash that starts and ends in a single
microsecond, use the emulator to issue looping reads and writes to the
memory logic. Trace signal generation with the
scope to quickly find problems. Ditto for diagnosing complex I/O
problems.
Emulators also help with
finding hardware/software interactions. On some CPUs, like the 68k
family, a wild pointer that issues a read or write to nonexistent
memory creates a system hang that is all but impossible to track down
with simpler tools. Some ICEs flag this condition, others let you break
on any very long bus cycle. Either way, it’s easy to look back over the
instructions just executed to find the cause of the problem.
Surely, though, the ICE’s preeminent
role, despite a wealth of cheap, highly functional BDMs, is in
diagnosing real-time problems.
Traditional debugging techniques fail in the time dimension. Though sometimes an interrupt service routine might be so slow and so benign that developers can successfully single step, most start and run to completion in microseconds and cannot, under any circumstances, be stopped. Set a breakpoint in an ISR that accepts fast serial data and you’ll lose incoming information. The emulator’s trace facility captures the ISR without affecting system operation or throughput.
Consider what happens when
you set a complex breakpoint in a software-based debugger (like
CodeView), or in a BDM or similar embedded tool. To break when the code
writes 0x23 to foobar, the software/BDM debugger will, in effect,
single-step through much of the code, examining foobar at each step, as
there’s no hardware resource to detect the condition. On an emulator,
however, the complex
breakpoints run at full speed in real time. The unit monitors bus
cycles, never slowing things down until it’s actually time to take the
breakpoint.
“Hard” real-time systems
fail when some action does not take place in a timely manner. Use the
emulator’s timers to trigger trace collection or a breakpoint when an ISR
exceeds some maximum latency, or when mainline code runs too long. Or
measure time to understand where it bleeds away. Time-stamped, qualified
trace tells you how often a
particular routine runs. Timers triggered on start/stop conditions
report periods lost to runtime routines.
The point is that real
time is an important issue in some embedded systems and that real-time
developers need tools that help them work in the time dimension as
easily as in the procedural one. To date, only the ICE fills this role.
Support
The dark side of selling emulators is providing effective customer support. Vendors provide a very complex device, integrated with a third-party software debugger which must seamlessly connect to hundreds of different, often poorly designed, target systems. A target system timing error of even a few nanoseconds can render the ICE useless even when the target seems to run correctly sans ICE.
Customers are equally
frustrated. After spending $5,000 to $20,000 or more on the device,
they quite reasonably expect and demand a low-hassle,
plug-it-in-and-run-it tool. No one wants to spend time
fighting with problems that appear to be caused by the ICE—let alone
making changes to an apparently working target system just to make a
tool function properly.
Customers are not emulator
experts and vendors know little about each individual target system.
Worse for the support providers, all too often the customer is a
software engineer who has perhaps zero knowledge of the target’s
hardware design. It’s pretty hard to provide first-class support when
questions like “what’s
your target’s clock frequency?” are met with dead silence.
Vendors themselves are
challenged in finding the really smart engineers who can identify
problems from minimal telephone clues and who deeply understand the
interaction of hardware and software, yet who are happy to work the
support lines. Some vendors split their designers’ responsibilities,
having them man the phones for part of each month. Others use a team of
support people as the first line of defense, addressing the more
straightforward
problems and turning killer issues over to the design group.
Gone are the days when
vendors routinely sent applications engineers to your facility to
diagnose and cure hardware/software interaction problems. Most support
takes place over the phone via toll-free hotlines. This reduces support
costs to the vendor—as well as emulator costs to the customer—but,
though the telephone is a marvelous communications device, it truly
reduces the vendor’s understanding of what’s going on
Something like 50% of
support involves debugging the customers’ target systems. A CPU input
pin intentionally or accidentally left floating may confuse the ICE;
low clock levels and ringing make emulation impossible; a design that
is technically correct but that leaves zero timing margins will often
defeat any CPU-based debugging tool. On top of this, the astonishing
complexity of modern processors makes building a perfect, nonintrusive
ICE difficult at best, so all
too often emulator restrictions and even bugs create support headaches
for both vendors and customers. Worse, new CPUs come onto the market so
quickly that often what appears to be an ICE defect is really a flaw in
the chip’s design.
All ICE vendors invite
customers to send their target systems into the factory, where the
emulator designers can work through target and ICE problems. This is in
one sense a perfect solution to the support problem, as the vendor’s
engineers have all of their tools
and resources at hand to make the unit work properly in the target. The
downside is that problems surface when the customer is in panic mode.
Even with FedEx, a week or more may get lost in shipping—a week that no
one wants to sacrifice.
Clearly, emulators are
the most complex debugging tools you’ll use. Leave time in the
development schedule to get the unit working properly! And recognize
that you might have to send the target to the vendor, so try to ensure
that at least the computer boards
can run without needing a three-ton cabinet of I/O electronics.
Many vendors welcome
customer contact before the target design starts. Call your selected
vendor to discover the unit’s restrictions and just to get design
advice. ICE vendors see hundreds of designs based on any one processor
and have learned tricks to avoid problems that plague too many
engineers. Make sure the hardware team accommodates these limitations
up front, rather than presenting you with an un-emulatable board.
Selecting an ICE
We techies tend to focus on features when
selecting a car, a computer, or any cool bit of electronics.
Silver-tongued salesmen push features as the way to our hearts and
wallets. Yet we fork out substantial amounts of cold, hard cash not for
features per se, but for a tool that helps us debug our code faster. A
feature that works well, that’s slickly integrated into the source
debugger, and that gives us deep insight into our
code’s operation is invaluable. One stemming from a marketing guru’s
dreams of the next IPO may look good on a datasheet but do little
that’s truly useful.
Few big projects use ICEs as the sole debugging tool, so the first consideration is to look at the entire development environment. Answer these questions:
How well does the source-level debugger interface to the emulator? If the emulator’s most powerful features are accessed via a crude command-line screen, odds are you’ll be unable to use symbolic or source-level references in that screen.
Is the source-level debugger very tightly tied to the compiler? Unless it’s deeply aware of how the compiler generates code, it will struggle to convert acquired real-time trace data back to source form.
Are all third-party software packages intelligently supported? RTOS-awareness is today’s holy grail. Most debuggers understand at least some RTOS data structures. If you’re using an RTOS, make sure the debugger/emulator combination will show what the RTOS is doing in a high-level form.
What other debugging strategies will you employ? Except for smaller projects using just a couple of engineers, most systems today use a mix of debuggers. As mentioned earlier, perhaps the bulk of the team uses BDMs or ROM monitors. A few emulators might be reserved for tough problems. In this case make sure the emulator and BDM use the same source-level debugger. Otherwise you’ll forget the ICE’s interface and resist using the tool rather than relearn it. Use a common source-level debugger across all of the BDMs and emulators, and moving between tools will be as hassle-free as possible.
Next, think through your
expectations of the tool. Do you demand 100% nonintrusive behavior in
all modes? For some processors this may simply be impossible.
Consider cache. Less expensive ICEs might have a restriction that requires disabling the cache so the unit sees every
bus cycle (remember that transactions to on-board cache are not mirrored to the bus).
The prefetcher/pipeline may also create
difficulties. Newer emulators are now prefetcher-smart and won’t break
on prefetched-but-discarded cycles. Even better, they show trace data
properly aligned with memory read/write cycles displayed with the
instructions that caused the bus operation.
But when an emulator includes, for example, a performance analyzer or code coverage tester, new problems surface. Code coverage logs addresses on the bus to ensure that every instruction gets executed during testing. Performance analyzers may monitor the same addresses, mixing in time data, to see what runs when. Prefetchers keep a small instruction queue between the bus and the CPU core full by assuming the next instruction needed will be at the current address+1. Jumps, calls, and interrupts invalidate this assumption, so addresses on the bus may—depending on the design of the emulator—not always indicate what the CPU is actually doing. The upshot is that code coverage and performance analysis based on prefetched data will give occasional wrong results.
Alternatives exist; some tools instrument
your code to echo true address info out to the bus. Talk to the vendor,
ask hard questions, and be sure to match your expectations against the
realities of the situation.
Decent customer support is critical. Evaluate the prospective vendor’s support team: call and ask hard questions. Be demanding. Ask for references. If, as is quite common, the source-level debugger comes from a vendor other than the emulator manufacturer, who supports what? Will you get passed from toll-free number to toll-free number ad nauseam, with each supplier pointing its finger at the other? Or, will one company simply take complete customer satisfaction responsibility?
Toss in an RTOS and, of course, you’re
expecting reasonable RTOS awareness in all of the tools.
Where do you turn for support of the RTOS and access to the
RTOS’s internals?
Think through the mechanical connection
to the target system. SMT (surface mount) and BGA are devastating
technologies to developers. They’ve led to teetering towers of sinfully
expensive adapters between the target’s CPU and the emulator’s pod.
Unless you’ve got a reliable connection, each passing wisp of wind
seems to cause one or more pins to pop free. Sometimes the best
alternative is to solder the adapters in place.
Decide what sort of connection you’d
like between the ICE and your workstation. RS-232 is a great choice for
smaller systems, but when building a megabyte of firmware, downloading
code will eat up a lot of development time. Though we rarely upload
code from the emulator, we do bring up huge hunks of real-time trace
data, so trace is another communications bottleneck. A 128K deep trace
buffer 100 bits wide is over a megabyte itself.
Cards that plug into your PC offer nearly instantaneous download speeds at the cost of fighting the usual hassles of making new hardware work under Windows. Network connections give a nice mix of ease of installation with very high data transfer rates.
Running a fast target? Today it’s
possible to buy ICEs for 100MHz CPUs—that’s a 10ns machine cycle. Most
high speed processors complete the majority of instructions in a single
cycle; that 100MHz emulator must keep up with 100 million instructions
per second. At these rates the speed of light becomes a serious
obstacle just to PCB design, and a major
challenge to the emulator engineers who must propagate the signals off
your board, into their tool, seamlessly and with essentially no signal
degradation.
If your design uses a high speed
processor and you haven’t contacted the ICE vendor long before
designing the system’s hardware, you’ll fail. Odds are that the
hardware designers will use every bit of margin for their own on-board
needs, leaving zero (or negative) nanoseconds left over for the sake of
the tools. The tiny timing shifts
created by an ICE may well make your board all but undebuggable.
Raw speed is just part of the problem. Fast logic switches from a zero to a one and back at sub-nanosecond rates. Fourier taught us that these regular, rapid transitions are composed of sine waves with frequencies out to the hundreds of gigahertz—two to three orders of magnitude faster than the system’s clock rate suggests. Transmission line theory tells us that at these speeds, small impedance mismatches corrupt the signals, often to the point where ones and zeroes will be confused. A perfectly designed target system, with no impedance mismatches at all, still may fail when connected to an ICE, since the target drives the emulator’s cable and other electronics.
The moral? Contact your vendor, early and often. Solicit design advice and application examples.
The ICE still holds its own
The ICE continues to thrive despite much
cheaper debugging
alternatives like BDMs. Time has changed the ICE’s role somewhat, but
it’s clear that no tool comes close in terms of troubleshooting
real-time problems and deeply buried hardware/software interactions.
Do the support and
technical woes make emulation sound like a nightmare? High-density SMT
CPUs and those systems running at high speeds do bring their special
challenges. An ICE offers tremendous debugging power, but at the cost
of some initial setup trials and perhaps design restrictions. Work out
the details up-front, early in the project, and minimize the hassles.
Jack Ganssle is a consulting technical editor for ESP. He conducts seminars on embedded systems and helps companies with their embedded challenges. His “Break Points” column appears monthly in ESP. Contact him at [email protected] .