
+++ SEE FOR YOURSELF WHAT BLUE STREAK CAN DO AT OUR EMBEDDED WORLD STAND (HALL 12, STAND 244). +++

Sharp Microcontrollers

Naturally

fast and flexible

Microelectronic components for system solutions – naturally from Sharp

Blue Streak might well have been modelled on that naturally fast and flexible performer, the kingfisher. By combining an ARM processor, LCD controller and various built-in options such as Audio Codec and Smartcard, SD card, IrDA and USB interfaces in a conveniently compact package, Blue Streak gives you the high system performance and versatility

you want in controlling passive LCDs such as STN and DMTN and active LCDs such as TFT, Advanced TFT and HR-TFT in formats varying from 1/8 VGA (or even smaller) to XGA. Since this performance and flexibility are also combined with an outstanding low-power design, Blue Streak is ideal for handheld applications such as PDAs. Blue Streak – the name says it all.

And what can we do for your system? Give us a ring on +49 (0)180 / 507 35 07 or mail to infosme@seeg.sharp-eu.com

<Buyer's Guide>

ESE Magazine January 05

ARM-based microcontrollers: Stand and deliver

<Written by> Ian Johnson, ARM </W> Supplement sponsored by ARM www.arm.com

The move from 8- and 16-bit processors to standalone 32-bit processors with multiple peripherals allows developers to use high-level programming languages to develop software to run on embedded devices. This creates new demands on software development tools.

STANDALONE PROCESSORS based around the ARM 32-bit processor family are becoming increasingly popular, with processors available from many companies. These processors include technologies that, when best exploited by compilers and debuggers, can cut development times and improve the quality of software.

Moving to a 32-bit processor opens up several advantages for the software developer, not least the ability to use higher level languages and hence allow the re-use of tested and proven code from other projects, cutting design times and increasing productivity. However, maybe the most important benefit of using an ARM core-based processor is that it is able to run code compiled for the 16-bit Thumb instruction set, thus saving dramatically on memory cost due to reduced code size (on average a saving of 30%). It is the job of an efficient compiler to produce compact 16-bit Thumb binaries from high-level C or C++ source code.
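As a rough illustration of how this looks in practice, the same C source can be built for either instruction set purely through compiler options. The flags below are for the GNU toolchain (arm-none-eabi-gcc) and are indicative only; each commercial ARM compiler has its own equivalents.

/* checksum.c - ordinary C source; the instruction set is chosen at build time.
 * Thumb for code density (indicative GNU toolchain invocation):
 *   arm-none-eabi-gcc -mcpu=arm7tdmi -mthumb -mthumb-interwork -Os -c checksum.c
 * ARM for speed-critical modules:
 *   arm-none-eabi-gcc -mcpu=arm7tdmi -marm -mthumb-interwork -Os -c hot_path.c
 */
unsigned int checksum(const unsigned char *buf, unsigned int len)
{
    unsigned int sum = 0;
    while (len--)
        sum += *buf++;
    return sum;
}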

Compiler

There are an increasing number of developers using C++ for deeply embedded designs, thus presenting the compiler with the challenge of minimising the impact of C++ language features on code size, otherwise known as 'code bloat'. For example, a hard disk drive manufacturer is using C++ for the code running on its ARM core-based disk controller in order to increase programmer productivity and quality, making the code easier to structure, debug and re-use.

This trend requires the compiler to be aware of the needs of embedded designers using C++, where resources are restricted and hitting specific performance targets is key, rather than compilers aimed at the desktop software development market, where the developer can essentially assume unlimited memory and disk resources. This means that the compiler must support the use of language constructs such as C++ exceptions, but a developer who does not use this feature in their software should not incur the overheads inherent in supporting such a feature. A compiler should provide a mechanism for the user to say that their software does not require C++ exception support, or better still should detect this at compile time. Code size is also minimized using smart inlining, whereby the compiler makes inlining decisions based on function size and usage – this feature can give over 20% code-size savings.

The compiler also has to be aware of the different versions of the ARM core and the different instruction set architectures in order to best use the processor pipeline.

The latest version of the ARM architecture currently being implemented in standalone processors is the V5TE instruction set in ARM9 cores. This architecture has new instructions such as count leading zeros (CLZ) and 16-bit multiplies to increase code density and performance in DSP code, division and floating point arithmetic. Qualities of a good compiler include automatically selecting these new instructions when possible and using optimized C runtime libraries. When compiling for a V5TE core, a library containing functions optimized with CLZ is used, rather than an unoptimized library which also works on V4 cores. Such features can make the most of new instructions and are a key advantage to embedded software designers.
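To make the benefit concrete, here is a hand-written sketch (not compiler output) of a typical use of CLZ: finding the highest set bit in a pending-interrupt mask. The __builtin_clz intrinsic is the GNU compiler's spelling; other toolchains provide their own intrinsic or simply generate the instruction from the generic loop.

/* Priority decoding: find the highest set bit in an interrupt pending mask. */

/* A generic C loop that works on any core: */
int highest_pending_generic(unsigned int pending)
{
    int bit;
    for (bit = 31; bit >= 0; bit--)
        if (pending & (1u << bit))
            return bit;
    return -1;                              /* nothing pending */
}

/* On a V5TE core a single CLZ does the same job; a good compiler picks it
 * from its optimized runtime library, or it can be requested explicitly
 * (GNU intrinsic shown - other toolchains differ): */
int highest_pending_clz(unsigned int pending)
{
    if (pending == 0)
        return -1;
    return 31 - __builtin_clz(pending);     /* CLZ counts leading zero bits */
}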

This also means that the particular ARM core is a key piece of data. While the compiler can know the correct instruction set architecture for a particular core – the ARM7TDMI or ARM926, for example – it may not know which core is in a specific standalone processor from a third-party chip developer. These all have different naming conventions depending on the peripheral sets, rather than the processor core. When there is no specific tool set for the standalone processor, it is up to the engineer to ensure that the compiler knows which ARM core is being used, and this is found on every datasheet.

There is also an Application Binary Interface (ABI) specified by ARM that enables code compiled by compatible compilers to be built together into the same application without the need to port the code between the different compilers. This is being rolled out as an open standard in the same way as the AMBA bus standard.

Debugging

There are several hardware and software technologies available to support debugging standalone processors based around ARM cores.

The EmbeddedICE Macrocell is included in every ARM core and provides a JTAG interface to the boundary scan registers and some watchpoint registers, in addition to a debug communications channel. This technology allows a debugger to access the hardware registers and full system memory through five pins.

If the processor is running a real time operating system (RTOS) then it can be useful to view the state of the RTOS at the same time as registers or system memory. When execution halts it is easy to see exactly what lines of source are being executed in each context of interest. The ability to bring up debug windows whose contents show the source code, stack, registers, resources and so on that relate to a specific execution context can be vitally useful in pinning down a coding error, for example. Single stepping code in a particular context will update the state of execution contexts shown in other debug windows to see how different threads and processes interact.

If there is no RTOS running or no RTOS support, then RealMonitor can make use of a small program built into the application that watches for a particular exception and on the exception talks back through the debug communications channels. This can be interrupt driven or polled, with the goal that both methods should be as non-intrusive as possible. This is not strictly possible, as there is always code running, either directly or as a thread, that could feasibly interfere with the application code.

A technology that is emerging in standalone processors is the Embedded Trace Macrocell (ETM). This is a block of logic, with its own set of registers which are programmed via the JTAG port, that sits between the core and memory and can monitor non-intrusively the interactions of the memory with the outside world. By monitoring the load, store and move data operations and instruction fetches it can build a mirror of the instruction flow and data accesses, which can then be transferred to a separate Trace Port Analyser (TPA). The debugger can then decode the information stored within the TPA to recreate, or trace, a particular sequence of events that, for example, caused the code execution to be corrupted.

As the ETM can be configured by the debugger, this also allows the filtering out of specific addresses or address ranges using address comparators before the data reaches the debugger, allowing real time analysis at the hardware level and reducing the amount of data that has to be passed over the link.

System level development

A key issue with standalone devices is that they all have different peripherals, some of which are themselves complex controllers that need to be accessed during the software development cycle, and there are several different ways of doing this.

To tackle this, it is extremely useful to visualize the different peripherals that are attached to the core, ranging from the memory controller or the vector interrupt controller down to peripherals as simple as an LED controller.

This can be implemented as simply as a text file that can be read by the debugger and describes the peripheral and gives it a value type, the actual value of which is read into the debugger through the JTAG port when the core is being debugged. In essence this process is describing the standalone processor datasheet in a way the debug tools can interpret. In this way the developer can use the debugger to see and directly modify real values, such as whether an LED is on or off, rather than a bit value at an obscure memory location, making system development faster and easier.
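The same datasheet information is often captured on the compiler side as well. The sketch below shows the idea as a C register overlay; the register layout and addresses are invented for illustration and do not describe any real device.

/* Hypothetical GPIO peripheral described as a C overlay - a debugger (or a
 * peripheral description file carrying the same information) can then show
 * LED_PORT symbolically instead of a raw word at 0x40002000. */
#include <stdint.h>

typedef volatile struct {
    uint32_t DATA;      /* pin state: bit n = 1 means pin n is high (LED on) */
    uint32_t DIR;       /* direction: bit n = 1 means pin n is an output     */
    uint32_t SET;       /* write 1 to set a pin                              */
    uint32_t CLR;       /* write 1 to clear a pin                            */
} gpio_regs_t;

#define LED_PORT  ((gpio_regs_t *)0x40002000u)   /* invented base address */
#define LED_BIT   (1u << 4)

static void led_on(void)  { LED_PORT->SET = LED_BIT; }
static void led_off(void) { LED_PORT->CLR = LED_BIT; }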

The value of this approach is that chip vendors can take the files and modify them for their own particular products to ease the development process. This is now starting to happen with standalone processors, providing more information to the debuggers to ease the development process.

All these capabilities have been included in deeply embedded ARM devices such as ASICs and system-on-chip devices, but are now becoming available on the wide range of standalone processors from different vendors. While not all are implemented by every development tool at this point, these capabilities are being added into the processors and the development tool chain to reduce the development time and ease the development process.

<Ends>

www.arm.com

</Buyer's Guide>

<Buyer's Guide>


ARM7 as a General Purpose Microcontroller

<Written by> Trevor Martin, Hitex (UK) Ltd </W> Supplement sponsored by ARM www.arm.com

What should you look for when choosing an ARM based MCU?

ONE OF THE EASIEST trends to spot in the world of microcontrollers is the adoption of the ARM7-TDMI core as the basis for general purpose microcontrollers.

Originally ARM processors were licensed as IP, largely for consumer products such as mobile phones. The high development costs of such projects made the use of the ARM7-TDMI the preserve of large blue chip companies. However, there has been a rush of semiconductor companies releasing new microcontrollers based on the core. Many of these microcontrollers are true single-chip, 32-bit devices at astonishingly low prices – often beating the price of existing 8-bit microcontrollers. This article will look at how the ARM7-TDMI is used in a general purpose microcontroller and review key points to look for when evaluating such devices.

What is ARM?

The ARM7 TDMI-S CPU is a 32-bit pipelined RISC processor with six different operating modes to support exception processing and operating systems, two instruction sets and a MAC unit. Due to its innate simplicity, it has a very low gate count: the CPU takes up only a small silicon area, leaving room for interesting on-chip peripherals, and has very low power consumption.

Instruction sets

At first sight, most ARM7 TDMI-S micros have reasonable amounts of on-chip flash ROM for program storage. However, since the ARM7 TDMI-S is a 32-bit microcontroller, with each instruction four bytes long, it is quite easy to gobble up the on-chip flash rapidly. For this reason, there are two instruction sets. The ARM instruction set is 32-bits wide and will produce the fastest code. The Thumb instruction set is 16-bits wide and compresses the program size but reduces the performance of the processor.

Figure 1: ARM7 TDMI-S

Figure 2: Test and decrement statements

To fit an application into the restricted resources, it is vital to inter-work the two instruction sets; for example, all the interrupt routines could be coded in the ARM instruction set for maximum performance, while the larger background code could be coded in the Thumb instruction set for maximum code compression.

The ARM 32-bit instruction set also has some unique features. Every instruction is conditionally executed, depending on the condition code flags in the CPU. Figure 2 shows how a more traditional microcontroller deals with a test and decrement statement. If this was used on ARM the pipeline would be flushed and have to be refilled. However, the ARM instruction set always executes the decrement instruction: if the condition is true the variable is decremented, otherwise the instruction passes through the pipeline as a NOP. This removes the small conditional branches typically found in microcontroller programs, producing more linear code and enhancing the performance of the pipeline.
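A small sketch of the idea follows; the assembly shown in the comment is indicative only, since the actual output depends on the compiler and optimisation level.

/* The test-and-decrement of Figure 2, written in ordinary C.
 * A traditional microcontroller compiles the 'if' as a compare, a conditional
 * branch around the decrement, and the decrement itself.  An ARM compiler can
 * instead predicate the decrement, for example:
 *
 *     CMP   r0, #0        ; set the flags from the counter
 *     SUBNE r0, r0, #1    ; decrement only when it was non-zero
 *
 * No branch is taken, so the three-stage pipeline never needs refilling. */
static unsigned int tick(unsigned int timeout)
{
    if (timeout != 0)
        timeout--;
    return timeout;
}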

Memory requirements

The ability of the ARM7 TDMI-S to execute instructions in a single cycle is a potential problem for chip designers. Currently the fastest implementation of ARM7 in standard silicon runs at 60MHz, so a single cycle instruction needs a memory access time of 16.6ns. Since this is currently faster than most commercial flash memories, simply fetching the instructions for the processor becomes a major bottleneck. When evaluating an ARM7 TDMI-S based microcontroller, it is important to see how this problem has been overcome.

The simplest solution, and the approach used on the earliest microcontrollers, is to copy executable code to the on-chip SRAM and use its fast access time to boost the program performance. As on-chip SRAM is usually very limited, this is impractical for single chip applications. Another solution is to add a cache memory. While this will speed up program execution, it has a couple of drawbacks. Firstly, caches are complex and take up a lot of die space that could be used for extra peripherals. Secondly, a cache makes program execution non-deterministic, which can be a problem in some hard real-time applications. An interesting half-way house is to use "memory accelerator" units. These essentially buffer a small page of program instructions without any of the intelligence of a cache. This works well with ARM 32-bit instructions as the linear program flow from the conditional execution will get a high hit rate on the accelerator and thus a big performance boost.

Interrupt handling

The ARM7 TDMI-S has two external interrupt lines, IRQ (general purpose interrupt) and FIQ (fast interrupt), to service all the on-chip peripherals. Using a simple OR gate to join the peripheral interrupt lines to, say, the IRQ line would result in very poor interrupt latency and restrict the chip's performance. Instead an additional interrupt unit external to the CPU has to be added, providing additional hardware support for interrupt servicing. ARM has designed a standard Vectored Interrupt Controller (VIC) to act as a hardware look-up table and provide the address of the required interrupt service routine when an exception is triggered. Some microcontrollers have more complex interrupt support that provides automatic interrupt nesting. How interrupt handling has been implemented affects the real-time performance of the implementation as a whole.
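The sketch below shows one common way of using such a controller from C. The register name and address follow the style of an LPC2000-type VIC but are illustrative only and must be checked against the datasheet of the device in question; the interrupt attribute is the GNU compiler's syntax.

#include <stdint.h>

#define VIC_VECT_ADDR  (*(volatile uint32_t *)0xFFFFF030u)  /* illustrative address */

void __attribute__((interrupt("IRQ"))) irq_dispatch(void)
{
    /* The VIC has already arbitrated between the pending peripheral
     * interrupts and presents the address of the winning service routine. */
    void (*isr)(void) = (void (*)(void))(uintptr_t)VIC_VECT_ADDR;

    isr();                 /* run the peripheral's handler                  */
    VIC_VECT_ADDR = 0;     /* acknowledge so the VIC priority logic updates */
}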

Debugging

The ARM7 TDMI-S has much of the necessary debugging hardware included on-chip, replacing expensive and complex in-circuit emulators, although the level of debug support is manufacturer dependent. The minimum is a JTAG port to allow flash programming for on-chip and external memory and simple debug features such as breakpoints, start/stop execution and the viewing of memory. ARM provides the "Embedded Trace Macrocell" (ETM) as an additional debug port. This has all the features of the JTAG port but also provides real-time trace and operating system information and enables features such as code coverage monitoring and performance analysis. These start to match in-circuit emulators and are essential for code development in safety-critical and high-integrity applications, allowing rapid detection of more complex real-time bugs and extensive software testing. As the ETM is not a standard part of the ARM7 TDMI-S core, semiconductor manufacturers have to licence it from ARM. ARM also provides an additional debug feature called "RealMonitor". This is resident in its own flash memory, separate from the main application flash memory. It can be activated by the JTAG debugger and provides pseudo-real-time updates of selected variables to the JTAG, allowing you to watch variables on the fly, albeit with some intrusion by the debug tools.

In addition to the hardware debuggers, a number of simulators are also available. Why would you want a simulator when low cost hardware debuggers are available? While a pure ARM7 simulator is of limited use, there are simulators available for specific microcontrollers with accurate peripheral and interrupt simulation. Being able to swap between simulation and real world debugging goes some way to overcoming the limitations of the basic JTAG interface.

Writing the code

There are many ARM compilers available on the market and they can all generate code to run on any ARM7 TDMI-S based microcontroller. There is even a free compiler available as a GNU port. However, most of them do not provide any specific support for microcontrollers and many are biased towards generating the fastest possible code. For a small footprint, single-chip microcontroller with limited memory resources, code size is the major concern. Since most of your software will be in the Thumb instruction set, the efficiency of Thumb libraries and code generation is particularly important.

Each device manufacturer implements their own memory system, so the start-up assembler code required to get from the interrupt vector to main() in your C code will vary. Checking that suitable start-up code is available for the microcontroller you intend to use will save a lot of work, particularly if you are new to the ARM7 CPU.

Conclusion

Families of ARM7 TDMI-S based general purpose microcontrollers are available from several manufacturers. They range from very small footprint, single-chip devices with limited SRAM and flash memory and a restricted peripheral set, up to feature-rich microcontrollers with external busses supporting megabytes of memory. All have the same core CPU and work with the same tools. Several manufacturers are set to follow the logical upgrade path from ARM7 to the more powerful ARM9, creating a true industry-standard architecture running from sub-8-bit prices to powerful microcontrollers running operating systems such as Linux and CE.

<Ends>

www.hitex.co.uk

 

</Buyer's Guide>


In-Depth: Comments on Code Generation

<Written by> William I. Lundgren, Gedae, Inc. </W>

Tools for code generation need to take account of the many different aspects of the real world. How are these best resolved?

CODE GENERATION implies another level of abstraction over source code. The idea is that some of the implementation detail that is included in source code is not necessary when applying code generation.

While a graphical representation is used in place of source code by many code generators, sometimes the implementation detail (such as sends and receives) must still be included. In such cases the code generation has added little value: the application is not portable, cannot be easily reconfigured, obscures the essential functionality and limits improvements in coding productivity.

The separation of the purely functional description (what processing of the data is to be done) from implementation specific information (how the functionality is to be implemented) is the essential goal. This can be done by maintaining 2 separate sources of application information – one describes the functionality and the second specifies how that functionality is to be implemented. Many copies of the implementation specification can be maintained for a single graph.

Another consideration is the amount of information that is added by the code generator. One simple example is the addition of send and receive functions. The implementation specification says to put adjacent function boxes on different processors, and the code generation transforms determine the need for send and receive boxes.
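Purely as an illustration of that idea (this is hand-written C, not Gedae output, and every function and transport call below is a hypothetical place-holder), two adjacent boxes mapped to different processors might end up looking like this:

/* Functional description: acquire -> fft -> filter.  When the implementation
 * specification maps 'fft' and 'filter' to different processors, the code
 * generator inserts the send/receive pair; the functional graph never
 * mentions it. */
void acquire(float *out);
void fft256(const float *in, float *out);
void filter(const float *in, float *out);
void send_to(int proc, const void *buf, unsigned int len);
void receive_from(int proc, void *buf, unsigned int len);

enum { PROC_A = 0, PROC_B = 1 };

void producer_on_proc_a(void)
{
    float block[256], spectrum[256];
    acquire(block);
    fft256(block, spectrum);
    send_to(PROC_B, spectrum, sizeof spectrum);        /* inserted by generator */
}

void consumer_on_proc_b(void)
{
    float spectrum[256], shaped[256];
    receive_from(PROC_A, spectrum, sizeof spectrum);   /* inserted by generator */
    filter(spectrum, shaped);
}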

The challenge in designing the language for a code generator is to discover the essential functional information that provides sufficient information to maintain functionality while the code generation makes radical changes to the implementation. A second challenge is the need for intelligent algorithms that transform the functional and implementation specification into an implementation that runs on a heterogeneous (RISC, DSP, FPGA and other architectures) multiprocessor target.

All the concepts, such as data flow, state machines and object orientation, need to be available in the comprehensive language we seek. Data flow is a good abstraction that meets the needs of some functional requirements. Object oriented concepts provide for the localization of related functionality. State diagrams can directly express part of the functionality. Each is limited. We need a fully integrated language to directly state the diverse functional requirements of system development. This language needs to contain sufficient information so that intelligent algorithms can produce a functionally accurate implementation. We must integrate and extend existing concepts to create a new language that meets those requirements. Once the language has been developed (or during the process of developing the language) we must also develop a full suite of algorithms that transform the functional description into an implementation. The suite must take into account the variability of the implementation specification

and target, and must not sacrifice efficiency. Gedae is an example of such an approach. The approach has to be to identify and attack some of the parts of a problem that lead to a valuable capability, and then to progressively attack additional pieces of the problem. Gedae breaks the problems into domains that provide for direct expression of very diverse functional requirements. Three current domains deal with efficient execution when data production is static, dynamic, or segmented. State information and software must be reset on the segment boundaries. A fourth domain addresses parameters, where all parameters are assumed to be in agreement at all times. The sequence of execution must be under user control since boxes with side effects (side effects are not visible to Gedae) can result in unpredictable behavior. Gedae has chosen a simple virtual machine (N fully connected processors) as the target for the initial stage of development. The 100+ algorithms Gedae uses to transform the application allow it to create an efficient application on top of the virtual machine, and the structure of the Gedae application allows for future development and enhancements to be added with minimal disruption. Gedae is adding another domain to attack the programming of hardware components such as FPGAs. The first product of that type will be released in Q1 2005.

<Ends>

www.gedae.com

 

Code generation tools make real impact

Martin Whitbread looks at recent developments in code generation.

JUST AS A COMPILER can be used to create the machine code for an application, code generation tools sit at a higher level of abstraction and generate source code for inclusion in the application. State machines are one area where the process can be automated, generating a table driven state machine, which is much more compact than a switch statement or several pages of nested if statements. The rise in the use of UML 2.0 to model applications is reflected in the availability of tools to generate code from the model. The increasing density of FPGAs has made it possible to automatically port some components of a model to hardware, leaving the rest to run in a closely coupled processor.

IAR: state machines

The IAR state machine tool visualSTATE uses standard Boolean mathematical algorithms to translate the state chart diagram into efficient table driven C code. It produces target and compiler independent ANSI C code which can be used for 8, 16 & 32 bit MCUs and even PC applications.

How such a system deals with interrupts is critical to the endeavour. In visualSTATE interrupts are "events" and each event is a potential trigger for a transition. The sample C code below illustrates the use of interrupts within a visualSTATE based application:

Interrupt [0x23] UART_Receive_Handler (void)

{

    // add any code before executing task
    // add interrupt event to a queue
    SEQ_Add_PrioEvent(eUART_receive);
    // visualSTATE is constantly checking
    // this queue to process events

}

It is possible to integrate visualSTATE with existing applications. All the device driver code that has been written is retained and any control logic code is replaced with the code generated by the state chart model. Where an RTOS is used (commercial or home grown), each task can be an independent visualSTATE system. A Navigator contains a tree browser, allowing users to see the file structure of their workspace. Another tool, Designer, is used for creating state charts. The tree browser allows users to review and navigate through their project. In the main window users create their model using states, transitions, events, actions, initial states, variables, assignments, concurrent regions, unit states, history states, deep history states, guards, signals, parameters, entry, exit and do reactions - according to the UML notation.

The use of state charts enables the construction of an interactive and iterative working model where users begin with an outline of their application, and then step by step add functionality at a more detailed level. It is possible at any time to simulate the behaviour of the model using the IAR visualSTATE Validator, and create a prototype or target implementation whenever that is wanted. In order to see what is going on in state chart diagrams during simulation, Graphical Back Animation can be used.

The code generated by visualSTATE is in ANSI-C or C++ source code. This gives the maximum flexibility in porting the application to a specific target. Returning to the key concern of code size, IAR claims that applications based on visualSTATE will typically occupy less code and data space than a corresponding handwritten application. This means that visualSTATE is suitable even for 8- and 16-bit targets.
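As a rough, hand-written sketch of what "table driven" means here (this is not actual visualSTATE output), a two-state controller can be reduced to a constant table and a tiny dispatcher:

/* Illustrative table-driven state machine: a two-state LED controller. */
typedef enum { ST_OFF, ST_ON, NUM_STATES } state_t;
typedef enum { EV_BUTTON, EV_TIMEOUT, NUM_EVENTS } event_t;

typedef void (*action_fn)(void);
static void led_on(void)  { /* drive the port pin high */ }
static void led_off(void) { /* drive the port pin low  */ }
static void no_op(void)   { }

typedef struct { state_t next; action_fn action; } transition_t;

/* One row per state, one column per event - this replaces nested if/switch code. */
static const transition_t table[NUM_STATES][NUM_EVENTS] = {
    /* ST_OFF */ { { ST_ON,  led_on  },     /* EV_BUTTON  */
                   { ST_OFF, no_op   } },   /* EV_TIMEOUT */
    /* ST_ON  */ { { ST_OFF, led_off },     /* EV_BUTTON  */
                   { ST_OFF, led_off } },   /* EV_TIMEOUT */
};

static state_t current = ST_OFF;

void dispatch(event_t ev)
{
    const transition_t *t = &table[current][ev];
    t->action();
    current = t->next;
}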

ARTiSAN: UML modelling

UML 2 is making a significant impact on large complex applications where a UML model can be built and tested relatively quickly and the application then generated rather than hand coded. ARTiSAN Real-time Studio provides modelling support for UML 2.0 and supports SysML standards. It includes code synchronizers for Ada 83/95 and Spark Ada 83/95, as well as C, C++ and Java, making it interesting to military, aerospace and other safety critical users. Forward generation, reverse engineering and round tripping are all supported. Users can synchronize code and design throughout the development lifecycle – whether they prefer to make changes at the model or code level. Different parts of the model can be implemented in different programming languages.

Synchronization requires no ‘magic tags’ in the code and there is no dependency upon any inefficient or inflexible middle layer code. Because code generation is template-based, it is customizable. Users can take complete control of the mapping from UML to code customizing templates to match their in-house coding standards or specific safety-critical subsets, like Praxis’ Spark Ada. They can also map model components to the OMG Interface Description Language (IDL) for CORBA and generate SQL schema for databases using ARTiSAN’s Table Relationship Diagram.

Telelogic TAU: UML 2.0

With the release of TAU/Developer 2.3, Telelogic extended its UML 2.0 modelling support, and introduced new model-driven development support for C++ and Java. The new features of TAU/Developer 2.3 included support for UML 2.0 Activity, Component, Deployment, Interaction Overview, and Package diagrams.

TAU/Developer 2.3 also introduced UML 2.0 generation from state diagrams for real-time components, round-trip and reverse engineering, and integration with Microsoft Visual Studio .NET for graphical debugging.

New model-driven Java development support was also introduced with a Java development environment aimed at software architects/designers who produce design blueprints using UML, and Java developers who work with source code in IDEs. Capabilities include modelling using any combination of UML graphical, UML textual or Java syntax, round-trip and reverse engineering, and integration with the Sun Java Studio and Eclipse IDEs. The performance of building executables from the model (C code generators) has also been improved. This release also added further optimization algorithms to the Agile C code generator, lowering the cost of target hardware by reducing the resource demands of the software.

LabVIEW: functionality onto FPGAs

At first glance graphical virtual instrumentation tools like LabVIEW seem to have little to do with actual embedded applications. Further study reveals this not to be the case. Target code generation using FPGAs and a dedicated CPU is an option that results in execution speeds not imagined on a PC and a means of rapidly generating some quite complex applications.

In order to support the operation of LabView models in embedded applications National Instruments has produced CompactRIO. This is a small rugged industrial control and acquisition system powered by reconfigurable I/O (RIO) FPGA technology giving high performance and allowing customization. CompactRIO incorporates a real-time processor and reconfigurable FPGA for reliable stand-alone embedded or distributed applications, and hot-swappable industrial I/O modules with built-in signal conditioning for direct connection to sensors and actuators. Embedded systems can be developed using LabVIEW graphical programming tools for rapid development.

The CompactRIO platform includes cRIO-9002 and cRIO-9004 real-time controllers with industrial floating-point processors, the cRIO-910x family of 4- and 8-slot reconfigurable chassis featuring 1 million or 3 million gate FPGAs, and a wide variety of I/O types, from ±80 mV thermocouple inputs to 250 VAC/VDC universal digital inputs. CompactRIO embedded systems are developed using LabVIEW, the LabVIEW Real-Time Module and the LabVIEW FPGA Module. There are two configurations for CompactRIO – embedded systems and R Series expansion systems.

CompactRIO

With the embedded RIO FPGA hardware, users can implement multi-loop analogue PID control systems at loop rates exceeding 100 kS/s. Digital control systems can be implemented at loop rates up to 1 MS/s, and it is possible to evaluate multiple rungs of Boolean logic using single-cycle while loops at 40 MHz (25 ns). Due to the parallel nature of the RIO core, adding additional computation does not necessarily reduce the speed of the FPGA application.

The Future

Developments in software technology are always slow to become accepted: the steady progression from machine code through assembler to high level languages, and then to object oriented techniques, has taken decades. Tools that support large complex systems are essential if the industry is not going to see the types of major project failures that occur in IT.

<Ends>

www.iar.com

 

www.artisansw.com

 

www.telelogic.com

 

www.ni.com

 


THE TILCON IDS: The Graphical Interface Development Suite for Your Embedded Device

Come visit us at:

Feb 22nd-24th Hall 11 / Stand 126

Focus exclusively on your GUI differentiation and customer sign-off

by leveraging Tilcon's unique Embedded Vector Engine (EVE)

Develop, test, modify and demo your interface on a PC

Target many platforms with one development effort

Reconfigure your screens without recoding

Focus your time on custom differentiation. Our industry proven graphics engine will do the rest.

Develop, test and simulate on a PC, not target dependent.

Target VxWorks, Linux, CE, QNX on PPC, x86, XScale, TI, MIPS... screens run unchanged on all platforms.

Reconfigure or rebrand without recoding.

The Graphical Interface Company TILCON IDS is a trademark of TILCON Software Ltd.

All other company and product names are trademarks of their respective corporations.

New GIS Builder

Pre-integrated industry solutions: automotive, medical, industrial automation, defense...

Free 30 day evaluation system

www.tilcon.com

tel: +1 613-226-3917

email: infonews@tilcon.com

<In-Depth>


On-target rapid prototyping

<Written by> Tom Erkkinen , The MathWorks </W>

Your new algorithm does the job on a real-time, rapid prototyping computer, but will it work on the actual embedded target? Find out with on-target rapid prototyping, a fast emerging trend in the embedded systems development process.

FIFTEEN YEARS AGO at a design review, an automotive powertrain R&D or advanced production engineer is touting a hot new algorithm. The grizzled project manager grumbles, "Great, but will it drive an engine?" Months later, a hand-coded version of the algorithm appears and begins dynamometer or proving ground tests. After significant time and cost the project manager's test is answered. Today the R&D engineer touting yet another hot new algorithm has the added twist that it did indeed drive an engine in the lab or on the test track via the rapid prototyping computer. Now the grizzled project manager inevitably asks, "Great, but will it drive an engine using real production hardware?"

Is this progress? Definitely. One should first assess if it is even feasible for an algorithm to provide the correct behaviour for a highly complex system, such as those found in many of today’s automotive electronic control units (ECUs). And rapid prototyping on powerful realtime computers helps provide that understanding. However, one also needs to know if the algorithm is practical; that is, will it work on an 8-, 16-, or 32-bit resource-constrained ECU?

Modelling 101

It has been more than 50 years since the manipulation of block diagrams was described in two papers by T. M. Stout. Controls and signal processing engineers have been in love with them ever since. Block diagrams are the preferred way to specify or model a complex mathematical algorithm. A classic model of a feedback controller or DSP algorithm is shown in Figure 1.

Another powerful modelling capability is now available with finite state machines. When used in conjunction with block diagrams, a complete behavioural model can be created for embedded systems containing both event-based and time-based components. Examples of such systems include transmission control modules (TCMs), gas turbine controllers, and flight management systems (FMSs). These systems are likely to use block diagrams for specifying the digital processing, filters, and lookup tables, while state machines and flow diagrams provide the mechanisms for modelling fault detection, built-in testing, and mode/shift logic.

The model can be as detailed or as high-level as is appropriate for a given embedded system development stage. While the detail can transform the model into a classic software design specification (SDS) document, it is more than that: it simulates.

Simulating Models

Model simulation allows developers to check that a system’s behavioural requirements have indeed been satisfied. OEMs and their suppliers who exchange executable models instead of paper specifications find that this improves overall communication and reduces round-trip iterations to clarify requirements.

Simulation of system models containing block diagrams and state machines is now common practice, with system models containing tens or hundreds of thousands of blocks. Some blocks and state machines represent 10 or more lines of code. The key to success here is to manage the model like a formal software specification with modelling guidelines, doing model partitioning, holding model reviews, and so on.

Figure 1: Feedback controller model.

But not all embedded systems have the memory or the computational speed necessary to implement such massive creations. Modellers should recognise from the start that collections of deeply nested hierarchical state machines or filters comprised of complex H-infinity matrix maths operations probably just won't fit in a 16 bit processor with 32K ROM and 2K RAM. They need to understand which block constructs yield the best code and review models for maximum code efficiency. With this mindset and today's technology, it is certainly possible to deploy a model in a mass-production, low-cost hardware environment.

But having a model that executes on a host computer can only offer so much. The key enabler is automatic code generation, which transforms models into C code that can run virtually anywhere with push-button automation.
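To give a feel for the output, the fragment below is a hand-written sketch of the sort of step function such tools typically emit for a controller like the one in Figure 1; it is illustrative only and is not the output of any particular code generator.

/* Illustrative only: a discrete PI controller with one state (the integrator),
 * roughly the shape of an auto-generated controller step function. */
typedef struct {
    float integrator;            /* controller state, carried between steps */
} ctrl_state_t;

/* Called once per sample period by the scheduler or a timer interrupt. */
float ctrl_step(ctrl_state_t *s, float setpoint, float measurement)
{
    const float kp = 0.8f, ki = 0.05f;     /* example gain values           */
    float error = setpoint - measurement;

    s->integrator += ki * error;           /* discrete integration          */
    return kp * error + s->integrator;     /* actuator command              */
}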

Getting Code from Models

It is impossible to list all the development and test activities for which companies are using automatically generated C code: each component in Figure 1 can manifest itself and connect to other components as software, hardware, or remain a model. With that basic understanding, your process can assume almost any shape or form, not just a waterfall or V diagram. Activities based on automatically generated code from models include:

Simulation Acceleration – Code generated and compiled for both the plant and controller models executes on the host computer and runs much faster than interpretive simulation.

Rapid Prototyping – Code generated just for the controller model is cross-compiled and downloaded to a high-speed, floating-point, rapid-prototyping computer to execute in real time.

On-Target Rapid Prototyping – As with rapid prototyping, code is generated just for the controller model. It is then cross-compiled and downloaded to the production embedded microprocessor or ECU or a close cousin configured with a little more memory and I/O.

Production Code Generation – Code generated for the detailed controller model is downloaded to the production embedded micro- >>

