PCB assembly & start-up: the moment of truth

Author / Editor: Gerhard Eigelsreiter and Thomas Thun / Claudia Mallok

An interface-independent core hardware that combines a powerful CPU with dynamically reconfigurable FPGAs, thereby providing high I/O bandwidth that can be used without restriction – this was the idea to be realised by the CERO CPU module.


The combination of flexible software and hardware, high I/O bandwidth, extensive constructive measures in PCB design, and the optimisation strategies of the PCB manufacturer and the board assembler – in an interplay of schematic design and PCB layout – set new benchmarks for the stability and functionality of the hardware. The effort was worthwhile.

After completion of the PCB for the CERO module – an 18-layer board with stacked power planes, differential impedances, blind vias and plugged vias – it had to be assembled. Normally the board assembler is presented with a fait accompli: the finished PCB. It is therefore not surprising that some electronics manufacturers end up behaving as moodily as physics does in the GHz range. To prevent this, the cooperation of unit^el, ILFA GmbH and TAUBE ELECTRONIC was planned right from the start, and the exchange of information was implemented successfully.

The use of new PCB construction technologies and the placement of components with very small packages and high pin counts must comply with the production capabilities of the service provider. Especially for ICs in high-pin-count BGA packages, plugged-via technology demands precise coordination between board assembler and developer concerning land-pattern dimensioning and the SMT capacitors. This information is incorporated into the PCB design as early as the creation of the component library in the PCB design tool. After a modification of the land pattern of the SMT capacitors, the board assembler finally agreed to the plugged-via technology. The precision of the solder masks and the component spacing and density are just an excerpt from a set of rules that contributed significantly to the soldering and production quality of the assembly.

Realising a PCB design for manufacturing

An 18-layer board with stacked power planes puts the cooperation between PCB manufacturer and electronics manufacturer on a new and qualitatively higher level. At this stage, measures against twist and bow during the soldering process are determined, and solder samples are provided in order to work out optimised temperature profiles.

Start-up

Finally, dinner is served: for any hardware developer, start-up is the most interesting part, because even the slightest mistake is relentlessly exposed. Due to the complex PCB layer stack-up, the electronics assembler suggested producing only one prototype instead of three, leaving two complete component sets available for quickly implementing any changes.

A resulting disadvantage: during start-up no comparative measurements were available, which gave the situation – to put it mildly – a certain exclusiveness, with its moments of suspense spread out over time. The 3.3 V operating voltage was supplied externally through a 3-pole power connector. On the board itself, two linear regulators generated the additional voltages: the 1.8 V core voltage for the Coldfire CPU and the 1.5 V core voltage for the Virtex-II FPGA.

This sequential enabling of the voltages satisfies a critical requirement of the Coldfire, its power sequencing: the 1.8 V rail has to be available before the 3.3 V rail. An auxiliary logic ensures that the maximum permissible difference of 1.8 V between core and operating voltage is maintained even during power-down. The core voltage of the FPGAs was not subject to such restrictions. And it worked at the first attempt: the first prototype runs.
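To illustrate the two sequencing rules, the following minimal C sketch checks a recorded power-up ramp against them, much as one might do with samples exported from an oscilloscope. The sample data, thresholds and timing are invented for the example; this is not the auxiliary logic used on the board.

```c
/* Sketch only: checks a recorded power-up ramp against the two Coldfire
 * sequencing constraints described in the text. Sample data is invented. */
#include <stdio.h>
#include <math.h>

struct sample { double t_ms, v_core, v_io; };   /* 1.8 V rail, 3.3 V rail */

int main(void)
{
    /* hypothetical ramp: core rail comes up first, I/O rail follows */
    struct sample ramp[] = {
        {0.0, 0.0, 0.0}, {1.0, 0.9, 0.0}, {2.0, 1.8, 0.0},
        {3.0, 1.8, 1.2}, {4.0, 1.8, 2.4}, {5.0, 1.8, 3.3},
    };
    int n = sizeof ramp / sizeof ramp[0];
    int core_up = 0, ok = 1;

    for (int i = 0; i < n; i++) {
        if (ramp[i].v_core >= 1.7) core_up = 1;
        /* rule 1: the 3.3 V rail must not rise before the 1.8 V core rail */
        if (!core_up && ramp[i].v_io > 0.1) {
            printf("t=%.1f ms: I/O rail up before core rail\n", ramp[i].t_ms);
            ok = 0;
        }
        /* rule 2: |Vio - Vcore| must never exceed 1.8 V (also at power-down) */
        if (fabs(ramp[i].v_io - ramp[i].v_core) > 1.8) {
            printf("t=%.1f ms: rail difference %.2f V > 1.8 V\n",
                   ramp[i].t_ms, fabs(ramp[i].v_io - ramp[i].v_core));
            ok = 0;
        }
    }
    printf(ok ? "sequencing OK\n" : "sequencing violated\n");
    return 0;
}
```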

Using a regulated laboratory power supply with adjustable current limit, the operating voltage was slowly raised towards the target of 1.5 V. A sharp current rise beyond the expected limit, already at a very low initial voltage, triggered a short troubleshooting session, which revealed the following: instead of settling at the expected value, the current rose very rapidly to 1.5 A (expected: 0.6 to 0.8 A). Despite careful insulation, a short between heat sink and linear regulator was detected and could be eliminated immediately. After that, all measured values were in the expected range – the operating current amounted to 0.6 A.

Apart from a redundant additional SELECT line to the SDRAM controller of the Coldfire CPU and a changed logic level of a control line in the latest data-sheet revision of the Ethernet controller, all remaining tasks could, from the hardware point of view, be carried out without difficulty right from the start.

And all that – it has to be pointed out – with this one and only prototype. There is nothing worse than bringing up new firmware on hardware that exhibits soldering and assembly errors.

Combination of CPU and FPGA boosts hardware and software debugging

Meanwhile, the software department had to face completely different problems. Incomplete or contradictory RESET and initialisation sequences in the manuals and application notes delayed the completion of the firmware. The BDM interface to the Coldfire CPU turned out to be a great help. In principle it represents a simple analyser built into the CPU chip.

It offered extended possibilities for loading programs, examining program flow, debugging on the fly, inspecting and changing register values, and visualising important internal structures.

The combination of CPU and FPGA brings a significant boost to hardware and software debugging by means of arbitrarily reloadable, freely designable analyser modules.

Even highly complex, sometimes extremely fast timing sequences can be brought out to test points and connectors (easily accessible for probes) without influencing critical signal delays or changing the real-time behaviour. The effect of adjusting different setup and hold times in the timing between the Ethernet controller and the Coldfire CPU could therefore be examined by measurement on the target hardware in real time, without the disturbing factors that are otherwise inevitable.
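The underlying bookkeeping is the classic synchronous-timing budget. As an illustration, the following C sketch computes setup and hold slack from assumed figures; none of the numbers are taken from the actual Ethernet controller or Coldfire data sheets.

```c
/* Textbook synchronous-timing check, a sketch only: all values assumed. */
#include <stdio.h>

int main(void)
{
    double t_clk    = 20.0;  /* ns, 50 MHz bus clock (assumed) */
    double t_co     =  6.0;  /* ns, clock-to-output of the driver */
    double t_flight =  1.2;  /* ns, trace propagation delay */
    double t_su     =  4.0;  /* ns, setup time of the receiver */
    double t_h      =  1.0;  /* ns, hold time of the receiver */

    /* data must arrive t_su before the next edge, and stay t_h after it */
    double setup_slack = t_clk - (t_co + t_flight + t_su);
    double hold_slack  = (t_co + t_flight) - t_h;

    printf("setup slack: %.1f ns, hold slack: %.1f ns\n",
           setup_slack, hold_slack);
    return 0;
}
```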

Signal-integrity measurements

A further delicate topic in hardware design is signal-integrity measurement. Signal paths carrying signals with high slew rates have to be designed as transmission lines, i.e. with controlled impedance. This sounds like a textbook note – and it actually is one. For point-to-point connections these requirements can be met in their simplest form by impedance-controlled routing and a series source resistor. And what does the textbook say about bus connections? Well, that is difficult to say – an answer that does not really help to solve the problem. The use of active termination chips, at the end of each bus connection as well as in the middle, represents a good compromise.
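For the point-to-point case, the textbook rule fits in one line: the series resistor tops up the driver's output impedance to the trace impedance. The sketch below uses assumed example values.

```c
/* Sketch of the textbook rule for series source termination on a
 * point-to-point net. The values below are assumed examples. */
#include <stdio.h>

int main(void)
{
    double z0       = 50.0;  /* ohms, controlled trace impedance */
    double r_driver = 17.0;  /* ohms, driver output impedance (assumed) */

    /* Rs + Rdriver should equal Z0: the launched step is halved and
     * restored to full swing by the reflection at the open far end */
    double rs = z0 - r_driver;
    printf("series termination: %.0f ohms\n", rs);
    return 0;
}
```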

Caution when using active termination chips

However, optimum performance of these termination chips can only be achieved with excellently decoupled power supplies. On the CERO module all clocks are point-to-point connections with active termination at their respective end. The system clock is routed via a clock distributor whose outputs are connected to the CPU, the FPGA and the SDRAMs by delay-equalised traces. All SDRAM control lines are terminated, and of course all bus connections are actively terminated as well.

The 6-GHz high-speed oscilloscope WaveMaster 8600A from LeCroy, predestined for jitter and timing analysis, was used for recording and evaluating clock and data signals. The impedance of the signal traces relative to the power planes plays a decisive role in the function of the device.

Each differential pair is referenced to ground planes

Therefore each differential conductor pair is referenced to one or two ground planes; the differential impedance is around 100 Ω. Using the impedance-calculation software from Polar Instruments Ltd., the PCB manufacturer determined the mechanical parameters of the conductors – such as trace width and copper thickness in relation to the layer spacing – and transferred them as constraints into the PowerPCB layout tool. Likewise, all single-ended signals are referenced to the respective ground planes, thus optimising the current return path – a choice verified by the corresponding measurement results.
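For readers without access to a field solver, the following C sketch shows the kind of closed-form approximation for edge-coupled stripline that such tools refine (formulas of this form appear, e.g., in the LVDS design literature). The stack-up geometry in the example is invented and is not the CERO build-up, which was computed with the Polar software.

```c
/* Closed-form approximation for edge-coupled stripline differential
 * impedance. Geometry below is an invented example, lengths in mils. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double er = 4.2;    /* relative permittivity of FR-4 (typical) */
    double w  = 5.0;    /* trace width */
    double t  = 0.7;    /* copper thickness (~0.5 oz) */
    double b  = 14.0;   /* plane-to-plane separation */
    double s  = 6.0;    /* edge-to-edge spacing of the pair */

    /* single-ended stripline impedance */
    double z0 = 60.0 / sqrt(er) * log(1.9 * b / (0.8 * w + t));

    /* coupling correction gives the differential impedance */
    double zdiff = 2.0 * z0 * (1.0 - 0.347 * exp(-2.9 * s / b));

    printf("Z0 = %.1f ohms, Zdiff = %.1f ohms\n", z0, zdiff);
    return 0;   /* prints roughly 50.7 and 91 ohms for these values */
}
```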

EMI measurements

Measurement results for the waveforms of critical control signals, e.g. the clock lines of CPU, FPGA and SDRAM, not only indicated excellent signal integrity but also gave room for cautious optimism regarding the upcoming EMI measurements. Cautious, because the switching edge rates and the dynamic power consumption of highly integrated CPUs and FPGAs in particular strongly affect the level of radiated interference. The excellent stability and functionality of the hardware, however, turned out to be the far more important detail: elaborate testing up to the load limit, carried out over months, did not lead to instabilities or data overruns in a single case.

The FPGA test scenario represented a special challenge. Via software-controlled shifting, more than 90% of the available CLB flip-flops (approx. 9000) were clocked in parallel at 200 MHz by means of a DLL over the global clock network with minimum skew. The measured operating current abruptly rose from 0.9 A to 4.6 A. An appropriate heat sink for this "high-frequency heater" (the FPGA) kept the temperature within the limits specified by the manufacturer.
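The current jump is consistent with a simple dynamic-power estimate, P = N·α·C·V²·f. The C sketch below reproduces the back-of-envelope calculation; the toggle activity α is an assumption, while the voltage, frequency, flip-flop count and current figures come from the text.

```c
/* Back-of-envelope check of the measured current jump, a sketch only. */
#include <stdio.h>

int main(void)
{
    double v     = 1.5;       /* V, FPGA core voltage */
    double f     = 200e6;     /* Hz, shift clock */
    double n     = 9000.0;    /* clocked CLB flip-flops */
    double alpha = 0.5;       /* assumed average toggle activity */
    double di    = 4.6 - 0.9; /* A, measured current jump */

    double p_dyn = di * v;    /* ~5.6 W of dynamic power */
    /* solve P = N * alpha * C * V^2 * f for the effective switched
     * capacitance per clocked node (includes its share of the clock tree) */
    double c_eff = p_dyn / (n * alpha * v * v * f);

    printf("dynamic power: %.1f W\n", p_dyn);
    printf("effective capacitance per clocked node: %.1f pF\n",
           c_eff * 1e12);   /* about 2.7 pF with these assumptions */
    return 0;
}
```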

Independently of that, the acid test for the SDRAM was run over the entire 64-MB memory via the CPU. All test scenarios could be enabled and disabled arbitrarily by software (Figures 4 and 5, page 72).
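As an illustration of such an acid test (not the authors' actual firmware), the following C sketch runs an address-in-address pattern and its inverse over a 64-MB array. On the target, the buffer would be the memory-mapped SDRAM rather than a heap allocation.

```c
/* Generic SDRAM test sketch: an address-in-address pass over the full
 * array catches stuck or shorted address and data lines. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MEM_SIZE (64u * 1024u * 1024u)   /* 64 MB, as on the CERO board */

int main(void)
{
    /* simulated with a heap buffer here so the sketch stays runnable */
    uint32_t *mem = malloc(MEM_SIZE);
    if (!mem) return 1;
    size_t words = MEM_SIZE / sizeof(uint32_t);
    size_t errors = 0;

    /* pass 1: write each word's own address into it, then verify */
    for (size_t i = 0; i < words; i++)
        mem[i] = (uint32_t)i;
    for (size_t i = 0; i < words; i++)
        if (mem[i] != (uint32_t)i) errors++;

    /* pass 2: repeat with the inverted pattern to toggle every data bit */
    for (size_t i = 0; i < words; i++)
        mem[i] = ~(uint32_t)i;
    for (size_t i = 0; i < words; i++)
        if (mem[i] != ~(uint32_t)i) errors++;

    printf("%zu errors\n", errors);
    free(mem);
    return errors != 0;
}
```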

Combination of a high-performance CPU and dynamically reconfigurable FPGAs

Where do all these efforts lead? With all the technical enthusiasm, one should not forget their underlying purpose: the application itself. The combination of a high-performance CPU and dynamically reconfigurable FPGAs, paired with an extremely high I/O bandwidth that is available to the application without any restriction, results in a hardware whose fields of application cover many market segments. There is no focus on one particular operational area – as exemplified in the article on image processing on page 59.

Epilogue

In the end, only practical experience and the market can indicate whether a consistent approach is suitable. The magic word "bandwidth", with an emphasis on the I/O area, increasingly proves to be the linchpin for flexible core hardware with a long life cycle in market segments with rapidly changing product cycles and adaptations. The following example illustrates this: in May 2003 Xilinx presented its own 10-Gbit/s transceiver ICs (RocketPHY) for serial data transmission (see the backplane white paper [1]).

20 LVDS input and 20 LVDS output channels freely available

Depending on the chip, the parallel interface connects to 16 LVDS input and 16 LVDS output channels of the Virtex-II (Pro) product line. Via a dedicated connector, optimised for routing differential signal lines, the CERO board even provides 20 LVDS input and 20 LVDS output channels (including control signals) for free use. Necessary layer changes of LVDS pairs to the inner layers were realised exclusively with blind vias; "back drilling" was out of the question [1].

Although the development of this board is based on design documents issued at the beginning of 2002, the degree of freedom for hardware changes in view of further bandwidth requirements has been impressively confirmed. Positive EMC tests should, apart from the legal aspects, be an important criterion for the chosen hardware and software measures – that is what it boils down to.

Yet passing them alone does not guarantee stable functionality, particularly if the interference energies reach the outside world only in a form weakened by sophisticated case constructions and external shielding measures, just enough to comply with the legally prescribed limits. Inside the assembly, in return, they run riot, often leading to puzzling instabilities in supposedly independent functional areas (Figures 6 and 7).

A new European EMC directive with substantially eased regulations has been drafted and becomes effective at the beginning of 2004: a so-called "EMC law light" [2]. As a consequence, it will be easier for manufacturers to obtain and use the CE mark with regard to EMC.

It therefore becomes increasingly important to counter interference emission and immission in various fields of electronics by constructive measures. The combination of software (CPU) and hardware flexibility (99% of the FPGA resources are freely available), the high bandwidth of the input/output structures, the extensive constructional measures in the PCB design, and the optimisation strategies in the interplay of schematic design and PCB layout (avoidance principle, component selection, stacked power planes, etc.) set new benchmarks for stability and functionality [3].

In the long run, software with few bugs, or error-tolerant software, can only be realised economically on stable hardware platforms.

References:

[1] "10 Gbps NRZ Serial Backplane", Xilinx White Paper (10gbps_nrz_whitepaper.pdf).

[2] "EMV: Europa macht Verdruss", Elektronik 8/2003, p. 7.

[3] "A Digital Designer's Guide to Verifying Signal Integrity", Tektronix.
