Driving both technology and cost/performance
Q&A with Chris Fanning and David Lee Rutledge, Lattice Semiconductor Corporation
Two senior execs at Lattice give us their views on the Lattice strategy, with an in-depth look at non-volatile FPGA technology, in this exclusive interview and short article.
PL: Lattice is making pushes in a lot of areas. Why such diversity in the offering, and how does it tie together from a strategy standpoint?
DLR: When Lattice entered the FPGA market, we chose to focus on two key differentiators – Non-Volatility (NV) and Low-Cost – and set about developing technology and products in each area.
NV is the technology of choice for programmable logic, providing system-level benefits such as single-chip solutions, instant-on capabilities, and in-system programmability. We knew that we could leverage our NV expertise to deliver these benefits to FPGA users.
Lattice saw an opportunity to fill a void in the market by developing a Low-Cost, Feature-Rich FPGA and CPLD product line. Our strategy has been to develop products that work very well for most applications, allowing us to substantially reduce cost with little impact on performance.
So, while our products may seem diverse, they have been developed based on a consistent strategy to deliver value-added products, with NV and Low-Cost, Feature-Rich as our key differentiators.
PL: What's the biggest technology trend driving your FPGA implementations today, and how do you see that affecting what you can do in the near future?
DLR: Achieving higher levels of functional integration and performance while maintaining acceptable power consumption is a major challenge. Historically, the scaling of supply voltages and transistor feature sizes has provided great benefits in functional integration levels and performance, while lowering power consumption. In the future, these techniques will still provide substantially higher levels of functional density and performance, but at the expense of increased levels of static power consumption.
At 45 nm there will be a more direct trade-off between speed and power. For example, a high-density FPGA (~500K LUT) built on 45 nm technology could easily consume over 10 W of standby power, with no clocks running, at the commercial +85 °C junction temperature limit.
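As a rough sketch of that arithmetic (the per-LUT leakage figure below is a hypothetical assumption for illustration, not a Lattice process number):

```python
# Rough standby-power estimate for a hypothetical 45 nm high-density FPGA.
# The per-LUT leakage current is an illustrative assumption only; real
# leakage varies strongly with process corner, voltage, and temperature.

NUM_LUTS = 500_000           # ~500K-LUT high-density device
VDD = 1.0                    # assumed core supply (V)
LEAKAGE_PER_LUT_85C = 20e-6  # assumed leakage per LUT at +85 °C junction (A)

standby_power = NUM_LUTS * VDD * LEAKAGE_PER_LUT_85C
print(f"Estimated standby power: {standby_power:.1f} W")  # → 10.0 W
```

With those assumed numbers, leakage alone reaches the 10 W figure quoted above, with no switching activity at all.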
Innovation in power management to optimize this trade-off is a high priority. Innovation is required concurrently across multiple disciplines, ranging from optimizing the basic process technology through the development of new "power-optimized" product architectures and also through the development of "power-aware" design tools.
Also, there is increased use of embedded SERDES channels as high-bandwidth chip-to-chip communication links. An FPGA with 40 channels of soon-to-be-available 10 G SERDES will have an incredible processing capacity of 400 Gbps. This trend will accelerate the development of radically new FPGA-based High-Performance Computing (HPC) platforms.
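The aggregate figure is simply channels times per-channel line rate, which a quick calculation confirms:

```python
# Aggregate SERDES bandwidth: channel count x per-channel line rate.
channels = 40
rate_gbps = 10  # 10 G SERDES line rate per channel

aggregate_gbps = channels * rate_gbps
print(f"{aggregate_gbps} Gbps aggregate")  # → 400 Gbps aggregate
```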
PL: On the tools side, I see some additions of synthesis and simulation capability, and I'm guessing you're expanding further. What's the latest technology you are working on, and what's the impact?
CF: Lattice has made a very significant investment in its design tool, ispLEVER. Two initiatives we see having great impact are physical synthesis and incremental design.
Physical synthesis should enable more optimal results through improved device performance and help accelerate the debug process. Incremental design is a collection of technologies that ultimately provides more rapid turnaround time for achieving timing closure. Both enable customers to meet timing requirements more easily and with fewer design iterations, even as FPGA devices become larger and more complex.
Other Lattice-driven design tool advancements include:
- Power Calculator allows specifying parameters such as voltage, temperature, process variations, airflow, heat sink, resource utilization, activity, and frequency, and then calculates static (DC) and dynamic (AC) power consumption.
- Reveal uses a signal-centric model for embedded logic debug. Signals of interest are user-defined, and the tool adds instrumentation, along with the proper connections, so that the required in-system analysis can be performed. Users can specify complex, multi-event triggering sequences that make system-level design debug smoother and faster.
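The static/dynamic split that a tool like Power Calculator reports follows the standard CMOS power model; a minimal sketch, using illustrative component values rather than real tool outputs:

```python
# Standard CMOS power model: total = dynamic (AC) + static (DC).
# All numeric inputs below are illustrative assumptions, not device data.

def dynamic_power(c_eff_farads, vdd, freq_hz, activity):
    """Dynamic (AC) power: activity * C_eff * Vdd^2 * f."""
    return activity * c_eff_farads * vdd ** 2 * freq_hz

def static_power(vdd, leakage_amps):
    """Static (DC) power: Vdd * I_leak."""
    return vdd * leakage_amps

p_ac = dynamic_power(c_eff_farads=2e-9, vdd=1.2, freq_hz=100e6, activity=0.25)
p_dc = static_power(vdd=1.2, leakage_amps=0.05)
print(f"dynamic {p_ac:.3f} W, static {p_dc:.3f} W, total {p_ac + p_dc:.3f} W")
```

Note how the dynamic term scales with frequency and activity while the static term does not, which is exactly the trade-off discussed earlier for 45 nm devices.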
PL: What about embedding operating systems on an FPGA core? How is this changing the way people design?
CF: uClinux support expands our commitment to the open source model, and it consistently appears at the top of designer surveys we've seen as the preferred RTOS for embedded design. uClinux and the LatticeMico32 core allow designers to implement control systems in a design flow that builds on Lattice's open source, embedded solutions approach.
The adoption of embedded products has increased among FPGA designers in the last two to three years as the processors provided by FPGA vendors have dramatically improved in functionality, increasing designer productivity and lowering design risk. Embedded processors increasingly include a robust assortment of middleware such as DDR, DDR2, SDRAM memory controllers, Tri-Speed Ethernet Media Access Controller, and PCI 33 MHz Target, which automatically integrate into Lattice's Mico System Builder. Middleware has enabled designers to quickly and confidently configure a microprocessor in their design at very low cost, or no cost at all.
PL: What should designers be doing differently now to get an advantage with their next FPGA-based design?
DLR: We think in terms of two fundamental types of FPGA applications: control-oriented applications and data path applications. Control-oriented applications really have not evolved much over the years; they implement numerous small finite state machines and use many parallel I/Os for system monitoring and control. There are not too many new issues to deal with in this area.
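A control-oriented design of the kind described here boils down to small state machines; a minimal software model of one such machine (the states and events are purely illustrative) might look like:

```python
# Minimal model of a small control FSM of the kind that dominates
# control-oriented FPGA designs. States and events are illustrative.

TRANSITIONS = {
    ("IDLE", "start"): "RUN",
    ("RUN", "error"): "FAULT",
    ("RUN", "done"): "IDLE",
    ("FAULT", "reset"): "IDLE",
}

def step(state, event):
    """Advance the FSM; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "IDLE"
for event in ["start", "error", "reset"]:
    state = step(state, event)
print(state)  # → IDLE
```

In hardware each such machine is a handful of flip-flops and LUTs, which is why dozens of them fit comfortably even in small devices.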
Data path applications, however, have continued to grow and evolve. It is more important than ever for a system designer to consider how to best architect systems to effectively leverage the new high-speed SERDES-based capabilities of FPGAs. This increased data processing bandwidth will allow for radically new system-level architectures that can provide dramatically higher levels of cost/performance.
CF: And there's increasingly sophisticated functionality in ispLEVER to help the architect. Also, FPGA IP is particularly important, offering customers time-to-market and risk management advantages by providing proven, hardware-validated solutions that are typically parameterizable. These pre-engineered IP cores are cost-effective, and can save countless hours of development and validation effort.