FPGA stands for field-programmable gate array. That's quite a mouthful, so let's start with a basic definition. Essentially, an FPGA is a hardware circuit that a user can program to carry out one or more logical operations. More specifically, FPGAs are integrated circuits, or ICs, whose silicon is organized as an array of programmable logic blocks, memory, and other elements. That array of configurable circuits is where the "array" in the name comes from.

With a standard chip, such as the Intel Curie module in an Arduino board or the CPU in your laptop, the hardware is fully baked. Its circuitry can't be changed; you get what you get. A user can write software that loads onto the chip and executes functions, and that software can later be replaced or deleted, but the hardware itself remains unchanged.

With an FPGA, the hardware function isn't fixed when the chip leaves the factory. The user programs the hardware circuit or circuits themselves. The configuration can be as simple as a single logic gate (an AND or OR function), or it can involve one or more complex functions, including functions that, together, act as a comprehensive multi-core processor.
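To make "programming hardware" a little more concrete, here is a minimal Python sketch of the idea behind the lookup table, or LUT, the basic logic building block in most FPGAs. This is an illustration of ours, not code from any FPGA toolchain: the point is that the configuration data is essentially a truth table, and loading a different table makes the same block behave as a different gate.

```python
# Minimal illustration (assumption: a 2-input lookup table, the basic
# building block of FPGA logic). The "configuration" is just the truth
# table; loading a different table changes the block's behavior.

class LUT2:
    def __init__(self, truth_table):
        # truth_table[i] is the output for inputs (a, b), where i = (a << 1) | b
        assert len(truth_table) == 4
        self.truth_table = truth_table

    def evaluate(self, a, b):
        return self.truth_table[(a << 1) | b]

# "Program" one block as an AND gate and another as an OR gate.
and_gate = LUT2([0, 0, 0, 1])
or_gate = LUT2([0, 1, 1, 1])

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={and_gate.evaluate(a, b)}  OR={or_gate.evaluate(a, b)}")
```

A real FPGA contains thousands or millions of such blocks, plus memory and a programmable interconnect to wire them together, but the principle is the same: the configuration data, not the silicon, defines the logic.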

Why Use an FPGA?

You might use an FPGA when you need to optimize a chip for a particular workload, or when you are likely to need to make changes at the chip level later on. Uses for FPGAs cover a wide range of areas: video and imaging equipment, circuitry for computing, automotive, aerospace, and military applications, specialized processing electronics, and more. FPGAs are particularly useful for prototyping application-specific integrated circuits (ASICs) or processors. An FPGA can be reprogrammed until the ASIC or processor design is final and bug-free, at which point manufacturing of the actual ASIC begins. Intel itself uses FPGAs to prototype new chips.

In fact, Intel recently acquired a company called eASIC as a way to accelerate its design and prototyping process. eASIC produces something called a "structured ASIC," a device that sits between an ASIC and an FPGA. As this AnandTech article explains, with a structured ASIC:

“Engineers can create a design using an FPGA, then rather than spending time optimizing the circuit layout, they bake the fixed layout into a single design mask for manufacturing. By being a fixed design like an ASIC, it is faster than a variable design, but without the die area benefits of ASIC-like power savings. However, it was designed in FPGA time, rather than ASIC time (up to six months saved), and saves power through its fixed design.”

FPGAs for the Rest of Us

So what's a real-world example of how FPGAs are used? In the eBook FPGAs for Dummies, co-authors Andrew Moore and Ron Wilson give a simple example involving a rear-view camera for a car. In the example, a camera might take 250 milliseconds to capture and display an image to the driver. If regulations changed to require that this window be only 100 milliseconds, a microprocessor-based camera could force costly and near-impossible hardware alterations. With an FPGA, though, the new requirement could be met without new hardware or new processors: cars in production, unsold cars, and cars already sold could all be updated with a simple reprogramming of the FPGA.

FPGAs are also useful to enterprise businesses because they can be dynamically reprogrammed with a data path that exactly matches a specific workload, such as data analytics, image inference, encryption, or compression. Running a workload on an optimized FPGA is also more power-efficient than running the equivalent workload on a CPU. That combination of versatility, efficiency, and performance adds up to an appealing package for modern businesses looking to process more data at a lower total cost of ownership (TCO).

The New Frontier for FPGAs: Artificial Intelligence

Today, FPGAs are gaining prominence in another field: deep neural networks (DNNs) that are used for artificial intelligence (AI). Running DNN inference models takes significant processing power. Graphics processing units (GPUs) are often used to accelerate inference processing, but in some cases, high-performance FPGAs might actually outperform GPUs in analyzing large amounts of data for machine learning. (This article describes one example of this, in which an Intel Stratix 10 FPGA outperformed a GPU in testing.)

Microsoft is already putting Intel FPGA versatility to use for accelerating AI. Microsoft’s Project Brainwave provides customers with access to Intel Stratix FPGAs through Microsoft Azure cloud services. The cloud servers outfitted with these FPGAs have been configured specifically for running deep learning models. The Microsoft service lets developers harness the power of FPGA chips without purchasing and configuring specialized hardware and software. Instead, developers can work with common open-source tools, such as the Microsoft Cognitive Toolkit or TensorFlow AI development framework.

Learn More About FPGAs and Other Tech Topics

From data analytics to encryption to chip development to AI inference models, FPGAs offer a level of performance and versatility that appeals to a wide range of users. If you think FPGAs might have a home in your business, check out the Intel Acceleration Hub to learn more. And to keep up on the latest info on FPGAs and other trending tech topics, follow us on our blog, LinkedIn, and Twitter.
