With this new initiative, Intel hopes that some of the world's largest data center operators will source all their compute needs from Intel, rather than buying from rivals like AMD or ARM-based chip vendors, or building their own chips.
The new product will give large customers, such as Facebook and eBay, the ability to run code on both a reconfigurable chip and a standard x86 processor. This could be useful for speeding up common tasks such as compression and decompression, as well as specific search or machine learning algorithms. In the most extreme case, the embedded FPGA's logic can be reconfigured to match the requirements of the specific algorithm running on a machine.
According to Intel, pairing a Xeon with an FPGA can deliver a 10X speedup over traditional chips for certain tasks, and linking the FPGA to the processor via the Quick Path Interconnect (QPI) yields a further 2X speedup on top of that.
Read the original article at The Register.