Research

Approximate Computing

The high-performance computing systems available today fall far short of the performance and energy-efficiency targets needed by future computing systems. With processor clock frequencies having leveled off over the past several years, the only way to achieve the expected levels of performance within the same physical size, and at the same or lower energy budget, is to have more processors computing in parallel. Increasing the concurrency among processors, however, also increases interprocessor communication, which then becomes the critical bottleneck. Approximate computing addresses this tension by relaxing the requirement that every result be exact, trading small, controlled losses in accuracy for gains in performance and energy efficiency.

Stochastic Computing

This work is investigating a novel approach to computation called stochastic logic. The conventional approach to designing processors has been rigidly hierarchical, with sharp boundaries between levels of abstraction: from the logic level up, the Boolean functionality of the system is fixed and deterministic. Stochastic computing, in contrast, operates on probabilities. Values are encoded as the probability of observing a 1 in a random bit stream, and conventional Boolean logic gates, which remain the underlying substrate, transform input probabilities into output probabilities.
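
To make the encoding concrete, the sketch below simulates stochastic arithmetic in software. It is a minimal illustration assuming bit streams generated with NumPy; the stream length, helper names, and input values are hypothetical rather than taken from any project code. An AND gate applied to two independent streams multiplies their probabilities, and a multiplexer whose select stream has probability 0.5 computes a scaled sum.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    N = 100_000  # stream length; longer streams give more accurate results

    def encode(p, n=N):
        # Encode a value p in [0, 1] as a random bit stream whose bits are 1
        # with probability p.
        return rng.random(n) < p

    def decode(stream):
        # Recover the encoded value as the fraction of 1s in the stream.
        return stream.mean()

    a = encode(0.8)
    b = encode(0.5)

    # An AND gate on two independent streams multiplies their probabilities.
    product = decode(a & b)                      # close to 0.8 * 0.5 = 0.40

    # A 2-to-1 multiplexer with a select stream of probability 0.5 computes
    # the scaled sum (p_a + p_b) / 2.
    select = encode(0.5)
    scaled_sum = decode(np.where(select, a, b))  # close to (0.8 + 0.5) / 2 = 0.65

    print(f"product ~ {product:.3f}, scaled sum ~ {scaled_sum:.3f}")

Because each gate processes one bit per clock cycle, accuracy can be traded directly against latency by lengthening or shortening the bit streams.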

High-Performance and Parallel Computing with GPUs

Using Graphics Processing Units (GPUs) for general-purpose computing has made high-performance parallel computing cost-effective for a wide variety of applications. However, programming these highly parallel processors remains something of an art.

We have several projects investigating how GPUs can be applied to a variety of application domains.

Storage Systems

Center for Research in Intelligent Storage Systems


Processor Design with Emerging Technologies

A variety of new technologies are being proposed for future processors, such as spintronic components, carbon nanotubes, and graphene. We are investigating how processors can be designed to make the most efficient use of these new technologies.