4.2 Article

Unleashing the Potential of PIM: Accelerating Large Batched Inference of Transformer-Based Generative Models

Journal

IEEE COMPUTER ARCHITECTURE LETTERS
Volume 22, Issue 2, Pages 113-116

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/LCA.2023.3305386

Keywords

Transformer-based generative model; processing-in-memory; attention

Abstract

Transformer-based generative models use attention to summarize input sequences and to generate output sequences. Conventional computing platforms, however, handle this attention inefficiently. To address this, we propose AttAcc, which exploits the fact that KV matrices are written once during summarization and reused many times during generation, reducing external bandwidth and energy consumption through processing-in-memory.
Transformer-based generative models, such as GPT, summarize an input sequence by generating key/value (KV) matrices through attention and then generate the corresponding output sequence by reusing these matrices once per output token. Both input and output sequences tend to grow longer, which improves context understanding and conversation quality, and inference requests are typically batched to improve serving throughput. These trends allow the models' weights to be reused effectively, increasing the relative importance of sequence generation, especially of processing the KV matrices through attention. We identify that conventional computing platforms (e.g., GPUs) handle this attention part of inference inefficiently: each request generates different KV matrices, so the computation has a low operations-per-byte ratio regardless of the batch size, and the aggregate size of the KV matrices can even surpass that of the entire model weights. This motivates us to propose AttAcc, which exploits the fact that the KV matrices are written once during summarization but read many times (proportional to the output sequence length), each time multiplied by the embedding vector corresponding to an output token. The volume of data entering and leaving AttAcc can be orders of magnitude smaller than the volume that must be read internally for attention. We therefore design AttAcc with multiple processing-in-memory devices, each of which multiplies the embedding vector with the portion of the KV matrices stored in that device, saving external (inter-device) bandwidth and energy consumption.
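To make the data-movement argument concrete, below is a minimal, illustrative NumPy sketch (not the authors' implementation) of the per-token attention step against a KV cache, followed by a hypothetical AttAcc-style partitioning in which each PIM device multiplies the query/embedding vector only with its local slice of the KV matrices; the head dimension, sequence length, and device count are assumed values chosen for illustration.

import numpy as np

d = 128          # head dimension (assumed for illustration)
L = 2048         # summarized sequence length (assumed)
n_devices = 4    # hypothetical number of PIM devices

# KV matrices are written once during summarization ...
K = np.random.randn(L, d).astype(np.float32)
V = np.random.randn(L, d).astype(np.float32)

def attend(q, K, V):
    """One generation step: the per-token vector q is multiplied against the
    whole KV cache, a vector-matrix multiply with a low operations-per-byte
    ratio because K and V are read in full for every generated token."""
    scores = K @ q / np.sqrt(d)            # (L,)
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return p @ V                           # (d,)

def attend_partitioned(q, K, V, n_devices):
    """AttAcc-style idea: split K/V row-wise across PIM devices so each device
    multiplies q only with its local slice; only q (d values) enters and a
    small partial result leaves each device, instead of streaming the full KV
    cache over the external interface. Softmax normalization is done globally
    here for simplicity."""
    K_parts = np.array_split(K, n_devices)
    V_parts = np.array_split(V, n_devices)
    local_scores = [Kp @ q / np.sqrt(d) for Kp in K_parts]   # per-device scores
    scores = np.concatenate(local_scores)
    p = np.exp(scores - scores.max())
    p /= p.sum()
    split_points = np.cumsum([len(Kp) for Kp in K_parts])[:-1]
    p_parts = np.split(p, split_points)
    partials = [pp @ Vp for pp, Vp in zip(p_parts, V_parts)]  # per-device reductions
    return np.sum(partials, axis=0)

q = np.random.randn(d).astype(np.float32)
assert np.allclose(attend(q, K, V), attend_partitioned(q, K, V, n_devices), atol=1e-4)

The sketch only illustrates the arithmetic structure; the actual AttAcc hardware organization, dataflow, and softmax handling are described in the paper itself.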
