CombTransformers: Statement-Wise Transformers for Statement-Wise Representations

Authors: Bertolotti, Francesco and Cazzola, Walter

Abstract:

This study presents a novel category of Transformer architectures known as comb transformers, which effectively reduce the space complexity of the self-attention layer from a quadratic to a subquadratic level. This is achieved by processing sequence segments independently and incorporating $\mathcal{X}$-word embeddings to merge cross-segment information. The reduction in attention memory requirements enables the deployment of deeper architectures, potentially leading to more competitive outcomes. Furthermore, we design an abstract syntax tree (AST)-based code representation to effectively exploit comb transformer properties. To explore the potential of our approach, we develop nine specific instances based on three popular architectural concepts: funnel, hourglass, and encoder-decoder. These architectures are subsequently trained on three code-related tasks: method name generation, code search, and code summarization. These tasks encompass a range of capabilities: short/long sequence generation and classification. In addition to the proposed comb transformers, we also evaluate several baseline architectures for comparative analysis. Our findings demonstrate that the comb transformers match the performance of the baselines and frequently perform better.
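The paper itself specifies the comb transformer in full; purely as a loose illustration of the segment-wise idea it builds on, the sketch below restricts self-attention to fixed-length segments, which shrinks the attention map from O(n²) to O(n·s) memory for segment length s. Everything here is an assumption for illustration: the class name, the `segment_len` parameter, and the use of `torch.nn.MultiheadAttention` are not from the paper, and the cross-segment merging via $\mathcal{X}$-word embeddings that the abstract describes is omitted.

```python
import torch
import torch.nn as nn

class SegmentWiseSelfAttention(nn.Module):
    """Illustrative sketch (not the authors' comb transformer):
    attention is computed within fixed-size segments only, so the
    attention map costs O(n * s) memory instead of O(n^2)."""

    def __init__(self, dim: int, num_heads: int, segment_len: int):
        super().__init__()
        self.segment_len = segment_len
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); seq_len assumed divisible by segment_len.
        b, n, d = x.shape
        s = self.segment_len
        # Fold segments into the batch dimension so each segment
        # attends only to its own tokens.
        segs = x.reshape(b * (n // s), s, d)
        out, _ = self.attn(segs, segs, segs)
        return out.reshape(b, n, d)

# Usage sketch: 2 sequences of 16 tokens, 32-dim embeddings, segments of 4.
layer = SegmentWiseSelfAttention(dim=32, num_heads=4, segment_len=4)
y = layer(torch.randn(2, 16, 32))  # -> (2, 16, 32)
```

A real comb transformer would additionally exchange information across segments (via the $\mathcal{X}$-word embeddings mentioned above); this sketch shows only why the per-segment restriction makes the attention memory subquadratic.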

Link: Read Paper

Labels: general coding task, code model, code model training, source code model