Inside CodeT5+: Salesforce's State-Of-The-Art Coding Language Model
A model family that combines code generation with strong code understanding capabilities.
Large language models (LLMs) pretrained on extensive source code, referred to as “Code LLMs,” have revolutionized code intelligence in recent years. These AI-powered generative tools have empowered software developers, enabling them to create and maintain code with remarkable ease and significantly boosting their productivity. Despite these achievements, existing code LLMs face two critical limitations in their design.

First, many models adopt a specific architecture that restricts their adaptability to downstream tasks. For instance, decoder-only models such as GPT-based LLMs struggle with understanding tasks like defect detection and code retrieval. Closing that gap often requires substantial architectural modifications or additional fine-tuning to align the model with a specific application.

Second, current models rely on a limited set of pretraining objectives that may not be optimal for certain downstream tasks. For example, T5-based models trained with a span denoising objective are ill-suited for auto-regressive generation tasks like code completion. This discrepancy between pretraining and inference objectives results in significant performance degradation.
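To make this pretraining-versus-inference mismatch concrete, the toy example below contrasts what a span-denoising model is trained to predict with what an auto-regressive completion model must produce at inference time. The sentinel tokens and masked spans shown are purely illustrative, not the exact corruption scheme used by CodeT5+ or any specific model.

```python
# Illustrative toy example of the objective mismatch (not any model's exact scheme).
# Original snippet being modeled: "def add(a, b): return a + b"

# Span denoising (T5-style): random spans are replaced with sentinel tokens, and
# the model reconstructs only the masked spans, using context on both sides.
denoising_input = "def add(a, b):\n    return <extra_id_0> + <extra_id_1>"
denoising_target = "<extra_id_0> a <extra_id_1> b"

# Auto-regressive completion: the model sees only a left-to-right prefix and must
# continue it token by token, with no sentinels and no right-hand context.
completion_prompt = "def add(a, b):\n    return "
completion_continuation = "a + b"

print("Denoising :", repr(denoising_input), "->", repr(denoising_target))
print("Completion:", repr(completion_prompt), "->", repr(completion_continuation))
```

A model optimized only for the first format never learns to produce long, uninterrupted continuations from a bare prefix, which is exactly what code completion demands.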
To address these challenges, Salesforce Research recently introduced CodeT5+, a family of code LLMs designed for far greater flexibility in both model architecture and learning objectives. CodeT5+ models adapt readily to both code generation and understanding tasks while consistently matching or outperforming other code LLMs.
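As a quick way to try one of the smaller checkpoints, the sketch below loads a CodeT5+ model through the Hugging Face transformers library and completes a short prompt. The checkpoint name `Salesforce/codet5p-770m` and the generation settings are assumptions based on the checkpoints Salesforce has published on the Hugging Face Hub; the larger 2B/6B/16B variants use a different loading path.

```python
# Minimal sketch: trying a CodeT5+ checkpoint with Hugging Face transformers.
# Assumes the "Salesforce/codet5p-770m" checkpoint published on the Hub.
from transformers import AutoTokenizer, T5ForConditionalGeneration

checkpoint = "Salesforce/codet5p-770m"
device = "cpu"  # switch to "cuda" if a GPU is available

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)

# Ask the model to complete a simple function signature.
prompt = "def print_hello_world():"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_length=32)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same seq2seq interface supports understanding-style tasks (for example, scoring or retrieving code) as well as generation, which is the flexibility the CodeT5+ design aims for.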