In a move reminiscent of the launch of Copilot Chat by Microsoft-owned GitHub in July, Meta has unveiled Code Llama, a new large language model tailored specifically for coding tasks. Built on top of the Llama 2 model, Code Llama has been trained on over 500 billion tokens of code and code-related data. It is designed to assist software developers by generating code and providing detailed responses to various prompts, handling major languages such as Python, JavaScript, and C++, and supporting tasks ranging from code completion to debugging.
Meta offers Code Llama in three sizes: 7B, 13B, and 34B parameters. The 7B and 13B models are optimized for low-latency tasks such as real-time code completion, while the 34B version delivers superior performance but requires more computational power. Meta has also rolled out specialized versions: a Python-specific variant tuned for Python coding tasks, and an Instruct model fine-tuned to provide safer and more helpful answers to natural language queries.
With Code Llama, Meta seeks to enhance developer productivity and democratize coding for novices. They recognize the potential of this AI, but they also champion its careful deployment, emphasizing an open-source approach to bolster safety. As Large Language Models (LLMs) gain traction in aiding developers, Code Llama is poised to serve various sectors, from research and industry to open-source initiatives.
Meta has released Code Llama under a license permitting both non-commercial research and commercial use. The company hopes this move will spark innovation in AI coding assistants and is entrusting the broader community to assess the model's strengths and vulnerabilities. Recent months have shown this strategy paying off, with many developers gravitating toward open-source models for their applications. Concurrently, platforms like OpenAI's ChatGPT have seen a dip in usage of their web interfaces, a trend influenced by multiple factors. Regardless, Code Llama could accelerate automated development across application types, or at the very least, notably reduce costs for businesses launching or maintaining products in the market.