Google’s Bard and OpenAI’s ChatGPT are both claimed to be proficient at writing code. Meta has now entered this space by launching Code Llama, a large language model (LLM) that generates and discusses code from text prompts.
“Code Llama is state-of-the-art for publicly available LLMs on coding tasks. It has the potential to make workflows faster and more efficient for developers and lower the barrier to entry for people who are learning to code,” the company said.
“Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software,” it added.
How does Code Llama work?
Meta explained that Code Llama is a code-specialised version of Llama 2, created by further training Llama 2 on code-specific datasets. This additional training gave the model enhanced coding capabilities, and it can also be used for code completion and debugging.
It supports many of the most popular programming languages used today, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, Bash and more. Meta is releasing Code Llama for both research and commercial use under the same community licence as Llama 2.
“Code Llama is designed to support software engineers in all sectors — including research, industry, open source projects, NGOs and businesses,” the company said.
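To illustrate the kind of code-completion workflow Meta describes, here is a minimal sketch that prompts a Code Llama checkpoint through the Hugging Face transformers library. The checkpoint name codellama/CodeLlama-7b-hf, the prompt and the generation settings are assumptions made for this example, not details from Meta’s announcement.

```python
# Minimal sketch: code completion with a Code Llama checkpoint via Hugging Face
# transformers. The checkpoint name, prompt and generation settings below are
# illustrative assumptions, not details taken from Meta's announcement.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-7b-hf"  # assumed 7B base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Give the model the start of a Python function and let it complete the body.
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the smallest model can be served on a single GPU, a sketch like this is feasible on relatively modest hardware.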
Code Llama in three sizes
Meta is releasing Code Llama in three sizes: 7B, 13B and 34B parameters. While each model is trained with 500B tokens of code and code-related data, they address different serving and latency requirements.
The lightweight 7B model can be served on a single GPU and is fast enough for tasks that require low latency. The 13B model is better suited to tasks such as real-time code completion, while the 34B model returns the best results and provides the most capable coding assistance.