Ollama Enables Local Running of Llama 3.2 on AMD GPUs

Iris Coleman
Sep 27, 2024 10:16

Ollama makes it easier to run Meta’s Llama 3.2 model locally on AMD GPUs, offering support for both Linux and Windows systems.





Running large language models (LLMs) locally on AMD systems has become more accessible, thanks to Ollama. This guide focuses on the latest Llama 3.2 model, published by Meta on September 25, 2024. With Llama 3.2, Meta goes small and multimodal, offering 1B, 3B, 11B, and 90B models. Here's how these models run on various AMD hardware configurations, along with a step-by-step installation guide for Ollama on both Linux and Windows operating systems with Radeon GPUs.

Supported AMD GPUs

Ollama supports a range of AMD GPUs, enabling local model execution on both newer and older cards. The list of GPUs supported by Ollama is available here.

Installation and Setup Guide for Ollama

Linux

System requirements:

- Ubuntu 22.04.4
- A supported AMD GPU with the latest AMD ROCm™ software installed

Installation steps:

1. Install ROCm 6.1.3 following the provided instructions.
2. Install Ollama via a single command (see the example below).
3. Download and run the Llama 3.2 model.
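As a reference, a minimal command sequence for steps 2 and 3 might look like the following. The install script is Ollama's documented one-line installer, rocminfo ships with ROCm and is used here only as a quick sanity check, and the llama3.2 tag pulls the default 3B model (the 1B variant is available as llama3.2:1b):

# Optional: confirm ROCm can see the GPU
rocminfo | grep -i gfx

# Install Ollama with its one-line installer
curl -fsSL https://ollama.com/install.sh | sh

# Download and run Llama 3.2 (3B by default; use llama3.2:1b for the 1B model)
ollama run llama3.2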

Windows

System requirements:

- Windows 10 or later
- A supported AMD GPU with the latest driver installed

For Windows installation, download and install Ollama from here. Once installed, open PowerShell and run the command shown below.
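The PowerShell step is the same pull-and-run as on Linux; for example, assuming the default 3B tag:

# Download and start an interactive session with Llama 3.2 (3B)
ollama run llama3.2

# Or explicitly pick the 1B variant
ollama run llama3.2:1b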

You can find the list of all available models from Ollama here.
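To try other models from that list, the standard ollama pull and ollama list subcommands fetch weights without starting a session and show what is already stored locally:

# Fetch a model without running it
ollama pull llama3.2:1b

# Show which models are stored locally
ollama list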

Conclusion

The extensive support for AMD GPUs by Ollama demonstrates the growing accessibility of running LLMs locally. From consumer-grade AMD Radeon™ RX graphics cards to high-end AMD Instinct™ accelerators, users have a wide range of options for running models like Llama 3.2 on their own hardware. This flexible approach to enabling innovative LLMs across AMD's broad hardware portfolio allows for greater experimentation, privacy, and customization in AI applications across various sectors.

Image source: Shutterstock


