Blog · May 1, 2024

Fine-Tune Any 7B LLM on a Single 8GB GPU Locally

Fahd Mirza

This video is a hands-on, step-by-step tutorial showing how to fine-tune any 7B model locally on a single 8GB GPU using XTuner.


Code:

# Create and activate an isolated Python 3.10 environment
conda create --name xtuner-env python=3.10 -y

conda activate xtuner-env


# Install XTuner with DeepSpeed support
pip install -U 'xtuner[deepspeed]'


# List the built-in fine-tuning configs
xtuner list-cfg


# Fine-tune InternLM2-Chat-7B with QLoRA on the OpenAssistant (oasst1) dataset,
# using DeepSpeed ZeRO-2 to reduce GPU memory usage
xtuner train internlm2_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
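If none of the built-in configs match your dataset or base model, XTuner can copy a config locally so you can edit it before training. A minimal sketch, assuming the copy lands in the current directory (the `_copy.py` filename is what XTuner typically emits, but verify it against your own output):

```shell
# Copy the built-in config into the current directory for editing
xtuner copy-cfg internlm2_chat_7b_qlora_oasst1_e3 .

# Edit the copied file (typically internlm2_chat_7b_qlora_oasst1_e3_copy.py)
# to point at your own dataset or base model, then train from it:
xtuner train ./internlm2_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2
```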


# Convert the saved .pth checkpoint to a HuggingFace-format adapter
xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}


# Chat with the fine-tuned model by loading the adapter on top of the base LLM
xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter ${NAME_OR_PATH_TO_ADAPTER} [optional arguments]
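Filling in the placeholders, here is a hypothetical end-to-end run of the convert and chat steps, assuming training wrote its checkpoints under `./work_dirs` (the epoch number, adapter directory, and paths below are illustrative; check your own training output):

```shell
# Convert the trained .pth checkpoint to a HuggingFace-format LoRA adapter
xtuner convert pth_to_hf internlm2_chat_7b_qlora_oasst1_e3 \
    ./work_dirs/internlm2_chat_7b_qlora_oasst1_e3/epoch_3.pth \
    ./hf_adapter

# Chat with the base model plus the fine-tuned adapter
xtuner chat internlm/internlm2-chat-7b \
    --adapter ./hf_adapter \
    --prompt-template internlm2_chat
```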

