February 22, 2024

Manage and Run Gemma LLM with Keras Locally

Fahd Mirza

This video shows how to install, manage, and run the Gemma LLM locally with Keras. Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.



Code:

# Upgrade Keras and install the Kaggle CLI, which is used to download the model weights.
!pip install keras --upgrade

!pip install kaggle

# Upload kaggle.json (the API token from your Kaggle account settings).
from google.colab import files

uploaded = files.upload()

for fn in uploaded.keys():
  print('User uploaded file "{name}" with length {length} bytes'.format(
      name=fn, length=len(uploaded[fn])))

# Then move kaggle.json into the folder where the Kaggle API expects to find it.
!mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
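If you'd rather not upload a file each session, the Kaggle API also reads credentials from environment variables. This variant isn't in the video, and the placeholder values are assumptions you need to replace with your own:

import os

# Alternative: supply Kaggle credentials directly (placeholders below;
# use your own Kaggle username and API key).
os.environ["KAGGLE_USERNAME"] = "your_kaggle_username"
os.environ["KAGGLE_KEY"] = "your_kaggle_api_key"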

# Install KerasNLP, which provides the Gemma model presets.
!pip install keras_nlp --upgrade

import os

# Select the JAX backend; this must be set before Keras is imported.
os.environ["KERAS_BACKEND"] = "jax"

import keras_nlp
import keras

# Run in mixed float16 precision to cut memory use and speed up inference.
keras.mixed_precision.set_global_policy("mixed_float16")

# Load the tokenizer/preprocessor for the 2B-parameter English Gemma preset.
preprocessor = keras_nlp.models.GemmaPreprocessor.from_preset(
    "gemma_2b_en"
)
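The preprocessor isn't strictly required for generation (the CausalLM preset below bundles its own), but if you're curious what it produces, you can call it on a raw string. This quick check is my addition, not part of the video, and assumes the standard KerasNLP preprocessor output of token IDs plus a padding mask:

# Illustrative: tokenize a raw string into the tensors the model consumes.
batch = preprocessor("which one came first, egg or chicken?")
print(batch["token_ids"])     # integer token IDs, padded to the sequence length
print(batch["padding_mask"])  # True for real tokens, False for padding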

# Load the Gemma 2B causal LM; this downloads the weights from Kaggle on first run.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
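To confirm the weights loaded, you can print the model summary with the standard Keras method; this check is a convenience I've added, not a step from the video:

# Inspect the layer structure and parameter counts of the loaded model.
gemma_lm.summary()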

# Generate up to 130 tokens of text for the prompt.
gemma_lm.generate("which one came first, egg or chicken?", max_length=130)
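generate() uses the preset's default sampler. As an extra not covered in the video, KerasNLP lets you swap the sampling strategy via compile() and pass a batch of prompts at once; the top-k value here is just an illustrative choice:

# Sketch: switch to top-k sampling for more varied output, then generate
# completions for two prompts in a single batched call.
gemma_lm.compile(sampler=keras_nlp.samplers.TopKSampler(k=5))
gemma_lm.generate(
    ["which one came first, egg or chicken?", "explain JAX in one sentence"],
    max_length=130,
)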

 
