Blog · July 3, 2024

Install OpenLIT Locally - Best Free Tool for LLM Monitoring and Tracing

Fahd Mirza

This video walks through installing OpenLIT locally and integrating it with local Ollama models. OpenLIT is an OpenTelemetry-native tool designed to help developers gain insight into the performance of their LLM applications in production. It automatically collects LLM input and output metadata, and monitors GPU performance for self-hosted LLMs.
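To make the "collects LLM input and output metadata" part concrete, here is a rough, self-contained sketch of what an OpenTelemetry-style tracer records around a single LLM call. The attribute names follow the OpenTelemetry GenAI conventions in spirit, but they are illustrative here, not OpenLIT's exact schema, and `traced_generate` is a hypothetical helper, not part of the OpenLIT API:

```python
import json
import time

def traced_generate(model, prompt, generate_fn):
    # Wrap an LLM call and record span-like metadata around it.
    # Attribute names below are illustrative, not OpenLIT's exact schema.
    start = time.time()
    output = generate_fn(model, prompt)
    span = {
        "gen_ai.system": "ollama",
        "gen_ai.request.model": model,
        "gen_ai.prompt": prompt,
        "gen_ai.completion": output,
        "duration_ms": round((time.time() - start) * 1000, 2),
    }
    print(json.dumps(span, indent=2))
    return output

# Stand-in for ollama.generate, so the sketch runs without a local model.
result = traced_generate(
    "llama3",
    "what is happiness",
    lambda model, prompt: f"[{model}] response to: {prompt}",
)
```

In the real setup below, `openlit.init()` does this instrumentation automatically and ships the spans to the OTLP endpoint instead of printing them.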




Code:

# Create and activate a Python environment
conda create -n lit python=3.11 -y && conda activate lit

# Install dependencies
pip install torch
pip install git+https://github.com/huggingface/transformers

# Clone OpenLIT and start its stack (run docker compose from inside the repo)
git clone https://github.com/openlit/openlit.git
cd openlit
docker compose up -d

# Install the Python SDKs
pip install openlit
pip install ollama

import ollama
import openlit

# Initialize OpenLIT, pointing it at the local OTLP endpoint started by Docker.
# trace_content=False disables capturing prompt/response text in traces.
openlit.init(otlp_endpoint="http://127.0.0.1:4318", trace_content=False)

prompt = "what is happiness"
response = ollama.generate(model='llama3', prompt=prompt)
print(response['response'])