April 26, 2024

Use Hugging Face API Locally for Free Model Access

Fahd Mirza

This video is a hands-on, step-by-step tutorial with code showing how to use the Hugging Face Inference API from your local machine for free.




Code:


# pip install huggingface_hub
# export HF_TOKEN="<>"


from huggingface_hub import InferenceClient
import json

# Any text-generation model hosted on the Hub can be used here.
repo_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# The client picks up the HF_TOKEN environment variable set above.
llm_client = InferenceClient(
    model=repo_id,
    timeout=120,
)
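
If you would rather not rely on the implicit environment lookup, the client also accepts the token directly. A minimal sketch, assuming your token is stored in HF_TOKEN:

import os

# Same client, but with the token passed explicitly.
llm_client = InferenceClient(
    model=repo_id,
    token=os.environ.get("HF_TOKEN"),
    timeout=120,
)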


def call_llm(inference_client: InferenceClient, prompt: str):
    # Send a raw POST request to the model's Inference API endpoint.
    response = inference_client.post(
        json={
            "inputs": prompt,
            "parameters": {"max_new_tokens": 200},
            "task": "text-generation",
        },
    )
    # The API returns JSON bytes: a list with one result object.
    return json.loads(response.decode())[0]["generated_text"]
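
As an aside: because TinyLlama-1.1B-Chat is a chat-tuned model, recent versions of huggingface_hub (0.22 and later, so check your installed version) also expose a chat_completion helper that applies the model's chat template for you. A minimal sketch:

# chat_completion takes OpenAI-style messages and returns a response
# object rather than raw bytes.
messages = [{"role": "user", "content": "write me a crazy joke"}]
chat_response = llm_client.chat_completion(messages, max_tokens=200)
print(chat_response.choices[0].message.content)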



# Call the helper defined above and print the completion.
response = call_llm(llm_client, "write me a crazy joke")
print(response)
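
The raw post call gives full control over the request payload, but for plain text generation the higher-level text_generation helper wraps the same endpoint and handles the JSON for you. A minimal sketch:

# text_generation returns the generated string directly,
# so no manual decoding or JSON parsing is needed.
completion = llm_client.text_generation(
    "write me a crazy joke",
    max_new_tokens=200,
)
print(completion)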
