---
license: other
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
new_version: shiv1119/ollama-model-shiv
pipeline_tag: text-generation
tags:
- ollama
- local-llm
- docker
- offline
- shiv
---
# 🦙 Shiv's Custom Local LLM (Ollama Ready)
Welcome to **`ollama-model-shiv`**, a custom-built local language model designed to run **completely offline** using [Ollama](https://ollama.com/). This repo packages everything, from the `Modelfile` to the model blobs, ready for Docker-based deployment and local inference.
---
## Model link: https://huggingface.co/shiv1119/ollama-model-shiv
## πŸ“ Directory Structure
OllamaModelBuild/
β”œβ”€β”€ model_test.py # Sample Python script to interact with the model
β”œβ”€β”€ .gitattributes # Git LFS or text handling config
β”œβ”€β”€ ollama_clean/
β”‚ β”œβ”€β”€ docker-compose.yml # Docker config to run Ollama with this model
β”‚ β”œβ”€β”€ Modelfile # Ollama build instructions
β”‚ └── models/ # Model weights & metadata
β”‚ β”œβ”€β”€ blobs/ # Binary blobs of the model
β”‚ └── manifests/ # Manifest describing model structure
---
## 🚀 Features
- ✅ 100% offline: no internet or API key needed
- 🐋 Docker-ready with `docker-compose.yml`
- ⚡ Works with the [Ollama CLI](https://ollama.com)
- 🔁 Full model reproducibility via blobs and manifests
- 🧠 Based on an open-weight LLaMA-family architecture (see `base_model` in the metadata above)
---
## πŸ› οΈ Getting Started
### πŸ”§ 1. Install Prerequisites
- [Install Docker](https://docs.docker.com/get-docker/)
- (Optional) [Install Ollama CLI](https://ollama.com/download) β€” used during build
### πŸ‹ 2. Run the Model Using Docker
In the root of the project:
```bash
cd ollama_clean
docker-compose up --build
```
This builds and runs the model container locally with your custom blobs and Modelfile.
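The repo ships its own `docker-compose.yml` inside `ollama_clean/`; purely as a rough, hypothetical sketch (the service name, image tag, and volume paths below are assumptions, not the shipped file), a compose setup that serves Ollama with a locally mounted model directory typically looks like this:

```yaml
# Hypothetical sketch only; see ollama_clean/docker-compose.yml for the real config.
services:
  ollama:
    image: ollama/ollama:latest        # official Ollama server image (assumed tag)
    ports:
      - "11434:11434"                  # expose the Ollama HTTP API on the host
    volumes:
      - ./models:/root/.ollama/models  # mount this repo's blobs/ and manifests/
    restart: unless-stopped
```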
### 🧪 3. Test the Model (Optional)
```bash
docker-compose exec ollama ollama run tinyllama "Hello"
```
You can also use the included Python script to test interaction:
```bash
python model_test.py
```
Customize it to query your local Ollama model running at `http://localhost:11434`.
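The script's exact contents aren't shown here; one simple way to talk to the local server (an illustrative sketch, not necessarily how `model_test.py` is written) is to call Ollama's REST API with `requests`:

```python
# Hypothetical example of querying a locally running Ollama server.
# Assumes the docker-compose container is up and a model named "tinyllama"
# is available; adjust the model name to match your Modelfile.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "tinyllama",   # name of the local model to run
        "prompt": "Hello",      # prompt text sent to the model
        "stream": False,        # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # generated text
```

With `"stream": False`, the server returns a single JSON object whose `response` field holds the full generated text.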
## 🧰 Model Components
- `Modelfile`: Blueprint for Ollama to build the model (a minimal sketch follows below)
- `blobs/`: Raw model weights
- `manifests/`: Metadata describing the model format/version
- `docker-compose.yml`: Encapsulates the build/run config
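The real `Modelfile` is included under `ollama_clean/`; purely to illustrate the format (the base model, parameter, and system prompt below are assumptions, not this repo's actual settings), a minimal Ollama Modelfile looks like:

```
# Hypothetical minimal Modelfile; the one shipped in ollama_clean/ may differ.
# Base model to build on (must already be available locally):
FROM tinyllama
# Sampling temperature:
PARAMETER temperature 0.7
# System prompt baked into the custom model:
SYSTEM "You are a helpful assistant that runs fully offline."
```

Ollama turns such a file into a named local model with `ollama create <model-name> -f Modelfile`.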
## 🧠 About Ollama
Ollama makes it simple to run LLMs locally on your own machine: private, fast, and API-free.
## 📦 Repo Purpose
This repository was created to:
- Host a working local LLM solution
- Enable offline inference
- Serve as a template for packaging custom models with Ollama
## 📜 License
This repo is for educational and research purposes only. Please ensure you comply with the license of any base model used (e.g., LLaMA, Mistral).
## 🙌 Credits
Crafted with ❤️ by Shiv Nandan Verma.