---
license: other
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
new_version: shiv1119/ollama-model-shiv
pipeline_tag: text-generation
tags:
- ollama
- local-llm
- docker
- offline
- shiv
---
# Shiv's Custom Local LLM (Ollama Ready)
Welcome to **`ollama-model-shiv`**, a custom-built local language model designed to run **completely offline** using [Ollama](https://ollama.com/). This repo packages everything, from the `Modelfile` to the model blobs, ready for Docker-based deployment and local inference.
---
## Link to model - https://huggingface.co/shiv1119/ollama-model-shiv
## Directory Structure
```
OllamaModelBuild/
├── model_test.py            # Sample Python script to interact with the model
├── .gitattributes           # Git LFS / text handling config
└── ollama_clean/
    ├── docker-compose.yml   # Docker config to run Ollama with this model
    ├── Modelfile            # Ollama build instructions
    └── models/              # Model weights & metadata
        ├── blobs/           # Binary blobs of the model
        └── manifests/       # Manifests describing the model structure
```
---
## Features
- 100% offline: no internet connection or API key required
- Docker-ready with `docker-compose.yml`
- Works with the [Ollama CLI](https://ollama.com)
- Full model reproducibility via blobs and manifests
- Built on an open-weight base model (`meta-llama/Llama-3.1-8B`, per the model card metadata)
---
## Getting Started
### 1. Install Prerequisites
- [Install Docker](https://docs.docker.com/get-docker/)
- (Optional) [Install the Ollama CLI](https://ollama.com/download), used during the build
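To confirm the prerequisites are available, you can check their versions. This is a quick sanity check, assuming Docker Compose v2 (`docker compose`); older installs ship a standalone `docker-compose` binary instead:

```bash
# Verify that Docker (and optionally the Ollama CLI) is installed
docker --version
docker compose version    # or: docker-compose --version on older installs
ollama --version          # only needed if you rebuild the model with the Ollama CLI
```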
### 2. Run the Model Using Docker
In the root of the project:
```bash
cd ollama_clean
docker-compose up --build
```
This builds and runs the model container locally with your custom blobs and Modelfile.
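For reference, a compose file that serves a bundled Ollama model usually follows the pattern sketched below. This is a minimal sketch, not the repo's actual `docker-compose.yml`; the image tag, service name, and volume mapping are assumptions:

```yaml
# Hypothetical minimal docker-compose.yml for this layout (the repo's file may differ)
services:
  ollama:
    image: ollama/ollama:latest          # official Ollama server image
    ports:
      - "11434:11434"                    # expose the Ollama HTTP API on the host
    volumes:
      - ./models:/root/.ollama/models    # mount the bundled blobs/ and manifests/
    restart: unless-stopped
```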
### 3. Test the Model (Optional)
```bash
docker-compose exec ollama ollama run tinyllama "Hello"
```
You can also use the included Python script to test interaction:
```bash
python model_test.py
```
Customize it to query your local Ollama model running at `http://localhost:11434`.
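The contents of `model_test.py` aren't reproduced here. As a rough sketch under stated assumptions (model name `tinyllama`, a single non-streaming request), a client for the local Ollama HTTP API could look like this:

```python
# Hypothetical sketch of an Ollama API client (not necessarily the repo's model_test.py)
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's text-generation endpoint

def ask(prompt: str, model: str = "tinyllama") -> str:
    """Send one non-streaming prompt to the local Ollama server and return its reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask("Hello"))
```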
## Model Components
- `Modelfile`: Blueprint for Ollama to build the model (see the sketch below)
- `blobs/`: Raw model weights
- `manifests/`: Metadata describing the model format/version
- `docker-compose.yml`: Encapsulates the build/run config
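The repo's actual `Modelfile` isn't shown here; as a general illustration of the format, an Ollama Modelfile combines a base reference with parameters and an optional system prompt. The base path, parameter values, and prompt below are assumptions:

```
# Hypothetical Modelfile sketch (not the repo's actual Modelfile)
# Base weights, referenced by file path or by an existing model name
FROM ./blobs/model.gguf

# Sampling temperature and context window size
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Optional system prompt baked into the model
SYSTEM "You are a helpful assistant that runs fully offline."
```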
## About Ollama
[Ollama](https://ollama.com) makes it simple to run LLMs locally on your own machine: private, fast, and with no external API keys required.
## Repo Purpose
This repository was created to:
- Host a working local LLM solution
- Enable offline inference
- Serve as a template for packaging custom models with Ollama
## License
This repo is for educational/research purposes only. Please ensure you comply with the license of any base model used (e.g., Llama 3.1, Mistral).
## Credits
Crafted with ❤️ by Shiv Nandan Verma