---

license: other
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
new_version: shiv1119/ollama-model-shiv
pipeline_tag: text-generation
tags:
- ollama
- local-llm
- docker
- offline
- shiv
---

# 🦙 Shiv's Custom Local LLM (Ollama Ready)

Welcome to **`ollama-model-shiv`**, a custom-built local language model designed to run **completely offline** using [Ollama](https://ollama.com/). This repo packages everything, from the `Modelfile` to the model blobs, ready for Docker-based deployment and local inference.

---
## Link to the model: https://huggingface.co/shiv1119/ollama-model-shiv

## πŸ“ Directory Structure

OllamaModelBuild/
β”œβ”€β”€ model_test.py # Sample Python script to interact with the model
β”œβ”€β”€ .gitattributes # Git LFS or text handling config
β”œβ”€β”€ ollama_clean/
β”‚ β”œβ”€β”€ docker-compose.yml # Docker config to run Ollama with this model
β”‚ β”œβ”€β”€ Modelfile # Ollama build instructions
β”‚ └── models/ # Model weights & metadata
β”‚ β”œβ”€β”€ blobs/ # Binary blobs of the model
β”‚ └── manifests/ # Manifest describing model structure


---

## 🚀 Features

- ✅ 100% offline: no internet connection or API key needed
- 🐋 Docker-ready with `docker-compose.yml`
- ⚡ Works with the [Ollama CLI](https://ollama.com)
- 🔁 Full model reproducibility via blobs and manifests
- 🧠 Based on the `LLaMA2` / open-weight LLM architecture

---

## πŸ› οΈ Getting Started

### 🔧 1. Install Prerequisites

- [Install Docker](https://docs.docker.com/get-docker/)
- (Optional) [Install the Ollama CLI](https://ollama.com/download), used during the build

### πŸ‹ 2. Run the Model Using Docker

In the root of the project:

```bash
cd ollama_clean
docker-compose up --build
```

This builds and runs the model container locally with your custom blobs and Modelfile.

### 🧪 3. Test the Model (Optional)

```bash
docker-compose exec ollama ollama run tinyllama "Hello"
```

You can also use the included Python script to test interaction:

```bash
python model_test.py
```

Customize it to query your local Ollama model running at http://localhost:11434.
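
If you need a starting point, the following is a minimal sketch of what such a script could look like (not necessarily the exact script shipped here). It uses Ollama's standard `/api/generate` REST endpoint, assumes the server is reachable at `http://localhost:11434`, and reuses the `tinyllama` model name from the command above; change the name to match your own Modelfile build.

```python
# Minimal sketch of a client for the local Ollama server.
# Requires the `requests` package (pip install requests).
# Assumptions: Ollama listens on localhost:11434 and a model named
# "tinyllama" exists; adjust MODEL_NAME to match your build.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL_NAME = "tinyllama"  # assumption: replace with your model's name


def ask(prompt: str) -> str:
    """Send one prompt to the local Ollama server and return the full reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL_NAME, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print(ask("Hello! In one sentence, what can you do?"))
```

Setting `stream` to `false` makes the server return a single JSON object instead of a stream of partial responses, which keeps the example short.
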
## 🧰 Model Components

- `Modelfile`: Blueprint for Ollama to build the model
- `blobs/`: Raw model weights
- `manifests/`: Metadata describing the model format and version
- `docker-compose.yml`: Encapsulates the build/run configuration
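
Because reproducibility depends on the manifests and blobs staying in sync, a small check along these lines can confirm that every blob a manifest references is actually present. It assumes Ollama's usual on-disk layout (JSON manifests whose `config` and `layers` entries carry `sha256:` digests, and blob files named `sha256-<hex>`); treat it as a sketch and adapt the paths if your layout differs.

```python
# Sketch: verify that every blob referenced by the manifests under
# ollama_clean/models/manifests/ exists in ollama_clean/models/blobs/.
# Layout assumptions (typical for Ollama): manifests are JSON files whose
# "config" and "layers" entries hold "digest" fields like "sha256:<hex>",
# and blobs are stored as files named "sha256-<hex>".
import json
from pathlib import Path

MODELS_DIR = Path("ollama_clean/models")


def missing_blobs(models_dir: Path = MODELS_DIR) -> list[str]:
    """Return '<manifest>: <digest>' entries whose blob file is missing."""
    missing = []
    for manifest_path in (models_dir / "manifests").rglob("*"):
        if not manifest_path.is_file():
            continue
        try:
            manifest = json.loads(manifest_path.read_text())
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue  # skip anything that is not a JSON manifest
        entries = manifest.get("layers", []) + [manifest.get("config", {})]
        for entry in entries:
            digest = entry.get("digest", "")  # e.g. "sha256:abc123..."
            if digest and not (models_dir / "blobs" / digest.replace(":", "-")).exists():
                missing.append(f"{manifest_path.name}: {digest}")
    return missing


if __name__ == "__main__":
    problems = missing_blobs()
    print("All referenced blobs are present." if not problems else "\n".join(problems))
```
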

## 🧠 About Ollama

Ollama makes it simple to run LLMs locally on your own machine, keeping inference private, fast, and free of external APIs or keys.

## 📦 Repo Purpose

This repository was created to:

- Host a working local LLM solution
- Enable offline inference
- Serve as a template for packaging custom models with Ollama

## 📜 License

This repo is for educational and research purposes only. Please make sure you comply with the licenses of any base models used (e.g., LLaMA 2, Mistral).

## 🙌 Credits

Crafted with ❤️ by Shiv Nandan Verma