Commit 6452a34
Parent(s): initial commit

Files changed:
- .gitattributes +37 -0
- README.md +91 -0
- foundation-sec-8b-q4_k_m.gguf +3 -0
.gitattributes
ADDED
@@ -0,0 +1,37 @@
```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
*.gguf filter=lfs diff=lfs merge=lfs -text
```
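These are the stock Hugging Face LFS rules plus a `*.gguf` entry covering the quantized checkpoint. For context, an entry like that is what `git lfs track` appends; a minimal sketch, assuming Git LFS is installed:

```bash
# Appends "*.gguf filter=lfs diff=lfs merge=lfs -text" to .gitattributes
git lfs track "*.gguf"
git add .gitattributes
```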
README.md
ADDED
@@ -0,0 +1,91 @@
---
license: apache-2.0
language:
- en
base_model: fdtn-ai/Foundation-Sec-8B
pipeline_tag: text-generation
library_name: transformers
tags:
- security
- llama
- gguf
- quantization
---

# Foundation-Sec-8B-Q4_K_M-GGUF Model Card

**This model was quantized from [fdtn-ai/Foundation-Sec-8B](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) to a 4-bit (Q4_K_M) GGUF checkpoint using llama.cpp. It retains the cybersecurity specialization of the original 8-billion-parameter model while reducing the memory footprint from approximately 16 GB (BF16) to around 4.9 GB (Q4_K_M) for inference.**

## Model Description

`fdtn-ai/Foundation-Sec-8B-Q4_K_M-GGUF` is a 4-bit quantized variant of **Foundation-Sec-8B**, an 8B-parameter LLaMA 3.1-based model that was continued-pretrained on a curated corpus of cybersecurity-specific text (e.g., CVEs, threat-intelligence reports, exploit write-ups, compliance guides). The base model was originally released on April 28, 2025 under Apache 2.0 and excels at tasks such as:

- **Threat intelligence summarization** (e.g., summarizing CVE details)
- **Vulnerability classification** (mapping CVEs/CWEs to MITRE ATT&CK)
- **Incident triage assistance** (extracting IoCs, summarizing log data)
- **Red-team simulation prompts** and **security-workflow generation**

Rather than replicating the full training details here, this card defers to the original model card for the base architecture, training data, evaluation results, and known limitations.

## Quantization Details

- **Quantization Scheme:** 4-bit “Q4_K_M” k-quant (llama.cpp’s block-wise quantization with per-group scales; the “medium” variant)
- **Toolchain:** Converted to GGUF format via [llama.cpp’s conversion utilities](https://github.com/ggml-org/llama.cpp) (`v0.1.81` or newer).
- **Resulting File Size:** ~4.92 GB on disk (raw GGUF blob)
- **Runtime Footprint:**
  - Memory: ≈4.94 GB of RAM when loaded on CPU with llama.cpp
- **Format:**
  - File extension: `.gguf`
  - Internally contains:
    1. Metadata (architecture, tokenizer vocab, hyperparameters)
    2. Vocabulary list (BPE tokens)
    3. Weight tensors (for each layer and head) stored in 4-bit quantized form
- Compatible with the llama.cpp Python bindings (`llama_cpp`) and the C++ CLI inference engines; a sketch of how such a file is produced follows below.

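For reference, a checkpoint like this one is typically produced with llama.cpp’s converter and quantizer. A minimal sketch, assuming a local llama.cpp checkout and the original BF16 weights (the script and binary names follow recent llama.cpp layouts and may differ in older releases):

```bash
# Fetch the original weights (assumes huggingface-cli and git-lfs are installed)
huggingface-cli download fdtn-ai/Foundation-Sec-8B --local-dir Foundation-Sec-8B

# Convert the Hugging Face checkpoint to a 16-bit GGUF file
python convert_hf_to_gguf.py Foundation-Sec-8B \
  --outfile foundation-sec-8b-f16.gguf --outtype f16

# Quantize the 16-bit GGUF down to Q4_K_M
./llama-quantize foundation-sec-8b-f16.gguf foundation-sec-8b-q4_k_m.gguf Q4_K_M
```
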
## How to Use

### Install llama.cpp on Mac

Use Homebrew:

```bash
brew install llama.cpp
```

or build from source:

```bash
# Install build dependencies
brew install cmake

# Clone and build llama.cpp
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Add the CLI to PATH (optional)
sudo cp build/bin/llama-cli /usr/local/bin/
```

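With the tools in place, fetch the quantized checkpoint from the Hub. A minimal sketch, assuming the `huggingface-cli` client and this repository’s id:

```bash
# Download just the GGUF file into the current directory
huggingface-cli download fdtn-ai/Foundation-Sec-8B-Q4_K_M-GGUF \
  foundation-sec-8b-q4_k_m.gguf --local-dir .
```
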
### Run the Model

```bash
llama-cli -m foundation-sec-8b-q4_k_m.gguf -p "CVE-2021-44228 is a remote code execution flaw in Apache Log4j2 via unsafe JNDI lookups (\"Log4Shell\"). The CWE is CWE-502.\n\nCVE-2017-0144 is a remote code execution vulnerability in Microsoft's SMBv1 server (\"EternalBlue\") due to a buffer overflow. The CWE is CWE-119.\n\nCVE-2014-0160 is an information-disclosure bug in OpenSSL's heartbeat extension (\"Heartbleed\") due to out-of-bounds reads. The CWE is CWE-125.\n\nCVE-2017-5638 is a remote code execution issue in Apache Struts 2's Jakarta Multipart parser stemming from improper input validation of the Content-Type header. The CWE is CWE-20.\n\nCVE-2019-0708 is a remote code execution vulnerability in Microsoft's Remote Desktop Services (\"BlueKeep\") triggered by a use-after-free. The CWE is CWE-416.\n\nCVE-2015-10011 is a vulnerability about OpenDNS OpenResolve improper log output neutralization. The CWE is" -n 128
```

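The same file can also be served over HTTP with llama.cpp’s bundled server, which exposes an OpenAI-compatible API. A minimal sketch (the port and context size below are arbitrary choices):

```bash
# Start an OpenAI-compatible server backed by the quantized model
llama-server -m foundation-sec-8b-q4_k_m.gguf -c 4096 --port 8080

# In another shell, query the chat completions endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Summarize CVE-2021-44228 in two sentences."}], "max_tokens": 128}'
```
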
## References

1. **Original Model Card:**
   [fdtn-ai/Foundation-Sec-8B](https://huggingface.co/fdtn-ai/Foundation-Sec-8B) (April 28, 2025) – continued pretraining of LLaMA 3.1-8B on cybersecurity data.

2. **llama.cpp GGUF Quantization:**
   Gerganov, G. (2022). _llama.cpp: LLM inference in C/C++_. GitHub repository.

3. **ZeroQuant:**
   Yao, Z. et al. (2022). “ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers.” arXiv:2206.01861.

4. **SmoothQuant:**
   Xiao, G. et al. (2022). “SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models.” arXiv:2211.10438.

**License:** Apache 2.0 (same as the base model)
**Contact:** For questions about usage, quantization details, or license terms, please open an issue on the Hugging Face repo or contact `paulkass@cisco.com`.

foundation-sec-8b-q4_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:6883ec6480d218094cd88494fb006443c99f430d09ba26ed12ac0859c95cf7ba
size 4921462368
```
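Per the Git LFS pointer format, `size` is the payload size in bytes (4,921,462,368 ≈ 4.92 GB) and `oid` is the SHA-256 digest of the actual file contents, so a downloaded copy can be verified against it:

```bash
# Should print 6883ec6480d218094cd88494fb006443c99f430d09ba26ed12ac0859c95cf7ba
shasum -a 256 foundation-sec-8b-q4_k_m.gguf
```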