
Eva Mindlink 72B

Eva Mindlink 72B is a normalized, denoised Fourier interpolation of the following models:

output_base_model: "Qwen/Qwen2.5-72B"
output_dtype: "bfloat16"
finetune_merge:
  - { "model": "Skywork/MindLink-72B-0801", "base": "Qwen/Qwen2.5-72B", "alpha": 0.9, "is_input": true }
  - { "model": "Unbabel/Tower-Plus-72B", "base": "Qwen/Qwen2.5-72B", "alpha": 0.5 }
  - { "model": "EVA-UNIT-01/EVA-Qwen2.5-72B-v0.2", "base": "Qwen/Qwen2.5-72B", "alpha": 0.8, "is_output": true }

In other words, all of these models are warped and interpolated in signal space, then layered back on top of the base model (in this case Qwen2.5-72B), using the MindLink-72B-0801 input layer and the EVA-Qwen2.5-72B-v0.2 output layer.
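The general idea can be sketched as follows. This is a conceptual illustration only, not maldv's actual merge code: each finetune's delta from the base is moved into frequency space, weak frequency components are zeroed out (denoising), the spectra are alpha-weighted and averaged, the result is normalized back to the scale of the input deltas, and the merged delta is added onto the base weights. The function name, `keep_frac` parameter, and denoising rule are all hypothetical.

```python
import numpy as np

def fourier_merge(base, finetunes, alphas, keep_frac=0.9):
    """Conceptual sketch of a normalized, denoised Fourier-interpolation merge.

    Hypothetical illustration; not the actual Eva Mindlink 72B merge code.
    """
    # Task vectors: how each finetune differs from the shared base.
    deltas = [ft - base for ft in finetunes]
    specs = [np.fft.fft(d.ravel()) for d in deltas]

    # Denoise: keep only the strongest frequency components of each delta.
    denoised = []
    for s in specs:
        mags = np.abs(s)
        cutoff = np.quantile(mags, 1.0 - keep_frac)
        denoised.append(np.where(mags >= cutoff, s, 0.0))

    # Alpha-weighted interpolation in signal (frequency) space.
    total = sum(a * s for a, s in zip(alphas, denoised)) / sum(alphas)
    merged_delta = np.real(np.fft.ifft(total)).reshape(base.shape)

    # Normalize the merged delta to the mean norm of the input deltas,
    # then jam it back on top of the base weights.
    target = np.mean([np.linalg.norm(d) for d in deltas])
    norm = np.linalg.norm(merged_delta)
    if norm > 0:
        merged_delta *= target / norm
    return base + merged_delta
```

With a single finetune, `alpha=1.0`, and no denoising (`keep_frac=1.0`), this round-trips cleanly: the merged result recovers the finetune's weights, which is a useful sanity check on the transform.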

Citation

If you find our work helpful, feel free to cite us.

@misc{eva-mindlink-72b,
    title = {Eva Mindlink 72B},
    url = {https://huggingface.co/maldv/Eva-Mindlink-72B},
    author = {Praxis Maldevide},
    month = {August},
    year = {2025}
}