---
base_model:
- openchat/openchat-3.6-8b-20240522
- MrRobotoAI/llama3-8B-Special-Dark-v2.0
- TIGER-Lab/MAmmoTH2-8B-Plus
- OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0
- refuelai/Llama-3-Refueled
- NousResearch/Hermes-2-Theta-Llama-3-8B
- SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
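Linear merging computes a weighted average of the models' parameters, tensor by tensor. Since every model in the configuration below is given `weight: 1.0`, the result is a uniform average of the seven checkpoints. The snippet below is a minimal sketch of the idea, not mergekit's actual implementation:

```python
# Illustrative sketch of linear merging (not mergekit internals):
# each parameter tensor is the weight-normalized average across all models.
import torch

def linear_merge(state_dicts, weights):
    """Average matching tensors from several state dicts, weighted per model."""
    total = sum(weights)
    merged = {}
    for name, tensor in state_dicts[0].items():
        stacked = torch.stack(
            [w * sd[name].float() for sd, w in zip(state_dicts, weights)]
        )
        merged[name] = (stacked.sum(dim=0) / total).to(tensor.dtype)
    return merged
```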
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)
* [MrRobotoAI/llama3-8B-Special-Dark-v2.0](https://huggingface.co/MrRobotoAI/llama3-8B-Special-Dark-v2.0)
* [TIGER-Lab/MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus)
* [OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0](https://huggingface.co/OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0)
* [refuelai/Llama-3-Refueled](https://huggingface.co/refuelai/Llama-3-Refueled)
* [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)
* [SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NousResearch/Hermes-2-Theta-Llama-3-8B
    parameters:
      weight: 1.0
  - model: MrRobotoAI/llama3-8B-Special-Dark-v2.0
    parameters:
      weight: 1.0
  - model: openchat/openchat-3.6-8b-20240522
    parameters:
      weight: 1.0
  - model: OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      weight: 1.0
  - model: refuelai/Llama-3-Refueled
    parameters:
      weight: 1.0
  - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
    parameters:
      weight: 1.0
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
```
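Assuming mergekit is installed (`pip install mergekit`), the merge can be reproduced by saving the configuration above as `config.yaml` and running `mergekit-yaml config.yaml ./merged-model`. The output directory is a standard transformers checkpoint; below is a hedged loading sketch, where the local path and prompt are placeholders:

```python
# Hedged usage sketch: load the merge output as a regular transformers model.
# "./merged-model" is an assumed local path (the mergekit-yaml output directory).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./merged-model"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path,
    torch_dtype=torch.float16,  # matches the dtype in the merge config
    device_map="auto",          # requires the accelerate package
)

prompt = "Explain model merging in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```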