---
library_name: transformers
tags:
- mergekit
- merge
- mistral
---

# Info

**Notice**: It appears I added too much Nina-v2-7B to this merge. I'll have to do more testing.

This is a test combining what I think is the best RP model as of now, [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B), with [Virt-io/Nina-v2-7B](https://huggingface.co/Virt-io/Nina-v2-7B) and Eros-Erebus-Holodeck-7B.

The goal is to make the model smarter with Nina-v2-7B and to add more story variation with Eros-Erebus-Holodeck-7B.

[**Experimental SillyTavern presets**](https://huggingface.co/Virt-io/Irene-RP-7B/tree/main/presets)

# Irene-RP-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) as the base. (A toy sketch of how task arithmetic combines weights appears at the end of this card.)

### Models Merged

The following models were included in the merge:

* [Virt-io/Nina-v2-7B](https://huggingface.co/Virt-io/Nina-v2-7B)
* Mergekit/Eros-Erebus-Holodeck-7B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Endevor/InfinityRP-v1-7B
    parameters:
      weight: 1.0
  - model: Virt-io/Nina-v2-7B
    parameters:
      weight: 0.75
  - model: Mergekit/Eros-Erebus-Holodeck-7B
    parameters:
      weight: 0.55
merge_method: task_arithmetic
base_model: Endevor/InfinityRP-v1-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```

# Eros-Erebus-Holodeck-7B

This is just [Virt-io/Erebus-Holodeck-7B](https://huggingface.co/Virt-io/Erebus-Holodeck-7B) merged with [tavtav/eros-7b-test](https://huggingface.co/tavtav/eros-7b-test).

I probably will not be uploading this one, as my upload speed is too slow and free Colab keeps OOMing (running out of memory).

```yaml
slices:
  - sources:
      - model: tavtav/eros-7b-test
        layer_range: [0, 32]
      - model: Virt-io/Erebus-Holodeck-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: tavtav/eros-7b-test
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
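For reference, SLERP blends two weight tensors along the arc between them rather than along a straight line, and the per-filter `t` lists above vary the blend ratio across layer depth. Below is a minimal, self-contained sketch of the formula on flattened toy tensors; it is illustrative only, and mergekit's actual implementation additionally applies the filter schedules per tensor.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    # Angle between the two tensors, clipped for numerical safety.
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# t=0 returns the first tensor, t=1 the second, t=0.5 an arc midpoint blend.
print(slerp(0.5, np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```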
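# Task arithmetic sketch

As mentioned in the merge method section, task arithmetic reduces each fine-tuned model to a "task vector" (its difference from the base), weights and sums those vectors, and adds the result back onto the base. The numbers below are toy values for a single tensor; mergekit's real implementation works across every tensor in the checkpoint and has extra options such as `int8_mask`.

```python
import numpy as np

# Toy stand-ins for one parameter tensor from each checkpoint.
base = np.array([1.0, 2.0, 3.0])   # Endevor/InfinityRP-v1-7B (base)
nina = np.array([1.5, 1.8, 3.2])   # Virt-io/Nina-v2-7B
eros = np.array([0.9, 2.4, 2.7])   # Eros-Erebus-Holodeck-7B

# Task vectors: what each fine-tune changed relative to the base,
# paired with the weights from the YAML config above.
weighted = [(0.75, nina - base), (0.55, eros - base)]

# Weighted sum of task vectors added back onto the base; dividing by the
# weight total mimics what `normalize: true` does in the config.
total = sum(w for w, _ in weighted)
merged = base + sum(w * delta for w, delta in weighted) / total

print(merged)
```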
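# Usage

A minimal sketch for loading the merged model with transformers. The repo id `Virt-io/Irene-RP-7B` is inferred from the presets link above, the prompt below is a placeholder (use the SillyTavern presets for real chats), and `device_map="auto"` requires the `accelerate` package.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Virt-io/Irene-RP-7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "You are Irene, a friendly roleplay partner.\nUser: Hello!\nIrene:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```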