
Conversational Language Model Interface Using FastText

This project provides a Command Line Interface (CLI) for interacting with a FastText language model, enabling users to generate text sequences from their input. The script allows customization of parameters such as temperature, input text, output sequence length, and model file path.

Installation

Before running the script, ensure you have Python installed on your system. Additionally, you'll need to install the FastText library:


pip install fasttext

Usage

To use the script, you should first obtain or train a FastText model. Place the model file (usually with a .bin extension) in a known directory.
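
If you do not yet have a model, the sketch below shows one way to train a supervised FastText model. The training-file name, its label format, and the hyperparameters are illustrative assumptions; the dataset layout used for this particular model is not documented here.

import fasttext

# Illustrative training file (train.txt): one example per line in the
# standard fastText supervised format, here with the next token as the
# label followed by its preceding context, e.g.:
#   __label__world hello
model = fasttext.train_supervised(input='train.txt', epoch=5, wordNgrams=2)
model.save_model('model.bin')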

The script can be executed with various command-line arguments to specify the behavior:

import argparse
import fasttext
import numpy as np

def apply_repetition_penalty(labels, probabilities, used_labels, penalty_scale=1.9):
    """
    Applies a repetition penalty to reduce the probability of already used labels.

    :param labels: List of possible labels.
    :param probabilities: Corresponding list of probabilities.
    :param used_labels: Set of labels that have already been used.
    :param penalty_scale: Scale of the penalty to be applied.
    :return: Adjusted probabilities.
    """
    adjusted_probabilities = probabilities.copy()
    for i, label in enumerate(labels):
        if label in used_labels:
            adjusted_probabilities[i] /= penalty_scale
    # Normalize the probabilities to sum to 1 again
    adjusted_probabilities /= adjusted_probabilities.sum()
    return adjusted_probabilities

def predict_sequence(model, text, sequence_length=20, temperature=0.5, penalty_scale=1.9):
    """
    Generates a sequence of labels using the FastText model with repetition penalty.

    :param model: Loaded FastText model.
    :param text: Initial text to start the prediction from.
    :param sequence_length: Desired length of the sequence.
    :param temperature: Temperature for sampling.
    :param penalty_scale: Scale of repetition penalty.
    :return: List of predicted labels.
    """
    used_labels = set()
    sequence = []

    for _ in range(sequence_length):
        # Predict the top k most probable labels
        labels, probabilities = model.predict(text, k=40)
        labels = [label.replace('__label__', '') for label in labels]
        probabilities = np.array(probabilities)

        # Apply temperature: lower values sharpen the distribution,
        # higher values flatten it
        probabilities = probabilities ** (1.0 / temperature)
        probabilities /= probabilities.sum()

        # Adjust the probabilities with repetition penalty
        probabilities = apply_repetition_penalty(labels, probabilities, used_labels, penalty_scale)

        # Sampling according to the adjusted probabilities
        label_index = np.random.choice(range(len(labels)), p=probabilities)
        chosen_label = labels[label_index]

        # Add the chosen label to the sequence and to the set of used labels
        sequence.append(chosen_label)
        used_labels.add(chosen_label)

        # Update the text with the chosen label for the next prediction
        text += ' ' + chosen_label

    return sequence

def generate_response(model, input_text, sequence_length=512, temperature=0.5, penalty_scale=1.9):
    generated_sequence = predict_sequence(model, input_text, sequence_length, temperature, penalty_scale)
    return ' '.join(generated_sequence)

def main():
    parser = argparse.ArgumentParser(description="Run the language model with specified parameters.")
    parser.add_argument('-t', '--temperature', type=float, default=0.5, help='Temperature for sampling.')
    parser.add_argument('-f', '--file', type=str, help='File containing input text.')
    parser.add_argument('-p', '--text', type=str, help='Direct input text.')
    parser.add_argument('-n', '--length', type=int, default=50, help='Length of the generated sequence (number of predicted labels).')
    parser.add_argument('-m', '--model', type=str, required=True, help='Address of the FastText model file.')

    args = parser.parse_args()

    # Load the model
    model = fasttext.load_model(args.model)

    input_text = ''
    if args.file:
        with open(args.file, 'r') as file:
            # fastText's predict() rejects text containing newlines,
            # so collapse the file contents onto a single line
            input_text = file.read().replace('\n', ' ').strip()
    elif args.text:
        input_text = args.text
    else:
        print("No input text provided. Please use -f to specify a file or -p for direct text input.")
        return

    # Generate and print the response
    response = generate_response(model, input_text + " [RESPONSE]", sequence_length=args.length, temperature=args.temperature)
    print("\nResponse:")
    print(response)

if __name__ == "__main__":
    main()
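
To see the repetition penalty in isolation, here is a small check that can be run in the same module; the labels and probabilities are made-up values:

labels = ['hello', 'world', 'again']
probs = np.array([0.5, 0.3, 0.2])
print(apply_repetition_penalty(labels, probs, used_labels={'hello'}))
# 'hello' is divided by the default penalty_scale (1.9) and the vector
# is renormalized, giving roughly [0.345, 0.393, 0.262]: the
# already-used label is now less likely than 'world'.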

python conversation_app.py -t TEMPERATURE -f FILE -p TEXT -n LENGTH -m MODEL_PATH
  • -t TEMPERATURE or --temperature TEMPERATURE: Sets the sampling temperature. A higher temperature produces more diverse output (see the sketch after this list). Default is 0.5.
  • -f FILE or --file FILE: Specifies a path to a file containing input text. The script will read this file and use its contents as input.
  • -p TEXT or --text TEXT: Directly provide the input text as a string.
  • -n LENGTH or --length LENGTH: Sets the length of the generated sequence, i.e. the number of labels to predict. Default is 50.
  • -m MODEL_PATH or --model MODEL_PATH: The path to the FastText model file (required).
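
To illustrate what the temperature does, the standalone sketch below rescales a made-up distribution as probabilities ** (1 / temperature) and renormalizes, which is how the script applies it:

import numpy as np

probs = np.array([0.7, 0.2, 0.1])
for t in (0.5, 1.0, 2.0):
    scaled = probs ** (1.0 / t)
    scaled /= scaled.sum()
    print(t, scaled.round(3))
# 0.5 -> [0.907 0.074 0.019]  (sharper: the top label dominates)
# 1.0 -> [0.7   0.2   0.1  ]  (unchanged)
# 2.0 -> [0.523 0.279 0.198]  (flatter: more diverse sampling)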

Example

python conversation_app.py -t 0.7 -p "What is the future of AI?" -n 40 -m /path/to/model.bin

This command sets the temperature to 0.7, uses the provided question as input, generates a sequence of 40 labels, and specifies the model file path.
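
The same functions can also be used directly from Python. A minimal sketch, assuming the script above is saved as conversation_app.py and a trained model exists at model.bin (both names are assumptions):

import fasttext
from conversation_app import generate_response

model = fasttext.load_model('model.bin')
answer = generate_response(model, "What is the future of AI? [RESPONSE]",
                           sequence_length=40, temperature=0.7)
print(answer)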

Note

  • The script's output depends on the quality and training of the FastText model used.
  • Ensure the specified model file path and input file path (if used) are correct.