
A General Learning Model


AI & ML interests

machine learning, automation, installers, blockchain


---
title: aGLM Autonomous General Learning Model
emoji: 🔥
colorFrom: black
colorTo: green
sdk: static
pinned: false
---

Machine learning as a process: research into machine-learning intelligence principles and their application.

The first iteration of aglm.py is available in the Professor Codephreak LLM codebase as automindx (https://github.com/pythaiml/automindx), serving as a solution to machine memory with aglm as the memory parser.

aglm.py - Autonomous General Learning Model

Overview

The aglm.py module implements an Autonomous General Learning Model (AGLM) that uses a pre-trained language model to generate contextual responses based on a conversation history. It is designed to process conversation data stored in memory files and to generate responses from it.

Classes and Functions

LlamaModel

This class represents the AGLM. It is responsible for initializing the language model and tokenizer, as well as generating contextual responses based on conversation history.

__init__(self, model_name, models_folder): Constructor that sets up the AGLM with the specified model_name and models_folder and initializes the language model and tokenizer.

initialize_model(self): Initializes the language model and tokenizer using the specified model_name and models_folder.

generate_contextual_output(self, conversation_context): Generates a contextual response based on the given conversation context. It formats the conversation history using format_to_llama_chat_style and generates a response using the pre-trained language model.
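Below is a minimal sketch of how the LlamaModel class might be implemented, assuming the Hugging Face transformers API and the format_to_llama_chat_style helper from automind; the generation settings (e.g. max_new_tokens) are illustrative assumptions rather than values confirmed by the automindx codebase.

```python
import os

from transformers import AutoModelForCausalLM, AutoTokenizer
from automind import format_to_llama_chat_style


class LlamaModel:
    def __init__(self, model_name, models_folder):
        # Store the configuration and load the model immediately.
        self.model_name = model_name
        self.models_folder = models_folder
        self.initialize_model()

    def initialize_model(self):
        # Load the tokenizer and language model from the local models folder.
        model_path = os.path.join(self.models_folder, self.model_name)
        self.tokenizer = AutoTokenizer.from_pretrained(model_path)
        self.model = AutoModelForCausalLM.from_pretrained(model_path)

    def generate_contextual_output(self, conversation_context):
        # Format the conversation history into Llama chat style, then
        # generate a continuation with the pre-trained model.
        prompt = format_to_llama_chat_style(conversation_context)
        inputs = self.tokenizer(prompt, return_tensors="pt")
        output_ids = self.model.generate(**inputs, max_new_tokens=256)
        return self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
```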

determine_batch_size()

A utility function that determines an appropriate batch size for processing memory files based on available system memory. It calculates the batch size using the total available memory and a predefined maximum memory-usage threshold.
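A sketch of one way determine_batch_size() could work with psutil; the maximum memory-usage threshold and the assumed average size of a memory file are illustrative placeholders.

```python
import psutil

MAX_MEMORY_USAGE = 0.5            # assumed threshold: use at most half of available memory
AVERAGE_FILE_SIZE = 1024 * 1024   # assumed average memory-file size in bytes


def determine_batch_size():
    # Scale the batch size to the memory currently available so that
    # large conversation histories do not exhaust system RAM.
    available = psutil.virtual_memory().available
    batch_size = int(available * MAX_MEMORY_USAGE / AVERAGE_FILE_SIZE)
    return max(1, batch_size)
```

main()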

The main entry point of the script. It reads conversation history from memory files in batches, generates a contextual response using the AGLM, and prints the response. It uses the LlamaModel class to perform response generation.
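A hedged sketch of main() under the same assumptions; the model name, models folder, and memory-file location (memory/*.json) are hypothetical placeholders, and LlamaModel and determine_batch_size refer to the sketches above.

```python
import glob
import os

import ujson


def main():
    # Hypothetical model name and folders; adjust to the local setup.
    model = LlamaModel("llama-2-7b-chat", "models")
    batch_size = determine_batch_size()
    memory_files = sorted(glob.glob(os.path.join("memory", "*.json")))

    # Read conversation history in memory-friendly batches, assuming
    # each memory file holds a list of conversation turns.
    conversation_context = []
    for i in range(0, len(memory_files), batch_size):
        for path in memory_files[i:i + batch_size]:
            with open(path, "r") as f:
                conversation_context.extend(ujson.load(f))

    response = model.generate_contextual_output(conversation_context)
    print(response)
```

Usage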

Import the necessary modules: os, glob, ujson, psutil, AutoModelForCausalLM and AutoTokenizer from the transformers library, and format_to_llama_chat_style from automind.
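Collected together, those imports might sit at the top of aglm.py like this:

```python
import glob
import os

import psutil
import ujson
from transformers import AutoModelForCausalLM, AutoTokenizer

from automind import format_to_llama_chat_style
```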

Define the LlamaModel class, which encapsulates the AGLM's behavior: it initializes the language model and tokenizer and generates responses based on conversation context.

Define the utility function determine_batch_size() that calculates an appropriate batch size based on system memory.

Define the main() function, which reads memory files in batches, generates responses, and prints the generated response.

If the script is executed as the main program (if __name__ == '__main__':), it calls the main() function to execute the AGLM.
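That is the standard Python entry-point guard:

```python
if __name__ == '__main__':
    main()
```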

Example Use Case

The aglm.py script could be used as part of a larger system that utilizes conversation memory to generate context-aware responses in a chatbot or virtual assistant application. It reads conversation history from memory files, processes the data in batches to manage memory usage, generates responses using a pre-trained language model, and prints the generated response to the console.

By integrating the aglm.py module with other components, developers can create more intelligent and contextually aware conversational agents.
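As an illustration, a minimal chatbot loop built on LlamaModel might look like the following; this is hypothetical integration code, assuming the conversation context is a list of role/content messages.

```python
def chat_loop(model):
    # Hypothetical REPL-style loop around the LlamaModel sketch above.
    history = []
    while True:
        user_input = input("you> ")
        if user_input.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        reply = model.generate_contextual_output(history)
        history.append({"role": "assistant", "content": reply})
        print(f"aglm> {reply}")
```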
