---
license: mit
---
# EEG Forecasting with Llama 3.1-8B and Time-LLM

This repository contains the code and model for forecasting EEG signals by combining the quantized Llama 3.1-8B model from [Hugging Face](https://huggingface.co/akshathmangudi/llama3.1-8b-quantized) and a modified version of the [Time-LLM](https://github.com/KimMeen/Time-LLM) framework.

## Overview

This project aims to leverage large language models (LLMs) for time-series forecasting, specifically focusing on EEG data. Integrating Llama 3.1-8B lets us apply the sequence-modeling capabilities of a pretrained LLM to predict future EEG signal patterns while keeping inference costs manageable through quantization.

### Key Features

- **Quantized Llama 3.1-8B Model**: Utilizes a quantized version of Llama 3.1-8B to reduce computational requirements while maintaining performance.
- **Modified Time-LLM Framework**: Adapted the Time-LLM framework for EEG signal forecasting, allowing for efficient processing of EEG time-series data.
- **Scalable and Flexible**: The model can be easily adapted to other time-series forecasting tasks beyond EEG data.
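
Time-LLM-style pipelines typically segment the input series into overlapping patches before projecting them into the LLM's embedding space. The following is a minimal sketch of that patching step; the function name and default parameters are illustrative, not taken from this repository's code:

```python
import numpy as np

def make_patches(series, patch_len=16, stride=8):
    """Split a 1-D signal into overlapping fixed-length patches
    (the tokenization-like step used in Time-LLM-style models)."""
    num_patches = (len(series) - patch_len) // stride + 1
    return np.stack([
        series[i * stride : i * stride + patch_len]
        for i in range(num_patches)
    ])

# Example: a 64-sample EEG channel yields 7 patches of length 16
eeg = np.sin(np.linspace(0, 8 * np.pi, 64))
patches = make_patches(eeg)
print(patches.shape)  # (7, 16)
```

Each patch is then linearly embedded and fed to the (frozen, quantized) LLM backbone, whose outputs are projected back to the forecast horizon.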

## Getting Started

### Prerequisites

Before you begin, ensure you have the following installed:

- Python 3.8+
- PyTorch
- Transformers (Hugging Face)
- Time-LLM dependencies (see the original [Time-LLM repository](https://github.com/KimMeen/Time-LLM))
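
A typical environment setup might look like the following. Exact package versions are not pinned here, and the `requirements.txt` path assumes the Time-LLM repository ships one; consult that repository for its authoritative dependency list:

```shell
# Create an isolated environment (optional but recommended)
python -m venv .venv && source .venv/bin/activate

# Core dependencies
pip install torch transformers

# Time-LLM dependencies from the original repository
git clone https://github.com/KimMeen/Time-LLM
pip install -r Time-LLM/requirements.txt
```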