SWIFT MT564 Documentation Assistant
Version: 1.0.0
Date: May 14, 2025
Author: Replit AI
Table of Contents
- Introduction
- Project Overview
- System Architecture
- Installation & Setup
- Component Details
- Usage Guide
- Troubleshooting
- References
- License
- Acknowledgements
Introduction
The SWIFT MT564 Documentation Assistant is a specialized AI system designed to help financial professionals understand and work with SWIFT MT564 message formats (Corporate Action Notifications). It combines web scraping, natural language processing, and a conversational interface to provide an intelligent assistant for interpreting MT564 documentation.
Project Overview
This project creates a complete pipeline that:
- Scrapes SWIFT MT564 documentation from official sources
- Processes this information into a structured format
- Fine-tunes a TinyLlama language model on this specialized data
- Provides a user interface for asking questions about MT564
- Enables deployment to Hugging Face for easy sharing and use
The system is designed to be modular, allowing for future expansion to other SWIFT message types or financial documentation.
System Architecture
The system consists of several key components:
SWIFT-MT564-Assistant/
├── scrapers/                     # Web scraping components
│   ├── iso20022_scraper.py       # Scraper for ISO20022 website
│   ├── pdf_parser.py             # PDF extraction utilities
│   └── data_processor.py         # Converts raw data to training format
│
├── model/                        # ML model components
│   ├── download_tinyllama.py     # Script to download TinyLlama model
│   ├── upload_to_huggingface.py  # Script to upload model to Hugging Face
│   ├── tinyllama_trainer.py      # Fine-tuning implementation
│   └── evaluator.py              # Tests model performance
│
├── webapp/                       # Web application
│   ├── app.py                    # Flask application
│   ├── templates/                # HTML templates
│   │   ├── index.html            # Main page
│   │   └── result.html           # Results display
│   └── static/                   # CSS, JS, and other static files
│
├── data/                         # Data storage
│   ├── raw/                      # Raw scraped data
│   ├── processed/                # Processed training data
│   └── uploaded/                 # User-uploaded PDFs
│
├── train_mt564_model.py          # Script to train the model
├── prepare_mt564_data.py         # Script to prepare training data
├── dependencies.txt              # Project dependencies
├── setup.py                      # Setup and utility script
└── README.md                     # Project documentation
Installation & Setup
System Requirements
- Python 3.8 or higher
- At least 4GB RAM (8GB+ recommended)
- At least 10GB free disk space
- CUDA-compatible GPU recommended for training (but not required)
- Internet connection for downloading models and data
Local Installation
Clone or download the project:
- Download the zip file from Replit
- Extract to a folder on your local machine
Set up a virtual environment:
# Create a virtual environment
python -m venv venv

# Activate the environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
Install dependencies:
# Install core dependencies
pip install torch transformers datasets huggingface_hub accelerate
pip install requests beautifulsoup4 trafilatura flask
pip install PyPDF2 tqdm nltk rouge

# Or use the dependencies.txt file
pip install -r dependencies.txt
Run the setup script for guidance:
python setup.py --mode guide
Environment Variables
The following environment variables are used:
- HUGGING_FACE_TOKEN: Your Hugging Face API token (for uploading models)
- FLASK_APP: Set to "webapp/app.py" for running the web interface
- FLASK_ENV: Set to "development" for debugging or "production" for deployment
Component Details
Data Collection
The data collection process involves scraping SWIFT MT564 documentation from official sources:
ISO20022 Website Scraping:
python scrapers/iso20022_scraper.py --output_dir ./data/raw
This scrapes the ISO20022 website's MT564 documentation and saves it in structured JSON format.
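For reference, the core of such a scraper can be quite small. The sketch below is a minimal, hypothetical version using requests and BeautifulSoup (both project dependencies); the actual iso20022_scraper.py may use different URLs, selectors, and output fields.

# Minimal, hypothetical scraper sketch (the real iso20022_scraper.py may use
# different URLs, selectors, and output fields).
import json
import os

import requests
from bs4 import BeautifulSoup

def scrape_page(url):
    """Fetch one documentation page and return its title and plain text."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    title = soup.find("h1")
    return {
        "url": url,
        "title": title.get_text(strip=True) if title else "",
        "text": soup.get_text(separator="\n", strip=True),
    }

if __name__ == "__main__":
    # Placeholder root URL; the real scraper walks the MT564 documentation pages.
    pages = [scrape_page("https://www.iso20022.org/")]
    os.makedirs("./data/raw", exist_ok=True)
    with open("./data/raw/mt564_documentation.json", "w") as f:
        json.dump(pages, f, indent=2)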
Data Processing:
python prepare_mt564_data.py --input_file ./data/raw/mt564_documentation.json --output_file ./data/processed/mt564_training_data.json
This converts the raw data into instruction-response pairs suitable for training.
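For illustration, a single record in the processed file might look like the following. The exact schema is defined by prepare_mt564_data.py; the field names below are assumptions, not the script's guaranteed output.

# Hypothetical example of one processed instruction-response record.
example_pair = {
    "instruction": "What does field 23G indicate in an MT564 message?",
    "response": "Field 23G specifies the function of the message, for example "
                "NEWM for a new corporate action notification.",
}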
Model Training
The model training process involves:
Downloading the base model:
python model/download_tinyllama.py --model_name TinyLlama/TinyLlama-1.1B-Chat-v1.0 --output_dir ./data/models
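Under the hood, this step is essentially a Hugging Face Hub snapshot fetch. A minimal sketch, assuming download_tinyllama.py wraps huggingface_hub:

# Minimal sketch of the base-model download (assumed to wrap huggingface_hub).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    local_dir="./data/models/TinyLlama-1.1B-Chat-v1.0",
)
print(f"Model downloaded to {local_path}")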
Fine-tuning on MT564 data:
python train_mt564_model.py --model_name ./data/models/TinyLlama-1.1B-Chat-v1.0 --training_data ./data/processed/mt564_training_data.json --output_dir ./mt564_tinyllama_model
Training parameters can be adjusted as needed:
- --epochs: Number of training epochs (default: 3)
- --batch_size: Batch size (default: 2)
- --learning_rate: Learning rate (default: 2e-5)
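For orientation, the core of the fine-tuning step might resemble the sketch below. This is a simplified outline using the Hugging Face Trainer, not the actual contents of tinyllama_trainer.py; the dataset field names ("instruction", "response") are assumptions carried over from the data-preparation example above.

# Simplified fine-tuning outline (assumed structure; the real trainer may differ).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_path = "./data/models/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_path)

dataset = load_dataset("json", data_files="./data/processed/mt564_training_data.json")

def tokenize(batch):
    # Concatenate each instruction with its response into one training string.
    texts = [f"{i}\n{r}" for i, r in zip(batch["instruction"], batch["response"])]
    return tokenizer(texts, truncation=True, max_length=512)

tokenized = dataset["train"].map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="./mt564_tinyllama_model",
    num_train_epochs=3,             # --epochs
    per_device_train_batch_size=2,  # --batch_size
    learning_rate=2e-5,             # --learning_rate
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False makes the collator copy input_ids into labels for causal-LM loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()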
Evaluating the model: The training script includes validation, but further evaluation can be performed on test data if needed.
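If you want a quick quantitative check beyond the built-in validation, ROUGE overlap against reference answers is one option. A minimal sketch using the rouge package from the dependency list (the example strings are hypothetical; in practice they come from a held-out test set):

# Minimal sketch: score model answers against reference answers with ROUGE.
from rouge import Rouge

predictions = ["Field 23G specifies the function of the message."]
references = ["Field 23G indicates the function of the MT564 message."]

scores = Rouge().get_scores(predictions, references, avg=True)
print(scores["rouge-l"])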
Web Interface
The web interface provides a user-friendly way to interact with the model:
Starting the web server:
python webapp/app.py
Using the interface:
- Open a browser and navigate to http://localhost:5000
- Upload SWIFT MT564 documentation PDFs
- Ask questions about the message format
- View AI-generated responses
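For reference, the serving side can be as small as the sketch below: a minimal, hypothetical version of webapp/app.py that loads the fine-tuned model with a transformers pipeline and answers questions. The real app additionally handles PDF uploads and the result template.

# Minimal sketch of a question-answering endpoint (the real app.py does more).
from flask import Flask, render_template, request
from transformers import pipeline

app = Flask(__name__)
generator = pipeline("text-generation", model="./mt564_tinyllama_model")

@app.route("/", methods=["GET", "POST"])
def index():
    answer = None
    if request.method == "POST":
        question = request.form.get("question", "")
        answer = generator(question, max_new_tokens=256)[0]["generated_text"]
    return render_template("index.html", answer=answer)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)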
Hugging Face Integration
The project includes tools for seamless integration with Hugging Face:
Uploading your model:
# Set your Hugging Face API token
export HUGGING_FACE_TOKEN=your_token_here

# Upload the model
python model/upload_to_huggingface.py --model_dir ./mt564_tinyllama_model --repo_name your-username/mt564-tinyllama
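The upload script is most likely a thin wrapper around huggingface_hub. A minimal sketch, assuming that is the case:

# Minimal sketch of a model upload (assumed to wrap huggingface_hub).
import os
from huggingface_hub import HfApi

api = HfApi(token=os.environ["HUGGING_FACE_TOKEN"])
api.create_repo("your-username/mt564-tinyllama", exist_ok=True)
api.upload_folder(
    folder_path="./mt564_tinyllama_model",
    repo_id="your-username/mt564-tinyllama",
)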
Creating a Hugging Face Space:
- Go to huggingface.co and click "New Space"
- Choose Gradio or Streamlit template
- Link to your uploaded model
- Use the sample code provided in the setup guide
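A Gradio Space only needs a single app.py. Here is a minimal sketch, assuming the model was uploaded under your-username/mt564-tinyllama; the sample code in the setup guide may differ.

# Minimal Gradio Space sketch (app.py); loads the uploaded model from the Hub.
import gradio as gr
from transformers import pipeline

generator = pipeline("text-generation", model="your-username/mt564-tinyllama")

def answer(question):
    return generator(question, max_new_tokens=256)[0]["generated_text"]

demo = gr.Interface(
    fn=answer,
    inputs=gr.Textbox(label="Question about MT564"),
    outputs=gr.Textbox(label="Answer"),
    title="SWIFT MT564 Documentation Assistant",
)
demo.launch()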
Usage Guide
Common Workflows
Complete Pipeline
1. Scrape data → 2. Process data → 3. Download model → 4. Train model → 5. Upload to Hugging Face
# 1. Scrape data
python scrapers/iso20022_scraper.py --output_dir ./data/raw
# 2. Process data
python prepare_mt564_data.py --input_file ./data/raw/mt564_documentation.json --output_file ./data/processed/mt564_training_data.json
# 3. Download model
python model/download_tinyllama.py --output_dir ./data/models
# 4. Train model
python train_mt564_model.py --training_data ./data/processed/mt564_training_data.json --output_dir ./mt564_tinyllama_model
# 5. Upload to Hugging Face
export HUGGING_FACE_TOKEN=your_token_here
python model/upload_to_huggingface.py --model_dir ./mt564_tinyllama_model --repo_name your-username/mt564-tinyllama
Using Pre-trained Model
If you already have a trained model, you can skip the pipeline above and run the web interface directly:
# Start the web interface
python webapp/app.py
Troubleshooting
Common Issues
Out of memory during training:
- Reduce batch size: --batch_size 1
- Increase gradient accumulation: --gradient_accumulation_steps 8
- Use CPU only if necessary: --device cpu
Installation errors:
- Make sure you're using Python 3.8+
- Try installing dependencies one by one
- Check for package conflicts
Hugging Face upload issues:
- Verify your HUGGING_FACE_TOKEN is set correctly
- Make sure you have write access to the repository
- Check for repository naming conflicts
Getting Help
If you encounter issues:
- Check the error messages for specific details
- Consult the Hugging Face documentation for model/API issues
- Review the TinyLlama documentation for model-specific questions
References
- SWIFT MT564 Documentation
- TinyLlama Project
- Hugging Face Documentation
- Transformers Library
- Flask Web Framework
License
This project is available under the Apache 2.0 License.
Acknowledgements
This project utilizes several open-source libraries and resources, including TinyLlama, Hugging Face Transformers, and Flask.