---
title: Polymer Discovery Platform
sdk: streamlit
python_version: '3.10'
app_file: Home.py
---
# Polymer Discovery Platform
An integrated Streamlit platform for polymer screening and candidate discovery. The application combines property lookup, machine-learning prediction, molecular visualization, multi-objective discovery, AI-assisted query translation, novel polymer SMILES generation, and export to an automated molecular dynamics workflow.
## What The Platform Does
The application is organized into eight modules:

- **Property Probe:** query a single polymer by SMILES or name and retrieve available database values, with prediction fallback.
- **Batch Prediction:** run multi-property prediction for pasted, uploaded, or built-in polymer sets.
- **Molecular View:** render 2D and 3D molecular structures and export structure assets.
- **Discovery (Manual):** perform explicit constraint-based and multi-objective polymer screening.
- **Discovery (AI):** translate natural-language design requests into structured discovery settings, with bring-your-own-key LLM support.
- **Novel SMILES Generation:** sample new polymer candidates with the pretrained RNN and filter them against local datasets.
- **Literature Search:** search papers, stage evidence records, and review structured material-property extraction before promotion.
- **Feedback:** submit issue reports and feature requests through a webhook-backed form.
## Core Capabilities
- Multi-source property lookup from `EXP`, `MD`, `DFT`, `GC`, and `POLYINFO`
- Property prediction across 28 polymer properties
- Large-scale screening over real and virtual candidate libraries
- Exact Pareto ranking with trust and diversity-aware selection
- AI-assisted prompt-to-spec generation for discovery workflows
- Novelty-filtered polymer SMILES generation
- Material-aware literature retrieval, evidence staging, and reviewer workflow
- ADEPT handoff for downstream molecular dynamics workflow packaging
## Repository Layout
```
.
├── Home.py        # Main Streamlit homepage
├── app.py         # Compatibility entrypoint
├── pages/         # User-facing application modules
├── src/           # Prediction, discovery, lookup, and UI logic
├── literature/    # Literature-mining pipeline components
├── scripts/       # Utility and workflow scripts
├── data/          # Lookup tables, discovery datasets, ADEPT files
├── models/        # Trained prediction and generation assets
├── RNN/           # Generator training/inference code
└── icons/         # Application icons and branding assets
```
## Data And Model Assets
This repository expects pretrained models and local data tables to be present. The application uses:
- source datasets such as `EXP.csv`, `MD.csv`, `DFT.csv`, `GC.csv`, `POLYINFO.csv`, and `PI1M.csv`
- derived property tables such as `POLYINFO_PROPERTY.parquet` and `PI1M_PROPERTY.parquet`
- trained checkpoint files under `models/`
- pretrained RNN assets under `RNN/pretrained_model/` and `models/rnn/pretrained_model/`
If you clone only the code without the large assets, several app modules will not run correctly.
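To catch a partial clone early, the asset directories listed above can be verified before launching the app. This is a standalone sketch (not part of the application itself); the path names come from the layout and asset lists in this README:

```python
from pathlib import Path

# Asset locations named in the sections above; adjust if your layout differs.
REQUIRED_ASSETS = [
    "data",                         # lookup tables and discovery datasets
    "models",                       # trained prediction/generation checkpoints
    "RNN/pretrained_model",         # pretrained RNN assets
    "models/rnn/pretrained_model",  # mirrored RNN assets
]

def missing_assets(root: str = ".") -> list[str]:
    """Return the required asset paths that do not exist under `root`."""
    base = Path(root)
    return [p for p in REQUIRED_ASSETS if not (base / p).exists()]

if __name__ == "__main__":
    gaps = missing_assets()
    if gaps:
        print("Missing large assets:", ", ".join(gaps))
```

Running this from the repository root prints any directories that still need to be fetched.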
## Local Development
Use Python 3.10.
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
streamlit run Home.py
```
Then open http://localhost:8501.
## Literature Dependencies
The production app now includes the literature workflow through `requirements.txt`.
If you are working on the literature pipeline in isolation, you can still install:
```bash
pip install -r requirements-literature.txt
```
## Environment Configuration

Create a local `.env` file if needed. A template is provided in `.env.example`.
Key variables used by the platform include:
### LLM / Discovery AI
- `CRC_OPENWEBUI_API_KEY`, `OPENWEBUI_API_KEY`, `OPENAI_API_KEY`
- `CRC_OPENWEBUI_BASE_URL`, `OPENWEBUI_BASE_URL`
- `CRC_OPENWEBUI_MODEL`, `OPENWEBUI_MODEL`, `OPENAI_MODEL`
The Discovery AI page also supports direct bring-your-own-key usage against supported providers from the UI.
### Literature Pipeline
- `PUBMED_EMAIL`
- `PUBMED_API_KEY`
- `SEMANTIC_SCHOLAR_API_KEY`
- `PAGEINDEX_API_KEY`
- `LITERATURE_MODEL_OPTIONS`
### Feedback / Analytics
- `FEEDBACK_WEBHOOK_URL`
- `FEEDBACK_WEBHOOK_TOKEN`
- `APP_DEPLOYMENT_SOURCE`
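A minimal `.env` sketch combining the variables above. All values are placeholders, not taken from the app; include only the entries your deployment needs:

```bash
# LLM / Discovery AI (placeholder values)
OPENAI_API_KEY=your-api-key-here
OPENAI_MODEL=your-model-name-here

# Literature pipeline
PUBMED_EMAIL=you@example.org
PUBMED_API_KEY=your-pubmed-key-here

# Feedback / analytics
FEEDBACK_WEBHOOK_URL=https://example.org/feedback-hook
FEEDBACK_WEBHOOK_TOKEN=your-token-here
APP_DEPLOYMENT_SOURCE=local
```

When running under Docker, the same file can be passed to the container with `docker run --env-file .env ...`.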
## Running With Docker
```bash
docker build -t polymer-discovery .
docker run --rm -p 8501:8501 polymer-discovery
```
The container launches:
```bash
streamlit run Home.py --server.port=8501 --server.address=0.0.0.0 --server.headless=true
```
## Notes For Deployment
- The app is designed as a Streamlit web application.
- Heavy modules depend on local datasets and pretrained checkpoints being available at the expected paths.
- The AI-assisted discovery page requires a valid API key when using in-app LLM generation.
- The feedback page requires a configured webhook to receive submissions.
## Citation And Use
If you use this platform in research or build on top of it, cite the associated paper once published. Until then, reference the repository and the MONSTER Lab platform description.