ShaonINT/DataScience-Project-Renewable-Energy-Use-Forecasting

Benchmarking ARIMA, XGBoost, LSTM, and Transformer models for time-series forecasting. Includes data preprocessing, hyperparameter tuning, RMSE-based evaluation, and long-horizon forecasts, highlighting the strengths of deep learning models under limited data conditions.


Time Series Forecasting Benchmark

This repository presents a consolidated benchmarking study comparing classical, machine learning, and deep learning models for time-series forecasting under limited data conditions.

Models Evaluated

  • ARIMA (statistical baseline)
  • XGBoost (tree-based regression with lag features)
  • LSTM (recurrent neural network)
  • Transformer (attention-based deep learning model)

Methodology

  • Time-series data is split into training, validation, and test sets.
  • Hyperparameter tuning is performed for LSTM and Transformer models using validation RMSE.
  • Final evaluation is conducted on a held-out test set using RMSE as the primary metric.
  • The best-performing model is used for long-horizon forecasting.
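
The split-and-evaluate workflow above can be sketched as follows. The split fractions and the naive last-value baseline are illustrative assumptions, not the notebook's actual settings; the point is the chronological split (no shuffling) and RMSE as the scoring metric.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def chronological_split(values, val_frac=0.15, test_frac=0.15):
    """Split a series into train/validation/test blocks in time order,
    so no future observations leak into earlier sets."""
    n = len(values)
    n_test, n_val = int(n * test_frac), int(n * val_frac)
    train = values[: n - n_val - n_test]
    val = values[n - n_val - n_test : n - n_test]
    test = values[n - n_test :]
    return train, val, test

def rmse(y_true, y_pred):
    return float(np.sqrt(mean_squared_error(y_true, y_pred)))

values = np.arange(100.0)  # stand-in series
train, val, test = chronological_split(values)

# A naive "repeat the last training value" baseline shows the evaluation pattern
pred = np.full_like(test, train[-1])
score = rmse(test, pred)
```

Validation RMSE drives model selection and tuning; the test block is touched only once, for the final comparison.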

Key Findings

  • Deep learning models outperform traditional approaches on this dataset.
  • LSTM achieves the lowest test RMSE, showing better generalisation in low-data settings.
  • Transformer performs well during validation but shows mild overfitting on the test set.

Notes

  • Results should be interpreted with caution due to the small test size.
  • Long-horizon forecasts are generated using a recursive strategy, which may accumulate error.
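
The recursive strategy mentioned above can be sketched in a model-agnostic way. Here `model_predict` is a stand-in for any fitted one-step forecaster (an assumed interface, not the notebook's API); the loop makes explicit why errors can accumulate: each step's output becomes part of the next step's input.

```python
import numpy as np

def recursive_forecast(model_predict, history, horizon, n_lags):
    """Forecast `horizon` steps ahead one step at a time, feeding each
    prediction back into the input window. Because later inputs contain
    earlier predictions, any error compounds as the horizon grows."""
    window = list(history[-n_lags:])
    preds = []
    for _ in range(horizon):
        next_val = float(model_predict(np.asarray(window)))
        preds.append(next_val)
        window = window[1:] + [next_val]  # slide the window forward
    return preds

# Toy one-step model for illustration: predict the mean of the current window
history = [1.0, 2.0, 3.0, 4.0]
forecast = recursive_forecast(lambda w: w.mean(), history, horizon=5, n_lags=3)
```

An alternative is direct multi-step forecasting (one model per horizon), which avoids error feedback at the cost of training more models.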

Requirements

  • Python 3.x
  • NumPy
  • Pandas
  • Scikit-learn
  • Statsmodels
  • PyTorch
  • XGBoost
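
Assuming a standard PyPI setup (package names inferred from the list above; the repository does not pin versions), the dependencies can be installed with:

```shell
pip install numpy pandas scikit-learn statsmodels torch xgboost
```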

Usage

Run the notebook cells in order to reproduce the preprocessing, model training, evaluation, and forecasting results.
