Minfang Li
Deep learning techniques to forecast solar radiation.
Supervisors: Edoardo Patti, Alessandro Aliberti, Marco Castangia. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2024
Abstract:
As one of the main forms of renewable energy, solar energy is receiving growing attention for its application in photovoltaic power generation. Solar power generation has many advantages, including cleanliness, environmental friendliness, and renewability. However, because solar radiation is intermittent and fluctuating, the stability and reliability of solar power generation systems face challenges. To plan and manage electrical energy effectively, accurately predicting solar radiation therefore becomes a crucial task. Traditional time-series forecasting methods have certain limits in processing large-scale datasets and in forecasting accuracy, so we turn to deep learning methods to obtain better performance. First, we selected LSTM, which performs excellently in time-series prediction, as the baseline model. To further explore sequence models, we introduced the LSTM Seq2Seq model. Given the sequential-processing limitation of the LSTM architecture, we turned to the Transformer, whose self-attention mechanism processes all elements of a sequence simultaneously and thus better captures the relationships between them. However, we found that the Transformer did not outperform LSTM by much. In response, we further modified the Transformer into three variants: Transformer-CNN, Transformer-MA, and Transformer-mix, which incorporate changes such as added convolutional layers and extended encoders, aiming for optimal performance on the solar radiation prediction task. Their performance was noticeably better than that of LSTM and the Transformer, but only slightly superior to LSTM Seq2Seq. Notably, Transformer-CNN exhibited outstanding performance, closely rivaling Transformer-mix overall while maintaining a simpler structure. Taking advantage of the strengths of both LSTM and the Transformer, we propose TFT, a hybrid model that combines the two.
TFT has a more complex structure and parameter settings. While tuning its hyperparameters, we found that model performance is not simply positively related to structural complexity but is closely tied to the dataset. Finally, we compared multiple models on a 6-hour wavelet prediction task, including LSTM, LSTM Seq2Seq, Transformer, Transformer-CNN, Transformer-MA, Transformer-mix, and TFT. Overall, TFT showed a significant advantage in forecasting solar radiation. Its performance is especially strong for longer-term prediction, indicating that TFT has unique advantages in capturing long-term dependencies.
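The abstract notes that, unlike the LSTM's step-by-step processing, the Transformer's self-attention mechanism handles all sequence elements simultaneously. As a minimal illustration of that idea only (not the thesis's actual implementation; the function name, embedding size, and toy data below are assumptions), scaled dot-product self-attention over a whole sequence can be sketched in numpy as:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over an entire sequence at once.

    X: (seq_len, d_model) array of timestep embeddings.
    Returns: (seq_len, d_model) attended representations, where every
    output row is a weighted mix of ALL input timesteps.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities, computed in one shot
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax: each row sums to 1
    return weights @ X                               # blend every timestep into every position

# Toy sequence: 6 hourly radiation embeddings of dimension 4 (illustrative data)
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
out = self_attention(X)
print(out.shape)  # (6, 4)
```

Because the pairwise score matrix is built in a single matrix product, no timestep has to wait for the previous one to be processed, which is the contrast with the LSTM's recurrence that the abstract draws.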
Supervisors: Edoardo Patti, Alessandro Aliberti, Marco Castangia
Academic year: 2023/24
Publication type: Electronic
Number of pages: 64
Additional information: Embargoed thesis. Full text not available
Subjects:
Degree programme: Master's degree programme in Ingegneria Informatica (Computer Engineering)
Degree class: New regulations > Master's degree > LM-32 - INGEGNERIA INFORMATICA
Partner companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/30922