DC Field | Value | Language |
dc.contributor.author | BOUNAB, Abdelmounaim | - |
dc.date.accessioned | 2023-10-17T12:46:22Z | - |
dc.date.available | 2023-10-17T12:46:22Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | https://repository.esi-sba.dz/jspui/handle/123456789/538 | - |
dc.description | Supervisor: Mr. BENSENANE Hamdane | en_US |
dc.description.abstract | Day after day, our lives depend more and more on electronic devices such as smartphones and
computers, and this is accompanied by a growing concern for managing the resources needed to
power these devices. In order to use energy efficiently, it is important to be able to forecast the
energy consumption of a building so that energy production can be optimized for different climatic
conditions. This is particularly important in the context of smart cities and smart grids, which are
currently an active area of research.
Recent studies [4] have shown that artificial intelligence (AI) algorithms based on long short-term
memory (LSTM) neural networks (NNs) are very accurate at predicting energy consumption.
These AI algorithms rely on collecting a long-term history of energy consumption data and
associated weather data. However, processing and analyzing such a large amount of data requires
significant compute and network resources, resulting in additional power consumption on
cloud-based computers.
To solve this problem, low-cost embedded systems can play an important role in predicting energy
consumption in different climates. These small systems typically use a microcontroller and offer
an attractive trade-off in terms of computing power, power consumption, programming flexibility,
size, and cost. However, because microcontrollers have limited processing power and memory, it
is not possible to use the traditional backpropagation (BP) algorithm to train NNs on them.
Instead, the AI model is first trained and tested on a computer using a GPU for high computing
power. Then, the model parameters are compressed and optimized to reduce computational
complexity so that the model can be deployed on the small embedded system. This is done by
reducing the number of model parameters and using efficient bit quantization without degrading
the accuracy too much. In addition, interesting work has been done to run the BP algorithm on the
embedded system itself.
Another promising approach is transfer learning (TL), which involves training and deploying an
NN on a small embedded system starting from models pre-trained on larger computers. TL is a
well-suited technique for deploying NNs on small embedded systems completely autonomously.
Thus, AI algorithms based on LSTM neural networks can greatly contribute to the efficient
management of energy resources. Using low-cost embedded systems, these algorithms can be
deployed in various climatic contexts without consuming excessive energy. This presents a
promising solution to the challenge of energy resource management, especially in the context of
smart cities and smart grids. | en_US |
dc.language.iso | en | en_US |
dc.title | Time Series Forecasting Mastery: Intelligent Energy Consumption Analysis for Microcontrollers | en_US |
dc.type | Thesis | en_US |
Appears in Collections: | Ingénieur |