Show simple item record

dc.contributor.author: Alfaverh, Fayiz
dc.contributor.author: Denai, Mouloud
dc.contributor.author: Sun, Yichuang
dc.date.accessioned: 2020-02-21T01:08:07Z
dc.date.available: 2020-02-21T01:08:07Z
dc.date.issued: 2020-02-17
dc.identifier.citation: Alfaverh, F., Denai, M. & Sun, Y. 2020, 'Demand Response Strategy Based on Reinforcement Learning and Fuzzy Reasoning for Home Energy Management', IEEE Access, vol. 8, 9000577, pp. 39310-39321. https://doi.org/10.1109/ACCESS.2020.2974286
dc.identifier.issn: 2169-3536
dc.identifier.other: PURE: 19566040
dc.identifier.other: PURE UUID: bbe6d1f2-7c95-479e-a1db-179550d05859
dc.identifier.other: Scopus: 85081669247
dc.identifier.uri: http://hdl.handle.net/2299/22329
dc.description.abstract: As energy demand continues to increase, demand response (DR) programs in the electricity distribution grid are gaining momentum and their adoption is set to grow over the years ahead. Demand response schemes seek to incentivise consumers to use green energy and to reduce their electricity usage during peak periods, which helps balance grid supply and demand and generates revenue by selling surplus energy back to the grid. This paper proposes an effective energy management system for residential demand response using Reinforcement Learning (RL) and Fuzzy Reasoning (FR). RL is a model-free control strategy that learns from interaction with its environment by performing actions and evaluating the results. The proposed algorithm accounts for human preference by directly integrating user feedback into its control logic, using fuzzy reasoning in the reward functions. Q-learning, an RL strategy based on a reward mechanism, is used to make optimal decisions for scheduling the operation of smart home appliances: controllable appliances are shifted from peak periods, when electricity prices are high, to off-peak hours, when electricity prices are lower, without affecting the customer's preferences. The proposed approach uses a single agent to control 14 household appliances, a reduced number of state-action pairs, and fuzzy logic in the reward functions to evaluate the action taken in a given state. The simulation results show that the proposed appliance scheduling approach can smooth the power consumption profile and minimise electricity cost while accounting for the user's preferences, feedback on each action taken and preference settings. A user interface is developed in MATLAB/Simulink for the Home Energy Management System (HEMS) to demonstrate the proposed DR scheme. The simulation tool includes features such as smart appliances, electricity pricing signals, smart meters, solar photovoltaic generation, battery energy storage, an electric vehicle and grid supply.
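The abstract above describes a Q-learning agent that shifts controllable appliances from peak-price hours to off-peak hours, with fuzzy reasoning supplying the reward. A minimal sketch of that idea is shown below; the price signal, preference window, penalty weight and hyperparameters are all illustrative assumptions (not the paper's values), and a simple comfort penalty stands in for the paper's fuzzy reward functions:

```python
import random

# Hypothetical hourly tariff: peak pricing 17:00-20:59 (illustrative values only).
PRICES = [0.10] * 17 + [0.30] * 4 + [0.10] * 3   # 24 hourly prices
PREFERRED = set(range(18, 22))                    # hours the user prefers the appliance to run

def reward(hour):
    # Stand-in for the paper's fuzzy reward: cheap electricity is good,
    # running outside the user's preferred window is penalised.
    cost_term = -PRICES[hour]
    comfort_term = 0.0 if hour in PREFERRED else -0.05
    return cost_term + comfort_term

# Tabular Q-learning, reduced to a single state: pick one start hour per day.
Q = [0.0] * 24
alpha, epsilon = 0.1, 0.2
random.seed(0)
for episode in range(5000):
    # Epsilon-greedy action selection over the 24 candidate start hours.
    if random.random() < epsilon:
        hour = random.randrange(24)
    else:
        hour = max(range(24), key=Q.__getitem__)
    # One-step Q-update (no successor state in this single-state form).
    Q[hour] += alpha * (reward(hour) - Q[hour])

best = max(range(24), key=Q.__getitem__)
print("learned start hour:", best)
```

Under these assumed numbers the agent settles on hour 21, the first off-peak hour inside the user's preferred window, which illustrates the shift away from peak prices without violating preferences. A full implementation would replace `reward` with fuzzy membership functions over user feedback and extend the state-action space to cover the 14 appliances described in the abstract.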
dc.format.extent: 12
dc.language.iso: eng
dc.relation.ispartof: IEEE Access
dc.subject: Demand response
dc.subject: Q-learning
dc.subject: fuzzy reasoning
dc.subject: home energy management system
dc.subject: reinforcement learning
dc.subject: smart appliances
dc.subject: smart home
dc.subject: Computer Science(all)
dc.subject: Materials Science(all)
dc.subject: Engineering(all)
dc.title: Demand Response Strategy Based on Reinforcement Learning and Fuzzy Reasoning for Home Energy Management
dc.contributor.institution: Centre for Engineering Research
dc.contributor.institution: Communications and Intelligent Systems
dc.contributor.institution: School of Physics, Engineering & Computer Science
dc.contributor.institution: Department of Engineering and Technology
dc.description.status: Peer reviewed
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85081669247&partnerID=8YFLogxK
rioxxterms.version: AM
rioxxterms.versionofrecord: https://doi.org/10.1109/ACCESS.2020.2974286
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true

