Journal
ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS
Volume 13, Issue -, Pages -
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/2632158
Keywords
Algorithms; Design; Performance; Dynamic power management; intelligent reinforcement and indexing
Funding
- National Research Foundation of Korea [21A20131600005] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)
In this work, an embedded system model is designed with one server that receives requests from a requester through a service queue monitored by a Power Manager (PM). A novel approach based on reinforcement learning is presented to predict the best policy among existing DPM policies and deterministic Markovian nonstationary policies (DMNSP). We apply reinforcement learning, a computational approach to understanding and automating goal-directed learning, which supports different devices according to their DPM characteristics. Reinforcement learning uses a formal framework that defines the interaction between agent and environment in terms of states, actions, and rewards. The capability of this approach is demonstrated with an event-driven simulator, written in Java, driving a power-manageable machine-to-machine device. Our experimental results show that the proposed dynamic power management with a timeout policy yields average power savings of 4% to 21%, and the novel dynamic power management with DMNSP yields average power savings of 10% to 28%, over previously proposed DPM policies.
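The abstract describes the reinforcement-learning framework only at the level of states, actions, and rewards. The sketch below is not the paper's implementation; it is a minimal, hypothetical Q-learning power manager for a device with two power states, written in Java to match the paper's simulator language. All state names, reward values, and hyperparameters are illustrative assumptions.

```java
import java.util.Random;

// Hypothetical sketch: Q-learning for a power-manageable device with
// two states (0 = ACTIVE, 1 = SLEEP) and two actions (0 = STAY,
// 1 = SWITCH). Reward values below are assumptions, not the paper's.
public class DpmQLearning {
    static final int STATES = 2, ACTIONS = 2;
    static final double ALPHA = 0.1;    // learning rate (assumed)
    static final double GAMMA = 0.9;    // discount factor (assumed)
    static final double EPSILON = 0.1;  // exploration rate (assumed)

    final double[][] q = new double[STATES][ACTIONS];
    final Random rng = new Random(42);

    // Illustrative reward: sleeping saves power (+1.0), staying active
    // consumes power (-1.0), and switching pays a transition cost (-0.2).
    double reward(int state, int action) {
        double r = (state == 1) ? 1.0 : -1.0;
        if (action == 1) r -= 0.2;
        return r;
    }

    // One agent-environment interaction: epsilon-greedy action choice,
    // deterministic transition, then the standard Q-learning update.
    int step(int state) {
        int action = (rng.nextDouble() < EPSILON)
                ? rng.nextInt(ACTIONS)
                : (q[state][0] >= q[state][1] ? 0 : 1);
        int next = (action == 1) ? 1 - state : state;
        double target = reward(state, action)
                + GAMMA * Math.max(q[next][0], q[next][1]);
        q[state][action] += ALPHA * (target - q[state][action]);
        return next;
    }

    public static void main(String[] args) {
        DpmQLearning pm = new DpmQLearning();
        int state = 0; // start ACTIVE
        for (int t = 0; t < 10000; t++) state = pm.step(state);
        // In this toy model with no pending requests, the learned policy
        // should switch to SLEEP and stay there.
        System.out.println("ACTIVE prefers SWITCH: " + (pm.q[0][1] > pm.q[0][0]));
        System.out.println("SLEEP prefers STAY:    " + (pm.q[1][0] > pm.q[1][1]));
    }
}
```

A real DPM agent would enrich the state with queue occupancy and inter-arrival statistics from the service queue the PM observes, so the reward can trade power savings against request latency.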
Authors