Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 44, Issue 9, Pages 4642-4658
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2021.3068799
Keywords
Hidden Markov models; Markov processes; Graphical models; Bayes methods; Probabilistic logic; Mathematical model; Data models; Bayesian networks; model selection; structure learning; time series; information asymmetries; linear Gaussian; autoregressive; Yule-Walker equations
Funding
- Spanish Centre for the Development of Industrial Technology (CDTI) [IDI-20180156]
- Spanish Ministry of Science, Innovation and Universities [PID2019-109247GB-I00, RTC2019-006871-7]
- project BAYES-CLIMA-NEURO, BBVA Foundation's Grant (2019)
Abstract
In a real-life process evolving over time, the relationships between its relevant variables may change. It is therefore advantageous to have a different inference model for each state of the process. Asymmetric hidden Markov models fulfil this dynamic requirement and provide a framework in which the trend of the process can be expressed as a latent variable. In this paper, we modify these recent asymmetric hidden Markov models to include an asymmetric autoregressive component for continuous variables, allowing the model to choose the order of autoregression that maximizes its penalized likelihood on a given training set. Additionally, we show how inference, hidden-state decoding, and parameter learning must be adapted to fit the proposed model. Finally, we run experiments with synthetic and real data to show the capabilities of this new model.
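The abstract's idea of choosing the autoregression order by maximizing a penalized likelihood can be illustrated with a minimal sketch. This is not the authors' implementation: it fits plain AR models via the Yule-Walker equations (named in the paper's keywords) and selects the order with a BIC-style penalty; the function names `yule_walker` and `select_ar_order` are hypothetical.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients and noise variance via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocovariance estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Solve the Toeplitz system R a = r[1:] for the AR coefficients
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - np.dot(a, r[1:])  # innovation (noise) variance
    return a, sigma2

def select_ar_order(x, max_order=5):
    """Pick the AR order minimizing BIC (a penalized-likelihood criterion)."""
    n = len(x)
    best = (np.inf, 0)
    for p in range(1, max_order + 1):
        _, sigma2 = yule_walker(x, p)
        # Gaussian log-likelihood up to a constant is -n/2 * log(sigma2),
        # so minimizing n*log(sigma2) + p*log(n) maximizes the penalized likelihood.
        bic = n * np.log(sigma2) + p * np.log(n)
        best = min(best, (bic, p))
    return best[1]
```

In the paper's asymmetric setting, a choice of this kind is made per context rather than globally, but the penalized-likelihood trade-off between fit (`sigma2`) and model size (`p`) is the same.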