Coexistence of multiple wireless access technologies will be a defining feature of next-generation wireless networks, and the integration of heterogeneous wireless networks will meet mobile users' demand for high-performance services. To address the distinct quality of service (QoS) requirements of different service-type users in a heterogeneous environment, this paper proposes a handoff selection algorithm based on a Markov decision model. A heterogeneous wireless network architecture based on the software defined network (SDN) is established to realize transparent control of the heterogeneous networks. The SDN controller maintains the state information of the heterogeneous wireless networks and dynamically schedules network resources according to the performance characteristics of each network. If the network state is sampled at equal intervals, the state at the next moment depends only on the current state and the action taken, not on the history of past states; the handoff selection problem is therefore modeled as a discrete-time, continuous-state Markov process. The Markov process predicts the network state at the next moment, yielding the one-step reward of an action: a positive reward represents income, and a negative reward represents cost. An immediate reward function is constructed separately for real-time and non-real-time service users according to the network state attributes relevant to each.
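As a minimal illustration of the one-step prediction described above, the sketch below discretizes the network state and computes the expected one-step reward of an action from the predicted next-state distribution. The transition matrices, reward values, and state labels are hypothetical placeholders, not parameters from the paper:

```python
import numpy as np

# Hypothetical discretized network states:
# 0 = lightly loaded, 1 = moderately loaded, 2 = congested.
# P[a][s, s'] is the transition probability when action a is taken;
# the Markov property means s' depends only on the current state s and a.
P = {
    0: np.array([[0.7, 0.2, 0.1],
                 [0.3, 0.5, 0.2],
                 [0.1, 0.4, 0.5]]),   # action 0: stay on the current network
    1: np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.6, 0.2],
                 [0.1, 0.3, 0.6]]),   # action 1: hand off to the candidate network
}

# Hypothetical one-step rewards r[a][s']:
# positive values represent income, negative values represent cost.
r = {0: np.array([1.0, 0.2, -0.5]),
     1: np.array([0.8, 0.4, -0.2])}

def expected_one_step_reward(state, action):
    """Predict the next-state distribution and return the expected one-step reward."""
    return float(P[action][state] @ r[action])

for a in (0, 1):
    print(f"action {a}: expected reward from state 1 = {expected_one_step_reward(1, a):.3f}")
```

Here handing off (action 1) earns the larger expected reward from the moderately loaded state, because it shifts probability mass away from the costly congested state.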
Five state attributes of the wireless network are considered: delay, delay jitter, bandwidth, error rate, and network load; the immediate reward function is constructed as their weighted sum. Because different service-type users distribute importance differently across these attributes, the attribute weights are determined by the analytic hierarchy process. Over the long term, an objective function composed of the sequence of immediate rewards measures the future long-term return. An expected reward function based on state-action pairs is then constructed, and the handoff strategy that maximizes the long-term expected return is obtained by the iterative method of successive approximation. The proposed Markov decision model based handoff selection algorithm is simulated on the Matlab platform. The simulation results show that the proposed method can select the optimal handoff strategy for each service type and reduce the blocking rate, thereby improving users' QoS and the resource utilization of the wireless networks.
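The three steps described above — AHP weighting, the weighted-sum immediate reward, and the successive-approximation iteration — can be sketched as follows. The pairwise comparison matrix, attribute scores, and discount factor are hypothetical stand-ins, not the paper's actual parameters:

```python
import numpy as np

# --- AHP: derive attribute weights from a pairwise comparison matrix ---
# Rows/columns: delay, delay jitter, bandwidth, error rate, network load.
# Entry A[i, j] states how much more important attribute i is than j
# (hypothetical judgments for a real-time service user, where delay dominates).
A = np.array([
    [1,   2,   3,   4,   4],
    [1/2, 1,   2,   3,   3],
    [1/3, 1/2, 1,   2,   2],
    [1/4, 1/3, 1/2, 1,   1],
    [1/4, 1/3, 1/2, 1,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                      # principal eigenvector, normalized to sum to 1

# --- Immediate reward: weighted sum of normalized attribute scores ---
# attrs holds the five attribute scores of a candidate network, already
# normalized to [0, 1] and oriented so that larger is better
# (e.g. 1 - normalized delay).
def immediate_reward(attrs, weights):
    return float(np.dot(weights, attrs))

# --- Successive approximation (value iteration) over state-action pairs ---
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a][s, s'] transition probabilities, R[a][s] immediate rewards.
    Iterates the expected-reward function until the change falls below tol;
    returns the value function and the handoff policy (action per state)."""
    n_states = next(iter(P.values())).shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in sorted(P)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```

The discount factor `gamma` trades off immediate QoS against long-term return; iterating the Bellman maximization in `value_iteration` is exactly the "successive approximation" used to extract the handoff strategy with the maximum long-term expected reward.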
Keywords:
- heterogeneous wireless networks
- handoff selection
- Markov process
- analytic hierarchy process