Dynamic models of dense medium coal preparation using sensitivity analysis and reinforcement prey predator optimization

Abstract: Dense Medium Coal Preparation (DMCP) is currently the main method for coal washing in China. Its dynamic model is the foundation of characteristic analysis, optimization, and control, so an accurate dynamic model is key to achieving intelligent coal preparation. To this end, mechanism knowledge, statistical analysis, and artificial intelligence methods were integrated to construct a hybrid knowledge- and data-driven dynamic model of the DMCP process. ① Based on the material balance principle, a dynamic mechanism model comprising three sub-processes was established: slurry mixing, dense medium separation, and dense medium recovery. ② A variance-based Sobol' sensitivity analysis, driven by a low-discrepancy Sobol' sequence, was applied to the model parameters to identify the key parameters that dominate model behavior, such as the slurry separation ratio and the ash ratio constants of the overflow and underflow. Only these key parameters need to be optimized, while the unimportant parameters can be assigned values within a given range, which reduces the computational cost of the subsequent parameter optimization. ③ To improve the accuracy of parameter optimization, a Reinforcement Prey Predator Optimization (RPPO) algorithm was proposed. Drawing on the idea of reinforcement learning, it lets each search agent learn from its own history to adapt its step size, overcoming the tendency of the conventional prey-predator optimization algorithm to fall into local optima because of its constant step size. In the experimental study, the ranges of the model parameters were first set according to process knowledge and engineering experience; the unimportant parameters were fixed at values within these ranges, while the key parameters were optimized with the RPPO algorithm against practical plant data. The results show that the RPPO algorithm improves the search for the optimal key parameters: the root mean square error and the standard deviation of the probability density of the proposed model are smaller than those of the other models, so the model better reflects the actual industrial process, demonstrating its effectiveness and accuracy. This study provides an important model foundation for the research and application of intelligent DMCP systems.
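The mechanism model is built from material balances over the three sub-processes. As a minimal illustration of what a material-balance dynamic equation looks like, the sketch below simulates a single perfectly-mixed slurry tank; it is a generic stand-in assumed for illustration, not the paper's actual equations, and the parameter values are invented. For constant volume V, the component balance d(Vc)/dt = q_in·c_in − q_out·c reduces to dc/dt = q_in·(c_in − c)/V.

```python
# Hedged sketch: material balance for one perfectly-mixed slurry tank,
# a generic stand-in for the paper's slurry mixing sub-process.
# All names and parameter values here are illustrative assumptions.

def simulate_mixing(c0, c_in, q_in, V, t_end, dt=0.01):
    """Forward-Euler integration of dc/dt = q_in * (c_in - c) / V."""
    c = c0
    for _ in range(int(t_end / dt)):
        c += dt * q_in * (c_in - c) / V
    return c

# Step change in feed medium density: the outlet density relaxes toward
# c_in with time constant V / q_in (here 5 time units).
c_final = simulate_mixing(c0=1.4, c_in=1.8, q_in=2.0, V=10.0, t_end=25.0)
```

After five time constants the outlet density has essentially reached the new feed density, which is the kind of first-order dynamic response such balance equations produce.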
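The variance-based Sobol' screening step can be sketched as follows. This toy example uses the Saltelli A/B/AB estimator for first-order indices on an invented stand-in objective (`toy_model` and its parameter roles are assumptions, not the DMCP model), and plain Monte Carlo sampling in place of the paper's low-discrepancy Sobol' sequence to keep the sketch dependency-free beyond NumPy.

```python
import numpy as np

# Hedged sketch: variance-based (Sobol') first-order sensitivity indices
# via the Saltelli A/B/AB scheme. toy_model is an illustrative stand-in
# whose first two parameters dominate, mimicking "key" parameters such as
# the separation ratio and ash ratio constants.

def toy_model(x):
    # x columns: [separation_ratio, ash_ratio, nuisance_param] (invented)
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.05 * x[:, 2]

def first_order_indices(model, dim, n=2**14, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))            # base sample
    B = rng.random((n, dim))            # independent resample
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]             # replace column i only
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

S = first_order_indices(toy_model, dim=3)
# Parameters with large S[i] are treated as "key" and passed to the
# optimizer; small-S[i] parameters can be fixed anywhere in their range.
```

The screening decision is exactly the cost-saving step described in the abstract: only the high-index parameters enter the subsequent optimization.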
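The adaptive step-size idea behind RPPO can also be sketched. The code below is an illustrative interpretation, not the paper's algorithm: each agent chases the best solution and flees the worst ("predator"), and its step size is reinforced from its own history (grown after an improving move, shrunk after a failed one) instead of being held constant. All update rules, constants, and the toy objective are assumptions.

```python
import numpy as np

# Hedged sketch of per-agent adaptive step sizes in a prey-predator-style
# optimizer. The reinforcement idea: an agent's own history of success or
# failure adjusts its step, avoiding the constant-step local-optimum trap.

def rppo_sketch(f, dim, bounds, n_agents=20, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_agents, dim))   # prey positions
    step = np.full(n_agents, 0.1 * (hi - lo))       # per-agent step size
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    for _ in range(iters):
        worst = X[fit.argmax()]                     # acts as "predator"
        for i in range(n_agents):
            d = (best - X[i]) + 0.3 * (X[i] - worst)  # chase best, flee worst
            cand = np.clip(X[i] + step[i] * d
                           + step[i] * rng.normal(size=dim), lo, hi)
            fc = f(cand)
            if fc < fit[i]:                         # reward: accept, grow step
                X[i], fit[i] = cand, fc
                step[i] = min(step[i] * 1.1, hi - lo)
            else:                                   # penalty: shrink step
                step[i] *= 0.7
        best = X[fit.argmin()].copy()
    return best, float(fit.min())

# Toy parameter-fitting objective (sphere), standing in for the model's
# fitting error; the true optimum is at 0.5 in every dimension.
best, val = rppo_sketch(lambda x: float(np.sum((x - 0.5) ** 2)),
                        dim=4, bounds=(0.0, 1.0))
```

In the paper's setting, `f` would be the discrepancy between model output and plant measurements over the key parameters identified by the sensitivity analysis.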

     
