Development of neurocontrollers with evolutionary reinforcement learning
|dc.identifier.citation||Conradie, A. and Aldrich, C. 2005. Development of neurocontrollers with evolutionary reinforcement learning. Computers and Chemical Engineering. 30: pp. 1-17.|
The growth in intelligent control is fuelled, among other things, by the realization that nonlinear control theory cannot yet provide practical solutions to present-day control challenges. Overdesign is therefore often used to avoid highly nonlinear regions of operation, despite the risk of significant economic penalties in both capital and operating costs. The Symbiotic Adaptive Neuro-Evolution (SANE) algorithm combines the design and controller development functions into a single coherent step through the use of evolutionary reinforcement learning. SANE locates the optimal operating steady state and develops a neurocontroller that maximises economic performance. In this paper, the use of SANE to optimize and control a bioreactor at its economically optimal steady state is discussed. The developed neurocontroller proved robust in the presence of significant process uncertainty, owing to the generalization afforded by the learning algorithm. More autonomous control is thus achieved in operating regions of greater complexity and uncertainty, and overdesign in the process industries may be limited by the use of the SANE algorithm.
|dc.title||Development of neurocontrollers with evolutionary reinforcement learning|
|dcterms.source.title||Computers and Chemical Engineering|
|curtin.accessStatus||Fulltext not available|
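The symbiotic scheme named in the abstract — a population of individual hidden neurons, candidate networks sampled from that population, and each neuron credited with the average fitness of the networks it joined — can be sketched on a toy objective. This is a minimal illustration only: the network sizes, mutation step, and setpoint-tracking fitness below are assumed stand-ins, not the paper's bioreactor model or its economic performance measure.

```python
import math
import random

random.seed(0)

# Illustrative sizes only; the paper's bioreactor networks are not reproduced here.
N_INPUTS, N_HIDDEN, N_OUTPUTS = 2, 4, 1
POP_SIZE, NETS_PER_GEN, GENERATIONS = 40, 60, 30

def new_neuron():
    # Each individual in the population encodes one hidden neuron:
    # its input-side and output-side weights.
    return {"w_in": [random.uniform(-1, 1) for _ in range(N_INPUTS)],
            "w_out": [random.uniform(-1, 1) for _ in range(N_OUTPUTS)]}

def net_output(neurons, x):
    # Feed-forward pass through one sampled network (tanh hidden units, linear output).
    out = [0.0] * N_OUTPUTS
    for n in neurons:
        h = math.tanh(sum(w * xi for w, xi in zip(n["w_in"], x)))
        for j in range(N_OUTPUTS):
            out[j] += n["w_out"][j] * h
    return out

def fitness(neurons):
    # Stand-in objective: track a simple setpoint on a few sample points.
    # In the paper this role is played by an economic performance measure
    # evaluated on the bioreactor simulation.
    err = 0.0
    for x in [(0.1, 0.2), (0.5, -0.3), (-0.4, 0.9)]:
        y = net_output(neurons, x)[0]
        err += (y - (x[0] + x[1])) ** 2
    return -err  # higher is better

pop = [new_neuron() for _ in range(POP_SIZE)]
history = []
for gen in range(GENERATIONS):
    scores = [[] for _ in pop]
    for _ in range(NETS_PER_GEN):
        # Symbiosis: a candidate network is a random subset of the neuron population.
        idx = random.sample(range(POP_SIZE), N_HIDDEN)
        f = fitness([pop[i] for i in idx])
        for i in idx:
            scores[i].append(f)
    # A neuron's fitness is the average fitness of the networks it took part in.
    avg = [sum(s) / len(s) if s else -float("inf") for s in scores]
    history.append(max(avg))
    order = sorted(range(POP_SIZE), key=lambda i: avg[i], reverse=True)
    elite = [pop[i] for i in order[: POP_SIZE // 2]]
    # Refill the population with mutated copies of the elite neurons.
    children = [{k: [w + random.gauss(0.0, 0.1) for w in v] for k, v in n.items()}
                for n in elite]
    pop = elite + children
```

Because selection acts on neurons rather than whole networks, neurons that cooperate well in many sampled networks are retained, which is the generalization mechanism the abstract credits for the controller's robustness.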