Funding: Yunnan Natural Science Foundation under contract 98E004Z
Abstract: Stochastic dynamic programming (SDP) is extensively used in the optimization of long-term reservoir operations. Generally, both the steady-state optimal policy and its associated performance indices (PIs) for a multipurpose reservoir are of prime importance. There are two typical ways to derive the PIs: simulation and probability formulas. One disadvantage of these approaches is that they require a pre-specified operation policy. Illuminated by the convergence of the objective function in SDP, a new approach is proposed to determine the desired PIs; its advantage is that it can be applied concomitantly with the solving of the SDP itself. Its efficiency is also tested in a case study.
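To make the idea concrete, the sketch below illustrates how a performance index can be updated inside the same successive-approximation loop that solves the SDP, so that the PI converges together with the objective function rather than being computed afterwards from a pre-specified policy. Everything here is an illustrative assumption, not the paper's actual formulation: the single-reservoir mass balance, the inflow classes and probabilities, the deficit penalty, and the normalized discounted reliability recursion are all placeholders.

```python
import numpy as np

# Illustrative sketch (not the paper's model): SDP value iteration for a
# single reservoir, with a supply-reliability PI updated by the same
# backward recursion so that it converges alongside the value function.

S = np.linspace(0.0, 100.0, 21)          # discretized storage levels (assumed)
Q = np.array([10.0, 25.0, 40.0])         # inflow classes (assumed)
P = np.array([0.3, 0.5, 0.2])            # stationary inflow probabilities (assumed)
RELEASES = np.linspace(0.0, 40.0, 9)     # candidate release decisions
DEMAND, GAMMA, TOL = 30.0, 0.95, 1e-6    # demand, discount factor, tolerance

def transition(s, q, r):
    """Mass balance with clipping at empty/full storage (a simplification:
    releases beyond the available water simply empty the reservoir)."""
    return min(max(s + q - r, 0.0), S[-1])

def idx(s):
    """Index of the nearest discretized storage level."""
    return int(np.argmin(np.abs(S - s)))

V = np.zeros(len(S))        # value function
PI = np.zeros(len(S))       # reliability PI per state, normalized to [0, 1]
policy = np.zeros(len(S))   # best release per state

for it in range(10_000):
    V_new, PI_new = np.empty_like(V), np.empty_like(PI)
    for i, s in enumerate(S):
        best_val, best_r, best_pi = -np.inf, 0.0, 0.0
        for r in RELEASES:
            val = pi = 0.0
            for q, p in zip(Q, P):
                j = idx(transition(s, q, r))
                # Bellman backup with a deficit-penalty stage reward.
                val += p * (-abs(DEMAND - r) + GAMMA * V[j])
                # Same recursion applied to the demand-met indicator; the
                # (1 - GAMMA) weight keeps the PI in [0, 1].
                pi += p * ((1.0 - GAMMA) * (r >= DEMAND) + GAMMA * PI[j])
            if val > best_val:
                best_val, best_r, best_pi = val, r, pi
        V_new[i], policy[i], PI_new[i] = best_val, best_r, best_pi
    converged = np.max(np.abs(V_new - V)) < TOL   # objective-function convergence
    V, PI = V_new, PI_new
    if converged:
        break

print("steady-state policy (release per storage level):", policy)
print("reliability PI per state:", np.round(PI, 3))
```

Because the PI update reuses the Bellman backup under the current best decision, the PI is available the moment the objective function converges; no separate simulation run or probability-formula evaluation of a pre-specified policy is needed, which mirrors the advantage claimed in the abstract.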