Abstract
In this paper we reformulate automatic differentiation (in particular, backward automatic differentiation, also known as adjoint automatic differentiation, AAD) for random variables. While this is merely a formal re-interpretation, it allows one to investigate the algorithms in the presence of stochastic operators such as expectation, conditional expectation, or indicator functions. We then specialize the algorithms to efficiently incorporate non-pathwise operators (such as conditional expectation operators). Under a comparably mild assumption, it is possible to retain the simplicity of the backward automatic differentiation algorithm in the presence of conditional expectation operators. This simplifies important applications, such as, in mathematical finance, the application of backward automatic differentiation to the valuation of Bermudan options or the calculation of xVAs. We prove a generalized version of this result. We then discuss in detail how the framework allows a dramatic reduction of the memory requirements and improves the performance of a tapeless implementation of automatic differentiation: while the implementation provides advantages similar to 'vector AAD' (sometimes called tape compression) for free, it allows improvements beyond this. We present the implementation aspects and show how concepts from object-functional programming, such as immutable objects and lazy evaluation, enable additional reductions of the memory requirements.
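To illustrate the re-interpretation, the following is a minimal sketch (not the paper's implementation; the class `RandomVariable` and the methods `expectation` and `backward` are hypothetical names) of backward automatic differentiation on random variables represented by vectors of Monte-Carlo samples. It handles only the unconditional expectation operator, for which the adjoint rule is exact: since the derivative of E[X] with respect to each sample X_i is 1/n on every path, the adjoint of X is the expectation operator applied to the incoming adjoint. The paper's result concerns the extension of this kind of rule to conditional expectations.

```python
import numpy as np


class RandomVariable:
    """A node of the operator tree: Monte-Carlo samples plus backward rules."""

    def __init__(self, values, parents=(), backward_rules=()):
        self.values = np.asarray(values, dtype=float)
        self.parents = parents                # upstream nodes
        self.backward_rules = backward_rules  # incoming adjoint -> contribution per parent
        self.adjoint = None

    def __add__(self, other):
        return RandomVariable(self.values + other.values, (self, other),
                              (lambda a: a, lambda a: a))

    def __mul__(self, other):
        return RandomVariable(self.values * other.values, (self, other),
                              (lambda a: a * other.values, lambda a: a * self.values))

    def expectation(self):
        # Forward: E[X] is a deterministic random variable (constant across paths).
        # Backward: dy_j/dx_i = 1/n for all paths i, j, so the adjoint of X is
        # the average of the incoming adjoint broadcast to all paths, i.e. the
        # expectation operator applied to the adjoint itself.
        n = len(self.values)
        return RandomVariable(np.full(n, self.values.mean()), (self,),
                              (lambda a: np.full(n, a.mean()),))

    def backward(self, seed):
        # Reverse sweep over the operator tree in topological order.
        order, seen = [], set()

        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent in node.parents:
                    visit(parent)
                order.append(node)

        visit(self)
        for node in order:
            node.adjoint = np.zeros_like(node.values)
        self.adjoint = np.asarray(seed, dtype=float)
        for node in reversed(order):
            for parent, rule in zip(node.parents, node.backward_rules):
                parent.adjoint = parent.adjoint + rule(node.adjoint)


# Usage: v = E[x^2] with x = theta + z, z standard normal, theta = 0.5;
# analytically d E[x^2] / d theta = 2 E[x] = 2 theta = 1.0. Seeding the
# (deterministic) result with the Monte-Carlo weights 1/n makes each
# adjoint the derivative of the scalar value of v; theta enters every
# path, so its scalar derivative is the sum of its per-path adjoints.
n = 100_000
z = RandomVariable(np.random.standard_normal(n))
theta = RandomVariable(np.full(n, 0.5))
x = theta + z
v = (x * x).expectation()
v.backward(seed=np.full(n, 1.0 / n))
print(theta.adjoint.sum())  # approximately 1.0
```

Note that every operation returns a new node and never mutates its operands; these immutable objects are one of the object-functional concepts the abstract mentions as enabling reduced memory requirements.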
| Item Type: | Journal article |
| --- | --- |
| Faculties: | Mathematics, Computer Science and Statistics > Mathematics |
| Subjects: | 500 Science > 510 Mathematics |
| ISSN: | 1469-7688 |
| Language: | English |
| Item ID: | 82420 |
| Date Deposited: | 15 Dec 2021, 15:01 |
| Last Modified: | 15 Dec 2021, 15:01 |