Oberseminar Finanz- und Versicherungsmathematik
Jointly organised by Prof. Dr. Francesca Biagini, Prof. Dr. Thilo Meyer-Brandis, Prof. Dr. Christoph Knochenhauer, Prof. Dr. Aleksey Min, Prof. Dr. Matthias Scherer and Prof. Dr. Rudi Zagst
Room B 349, Department of Mathematics, LMU Munich, Theresienstr. 39, 80333 Munich.
Dates | Times | Speakers | Titles |
---|---|---|---|
Mon, 19 May | TBA | TBA | TBA |
Mon, 23 June | TBA | Jae Youn Ahn, Ewha Womans University (Korea) | Interpretable Generalized Coefficient Models Integrating Deep Neural Networks within a State-Space Framework for Insurance Credibility |
Mon, 21 July | TBA | TBA | TBA |
Mon, 6 October | see Munich Risk and Insurance Days 2025 | | |
Tue, 7 October | see Munich Risk and Insurance Days 2025 | | |
Credibility methods in insurance provide a linear approximation, formulated as a weighted average of the claim history, which makes them highly interpretable for estimating the predictive mean of the a posteriori rate. In this presentation, we extend the credibility method to a generalized coefficient regression model in which the credibility factors, interpreted as regression coefficients, are modeled as flexible functions of the claim history. This extension, structurally similar to the attention mechanism, enhances both predictive accuracy and interpretability. A key challenge in such models is non-identifiability: the credibility factors may not be uniquely determined, and without identifiability of the generalized coefficients their interpretability remains uncertain. To address this, we first introduce a state-space model (SSM) whose predictive mean has a closed-form expression. We then extend this framework by incorporating neural networks while allowing the predictive mean to retain a closed-form representation in terms of generalized coefficients, and we show that this model guarantees the identifiability of the generalized coefficients. As a result, the proposed model not only offers flexible estimates of future risk, matching the expressive power of neural networks, but also ensures an interpretable representation of the credibility factors, with identifiability rigorously established. This presentation is based on joint work with Mario Wuethrich (ETH Zurich) and Hong Beng Lim (Chinese University of Hong Kong).
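For orientation, the sketch below contrasts the classical Bühlmann credibility predictor (a standard textbook result) with one plausible reading of the generalized-coefficient form described in the abstract. The symbols $\mu_0$, $\sigma^2$, $\tau^2$ and the coefficient functions $\alpha_t$ are illustrative notation only; the exact parametrization and the identifiability argument are the subject of the talk.

```latex
% Classical Buehlmann credibility (standard result): the predictive
% mean is a fixed weighted average of the individual claim history
% and the collective mean.
\[
  \hat{\mu}_{n+1} \;=\; Z\,\bar{X}_n + (1 - Z)\,\mu_0,
  \qquad
  Z \;=\; \frac{n}{\,n + \sigma^2/\tau^2\,},
\]
% where \bar{X}_n is the mean of the claim history X_1, ..., X_n,
% \mu_0 the collective (prior) mean, \sigma^2 the within-risk
% variance, and \tau^2 the between-risk variance.
%
% Generalized-coefficient form (illustrative reading of the
% abstract): the fixed weights are replaced by flexible,
% attention-like functions \alpha_t of the claim history,
% e.g. parametrized by a neural network.
\[
  \hat{\mu}_{n+1} \;=\; \alpha_0(X_{1:n})\,\mu_0
  \;+\; \sum_{t=1}^{n} \alpha_t(X_{1:n})\, X_t .
\]
```

In this reading, the identifiability question raised in the abstract is whether the coefficient functions $\alpha_t$ are uniquely determined by the predictive mean; the classical case avoids the issue because $Z$ is a fixed scalar.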