
Showing posts with the label actuarial tricks and insurance

On rank-ordering very complex datasets

The main idea: the concept of the efficient frontier can be generalized so as to allow the rank-ordering of extremely complex datasets against a large set of mutually conflicting criteria.

0. Posts on this blog are ranked in decreasing order of likeability to myself. This entry was originally posted on 24.01.2022, and the current version may have been updated several times from its original form.

1.1 Say you have a list of things you'd like to compare with regard to a number of criteria. Obviously, if all things rank in the same order across all of these criteria, comparisons are easy. But what to do when entities rank differently on different criteria?

1.2 To make this less esoteric, let's take a simple example: I wish to purchase a used vehicle, and the only relevant considerations are its price (the lower the better), its production year (the more recent the better) and its mileage (the lower the better). I have this narro...
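A minimal sketch of one way such a generalization could work, assuming it amounts to iteratively peeling off Pareto-efficient frontiers (non-dominated sorting): items on the first frontier get rank 1, they are removed, the next frontier gets rank 2, and so on. The cars and their numbers are invented for illustration.

```python
# Minimal sketch: rank items by iteratively peeling Pareto frontiers.
# All criteria are oriented so that lower is better.

def dominates(a, b):
    """a dominates b if a is no worse on every criterion and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def frontier_ranks(items):
    """Rank 1 = Pareto frontier; remove it; rank 2 = next frontier; etc."""
    remaining = dict(items)  # name -> tuple of criteria
    ranks, rank = {}, 1
    while remaining:
        front = [n for n, c in remaining.items()
                 if not any(dominates(o, c) for m, o in remaining.items() if m != n)]
        for n in front:
            ranks[n] = rank
            del remaining[n]
        rank += 1
    return ranks

# Hypothetical used cars as (price, -year, mileage), all "lower is better".
cars = {
    "car A": (9000, -2018, 60000),
    "car B": (7000, -2016, 90000),
    "car C": (9500, -2018, 65000),   # dominated by car A on every criterion
    "car D": (12000, -2020, 30000),
}
print(frontier_ranks(cars))  # A, B, D share rank 1; C lands on rank 2
```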

On removing abnormal claims from reserving triangles

The main idea: iteratively replacing the highest-deviating cell in a reserving triangle with the expected result for that cell removes the distorting effect of abnormally large claims without ignoring them.

0. Posts on this blog are ranked in decreasing order of likeability to myself. This entry was originally posted on 27.10.2021, and the current version may have been updated several times from its original form.

1.1 In a previous post I discussed a general method for removing outliers from a dataset given that one has a model. Let's now try to apply this to non-life claims reserving by triangles.

1.2 The cumulated triangle below includes one obvious outlier, incurred in 2018 and emerging one year later.

1.3 Having a model of the data arranged in triangle form means breaking the triangle down into a vertical (exposure) and a horizontal (pattern) component. These two depend on the choice of reserving method, with the additive method being an obvious example of splitting the triangle into two dim...
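A toy sketch of the iteration, assuming chain-ladder expectations (the post mentions the additive method as another option): the expected value of a cell is the prior cumulative value times the refitted development factor, and the worst-deviating cell is replaced until nothing deviates by more than a tolerance. The triangle below is invented, with one inflated cell playing the abnormal claim.

```python
import numpy as np

# Toy cumulative triangle (rows = accident years, cols = development years);
# np.nan marks the unobserved lower-right part.
tri = np.array([
    [100., 180., 200., 210.],
    [110., 195., 215., np.nan],
    [105., 400., np.nan, np.nan],   # 400 is the planted outlier
    [120., np.nan, np.nan, np.nan],
])

def cl_factors(t):
    """Volume-weighted chain-ladder development factors."""
    f = np.ones(t.shape[1] - 1)
    for j in range(t.shape[1] - 1):
        mask = ~np.isnan(t[:, j]) & ~np.isnan(t[:, j + 1])
        f[j] = t[mask, j + 1].sum() / t[mask, j].sum()
    return f

for _ in range(5):                       # a few replacement iterations
    f = cl_factors(tri)
    worst, where, exp_val = 0.0, None, None
    for i in range(tri.shape[0]):
        for j in range(1, tri.shape[1]):
            if np.isnan(tri[i, j]) or np.isnan(tri[i, j - 1]):
                continue
            expected = tri[i, j - 1] * f[j - 1]
            dev = abs(tri[i, j] / expected - 1)
            if dev > worst:
                worst, where, exp_val = dev, (i, j), expected
    if where is None or worst < 0.10:    # stop once no cell deviates by >10%
        break
    tri[where] = exp_val                 # replace the worst cell, then refit

print(np.round(tri, 1))  # the 400 cell has been pulled toward its expectation
```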

On aligning agents and insurers

The main idea: pay agent commissions in a staggered fashion over a limited period, which allows claims to materialise and captures some of the agent's underwriting intuition.

0. Posts on this blog are ranked in decreasing order of likeability to myself. This entry was originally posted on 01.05.2022, and the current version may have been updated several times from its original form.

1.1 There is an obvious conflict of interest non-life insurers suffer from when they pay agents a commission to sell policies: an agent getting a fixed percentage of the premium has an interest in maximising revenue per unit of time, whilst the insurer is interested in maximising profit per unit of time. In other words, agents have no reason to care about which clients will cause more or fewer claims. If they know, they conveniently forget.

1.2 Most if not all insurers try to mitigate this issue by centralising underwriting, so as to leave no power in the hands of the agent when it comes to...
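One way the staggering could be parameterised, as a hedged sketch only (the scaling rule and the numbers are my assumptions, not the post's formula): the total commission is split into tranches paid over a few years, each tranche shrinking with the loss ratio observed on the agent's book up to that point.

```python
# Sketch of a staggered commission: 10% of premium, paid in three annual
# tranches, each scaled by the agent's loss experience to date (assumed rule).

def tranche_payments(premium, total_rate, loss_ratios, weights=(0.5, 0.3, 0.2)):
    """Each tranche pays weight * total commission * max(0, 1 - loss ratio to date)."""
    total = premium * total_rate
    return [round(total * w * max(0.0, 1.0 - lr), 2)
            for w, lr in zip(weights, loss_ratios)]

# Agent book: 100k premium; the loss ratio drifts up as claims materialise.
print(tranche_payments(100_000, 0.10, loss_ratios=[0.20, 0.45, 0.60]))
# -> [4000.0, 1650.0, 800.0]: later tranches shrink as claims emerge,
#    so an agent who picks bad risks earns less in total
```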

On removing outliers

The main idea: iteratively replacing the datapoint most distant from its expected value with that expected value allows a model to account for outliers without ignoring them.

0. Posts on this blog are ranked in decreasing order of likeability to myself. This entry was originally posted on 17.10.2021, and the current version may have been updated several times from its original form.

1.1 Another technique which would be rather obvious; I'm just writing about it as I don't remember encountering it. The application to claims reserving could be a bit more original; that's coming in a next post.

1.2 So, a technique that helps smooth out any outliers in your dataset (as long as you have a model of the underlying pattern, which if you don't, how do you know that you even have outliers?) is to iteratively replace the datum that diverges most (in absolute value or percentage, depending on context) with its expected value. Recalculate the model parameters given the new data, and re...
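A minimal sketch of the loop, assuming a straight-line model as the "model of the underlying pattern"; the data and the planted outlier are invented.

```python
import numpy as np

# Invented data: y = 3x + 2 plus noise, with one planted outlier at x = 6.
x = np.arange(10, dtype=float)
y = 3.0 * x + 2.0 + np.random.default_rng(0).normal(0, 0.5, 10)
y[6] += 15.0  # the outlier

for _ in range(3):                           # a few replacement rounds
    a, b = np.polyfit(x, y, 1)               # refit the model to current data
    fitted = a * x + b
    worst = np.argmax(np.abs(y - fitted))    # most divergent datum (absolute)
    if abs(y[worst] - fitted[worst]) < 2.0:  # stop when nothing sticks out
        break
    y[worst] = fitted[worst]                 # replace it with the expected value

print(np.round(np.polyfit(x, y, 1), 2))      # slope/intercept back near (3, 2)
```

The point of recalculating after each replacement is that the first fit is itself contaminated by the outlier; as the worst datum is replaced, the fitted values move toward the true pattern and the remaining deviations shrink.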

On perpetual insurance

The main idea: proposing an insurance schedule whereby one premium is paid and cover continues indefinitely until a claim is made, as a way to help with products afflicted by moral hazard.

0. Posts on this blog are ranked in decreasing order of likeability to myself. This entry was originally posted on 25.05.2022, and the current version may have been updated several times from its original form.

1.1 Here's the idea: insurance that requires you to pay premium once, and rolls over forever. Until you make a claim, that is, whereupon the policy lapses (the claim is still honoured, of course).

1.2 To explain what value this setup would provide, let's take the example of health insurance.

1.3 With health insurance, by the time you apply all the safeguards aimed at keeping people from making spurious or merely non-critical claims, you end up with a policy so complex that all flexibility and transparency are lost. And yet, health insurance only works if claimants limit themselves to making ...
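The excerpt does not price the product, but a back-of-envelope sketch is possible under assumptions I am adding here: a constant annual claim probability p, a fixed claim size C, a discount rate r, and lapse at the first claim. The expected discounted payout is then a geometric series, which collapses to C * p / (p + r).

```python
# Back-of-envelope sketch (my assumptions, not the post's): break-even single
# premium for a policy paying a fixed claim C, lapsing at the first claim,
# with constant annual claim probability p and discount rate r.
#
# Premium = C * sum_t p * (1-p)^(t-1) * v^t,  with v = 1/(1+r)
#         = C * p * v / (1 - (1-p) * v)       (geometric series)
#         = C * p / (p + r)

def perpetual_premium(claim, p, r):
    v = 1.0 / (1.0 + r)
    return claim * p * v / (1.0 - (1.0 - p) * v)

print(round(perpetual_premium(claim=10_000, p=0.05, r=0.03), 2))
# -> 6250.0, i.e. p/(p+r) = 0.625 of the claim: since cover runs forever,
#    even a modest annual claim rate makes the one-off premium substantial
```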

On tail estimation in triangle-based reserving

The main idea: applying autoregression to the development factors to estimate the tail.

0. Posts on this blog are ranked in decreasing order of likeability to myself. This entry was originally posted on 06.10.2021, and the current version may have been updated several times from its original form.

1.1 This technique will be fairly obvious to anyone who has reserved non-life by triangles, and the only reason I'm posting about it is that I have not encountered it anywhere. Haven't looked too hard, though.

1.2 So you set up your reserving triangle, which you hope to complete by chain-ladder, but (oh no!) development does not appear to be over (and even if it does appear to be, is it really?).

1.3 What do you do about that tail: replicate the last factor, replicate its square, double it, what? Well, first of all, cumulate your development factors.

1.4 Now the next step is obvious. Just set up a simple autoregressive model on the cumulated factors, predicting each based on the factor preceding it....
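A sketch of the recipe as I read it, assuming "cumulate" means taking running products of the observed chain-ladder factors; the factors below are invented. A linear regression of each cumulated factor on its predecessor gives an AR(1)-style recursion, and if its slope is below one, iterating it converges to a fixed point that plays the role of the ultimate, from which the tail factor follows.

```python
import numpy as np

f = np.array([2.0, 1.4, 1.15, 1.07, 1.03])   # observed development factors
F = np.cumprod(f)                            # cumulated factors

# AR(1)-style regression: predict each cumulated factor from its predecessor.
a, b = np.polyfit(F[:-1], F[1:], 1)

# Iterate the recursion F_next = a*F + b beyond the observed data...
F_t = F[-1]
for _ in range(50):
    F_t = a * F_t + b

# ...or jump straight to the fixed point b/(1-a) when the recursion contracts.
ultimate = b / (1 - a) if abs(a) < 1 else F_t
print(round(ultimate / F[-1], 4))            # the implied tail factor
```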