Waterfall Methodology


1. Plan

The Plan phase moves a project from an approved, prioritized High-Level Analysis to the point where it is ready to begin analyzing and defining requirements for the solution.

Execute Entry Criteria

Requirement - Accountable Role
□ Approved Six Questions - Idea Owner

 

Plan Activities

Activity - Accountable Role

□ Request Clarity project number - IT Delivery Manager
Description: The delivery manager requests that the project be set up in Clarity. The Clarity Team creates a project number in Clarity.
Templates: Clarity Project Request Form

□ Assign a Project Manager - IT Delivery Manager
Description: The delivery manager works with the resource managers to assign a Project Manager to the project.



2. Analysis

The Analysis phase defines the requirements of the system and states in a clear, precise fashion the functions of the system.

Analysis Activities
Activity - Accountable Role

□ Elicit, analyze, and validate detailed requirements - BA Lead
Description: Based on the high-level business requirements, create detailed functional and non-functional requirements for the solution.
Templates: Non-Functional Requirements; Requirements Packet; Business Data Requirements; Service Level Agreements

Sub-Activities - Contributing Roles
□ Elicit and define business requirements (functional/non-functional) - BA Lead, Stakeholders
□ Document business requirements (functional/non-functional) - BA Lead, Tech Lead, Project Sponsor, UX Lead
□ Evaluate information security needs - Information Security Lead, Tech Lead, BA Lead
□ Review and approve requirements - Project Manager, Solution Architect Lead, Data Architect Lead, Operational Lead *NEW
□ Define Business Data Requirements - BA Lead, Data Architect Lead
□ Ensure functional/non-functional requirements are testable - System Test Lead, Performance Test Lead, Operational Lead *NEW


3. Design
The primary objective of the Design phase is to create a design that satisfies the agreed application requirements. In the Design phase, the Systems Development Life Cycle (SDLC) process continues to move from the “what” questions of the analysis phase to the “how” questions.

Design Activities

Activity - Accountable Role

□ Complete user interface design - UX Lead
Description: Based on the user analysis, design and test the user interface of the application.
Templates: User Experience Prototype/Wireframe

Sub-Activities - Contributing Roles
□ Create wireframes - UX Lead, BA Lead, Tech Lead, Project Sponsor
□ Create prototypes - UX Lead, BA Lead, Tech Lead
□ Conduct usability tests - UX Lead, BA Lead
□ Document design specifications - UX Lead

□ Design solution - Tech Lead
Description: Use the detailed requirements to investigate the best solution and create a technical design. The design should be directly traceable to the detailed requirements.
Templates: Business Solution Design; Enterprise Architecture; Technical Design Document
 

4. Build 
During the Build phase, developers execute the plans laid out in the Design phase. The developers create the database, generate code for the data flow process, and create the user interface. During construction, unit and integration tests are executed, test scripts are created, and test environments and data are prepared. 


Build Activities
Activity - Accountable Role

□ Begin Transition planning - Project Manager
Description: Begin creating the necessary plans to transition the finished product to the customers.
Templates: Organizational Change Management Plan; Training Plan; Implementation Plan; Documentation Plan; Support Transition Plan; Detailed Hardware/Software Inventory

Sub-Activities - Contributing Roles
□ Begin creating Organizational Change Management Plan - OCM Lead, Project Manager, Project Sponsor
□ Begin creating Training Plan - BA Lead, Project Manager, Project Sponsor
□ Begin creating Production Validation Plan - Change Owner, Project Manager, Release Manager, BA Lead
□ Begin creating Back Out Plan - Release Manager, Project Manager, Tech Lead
□ Begin creating Support Transition Plan - Operational Lead, Project Manager
□ Complete Documentation Plan - Technical Writer, Team Leads



5. Test & Production Readiness

The Test phase takes working software or technology and tests it for functional, system, performance, and user acceptance criteria to ensure the deliverable meets quality standards.

Test Activities
Activity - Accountable Role

□ Test software or technology solution - System Test Lead, Performance Test Lead
Description: Test the software and technology, including system, performance, and user acceptance testing, to ensure the software or technology satisfies business goals and objectives.
Templates: Service Level Agreement

Sub-Activities - Contributing Roles
□ Move and configure code through environments - Software Configuration Manager, Developers
□ Complete system testing - System Test Lead
□ Complete performance testing (pass PEGT) - Performance Test Lead, Performance & Capacity
□ Verify authentication/authorization models - System Test Lead, Info Security Lead
□ (If applicable) Complete security assessment test - System Test Lead, Info Security Lead
□ (If applicable) Validate monitoring and detection capabilities - Info Security Lead
□ Complete user acceptance testing - System Test Lead, BA Lead
□ Review and approve user acceptance test results - System Test Lead, BA Lead, OCM Lead
□ Validate SLAs - Performance Test Lead, Performance & Capacity
□ Review and approve SLAs - Project Manager, Project Sponsor, Solution Architect Lead, Tech Lead, Operational Lead, BA Lead, System Test Lead
 


6. Transition
The Transition phase includes the activities necessary to deliver working software or technology as an enabler to the customer. The project leadership team is accountable for delivering software or technology that meets the business case’s requirements and success measures. Documentation for the closure decision is minimal and covers only the requirements to complete the transition.

Transition Activities
Activity - Accountable Role

□ Complete training and communications - Project Manager
Description: Conduct training and send internal and external communications to prepare the customers for the new software or technology.

Sub-Activities - Contributing Roles
□ Execute Organizational Change Management Plan - OCM Lead
□ Execute Communication Plan - OCM Lead, Project Manager
□ Review authentication/authorization models - BA Lead, Info Security Lead, Project Sponsor
□ Review software/technology with production support staff - BA Lead, Project Manager
□ Execute Training Plan - BA Lead, Project Manager

□ Deploy software or technology - Release Manager
Description: Move the software or technology into production for customers to begin using.
 
 

Lessons Learned in Multiple-Stakeholder Projects

Clearer decision rules

• More clarity is needed around decision authority. In production, we had a fairly high incidence of decisions being revisited and overturned.
• Recommendation: Build a RACI chart and establish clearer project milestones and accompanying review sessions.

Wing-to-wing process flows

• Better understanding is needed of the pre- and post-submission workflows across Agent, Employee, and Customer, no matter who submits the loss.
• Recommendation: Establish clearer ownership of the employee workflow.

More effective feedback loop post-release

• Multiple avenues for feedback (SVPs, DSMs, claims employees, agent portal team, etc.) cause confusion.
• Recommendation: Provide a fully centralized feedback resource for all questions and commentary, e.g. the IT Service Desk, along with clear talking points on continuous rapid improvement.

Alignment of minimum viable product and continuous improvement

• The initial application design was based on releasing and regularly improving, and there was confusion at times around appropriate scope.
• Recommendation: Hold a broader discussion about creating a culture where a minimum viable product is allowable with rapid enhancements based on user feedback, versus having a larger test audience and a longer development cycle to make sure every possible scenario is captured.

Lack of alignment of UAT cycles

• Front end and back end were not on the same UAT test cycles, causing some redundancy and confusion.
• Recommendation: Align UAT cycles.

 

Shared priorities

• There was no clear owner for establishing shared priorities; e.g. front-end enhancements needed unavailable back-end resources.
• Recommendation: Align work efforts and availability between front end and back end through a regular sponsor checkpoint.

Minimal feedback received after initial release (during pilot rollout)

• The small agent population during the pilot rollout made it difficult to know where improvements could be made.
• Recommendation: Recruit a more robust group of agents in higher-volume areas for the initial pilot.

How to manage Agile in the project

How do we break story cards into tasks?

- Story cards will generally have two tasks.
- Task 1: Development, covering UI, Validation/Wiring, and Integration. Some development tasks may involve only one or two of the above.
- Task 2: Testing (UI, Validation, Integration, all viewports).
- Story cards will not be broken into separate UI, Validations, and Integration pieces unless the story points exceed 8, in which case the split will be based on sections/tiles.

How to estimate?

- Use the Fibonacci series.
- Estimate the entire development effort (UI, Validation/Wiring, Integration) for a particular story card.
- The maximum allowed story points per card is 8; a story card needs to be broken up if its points exceed 8.

How do we claim points?

- QAs perform high-level smoke testing in the DEV environment once the development task is complete and move the cards to the system testing column for claiming points.
- Dev is responsible for getting the BA and QA together, as far as possible, to move the card toward claiming points (in the local environment).

When do we call story cards accepted, completed, and done?

- Once smoke testing is performed on a story card in the DEV environment, points can be claimed and the story card is called 'Accepted'.
- Once functional testing is performed in the TEST region, the story card is called 'Completed'.
- Once business verification is complete, the story card is called 'Done'.

How do we move incomplete story cards?

- Story cards will be moved to subsequent sprints until they reach 'Accepted' status.

How do the BA, Dev, and QA activities take place?

- (Current sprint - 1): Requirements
- Current sprint: Development + defects from the previous sprint
- (Current sprint + 1): Functional testing
- (Current sprint + 2): Business testing

Note: It is possible that we may not be able to claim many points until the integration pieces are developed.

  

What does the story board look like?

 

The board has three lanes; cards in each column carry a status of "Ready to use" or "In progress".

Current Iteration columns:
□ Development (WIP) (UI, Validations/Wiring, Integration)
□ Ready for Smoke Testing
□ Accepted (Ready for System Testing; claim points)
□ Completed (Ready for Business Testing)
□ Done

Next Iteration: lane holding cards queued for the upcoming sprint.

Roadblocks: lane holding blocked cards.
Meetings:

1. Daily stand-up
2. Sprint planning
3. Backlog grooming
4. Customer demo
5. Retrospectives
6. Daily developer time

 

AIDA 182 - Risk and Insurance Analysis Techniques

Chapter 1: Classifying and Analyzing Risk

All types of predictive analysis should begin with a clear understanding of the business purpose. Understanding the risk is important, as the risk may involve a threat to the organization or, in some cases, both a threat and an opportunity.

Classifying the various types of risk can help an organization understand and manage its risks. The categories should align with an organization's objectives and risk management goals.

Classifying risks is important because similar risks share the same attributes and can be managed with similar techniques. In addition, new risks with the same attributes as previously classified risks are less likely to be overlooked.

 The relationship between likelihood and consequences is critical for risk management in assessing risk and deciding whether and how to manage it.

Types of risks

  • Pure Risk - A chance of loss or no loss, but no chance of gain. Insurance mostly deals in pure risk that is quantifiable and diversifiable.
  • Speculative Risk - A chance of loss, no loss, or gain.
  • Diversifiable and Non-Diversifiable Risks
  • Quadrants of risk (hazard, operational, financial, and strategic)

 

Insurance deals with pure, objective, and diversifiable risk.


These risk types are not mutually exclusive, and one risk can be attributed to more than one type. Example: a risk may have both pure and speculative aspects.

 Certain types of speculative risks:

1. Price Risk: Uncertainty over the size of cash flows due to changes in the cost of raw materials and other inputs, as well as cost-related changes in the market for completed products and other outputs.

2. Credit Risk: The risk that customers or other creditors will fail to make promised payments as they come due.

Pure risks and speculative risks are managed differently.

 

Examples of Speculative risks in investments:

1. Market Risk: The risk associated with fluctuations in the prices of financial securities, such as stocks and bonds.

2. Inflation Risk: The risk associated with the loss of purchasing power because of an overall increase in the economy's price level.

3. Interest Rate Risk: The risk associated with a security's future value because of changes in interest rates.

4. Liquidity Risk: The risk associated with not being able to liquidate an investment easily and at a reasonable price; that is, the risk that an asset cannot be sold on short notice without incurring a loss.


Subjective Risks: The perceived amount of risk based on an individual’s or organization’s opinion

Objective Risks: The measurable variation in uncertain outcomes based on facts and data.

The closer the subjective interpretation of a risk is to the objective risk, the better the risk management plan will likely be. If facts and data are not available, an objective risk must be treated as a subjective risk.

The main reasons that subjective risk and objective risk can differ substantially include these:

1. Familiarity and control

2. Consequences and likelihood

3. Risk awareness

!!IMPORTANT!! - Subjective risk can exist even where objective risk does not.


Diversifiable Risk: A risk that affects only some individuals, businesses, or small groups. Such risks are not highly correlated and can be managed through diversification, or spread, of risk.

Non-Diversifiable Risk: A risk that affects a large segment of society at the same time. Such risks are correlated; that is, their gains or losses tend to occur simultaneously rather than randomly. Examples: inflation, unemployment, and natural disasters such as hurricanes.

Systemic Risk: The potential for a major disruption in the functioning of an entire market or financial system. These risks are also non-diversifiable. Example: the failure of Lehman Brothers.

 

Quadrants of Risk for an organization. The purpose is to classify risks as pure or speculative. The focus of the risk quadrants differs from other risk classifications: the four quadrants focus on the source of the risk and who traditionally manages it, and different organizations classify different types of risk differently.

  • Hazard risks (classified as pure risk) arise from property, liability, or personnel loss exposures and are generally the subject of insurance. Injuries to employees and injuries from a company's products are also hazard risks.
  • Operational risks (classified as pure risk) fall outside the hazard risk category and arise from people or a failure in processes, systems, or controls, including those involving information technology. Supply chain failures are an example.
  • Financial risks (classified as speculative risk) arise from the effect of market forces on financial assets or liabilities and include market risk, credit risk, liquidity risk, and price risk. Example: buying a new production line machine.
  • Strategic risks (classified as speculative risk) arise from trends in the economy and society, including changes in the economic, political, and competitive environments, as well as from demographic shifts.

 

Basic Risk Measures

Risk management requires measures of risk in order both to know the nature of risks and to manage them to help an organization meet its objectives. Quantifying those risks that can be measured should form the basis of risk assessment, and ongoing measurement provides benchmarks to monitor and evaluate the success of an organization's risk management program.

Types of measurement

1. Exposure: Any condition that presents a possibility of gain or loss, whether or not an actual loss occurs. It provides a measure of the maximum potential damage associated with an occurrence. If a risk is non-diversifiable, the risk increases as the exposure increases. There may be situations where exposures are not easily quantifiable, but an attempt must be made to find the maximum exposure value. Example: the risk of a cyber attack.

2. Volatility: Frequent fluctuations, such as in the price of an asset, provide a basic measure that can be applied to risk. Risk increases as volatility increases. Organizations may be directly or indirectly affected. Hedging is a risk management technique for volatility.

3. Likelihood: The ability to determine the probability of an event mathematically is the foundation of insurance and risk management. In insurance, "likelihood" is used instead of "probability" because probability relies on the law of large numbers.

4. Consequence

5. Time horizon

6. Correlation

Law of Large Numbers: A mathematical principle stating that as the number of similar but independent exposure units increases, the relative accuracy of predictions about future outcomes (losses) also increases.
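A quick simulation can make the principle concrete. The sketch below (Python, with an assumed per-unit loss probability of 0.05, a made-up figure) shows the observed loss frequency settling toward the true value as the number of independent exposure units grows:

    import random

    random.seed(42)
    TRUE_LOSS_PROBABILITY = 0.05  # assumed 'true' chance that a unit suffers a loss

    for n_units in (100, 1_000, 10_000, 100_000):
        losses = sum(1 for _ in range(n_units) if random.random() < TRUE_LOSS_PROBABILITY)
        print(f'{n_units:>7} units: observed loss frequency = {losses / n_units:.4f}')

As n_units increases, the printed frequency tends to cluster ever more tightly around 0.05, which is exactly what the law of large numbers predicts.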

Probability describes the chance of an outcome based on observations where known parameters control the experiment.

Likelihood is a statistical concept: we know the observed values and try to build a model by finding the parameters that maximize the probability of the observed outcomes, thus supporting the hypothesis. The concept of a confidence interval, and how the measured data are spread out, is therefore important.

With likelihood we use various modelling and predictive analysis techniques.

The difference between likelihood and probability is explained further in the posts below:

http://stats.stackexchange.com/questions/2641/what-is-the-difference-between-likelihood-and-probability

http://stats.stackexchange.com/questions/665/whats-the-difference-between-probability-and-statistics/675#675

 

In insurance we need to determine and quantify the likelihood, wherever possible, and the consequence. The relationship between likelihood and consequence is critical for risk management in assessing risk and deciding whether and how to manage it. Consequences are the measure of the degree to which an occurrence could positively or negatively affect an organization. The greater the consequences, the greater the risk.

Insurance can help with risks that have low likelihood and major consequences, and with risks that have high likelihood and major consequences. Risks with minor consequences are not suitable for insurance and can be managed within the organization.

Time Horizon: The estimated duration; the longer the time horizon, the higher the risk. It is another measure of exposure.

Correlation: A relationship between variables, and a measure that should be applied to the management of an organization's overall risk portfolio. The greater the correlation, the greater the risk. Diversification is a risk management technique that can reduce the risk of correlation.

Correlation and covariance are the common statistical language for describing the relationships among various sources of risk.

Correlation is a scaled version of covariance, ranging from -1 to +1. Covariance is the relative association between variables, indicating whether they move in tandem or independently of each other.

Uses of Correlation and Co-Variance by risk professionals:

1. Identifying and quantifying relationships among various sources of risk

2. Communicating throughout the organization the degree of uncertainty in a risk portfolio

3. Prioritizing investments in loss control

4. Optimizing financing for multiple sources of risk

5. Evaluating risk management program effectiveness and competitive risk

 

Monte Carlo Simulation: A computerized statistical model that simulates the effects of various types of uncertainty. The technique provides a probability distribution of outcomes that can occur after applying a range of random conditions (or variables) to simulate an event or series of events hundreds or even thousands of times. The resulting probability distribution can be used to forecast and better understand the correlations among the variables.
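As a hedged sketch of the technique (the frequency and severity assumptions below are fabricated for illustration, not taken from the text), annual aggregate losses can be simulated thousands of times and summarized as a distribution:

    import random
    import statistics

    random.seed(1)

    def simulate_annual_loss():
        # Fabricated inputs: each of 365 days has a 2% chance of a loss event,
        # and each event's severity is drawn from a lognormal distribution.
        n_events = sum(1 for _ in range(365) if random.random() < 0.02)
        return sum(random.lognormvariate(8, 1.2) for _ in range(n_events))

    outcomes = sorted(simulate_annual_loss() for _ in range(10_000))
    print('mean annual loss:', round(statistics.mean(outcomes)))
    print('median          :', round(statistics.median(outcomes)))
    print('95th percentile :', round(outcomes[int(0.95 * len(outcomes))]))

The 10,000 simulated years form the probability distribution of outcomes described above.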


Correlation Matrices: A matrix showing how each source of risk interacts with the other risk sources. The matrix will always have a value of +1 along the diagonal, meaning that a risk source is always perfectly positively correlated with itself.

Sources of risk that have a low correlation, no correlation, or a negative correlation with other risk sources in a portfolio are generally good risk sources to add to the portfolio (all factors that could influence the correlations being equal) and tend to improve the organization's risk-return position. This relationship is used to measure how two risk sources interact, which can help direct a company's investments, plan new projects for its future, or gain a competitive edge in product development.
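A correlation matrix of this kind can be computed directly from historical series. In the sketch below the three series are fabricated so that two of them move together and the third is roughly independent:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(size=250)                  # fabricated risk source A
    b = 0.8 * a + 0.2 * rng.normal(size=250)  # B is built to track A closely
    c = rng.normal(size=250)                  # C is roughly independent

    corr = np.corrcoef([a, b, c])
    print(np.round(corr, 2))  # the diagonal is always +1.0

The off-diagonal entry for (a, b) comes out close to +1, while entries involving c hover near 0, mirroring the interpretation above.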

Correlation does not measure causality. Causality defines and measures the relationship between two events where the second is brought about by the first.

!!Caution!! As with any statistical measure, care must be used in interpreting the results of correlation analysis. Correlation results may be skewed by abnormal observations, inaccuracy of data, or an insufficient number of observations. A risk professional should not rely solely on correlation analysis.


Trend Analysis: An analysis that identifies mathematical patterns in past data and then projects these patterns into the future to develop forecasts. Example: most commonly used to project and adjust forecasted future dollar amounts of losses or gains using an anticipated inflation rate.

Trend analysis helps in forecasting losses and gains, as well as loss severity and loss frequency.

Regression Analysis: A statistical technique that is used to estimate relationships between variables, using past data for those variables. A type of trend analysis, it can increase the accuracy of trend-analysis forecasts by examining the variables that affect trends.

Linear Regression Analysis: A form of regression analysis that assumes the change in the dependent (predicted, or output) variable is constant for each unit of change in the independent variable. There are limitations to using this analysis: accuracy suffers when forecasting too far into the future, whereas for points within the historical data the predicted value will be very close to the actual value. The regression line represents a line of best fit, which can be a straight line or a smooth curve.

A risk management professional may apply multiple regression by incorporating several independent variables simultaneously.

In some cases a risk management professional may need to apply a curvilinear regression line; this happens when the dependent variable changes at an accelerating or decelerating rate rather than at a constant rate.

Trend Analysis seeks predictable patterns of change in a dynamic, changing environment. Organizations use trend analysis to develop forecasts based on patterns of change. Regression analysis can increase the accuracy of an organization's forecasts by using statistical analysis to examine related variables that affect trends.
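A minimal trend-line fit, using fabricated annual loss figures, in the spirit of the analysis described above:

    import numpy as np

    years = np.array([1, 2, 3, 4, 5])
    losses = np.array([100, 112, 119, 131, 140])  # fabricated annual losses

    slope, intercept = np.polyfit(years, losses, deg=1)  # line of best fit
    forecast = slope * 6 + intercept                     # project one year ahead
    print(f'trend: losses ~ {slope:.1f} * year + {intercept:.1f}')
    print(f'year 6 forecast: {forecast:.1f}')

As the text cautions, such a projection is only as good as the assumption that the historical pattern continues.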

Forecasts should be accepted only if the underlying assumptions are valid.

Results of statistical analysis should be interpreted with reason, not accepted automatically just because they are mathematically based. Furthermore, perhaps more for risk management than for some other uses of these forecasting techniques, the seeming scarcity of loss data, compared with the apparent wealth of data in other management specialties, makes forecasts of accidental losses more difficult.


What we learned from Chapter 1

  • Start with a clear business need for classifying and analyzing risk
  • Classify risks so the organization understands both threats and opportunities
  • Categorize each risk so that a plan to contain it can be made
  • Analyze the various risks by measuring the sources of risk
  • Use statistical techniques to measure the correlation between various risks and sources
  • Develop projections/forecasts using statistical techniques, such as regression analysis
  • Interpret the results of regression analysis with reason and with data
  • Make risk management decisions


Chapter 2: Probability Distributions for Analyzing Risks

The probability of an event is the relative frequency with which the event can be expected to occur in the long run in a stable environment, provided the conditions under which the frequency is recorded remain constant throughout. Without the ability to determine the probability of losses, insurers would not be able to successfully underwrite insurance.


Probabilities deduced solely from historical data may change as new data are discovered or the environment changes.

Theoretical probability is based on theoretical principles rather than on actual experience. It is preferred by insurance professionals because it is unchanging, but it is not applicable in all situations. The results are constant as long as the physical conditions that generate them remain unchanged.

Empirical probability (a posteriori probability): A measure based on actual experience, through historical data or from the observation of facts. Empirical probabilities deduced solely from historical data may change as new data are discovered or as the environment that produces those events changes. Empirical probabilities are only estimates whose accuracy depends on the size and representative nature of the samples being studied.

  • The first requirement of a probability distribution is that it provide a mutually exclusive, collectively exhaustive list of outcomes; loss categories (bins) must be designed so that all losses can be included.
  • The second requirement of a probability distribution is that it define the set of probabilities associated with each possible outcome.

Probability Analysis : A technique for forecasting events, such as accidental and business losses, on the assumption that they are governed by an unchanging probability distribution.

The law of large numbers has some limitations. It can be used to more accurately forecast future events only when the events being forecast meet all three of these criteria:

  • The events have occurred in the past under substantially identical conditions and have resulted from unchanging, basic causal forces.
  • The events can be expected to occur in the future under the same, unchanging conditions.
  • The events have been, and will continue to be, both independent of one another and sufficiently numerous.


Probability Distribution: Based on empirical probabilities, a probability distribution can be determined. A probability distribution ideally has to be mutually exclusive and collectively exhaustive (i.e. every possible outcome is accounted for and the sum of the probabilities is 1). The distribution can be discrete or continuous. A probability distribution is a presentation (best represented as a table, chart, or graph) of probability estimates for a particular set of circumstances and of the probability of each possible outcome.

Discrete probability distributions are typically used for loss frequency, whereas continuous probability distributions are typically used for loss severity.

Two criteria for studying empirical probability distributions:

  • A mutually exclusive, collectively exhaustive list of outcomes
  • A defined set of probabilities associated with each possible outcome

Although insurers and risk management professionals work with theoretical distributions on occasion, relatively few of the loss exposures they analyze involve theoretical probabilities. Examples: workers compensation claims, medical claims.

  • To study a continuous probability distribution, divide the distribution into a countable number of bins, because by definition a continuous distribution can have an infinite number of values for frequency and severity distributions. When divided into bins, a chart or table is the best way to study a continuous probability distribution and forecast future losses using frequency and severity distributions; otherwise a graph with x-y axes is typically used (see the sketch after this list).
  • Discrete probability distributions are usually displayed in a table that lists all possible outcomes and the probability of each outcome.
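A sketch of the binning idea, with fabricated severity data, grouping continuous loss amounts into a mutually exclusive, collectively exhaustive set of bins and converting counts into empirical probabilities:

    import numpy as np

    rng = np.random.default_rng(7)
    severities = rng.lognormal(mean=8, sigma=1.0, size=500)  # fabricated losses

    bins = [0, 1_000, 5_000, 10_000, 50_000, np.inf]  # exhaustive bin edges
    counts, _ = np.histogram(severities, bins=bins)
    probabilities = counts / counts.sum()  # empirical probabilities sum to 1

    for lo, hi, p in zip(bins[:-1], bins[1:], probabilities):
        print(f'{lo:>8} to {hi:<8}: {p:.3f}')

Each bin's share of the 500 observations is its empirical probability, and the shares sum to 1, satisfying both requirements listed earlier.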

The outcomes in a continuous probability distribution are described by probability density functions. Continuous probability distributions are typically used for severity distributions; they depict the value of a loss rather than the number of outcomes.

A measure of central tendency gives the best guess of what the outcome will be: the single outcome that is most representative of all possible outcomes included within an empirical probability distribution. The most widely used measures are the mean, median, and mode. After probability distributions are constructed, measures of central tendency are used to compare the characteristics of those distributions.

Expected Value : The weighted average of all of the possible outcomes of a theoretical probability distribution. The procedure for calculating the expected value applies to all theoretical discrete probability distributions, regardless of their shape or dispersion. For continuous distributions, the expected value is also a weighted average of the possible outcomes. However, calculating the expected value for a continuous distribution is much more complex.

Expected Value applies to Theoretical Discrete Probability Distribution

Mean: The sum of the values in a data set divided by the number of values. For an empirical probability distribution we use the mean rather than the expected value. The mean is only a good estimate of the expected outcome if the underlying conditions determining those outcomes remain constant over time. Analysts will often use the mean as the single best guess when forecasting future events.

Median: The value at the midpoint of a sequential data set with an odd number of values, or the mean of the two middle values of a sequential data set with an even number of values. With a probability distribution of losses, calculating the probabilities of losses equal to or less than a given number of losses or dollar amount of losses, individually and cumulatively, can be helpful in selecting retention levels. Similarly, calculating individual and cumulative probabilities of losses equal to or greater than a given number of losses or dollar amount can help in selecting upper limits of insurance coverage. Therefore we should look at both the individual and the cumulative probability columns.

Example: the best guess of the number of workers compensation claims that an organization will suffer can be taken as the mean of past losses.

Mode: The most frequently occurring value in a distribution. The mode allows insurers to focus on the outcomes that are most common.
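A quick sketch with a fabricated, right-skewed set of claim counts shows how the three measures can diverge:

    import statistics

    claims = [1, 1, 2, 2, 2, 3, 4, 5, 12]  # fabricated, right-skewed data
    print('mean  :', round(statistics.mean(claims), 2))  # pulled up by the tail
    print('median:', statistics.median(claims))          # midpoint value
    print('mode  :', statistics.mode(claims))            # most frequent value

Here the mean (about 3.56) sits well above the median and mode (both 2), which is typical of skewed loss data and motivates the discussion of skewness below.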



Many loss distributions are skewed because the probability of small losses is large whereas the probability of large losses is small.

Asymmetrical distributions are common for severity distributions. 

If a distribution is skewed, then the median is a better estimate of what is most likely to occur than the mean. Example: workers compensation claims are skewed, so the median of the current year's claims will give a better estimate of next year's claims. Companies can then figure out the deductible to keep in their policies in order to retain the loss. (We need the cumulative frequencies, and the median value within them, to find the midpoint.)

Another important concept is Dispersion

While analyzing probability distributions, insurance and risk management professionals use measures of dispersion to assess the credibility of the measures of central tendency used in analyzing loss exposures.

Measures of central tendency for a distribution of outcomes include the expected value (or mean), which can provide useful information for comparing characteristics of distributions. However, another important characteristic of a distribution is dispersion. Dispersion describes the extent to which the distribution is spread out rather than concentrated around the expected value.

Less dispersion means less uncertainty about the expected outcomes; the more the dispersion, the flatter the shape of a distribution. Insurance professionals are better able to underwrite around estimated losses and offer coverage where the variation is lower rather than higher, because less variation around the central tendency means there is less risk involved in the loss exposure.

Two widely used statistical measure of dispersion.

  • Standard Deviation: A measure of dispersion between the values in a distribution and the expected value (or mean) of that distribution, calculated by taking the square root of the variance. To calculate the standard deviation from a sample of outcomes, it is not necessary to know the probability of each outcome, just how often each outcome occurred.
  • Coefficient of Variation: A measure of dispersion calculated by dividing a distribution's standard deviation by its mean. It is used to compare two distributions that have the same mean but different variability relative to that mean.

For an insurance and risk management professional, knowing the expected value is only one element of the information; by calculating the standard deviation, one can gauge how close the expected value is likely to be to the actual value. The underwriter should choose to give coverage to the account with the lesser standard deviation or lower coefficient of variation.

A higher coefficient of variation means higher variability, which means losses are less predictable, making it more difficult for an underwriter to accurately forecast an individual outcome. This measure can be used to determine whether a particular loss control measure has made losses more or less predictable.
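A sketch comparing two fabricated loss histories with the same mean but very different spread:

    import statistics

    account_a = [100, 105, 95, 102, 98]   # fabricated losses, low spread
    account_b = [60, 150, 40, 130, 120]   # fabricated losses, high spread

    for name, losses in (('A', account_a), ('B', account_b)):
        mean = statistics.mean(losses)
        sd = statistics.stdev(losses)  # sample standard deviation
        print(f'account {name}: mean={mean:.0f}, sd={sd:.1f}, cv={sd / mean:.2f}')

Both accounts average 100, but account A's far lower coefficient of variation marks its losses as the more predictable, so by the reasoning above it is the more attractive account to underwrite.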

Using Normal Distribution

Insurance and risk management professionals use normal probability distributions to predict future losses, which enables them to marshal the resources to control losses that can be prevented or mitigated and to finance those that cannot.

The normal distribution is a probability distribution that, when graphed, generates a bell-shaped curve. It can help to accurately forecast the variability around some central, average, or expected value and has therefore proven useful in forecasting the variability of many physical phenomena. In theory, the normal distribution assigns some probability greater than zero to every outcome, regardless of its distance from the mean.

For all normal distributions, 34.13 percent of all outcomes are within one standard deviation above the mean and, because every normal distribution is symmetrical, another 34.13 percent of all outcomes fall within one standard deviation below the mean.

Consequently, 95.44 percent of all outcomes are within two standard deviations above or below the mean, and fewer than 5 percent of outcomes are outside two standard deviations above or below the mean.

The portion of the distribution between three standard deviations above the mean and three standard deviations below it contains 99.74 percent of all outcomes. 

2.15 percent of all outcomes are between two and three standard deviations above the mean, and another 2.15 percent are between two and three standard deviations below the mean.

49.87 percent (34.13% + 13.59% + 2.15% percent) of all outcomes are three standard deviations or less above the mean and an equal percentage are three standards deviations below the mean

The mean (or expected value), median, and mode all sit at the midpoint of the distribution, with 50 percent of outcomes on either side.
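These percentages can be reproduced from the standard normal cumulative distribution function; a sketch (assuming scipy is available):

    from scipy.stats import norm

    for k in (1, 2, 3):
        within = norm.cdf(k) - norm.cdf(-k)  # share of outcomes within k std devs
        print(f'within ±{k} standard deviations: {within * 100:.2f}%')

This prints approximately 68.27%, 95.45%, and 99.73%, matching the 34.13 / 13.59 / 2.15 percent buildup described above (small differences come from rounding in the text).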



What we learned from chapter 2

  • How to construct empirical probability distributions chart, table or graph
  • The concept of central tendency using expected value, mean, median, or mode
  • The variability of input data using dispersion
  • Concept of probability frequency distributions for loss frequency and loss severity
  • About Normal probability distribution which helps in accurate forecasting


Chapter 3 : Risk Modelling Techniques

Modelling Methods

When and how each model is used is partially determined by its limitations:
    • Methods based on historical data.
    • Methods based on expert input.
    • Methods based on combining historical data and expert input.

Big Data Analysis Techniques: Big data is data that is too large to be gathered and analyzed by traditional methods. New data analysis techniques, such as text data mining and social network analysis, are improving risk modelling. Example: analyzing adjuster notes or customer service conversations.

Machine Learning, in which computers teach themselves to make better decisions based on previous results and new data, provides methods to continually improve risk models. As new data is input to a computerized model, the model learns from the new data and produces more accurate predictions.


Techniques based on Historical Data

Empirical probability distribution: Constructed as a table, chart, or graph from values of a random variable, commonly the frequency of the variable in a period. This probability distribution is the common starting point for most analysis, and the law of large numbers is very important here, because the larger the sample, the more accurate and reliable the resulting distribution.

Theoretical probability distribution: Constructed using mathematical or analytical formulas and then used as a statistical reference or comparison. It is used to study and improve an empirical probability distribution when an insufficient number of historical data points is available.

Theoretical probability distributions are useful in explaining the potential variability of categories of risk. For example, claim frequency is often assumed to follow a certain distribution, either a negative binomial or a Poisson distribution. By contrast, claim severity is often assumed to follow a lognormal or a Pareto (or conditional claim, or tail) distribution. Mostly, actuaries create these models.

Security and stock prices are often represented by a normal distribution.

Extreme value theory (for long-tailed/skewed distributions): Statistical probability estimation of extreme deviations from the median (in skewed data, the median is a better estimate of the expected value than the mean) of probability distributions; in other words, the study of the tail. The difficulty in analyzing and forecasting rare events lies in the lack of available data. EVT attempts to model the tail extremities of a rarely occurring or unknown variable, such as a 100-year earthquake, or EVT stress tests for financial and banking organizations. It can be used to predict the probability of certain events at certain values and is useful in assessing risk from infrequently occurring but high-severity losses.
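As a hedged sketch of the approach, the snippet below fits a generalized Pareto distribution (a common EVT choice) to exceedances over a high threshold, using fabricated loss data:

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(3)
    losses = rng.lognormal(mean=10, sigma=1.5, size=5_000)  # fabricated losses

    threshold = np.quantile(losses, 0.95)        # keep only the top 5% of losses
    exceedances = losses[losses > threshold] - threshold

    # floc=0 fixes the location parameter so only shape and scale are fitted.
    shape, loc, scale = genpareto.fit(exceedances, floc=0)

    x = 2 * threshold  # an extreme level far above the threshold
    p = 0.05 * genpareto.sf(x, shape, loc, scale)  # P(loss > threshold) * P(excess > x)
    print(f'fitted tail shape={shape:.2f}, scale={scale:,.0f}')
    print(f'estimated P(loss > threshold + {x:,.0f}) = {p:.5f}')

The scarcity of tail data the text mentions is exactly why the fit is restricted to the exceedances rather than the whole distribution.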

Regression analysis: Assumes that the variable being forecast varies predictably with another variable. The variable being forecast is the dependent variable, and the variable that determines the value of the variable being forecast is the independent variable. It can also be used to forecast opportunities.


Methods based on Expert Input

Preference among bets: Converts expert opinions into probabilities by inferring, from the bets an expert prefers, the probability the expert assigns to a certain event. This model is best employed when there is little observable data available.

Examples used in forecasting: political risk or legal risk.

Judgments of relative likelihood: The representation of probability by means of expert input regarding the likelihood of event outcomes. A limitation is that the input can be influenced by an unintentional bias that causes the representation to characterize less familiar events as less likely and more familiar events as more likely.

Example : helpful in obtaining knowledge or opinions from experts unfamiliar with probability assessments

The Delphi technique: A collaborative estimating technique that uses expert input to reach consensus by continuously refining individual responses, showing the tabulated results of the previous round of questions or surveys before the next round is answered. This technique helps in problem-solving and decision-making processes by encouraging experts to share their opinions and forecasts, continuously converging and synthesizing the answers until a group conclusion is reached. It can be more cost-effective than assembling a facilitated workshop of experts and is useful when exploring a topic of varying subjective judgments or one encompassing a range of disciplines, but it may not be as helpful when the forecast concerns a new or unknown entity or when novel ways of approaching a decision are needed.

Example: the Delphi technique is useful when exploring a topic of varying subjective judgments or one encompassing a range of disciplines.


For more complex modelling involving many areas of uncertainty and interactions, methods based on combining historical data and expert input are used.

Monte Carlo Simulation : focuses on specific variables in a project. A computer randomly selects values for each variable according to a probability distribution and generates thousands of possible scenarios. The results are assembled into probability distributions representing possible outcomes.

Fuzzy Logic: A type of logic that assigns values to indefinite data fields to facilitate more accurate probabilities. The process of inputs, outputs, fuzzy sets, and fuzzy rules is complex, but it is meant to use familiar terms, be flexible, and adapt to imprecise data.

Bayesian Inference: This is used when there are not sufficient data points and expert input is not available. Through a method known as Bayes' Rule, a conditional probability can be broken down into individual components, each of which is easier to estimate.

p(B|A) = p(A|B) * p(B) / p(A)

The conditional probability is the probability of an outcome given that another condition exists. It forms the basis for many different types of models used to analyze data.
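A worked sketch of Bayes' Rule with made-up numbers: suppose 2 percent of claims are fraudulent (event B), and a fraud indicator (event A) flags 90 percent of fraudulent claims but also 5 percent of legitimate ones:

    p_fraud = 0.02               # p(B), assumed prior probability of fraud
    p_flag_given_fraud = 0.90    # p(A|B), assumed detection rate
    p_flag_given_legit = 0.05    # assumed false-positive rate

    # Total probability of a flag, p(A):
    p_flag = p_flag_given_fraud * p_fraud + p_flag_given_legit * (1 - p_fraud)

    # Bayes' Rule: p(B|A) = p(A|B) * p(B) / p(A)
    p_fraud_given_flag = p_flag_given_fraud * p_fraud / p_flag
    print(f'p(fraud | flagged) = {p_fraud_given_flag:.3f}')  # about 0.269

Note how the hard-to-estimate quantity p(fraud | flagged) is assembled from three components that are each easier to estimate, which is the point made above.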

After modelling, the next step is to analyze the event consequences.

Decision Tree Analysis: Analyzes the uncertainties of decision outcomes and can provide both qualitative and quantitative analysis. Qualitatively, decision trees can help generate the scenarios, progressions, and consequences that could potentially result from a decision; construction begins with a statement of the initial decision under consideration. Quantitatively, they can provide estimated probabilities and frequencies of the various scenarios resulting from a decision. The product of the probabilities of each event in a pathway and the value of its outcome can be compared across pathways to determine the pathway that produces the highest expected value (the best pathway through a problem).

Disadvantage: A decision tree can oversimplify a complex problem, reducing the effectiveness of the resulting decisions. Sometimes a decision tree may become so complex that it is ineffective in communicating the rationale for a decision to those not involved in the process.

Another disadvantage is that decision trees do not show interdependencies.

Decision tree analysis helps to examine the consequences, including costs and gains, of decisions. An organization can use a decision tree to compare alternative decisions and select the most effective strategy to achieve a goal.
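A minimal sketch of the quantitative side, comparing two hypothetical alternatives by the probability-weighted value of their outcome pathways (all figures invented for illustration):

    # Each alternative maps to its outcome pathways as (probability, payoff) pairs.
    alternatives = {
        'install sprinklers': [(0.95, -50_000), (0.05, -200_000)],
        'do nothing':         [(0.80, 0), (0.20, -1_000_000)],
    }

    for name, pathways in alternatives.items():
        assert abs(sum(p for p, _ in pathways) - 1.0) < 1e-9  # probabilities sum to 1
        expected_value = sum(p * payoff for p, payoff in pathways)
        print(f'{name}: expected value = {expected_value:,.0f}')

In this invented example the sprinkler option wins, since an expected loss of 57,500 beats an expected loss of 200,000.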


Event Tree Analysis: Its representation and calculations are similar to decision tree analysis, both quantitative and qualitative, but its application is very different: ETA analyzes the consequences of accidental events rather than decisions. An accidental event is defined as the first significant deviation from a normal situation that may lead to unwanted consequences. Qualitatively, it can help generate the scenarios, progressions, and consequences that could potentially result from an accidental event. Quantitatively, it can estimate the probabilities and frequencies of various scenarios and outcomes and help organizations determine the effectiveness of, or need for, controls and safeguards. Event trees are often used to determine the need for, and to examine the effectiveness of, risk treatment methods, as they are typically used to analyze negative consequences (risk of loss). ETA examines all possible outcomes of an accidental event, their probabilities, and existing measures to prevent or control them. An organization may use this approach to examine the effectiveness of systems, risk treatments, or risk control measures and to identify, recommend, and justify expenditures of money, time, or resources for improvements.

Its advantages are similar to those of decision trees; its disadvantage is that it typically provides only two options, success or failure, and thereby fails to reflect the complexity of some progressions.

Classification Tree Analysis: Used for data mining, which involves extracting hidden patterns from data. Classification trees describe data in a data mining context, not decisions, although decisions can be made from the data obtained.

Decision tree analysis and event tree analysis differ in their purposes, the information used, and the information produced.

The classification trees described above describe data, not decisions; it is up to the analyst to make and justify the decisions.


Influence Diagrams (an alternative to decision trees): Provide a visual graph of a decision, sometimes using mathematical expressions, to construct a model showing the known and unknown factors related to the decision and its positive or negative outcomes. The purpose is to understand the whole picture of known and unknown factors and the value of that information.

Elements of an Influence Diagram

  • Decision nodes are represented by rectangles
  • Variables are represented by ovals
  • Benefits and costs are represented by diamonds

The difference between influence diagrams and decision trees lies in the construction and representation of the data. Influence diagrams give a cost-benefit analysis that provides a holistic view: they can show interdependencies marked with arrows, decision nodes, variables impacting the decision, and benefits and costs.

Decision trees have branches that outline each alternative (or go/no-go decision). The branches come out of circles, where each circle shows the probability of each branch, and each branch represents an outcome.

Advantages of influence diagrams over decision trees: An influence diagram represents a holistic view, presenting an overview of the decision-making process, and can incorporate all of the identified variables and their probabilities. It can also be constructed as a type of Bayesian network, which includes tables in addition to a graph.

An influence diagram presents an overview of the decision-making process and can incorporate all of the identified variables and their probabilities. In contrast, a decision tree does not have a method for combining the alternatives and the outcomes they represent.

Influence diagrams can be used for ongoing decision making within an organization. Decision trees are most suitable for a one-time, go or no-go type of decision.

Applying Influence Diagram and Dashboards

Risk Dashboard: A computer interface that reports quantitative data regarding an organization's key risk indicators.

Key Risk Indicator (KRI) : A tool that an organization uses to measure the uncertainty of meeting an organizational objective.

When gathering input and data and calculating probabilities for different target variables, one condition always has to be confirmed: the target variables should be mutually exclusive and collectively exhaustive.

In a typical scenario, the five case analysis steps are:

1. For each option, evaluate data and draw an influence diagram
2. Determine the probabilities for each option
3. Consider the economic and financial issues for each option - Cost benefit analysis
4. Compare the options
5. Make a recommendation

Value at Risk and Earnings at Risk for financial risks and financial outcomes

Value at Risk (VaR): A threshold value such that the probability that the loss on the portfolio over the given time horizon (usually a short one) exceeds this value equals the chosen probability level, assuming normal markets and no trading in the portfolio.

Key benefits of VaR as a risk measure:

  1. The potential loss associated with an investment decision can be quantified
  2. Complex positions (typically involving multiple decisions) can be quantified
  3. Loss is expressed in easily understood monetary terms

Limitation: VaR does not accurately measure the extent to which a loss may exceed the VaR threshold. CVaR provides the same benefits as VaR and also takes into account the extremely large losses that may occur, usually with low probability, in the tail of a probability distribution. CVaR is particularly important for fat-tailed distributions, in which extremely large losses have higher probabilities than with most other probability distributions.

CVaR is a model to determine the likelihood of a loss given that the loss is greater than or equal to the VaR, assuming normal markets and no trading in the portfolio.
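A sketch of historical-simulation VaR and CVaR at 95 percent confidence, using fabricated daily profit-and-loss data (a real calculation would use actual portfolio returns):

    import numpy as np

    rng = np.random.default_rng(11)
    pnl = rng.normal(loc=0, scale=100_000, size=1_000)  # fabricated daily P&L

    losses = -pnl                               # positive values are losses
    var_95 = np.quantile(losses, 0.95)          # loss exceeded on 5% of days
    cvar_95 = losses[losses >= var_95].mean()   # average loss beyond the VaR

    print(f'95% VaR : {var_95:,.0f}')
    print(f'95% CVaR: {cvar_95:,.0f}')  # always at least as large as VaR

The gap between the two numbers is the tail information that VaR alone misses.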


Earnings at Risk (EaR): The maximum expected loss of earnings within a specific degree of confidence, usually over a time period of one year. Models are developed using Monte Carlo simulation and are presented as a probability distribution or a histogram of individual probabilities. The EaR threshold represents the lower end of projected earnings within a specific confidence level, such as 95 percent.

EaR is helpful in comparing the likely effects of different risk management strategies on earnings. However, there are limitations, including the complexity of the calculations and the need to understand the relationships of different variables to an organization's results. In summary, EaR is used by financial and non-financial organizations to model the effects of changes in various factors on an organization's earnings.

EaR entails modeling the influence of factors such as changes in the prices of products and in production costs on an organization's earnings.
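A hedged sketch of EaR via Monte Carlo: earnings are simulated under random price and cost shocks (all base figures and volatilities assumed), and the 5th percentile marks the lower end of projected earnings at 95 percent confidence:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 10_000

    # Assumed base case: unit price 20, unit cost 12, volume 1,000,000.
    price = 20 * (1 + rng.normal(0, 0.10, n))      # random price shocks
    unit_cost = 12 * (1 + rng.normal(0, 0.15, n))  # random cost shocks
    earnings = (price - unit_cost) * 1_000_000

    expected = earnings.mean()
    floor_95 = np.quantile(earnings, 0.05)  # lower end at 95% confidence
    print(f'expected earnings      : {expected:,.0f}')
    print(f'95% confidence floor   : {floor_95:,.0f}')
    print(f'earnings at risk (EaR) : {expected - floor_95:,.0f}')

Rerunning the simulation with a proposed hedge (for example, a smaller cost volatility) shows how a strategy would change the EaR, which is the comparison use described above.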

Catastrophe Modeling

Catastrophe models can assess not only potential loss severity but also the probability that a catastrophe will occur (that is, loss frequency, which is difficult to estimate for rare events). Examples: wildfires, winter storms, floods, and terrorism.

Models are created using insurer-supplied exposure data to produce a range of potential losses that may result from the catastrophes being modeled, along with their associated probabilities of exceedance. The models are proprietary; an insurer generally derives information from several catastrophe models because both the assumptions behind and the results from different catastrophe models vary.


Models typically include three basic components


1. Hazard: The actual physical cause and intensity, built with actual geophysical and weather information.

2. Engineering: Based on the intensity from the hazard component, calculates the extent of structural damage, using damage functions to compute the level of damage and estimate the time to repair or rebuild affected structures. A second part is calculated to produce estimates of business interruption losses or additional living expenses (ALE) for residential policies.

3. Financial: Enables the company to set a loss reserve, based on whether the property is insured on an actual cash value or replacement cost basis.

Socio-economic factors, such as the likelihood of fraud or theft following a catastrophe, and demand surge for raw materials within the total insured loss are also sometimes included in catastrophe models.


Key Output of Catastrophe Models

Average Annual Loss (AAL): The long-term average loss expected in any one year for in-force policies for the cause of loss being modeled. AAL is also referred to as the catastrophe loss cost, or pure premium, and is typically expressed as the expected loss per unit of premium. Because AAL values reflect all the components of the catastrophe model (hazard, engineering, and financial), they are more effective in identifying concentrations of catastrophe-prone in-force policies than a simple review of the geographic distribution of in-force policies.

Among its other uses, the model can generate policy-level AAL information so that reinsurance rates can be developed. A risk load is also often incorporated into the rate to cover the possibility of extreme events generating losses well in excess of the AAL. Reinsurers and primary insurers can also use AAL information to compare the pricing of different reinsurance program proposals.

  • Primary insurers use the results of catastrophe models primarily to understand the catastrophe loss potential of their portfolio of in-force policies.
  • Policy-level average annual loss (AAL) information generated by catastrophe models is used by reinsurers to develop catastrophe reinsurance rates.

Exceedance Probability Curve: One of the most commonly used outputs is the exceedance probability (EP) curve, which represents the full spectrum of potential losses and their associated probabilities of occurrence. These probabilities are called exceedance probabilities and reflect the probability that a loss of a specified size will be equaled or exceeded. The return period, in years, is the inverse of the exceedance probability. Exceedance probabilities are assigned by the catastrophe model.

It is prudent to focus on exceedance probabilities, rather than on return periods, to avoid thinking that "the 100-year loss won't occur in my lifetime" or that if the 100-year loss occurred this year, it won't come again for another 100 years. In fact, there is a 1 percent probability every year that the 100-year loss will occur.

Based on an exceedance probability analysis, the primary insurer can make informed decisions regarding the size of the reinsurance limit it should purchase.
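The arithmetic behind that caution is easy to verify; a sketch:

    # A '100-year' loss has a 1% exceedance probability in every single year.
    exceedance_probability = 0.01
    print(f'return period: {1 / exceedance_probability:.0f} years')

    # Probability of at least one such loss over a multi-year horizon:
    for years in (10, 30, 100):
        p_at_least_one = 1 - (1 - exceedance_probability) ** years
        print(f'P(at least one in {years} years) = {p_at_least_one:.1%}')

Over 100 years the chance of seeing at least one '100-year' loss is only about 63 percent, and in any given year it is always 1 percent, regardless of when the last one occurred.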

Catastrophe modeling issues: Different models from different vendors can produce different outcomes because of variations in the data and assumptions used to build each model. The reliability of model output is only as good as the quality of the primary insurer's exposure data used as input.

Big data can be used to create more accurate catastrophe models by analyzing enormous amounts of data from multiple sources very quickly, allowing models to respond to changes in climate, engineering, technology, and geopolitical and socioeconomic conditions.

Organizations use decision tree analysis to compare the consequences, costs, and gains of alternative decisions and to select the most effective strategy to achieve a goal.

Organizations use event tree analysis to examine all possible consequences of an accidental event and the effectiveness of existing measures to prevent or control those consequences.


What we learned from Chapter 3

  • How to model risk, and the strengths and weaknesses of each type of model
  • At a high level, risk models fall into three types: based on historical data, on expert input, or on a combination of the two
  • After risk modelling comes analyzing the numbers through decision trees, influence diagrams, and event trees to make decisions
  • Outcomes of decisions are tracked with KRIs and risk dashboards
  • We also learned about catastrophe modelling and its importance, since reinsurance pricing is based on these models


Chapter 4 - Analyzing Loss Exposures

Identifying Loss Exposures

The methods that enable an organization to take a systematic approach to identifying loss exposures include these:

  • Document Analysis
  • Compliance Review
  • Inspections
  • Expertise within and beyond the organization
  • New data analysis techniques, such as IoT data analysis and social media analysis, which involves discovering patterns and relationships based on links in a network, along with text mining, which involves language recognition


In document analysis there are two sources: standardized and organization-specific.

     1) The sources of these documents can be standardized and originate from outside the organization, such as risk assessment questionnaires, insurance coverage checklists, and surveys. These standardized documents broadly categorize the loss exposures that most organizations typically face and are not completed with information that is exclusive to the organization. The templates for these documents are published by the American Management Association (AMA), the International Risk Management Institute (IRMI), the Risk and Insurance Management Society (RIMS), and others.

     2) Documents that are organization-specific, such as financial statements and accounting records, contracts, insurance policies, policy and procedure manuals, flowcharts and organization charts, and loss histories. Other examples are websites, news releases, or reports from external organizations such as A.M. Best or D&B, which may indicate something useful about the organization's loss exposures. Further examples are organizational policies and records such as corporate bylaws, board meeting minutes, employee manuals, procedure manuals, mission statements, and risk management policies.

Checklists typically capture less information than questionnaires. Both checklists and questionnaires may be produced by insurers (such questionnaires are known as insurance surveys). Most of the questions on these surveys relate to loss exposures for which commercial insurance is generally available.

A questionnaire captures more descriptive information than a checklist. For example, as well as identifying a loss exposure, a questionnaire may capture information about the amount or values exposed to loss. The questionnaire can be designed to include questions that address key property, liability, net income, and at least some personnel loss exposures.

Risk management or risk assessment questionnaires have a broader focus and address both insurable and uninsurable loss exposures. However, a disadvantage of risk assessment questionnaires is that they typically can be completed only with considerable expense, time and effort and still may not identify all possible loss exposures.

Standardizing a survey or questionnaire has both advantages and disadvantages. The questions are relevant to most organizations and can be answered by persons who have little risk management expertise. However, no standardized questionnaire can be expected to uncover all the loss exposures particular to a given industry, let alone those unique to a given organization.

Experienced risk management professionals often follow up with additional questions that are not on the standardized document.

Financial Statements: their primary role is to identify major categories of loss exposures and any future plans of the company that may lead to loss exposures.

  • Balance Sheet: The financial statement that reports the assets (property, which poses risk), liabilities (amounts owed by the organization, such as mortgages or loans), and owners' equity of an organization as of a specific date.
  • Income Statement: The financial statement that reports an organization's profit or loss for a specific period by comparing the revenues generated with the expenses incurred to produce those revenues. (It will indicate financial loss exposures that reduce revenues or increase expenses, such as fluctuations in the value of investments, interest rate volatility, foreign exchange rate changes, or commodity price swings.)
  • Statement of Cash Flows: The financial statement that summarizes the cash effects of an organization's operating, investing, and financing activities during a specific period. Fund flow analysis of the statement of cash flows can identify the amounts of cash either subject to loss or available to meet continuing obligations.

Disadvantage of Financial Statements 

     1. They do not identify or quantify individual loss exposures. For example, the balance sheet may show that there is $5 million in property exposed to loss, but it does not specify how many properties make up that $5 million, where those properties are located, or how much each individual property is worth.

     2. Another disadvantage is that financial statements depict past activities but do not show projected values or future events.

Contracts

A contract can generate liability loss exposures in two ways.

  • Hold Harmless agreement (or indemnity agreement) A contractual provision that obligates one of the parties to assume the legal liability of another party.
  • If the organization fails to fulfill a valid contract.

Indemnification : the process of restoring an individual or organization to a pre-loss financial condition.

Insurance Policies: insurance is a means of risk financing, but reviewing insurance policies can also be helpful in risk assessment. To identify insurance coverage that an organization has not purchased, and therefore potentially identify insurable exposures that have not been insured, a risk management professional can compare the organization's coverage currently in effect against an industry checklist of insurance policies.

Organizational Policies and Records: Loss exposures can be identified using organizational policies and records, such as corporate bylaws, board meeting minutes, employee manuals, procedure manuals, mission statements, and risk management policies. Sometimes internal documents are also reviewed, because they may reveal other loss exposures to the organization.

Flowcharts: a flowchart is a diagram that depicts the sequence of activities performed by a particular organizational process. It shows the sequence of and relationships between those operations.

Organization charts: depict the hierarchy of an organization's personnel and can help identify key personnel for whom the organization may have a personnel loss exposure. They can also track the flow of information through an organization and identify any bottlenecks that may exist. An organization chart does not, by itself, indicate whether a person is key personnel or show the importance of the individual to the continued operation or profitability of the organization.

Loss Histories: Loss histories are often an important indicator of an organization's current or future loss exposures. Loss histories of other comparable organizations are also studied and analyzed if the losses have not occurred to the organization itself.

Compliance Review: minimizes or avoids liability loss exposures by ensuring compliance with local, state, and federal statutes and regulations. In-house legal and accounting resources or outside expertise are generally used to conduct the compliance review. Drawback: because compliance requirements are ever-changing, remaining in compliance requires ongoing monitoring, which is expensive and time consuming.

Personal Inspections: the inspector should take the opportunity to discuss the particular operations with front-line personnel, who are often best placed to identify non-obvious loss exposures. Inspection is expensive and requires individuals with the skills and expertise to identify unexpected but possible loss exposures.

Expertise within and Beyond the Organization: experts from fields such as law, finance, statistics, accounting, auditing, and the technology of the organization's industry can be consulted; information is gathered through interviews and questionnaires.

Example of Expertise is Hazard Analysis : A method of analysis that identifies conditions that increase the frequency or severity of loss. 

In summary, to analyze loss exposures and gather data, we use the techniques below:

  • Document Analysis
  • Risk Assessment Questionnaires and Checklists
  • Financial Statements and Underlying Accounting Records
  • Contracts
  • Insurance Policies
  • Organizational Policies and Records
  • Flowcharts and Organizational Charts
  • Loss histories
  • Compliance Review
  • Personal Inspections
  • Expertise within and beyond the Organization


Data requirements for analyzing loss exposures, where the analysis of past losses from similar loss exposures is the basis for current or future loss exposure information:

  • Relevant Data: data should not be too old, or it will no longer be relevant, and it should come from similar loss exposures
  • Complete Data: what counts as complete depends largely on the nature of the loss exposure being considered; complete data helps to isolate the cause of each loss and to make reasonable estimates of the dollar amounts of future losses
  • Consistent Data: so that future loss dollar amounts are neither underestimated nor overestimated, data must be expressed in constant dollars to adjust for differences in price levels, i.e., inflation. Inflation is the most common cause of distortion; to prevent this distortion, historical losses should be adjusted (indexed) so that loss data are expressed in constant dollars.
  • Organized Data: generally organized by calendar date to check for any seasonality in the data. If the data are organized by loss size (in increasing or decreasing order), they can reveal clusters of losses by severity. Organizing losses by size is also the foundation for developing loss severity distributions or loss trends over time.


To predict the four dimensions of loss exposures accurately, the data must be collected on a consistent basis for all recorded losses, and the data must be expressed in constant (or real) dollars.

Other terms used to describe the dollar values of historical losses:

Nominal Dollars : dollar values at the time of loss. For example, if a fire destroyed a building in 1995 and it cost $100,000 to repair the building in 1995, then the loss in nominal dollars is $100,000

Current Dollars: dollar values today. This value involves inflating all historical dollar values to today's value by using some measure of inflation, such as the Consumer Price Index.

Real or Constant Dollars: dollar values in some base year. This enables comparison of losses that occurred in different time periods. The choice of base year does not matter; for convenience, the current year is usually chosen.
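A minimal sketch of this indexing, assuming hypothetical price index values: each nominal loss is restated in the dollars of a chosen base year by multiplying by the ratio of the base-year index to the loss-year index.

    # Sketch: restate nominal historical losses in constant (base-year) dollars.
    # The index values below are hypothetical, purely for illustration.
    price_index = {1995: 152.4, 2005: 195.3, 2015: 237.0}
    base_year = 2015

    losses_nominal = {1995: 100_000, 2005: 150_000}   # nominal dollars at time of loss

    losses_constant = {
        year: amount * price_index[base_year] / price_index[year]
        for year, amount in losses_nominal.items()
    }
    # The 1995 loss of $100,000 is roughly $155,500 in 2015 dollars.
    print({year: round(value) for year, value in losses_constant.items()})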


Four dimensions of Analyzing loss exposures

  • Loss Frequency : the number of similar losses during a specific period (measured with the mean, median, and mode; further analysis can be done with standard deviation and skewness measures)
    • This method cannot be used for low-frequency, high-severity events, so estimates are conveyed with a stated degree of error or confidence
    • Frequency distributions are usually discrete probability distributions based on past data regarding how often similar events have happened
    • Common applications of relative frequency measures in risk management are injuries per person per hour in workers compensation claims and auto accidents per mile driven
  • Loss Severity : the dollar amount of loss for a specific occurrence. We use the MPL (maximum possible loss), which is the total value exposed to loss at any one location or from any one event. For statistical purposes some assumptions are made, and MPL is often set close to 95% (or 98%) of the maximum exposure value.
    • Also called the worst-case loss. In theory, liability losses are limited only by the defendant's total wealth.
    • For example, in the case of fire damage to a building and its contents, the maximum possible loss is typically the value of the building plus the total value of the building's contents
  • Total Dollar Losses : the total dollar amount for all occurrences during a specific period. Two measurements are used: expected total dollar losses and worst-case total dollar losses (i.e., maximum frequency of loss combined with maximum severity)
  • Timing : the points at which losses occur and loss payments are made. The timing dimension is significant because money held in reserve to pay for a loss can earn interest until the actual payment is made. Whether a loss is counted when it is incurred or when it is paid is also significant for various accounting and tax reasons. Investment income and interest earnings should be considered when analyzing the timing dimension of loss exposures. A delay in payment increases the uncertainty associated with the loss amount, but it allows reserves to earn interest or investment income over a longer period of time. (A brief numeric sketch of these dimensions follows this list.)
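The sketch below (all figures hypothetical) illustrates the four dimensions: frequency and severity statistics taken from a small loss history, total dollar losses for the period, and the present value of a delayed payment for the timing dimension.

    # Sketch of the four dimensions, using hypothetical loss data.
    from statistics import mean, median

    losses = [2_000, 3_500, 5_000, 8_000, 12_000]   # five losses during one year

    frequency = len(losses)              # loss frequency: 5 losses in the period
    severity_mean = mean(losses)         # average loss severity: 6,100
    severity_median = median(losses)     # 5,000
    total_dollar_losses = sum(losses)    # 30,500 for the period

    # Timing: a 10,000 loss paid three years after it is incurred, with reserves
    # earning 4% per year, has a present value of roughly 8,890.
    present_value = 10_000 / (1 + 0.04) ** 3
    print(frequency, severity_mean, total_dollar_losses, round(present_value))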

Loss frequency and loss severity can be studied jointly either by the Prouty approach or by creating a single total claims distribution.

Four categories of loss frequency and three categories of loss severity are shown below. They are subjective in nature, but these categories help risk management professionals prioritize loss exposures.

Category   Loss Frequency   Loss Severity

1          Almost nil       Slight
2          Slight           Significant
3          Moderate         Severe
4          Definite         -


A Loss exposure's frequency and severity tend to be inversely related

MPL is the total value exposed to loss at any one location or from any one event


Assessing the credibility of data refers to the level of confidence that available data can accurately indicate future losses; it typically involves empirical distributions developed from past losses.

Several factors may influence data credibility for an organization, including the age of the data and whether the data represent actual losses or estimates of losses. Both internal and external factors matter:
  • Internally, changes in the way that an organization operates, such as alterations to manufacturing processes or changes in data collection methods, may significantly reduce the credibility of previously collected data.
  • Externally, events such as natural catastrophes, large liability awards, or terrorist attacks not only alter the data that are collected in that time frame, but also may cause shifts in the operating environment that render previously collected data less credible. Because of delays in the reporting and paying of claims, more recent data are not always actual losses, but estimates of what the ultimate losses will be.

Once projections are made along the four dimensions of loss exposures, the analysis of the loss exposures will often dictate which type of risk control or risk financing measures should be implemented. The average losses during the coming years might be projected to fall along the line labeled "projected," and the probable maximum loss might be projected to fall along the line labeled "maximum" (i.e., the MPL). Probable minimum loss levels might also be projected, as shown by the "minimum" line.


Summary :

To identify Loss exposures we do document analysis, compliance review, inspections and expertise within and beyond the organization.

To accurately analyze loss exposures using data on past losses, the data should be relevant, complete, consistent, and organized (by time of loss, time losses are paid, or size of loss).

The analysis step of the risk management process involves considering the four dimensions of a loss exposure: loss frequency, loss severity, total dollar losses, and timing.



Chapter 5 - Loss Reserving Techniques



Accurate reserving practices are critical to the stability and solvency of insurers.

Loss Reserve: an estimate of the amount of money the insurer expects to pay in the future for losses that have already occurred and been reported but are not yet settled. It also includes loss adjustment expense (LAE) amounts, related either to individual claim files or to the overall claims operation, that cannot be allocated to a specific claim file.

Role of loss reserves, including the relationship over time between incurred losses, paid losses, reserves, and policyholders' surplus

The time required to investigate a property claim is shorter than for third-party liability claims.



Purpose of Loss Reserve : Estimate the insurer’s liability for losses that have occurred in the past but have not yet been settled.

Loss Adjustment Expenses: the expenses that an insurer incurs to investigate, defend, and settle claims according to the terms specified in the insurance policy. The National Association of Insurance Commissioners (NAIC) categorizes LAE as DCC or AO:
  • DCC (Defense and Cost Containment) expenses are often referred to as ALAE, allocated loss adjustment expenses: the expenses an insurer incurs to investigate, defend, and settle specific claims according to the terms specified in the insurance policy.
  • AO (Adjusting and Other) expenses are often referred to as ULAE, unallocated loss adjustment expenses, which cannot be readily associated with a specific claim and include all other expenses, such as adjusters' salaries and other fees and expenses.

LAE: Insurers must establish a reserve to cover the delay between the time a loss occurs and the time the loss is settled and the claim is paid. This delay is shorter for most property claims and longer for liability claims, sometimes extending up to a few years.

Actuaries tend to estimate the unpaid LAE

Loss adjustment expense reserves are estimates of the future expenses that an insurer expects to incur to investigate, defend, and settle claims for losses that have already occurred.


An insurer's total incurred losses are calculated as: Incurred losses = Paid losses + Loss reserves + Loss adjustment expense reserves
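For example (hypothetical figures), an insurer with $600,000 of paid losses, $300,000 of loss reserves, and $100,000 of loss adjustment expense reserves would report incurred losses of $1,000,000.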


Paid Losses : Losses that have been paid to, or on behalf of, insureds during a given period.

Incurred Losses : The losses that have occurred during a specific period, no matter when claims resulting from the losses are paid

Incurred but not reported losses (IBNR): A reserve established for losses that reasonably can be assumed to have been incurred but not yet reported. It is based on past claims experience and modified for current conditions of frequency and severity of reported claims. This is a bulk (or aggregate) reserve, established in case the case-by-case reserves become inadequate.

Other reasons to have IBNR reserves:

  • Additional cost of claims that have been reopened after previously being settled and closed
  • Growth in reported case reserves, or an amount for reported losses for which case reserves are inadequate
  • They are based on past experience and modified for current experience
  • They serve, in part, to account for future growth of known losses.


Life cycle of Incurred Losses

Loss reserve amounts must be reviewed and updated to reflect changes in paid losses and loss expense amounts over time. One method is the accident-year method.

When accounting for accident-year losses for year 20X6, a reserve change in 20X7 for a loss that occurred in 20X6 would be included. An accident-year method aggregates incurred losses for a given period (such as twelve months) using all incurred losses for insured events that occurred during that period. Any losses that occurred in previous or later periods are not included.

Accident Year Method: A method of organizing ratemaking statistics that uses incurred losses for an accident year, which consist of all losses related to claims arising from accidents that occur during the year, and that estimates earned premiums by formulas from accounting records. Any losses that occurred in previous or later periods are not included. An accident year's accounts can be kept open for many years until all losses that occurred in that year are fully paid.


Ultimate Losses: the final paid amount for all losses in an accident year. Incurred losses for an accident year are generally less than the ultimate losses; as losses develop, loss reserves increase, causing the accident year's incurred losses to increase at later evaluations.

The increase or decrease of incurred losses over time is called loss development. Actuaries can track the life cycle of incurred losses related to a single accident year.


Implications of Inadequate Loss Reserves

Underestimating or overestimating the final cost of claims can distort an insurer's financial condition. Continued overstating of loss reserves could result in tax penalties relating to the taxes that would otherwise apply to the resulting deferred income. Past claims are also the basis of future rates: as part of ratemaking, actuaries base future rates not only on the amounts paid on both open and closed claims, but also on the amounts reserved for open and IBNR claims.

Challenges to establishing Adequate Loss Reserves

  • Incomplete or inaccurate information
  • Reserve inaccuracy can also be the result of a lack of expertise on the claims representative's part or an unwillingness to reevaluate the claim and adjust the loss reserve amount where appropriate.
  • Lack of training of claims representatives or frequent claims adjuster turnover
  • Management changes
  • Changes in reserving guidelines
  • Restructuring of reinsurance programs
  • Price Inflation, because reserves should reflect the ultimate cost of a claim and not the claim's present value, they should account for the claim's future settlement value.
  • Changes in legislation and regulation
  • Judgments in court cases can lead to new case law and open emerging areas of coverage


Relationship Between Loss Reserves and Surplus (policyholders' surplus indicates the insurer's net worth)

Loss reserves are one of the largest liabilities on an insurer's balance sheet and are shown under the "Losses" element. Loss Adjustment Expenses reserves are included within the "Loss Adjustment Expenses" element of the Balance sheet

If reserves are too low, the difference between assets and liabilities will result in an overstated policyholders' surplus amount, thereby wrongly overstating underwriting profit.


Principal Elements of an Insurer Balance Sheet

Assets: Bonds, Stocks, Cash, Premium Balances, Reinsurance Recoverables

Liabilities: Losses, Loss Adjustment Expenses, Unearned Premiums

Surplus and Other Funds: Surplus as Regards Policyholders


Insurers must estimate reserves as precisely as possible; however, estimating reserves is difficult because, in most cases, the amount that the insurer will eventually pay for a claim is uncertain, and the insurer may not know all the facts about the underlying claim.

Case Reserve: A loss reserve assigned to an individual claim to pay for the ultimate loss amount. The primary insurer's claims department usually sets case reserves.

Reinsurers may supplement the primary insurer's case reserves for their own financial reporting purposes.

Case reserves can be established for these categories of loss reserves:

  • Reported losses, payment certain: calculating this type of reserve is simple; it is a matter of adding the agreed settlement amounts for all claims between claimant and insurer.
  • Reported losses, payment uncertain: three methods are commonly used to determine case loss reserves for reported losses when the amount of payment is uncertain: the judgement method, the average method, and the tabular method
  • Allocated loss Adjustment expenses (ALAEs)

Judgement Method: A method to establish a case loss reserve based largely on experience with similar claims. One weakness of the judgement method is that the accuracy of its results depends on the quality and extent of the claims representative's experience.

Average Method (or factor method): A method to establish a case reserve by using an average amount for specific categories of claims, based on an analysis of past claims and trended for inflationary changes. Under the average method, reserves for some individual claims are inadequate and reserves for other claims are excessive. However, if the average is accurate, the aggregate loss reserve accurately reflects the ultimate loss amounts for all outstanding claims.

The average method is most suitable for claims that are relatively frequent, reported and paid promptly, and not subject to extreme variations, such as auto collision claims; it would be the best method for reserving auto claims. It is not suitable for liability claims (although the judgement method and the average method are sometimes used together for liability claims).

Tabular Method: A case reserving method that establishes an average amount for all claims that have similar characteristics in terms of the claimant's age, health, and marital status. It is primarily useful for loss reserves for lost-income benefits under workers compensation insurance and for calculating structured settlement amounts under liability insurance.

The tabular method uses rates and factors from one or more actuarial tables to calculate the present value of future loss payments. The present value amount becomes the case loss reserve for those payments. These tables are examples of those that can be used: 

  • Morbidity Table, showing the likelihood of sickness or injury
  • Mortality tables, showing the likelihood of death
  • Annuity Tables, showing the likelihood of survival
  • Remarriage Tables, showing the likelihood of remarriage by a widow or widower

Each case loss reserve calculated by the tabular method can be considered an average reserve for all claims with the same characteristics.

The primary weakness of the tabular method is that its applicability is limited to situations in which a fixed amount of benefits is paid over a period of time, as in life insurance. For example, a case loss reserve for a lost-income benefit can be calculated by using mortality tables and present value factors.
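A minimal sketch of that idea, using hypothetical payment probabilities and a hypothetical discount rate; in practice the rates and factors would come from the actuarial tables listed above.

    # Sketch: case reserve as the present value of fixed future benefit payments,
    # weighted by the probability each payment is made (all inputs hypothetical).
    annual_benefit = 20_000            # fixed yearly lost-income benefit
    discount_rate = 0.04
    payment_probs = [0.99, 0.97, 0.95, 0.92]   # from a (hypothetical) mortality table

    case_reserve = sum(
        annual_benefit * p / (1 + discount_rate) ** (t + 1)
        for t, p in enumerate(payment_probs)
    )
    print(round(case_reserve))   # present value of the expected payments, ~69,600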

Case reserves for ALAE can be established by using the judgement method or a fixed percentage of the case loss reserve.

Correcting Case Reserves: for inadequate case reserves, one method of correcting total case reserves is to increase the case reserve for each claim. The simplest way to do this is to add the same percentage to each. These increases are often called "additional case reserves." A more time-consuming method of correcting understated case reserves is to review each open claim file, increasing only those reserves that are inadequate. For their financial reporting purposes, reinsurers may supplement the primary insurer's case reserves: a reinsurer's claims personnel may review the primary insurer's claim files and add the amounts that they feel are necessary to account for loss development. The reinsurer's total case reserves would then consist of the primary insurer's case reserves and the reinsurer's additional case reserves.


Bulk Reserve: Reserves established for the settlement of an entire group of claims. 

Typically established for 

  • Reported losses, payment uncertain: reserves for reported losses when the amount of payment is uncertain can be calculated on a bulk basis by subtracting the amount already paid for losses from a certain percentage of total earned premium
  • Incurred but not reported (IBNR) reserves: reserves for losses that have occurred but have not yet been reported to the insurer. An example is liability claims, where the loss has happened but how much liability applies is tremendously hard to determine. The IBNR loss category also includes a reserve for reported losses that are expected to develop; that is, the final payment for these losses is expected to exceed the amount for which they are currently reserved. (This component of IBNR is sometimes called IBNER, incurred but not enough reserved.)

         IBNR reserves are, by their very nature, bulk reserves. IBNR reserves are residual reserves because, at any point in time, they equal the difference between ultimate losses and reported incurred losses.

         IBNR reserves = Ultimate losses - Reported incurred losses

         Three basic methods of estimating IBNR reserves exist: the loss ratio method, the percentage method, and the loss triangle method.

Loss Ratio Method : This method assumes that the ultimate loss ratio will equal the loss ratio that was considered when calculating premium rates. Deducting paid and reserved amounts for reported losses from the ultimate loss amounts yields the IBNR reserve.

The loss ratio method may be useful in the early stages of developing IBNR reserves for long-tail liability insurance. One weakness of the loss ratio method is that the actual loss ratio seldom equals the anticipated loss ratio. In fact, the difference between them can be substantial. If the actual loss ratio is less than the anticipated loss ratio, the loss ratio method results in redundant reserves. If the actual loss ratio is greater than the anticipated loss ratio, the method results in inadequate reserves. Furthermore, if the premium rates charged were inadequate (as evidenced by an underwriting loss), the reinsurer needs to recognize the inadequate subject premium rates used by the primary insurer when calculating the anticipated loss ratio.

Despite these weaknesses, the loss ratio method is often used in the early stages of development for long-tail liability insurance. After 24 months, the loss triangle method is more reliable.
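A worked sketch of the loss ratio method with hypothetical figures: the anticipated loss ratio times earned premium gives ultimate losses, and deducting paid and reserved amounts for reported losses yields the IBNR reserve.

    # Sketch: IBNR via the loss ratio method (hypothetical figures).
    earned_premium = 10_000_000
    anticipated_loss_ratio = 0.65      # the loss ratio assumed when rates were made

    ultimate_losses = earned_premium * anticipated_loss_ratio   # 6,500,000
    reported_incurred = 4_200_000      # paid losses + case reserves for reported losses

    ibnr_reserve = ultimate_losses - reported_incurred          # 2,300,000
    print(ibnr_reserve)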


Percentage Method : This method uses historical relationships between IBNR reserves and reported losses to develop percentages that are used in IBNR forecasts. The number of months necessary for losses to develop to their ultimate level varies depending on the type of insurance. The percentage method is acceptable for estimating property loss reserves because they can be estimated with reasonable accuracy soon after they are reported. It is likely to be less accurate for liability loss reserves which typically take longer to develop.


Loss Triangle Method: also known as the loss development method, the chain link method, the chain ladder method, and the link ratio method. It is used for liability insurance, particularly liability insurance that requires many years to fully develop. This method does not produce reliable results unless the historical data and the actuarial assumptions are accurate.

A loss triangle is a display of historical data in the shape of a triangle. The data usually consist of the total reported losses for each historical year, although other data can be used, such as losses paid, number of claims paid, or average claim size

A major assumption of the loss triangle method is that the historical pattern of development will continue

The loss data used in a loss triangle may or may not include allocated loss adjustment expense (ALAE) information. If the loss data include ALAE, then the forecasted loss amounts will also include it.


These are the four major steps for calculating IBNR reserves from a loss triangle:

  • Organize historical data in a loss triangle format
  • Calculate twelve-month loss development factors from the loss triangle
  • Calculate ultimate loss development factors from the twelve-month development factors
  • Use ultimate loss development factors to calculate the IBNR reserves

These factors are called twelve-month loss development factors (also known as age-to-age loss development factors or link ratios).

Ultimate loss development factor : A factor that is applied to the most recent estimate of incurred losses for a specific accident year to estimate the ultimate incurred loss for that year.

Selecting twelve-month development factors (a worked sketch follows these selection rules):

  • If the three averages show an increasing trend, select the largest factor
  • If the three averages show a decreasing trend, select the smallest factor
  • If the three averages do not show a trend, select the factor intermediate in value.
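A compact sketch of the four steps on a small hypothetical triangle of reported losses: it computes twelve-month (age-to-age) factors, chains them into ultimate development factors, and backs out IBNR as ultimate losses less reported losses. A real triangle would have more years and a tail factor beyond the last evaluation.

    # Sketch: IBNR from a loss triangle (hypothetical reported losses, in thousands).
    # Rows = accident years, columns = evaluations at 12, 24, 36 months.
    triangle = {
        2020: [1_000, 1_500, 1_650],
        2021: [1_100, 1_680],
        2022: [1_250],
    }

    # Step 2: twelve-month (age-to-age) development factors, averaged across years.
    factor_12_24 = (1_500 / 1_000 + 1_680 / 1_100) / 2    # ~1.514
    factor_24_36 = 1_650 / 1_500                          # 1.100
    # Assume losses are fully developed at 36 months (tail factor = 1.0).

    # Step 3: ultimate loss development factors, chained from the latest evaluation.
    ult_from_24 = factor_24_36                 # for 2021, currently at 24 months
    ult_from_12 = factor_12_24 * factor_24_36  # for 2022, currently at 12 months

    # Step 4: IBNR = ultimate losses - reported losses to date.
    ibnr_2021 = triangle[2021][-1] * ult_from_24 - triangle[2021][-1]
    ibnr_2022 = triangle[2022][-1] * ult_from_12 - triangle[2022][-1]
    print(round(ibnr_2021), round(ibnr_2022))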

A typical assumption is that the reserves are still inadequate.

  • Loss adjustment expenses, both allocated (ALAE) and unallocated (ULAE), are reserved via bulk reserves

Bulk reserves for ALAE can be estimated by applying a percentage factor to either earned premiums or incurred losses. The percentage factor is determined by analyzing the insurer's past experience. A typical value is 25% of current incurred losses.

One disadvantage of this method of estimating ALAE is that the calculation assumes no changes have occurred that affect the factor. If changes have occurred, the factor must be adjusted. Another disadvantage results from the manner in which losses are usually settled. Small losses, especially those settled without payment, are usually settled more quickly than large losses. Consequently, total loss reserves at any given time are likely to include a disproportionate number of large losses. Because large losses usually involve proportionally more ALAE than small losses, the percentage method may underestimate the ALAE reserve. Calculating ALAE using a loss triangle may overcome this problem.

ULAE cannot be attributed to specific claims. The reserve for ULAE is usually estimated as a percentage of the sum of incurred losses and ALAE. The insurer determines the percentage based on experience. Because ULAE consists of budgeted items, the total amount to be paid in a given year is related to losses incurred in earlier years, particularly for long-tail liability insurance.


Combined methods of Loss Reserving

Two-part combination method: to realize the advantages of both the loss ratio method and the loss triangle method, some actuaries have suggested that a weighted average of the two methods be used, with weights varying by the number of months after the start of the policy year. At the end of the accident year (twelve months of development), the reserve would be based entirely on the loss ratio method, because the reported loss data are not mature enough to estimate ultimate losses using the loss triangle method. Starting at twenty-four months of development, the loss reserve can be partially based on the loss triangle method. At sixty months of development and thereafter, the reserve is based solely on the loss triangle method.
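As a hedged illustration with invented weights: at thirty-six months of development the reserve might be set as, say, 0.4 x (loss ratio method estimate) + 0.6 x (loss triangle method estimate), with the weight on the loss triangle method growing until it reaches 1.0 at sixty months.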



Bornhuetter-Ferguson Method

A variation of the two-part combination method that does not rely on judgmental weights is the Bornhuetter-Ferguson method. The method estimates the incurred but not reported (IBNR) reserve using expected losses and an IBNR factor. It is frequently used when the losses reported to the insurer are not sufficiently mature to use the loss triangle method. Immature loss data occur because of the delay between the time a loss occurs and when it is reported to the insurer. The delay is more pronounced for liability insurance than for property insurance.

Likewise, reinsurers experience an even longer delay in loss reporting because they establish reserves only after the primary insurer does. Losses arising out of casualty excess of loss treaties generally suffer the most delay: the reinsurer's reported losses can be zero for the first two or three years, before retentions are exceeded and primary insurers report known claims to their reinsurers. Even small changes in assumptions can cause wide variations in IBNR reserves and thereby net income, sometimes changing a profit to a loss or vice versa.


The Bornhuetter-Ferguson method of estimating loss reserves uses expected losses and an incurred but not reported (IBNR) factor

Weakness of this method : 

  • It involves inherent subjectivity in selecting incurred but not reported (IBNR) factors.
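A minimal sketch of the Bornhuetter-Ferguson calculation with hypothetical inputs: expected losses come from an expected loss ratio, and the IBNR factor is the portion of ultimate losses expected to be unreported at the current maturity (derived here from an assumed ultimate development factor).

    # Sketch: Bornhuetter-Ferguson IBNR estimate (hypothetical figures).
    earned_premium = 8_000_000
    expected_loss_ratio = 0.70
    expected_losses = earned_premium * expected_loss_ratio     # 5,600,000

    ultimate_dev_factor = 2.0    # assumed; implies half of losses are reported so far
    ibnr_factor = 1 - 1 / ultimate_dev_factor                  # 0.50 expected unreported

    ibnr_reserve = expected_losses * ibnr_factor               # 2,800,000
    reported_losses = 2_500_000
    ultimate_estimate = reported_losses + ibnr_reserve         # 5,300,000
    print(ibnr_reserve, ultimate_estimate)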


Three-part combination method

The three-part combination method combines the loss ratio method, the loss triangle method, and case loss reserves. This combination therefore requires three sets of weights. The weights are set so that they place most or all of the emphasis on the loss ratio method in the first year. Thereafter, the loss ratio method is phased out and the loss triangle method is phased in. Subsequently, the loss triangle method is phased out, and more weight is placed on case loss reserves.

Finally, when all losses have been reported and only a few remain open, the reserve is based on case loss reserves. This emphasis on case loss reserves is based on the belief that in the final stages of development, case reserves are likely to be more accurate than bulk reserves.


Chapter 6 - Ratemaking Techniques


Insurance ratemaking is challenging because, when rates are developed, the amounts of fortuitous (unforeseen) losses and their associated expenses are unknown. Throughout the ratemaking process, insurers strive to be competitive and profitable while also meeting all insurance policy obligations.

The future development of losses can be estimated by several actuarial methods. The most common method applies loss development factors to the current experience.

Ratemaking: the process insurers use to calculate insurance rates, which are a premium component. To be approved, rates must comply with applicable regulations. Rate regulation is generally based on having rates that are adequate, not excessive, and not unfairly discriminatory.

The primary goal of ratemaking is to develop a rate structure that enables the insurer to compete effectively while earning a reasonable profit on its operations. The ratemaking goal complements the underwriting goal, which is to develop and maintain a profitable book of business.

A rate is the charge for the exposure to risk. Rates should provide for unanticipated contingencies, such as actual losses being greater than projected.

Ideal characteristics of rates:

  • Be stable - changing rates is expensive and time consuming, may cause dissatisfaction among consumers, and can sometimes lead to regulatory or legislative actions
  • Be responsive - rates should be the best estimate possible and change in response to external factors that affect losses; they are typically revised annually. Responsiveness is a desirable ratemaking characteristic because conditions are constantly changing, and any delay between when data are collected and when they are used tends to reduce the accuracy of rates. The delay in reflecting loss experience in rates stems from several sources, including the time period during which rates are in effect, usually a full year
  • Provide for contingencies - include enough to pay for unexpectedly severe claims
  • Promote risk control - rates that reward risk control (for example, through lower premiums) give insureds an incentive to implement it
  • Reflect differences in risk exposure - the premium should not be a flat rate; otherwise some insureds will be overcharged and others undercharged

Rate Components:

  • An amount needed to pay future claims and loss adjustment expenses (prospective loss costs)
  • An amount needed to pay future expenses, such as acquisition expenses, overhead, and premium taxes (expense provision)
  • An amount for profit and contingencies (profit and contingencies factor)

The first component of an insurance rate is related to the prospective loss costs developed by advisory organizations or by insurers with large pools of loss data. The second and third components are related to an expense multiplier. Once the insurance rate is calculated, it is multiplied by the appropriate number of exposure units to produce a premium


Some states require that investment income be considered explicitly in rate calculations.

Rate : The price per exposure unit for insurance coverage.

Earned Exposure Unit: an exposure unit for which the insurer has provided a full period of coverage. The period is typically measured in years.

Loss Costs: the portion of the rate that covers projected claim payments and loss adjusting expenses. It is sometimes also called the pure premium: the average amount of money an insurer must charge per exposure unit in order to cover the total anticipated losses for that line of business.

Premium: the price of the insurance coverage provided for a specified period.

Expense Provision: the amount that is included in an insurance rate to cover the insurer's expenses (such as acquisition expenses, general expenses, premium taxes, and licenses and fees paid to government, regulatory, and advisory organizations) and that might include loss adjustment expenses but excludes investment expenses. This component is sometimes referred to as underwriting expenses: costs incurred by an insurer for operations, taxes, fees, and the acquisition of new policies.

Loss adjustment expenses (LAE) are the expenses associated with adjusting claims. The expenses are often split into either allocated or unallocated LAE. Some allocated loss adjustment expenses, such as legal fees to defend a claim, may be included in the pure premium instead of in the expense provision. 

Insurers add a loading for profit and contingencies. This loading protects the insurer against the possibility that actual losses and expenses will exceed the projected losses and expenses included in the insurance rate. If excessive losses or expenses are not incurred, the funds generated by the loading produce additional profit for the insurer.

Underwriting Profit: income an insurer earns from premiums paid by policyholders, minus incurred losses and underwriting expenses.

Investment Profit: investment operations use the funds generated by the insurance operations to buy and sell bonds, stocks, and other investments to earn an investment profit. The return from these investments is called investment income.

Insurers commonly consider investment results explicitly in their rate calculations. Some states even require that investment income be considered explicitly. Sophisticated models are available that can be used to include investment returns in the insurance rate. The investment return earned by an insurer depends largely on the types of insurance written, the loss reserves, and associated unearned premium reserves.

Factors that affect ratemaking (areas of uncertainty):

  • Estimation of losses
  • Delays in data collection and use
  • Change in the cost of claims
  • Insurer's projected expenses
  • Target level of profit and contingencies

The difference between the estimated amount that will ultimately be paid for claims and the actual loss amount paid to date is the loss reserve.

Incurred losses include both paid losses and outstanding loss reserves.

Insurance rates are mostly based on claims paid during a three-year or five-year period.

In theory, an insurer could avoid this problem by waiting for all claims to be paid before using loss experience to calculate rates; when all claims incurred during a given period have been paid, there is no need for loss reserves. In practice, however, waiting would create problems. If the rate filing were delayed for several years to permit all claims to be settled, then factors such as inflation, changes in traffic conditions, and so forth would have a greater chance of changing the loss exposures. The effects of these factors might be greater than the effects of errors in estimating loss reserves.

Delays in data collection and use are mostly due to:
  • Delays by insureds in reporting losses to insurers
  • Time required to analyze data and prepare a rate filing
  • Delays in obtaining state approval of filed rates
  • Time required to implement new rates
  • Time period during which rates are in effect, usually a full year

Experience period : The period for which all pertinent statistics are collected and analyzed in the rate making process.



Change in Cost of Claims

Loss Severity and Loss frequency affect an insurer's loss experience during any given period. 

The factors below are difficult to quantify individually, but their aggregate effect on losses can be determined with reasonable accuracy by trending:

  • Economic inflation or deflation during the inevitable delay also affects the average cost of losses (severity)
  • Legislative or regulatory changes, such as modifications in rules governing claim settlement, can affect the number of losses (frequency)

Trending : A statistical technique for analyzing environmental changes and projecting such changes into the future.

Insurer's projected Expenses

Insurance rates are also based on the insurer's projected expenses rather than solely on past experience; projections rely on judgment and budgeted expenses, and allocations for general administrative expenses are also included.

Target Level of Profit and Contingencies

Investment income : Interest, Dividends, and net capital gains received by an insurer from the insurer's financial assets, minus its investment expenses.

Insurers commonly use three ratemaking methods:

  • Pure Premium method
  • Loss Ratio Method
  • Judgement Method


The methods compared (definition, data required, uses, and formulas):

Pure Premium Method

A method for calculating insurance rates using estimates of future losses and expenses, including a profit and contingencies factor.

Data required:
  • Incurred losses
  • Earned exposure units
  • Expense provision
  • Profit and contingencies factor

Uses: to develop rates from past experience (cannot be used without past experience); independent of current rates.

Steps to calculate a rate via the pure premium method:
1) Calculate the pure premium: the amount needed to pay losses and, depending on the line of business, allocated loss adjustment expenses.
2) Estimate expenses per exposure unit based on the insurer's past expenses (except investment expenses and possibly loss adjustment expenses).
3) Determine the profit and contingencies factor (which reflects net investment income).
4) Determine the rate per exposure unit:

Rate per exposure unit = (Pure premium + Fixed expenses per exposure unit) / (1 - Variable expense percentage - Profit and contingencies factor)

Loss Ratio Method

A method for determining insurance rates based on a comparison of actual and expected loss ratios.

Data required:
  • Actual loss ratio, calculated from incurred losses and earned premiums
  • Expected loss ratio, calculated as 100% - provision for expenses, profit, and contingencies

Uses: to modify existing rates (cannot be used without existing rates and cannot be used to determine rates for a new type of insurance; for new types of insurance, use the pure premium method or the judgement method).

Formulas:
1. Actual loss ratio = Incurred losses / Earned premiums
2. Expected loss ratio = 100% - Provision for expenses, profit, and contingencies

Rate change = (Actual loss ratio - Expected loss ratio) / Expected loss ratio

If the rate change percentage is negative, it indicates a rate reduction; if positive, a rate increase.

Judgement Method

A method for determining insurance rates that relies heavily on the experience and knowledge of an actuary or an underwriter who makes little or no use of loss experience data.

Data required: experience and judgment.

Uses: to develop rates when data are limited (requires skilled judgment). May be used for ocean marine insurance, some inland marine classes, aviation insurance, and terrorism coverage.

Analytics Using Big Data

Rates based on large volumes of data from both external and internal sources. Big Data can be used in commercial ratemaking. Data regarding weather, geographical features, and satellite imagery can help evaluate property risks. Text mining and social media analysis can indicate potential product liability, malpractice, or directors' and officers' exposures.

  • Weather and geographical databases for homeowners policies
  • Vehicle telematics provide information on vehicle and driver habits

For a new type of insurance, either the pure premium method or the judgement method must be used.
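The two formula-based methods can be sketched together with hypothetical numbers; the pure premium calculation follows the four steps listed above, and the loss ratio calculation produces an indicated rate change.

    # Sketch: pure premium and loss ratio ratemaking methods (hypothetical figures).

    # Pure premium method
    incurred_losses = 4_000_000
    earned_exposure_units = 10_000
    pure_premium = incurred_losses / earned_exposure_units     # 400 per exposure unit

    fixed_expenses_per_unit = 25
    variable_expense_pct = 0.20
    profit_contingencies = 0.05

    rate_per_unit = (pure_premium + fixed_expenses_per_unit) / (
        1 - variable_expense_pct - profit_contingencies)
    print(round(rate_per_unit, 2))     # (400 + 25) / 0.75 = 566.67

    # Loss ratio method
    actual_loss_ratio = 4_000_000 / 6_000_000    # incurred losses / earned premiums
    expected_loss_ratio = 1 - 0.30               # 100% - expense/profit provision
    rate_change = (actual_loss_ratio - expected_loss_ratio) / expected_loss_ratio
    print(round(rate_change, 4))       # ~ -0.0476, i.e. about a 4.8% rate decrease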


Ratemaking Process Overview (the process can vary significantly by type of insurance)

Performed by insurer staff or by an advisory organization on behalf of the insurer:

  1. Collect data
  2. Adjust data
  3. Calculate overall indicated rate change
  4. Prepare rate filings and submit to regulatory authorities as required

An insurer follows a similar process when reviewing loss costs rather than rates. The provisions for expenses and for profit and contingencies are excluded from the process, but all other adjustments and parts of the process are unchanged.

Advisory Organization: an independent organization that works with and on behalf of insurers that subscribe to its services.

Loss Cost Multiplier: a factor that provides for differences in expected losses, individual company expenses, and underwriting profit and contingencies; when multiplied by a loss cost, it produces a rate. For companies that rely on loss cost filings made by advisory organizations, the ratemaking process involves calculating and filing an appropriate loss cost multiplier.
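For example (hypothetical numbers), a filed loss cost of $500 multiplied by a loss cost multiplier of 1.40 produces a rate of $700.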


Collect Data: insurance companies collect more data than advisory organizations require. The data fall into three general categories:

  • Losses, both paid and incurred (including any loss adjustment expenses to be included in the pure premium)
  • Earned premium and/or exposure information
  • Expenses, including a profit and contingencies factor

If rates are to vary by rating class and/or territory, data must be identified for each class and territory. Ideally, the incurred losses, earned premiums, and earned exposure units should be based on the same group of policies. Because this is not always practical, approximation techniques are used.


Calendar-Year Method: a method of collecting ratemaking data that estimates both earned premiums and incurred losses by formulas from accounting records. It is suitable for fire, inland marine, and auto physical damage insurance, where losses are paid relatively quickly and loss reserves tend to be small relative to earned premiums. The calendar-year method is unsuitable for collecting ratemaking data for liability and workers compensation insurance, because the delay in loss payment can be long and the loss reserves can be large relative to earned premiums; the policy-year or accident-year method is used instead.

Policy-Year Method: a method of collecting ratemaking data that analyzes all policies issued in a given twelve-month period and that links all losses, premiums, and exposure units to the policy to which they are related.

Adjust Data: necessary because loss data are collected from present and past periods.

Actuaries use several ways of adjusting premium and loss data:

  • Adjust premiums to the current rate level (historic premiums in total are brought to the current level).
  • Adjust historic experience for future development.
  • Apply trending to losses and premiums. Premiums may also have to be adjusted for different levels of coverage purchased. For example, an automobile liability insurer finds that it is now selling much more of its $100,000-per-accident limit than the $25,000 limit it sold in the past.


Adjust premium to current rate level

On-level factor : A factor that is used to adjust historical premiums to the current rate level. It adjusts rate level for each year to the most recent period's rate levels.

Premiums may also have to be adjusted for different levels of coverage purchased.

Adjust historic experience for future development

Loss development factor : An actuarial means for adjusting losses to reflect future growth in claims due to both increases in the incurred amount for reported losses and incurred but not reported (IBNR) claims

This adjustment is made for open claims that may require future payment and for the possibility of late-reported claims for which the insurer is liable. The insurer must estimate the values of these future payments and add them to the payments to date in order to estimate the ultimate losses of each period.

Apply Trending to Losses and Premium

Examples of such changes are inflation of claim costs, the increasing safety of newer cars, and changes in legal liability. The most frequently used source of trends is historical experience. The experience can be reviewed by an insurer using its own data or by a statistical agent, such as Insurance Services Office, Inc. (ISO) or the National Council on Compensation Insurance (NCCI), using the combined experience of numerous companies. The trend adjustment commonly uses historical experience to project past trends into the future. Loss trending is usually reviewed in separate severity and frequency components.

The trends can be projected into the future using an exponential trending method: a method of loss trending that assumes the data being projected will increase or decrease by a fixed percentage each year as compared with the previous year. Price inflation is an example.
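A minimal sketch of exponential trending under an assumed fixed annual trend: the historical average severity is projected forward by compounding.

    # Sketch: exponential trend projection (hypothetical figures).
    average_severity = 5_000    # observed average claim cost in the experience period
    annual_trend = 0.06         # assumed fixed 6% increase per year
    years_ahead = 3             # gap between experience period and future policy period

    projected_severity = average_severity * (1 + annual_trend) ** years_ahead
    print(round(projected_severity, 2))   # 5,000 x 1.06^3 = 5,955.08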

Losses may need to be adjusted to current conditions if other significant external changes have affected loss payouts in recent years. For example, workers compensation insurance benefits  are established by statute. If legislation or a court decision changes these benefits, past losses must be adjusted to current benefit levels.

Trending of losses in fire insurance is generally restricted to claim severity.

Calculating Overall Indicated Rate change

The purpose of adjustments, development and trending is to bring prior experience to a level comparable to the future rate's policy period. The loss-ratio method and pure premium method can be used to produce an indication of rate change. These methods depend on the amount and type of experience available.

Loss Ratio Method: in ratemaking, a method for determining rates based on a comparison of actual and expected loss ratios. (The same term also names a loss reserving technique that establishes aggregate reserves for all claims for a type of insurance.)

Territorial Relatives: can be determined by comparing the estimated loss ratio (or pure premium) for each geographic territory to the statewide average loss ratio (or pure premium). This comparison produces factors that are applied to the statewide average rate to reflect experience in each geographic territory. If a given territory has limited experience, its territorial loss ratios are likely to vary widely. Differences from the overall average rate must be supported by credible experience.

Class relatives are developed in a manner similar to territorial relatives.


Prepare and submit Rate filings

A rate filing is a document submitted to state regulatory authorities.

A filing must include at least these seven items:

  • Schedule of the proposed new rates
  • Statement about the percentage change, either an increase or decrease, in the statewide average rate
  • Explanation of differences between the overall statewide change in rate and the percentage changes of the rates for individual territories and/or rating classes (if any)
  • Data to support the proposed rate changes, including territorial and class relatives
  • Expense provision data
  • Target profit provision included in the rates, if applicable, and any supporting calculations
  • Explanatory material to enable state insurance regulators to understand and evaluate the filing

Actuaries are the most qualified to answer questions about a rate filing; however, some insurance companies direct the questions to the legal department or to filing specialists. When loss costs are filed by an advisory organization, the insurer is responsible for filing its expense provisions, which yield its final rates.

Ratemaking factors vary by type of insurance. The variations can result from the characteristics of loss exposures, regulatory requirements, and other factors.

1) Experience Period: for fire insurance, a five-year experience period is used. The experience for each of the five years is not given equal weight; the experience for the most recent years is given greater weight to promote rate responsiveness.

The experience period for property causes of loss such as wind is even longer, frequently twenty years or more. The purpose of such a long experience period is to smooth results when a major hurricane, a series of major tornadoes, or another natural catastrophe strikes an area. These factors can be considered in determining the appropriate experience period: (1) legal requirements, if any; (2) the variability of losses over time; and (3) the credibility of the resulting ratemaking data. The second and third factors are related to some degree.

2) Trending: for property insurance, claim frequency is low and generally stable, so trending may be restricted to claim severity.

For liability insurance, separate trending of claim severity and claim frequency is common because of the different factors that affect them.

For fire insurance, trending both losses and premiums is necessary, as sketched below.
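A minimal sketch of severity trending, assuming a hypothetical 4 percent annual trend and hypothetical loss amounts; each year's losses are trended forward to the future policy period:

    # Hypothetical severity trending to the future policy period.
    annual_severity_trend = 0.04          # assumed annual trend
    losses_by_year = {2019: 1_200_000, 2020: 1_150_000, 2021: 1_300_000}
    future_policy_year = 2024

    for year, losses in losses_by_year.items():
        years_of_trend = future_policy_year - year
        trended_losses = losses * (1 + annual_severity_trend) ** years_of_trend
        print(f"{year}: {losses:,} -> {trended_losses:,.0f}")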


Incurred losses = losses paid + outstanding loss reserves. For example, if $600,000 of losses has been paid and $150,000 is held in outstanding loss reserves, incurred losses are $750,000.

A special trending problem exists in workers compensation insurance. Because the benefits for such insurance are established by statute, legislation or a court decision can change the benefits unexpectedly. A law amendment factor is used to adjust rates and losses to reflect statutory benefit changes. Actuaries can estimate with reasonable accuracy the effects of a statutory benefit change on the losses that insurers will incur under their policies.

3) Large Loss Limitations: These also apply in property insurance, for individual large losses or for accumulations of smaller losses from a single event.

  • Basic Limit: the minimum limit of coverage for which a policy can be written, usually found in liability lines.
  • Catastrophe Model: a type of computer program that estimates losses from potential future catastrophic events. A flat charge for the catastrophe exposure is included in the premium.

4) Credibility: The level of confidence an actuary has in projected losses; it increases as the number of exposure units increases. The higher the volatility, the more data are required to provide a reasonable projection of future losses. A larger number of claims per exposure unit and a smaller average claim size lead to more stable results.

Credibility Factor: The factor applied in ratemaking to adjust for the predictive value of loss data, used to minimize the variations in rates that result from purely chance variations in losses.

In auto insurance, advisory organizations and large insurers consider the statewide loss data to be fully credible. That assumption might be inappropriate for some small insurers that base their rates solely on their own loss data. For territories and classes whose loss data are not fully credible, rates are calculated as a weighted average of the indicated rate for the territory or class and the statewide average rate for all classes and territories combined; credibility factors are used as the weights, as sketched below.
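A minimal sketch of the credibility-weighted calculation; the credibility factor Z and the rates are hypothetical:

    # Hypothetical credibility weighting of a territorial rate.
    indicated_territory_rate = 620.0   # from the territory's own loss data
    statewide_average_rate = 500.0     # all classes and territories combined
    Z = 0.40                           # credibility assigned to the territory data

    final_rate = Z * indicated_territory_rate + (1 - Z) * statewide_average_rate
    print(f"Credibility-weighted rate: {final_rate:.2f}")   # 548.00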

For property insurance, because of the low average claim frequency, a three-part weighted average can be used, combining the state loss data for the rating class, regional (multistate) loss data for the rating class, and the state loss data for a major group encompassing several rating classes.

5) Increased Limit Factor: A factor applied to the rates for basic limits to arrive at an appropriate rate for higher limits; for example, when a person buys coverage with a $1 million limit while the basic limit is only $50,000. The exposure to loss is substantially greater at higher limits. In addition, higher limits can require a portion of the coverage to be reinsured, with the additional expense of reinsurance included in the rate. The greater variability of losses at higher levels of coverage requires a greater risk charge.

Risk Charge: An amount over and above the expected loss component of the premium to compensate the insurer for taking the risk that losses may be higher than expected.

Data used in developing insurance rates for basic coverage limits include only losses associated with the basic coverage limit.
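A minimal sketch of applying an increased limit factor; the ILF table and rates are hypothetical:

    # Hypothetical increased limit factor (ILF) application.
    basic_limit_rate = 400.0           # rate for the $50,000 basic limit
    ilf_table = {50_000: 1.00, 100_000: 1.35, 500_000: 2.10, 1_000_000: 2.60}

    selected_limit = 1_000_000
    rate_at_limit = basic_limit_rate * ilf_table[selected_limit]
    print(f"Rate at a {selected_limit:,} limit: {rate_at_limit:.2f}")  # 1040.00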


Chapter 7 - Risk Control


All risk management activities fall into two categories: risk control and risk financing.

Risk Control: A conscious act, or a decision not to act, that reduces the frequency and/or severity of losses or makes losses more predictable. Risk control techniques can be classified into these broad categories:

  • Avoidance: a technique that involves ceasing, or never undertaking, an activity so that the possibility of a future loss occurring from that activity is eliminated; it reduces loss frequency to zero.
  • Loss Prevention: a technique that reduces the frequency of a particular loss.
  • Loss Reduction: a technique that reduces the severity of a particular loss; measures fall into two broad categories, pre-loss (for example, a burglar alarm) and post-loss (for example, disaster recovery planning, also called a catastrophe plan or contingency plan).
  • Separation: a technique that isolates loss exposures from one another to minimize the adverse effect of a single loss. The intent is to reduce loss severity, at the cost of an increase in loss frequency; the standard deviation of losses decreases, making losses more predictable. Example: Ed is a cattle owner who also owns land in a variety of counties and relies on the proceeds from the sale of his cattle as his primary source of income. Ed decides to disperse his herd over several locations to limit the potential impact of a loss at any single location.
  • Duplication: a technique that uses backups, spares, or copies of critical property, information, or capabilities and keeps them in reserve. It reduces loss severity and, unlike separation, does not increase loss frequency, because the duplicated unit is kept in reserve. The asset or activity should be important enough to justify the expense of duplication.
  • Diversification: a technique that spreads loss exposures over numerous projects, products, markets, or regions. It is used for managing business risks rather than hazard risks and closely resembles duplication.



Avoidance can be proactive or reactive.

Proactive avoidance seeks to avoid a loss exposure before it exists, whereas reactive avoidance seeks to eliminate a loss exposure that already exists. Avoiding all risk is neither feasible nor desirable, because loss exposures arise from activities that are essential to individuals and to organizations. For example, a manufacturer of motorcycle helmets could not stop selling them in order to avoid liability loss exposures.

To evaluate the results of a risk control technique, we can examine the reduction in the mean loss, the reduction in the standard deviation of losses, and the coefficient of variation, as sketched below.
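A minimal sketch of that comparison, with hypothetical loss outcomes before and after a control measure:

    # Hypothetical comparison of loss outcomes before and after a control.
    import statistics

    losses_before = [0, 0, 10_000, 50_000, 200_000]
    losses_after = [0, 0, 8_000, 30_000, 90_000]

    for label, losses in (("before", losses_before), ("after", losses_after)):
        mean = statistics.mean(losses)
        std_dev = statistics.pstdev(losses)
        cv = std_dev / mean                       # coefficient of variation
        print(f"{label}: mean {mean:,.0f}, std dev {std_dev:,.0f}, CV {cv:.2f}")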

Heinrich's Domino Theory: The principles H. W. Heinrich outlined in his 1931 publication, after a thorough analysis of work injuries caused by accidents, became the basis of modern risk control measures. He determined that work injuries actually result from a series of unsafe acts and/or mechanical or physical hazards (dominoes) that occur in a specific order. If any one of these dominoes is removed, the entire chain is broken and the injury can be prevented. The five dominoes are:

  • social environment and ancestry
  • the fault of persons
  • personal or mechanical hazards
  • the accident
  • the injury

Disaster recovery plan (a post-loss measure of loss reduction): also called a catastrophe plan or contingency plan. A plan for backup procedures, emergency response, and post-disaster recovery that ensures critical resources are available to facilitate the continuity of operations in an emergency situation.

Calculating the Probability of Loss Under the Separation Technique
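The source leaves this calculation blank; a minimal sketch of the usual computation, assuming independent locations and a hypothetical 5 percent annual loss probability:

    # Hypothetical separation example (e.g., Ed's herd split across two locations).
    p = 0.05                                 # annual loss probability per location

    p_total_loss = p * p                     # both locations lost: 0.0025
    p_some_loss = 1 - (1 - p) ** 2           # at least one location lost: 0.0975

    print(f"P(total loss)        = {p_total_loss:.4f}")
    print(f"P(at least one loss) = {p_some_loss:.4f}")
    # Separation cuts the chance of losing everything (severity)
    # while raising the chance that some loss occurs (frequency).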


Root Cause Analysis

Determining the root cause allows organizations to discern the cause of a harmful event and prevent such events from recurring.

Root cause: The event or circumstance that directly leads to an occurrence

RCA - Root Cause Analysis: A systematic procedure that uses the results of other analysis techniques to identify the predominant cause of an accident.

The nature of root causes has four basic characteristics:
  • A root cause can be expressed as a specific underlying cause, not as a generalization.
  • A root cause can be reasonably identified.
  • A root cause must be expressed as something that can be modified.
  • A root cause must produce effective recommendations for prevention of future accidents that stem from the root cause.
Harmful events generally are associated with one of three basic causes of loss:
  • Physical cause: failure of a tangible or material item
  • Human cause: error or inaction
  • Organizational cause: losses stem from faulty systems, processes, or policies
Root Cause Analysis Approaches:
  • Safety based RCA originated from accident analysis and occupational safety and health.
  • Production-based RCA evolved from quality control procedures for industrial manufacturing.
  • Process-based RCA is similar to production-based RCA, but it also includes business processes.
  • Failure-based RCA stems from failure analysis and is used primarily in engineering and maintenance.
  • Systems-based RCA combines these four approaches with change management, risk management, and systems analysis components.

Steps in the RCA Process


The first step in the RCA process is data collection. The root causes associated with an event cannot be identified without complete information about the surrounding circumstances, facts, and causes.

The second step is causal factor charting. Causal factors are the agents that directly result in one event causing another. Usually the most readily apparent causal factor is given the most attention during the charting process, but more than one causal factor can be associated with an event.

The third step is root cause identification. Mapping or flowcharting can help determine the underlying reason(s) for each causal factor identified.

The final step is recommendation determination and implementation. Implementation should be tracked until completion.

RCA is typically used after an event has occurred, but it can also be used to predict an event.

Failure Mode and Effects Analysis (FMEA)

An analysis that reverses the direction of reasoning in fault tree analysis by starting with causes and branching out to consequences.

FMEA is used predominantly in product development and operations management. Its objective is to identify failure modes and perform effects analysis; its ultimate objective is to identify the most critical system failures that can cause the most damaging consequences. Once these are known, a plan of action can be developed and implemented.

Failure Mode: The manner in which a perceived or actual defect in an item, process, or design occurs.
Effects Analysis: The study of a failure's consequences to determine a risk event's root cause(s)
Indenture Level: An item's relative complexity within an assembly, system, or function
Local effect: The consequences of a failure mode on the operation, function, or status of the specific item or system level under analysis.
Next-higher Level Effect: The consequence of a failure mode on the operation, function, or status of the items in the indenture level immediately above the indenture level under analysis.
End Effect: The consequence of a failure mode on the operation, function, or status of the highest indenture level.



FMEA can be followed by a criticality analysis: an analysis that identifies the critical components of a system and ranks the severity of losing each component.

These four categories of failure are used in criticality analysis:

Category 1: failure resulting in excessive unscheduled maintenance
Category 2: failure resulting in delay or loss of operational availability
Category 3: failure resulting in potential mission failure
Category 4: failure resulting in potential loss of life

Risk Priority Number (RPN): The product of the rankings for consequence, occurrence, and detection, used to identify critical failure modes when assessing risk within a design or process. The RPN determines the relative risk of a particular FMEA item.

Consequence Rankings (C) - rate the severity of the effect of the failure
Occurrence Rankings (O) - rate the likelihood that the failure will occur (failure rate)
Detection Rankings (D) - rate the likelihood that the failure will not be detected before it reaches the customer

Criticality Score: The product of the consequence and occurrence rankings, used by a failure mode and effects analysis team to determine the relative risk of a failure mode; it is, in effect, the product of the probability of failure and the severity of failure. A sketch of both calculations follows.
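A minimal sketch of both scores, with hypothetical failure modes and rankings on an assumed 1-10 scale:

    # Hypothetical FMEA items: (name, consequence C, occurrence O, detection D).
    failure_modes = [
        ("Seal leak", 7, 4, 6),
        ("Sensor drift", 5, 6, 3),
        ("Pump failure", 9, 2, 8),
    ]

    for name, c, o, d in failure_modes:
        rpn = c * o * d          # Risk Priority Number = C x O x D
        criticality = c * o      # Criticality score = C x O
        print(f"{name}: RPN {rpn}, criticality {criticality}")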


Advantages of using FMEA
  • It is widely applicable to many different systems and failure modes
  • When used early in the design phase, it can reduce costly equipment modifications
  • It can improve the quality, reliability, and safety of a product or process, as well as improve an organization's image and competitiveness, possibly by reducing scrap in production
  • It emphasizes problem prevention by identifying problems early in the process and eliminating potential failure modes

Disadvantages of using FMEA

When used as a top-down tool, FMEA may identify only major failure modes in a system; other analysis methods might be better suited for that type of analysis. When used as a bottom-up tool, it can complement other methods, such as fault tree analysis (FTA), and identify more failure modes resulting in top-level symptoms.

Analyzing complex, multilayered systems with FMEA can be difficult and tedious, and studies that are not adequately controlled and focused can be time-consuming and costly.


Fault Tree Analysis (FTA): An analysis that takes a particular system failure and traces the events leading to it backward in time. It uses the deductive method of moving from the general to the specific to examine conditions that may have led to or influenced a risk event. Once these causal events are identified, the risk management professional can determine the probability of their occurrence and apply the loss control techniques that will be most beneficial in preventing the harmful event.

The fault tree is drawn as a flowchart in which events are connected by "AND" and "OR" gates, and probabilities are assigned to the basic events. Because fault tree analysis identifies the events leading to a harmful event, it naturally suggests loss prevention measures. The distinction between "AND" and "OR" gates provides some guidance in choosing among loss prevention alternatives: events joined by an "AND" gate must all occur for the next event to occur, so removing any one of them prevents it, whereas any single event entering an "OR" gate is sufficient on its own. A sketch of the gate arithmetic follows.
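A minimal sketch of the gate arithmetic, assuming independent basic events with hypothetical probabilities:

    # AND gate: all inputs must occur, so probabilities multiply.
    def and_gate(*probs):
        result = 1.0
        for p in probs:
            result *= p
        return result

    # OR gate: any input suffices; take the complement of "none occur".
    def or_gate(*probs):
        none_occur = 1.0
        for p in probs:
            none_occur *= (1 - p)
        return 1 - none_occur

    # Hypothetical tree: fire requires an ignition source AND fuel present;
    # an ignition source arises from faulty wiring OR an open flame.
    p_ignition = or_gate(0.02, 0.01)        # ~0.0298
    p_fire = and_gate(p_ignition, 0.50)     # fuel present half the time
    print(f"P(top event) = {p_fire:.4f}")   # ~0.0149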

Five Steps of Fault Tree Analysis

To ensure that a fault tree includes all necessary and sufficient events and is useful as a risk management tool, risk management professionals follow five steps:

1. Identify a specific harmful event for which to construct the fault tree. Be as specific as possible so that the events contributing to the failure can be fully described.
2. Diagram, in reverse order, the events that led to the harmful event.
3. Determine whether the events leading to any other event on the fault tree are connected by an "AND" or by an "OR" gate.
4. Evaluate the fault tree to determine possible system improvements.
5. Make suggestions to management about risk control measures that can treat the hazards identified in the fault tree.

Assumptions of FTA 

All components exist in only one of two conditions: success or failure (operational or not operational).
Any system component's failure is independent of any other component's failure.
Each failure has an unchanging probability of occurrence. Many fault trees limit the number of potential causes of failure they examine, perhaps overlooking other causes.

Limitations of FTA

If a high degree of uncertainty exists concerning the probabilities of the underlying or basic events, the probability of the top event will also be uncertain.
Important pathways to the top event might not be explored if all causal events are not included.
Because a fault tree is static, it may need to be reconstructed in the future if circumstances or procedures change.
Human error is difficult to characterize in a fault tree.
"Domino effects" or conditional failures are not easily included in a fault tree.


Chapter 8 - Analyzing Business Performance


Key Performance Indicator (KPI): A financial or nonfinancial measurement that defines how successfully an organization is progressing toward its long-term goals. Each organization establishes a level of adherence that will be tolerated in meeting its KPIs.

Critical Success Factor (CSF): An element necessary for an organization's success, derived from a strategic objective. CSFs and KPIs are interrelated: a CSF refines a strategic objective into a more concise and specific intention, and a KPI refines a CSF by measuring an activity that signals whether the CSF has been achieved.

For each KPI there is a tolerance level specifying how much deviation from the standard established in the KPI is acceptable.



Key Risk Indicator (KRI)

A tool used by an organization to measure the uncertainty of meeting a strategic business objective. Organizations use KRIs to plan for and respond to risk. KRIs can reveal emerging risks, identify risk exposure levels, and detect changes or trends in existing risk exposures.

KPIs measure an organization's progress toward achieving its objectives; KRIs measure the risk and volatility related to achieving those objectives. KPIs are lagging in nature: they measure the consequences of change that has already occurred. KRIs, by contrast, are leading (predictive) indicators.

KRIs can be internal or external to an organization. An effective approach is to have a limited number of focused KRIs linked to the risks that have the greatest potential impact on the organization.

Risk Threshold: A predefined tolerance range that measures variances from expected outcomes.

Risk Criteria: Information used as a basis for measuring the significance of a risk. Risk criteria related to an organization's strategic risks serve as the basis for KRIs.

Risk thresholds are developed from the risk criteria for each risk; they define the boundaries of risk tolerance. When a risk level exceeds its threshold, management is alerted to take action, as sketched below.
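A minimal sketch of threshold monitoring; the KRI names, readings, and threshold values are hypothetical:

    # Hypothetical KRI readings checked against their risk thresholds.
    kri_thresholds = {
        "employee_turnover_rate": 0.15,
        "it_system_downtime_hours": 8.0,
        "customer_complaints_per_1000": 12.0,
    }
    current_readings = {
        "employee_turnover_rate": 0.18,
        "it_system_downtime_hours": 5.5,
        "customer_complaints_per_1000": 14.0,
    }

    for kri, threshold in kri_thresholds.items():
        reading = current_readings[kri]
        if reading > threshold:              # threshold breached: alert management
            print(f"ALERT: {kri} = {reading} exceeds threshold {threshold}")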



KRIs arise from various sources, which can be internal or external. Effective KRIs are founded on an organization's objectives:

1. Corporate strategies and objectives
2. Company policies, regulations, and legal requirements
3. Loss experience
4. Stakeholder requirements
5. Risk assessments
6. Internal and external subject-matter experts; external experts can provide objective analysis
7. Trade publications and loss registries


Effective KRIs share several key characteristics:

1. They are based on quantifiable information
2. They are tied to risk categories (or hazard risks)
3. They are tied to the organization's objectives
4. They support management decisions
5. They can be benchmarked

Uses of KRIs

As part of a risk management program, KRIs are used to assess, define, and measure potential changes in known risk conditions or to identify and monitor emerging risks. They can provide early warnings of emerging risks, trends, or changes in risk exposures, giving the organization sufficient time to prevent or minimize any potential losses.

KRIs have other applications:
  • Validation and monitoring: KRIs can help define performance targets, business strategies, and objectives as part of the organization's strategic planning efforts
  • Enhanced efficiency
  • Clarification of risk-taking expectations: the thresholds and standards embodied in KRIs communicate and reinforce the organization's values, risk appetite, risk tolerance, and accountability expectations to staff and management
  • Monitoring risk exposures
  • Measuring risk: large financial institutions use KRIs to identify and maintain the appropriate level of economic capital to avoid unanticipated losses and to calibrate capital models

Risk Appetite: The total exposed amount that an organization wishes to undertake on the basis of risk-return trade-offs for one or more desired and expected outcomes.

Economic Capital: The amount of capital required by an organization to ensure solvency at a given probability level, such as 99 percent, based on the fair value of its assets minus the fair value of its liabilities.


Business Process Management (BPM): A systematic, iterative plan to analyze and improve business processes through life-cycle phases to achieve long-term goals and client satisfaction. BPM focuses on coordinating all activities of an organization toward the preferences and needs of its customers. This management process includes five life-cycle steps: identifying processes, designing/redesigning processes, modeling scenarios, executing process changes, and monitoring results. Mapping risks to processes can expedite risk-treatment decisions and facilitate effective risk optimization.

It is a structured approach that aligns an organization's operational components with its strategic goals and objectives, giving the organization the capability to be flexible, innovative, and attuned to emerging issues.

Risk Indicator: A tool used by an organization to measure the level of uncertainty in an activity, project, or process. Risk indicators alert the organization to be proactive, intervening to mitigate an impending event before the harm occurs.

BPM aims not only to improve processes but to do so in an ongoing manner. BPM uses risk indicators, which provide feedback that helps the organization identify needed process improvements.

Although incorporating information technology was the initial focus of BPM, human-driven processes are usually considered as well, because judgment and intuition are important components of managing business processes. BPM uses technology and human applications in a systematic, structured approach that requires continual input, evaluation, and adjustment of operational activities.

These are some benefits of BPM:

Providing senior management with regular feedback on process efficiency
Enabling efficient use of resources
Maximizing benefits from technology
Responding quickly to client, regulatory, and market demands

The goal of BPM is the achievement of organizational objectives through process improvements that incorporate technology, manage risks, improve efficiency, and increase profitability through a life cycle of continual measurement and feedback.

The BPM life cycle achieves not just one-time improvement but continual improvement in an ongoing manner:

1. Identify processes - Critical processes that support achievement of the organization's goals are selected for analysis, design, redesign, or automation. This step is often obvious and can be skipped, in which case work starts directly with step 2.
2. Design/redesign processes - The identified processes are designed or redesigned by considering workflows, affected personnel, reporting procedures, operating requirements, and referral mechanisms. Workflows include person-to-person, person-to-automated-system, or system-to-system interactions.
3. Model scenarios - Variables are applied to the process design to identify its response to various what-if scenarios.
4. Execute process changes - Software defines and executes the process, with human input as necessary; the entire process is driven by the collaboration of human and technological input.
5. Monitor results - Processes are tracked so that statistics on their performance can be gathered and compared with performance indicators. Performance indicators can measure areas such as productivity, defect rates, and cycle times, pointing to individual processes that need attention.


In summary, BPM's iterative approach emphasizes a life cycle of continual identification, analysis, and monitoring of results, which enables not just improvement but ongoing improvement. BPM is a technology-driven approach used in conjunction with human input.
External risks and factors are not considered in BPM.






Cyber Insurance - A New Way to Look at Insurance

I recently attended a webinar on a future trend in insurance: cyber insurance. While cyber insurance products were primarily served by surplus lines brokers and agents, these products are becoming more mainstream, and agents now write cyber insurance as a standalone policy or as part of a commercial package policy.

Initially, cyber risk was not covered as such, although some aspects were covered in the form of business interruption coverage. There were specific exclusions under which property damaged by a cyber attack (such as computers or industrial machines) would not be covered. This unmet need was filled by cyber insurance.


The basic tenets of the policy are the same as for any commercial policy: there is first-party coverage for damage to business-owned property due to a cyber attack, but not third-party coverage, which is already provided under liability insurance. The thin line of difference between cyber insurance and commercial property insurance is that when a claim is filed, cyber insurance kicks in only if the cause of loss is determined to be a cyber attack.

The policy also contains a special exclusion clause for cases where the cyber attack is determined to be catastrophic and has affected a large number of organizations.


One of the most interesting points of discussion in the webinar was the question of which category of business a cyber attack would be most disastrous for. The answer was critical infrastructure, such as power plants, water supplies, and heavy industrial factories.


Because these industries rely on computer systems for monitoring critical systems, a hack of any terminal could spread through the entire network and bring operations to a standstill. For example, a cyber attack on a power plant could take out the lights of an entire city, resulting in huge losses for both the city and the power plant operator. A typical minute of downtime for a power plant equals a loss of around 40,000 dollars, so even a one-hour outage would cost roughly 2.4 million dollars.



Some other points from the webinar discussion were:

Risk Management

1) Risk Framing - understanding the business


2) Risk Assessment - gauging how well the organization is doing in terms of risk assessment, which can include the following:

    • evaluate the response plan
    • account for the impact
    • audit policies and controls
    • evaluate the architecture

3) Risk Response - closing the gaps, making policies, and deciding what to do about the risk; this means assessing both reactively and proactively and having a good process for reacting.

4) Monitoring Risk - monitoring the internal and external factors


A failure in risk management can lead to a breach, and reputational harm is a huge consequence of a breach.


What can cyber insurance do? Cyber liability insurance can cover the business interruption expenses and lost income that result from a network outage or computer system malfunction caused by these primary loss exposures:

  • administrative error
  • employee negligence
  • malicious attack

It can also cover physical damage resulting from a cyber attack (property insurance doesn't always cover this).


Explanation:

Property damage caused by a cyber attack will not be covered by named-perils property policies. It may be covered under all-risk property policies, although a cyber attack is normally an exclusion.

Industry- or risk-specific enhancements are available, similar to other commercial policies (example: replacement power for utility companies). If an outage leaves a power system unable to supply power, the utility often has to purchase power from another power plant; this purchase can be covered by a purchased-power endorsement.


In a typical survey of agents:

88% have not quoted cyber liability insurance for utility companies.

38% have quoted cyber liability insurance for manufacturing companies.

These results show the market is ripe for expansion.


Where is the greater risk for cyber insurance?

Answer:

The power grid, because it is underserved - even nuclear power facilities.

Data breaches, which could result in exposing the health care records of patients.

One example given was the hack of a German steel mill, which caused physical damage to the steel plant.




Traditionally, cyber liability has mainly been sold as a business interruption policy or as a component of a policy. The main difference is that business interruption traditionally requires a physical cause, such as fire, while in cyber insurance the trigger can be something like administrative error or system failure.


Risk management strategies

  • Risk mitigation - IoT and automated control systems
  • Risk transfer - buying cyber liability insurance


Specific coverage is available for SCADA software systems. The exclusions do not include catastrophic physical perils such as fire and wind, because those are covered under physical property policies.


First-party property damage is covered under cyber liability; third-party property damage is not, as it is covered by general liability.


Loss of income due to business interruption is covered under cyber insurance, similar to property insurance.


Property damage can be covered under cyber insurance through policy enhancements.


Per the survey, agents do not have in-depth knowledge of writing cyber insurance.


http://smarter.nasinsurance.com/


The primary rating bases continue to be:

  • revenue
  • number of records stored
  • expected loss when there is an outage


Limits are set on a per-occurrence basis: 20 million per insured, spread among the other insured customers.


Typical limits range from a 1 million dollar limit up to a 100 million dollar limit.


There are very strict terrorism exclusions: nation-state attacks and all electronic terrorism.


Webinar presenter contacts:

Michael Palotay - mpalotay@nasinsurance.com

818.800.4476

www.nasinsurance.com


Dave Dalva

ddalva@strozfriedberg.com

202-534-3294

www.strozfriedberg.com