Safety engineering

NASA’s illustration showing high impact risk areas for the International Space Station

Safety engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail.

Financial modeling

Financial modeling is the task of building an abstract representation (a model) of a real world financial situation.[1] This is a mathematical model designed to represent (a simplified version of) the performance of a financial asset or portfolio of a business, project, or any other investment.

Typically, then, financial modeling is understood to mean an exercise in either asset pricing or corporate finance, of a quantitative nature. It is about translating a set of hypotheses about the behavior of markets or agents into numerical predictions.[2] At the same time, “financial modeling” is a general term that means different things to different users; the reference usually relates either to accounting and corporate finance applications or to quantitative finance applications.

While there has been some debate in the industry as to the nature of financial modeling—whether it is a tradecraft, such as welding, or a science—the task of financial modeling has been gaining acceptance and rigor over the years.[3]


In corporate finance and the accounting profession, financial modeling typically entails financial statement forecasting; usually the preparation of detailed company-specific models used for decision making purposes[1] and financial analysis.

Applications include:

  • Business valuation, especially discounted cash flow, but including other valuation approaches
  • Scenario planning and management decision making (“what is”; “what if”; “what has to be done”[4])
  • Capital budgeting, including cost of capital (i.e. WACC) calculations
  • Financial statement analysis / ratio analysis (including of operating- and finance leases, and R&D)
  • Revenue related: forecasting, analysis
  • Project finance modeling
  • Cash flow forecasting
  • Credit decisioning: Credit analysis and Consumer credit risk; impairment- and provision-modelling
  • Working capital- and treasury management; asset and liability management
  • Management accounting: Activity-based costing, Profitability analysis, Cost analysis
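Several of these applications reduce to small, well-defined calculations. As an illustration of the cost of capital item above, a minimal WACC computation might look as follows (the function name and figures are hypothetical, for illustration only):

```python
# Weighted average cost of capital with a tax shield on debt.
# Figures are hypothetical.
def wacc(equity_value, debt_value, cost_of_equity, cost_of_debt, tax_rate):
    total = equity_value + debt_value
    return (equity_value / total) * cost_of_equity \
        + (debt_value / total) * cost_of_debt * (1 - tax_rate)

# 60/40 capital structure, 10% cost of equity, 5% pre-tax cost of debt, 25% tax:
print(round(wacc(60.0, 40.0, 0.10, 0.05, 0.25), 4))  # 0.075
```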

To generalize[citation needed] as to the nature of these models: firstly, as they are built around financial statements, calculations and outputs are monthly, quarterly or annual; secondly, the inputs take the form of “assumptions”, where the analyst specifies the values that will apply in each period for external / global variables (exchange rates, tax percentage, etc.; these may be thought of as the model parameters) and for internal / company-specific variables (wages, unit costs, etc.). Correspondingly, both characteristics are reflected (at least implicitly) in the mathematical form of these models: firstly, the models are in discrete time; secondly, they are deterministic. For discussion of the issues that may arise, see below; for discussion as to more sophisticated approaches sometimes employed, see Corporate finance § Quantifying uncertainty and Financial economics § Corporate finance theory.
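The discrete-time, deterministic structure just described can be sketched in a few lines of code. The assumption names and figures below are hypothetical, chosen only to illustrate the period-by-period roll-forward:

```python
# Deterministic, discrete-time roll-forward driven by "assumptions".
# All names and figures are hypothetical.
assumptions = {
    "revenue_growth": 0.05,    # external / internal parameters per period
    "operating_margin": 0.20,
    "tax_rate": 0.25,
}

def forecast_net_income(base_revenue, years, a):
    """Apply the same assumptions period by period (annual here)."""
    results, revenue = [], base_revenue
    for _ in range(years):
        revenue *= 1 + a["revenue_growth"]
        ebit = revenue * a["operating_margin"]
        results.append(ebit * (1 - a["tax_rate"]))
    return results

print([round(x, 2) for x in forecast_net_income(100.0, 3, assumptions)])
```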

Modelers are often designated “financial analyst” (and are sometimes referred to (tongue in cheek) as “number crunchers”). Typically, the modeler will have completed an MBA or MSF with (optional) coursework in “financial modeling”. Accounting qualifications and finance certifications such as the CIIA and CFA generally do not provide direct or explicit training in modeling.[citation needed] At the same time, numerous commercial training courses are offered, both through universities and privately. For the components and steps of business modeling here, see the list for “Equity valuation” under Outline of finance § Discounted cash flow valuation; see also Valuation using discounted cash flows § Determine cash flow for each forecast period for further discussion and considerations.

Although purpose-built business software does exist (see also Fundamental Analysis Software), the vast proportion of the market is spreadsheet-based; this is largely since the models are almost always company-specific. Also, analysts will each have their own criteria and methods for financial modeling.[5] Microsoft Excel now has by far the dominant position, having overtaken Lotus 1-2-3 in the 1990s. Spreadsheet-based modelling can have its own problems,[6] and several standardizations and “best practices” have been proposed.[7] “Spreadsheet risk” is increasingly studied and managed;[7] see model audit.

One critique here is that model outputs, i.e. line items, often embed “unrealistic implicit assumptions” and “internal inconsistencies”.[8] (For example, a forecast for growth in revenue without corresponding increases in working capital, fixed assets and the associated financing may embed unrealistic assumptions about asset turnover, leverage and/or equity financing. See Sustainable growth rate § From a financial perspective.) What is required, but often lacking, is that all key elements are explicitly and consistently forecasted. Relatedly, modellers often additionally “fail to identify crucial assumptions” relating to inputs, “and to explore what can go wrong”.[9] Here, in general, modellers “use point values and simple arithmetic instead of probability distributions and statistical measures”[10] — i.e., as mentioned, the problems are treated as deterministic in nature — and thus calculate a single value for the asset or project, but without providing information on the range, variance and sensitivity of outcomes.[11] (See Valuation using discounted cash flows § Determine equity value.) Other critiques discuss the lack of basic computer programming concepts.[12] More serious criticism, in fact, relates to the nature of budgeting itself, and its impact on the organization[13][14] (see Conditional budgeting § Criticism of budgeting).
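The contrast drawn above, point values versus probability distributions, can be made concrete with a toy model. Here a hypothetical one-period project value is computed once with a point estimate for growth, and once by sampling growth from an assumed normal distribution; the simulation exposes a range of outcomes the point value hides (all names and parameters are illustrative):

```python
# Toy contrast between a point estimate and a distribution of outcomes.
# All names and parameters are hypothetical.
import random

def project_value(growth, margin=0.2, base_revenue=100.0, discount=0.10):
    """One-period value: next year's operating profit, discounted once."""
    return base_revenue * (1 + growth) * margin / (1 + discount)

point = project_value(0.05)                 # single "point value"

random.seed(0)
samples = [project_value(random.gauss(0.05, 0.03)) for _ in range(10_000)]
mean = sum(samples) / len(samples)
spread = (min(samples), max(samples))       # range the point value hides

print(round(point, 3), round(mean, 3), [round(s, 2) for s in spread])
```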

Quantitative finance

In quantitative finance, financial modeling entails the development of a sophisticated mathematical model.[citation needed] Models here deal with asset prices, market movements, portfolio returns and the like. A general distinction[citation needed] is between:

  • “quantitative financial management”, models of the financial situation of a large, complex firm;
  • “quantitative asset pricing”, models of the returns of different stocks;
  • “financial engineering”, models of the price or returns of derivative securities;
  • “quantitative corporate finance”, models of the firm’s financial decisions.

Relatedly, applications include:

  • Option pricing and calculation of their “Greeks”
  • Other derivatives, especially interest rate derivatives, credit derivatives and exotic derivatives
  • Modeling the term structure of interest rates (bootstrapping / multi-curves, short rate models, HJM) and credit spreads
  • Credit scoring and provisioning
  • Corporate financing activity prediction problems
  • Portfolio optimization[15]
  • Real options
  • Risk modeling (Financial risk modeling) and value at risk[16]
  • Credit valuation adjustment, CVA, as well as the various XVA
  • Actuarial applications: Dynamic financial analysis (DFA), UIBFM, investment modeling

These problems are generally stochastic and continuous in nature, and models here thus require complex algorithms, entailing computer simulation, advanced numerical methods (such as numerical differential equations, numerical linear algebra, dynamic programming) and/or the development of optimization models. The general nature of these problems is discussed under Mathematical finance § History: Q versus P, while specific techniques are listed under Outline of finance § Mathematical tools. For further discussion here see also: Financial models with long-tailed distributions and volatility clustering; Brownian model of financial markets; Martingale pricing; Extreme value theory; Historical simulation (finance).
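As a sketch of the simulation approach mentioned above, the following estimates a European call price by Monte Carlo simulation of geometric Brownian motion under the risk-neutral measure. Parameters are illustrative, not drawn from the text:

```python
# Monte Carlo pricing of a European call under risk-neutral GBM.
# Parameters are illustrative.
import math, random

def mc_call_price(S0, K, r, sigma, T, n_paths=100_000, seed=1):
    """Average discounted payoff over simulated terminal prices."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths

print(round(mc_call_price(100, 100, 0.05, 0.2, 1.0), 2))  # near the exact 10.45
```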

Modellers are generally referred to as “quants” (quantitative analysts), and typically have advanced (Ph.D. level) backgrounds in quantitative disciplines such as statistics, physics, engineering, computer science, mathematics or operations research. Alternatively, or in addition to their quantitative background, they complete a finance masters with a quantitative orientation,[17] such as the Master of Quantitative Finance, or the more specialized Master of Computational Finance or Master of Financial Engineering; the CQF is increasingly common.

Although spreadsheets are widely used here also (almost always requiring extensive VBA), custom C++, Fortran or Python, or numerical analysis software such as MATLAB, are often preferred,[17] particularly where stability or speed is a concern. MATLAB is often used at the research or prototyping stage[citation needed] because of its intuitive programming, graphical and debugging tools, but C++/Fortran are preferred for conceptually simple but high computational-cost applications where MATLAB is too slow; Python is increasingly used due to its simplicity and large standard library. Additionally, for many (of the standard) derivative and portfolio applications, commercial software is available, and the choice as to whether the model is to be developed in-house, or whether existing products are to be deployed, will depend on the problem in question.[17]

The complexity of these models may result in incorrect pricing or hedging or both. This Model risk is the subject of ongoing research by finance academics, and is a topic of great, and growing, interest in the risk management arena.[18]

Criticism of the discipline (often preceding the financial crisis of 2007–08 by several years) emphasizes the differences between the mathematical and physical sciences, and finance, and the resultant caution to be applied by modelers, and by traders and risk managers using their models. Notable here are Emanuel Derman and Paul Wilmott, authors of the Financial Modelers’ Manifesto. Some go further and question whether mathematical- and statistical modeling may be applied to finance at all, at least with the assumptions usually made (for options; for portfolios). In fact, these may go so far as to question the “empirical and scientific validity… of modern financial theory”.[19] Notable here are Nassim Taleb and Benoit Mandelbrot.[20] See also Mathematical finance § Criticism and Financial economics § Challenges and criticism.

See also

  • Asset pricing model
  • Economic model
  • Financial engineering
  • Financial forecast
  • Financial Modelers’ Manifesto
  • Financial models with long-tailed distributions and volatility clustering
  • Financial planning
  • Integrated business planning
  • Model audit
  • Modeling and analysis of financial markets
  • Outline of finance § Education
  • Pro forma § Financial statements
  • Profit model


  1. Jump up to:a b “How Financial Modeling Works”.
  2. ^ Low, R.K.Y.; Tan, E. (2016). “The Role of Analysts’ Forecasts in the Momentum Effect” (PDF). International Review of Financial Analysis, 48: 67–84. doi:10.1016/j.irfa.2016.09.007.
  3. ^ Nick Crawley (2010). Which industry sector would benefit the most from improved financial modeling standards?
  4. ^ Joel G. Siegel; Jae K. Shim; Stephen Hartman (1 November 1997). Schaum’s quick guide to business formulas: 201 decision-making tools for business, finance, and accounting students. McGraw-Hill Professional. ISBN 978-0-07-058031-2. Retrieved 12 November 2011. §39 “Corporate Planning Models”. See also, §294 “Simulation Model”.
  5. ^ See for example, Valuing Companies by Cash Flow Discounting: Ten Methods and Nine Theories, Pablo Fernandez: University of Navarra – IESE Business School
  6. ^ Danielle Stein Fairhurst (2009). Six reasons your spreadsheet is NOT a financial model. Archived 2010-04-07 at the Wayback Machine.
  7. Jump up to:a b Best Practice, European Spreadsheet Risks Interest Group
  8. ^ Krishna G. Palepu; Paul M. Healy; Erik Peek; Victor Lewis Bernard (2007). Business analysis and valuation: text and cases. Cengage Learning EMEA. pp. 261–. ISBN 978-1-84480-492-4. Retrieved 12 November 2011.
  9. ^ Richard A. Brealey; Stewart C. Myers; Brattle Group (2003). Capital investment and valuation. McGraw-Hill Professional. pp. 223–. ISBN 978-0-07-138377-6. Retrieved 12 November 2011.
  10. ^ Peter Coffee (2004). Spreadsheets: 25 Years in a Cell, eWeek.
  11. ^ Prof. Aswath Damodaran. Probabilistic Approaches: Scenario Analysis, Decision Trees and Simulations, NYU Stern Working Paper
  12. ^ Blayney, P. (2009). Knowledge Gap? Accounting Practitioners Lacking Computer Programming Concepts as Essential Knowledge. In G. Siemens & C. Fulford (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009 (pp. 151-159). Chesapeake, VA: AACE.
  13. ^ Loren Gary (2003). Why Budgeting Kills Your Company, Harvard Management Update, May 2003.
  14. ^ Michael Jensen (2001). Corporate Budgeting Is Broken, Let’s Fix It, Harvard Business Review, pp. 94-101, November 2001.
  15. ^ Low, R.K.Y.; Faff, R.; Aas, K. (2016). “Enhancing mean–variance portfolio selection by modeling distributional asymmetries” (PDF). Journal of Economics and Business, 85: 49–72. doi:10.1016/j.jeconbus.2016.01.003.
  16. ^ Low, R.K.Y.; Alcock, J.; Faff, R.; Brailsford, T. (2013). “Canonical vine copulas in the context of modern portfolio management: Are they worth it?” (PDF). Journal of Banking & Finance, 37 (8): 3085–3099. doi:10.1016/j.jbankfin.2013.02.036.
  17. Jump up to:a b c Mark S. Joshi, On Becoming a Quant.
  18. ^ Riccardo Rebonato (N.D.). Theory and Practice of Model Risk Management.
  19. ^
  20. ^ “Archived copy” (PDF). Archived from the original (PDF) on 2010-12-07. Retrieved 2010-06-15.

Black–Scholes equation

In mathematical finance, the Black–Scholes equation is a partial differential equation (PDE) governing the price evolution of a European call or European put under the Black–Scholes model. Broadly speaking, the term may refer to a similar PDE that can be derived for a variety of options, or more generally, derivatives.

Simulated geometric Brownian motions with parameters from market data

For a European call or put on an underlying stock paying no dividends, the equation is:

\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0

where V is the price of the option as a function of stock price S and time t, r is the risk-free interest rate, and σ is the volatility of the stock.

The key financial insight behind the equation is that, under the model assumption of a frictionless market, one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently “eliminate risk”. This hedge, in turn, implies that there is only one right price for the option, as returned by the Black–Scholes formula.
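The Black–Scholes formula referred to above has a well-known closed form for a European call on a non-dividend-paying stock. A minimal sketch, with illustrative parameters:

```python
# Closed-form Black-Scholes price of a European call (no dividends).
# Parameters below are illustrative.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# At-the-money one-year call: S = K = 100, r = 5%, sigma = 20%.
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))  # 10.4506
```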

Financial interpretation of the Black–Scholes PDE

The equation has a concrete interpretation that is often used by practitioners and is the basis for the common derivation given in the next subsection. The equation can be rewritten in the form:

\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} = rV - rS\frac{\partial V}{\partial S}

The left-hand side consists of a “time decay” term, the change in derivative value with respect to time, called theta, and a term involving the second spatial derivative gamma, the convexity of the derivative value with respect to the underlying value. The right-hand side is the riskless return from a long position in the derivative and a short position consisting of ∂V/∂S shares of the underlying.

Black and Scholes’ insight is that the portfolio represented by the right-hand side is riskless: thus the equation says that the riskless return over any infinitesimal time interval, can be expressed as the sum of theta and a term incorporating gamma. For an option, theta is typically negative, reflecting the loss in value due to having less time for exercising the option (for a European call on an underlying without dividends, it is always negative). Gamma is typically positive and so the gamma term reflects the gains in holding the option. The equation states that over any infinitesimal time interval the loss from theta and the gain from the gamma term offset each other, so that the result is a return at the riskless rate.

From the viewpoint of the option issuer, e.g. an investment bank, the gamma term is the cost of hedging the option. (Since gamma is the greatest when the spot price of the underlying is near the strike price of the option, the seller’s hedging costs are the greatest in that circumstance.)
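The offset between theta and the gamma term described above can be checked numerically. The sketch below computes a Black–Scholes call value, takes theta, delta and gamma by finite differences, and verifies that theta plus the gamma term equals the riskless return on the hedged position (parameters are illustrative):

```python
# Finite-difference check that theta + (1/2) sigma^2 S^2 gamma = r V - r S delta
# for a Black-Scholes call. Parameters are illustrative.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call, no dividends."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

S, K, r, sigma, T, h = 100.0, 100.0, 0.05, 0.2, 1.0, 1e-3
V = bs_call(S, K, r, sigma, T)
delta = (bs_call(S + h, K, r, sigma, T) - bs_call(S - h, K, r, sigma, T)) / (2 * h)
gamma = (bs_call(S + h, K, r, sigma, T) - 2 * V
         + bs_call(S - h, K, r, sigma, T)) / h ** 2
# Calendar-time theta: V is written in time to maturity T, so dV/dt = -dV/dT.
theta = -(bs_call(S, K, r, sigma, T + h) - bs_call(S, K, r, sigma, T - h)) / (2 * h)

lhs = theta + 0.5 * sigma ** 2 * S ** 2 * gamma   # left-hand side of the PDE
rhs = r * V - r * S * delta                       # riskless return on the hedge
print(round(lhs, 4), round(rhs, 4))
```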

Derivation of the Black–Scholes PDE

The following derivation is given in Hull’s Options, Futures, and Other Derivatives.[1]:287–288 That, in turn, is based on the classic argument in the original Black–Scholes paper.

Per the model assumptions above, the price of the underlying asset (typically a stock) follows a geometric Brownian motion. That is

\frac{dS}{S} = \mu\,dt + \sigma\,dW

where W is a stochastic variable (Brownian motion). Note that W, and consequently its infinitesimal increment dW, represents the only source of uncertainty in the price history of the stock. Intuitively, W(t) is a process that “wiggles up and down” in such a random way that its expected change over any time interval is 0. (In addition, its variance over time T is equal to T; see Wiener process § Basic properties.) A good discrete analogue for W is a simple random walk. Thus the above equation states that the infinitesimal rate of return on the stock has an expected value of μ dt and a variance of σ² dt.
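The discrete analogue mentioned above can be checked by simulation: over a small step Δt, sampled returns μΔt + σ√Δt·Z should have mean close to μΔt and variance close to σ²Δt (μ, σ and Δt below are illustrative):

```python
# Check that discretized returns have the stated mean and variance.
# mu, sigma and dt are illustrative.
import math, random

random.seed(42)
mu, sigma, dt, n = 0.08, 0.2, 1 / 252, 100_000
returns = [mu * dt + sigma * math.sqrt(dt) * random.gauss(0, 1) for _ in range(n)]

mean = sum(returns) / n
var = sum((x - mean) ** 2 for x in returns) / n
print(round(mean / dt, 3), round(var / dt, 4))  # close to mu and sigma^2
```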

The payoff of an option V(S, T) at maturity is known. To find its value at an earlier time we need to know how V evolves as a function of S and t. By Itô’s lemma for two variables we have

dV = \left(\mu S\frac{\partial V}{\partial S} + \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)dt + \sigma S\frac{\partial V}{\partial S}\,dW

Now consider a certain portfolio, called the delta-hedge portfolio, consisting of being short one option and long ∂V/∂S shares at time t. The value of these holdings is

\Pi = -V + \frac{\partial V}{\partial S}S

Over the time period [t, t + Δt], the total profit or loss from changes in the values of the holdings is (but see note below):

\Delta\Pi = -\Delta V + \frac{\partial V}{\partial S}\,\Delta S

Now discretize the equations for dS/S and dV by replacing differentials with deltas:

\Delta S = \mu S\,\Delta t + \sigma S\,\Delta W
\Delta V = \left(\mu S\frac{\partial V}{\partial S} + \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)\Delta t + \sigma S\frac{\partial V}{\partial S}\,\Delta W

and appropriately substitute them into the expression for ΔΠ:

\Delta\Pi = \left(-\frac{\partial V}{\partial t} - \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)\Delta t

Notice that the ΔW term has vanished. Thus uncertainty has been eliminated and the portfolio is effectively riskless. The rate of return on this portfolio must be equal to the rate of return on any other riskless instrument; otherwise, there would be opportunities for arbitrage. Now assuming the risk-free rate of return is r we must have over the time period [t, t + Δt]

r\Pi\,\Delta t = \Delta\Pi

If we now equate our two formulas for ΔΠ we obtain:

\left(-\frac{\partial V}{\partial t} - \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2}\right)\Delta t = r\left(-V + S\frac{\partial V}{\partial S}\right)\Delta t

Simplifying, we arrive at the celebrated Black–Scholes partial differential equation:

\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0

With the assumptions of the Black–Scholes model, this second order partial differential equation holds for any type of option as long as its price function V is twice differentiable with respect to S and once with respect to t. Different pricing formulae for various options will arise from the choice of payoff function at expiry and appropriate boundary conditions.

Technical note: A subtlety obscured by the discretization approach above is that the infinitesimal change in the portfolio value was due to only the infinitesimal changes in the values of the assets being held, not changes in the positions in the assets. In other words, the portfolio was assumed to be self-financing.[citation needed]

Alternative derivation

Here is an alternative derivation that can be utilized in situations where it is initially unclear what the hedging portfolio should be. (For a reference, see 6.4 of Shreve vol II).

In the Black–Scholes model, assuming we have picked the risk-neutral probability measure, the underlying stock price S(t) is assumed to evolve as a geometric Brownian motion:

\frac{dS(t)}{S(t)} = r\,dt + \sigma\,dW(t)

Since this stochastic differential equation (SDE) shows the stock price evolution is Markovian, any derivative on this underlying is a function of time t and the stock price at the current time, S(t). Then an application of Itô’s lemma gives an SDE for the discounted derivative process exp(−rt)V(t, S(t)), which should be a martingale. In order for that to hold, the drift term must be zero, which implies the Black–Scholes PDE.

This derivation is basically an application of the Feynman–Kac formula and can be attempted whenever the underlying asset(s) evolve according to given SDE(s).

Solving the Black–Scholes PDE

Once the Black–Scholes PDE, with boundary and terminal conditions, is derived for a derivative, the PDE can be solved numerically using standard methods of numerical analysis,[2] such as a type of finite difference method.[3] In certain cases, it is possible to solve for an exact formula, such as in the case of a European call, which was done by Black and Scholes.
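As a sketch of the finite difference approach mentioned above, the following explicit scheme steps the PDE backward in time from the call payoff, enforcing C(0, t) = 0 and the large-S asymptote with a discounted strike. Grid sizes are illustrative, and an explicit scheme needs a small time step for stability:

```python
# Explicit finite-difference solution of the Black-Scholes PDE for a
# European call, marching backward from the payoff at maturity.
# Grid sizes are illustrative.
import math

def fd_call(S0, K, r, sigma, T, S_max=200.0, M=100, N=1000):
    dS, dt = S_max / M, T / N
    S = [i * dS for i in range(M + 1)]
    V = [max(s - K, 0.0) for s in S]              # terminal condition C(S, T)
    for n in range(N):                            # step backward in time
        tau = (n + 1) * dt                        # time to maturity after step
        Vn = V[:]
        for i in range(1, M):
            dVdS = (Vn[i + 1] - Vn[i - 1]) / (2 * dS)
            d2VdS2 = (Vn[i + 1] - 2 * Vn[i] + Vn[i - 1]) / dS ** 2
            V[i] = Vn[i] + dt * (0.5 * sigma ** 2 * S[i] ** 2 * d2VdS2
                                 + r * S[i] * dVdS - r * Vn[i])
        V[0] = 0.0                                # boundary: C(0, t) = 0
        V[M] = S_max - K * math.exp(-r * tau)     # C(S, t) -> S - K e^{-r tau}
    return V[int(round(S0 / dS))]

print(round(fd_call(100, 100, 0.05, 0.2, 1.0), 2))
```

The result should be close to the closed-form value (about 10.45 for these parameters), with the gap shrinking as the grid is refined.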

To do this for a call option, recall the PDE above has boundary conditions

\begin{aligned}
C(0, t) &= 0 \text{ for all } t \\
C(S, t) &\to S \text{ as } S \to \infty \\
C(S, T) &= \max\{S - K, 0\}
\end{aligned}

The last condition gives the value of the option at the time that the option matures. Other conditions are possible as S goes to 0 or infinity. For example, common conditions utilized in other situations are to choose delta to vanish as S goes to 0 and gamma to vanish as S goes to infinity; these will give the same formula as the conditions above (in general, differing boundary conditions will give different solutions, so some financial insight should be utilized to pick suitable conditions for the situation at hand).

The solution of the PDE gives the value of the option at any earlier time, 𝔼[max{S − K, 0}]. To solve the PDE we recognize that it is a Cauchy–Euler equation which can be transformed into a diffusion equation by introducing the change-of-variable transformation

\begin{aligned}
\tau &= T - t \\
u &= C e^{r\tau} \\
x &= \ln\left(\frac{S}{K}\right) + \left(r - \frac{1}{2}\sigma^2\right)\tau
\end{aligned}

Then the Black–Scholes PDE becomes a diffusion equation

\frac{\partial u}{\partial \tau} = \frac{1}{2}\sigma^2 \frac{\partial^2 u}{\partial x^2}

The terminal condition C(S, T) = max{S − K, 0} now becomes an initial condition

u(x, 0) = u_0(x) := K(e^{\max\{x, 0\}} - 1) = K\left(e^x - 1\right)H(x),

where H(x) is the Heaviside step function. The Heaviside function corresponds to enforcement of the boundary data in the S–t coordinate system that requires, when t = T,

C(S, T) = 0 \quad \forall\; S < K,

assuming both S, K > 0. With this assumption, it is equivalent to the max function over all x in the real numbers, with the exception of x = 0. The equality above between the max function and the Heaviside function is in the sense of distributions because it does not hold for x = 0. Though subtle, this is important because the Heaviside function need not be finite at x = 0, or even defined for that matter. For more on the value of the Heaviside function at x = 0, see the section “Zero Argument” in the article Heaviside step function.

Using the standard convolution method for solving a diffusion equation given an initial value function, u(x, 0), we have

u(x, \tau) = \frac{1}{\sigma\sqrt{2\pi\tau}} \int_{-\infty}^{\infty} u_0(y) \exp\left[-\frac{(x - y)^2}{2\sigma^2\tau}\right] dy,

which, after some manipulation, yields

u(x, \tau) = K e^{x + \frac{1}{2}\sigma^2\tau} N(d_1) - K N(d_2),

where N(·) is the standard normal cumulative distribution function and

\begin{aligned}
d_1 &= \frac{1}{\sigma\sqrt{\tau}}\left[\left(x + \frac{1}{2}\sigma^2\tau\right) + \frac{1}{2}\sigma^2\tau\right] \\
d_2 &= \frac{1}{\sigma\sqrt{\tau}}\left[\left(x + \frac{1}{2}\sigma^2\tau\right) - \frac{1}{2}\sigma^2\tau\right].
\end{aligned}

These are the same solutions (up to time translation) that were obtained by Fischer Black in 1976, equations (16) p. 177.[4]

Reverting u, x, τ to the original set of variables yields the above stated solution to the Black–Scholes equation.

The asymptotic condition can now be realized:

u(x, \tau) \overset{x \to \infty}{\asymp} K e^{x},

which gives simply S when reverting to the original coordinates, since \lim_{x\to\infty} N(x) = 1.


  1. ^ Hull, John C. (2008). Options, Futures and Other Derivatives (7 ed.). Prentice Hall. ISBN 978-0-13-505283-9.
  2. ^ “A Fast, Stable and Accurate Numerical Method for the Black–Scholes Equation of American Options”, International Journal of Theoretical and Applied Finance, Vol. 11, No. 5, pp. 471–501, 2008.
  3. ^ Finite Difference Schemes that Achieve Dynamical Consistency for Population Models Thirteenth Virginia L. Chatelain Memorial Lecture presented by Talitha Washington at Kansas State University on November 9, 2017
  4. ^ Black, Fischer S. “The Pricing of Commodity Contracts”, Journal of Financial Economics, 3, pp. 167–179, 1976.

Vasicek model

A trajectory of the short rate and the corresponding yield curves at T=0 (purple) and two later points in time

In finance, the Vasicek model is a mathematical model describing the evolution of interest rates. It is a type of one-factor short rate model as it describes interest rate movements as driven by only one source of market risk. The model can be used in the valuation of interest rate derivatives, and has also been adapted for credit markets. It was introduced in 1977 by Oldřich Vašíček,[1] and can be also seen as a stochastic investment model.


The model specifies that the instantaneous interest rate follows the stochastic differential equation:

dr_t = a(b - r_t)\,dt + \sigma\,dW_t

where W_t is a Wiener process under the risk neutral framework modelling the random market risk factor, in that it models the continuous inflow of randomness into the system. The standard deviation parameter, σ, determines the volatility of the interest rate and in a way characterizes the amplitude of the instantaneous randomness inflow. The typical parameters b, a and σ, together with the initial condition r_0, completely characterize the dynamics, and can be quickly characterized as follows, assuming a to be non-negative:

  • b: “long term mean level”. All future trajectories of r will evolve around a mean level b in the long run;
  • a: “speed of reversion”. a characterizes the velocity at which such trajectories will regroup around b in time;
  • σ: “instantaneous volatility”, measures instant by instant the amplitude of randomness entering the system. Higher σ implies more randomness.

The following derived quantity is also of interest,

  • σ²/(2a): “long term variance”. All future trajectories of r will regroup around the long term mean with such variance after a long time.

a and σ tend to oppose each other: increasing σ increases the amount of randomness entering the system, but at the same time increasing a amounts to increasing the speed at which the system will stabilize statistically around the long term mean b with a corridor of variance determined also by a. This is clear when looking at the long term variance,

\frac{\sigma^2}{2a}

which increases with σ but decreases with a.

This model is an Ornstein–Uhlenbeck stochastic process. Making the long term mean stochastic, via another SDE, gives a simplified version of the cointelation SDE.[2]


Vasicek’s model was the first one to capture mean reversion, an essential characteristic of the interest rate that sets it apart from other financial prices. Thus, as opposed to stock prices for instance, interest rates cannot rise indefinitely. This is because at very high levels they would hamper economic activity, prompting a decrease in interest rates. Similarly, interest rates do not usually decrease below 0. As a result, interest rates move in a limited range, showing a tendency to revert to a long run value.

The drift factor a(b − r_t) represents the expected instantaneous change in the interest rate at time t. The parameter b represents the long-run equilibrium value towards which the interest rate reverts. Indeed, in the absence of shocks (dW_t = 0), the interest rate remains constant when r_t = b. The parameter a, governing the speed of adjustment, needs to be positive to ensure stability around the long term value. For example, when r_t is below b, the drift term a(b − r_t) becomes positive for positive a, generating a tendency for the interest rate to move upwards (toward equilibrium).

The main disadvantage is that, under Vasicek’s model, it is theoretically possible for the interest rate to become negative, an undesirable feature under pre-crisis assumptions. This shortcoming was fixed in the Cox–Ingersoll–Ross model, exponential Vasicek model, Black–Derman–Toy model and Black–Karasinski model, among many others. The Vasicek model was further extended in the Hull–White model. The Vasicek model is also a canonical example of the affine term structure model, along with the Cox–Ingersoll–Ross model.

Asymptotic mean and variance

We solve the stochastic differential equation to obtain

{\displaystyle r_{t}=r_{0}e^{-at}+b\left(1-e^{-at}\right)+\sigma e^{-at}\int _{0}^{t}e^{as}\,dW_{s}.\,\!}

Using similar techniques as applied to the Ornstein–Uhlenbeck stochastic process, we find that the state variable is normally distributed with mean

\mathrm{E}[r_t] = r_0 e^{-at} + b\left(1 - e^{-at}\right)

and variance

\mathrm{Var}[r_t] = \frac{\sigma^2}{2a}\left(1 - e^{-2at}\right).

Consequently, we have

\lim_{t\to\infty} \mathrm{E}[r_t] = b

and

\lim_{t\to\infty} \mathrm{Var}[r_t] = \frac{\sigma^2}{2a}.
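As a sanity check, the mean and variance formulas above can be verified by Monte Carlo simulation of the SDE. A minimal sketch, using an Euler–Maruyama discretisation; the parameter values (a, b, sigma, r0) are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text)
a, b, sigma, r0 = 0.5, 0.05, 0.02, 0.01
T, n_steps, n_paths = 10.0, 1000, 100_000
dt = T / n_steps

rng = np.random.default_rng(0)
r = np.full(n_paths, r0)
for _ in range(n_steps):
    # Euler-Maruyama step for dr = a(b - r) dt + sigma dW
    r += a * (b - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Closed-form mean and variance from the text
mean_theory = r0 * np.exp(-a * T) + b * (1 - np.exp(-a * T))
var_theory = sigma**2 / (2 * a) * (1 - np.exp(-2 * a * T))

print(r.mean(), mean_theory)
print(r.var(), var_theory)
```

The Euler–Maruyama scheme introduces a small O(dt) bias, so the simulated moments agree with the closed forms only approximately; with the step size above the discrepancy is well below sampling noise.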

See also

  • Ornstein–Uhlenbeck process
  • Hull–White model
  • Cox–Ingersoll–Ross model


  1. ^ Vasicek, O. (1977). “An equilibrium characterization of the term structure”. Journal of Financial Economics. 5 (2): 177–188. doi:10.1016/0304-405X(77)90016-2.
  2. ^ Mahdavi Damghani, B. (2013). “The Non-Misleading Value of Inferred Correlation: An Introduction to the Cointelation Model”. Wilmott Magazine. 2013 (67): 50–61. doi:10.1002/wilm.10252.
  • Hull, John C. (2003). Options, Futures and Other Derivatives. Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-009056-0.
  • Damiano Brigo, Fabio Mercurio (2006). Interest Rate Models – Theory and Practice with Smile, Inflation and Credit (2nd ed.). Springer Verlag. ISBN 978-3-540-22149-4.
  • Jessica James, Nick Webber (2000). Interest Rate Modelling. Wiley. ISBN 978-0-471-97523-6.

Prime rate

Prime rates in the US, FRG and the European Union

A prime rate or prime lending rate is an interest rate used by banks, usually the interest rate at which banks lend to customers with good credit. Some variable interest rates may be expressed as a percentage above or below the prime rate.[1]:8

Use in different banking systems

United States and Canada

Historically, in North American banking, the prime rate was the actual interest rate, although this is no longer the case. The prime rate varies little among banks and adjustments are generally made by banks at the same time, although this does not happen frequently. The current prime rate is 3.25% in the United States,[2] while it is 2.45% in Canada.[3]

Historical chart of the effective Federal Funds Rate

In the United States, the prime rate runs approximately 300 basis points (or 3 percentage points) above the federal funds rate, which is the interest rate that banks charge each other for overnight loans made to fulfill reserve funding requirements. The federal funds rate plus a much smaller increment is frequently used for lending to the most creditworthy borrowers, as is LIBOR, the London Interbank Offered Rate. The Federal Open Market Committee (FOMC) meets eight times per year to set a target for the federal funds rate.
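The rule of thumb above can be sketched as a one-line calculation; the function name and the fixed 300-basis-point default spread are our illustration, not an official formula:

```python
# Rule-of-thumb sketch only: U.S. prime rate ~= federal funds rate + 300 bp.
def prime_rate(fed_funds_pct, spread_bp=300):
    """Approximate prime rate; both rates in percent."""
    return fed_funds_pct + spread_bp / 100

# A 0.25% funds target is consistent with the 3.25% prime rate in the text.
print(prime_rate(0.25))
```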

Prior to December 17, 2008, the Wall Street Journal followed a policy of changing its published prime rate when 23 out of 30 of the United States’ largest banks changed their prime rates. Recognizing that fewer, larger banks now control most banking assets (i.e., the industry is more concentrated), the Journal now publishes a rate reflecting the base rate posted by at least 70% of the top ten banks by assets.


Malaysia

Effective January 2, 2015, the Base Lending Rate (BLR) structure was replaced with a new Base Rate (BR) system. Under BR, which now serves as the main reference rate for new retail floating-rate loans, banks in Malaysia can determine their interest rate based on a formula set by Bank Negara Malaysia, the central bank.[4]

Malayan Banking Bhd (Maybank) set a group-wide base rate of 3.2%, effective January 2, 2015. All new retail loans and financing, such as mortgages, unit trust loans, share margin financing, personal financing and overdraft facilities applied for by individual customers, will be based on the base rate.[5] Though certain banks may set a higher BR than others, they can sometimes offer a lower effective lending rate (ELR) to customers in order to remain competitive.[6] Loans that were already approved and extended prior to January 2, 2015 will still follow the old BLR until the end of the loan tenure.


Uses

The prime rate is often used as an index in calculating rate changes to adjustable-rate mortgages (ARMs) and other variable-rate short-term loans. It is used in the calculation of some private student loans. Many credit cards and home equity lines of credit with variable interest rates have their rate specified as the prime rate (index) plus a fixed value, commonly called the spread or margin.

See also

  • FRED (Federal Reserve Economic Data)


  1. ^ Thomas, L., Money, Banking and Financial Markets (Mason, OH: Thomson South-Western, 2006), p. 8.
  2. ^ According to data published by The Wall Street Journal Online and the Federal Reserve Board of Governors. “Federal Reserve Statistical Data”. Federal Reserve.
  3. ^ According to data published by The Wall Street Journal Online and the Bank of Canada. “Daily Digest- Rates and Statistics- Bank of Canada”. Bank of Canada.
  4. ^ Ho, Fiona (January 6, 2015). “Base Rate vs BLR in Malaysia: How Does BR Work?”. Retrieved January 26, 2015.
  5. ^ “Maybank sets base rate at 3.2%”. The Sun Daily. January 5, 2015. Retrieved January 26, 2015.
  6. ^ Ho, Fiona (January 6, 2015). “Base Rate vs BLR in Malaysia: How Does BR Work?”. Retrieved January 26, 2015.

Interest rate ceiling

An interest rate ceiling (also known as an interest rate cap) is a regulatory measure that prevents banks or other financial institutions from charging more than a certain level of interest.

Interest rate caps and their impact on financial inclusion

Research was conducted after Zambia reopened an old debate on a lending rate ceiling for banks and other financial institutions. The issue originally came to the fore during the financial liberalisations of the 1990s, and again as microfinance increased in prominence with the award of the Nobel Peace Prize to Muhammad Yunus and Grameen Bank in 2006. The debate concerned the appropriateness of regulatory intervention to limit the charging of rates that are deemed, by policymakers, to be excessively high.[1]

A 2013 research paper[1] asked

  • Where are interest rate caps currently used, and where have they been used historically?
  • What have been the impacts of interest rate caps, particularly on expanding access to financial services?
  • What are the alternatives to interest rate caps in reducing spreads in financial markets? [1]

Understanding the composition of the interest rate

The researcher[2] decided that, to assess the appropriateness of an interest rate cap as a policy instrument (or whether other approaches would be more likely to achieve the desired outcomes of government), it was vital to consider what exactly makes up the interest rate and how banks and MFIs are able to justify rates that might be considered excessive.[1]

He found that there were broadly four components to the interest rate:

  • Cost of funds
  • The overheads
  • Non performing loans
  • Profit[1]

Cost of funds

The cost of funds is the amount that the financial institution must pay to borrow the funds that it then lends out. For a commercial bank or a deposit-taking microfinance institution, this is usually the interest that it pays on deposits. For other institutions it could be the cost of wholesale funds, or a subsidised rate for credit provided by government or donors. Other MFIs might have very cheap funds from charitable contributions.[1]

The overheads

The overheads reflect three broad categories of cost.

  • Outreach costs – the expansion of a network or the development of new products and services, which must also be funded by the interest rate margin
  • Processing costs – the cost of credit processing and loan assessment, which is an increasing function of the degree of information asymmetry
  • General overheads – general administration and overheads associated with running a network of offices and branches[1]

The overheads, and in particular the processing costs, can drive the price differential between larger loans from banks and smaller loans from MFIs. Overheads can vary significantly between lenders, and measuring overheads as a ratio of loans made is an indicator of institutional efficiency.[1]

Non performing loans

Lenders must absorb the cost of bad debts and factor them into the rate that they charge. This allowance for non-performing loans means lenders with effective credit screening processes should be able to bring down rates in future periods, while reckless lenders will be penalised.[1]

Profit

Lenders will include a profit margin that again varies considerably between institutions. Banks and commercial MFIs with shareholders to satisfy are under greater pressure to make profits than NGO or not-for-profit MFIs.[1]

The rationale behind interest rate caps

Interest rate caps are used by governments for political and economic reasons, most commonly to provide support to a specific industry or area of the economy. The government may have identified what it considers to be a market failure in an industry, or may be attempting to force a greater focus of financial resources on that sector than the market would determine.[1] Examples include:

  • Loans to the agricultural sector to boost agricultural productivity as in Bangladesh.
  • Loans to credit constrained SMEs as in Zambia.[1]

The researcher found it is also often argued that interest rate ceilings can be justified on the basis that financial institutions are making excessive profits by charging exorbitant interest rates to clients. This is the usury argument[3] and is essentially one of market failure, where government intervention is required to protect vulnerable clients from predatory lending practices. The argument, predicated on an assumption that demand for credit at higher rates is price inelastic, postulates that financial institutions are able to exploit information asymmetry and, in some cases, short-run monopoly market power, to the detriment of client welfare. Aggressive collection practices for non-payment of loans have exacerbated the image of certain lenders.[1]

The researcher says that economic theory suggests market imperfections will result from information asymmetry and the inability of lenders to differentiate between safe and risky borrowers.[4] When making a credit decision, a bank or a microfinance institution cannot fully identify a client’s potential for repayment.[1]

Two fundamental issues arise:[1]

  • Adverse selection – clients that are demonstrably lower risk are likely to have already received some form of credit. Those that remain will either be higher risk, or lower risk but unable to prove it. Unable to differentiate, the bank will charge an aggregated rate which will be more attractive to the higher risk client. This leads to a raised probability of default ex ante.[1]
  • Moral hazard – clients borrowing at a higher rate might be required to take more risk (hence higher potential return) to cover their borrowing costs leading to a higher probability of default afterwards.[1]
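The adverse-selection mechanism above can be illustrated with a toy break-even calculation. All figures below are invented for illustration, not taken from the cited paper:

```python
# Toy illustration: a lender that cannot tell safe from risky borrowers
# prices the pooled default risk, so safe borrowers face a higher rate
# than their own risk alone would merit.
def breakeven_rate(p_default, funding_cost=0.05):
    """One-period break-even lending rate, assuming total loss on default."""
    return (1 + funding_cost) / (1 - p_default) - 1

p_safe, p_risky = 0.02, 0.20
p_pooled = 0.5 * p_safe + 0.5 * p_risky  # equal mix of the two types

rate_safe_alone = breakeven_rate(p_safe)    # rate a known-safe borrower would merit
rate_pooled = breakeven_rate(p_pooled)      # aggregated rate charged to everyone
print(round(rate_safe_alone, 3), round(rate_pooled, 3))
```

The aggregated rate exceeds the safe borrowers' own break-even rate, which is what makes the pooled price relatively more attractive to risky borrowers.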

The researcher claims that traditional ‘microfinance group lending methodology’ helps manage adverse selection risk by using social capital and risk understanding within a community to price risk. However, interest rate controls are most often found at the lower end of the market, where financial institutions (usually MFIs) use the information asymmetry to justify high lending rates. In a non-competitive market (as is likely to exist in a remote African village), the lender likely holds monopoly power to make excessive profits, with no competition to erode them.[1]

The financial markets will segment, so that large commercial banks serve larger clients with larger loans at lower interest rates, while microfinance institutions charge higher rates of interest on a larger volume of low-value loans. In between, smaller commercial banks can find a niche serving medium to large enterprises. Inevitably, a missing middle of individuals and businesses will be unable to access credit from either banks or MFIs.[1]

The researcher found it intuitive that basic interest rate caps are most likely to bite at the lower end of the market, with interest rates charged by microfinance institutions generally higher than those charged by banks,[5] driven by a higher cost of funds and higher relative overheads. Transaction costs make larger loans relatively more cost-effective for the financial institution.[1]

If it costs a commercial bank $100 to make a credit decision on a $10,000 loan, then it will factor this 1% into the price of the loan (the interest rate). The cost of loan assessment does not fall in proportion with the loan size, so if a loan of $1,000 still costs $30 to assess, the cost which must be factored in rises to 3%. This cost pushes up lending rates on smaller loans. The higher prices are usually paid because the marginal product of capital is higher for people with little or no access to it.[1]
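The arithmetic in the paragraph above can be sketched directly (the function name is ours):

```python
# Worked numbers from the text: a fixed appraisal cost is a larger
# percentage of a smaller loan, pushing up rates on microloans.
def assessment_cost_pct(loan_size, fixed_cost):
    """Appraisal cost as a percentage of the loan amount."""
    return 100 * fixed_cost / loan_size

print(assessment_cost_pct(10_000, 100))  # 1.0 -> adds about 1% to the rate
print(assessment_cost_pct(1_000, 30))    # 3.0 -> adds about 3% to the rate
```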

In implementing a cap, the government aims to incentivise lenders to push out the supply curve and increase access to credit while bringing down lending rates, assuming the cap is set below the market equilibrium. If it is set above, lenders will continue to lend as before.[1]

The researcher thinks such thinking ignores the actions of the banks and MFIs operating under asymmetric information. The imposition of a maximum price of loans magnifies the problem of adverse selection, as the consumer surplus that it creates is a larger pool of willing borrowers of unidentifiable creditworthiness.[1]

Faced with this problem, he proposes lenders have three options:[1]

  • Increased lending, meaning lending to more bad clients and pushing up non-performing loans
  • Increased investment in processing systems to better identify good clients, increasing overheads
  • Increased investment in outreach to clients identified as having good repayment potential, increasing overheads[1]

All options increase costs and force the supply curve back to the left, which is detrimental to financial outreach as the quantity of credit falls. Unless financial service providers can absorb the cost increases while maintaining a profit, they may ration credit to those that they can readily support at the prescribed interest rate and refuse credit to other clients, and the market contracts.[1]

The researcher asks whether this story of interest rate caps leading to credit rationing is borne out in reality.[1]

The use of interest rate caps

Though conceptually simple, there is much variation in the methodologies used by governments to implement limits on lending rates. While some countries use a vanilla interest rate cap written into all regulations for licensed financial institutions, others have attempted a more flexible approach.[1]

The simplest interest rate control puts an upper limit on any loans from formal institutions. This might simply say that no financial institution may issue a loan at a rate greater than, say, 40% interest per annum, or 3% per month.[1]

Rather than set a rigid interest rate limit, governments in many countries find it preferable to discriminate between different types of loan and set individual caps based on the client and type of loan. The logic for such a variable cap is that it can bite at various levels of the market, minimising consumer surplus.[1]

As a more flexible measure, the interest cap is often linked to the base rate set by the central bank in setting monetary policy, meaning the cap reacts in line with market conditions, rising with monetary tightening and falling with easing.[1]

  • This is the model used in Zambia,[6] where banks are able to lend at nine percentage points over the policy rate, and microfinance lending is priced as a multiple of this.[1]

  • Elsewhere, governments have linked the lending rate to the deposit rate and regulated the spread that banks and deposit-taking MFIs can charge between borrowing and lending rates. As some banks look to get around lending caps by increasing arrangement fees and other costs to the borrower, governments have often tried to limit the total price of the loan. Other governments have attempted to set different caps for different forms of lending instrument.[1]

  • In South Africa, the National Credit Act (2005) identified eight sub-categories of loan, each with their own prescribed maximum interest rate.

  • Mortgages: (RR × 2.2) + 5% per annum
  • Credit facilities: (RR × 2.2) + 10% per annum
  • Unsecured credit transactions: (RR × 2.2) + 20% per annum
  • Developmental credit agreements for the development of a small business: (RR × 2.2) + 20% per annum
  • Developmental credit agreements for low income housing (unsecured): (RR × 2.2) + 20% per annum
  • Short-term transactions: 5% per month
  • Other credit agreements: (RR × 2.2) + 10% per annum
  • Incidental credit agreements: 2% per month[1]
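The (RR × 2.2) + margin structure can be sketched as follows; the repo-rate ("RR") value and the function name are illustrative assumptions, not figures from the Act:

```python
# Sketch of the National Credit Act cap formula for the rate-linked
# categories: repo rate times 2.2 plus a fixed margin, in percent per annum.
def nca_cap(repo_rate_pct, margin_pct):
    """Maximum annual rate: (RR x 2.2) + margin."""
    return repo_rate_pct * 2.2 + margin_pct

repo = 7.0  # assumed repo rate, percent
print(nca_cap(repo, 5))   # mortgages
print(nca_cap(repo, 10))  # credit facilities and other credit agreements
print(nca_cap(repo, 20))  # unsecured and developmental credit
```

Because the caps scale with the repo rate, they move with monetary policy in the same way as the Zambian policy-rate-linked cap described above.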

The impact of interest caps

Supply side

Financial outreach

The researcher identified the major argument against capping interest rates as being that caps distort the market and prevent financial institutions from offering loan products to those at the lower end of the market who have no alternative access to credit. This counters the financial outreach agenda prevalent in many poor countries today. He claims the debate boils down to prioritising the cost of credit over access to credit.[1] He identifies a randomised experiment in Sri Lanka[7] which found the average real return to capital for microenterprises to be 5.7% per month, well above the typical interest rate of between 2% and 3% provided by MFIs. Similarly, the same authors found in Mexico[8] that returns to capital were an estimated 20–33% per month, up to five times higher than market interest rates.[1]

His paper states that MFIs have historically been able to expand outreach rapidly by funding network expansion with profits from existing borrowers, meaning existing clients subsidise outreach to new areas. Capping interest rates can hinder this, as MFIs may remain profitable in existing markets but cut investment in new markets; at the extreme, government action on interest rates can cause existing networks to retract. In Nicaragua,[9] the government’s Microfinance Association Law of 2001 limited microloan interest to the average of rates set by the banking system and attempted to legislate for widespread debt forgiveness. In response to perceived persecution by government, a number of MFIs and commercial banks withdrew from certain areas, hindering the outreach of the financial sector.[1]

The researcher articulates that there is also evidence to suggest that capping lending rates for licensed MFIs incentivises NGO-MFIs and other finance sources for the poor to stay outside of the regulatory system. In Bolivia, the imposition of a lending cap led to a notable fall in the licensing of new entities.[9] Keeping lenders out of the system should be unattractive to governments, as it increases the potential for predatory lending and a lack of consumer protection.[1]

Price rises

The paper states there is evidence from developed markets that the imposition of price caps could in fact increase the level of interest rates.[1] The researcher cites a study of payday loans in Colorado,[10] where the imposition of a price ceiling initially saw reduced interest rates, but over a longer period rates steadily rose towards the interest rate cap. This was explained by implicit collusion: the price cap set a focal point, so lenders knew that the extent of price rises would be limited and collusive behaviour therefore carried a natural bound.[1]

Demand side

Elasticity of demand

The paper asserts that inherent in any argument for an upper limit on interest rates is an assumption that demand for credit is price inelastic. If the converse were true, and market demand were highly sensitive to small rises in lending rates, there would be minimal reason for government or regulators to intervene.[1]

The researcher showed that Karlan and Zinman[11] carried out a randomised control trial in South Africa to test the received wisdom that the poor are relatively insensitive to interest rates. They found that around lenders’ standard rates, the elasticity of demand rose sharply, meaning that even small increases in interest rates lead to a significant fall in the demand for credit. If the poor are indeed this responsive to changes in the interest rate, then it suggests that exploitative money lending would not be commercially sustainable, and hence there is little need for government to cap interest rates.[1]

Borrower trends

The publication explains that the rationale for implementing an interest cap is that the cap will affect the wider economy through its impact on consumer and business activity. It says the key question to be addressed by any cap is whether it bites, and therefore changes borrower behaviour at the margin.[1]

It gives the case study of South Africa where the National Credit Act was introduced in 2005 to protect consumers and to guard against reckless lending practices by financial institutions. It was a variable cap that discriminated between eight types of lending instrument to ensure the cap bit at different levels.

Credit constraints and productivity

The researcher observed that an interest cap exacerbates the problem of adverse selection, as it restricts lenders’ ability to price discriminate and means that some enterprises that might have received more expensive credit for riskier business ventures will not receive funding. There has been some attempt to link this constraint in the availability of credit to output. In Bangladesh,[12] firms with access to credit were found to be more efficient than credit-constrained firms. The World Bank[13] found credit constraints may reduce profit margins by up to 13.6% per year.[1]

Are interest rates too high?

The paper discusses a detailed 2009 study by the Consultative Group to Assist the Poor (CGAP),[14] which looked at the four elements of loan pricing for MFIs and attempted to measure whether the poor were indeed being exploited by excessively high interest rates. The data are interesting for international comparison but tell us relatively little about the efficiency of individual companies and markets. They do, however, support some interesting and positive conclusions; for example, the ratio of operating expenses to total loan portfolio declined from 15.6% in 2003 to 12.7% in 2006, a trend likely to have been driven by the twin factors of competition and learning by doing.[1][14]

The researcher mentions profitability, as there is some evidence of MFIs generating very high profits from microfinance clients. The most famous case was the IPO of Compartamos, a Mexican microfinance organisation that generated millions of dollars in profit for its shareholders. Compartamos had been accused of usurious money lending, charging clients annualised rates in excess of 85%. The CGAP study found that the most profitable ten percent of MFIs globally were making returns on equity in excess of 35%.[1]

He proposes that while the international comparison is interesting, it also has practical implications. It provides policymakers with a conceptual framework with which to assess the appropriateness of intervention in credit markets. The question that policymakers must answer if they are to justify interfering in the market and capping interest rates is whether excessive profits or bloated overheads are pushing interest rates to a higher rate than their natural level. This is a subjective regulatory question, and the aim of a policy framework should be to ensure sufficient contestability to keep profits in check before the need for intervention arises.[1]

Alternative methods of reducing interest rate spreads

He states that, from an economic perspective, input-based solutions like interest rate caps or subsidies distort the market, and hence it would be better to let the market determine the interest rate and to support certain desirable sectors through other means such as output-based aid. Indeed, there are a number of other methods available that can contribute to a reduction in interest rates.[1]

In the short term, soft pressure can be an effective tool: as banks and MFIs need licenses to operate, they are often receptive to influence from the central bank or regulatory authority. However, to bring down interest rates sustainably, governments need to build a business and regulatory environment, and support structures, that encourage the supply of financial services at lower cost and hence push the supply curve to the right.[1]

Market structure

The paper notes that the paradigm of classical economics holds that competition between financial institutions should force them to compete on the price of the loans they provide and hence bring down interest rates. Competitive forces can certainly play a role in forcing lenders to either improve efficiency in order to bring down overheads, or to cut profit margins. In a survey of MFI managers in Latin America and the Caribbean,[9] competition was cited as the largest factor determining the interest rate charged. The macro evidence supports this view: the Latin American countries with the most competitive microfinance industries, such as Bolivia and Peru, generally have the lowest interest rates.[1]

The corollary of this, and the orthodox view, would seem to be that governments should license more financial institutions to promote competition and drive down rates. However, it is not certain that more players mean greater competition. Due to the nature of the financial sector, with high fixed costs and capital requirements, smaller players might be forced to levy higher rates in order to remain profitable. Weak businesses that are inefficiently run will not necessarily add value to an industry, and government support can often be misdirected to supporting bad businesses. Governments should be willing to adapt and base policy on a thorough analysis of the market structure, with the promotion of competition, and the removal of unnecessary barriers to entry such as excessive red tape, as a goal.[1]

Market information

The evidence the researcher presents suggests that learning by doing is a key factor in building up efficiency, and hence in lowering overheads and interest rates. Institutions with a decent track record are better able to control costs and more efficient at evaluating loans, while a larger loan book generates economies of scale. More established businesses should also be able to renegotiate and source cheaper funds, again bringing down costs. In China, the government supports the financial sector by setting a ceiling on deposit rates and a floor on lending rates, meaning that banks are able to sustain a minimum level of margin. Following an international sample of MFIs, there is clear evidence from the Microfinance Information Exchange[15] (MIX) that operating expenses fell as a proportion of gross loan portfolio as businesses matured.[1]

The implication of this is that governments would be better off addressing the cost structures of financial institutions to allow them to remain commercially sustainable in the longer term. For example, government investment in credit reference bureaus and collateral agencies decreases the costs of loan appraisal for banks and MFIs. Supporting product innovation, for example through the use of a financial sector challenge fund, can bring down the cost of outreach and government support for research and advocacy can lead to the development of demand-led products and services. The FinMark Trust is an example of donor funds supporting the development of research and analysis as a tool for influencing policy.[1]

Demand side support

The researcher states that government can help to push down interest rates by promoting transparency and financial consumer protection. Investment in financial literacy can strengthen the voice of the borrower and protect against possible exploitation. Forcing regulated financial institutions to be transparent in their lending practices means that consumers are protected from hidden costs. Government can publish and advertise the lending rates of competing banks to increase competition. Any demand-side work is likely to have a long lead time to impact, but it is vital that, if the supply curve does shift to the right, the demand curve follows it.[1]

Conclusion

The researcher concludes that there are situations when an interest rate cap may be a good policy decision for governments. Where insufficient credit is being provided to a particular industry that is of strategic importance to the economy, interest rate caps can be a short-term solution. While often used for political rather than economic purposes, they can help to kick-start a sector or insulate it from market forces for a period of time until it is commercially sustainable without government support. They can also promote fairness: as long as a cap is set at a high enough level to allow profitable lending to SMEs by efficient financial institutions, it can protect consumers from usury without significantly impacting outreach. Additionally, financial outreach is not an end in itself, and greater economic and social impact might result from cheaper credit in certain sectors rather than from greater outreach. Where lenders are known to be very profitable, it might be possible to force them to lend at lower rates in the knowledge that the costs can be absorbed into their profit margins. Caps on interest rates also protect against usurious lending practices and can be used to guard against the exploitation of vulnerable members of society.[1]

However, he does say that although there are undoubtedly market failures in credit markets, and government does have a role in managing these market failures (and indeed supporting certain sectors), interest rate caps are ultimately an inefficient way of reaching the goal of lower long-term interest rates. This is because they address the symptom, not the cause, of financial market failures. In order to bring down rates sustainably, it is likely that governments will need to act more systemically: addressing issues in market information and market structure and on the demand side, and ultimately supporting a deeper level of financial sector reform.[1]


  1. ^ Miller, H., Interest rate caps and their impact on financial inclusion, Economic and Private Sector Professional Evidence and Applied Knowledge Services (EPS PEAKS)
  2. ^ Howard Miller, Nathan Associates, February 2013
  3. ^ Office of Fair Trading (OFT), Price Controls: Evidence and arguments surrounding price control and interest rate caps for high-cost credit (May 2010)
  4. ^ Stiglitz, Joseph & Weiss, Andrew, Credit Rationing in Markets with Imperfect Information (June 1981)
  5. ^ Kneiding, Cristoph and Rosenberg, Richard, Variations in Microcredit Interest Rates (July 2008) CGAP Brief
  6. ^ Bank of Zambia press release, archived 2015-09-12 at the Wayback Machine
  7. ^ De Mel, Suresh, McKenzie, David John and Woodruff, Christopher M., Returns to Capital in Microenterprises: Evidence from a Field Experiment (May 1, 2007). World Bank Policy Research Working Paper No. 4230
  8. ^ McKenzie, David John and Woodruff, Christopher M., Experimental Evidence on Returns to Capital and Access to Finance in Mexico (March 2008)
  9. ^ Campion, Anita, Ekka, Rashmi Kiran and Wenner, Mark, Interest Rates and Implications for Microfinance in Latin America and the Caribbean, IADB (March 2012)
  10. ^ DeYoung, Robert and Phillips, Ronnie J., Payday Loan Pricing (2009)
  11. ^ Karlan, Dean S. and Zinman, Jonathan, Credit Elasticities in Less-Developed Economies: Implications for Microfinance (December 2006)
  12. ^ Baqui Khalily, M.A. and Khaleque, M.A., Access to Credit and Productivity of Enterprises in Bangladesh: Is There Causality? (2012)
  13. ^ Khandker, Shahidur R., Samad, Hussain A. and Ali, Rubaba, Does Access to Finance Matter in Microenterprise Growth? Evidence from Bangladesh (January 2013) World Bank Policy Research Working Paper no. 6333
  14. Jump up to:a b Rosenberg, Richard, Gonzalez, Adrian and Narian, Sushma, The New Moneylenders: Are the Poor Being Exploited by High Microcredit Interest Rates? (February 2009)
  15. ^

Cumulative process

Cumulative process is a contribution to the economic theory of interest, proposed in Knut Wicksell’s 1898 work, Interest and Prices. Wicksell made a key distinction between the natural rate of interest and the money rate of interest. The money rate of interest, to Wicksell, is the interest rate seen in the capital market; the natural rate of interest is the interest rate at which supply and demand in the market for goods are in equilibrium – as though there were no need for capital markets.

According to the idea of the cumulative process, if the natural rate of interest is not equal to the market rate, investment demand and the quantity of savings will not be equal. If the market rate is below the natural rate, an economic expansion occurs and prices rise. The resulting inflation depresses the real interest rate, causing further expansion and further price increases.
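The mechanics can be sketched with a toy simulation (the rates and the proportionality assumption below are illustrative, not Wicksell's own): as long as the money rate stays below the natural rate, the price level keeps rising.

```python
# Toy cumulative-process simulation: the gap between the natural rate and
# a lower money (market) rate drives a proportional rise in prices each
# period. All numbers and the linear gap-to-inflation rule are assumptions
# for illustration only.
natural_rate = 0.05
money_rate = 0.03

price_level = 100.0
history = [price_level]
for period in range(5):
    gap = natural_rate - money_rate      # excess investment demand
    price_level *= (1 + gap)             # prices bid up by the excess demand
    history.append(round(price_level, 2))

print(history)
```

The expansion continues for as long as the gap persists; raising the money rate to the natural rate sets the gap to zero and halts the price rise, which is precisely Wicksell's equilibrium condition.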

The theory of the cumulative process of inflation is an early, decisive strike against the idea of money as a mere “veil”. Wicksell’s process was much in line with Henry Thornton’s earlier work.[1] Wicksell’s theory claims that increases in the supply of money lead to rises in the price level, but that the original increase is endogenous, created by conditions in the financial and real sectors.

With the existence of credit money, Wicksell claimed, two interest rates prevail: the “natural” rate and the “money” rate. The natural rate is the return on capital – the real profit rate – and can be considered equivalent to the marginal product of new capital. The money rate, in turn, is the loan rate, an entirely financial construction. Credit, then, is perceived quite appropriately as “money”. Banks provide credit, after all, by creating deposits upon which borrowers can draw; since deposits constitute part of real money balances, the bank can, in essence, “create” money.

Wicksell’s main thesis is that a disequilibrium engendered by real changes leads endogenously to an increase in the demand for money – and, simultaneously, to an increase in its supply, as banks try to accommodate that demand perfectly. Given full employment (a constant Y) and a fixed payments structure (a constant V), then in terms of the equation of exchange, MV = PY, a rise in M leads only to a rise in P. Thus the story of the quantity theory of money – the long-run relationship between money and inflation – is retained in Wicksell.
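The long-run neutrality claim can be checked with simple arithmetic on the equation of exchange (all figures hypothetical): holding V and Y fixed, a 10% rise in M produces exactly a 10% rise in P.

```python
# Equation of exchange: M * V = P * Y. With velocity V and real output Y
# held fixed, the implied price level moves one-for-one with the money
# supply M. The constants below are illustrative.
def price_level(M, V, Y):
    """Price level implied by MV = PY."""
    return M * V / Y

V, Y = 4.0, 2000.0
p0 = price_level(1000.0, V, Y)   # initial money supply
p1 = price_level(1100.0, V, Y)   # money supply raised by 10%

print(p0, p1)                    # P rises in the same proportion as M
```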

Say’s law, however, is violated and abandoned by the wayside. When real aggregate supply is constrained, inflation results because capital-goods industries cannot meet entrepreneurs’ new real demands for capital goods by increasing capacity. They may try, but doing so involves making higher bids in a factor market that is itself supply-constrained, thus raising factor prices and hence the price of goods in general. In short, inflation is a real phenomenon brought about by a rise in real aggregate demand over and above real aggregate supply.

Finally, for Wicksell the endogenous creation of money, and the way it leads to changes in the real market (i.e. increases real aggregate demand), is fundamentally a breakdown of the Neoclassical tradition of a dichotomy between the monetary and real sectors. Money is not a “veil” – agents do react to it, and this is not due to some irrational “money illusion”. For Wicksell, however, the quantity theory still holds in the long run: money remains neutral, although to reach that conclusion he had to break with the cherished Neoclassical principles of the dichotomy, an exogenous money supply, and Say’s law.

“There is a certain rate of interest on loans which is neutral in respect to commodity prices, and tends neither to raise nor to lower them. This is necessarily the same as the rate of interest which would be determined by supply and demand if no use were made of money and all lending were effected in the form of real capital goods. It comes to much the same thing to describe it as the current value of the natural rate of interest on capital.” Knut Wicksell – Interest and Prices, 1898, p. 102


  1. ^ Gårdlund, Torsten (1990). Knut Wicksell: rebell i det nya riket (in Swedish) (New, [rev.] ed.). Stockholm: SNS. ISBN 91-7150-390-0. SELIBR 7609549.


  • Wicksell, K. ([1901]1934), Forelasningar I Nationalekonomi, Lund: Gleerups Forlag. English translation: Lectures on Political Economy, London: Routledge and Sons.
  • Knut Wicksell – Interest and Prices, 1898 (pdf), Ludwig von Mises Institute, 2007
  • Lars Pålsson Syll (2011). Ekonomisk doktrinhistoria (History of economic theories) (in Swedish). Studentlitteratur. p. 198. ISBN 91-44-06834-4, 9789144068343
  • Boianovsky, Mauro, Erreygers, Guido (2005), Social comptabilism and pure credit systems. Solvay and Wicksell on monetary reform, in: Fontaine, Philippe, Leonard, Robert, (ed.), The experiment in the history of economics, London, Routledge.
  • Michael Woodford (2003), Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton University Press, ISBN 0-691-01049-8.
  • Lars Jonung (1979), “Knut Wicksell’s norm of price stabilization and Swedish monetary policy in the 1930s”. Journal of Monetary Economics 5, pp. 45–496.
  • Axel Leijonhufvud, The Wicksell Connection: Variation on a Theme. UCLA. November 1979.

Total return swap

Diagram explaining Total return swap

A total return swap (TRS, especially in Europe), total rate of return swap (TRORS), or cash-settled equity swap is a financial contract that transfers both the credit risk and the market risk of an underlying asset.

Contract definition

A total return swap is a swap agreement in which one party makes payments based on a set rate, either fixed or variable, while the other party makes payments based on the return of an underlying asset, including both the income it generates and any capital gains. In total return swaps the underlying asset, referred to as the reference asset, is usually an equity index, a basket of loans, or bonds. The asset is owned by the party receiving the set-rate payment.
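A one-period settlement can be sketched as follows (all figures are hypothetical): the total-return receiver collects the reference asset's income plus any capital gain, and pays the set financing rate on the notional; only the net amount changes hands.

```python
# Illustrative one-period TRS settlement with hypothetical numbers.
# The total-return receiver gets income plus price appreciation on the
# reference asset; the payer receives the set (here floating) rate.
notional = 10_000_000.0
financing_rate = 0.04        # set rate for the period, e.g. a reference
                             # rate plus a spread (assumed)
income_yield = 0.05          # coupons/dividends on the reference asset
price_change = 0.02          # capital gain on the reference asset

total_return_leg = notional * (income_yield + price_change)
financing_leg = notional * financing_rate

# Net amount paid to the total-return receiver; a negative value would
# mean the receiver pays, e.g. if the asset fell in value.
net_to_receiver = total_return_leg - financing_leg
print(net_to_receiver)
```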

Total return swaps allow the party receiving the total return to gain exposure and benefit from a reference asset without actually having to own it. These swaps are popular with hedge funds because they get the benefit of a large exposure with a minimal cash outlay.[1]

High-cost borrowers who seek financing and leverage, such as hedge funds, are natural receivers in total return swaps; lower-cost borrowers with large balance sheets are natural payers.

Less common, but related, are the partial return swap and the partial return reverse swap, which usually involve 50% of the return, or some other specified amount. A reverse swap involves the sale of the asset, with the seller then buying the returns, usually on equities.

Advantage of using Total Return Swaps

The TRORS allows one party (bank B) to derive the economic benefit of owning an asset without putting that asset on its balance sheet, and allows the other (bank A, which does retain that asset on its balance sheet) to buy protection against loss in its value.[2]

TRORS can be categorised as a type of credit derivative, although the product combines both market risk and credit risk, and so is not a pure credit derivative.


Hedge funds use total return swaps to obtain leverage on the reference assets: they can receive the return of the asset, typically from a bank (which has a funding-cost advantage), without having to put up the cash to buy the asset. They usually post a smaller amount of collateral upfront, thus obtaining leverage.
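That leverage can be quantified with a back-of-the-envelope calculation (margin and return figures hypothetical): posting 10% collateral against the notional multiplies the fund's return on posted cash by ten.

```python
# Illustrative TRS leverage: the fund posts only initial collateral but
# receives the return on the full notional (the financing leg is ignored
# here for simplicity). Numbers are assumptions for illustration.
notional = 10_000_000.0
collateral = 1_000_000.0     # hypothetical 10% initial margin

leverage = notional / collateral
asset_return = 0.03          # a 3% move in the reference asset

# Return earned on the cash actually posted:
return_on_collateral = asset_return * leverage
print(leverage)              # 10.0
```

The same gearing cuts both ways: a 3% fall in the reference asset would wipe out 30% of the posted collateral, which is why such positions are margined.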

Hedge funds (such as The Children’s Investment Fund (TCI)) have attempted to use total return swaps to side-step public disclosure requirements enacted under the Williams Act. As discussed in CSX Corp. v. The Children’s Investment Fund Management, TCI argued that it was not the beneficial owner of the shares referenced by its total return swaps and that the swaps therefore did not require TCI to publicly disclose that it had acquired a stake of more than 5% in CSX. The United States District Court rejected this argument and enjoined TCI from further violations of Section 13(d) of the Securities Exchange Act and the SEC rule promulgated thereunder.[3]

Total return swaps are also very common in many structured finance transactions such as collateralized debt obligations (CDOs). CDO issuers often enter TRS agreements as protection sellers in order to leverage the returns for the structure’s debt investors. By selling protection, the CDO gains exposure to the underlying asset(s) without having to put up capital to purchase the assets outright. The CDO receives the interest on the reference asset(s) over the period, while the counterparty mitigates its credit risk.

See also

  • Credit derivative
  • Repurchase agreement


  1. ^ Staff, Investopedia (24 November 2003). “Total Return Swap”.
  2. ^ Dufey, Gunter; Rehm, Florian (2000). “An Introduction to Credit Derivatives (Teaching Note)”. hdl:2027.42/35581.
  3. ^ “562 F.Supp.2d 511 (S.D.N.Y. 2008), see also” (PDF).