Thursday, 13 April 2017

Alleged Currency Manipulations and Retaliatory Tariffs: Some Lessons from the 1930s

Thilo Albers is a PhD student in Economic History
at the London School of Economics (LSE)

How forceful can retaliation against alleged currency manipulation be? What are its effects on trade? The following research seeks answers to these questions in the interwar period.

The evidence that China is still deliberately undervaluing her currency is at best weak (see Cheung et al 2016). Yet, with the new US president in office, import surcharges against her and other countries for alleged currency manipulation have become more likely. Indeed, even before he came into office, important public figures across the political spectrum had called for an import surcharge (e.g. Krugman 2010). At the heart of such debates is the argument that the country undervaluing her currency gains significantly at the expense of others. A lower real exchange rate stimulates exports, which in turn creates current account problems abroad (Goldstein and Lardy, 2006). It is frequently argued that a retaliatory tariff could be used to force the alleged currency manipulator to re-align her currency. According to the standard narrative (e.g. Krugman 2010), this worked smoothly towards the end of Bretton Woods, when the United States used an import surcharge to force other countries to re-align their currencies. However, this was a very particular case in a very particular setting, and the final realignment might well have been reached without the surcharge (Irwin 2013). Nor does this case answer the most important question: what are the potential political and economic costs of retaliatory tariff policies?

The 1930s provide a blueprint for assessing such costs. Some countries had left the gold standard and floated their currencies. Others accused them of deliberately undervaluing their currencies and imposed retaliatory tariffs. In a new study focusing on French commercial policy (Albers 2017), I show that moving towards discretionary tariff policies can carry high political and economic costs. The study is a first attempt to quantify the relative importance of retaliatory as opposed to general tariff increases for this commercial policy episode. The retaliatory motive for French protectionism turns out to have been at least as important as the factors driving the general tariff level. The effects of retaliation on trade were comparable to those of modern trade treaties – just with the opposite sign. An analysis of historical newspapers demonstrates that leniency vanished from public discourse and nationalist agitation took over.

Alleged currency manipulation back then

When Britain unilaterally left the gold standard in the autumn of 1931 and other countries followed suit soon after, policymakers in these countries did not intend to manipulate their currencies. The imminent threat of further deflation and the drain of gold reserves had effectively pushed countries off the gold standard, especially Great Britain (Accominotti 2012). However, many policymakers abroad perceived this devaluation as currency manipulation. At the forefront, the French government retaliated by raising tariffs and introducing quotas specifically aimed at those countries that had left the gold standard.

From the villain to the victim of exchange rate policies

It is not without irony that French commercial policymakers perceived their country (and other countries on the gold standard) as the victim of currency depreciations abroad. When France stabilised her currency at 20 % of its pre-war value in 1928, while many countries such as Britain returned to their pre-war parities, this led to a massive gold influx into France. Some have argued that this played a part in causing the Great Depression, because it led to further deflation abroad (Johnson 1997, Irwin 2010). The paper shows that contemporary commentators abroad likewise argued that the Franc was undervalued. In this sense, France was the villain of exchange rate policies in the late 1920s.

After the first wave of currency depreciations hit in the autumn of 1931, the tables turned. The real value of the Franc doubled against the pound over the following two years. French policymakers now felt victimised by exchange rate policies abroad. A qualitative analysis of contemporary newspapers focusing on the Anglo-French commercial policy relationship suggests that the rhetoric shifted from leniency before the devaluations to agitation afterwards. Numbers mirror this debate, as Figure 1 shows. It plots the number of articles in the Guardian per year containing keywords that identify protectionism and tariffs in general, and those that contain additional references to tariff wars or retaliation. Retaliatory sentiment first peaked in 1930, when the discussion about the Smoot-Hawley tariff in the United States heated up. This local peak was far exceeded by the discussions about the devaluations two years later. These numbers, and the discussion of the articles behind them, lead to the conclusion that the political costs of the devaluations and the subsequent retaliation were indeed high.

Figure 1: The Rhetoric of Retaliation
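
The keyword-count exercise behind a figure like this can be sketched as follows. The corpus, keywords, and counts below are invented for illustration; the paper's actual query terms and the Guardian data may differ.

```python
# Count, per year, articles mentioning protectionism/tariffs in general and
# the subset that also mention retaliation or tariff wars. The keyword sets
# and the tiny corpus are illustrative assumptions, not the paper's.
from collections import Counter

GENERAL = {"tariff", "protection", "protectionism"}
RETALIATORY = {"retaliation", "retaliatory", "tariff war"}

def count_by_year(articles):
    general, retaliatory = Counter(), Counter()
    for year, text in articles:
        lower = text.lower()
        if any(k in lower for k in GENERAL):
            general[year] += 1
            if any(k in lower for k in RETALIATORY):
                retaliatory[year] += 1
    return general, retaliatory

sample = [
    (1930, "Debate over the Smoot-Hawley tariff and threats of retaliation."),
    (1930, "Protectionism on the rise across Europe."),
    (1932, "France raises tariffs; commentators warn of a tariff war."),
]
g, r = count_by_year(sample)
print(g[1930], r[1930], r[1932])  # 2 1 1
```

Plotting the two yearly series against each other reproduces the kind of comparison shown in the figure.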

Identifying the retaliatory motive in commercial policy

Tariffs had been increasing across all countries during this episode, and mostly so in countries adhering to the gold standard (Eichengreen and Irwin 2010). The new retaliatory protectionism, however, was of a different quality and had severe political economy implications. Retaliation was directed at particular trading partners and thus differed from the previous general increases in tariffs, which aimed either to balance trade and budgets or to protect home industries. Irwin (1993) termed this bilateralism “pernicious,” but so far we know little about its magnitude relative to the general increase in protectionism and about its effects on trade.
While most studies on protectionism make use of aggregate tariff data, this study employs a novel dataset of bilateral tariff rates of France against her trading partners. This so-far widely neglected dimension of tariff data allows me to separate general tariff increases from those with a retaliatory motive by using a difference-in-differences setup. Figure 2 shows that the “tariff treatment” for those leaving the gold standard was indeed very large.
Figure 2: The "tariff treatment" for leaving the gold standard
The most conservative estimate suggests that, while the general increase (against all trading partners) amounted to 5 %, the retaliatory component of the increase in French protectionism amounted to about 7.5 %. This is very close to the average tariff reduction achieved by NAFTA (Burfisher et al. 2001). Hence, retaliation was important for the increase in French protectionism, but did it matter for trade, too? A back-of-the-envelope calculation and an econometric estimate suggest that the reduction in trade implied by these tariff increases was about 20 %. This magnitude, albeit somewhat smaller, is comparable to that of the trade-creating effects of Regional Trade Agreements (see the median estimate in Head and Mayer 2014). In sum, the economic costs of retaliation were large.
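
The back-of-the-envelope logic can be sketched with a constant-elasticity import demand: raising the tariff factor (1 + t) by 7.5 percentage points and applying a trade elasticity of about three yields a fall in trade of roughly 20 %. The elasticity of 3 is an illustrative assumption, not necessarily the paper's estimate.

```python
# Constant-elasticity back-of-the-envelope: trade scales with (1+t)^(-sigma).
# sigma = 3 is an illustrative assumption, not the paper's estimate.
def trade_change(tariff_increase, elasticity=3.0):
    """Proportional change in trade from an increase in the tariff rate."""
    return (1 + tariff_increase) ** (-elasticity) - 1

print(f"{trade_change(0.075):.1%}")  # -19.5%
```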

What do we learn?

It is almost needless to say that French policymakers did not change minds abroad with their actions, especially as the abandonment of the gold standard abroad was clearly a prerequisite for recovery (Eichengreen 2013). The chaotic manner and the absence of any coordination of the devaluations, however, led to more protectionism in those countries that decided to stay on the gold standard. The quality of this protectionism was markedly different, as it targeted particular trading partners. Such discretion could thus lead to the tit-for-tat tariff escalations for which the interwar period has become so infamous. The political and economic costs of retaliatory tariffs were large by modern standards.

We should be skeptical when commentators refer to the successful case of 1971, in which the United States employed an import surcharge to force countries to re-align their currencies. There is no guarantee that retaliatory tariffs will solve currency disputes. On the contrary, the attempt to use them as a bargaining chip might fail and instead provoke ever more protectionism. After all, economic policy cooperation appears to be the best recipe for avoiding disaster.

This blog post was written by Thilo Albers, PhD candidate at the Department of Economic History at LSE.

The working paper can be downloaded here:


Accominotti, Olivier (2012): “London Merchant Banks, the Central European Panic and the Sterling Crisis of 1931,” The Journal of Economic History, Vol. 72, pp. 1–43. 

Albers, Thilo (2017): “Currency Valuations, Retaliation and Trade Conflicts: Evidence from Interwar France,” LSE Economic History Working Paper, No. 258/2017.

Burfisher, Mary E., Sherman Robinson, and Karen Thierfelder (2001): “The Impact of NAFTA on the United States,” The Journal of Economic Perspectives, Vol. 15, pp. 125–144.

Cheung, Yin-Wong, Menzie Chinn, and Xin Nong (2016): “Estimating currency misalignment using the Penn effect: It’s not as simple as it looks,” NBER Working Paper, No. 22539.

Eichengreen, Barry and Douglas A. Irwin (2010): “The Slide to Protectionism in the Great Depression: Who Succumbed and Why?” The Journal of Economic History, Vol. 70, pp. 871–897.

Eichengreen, Barry (2013): “Currency War or International Policy Coordination?” Journal of Policy Modeling, Vol. 35, pp. 425–433.

Goldstein, Morris and Nicholas Lardy (2006): “China’s Exchange Rate Policy Dilemma,” The American Economic Review, Vol. 96, pp. 422–426.

Johnson, H. Clark (1997): Gold, France, and the Great Depression, 1919–1932. New Haven: Yale University Press.

Head, Keith and Thierry Mayer (2014): “Gravity Equations: Workhorse, Toolkit, and Cookbook,” in Gita Gopinath, Elhanan Helpman, and Kenneth Rogoff, eds., Handbook of International Economics, Vol. 4, Chap. 3, pp. 131–195.

Irwin, Douglas A (1993): “Multilateral and Bilateral Trade Policies in the World Trading System: An Historical Perspective,” in Jaime De Melo and Arvind Panagariya eds. New Dimensions in Regional Integration, Vol. 5: Centre for Economic Policy Research, Cambridge University Press, pp. 90–119. 

Irwin, Douglas A. (2010): “Did France Cause the Great Depression?” NBER Working Paper, No. 16350.

Irwin, Douglas A. (2013): “The Nixon Shock after Forty Years: The Import Surcharge Revisited,” World Trade Review, Vol. 12, pp. 29–56.

Krugman, Paul (2010): “Taking on China,” New York Times, March 14, 2010

Mankiw, Gregory N. (2009): “It’s no Time for Protectionism”, New York Times, February 7, 2009

Tuesday, 14 February 2017

Between war and peace: The Ottoman economy and foreign exchange trading at the Istanbul bourse

Were events during the First World War reflected in foreign exchange rates? A new EHES working paper by Avni Önder Hanedar, Hatice Gaye Gencer, Sercan Demiralay, and İsmail Altay, from different universities in Turkey, provides evidence on foreign exchange trading at the Istanbul bourse of the Ottoman Empire to shed light on this question.

They examine the influence of political risks on foreign exchange rates at the Istanbul bourse during the First World War. Their empirical strategy is to identify abrupt changes in the value of the Lira against the currencies of the neutral countries traded at the Istanbul bourse, i.e. the Dutch Guilder, the Swedish Krona, and the Swiss Franc. They exploit unique data on daily foreign exchange rates announced at the Istanbul bourse from May 1918 to June 1919, manually collected from the Ottoman Empire’s official newspaper, Takvim-i Vekayi.
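
The idea of flagging abrupt changes in a daily exchange-rate series can be illustrated with a simple rolling-variance filter: mark days whose return lies far outside the recent range. This is only a stand-in sketch, not the paper's actual break-identification method, and the series, window, and threshold below are my own assumptions.

```python
# Flag days whose daily log-return deviates from the mean of the preceding
# `window` returns by more than `threshold` standard deviations.
from math import log
from statistics import mean, stdev

def flag_breaks(rates, window=20, threshold=3.0):
    """Return indices in `rates` where an abrupt change is detected."""
    returns = [log(b / a) for a, b in zip(rates, rates[1:])]
    breaks = []
    for i in range(window, len(returns)):
        hist = returns[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(returns[i] - mu) > threshold * sigma:
            breaks.append(i + 1)  # convert return index to rate index
    return breaks

# Synthetic series: flat with small noise, then a sudden devaluation.
rates = [1.0 + 0.001 * (i % 3) for i in range(30)] + [1.2, 1.21, 1.22]
print(flag_breaks(rates))  # [30]
```

Such flagged dates can then be compared against the calendar of war-related events.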

A column of Takvim-i Vekayi showing the value of the Turkish Lira against several foreign currencies on 27 August 1918 (Takvim-i Vekayi, 28 August 1918, Kambiyo: 6).

They fill a gap in the historical literature on the Ottoman economy for the period ending with the First World War, for which there is a lack of empirical research (see Hanedar, Hanedar, & Torun 2016; Hanedar et al. 2017). Furthermore, the literature on the impact of the First World War on foreign exchange rates is limited (see Hall 2004; Kanago & McCormick 2013).

The findings pinpoint sudden changes in the value of the Lira against the currencies of the neutral countries during important war-related events signalling that the end of WWI was approaching. The war and the Allied occupation damaged the economy of the Ottoman Empire, with inflation surging alongside huge budget deficits. These circumstances were reflected in the foreign exchange rates, and the Lira depreciated significantly against the currencies of the neutral countries by the end of the war.

The value of one Lira against Swiss Franc, Dutch Guilder, and Swedish Krona, 1918–1919. The three vertical lines in the graph represent the armistices signed by Bulgaria, the Ottoman Empire, and Germany, respectively. (Click to enlarge)
The research uncovers the effect of war-related events on foreign exchange rates using data from the First World War and confirms the significance of these events at the beginning of the 20th century. It suggests that even under wartime conditions, the Ottoman foreign exchange market displayed some degree of efficiency in the period marking the end of WWI.

This blog post was written by Avni Önder Hanedar, researcher in economics and econometrics at Dokuz Eylül University and Sakarya University.

The working paper can be downloaded here:


Hall, G. J. (2004). Exchange rates and casualties during the First World War. Journal of Monetary Economics, 51(8): 1711–1742.

Hanedar, A. Ö., Hanedar, E. Y., and Torun, E. (2016). The end of the Ottoman Empire as reflected in the İstanbul bourse. Historical Methods, 49(3):145–156.

Hanedar, A. Ö., Hanedar, E. Y., Torun, E., and Ertuğrul, M. (2017). Perceptions on the Dissolution of an Empire: Insight from the İstanbul Bourse and the Ottoman War Bond. Defence and Peace Economics, (Forthcoming).

Kanago, B. and McCormick, K. (2013). The Dollar-Pound exchange rate during the first nine months of World War II. Atlantic Economic Journal, 41(4): 385–404.

Takvim-i Vekayi. 30 May 1918–11 June 1919.

Wednesday, 8 February 2017

Why did Argentina become a super-exporter of agricultural and food products during the Belle Époque (1880-1929)?

During the first wave of globalisation, the populations of some non-European countries were also able to earn high incomes despite low levels of industrialisation. These countries had recently been settled by Europeans (Canada, Argentina, Uruguay, Australia and New Zealand), and their economic growth was based on the rapid expansion of their exports of primary products and on the linkage effects of these exports with other economic activities.

This was the case of Argentina during these years. According to the recent estimates of world trade published by Federico and Tena-Junguito (2016), Argentine exports, which represented around 0.8% of world trade during the early 1850s, reached almost 4% in the 1920s.

Figure 1. Ratio of Argentine exports over world exports (% at current prices)
Source: Federico and Tena (2016)

There are very few studies that adopt a cliometric perspective to identify the determinants of such accelerated export growth, which is a necessary condition for the export-led model to work. The objective of this work is to provide a cliometric contribution to this field by constructing a gravity model to explain the determinants of the growth of Argentina’s exports between 1880 and 1929.
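
In its simplest log-linear form, a gravity model relates bilateral exports to the economic mass of the two trading partners and the distance between them. A minimal predictive sketch follows; the unit elasticities and the constant are illustrative assumptions, not estimates from this work.

```python
# log X_ij = k + a*log(GDP_i) + b*log(GDP_j) - c*log(dist_ij)
# Coefficients a, b, c, k below are illustrative, not estimated values.
from math import exp, log

def log_exports(gdp_origin, gdp_dest, distance_km,
                a=1.0, b=1.0, c=1.0, k=-10.0):
    """Predicted log of bilateral exports from origin to destination."""
    return k + a * log(gdp_origin) + b * log(gdp_dest) - c * log(distance_km)

base = log_exports(1e9, 1e9, 10_000)
# With b = 1, doubling the destination's GDP doubles predicted exports;
# with c = 1, doubling the distance halves them.
print(round(exp(log_exports(1e9, 2e9, 10_000) - base), 2))  # 2.0
print(round(exp(log_exports(1e9, 1e9, 20_000) - base), 2))  # 0.5
```

Estimating such an equation on product-level bilateral flows is what allows demand, distance, and tariff effects to be separated.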
To this end, the bilateral export data we need have been drawn from a meticulous review of Argentine foreign trade statistics. In contrast to the vast majority of quantitative analyses of this subject, we have studied the annual path of the principal export products; that is, the destinations of each individual product. The following chart summarises Argentine exports in current and constant values (calculated at 1913 prices) (Figure 2).

Figure 2. Argentine exports, in current and constant values (1913 prices),
in millions of pounds, 1875-1929
Source: Own elaboration according to official Argentine statistics (1875-1929) and Cortes Conde et al. (1965).

As we can see, Argentina’s integration into international markets was successful after the 1870s. But, according to Cortes Conde (1985), it was not until the last decade of the nineteenth century that exports contributed to paying for debt services and to financing imports, which was necessary not only to transform the productive structure but also to cover the consumption needs of the domestic market. 

To analyse export growth, we have separated the products into three groups: 1) traditional livestock exports, which include wool, salted and dried cattle hides, raw sheep skins, bovines, jerked meat and tallow; 2) crop exports, comprising wheat, corn and linseed; and 3) processed agrifood exports, composed of chilled and frozen beef, frozen mutton, wheat flour, quebracho logs and quebracho extract. As Figure 3 shows, although the first group also grew, from a long-term perspective that abstracts from fluctuations, the second and third groups grew more and at a faster pace.

Figure 3. Breakdown of Argentine exports at constant prices of 1913 (thousands of pounds). Source: own elaboration based on official Argentine statistics.

Our econometric results reveal that the increase in Argentina’s GDP was important in explaining export growth. On the one hand, new lands were successfully incorporated into the productive system. On the other hand, labour and capital, traditionally scarce factors, were supplied from abroad.

However, obviously without a solvent demand for the type of goods in which the country successively specialised, the export business would not have developed sufficiently. Therefore, the demand for food and raw materials, particularly from the most developed European countries, was essential.

The fall in transport costs was also a contributing factor. However, during the period analysed, the increases or reductions in tariffs did not have a significant effect on the country’s exports as a whole.

These overall results are better understood when analysed by type of product. This also constitutes an original contribution, since the literature has generally not differentiated between export goods. In this case, significant peculiarities may be observed.

The development of the Argentine economy constituted an obstacle to the growth of its exports of (unprocessed) livestock products, as agriculture competed for the land on which this activity was carried out. Furthermore, the emergence of a meat-processing industry gave rise to a preference for exporting frozen and chilled meats rather than live animals. The opposite was the case for raw and processed agricultural and livestock products, which experienced an improvement in exports as a result of the country’s economic growth. Tariff protection only had a significant effect on agricultural products, particularly wheat, which, from the end of the nineteenth century, faced increasing obstacles in some continental countries.

Vicente Pinilla
Agustina Rayes

The blog post was written by Vicente Pinilla (Universidad de Zaragoza) and Agustina Rayes (Universidad Nacional del Centro de la Provincia de Buenos Aires).

The working paper can be downloaded here:


Cortés Conde, R. (1985): “The Export Economy of Argentina, 1880-1920”, in R. Cortés Conde and S.J. Hunt (eds.), The Latin American economies: growth and the export sector 1880-1930, New York, Holmes.
Federico, G. and Tena-Junguito, A. (2016): “World trade, 1800-1938: a new data-set”, European Historical Economics Society, Working Paper 93.

Monday, 6 February 2017

Plague and long-term development

The lasting effects of the 1629-30 epidemic on the Italian cities

Guido Alfani is
associate professor at
Bocconi University
After many years of relative neglect, plague has recently started to recover a long-lost popularity among economic historians. In particular, the Black Death pandemic of the fourteenth century has been singled out as a possible factor favouring Europe over the main Asian economies, particularly India and China (for example, Clark 2007; Voigtländer and Voth 2013). Indeed, there is evidence of a long-lasting improvement in European and Mediterranean real wages immediately after the Black Death (Pamuk 2007; Campbell 2010). However, there is also evidence that in less densely populated areas of Europe, like Ireland or Spain, the long-term consequences of plague were negative, not positive, as “[Plague] destroyed the equilibrium between scarce population and abundant resources” (Álvarez Nogal and Prados de la Escosura 2013, p. 3). More generally, it can be argued that, among plagues and other lethal epidemics, the Black Death may be the exception in having had (mostly) positive long-run consequences (Alfani and Murphy 2017).

Indeed, in a recent article I suggested that during the seventeenth century the epidemiology of plague differed between the North and the South of Europe (Alfani 2013a). The South, and Italy in particular, was affected much more severely than the North. In 1629-31, plague killed about one-third of the population of northern Italy. A second epidemic, in 1656-57, ravaged central and southern Italy. In the Kingdom of Naples, overall population losses were in the 30-43 per cent range (Fusco 2009). The economic consequences of these plagues were negative, and indeed I argued that the differential impact of plague helps to explain the origin of the relative decline of the most advanced areas of Italy compared to northern Europe (Alfani 2010; 2013a; 2013b).

In a new EHES working paper, which I co-authored with Marco Percoco, we introduce the largest existing database of urban mortality rates in plague years. This allows us, first, to demonstrate the particularly high severity of the last Italian plagues (in the two seventeenth-century waves, mean mortality rates in cities were in the order of 400 per thousand) and, secondly, to analyse their economic impact.

By using the methods of economic geography, we study the ability of a mortality crisis to alter the growth path followed by a city (in particular, we follow the approach introduced by Davis and Weinstein 2002). We find evidence that the 1629-30 plague affecting northern Italy displaced some of the most dynamic and economically advanced Italian cities, like Milan or Venice, moving them to a lower growth path. We also estimate the huge losses the epidemic caused in urban populations (Figure 1), and show that it had a lasting effect on urbanization rates throughout the affected areas (note that changes in urbanization rates and in city size are often used as indicators of economic growth or decline over the long run: see for example Bosker et al. 2008; Percoco 2013).
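
The persistence test in the spirit of Davis and Weinstein (2002) regresses post-shock city growth on the shock itself: a slope near minus one implies full recovery to the old growth path, while a slope near zero implies a permanent displacement. A minimal sketch with invented city-level numbers (not the paper's data):

```python
# OLS slope of post-shock growth on the shock; the four-city dataset below
# is invented purely for illustration.
def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

shock = [-0.5, -0.3, -0.1, 0.0]    # log population change in plague years
growth = [0.02, 0.01, 0.03, 0.02]  # subsequent log population growth
beta = slope(shock, growth)
print(round(beta, 2))  # 0.01 -- close to zero: shocks look permanent here
```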

Figure 1. Size of the urban population in Piedmont, Lombardy, and Veneto (1620-1700)

Our argument is further strengthened by the fact that, while there is clear evidence of the negative consequences of the 1630 plague, there is very little to suggest a positive effect. As we argue, the potential positive consequences of the plague were entirely eroded by a negative productivity shock. Our regression analysis provides indirect evidence of this, but there is also direct evidence: for key cities like Florence, Genoa and Milan we have time series of masons’ real wages covering the entire seventeenth century (Figure 2). This sample includes one city heavily affected by the 1630 plague (Milan: mortality rate of 462 per thousand), one relatively less affected (Florence: 137 per thousand) and one entirely spared (Genoa). Interestingly, of the three, the only one showing signs of an increase in real wages after 1630 is Genoa.

Figure 2. Real wages of masons in cities of northern Italy and overall urban and rural real wages in central-northern Italy, 1600-1700 (index based on the average of 1620-30). 

By demonstrating that the plague had a permanent negative effect on many key Italian urban economies, we provide support for the hypothesis that the origins of the relative economic decline of the northern part of the Peninsula are to be found in particularly unfavorable epidemiological conditions. More generally, our paper provides a useful new perspective on Italian long-term economic trends, including the falling behind of northern Italy compared to its main European competitors and the final consequences of the progressive “ruralization” of the Italian economies during the seventeenth century.

The working paper can be downloaded here:


Alfani, G. 2010. ‘Pestilenze e «crisi di sistema» in Italia tra XVI e XVII secolo. Perturbazioni di breve periodo o cause di declino economico?’, in S. Cavaciocchi (ed.), Le interazioni fra economia e ambiente biologico. Florence: Florence University Press: 223-247.

Alfani, G. 2013a. ‘Plague in seventeenth century Europe and the decline of Italy: an epidemiological hypothesis’, European Review of Economic History, 17 (4): 408-430.

Alfani, G. 2013b. Calamities and the Economy in Renaissance Italy. The Grand Tour of the Horsemen of the Apocalypse. Basingstoke: Palgrave.

Alfani, G. and T. Murphy. 2017. ‘Plague and Lethal Epidemics in the Pre-Industrial World’, Journal of Economic History, 77(1): 314-343.

Álvarez Nogal, C. and L. Prados de la Escosura. (2013). ‘The Rise and Fall of Spain (1270-1850)’, Economic History Review, 66(1): 1–37.

Bosker, M., S. Brakman, H. Garretsen, H. de Jong, and M. Schramm. 2008. ‘Ports, Plagues and Politics: Explaining Italian City Growth 1300-1861’, European Review of Economic History, 12: 97-131.

Campbell, B. M. S. 2010. “Nature as historical protagonist: environment and society in pre-industrial England”, Economic History Review 63: 281-314.

Clark, G. 2007. A Farewell to Alms: A Brief Economic History of the World. Princeton: Princeton University Press.

Davis, D.R. and D.E. Weinstein. 2002. ‘Bones, Bombs, and Break Points: The Geography of Economic Activity’, American Economic Review, 92(5): 1269-1289.

Fusco, I. 2009. ‘La peste del 1656-58 nel Regno di Napoli: diffusione e mortalità’, in G. Alfani, G. Dalla Zuanna and A. Rosina (eds.), La popolazione all’alba dell’era moderna, special number of Popolazione e Storia, 2/2009: 115-138.

Malanima, P. 2013. ‘When did England overtake Italy? Medieval and early modern divergence in prices and wages’, European Review of Economic History, 17: 45-70.

Pamuk, S. 2007. ‘The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600’, European Review of Economic History, 11: 289-317.

Percoco, M. 2013. ‘Geography, Institutions and Urban Development: Evidence from Italian Cities’, Annals of Regional Science, 50: 135–152.

Voigtländer, N. and H.J. Voth 2013. “The Three Horsemen of Riches: Plague, War, and Urbanization in Early Modern Europe.” Review of Economic Studies 80 (2): 774–811.

Thursday, 12 January 2017

Accounting for the ‘Little Divergence’

This blog post was written by
Alexandra M. de Pleijt,
postdoc at Utrecht University

What drove economic growth in pre-industrial Europe, 1300-1800? 

The Industrial Revolution is arguably the most important break in global economic history, separating a world of at best very modest improvements in real incomes from the period of ‘modern economic growth’. Thanks to the pioneering work of van Zanden and van Leeuwen (2012) and Broadberry et al (2015), this phenomenon has recently been linked to the study of long-term trends in per capita GDP. One of the questions is to what extent growth before 1750 helps to explain the break that occurs after that date; the idea of a ‘Little Divergence’ within Europe has been suggested as part of the explanation of why the Industrial Revolution occurred in this part of the world.

This ‘Little Divergence’ is the process whereby the North Sea Area (the UK and the Low Countries) developed into the most prosperous and dynamic part of the Continent. The new series on per capita GDP demonstrate that the Low Countries and England witnessed almost continuous growth between the 14th and the 18th century, whereas in other parts of the continent real incomes went down in the long run (Italy) or stagnated at best (Portugal, Spain, Germany, Sweden and Poland) (see Figure 1). As a consequence, at the dawn of the Industrial Revolution in the 1750s, the level of GDP per capita of Holland and England had increased to 2355 and 1666 (international) dollars of 1990 respectively, compared to 876 and 919 dollars in 1347 (just before the arrival of the Black Death), and 1454 and 1134 dollars in 1500 (Bolt and van Zanden 2014).

Figure 1. Gross Domestic Product per capita, 1300-1800.
Notes and sources: see Bolt and van Zanden (2014).

Although the ‘Little Divergence’ between the North Sea area and the rest of the continent has been established, very little is known about the causes of this phase of pre-industrial growth. Why were the Low Countries and England able, long before 1800, to break through Malthusian constraints and generate a process of almost continuous economic growth? Various hypotheses have been suggested. One explanation focuses on institutional changes: the rise of socio-political institutions (in particular active parliaments) and demographic institutions (notably the European Marriage Pattern) was favourable for growth in the Low Countries and England (de Moor and van Zanden 2010, van Zanden et al 2012). Other scholars have stressed the importance of the growth of overseas trade (e.g. Acemoglu et al 2005) – a hypothesis which is supported by Allen’s (2003) study explaining differences in real wages in Europe between 1300 and 1800. Finally, others have indicated the importance of increases in agricultural productivity (Overton 1996) and human capital formation (Baten and van Zanden 2008).

In a new EHES working paper, we have tested the various hypotheses explaining pre-industrial growth in early modern Europe using new data on per capita GDP, political institutions (active parliaments), human capital formation (per capita book consumption), productivity in agriculture (yield ratios), and international trade (per capita size of the merchant fleet). Our empirical findings show that GDP growth before the Industrial Revolution was mainly driven by human capital formation. We moreover show that institutional changes (the rise of active parliaments) were closely related to pre-industrial growth.

The working paper can be downloaded here:


Acemoglu, Daron, Simon Johnson, and James A. Robinson. “The Rise of Europe: Atlantic Trade, Institutional Change and Growth.” American Economic Review 95, no. 3 (2005): 546-79.

Allen, Robert C. “Progress and Poverty in Early Modern Europe.” Economic History Review LVI, no. 3 (2003): 403-43.

Baten, Joerg, and Jan Luiten van Zanden. “Book Production and the Onset of Modern Economic Growth.” Journal of Economic Growth 13, no. 3 (2008): 217-35.

Bolt, Jutta, and Jan Luiten van Zanden. “The Maddison Project: collaborative research on
historical national accounts.” Economic History Review 67, no. 3 (2014): 627-51.

Broadberry, Stephen N., Bruce Campbell, Alex Klein, Mark Overton, and Bas van Leeuwen. British Economic Growth, 1270-1870. Cambridge: Cambridge University Press, 2015.

De Moor, Tine, and Jan Luiten van Zanden. “Girl Power: The European Marriage Pattern and Labour Markets in the North Sea Region in the Late Medieval and Early Modern Period.” Economic History Review 63, no. 1 (2010): 1-33.

Overton, Mark. Agricultural Revolution in England: The Transformation of the Agrarian Economy 1500-1850. Cambridge: Cambridge University Press, 1996.

Van Zanden, Jan Luiten, and Bas van Leeuwen. “Persistent but not Consistent: The Growth of National Income in Holland, 1347-1807.” Explorations in Economic History 49, no. 2 (2012): 119-30.

Van Zanden, Jan Luiten, Eltjo Buringh, and Maarten Bosker. “The Rise and Decline of European Parliaments, 1188-1789.” Economic History Review 65, no. 3 (2012): 835-61.

Tuesday, 29 November 2016

Long Run Growth in Spain: Evidence from Historical National Accounts

Leandro Prados de la Escosura (Universidad Carlos III, CEPR, Groningen, and CAGE)
Can we rely on historical estimates of GDP to assess output and material welfare in the long run? 

In the early days of modern economic quantification, Kuznets (1952: 16-17) noticed the “tendency to shrink from long-term estimates” on the grounds of “the increasing inadequacy of the data as one goes back in time and to the increasing discontinuity in social and economic conditions”. Cautious historians recommend restricting the use of GDP to societies that had efficient recording mechanisms, relatively centralised economic activities, and a small subsistence sector (Hudson, 2016; Deng and O’Brien, 2016). But should not the adequacy of data be “judged in terms of the uses of the results” (Kuznets, 1952: 17)?

A new dataset

It is with these caveats that a new set of historical national accounts, with GDP estimates from the demand and supply side, is presented for Spain as the basis to investigate its modern economic growth (Prados de la Escosura, 2016b). 
Historical output and expenditure series are reconstructed for the century prior to the introduction of modern national accounts. The new series are built from highly disaggregated data grounded on the painstaking research carried out during the last decades. 
Then, available national accounts are spliced through interpolation, as an alternative to conventional retropolation, to derive new continuous series for 1958-2015 (Prados de la Escosura, 2014, 2016a). Next, the series for the ‘pre-statistical era’ are linked to the spliced national accounts, providing yearly series for GDP and its components over 1850-2015. Finally, on the basis of new population and labour force estimates, GDP per head and labour productivity are derived (the dataset can be accessed online).

What do the data show? 

Aggregate economic activity multiplied fifty times between 1850 and 2015, at an average cumulative growth rate of 2.4 per cent per year. Four main phases, in which the growth trend varied significantly, may be established: 1850-1950 (with a shift to a lower level during the Civil War, 1936-1939), 1951-1974, 1975-2007, and 2008-2015.

Figure 1. Real GDP and GDP per head (2010=100) (logs)

But to what extent did a larger amount of goods and services affect individuals’ living conditions? Since population trebled, real GDP per head experienced nearly a 16-fold increase, growing at an annual rate of 1.7 per cent. Such an improvement took place at an uneven pace. Per capita GDP grew at 0.7 per cent over 1850-1950, doubling its initial level. During the next quarter of a century, the Golden Age, its pace accelerated more than 7-fold, so that by 1974 per capita income was 3.6 times higher than in 1950. Although economic progress slowed down from 1975 onwards, and the rate of per capita GDP growth shrank to one-half that of the Golden Age, the level of per capita GDP more than doubled between 1974 and 2007. The Great Recession (2008-13) shrank per capita income by 11 per cent, but by 2015 its level was still 83 per cent higher than at the time of Spain’s EU accession (1985).
What steered such a remarkable rise in product per capita? GDP per capita depends on the amount of work per person and on how productive that effort is. GDP per capita and labour productivity (measured as GDP per hour worked) evolved in parallel over 1850-2015, although, as the number of hours worked per person shrank, labour productivity grew at a faster pace: it increased 23-fold, against 16-fold for GDP per capita.
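The annual rates quoted above follow from the cumulative multiples by simple compounding. A minimal sketch of that arithmetic (the 50-fold, 16-fold and 23-fold multiples and the 1850-2015 span are taken from the text; the implied rate for GDP per hour is a derived figure, not one stated in the post):

```python
def annual_growth(multiple, years):
    """Average compound annual growth rate implied by a cumulative multiple."""
    return multiple ** (1 / years) - 1

years = 2015 - 1850  # 165 years

# Cumulative multiples reported in the text:
print(f"GDP (x50):          {annual_growth(50, years):.1%}")  # ~2.4% per year
print(f"GDP per head (x16): {annual_growth(16, years):.1%}")  # ~1.7% per year
print(f"GDP per hour (x23): {annual_growth(23, years):.1%}")  # ~1.9% per year
```

The first two results reproduce the 2.4 per cent and 1.7 per cent figures in the text.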

Figure 2. Per Capita GDP and its Components, 1850-2015 (2010=100) (logs)

The main element behind the decline in hours worked per person is the reduction in hours worked per fully occupied worker, which fell from 2,800 hours per year in the mid-nineteenth century to less than 1,900 in the early twenty-first century. Thus, long-term gains in output per capita are entirely attributable to productivity gains, with phases of accelerating GDP per capita, such as the 1920s or the Golden Age (1950-1974), matching those of faster labour productivity growth.
A closer look at the last four decades reveals, however, significant discrepancies, with phases of acceleration in labour productivity corresponding to phases of slowdown in GDP per person, and vice versa. Thus, periods of sluggish (1975-84) or negative (2008-13) per capita GDP growth paralleled episodes of vigorous or recovering productivity growth; in the first case, the ‘transition to democracy’ decade, labour productivity offset the sharp contraction in hours worked (largely resulting from unemployment) and prevented a decline in GDP per head. Conversely, the years between Spain’s accession to the European Union (1985) and the eve of the Great Recession (2007) exhibited substantial per capita GDP gains while labour productivity slowed down. Thus, during the three decades after Spain’s accession to the EU, in which the economy grew at 3 per cent per year and GDP per head doubled, the increase in hours worked per person contributed more than half of that gain. It can, then, be concluded that since the mid-1970s the Spanish economy has been unable to combine employment creation and productivity growth, with the implication that the sectors that expanded and created jobs (mostly construction and services) were those least successful in attracting investment and technological innovation.

Falling behind, catching up, … and falling back again?

Spanish long-term growth has been similar to that of western nations, though Spain’s level of GDP per head appears systematically lower. 

Figure 3. Spain’s Comparative Real Per Capita GDP (2011 EKS $) (logs)

The pace of growth before 1950 was comparatively slow in Spain. Sluggish performance over 1883-1913 and the failure to take advantage of World War I neutrality to catch up partly account for it. Furthermore, the progress achieved in the 1920s was outweighed by Spain’s short-lived recovery from the Depression, brought to a halt by the Civil War (1936-39), and by a longer and weaker post-war reconstruction than in the warring western European countries after 1945. Thus, Spain fell behind between 1850 and 1950.
The situation reversed from 1950 to 2007. The Golden Age, especially from 1960 onwards, stands out as years of outstanding performance and catching up with the advanced nations. Steady, although slower, growth after the transition-to-democracy years (1975-84) allowed Spain to keep catching up until 2007. The Great Recession reversed the trend, although it is too soon to determine whether it has opened a new phase of falling behind.
On the whole, Spain’s position relative to western countries has evolved along a wide U-shape, deteriorating until 1950 (except for the 1870s and 1920s) and recovering thereafter (but for the episodes of the transition to democracy and the Great Recession). Thus, at the beginning of the twenty-first century Spanish real GDP per head represented a proportion of US and German income similar to that of the mid-nineteenth century, and of French and Italian income similar to that of the 1870s, although it had significantly improved with respect to the UK.

This blog post was written by Leandro Prados de la Escosura, Professor of Economic History at Universidad Carlos III.

The working paper can be downloaded here:


Deng, K. and P. O’Brien (2016), “China’s GDP Per Capita from the Han Dynasty to Communist Times”, World Economics 17, 2: 79-123.
Hudson, P. (2016), GDP per capita: from measurement tool to ideological construct, LSE Business Review (10 May 2016).
Kuznets, S. (1952), Income and Wealth of the United States. Trade and Structure, Income and Wealth Series II, Cambridge: Bowes and Bowes.
Prados de la Escosura, L. (2016a), “Mismeasuring Long Run Growth. The Bias from Spliced National Accounts: The Case of Spain”, Cliometrica 10, 3: 251-275.
Prados de la Escosura, L. (2016b), “Spain’s Historical National Accounts: Expenditure and Output, 1850-2015”, EHES Working Paper 103.
Prados de la Escosura, L. (2014), “Mismeasuring long-run growth: The bias from spliced national accounts” (4 September).

Tuesday, 15 November 2016

The mining sectors in Chile and Norway, ca. 1870-1940: the development of a knowledge gap

Kristin Ranestad is a post-doc at the University of Oslo

New EHES working paper

Chile and Norway are two ‘natural resource intensive economies’, which have had different development trajectories, yet are closely similar in industrial structure and geophysical conditions. 

The questions of how and why Chile and Norway have developed so differently are explored through an in-depth comparative analysis of knowledge accumulation in one of the natural resource sectors, namely mining, from around 1870 to 1940, a period in which mining went through important technological changes and the two countries started to diverge.

Countries rich in natural resources which exhibit poor economic performance are often understood as being ‘cursed’ and recommended to shift to industries that are not based on raw materials. A key empirical problem with the ‘resource curse’ argument, however, is that some of the richest countries in the world, such as Norway, Sweden, Canada and Australia, have developed fast-growing economies based on natural resources. The differences in economic performance across natural resource intensive economies suggest that an abundance of natural resources does not necessarily lead to stagnation. Conversely, some countries have arguably developed because of their natural resources, not despite them. Evidence suggests that natural resource intensive industries in high-income economies have been highly knowledge intensive, dynamic and innovative; they have created linkages to other industries within the economy and developed specialisations and new industries which have contributed to complex economic structures (see e.g. Andersen 2012; De Ferranti et al. 2002; Hirsch-Kreinsen et al. 2003; Ville and Wicken 2012). In this paper, I seek to contribute to this debate by systematically comparing how knowledge accumulation occurred in one sector, namely mining. Comparing a single natural resource sector allows for much more in-depth empirical analysis than a country-level comparison, and allows us to explore how natural resource industries in some countries have become highly innovative, while others have not.

A gap started to emerge between the two mining sectors from the late nineteenth century. While the mining sector in Chile was considered technologically advanced in the mid-nineteenth century, from the late nineteenth century Chile’s share of copper production fell dramatically, multinational companies created ‘enclaves’, a technological gap emerged within the sector between technologically advanced multinational companies and small-scale companies using old technology, thousands of mines were abandoned, and many ore deposits remained unexploited. The mining sector in Norway, on the other hand, was innovative, multinational companies were more integrated in the host economy, and large-scale electro-metallurgical production started in the late nineteenth century.
“Boletin Minero”: the mining bulletin included articles about mining companies, mining production, new technology, debates about mining education, etc.

Why did this gap between the two mining sectors develop? I explore how comparable knowledge organisations in the two countries (formal mining education, organisations for technology transfer, and geological research centres) developed technological knowledge, and how such organisations encouraged or blocked innovation in the sectors.

I use primary sources from archives in Chile, Norway and the United States in the form of written documents. Study programs and course descriptions for both countries make it possible to compare mining instruction at the higher and intermediate levels in detail. Graduate lists enable comparisons of the availability of mining engineers and technicians in the sector. Student yearbooks provide unique information about all the mining engineers, technicians and other skilled workers with expertise relevant to mining, including their work, positions and travels. These sources, together with engineering and company reports and technical and mining journals, allow us to follow the graduates from school into their working lives, and enable in-depth comparisons of the relationship between knowledge development, education, learning and innovation (see Ranestad 2016 for an explanation of these sources).

The detailed comparison of these knowledge organisations shows that there were differences between Chile and Norway in terms of knowledge accumulation. The set of organisations in Chile blocked transfer, use and diffusion of knowledge, while in Norway the organisations facilitated the creation, transfer and adoption of knowledge, which in turn contributed to an overall dynamic and innovative mining sector. This led to a knowledge gap between the two countries. 

“Ingeniørene”: an example of the student yearbooks, which include detailed information about the career paths of the engineers.

Formal mining instruction in the two countries was similar, but the two countries differed when it came to the availability of mining engineers, technicians and other relevant skilled workers to administer mining companies and manage complex technology. In particular, Chile had too few formally trained workers to fill the managing and strategic technical positions at its thousands of mining companies, technical schools and research centres. Additionally, the two countries differed when it came to scholarships and funds for practical learning. During trips abroad, engineers and technicians acquired valuable contacts and information about new techniques, and, most importantly, practical know-how with foreign technology. While continuous public and private programs were established in Norway, and most of its mining engineers went abroad to learn, scholarships were provided only sporadically in Chile, and only very few engineers went abroad. These differences in knowledge accumulation between the two countries, I argue, help explain the diverging paths of the two sectors.

The two countries also differed when it came to geological mapping, prospecting, ore analysis and economic planning. Without a deep understanding of the geology and of the existing mineral deposits and their potential profits, new mining projects could hardly take place and the mining sector could barely advance (David and Wright 1997). In Norway, the Geological Survey of Norway, a public organisation, was established in 1858 with, in principle, two main tasks: to contribute new knowledge about geological features, their scope and potential utility, and to carry out new and more systematic surveys of the country’s geological formations and deposits (Børresen and Wale 2008). In Chile, no permanent organisation existed with the aim of systematically mapping the country’s resources. Sporadic geological work was carried out (Villalobos 1990), but it was not nearly enough to acquire complete and in-depth knowledge of existing ore deposits, their grade and their possible profits. As a result, several thousand mines were abandoned, and unexploited mineral deposits remained unknown. This situation endured, and large mineral deposits were not found until recent times (De Ferranti et al. 2002, 58-59). In short, the lack of geological maps and ore surveys in Chile had huge implications for the progress of the mining sector, blocking the start-up of mining projects. This, in turn, was linked to the small number of mining engineers and geologists in the country, who were indispensable for this type of work. These differences in knowledge accumulation help explain the emerging development gap between the two sectors.

The underlying reason for the knowledge gap may be linked to the role of the state. In Chile, members of the National Mining Society, professors and engineers expressed the need for more geological surveys, more skilled workers and more initiatives to send engineers abroad to learn. However, although some public initiatives were implemented, they were clearly not enough to encourage continuous innovation processes in the sector. It is, perhaps, strange that more was not done in Chile to develop knowledge for mining and to learn about existing mineral and metal deposits, considering that the country had huge mineral and metal ores and some of the largest copper deposits in the world. Despite this huge natural resource potential, mapping the country’s natural resources, education and knowledge transfer were simply given lower priority by the broader set of political decision-makers. In Norway, in contrast, the state was much more active in supporting knowledge development: it funded the Geological Survey, guaranteed general schooling, financed universities, mining and technical schools, and managed many of the scholarships for study travels.

Finally, I would like to commemorate Karl Gunnar Persson, who was a kind, joyful and caring person. He was a great support to Paul and very understanding. I met him several times with Paul for dinner and drinks and we heard cheerful stories about his travels, life experiences and research. I miss him and those very nice and interesting conversations.

This blog post was written by Kristin Ranestad, University of Oslo.
The EHES working paper can be downloaded here:

Andersen, Allan Dahl. 2012. “Towards a new approach to natural resources and development: the role of learning, innovation and linkage dynamics”. Int. J. Technological Learning, Innovation and Development, vol. 5 (3).

Børresen, Anne Kristine, and Astrid Wale. 2008. Kartleggerne. Trondheim: Tapir akademisk forlag.

David, Paul, and Gavin Wright. 1997. “Increasing Returns and the Genesis of American Resource Abundance”. Industrial and Corporate Change, vol. 6 (2).

De Ferranti, David, Guillermo E. Perry, Daniel Lederman, and William E. Maloney. 2002. From Natural Resources to the Knowledge Economy. Washington D. C: The World Bank.

Hirsch-Kreinsen, Hartmut, David Jacobsen, Steffan Laestadius, and Keith Smith. 2003. “Low-Tech Industries and the Knowledge Economy: State of the Art and Research Challenges”. PILOT Policy and Innovation in Low-Tech. Oslo: STEP – Centre for Innovation Research.

Ranestad, Kristin. 2015. “The mining sectors in Chile and Norway from approximately 1870 to 1940: the development of a knowledge gap.” PhD diss., University of Geneva.

Villalobos, S., et al. 1990. Historia de la ingenieria en Chile. Santiago: Editorial Universitaria.

Ville, S., & Wicken, O. (2013). “The dynamics of resource-based economic development: Evidence from Australia and Norway”. Industrial and Corporate Change, 22 (5).

Friday, 21 October 2016

Danger to the Old Lady of Threadneedle Street?

Patrick O'Brien  is Professor Emeritus,
London School of Economics

New EHES working paper

The Bank Restriction Act of 1797 suspended the convertibility of the Bank of England's notes into gold. The current historical consensus is that the suspension was a result of the state's need to finance the war, France’s remonetization, a loss of confidence in the English country banks, and a run on the Bank of England’s reserves following a landing of French troops in Wales.

In a recent EHES paper (O’Brien and Palma 2016) we argue that while these factors can help us understand the timing of the Restriction period, they cannot explain its success. We deploy new long-term data which leads us to a complementary explanation: the policy succeeded thanks to the reputation of the Bank of England, achieved through a century of prudential collaboration between the Bank and the Treasury. Furthermore, the Restriction Period led to a permanent shift in the role of banknotes in the economy, despite the inauguration of the classical gold standard in 1821.

Nuno Palma is Assistant Professor, University of Groningen

This episode has some parallels with the better-known 1914 suspension of the gold standard, but also some important differences. One such difference is that the effects were much more moderate. No major financial crisis followed, and inflation eventually increased but remained moderate. In the words of Schumpeter (1987/1954, p. 690-1): “In spite of the suspension … war finance did not produce any great effects upon prices and foreign exchange-rates until about 1800. To the modern student who is inured to stronger stuff, the most striking feature of the subsequent inflation is its mildness … at no time was the government driven to do anything more unorthodox than abnormally heavy borrowing from the Bank, and even this borrowing never surpassed the limits beyond which the term ‘borrowing’ becomes an euphemism for printing government fiat”.

The Bank of England (which was a private company, though already beginning to play a public role) had suffered a significant drain on its reserves from the mid-1790s. In 1797 it suspended the convertibility of its notes into gold. It also started issuing small-denomination notes, and banknotes became increasingly important as a means of payment. As we document in the paper, the economy-wide circulation of all means of exchange except coin (such as inland bills of exchange or banknotes) at the retail and wage-paying levels had remained limited until the 1790s. The data allow us to study the case of Bank of England notes in detail, by comparison with the coin supply (Figure 1 below). As the figure suggests, the 1797 suspension marks a discontinuity for Bank of England notes, which increased a great deal in real terms after that date. At the same time, the coin supply had been falling since shortly after the beginning of the war. It is tempting to interpret this shift in terms of Gresham’s law, but we do not favor that interpretation, because the selection of a “bad” means of payment implies asymmetry of information, and no seller would have had any difficulty distinguishing Bank of England notes from coin. Instead, Bank of England notes eventually traded at a discount (which reached a maximum of about 50%), but this only mattered after about 1808.

Not only did the value of Bank of England notes in circulation increase a great deal, but their denominational distribution changed. Up to the 1790s, £10 notes (over £1,000 in 2015 prices) were the lowest denomination issued by the Bank of England; £5 notes were first issued only in 1793, at the start of the war against Revolutionary France. Denominations of £5 were in turn followed by £2 and £1 banknotes, issued in 1797, coinciding with the Restriction Period. Crucially, even allowing for a margin of contemporaneous inflation, £1 was then just enough to pay a laborer’s weekly wage. The fact that many new issues were of lower denominations implies that looking only at the value of the increase in Bank of England notes underestimates how much more common they became at this time. This had important long-term consequences, because it was at this point that ordinary people, and in particular the lower classes, first became accustomed to banknotes as a means of payment.
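As a rough illustration of the denomination arithmetic, the sketch below converts the note denominations to 2015 prices using a single price-level multiplier of roughly 100x. That multiplier is a hypothetical figure inferred only from the text's statement that a £10 note was worth over £1,000 in 2015 prices; it is not an official deflator:

```python
# Hypothetical 1790s -> 2015 price-level multiplier, inferred from the
# text's claim that a £10 note then was worth over £1,000 in 2015 prices.
PRICE_MULTIPLIER = 100

# Bank of England note denominations, in order of introduction.
denominations = [10, 5, 2, 1]  # pounds

for d in denominations:
    modern = d * PRICE_MULTIPLIER
    print(f"£{d:>2} note in the 1790s ~ £{modern:,} in 2015 prices")
```

On this reckoning, the £1 note introduced in 1797 corresponds to roughly £100 in 2015 prices, consistent with the text's observation that £1 was about a laborer's weekly wage.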

Figure 1. Coin supply and Bank of England notes, at constant prices of 1700.
Sources: Bank of England (1967), Palma (2016); for the deflator, Broadberry et al (2015).

As the suspension took place, a large number of merchants all over the country signed declarations in which they promised to accept and keep using banknotes. The most prominent of these meetings was that of London; while the Bank of England had a role in arranging the meeting, it could not force the merchants to take that decision, which was also publicly announced through publication in The Times. Hence both merchants and ordinary people accepted the Bank’s notes. Figure 2 shows a contemporary print in which John Bull, who represents the English people, accepts the paper pound despite the French alarmists who warn him that it will be worthless once the French land.

Figure 2. John Bull accepts paper money despite the warnings of French alarmists, by James Gillray. Published at Hannah Humphrey’s print shop on St. James Street, London, March 1st, 1797.

The argument we make in this paper is that the emergency conditions of the 1790s combined with a long history of prudent behavior, and with close collaboration between the Bank and the Treasury, to make this possible. While Bordo and White (1991) focus on the credibility of the public finances of the British state, our focus here is on the credibility of the outstanding liabilities of one particular institution, the Bank of England. We argue that the credible commitment underpinning the success of British public finance consisted of two parts: the government's commitment to sound public finance, and the Bank's commitment to sustaining both public and private credit. In this paper we focus on the latter, and in particular on how, by the late eighteenth century, the Bank managed to implement, with a good measure of success, a set of monetary policies that were highly unconventional by the standards of the time.

The Bank of England shifted gears with the Restriction Period, but it would be wrong to assume that after 1821 there was a return to the previous status quo. Figure 3 presents the data of Figure 1 as a ratio and extends the horizon to the mid-nineteenth century. The figure shows three important facts about this period. First, as we have seen, after a long period of stability there was a spike in Bank of England notes at the time of the Restriction Period. Second, and importantly, when the supply of notes was later reduced, the reversal was only partial: the level did not return to that of the 1790s. Third, and crucially, the previously stationary series then gained an upward trend: the growth which started in the 1790s continued into the nineteenth century. The regime change caused by the Bank Restriction in the 1790s persisted well into the future, long after the act was repealed. Through a process of path dependence, it caused a permanent shift towards a fiat-based monetary system, which, despite the later imposition of the classical gold standard, allowed for continuous growth of fiat money relative to the slower-growing stock of precious metals well into the nineteenth century.

Figure 3. The ratio of Bank of England notes to coin supply, 1696-1844

Which factors interacted with the Bank of England’s initial reputation to make the policy a success? Three stand out. First, the Bank of England’s expansion of banknotes during the Restriction was of a much smaller magnitude than had been the case in France a few years before. In 1797, the ratio of Bank of England notes to nominal GDP was just under 23%, and in the following years issues were never such that the 20 per cent mark was crossed again, a target made easier by the growth performance of the British economy during those years (Bank of England 1967, Broadberry et al 2015). This strongly contrasts with the case of France during the assignats debacle, where the expansion of fiat money was eventually exponential (Sargent and Velde 1995). In contrast with France, the Bank of England’s policies were subject to a series of checks and balances and were closely monitored, as exemplified by the “Bullion Report” and the related controversies and debates (see for instance Feavearyear 1931, pp. 190-2). Second, not only did Britain already have a comparatively high level of fiscal capacity, being able to borrow credibly, but the policies of the Bank were also accompanied at this time by a series of fiscal reforms. An example was the introduction of an income tax in 1798, which complemented the monetary reforms and allowed for the sustainability of the government’s budget constraint while ruling out hyperinflation. Finally, the policy was promised (and believed) to be a temporary, wartime measure.

This blog post was written by:
Patrick O'Brien (Professor Emeritus of Global Economic History, Department of Economic History, London School of Economics)

Nuno Palma (Assistant Professor, Department of Economics, Econometrics, and Finance, University of Groningen)

The working paper can be downloaded here:


Bank of England (1967). Bank of England Liabilities and Assets: 1696 to 1966. Quarterly Bulletin, June edition. Available online; accessed August 13, 2014.

Bordo, M., and White, E. (1991). A tale of two currencies: British and French finance during the Napoleonic Wars. The Journal of Economic History 51(2): pp. 303-316

Broadberry, Stephen, Bruce Campbell, Alexander Klein, Mark Overton, and Bas van Leeuwen (2015). British Economic Growth, 1270-1870. Cambridge University Press

Feavearyear, A. (1931). Pound Sterling: A History of English Money. Oxford

Palma, Nuno (2016). Reconstruction of annual money supply over the long run: the case of England, 1279-1870. EHES Working Papers in Economic History No. 94

Sargent, T. and F. Velde (1995). Macroeconomic features of the French Revolution, Journal of Political Economy, 103

Schumpeter, J. A. (1987/1954). History of Economic Analysis. Routledge