The Contrarian Blog

Is poverty really the problem and not inequality? A policy discussion

1/10/2021


 
By Ayan Banerjee
Some economists hold the belief that poverty, not inequality, is the problem, and that to alleviate poverty the government should use income transfers rather than direct interventions in particular markets. This article will critically assess these two ideas and evaluate the overall point of view.
 
Inequality refers to the differences between income groups, whereas (absolute) poverty concerns those whose standard of living falls below a certain benchmark (Barr, 2004). Poverty presents several multifaceted problems. Firstly, households that fall below the poverty line suffer serious effects on their health in several direct and indirect ways. These include the consequences of poorer-quality housing and of poorer nutrition, which is common even in developed countries where a healthier diet is often more expensive. Poor health is a significant constraint on a society's economic and social prosperity, as it brings additional care costs, the opportunity cost of lost output and reduced welfare of the population. Secondly, higher levels of poverty are associated with higher crime levels. According to the Becker model, the lack of legal opportunities facing those in poverty makes criminal activity relatively more attractive to a rational agent. Poor households also have lower disposable income and therefore generate less income for others through their spending and consumption. Overall, poverty is a significant, multi-faceted problem, making its alleviation an important objective.
 
Defining a value-free poverty line presents several problems (Barr, 2004). Since consumption opportunities depend on a combination of money and non-money income, measuring actual consumption from money income alone is problematic. Additionally, the definition of the income unit and the time period over which income is measured both make any definition of poverty somewhat arbitrary. Since policymakers establish a poverty line despite these issues, measurements inevitably either fail to capture all poverty or overestimate the number of those in poverty. Conclusions drawn from poverty data may therefore not truly reflect actual living standards, suggesting that inequality and other indicators must also be considered.
[Figure: income distributions of two societies, one with higher poverty but lower inequality than the other]
Another reason why inequality is also a problem is that some agents can have higher utility in a country with greater poverty but lower inequality. As shown in the above diagram, the two curves illustrate two societies: one with higher poverty but lower inequality than the other. Assuming rational utility-maximising agents, we can ask which society a representative poor person would rather be in. If their utility depends solely on their own income, they will choose the society with higher inequality and lower poverty. Conversely, if their utility increases with their own income but decreases with the income of a rich person, the rational choice may be the other society. Overall, agents behind a veil of ignorance can prefer a society with higher poverty and lower inequality, demonstrating that if the aim is to maximise the aggregate welfare of society, inequality is also a problem; this is the income externality at work.
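One way to write the income externality down (the functional form and the parameter b are illustrative assumptions, not taken from Barr):

U_poor = u(y_poor) - b*v(y_rich),   with b > 0 and u, v increasing

With b = 0 the representative poor agent compares societies on own income alone and picks the lower-poverty, higher-inequality one; with b large enough, the disutility from a larger y_rich can outweigh a higher y_poor, so the higher-poverty, lower-inequality society is rationally preferred.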
 
Income transfers are often used as a method of poverty alleviation; however, their effectiveness is contested. It is argued that income transfers are an effective way of alleviating poverty in the short run by offering cash to households that satisfy certain conditions. Their success depends on horizontal and vertical efficiency. On the supply side, some applicants may face a harsh evaluation and consequently not be offered support, whilst on the demand side eligible agents may not apply for income transfers at all, whether through lack of knowledge, inconvenience or stigma. Effectiveness also depends on vertical efficiency: that income transfers are not paid to households that no longer need them. These transfers also have a strong potential to act as a labour-supply disincentive and to increase administrative and tax costs, whilst pushing wages higher and affecting firms' profits. The success of income transfers is therefore highly dependent on the means testing used to determine eligibility. If these programmes are both vertically and horizontally efficient, income transfers can raise living standards to above the poverty line.
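The two efficiency concepts can be made concrete with a pair of illustrative ratios (definitions in the literature vary in detail):

horizontal efficiency = poor households actually receiving the transfer / all poor households
vertical efficiency = transfer spending reaching poor households / total transfer spending

Low horizontal efficiency means eligible households slip through the net (harsh assessment, ignorance, stigma); low vertical efficiency means spending leaks to households above the poverty line.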
 
Alternatively, governments can choose to enact forms of market-based intervention to alleviate poverty. One example is housing intervention. If expenditure on housing is too high, it can force individuals into poverty; providing more affordable housing would therefore improve living standards and increase disposable income. However, the success of this intervention is debatable. Intervention occurs through the government directly supplying houses, providing mortgage subsidies, enacting price controls or regulating size. Public-sector housing suppliers lack a profit motive, which can lead to the inefficient use of resources, for example over-staffing caused by bureaucratic problems. A price ceiling would also likely create a shortage of housing, producing a loss in the welfare of society (a deadweight loss). Therefore, whilst such intervention may reduce poverty, the separate distortions it creates could outweigh any benefits.
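A minimal sketch of the price-ceiling point, assuming illustrative linear demand and supply for rented housing:

Q_d(P) = a - b*P,   Q_s(P) = c + d*P,   market clearing at P* where Q_d(P*) = Q_s(P*)

With a binding ceiling P_c < P*, quantity demanded exceeds quantity supplied, so the shortage is Q_d(P_c) - Q_s(P_c), and the quantity actually traded falls from Q* to Q_s(P_c); the surplus lost on the units no longer traded is the deadweight loss referred to above.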

A minimum-wage intervention in the labour market has a similar effect. It aims to reduce in-work poverty and offer protection against exploitation; however, it is an uncertain route to tackling poverty. In theory it could increase unemployment, since labour would likely be supplied in excess of demand and firms facing higher costs might shed workers, with the lowest-paid affected first; in practice, this was not observed in the UK after the introduction of the national minimum wage. Furthermore, firms would likely raise prices in response to higher production costs, which disproportionately affects low-income workers as it has a larger proportional effect on their disposable income. Overall, this policy's effectiveness at alleviating in-work poverty is uncertain.
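The same framework as the housing sketch applies, with the minimum wage acting as a price floor rather than a ceiling (illustrative notation): at a binding floor w_min > w*, labour supplied exceeds labour demanded, L_s(w_min) > L_d(w_min), so employment is set by the demand side at L_d(w_min) < L* and the gap appears as unemployment. The standard textbook explanations for why this was not observed in the UK are relatively inelastic labour demand and employer (monopsony) power, though neither is established in the sources cited here.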
In conclusion, poverty alleviation represents an important economic objective for a society. However, this does not detract from the fact that inequality can still be a problem and may require intervention. Income transfers do represent a viable route to alleviating poverty in the short run, but their effectiveness depends on horizontal and vertical efficiency. Alternatively, market intervention can achieve poverty alleviation, though its success is more variable and depends on implementation.

Is the public provision of services inevitably inefficient?

23/7/2021


 
By Ayan Banerjee
The efficiency of the public provision of goods or services underpins a variety of social policy decisions. This article will evaluate whether public provision is inevitably inefficient and critically assess examples of reforms introduced to improve performance in separate areas of social policy.
 
Economic efficiency is achieved when all goods or services, as well as factors of production, are distributed and allocated in a way that makes the 'best use of limited resources' given agents' preferences (Barr, 2004). This is the point at which output is maximised, an optimal product mix is produced given consumer tastes and technology, and consumers allocate income in a way that maximises utility.
​
Many economists argue that public provision leads to the inefficient over-supply of goods or services. This can be shown by the simple model of bureaucracy, which rests on the assumption that bureaucrats act out of self-interest rather than in the interest of the public.
The below graph illustrates a market in which the government is the sole (monopoly) supplier of a good or service. A private monopolist would supply at Qm and Pm. The socially optimal bundle would be at Qs and Ps, where demand intersects the long-run marginal cost curve. Bureaucrats will seek to maximise their budget, as this is what determines their income and thus their utility. Niskanen's analysis suggests that bureaucrats often take advantage of information asymmetries to secure a budget greater than the socially optimal one. This allows them to increase output above Qs to Qb, creating a deadweight loss and reducing welfare due to over-supply, thus leading to an inefficient outcome.
[Figure: monopoly supply of a public service, showing Qm/Pm for a private monopolist, Qs/Ps at the social optimum and the bureau's over-supplied output Qb > Qs]
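A compressed statement of the argument in the diagram, using standard textbook notation where B(Q) is the sponsor's total benefit from output Q and C(Q) the bureau's total cost:

the bureau maximises its budget B(Q) subject to B(Q) >= C(Q)

Because the sponsor cannot verify the bureau's costs, the constraint binds at the chosen output, B(Qb) = C(Qb), whereas the socially optimal output satisfies B'(Qs) = C'(Qs) (demand meets long-run marginal cost). Beyond Qs marginal cost exceeds marginal benefit, so Qb > Qs and the units between Qs and Qb are the source of the deadweight loss.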
With regards to the decentralised provision of services, the invisible hand theorem asserts that the quantity and price of outputs that clear markets form an efficient bundle under the assumptions of a perfectly competitive market. Here, private firms have a strong profit-maximising motive which drives them to be more productive and efficient in their use of factors of production. Secondly, many economists expect the private supply of services to be more efficient than government provision, arguing that the practical difficulties of transferring and assigning ownership rights under public provision, compared with investor shareholders in private firms, reduce the incentive for public managers to produce in a way that maximises citizens' wealth (Miranda, R. et al., 1995).
 
Some economists argue that centralised provision can be more efficient. As explained earlier, a perfectly competitive, complete market with perfect information and no market failures is efficient. However, these assumptions are often not satisfied. Perfect competition is violated where there are few firms and/or individuals in a market, as is common in oligopolies. Additionally, market failures often occur, especially for pure public goods that are non-rival in consumption, non-excludable and non-rejectable, making efficient production in a market unlikely (Barr, 2004). Furthermore, perfect information is often violated. Overall, the assumptions required for an efficient allocation under decentralised provision are often violated, warranting consideration of public provision as a serious alternative. Additionally, the neoclassical theory underpinning decentralised provision is argued to be incomplete, as it ignores the potential for government intervention to improve an economy's dynamic efficiency and technological capability (Bailey, 1951). Overall, there are certain goods and environments for which centralised provision offers advantages over decentralised provision.
 
Germany provides healthcare through quasi-market-based social health insurance. This can be compared with the public provision of healthcare in the UK to assess whether the differences reflect a disparity in efficiency. Germany's healthcare is provided by competing, not-for-profit, non-governmental health insurance funds (Thomson, et al., 2013, p. 57). This aims to introduce competition on the supply side, delivering more cost-effective, responsive healthcare with greater consumer choice. It contrasts with the NHS in the UK, which is almost entirely publicly funded. Both the UK and Germany have the same life expectancy at birth, indicating no significant difference in health status. The UK's health expenditure per capita is significantly lower (by 21.1%) (The World Bank, 2018), whilst its health expenditure as a percentage of GDP is also lower (by 1.43 percentage points) than Germany's (The World Bank, 2018). Whilst more data are required (beyond the scope of this essay) to judge differences in quality of healthcare more accurately, the UK's public provision of healthcare appears clearly more cost-effective than Germany's. Overall, policies aiming to improve efficiency in healthcare through the introduction of quasi-markets do not have significant supporting empirical evidence. Although quasi-markets can be argued to provide greater choice and responsiveness, as seen in the German system's response to the pandemic, in this area of social policy public provision can be more efficient in certain cases.
 
In conclusion, the predicted efficiency of the centralised public provision of goods or services depends upon which model you subscribe to. It is difficult to judge which model is most representative because empirically testing efficiency in this setting is problematic. As seen in health reforms, the relative efficiency of decentralised and public provision depends on a range of factors. Therefore, I partially disagree with the statement that public provision is inevitably inefficient. Public provision is unlikely to be fully efficient; however, in some cases it can be more efficient than free markets.

Is it time for the UK to move to a social insurance based healthcare system?

2/3/2021


 
By Ayan Banerjee
Before the NHS was formed in 1948, choices in health care were clearly recognised by the public. People had to judge on which occasions they could afford to seek medical care as well as the situations in which they could not afford to do without it. However, with the introduction of the NHS, the need for choices did not disappear but became 'obscured' by several factors (Mooney, et al., 1986). The majority of health care services had now become 'zero-priced' at the point of consumption. By fully nationalising health care, the government at the time aimed to gradually erode the stock of ill health, increasing the health of the population and reducing demand for health care, thereby gradually diminishing the proportion of gross national product used for funding the NHS (Mooney, et al., 1986). However, this initial expectation was gradually eroded by its unfortunate legacy: spending as a percentage of GDP increased from 3.5% to roughly 7% between 1949 and 2019 (The Health Foundation, 2019). Furthermore, in the past decade waiting times for key services such as A&E and cancer treatments have increased steadily (Thorlby, et al., 2019). Consequently, economists and policymakers are considering a shift in the way that health care is supplied towards a 'social insurance system' similar to those in operation in Germany and France. This article will investigate and evaluate this alternative means of supply by comparing its key characteristics with the NHS. We will then compare the overall systems on efficiency, equity and choice grounds. Finally, we will conclude whether it is in fact time for the UK to move to a 'social insurance system'.
 
Characteristics of a ‘social insurance system’:
This article will consider a ‘social insurance system’ to consist of Social Health Insurance (SHI) characterised by the following four points:
  1. Universal coverage
  2. Greater co-payment
  3. More choice in relation to medical practitioners
  4. Greater variation and choice in relation to facilities
Universal coverage mandates that all individuals belong to an approved plan whilst greater co-payment involves paying fixed amounts for a covered service each time that medical service is accessed. Point three allows for consumers of health care to have more choice with regards to the medical practitioners that provide the services they use. The last point specifies a greater variation of plans that individuals can choose from, providing they are approved by government. 
 
Germany’s Health Care System
SHI has its historical roots in Europe, dating to pioneering legislation for a system of compulsory national health insurance in Germany in 1883 (Folland, et al., 2017, p. 541). At present, Germany serves as the most appropriate example of an SHI system to which the UK might transition. Its government stipulates universal coverage over a range of different plans, with worker and union groups forming the backbone of the supply of such plans (Phelps, 2003, p. 561). Most commonly used services are included in coverage provided by competing, not-for-profit, non-governmental health insurance funds called 'sickness funds' (Thomson, et al., 2013, p. 57).

Fees are negotiated with providers, which acts as the major cost control. Numerous government subsidies supply the governmental parts of these programmes, which are financed through income and wage taxes. All employed citizens (and other groups such as pensioners) earning less than €4,350 per month are mandatorily covered by government insurance (Thomson, et al., 2013, p. 57). Additionally, obligatory employer payments as well as individual contributions form an important part of the financing scheme, with total contributions averaging almost 13% of a worker's salary (Phelps, 2003, p. 561). Unemployed individuals have their premiums paid by the federal unemployment insurance fund, while the self-employed must pay the entire contribution themselves. Retirees' pension plans pay mandatory insurance premiums equalling the national average payroll contribution. Roughly 11% of the population opts for private health insurance that provides supplementary coverage priced through risk-related charges (Thomson, et al., 2013).

Germany’s Health Care system will be compared with the NHS throughout this article in the hope of revealing where their differences may be advantageous or disadvantageous. 
 
Universal Coverage
The economic logic behind universal coverage centres on the idea that each citizen derives utility from other citizens' ability to consume medical care, making the collective demand for the good exceed total private demand and marking it out as a merit good (Phelps, 2003, p. 550). Furthermore, some economists argue that those without insurance act as free riders on a health care system that would provide care to anyone who 'shows up at the door', especially at A&E. By stipulating universal coverage, every citizen is insured automatically, which eliminates these free riders.

Conversely, assuming the UK transitioned to a Germany-style SHI system, there would still exist free riders comprising the unemployed, who do not pay the taxes that fund their health care. However, since this issue exists in both systems, it is not especially pertinent in considering this specific transition.

The NHS is characterised by offering health care to all individuals in the UK, so a move to a universal-coverage system does not compromise the benefits detailed above. However, compulsory insurance offers some new benefits over the NHS. If all citizens pay an average income-proportional premium, there could be no efficiency loss (Barr, 2004, p. 285). Where insurance is compulsory, it may be possible to pool high and low risks and charge everyone the average premium, since low-risk individuals cannot opt out of insurance. Consequently, low-risk groups pay an actuarial premium plus an unavoidable lump-sum tax, and high-risk groups pay this actuarial premium and receive a lump-sum transfer. Therefore, a system of universal-coverage insurance is not susceptible to the problems associated with adverse selection. Conversely, it may cause inefficiencies where standard policies do not allow for differences in preferences; however, a system like Germany's offers around 105 'sickness funds', providing a range of choice (Krankenkassen Deutschland, 2021).
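A minimal sketch of the pooling argument (notation illustrative): suppose a fraction h of members are high risk with actuarially fair premium p_H, and the rest are low risk with fair premium p_L < p_H. Compulsory membership lets the insurer charge everyone the average:

p_bar = h*p_H + (1 - h)*p_L

A low-risk member then pays p_L plus an implicit lump-sum tax of (p_bar - p_L), while a high-risk member pays p_H minus an implicit transfer of (p_H - p_bar). Because no one can opt out, low risks cannot leave and unravel the pool, which is what removes the adverse-selection problem.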

Health care costs are paid in a 50/50 share between employees and employers in Germany. This disproportionately affects the self-employed who are required to pay the full costs. Therefore, the additional costs act as a disincentive to self-employment, an often-necessary stage of setting up new businesses or entrepreneurial ventures. Consequently, a universal coverage insurance system may act as a frictional force against enterprise and new venture creation.
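To put rough numbers on this (the salary is purely illustrative; the roughly 13% average rate and the 50/50 split come from the sources cited above): on a monthly salary of €3,000, total contributions of about 13% come to around €390, of which an employee pays roughly €195 and the employer the other €195, whereas a self-employed person on the same income owes the full €390. That doubling of the personal contribution is the disincentive to self-employment described above.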

Overall, a system of universal coverage is likely to cause few direct inefficiencies with regard to insurance markets; however, this does not mean that the entire system of SHI is more efficient than the NHS (explored further on in the article) (Barr, 2004, p. 115). Additionally, a system similar to Germany's may affect enterprise.
 
Co-payment & its ramifications
Co-payment was introduced in Germany in the 1990s to prevent the over-utilisation of health care services. With increasing waiting times and high levels of pressure on the NHS, some economists argue that introducing co-payment would reduce overall waiting times and lower total health care costs. It is hypothesised that this occurs through reduced use of consultations with general practitioners, specialists and ambulatory care (Kiil & Houlberg, 2014). In Germany, co-payments are limited to 2% of a family's gross annual income and 1% for chronically ill patients. Since their introduction, the average length of a hospital stay has decreased from 14 days to 9 (Hess, 2005), thereby reducing the cost per patient.
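To make the cap concrete (the income figure is illustrative, not from the cited sources): a family with a gross annual income of €40,000 would face co-payments capped at 2% of that income, i.e. €800 a year, falling to 1%, or €400, where a member is chronically ill.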

Whilst co-payments do exist in the UK in the form of prescription charges, a move to a system with greater co-payment may help alleviate some costs, especially those arising from time spent in hospital. However, studies suggest co-payments have no significant effect on the prevalence of hospitalisations, implying that they mainly shift costs from insurers to consumers of health care (Kiil & Houlberg, 2014, p. 825). Furthermore, empirical evidence indicates that vulnerable and low-income groups reduce their use relatively more than other population groups (Louckx, 2002). Therefore, introducing co-payments would require exempting certain demographics; however, this is likely to lead to arbitrary dividing lines (Kiil & Houlberg, 2014, p. 825). It would also incur additional administrative costs, especially if efforts are made to preserve equity.

Additionally, exempting groups from co-payments would create added heterogeneity in contributions to public healthcare services. This may reduce support for a public system in the long run, as net contributors may see it as unfair. Furthermore, it is predicted that the availability of supplementary private health insurance (as exists in Germany) counters any desirable demand effects by lowering the price paid by the consumer (Kiil & Houlberg, 2014, p. 825).

Overall, introducing co-payments forces some key economic and political trade-offs meaning policymakers must carefully consider what their goals are before its implementation. 
 
Increasing choice in relation to medical practitioners and facilities
In Germany, unlike in the UK, individuals have free choice of GPs, hospitals (if referred) and specialists. Registration with a primary care physician is not required, meaning GPs have no formal 'gatekeeping' function (Thomson, et al., 2013, p. 59). In the UK, legislative changes in 1991 allowed larger GP practices to become fundholders, permitting them to buy certain types of care for their patients (Barr, 2004, p. 287). This meant GP surgeries (purchasers) became separated from their providers (hospitals), leading to consumers having choices made on their behalf by an agent (the GP or District Health Authority). It also created a degree of geographical heterogeneity in the quality of health care available. Therefore, an SHI system would allow greater levels of choice directly for the consumer.

This can be beneficial, as it greatly mitigates the risk of an allocated GP failing to diagnose a condition. Without the gatekeeping function that GPs serve in the NHS, there would likely be fewer missed diagnoses resulting from non-referrals. Additionally, this would increase competition between hospitals and GPs, thereby improving internal efficiency (Barr, 2004, p. 287). However, there would also likely be a significant increase in costs for the NHS, as more hospital admissions and referrals are a major cost driver. Furthermore, the supply of specialist doctors is considerably inelastic because of the years of training required and the difficulty of training specialists. Therefore, a shift to a system that places less importance on GPs would take decades to enact, making it mostly impractical in the short run.
  
In addition to choice of medical practitioners, increased choice of facilities affects the quality of healthcare. As with practitioners, greater choice between competing facilities could increase efficiency on the supply side. Legislation could be introduced allowing all hospitals in the UK to become self-governing trusts, essentially creating a quasi-market within the NHS. These markets would still rely on public funding but would decentralise demand and supply (Barr, 2004, p. 287). This introduces competition on the supply side whilst still avoiding profit-maximising firms. Consumers can be seen as holding a form of voucher instead of spending cash and therefore have greater choice of facilities and practitioners. It is important to note that if medical providers do not face the costs of their decisions, there is no incentive for productive efficiency.

Conversely, downward cost pressures may negatively affect the quality of healthcare, which imperfectly informed consumers would be unable to recognise. Therefore, for quasi-markets to be beneficial in the supply of health care, quality must be carefully monitored by government which can be difficult with regards to health outcomes (Barr, 2004). There would also be incentives for independent trusts to ‘weed out’ costly patients essentially cream skimming (Barr, 2004).

The argument that public providers are inefficient stems from their not being profit-driven; however, many suppliers in quasi-markets would not be profit-maximisers either, casting some doubt on the validity of the theory.
For more variation in facilities there would likely have to be privatisation of certain health care suppliers, similar to Germany. However, this may create upward cost pressures. The costs of setting up the infrastructure for efficient markets, together with additional spending on advertising and increasing providers' market shares, would increase the overall costs of provision (Le Grand, 1991). Furthermore, a switch away from a monopolistic provider may bring increased labour and other input costs due to the reduced effect of economies of scale. On the other hand, it can be argued that advertising creates more informed customers and thus more efficient decisions, although this would be difficult to assess (Le Grand, 1991).

Germany does have quasi-markets for the provision of health care and different public and private insurance providers compete for customers. Therefore, Germany still serves as a relevant comparison tool. 
Overall, greater choice of medical practitioners and facilities would have a significant impact on efficiency. Many argue that creating more competition through the introduction of quasi-markets will drive down costs and ultimately provide better overall health care. However, the likely ramifications for quality, and possibly efficiency, make a shift to quasi-markets exceedingly difficult to judge as beneficial.
 
Efficiency and Equity comparison
Germany's SHI has the key characteristics of the 'social insurance system' considered in this article; it therefore serves as the most appropriate comparison for relating SHI to the NHS. Metrics that can be used to evaluate the relative efficiency and equity of the two systems are:
  1. Health expenditure as a share of GDP
  2. Health expenditure per capita 
  3. Life expectancy
[Figure: comparison of the UK and Germany on the three metrics above (The World Bank, 2018)]
The first two points are indicators of efficiency in the use of resources, whilst Point 3 assesses health status. Health coverage is not a relevant parameter, as both SHI and the NHS provide universal coverage.
Both countries have the same life expectancy at birth of 81 years (The World Bank, 2018); therefore, there is not a significant difference in health status. The UK has significantly lower health expenditure per capita, 21.1% lower than Germany's (The World Bank, 2018). Additionally, the UK's health expenditure as a percentage of GDP is 1.43 percentage points lower than Germany's (The World Bank, 2018). Although far more data would be required to make accurate judgements about differences in the quality of healthcare (which is beyond the scope of this article), having the same life expectancy suggests there is not a large disparity. More notably, the UK's health care appears to be significantly more cost-effective than Germany's.
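To be precise about what the per-capita figure means (pure arithmetic on the number quoted above): if Germany spends E per head, '21.1% lower' implies the UK spends roughly (1 - 0.211)*E = 0.789*E, so Germany spends about 1/0.789 ≈ 1.27 times as much per person as the UK for the same life expectancy at birth.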
Overall, a brief look into the empirical data illustrates no significant disparity in the quality of healthcare between Germany and the UK. Additionally, the UK’s health service is more cost effective.
 
Conclusion
A move to a 'social insurance system' would mark a vast change in the way that health care is supplied in the UK. Many argue that the NHS is a compromise between the legacies of history and current medical and social values (Mooney, et al., p. 123). However, would a shift to an SHI system be an improvement over the NHS? Undeniably, SHI brings greater levels of individual choice; however, in a health care market characterised by widespread asymmetric information between providers and consumers, this would likely bring more problems than benefits. Universal coverage largely maintains equity in healthcare and is unlikely to cause direct inefficiencies. Greater co-payment would likely have more negative effects than beneficial ones, mainly because of additional costs and the disproportionate effect on low-income and vulnerable groups. Increasing choice of practitioners and facilities may improve efficiency through the increased competition of quasi-markets; however, the expected ramifications for quality, and potentially for cost-effectiveness, make successful implementation uncertain. Empirical evidence suggests that the NHS is more cost-effective and efficient than the German SHI system, with a broadly similar quality of health care. Therefore, I believe that it is not 'time' for the UK to move to a 'social insurance system', as its suggested improvements are obscured by the uncertainty of its positive effects and its likely ramifications. No system of health care is perfect; however, the NHS, with the right reforms, has the potential to be the least inefficient and inequitable form of organisation.
Bibliography
​
Barr, N., 2004. Economics of the Welfare State. 4th Edition ed. Oxford: Oxford University Press.
Besley, T., Hall, J. & Preston, I., 1998. Private and Public Health Insurance in the UK. European Economic Review, Issue 42, pp. 491-497.
Buchanan, J., 1965. The Inconsistencies of the NHS, London: Institute of Economic Affairs.
Dolan, P., Gudex, C., Kind, P. & Williams, A., 1996. Valuing Health States: A Comparison of Methods. Journal of Health Economics, Issue 15, pp. 209-231.
Folland, S., Goodman, A. C. & Stano, M., 2017. The Economics of Health and Health Care. 8th Edition ed. Abingdon: Routledge.
Fuchs, V., 1998. Who Shall Live?. Expanded Edition ed. London: World Scientific Publishing.
Hess, W., 2005. Hospitals walking a tightrope between reform pressure and financial straits, Dresden: Allianz Group.
Jones, A. M., 2006. The Elgar Companion to Health Economics. 1st Edition ed. London: Edward Elgar Publishing Limited.
Kiil, A. & Houlberg, K., 2014. How does copayment for health care services affect demand, health and redistribution? A systematic review of the empirical evidence from 1990 to 2011. The European Journal of Health Economics, Issue 15, pp. 813-828.
Krankenkassen Deutschland, 2021. List: Statutory health insurance companies. [Online] 
Available at: https://www.krankenkassen.de/gesetzliche-krankenkassen/krankenkassen-liste/
[Accessed February 2021].
Le Grand, J., 1991. Quasi-Markets and Social Policy. The Economic Journal, 101(408), pp. 1256-1267.
Louckx, F., 2002. Patient cost sharing and access to health care, London: Routledge.
Mooney, G. H., 2003. Economics, Medicine and Health Care. 3rd Edition ed. London: Pearson Education Limited.
Mooney, G. H., Russel, E. M. & Weir, R. D., 1986. Choice for Health Care: A Practical Introduction to the Economics of Health Provision. 2nd Edition ed. London: Macmillan.
Phelps, C. E., 2003. Health Economics. 3rd Edition ed. United States: Addison Wesley.
Rask, K. & Rask, K., 2000. Public Insurance Substituting for Private Insurance: New Evidence Regarding Public Hospitals, Uncompensated Care Funds and Medicaid. Journal of Health Economics, Issue 19, pp. 1-31.
The Health Foundation, 2019. Health Spending as a share of GDP. [Online] 
Available at: https://www.health.org.uk/news-and-comment/charts-and-infographics/health-spending-as-a-share-of-gdp-remains-at-lowest-level-in?gclid=Cj0KCQiA0-6ABhDMARIsAFVdQv9XUwFh9g5nSkiqpINIXYwr0yGBHhKENy244_iZHH8IOLLT8kgKjMoaAlNpEALw_wcB
[Accessed February 2020].
The World Bank, 2018. Current health expenditure (% of GDP). [Online] 
Available at: https://data.worldbank.org/indicator/SH.XPD.CHEX.GD.ZS
[Accessed February 2020].
The World Bank, 2018. Current health expenditure per capita (current US$). [Online] 
Available at: https://data.worldbank.org/indicator/SH.XPD.CHEX.PC.CD
[Accessed February 2020].
The World Bank, 2018. Life expectancy at birth, total (years). [Online] 
Available at: https://data.worldbank.org/indicator/SP.DYN.LE00.IN
[Accessed February 2020].
Thomson, S., Osborn, R., Squires, D. & Jun, M., 2013. International Profiles of Health Care Systems, 2013, New York: The Commonwealth Fund.
Thorlby, R., Gardner, T. & Turton, C., 2019. NHS performance and waiting times: Priorities for the next government, London: The Health Foundation.
Van Doorslaer, E., 2000. Equity in the Delivery of Health Care in Europe and the US. Journal of Health Economics, Issue 19, pp. 553-583.

Latin America: a brief look into how its colonial past is affecting its socio-economic present

2/1/2021


 
By Ayan Banerjee
During the 16th to 18th centuries, European nations arrived and set up colonies across Latin America. Since the 1500s, the continent has undergone a relative economic decline. It is evident that a 'reversal of fortune' has occurred, in which some of the richest regions of the 1500s are among the poorest today. This article will analyse and evaluate some of the factors responsible for this decline.

Spanish and Portuguese settlers landed in South America in the early 16th century, setting up colonies covering most of the continent. Spain 'followed a policy of conquest imperialism, exterminated the Aztec and Inca elites and their priesthood, and seized their property' (Maddison A., 2007). Furthermore, the arrival of new diseases (smallpox, measles, diphtheria, typhus, influenza) had, by the middle of the 16th century, wiped out 70-90% of the indigenous population; these lands were subsequently repopulated with Europeans and slaves from Africa. Even after the colonial powers left and independence was won in the early 19th century, the colonial past has had a far-reaching impact up to the present day. Many economists argue that this long-lasting influence is the cause of the relative economic decline; some go further, stating that it is, in particular, the institutions set up by settlers that are responsible. Institutions are defined as purposely designed constraints that configure social, political and economic exchanges, essentially shaping the incentives of economic agents. Path dependency is the idea that the choices available to people depend on decisions made in the past; consider, for example, the continued use of the QWERTY keyboard layout on computers and phones long after its original use on the typewriter. The path-dependency theory hypothesises that the institutions set up during the colonial past influence present-day performance.
M --> S --> EI --> CI --> CP
Illustrated above is the path that is followed. M represents potential settler mortality, which impacts settlements (S). S then shapes the early institutions (EI), which in turn affect the current institutions (CI). Finally, the current institutions directly impact a country's current performance (CP). Institutions set up during the colonial past were often extractive, used to capture wealth and send significant amounts of it back to the colonial ruling country, and were often characterised by greatly increasing inequality within settled countries. Many current institutions therefore have similar characteristics. The economist Douglass North argues that institutions greatly govern the economic development and overall health of an economy. It can therefore be argued that the institutions set up in South America's colonial past have created current institutions that are still extractive and act as constraints on growth.
[Figure: South America in the age of the Bourbons, the ruling house of Spain from the 1700s]
One way in which institutions constrain growth in South America is through the 'persistence of inequality' (Sokoloff & Engerman, 2000). South American countries, along with other former colonies such as Jamaica and Haiti, are considered some of the most unequal in the world. In their colonial past these countries were run by a small group of elites amid high levels of national inequality. Those in power were able to set up a 'legal framework' that gave the ruling minority great influence over laws and government policies, allowing them to extract more power. This continued after the settlers left, with one elite group simply exchanged for another. The new elites were able to institutionalise inequality in their countries to secure long-term power and wealth. One method of obtaining long-term power was control of land. After the settlers left, the minorities left in charge came to own and control most of the land in their respective countries. They could therefore control its distribution, the tax system applied to it and the threshold of permitted land ownership. Furthermore, by controlling large expanses of land, they owned some of their countries' largest revenue streams, including agriculture and access to natural resources such as oil, natural gas, minerals and metal ores.

This control was continually secured behind a facade of democracy. Although the majority of South American countries were democracies by the mid-19th century, they were still controlled by an elected elite, largely descendants of the previous elite. They were able to leverage their power mainly through restrictions on voting. Not only was there a lack of secrecy in balloting, but there was also often a wealth and literacy requirement, with the result that in most countries less than 10% of people were allowed to vote between 1840 and 1940 (Sokoloff & Engerman, 2000). Although the suffrage movement and other voting-rights movements spread from North America and the West during the late 19th century, most reforms were not enacted until the mid-to-late 20th century, prolonging inequality.

Overall, the uneven spread of power depended on keeping the voting population small and homogeneous. By reserving voting rights for a small minority, a small group of elites was able to secure power whilst acting as rent-seekers, extracting resources in a kleptocratic fashion that stifled growth and fostered inequality for several centuries.

Continued restrictions on education also contributed to the relative economic decline. Investing in education can be a powerful tool for encouraging economic growth by improving the productivity and quality of workers and, in turn, human capital. This is evident in most developed countries, which have widespread and effective education systems. By contrast, most South American countries failed to set up a primary education system until the early 19th century because of their lack of prosperity. Governments chose not to invest in the mass education of their populations, unlike their ex-colony siblings in North America, which realised the value of allocating resources to educating the young. Furthermore, South American countries remained slow to introduce 'schooling institutions'. Overall, the lack of widespread education meant the quality of human capital remained largely stagnant, leading to a loss of potential economic growth.

Some economists argue that the economic decline continued through the 20th century to the present day because of South America's failure to industrialise rapidly. Many countries globally underwent rapid industrialisation and urbanisation around the Second World War; however, those in South America failed to do so. The lack of industrialisation meant that much of South America was still reliant on commodity exports, whose prices would decline significantly. Combined with rapidly rising import prices, this led to large, prolonged current account deficits and hence extensive debt. Governments then operated protectionist policies in an attempt to stimulate domestic import-substituting manufacturing; however, because these were poorly implemented, the result was mass unemployment and further inequality. Overall, poor governance in the 20th century failed to make the transition from a risky commodity-exporting economy to an industrialised one. This further increased internal inequality and hence continued the economic decline into the present day.
​
South American countries experienced high living standards before settlers arrived in the 1500s. However, the settlers' arrival, and the legacy they left after departure, shaped the subsequent economic decline to the present day. The role of institutions in this is highly significant. An imbalanced distribution of political power and wealth, combined with a failure to improve general human capital, fostered inequality. This institutionalised inequality stifled growth because it failed to realise the 'economic potential' of large, marginalised groups. Furthermore, poor governance in the 20th century, through protectionist policies and the failure to move away from a commodity-exporting economy, continued this economic decline. As long as an elite minority remains in power in South American countries, mass inequality and poor governance will persist, restricting growth and furthering the continent's economic decline.

Water & air pollution represent two of the main challenges facing society: what can economics do about them?

20/12/2020


 
By Ayan Banerjee
Pollution refers to activities that reduce the ambient quality of a particular environment (Ison, et al., 2002, p. 133). With growing levels of global pollution and its consequences becoming increasingly evident, its management and abatement have developed into one of society's major challenges. Air and water pollution are an especially pressing area of concern, since society's high dependency on these fundamental human needs makes their contamination especially impactful. Pollution abatement consists of measures to reduce, eliminate or control pollution in a given environment (Moosa & Ramiah, 2016, p. 130). The application of economic instruments can provide various approaches to this problem. This essay will describe these options and evaluate their effectiveness.
 
Air pollution can most often be grouped into two clusters: mobile and stationary sources. Although they emit many of the same pollutants, they require different economic instruments to be abated effectively.
 
Firstly, the mobility of the source of air pollution has two main impacts on abatement policy. Pollution can be caused by the temporary location of a source, such as rush hour in urban areas, and since they are mobile, sources cannot be relocated in the way an electric power plant could be (Tietenberg & Lewis, 2014, p. 480). Additionally, it is challenging to 'tailor' emissions rates to a confined pollutant pattern, as any specific source can end up in numerous locations over the course of its life. One economic approach concerns implicit subsidies. Often the private costs associated with a pollution-creating activity do not reflect the social cost. For example, since only a minority of road construction costs are funded by fuel taxes, there is a discrepancy between the marginal private cost of an additional mile driven and its social cost (Tietenberg & Lewis, 2014, p. 482). These non-internalised social costs act as an implicit subsidy to the activity. The government can withdraw the subsidy, for example by suppressing supply or raising charges, so that prices reflect the higher 'true' cost of these goods or services. Overall, this approach, which is especially relevant for mobile pollutant sources, would reduce the demand for pollution-creating activities and thus abate some negative environmental impacts.
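The wedge being described can be written in one line (standard notation, not taken from the cited texts):

MSC = MPC + MEC

where MPC is the driver's marginal private cost of an extra mile, MEC the marginal external cost (congestion, emissions, road wear) and MSC the marginal social cost. Whenever fuel taxes and charges leave the price of a mile below MSC, the gap operates as an implicit subsidy to driving.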
 
Stationary-source air pollution is susceptible to abatement through these measures as well as some additional approaches, the most common being the Pigouvian tax. Such a tax is levied on the producer of an externality at a rate equal to the net marginal external cost imposed (Ison, et al., 2002, p. 105). This vertically shifts the producer's supply curve, reducing the equilibrium quantity of the good or service supplied.
[Figure: Pigouvian tax set where the marginal benefit of the good or service equals the marginal damage to society]
As illustrated above, the tax is set at the level at which the marginal benefit of the good or service equals the marginal damage to society. This is the most common method of internalising an externality. It is also argued to be fairer, as it follows the 'polluter-pays' principle embraced by the OECD since 1972 (Ison, et al., 2002, p. 83). Furthermore, Pigouvian taxes are more likely to yield a double dividend, where the economic costs associated with the tax can be outweighed by government spending of the revenue generated. This system can readily be applied to water pollution if the producer can be identified. The major drawback of this abatement approach is that the producer has to be identifiable and also taxable; the sources of air and water pollution can be difficult to identify, as pollutants have the potential to diffuse over great distances. Overall, Pigouvian taxes are a versatile approach to abating both air and water pollution; however, their drawbacks stem from identifying the sources of pollution.
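A minimal statement of the optimum shown above (standard notation): at the efficient output Q*, marginal benefit equals marginal private cost plus marginal damage,

MB(Q*) = MPC(Q*) + MD(Q*)

and the Pigouvian tax is set at t = MD(Q*), so that the polluter's privately optimal condition MB = MPC + t coincides with the social optimum.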
 
Water pollution is in many ways dealt with similarly to air pollution; however, the economic instruments used are more specific to these pollution sources. Water pollution can stem from either point or non-point sources. Whilst point sources can be treated very similarly to stationary-source air pollution, with instruments such as Pigouvian taxes and implicit subsidies, non-point sources are harder to identify, so new policy approaches must be employed. 'Watershed-based trading' was most notably applied in 1996, when the EPA issued the 'Draft Framework for Watershed-Based Trading' (Tietenberg & Lewis, 2014, p. 530). It operates through point-source polluters meeting water-quality criteria by buying reductions from other point or non-point sources that have lower marginal costs of abatement, usually trading abatement of particular pollutants, mainly phosphorus or nitrogen. This also allows firms to exploit economies of scale in pollution-abatement technology, which reduces the overall cost to markets and creates faster, cheaper clean-up. Conversely, this approach is very complicated and involves accounting for the distribution of pollutants to derive accurate trading ratios, ensuring that pollution reductions after trades deliver the required abatement. Another policy approach is a consent system (Ison, et al., 2002, p. 139). This is very simple: pollution emitted cannot exceed a certain determined level. Many economists criticise this, believing a tax system can achieve a similar level of abatement at a lower total cost. Furthermore, it eliminates the chance of a double dividend, as there is no tax revenue, only expenditure to enforce the maximum level of pollution.
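A simple illustration of how a trading ratio works (the numbers are hypothetical, not from the EPA framework): if the ratio between a non-point seller and a point-source buyer is set at 2:1 to allow for uncertainty about how much of the non-point reduction actually reaches the watershed, a point source seeking credit for 10 tonnes of phosphorus abatement must buy 2*10 = 20 tonnes of abatement from the non-point source.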
 
In conclusion, subsidies, taxes, trading and consent systems are all appropriate policy options based on economic theory. They vary in effectiveness depending on the assumptions and constraints of each method. Many rely on being able to identify the source of pollution and are therefore often liable to work inefficiently. Overall, however, economics can provide several useful policy approaches beyond command-and-control and institutional instruments.
 
 
Bibliography
​
Hanley, N., Shogren, J. & White, B., 2013. Introduction to Environmental Economics. 2nd Edition ed. UK: Oxford University Press.
Ison, S., Peake, S. & Wall, S., 2002. Environmental Issues and Policies. 1st Edition ed. England: Pearson Education Ltd.
Moosa, I. & Ramiah, V., 2016. The costs and benefits of environmental regulation. UK: Edward Elgar Publishing Ltd.
Park, C. C., 1986. Environmental Policies: An international review. 1st Edition ed. Australia: Croom Helm Ltd.
Tietenberg, T. & Lewis, L., 2014. Environmental & Natural Resource Economics. 9th Edition ed. England: Pearson Education Ltd.
Wills, I., 1997. Economics and the Environment. 1st Edition ed. Australia: Allen & Unwin.

GDP, it sucks; what else can we use and why?

24/9/2020


 
By Ayan Banerjee
Ever since GDP was adopted as the main metric of wealth at the 1944 Bretton Woods conference, it has been used as an indicator of growth. However, many economists argue that it fails to capture a significant range of important factors that contribute to a country's wealth. Wealth has a wide range of definitions depending on the discipline but is consistently agreed to include anything of value. The frequent publication of GDP figures hugely influences the health of an economy by informing international trade, investment, political and financial decisions. It is therefore vital to have an accurate measuring system; otherwise there could be far-reaching economic consequences. Here we'll look at the flaws of GDP and why it has been widely criticised, examine its alternatives and their effectiveness, and finally determine which of these is the best substitute for GDP.
 
GDP, or Gross Domestic Product, is the most widely used indicator of wealth in the world. To ensure that its calculation is carried out consistently across countries and allows comparison, the System of National Accounts (SNA) is used to compile the statistics. Although it aims to allow comparison by formalising markets and extending accounting, it fails to capture numerous aspects of a country's economy. The SNA often produces miscalculations arising from, for example, the exclusion of non-market transactions; the failure to account for inequality, the sustainability of growth, and environmental and health externalities; and the treatment of the replacement of depreciated capital as the introduction of new capital. One example is that mass deforestation contributes to GDP growth through the sale of timber, while the environmental and social effects of losing forests go unaccounted for. These many flaws, and the failure to capture several aspects of wealth, motivate the search for more suitable alternatives.
 
One alternative is the Fordham Index of Social Health (FISH). Formed in 1987 by the Fordham Institute for Innovation in Social Policy as an indicator of 'social well-being', its premise is that the index reflects the combined effect of numerous social factors and issues. These issues are spread across four stages of life: childhood, youth, adulthood and old age. Sixteen indicators are identified, including infant mortality, housing and income inequality. These can capture the overall effect of some areas improving and others worsening over time; for example, infant mortality has improved whereas child poverty has worsened in the US since 1970. By considering a wider range of social factors, FISH can capture a better picture of changes in wealth. The FISH index has decreased in the US whilst GDP has grown, illustrating the potential discrepancies.
 
Another substitute is the Genuine Progress Indicator (GPI), outlined in 1994. Designed to include aspects missed by GDP in order to better represent the well-being of a nation, it incorporates aspects of the environment and social factors such as the poverty rate. Considered a significant improvement over GDP by many environmental economists, it places more emphasis on the functions of communities and households; notably, the replacement of these functions would not be counted as growth, as it would be under GDP. Like FISH, GPI uses numerous socio-economic factors such as the crime rate, but it also considers environmental factors such as pollution and more abstract, hard-to-quantify socio-economic factors such as 'family breakdown'. The total of 18 indicators thus provides a fuller picture by adding environmental factors and using more complex socio-economic measures. However, there are still some drawbacks: GPI fails to measure a few other key influential factors such as human capital, diversity and lifestyle-related diseases. Overall, GPI improves on FISH by increasing the range of indicators and considering more pertinent environmental factors, but it still suffers from excluding several important metrics.
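The general structure of a GPI-type index can be sketched as follows (the exact components and weights vary between implementations, so this is indicative only):

GPI ≈ personal consumption adjusted for income inequality
      + value of non-market work (household labour, volunteering)
      - defensive and social costs (crime, family breakdown, commuting)
      - environmental costs (pollution, resource depletion)

The adjustment terms are the point: activities that GDP records as neutral or positive can enter the GPI with a negative sign.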
 
Another indicator formulated to replace GDP is the Gross Sustainable Development Product (GSDP). Defined as the total value of production within a region over time, it uses GDP as a foundation but builds on it by measuring the costs of development and growth to society, quantified through analysis of the prices of goods and services in markets within a country. It also considers the impact on human activity of the environment, resource availability and development; biodiversity and environmental effects; and the impact on future generations. The main aim of this metric is to represent concerns around sustainability. Ever since the beginning of the current modern growth epoch in the 1870s, there has been constant evaluation of how sustainable growth can be; GSDP aids this whilst also capturing some environmental and social factors. However, it fails to consider as wide a range of social and environmental changes as GPI and is therefore unlikely to fully represent a nation's wealth.
 
The final alternative this article will explore is the Gross Environmental Sustainable Development Index (GESDI). Used to measure the quality of growth and development rather than its quantity, it involves over 200 indicators spread across four areas. The first is people and their social, economic, psychological, physical and spiritual aspects; the others are available resources, the environment and economic development. This metric builds upon GSDP by considering a far wider range of more holistic measures, in turn capturing far more non-monetary factors.
 
An effective indicator of wealth needs to capture an extremely wide range of socio-economic, environmental and intangible factors. More recently, there has been a shift in economics towards treating people's well-being, rather than their material possessions, as constituting wealth. The alternatives outlined above illustrate this shift by building upon GDP and capturing more of the non-monetary factors that make up much of people's wealth. The most effective of these are GESDI, GPI and FISH, as GSDP still suffers from too many of the limitations that afflict GDP. Of these three remaining indexes, I believe GESDI is the most effective alternative. FISH does not capture enough aspects of the environment, as their importance was not as well recognised when it was first formulated in the 1980s, and GPI lags behind GESDI in the number of indicators used, which give GESDI a far more detailed picture of a nation's wealth.
 
GDP has many drawbacks that undermine its ability to act as an indicator of wealth. Alternative indicators aim to capture the more intangible aspects of wealth found in social and environmental areas. The most effective of these is GESDI, which uses over 200 indicators to form a far more detailed picture of nations' wealth. This indicator could be used to measure and compare countries' wealth growth more fairly; it would also be a better indicator of an elected political party's success over its term. Over time, economists are outlining increasingly accurate measures of wealth. Despite these continuous incremental improvements, we also need to settle on an effective measure for semi-long-term use, so that countries can allocate resources knowing that the expected benefits will still be observed in future published results. In spite of all this, decision-makers also need to consider the monetary and time costs of calculating a chosen metric: high costs of calculation may encourage short-cutting and miscalculation, which would make an alternative's benefits redundant.
0 Comments

‘Helicopter money’ and the deradicalisation of negative interest rates

21/9/2020

0 Comments

 
By Adam Perkins, Ayan Banerjee
Up to this point, mainstream economic thought has been divided into three eras. Starting with the publication of the General Theory in 1936, the Keynesian era saw the economy as a beast that needed to be tamed with state intervention, rather than a self-correcting organism. Stagflation in the 1970s dismantled Keynesian theory, with high inflation and low growth revealing flaws in the paradigm. Milton Friedman's monetarism took its place and provided solutions to problems Keynesianism could not explain. By the 2000s, economists were drawing on a synthesis of Keynesian and monetarist principles to answer policy questions, a period also marked by central bank independence and flexible inflation targeting.

Coronavirus presents another threat that will inevitably disrupt conventional economic policy. Traditional policy was already looking tired pre-corona: the 2010s recovery was notoriously slow, and both inflation and unemployment were inexplicably low, two puzzles that conventional economic wisdom had no answer for. Furthermore, monetary policy faced the problem that the rate of interest needed to generate sufficient demand was below zero, the so-called zero lower bound. Quantitative easing (QE) was the response, but its efficacy and viability as a long-term option are both in question. By far the biggest problem policymakers face is distribution. Many argued that the maldistribution of income was the root of stagnant economic growth, pointing out that the rich have a higher marginal propensity to save, so as their share of income grows, so does national saving. Simultaneously, antitrust policy is in upheaval, with the dominance of tech giants prompting a rethink. All of this made the standard economic paradigm increasingly fragile throughout the 2010s, so it should not be surprising that a once-in-a-lifetime event like coronavirus created an urgent need for a change in economic policy strategy.

The virus disrupted supply chains, causing a surge in the price level and a sharp fall in aggregate investment. Most concerning was that the job losses were not only significant in number but concentrated in the hospitality sector, where women, minorities and low-skilled workers are overrepresented. This crisis was therefore unique in that the poorest in society were being hit the hardest, creating a sense of urgency among policymakers and economists alike to find a new approach.
 
Modern economists' attitudes to recovery policy can be sorted into three groups, from least to most radical. First, there are those who believe that monetary policy alone has enough firepower to reliably stimulate the economy. Many economists, including Ben Bernanke, argue there is enough scope for further asset purchases and that monetary policy alone would therefore be sufficient to fight a recession. However, many now doubt that asset purchases have the reach to deliver unlimited stimulus. This leads us to the second school, which believes that budget deficits and fiscal stimulus are a more effective route to recovery. The more radical members of this group believe that central banks should act as enablers of public debt by allowing cheap public borrowing, an idea being pushed (particularly by the former regulator Adair Turner) as a mainstream policy termed 'helicopter money'. In traditional economic theory, running such a high and prolonged deficit would cause serious public debt problems, but with this new attitude of relying on central banks to backstop debt, high public debt becomes a less significant concern. This does challenge the idea of central bank independence and may compromise inflation targeting. Furthermore, the success of fiscal stimulus depends almost entirely on how well it is targeted, as too extensive a package could keep alive businesses that are meant to fail. This route is therefore effective in theory but is at the mercy of how it is executed. 
Picture
Economy can grow its way out of debt and borrow at no fiscal cost as long as growth is higher than interest payments
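The claim in the caption above can be made concrete with the textbook debt-dynamics identity: with no primary deficit, next year's debt-to-GDP ratio equals this year's ratio multiplied by (1 + r)/(1 + g), where r is the interest rate and g is the growth rate. The sketch below uses invented values for r, g and the starting ratio purely to illustrate the mechanism.

```python
# Toy illustration of debt dynamics: with no primary deficit, the debt-to-GDP
# ratio follows d_{t+1} = d_t * (1 + r) / (1 + g). If growth g exceeds the
# interest rate r, the ratio falls over time even without repayment.
# The numbers below are illustrative assumptions, not forecasts.

def debt_path(d0, r, g, years):
    """Return the debt-to-GDP ratio for each year, starting from ratio d0."""
    path = [d0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g))
    return path

low_rates = debt_path(d0=1.0, r=0.005, g=0.03, years=20)   # r < g: ratio shrinks
high_rates = debt_path(d0=1.0, r=0.04, g=0.02, years=20)   # r > g: ratio grows

print(f"r < g: debt ratio after 20 years = {low_rates[-1]:.2f}")   # ~0.61
print(f"r > g: debt ratio after 20 years = {high_rates[-1]:.2f}")  # ~1.47
```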
What both of these approaches have in common is that they leave a bill to be paid in the future, and both have been explored before. This, combined with the aforementioned urgency to find a new approach, has led to the exploration of an option that 10 to 15 years ago would have been considered very radical: negative interest rates. It is not that negative interest rates have never been considered, and they are even in place in some countries (Switzerland's current bank rate is -0.75%), but they are still viewed as fringe by the mainstream. The notion of a negative bank rate is gradually becoming more appealing as the global public debt burden grows. Furthermore, central banks' extensive use of QE over the last decade has somewhat immobilised them, as they cannot raise rates without paying interest on the 'huge bill parked on it' (The Economist, 2020). The classic criticism of negative interest rates is that customers will simply withdraw all their money and hide it under mattresses. Although this remains valid, the movement towards a cashless society means the problem is much closer to being solved than ever before. One suggestion is to eliminate large-denomination notes, making it impractical to store large quantities of cash. This kind of reform would need to be sweeping: if the bank rate takes too long to transmit to real rates, the policy will be ineffective. Proponents of negative interest rates argue that experimental rates such as Switzerland's are not radical enough and that rates of around -3% are needed for the benefit to be felt. 

This does however bring into question the reversal interest rate.
Picture
This is the point at which rates become so low that they actually deter lending and are therefore counter-productive for growth. The reversal rate depends on a number of factors, such as the strictness of capital constraints and banks' holdings, so an actual figure is not known. One thing that is known is that QE use raises the reversal rate, which does not bode well for the current economy. It is entirely possible that a more aggressive negative rate such as -3% could surpass the reversal rate and make the policy counter-productive, but this is not a certainty. What is certain is that a change of perspective is needed. Inequality will only be exacerbated as dominant incumbent companies increase automation and workers' bargaining power is diminished. Public debt is already at an all-time high and QE is showing its limitations. Perhaps negative interest rates are the new shiny weapon policymakers are looking for.
0 Comments

Should we really spend billions to make a few atoms? A look into superheavy element synthesis

3/8/2020

0 Comments

 
By Ayan Banerjee
Picture
Facility at the Flerov Laboratory of Nuclear Reactions, Dubna, Russia
With so many governments running huge deficits to keep the economy afloat at the moment, now more so than ever we need to be carefully scrutinising all areas in which we allocate resources. The synthesis of superheavy elements is an example of a branch of research that requires significantly large amounts of scientific resources which begs the question: is it a justifiable use of these resources?  

Superheavy elements are also known as transactinides: elements with an atomic number beyond the actinide series, i.e. Z > 103. It is also important to note that synthesis here means the artificial production of these elements, provided they exist for at least 10⁻¹⁴ seconds. For this investigation, scientific resources will be taken to include the sum of capital, labour, materials, energy and all other expenses used. The synthesis of these elements does not have a long history: the first superheavy element, rutherfordium, was only synthesised in 1964, and the most recent, oganesson, was discovered in 2002. For this relatively young branch of scientific research, we will therefore look at whether it is a good use of our scarce resources by evaluating the associated gains and costs. 
 
Firstly, to evaluate the costs, we need to detail the various methods of superheavy element synthesis. A major challenge when forming progressively heavier elements is overcoming the increasing electrostatic repulsion within the nucleus. Therefore, different production methods were utilised for different superheavy elements as they possess differing nuclear structures and compositions.
For elements with 103 < Z < 107, two heavy nuclei – one accelerated in a 'heavy-ion beam' using a cyclotron, and the other stationary – are collided, fusing to form a superheavy compound nucleus. However, the compound nucleus's highly excited (high-energy) state made it progressively more unstable, rendering nuclei heavier than Z = 106 unfeasible with this method.
In the early 1990s at the GSI laboratory, scientists were able to use cold fusion to form nuclei in the range 106 < Z < 113. Since these reactions produce compound nuclei with lower excitation energy, the products were more stable. However, as they attempted to produce elements with Z > 112, it became apparent that even cold fusion could not overcome the larger electrostatic repulsion between heavier projectile and target ions. Furthermore, the projectile ions lacked a sufficient number of neutrons to form stable compound nuclei. A new synthesis approach would therefore need to be developed.
This new method uses the rare isotope Ca-48 (28 neutrons, 20 protons) as the projectile ion. The resulting compound nucleus has an even lower excitation energy than under the cold fusion method. Additionally, the larger mass asymmetry between the projectile and target nuclei results in a lower Coulomb repulsion, which is easier to overcome and makes fusion feasible. Neutron-enriched isotopes of transuranium elements such as americium are used as the target material. This method was able to produce the elements with 112 < Z < 119, the remaining heaviest elements discovered so far.
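A rough sense of why an asymmetric projectile and target pair helps comes from the standard touching-spheres estimate of the Coulomb barrier. The sketch below compares the Ca-48 + Am-243 reaction with a more symmetric pair that would reach the same compound nucleus (Z = 115); it is a back-of-the-envelope estimate under textbook approximations, not the detailed reaction modelling actually used at these facilities, and the symmetric pairing is chosen only for illustration.

```python
# Back-of-the-envelope Coulomb barrier estimate using the touching-spheres
# approximation: V_C ≈ k * Z1*Z2 / (r0 * (A1^(1/3) + A2^(1/3))), with
# k = 1.44 MeV·fm and r0 ≈ 1.2 fm. For a fixed compound nucleus, a more
# asymmetric projectile/target pair gives a smaller Z1*Z2 and so a lower barrier.

K_COULOMB = 1.44   # MeV * fm (e^2 / (4*pi*epsilon_0))
R0 = 1.2           # fm, nuclear radius parameter

def coulomb_barrier(z1, a1, z2, a2):
    """Approximate Coulomb barrier (MeV) for two touching spherical nuclei."""
    separation = R0 * (a1 ** (1 / 3) + a2 ** (1 / 3))  # fm
    return K_COULOMB * z1 * z2 / separation

# Asymmetric pair: Ca-48 (Z=20) on Am-243 (Z=95), compound nucleus Z = 115.
asym = coulomb_barrier(20, 48, 95, 243)
# More symmetric pair reaching the same compound Z: Ge-76 (Z=32) on Bi-209 (Z=83).
sym = coulomb_barrier(32, 76, 83, 209)

print(f"Ca-48 + Am-243 barrier ≈ {asym:.0f} MeV")   # ≈ 230 MeV
print(f"Ge-76 + Bi-209 barrier ≈ {sym:.0f} MeV")    # ≈ 313 MeV
```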
 
These various synthesis methods require a significant range of scientific resources which make up the majority of total costs.
One of the largest costs is that of the accelerator facilities themselves. These take up large amounts of lab space and require expensive specialist components as well as the labour of highly trained engineers and scientists. For example, the Russian Joint Institute for Nuclear Research in Dubna, where numerous superheavy elements were discovered, had a construction cost of $238M and currently employs 4,500 people. The costs can be even higher: the Lawrence Berkeley National Laboratory cost $2.2B to construct. These accelerators often rely on powerful cyclotrons, which use expensive magnets and superconductors to achieve ultra-fast beams of ions. Complex detectors tasked with tracking individual particles are also very expensive and extremely difficult to construct, often requiring years of design and manufacturing.
Additionally, there are high costs associated with running these reactions, which can be ongoing for weeks at a time. Energy usage is extremely high due to the advanced components and machinery used to create and contain high-energy particle beams; power usage can exceed 100 MW at certain facilities, equivalent to that of a small city. The superconductors require cooling to temperatures close to absolute zero, only achievable through the use of expensive liquid helium as a coolant. However, arguably one of the most expensive aspects of this process is the use of Ca-48. This rare isotope makes up only roughly 0.19% of the world's naturally occurring calcium. Not only does it take a significant amount of time to produce, it also costs roughly $200,000 per gram. Even though accelerators are optimised to use higher-intensity beams, reducing the rate of Ca-48 consumption, the cumulative usage across long-duration experiments builds up a significant cost. 
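To see how quickly the material bill alone can add up, the sketch below multiplies the $200,000-per-gram price quoted above by an assumed consumption rate and campaign length; the consumption rate and duration are invented purely for illustration and are not figures from any actual experiment.

```python
# Rough illustration of how Ca-48 material costs accumulate over a long run.
# The price per gram comes from the article; the consumption rate and run
# length below are invented assumptions for illustration only.

PRICE_PER_GRAM_USD = 200_000     # from the article
assumed_mg_per_day = 500         # assumed Ca-48 consumed by the beam (mg/day)
assumed_run_days = 120           # assumed length of an experimental campaign

grams_used = assumed_mg_per_day / 1000 * assumed_run_days
material_cost = grams_used * PRICE_PER_GRAM_USD

print(f"Ca-48 used: {grams_used:.0f} g")         # 60 g
print(f"Material cost: ${material_cost:,.0f}")   # $12,000,000
```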
Scientists’ time and effort is also another resource that is utilised significantly for the complex synthesis of these superheavy elements.
Overall, the synthesis of superheavy elements requires a combination of significant direct construction, labour, materials, reactants and time costs.
 
Furthermore, the use of these resources presents an opportunity cost, equivalent to the value lost from their best alternative use. Similar materials and capital used for superheavy element synthesis are also used in other areas of particle and nuclear science. For example, particle accelerators can be used to research theoretical, particle and relativistic nuclear physics, all of which could potentially answer major scientific questions about the fundamental properties of the universe. The exchangeable capital includes the expensive liquid helium coolant, the superconductors and magnets used to focus particle beams, and the powerful and costly cyclotrons. Furthermore, many of the scientists and engineers could be placed on alternative projects, although this would be challenging as they are mainly specialised in highly specific areas. 
 
Although there are significant costs, synthesising superheavy elements presents a range of benefits and opportunities, not only to the scientific community but also to the world as a whole. Some of these benefits may stem from the direct uses of superheavy elements; however, since they have only recently become possible to synthesise, many of their uses and properties are still unknown. This was also the case in the 20th century, when transuranium elements were first explored, yet continued synthesis and research uncovered a range of uses, including americium in smoke detectors, curium and californium for neutron radiography and interrogation, and plutonium in nuclear weapons. Beneficial direct uses may therefore be discovered from the further synthesis and research of superheavy elements.
Furthermore, superheavy element synthesis can present a range of indirect benefits. Most notably, further production could provide enough data to test the theory of the 'island of stability'. This refers to a theorised region of the periodic table containing superheavy nuclides with half-lives orders of magnitude longer than those of other superheavy elements: seconds or minutes rather than nanoseconds. The 'island of stability' region is characterised by nuclei with near-spherical shapes. Comparably, the exploration and synthesis of transuranium elements provided enough understanding to expose the limitations of Bohr's 'liquid-drop' model, which improved theories in nuclear physics.
Discoveries in this area would provide an improved understanding of what keeps nuclei together and how some heavy nuclei are able to resist fission. 
Furthermore, the skills developed from carrying out these processes can be applied to problems such as national security and the management of radioactive materials, including nuclear weapons. Most significantly, scientists believe that the decay of these superheavy elements could provide information on what binds subatomic particles together and therefore on the forces involved in nucleosynthesis, and it may address whether there is a final boundary to the periodic table. Overall, the synthesis of superheavy elements has many indirect uses that could answer major questions and improve our overall understanding of the sciences.
 
To determine if superheavy element synthesis is a good use of scientific resources, we need to address whether the direct and indirect gains exceed the associated costs and opportunity costs. It's difficult to determine the direct benefits as most are currently unknown. However, the positive externalities from conducting this research are far-reaching and significant in the areas of theoretical and nuclear sciences. Furthermore, there may be unforeseen indirect benefits such as in newfound applications of the skills and understanding gained from carrying out these processes.
More importantly, we need to question the purpose of scientific research. I believe it is to improve our understanding of the world around us and the universe, or to invent and discover new technologies that improve lives: something both exciting and incredibly useful. In this case, the synthesis of superheavy elements could provide important opportunities in both areas. Since the possible benefits are so significant, especially to the scientific community, I believe that the benefits do outweigh the costs and that it is a good use of scientific resources.

 
References:
  1. Wikipedia, https://en.wikipedia.org/wiki/Superheavy_element, (accessed July 2020)
  2. Kernchemie, http://www.kernchemie.de/Transactinides/Transactinide-2/transactinide-2.html, (accessed August 2020)
  3. Physics World, https://physicsworld.com/a/superheavy-elements/, (accessed July 2020)
  4. JINR, http://www.jinr.ru/wp-content/uploads/JINR_Docs/7_plan_17-23_eng.pdf, (accessed July  2020)
  5. LBL, https://cx.lbl.gov/documents/2009-assessment/lbnl-cx-cost-benefit.pdf, (accessed August 2020)
  6. Lawrence Livermore National Laboratory, https://pls.llnl.gov/research-and-development/nuclear-science/project-highlights/livermorium/elements-113-and-115#5, (accessed July 2020)
  7. Deccan Herald, https://www.deccanherald.com/content/605696/synthesis-superheavy-elements.html, (accessed July 2020)

Do you think the synthesis of superheavy elements is a good use of scientific resources? Please let us know in the comments section below.
0 Comments

Batteries. Less ‘green’ than we thought?

13/7/2020

0 Comments

 
By Ayan Banerjee
Without a doubt, you have some type of battery close to you as you're reading this, be it the battery in your laptop or phone or the AAs in a torch. Our reliance on this technology is set to increase rapidly as countries and individuals race to become greener. 
 
The shift to renewable energy sources such as solar and wind is incentivising governments to invest in grid-based energy storage systems. Presently, most grids match supply to demand second by second, which is possible because fossil-fuel plants can flexibly change the power they deliver at any time. However, the intermittent nature of most renewable energy sources means the same approach would lead to widespread power cuts and overloads, as supply would rarely match demand. Large-scale batteries therefore offer a store that energy can be deposited into when excess is being produced and drawn from when there is a supply deficit, smoothing out these supply issues. A toy version of this dispatch rule is sketched below. 
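To make the charge-when-surplus, discharge-when-deficit idea concrete, here is a minimal sketch of a grid battery dispatch loop. The demand and generation figures, capacity and efficiency are all invented for illustration and are not based on any real grid.

```python
# Minimal sketch of grid battery dispatch: charge when renewable generation
# exceeds demand, discharge when it falls short. All numbers (demand,
# generation, capacity, efficiency) are invented for illustration.

CAPACITY_MWH = 150           # assumed usable battery capacity
CHARGE_EFFICIENCY = 0.9      # assumed fraction of surplus energy actually stored

# Hourly demand and renewable generation over a toy 6-hour window (MW).
demand =     [100, 120, 140, 160, 150, 130]
generation = [160, 150, 120,  90, 110, 140]

state_of_charge = 50.0  # MWh stored at the start
for hour, (d, g) in enumerate(zip(demand, generation)):
    surplus = g - d
    if surplus > 0:
        # Charge with the surplus, losing some energy to inefficiency.
        state_of_charge = min(CAPACITY_MWH,
                              state_of_charge + surplus * CHARGE_EFFICIENCY)
        shortfall = 0.0
    else:
        # Discharge to cover the deficit, limited by what is stored.
        needed = -surplus
        delivered = min(needed, state_of_charge)
        state_of_charge -= delivered
        shortfall = needed - delivered
    print(f"hour {hour}: surplus {surplus:+} MW, "
          f"stored {state_of_charge:.0f} MWh, unmet {shortfall:.0f} MW")
```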
 
This all sounds complicated to run, so does it actually work in reality? This is where South Australia provides a useful case study. By most accounts, the world's largest battery, installed by Tesla in 2017, has been a great success. A region previously plagued with astronomically high electricity prices has seen major price drops since the battery was brought online. It was also a financial success, earning AU$23.8M in the first half of 2018 and spurring investment in future battery uptake internationally. This, combined with the electrification of other industries such as the car industry, is set to increase demand for batteries significantly over the rest of this century. For something produced on this scale, we must therefore heavily scrutinise its sustainability. 
 
In May 2016, thousands of dead fish were plucked from the waters of the Liqi river, where a toxic chemical leak from the Ganzizhou Rongda lithium mine had wreaked havoc on the local ecosystem. Some eyewitnesses reported seeing cow and yak carcasses floating downstream, dead from drinking contaminated water. It was the third such incident in seven years in an area that has seen a sharp rise in mining activity, including operations run by BYD, at the time the world's biggest supplier of lithium-ion batteries for smartphones and electric cars. After the second incident, in 2013, officials closed the mine, but when it reopened in April 2016, the fish started dying again. Lithium-ion batteries are currently the most common type of battery, with around 12 kg of lithium in the battery of a Tesla Model S. Demand for lithium is increasing exponentially, and it doubled in price between 2016 and 2018.
Picture
The production process for lithium, or more specifically lithium carbonate, involves drilling holes in salt flats and pumping salty, mineral-rich brine to the surface. This brine is left to evaporate, and the resulting salts are filtered so the lithium carbonate can be extracted. Although a very simple process, it uses large amounts of water and can be time-consuming – taking between 18 and 24 months.
 
It’s a relatively cheap and effective process, but it uses a lot of water – approximately 500,000 gallons per tonne of lithium. In Chile’s Salar de Atacama, mining activities consumed 65 per cent of the region’s water. This is having a big impact on local farmers – who grow quinoa and herd llamas – in an area where some communities already have to get water driven in from elsewhere.
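Putting this article's own figures together gives a sense of scale: at roughly 500,000 gallons of water per tonne of lithium and about 12 kg of lithium per Model S battery, each battery implies on the order of 6,000 gallons of water from brine evaporation alone. The sketch below simply reproduces that arithmetic; it ignores processing losses and every other stage of the supply chain.

```python
# Back-of-the-envelope water footprint per EV battery, using only the figures
# quoted in this article (500,000 gallons per tonne of lithium; ~12 kg of
# lithium in a Tesla Model S battery). Processing losses and the rest of the
# supply chain are ignored.

GALLONS_PER_TONNE_LITHIUM = 500_000
LITHIUM_KG_PER_MODEL_S = 12

gallons_per_battery = GALLONS_PER_TONNE_LITHIUM * (LITHIUM_KG_PER_MODEL_S / 1000)
print(f"Water per battery: ~{gallons_per_battery:,.0f} gallons")  # ~6,000 gallons
```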
There’s also the potential for toxic chemicals to leak from the evaporation pools into the water supply. These include chemicals (such as HCl) which are used in the processing of lithium into a form that can be sold, as well as those waste products that are filtered out of the brine at each stage. Research in Nevada found impacts on fish as far as 150 miles downstream from a lithium processing operation. A report by Friends of the Earth states that lithium extraction inevitably harms the soil and causes air contamination. Like any mining process, it is invasive, scarring the landscape and damaging the water table whilst polluting the earth and local wells.
 
Conversely, lithium may not be the most problematic ingredient of modern rechargeable batteries. It is relatively abundant and may in fact be generated from seawater in future, albeit through a very energy-intensive process.
 
Two other key ingredients, cobalt and nickel, could also carry a huge environmental cost. Cobalt is found in huge quantities across the Democratic Republic of Congo and central Africa, and hardly anywhere else, and its price has quadrupled in the last two years.
One of the biggest challenges with cobalt is that it is concentrated in one country, so there is a strong incentive to dig it up and sell it, and as a result a large incentive for unsafe and unethical behaviour. In the Congo, cobalt is predominantly extracted by hand in 'artisanal mines', often using child labour and without any protective equipment. 
 
So how can we reduce these environmental and human impacts? Many scientists argue that new battery technology needs to be developed using more common and environmentally friendly materials. Researchers are working on new battery chemistries that replace cobalt and lithium with more abundant and less toxic alternatives. However, these need to be cheaper and have higher energy density than the batteries that came before them to incentivise a transition. Given all of these associated environmental and human impacts, it is imperative that we scrutinise every point of the battery manufacturing supply chain so that we can make a truly 'green' transition.
0 Comments

How useful is Cost-Benefit Analysis for public policy decisions?

2/7/2020

0 Comments

 
By Ayan Banerjee
When making any decision for public policy, one must ask: which policy will be best for society? Economic development opportunities range from investing in infrastructure to providing subsidies, all of which have costs and benefits. Economists must therefore use systems through which choices can be made about their effectiveness. However, different approaches have different strengths and weaknesses and are further liable to their underlying assumptions. This essay will evaluate these factors and attempt to determine which method is most effective.
 
Cost-Benefit Analysis is a decision-making tool in which the benefits of a project or policy are weighed against its costs to society. There are four core steps. Firstly, the project or policy must be defined, including its time period and the population it affects. Secondly, its physical impacts are identified, such as labour hours or tonnes of landfill. Thirdly, monetary valuations of these costs and benefits are made. Finally, future costs and benefits are discounted, since it is assumed that benefits received in the future are worth less than those received sooner. A minimal sketch of the last two steps is shown below. 
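The sketch below works through the valuation and discounting steps for a made-up project. The cost and benefit streams, the 20-year horizon and the 3% discount rate are all illustrative assumptions, not a recommended parameterisation.

```python
# Minimal Cost-Benefit Analysis sketch: monetise annual costs and benefits,
# discount them back to the present, and compare. All figures (streams,
# horizon, discount rate) are illustrative assumptions.

def present_value(amount, rate, year):
    """Exponentially discount a cash flow received `year` years from now."""
    return amount / (1 + rate) ** year

def net_present_value(costs, benefits, rate):
    """NPV of a project given per-year cost and benefit lists."""
    return sum(
        present_value(b - c, rate, t)
        for t, (c, b) in enumerate(zip(costs, benefits))
    )

years = 20
costs = [100.0] + [5.0] * (years - 1)     # big upfront cost, then maintenance
benefits = [0.0] + [15.0] * (years - 1)   # benefits start in year 1

npv = net_present_value(costs, benefits, rate=0.03)
print(f"NPV at 3%: {npv:.1f}")  # positive => benefits outweigh costs at this rate
```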
 
There are several assumptions made throughout the process of Cost-Benefit analysis which affect its success at maximising societal welfare. 
 
It is often assumed that the benefits and costs in the future are worth less than those that occur sooner. On one hand, there is an inherent time preference of obtaining benefits sooner rather than later. Conversely, it can be argued that the longer a project’s impact lasts, the more valuable it is with regards to sustainability. In future, environmental resources and benefits will naturally be scarcer as we pollute the environment more. Consequently, the supply of clean air, water and other environmental goods would be lower meaning they could be worth more in the future than they would be now. 
 
The model of discounting most commonly used assumes discounting to be exponential[2]. This usually represents markets well when evaluating the investment choices of firms, but it may not be applicable to public choices[3]. The discount rate for public choices, often referred to as the social discount rate, is not straightforwardly observable. Cost-Benefit Analysis is very sensitive to changes in the discount rate, and since the rate is difficult to determine, discounting will often introduce inaccuracies, as the sketch below illustrates. 
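To show this sensitivity, the sketch below discounts a single hypothetical benefit of 100 (in today's money) arriving 50 years from now at a range of candidate social discount rates; all the figures are illustrative only.

```python
# How sensitive is a discounted value to the chosen rate? Discount a single
# hypothetical benefit of 100 (in today's money) arriving 50 years from now
# at several candidate social discount rates. Figures are illustrative only.

FUTURE_BENEFIT = 100.0
YEARS_AHEAD = 50

for rate in (0.01, 0.03, 0.05, 0.07):
    pv = FUTURE_BENEFIT / (1 + rate) ** YEARS_AHEAD
    print(f"rate {rate:.0%}: present value = {pv:.1f}")

# Output: 60.8 at 1%, 22.8 at 3%, 8.7 at 5%, 3.4 at 7%. The same future
# environmental benefit shrinks by a factor of roughly 18 across this range,
# which can easily flip the sign of a long-horizon NPV.
```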
 
Another perspective is that environmental policy decisions have intergenerational effects[4]. Therefore, we must consider the time preference of not only present-day society but also those of the future meaning discounting solely exponentially would be imprecise. Utilising discounting runs the risk that it may downgrade future damages caused by present day economic activity[5] which would cause major consequences in the future.
 
Conversely, there are benefits from using Cost-Benefit Analysis. Firstly, if the underlying assumptions are correct, it would result in the most efficient allocation of resources to maximise total social welfare. Secondly, this method has the advantage of incorporating a wide variety of factors. Often real-world scenarios involve many variables, making this approach versatile. Finally, impacts can be compared in the same unit – monetary terms. This makes it easy to relate costs to benefits in an understandable medium. However, often the process of quantifying costs and benefits is a difficult task and stated preference methods are liable to inaccuracies[6]. Ecosystems are complex networks that are difficult to isolate and quantify. Therefore, these inaccuracies in monetary valuation are likely to be most prominent with respect to environmental impacts.
 
Alternative approaches include Cost-effectiveness analysis, which sets a distinct policy aim and then compares different approaches by their overall cost; the lowest-cost route to the same goal is the best option. Its advantage over Cost-Benefit analysis is that it avoids monetising environmental benefits, which are difficult to quantify. Although it avoids this drawback, it remains liable to the other assumptions. Furthermore, the degree of implementation must be decided beforehand, so what is maximised is cost-effectiveness, which is often not the same as total social welfare.
 
Additionally, there is Multi-criteria analysis, which uses several metrics instead of placing all costs and benefits in monetary terms. Valuing certain criteria, such as the population of a species, can be very difficult and subject to great uncertainty; when the environment and ecosystems are involved, using different criteria reduces these errors and makes the data more representative of reality. On the other hand, it is harder to compare different metrics than if everything were expressed in monetary terms, as in Cost-Benefit Analysis.
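The sketch below contrasts the two alternatives just described on a made-up set of policy options: a cost-effectiveness ranking (cost per unit of a single target outcome) and a multi-criteria score (a weighted sum of normalised criteria). The options, figures and weights are invented purely for illustration.

```python
# Toy comparison of Cost-effectiveness analysis and Multi-criteria analysis
# on invented policy options. All figures and weights are illustrative.

options = {
    # cost, emissions cut (tonnes), jobs created, biodiversity score in [0, 1]
    "subsidy":        {"cost": 80,  "emissions_cut": 40, "jobs": 200, "biodiversity": 0.3},
    "infrastructure": {"cost": 120, "emissions_cut": 70, "jobs": 500, "biodiversity": 0.5},
    "regulation":     {"cost": 30,  "emissions_cut": 25, "jobs": 50,  "biodiversity": 0.6},
}

# Cost-effectiveness analysis: one target outcome (emissions cut),
# lowest cost per unit wins.
cea_ranking = sorted(options, key=lambda o: options[o]["cost"] / options[o]["emissions_cut"])
print("CEA ranking (cheapest emissions cut first):", cea_ranking)

# Multi-criteria analysis: weighted sum of normalised criteria
# (the weights are assumptions).
weights = {"emissions_cut": 0.4, "jobs": 0.3, "biodiversity": 0.3}

def normalise(criterion, value):
    """Scale a criterion by the best value observed across the options."""
    best = max(opt[criterion] for opt in options.values())
    return value / best

mca_scores = {
    name: sum(weights[c] * normalise(c, opt[c]) for c in weights)
    for name, opt in options.items()
}
print("MCA scores (higher is better):", {k: round(v, 2) for k, v in mca_scores.items()})
```

Notably, the two methods can rank the same options differently (here regulation wins on cost-effectiveness while infrastructure wins on the multi-criteria score), which is exactly why relying on a single framework can mislead.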
 
In conclusion, although Cost-Benefit Analysis is an effective tool, it is inherently limited by its underlying assumptions and its high sensitivity to the discount rate. Its alternatives attempt to avoid these faults either by removing variables (cost-effectiveness analysis) or by using different metrics (multi-criteria analysis), but these approaches introduce problems of their own. Therefore, I believe that policymakers should use multiple approaches and choose policies that are supported by several analytical methods. This would help avoid the weaknesses of each approach and therefore give a more robust overall analysis.


[1] Pg. 114, Kolstad, C.D. (2011), Intermediate Environmental Economics, 2nd edition, Oxford University Press, Oxford
[2] Alberini, Anna and Alan Krupnick, Cost of Illness and Willingness to Pay estimates of improved air quality: evidence from Taiwan, 76:37-53 (2000)
[3] Pg. 115, Kolstad, C.D. (2011), Intermediate Environmental Economics, 2nd edition, Oxford University Press, Oxford
[4] Pg. 116, Kolstad, C.D. (2011), Intermediate Environmental Economics, 2nd edition, Oxford University Press, Oxford
[5] Pg. 121, Field, B.C. and Field, M. (2013), Environmental Economics: An Introduction, 6th edition, McGraw-Hill Irwin.
[6] Pg. 145, Field, B.C. and Field, M. (2013), Environmental Economics: An Introduction, 6th edition, McGraw-Hill Irwin.
0 Comments