Abdul Rotimi Mohammed's Blog: Enlightenment blog

January 9, 2025

Africa's Debt Threat



It is typical for us in Africa, as we go about our daily lives, to gripe about our "crappy" politicians. It would seem to us that they just can't get anything right. The most politically aware would then launch into a laundry list of their everyday ineptitudes, inanities and outright crimes. While there is of course some truth to these accusations, it is also true that there are circumstances that rarely get a mention in everyday discussion in Africa, which make it seem that our politicians can't get anything right but for which the issue of whether they are at fault is not so clear-cut. One of those circumstances is government debt, particularly external debt.

The international NGO Christian Aid reported that 32 countries in Africa spent more on external debt payments than on healthcare in 2023, while 25 spent more on external debt payments than on education. In 2023, African governments spent, on average, more than twice as much on external debt payments as on healthcare, and slightly more on external debt payments than on education. In all, 34 African countries spent more on external debt payments than on healthcare and/or education.

The Christian Aid report further stated that the external debt service total for all African countries in 2023 was $85 billion. The corresponding figure for 2024 was $104 billion. Moreover, debt servicing is pushing aside key spending to confront the climate and environmental crises. On average, African governments are spending seven times more on external debt payments than on measures to adapt to climate change. You might recall that climate change is a problem to which Africa has contributed the least, but whose negative fallout it will bear the brunt of.

Some time back, a report presented by the United Nations (UN) on the global debt crisis showed that countries in Africa borrow at rates that are, on average, four times higher than rates for the US and eight times higher than for Germany. This, of course, is because African countries are perceived to be significantly more likely to default on debt payments. While this is perfectly understandable, the effect of this on the social development plans of African governments is nothing less than catastrophic. Perhaps it will be enlightening to look at the catastrophic effects of Africa's external debt servicing regime in some specific countries. We will be looking at Kenya, Ethiopia, Nigeria, Zambia and Malawi.

As Kenya's debt payments have increased in recent years, public spending has fallen steeply. Between 2017 and 2022, real public spending per person fell by a huge 15%. By 2025 it is still projected to be 7% less than in 2017, and at a similar level to 2015 – a decade without any real increase in public spending.
The reduction in spending on social sectors is manifested in a lack of essential services in health and education. According to Development Finance International, just over 40% of children complete secondary school and around 55% have access to healthcare. There is little chance of these proportions increasing if external debt payments continue to prevent public spending from rising.

Across Ethiopia, civil society groups record poor access to equitable and quality healthcare. In education, coverage and the education package fall short of minimum standards, leading to a generally poor quality of education. Development Finance International reports that only 15% of Ethiopian children complete secondary school, and just under 40% have access to healthcare.

Nigeria's real public spending per person has been falling every year since 2020, and was projected by the International Monetary Fund (IMF) to be a gigantic 40% less in 2024 than it was in 2019. These cuts are happening in a country where just over 55% of children complete secondary school and just under 45% have access to healthcare. With the debt crisis, most of the country's revenues are now being channeled to debt servicing obligations at the expense of basic social services. The financial data and news service Bloomberg some time back projected that Nigeria would spend six times more on servicing its debt in 2024 than on building new schools or hospitals. The United Nations Educational, Scientific and Cultural Organization (UNESCO) estimated that the number of primary-school-aged children not attending school increased from 7.5 million in 2010 to 10 million today, and the number of secondary-school-aged children not in school increased from 5 million in 2010 to 13 million in 2024.

The Zambian government's external debt payments shot up, reaching 24% of revenue by 2019. Zambia was only able to keep making these payments by cutting public spending, including on social services. Real public spending on healthcare fell by 13% between 2014 and 2020, while spending per person on education fell by a staggering 40% over the same period.

When the Covid pandemic began, it became impossible for Zambia to keep making these high debt payments. China and other governments agreed to suspend debt payments, but private bondholders refused. In late 2020, Zambia defaulted on its debt payments to external private lenders and governments.

Since defaulting, Zambia has been able to use savings on debt payments to increase spending on social services again. Real education spending per person has increased by 36% since 2021, although it is still lower than in 2014. Real health spending per person is now 48% higher than in 2021, and 28% higher than in 2014. If Zambia were making all its external debt payments, they would equal as much as the government’s spending on education, healthcare and social protection combined.

The debt crisis is taking its toll on Malawi’s public spending. Between 2022 and 2026, Malawi’s real public spending per person is expected to fall by 32%; and in 2026 it will be over 10% lower than it was in 2015. These cuts are happening in a country in which only 15% of children complete secondary school, and just under half have access to healthcare.
It would be quite appropriate at this stage to wonder, "How did we get here?" Well, the story of Africa's debt debacle comes in two chapters, with about 8 years separating them.

The first chapter starts in Africa's post-independence era (basically the 1950s/60s/70s). Africa's post-independence era fell within a slightly longer and more global era known as the "Cold War" era, which ran from the end of WW2 in 1945 to the fall of the Berlin Wall in 1989. During the Cold War, the US and the now defunct Union of Soviet Socialist Republics (USSR) vied for global dominance. Each was doing its best to get its economic ideology (Capitalism in the case of the US and Communism in the case of the USSR) adopted by the rest of the world. To a lesser extent, Communist China took part in this ideological battle, mainly with the USSR, to see who would be the leading light of the communist world. Communism, whose ideas were originally expounded by Karl Marx and Friedrich Engels in the 19th century, reached its highest point during the Cold War era, with at least one third of the globe declaring themselves to be communist countries.

With the US and USSR vying for global influence during the period, generous loans to Third World countries became a major weapon in each superpower's arsenal. Post-colonial African governments, having inherited weakened economies and struggling mightily to navigate a structurally unequal global economy, gobbled up this windfall of generous loans, as many of them were cash-strapped and in desperate need of funds to carry out development programs. They would constantly play the two superpowers off each other (and China off the USSR, for African nations that had declared for Communism) in a bid to extract ever more generous loans. They often carried on as if these games could be played forever. It must also be said that the lenders failed to carry out proper due diligence, and the result was that debt levels in Africa soared beyond the point at which they could realistically be serviced.

The music would stop playing in 1989. That year the Berlin Wall fell, the Soviet bloc began to implode, and the Cold War rather suddenly came to an end. That left Capitalism virtually uncontested by any other ideology, and with that, the buying of influence with loans came to be seen as unnecessary. Virtually overnight, loans were called in. This left African governments scrambling for money they didn't have to repay their loans. It wasn't just loans that were cut back. Development aid was significantly cut back too. Between the last days of the Cold War and the dawn of the new millennium, development aid in general fell by 40%.

As the new millennium approached, a movement in Europe started gathering steam to have Third World debts forgiven. The movement, known as Jubilee 2000, would spread to the US largely through the heroic efforts of the Irish rock band U2’s front man, Bono. The result of this movement’s efforts was that in 1999-2000, the debts of the world’s poorest countries were forgiven to a significant degree.

The second chapter of the debt debacle has its roots in the Global Financial Crisis (GFC) of 2008. Following the crisis, interest rates fell across the western world. This made lending to the developing world attractive, as high interest rates could be charged. The GFC ushered in a period of lower growth globally. This was made worse in the developing world by subsequent shocks, including Covid-19, the climate crisis and the Ukraine conflict, and soaring food prices.

The upshot of all this is that African governments' external debt payments were projected to be at least 18.5% of budget revenues in 2024, the highest since 1998. This was almost four times as much as in 2010, and the highest of any region in the world.

The International Monetary Fund (IMF) says governments struggle to pay external debts once they are higher than 14%–23% of government revenue. In 2024, 28 African countries were projected to have external debt payments over 14% of government revenue, with 23 of these paying over 20% of government revenue. In contrast, in 2010 no African governments were spending over 20% of revenue on external debt payments, and only one, Tunisia, was spending more than 14%.
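
To make that rule of thumb concrete, here is a minimal sketch, in Python, of how one might flag countries against the IMF's 14%–23% band. The country names and ratios are made-up placeholders purely for illustration, not figures from the reports cited in this post.

```python
# Minimal sketch: flagging debt-service-to-revenue ratios against the IMF's
# 14%-23% "struggle" band described above.
# The country names and ratios below are illustrative placeholders, NOT real data.

illustrative_ratios = {
    "Country A": 0.09,  # 9% of revenue goes to external debt service
    "Country B": 0.17,  # inside the 14%-23% band
    "Country C": 0.26,  # above the band
}

def classify(ratio: float) -> str:
    """Classify a debt-service-to-revenue ratio against the IMF band."""
    if ratio < 0.14:
        return "below the band (payments more likely to be manageable)"
    if ratio <= 0.23:
        return "inside the 14%-23% band (government likely to struggle)"
    return "above the band (severe strain)"

for country, ratio in illustrative_ratios.items():
    print(f"{country}: {ratio:.0%} -> {classify(ratio)}")
```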

Again, a coalition of western NGOs is calling for debt cancellation. Beyond that, they are also calling for a redesign of the global economic system so that it is more just and does not consign the global majority to poverty while accumulating wealth for a tiny, yet powerful, global minority.
As an African, one can only view the efforts of this western coalition with profound admiration and a sense of gratitude, but it remains to be seen whether they will eventually be successful.

I personally think that there are more modest efforts also worth pursuing. Given the IMF's statement that governments struggle to pay external debts once they are higher than 14%–23% of government revenue, I think it would go a long way if yearly external debt servicing were pegged at no more than 5% of government revenue. There is precedent for this: in the effort to rebuild Europe after the destruction of WW2, Germany's debt repayments were capped at 3.5% of export revenue.
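
As a back-of-the-envelope illustration of what such a cap could free up, here is a short Python sketch. The revenue figure is hypothetical (it is not any particular country's budget); the 18.5% figure is the 2024 continental average quoted earlier in this post.

```python
# Back-of-the-envelope: what a 5% cap on external debt service could free up.
# The revenue figure is hypothetical, purely for illustration.

revenue = 20_000_000_000        # hypothetical annual government revenue, in US dollars
current_service_share = 0.185   # the 2024 continental average cited above (18.5% of revenue)
proposed_cap = 0.05             # the 5% cap suggested in this post

current_service = revenue * current_service_share
capped_service = revenue * proposed_cap
freed_up = current_service - capped_service

print(f"Current external debt service: ${current_service:,.0f}")
print(f"Service under a 5% cap:        ${capped_service:,.0f}")
print(f"Freed for health, education etc.: ${freed_up:,.0f} per year")
```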

It should be noted that even such a modest reform won't be achieved without a hard fight, and I don't think we need to wait for caring westerners before we get cracking on this. We all, perhaps through the signing of petitions and the actions of our civil society groups, need to let our respective governments know about our grievances with the structural imbalances of the international financial system, and have them make concerted efforts, perhaps joining hands with other developing nations in the global south, to have these imbalances addressed, starting with the capping of government revenue used for external debt servicing at 5%.

This is something we need to do if we are ever going to get the chance to break out of the vicious cycle of debt, which deepens our dependence on commodities, which keeps our economies fragile and most Africans mired in poverty.

BEFORE YOU GO: Please check out my book on Amazon, Why Africa is not rich like America and Europe. Thank you
Bibliography
1. Hertz, Noreena. 2004 The Debt Threat: How debt is destroying the developing world…and threatening us all New York: HarperCollins
2. Sachs, Jeffrey. 2005 The End of Poverty: How We Can Make It Happen in our Lifetime London: Penguin Books
3. Larble, Jennifer et al. 2024 ‘Between Life and Debt’ Christian Aid

December 10, 2024

How Japan got Rich: Lessons for Africa



It would probably come as a shock to anyone reading this post that Japan was once as poor as the tiny African nation of Djibouti. That was in 1868. That year however, was the beginning of a new destiny for Japan. A big humiliation at the hands of the American navy a decade and a half earlier had deeply hurt Japanese pride and had led to intense soul-searching in the intervening period.

Japan emerged from that soul-searching resolutely determined to become a modern nation, and that meant being able to match the industrial capabilities of the west. A largely agrarian nation at the time, with fishing a major occupation, Japan had by 1912 become a relatively industrialized nation. By the 1970s, it was the second largest economy in the world after the US (it has in recent years been overtaken by China). As should be expected, such a journey was nothing less than tumultuous and gut-wrenching. What follows is the story of that journey.

Europeans first came to Japan in 1543. They happened to be Portuguese traders. Through this contact, Japan began to assimilate western culture, the most influential aspects at the time being Christianity and guns (strange combination, I know). Guns were popular because they proved decisive in Japan’s civil conflicts. The Japanese didn’t stop at simply importing guns. They soon learned how to make them. They got so good at it that they were soon making improvements to European models and at some point in the late 16th century, may have been manufacturing more guns than any European nation.

Christianity also proved popular with sections of Japanese society. At the peak of its assimilation, there were between 300,000 and 700,000 Christians in a population of 18 million. Christianity, however, was on a collision course with old Japanese values, and the results were not pretty. For the rulers of Japan at the time, a Japanese citizen's highest obligation was to the rulers of the land. For Japanese Christians, however, the highest obligation was to God. These irreconcilable positions led to the banning of Christianity in 1612 and a ferocious eradication thereafter.

The eradication of Christianity was just the beginning. It was the start of a long period of isolation from the rest of the world for Japan. In 1616, all foreign merchant vessels – except Chinese – were barred from all Japanese ports except two (Nagasaki and Hirado). Foreign residence was limited to just three places: Edo (now Tokyo), Kyoto, and Sakai. In 1624, the Spanish were barred; in 1639, the Portuguese. The English didn't wait to be barred; they just stopped coming. That left the Dutch, who, other than when summoned by the Japanese authorities, were essentially under house arrest. From 1633, Japanese vessels needed official authorization to leave the country; three years later, all Japanese ships were confined to home waters. From 1637, no Japanese was allowed to leave the country by any means, and any Japanese already abroad could not return, on penalty of death. In all, Japan's isolation lasted for some 250 years.

I know this makes for very unpleasant reading for some, particularly the banning of Christianity. But on the flip side, even through this ugly incident, you can almost see why, when the Japanese decided to industrialize, they made rapid progress and were extremely successful. When the Japanese set their minds to something, they go all out to achieve it. We badly need some of that mentality in Africa right now (pursuing the right ends, of course).

The Japanese were brought out of their long isolation by a rude awakening. In 1853, an American navy commander by the name of Matthew Perry led a fleet of warships to the Japanese coast on the orders of the then US president Millard Fillmore. He had been sent by president Fillmore to seek a trade treaty. He carried a letter from the president to the rulers of Japan, which was really a politely worded threat that if Japan did not open its markets to American goods, Perry would use his gunboats to blast Japan into oblivion. This kind of action was euphemistically referred to at the time as 'gunboat diplomacy'. China, some years earlier in 1839 and again in 1856, had suffered an even more egregious episode of gunboat diplomacy at the hands of Britain, in what were known as the Opium Wars. There Britain waged war on China, fighting for the right to sell opium in China, which the Chinese authorities very understandably sought to keep out. Britain won and, to add insult to injury, snatched Hong Kong from China and would not return it till 1997.

Japan, under the threat of force, caved in to US demands and opened its markets, but not without having its pride severely wounded. The incident led to an intense debate, lasting about a couple of decades, over the need to become an industrial nation, as it had been made clear that industrial power equaled military power. Of course, such a proposed change was bound to meet fierce resistance, because it is almost impossible for such a change not to lead to a redistribution of political power.

On the side of reform was a Japanese samurai (a member of the noble warrior class) by the name of Okubo Toshimichi, who lived in a domain in the south of Japan not effectively controlled by the Tokugawa family that ruled over all of Japan at the time. The Tokugawas had usurped power that rightly belonged to the Emperor, whom they had sidelined. Okubo Toshimichi had a plan to wage war against the Tokugawas, reinstall the Emperor and have him initiate the reforms that would lead to the industrialization of Japan. He would succeed, and the Emperor was restored to his throne in 1868. This event in Japan's history is referred to as the Meiji Restoration. Okubo Toshimichi would be made Minister of Finance and would oversee a radical program of industrialization that was, at the time, the fastest any nation had undertaken. This period, known as the Meiji period, lasted up to 1912. During this period, output of iron and coal multiplied. Steam engines were introduced to Japan. Its merchant fleet soon included hundreds of steam ships, and the country built thousands of kilometers of railway tracks. Compulsory education started, and Tokyo University was founded. Public health improved and life expectancy increased.

Japan, which had previously been so hostile to foreigners, would now spend about one third of its budget hosting foreign experts. They would teach subjects as varied as engineering, medicine, agriculture, law, economics, military organization and science, among many other topics. Fortunately, pockets of the gun-making skills learnt centuries earlier still existed, even though gun-making had been banned at some point during the isolation. These skills would prove incredibly useful for a whole range of machinery production and metalworking: screw fasteners, mechanical clocks, and eventually rickshaws and bicycles. By the 1930s, Japan was the world's largest textile exporter.

The 1930s also saw Japan flex its military muscles, having gained industrial power. During the decade it would invade China, having already colonized other East Asian neighbors like Korea. When you consider the population difference between China and Japan (about 474,800,000 for China and 64,450,000 for Japan at the time), you begin to appreciate the advantages of industrial power.

All was not well with Japan though. The 1930s would see Japan deal with plenty of economic turmoil, bringing the first phase of its rapid industrialization to an end. Then there was the onset of World War Two in the 1940s, at the end of which many a Japanese found himself mired in poverty. But all of this turmoil seemed merely to be setting the stage for Japan's second act: its second phase of rapid industrialization, dubbed "the high growth system", which ran from the 1950s to the 70s and catapulted Japan to the upper reaches of the global economy. At the time, it was second only to the US, with great fears in the US in the 1980s that Japan would soon claim the top spot, on the back of its having dethroned the US as the world's number one automobile producer. You could even find traces of this fear in popular Hollywood movies of the time like RoboCop 3 and Rising Sun.

In Japan's first phase of industrialization, it focused mostly on light industries like textiles. In the second phase it would change orientation and focus on heavy and high-tech industries, producing heavy-duty machinery and highly sophisticated precision equipment. One organization above all was responsible for this re-orientation, and that was Japan's Ministry of International Trade and Industry (MITI). Over those decades, MITI would launch a raft of targeted industrial policies with the intention of inducing Japan's private sector to make the switch from light industries to the more productivity-enhancing, and thus more wealth-generating, heavy industries. In this MITI was supremely successful, though the process was not without its pains and trial and error. MITI's success would end up being dubbed by the world at large as Japan's "economic miracle".

The American fear of Japan's eventual world domination would never materialize. Japan would somewhat lose its way in the 1990s, owing to problems many believe to have been caused by its financial sector. This decade is often referred to as "Japan's lost decade". China, having made many monumental errors in its past, had finally gotten its act together and was marching through highly successful reforms that had been initiated in 1978. China would eventually surpass Japan, but Japan remains a glowing testament to what visionary and determined political leadership can achieve in the face of seemingly impossible odds.

BEFORE YOU GO: Please check out my book on Amazon, Why Africa is not rich like America and Europe. Thank you

Bibliography
1. Roser, Christoph. 2017 “Better, Faster Cheaper” in the History of Manufacturing: From the Stone Age to Lean Manufacturing and Beyond. Boca Raton: CRC Press
2. Yulek, Murat A. 2018 How Nations Succeed: Manufacturing, Trade, Industrial Policy, & Economic Development. Singapore: Palgrave Macmillan
3. Acemoglu, Daron et al. 2012 Why Nations Fail: The Origins of Power, Prosperity and Poverty. London: Profile Books Ltd
4. Landes, David. 1998 The Wealth and Poverty of Nations: Why Some are so Rich and Some are so Poor. London: Abacus
5. Johnson, Chalmers. 1982 MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975 California: Stanford University Press

How America got Rich: Lessons for Africa



In the year 1776, the year America declared Independence from Britain, it was pretty much dependent on Britain for manufactured goods, much like sub-Saharan Africa's dependence on the developed world for much the same today.

This was a situation deliberately orchestrated by Britain. Britain had instituted policies and acts throughout its colonies banning them from producing manufactured goods, permitting them only to produce raw materials, which were shipped to Britain at prices it controlled; Britain in return sold manufactured goods in its colonies, which of course were worth multiple times the value of the raw materials it got from them. This was one of the conditions that the American colonies found increasingly intolerable and that led to the War of Independence. Another major issue was the taxes imposed by Britain, which the American colonies found increasingly onerous.

Despite Britain's acts and policies banning manufacturing, there was some illegal small-scale manufacturing going on in the American colonies prior to 1776. This small-scale manufacturing was the result of interactions between a few technically inclined farmers and blacksmiths.

Now you need to understand that late 18th century America was an overwhelmingly rural country, with virtually everyone living on subsistence farms (as about two-thirds of Africa does now). The few who were not farmers tended to be millers who helped farmers convert their grain to flour, or blacksmiths fashioning all manner of relatively crude tools and utensils for farmers. During winters, when no farming could be done, technically inclined farmers would hang out with blacksmiths to carry out all manner of experiments. In this way, America's manufacturing output slowly grew, though to be sure it was very small-scale and took place in just a few locations in the northeastern part of the country where the first colonists settled, before spreading southwards and then westwards. Still, it was the beginning of the process that would turn the US from an agricultural nation into an industrial nation.

We shall soon meet some of these farmers turned engineers (though the word engineer didn't exist at the time), who kick-started America's Industrial Revolution from the bottom up. But I should point out that there was also top-down activity by some visionary politicians to get America's industrialization going, which I think it is instructive that we visit first.

Foremost amongst these visionary politicians was Alexander Hamilton, America's first Secretary of the Treasury (the equivalent of a Minister of Finance), a position he held at the age of 33. You see, even though the War of Independence had been won, Britain was still using other means to keep America dependent on it for manufactured goods. It banned skilled artisans from emigrating to the US, and often dumped goods at prices below cost. Hamilton realized that this situation was inimical to the long-term health of the American nation, because in the future a nation's wealth and power would largely be a function of its industrial economy.

I think it is unfortunate that what Hamilton grasped then, many African politicians are still struggling to grasp more than 200 years later. Anyway, Hamilton was dead set on pushing reforms that would transition America from an agricultural nation to an industrial nation. The reforms had 3 main points: setting up a national (central) bank that could lend money to manufacturers and provide a stable currency that would encourage investment; setting up a tariff system to regulate imports in order to protect America's nascent industry; and funding and building a transportation system consisting of toll roads, canals and railroads to span the entire country (often described as a continent, because the US is a country of continental proportions) in order to create a national market that could spur nationwide demand for manufactured goods. He submitted his ideas in a report to Congress titled Report on Manufactures in 1791. He would find backing for his proposals in the North, particularly the Northeast, where the few industries that existed were generally found.

However, he was opposed by the South, led by Thomas Jefferson, the first Secretary of State and later 3rd president of the US, save for the limited implementation of an act setting up the national bank. For about 70 years, the South and the North would consistently lock horns over these issues, till the eruption of the Civil War in 1861 broke the deadlock in the North's favor. The North also happened to win the war itself. The South opposed these reforms because, having no industries, the national bank would be of little benefit to them. They also thought a tariff system would only increase the price of goods and, finally, that a nationwide transportation system would be of little use to them because they had a river system that enabled their main revenue earner, cotton, to be easily shipped to the ports for export to Britain. Jefferson also thought such reforms would make government beholden to financial and business interests, and unresponsive to the common man. I should also add that Jefferson and Hamilton had great animosity for one another.

With the onset of the war, many southern states would secede from the US. This gave the North the majority it needed to push through its reforms. The reforms themselves would play a crucial part in helping the North win the war in 1865, a fact that was not lost on the defeated South. After the war, the South would come around to the North's way of thinking and agree that the creation of a self-sufficient nation was contingent upon a diverse economy that could be achieved only through the development of industry.

We can now circle back to the ingenious farmer-engineers who built the machines that built America. The first major figure was Oliver Evans, who made many innovations, but arguably his most influential was the conveyor belt he conceived over a couple of years in 1782 and 1783. Originally conceived to convey grain to millers, conveyor belts are today so ubiquitous that we use them without thinking: escalators at train stations, airports and shopping malls; production and assembly lines in factories; the hoisting of bricks, mortar and every kind of building material on construction sites.

Next we look at Sam Slater, a Briton who is credited with having started large-scale textile production in America. He was able to do this because he was extremely fortunate to win an apprenticeship, at the age of 14, in Britain's very first factory. He emigrated to the US in 1789, by then aged 21. But before he did that, he memorized every single detail of the factory he worked in, and on embarking on his journey he dressed as a farmer so he wouldn't be suspected of being an artisan. Though he landed in the US penniless, he soon found financial backers. He then reproduced, from memory, the factory he had worked in in Britain. For this he is credited with having gotten the Industrial Revolution officially underway in the US.

Next we look at Eli Whitney. Whitney's name is supremely associated with America's journey to mass manufacturing, though not without controversy. The two inventions most associated with him are the cotton gin and the system of interchangeable parts. Some observers (though not all) question his role in the emergence of these two things, some believing he only made minor improvements to the cotton gin and wasn't much more than a gifted promoter of interchangeable parts. What is not in doubt is that his name is associated with these two inventions more than any other, so I will go on to describe what happened as if there were no controversy.

Whitney, though trained as a lawyer, had a technical bent from a young age. After graduation, he would move from the North, where he was from, to the South to take up a job as a tutor, but he would soon get distracted by the problems cotton growers were having harvesting cotton. The process was painfully slow, even with hardworking slaves working all day. Whitney would devise the cotton gin in 1794, which would increase a slave's productivity by about 50 times. An unfortunate byproduct of this success was that it gave a spur to the slave trade, which is rather ironic given that Whitney vehemently opposed slavery.

But as intended, it greatly increased cotton output. In a single year after Whitney's invention, the US cotton crop increased from 5 to 8 million pounds. 6 years later (1800), 35 million pounds were produced. In 1805 this figure doubled, rising to 70 million. By 1820 it was 160 million, and by 1825 (the year Whitney died) it was 225 million. Whitney would however make little money from his cotton gin, as he could not enforce his patents and so his invention was liberally copied, with him hardly receiving a dime. He soon found himself in dire financial straits.

At the time Whitney made his far more important contribution, American manufacturing was based on the handcraft system, meaning that manufactured goods were made by skilled artisans largely unaided by machines. One large flaw of the handcraft system was that no 2 units of the same good, for example a gun, even made by the same artisan, were exactly alike. So even if you had two supposedly identical units of the same gun, made up of the same parts, you couldn't interchange their parts, because artisans could not make the corresponding parts in the two units exactly alike. Another major flaw was that the handcraft system was too slow, so output on an industrial scale that could supply the needs of an entire nation wasn't possible. Whitney changed all of that. In 1798, the US almost went to war with France. To prepare for the possible outbreak of war, the US required the production of war rifles, enough to equip an army and manufactured with utmost speed.

Even though Whitney had no prior experience with weaponry, through high contacts he secured the contract. The contract called for him to deliver 10,000 rifles in 3 years. He knew there was no way the handcraft system could turn out that many rifles in so short a time. Therefore, he started thinking about building machines that would build the rifles. It was then that he is supposed to have also come up with the idea of making the parts of the rifles interchangeable, since machines could be consistent in a way human beings could not. Whitney would be grossly late in fulfilling the contract, not completing the order until 8 years later. The US government however seemed to have been satisfied, because it gave him another contract, which secured him financially.

It is almost impossible to exaggerate the importance of the breakthrough of interchangeable parts to global mass manufacturing today and to the US at the time. Once machines could be devised to produce a manufactured product's parts to tolerances fine enough to make the parts interchangeable, it was relatively straightforward to get machines to produce any other manufactured product with interchangeable parts, and without the need for skilled artisans. All the skill necessary was embedded in the machines, and unskilled labor could competently operate them. So from rifles, then revolvers, the system of interchangeable parts was soon used to produce just about every manufactured good on a very large scale…farm harvesters, clocks, typewriters, sewing machines (which are credited with having spurred feminism: women in America first found work outside the home in textile factories producing standardized clothing, and earning money for themselves gave them the platform to push for other rights like the right to vote), bicycles, cameras and then ultimately automobiles, largely through the visionary efforts of Henry Ford. The automobile would be a spur to many other industries, particularly steel and petroleum.

The automobile industry would surpass both of them in annual gross value in 1925, becoming the dominant sector in the US economy. It would also completely revolutionize social life in America. Housing, shopping, dating, all were radically changed by the automobile. By this time, America had become the world’s industrial powerhouse.
America didn't stop there of course. It has continued to redefine what it means to be an industrial nation. Since then it has achieved breakthroughs in electricity production and electronics, computing, jet aircraft, robotics, biotechnology, clean technology and nanotechnology, with interchangeable parts and mass manufacturing playing crucial roles in all these areas. There seems to be no end in sight, and it is hard to see what the future will bring. One thing is certain though: scientific and technological expertise will be at the heart of any new breakthroughs and will be the basis for new rounds of economic prosperity.

Bibliography
1. Burlingame, Roger. 1953 Machines that Built America. New York: Signet Key Books
2. Roser, Christoph. 2017 “Better, Faster Cheaper” in the History of Manufacturing: From the Stone Age to Lean Manufacturing and Beyond. Boca Raton: CRC Press
3. Olson, James S. et al. 2015 The Industrial Revolution: Key Themes and Documents. California: ABC-CLIO

October 15, 2024

How Britain got Rich: Lessons for Africa



Britain…England, specifically, was the first country to be transformed by the Industrial Revolution. Starting around 1760, it was the first country to have its people migrate en masse from earning their living from subsistence agriculture and handcrafts to working in factories that had mechanized the art of production. The transition from subsistence agriculture to manufacturing had actually been going on for a few centuries before then. It just got turbo-charged and took on a critical dimension around 1760 with the development of the steam engine. In addition to manufacturing, the revolution also involved dramatic innovations in mining, transportation and communication, which of course led to profound changes in society.

A natural question to ask at this point is “Why England and not somewhere else?” In trying to answer this, we will see that so large a transformation does not all of a sudden come out of nowhere. In fact, changes and developments that had been going on in Western Europe for about 150 years played a big role in making the Industrial Revolution possible. These changes themselves were enabled by processes and events stretching back to the 10th century.

The Industrial Revolution enabled Britain, and others that would eventually follow like the rest of western Europe, the US and Japan, to create mass-affluence societies, pulling far ahead of other regions of the world that didn't adopt these changes, thereby creating the difference between the rich world and the developing world we know today. It would also help usher in a new global political order in which the rich world, whose technological and economic might led to geopolitical and military might, was able to push the developing world around, foisting abhorrent institutions on it like colonization and slavery.

Before we get to talking about Britain, I should point out that the period between the years 900 and 1300 (i.e. between the 10th and 14th centuries) was a critical one for Europe where fundamental institutions enabling the emergence of modern society were put in place, setting it up for future global leadership. Western Europe in particular, became the most urbanized part of the world. There was a lot of land reclamation, forest clearing, building of roads, bridges, churches, castles etc. during this time. The urbanization process was a key driver for an investment boom that followed. It was during this period that the economy of Europe first surpassed the economies of China and the Middle East.

The mass urbanization in Western Europe was accompanied or followed by other far-reaching changes, both technical and societal, that were the exclusive preserve of Europe for centuries but have since spread so thoroughly to every other part of the world that even we in the developing world take them for granted, not appreciating the crucial roles they played in advancing society. Some of the technical advances include: -

Eyeglasses: This might seem trivial but was actually a tremendous advance. Eyeglasses doubled the work life of skilled craftsmen, especially those who did high precision jobs. The crystalline lens of the human eye begins to harden around age 40, causing a condition similar to farsightedness. At that age, a 13th century European craftsman could reasonably expect to live and work for another 20 years…if he could see well enough. Eyeglasses solved this problem. Eyeglasses further encouraged the invention of fine instruments like gauges, micrometers, fine wheel cutters etc., thereby laying the basis for articulated machines with fitted parts. This gave Europe a huge advantage over other civilizations as it solidly put them on the road to batch and then mass production. Europe enjoyed a monopoly on eyeglasses for 300 to 400 years.

Mechanical Clock: It is easy to trivialize the mechanical clock, but it is arguably the greatest invention of the European Middle Ages, dating from around the 13th century. It made continuous, reliable time-keeping possible. Prior to its invention, time was kept with sundials and water clocks. Sundials could only be used on clear days, and water clocks eventually clogged up and stopped working as a result of sedimentation. The mechanical clock was the first digital device, paving the way for a whole new field of precision engineering. Productivity, a concept so fundamental to wealth creation, is inextricably bound up with the invention of the mechanical clock. It led to an effort to maximize production per unit of time, giving birth to the field of scientific management, which played a huge role in lifting 20th century worker productivity. Europeans enjoyed a monopoly on the mechanical clock for about 300 years.

Printing: China invented printing in the 9th century, but a combination of factors, including the difficulty of the Chinese language, the dominant printing technique used at the time and the relatively rigid nature of Chinese social institutions, prevented it from exploding as it did in Europe, where it was introduced several centuries later. The invention of the Gutenberg printing press in the 15th century would eventually cause the cost of books and other reading material to drop by at least 90%, leading to the phenomenon of mass publication for the first time in history.

The waterwheel: Known to the Romans, it was revived in 10th-11th century Europe. It was used for grinding grain, pounding cloth (transforming wool manufacture in the process), hammering metal, rolling and drawing sheet metal and wire, mashing hops for beer, and pulping rags for paper. Paper, which was manufactured by hand and foot for a thousand years following its invention by the Chinese (and later adopted by the Arabs), was manufactured mechanically as soon as it arrived in Europe in the 13th century.

I should point out that these technical achievements were the outgrowth of a value system that placed a premium on scientific, rational explanations for naturally occurring phenomena (as opposed to myth and magic).

Now that we have explored the backdrop of Western European development, we can begin to ask why Britain. To answer this, it would help to have a template of what the ideal modern society should look like. That template is as follows: -
• Knows how to operate, manage and build the instruments of production and to create, adapt and master new techniques on the technological frontier.
• Is able to impart this knowledge and know-how to the young, whether by formal education or apprenticeship training.
• Chooses people for jobs by competence and relative merit; promotes and demotes on the basis of performance.
• Affords opportunity to individual or collective enterprise; encourages initiative, competition, and emulation.
• Allows people to enjoy and employ the fruits of their labor and enterprise.

These standards imply corollaries: gender equality; no discrimination on the basis of irrelevant criteria (sex, race, religion, etc.); and a preference for scientific rationality over magic and superstition (irrationality).
Such a society would also possess the kind of political and social institutions that favor achievement of these larger goals; that would for example:

• Secure rights of private property, the better to encourage saving and investment.
• Secure rights of personal liberty – secure them against both the abuses of tyranny and private disorder (crime and corruption).
• Provide stable government, itself governed by publicly known rules (a government of laws rather than men).
• Provide responsive government, one that will hear complaint and make redress.
• Provide honest government, such that economic actors are not moved to seek advantage and privilege inside or outside the marketplace. In economic jargon, there should be no rents to favor and position.
• Provide moderate, efficient, not greedy government. The effect should be to hold taxes down, reduce the government’s claim on the social surplus, and avoid privilege.

This society would be marked by geographical and social mobility. People would move about as they sought opportunity, and would rise and fall as they made something or nothing of themselves. This society would value new as against old, youth as against experience, change and risk as against safety. It would not be a society of equal income, because talents and inclinations towards hard work are not equal; but it would tend to a more even distribution of income than is found with privilege and favor. It would have a relatively large middle class. By the 18th century, Britain clearly met these conditions more than any other society.

Specific episodes in Britain's development story worth highlighting include its success in developing textile production in the 16th century and the invention of the steam engine in the 18th century, which was enabled by research into atmospheric science carried out in the 17th century and by the abundance of coal, which served as a cheap source of fuel. The steam engine's invention was itself incentivized by Britain's relatively high wage structure, which was the result of its success with textile production. By the end of the seventeenth century, about 40 per cent of England's woolen cloth production was exported, and woolen fabrics amounted to 69 per cent of the country's exports of domestic manufactures. This led to one-quarter of London's workforce being employed in shipping, port services or related activities by the early 18th century.

The gains from woolen production, however impressive, would be dwarfed by those from cotton, which came about as a result of the mechanization of the processes involved in cotton production. Employment in the British cotton industry reached 425,000 in the 1830s and accounted for 16 per cent of jobs in British manufacturing and 8 per cent of British GDP. The mechanization of industry depended heavily on the availability of cheap iron, so it gave much impetus to the emergence of the steel industry.

Britain would build on its successes pioneering the Industrial Revolution in the 18th century well into the 19th, as industrialization increasingly came to be powered by electricity rather than steam. By this time, though, it had been surpassed by the US and Germany, because those two nations had developed a greater appreciation for the deeper scientific principles powering industrial progress, whereas Britain relied to a greater extent on technical trial and error. Still, the progress was nothing short of transformational, continuing well into the 20th century and beyond. British income figures bear that out: British income per head doubled between 1780 and 1860, and increased by a further multiple of 6 between 1860 and 1990, even as its population increased.
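
Those multiples translate into surprisingly modest annual growth rates. A quick calculation (my own illustration, not a figure from the sources listed below) shows how slow but compounding growth produced them:

```python
# A small calculation (my own illustration, not a figure from the sources below)
# of the compound annual growth rates implied by the income figures quoted above.
def implied_annual_growth(multiple: float, years: int) -> float:
    """Annual growth rate g such that (1 + g) ** years equals the given multiple."""
    return multiple ** (1 / years) - 1

# Income per head doubled between 1780 and 1860, then grew sixfold between 1860 and 1990.
g1 = implied_annual_growth(2, 1860 - 1780)
g2 = implied_annual_growth(6, 1990 - 1860)

print(f"1780-1860: about {g1:.2%} per year")  # roughly 0.9% a year
print(f"1860-1990: about {g2:.2%} per year")  # roughly 1.4% a year
```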

There are lessons for every developing region seeking to emulate British success, Africa included, though the development path won't be exactly the same. For instance, a benefit of starting late is that you can skip centuries of trial and error and adopt the latest technologies and institutions. But be warned: reaching the higher echelons of economic development requires the indigenization of technological innovation, which implies a highly skilled workforce, which in turn implies a solid education system, particularly in the sciences. This ultimately depends on the widespread enthronement of a culture of scientific rationality and enquiry. Such a culture is very far from being dominant in Africa. This is where African values must begin to change.

BEFORE YOU GO: Please check out my book Why Africa is not Rich like America and Europe on Amazon

Bibliography
1. Zanden, Jan Luiten Van. 2009 The Long Road to the Industrial Revolution: The European Economy in a Global Perspective 1000-1800. Leiden: Brill
2. Landes, David. 1998 The Wealth and Poverty of Nations: Why Some are so Rich and Some are so Poor. London: Abacus
3. Allen, Robert C. 2012 The British Industrial Revolution in Global Perspective. Cambridge: Cambridge University Press
4. Yulek, Murat A. 2018 How Nations Succeed: Manufacturing, Trade, Industrial Policy, & Economic Development. Singapore: Palgrave Macmillan

September 9, 2024

Africa's Job Crisis



Africa faces a very daunting challenge. According to the International Monetary Fund (IMF), Africa needs to create roughly 18 million additional jobs every year for the 20 years from 2015 to 2035. The World Bank's estimate is roughly 15 million new jobs a year until 2030. The African Development Bank comes in lower, at about 10-12 million new jobs a year during the 2020s. One estimate even suggests that this annual requirement will grow to 30 million new jobs by 2050. The scary part is that Africa's working age population will still be growing by then. It is commonly agreed (and in my opinion it should go without saying) that not just any job will do. They need to be decent, high-productivity jobs.

How has Africa fared so far? Well, since 2000, sub-Saharan Africa has added an average of about 9 million jobs a year. At least 2/3 have been in subsistence or smallholder agriculture, self-employment or the provision of low value-added services – activities that tend to be poorly remunerated and precarious. Less than 1/3 of the jobs – 2.6 million – have been in waged employment, with only 200,000 – 300,000 provided by the industrial sector.
The above two paragraphs (I have paraphrased them) come from my good friend Edward Paice's book, titled Youth Quake: Why African Demography Matters, published in 2021. Edward was so kind as to gift me a complimentary copy. The book is loaded to the brim with facts, insights and data about African demography, and it will be my main reference for this post. Edward hails from Britain and runs a think tank focused on Africa, the Africa Research Institute, out of London.

You might be wondering how Africa got itself into this demographic mess. Well, Africa's present condition is the result of a subtle interplay between demography and economics known as the demographic transition, or, as in Africa's case, the lack of one. I have discussed the demographic transition in a previous post. There, I pointed out that the demographic transition refers to the shift from high birth rates and high death rates in societies with minimal technology, education (especially of women) and economic development, to low birth rates and low death rates in societies with advanced technology, education and economic development, as well as the stages between these two scenarios.

I had pointed out that there were 5 stages. Stage 1 is characterized by high birth rates and high death rates. During this stage, the society evolves in accordance with the Malthusian paradigm, with population essentially determined by the food supply. Any fluctuations in food supply tend to translate directly into population fluctuations. In stage 2, death rates drop quickly due to improvements in food supply and sanitation, which increase life expectancy and reduce disease. This is the stage that most sub-Saharan African countries are at. The resulting fall in death rates tends to lead to a population explosion. In stage 3, birth rates begin to fall rapidly as a result of marked economic progress, an expansion in women's status and education, and access to and use of contraception. In stage 4, both birth rates and death rates are low. No country is at stage 5.
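
To see why stage 2 produces a population explosion, here is a toy simulation in Python. The birth and death rates are illustrative values I have chosen for the sketch, not figures from Edward Paice's book or any other source cited here.

```python
# A toy simulation (my own illustration, not taken from the post's sources) of why
# stage 2 of the demographic transition produces a population explosion: death rates
# fall while birth rates stay high. The rates chosen are illustrative, not real data.

def project(population: float, birth_rate: float, death_rate: float, years: int) -> float:
    """Project a population forward with constant crude birth/death rates (per 1,000 per year)."""
    for _ in range(years):
        population *= 1 + (birth_rate - death_rate) / 1000
    return population

start = 100.0  # an index population, say 100 million

# Stage 1: high birth and death rates roughly cancel out, so the population barely grows.
stage1 = project(start, birth_rate=45, death_rate=43, years=50)

# Stage 2: death rates fall (better food supply and sanitation) while birth rates stay high.
stage2 = project(start, birth_rate=45, death_rate=15, years=50)

print(f"After 50 years at stage 1 rates: {stage1:.0f} million")  # about 110 million
print(f"After 50 years at stage 2 rates: {stage2:.0f} million")  # about 438 million
```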

It has been observed in general that successful industrialization plays a crucial role in helping countries get to the highly desirable stage 3. This is so because successful industrialization should take the bulk of the unemployed, those relying on subsistence or smallholder agriculture and those relying on low-value service provision, out of their informal, precarious circumstances and place them in formal, waged, high-productivity employment. This process ends up aiding birth control, as former subsistence farmers no longer have to rely on children as a source of income generation by using them as farm labour. It further aids birth control because people in formal employment will often have access to formal social safety nets in old age, like pensions. This reduces the need to depend on adult children as a source of income. So in essence, successful industrialization gives a double benefit: it increases the number of jobs available and reduces the number of people competing for the available jobs. Where industrialization suffers a case of arrested development, as in much of sub-Saharan Africa, this double benefit isn't realized; hence the African job crisis.

Research jointly sponsored by the African Development Bank, the Brookings Institution and the United Nations University World Institute for Development Economics Research, around 2016 showed that by any measure, Africa’s industrial sector is small relative to the average for the developing world as a whole. The share of manufacturing in GDP is less than one-half of the average for all developing countries, and in contrast with developing countries as a whole, it is declining. Manufacturing output per capita is about 10 percent of the global developing country average. Per capita manufactured exports are slightly more than 10 percent of the developing country average, and the share of manufactured exports in total exports is strikingly low. Moreover, these measures have changed little since the 1990s.

The research also showed that, both regionally and at the individual country level, Africa ended up in 2015 more or less where it started in terms of industrial development in the 1970s. That is some 40 years of standing in the same spot. South Korea went from being one of the world's poorest countries to one of the world's richest in the same timeframe.

We have looked at the African job crisis in general. Let's look at the details for specific countries. Edward Paice's book is armed with details about quite a number of African countries; you really should check it out. I will take just a sliver of that information, and for just 3 countries: Nigeria, Uganda and Kenya.

Nigeria's working age population is set to expand by about 2/3 to 150 million between 2010 and 2030. The magnitude of this increase is double that of the period between 1990 and 2010, and equivalent to adding the entire population of France or Thailand. The working age population will expand by a further 100 million to 250 million between 2030 and 2050. Almost 20 million young Nigerians sought to join the workforce between 2014 and 2018, but only 3.5 million new jobs were created; in 2018 just 450,000 were created. By the end of that year, the number of Nigerians unemployed but actively looking for work had almost quadrupled over 5 years to 21 million, an official unemployment rate of 23%. Among Nigerians with post-secondary-school education, the rate had almost tripled to 30%, meaning that an educated Nigerian was more likely to be unemployed than Nigerians in general. Youth unemployment also tripled to 13.1 million in the same period, a rate of 30%. If underemployment among youth is added, the combined rate reached 55%. Nigeria's unemployed total far surpasses that of all European Union (EU) states combined, even though the EU has twice the population of Nigeria.

Nigeria's National Bureau of Statistics estimates that about ¾ of all new jobs are created by the traditional or informal sector. This sector accounts for more than half of GDP and of the total workforce – about 55 million Nigerians in all. Only 1 in 10 have formal waged employment, well below the level of 2 decades ago, and more than half of these prized jobs are in the public sector. Simply stabilizing the official unemployment rate at its current level of 20-25% during the decade to 2030 will require the creation of 30-35 million jobs. This is almost 10 times the number of jobs created in the 5 years from 2014 to 2018.

In Uganda, the official unemployment rate in the 2018 Statistical Abstract was 9%, with youth unemployment at 13% – considerably lower than in South Africa or Nigeria. People whose occupation is subsistence agriculture – some 6 million, or 40% of the working population – are counted as being outside the labor force and are therefore not included in the calculation of unemployment. Among the 7.75 million youth, defined in Uganda as those between 18 and 30, 3.3 million are outside the labor force, of which 2.2 million are categorized as Not in Employment, Education or Training (NEET) and 1.1 million as 'potential labor force'. By simply adding the 'potential labor force' number to the 0.6 million young people who are officially unemployed, youth unemployment rises from the headline rate of 13% to 38%.

The full extent of joblessness among under-30s is even worse. The term ‘in employment’ as applied to 3.9 million young Ugandans is open to misinterpretation. Only about 2.1 million of them have ‘transited’ to a job deemed ‘stable’ or ‘satisfactory’, whether working for someone else or self-employed; the remaining 1.7 million are among 5 million Ugandans classified as ‘in transition’, meaning they are employed in temporary or unsatisfactory work, or in effect unemployed. Exactly how many of those ‘in transition’ would further swell the ranks of the unemployed is not discernible from official statistics, but the true extent of youth unemployment is certainly more than 50%. In recent years both the AfDB and the Ugandan research institute Advocates Coalition for Development and Environment have asserted that the real figure is higher than 60%. Please note that the Ugandan Bureau of Statistics is not deliberately trying to misrepresent the situation. It is simply following international guidelines, which inadvertently end up hiding the true extent of the job crisis.

Kenya has acquired a reputation for having one of the most vibrant private sectors on the continent. The number of waged jobs in the private sector – over 2 million – is more than double those in the public sector. It is also one of a handful of African countries that has reached stage 3 of the demographic transition. Sadly though, Kenya has an unemployment/underemployment crisis of its own.

The 2015-16 Labour Force Survey enumerated an economically active population of 19.3 million Kenyans between the ages of 15 and 54, of whom 1.4 million were classified as unemployed. If the underemployed are added, this figure rises to 5 million, or about a quarter of the labor force. Among Kenyans with tertiary education, more than a quarter were economically inactive.

Furthermore, the decent-job crisis is worsening: the number of unemployed youth is forecast to double by 2035, with a commensurate rise in underemployment as well. More than 750,000 young Kenyans are entering the labor market annually, a number that will not start to ease until midcentury. In 2017 and 2018, the economy created between 850,000 and 900,000 new jobs. However, it is self-employment in the urban and rural informal sectors, or firms with a couple of employees, that generates almost 90% of new jobs – and accounts for over 80% of total employment in Kenya. Whereas salaried employment reached a total of 2.76 million in 2018, an increase of more than a third in a decade, in the same period the number of Kenyans in informal employment grew by 70% to 14.1 million, 2/3 of them in rural areas. Only about 1 in 10 eligible workers succeeds in finding waged employment, and over a quarter of these waged jobs are classified as casual – not employed for longer than 24 hours at a time.

None of this makes pleasant reading, but it is necessary in order to understand the true extent of the African Job Crisis. I had mentioned in a previous post that manufacturing has the greatest multiplier effects in terms of jobs created when compared to either agriculture or services. That, in my book, should make it the top priority of just about every single sub-Saharan African government, especially in light of how it helps countries achieve the demographic transition.



BEFORE YOU GO: Please share this post with as many people as possible and please check out my book, Why Africa is not rich like America and Europe on Amazon. Thank you

Bibliography
1. Paice, Edward. 2021 Youth Quake: Why African Demography Matters. London: Head of Zeus
2. Wikipedia article on the demographic transition https://en.wikipedia.org/wiki/Demogra...
3. Newman, Carol et al. 2016 Made in Africa: Learning to Compete in Industry. Washington D.C: Brookings Institution Press
Published on September 09, 2024 23:25

August 31, 2024

Is AI coming for our Jobs?



In my opinion, it will be at least quite a while before the spectre of mass unemployment as a result of the wide deployment of Artificial Intelligence (AI) becomes a relevant issue for Africa. In Africa we are already dealing with the problem of mass unemployment as a result of too little productivity, whereas the problem of mass unemployment from AI deployment is a problem of too much productivity – if indeed too much productivity can be a problem. Even in the advanced West, the supposedly coming AI-induced unemployment is at least a few decades out, assuming that it even comes at all. However, this is a topic of such increasing global interest that it seems worthwhile to discuss.

There has been plenty of debate on the topic with the debaters falling broadly into two camps. The first are the Cassandras/Doomsday prophets predicting the coming mass unemployment as described in the previous paragraph. The second are those that are skeptical of such a dystopian outcome with some of them going as far as to accuse the Cassandras of scare-mongering and mere hype.

In my reading of the debate, I have noticed that people in the first camp tend to have technical backgrounds, some being actual AI practitioners, though the best of them tend to be well-rounded thinkers. Representative of this first group are Kai-Fu Lee, who built pioneering AI systems in the late 1980s, went on to work at Apple, Microsoft and Google, and currently runs a venture capital firm in China called Sinovation Ventures; and Martin Ford, a computer engineering graduate and brilliant author who wrote the bestseller Rise of the Robots: Technology and the Threat of a Jobless Future, and who, if anything, is even more pessimistic than Kai-Fu Lee. Of course, not all technologists share their dire predictions.

Members of the skeptical group tend to be economists. Representative of this class are Daron Acemoglu, the brilliant MIT professor who co-authored the bestseller Why Nations Fail: The Origins of Power, Prosperity, and Poverty, and Guy Standing, who authored the rather thoughtful and provocative The Corruption of Capitalism: Why Rentiers Thrive and Work Does Not Pay, in which, as a side note, he dismisses the whole spectre of AI-induced (or, more broadly, technology-induced) unemployment. Again, not all economists are as sanguine as Daron Acemoglu and Guy Standing.

We have surveyed the battlefield and the principal combatants. Let’s now look at the weapons being deployed to wage the war – in this case, the arguments being tossed to and fro. The skeptics go first. They claim AI is nothing more than the latest iteration of technological innovations that, though they may cause job losses in the short term, nevertheless improve productivity to the extent that new jobs are created to replace the old ones that have been lost.

Not so fast, say the pessimists. While they concede that the many technological innovations that have come into being since the industrial revolution got going a few centuries ago have inevitably made life better, people like Kai-Fu Lee point out that there have been far fewer instances of the truly big advancements, the so-called General Purpose Technologies (GPTs). He mentions that just three receive broad support as being worthy of the label: the steam engine, electricity and ICT. He contends that AI is shaping up to be another GPT. His counter-argument is essentially that we have much less historical data (since it’s the GPTs that really count) to justify confidence that this time will be no different. He believes that for a better assessment of the impact of AI, we should look at the historical record of the impact on jobs and wages of these three GPTs alone. He states that while the record of the first two clearly shows far more people benefitting – even though those benefits took a while to show up, and a relatively small number bore the brunt of the disruptions these GPTs caused – the record of ICT so far, in terms of its impact on jobs and wealth inequality, has been far more ambiguous.

Defenders of ICT might say more time is needed, and they may have a point. After all, some observers note that it took the original industrial revolution, which got its start in 1760, some 85 years before it started lifting living standards for everybody (and progressive social policies played a big role in that; it wasn’t just about technology). Then again, one of the first general-purpose electronic computers, the ENIAC, was built in 1945, and the beginnings of computer design can be traced to the work of Charles Babbage in 1822. John McCarthy, then at Dartmouth and later a professor at MIT and Stanford, coined the expression “Artificial Intelligence” in the mid-1950s, and the first AI conference took place in 1956. The IT crowd could probably come back with the fact that the technologies that enabled the Industrial Revolution to take off in 1760 were decades, if not centuries, in the making...this is the kind of debate I like…ferocious…with everything on the line.

Anyway, in his book AI Superpowers: China, Silicon Valley and the New World Order, Kai-Fu Lee makes clear that he believes AI will tilt the net job creation scales to the negative side, though he tries to proffer solutions, running the gamut from a Universal Basic Income (an idea that goes back to the 1960s, with vocal supporters including Martin Luther King Jr. and Richard Nixon) to remuneration for all forms of social work, which would most likely be paid for by some sort of AI tax. Martin Ford is an even bigger fan of the Universal Basic Income.

In his book, Martin Ford gives alarming example after alarming example of progress in AI and Robotics that has enabled company after company to carry out tasks with drastically reduced employment numbers compared to what they would have been without the advances. He goes on to say that AI looks to be on the path to becoming a utility like electricity, and so any new industries will probably adopt AI from the start and will be unlikely to create many jobs, at least compared to previous industrial eras. I must admit that the arguments in Martin Ford’s book are tightly reasoned. Also, though trained as a computer engineer, he shows a solid grasp of economics. However, given the complexity of the debate and the fact that we will not be able to see the end game with clarity for at least a few decades, it is entirely possible for some counter-intuitive development to occur that deflects his otherwise logical arguments. I am specifically bringing this up because Daron Acemoglu points out that something of that nature has happened before: the wide-scale deployment of Automated Teller Machines (ATMs) in the West.

Martin Ford happens to mention ATMs in passing, and though he doesn’t discuss them in specific detail, the overall tone of his book and his rather generic statement about them suggests that he believes they have played their part in the job disruption in the banking sector. Daron Acemoglu would counter that this is incorrect. In fact, he points out that the wide-scale deployment of ATMs actually increased overall employment in the banking sector. Academic studies suggest this happened because the deployment of ATMs helped reduce the cost of banking, thus encouraging banks to open more branches and hire people who specialized in tasks that the ATMs did not automate.

From this example, Acemoglu leads us to the general observation that often enough, sufficiently productive technologies, while automating a particular line of work, thus reducing job demand in that area, will simultaneously increase job demand in other areas. Emphasis on “sufficiently productive”. He is at pains to point out that not all productivity enhancing innovations fall into this category. Some are just productive enough to automate a task without raising the demand for labor in other tasks.
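A toy numerical sketch may help make this distinction concrete. The numbers below are entirely hypothetical (they are mine, not Acemoglu’s); the point is only to show how the same automation step can raise or lower total employment depending on how much complementary activity the cost saving induces, which is roughly the intuition behind the ATM story.

```python
# Toy illustration with made-up numbers: automation halves the tellers needed
# per bank branch, but cheaper branches may let the bank open more of them
# and hire for tasks the machines don't automate.

def total_jobs(branches: int, tellers_per_branch: int, other_staff_per_branch: int) -> int:
    return branches * (tellers_per_branch + other_staff_per_branch)

# Before ATMs: 100 branches, each with 10 tellers and 5 other staff.
before = total_jobs(branches=100, tellers_per_branch=10, other_staff_per_branch=5)

# "Sufficiently productive" case: lower costs fund 160 branches and extra
# staff for non-automated tasks, even though tellers per branch are halved.
sufficiently_productive = total_jobs(160, 5, 7)

# "Just productive enough" case: tellers are halved and nothing else changes.
merely_labor_saving = total_jobs(100, 5, 5)

print(before)                   # 1500 jobs before automation
print(sufficiently_productive)  # 1920 jobs -> total employment rises
print(merely_labor_saving)      # 1000 jobs -> total employment falls
```

Again, the figures are invented; the sign of the employment effect hinges entirely on how strongly the productivity gain feeds back into new branches and new tasks.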

So it seems the pertinent question is: into which group does AI fall? Needless to say, that will be a very difficult question to answer, as it depends on a number of complicating factors: the inherent nature of the field itself, the skill of its practitioners, the uses to which it is put, and also the surrounding social environment, which plays a significant part in how it is deployed.

For instance, to Martin Ford’s examples of how tech start-ups are using AI to drastically reduce headcount, Daron Acemoglu might ask: “Is this because of the inherent nature of AI, or because of the incentives American society gives to the venture capitalists bankrolling these start-ups, which force them to focus on the short term, so that they look for quick and relatively safe rewards from the straightforward automation of existing tasks rather than on longer-term, riskier research that might create more jobs in the long run?” As an example, he cites the case of health insurance. He believes that scientists and engineers working on software or hardware that health workers could use to help patients do their rehabilitation therapy at home after surgery, rather than in a hospital, could potentially save insurance companies lots of money, improve well-being, and create new jobs. He doesn’t say what jobs, but other than scientists, engineers and health workers, I think the other obvious ones in the case of hardware would be those involved in the logistics of the warehousing, distribution, sale and repair of the devices (I wouldn’t want to give the impression that software is incapable of such benign ripple effects; a good example would be the impact of computational modeling on oil and gas exploration). He notes, however, that the bulk of automation effort in the insurance industry goes towards automating the process for approving insurance claims, which, while saving money for the insurance firm, should reduce headcount. He also points out biases in the US tax code, which taxes labor at a higher rate than capital: employers have to pay payroll taxes (used to finance social security and Medicare) on labor, but not on robots. This, he says, encourages companies to engage in what he dubs “excessive automation”, where companies automate in circumstances where, perhaps, it would have been wiser not to.

I mentioned in my previous post that it is not a crime for a company to deploy technology in order to cut costs. In fact, the capitalist system of competition will inevitably prevail on you to do so. But cost-cutting in some areas of the economy needs to be matched by the development of new industries, and hence new jobs, in others for the long-term health of a society. When this is not so, it is ultimately harmful even to the companies themselves, because as unemployment rises it reduces aggregate demand for products, which puts pressure on companies’ profit margins, which forces them into further cost-cutting, and the economy can easily find itself in a vicious cycle. I for one think that utilizing AI in emerging industries that have the potential for broad transformation, but have yet to achieve it because of unresolved pain points that prevent them from going mainstream, might be a use that could lead to the creation of new jobs in adequate numbers. An example, in my opinion, would be renewable energy.

Personally, I can’t make up my mind on which side of the fence matters will eventually fall, but I hope this post has helped us become slightly more informed about the issues and to see that the debate cannot be easily cast in black and white. In fact, it seems to have more than 50 shades of grey…unfortunately without the hanky panky.

BEFORE YOU GO: Please share this with as many people as possible. Also check out my book, Why Africa is not rich like America and Europe

Bibliography
1. Lee, Kai-Fu. 2018 AI Superpowers, China, Silicon Valley and the New World Order. New York: Houghton Mifflin Harcourt
2. Ford, Martin. 2015 Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books
3. Acemoglu Daron, Restrepo Pascual. Jan 2018 ‘Artificial Intelligence, Automation and Work’ NBER Working Paper Series
4. Standing, Guy. 2017 The Corruption of Capitalism: Why Rentiers Thrive and Work Does Not Pay. London: Biteback Publishing
5. Banerjee Abhijit, Duflo Esther. 2019 Good Economics for Hard Times: Better Answers to Our Biggest Problems. London: Allen Lane
Published on August 31, 2024 21:53

August 27, 2024

All this talk about Industrialization is old news. We are in the IT age



It is common to hear the times we live in described as the “Information Age”, understandably as a result of the immense role Information Technology (IT) has come to play in the global economy and in our personal lives. In western media, the expression “Post-Industrial” is often used as a substitute for the “Information Age”, a term which suggests that we have exited the industrial age, driven by industrial production and manufacturing, and entered an age driven solely by information. This has the unfortunate tendency of creating in the minds of some people (and I have personally met some of them) the impression that the economic activities stemming from Industry no longer matter, or may even be irrelevant since they come from a previous era. The folly of this kind of thinking can be shown by pointing out that if one takes the argument to its logical conclusion, then Agriculture, in this day and age, is even more irrelevant, since the Agricultural Age came before the Industrial Age. “But we need to eat!”, you say. We also need the products and processes stemming from Industry to create a society capable of lifting the majority of its citizenry from a crude, subsistence existence and to prevent societal ills that used to afflict the world, like famine and disease epidemics.

I should also point out that a society looking to lift the majority of its populace from poverty cannot continue to depend indefinitely on importation for the bulk of its industrial goods, as this is inimical to its long-term growth and development. This manifests in a variety of ways that should be familiar to even casual observers of sub-Saharan African economies. They include: widely fluctuating national revenues, which bring on frequent economic instability, all stemming from an overdependence on exporting primary commodities, whose prices are subject to great volatility in the global commodity markets, in a bid to generate the foreign exchange needed to import badly needed industrial goods; an ever worsening exchange rate, as the imbalance caused by heavy importation and an overreliance on primary commodities for generating forex continually goes unaddressed; markedly high levels of unemployment compared to industrial nations, as there is insufficient domestic economic activity to meet the demand for jobs from the teeming masses; and a recurring experience of “brain drain”, as Africa’s best and brightest often try to take their chances in the more functional economies of the industrial West. These are just a few instances of the dysfunctionalities common to sub-Saharan African economies, stemming from their inability, so far, to transform from being primary commodity-driven to being industrial sector-driven. The reader can no doubt furnish more for herself (I am feminist-compliant). IT, though a much welcome development, doesn’t render everything that came before it irrelevant; it merely adds a new layer to the previous industrial layers, which, if done properly, should upgrade industrial performance, much like how the original industrial revolution, mostly powered by steam, was revolutionized by the advent of electricity in the 19th century. Electricity augmented Industry, it didn’t render it irrelevant; neither will IT.

Based on the discussion in the previous paragraphs, it should be readily seen that the suggestion of many IT aficionados that sub-Saharan Africa skip the stage of industrial development and focus fully on IT, while well-meaning, is somewhat misguided. First of all, IT, booming as it may be, does not transcend the economic laws of supply and demand. If every African looking for a job piles into the IT sector, wages will be depressed to the point that much IT work becomes undesirable. This, some IT industry observers claim, has already happened in the West at the lower rungs of IT employment. Secondly, IT does not have the same level of economic multiplier effects, both in the variety of economic activities it stimulates and in the absolute number of jobs created, when compared to industrial production/manufacturing. Employment figures from the manufacturing and IT sectors of India (a beacon of IT) should make this clear. The economic data service Statista presented figures suggesting that employment in India’s manufacturing sector in 2023 was about 35.6 million. Corresponding figures for India’s IT industry from other sources suggested that employment in the combined IT/Business Process Management (BPM) sectors at about the same time was about 5.4 million. Note that Business Process Management isn’t strictly IT. These are back office operations like accounting and customer support/call centres that ordinarily can only be rendered to local clients, but which, because of the explosion in IT, India has been able to package into what are referred to as “tradable services”, whereby a service like accounting can be exported to other countries like manufactured goods. These BPM operations even take the lion’s share of the jobs the combined sectors create, at about 4.14 million, leaving roughly 1.3 million core IT jobs.

This fact about multiplier effects can be further seen if you consider the employment records of flagship companies of the respective industries. In 1990 (when the world had a substantially smaller population), the three biggest auto companies in the USA had a combined market capitalization of $36 billion, revenues of $250 billion, and 1.2 million employees. In 2014, the three biggest companies in Silicon Valley had a considerably higher market capitalization ($1.09 trillion), generated roughly the same revenues ($247 billion), but with about 10 times fewer employees (137,000). Focusing on single companies, Google had 61,814 full-time employees in 2015. At its peak in 1979, in contrast, General Motors counted 600,000 employees on its payroll. Even in associated sectors like retail, there is a stark contrast in employment figures when you compare retail giants born of the industrial age with those born of the IT age. Take Walmart and Alibaba: around 2018, Walmart’s market capitalization was around US$300 billion, while Alibaba’s was around US$400 billion; Walmart’s 2018 revenue was US$505.49 billion and its net income US$8.96 billion, while Alibaba’s revenue was US$39.3 billion and its net income US$10.72 billion; for the period 2010–2017, Walmart’s net sales rose by approximately 20%, while Alibaba’s rose 15-fold; and around 2018, Walmart’s total staff was about 2.2 million worldwide, while Alibaba’s was 101,958. We should consider these numbers against the fact that the USA, the home of Walmart, has a population of over 320 million people, while China, the home of Alibaba, has a population of over 1.3 billion. Twitter (now called X) at the time had about 3,860 employees, and when Facebook purchased WhatsApp in 2014 for $19 billion, WhatsApp had a mere 55 employees. Instagram had just 13 employees when Facebook bought it for $1 billion in 2012.
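A quick jobs-per-dollar calculation, using only the figures quoted above, makes the contrast stark. This is my own back-of-the-envelope arithmetic, not a figure from the sources cited.

```python
# Employees per billion dollars of revenue, computed from the figures above.
companies = {
    "Big Three US automakers (1990)": (1_200_000, 250.0),       # (employees, revenue in $bn)
    "Top three Silicon Valley firms (2014)": (137_000, 247.0),
    "Walmart (2018)": (2_200_000, 505.49),
    "Alibaba (2018)": (101_958, 39.3),
}

for name, (employees, revenue_bn) in companies.items():
    ratio = employees / revenue_bn
    print(f"{name}: ~{ratio:,.0f} employees per $bn of revenue")
```

On these figures, the 1990 automakers supported roughly nine times as many jobs per billion dollars of revenue as the 2014 Silicon Valley giants did, and Walmart employs roughly two million more workers than Alibaba in absolute terms.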

Still on the subject of multiplier effects, the industry/manufacturing sector generally has more of what economists call backward and forward linkages when compared to either agriculture or services. In plain English, linkages simply refer to the number of other sectors/industries in which economic activity is stimulated as a result of activity in a given sector/industry. Take, for instance, the automobile industry. The other industries brought into play as a result of automobile production include Steel, Rubber (both raw and finished tires), Glass, Paints, Dyes, Silk and Rayon textiles, Chemical Fibres, Intermediate Organic Compounds, Electronics, Embedded Software Systems, Composite Plastics, Wood Pulp, Precision Car Parts, Petroleum Products and other Fossil Fuels, Road Freight Transport, Sea Shipping, Taxi and other Car Fleet Services, Wholesale and Retail Trade, and Financing such as Auto loans, with all of them creating employment along the way. I think it is safe to say that IT services, particularly software applications, do not have that level of linkages. As for IT hardware production, that is itself a manufacturing business.

None of this, of course, is to suggest that we shouldn’t be aggressive in harnessing the benefits of IT to the fullest extent. To not do so would be foolish, as one would miss out on the huge productivity gains that IT can deliver. In fact, the logic of capitalism, with its incessant competition, compels companies to glean whatever benefits there are to be had in IT. What I am saying is that for Sub-Saharan Africa to solve its poverty and jobs crises, it can’t skip industrial development and pin all its hopes on IT; it has to be more broad-based than that, particularly given the way most IT services are deployed, where they are usually used to automate an existing business process (there are other options, but that is a topic for another day). Such activity should ordinarily lead to a reduced number of jobs. That is the essence of productivity…well, at least half of it…being able to do more with less, and less would also include fewer staff. There is the other half of productivity, which I discussed in my previous post, that needs to be going on in tandem with the half that automates existing business processes if a society is to continually provide high-wage jobs for the bulk of its citizenry, and that is the development of new capabilities that lead to new markets, new industries and hence new jobs. The obvious candidate for this in Sub-Saharan Africa is the industrial sector, given that what currently obtains is operating far below its potential.

Some of you might point out that in the West, manufacturing does not provide the bulk of the jobs; it is the service sector that does. That is only because their manufacturing sector has become so productive that it has steadily reduced its share of employment while astronomically increasing output. Furthermore, the greatest demand creating those high-wage service jobs comes from manufacturing. I have many times on this blog given the example of the British fashion industry, where only 10% of the jobs created in that industry are involved in the manufacturing of the product, but without that 10%, the remaining 90% wouldn’t exist.

The World Bank carried out a study in 2010 which suggested that Africa had the capacity to take part productively in the light manufacturing segment of the manufacturing sector, to both improve domestic productivity and compete in export markets, if we got our act together policy-wise. We must hurry though. Asian nations like Indonesia, Vietnam and Bangladesh have aggressively moved into this space, picking up where China left off. To put it in perspective, the textile exports of Ethiopia (one of Sub-Saharan Africa’s leading lights when it comes to manufacturing) were $235 million in 2017; from a virtually standing start in the 1990s, Bangladesh’s textile and apparel exports were $37 billion in 2017.

That should make it clear that we need to quickly put in place clearly thought-out Industrial Policies, implemented with determined intent, if we are to create the better life that much of the continent is desperately seeking.

Bibliography
1. Ford, Martin. 2015 Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books
2. Yulek, Murat A. 2018 How Nations Succeed: Manufacturing, Trade, Industrial Policy, & Economic Development. Singapore: Palmgrave Macmillan
3. Paice, Edward. 2021 Youth Quake: Why African Demography Matters. London: Head of Zeus
4. Schwab Klaus. 2016 The Fourth Industrial Revolution. Geneva: World Economic Forum
5. Wikipedia article on Information Technology in India https://en.wikipedia.org/wiki/Informa... _in_India
6. Statista 2024 ‘Number of employees in the manufacturing sector India FY 2017-2023’ statista.com https://www.statista.com/statistics/1...
Published on August 27, 2024 22:44

April 16, 2024

Liberal Arts: The Highest Form of Education



In my last post, I had discussed the need for African universities to inculcate critical thinking and active learning throughout the entire curriculum. I had also mentioned that there are two ways of doing this, one of which is the infusion model, where traditional subjects are structured in such a way that they encourage the development of critical thinking skills and facilitate the asking of open-ended questions. I had also pointed out that the infusion model is the basis for the oldest form of higher education, a form that mostly finds explicit expression in the US. That form of higher education is known as the Liberal Arts tradition, and is the subject of this post.

The phrase “Liberal Arts” derives from the Latin expression “artes liberales”, which might be translated to mean “skills for living fully and freely.” Understood in this way, the phrase suggests the potential for a richer and more fulfilling human experience. Originally, the Liberal Arts embodied what the elite of ancient Greece and ancient Rome (two groups that have made gigantic contributions to the evolution of higher education and the history of thought) considered essential for men to attain the highest intellectual and spiritual development possible. It was also considered by them as the education necessary to partake fully in societal discourse as citizens. To these ends, they came up with two higher education programs, the trivium (grammar, logic, rhetoric) and the quadrivium (arithmetic, geometry, music, astronomy). These were what the Greek and Roman elite thought made men capable of acting as citizens as opposed to slaves (theirs being slave-based economies), who, to the extent that they had an education at all, received one focused on narrow, practical pursuits – what we would call today a vocation, or training for a particular task/job.

Liberal Arts education has changed much in form and content since the time of the ancient Greeks and Romans but the same spirit still pretty much drives its evolution in modern times. Today, a liberal arts education is an endeavor to teach students to reason from first principles (as opposed to just regurgitating subject matter facts that students have memorized), to learn the art of asking insightful questions and to develop an intellectually rigorous understanding of how the world works in general via a process whereby philosophical and historical methodologies play very large roles.

To give some specifics, a modern liberal arts program would involve education pursued in an inter-related fashion across the following:

history and culture of one’s own society; world history and cultures; intercultural competence; epistemology; philosophical and aesthetic traditions; scientific ways of thinking; social institutions (e.g., family, government, economy, education, religion); ethics and values and their expression through human behavior, public policy, and law; quantitative analysis; mathematics and symbolic languages; qualitative analysis; the natural world; the human organism; the arts, literature, music, and other forms of creative expression.

I wouldn’t want you to think that pursuing a liberal arts education is incompatible with specializing in a particular major as one would in a typical university program. It isn’t, but of course, specializing in a major under liberal arts conditions requires extensive modifications to a traditional program of study, particularly in the distribution of course requirements. For example, in US liberal arts colleges where one opts to specialize in a major, typically one-third of courses would be in the major (which could even be a professional course of study like accounting, engineering or business administration), one-third in general education (a mandatory core curriculum focusing on the liberal arts portion of the program), and one-third would be electives.

Electives play a crucial role in a well-designed liberal arts program, and good liberal arts colleges will go to great lengths to ensure that students are given significant leeway to pick a good number of electives entirely of their own choosing from across the entire range of courses offered in the college/university. This is done to reinforce the idea that the student, not the lecturer, is in charge of his/her own education. In a particularly well-equipped and forward-looking college, you might even be given the option of designing your own electives if the ones on offer are not to your liking. Another common feature of liberal arts colleges is that you don’t have to commit to a particular major until late in your second year or early in your third year of studies.

I had mentioned at the beginning of this post that the liberal arts model today finds explicit expression mostly in the US. The earliest institutions of higher learning in the US started off as liberal arts colleges. This group includes some of America’s most prestigious institutions, the top three in terms of prestige being Harvard, Princeton and Yale. These and many more of the early liberal arts colleges would upgrade to full-fledged universities as a result of competition from the then-new universities being established on the research model. The earliest and most prestigious of these research universities include MIT, Johns Hopkins, Stanford and the University of Chicago. Still, the undergraduate portions of the early liberal arts colleges that upgraded to research universities retain a heavy liberal arts orientation, particularly at Princeton and Yale.

The influence doesn’t only go in one direction. Some universities that started off as research universities have evolved a significant liberal arts orientation at the undergraduate level, in which all undergraduates have to undergo a core liberal arts curriculum, irrespective of their selected major. A notable example is the University of Chicago. In all, of the roughly 4,072 institutions of higher learning in the US today, about 680 are pure liberal arts colleges.

In recent decades, other parts of the world have become interested in the liberal arts model, most notably Asia and Eastern Europe. In China, for instance, reforms have been going on since the 1980s to broaden the undergraduate curriculum. In modern China, the undergraduate curriculum has traditionally been designed along rigid, narrow, specialist lines, wholly pragmatic in character and geared exclusively towards economic and social development. The reforms are a response to a recognized need for a more flexible curriculum with significant general education requirements and interdisciplinary study. Peking University (China’s Harvard; Tsinghua is their MIT) provides a suitable example of the changes going on. It has designed a core curriculum for all its undergraduates based on Harvard’s core curriculum. Harvard’s core curriculum consists of seven areas, namely Foreign Cultures, Historical Study, Literature and Arts, Moral Reasoning, Quantitative Reasoning, Science, and Social Analysis. The Core Curriculum at Harvard makes up almost a quarter of an undergraduate student’s study.

Another example is Zhejiang University, which has adopted a mode of education that emphasizes a broad and deep foundation, free choice of specialty, and interdisciplinary exchange and exploration, in order to provide a more open environment for students’ individualized development. Here, students are admitted into one of four broad schools (natural sciences; social sciences; engineering and technology; arts and design) as opposed to a specific discipline or program. Within that school, students are allowed to explore freely for a year and are then required to choose a specialty at the end of their first year or the beginning of their second.

Of course, wherever the liberal arts model of education is introduced, there will always be those reactionaries who ask, “But what are the liberal arts useful for, since they do not prepare you for a specific job or task that can earn you a living?” This critique is not new. In fact, it is as old as the classical learning of ancient Greece and Rome, some 2,300 years ago. Both Aristotle (Greek) and Cicero (Roman) were familiar with this critique, and both responded to it. In answer, they divided education into two parts: the specific, narrow kind suitable for performing a particular task, or what has been termed useful education; and the general, open-ended enquiry into the nature of things that is not directed at anything in particular – the kind of education its detractors call “useless” education.

Both Aristotle and Cicero were of the opinion that this “useless” education was superior to the useful education because it enables one to understand the true nature of things. In modern times, the usefulness of “useless” education – or, to call it by its proper name, liberal arts education – is that it equips students with rigorous mental models that enable them to understand the world in its true complexity, and therefore positions them to take on the world’s most pressing, complex problems, which tend to be interdisciplinary in nature.

I should point out, as further proof of the liberal arts’ usefulness, that they have shaped people who, on the surface, would seem unlikely to have embraced them. I am talking about some of the world’s most influential technology entrepreneurs. Take the late Steve Jobs, for instance. He attended Reed College, which is about the purest of the pure liberal arts colleges you can find anywhere. In fact, he issued an ultimatum to his adoptive parents that if they did not allow him to go to Reed College, they should forget about college/university altogether, because he would not attend anywhere else. Though he would eventually drop out, he still spent an additional 18 months on campus after formally dropping out, attending the classes he found interesting. Steve Jobs always insisted that Apple’s DNA was not technology alone but technology married with the liberal arts.

Another is Mark Zuckerberg. Zuckerberg attended a hugely expensive private senior secondary school, Phillips Exeter Academy (there is a sister school, Phillips Academy in Andover; one of the late M.K.O. Abiola’s daughters was a student there), which adopts a liberal arts style of education suited to senior secondary school students and geared exclusively towards preparing them for college/university. He is known to be able to recite from memory significant amounts of the Greek literary classic, The Iliad by Homer. The 2004 Brad Pitt movie Troy is based on The Iliad. In one notable instance, when Zuckerberg was interviewing a potential recruit at Facebook, the two of them spent the whole time discussing thermodynamics (the branch of physics that deals with the theory and applications of heat flow).

Sergey Brin, co-founder of Google, is yet another. He has often been described as a “21st century Renaissance man”, after the manner of the great Leonardo da Vinci. I once browsed his PhD student page at Stanford, which happened to contain his reading list. For someone who was a computer science/mathematics major, I saw surprisingly few computer science or mathematics books. I did see quite a number of novels; both Wole Soyinka’s Ake and Chinua Achebe’s Things Fall Apart were on that list. He has also produced some artworks and is currently writing a physics textbook.

Given Africa’s present economic challenges, I think implementing an extensive, pure liberal arts model is unrealistic, because on a per-student basis it is more expensive than the large research university model. What seems feasible would be to introduce some liberal arts elements, like active learning, into the large classes of a typical university, and then have students break up into groups for seminar-style discussion tutorials, probably facilitated by graduate assistants, in a bid to mimic the small class sizes and highly interactive nature of the pure liberal arts experience. Another doable step would be redesigning courses to make them more interdisciplinary.

BEFORE YOU GO: Please share this with as many people as possible. Also check out my book, Why Africa is not rich like America and Europe.

Bibliography
1. Nugent, Georgia ‘The Liberal Arts in Action Past, Present, and Future’ Council of Independent Colleges
2. Szelenyi, Ivan ‘The Liberal Arts Education’ New Economy in New Europe
3. ‘A Liberal Arts Education at a Research Institution / Moving The UW Forward’ Communications and Outreach Workgroup 2018
4. W.R. Connor ‘Liberal Arts Education in the Twenty-first Century’ AALE Occasional Papers in Liberal Education #2
5. Morrisey, Sarah ‘The Value of a Liberal Arts Education’ SPICE | Philosophy, Politics, and Economics Undergraduate Journal Spring 2013 Volume 8
6. ‘The truth about the Liberal Arts’ Christendom College
7. Brown, Grattan ‘What is a Liberal Arts Education’ Belmont Abbey College
8. Roche, Mark William 2010 ‘Why Choose the Liberal Arts’ University of Notre Dame Press
9. Becker, Jonathan ‘What a Liberal Education is…and is not’
10. Gu, Jianmin et al 2018. Higher Education in China Singapore: Springer Nature
Published on April 16, 2024 22:45

March 28, 2024

Critical Thinking: The Ultimate Goal of Education



I had ended the last post implying that African universities, despite their challenges, really need to work hard to ensure that students, at the very least, become critical thinkers by graduation. Without intending to offend anyone, it is important to point out that while thinking is a natural process, excellence in thinking is not. It has to be strenuously cultivated. Thinking, left to itself, is often biased, distorted, partial, uninformed, and potentially prejudiced. The rigorous process of cultivation necessary to avoid all these flaws is what fashions undisciplined thought into critical thinking. We have seen what critical thinking is not; perhaps we should now look at what it is.

Critical thinking is, very simply stated, the ability to analyze and evaluate information. Critical thinkers raise vital questions and problems, formulate them clearly, gather and assess relevant information, use abstract ideas, think open-mindedly, and communicate effectively with others. Passive thinkers suffer a limited and ego-centric view of the world; they answer questions with yes or no and view their perspective as the only sensible one and their facts as the only ones relevant.

Critical thinking is different from just thinking. It is metacognitive, meaning that it involves thinking about your thinking, in order to make your thinking better. Unfortunately, societal norms often make it hard for people to develop critical thinking skills. Critical thinking is simply not a typical response to societal problems. What tends to happen is deference to tradition, elders, authority figures, religion etc.

It is important to note that asking insightful, precise questions is a critical part of critical thinking (pun intended). Asking questions is essentially the same as posing problems. In traditional learning, the teacher assigns problems to the students and as a result, teachers have already done a great deal of the thinking that the students ought to have done, if they were learning critically. A major part of learning how to think critically is learning to ask the questions—to pose the problems—yourself. This is often the hardest part of critical thinking. Insightful questions generally tend to be open-ended, meaning that there could be more than one right answer. However, one should note that there will be many wrong answers as well. Open-ended questions foster student-centered discussion, thereby encouraging critical thinking.

Learning to think critically makes one comfortable with ambiguity, which is important because ambiguity is the hallmark of the most important real world problems. Through critical thinking one develops the ability to solve unstructured problems. Traditional education has been accused of spending far too much time on solving well-structured problems, which tends to lead to a rather unhealthy emphasis on memorization, which tends to produce graduates that have little tolerance for ambiguity or unstructured problem solving, thereby rendering them unfit to tackle complex problems.

One may ask if all the trouble is worth it. The answer is yes. A high intellectual standard of critical thinking is essential to participate meaningfully in the social, economic and political aspects of a society. Embracing or adapting to continuous social, cultural and technological change also requires critical thinking. People’s life quality and everything they create, produce and build, depends on the quality of their thinking. In other words, the ability to think critically is an important life skill. Everybody encounters from time to time perplexities about what to believe or what to do, both in everyday life and in specialized occupations. Skillful critical thinking is by definition more likely to lead to a satisfactory resolution of such perplexities than inadequate reflection or a knee-jerk reaction. A disposition to respond to perplexities with skillful critical thinking is thus helpful to anyone in managing their life.

Furthermore, although most people develop some disposition to think critically, and some skill at doing so, in the ordinary course of their maturation, especially in the context of schooling, focused attention on the knowledge, skills and attitudes of a critical thinker can improve them noticeably. I should also point out that critical thinking is what employers consciously or unconsciously look for when they seek to employ university graduates, even when what the graduates studied has nothing to do with the work they will be employed to do (of course, there are some employers who require university graduates simply because everybody else does). Finally, the primary purpose of education ought to be to learn how to think, and not just the mere accumulation of facts.

To inculcate critical thinking into our universities, the lecture format will have to be modified to accommodate what is known as active learning. While the lecture method is teacher-centered, active learning is student-centered, with the student playing a significantly more active role in class activity. The student is treated less as a person to be taught and more as an equal of the teacher, engaging in constructive dialogue with both the teacher and fellow students. Even in a large class, I can imagine a lecturer setting aside a few minutes to allow at least two students to debate a well-chosen open-ended question. I am sure this will cause students to show a great deal more interest in their classes, even the back-benchers, if only for the sole reason of having a good laugh while listening to their fellow back-benchers talk rubbish. But students should beware: active learning calls for a great deal more responsibility on their part. They will have to take the trouble to be conversant with the material before it is taught in class. I admit that making such a change is not without challenges, for both teacher and student. Lecturers come under intense pressure to cover the assigned curriculum, and having to do this while teaching a large class almost necessitates the lecture method, the unfortunate price being a genuine lack of student engagement with the course material. Only the minimum necessary will be done to pass what is often a poorly thought-out exam that does more to end thought than to stimulate it.

As an example of life’s unfairness, a lecturer who decided to take on the challenge of teaching her students to think critically might do her job too well, causing her problems come assessment time if she works in an institution that uses conventional methods of student assessment (i.e. a major exam at the end of the semester). To explain what I mean, I will tell a story of what happened during the development of the animated movie hit Finding Nemo by Pixar Animation Studios.

Finding Nemo, as those of you who watched it will recall, was a movie about a fish on a rescue mission to save his kidnapped son. In this he is aided by another fish that suffers from amnesia. Now, since this was an animated movie, realism wasn’t a high priority (Tom and Jerry amply demonstrates this). The animators were however determined to build as much realism into the movie as possible. To help with this, they hired a PhD student from a local university to give them lectures on fish biology. The lecturer claimed that those were by far the most fulfilling lectures he ever gave. The irony was that never once was he able to complete a lecture according to his lesson plan. Each time, just a few minutes in, the questions would start flying from all directions.

Now, as far as the ultimate purpose of education is concerned, one couldn’t ask for a better outcome. However, if the animators had been students at a regular university, setting an exam would have presented serious challenges to the lecturer. His lesson plans would most likely have been designed to cover the curriculum, and the ultimately haphazard nature of the classes would almost certainly mean that some parts didn’t get covered. This presents a challenge when setting exams. If he drops the parts that weren’t covered, he risks looking incompetent to any examinations board reviewing his exam questions. If he includes them, he risks students doing poorly on the exam despite their evident enthusiasm for the subject, and with that, being queried as to why he would assess what he didn’t teach. This should make clear that trying to inculcate critical thinking in a conventional university requires a very fine balance that is difficult to achieve in practice.

This pressure to cover the curriculum leads to what I consider another big scandal of education – the general horribleness of textbooks. Textbooks are primarily written for the convenience of the teacher rather than the inspiration of the student. They are written in such a way that it is easy for teachers to dish out readings and problem sets, or to create lecture notes that demonstrate breadth of coverage come evaluation time. Students bear the brunt of this by having to deal with textbooks that often appear sterile and are very boring. Small wonder students ditch them at the first opportunity.
Still on the issue of lecturer evaluation, it also doesn’t help that academics are primarily evaluated for promotion on the basis of their research output as opposed to their teaching skills. Hence, they generally do not give the best of themselves in class; they reserve that for their papers.

To be fair to the lecturers, they are not the only ones with misaligned incentives. There are students who would not like a more active mode of learning. They would prefer the passive mode that obtains because it would leave them with a lighter load and more time for extracurriculars. Then there are others who are simply interested in getting good grades, in order to land a well-paying job, irrespective of whether they actually get a solid education or not.

It should be clear by now that changing the mode of instruction to accommodate critical thinking will be a herculean task. Even if one lecturer genuinely wanted to incorporate active learning into her classes, she is not incentivized to do so by the university system. Such a change would have to happen on a university-wide basis, throughout the entire curriculum. An added challenge is that there is bound to be, on the part of the lecturers, a lack of familiarity with the critical thinking approach to education. It will take some time for them to learn it effectively.

A lot of research has gone into how best to develop critical thinking skills in university students. That research has led to the identification of two basic models. The first model involves stand-alone instruction. Here, there is an explicit and overt focus on the forms of good thinking and reasoning and on the art of asking insightful questions, which are then reinforced by examples from everyday life. The other model involves infusion. Here, traditional subjects are structured in such a way that they encourage the development of critical thinking skills and facilitate the asking of open-ended questions. The subjects will often have a multi-disciplinary character in which historical and philosophical approaches loom large.

The infusion model has been the basis of a very old form of higher education – indeed, the oldest – which these days finds explicit expression mostly in the US higher education system. I think its virtual non-existence in the African higher education system is a gap that needs filling. This form of higher education is known as the liberal arts tradition, and I shall be discussing it in my final post on higher education.

BEFORE YOU GO: Please share this with as many people as possible. Also check out my book, Why Africa is not rich like America and Europe.

Bibliography
1. Huber, Richard M. 1992 How Professors Play the Cat Guarding the Cream: Why We’re Paying More and Getting Less in Higher Education. Virginia: George Mason University Press
2. Duron Robert et al ‘Critical Thinking Framework for Any Discipline’ International Journal of Teaching and Learning in Higher Education 2006, Volume 17, Number 2, 160-166
3. Hitchcock, David ‘Critical thinking as an educational ideal’ ResearchGate
4. Olga Lucía et al ‘Critical Thinking and its Importance in Education: Some Reflections’ https://doi.org/10.16925/ra.v19i34.2144
5. Price, David A. 2009 The Pixar Touch: The Making of a Company New York: Vintage Books
Published on March 28, 2024 13:04

March 21, 2024

The Challenges of African Universities



In my last post, I had discussed the effect that globalization was having on universities worldwide and noted the relatively little role African universities were playing in the phenomenon. I had hinted that perhaps this was because African universities were beset with more fundamental, internal problems that would need to be resolved before we could truly get a place on the world stage.

These problems, I know, are not news to the majority of people reading this post, because they would have known them in the most intimate way possible: they would have lived through them. Still, it is important to discuss such problems in a systematic and rigorous manner. This being the data age, perhaps we should start by delving into statistics that show how African universities stack up against global averages.
The sub-Saharan African average gross enrolment ratio for tertiary education increased marginally between 2013 and 2018, from 8.9 to 9.4 per cent. Over the same period, the world average increased from 33 to 38 per cent. Many of the Arab states in North Africa have relatively high gross tertiary enrolment rates for both sexes; in the case of Algeria (51%), the rate is even well above the world average. For Mauritius, the figure is 41%, and for Morocco, 36%. Corresponding figures are 68% for Europe, 61.5% for North America, 51.8% for South America and 28.9% for Asia.

Gross domestic expenditure on research and development (R&D), as a share of GDP, ranges in African countries from 0.82% (South Africa) to 0.01% (Madagascar). The median for African countries is 0.30%, while the world average is 2.22%. The result is that Africa accounts for only roughly 1.3% of global R&D. Furthermore, African research output is heavily skewed towards a handful of countries on the continent, with Egypt and South Africa alone accounting for roughly half of Africa’s scientific publications; an additional 25 percent is generated collectively by Kenya, Morocco, Nigeria, and Tanzania.

Public spending per student is at about $1,000, with some claiming that this figure is declining. This markedly contrasts with public spending per student in the developed world, which is in the region of $9,000 to $18,000.

The number of researchers per million people in African countries ranges from 1,772 (Tunisia) to 11 (Burundi). The median for Africa is 91. For the world it is 799.
One in every nine people who were born in Africa and hold a university degree is a migrant in one of the 34 member states of the OECD, the club of the world’s most developed countries. There are reportedly more Sierra Leonean doctors living in the Chicago area of the state of Illinois, USA than in Sierra Leone.

We have seen the data, but what do they mean exactly? What implications do they hold for what happens on the ground on a day-to-day basis? To that discussion we turn next. To kick it off, perhaps a recounting of the history of the evolution of the African university system is in order.

In the 1940s, Africa had just a handful of universities: about 31 in 1944 (out of about 3,703 worldwide). The numbers rose to 170 in 1969, 446 in 1989, and about 1,279 officially recognized universities in 2023 according to the uniRank database (out of at least 25,000 globally). Another source put the figure at 1,639 as early as 2015 (out of 18,808 worldwide). Up until the 1960s, African university students were typically educated to the highest international standards, with a substantial number receiving part or all of their education at universities abroad. In the 1970s and early 80s, while it was still common for students to undertake at least a part of their studies abroad, economic conditions in many African countries had become so harsh that a substantial number of the African students being educated in foreign institutions who could afford to stay abroad did so.

This was also a politically difficult period for African universities, as the many authoritarian governments of the era created a hostile environment in order to prevent the more radical elements within the university community from questioning the legitimacy of their regimes. Thus began the “brain-drain” phenomenon.

By the mid-1980s, access to opportunities for study abroad, especially in Europe, had so diminished that most students had to undertake their entire education, from first degree to doctoral studies, at home. This occurred at a time when the range and quality of library holdings, as well as the quality of teaching and research, were in decline at most African universities.

The IMF-induced Structural Adjustment Programs (SAPs) of the 80s and 90s, and studies done by the IMF’s sister organization, the World Bank, also played a key role in creating an environment hostile to the growth of the African university system. SAPs led to reduced government investment in critical social sectors like education and public health. In the 80s, the World Bank published a series of papers suggesting that investment in basic education yielded a greater return than investment in higher education. This led many donor countries that had been making funds available for the development of the African university system to withdraw their support, and it was a major source of the decline of African universities in the 80s and early 90s. It should be noted that the World Bank study was not without detractors; the Bank itself retracted its position in 2002.

Fortunately, as the 90s progressed, perceptions of African universities began to change, which led to a fairly rapid proliferation both in the number of institutions, particularly privately owned ones, and in their enrolment numbers. Despite this, many of the problems that started in earlier decades, and that have essentially come to define the character of African universities, remain. These problems run the gamut: inadequate funding, corruption, inadequate infrastructural facilities, shortage of qualified academic staff, strike actions, brain-drain, poor research (in terms of quantity, quality, research environment and, often, relevance), poor remuneration, weak administration and so on. Let’s delve more deeply.

Africa in the last few decades has had to cope with harsh economic conditions, leading, unsurprisingly, to the severe underfunding of higher education. This was of course exacerbated by the significant withdrawal of international aid as a result of the aforementioned World Bank studies. Even where such aid is made available, much of it is rerouted to donor countries to fund scholarships at donor universities. Only about a quarter of the funds released actually get to African universities, and when they do, the funds marked for research have their agenda set by international rather than local interests. Another issue with this research funding is that, often enough, the monies are not funneled through the universities’ institutional mechanisms for handling research at a university-wide level, but go directly to individual researchers who have managed to hustle an international grant. This pattern of funding does not support the building of institutional capacity. It is also unlikely to make much of a contribution to systematic theory building, which is required for fundamental breakthroughs.

In addition to the problems of funding research, there are problems with the nature of the research itself. The research output from African universities is very low, and much of it is of questionable quality. A further problem is the relevance of the research carried out: as much of it is externally funded, the topics researched may not be of direct relevance to national development. Even when they are, there is the added problem of such research not making an impact because it is disconnected from the policymaking circles in the country. Compounding this is the fact that the research is written up in esoteric journals that policymakers have neither the time nor the inclination to decipher.

Still on the topic of research, much of modern research is carried out in multidisciplinary teams; too many African researchers still work in isolation. It will hardly be surprising that many of the problems with African research stem from hostile environmental conditions. The key elements of a research infrastructure are often missing, sometimes glaringly so: laboratories, equipment, libraries, effective systems of information storage, retrieval and utilization; appropriate management systems; and policies that facilitate and support the research enterprise, including incentives that recognize and reward high-calibre research. As a result of all of this, even though African universities are built exclusively on the research university model pioneered in Germany, which I mentioned in my last post, it would be a great stretch to call a lot of them research universities. They aren’t great halls of teaching either. This stems in part from the use of the lecture method, which, to be fair, is necessitated by the fact that African universities admit far more students than they can adequately cater for, which itself stems from the soaring demand for university education. Lecturing to large classes does not encourage independent and critical student thinking. At the postgraduate level, African graduate study and post-doctoral training are considered weak by international standards.

Another dimension of the problems faced by African universities stems from the quality of the bureaucracies running them. Problems in this area include unsatisfactory recordkeeping, bureaucratic corruption, and ineffective structures and systems of coordination, all of which hamper the effective functioning of universities.

Yet another problem is the paucity of university-industry links in Africa. The reasons are many: the small and mostly informal nature of African economies makes such relationships difficult to initiate and sustain; the universities lack the institutional capacity to make such linkages meaningful, resulting in a lack of confidence on the part of industry in the ability of universities to deliver research findings that will be useful to it; the absence of policies spelling out the role of universities and their precise contribution to society makes it difficult for universities and industry to engage one another in meaningful collaboration; even when such policies exist, they are not usually enforced; a cultural divide makes it difficult for universities and industry to see eye to eye; there is the ever-present issue of a lack of funding; and, often, there is a sheer lack of interest in such relationships on the part of both universities and industry, among many other reasons.

We have looked at the problems; now for a brief look at solutions. A good number of the problems turn on the issue of lack of funding. That is an issue that won’t be solved overnight. Lack of funding ultimately stems from the low productivity of African economies. I have mentioned before that productivity is the key to wealth. Nations like the UK and the US solved the productivity problem in centuries past, and nations like Japan and China have done so more recently, through the aggressive use of industrial policy. This takes the form of government selecting a few strategic industries that it wants to drive growth and then sending very strong signals to the private sector, in the form of import tariffs, production subsidies, tax holidays, clear regulatory and competition frameworks that keep uncertainty to the barest minimum, and so on, that strongly encourage investment in productivity-raising sectors. Doing so would begin to reduce the size of the informal economy in Africa, which is abnormally large: estimates run between 50 and 70% of GDP, compared to roughly 7-10% for nations like the US, UK and China. Greater formalization of the economy will increase the tax base, making more money available for investment in the universities. In light manufacturing, there are many sectors, like textiles and footwear, that have started many a country on the path to wealth-creating productivity and that do not initially require the deep scientific expertise that is the forte of universities. Such sectors can serve as a springboard to deep-science industrial activity further down the line.

A cynic might ask what there is to compel government to follow through. That is where you, I and everybody else come in. The formation of strong civil society groups is essential for social development. In successful societies, civil society forms an effective counterweight to government. That needs to happen in Africa as well.

In the short term, there are some minimums that a university should be able to guarantee. A university is supposed to help students do at least the following:

• Think logically with words and numbers
• Write and talk clearly
• Respond aesthetically
• Establish a moral framework
• Embark on a journey of lifelong learning

These happen to be the essential constituents of critical thinking. We shall look at that in the next post.

BEFORE YOU GO: Please share this with as many people as possible. Also check out my book, Why Africa is not rich like America and Europe.

Bibliography
1. Okolo, Michael Monday et al. ‘Higher Education in Nigeria: Challenges and Suggestions’. Middle European Scientific Bulletin.
2. Adu, Kajsa Hallberg. ‘Resources, relevance and impact – key challenges for African Universities’. The Nordic Africa Institute, NAI Policy Paper 2020.
3. Sawyerr, Akilagba. ‘African Universities and the Challenge of Research Capacity Development’. JHEA/RESA, Vol. 2, No. 1, 2004, pp. 211–240.
4. Zeleza, Paul Tiyambe. ‘The Giant Challenge of Higher Education in Africa’. The Elephant.
5. Wolhuter, C.C. et al. ‘Higher Education in Africa: Survey and Assessment’.
6. Sa, Creso M. ‘Perspective of Industry’s Engagement with African Universities’.
7. Huber, Richard M. 1992. How Professors Play the Cat Guarding the Cream: Why We’re Paying More and Getting Less in Higher Education. Virginia: George Mason University Press.
