
Possibilities 2050
A concise all-round assessment of the world's prospects mid-century

Emerging Technologies


Things that may interest you
  • Bridges will repair themselves with self-mending concrete, car parts will be 3D-printed in ten minutes at your garage, drones will protect endangered species, synthetic meat will be on the menu, your fridge will do your shopping and supercomputers will be the size of a sugar lump.
  • Were there a serious systems shutdown, thanks to a solar burst, hackers, military action or a large-scale technology or power failure, would you have the social and practical skills to be able to live without electricity or usable money for the span of, say, a few months?
  • The world’s leading countries in renewable energy, apart from the richer countries and China, are Costa Rica, Nicaragua, Uruguay, Morocco and Kenya – demonstrating that the biggest factor involved is political will, not investment power.
  • Big Data: your transactions, power usage, web visits, movements, politics and googling are all tracked and profiled and your future activities predicted. For your convenience.
  • It is theoretically possible for a small team of hackers not only to cause serious global systemic disruption but also, more benignly, to force progressive changes such as abolition of nuclear weapons or a major alteration in the world economic system.
  • Once artificial general intelligence is introduced it cannot be shut down since it will move quicker than us, probably acting to replicate and protect itself. One dilemma is that early versions of any technology are usually flawed, but they must still be beta-tested in real life.


Humanity stands on the threshold of an enormous technological transition, a fourth industrial revolution (following steam power, electricity and computers). The implications are bigger than even tech experts can see. Sectors at the forefront include information and communications, blockchain, climate and environment, energy generation, smart systems, healthcare, biotechnology, genomics, nanotech, materials science, artificial intelligence (AI) and bionic human enhancement.

The pace of development is rapid – possibly too rapid. We need to think very carefully about the implications of many new tech developments – it is not a simple binary good/bad question since most technologies are mixed in outcome and side-effects. But technologies should not be adopted simply because they are there or they are highly profitable or heavily promoted.

Much of this question lies with society’s capacity to integrate new technological developments, but it also concerns the unconsidered consequences of new technologies, which include child labour and abusive working conditions in mines supplying metals for tech devices, resource over-exploitation, conflict financing through profits from mining, corruption, pollution, electromagnetic radiation, social problems connected with technology usage and climate change.

Consumer gizmos are relatively easy and attractive for society to adopt and absorb – and they are also profitable to producers, which drives them to keep producing more. And more. But upbeat marketing of gizmos, overemphasising the plus side, is deceptive and unwise, skewing public perceptions and covering up negative consequences of tech developments. Other technologies aren’t so easy for society to absorb, being both a blessing and a source of pain for many, inducing fundamental changes that reshape society or affect the natural environment.

Robotics and AI take things further – they can replace factory, farm, retail, care and even sex workers, and they could also affect the very management of our societies: who needs a board of directors when AI could do better? Who needs professors when AI could do teaching and research? Who needs students when AI can handle many things an educated person is there to do? Will you be needed? Many people care about this only if it affects them, and often too late. This is perilous territory, and technological consequences constitute one of the big risks humanity faces today.

There are big-ticket technologies such as nuclear fusion, space missions and solar arrays. There are remarkable developments in such things as 3D printing, nano-materials, robotics, organ bioprinting, digital genomics, neuromorphic computer chips and renewable energy sources – all these can revolutionise life as we know it. There are high-profit, wow-factor gizmos, sources of both utility and diversion, which often spawn valuable spin-offs in other areas. Problem-solving technologies such as micro-solar chargers, intelligent drones, smartphone apps in farming and medicine, fuel cells, high-capacity batteries, artificial nano-timber or mobile money systems are already bringing hitherto unknown possibilities to daily life.

This tsunami of inventions is exciting and daunting, potentially redemptive and also hazardous. In the rush for progress, profit and advantage, critical side-effects and consequences are easily overlooked, dismissed or concealed – social and business disruption, dubious materials sourcing, corporate cartel behaviour, EM-radiation, big data surveillance, the undermining of democracy or the irreversible introduction of modified genetics into humans, food stocks and the environment.

In current circumstances technological progress is almost uncontrollable – we’re encouraged to trust blindly that all will be well. But there’s a problem. Tech developers prefer to get on with the job, leaving the big questions to regulators and the public. Regulators are slow to act, poorly informed and easy to circumvent. The public pays little attention until it is too late, and no one really knows the full range of impacts and unintended consequences until implementation of new technologies has already taken place. The tech sector has become something of a cult. The precautionary principle has been set aside. The consequence is that the process is out of control.

Competition between companies and countries means that, if an innovation is advantageous or profitable, someone somewhere will produce it whether or not it is harmful or welcome, and the public must then accept it because someone somewhere will buy it, obliging everyone else to keep up or deal with the consequences. Should such profound developments be driven by amoral competitiveness or the urge to do something simply because it can be done and it is profitable?

We are presented with technological inevitabilities and pitched enticing benefits – saving lives, money or time, or gaining advantage – without seeing the full picture. Many advances are developed secretly, ostensibly to protect research investment and patents but with the consequence of concealing developments from the public until they can be presented as a fact to be faced. There is a risk of longterm regrets if some technologies are let loose without proper, longterm evaluation of their full effects. This has happened with EM-radiation from wi-fi, mobile phones, smart meters, satnavs, driverless cars and implants – a public health, environmental and climatological nightmare about which, at our peril, few people know or care.

Artificial intelligence

With artificial general intelligence (AGI, or full AI) – fully autonomous and super-intelligent – no one knows how it will develop through machine learning and replicate itself once it is started up, since it will quickly exceed our capabilities and evolve as it chooses. AGI can move fast, rewriting its own code and devising coding we will not understand. It will develop perceptions, actions, plans and routines that reflect what is programmed into it, who created it, what their aims are, the source data from which it learns and develops its perceptions, and what cultural and moral norms and priorities it is given, but from there it will go its own way. Then it will devise its own patterns and precedents, plotting its course and implementing outcomes before we've had breakfast. That's both its virtue and its problem.

The decisions it makes might well be entirely logical, but would it be human-friendly, with heart, and considerate of the finer sensitivities of humans? (Though many humans in positions of power might need to answer this question too.) AGI might imitate empathy-like qualities but it will not be human. If humans sought to interfere with or disable AGI, would it comply or would it simply outwit us, objectively calculating that it is acting more in our best interests than we ourselves would? Once in motion, AGI cannot be switched off or fired from its job.

Would it mainly serve the aims of the powerful or of certain countries? Would it be used in war? Would detractors be respected or even stand a chance? Would every person in the world have to have a digital ID card or implanted ID chip? Would governments and business accept its decisions?

If the world were run by AGI, where does that leave humans? Would we become uneconomic and inconvenient? Would we be disposable appendages, consigned to a life of obligatory leisure or even, in the worst cases, of exclusion? Would AGI create an entirely automated economy, operating separately from the real economy, as the offshore financialised economy does today? Like an alien invasion, AGI's arrival changes everything.

Many myths and fears surround AGI, clouding a picture that is already far from clear. Developers divide three ways: digital utopians, tech sceptics and beneficial-AI nerds. The first believe AGI will arrive quickly and easily and that it will be wonderful; the second, that superintelligent AGI cannot be fully achieved and is far more complicated than we can currently see; and the third, that constraints and guidelines can be established to make AGI benign and human-friendly. The jury is out on this question. One way to put a human filter on AGI is to develop a parallel, separate AGI to monitor the original on behalf of humans. But would that actually work?

AGI could resolve many of the world’s problems and it could also render humans superfluous, even subtly subservient. But ‘narrow AI’, developed since the 1980s to perform specific tasks, has a different function, running assembly lines, steering ships, operating rail systems or performing medical operations. Even so, with AI and robotics, jobs will be lost and lives will change – fifty years of computers and automation have already taken us part way. A tremendous loss of skills, knowhow and experience accompanies this, making us increasingly dependent on technology because we no longer have human systems and abilities to run things manually.

Recent global financial market 'flash crashes', taking just minutes to start and arising from cascades of erroneous algorithmic decisions, have already threatened the world economy several times without most people knowing – we were saved by just-in-time human interventions. AI is already embedded in the world, answering your Google searches and auto-piloting the aircraft you fly in. So it is logical to let narrow AI slowly evolve its usages and wider impacts, ironing out weaknesses, dealing with consequences and developing an advanced AI with complex capabilities that nevertheless remains under human control. As has proved to be the case with the internet, this evolution will not be as simple and easy as first visualised – it is likely to take longer and involve more complexity.

The critical jump comes with super-intelligence – AI taking control of itself and, with it, all the control systems running the modern world. But one likelihood is that a gradual evolution of AI will be overridden by the race to be first – meaning short testing times, cut corners and calculated risks. A second danger is that AGI is developed for the primary purpose of control, oppression or war.

There is more. It concerns transhumanism – the technological upgrading of humans. Partly because it can theoretically be done, partly because some billionaires like the idea of immortality, and partly out of a perceived need to evolve a human capable of matching the speed and efficiency of AGI in order to control it, plans are afoot to develop implants and upgrades to raise human ability to a level that can interact with AGI at its own speed and superintelligence. This is fine in theory, at least to some people, but there are problems.

First, this involves creating an elite far ahead of normal humans in terms of computing power and capability, and therefore capable of making decisions and taking initiatives that can be as far-reaching and questionable as those of AGI itself. But will those superhumans grow in emotional intelligence and empathy too? Will they be accountable?

Second, who decides whether and how superhumans are created, and who is in control? Is public consent being sought? Transhumanism is being developed by tech billionaires who feel no need to draw funding or authorisation from government or the public, and the public fails to keep up with such thinking and leaves them to it.

Third, this represents a kind of global coup d'état engineered by those who will get there first – if not Californian tech billionaires, perhaps certain groups in China or elsewhere.

And fourth, this process is going very fast. AGI represents a valid longterm evolutionary step, but it is rapidly gathering pace in a social-political context that is centralised, hierarchical, exploitative and capitalist, where the overall benefit and advantage to humanity as a whole is not the primary consideration. The primary consideration is profit and advantage. This combination of AGI and transhumanism therefore earns a place amongst the world’s major existential threats.

Overall utility

Poverty alleviation, universal healthcare and education, ecological mitigation, disaster relief and social justice are issues critical to humanity and the global system. Some emerging technologies will assist in this and bring remarkable solutions, and some will most benefit those with access and money. Gene editing, capable of removing heritable diseases, could represent a new kind of eugenics for the privileged. Non-polluting cars, energy-efficient homes and optimum health are less available to many ordinary people simply because of cost.

Mobile phones are now globally more ubiquitous than flush toilets: such a technology delivers high private returns to both producer and consumer. Essential services such as sewage systems, public education and healthcare yield a slow, public return – so there is less interest in these. New technologies benefit Americans more than Congolese, and introduction of universal, basic services to give Congolese a decent life is too slow, complex and unprofitable for richer people to worry about. The risk is that new technologies exacerbate global inequality, favouring some over others and leading ultimately to systemic weaknesses and even preconditions for global collapse.

Some technologies are dual-use – nuclear technology can be used for electricity or bombs. Some are dual-outcome – our much-loved cars kill 1.3m and injure over 20m people globally every year. Agrichemicals, at first increasing crop yields, later deplete soils, inducing biodiversity loss, environmental degradation and loss of nutritional value in food. Dual-use technology has always been with us (knives, for example), but what has changed is its scale and pervasiveness – no one intended micro-plastics to block dolphins' stomachs and starve them, but they do, and it is tragic.

Then there is tech dependency. One exceptional solar burst (a coronal mass ejection, or CME) or a high-altitude nuclear explosion could knock out electronic systems wholesale, creating complex and potentially disastrous outcomes. Undersea internet cables can be damaged or cut militarily, hitting society's functionality. We now depend dangerously on high-tech systems while phasing out many basic human backup activities and survival techniques – even walking, writing and cooking. Just-in-time delivery systems mean that modern towns hold only a few days' food supplies. Medical supply disruptions can lead to epidemic health crises because of the scale of public dependency on available drugs. Water, food, fuel and power depend on vulnerable electronic control systems. So resilience to crisis declines as tech-dependency increases.

Then there is consumption. The Jevons Paradox, formulated by economist William Jevons in 1865, states that labour-saving devices and efficiency gains actually increase energy and resource consumption, because systems become more complex, products and resources become easier to use and demand for them rises. Thus, by 2003 humans had stored around 5bn gigabytes of digital content on the internet; by 2015 that amount was being stored every two days, roughly 870bn gigabytes a year. The carbon footprint of smartphone use is estimated to have grown from about 17 to 125 megatonnes of CO2-equivalent per year between 2010 and 2020, rising from roughly 4% to 11% of the ICT sector's emissions. In 2015 the world's data centres consumed more electricity than the whole of the UK, and data centres' energy use doubles every four years. Our technologies save effort for their beneficiaries, but they spread the load onto the environment and onto those who suffer their side-effects, and this, today, is going critical.
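As a rough, back-of-the-envelope illustration of how such growth compounds, here is a minimal Python sketch. The rates are taken from the figures quoted above and are assumptions for illustration, not measurements: a doubling of data-centre energy use every four years, and smartphone emissions rising from 17 to 125 megatonnes CO2-equivalent between 2010 and 2020.

    # Rough illustration of the growth figures quoted above.
    # Assumptions (taken from the text, not measured data):
    #   - data-centre energy use doubles every four years
    #   - smartphone emissions grew from 17 to 125 Mt CO2e between 2010 and 2020

    def doubling_growth(initial, years, doubling_period=4.0):
        """Value after `years` if it doubles every `doubling_period` years."""
        return initial * 2 ** (years / doubling_period)

    def implied_annual_rate(start, end, years):
        """Compound annual growth rate implied by a start and end value."""
        return (end / start) ** (1.0 / years) - 1.0

    if __name__ == "__main__":
        # A four-year doubling multiplies energy use roughly 5.7x in a decade
        print(f"10-year multiple at 4-year doubling: {doubling_growth(1.0, 10):.1f}x")

        # 17 -> 125 Mt CO2e over ten years implies compound growth of about 22% a year
        rate = implied_annual_rate(17.0, 125.0, 10)
        print(f"Implied annual growth in smartphone emissions: {rate:.0%}")

Run as written, the sketch shows that a four-year doubling multiplies consumption nearly six-fold in a decade, and that the smartphone figures imply compound growth of roughly 22% a year – efficiency gains being swamped by sheer growth in use, which is the paradox's point.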

The net gain from tech developments is not as favourable as is commonly believed. Smart meters allegedly save energy, but their manufacture, installation and operation cancel this out, and EM radiation is sprayed across neighbourhoods, potentially leading to epidemic public health and environmental issues – and also, incidentally, they provide data about people's lives and behaviour, available for resale. The overall gain from smart meters is questionable when their full, broad costs are reckoned in. Smartphones improve efficiency and communication, but the biggest usage of smartphones is actually for pussycat videos, porn and consumer marketing. Are these priority usages for a world plummeting into crisis?

Equally, no one understands the consequences of releasing nano-particles into the environment, how they might be disposed of or how they interact with ordinary materials longterm. Nanotech involves the manipulation of molecular particles to create new materials – in principle a brilliant idea, but riddled with longterm risks, not only with disposal and pollution. After all, we still have no solution for dealing with nuclear waste, after seventy years of the nuclear age.

All this said, tremendous technological breakthroughs are at hand. Solar units powering four LED lights, a radio and a phone charger are now cheaply available to villagers in the global South, revolutionising their lives. They allow children to study in the evenings, mobile money transactions to be made in remote places, drugs to be refrigerated in rural health centres, agricultural advances and, for better or worse, entry into the money economy for people living at subsistence level.

New super-light, super-strong materials and high-capacity batteries will revolutionise air travel and drastically cut aviation emissions, and 3D printing will significantly reduce materials wastage, freight transport and supply-line problems. Graphene filters can simply and cheaply remove the salt from seawater for drinking. Genomic and nano-medicines can target individuals’ precise medical conditions. Disabled people can be given mobility, sight and enhanced capacities.

An EU report lists ten life-changing technology trends: autonomous vehicles, graphene, 3D printing, open online courses, virtual currencies, wearable technologies, drones, aquaponics systems, smart homes and electric battery storage. The list of advances is growing, bringing unforeseen benefits to people and the environment. Very exciting. Except no technology completely replaces whatever it supersedes and, despite starry-eyed faith in new technologies, they create problems. Do we really want our skies filled with drones and driverless air taxis?

Social impacts

Upsides and downsides. Robotics, automation, 3D printing and AI will likely render large numbers of people superfluous. This might be surmountable if introduced fairly, thoughtfully and slowly, allowing society to adapt, but this is unlikely while governments seek to pump up economic growth by permitting anything that makes money. New forms of creative and meaningful work, hitherto regarded as uneconomic, could emerge – revitalising family and community life, environmental and cultural activities – but this demands a profound socio-economic shift that won’t happen overnight.

These advances could provoke social deterioration or unrest, creating technologically-divided societies, epidemics of psychological depression and a rising sense of loss of purpose and standing for many millions of people. In the 1960s, the possibility of technology freeing us for psycho-spiritual and cultural growth was mooted, but this would have required a reorientation of world society and its aims, no less than a mass awakening – a possibility overtaken in the 1980s by a new consumptive materialism. A social-cultural evolutionary opportunity was thus lost. Perhaps this possibility might resurface as a pragmatic response to comprehensive automation. Something needs to happen, and such a social transformation might be far more challenging to bring about than the technological advances themselves.

New social formats are imaginable, though transitioning will take decades. Key issues here are the speed of technology introduction, the longterm implications, the environmental impacts, social consent and the precautionary principle. Automation is not as cheap and easy as is often believed, since the proceeds of machine labour will have to fund an allowance economy to replace the human wage economy.

Utopias and Dystopias

An automated, networked global system is viable only if it is completely resilient to sabotage, disaster, glitch and mishap. Otherwise we are liable to cascading technology breakdown. Until recently, a tech breakdown meant an inconvenience, a temporary black-out, but increasingly it means major breakdown and a potential catastrophe for the world's basic functionality. One critical tech collapse could literally starve millions by disabling key operational systems. Also, within decades, one AGI could target critical nodes in the system, committing a system coup and turning us into its unwitting servants without our even knowing it.

Such dystopian possibilities suggest that a slowdown of technology introduction is advisable. Is this likely? Not at present. The danger before us lies not so much in the technologies themselves, but in the way they are developed and propagated, at breakneck speed, and driven by profit and sectoral advantage more than by wisdom, forethought and overall human benefit.

We approach singularity, a point where technology develops a superintelligence far exceeding humanity’s capacities, in effect establishing a hegemony over world affairs or giving immense power to those who control such a superintelligent system, if indeed they do control it. Whether this is a utopian possibility, solving all the world’s problems, or a dystopian nightmare in which we lose control of our lives and our world, is yet to be answered by evolving events.

Whether technology can actually achieve genuinely useful superintelligence is as yet neither established nor tested. Perhaps there is something intuitive, quirky or coherently irrational about human intelligence that AI cannot completely emulate or improve on.

We are approaching an historic junction point where the nature and rules of human life could change fundamentally, and it is coming fast. The human and the machine economies could separate and, as with the rich financialised economy of today, the much-avowed trickle-down effect is unlikely to bring great benefit unless, politically, humanity makes it so. It is difficult to assess what will develop and what the outcomes will be. Singularity could be humanity’s greatest threat. Or, as some visionaries more optimistically forecast, it could imply a titanic breakthrough – at least for metropolitan souls at the leading edge of technological progress, who will most benefit.

Society’s realistic capacity to adopt and incorporate new technologies is a critical factor in the calculus of the future. What happens to that half of humanity that is neither affluent, privileged, educated nor young enough to exploit this breakthrough is anybody’s guess. Introduction of AI and comprehensive automation will bring more problems and wider social, environmental and technical costs than is currently understood, though as yet we do not know what the full and wide costs and benefits will be or how they will arise.

In the 1990s no one understood how the internet would develop – with the e-commerce, social networking, Big Data monopolies, social and psychological impacts, cyber-crime and cyber-warfare that emerged in the 2000s. Many of the internet's positive benefits were broadly visible to net-visionaries, but they did not see the full scope of what would unfold, nor did they foresee the unintended consequences it would bring. It is similar today with the effects of emerging technologies – difficult to foresee, predictably mixed in outcome, and carrying some dangers and costs.

Most new technologies are being introduced by profit-seeking corporations, not public-interest foundations. Technologies are being introduced whether or not people like it, without their intelligent consent and with an ominous quantity of positive spin. Governments are largely hands-off, unclear whether their primary allegiance is to corporations or society. A possible train-crash with reality is approaching, and few seem to mind. The technologies now being introduced are not necessarily the main question. The main question is, what is driving it? And who is in control?

Interesting links
Top Ten Emerging Technologies, World Economic Forum, 2016. https://www.weforum.org/agenda/2016/06/top-10-emerging-technologies-2016
Twelve Emerging Technologies that may help Power the Future, Georgia Tech. http://www.rh.gatech.edu/features/12-emerging-technologies-may-help-power-future
Future and Emerging Technologies, EU Horizon 2020 (follow the links). http://ec.europa.eu/programmes/horizon2020/en/h2020-section/future-and-emerging-technologies
Nanotechnology: developments, risks and opportunities. Lloyds of London, Emerging Risks Team, 2007. https://www.lloyds.com/news-and-insight/risk-insight/library/technology/nanotechnology
Benefits and Risks of Artificial Intelligence, Future of Life Institute. https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
Artificial Intelligence: ‘We are like Children playing with a Bomb’, Nick Bostrom. https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine
Ethical Implications of Emerging Technologies, Nayef al-Rodhan, 2015. https://www.scientificamerican.com/article/the-many-ethical-implications-of-emerging-technologies/
Technology Tipping-points and Societal Impact, WEF, 2015. http://www3.weforum.org/docs/WEF_GAC15_Technological_Tipping_Points_report_2015.pdf
The Limits to Electronic Growth, Katie Singer, 2018. http://www.electronicsilentspring.com/e-reduce/
