Martian Colonies: Building a Space Empire

After the Successful Moon Landing, the Next Human Target Is Planet Mars. Technological Challenges Aside, Is This Quest Commercially Viable?


ON 25 MAY 1961, President John Fitzgerald Kennedy addressed the United States Congress and issued NASA, the US space agency, a historic challenge: to land a man on the moon and bring him back home safely. Against the backdrop of the Cold War with the competing superpower, the Soviet Union, the message was framed as a national imperative.

The time allotted for this daunting task was less than a decade. And indeed, mankind achieved the gigantic leap onto the moon on July 20, 1969. Much has changed since then; the Cold War rivals are now partners on the International Space Station. Having conquered the mysterious moon, humankind is now looking at new horizons beyond the lunar surface. NASA is now keen on journeying to Mars and beyond.

In my previous article, The Superdrill: A Journey to the Core, you can read an account of an earth-drilling project in which the Russians surpassed the Americans. Here we turn to the realm of space travel, where the United States has not only taken the lead but is also harboring far more ambitious goals. Though a considerable range of conspiracy theories claims the American moon landing was faked, traveling to Mars and beyond will be a hell of a ride.

Furthermore, in the twenty-first century, human ambitions go far beyond footsteps and flags; they aim to visit Mars and colonize it. Can humans settle on Mars or other planets, and why should or should we not do that? These are the central questions of this article. But let us first look at the historical background of this seemingly futile endeavor.

Why Bother Traveling to Space?

Humans are explorers by nature. However, the first step that humankind placed on the moon was not taken just out of curiosity. Nor was it to prove our technological credentials against other creatures. The real motivations behind space travel and the consequent moon landing were purely political. The United States and the Soviet Union, the two superpowers of the day, were locked in a fierce struggle to prove their dominance. This struggle is called the Cold War.

During the Cold War, the two mighty giants were competing in many fields ranging from economy to social philosophy. Technology was no exception. Though it might seem a little immature, the US and the USSR were contesting like two bored kids challenging each other over apparently trivial things.

Though the United States claimed the final victory in the space race, the Russians had several firsts: the first man in space (Yuri Gagarin), the first spacewalk, the first space station, the first satellite to orbit the moon, and the first pictures of the lunar far side. When the Americans clinched the conclusive triumph by landing Neil Armstrong and Buzz Aldrin on the lunar surface, while their crewmate Michael Collins orbited overhead, the conquest was finally over.

The space race between the US and the USSR inadvertently contributed to a plethora of technological developments. To name a few: weather-forecasting satellites, microcomputers, computer graphics, compact discs, earthquake prediction systems, radiation leakage detectors, flat-panel televisions, GPS, and many more are consequences of space research.

Success begets further desire. Landing on the moon was an unprecedented achievement in mankind’s history. However, we do not sit contented with the memories of a five-decades-old victory. We are looking forward to new horizons beyond the moon.

There could be several attractions alluring mankind to travel to the moon or other planets and possibly colonize them. Not least among them is that humankind becomes less vulnerable to extinction, at least in principle, if it resides on more than one planet. It is analogous to the common advice: do not put all your eggs in a single basket.

The NASA Constellation Program

In 2004, NASA launched its new moon-bound vision, followed by the Constellation Program (2005-2010), aimed at completing the International Space Station as well as repeating the epic act of lunar travel. The ultimate goal, however, was a crewed journey to Mars; this time NASA was looking for a sustained human presence in space. And all this was to be accomplished before 2020, which was easier said than done.

Rather than trotting around the moon for a couple of hours, the Constellation astronauts would embark on missions that could last months. For this purpose, researchers at NASA would have to develop new tools for enabling prolonged living on the moon. Not only would they need to build semi-permanent habitats but also space stations for maintaining supplies for the moon or Mars dwellers.

The foremost technological challenge is that NASA’s current rockets are too small and not robust enough for the task at hand. Thus, NASA is developing new rockets, called Ares I and Ares V, that will be larger, taller and stronger than their 1960s Apollo counterparts. While engineering a return trip to the moon and beyond will be no trivial pursuit, experts agree that the biggest hurdle will be financial.

How much money can the United States spend on a fancy mission like this? During the Cold War era, given the political heat between the two superpowers, the significance of a moon landing was enormous. During the 1960s, a vast majority of Americans reckoned the expense of Apollo justified against national security threats.

Today there is no Cold War, and there is no competing superpower. What would be the rationale for such an ornamental pursuit in the harsh economic climate of the twenty-first century, many Americans ask. During the Apollo years, NASA’s budget was almost five percent of the federal budget. Today, it accounts for less than one percent.

Contrary to these arguments, NASA maintains that there are a host of good reasons for continuing its space exploration ambitions. Proponents argue that the initiative should not be viewed as a superfluous space thrill. Rather, it will lead to technological advances such as high-efficiency batteries, energy storage systems and newer life-support mechanisms, inventions that could be equally beneficial on earth and could spark countless further innovations.

Unmoved by NASA’s pleas that the Constellation Program was a worthy goal, the Obama administration cancelled the program in 2010, reversing one of its election campaign promises. With $9 billion already spent, the fiscal burden was too much for the United States to bear. Will 2030 be a realistic timetable? Probably not.

A Trip to Mars

In 2013, Mars One, a private venture aimed at conquering the red planet, started inviting applications for a 2023 Mars colonization mission. If you were offered such a choice, would you accept? Before signing up for any such adventure, or possibly misadventure, please be informed about your destination and the likely perils of this vast undertaking.

Mars has fascinated humans since ancient times. Early peoples believed that the red dot in the sky was stained with the blood of fallen warriors. It is, in fact, named after the Roman god of war. Mars is the fourth planet of the solar system and the one most similar to earth. It is this similarity that has given rise to thoughts of possible life on Mars.

Mars is smaller than earth; its diameter is about half that of our planet. While a Martian day is only thirty-nine minutes longer than ours, a year on Mars is nearly twice as long: 687 earth days, to be precise. Just like earth, Mars has seasons too; minimum winter temperatures can drop to -200 degrees Fahrenheit at the poles, while summer temperatures at the equator can reach 80 degrees Fahrenheit. The force of gravity on Mars is only 38 percent of what we feel on earth.

The quest to reach Mars started as early as the 1960s; since then, 43 missions have been launched, of which only 23 have succeeded. Apart from the United States and Russia, a number of other countries including Japan and China have tried their luck with the red planet, India being the latest contestant in this race. However, all of these attempts have been unmanned; the probability of surviving a Mars trip is fifty-fifty in the best-case scenario.

It is certainly possible for humans to travel to Mars. But why should we go there when the journey will be expensive, dangerous and very long? This will be a one-way trip; once you embark on the journey, there is no going back. The money and other resources required to carry everything needed to build a reliable life on that distant world would be mind-boggling.

Various reasons are advanced for selecting Mars as an abode; some of them are quite trivial, though. For instance, some proponents of Mars colonization note that Mars has a lot of iron in its soil. But that does not qualify as a strong economic reason; there is plenty of iron in the earth’s crust.

We no longer live in a Cold War era, so there is nothing to prove by landing on Mars first. No doubt the race for the moon landing blessed us with numerous technological spin-offs; even so, it is no longer a wise investment.

With currently available technology, it takes only three days to travel to the moon; by contrast, a trip to Mars would take eight months one way. Having completed such a long, grinding journey, you would surely want to stay and roam around a little before starting another eight months of travel back home. It would take a heroic reserve of patience to endure confinement in an enclosed, sterile environment while still performing your entrusted duties.

Away from family and friends, you would be accompanied by the same people all the time; it would be no less than a traveling jail. Some researchers have suggested that women are better suited for Mars travel because they are lighter and consume less food, so a space vehicle carrying women would be easier to design. But the weight of occupants and supplies is not the only constraint. Even if you have all the necessary means to travel to your beloved red planet, it is not simply a matter of entering a spacecraft and setting off.

A Martian trip is only feasible during a period called a launch window: a particular interlude in which a rocket or spacecraft must lift off to achieve a precise landing on another celestial body. This window depends upon the relative positions of the launching and receiving bodies (earth and Mars respectively). The launch window for Mars opens roughly every twenty-six months; if you miss this shuttle, you will have to wait more than two years for the next trip.
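The interval between launch windows follows from the synodic period of earth and Mars, the time the two planets take to return to the same relative alignment. Here is a quick back-of-the-envelope sketch in Python, using textbook orbital periods (the values and helper name are my own assumptions, not figures from the article):

```python
# Synodic period: how often earth and Mars return to the same
# relative alignment, i.e. how often a Mars launch window opens.
EARTH_YEAR_DAYS = 365.25   # earth's orbital period
MARS_YEAR_DAYS = 687.0     # Mars's orbital period

def synodic_period(p1: float, p2: float) -> float:
    """Days between successive identical alignments of two orbits."""
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

window_gap_days = synodic_period(EARTH_YEAR_DAYS, MARS_YEAR_DAYS)
print(f"A launch window opens roughly every {window_gap_days:.0f} days "
      f"(about {window_gap_days / 30.44:.0f} months)")
```

This works out to roughly 780 days, or about 26 months, which is why a missed window means a wait of more than two years.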

In reality, any spacecraft launched from earth would enter an orbit around the Sun and, after around eight months, intercept Mars in its own orbit. Many things can go wrong during the voyage: electronic malfunctions, an incorrect spacecraft speed, a faulty trajectory, a lack of fuel or, worse still, a collision with a straying space rock.

Suppose you made the launch window and, by a rare stroke of luck, landed successfully on the red globe. How will you inform your relatives of your safe arrival? What if you need support from earth? The first challenge you will face is communicating with your earthly counterparts.

The one-way radio time between earth and the moon is merely 1.3 seconds. By contrast, it might take up to twenty minutes to send a radio message down to earth and another twenty minutes to receive a reply, and that only if they have a ready-made answer to your question. For the fast-paced humans of the twenty-first century, it would be a terrible nuisance.
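The communication delay is simply distance divided by the speed of light. A small sketch, using commonly quoted earth-Mars distances at closest and farthest approach (these distances are assumed textbook values, not figures from the article):

```python
# One-way travel time for a radio signal, which moves at light speed.
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cover the given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60.0

# Approximate earth-Mars distances at closest and farthest approach.
closest_min = one_way_delay_minutes(54.6e6)   # ~54.6 million km
farthest_min = one_way_delay_minutes(401e6)   # ~401 million km
print(f"one-way delay: {closest_min:.1f} to {farthest_min:.1f} minutes")
```

So a single question-and-answer exchange could take anywhere from about six minutes to well over forty.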

Regardless of whether it can support any life form, Mars is far from an ideal place to spend a vacation. As you disembark from your enclosed space vehicle, you should be ready to face a hostile atmosphere of carbon dioxide. Worse still, the Martian air is so thin that it holds hardly any heat. The average temperature at the equator is -50 degrees Fahrenheit, and the atmospheric pressure is so low that water cannot exist as a liquid.

The Martian Colony

Imagine all the technological obstacles to commuting and settling on Mars are removed and we have successfully landed a tribe of twenty humans on the red planet. A group of twenty people can hardly be called a colony. With vast areas of land available for colonization, and assuming our current annual growth rate of 1 percent, it would take 394 years for these twenty people to multiply into a population of a thousand.

When they would touch the million mark, I leave to your mathematical skills. But assuming a maximum planetary capacity of 10 billion Martians, it would take some two millennia to exhaust the planet's carrying capacity. Though exact numbers are hard to pin down, we humans took considerably longer to reach similar populations on earth.
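These population figures follow from compound growth: the number of years is ln(N/N0) / ln(1 + r). A minimal sketch in Python (the 1 percent rate and the population targets come from the article; the formula and helper name are mine):

```python
import math

def years_to_reach(start: float, target: float, rate: float = 0.01) -> float:
    """Years for a population growing at a constant annual rate
    to climb from `start` to `target`."""
    return math.log(target / start) / math.log(1.0 + rate)

print(math.ceil(years_to_reach(20, 1_000)))           # to a thousand
print(math.ceil(years_to_reach(20, 1_000_000)))       # to a million
print(math.ceil(years_to_reach(20, 10_000_000_000)))  # to ten billion
```

Twenty settlers growing at 1 percent a year take about 394 years to reach a thousand, roughly 1,100 years to reach a million, and a little over two millennia to reach ten billion, matching the figures above.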

The primary reason for this relatively slow growth is that, throughout history, humanity on earth has been repeatedly afflicted by natural and manmade catastrophes: famines, plagues, wars and the like. Only a few decades ago did we come to grips with, and apparently defeat, these intractable adversaries.

What unforeseen diseases will the natives of the red planet have to encounter? Will the Mars-dwelling farmers be efficient enough to save their crops from unpredictable weather patterns? Will the technology-loaded Martians be any less belligerent towards their fellow humans? We simply do not know.

Why not Earth?

After the successful moon landing, the next goal on the human space agenda is to conquer and possibly colonize Mars. But while there were purely political motives behind the moon landing, there is no ongoing Cold War today and no superpowers locked in a technological contest. Moreover, traveling to the moon and spending a few hours there is one thing; spending a lifetime on a distant planet like Mars is an altogether different proposition. The required technological advancement and associated costs are unimaginable.

Mars is comparable to earth in some ways, but that does not make it habitable like earth. Even if it supported rudimentary life forms, there is still no plausible reason for building human colonies on Mars; the climate is hostile, and the fiscal load of such an outlandish expedition is unbearable for any economy.

We are looking to colonize the moon or Mars. Why not earth? Have we already colonized all the space available on earth? Settling unpopulated parts of earth is much simpler and cheaper than settling other celestial bodies. There are huge areas on earth ripe for colonization: enormous deserts, the frigid Antarctic wilderness, and vast ocean surfaces. These places would make far easier abodes than off-planet colonies, where survival is much more intimidating.


The Super Drill: A Journey to the Core

Earth’s Interior Is Extremely Hot; Much Hotter Than You Might Imagine. Does Drilling to the Core of the Earth Make Any Sense? Is There Any Benefit in Doing This? Is It Even Achievable?


IN THE YEAR 1970, a group of Russian engineers took on an exceptionally ambitious project. The objective was to penetrate the earth’s upper crust and reach the boundary where the crust terminates. The chosen site for this endeavor was the Kola Peninsula in the USSR, located east of Norway. The project is known as the Kola Superdeep Borehole Project.

The mission continued for nineteen years; after the fall of the Soviet Union, the project was first mothballed and eventually abandoned in 1994. Despite immense efforts, the drilling team managed a depth of only 12 kilometers. Though the team had covered about four-fifths of the intended depth of 15 kilometers, their drills could not penetrate further; the technological challenges had become insurmountable.

The Kola Superdeep Borehole is still the deepest hole humans have ever drilled. You might ask why on earth anyone should drill through the earth. Why would someone invest so much money, time and energy in such a tedious task? Well, it was not meant for oil exploration or any other commercial gain. It was purely a technological quest with a political backdrop. But before going into the details of this interesting techno-political episode, it would be useful to know the structure of the earth’s interior.

Beneath Our Feet

Earth is a huge ball of molten rock covered by an outer layer called the crust. The total diameter of the planet is 12,742 kilometers, of which the crust accounts for a maximum thickness of 48 kilometers. Compared to the planet as a whole, then, the crust is very thin.

Beneath the crust is a hot, supple shell of rock called the mantle. Though the mantle is technically solid, its rock flows in slow currents like a liquid. The mantle is around 2,900 kilometers thick and makes up about 84 percent of the planet’s total volume.

The innermost part of the globe is called the core. It comprises two layers: a liquid outer layer of molten iron, and a solid inner layer of iron and nickel. This central part makes up around 15 percent of the planet’s volume.

The ground directly beneath our feet is the crust, quite thin and brittle. As we descend towards the center, it gets hotter and hotter; the deeper you go, the more intense the blistering heat. The temperature at the center of the earth is estimated at around 9800 degrees Fahrenheit, nearly the same as the surface of the Sun.
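The comparison with the Sun is easy to sanity-check by converting the quoted figure to Celsius (the conversion formula is standard; the Sun's surface temperature of roughly 5,500 degrees Celsius is an assumed textbook value, not from the article):

```python
def fahrenheit_to_celsius(f: float) -> float:
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32.0) / 1.8

core_c = fahrenheit_to_celsius(9_800)   # the quoted core temperature
print(f"~{core_c:.0f} degrees Celsius, versus ~5500 for the Sun's surface")
```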

A logical question at this juncture is how we know these details when we have never been able to penetrate deep enough to look. Well, there are multiple interrelated sources of this information. Volcanoes that spew red-hot molten lava onto the surface make it plain to any observer that the interior of the planet must be extremely hot.

But beyond that, geologists have developed sophisticated models of earth’s interior by listening to the subtle interactions of mighty earthquake waves with the planet’s internal layers. Some of these waves are absorbed, some are deflected, and others are reflected. The science that studies these earth-quivering jolts and related phenomena is called seismology.

To be exact, earth’s crust accounts for only one percent of the planet’s volume. Against earth’s radius of more than 6,000 kilometers, the Russian drilling team’s valiant struggle yielded only 12 kilometers: a mere 0.2 percent of the way to the center. The reason is that the earth fights back against anyone attempting to penetrate it.
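The 0.2 percent figure checks out (the 12,262-meter final depth is the commonly reported value for the borehole; the 6,371-kilometer mean radius is a textbook assumption):

```python
EARTH_RADIUS_KM = 6_371.0   # mean radius of the earth
KOLA_DEPTH_KM = 12.262      # deepest point of the Kola borehole

fraction_pct = KOLA_DEPTH_KM / EARTH_RADIUS_KM * 100
print(f"The borehole reached {fraction_pct:.2f}% of the way to the centre")
```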

Piercing Through the Earth

The groundwork for the Russian drilling project began in 1962, followed by years of preparation. The survey for a suitable site was completed in 1965, when the Kola Peninsula in the north-western USSR was chosen. The next five years were occupied with construction and other arrangements, and in 1970 the drill finally started nudging into the earth.

Drilling deep through the earth required immense effort, and there were some nearly insurmountable engineering challenges. The first was the immense heat encountered during the attempt. The project team had predicted that the crust would be as hot as 212 degrees Fahrenheit. This was a miscalculation; the team had underestimated the likely temperatures.

As the drilling progressed, the underlying rock was found to be as hot as 570 degrees Fahrenheit, and it kept getting hotter as the drilling continued deeper. After revisiting the numbers, the team realized that reaching their target depth of 15 kilometers would require working at more than 570 degrees Fahrenheit. Unfortunately, their drills were unable to bear such high temperatures and stopped working.

A far more challenging parameter antagonizing the earth drillers was the excessive pressure. Engineers who build submarines to travel the depths of the oceans know that their vehicles must withstand pressures several hundred times greater than those at the surface. Estimates indicate that the pressure at the boundary between mantle and core could be as high as 136 gigapascals, nearly 1.4 million times the atmospheric pressure around us. This pressure is enough to crush an unprotected human to a thin paste, which the unimaginable surrounding heat would then cook away.
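The pressure comparison is a one-line conversion (136 gigapascals is the figure quoted above; the standard atmosphere of 101,325 pascals is a textbook constant):

```python
CORE_MANTLE_PRESSURE_PA = 136e9   # ~136 gigapascals, as quoted
ONE_ATMOSPHERE_PA = 101_325       # standard atmospheric pressure

atmospheres = CORE_MANTLE_PRESSURE_PA / ONE_ATMOSPHERE_PA
print(f"about {atmospheres / 1e6:.1f} million atmospheres")
```

This gives roughly 1.3 million atmospheres, which squares with the "nearly 1.4 million times" figure.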

To travel to the center of the earth, one would need a super vehicle capable of withstanding extreme pressure and temperature. Similarly, to penetrate deep through the earth, we would require a super drill that could resist extraordinarily large forces and incredibly high temperatures; it is hard to defeat the earth.

Why on Earth?

Returning to the motives behind the Kola Superdeep Borehole Project, it is worth noting that this was not the first of its kind. The story goes back to the years of the Cold War between the two superpowers of the time, the US and the USSR. Even before the race for space superiority took hold, there was a technological contest between the two countries to see who could drill deeper into the earth.

During the Cold War, the two giants were competing in many realms: which governmental system was better, socialism or capitalism; which economic structure was more effective, centrally planned totalitarianism or price-driven free enterprise; and, of course, who was the real technology titan? The rest of the world watched this exciting competition like a boxing match.

Even before Russia indulged in this apparently silly quest, the United States had started an earth-drilling project dubbed Project Mohole, at a site near Mexico’s Pacific coast. Conceived in 1957, the project managed to drill 183 meters below the sea floor, under 3,600 meters of water. Lack of funding stopped the work soon afterwards. Later, mocking the meager outcome of an immense effort, critics dubbed the Mohole Project the No Hole Project.

Running parallel to this super-drill challenge was the Space Race, another high-tech competition between the US and the USSR. Though the Soviets’ achievements in the earth-drilling contest were superior to the Americans’, the struggle was largely overshadowed as the world’s attention fixated on the moon landing, a feat the United States eventually claimed.

The motivations behind the Kola Superdeep Borehole Project were purely political. Two superpowers, each capable of destroying the human race, were competing for technological superiority, keen simply to show who could drill deeper. The consequences of the project, however, were far more productive than the scientists involved had anticipated.

Earth Demystified

There were a number of unexpected discoveries from the Kola Superdeep Borehole Project. The first revelation came when the rocks at extreme depth were found to be saturated with water. This water had not seeped down from the surface; rather, it had formed over millions of years deep underground and remained trapped there by layers of rock. Scientists believe it was created as intense pressure squeezed hydrogen and oxygen atoms out of the deep rocks.

The researchers on the project had anticipated that, as they drilled through sedimentary rock followed by a granite stratum, they would soon find a layer of basalt (cooled lava) underneath. Instead, the drillers found that the sedimentary rock went fifty percent deeper than expected and, to their huge surprise, there was no basalt. Seismic studies had suggested that at a depth of around 3 to 6 kilometers granite would give way to basalt, but there was no such transition. This seismic anomaly reinforced the theory of plate tectonics, which was still budding at the time.

Another surprise came when a large amount of hydrogen gas began escaping through the shaft of the bore. Further investigation verified the presence of helium, nitrogen and carbon dioxide as well. The underground presence of carbon dioxide hinted at another likely finding: the possibility of life. At a depth of 6.7 kilometers, researchers detected twenty-four distinct species of plankton microfossils, with coverings of carbon and nitrogen rather than the typical limestone or silica. Despite the apparently unbearable heat and pressure, the microscopic remains were amazingly intact.

The project also contributed to technological advancement. To reach a remarkable depth of 12 kilometers, Russian engineers had to design a whole new drill. The technique involved sending pressurized mud down a pipe, where a turbine driven by the mud spun the drill head at 80 revolutions per minute. The same drilling technology is used in oil wells today.

Finally, a bit of superstition: a rumor has persistently circulated that the project ended because the scientists began hearing the screams of damned souls from below; the drillers, so the story goes, had opened a well to hell. This well-to-hell story, however, remains an unfounded hoax; no substantial evidence for it has ever been recorded.

A Money Pit

Whether aimed at political superiority or pursued from a purely technological perspective, the idea of drilling through earth’s crust to the mantle, or even the core, seems ostensibly outlandish. First of all, the technological challenges, as explained earlier, are immense. It would take a great deal of scientific research and engineering acumen to invent a super drill that could punch through to extreme depths.

Correspondingly, the financial requirements to fuel this endeavor would be huge. Some estimates indicate that drilling through to the earth's mantle would cost $1 billion. How much did the Russians spend on the Kola Superdeep Borehole Project? We will probably never know.

In those days, the Soviet Union was a socialist economy. Unlike the capitalist economies that use prices to run most of the contemporary world, the Soviet economy was governed by central command. A monumental bureaucracy was assigned the nearly impossible task of allocating resources efficiently; consequently, resources were often squandered off the record books.

By contrast, in a resource-competitive capitalist economy, waste rarely goes unnoticed, as resources tend to flow naturally to their most productive uses. That is precisely why the Mohole Project team abandoned drilling for financial reasons while the Kola Superdeep Borehole team drilled on without fiscal worries until the engineering challenges became insurmountable. Unlike the socialist economy of the USSR, the capitalist economy of the US could not justify funding a money pit, particularly while it was already spending billions of dollars on the ongoing Space Race.

Assuming $1 billion is a realistic estimate, what would be the return on this investment? Now that there is no techno-political contest between two mighty giants, the only motive for such an extraordinary initiative would be scientific findings about the interior of the earth. However, it is not strictly necessary to drill through the earth to learn about its interior, not even to obtain a sample of the mantle; there are places where the mantle is already exposed, such as parts of the mid-Atlantic sea floor.

Moreover, using a sophisticated synergy of seismographs and computer simulations, modern geologists have developed several dependable scientific theories about the mutual interaction of earth’s crust and mantle. The money saved could be put to more productive goals such as poverty alleviation, pollution abatement or sustainable energy.

Just Science Fiction

By far, the Kola Superdeep Borehole Project has been the most successful super drilling project, and the maximum depth humans have ever penetrated is 12 kilometers. This is less than one percent of earth’s radius. Even at this depth, the engineering challenges faced by the Russian team were overwhelming. With our scanty achievements, we can hardly claim to have conquered the earth’s interior.

Even if the engineering problems were resolved, the money and other resources required to develop a super drill that could reach the mantle, let alone the center, would be enormous. Weighing the associated costs against the potential benefits of this extraordinary endeavor, it is evident that it is not a goal worth seeking.

Actually, traveling to the center of the earth is just science fiction; it might never become reality.

Human Clones: Mastering Creation

Humans Have Cloned Animals With a Certain Degree of Success. Are We Capable of Doing the Same With Ourselves? If So, Should We?



ON 22 FEBRUARY 1997, a group of scientists at the Roslin Institute, University of Edinburgh, Scotland, made a much-awaited announcement before a frenzy of global media. They affirmed the birth of Dolly, the world’s first mammal cloned from an adult body cell.

Dolly’s birth was an unprecedented milestone in the history of genetic engineering. Though she was not the first animal clone (scientists had already cloned small creatures like frogs and mice, as well as other sheep, from their respective embryos), Dolly was remarkable in that she was produced from an adult cell. This achievement proved that specialized cells could be used to create an exact replica, or clone, of an organism.

Like any other clone, Dolly started her life in a test tube. After six days under surveillance, she was transferred into a surrogate mother’s womb. Surviving a closely monitored pregnancy, Dolly was born on 5 July 1996. Despite pervasive concerns about her ability to lead a normal life, she mated and successfully produced six lambs: Bonny first, followed by twins and then triplets.

In February 2003, Dolly was diagnosed with a lung tumor. To spare her suffering, she was euthanized on 14 February 2003, after six years of living as an animal celebrity. Dolly’s body was donated to the National Museum of Scotland in Edinburgh, where she remains the museum’s most popular exhibit.

The cloning of Dolly raised many controversial questions for scientists and the general public alike. To rephrase a few: Can I produce a clone of my favorite pet? Can we resurrect dinosaurs through cloning? What if I could create a twin of myself? What would be the ethical and social implications of such a wonder? These questions are the central idea of this article. But in order to probe them, it would be useful to first comprehend the secret code that makes us what we are.

The Secret Code of Life

You may consider life a complicated affair, but it essentially starts with the mingling of two simple cells, an egg and a sperm; this process is called sexual reproduction. By the time we reach maturity, our bodies consist of trillions of cells (a trillion is a million million). How do two cells become trillions, and what holds them together? This phenomenal transformation is governed by an exceptionally spectacular code of instructions called DNA.

DNA, short for deoxyribonucleic acid, is a molecular code; in humans, it runs to roughly three billion base pairs of instructions. You might have seen its characteristic double-helix structure, like a ladder twisted around a vertical axis. Each rung of the ladder carries vital information for the growth and reproduction of the organism it belongs to.

DNA is a huge molecule consisting of billions of atoms. Genes are segments of DNA, and genes in turn are packaged into larger structures called chromosomes; each chromosome is essentially a single, very long DNA molecule. Each species has a fixed number of chromosomes. Humans, or Homo sapiens, have 23 pairs of chromosomes that dictate their various traits.

Located in the nucleus of our cells, DNA has a unique ability to produce copies of itself through complex chemical reactions. Due to the complexity of these reactions, errors sometimes creep in during the duplication process; these errors are called mutations. In most cases, our body rejects mutations; it flushes out the erroneously duplicated cell and tries again. Nonetheless, sometimes the mutated cell gets out of control and starts dividing excessively. This state is called cancer.

Over long periods of time, mutations build up, eventually transforming a population of organisms into an entirely new species. This process is called evolution. Alteration of genetic material through mutation is a natural process. But contemporary humans are keen to engineer genetic material to achieve certain desired results. Actually, we have already done that; the qualified success of Dolly’s cloning is self-evident proof.

How Cloning Works

In order to understand the process of cloning, it would be useful to know how animals normally reproduce. When two animals mate, each offspring gets one set of chromosomes from the father and another from the mother. The particular combination of chromosomes that each offspring happens to inherit determines a lot of things: eye color, straight or curly hair, boy or girl, and so forth.

Parents enjoy no control over the process of sexual reproduction. Any gene can be transferred to any of the offspring; it’s purely a matter of chance. That is why a brother and sister can be entirely different despite sharing the same mother and father. Only identical twins are born with exactly the same combination of genes and hence an identical set of traits.
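The scale of that chance can be sketched with a little arithmetic (shown here in Python, purely as an illustration): each parent passes on one chromosome from each of their 23 pairs, so even ignoring crossover, the number of possible combinations is astronomical.

```python
# Each parent contributes one chromosome from each of their 23 pairs,
# so a single parent can produce 2**23 distinct chromosome combinations.
combinations_per_parent = 2 ** 23

# A child inherits one such combination from each parent independently,
# giving 2**46 possible pairings (crossover raises the number far higher).
combinations_per_child = combinations_per_parent ** 2

print(combinations_per_parent)   # 8388608 combinations per parent
print(combinations_per_child)    # about 70 trillion possible children
```

This is why, barring identical twins, no two siblings ever share the same genetic deal of the cards.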

Cloning is the deliberate human intervention by which you take control of the reproduction process; you decide which genes coalesce to create a specific organism. Cloning itself is not a human invention; plants have a natural ability to make identical copies of themselves through asexual reproduction. Humans have merely extended this process to achieve desired yields of preferred fruits and crops.

We have also experimented with animal breeding to produce horses that can run faster, cows that produce more milk, and dogs with curly fur. However, this is conventional breeding, where male and female still mate sexually and the resulting offspring are not identical to either parent. This is pure sexual reproduction; we only select the partners.

By contrast, animal cloning involves no sexual reproduction. All the chromosomes of the clone come from a single parent, hence the resulting offspring is an identical copy of that parent; a parent and its clone have exactly identical genes. In the case of Dolly, the nucleus of an ordinary body cell replaced the nucleus of an egg cell, so all of Dolly’s cells carried DNA identical to that of the single parent that donated the cell. The surrogate mother’s womb merely served as an incubator.

Human intervention in the biological realm is not new. For millennia, people have been interested in reshaping themselves and other living organisms to create certain desirable characteristics. For instance, humans have been castrating bulls for nearly ten thousand years to produce oxen. Oxen are less aggressive and can be easily tamed to pull ploughs. Shockingly, humans also castrated their own young males to create eunuchs. Being sexually sterile, these eunuchs could be safely entrusted with the duty of protecting the harem of the sultan.

Cloning is a recent addition to human interference in the creation of life. Frankly, this interference could be quite beneficial. For example, every once in a while, a cow is born that naturally produces three times more milk than the average. Such superior cows could be cloned so that fewer cows could fulfill milk production requirements. Cloning could also produce herds of disease-free livestock, saving farmers millions of dollars in lost meat. We could possibly save ourselves from mad cow disease as well.

The Clonal Resurrection

We have already done this with Dolly and many other living species, but can we use cloning to bring back dinosaurs or other extinct animals? In theory, yes; in practice, no.

A team of Russian, Japanese and Korean scientists has mapped the genetic material of ancient mammoths found frozen in the Siberian ice. They are planning to reconstruct the mammoth DNA and insert it into a fertilized egg cell of a contemporary elephant. The altered cell would be implanted in the womb of a surrogate elephant, with the aim of resurrecting a creature that vanished thousands of years ago.

In order to resurrect a lost species such as the woolly mammoth, you need the DNA strand of the extinct animal. Extracting a DNA strand from a long-dead animal is incredibly difficult, and even a tiny gap in the strand will render it useless. A possible way to bridge such gaps is to use the DNA of a living African elephant. Whether this method will yield any results is still unclear. Furthermore, even if it did work, would the resulting mammal be a real mammoth, or just a genetically modified African elephant?

Cloning researchers are of the view that the most likely candidates for clonal resurrection could be the species that have gone extinct only recently. The Tasmanian tiger and the Dodo bird are the top candidates because their specimens have been preserved in museums.

As explained earlier, DNA is a secret code that carries the instructions for building a life form. In the case of dinosaurs, which disappeared nearly 65 million years ago, there is absolutely no chance of finding these instructions, as DNA degrades completely within a few million years at most. Since we have no practical means of building synthetic DNA from scratch, the clonal rebirth of dinosaurs is simply out of the question.
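The degradation timescale can be sketched with simple exponential decay. Studies of ancient bone suggest a DNA half-life on the order of 500 years; taking that as an assumed round figure, a back-of-envelope calculation in Python shows why mammoths are borderline candidates while dinosaurs are hopeless:

```python
HALF_LIFE_YEARS = 500  # assumed order-of-magnitude half-life for DNA in bone

def fraction_remaining(years: float) -> float:
    """Fraction of intact DNA left after the given time, by exponential decay."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# After 10,000 years (roughly a mammoth's age) tiny traces survive...
print(fraction_remaining(10_000))       # ~1e-6: minuscule, but recoverable fragments

# ...but after 65 million years (a dinosaur) effectively nothing does.
print(fraction_remaining(65_000_000))   # numerically indistinguishable from zero
```

The exact half-life varies with temperature and burial conditions, but no plausible value rescues a 65-million-year-old genome.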

The Global Genome Project

A genome is the complete genetic blueprint carried in an organism’s DNA. Researchers can analyze a human genome to map, or sequence, its DNA. This information, which adds up to about 3.2 GB for a human, can be used to figure out if a person is carrying a gene for cancer, for example. The first such mapping effort is known as the Human Genome Project. This publicly funded project started in 1990 and was completed in 2003. The total cost to sequence the first human genome was $2.7 billion.
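The "about 3.2 GB" figure can be reproduced with simple arithmetic: the human genome contains roughly 3.2 billion base pairs, and the naive encoding stores one base letter per byte. A short sketch in Python:

```python
BASE_PAIRS = 3_200_000_000  # approximate length of the human genome

# Naive encoding: one character (A, C, G or T) stored per byte.
bytes_naive = BASE_PAIRS * 1

# Compact encoding: four possible symbols need only 2 bits each,
# so four bases fit in one byte.
bytes_packed = BASE_PAIRS // 4

print(bytes_naive / 1e9)    # 3.2 GB -- the commonly quoted figure
print(bytes_packed / 1e9)   # 0.8 GB with 2-bit packing
```

So the 3.2 GB figure assumes the simplest text-like storage; a packed binary format would be about four times smaller.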

By comparison, James Watson, co-winner of the Nobel Prize for the discovery of the DNA structure, had his genome sequenced for about $2 million in 2007. Sequencing costs have kept falling dramatically. Eventually, when DNA sequencing gets as cheap as, say, $1,000, the details of our 20,000 to 25,000 genes might become part of our medical records.

As access to genomes gets cheaper, the likely implications of DNA mapping would be extraordinary. From personality traits to physical features, everything controlled by our genes will be demystified. Astrologers and palm readers will have to either switch careers or rebrand themselves as gene readers. A new era of research into the human genome and its linkage to particular social behaviors and diseases will emerge.

Health problems, both physical and mental, will often be treated in the light of the sufferer’s genome. For instance, a cancer patient will be treated based on the specific mutations that made the tumor cells cancerous. For other illnesses as well, medications will be optimized genetically, because people have varying sensitivities to medicines. Similarly, people will be able to tailor life choices, such as diet, to their susceptibility to particular diseases.

Once the information carried in genomes becomes freely available, governments may build databases of their citizens’ genetic records to identify good and bad genes. Desired genes could be identified, preserved and possibly propagated through cloning, while threatening genes could be suppressed and eliminated in a shocking “genocide”.

Just as they can now determine the sex of their baby months in advance, expecting parents will have the choice to identify and possibly alter particular traits of their child; you could design your own offspring and, interestingly, choose a more fitting name for it. You could have the power to make your son or daughter a super baby.

Double Trouble

The concept of cloning is not new. We have been cloning plants for decades, producing fruits and crops of our own desire. However, we have recently tried cloning animals, and this has raised a host of controversies.

A strong basis for these controversies is that plants have a natural tendency to clone themselves through asexual reproduction (reproduction without male and female cells). Therefore, we are fine with extending the natural process of plant cloning. By contrast, animal cloning is far more controversial, primarily because humans themselves belong to the animal kingdom. A viable method for animal cloning essentially opens a door to extending the process to humans themselves.

A few years ago, the United States Food and Drug Administration declared that milk from cloned animals is safe to consume. The announcement sparked a raging debate about human health, animal rights and ethics. Though there are regulations restricting research in human cloning, sanctioning humans to engineer the creation of new animal life effectively licenses them to play with life as if it were a game.

Human cloning is currently illegal in virtually all parts of the world, but that does not mean it will always stay that way. In 2005, the United Nations General Assembly adopted a non-binding declaration calling on member states to prohibit all forms of human cloning incompatible with human dignity. However, if some day humanity stops decrying the ethical issues related to cloning and accepts it as just another medical triumph, there could be complex social and legal consequences.

For instance, if you clone yourself, who would own that clone: you or the cloning enterprise? How will you control your own clone? What if someone gets hold of your genetic material and clones you illicitly? Will cloning enable the reincarnation of dead relatives using a sample of their genome preserved during their lifetime? Will it open pathways for genetic immortality?

Critics of cloning argue that this game can get out of control. While animal rights are clearly being violated in the cloning procedure, genome engineering could one day be used to create serfs and fearless soldiers serving the malicious intentions of a bio-dictator. The basket is so full that all the beans can spill at one careless touch.

Though scientists have already cloned more than ten animal species, including frogs, mice, sheep, cows, pigs and horses, there are still plenty of technical glitches. The reliability of the process remains dubious; the procedure is particularly delicate, and at times things do not turn out as anticipated. Sometimes more than a hundred attempts are required before any success; Dolly herself was the sole survivor of 277 attempts.

Another shortcoming is that cloned animals often resemble premature births; their vital functions are underdeveloped, making them vulnerable to disease and early death. Take Dolly: though she was probably the most successful animal clone, she survived for only six years, while a normal sheep has an average lifespan of ten to twelve years.

Further, as they grow, most clones get hugely overweight and bloated. Perhaps it is not a good idea to create a clone of your pet. You will never get the same lovely character that you cherish so much.

Stepping into Nature’s Shoes

Plants have a natural ability to produce their own clones. We have enhanced this ability to create desirable fruits and plentiful crops. We have also been successful in producing animal breeds with particular traits. But these processes have been in harmony with nature. Animal cloning, on the other hand, is a clear defiance of nature, pure and simple.

Animal cloning is a cruel process, as it lacks any regard for the suffering and needs of the poor lab specimens. The technique is delicate and still under development; extending its applications to the human genome could be far more disastrous. Though genome engineering may bring about miraculous remedies for incurable diseases such as cancer, the same technology could be used to facilitate genocides.

A word of caution: Stepping into nature’s shoes can be suicidal.

Wireless Electricity: The Dream of a Cordfree Life

Can We Get Rid of the Ugly Grids that Block Our View of Scenic Landscapes? Can We Get Wireless Electricity? 


IN THE YEAR 1901, a few construction workers gathered in a small New York village to erect a particularly lofty structure, a 187-foot-tall tower. Atop this tower was perched a fifty-five-ton dome of conductive metal, and beneath it stretched an iron root system that penetrated more than 300 feet into the earth. This was Tesla’s power tower, and its intended purpose was to bring about an energy revolution.

The idea in the mind of Nikola Tesla, the inventor of the tower, was fairly eccentric. He proposed to conduct electricity through the earth and the sky, enabling wireless transmission of electric power across large areas of land. Lamentably, the tower was never completed as envisioned by its inventor, due to a shortage of funds. Eventually, the Tesla tower was unceremoniously demolished in 1917 and sold for scrap to pay off the debts that had accrued through the project.

Despite his arduous efforts, Tesla could not achieve commercially viable wireless electricity. However, it would be unfair not to pay a brief homage to a generally unappreciated genius.

Though he could not gain the fame he deserved, Nikola Tesla was granted more than 250 patents across some two dozen countries. Apart from his electrical work, he invented car sparkplugs, remote controls, wireless communication systems and numerous other devices. So much so that a few months after Tesla’s death in 1943, the U.S. Supreme Court voided four of Marconi’s key patents, posthumously acknowledging Tesla’s innovations in radio.

Earlier, in 1893, when George Westinghouse, an American entrepreneur and engineer, was awarded the contract for the electrification of the World’s Fair to be held in Chicago, he selected Tesla as the lead engineer for the project. Being an unconventional inventor, Tesla wanted to demonstrate the practicality and superiority of his AC (alternating current) technology over Edison’s rival DC (direct current) electric power.

On the day of the event, fairgoers were amazed to see wireless lamps lit by an AC electric field. Tesla not only dispelled the publicized safety concerns about his wired AC current, he also demonstrated the possibility of wireless electricity, which is the central thesis of this article. But before analyzing the prospects of wire-free electric power, let us appreciate the virtues of this ubiquitous and often underestimated utility called electricity.

Why Electricity is So Common

Those of us who are fortunate enough to live in the developed world often take for granted the incessant supply of electricity to our homes and offices. We get a brief taste of what life is like without this essential service when the power goes off. In the absence of electricity, you would not be able to charge your cell phone, browse the internet, or watch TV. But these would not be the biggest of your troubles in a prolonged power failure; lack of heating or cooling in extreme weather and a dearth of food supplies could threaten your very existence.

Relishing the luxuries of the twenty-first century, we are not as self-sufficient as our nineteenth-century forebears, and electricity is undeniably the chief luxury that drives the other pleasures of modern life. But why are we so reliant on electricity, and not on heat or some other form of energy? Well, electricity possesses certain unparalleled features that make it an incomparable utility.

First of all, electricity is convenient for its users. It can be easily transformed into other forms of energy such as heat, sound, and motion. For instance, in an electric heater, electricity is converted into heat; in a loudspeaker, electricity takes the form of voices and sounds; similarly, in a washing machine, electric power transforms into the rotation of the drum.

Another unique feature of electricity is that it can be generated from a variety of resources, including fossil fuels, hydropower, solar power, wind energy, and geothermal heat. In most cases, the power generation units are located far away from the points of consumption, thereby keeping consumers safe from the associated hazards.

Finally, electricity can be expediently transmitted from remotely located power plants to homes, offices and factories with relatively minor energy losses. It is this last feature — the transmission of electricity — that concerns us here. Currently this transmission happens through an electric grid, a set-up that connects the power plants to consumers through long runs of cable. Can we ever get rid of these cables that tarnish our view of the lovely horizon and deprive many of our fellow humans of access to the blessings of electricity?

The Ubiquitous Grid

Electricity travels from the power plant to your wall socket through an amazing network called the power grid. While commuting on a highway through a suburban or rural area, you might have noticed large steel towers carrying a number of heavy transmission lines. These, however, are only a small part of an intricate maze. What follows is a simplified overview of this complex network that empowers you to enjoy the benefits of electricity.

Starting from the power plant, almost all electrical power is generated through the rotating motion of an electric generator. There can be various ways to spin the generator: steam turbines in fossil fuel and nuclear power units, water wheels in hydraulic dams, or wind turbines in a wind farm. Regardless of the means of electricity generation, all power plants generate three-phase electricity. By contrast, the electric power that you get in your power outlets is single-phase.

The strictly technical details of single and three phases are beyond the scope of this discourse, so I will skip them here. Likewise, the difference between AC and DC current is too esoteric for an average reader. However, for the sake of comparison, it is worth mentioning that all big power plants generate AC (alternating current), while the electricity produced by small batteries is DC (direct current). These two types of current were the basis of what is termed the “War of Currents” between Tesla and his nemesis and former employer, Thomas Edison.

Edison thought that AC electricity was impractical and too dangerous to implement. It is said that, in order to prove his point, Edison arranged for a convicted New York murderer to be put to death in an AC-powered electric chair. Contrary to Edison’s views, Tesla believed in the future of AC power. Though Edison remains one of the biggest icons in the history of American invention, this battle was won by Tesla.

Getting back to AC versus DC, AC power offers certain advantages. All large generators produce AC naturally, so a conversion to DC would be an additional step. Moreover, it is easy and cheap to convert AC to DC, but not vice versa. Therefore, as a matter of choice, AC is preferable to DC.

The AC current produced in generators is stepped up at the power station, meaning that the voltage of the electricity is increased to hundreds of thousands of volts. The devices that perform this service are called step-up transformers. This amplification is important because at lower voltages, energy losses during transmission are excessive.
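Why higher voltage means lower losses follows from basic circuit theory: for a fixed power P, the current is I = P/V, and the heat dissipated in the line is I²R. A hypothetical sketch in Python (the power and resistance figures are illustrative, not data about any real line):

```python
def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Heat lost in a line (I^2 * R) carrying `power_w` at `voltage_v`."""
    current = power_w / voltage_v   # I = P / V
    return current ** 2 * resistance_ohm

POWER = 10_000_000    # 10 MW to deliver (illustrative)
RESISTANCE = 10       # ohms of total line resistance (illustrative)

# Same power, same wire, two transmission voltages:
print(line_loss_watts(POWER, 10_000, RESISTANCE))    # at 10 kV: 10 MW lost -- everything!
print(line_loss_watts(POWER, 400_000, RESISTANCE))   # at 400 kV: only 6250 W lost
```

Raising the voltage 40-fold cuts the current 40-fold and the loss 1600-fold, which is exactly why long-distance lines run at hundreds of kilovolts.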

The high-voltage electric power travels through a network of nearly equidistant lofty towers. These high-voltage towers carry three wires corresponding to the three phases. Many towers have an extra wire running along the top; these are ground wires, intended primarily to intercept lightning strikes.

As the high-voltage electricity approaches its destination, it is stepped down through a series of step-down transformers: its voltage is reduced to 240 volts or 120 volts, depending on which part of the world you reside in. So here you are, ready to plug in your favorite gadget and power it up through a socket within arm’s reach.

While the grid-based power transmission system has been supplying electricity to the whole world for more than a century, it has certain shortcomings. In recent years, with growing populations and the consequent increase in demand for electricity, the need to increase power generation is on the rise. However, in many parts of the world, the increase in generating capacity has outstripped the increase in transmission line capacity.

Furthermore, the current infrastructure of power transmission grids is vulnerable to natural and manmade disruptions. For instance, the massive blackout in the United States and Canada in 2003 affected around 55 million people. Upon investigation, the cause of the failure was found to be a combination of operator error and a transmission line sagging into a tree.

A Wireless World

The biggest problem with conventional power grids is the wastage of energy during transmission. The resistance of the wires in the grid dissipates part of the generated energy as heat; estimates of transmission and distribution losses range from under ten percent in well-maintained grids to a quarter or more of the energy generated in the worst cases. In other words, a substantial share of the electricity we produce never reaches a consumer.

Considering the fact that electricity accounts for much of the energy consumed in the world, these losses are significant. And they become far more significant against the backdrop of diminishing fossil fuels and burgeoning environmental pollution. Wireless power transmission would essentially save both energy and the environment. It has been estimated that an effective wireless equivalent could raise distribution efficiency beyond 90 percent.

A few recent reports indicate that around 1.3 billion people across the globe lack access to electricity. A vast majority of them reside in Africa, China and India. These people could be served the boons of electricity if we had a wire-free mode of transmitting electric power across continents. For example, some countries, such as the United States, have surplus electricity at night; this electricity could be supplied to needy countries on the day side of the planet. This could open new economic horizons in the import and export of a highly sought-after commodity.

A wireless power transmission system would completely eliminate the existing high-tension cables, towers and transformers strung between generators and consumers. Freed of the electric grid, we would be able to connect numerous power plants on a global scale, just as we conduct teleconferences these days. Being cable-free and more efficient, such a system would cost much less in transmission and distribution. Consequently, electricity could be available to consumers at cheaper rates. Power failures due to short circuits, and electricity theft, could also be eliminated.

The implications of a wireless power world would be far more profound than meeting energy needs and saving the environment. Imagine charging your cell phone on the go, just as you receive calls or download data through the internet; no nuisance of plugging a power cable into a wall outlet. What about transportation? Electric cars charged through a wireless system would have a virtually unlimited driving range. A wireless world would be simply awesome!

Wireless or Plug-Free

As a matter of fact, the wireless transmission of energy is already common in much of the modern world. Radio waves, used to send and receive cell phone, TV, radio and WiFi signals, are energy waves after all. Thus, in some cases, we have already gotten rid of the annoying clutter of wires.

Wireless charging is now available on the latest smartphones and on gadgets such as electric toothbrushes. Take the electric toothbrush: it charges up when you place it on a special cradle; you do not need to connect it to any cord or wire. But the cradle still needs to be connected to the wall outlet. Is that wireless charging? A more appropriate description would be plug-free charging.

Here is how it works: there is an electromagnet inside the cradle which draws electricity from the wall outlet. As you place your toothbrush on the cradle, the magnet’s alternating field induces a current in a coil of wire inside the toothbrush. This is called electromagnetic induction. Using the same principle, you can charge some smartphones without wires; but again you need a special cradle or mat — a particular place to keep your gadget. You are still not cord-free!

A wireless power system essentially involves the transmission of energy from a power source to an electrical load across an air gap, without any intermediate connectors. So far, this has been possible only over short distances, through a technique called inductive coupling. The technique can be explained in three simple steps:

One: A coil of electrical wire produces a magnetic field when power is applied. This coil is called the primary coil, or transmitter.

Two: Another electrical coil is brought into that magnetic field; the field induces a current in this secondary coil, called the receiver.

Three: The current generated in the secondary coil can then be used to power the desired device.
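The three steps above are Faraday's law of induction at work: the voltage induced in the receiver equals the mutual inductance M between the coils times the rate of change of the transmitter current, and M collapses rapidly as the coils move apart. A hedged numerical sketch (all values are illustrative assumptions, not measurements of any real charger):

```python
import math

def induced_emf(mutual_inductance_h: float, di_dt_amps_per_s: float) -> float:
    """Magnitude of the induced voltage: |EMF| = M * dI/dt (Faraday's law)."""
    return abs(mutual_inductance_h * di_dt_amps_per_s)

# Transmitter coil driven at 100 kHz with 1 A amplitude:
# the peak rate of change of a sinusoidal current is 2*pi*f*I.
di_dt = 2 * math.pi * 100_000 * 1.0

# Illustrative mutual inductances: coils almost touching vs. a small gap.
print(induced_emf(1e-6, di_dt))   # ~0.63 V with M = 1 microhenry (coils close)
print(induced_emf(1e-9, di_dt))   # ~0.0006 V with M = 1 nanohenry (coils apart)
```

A thousandfold drop in coupling means a thousandfold drop in voltage, which is why your toothbrush must sit on its cradle rather than across the room.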

In reality, we are not transmitting electricity through the air; we are putting a magnetic field into the air and letting it do the job. The problem is that doing this trick over large distances would require huge coils. Moreover, a larger and stronger magnetic field spreads in all directions, resulting in enormous energy losses. Though a research team at MIT has tried to merge resonance technology with induction, the achieved results are far more meager than those intended.

That is precisely why Tesla wanted to use earth and sky as conducting media. Earth itself carries a negative electric charge, while the atmosphere has an increasingly positive charge the higher up you go. What is missing is an effective method to direct electric current to our intended location. There are a number of proposals, each having its own challenges and risks.

In the context of wireless power, energy harvesting, also called energy scavenging, is the conversion of ambient energy from the environment into electrical power. There are various sources of this ambient energy, including stray electric or magnetic fields and radio waves from nearby equipment. However, the conversion efficiency is usually very low, and the electrical energy scavenged remains as minuscule as a few milliwatts. This tiny amount of electricity can only power some micropower remote sensors.

Imagine for a moment that we had somehow been able to conduct wireless electricity over large distances through electromagnetic induction. What about the exposure of the earth and its living beings to such a large and powerful magnetic field? The health impact on those exposed to these exceedingly potent fields is a grave fear, especially for those located near the transmitter or receiver.

Another proposal under consideration is using microwaves for transmitting wireless electricity. A couple of years ago, the Japanese space agency conducted a number of experiments toward a space satellite that would collect huge amounts of solar power, convert it to microwaves, and transmit it to a receiver on the ground, where it would finally be converted into DC power for electrical use. Would you be comfortable being continuously bathed in microwaves from space, even if the risk were relatively low? In addition, microwave-based wireless transmission of electric power would definitely interfere with existing communication systems.

Laser beams have also been suggested for delivering wireless electricity. In this method, a laser beam is pointed at a photovoltaic cell; the cell receives the beam and converts it into electric power. This technique is called power beaming. While we can construct a relatively compact set-up with a limited laser-to-electricity conversion efficiency, the system requires a direct line of sight between the laser and the solar cells. Most of us are already aware that lasers can damage biological tissue, including eyes and skin.
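The practical trouble with power beaming is the chain of conversions, each step losing energy. A hypothetical sketch in Python (the three efficiency figures below are assumptions chosen purely for illustration, not measured values for any real system):

```python
def delivered_power(input_w: float, *efficiencies: float) -> float:
    """Multiply an input power through a chain of conversion efficiencies."""
    power = input_w
    for eff in efficiencies:
        power *= eff
    return power

# Assumed chain: electricity -> laser (30%), beam through the air (90%),
# photovoltaic cell back to electricity (40%).
out = delivered_power(1000.0, 0.30, 0.90, 0.40)
print(out)  # roughly 108 W delivered from 1000 W -- about 11% end to end
```

Even with generous assumptions at every stage, the end-to-end efficiency of such a chain stays far below that of an ordinary copper wire.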

From an economic perspective, a transition from power grids to wireless power would entail uprooting and replacing a whole lot of infrastructure. The capital cost of a wireless power transmission system could be far greater than we imagine. Do not forget that Tesla went bankrupt building an experimental setup for wireless power.

Better Sort the Clutter

Nearly a century ago, Nikola Tesla attempted in vain to bless the world with cord-free electricity. Regardless of whether the Tesla tower was a commercial failure or a technological glitch, the fact is that in the twenty-first century, we still have to live with a clutter of cables and wires, which often get frustratingly entangled. While a few small gadgets are successfully powered up without connecting cords, that is plug-free charging, not wireless electricity.

Many researchers are studying ways to enable wireless electricity, but these studies are still restricted to laboratory-scale experiments. While the commercial success of these technological endeavors remains dubious, the health and environmental impacts of widespread high-energy waves are potentially hazardous. Apart from the technological complications and related health and safety hazards, wireless power transmission might simply not be commercially viable.

Instead of waiting for wireless charging of your gadgets, better sort the clutter under your table.

Time Machines: Traveling through Four Dimensions

Is Time Travel Merely a Science Fiction? Or Can We Really Travel Back into Past or Move Forward into Our Future? 



AROUND THE YEAR 250 CE, a group of pious youths, fleeing the tyranny of Decius, the reigning Roman emperor, took refuge in a cave near Ephesus, an ancient Greek city in present-day Turkey. They had religious differences with the monarch, and he had ordered their persecution. Upon arriving at the cave, the escapees were so weary that they fell asleep.

When they woke up, they felt that they had slept for only a few hours. They sent one of their number to the city to buy food. However, the shopkeeper was amazed at the ancient coins the man possessed. Upon inquiring further about the ruler and the whereabouts of the city, the group realized that they had slept for nearly 300 years.

This story has been mentioned in various religious scriptures with varying particulars. Irrespective of the specific details of the anachronistic miracle, the account of cave sleepers takes us to the central theme of this article: traveling through time.

For the cave sleepers, time had stopped while it ran as normal for the rest of the world. Such an anachronism must have been a shocking predicament for men least expecting it. Nonetheless, modern humans have been profoundly charmed by the concept of time travel, as it promises unprecedented powers. Traveling back into the past, you could correct your mistakes; moving forward through time, you could look ahead and chalk out flawless future plans.

The term time machine was coined by H.G. Wells, a prolific British writer, in his 1895 novel The Time Machine. Since then the time machine — the imaginary device that makes time travel possible — has been a staple of science fiction. In reality, however, it has remained just that: fiction. For decades, time travel lay beyond the fringes of respectable science, viewed mostly from a recreational perspective.

After a relatively dry technical discourse, this article is intended to take you on an entertaining journey through time, both backward and forward. But before that, let us look at time through an unorthodox lens.

The Fourth Dimension

Imagine a two-dimensional living being — yes, just like the one you see in a passport photograph. The poor creature, if we could have one, would carry the burden of a terrible existence: with no third dimension, it would have to take in food and expel waste through the same opening. Luckily, this is just a bizarre thought experiment; the reality we know is three-dimensional, and three dimensions appear to be the minimum for life to exist.

In the early part of the twentieth century, Albert Einstein suggested the idea of a four-dimensional space. All of us are familiar with the three spatial dimensions (length, breadth, and height); Einstein combined time with these to form a four-dimensional world called spacetime.

In his special theory of relativity, Einstein proposed that the measured interval between two events depends on how the observer is moving; an observer moving relative to another observer will experience different durations between the same two events. Thus, according to Einstein, time is variable and ever changing. It even has a shape. It is bound up with the three dimensions of space through an inescapable linkage.

The notion of four-dimensional spacetime is beyond the imagination of any ordinary human, for we live in three-dimensional space from cradle to grave. It is our ingrained instinct to regard time as eternal, absolute, immutable — nothing can affect the steady tick of the clock. We think of time as a universal quantity, measured the same way across the cosmos. It seems hard to accept that different observers could measure different time intervals between the same pair of events.

For this very reason, spacetime is one of the most non-intuitive concepts in physics; it was a formidable intellectual leap for a young man staring out the window of a patent office in Bern, the Swiss capital. Nevertheless, Einstein’s imaginative ingenuity was not limited to time; he also postulated a novel concept of gravity in his general theory of relativity.

Imagine a trampoline with an iron ball resting in the center. The weight of the iron ball causes the fabric beneath it to stretch and sag slightly. Now if you roll a smaller ball across the trampoline, it tries to travel in a straight line at constant speed, following Newton’s laws of motion; but as it nears the massive object and meets the slope of the sagging fabric, it rolls downward, drawn towards the more massive object. In this analogy, the movement of the smaller object towards the more massive one is gravity.

According to Einstein, gravity is a consequence of curvature or bending in spacetime (the trampoline surface in this case). Every object having mass creates a little depression in the fabric of the cosmos. The trampoline example is roughly analogous to the effect that a massive object such as the Sun (the iron ball) has on spacetime (the fabric) and on smaller bodies such as the earth and the other planets (the small ball): it causes the fabric of spacetime to stretch and curve. This curvature of spacetime is called a warp.

The spacetime warp is the basis of certain current theoretical concepts that hint at the possibility of building a time machine. But before getting into the details of a time machine, it is worth saying a few words about another ubiquitous reality: light.

An equally significant consequence of Einstein’s theories is that the speed of light is invariable. While two observers moving at different speeds relative to each other will measure different time intervals between the same sequence of events, the speed of light for both will remain the same — it is a universal constant in vacuum (light does travel more slowly through a medium such as water).

According to E = mc² — probably the world’s most famous equation — the speed of light (c) plays a crucial role in mass-energy conversion. The significance of this role became evident during the Second World War, when humanity witnessed the nuclear tragedies of Hiroshima and Nagasaki. Further conversation about those sad episodes, however, would be an ill-timed digression.
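
As a rough illustration of what the equation implies, here is a quick back-of-the-envelope calculation (the constants are standard; the one-gram figure is chosen arbitrarily):

```python
# Energy equivalent of one gram of matter via E = m * c^2
c = 299_792_458          # speed of light in vacuum, m/s
m = 0.001                # mass in kilograms (one gram)

E = m * c ** 2           # energy in joules
print(f"E = {E:.3e} J")  # roughly 9.0e13 J

# For perspective: one ton of TNT releases about 4.184e9 J
tnt_tons = E / 4.184e9
print(f"= about {tnt_tons:,.0f} tons of TNT")  # about 21,500 tons
```

A single gram of fully converted matter yields the energy of a small nuclear weapon — which is why c² matters so much.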

How to Build a Time Machine

Though the intended function of a time machine is obvious — to transport us into the past or the future — how such a miracle might come about is obscure. To tackle the matter, we can split the intended function into two sub-functions: traveling to the future and going back into the past.

In theory, travel to the future seems less daunting; many theoretical physicists concede its possibility, provided the practical challenges can be overcome. The key idea is to slow down time. There are two ways of impeding the passage of time: by moving at speeds close to that of light, or by exploiting intense gravity.

As explained earlier, Einstein’s special theory of relativity states that two observers moving relative to each other will measure different time intervals for the same set of events. This can be elaborated through a famous thought experiment known as the “twin paradox”.

Suppose you have a twin sister. If you could travel through space at enormous speed while she stayed behind, the ticking of the clock would slow down for you; your hour would stretch longer than your sister’s. This is called time dilation.

As a consequence of time dilation, less time will have passed for you than for your sister — on returning, you will find yourself in her future, having aged more slowly than your twin. For the effect to become noticeable, however, you would have to move very fast: at a substantial fraction of the speed of light.
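
The twin paradox can be made concrete with a small calculation using the standard time-dilation (Lorentz) factor; the 90-percent-of-light-speed figure and the ten-year voyage below are arbitrary illustrative choices:

```python
import math

def dilation_factor(v_fraction):
    """Lorentz factor gamma for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_fraction ** 2)

# Traveling twin moves at 90% of light speed for 10 years of her own time
traveler_years = 10
gamma = dilation_factor(0.9)
earth_years = traveler_years * gamma

print(f"gamma = {gamma:.3f}")                   # about 2.294
print(f"Sister ages {earth_years:.1f} years")   # about 22.9 years
```

At 90 percent of light speed, a decade aboard the ship corresponds to nearly 23 years on earth; the traveler has leapt about 13 years into her sister’s future.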

Time dilation can also be approached through gravity. Einstein’s general theory of relativity presents gravity as a consequence of spacetime bending or warping. Going back to the trampoline analogy, the smaller ball rolled onto the fabric tries to travel in a straight line at uniform speed; but the warping produced by the mass of the larger ball makes the smaller ball change direction and travel towards it, slowing down as it descends into the valley created by the larger ball’s weight.

If we replace the larger ball with an even larger one, a deeper sag in the fabric further enhances the gravitational effect, causing the smaller ball to collide with the large one and eventually come to a halt. The small sphere would then sit almost static on the spacetime canvas while the rest of the world moved on as normal. This is reminiscent of the story of the cave sleepers narrated at the beginning of this article.

Having demystified the secret of future travel, let us now take on the more challenging task of traveling back into the past. The most plausible way to reach bygone times is to create shortcuts between two widely separated points in space. These short paths are called wormholes.

Suppose you are roaming through a mountain range. As you traverse its peaks and valleys, you come across a straight tunnel passing through the heart of a mountain. Unless you are a daredevil, you will be pleasantly surprised to have found a shorter way.

Since massive objects are scattered across the universe, the fabric of spacetime is warped and curved just like the peaks and valleys of a mountain range. A wormhole — also called a star gate in science fiction — is like a shortcut tunnel through the curvature of spacetime. Scientists have suggested that a hypothetical wormhole could constitute a time machine enabling two-way time travel, both to the past and to the future.

A wormhole time machine can be created in three steps:

One: Create a wormhole — a shortcut tunnel — through space.

Two: Place one mouth of the wormhole near a gigantic object such as a neutron star. The immense gravity of the star will slow down time at this point of spacetime.

Three: Place the other mouth of the wormhole at another carefully selected point with different gravity and hence a different spacetime warp. Consequently, time will flow at a different rate at that second point.

Your time machine is ready. If you are located near the neutron star, you are already in the future relative to the other chosen point. Just enter through the wormhole tunnel and you will exit at the other end at an earlier point in time — you have made it to your past.

Unfortunately, all of the above is a scheme that exists only in some tedious mathematics. Certain nearly impossible tasks must be accomplished before the fantasy of time machines can be realized. The rest of this article describes those obstacles.

There is a Universal Speed Limit

In the preceding section, it was explained that one way to achieve time dilation and future travel is to flash along as fast as light. In reality, the maximum speed we have ever attained is 24,791 miles per hour (NASA’s Apollo 10 moon mission). By comparison, the speed of light is 186,000 miles per second (not per hour). With a record speed of 24,791 miles per hour, or about 6.9 miles per second, we have reached only about 0.004 percent of light speed; we would need to move some 27,000 times faster than our previous record.
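
The arithmetic above can be checked in a few lines:

```python
# Checking the speed comparison in the text
apollo_mph = 24_791                 # Apollo 10 record speed, miles per hour
light_mps  = 186_000                # speed of light, miles per second

apollo_mps = apollo_mph / 3600      # convert to miles per second
ratio = apollo_mps / light_mps

print(f"{apollo_mps:.1f} mi/s")                  # 6.9 mi/s
print(f"{ratio * 100:.4f} percent of c")         # 0.0037 percent
print(f"{1 / ratio:,.0f} times faster needed")   # about 27,000
```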

Do you think we can improve the design of our space vehicles to approach the speed of light? No, we cannot; not even in theory. The very relativity theory that links time dilation with near-light speed also postulates that the speed of light is the universal speed limit: no material object can reach it, let alone exceed it.

Suppose we wanted to build a space shuttle that could travel as fast as light. Einstein’s special theory of relativity states that as an object’s speed increases, its (relativistic) mass increases. We notice no such increment at our marginal speeds, but an object approaching the speed of light would see its mass grow without bound — at light speed it would be infinite. Moving an infinite mass through space would require an infinite amount of energy. We are already in the midst of an energy crisis!
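
To see how the mass blows up near light speed, here is a small numerical sketch of the relativistic mass formula m = m₀/√(1 − v²/c²); the chosen speeds are arbitrary illustrative values:

```python
import math

def relativistic_mass(rest_mass, v_fraction):
    """Relativistic mass m = m0 / sqrt(1 - v^2/c^2), with v as a fraction of c."""
    return rest_mass / math.sqrt(1.0 - v_fraction ** 2)

m0 = 1.0  # rest mass in kg
for v in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {v:>6}c -> m = {relativistic_mass(m0, v):8.2f} kg")
```

A one-kilogram object weighs in at roughly 2.3 kg at 0.9c, about 7 kg at 0.99c, and over 70 kg at 0.9999c; the closer v gets to c, the faster the mass diverges.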

You might ask how light particles manage to achieve this incredible speed. The answer is that light particles — photons — possess no mass. The same hand that created space and time billions of years ago imparted photons this exceptional speed.

Don’t Mess with Gravity

Constrained by the universal speed limit, one might think of deploying gravity for time dilation. But this route is not open either.

Consider a massive object such as a neutron star. At its surface, gravity is so strong that time is slowed by roughly 30 percent relative to earth time. Viewed from such a star, earthly events would resemble a fast-forwarded video. But standing on such a massive object is next to impossible. To understand why, it is useful to go back to the creation of the universe.
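
The quoted slowdown can be sanity-checked with the standard gravitational time-dilation formula; the mass and radius below are typical textbook values for a neutron star, not measurements of any particular star:

```python
import math

# Gravitational time-dilation factor sqrt(1 - 2GM/(r c^2)) at a star's surface
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 1.4 * 1.989e30   # assumed neutron star mass: 1.4 solar masses, kg
r = 10_000           # assumed neutron star radius: 10 km, in meters

factor = math.sqrt(1 - 2 * G * M / (r * c ** 2))
print(f"Clock rate at surface: {factor:.2f} of Earth rate")
print(f"Slowdown: {(1 - factor) * 100:.0f} percent")  # about 23 percent here
```

With these assumed values the slowdown comes out near 23 percent, in the same ballpark as the roughly 30 percent figure above; the exact number depends on the star’s mass and radius.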

Some 13.8 billion years ago, time, space, energy, and matter came into existence in a gigantic explosion. It all started with an infinitesimally tiny, dimensionless dot — imagine something billions of times smaller than a proton. Now assume that the mass of the whole universe was compacted into this tiny dot. Such a dot would have nearly infinite density and gravity. This is called a singularity.

At the moment of creation, there was no space, no time, no matter, and no energy; the singularity was the only presence. Then a sudden expansion created the universe in a single blinding pulse. This sudden expansive burst is called the Big Bang. That is how contemporary scientists describe the creation of the universe.

Getting back to the neutron star where you plan to stand to relish your slowest hour: the matter in a neutron star is so dense that all the people on earth could be fitted into a teaspoon of this exotic material. Though this is far from a singularity, such density yields unimaginable gravity.

According to Einstein, gravity causes spacetime warping, and the intense gravity of a collapsed star creates a whopping curvature in spacetime. If the star is massive enough, the collapse continues beyond even the neutron-star stage, creating a hole deep enough to swallow anything in no time. This hole is called a black hole — nothing, not even light, can escape its warping; hence the name (though a dark hole might be the better term).

A black hole represents the ultimate time warp; at its boundary, time stands still relative to Earth. However, trying to approach a black hole would be foolhardy; its immense gravity would pull you in within a split second, before you could witness the fast-forwarded happenings of your twin sister.

The Chronological Dilemma

Assume that we have discovered a way to create “antigravity”, enabling us to traverse a wormhole without being swallowed by a black hole. Now imagine a time traveler who journeys a couple of decades into the past and murders his grandparents before his father is born. If his father is never born, how can the traveler come into the world? And if the traveler is never born, who murders the grandparents?

This is one of the mind-boggling paradoxes that defy the causality principle, which states that an effect cannot occur before its cause. In this case, the procreation between the grandparents is the cause of the time traveler’s existence. How can the traveler exist before the procreation takes place? There is an unfathomable tangle in the chronology of events.

Proponents of time travel have proposed a curious solution to this chronological dilemma. They argue that a time traveler will simply be unable to perform any action that disturbs the causality loop. Thus, going back into the past, the traveler could not do anything that hinders his own existence, including murdering his grandparents. In other words, people traveling into the past or the future will not enjoy the free will that their present offers.

Living in the Present

Human beings face enormous difficulty in living and thinking in the present. They either remain engrossed in their previous blunders or are found daydreaming about optimistic future plans. In doing so, they miss the most governable part of their life — the present.

Being able to travel through time has been one of the most fascinating dreams of mankind. Scientists have suggested various theories pointing to the possibility of time travel; however, these are theories only. We cannot exceed the universal speed limit, nor can we defy gravity. Even if we could, there remain unresolved paradoxes. Building any kind of time machine is, therefore, merely a dream, highly unlikely to be realized.

Better to live in the present!

Zero Pollution Engines: A Tale of Environmental Redemption

Living in the Post-Industrial Era, Can We Eliminate Environmental Pollution? Is It a Realistic Goal? 


ON 5 DECEMBER 1952, when Londoners awoke, yawning and rubbing their eyes, they could see the sun dawning in a clear sky. But as they mingled with the bustle of their banal metropolitan schedules, a light veil of fog began enshrouding the British capital. By the afternoon, the fog had turned into a yellow haze, blended with thousands of tons of smoke pushed into the skies of London by its innumerable coal fires and industrial engines.

For London’s inhabitants, fog was no wonder, but this soot-laden mixture of fog and smoke — smog — was beyond anything in their experience. It was so dense that people walking in the street were unable to see their own feet. Worse, it carried acidic particulates discharged from coal-burning power plants, causing breathing problems for the populace.

Exacerbating the anguish, a high-pressure air system parked over London prevented any wind from sweeping away the polluted mist. Four days later, when the toxic smog eventually lifted, it had claimed 4,000 lives, with 150,000 citizens hospitalized. Later estimates put the actual death toll at more than 12,000.

Though the killer smog of London was a hapless episode, the city could blame no one but itself. The culprit was the consumption of fossil fuels, burnt to meet the limitless energy needs of London’s dwellers. Regrettably, the United Kingdom is not the only place where such environmental disasters can occur; other industrialized nations such as China and the United States are equally vulnerable. In fact, of the world’s twenty most polluted cities, sixteen are in China.

The demon of environmental pollution was born as the industrial revolution got underway during the eighteenth century; it entered puberty during the twentieth century; and at the dawn of the twenty-first century, pollution had taken the form of a looming monster. An unwanted offspring of industrialization, the menace of environmental pollution remained largely overlooked till events like the London Smog occurred.

The killer smog of London forced the British Parliament to pass the Clean Air Act in 1956. Since then, realizing the hazard, the international community has taken many steps to abate environmental degradation due to fuel combustion. However, some environmentalists assert that it is now too late to act; the monster has been unleashed, and reducing environmental pollution to zero, or to pre-industrial levels, is next to impossible.

The last two articles concerned human energy needs, both internal and external. This one is concerned with our environment and how we have impacted it. Can humankind redeem itself for its ecological crimes? Is zero pollution achievable? These are the themes of this article. Before learning about the remedy, however, it is useful to know the history and symptoms of the disease.

Boons and Banes of Industrial Revolution

When human society took root almost 12,000 years ago — with the agricultural revolution — technology was confined to a handful of tools for hunting and making fire. Since then, no other event in recorded history has impacted human lifestyles as profoundly as the industrial revolution.

The industrial revolution began in the mid-1700s in England and around the mid-1800s in the United States. Though industrialization had kick-started earlier, it would not be an exaggeration to say that James Watt’s improved steam engine, patented in 1769, provided the real push that sped the revolution up to full throttle.

The advent of the industrial revolution brought rapid economic change that altered all aspects of human life. As new mass-production centers were established, innovative inventions brought a myriad of conveniences into people’s lives. Thanks to this extraordinary technological progress, newer and cheaper products became available to faraway communities, improving their standards of living. With unprecedented scientific advancement, disease diagnosis and medical care also improved.

The establishment of centralized production systems created the need to transport finished goods to local, regional, and global markets. The resulting development of transportation — trailer trucks, trains, and ships — along with associated activities like road construction and fuel processing had serious environmental repercussions: air pollution, depletion of natural resources, and habitat destruction.

With the dawn of industrialization, rural and suburban areas were swallowed by urban centers. As transportation networks expanded, traffic congestion, noise pollution, and air contamination emerged as serious environmental issues. More road vehicles meant more emissions, new road construction, and greater demand for oil exploration — in addition to increased carbon dioxide emissions and a greater potential for the greenhouse effect.

Before industrialization, people preferred products that could last longer; they tended to repair rather than replace goods. As the industrial revolution promoted consumerism, nearly every city and town established an open-air dump, where citizens brought items that could not be reused, sold, or recovered. With population growth, people produced more trash, turning city dumps into mountains of stinking, toxic garbage.

Prior to the industrial revolution, major health threats were linked to lack of sanitation. The introduction of new products brought additional pollutants like CFCs, volatile organic compounds, soot, and sulfur oxides. Besides the health hazards linked to pollution, the quest for more and better products at cheaper rates resulted in harsh working conditions, inappropriately long working hours, and child labor.

Starting from 1769 — the year James Watt patented his steam engine — world annual coal production increased roughly 800-fold by 2006, and a decade later consumption is still on the rise. Other fossil fuels are being extracted apace. These fuels have been consumed ruthlessly over the last two centuries to raise human standards of living, but not without a painful tradeoff.

Having reaped the benefits of industrialization, humankind opened its eyes in the twenty-first century to an ecological calamity. This calamity has myriad facets, with sources both diverse and dispersed. Photochemical smog is not the only form of environmental pollution; it is just one symptom of a deeply rooted disease. But before a detailed examination of the ecological illness, let us consider the portentous threats lurking over the global climate.

The Climate Catastrophe

Human-induced climate change can be explained in three simple steps:

One: Anthropogenic combustion of fossil fuels during and after the industrial revolution has caused increased levels of carbon dioxide in the atmosphere.

Two: Carbon dioxide is a greenhouse gas that traps heat in the atmosphere.

Three: Heat-trapping due to greenhouse effect has increased average global temperatures. This is called global warming.

What follows these simple steps, however, is of enormous consequence. Warmer future climates will accelerate the melting of glaciers, raising sea levels and flooding coastal cities and densely populated river deltas. The subsequent lack of hygiene — stagnant ponds, heaps of sewage, and poor relief facilities in underdeveloped countries — is likely to cause more waterborne diseases.

Changes in climate temperatures and rainfall patterns might affect crop production, creating issues of food security, especially in poor countries. Although developed nations are less likely to be impacted by these effects, they may have to expand their arable land.

Scientists claim that the rising levels of carbon dioxide in the atmosphere and the resulting heat waves during summer are likely to increase heat-related illnesses and deaths. Elderly people and children are vulnerable to these effects in particular. The heat wave that killed around 35,000 people in Western Europe in 2003 is a horrific example.

Even if we dismiss these climate threats as exaggerated, the current state of our environment is far from commendable. In our quest for magnificence, we have transformed the only known life-bearing planet into an ailing one.

The Ailing Planet

As we relish the luxuries of modern life, power engines that enable these luxuries also spew various pollutants to the atmosphere. Sulfur and nitrogen oxides emitted from fossil fueled power generators, industrial steam boilers, and vehicular engines combine with water in the atmosphere to produce dilute solutions of sulfuric acid, nitric acid and nitrous acid. These acids are precipitated back to the ground, making surface water and soil more acidic. This phenomenon is called acid rain.

The connection between acid rain and declining aquatic animal populations is indubitable. Toxic metals such as aluminum dissolve in acidic lakes; the increased concentration of these toxic metals adversely affects fish and other aquatic species. In addition, acidic water kills the small plants that feed fish, thereby affecting the whole food web.

Studies have indicated that birds living in areas of pronounced acid deposition are more likely to lay eggs with thin, fragile shells that may break or dry out before hatching. The problem is attributed to a reduced proportion of calcium in the birds’ diet: acidic water leaches calcium from the soil, so plants lack calcium, as do the insects eating those plants. When birds eat these calcium-poor insects, they in turn face a calcium deficit.

In addition to the detrimental effects of acid rain on animals and plants, precipitation of acidic substances corrodes building materials and metals. Historic sites in Venice and Rome are known to be worn away by acid deposition. Another example is the destruction of ancient Mayan ruins in southern Mexico, which is attributable to acid rain caused by uncapped emissions from oil wells in the Gulf of Mexico.

Apart from other pollutants, most coal-fired power plants are major mercury emitters. Mercury is present in coal in trace amounts and is released into the atmosphere during combustion. Mercury is a neurotoxin; if deposited in an aquatic environment as methyl mercury, it can accumulate in invertebrates and fish and damage their neural tissues.

During the last quarter of the twentieth century, scientists identified holes in the ozone layer. Ozone is a form of oxygen containing three oxygen atoms instead of two. It occurs naturally in the stratosphere, between 19 and 30 kilometers above the earth, where it is produced as oxygen molecules split apart in sunlight and the freed atoms recombine in threes.

Ozone in the stratosphere protects earth and its inhabitants from high-energy, carcinogenic ultraviolet radiation. Certain chemicals called CFCs (chlorofluorocarbons), contained in refrigerants, aerosol sprays, coolants, and fire-extinguisher blowing agents, have been found to react with stratospheric ozone, punching holes in this protective shield.

Speaking of land pollution, the garbage accumulating in our surroundings can also have continuing effects on human health and the environment. Various studies have indicated that there are health risks for people who live near landfills, including increased rates of certain types of cancer. Some of the chemicals present in the garbage have the potential to contaminate underground water reserves and pollute the atmosphere.

Phthalate, a chemical present in plastic wrappings, soft plastic toys and plastic medical equipment, is known to interfere with human hormone functions. Similarly, industrial solvents like trichloroethylene, an artificial chlorinated solvent widely used in industry to remove grease from metal parts and textiles, and perchloroethylene, a chemical mainly used as a dry-cleaning agent, are of great concern because they are potentially carcinogenic.

The past few decades have witnessed amazing advances in technology, especially in electronics. Despite the remarkable facilities these advances offer, they have given birth to a new type of hazardous waste: e-waste. E-waste refers to consumer electronics that have been discarded or rendered obsolete. These discarded items contain numerous toxic substances and are accumulating rapidly in our surroundings.

It has been estimated that around 50 million tons of electronic products are discarded annually around the world. Most electronic waste is produced by developed nations and later exported to developing countries for disposal. Since government regulations in these countries are either absent or unenforced, used electronic products are often easily accessible to the general public, who are thus exposed to the associated health hazards.

The primary concern with e-waste is its hazardous content. Studies indicate that more than 1,000 chemicals, including chlorinated solvents, PVC plastics, and various gases, are used in manufacturing electronic products and their components. For instance, CRT computer monitors typically contain 4 to 8 pounds of lead, a heavy metal known to cause brain damage in children.

Like monitors, flat-panel TVs contain significant amounts of mercury, a potent neurotoxin. Switches and batteries contain cadmium and nickel, which are toxic to humans, animals, and plants alike. Metal housings and joints, often coated with chromium for corrosion protection, cause liver and kidney toxicity. Similarly, beryllium dust generated from relays, connectors, and motherboards is highly poisonous when inhaled.

A lesser-known form of environmental pollution is thermal pollution. The cooling towers used in power plants release heat directly into the atmosphere, raising local air temperatures considerably.

Water heating due to thermal pollution also alters marine ecology to a great extent. Since hot water holds less oxygen, many aquatic species struggle to survive. Similarly, when a nuclear plant is shut down for maintenance and then suddenly restarted, the temperature of the water in nearby lakes changes abruptly. These thermal shocks can be lethal to certain aquatic species.

According to the World Water Council, 3,900 children die every day from waterborne diseases. Water pollution is one of the most egregious environmental problems and is indicative of the misuse of the planet’s resources. Water pollution refers to any physical, chemical, or biological change in water quality that adversely affects living organisms or makes the water unsuitable for its intended purposes. Wastewater discharged from various sources contains many pollutants that create serious health hazards for humans.

Infectious diseases are among the most serious consequences of water pollution, especially in developing countries, where sanitation may be inadequate or non-existent. Waterborne diseases occur when parasites or other disease-causing microorganisms are spread via contaminated water. The resulting illnesses include typhoid, intestinal parasites, and most of the enteric and diarrheal diseases caused by bacteria, parasites, and viruses.

Pills of Recovery

Over the last couple of decades, we have tried various remedies to cure our ailing planet. Let us have a synopsis of these recovery pills, starting with coal-fired power plants.

Several solutions have been proposed for reducing the environmental impact of coal burning in power plants: increasing the efficiency of power generation; retrofitting old plants with newer, more efficient technology; carbon sequestration; promulgation of carbon taxes; switching to low-sulfur and low-nitrogen coals; and adopting clean-coal technologies such as fluidized-bed combustors and flue-gas scrubbers.

Carbon sequestration refers to the removal of carbon dioxide from the atmosphere. Alternatively, carbon can be captured directly at industrial emission sources such as fossil-fuel power plants. After capture, the carbon dioxide is stored in deep saline aquifers, old oil fields and coal beds, or the deep ocean. In suitable storage sites, carbon can be retained for decades or even centuries.

One of the most popular economic strategies to discourage excessive fuel consumption is the carbon tax: a levy is added to the price of carbon-based fuels, making them more expensive so that consumers use them more prudently. The revenue generated can then be invested in cleaner energy resources.

Another market-based tool to reduce greenhouse emissions is carbon trading. Unlike a carbon tax, which is a rigid levy on carbon-emitting fuels, carbon trading proposes a more flexible economic solution to global warming.

Carbon trading works much like trading in other commodities: a governing body sets a cap on the level of emissions and issues allowances to emitters, and the members of the scheme can buy and sell those allowances among themselves. For example, a company that produces too many emissions can purchase allowances from a company emitting less than its entitlement; in this manner, achievement of the overall target is ensured.
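
The settlement logic described above can be sketched in a few lines. The firms, cap, and price below are invented round numbers purely for illustration, not figures from any real scheme:

```python
# Hypothetical cap-and-trade settlement between two emitters.
# All figures are illustrative, not drawn from any real scheme.

CAP_PER_FIRM = 100_000  # allowances issued per firm, in tonnes of CO2

firms = {"Firm A": 130_000, "Firm B": 60_000}  # actual emissions (tonnes)

# Each firm's position relative to its allowance:
# negative means it must buy, positive means it can sell.
positions = {name: CAP_PER_FIRM - emitted for name, emitted in firms.items()}

price_per_tonne = 25.0  # assumed market price in dollars

shortfall = -positions["Firm A"]          # 30,000 tonnes Firm A must buy
cost_to_a = shortfall * price_per_tonne   # what Firm A pays
revenue_to_b = cost_to_a                  # what Firm B earns for its surplus

total_emissions = sum(firms.values())
total_cap = CAP_PER_FIRM * len(firms)

print(f"Firm A buys {shortfall:,} allowances for ${cost_to_a:,.0f}")
print(f"Total emissions {total_emissions:,} t vs cap {total_cap:,} t")
# The overall cap still binds: trading only redistributes who emits.
```

Note the key property: money changes hands, but total emissions can never exceed the cap, which is exactly how the overall target is guaranteed.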

Trading in gas emissions began in the United States during the 1990s, when the government imposed a cap on sulfur dioxide emissions from power plants. However, the UK was the first country to implement an economy-wide carbon trading scheme, in March 2002, and was able to reduce carbon dioxide emissions by 5.9 million tons in just over three years.

At the dawn of the third millennium, TransAlta, Canada’s second largest emitter of greenhouse gases, announced a voluntary plan to reduce its carbon dioxide emissions to zero by 2024. Of course, a zero-emission target can only be achieved in practice through carbon trading. A similar example is the Chicago Climate Exchange, a pilot project that trades carbon emissions like a stock market; several renowned companies, including Rolls-Royce, Ford, Motorola, and IBM, are its members.

After industrial power units, the second largest, and relatively mobile, pollution sources are road vehicles. The pollutants emitted by car exhausts depend on several factors and can be reduced substantially if appropriate measures are taken. Vehicles with better emission control systems have been designed to address the problem of air pollution.

According to the US Environmental Protection Agency, today’s passenger cars emit 90 percent less carbon monoxide than their counterparts of the 1960s, mainly due to the introduction of catalytic converters for emission control.

In 1985, a convention was held in Vienna to investigate the cause of ozone depletion. Two years later, 30 nations gathered in Montreal and jointly declared chlorofluorocarbons the leading cause of the ozone hole in the stratosphere. This declaration is famously known as the Montreal Protocol. Today, the gradual elimination of ozone-depleting chemicals is being pursued across the globe.

Although open-air dumps are still a common sight in some third-world countries, most dumps in the developed world have been cleaned and covered up. In some places, they have even been transformed into parks, housing colonies, or commercial establishments. The garbage produced by citizens is now buried in covered dumps called landfills, hidden from public view.

Recycling of garbage is preferable even to landfill disposal, as it is environmentally benign and facilitates the conservation of natural resources such as trees and water. By turning wastes into useful products, recycling saves the energy required to make new raw materials. It also saves valuable landfill space. It has been estimated that every ton of recycled paper saves 17 trees, 7,000 gallons of water, 4,100 kilowatt-hours of energy, and three cubic yards of landfill space.
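
The per-ton figures quoted above scale linearly, which makes it easy to estimate the impact of any recycling program. The 500-ton town below is an assumed example, not a real case:

```python
# Scaling the per-ton recycled-paper savings quoted in the text.
# The figures come from the text; the 500-ton town is a made-up example.

PER_TON = {
    "trees": 17,            # trees saved per ton of recycled paper
    "water_gal": 7_000,     # gallons of water
    "energy_kwh": 4_100,    # kilowatt-hours of energy
    "landfill_yd3": 3,      # cubic yards of landfill space
}

def savings(tons_recycled: float) -> dict:
    """Estimated resources saved by recycling the given tonnage of paper."""
    return {resource: rate * tons_recycled for resource, rate in PER_TON.items()}

town = savings(500)  # a hypothetical town recycling 500 tons per year
print(f"{town['trees']:,} trees and {town['water_gal']:,} gallons of water saved")
```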

Although water pollution has risen to threatening levels, some commendable control measures have also been demonstrated. A triumphant example is that of the River Thames in England. After the industrial revolution, the Thames became an easy drain for toxic wastes from domestic and industrial sewers. Beginning in the 1950s, however, England carried out a massive cleanup funded by millions of pounds contributed by both the public and industry. By the 1980s, the river showed remarkable improvement, and 95 fish species, including the pollution-sensitive salmon, had returned to it.

Is Zero Pollution Achievable?

Though humankind has made several laudable efforts to abate the hazards of environmental pollution, reducing it to pre-industrial levels is altogether a different matter. The task at hand is much more perplexing than you might think.

More than eighty percent of our industrial and domestic energy needs are met by burning fossil fuels. All fossil fuels are organic in nature, so carbon emissions are a natural outcome of their use. Moreover, these fuels contain other elements, such as sulfur and nitrogen, so their burning produces additional harmful pollutants as well.

Another key challenge is that the combustion of these fuels is imperfect. From motor vehicles to most of the electricity produced at power plants, the primary form of energy involved is heat. A device that converts heat into other useful forms of energy is called a heat engine. A common example is a car engine, in which heat energy released by fuel combustion is converted into mechanical energy, or motion.

Like any real-world process, the combustion of fuels and the subsequent conversion of heat into other energy forms are imperfect. The inherent inefficiency of energy conversion results in heat losses to the environment. As explained earlier in this book, irreversible heat loss, or heat death, during energy conversion is inevitable. And because heat death is a universal reality, thermal pollution is an unavoidable phenomenon.
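
This inherent limit can be made concrete with the classical Carnot bound: no heat engine operating between a hot source and a cold sink can exceed the efficiency 1 - T_cold/T_hot. The temperatures below are illustrative round numbers, not data for any particular plant:

```python
# The Carnot limit quantifies the unavoidable heat loss described above:
# no heat engine working between T_hot and T_cold can beat 1 - T_cold/T_hot.
# The temperatures are illustrative round numbers, chosen by me.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of input heat convertible to work (temperatures in Kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# A steam cycle running at ~800 K and rejecting heat at ~300 K:
eta_max = carnot_efficiency(800.0, 300.0)
print(f"Theoretical ceiling: {eta_max:.1%}")
# Real coal plants manage only ~35-45%; the rest leaves as waste heat,
# which is precisely the thermal pollution the text calls unavoidable.
```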

Being carbon-based, all fossil fuels must emit carbon dioxide during combustion. While other pollutants, such as sulfur and nitrogen oxides, can be controlled through technological improvements like fluidized bed combustors and flue gas scrubbers, carbon dioxide is an inescapable product of fossil fuel burning.

We have invented ways to capture carbon dioxide from the atmosphere and dispose of it, but do these methods lead to zero pollution? The environmental implications of carbon storage in deep oceans are a serious concern, as the dissolved carbon dioxide may harm aquatic life. Thus, instead of removing pollution, we are merely displacing it to a location far from our sight.

Critics of carbon taxes and trading argue that these schemes cannot deliver significant reductions in carbon emissions, since they do not discourage the polluting behavior of the emitters. Rather, they reinforce the social inequality between developed nations and third-world countries. In addition, the European Trading Scheme does not cover emissions from the transportation and aviation industries, which contribute almost half of total emissions.

While several European nations, including Denmark, Switzerland, Sweden, Norway, Holland, Finland, Austria, Italy, and Germany, have imposed a carbon tax on their fuel consumers, countries like Great Britain have refused the proposal, doubting that it would achieve the desired results.

Likewise, opponents of recycling question the usefulness of the process. Recycling is a manufacturing process, and like any other manufacturing process it consumes energy; the production of recycled goods leaves its own environmental footprint. Hence, recycling makes only a limited contribution to reducing the volume of generated wastes.

If we lack any viable strategy to eliminate carbon pollution, the obvious solution is to replace fossil fuels with non-carbon alternatives. Various alternatives have been proposed, including nuclear energy for power generation, alternative vehicular fuels such as hydrogen and ethanol, and renewable energy resources such as solar, wind, biomass, and geothermal.

The problem is that none of these carbon-free alternatives is technologically ready to replace fossil fuels, at least not at present. There are many challenges, and whether ongoing research can overcome them remains uncertain. Moreover, despite being less polluting than fossil fuels, these alternatives do not promise zero pollution either; they contaminate the environment in their own ways.

In this state of affairs, the last choice we are left with is to reduce consumption of resources. But how much should we reduce? We are talking about zero pollution here. What will be the economic impacts of such a drastic transition? And are the associated environmental benefits good enough?

Is Zero Pollution Optimal?

After examining the technological difficulties in achieving zero pollution, let us say a few words about the economic repercussions of this noble initiative. Economists have argued that it is not optimal to reduce pollution to zero. The cost of this reduction would probably exceed the benefits.

For instance, critics of the carbon tax argue that it is a regressive tax: by discouraging the use of widely employed carbon fuels, it would set society back several decades and hinder ongoing progress. The United Nations has also objected that a carbon tax is an inefficient way of reducing carbon dioxide emissions in poorer countries, as they lack the resources to set, monitor, and enforce such schemes. Similarly, small-scale recycling is often expensive compared to other waste disposal methods such as incineration or landfill disposal.

Speaking in economic terms, society benefits from a reduction in pollution only if the advantages received exceed the associated costs. Thus, when the cost of abating one more unit of pollution just equals the benefit of abating it, we have reached the point where society’s welfare has been maximized with respect to environmental quality; beyond that point, we should choose to live with the remaining pollution.
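
The economists' argument can be sketched numerically. With a rising marginal cost of abatement and a falling marginal benefit, society's net gain peaks well before 100 percent abatement. Both curves below are invented purely for illustration:

```python
# A toy version of the marginal-analysis argument in the text.
# Both curves are invented for illustration; only their shapes matter:
# abating each extra percent gets costlier, and benefits less.

def marginal_cost(q):       # cost of abating the q-th percent rises steeply
    return 0.5 * q ** 2

def marginal_benefit(q):    # benefit of each extra percent abated declines
    return 4000 - 30 * q

def net_benefit(q_percent: int) -> float:
    """Total benefit minus total cost of abating q_percent of pollution."""
    return sum(marginal_benefit(q) - marginal_cost(q)
               for q in range(1, q_percent + 1))

# Society's welfare peaks where marginal cost overtakes marginal benefit.
best = max(range(0, 101), key=net_benefit)
print(f"Net benefit is maximized at {best}% abatement, not 100%")
```

With these particular curves the optimum lands at 64 percent abatement; the exact number is arbitrary, but the qualitative conclusion, that the optimum lies short of zero pollution, is the point the text makes.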

Zero Pollution Could Mean Extinction

In our quest for glory, we have created an ecological imbalance that remains a threat lurking over current and future generations. During the last two and a half centuries, the excessive combustion of dirty fuels and unrestrained consumerism have pushed environmental pollution past alarming thresholds.

We have made some efforts to abate the hazards of pollution, but it is probably too late. The challenge is much bigger than our marginal efforts, and we cannot roll back the resource-hungry lifestyles that we enjoy.

Reducing environmental pollution to zero, or to pre-industrial levels, would essentially require receding to pre-industrial living standards. This reversal of industry, economy, and society would mean the end of our convenient lifestyles. Since current human generations lack the resilience of our pre-industrial forefathers, the recessive implications of zero pollution could possibly lead to the extinction of the human race.

Human Trees: Food from the Sun

Why Can’t Humans Make Their Own Food from Sunlight, Just Like Trees? 


AROUND 2.5 BILLION YEARS AGO, some smart earthly organisms invented a mechanism to extract energy from incoming sunlight. By harnessing the energy in the sun’s rays and consuming carbon dioxide and water from the environment, they developed an ability to make their own food molecules. This unique capability allowed them to survive for billions of years without leaving their birthplace. These organisms are now called plants and the process by which they manufacture their food is called photosynthesis.

About 600 million years ago, some smarter organisms took on a rather selfish evolutionary path: they opted to be mobile organisms that could feed themselves on the food prepared by plants. Some of them even chose to forage other fellow mobile beings to meet their energy needs. These traveling creatures are called animals.

From a biological perspective, humans belong to the animal kingdom and share several traits with other animals. Unlike plants, and just like other animals, humans cannot make their own food. Like other animals, we have to move around to make ends meet. And perhaps the only benefits we get from direct sunlight are some vitamin D and the release of endorphins, neurotransmitters that relieve pain and induce euphoria.

Many of the modern human technologies, including airplanes and bullet trains, are products of biomimicry. Can humans emulate plants’ photosynthesis to fulfill their own nutritional needs? Can there be any human trees? Before answering these questions, let us examine what it takes to make a plant or tree.

Green Mimicry

In the last article, on perpetual motion machines, we learned that energy can be transformed from one form to another while the total sum is conserved. Photosynthesis is another energy conversion process: it transforms light energy into chemical energy, the energy locked in the chemical bonds of the vegetables and fruits that we get from plants.

The exact biochemistry of photosynthesis is intricate and somewhat irrelevant here so I will not bother the readers with the related chemical equations. Instead, the intention is to elaborate the mechanism using simple prose.

The term photosynthesis is a Greek compound of photo (light) and synthesis (putting together). Like any chemical reaction, photosynthesis requires certain reactants under the right set of conditions. The reactants, in this case, are carbon dioxide and water, which occur naturally in our environment. The rest of the tale revolves around achieving the favorable circumstances.

All green plant cells contain organelles capable of absorbing solar radiation; these give plants their green color and are called chloroplasts. The green pigment inside chloroplasts, called chlorophyll, is responsible for absorbing sunlight for photosynthesis.

As it captures sunlight, chlorophyll converts light energy into a compound called ATP (adenosine triphosphate) through a chemical reaction that splits water into hydrogen and oxygen. This part of photosynthesis is called the light reactions, as it requires sunlight. Oxygen is generated as a byproduct and released into the atmosphere during this daylight activity.

The second stage of photosynthesis does not need solar radiation; hence these reactions are called dark reactions. They combine hydrogen ions (obtained during the light reactions) with carbon dioxide to form glucose; the ATP molecules formed during the first phase energize the dark reactions. The plant may use the fresh glucose immediately or store it as starch for later consumption.

The details of the dark reactions were revealed by Melvin Calvin, an American biochemist who won the 1961 Nobel Prize in Chemistry for discovering the chemical pathways of photosynthesis. In honor of this discovery, the dark reactions are also termed the Calvin cycle.

After delineating the process of plant photosynthesis, let us now muse over the prerequisites for making a human or animal photosynthesizer. To mimic a tree’s photosynthesis, we would need chloroplasts in our cells, which are absent from human anatomy. Some nutritionists suggest that adding chlorophyll to our diet can enable us to take in the energy of the sun; some enthusiasts even hint that chlorophyll is similar to the hemoglobin present in our blood.

Hemoglobin is a protein found in the red blood cells of humans and animals. It contains an iron-rich pigment called heme, which gives our blood its red color. To be fair, chlorophyll and hemoglobin are comparable in some ways; they are remarkably similar in chemical structure. Their functions, however, are distinctly different.

The primary role of blood hemoglobin is to carry oxygen from lungs to body cells. It also transports waste carbon dioxide from body tissues to the lungs from where it is exhaled into the atmosphere. By contrast, plants use chlorophyll to consume carbon dioxide and release oxygen to the atmosphere.

Unlike hemoglobin, chlorophyll releases oxygen as a byproduct; it has no capacity to retain oxygen as hemoglobin does in our bloodstream. Even if it did, the oxygen carried in leafy vegetables could, by reacting with flammable gases, create an explosion in our stomach. Moreover, contrary to some bizarre claims, the human body has no capability to convert chlorophyll into hemoglobin.

To mimic plant photosynthesis in animals or humans, their cells would need to be genetically engineered to introduce an equivalent of the chloroplast. Humans are indeed making breakneck progress in genome engineering, but this is not just a technological contest; there are some treacherous compromises involved as well.

The Cost of Being a Tree

Plants seem to be smart creatures that do not need to strive for their nourishment; they are capable of making their own food “without moving a muscle”. But, given our fidgety nature, being a tree would not be an amusing experience for a human being.

Though originally adopted as an evolutionary strategy, immobility has been the most evident plant trait for billions of years. Even if they desired, trees could not afford the luxury of trotting like horses. The main constraint is the low energy density of the incoming sunlight.

As a matter of fact, photosynthesis is one of the least efficient energy conversion processes. Every year, the sun delivers about 3.8 × 10^24 joules of energy to the earth, of which only 8.4 × 10^21 joules are used for fruitful photosynthesis: an efficiency of 0.22 percent. The remaining 99.78 percent of incoming solar energy goes uncaptured.
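
The efficiency figure follows directly from the two energy totals quoted above:

```python
# Checking the global photosynthesis efficiency figure from the text.
solar_input_j = 3.8e24       # solar energy reaching Earth per year (joules)
photosynthesis_j = 8.4e21    # portion captured by photosynthesis per year

efficiency = photosynthesis_j / solar_input_j
print(f"Global photosynthetic efficiency: {efficiency:.2%}")
print(f"Uncaptured fraction: {1 - efficiency:.2%}")
```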

Thinking like a human tree, this is a shameful performance. But unlike humans, trees are gratified creatures that remain content with their meager existence. What would be more embarrassing is that a human tree, if perchance one existed, would be nightmarishly huge.

An average, healthy human in his or her prime needs around 2,500 Calories of energy on a daily basis. The biggest chunk of this energy is consumed by our extraordinarily large brain: even at rest, its neural activity accounts for around 25 percent of total body energy consumption, and a vexing mental exercise increases this value only slightly.

By contrast, a plant’s energy needs, though exact values are obscure, are relatively small, mainly because plants are static and do not carry a power-hungry thinking machine. A human tree moving here and there while carrying a humongous brain would require a colossal surface area to capture enough sunlight for its energy requirements; consequently, the hybrid would be a clumsy-looking creature, sprawling disproportionately. You would have to stand in the sun throughout the day, and still be unsure about your nocturnal energy needs.
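
A rough back-of-envelope estimate shows just how colossal that surface area would be. The insolation and hours of sun below are my own illustrative assumptions; the 2,500-Calorie need and 0.22 percent efficiency come from the text:

```python
# Back-of-envelope estimate of the leaf area a "human tree" would need.
# Assumptions (mine, for illustration): ~1000 W/m^2 of bright sunlight
# for 8 hours a day, captured at the ~0.22% photosynthetic efficiency
# quoted in the text.

CALORIE_J = 4184.0                  # joules per food Calorie (kcal)
daily_need_j = 2500 * CALORIE_J     # ~2,500 Calories per day, from the text

insolation_w_m2 = 1000.0            # assumed bright midday sun
hours_of_sun = 8.0                  # assumed daylight exposure
efficiency = 0.0022                 # photosynthesis efficiency, from the text

captured_j_per_m2 = insolation_w_m2 * hours_of_sun * 3600 * efficiency
area_m2 = daily_need_j / captured_j_per_m2

print(f"Required collector area: about {area_m2:.0f} square meters")
```

Under these assumptions the answer comes out around 165 square meters, roughly the footprint of a large house, which is why the hybrid would sprawl so disproportionately.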

Another remarkable plant trait is the ability to absorb water from the ground through tissues called roots. Whereas plants have the luxury of absorbing water through roots, humans get water only by drinking. The human body is about 60 percent water; plants, by contrast, are up to 95 percent water. Being a human tree would mean gulping gallons and gallons of water, probably replacing our petty goblets with multi-gallon containers.

A Synergy for Energy

Human imitation of plant photosynthesis may sound like a hallucination. However, some shrewd animals have found clever ways to exploit photosynthetic plants as a source of energy, forging intelligent alliances with tiny plants such as algae to satisfy a small part of their hunger with free food.

The most famous example of such an alliance is the corals, which encourage algae to grow inside their tissues. While the tenants get free accommodation, the owners get to steal some of the energy that the algae make from sunlight. This kind of relationship, in which both organisms benefit, is termed mutualism.

A group of sea slugs, called Sacoglossa, go one step further: they steal chloroplasts from the algae that they eat, incorporating the stolen organelles into their own cells. However, these chloroplasts do not last long, so to replenish their supply, the sea slugs must eat more algae. Not a bad effort though!

Some researchers believe that chloroplasts, the tiny green organelles that are in fact the nanomachines of photosynthesis, were independent organisms eons ago that subsequently took up residence inside the cells of other organisms, where they have lived ever since. If this hypothesis is true, it suggests an interesting scenario in which the human genome could be engineered to induct chloroplasts into body cells.

Successful human trees would not only have to maintain chloroplasts inside their cells but also pass them down to their offspring. This is far beyond current human capabilities; an eventual success, however, could open new horizons for human and animal populations.

The Green Revolution

We owe a great deal to trees. These generous plants take in the carbon dioxide we breathe out and, in exchange, give us the oxygen we breathe in. They help maintain the moisture in the air and keep ambient temperatures from swinging to extremes. In addition, they prepare for us energy-rich organic compounds, such as sugars and starches, which are an essential part of our diet.

Though the suggestion may sound somewhat delirious, what if humans could do all these things themselves? What if a human could take on the traits of a tree while maintaining his or her own individuality? Such a transformation would have profound consequences for humans and their environment.

According to the World Food Programme, every ninth inhabitant of planet earth suffers the torment of undernourishment. A majority of hunger-stricken people live in the poor countries of Asia and Africa, and more than three million children die of malnutrition every year before reaching the age of five. Though we have by and large defeated famines, hunger remains a perilous menace for many.

Human photosynthesizers could not only alleviate hunger; they could also help us combat environmental pollution by drawing carbon dioxide from the atmosphere. Donning an attractive green sheen, humans could use their own carbon dioxide for photosynthesis and their own oxygen for body metabolism. While anthropogenic activities are the most blameworthy cause of rising atmospheric carbon dioxide and the consequent global warming, human trees could atone for the environmental sins of their non-tree forefathers.

The so-called “green revolution”, a state of nutritional autonomy, might cause folks to think differently. People could choose between labor and leisure, as work would become optional, required only to gain fancier possessions and not for food. Some people would work for intrinsic satisfaction while others would prefer to pursue the dreams of their childhood.

The Fourth Phase of Water

Getting back to the realm of reality, where there is no free lunch, researchers are avidly interested in the prospects of human photosynthesis. A group of scientists has hinted at a possible equivalent of chlorophyll in our skin: melanin, a pigment in the outer skin layer that protects the body against ultraviolet radiation. The group claims that melanin could act as a light-capturing antenna, collecting sunlight just like chlorophyll.

Another prominent study accentuates the role of water, the most abundant constituent of the human body, as a light-absorbing substance. From our schooldays, we have learned that water has three phases: solid, liquid, and gas or vapor. But this study claims a fourth phase, called the exclusion zone and abbreviated EZ, so named because this state of water excludes all solutes and does not dissolve anything.

The fourth phase of water has been described as a state between ice and liquid water, a kind of liquid crystal. More viscous, dense, and alkaline than ordinary water, it carries additional oxygen; its proposed chemical formula is H3O2. This phase is said to occur adjacent to hydrophilic (water-loving) surfaces, including our bodies. What is crucial here is that this phase thrives upon the absorption of light.

The study proposes that water can transduce sunlight, converting ordinary water into the more ordered, liquid-crystalline EZ, the fourth phase. This is a continuous process; additional absorption of radiant energy converts more ordinary water into EZ.

The presumed process of EZ generation resembles photosynthesis, described earlier, to a certain degree. Just like the photosynthetic light reactions, this sun-powered process involves splitting water molecules into positive and negative halves. The positive half combines with water to form hydronium ions, while the negative half constitutes the building blocks of EZ. Adding more light creates more charge separation, and the separated charges form a battery, which is, in essence, an energy repository.

The components of this battery are water, sunlight, and a hydrophilic surface. Water and sunlight play almost the same roles here as in photosynthesis, and both are available in ample amounts: nearly two-thirds of you and me is water, and who can deny the exuberance of sunlight that we enjoy?

However, the existence of a suitable hydrophilic surface in our body, an equivalent of chlorophyll in this case, is still ambiguous. Who knows, melanin, our skin pigment, might emulate plant chlorophyll and enable us to harvest energy from the sun. But as more elaborate research is under way, drawing any conclusion at this stage would be a hasty generalization.

Trees are Trees, Humans are Humans

Some modern inventions are attributed to biomimetics, and examples abound: the Wright brothers and their predecessors designed airplanes taking inspiration from birds; George de Mestral, a Swiss engineer, conceived the idea of Velcro while removing burrs from his dog; and Japanese engineers refined their bullet trains by observing the knife-shaped bill of the kingfisher, a short-tailed, fish-eating bird.

Contrary to the above examples, humans have seldom tried to mimic the charming features of animals and plants in their own bodies. The reason is obvious: biomimicry involving human bodies poses existential risks. Human photosynthesizers are one such risk, for humans would lose their humanness while imitating plants.

While some researchers are determined to discover magical substances that could serve as human chlorophyll, present technology is simply incapable of enabling human photosynthesis. Moreover, even if we could enable it, some mind-boggling sacrifices would be involved.

Bottom line: trees are trees; humans are humans. Period.