
The Xenobots’ next generation has arrived.

In 2020, scientists made headlines worldwide when they developed “xenobots” – small “programmable” living beings composed of thousands of frog stem cells.

These pioneering xenobots could move through fluids and were touted as potentially useful for detecting radioactivity, contaminants, medications, and infections. Early xenobots survived for up to ten days.

In early 2021, a second batch of xenobots demonstrated unexpected new properties. These included self-healing and increased life expectancy. Additionally, they demonstrated a capability for cooperative behavior in swarms, for example, by massing into groups.

The same team of biologists, roboticists, and computer scientists presented a new type of xenobot last week. As with prior xenobots, they were built using artificial intelligence to simulate billions of prototypes, avoiding the time-consuming trial-and-error process in the laboratory. However, the latest xenobots have a critical difference: they are capable of self-replication.

Wait. They are capable of self-replication?!

The new xenobots resemble Pac-Man: as they swim around, they can sweep up loose frog stem cells and assemble them into new xenobots like themselves. They can sustain this process for multiple generations.

However, they do not reproduce biologically in the conventional sense. Instead, they form groups of frog cells with their “mouths.” Fittingly, the recently extinct Australian gastric-brooding frog was the only species known to give birth through its mouth.

The latest development gets scientists one step closer to generating self-replicating organisms. Is this truly a Pandora’s Box?

Human-designed self-replication is not a novel concept. The prominent mathematician John von Neumann described “self-reproducing automata” in work published posthumously in 1966.

In his 1986 book Engines of Creation, Eric Drexler, the US engineer credited with founding the field of “nanotechnology,” famously alluded to the possibility of “grey goo.” He envisioned self-replicating nanobots that consumed their surroundings, converting everything into a sludge composed entirely of themselves.

Although Drexler later regretted coining the term, his thought experiment has frequently been used to warn against the dangers associated with the development of new biological matter.

In 2002, an artificial polio virus produced from custom DNA sequences proved capable of self-replication, no artificial intelligence required. Although the laboratory-created virus was contained, it was able to infect and kill mice.

Prospects and advantages

According to the researchers that developed the new xenobots, their primary importance is in showcasing developments in biology, artificial intelligence, and robotics.

Future robots constructed entirely of organic materials could be more environmentally benign, since they could be engineered to biodegrade rather than persist. They may help address health problems affecting humans, animals, and the environment, and could aid the development of regenerative medicine or cancer therapy.

Xenobots may also serve as an inspiration for art and fresh perspectives on life. Strangely, xenobot “offspring” are created in the image of their parents, but not from them. As a result, they reproduce without genuinely reproducing biologically.

Perhaps alien life forms construct their “children” entirely from items in their environment, rather than from their own bodies?

What dangers exist?

It’s natural to have concerns about xenobot research. While one xenobot researcher stated that there is a “moral duty” to investigate these self-replicating systems, the research team acknowledges that their work raises legal and ethical questions.

Hundreds of years ago, English philosopher Francis Bacon proposed that certain types of inquiry are too risky to conduct. While we believe that is not the case with existing xenobots, it may be in the future.

Any hostile use of xenobots, or the use of artificial intelligence to build DNA sequences that result in intentionally hazardous synthetic creatures, is prohibited under the United Nations’ Biological Weapons Convention, the 1925 Geneva Protocol, and the Chemical Weapons Convention.

However, their use outside of combat is less precisely regulated.

Because these advances span artificial intelligence, robotics, and biology, they are difficult to govern. Even so, it is critical to examine potentially harmful uses.

There is an instructive precedent here. In 2017, the United States’ national academies of science and medicine released a joint study on the emerging field of human genome editing.

It defined the criteria under which scientists should be permitted to modify human genes in ways that would allow the changes to be passed on to future generations. It recommended that such work be limited to compelling purposes of treating or preventing serious disease or disability, and even then only under strict supervision.

Both the United States and the United Kingdom currently permit select instances of human gene editing. However, creating new species capable of self-perpetuation was well beyond the scope of those deliberations.

Observing the future

While xenobots are not yet made from human embryos or stem cells, they may be in the future. Their creation raises similar questions about the making and modification of self-perpetuating biological forms.

At the moment, xenobots are short-lived and self-replicate for only a few generations. Nonetheless, as the researchers point out, biological matter can behave in unexpected ways, and those ways are not always benign.

We should also consider the potential consequences for the non-human world. Human, animal, and environmental health are inextricably linked, and introduced organisms can inadvertently wreak havoc on ecosystems.

What constraints should we impose on science in order to avert a real-world “grey goo” scenario? It is premature to be entirely prescriptive. However, authorities, scientists, and society must carefully balance the risks and benefits.


Stephen Hawking Predicted The World Would Be Taken Over By A Race Of ‘Superhumans’

Stephen Hawking’s final writings predict that a race of superhumans would take control, having surpassed their fellow creatures through genetic engineering.

Hawking makes no apologies in Brief Answers to the Big Questions, which will be published on Oct. 16 and excerpted in the UK’s Sunday Times (paywall).

Artificial Intelligence

Hawking issues a dire warning about the importance of regulating AI, noting that “in the future, AI may develop its own will, one that is at odds with ours.” A possible arms race over autonomous weapons should be halted before it begins, he writes, posing the question of what would happen if a crash involving such weapons occurred, on a par with the 2010 stock market Flash Crash. He goes on:

In summary, the development of superintelligent artificial intelligence would either be the finest or worst thing that has ever happened to humanity. The true danger posed by AI is not malice, but competence. A superintelligent AI will excel at achieving its objectives, and if those objectives conflict with ours, we’re in big trouble.

You’re probably not a nasty ant-hater who deliberately walks on ants, but if you’re in charge of a hydroelectric green-energy project and an anthill is in the area to be flooded, the ants are out of luck. Let us not put humans in the ant’s shoes.


The grim future of the planet, gene editing, and superhumans
The bad news is that nuclear war or environmental catastrophe will “cripple Earth” at some point in the next 1,000 years. By then, though, our ingenious species will have found a way to slip Earth’s surly bonds and so survive the disaster. The Earth’s other species, however, are unlikely to make it.

The individuals who do manage to flee Earth will almost certainly be new “superhumans” who have mastered gene-editing technologies such as CRISPR. They will get there, he claims, by defying laws against genetic engineering to enhance their memory, disease resistance, and life expectancy.

Hawking expresses an unusual amount of enthusiasm for this final argument, stating, “There is no time to wait for Darwinian evolution to improve our intelligence and character.”


Once such superhumans exist, significant political problems will arise with unimproved humans, who will be unable to compete. They will most likely die out or become irrelevant. In their place will be a race of self-designing beings that improve themselves at an ever-increasing rate. If the human species manages to redesign itself in this way, it will likely spread out and colonize other planets and stars.

Space-based intelligent life
Hawking acknowledges that there are various possible explanations for why intelligent life has not been discovered, or has not visited Earth. His predictions here are less audacious, but his preferred explanation is that humans have “missed” other forms of sentient life.


Is there a God? Hawking asserts that this is not the case.

The question is whether God chose the way the universe began for reasons we cannot comprehend, or whether it was determined by a law of science. I believe the second. You can call the laws of science “God” if you like, but it would not be a personal God that you could meet and put questions to.

The Earth’s greatest dangers
The first threat is an asteroid impact, similar to the one that wiped out the dinosaurs; against this, Hawking says, “we have no protection.” Climate change is a more imminent concern. “An increase in ocean temperature would result in the melting of the ice caps and the release of significant amounts of carbon dioxide,” Hawking says. “Both of these impacts have the potential to transform our climate into that of Venus, which has a surface temperature of 250 degrees Celsius.”