May 31, 2022
In Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, a supercomputer known as Deep Thought calculates the answer to the “Ultimate Question of Life, the Universe, and Everything.” Unfortunately, no one knows what the Ultimate Question actually is, and so Deep Thought designs Earth, an even greater supercomputer, to answer the question: what is the question?
As creation myths go, this is perhaps no more unlikely than those propounded by many societies. And, in fact, the idea of Earth, or at least life on Earth, as a form of computation has attracted a number of adherents:
Once we regard living things as agents performing a computation — collecting and storing information about an unpredictable environment — capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention — thought to be the defining characteristics of living systems — may then emerge naturally through the laws of thermodynamics and statistical mechanics. (Quanta Magazine)
Konrad Zuse is credited with building the first working programmable computer in 1941. In 1967, he also appears to have been the first to propose that physics (and thus biology) is just computation. This idea was extended by, among others, Edward Fredkin in the 1980s, Juergen Schmidhuber in the 1990s, and Stephen Wolfram in the 2000s.
Another thread of this type, focusing more specifically on life on Earth, stretches back to the mid-1800s and the invention of statistical mechanics by scientists such as Ludwig Boltzmann and James Clerk Maxwell. Maxwell in particular struggled with the second law of thermodynamics, which states that as energy is transferred or transformed, more and more of it is wasted; the ability to extract useful work from energy resources is always diminishing. Taken to its limit, this law implies that the universe will ultimately settle into a state of equilibrium in which entropy has been maximized and nothing of meaning will occur again.
In order to avoid this dreary fate, living organisms use energy from their surroundings to sustain a non-equilibrium state. Physicist Erwin Schrödinger wrote in his 1944 book What is Life? The Physical Aspect of the Living Cell that living organisms survive on “negative entropy” by capturing and storing information. This information allows living things to stay out of equilibrium by behaving in ways that let them extract energy from changes in their surroundings. Failure to act on this information would lead to a gradual reversion to equilibrium, and thus death.
In this conception of life, biology is thus a computation that seeks to optimize the storage and exploitation of information. And this gives us a toe-hold for thinking about health as a process of computation as well. The thermodynamics of copying information dictates a trade-off between the accuracy of the copying and the energy it requires. Because any organism has only a finite supply of energy, errors build up over time. An increasing share of the energy captured by the organism must go to error correction, until there are too many flaws to overcome and death occurs. This view is captured in ideas such as the Hayflick limit, the observation that cultured human cells can replicate and divide only a finite number of times before becoming senescent. It may also help explain why the natural lifetime of humans seems to cap out at around a century.
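The accuracy-versus-energy trade-off in copying can be made concrete with a toy simulation. The sketch below is purely illustrative, not a biological model: every parameter (genome size, error rates, error threshold) is an assumption chosen for demonstration. It shows only the qualitative point that lower per-copy error rates, which in Landauer-style arguments cost more energy, sustain more replication cycles before an error threshold is crossed.

```python
import random

# Toy sketch (illustrative assumptions throughout, not real biology):
# each replication miscopies every site independently with some error
# rate; errors persist and accumulate across generations. A lineage
# becomes "senescent" once accumulated errors cross a threshold,
# loosely analogous to the Hayflick limit.

GENOME_SIZE = 10_000      # abstract information units copied each cycle
ERROR_THRESHOLD = 200     # errors tolerated before senescence

def replications_until_senescence(error_rate: float, seed: int = 0) -> int:
    """Count copy cycles before accumulated errors exceed the threshold."""
    rng = random.Random(seed)
    errors = 0
    cycles = 0
    while errors <= ERROR_THRESHOLD:
        # Each site is miscopied with probability error_rate this cycle.
        errors += sum(rng.random() < error_rate for _ in range(GENOME_SIZE))
        cycles += 1
    return cycles

# Higher-fidelity copying (i.e. more energy spent on proofreading and
# error correction) sustains more replication cycles before senescence.
sloppy = replications_until_senescence(error_rate=1e-3)
careful = replications_until_senescence(error_rate=1e-4)
print(sloppy, careful)
```

Under these made-up parameters the low-fidelity copier hits the threshold after roughly twenty cycles, while the high-fidelity copier lasts an order of magnitude longer, echoing the finite replicative budget the Hayflick limit describes.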
Over the past few years, machine learning and other information technologies have been transforming our ability to explore the mysteries of biology. But the ideas summarized above point to an even more fundamental intersection of life and information. One researcher has gone so far as to argue that all the data humans produce during the regular course of their activities has created a new life form of its own, “the dataome”:
The dataome is the sum total of our nongenetic information, whether in books, electronic bits, paintings, temporary neural impulses, or even structures like libraries and machines, which support and encode information in themselves. It's the informational “ome” that coexists with us here on earth. It's a new lens to examine our world through because it seems to have its own evolutionary imperatives.
In fact, I propose that the dataome is an alternate living system, in a deeply symbiotic relationship with us. That may sound outrageous, but it seems to fit with many ideas about the nature of information as a thing—akin to energy or entropy—and what we think life is. In a sense, life is what happens to matter when information takes control. (Caleb Scharf)
These ideas of the symbiosis between biological life and information provide a provocative new way to explore more deeply not just the intersection of life sciences and data sciences, but also our understanding of social, economic, and cultural determinants of health. These are themes we expect to continue to explore in our work for years to come.
– Jonathan Friedlander, PhD & Geoffrey W. Smith
First Five is our curated list of essential media for the month which spans a range of content including scientific papers, books, podcasts, and videos. For our full list of interesting media in health, science, and technology, updated regularly, follow us on Twitter or Instagram.
1/ Improved processing capacity
Mutations in genes are often linked to diseases such as cancers, or to increased risk of developing disease. A group of researchers used Drosophila melanogaster, the fruit fly, to identify a gene mutation with a positive effect: one associated with higher IQ in humans.
2/ Data integration in the brain
The brain is often thought of as a data-processing machine that integrates signals through siloed sets of neurons specialized for orientation, direction of movement, spatiotemporal frequencies, and so on. While this model remains broadly accurate, a new study adds complexity to it by showing that how the brain treats information also depends on the wider context.
3/ Coordinating data streams
Transitioning from focusing on an object to grasping it is another example of the brain integrating information from different types of neurons. It requires a surprisingly complex neurological process involving intricate timing and coordination. Researchers have now shed additional light on the mechanisms that ensure we don't look away from where we are reaching.
4/ Tricking the brain
While the main focus of genetic engineering has been increasing the production capacity of fruits and vegetables, it is about time that tastiness took center stage! Researchers combined fruit chemistry data with consumer sensory-panel information to train machine learning models that can predict how flavorful a fruit will be from its chemistry.
5/ Running away from the brain
Researchers showed that attempting to run faster may require defying our natural biology. By combining data from runners monitored in a lab with 37,000 runs recorded on wearable fitness trackers, scientists found that humans naturally tend to run at the speed that minimizes caloric cost. This comes as bad news for runners aiming to maximize their caloric expenditure.
Public-Interest Technologies for Better Health
Digitalis Commons is a non-profit that partners with groups and individuals striving to address complex health problems by building public-interest technology solutions that are frontier-advancing, open-access, and scalable.
AI & Society
The spring edition of Daedalus, the open access journal of the American Academy of Arts and Sciences, is focused on AI & Society. Topics in the edition include On language & reasoning, On inequality, justice & ethics, On the economy & future of work, and On the law & public trust.