Human longevity - deep learning x biotechnology x blockchain x quantum computing
If the immediate purpose of technology is to reduce resource scarcity, then its ultimate purpose is to eliminate mortality.
Algorithm prices. Source: Mother Jones
Whenever a technology company introduces a new product, it describes the product as faster and more efficient. That means more products can reach more people more affordably, which reduces resource scarcity. The scarcest resource we have is time, and our time is governed by our biology. As the species whose technology built civilization, humanity's shared destiny is to redesign that biology and extend our lifespan, making the resource of time less scarce. So how do we do it?
The three fundamental building blocks of all modern technology are bits, atoms, and genes. The basic unit of digital information is the bit, the basic unit of matter is the atom, and the basic unit of biology is the gene. We have made steady, long-term progress in building products on all three of these primitives, from rockets to transistors to DNA sequencers, and their convergence will help us live longer.
Just as cryptocurrency rejects the traditional financial system's basic premise that only government-issued money has value, longevity research rejects the traditional medical system's basic premise that the aging process cannot be reversed, forking the entire biomedical technology stack into a new field. This new stack is built on the convergence of four exponential technologies: deep learning, multi-omics, blockchain, and quantum computing. Let's introduce each of them in turn.
01 Deep Learning

Two Distinct Eras of Compute Usage in Training AI Systems
What is striking about deep learning is its exponential advantage in prediction over other machine learning techniques, and the upper bound of the "more compute" hypothesis is still unknown: the more computation and high-quality data we give language models, image classifiers, and video generators, the more accurate they become. Deep learning is a new paradigm of computing in which we no longer write rules for machines but let them learn the rules themselves. It is behind the exponential progress we have seen in automated systems, from Tesla's self-driving cars, to chatbots that support people with mental illness, to search engines that instantly surface the most relevant results. Biological data is often too complex for any person or team, however deep their domain expertise, to uncover its hidden patterns.
But deep learning can discover patterns in more dimensions than any human can perceive, capturing relationships that exist only in a 1,000+ dimensional space. These models will help us replace the annual doctor's visit, a cadence that has made healthcare a reactive, outdated practice. Instead of being diagnosed manually once symptoms appear, we will have models continuously monitoring streams of biomarkers from smart health sensors, i.e., multi-omics data, predicting risk and recommending lifestyle changes that prevent disease before it occurs and thereby optimally extend our lifespan.
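As a minimal sketch of what such a continuous-monitoring model might look like, the toy network below maps a vector of biomarker readings to a single disease-risk score. The feature count, architecture, and data are hypothetical placeholders for illustration, not a real clinical model.

```python
import torch
import torch.nn as nn

# Hypothetical setup: 64 biomarker readings per person (e.g. methylation
# levels, protein concentrations) feeding a small risk-scoring network.
N_BIOMARKERS = 64

class RiskModel(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),   # squash the output into a 0..1 risk score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = RiskModel(N_BIOMARKERS)
fake_batch = torch.randn(8, N_BIOMARKERS)   # 8 synthetic "patients"
print(model(fake_batch).shape)              # torch.Size([8, 1])
```

In practice such a model would be trained on longitudinal sensor data and validated clinically before any recommendation reaches a patient.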
02 Multi-omics

The Omics Revolution
"Histology" is a molecular term that refers to the study of a pair of molecules. Over the past few decades, we have seen amazing advances in hardware and software that enable medical researchers to study human health at the smallest scales and at the level of tiny molecules. More powerful and affordable microscopes, sequencing technologies and computing power have generated petabytes (about 1.07 billion MB) of data.

Genomics was the first such discipline to emerge, with the study of the entire genome, i.e., our DNA. We are all creatures built from DNA: our roughly 700 MB of source code is a string of 3 billion letters drawn from A, C, T, and G. Each of us is a unique combination of personality traits, physical qualities, mental fortitude, and dietary habits, all originating from a biological programming language whose source code uses four letters instead of 1s and 0s. Together, these primitives produce functions carried out by chemical messengers that deliver instructions to huge, highly complex structures such as proteins and fats.
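The ~700 MB figure follows from simple arithmetic: each base is one of four letters, so it fits in 2 bits. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope: storage needed for one human genome.
bases = 3_000_000_000        # ~3 billion base pairs
bits_per_base = 2            # A, C, G, T -> 4 symbols -> 2 bits each
total_bytes = bases * bits_per_base / 8
print(f"{total_bytes / 1e6:.0f} MB")   # 750 MB, i.e. roughly the 700 MB cited above
```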
As some of our genes are switched on or off by factors such as age and environmental exposure, the way these DNA sequences are expressed changes, triggering an interesting cascade of activity. This behavior is called epigenetic variation, and its study is known as epigenomics.
However, although DNA stores data, it does not act on that data itself. To extract the data and deliver it where it is needed as information, for example to make proteins, DNA is transcribed into messenger molecules called RNA. The study of all of these RNA transcripts and the transcription and translation processes is called transcriptomics.
Tens of thousands of proteins emerge from these processes, each individually responsible for critical tasks in the body. The study of how these proteins are produced, degraded, and expressed is called proteomics.

Proteins are broken down into a group of molecules called metabolites, including carbohydrates and lipids. These are the final downstream result of gene transcription and represent the current state of the biological system; the study of this layer is called metabolomics.
The challenge in modern medicine is to integrate all of this data into a complete picture of health, called a multi-omics analysis. In computer science terms, the genome is the hard disk, responsible for storing the data. The epigenome is the disk reader, the transcriptome is the decoder, the metabolome is the process monitor, and the proteome is the application.
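To make the analogy concrete, here is a minimal sketch of how one person's multi-omics profile might be represented in code. The layer names follow the analogy above; the field types and example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MultiOmicsProfile:
    """One person's health snapshot across the omics layers."""
    genome: str                                         # "hard disk": raw A/C/G/T sequence
    epigenome: dict = field(default_factory=dict)       # "disk reader": methylation marks per site
    transcriptome: dict = field(default_factory=dict)   # "decoder": RNA transcript counts per gene
    metabolome: dict = field(default_factory=dict)      # "process monitor": metabolite levels
    proteome: dict = field(default_factory=dict)        # "application": protein abundances

profile = MultiOmicsProfile(
    genome="ACGTACGT...",                    # truncated for illustration
    epigenome={"chr1:10468": 0.82},          # hypothetical methylation fraction
    transcriptome={"TP53": 1523},            # hypothetical read count
    metabolome={"glucose_mmol_per_L": 5.1},  # hypothetical measurement
    proteome={"p53": 0.34},                  # hypothetical relative abundance
)
print(profile.transcriptome)
```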
Aging is the loss of this information. Over time the genome, our hard disk, begins to lose data. By examining every omics layer, we can understand this flow of information more accurately and eventually learn how to preserve and restore it, effectively reversing the aging process.
We can use different neural network architectures to model all of this underlying data. Molecular biomarkers are discovered by analyzing the cascading information that the different omics layers provide. Biomarkers play an important role in planning preventive measures and making decisions for patients, and they can be classified as diagnostic, prognostic, or predictive: diagnostic biomarkers detect the presence of disease, prognostic biomarkers indicate the likely course of a disease with or without standard treatment, and predictive biomarkers identify which patients are likely to respond to a particular treatment. Together, these biomarkers help determine which treatment is most appropriate for an individual patient.
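As a small illustrative sketch, the snippet below tags biomarkers with these three roles so that downstream logic could route them to diagnosis, prognosis, or treatment selection. The example markers are commonly cited textbook cases, listed for illustration only.

```python
from enum import Enum

class BiomarkerRole(Enum):
    DIAGNOSTIC = "detects the presence of disease"
    PROGNOSTIC = "indicates the likely course of disease"
    PREDICTIVE = "indicates likely response to a specific treatment"

# Commonly cited examples, for illustration only (not clinical guidance).
biomarkers = {
    "troponin": BiomarkerRole.DIAGNOSTIC,     # cardiac injury
    "Ki-67": BiomarkerRole.PROGNOSTIC,        # tumor proliferation rate
    "HER2_status": BiomarkerRole.PREDICTIVE,  # response to targeted therapy
}

for name, role in biomarkers.items():
    print(f"{name}: {role.name.lower()} biomarker ({role.value})")
```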
Unfortunately, most multi-omics data today is held privately by a small number of organizations. However, thanks to blockchain, we are seeing more and more anonymized omics datasets being open-sourced for wider adoption.
03 Blockchain

How does blockchain work?
Blockchain has been in the public spotlight since the world's first cryptocurrency, Bitcoin, launched more than a decade ago. That is because it is a digital organism with enormous computing power behind it: attacking the Bitcoin network would require more computing power than the world's 500 fastest supercomputers combined. It achieves this through a proof-of-work algorithm, in which nodes race to solve randomly generated mathematical puzzles. This consensus layer was the first of its kind, and through blockchain technology we are beginning to create organizations that have no central point of failure and are governed by communities that freely share datasets. The name of this emerging web3-native structure is the DAO (Decentralized Autonomous Organization).
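As a rough illustration of the proof-of-work idea, the sketch below repeatedly hashes a block of data with a changing nonce until the hash falls below a difficulty target. This is a toy version with a deliberately low difficulty, not Bitcoin's actual implementation.

```python
import hashlib

def proof_of_work(block_data: str, difficulty_bits: int = 16) -> tuple[int, str]:
    """Find a nonce so SHA-256(block_data + nonce) starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)   # hashes below this value count as valid proofs
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:        # found a valid proof of work
            return nonce, digest
        nonce += 1                          # otherwise keep guessing

nonce, digest = proof_of_work("toy block: alice pays bob 1 coin")
print(nonce, digest)   # expect roughly 2**16 guesses on average before success
```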


A good example is VitaDAO, a decentralized collective that funds early-stage longevity research. All of VitaDAO's research, transactions, and data are publicly available. By holding its native token, members can vote on proposals and access all of its information in an open-source manner. We will see more and more health-focused DAOs that publish their omics datasets openly and secure ownership through the blockchain. However, if we really want to reverse the aging process, lifestyle advice from deep learning models trained on omics datasets is not enough. We need drugs that target and reverse age-related decline at the molecular level, and to get them we need to speed up the drug discovery process by several orders of magnitude. How? The answer lies in quantum computing.
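To show the token-voting mechanism in the simplest possible terms, here is a toy sketch of token-weighted voting on a funding proposal. It is not VitaDAO's actual on-chain governance contract; the member names and balances are invented.

```python
# Toy token-weighted vote on a research funding proposal.
# Simplification: each member's voting power equals their token balance.
token_balances = {"alice": 1200, "bob": 300, "carol": 800}   # hypothetical holdings
votes = {"alice": "yes", "bob": "no", "carol": "yes"}        # hypothetical votes

def tally(balances: dict[str, int], votes: dict[str, str]) -> dict[str, int]:
    totals = {"yes": 0, "no": 0}
    for member, choice in votes.items():
        totals[choice] += balances.get(member, 0)   # weight each vote by token balance
    return totals

result = tally(token_balances, votes)
print(result)                                   # {'yes': 2000, 'no': 300}
print("approved:", result["yes"] > result["no"])
```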
04 Quantum computing

Recent demonstrations of quantum advantage in China, the United States, and Canada are proof that quantum computers can outperform classical computers on certain specific tasks. The race in the quantum field continues, with hundreds of millions of dollars being invested in these machines, and quantum computers will eventually have the potential to outperform traditional computers by exponential margins on such tasks. By exploiting superposition and entanglement, the quantum-mechanical concepts discovered in the last century, these machines can manipulate matter and information in ways never before possible. Quantum computing could dramatically improve deep learning, and a new field, quantum deep learning, is gradually taking shape around this idea. These quantum models promise not only more accurate preventive advice and diagnoses, but could also help discover entirely new longevity drugs.
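As a tiny, hedged example of superposition and entanglement, the Qiskit snippet below prepares a two-qubit Bell state: a Hadamard gate puts the first qubit into superposition, and a CNOT entangles the second with it, so the two qubits are always measured with matching values. This assumes Qiskit is installed and illustrates the concepts only, not any longevity-specific algorithm.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # superposition: qubit 0 becomes (|0> + |1>) / sqrt(2)
qc.cx(0, 1)   # entanglement: qubit 1 now mirrors qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # {'00': 0.5, '11': 0.5} -> only matching outcomes
```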

The Drug Discovery Process
Drug discovery is a decade-long process that costs more than a billion dollars to bring a single drug to market. If quantum computers become capable enough, we may be able to simulate biochemical reactions on a computer as faithfully as they occur in reality, by simulating the underlying quantum mechanics exactly. Not only could we discover new drugs, we could test candidates in simulation instead of in the lab first. With order-of-magnitude improvements in the design, development, and testing stages, we could create more new drugs faster than ever before.
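One way to see why classical machines struggle to simulate chemistry exactly is that representing the full quantum state of n interacting two-level systems (such as spin orbitals) requires on the order of 2^n complex amplitudes. The back-of-the-envelope sketch below assumes 16 bytes per amplitude; the system sizes are illustrative.

```python
# Memory needed to store the exact quantum state of n two-level systems,
# assuming 16 bytes per complex amplitude (two 64-bit floats).
BYTES_PER_AMPLITUDE = 16

def state_memory_bytes(n: int) -> int:
    return (2 ** n) * BYTES_PER_AMPLITUDE

for n in (20, 50, 100):   # illustrative molecular system sizes
    print(f"n = {n:3d}: {state_memory_bytes(n) / 2**30:.3e} GiB")
# n =  20: ~1.6e-02 GiB -> trivial on a laptop
# n =  50: ~1.7e+07 GiB -> beyond any single classical machine
# n = 100: ~1.9e+22 GiB -> utterly infeasible classically
```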
Reference link:
https://medium.com/@siraj_raval/human-longevity-deep-learning-x-biomedical-datasets-x-blockchain-x-quantum-computing-f8ae54dc92c9
