IBM on quantum computing and artificial intelligence amid the ChatGPT craze, and the future of computing
Recently, artificial intelligence has dominated public conversation.
OpenAI's ChatGPT and image-generating AI systems like MidJourney and Stable Diffusion have drawn many more people into trying advanced AI and talking about it. That's a good thing; what would not be good is if the transformational changes of the next 20 or 30 years caught most of us off guard.
One company has been pioneering advanced AI longer than most - IBM. Recently, Alessandro Curioni, one of IBM's most senior executives, discussed the company's current work in artificial intelligence, quantum computing and related areas.
Alessandro Curioni has been with IBM for 25 years. He is an IBM Fellow, Director of IBM Research, and Vice President for Europe and Africa.
IBM's Big Challenge
Since the 1970s, IBM has been working on the future of computing.
It has produced an impressive series of milestones: Deep Blue, which defeated Garry Kasparov in a chess match in 1997; Watson, which beat Ken Jennings on the TV quiz show Jeopardy! in 2011; and, in 2018, Project Debater, a machine that can hold its own in debate against the world's champion debaters.
The first of these machines was rule-based; later machines used deep learning to build models trained on large amounts of data. Another paradigm shift is now under way: the arrival of large language models (LLMs), or foundation models, which are trained with a self-supervised technique. The system takes a very large number of sentences from the web - hundreds of billions - masks a random word in each sentence, and tries to guess what that word is. Over time, the system builds a model of which words appear in which sentences. This automation of the training process is a major advance, and it is possible thanks to the large amounts of data and computing power available today.
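The masking idea described above can be sketched with a toy model. This is not IBM's training code: the tiny corpus, the context pairs, and the `guess_masked` helper are all invented for illustration, and a real LLM learns from context with a neural network rather than simple counts.

```python
from collections import Counter, defaultdict

# Tiny stand-in for "hundreds of billions of sentences from the web".
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat sat on the sofa",
]

# Count which word appears between each (left, right) context pair --
# a crude stand-in for learning to fill in a masked word.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context_counts[(words[i - 1], words[i + 1])][words[i]] += 1

def guess_masked(left, right):
    """Predict the most likely word between `left` and `right`."""
    counts = context_counts.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

print(guess_masked("the", "sat"))  # -> cat ("cat" seen twice, "dog" once)
```

The point is only that no human labeling is needed: the training signal (the masked word) comes from the data itself, which is what makes the process automatable at web scale.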
As it turns out, this approach is not limited to text. It can be used for any type of structured data: images, video, computer code, data streams generated by industrial processes, or the language of science, with molecules translated into symbols.
Building specialized, large-scale language models
IBM is building large language models, but for specific applications, not for general-purpose use like ChatGPT.
For example, it is building systems specifically for organic chemistry and for business. The weakness of general-purpose systems is that they are superficial: they can answer most high-level questions, but if you probe deeper they get lost. A more specialized machine can go deeper and is less brittle. Specialization also often means you can obtain better-quality data and eliminate bias more easily.
One of the reasons ChatGPT performs better than GPT-3 is reinforcement learning from human feedback (RLHF). OpenAI, which created these systems, employs a large number of people to review the system's output and flag biased or offensive passages. This has prompted the quip that AI stands not for artificial intelligence but for affordable Indians; in fact, the humans are used in the training process, not in operation.
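The core of RLHF is learning a preference signal from human comparisons. The sketch below is a deliberately simplified stand-in (an Elo-style tally, not OpenAI's actual reward-model training); the example strings and the `reward` dictionary are invented for illustration.

```python
# Humans compare pairs of model outputs: (preferred, rejected).
human_preferences = [
    ("a helpful answer", "an offensive answer"),
    ("a helpful answer", "a vague answer"),
]

# Each comparison nudges the preferred output's score up and the
# rejected output's score down; the resulting scores play the role
# of the learned reward used to steer the model.
reward = {}
for preferred, rejected in human_preferences:
    reward[preferred] = reward.get(preferred, 0) + 1
    reward[rejected] = reward.get(rejected, 0) - 1

best = max(reward, key=reward.get)
print(best)  # -> a helpful answer
```

Note the division of labor the article describes: the human feedback shapes the reward during training, but no human is in the loop when the deployed model answers a query.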
IBM wanted to demonstrate that it could develop a large model in a particular domain and then train it on proprietary data from client organizations within that domain. This would be a significant improvement in cost and sustainability over the old approach (developing a new model for each application).
Improving AI efficiency: more efficient chip design
Another area where IBM is looking to improve the efficiency and sustainability of AI and computing is chip design.
Large language models are approaching the scale of computation inside the human brain, but they consume as much energy as a small town, while the brain uses about as much as a light bulb. Curioni said IBM is taking three steps to reduce the power requirements of advanced AI systems.
The first step is neuromorphic chips. Chips like IBM's TrueNorth (and Intel's Loihi) are closer to models of human neurons than traditional chip designs; their calculations are less precise and more analog.
The second step is in-memory computing with memristors, which process and store data on the same chip, reducing the energy spent moving data back and forth between memory and processor between computations.
The third step is spiking neural networks (SNNs), which transmit information only when it is needed, whereas in a conventional design each neuron transmits information all the time.
Together, these three steps can lead to an improvement in energy efficiency of two to four orders of magnitude.
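The spiking idea can be illustrated with a minimal leaky integrate-and-fire neuron, a common mathematical model behind SNNs. This is a conceptual sketch, not IBM's hardware design; the `threshold` and `leak` parameters are invented for the example.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes.

    The neuron accumulates input, leaks a fraction of its potential
    each step, and emits a spike (then resets) only when the threshold
    is crossed -- information is transmitted only when needed.
    """
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(t)
            potential = 0.0  # reset after spiking
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # -> [3, 6]
```

Seven input steps produce only two output events; the rest of the time the neuron is silent, which is where the energy savings over an always-transmitting design come from.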
Breakthrough in quantum computing
IBM may not currently be seen as a global leader in artificial intelligence, but one area where it is recognized as being at the forefront, alongside Google and Microsoft, is quantum computing. It has just announced a breakthrough in quantum-safe cryptography that will allow data transmitted today to remain secure even once quantum computers capable of breaking today's encryption are built. Quantum computers running Shor's algorithm can factor numbers efficiently, and as they scale up they will be able to factor very large numbers that classical machines cannot handle in a reasonable amount of time.
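To see why factoring is the crux, here is a classical trial-division factorizer. Its cost grows roughly with the square root of n, which is why the ~2048-bit moduli used in today's public-key encryption are far out of classical reach, while Shor's algorithm on a large enough quantum computer would factor them in polynomial time. (Illustrative sketch only.)

```python
def factor(n):
    """Return the prime factors of n by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(factor(3 * 5 * 17))  # -> [3, 5, 17]
```

For a toy product of small primes this is instant; for a 617-digit RSA modulus the same loop would run for longer than the age of the universe, which is the asymmetry Shor's algorithm destroys.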
What IBM and some academic partners have done is develop a new type of encryption, known as quantum-safe (or quantum-secure) encryption. It is based on high-dimensional lattice cryptography and is believed to be unbreakable by quantum computers. Over the past decade, a large research program evaluated many candidate types of quantum-secure encryption, and last July four algorithms emerged as the strongest; three of the four were developed at Curioni's lab in Zurich, and the winning algorithms have now been chosen.
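The flavor of lattice-based cryptography can be conveyed with a toy Learning-With-Errors (LWE) scheme, the kind of hard lattice problem underlying the selected post-quantum algorithms. This is emphatically not one of those standardized schemes: the parameters below are tiny and completely insecure, and every name in it is invented for illustration.

```python
import random

q, n, m = 97, 4, 20          # modulus, secret length, number of samples
secret = [random.randrange(q) for _ in range(n)]

# Public key: random vectors a_i with *noisy* inner products
# b_i = <a_i, secret> + e_i (mod q). The small noise e_i is what makes
# recovering the secret a hard lattice problem.
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [(sum(ai * si for ai, si in zip(a, secret))
      + random.choice([-1, 0, 1])) % q for a in A]

def encrypt(bit):
    """Sum a random subset of samples; hide the bit in the high half of q."""
    subset = [i for i in range(m) if random.random() < 0.5]
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """The accumulated noise is small, so v - <u, secret> lands near 0 or q/2."""
    d = (v - sum(ui * si for ui, si in zip(u, secret))) % q
    return 0 if d < q // 4 or d > q - q // 4 else 1

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
print("toy LWE round-trip works")
```

The security intuition: without the secret, the public samples look like random noise, and no quantum algorithm is known that strips the noise away the way Shor's algorithm strips away the difficulty of factoring.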
The next step is to migrate data from the old form of encryption to this new form.
This task is becoming urgent. The U.S. government has directed that all its agencies must achieve quantum security by 2025, and other governments and companies are doing the same; IBM's breakthrough may come at the right time.
Reference link:
https://www.forbes.com/sites/calumchace/2023/01/19/ibm-and-the-grand-challenges-of-ai-and-quantum-computing/?sh=701f0e1d75e6
