In technology, the singularity can be defined as the point at which an artificially intelligent computer, network, or robot is capable of recursively improving itself, leading to a runaway effect of rapid advancement. Another take on it (and I'm not precisely sure how I came to associate it with the term singularity) is the point at which a computer or network exceeds the computational capacity of the human brain.
There are two ways of looking at this milestone, which we may or may not be able to call a singularity: hardware and software.
Hardware
At what point will we be able to construct a single system that matches the processing potential of all the neurons in the brain? Given some derivative of Moore's Law, or even a conservative estimate of how many transistors we'll be able to cram into a single system, with or without advances in materials or architecture, we can come up with a practical estimate of when it will be possible to build a computer with the physical computing power of the human brain.
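As a rough illustration of that kind of estimate, here is a back-of-the-envelope sketch. Every figure in it is an assumption chosen for illustration, not a measured value: pick a number for today's single-system performance, pick one of the (contested) brain-scale estimates, and count doubling periods between them.

```python
# Back-of-the-envelope sketch (not a real forecast): how many doubling
# periods separate an assumed single-system performance today from an
# assumed brain-scale target. All three figures below are illustrative.
import math

current_flops = 1e15          # assumed: a high-end single system, ~1 petaFLOPS
brain_estimate_flops = 1e18   # assumed: one popular (and contested) brain estimate
doubling_period_years = 2.0   # assumed: a Moore's-Law-style doubling cadence

doublings = math.log2(brain_estimate_flops / current_flops)
years = doublings * doubling_period_years
print(f"~{doublings:.1f} doublings, roughly {years:.0f} years")
# With these assumptions: ~10 doublings, roughly 20 years.
```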
Because digital transistors and analog neurons are not directly comparable, I don't think we can set a specific number of operations per second as a target. The fastest networked clusters of computers we have today, collectively referred to as supercomputers, may already exceed the raw processing power of a single human mind. It may be 5–20 years before the same capacity is available in an individual system, and then only if major breakthroughs are made in construction, design, and energy consumption. We are already seeing that future gains will likely come from concurrency rather than faster serial operations, which means programs that can be broken down into simpler tasks and distributed simultaneously across many cores will benefit most.
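To make that concurrency point concrete, here is a minimal sketch of the kind of program that benefits: a workload that splits into independent chunks and fans out across cores. The prime-counting task and chunk sizes are arbitrary examples, not from any real benchmark.

```python
# Sketch: a task that splits into independent chunks scales with core count,
# while a purely serial version is capped by single-core speed.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in the half-open range [lo, hi)."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split one big range into contiguous chunks and fan them out across processes.
    chunks = [(i, i + 50_000) for i in range(2, 400_000, 50_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)
```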
Ultimately, with hardware, progress will continue to march forward.
Software
With software, things are not so straightforward. While improvements and optimizations are made, progress is not so steady. Even if we have the physical infrastructure to mimic a human brain, what software would we run on it? Managing complexity, detecting patterns, efficiently storing and recalling memories, and making decisions are very difficult challenges.
State-of-the-art software can consume substantial computing power and still struggle with a single basic challenge, despite being designed specifically for that one task.
Some software projects improve steadily and incrementally. Far too many others don't. For every long-running project that steadily improves against benchmarks (operating systems, video game engines, JavaScript engines in web browsers, and so on), many more are simply shuttered, rewritten in new languages or for new frameworks with only marginal improvements, or never ported to new architectures or platforms.
There is a thriving open source community contributing to software at every level of the stack, but more development time is spent on closed source, proprietary software that will no longer benefit society once the organization decides to shut it down or replace it. And, by the nature of being proprietary, if another organization, competitor or not, wants to provide similar functionality, it must spend its own resources recreating it.
Programming-language-specific package managers are perhaps the best tool we have for isolating functionality and making it available for reuse in other projects, or even for composing multiple packages into higher-level functionality. Another great option is containerizing standalone services, which can transcend programming languages. A Ruby gem is nearly useless to a Node.js project, but a containerized service can be used by a Python project just as easily as by a Go project. That is, until you upgrade your kernel or LXC container software…
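As a sketch of what "transcending the programming language" looks like in practice, here is a tiny standalone HTTP service. The endpoint, port, and payload are made up for illustration, but once something like it runs in a container, a Ruby, Node.js, Python, or Go client can all call it the same way.

```python
# Minimal sketch of a standalone, language-agnostic service: any client that
# speaks HTTP can call it, regardless of the language it is written in.
# The /greet endpoint, port, and message are purely illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/greet"):
            body = json.dumps({"message": "hello from a containerizable service"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), GreetHandler).serve_forever()
```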
We are making progress, but far too often, our progress feels like two steps forward and one step back. If anything, what we need most is to make progress on improving how we make progress in software.