neural multiplexer

platform shifts

technology has always been driven by platform shifts. every decade or so, there is a fundamental breakthrough in science and engineering that leads to exponential change over the following decade. in this post, i will summarize the two platform shifts that are most exciting and interesting to me, and lay out two additional platform shifts that i expect to follow on a ten to twenty year time frame. maybe in fifteen years, i will reread this and write a follow-up post on my predictions!

current platform shifts

there are two platform shifts that are already occurring that will drive societal change over the next decade. the first is foundation models, including but not limited to large language models. the second is decarbonization.

foundation models

it is hard to miss this platform shift. open ai, meta, nvidia, anthropic, and more are all developing large foundation models that enable computers to handle increasingly complex tasks. without wading into the debate around agi and superintelligence, foundation models are already changing the way people work and interact with computers.

i consider foundation models a platform shift (or really, a platform evolution) at the application layer of computing. they have emerged from the lineage of computing history: from the transistor to the microchip to the personal computer to the internet to the smartphone to the cloud and now to foundation models. each of these has moved the application layer for technology towards deeper integration with human thought and capability.

however, foundation models, and the intelligence they provide, are a unique platform shift because most other platform shifts have resulted in a more visible proliferation of computing devices. by this, i mean that other platform shifts have introduced computers ubiquitously throughout society in ways that are impossible to ignore; you now have to think about computing devices everywhere and anywhere you go. foundation models are interesting because they abstract away computation behind interfaces that are natively human, like language or images. this is exciting! we can leverage foundation models to create more subtle experiences that feel authentic and textured in the same way that human interactions are. there are some concerns with this (deepfakes, misinformation, etc.), but overall i think there is endless opportunity in enhancing the human experience, removing discomfort, and creating moments of delight and joy using foundation models.

decarbonization

the second big platform shift occurring is decarbonization: the shift away from a fossil fuel based economy to one built on clean energy. this platform shift is also an ecological necessity, given how climate change is impacting both human communities and biological diversity.

it feels like after decades of "will renewable technology advance quickly enough to be economically viable," we have finally reached a point where the answer is "yes". electric vehicles are taking off. solar energy is becoming cheaper and cheaper year over year. the economic opportunity of artificial intelligence is shifting the cost curve of nuclear energy firmly to the left. this platform shift, like foundation models, is decades in the making.

what is the promise of decarbonization? energy abundance, cleaner and healthier cities and communities, and ecological recovery from climate change. that is a worthwhile goal for humanity, and one we need to continue pursuing and investing in. the consequences of failing to decarbonize feel even more significant and fraught, so it will be interesting to see how communities, businesses, and more coordinate to ensure a collective transition away from fossil fuels.

upcoming platform shifts

beyond what i observe happening now, there are two massive platform shifts that will follow the above in the next ten to twenty years. the first is automated robotics and the second is computational intelligence. i'm not alone in identifying these upcoming shifts, but there is a lot of path dependence in how we achieve them.

automated robotics

a lot has been written about robots. tesla is trying to build humanoid robots, as is a company called figure; i'm sure there are more. robotics is what comes after foundation models. we will marry the intelligence of foundation models with mechanical systems to create devices that can automate difficult, expensive, dangerous, and even mundane tasks.

in a way, robotics feels like the white whale of computing. it is what fascinated us as kids, has been written about ad nauseam in science fiction, and is the basis for a lot of fears we have about technology. it will be interesting to see how it all plays out from a technological, societal, and geopolitical perspective. what do robots enable us to do at scale that we couldn't do before? how will nations respond to automated robotics across the world? how will we align robots with the broader goals of our communities and societies?

these are difficult questions that i don't think anyone has clear answers to; however, the prospects for how robotics can improve our society, from manufacturing to logistics to agriculture, seem endless. notably, there are still a lot of engineering breakthroughs that need to happen for robotics to take off, starting with hardware. right now, foundation models have to run inference on compute- and energy-intensive servers. for us to bring this intelligence to robots, i think we will need energy-efficient client-side hardware that can run large scale inference on the robot itself, as server-side latency will become a bottleneck for real-time action. there is a lot of work we need to do to enable that!
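the latency argument can be made concrete with some rough arithmetic. the numbers below are illustrative assumptions i'm using to sketch the point, not measurements of any real robot or model:

```python
# rough sketch: why remote inference bottlenecks real-time robot control.
# all numbers are illustrative assumptions, not measurements.

CONTROL_HZ = 50                  # assumed control loop rate for smooth motion
BUDGET_MS = 1000 / CONTROL_HZ    # time budget per control step: 20 ms

NETWORK_RTT_MS = 60              # assumed round trip to a remote inference server
SERVER_INFER_MS = 30             # assumed model inference time on the server
ONDEVICE_INFER_MS = 15           # assumed inference time on efficient local hardware

# a server-based step pays the network round trip plus inference time;
# an on-device step pays only local inference time.
server_step_ms = NETWORK_RTT_MS + SERVER_INFER_MS
ondevice_step_ms = ONDEVICE_INFER_MS

for name, ms in [("server", server_step_ms), ("on-device", ondevice_step_ms)]:
    verdict = "ok" if ms <= BUDGET_MS else "too slow"
    print(f"{name}: {ms} ms per step vs {BUDGET_MS:.0f} ms budget -> {verdict}")
```

under these assumptions the server path blows the per-step budget before the model even runs, while the on-device path fits, which is why i think local inference hardware is the unlock.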

computational intelligence

the more you play and work with foundation models, the more you begin to understand their current constraints. i think one of the biggest constraints is reasoning around complex and abstract computation. right now, foundation models are really good at domains where humans have well-defined ways of understanding information. primarily, this is text and images. we are naturally good at understanding text and visuals because of our evolutionary adaptations; the breakthrough of current foundation models has been allowing computers to understand this kind of information.

however, foundation models struggle with domains that have abstract ways of understanding information. if we want them to operate in these domains, we need more data that describes each domain and the tasks that exist within it. this is what i mean by computational intelligence: helping computers understand domains that humans reason about through consciously defined abstractions, where the understanding isn't biologically instinctive. some examples of these domains include chip manufacturing, drug design, and materials discovery.

there is a feedback loop that will drive the emergence of computational intelligence. the cost of computing continues to decrease, and so does the cost of hardware. over the next decade, we will see a proliferation of scientific data across materials, engineering, and biotechnology because the cost of gathering and producing this data will continue to collapse. as we gather this data, we will figure out how to transform it into foundation models that can be used to scale computational intelligence. this will be a new platform that unlocks a whole new set of applications we will build to solve societal problems across medicine, energy, and more.