
The Future of Mobile Computing

Steamclock • Dec 11th, 2025

As we wind down Steamclock’s 15th year, we have a lot to reflect on. Over the years we’ve worked with our clients to ship apps that have mapped the internet and helped save over 35 million pounds of waste from going to landfills. We’ve supported five of our clients’ apps through acquisitions, and we’ve even partnered with Reddit to train an army of clandestine Cold War operatives.

During that time we’ve seen a lot of change, whether it’s new tech stacks, business models, hardware specs, cloud capabilities, user preferences, or UI conventions. Every aspect of the mobile experience has continued to evolve.

But as much as things have changed, the basic mobile experience in 2025 would be pretty familiar to anyone in 2010 — poking at apps on a screen we hold in our hand. Considering how fast tech seems to be moving in 2025, how will mobile computing change over the next 15 years? Will we even have apps in 2040?

Steamclock’s co-founders, Allen and Nigel, and our Managing Director, Nick, looked at this question from three different angles: software, hardware, and social norms. Here’s what we see ahead.

Allen: The More Things Change…

The early years of mobile development were tumultuous. New APIs, frameworks, and approaches changed how teams worked on a yearly cadence. Over time, of course, the ecosystem settled down – we now understand the use cases well, and the development practices and frameworks are well established (with the possible exception of newcomer Kotlin Multiplatform). The majority of mobile dev is now about iterating and refining existing products.

At least, until recently. LLMs are changing development dramatically, as they make certain tools and techniques easier to implement (for example, refactoring codebases, rapid prototyping, and building with the most popular frameworks) and make others more difficult (for example, models are less adept with custom in-house app frameworks than with tried-and-true ones).

Even more profoundly, LLMs have unlocked a whole suite of product experiences that were never before possible. Some of these are experimental – we need to try them to discover the future – but others are just clear wins for UX and functionality. While the early years of LLMs featured a lot of chat interfaces, a raft of non-chat AI experiences is just now emerging.

"While the early years of LLMs featured a lot of chat interfaces, there are a raft of non-chat AI experiences that are just emerging."

These new capabilities and techniques will overturn many mobile apps. Some will be outcompeted, others will be reinvented. While much will change, one thing will stay consistent with the last 15 years: teams with a keen focus on using great UX to build strong businesses will find a lot of opportunity on our most personal devices.

Nigel: Looking Back to Look Ahead

The most recent disruptive change in hardware, the shift to smartphones as most people’s primary computer, started with the release of the iPhone in 2007. However, in the 15 years before that, there were plenty of signs that handheld computing devices were likely to be the next big thing.

  • Newton, Apple’s previous take on a mobile computer, was released in 1993.
  • Palm Pilot, much closer in form factor and behaviour to a modern smartphone, was released in 1996.
  • BlackBerry, the first glimmerings of a mobile device with modern internet connectivity, was released in 1999.

All of them were horribly compromised in various ways. Poor screens, poor connectivity, limited functionality. Only the BlackBerry could reasonably be described as a hit. We can say in hindsight, though, that all of those products had the right idea. And even in 1999, you could do the converse of Steve Jobs’ famous “Are you getting it? These are not three separate devices. This is one device”, and look at a Newton and a Palm Pilot and a BlackBerry and an ordinary cell phone and say “Once these are one device instead of separate ones, a lot of people are gonna want one of them”. Actually making that happen was incredibly hard, but realizing it was probably going to happen was not.

So when we think about any hardware disruption within the next 15 years, there’s a good chance the Newton, Palm Pilot, and BlackBerry of that shift already exist. The strongest candidate is AR glasses.

Google Glass, Meta Ray-Bans, and Apple Vision Pro are all horribly compromised devices, functionally limited in some huge ways, with battery problems, weight problems, insufficient display resolution, and prices that vary from “too expensive” to “eye-wateringly expensive”. I could not, in good conscience, recommend that anyone buy any of those right now or invest too much time in developing software for them.

But as with the early mobile devices, you can look at where those devices are now and say two things:

  • If you can get something that is the size of a normal pair of glasses, doesn’t cost much more than a modern smartphone, and can do what the Vision Pro does, a lot of people are going to want one.
  • All of the technical advancements required to make that happen in the next decade or so seem totally plausible.

On the hardware front, that’s what I’m keeping an eye on. There’s a good chance that at some point, possibly sooner than we think, AR glasses are going to have their iPhone moment, and when that happens, an exciting new world of mobile software could open up very quickly.

Nick: TMI in 2040

The future of mobile computing will depend a lot on people’s attitudes toward privacy. Increasingly, the more information we share with companies about ourselves, the more personalized and “thoughtful” AI-enabled software can become. While providing that information today comes with tradeoffs that often feel worth it, attitudes may shift as more of our data becomes easier to access, centralize, and act on without our direct intervention.

It’s not hard to imagine a future where, at a networking event, your mobile AI assistant quietly lets you know that you’re talking to Edith, who you last saw in person a year ago. It reminds you that her oldest daughter, Elle, is now studying architecture, that you and your partner have been meaning to have dinner with Edith and her husband, Emil, and that you’re all free on the 24th. Fiorino, Emil’s favourite restaurant, has a table at 7 p.m. The deposit has been paid.

Almost all of this is possible today with current technology. But to do this well and consistently would require something like root access to your life — continuous permission to read and write information about who you meet with, where you go and when, your social media footprint, all your messaging platforms, and your payment details. The more information you and everyone you know provide to a centralized system, the better that system can work, and the harder it becomes to step away from.

"The more information you and everyone you know provide to a centralized system, the better that system can work, and the harder it becomes to step away from."

But how comfortable should we feel giving companies (and the organizations they’re beholden to) root access to our lives in order to make remembering and buying stuff easier? In practice, people trade privacy for convenience all the time. When the payoff feels high enough, the risks feel low, and people you know are doing it too, online privacy often doesn’t feel like a big concern. Think location sharing for maps and rides, or fitness trackers. So the question is whether the benefits of personalized mobile technologies will keep pace with the increasing visibility and control companies will need to deliver these new services.

Secure on-device models and processing could address some of these concerns. It’s not hard to imagine technology advancing over the next 15 years to the point where on-the-go agentic performance becomes snappy, consistent, and helpful. Today, though, the reality is far from that: vast data centres are required to power “agentic” personal experiences that still feel more like limited demos than dependable assistants.

Ultimately, mobile computing in 2040 could be determined less by technical constraints than by human ones: the norms we set, the regulations we pass, and the defaults we’re willing to accept. When building that future, we could treat privacy like a dial, where personalization isn’t just about anticipating our needs, but about letting us set clear, reversible limits on how and when our data is used.
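
To make that dial a little more concrete, here’s a minimal sketch, in Swift since mobile is our home turf, of how scoped, reversible data grants might be modelled. Everything in it is hypothetical: the type names (DataScope, PrivacyDial) and the scopes themselves are illustrative assumptions, not a real platform API.

    import Foundation

    // Hypothetical sketch: privacy as a dial. Each category of personal data is a
    // separate, time-limited, revocable grant rather than a one-time, all-or-nothing
    // permission. None of these types correspond to a real platform API.

    enum DataScope {
        case calendar, location, contacts, messages, purchases
    }

    enum GrantLevel {
        case noAccess    // the assistant sees nothing in this scope
        case readOnly    // it can read, but never act
        case readWrite   // it can read and act (e.g. book the table)
    }

    struct ScopedGrant {
        let scope: DataScope
        var level: GrantLevel
        var expiresAt: Date?    // nil means "until I revoke it"
    }

    struct PrivacyDial {
        private(set) var grants: [DataScope: ScopedGrant] = [:]

        // Turning the dial up: grant a scope, optionally for a limited time.
        mutating func allow(_ scope: DataScope, level: GrantLevel, for duration: TimeInterval? = nil) {
            let expiry = duration.map { Date().addingTimeInterval($0) }
            grants[scope] = ScopedGrant(scope: scope, level: level, expiresAt: expiry)
        }

        // Turning the dial down: revocation is always one call away.
        mutating func revoke(_ scope: DataScope) {
            grants[scope] = nil
        }

        // The assistant checks this before every read or write.
        func currentLevel(for scope: DataScope) -> GrantLevel {
            guard let grant = grants[scope] else { return .noAccess }
            if let expiry = grant.expiresAt, expiry < Date() { return .noAccess }
            return grant.level
        }
    }

    // Example: let the assistant read the calendar indefinitely, but only act on
    // purchases for the next hour, and change your mind at any time.
    var dial = PrivacyDial()
    dial.allow(.calendar, level: .readOnly)
    dial.allow(.purchases, level: .readWrite, for: 60 * 60)
    dial.revoke(.purchases)

The specifics don’t matter much; the shape does: every grant is narrow, inspectable, and one call away from being undone.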

The Only Constant

No one knows exactly what 2040 will look like, but the next 15 years will bring deep changes to both technology and society. What counts as an “app”, the hardware we run those apps on, and even our shared sense of “normal” technology use are all up for grabs. A lot of products will ship in that time. We’ll be focused on which ones actually stick — and how to help build them.


Steamclock is not responsible for any temporal paradoxes this article might cause.