"Machine intelligence is the last invention that humanity will ever need to make." — Nick Bostrom
What WILL Happen by 2028
I) Traditional biometrics will become as redundant as typewriters. So many security companies believe AGI-level intelligence is required to break their systems. Completely false: narrow, purpose-built models will be more than enough.
II) This time in 3 years, you will get a FaceTime call from your mum asking how your day was, smiling her usual smile as she drinks a Coke. She will then ask to borrow some money for petrol. Except it won't really be your mum. It will be an AI trained by a malicious actor.
III) We will enter the era of 'digital humans'. You will no longer be able to tell the difference between a real and a fake person on intuition alone.
Digital AI humans will create LinkedIn and Instagram profiles, post selfies, go viral, run livestreams, and even apply for jobs and land interviews. These agents will mimic real people so convincingly that it becomes impossible to tell whether an applicant is human or synthetic. As a result, companies will face unprecedented risks to their IP, data, and internal systems, all while unknowingly hiring AI.
IV) Dating apps will break. Hinge and Tinder will be flooded with fake AI profiles. The barrier to catfishing used to be a simple phone or video call; tomorrow, that barrier will be gone.
V) There WILL be a Silk Road for AI agents.

A lot more people will be downloading the Tor browser. People will be able to buy agents for most practical use cases, but many of those use cases will be highly illegal. Your average person will be able to purchase agents that do their remote work for them: zero technical ability required, as easy as a Spotify subscription. This is not speculation; light versions of this already exist in Telegram group chats.
VI) I think we'll reach a stage where governments work even more closely with Big Tech (they already do, but it will become far more obvious). These partnerships will become more visible because governments are incredibly bureaucratic and full of low-quality talent, a combination that will collectively fail to keep up with the AI revolution.
VII) AI companies become 'barbershops'. What I mean is: AI companies will be everywhere, like barbershops, all branding themselves as the best in town and visible on every corner of every street. Not all of them will be used. The ones that market well, sell a story, and have great UX will sell; the overwhelming majority will just be another barbershop.
VIII) NONE of this is AGI, nor will it require AGI. And that is the scary part. These will all be modular, narrow AI systems that integrate minimally but effectively into human-based systems.
What Will NOT Happen by 2028
I) We will NOT have "thinking machines". Nor will we have AGI, at least not in the way it's often portrayed. What we're heading toward is a world full of what David Chalmers would call philosophical zombies, or what I'd call functional zombies.

II) We will still struggle with robotics because there are fundamental questions about reality that need addressing. What is intelligence at the most fundamental level? What is consciousness? How can we perfect locomotion using fundamental principles from evolutionary biology? How can we simulate or approximate biological stochasticity on static GPUs (or another system)? Biological stochasticity is fundamental to locomotion and embodied intelligent reasoning.
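To make the stochasticity point concrete, here is a minimal toy sketch (my own illustration, built on an assumed point-mass and a hypothetical PD controller, not anyone's published system). It uses signal-dependent noise, a standard formalization from motor-control research (Harris and Wolpert, 1998) in which the variability of a motor command grows with the command's magnitude; a deterministic control pipeline running on GPUs does not exhibit this property unless it is deliberately built in.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_motor_command(u, noise_scale=0.1):
    """Corrupt a motor command with signal-dependent noise.

    The noise standard deviation is proportional to |u|, so larger,
    faster commands are intrinsically less precise: a basic property
    of biological motor systems that deterministic controllers lack.
    """
    return u + rng.normal(0.0, noise_scale * np.abs(u))

# Toy point-mass "reach" under a hypothetical PD controller (all
# values assumed for illustration; this is not a real robotics stack).
dt, mass = 0.01, 1.0              # timestep (s), mass (kg)
pos, vel, target = 0.0, 0.0, 1.0
for _ in range(300):              # simulate 3 seconds
    u = 5.0 * (target - pos) - 2.0 * vel   # PD control law
    u_noisy = noisy_motor_command(u)       # stochasticity enters here
    vel += (u_noisy / mass) * dt
    pos += vel * dt

print(f"final position: {pos:.3f} (target {target})")
```

Even this caricature changes the control problem: the same command now produces a distribution of outcomes, and a policy has to be robust to that. Scaling that idea up to full-body locomotion is exactly the kind of open question I mean.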
III) There are about a million Stanford AI labs with names along the lines of "Thinking Machines", and 90% of them will functionally go under. The ones that don't will be the ones that embrace uncertainty, ignore the hype, and actually try to address fundamental questions about reality, which will in turn inform actual thinking machines (in the functional sense) in the future. OpenAI is OpenAI because they said yes to language models and reinforcement learning when everyone said no; you have to be willing to embrace uncertainty. Do not wait until everyone is "happy": by that point, unless you are massively lucky, you are too late. You have to be willing to die and let people figure out you were right 100 years after your death. At the end of the day, if you're building something, or if you understand something most people don't, you'll be ahead of your time. And that means it might take a while for everyone else to catch up.
IV) LLMs won't be 'conscious'; this has got to be one of the dumbest claims going around. Theoretically, I could be wrong, just as I could theoretically be hallucinating the fact that I've just published an essay to my personal website. But for all practical purposes, I've just published an essay to my personal website.
V) Software engineers are NOT "safe"; the definition of the job simply changes. You can think of the future of coding like a factory. There will still be people in the factory: they will design the factory and how the machines interact, and they will repair the machinery when things go wrong. But the machinery will do all the mechanical work. The job will be more about system design and system-level thinking.
At a fundamental level, modern software engineering is about mastering a relatively narrow set of core skills: data structures and algorithms, Git, APIs, and networking. These are the mechanical basics that underpin most software work. When we talk about highly talented engineers, we're often referring to those who have deeply internalised these fundamentals, allowing them to generalise more effectively to new or unfamiliar problems. But the level of complex reasoning supposedly required is often overstated. In reality, much of what's perceived as "problem-solving" is just knowing how to search well, whether that's on Stack Overflow, in ChatGPT, or through collaboration. People act as if the average engineer is solving logical paradoxes every day when, in fact, many aren't even especially strong at reasoning. Many are not especially emotionally intelligent either, nor do they bring the interdisciplinary depth that could help them build software with broader awareness. The myth of the hyper-rational engineer solving grand problems every day is just that: a myth.
If you are going to university and you are rote-learning CSS syntax in 2025, ask yourself what on earth you are doing; we are in one of the biggest existential periods in human history, and you are manually teaching yourself CSS and 'vibe-learning' AI trends through Sam Altman tweets. Attention drives hype, which drives investment, which drives more attention. Altman is a genius, but he will often say things like "AGI by year X" when it plays into this cycle, and so do many others. Just because an intelligent, famous person says something doesn't mean it's true; that's called authority bias. Otherwise, you would have listened to Isaac Newton on alchemy, or taken Einstein's initial dismissal of quantum mechanics at face value. All cracked geniuses, but no human being is infallible. Some people need to stop getting all their information from X, start reading research papers, and throw themselves into literature and books. If you don't, that is completely fine, as long as you can decipher what is pure hype or misguided and what is truth.