Technically Correct, but Universally Wrong
The tech world is filled with people who have strong views and are more than happy to share them. An unreasonable number of them are privileged white males. Many of them see the world through the lens of their own personal experience and don't bother much with external perspectives. Far too many are eager to start a fight over their religion of choice (be that a language, a technology, a "thought leader" whose message they subscribe to, or a topic they have precious little familiarity with).
I'm many of those things; I'd really like to avoid being all of them. Let's talk perspective and trying to see the big picture. Let's talk about how to be wrong, and be okay with that.
Warning
This post takes an intentionally excessive position on the topics it covers; it's probably going to anger anyone with strong feelings about technology, business, or the use of metaphors. You're under no obligation to read it, and your life will probably not be any better off for having done so. May Cthulhu have mercy on your soul.
I've recently been spending a lot of time listening to others tell me about their specific area of focus and how it is the single most important thing in the whole world. Some of this has been in person, some of it has been from people I know, and some of it has been in the form of self-published op-eds intended to "educate the masses about the real state of AI from someone with insider knowledge".
The hallmark of these experiences, and I'm going to pick on the AI side to make a point, is that they utterly disregard the practical base rates and instead assume that the world is wholly defined by the contents of their echo chamber. Looking exclusively at these echo chambers, it's easy to imagine that these conclusions are correct; after all, we have seen a stepwise increase in the number of AI search engines in the last few years, OpenAI saw some of the fastest customer growth of any startup ever, and NVIDIA just overtook Apple to become the 3rd-largest publicly traded company in the world, gaining over $1T in market capitalization in just a couple of months.
When you look at those datapoints in isolation, it's easy to conclude that we have gone from (nearly) zero to running in only a few years, and that if we keep this up we cannot even begin to imagine what a future a few years from now might look like. It's much the same as if you walked into a maternity ward on its first day of opening: yesterday there were no babies, and now there are dozens! If we keep up this scaling we'll have millions of babies in the ward by the end of the year!
Of course, these things don't operate in isolation - they run in the real world, where Amdahl's Law applies and where the average annual rate of improvement for most technologies sits at about 1-3%. Claims that US domestic electricity production will increase by tens of percent within a couple of years rest on the assumption that supply will scale to meet the growth in demand; but power grids are planned decades ahead of their current load, and the fact that there's elasticity built into the system (which we're rapidly using up) doesn't mean you can ignore the lead times on building out this critical infrastructure.
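The intuition behind Amdahl's Law is worth making concrete: no matter how dramatically you accelerate one part of a system, the parts you can't accelerate set a hard ceiling on the overall gain. Here's a minimal sketch in Python (the 80% split and the function name are illustrative assumptions, not figures from any real system):

```python
def amdahl_speedup(parallel_fraction: float, speedup_factor: float) -> float:
    """Overall speedup of a system where only `parallel_fraction` of the
    work can be accelerated, and that part gets `speedup_factor` faster."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / speedup_factor)

# Suppose 80% of the work can be sped up a million-fold. The overall
# gain still caps out near 5x, because the untouched 20% (grid
# build-out, training time, data collection...) runs at the old pace.
print(amdahl_speedup(0.80, 1_000_000))
```

The takeaway: making one component a million times faster buys you almost nothing more than making it ten thousand times faster would; the serial remainder dominates.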
For a prime case study, consider South Africa - where a failure to invest in power generation infrastructure for 20 years resulted in the country reaching the limit of its generation capacity in 2004; we're now 20 years down the line, and regularly scheduled rolling blackouts are still a way of life in Africa's most developed economy. These issues are not solved overnight, and the grid still routinely sheds a meaningful fraction of its total capacity. Game that out for a low-tens-of-percent increase in capacity and you're starting to see the issue...
Of course, electricity generation isn't the only idea being peddled - because privileged white males working in tech cannot help but fall into the trap of assuming that 9 women can make a baby in 1 month. In this case, the idea is that a billion chatbots will be able to produce enough AI research to yield a five-order-of-magnitude improvement in AI performance within a decade - ignoring the fact that research, critically valuable as it is, is only part of the problem. You still need to build the systems, you still need to scale and operate them, they still require time to train and data to train on, and even if you figured out all of the answers up front (which one can rarely do without incrementally learning from your experiments), it still takes an enormous amount of time to train foundation models.
That's not to say these aren't smart people, and it's not to say that this is a problem limited to the AI sphere either; it's a problem induced by a lack of perspective and a bias for seeking out supporting viewpoints to justify our assumptions. It's the reason diverse teams consistently outperform homogeneous groups; it's the reason I get so excited when people on my team share ideas that directly contradict my own (and doubly so when they're proven right)!
Another area where I see the same patterns playing out is the software engineering realm, where it is common to hear even seasoned engineers complain about how senior leadership continues to get in the way of solving the important problems and demands that engineering work on pointless initiatives that are doomed to fail. We'll quietly ignore the fact that those same engineers have watched the value of their share options triple over the last 5 years, and chalk that up to the market being irrational...
Or, hear me out, we could assume that there's something bigger at play - that businesses do not exist to build great software, or even great products. They exist to make a profit and grow their market capitalization (ideally in a manner that allows them to continue doing so indefinitely). With this in mind, an executive team may well choose to forgo investments in a product improvement and instead engage in engineering theatre to convince customers to stick around after a major outage (or at the very least, minimize the reputational damage that acting like it's just another Monday would cause). It's the reason they might opt to lay off thousands of employees to appease the Reagan school of economics, knowing that they're losing valuable expertise and the capacity to execute on future plans, in order to maximize the value they might gain from being in a strong financial position at the right moment.
So the next time an engineer is telling you how every problem would be solved if only their company would fix this small little thing called "everything", or a Silicon Valley tech-bro is trying to convince you that AI will take your job, your partner, your house, and your dog will love it more than you - just remember that they're staring at the world through the pinprick hole of their sphincter and they need only look around to realize their idea is surrounded by shit that makes it unlikely in the extreme.
Tips
If there's one thing to take away from this: surround yourself with people who challenge you, who will think for themselves and check your work, who will introduce you to different ideas and experiences. Surround yourself with people from different backgrounds, fields, genders, and demographics - learn to see the world through their eyes, and try to avoid letting yourself fall into the trap of thinking there is ever a simple answer to a system dynamics problem. Go and read books, especially those that make you feel uncomfortable or force you to confront your humanity. Whatever you do, don't make the mistake of assuming that just because everyone is allowed to have an opinion, all opinions are equally valid: think for yourself and challenge the status quo (especially when you can make it better for others less fortunate). That's more than one thing... so know when to break the rules.
Benjamin Pannell
Site Reliability Engineer, Microsoft
Dublin, Ireland