BRAD ADAIR

My Thoughts on AI

I have started writing this post many times over the last year or so, but I have always stopped because my thoughts on AI and its impact keep evolving.

When I first started this post, I was tempted to dismiss AI out of hand. The models were not very good, and it seemed like the whole thing would be a flash in the pan. However, things improved and evolved faster than I thought possible, and it is now a complex topic that cannot and should not be easily dismissed.

I have slowly turned more and more tasks over to AI, and the results have continued to improve. It now manages a lot of busywork for me entirely on its own. It has also written a lot of code for me, though none of it unsupervised. I still have not found it trustworthy enough to let it just code things and put them out there. I only let it write in programming languages that I know, so that I can check its work, and I approve everything it does.

Still, there are a couple of applications that I had wanted for a long time but never had the time or energy to create. Thanks to AI assistance, they now exist: a dashboard for my homelab that is exactly how I want it, an IT intelligence dashboard that lets me see important metrics at a glance, and a calendar app that solves a problem probably only I have. None of these are huge, but they are quality-of-life improvements for me, and I would never have gotten around to them without the assist from AI.

AI is here and it is not going anywhere

For those who hate the idea of AI and think that it should be banned, or that no one should use it, I hate to break it to you, but you are tilting at windmills. AI is not going anywhere. It has already proven itself useful in any number of cases, and as the models continue to improve, the list of places where it makes sense to employ it keeps growing.

While several studies have shown that developers are not actually more productive with AI, many developers believe that they are. According to a METR study:

When AI is allowed, developers take 19% longer than without... Developers predicted a 24% speedup, but even after the study concluded, they believed AI had helped them complete tasks 20% faster when it had actually delayed their work

Source

The point is that developers feel more productive, and I think that makes them more productive overall, even if it takes them slightly longer to write the code itself.

Additionally, I think that we are measuring the wrong things with these studies. Are lines of code really the ultimate mark of productivity? What about the other tasks that take time? Creating merge requests, parsing notes from meetings, creating and revising documentation, writing tests. These are all tasks that AI is good at and that developers can offload, which gives them more time to focus on the code. That not only helps overall productivity, but lets devs focus on the part of the work they are most likely to enjoy.

Finally, many other professionals are starting to use AI in their day-to-day lives. Anecdotally, I think that they are seeing larger productivity gains than software developers. While I have not found any studies that really support this one way or another, I spend a lot of time talking to people in finance, law, etc., and many of them report productivity gains from AI.

AI has a lot of problems

Despite the positives that I have written about already, there are many issues with AI. These include the fad of "vibe-coding", environmental and energy issues, and intellectual property concerns.

Vibe Coding

Let's start with "vibe-coding". First of all, the term is poorly defined. I am going to define it as having AI write code or create an app where either (1) the code is in a language the user does not know and therefore cannot check, or (2) regardless of whether the user knows the language, they did not even look at the code and just let the AI do its thing. The result is then released into the wild.

The biggest issue with this is security.

AI-assisted commits exposed secrets at more than twice the rate of human-only commits — 3.2% versus 1.5%. A December 2025 analysis by security firm Tenzai examined 15 production applications built using five major AI coding tools and identified 69 vulnerabilities across the sample. API security firm Escape.tech scanned over 1,400 vibe-coded production applications and found that 65% had security issues and 58% contained at least one critical vulnerability.

Source

Fortunately, this appears to be a trend that will be short-lived. A quick look at Reddit posts on these topics shows that pushback is building, and I think the fad is burning itself out.

Environmental and Energy Impacts

As the cost of living continues to rise, the price of energy is skyrocketing along with it. Some of this is due to the war in Iran. But a big chunk of it is from data centers and that appears to only be getting worse in the near future. According to Pew Research:

U.S. data centers consumed 183 terawatt-hours of electricity in 2024, according to IEA estimates. That works out to more than 4% of the country's total electricity consumption last year - and is roughly equivalent to the annual electricity demand of the entire nation of Pakistan.

You don't need to know much about energy or economics to understand that this is simply unsustainable. We are going to need ways to generate more energy and we are going to need to do so in a cleaner fashion.

This also does not address the issues with water supplies, noise pollution, and other harms that have been reported in communities with mega data centers.

Intellectual Property Issues

We are just at the very beginning of deciding as a society where AI fits with things like copyright law. The courts are in the process of figuring out some of it:

The number of infringement cases filed against AI companies in 2025 more than doubled the total at the end of 2024, from around 30 to now over 70

Source

This process is slow, and it will take time to figure out where things land legally. It will take even longer, and be even more complicated to figure out where things land ethically. It is pretty easy to say that copyright holders should be able to control their work and be compensated for its usage. The question is where the line is drawn, and that is far more complicated and far more nuanced.

Jobs

Finally we come to the biggest elephant in the room: how many jobs are going to be lost to AI? Right now, I do not think that many will be. I know there have been a lot of announcements about layoffs attributed to AI, and even more speculation. However, much of this is simple AI-washing to keep valuations from taking a hit:

Investor Marc Andreessen is the latest tech leader to scoff at the idea that AI is fueling a wave of layoffs — dubbed “AI washing.” AI is the “silver-bullet excuse,” Andreessen said this week on the “20VC” podcast. But that doesn’t mean AI won’t be invoked as companies signal that they’re adopting the new tech — and making hard choices. Andreessen said the current wave of layoffs reflects widespread overstaffing during the pandemic, not AI. “Essentially every large company is overstaffed,” he said, most by at least 25% and some by as much as 75%.

Source

Additionally, time and again we have fretted about technology destroying jobs, and many times the reverse has occurred. Take bank tellers and ATMs, for example:

Basically starting in the mid-1990s, ATM machines came in in big numbers. We have, now, something like 400,000-some installed in the United States. And everybody assumed –including some of the bank managers, at first — that this was going to eliminate the teller job. And it didn’t. In fact, since 2000, not only have teller jobs increased, but they’ve been growing a bit faster than the labor force as a whole. That may eventually change. But the impact of the ATM machine was not to destroy tellers, actually it was to increase it.

What happened? Well, the average bank branch in an urban area required about 21 tellers. That was cut because of the ATM machine to about 13 tellers. But that meant it was cheaper to operate a branch. Well, banks wanted, in part because of deregulation but just for basic marketing reasons, to increase the number of branch offices. And when it became cheaper to do so, demand for branch offices increased. And as a result, demand for bank tellers increased. And it increased enough to offset the labor-saving losses of jobs that would have otherwise occurred. So, again, it was one of these more dynamic things where the labor-saving technology actually created more jobs.

This is in fact a much more general pattern. We see a whole number of occupations where you might think that technology is going to destroy jobs because it’s taking over tasks; and the reverse happens. So, if you look, for instance, when they put in scanning technology into cash registers, the number of cashiers actually increased. When legal offices started using, beginning in the late 1990s, electronic discovery software for doing discovery of documents in lawsuits, the number of paralegals increased rather than decreased.

Source

So while I am sure that there will be some jobs lost to AI, and many others that will be blamed on AI, there is a good chance that AI will actually increase jobs over time.

Conclusion

So where does that leave us? AI is useful and it is not going anywhere. But it is also messy, expensive, legally unsettled, and frequently overhyped. The gap between what AI boosters claim and what the research actually shows is wide, and yet the research also misses things that matter, like whether people enjoy their work more, or whether that calendar app I always wanted finally exists.

I think the honest answer is that we are in the middle of it, which is always the hardest place from which to get a clear view. The people declaring AI an extinction-level event for knowledge workers are wrong. So are the people claiming that it is already transforming work in ways that can be, and have been, cleanly measured. The truth is complicated and, honestly, more interesting than either camp wants to admit.

My opinion is that five or ten years from now, AI will be part of the background, something most people use without thinking much about it. Companies and people that ignore or avoid it will be left behind; those that figure out its best uses will flourish. The real questions are not about whether it sticks around, but about how we handle the problems that come with it: the energy costs, the intellectual property concerns, the security risks of deploying it carelessly. Those are all solvable problems, but they require actual effort to solve, and right now most of the conversation is happening at the extremes instead of in the middle, where the work gets done.

AI my thoughts thinkpiece