
Industry Pro: No, AI Is NOT Driving the Mass Big Tech Layoffs!

“It may explain the loss of some content generation and customer support positions — but doesn’t explain the layoffs of software developers.”

Last week, we ran a story asking whether new AI was driving the mass Big Tech layoffs, as some have suggested. Our story attracted the attention of an industry professional who kindly agreed to let us publish the letter he wrote to us in response. And here it is:

A recent Mind Matters News article floated the question of whether generative AI was behind the recent spate of tech layoffs. Perhaps. It may explain the loss of some content generation and customer support positions — but I’m fairly confident it doesn’t explain the layoffs of software developers.

I work with GitHub’s Copilot AI code generation tools regularly. I am grateful for them and enjoy using them. Both my personal experience and my sense from my colleagues suggest that code-generation AI is a helpful local productivity boost for certain contexts and problems, but is far from taking over anyone’s job.

Where we are now

We’re still at the stage where these technologies are comparable to an incredibly well-implemented combination of “spell checking on steroids” and “let me google that for you.” Tremendously useful, particularly when performing boring repetitive tasks, working through boilerplate, or implementing fiddly little algorithmic functions. What current AI solutions really aren’t able to do (yet?) is reason about and construct broader systems of software.

As a small example: I’m currently working on a biologically-relevant simulation code. The overall design of the code, and the optimizations I’ve discovered, are all mine. As I’ve run into issues and had to fix them, I’ve been the one bringing software expertise to bear.

At every step of the way, though, GitHub’s Copilot has helped me do some of the typing. There have been some memorably helpful occasions: I type the name of a function, then take a deep centering breath. I know I’m about to embark on a small side quest to write some exceptionally fiddly logic-puzzle code… when — Presto! — Copilot generates the correct (or almost correct; I really do have to check carefully each time!) code.
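To make that concrete, here is a hypothetical illustration of the kind of fiddly logic-puzzle function I mean (the example is invented for this letter, not taken from a real session). Given just a name and a docstring like the ones below, a Copilot-style tool will often produce a body very close to this:

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals into a minimal sorted list."""
    merged = []
    # Sorting by start point means any overlap is with the previous interval.
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# The careful-checking step still matters: do touching intervals merge?
# Are empty inputs handled? Plausible-looking is not the same as correct.
print(merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]]))  # [[1, 6], [8, 10], [15, 18]]
```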

There have been times when Copilot has pleasantly surprised me with a novel solution that I hadn’t thought of. And even a small handful of times where its suggestions revealed a gap in my mental model of the system that I had to go back and fill. These nudges, prompts, and solutions have been undoubtedly helpful. They’re the reason I tend to prefer writing (some kinds of) code with Copilot instead of without.

But it doesn’t always work that way…

At the same time, there have been hilarious moments where Copilot has clearly misunderstood my intent. In fact, as I’ve spent time with Copilot, I’ve reached the point where my own mental model of Copilot’s non-mental model allows me to roughly predict when it’ll get things wrong, how it’ll get things wrong, and even why it’ll get them wrong, as in: “Oh, right, of course you generated that, because 95% of code on the internet that looks like this is doing that sort of thing. But that’s not what I’m doing here, Copilot.”
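Here is a hypothetical sketch of that failure mode (the function and its context are invented for illustration). Most code on the internet named normalize rescales a vector to unit length, so that is the statistically likely completion, even when the surrounding simulation needs something else:

```python
def normalize(weights):
    # What a Copilot-style tool plausibly suggests here, because most code
    # on the internet named "normalize" rescales a vector to unit length:
    #
    #     length = sum(w * w for w in weights) ** 0.5
    #     return [w / length for w in weights]
    #
    # What this (hypothetical) simulation actually needs: entries rescaled
    # to sum to 1, because they are sampling probabilities, not a geometric
    # vector. Statistically likely is not the same as contextually right.
    total = sum(weights)
    return [w / total for w in weights]

print(normalize([2, 6, 2]))  # [0.2, 0.6, 0.2]
```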

I expect these areas will improve over time. More: I expect generative AI will eventually get to the stage where it can operate at more than just a local level of developer support. No doubt, future iterations will start to provide input and support in the realm of architecture and broader system-wide problem solving. We’re just not there yet. In fact, it’s often precisely when I’m doing those sorts of things that I turn Copilot off: Its eager (and often incorrect, given the context) suggestions break my concentration as I try to think about the bigger system.

All that is just to say: It’s too early to see AI affect actual workforce numbers for software development. Will it happen someday? Possibly. But I’m still skeptical (if, admittedly, at least a little biased as a developer myself!). Two reasons:

1) Reality abhors a vacuum. To the extent that AI frees developers to do higher-level work, that is the extent to which developers’ time will get filled with (arguably more valuable) work.

2) We’re still in the early days of the AI hype cycle, pictured below:

[Figure: Gartner Research’s Hype Cycle diagram. Jeremy Kemp/Gartner, Inc., CC BY-SA 3.0]

Early signs of the trough?

I don’t think we’ve hit the peak yet, but there are early signs of the trough. Here’s an interesting study that speaks to the challenge of the current AI toolchain. Long story short: we may be seeing code quality start to go down. These local-context AI helpers don’t think about the broader properties of a codebase. They tend to (effectively) do a lot of copy-pasting to expediently solve the presenting problem. This seems to give us developers permission to be a bit lazier than usual, and it correlates with a reduction in the healthy refactoring that is often needed to keep a codebase neat and nimble.
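As a hypothetical illustration of that pattern (mine, not the study’s; the client object stands in for something like a requests.Session, and the endpoints are invented): a local-context helper happily completes each call site independently, while noticing the shared logic worth extracting is still left to the human.

```python
# What local, expedient completion tends to produce: each call site
# re-implements the same retry-and-parse logic, because the tool only
# "sees" the function it is currently completing.
def fetch_user(client, user_id):
    for _ in range(3):
        resp = client.get(f"/users/{user_id}")
        if resp.ok:
            return resp.json()
    raise RuntimeError(f"GET /users/{user_id} failed after 3 attempts")

def fetch_order(client, order_id):
    for _ in range(3):
        resp = client.get(f"/orders/{order_id}")
        if resp.ok:
            return resp.json()
    raise RuntimeError(f"GET /orders/{order_id} failed after 3 attempts")

# The refactoring a human still has to notice: one shared helper,
# and each call site becomes a thin wrapper around it.
def get_json(client, path, attempts=3):
    for _ in range(attempts):
        resp = client.get(path)
        if resp.ok:
            return resp.json()
    raise RuntimeError(f"GET {path} failed after {attempts} attempts")

def fetch_user_refactored(client, user_id):
    return get_json(client, f"/users/{user_id}")

def fetch_order_refactored(client, order_id):
    return get_json(client, f"/orders/{order_id}")
```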

Can these problems be solved? Probably? Maybe? But this is a great example of the Law of Unintended Consequences: By making developers more efficient locally (i.e. the current thing I’m working on just got a bit easier), we may have made them less efficient globally (i.e. I’m not thinking as carefully as I used to about the system as a whole).

Compare it to a semi-self-driving car…

I’ve experienced something similar with my semi-self-driving car. I absolutely love the experience of driving down the highway with adaptive cruise control and lane-assist turned on. The experience is less stressful and less demanding, and I often reach my destination more relaxed in my 2021 Volvo XC40 than I do in my (decidedly-non-self-driving) 2015 Honda Odyssey.

But I know (in fact, I’ve experienced) that my self-driving car is only semi-self-driving. I find myself, now, entering a different mental state as I flip on the drive-assist features. My role shifts a bit further from “active driver” and a bit closer towards “active supervisor” — I become less attentive to some details (Am I at the speed I want? When do I need to start turning the wheel?) and a bit more attentive to others (Is that glare going to confuse the car? Has it detected the car that just entered my lane?). I can allow myself a brief moment of laziness — this next stretch is a straightaway on a clear sunny day; the car’s got it. I can take in some of the scenery…

Copilot can write some code for me, but I need to be attentive to make sure it’s the right code and that it fits into my broader codebase. My XC40 can handle some driving for me, but I need to be attentive and remain an alert supervisor to make sure I end up where I need to go (in one piece!).

That’s the dichotomy that’s emerging as these semi-capable (but not fully capable) AI systems become a more commonplace part of daily life: they allow us to be lazy in some respects, but demand that we be less lazy in others.

You may also wish to read: Is new AI driving the mass Big Tech layoffs? The jury’s out on whether that’s really what’s happening and, if so, whether it will improve profitability. The Big Tech companies may see replacing workers with AI as only natural. After all, that’s the future their executives were told from childhood to expect.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
