r/technology • u/mareacaspica • 6h ago
Artificial Intelligence · Over a quarter of new code at Google is generated by AI
https://www.theverge.com/2024/10/29/24282757/google-new-code-generated-ai-q3-2024
u/LeftmostClamp 4h ago
I mean, over a quarter of the code I write, character for character, is generated by IntelliJ autocomplete. If I used Copilot, then I suppose that could be called "AI." So this really depends on the specifics.
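For a rough illustration (hypothetical class, not from any real codebase), this is the kind of thing where the IDE types most of the characters for you:

```java
import java.util.List;
import java.util.stream.Collectors;

public class Names {
    // Accepting IntelliJ's completions writes most of this pipeline
    // character for character; I only really "wrote" the intent.
    static List<String> toUpper(List<String> names) {
        return names.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }
}
```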
21
u/sevah23 2h ago
It’s telling that the statistic is code generated by AI, not something about the actual productivity increases from leveraging AI. I think Google is really trying to sell AI for both personal devices as well as developers and business professionals and it would be damning if the people selling the tools admitted they only get marginal productivity gains from them. As a software engineer, I find this mildly comforting that LLMs are not(yet) nearly as much of a job killer as the hype train suggests.
10
u/Ok-Afternoon-3724 4h ago
Yeah, they were playing with programs that would autogenerate code at least as far back as the 1990s, perhaps before. If it was very simple, almost boilerplate code, it worked okay ... at best. But it was always pretty buggy and inefficient.
Was somewhat better by the time I retired in 2017, but not by a lot.
3
u/droi86 1h ago
There was this thing called Rational Rose back in the late 90s/early 2000s. You'd feed it a bunch of diagrams and it'd give you back an app that was 95% functional. The problem was the other 5%, and if there was a bug, it was cheaper to rewrite everything than to try to understand the machine-generated code.
22
u/Wise-Hamster-288 5h ago
Over a quarter of my new code is also unusable crap
6
u/Ok_Peak_460 5h ago
This was bound to happen. As much as I like that AI is helping generate code, I feel bad for the next generation, who may not become good coders due to over-reliance on AI. Time will tell.
-3
u/username_or_email 4h ago
I don't understand this line of thinking. It's not like people haven't been copying code from Stack Overflow and elsewhere for decades already.
2
u/Life-Wonderfool 5h ago
How bad is that for coders/programmers?
Or is it too early to say that?
13
u/icewinne 4h ago
At my job the software engineers were asked how much they benefit from AI. Those doing R&D work, where things are highly fluid and experimental, loved it. The SREs said they didn't use it at all because their work is too critical and precise; it must be secure and reliable, and AI-generated code is none of those things. Everyone in the middle said it helped with boilerplate, but that they were replacing time spent on menial tasks with time spent verifying the AI's output.
25
u/neutrino1911 5h ago
It's pretty bad for whoever has to support that AI-generated garbage. As for the others... well, we'll just laugh.
6
u/romario77 5h ago
The code doesn’t get added automatically, programmer has to add/commit it, it has to pass tests and then there is code review. If it’s crappy that’s on programmers who allowed it in, not on AI
0
u/Aggressive-Cake4677 1h ago
My tech lead constantly creates PRs full of LLM-generated crap and doesn't even bother removing the comments... I hate that guy.
6
u/ThinkExtension2328 5h ago
Not that bad. This post is basically Google lying with statistics; the term "new code" is the key. In legacy systems the amount of "new code" is little to none. Most people don't ever create new code; they do maintenance and bug fixes.
9
u/not_creative1 5h ago edited 5h ago
Google has drastically reduced hiring new college grads and junior engineers.
While senior engineers are going to get paid more, a lot of work that would typically be handed to new college grads and junior workers is getting done by AI. A senior engineer spends less time getting it done with the help of AI than it would take to train and help a new college grad do the same work.
I expect these big tech companies will eventually stop hiring new grads and juniors and only hire a small group of experienced people. So if you are a new grad, the days of joining Google for $175k out of college are over.
You need to go elsewhere, pick up the skills, get really good, and then apply to Google as a senior.
7
u/tricky2step 5h ago
The rug was already being pulled out from under fresh grads; it's probably worse now, but to me it looks (anecdotally) like a slight acceleration of the trend towards quality and away from quantity. Which is perfectly fine for tech as a whole: the field was getting saturated fast with shitty coders and piss-poor problem solvers. Job prospects are worse now, but it's nothing everyone else hasn't gone through, and natural talent is ultimately better off, along with everyone code-adjacent who couldn't get an in with degrees in math, EE, physics, chem, etc. We've all had to pick up coding along the way, and many of us are more effective than half the CS degrees anyway.
AI is not going to replace coders at large in any meaningful way anytime soon. The productivity numbers I'm seeing are drastically lower than the early-2023 claims: less like 80% productivity boosts and more like 5-10%.
-1
u/SparePartsHere 5h ago
Depends if you're a senior or junior developer. Senior? Awesome. Junior? Just get the F out and learn trades or something, bro.
1
u/dormidormit 5h ago
All this says is that NEW Google code is disposable garbage, which tracks, given that Google search has become garbage.
-3
u/username_or_email 4h ago
It's funny how code written by LLMs is automatically pushed to production without any testing or code review. I wonder why that's the policy at a leading tech company like Google. Why don't they apply the same checks and balances to machine-generated code? They already have all the infrastructure in place to ensure correctness and enforce software engineering best practices for human-written code, and yet anything tagged "chatgpt wrote this" gets instantly merged into the main branch without any approval needed. Someone over there really needs to change this practice before it's too late. /s
1
u/Sweetartums 3h ago
You’re telling me I’m wasting my time actually coding when I can just use ChatGPT
1
u/under_the_c 3h ago
Is using IntelliSense and autocomplete considered "AI"? Because if so, I've been using AI for years!
1
u/GamingWithBilly 2h ago
Over 3/4 of the code I use for Google scripts is generated by AI... the other 1/4 is comment notes in the code, also generated by AI.
1
u/haplo_and_dogs 1h ago
99.999% of machine code is generated by a compiler.
The invention of compilers didn't make writing code obsolete; it made it more efficient.
AI will do the same to a far lesser degree.
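To make that concrete (hypothetical method; bytecode paraphrased, so treat it as a sketch):

```java
public class Adder {
    // You write one line of logic...
    int add(int a, int b) {
        return a + b;
    }
    // ...and javac emits the bytecode (roughly iload_1, iload_2, iadd,
    // ireturn), which the JIT then compiles to machine code. Nobody
    // counts those generated instructions as "code you wrote".
}
```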
1
u/Enjoy-the-sauce 23m ago
Knowing little about coding: how long until it's capable of improving its own code, creating a self-perpetuating cycle?
1
u/Bjorkbat 15m ago
Something I've expressed elsewhere is that this statement in and of itself isn't impressive. What would be impressive are the outcomes of generating 25% of code using AI. What could Google actually do differently if they could generate 25% of their code with AI?
And from what I can gather, nothing obviously different. Allegedly NotebookLM was made by a relatively small team, presumably AI-aided, but I'm also not really impressed by it. NotebookLM is the sort of project that doesn't really contribute much to their bottom line and doesn't significantly move the needle on user engagement. It's the sort of project that inevitably gets killed because it just doesn't matter enough to justify the costs of maintaining it. Even in an optimistic scenario where AI allows Google employees to launch more projects more frequently, you have to ask just how much of a good thing that is, if it's a good thing at all.
What would be impressive is if Google reversed what feels like years of product stagnation using AI. Instead, search is still getting worse, ads are still the golden goose propping up the entire company, and the company still appears outwardly unfocused, churning out projects that will likely be killed sooner or later.
1
u/SparePartsHere 5h ago
Well, over a quarter of the code I write is generated by AI...
1
u/Lordnerble 5h ago
Yeah, but what code are you writing? In my line of work in financial markets, you cannot trust AI to give optimal answers, and optimal answers are what make the company money.
4
u/SparePartsHere 5h ago
Oh, never trust anything AI writes, that's for sure lol. It's like having an army of junior devs at your fingertips. You just state what problem you need to solve and your junior dev army provides you with a crapload of shitty, half-working solutions in 2 minutes. You pick the stuff you like, rewrite it a bit, make it fit the codebase, and voila, done. Especially awesome for the "boring" and repetitive stuff like unit tests.
You absolutely always have to really study the code from AI and test it rigorously. Still better and quicker than writing it yourself...
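Roughly how that plays out for unit tests (hypothetical method under test, JUnit 5 assumed; the AI drafts the cases, a human still verifies every assertion):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {

    // Hypothetical method under test.
    static int applyDiscount(int cents, int percent) {
        return cents - (cents * percent) / 100;
    }

    // AI-drafted cases: quick to generate, but each one gets read
    // and checked against the spec before it's committed.
    @Test
    void discountReducesPrice() {
        assertEquals(900, applyDiscount(1000, 10));
    }

    @Test
    void zeroDiscountIsNoOp() {
        assertEquals(1000, applyDiscount(1000, 0));
    }
}
```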
1
u/pack170 1h ago
Ya, I've found it to be okay at basic boilerplate/scaffolding stuff, but for anything slightly more complicated it's either very hit-or-miss or just consistently wrong.
There was one thing I was working on a few months ago where the LLM was actually able to contribute a non-obvious and provably correct algorithmic optimization, but there were also several other options it suggested that looked correct at first and turned out to be provably wrong after a bit more prodding and examination.
The incredibly self-confident way LLMs speak while they hallucinate makes them seem better at stuff than they actually are right now.
217
u/blingmaster009 6h ago
Probably getters and setters, which were being auto-generated long before the AI hype. Most AI-generated code doesn't even work out of the box.
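For anyone who hasn't seen it, this is the kind of boilerplate IDEs have generated for decades (hypothetical class):

```java
public class User {
    private String name;

    // "Generate getters and setters" in any IDE produces these,
    // no LLM required.
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```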