INTERVIEW – Software engineers outsourcing work to AI are imperiling their own jobs, warns expert
Generative AI has rapidly become mainstream in programming environments, with ‘more than 40% of the code generated on GitHub’ now coming from these tools, says Hasan Yasar of Carnegie Mellon University

- Major tech firms like Microsoft and Meta are slashing their programming workforce as AI increasingly assumes roles traditionally performed by human developers, says Yasar
ISTANBUL
Artificial intelligence could threaten the jobs of software engineers if developers continue automating more tasks through AI-driven tools, a software engineering expert has warned.
Programmers are already relying on generative AI and large language models (LLMs) such as ChatGPT, Claude, and Gemini, tools that have boosted their productivity but may also be putting their professional futures at stake, said Hasan Yasar, an academic at Carnegie Mellon University in the US.
Generative AI has rapidly become mainstream in programming environments, with “more than 40% of the code generated on GitHub” now coming from these tools, he told Anadolu at a recent event in Istanbul.
“Unfortunately, software developers who think that they can only write code will not have a job,” said Yasar, a technical manager at the university’s Software Engineering Institute.
“When we look at the job hazard, the developers who only write code need to think a little differently and they need to think in a solution-oriented way.”
He warned that the risks are already being seen in the industry, with major tech firms like Microsoft and Meta slashing their programming workforce as AI increasingly assumes roles traditionally performed by human developers.
Yasar’s warning is backed by data from the US Bureau of Labor Statistics cited in a March report by The Washington Post, which showed that computer-programming jobs in the country have fallen to the lowest level since 1980.
The figure stood at around 300,000 in 1980 and jumped to over 700,000 in the early 2000s, but has since roughly halved, according to the report.
AI’s escalating energy demands
Aside from job displacement, Yasar highlighted another major issue concerning the proliferation of AI technologies: their significant and escalating energy consumption.
“What I mean by a serious energy need is twofold. One, there is energy that needs to be used to train those models,” he explained in an interview at the DevSecOps Days Istanbul Conference held in Istanbul last week.
The second, he said, is AI's growing implementation at the consumer level, particularly in smartphones and mobile devices, which further drives up energy demand.
“Today, we all have AI chips in the phones in our pockets, and their battery needs have increased … It has gotten bigger and we need more energy,” he noted.
AI’s energy demands have already become an industry-wide concern. According to Google’s 2024 Environmental Report, the company’s global data centers consumed nearly 6 billion gallons (approximately 22.7 billion liters) of water in a single year.
Furthermore, the International Energy Agency (IEA) projects that rapid developments in AI and cryptocurrency will double global data center energy consumption by 2026, increasing from 460 terawatt-hours to a staggering 1,000 terawatt-hours annually.
Yasar urged both developers and consumers to remain aware of these energy implications, advocating for more efficient usage of AI systems.
Highlighting the environmental implications, he provided a sobering comparison about the carbon footprint of AI: “The CO2 cost of training an AI model is equivalent to an airplane’s CO2 emission while crossing the Atlantic.
“So, what do we need to do? We need to pay attention to green energy initiatives such as ‘green software’ to address the environmental issues.”
Yasar even advised users against unnecessary interactions with AI chatbots to reduce energy consumption, humorously reminding them they “don’t need to thank” AI systems, as these minor interactions contribute to overall energy usage.
Despite these challenges, Yasar also highlighted the positive impacts AI has when integrated with DevSecOps – a methodology that integrates security practices into every phase of software development, from initial design through testing, delivery, and deployment.
“AI really speeds us up in terms of finding or solving the problem. So, we are actually taking our processes, automation, even further,” he explained.
Yasar explained that advanced AI tools, such as LLMs, greatly facilitate the detection and resolution of security vulnerabilities during software development. Rather than replacing software developers outright, he said AI complements their work by enhancing efficiency and quality.
In this context, AI does not take our jobs; rather, it helps us, he asserted.
“While helping us, it also increases our quality … We can get test results fast. We can develop fast – whether it is security testing, functional testing, or user testing.”