In recent years, generative AI has grown at a surprising pace, evolving from a niche phenomenon into a crucial technology for software development. Market analyses predict strong growth for AI-based programming tools: the global market for AI code tools is estimated to rise from 4.3 billion dollars in 2023 to 12.6 billion by 2028 (CAGR ~24%). Similarly, the overall market for generative AI, which also includes automatic code generation, is expected to grow from 21.1 billion in 2025 to 97.8 billion in 2030 (CAGR ~36%), with code generation alone projected to expand at an average annual rate of around 52%. These figures point to a rapid expansion of investments and applications.
At the same time, practical adoption is already high: over 80% of developers use AI tools to write code and, according to Forbes, AI support can increase average developer productivity by 126%, reducing the time spent on repetitive tasks and the errors they involve. On a technological level, advanced models such as those from OpenAI, Anthropic and DeepMind are achieving previously unthinkable results: in 2025, AI systems such as Gemini (DeepMind) and OpenAI's models achieved gold-medal results at the International Mathematical Olympiad and are ranking at the top of programming competitions.
Remarkably, these milestones had not been expected until the end of the decade, which confirms the pace of progress. The impact of these technologies is also evident in the words of experts: Eric Schmidt (formerly of Google) observes that "a significant percentage of routine code is already written by AI systems", and even Mark Zuckerberg predicts that within 12–18 months most code could be generated autonomously by artificial intelligence.
In summary, all data converges on a single point: automatic programming is quickly becoming a concrete reality.
The introduction of self-programming AI into the development cycle raises several critical issues. First of all, it is necessary to guarantee the quality and security of the generated code: an AI could make logical errors or introduce vulnerabilities that are difficult to detect. According to AgendaDigitale, it is essential to ensure that the software produced meets the requirements and expected behaviours; otherwise, "dangerous code created by the machine" could cause serious failures.
Secondly, there is the problem of excessive dependence. If developers rely too heavily on AI tools, they risk losing creative problem-solving skills.
Finally, legal and ethical aspects must be addressed: who is responsible for an error in the generated code? What copyright constraints apply to software written by an AI? Emerging regulations (e.g. the European AI Act) require additional governance, audit and security mechanisms. The main open challenges can be summarised as follows:
Code reliability: Automatic verification, testing and validation of AI output.
Security and alignment: Prevent the emergence of malicious behaviour in the generated code and protect systems from automated attacks.
Employment impact: Traditional junior developer roles will be redefined, requiring new skills in AI supervision and engineering.
Legal responsibility: Define who owns and guarantees the code created by the AI.
Advanced models (such as Anthropic's Claude) integrate feedback cycles based on the real execution of the code: they generate program fragments, run them and analyse the result (works / does not work), iteratively improving the quality of the produced code.
This "test-time feedback" approach can also be adopted in traditional development pipelines. At the same time, methodologies such as prompt engineering emerge: the human role moves from typing code to defining clear and precise instructions for AI, translating the needs of the project into effective prompts
Companies will have to train specialists in this field and integrate AI into their DevOps tools, so that software continues to be written, executed, tested and monitored with minimal manual intervention.
At the organisational level, best practices call for a "security first" mindset: policies for reviewing AI-generated code, automation of security tests and continuous audits of the generated results. In summary, the solution is to combine intelligent automation with human supervision, implementing hybrid development pipelines in which the AI handles repetitive tasks while humans ensure the final quality.
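As an illustration of such a hybrid, security-first pipeline, the sketch below shows a simple merge gate in Python: it runs the test suite and a static security scan, and blocks the change for human review if either fails. The choice of pytest and Bandit, and the src directory, are assumptions made for the example; any equivalent tools fit the same pattern.

```python
import subprocess
import sys


def run_step(name: str, cmd: list[str]) -> bool:
    """Run one automated check and report whether it passed."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    print(f"[{name}] exit code: {proc.returncode}")
    if proc.returncode != 0:
        print(proc.stdout)
        print(proc.stderr)
    return proc.returncode == 0


def main(src_dir: str = "src") -> int:
    # Functional tests first, then a static security scan of the source tree.
    steps = [
        ("unit tests", ["pytest", "-q"]),
        ("security scan", ["bandit", "-r", src_dir, "-q"]),
    ]
    for name, cmd in steps:
        if not run_step(name, cmd):
            print(f"Gate failed at step '{name}': blocking merge, human review required.")
            return 1
    print("All automated checks passed; the change is ready for human review and merge.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

A script like this can be wired into any CI system, so that AI-generated changes only reach a human reviewer after the automated functional and security checks have passed.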
Whether you’re an individual creator, a startup founder, or an enterprise leader, Hypereum is here to build with you.