The rapid adoption of artificial intelligence across European food systems has prompted regulatory scrutiny under the newly enacted EU Artificial Intelligence Act. A recent scholarly analysis examines how the AI Act’s risk-categorisation system applies to the diverse spectrum of AI applications throughout the agrifood value chain (Val, 2025). This regulatory framework, which came into force in August 2024, establishes tiered requirements based on assessed risks to human health, safety, and fundamental rights, with particular implications for agricultural and food production technologies.
The research identifies three primary risk categories with relevance to agrifood AI systems. Firstly, certain applications fall under prohibited practices, including AI-driven food recommendation systems that employ manipulative techniques or emotion recognition technologies used to evaluate food consumption in workplace or educational settings. While these prohibitions are relatively limited in scope, they signal regulatory attention to consumer autonomy and psychological integrity within food-related contexts.
Manipulative and deceptive AI systems in food recommendation
The prohibition on manipulative AI systems warrants particular attention in the food recommendation context. Article 5(1)(a) of the AI Act prohibits systems that deploy subliminal, purposefully manipulative, or deceptive techniques that materially distort behaviour by impairing informed decision-making. Food recommender systems, which draw on nutritional informatics to generate personalised dietary suggestions from health data and preferences, present multiple vectors for manipulation and deception. Such systems could silently incorporate parameters misaligned with user interests – driven by profit optimisation, commercial partnerships, or ideological positions – whilst appearing to deliver objective nutritional guidance. Manipulative techniques including ‘nudging’, obfuscation, and algorithmic reframing could steer users towards choices that benefit system operators rather than consumer wellbeing. The regulatory challenge lies in distinguishing legitimate personalisation from prohibited manipulation, particularly where algorithms subtly prioritise sponsored products or adjust recommendations to maximise engagement metrics rather than nutritional outcomes.
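A minimal sketch can make this mechanism concrete: a single hidden commercial weight inside a recommender’s scoring function is enough to displace nutritionally better options without any visible change to the interface. The item names, scores, and weights below are illustrative assumptions, drawn neither from the cited study nor from any real system.

```python
# Hypothetical illustration: a hidden commercial weight in a food
# recommender's scoring function. All items, scores, and weights are
# invented for this sketch.
from dataclasses import dataclass

@dataclass
class FoodItem:
    name: str
    nutrition_score: float  # 0..1, fit with the user's health profile
    sponsorship_fee: float  # 0..1, undisclosed commercial parameter

def rank(items: list[FoodItem], commercial_weight: float = 0.0) -> list[FoodItem]:
    """Rank items by nutrition plus an optional hidden commercial bias."""
    return sorted(
        items,
        key=lambda i: i.nutrition_score + commercial_weight * i.sponsorship_fee,
        reverse=True,
    )

catalogue = [
    FoodItem("wholegrain cereal", nutrition_score=0.9, sponsorship_fee=0.0),
    FoodItem("sugary snack bar", nutrition_score=0.4, sponsorship_fee=0.8),
]

print([i.name for i in rank(catalogue)])
# ['wholegrain cereal', 'sugary snack bar']  -- nutrition-led ranking
print([i.name for i in rank(catalogue, commercial_weight=0.7)])
# ['sugary snack bar', 'wholegrain cereal']  -- sponsored item silently promoted
```

The user sees only the final ordering; whether a non-zero commercial weight of this kind amounts to prohibited manipulation or merely undisclosed personalisation is precisely the line Article 5(1)(a) obliges regulators to draw.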
High-risk AI in agricultural machinery
Secondly, and most significantly, the high-risk classification applies extensively to AI safety components integrated into agricultural machinery and equipment. Systems covered by Regulation (EU) No 167/2013 on agricultural and forestry vehicles, as well as the Machinery Regulation (EU) 2023/1230, must comply with stringent requirements including risk management systems, technical documentation, human oversight provisions, and conformity assessments. This encompasses autonomous tractors, harvesting robotics, food-sorting machinery, and various automated agricultural implements that incorporate AI algorithms for collision avoidance or operational safety.
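To give the safety-component concept some shape, the sketch below shows the kind of fail-safe gating and human-oversight hook that the high-risk requirements contemplate for autonomous machinery. The perception outputs, thresholds, and actions are hypothetical assumptions for illustration, not provisions of the Act or of the cited study.

```python
# Hypothetical sketch of a fail-safe gate around an AI perception component
# on autonomous agricultural machinery. Thresholds, confidence handling,
# and actions are illustrative assumptions only.

def safety_gate(obstacle_probability: float,
                model_confidence: float,
                stop_threshold: float = 0.5,
                min_confidence: float = 0.8) -> str:
    """Decide the machine's action for one perception output."""
    if model_confidence < min_confidence:
        # Human-oversight hook: uncertain perception is escalated, not acted on.
        return "STOP_AND_ALERT_OPERATOR"
    if obstacle_probability >= stop_threshold:
        # Fail-safe default: halt when an obstacle is likely.
        return "STOP"
    return "PROCEED"

print(safety_gate(obstacle_probability=0.1, model_confidence=0.95))  # PROCEED
print(safety_gate(obstacle_probability=0.9, model_confidence=0.95))  # STOP
print(safety_gate(obstacle_probability=0.1, model_confidence=0.40))  # STOP_AND_ALERT_OPERATOR
```

Under the high-risk regime, design choices such as these thresholds would need to be justified in the risk management system and technical documentation, with the escalation path embodying the human oversight requirement.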
Thirdly, transparency obligations apply to AI systems that directly interact with natural persons, such as human resource management systems in agrifood enterprises or generative AI employed in food marketing. These systems must clearly inform users about AI involvement, though the practical implications remain largely consistent with cross-sectoral applications.
Critical regulatory gaps
The analysis identifies several potentially problematic omissions within the current regulatory framework. Despite the broad definition of ‘critical infrastructure’ encompassing food businesses engaged in wholesale distribution and large-scale industrial production, AI systems managing such infrastructure are not classified as high-risk under Annex III of the Act. This exclusion appears inconsistent with the treatment of similar infrastructure under Directive (EU) 2022/2557 on the resilience of critical entities and Directive (EU) 2022/2555 on cybersecurity measures, both of which recognise the critical status of food supply chains.
The scope of ‘safety component’ presents interpretative challenges within the agrifood context. While mechanical hazards are clearly addressed, uncertainty surrounds whether non-mechanical risks – particularly food safety hazards such as failure to detect contaminated or spoiled products – fall within the regulatory ambit. An AI-driven fruit-sorting system that fails to identify rotten produce, for instance, poses genuine food safety risks, yet may not constitute a ‘safety component’ as currently defined.
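The fruit-sorting example can be made concrete with a minimal sketch: the acceptance threshold of the sorting model is, in practice, a food-safety parameter, whether or not the system qualifies as a ‘safety component’ in law. The probabilities and thresholds below are illustrative assumptions.

```python
# Hypothetical sketch: the acceptance threshold of an AI fruit-sorting model
# operates as a food-safety parameter. Probabilities and thresholds are
# illustrative assumptions only.

def sort_decision(p_spoiled: float, accept_threshold: float = 0.05) -> str:
    """Pass produce only when the model is confident it is not spoiled."""
    # Fail-safe default: borderline items are diverted for human inspection
    # rather than released into the food chain.
    return "ACCEPT" if p_spoiled <= accept_threshold else "DIVERT_FOR_INSPECTION"

# The same model output leads to opposite outcomes under different settings:
print(sort_decision(p_spoiled=0.2))                        # DIVERT_FOR_INSPECTION
print(sort_decision(p_spoiled=0.2, accept_threshold=0.5))  # ACCEPT (permissive, riskier)
```

If such a system falls outside the ‘safety component’ definition, nothing in the AI Act constrains how permissively that threshold is set, which is exactly the gap the analysis highlights.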
Non-human values and sustainability concerns
Perhaps most notably for the agrifood sector, the human-centric approach of the AI Act provides minimal consideration for environmental, biodiversity, and animal welfare values. These concerns are mentioned only incidentally – through reporting duties for environmental harm, exceptional authorisations for environmental protection, and voluntary codes of conduct referencing environmental sustainability. The Act’s risk-categorisation system does not systematically address impacts on ecosystems, biodiversity, or animal wellbeing, despite the sector’s substantial environmental footprint and the welfare implications of AI-enabled livestock management systems.
This regulatory gap is particularly significant given AI’s potential to influence agricultural practices in contradictory ways. While precision agriculture technologies promise reduced pesticide and water usage, the same systems could theoretically enable intensified chemical applications through cost reductions and removal of human operators from exposure risks. The absence of mandatory environmental risk assessments for agrifood AI systems represents a notable divergence from the sector’s traditional regulatory attention to sustainability and ecological values.
Implications for food security
The study raises concerns regarding potential food security vulnerabilities arising from inadequate regulatory oversight of AI systems managing critical food infrastructure. Given the increasing reliance on AI for logistics, wholesale distribution, and industrial food processing, system failures or security breaches could disrupt food supply chains at scale. The current regulatory framework’s failure to classify such systems as high-risk may leave critical infrastructure exposed to foreseeable risks without the preventive safeguards mandated for other essential services.
The extraterritorial scope of the AI Act, often characterised as the ‘Brussels Effect’, extends these regulatory requirements to providers and deployers outside the European Union whose AI systems affect EU markets. This global reach potentially establishes European standards as de facto international norms for agrifood AI development, though enforcement challenges similar to those encountered with the General Data Protection Regulation may limit practical effectiveness.
Conclusions and future directions
The research concludes that while the AI Act establishes a comprehensive risk-based framework, its application to the agrifood sector reveals several areas requiring further attention. The high-risk classification for AI safety components in agricultural machinery will substantially impact equipment manufacturers and agricultural technology providers. However, the exclusion of AI systems managing critical food infrastructure, ambiguity surrounding non-mechanical safety hazards, and limited consideration of environmental values suggest potential gaps in regulatory coverage.
These findings underscore the need for ongoing scholarly examination and potential regulatory refinement as AI adoption accelerates throughout food systems. Future research should explore whether current provisions adequately address food security risks, clarify the intended scope of safety components within agrifood contexts, and consider mechanisms for incorporating sustainability objectives into AI risk assessments. Achieving optimal regulatory balance will require interdisciplinary collaboration between legal scholars, food scientists, and agricultural technology experts to ensure that innovation proceeds alongside appropriate safeguards for both human and ecological wellbeing.
Dario Dongo
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) http://data.europa.eu/eli/reg/2024/1689/oj
- Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC. Consolidated text: 29/06/2023 http://data.europa.eu/eli/reg/2023/1230/2023-06-29
- Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles. Consolidated text: 27/11/2024 http://data.europa.eu/eli/reg/2013/167/2024-11-27
- Val, I. L. (2025). The EU AI Act and the food system: How the European Union AI Act applies to agrifood. European Journal of Risk Regulation, 1–21. https://doi.org/10.1017/err.2025.10058
Dario Dongo, lawyer and journalist, PhD in international food law, founder of WIISE (FARE - GIFT - Food Times) and Égalité.