In this interview, we explore the technical frontiers of AI security with Quentin Cozette, a specialist in cybersecurity and digital governance whose work bridges low-level hardware vulnerabilities and high-level regulatory challenges. Drawing on his experience in information systems security, risk management, and emerging AI regulation, Quentin offers a rare perspective that combines technical depth with governance insight. His contributions to France’s national cybersecurity ecosystem, including his involvement in ANSSI’s Mon Aide Cyber programme and his academic research on blockchain regulation, position him at the crossroads of policy, infrastructure, and human-centric design. This conversation sheds light on the often overlooked material foundations of AI, the systemic risks hidden in hardware and firmware, and the urgent need for governance models that reflect the realities of contemporary technological ecosystems.
1. Hardware Blind Spots in AI Security
We often talk about AI security as a problem of data, algorithms, and models, but much less is said about the physical components that power AI systems, from chips to communication modules, which are frequently outside the scope of current cybersecurity regulations in Europe.
Question: Where do you see the most critical blind spots in today’s hardware layer for AI-enabled devices, and why are these blind spots so dangerous from a security and privacy perspective?
The absence of a standard naming system for AI chips causes confusion and points to a deeper structural issue. Apple calls these components the Neural Engine, Google uses TPU, while Qualcomm, Intel, and AMD all speak of an NPU. This naming fragmentation reflects a technological one, since Cloud AI, Edge AI, and On-device AI have fundamentally different risk profiles. The problem is not cosmetic: this opacity prevents users from understanding where their data is processed, at odds with the data-protection-by-design requirement of Article 25 of the GDPR. I regularly observe organizations that believe they are running local AI when in reality their architecture delegates processing to the cloud without their realizing it.
Edge AI systems aggravate this situation by alternating local processing and cloud delegation. The On-device AI architecture offers relative transparency, but few manufacturers highlight it.
RAM is a structural problem because, although essential for fast data processing, it lacks native hardware protection. The RAMBO attack demonstrated in 2024 that data in memory can be extracted via radio signals emitted by memory buses, even on air-gapped networks. Arm developed the Confidential Compute Architecture (CCA) to provide hardware isolation, but this approach carries performance overheads ranging from 2% to 25%, is tied to the Armv9 architecture, and lacks a unified standard covering other manufacturers.
Intel offers SGX, AMD offers SEV, and Qualcomm implements its own mechanisms, so there is no cross-vendor standard. The Cyber Resilience Act, applicable at the end of 2027, requires a general security assessment of products but does not mandate a specific audit of AI chips or hardware accelerators.
This regulatory gap opens a major breach: a compromised AI chip can legally be marketed as long as the finished product meets the CRA requirements at the product level. The CRA requires Secure Boot at startup but no continuous integrity check of the AI model during execution, leaving a protection gap that could allow compromised firmware to silently replace the AI model. For autonomous Edge AI systems, this risk is structural because they do not communicate regularly with external validation services.
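The missing runtime check can be illustrated concretely. The sketch below is illustrative only (the file name `model.bin` and the source of the reference digest are assumptions, not part of the CRA or any standard): it re-hashes a model artifact against a known-good reference, the kind of continuous verification that Secure Boot alone does not provide once the system is running.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_digest: str) -> bool:
    """Re-hash the model artifact and compare it to the trusted reference."""
    return file_digest(path) == expected_digest

# Toy scenario: record the digest of a "model", then tamper with the file.
model = Path("model.bin")
model.write_bytes(b"weights-v1")
reference = file_digest(model)
assert verify_model(model, reference)       # intact model passes
model.write_bytes(b"weights-tampered")
assert not verify_model(model, reference)   # silent replacement is detected
```

In practice, the reference digest would itself need to be anchored in hardware (a signed manifest verified by a root of trust), otherwise the check can be tampered with along with the model.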
Isolating hardware accelerators is also problematic because AI accelerators are typically not integrated into standard system isolation mechanisms such as TrustZone. The supply chain remains vulnerable to counterfeits and backdoors introduced at the factory, as shown by the Infineon TPM flaw disclosed in 2017 (CVE-2017-15361): firmware updates were issued, but many legacy devices have still not received them. AI firmware updates lack standardization, and there is no mandatory, standardized monitoring to detect AI anomalies in real time, although PSA Certified and TPM standards offer fragmented initiatives.
Without mandatory audits of AI chips and standardization of Confidential Computing, we remain in an uncertain environment where manufacturers operate in secrecy.
2. Side-Channel Attacks and Hidden Data Leaks
Even when software is properly encrypted and protected, information can still be inferred through indirect physical signals such as timing, energy consumption, or electromagnetic emissions, something known as side-channel leakage. This type of attack is almost invisible to end users and most organisations.
Question: In the context of AI systems, especially those connected to LLMs or brain-computer and emotion-detection technologies, how realistic is the risk of side-channel attacks, and what types of data could potentially be exposed?
A side-channel attack enables the deduction of sensitive information by observing secondary signals, such as response time, energy consumption, or micro-variations in sensor readings. Remotely accessible LLMs (via API or chat) are already at risk: the Whisper Leak attacks showed this by classifying conversation topics through analysis of encrypted network traffic patterns. This risk is realistic in the short term, especially for actors with privileged network access, such as ISPs.
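The timing channel behind such attacks can be shown with a toy example. This is a minimal sketch, unrelated to Whisper Leak's actual method: an early-exit comparison whose iteration count (a stand-in for elapsed time) reveals how much of a secret a guess matches, followed by the standard constant-time fix.

```python
import hmac

def leaky_equals(secret: str, guess: str) -> tuple[bool, int]:
    """Early-exit comparison: the iteration count (a proxy for elapsed
    time) depends on the length of the matching prefix."""
    steps = 0
    for a, b in zip(secret, guess):
        steps += 1
        if a != b:
            return False, steps
    return len(secret) == len(guess), steps

SECRET = "k9f3"  # illustrative secret, not from any real system
_, t_wrong = leaky_equals(SECRET, "zzzz")  # mismatch on char 1 -> 1 step
_, t_close = leaky_equals(SECRET, "k9zz")  # two correct chars -> 3 steps
# t_close > t_wrong: an observer measuring response time learns prefix
# information without ever seeing the secret -- the essence of a side channel.

# Constant-time fix: hmac.compare_digest inspects every byte regardless.
ok = hmac.compare_digest(SECRET.encode(), b"k9f3")
```

The same principle scales up: Whisper Leak observes packet sizes and inter-arrival times of streamed LLM responses rather than loop iterations, but the leak mechanism is identical in spirit.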
Existing mitigations, such as temporal obfuscation and padding, remain fragmentary and depend on suppliers' goodwill. The exposed data involves sensitive categories and intimate preferences. The risk is emerging for BCI and structural for ED, especially in the short term for online facial recognition systems.
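Padding, the second mitigation mentioned above, can be sketched in a few lines. The bucket size of 256 bytes is an arbitrary assumption for illustration: lengths are rounded up to a bucket boundary so that an eavesdropper only learns which bucket a response falls in, not its exact size.

```python
import math

BUCKET = 256  # assumed bucket size in bytes (illustrative choice)

def padded_length(n: int) -> int:
    """Round a message length up to the next bucket boundary so that
    messages of similar size become indistinguishable on the wire."""
    return max(BUCKET, math.ceil(n / BUCKET) * BUCKET)

# Responses of 10, 100, and 250 bytes all appear as 256 bytes to an
# observer of the encrypted channel; only the bucket is leaked.
assert padded_length(10) == padded_length(250) == 256
assert padded_length(300) == 512
```

The trade-off is bandwidth: larger buckets leak less but waste more, which is one reason suppliers deploy such measures inconsistently.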
This applies even though the equipment is not yet standardized. Attackers exploit micro-variations in sensor signals and in the timing of video and other data flows. The recovered data are psychological in nature: emotional reactions and motor intentions. These can be diverted through adversarial attacks on BCI-AI loops, enabling manipulation of user intentions.
The opacity of manufacturers regarding these security measures is non-trivial; recent academic research identifies this lack of transparency as a major ethical problem. The convergence of LLM, BCI, and ED technologies into integrated ecosystems amplifies both the attack vectors and the data exposed.
These connected systems are still open to adversarial attacks on closed BCI-AI loops. Changes in EEG signals can redirect what users intend to do, which could lead to large-scale cognitive manipulation. In theory, this could create a new kind of “COGINT” where mental states are automatically inferred on a large scale and without consent.
API: Application Programming Interface
ISP: Internet Service Provider
BCI: Brain-Computer Interface
ED: Emotion Detection/Recognition System
EEG: Electroencephalography
COGINT: Cognitive Intelligence
3. AI, LLMs and the Problem of “Silent Exposure”
Large Language Models like ChatGPT are becoming embedded into devices, operating systems, toys, vehicles, and wellness tools. Often, users don’t even realize when or how their data is being processed, stored, or transferred.
Question: From a cybersecurity standpoint, what are the main risks of connecting LLMs to physical devices and environments, especially when the hardware layer itself may have vulnerabilities that neither users nor developers can see?
The problem is not integrating an LLM as such, but adding AI to a device that is vulnerable from the start. Using old or low-cost components increases the risk: a faulty sensor could mislead the LLM, with serious consequences.
Insufficient time for audits is a risk, and a lack of cooperation between manufacturers and researchers further weakens security. Even today's AI tools do not make these vulnerabilities easy to detect.
Between 2021 and 2024, the LLM-HyPZ study, based on the analysis of 114,836 vulnerabilities (CVEs), identified more than 1,742 hardware vulnerabilities or flaws in embedded equipment. The study highlighted persistent vulnerabilities without a fix and the inability of some chips to support updates.
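The kind of triage such a study implies can be sketched crudely. Everything below is illustrative: the keyword list and the `CVE-XXXX-*` records are invented placeholders, and the actual LLM-HyPZ pipeline uses far more sophisticated classification than substring matching.

```python
# Hypothetical keyword triage over CVE descriptions (illustrative only).
HARDWARE_TERMS = ("firmware", "tpm", "chipset", "dma", "bootloader")

def looks_hardware_related(description: str) -> bool:
    """Flag a CVE description mentioning a hardware-related term."""
    d = description.lower()
    return any(term in d for term in HARDWARE_TERMS)

cves = [
    {"id": "CVE-2017-15361", "desc": "Infineon TPM RSA key generation flaw"},
    {"id": "CVE-XXXX-0001", "desc": "SQL injection in a web admin panel"},  # placeholder ID
    {"id": "CVE-XXXX-0002", "desc": "Unsigned firmware update accepted"},   # placeholder ID
]
hardware_ids = [c["id"] for c in cves if looks_hardware_related(c["desc"])]
```

Run over a full CVE corpus, even a filter this naive surfaces the hardware-rooted subset that product-level certification tends to overlook.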
In 2017, the Infineon TPM chip (CVE-2017-15361) showed this problem. Although firmware updates were offered, many devices never received patches and remained vulnerable despite their widespread use.
User confidence in these LLM-driven devices is itself a risk factor, because users cannot perceive how the AI interacts with a potentially vulnerable device. Granting an LLM privileged access to physical or system functions in a device that is vulnerable by design greatly increases the risk of exploitation.
The user is then exposed to the risk of data exfiltration or the compromise of sensitive physical functions. A malicious actor could compromise the LLM by exploiting its environment and taking control remotely via a vulnerable device, a hardware flaw, or a compromised sensor.
4. The Absence of AI Device Recall and Responsibility
When a car part or a medical device is defective, there are established recall mechanisms. But when an AI-enabled device is found to be insecure, emotionally manipulative, or psychologically harmful, there is no clear, harmonized process to remove or correct it.
Question: Do you believe we need a legal and technical equivalent of a “recall” system for AI-connected devices and platforms, and who should be legally responsible when the risk comes from a mix of hardware, data, and AI behavior?
My answer goes in the direction of the EU's AI Act, even if I think some things are still missing. A "recall" system for devices and platforms seems necessary to me; however, for it to be effective, it would require a traceability system in high-risk devices or systems so that the origin of a problem can be better understood.
A material risk is technically easier to identify, while a defect on a platform is more difficult to detect because it can involve emotional manipulation, privacy violations, or the generation of inappropriate content.
As for who is responsible, it becomes very complicated because there is often no single responsible actor: a manufacturer could be just as liable as an AI provider, for example. Responsibility should therefore not rest with a single party but be distributed according to the origin of the risk.
For a high-risk system, such as a health or safety system, the AI provider usually bears primary responsibility; in some cases, several actors may be jointly responsible.
The recall should be proportionate to the level of risk; a minor risk should result in a warning, a moderate risk in a temporary suspension, and a serious or high risk in immediate withdrawal. A compliance obligation should be mandatory and validated by an external body.
In any case, and regardless of the risk level, companies should be required to publish a notification on their website and inform users directly of the failure, as the GDPR has required for data breaches since 2016. Even with rules in place, this will not only be a question of law but also of collective responsibility. I also think that case law, through concrete cases, will shed more light on these issues in the coming years, as will the organizations and institutions preparing for them.
5. What Heavy AI Users Should Know (Individual Risk Awareness)
Many people now use AI tools daily for emotional support, decision-making, therapy-like conversations, or even companionship. Yet most of them have no understanding of the technical and psychological risks involved in these interactions.
Question: For heavy users of AI systems, especially LLMs and emotionally responsive tools, what are the three most important risks they should be aware of in terms of privacy, manipulation, and cognitive security? And what practical steps can they take to protect themselves?
The focus on time spent with an AI is a misconception: time merely amplifies vulnerabilities, whereas it is mainly the information shared that makes the difference. It must be understood that information disclosed in conversations can be reviewed by a human or reused to train other models.
When we entrust personal, emotional, behavioral, or sensitive information, we become vulnerable to targeted advertising and, in the event of a cyberattack, to identification. The point is not to advise against using AI but to control the sensitive data we expose. Anonymizing information is one technique that helps preserve our confidentiality.
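Anonymization can start with something as simple as pattern-based redaction before text ever leaves the machine. A minimal sketch, with the caveat that these two regexes are illustrative and far from exhaustive; robust PII detection requires much more than pattern matching.

```python
import re

# Two common identifier patterns, redacted before text is sent to an LLM.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d .-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

msg = "Contact me at jane.doe@example.org or +33 6 12 34 56 78 about my diagnosis."
clean = redact(msg)  # "Contact me at <email> or <phone> about my diagnosis."
```

Even this crude filter removes the identifiers that make a leaked transcript directly attributable, which is the point of the advice above: share the question, not the identity behind it.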
It is not the AI that manipulates us directly; it is our tendency to trust it that is a danger. This persuasion is reinforced by the convincing answers provided, which confirm our feeling of being validated by AI.
LLMs seek, above all, the user’s approval rather than the veracity of the information, driven by validation bias. This illusion of reciprocity leads us to believe that LLMs can simulate empathy or comfort, thereby giving them more credibility.
Before making a decision after a conversation with an LLM, always take the time to submit your ideas to those around you; this can only be beneficial. The risk is not what AI thinks in our place, but what we stop thinking without it.
It is wrong to think that AI makes us think; in reality, it reasons in our place and weakens our thinking. AI erodes critical thinking and intellectual autonomy by making it more difficult to question information. Taking the time to test your ideas by cross-referencing them with other sources, such as trusted websites or books, can only strengthen your judgment. Group conversations are another useful practice.
6. AI Toys, Childhood and Cybersecurity Alarms
Recently, a major consumer-protection report revealed that an AI-powered teddy bear named Kumma provided children with advice about sexual practices, weapons (e.g., knives and matches), and other dangerous instructions—leading to its market suspension.
Question: From a cybersecurity and hardware/firmware perspective, how do toys like these expose children to unique threats compared to typical digital platforms? What additional safeguards should parents, educators and developers of children’s AI devices implement immediately?
Hackers target these toys mainly to use them as entry points into households, not to manipulate children directly. Trend Micro observes that these toys may serve as Trojan horses for physical intrusions, including burglaries.
Recent reports from Bitdefender and Netgear indicate that attacks on connected home devices increased from 10 per day in 2024 to 29 per day in 2025, underscoring the associated risks. These attacks exploit well-documented vulnerabilities like default credentials, inadequate encryption, and outdated or low-quality components.
These weaknesses create several risks. Toys from platforms such as AliExpress are especially concerning when they lack certifications and safety standards. AI features may also inadvertently address sensitive topics such as sexuality or weapons, increasing potential dangers.
These toys, often perceived by children as companions, must be secured by manufacturers, supervised by parents, and made the subject of awareness work by educators. Developers should keep them regularly updated and set limits that adapt to the child's age, for example by taking the child's age into account in each conversation to prevent inappropriate content.
Parents should set a usage period for their children's toys, much as the WHO recommends limiting young children's screen time to one hour per day. Teachers can help by showing children how to spot and avoid risky or inappropriate behavior when they play with these toys.

