arXiv:2605.05403v1 Announce Type: new Abstract: This position paper argues that sycophancy in LLMs is a boundary failure between social alignment and epistemic integrity. Existing work often operationalizes sycophancy…
arXiv:2605.05329v1 Announce Type: new Abstract: Safety policies define what constitutes safe and unsafe AI outputs, guiding data annotation and model development. However, annotation disagreement is pervasive and can…
arXiv:2605.05427v1 Announce Type: new Abstract: As Large Language Models (LLMs) are integrated into global software systems, ensuring equitable safety guardrails is a critical requirement. Current fairness evaluations…
arXiv:2605.05360v1 Announce Type: new Abstract: Given two GNNs that output node embeddings, how can we determine if they were trained independently? An adversary could have trained one GNN specifically to mimic the…
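The abstract cuts off before the paper's actual test, but the general problem is concrete enough to sketch: one generic way to compare two node-embedding matrices is linear centered kernel alignment (CKA). The sketch below is an illustrative baseline under that assumption, not the paper's method; the toy data and the mimic construction are made up.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two node-embedding matrices (nodes x dims).

    Scores near 1.0 suggest the embeddings share near-identical geometry;
    independently trained models typically score lower.
    """
    # Center each embedding matrix column-wise.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Toy check: embeddings of the same 100 nodes from two hypothetical GNNs.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(100, 64))
emb_b = emb_a @ rng.normal(size=(64, 64))          # a mimic: linear map of emb_a
print(linear_cka(emb_a, emb_b))                     # high -> dependence suspected
print(linear_cka(emb_a, rng.normal(size=(100, 64))))  # low -> plausibly independent
```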
arXiv:2605.05415v1 Announce Type: new Abstract: Large language models (LLMs) remain vulnerable to adversarial prompting despite advances in alignment and safety, often exhibiting harmful behaviors under novel attack…
arXiv:2605.05534v1 Announce Type: new Abstract: Adversarial learning and the robustness of Graph Neural Networks (GNNs) are topics of widespread interest in the machine learning community, as documented by the number of…
arXiv:2605.05220v1 Announce Type: new Abstract: Steering intermediate representations has emerged as a powerful strategy for controlling generative models, particularly in post-deployment alignment and safety settings.…
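Steering intermediate representations is a well-established technique in its own right, so a minimal sketch may help readers unfamiliar with it: the snippet adds a fixed steering vector to one layer's hidden states via a PyTorch forward hook. The module path, layer index, and vector dimension are placeholder assumptions; the abstract does not specify the paper's steering scheme.

```python
import torch

def make_steering_hook(direction: torch.Tensor, alpha: float = 4.0):
    """Return a forward hook that shifts hidden states along `direction`."""
    unit = direction / direction.norm()

    def hook(module, inputs, output):
        # Many transformer blocks return a tuple; steer the hidden states.
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * unit.to(hidden.dtype)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered

    return hook

# Hypothetical usage: `model.layers[12]` and the 4096-dim vector are
# placeholders, not a specific model's real module path.
# direction = torch.randn(4096)
# handle = model.layers[12].register_forward_hook(make_steering_hook(direction))
# ... run generation with steering active ...
# handle.remove()  # restore the unsteered model
```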
arXiv:2605.05524v1 Announce Type: new Abstract: Causal representation learning (CRL) seeks to recover latent variables with identifiability guarantees, typically up to permutation and component-wise reparameterization…
arXiv:2605.05221v1 Announce Type: new Abstract: Classical representation systems such as Fourier series, wavelets, and fixed dictionaries provide analytically tractable basis expansions, but they are not intrinsically…
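For context on the fixed bases the abstract contrasts against, a minimal sketch: a truncated Fourier expansion reconstructs a signal from analytically fixed basis functions, with no data-adaptive component at all. The square-wave example below is standard textbook material, not drawn from the paper.

```python
import numpy as np

# Reconstruct a square wave from its first K Fourier harmonics.
K = 15
t = np.linspace(0.0, 1.0, 1000)
signal = np.sign(np.sin(2 * np.pi * t))  # target: square wave

recon = np.zeros_like(t)
for k in range(1, K + 1, 2):  # a square wave has only odd harmonics
    # Analytic Fourier coefficient of the square wave: 4 / (pi * k).
    recon += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)

# Error stays large near the jumps (Gibbs phenomenon) no matter how
# many fixed basis terms are added.
print("max abs error:", np.max(np.abs(recon - signal)))
```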
arXiv:2605.05653v1 Announce Type: new Abstract: Mechanistic interpretability has revealed how concepts are encoded in large language models (LLMs), but emotional content remains poorly understood at the mechanistic…
arXiv:2605.05950v1 Announce Type: new Abstract: The increasing prevalence of Large Language Models (LLMs) in content creation has made distinguishing human-written textual content from LLM-generated counterparts a…
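The abstract cuts off before the method, but as a hedged illustration of the general detection setup: a common baseline scores text by its average token log-likelihood under a reference model, flagging unusually predictable text as machine-generated. `token_logprobs` below is a hypothetical stand-in for whatever scorer a real detector would use.

```python
from typing import Callable, List

def detect_llm_text(
    text: str,
    token_logprobs: Callable[[str], List[float]],
    threshold: float = -3.0,
) -> bool:
    """Flag text as likely LLM-generated if its mean token log-prob
    under a reference model exceeds `threshold`.

    `token_logprobs` is a hypothetical scorer returning one natural-log
    probability per token; real detectors use richer statistics
    (curvature, rank, burstiness), not just the mean.
    """
    logprobs = token_logprobs(text)
    mean_lp = sum(logprobs) / max(len(logprobs), 1)
    # LLM output tends to be high-likelihood (closer to 0) under the
    # reference model; very negative means surprising, human-like text.
    return mean_lp > threshold

# Demo with canned scores instead of a real model:
fake_scorer = lambda s: [-1.2, -0.8, -2.1, -1.5]
print(detect_llm_text("some candidate text", fake_scorer))  # True
```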
arXiv:2605.05662v1 Announce Type: new Abstract: Current LLM safety benchmarks are predominantly English-centric and often rely on translation, failing to capture country-specific harms. Moreover, they rarely evaluate a…
arXiv:2605.06076v1 Announce Type: new Abstract: The "Locate-then-Update" paradigm has become a predominant approach in the post-training of large language models (LLMs), identifying critical components via mechanistic…
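To make the "Locate-then-Update" paradigm concrete, here is a deliberately simplified sketch: rank parameter tensors by gradient norm on a probe loss, then fine-tune only the top-k while freezing the rest. Real locate-then-update methods use causal tracing or other mechanistic attributions rather than raw gradient norms; everything below is an illustrative placeholder.

```python
import torch

def locate_then_update(model, loss_fn, batch, k=1, lr=1e-4, steps=10):
    """Toy locate-then-update loop.

    Locate: score each parameter tensor by its gradient norm on a probe
    loss. Update: fine-tune only the top-k tensors, freezing the rest.
    """
    model.zero_grad()
    loss_fn(model, batch).backward()
    ranked = sorted(
        model.named_parameters(),
        key=lambda np_: np_[1].grad.norm().item() if np_[1].grad is not None else 0.0,
        reverse=True,
    )
    targets = {name for name, _ in ranked[:k]}
    for name, p in model.named_parameters():
        p.requires_grad_(name in targets)
    # Update only the located parameters for a few steps.
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model, batch).backward()
        opt.step()
```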
arXiv:2605.06327v1 Announce Type: new Abstract: Safety benchmarks are routinely treated as evidence about how a language model will behave once deployed, but this inference is fragile if behavior depends on whether a…
arXiv:2605.05630v1 Announce Type: new Abstract: Hidden malicious intent in multi-turn dialogue poses a growing threat to deployed large language models (LLMs). Rather than exposing a harmful objective in a single…
This paper proposes a low-latency fraud-detection layer for spotting adversarial interaction patterns in LLM agents. It matters because agent defenses need to operate in real time, not just at the prompt-filtering stage.
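The summary doesn't specify the detector, but low-latency layers of this kind are often built as streaming scores over interaction events. The sketch below uses a simple exponentially weighted anomaly score as an illustrative placeholder, not the paper's design; a real fraud layer would score richer features (tool-call patterns, prompt similarity) than a single rate.

```python
from dataclasses import dataclass

@dataclass
class StreamingAnomalyScorer:
    """Exponentially weighted moving estimate of an event-rate feature."""
    decay: float = 0.9   # weight on history; smaller reacts faster
    mean: float = 0.0
    var: float = 1.0

    def score(self, x: float) -> float:
        """Return a z-score for event feature x, then update the state."""
        z = (x - self.mean) / (self.var ** 0.5 + 1e-8)
        delta = x - self.mean
        self.mean += (1 - self.decay) * delta
        self.var = self.decay * (self.var + (1 - self.decay) * delta * delta)
        return abs(z)

scorer = StreamingAnomalyScorer()
for rate in [1.0, 1.1, 0.9, 1.0, 9.0]:  # sudden burst on the last event
    z = scorer.score(rate)
print("burst z-score:", round(z, 2))     # large -> flag for review
```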
NEURON combines SNOMED CT ontology grounding with machine learning to make clinical predictions more explainable. It is relevant for builders working on trustworthy medical AI, though the contribution appears narrower…
This paper reframes AI safety around irreversibility, arguing that low-friction deployment changes the control problem more than raw capability does. It should interest safety researchers looking for a systems-level…
This paper proposes a mediator-agent framework for human-vehicle collaboration that models both driver state and vehicle intent. It is relevant to safety work because it targets coordination failures caused by poor…
This paper studies how a jailbreak can propagate across multi-agent systems and proposes a foresight-guided defense to stop the spread early. It matters for builders shipping agent swarms, where one compromised agent…
This position paper argues that multi-agent safety depends more on interaction topology than on the alignment or scale of the underlying models. For builders of agentic systems, it reframes safety as a systems-design…
This paper offers a geometric explanation for emergent misalignment in fine-tuned LLMs, framing it as a feature-superposition problem rather than a mysterious safety failure. It should be useful for researchers studying…
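Feature superposition itself is easy to demonstrate in a toy setting: the sketch below packs more sparse features than dimensions into a linear code and measures the interference between feature directions. This is the standard toy-model setup, not anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_dims = 20, 5  # more features than dimensions forces overlap

# Random unit directions: each feature gets a vector in the small space.
W = rng.normal(size=(n_features, n_dims))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# Interference: off-diagonal overlaps of W W^T. With an orthogonal
# (non-superposed) code these would all be zero; here they cannot be.
gram = W @ W.T
off_diag = gram[~np.eye(n_features, dtype=bool)]
print("mean |interference|:", np.abs(off_diag).mean())
```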
A startup is pitching a mechanistic-interpretability tool for inspecting and steering LLM internals during training. If the claims hold up, it could give researchers a more direct way to debug model behavior and shape…