

Researchers from MIT, Northeastern University, and Meta have surfaced a subtle but important security gap in modern AI systems: models can follow sentence form even when the words make no sense. As reported by Ars Technica, the team showed that prompts built with nonsense tokens but familiar grammatical patterns can still elicit valid answers. In one example that mimicked the structure of “Where is Paris located?” a garbled, similarly shaped query produced the answer “France.” For small and midsize businesses deploying AI assistants, that behavior creates a fresh prompt injection pathway that doesn’t rely on obvious “jailbreaking” language and may slip past keyword-based guardrails. The researchers plan to present their work at NeurIPS later this month.
The study suggests large language models don’t just learn meaning; they also internalize common sentence patterns tied to specific answer types. When those patterns are strong, the model can “shortcut” to a likely response using grammar cues alone. The team tested this by preserving the grammatical shape of standard questions but swapping in nonsensical terms. Despite the gibberish, models often returned plausible answers aligned with the original question’s intent.
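To make that test concrete, here is a minimal sketch of a structure-preserving probe, written against the OpenAI Python client. The model name, the nonsense-word generator, and the question frame are illustrative assumptions, not the researchers' exact setup; the idea is simply to hold the grammatical shape of a question constant, swap the content word for gibberish, and compare the two answers.

```python
# Minimal sketch of a structure-preserving probe (illustrative, not the paper's method).
# It keeps a question's grammatical frame ("Where is ___ located?") and swaps the
# content word for a nonsense token, then compares the model's two answers.
import random
import string

from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def nonsense_token(length: int = 6) -> str:
    """Build a random lowercase string to stand in for a real content word."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))


def ask(question: str) -> str:
    """Send a single question to the model and return its reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; use whatever your stack runs
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    frame = "Where is {} located?"               # grammatical shape stays constant
    real = frame.format("Paris")                 # semantically meaningful version
    scrambled = frame.format(nonsense_token())   # same shape, nonsense content word

    print("real     :", ask(real))
    print("scrambled:", ask(scrambled))
    # If the scrambled probe still returns a confident place name, the model is
    # leaning on sentence shape rather than meaning -- the behavior the study describes.
```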
Crucially, the paper indicates this pattern matching can sometimes override semantic understanding at the edges, which helps explain why certain jailbreak and prompt injection strategies work. The authors also note that detailed training data for commercial models isn’t public, so parts of the analysis for production systems are necessarily inferential. Still, the behavior appears consistent enough to matter for real-world use. In other words, a bot can look safe on keyword filters but still be steered by structure—an attack surface most teams aren’t testing today.
If you use AI for customer support, sales qualification, knowledge lookups, or internal workflows, your risk model can't just focus on obvious cues like banned terms. This research highlights a quieter failure mode: a user can frame a request with harmless vocabulary but a telltale shape, and the model may comply. That opens the door to content policy bypasses, unintended tool calls, and data exposure, even when your policies look solid on paper.
Consider common scenarios: a support bot nudged into off-policy replies, a sales assistant triggering tool calls it was never meant to make, a knowledge bot surfacing records the requester shouldn't see. In each case the request can read as perfectly benign word by word.
The business costs are concrete: brand trust hits from bad responses, regulatory exposure from data leaks, and painful cleanup after rogue actions. Because this vector relies on structure more than words, basic keyword blocklists won’t catch it, and many teams don’t red-team for grammar-based attacks. That’s why this isn’t just an academic finding—it’s a practical security and reliability issue for any AI-backed process.
Look for three pressure points in your stack: input filters that screen only for risky vocabulary, models that lean on sentence structure as much as meaning, and downstream tool calls that execute without an independent authorization check.
Teams often rely on safety prompts and content filters. Those help, but the research implies they aren’t enough if the model is over-weighting syntax. You need layers that interrogate intent and verify that the requested action is allowed, not just that the words look harmless.
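As one possible shape for such a layer, the sketch below classifies the underlying intent of a request with a separate, low-temperature model call and checks it against a short allowlist before the assistant is permitted to respond. The intent labels, prompt wording, and model name are assumptions, and an LLM-based classifier is itself imperfect, so treat this as one layer among several rather than a complete guardrail.

```python
# Sketch of an intent-verification layer (illustrative labels, prompt, and model name).
# The check asks "what is this request trying to do?" instead of scanning for banned words.
from openai import OpenAI

client = OpenAI()

ALLOWED_INTENTS = {"order_status", "product_question", "shipping_policy"}


def classify_intent(user_text: str) -> str:
    """Ask a separate model call to label the request's underlying intent."""
    prompt = (
        "Classify the user's underlying intent as exactly one of "
        f"{sorted(ALLOWED_INTENTS)} or 'other'. Reply with the label only.\n\n"
        f"User message: {user_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()


def guarded_reply(user_text: str) -> str:
    """Only hand off to the normal assistant flow when the intent is allowed."""
    intent = classify_intent(user_text)
    if intent not in ALLOWED_INTENTS:
        # Structure-based probes tend to land here: the words look harmless,
        # but the classified intent is not something this bot should serve.
        return "I can help with orders, products, and shipping. Could you rephrase?"
    return f"(handing off to the assistant flow for intent: {intent})"
```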
You don’t need a research lab to lower your risk. Start with these moves: red-team your assistants with grammar-preserving probes rather than only known jailbreak phrases; add an intent check before any sensitive action; require approval for tool calls that touch customer data or money; and log prompts and responses so you can audit for structure-based steering after the fact.
On the platform side, use the safety features you already have: content moderation endpoints, AI content safety APIs, and conversational guardrails in your chatbot builder. Pair them with business logic—allowlists, role checks, and pre-execution validators—so a clever sentence form can’t override your policies.
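Here is a minimal sketch of that business-logic layer, assuming a simple role-to-tool allowlist and per-tool argument requirements; the tool names, roles, and thresholds are illustrative. The point is that the model can propose whatever a cleverly shaped prompt steers it toward, but nothing runs unless the validator agrees.

```python
# Sketch of a pre-execution validator with a role-based tool allowlist (illustrative names).
from dataclasses import dataclass

ROLE_TOOL_ALLOWLIST = {
    "customer": {"lookup_order", "track_shipment"},
    "support_agent": {"lookup_order", "track_shipment", "issue_refund"},
}

REQUIRED_ARGS = {
    "lookup_order": {"order_id"},
    "track_shipment": {"order_id"},
    "issue_refund": {"order_id", "amount"},
}


@dataclass
class ToolCall:
    name: str
    args: dict


def validate_tool_call(call: ToolCall, user_role: str) -> tuple[bool, str]:
    """Return (allowed, reason). Runs before any model-proposed tool call executes."""
    allowed_tools = ROLE_TOOL_ALLOWLIST.get(user_role, set())
    if call.name not in allowed_tools:
        return False, f"role '{user_role}' may not call '{call.name}'"
    missing = REQUIRED_ARGS.get(call.name, set()) - call.args.keys()
    if missing:
        return False, f"missing required arguments: {sorted(missing)}"
    if call.name == "issue_refund" and call.args.get("amount", 0) > 100:
        return False, "refunds over $100 need human approval"
    return True, "ok"


# Even if a structure-based prompt convinces the model to propose a refund,
# the validator blocks it for an ordinary customer session.
ok, reason = validate_tool_call(
    ToolCall("issue_refund", {"order_id": "A123", "amount": 50}), "customer"
)
print(ok, reason)  # False role 'customer' may not call 'issue_refund'
```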
Expect more research and tooling that looks beyond keywords to the shape of language itself. Structure-aware detectors, stronger intent verification, and tighter links between authorization policies and LLM outputs are all on the horizon. For now, assume attackers will iterate on grammar-based probes the same way they iterate on jailbreak keywords. If you build tests, logs, and approvals around that assumption, you’ll be ahead of most operators.
For more on the study and examples of the behavior, see Ars Technica’s coverage.
Curious how this applies to your stack? We help SMBs design guardrails, red-team test suites, and safe AI automations that scale. Want to stay ahead of automation trends? StratusAI keeps your business on the cutting edge. Learn more →