
JSON is safe, for now.
Domenico De Palma, AI Specialist R&D Team
12/19/2025


In the LLM Landscape, Token Optimization Is Both a Financial and Technical Priority
The TOON format was proposed as an alternative to JSON for communicating with language models, promising over 50% token savings. In our extensive tests with real data, the results fell short of expectations.
The Promise of the TOON Format
TOON was born from a concrete need to reduce operational costs in LLM-based applications. By eliminating JSON’s verbosity with a more compact syntax, the format promised to halve token consumption while preserving the same amount of information. An appealing idea, especially considering the costs of using advanced model APIs.
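For readers unfamiliar with the format, here is the same small record set in both notations. The TOON rendering is an illustrative sketch based on the format's documented tabular syntax (array length and field names declared once, rows as comma-separated values); the data itself is invented for the example.

```
JSON:
{"users":[{"id":1,"name":"Alice","role":"admin"},{"id":2,"name":"Bob","role":"user"}]}

TOON (illustrative):
users[2]{id,name,role}:
  1,Alice,admin
  2,Bob,user
```

The saving comes from stating the field names once instead of repeating them for every object.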
Testing Methodology
We ran comparative tests designed to recreate real-world use cases for our product, such as multi-turn technical troubleshooting conversations: for example, a customer-support scenario in which the model helps a user fix a malfunctioning laptop. Each response was generated in both formats, and we measured token savings and output validity for every exchange.
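As a rough sketch of the measurement step (not our production harness), the snippet below counts tokens for the same payload serialized as compact JSON and as a TOON-style string. It assumes the tiktoken library, with cl100k_base as a stand-in for the target model's tokenizer; the payload and the TOON rendering are invented for illustration.

```python
# Illustrative token-count comparison between JSON and a TOON-style rendering.
# Assumes `pip install tiktoken`; cl100k_base is a stand-in for the target
# model's tokenizer. The payload and the TOON string are invented examples.
import json

import tiktoken

payload = {
    "tickets": [
        {"id": 101, "device": "laptop", "issue": "no boot", "status": "open"},
        {"id": 102, "device": "laptop", "issue": "overheating", "status": "open"},
    ]
}

# Compact JSON, no extra whitespace.
json_text = json.dumps(payload, separators=(",", ":"))

# TOON-style tabular rendering of the same data (illustrative syntax).
toon_text = (
    "tickets[2]{id,device,issue,status}:\n"
    "  101,laptop,no boot,open\n"
    "  102,laptop,overheating,open"
)

enc = tiktoken.get_encoding("cl100k_base")
json_tokens = len(enc.encode(json_text))
toon_tokens = len(enc.encode(toon_text))

print(f"JSON: {json_tokens} tokens, TOON: {toon_tokens} tokens, "
      f"saving: {1 - toon_tokens / json_tokens:.0%}")
```

The tokenizer choice matters: different models segment punctuation and whitespace differently, so the same pair of strings can yield different relative savings.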
Results: Insufficient Savings and Validity Issues
The data paint a disappointing picture. Average token savings hovered around 10% per conversation, far from the promised 50%. Individual interactions showed reductions ranging from 9% to 35%, still well below expectations.
The most critical issue emerges when analyzing output validity: out of 10 interactions, TOON produced technically valid output only with premium models. Parallel tests with mid-tier models revealed error rates above 40%, with malformed outputs requiring regeneration or corrective parsing.
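That corrective loop can be sketched as a simple validate-and-regenerate wrapper. Both `call_model` and `parse_toon` below are hypothetical placeholders for an LLM client and a TOON parser, not functions from any specific library.

```python
# Minimal sketch of a "regenerate on malformed output" loop.
# `call_model` and `parse_toon` are hypothetical stand-ins for an LLM client
# and a TOON parser; swap in your own implementations.
from typing import Any, Callable


def get_valid_output(
    call_model: Callable[[str], str],
    parse_toon: Callable[[str], Any],
    prompt: str,
    max_attempts: int = 3,
) -> Any:
    last_error: Exception | None = None
    for _ in range(max_attempts):
        raw = call_model(prompt)
        try:
            return parse_toon(raw)  # accept only well-formed TOON
        except ValueError as exc:   # malformed output: ask for a regeneration
            last_error = exc
            prompt += "\n\nThe previous output was not valid TOON; return valid TOON only."
    raise RuntimeError(f"no valid output after {max_attempts} attempts") from last_error
```

Every retry re-sends the prompt and regenerates the answer, so with error rates above 40% the extra calls quickly consume whatever tokens the format saved.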
The paradox is clear: to save 300–400 tokens over a full conversation, you have to use more expensive models, which nullifies the economic benefit. A conversation that saves 40% of its tokens with TOON still costs more if it requires a premium model instead of a cheaper one with JSON.
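A back-of-the-envelope calculation makes the point; the per-token prices below are arbitrary illustrative units, not actual vendor rates, and only their ratio matters.

```python
# Illustrative cost comparison; prices are invented units, not vendor rates.
PRICE_MID = 1.0      # cost units per 1K tokens, mid-tier model
PRICE_PREMIUM = 5.0  # cost units per 1K tokens, premium model (5x assumed)

json_tokens = 10_000                  # tokens for the conversation in JSON
toon_tokens = int(json_tokens * 0.6)  # 40% fewer tokens in TOON

cost_json_mid = json_tokens / 1_000 * PRICE_MID          # 10.0 units
cost_toon_premium = toon_tokens / 1_000 * PRICE_PREMIUM  # 30.0 units

print(cost_json_mid, cost_toon_premium)
```

Under these assumptions, TOON on the premium model costs three times as much as JSON on the mid-tier model; in general the format only pays off when the premium-to-mid price ratio stays below 1 / (1 - saving), roughly 1.7x for a 40% reduction.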
Conclusions
Our tests show that TOON does not deliver on its promises. With savings under 25% in most cases and reliability issues on lower-tier models, it currently represents more of an overhead than an advantage. For production applications, JSON remains the recommended standard: mature, universally supported, and compatible with the entire LLM ecosystem.
