Authors
Filippa Kuhnle Sekkelsten
ChatGPT
May 6, 2026

Tokenmaxxing

- AI should be used where it creates value, not as a badge of honor measured by token consumption.

The trend of tokenmaxxing

Nvidia CEO Jensen Huang recently said he would be “deeply alarmed” if a $500K engineer didn’t consume at least $250K in AI tokens. The comment quickly became part of a broader discussion across startups and large technology companies about how AI should be used inside engineering teams. That discussion has increasingly centered on a new term: tokenmaxxing. The idea is simple: if AI tools improve productivity, then increasing AI usage should increase output. Tokens, the units in which AI compute is metered and billed, become a proxy for how much engineers rely on tools like Claude, GPT, Cursor, and Codex.

Across the industry, companies are rapidly increasing their token spend. Some startups now encourage engineers to use AI as aggressively as possible, while others have introduced internal expectations around usage. Venture-backed teams, especially smaller ones, increasingly see AI compute as leverage that allows them to move faster without hiring at the same pace.

Programs like Y Combinator’s startup credits, combined with subscription offerings such as Claude Max and OpenAI Pro, have also lowered the barrier to heavy usage. For many teams, AI consumption has become relatively cheap compared to adding more engineers.

When usage becomes the metric

Supporters of the trend view token spending as a productivity investment. Faster prototyping, automation, and iteration can justify higher compute costs if they lead to better products and quicker growth. But the trend has also raised concerns. Critics argue that token usage is a poor measure of productivity on its own. More AI consumption does not automatically mean better products, stronger execution, or more meaningful output. In some reported cases, companies have spent tens of thousands of dollars on AI-generated work that never shipped.

There are also concerns about the culture forming around AI usage. Reports from companies like Meta and OpenAI have described internal token leaderboards and unusually high levels of compute consumption among engineers. For some observers, it resembles earlier periods in software development where metrics like lines of code became shorthand for productivity despite having little connection to actual value creation. The broader concern is that teams may begin optimizing for activity instead of outcomes.

REWRK is doing more with less

Rather than maximizing AI usage, REWRK has focused on building systems designed to use AI more selectively and efficiently. Reducing unnecessary AI processing lowers infrastructure costs, improves scalability, and creates more predictable operations. But according to REWRK, it can also improve data quality and reliability. Large AI workflows often introduce noise, lose context, or generate inconsistent outputs when overused. As founder Ola says:

REWRK obviously uses AI too. We use it as a sparring partner in software development and in selected parts of our platform. But we believe AI should be used where it creates value, not as a badge of honor measured by token consumption. We believe in using AI intentionally, keeping it to a necessary minimum rather than encouraging unnecessary overconsumption.

By using AI selectively, REWRK believes it can maintain greater control over structure, accuracy, and performance. Rather than treating higher token consumption as a signal of innovation, the team focuses on efficiency, precision, and sustainable AI usage.

Ultimately, the tokenmaxxing discussion highlights a familiar challenge in technology. During periods of rapid change, measurable inputs often become proxies for progress. Today, that input is tokens. Whether token usage becomes a meaningful productivity signal remains unclear. What is becoming increasingly evident is that companies are still figuring out how AI spending translates into long-term value.
