This editorial appeared in the February 20th, 2025, issue of the Topline newsletter.
Want all of the latest go-to-market insights without the wait? Subscribe to Topline and get the full newsletter every Thursday—delivered a week before it hits the blog.
My dear friend AJ has been on a mission to speak with founders of those elusive startups that are doing with five engineers what others need fifty to accomplish. When a prominent VC mentioned one such company, AJ pounced on the lead, asking for an introduction. Hilariously, he was shot down because the founder is now too busy… "hiring a ton of people."
Yet these efficiency wizards do exist. Cursor.ai hit $100M in ARR with just 20 employees, proving that AI leverage isn't just venture capital fairy dust.
Cursor and others are proof that AI offers game-changing advantages, but we're in such early days that only a select few have decoded its real potential. These trailblazers typically live at AI-native startups, where "AI-first" isn't a marketing slogan but the water they swim in.
These trailblazers aren't just throwing AI at problems – they're reimagining every task from first principles. More crucially, they're developing an intimate familiarity with AI's blind spots and quirks. I suspect it's this deep understanding of AI's limitations – not just its capabilities – that creates their unfair advantage.
OpenAI's recent SWE-Lancer benchmark offers compelling evidence for this theory. By testing AI against real-world Upwork tasks with actual dollar values, they've created the first meaningful measure of AI's practical capabilities versus the hype.
The Field Test
OpenAI created the SWE-Lancer benchmark to shift AI evaluation from academic exercises to real-world problems with actual dollars at stake.
The benchmark challenged leading AI models with 1,488 genuine software engineering tasks from Upwork – actual bug fixes, feature requests, and maintenance tasks that businesses paid real money to solve.
The results were enlightening. Even the most advanced models completed only about a third of these tasks independently. Claude 3.5 Sonnet led with 33.7% solved (worth $403K), followed by OpenAI's o1 at 32.9% ($380K) and GPT-4o at 23.3% ($304K). Performance dropped dramatically on coding-heavy tasks, where success rates fell to just 8-21%.
Performance improved when models received multiple attempts or more computational resources, but for flawless execution, AI still needs the right human guidance.
The Human Element
Sam Altman predicts that "in a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today." That vision may eventually materialize, but current evidence shows AI still needs substantial human guidance to excel.
What does all this mean for people like AJ, who are seeking AI leverage today?
It means that the advantages of effective AI deployment are real – and magnified by how few companies have truly mastered this approach. Capturing this value requires the right talent: people who understand AI's capabilities and limitations with equal clarity, who can then transform these systems from sophisticated tools into genuine force multipliers.
The SWE-Lancer results reinforce this: alone, the best AI solved just one-third of tasks. But paired with an engineer who truly understands AI? That same task list becomes entirely solvable, at speeds that make traditional approaches look antiquated.
This insight builds on what we explored last week – that success in an AI-first world demands humans with expertise, judgment, taste, and intuition. But the SWE-Lancer benchmark forces us to expand that view: in the near term, these individuals also need a thorough understanding of AI's strengths and weaknesses. These are the people who will determine which organizations thrive in the AI revolution – and which get left wondering what they missed.