What’s the name of your car?
I mean, you’ve named your car, right?
I actually haven’t, but if I had to choose, I’d call it “Tommy.”
It’s somehow a companion: I talk to it, it follows me on every trip, I take care of it, and I’m afraid something bad might happen to it.
Now, what about the names of your data tools?
We make decisions with them, we spend our days with them, we get frustrated with them, we take care of them.
Yet we don’t name[1] them—beyond their actual boring names. They might have cute duck, octopus, or engine-like faces, but that doesn’t help us form a personal connection. Everyone uses the same tools.
Kids, too, play with the same toys. But they ultimately give them personal names.
There’s something intrinsic to our software tools: they don’t exist in our physical world. And so, it’s hard to feel the same attachment as we do with our cars or toys.
I’ve been rediscovering my old Sony PSP recently, and wow! The buttons, the lightness, the screen—everything is so well made. The same feeling goes with a vinyl record, a book, a nice chessboard. Chess players even bridge the gap between the real and the virtual with these incredible chessboards.
The keyboard is likely the closest physical object to our digital tools. It’s not surprising to find fans of mechanical keyboards and custom keycaps quite easily.
Should we have keycaps for dbt—a hotkey to open our dbt project in Cursor? The same for BigQuery, Metabase, or ChatGPT.
These are basically our CMD+OPTION+<the key you never remember because you only use the tool once a quarter> shortcuts embedded in our physical world.
I feel we’d make these tools more like companions if we could truly make them our own.
Yes, software should be soft. But the revival of BlackBerry, the fandom around mechanical keyboards, the Car Thing from Spotify, or the Claude Poetry camera all refresh the idea that the best instrument is a companion. It's sometimes weird. It can break. It needs care.
And now, it can even “talk” to you.
📡 Expected Contents
Betting Against Agents
Resonating with the intro: this post makes very good arguments for why we shouldn't focus too much on agents.
Long story short: error rates compound exponentially in multi-step workflows. 95% reliability per step drops to roughly 36% end-to-end success over 20 steps.
In reality, we usually need 99.9%...
Also, context windows create quadratic token costs: each turn re-sends the whole history, which makes long autonomous conversations economically unsustainable.
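A quick back-of-the-envelope sketch of both points, in plain Python; the 500 tokens per turn and 100 turns are made-up numbers, only there to show the shape of the curves:
per_step = 0.95
steps = 20
print(per_step ** steps)  # ≈ 0.358: only ~36% of 20-step runs finish cleanly

# Each turn re-sends the whole history, so cumulative tokens grow quadratically with turns.
tokens_per_turn = 500
turns = 100
total_tokens = sum(tokens_per_turn * t for t in range(1, turns + 1))
print(total_tokens)  # 2,525,000 tokens for a 100-turn conversation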
It's not that agents don't work. It's that they don't work the way we think. Don't get me wrong, I'm bullish on AI in general, but this read, among others, confirms that the narrative of 2025 as the breakthrough year for agents looks less and less like a good bet.
Must read!
Thoughts on DuckLake
I didn't follow the DuckLake announcement that closely a few months ago. Max shares good thoughts on it in this post.
I'm not sure such a framework is really needed right now. But I kinda expect the Duck team to have a proper plan and to push the project further into the market soon.
dbt + AI follow up
AI isn’t just for modeling—here’s a sharp breakdown of how it can drive robust data testing and monitoring, too.
This second part of Mikkel's blog walks through pairing dbt with AI to automate checks, flag anomalies, and keep your data quality tight. Worth a read if you want to see what the future of analytics engineering looks like in practice.
Related: lately I've been thinking about the sub-agents introduced by Claude. I feel more and more eager to write these agents' "brain" prompts rather than writing code myself. Committed as markdown in the repo, they let the entire team use these robots to work way faster. Nothing really new, just wondering if it's an orthogonal path alongside semantic layering and more tooling for humans 🤔. To be continued.
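For illustration only, here's a minimal sketch of what such a committed "brain" could look like, assuming Claude Code's markdown-with-frontmatter sub-agent format; the file name (something like .claude/agents/dbt-test-writer.md), the fields, and the wording are all hypothetical:
---
name: dbt-test-writer
description: Suggests dbt schema tests for new or modified models.
---
You are a careful analytics engineer. When pointed at a dbt model,
propose generic tests (unique, not_null, accepted_values, relationships)
for its key columns and explain each suggestion in one sentence.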
Using uvx
If, like me, you write more and more CLIs for daily operations, you might be interested in uvx:
The most important aspect is defining entry points (console scripts) in your pyproject.toml. This tells uvx which commands are available when the package is installed:
[project]
name = "your-package"
version = "0.1.0"
description = "Your tool description"
[project.scripts]
your-command = "your_package.module:main_function"
another-command = "your_package.cli:cli_main"
When uvx is invoked, uv installs the corresponding package, which provides the command. The tool looks for executable commands defined in the package's entry points.
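As a usage example, and assuming the package above is published (or otherwise installable via a --from spec), you could then run:
uvx --from your-package your-command
uvx --from your-package another-command
# If the command name matches the package name, a plain `uvx your-package` is enough.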
📰 The Blog Post
No blog post this time. I'm keen to explore more around UX for data + LLMs, so I hope to write some follow-ups to the last post with Julien. Here is the link again if you didn't read it yet.
🎨 Beyond The Bracket
This is code vibing.
I've been playing piano on the side for a decade now[2] and I'm always amazed by how technology helps me learn and develop new sounds, new vibes.
This time the instrument is code.
Playing around with Strudel makes you see music in a totally different way. It gives you new ideas.
It's another call for me to draw a parallel with how literary, cognitive, and linguistic studies can help us use our new toys in novel ways and explore our own selves.
Just returned to France after a month in the US. Loved it! Felt like being a kid again - everything seemed fresh and waiting to be explored. Sure, there's probably novelty bias at play (only my second US trip). But after experiencing the wildlife and those massive open spaces, I feel cramped back in this tiny French box.
Already tons of things planned for back-to-school season, with a stop in London in September. I'll be at Big Data London for those wanting to hang out there!