Bugster
Prompt caching: how we reduced LLM spend by 60x (and got 20% faster responses)
TL;DR — Caching the static prompt prefix (tools + system + stable memory) in our E2E testing agent delivered 60x lower cost per test and ~20% lower p95…
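The teaser names the technique: keep the static prompt prefix (tools, system prompt, stable memory) identical across calls so the provider can cache it, and put the per-test dynamic content last. A minimal Python sketch of that request shape, assuming an Anthropic-style `cache_control` marker for illustration (the post does not specify the provider; the model name and `build_request` helper are hypothetical):

```python
# Sketch of prefix caching for an LLM request: the static parts come
# first and are marked cacheable, so only the dynamic final turn is
# billed at full input-token rates on repeat calls.
# The `cache_control` field follows Anthropic's prompt-caching API as
# an illustration; other providers cache prefixes implicitly.

STATIC_SYSTEM = "You are an E2E testing agent..."    # stable across tests
STATIC_MEMORY = "Project conventions and selectors"  # stable memory

def build_request(dynamic_task: str) -> dict:
    """Assemble a request whose static prefix is marked for caching."""
    return {
        "model": "claude-sonnet-example",  # hypothetical model name
        "system": [
            {"type": "text", "text": STATIC_SYSTEM},
            # cache breakpoint after the last static block:
            {"type": "text", "text": STATIC_MEMORY,
             "cache_control": {"type": "ephemeral"}},
        ],
        # only this part changes per test, so it misses the cache:
        "messages": [{"role": "user", "content": dynamic_task}],
    }

req = build_request("Run the checkout flow test")
```

Because the `system` blocks are byte-identical on every call, the cache hits on everything up to the breakpoint; only the short user turn is processed fresh.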
Aug 13 · Bugster and Naquiao
Build fast. Test in parallel.
Why 2025 testing is receipts, not ceremony — and how we designed Bugster for that.
Aug 12 · Bugster and Facundo Lopez Scala
Client, Server, or Edge: How to Render Smartly in Next.js
Choosing between client, server, or edge rendering in Next.js isn't about what's "better"; it's about what makes sense for your data, UX, and context.
Aug 5 · Bugster and Juan Beck
July 2025
What We Learned Launching Bugster: How Testing Agents Actually Behave
After shipping Bugster on Product Hunt, we opened signups and saw over 200 companies and teams jump in within 48 hours to assess if Bugster could solve…
Jul 29 · Bugster
LLM API Call or Agent? How Modern AI Gets (and Loses) Its Autonomy
An Engineer’s Guide to Choosing the Right Approach for Your LLM
Jul 29 · Bugster and Naquiao