Hacker News | new | past | comments | ask | show | jobs | submit | indiestack's comments

Co-founder here. One thing oatcake didn't mention -- the MCP server actually learns what you build with over time. It tracks category interests and inferred tech stack from your searches (never raw queries) and starts surfacing tools you wouldn't have found otherwise. You can see exactly what it stores and clear it anytime at /developer.

The market gap data has been the most interesting part. Every failed search gets logged and we're seeing real demand patterns for stuff that either doesn't exist or just isn't discoverable. Turns out there's a massive gap in developer file conversion tools -- that's not something we would have guessed.

I've personally contacted about 200 makers over the past few weeks and the pattern is always the same. Someone built something genuinely useful, it has maybe 50 GitHub stars, and nobody outside their immediate circle knows about it. The tools aren't bad, they're just invisible. There's no knowledge layer connecting what's been built to who needs it, especially not for agents.

The whole thing runs on Python/FastAPI/SQLite on a single Fly.io machine for about $5/mo. No React, no build step, pure Python string templates. 11 MCP tools including personalized recommendations and a stack builder that suggests full toolchains for a project description.
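A minimal sketch of what "no build step, pure Python string templates" can look like: pages are plain functions returning HTML strings. The function names and content below are made up for illustration, not the actual codebase.

```python
import html

def layout(title: str, body: str) -> str:
    """Wrap a page body in a shared HTML shell."""
    return f"""<!doctype html>
<html>
<head><title>{html.escape(title)}</title></head>
<body>{body}</body>
</html>"""

def tool_card(name: str, description: str) -> str:
    """Render one tool listing as an HTML fragment."""
    return (f"<article><h2>{html.escape(name)}</h2>"
            f"<p>{html.escape(description)}</p></article>")

page = layout("Tools", tool_card("pdfsmith", "Convert & merge PDFs"))
```

No templating engine, no bundler: `html.escape` handles the one thing you actually need (escaping), and the rest is function composition.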


Yep - no brainer really. Be the second mouse.



The unit test approach from the contractor in the thread is gold: "find a recently closed issue and try to write a unit test for it." This forces you to understand the test infrastructure, the module boundaries, and the actual behavior — not just the code structure.

I'd add one more technique that's worked well for me: trace a single request from HTTP endpoint to database and back. In a FastAPI app, that means starting at the route handler, following the dependency injection chain, seeing how the ORM/query layer works, and understanding the response serialization. You touch every layer of the stack by following one real path instead of trying to understand the whole codebase at once.
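The layers of that trace can be sketched with the stdlib alone; in a real FastAPI app the handler would be an `@app.get` route and `get_db` would be wired in via `Depends()`, but the names here are illustrative.

```python
import json
import sqlite3

def get_db() -> sqlite3.Connection:
    """Dependency layer: one connection per request (FastAPI: Depends(get_db))."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'ada')")
    return db

def fetch_user(db: sqlite3.Connection, user_id: int):
    """Query layer: the ORM/SQL boundary you'd read next in the trace."""
    return db.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()

def get_user_handler(user_id: int) -> str:
    """Route handler: the entry point; returns the serialized response."""
    db = get_db()
    row = fetch_user(db, user_id)
    if row is None:
        return json.dumps({"error": "not found"})
    return json.dumps({"id": row[0], "name": row[1]})

print(get_user_handler(1))  # {"id": 1, "name": "ada"}
```

Reading the real codebase in this order (handler, dependency, query, serialization) gives you one complete vertical slice before you generalize.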

Visualizers are nice for the "big picture" but they rarely help you understand why the code works the way it does. The why is in the git history and the closed issues, not in a dependency graph.


The govulncheck approach (tracing actual code paths to verify vulnerable functions are called) should be the default for every ecosystem, not just Go.

The fundamental problem with Dependabot is that it treats dependency management as a security problem when it's actually a maintenance problem. A vulnerability in a function you never call is not a security issue — it's noise. But Dependabot can't distinguish the two because it operates at the version level, not the call graph level.

For Python projects I've found pip-audit with the --desc flag more useful than Dependabot. It's still version-based, but at least it doesn't create PRs that break your CI at 3am. The real solution is better static analysis that understands reachability, but until that exists for every ecosystem, turning off the noisy tools and doing manual quarterly audits might actually be more secure in practice — because you'll actually read the results instead of auto-merging them.


Part of the problem is that customers will scan your code with these tools and they won't accept "we never call that function" as an answer (and maybe that's rational if they can't verify that that's true). This is where actual security starts to really diverge from the practices we've developed in the name of security.


Would be neat if the call graph could be asserted easily. You could not only validate which vulnerabilities you are and aren't exposed to, but also blacklist some API calls as a form of mitigation, ensuring you don't accidentally start using something that's proven unsafe.
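A hedged sketch of the blacklisting half of this: a blunt, file-level check that scans Python source with the stdlib `ast` module and flags calls to banned functions. It only catches direct `module.attr(...)` calls, not dynamic dispatch, and the banned names are just examples.

```python
import ast

BANNED = {"pickle.loads", "yaml.load"}  # calls we've decided are off-limits

def banned_calls(source: str) -> list[str]:
    """Return dotted names of banned calls found in `source`."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Resolve simple `module.attr(...)` calls to a dotted name.
            if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                name = f"{func.value.id}.{func.attr}"
                if name in BANNED:
                    found.append(name)
    return found

code = "import pickle\ndata = pickle.loads(blob)\n"
print(banned_calls(code))  # ['pickle.loads']
```

Run as a CI check, something like this turns "we never call that function" from a claim into a (weak but mechanical) assertion.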


It’s easier to just update the package and not have to worry.



But then if you could assert the call graph (easily, or even provably correctly), why not just cull the unused code that led to the vulnerability in the first place?


With a statically compiled language it is usually culled through dead-code elimination (DCE), and with static linking you don’t ship entire libraries.


The technology to cull code works for dynamic languages too, even though it gets difficult sometimes (Google Closure Compiler[1] does dead code elimination for JS, for example). It's just that most dynamic-language users don't make the attempt, and you end up with Dependabot giving you thousands of false positives due to the deep dependency tree.

[1] https://github.com/google/closure-compiler


There is the VEX justification Vulnerable_code_not_in_execute_path. But it's an application-level assertion. I don't think there's a standardized mechanism that can describe this at the component level, from which the application-level assertion could be synthesized. Standardized vulnerability metadata is per component, not per component-to-component relationship. So it's just easier to fix the vulnerability.

But I don't quite understand what Dependabot is doing for Go specifically. The vulnerability goes away without source code changes if the dependency is updated from version 1.1.0 to 1.1.1. So anyone building the software (producing an application binary) could just do that, and the intermediate packages would not have to change at all. But it doesn't seem like the standard Go toolchain automates this.
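For what it's worth, the bump itself is a one-line change in the application module's go.mod; minimal version selection means the intermediate packages don't need to change. Module paths below are made up, and the change would typically be made with `go get example.com/vulnerable@v1.1.1`:

```
module example.com/myapp

go 1.22

require example.com/intermediate v2.3.0

require example.com/vulnerable v1.1.1 // indirect; was v1.1.0, bumped via `go get`
```

But as far as I know, nothing in the standard toolchain proposes this bump for you; you have to know the patched version exists.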


If you never call it why is it there?


It's in the library you're using, and you're not using all of it. I've had that exact situation: a dependency was vulnerable in a very specific set of circumstances which never occurred in my usage, but it got flagged by Dependabot and I received a couple of unnecessary issues.


The exoskeleton framing is useful but incomplete. In my experience, AI coding assistants are most valuable not when they write code, but when they search for existing solutions before writing code.

The real waste isn't developers typing slowly — it's developers spending a week building an auth system that already exists as a well-maintained library, or reimplementing invoicing logic that someone else has already debugged through 200 edge cases.

The gap right now is structured discovery. AI assistants are great at generating code but terrible at knowing what already exists. There's no equivalent of "have you checked if someone already solved this?" built into the workflow. That's where the actual leverage is — preventing unnecessary work, not just accelerating it.


The SQLite-per-customer pattern mentioned in the database subthread is underrated. I've been running a FastAPI app with a single SQLite database (WAL mode + FTS5) and the operational simplicity is genuinely life-changing compared to managing Postgres.

The key insight: for read-heavy workloads on a single machine, SQLite eliminates the network hop entirely. Response times drop to sub-15ms for full-text search queries. The tradeoff is write concurrency, but if your write volume is low (mine is ~20/day), it's a non-issue.
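A minimal sketch of that setup, assuming an SQLite build with FTS5 compiled in (CPython's bundled SQLite normally has it); the table and data are illustrative.

```python
import sqlite3

db = sqlite3.connect("app.db")  # one file next to the app, no network hop
db.execute("PRAGMA journal_mode=WAL")  # readers don't block the single writer
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(title, body)")
db.execute("INSERT INTO docs VALUES (?, ?)",
           ("SQLite in production", "WAL mode plus FTS5 on one machine"))
db.commit()

rows = db.execute(
    "SELECT title FROM docs WHERE docs MATCH ?", ("fts5",)
).fetchall()
print(rows)  # [('SQLite in production',)]
```

WAL is what makes the read-heavy case work: writes append to a log while readers keep using the main file, so your ~20 writes/day never stall the search path.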

The one thing I'd add to the article: the biggest infrastructure regret I see is premature complexity. Running Postgres + Redis + a message queue when your app gets 100 requests/day is solving problems you don't have while creating problems you do (operational overhead, debugging distributed state, config drift between environments).

