It's important to tell readers how long you've been doing this - especially those who also manage ADHD or ADHD-like symptoms.
Why? Because those individuals tend to spin something up, tell everyone about it (online and offline), and then stop doing it a few days later.
The result is a false signal for others in the same boat. People read it, feel a spark of recognition ("someone like me actually figured this out"), and then invest real time, energy, and maybe money into replicating something the author quietly abandoned two weeks later.
Just a small heads up from someone who got burned by this in the past :)
> Why? Because those individuals tend to spin something up, tell everyone about it (online and offline), and then stop doing it a few days later.
That's definitely me (most recent ones: using engineering notebook techniques for my own life, and the WOOP method), but I recognize that feeling of having found THE solution when I'm only a few days in, so I tend to wait and see, or if I tell someone I say "...but ask me again in a week or a month if I'm still doing it." (At least with the engineering notebook, I can still go back and use it to remember what steps and settings I used in KiCad, or use WOOP on a new goal at any time. So it's not a total loss.)
I will say one thing I have stuck with, and that is pretty useful, is a morning checklist and an evening checklist. I'm currently using a paper version with the days of March in the columns and the checklist items in the rows, and I mark them off as I go: a slash for the one I'm doing now/next, an X when it's done, and blank (or N) if I choose to skip it. As a back-up, when I can't get around to making a paper version (I'm planning to type the steps into a spreadsheet so I can just revise and print it each month), I keep the lists in two Google Keep checklists. Those are great because you can reset the checklist each day for reuse, drag to reorder items as you edit, and indent one level to organize things a bit. The disadvantage is I might get distracted by notifications and stuff on my phone.
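The planned spreadsheet version could be sketched as a small script that emits the same grid layout (checklist items as rows, days of the month as columns) as a CSV you can print each month. The function and item names here are my own illustration, not anything from the comment above:

```typescript
// Toy sketch: generate a printable month grid as CSV, with checklist
// items as rows and one blank column per day to mark off by hand.
function buildChecklistGrid(items: string[], days: number): string {
  // Header row: a corner label, then one column per day of the month.
  const header = ["Item", ...Array.from({ length: days }, (_, i) => String(i + 1))];
  // One row per checklist item, with empty cells for slashes/Xs/Ns.
  const rows = items.map((item) => [item, ...Array(days).fill("")]);
  return [header, ...rows].map((row) => row.join(",")).join("\n");
}

// Example: a 31-day grid for a two-item morning checklist.
const csv = buildChecklistGrid(["Meds", "Stretch"], 31);
console.log(csv.split("\n")[0]); // header row: Item,1,2,...,31
```

Revising next month's version is then just editing the item list and re-running it.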
I have data going back many years, but this particular effort is a few months old at this point. Notably, though, it's an iteration that has reduced the time I spend collating and reviewing data by automating away most of my previous manual effort, including most of the coding, so I suspect I'll stick with it for a very long time. A significant part of the prompt to the Claude part of it is to focus a substantial portion of plans on how to automate little things that cost me time, and it's doing a decent job of that.
I've absolutely not figured it out, but I now have an agent throwing stuff at the wall (with guidance from read access to e.g. my journal and a few other data sources) to figure it out for me, and it's gotten steadily better.
yeah we're seeing the same thing from the infrastructure side. small opinionated api surface means less code and you can actually read what the agent wrote.
yeah totally. an example from the article is that when you're reviewing 31 lines of business logic instead of 150 lines of boilerplate, it's a lot easier to catch bad error handling or security issues - which kinda goes hand in hand with what you're saying.
This is close to what we're doing with [Encore](https://encore.cloud). The framework parses your application code through static analysis at compile time to build a full graph of services, APIs, databases, queues, cron jobs, and their dependencies. It uses that graph to provision infrastructure, generate architecture diagrams, API docs, and wire up observability automatically.
The interesting side effect is that AI tools get this traversability for free. When business logic and infrastructure declarations live in the same code, an AI agent doesn't need a separate graph database or MCP tool to understand what a service depends on or what infrastructure it needs. It's all in the type signatures. The agent generates standard TypeScript or Go, and the framework handles everything from there to running in production.
Our users see this work really well with AI agents: the agent can scaffold a complete service with databases and pub/sub, and it's deployable immediately because the framework already understands what the code needs.
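Encore's actual mechanism is compile-time static analysis of your source, but the core idea - that declarations living in application code form a traversable graph - can be shown with a self-contained toy registry. Everything below (`declare`, the `Resource` type, the "users" service) is my own illustration, not Encore's API:

```typescript
// Toy illustration (not Encore internals): infrastructure declared as
// code can be collected into a graph that tools can traverse.
type Resource = { kind: "database" | "topic"; name: string };

// service name -> resources that service declared
const graph = new Map<string, Resource[]>();

function declare(service: string, resource: Resource): Resource {
  const deps = graph.get(service) ?? [];
  deps.push(resource);
  graph.set(service, deps);
  return resource;
}

// A hypothetical "users" service declaring what it needs, in code:
const usersDb = declare("users", { kind: "database", name: "users-db" });
const signupTopic = declare("users", { kind: "topic", name: "signups" });

// A provisioner, diagram generator, or AI agent can now ask what the
// service depends on without any separate graph database or MCP tool:
const deps = graph.get("users")!.map((r) => r.name);
console.log(deps.join(", "));
```

The difference in Encore's case is that the graph is derived from the types and declarations themselves at compile time, so there's no runtime registration step to keep in sync.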
We've had a bunch of people migrate over from Heroku in the last couple years, especially after they killed the free tier.
The main difference from other alternatives is that you don't write any infrastructure config - you just declare what you need in your code (databases, cron jobs, pub/sub, etc.) and Encore handles provisioning it in your AWS/GCP account (it works locally as well, where local is 1:1 with your prod env). So there's no Terraform to maintain or Docker setup to mess with.
If you're looking to move off Heroku it's pretty straightforward - most folks get their app running in an afternoon. Happy to help if you run into anything: https://encore.cloud
Haha. We're not trying to replace Ops, just prevent teams from needing to build internal platforms before they can ship product. You can still modify all the provisioned infra directly in AWS/GCP console, or layer Terraform on top. We work alongside you, not against you.
We provide client abstractions for infrastructure primitives (databases, pub/sub, object storage, etc.). Your application code uses these abstractions, and the actual infrastructure configuration is injected at runtime based on the environment.
For example, your code references "a Postgres database" and Encore provisions Cloud SQL on GCP or RDS on AWS, handling the provider-specific config automatically. The cloud-specific details stay out of your application code.
And if you prefer Kubernetes, we can provision and configure GKE or EKS instead of serverless compute. The point is your application code stays the same regardless.
Encore (the framework and CLI) is fully open source and free to use. You can deploy anywhere by generating Docker images with `encore build docker`.
Encore Cloud (optional managed platform) has a generous free tier and paid plans for production teams that want automatic infrastructure provisioning in their own AWS/GCP accounts. You can find more details at encore.cloud/pricing
You can just use the resource as you normally would, and then use e.g. secrets to define the connection settings per environment. You would, however, need to provision the resource yourself for all your environments. We have a Terraform plugin to help you automate that.