exactly this, verification should always be on the code
if someone fresh wants to contribute, now they will have to network before they can write code
honestly i don't see myself networking just so that i can push my code
I think there are valid ways to improve the outcome, like open source projects codifying the focus areas for each month, verifying PRs, making PRs show proof of working, etc. There are many ways to deter folks who don't want to meaningfully contribute and simply AI-generate code, pushing the effort onto the real contributors.
Why are folks seemingly so averse to sending an email / hopping on a channel to actually talk to maintainers before just firing off code? I've been on both sides of this; I have been young and green and just fired off contributions without stopping to think, do they even want this? Codebases are rarely built primarily out of zillions of shotgunned patches. They are more like a garden that needs tending over time, and the best tenders are usually the ones who spend the most time in the garden.
Zenact AI | Founding Engineers & Interns | Full-Time + 6-Month Internships | Onsite Bangalore | Location flexible for internships (India)
Tech: Golang • Python • AI Agents
At Zenact AI, we are building AI agents that test apps like real users. I personally faced this problem at Zomato for over 6 years while handling many bugs and incidents.
We launched recently and already got 35+ signups from leading unicorns & soonicorns in India.
Backed by the Zomato mafia.
Team comes with deep expertise from Zomato’s scale journey.
## Roles:
* Founding Engineers.
* Interns (6 months). Must’ve built serious projects or freelanced early in college.
## Tech Stack:
Golang, Python, Java(5%), Appium, AWS, Docker
## You’ll work on:
* Building the platform based on active customer feedback, with a heavy focus on improving the end-to-end latency of testing.
* Fine-tuned vision & reasoning models (currently 92% accuracy vs SOTA ~60%)
one thing that is becoming clear is that the gains from model enhancement are getting saturated
that's why we are starting to see a programming of AI, almost like programming with building blocks
if there is a pathway for models to get smart enough to know when to trigger these hooks by themselves, whether from the system prompt or by default, then it wouldn't make sense to have these hooks
most of the current systems that need a reliable managed service for distributed locking use DynamoDB. are there any scenarios where S3 is preferable to DynamoDB for implementing such distributed locking?
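Both services can serve as the lock store because both now expose an atomic create-if-absent primitive: DynamoDB via `attribute_not_exists` condition expressions, and S3 via conditional writes (`PutObject` with `If-None-Match: *`, added in 2024). A minimal in-memory sketch of the protocol they both support (the store class and the TTL handling here are illustrative stand-ins, not any library's API):

```python
import time

class ConditionalStore:
    """In-memory stand-in for a store with create-if-absent semantics
    (DynamoDB attribute_not_exists, or S3 PutObject with If-None-Match: *)."""
    def __init__(self):
        self._objects = {}

    def put_if_absent(self, key, value):
        # The real stores enforce this check atomically server-side;
        # a failed check surfaces as HTTP 412 (S3) or
        # ConditionalCheckFailedException (DynamoDB).
        if key in self._objects:
            return False
        self._objects[key] = value
        return True

    def get(self, key):
        return self._objects.get(key)

    def delete(self, key):
        self._objects.pop(key, None)

def acquire_lock(store, name, owner, ttl_seconds=30):
    """Try to take the lock; reclaim it only if the holder's lease expired."""
    record = {"owner": owner, "expires": time.time() + ttl_seconds}
    if store.put_if_absent(name, record):
        return True
    current = store.get(name)
    if current and current["expires"] < time.time():
        # Lease expired: delete and retry. A production implementation must
        # make this step conditional too (e.g. S3's If-Match on the old ETag),
        # otherwise two stealers can race here.
        store.delete(name)
        return store.put_if_absent(name, record)
    return False

store = ConditionalStore()
assert acquire_lock(store, "migration-lock", "worker-a")
assert not acquire_lock(store, "migration-lock", "worker-b")  # already held
```

Where S3 might plausibly win: low lock churn (S3 charges per request with no provisioned capacity), or when the lock guards an artifact that already lives in S3. DynamoDB remains the safer default for high-contention or low-latency locking, since it also gives you TTLs and conditional updates on the same record natively.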