[EDIT] Several commenters rightly noted that heavy em-dash usage is normal for the New Yorker (and common thanks to OS auto-replacement), so my “LLM giveaway” quip was off-base. Leaving this up for context—thanks for the corrections.
LLMs use em dashes because they’re in a lot of writing. They didn’t invent the concept.
I had to stop using em dashes in my own writing because they trigger people who never really thought about them before reading about their association with LLMs.
The only people who think em dashes are a dead giveaway for AI are people who don't know how to write and whose grammar is too poor to use them properly.
You realize that the reason em dashes are so prevalent in ChatGPT output is that they're present in the training data, i.e., newspaper and magazine articles? I get being suspicious of em dashes in a Reddit comment or whatever, but I'd expect em dashes from a professionally typeset publication.
Professionally typeset publications, or word documents, or regular Internet comments written by iOS users—it converts double-dashes to em-dashes automatically.
I’m convinced most folks noticing this now just weren’t aware of the punctuation before they heard about it in the AI paranoia context.
I’m also convinced a good chunk of Reddit comments are AI spam. But, I mean, we have to imagine anyone running an AI campaign knows to avoid the em-dashes by now.
Even in a Reddit comment they’re not that strange. macOS and iOS automatically turn two dashes into an em dash while typing, so a lot of posters are probably using them without even realizing it, and another portion stumbled across that and thought “oh neat” and kept doing it. Same goes for smart quotes.
So it’s just as likely that by spotting those features in a post you’ve found an Apple user as it is that the poster is an LLM.
Hey, this is Gabe from Zenfetch. Been following you guys for a few months now since your first launch. I definitely resonate with all the problems you've described regarding Celery's shortcomings and other distributed task queues. We're on Celery right now and have been through the wringer with various workflow platforms. The only reason we haven't switched to Hatchet is that we are finally in a stable place, though that might change soon, in which case I'd be very open to jumping ship.
I know a lot of folks are going after the AI agent workflow orchestration space. Do you see yourselves progressing there?
In my head, Hatchet coupled with BAML (https://www.boundaryml.com/) could be an incredible combination to support these AI agents. Congrats on the launch!
Hi Gabe, also Gabe here. Yes, this is a core use case we're continuing to develop. Prior to Hatchet, I spent some time as a contractor building LLM agents, where I was frustrated with the state of tooling for orchestration and the lock-in of some of these platforms.
To that end, we’re building Hatchet to orchestrate agents with common features like streaming from running workers to the frontend [1] and rate limiting [2], without imposing too many opinions on core application logic.
Interesting to see Kagi on this list. One of our Zenfetch users specifically requested the option to see their Zenfetch articles alongside their search results, so we naively built that feature to appear beside Google SERPs...

Turns out he was a Kagi power user. Not the worst mistake on our end, though pretty neat to see it in the wild.
This is Gabe, the founder of Zenfetch. Thanks for sharing. We're putting together an export option where you can download all your saved data as a CSV, and we should get that out by the end of the week.
Seems like this would be a good tool to build lessons on - if you could share a "class" and export a link so others could copy it and expand on the lesson/class/topic in their own AI. But as a separate "class", not fully integrated into my regular history blob?
I want the ability to search all my downloaded files and organize them based on the context within. Have it create a category table, and let me say "put all pics of my cat in this folder, and upload them to a gallery on Imgur."
It's been a minute since the APL days, though I still have fond memories of my time there. Big fan of the Pensieve analogy; maybe we'll throw that on the landing page, lol.
Thanks for bringing this up, as privacy is one of the highest (if not THE highest) priorities for us.
Happy to answer any questions you have, here are some preliminary notes that might be helpful:
1. We don't sell your data. Our business model is subscription-based, and we have DPAs with model providers to ensure none of that data is used for training.
2. All data you explicitly save to Zenfetch is encrypted in transit and at rest.
In the future, we'd like to move to a local-first platform where data storage and processing take place on your own machine.