
Thanks for your feedback! Indeed, we've mostly tested it for Firefox and Chrome compatibility so far


As stated in the README, the dataset is downloadable from our website


Yes, we've got quite a long queue :D


Hi, after downloading the model and tokenizer, you can follow the generic Hugging Face tutorial for text generation: https://huggingface.co/blog/how-to-generate

We use custom control tokens, so you have to provide the specific structure of the recipe; simply typing it into the Hugging Face prompt won't work. The details are available in our papers about the solution.
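
To make the "specific structure" point concrete, here is a minimal sketch of wrapping the input in control tokens before generating. The token names (`<RECIPE_START>`, `<INPUT_START>`, etc.) and the model ID are assumptions for illustration; check the RecipeNLG paper and model card for the actual tokens.

```python
# Build a control-token-structured prompt for a recipe model.
# Token names below are illustrative assumptions, not confirmed API.

def build_prompt(ingredients):
    """Wrap a list of ingredients in the control-token structure the model expects."""
    body = " <NEXT_INPUT> ".join(ingredients)
    return f"<RECIPE_START> <INPUT_START> {body} <INPUT_END>"

prompt = build_prompt(["chicken", "rice", "onion"])
print(prompt)

# Generation itself then follows the generic Hugging Face tutorial
# (https://huggingface.co/blog/how-to-generate); commented out here
# because it downloads the model weights:
#
# from transformers import AutoTokenizer, AutoModelForCausalLM
# tokenizer = AutoTokenizer.from_pretrained("mbien/recipenlg")
# model = AutoModelForCausalLM.from_pretrained("mbien/recipenlg")
# ids = tokenizer(prompt, return_tensors="pt").input_ids
# out = model.generate(ids, max_length=256, do_sample=True, top_p=0.95)
# print(tokenizer.decode(out[0]))
```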


> in our papers

A link would be kind.



Good question! I'd say the ultimate goal is to achieve both, but what is fully achievable at the moment is getting the recipe structure right. "Validity" is very hard to evaluate and often has high regional and personal variance, but many teams around the world are working on this problem right now!


IBM Chef Watson was already quite good at this five years ago, but I haven't found any similar service since it was shut down. What makes it so hard to replicate?


Perfectly understood! We just cannot afford to run it on a GPU in the backend, so generation takes up to 3 minutes, which is a terrible thing to do interactively :( You're invited to try it with 10minutemail or similar, or you can run our model on your own if you have some Python experience: https://huggingface.co/mbien/recipenlg


For the privacy-conscious, and if you're willing to go that far[0], you could generate a unique URL for a recipe and store it for a limited amount of time. With that (and the URL being generated up front), you can tell the user either to provide an email address so you can notify them, or just to visit the link later and check whether the recipe is ready.

[0] Personally I wouldn't worry. It's open source and people can run it themselves if they're squeamish, and probably your time would be better spent on the engine itself :-)
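
The reserve-a-URL-then-publish-later flow above can be sketched in a few lines. Everything here (the `RecipeStore` name, the TTL, the in-memory dict) is made up for illustration; a real deployment would use a database or cache with expiry.

```python
import time
import uuid

class RecipeStore:
    """Hold recipes behind unguessable tokens for a limited time (hypothetical sketch)."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._items = {}  # token -> (expiry_timestamp, recipe_or_None)

    def reserve(self):
        """Create a URL token up front, before generation finishes."""
        token = uuid.uuid4().hex
        self._items[token] = (time.time() + self.ttl, None)
        return token

    def publish(self, token, recipe):
        """Attach the finished recipe to a previously reserved token."""
        expiry, _ = self._items[token]
        self._items[token] = (expiry, recipe)

    def fetch(self, token):
        """Return the recipe, None if still generating; raise KeyError if expired."""
        expiry, recipe = self._items.get(token, (0.0, None))
        if time.time() > expiry:
            self._items.pop(token, None)
            raise KeyError("link expired or unknown")
        return recipe

store = RecipeStore(ttl_seconds=600)
token = store.reserve()        # hand /recipe/<token> to the user immediately
print(store.fetch(token))      # None: still generating
store.publish(token, "Tomato soup: ...")
print(store.fetch(token))      # the finished recipe
```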


If you don't mind me asking, how much does it cost per recipe on CPU vs. GPU?

I also wouldn't mind waiting 3 minutes if there were a message telling me so, with a timer.


Hard to say per recipe, as in the current setup they are generated on demand rather than pre-generated. But as you can see, we're hosting the solution in academia, so it's basically free, vs. around $2/hour for a GPU on AWS.

Yes, you might be right about the timer; we'll think about changing it!


How about using browser notifications? That way, one could leave the website, receive a notification about data being ready, and click on it to find the generated recipe.


I'm truly impressed by how broad the response is. It makes me confident that this project is truly worth continuing. Thanks HN!


Currently we have only what CapacitorSet prepared: https://github.com/Glorf/lear/issues/1 I'll try to fix HTTP/1.0 support and post a more detailed benchmark in a few hours.


Currently, its architecture is similar to modern nginx's, and simplicity is what makes it faster. I'll surely look at your links and ideas, as this is still very much a work in progress. If you want to help build it, feel free to join! :)


TBH, it was simpler to implement and maintain, while being only slightly slower than pthreads in initialization time.
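
For readers unfamiliar with the trade-off: the single-threaded event-loop design multiplexes all connections in one loop (epoll on Linux) instead of paying thread-creation cost per connection. lear itself is implemented in C against epoll directly; the sketch below shows the same pattern using Python's `selectors` module, with a `socketpair` standing in for a client connection.

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # uses epoll where available

def echo_ready(sel):
    """Run one iteration of the event loop: echo back data on every readable socket."""
    for key, _mask in sel.select(timeout=1):
        data = key.fileobj.recv(4096)
        if data:
            key.fileobj.sendall(b"echo: " + data)

# A connected socket pair stands in for an accepted client connection.
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"hello")
echo_ready(sel)                 # one loop turn services the readable socket
print(client_side.recv(4096))   # b'echo: hello'
```

The same loop would also own the listening socket (registered for `EVENT_READ`, accepting in the handler), which is how one thread serves many connections without per-connection startup cost.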

