Hacker News | tosh's comments


It's quite possible that, SEO-wise, the site does not make the cut into the top Google results but is still findable and considered by ChatGPT when it does its searches.

Especially in a longer ChatGPT conversation or via deep-research or more agentic modes (e.g. "Pro").

ChatGPT puts quite a lot of time and diligence into searching.

Great for content that is not hyper-optimized for search engines but still (or even more) relevant. It bubbles up.


This is the only interview with Arthur Whitney I have found other than the famous ACM interview by bcantrill.


I would say dense code tends to help code reviews. It is just a bit unintuitive to spend minutes looking at a page of code when you are used to taking a few seconds in more verbose languages.

I also find it easier to just grab the code and play with it interactively than to do that with 40 pages of code.


FIXAPL is an interesting spin on APL without overloading on arity.

Many array languages overload glyphs on arity, so the same glyph behaves differently depending on whether you call it with one argument ("monadic form") or two arguments ("dyadic form"):

monadic: G A1

dyadic: A1 G A2

where G is the glyph and A1, A2 are arguments.
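In APL, for example, `-` means negate when used monadically and subtraction when used dyadically. A minimal Python sketch of this dispatch-on-arity (the `minus` function is hypothetical, just to illustrate the idea):

```python
def minus(*args):
    """Sketch of dispatch-on-arity, like APL's '-' glyph.

    Monadic (one argument): negate.
    Dyadic (two arguments): subtract.
    """
    if len(args) == 1:       # monadic form: G A1
        return -args[0]
    a, b = args              # dyadic form: A1 G A2
    return a - b

print(minus(5))     # monadic: negate -> -5
print(minus(9, 4))  # dyadic: subtract -> 5
```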

The overloading can lead to confusion (but is also interesting in its own way because you can reduce the number of glyphs in the language surface).

That overloading is, I would say, also one of the reasons array languages are not as approachable, and one aspect of the 'difficult to read' argument.

Maybe even more important: avoiding overloading on arity helps with composition (I still have to dig into this deeper).


This is a bit like saying stop using Ubuntu, use Debian instead.

Both llama.cpp and Ollama are great, focused on different things, and complement each other (both can be true at the same time!).

Ollama has great UX and also supports inference via MLX, which has better performance on Apple silicon than llama.cpp.

I'm using llama.cpp, Ollama, LM Studio, MLX, etc., depending on what is most convenient for me at the time for what I want to get done (e.g. a specific model config to run, MCP, quickly trying a prompt, …).


> This is a bit like saying stop using Ubuntu, use Debian instead.

Not really, because Ubuntu has always acknowledged Debian and explicitly documented the dependency:

> Debian is the rock on which Ubuntu is built.

> Ubuntu builds on the Debian architecture and infrastructure and collaborates widely with Debian developers, but there are important differences. Ubuntu has a distinctive user interface, a separate developer community (though many developers participate in both projects) and a different release process.

Source: https://ubuntu.com/community/docs/governance/debian

Ollama never has for llama.cpp. That's all that's being asked for: credit.


OK. That says absolutely nothing about actual UX or anything that matters to most actual users (as opposed to argumentative HN ideologues).


> Both llama.cpp and ollama are great and focused on different things and yet complement each other

According to the article, Ollama is not great (that's an understatement), is focused on making money for the company and stealing clout, and has hardly complemented llama.cpp at all since shortly after the initial launch. All of these points are backed by evidence.

You may disagree, but then you need to refute OP’s points, not try to handwave them away with a BS analogy that’s nothing like the original.


I guess read the article before commenting?


The author points out that the Ollama people are evil.

So it is more like saying "Stop using SCO Unix, use Linux instead".


Where do they use the term "evil"?


In the gaps between the tops of the lines and the bottoms of the other lines ;)


They might not use the word, but the behavior they describe is evil:

" This isn’t a matter of open-source etiquette, the MIT license has exactly one major requirement: include the copyright notice. Ollama didn’t.

The community noticed. GitHub issue #3185 was opened in early 2024 requesting license compliance. It went over 400 days without a response from maintainers. When issue #3697 was opened in April 2024 specifically requesting llama.cpp acknowledgment, community PR #3700 followed within hours. Ollama’s co-founder Michael Chiang eventually added a single line to the bottom of the README: “llama.cpp project founded by Georgi Gerganov.” "


There isn't much you can do with Ollama models besides saying good morning.


The original implementations of k were all proprietary, but by now there are a few open-source implementations as well:

https://wiki.k-language.dev/wiki/Running_K


Which llm is best at driving DuckDB currently?


DuckDB exposes a Postgres-flavored SQL dialect, and most coding LLMs have been trained on that.

Of the small models I tested, Qwen 3.5 is the clear winner. Going to larger LLMs, Sonnet and Opus lead the charts.


It used to be possible to type immediately while the page is loading and have all key presses end up in the input field.

Why run this check before the user can type?

Why not run it later, e.g. right before the message gets sent to the server?

