Hacker News | pcf's comments

He said: "LLM is going to change schools and universities a lot"

You said: "No it won't. It really, really won't."

With the explosive development of LLMs and their abilities, your point of view seems to be the hopeful one, while the other poster's is the realistic one.

It seems you simply can't say anything about what LLMs will never be able to do. Especially when your main argument rests on current "AI slop", which is steadily being eradicated.


> "AI slop" as your main reason, which is being more and more eradicated.

The slop is the hard truth.

As I made perfectly clear in my original post, my university professor friends get handed AI slop by their students each and every day.

There is no "eradication of slop" happening. If anything, it is getting worse. Trust me, my friends see the output from all the latest algorithms on their desk.

The students think they are being very clever, the students think the magical LLM is the best thing since sliced bread.

All the professor sees is a wall of slop on their desk and a student that is not learning how to reason and think with their own damn brain.

And when the professors try politely and patiently to challenge them and test their understanding, as you would expect in a university environment, the snowflake students just whine and complain because they know they've been caught out drinking the LLM Kool-Aid again for the hundredth time this week.

Hence the student is wasting their time and money at university, and the professor is wasting their time trying to teach someone who is clearly not interested in learning because they think they can get the answer in 5 seconds from an LLM chatbot.

My professor friends chose the career they did because they enjoy the challenge of helping students along the way through their courses and watching them develop.

They are no longer seeing that same development in their students. And instead of devoting time to helping students, they are wasting time devising over-engineered, fiendishly complicated lab tasks and tests that the students cannot cheat on using an LLM.

It is honestly a lose-lose situation for everybody.


I think you're missing the point. The conversation is not about what students hand the professors, it's about how students learn. This obviously requires someone who wants to learn.

> it's about how students learn. This obviously requires someone that wants to learn.

FINALLY! Someone who gets the point I was trying to make. I wish I could upvote you a million times.

This is precisely the point. Professors are happy to help people who want to learn.

Students who prefer to copy/paste into LLMs do not want to learn. University is there to foster learning and reasoning using your own brain. An LLM helps with neither.


Sweep aside the misunderstanding about students trying to "cheat" with LLM output instead of engaging with the topic at hand. I think there is a secondary debate here, even when you understand the original intent of the post above. It still boils down to the same concerns about "slop". Not the student presenting slop to the existing teaching system, but the student being led astray by the slop they are consuming on their own.

Being an autodidact has always been a double-edged sword. You can potentially accelerate your learning and find your own specialization, but it is an extremely easy failure mode to turn yourself into some semi-educated crank. Once in a while, this produces a renegade genius who opens new branches of knowledge. But more often, it aborts useful learning: the crank gets lost in their half-baked ontology, unable to fix its flaws or progress to more advanced topics.

The whole long history of learning institutions is, in part, an attempt to manage this very human risk. One of a teacher's main roles is to recognize a student who is spiraling out in this manner and steer them back. Nearly everyone has the potential to incrementally develop this sort of self-delusion if they are not reality-checked on a regular basis. It takes incredible diligence to self-govern and never lose yourself in the chase.

This is where "sycophancy" in LLMs is a bigger problem than mere diction. If the AI continues to function as a sort of keyhole predictor, it does not have the context to model a big-picture purpose like education and keep all the incremental wanderings on course and bound to reality. Instead, it can amplify this worst-case scenario where you plunge down some rabbit-hole.


I sure hope those "university professor friends" exist, and you're not describing yourself at a distance. Because you really need help with a mindset like that. Students are not your enemies, and LLMs are not out to get you. Seek help.

Totally unrelated to this post. Get a grip.


Hi @fatihturker – exciting project if it works!

I have a MacBook Pro M1 Max w/64 GB RAM, and a Mac Studio M3 Ultra w/96 GB RAM. What do you think is possible to run on these? Just curious before I really try it out.


Wow, Kimi K2.5 runs on a single M3 Ultra with 512 GB RAM?

Can you share more info about quants or whatever is relevant? That's super interesting, since it's such a capable model.


Below are my test results after running local LLMs on two machines.

I'm using LM Studio now for ease of use and simple logging/viewing of previous conversations. Later I'm gonna use my own custom local LLM system on the Mac Studio, probably orchestrated by LangChain and running models with llama.cpp.

My goal has always been to use them in ensembles in order to reduce model biases. The same principle has just been introduced as a feature called "model council" in Perplexity Max: https://www.perplexity.ai/hub/blog/introducing-model-council
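To make the ensemble idea concrete, here is a minimal sketch in Python. The `ask_model` callables are hypothetical stand-ins for real LLM API clients (none of these names come from an actual library); the point is only the aggregation step, here a simple majority vote over the models' answers:

```python
from collections import Counter

def ensemble_answer(question, models):
    """Query several model callables and return the majority answer.

    `models` is a list of callables; each takes a question string and
    returns an answer string. In a real setup these would wrap LLM APIs.
    """
    answers = [ask(question) for ask in models]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner, answers

# Hypothetical stub "models" for illustration only
model_a = lambda q: "Paris"
model_b = lambda q: "Paris"
model_c = lambda q: "Lyon"

best, all_answers = ensemble_answer("Capital of France?", [model_a, model_b, model_c])
print(best)  # Paris
```

A real ensemble would of course use richer aggregation than voting, e.g. a final "judge" model that reads all prior answers, which is closer to what the "model council" feature appears to do.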

Chats will be stored in and recalled from a PostgreSQL database with extensions for vectors (pgvector) and graph (Apache AGE).
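As a rough illustration of that storage layer, here is a sketch of the kind of schema such a setup might use. This assumes pgvector and Apache AGE are installed; the table and column names are hypothetical, and the embedding dimension depends on whichever embedding model is used:

```sql
-- Enable the extensions (assumes pgvector and Apache AGE are installed)
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS age;

-- Hypothetical chat-log table with an embedding column for semantic recall
CREATE TABLE chat_messages (
    id        bigserial PRIMARY KEY,
    thread_id bigint NOT NULL,
    role      text   NOT NULL,   -- 'user', or the responding model's name
    content   text   NOT NULL,
    embedding vector(768)        -- dimension depends on the embedding model
);

-- Recall: nearest neighbours by cosine distance to a query embedding
-- SELECT content FROM chat_messages ORDER BY embedding <=> $1 LIMIT 5;
```

The graph side (Apache AGE) would then link messages, threads, and topics as nodes and edges on top of the same database.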

For both sets of tests below, MLX was used when available, but ultimately ran at almost the same speed as GGUF.

I hope this information helps someone!

/////////

Mac Studio M3 Ultra (default w/96 GB RAM, 1 TB SSD, 28C CPU, 60C GPU):

• Gemma 3 27B (Q4_K_M): ~30 tok/s, TTFT ~0.52 s

• GPT-OSS 20B: ~150 tok/s

• GPT-OSS 120B: ~23 tok/s, TTFT ~2.3 s

• Qwen3 14B (Q6_K): ~47 tok/s, TTFT ~0.35 s

(GPT-OSS quants and 20B TTFT info not available anymore)

//////////

MacBook Pro M1 Max 16.2" (64 GB RAM, 2 TB SSD, 10C CPU, 32C GPU):

• Gemma 3 1B (Q4_K): ~85.7 tok/s, TTFT ~0.39 s

• Gemma 3 27B (Q8_0): ~7.5 tok/s, TTFT ~3.11 s

• GPT-OSS 20B (8bit): ~38.4 tok/s, TTFT ~21.15 s

• LFM2 1.2B: ~119.9 tok/s, TTFT ~0.57 s

• LFM2 2.6B (Q6_K): ~69.3 tok/s, TTFT ~0.14 s

• Olmo 3 32B Think: ~11.0 tok/s, TTFT ~22.12 s


Because it's timeless, not just "relevant now".


This is not HN material.


This is advocacy journalism, not HN material. It profiles a UN official as a moral hero rather than analysing falsifiable claims about procurement, targeting systems, casualty verification methods, or supply-chain data.[1]

1. The "Double" Military-Industrial Complex (with numbers)

The article's "economy of occupation" frame is incomplete: Gaza is a proxy-war zone where both blocs run industrial supply chains.

Western/Israeli MIC: Quincy Institute documents “at least $21.7 billion” in US military aid since Oct 7, 2023, funding Iron Beam lasers, JDAM kits, and munitions replenishment.[2] This is state-scale industrial output, not incidental corporate profiteering.

Iran-linked proxy MIC: Iran provides Hamas $350 million annually (2023 Israeli security source) and Hezbollah $700+ million/year, but has shifted from direct shipments to “broker of military-industrial knowledge,” transferring production blueprints for indigenous missile/UAV factories.[3][4] Alma Research notes this “hybrid doctrine” lets proxies manufacture locally, reducing interdiction risk.[4] Ignoring this material capacity misrepresents the war as asymmetric in only one direction.

2. "Genocide" is used by major bodies but remains legally indeterminate

The article treats the label as settled. Empirically, it is not.

Who uses it: Amnesty International (Dec 2024) concluded there is “sufficient basis” to say Israel is committing genocide.[5] UN special rapporteurs have adopted the term.

Why it’s contested: The 1948 Convention requires “intent to destroy, in whole or in part, a national, ethnical, racial or religious group.”[6] The core dispute is inferring intent from conduct. NPR summarises: “it’s not always clear if they mean Hamas or Gazans.”[7] The ICJ’s final judgment on South Africa v. Israel is expected late 2027 or early 2028.[8] Until then, presenting the charge as fact rather than a plausible but unproven legal claim is premature.

3. Albanese's criticism is methodological, not personal

UN Watch's legal analysis notes her June 2025 report uses "genocide" 57 times while "Hamas" and "terrorism" appear zero times (excluding footnotes).[1] Four governments (US, France, Germany, Canada) have condemned her approach.[9] The Special Rapporteur mandate itself is anomalous: it is the only HRC mandate that is indefinite ("until the end of the Israeli occupation") and examines only Israeli violations, systematically excluding Palestinian armed groups.[10] This isn't about "standing with the oppressed"; it's about whether a mandate designed for activism can produce impartial analysis.

Bottom line: HN should discuss the political economy of proxy wars and the failure of international law to handle non-state industrialised conflict, not personality-driven morality tales.

SOURCES:

[1] Georgetown University drops UN's Albanese due to US sanctions — https://www.timesofisrael.com/georgetown-university-drops-un...

[2] U.S. Military Aid and Arms Transfers to Israel, October 2023 — https://quincyinst.org/research/u-s-military-aid-and-arms-tr...

[3] Iranian support for Hamas (Wikipedia) — https://en.wikipedia.org/wiki/Iranian_support_for_Hamas

[4] Hezbollah – Independent Weapons Production, a Hybrid Doctrine — https://israel-alma.org/hezbollah-independent-weapons-produc...

[5] Amnesty concludes Israel is committing genocide in Gaza — https://www.amnesty.org/en/latest/news/2024/12/amnesty-inter...

[6] Convention on the Prevention and Punishment of the Crime of Genocide (PDF) — https://www.un.org/en/genocideprevention/documents/atrocity-...

[7] A question of intent: Is what's happening in Gaza genocide? (WGBH) — https://www.wgbh.org/news/2025-09-25/a-question-of-intent-is...

[8] Whatever happened to South Africa's case at the ICJ? — https://www.middleeasteye.net/explainers/israels-genocide-ga...

[9] UN Watch Refutes Biased New Report by Francesca Albanese — https://unwatch.org/un-watch-refutes-biased-new-report-by-fr...

[10] UN Must Intervene on Flawed Special Procedure Mandate — https://ngo-monitor.org/submissions/submission-to-unhrc-57th...


I use this model in Perplexity Pro (included in Revolut Premium), usually in threads where I alternate between Claude 4.5 Sonnet, GPT-5.2, Gemini 3 Pro, Grok 4.1 and Kimi K2.

The beauty of this availability is that any model you switch to can read the whole thread, so it can critique and augment the answers of the models before it. I've done this for ages with the various OpenAI models inside ChatGPT, and now I can do the same with all these SOTA thinking models.

To my surprise Kimi K2 is quite sharp, and often finds errors or omissions in the thinking and analyses of its colleagues. Now I always include it in these ensembles, usually at the end to judge the preceding models and add its own "The Tenth Man" angle.


That is so sad to hear. I absolutely loved Google Play Music – especially features like saving e.g. an online Universal Music release to my "archive" and then for myself being able to actually RENAME TRACKS with e.g. wrong metadata.

That and being able to mix my own uploaded tracks with online music releases into a curated collection almost made it a viable contender to my local iTunes collection.

And then... they just removed it forever. Bastards.


Yep, YTM is/was so clearly the inferior product it's laughable. Even as a Google employee with a discount on these things (I can't remember exactly what it was), I switched to Spotify when they dropped it.

I worked on a team that wrote software for Chromecast-based devices. The YTM app didn't even support Chromecast, our own product, and their responses on bug tickets from Googlers reporting this as a problem were pretty arrogant. It was very disheartening to watch. Complete organizational dysfunction.

I think YTM has substantially improved since then, but it still has terrible recommendations, and it still bizarrely blurs between video and music content.

Google went from a company run by engineers to one run by empire-building product managers so fast, it all happened in a matter of 2-3 years.

