Hacker News | loneboat's comments

Maybe you're parsing the sentence incorrectly. Could be, "They found software about making drugs/explosives, pornography about making drugs/explosives, and articles about making drugs/explosives".

Oh then the prisoners and I have something in common.

I mean they are basically describing a chemistry textbook at that point

Pasting a Wikipedia link or saying "just ask an LLM" only helps out the one instance of someone not knowing. I did the same thing as the OP you're replying to. They're right - a brief summary in the readme would be a near-zero-effort permanent fix for people who stumble on your project and don't know what Oberon is.

Thinking independently and learning strategies to find their own way are among the most important foundations for anyone interested in technology or who wants to work in the field. If people are curious enough about a keyword or topic, they should also be willing to put in some effort to research it. Or, if that really isn’t possible, they should ask good questions. That’s generally much more promising than confronting the author of a post with accusations right off the bat.

So to explain what an apple pie is, first you must explain the universe? Every time?

Honest question. Where does one stop clarifying things?

One of the basic principles of communication is that you have a mental model of the person you're communicating with and are phrasing what you want them to understand in terms that you think they'll understand. So whenever you're writing something - anything - you should be writing with a target audience in mind, and stop explaining right around the point where you believe that your target audience doesn't need further explanation.

Of course it's normal for there to be a disconnect between your assumptions about your target audience and reality. In a real conversation this happens all the time and it's no big deal. When something's written and especially when it's printed it can be a bit more of a problem, so maybe better to err on the side of over-explaining. Also a good reason to have editors and proofreaders. But I'm rambling a bit.

In this case, the link was posted to HN by the author, so the author might have had "average HN reader" in mind. Oberon never really achieved success outside of a particular niche in academia, so unless they went to ETH Zurich I personally wouldn't expect someone - even someone in tech - to know about it.


Exactly. I knew what the link was about and didn't study at ETH Zurich. I (mistakenly?) think Oberon is that kind of "roots knowledge" shared between all of us, like Lisp or Forth. That's why I asked when one should stop clarifying things. Maybe some people need to know what a compiler is, or a VM, or a windowing system, or ...whatever.

What I mean is that with so much info at our fingertips, comments like "you should put a link about what this thing is" are needless.


> I (mistakenly?) think Oberon is that kind of "roots knowledge" shared between all of us, like Lisp or Forth.

Yes, I think that's a mistake. Lisp and Forth saw widespread commercial use, were hugely influential, and directly begat many other languages. While I'd expect most folks on here to be familiar with Pascal - you could say those same things about it - and maybe even know who Wirth is, Oberon basically saw no commercial use whatsoever and even in academia was basically only used at the school it came from. There's no real comparison.


Yes, you are likely confusing Oberon with Pascal. That is the Wirth language people usually have heard about. They may also have heard about Modula-2, but assuming that is stretching it. I was already interested in computers at the time, but I still only remember Oberon as an even bigger failure than Modula.

The problem with this is that there are so many branches of tech from 40 years ago that any one person is unlikely to be familiar with all of them.

I'm plenty familiar with the whole Modula-3 -> NeXTSTEP branch of this little tree, but the Oberon branch isn't something I've run into before.


Sorry, can you explain what ETH Zurich is? I’m not familiar with that term.

it is an Ethereum fork, named after Jan Zurich (a cousin of the famous Chief Niklaus Emil Wirth). Jan Zurich discovered a little moon on Uranus, and named it Blaise

see timeline on https://ethereum.org/ethereum-forks/


It’s also worth noting that Jan (who strictly uses the pronouns var / val) belongs to one of the most historically marginalized groups in modern tech: One-Pass Compiler Enthusiasts. They were repeatedly ostracized by the bloated LLVM cabal for stating that any build process taking longer than 50 milliseconds is a toxic social construct. The ETH fork was actually meant to fund a decentralized safe space where nobody is ever forced to use a borrow checker.

I assume you're just trolling to make a rhetorical point (apologies if not!), but FWIW:

ETH Zurich is one of the most well-regarded technical schools in the world, and arguably the most well-regarded technical school in Europe. It has many famous alumni, including Albert Einstein. I think it's fair to expect most people in tech to be familiar with the big schools in the field, even the ones in Europe, though maybe that's giving too much credit to Americans.

But maybe it's also worth pointing out some other principles of communication: ETH Zurich wasn't really the main topic of my comment, and it's OK if readers don't catch every reference; communication is invariably lossy, and as long as general meaning is conveyed that's OK! Also, given the context (the sentence "Oberon never really achieved success outside of a particular niche in academia, so unless they went to ETH Zurich...") even if the reader hadn't heard of ETH Zurich it could be reasonably inferred that ETH Zurich is an academic institution, probably in Zurich, where Oberon was successful. Part of writing is trusting that the reader is a rational person who understands how (the) language and the world work, otherwise communication becomes impossible.

Some associated ideas in the philosophy of language might be the "cooperative principle", the "principle of humanity", and the "principle of charity". I'll frankly make a muddle of trying to explain them in detail, and this reply is already too long and too snarky, so in this case I'd ask the interested reader to consult Wikipedia, the Stanford Encyclopedia of Philosophy, etc.


> even if the reader hadn't heard of ETH Zurich it could be reasonably inferred that ETH Zurich is an academic institution, probably in Zurich, where Oberon was successful. Part of writing is trusting that the reader is a rational person

The first sentence of the README says, "This project modernizes the Kernel of Oberon System 3 (version 2.3.7) by migrating it from the original Oberon Boot Loader (OBL, written in assembler) to the Multiboot specification (handled in Oberon directly in the Kernel)."

Armed with that and the headline "Oberon System 3 runs natively on Raspberry Pi 3", it can be reasonably inferred that "Oberon System 3" is an operating system (shown here to be capable of running on a Raspberry Pi). It doesn't require prior familiarity with Oberon, despite what the previous commenter said.

Neither you nor the original questioner are being particularly rational about this.


> it can be reasonably inferred that "Oberon System 3" is an operating system

"Oberon is an operating system" was indeed evident, but it's also not particularly illuminating. There are dozens of niche operating systems, why do we care about this one in particular? What does it do that other operating systems don't?


> "Oberon is an operating system" was indeed evident,

No, it is not evident: this is not correct.

Oberon is a bare-metal, self-hosted programming system. It is both a language and an OS.

> why do we care about this one in particular?

1. It is the final development in the career of Niklaus Wirth, the creator of Pascal. Pascal is the Wirthian language that had considerable commercial success.

(A dialect called the UCSD p-System was one of the original 3 OSes that IBM offered for the PC, for instance. Apple created Object Pascal, and implemented parts of the Lisa and original Mac OSes in it. In the early days of DOS, Borland Turbo Pascal was one of the leading IDEs, and then when 16-bit Windows achieved commercial success, Borland's Delphi led the way as the most sophisticated Windows IDE.)

2. It's the end of his life's work. Wirth did not stop with Pascal.

The next generation was Modula. It was a bit of a flop, but its successor, Modula-2, was a hugely influential language too. TopSpeed Modula-2 was at one time the fastest compiler of any kind for the PC.

Development did not end there.

Others did Modula-3, not Wirth. He moved on to create Oberon.

3. This is the end of the line of the single most widespread and influential family of programming languages outside of the C world.

> What does it do that other operating systems don't?

Wirth was a keen advocate of small size and simplicity.

https://cr.yp.to/bib/1995/wirth.pdf

Oberon is one of the smallest, simplest compiled languages of all time. It is also an OS, and an IDE, and a tiling, mouse-driven windowing system. The core is about 4000 lines of code.

4k LOC.

The entire core OS is smaller than the tiniest trivial shell tool on any FOSS Unix.

It is almost unbelievably tiny, it is fast, and it is self-hosting. It can run bare-metal, on multiple platforms, or as a conventional language under another OS. It has its own GUI. It can interop with other languages. You can, and people do, build complete GUI apps in Oberon.

https://blackboxframework.org/

It may be less well-known than its own ancestors but this is an important, significant language, and the final generation of a very important and very much alive dynasty.


Borland Turbo Pascal for CP/M and MS-DOS was developed by Anders Hejlsberg, who went on to develop All The Languages for Microsoft.

Perhaps more surprisingly, Turbo Modula 2 for CP/M (which was certainly surpassed by Topspeed Modula 2) was developed by Martin Odersky, who created Scala.

Throw in Robert Griesemer and his co-creation of Go, and the Wirth family tree is as influential in modern programming as it possibly could be.


> No, it is not evident: this is not correct.

It is evident. It is correct.

You aren't making this any better.


No, it is not correct, and trying to coerce it into the reductive box of an incomplete view does not help.

Comparison: you are angrily maintaining "Orange is a colour! It is right there in the rainbow! Red, orange, yellow, green, blue, indigo, violet! It's a colour! That is the set to which it belongs!"

It is true. But it is incomplete.

It is also a fruit. It belongs equally in "apple, orange, banana, pear, quince."

That is also a valid set, no less valid than the first.

It is in other sets too. The set of citrus fruits. "Lemon, orange, lime, pomelo, grapefruit."

Oberon is a programming language.

It is also a set of frameworks. They are integral.

It is also an editor.

It is also a UI design.

It is also an OS.

Any one is true but is incomplete.

There are other views besides yours, and none is privileged; yours does not invalidate the others, nor they yours. You see only one, but your view is too narrow.


I'm not sparring with you. You are wrong.

Oberon is an operating system, and "Oberon is an operating system" is a fair and accurate statement.

I don't expect you to relent on this—I'm too well acquainted. You're still wrong.


> "Oberon is an operating system" was indeed evident

> I was about 5 links deep before I figured out what Oberon actually was

You aren't being consistent.


I like how you assumed I’m American because I don’t have knowledge of international top ranking technical schools.

One of the basic principles of communication is that you have a mental model of the person you're communicating with and are phrasing what you want them to understand in terms that you think they'll understand. So whenever you're writing something - anything - you should be writing with a target audience in mind, and stop explaining right around the point where you believe that your target audience doesn't need further explanation. Not everybody lives in Europe or has knowledge of what the top technical schools are (which is a bit classist to assume tbh), and this type of Euro-centric thinking doesn’t work very well when communicating with people from other backgrounds.


Yes. Look up LLM "temperature" - it's an internal parameter that tweaks how deterministically they behave.
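As a rough sketch of the idea (illustrative only, not any vendor's actual sampling code): temperature rescales the model's output logits before a token is sampled, and at temperature 0 the choice collapses to a deterministic argmax.

```python
import math
import random

def sample_token(logits, temperature, seed=None):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy argmax (fully deterministic);
    higher temperatures flatten the distribution and add randomness.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)                           # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.Random(seed).choices(range(len(logits)), weights=probs)[0]
```

Even this toy version hints at why "temperature 0" alone doesn't guarantee bit-identical outputs in practice: the argmax is only deterministic if the logits themselves are computed identically every time.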

The models are deterministic, the inference is not.

Which is a useless distinction. When we say models in this context we mean the whole LLM + infrastructure to serve it (including caches, etc).

What does that even mean?

Even then, depending on the specific implementation, associativity of floating point could be an issue between batch sizes, between exactly how KV cache is implemented, etc.


That's still an inference time issue. If you have perfect inference with a zero temperature, the models are deterministic. There is no intrinsic randomness in software-only computing.

Floating point associativity differences can lead to non-determinism with 0 temperature if the order of operations are non-deterministic.

Anyone with reasonable experience with GPU computation who pays attention knows that even randomness in warp completion times can easily lead to non-determinism due to associativity differences.

For instance: https://www.twosigma.com/articles/a-workaround-for-non-deter...

It is very well known that CUDA isn't strongly deterministic due to these factors among practitioners.

Differences in batch sizes of inference compound these issues.

Edit: to be more specific, the non-determinism mostly comes from map-reduce style operations, where the map is deterministic, but the order that items are sent to the reduce steps (or how elements are arranged in the tree for a tree reduce) can be non-deterministic.
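A tiny self-contained demonstration of the point (nothing GPU-specific, just plain IEEE-754 doubles): the same three numbers summed in two different orders give two different answers, which is exactly what happens when a parallel reduction visits elements in a non-deterministic order.

```python
# Floating-point addition is not associative: each intermediate sum is
# rounded, so the final result depends on the order of operations.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # (0.0) + 1.0 -> 1.0
right = a + (b + c)  # b + c rounds back to -1e16, so the total is 0.0

print(left, right)   # 1.0 0.0
```

Scale this up to millions of terms accumulated across thousands of threads, and any run-to-run variation in reduction order shows up as run-to-run variation in the result.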


My point is, your inference process is the non-deterministic part; not the model itself.

Eh, if you have a PyTorch model that uses non-deterministic tensor operations like matrix multiplications, I think it is fair to call the model non-deterministic, since the matmul is not guaranteed to be deterministic - the non-determinism of a matmul isn't a bug but a feature.

See e.g. https://discuss.pytorch.org/t/why-is-torch-mm-non-determinis...


> who cares if it can store an exabyte if it takes all month to read it

To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
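For the curious, the back-of-the-envelope arithmetic (assuming 1 EB = 10^18 bytes and a 30-day month):

```python
exabyte = 1e18                       # bytes
month = 30 * 24 * 3600               # seconds in a 30-day month
rate_bytes = exabyte / month         # ~3.9e11 bytes/second
rate_tbps = rate_bytes * 8 / 1e12    # convert to terabits per second
print(f"{rate_tbps:.2f} Tbps")       # 3.09 Tbps
```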


Plus just put 32 in a striped RAID if you really need to read an exabyte a day

Eh, that doesn't math out. It's the bandwidth per storage density (or ultimately per price) that matters.

If you have great cost per byte but your bandwidth per byte is bad enough that the price per byte doesn't make up for it then you have an issue.

They've started making hard drives with multiple heads because of this issue; they increased density to the point where it's not useful to continue adding density if it doesn't come with more bandwidth.


*RAED

Or maybe RAEND


RAVED is more likely. These things aren't cheap.

What is RAVED?

I read it as "Redundant Array of Very Expensive Disks".

But if you need 1 EB, waiting a whole month for it isn't great. You'd be better off with 720 1 PB devices taking an hour in parallel.

Yes it causes problems in this increasingly narrow situation.

Massive storage that takes a month to fully read is acceptable in a wide variety of use cases. If it's cheaper than hard drives it'll get a huge amount of users.


It's notable that 'time to read/write entire device' has been creeping up for any storage device you can buy off the shelf for the past ~40 years.

Reading a floppy disk took around 30 secs for example. A whole CD took 5 mins. My whole 1TB SSD takes 10 mins.


A modern hard drive (36TB @ 280MB/s) can take more than a day. If you treat a bank of tapes as one device this can get even more extreme.
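Sketching the arithmetic behind that figure (assuming an ideal sustained sequential read at the quoted rate, with no seeks):

```python
capacity = 36e12                 # 36 TB in bytes
rate = 280e6                     # 280 MB/s sustained read
hours = capacity / rate / 3600   # time to read the whole drive
print(f"{hours:.1f} hours")      # 35.7 hours, i.e. about a day and a half
```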

Interesting, this is my first time consciously thinking about this trend.

Perhaps the need for read/write speed is bounded (before the processor, etc. becomes the limiting factor), while more capacity is only limited by price. Or maybe increasing density of storage inherently means a tradeoff with I/O speed (AFAIK, NAND flash needs to rewrite lots of data just to make a single write? Atom-scale interactions have side effects)


In long term archival use cases this is less of an issue. Especially if it’s many exabytes we’re talking about, needing to be stored for decades.

But I 100% agree with your main point about possibility vs productionisation.


Fair point, but I'd argue that "Google informed [you] of a perfectly viable alternative that might be cheaper" isn't what happened. What happened is "Google offered you Honda first, for no other reason than Honda paid them money to do so".

If you squint they may look like the same thing, but their subtle difference is important. One is a tool suggesting "Hey I see you're trying to do A, but I think B might also fit your needs", and the other is "You want A? Ok, I'll eventually point you towards A, but only after you consume this message from our sponsor."

Google's not genuinely thinking "Hey this will help the user more!" and building that into their tool - it's an ad platform that mimics being helpful, in the name of growing profits.

(That's fine for them to do btw; They're a company and they need to make money somehow.)


Just declaring "oppose this" without any explanation isn't very constructive.


Why should anyone honestly critique an app that nobody could be bothered to write?


I may be misunderstanding your message. Are you saying I couldn't be bothered to write the app?


> If the grammar rules you learned in school disagree with (any!) native speaker, the rules are wrong.

I understand the sentiment of "the language is defined by its speakers", but this statement seems a bit overblown. According to that logic, it is literally impossible for someone to be incorrect about the meaning of a word.


> it is literally impossible for someone to be incorrect about the meaning of a word.

Yeah, it's important to frame it in terms of idiolects and dialects—any given speaker has an idiolect, and that idiolect is worth describing and documenting uncritically. But that speaker also benefits from speaking a shared dialect with other speakers, and it's valuable for that speaker to be on the same page with other speakers of their dialect about definitions.

I think what OP is getting at is that it's not the role of linguistics to assign a value judgement to a given usage—there are merely benefits that speakers can derive from better understanding the dialects that they use in daily life.


I could see it being uninteresting if it were assumed that we could never interact with the outside world, but this article is discussing the polar opposite. I feel like if we could, that would be undeniably interesting and worth pursuing.


True. I assume you have to assume that the outside world is likely to have built a simulated world in its own image, more or less. I personally struggle with the idea that something you can observe is really "outside", though. But, like all theories, the measure is in what it lets you do, and it's conceivable this would be a useful theory.


I think the main concern over this is that it's baked into the OS. That's a significant difference from "Here's a 3rd party app you can install if you want this functionality." Especially from an OS vendor who (a) dominates the desktop market, and (b) absolutely loves to hide or obfuscate the ability to disable such features (which are often enabled by default).


It's part of the Setup screen, where it can be enabled or disabled. Apparently the checkbox is checked by default, but I think most users would find it useful and would find the "security concerns" trivial


The people who understand what it does and how it works are a minority. People will accept Microsoft’s recommended settings, often without even reading it. Enabling it by default is beyond irresponsible. Especially on desktops that are very commonly shared between household members. And so many other reasons.

> I think most users would find it useful

For what? Most users won’t even remember that it’s enabled. This already exists as a niche product for specific tasks, it’s by no means a universally useful thing. Especially not at 25GB of disk usage and god knows what CPU/GPU/RAM.


Most people don't understand how technology can destroy their lives until they are faced with the consequences. It's all in the name of convenience. This is why we have regulations, so that people do not prey on unsuspecting and trustful people.


If that concept amuses you, you should check out this book: https://www.amazon.com/Motel-Mysteries-David-Macaulay/dp/039...


This is a really good book, thank you.

