getnormality's comments | Hacker News

> To Zeilberger, believing in infinity is like believing in God. It’s an alluring idea that flatters our intuitions and helps us make sense of all sorts of phenomena. But the problem is that we cannot truly observe infinity, and so we cannot truly say what it is.

When the author says we cannot truly observe infinity, what does that mean? Infinity is a mathematical symbol we can observe. We can't observe infinitely many objects, but even if we could, it wouldn't be the same as observing infinity. You can't observe the number one by observing one stone.

I think there is some confusion in this article between symbols and what they can stand for, and I can't help but wonder if that same confusion is at the root of ideas like ultrafinitism.


> Infinity is a mathematical symbol we can observe.

This is like confusing the map for the territory.

Symbols live in syntax (like the syntax of programming languages), while mathematical concepts live in semantics. Infinity is not a symbol, it's not ∞. ∞ is the symbol we use to represent infinity.


The number 42 is also a mathematical symbol we can observe. (Or two symbols, depending on how you want to define symbol).

You can observe the symbol. You can observe 42 of some object, 42 sheep for example.

You can observe a pie chart, or an actual pie, with 42% of it missing.

You can observe a plank of wood that is 42 inches or centimeters long.

But you can't observe 42 itself.

It is not like a hill on a map, where there is a symbol, and there's an actual hill.

It is an adjective and not a noun. It's not real unless it is describing something else.

My point being that regular finite numbers are not real either. So what's wrong with infinity? Or the square root of 2, or pi?


There is a way to look at mathematics as just a bunch of rewrite rules for things on paper. It might not be particularly inspiring, but it's a valid way to look at things.
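A toy sketch of what I mean, in Python (the term encoding and the two rules here are made up for illustration, not any standard library):

    # Toy "math as rewrite rules" system: Peano numerals as nested terms,
    # addition defined by two purely symbolic rules, no actual numbers anywhere.

    ZERO = "0"

    def S(x):                     # successor term S(x)
        return ("S", x)

    def add(x, y):                # an unevaluated add(x, y) term
        return ("add", x, y)

    def rewrite(term):
        """Apply the two rules until no 'add' terms remain."""
        if isinstance(term, tuple) and term[0] == "add":
            x, y = rewrite(term[1]), rewrite(term[2])
            if x == ZERO:                             # add(0, y)    -> y
                return y
            if isinstance(x, tuple) and x[0] == "S":  # add(S(x), y) -> S(add(x, y))
                return S(rewrite(add(x[1], y)))
        return term

    two, one = S(S(ZERO)), S(ZERO)
    print(rewrite(add(two, one)))  # ('S', ('S', ('S', '0'))), i.e. 2 + 1 = 3

Nothing in that program needs the terms to "be" anything; the arithmetic is just shuffling marks around.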

I think Douglas Adams had one of the best quotes regarding observing infinity:

"Infinity itself looks flat and uninteresting. Looking up into the night sky is looking into infinity – distance is incomprehensible and therefore meaningless."


Saying infinity is a mathematical symbol we can observe is simplifying it way too much; all mathematical symbols are abstractions.

I can observe two apples. I cannot observe infinity apples.


Some might say that 2 is as made up as infinity. Let me elaborate a little: your brain, together with society, made up the abstraction "apple", and only by not distinguishing between these "sets" of atoms can you have numbers.

> some might say

Well, do you say it, or are you just playing devil's advocate? The post you are responding to seems very straightforward.

If you wanna go all philosophical, “real” might just be anything that is useful. In that way infinity is real because you can use it to do calculus. On the other hand, there are ways of doing calculus that do not involve thinking about infinity. But if you’re gonna count to three apples you pretty much have to go through “two” no matter what.


Can you observe 2.34 x 10^456789 apples?

Mathematical concepts don't have to have an obvious physical analogue. I mean, you'd find it difficult to observe minus two apples, and certainly tricky to observe i apples.

To my mind, maths is like a "what if?" puzzle and whether or not infinity makes sense in the physical world, there's still fun to be had by considering the consequences of it.

That also means that it can be interesting to consider limited number systems which don't have any concept of infinity.


It seems to me that you're the one confused?

The mathematical symbol is just a representation of a concept, it's not infinity itself, you've got it backwards.


The symbol is not the thing. The map is not the territory. Ceci n'est pas une pipe.

Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.

Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.

I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.

EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable. But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy. Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this and who should decide? Personally I don't know.


What makes you think the outsourcing firm didn't raise these concerns in email or meetings? You think these people wanted to lose jobs and income? That's irrational.

Why reflexively defend a massive tech corporation caught repeatedly violating the law?


> Why reflexively defend a massive tech corporation caught repeatedly violating the law?

Because it is the natural expansion of the quote attributed to John Steinbeck:

> Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires


There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.

Beautifully and succinctly put.

You would help conceal a crime against the people just because it's good business??

Congratulations, you have a bright future in politics and/or tech CEOing.


More like a bright future as someone's fall guy. Being ignorant enough to think that a large tech giant like Facebook would give a crap about any of those concerns makes this person too politically inept to make it anywhere.

Proactively address the issues? Are you kidding me? This is not an issue that just happened to slip by; it is 100% by design. You're fooling no one.

What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.

Huh. There you go again, thinking everyone else is an idiot. Capture of users' video data by Meta is never acceptable. It would not be acceptable for any phone, and it is not acceptable for any glasses, ever.

Saving the data for any purpose other than allowing users to access it is bad enough; allowing Meta employees or contractors to view personal videos is on a whole new level.

I don't know why people buy smart glasses. Maybe they buy them for video capture. If so, the videos go to Meta's servers and Meta might do things with them. They might be criticized for not reviewing them in certain cases. That's one reason why I wouldn't buy Meta smart glasses.

If only we had the technology to record video without sending it to Meta's servers.

Main character syndrome? Lots of people seem to act like they are in a 24/7 live stream with 50 million followers.

The main issue here is Facebook employees viewing users' private video streams (including of user nudity) without the users' knowledge.

The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? No, we have no evidence there are any extenuating circumstances here.


> I think most ML people now think of neural-network architectures as being, essentially, choices of tradeoffs that facilitate learning in one context or another when data and compute are in short supply, but not as being fundamental to learning.

Is this a practical viewpoint? Can you remove any of the specific architectural tricks used in Transformers and expect them to work about equally well?


I think this question is one of the more concrete and practical ways to attack the problem of understanding transformers. Empirically, the current architecture is the best we have for getting training to converge under gradient-descent dynamics. A different form might be possible, and even beneficial, once the core learning task is completed. Also, the requirements of iterated and continuous learning might lead to a completely different approach.


Coffee modifies physiology and cognition? You're telling me this for the first time.

The paper is about previously unknown ways coffee affects the body.

I was so surprised at this headline that I nearly leapt out of my chair!

But it says it’s the same for decaf. That is more interesting.

Been treating coffee as caffeine with aroma. Any important points about coffee itself?

Humans have known this since 45 minutes after the first drink.

If the solar panels are movable we can go back to the Middle Ages formula where some of the land is left fallow each year to improve yields!

I skimmed the article for an explanation of why this is needed, what problem it solves, and didn't find one I could follow. Is the point that we want to be able to ask for visualizations directly against tables in remote SQL databases, instead of having to first pull the data into R data frames so we can run ggplot on it? But why create a new SQL-like language? We already have a package, dbplyr, that translates between R and SQL. Wouldn't it be more direct to extend ggplot to support dbplyr tbl objects, and have ggplot generate the SQL?

Or is the idea that SQL is such a great language to write in that a lot of people will be thrilled to do their ggplots in this SQL-like language?

EDIT: OK, after looking at almost all of the documentation, I think I've finally figured it out. It's a standalone visualization app with a SQL-like API that currently has backends for DuckDB and SQLite and renders plots with Vegalite. They plan to support more backends and renderers in the future. As a commenter below said, it's supposed to help SQL specialists who don't know Python or R make visualizations.


I was quite psyched when I read this so maybe I can tell you why it's interesting to me, although I agree the announcement could have done a better job at it.

In my experience, the only thing the data fields (analysts, scientists, and engineers) share is SQL. As you said, you could do the same in R, but your project may not be written in R or Python, while it likely does use an SQL database and some engine to access the data.

Also, I've been using marimo notebooks for a lot of analysis, where it's so easy to write SQL cells against the built-in DuckDB that plotting directly from SQL would be great.

And finally, I have found Python APIs for plotting to be really difficult to remember and get used to. The amount of boilerplate for a simple scatterplot in matplotlib is ridiculous, even with an LLM. So a unified grammar within the unified query language would be pretty cool.
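To make that concrete, this is roughly the two-step dance I mean today (the database file, table, and column names are invented for the example):

    import duckdb
    import matplotlib.pyplot as plt

    # Step 1: the SQL part, which everyone on the team can read.
    con = duckdb.connect("analytics.db")
    df = con.execute("SELECT price, rating FROM products").df()

    # Step 2: the plotting boilerplate, which lives in a different world.
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.scatter(df["price"], df["rating"], alpha=0.5)
    ax.set_xlabel("price")
    ax.set_ylabel("rating")
    ax.set_title("Price vs. rating")
    fig.tight_layout()
    plt.show()

Collapsing both steps into one query is the appeal for me.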


I share your pain. You might enjoy plotnine for Python; it helps ease the pain. The only bad thing about ggplot is that once you learn it, you start to hate every other plotting system. Iteration is so fast, and it is so easy to go from a scrappy EDA plot to publication-quality plotting, that it just blows everything else out of the water.
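For a sense of how closely plotnine follows the ggplot grammar, a minimal sketch (the data and column names are placeholders):

    import pandas as pd
    from plotnine import ggplot, aes, geom_point, labs

    df = pd.DataFrame({"price": [1.0, 2.5, 4.0], "rating": [3, 4, 5]})

    p = (
        ggplot(df, aes(x="price", y="rating"))  # data and aesthetic mapping
        + geom_point(alpha=0.5)                 # layer: scatter points
        + labs(title="Price vs. rating")        # labels
    )
    p.save("scatter.png")  # or just evaluate p in a notebook cell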

But isn't this then just another tool that you're including in your project? I don't get why I would want to add this as a visualization tool to a project, if it's already using R, or Python, etc...

I mean, is it to avoid loading the full data into a dataframe/table in memory?

I just don't see what pain point this solves. ggplot solves quite a lot of this already, so I don't doubt that the authors know the domain well. I just don't see the "why" of this.


Anything to standardise some of the horrifying crap that data scientists write to visualise something.

Well, there's always going to be a dependency anyway: loading the data, making it a dataframe, visualizing it; that might be three libraries already.

In a sense I really get your complaint. It's the xkcd standards thing all over again: we now have a new competing standard.

I think for me it's not so much the ggplot connection, or the fact that I won't need a dataframe library.

It's that this might be the first piece of a standard way of plotting: no matter which backend (matplotlib, Vega, ggplot), no matter how you are getting your data (dataframes, a database), no matter where you're doing this (a Jupyter or marimo notebook, a Python script, R, heck, even Looker Studio?). You could have just one way of defining a plot. That's something I've genuinely dreamt about.

And what makes this different from yet another library API to me is that it's integrated within SQL. SQL has already won the query standardisation battle, so this is a very promising idea for visualization standardisation.


I see, that's insightful. At first sight I thought of it as a kind of novelty, extending SQL with a visual grammar to integrate with a specific plotting library. But from your comments I can now imagine it has potential as a general solution for that space between data - wherever it comes from, it can typically be queried by SQL - and its visualization.

Thinking further, though, there might be value in extracting the specs of this "grammar of graphics" from the SQL syntax and generalizing them, so other languages can implement the same interface.


I completely agree, and I think this is also where I'm quite excited. This project's connection with ggplot, which has one of the most respected grammars for plotting, means that it would be in a good position to achieve what you describe.

There’s certainly some benefit in a declarative language for creating charts from SQL. Obviously this doesn’t do anything that you can’t also do easily in R or Python / matplotlib using about the same number of lines of code. But safely sandboxing those against malicious input is difficult. Whereas with a declarative language like this you could host something where an untrusted user enters the ggsql and you give them the chart.

So it’s something. But for most uses just prompting your favorite LLM to generate the matplotlib code is much easier.


This isn't about ggplot (or any particular library) per se, it's about using a flavour of SQL with a grammar of graphics: https://en.wikipedia.org/wiki/Wilkinson%27s_Grammar_of_Graph...

What makes it interesting is the interface (SQL) coupled with the formalism (GoG). The actual visualization or runtime is an implementation detail (albeit an important one).


It seems to be for SQL users who don’t know Python or R.

I would even add that it fits into a more general trend where operations are done within SQL instead of in a script or program that would use SQL to load the data. Examples of this are DuckDB in general, and BigQuery with all its LLM and ML functions.
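For example, a whole aggregation can live inside DuckDB, with the surrounding script doing nothing but firing the query (the file, table, and column names here are made up):

    import duckdb

    # DuckDB reads the CSV, groups, and aggregates entirely in SQL;
    # Python only receives the finished result as a dataframe.
    result = duckdb.sql("""
        SELECT region,
               date_trunc('month', sold_at) AS month,
               sum(amount)                  AS revenue
        FROM read_csv_auto('sales.csv')
        GROUP BY region, month
        ORDER BY region, month
    """).df()

    print(result.head())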

Sometimes you fuck the cloud, sometimes the cloud fucks you


Maybe the old man is on to something.


Forgot to put a cache on it probably. :)


Nope


In what way is AI 2027 coming true?

AI 2027 predicted a giant model with the ability to accelerate AI research exponentially. This isn't happening.

AI 2027 didn't predict a model with superhuman zero-day finding skills. This is what's happening.

Also, I just looked through it again, and they never even predicted when AI would get good at video games. It just went straight from being bad at video games to world domination.


> Early 2026: OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

> you could think of Agent-1 as a scatterbrained employee who thrives under careful management

According to this document, 1 of the 18 Anthropic staff surveyed even said the model could completely replace an entry level researcher.

So I'd say we've reached this milestone.


In the system card they seem to dismiss this. Quotes:

> (...) Claude Mythos Preview’s gains (relative to previous models) are above the previous trend we’ve observed, but we have determined that these gains are specifically attributable to factors other than AI-accelerated R&D,

> (The main reason we have determined that Claude Mythos Preview does not cross the threshold in question is that we have been using it extensively in the course of our day-to-day work and exploring where it can automate such work, and it does not seem close to being able to substitute for Research Scientists and Research Engineers—especially relatively senior ones.

> Early claims of large AI-attributable wins have not held up. In the initial weeks of internal use, several specific claims were made that Claude Mythos Preview had independently delivered a major research contribution. When we followed up on each claim, it appeared that the contribution was real, but smaller or differently shaped than initially understood (though our focus on positive claims provides some selection bias). In some cases what looked like autonomous discovery was, on inspection, reliable execution of a human-specified approach. In others, the attribution blurred once the full timeline was accounted for.

Anthropic is making significant progress at the moment. I think this is mostly explained by the fact that a massive reservoir of compute became available to them in mid/late 2025 (the Project Rainier cluster, with 1 million Trainium2 chips).


> According to this document, 1 of the 18 Anthropic staff surveyed even said the model could completely replace an entry level researcher.

> So I'd say we've reached this milestone.

If 1 out of N=18 is our requirement for statistical significance for world-altering claims, then yeah, I think we can replace all the researchers.


Both Anthropic and OpenAI employees have been saying since about January that their latest models are contributing significantly to their frontier research. They could be exaggerating, but I don’t think they are. That combined with the high degree of autonomy and sandbox escape demonstrated by Mythos seems to me like we’re exactly on the AI 2027 trajectory.


In AI 2027, May 2026 is when the first model with professional-human hacking abilities is developed. It's currently April 2026 and Mythos just got previewed.


I think previous models could do hacking just fine.


The Mythos system card shows massive improvements over Opus in hacking (e.g. a 0.8% -> 72% in "Firefox shell exploitation"). If you thought Opus was already human-professional-level, well.


What's the professional human baseline?


It's true, though, that the cybersecurity skills put these models firmly in the "weapons" category. I can't imagine China and other major powers not scrambling to get their own equivalent models ASAP and at any cost; it's almost existential at this point. So a proper arms race between superpowers has begun.


It's a little funny that "system/model card" has progressively been stretched to the point where it's now a 250 page report and no one makes anything of it.


I was thinking the same thing, but then I decided to embrace the frustration of the image. It's reminding us that the pictures we have in our heads are kind of fragile. They don't prepare us for a live encounter with Earth from some random angle in space.


This is kind of profound.

