Hacker News | psidium's comments

Ironically, we have an infringing website right now on the front-page of HN (nypost).

Wow, thank you for the explanation. Such a complex topic and yet you’ve made it simple to understand.


I wonder what the limit of quantization is before it starts to destroy the logic encoded in the weights?


Is it correct to rephrase PolarQuant as "embedding vectors, but as complex numbers," since polar vectors are represented with complex numbers in other fields of application? Would this open a new field of transformations using complex-number arithmetic?

Edit: I believe I have my math concepts all wrong. Vectors are currently represented with xyz coordinates, so that can be our complex-number equivalent. This paper saves memory by 'simply' transforming this "complex number" into its polar form (radius/magnitude and angle). Since the angle + radius use fewer bytes, you then always refer to that polar form instead of the regular vector/complex-number representation. If I have my concepts correct, this idea is simple and genius.

Edit 2: I may be missing the forest for the trees here. I’ll try to learn more from the actual sources if I can.
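To make the edit above concrete: this is not the paper's actual algorithm, just a toy sketch of the general idea being described, i.e. converting a 2-D rectangular coordinate pair to polar form and storing the angle in a handful of bits. All function names and the bit width are illustrative assumptions.

```python
import math

def to_polar(x, y):
    """Convert a 2-D coordinate pair to polar form (radius, angle)."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)  # angle in [-pi, pi]
    return r, theta

def quantize_angle(theta, bits=4):
    """Map the angle onto 2**bits discrete levels (a small integer index)."""
    levels = 2 ** bits
    return round((theta + math.pi) / (2 * math.pi) * levels) % levels

def dequantize_angle(idx, bits=4):
    """Recover an approximate angle from the stored index."""
    levels = 2 ** bits
    return idx * 2 * math.pi / levels - math.pi

# The classic 3-4-5 triangle: radius is exactly 5.
r, theta = to_polar(3.0, 4.0)
idx = quantize_angle(theta)           # stored in 4 bits instead of a float
theta_hat = dequantize_angle(idx)
x_hat, y_hat = r * math.cos(theta_hat), r * math.sin(theta_hat)
```

The reconstruction (x_hat, y_hat) is close to the original (3, 4) but not exact; the accuracy/memory trade-off is set by how many bits the angle gets.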


I work at an enterprise tech company that has kind of a monopoly on its market. The hate I get when I mention I work there is so big… I can only imagine what an MS Teams dev would get these days. The worst is when they complain about the UI… when I'm one of the few frontend-focused devs there.

I've had a government worker stop processing my request and start complaining about the product I build. I lost a good half an hour trying to understand their bug, but we didn't get anywhere.


Yep, bugs are already just another cost of doing business for companies that aren’t user-focused. We can expect buggier code from now on. Especially for software where the users aren’t the ones buying it.

Disclaimer because I sound pessimistic: I do use a lot of AI to write code.

I do feel behind on the usage of it.


I really wish we would shift back towards quality and reliability being major selling points in software. There's only a handful of projects I'm aware of that emphasize it, and the two I use are both pleasures: Obsidian (note app) and Linear (ticket tracking).


This was 2008 when they were fighting for titles still (turns out they were robbed and the FIA knew about it and let it play out).

That said, I wonder what Jonathan Wheatley would be able to achieve in a task like this, since he had the Red Bull team maintaining consistent sub-2-second pit stops and he significantly quickened Sauber's this year.


To play devil's advocate a bit, Ferrari won both championships in 2007 and the constructors' in 2008, so it's hard to say they were robbed when they actually won. Massa, though, is another story.


They made sure it wouldn’t happen again though


2008 -> 2021 bookends


I like this. I have crafted a Claude Code docker container to similar effect. My problem is that my env has intranet access all the time (and direct access to our staging environment), and I don't want a coding agent that could go rogue having access to those systems. I did manage to spin up an iptables-based firewall that blocks all requests unless they're going to the IPs I allowlist on container start (I was inspired by the sandbox docs that Anthropic provides). My problem right now is that some things my company uses are behind Akamai, so a dig lookup + iptables allow does not work. I'll probably have to figure out some sort of sidecar proxy that allows requests on the fly instead of dig+iptables.
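A minimal sketch of the dig + iptables approach described above, under the assumption that the container start script resolves each allowlisted hostname and appends ACCEPT rules before a final default-deny DROP. Function names are illustrative; the commands would need root to actually run. (This is also where Akamai breaks things: the CDN rotates IPs, so the addresses resolved at container start can go stale.)

```python
import socket

def resolve_ips(hostname):
    """Resolve a hostname to its current IPv4 addresses, like `dig +short`."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
    return sorted({info[4][0] for info in infos})

def allow_rules(ip_list, chain="OUTPUT"):
    """Build iptables commands: ACCEPT each allowlisted IP, then DROP the rest.

    The DROP rule must come last, since iptables matches top-down.
    """
    cmds = [f"iptables -A {chain} -d {ip} -j ACCEPT" for ip in ip_list]
    cmds.append(f"iptables -A {chain} -j DROP")
    return cmds

# At container start, something like:
#   for cmd in allow_rules(resolve_ips("api.anthropic.com")):
#       subprocess.run(cmd.split(), check=True)   # requires root / NET_ADMIN
```

A sidecar proxy sidesteps the stale-IP problem because it can make the allow/deny decision per request, by hostname (e.g. from the TLS SNI or the HTTP CONNECT target) rather than by pre-resolved IP.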


So they automated most of the production in order to compete with cheap labor. It didn't bring back manufacturing jobs as we think of them: it did something better.


Yes, but a visa stamp renewal is a visa "application," while the document that allows the application is the "petition," which is the word used in this text and the step requiring the payment.


"Section 1. Restriction on Entry. (a) ... the entry into the United States..., is restricted, except for those aliens whose petitions are accompanied or supplemented by a payment of $100,000".

OK, if I consider this interpretation, which of the following do you think will apply to already-approved H-1B petitioners?

1. Existing H-1B holders can amend their already-approved petition by "supplementing a payment" to become eligible for a visa and re-entry.

2. It's not possible to amend an already-approved H-1B petition, so existing H-1B holders can never satisfy the requirement and cannot re-enter on an H-1B visa anymore.

3. This EO is not retrospective, so already-approved H-1B petitioners (with or without a visa) are fine.


I'm not a lawyer at all and I have no real idea. My guess is that existing visas are already printed, and hence there is no petition anymore; you have a status or visa already. Since the old petition didn't require the payment, you don't need to show proof of payment now. But lol if I actually know how this works; it can be anyone's guess.

My guess is that if this goes forward, new H-1B visas petitioned for while the worker is outside the US will have a line saying "must show proof of payment" or something like that, while petitions filed while in the US won't have that line on their visa stamp.


I don't have the data, but I assume the corpus available to train an LLM is mostly in English, written by Americans and their western counterparts. If we're training the LLMs to sound similar to the training data, I imagine the responses have to match that worldview.

My anecdote is that before LLMs I would default to searching Google in English instead of my own native language, simply because there was so much more content in English to be found that would help me.

And here I am producing novel sentences in English to respond to your message, further continuing the cycle where English is the main language to search and do things.


In my experience, ChatGPT, at least, seems to have had multiple languages in its training corpus. I am guessing this based on its interaction with me in a different language, where it changed English idioms like "short and sweet" to analogous versions in that language that were not direct translations.

But my guess is that the data sets used for the other languages are smaller (and actually, even if it had perfect access to every single piece of data on the internet, that would still be true, due to the astonishing quantity of English-language data out there compared to the rest; your comment validates that). With less data, one would expect poorer performance on all metrics for any non-Anglophone place, including the "cultural worldview" metric.


And the RLHF was directed by Californians, and so the "values" are likely very California.


English is the lingua franca ;-)

