Just be prepared for refinements near the end. When I did mine, I had to do another six months of refinements at the end to get my bite to lock in where it needed to be. I had been expecting that day to be when I'd be done, and later learned refinements are a fairly common part of going through Invisalign.
As the sibling comment said, it's definitely worth it. Best of luck!
Not who you asked, but I think it's worth drawing the distinction between retail and corporate credit, with firms being corporate credit (i.e. we aren't talking about individuals / retail).
- bonds. Loans intended to be bought by a range of investors and traded over time. Arranged and underwritten by investment banks.
- bank loans. The classic loan. The bank takes depositor money (which the depositor can withdraw at any time!) and loans it to a person or company. The bank holds the loan.
- private credit. Like a bank loan, but they get their money from long-term investments by wealthy individuals and institutions, add bank loans for leverage, and then hold the loan.
These are mostly syndicated. The traditional difference between loans and bonds was bank versus investment bank. The modern difference is in underwriting technique, degree of syndication/securitisation and loans mostly being floating and bonds mostly being fixed.
Convergent evolution in finance is actually a pet interest of mine. It seems like it's mostly driven by regulation. But the more you stare, the more the regulation appears like a canyon wall, and the hydrology like customs and connections. I'm not sure what the underlying geology is, however. Something bigger than customs or laws, but not so grand that it becomes ethereal.
The banks get in trouble, and the government has to step in. So the government, reasonably, adds regulations and restrictions. But the law can't be really specific; it requires government employees to actually examine the bank and make decisions (e.g. about risk levels, etc).
The banks have a really large incentive to chip away at the effectiveness of the regulation. They hire lots of lawyers, consultants, notable economists, etc and just keep pushing on these rank and file gov regulators. They buy influence with politicians, and use that to pressure the regulators. They hire some of the regulators at very high pay, sending a signal to the others: play ball and a nice job awaits you.
Over time, they just wear down the regulators. The rules are interpreted to be mostly ineffective and nonsensical. Often at that point the politicians come in and just de-regulate.
The banks just have the incentive and focus to keep at it every day for years. No one else with power is paying attention.
I think Raymond Hettinger is called out specifically here because he did a well-known talk called [Modern Dictionaries](https://youtu.be/p33CVV29OG8) where, around 32:00 to 35:00, he makes the quip about how younger developers think they need new data structures to handle new problems, but eventually just end up recreating / rediscovering solutions from the 1960s.
“What has been is what will be, and what has been done is what will be done; there is nothing new under the sun.”
I think he was always reluctant to add features, and his version of Python would be slimmer, more beautiful, and maybe 'finished'. His voice is definitely not guiding contemporary Python development, which is more expansionist in terms of features.
Personally, I’ve been using a wrapper around `collections.namedtuple` as an underlying data structure to create frozen dictionaries when I’ve needed something like that for a project.
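A minimal sketch of the kind of wrapper described (the `frozendict` name and its interface are my own invention, not the commenter's actual code):

```python
# Hypothetical sketch: a read-only, hashable record built on collections.namedtuple.
from collections import namedtuple

def frozendict(**kwargs):
    """Build an immutable record from keyword arguments."""
    cls = namedtuple("FrozenDict", kwargs.keys())
    return cls(**kwargs)

config = frozendict(host="localhost", port=8080)
print(config.host)        # attribute access: 'localhost'
print(config._asdict())   # view as an ordinary dict
# config.port = 9090      # would raise AttributeError: can't set attribute
```

Note that namedtuple field names must be valid Python identifiers, which already hints at the limitation the reply below raises.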
That works if you're dealing with a known set of keys (i.e. what most statically-typed languages would call a struct). It falls down if you need something where the keys are unknowable until runtime, like a lookup table.
I do like dataclasses, though. I find them sneaking into my code more and more as time goes on. Having a declared set of properties is really useful, and it doesn't hurt either that they're syntactically nicer to use.
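For what it's worth, a tiny example of what makes them nice: a declared set of properties, with `__init__`, `__repr__`, and `__eq__` generated for you.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
print(p)                      # Point(x=1.0, y=2.0)
print(p == Point(1.0, 2.0))   # value equality for free: True
# p.x = 3.0                   # frozen=True makes this raise FrozenInstanceError
```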
Git stores all the source code in the .git directory as objects (diffs are computed on demand from those snapshots, not stored). At first glance it looks like a bunch of hashes, but tools can pull the original source back out of the objects tracked within the .git directory. Not least, the remote URL can point to their username on GitHub, and the author fields on commits can give you their email.
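To illustrate (a sketch assuming `git` is installed; the repo, URL, and email below are made up), everything written into a repo is recoverable from the .git directory alone:

```python
# Build a throwaway repo, then recover the URL, author email, and file contents.
import os
import subprocess
import tempfile

def run(*args, cwd):
    return subprocess.run(args, cwd=cwd, capture_output=True, text=True).stdout.strip()

repo = tempfile.mkdtemp()
run("git", "init", "-q", cwd=repo)
run("git", "config", "user.email", "demo@example.com", cwd=repo)
run("git", "config", "user.name", "Demo", cwd=repo)
with open(os.path.join(repo, "app.py"), "w") as f:
    f.write("print('hello')\n")
run("git", "add", "app.py", cwd=repo)
run("git", "commit", "-qm", "init", cwd=repo)
run("git", "remote", "add", "origin", "https://user@example.com/user/repo.git", cwd=repo)

# All of this comes straight back out of .git:
print(run("git", "config", "--get", "remote.origin.url", cwd=repo))  # URL with embedded username
print(run("git", "log", "--format=%ae", cwd=repo))                   # author email
print(run("git", "show", "HEAD:app.py", cwd=repo))                   # full file contents
```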
Not only that: in this case the URL had an auth secret in it. None of mine do, hence the question. I'm confused, because Git has a credential helper specifically designed to avoid storing secrets like that, or so I thought. So what tool is storing secrets in the Git remote URL?
Yes, the git directory has all current and historical versions of the files packed into it, but that's not what the OP used to get information about the scammer.
The difference between lossy and lossless compression is whether you can get a 1:1 copy of the original data back after compressing and then decompressing it.
A simple example: say you have 4 bits of data and a compression algorithm that turns them into 2 bits. If your dataset only ever contains 0000, 0011, 1100, and 1111, then this can be considered lossless compression, because we can always reconstruct the exact original data (e.g. 0011 compresses to 01 and decompresses back to 0011, 1100 compresses to 10 and decompresses back to 1100, etc). However, if our dataset later included 1101 and it got compressed to 10, the scheme is now "lossy", because 10 decompresses to 1100; that last bit was "lost".
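The toy scheme above can be written out directly (a sketch; the hardcoded fallback for unknown inputs just mimics the example, where 1101 also compresses to 10):

```python
# Toy 4-bit -> 2-bit codec. Lossless on the original four codewords,
# lossy the moment an unseen input like 1101 shows up.
ENCODE = {"0000": "00", "0011": "01", "1100": "10", "1111": "11"}
DECODE = {code: bits for bits, code in ENCODE.items()}

def compress(bits):
    # Unknown inputs collapse onto an existing code (hardcoded to match the example).
    return ENCODE.get(bits, "10")

def decompress(code):
    return DECODE[code]

print(decompress(compress("0011")))  # 0011 -- lossless round trip
print(decompress(compress("1101")))  # 1100 -- lossy: the last bit was "lost"
```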
An LLM is lossy compression because it lacks the capacity to 1:1 replicate all its input data 100% of the time. It can get quite close in some cases, sure, but it is not perfect every time. So it is considered “lossy”.