I love using AI tools and they are changing my work and life in amazing ways. I cannot imagine going back.
And yet, I am increasingly concerned about the social damage caused by their widespread use and the amount of slop they generate.
Just this week:
- There was an article about a news company faking polls by asking LLMs for answers.
- My wife told me she stopped watching funny pet videos because 99% of them are now AI slop: they start out normal, then turn into someone's generated nonsense.
- A friend told me their big tech company uses AI-generated metrics as part of performance evaluation. Nobody checks them.
- Another friend told me their big tech company requires engineers to use AI-generated commit messages with a terrible signal-to-noise ratio, making version control history useless for engineers. But directors and PMs love them; they are so descriptive!
- My neighbor uses LLMs to create neighborhood meeting plans and agendas: plausible-looking PDFs citing contractors and so on. It's impossible to read through them; hallucinations and real information are mixed together, all wrapped in thousands of slop words. What is real and what is made up? I'll spend 10x more time second-guessing.
- I'm encountering more and more articles and general "content" that are AI-generated and look OK at first glance, but turn out to be slop on inspection. Why would I read LLM output on a webpage with ads, if I can ask the model myself and get better, personal answers in my preferred style?
And I am not even talking here about other ethical issues: training data, fewer junior job positions, journalists replaced by LLM-equipped contractors, etc.
LLMs make my personal and work life so much better, but social life unbearable. Is it worth the trade-off? I guess it doesn't matter at this point.
I think it remains to be seen whether the various AI tools we have today are a net-negative or net-positive for society.
Most inventions are a net positive: The steam engine, vaccines, chimneys.
A few are net-negative: grenades, leaded gasoline, asbestos insulation.
If we can no longer trust that a potential job candidate in a video call actually exists, they will have to be flown in. That's a cost. If we can no longer trust that an employee who wrote a document actually thought about it at all and must be questioned to make sure, that's a cost. Those costs will add up.
A written document or a video essay used to be proof-of-thought and now it's not. If we can't find new proofs of thought, and if AI doesn't get vastly better to the point where we can trust it blindly, then I think this will all be a net-negative.
One of the motivations to build data centers as fast as possible and improve tools as fast as possible may be to get to net-positive before it all gets banned. This article exists. The clock is ticking.
It's hilarious that people describe their anecdotal experience of being in a calorie deficit as proof that "intermittent fasting works".
That was never the question; the question was whether intermittent fasting brings additional weight-loss benefits compared to a calorie deficit with frequent meals.
My anecdotal experience from 20 years of bodybuilding and ~3 cuts a year: for cutting, I tried IF, 6 meals a day, low fat, low carb, high-fat true keto, balanced... everything works. And works equally well; this is backed by numerous studies. The only differences are the impact on health parameters (different markers worsen on low fat vs. high fat), satiety, and how easy it is for someone to sustain the diet and stay in a deficit. That depends on lifestyle and personal preferences. So my preferred way to cut is high protein, low carb, essential fats, and a ton of fiber. When building muscle I go high on everything, but balanced.
Anything beyond that is sectarianism: people bragging about their choices without having verified the claimed efficacy or benefits.
> Anything beyond that is sectarianism: people bragging about their choices without having verified the claimed efficacy or benefits.
Everybody's looking for a silver bullet and wants to advocate for their specific one by tearing competing theories down. The reason that IF works is because it's more difficult to eat at a caloric surplus when you can only fill your stomach for 8 hours a day. Full stop. There might be modest ancillary benefits but as far as weight loss it really is as simple as calories in versus calories out. There are tons of variations on this theme dependent on goals and tolerance for discomfort but simple math wins ten times out of ten.
For the layperson IF or keto or something similarly extreme is effective but difficult. It requires strict adherence to a lifestyle that impacts one's social life and makes eating prepared foods difficult. Worst of all it leads to impromptu cheat days in moments of weakness that spiral out of control and negatively affect consistency. For people trying to lead a normal life I personally think eating at 80% TDEE with 1:1:1 macros is the most sustainable - you eat at your leisure, get sufficient protein for lean muscle mass and still eat carbs for energy and fun. It's basically "eat less, have a protein shake." Combine this with some light cardio and body weight/kettlebell stuff while watching TV and you'll see great functional fitness gains in addition to quick and steady weight loss.
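For concreteness, here is the arithmetic behind "80% TDEE with 1:1:1 macros", assuming the 1:1:1 split is by calories and using a hypothetical 2500 kcal/day TDEE:

```python
tdee = 2500                 # kcal/day maintenance (hypothetical figure)
target = 0.8 * tdee         # eat at 80% of TDEE -> 2000 kcal
per_macro = target / 3      # 1:1:1 split by calories -> ~667 kcal each

protein_g = per_macro / 4   # protein and carbs: 4 kcal per gram
carbs_g = per_macro / 4
fat_g = per_macro / 9       # fat: 9 kcal per gram

print(round(protein_g), round(carbs_g), round(fat_g))  # -> 167 167 74
```

At ~167 g of protein per day this does indeed land in the "sufficient protein for lean muscle mass" range for most people.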
Of course it's hard to build an online quasi-religion around moderation so this type of thinking isn't mainstream despite its efficacy.
Totally agree, you have to figure it out for yourself. Not only do these diets affect people differently, they also affect each individual differently throughout their life. IF might be great when you are 45 but no good when you are 20.
I really struggled to get lighter a few years ago and what ended up working finally was cutting my protein way down. After repeated failures with high protein/low carb, I finally just went for low protein despite no diet recommending it. It worked great, I lost muscle but it made satiety way easier and my body naturally seemed to shift to a lighter composition.
I still don't see any diets recommending that. It seems like a useful tool, especially given that "fitness" nowadays means lifting weights and chugging protein. There are going to be a ton of guys in their 30s/40s who put on a boatload of muscle in their youth and are now struggling to get lighter using all the recommended high-protein diets. If you don't give the muscle up, satiety will make it an insane battle.
> The only difference is the impact on health parameters (different will get worse on low fat vs high fat), satiety, and how easy it is for someone to sustain the diet and stay in a deficit.
That was never the question or point. The point is that it's easier to adhere to intermittent fasting and consume fewer calories. So if you simply ran a study comparing intermittent fasting vs. general calorie restriction and didn't control for calories, intermittent fasting would win. Controlling for calories completely misses the point.
I share the sentiment. I haven't used it in a while (at work I use different languages, and in the last few years my personal coding has been only bite-sized Python scripts and Jupyter notebooks), but anytime I hop back into it, it immediately "clicks" and feels comfortable, despite having changed over the years.
A perfect language for medium-sized, relatively clean and mature personal or small-team projects.
Frictionless and pleasant: I don't have to think too much about how to express things (and still manage to write them reasonably idiomatically), it tends to support clean, encapsulated code, has a rich environment and libraries, great tools (debuggers, profilers), is safe, relatively fast, has few footguns, zero build setup, near-zero build times on small projects, makes it trivial to create a good, simple functional UI, and can get fancy and dynamic with reflection when I need "magic".
Basically, there are not many pain points that would make me rage-quit, and almost everything I want is simple to achieve.
Original author here, and it's been a while since I have read such word-salad nonsense, sorry. Why do people with no knowledge or expertise comment on articles?
The GenerateMips API constructs a mip chain by applying a box/bilinear filter (the two are equivalent for factors of two) log N times.
Trilinear interpolates across three dimensions, such as 3D textures or mip chains. It is not a downsampling method but a filtering method: it interpolates between two bilinear results, such as bilinear samples of two mip levels that were generated with "some" downsampling filter (which can be anything from box to Lanczos).
Anisotropic is a hybrid: trilinear along the shorter interpolation axis under perspective projection of a 3D asset, with multiple taps along the longer axis. (More expensive.)
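To make the first two points concrete, here is a minimal NumPy sketch (hypothetical helper names, square power-of-two image, nearest-texel fetch standing in for the bilinear taps) of a GenerateMips-style chain and a trilinear lookup across it:

```python
import numpy as np

def box_downsample(img):
    """One mip step: average each 2x2 block (a box filter, which is
    equivalent to bilinear for an exact factor-of-two reduction)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def generate_mips(img):
    """Apply the box filter log2(N) times, GenerateMips-style.
    Assumes a square power-of-two image."""
    chain = [img]
    while chain[-1].shape[0] > 1:
        chain.append(box_downsample(chain[-1]))
    return chain

def trilinear(chain, u, v, lod):
    """Trilinear filtering: lerp between samples of two adjacent mip
    levels. Nearest-texel fetch keeps the sketch short; real hardware
    does a bilinear fetch per level."""
    lo = min(int(lod), len(chain) - 1)
    hi = min(lo + 1, len(chain) - 1)
    t = lod - lo
    def fetch(level):
        img = chain[level]
        h, w = img.shape
        return img[min(int(v * h), h - 1), min(int(u * w), w - 1)]
    return (1 - t) * fetch(lo) + t * fetch(hi)

mips = generate_mips(np.arange(64, dtype=float).reshape(8, 8))
print(len(mips))  # 8x8 -> 4x4 -> 2x2 -> 1x1: 4 levels
```

The last level collapses to the image mean, which is exactly what repeated box filtering converges to.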
> Trilinear interpolates across three dimensions, such as 3D textures or mip chains
I meant trilinear interpolation across the mip chain.
> generated with "some" downsampling filter (which can be anything from box to Lanczos)
In practice, whichever method is implemented in the user-mode half of GPU drivers is pretty good.
> It is not a method for downsampling
No, but it can be applied for downsampling as well.
> under perspective projection of a 3D asset
Texture samplers don’t know or care about projections. They only take 2D texture coordinates and screen-space derivatives of those coordinates. This is precisely what enables using texture samplers to downsample images.
The only caveat: if you do that by dispatching a compute shader instead of rendering a full-screen triangle, you have to supply the screen-space derivatives manually in the arguments of the Texture2D.SampleGrad method. When doing non-uniform downsampling without perspective projection, these ddx/ddy numbers are the same for all output pixels and are trivial to compute on the CPU before dispatching the shader.
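Those constant derivatives are indeed trivial; a sketch in UV units (hypothetical helper name, assuming the destination covers the full [0,1] source range):

```python
def downsample_uv_gradients(dst_w, dst_h):
    """Constant screen-space UV derivatives to pass to SampleGrad when
    downsampling without a perspective projection: one output-pixel step
    covers 1/dst of the [0,1] UV range, identically for every output
    pixel, so they can be computed once on the CPU."""
    ddx = (1.0 / dst_w, 0.0)  # UV change per step in x
    ddy = (0.0, 1.0 / dst_h)  # UV change per step in y
    return ddx, ddy

print(downsample_uv_gradients(256, 128))
```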
> More expensive
On modern computers, the performance overhead of anisotropic sampling compared to trilinear is just barely measurable.
The problem is that your suggestion is strictly worse and unnecessarily complicated for the case discussed in the article. If you want to downsample 2x, 4x, etc., that's just one level in the MIP hierarchy; there is no need to compute the rest. The point of the article, however, is to explain how one level in that MIP chain can be computed in the first place.
If you want to downsample 3x or by a fractional factor, then interpolating between two MIP levels will give worse quality than directly sampling the original image.
Perspective (the use case for anisotropic filtering) isn't discussed in the article, but even then, the best quality will come from something like an EWA filter, not from anisotropic filtering which is designed for speed, not quality.
If someone believed they would earn 2-5x more than in academia, with full freedom to work on whatever interests them and no need to deliver value to the employer... well, let's say "OK", we have all been young and naive. But if their advisors have not adjusted those expectations, the advisors are at fault, maybe even fraudulent.
Even in elite research groups at the most prestigious companies, you are evaluated on product and company impact, which has nothing to do with how groundbreaking your research is, how many awards it gets, or how many people cite it. I had colleagues at Google Research who were bitter that I was getting promoted (doing research addressing product needs and later publishing it as "systems" papers that are frowned upon by "true" researchers), while with their highly cited theoretical papers they would get a "meets expectations" perf eval and never a promotion.
Yet your Google Research colleagues still earned way more than in academia, even without the promo.
Plus, there were quite a few places where a good publication stream did earn a promotion without any company/business impact: FAIR, Google Brain, DeepMind. Just not Google Research.
DeepMind didn't have any product impact for God knows how many years, but I bet they did have promos happening:)
You don't understand the Silicon Valley grind mindset :) I personally agree with you - I am happy working on interesting stuff, getting a good salary, and don't need a promo. Most times I switched jobs it meant a temporary lowering of my total comp and often my level. But most Googlers are obsessed with levels and promotion, talk about it constantly, and the frustration is real. They are hyper-ambitious and see level as their validation.
And if you join as a fresh PhD grad (RS or SWE), the L4 salary is OK, but not amazing compared to the cost of living there. From L6 on, it starts to be really, really good.
> I am happy working on interesting stuff, getting a good salary, and don't need a promo
People who don't contribute to the bottom line are the first to get a PIP or to be laid off. Effectively the better performers are subsidizing their salary, until the company sooner or later decides to cut dead wood.
> full freedom to work on whatever interests them, and no need to deliver value to the employer...
That was an exaggeration. No employee has full freedom, and I am sure it was expected that you do something which within some period of time, even if not immediately, has prospects for productization; or that when something becomes productizable, you would then divert some of your efforts towards that.
It wasn't an exaggeration! :)
The shock of many of my colleagues (often not even junior... sometimes professors who decided to join industry) was real: "Wait, I need to talk to product teams and ask them about their needs, requirements, trade-offs, and performance budgets? I can't just show them the 'amazing' new toy experiment I wrote a paper about, which costs 1000x their whole budget and works 50% of the time, and expect them to jump at putting it into production?" :)
They don't want to think about products or talk to product teams (but get evaluated on research that makes it into products and makes a difference there); they just want to do ivory-tower research of their own.
One of many reasons why Google invented Transformers and many components of GPT pre-training, yet ChatGPT caught them "by surprise" many years later.
Well there are a few. The Distinguished Scientists at Microsoft Research probably get to work on whatever interests them. But that is a completely different situation from a new Ph.D. joining a typical private company.
Someone correct me if this is wrong, but wasn't that pretty much the premise of the Institute for Advanced Study? Minus the very high salaries: just total intellectual freedom, with zero other commitments and distractions.
I know Feynman was somewhat critical of the IAS, and stated that the lack of accountability and commitments could set researchers up to just follow their dreams forever, and eventually end up with a writer's block that could take years to resolve.
> you are evaluated on product and company Impact, which has nothing to do with how groundbreaking your research is,
I wonder... There are some academics, really big names in their fields, who publish like crazy at some FAANG. I assume the company benefits from just having its name on their papers at top conferences.
One unique feature of Slang that sets it apart from existing shading languages is support for differentiation and gradient computation/propagation, while still cross-compiling the generated forward and backward passes to other, platform-specific shading languages.
Before, the only way to backpropagate through shader code (such as a material BRDF or lighting computation) was to either manually differentiate every function and chain the results together, or rewrite the code in another language or framework, such as PyTorch or a specialized one like Dr.Jit, and keep both versions in sync after any change.
Game developers typically don't use those; the programming models are different (SIMT kernels vs. array computations), it's a maintenance headache, and it was a significant blocker for wider adoption of data-driven techniques and ML in existing renderers and game engines.
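To make the "manually differentiate every function" burden concrete, here is a toy pure-Python version for a single diffuse term (hypothetical function names, not Slang code): every change to the forward function must be mirrored by hand in its gradient.

```python
def lambert(albedo, n_dot_l):
    """Toy diffuse shading term: albedo * max(n.l, 0)."""
    return albedo * max(n_dot_l, 0.0)

def lambert_grad(albedo, n_dot_l):
    """Hand-written backward pass. It must be kept in sync with
    lambert() manually; this is the maintenance burden that automatic
    differentiation in the shading language removes."""
    d_albedo = max(n_dot_l, 0.0)                     # d(out)/d(albedo)
    d_n_dot_l = albedo if n_dot_l > 0.0 else 0.0     # d(out)/d(n_dot_l)
    return d_albedo, d_n_dot_l

# Finite-difference check that the manual gradient is correct
eps = 1e-6
a, ndl = 0.8, 0.5
numeric = (lambert(a + eps, ndl) - lambert(a - eps, ndl)) / (2 * eps)
d_a, _ = lambert_grad(a, ndl)
print(abs(numeric - d_a) < 1e-4)  # -> True
```

Now imagine chaining dozens of such hand-derived gradients through a full BRDF and lighting pipeline, and keeping both halves in sync across refactors.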
It does!
Both platform-specific compute shaders and cross-compilation to CUDA are supported. The authors even provide basic PyTorch bindings to help use existing shader code for gradient computation and backpropagation in ML and in differentiable programming of graphics-adjacent tasks: https://github.com/shader-slang/slang-torch
(Disclaimer: this is the work of my colleagues, and I helped test-drive differentiable Slang and wrote one of the example applications/use-cases)
It's interestingly disingenuous how many people claim miraculous effects of GLP-1 agonists on all kinds of health problems, when the same problems are "simply" solved by getting into a calorie deficit and getting lean. Liver, kidneys, heart, etc. If you have non-alcoholic fatty liver disease and are obese, getting leaner will heal it. All those impressive results are on obese or diabetic people. So it is not only unsurprising, but also dishonest marketing or ignorance.
Don't get me wrong - those are miraculous drugs: the first real non-stimulant, low-side-effect appetite suppression that will help millions. But let's wait for honest research on lean people before spreading marketing about how they improve overall health.
Also, nobody mentions the need for increasing the dosage and the tolerance build-up (just check Reddit for how much people end up having to take after months of continuous use). You cannot be on it "for life".
The increasing dosage is to titrate up to a target dose, not because you gain tolerance. There are patients on GLP-1 for over a decade. Also, maintenance and weight-loss dosages are different: see the dosing charts for Ozempic vs. Wegovy, which are exactly the same drug.
Even if folks gain tolerance, that doesn't seem overly concerning. Mental health drugs also have tolerance issues, and changing medicines every few years, while challenging for the patient, is an accepted part of long-term psychiatric treatment.
Just a narrow comment, but type 2 diabetes certainly isn't limited to the obese. Many lean people develop issues with blood sugar that can't be controlled with diet alone.
A friend's son, who is an EMT, was recently diagnosed with type 2 diabetes at the age of 21. He doesn't drink or eat sweets, except on holidays, and works out five days a week. Suddenly, he started feeling sick, was vomiting, and ended up in the ER, all within three days. It can really hit you like a truck.
This is my #1 question about GLP-1: are we just seeing that humans do much, much better by being lean, versus a direct effect of the drug?
A lean current-epoch human (with our food abundance, access to modern medicine, higher standards of living, lower risk of injury, etc.) is likely to be markedly healthier than a non-lean current-epoch human, or than a lean human from a prior age when medicine, food, and the rest were worse.
> where the same problems are "simply" solved by getting on a calorie deficit and lean
Except that there apparently is mounting evidence that GLP-1 agonists also address some issues that are not generally addressed by just restricting calories. TFA touches on this briefly: "The weight loss involved with GLP-1 agonist treatment is surely a big player in many of these beneficial effects, but there seem to be some pleiotropic ones beyond what one could explain by weight loss alone."
I seem to recall seeing claims that they reduce COVID-19 mortality even controlling for BMI (possibly because they inhibit systemic inflammation), reduce alcohol consumption, and even (though I think just anecdotally) may help overcome gambling addiction.
I don't know that you have to be disingenuous to both be enthused about these medications AND wish we'd never created the super-processed, super-sugary, engineered-to-make-people-crave-and-overeat-it modern American diet. Once you've fucked with your gut biome for long enough, it's not "simple" to fix. It's incredibly difficult, both discipline- and metabolism-wise.
It's not.
a) Compression can be lossless.
b) RAW is not about storing literal photon ADC measurements. It always has "some" processing, as the data always goes through an ISP. We can obviously discuss where the processing cutoff point is, and it will differ for different applications, but typically this would include things like clipping, sharpening, or denoising. And even some pro DSLRs would remove row noise or artifacts in supposedly "RAW" files!
If you can change the exposure or WB, that is the minimum practical/useful definition of RAW.
>If you can change the exposure or WB - it is what is the minimum practical/useful definition of a RAW.
No. No it is not at all. Are you a photographer? I am not talking about processing before the photo is saved, I am talking about the compression of the saved file.
Are you trying to tell me that these are the same?
RAW
"A camera raw image file contains unprocessed or minimally processed data from the image sensor of either a digital camera, a motion picture film scanner, or other image scanner. Raw files are so named because they are not yet processed, and contain large amounts of potentially redundant data"
JPEG-XL
Lossless compression uses an algorithm to shrink the image without losing any IMPORTANT data.
Lossless compression is not about importance of data. Lossless is lossless, if the result of a roundtrip is not EXACTLY IDENTICAL then it is by definition not lossless but lossy.
Maybe you're confusing with "visually lossless" compression, which is a rather confusing euphemism for "lossy at sufficiently high quality".
JPEG XL can do both lossless and lossy. Lossless JPEG XL, like any other lossless image format, stores sample values exactly without losing anything. That is why it is called "lossless" — there is no loss whatsoever.
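The claim is easy to verify mechanically. A round trip through any lossless codec restores every byte exactly; here is the same idea with Python's zlib on stand-in data (any repetitive byte buffer works as the example):

```python
import zlib

# Stand-in for raw sensor data: a repetitive 64 KiB byte pattern
raw = bytes(range(256)) * 256

packed = zlib.compress(raw, level=9)
restored = zlib.decompress(packed)

print(restored == raw)           # -> True: bit-for-bit identical
print(len(packed) < len(raw))    # -> True: and smaller on disk
```

There is no "important vs. unimportant" data in that comparison; every single byte either survives the round trip or the codec is not lossless.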
Yes, I have been an (amateur) photographer for the last 27 years: film, DSLRs, mirrorless, mobile. And I worked on camera ISPs, both the hardware modules saving RAW files on mobile for Google Pixel and the software processing of RAWs.
"Lossless Compressed means that a Raw file is compressed like a ZIP archive file, without any loss of data. Once a losslessly compressed image is processed by post-processing software, the data is first decompressed, and you work with the data as if there had never been any compression at all. Lossless compression is the ideal choice, because all the data is fully preserved and yet the image takes up much less space.
Uncompressed – an uncompressed Raw file contains all the data, but without any sort of compression algorithm applied to it. Unless you do not have the Lossless Compressed option, you should always avoid selecting the Uncompressed option, as it results in huge image sizes."
Why make the distinction if there is no difference?
Apple is COMPRESSING the image. Period. RAW photos can be compressed, but then they are "RAW Compressed" files, not "RAW" files. Apple is not saying you are shooting RAW Compressed; it says you are shooting ProRAW photos, which is slick marketing, because everyone thinks they are shooting RAW photos, but ProRAW is not RAW. The iPhone 12 gave you a choice to shoot RAW or ProRAW, but my iPhone 13 Pro Max only allows the ProRAW option. I have no option to avoid Apple processing my photos anymore.
It is semantics, but words matter. If something were off with the compression algorithm or the processing, how would you know?
More to the point: if the difference did not matter, why does Sony go out of its way to explain it?
If a computer compresses and expands the image using an algorithm you are not getting back the same image. Period. I do not care if you perceive it to be the same, it is not the same.
> Why make the distinction if there is no difference?
There is a difference, which is that the compressed lossless version is smaller and requires some amount of processing time to actually be compressed or uncompressed. But there is zero difference in the raw camera data. After decompression, it is identical.
> If a computer compresses and expands the image using an algorithm you are not getting back the same image. Period. I do not care if you perceive it to be the same, it is not the same.
It is the same. You can check each and every bit one by one, and they will all be identical.
No, but it’s also a painting instead of a digital file, so different considerations apply (maybe the copy wouldn’t be strictly identical, maybe the value is affected by “knowing that Van Gogh is the one who applied the paint to the canvas” or by the fact that only one such copy exist), and this is therefore a false analogy.
If you copy the number written on a piece of paper to another piece of paper, is it the same number? Yes, it is, and a digital photograph is defined by the numbers that make it up. Once you have two identical copies of a file, what difference does it make which one you read the numbers from?
Or are you arguing that when the camera writes those numbers to the raw file, it’s already a different image than was read from the sensor? After all, they were in volatile memory before a copy was written to the SD card.
It's the other way around: in hearing, phase is almost irrelevant. At medium frequencies, moving your head by a few centimeters changes the phase and phase relationships of all frequencies, and we don't perceive it at all! Most audio synthesis methods work on variants of spectrograms, and phase is only approximated later (mattering mostly for transients and rapid changes in frequency content).
In images, scrambling phase yields a completely different image. A single edge has the same spectral content as pink/brown-ish noise, but the two look completely unlike one another.
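A quick NumPy sketch of that claim in 1-D: randomize the phase of a step edge and the magnitude spectrum stays exactly the same, while the edge itself is destroyed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D "edge": a step function
x = np.concatenate([np.zeros(64), np.ones(64)])
X = np.fft.fft(x)

# Keep the magnitude spectrum, replace the phase with random values
scrambled = np.abs(X) * np.exp(1j * rng.uniform(0, 2 * np.pi, X.size))
y = np.fft.ifft(scrambled)

# Identical spectral content (magnitudes match to float precision)...
print(np.allclose(np.abs(np.fft.fft(y)), np.abs(X)))  # -> True
# ...but the clean step is gone: y.real looks like noise, not an edge
```

The same experiment on a 2-D image turns recognizable structure into noise-like texture with an identical power spectrum.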
Makes sense! My impression that phase matters in audio comes from editing audio in a DAW and the like. We are very sensitive to sudden phase changes (which would be kind of like teleporting very fast from one point to another, from our head's point of view). Our ears pick them up as sudden bursts of white noise (which also makes sense, given that they look like an impulse when zoomed in a lot).
So when generating audio, I think the next chunk needs to be phase-continuous with the last chunk, whereas in images a small discontinuity in phase would just result in a noisy patch. That's also why I think it should be somewhat like video models, where sudden, small phase changes from one frame to the next give that "AI graininess" so common in current models.
I have an example audio clip in there where the phase information has been replaced with random noise, so you can perceive the effect. It certainly does matter perceptually, but it is tricky to model, and small "vocoder" models do a decent job of filling it in post-hoc.