Receiving a full-body x-ray every day for just a week would exceed the yearly federal occupational dose limit for radiation workers. Doing this for a year would add roughly a 26% lifetime chance of developing cancer.
The yearly limit for rad workers is 5000 mrem with most receiving none. Receiving any dose is usually a cause for concern at most facilities that handle radioactive materials. A full body x-ray would dose you with about 1000 mrem. For about every 10000 mrem you receive, you gain an additional 1% chance of lifetime cancer risk. There's a reason why you wear a lead apron when getting X-rays at the doctor's office and why the technician leaves the room.
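The arithmetic above can be sketched out quickly. (One assumption on my part: the quoted ~26% figure lines up with one scan per workday, about 260 per year, rather than all 365 days.)

```python
# Back-of-envelope dose arithmetic using the figures from the comment (mrem).
ANNUAL_LIMIT_MREM = 5000          # yearly federal limit for radiation workers
SCAN_MREM = 1000                  # approximate dose of one full-body x-ray
RISK_PER_MREM = 0.01 / 10000      # ~1% added lifetime cancer risk per 10,000 mrem

# One scan per day for a week already blows past the annual limit:
week_dose = 7 * SCAN_MREM         # 7000 mrem > 5000 mrem

# Assuming a scan every workday (~260/year) gives the quoted risk figure:
year_dose = 260 * SCAN_MREM       # 260,000 mrem
added_risk = year_dose * RISK_PER_MREM  # 0.26, i.e. ~26% added lifetime risk
```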
Metal detectors would be a much more reasonable method. People who work at airports, courts, jails, some schools, and even some manufacturing facilities walk through metal detectors daily.
Even if it does, there will probably be a prolonged economic period where robots do dangerous or messy work like welding and plumbing, but with a human master guiding them from a few yards away via prompts, controllers, etc. More of a semi-autonomous power tool than a fully autonomous master delivered by drone on demand. Scalability is still a ways off.
Recall that it was the same lawnmower^W Ellison-owned CBS that pulled a 60 Minutes report on CECOT at the last minute. They didn't blame that one on the government.
Given that, I believe the higher-ups at CBS wanted this to happen, and are either colluding with the executive branch or misrepresenting the situation to shift responsibility.
"Unobjectionable editorial reasons" is Orwellian framing. This is not how journalism works, and the fact that a major news org is now being operated this way is a five alarm fire, not business as usual.
The segment was screened five times and cleared by both CBS attorneys and Standards and Practices. Correspondent Sharyn Alfonsi wrote internally that "pulling it now, after every rigorous internal check has been met, is not an editorial decision, it is a political one."
Alfonsi's team had requested comment from the White House, State Department, and DHS. They refused. Weiss then used that silence to kill the story, saying they needed "the principals on the record and on camera." As Alfonsi put it, "Government silence is a statement, not a veto."
Weiss's other objections included demanding the men be called "illegal immigrants" instead of "Venezuelan migrants" (many had applied for asylum and were not here illegally), and pushing for a Stephen Miller interview, which the administration had already declined. Under Bari Weiss's standard, the administration has a pocket veto over any story simply by not responding. Again, not how any of this has worked, ever!
The "unobjectionable editorial reasons" were 'we cannot air anything critical of this administration unless this administration responds on the record first.' Which is just prima facie absurd.
LeetCode interviews are in the spirit of filtering out charlatans who misrepresent having even basic programming fundamentals. Many interviewers take it too far, but the original motivation is essential to saving time in the hiring process. I was instantly converted after participating in the full hiring process for a junior dev, which didn't properly filter for programming skill.
Big companies may have separate hiring SWE departments where the initial interviewers don't even know what team or role you may land in, so they have to resort to something...
Why/how do you grep selectors? Seems overly optimistic to be able to guess the particular rule pattern that is applying a style. Browser tools are much more reliable.
Let's say you're thrown into a website you've never worked on before and asked to fix a styling problem. You can look in the browser tools, but the website will only be running the compiled production version, and if the team knows what they're doing there won't be source maps available.
So you've now found selectors in DevTools that you think are causing the problem, and you want to find them in the source code. In the case of many projects, that means searching through hundreds of small CSS files.
That's why you grep selectors, and that's where the pain comes in. You have to start with the most specific rules you found in DevTools, then keep deleting parts of them until you find a non-nested rule that exists in the source, yet is still specific enough that you don't have hundreds of matches to sift through.
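To sketch that peeling-back process (the selector, file name, and rule here are all made up for illustration):

```shell
# Set up a tiny fake source tree with a nested SCSS rule, so the greps run.
mkdir -p src
cat > src/nav.scss <<'EOF'
.sidebar {
  .nav-item.active > a { color: red; }
}
EOF

# DevTools shows the compiled selector: .sidebar .nav-item.active > a
# Grepping for the full compiled form finds nothing, because the source
# rule is nested and never appears flattened on one line:
grep -rn '\.sidebar \.nav-item\.active > a' src/ || echo "no match"

# Peel off the ancestor part and search for the inner fragment instead:
grep -rn '\.nav-item\.active > a' src/
```

In a real codebase you'd keep shortening the pattern (drop the `> a`, then the `.active`) until the match count becomes manageable.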
It would be great if something like ast-grep could take a CSS rule copied from DevTools and search for nested CSS that would compile to match it.
There's plenty of "high effort" market information manipulation going on, even before LLMs. Spread (justified, researched) FUD about a company your fund is shorting.
I don't think it's clear sarcasm in the sense you are making. I think GP was pointing out that doing what OP does yourself (HA git) comes with a lot of costs.
My point is that Pierre CC has really exorbitant pricing that would cost even more.