Agreed. What the western parts of this map gloss over is that the cultures there are a mix of mostly descendant European cultures (Norwegian, Irish, German, etc.) and, especially in the south, Hispanic cultures, and that mix changes sharply once you go north of Colorado.
My first thought was, "What's different about South Dakota and North Dakota?" A friend who's a geography nerd told me that much of South Dakota is really weird, isolated, and different from other states.
I feel like this is the theme of our entire country at the moment: Many powerful people seem willing to torch stuff worth billions of dollars to the country as a whole in order to squeeze out a few million for themselves.
If we had a functioning regulatory environment... Haha. Nevermind. We vote for our leaders based on how loudly they promise to hurt trans kids.
The problem with openness and anonymity is that it invites bad actors. Social media is an unsolved problem, and any platform that gets sufficiently large will be more valuable as a tool to disseminate misinformation and propaganda than as a tool for people to actually communicate freely and openly.
I think a lot of AI-generated stuff will soon be seen as cheap schlock, fake plastic knock-offs, the WalMart of ideas. Some people will use it well. Most people won’t.
The question to me is whether we will let these companies so completely undermine the financial side of the marketplace of ideas that people simply stop spending time writing (if everything’s just going to get chewed to hell by a monstrous corporation), or will only write and create content in very private and possibly purely offline settings that these AI companies have less access to.
In a sane world, I would expect guidance and legislation that would bridge the gap and attempt to create an equitable solution, so we could have amazing AI tools without crushing the original creators. But we do not live in a sane world.
Yes, if the executive branch is trustworthy. We need to send a clear signal to the Republican Party that this sort of general behavior is not acceptable to Americans.
That’s the point. When Trump disagrees with facts the facts must be destroyed. When people are actually trying to solve problems they desire more information, not less.
I think this pattern of behavior needs to be cast as incompetence and cowardice rather than immorality. Avoiding transparency, rigorous analysis, and competitive disagreement and review is a sign that one cannot make a convincing argument, or does not have courage to do so. I don't think they care about the morality of it, or see themselves as morally justified.
Afloat in what sense? Usage is down. At least for me, the vast majority of my social and professional network has moved to other platforms. And every time a public figure (journalist, politician) does post it feels like they're immediately swarmed by right-wing propaganda idiots/bots. Feels like a complete mess.
LLMs save me a lot of time as a software engineer on boilerplate work and mundane tasks that are relatively conceptually easy but annoying to actually type out in an IDE.
But I still more-or-less have to think like a software engineer. That's not going to go away. I have to make sure the code remains clean and well-organized -- which, for example, LLMs can help with, but I have to make precise requests and (most importantly) know specifically what I mean by "clean and well-organized." And I always read through and review any generated code and often tweak the output, because at the end of the day I am responsible for the code base: I need to verify quality, be able to answer questions, and do all of the usual soft-skill engineering stuff. Etc. Etc.
So do whatever fits your need. I think LLMs are a massive multiplier because I can focus on the actual engineering stuff and automate away a bunch of the boring shit.
But when I read stuff like:
"I lost all my trust in LLMs, so I wouldn't give them a big feature again. I'll do very small things like refactoring or a very small-scoped feature."
I feel like I'm hearing something like, "I decided to build a house! So I hired some house builders and told them to build me a house with three bedrooms and two bathrooms and they wound up building something that was not at all what I wanted! Why didn't they know I really liked high ceilings?"
> [LLMs] save me a ton of time doing either boilerplate work
I hear this frequently from LLM aficionados. I have a couple of questions about it:
1) If there is so much boilerplate that it takes a significant amount of coding time, why haven't you invested in abstracting it away?
2) The time spent actually writing code is not typically the bottleneck in implementing a system. How much do you really save over the development lifecycle when you have to review the LLM output in any case?
I don't know about the boilerplate part but when you are e.g. adding a new abstraction that will help simplify an existing pattern across the code base something like Copilot saves a ton of time. Write down what has to happen and why, then let the machine walk across the code base and make updates, update tests and docs, fix whatever ancillary breaks happen, etc. The real payoff is making it cheaper to do exploratory refactors and simple features so you can focus on making the code and overall design better.
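As a hypothetical sketch of the kind of abstraction-introduction refactor described above (the file names, helper name, and pattern here are invented for illustration, not taken from any real code base): many call sites repeat the same open/parse/fallback dance, one shared helper replaces them, and the mechanical rollout across call sites is exactly the part an LLM can walk through.

```python
import json
from pathlib import Path

# Before, repeated at many call sites across the code base:
#     try:
#         with open("settings.json") as f:
#             cfg = json.load(f)
#     except (OSError, json.JSONDecodeError):
#         cfg = {}

def load_json_or_default(path, default=None):
    """Shared helper; the LLM's job is the boring rollout to every call site."""
    try:
        return json.loads(Path(path).read_text())
    except (OSError, json.JSONDecodeError):
        return {} if default is None else default

# After: each call site collapses to one line.
cfg = load_json_or_default("settings.json")
```

The human decides the abstraction is right; the machine does the sweep and fixes the tests and docs that break along the way.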
That's an interesting approach. You still have to review all the changes to make sure they're correct and that the code is maintainable, though. I could see this being a net savings on a legacy code base or a brand new system still in the "sketching" phase.
Yes, one of the reasons I like Copilot over some of the terminal-based systems I've seen is that the changes are all staged for review in VS Code, so you have all the navigation and review tools and can do whatever needs to be done before committing. It saves a lot of time, even on new features. I think of it like a chainsaw: powerful, but a little bit imprecise.
I'm in a similar boat. I've only started using it more very recently, and it's really helping my "white-page syndrome" when I'm starting a new feature. I still have to fix a bunch of stuff, but I think it's easier for me to fix, tweak and refactor existing code than it is to write a new file from scratch.
Oftentimes there's a lot of repetition in the app I'm working on, and a lot of it has already been abstracted away, but we still have to import the component, its dependencies, and set up the whole thing, which is indeed pretty boring. It really helps to tell the LLM to implement something and point it to an example of the style I want.
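A minimal sketch of that "already abstracted, but still tedious to wire up" situation (the class names here are invented for illustration): the reusable component exists, but every use site still has to construct its dependencies and plug them together, and that wiring is what an LLM can fill in from one example.

```python
class HttpClient:
    """Stand-in dependency: fakes an HTTP GET."""
    def get(self, url):
        return f"GET {url}"

class Cache:
    """Stand-in dependency: trivial in-memory store."""
    def __init__(self):
        self.store = {}

class UserService:
    """The already-abstracted component; wiring it up is the boring part."""
    def __init__(self, client, cache):
        self.client = client
        self.cache = cache

    def fetch(self, user_id):
        return self.client.get(f"/users/{user_id}")

# The repetitive setup, repeated at every use site, that an LLM can
# reproduce from a single example elsewhere in the code base:
service = UserService(HttpClient(), Cache())
result = service.fetch(42)
```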
This is the killer app for LLMs for me. I used to get super bogged down in the details of what I was trying to do: I'd spend a whole afternoon and, while I'd have started on the feature, I wouldn't have much to show for it in terms of working functionality. LLMs just provide a direction to go in and "get something up" before I have to think through every little edge case and abstraction. Later, once I have a better idea of what I want, I go in and refactor by hand. But at least "it works" temporarily, and I find refactoring more enjoyable than writing fresh code anyway, primarily due to that "white page" effect you mention.