Couldn't agree more. I'm building open source software for the grid, contributing in a way that feels like it could truly make a difference, while building momentum for open standards. It doesn't feel like work, just creativity and problem solving. On top of that, I can just build stuff for fun. Kids want a Minecraft mod? Let's build it and learn a thing or two on the way.
For what it's worth, adjusted for inflation, electricity prices have dropped over the last 30 years. We're now seeing a reversal of that trend; its magnitude and duration remain to be seen.
It's possible, though at this level I'm not 100% sure it matters whether it's trolling or a sincerely held belief, because either way it's a pretty clear and obvious statement that "even the citizens aren't safe".
Also, he's a fucking politician. The country shouldn't be run the same way as KiwiFarms.
Everything conservatives and Trump do starts as "trolling". What you call "trolling" is actually "floating the idea and figuring out how to make it happen for the first time".
Yeah, Trump was initially “joking” about trying for a third term. Now he’s serious about it. Who would have predicted this except for every single person who has ever seen Trump speak about anything ever?
Not the person you asked, but the chlorine level is very high in my muni water so I like running it through a Brita charcoal filter. If I'm in a rush, tap is fine.
Completely agree, and it is an oversimplification even when you graph people on two axes.
In reality we all have beliefs that are formed by our "in groups". People have group beliefs formed from their religion, work, hobbies, studies, and internet consumption. These all shape our views and then get flattened into a 2-party system.
Unfortunately, people can now build their entire identity around their politics, primarily due to social media.
Wasn't that number wholly debunked? I can only find fraud numbers like 100 million. Also, Social Security savings would only impact the Social Security budget; those program funds don't go anywhere else.
I think FraudulentCoder may be falling victim to “media spin”.
There is a report [1] showing that the Social Security Administration made $72 billion in “improper payments”, a figure that includes under-payments as well as over-payments and does not necessarily imply fraud.
[2] I’m sure we all hope that the Inspector General keeps up this good work through any administration, and that each administration has to answer whatever questions that work raises.
Agreed. I am no fan of the CCP, but I have no issue with using DeepSeek since I only need it for coding, which it does quite well. I still believe Sonnet is better. DeepSeek also struggles when the context window gets big, though that might be a hardware issue.
Having said that, DeepSeek is 10 times cheaper than Sonnet and better than GPT-4o for my use cases. Models are a commodity product and it is easy enough to add a layer above them to only use them for technical questions.
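To make that "layer above them" concrete, here is a minimal sketch of such a router. The model names and the keyword heuristic are placeholders for illustration, not anything these providers ship; a real setup would more likely use a small classifier or an embedding-based check.

    # Rough sketch: route technical/coding questions to the cheap model,
    # everything else to a different one. Purely illustrative.
    TECH_HINTS = ("code", "bug", "compile", "stack trace", "function", "regex", "sql")

    def pick_model(prompt: str) -> str:
        """Return the model name to use for a given prompt."""
        p = prompt.lower()
        if any(hint in p for hint in TECH_HINTS):
            return "deepseek-chat"   # ~10x cheaper, good enough for coding
        return "claude-sonnet"       # non-technical or sensitive questions

    print(pick_model("Why does this Python function raise a KeyError?"))  # deepseek-chat
    print(pick_model("Summarize the news about the election"))            # claude-sonnet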
If my usage can help v4, I am all for it as I know it is going to help everyone and not just the CCP. Should they stop publishing the weights and models, v3 can still take you quite far.
Curious why you have to qualify this with a “no fan of the CCP” prefix. On its face, this is just a private organization, and its links to the CCP aren’t any different from, say, Foxconn’s or DJI’s or those of any of the countless other Chinese manufacturers and businesses.
You don’t invoke “I’m no fan of the CCP” before opening TikTok or buying a DJI drone or a BYD car. So why here? I ask because I’ve seen the same line repeated everywhere.
Anything that becomes valuable will become CCP property, and it looks like DeepSeek may become that. The worry right now is that people feel using DeepSeek supports the CCP, just as using TikTok does. With LLMs, though, what you get is static data, which gives you a lot of control over what knowledge you extract from it.
This is just an unfair clause set up to create jobs for people inside the system, ostensibly so they can play a supervisory role and keep companies from doing evil. In reality it has little effect, and companies still just have to abide by the law.
It mentioned not penalizing/rewarding the model for thoughts, only rewarding the answer after the thought. I am curious how backpropagation works then.
The researchers leverage existing language Chain-of-Thought data, where each sample consists of a question, reasoning steps, and the final answer. At stage 0, the model does not generate any thought tokens and is just trained to yield the reasoning traces and correct answers for the Chain-of-Thought samples. In each subsequent stage, one reasoning step is removed from the sample and thought tokens are added in its place. In the illustration above, a single thought token is added per stage in place of each removed reasoning step, but the number of thought tokens is controlled by a hyperparameter ‘c’.
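On the backpropagation question: one common way to implement "don't reward or penalize the thoughts" is to mask the loss at thought-token targets, so the cross-entropy only comes from the reasoning/answer tokens while gradients still flow back through the thought positions via attention. A minimal PyTorch sketch of that idea; THOUGHT_ID, the toy vocabulary, and the single-layer model are assumptions for illustration, not the paper's actual code.

    # Mask the loss wherever the target is a thought token, then backprop as usual.
    import torch
    import torch.nn as nn

    vocab_size, d_model, THOUGHT_ID = 100, 32, 7
    embed = nn.Embedding(vocab_size, d_model)
    block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    head = nn.Linear(d_model, vocab_size)

    # question tokens, two thought tokens, then the answer tokens
    tokens = torch.tensor([[5, 9, THOUGHT_ID, THOUGHT_ID, 42, 43]])
    inputs, targets = tokens[:, :-1], tokens[:, 1:].clone()

    # causal mask so each position only attends to earlier ones
    T = inputs.size(1)
    causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
    logits = head(block(embed(inputs), src_mask=causal))

    # Targets that are thought tokens get ignore_index, so they contribute no
    # loss term of their own; gradients still reach the thought positions
    # through attention from the later answer tokens.
    targets[targets == THOUGHT_ID] = -100
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), targets.reshape(-1), ignore_index=-100
    )
    loss.backward()  # ordinary backprop; no special handling needed

So backprop runs exactly as usual: the thought positions simply add no loss terms of their own, and whatever gradient they receive comes from the answer (and remaining reasoning) tokens that attended to them.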