Hacker News | RyanShook's comments

I’m also trying to see which one makes more sense. Discussion about rtk started today: https://news.ycombinator.com/item?id=47189599

What’s the best way to sync Obsidian without upgrading to their paid tier?

There is a self-hosted live sync plugin. It's rough around the edges but it mostly works and is actively maintained, if you are willing to self-host a sync server.

I say mostly works, because there are a lot of "gotchas", and the configuration and setup are a bit intimidating on the client side (the server is simple to host).

I used it for a while and it was fine, but I decided the cost of a coffee per month is worth not having to maintain it, and I switched to paying for their sync service.

However, there is also a git sync plugin that works really nicely. But it is not a real-time sync and it is not supported on mobile (officially). I mainly use that as a way to keep long running backups of my vaults in a self-hosted gitea instance (the default paid tier only keeps one month of history).
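For anyone considering the self-hosted route: the self-hosted live sync plugin syncs through a CouchDB instance, and hosting one is indeed the simple part. A minimal sketch with docker compose might look like this (image tag, ports, and credentials are placeholders, not a recommended production setup):

```yaml
# Hypothetical compose file for a CouchDB backend of the kind the
# self-hosted live sync plugin talks to. Change the credentials.
services:
  couchdb:
    image: couchdb:3
    environment:
      COUCHDB_USER: obsidian        # placeholder admin user
      COUCHDB_PASSWORD: change-me   # placeholder password
    ports:
      - "5984:5984"
    volumes:
      - ./couchdb-data:/opt/couchdb/data
    restart: unless-stopped
```

The intimidating part is the client-side configuration, which this doesn't help with.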



For my work notes, which are not allowed to be stored outside company resources, I have set up a git repo and use a plugin that auto commits.

It does not work well for sharing to a mobile env but works great for desktop.
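For anyone curious what an auto-commit setup amounts to, it is essentially "stage everything and commit with a timestamped message, on a timer". A toy sketch (the vault path and message format are invented, not any particular plugin's behavior):

```python
"""Toy sketch of auto-committing a notes vault: stage everything and
commit with a timestamped message. Vault path is a placeholder."""
import subprocess
from datetime import datetime, timezone

VAULT = "~/notes"  # hypothetical vault path (must already be a git repo)

def commit_message(now: datetime) -> str:
    # Deterministic message, e.g. "auto-commit 2024-01-02 03:04"
    return "auto-commit " + now.strftime("%Y-%m-%d %H:%M")

def auto_commit(vault: str) -> None:
    subprocess.run(["git", "-C", vault, "add", "-A"], check=True)
    # `git commit` exits non-zero when there is nothing to commit,
    # so we deliberately don't pass check=True here.
    subprocess.run(
        ["git", "-C", vault, "commit", "-m",
         commit_message(datetime.now(timezone.utc))]
    )

# A plugin or cron job would call auto_commit(VAULT) every few minutes,
# and optionally `git push` afterwards.
```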


If you’re on Apple devices only, then iCloud sync is free and works on all devices.

I no longer use Obsidian, so not sure what’s the best option for e.g. Linux <-> iOS sync except their service.


I'm quite fond of the obsidian-git plugin and syncing to a private Forgejo instance.

I use syncthing to sync my notes between my PC and Laptop. It works pretty well.

What I don’t understand about policy violations is why Google never warns the user before banning. A simple alert or email would reduce so much frustration on the part of users and so much overhead for Google.

ToS change frequently and it’s not really fair to assume the user knows what is and is not correct use of tokens.


I think from their end, they see so many more malicious users (e.g. spam accounts) that it's not worth providing a gentle warning before a ban. There may have been thousands of accounts created by Chinese companies for distillation[0], and Google initially couldn't distinguish those from genuine users who were merely using a third-party tool with their Antigravity token.

Like in a similar vein, Instagram sometimes randomly bans genuine users without appeal, probably because they deal with thousands more spam accounts that don't deserve a warning/appeals process.

[0] Like as Anthropic reported: https://www.anthropic.com/news/detecting-and-preventing-dist...


This is where mechanisms like ZKP (zero-knowledge proofs) + CBA built from a government ID would allow a company to distinguish genuine users, even through third parties, all without exposing actual identity.

Not just Google. This seems to be the default for most tech giants. I was banned on Facebook for an unknown reason, not provided any explanation, and given zero recourse. Had to resort to reaching out to a friend who worked there.

It is very easy to understand -- Google loses nothing by acting this way. They despise these users (and users in general by not providing any meaningful customer service), so it's natural they just cut off access completely.

If you think about how people's entire Google accounts get banned, apparently without violating any terms and without the ability to talk to someone or appeal, this feels like almost nothing.


Pretty sure insider trading on prediction markets is a feature not a bug. These cases are laughable compared to the true size of insider trading volume on platforms like Kalshi.

The whole article reads as virtue signaling to me. Anthropic already has large defense contracts. Their models are already being used by the military. There's really no statement here.

The notion that it's bad to signal virtue is one of the crazier propaganda efforts I've seen over the last 20 years or so.

It’s a manipulative tactic. Businesses have no soul and no conscience.

It's arguable that businesses are subject to the same morality-inducing processes that humans are. For example, as a human (with a soul?) what is at risk when we do something immoral? I see it to be a reputational cost at the highest level. Morality could be viewed from the perspective that it increases predictability/coherence in society (generates less heat).

If societal feedback is the only thing keeping a human from deviating in catastrophic ways, that’s what we call a sociopath.

The humans working there do. To state otherwise is to absolve those humans of any responsibility.

Did I state otherwise though?

Did I say you stated otherwise?

How is it virtue signalling when sticking by these principles risks their entire business being destroyed by either being declared a supply chain risk or nationalized?

A company being asked to violate their virtues refuses, and then communicates that to reestablish their commitment to said virtues?

Tell me more about what they should do if a virtue signal in such a situation is a nothing statement.


Isn't it nice to have virtues to signal though? In saying that, you're saying you don't have any worth signaling over.

Not when your actions don’t align with your professed virtues.

Bot comment

So far my experience with skills is that they slow down or confuse agents unless you as the user understand what the skill actually contains and how it works. In general I would rather install a CLI tool and explain to the agent how I want it used vs. trying to get the agent to use a folder of instructions that I don't really understand what's inside.

Most LLM "harnessing" seems very lazy and bolted on. You can build much more robustly by leveraging a more complex application layer where you can manage state, but I guess people struggle with building that.

Common failure mode I've observed is people building a stateful harness for the LLM and then forgetting to tell the LLM about it. Leads to funny/disturbing results whenever the two "desync" in some way.

Example: a plan/act division, with the harness keeping state of which mode is active, and while in "plan mode", removing/disabling tools that can write data. Cue a mishandled timeout or a UI bug that prevents switching to "act mode", and suddenly the agent is spinning for 10 minutes questioning the nature of their reality, as the basic tools it needs to write code inexplicably ceased to exist, then opting for empirical experimentation and eventually figuring out a way to reimplement "search/replace" using shell calls or Python or whatever alternative wasn't properly sandboxed by the harness writers...

Part of this is just bugs in code, but what irks me is watching the LLM getting gaslighted or plain confused by rules of reality changing underneath it, all because the harness state wasn't made observable to the agent, or someone couldn't be arsed to have their error messages and security policies provide feedback to the LLM and not just the user.
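The fix being described is mostly a matter of having one source of truth: derive both the tool list and the prompt text from the same mode variable, so what the model is told always matches what the harness enforces. A toy sketch (all names invented, not any real framework):

```python
"""Toy plan/act harness where the mode both gates the tool list AND is
surfaced to the model in the system prompt, so they can't desync."""
from dataclasses import dataclass

WRITE_TOOLS = {"write_file", "run_shell"}
ALL_TOOLS = {"read_file", "search", "write_file", "run_shell"}

@dataclass
class Harness:
    mode: str = "plan"  # "plan" or "act"

    def available_tools(self) -> set:
        # Tools are derived from the mode, never stored separately.
        if self.mode == "plan":
            return ALL_TOOLS - WRITE_TOOLS
        return ALL_TOOLS

    def system_prompt(self) -> str:
        # The same mode that gates the tools is stated to the model,
        # instead of tools silently vanishing.
        suffix = " Writes are disabled until act mode." if self.mode == "plan" else ""
        return (
            f"You are in {self.mode} mode. "
            f"Available tools: {sorted(self.available_tools())}." + suffix
        )

h = Harness(mode="plan")
assert "write_file" not in h.available_tools()
assert "plan mode" in h.system_prompt()
```

Error messages and policy denials should get the same treatment: whatever the harness refuses, it should say so in the transcript, not just in the user-facing log.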


> So far my experience with skills is that they slow down or confuse agents unless you as the user understand what the skill actually contains and how it works. In general I would rather install a CLI tool and explain to the agent how I want it used vs. trying to get the agent to use a folder of instructions that I don't really understand what's inside.

For Claude Code I add the tooling into either CLAUDE.md or .claude/INSTRUCTIONS.md which Claude reads when you start a new instance. If you update it, you MUST ask Claude to reread the file so it knows the full instructions.
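For concreteness, such an entry can be short. A hypothetical example (the commands and paths here are invented, substitute your own):

```markdown
## Tools

- `rg` (ripgrep) is installed; use it instead of `grep` for searching.
- Run tests with `make test`; do not invoke the test runner directly.
- After editing anything under `migrations/`, run `make check-migrations`.
```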


I mean, yes. You should do exactly that: instruct an agent on how to do something you understand in terms you can explain.

Putting that in a `.md` file just means you don’t need to do it twice.


This is awesome, thanks for sharing. Seems especially well suited to phone cancellations but imagine negotiations being somewhat dicey due to commitments required. What model are you using for voice? Do you let the user select gender? Do you provide a summary to the user of the conversation after it’s done?

We use ElevenLabs for voice synthesis and Twilio for call control. IVR navigation is handled by our own rule-based engine, and once a human is reached we use LLMs, including Claude, for the negotiation conversation.

We default to negotiating within the existing plan. No plan changes unless the user explicitly allows it. In practice, there are often retention or loyalty discounts available without changing the plan structure.

Yes, users get a post call summary.

Users can select voice characteristics including gender, and we’re adding optional voice cloning.
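A rule-based IVR navigator of the kind described can, at its simplest, be keyword-to-keypress rules applied to the transcript of each menu prompt. A toy sketch (the rules and fallback here are invented for illustration; a real engine would be far larger):

```python
"""Toy rule-based IVR navigator: match keywords in a transcribed menu
prompt and decide which DTMF digit to press. Rules are invented."""

# Ordered rules: the first matching keyword wins.
RULES = [
    ("cancel", "3"),
    ("billing", "2"),
    ("representative", "0"),
    ("existing customer", "1"),
]

FALLBACK = "0"  # when nothing matches, press 0 and hope for a human

def choose_keypress(prompt_transcript: str) -> str:
    text = prompt_transcript.lower()
    for keyword, digit in RULES:
        if keyword in text:
            return digit
    return FALLBACK
```

For example, `choose_keypress("Press 3 to cancel your service")` returns `"3"`, and an unrecognized prompt falls back to `"0"`.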


I always wonder how many zero-days exist on purpose…

I've heard this sentiment a lot, that governments/secret agencies/whoever create zero-days intentionally, for their own use.

This is an interesting thought to me (like, how does one create a zero-day that doesn't look intentional?), but the more I think about it, the more I believe it isn't really necessary. There are enough faulty humans and memory-unsafe languages in the loop that there will always be a zero-day somewhere; you just need to find it.

(this isn't to say something like the NSA has never created or ordered the creation of a backdoor - I just don't think it would be in the form of an "unintentional" zero-day exploit)


I'm not sure that governments actually create them, not prolifically at least. There's been some state actor influence over the years, for sure.

However, exploits that are known (only) by a state actor would most definitely be a closely guarded secret. It's only convenient for a state to release information about an exploit when either it's been made public or it has more consequences for not releasing.

So yes, exactly what you said. It's easier to find the exploits than to create them yourself. By extrapolation, you would have to assume that each state maintains its set of secret exploits, possibly never getting to use them for fear of the other side knowing of their existence. Cat & Mouse, Spy vs Spy for sure.


The NSA surely has ordered a backdoor.

>In December 2013, a Reuters news article alleged that in 2004, before NIST standardized Dual_EC_DRBG, NSA paid RSA Security $10 million in a secret deal to use Dual_EC_DRBG as the default in the RSA BSAFE cryptography library

https://en.wikipedia.org/wiki/Dual_EC_DRBG
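The reason Dual_EC_DRBG is widely believed to be a backdoor is structural: each output leaks enough to recover the next internal state for anyone who knows the discrete log relating the two public constants P and Q. A toy analogue using modular exponentiation instead of elliptic-curve points shows the shape of it (parameters are tiny and insecure on purpose, and the real design also truncates output bits, which an attacker brute-forces):

```python
"""Toy analogue of the alleged Dual_EC_DRBG trapdoor over integers
mod p. Only the structure matters; nothing here is secure."""

p = 2**13 - 1   # small prime modulus (toy!)
Q = 7           # public constant
e = 1234        # the designer's secret trapdoor
P = pow(Q, e, p)  # second public constant, secretly P = Q^e mod p

def step(state: int):
    """One PRNG step: emit an output and advance the state."""
    output = pow(Q, state, p)
    next_state = pow(P, state, p)
    return output, next_state

# An honest user runs the generator.
s0 = 4242
out1, s1 = step(s0)
out2, _ = step(s1)

# The attacker sees only out1, but knows e:
#   out1^e = Q^(s*e) = (Q^e)^s = P^s = the next internal state.
recovered_s1 = pow(out1, e, p)
assert recovered_s1 == s1
assert pow(Q, recovered_s1, p) == out2  # predicts the next output
```

Without the trapdoor exponent, recovering the state from an output is a discrete-log problem; with it, one output breaks the stream forever.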


I think you are right that the shady actors pretty much can use existing bugs.

But you are also right that this is not the only way they work. With the XZ Utils backdoor (2024), we normal nerds got an interesting glimpse into how they create a zero-day. It was luckily discovered by an American developer who wasn't looking for zero-days, just debugging a performance problem.


It definitely feels like Claude is pulling ahead right now. ChatGPT is much more generous with their tokens but Claude's responses are consistently better when using models of the same generation.

When both decide to stop subsidized plans, only OpenAI will be somewhat affordable.

Based on what? Why is one more affordable over another? Substantiating your claim would provide a better discussion.
