Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Giving a customer service ai the ability to configure firewall rules seems problematic

Maybe eventually they will be less suspectable to social engineering but I don't have that confidence yet

Is it still social engineering if you're talking to an AI?



Prompt injection is basically the AI version of social engineering, isn't it?


Social engineering has limits and each individual has unique vulnerabilities. It's not possible to call in and speak a single sentence compelling any agent who hears it to immediately burn the office building down.


Some human vulnerabilities are surprisingly common. A lot of scammers follow scripts and formula. Of course, to coax arson would be difficult. But life-devastating incidents like emptying out the entire bank account, leaking secrets, causing self-harm, etc. are not unheard of.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: