
That's totally shit.

It's also why we invoice and take wire payments rather than storing CC details. There's just so much that can go wrong.

Also, PKI is shit for this sort of thing. As demonstrated, the moment a key is compromised, the whole system falls like a house of cards. For the non-believers, why else would there be certificate revocation lists and periodic root CA updates for Windows...


Using a processor that stores the card number outside of your infrastructure (e.g. Stripe) can also be helpful.


Even better is to use someone that isn't your payment processor, so that should you need to change payment processors you don't also have to re-acquire all the billing info from your customers. You can use Stripe today, PayPal tomorrow, and Braintree the day after if that's what works best for your business.

Card vaulting as a service: https://spreedly.com/ ($10/mo for up to 5000 cards)


Isn't that just adding one more point of failure? I don't trust myself, I barely trust Stripe or Paypal, and I've not even heard of spreedly.


There are always going to be single points of failure, but which is more likely: that you want or have to change payment processors (you've been terminated, your fees have gone up, you want to switch to a lower-cost provider), or that you want to change flat-rate vaulting services? Plus, Spreedly will give you your data if you leave, whereas there is no way to get stored billing info out of most payment processors.
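For anyone unfamiliar with how vaulting decouples a merchant from any single processor, here is a toy Python sketch of the idea. Everything here (the `CardVault` class, the `tok_` prefix) is invented for illustration and bears no relation to Spreedly's actual API:

```python
import secrets

class CardVault:
    """Toy card vault: the merchant keeps only an opaque token;
    the vault holds the real card number (PAN)."""

    def __init__(self):
        self._store = {}  # token -> PAN, lives only inside the vault

    def tokenize(self, pan: str) -> str:
        """Exchange a card number for a random, meaningless token."""
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = pan
        return token

    def charge(self, token: str, cents: int) -> bool:
        """A real vault would forward the PAN to whichever processor
        you point it at; here we just check the token resolves."""
        return token in self._store

vault = CardVault()
token = vault.tokenize("4242424242424242")
# The merchant database stores only `token` -- a breach of the merchant
# leaks no card numbers, and switching processors means repointing the
# vault, not re-collecting every customer's card.
assert vault.charge(token, 999)
```

The design point is that the token is random, so nothing about the card can be recovered from it without the vault's own store.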


Until they have a security breach.


Better to rely on someone whose sole job is securing that info than to do it yourself.


Storing credit card info just makes you a bigger target. If you're a small company, better to let someone else store the card info and let them be the target.

Also, you're fined by the credit card companies if you lose card information. I believe it's a per-card fine, so it gets expensive really quickly.

Actually I don't get why any company would choose to store credit card information, when most payment providers will do it for you.


Stripe is amazing... I trust them. If someone hacks me, awesome: you got password hashes and Stripe customer keys, all worthless.


Not exactly worthless, depending on the hack someone could still charge an awful lot to your customers and make you have a bad day.

But yes, significantly better than other situations.


True, but I'd rather fix the problem that let them in, force a password reset, and delete all customer keys and require them to create new ones than be like "uhhhh, our data was hacked and your credit card is safely encrypted... but we had the encryption key on the server too, oops"


But you still might need to consider data residency issues and making your customers aware of where their data is being stored.


Yes

But if that happens, it's not your responsibility (at least not 100%); it's theirs


Following the traditional responsibility/accountability dichotomy: they are responsible for storing the card number securely, but you are accountable when something goes wrong (because you gave them that task).

Much like Linode are responsible for hosting my client's site, but I (sigh) am accountable when something goes wrong.


In what ways are wire payments better than using credit cards? With wire payments, aren't you using the actual bank account numbers along with routing numbers, which is also very sensitive information?

Also, I don't think fraudulent wire payments/transfers are reversible, though I'm not sure.


They're not. You're right.

Wire transfers often can't be undone once they happen (and are accepted by the receiving bank). That's why there's so much verification involved in wire transfers. (I helped develop the second-factor authentication used for authenticating wire transfers at a financial company.)

Credit card charges can be reversed.

http://www.reba.net/news/wtransfer
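The parent mentions building second-factor authentication for wire transfers. One common building block for that kind of thing is the RFC 4226/6238 one-time password; here is a minimal standard-library Python sketch of it (an illustration of the general technique, not the poster's actual system):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: HOTP over the current 30-second window."""
    return hotp(secret, int(time.time() // step), digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

The bank and the customer's token share the secret; the wire is only released when the customer quotes the code for the current time window.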


Spot on. This is why I'm still a fan of the old directory style systems like dmoz.


You have obviously never used Lotus Notes in the early '00s then!

Same grade turd, just with a bigger price tag and more consultants than you can shake a shitty stick at.

Nostalgia here:

http://coderjournal.com/2008/02/lotus-notes-aol-corporate-wo...

vom


Ah, Notes; the best UI, bar none. I remember discovering the mail window had NO horizontal scrollbar as I moved the cursor down a line and it skipped to the end of a truly epic set of Received: headers, all on one line, with NO WAY TO GET BACK except holding down the left-cursor key for what felt like an hour...

I later discovered the option to turn the optional horizontal scrollbar back on, in a four level deep menu somewhere. Rage.


I know this is WAY too late to be helpful, but I wonder if hitting the HOME key would have put you back at the start of the line.

Either way, completely unacceptable UI for sure.


Regarding 1 (pair programming), I find that we work well with all our own personalities but not other people's :)


What buttons?

(adblock)


Using Ghostery here... my initial perception of a site is always based partially on how many entries are blocked on load. I've seen pages with 20+ calls to outside sharing services; at that point, a little extra cynicism kicks in.


Agreed. The number of things Ghostery blocks seems inversely proportional to the utility value of the site!


Serial killer? :)


Agreed.

We've got them on Windows too - .msi files :)

The main advantage of standalone packages is that I can choose whatever version of package XYZ I want and push it out, rather than being stuck with the version the repository ships until the maintainer updates it, I patch it myself, or I argue with some shoddy backport.


To me the main advantage of standalone packages (thanks for reminding me of what they're called) is that they just work (to borrow a phrase from Apple's fan club).

Having to solve installation/compilation/dependency issues is just a waste of time when those problems don't need to exist at all (things can just be packaged with all their dependencies and dumped onto another machine).

I also like that .dmg files are self-contained installers that don't touch any files outside the ApplicationName.app folder they install to, so to uninstall something installed from a .dmg, all you need to do is delete the ApplicationName.app "file"/folder.

There's nothing advanced that Mac OS X is doing here. See the Ruby Enterprise Edition installer for how nice the world could be if only there were a change in the way most people think about distributing software in the Linux world. REE installs to its own folder and doesn't touch any system files; it also needs very little from the system besides the compiler (so no dependencies afaik). That's why it just works every time I install it. Now contrast that with ImageMagick.


> to uninstall something installed from a .dmg file all you need to do is delete the ApplicationName.app "file"/folder.

AppCleaner / AppZapper / AppDelete / CleanApp / etc beg to differ. A lot of packages leave a lot of crap behind that doesn't get removed with the .app directory itself.


That's fine but for some of us, connectivity is not ubiquitous and it isn't necessarily legal to ship our code or data somewhere else unknown on the Internet...


Citation from me - stuff that hasn't worked properly for me in the last 5 years:

Acer Timeline 1810TZ, Acer Timeline 3810TZ, Sony Vaio VPC-J1, Lenovo ThinkPad T61, Dell Precision 390, Dell Precision T3500.

All have irritating bugs, hibernation and sleep problems, random crashes despite being absolutely bog standard Intel hardware across the board supposedly fully supported by Linux. The same kit doesn't exhibit any problems under Windows.

The Timeline series machines under Windows 7 got a whopping 8-10 hours of battery life under average load OUT OF THE BOX. Under Ubuntu, even with all the powertop tweaks etc., 3 hours was pushing it.

Current Ubuntu, CentOS and Debian were tried on each machine at the time.

Just can't be bothered any more. Linux (Debian) sits inside a VirtualBox VM on Windows on my T61 where ironically the battery lasts longer when it's in a VM than on bare metal...


Ugh, another thing I really don't want in the database. Seriously, stuff like this will knacker your scalability over time.

I only say this because I've been there, with SQL Server's XML processing stuff, then spent nearly 2 years getting rid of it.


Can you elaborate on the scaling issues? Being able to query into JSON fields (including with functional indexes) is a great helper for denormalization, which is very good for actually increasing scalability.

For me, the native JSON support is a very handy tool to have in the toolbelt for parts of our application that have a very loose schema.


Well, technically speaking your database is a black box. It's very hard to scale horizontally, and it becomes very expensive to scale vertically as time goes on.

Logic suggests that you should keep as much processing as possible out of something that can't be scaled cheaply or easily, and push it to cheaper front-end servers.

On this basis, anything that involves more work than collecting the data and shifting it over the wire shouldn't really be in the database. Parsing/processing JSON is one of those things that's going to eat CPU and memory.

Fundamentally there's nothing wrong with storing JSON inside the database and processing it externally, but processing it inside the database is a big risk.

I've seen the same thing over the years with XML in the database and more recently people adding CLR code to SQL Server stored procedures.
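The "store JSON in the database, process it externally" approach above can be sketched in a few lines: keep the JSON as an opaque text column, ship it over the wire, and decode it on the app tier where CPU is cheap. A minimal Python illustration, with sqlite standing in for whatever RDBMS you actually use:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, json.dumps({"user": "alice", "ms": 120})),
     (2, json.dumps({"user": "bob", "ms": 340}))],
)

# The database only collects and ships bytes; parsing and filtering
# happen on the (horizontally scalable) application tier.
payloads = [json.loads(p) for (p,) in conn.execute("SELECT payload FROM events")]
slow = [e for e in payloads if e["ms"] > 200]
```

The trade-off the thread is debating is exactly this: doing the `json.loads` and the `ms > 200` filter inside the database is convenient, but it spends the one machine's CPU that is hardest to add more of.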


Hmm, I'd have to disagree with you on that. It is true that JSON processing could reduce raw scalability, in the sense that a query that uses JSON may be slower than one without. However, having JSON processing in the database simplifies quite a bit. For instance, imagine an app that processes and emits JSON and also uses a normalized database: this functionality now makes it possible to move some of the JSON processing closer to the database. In some cases that may not be the best idea; in others it's a win. As with many things, having the choice is not the problem; it's how one chooses, given the choice. I can see the benefits of XML in the database, but I can also see how it could be misused. The key, as always, is to apply judicious thought to the problem.


Actually, these functions seem to support designs where the database is normalised, but front ends are emitting JSON.


Actually, people USE JSON blob fields to increase scalability (as far back as FriendFeed).

Sure, doing this hampers normalization and single-server speed, but if the queries are parallelizable across shards and the like, what would hamper scalability?

