Data was accessed on 5/13/2014, Uber noticed on 9/17/2014, and affected drivers were notified on 2/27/2015. Thankfully it was only names and plate numbers, but still...
All I see from Uber is bad publicity and poor management decisions. I wonder what it's like to work there from an insider's perspective, because from the outside it doesn't look good.
It sounds like they realized the API was improperly exposed on 9/17/2014, but didn't necessarily know whether it had ever been accessed by an unauthorized request.
I could see it taking a while to find one bad request in the entire history of the API's lifespan -- presuming that they had to find the logs, weed out false positives, account for different sites and versions that behaved differently, etc.
That still doesn't explain a 5 month gap. The only (charitable) explanation that makes sense to me is that they discovered the API was exposed, thought they had proven it was never improperly accessed, and then only much later realized that it had been after all.
I'm not defending them on this, because that does seem to be a long enough time to be more proactive about it. You did bring up an interesting point, though: Uber is facing opposition in almost every city they operate in, whether it's small-town South Carolina, where I'm from, or some of the largest cities in the world. It would be interesting to see how people deal with this on the inside and how it affects the culture.
Uber's in a position where they get flak for breaking the rules while also being painfully aware that following the rules is worse for them. They face opposition, but every time they play nice it doesn't go well for them.
The lesson here is that sometimes, you do much better by breaking all the rules.
Well, if the only way you can do business is to break laws and be total assholes to everyone, then it kind of strongly suggests you shouldn't be in business in the first place.
Uber is already hamstrung by their inexperienced drivers' reliance on Google Maps for navigation. It is in no way equivalent to actually knowing your way around.
The difficulty of making an urban self-driving car aside, Google would have to achieve a quantum leap forward in the quality of their navigation platform. Otherwise every auto-taxi in San Francisco will proceed single file down Van Ness, with turns onto Market or Mission only. All other streets will be empty and filled with pigeons.
It's not just taking traffic into account, which it can (sort of) already do.
They need to actually use the accident data. Use a diverse set of routes to get to the same place, so that there isn't one path for every car on the road. Understand stoplight patterns and where it's hard or easy to make a turn. Not focus obsessively on shortest path rather than most tolerable path.
Fundamentally what a good cab driver (which are not that common) offers you is knowledge of the best route to a destination from experience. Google Maps doesn't have that. And if it did, and it controlled a significant portion of the cars, it would no longer be the best route.
You know that you're describing things that machines are inherently better at than humans, right? The problems you described stem mostly from the fact that maps also serve an informative function (to learn the route in advance), and apparently Google hasn't yet decided to compute routes in their navapp based on other active users of said navapp in the neighbourhood. But the data, infrastructure, and algorithms are there.
> And if it did, and it controlled a significant portion of the cars, it would no longer be the best route.
Of course it would, because it would be able to run a global route optimization on all the cars simultaneously, thus outperforming even the most experienced human drivers by orders of magnitude. Getting above human level seems like a college-level exercise.
> inexperienced driver's reliance on Google Maps for navigation
When I lived in Sydney, I used to take taxis often. And the rule was pretty much: if your address doesn't exist on Google Maps, they don't know how to get there. Even "At the corner of Hyde Park and Oxford St", which is in the CBD, returned a 404 from the driver's vocal API. They were all officially registered drivers; I just think cabbies' over-reliance on Google Maps makes them unaware of the street names.
> Uber is already hamstrung by their inexperienced drivers's reliance on Google Maps for navigation. It is in no way equivalent to actually knowing your way around.
I don't know about you, but my experience is that Google Maps is far more reliable than a cabbie who purports to know their way around.
My experience is the exact opposite in most cities in the UK - Google Maps regularly gets me going insanely stupid routes, while the cabbies always seem to know every street and the best way to get to it.
Perhaps the US has a different culture for its cab drivers? I can't imagine why, though. We have all the taxi licensing schemes and whatnot that the US does, so it can't be a case of "there's more competition so they're better". Perhaps it literally does just come down to culture?
Not sure how it works elsewhere in the UK, but in London the cab drivers have to take an in-depth course called 'The Knowledge' which covers street routes, alternate routes during street closures, commercial destinations, etc. See some sample questions: http://www.taxiknowledge.co.uk/mock.html
This is majorly helpful beyond the usual satnav use case. E.g., when getting dropped off in a high-traffic area, the cabbies will know the surrounding streets well and at your request can drop you off in a reasonable place 1-2 blocks away, saving a lot of time and expense.
In the US, cabbies are often relatively recent immigrants. They may know parts of the city OK, but they can't be relied upon to have the city memorized.
There's licensing, but it's a far cry from the Knowledge of London.
I believe they've tried that in Portland and a few other cities. It hasn't gone particularly well for them. Also, there are cautionary tales like Night School, where the industry being disrupted gets the new entrant shut down.
Early 2014, you could see a driver's home address, cell phone number, the ESN for their phone, the VINs of the car(s) on their account... the list goes on and on...
I don't know what happened here, but presumably they keep fairly detailed request logs. If they were notified of a security vulnerability like this, they would probably sweep logs for suspicious requests. This way they would become aware of all breaches using that vulnerability, but not until they found the vulnerability, which could be any amount of time after the breaches occurred.
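A sweep like that can be as simple as filtering access logs for requests that hit the vulnerable endpoint without valid credentials. A minimal sketch of the idea (the log format, endpoint path, and token convention here are all invented for illustration, not anything Uber has described):

```python
import re

# Hypothetical access-log lines: "<ip> <auth-token> <path> <status>"
# ("-" means the request carried no auth token).
logs = [
    "10.0.0.5 tok_a1b2 /v1/drivers/42 200",
    "203.0.113.9 - /v1/drivers/42 200",   # unauthenticated, yet served: suspicious
    "10.0.0.5 tok_a1b2 /v1/trips/7 200",
    "198.51.100.7 - /v1/drivers/99 401",  # unauthenticated but rejected: fine
]

VULNERABLE_PATH = re.compile(r"^/v1/drivers/")  # the improperly exposed endpoint

def suspicious(line: str) -> bool:
    """Flag successful hits on the exposed endpoint that carried no token."""
    ip, token, path, status = line.split()
    return bool(VULNERABLE_PATH.match(path)) and token == "-" and status == "200"

hits = [line for line in logs if suspicious(line)]
```

The hard part, as the parent notes, is not the filter itself but locating months of logs and weeding out false positives across different sites and API versions.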
Yeah, I definitely would not do that to a 3rd-party system without a specific letter of engagement for a penetration test or security review. Now, that being said, it's the first thing I would tell every single developer about as a senior developer, and I would insist that test cases be written to verify that no such 'feature' was permitted into the application.
Should I consult with my lawyer before manually entering an address into a browser? I could easily make a mistake that would allow me to access the wrong page.
Heck, if we're being that careful maybe I should just throw my computers out the window. A Google search result or a forum post could link me to the wrong page and I could get sued.
This is a good point, but there should be more awareness of the issue as a whole. I've seen many apps that expose data dangerously. Some developers may not be aware that these values are exposed (even with SSL), so they should architect their apps accordingly, reinforcing the fact that you should never trust the client. I also briefly touch on this dynamic architecture and some of the implications it brings.
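One concrete way to apply "never trust the client" is to serialize responses through an explicit whitelist, so PII can't leak just because it happens to live in the same record. A minimal sketch (the field names here are hypothetical):

```python
# Hypothetical driver record as it might exist server-side.
driver = {
    "name": "Jane Doe",
    "license_plate": "7ABC123",
    "home_address": "123 Main St",   # PII: must never reach the client
    "phone": "555-0100",             # PII
    "device_esn": "A1000012345678",  # PII
}

# The only fields the client legitimately needs.
PUBLIC_FIELDS = {"name", "license_plate"}

def to_client_payload(record: dict) -> dict:
    """Emit only whitelisted fields; anything added to the record later
    stays private by default instead of leaking by default."""
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}
```

The inverse approach (a blacklist of sensitive fields) fails open: every new column is exposed until someone remembers to hide it.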
That is a bunch of lawyer words that they can stretch to mean anything. What we need are hard deadlines, say two days after the break-in. Not enough to fully find out what happened, but enough to force the companies to act.
Congress needs to stop pissing in the wind and make a federal law on breach disclosure. Self evidently companies won't universally do this on their own, and state specific law makes compliance more difficult and expensive.
How would such legislation ensure companies are able to detect such breaches in the first place? For every Target/PSN/Anthem/Uber how many companies aren't even aware they've been breached?
Withholding knowledge of a breach is self-rewarding, which is why there needs to be a law stating the time frame for disclosure, and either penalties or liability (per affected customer, employee, contractor, vendor). Lack of skill in detection is a problem, but I don't know whether Congress is well equipped to legislate that; also, companies aren't exactly incentivized to just let themselves get completely owned. They're just ignorant. There's no question this behavior is changing, even if we're dissatisfied with how slowly it's happening.
I mean, the average Congresscritter probably has no idea what the typical answer is to "how do you do a password reset?" other than "call daughter/son". They're not good at establishing competency. They are sorta halfway decent at bringing out the hammer: "disclose what you know within X days, or we're going to fine you... when we do find out when you knew it."
So that brings up the question of how the legislation determines whether and exactly when a company knew they were breached. And I'd say they should learn from history, which is not to be such dicks as they were with hackers in the '80s and '90s and instantly criminalize disproportionately. We were learning things as a result of all of that, and by repressing it, we learned a lot less. So with companies I'd say up front the disclosure regime needs to be civil in nature (fines), and if there's willful hiding of what they know, or tampering with or destruction of evidence in an attempt to claim they didn't know they were breached or how badly, then it becomes criminal and you lay down the hammer. Ultimately though, the worst punishment is up to the states, since the corporate charter is granted by states, not the feds. Offhand I can't think of a case where a corporation was executed in this manner though (revoking its charter or articles of incorporation).
Companies are asking Congress to do this because already states are doing it, and it's totally haphazard. If Congresscritters were to get doxxed maybe it'd go faster.
As much as Uber messed up here and there was a security breach, comparable information is publicly available. For example, the TLC in NYC provides this:
This is a spreadsheet containing all the taxi drivers in NYC with their names, license numbers, and license expiration dates. Given that the only information leaked (according to Uber) were names and license numbers, that really isn't much beyond what might otherwise be available publicly.
In Massachusetts the driver's license number used to be your Social Security number. This was changed, but are there other states that have not done so?
Uber really needs to have a public data retention policy stating that they anonymize or delete all data older than a couple weeks. I'm just waiting for them to be hacked and have to reveal that people's trip data for years has been released.
It's definitely not just Uber. And drivers' license numbers are serious PII! It was the exact example I gave in my last appsec talk for Ruby developers this month in Nashville.
Starting with the user story: "As a Pawn Shop Clerk, I scan a copy of the customer’s drivers’ license because the company is required by law to keep this record at least two years from the date we purchase a used valuable from a customer."
A data retention policy can state that you delete all non-operational data after 60 or 90 days. Or that it is moved to one-way encrypted storage for up to a year. In other words, it can be a security mechanism vs "we keep everything in the SQL database, forever" that tends to be the default in many circumstances.
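As a sketch of what "delete non-operational data after 90 days" might look like as a scheduled job (the schema and retention window here are invented; anonymizing instead of deleting keeps aggregate stats usable while removing the PII):

```python
import sqlite3
import time

RETENTION_DAYS = 90  # assumed policy window

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (id INTEGER, rider TEXT, created_at REAL)")
now = time.time()
conn.execute("INSERT INTO trips VALUES (1, 'alice', ?)", (now - 200 * 86400,))
conn.execute("INSERT INTO trips VALUES (2, 'bob', ?)", (now - 10 * 86400,))

# Nightly job: strip PII from rows past the retention window.
cutoff = now - RETENTION_DAYS * 86400
conn.execute("UPDATE trips SET rider = 'anon' WHERE created_at < ?", (cutoff,))
conn.commit()

riders = [row[0] for row in conn.execute("SELECT rider FROM trips ORDER BY id")]
```

The point is that a breach of this table then exposes at most 90 days of identifiable trips, rather than the company's entire history.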
> Uber says it will offer a free one-year membership of Experian’s ProtectMyID Alert
My ID has been breached twice in other, unrelated incidents. Each time these ID protection companies want to know my SS# and all sorts of other stuff. My heart skips a beat imagining them scraping the web for my SS# and CC# in an otherwise well intentioned effort. I've refused their services and insist they only provide the insurance policy associated with this.
I see a lot of comments about security, but would be happy to bet this was simple social engineering and "human hacking". It's sobering to see large-ish companies that give full read access (and sometimes write) of customer and financial data to interns, fresh grads and new contractors for expediency. Young people are cheap. $500 is a new computer or weeks of food to an indebted student.
Management usually doesn't care, revenue and convenience trump security; until of course something bad happens, which is why older institutions have draconian access standards, meetings to discuss who has the right to know about the meeting to determine the access list management program (true story) and so on.
Nothing in the press release hints at an actual attack. "An unauthorized third party accessed our database, and we immediately changed the password" sounds like they realized one of their competitors hired an intern to get them a login.
Last year Uber was using Backbone and the JSON returned to the client included ALL information about the drivers you have used for trips including home address, phone number, etc. I wonder if this has something to do with that?
You could also use an auth token from the Android app and snoop around other users if you knew some info about them, which you could if you had access to a driver's phone (I did/do).
And depending on the state, you can find out the driver's birthday, or even if their real name is different from what is listed on their profile. The site at [0] shows how many states use soundex coding and modulus arithmetic to encode driver's license numbers with PII.
I'd be keen to see if every driver's info aligns with the license number (for those states that use encoding systems that embed PII into the number).
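For context, the surname portion of those encoded license numbers is typically American Soundex: the first letter of the name plus three digits derived from the remaining consonants. A minimal sketch of the standard algorithm (individual states layer their own modulus schemes on top, which this does not attempt):

```python
def soundex(name: str) -> str:
    """American Soundex: first letter + 3 digits; vowels separate
    duplicate codes, while h/w are transparent between them."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    # "*" marks vowels (separators); "" marks transparent h/w.
    encoded = [codes.get(c, "*" if c in "aeiouy" else "") for c in name]
    digits, prev = [], encoded[0]
    for code in encoded[1:]:
        if code == "*":
            prev = "*"       # a vowel lets the next duplicate count again
        elif code:           # h/w ("") leave prev untouched
            if code != prev:
                digits.append(code)
            prev = code
    return (name[0].upper() + "".join(digits))[:4].ljust(4, "0")
```

`soundex("Tymczak")` gives `T522` and `soundex("Ashcraft")` gives `A261`, matching the usual reference examples. The upshot for this breach: a leaked name plus a license number is enough to check whether the two belong together.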
I find it unlikely they have a database explicitly for driver names/license plates, unless it was some flat-file dump that was compromised. I'm curious how much data was really obtained. If only 50k records were truly stolen, it could be a shard too. The lack of technical details is sketchy to me.
I also find it unlikely it's just the name and license number. They used to return all of a driver's information (name, phone, address, driver's license, license plate, etc.) from the REST endpoint they were using on their website. They closed that hole after we disclosed it to them.
Why does security get breached? Marketing wants easy access to all data, that's it. Big Data / deep learning wants easy access, and lots of data is in transit. Security is not convenient for operations; therefore companies have security on paper, and audits and stuff, but no real security.
The free one-year membership of Experian's® ProtectMyID® Alert is genius: it's giving away something that costs them nothing (presumably Experian is using this as a marketing opportunity) as if it were a real step in the right direction to make up for the data leak.
I work in info sec, and in one of the "Who's Hiring" posts a few months ago (do we still do those? I haven't seen one in a while) I asked "why are startups never hiring security guys?", because I never see a security engineer position open in those topics. I never got a response. To me that indicates the response is "we don't".
Listen, guys. I don't care how small you are. If you are handling PII or credit card data or anything that, if leaked, would harm your business or your customers, you need a security guy. Not a programmer who knows some security stuff. Not a manager who checks off the online PCI self-assessment. Not "we outsource to an MSSP". At least one security guy, full time. Make sure that everything you do is run past that person. If you're so busy that you can't run everything past that person, hire another.
It's not a joke. Stop fucking ruining people's lives. It's 2015, four years past "the year of the breach" [1]. Get with the program. It's not okay to have a breach. It's not. It doesn't matter how much money you saved from not having a security guy or the tools they need. Get someone who knows what they're talking about and listen to them.
Just like they all need a dedicated network engineer, or a dedicated storage engineer, or a dedicated "whatever" engineer? Losing data is unacceptable too, right? I'm not saying security people are not necessary, but there are a lot of non-dedicated "whatevers" that can handle "whatever" sufficiently. Coupled with security audits, which in my experience leave a LOT to be desired (speaking as a non-dedicated "security" engineer: I know of a few areas I'd like to shore up, but they never come up on security audits, hmmm), this is often sufficient. Besides, don't you figure Target, Home Depot, EA, and Sony had dedicated security people?
Again, this is not arguing against security guys.. It's just this post reads a bit like "Well, if they were only hiring people like me maybe this wouldn't have happened".
Only you can prevent forest fires. The aggregate knowledge of and concern for security needs to increase. A lot of these security auditors (but not all!) are just running automated tools and generating automated reports. They don't understand your environment like you do. Even a dedicated security guy at a large company can't be everywhere and doesn't know everything (or even understand, say, hypervisor security). I was lucky enough to work with/for somebody who was very security conscious and involved in the security scene... It changes your thinking. That thinking, while staying pragmatic, is the key IMHO. Just like the mantra "everyone is responsible for quality", I believe everyone should be responsible for security. If this isn't specific enough, well... consider some of the hacks mentioned before. What went wrong is completely obvious to some people in hindsight, but it obviously wasn't apparent to the implementers at the time that it could go wrong. A cultural shift in thinking and better education in security is the way forward IMHO.
> A lot of these security auditors (but not all!) are just running automated tools and generating automated reports.
Woah... A professional security auditor actually knows what to look for and would be flexible enough to understand (and manipulate) what your software is doing. If you are paying someone to run a script, you're not getting your money's worth.
> Even a dedicated security guy at a large company can't be everywhere and doesn't know everything (or even understand, say, hypervisor security).
True, that's why ideally you'd want a team of people on this, not just one security guy to carry the globe.
I agree though, security needs to be thought out from the beginning and throughout the development process. The later you catch something, the harder it is to fix. But you need a fair amount of experience as a developer to see security issues thoroughly (because you often need to understand the platforms you are building on, not just your domain). So if you don't have that knowledge you can either teach yourself or if you don't have the time, let someone else do it.
Having a thorough understanding of what you are actually doing does take care of most of that. I doubt that's where a startup's priorities are, sadly. For most, speed > solid code
"You can’t just hire a couple security engineers to shoulder this burden. You wouldn’t hire anyone to just “go deal with that scale issue” you have either."
Most startups that talk to me do hire somebody to "go deal with that scale issue." When they talk to me in particular, it seems like they want to hire somebody to "go deal with that security issue," too.
We should understand that a startup doesn't have the resources of a fully fledged company. That said, Uber has literally billions in resources, they should have done better.
That said, any company collecting PII (or any type of data a customer believes is protected really) as part of their business has a duty of protecting that information.
Unfortunately, you can't trust joe sixpack to make safe and sound decisions as to whether they should sign up and give their contact/personal info to your new random app, let alone evaluate the level of your opsec practices.
Saying "we take the privacy of our customers very seriously" months after a breach and going back to business as usual is not enough, and I think this is true for both startups and big corporations.
One of the earlier comments said "Stop fucking ruining people's lives.", I think it pretty much sums it up even if it's probably a bit extreme.
The first step is to stop considering security as an afterthought when you write any piece of code.
This is bullshit. If your organisation can't protect its customers' data, it shouldn't exist. Enough of this "I need special treatment because I'm just two dropouts working from a Starbucks."
Your view doesn't account for the fact that in computer security, offense overwhelmingly beats defense. Target. Sony. Home Depot. Nordstrom. Those are the ones you hear about, but what's scary are the number of company and government breaches that aren't made public. The cost of a zero-day is in the low to mid six figures.[1] If you are a juicy enough target, you will get hacked.
Obviously, this doesn't mean one should disregard security concerns. It's important to engage in good practices, to cultivate a combination of paranoia and attention to detail, to scrutinize suspicious behavior. But even if you do all of these things, modern computer systems have tons of surface area outside of your control. These days, it's unrealistic to demand perfect security from anyone, let alone small businesses.
Instead of being so uncompromising, I think it's better to ask businesses to explain their security policies and practices. Are administrators required to use multi-factor auth? Are backups encrypted, and if so, how? How are passwords hashed? Is data encrypted in transmission, and if so, how? Are server logs shipped to prevent tampering? Answering these questions (and others like them) can give security-conscious customers an idea of the business's expertise, and allow people to use products (or not) accordingly.
No, with all due respect, you're bullshit. Hacking my app is illegal. You're saying I shouldn't write a web app in the first place, just because I'm some guy and barely know the framework I'm using. Well, maybe you should go live in Somalia if you don't like a code of laws. I can't do security right. I can do a web app poorly, or nothing at all. You're saying, give the world nothing. I'm saying, sod off. I've had enough of perfectionists like you keeping people from making stuff.
Yeah, and fuck food safety regulations, because I'm just some schmuck who wants to operate a restaurant but can't be bothered to learn how to do it properly, so everybody who doesn't want to be poisoned shouldn't be such a bitch who'd prefer a steak without a side of e. coli, right? And let's abolish drivers' licenses too while we're at it, because anyone who wants a bare minimum of driving skill from other road users should just go live in Somalia, right?
Well actually, if you think it's OK to expose your users' data because you don't know what you're doing but think you deserve a piece of the startup gold rush pie anyway, it's you who should go live in Somalia and see what becomes of a 'society' of people who just do something with no oversight, skill, or knowledge. If the choice is between 'doing it poorly' and 'nothing at all', then you should do 'nothing at all', because your actions affect other people. Basically you're saying 'screw my users, I can't be bothered to learn things properly but I want money anyway!'. Well, fuck you, you are the cause of all these problems, and you deserve everything that comes to you.
That's not perfectionism. His comment is an emotional simplification of a complex problem without any consideration of side effects. This is just like the "tough on crime" rhetoric of some politicians.
nothing, otherwise you put yourself at a disadvantage against other market players (at least in US).
The US has no reasonable industry regulation; it's more a set of laughable industry-written guidelines, if anything. There are no consequences, no serious penalties for harming the public. What's more, the public itself is too clueless to care and incentivize proper behaviour. Only HUGE events (Exxon Valdez) are capable of changing perception and forcing real regulation.
> It's not okay to have a breach. It's not. It doesn't matter how much money you saved...
If only this were true, they would be hiring security people.
Nothing is going to change until companies are held accountable for the damages caused by negligence. If someone in the infosec field wants to make a difference, I'd reckon their best bet is lobbying to make this happen.
That's good! You guys probably have tons of personal data from students who didn't choose to use your service anyway, but their school made that choice for them. Be careful with it!
From first-hand experience I can tell you that Uber was recruiting to start their information security program at least 18 months ago, and when I spoke to their software engineers and SREs they had a decent understanding of modern security issues.
In the 18 months since I assume they have assembled a decent security team and while most likely Uber dropped the ball here at least they recognized the need for security before they got burnt.
> It doesn't matter how much money you saved from not having a security guy or the tools they need.
It's the only thing that matters. It's capitalism. Those who waste money on unnecessary expenses get outcompeted by those who don't. Unless you find a way to make companies financially responsible for crappy security, they won't care. Right now breaches like these seem to be more like free advertising (a typical user will just read "something something Uber something hacked" and next week will just have a vague sense Uber was mentioned in the news).
> would be enough if customers cared more about these breaches and took their business elsewhere.
If only they were spherical humans of uniform density...
> Apparently they don't care.
Well, they do, but they can't do anything about it. See, it's a well-known problem with real humans: from boycotting Coca-Cola for slaughtering people in poor countries, to tantalum capacitor production being based on even worse slaughter of men, women and children in poor countries, to the whole environmental fuckup we enjoy today, "voting with your wallet" doesn't work. People can't coordinate on a large enough scale, so they have to stick with the bad choices. Why should I move my business elsewhere, if the risk is unlikely and my competitor here will save money over me if he stays? And my competitor thinks the same, and neither of us moves.
It's the very case regulations exist for - it's much easier (and in reality, actually feasible) to coordinate people to ban all businesses from doing particular classes of shitty things.
But honestly, I thought that's kind of economics 101, that this is market failure mode when it is applied to real, flesh and blood humans.
Currently when there is a data breach, the source company is usually on the hook for nothing more than signing victims up for credit reporting for a year, a scheme that I would not be surprised is actually money making (e.g. providing thousands of great leads for a credit reporting agency has to be gravy). On the cost/benefit ratio, security of personal information just really doesn't matter in cases like this. There are some other companies, like Google, who would take a serious image hit, but for a pseudo-employer company like Uber there will be no ramifications.
It isn't surprising that they deprioritize security -- the market doesn't demand it.
Serious, legit question here. How many lives will be ruined by this breach of 50k? How many lives were ruined when 40 million CCs and 70 million accounts (address, phone number, etc.) were stolen in the Target breach?
Ruin seems like an awfully strong word here. I hesitate to say that because I don't want to downplay the importance of security. But to take security seriously I think we also have to be non-hyperbolic about the consequences of not doing so.
You're right, "ruin" was poetic license on my part. I was either going all-in with the outrage or I was going to delete the post and move on. I chose the former.
But think about the Anthem breach just this year: SSNs, full names, dates of birth, employer and yearly income, home addresses. Everything you need to ruin someone's life for real, and permanently. Having to have a credit card replaced is an inconvenience. Having to constantly watch your credit, because an attacker has everything they need to open a new credit card in your name, is ruin.
Beyond that though, little inconveniences add up. How many times do you want to have to replace your credit card in a year? And if you have an auto-payment set up but miss changing your card number for that, how much will missing that payment impact your credit score? And how many hours are you spending fighting fraudulent charges? How many years of your life are taken off by the stress of constant breaches?
Life ruining? Probably not. Life-changing? Yeah, it's getting there.
Two card changes in quick succession led to my car insurance being cancelled when they couldn't take a payment. The first I knew of it was when the blue lights came on behind me. That is firmly into ruined life territory.
No, in the UK being caught like this means a big fine, a long-term endorsement on your license, and unpleasant side effects which include raised insurance premiums and not being able to get a hire car for several years afterwards. It IS actually a big deal. And it all has its roots in a bank card number changing unexpectedly.
And the point is, yes, consequences can be rather more damaging than having your Amazon Prime membership lapse.
Not having insurance is a strict liability offence.
It's the same in my country.
But if you're late on payments then a normal dunning process begins. Before your insurance turns invalid you have to ignore at least 3 letters over the course of 3 months.
In the UK they just flip the switch when a single payment fails, without prior warning?
My understanding is that UberX is illegal in Thailand, but they've been doing it anyway as it's too difficult for police to enforce. If I was a Thai UberX driver, I might be thinking my life was significantly worse off...
UberX is apparently illegal in parts of Australia as well, and authorities have been using entrapment to identify drivers and fine them [1]. Having access to a database of drivers would make it easier for authorities to fine the drivers, apparently $1700 per offence.
"NICK MARSDEN, DEPT. OF TRANSPORT AND MAIN ROADS (Aug. 28, 2014, male voiceover): "No covert activity was done today, Uber locked third phone due to penalty infringement notices being issued yesterday. Time was spent purchasing new credit cards, activating Gmail accounts and setting up two more phones. These phones are the last ones, will be ordering additional units.""
I think you're going to need to define what a ruined life is. I doubt getting a replacement credit card in the mail will ruin someone's life under most definitions.
One time I got an e-mail from Amex saying my account may have been compromised and to click a link. Fearing a phishing attempt I went direct to their website and logged in. A big red banner said to contact them ASAP. I called a number they gave. They asked if I ordered plane tickets for Turkey. I said no. They said ok, no worries, they auto-blocked the purchase. We went through my history to make sure everything else was correct, it was. The card was cancelled and they mailed a new one overnight. It was under 22 hours from e-mail to new card in hand. I was very impressed.
My sister had her info stolen by a skimmer attached to the card pad at a gas station. It was a debit card. She got all of her money back, but it took close to 6 weeks to fully resolve. If you live paycheck to paycheck, and at the time she wasn't too far off, that can be a very difficult situation. It worked out in the end but it was definitely painful for her.
By and large these are not "ruined life" events. Identity theft and fraud are, unfortunately, common and mainstream enough that hitting "ruined life" level is exceptionally difficult. Back in the 90s when the average person didn't know those terms it might have been more common. But now it's just something that everyone, individuals and corporations, have to deal with.
I mostly agree, but the calculation is different for different businesses.
The cost of engineering time and of hiring a security professional may be much higher than the business lost due to breaches.
In this particular case they likely have enough resources to have made it happen, but it remains to be seen whether this will actually cost them much if anything.
That was the case with Home Depot. They made the determination that the cost of implementing proper security would be more than the cost of cleaning up the breach.
However, this is just offloading the negative externalities onto the banks and the customers. That's just shitty. I'm not one to call for government regulation that easily, but this is exactly why we have a federal government. To make sure the citizens are being protected. That's why the FCC declared ISPs to be utilities. To stop them from abusing the customers. That's why the EPA has regulations on chemical and oil spills. If Home Depot got fined $100m for their breach, what would the cost look like then?
The cost of security is more than the impact of the lost business. That needs to be fixed, because until it is, we will continue to have breaches that make our lives terrible.
I totally agree. I used to work in Info-sec back in D.C. before I moved out to San Francisco and I would always joke with my D.C. buddies that San Francisco cares about product first, security later. Most people in SF are so wrapped up around which framework is the coolest hottest thing and have no idea about anything related to security until it's too late.
I would always complain about being 2-5 years behind in terms of technology stacks when I lived in D.C., but we were always a lot more careful about deployment decisions and extremely serious about data security. That mentality is ingrained in me to this day, so I always think about security. Unfortunately, it's not on the minds of a lot of people.
I don't think these breaches are going to stop until something really serious happens or until there are some serious negative legal consequences for data breaches. Sadly, it's up to consumers to try and sort out which companies have the best practices and that's not always apparent.
If you look at the actual impact to companies by breaches like this, it's almost the same impact that Google, Apple, and Microsoft faced after the PRISM leak - hardly perceptible to revenue. What does happen though obviously is a lot of opex expenditures to deal with it all, and that's where companies are getting squeezed oftentimes in the usual behemoth dysfunctional megacorp landscape.
I'm gung-ho about security as much as you but the truth is that users hardly stop using a service after data breaches. The ones that do matter though are like the ones that delete all your AWS instances and custom AMIs when holding your company's assets for ransom. That is what startups should be worried about indeed (as well as enterprises that are transitioning more of their IT into cloud environments that can be scripted and automated much easier - makes it easier for attackers in theory too).
Agreed, but these guys aren't small, they're fucking huge, so they really don't have an excuse. They possibly still have the startup culture of growing quickly and worrying about the details (e.g. security) later.
I generally agree with the main thrust of your argument, but this seems way over the top. I don't know if every brick and mortar business in the world can afford to hire a dedicated security engineer. Should they all just close shop?
And if we're going by Wikipedia's definition of PII, probably at least 50% of the websites in the world. Including this one.
Maybe you don't mean all that, but if I am to take what you posted at face value, it seems insanely irrational.
I didn't just say PII, I said "PII or credit card data or anything that, if leaked, would harm your business or your customers". If you're not prepared to securely handle PII, you shouldn't be collecting it. Talking about B&M, does Guitar Center or Great Clips really need my phone number? No, they don't. So I don't give it to them. They keep this data because they want it, not because they need it.
Insanely irrational? If you are comfortable handing your information to companies who do nothing more than the bare minimum to make sure it's safe, that's insanely irrational. I get tired of companies giving my information away. I really do. There's no excuse for it.
Security is by no means a solved problem, nor is it possible to solve at this point. But how often do you hear that the company wasn't hashing, wasn't encrypting, wasn't paying attention to their security tools? Damn near every time. Things like this very, very rarely happen to companies who take security seriously.
You keep bringing up huge examples. Millions of credit cards leaked. Guitar center, a company that has 260 locations across the United States. Great clips has over 3,000 locations. Anthem, a company with multiple billions of dollars in revenue that handles extremely sensitive information for millions of people.
I remember as a kid, there was a family-owned comic book shop I would visit. I would let them know what comics I was interested in, and they would call me if they got something in that they thought I'd want. So they had good reason to have our phone number. It was probably even stored really insecurely (on a piece of paper). But it was the early 90's, and if someone had stolen it, it might have included 200 phone numbers.
I buy magic the gathering cards from small card shops across the country. They need my address to ship it. Since most of these single store businesses are fairly low tech, I doubt they do too much to protect it.
Also, your initial statement said handle, not collect. Most brick and mortar stores handle credit card data, even though they don't collect it. There have even been instances of thieves affixing devices to ATMs and credit card machines that scan your card data as you swipe or insert it.
You can collect PII by hosting a static webpage (IP addresses). Leaking that PII can cause harm to users (DDoS). This may seem contrived, but people who make a living off live streaming are fairly regularly DDoSed by malicious viewers who figure out their IP.
More generally, any leak of data, whether it includes PII or not, will harm a company's reputation.
This is why I called your post insanely irrational. It's all black and white: hire a dedicated security engineer, or you're doing it wrong. Regardless of size. The majority of businesses in the country would go out of business if they had to abide by your rules.
My POV is kind of limited, but I believe there's a strong argument to be made that security consulting is not an adequate substitute for in-house security. The largest problem that comes to mind is that hiring consultants can lead executives to think they've outsourced security and can not worry about it.
I completely agree with you, but sadly, this kind of thing will never happen until there is a strong incentive for those companies to hire dedicated security people. Right now it's standard for companies to apologize and give people a year of credit monitoring.
It's become such a common issue there is no incentive to invest in security and that's a bad thing.
This is anti-competitive and regressive; it helps the big guns. Imagine big companies shutting down small ones because they don't have dedicated security staff. We need companies with working security to offer this as an outsourced service, for example for storing records.
Probably so, and also an indication of a probable lack of defense in depth and a failure of access control. It will be another example to add to my Litany of Data Breaches the next time I speak with developers about appsec. You can see my last talk at https://www.youtube.com/watch?v=dj196NhPyWs&t=19m50s. So much failure to go around in application design and implementation.
I would actively insist on not storing PII in plain text unless there was absolutely no way around it. That may involve changing the business model so that certain data never needs to be actively processed by the web application in the ordinary course of business. This is part of the security pushback that more developers need to adopt as a matter of professional ethics.
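To make the "not in plain text" point concrete: one common pattern is to store a keyed hash of an identifier instead of the raw value, so lookups still work but a leaked table is useless on its own. This is a minimal sketch, not anything from the breach in question; the key name and `pseudonymize` function are purely illustrative.

```python
import hmac
import hashlib

# Illustrative key; in practice this would come from an env var or
# secrets manager, and never live next to the data it protects.
SECRET_KEY = b"keep-this-out-of-the-database"

def pseudonymize(pii: str) -> str:
    """Return a stable, non-reversible token for a piece of PII."""
    normalized = pii.strip().lower()
    return hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Store the token, not the address itself:
record = {"email_token": pseudonymize("alice@example.com")}

# Lookups still work by hashing the incoming value the same way:
assert record["email_token"] == pseudonymize("Alice@Example.com")
```

The point isn't that HMAC solves the problem (truly reversible data like shipping addresses needs encryption instead), just that "we needed it for lookups" is rarely a good excuse for plaintext.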
I just think that this is a company with no product, no business model, horrendous business practices, and somehow their valuation is still higher than the entire market they operate in. At least they're not selling the data to the IRS, right?
Between this breach and the impending classification as a cab company by more and more major cities, I think we can consider Uber to be either the walking dead or something very close to it at this point.
I think it's a bit premature to count Uber out. They have an absurd amount of money in the bank, and they take on a very small amount of the risk of their business. I don't think that this data breach will have the slightest impact to their bottom line.
If they get classified as a cab company in a particular market, they'll sue. If they don't get their way, they'll exit the market. They could also move into other businesses which are clearly not cabs, as shown by their dabblings in courier and delivery services. Consider how many startups are paying people to drive things from point A to point B -- that entire industry could be outsourced to Uber.
If they do exit a market, they won't lose anything besides face -- it's the drivers who will carry all of the capital investment and most of the operating costs, they're the ones who will be hurt.