CyberWeekly #5 - Privacy, AI, sabotage and security skills
Welcome to CyberWeekly, a weekly roundup of news, articles, long form blog posts and various other miscellanea that interests your author, Michael Brunton-Spall.
Feel free to forward this on to people you think might be interested. If someone forwarded this to you, then you can subscribe to your own copy by visiting Cyberweekly
Replies to this email come straight to me, so just hit reply to send me feedback, comments or links, or tweet them to me.
This week contains stories about Privacy, AI, sabotage and security skills, amongst others. There’s a theme here: technological capabilities are expanding and improving faster than our processes, practices, skills and tools can keep up. That is the normal way technologies improve, with practice lagging behind the evolution of the technology itself, but the speed of improvement in bulk data analysis and machine learning means we need to be aware that our old ways may no longer be appropriate. It also means we need to explore new ways of dealing with this, to experiment, and to see if we can forge new processes and practices for this new world.
I’m pleased to say that I’ve passed 50 subscribers for this newsletter already, which terrifies me since I haven’t done anything to advertise or market it yet, so thank you everyone for tweeting/forwarding/telling your friends. I hope you enjoy it, and do feel free to send me feedback, or links and stories that you think might interest me, on Twitter @bruntonspall, by email, or in person (I’m always happy to meet for coffee!). I only link to stories that I read, but I’m always interested in reading more.
Anyway, enjoy the reading list this week, and see you next week.
Why do we care so much about privacy? The New Yorker
“The law is constantly playing catch-up. In the digital age, almost all transactions are recorded somewhere, and almost any information worth keeping private involves a third party. Most of us store more in the cloud than in lockboxes. It does not make sense to constrain the technological capacities of law enforcement just because the technology allows it to work more efficiently, but those capacities can also lead to a society whose citizens have nowhere to hide.”
A fascinating long read about the history of privacy laws (primarily in the US). The interesting equivalence made over and over is that we care far more when a computer scans us than we do when an individual does. The gift that technology gives us is scale and speed, but that’s precisely the thing that creates unintended consequences further down the road.
Mithering about the unmodellable
“And hope that those processes don't evolve faster than the software. Which, unfortunately, they probably will.”
This is a fascinating insight into Parliament generally but this bit caught my eye. The sales pitch for Agile is that your software can be continuously evolving and changing to match the organisational changes. But in reality, even if the software is written in a way that can be changed, is your organisation likely to have the will, the budget, the time and the skills at the right time to make the change?
10 common security gotchas in Python and how to avoid them
“When thinking about security, you need to think about how it can be misused. Python is no exception, even within the standard library there are documented bad practices for writing hardened applications.”
This is a good list of security gotchas and many of these issues are essentially the same if you are programming in any dynamic language such as Ruby, PHP or JavaScript.
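To make the category of gotcha concrete, here’s a minimal sketch of one classic standard-library example (my illustration, not taken from the article): deserialising untrusted data with pickle executes arbitrary code, whereas a plain data format like JSON does not. The Evil class and the command it runs are invented purely for demonstration.

```python
import json
import pickle

class Evil:
    # __reduce__ tells pickle how to reconstruct an object on load;
    # an attacker can make it call any function, e.g. os.system.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

# An attacker can craft this payload offline and send it to you.
payload = pickle.dumps(Evil())
# pickle.loads(payload)  # would run "echo pwned" on the victim's machine

# Safer: accept only plain data formats from untrusted sources.
data = json.loads('{"user": "alice", "role": "reader"}')
print(data["role"])  # a plain dict, no code execution
```

The Python documentation itself warns that pickle is not secure against maliciously constructed data; the safe default is to accept only data-only formats such as JSON from anything you don’t fully control.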
Why Security Skills Should Be Taught, Not Hired - SecurityIntelligence
“We have to admit that colleges will rarely be able to deliver candidates who are capable of dealing with the complexity of security from day one. Then again, very few industries expect that out of college graduates. An automobile manufacturer wouldn’t hire a newly minted electrical engineer and expect him or her to design a wiring harness for the next model of its car, for example. The company would start the new employee small, nurture him or her, gradually increasing responsibility over time.”
I thought this wouldn’t be a good article, but it’s actually a lot better than I expected. The skills shortage in security is, I think, a combination of the pipeline problem described in this article and a failure of existing security professionals to keep up with the rapidly changing technology landscape. We struggle to hire new cyber security professionals and then expect them to know enough to build a SOC, but we also don’t spend enough time and energy giving security professionals good technology training, so how can we expect them to have good opinions on Serverless or DevOps?
Why you should train your staff to think securely – IT Governance Blog
"Other staff, meanwhile, tend to have little knowledge of or interest in information security practices, which they often believe have been designed to hinder their day-to-day work. However, when any employee with Internet access can jeopardise the entire organisation with a single mouse-click, it should be clear that the responsibility for information security lies with every member of staff and that security practices need to be embedded in the working practices of the whole business."
Not a great article overall (it's really just an advert, sorry), but to me this article raises a really good point and then entirely misses it. If staff have little interest in your security policies and believe they hinder their working life, then that's because they actually do! The solution isn't to forcibly educate the staff in security and blame the user when they get it wrong. The solution has to be about meeting the staff where they are and understanding the business processes they use and helping them ensure those processes are secure by design.
[Note: I had a discussion with a few subscribers this week about whether to include “bad articles” that are plain wrong or contain misinformation, and then write commentary disagreeing with them. I decided that generally I would not spread disinformation further, even with the intent of disproving it. This article was slightly different, in that it’s not really wrong; it just doesn’t go as far as I would and isn’t the highest quality. The actual bad article in question was posted to a Slack instance I’m part of and discussed there instead, which was more useful, if more limited.]
Nation-state hackers attempted to use Equifax vulnerability against DoD, NSA official says
“Within 24 hours I would say of whenever an exploit or vulnerability is released, it is weaponized and used against us,” said Hogue. Hogue also said the use of “zero day” vulnerabilities to breach systems appears to be increasingly rare, based on his own work.
“At NSA we have not responded to an intrusion response that’s used a zero day vulnerability in over 24 months,” Hogue said. “The majority of incidents we see are a result of hardware and software updates that are not applying.”
This is old, and I tweeted about it at CyberUK 2018 this year, but it's worth working out what the definition of a 0-day actually is. Most vulnerability researchers mean the use of a vulnerability before the patch is released or the vulnerability is announced. But if your organisation can't patch quicker than, say, 7 days, then every vulnerability released between day 0 and day 7 might as well be a 0-day as far as you are concerned. That is to say, the longer your patching cycle, the more 0-days you are effectively vulnerable to.
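To put rough numbers on that argument (the figures are mine, apart from the 24-hour weaponisation claim from the NSA quote above): the gap between weaponisation and your patch landing is the window in which every new vulnerability behaves like a 0-day against you.

```python
# Toy illustration of the effective 0-day window.
weaponisation_lag_days = 1    # per the quote: exploits weaponised within 24 hours
patch_cycle_days = 30         # assumed organisational patch cadence, for illustration

# Days during which each newly disclosed vulnerability is exploitable
# against you even though a patch exists.
effective_zero_day_window = max(0, patch_cycle_days - weaponisation_lag_days)
print(effective_zero_day_window)  # 29 days of effective 0-day exposure per vuln
```

Shrinking the patch cycle is the only variable you control here; the weaponisation lag belongs to the attackers.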
Elon Musk’s Long Obsession With Sabotage - The Atlantic
"Musk copied the text of the letter and pasted into a Word document, and checked the size of the file. He pored over the office’s printer activity logs, looking for a document that matched the one he had created. It’s not clear why this employee would print out the letter that appeared on Valleywag, but Musk’s hunch proved correct. He got a hit on the logs, and used that information to track down the person who carried out the printing job. The employee wrote a letter of apology and resigned."
This is quite interesting on detecting insider threats. Ignoring the slight level of paranoia on display here, there are some interesting themes. Many insiders are motivated by frustration, or by a frustrated sense of entitlement, such as being passed over for promotion. Tackling insider threats is part deterring them, part providing an outlet for that frustration (such as an integrity hotline) and part detecting them and acting. I think we tend to lump insiders together as a single actor, but I am increasingly of the view that insiders need to be broken down into different categories, not just witting and unwitting, but by motivation as well.
Why Are There So Damn Many Ubers? | Village Voice
“A terribly overused buzzword that internet companies like to use is “frictionless,” but it’s a decent term to describe what happened next. Stripped down to its essence, the process of using Uber was the way black car services had always worked: You used a phone to make a car come to you, and you paid by credit card. As Ackman told the Times, “It’s not that different from using Google or a directory to find a car service.” But the app made everything easier for both the driver and the passenger; GPS in particular meant the driver could home right in on you rather than having to work out where exactly you were via conversation with a dispatcher. Having your credit card on file meant you didn’t have to think about payment on every trip.”
This great description of disruption and frictionless design is a good reminder that users will circumvent a set of regulations and rules if it’s easy to do so. In cyber security, things like BYOD, Slack, Trello, G-Suite and more are frictionless “shadow IT” that can circumvent our security policies. That isn’t to say our policies are great as they stand; we need to rethink them heavily to work with these tools instead of looking to the past.
Explainable AI [PDF]
"Management require interpretability to gain comfort and build confidence that they should deploy the system. Developers will therefore need AI systems to be explainable to get approval to move into production. Users (staff and consumers) want confidence that the AI system is accurately making (or informing) the right decisions. Society wants to know that the system is operating in line with basic ethical principles in areas such as the avoidance of manipulation and bias"
This is a good, if long, overview of AI, Machine Learning and Explainable AI, including a good framework for differentiating between explainable and transparent. There are some interesting techniques in here for understanding how various AI algorithms come to their decisions, how you might explain them, and how you might have any confidence in why the computer said no. Again, as we move more and more cyber security tools to AI black boxes, we need to understand the impact on users. If your box prevents someone claiming a benefit or food stamps, are you confident that the false-positive rate is low?
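On that last question, a quick back-of-the-envelope calculation shows why even a “low” false-positive rate can be damaging when the behaviour being detected is rare. All the numbers below are illustrative assumptions of mine, not figures from the report.

```python
# Toy base-rate calculation for a rare-event detector.
population = 100_000
fraud_rate = 0.001            # assume 0.1% of claims are actually fraudulent
true_positive_rate = 0.95     # assume the detector catches 95% of real fraud
false_positive_rate = 0.01    # assume it flags 1% of legitimate claims

frauds = population * fraud_rate                              # 100 real cases
caught = frauds * true_positive_rate                          # 95 flagged correctly
false_alarms = (population - frauds) * false_positive_rate    # 999 legitimate claims flagged

# Of everyone the black box flags, under 9% are actually fraudulent;
# the other 91% are innocent people denied a benefit.
precision = caught / (caught + false_alarms)
print(round(precision, 3))  # 0.087
```

The rarer the event, the more the false positives dominate, which is exactly why the false-positive rate of a deployed black box deserves scrutiny.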
That's all for this week. See you next week.
Michael