

Shared Security Podcast
Tom Eston, Scott Wright, Kevin Tackett
Shared Security is the longest-running cybersecurity and privacy podcast where industry veterans Tom Eston, Scott Wright, and Kevin Tackett break down the week’s security WTF moments, privacy fails, human mistakes, and “why is this still a problem?” stories — with humor, honesty, and hard-earned real-world experience. Whether you’re a security pro, a privacy advocate, or just here to hear Kevin yell about vendor nonsense, this podcast delivers insights you’ll actually use — and laughs you probably need. Real security talk from people who’ve lived it.
Episodes
Mentioned books

Mar 18, 2019 • 10min
Equifax and Marriott Data Breach Updates, Facial Recognition at the Airport, Citrix Password Spraying Attack
** Correction about CLEAR as noted in this episode of the podcast. CLEAR does not use Facial Recognition technology, only iris or fingerprint biometric scans **
This is your Shared Security Weekly Blaze for March 18th 2019 with your host, Tom Eston. In this week’s episode: Equifax and Marriott data breach updates, facial recognition coming to 20 US airports, and the Citrix password spraying attack.
Protect your digital privacy with Silent Pocket’s product line of patented Faraday bags, phone cases, and wallets which will make your devices untrackable, unhackable and undetectable. Use discount code “sharedsecurity” to receive 15% off of your order during checkout. Visit silentpocket.com today to take advantage of this exclusive offer.
Hi everyone, welcome to the Shared Security Weekly Blaze where we update you on the top 3 cybersecurity and privacy topics from the week. These podcasts are published every Monday and are 15 minutes or less quickly giving you “news that you can use”.
In data breach news, Equifax CEO Mark Begor and Marriott CEO Arne Sorenson appeared before a US Senate subcommittee to testify regarding the data breaches that both companies have suffered. While no new information was revealed about the Equifax breach (just the committee grilling Equifax’s CEO on the security controls and investments in security that they’ve put in place), several more technical details about the Marriott breach came to light. In September of last year, Accenture, who managed the Starwood Guest Reservation Database, contacted Marriott’s IT team about a strange query from a legitimate administrator account. Marriott discovered that these credentials were stolen and began an investigation. Investigators first found a remote access trojan as well as Mimikatz, a tool used to reveal usernames and passwords in memory. Investigators eventually found two encrypted files that had been deleted and then recovered; these two files were removed from the Starwood network on November 13th of last year. Shortly after, investigators were able to decrypt the files to determine what type of data was stolen. Even though 383 million guest records were accessed, the good news was that the 9.1 million credit card numbers in the stolen data were encrypted, and there has been no evidence to indicate that the master encryption keys to decrypt the card data were accessed. Marriott also said that they have not received any claims of fraud losses from the incident. This is quite surprising, given that attackers had been inside the Starwood network since at least 2014, a full four years, well before Marriott acquired the hotel chain.
In other Equifax news, famed reporter Brian Krebs reports that even if you already froze your credit files through Equifax after their data breach and were issued a PIN code, it still may be possible for an attacker to bypass your PIN and lift an existing credit freeze with just your name, social security number and birthday. Check out the link in our show notes to read the full article on this rather disturbing development.
US Customs and Border Protection (or CBP) is beginning to implement facial-recognition technology at 20 airports across the US. These new systems will be used to verify the identities of passengers entering and exiting the country, and the plan is to have the system in place across all US airports by 2020. The technology measures what are called facial landmarks, such as the distance between the eyes or from the forehead to the chin, and matches that data to passport photos stored in a database. You might be surprised to hear that similar commercial facial-recognition systems are already in use at many airports. For example, Delta has a “curb-to-gate” facial recognition system for international travelers at Atlanta International Airport, and other airlines like JetBlue, British Airways, and Lufthansa are running similar pilot programs of their own. You may have also seen a third-party service called “Clear” at over 27 US airports; its kiosks use iris or fingerprint biometric scans. Clear allows you to essentially jump to the front of the security screening line, and includes a number of other airline-specific perks, which can significantly decrease the time it takes to get through airport security. The catch is that Clear comes at a cost of about $15 a month.
Facial-recognition technology seems to be rolling out faster than we can understand the privacy ramifications. In a lot of ways, we’re seeing the beginnings of a massive government-funded surveillance network, now tied into the passport system, which has the potential to expand well beyond the airport. It’s also important to note that there are no laws that govern the use of facial recognition. Yet the government is happy to roll this technology out, all in the name of your security. Third parties like Clear now make millions of dollars on a business model where we pay money to trade our privacy for extra convenience, just so we don’t have to wait in line like everyone else. I hate to say it, but this isn’t going to stop anytime soon. So what do you think? Are you OK with facial-recognition technology being used at airports? Does it really improve security? And are you willing to trade your privacy for convenience?
A recent attack on Citrix, a large virtualization and software provider used by 98% of the Fortune 500, shows that weak and guessable passwords are still a huge problem for organizations. On March 6th, Citrix posted a notice that their internal network had been hacked by international cyber criminals. In a blog post about the intrusion, Citrix said that the attackers may have accessed and downloaded business documents and that they are cooperating with the FBI in the ongoing investigation. Apparently, the attack vector used was a technique called “password spraying”, in which an attacker puts together a list of usernames, usually collected by harvesting employee names from LinkedIn or other publicly available sources, and tries to log in to exposed applications using a single common weak password like “Winter2019” or “Password1”. Each login attempt uses a username from the list and that single password. This technique is similar to a “brute force” attack, where multiple passwords are tried against each account; but brute forcing is much noisier and easier to detect, which is why many attackers prefer password spraying. Once an attacker finds a valid set of credentials, it doesn’t take long to gain a foothold in the company’s internal network. Typically, this is done through lateral movement, exploiting vulnerabilities reachable with the access of that one single account. This attack, of course, takes advantage of poor password policies as well as the lack of other controls like multi-factor authentication. Check out our show notes for our recent episode on multi-factor authentication to find out why a password alone is not enough to protect user accounts.
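To make the difference concrete, here is an illustrative Python sketch (the usernames and passwords below are made up; this shows the shape of the technique, not real attack code) of why spraying stays under per-account lockout thresholds while brute forcing quickly trips them:

```python
from itertools import product

# Hypothetical harvested usernames and a single seasonal password (illustrative only).
usernames = ["asmith", "bjones", "cdoe", "dlee"]

def password_spray(users, password):
    """One password tried against many accounts: each account sees only a
    single failed attempt, so per-account lockout thresholds never trip."""
    return [(user, password) for user in users]

def brute_force(users, candidate_passwords):
    """Many passwords tried against each account: noisy, and a handful of
    failures per account is exactly what lockout policies detect."""
    return list(product(users, candidate_passwords))

spray_attempts = password_spray(usernames, "Winter2019")
brute_attempts = brute_force(usernames, ["Password1", "Winter2019", "123456"])

# Spraying generates exactly one attempt per account;
# brute forcing multiplies attempts per account.
print(len(spray_attempts), len(brute_attempts))
```

This is also why defenders watch for many accounts each failing once in a short window, rather than only counting failures per account.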
That’s all for this week’s show. Be sure to follow the Shared Security Podcast on Facebook, Twitter and Instagram for the latest news and commentary. If you have feedback or topic ideas for the show you can email us at feedback[aT]sharedsecurity.net. First time listener to the podcast? Please subscribe on your favorite podcast listening app such as Apple Podcasts or watch and subscribe on our YouTube channel. Thanks for listening and see you next week for another episode of the Shared Security Weekly Blaze.
The post Equifax and Marriott Data Breach Updates, Facial Recognition at the Airport, Citrix Password Spraying Attack appeared first on Shared Security Podcast.

Mar 11, 2019 • 9min
Google Chrome Zero-Day, Facebook Phone Number Privacy, NSA Phone Data Collection Program
This is your Shared Security Weekly Blaze for March 11th 2019 with your host, Tom Eston. In this week’s episode: a new Google Chrome Zero-Day, how Facebook uses your phone number, and the shutdown of the NSA’s phone data collection program.
Protect your digital privacy with Silent Pocket’s product line of patented Faraday bags, phone cases, and wallets which will make your devices untrackable, unhackable and undetectable. Use discount code “sharedsecurity” to receive 15% off of your order during checkout. Visit silentpocket.com today to take advantage of this exclusive offer.
Hi everyone, welcome to the Shared Security Weekly Blaze where we update you on the top 3 cybersecurity and privacy topics from the week. These podcasts are published every Monday and are 15 minutes or less quickly giving you “news that you can use”.
Google announced last week that a patch released on March 1st for the Google Chrome web browser was actually a fix for a zero-day vulnerability that had been under active attack. The vulnerability, known as a use-after-free bug, is a type of memory error which can allow malicious code to escape Chrome’s built-in security sandbox and run commands on the local operating system. This particular vulnerability was found in the “FileReader API”, which allows web applications to read the contents of files on a user’s computer. Google updated their original post about the patch to indicate that “Access to bug details and links may be kept restricted until a majority of users are updated with a fix”. This is, of course, done to prevent malicious actors from accessing details on how the vulnerability works so that it cannot be replicated. As always, ensure you keep your web browser of choice updated. In fact, all modern browsers have a nifty auto-update feature. The Chrome browser will show you a green, orange, or red indicator on the three-dot menu at the top right of your browser: if it’s green, an update has been available for 2 days; if it’s orange, 4 days; and if it’s red, 7 days. Click on the three dots and simply click “Update Google Chrome”. If you don’t see this button or any color indicators, you’re on the most current version. Our advice is to take a minute now to ensure you’re using the latest version of Chrome.
First up in Facebook news last week was the controversy with how Facebook uses your phone number. The Electronic Frontier Foundation said that phone numbers in Facebook, which happen to be used for two-factor authentication, have the privacy setting set to searchable by “Everyone” as the default. In fact, Facebook only gives you the choice of “Everyone”, “Friends of Friends” and “Friends” which means there is no option to opt-out. Facebook is essentially forcing us into a trade-off between the security of two-factor authentication and privacy of our phone number. Keep in mind, back in April of last year, Facebook did remove the ability to search for a user by entering a phone number or email address in the Facebook search bar but it did not disable the ability for someone to search for you when they upload a list of their contacts, which happens to have your phone number in it.
In other Facebook news, a report from the Guardian shows that Facebook targeted politicians around the world, promising various forms of investments and incentives so that they would lobby on Facebook’s behalf against data privacy legislation. This was all made public via a brand new leak of internal Facebook documents. And if that wasn’t enough Facebook news, Facebook CEO Mark Zuckerberg released a manifesto of sorts which details his vision for building a privacy-focused messaging and social networking platform. Check out our show notes if you’re interested in reading Mark’s full post, but basically he wants to change Facebook to allow more private interactions, end-to-end encryption, reduced permanence, safety, interoperability, and secure data storage. So what do you think? With all the controversy and scandal going on with Facebook, do you think Mark’s intentions for a more secure and private Facebook are genuine? Or do you feel that ultimately we are the product, and at the end of the day, making money off of our private data is what Facebook is really about? Let us know your thoughts by sending us an email at feedback@sharedsecurity.net or through any of our social media channels and let’s continue the conversation.
And now a word from our sponsor, Edgewise Networks.
Organizations’ internal networks are overly permissive and can’t distinguish trusted from untrusted applications. Attackers abuse this condition to move laterally through networks, bypassing address-based controls to spread malware. Edgewise abstracts security policies away from traditional network controls that rely on IP addresses, ports, and protocols and instead ties controls directly to applications and their data paths.
Edgewise allows organizations to analyze the network attack surface and segment workloads based on the software and how it’s communicating. Edgewise monitors applications and protects data paths using zero trust segmentation.
Visit edgewise.net to get your free month of visibility.
The NSA has quietly discontinued its very controversial program, put in place after the 9/11 terrorist attacks, which collected and analyzed millions of domestic phone calls and text messages. You may remember that this was the program exposed by whistleblower Edward Snowden. Authorized under the 2001 US Patriot Act, this program collected communications metadata, which included the phone numbers on a call, when the call took place, and how long it lasted. Apparently, the system hasn’t been used in months and the Trump administration may not renew or extend the program. The New York Times says sources indicate that there have been problems with the way the data has been collected, which may be the reason for the shutdown of the program.
In other NSA news, at the RSA security conference last week the NSA released a free software reverse engineering tool called “Ghidra” which is used internally by NSA employees. In fact, they even plan on releasing the source code for the tool on GitHub. In the meantime, that didn’t stop some researchers who downloaded the tool from discovering that a network port was opened when running the application, which could allow remote code execution. While the NSA states that they would never release a tool to the security community with a backdoor installed, it left many to speculate about the purpose of the open port. When researchers notified the NSA about the port, the agency said that it is used by internal teams to collaborate and share information with each other. However, the port specified by the NSA was not the same one discovered by the researcher.
Now besides what port should be or shouldn’t be open, I find it fascinating that the NSA is trying to be more transparent about what they are working on, tools they develop and wanting more collaboration with the cybersecurity community. More transparency from the NSA is a good thing. So let’s hope for more of it in the future.
That’s all for this week’s show. Be sure to follow the Shared Security Podcast on Facebook, Twitter and Instagram for the latest news and commentary. If you have feedback or topic ideas for the show you can email us at feedback[aT]sharedsecurity.net. First time listener to the podcast? Please subscribe on your favorite podcast listening app such as Apple Podcasts or watch and subscribe on our YouTube channel. Thanks for listening and see you next week for another episode of the Shared Security Weekly Blaze.
The post Google Chrome Zero-Day, Facebook Phone Number Privacy, NSA Phone Data Collection Program appeared first on Shared Security Podcast.

Mar 4, 2019 • 14min
Multi-Factor Authentication, New Attacks on 4G and 5G Mobile Networks
This is your Shared Security Weekly Blaze for March 4th 2019 with your host, Tom Eston. In this week’s episode: Multi-factor authentication to protect your credentials, and new attacks on 4G and 5G mobile networks.
Protect your digital privacy with Silent Pocket’s product line of patented Faraday bags, phone cases, and wallets which will make your devices untrackable, unhackable and undetectable. Use discount code “sharedsecurity” to receive 15% off of your order during checkout. Visit silentpocket.com today to take advantage of this exclusive offer.
Almost every day we hear about a new data breach or leak of personal data. In a lot of these stories, compromised credentials are used in what is known as a ‘credential stuffing’ attack in which stolen credentials, from large databases of past data breaches, are used to gain access to many different types of popular applications and services. Just last week, one of those services was Intuit’s TurboTax application which right now, because of tax season in the US, is extremely popular. Victims of this particular attack had their information like social security numbers, address, date of birth, driver’s license number, previous tax returns and other personal data compromised. That’s enough data for someone’s identity to be stolen!
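One practical defense against credential stuffing is checking whether a password already appears in known breach corpora before accepting it. As an illustrative sketch, here is the k-anonymity range-query idea used by services like Have I Been Pwned’s Pwned Passwords API: only the first five characters of the password’s SHA-1 hash are sent to the service, and the suffix is matched locally, so the full password never leaves your machine. The “simulated response” below stands in for the actual network call:

```python
import hashlib

def hibp_range_parts(password: str):
    """Split a password's SHA-1 hash into the 5-character prefix that would
    be sent to a range API and the suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, returned_suffixes: set) -> bool:
    """Check the locally kept suffix against the suffixes the service
    returned for our 5-character prefix."""
    _, suffix = hibp_range_parts(password)
    return suffix in returned_suffixes

# Simulated API response: pretend the breach corpus contains "Password1".
# (In reality you would GET the range endpoint for the 5-char prefix.)
prefix, suffix = hibp_range_parts("Password1")
simulated_response = {suffix}

print(is_breached("Password1", simulated_response))   # breached
print(is_breached("correct horse battery staple", simulated_response))
```

The design choice worth noting: the server never learns which password was checked, only a 5-character hash prefix shared by hundreds of unrelated passwords.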
But even if we take the right precautions to use unique and complex passwords, many of us can still fall victim to a phishing or other social engineering attack where we may be convinced to give away our user credentials. In fact, in last week’s show I discussed a very realistic Facebook social login phishing campaign which looks so real that even cybersecurity professionals could fall for it.
So what can you do to help better protect your user credentials? The answer is multi-factor authentication and you should always enable it if the apps and services you are using support it. Here to discuss what multi-factor authentication is and how it’s different than other forms of authentication is Ian Paterson, CEO of identity assurance company, Plurilock.
Ian Paterson: Historically, authentication is based around what you know, which would be something like a password or a PIN number for your debit card; what you have, so that would be something like the debit card itself or maybe an RSA token; and something that you are, and that would be something like your fingerprint for touch ID or maybe your face for using facial recognition. And multi-factor authentication is when you have two or more of those factors. So you’re mixing and matching something that you know, something that you have, and something that you are.
Ian Paterson: Traditional authentication is generally something that you know, and that would be passwords. And what the world has learned over the last five to 10 years, is that passwords, something that you know, are really a terrible way of protecting stuff. I would say ironically, but not ironically, I got a note in my inbox earlier this week from Have I Been Pwned, saying, “Congratulations. You have been subject to a data breach.” And the reality is if you’ve been around online for any amount of time, probably you’ve had your credentials breached. And I usually talk about, there’s two people in the world, people who know that they’ve been part of a data breach and people who don’t know. And that’s basically it. So, coming back to your question. So MFA is designed to mitigate some of the problems around traditional authentication, i.e., passwords, and we’re starting to see more of… More consumer options, certainly, around being able to use MFA or two factors, so two-factor authentication and multi-factor authentication, we’re starting to see more of those options being available to consumers.
Tom Eston: So, what are some of the issues that you’re seeing with the way that companies and applications and everyone is using multi-factor authentication right now?
Ian Paterson: I think that there are some good ways of doing multi-factor authentication and there are some not good ways of doing multi-factor authentication. So some examples of maybe good attempts, but attempts that come up short, would be using two forms of something that you know.
Ian Paterson: A lot of banks actually are still stuck with this. Where you’ll have a login and password and then if you get through the login and password, then they’ll ask you a security question. So it’s not actually multi-factor, they call it two-step verification in a lot of cases, which kinda sounds like two-factor authentication, but you’re still using two shared secrets, two something that you knows, in order to authenticate you as a person. And it’s a little bit better than just a password on its own, but not by much. And certainly it doesn’t meet a lot of the regulatory requirements around strong authentication. So we’re seeing that organizations are recognizing that this is not an ideal way of doing it and they’re moving away from it. But certainly… I still have some personal accounts just with organizations that I use and I’m still asked for a login, password, and a security question and it drives me nuts.
Tom Eston: Why should apps and services move away from offering SMS text-based multi-factor authentication?
Ian Paterson: What we’ve seen over the last couple of years is that SMS as a form of MFA, multi-factor authentication, is really insecure. So the Reddit hack a year or two ago, was they were able to get in because SMS was used as a form of multi-factor authentication and the attackers were able to usurp that and get access. And so, there are better ways of doing MFA. There are not so good ways of doing MFA. The security questions, SMS are definitely in that not great camp. Hardware is a great option as long as users are willing to go through the hassle of using it.
Tom Eston: Here’s Ian’s take on what the future of multi-factor authentication might look like.
Ian Paterson: So, Plurilock is looking at human behavior and using that as a form of biometrics. So we look at how you type, how you move a mouse, on mobile phones, how you walk or how you sit, which is gait analysis, and we use that as a form of invisible second-factor authentication, on top of your standard login and password. So if you consider that there can be a spectrum of really, really secure and really inconvenient on one end and on the other end of the spectrum would be really, really convenient but unbelievably insecure. There’s different solutions that you can plot on that spectrum.
Ian Paterson: And hardware is usually really, really secure. As a general rule, if you’re using hardware tokens or if you have a YubiKey, for instance. Like those are great solutions. The challenge is you actually want to roll out multi-factor authentication to more places than you can realistically expect users to do MFA. And so what happens is, and we’ve seen this with some of our customers and other organizations that we work with, they’ll purchase an MFA solution, they’ll integrate it in one or two points and then the rest of the interaction with users is left unprotected because they can’t get over the pushback from their end users to say, “Look, you can’t really expect to slow me down for five seconds, eight times a day, just so that I can log in securely.”
Ian Paterson: And so what we do is we come in and say, “Look in some cases, use hardware.” If you’re wiring $10 million, I would suggest that you probably want hardware in there to make sure that it’s the right person. But if it’s a… If it’s a manager who’s approving a small change or if it’s a lower risk transaction, is there a way that we can balance that convenience and security aspect? And so what we do is we look at your login and password, which you’re already, for the most part, doing today, we look at how you type in your login and password as a form of behavioral biometrics, and then we also use things like your location.
Ian Paterson: So have we seen you log in from the same location in the past. Rather than geo-fencing, we’ll actually do things like the impossible travel problem. So we’ll look at your last known good login, we’ll compute the time that it would have taken you to travel from point A to point B, where you’re currently logging in from, and say that if it’s physically impossible for you to travel from point A to point B, probably there’s something suspicious, right? So it’s all about flexibility. We don’t pre-configure very much, but we’re really looking at risk factors to know whether we need to pause the authentication and ask you for the hardware that you already have or just let you through.
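The “impossible travel” check Ian describes can be sketched in a few lines of Python (an illustrative example, not Plurilock’s actual implementation): compute the great-circle distance between the last known-good login and the new one, and flag the login if covering that distance in the elapsed time would require an implausible speed.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(last_login, new_login, max_speed_kmh=900.0):
    """Flag a login if reaching it from the last known-good login would
    require traveling faster than a commercial jet (~900 km/h).
    Each login is (latitude, longitude, unix_timestamp_seconds)."""
    (lat1, lon1, t1), (lat2, lon2, t2) = last_login, new_login
    hours = max((t2 - t1) / 3600.0, 1e-9)  # avoid division by zero
    speed_kmh = haversine_km(lat1, lon1, lat2, lon2) / hours
    return speed_kmh > max_speed_kmh

# New York at t=0, then London one hour later: far too fast, flagged.
print(impossible_travel((40.71, -74.01, 0), (51.51, -0.13, 3600)))
# Same city pair 12 hours apart: a plausible flight, not flagged.
print(impossible_travel((40.71, -74.01, 0), (51.51, -0.13, 12 * 3600)))
```

A real system would layer this with the other risk signals Ian mentions (device, typing behavior, history) rather than rely on geography alone.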
Tom Eston: What about privacy and mass surveillance concerns with biometric-based multi-factor authentication?
Ian Paterson: So, personal privacy and biometrics is a hot topic. I think where we’re seeing those is that there’s more consumer demand and acceptance for forms of biometrics. And I think you only need to look at what Samsung and Apple are doing, and actually Microsoft Surface is for that matter, as well, where they’re trying to balance the use of biometrics, like your thumb print or facial recognition, with the convenience that that offers.
Ian Paterson: Now, the other angle to this, is that biometrics are not foolproof, in the same way that passwords are not foolproof. There’s no silver bullet here anywhere. But biometrics can be a useful tool when you’re talking about defense in depth. And what we’re seeing is that consumers are interacting with those technologies more and so as a result have a greater acceptance for how they can be used and how they can benefit them.
Ian Paterson: The challenge really when you come down to consumer adoption is what’s in it for them. And if you just have a ubiquitous surveillance system for your business, there’s not really a benefit for consumers, they’re just being tracked and there’s no… There’s nothing in it for them. But if you were to say, look, rather than fumbling around for your keys, trying to find that frustrating token with that six-digit rotating password that is gonna change in 30 seconds and it actually shows you the bars count down, which just produces anxiety, you have to get it right and then you get it wrong and then you have to wait for the next one. The whole thing is just a terrible user experience. And then if you give them the choice to say, “Look, you can do that or you can swipe your thumb print,” suddenly it’s a different conversation. It’s not just about ubiquitous surveillance, it’s around, “Well, there’s a trade-off here being made and well, actually, I kinda benefit from this.” And when you have that conversation, it’s just much, much more geared towards informed consent and around the value that the users get.
Tom Eston: That was Ian Paterson from Plurilock.
And now a word from our sponsor, Edgewise Networks.
Organizations’ internal networks are overly permissive and can’t distinguish trusted from untrusted applications. Attackers abuse this condition to move laterally through networks, bypassing address-based controls to spread malware. Edgewise abstracts security policies away from traditional network controls that rely on IP addresses, ports, and protocols and instead ties controls directly to applications and their data paths.
Edgewise allows organizations to analyze the network attack surface and segment workloads based on the software and how it’s communicating. Edgewise monitors applications and protects data paths using zero trust segmentation.
Visit edgewise.net to get your free month of visibility.
A group of researchers from Purdue University and the University of Iowa have released details on new security flaws found in the 4G and 5G protocols used by mobile networks, flaws which bypass new security protections and would allow IMSI-catching devices known as “Stingrays” to intercept phone calls and conduct location tracking. Stingray devices are known to be used by nation states and law enforcement. Surprisingly, the soon-to-be-implemented 5G protocol has built-in protections to defend against Stingray devices, but the researchers found that these protections can be defeated. The research describes several different attacks: the first, called Torpedo, exploits a weakness in the paging protocol mobile carriers use to notify a device before a call or text comes through; Piercer allows an attacker to determine a user’s identity (or IMSI) on a 4G network; and an IMSI-Cracking attack can brute force an IMSI number on 4G and 5G networks. This last attack in particular would allow Stingray devices to be used on the new 5G networks which are just starting to be deployed. The code and exploits will not be released by the researchers; instead, the flaws will be reported to the mobile carriers so that they can be fixed. However, the researchers note that these attacks could be carried out with radio equipment costing only about $200. Let’s hope the mobile carriers fix these flaws soon, especially before 5G networks are fully deployed.
That’s all for this week’s show. Be sure to follow the Shared Security Podcast on Facebook, Twitter and Instagram for the latest news and commentary. If you have feedback or topic ideas for the show you can email us at feedback[aT]sharedsecurity.net. First time listener to the podcast? Please subscribe on your favorite podcast listening app such as Apple Podcasts or watch and subscribe on our YouTube channel. Thanks for listening and see you next week for another episode of the Shared Security Weekly Blaze.
The post Multi-Factor Authentication, New Attacks on 4G and 5G Mobile Networks appeared first on Shared Security Podcast.

Feb 25, 2019 • 9min
Google Nest’s Secret Microphone, Facebook Login Phishing, Password Manager Vulnerabilities
This is your Shared Security Weekly Blaze for February 25th 2019 with your host, Tom Eston. In this week’s episode: Google Nest’s secret microphone, a new Facebook login phishing campaign, and vulnerabilities in popular password managers.
Silent Pocket is a proud sponsor of the Shared Security Podcast! Silent Pocket offers a patented Faraday cage product line of phone cases, wallets and bags that can block all wireless signals, which will make your devices instantly untrackable, unhackable and undetectable. Use discount code “sharedsecurity” to receive 15% off of your order. Visit silent-pocket.com to take advantage of this exclusive offer.
Hi everyone, welcome to the Shared Security Weekly Blaze where we update you on the top 3 cybersecurity and privacy topics from the week. These podcasts are published every Monday and are 15 minutes or less quickly giving you “news that you can use”.
Do you own, or are you thinking about owning, a Nest Secure security system? If so, did you know that Google secretly installed a microphone in the system as a previously undocumented opt-in feature? Just last week Google announced that an update for its Nest Secure system would allow users to enable the Google Assistant (Google’s voice-activated product) so that users could use voice commands to enable and disable the alarm system. In a report from Business Insider last week, a Google spokesperson said that the company had made an error and that “the on-device microphone was never intended to be a secret and should have been listed in the tech specs”. Google said that the microphone was originally included in the system for the future possibility of new features, like the ability to detect broken glass. Google also stated that the microphone was always disabled. This news comes at a very challenging time for the tech giant, as many consumers are increasingly worried about their privacy and about companies like Google that have continued to demonstrate a lack of commitment to protecting our private information.
In fact, a privacy group called EPIC, which stands for the Electronic Privacy Information Center, is asking the Federal Trade Commission here in the United States to force Google to divest Nest and to disclose any data that these undocumented microphones may have been collecting. EPIC has called for similar action against Google in the past, dating back to 2010, when Google was found to have been collecting Wi-Fi data through its Street View project, including Wi-Fi network names, MAC addresses, URLs, emails, and even passwords from unsecured Wi-Fi networks. So what do you think? Are you concerned about a microphone in your home security system? Or is the bigger issue that companies like Google are not being honest with consumers about the privacy-impacting technology used in their products?
And now a word from our sponsor, Edgewise Networks.
Organizations’ internal networks are overly permissive and can’t distinguish trusted from untrusted applications. Attackers abuse this condition to move laterally through networks, bypassing address-based controls to spread malware. Edgewise abstracts security policies away from traditional network controls that rely on IP addresses, ports, and protocols and instead ties controls directly to applications and their data paths.
Edgewise allows organizations to analyze the network attack surface and segment workloads based on the software and how it’s communicating. Edgewise monitors applications and protects data paths using zero trust segmentation.
Visit edgewise.net to get your free month of visibility.
Last week password management company Myki posted about a new Facebook login phishing campaign making the rounds that looks so realistic that even cybersecurity professionals would have a hard time recognizing it. The attack takes advantage of the popular “social login” feature used by most web and mobile applications these days. Social login gives you the option of logging in with your Facebook account instead of creating a new set of user credentials, which is oftentimes more convenient than creating yet another username and password combination. In the case of this new attack, however, that convenience may come at a price. The attacker creates a very realistic-looking social login pop-up in which everything, from the status and navigation bars to the graphics, looks just like the real social login page. The user can even interact with the login box just like the real one, moving it around the screen and closing it. Once you fill out the form with your Facebook login credentials, they are sent to the attacker. Check out the link in our show notes for a video demonstration of what the attack looks like. The only advice given to protect yourself is to try to drag the prompt away from the window it is displayed in; if dragging the pop-up beyond the edge of the browser fails, you have yourself a malicious pop-up box. Unfortunately, this is not a check that many users, or even cybersecurity professionals, would know about. One thing worth remembering is that the Facebook social login process will automatically log you in if you’re already logged into your Facebook account, so if you ever get prompted to enter your Facebook credentials through one of these social prompts, first check whether you’re already logged into Facebook. Other than that, stay vigilant; it may even be a good idea to stay away from social logins altogether.
A recent audit of the popular password managers LastPass, KeePass, Dashlane, and 1Password for Windows shows that they all leave traces of sensitive data in memory, which could potentially be compromised if an attacker has physical access to the victim’s computer or if malware is able to extract the contents of memory. Security consulting firm Independent Security Evaluators, which performed the audit, says it found vulnerabilities in the way these applications store secrets like usernames, passwords, and even the master password in memory while the application is in use or in a locked state. The good news? All of the password managers tested protect the master password and all stored passwords in their encrypted database while the apps are not running. While running or locked, however, the password managers varied greatly in how secrets are stored and managed in memory. Some, like the free and open-source KeePass, had the fewest vulnerabilities; KeePass was the only password manager that completely scrubs the master password from memory while the app is running or locked. 1Password version 7 was noted as the most vulnerable in how it stores all secrets, including the master password, in memory.
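To make the memory issue concrete, here’s a minimal sketch (my own illustration, not code from the audit) of the “scrubbing” technique described above. In Python, a plain string is immutable and may linger in memory until garbage collection, while a mutable bytearray can be overwritten in place:

```python
# A minimal sketch (my illustration, not the audit's code) of scrubbing a
# secret from memory. A Python str is immutable and may linger until garbage
# collection; a mutable bytearray can be overwritten in place.

def scrub(secret):
    """Overwrite every byte of the secret buffer in place."""
    for i in range(len(secret)):
        secret[i] = 0

master = bytearray(b"correct horse battery staple")
# ... derive keys from bytes(master), unlock the vault, etc. ...
scrub(master)
assert all(b == 0 for b in master)  # plaintext no longer in this buffer
```

Real password managers do this at a lower level (and must also worry about copies the runtime makes), but the principle is the same: hold the secret in a mutable buffer and zero it the moment you’re done with it.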
Now, this research is by no means telling you to stop using password managers altogether or to dump the password managers noted in this audit. In fact, the opposite is true. Using any password manager is better than not using one at all. Having a password manager will always be a better strategy than using the same password for every site and service that you use.
That’s all for this week’s show. Be sure to follow the Shared Security Podcast on Facebook, Twitter and Instagram for the latest news and commentary. If you have feedback or topic ideas for the show you can email us at feedback[aT]sharedsecurity.net. First time listener to the podcast? Please subscribe on your favorite podcast listening app such as Apple Podcasts or watch and subscribe on our YouTube channel. Thanks for listening and see you next week for another episode of the Shared Security Weekly Blaze.
The post Google Nest’s Secret Microphone, Facebook Login Phishing, Password Manager Vulnerabilities appeared first on Shared Security Podcast.

Feb 18, 2019 • 10min
Preventing Illegal Robocalls, Webcam Spying, Dating App Account Hacking
This is your Shared Security Weekly Blaze for February 18th 2019 with your host, Tom Eston. In this week’s episode: Preventing illegal robocalls, should you be scared of your laptop’s webcam, and recent hacks of popular dating apps.
I’ll bet you’re like me: whenever I see a phone call from a number I don’t recognize, I refuse to answer it because of the sheer volume of robocalls, scams, and fraud attempts I’m always receiving. In a previous podcast we referenced a report from a company called First Orion, which said that nearly half of the mobile phone calls received in 2019 will be scams. Well, it’s 2019 and I’m starting to believe it may be even higher than 50%! It really seems like the problem is getting worse. However, a new report released by the FCC on the frequency and prevention of illegal robocalls shows that some progress is being made to prevent these calls and to hold scammers accountable for their actions. Regarding call-blocking services, the FCC states that hundreds of these services are now available, many of them for free, and that there has been significant progress toward caller ID authentication through a new standard being implemented by the major telecom companies called STIR/SHAKEN. Umm…interesting martini reference there, guys. This standard verifies that caller IDs are accurate and not spoofed or modified. Caller ID authentication is supposed to be implemented by all major US telecom companies by the end of this year. From an enforcement perspective, the FCC notes that in just the last two years it has proposed or imposed fines of around $245 million against people and companies found guilty of illegal robocalling. While all of these efforts seem to be making some progress, will caller ID authentication really drop the number of robocalls? Time will tell, but in the meantime it’s probably best to get yourself one of the many free robocall- and scam-blocking apps that are available. Check out our show notes for a link to many popular apps that are available right now.
I was intrigued by a story posted last week on ZDNet titled “Should you be scared of your laptop’s webcam?”, which discusses a recent Wall Street Journal piece in which a columnist hired an ethical hacker to see if he could hack into the webcams of her two laptops and a baby monitor. The point of the experiment was to see whether you really need to put tape over your webcam or purchase a cover for it. Using a carefully crafted phishing email with a link to a malicious file, the hacker was able to gain access to all of her webcams and her home network. But was it as easy as sending a simple phishing email? No, it actually wasn’t. The story pointed out that it took the columnist “performing some intentionally careless things for him to succeed”. So what careless things are we talking about? Well, the malicious file sent to the columnist via the phishing email was flagged by her operating system, her anti-virus, and even Microsoft Office. She intentionally dismissed all the various warnings and even purposely disabled the built-in security controls within her operating system. Only after all of this could the malicious document be edited, which allowed the malware to execute. And that was just on Windows; on her MacBook Air it took even more steps to gain access to the camera, and more things had to be disabled to get the exploit to work. This begs the question: if it was so difficult for this ethical hacker to break through all these layers of security, even with the assistance of the “victim” (yes, that’s victim in quotes), do we need to worry about our webcams getting hijacked?
The answer is…well, it depends on things like your personal threat model and how diligent you are about security awareness. It’s true that fully patched and protected modern operating systems like Windows and Apple macOS are much more difficult to break into these days. And that’s the key: keep all of your systems fully patched and updated, and never disable the built-in security controls in your operating system. Also, don’t forget to change the default passwords on those cheap Internet of Things devices. The point is, it’s typically the actions of the victim, like disabling anti-virus or other security controls, and not keeping our systems updated, that leave us at the greatest risk.
Last week was Valentine’s Day, and unfortunately for some users of the dating sites OkCupid and ‘Coffee Meets Bagel’ it wasn’t all love and romance. TechCrunch reported that multiple OkCupid users had their accounts hacked and passwords changed without their knowledge. And popular dating app ‘Coffee Meets Bagel’ had 6.1 million usernames, email addresses, and other personal details exposed in a massive pool of compromised data recently found for sale on the dark web. Other data in this dump included user data from other well-known data breaches such as MyHeritage and MyFitnessPal. Representatives from OkCupid have denied that there was a data breach, essentially blaming their own users for choosing poor passwords that may have been exposed in previous data breaches. According to the TechCrunch article, a spokesperson for OkCupid said “All websites constantly experience account takeover attempts. There has been no increase in account takeovers on OkCupid.”
Account takeovers relate to a more recent attack trend called “credential stuffing”, where attackers take the credentials found in large databases of past data breaches and use tools and scripts to see whether those username and password combinations work on other websites. Unfortunately, OkCupid and many other dating apps don’t offer two-factor authentication, so if you happen to be using the same password across all the apps you use, you may easily become a victim of a credential stuffing attack. If you’re one of the millions of people using these and other dating apps, take a minute to review how you choose your passwords, and be sure to enable two-factor authentication where it’s available. If you’re looking for love on one of these sites, the last thing you need is the “heartbreaking” news that your account and personal data were compromised.
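For illustration, here’s a toy sketch of two common server-side mitigations against credential stuffing: rejecting passwords already known from breach corpora, and throttling repeated login attempts from one source. All names and the tiny breach list here are hypothetical, not from OkCupid or any real service:

```python
# Toy sketch (all names and data are hypothetical, not from any real
# service): two common server-side mitigations against credential stuffing.
import hashlib

# In practice this would be a large corpus of breached password hashes,
# such as the Have I Been Pwned data set; three samples stand in for it here.
BREACHED_HASHES = {
    hashlib.sha1(pw).hexdigest() for pw in (b"123456", b"password", b"qwerty")
}
MAX_ATTEMPTS = 5  # throttle threshold per source address
attempts = {}     # source IP -> number of login attempts seen

def password_is_breached(password):
    """Reject passwords that appear in known breach corpora."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest() in BREACHED_HASHES

def allow_login_attempt(source_ip):
    """Deny further attempts from one source after MAX_ATTEMPTS tries."""
    attempts[source_ip] = attempts.get(source_ip, 0) + 1
    return attempts[source_ip] <= MAX_ATTEMPTS

assert password_is_breached("qwerty")            # reused breached password
assert not password_is_breached("3kT!9vZ#pLq8")  # not in the corpus
```

Real deployments combine these with two-factor authentication and smarter rate limiting, but even this simple pair of checks blunts the bulk, scripted nature of credential stuffing.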

Feb 13, 2019 • 30min
Artificial Intelligence in Cybersecurity, Apple FaceTime Bug, Nest Camera Passwords
In episode 85 of our monthly show we discuss artificial intelligence in cybersecurity, the recent Apple FaceTime bug, and the controversy over compromised Nest cameras. This was also the first show we streamed live on YouTube! You can re-watch the live stream on our YouTube channel.
The Shared Security Podcast is sponsored by Silent Pocket and Edgewise Networks.
Be sure to follow the Shared Security Podcast on Facebook, Twitter and Instagram for the latest news and commentary. If you have feedback or topic ideas for the show you can email us at feedback[aT]sharedsecurity.net. First time listener to the podcast? Please subscribe on your favorite podcast listening app such as Apple Podcasts or watch and subscribe on our YouTube channel.

Feb 11, 2019 • 10min
DNA Testing and the FBI, $198 Million Dollar Cryptocurrency Password, Password Checkup Chrome Extension
This is your Shared Security Weekly Blaze for February 11th 2019 with your host, Tom Eston. In this week’s episode: DNA testing and the FBI, the $198 million dollar cryptocurrency password, and a new Chrome extension to protect your accounts from data breaches.
Before we get into the news this week, I wanted to update you all on the Apple FaceTime bug that we talked about in last week’s episode. Apple has finally released a patch! Make sure you update your Apple iOS device to 12.1.4 and any Apple system running macOS Mojave to version 10.14.3. Check our show notes for a link to all the details and instructions on updating.
Our first story is about how one of the largest DNA testing companies, Family Tree DNA, is working with the FBI, allowing the Bureau to search its massive genealogy database to help solve crimes that have been nearly impossible to crack in the past. This topic may sound familiar: last year, the “Golden State Killer” (Joseph DeAngelo) was identified and arrested using DNA information from an open-source genealogy website called GEDmatch. A distant relative of DeAngelo was found in the database, which allowed law enforcement to pinpoint the killer through clues such as location, ethnicity, and other characteristics. This most recent story, however, is the first time a private company has voluntarily agreed to give law enforcement access to its database. According to the article, this new relationship allows the FBI to upload DNA samples and have them matched against roughly a million DNA records in Family Tree DNA’s database. It’s important to note that anyone can upload a DNA profile to the service, not just paying customers.
I think we’re starting to see a very dangerous precedent regarding the privacy of our DNA and who can access these records without user consent. While all of us would agree that finding murderers and solving cold cases is important, at what cost are we willing to have our most sensitive information, like our DNA, involved in searches or matched against other people’s profiles? Now that DNA testing kits are given as gifts and it seems like everyone is doing it, what are the privacy ramifications for the future? One important thing to note: if you’ve used one of these DNA testing services in the past, you can delete your DNA records (also known as your ‘kit’) either by contacting the company’s customer service or through your profile settings within the DNA service’s web application. The process varies between companies, so be sure to read the terms of service and privacy policy of the DNA company you used to see how it handles, and potentially shares, your DNA records with third parties. What do you think? If you’ve used one of these DNA services, are you concerned about this recent news? Let us know by commenting on our website or social media so we can continue this very important conversation.
Canadian bitcoin exchange QuadrigaCX owes its customers about $198 million worth of cryptocurrency following the sudden death of the company’s CEO, Gerald Cotten. The reason, you may ask? The only person with the password to the offline storage wallet holding the private encryption keys that unlock the cryptocurrency was the CEO himself. No other member of the company, nor the CEO’s wife, had the password. In a report from The Hacker News, some have even speculated that the CEO may have faked his death, or that this was what is known as an ‘exit scam’, where the CEO and his wife wanted to quickly get out of the cryptocurrency business and never be seen again. While these claims may be unfounded, this is a fairly common problem in the cryptocurrency market, where exchanges actually store the cryptocurrency rather than just facilitating transactions like traditional stock exchanges.
The lesson from this story for all of us: consider whom you have designated as a backup for your passwords and other private information if you were to suddenly die. It’s an uncomfortable reality to think about, but how would your immediate family handle your accounts, money, and other important things if you were no longer here? This is especially relevant if you are (hopefully) using a password vault or manager, as we always advocate. Our advice is to come up with a plan with your immediate family, or someone you trust, for how they would access any passwords or other essentials if you were no longer around. One suggestion is to store your password vault passphrase in a safe deposit box, or in another password vault that your trusted designee can access. Every situation is different and this may require some real thought, but it’s very important that we all have a plan in place.
Data leaks and breaches are inevitable, which means the usernames and passwords we choose always seem vulnerable to compromise no matter how many precautions we take. Oftentimes, it’s the data from past breaches that comes back to haunt us. To help combat this problem, Google has released a new extension for the Chrome web browser called “Password Checkup”. The extension triggers a warning if the username and password combination you use when signing into a site is one of over 4 billion credentials that Google knows to have been compromised. The extension was developed jointly with cryptography experts from Stanford University to ensure that Google never sees any of the credentials being entered or checked, and that the extension itself cannot be compromised by attackers. In addition, all statistics the extension reports back to Google are anonymous. Google released a blog post showing how the extension works as well as the technical details behind its design. One thing I like about this extension is that it will only alert you if your exact username and password combination was part of a past data breach; it won’t alert you about outdated passwords or weak passwords like “12345”. Check out our show notes for a link to download this great extension if you use the Chrome web browser. The more awareness we can spread about the risk of compromised credentials, the better for everyone.
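Google’s extension uses a more elaborate private lookup protocol, but the general idea of checking a credential against a breach corpus without revealing it can be illustrated with the simpler k-anonymity scheme used by the Have I Been Pwned “range” API: only the first five hex characters of the password’s SHA-1 hash leave your machine, and matching against the returned suffixes happens locally. A sketch, with the “server response” replaced by a small local sample:

```python
# Illustrative sketch of a k-anonymity lookup in the style of the Have I
# Been Pwned "range" API (Google's extension uses a different, more
# elaborate protocol). Only the 5-character hash prefix would ever be sent;
# the "server response" here is a small local sample for illustration.
import hashlib

def hash_prefix_and_suffix(password):
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]  # send prefix, compare suffixes locally

# Pretend server response for prefix "5BAA6": suffixes of breached hashes.
SAMPLE_RANGE_RESPONSE = {
    "1E4C9B93F3F0682250B6CF8331B7EE68FD8",  # suffix of SHA-1("password")
}

prefix, suffix = hash_prefix_and_suffix("password")
assert prefix == "5BAA6"
assert suffix in SAMPLE_RANGE_RESPONSE  # credential is known-compromised
```

Because many different passwords share any given 5-character prefix, the server learns almost nothing about which specific credential you were checking.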

Feb 4, 2019 • 10min
Massive Apple FaceTime Privacy Bug, Selling Your Privacy for Money, Insecure Smart Light Bulbs
This is your Shared Security Weekly Blaze for February 4th 2019 with your host, Tom Eston. In this week’s episode: The massive Apple FaceTime privacy bug, selling your privacy for money, and insecure smart light bulbs.
In breaking news this past week, a very serious privacy bug in Apple FaceTime was found by a 14-year-old high school student who was trying to FaceTime his friends while playing Fortnite. The bug allows someone to force other Apple devices with FaceTime installed (everything from iPhones and iPads to Macs running newer versions of macOS) to answer a FaceTime call, even if the other person takes no action. Essentially, this turns an iPhone into a surveillance device whose microphone stays active. If you’re interested in the fascinating story of how this bug was discovered, and the painful path this 14-year-old and his parents had to take to notify Apple of the issue, check out the link in our show notes for this episode. In response to the bug, Apple has disabled group FaceTime functionality, but it’s still not a bad idea to turn off FaceTime in your Apple device settings until a patch is released. Apple states that an update will be issued in the coming weeks. In the meantime, be sure to follow the podcast on Twitter, Facebook, and Instagram for the latest updates on when a patch is released.
Facebook was in the news once again this past week when a TechCrunch story revealed that Facebook had been secretly paying users aged 13 to 35 up to $20 per month, plus referral fees, to install an app called “Facebook Research”, known internally at Facebook as “Project Atlas”. The app is essentially a VPN that allowed Facebook to capture almost all data used on a personal Apple device, including messages, photos, phone call data, and web browsing history. Facebook even went as far as distributing the app outside the Apple App Store through Apple’s Enterprise Developer Program, which Apple designed for companies to distribute apps within their own organizations. The TechCrunch story prompted Apple last week to revoke Facebook’s access to this program as a terms-of-service violation, because Facebook was using the Enterprise Developer Program to distribute “internal only” apps to the public.
Dan Goldstein, president and owner of Page 1 Solutions, a digital marketing agency, says: “This shows, once again, that Facebook doesn’t value user privacy and goes to great lengths to collect private behavioral data to give it a competitive advantage. The FTC is already investigating Facebook’s privacy policies and practices. As Facebook’s efforts to collect and use private data continue to be exposed, it risks losing market share and may prompt additional governmental investigations and regulation”. In related news, Google has removed a similar app called “Screenwise Meter” from Apple’s Enterprise Developer Program for fear that Apple would also revoke its access. Google was doing exactly the same thing: using a program designed for internal distribution within organizations to distribute an app to the public. Screenwise Meter is very similar to the Facebook Research app in that it collects similar data, such as browsing history.
It seems that we’re starting to see more instances of tech companies offering money or other incentives in return for your private data. What do you think? Is this creepy or just the new world we live in? Would you participate in one of these programs where you allow access to your private photos, web browsing history and phone calls in return for money and gift cards? Let us know by commenting on our social media feeds and the show notes for this episode.
Don’t just throw away that cheap smart lightbulb that went bad. Instead, you may want to smash it with a hammer before throwing it out, as many of these lightbulbs appear to store sensitive information like your Wi-Fi password and other secrets. But is this news really that concerning? In a series of blog posts by “Limited Results”, a researcher shows how easy it is to access the firmware of several different low-cost smart lightbulbs, the kind of products you would typically find for sale on Amazon. Once the firmware was dumped to a computer, simple searches revealed network login information, such as Wi-Fi network SSIDs and passwords, along with other secrets like root certificates and private keys. The problem? Many cheap products like these take shortcuts by storing private information insecurely on the device.
Now, I wouldn’t be surprised if we see similar issues with most devices that fall into the “Internet of Things” category. From smart thermostats to sprinkler systems, power outlets, and more, we should assume these devices are prone to the same root flaw: security not being built in from the beginning, when these products are designed. Unfortunately, much of the advice I see for better securing these devices is to install them only on a separate, segmented wireless network, different from the one you use for Internet access. While that seems reasonable, how many of us actually do this in practice? I’ll bet the average home user of these products wouldn’t even think about it, or know how to set up a separate network in the first place. In fact, most people don’t even know how to change the default network name or set a secure password to begin with. Ultimately, though, the risk of these devices falling into the wrong hands, and the work it takes to extract sensitive information from them, is probably not worth the time of most criminals. I think there is a greater risk of your home being broken into through a window than of your Wi-Fi password being extracted from one of these devices.
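The “simple searches” of a firmware dump described above are essentially what the Unix strings utility does: scan a binary blob for runs of printable bytes. A minimal sketch, using a fabricated firmware blob for illustration:

```python
# Minimal sketch of a strings-style scan over a firmware dump: find runs of
# printable ASCII bytes, which is how plaintext secrets like Wi-Fi SSIDs and
# passwords turn up. The firmware blob below is fabricated for illustration.
import re

def printable_strings(blob, min_len=6):
    """Return all runs of >= min_len printable ASCII characters in the blob."""
    pattern = rb"[ -~]{%d,}" % min_len  # space through tilde: printable ASCII
    return [m.decode("ascii") for m in re.findall(pattern, blob)]

firmware = b"\x00\x7f\x13ssid=MyHomeNetwork\x00\x02pass=hunter2secret\x00\xff"
found = printable_strings(firmware)
assert "ssid=MyHomeNetwork" in found
assert "pass=hunter2secret" in found
```

If a device stored those values encrypted, or never stored the Wi-Fi password at all, this kind of trivial scan would come up empty, which is exactly the “build security in from the beginning” point.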

Jan 28, 2019 • 11min
The Lack of US Privacy Regulations, Nest Cameras Hijacked
This is your Shared Security Weekly Blaze for January 28th 2019 with your host, Tom Eston. In this week’s episode: Where are the US federal privacy regulations, and details on Nest cameras being hijacked in credential stuffing attacks.
Silent Pocket is a proud sponsor of the Shared Security Podcast! Silent Pocket offers a patented Faraday cage product line of phone cases, wallets and bags that can block all wireless signals, which will make your devices instantly untrackable, unhackable and undetectable. Use discount code “sharedsecurity” to receive 15% off of your order. Visit silent-pocket.com to take advantage of this exclusive offer.
Hi everyone, welcome to the Shared Security Weekly Blaze where we update you on the top 3 cybersecurity and privacy topics from the week. These podcasts are published every Monday and are 15 minutes or less quickly giving you “news that you can use”.
January 28th is International Data Privacy Day and, ironically, it seems that we still have a major problem with protecting the privacy of our data. Data breach after data leak, after countless examples of companies large and small mishandling our data, have led many of us to ask: “Why aren’t there more laws and regulations in the US focused on data privacy?” While Europe has the GDPR, the United States seems drastically behind in the battle to protect our private data, and the situation seems to be getting worse every day. Eventually, something big with data privacy will have to happen to finally get the attention of Congress, right? How big of a data breach is big enough? Equifax, which impacted 143 million Americans, was one example of a huge breach of our private data, yet nothing has changed. Facebook’s Cambridge Analytica scandal sent Mark Zuckerberg to face questions from Congress, and again nothing changed. And now there are reports that major telecom companies are selling our location data to shady third parties. So I ask you: will there finally be a data breach this year big enough to drive regulation at the federal level?
Here’s Ameesh Divatia, CEO and co-founder of Baffle, a data encryption company, with his thoughts on the development of new data privacy laws and regulations in the United States this year.
Ameesh: I think that would be very, very important because right now we have a mishmash where every state has a notification law, which means that you have to notify somebody about the fact that you’ve lost customer data. So a uniform notification approach would definitely help. I think the key issue is the whole issue of fines. I think GDPR took it to a whole new level as to how to fine entities that lose data. We need a more practical approach to that, and I think you’re going to see that: where it hurts but doesn’t put you out of business. Because, like I said very early on, data collection is very critical; there is no way you’re going to get a lot of services without data being collected. But processing that data responsibly is what it’s all about. I always say security has traditionally been sold with fear in the background, and that’s not good for anybody. What we see is a transition where being more secure and being able to protect customers’ data is going to become a competitive differentiator, versus the necessary evil that always gets in the way of business. And if that really starts happening, that’s a true win-win for the industry as well as for the data aggregators.
Tom: So what do you see happening with privacy this year?
Ameesh: So what we see for 2019 is obviously a continued focus on the fact that privacy has to be taken seriously. I think you’re going to see some big fines being levied. Whether it’s the European Union or even the US states that are starting to catch up, I think that’s going to be another game changing event for 2019 where one of the large data aggregators is going to be fined. And that’s going to get the focus more and more on the fact that collecting data is the first step but making sure you protect it is a necessary second step.
Tom: That was Ameesh Divatia from Baffle.
Now, ironically, just this past week we saw news stories that two major tech companies, Google and Facebook, are being fined or are in the process of being fined. According to a report by the Washington Post, the Federal Trade Commission is planning to issue a fine to Facebook for violating an agreement dating back to 2012 in which Facebook promised to keep certain user information private. No details on when this fine may happen, or how much it will be, have been released. However, it’s sure to be much larger than the £500,000 fine issued to Facebook by the United Kingdom back in October of last year. Google, meanwhile, has just been fined $57 million, which happens to be the largest GDPR-related fine ever issued, because Google failed to go far enough in obtaining user consent to collect data for targeted advertising. So the question is, when will we see more enforcement in the US like we see in Europe? With the current government shutdown, we’re not going to see anything happen soon, and regardless of countless data breaches, it’s anyone’s guess whether this year will be the year for a federal data privacy law.
Organizations’ internal networks are overly permissive and can’t distinguish trusted from untrusted applications. Attackers abuse this condition to move laterally through networks, bypassing address-based controls to spread malware. Edgewise abstracts security policies away from traditional network controls that rely on IP addresses, ports, and protocols and instead ties controls directly to applications and their data paths.
Edgewise allows organizations to analyze the network attack surface and segment workloads based on the software and how it’s communicating. Edgewise monitors applications and protects data paths using zero trust segmentation.
Visit edgewise.net to get your free month of visibility.
You may have seen reports on all the major national news channels here in the US about Nest cameras being hacked, allowing an attacker to talk through the camera and say scary phrases like “I’m going to kidnap your baby, I’m in your baby’s room.” Most of these stories carry sensational headlines without much context. So how are attackers gaining access to so many Nest cameras all of a sudden? The answer is actually pretty trivial, and it has to do with an attack called credential stuffing. Credential stuffing is where an attacker uses usernames and passwords obtained from previous data breaches to compromise user accounts on many different sites and services. Databases of usernames and passwords from previous data breaches are easily available, either for sale on the Dark Web or through some creative Google searching on the Internet. Once these credentials are obtained, the attacker uses a script or program to try logging into hundreds of websites until successful logins are found. Once the attacker has a successful login, other sites and services are then tried to see if the same password was used. And that is the key to this attack: if you happen to use the same password across all the sites and services you use, you can easily become a victim. This is exactly what happened in the case of all these Nest cameras being hacked. So how do you prevent yourself from becoming a victim and having your Nest or other camera hijacked?
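The mechanics of credential stuffing can be sketched in a few lines. This is a toy simulation only: the breach dump, the service names, and the in-memory account tables are all made-up stand-ins for the leaked credentials and automated login checks an attacker’s script would run.

```python
# Made-up leaked email/password pairs, as found in a breach dump.
breach_dump = [("alice@example.com", "winter2018"),
               ("bob@example.com", "correcthorse")]

# Mock "services": account databases keyed by email. Alice reused her
# breached password on the camera service; Bob chose a unique one for mail.
services = {
    "camera-service": {"alice@example.com": "winter2018"},
    "mail-service":   {"bob@example.com": "Tr0ub4dor&3"},
}

def stuff_credentials(dump, services):
    """Try every leaked email/password pair against every service."""
    hits = []
    for email, password in dump:
        for name, accounts in services.items():
            if accounts.get(email) == password:  # reused password -> account takeover
                hits.append((name, email))
    return hits

print(stuff_credentials(breach_dump, services))
# → [('camera-service', 'alice@example.com')]
```

Only the reused password succeeds, which is why this attack scales so well against people who use one password everywhere, and fails completely against unique passwords.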
Well, it all goes back to basic password security. First, make sure you’re using a password manager, and always ensure you’re using random and complex passwords for each site and service you use. Second, always enable two-factor authentication whenever it’s available. In the case of Nest, there is an option to enable two-factor authentication, but it’s not enabled by default, so check your Nest account settings and turn this feature on. Other smart cameras, notably Ring cameras, don’t offer two-factor authentication at all, so your best defense in those cases is a strong, unique password. Your mileage may vary, as account security for smart cameras and other Internet of Things devices is typically not very good at all and is always subject to change.
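A password manager does this for you, but for the curious, here is one minimal way to generate a long, random, per-site password using Python’s standard `secrets` module (which draws from a cryptographically secure source, unlike `random`). The length and character set here are illustrative choices, not a recommendation from the show.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a fresh, unique password for each site you sign up for.
pw = generate_password()
print(len(pw), pw)  # 20 characters, different every run
```

A password like this defeats credential stuffing outright: even if one site leaks it, it unlocks nothing else.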
That’s all for this week’s show. Be sure to follow the Shared Security Podcast on Facebook, Twitter and Instagram for the latest news and commentary. If you have feedback or topic ideas for the show you can email us at feedback[aT]sharedsecurity.net. First time listener to the podcast? Please subscribe on your favorite podcast listening app such as Apple Podcasts or watch and subscribe on our YouTube channel. Thanks for listening and see you next week for another episode of the Shared Security Weekly Blaze.
The post The Lack of US Privacy Regulations, Nest Cameras Hijacked appeared first on Shared Security Podcast.

Jan 21, 2019 • 10min
Ring Doorbell Privacy Concerns, Recent Password Breach News, Biometrics and Fifth Amendment Rights
This is your Shared Security Weekly Blaze for January 21st 2019 with your host, Tom Eston. In this week’s episode: Ring doorbell privacy concerns, news on a recent password breach, and a new ruling on biometrics and Fifth Amendment rights.
Silent Pocket is a proud sponsor of the Shared Security Podcast! Silent Pocket offers a patented Faraday cage product line of phone cases, wallets and bags that can block all wireless signals, which will make your devices instantly untrackable, unhackable and undetectable. Use discount code “sharedsecurity” to receive 15% off of your order. Visit silent-pocket.com to take advantage of this exclusive offer.
Hi everyone, welcome to the Shared Security Weekly Blaze where we update you on the top 3 cybersecurity and privacy topics from the week. These podcasts are published every Monday and are 15 minutes or less quickly giving you “news that you can use”.
Amazon, which now owns popular smart doorbell maker Ring, is being accused of mishandling video footage from customers’ cameras. In a report from The Intercept, Ring is accused of mishandling videos taken from its line of smart home security cameras and allowing unrestricted access to these videos by internal employees. According to the article, in 2016 Ring moved its R&D operations to Ukraine as a cost-saving measure, and the team had, quote, “unfettered access to a folder on Amazon’s S3 cloud storage service that contained every video created by every Ring camera around the world,” end quote. On top of that, there was a database that allowed internal users to search for any videos linked to a particular user, and Ring executives and engineers in the US were allowed, quote, “unfiltered, round-the-clock live feeds from some customer cameras,” end quote.
Apparently, Ring uses this team in Ukraine to manually tag videos so that one day Ring’s AI technology can be trained on this type of metadata. Videos from Ring’s line of smart cameras can contain footage from both outside and inside someone’s house. Ring responded to The Intercept article with the following statement, quote:
“We take the privacy and security of our customers’ personal information extremely seriously. In order to improve our service, we view and annotate certain Ring videos. These videos are sourced exclusively from publicly shared Ring videos from the Neighbors app (in accordance with our terms of service), and from a small fraction of Ring users who have provided their explicit written consent to allow us to access and utilize their videos for such purposes.” end quote. There was more to their statement about their internal policies, but I think you get the idea. The Intercept’s sources for this story, of course, dispute these claims from Ring’s management.
While one can argue about the trustworthiness of this article, it does make a great point. If you’re using a smart device like a Ring doorbell camera that saves its video or data to the cloud, you should probably assume that someone else will most likely be able to view your data. Regardless of what the company’s privacy policy or terms of use say, there will always be ways for internal employees to access this data. From customer support situations to using your data to improve their own technology, companies will find creative ways to leverage incredibly valuable private information, especially from video feeds.
Organizations’ internal networks are overly permissive and can’t distinguish trusted from untrusted applications. Attackers abuse this condition to move laterally through networks, bypassing address-based controls to spread malware. Edgewise abstracts security policies away from traditional network controls that rely on IP addresses, ports, and protocols and instead ties controls directly to applications and their data paths.
Edgewise allows organizations to analyze the network attack surface and segment workloads based on the software and how it’s communicating. Edgewise monitors applications and protects data paths using zero trust segmentation.
Visit edgewise.net to get your free month of visibility.
When you see articles with sensational titles like “Hack Brief: An Astonishing 773 Million Records Exposed in Monster Breach” you usually think this is a pretty serious situation. However, in this day and age, don’t be so quick to jump to conclusions: in this case, these 773 million records with 21 million unique passwords are actually a collection of past data from many different data breaches. This data dump, called “Collection #1”, is approximately 87GB in size and was first analyzed by Troy Hunt, who manages the HaveIBeenPwned data breach notification service. Troy Hunt confirmed that this data was in fact made up of many different data breaches from many different sources. Brian Krebs from KrebsOnSecurity.com went a step further and contacted the seller of this data to find out more details. In those discussions, the seller actually steered Brian away from “Collection #1”, saying that the data was at least 2-3 years old. The seller then tried to sell him more recent data, which was less than 4GB in size and less than a year old.
So besides trying not to fall for “clickbait” articles like the one from Wired, the moral of this story is that collections of data from previous breaches are big business. Data like this can easily be repackaged and resold as a “recent” data breach with very few ramifications. The takeaway is that if your information was ever part of one of these data breaches, it can easily be recycled over and over to the highest bidder. As we always say, you should periodically think about your password management strategy. This should include using a password manager, choosing unique passwords for each and every site and service you use, and using two-factor authentication (preferably app-based) wherever it’s available.
Last week a US judge ruled that law enforcement cannot force individuals to unlock their mobile devices through biometrics like a fingerprint or face, whether or not a warrant has been issued. The judge, who was presiding over a case in the US District Court for the Northern District of California, ruled that forcing someone to unlock their device through biometrics violates that person’s Fifth Amendment rights against self-incrimination. This development is a long time coming, as it was previously held that law enforcement had the right to force people to unlock a device with their face or finger. Before this new ruling, law enforcement treated biometrics differently from passcodes: while a suspect could not be compelled to reveal a passcode, they could be forced to unlock their device with a face or fingerprint upon request. The judge said, “There are other ways that the government might access the content that do not trample on the Fifth Amendment.”
You may remember that I mentioned this exact topic back in October of last year, when for the first time ever there was a documented case of law enforcement forcing an Apple iPhone X owner to unlock their device with their face. With this recent development, it’s great to see biometrics being treated the same way as passcodes from a Fifth Amendment perspective. I’ll bet that future cases will challenge this ruling, but in the meantime, let’s call it a victory for our privacy.
That’s all for this week’s show. Be sure to follow the Shared Security Podcast on Facebook, Twitter and Instagram for the latest news and commentary. If you have feedback or topic ideas for the show you can email us at feedback[aT]sharedsecurity.net. First time listener to the podcast? Please subscribe on your favorite podcast listening app such as Apple Podcasts or watch and subscribe on our YouTube channel. Thanks for listening and see you next week for another episode of the Shared Security Weekly Blaze.
The post Ring Doorbell Privacy Concerns, Recent Password Breach News, Biometrics and Fifth Amendment Rights appeared first on Shared Security Podcast.


