Security

Ring Doorbell Android app packed with third-party trackers

The original headline from the Electronic Frontier Foundation was:

Ring Doorbell App Packed with Third-Party Trackers

To me, that gave the appearance that the iOS app was packed with trackers. But the article itself doesn’t mention Apple or iOS even once, and makes it clear the issue is with the Android app. Just wanted to call that out.

On to the article itself:

An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.

The issue is not the danger of your doorbell video or statistics being leaked, but that the trackers can be used to connect your IP address and other identifying info to other devices, building an online profile showing where you live and what other online information is linked to you.

This cohesive whole represents a fingerprint that follows the user as they interact with other apps and use their device, in essence providing trackers the ability to spy on what a user is doing in their digital lives and when they are doing it.

I hate this behavior. I love the idea of a video doorbell, but I continue to wait for one that is devoid of trackers, truly anonymized.

[VIDEO] A Cellebrite kiosk, unlocking smartphones for Police Scotland

[VIDEO] The video embedded in the main Loop post is purported to show a Cellebrite police kiosk, used to unlock cell phones. Here’s a link to the Cellebrite Kiosk product page. Indeed, it does appear to be one and the same, even though the Police Scotland page does not specifically mention the name Cellebrite.

Though the phone in the video appears to have a USB-C connector, Cellebrite does claim to be able to unlock both Android and iOS devices (iOS 7 to iOS 12.3).

Apple dropped plan for encrypting backups after FBI complained – sources

Reuters:

Apple Inc dropped plans to let iPhone users fully encrypt backups of their devices in the company’s iCloud service after the FBI complained that the move would harm investigations, six sources familiar with the matter told Reuters.

And:

The tech giant’s reversal, about two years ago, has not previously been reported. It shows how much Apple has been willing to help U.S. law enforcement and intelligence agencies, despite taking a harder line in high-profile legal disputes with the government and casting itself as a defender of its customers’ information.

And:

When Apple spoke privately to the FBI about its work on phone security the following year, the end-to-end encryption plan had been dropped, according to the six sources. Reuters could not determine why exactly Apple dropped the plan.

“Legal killed it, for reasons you can imagine,” another former Apple employee said he was told, without any specific mention of why the plan was dropped or if the FBI was a factor in the decision.

Because this story is about a decision made several years ago, it’s not clear that Apple will ever comment on it. But it’s another piece of the big picture of how Apple handles your privacy and how it responds to requests from the FBI, et al., to hand over information about seized phones.

You can now use iPhones as Google security keys for 2FA

9to5Google:

Last year, Google announced that all Android 7+ devices can be used as two-factor authentication when signing into Gmail, Drive, and other first-party services. Most modern iPhones can now be used as a built-in phone security key for Google apps.

And:

A built-in phone security key differs from the Google Prompt, though both essentially share the same UI. The latter push-based approach is found in the Google Search app and Gmail, while today’s announcement is more akin to a physical USB-C/Lightning key in terms of being resistant to phishing attempts and verifying who you are. Your phone security key needs to be physically near (within Bluetooth range) the device that wants to log-in. The login prompt is not just being sent over an internet connection.

Feels like a step in the right direction, a tool to help stop SIM-swapping. Ultimately, I’d love all my login services to offer a setting that limits logins to Face ID only, with Face ID required to change that setting as well.

Wall Street Journal Editorial Board op-ed backs Apple in encryption battle

The op-ed is a long, logical walkthrough of the claims made by Attorney General Barr and the counterclaims about the value of both privacy and encryption.

But at its heart:

Apple is no doubt looking out for its commercial interests, and privacy is one of its selling points. But its encryption and security protections also have significant social and public benefits. Encryption has become more important as individuals store and transmit more personal information on their phones — including bank accounts and health records — amid increasing cyber-espionage.

Criminals communicate over encrypted platforms, but encryption protects all users including business executives, journalists, politicians, and dissenters in non-democratic societies. Any special key that Apple created for the U.S. government to unlock iPhones would also be exploitable by bad actors.

If American tech companies offer backdoors for U.S. law enforcement, criminals would surely switch to foreign providers. This would make it harder to obtain data stored on cloud servers. Apple says it has responded to more than 127,000 requests from U.S. law enforcement agencies over the past seven years. We doubt Huawei would be as cooperative.

A worthy read.

Apple takes a (cautious) stand against opening a killer’s iPhones

Inflammatory headline aside, this New York Times piece is chock full of interesting quotes:

Executives at Apple have been surprised by the case’s quick escalation, said people familiar with the company who were not authorized to speak publicly. And there is frustration and skepticism among some on the Apple team working on the issue that the Justice Department hasn’t spent enough time trying to get into the iPhones with third-party tools, said one person with knowledge of the matter.

And:

The stakes are high for Mr. Cook, who has built an unusual alliance with President Trump that has helped Apple largely avoid damaging tariffs in the trade war with China. That relationship will now be tested as Mr. Cook confronts Mr. Barr, one of the president’s closest allies.

And:

At the heart of the tussle is a debate between Apple and the government over whether security or privacy trumps the other. Apple has said it chooses not to build a “backdoor” way for governments to get into iPhones and to bypass encryption because that would create a slippery slope that could damage people’s privacy.

And:

Bruce Sewell, Apple’s former general counsel who helped lead the company’s response in the San Bernardino case, said in an interview last year that Mr. Cook had staked his reputation on the stance. Had Apple’s board not agreed with the position, Mr. Cook was prepared to resign, Mr. Sewell said.

And:

Mr. Cook has made privacy one of Apple’s core values. That has set Apple apart from tech giants like Facebook and Google, which have faced scrutiny for vacuuming up people’s data to sell ads.

“It’s brilliant marketing,” Scott Galloway, a New York University marketing professor who has written a book on the tech giants, said of Apple. “They’re so concerned with your privacy that they’re willing to wave the finger at the F.B.I.”

And:

A Justice Department spokeswoman said in an email: “Apple designed these phones and implemented their encryption. It’s a simple, ‘front-door’ request: Will Apple help us get into the shooter’s phones or not?”

This is a giant issue. I don’t think there’s any way for a master encryption key to be created that won’t eventually get leaked or stolen.

If such a key were created, is there a case so important that it would make putting that key in the hands of the world at large worth the risk? To me, that’s the heart of the dilemma.

Apple’s official response to AG Barr over unlocking Pensacola shooter’s phone

Input:

Earlier today Attorney General William Barr called on Apple to unlock the alleged phone of the Pensacola shooter — a man who murdered three people and injured eight others on a Naval base in Florida in December. Apple has responded by essentially saying: “no.”

I disagree with this characterization. Read Apple’s response. It’s more nuanced. If I had to capture it simply, I’d quote this paragraph:

We have always maintained there is no such thing as a backdoor just for the good guys. Backdoors can also be exploited by those who threaten our national security and the data security of our customers. Today, law enforcement has access to more data than ever before in history, so Americans do not have to choose between weakening encryption and solving investigations. We feel strongly encryption is vital to protecting our country and our users’ data.

Follow the headline link, read Apple’s response for yourself.

Motherboard: We tested Ring’s security. It’s awful

Joseph Cox, Motherboard:

From across the other side of the world, a colleague has just accessed my Ring account, and in turn, a live-feed of a Ring camera in my apartment. He sent a screenshot of me stretching, getting ready for work. Then a second colleague accessed the camera from another country, and started talking to me through the Ring device.

Earlier today, we posted about the Apple, Amazon, Google alliance designing an IoT open standard. I’d love to see Amazon close up these security holes.

Until then, I’ll limit my video doorbell candidates to those that sign up for HomeKit Secure Video.

The iPhone 11 Pro’s location data puzzler

Krebs On Security:

One of the more curious behaviors of Apple’s new iPhone 11 Pro is that it intermittently seeks the user’s location information even when all applications and system services on the phone are individually set to never request this data. Apple says this is by design, but that response seems at odds with the company’s own privacy policy.

and:

“We do not see any actual security implications,” an Apple engineer wrote in a response to KrebsOnSecurity. “It is expected behavior that the Location Services icon appears in the status bar when Location Services is enabled. The icon appears for system services that do not have a switch in Settings” [emphasis added].

There’s been a lot of discussion since this piece dropped. At its core, there seem to be system services that use Location Services without a Settings switch to disable that usage.

Grain of salt time. Interesting that this seems to be specific to the iPhone 11 Pro, and not occurring in earlier models. Seems to me that Apple should address this with a technical note or some added verbiage in the Location Services documentation.

Alexa and Google Home devices leveraged to phish and eavesdrop on users, again

Catalin Cimpanu, ZDNet:

Hackers can abuse Amazon Alexa and Google Home smart assistants to eavesdrop on user conversations without users’ knowledge, or trick users into handing over sensitive information.

And regarding the word “again” in the headline:

The attacks aren’t technically new. Security researchers have previously found similar phishing and eavesdropping vectors impacting Amazon Alexa in April 2018; Alexa and Google Home devices in May 2018; and again Alexa devices in August 2018.

Whack-a-mole. Amazon and Google respond to attacks with countermeasures, new attacks pop up.

As to the specifics, watch the videos embedded in the linked article. The phishing attack asks you for your password. Though there are some people who might actually respond to this, I’d guess most users would instantly get the evil intent here. But still, the fact that such an action exists, that it passes muster enough to be demo-able, does give me pause.

More troubling is the eavesdropping issue shown in the second set of videos. The fact that an action continues, even after you ask Alexa/Google to stop, does seem like it should not be allowed to happen.

Is this lack of security the price you pay for customizable actions?

Oregon judge ordered woman to type in her iPhone passcode so police could search it for evidence against her

Aimee Green, Oregon Live:

Police wanted to search the contents of an iPhone they found in Catrice Pittman’s purse, but she never confirmed whether it was hers and wasn’t offering up a passcode. Her defense attorney argued forcing her to do so would violate her rights against self-incrimination under the Fifth Amendment of the U.S. Constitution and Article 1 Section 12 of the Oregon Constitution.

But a Marion County judge sided with police and prosecutors by ordering Pittman to enter her passcode. On Wednesday, the Oregon Court of Appeals agreed with that ruling — in a first-of-its-kind opinion for an appeals court in this state.

This is a precedent that will resonate, making it more likely that courts will order defendants to unlock their phones.

Side note, I found this sequence very interesting:

Scott said the ruling won’t affect many Oregon defendants whose phones are seized by police because police already have technology that allows them to crack into most of those phones.

But:

The latest iPhones, more often than other phones, have proven difficult, Scott said.

“For people who want their information private, I would recommend getting an iPhone,” Scott said. “And Apple is not paying me to say that.”

Yet another reason to buy an iPhone.

On Apple sharing some portion of your web browsing history with Chinese conglomerate Tencent

First off:

  • Fire up your iPhone, head to Settings > Safari
  • Now tap the link that says “About Safari & Privacy…” (it’s the second of these links, just under the Check for Apple Pay switch)
  • Scroll down to the section labeled “Fraudulent Website Warning”

At the bottom of that section:

Before visiting a website, Safari may send information calculated from the website address to Google Safe Browsing and Tencent Safe Browsing to check if the website is fraudulent. These safe browsing providers may also log your IP address.

Those words have raised a lot of eyebrows. The headline linked article digs into some history and lays out the concerns. Start off by reading the section “What is ‘Safe Browsing’, and is it actually safe?” That’ll set the table for why Google’s Safe Browsing is imperfect where privacy is concerned.

Which leads to:

The problem is that Safe Browsing “update API” has never been exactly “safe”. Its purpose was never to provide total privacy to users, but rather to degrade the quality of browsing data that providers collect. Within the threat model of Google, we (as a privacy-focused community) largely concluded that protecting users from malicious sites was worth the risk. That’s because, while Google certainly has the brainpower to extract a signal from the noisy Safe Browsing results, it seemed unlikely that they would bother. (Or at least, we hoped that someone would blow the whistle if they tried.)

But Tencent isn’t Google. While they may be just as trustworthy, we deserve to be informed about this kind of change and to make choices about it. At very least, users should learn about these changes before Apple pushes the feature into production, and thus asks millions of their customers to trust them.
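For a concrete sense of what “information calculated from the website address” means, here’s a minimal sketch, assuming the standard hash-prefix scheme the Safe Browsing update API uses (SHA-256 over the canonicalized URL, truncated to a few bytes). You can try it in Terminal:

# Hash a canonicalized URL, then keep only the first 4 bytes (8 hex
# characters). A prefix this short matches many different URLs, which
# is what gives the scheme its imperfect privacy.
echo -n "example.com/" | shasum -a 256 | cut -c1-8

The provider sees a prefix like that rather than the URL itself, and only a local prefix match triggers a follow-up request for full hashes. As the linked post argues, a provider determined to extract a signal from those prefixes over time still could.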

OK, now you’re caught up. Is this a tempest in a teapot or a genuine privacy concern? Looking forward to an official response from Apple.

UPDATE: And here’s Apple’s official statement:

Apple protects user privacy and safeguards your data with Safari Fraudulent Website Warning, a security feature that flags websites known to be malicious in nature. When the feature is enabled, Safari checks the website URL against lists of known websites and displays a warning if the URL the user is visiting is suspected of fraudulent conduct like phishing.

To accomplish this task, Safari receives a list of websites known to be malicious from Google, and for devices with their region code set to mainland China, it receives a list from Tencent. The actual URL of a website you visit is never shared with a safe browsing provider and the feature can be turned off.

How Safari and iMessage have made iPhones less secure

The headline seemed sensationalistic, so I started reading filled with skepticism. That said, I did find the article well written and full of interesting detail.

A few examples:

Apple requires that all iOS web browsers—Chrome, Firefox, Brave, or any other—be built on the same WebKit engine that Safari uses. “Basically it’s just like running Safari with a different user interface,” Henze says. Apple demands browsers use WebKit, Henze says, because the complexity of running websites’ JavaScript requires browsers to use a technique called just-in-time (or JIT) compilation as a time-saving trick. While programs that run on an iOS device generally need to be cryptographically signed by Apple or an approved developer, a browser’s JIT speed optimization doesn’t include that safeguard.

As a result, Apple has insisted that only its own WebKit engine be allowed to handle that unsigned code. “They trust their own stuff more,” Henze says. “And if they make an exception for Chrome, they have to make an exception for everyone.”

The point being made here is that Apple bottlenecks all browser activity through WebKit. To me, this seems a solid approach, as long as WebKit is bulletproof.

The problem with making WebKit mandatory, according to security researchers, is that Apple’s browser engine is in some respects less secure than Chrome’s.

There’s the rub, if that’s truly the case. Seems to me, no matter the choice Apple makes here, there will be security holes. The key is how quickly Apple responds to identified flaws. My (possibly uninformed) sense is that Apple closes loopholes before they become widely known, or quickly issues a patch if exploits do become public.

As to Messages:

Hackable flaws in iMessage are far rarer than those in WebKit. But they’re also far more powerful, given that they can be used as the first step in a hacking technique that takes over a target phone with no user interaction. So it was all the more surprising last month to see Natalie Silvanovich, a researcher with Google’s Project Zero team, expose an entire collection of previously unknown flaws in iMessage that could be used to enable remote, zero-click takeovers of iPhones.

Read Apple’s reply to the Project Zero accusations.

More disturbing than the existence of those individual bugs was that they all stemmed from the same security issue: iMessage exposes to attackers its “unserializer,” a component that essentially unpacks different types of data sent to the device via iMessage.

All very interesting. I’m betting that Apple is working hard to identify and fix attack vectors in WebKit and better sandbox Messages. I think it’s a safe bet that none of this information is new to Apple.

Apple: A message about iOS security

A statement from Apple about last week’s Google vulnerability blog post:

Last week, Google published a blog about vulnerabilities that Apple fixed for iOS users in February. We’ve heard from customers who were concerned by some of the claims, and we want to make sure all of our customers have the facts.

First, the sophisticated attack was narrowly focused, not a broad-based exploit of iPhones “en masse” as described. The attack affected fewer than a dozen websites that focus on content related to the Uighur community. Regardless of the scale of the attack, we take the safety and security of all users extremely seriously.

Google’s post, issued six months after iOS patches were released, creates the false impression of “mass exploitation” to “monitor the private activities of entire populations in real time,” stoking fear among all iPhone users that their devices had been compromised. This was never the case. Second, all evidence indicates that these website attacks were only operational for a brief period, roughly two months, not “two years” as Google implies. We fixed the vulnerabilities in question in February — working extremely quickly to resolve the issue just 10 days after we learned about it. When Google approached us, we were already in the process of fixing the exploited bugs.

Security is a never-ending journey and our customers can be confident we are working for them. iOS security is unmatched because we take end-to-end responsibility for the security of our hardware and software. Our product security teams around the world are constantly iterating to introduce new protections and patch vulnerabilities as soon as they’re found. We will never stop our tireless work to keep our users safe.

Appreciate the clarification here.

Scammer successfully Deepfaked CEO’s voice to fool underling into transferring $243,000

Gizmodo:

The CEO of an energy firm based in the UK thought he was following his boss’s urgent orders in March when he transferred funds to a third-party. But the request actually came from the AI-assisted voice of a fraudster.

Stories of AI fakes fooling real people continue to roll out. And, I suspect, they’ll only become more numerous as the tools for video and audio deep fakes become more prevalent and more sophisticated.

I’m less worried about someone calling, pretending to be someone who needs me to transfer funds. I’m more concerned about the scam where someone imitates a family member, either in peril or just asking for some personal information. In the heat of the moment, it’s easy to fall for something like this.

Google lays out iOS malware exploits found in the wild, but already patched by Apple back in February

As you make your way around the blogosphere this morning, you’re sure to see a number of articles highlighting mysterious, indiscriminate iPhone attacks that quietly hacked iPhones for years.

There’s a nugget of truth there, but as always, best to go straight to the horse’s mouth, this blog post from Google’s Project Zero.

Earlier this year Google’s Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant.

And:

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Most importantly:

We reported these issues to Apple with a 7-day deadline on 1 Feb 2019, which resulted in the out-of-band release of iOS 12.1.4 on 7 Feb 2019. We also shared the complete details with Apple, which were disclosed publicly on 7 Feb 2019.

So, the way I read this, Google uncovered the threat, reported it to Apple back in February, and Apple issued a patch pretty much immediately.

This is a news story, fair enough, but it’s about a problem that’s been long solved. Keep that grain of salt deeply in mind.

Say cheese: Ransomware-ing a DSLR camera

Eyal Itkin, Checkpoint:

Our research shows how an attacker in close proximity (WiFi), or an attacker who already hijacked our PC (USB), can also propagate to and infect our beloved cameras with malware. Imagine how would you respond if attackers inject ransomware into both your computer and the camera, causing them to hold all of your pictures hostage unless you pay ransom.

I can’t imagine this ever being worth the time for a hacker, but just another example of why we can’t have nice things. And the questionable value of adding the internet to everything.

Black Hat presenter demonstrates how to bypass Face ID on unconscious iPhone owner

Threatpost, via 9to5Mac:

Researchers on Wednesday during Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim’s FaceID and log into their phone simply by putting a pair of modified glasses on their face. By merely placing tape carefully over the lenses of a pair of glasses and placing them on the victim’s face, the researchers demonstrated how they could bypass Apple’s FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

Obviously, this is a very slim scenario, requiring an unconscious victim. But it does raise the specter of law enforcement rendering someone unconscious in order to break into their phone.

Apple makes huge increases to its bug bounty program, top award hits $1M

Juli Clover, MacRumors:

Apple is introducing an expanded bug bounty program that covers macOS, tvOS, watchOS, and iCloud as well as iOS devices, Apple’s head of security engineering Ivan Krstić announced this afternoon at the Black Hat conference in Las Vegas.

Someone is going to pay for those vulnerability details. Way better for everyone if it’s Apple.

Israeli security firm claims spyware tool can harvest iCloud data in targeted iPhone attack

Tim Hardwick, MacRumors:

An Israeli security firm claims it has developed a smartphone surveillance tool that can harvest not only a user’s local data but also all their device’s communications with cloud-based services provided by the likes of Apple, Google, Amazon, and Microsoft.

From the paywalled Financial Times article that broke the story:

The new technique is said to copy the authentication keys of services such as Google Drive, Facebook Messenger and iCloud, among others, from an infected phone, allowing a separate server to then impersonate the phone, including its location.

This grants open-ended access to the cloud data of those apps without “prompting 2-step verification or warning email on target device”, according to one sales document.

And don’t miss this response from Apple:

In response to the report, Apple told FT that its operating system was “the safest and most secure computing platform in the world. While some expensive tools may exist to perform targeted attacks on a very small number of devices, we do not believe these are useful for widespread attacks against consumers.”

Um. That is quite different from a denial, which makes me think this story is true. And once the tools are out there, you know they will find their way into black hat hands. Hopefully, Apple will silently update my devices with a leapfrog update to obsolete these tools.

Mac Zoom client vulnerability allows malicious website to access your camera

I have gotten into the habit of putting a post-it over my Mac camera. Some folks laugh at this, but this is exactly the reason why.

That said, the headline link is a Medium post with all the details. Most damning, though:

Additionally, if you’ve ever installed the Zoom client and then uninstalled it, you still have a localhost web server on your machine that will happily re-install the Zoom client for you, without requiring any user interaction on your behalf besides visiting a webpage. This re-install ‘feature’ continues to work to this day.

If you’ve ever installed Zoom on your Mac and want to check for this local server, go to Terminal (it’s in Applications/Utilities) and type:

lsof -i :19421

If you enter the command and nothing comes back, you’re good. If you do get a result, you’ve got that web server running. If you don’t intentionally want that server running, here’s a tweet with instructions on killing it.
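For reference, the fix that circulated with the disclosure boils down to killing the server process and then blocking the reinstall directory. A sketch, where <PID> is a placeholder for whatever process ID the lsof command above reported:

# Kill the hidden web server (substitute the PID lsof reported).
kill -9 <PID>
# Remove the directory the server reinstalls Zoom from, then create a
# plain file with the same name so the directory can't be recreated.
rm -rf ~/.zoomus
touch ~/.zoomus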

One final note on this. Here’s Zoom’s official response to all of this, posted on their blog as Response to Video-On Concern.

If you are a Zoom user, worth reading the linked Medium post and Zoom’s response. Then stick some post-its on your Mac camera. Just to be safe.

What to do if you get SIM-swapped

MyCrypto and CipherBlade, via Medium:

MyCrypto and CipherBlade have collaborated on this article to help you understand the dangers of a SIM-jacking attack, and how best to defend yourself against an attack, and how to recover from such an event. This article aims to be a “one-stop” article to read, reference, and share with your friends and colleagues. It’s not short, but it’s thorough.

If you’ve been following our stories on SIM-swapping, this should be in your reading queue. Full of detail. I’ve gotten a lot of responses to the posts, both warning me how many people are vulnerable to having their lives turned upside down by this hack and weighing in with takes on best practices.

I’m far from an expert here, but this seems a solid resource, worth bookmarking and passing along. If you have an opinion on the linked post, either pro or con, please do ping me.

[H/T Ricky de Laveaga]

Everybody is getting tragically SIM swapped and you will too

Been reading a lot about folks getting SIM swapped lately. We posted this SIM-swap horror story a few days ago, and followed up with this story on the strategy that other countries are using but that the US is not.

Came across the headline linked post from Tony Sheng. An interesting read; I’m left wondering whether it’s simply alarmist or genuinely insightful.

In a nutshell, Tony got SIM-swapped and went into great detail on the process and what he did to minimize harm. His highest priority:

Disassociated my phone number from my email address. If you connect your phone number to your email, then a hacker with your phone number can reset your password and take over your email address.

Once they have your email and your phone number, they can reset passwords on pretty much all your accounts for which you don’t have physical 2FA (like a Yubikey).

Step 1 is far and away the most important. If you haven’t done this yet, stop reading and do it now.

Not sure how you do that. Do you use a secondary email address for verification? YubiKey is a hardware dongle. Secure, but not convenient.

Opinions on this? Please tweet at me with how you solve this problem.

The SIM swap fix that the US isn’t using

Andy Greenberg, Wired:

…an escalating pattern of fraud based on so-called SIM swap attacks, where hackers trick or bribe a phone company employee into switching the SIM card associated with a victim’s phone number. The attackers then use that hijacked number to take over banking or other online accounts. According to Tenreiro, the bank had seen more than 17 SIM swap frauds every month. The problem was only getting worse.

And:

SIM swap hackers rely on intercepting a one-time password sent by text after stealing a victim’s banking credentials, or by using the phone number as a password reset fallback. So the phone company, Tenreiro says, offered a straightforward fix: The carrier would set up a system to let the bank query phone records for any recent SIM swaps associated with a bank account before they carried out a money transfer. If a SIM swap had occurred in, say, the last two or three days, the transfer would be blocked. Because SIM swap victims can typically see within minutes that their phone has been disabled, that window of time let them report the crime before fraudsters could take advantage.

I recognize that this is a game of whack-a-mole, where one security hole is plugged and another one is discovered. But this seems a pretty solid solution.
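To make the carrier-check logic concrete, here’s a minimal sketch of the decision described above. Everything in it is invented for illustration; a real bank would query the carrier’s records rather than hard-code a timestamp:

# Pretend the carrier reported a SIM change one day ago (macOS/BSD
# date syntax; hard-coded so the sketch runs stand-alone).
last_swap=$(date -v-1d +%s)
now=$(date +%s)
# Block transfers for three days after any SIM swap.
window=$(( 3 * 24 * 3600 ))
if [ $(( now - last_swap )) -lt $window ]; then
  echo "Recent SIM swap detected: hold the transfer and verify out of band."
else
  echo "No recent SIM swap: allow the transfer."
fi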

By August of 2018, Mozambique’s largest bank was performing SIM swap checks with all the major carriers. “It reduced their SIM swap fraud to nearly zero overnight.”

Why is the US not following in Mozambique’s SIM-securing footsteps?

CTIA vice president for technology and cybersecurity John Marinho argued that while US carriers may not offer real-time SIM swap checks, that’s in part because the US has other protections, like geolocation checks based on banks’ mobile applications installed on smartphones, and two-factor authentication. (The latter, of course, is exactly the security measure SIM swaps attempt to circumvent.)

Fascinating read.

[H/T @Varunorcv]

SIM swap horror story: I’ve lost decades of data and Google won’t lift a finger

Matthew Miller, writing a story that no one wants to write:

At 11:30 pm on Monday, 10 June, my oldest daughter shook my shoulder to wake me up from a deep sleep. She said that it appeared my Twitter account had been hacked. It turns out that things were much worse than that.

After rolling out of bed, I picked up my Apple iPhone XS and saw a text message that read, “T-Mobile alert: The SIM card for xxx-xxx-xxxx has been changed. If this change is not authorized, call 611.”

This reads like a nightmare, told with ever-increasing dread. Yes, it does get worse.

Read to the end for a section called “Recommendations for your security” but, since all this is still not resolved, I’m hoping for a “Lessons learned” update. And, for Matthew’s sake, I hope the resolution happens quickly.

You care more about your privacy than you think

Charlie Warzel, New York Times, writing about this experiment by Harvard researcher Dan Svirsky:

Svirsky ran a series of tests where he had participants fill out online surveys for money and made them decide whether to share their Facebook profile data with a survey taker in exchange for a bonus (in some cases, 50 cents). In a direct trade-off scenario, Svirsky found that 64 percent of participants refused to share their Facebook profile in exchange for 50 cents and a majority were “unwilling to share their Facebook data for $2.50.” In sum: Respondents generally sacrificed a small bonus to keep from turning over personal information.

And:

But things changed when Svirsky introduced the smallest bit of friction. When participants were faced with what he calls “a veiled trade-off,” where survey takers had to click to learn whether taking the survey without connecting to Facebook would be free or cost them 50 cents, only 40 percent ended up refusing to share their data. And 58 percent of participants did not click to reveal which payment option was associated with privacy, even though doing so cost them nothing more than a second of their time.

I came across this article in this Daring Fireball post. From the post:

The lack of friction in the Sign In With Apple experience — especially using a device with Face ID or Touch ID — is a key part of why I expect it to be successful. It’s not just more private than signing in with Google or Facebook, it’s as good or better in terms of how few steps it takes.

The genius of the Google button is reducing friction for the user, easing them into sharing data from an already existing account. Even if your browser or app makes it easy to enter your email and password, using Touch ID or Face ID, there’s still friction in that sequence. The Google button is one simple step. With a privacy cost.

Sign in with Apple (SiwA) has the same lack of friction as the Google button. But without the privacy sacrifice. To me, this takes a good thing and makes it a great thing. I look forward to seeing SiwA in the wild.

Google reacts to “Sign in with Apple”

If you haven’t already, take a few minutes to read Sarah Perez’s excellent Answers to your burning questions about “Sign in with Apple”.

Once you’ve got your head wrapped around that, follow the headline link for The Verge’s interview with Google product management director Mark Risher. A few highlights:

Apple shook up the world of logins last week, offering a new single sign-on (or SSO) tool aimed at collecting and sharing as little data as possible. It was a deliberate shot at Facebook and Google, which currently operate the two major SSO services.

Not so sure it was a shot at anyone; more of a safer, privacy-respecting solution for Apple users.

Once you start federating accounts, it means that maybe you still have a few passwords, but some new service you’re just trying out doesn’t need a 750-person engineering team dedicated to security. It doesn’t need to build its own password database, and then deal with all the liability and all the risk that comes with that.

This comment gets to the heart of the value of “Sign in with Apple” (SiwA). One of the benefits of SiwA is that it lets app developers ride on Apple’s safer, more secure coattails. And saves them from having to reinvent the wheel.

I will take the blame that we have not really articulated what happens when you press that “sign in with Google” button. A lot of people don’t understand, and some competitors have dragged it in the wrong direction. Maybe you think that clicking that button notifies all your friends that you’ve just signed into some embarrassing site.

With SiwA, you can bank on Apple respecting your privacy. Same thing with Apple Pay. Apple breaks the direct link between your identity-tied information and the validation process. And that’s a good thing.

I honestly do think this technology will be better for the internet and will make people much, much safer. Even if they’re clicking our competitor’s button when they’re logging into sites, that’s still way better than typing in a bespoke username and password, or more commonly, a recycled username and password.

Yup. Good read.

Walmart employees will soon deliver groceries directly into your fridge

The Verge:

Starting this fall in the US, Walmart customers in select cities can choose to have their groceries delivered directly into their refrigerators when away from home. The InHome service will use Walmart vehicles and its own workers equipped with proprietary wearable cameras. Using undisclosed “smart entry technology,” Walmart employees will be able to enter homes to make deliveries, while customers will be able to control access and watch the deliveries remotely.

This raises so many questions for me. Would you allow someone you don’t know to enter your home to put groceries in your refrigerator? This is one step beyond Amazon’s front-door access that lets them stick a package just inside your house.

And what about the footage of the interior of your home that is captured and put on the internet? How is this footage secured? Who can see it? If it’s available for you to see over the net, the potential is there for other lurkers to see it as well.

Wondering if my kids will someday see this all as normal.

It’s the middle of the night. Do you know who your iPhone is talking to?

Geoffrey A. Fowler, Washington Post:

On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.

And all night long, there was some startling behavior by a household name: Yelp. It was receiving a message that included my IP address — once every five minutes.

And:

You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, “What happens on your iPhone stays on your iPhone.” My investigation suggests otherwise.

iPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

This is a big deal. Privacy is core to Apple’s brand and one of the main reasons I am so loyal to Apple’s ecosystem. Looking forward to Apple’s response to the Washington Post.