Security

Amazing deep dive into the Apple iMessage NSO zero-click exploit

Google Project Zero blog:

We want to thank Citizen Lab for sharing a sample of the FORCEDENTRY exploit with us, and Apple’s Security Engineering and Architecture (SEAR) group for collaborating with us on the technical analysis.

And:

Recently, however, it has been documented that NSO is offering their clients zero-click exploitation technology, where even very technically savvy targets who might not click a phishing link are completely unaware they are being targeted. In the zero-click scenario no user interaction is required. Meaning, the attacker doesn’t need to send phishing messages; the exploit just works silently in the background. Short of not using a device, there is no way to prevent exploitation by a zero-click exploit; it’s a weapon against which there is no defense.

And:

The ImageIO library, as detailed in a previous Project Zero blogpost, is used to guess the correct format of the source file and parse it, completely ignoring the file extension. Using this “fake gif” trick, over 20 image codecs are suddenly part of the iMessage zero-click attack surface, including some very obscure and complex formats, remotely exposing probably hundreds of thousands of lines of code.

There’s a lot of detail here, fascinating if understanding exploits is your thing. But bottom line: a fake GIF is used to Trojan-horse image-processing code into action, and that code does the bad work, no clicks required.
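The mechanics are worth a sketch. Here’s a minimal, hypothetical illustration of content-based format sniffing — not ImageIO’s actual code or format table, just the general behavior the “fake gif” trick abuses: the codec is chosen from the file’s leading magic bytes, and the extension never enters into it.

```python
# Illustrative subset of magic-byte signatures; not ImageIO's real list.
MAGIC = {
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"II*\x00": "tiff",
    b"MM\x00*": "tiff",
}

def sniff_format(data: bytes) -> str:
    """Pick a codec from content alone; the file extension is never consulted."""
    for sig, fmt in MAGIC.items():
        if data.startswith(sig):
            return fmt
    return "unknown"

# A file named photo.gif whose bytes are really a TIFF gets routed to
# the (far more complex) TIFF decoder, not the GIF one.
print(sniff_format(b"II*\x00" + b"\x00" * 16))  # → tiff
```

That routing is the whole point: name the payload `.gif` to get it past iMessage, and the sniffer happily hands it to whichever of the 20-plus codecs its bytes actually select.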

Most importantly:

Apple inform us that they have restricted the available ImageIO formats reachable from IMTranscoderAgent starting in iOS 14.8.1 (26 October 2021), and completely removed the GIF code path from IMTranscoderAgent starting in iOS 15.0 (20 September 2021), with GIF decoding taking place entirely within BlastDoor.

Make sure you (and the folks you support) update to the latest and greatest.

See also: After US ban and Apple action, Pegasus spyware maker NSO running out of cash.

Your face is, or will be, your boarding pass

Elaine Glusac, New York Times:

If it’s been a year or more since you traveled, particularly internationally, you may notice something different at airports in the United States: More steps — from checking a bag to clearing customs — are being automated using biometrics.

And:

Many of the latest biometric developments use facial recognition, which the National Institute of Standards and Technology recently found is at least 99.5 percent accurate, rather than iris-scanning or fingerprints.

99.5% accurate means that 1 out of every 200 scans is wrong. Just saying.
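To put that error rate in context, a quick back-of-envelope calculation — the daily passenger volume here is my rough assumption, not a figure from the article:

```python
accuracy = 0.995
error_rate = 1 - accuracy            # 0.005, i.e. roughly 1 in 200

daily_passengers = 2_000_000         # assumed rough daily screening volume
errors_per_day = daily_passengers * error_rate

print(round(1 / error_rate))   # → 200
print(int(errors_per_day))     # → 10000
```

At airport scale, “99.5 percent accurate” means thousands of misidentifications a day, every day.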

“Iris-scanning has been touted as the most foolproof,” said Sherry Stein, the head of technology in the Americas for SITA, a Switzerland-based biometrics tech company. “For biometrics to work, you have to be able to match to a known trusted source of data because you’re trying to compare it to a record on file. The face is the easiest because all the documents we use that prove your identity — driver’s licenses, passports etc. — rely on face.”

Delta has implemented a passport-based program at Hartsfield-Jackson Atlanta International Airport:

In November, Delta Air Lines launched a new digital identity program for T.S.A. PreCheck members at Hartsfield-Jackson Atlanta International Airport who can opt in to using facial recognition to do everything from checking a bag to clearing security and boarding their domestic flight.

Opting in requires passengers to enter their U.S. passport number, which enables the back-end identity check against their passport photo, even though the new program is domestic only.

Another program, for international flyers, in Chicago:

Returning from Iceland to Chicago O’Hare International Airport in October, I approached the airport kiosk that normally scans your passport and fingerprints and gets Global Entry members like me past Customs and Border Protection agents in the span of a few minutes. This time, the kiosk took my picture only, spat out a copy, which included my name and passport details, and sped me past agents in under a minute.

This future is coming, fast and furious. How well protected will this treasure trove of biometric data be? Seems clear it’ll be a relentless target for state actors. How long will it be until we start reading headlines about biometric data hacks?

DoJ arrests hacker involved with REvil Group that stole Apple’s MacBook Pro schematics

Juli Clover, MacRumors:

The United States Justice Department today announced that it has arrested Ukrainian Yaroslav Vasinskyi for his involvement with REvil, a group that executed ransomware attacks against businesses and government entities in the United States.

And:

REvil in April targeted Apple supplier Quanta Computer and stole schematics of the design of the 14 and 16-inch MacBook Pro models that were later released in October. The schematics unveiled MacBook Pro features like additional ports and the design of the notch, and REvil extorted Apple by threatening to release additional documents if the Cupertino company didn’t pay a $50 million fee.

And:

REvil continued on with its illicit activities and in May, was responsible for a cyberattack on the Colonial Pipeline that caused gas shortages on the East Coast of the United States. In July, REvil took advantage of a vulnerability in management software designed for Kaseya, targeting between 800 and 1,500 businesses worldwide.

Also interesting, from Krebs on Security:

The U.S. Department of State is now offering up to $10 million for the name or location of any key REvil leaders, and up to $5 million for information on REvil affiliates.

Here’s a link to the indictment itself.

Apple AirTag bug enables ‘Good Samaritan’ attack

Krebs on Security:

The new $30 AirTag tracking device from Apple has a feature that allows anyone who finds one of these tiny location beacons to scan it with a mobile phone and discover its owner’s phone number if the AirTag has been set to lost mode. But according to new research, this same feature can be abused to redirect the Good Samaritan to an iCloud phishing page — or to any other malicious website.

And:

When scanned, an AirTag in Lost Mode will present a short message asking the finder to call the owner at their specified phone number.

And:

Apple’s Lost Mode doesn’t currently stop users from injecting arbitrary computer code into its phone number field — such as code that causes the Good Samaritan’s device to visit a phony Apple iCloud login page.

And this bit of espionage history:

If this sounds like a script from a James Bond movie, you’re not far off the mark. A USB stick with malware is very likely how U.S. and Israeli cyber hackers got the infamous Stuxnet worm into the internal, air-gapped network that powered Iran’s nuclear enrichment facilities a decade ago. In 2008, a cyber attack described at the time as “the worst breach of U.S. military computers in history” was traced back to a USB flash drive left in the parking lot of a U.S. Department of Defense facility.

There clearly seems to be a phishing opportunity here. Presumably Apple could add validation to prevent arbitrary code from being injected into an AirTag’s phone number field. Either way, good to be aware of this sort of attack.
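For illustration, a strict allow-list check of the kind that would close this hole might look like the sketch below. The pattern and length limits are my assumptions, not Apple’s actual validation:

```python
import re

# Accept only characters that can plausibly appear in a phone number,
# so markup or script can never ride along in the Lost Mode field.
# Pattern and length cap are illustrative assumptions.
PHONE_RE = re.compile(r"^\+?[0-9()\-. ]{7,20}$")

def is_valid_phone(field: str) -> bool:
    return bool(PHONE_RE.fullmatch(field))

print(is_valid_phone("+1 (555) 123-4567"))  # → True
print(is_valid_phone("<script>location='https://evil.example'</script>"))  # → False
```

Rejecting anything outside the allow-list is the key design choice: trying to block known-bad strings instead (a deny-list) is exactly the kind of filter attackers routinely slip past.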

Washington Post on Apple’s “bug bounty” program

Reed Albergotti, Washington Post:

Many who are familiar with the program say Apple is slow to fix reported bugs and does not always pay hackers what they believe they’re owed. Ultimately, they say, Apple’s insular culture has hurt the program and created a blind spot on security.

“It’s a bug bounty program where the house always wins,” said Katie Moussouris, CEO and founder of Luta Security, which worked with the Defense Department to set up its first bug bounty program. She said Apple’s bad reputation in the security industry will lead to “less secure products for their customers and more cost down the line.”

And:

In interviews with more than two dozen security researchers, some of whom spoke on the condition of anonymity because of nondisclosure agreements, they point to Apple’s rivals for comparison. Facebook, Microsoft and Google publicize their programs and highlight security researchers who receive bounties in blog posts and leader boards. They hold conferences and provide resources to encourage a broad international audience to participate.

And:

Most of them pay more money each year than Apple, which is at times the world’s most valuable company. Microsoft paid $13.6 million in the 12-month period beginning July 2020. Google paid $6.7 million in 2020. Apple spent $3.7 million last year, Krstić said in his statement. He said that number is likely to increase this year.

This is a long article, filled with bug bounty stories, many of them anonymously told. Hard to truly know whether this is the squeaky wheel getting all the attention, or something more problematic. But read the article (here’s an Apple News link if you don’t have access to WaPo).

It definitely reads like Apple puts less money into bug bounties, and shines less of a light on researchers’ efforts and successes, than its competitors do.

On the recently discovered iOS “Schou” networking bug

J. Glenn Künzler, Sonny Dickson blog:

Earlier this week, news broke of a strange networking issue that can permanently disable all WiFi activity on iOS devices. It’s currently known to affect iOS 14 only, and can cause quite a mess. The news was originally revealed by reverse engineer Carl Schou (via BleepingComputer; story sourced via MacTrast), and while there was originally very little information revealed about the issue or how it functions, we decided to put our research hats on and see what we could discover.

This all started with this tweet:

https://twitter.com/vm_call/status/1405937492642123782

Don’t try this at home. But a fascinating bug.

If you find this interesting, follow the headline link to watch J. Glenn Künzler try his hand at working through what’s going on.

iCloud users continue to be plagued by calendar spam

Sami Fathi, MacRumors:

Despite previous attempts to put the situation to rest, some iCloud users continue to experience spam calendar invitations, causing their calendars to be filled with random events.

And:

Victims are targeted in various ways. The most common method is by receiving a normal iCloud calendar invitation through their calendar app.

Interacting with the invitation, including declining, accepting, or choosing “Maybe,” lets the spammer know that the email is valid, so it can continue to be targeted.

Other users are targeted through web pop-ups on potentially malicious or adult websites.

If you find yourself subscribed to a spam calendar event, check out the video below, which Apple Support posted a few weeks ago. Also, check out this Apple support document, which basically says the same thing as the video.

Apple targeted in $50 million ransomware hack of supplier Quanta

Kartikay Mehrotra, Bloomberg:

As Apple Inc. was revealing its newest line of iPads and flashy new iMacs on Tuesday, one of its primary suppliers was enduring a ransomware attack from a Russian operator claiming to have stolen blueprints of the U.S. company’s latest products.

Then, about an email exchange with the hackers:

REvil then delivered on its promise to publish data it believes to be Apple’s proprietary blueprints for new devices. The images include specific component serial numbers, sizes and capacities detailing the many working parts inside of an Apple laptop.

A pretty significant security lapse. If those images became public, I wonder how significant the harm would be. A leg up for competitors trying to copy Apple designs? Or more of an annoyance, since the products have been announced, and will ship soon, available to be taken apart and examined firsthand?

Washington Post: Who the FBI got to unlock the San Bernardino shooter’s iPhone

Washington Post:

The iPhone used by a terrorist in the San Bernardino shooting was unlocked by a small Australian hacking firm in 2016, ending a momentous standoff between the U.S. government and the tech titan Apple.

At the time, the general consensus was that the FBI was using an Israeli security firm, well known for this sort of smartphone break-in.

Azimuth Security, a publicity-shy company that says it sells its cyber wares only to democratic governments, secretly crafted the solution the FBI used to gain access to the device, according to several people familiar with the matter.

And:

The identity of the hacking firm has remained a closely guarded secret for five years. Even Apple didn’t know which vendor the FBI used, according to company spokesman Todd Wilder. But without realizing it, Apple’s attorneys came close last year to learning of Azimuth’s role — through a different court case, one that has nothing to do with unlocking a terrorist’s device.

And:

Apple has a tense relationship with security research firms. Wilder said the company believes researchers should disclose all vulnerabilities to Apple so that the company can more quickly fix them. Doing so would help preserve its reputation as having secure devices.

And:

But many security researchers say it’s legitimate to sell these flaws to democratic governments. And the ability of government agencies to unlock iPhones has also spared Apple from direct conflict with these governments. For instance, by unlocking the terrorist’s iPhone, some say, Azimuth came to Apple’s rescue by ending a case that could have led to a court-ordered back door to the iPhone.

I do think it’s true that this solution took the heat off Apple, turned down the dial on Congress’ efforts to force Apple to create a backdoor to the iPhone. But as has been proven time and time again, there’s just no way a back door created for law enforcement would not end up in the hands of black hat hackers.

I do agree with Apple’s take, that researchers should disclose all vulnerabilities to Apple so they can release patches.

The Washington Post story is a fascinating read. Here’s a link to the Apple News version of the article.

A hacker got all my texts for $16

This is the scariest one of all:

Looking down at my phone, there was no sign it had been hacked. I still had reception; the phone said I was still connected to the T-Mobile network. Nothing was unusual there. But the hacker had swiftly, stealthily, and largely effortlessly redirected my text messages to themselves. And all for just $16.

And:

I hadn’t been SIM swapped, where hackers trick or bribe telecom employees to port a target’s phone number to their own SIM card. Instead, the hacker used a service by a company called Sakari, which helps businesses do SMS marketing and mass messaging, to reroute my messages to him.

And:

Unlike SIM jacking, where a victim loses cell service entirely, my phone seemed normal. Except I never received the messages intended for me, but he did.

The fact that this is possible shows how unsafe, how vulnerable, our current security infrastructure truly is.

Amazon adds end-to-end encryption to the Ring doorbell

EFF:

Almost one year after EFF called on Amazon’s surveillance doorbell company Ring to encrypt footage end-to-end, it appears they are starting to make this necessary change. This call was a response to a number of problematic and potentially harmful incidents, including larger concerns about Ring’s security and reports that employees were fired for watching customers’ videos.

And:

Videos taken by the Ring device for either streaming or later viewing are end-to-end encrypted such that only mobile devices you authorize can view them.

And:

Ring now has over a thousand partnerships with police departments across the country that allow law enforcement to request, with a single click, footage from Ring users. When police are investigating a crime, they can click and drag on a map in the police portal and automatically generate a request email for footage from every Ring user within that designated area.

The addition of end-to-end encryption adds another layer of protection to this model, presumably requiring a warrant to access your footage.

Read about the encryption model in this Amazon white paper.

If you own a Ring doorbell, here’s a link to Amazon’s instructions on enabling end-to-end encryption.

If you are in the market for a HomeKit video doorbell, check out this review of the Logitech Circle View doorbell. Still early days for HomeKit doorbells.

iOS 14.5 beta directs Safari ‘safe browsing’ traffic through Apple server instead of Google to protect personal user data

Sami Fathi, MacRumors:

Starting with iOS and iPadOS 14.5, Apple will proxy Google’s “Safe Browsing” service used in Safari through its own servers instead of relying on Google as a way to limit which personal data Google sees about users.

And:

Apple relies on Google’s “Safe Browsing,” a database/blocklist, crawled by Google, of websites that it deems to be suspected phishing or scams.

And:

While Google doesn’t know which specific URL you’re trying to visit, it may collect your IP address during its interaction with Safari. Now on iOS/iPadOS 14.5, that’s no longer the case. As confirmed by the Head of Engineering for WebKit, Apple will now proxy Google’s Safe Browsing feature through its own servers instead of Google as a way to “limit the risk of information leak.”
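Worth noting why Google doesn’t see the URL in the first place: Safe Browsing clients hash each URL and transmit only a short hash prefix, which many URLs share. What the server still learns is the client’s IP address — and that is what Apple’s proxy now hides. A simplified sketch (the 4-byte prefix length follows Google’s Update API convention; the helper itself is illustrative):

```python
import hashlib

def hash_prefix(url: str, prefix_len: int = 4) -> bytes:
    """Hash the URL and keep only a short prefix, as a Safe Browsing
    client does; the server never sees the URL or the full hash."""
    digest = hashlib.sha256(url.encode()).digest()
    return digest[:prefix_len]

prefix = hash_prefix("https://example.com/")
print(len(prefix))  # → 4
```

When a prefix matches the blocklist, the client fetches the full hashes behind that prefix and compares locally — so even a match doesn’t reveal which exact URL was visited.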

Good move.

Apple says iOS 14.4 fixes three security bugs ‘actively exploited’ by hackers

Zack Whittaker, TechCrunch:

Apple has released iOS 14.4 with security fixes for three vulnerabilities, said to be under active attack by hackers.

The technology giant said in its security update pages for iOS and iPadOS 14.4 that the three bugs affecting iPhones and iPads “may have been actively exploited.” Details of the vulnerabilities are scarce, and an Apple spokesperson declined to comment beyond what’s in the advisory.

From that Apple security note:

Kernel impact: A malicious application may be able to elevate privileges. Apple is aware of a report that this issue may have been actively exploited.

And:

WebKit impact: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.

Note that this is an issue for both iOS and iPadOS, so update both your iPhone and your iPad.

Every deleted Parler post, many with users’ location data, has been archived

Dell Cameron, Gizmodo:

The researcher, who asked to be referred to by their Twitter handle, @donk_enby, began with the goal of archiving every post from January 6, the day of the Capitol riot; what she called a bevy of “very incriminating” evidence.

In a nutshell, Parler was hacked before it was taken down, all the posts, including deleted posts, were downloaded and archived. I can’t help but imagine the FBI will be interested in getting their hands on that data.

Here’s a link to an alleged description of how this all was done.

iPhone zero-click Wi-Fi exploit is one of the most breathtaking hacks ever

[VIDEO] First things first, this exploit has been patched by Apple.

But what I found fascinating about this is the video, showing off the hacker doing their proof of concept thing. As you watch it (video embedded in the main Loop post), imagine being in a hotel room and the hacker being in the room next door. Frightening, no? But also good that Apple has your back here.

We hacked Apple for 3 months: Here’s what we found

Sam Curry:

Between the period of July 6th to October 6th myself, Brett Buerhaus, Ben Sadeghipour, Samuel Erb, and Tanner Barnes worked together and hacked on the Apple bug bounty program.

And:

During our engagement, we found a variety of vulnerabilities in core portions of their infrastructure that would’ve allowed an attacker to fully compromise both customer and employee applications, launch a worm capable of automatically taking over a victim’s iCloud account, retrieve source code for internal Apple projects, fully compromise an industrial control warehouse software used by Apple, and take over the sessions of Apple employees with the capability of accessing management tools and sensitive resources.

Most importantly:

As of October 6th, 2020, the vast majority of these findings have been fixed and credited. They were typically remediated within 1-2 business days (with some being fixed in as little as 4-6 hours).

This is a fascinating read, filled with detail. Work like this finds the vulnerabilities before they can be used against us. There’s also a bit of insight on Apple’s bug bounty program.

Apple accidentally approved malware to run on macOS

Lily Hay Newman, Wired:

College student Peter Dantini discovered the notarized version of Shlayer while navigating to the homepage of the popular open source Mac development tool Homebrew. Dantini accidentally typed something slightly different than brew.sh, the correct URL. The page he landed on redirected a number of times to a fake Adobe Flash update page. Curious about what malware he might find, Dantini downloaded it on purpose. To his surprise, macOS popped up its standard warning about programs downloaded from the internet, but didn’t block him from running the program. When Dantini confirmed that it was notarized, he sent the information on to longtime macOS security researcher Patrick Wardle.

And:

The campaign is distributing the ubiquitous “Shlayer” adware, which by some counts has affected as many as one in 10 macOS devices in recent years. The malware exhibits standard adware behavior, like injecting ads into search results. It’s not clear how Shlayer slipped past Apple’s automated scans and checks to get notarized, especially given that it’s virtually identical to past versions. But it’s the first known example of malware being notarized for macOS.

Interesting how this stuff gets discovered. All this time and it’s still in the wild, so much so that it slipped past Apple’s scanners and got notarized.

Tesla and FBI prevented $1 million ransomware hack at Gigafactory Nevada

Electrek:

The FBI released information this week on the arrest of Egor Igorevich Kriuchkov, a 27-year-old Russian citizen, who they claim was part of a group who attempted to extort millions of dollars from a company in Nevada, which has now been identified as Tesla.

This is a pretty solid tale, involving a Tesla employee who turned down a million dollar payday, then wore a wire in an FBI sting. Part of my takeaway from this is all the companies who paid the ransom rather than fight.

iOS 14’s best privacy feature? Catching data-grabbing apps

Alex Lee, Wired:

Last week, Instagram became the latest app to be called out by iOS 14’s privacy notifications feature after users began noticing that the green light indicator—which alerts users that the camera has been activated—kept turning on—even when the camera was not in use. Addressing the behavior, Instagram said that the activation of the camera was just a bug and that it was being triggered by a user swiping into the camera from the Instagram feed.

You’ve no doubt seen a steady stream of privacy-related “outings” as apps are called out for their inappropriate snooping, all revealed by iOS 14.

But this was an interesting perspective:

It’s wise to remember that most permissions abuse happens on Google’s Android operating system. Last year, researchers from the International Computer Science Institute found that up to 1,325 Android apps were gathering data, despite the researchers’ apps denying them permission to access that data. But whether Google decides to implement privacy notifications, however, is a different story.

And:

Maximilian Golla, a security researcher at the Max Planck Institute for Security and Privacy says that the business model on Android is different from iOS. “I wonder whether the app developers really want to change this, or Google really wants to implement such a feature, because they depend on this kind of tracking,” he thinks. “Google makes its money from Google AdSense, and I would be surprised if Google implements such a tracking notification.”

It would definitely be interesting to see Google copy this behavior from Apple. Both from a business perspective (not really in their interests to do so) and to see what it would reveal about snooping behavior of its apps.

New ‘unpatchable’ exploit allegedly found on Apple’s Secure Enclave chip

Filipe Espósito, 9to5Mac:

One of the major security enhancements Apple has brought to its devices over the years is the Secure Enclave chip, which encrypts and protects all sensitive data stored on the devices. Last month, however, hackers claimed they found a permanent vulnerability in the Secure Enclave, which could put data from iPhone, iPad, and even Mac users at risk.

Good explainer. A few key points:

  • This vulnerability is permanent. Because the Secure Enclave is embedded in the processor and not patchable, it cannot be fixed on a specific device.
  • That said, Apple has fixed the design itself, starting with the A12. So if you’ve got a device with an A7 through A11, that issue exists on your device.
  • The good news? To take advantage of the exploit, a hacker would need physical access to your device.

Here’s a list of devices that have the Apple A12. If you’ve got one of these, or newer, you’ve got the fix in place:

  • iPhone XS and XS Max
  • iPhone XR
  • iPad Mini (5th generation)
  • iPad Air (2019, 3rd generation)

Who’s behind Wednesday’s epic Twitter hack?

This starts with a retelling of the hack story, but that’s just the start. The real juice starts down below that.

People within the SIM swapping community are obsessed with hijacking so-called “OG” social media accounts. Short for “original gangster,” OG accounts typically are those with short profile names (such as @B or @joe). Possession of these OG accounts confers a measure of status and perceived influence and wealth in SIM swapping circles, as such accounts can often fetch thousands of dollars when resold in the underground.

And:

In a post on OGusers — a forum dedicated to account hijacking — a user named “Chaewon” advertised they could change the email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

Great Dalrymple’s Beard!!! That can’t be real, can it?

Lucky225 said that just before 2 p.m. EDT on Wednesday, he received a password reset confirmation code via Google Voice for the @6 Twitter account. Lucky said he’d previously disabled SMS notifications as a means of receiving multi-factor codes from Twitter, opting instead to have one-time codes generated by a mobile authentication app.

But because the attackers were able to change the email address tied to the @6 account and disable multi-factor authentication, the one-time authentication code was sent to both his Google Voice account and to the new email address added by the attackers.

“The way the attack worked was that within Twitter’s admin tools, apparently you can update the email address of any Twitter user, and it does this without sending any kind of notification to the user,” Lucky told KrebsOnSecurity. “So [the attackers] could avoid detection by updating the email address on the account first, and then turning off 2FA.”

Lucky said he hasn’t been able to review whether any tweets were sent from his account during the time it was hijacked because he still doesn’t have access to it.

Here’s a link to a detailed telling of this story.

Read the whole Krebs on Security post via the headline link. Fascinating and not a little scary. Amazing to me so little damage was done.

As I’ve said before, not convinced that this was the end of this particular misadventure. Would not be surprised if this was just some misdirection to hide a more critical unlocking event that will rear its head in the future.

Motherboard: Hackers convinced Twitter employee to help them hijack accounts

Joseph Cox, Motherboard:

A Twitter insider was responsible for a wave of high profile account takeovers on Wednesday, according to leaked screenshots obtained by Motherboard and two sources who took over accounts.

And:

“We used a rep that literally done all the work for us,” one of the sources told Motherboard. The second source added they paid the Twitter insider. Motherboard granted the sources anonymity to speak candidly about a security incident. A Twitter spokesperson told Motherboard that the company is still investigating whether the employee hijacked the accounts themselves or gave hackers access to the tool.

And:

After a wave of account takeovers, screenshots of an internal Twitter user administration tool are being shared in the hacking underground.

And this response from Twitter:

After the publication of this piece, Twitter said in a tweet that “We detected what we believe to be a coordinated social engineering attack by people who successfully targeted some of our employees with access to internal systems and tools.”

Were the employees duped by social engineering? Or was there complicity here: was a Twitter insider paid, as the article indicates?

Also, there is some question as to whether the bitcoin scam was the hackers’ endgame. Or if access to the accounts opened a door that could be exploited later.

Beyond alarming.

Adobe and the end-of-life for Flash

Adobe:

As previously announced in July 2017, Adobe will stop distributing and updating Flash Player after December 31, 2020 (“EOL Date”).

And:

Open standards such as HTML5, WebGL, and WebAssembly have continually matured over the years and serve as viable alternatives for Flash content. Also, the major browser vendors are integrating these open standards into their browsers and deprecating most other plug-ins (like Adobe Flash Player).

And:

Adobe will be removing Flash Player download pages from its site and Flash-based content will be blocked from running in Adobe Flash Player after the EOL Date.

It’s been a long time since I’ve even seen Flash running on a device. When I hear someone talking about Flash, red lights go off in my head, all nostalgia has been pushed aside by thoughts of malware.

Gruber: Department of Justice reopens spat with Apple over iPhone encryption

Start by reading this New York Times piece, F.B.I. Finds Links Between Pensacola Gunman and Al Qaeda:

The F.B.I. recently bypassed the security features on at least one of Mr. Alshamrani’s two iPhones to discover his Qaeda links. Christopher A. Wray, the director of the F.B.I., said the bureau had “effectively no help from Apple,” but he would not say how investigators obtained access to the phone.

Gruber then proceeds to take down the Times’ narrative, piece-by-piece, with a quote Apple shared with the media in response to the FBI’s “no help” claim, ending his take with this:

Apple cooperated in every way they technically could. The DOJ is not asking for Apple’s cooperation unlocking existing iPhones — they’re asking Apple to make future iPhones insecure.

Gruber’s take is worth reading, soup to nuts. He does a solid job responding to the “make a backdoor that only white hats can get through” argument, an impossible ask.

I’d only add this little nugget, from NBCNews, that might explain how the FBI got in:

Software called Hide UI, created by Grayshift, a company that makes iPhone-cracking devices for law enforcement, can track a suspect’s passcode when it’s entered into a phone, according to two people in law enforcement, who asked not to be named out of fear of violating non-disclosure agreements.

The spyware, a term for software that surreptitiously tracks users, has been available for about a year but this is the first time details of its existence have been reported, in part because of the non-disclosure agreements police departments sign when they buy a device from Grayshift known as GrayKey.

It’s a cat and mouse game. IMO, a very important one.

FBI serves warrant on Apple to obtain information from Senator’s iCloud account

Los Angeles Times:

Federal agents seized a cellphone belonging to a prominent Republican senator on Wednesday night as part of the Justice Department’s investigation into controversial stock trades he made as the novel coronavirus first struck the U.S., a law enforcement official said.

And:

The seizure represents a significant escalation in the investigation into whether Burr violated a law preventing members of Congress from trading on insider information they have gleaned from their official work.

On the Apple side:

A second law enforcement official said FBI agents served a warrant in recent days on Apple to obtain information from Burr’s iCloud account and said agents used data obtained from the California-based company as part of the evidence used to obtain the warrant for the senator’s phone.

I’m curious what part of Burr’s iCloud account the FBI got access to. Was it iCloud Drive? Was it iCloud backup (which is encrypted, but not end-to-end — Apple holds the keys and can produce backup contents in response to a warrant)?

From Apple’s iCloud security overview:

iCloud secures your information by encrypting it when it’s in transit, storing it in iCloud in an encrypted format, and using secure tokens for authentication. For certain sensitive information, Apple uses end-to-end encryption. This means that only you can access your information, and only on devices where you’re signed into iCloud. No one else, not even Apple, can access end-to-end encrypted information.

For a clue on what information might have been available to the FBI, take a look at Section III of Apple’s Legal Process Guidelines (H/T Mike Wuerthele, AppleInsider).

Bit of a rabbit hole there, but an interesting read. Seems clear the FBI got what they needed.

Thunderbolt security vulnerabilities and the Mac

The linked Thunderbolt security report details 7 specific vulnerability scenarios. I can only imagine that Apple has long been aware of these and will address them.

One in particular I found interesting is the weakness on Macs that run Boot Camp:

Apple supports running Windows on Mac systems using the Boot Camp utility. Aside from Windows, this utility may also be used to install Linux. When running either operating system, Mac UEFI disables all Thunderbolt security by employing the Security Level “None” (SL0). As such, this vulnerability subjects the Mac system to trivial Thunderbolt-based DMA attacks.

The way I read it, the vulnerabilities occur when a device is allowed to update its firmware. A Mac running Boot Camp disables Thunderbolt security and opens the door for attack. Here’s detail on the DMA attack.

Ryan Pickren found a bug in Safari that let malicious code access iOS and macOS camera. Apple gave him $75K

Ryan Pickren:

This vulnerability allowed malicious websites to masquerade as trusted websites when viewed on Desktop Safari (like on Mac computers) or Mobile Safari (like on iPhones or iPads).

Hackers could then use their fraudulent identity to invade users’ privacy. This worked because Apple lets users permanently save their security settings on a per-website basis.

If the malicious website wanted camera access, all it had to do was masquerade as a trusted video-conferencing website such as Skype or Zoom.

And:

I reported this bug to Apple in accordance with the Security Bounty Program rules and used BugPoC to give them a live demo. Apple considered this exploit to fall into the “Network Attack without User Interaction: Zero-Click Unauthorized Access to Sensitive Data” category and awarded me $75,000.

If this sort of thing concerns you, put a Post-it over the cameras on your Mac and Mac displays.

New iPad adds in hardware microphone disconnect

Apple Platform Security document:

All Mac portables with the Apple T2 Security Chip feature a hardware disconnect that ensures the microphone is disabled whenever the lid is closed. On the 13-inch MacBook Pro and MacBook Air computers with the T2 chip, and on the 15-inch MacBook Pro portables from 2019 or later, this disconnect is implemented in hardware alone. The disconnect prevents any software—even with root or kernel privileges in macOS, and even the software on the T2 chip—from engaging the microphone when the lid is closed. (The camera is not disconnected in hardware, because its field of view is completely obstructed with the lid closed.)

That’s the Mac side. On the iPad:

iPad models beginning in 2020 also feature the hardware microphone disconnect. When an MFI compliant case (including those sold by Apple) is attached to the iPad and closed, the microphone is disconnected in hardware, preventing microphone audio data being made available to any software—even with root or kernel privileges in iPadOS or in case the firmware is compromised.

The cultures of camera and mic access on the Mac and iPad are very different. On my Mac, when the camera is in use, I see a light. And, as the note states, when the lid is closed, the camera is blocked.

Hardware disconnect does prevent the mic from working when the iPad case is closed. But what if I use my iPad without a case? And what about the camera without a case? There’s no hardware disconnect to rely on. Instead, Apple requires apps to ask for permission to access the camera and microphone.

Seemingly foolproof, but no.

Twitter warns of scheme to match phone numbers to Twitter accounts

Twitter:

We observed a particularly high volume of requests coming from individual IP addresses located within Iran, Israel, and Malaysia. It is possible that some of these IP addresses may have ties to state-sponsored actors.

And:

When used as intended, this endpoint makes it easier for new account holders to find people they may already know on Twitter. The endpoint matches phone numbers to Twitter accounts for those people who have enabled the “Let people who have your phone number find you on Twitter” option and who have a phone number associated with their Twitter account.

In the Twitter app, do this:

  • Tap your Twitter avatar
  • Tap Settings and privacy
  • Tap Privacy and safety
  • Tap Discoverability and contacts

Turn stuff off.

Apple wants to standardize the format of SMS passcodes

The proposal is embedded in this GitHub repository. Easy read, short and clearly written.

From the linked ZDNet explainer:

Apple engineers have put forward a proposal today to standardize the format of the SMS messages containing one-time passcodes (OTP) that users receive during the two-factor authentication (2FA) login process.

And:

The proposal has two goals. The first is to introduce a way that OTP SMS messages can be associated with a URL. This is done by adding the login URL inside the SMS itself.

The second goal is to standardize the format of 2FA/OTP SMS messages, so browsers and other mobile apps can easily detect the incoming SMS, recognize the web domain inside the message, and then automatically extract the OTP code and complete the login operation without further user interaction.

Basically, the goal is to automate the process, to have your device enter the code automatically, rather than you having to copy and paste it. Seems to me, in the past when this standardization was raised, there was a security concern about taking the human out of the middle of this process. Was that concern unfounded?
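To give a flavor of how this would work: the draft proposal standardizes on a machine-readable last line in the SMS, something like “@example.com #123456”, pairing the login origin with the code. Here’s a minimal sketch, in Python, of how a client might parse such a message — the exact line format is my reading of the draft, and the message text is made up for illustration:

```python
import re

# The draft's convention (as I read it): the final line of the SMS carries
# the origin and the one-time code, e.g. "@example.com #123456".
OTP_LINE = re.compile(r"^@(?P<origin>\S+)\s+#(?P<code>\S+)$")

def parse_otp_sms(message: str):
    """Return (origin, code) if the last line matches the format, else None."""
    lines = message.strip().splitlines()
    if not lines:
        return None
    match = OTP_LINE.match(lines[-1].strip())
    if match:
        return match.group("origin"), match.group("code")
    return None

sms = "747723 is your ExampleCo authentication code.\n\n@example.com #747723"
print(parse_otp_sms(sms))  # ('example.com', '747723')
```

Because the browser knows which origin the user is actually signing in to, it can refuse to autofill a code whose origin line doesn’t match — which is the security win here: a phishing site can’t harvest a code bound to the real domain.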