Hello friends, and welcome back to Week in Review. Last week, we dove into the genuinely bizarre machinations of the NFT market. This week, we're talking about something with a bit more impact on the current state of the web: Apple's NeuralHash kerfuffle.
If you're reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets.
the big thing
In the past month, Apple did something it has generally done an exceptional job of avoiding: the company made what seemed to be an entirely unforced error.
In early August, seemingly out of nowhere**, the company announced that by the end of the year it would be rolling out a technology called NeuralHash that would actively scan the libraries of all iCloud Photos users, seeking out image hashes that matched known images of child sexual abuse material (CSAM). For obvious reasons, the on-device scanning could not be opted out of.
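The mechanism at the heart of the controversy is hash matching: the device fingerprints each photo, then checks that fingerprint against a database of hashes derived from known CSAM. The toy sketch below illustrates only that matching pattern; the simple "average hash" and every name in it are illustrative stand-ins, not Apple's actual NeuralHash, which is a far more sophisticated neural-network-based perceptual hash.

```python
# Toy illustration of on-device hash matching (NOT Apple's NeuralHash).
# "Average hash": each pixel becomes one bit depending on whether it is
# brighter than the image's mean; the resulting bit-string is checked
# against a set of known-flagged hashes.

def average_hash(pixels):
    """Return a bit-string hash for a 2D grayscale image (list of rows)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def matches_known(pixels, known_hashes):
    """True if the image's hash appears in the known-hash database."""
    return average_hash(pixels) in known_hashes

# Example: a tiny 2x2 "image" and a hypothetical database of flagged hashes.
img = [[10, 200], [30, 220]]
known = {average_hash(img)}          # pretend this hash was pre-flagged
print(matches_known(img, known))     # the exact same image matches
```

The security debate centers on exactly this structure: perceptual hashes are deliberately tolerant of small image changes, which is also what makes engineered collisions (two unrelated images sharing a hash) possible.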
This announcement was not coordinated with other major consumer tech giants; Apple pushed forward on the initiative alone. Researchers and advocacy groups had almost universally negative feedback for the effort, raising concerns that it could open new channels of abuse, with actors like governments pressing for on-device detection of information they regarded as objectionable. As my colleague Zach noted in a recent report, "The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology."
(The announcement also generated some controversy inside Apple.)
The issue, of course, wasn't that Apple was looking for ways to prevent the proliferation of CSAM while making as few device security concessions as possible. The issue was that Apple was unilaterally making a massive choice that would affect billions of customers (while likely pushing competitors towards similar solutions), and was doing so without external public input about possible ramifications or necessary safeguards.
To make a long story short: over the past month, researchers discovered that Apple's NeuralHash wasn't as airtight as hoped (reverse-engineered versions of the algorithm were quickly coaxed into producing hash collisions), and the company announced Friday that it was delaying the rollout "to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features."
Having spent several years in the tech media, I will say that the only reason to release news on a Friday morning ahead of a long weekend is to ensure that the announcement is read and seen by as few people as possible, and it’s clear why they’d want that. It’s a major embarrassment for Apple. As with any delayed rollout like this, it’s a sign that their internal teams weren’t adequately prepared and lacked the ideological diversity to gauge the scope of the issue they were tackling. This isn’t a dig at Apple’s team building this so much as it’s a dig at Apple trying to solve a problem inside the Apple Park vacuum while adhering to its annual iOS release schedule.
Apple is increasingly looking to make privacy a key selling point for the iOS ecosystem. As a result of this productization, it has pushed the development of privacy-centric features towards the same secrecy its surface-level design changes command. In June, Apple announced iCloud+ and raised some eyebrows when it shared that certain new privacy-centric features would only be available to iPhone users who paid for additional subscription services.
You obviously can't tap public opinion for every product update. Still, perhaps wide-ranging and trail-blazing security and privacy features should be treated differently than the average product update. Apple's lack of engagement with research and advocacy groups on NeuralHash was pretty egregious and certainly raised questions about whether the company fully respects how its iOS choices affect the broader internet.
Delaying the feature’s rollout is good, but let’s all hope they take that time to reflect more broadly as well.
** Though the announcement surprised many, Apple's development of this feature didn't come entirely out of nowhere. Those at the top of Apple likely felt that the winds of global tech regulation might be shifting towards outright bans of some encryption methods in some of its biggest markets.
Back in October of 2020, then United States AG Bill Barr joined representatives from the UK, New Zealand, Australia, Canada, India, and Japan in signing a statement raising significant concerns about how implementations of encryption tech posed "significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children." The letter effectively called on tech industry companies to get creative in tackling this problem.
Here are the TechCrunch news stories that especially caught my eye this week:
You may be shocked to hear that LinkedIn even had a Stories-like product on their platform, but if you did already know that they were testing Stories, you likely wouldn’t be so surprised to hear that the test didn’t pan out too well. The company announced this week that they’d suspend the feature at the end of the month. RIP.
While all appeared to go swimmingly for Richard Branson's trip to space last month, the FAA has questions about why the flight seemed to veer off its cleared route unexpectedly. The agency is preventing the company from launching further flights until it figures out what happened.
While Spotify makes news every month or two for spending a massive amount acquiring a popular podcast, Apple seems to have eyes on a different market for Apple Music, announcing this week that they’re bringing the classical music streaming service Primephonic onto the Apple Music team.
It isn't a massive secret that ByteDance and Facebook have been trying to copy each other's success at times. Still, many probably weren't expecting TikTok's parent company to wander into the virtual reality game. The Chinese company bought the startup Pico, which makes consumer VR headsets for China and enterprise VR products for North American customers.
The same features that make Twitter an incredibly fantastic product for some users can also make the experience awful for others, a realization that Twitter has seemingly been very slow to make. Its latest solution is more individual user controls: Twitter is testing out a new "safety mode" that pairs algorithmic intelligence with new user inputs.
Some of my favorite reads from our Extra Crunch subscription service this week:
“Y Combinator kicked off its fourth-ever virtual Demo Day today, revealing the first half of its nearly 400-company batch. The presentation, YC’s biggest yet, offers a snapshot into where innovation is heading, from not-so-simple seaweed to a Clearco for creators….”
"…Yesterday, the TechCrunch team covered the first half of the batch. Today, we're doing it all over again. Below, you'll find our votes for the best Y Combinator pitches of Day Two. The ones that, as people who sift through a few hundred pitches a day, made us go, 'Oh wait, what's this?'…"
“… if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so, and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition; that actual customer usage will probably depend on how sticky your service is and how valuable the rewards you offer to your most active users….”
Thanks for reading, and again, if you're reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets.