Six Colors

Apple, technology, and other stuff


By Jason Snell

visionOS 2 offers small, but promising, steps forward

Editing the Home View in Bora Bora.

I’m just back from Cupertino, and there’s an awful lot to think about. But before all that, I thought I’d cover what absolutely everyone is talking about: visionOS 2.

More seriously, I installed the visionOS 2 developer beta this morning—this is the entire reason I have a Vision Pro!—and I’ve got a few quick thoughts before plowing on to developer sessions, thoughts about Apple Intelligence, and the rest.

Spatial photos. The most impressive single feature I’ve seen in visionOS 2 so far is the ability to create spatial photos out of your old, mundane 2D photos. Load pretty much any photo in the Photos app and tap the Spatial icon in the top left corner of the image, and a fun sound effect plays as an animation sweeps across your photo, representing the system’s machine learning-driven software scanning your photo and building an artificial depth map to provide the illusion of depth.

You know, very few movies are shot in stereo anymore. It’s more expensive and cumbersome to shoot, and conversion to 3-D after the fact is good enough. Upon viewing Apple’s spatial-converted photos in visionOS 2, I had similar thoughts: I don’t see why we would ever need to shoot stereo images if machine learning is this good at faking it.

Seriously, whether it was a photo taken last week or 50 years ago, Apple’s algorithm does a staggeringly good job at building a depth map. You don’t need embedded LiDAR or other depth information—the algorithm does it, and it does it incredibly well. Pictures of my kids taken when they were little are suddenly given more depth (literally and figuratively). Pictures of me as a kid, even. A complex shot with a tower made of LEGO bricks in the foreground was scanned and mapped perfectly.

It takes about 15 seconds to spatialize a photo, so it’s unlikely that Apple will ever ask the Vision Pro to churn in the background through every single photo in your library, but I’ve yet to see a photo that failed to become more interesting after being converted.

Quick access to status!

New gestures. Apple has added some new gestures to visionOS, which key off of you holding out your palm and looking at it. When you do that, a floating icon appears next to your hand indicating that you can tap your finger and thumb together to open the Home View. (There’s also a new Close button at the top of the Home View so you can close it again.) It’s a nice idea because as much as I’ve internalized reaching up and tapping the Digital Crown to bring up the Home View, the act of doing so is also jarring given that all other Vision Pro interactions are gestures happening in the air in front of me.

I’m actually more excited about the second gesture that keys off the first. After you look at your stretched palm (have you ever really looked at your hand?) you can flip your hand over to reveal a floating bubble that displays the time, battery percentage, and volume. Given how frustrating it was to quickly check the time in visionOS, this is a great new feature that I anticipate using a lot.

If you bring your fingers together while looking at the bubble, you can slide right or left to adjust the device volume quickly. And if you tap while looking at the bubble, Control Center opens. This, to me, is a much better way to access Control Center—though it’s certainly less discoverable than the little firefly that frequently hovers at the top of your vision.

Customize Home View. You can now move apps around in the Home View, and it works pretty much how you’d think: while looking at an app, bring your finger and thumb together to enter Jiggle Mode and then pinch the app to drag it around, even across pages. It worked exactly as I expected it to, and I’m happy to finally be able to put some of my favorite apps on page one.

If you’re a Magic Keyboard user, you’ll be able to see it now.

Breakthrough keyboards. Typing on a keyboard in an Environment was previously very weird, because while your hands were visible, your keyboard itself was not. Apple has upgraded this in visionOS 2 so that it recognizes either the Apple Magic Keyboard or the keyboard on a MacBook, and that’s great.

But at least in the first developer beta, I found it a bit finicky—the keyboard needed to be positioned just so for it to appear, and even then it sometimes felt like the environment was eating away at the edges of the keyboard. I’m not even a hunt-and-peck typist, but being able to orient myself by actually seeing the keyboard is still valuable.

Also, I’m disappointed that Apple has limited this feature to its own keyboards. I realize it would be harder to build a model that recognizes more generic keyboards, but most keyboards really do have some pretty obvious characteristics in common, don’t they? Alternatively, maybe Apple should consider a feature like the one Meta offers, which lets you work in an environment but with a specific cut-out—like a tabletop—set to pass through.

Life’s a beach. Famously, visionOS shipped with two Environments marked as “Coming Soon”—one featuring a blurry beach image and another with fog-shrouded trees. The trees remain a mystery—Twin Peaks environment?—but the beach has arrived in the form of Bora Bora. As someone who loves beaches, it is spectacular. You can hear the sound of the waves, the palm trees gently blow in the breeze, and you can even see the color of the beach change as a thin cloud passes over the sun. I think I am going to be spending a lot of time in Bora Bora (the visionOS 2 environment, alas).

I didn’t get a chance to test out some other features due later this year, including support for a much larger display in Mac Virtual Display mode and multi-view support in the Apple TV app. In a Spatial Persona call with my podcast co-host Myke Hurley, the updated personas looked good (most notably hand gestures) but Myke’s mustache still prevented his persona’s mouth from moving. There’s more work to be done there.

Still, it’s exciting to get some major new features out of visionOS 2, despite the fact that visionOS 1.0 only shipped a few months ago.


Our favorite features of Apple Intelligence, the biggest missed opportunity at WWDC 2024, our most anticipated iOS 18 feature announced, and our reactions to the lackluster iPadOS announcements.



Sequoia virtual machines allow logging in to iCloud

Ars Technica’s Andrew Cunningham catches a nice small detail from amongst the WWDC developer announcements:

As long as your host operating system is macOS 15 or newer and your guest operating system is macOS 15 or newer, VMs will now be able to sign into and use iCloud and other Apple ID-related services just as they would when running directly on the hardware.

This is a complaint I’ve heard from more than a few developers of my acquaintance, and the change should help improve testing and other processes. As someone who usually installs a beta release of macOS at some point during the cycle (albeit not in a VM), it’s clear to me how much iCloud integration is a part of the whole OS experience, and the previous prohibition was a big limitation in terms of testing real-world usage.

However, there’s still a restriction in place: the feature is only supported if both the host and the virtualized OS are running Sequoia or later.


By Dan Moren for Macworld

Apple is finally letting you have it your way–kinda

Apple, as a company, has always extolled the value of putting the “personal” in “personal computer,” from its earliest days pushing back at the monolith of IBM and its lookalike beige boxes to its more recent extremely personal devices like the iPhone, Apple Watch, and AirPods.

But that ethos of “personal” technology has always been in fundamental tension with the company’s other overriding principle: Apple knows best. Whether it’s the design of its apps or how to use its features, the company has a strong streak of imposing what it believes is the best approach on its users.

In the company’s latest platform updates, this tension is more apparent than ever. Apple announced several new features that allow users to bring their own touches to their devices—but it did so in a typically Apple fashion that still kept everything within bounds.

Continue reading on Macworld ↦


Live from Cupertino, Jason has his in-person reactions to Apple’s big WWDC announcements. And in London, Myke processes his feelings about some controversial Apple choices.


Live from (Outside) the Steve Jobs Theater

Jason and Dan

Just hours after the WWDC Keynote, Jason and Dan reflect on the announcements while overlooking the ring at Apple Park.


By Jason Snell

Siri’s biggest gains might be found at the edges

We’re on the verge, maybe, of something big. Apple broke ground with its voice-driven Siri assistant back in 2011, but over the years, Siri’s trajectory has been meandering and frustrating. It’s embedded deeply in popular culture—“Hey Siri” is practically a catchphrase—but it’s still painfully inconsistent at best and a failure at worst.

The rise of AI chatbots, which (despite their tendency to hallucinate) manage to project the illusion of intelligent conversation, has really put Siri up against it. This week, the Wall Street Journal reported that Apple had a faster, more conversational, and more accurate Siri waiting in the wings at least three years ago—but decided to go with a more modest upgrade code-named “Siri X.”

If the idea that someone inside Apple felt that Siri didn’t need to be more conversational and accurate makes you angry or confused or both, I’m right there with you. We can only hope that Apple has reconsidered its complacency over the last 18 months, since Apple executives reportedly realized that the rise of AI chatbots was destined to change user expectations.

I hope that the people who lost the arguments for a better Siri back then have been winning those arguments for the past 18 months. And that hope has made me optimistic—here in the shadow of WWDC, where everything’s possible and nothing’s decided—that Apple might really be on the road to giving Siri the biggest upgrade it’s ever received.

That’s a big deal—and not just for the obvious reason, which is that I’d like to use Siri for something other than a tea timer or adding items to my shopping list. With a good, reliable, conversational Siri, a lot of Apple’s products stand to benefit.

When I’m running or walking my dog, I generally use just AirPods and my Apple Watch. Theoretically, I’m covered by Siri, but I generally avoid talking to it because it’s unreliable. I’ve also turned off all of the features that summarize messages I’m receiving because I find them awkward and unhelpful.

But imagine what a good Siri experience could unlock for users of AirPods, Apple Watch, or both. Humane built an entire product—the Ai Pin—out of a voice assistant that you carry around with you. But with a better Siri interface, Apple’s already most of the way there. The better the voice interface, the more useful your device can be when you need to interact with nothing but your voice.

Similarly, an upgraded Siri could be a huge step toward improving the CarPlay experience. As is the case with most contexts in which Siri is available, I’ve tried Siri in the car and learned my lesson—namely, to never use it again. When I’ve got to keep my hands on the wheel and my eyes on the road, being able to have a proper voice assistant that can summarize what’s going on, understand context, and follow up when I ask for it could be incredibly valuable. But Siri being unreliable has degraded that experience.

If there’s one place where the Humane Ai Pin and those Meta Ray-Ban sunglasses have an edge over Apple’s stuff, it’s that they both offer a forward-facing camera to interpret the world around you. With an upgraded and competent Siri, it might make more sense for Apple to sell a pair of glasses or a clip-on camera pin as another accessory to help improve the data being input into Apple’s AI systems.

And then there’s the kitchen, where a device that combines a HomePod with a screen and camera might end up being quite welcome—I’ve used Google and Amazon devices in my kitchen for years—but it really needs to be rock-solid when it comes to voice commands. The Siri on my current HomePods doesn’t measure up.

Of course there are numerous ways that a better Siri and more AI-focused features can improve Apple’s core platforms. (I’d love a reason to use Siri on my Mac, for example, but I’ve never really found one.) But it strikes me that Apple’s biggest opportunity might actually be at the edges of our lives, where traditional interfaces are harder or even impossible to use. A useful Siri instantly makes the Apple Watch, AirPods, CarPlay, and HomePod better—and could facilitate even more interesting products to come.

But first things first: Siri needs to get vastly better. I hope that we get the first sign of that on Monday.


By John Moltz

This Week in Apple: There’s AI and then there’s AI

John Moltz and his conspiracy board. Art by Shafer Brown.

AI is coming to Apple devices but it’s not what you thiiiiink. The big news, however, is what else we’re going to be getting: a laundry list of stuff we’ve wanted for a while. And is anybody going to be getting in line at Lowe’s to try a Vision Pro?

Everything happens at Craig’s house

Remember Christmas at Craig’s house? You were there. We were all having a good time, drinking “nog” and singing carols (you were off-tune, by the way). After a while we’re looking around and we’re like, hey, where’s Craig?! The guy snuck off upstairs! Hey, Craig! Get down here, we’re having a good time!

Turns out he was maybe having a bit of a panic attack.

“Apple went all in on AI after Craig Federighi tested Github Copilot during his Christmas break”

Now we are just days away from seeing how Apple has changed course in the intervening months. Honestly, though, what could Apple announce next week that could beat the fine AI offerings already available from other companies?

“Microsoft’s hallmark AI feature dubbed a security disaster just days away from Apple’s privacy-focused AI launch at WWDC”

“Google scrambles to manually remove weird AI answers in search”

“Humane Tells Customers to Stop Using AI Pin’s Charge Case Due to Fire Risk”

“Google Researchers Say AI Now Leading Disinformation Vector (and Are Severely Undercounting the Problem)”

Oh. Well. Anyway, if you thought Apple would choose not to use the term “AI”, psych! Get ready for Apple Intelligence.

“‘Apple Intelligence’ will automatically choose between on-device and cloud-powered AI”

The Verge helpfully gives you the tl;dr right up at the top.

Bloomberg reports that Apple’s AI features for its apps will focus on ‘broad appeal’ and privacy, while leaving the chatbots to others like OpenAI.

This seems like the right, sensible approach that one maybe wishes other companies would take. So expect everyone to keep saying Apple’s behind in AI because they’re not shoveling garbage answers into everyone’s screens like coal into a steam engine.

Burying the lede

OK, so we’re getting AI. But that’s not what regular Apple product users should be excited about.

“iOS 18 and macOS 15 to Feature Refreshed Settings App”

The Settings app on iPhone and iPad and the System Settings on Macs will get an updated UI with a “reorganized, cleaner layout” that’s simpler to navigate.

Is it possible that Mac users’ long national nightmare will soon be over?

“It Was Hell,” Recalls Survivor Of MacOS Ventura and Sonoma System Settings App

But that’s not all, as Mark Gurman continues to dish the deets.

“Apple to Debut Passwords App in Challenge to 1Password, LastPass”

While some consider this antitrust bait, those of us who’ve watched 1Password sell out (I liked their early stuff) and LastPass simply blow it might have other opinions.

Also according to Mark Gurman, Apple will ship a number of other remarkable enhancements to its operating systems, including:

  • A revamped Control Center with multiple pages and more customization.
  • An updated macOS Mail.app featuring Gmail-style email grouping capability and AI-assisted smart replies.
  • Retro wallpaper packs that “reference old school icons and slogans”.
  • Messages will let you react to an iMessage with any emoji.

And they’re not even going to charge for this upgrade?

Vision Lowe

Not to be outdone, the Vision Pro is also on the verge of a tantalizing user experience.

“Apple Vision Pro demo coming to Lowe’s home improvement retail locations”

I used to shop for Macs in the back of a CompUSA and even I was not expecting this.

Apparently Lowe’s has an immersive experience that allows you to see your kitchen of the future using the Vision Pro, like a 1950s GE ad come to life. Or… virtual life, anyway. Who says there aren’t enough experiences for the Vision Pro?

“A Lowe’s kitchen specialist will meet with you for a one-on-one experience, which should take about 45 minutes,” according to the website.

Hey, I’m an introvert. If I wanted to have an awkward conversation with someone in a kitchen for 45 minutes, I’d just go to a party. C’mon.

[John Moltz is a Six Colors contributor. You can find him on Mastodon at Mastodon.social/@moltz and he sells items with references you might get on Cotton Bureau.]


AI approaches and packing for WWDC

We’re packing our bags and taking this show on the road to WWDC. (But only Dan has to fly.) Before we go, we discuss how Apple is likely to handle discussing AI at the event, and what has led the company to this point.



By Jason Snell

Swift 6 will be a big, under the radar change

Howard Oakley says that there will be very little hype for a major change that will alter the fabric of apps on Apple’s platforms:

Don’t be caught by the marketing hype in Apple’s long list of new features coming in macOS 15. Just as Sonoma’s biggest change came not in its emoji count, but in how iCloud works, something glossed over at the time, there will hardly be a mention of this year’s biggest change, with the arrival of Swift 6 and its deep concurrency.

Apple devices all have many, many processor cores, so threading and concurrency are key to optimum performance.
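To make the scale of the change concrete, here’s a minimal sketch—my own illustration, not taken from Oakley’s post—of the pattern Swift 6’s strict concurrency checking pushes developers toward: shared mutable state moves behind an actor, so the compiler can prove the code is free of data races.

```swift
// Shared mutable state goes behind an actor; the compiler guarantees
// that all access is serialized, so there can be no data race.
actor Counter {
    private var value = 0
    func increment() -> Int {
        value += 1
        return value
    }
}

// 100 concurrent increments, each serialized by the actor.
let counter = Counter()
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<100 {
        group.addTask { _ = await counter.increment() }
    }
}
```

Under the Swift 6 language mode, the equivalent code that mutated a plain `var` from multiple tasks would be rejected at compile time—which is exactly why migrating existing apps is the big, quiet job Oakley describes.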


What we’re excited about at WWDC, the oldest tech we have in use, how we store our digital photos, and whether we journal.



By Jason Snell

Microsoft, Apple, and AI

Charles Stross writes a refreshingly skeptical column about AI hype, how Apple has been ahead of the game for a while now, and why Microsoft seems to be courting disaster by enabling its Recall feature:

Recall is already attracting the attention of data protection regulators; I suspect in its current form it’s going to be dead on arrival, and those CoPilot+ PCs due to launch on June 18th are going to get a hurried overhaul. It’s also going to be interesting to see what Apple does, or more importantly doesn’t announce at WWDC next week, which is being trailed as the year when Apple goes all-in on AI.

As I wrote recently, Apple has an opportunity at WWDC to announce AI features that are useful and measured, and to present itself as a responsible company that’s not rushing headlong into the latest tech industry hype cycle. I hope it takes that opportunity.


There’s a Thread running through many of Apple’s latest products

No, not Meta’s social networking platform: Thread, the smart home radio chip.1 You know, the one Apple quietly put in its HomePods, iPhone 15 models and, *ahem*, some Apple TVs. The Verge’s Jennifer Pattison Tuohy reports that the company has also been sneaking it into all sorts of hardware since last fall, including the brand new iPad Pro and iPad Air, the latest MacBook Pros, and more:

While the company doesn’t list Thread on the specs of any of these products, FCC reports indicate that many of Apple’s latest devices have had Thread radios tested for compliance. Generally, you don’t test a radio that’s not there.

Yes, it’s a mystery that can best be exemplified by a Ryan Reynolds GIF. It’s unclear whether or not these radios are active and, if they are, what they’re being used for.

Thread is a mesh system, and adding the capability into more devices could help improve connectivity and coverage for smart home devices across your home, especially as the Matter standard (which supports Thread alongside Wi-Fi and Ethernet) continues to roll out. But it’s also possible that Apple’s just throwing them in there because, oh well, they could be useful at some point in the indeterminate future. Kind of like the Ultra Wideband chip. So I guess the company will probably just keep including Thread until it’s in all of their devices and product lines2, at which point…profit?


  1. Ah, branding. Not confusing at all! 
  2. Except the $129 64GB Apple TV for…reasons. 

By Jason Snell for Macworld

Apple’s AI opportunity: Be the adult in the room

WWDC 24 is almost here, and everyone expects it to be all about AI. As the eyes of the tech world turn back to Apple, there’s a lot of work to do. It’s got to establish that it’s been working on AI features for years, show off new features, catch up with AI features from competitors, and maybe even take the opportunity to show where it’s raising the bar.

But this isn’t just Apple’s chance to show it’s doing AI right. It’s also an opportunity to redefine the conversation about AI to make it more substantive and results-oriented—and, of course, to make Apple look better while doing it.

Continue reading on Macworld ↦


Apple’s Screen Time restrictions on websites are easily circumvented

The Wall Street Journal‘s Joanna Stern describes how Apple’s Screen Time parental controls have a simple security vulnerability that allows unfettered web access:

I tried a number of Apple devices. With Screen Time enabled on iPads and iPhones running iOS/iPadOS 15, 16 and 17, I was able to visit porn sites, watch graphic, violent news footage on YouTube and Google “how to buy cocaine.” 

I was able to do the same in Safari on a MacBook Pro running the latest MacOS, Sonoma. All I had to do was type the character string—which we won’t reproduce here so it isn’t abused—plus any web address.

I’ve only just started setting up Screen Time for my kid who, thankfully, isn’t old enough to even type a URL, much less try to work around the restrictions. (The biggest concern is accidentally leaving the app he’s in.) I’ve heard from plenty of other parents, though, who’ve found Screen Time frustrating and full of loopholes. And this is after Apple started pruning third-party parental control apps from its iOS store.

At the end of the piece, Stern details a number of other Screen Time bugs that have been reported to her by others. I’d add a few more suggestions, like, say, allowing a Screen Time passcode that’s longer than four digits.


By Jason Snell

Microsoft’s Recall feature is a security ‘disaster’

Tom Warren at the Verge has a nice summary of the security problems inherent in the new Windows feature Recall, which records everything you do (what could go wrong?) and is apparently quite hackable:

Microsoft may well find itself needing to rework Recall, or recall it, if you like. There are clearly some obvious holes in the way data is stored here that need to be addressed, and making this an opt-out experience has privacy campaigners concerned. Recall’s launch comes just weeks after Microsoft CEO Satya Nadella called on employees to make security Microsoft’s “top priority,” even if that means prioritizing it over new features.

For the amazingly ugly details, you can read Kevin Beaumont’s Medium post, which explains how having an easily cracked plain-text transcript of pretty much every screen on your PC, downloadable in seconds, is probably a bad idea.


