Six Colors

Apple, technology, and other stuff

By Jason Snell for Macworld

How Apple learns (or doesn’t) from its failures

Nobody’s perfect. We all make mistakes, from the littlest kindergarteners to the world’s most valuable and powerful corporations. What’s most important is how we respond to our mistakes, of course. Do we learn and grow? Do we deny and deflect? Or do we just give up?

What I’m saying is, Apple sometimes takes its failures and learns important lessons that inform its future attempts… but sometimes, it seems to just give up.

Continue reading on Macworld ↦


Choosing between noise cancellation and excellent audio quality in headphones, the impact of an unlimited supply of one thing on the world or your life, the significance of Apple’s adoption of RCS, and recommendations for Black Friday deals.



The possibility that Apple will release a lower-cost MacBook, Apple’s difficulties in building a 5G radio to rival Qualcomm, Apple’s attempt to appease the EU by adopting RCS, and all hell breaking loose at OpenAI.


By Joe Rosensteel

Going in-depth on iPhone Spatial Video

The iOS 17.2 beta has brought the ability to shoot Spatial Video for the forthcoming Vision Pro, and a handful of press participated in a demo where they could view Spatial Video on the Vision Pro headset. While the footage Apple recorded with the cameras in the Vision Pro headset naturally had better stereo separation than the iPhone’s, most members of the press seemed impressed by the content captured on a device that’s far more likely to be at hand to capture memories. (I’m more than a little curious to see a demo like that myself, but I’d settle for some good sushi.)

Earlier this summer I gave a quick overview of stereoscopic terms and filmmaking. Part of that post had to do with guessing at what Spatial Video was. Apple’s marketing materials show third-person vantages of people having perfectly separated, holographic experiences, but the reality is that Spatial Video has a lot more in common with the left-eye/right-eye combo of traditional stereoscopic video.

In my piece this summer I linked to Chris Flick’s WWDC video, which covers general stereo terms and how Apple is handling streaming stereo content. The file container has a left-eye video stream, plus metadata describing the differences between the two eyes so the right-eye view can be reconstructed. When Apple unveiled the iPhone 15 Pro and Pro Max, it touted that a beta update would bring the ability to shoot that Spatial Video, but it didn’t get into details, and showed another sci-fi hologram thing.
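
Apple hasn’t published the details of that difference metadata, but the general idea can be sketched: store one eye in full plus a per-pixel horizontal disparity map, then rebuild the other eye by shifting pixels and patching the occluded holes. The toy Python below is my own illustration of that idea, not Apple’s actual MV-HEVC scheme:

    import numpy as np

    def reconstruct_right_eye(left: np.ndarray, disparity: np.ndarray) -> np.ndarray:
        """Toy reconstruction of a right-eye view from a left-eye RGB
        image (h, w, 3) and a per-pixel horizontal disparity map (h, w).
        Illustrative only; not Apple's actual scheme."""
        h, w = disparity.shape
        right = np.zeros_like(left)
        filled = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                xr = x - int(disparity[y, x])  # shift toward the right-eye vantage
                if 0 <= xr < w:
                    right[y, xr] = left[y, x]
                    filled[y, xr] = True
            # Occlusions leave holes with no data; crudely repeat the last
            # filled pixel (this is the "filling in" problem discussed below).
            last = right[y, 0]
            for x in range(w):
                if filled[y, x]:
                    last = right[y, x]
                else:
                    right[y, x] = last
        return right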

Computer, arch.

The iPhone 15 Pro’s Wide and Ultra Wide cameras are arranged so that when the iPhone is held horizontally, the software can crop in on them and get two similar-ish views (some rough arithmetic on what that crop costs follows the list below). A reminder of the tech specs for the iPhone 15 Pro lenses and sensors that are being combined for Spatial Video:

  • 48MP Main: 24 mm, ƒ/1.78 aperture, second‑generation sensor‑shift optical image stabilization, 100% Focus Pixels, support for super‑high‑resolution photos (24MP and 48MP)
  • 12MP Ultra Wide: 13 mm, ƒ/2.2 aperture and 120° field of view, 100% Focus Pixels
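
Some rough arithmetic hints at what that crop costs. These numbers are my own back-of-the-envelope estimates derived from the specs above, not anything Apple has published:

    # Matching the 13 mm Ultra Wide to the 24 mm Wide field of view takes
    # a roughly 1.85x linear crop into the 12MP sensor's output.
    ULTRA_WIDE_MM = 13.0            # 35mm-equivalent focal length
    WIDE_MM = 24.0                  # 35mm-equivalent focal length
    SENSOR_W = 4032                 # 12MP output is about 4032x3024 (4:3)

    crop = WIDE_MM / ULTRA_WIDE_MM  # ~1.85x
    usable_w = SENSOR_W / crop      # ~2184 px wide
    usable_h = usable_w * 9 / 16    # ~1228 px tall at 16:9

    print(f"crop factor: {crop:.2f}x")
    print(f"usable 16:9 region: {usable_w:.0f} x {usable_h:.0f} px")
    # Roughly 2184 x 1228: not far above 1920x1080, which may help explain
    # why capture tops out at 1080p.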

I wondered what Apple would do to augment the left- and right-eye video capture to match the two views better, since anyone with an iPhone knows there is a perceptible quality difference between the 0.5x and 1.0x lenses. But as my friend Dan Sturm pointed out on Mastodon, it doesn’t seem to be doing a whole heck of a lot:

First things first, I have to admit, I’ve been obsessing over trying to pull this stuff apart since the beta came out. It’s so easy to get caught up in the excitement around these types of things because it’s a new, magical experience. But there is no magic. This is exactly what you would expect Stereo3D footage from an iPhone to look like.

It’s very interesting to me how many [Stereo3D] “rules” they’re just ignoring here. The [depth of field] on the lenses does not match. The detail, color, compression, stabilization (or lack thereof) does not match. The final image is not what one would call “good”, but it does work. It is [Stereo3D] footage from an iPhone.

Admittedly, for many people, it will feel like magic.

Dan’s referring to the slight differences between the two vantage points, which were among the problems with iPhone video capture that I described back in June. Stu Maschwitz and others found similar results, so it’s pretty safe to say it’s not a fluke.

To capture good 3D video, you ideally want identical lenses and sensors synchronously capturing what’s happening, so that the only difference is the horizontal offset. Any differences in color, value, or softness will seem to shimmer as your brain combines the two images. It’ll still have the illusion of being 3D, but it’ll be fatiguing or uncomfortable to watch.

Without personally having access to a Vision Pro, I can only tell you things based on these videos we pulled apart using Spatialify, an iOS app that’s available only via a TestFlight beta. It’s possible that visionOS does some additional processing on these videos as it decodes them, though my hunch is that they’ll continue to be exactly what they appear to be: two images from two very different cameras, put together.

There’s also the fact that these videos are limited to 1080p30. I understand that the different focal lengths require substantial cropping on the Ultra Wide camera, with a corresponding drop in quality, but I’m less certain why the crop is exactly 1920×1080, since that’s not even the sensor’s aspect ratio. This video isn’t going onto a 2000s-era TV; it’s meant to be viewed on an infinite canvas.

This limitation, more than anything else, undercuts the case for capturing Spatial Video right now. No, resolution isn’t everything, but it’s also not nothing. People also tend to shoot vertical video because of how we hold our phones for both recording and viewing. This feature asks people to choose between sharing a video optimized for phone viewing and recording something that’s going to be part of a personal headset viewing experience.

Also consider that Apple isn’t letting the iPhone 15 Pro capture Spatial Photos. Stereoscopic photography has been around longer than motion picture film, so the absence of a photo mode suggests something about the quality of the imagery. After all, it’s very easy to scrutinize a single still frame, and a lot easier to forgive flaws in a constantly moving image.

I’m not saying that Apple’s Spatial Video implementation is bad. But I would be hesitant to recommend anyone switch their Camera app over to Spatial Video and shoot all of their videos with it right now. For the time being, I think people generally would be happier if they continued to shoot and share video as they do right now. You can always watch a video floating on a card in space with a Vision Pro headset, and at 4K resolution you can make it fill as much of your field of view as you might like.

So you still want to do it, huh?

If someone really does want to shoot Spatial Video, I’d recommend considering the subject matter first. In the demos Apple provided, a sushi chef made sushi for the journalists to record. The chef was near enough to the camera to have internal depth, as well as depth relative to the environment. Apple’s other videos also centered on people in environments.

From CNet’s Scott Stein:

The camera app makes recommendations on turning the camera sideways, and staying a certain distance from a subject. I was told to stay within 3 to 8 feet of what I was shooting for a good spatial video, but when I shot my test recording of someone making sushi at a table I got up closer and it looked perfectly fine. I also recorded in a well-lit room, but apparently the spatial video recording mode prevents adjustments on brightness and contrast, which means low-light recording may end up grainier than normal videos.

Shooting something like a wide-open space, with nothing in the foreground or midground, is not going to look or feel like much of anything. Please, I beg you, don’t shoot Spatial Video of fireworks—there will be no depth at all. Just because you think “it’ll be in 3D” doesn’t mean it has any internal depth at that distance. You want depth? Then record someone holding a sparkler.

Jason Snell took his iPhone 15 Pro to a Cal game to shoot some Spatial Video. (See my video breaking it down.) Being in a stadium might feel big and grand, because you’re immersed in a large space that surrounds you—but it’s not something the iPhone can really capture. Spatial Video doesn’t surround you at all. You’re looking into a window at a stadium, very much separated from it rather than immersed in it. Without someone in the foreground as a subject, it will feel pretty flat. So definitely keep that window metaphor in mind.

Shooting something extremely close would likely cause issues with objects breaking frame, but you could get close to a subject provided it was “sticking out” at the camera and not crossing the entire field of view.

Apple tries to mitigate issues with things breaking frame by applying a fuzzy falloff at the edges instead of a hard termination, where something exiting the frame would be visible in only one eye, potentially causing strain. Be mindful of that, because you won’t see the soft-matte edge while you’re recording in the Camera app.
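
Apple hasn’t documented how that falloff is generated. As a toy illustration of the general soft-matte technique (my own sketch, not Apple’s implementation), here’s a minimal version that fades an RGB frame’s left and right edges with a linear ramp:

    import numpy as np

    def soft_matte(frame: np.ndarray, falloff_px: int = 64) -> np.ndarray:
        """Fade the left and right edges of an RGB frame (h, w, 3) to
        black with a linear ramp, so an object exiting the frame in only
        one eye doesn't terminate abruptly. Illustrative only."""
        h, w = frame.shape[:2]
        ramp = np.ones(w, dtype=np.float32)
        edge = np.linspace(0.0, 1.0, falloff_px, dtype=np.float32)
        ramp[:falloff_px] = edge         # fade in from the left edge
        ramp[-falloff_px:] = edge[::-1]  # fade out toward the right edge
        return (frame * ramp[None, :, None]).astype(frame.dtype)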

Try to record Spatial Video in well-lit areas. There will still be subtle shifts in everything, but the lenses and sensors match better when they don’t have to compensate for high-ISO sensor noise and differences in aperture.

Eyes toward the future

I might sound pessimistic, but I’m not—I’m only skeptical of what’s currently on offer. It’s early days for Spatial Video. The first pass at Portrait Mode was really, really bad, but it’s now refined enough that it can be applied as a post-processing option. Years of improvements in hardware and software got us to this point, and if Spatial Video is a focus for Apple, I’m sure it’ll get there too.

I would caution that I’m only making that comparison to Portrait Mode in terms of the march of progress, not because I ever envision Apple shipping an entirely post-processed Spatial Video mode. As I mentioned in my previous post, when you offset something in post you’re cutting it up into layers, and every place where there’s something in front of something else is an occlusion. Where there are occlusions, there’s no data—and that has to be filled in. Could Apple do that with generative fill that’s stable across a frame range? Maybe, but that seems like more of a Google thing.

Maybe in a future iPhone we’ll have a better Ultra Wide camera sensor, lens, and optical stabilization? Perhaps we’ll see more advanced machine learning to transform the more detailed Wide camera data to cover over the inconsistencies in the Ultra Wide’s view? Maybe there will be the option to record 4K and Spatial Video with a later iPhone so you feel like you have the best of both worlds?

When Apple lets you take a still Spatial Photo, that will be the signal that it’s confident in the image quality and not just the emotional content.

There’s no FOMO here, not right now, for Spatial Video. If the primary reason you shoot video is to remember moments with people, keep that in mind. The people are more important than whether the video is 2D or 3D.

[Joe Rosensteel is a VFX artist and writer based in Los Angeles.]


By John Moltz

This Week in Apple: Bad looks

John Moltz and his conspiracy board. Art by Shafer Brown.

Imagine, if you will, an avalanche of tens of billions of beans being spilled. Such is what happened in Google’s antitrust trial. There’s big news in little bubbles this week and social networking continues to be a huge mistake.

Cringe all the lawyers

After years of speculation, we finally know that Apple makes a lot of money.

Well, from Google specifically.

“Apple Gets 36% of Google Revenue in Search Deal, Expert Says”

See, now it makes sense why Apple doesn’t feel bad about taking 10 to 30 percent from developers. “Google gives us 36 percent of a metric crapton of money. You’re arguing about 30 percent of that little bit you make?”

It makes sense through a certain ridiculously wealthy lens.

In a delightful detail about this revelation, Bloomberg says Google’s economics expert, John Murphy, accidentally let this tidbit slip, causing Google attorney John Schmidtlein to “visibly cringe.” Both Google and Apple had been trying to keep this number a secret, but now the multibillion-dollar cat is out of the bag.

Once again, let me humbly suggest that we have an antitrust trial running all the time, because the discovery and the accidental revelations, even the way they’re unveiled, are just delicious.

50 shades of green

In an unexpected move, Apple announced this week that it will support RCS messaging sometime late next year.

“Later next year, we will be adding support for RCS Universal Profile, the standard as currently published by the GSM Association. We believe the RCS Universal Profile will offer a better interoperability experience when compared to SMS or MMS. This will work alongside iMessage, which will continue to be the best and most secure messaging experience for Apple users,” said an Apple spokesperson.

“We’re gonna do it, we just want everyone to know our way is better.” OK, Apple.

Apple further said it will work with the standards group to get end-to-end encryption into the standard.

After this announcement, there was wild speculation about what color bubble RCS messages would have. Certainly not blue. But green or a new color? Or even an entirely new color of the spectrum like the color flink or something, to further draw attention to the fact that the messages are not inherent to the Apple ecosystem? As it turns out, they’ll just be green. Sad, really. I wanted to see flink, even though it has reportedly driven several Apple researchers mad.

Social nutworks

You’ve heard of Taskrabbit, where you outsource chores. You’ve heard of Uber, where you outsource driving. Now Meta is hoping to invent a means of outsourcing responsibility, and it’s hoping that the government will help.

“Meta says vetting teens’ ages should fall on app stores, parents”

“Look, can’t our platforms just be unfettered cesspools — wretched hives of scum and villainy, if you will — and someone else can act as an unpaid bouncer standing at the door? Why should we have to do all the work? Or any work at all? We would prefer to do negative work, which is what we are proposing. Thank you for your consideration.”

Meanwhile, things continue to go swimmingly over at the web site that Musk ruined (look, if he doesn’t want to call it Twitter anymore, that’s what I’m calling it).

“IBM pulls X ads as Elon Musk endorses white pride”

Oh. Ohhh. That is… that is not good.

The fact that IBM had pulled ads while Apple continued to buy them had not gone unnoticed among Apple fans, and something of a letter-writing campaign to Tim Cook had taken off on Mastodon. When the company that sold technology to the Nazis during World War II is pulling money from Musk’s vanity dumpster fire and you are not, that is something of a bad look. Fortunately, Apple seems to have heard the message.

“Apple to pause advertising on X after Musk backs antisemitic post”

Here’s hoping this is one of those designs where the pause button is also the stop button.

[John Moltz is a Six Colors contributor. You can find him on Mastodon at Mastodon.social/@moltz and he sells items with references you might get on Cotton Bureau.]


Video

November Video Q&A

Time for our next monthly video Q&A, available only to subscribers at the More Colors or Backstage level.

Please send in your questions at sixcolors.com/morecolorqs or by typing /ask on Discord.


Kicking the can and upgrading green bubbles

Satellite emergency services get a reprieve, Apple surprisingly pre-announces a Messages upgrade, and we prepare for Thanksgiving. (See you in two weeks.)



Disney’s going to own all of Hulu. Does this uniquely position them to be the most legitimate challenger to Netflix? We also discuss the power of platforms, Apple’s “Napoleon” opportunity, and Disney’s Marvel experimentation.


Apple will add RCS support to Messages

Apple has told Chance Miller of 9to5Mac that it will be supporting the RCS standard for messaging:

Later next year, we will be adding support for RCS Universal Profile, the standard as currently published by the GSM Association. We believe RCS Universal Profile will offer a better interoperability experience when compared to SMS or MMS. This will work alongside iMessage, which will continue to be the best and most secure messaging experience for Apple users.

Here’s what this means:

  • iPhone communications with Android devices via Messages will improve. Currently Messages uses the old SMS and MMS standards for sending texts and media to Android phones. RCS supports better image transfers, pass-along of location data (used in several Messages features), and more.
  • It might mean that these messages are more secure than they used to be, though the fundamental security of RCS as a protocol is a little hazy. Apple says it’ll work with the GSM Association to improve the standard, which might include security improvements? We’ll see.
  • It absolutely doesn’t mean that “green bubble shame” is going anywhere. There’s no way that Apple will promote RCS messages to blue bubbles in Messages. Not only does the blue bubble indicate that messages are from other Apple devices, it indicates that they’ve been sent securely using iMessage. It’s not just branding, it’s meaningful. It’s unclear if RCS messages will use green bubbles or some other color, but it won’t be iMessage blue for sure.
  • This is coming in a software update next year, so either a late iOS 17 update or iOS 18.

Why is Apple doing this? It sure feels like a way to indicate its support of open standards at a time when it’s being investigated by various bodies, most notably in the European Union. Who knows if it’ll work?

Either way, this is a good announcement. It’s hard to believe that Apple still falls back to SMS and MMS for all communications with Android devices. RCS isn’t a replacement for iMessage, but it will improve chats with Android users within Messages. This hasn’t just been a user experience problem for Android users, it’s been one for iPhone users who have Android users in their lives.


Capturing spatial video on iPhone, our font opinions, whether we use journaling apps, and when we tell our non-techy friends to update their software.


App Store Awards finalists, WordPress woes, AI pins and the secret identity of Dan’s benefactor.


Apple extends Emergency SOS coverage for iPhone 14 users

Apple Newsroom:

One year ago today, Apple’s groundbreaking safety service Emergency SOS via satellite became available on all iPhone 14 models in the U.S. and Canada. Now also available on the iPhone 15 lineup in 16 countries and regions, this innovative technology — which enables users to text with emergency services while outside of cellular and Wi-Fi coverage — has already made a significant impact, contributing to many lives being saved. Apple today announced it is extending free access to Emergency SOS via satellite for an additional year for existing iPhone 14 users.

Apple’s in an interesting position with this service. Even though it’s currently limited to emergency usage—which hopefully involves a pretty small percentage of eligible iPhone users—satellite connectivity isn’t cheap.

I was pretty confident Apple would kick this can down the road, and now it has. My guess is that Apple might (next year or the year after) introduce a paid tier that lets you do more with satellite connectivity—non-emergency messaging, for example—and use the charge for that to essentially subsidize free emergency functionality for all users.

Yes, Apple wants to continue to make money on this, but it definitely doesn’t want to be in a position of having a customer unable to use the service because they didn’t pony up for the monthly cost—that would not be a great look in those “look at all the people who are still here to celebrate their birthday because of Apple technology” videos.


By Dan Moren for Macworld

How AI could take iOS 18 and macOS 15 to the next level

Artificial intelligence is the buzziest of buzzwords right now. But as rivals like Microsoft, Amazon, and Google have gone full throttle on incorporating this latest hot technology into their products, Apple has taken a decidedly slower—if not uncharacteristic—approach that has more than a few critics lambasting the company for trailing behind its competitors.

Apple is, of course, no stranger to the use of machine learning in its products, though it’s tended to deploy that technology in more subtle ways that don’t scream “artificial intelligence.”

Still, if rumors are to be believed, Apple is going hard at building generative AI features in its software updates over the next year. Naturally, most people’s attention will probably go to Siri as one place the company could benefit from integrating the sort of technology demoed by others, but there are definitely other places throughout Apple’s software platforms where AI could make as big an impact—if not bigger—on users’ lives.

Continue reading on Macworld ↦


By Jason Snell

I’ll pin my hopes on AI assistants

Last week, the startup Humane did a marketing blitz for its forthcoming AI Pin, a $699 wearable designed by a bunch of ex-Apple people that has been the subject of a lot of tech-industry buzz lately.

It’s a really interesting product. While it would be easy to focus on the company’s lackluster marketing video, which featured actual AI hallucinations, I’m more interested in what this product says and doesn’t say about the present and future of tech.

I don’t think the AI Pin will succeed, for numerous reasons, foremost among them that it seems to be a product designed to make your smartphone unnecessary or ancillary. That feels like the product’s point of view not for any deep philosophical reason, but because Humane is a company with investors that needs to ship and sell a hardware product, and attaching itself to the side of Apple’s or Google’s smartphone operating systems would make this thing an expensive accessory instead of a revolutionary device.

It’s not a point of view that makes sense otherwise, because it posits a world where people just hate their smartphones and can’t wait to be rid of them. This is the world as seen through a funhouse mirror. People love their smartphones. That’s why we’re all staring into them for hours and hours every day! The knock on smartphones is that people use them all the time, and maybe we shouldn’t? But unless you’re going on some sort of digital fast, the results are in: people love using their smartphones. They’re the ultimate hit product of four or five decades of personal computing.

This is not to say the AI Pin doesn’t fit into some interesting niches. A personal constellation of devices—smartphone in your pocket, smartwatch on your wrist, maybe smart earbuds in your ears—gets more interesting when you consider that all of those devices are working together to collect information and communicate it back to you. And none of the devices I carry around daily look at the world around me. The AI Pin has a camera and clips on your shirt, so it’s able to see what you’re seeing and presumably do things with that information.

There’s a lot of potential here. The iPhone can do some amazing things when you hold its camera up—including figure out exactly where it is based on the buildings it can see, which is bananas—but mostly, that camera is looking at the inside of my pocket. Whether it’s glasses or a pin or something else¹, there’s valuable data to be gained from seeing the world around us. It’s a sense that our devices are missing most of the time.

The AI Pin’s interface is built around a smart assistant that uses a large language model. Humane is highly unlikely to corner the market on this technology. Instead, it’s using the same stuff that will soon permeate all our devices—at least, as soon as it’s ready.

Humane’s vision for the future of human interfaces doesn’t seem wrong to me. Sooner or later, voice assistants driven by large language models are going to be good and reliable, and the game is going to change. I don’t know if Google Assistant and Alexa and Siri are going to molt into beautiful butterflies next year or the year after or in 2030, but it sure seems like it’s going to happen.

What excites me about this is that what computers are good at, fundamentally, is drudgery. Computer spreadsheets were the first killer app because they eliminated the need to write numbers down on paper, sharpen pencils and erase pencil marks, and do all that math in order to figure out whatever you wanted to learn. My favorite moments using computers are often figuring out ways I can use automation to take a task that requires me to click and type stuff for half an hour and reduce it to a single keyboard shortcut.

Now imagine the prospect of an intelligent assistant that knows every single fact in your personal array of data. It’s read all your saved notes, email, time tracking data, and contacts, and knows what was said in all your meetings and more. Instead of having to invent search terms and search multiple data repositories and wrack your brain to find exactly the right piece of information, the assistant can just do it because it’s got millions of cycles to burn doing the drudgery for you.

Humane’s marketing videos do a pretty good job of showing how this next wave of AI-based assistants will change how we interact with our devices. I think most people will still enjoy tapping on a smartphone, but more complex interactions can be simplified. Kevin Roose of the New York Times wrote about using ChatGPT to create an agent that knew all the rules of his child’s daycare provider, effectively teaching an assistant to answer very specific questions about when Circle Time is and when the facility is closed for the holiday break. Leo Laporte built a programming coach for the Lisp language in less than half an hour.
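
The mechanics of the daycare trick are simpler than they sound: put the rules in a system prompt and let the model answer against them. Here’s a minimal sketch using OpenAI’s Python client; the rules text and model choice are my placeholders, not details from Roose’s actual setup:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    DAYCARE_RULES = """
    Circle Time is 9:30-10:00 on weekdays.
    The facility is closed December 23 through January 1.
    """  # placeholder text standing in for the real handbook

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Answer using only these rules:\n" + DAYCARE_RULES},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("When is Circle Time?"))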

Sure, it’s all early days. Chatbots still hallucinate—sometimes in their own marketing videos. If my time as a computer user and an observer of the tech world has taught me anything, it’s that new technologies explode into existence quickly—but then take way longer than you expected to get really good. We’re post-explosion now, and things are moving quickly, but it might take a while for this stuff to truly fulfill its potential. (In the meantime, those who are enthusiastic about tech get to play with the new, messy stuff! It’s why it’s fun to be an early adopter.)

Anyway, this brings us to Apple. According to Mark Gurman, iOS 18 will be full of AI features. We’ll see what that amounts to. Apple is a very careful company, and generative AI still feels a bit wild, but given what Adobe’s doing in its apps, it feels like it’s past time for Apple to get involved.

Putting machine-learning-based features in apps, as Apple has been doing for years, is just fine. But it’s obvious that Siri needs to be replaced with something better and smarter and capable of leveraging Apple’s ecosystem to make itself a uniquely personal tool for users of Apple’s devices. My iPhone can read my mail and my notes and peer into my calendar and knows my contacts… it does all this already. The next step is for Apple and Siri to put it all together.


  1. Would ultra-wide-angle lenses on the outside of AirPods be able to stitch together a 360-degree view of the world around you? Only Apple’s product lab knows for sure, I suppose. 

The Secret History of Hawkeye’s Dog Tags

Amazing story from Andy Lewis of The Ankler about the real-life veterans behind the dog tags worn by Alan Alda on the hall-of-fame sitcom “M*A*S*H”:

Often when he was putting on the dog tags in his dressing room, seemingly obtained by the costume designers of “M*A*S*H,” he would ponder who the men were, where they served, and whether they survived the war. The fact that there was only one tag for each, instead of the usual pair, made him wonder if they were alive or dead: “I didn’t know anything about them other than their names.”

Suffice it to say that now Alda, and the rest of us, know a lot more about these two men who served in World War II and never knew who ended up wearing their dog tags.

[Via Craig Calcaterra]


Spatial videos on iPhone, 2024 iPad updates, the Apple Watch as a health device, Apple pausing iOS development to fix bugs, the potential of AI interfaces, and the meaning of the “pro” label.


